Max Me Out a Second Time

After our recent article about Max-T, we received quite a few questions and comments from readers. We passed the questions on to Mike Hughes, V.P. Business Development at Max-T, who took a moment to respond. In this evolving world of shared storage in the post production environment, comparisons between various solutions can quickly become complex. Mike Seymour takes a closer look in this Q&A session…

I read your “Max Me Out” article on FXGuide. I demoed the Max-T product for some time and it is a nice box – especially if you have Discreet products. However, I passed on Max-T and went with an SGI Origin 350 with 4TB of Fibre drives instead. As a NAS it at least matches the Max-T performance and probably exceeds it, since I have 5 gigabit connections to my switch – I could add more but it is already overkill. The big difference from the Sledgehammer is that the SGI product can easily grow into a SAN by adding a switch, etc. (From an fxguide.com reader)

fxguide: The SGI NAS solution is very good. Origin servers have great internal bandwidth and are very scalable, and they have multiple-ethernet load balancing software that works very well. And it easily grows into a SAN for the ultimate bandwidth. However, Sledgehammer can also grow into a SAN. In fact, SGI sells a diskless version of Sledgehammer called Wedge for that exact purpose.

Mike Hughes: As the reader indicates, Sledgehammer is indeed a nice box :o). As far as the number of interfaces is concerned, just because you put five interfaces on a system doesn’t mean you’re filling five pipes – this is a “more is better” game that doesn’t necessarily translate into improved performance as measured in aggregate at the client level. We recommend only putting as many interfaces as you can truly fill, otherwise you’re just wasting valuable switch ports – we’ve just recently moved from two interfaces to four as we can now deliver, under the right network configuration, well in excess of the 220MB/sec. that can be sustained over 2 links. With respect to the upgrade comment, Sledgehammer does indeed scale onto a SAN, and the “diskless” version of Sledgehammer (really a video spigot for a CXFS-enabled SAN) is actually called Chisel. Also, if you use the metric of $$$/MB/sec. (which we believe to be relevant as it defines how much you’re paying to get to your storage), our customers have found that we’re highly cost effective.
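The $$$/MB/sec. metric Mike mentions is simple to compute: divide what you paid by the sustained aggregate throughput you actually get. As a rough sketch, the price below is a made-up placeholder (Max-T did not quote one here), paired with the 220MB/sec. two-link figure from the answer above:

```python
def dollars_per_mb_sec(price_usd: float, sustained_mb_s: float) -> float:
    """Price paid per MB/sec of sustained aggregate throughput."""
    return price_usd / sustained_mb_s

# Hypothetical example: a $60,000 system (placeholder price, not a quote)
# sustaining the 220MB/sec. aggregate mentioned above works out to roughly
# $273 per MB/sec of delivered throughput.
print(round(dollars_per_mb_sec(60_000, 220)))  # 273
```

The point of the metric is that a system with a lower sticker price can still be the worse buy if its deliverable throughput is proportionally lower.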


In your article you say the Sledgehammer delivers “bandwidths close to SAN performance without the high cost.” This, of course, is Max-T’s line and was a major point of contention for me. Yes – it has the total bandwidth available, but any single client ONLY gets the speed of a single gigabit connection, which varies with different hardware and operating systems. For example – OS X on a G5 moves a 1GB file in about 1 minute with Max-T or the SGI NAS. With a 2Gb fibre connection to my SGI SAN I can move that same file in less than 15 seconds. (From an fxguide.com reader)

fxguide: Our first point in response is that the Mac has very poor ethernet performance out of the box – no jumbo frame support, for example – and is only capable of around 35MB/sec transfer rates, compared to XP and Linux at around 80MB/s and Irix at 110MB/s. Our reader’s figures suggest even lower performance, around 16MB/sec, on the ethernet side, which suggests something is not right with the set up. At best it is a comparison between ethernet and Fibre Channel. But even the FC performance is low at 66MB/sec – a 2Gb/s FC loop should be capable of greater than 200MB/s on an SGI machine.

Mike Hughes: The reader is absolutely correct that if you need massive throughput for a single client and are willing to pay for it, then a DAS or SAN implementation is the only way to go. If instead you’re looking for a simple, OS-agnostic, plug-and-play shared storage option that can deliver throughput akin to what a user would get off the local drive in the machine (varies by platform, but is usually in the 30-60MB/sec. range), with integrated video I/O and DDR functionality, then many facilities have found that Sledgehammer fits the bill. Our tests have shown that a Mac OS X G5 can draw ~40MB/sec. off a Sledgehammer, so the 1GB file mentioned would be transferred in about 25 seconds.
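The transfer times being traded back and forth all come from the same simple arithmetic: file size divided by sustained throughput. A minimal sketch, assuming decimal units (1GB treated as 1000MB, which is how link and drive rates are usually quoted):

```python
def transfer_seconds(file_mb: float, throughput_mb_s: float) -> float:
    """Seconds to move a file of file_mb megabytes at a sustained rate."""
    return file_mb / throughput_mb_s

# 1GB (~1000MB) file at the ~40MB/sec. Mike says a G5 draws off a Sledgehammer:
print(transfer_seconds(1000, 40))   # 25.0 seconds
# The same file at the reader's observed rate of roughly 16MB/sec:
print(transfer_seconds(1000, 16))   # 62.5 seconds
```

Run backwards, the reader’s “about 1 minute” for 1GB is what gives the ~16MB/sec figure fxguide flagged above.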


I believe the claim that the Sledgehammer is a NAS with SAN performance is very misleading and does not properly represent the product. (From an fxguide.com reader)

fxguide: 😀 Yes, it is somewhat marketing speak from Max-T, but the point we think they are trying to make is this: they can get 80MB/sec out of Gig-E, which is close to what 1Gb Fibre Channel (FC) is capable of, even on an SGI machine. However FC has moved to 2Gb/sec and is going to 4Gb/s, so the comparison needs to be qualified. But ethernet is going to 10Gig as well, so the bar is moving again. BTW, PCI-X bandwidth tops out at 4Gb/s, so there’s no real advantage to going to 10Gig ethernet until one has PCI-Express.
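To make the moving-bar comparison concrete, here is a rough conversion of the raw link rates discussed above into theoretical MB/s ceilings, simply dividing bits by 8. These are upper bounds only: real throughput is lower (FC’s 8b/10b encoding and ethernet protocol overhead both eat into the raw rate), and the PCI-X figure is the one quoted in the answer above:

```python
# Raw link rates from the discussion above, in Gb/s.
LINKS_GBPS = {
    "Gigabit ethernet": 1,
    "2Gb Fibre Channel": 2,
    "4Gb Fibre Channel": 4,
    "10Gig ethernet": 10,
    "PCI-X (figure quoted above)": 4,
}

for name, gbps in LINKS_GBPS.items():
    ceiling_mb_s = gbps * 1000 // 8   # raw bits to megabytes, ignoring overhead
    print(f"{name}: ~{ceiling_mb_s} MB/s theoretical ceiling")
```

On these numbers the 80MB/sec fxguide measured over Gig-E is indeed a respectable fraction of the ~125MB/s theoretical ceiling, which is the basis of the “close to 1Gb FC” claim.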

Mike Hughes: Everyone has his or her own definition of what defines SAN performance. Of course you can build and stripe a SAN across a dozen or more 2Gb FC loops (you’d need at least one array on each loop) and get aggregate performance that blows Sledgehammer out of the water. And, it will easily cost in excess of $1,000,000, and a good deal of engineering time and money will be spent keeping it up and running. If instead you look at a SAN that is in the $100,000 range, then we absolutely do deliver a SAN level of performance, and most likely we’ll beat it hands down. Unlike a lot of other vendors who claim SAN performance in their NAS systems, we actually give real-world performance numbers which define what we mean by SAN levels of performance.



I’m just looking for an interesting discussion AND to stop the spread of propaganda! (From an fxguide.com reader)

fxguide: Qualified propaganda! Look, your points are well made, but we still feel that Sledgehammer’s HD i/o and virtual tape machine capability, coupled with its ability to look like a server to 3D apps and even output via firewire, make it a great box, and one we enjoyed reviewing.


In my facility the extra cost of the SAN seems justified because of the significant impact on workflow for my Designers, especially now that we are moving into HD with larger files. We hope to expand our SAN to include render nodes and edit systems, eliminating all local storage, with everyone working to and from one large central server. NAS is fine, but SAN significantly speeds up the process. (From an fxguide.com reader)

fxguide: SAN is the ultimate bandwidth solution for our industry right now. It can deliver real time 2K data because it’s FC based. But it’s expensive to implement and isn’t really necessary for all machines on the network; most connections are already via ethernet (zero client cost, low switch cost, good performance) and NAS. And Max-T makes a very cost effective NAS. Consider: a 4TB SGI NAS is around 80% more expensive than, say, a Max-T, which also throws in HD video i/o, xstoner and a browsable media interface (something which is very significant to some users).

If a typical facility had a realtime 2K scanner (like Spirit2), 20 Shakes, 2 Infernos, 30 Maya seats, a render wall of 100 CPUs and Lustre, then only 4 of those devices are capable of realtime 2K and need Fibre Channel/SAN connectivity; the other 98% of the systems would be on NAS boxes, which might tie into the SAN in larger facilities.

Taking nothing away from the suggested alternative, a Max-T is still a great solution for most storage connectivity needs in most facilities. But as was stated above – the discussion is very welcome and very helpful, and we welcome any other thoughts readers have.

Mike Hughes: Per above, if multiple guaranteed real-time HD streams are required then DAS or SAN is indeed the only way to go. With that said, a couple of words of caution might be useful. First, there is no guaranteed rate I/O (GRIO) available today for the SAN environment. DAS is the only guaranteed bandwidth configuration, but at the cost of not being able to share the data. Without GRIO, no client can be guaranteed the throughput it needs, and as such and depending on the load on the SAN, dropped frames will occur. The second point to note is that it’s a bad idea to throw a render farm at a SAN concurrently with clients requiring high levels of sustained throughput as the randomness this access profile imposes will have the effect of lowering the available aggregate bandwidth.
