The new Mac Pro: the cube comes of age

Apple’s Phil Schiller previewed the new Mac Pro during the keynote of the World Wide Developer Conference in San Francisco on June 10th. Sure to annoy as many users as it delights, the new hardware marks a significant departure from the currently shipping Mac Pro tower. The key discussion seems to concern expandability and graphics card support. Actually, discussion isn’t the right word. I’ll call it what it is: the latest internet shitstorm to hit our industry since the previous darkest day in recent history: May 6, 2013, the Adobe Creative Cloud announcement.

What Apple didn’t do with the new Mac Pro — and what seems to be the source of the greatest outcry among editors and artists — is create a traditional, slot-filled PC with plenty of expansion and user-exchangeable parts. If you’re a user who needs to load up a machine with multiple cards or likes to add or remove your own internal hardware, you’re not going to be happy with the new machine.

Instead of making a computer that can be easily upgraded to work well for everyone, Apple is clearly attempting to re-invent the PC with a tightly integrated package that will work *really* well for *most* everyone. Some argue this is Apple telling users what they need and forcing it upon them. Apple isn’t forcing anything on anyone, but there are certainly situations where the new hardware will not be an effective solution. And there are alternatives for those users.

Some users reacting to the announcement have said Apple doesn’t know anything about the PC market. I think this hardware, as well as Apple’s business success, says exactly the opposite. The fact that Apple generates more profit from PC sales than the top five PC makers combined says Apple knows a bit about what it’s doing outside of iPhones and iPads.

A followup argument is that Apple should license the OS to external hardware manufacturers in order to give users what they need. Apple tried this in its dark days and it failed miserably. Having owned a PowerPC clone, I know this first-hand. It didn’t work well. Some will disagree, but there are significant benefits to controlling both the hardware and the OS as Apple does.

There are quite a few more questions than answers at this point, but the fact that there aren’t any internal expansion slots (or user-serviceable parts) shouldn’t be the end of the discussion. The fact is, there’s a lot to like about the new Mac Pro if you take a moment to examine some real-world applications. For many artists out there, this could be the ideal machine. And as one software manufacturer said of their short time with the new box, it delivers the “best out of the box performance” they have ever seen.

In a nutshell, let’s start with the facts (as we know them) about what is coming later this year.

  • CPU: Appears to be a single CPU (Xeon E5 chipset), based upon Phil Schiller’s comments during the keynote, with configurations offering up to 12 cores of processing power
  • Memory: Looks like 4 slots are available. Includes a four-channel DDR3 memory controller running at 1866MHz, delivering up to 60GB/s of memory bandwidth (vs 30GB/s for the current Mac Pro)
  • Graphics: Two state-of-the-art AMD FirePro workstation-class GPUs in each system, each with up to 6GB of dedicated VRAM, supporting up to three 4K displays
  • Internal Storage: PCIe flash storage, up to 1250MB/s transfer
  • Peripherals: 4 x USB 3, 6 x Thunderbolt 2, 1 x HDMI 1.4
  • Network: 2 x GigE, 802.11ac Wi-Fi
  • Expansion (Thunderbolt 2): Six Thunderbolt 2 connections, each with 20Gbps bandwidth

Form factor

The new Mac Pro is less than half the height of the current tower and, according to Schiller, occupies “1/8th the space”. It seems to share a remarkable heritage with the Apple Cube, which was groundbreaking at the time from an industrial design standpoint. The Cube was a beautiful computer, but a bit ahead of its time. A tangible difference from the Cube (of which I was an owner) is that the 2013 model has much more expandability thanks to Thunderbolt, so its built-in limitations are less of an issue. There’s a comparison to be made to another old-school Apple product, the Newton. It was also a bit ahead of its time, and could be considered the predecessor to the iPhone and iPad.

The new version is built around an empty triangular core in the center, with the CPU and two GPUs forming the three sides. Each side has a heat sink, and the heat is drawn up and out through a fan in the top. Apple maintains the fan is quiet, and if it takes design cues from the Retina MacBook Pro then there is hope that it will, in fact, be quiet. My guess is it will be a very quiet workstation.

The computer rotates on its base stand, revealing peripheral and expansion ports on the back, with a nice touch of lighted outlines around the ports that lets users easily see the connection points. This functionality has been mocked a bit in online reactions, with people pointing out that having all the wires connected and then rotating the machine will quickly create a spider web of tangled cables. Those folks are free to pick up their Mac Pro tower and rotate it around by hand to connect things. The rotate option seems better to me.

As far as industrial design, I’ll make a judgment when I see it in person. Trashcan, R2D2, whatever. After the iPad announcement, many made fun of it, calling it a big iPhone with a name that sounded like a women’s hygiene product. In the end, how’d that work out for Apple?

Expandability

We mention the form factor because it leads directly to the lack of internal expandability. Apple calls it “our most expandable Mac yet,” but that’s a bit disingenuous, as it is only expandable in an external sense via USB 3 and Thunderbolt 2. It appears as though no internal parts will be customizable by the end user. So expansion in traditional terms? Not so much.

What Apple has done here is effectively say: look, we’ll provide the means for external expansion to those users who need it. What they didn’t do is build a box with lots of internal slots and a beefy power supply to support those cards, only to have many of their customers never use the expansion capability. As I mentioned earlier, many feel this is forcing something on customers, but one could also argue it’s simply a different take on what a workstation means.

With no internal slots, the hardware doesn’t support PCIe 3.0 expansion cards, which could limit its use moving forward. But speaking in practical terms, other than an external GPU, what limits does Thunderbolt 2 place on real-world hardware? A four-drive RAID 0 array of 6G SAS SSDs can provide approximately 1.5GB/sec of bandwidth, which fits well within the 20Gbps (~2.5GB/sec) theoretical limit of Thunderbolt 2, so we can expect similar performance from a Thunderbolt array. Apple bills 4K prominently in the graphics section of their Mac Pro page, so that deserves some attention. Uncompressed 4K playback at 24fps needs somewhere in the range of 4-8Gbps depending on bit depth and chroma sampling (double that for 48fps), so it would also be well within the spec for a Thunderbolt 2 connection. Our takeaway is that, aside from housing a GPU in the chassis, Thunderbolt 2 will be adequate for all but a few edge cases.
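
Since this is mostly unit-conversion arithmetic, here’s a quick sketch of the numbers. The 4K figure depends on resolution and chroma sampling, so two common interpretations are shown; these are theoretical link rates, and real-world throughput will be lower.

```python
# Back-of-the-envelope bandwidth math for the claims above.
def gbps(width, height, bits_per_pixel, fps):
    """Raw video bitrate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

TB2_GBPS = 20.0  # Thunderbolt 2 theoretical limit (~2.5 GB/sec)

workloads = [
    # 12-bit RGB 4:4:4 DCI 4K at 24fps (36 bits per pixel)
    ("4K DCI 12-bit RGB, 24fps", gbps(4096, 2160, 36, 24)),
    # 10-bit 4:2:2 UHD at 24fps (20 bits per pixel)
    ("UHD 10-bit 4:2:2, 24fps", gbps(3840, 2160, 20, 24)),
    # Four-drive RAID 0 of 6G SAS SSDs at ~1.5 GB/sec sustained
    ("4-drive SSD RAID 0", 1.5 * 8),
]

for name, rate in workloads:
    print(f"{name}: {rate:.1f} Gbps ({rate / TB2_GBPS:.0%} of Thunderbolt 2)")
# -> roughly 7.6, 4.0, and 12 Gbps: all comfortably under 20 Gbps
```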

It’s also important to note that the lack of internal expansion means users will be spending money on external enclosures. These expansion boxes range from $400 for half-length boxes to $1,000 and more for larger ones.

RAM

It appears as though the new Mac Pro has four slots for RAM, which look to be user-installable given their position inside the case. As with all things “new Mac Pro”, this is an educated guess as well. While the new DDR3 RAM running at 1866MHz is a speed king, the maximum size for currently shipping ECC modules near that speed is 16GB. That means a maximum of 64GB (four 16GB modules) can be installed at this point in time if you want to use matched RAM. There’s slower RAM available, including load-reduced RAM, but that might cause issues.

64GB could be quite limiting for many (but not all) high-end applications. Fast flash storage could change the dynamic of RAM vs. “hard drive” space and what is important, but this is an initial red flag. The situation may change by the time the product ships later this year, but it is certainly a point to watch moving forward.

Storage

Built-in storage on the Mac Pro is limited to PCIe flash storage, which provides about 1GB/s of transfer speed. So while storage space will be limited compared to adding hard drives, there is a *significant*, night-and-day improvement in performance when moving to flash. One key will be how much storage is included (and how expensive it is). The bump from 512GB to 768GB on a Retina MacBook Pro is about $400, so having over 1TB of storage could mean close to a $1,000 premium. But with the fast transfer speed, this could be a significant benefit for everyday users for things such as the After Effects disk cache. The jury is still out as to whether the storage will be upgradable, as with previous products such as the MacBook Air or Retina MacBook Pro.

What about large-scale storage, such as what we currently get from the built-in hard drives in the tower Mac Pro? If you want more storage, you’ll have to use drives externally via USB 3 or Thunderbolt, which means an extra connection as well as external power.

We’ve seen a lot of posts from people saying that more connections means more points of failure, and that’s a risk they’re not willing to take. While it’s true that it’s a risk, I feel the risk is overblown. Externally connected RAID systems are incredibly common on desktop systems via Fibre Channel, Thunderbolt, or eSATA. On top of this, we have breakout boxes for video I/O and other things. I’ve been using external peripherals such as the Promise R4 array and the Blackmagic Intensity Extreme without issue with my MacBook Pro. In fact, the benefits of being portable, and of only having to buy one Thunderbolt peripheral and share it among multiple systems, could be a big cost savings.

Frankly, saying the tangle of multiple cables is ugly might be a more valid concern. But not a big one. As far as external drive chassis go, I can see manufacturers making units that would fit nicely under the Mac Pro, similar to expansion peripherals that work with the Mac mini.

CPU

The new Mac Pro will ship with one Intel Xeon E5 chipset CPU, available with up to 12 cores. According to Apple, this single chip provides “up to 2x faster speed” than the currently shipping Mac Pro. We’re a bit perplexed by the inclusion of only one CPU. Yes, it’s much faster than the current generation. But for raw image-processing power and hard-core rendering we can see real benefit from a multiple-CPU system. More cores = faster processing.

It doesn’t seem as though cost would be the deciding issue for Apple when choosing whether to include dual CPUs. From our research, it appears the cost of the dual GPUs would be greater than that of dual CPUs, and that cost pales in comparison to the cost of the flash storage. A sensible reason for not having more than one CPU might be heat generation and dissipation; perhaps the form factor and layout make two CPUs out of the question. But our hope is that one day we’ll see the addition of another CPU, at least as an option.

That being said, there are a couple of things to keep in mind. We’ve been following GPU tech for years here at fxguide, and the fact is that the performance-to-watts/heat ratio on GPUs is far superior to what we’re seeing in CPU development. In other words, general-purpose processing power is increasing much more dramatically on the GPU, while power consumption and heat generation per processing cycle are falling. Time and time again, presentations at the NVIDIA GPU Technology Conference have made this clear.

At the user end, we’re seeing more and more reliance on the GPU for processing power in all of our applications, and this is just starting to be tapped. From OpenGL shaders being used for grading and effects in Autodesk Smoke and Adobe After Effects, to The Foundry using their Blink technology to support identical processing on both the GPU and the CPU, to Adobe Premiere Pro and Blackmagic Design’s Resolve utilizing CUDA and OpenCL processing, it is certainly the trend in the industry. The fact is, much of the heavy lifting of image processing is being done by the GPU today, and that is likely to become more prominent with the shift to 4K. This doesn’t mean the CPU is irrelevant by any stretch, but the GPU may prove more important for overall performance in many applications.
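
To make that concrete, here’s a minimal sketch of the pattern those applications share — not code from any of them: a trivial per-pixel gain kernel written once in OpenCL (using the pyopencl bindings) that the runtime compiles for whatever GPU is present, AMD or NVIDIA.

```python
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void apply_gain(__global const float *src,
                         __global float *dst,
                         const float gain)
{
    int i = get_global_id(0);
    dst[i] = src[i] * gain;  /* one pixel component per work item */
}
"""

ctx = cl.create_some_context()              # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL_SRC).build()  # compiled for that device

frame = np.random.rand(4096 * 2160 * 3).astype(np.float32)  # fake 4K RGB frame
out = np.empty_like(frame)

mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

# Upload, run across the GPU's cores, download the result.
prog.apply_gain(queue, (frame.size,), None, src_buf, dst_buf, np.float32(1.2))
cl.enqueue_copy(queue, out, dst_buf)
```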

One other problem with relying on pure CPU benchmarks and horsepower as a litmus test for a “pro system” is that it doesn’t fully take into account how we work with software on the job. By this I mean that the amount of time many of us spend flat-out maxing out the CPU and GPU in an app can be quite minimal. We spend a lot of time simply working with the software in various ways: setting up a vfx composite, keying, masking, kerning type, color correcting, animating, etc. During that process, we’re generally not maxing out the CPU or GPU by any stretch. It’s one reason my Retina MacBook Pro *feels* much faster than my Mac Pro tower. Even though the tower’s Quadro K5000 and extra CPU cores render faster, the Retina’s tech (flash storage, a decent mobile GPU) makes it feel as though it keeps up.

Before you jump on me and say that’s bullshit: yes, I understand that this lack of full CPU/GPU utilization is not always true. And I would very much like to see a dual-CPU system in the future. Editing multiple streams of 2K and higher video, debayering RED footage, applying realtime LUTs to footage, complex grading pipelines in Resolve, background renders in After Effects — all those things can max out your system. And that’s why having two CPUs would be a welcome addition to the new Mac Pro. But the point is that not every “pro” user has those concerns, and for many of those users the new hardware will be a big leap forward…because the overall hardware *is* a leap forward.

Optical Drive

There is no built-in optical drive.

Shocker.

Graphics

The new Mac Pro has not one but two AMD FirePro GPUs built in. This is one key spec that is incredibly important from a “pro” standpoint, and these don’t seem to be entry-level GPUs. In trying to guess the level of GPU Apple is including, the specs look close in performance to something like the AMD FirePro S9000 (which has 6GB of VRAM and 3.23 TFLOPS of performance). This isn’t your mother’s Radeon. These seem to be at the top end of the FirePro product line, performance that OS X users haven’t had access to in the past due to limited support. It’s also a good thing they’re top of the line, because from the design of the computer it looks as though the GPU hardware is a custom build made to fit the case. In other words, not user-replaceable.

The inclusion of two cards by default is a big step forward and a critical aspect of the base system. While PCI expansion is available through a Thunderbolt external enclosure, the spec doesn’t allow an externally housed GPU card to drive a display. You can use a GPU card in an external Thunderbolt expansion chassis for CUDA or OpenCL processing, but even Thunderbolt 2’s improved 20Gbps of bandwidth is a fraction of what a GPU gets from a native PCIe slot. In most situations it simply doesn’t make sense to offload CUDA/OpenCL processing to an external GPU card due to the limited bandwidth. Having the GPU (or GPUs in this case) connected to the RAM and CPU by the fastest means possible is key to maximizing processing power.

The NVIDIA issue

One big outcry among pro users after the WWDC keynote concerns the apparent lack of an NVIDIA graphics chipset in the new Mac Pro. While this could be problematic for many due to the lack of CUDA support — and let me just say that at fxguide, we’re huge fans of NVIDIA’s GPU tech — recent improvements in the ATI product line have led many software manufacturers to beef up their OpenCL support. It’s important to note that OpenCL performance is not up to par with CUDA performance in many implementations, and writing OpenCL that behaves identically on NVIDIA and ATI isn’t as simple as running the same code twice. In other words, manufacturers still need to verify and bless the various graphics cards.
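
As a hypothetical illustration of that qualification step, an application can enumerate the available OpenCL devices and check them against a list of hardware it has actually been tested on. The vendor strings and the list here are invented for the example:

```python
import pyopencl as cl

# Invented qualification list; a real app would ship the vendor/device
# combinations its developers have actually verified and blessed.
QUALIFIED_VENDORS = ("Advanced Micro Devices", "NVIDIA")

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        ok = any(v in dev.vendor for v in QUALIFIED_VENDORS)
        print(f"{dev.name} ({dev.vendor}): "
              f"{dev.global_mem_size // 2**20} MB VRAM, "
              f"{'qualified' if ok else 'not qualified'}")
```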

The lack of CUDA in the Mac Pro could actually improve the OpenCL situation and, in the end, reduce dependence on one manufacturer for GPU computing. But who really knows? In this specific case, emerging OpenCL support allows Apple to offer only a custom-built AMD GPU and still give pro users much of what they need. All that being said, we would prefer to have options on the GPU front…and would very much welcome the addition of NVIDIA hardware in the new Mac Pro.

A real shortcoming will be the lack of NVIDIA support for certain After Effects users. Not for the majority of use cases, since all but one AE GPU-accelerated feature is supported with both CUDA and OpenCL. But it’s a big one: Adobe requires an NVIDIA GPU for fast raytracing. While you can do raytracing on your system’s CPUs, it is simply too slow to rely on as a creative technique in real production.

With the new Cinema 4D support in CC as well as options such as Element 3D, it may not be as much of a showstopper. But in a collaborative environment, do you really want to take the chance that you won’t be able to render something another artist can? Probably not. Artists have been asking Adobe to support AE raytracing on more than NVIDIA cards since the day the feature was released. This will hopefully give the team at Adobe reason to do so in a timely manner, though I’m sure it won’t be a simple or straightforward process.

One possible workaround: even though you can’t run a display off a GPU card in an external Thunderbolt PCI expansion box such as the Sonnet line, you can (in theory) run a CUDA card for general-purpose processing. The OS should recognize the GPU in the external box, and therefore After Effects *should* be able to use it for raytracing. With Thunderbolt 2, bandwidth shouldn’t be an issue for something like the AE raytracer, since you’re not streaming a lot of imagery or layers in real time. You’re basically loading images to the card in buffers, doing the render (which takes the majority of the time), and then offloading the images.

Those negatives aside, we do have concrete examples of “pro” apps that have made significant changes to support ATI and reduce reliance on a single GPU manufacturer. These examples support the idea that having ATI-only cards (and serious, high-level cards at that) is OK in the long run. Maybe even preferred.

First, the upcoming version of Adobe Premiere Pro CC dramatically increases support for AMD cards. The shipping software already supports OpenCL on a wide variety of ATI Radeon and FirePro offerings. In addition, the two built-in cards could dramatically increase export performance, since Adobe says that in “configurations containing multiple GPUs, Premiere Pro CC can use all of them during export.”

Next up, Resolve. There were lots of initial complaints on Twitter from Resolve users because, in the past, there have been performance and tool benefits to the app’s NVIDIA CUDA processing. But Blackmagic Design’s Grant Petty was quick to post the company’s thoughts regarding the new Mac Pro, and the post is incredibly positive:

We have been testing with DaVinci Resolve 10 builds and this screams. Its amazing and those GPUs are incredible powerful. I am not sure what I can say as I am only going off what Apple has talked about publicly here in the keynote for what I can say right now, however there is a whole new OpenCL and DaVinci Resolve 10 has had a lot of performance work done to integrate it and its really really fast. Those GPUs are very powerful and have lots of GPU memory so this is the Mac we have been waiting for! We have lots of Thunderbolt products too so video in and out is taken care of.

Petty’s full post can be seen in the Blackmagic forums.

Finally, MARI. Phil Schiller mentioned that The Foundry would be showing off one of their products in a special session on Tuesday. We learned from The Foundry that they would be showing a tech preview of MARI running on OS X, which has been in development for a while. According to product manager Jack Greasley, they’re getting really good performance on current hardware. As part of the development process, they had been pinging Apple tech support with some very pointed questions about OpenGL, performance, and throughput. About six weeks ago they got a call from Apple asking if they wanted to come in and have a chat….

The team from The Foundry was allowed to work with the new Mac Pro at Apple, albeit with the machine shrouded in a large cabinet so they couldn’t see the actual form factor or know the actual specs. According to Greasley, “it is absolutely the best out of the box performance I’ve ever seen. So in terms of not being a custom bit of hardware, it’s incredible.” The thing people have been glossing over in discussing the new Mac Pro, says Greasley, is the 1GB/sec data throughput from storage into memory. “That just makes a huge difference…it feels like a really balanced system. It’s not as if the GPU is faster than the RAM can keep up with it…it’s effectively a system designed for 4K…it kind of blew me away, actually.”

In the WWDC keynote, Apple showed a still from Pixar’s Monsters University — and those were the test-case assets that The Foundry team pulled over to try out the new hardware. “It was full animation, lots of textures, lots of data,” Greasley says of the scene, “and we were playing back the animation at 60 frames per second in raw per-frame animation.”

The hardware requirements for MARI aren’t all that advanced; the baseline card is a state-of-the-art card from three years ago. But the main issue The Foundry sees with MARI is data throughput, especially at high-end facilities. MARI processes everything on the GPU, and you’re accessing more data than you have RAM, so you hit bandwidth limits moving data from the file system to RAM to the GPU and back. Having fast flash storage, and even Thunderbolt 2, will in theory make processing and interactivity even more efficient.

The presentation by The Foundry will be available on the Apple Developer web site.

Pricing

We really have no idea at this point as to pricing, but looking at the parts included, it’s unlikely to be cheaper than current Mac Pro offerings. The inclusion of two GPUs as well as the flash storage will most likely offset any cost savings from fewer materials and a smaller size.

Takeaways from the announcement

It’s clear from the WWDC announcement that users looking for an old-school Mac Pro update are disappointed. At least in the Twitterverse, the lack of internal expandability has upset a lot of artists and editors. Even with options for external expansion such as Sonnet’s Echo Express line or Magma’s ExpressBox 3T, that kind of expansion isn’t appropriate for them.

Thankfully, those users have fairly easy access to other options. For true tech-savvy folks who like tinkering, overclocking their GPU, making custom hardware builds, and more, there’s a fairly vibrant Hackintosh community on the web. For others, Windows systems are an option. The Adobe suite as well as Media Composer run incredibly well on Windows hardware, and within the apps it feels the same as working on OS X. MARI, NUKE, and other Foundry products are also available on Windows.

As far as further shortcomings go, since Apple is building for the future, our feeling is that 10GigE would have been a solid base from a networking standpoint. Granted, current implementations can be a bit sketchy, but if you’re truly touting cutting-edge performance for professionals, network collaboration is critical.

We also have concerns regarding the form factor and large-scale installs of the new Mac Pro — such as a machine-room install of multiple systems feeding rooms around a facility. While today’s Mac Pro isn’t the easiest tech-room install, its form factor does lend itself to sensible rack mounting at scale. Housing the new Mac Pro will be problematic to say the least, as the cylinder doesn’t seem incredibly friendly from a large-installation standpoint. And while I poked a bit of fun at the tangle of cables that will come off the back, this will also prove problematic for installs. Great design and functional design aren’t mutually exclusive.

Potential use cases

So what about the systems and how they might fit into facilities? A quick check-in with several sysadmins and engineers found that most graphics or vfx artist OS X workstations have just a decent graphics card, lots of RAM, and a large amount of storage (built-in RAID or attached enclosure). Artists at the “next level up”, so to speak, have some kind of broadcast monitor as well. These two users are probably the sweet spot for the new Mac Pro — though the lack of After Effects raytracing capability may be a deal killer for many. The following would suit many artists quite well as a baseline system:

  • Mac Pro
  • Promise Pegasus R6 Thunderbolt RAID

This would provide local storage performance via the Thunderbolt RAID, and in many situations the HDMI output could even serve as a calibrated broadcast-monitor feed. If that’s not enough, Thunderbolt accessories such as video I/O boxes could be added to provide monitoring and I/O.

This seems like a very sensible, if not great, workstation for a great number of artists — whether they be motion graphics designers, vfx compositors, or even some editors. Autodesk Smoke on the current shipping iMac line is fast — faster even than the flagship Flame software. If it’s fast on the iMac, it’s gonna be even faster on the upcoming Mac Pro. In fact, just last week, Autodesk’s Stig Gruman alluded to the fact that a system was coming that would be great for their Smoke product.

What if you’re editing and/or working in a collaborative environment on a variety of footage? Editors and artists doing color grading are examples. A key consideration here might be the need to connect to a Fibre Channel network — or to offload processing of R3D files to a RED Rocket card. Take the next step up and go with an external Thunderbolt-to-PCIe expansion chassis to provide access to the following:

  • Echo Express expansion chassis
    • RED Rocket card
    • ATTO Celerity Fibre Channel card
    • Fusion ioFX

I would suggest that this setup approaches the “high end” of workflow in the professional vfx and post field. The cards in the chassis alone cost around $10,000. Having three Thunderbolt 2 controllers (six ports total) means you can hang RAID storage off a separate controller from the expansion chassis, and the improvements in Thunderbolt 2 (double the current generation’s bandwidth) break down some of the previous barriers to performance. The expansion chassis use case does fall down in a large-scale workflow environment, however, where you might have dozens or even a hundred workstations. For starters, there will simply be a considerable cost in outfitting all of them with expansion chassis.

For instance, with current shipping Thunderbolt products, the Fusion ioFX sees less than optimal performance because the interface is limited to 1 gigabyte per second. But with Thunderbolt 2, the ioFX’s full 1.4 gigabytes per second should be attainable. The drawback to using the ioFX this way is that it is not hot-swappable: you must plug in the Thunderbolt cable before powering up, and disconnect the ioFX only after powering down. Hopefully this can be improved in the future, but the main takeaway is that throughput becomes less of an issue with the new hardware.
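
To tie the controller and device numbers together, here’s a toy allocation check. It assumes, as debated in the comments below, that each of the three Thunderbolt 2 controllers offers 20Gbps shared by its two ports; the device figures are the ones cited in this article, and the assignment itself is hypothetical.

```python
TB2_CONTROLLER_GBPS = 20.0  # assumed budget per controller (two ports each)

# Hypothetical spread of this article's example devices across controllers:
assignment = {
    "controller A": [("SSD RAID 0, ~1.5 GB/sec", 12.0)],
    "controller B": [("PCIe chassis w/ Fusion ioFX, ~1.4 GB/sec", 11.2)],
    "controller C": [("uncompressed 4K feed", 7.6)],
}

for ctrl, devices in assignment.items():
    load = sum(rate for _, rate in devices)
    verdict = "OK" if load <= TB2_CONTROLLER_GBPS else "OVERSUBSCRIBED"
    print(f"{ctrl}: {load:.1f}/{TB2_CONTROLLER_GBPS:.0f} Gbps {verdict}")
```

Put the RAID and the chassis on the same controller instead and the combined ~23Gbps oversubscribes it, which is exactly why hanging storage off a separate controller from the expansion chassis matters.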

We have a lot of time to ponder the new hardware

It’s important to spend some time examining what you as an artist truly do on your box, and not be too quick to dismiss the new hardware from Apple. And thankfully, since the Mac Pro won’t be shipping for at least six months, you’ll have time to do so.

It’s certainly not for everyone: machine-room installations, maxed-out primary workstations that need many slots, and more. This new offering won’t provide a path for those types of installs. And while I poked a bit of fun earlier at the tangle of cables, it could certainly be problematic when you add in the fact that the Thunderbolt connector itself is not the most secure connection.

But upon closer examination, the hardware is nowhere near as bad as the initial (over)reactions from the online world suggest. I’ll go further: it’s definitely not bad. In fact, it’s great for what I need to do as an individual artist. Get beyond the misguided and misinformed posts on the net, and you’ll likely find it is actually a positive move forward for the majority of creative artists currently using OS X. And you don’t have to take my word for it; you’ve already read what some of the software manufacturers have to say.

22 thoughts on “The new Mac Pro: the cube comes of age”

  1. Great article! As a motion graphics artist, I would say the most important part of the hardware is the graphics card. The new After Effects comes with a native Cinema 4D renderer inside it, and it will use a lot of OpenGL.
    I’m still wondering how it will work when using the Cineware plugin.

  2. Johnny Farmfield

    Best I’ve heard about it was: “The new MacPro; the solution to a problem that didn’t exist.” [Random smart guy tweeting]

    And I agree. Apple is so good at removing choice, I can’t really understand why people still take the ‘abuse’ of it. My best advice is building a Hackintosh. You’ll get (basically) the same performance for less money, but you’ll have control over what you need in terms of hardware. Or if you’re lazy, buy an HP Z800 and live with running Win/Linux. It’s the apps that count; you can learn to live without OS X…

    Or bend over. I haven’t for a few years now & not looking back at all – and I was on Mac since ’89.. 😉

    1. John Montgomery

      I think the thing is, Johnny, that many artists/owners like myself don’t see Apple as removing choice per se. You’re correct that they don’t offer a wide variety of options to expand. But I buy and use their products…and they work really well for what I need to do. I’m frankly not going to lose my clients based upon having a Quadro K5000 in my MacPro compared with a 6000 in it. Or having not overclocked my graphics card or swapped out the CPUs. Users cried about the lack of user-serviceable parts in the MacBook Air and now the Retina MacBook Pro. And I’m a guy who swapped out the HD in my previous-gen MacBook for an SSD. But the Retina is simply the best overall computer I’ve ever owned, and HP, Acer, and others are now doing blatant ripoffs of those systems.

      I’m certainly not a Mac-only zealot. I have a very nice HP Z800 running Flame and other vfx software, an HP 8600, as well as a Dell Precision workstation running Windows for various software needs.

      One thing I’m not willing to do is give up OS X as my primary OS. The tie to *nix (I use that functionality ALL the time) coupled with a mainstream, user-friendly OS really works for me. It’s telling that when I want to run the Adobe suite, I always run it on my older Mac Pro tower as opposed to the faster, newer Dell.

  3. Nice article, Jon. Sensible as always. I’ve read it’s possible to use multiple Thunderbolt connections to an expansion chassis to increase the bandwidth. It’ll be interesting to see what those manufacturers come up with between now and the launch.
    Thunderbolt already provides most of what you need from expansion cards, like Fibre Channel and video I/O.

  4. Emilijo Mihatov

    Refreshing to read such an objective and concise rebuttal to the typical hysteria around radical designs from the likes of Apple. That said, being in a position to see a wide variety of possible uses for a high-performance Mac OS system across a disparate range of clients, one can’t help but ask why, with that level of financial backing, incredible design talent, and mountain of feedback from the field, it should be so difficult to deliver the sort of no-holds-barred tech and industrial design creativity we’ve come to appreciate from Apple, but without such an impractical and obtuse case design, without the restriction of a fixed GPU from ATI, and with the option of using nVidia.

    We’ve been begging Apple to make something of the technical calibre of a high-performance Mac for the machine-room rack ever since the demise of the Xserve, and their responses, or lack thereof, have been difficult to accept as “them knowing better than us”. It’s a bitter pill to swallow and seems to serve no one’s interests. They seem to know that we’ll buy whatever they make regardless of how impractical the external design is for anyone to whom rack density matters, and it’s difficult not to feel they put this new workstation into a shape this impractical just to spite us.

    I actually think it’s cosmetically beautiful, it seems to tick all the boxes for high-performance components, and I can get over the lack of nVidia. I agree that it’s going to force the hand of OpenCL development. However, just as there’s a need to create a design that inspires creative professionals, there’s also a need to deliver this sort of innovation and performance in a form factor that’s professional, functional, and practical. Apple has been able to do all this in the past. These do not need to be mutually exclusive design objectives, and we have seen many instances where all these targets were achieved together. Does this really need to be said again?

    Frustrated, and annoyed, and disappointed, I still want one.

    1. I had a rack of Xserves and loved the power, but not the power-hungry nature of the beasts. With so many cloud services opening up dedicated to rendering and other computationally expensive tasks, I don’t see the need for a noisy, over-engineered rack system. Apple have a knack for minimising the clutter in our lives before we know we need them to. I remember the uproar over the lack of a floppy disc drive on the original iMac.

      I think the new Mac Pro is a great solution to a problem that does exist. As always, if you don’t like it, don’t buy it. I for one will buy one. Hell, I’ll probably buy two and use them as mini bar stools that just happen to render their pants off.

  5. Great article, John. It’s nice to see Apple bringing out a new machine, but the lack of dual CPUs and the mandatory ATI graphics are a killer for me, and I’m sure for others.

    The high end will go for dual CPUs… why… because you can have more processing power, and isn’t that the point?

  6. MacPro

    Very nice article, John! Since we bounced a few comments around on Twitter about the MacPro, and I see traces of that in here, I thought I should write something longer than 140 characters. A lot longer (apologies, but there’s a lot to say). =)

    First of all, I think the new MacPro looks cool. It’s pretty much what we expected from Apple, even though some of us had hoped for something else. I do think my view of pro isn’t the same as Apple’s view of pro, not saying either one of us is wrong though. I admit I might have a bit of an old-school view of this, and I’m coming at it from a (small) multi-user facility where I’m in charge of the tech. “Pro” today isn’t necessarily about huge boxes with loads of internal horsepower, giant fibre SANs, and huge cooled machine rooms. Just like everything else, “pro” has been democratised, and today everyone can, and is allowed, to be a pro! Which is rather awesome! =) But it also means the word “pro” has a slightly different meaning today and has been a bit watered down. So for the sake of this I will use MultiPro for the multi-user environment with 10+ users in a facility. And also, as John pointed out in the article, there’s a lot of speculation here since we don’t have all the facts yet.

    I share John’s view on OSX and the *nix-iness of it. I LOVE the *nix foundation in OSX just as much as I hate the Finder. =) The *nix-iness is why I will have a hard time moving over to HP Z-workstations, and cygwin is NOT the total solution to that. We’re a production company (film/episodic drama) and we have our own (small) internal post department. We have nine 2010 MacPro-based Avid editing suites with MojoDXs, and all freelance editors have their own suite in our facility. If production demands, we sometimes rent an office in another town or part of town where we build a smaller suite with iMacs and accessories. On top of that we have a grading suite with a MacPro Resolve with a Cubix, and also an iQ and a Clipster. We have a StorNext-based SAN at 90TB, expanding to 250TB this year, and an ISIS 5000 for the Avids. All hardware is in racks in a monitored, cooled machine room.

    Coming from this I can see a few potential issues with the new MacPro. I write potential since I’m aware that, like with everything Apple, it’s my mindset that might need an adjustment. “We are the Apple. You will be assimilated. Resistance is futile.” But hey, don’t get me wrong. I don’t hate or love Apple, just as I don’t hate or love Microsoft. I use what I have to use to get the job done; in context they’re just different tools, and they all suck in their own way. 😉 If it was up to a younger me we would all use Amigas, but I guess it ain’t up to me… =/

    Today the 2010 MacPro actually fills most of our needs when it comes to editing. But the pain will come the day they start to break down and need to be replaced. There’s a HUGE benefit to having identical systems in a setup like this, so I’d prefer not to mix and match. Since it’s Avid, we also try to keep things on qualified hardware/software so we get the most out of support.

    And running Hackintoshes in an environment like this would be nothing short of stupid. I have run my fair share of Hackintoshes in the past, and I have nothing against them; there’s for sure a valid place for them. But this isn’t one of those places, and you cannot use them in a MultiPro environment where so many pieces have to work together 24/7 and also stay within support for the different components.

    How do you really rackmount these puppies? They suck air in from the bottom and blow it out the top. This means the computers on the shelf above would suck in the hot air from the one below, and with every shelf they would run hotter and hotter. I was joking about building a wine rack for them, but maybe that would be a valid solution. Not sure how the new fan and highly engineered cooling solution would work in a horizontal position, though. Connectors might be an issue in horizontal mode too.

    Not a problem here, but still worth mentioning. Not having seen the bottom or base of it, I do see a potential issue with having them on the floor, like many had with the old tower. All floor computers get dusty, but the old ones sucked in air at the front, where you’re more likely to have vacuumed. The new ones will probably take air from a 360-degree circle around the base, which means all the dust from the back will get sucked into it. And vacuuming behind that puppy, with all your precious Thunderbolt connectors and extra boxes, might be a pain. Mounting it under the desk like a traditional tower would block the air blowing out (and potentially set your desk on fire ;)), leaving the option of having it on your desk or on a separate shelf/table. Like you don’t have enough shizznitz on your desk already…? =/

    I have said this before and will say it again: the design of the Thunderbolt connector sucks, and there’s nothing pro about it. I have used TB on my Retina MacBook Pro since the day they shipped, and I have lost count of the times I accidentally unplugged both connectors when only meaning to remove one. The connectors, or maybe the ports, have loosened up over time, and it’s not a firm fit at all. I also see no benefit at all in daisy-chaining gear. In my world that’s a bad, bad thing. If the first one in the chain breaks or acts up, you lose the rest of the chain. The troubleshooting mess it creates is just horrendous; I’ve had it a few times with my ThunderLink and metadata connection (eth adapter). Also, TB cables have controller chips in them, and have you guys felt how hot they can get? We all know what heat means for computer components, and it will be interesting to see the failure ratio on TB cables/adapters in a year or two.

    If Avid shipped new Mojos with TB (and a nice trade-in for the old Mojo), or if they opened up 100% of the API to third parties, the new MacPro could possibly be okay for our basic suites. The current Mojo would need a TB breakout box for the PCI card, and having a breakout for a breakout is not okay. I could potentially live with a few systems running breakout boxes, like Resolve, but I do see a disturbing vision of turning the machine room into uncontrolled cabling hell. It also means we’d have to replace our rather expensive Cubix. I think you must have had responsibility for a machine room to fully understand what a mess it can turn into and all the extra points of failure you’re introducing. Especially if you don’t run your machine room like a nazi and allow other peeps into it.

    Just to look at an example: rebuilding our LTO archive server (PresSTORE) onto a Mac system, something I’ve intended to do since the HP server it currently runs on is nearing end of life, and for various reasons OSX would be a better platform for it here.

    * Preferably it would have two separate fibre HBA with dual 8Gbps. Worst case 1xQuad 8Gbps. One connector for each LTO drive in the T40+. Two connectors to the SAN fibre switch.
    * 1 eth connection for metadata to the SAN
    * 2 eth connectors to the ISIS5k. Potentially 1x10Gbe
    * 1 eth connector to our in-house network
    * External RAID chassis for the Archive database.

    For the ones doing archiving, you know it’s a mission-critical system and probably the most important system you have. In this config I would need two eth TB adapters, one external RAID storage (probably a Pegasus R4), and one TB PCIe breakout box, maybe even two for full speed on the fibre HBAs (?). Compare this to one single unified system where you have everything in one box. You have to be blind not to see the risks and the fundamental issues and problems with this.

    And if it was for my Hiero/Nuke workstation, I’d want a CUDA GPU (up for discussion, since it seems like TF have been playing with the MacPro), one or two Fusion ioFXs, dual fibre SAN connections, an ISIS connection, a house network connection, local RAID, and two monitors. I don’t see the new MacPro as a suitable solution here either. I haven’t done the math on the ports, but it feels like I’d need at least two PCIe expansion boxes and a few GigE adapters. What will the total cost be with all these extra boxes, and what mess will it create in the machine room? Also… can I live with one CPU? Most probably not. And what happens after a year or two when I want the newest GPU? I’d have to replace the whole computer, which seems a tad overkill, not to mention expensive. There will most likely be bigger memory sticks for it, but 64GB, which seems to be the max now, is for sure not enough for me; I need 2-4 times that.

    How compatible are all the current breakout boxes with TB2? Will they benefit from it, or would they all have to be replaced to benefit from the extra speed? Are there TB controller chips in them, or is that in the cable and host computer?

    The MacPro looks cool and is probably a beast, and exactly what some peeps want/need. From my point of view it would have been better suited with an i7, a lower price, and marketing as different levels of a desktop computer sitting between the iMac and MacPro. Not sure why they went Xeon, since the biggest benefit of the E5s would be a dual-CPU setup. It will be really interesting to see benchmarks of a single new E5-2600 v2 compared to a single i7. In my fictional parallel universe the MacPro would have been a beefy rack-mountable tower with room for disks and PCIe cards. Something that could’ve been sold as an enterprise server as well. But Apple isn’t stupid. They have for sure done their market research, and if this is what they came up with, I guess they have good grounds for it, even if I don’t like it. =)

    We don’t have an immediate need to replace our current MacPros, but the day will come. A lot will have happened between now and then, and I’m eager to see how it all pans out, and the creative solutions for having them in racks in machine rooms in a MultiPro environment. Maybe pimp them out like Alien/Prometheus eggs or Matrix towers or something.

    It’s an interesting and scary future.

    1. John Montgomery

      Thanks for taking the time to respond, Henrik.

      I think the lack of a workable machine-room type of unit is problematic for the post community. I think Apple has simply decided it wasn’t worth going for that market, similar to the server market. I may be on crack, but I think there is a large number of professional users who will find the new Mac Pro useful, and that’s who Apple is targeting.

      But as a replacement for the former “big iron” systems? Not so much. Your archiving station example is a great one. Would be painful with the new Mac Pro.

    2. Nice post Henrik! Lots of interesting stuff in there.

      Up until recently I was a Post Supervisor at a very similar post-production facility (although all our offline MacPros as well as the Resolve MacPro were in the individual rooms, with only our NAS, ISIS, Smoke Z800, etc. in the server room).

      I think for offline editing, the new MacPro is great. The best thing is, if a machine has issues, you can literally just “swap it out” with ease — much easier than carting an old MacPro between buildings or rooms! I will say, however, that you don’t HAVE to use an Avid Mojo — AJA and Blackmagic both offer Thunderbolt solutions that work with Avid. That said, I think it’s only a matter of time until Avid release Thunderbolt versions of all of their Media Composer & Pro Tools hardware. If you’re just cutting DNxHD36 or ProRes in Avid, then a single 1GbE connection for your LAN and a single 1GbE connection for the ISIS will be fine.

      The archiving box is an interesting problem. At my previous company we had an old MacPro in the server room with a 10GbE connection to the ISIS 5000 & NAS, and an LTO deck via SAS — very similar to what you’ve got. HOWEVER, at my new company we just use an iMac for all our LTO archiving. All our storage is Thunderbolt-connected, and we use an expansion chassis with a SAS card for LTO. It works great. With a new MacPro — even though it’s more cables and mess — it’ll still have a smaller footprint than an old MacPro. Have you considered just using a Mac mini for your archiving needs in the future? The “xMac mini Server 2H” is pretty cool for this kind of thing. You could just buy two of them, and you’ve got 4 x PCI slots (2 x Fibre HBA, 1 x 10GbE card, 1 x dual 1GbE card) in a 4U rack.

      The Resolve box is an interesting one. If you’re just grading HD, then I think the new MacPro will be sufficient for most applications. However, if you’re grading anything 2K+, then maybe it’s time to upgrade to the Linux version of Resolve (which is FREE with the Resolve panel if you already have one), or move across to Windows. Or… just keep your old MacPro running; it’s probably still got another good 2-3 years in it!

      In terms of rack-mounting the new MacPro in a server room… I don’t think anyone can really discuss this until we can actually play with the shipping units – there are too many unanswered questions. That said… I think it’s all solvable – we’ll just need to all think outside the box a bit.

      In answer to your question: Thunderbolt 2 is backwards compatible, so you can use all your existing Thunderbolt hardware. A number of TB expansion chassis manufacturers have already announced that you will be able to swap out the board on existing chassis to upgrade to the faster TB2.

      Also, the reasons they went with Xeon are lower power usage, more reliability for a machine running 24/7, better build quality, support for more memory, better-protected and more stable (ECC) memory, higher memory bandwidth, higher I/O bandwidth, and higher computational performance compared to consumer counterparts.

      It’s definitely a really interesting time for anyone working in a post facility!

  7. With technology constantly evolving at a record pace, it seems counter-intuitive to buy a computer that can’t be internally upgraded and is essentially a chunk of hardware frozen in time. Based on the specs, you can already build an expandable PC that can outperform it. Once the price is revealed there will be a long pause of silence… and then a sharp drop in interest. The only good thing I foresee the MacPro doing is helping to highlight and increase the demand for PCIe flash storage.

  8. I appreciate the review of the MacPro, especially since it helps justify my position. For an individual, the system is a viable upgrade, but at my work, where we have 20 edit bays, 10 graphics bays, and a handful of audio suites, it becomes more problematic due to the cost of external expansion. A more expensive (or similarly priced) base computer, but now I have to purchase all-new external (local) storage and a PCIe chassis, raising my base price by $1,500 per computer. That’s a lot when you talk about upgrading 30+ systems.

    One comment about bandwidth & Thunderbolt. Thunderbolt 2 is 20Gb/s if I read that right. I don’t know the bandwidth requirements of the RED Rocket, but with the Fusion-io and an 8Gb fibre connection you’ve used half of one of your buses already, your displays will use another, and the third is your large external storage, and now you’re out of high-speed connections.

  9. Personally, I think for the majority of post houses, these new MacPros will be a welcome addition. They’re perfect for edit suites, assistant machines, motion graphics, etc.

    If you’re doing really high-end compositing, I still think they’ll be a great addition once OpenCL is properly implemented in software like Nuke and After Effects – it’s just a matter of time.

    If you’re doing really high-end 3D – then this will be a great end-user box – and you can still use your Linux/Windows boxes for actual rendering.

    The only end-users I can really think of that will hate the new MacPro are colourists who currently use existing MacPros in their suites. If you’re just grading HD material on Resolve, then I think the new MacPro will be absolutely fine. If you currently have an existing MacPro with 5 x GTX Titan cards (using external expansion), then sadly the only option is to move to Linux or Windows.

    That said… my prediction is that Blackmagic will eventually come out with a method of using your new MacPro as a GUI, with the ability to connect a Windows “GPU” box for the actual processing – giving you the familiar Mac OS front end (with the benefit of native ProRes support) that can take advantage of a “GPU render farm” in the server room for times when the new MacPro simply can’t keep up. All you’d need is a Thunderbolt-to-10GbE connection, and away you go (similar to Autodesk’s Burn nodes, Adobe Anywhere, etc.).

    I do find it pretty funny reading online about so many people complaining about the new MacPro, and then when you ask them what they have in their current MacPro, they say an Apple RAID card and a Quadro 4000. For MOST users, I think the new MacPro is a massive step up. For the FEW users who are currently “maxing out” their existing MacPro with a Cubix expansion chassis and multiple GPUs – sadly this new MacPro isn’t for you. But really… most really high-end colourists are using Linux anyway, and the Mac-based Resolve world is firmly stuck in HD-land.

    Pro Tools users are also going to be unhappy initially, but I think this will eventually change once Avid start releasing Thunderbolt-enabled hardware.

    If I think about all our local post houses, in 90% of cases you could swap out an old MacPro with a new one, add one or two Thunderbolt add-ons, and you would actually get faster results. For those Resolve users on a Quadro 4000 + GTX 690 (WITHOUT expansion), it’ll be really interesting to see how the new MacPro compares; I think the performance will be pretty similar. For those Resolve users using a Cubix and 5 x GTX Titans, I think you either need to make the step up to Resolve on Linux (the software is FREE if you already own the Resolve panel), or swap out your MacPro tower for a Z820, keeping your Cubix and GTX cards.

    For those complaining that the back of the new MacPro will just be a mess of cables… well obviously! Look at the back of most existing MacPros and you’ll see cables going everywhere! I think this is a non-issue.

    For those complaining about the lack of internal storage – again, I think this is a non-issue. For most post and VFX facilities, the local machines only require a limited amount of storage for the OS, cache, etc., as all the “big data” will be stored on a NAS or SAN. For stand-alone workstations (i.e. individual users), a Thunderbolt RAID + new MacPro is still SMALLER than an existing MacPro tower. Yes, it’s one more cable, but that’s not really an issue. I’m sure third parties will come out with sexy black circular RAIDs that will fit nicely with the new MacPro if you’re worried about looks.

    I was actually predicting that the new MacPro would keep the old cheese-grater design, and just make it smaller. Man, I was wrong. But I think the direction they’ve gone with is great and makes a lot of sense for the MAJORITY of users.

    You can upgrade the RAM, and you can upgrade the internal storage, but you can’t upgrade the CPU or GPU. Well… that’s the same as all the Apple laptops, and nobody complains about that! If you’re making money off your new MacPro, then replacing the box every three years isn’t that much of an issue; technology changes so quickly anyway.

    Although Thunderbolt 2 doesn’t have enough bandwidth for existing GPUs, it has enough to move large amounts of data around very quickly, so I think over the next 12 months we’ll start seeing third parties building Thunderbolt 2-enabled CUDA/OpenCL accelerators (i.e. boxes that just do external processing and grunt work, not actual video I/O). We’ve already seen people using CUDA GPUs over Thunderbolt on Windows machines, so it’s all technically possible. Personally, I think that even though CUDA currently has a better development platform and is slightly faster than OpenCL in a testing environment for most things, the industry will start focussing much more time and energy on OpenCL now, simply because it gives end users more options in terms of GPU providers.

    Just as Apple basically (if indirectly) killed Flash, I think CUDA’s days are numbered, even if it’s currently faster and much more loved by developers. Time will tell…

    My 2c.

  10. In your article you listed the Thunderbolt 2 specification on the New MacPro as:

    – “Expansion (Thunderbolt 2): Six Thunderbolt 2 connections, each with 20Gbps bandwidth”

    May I ask what your source is and why you think each of the 6 connections can independently support the full 20Gb/s when there are only three controllers? I do believe this is correct, but there is some debate over this point in several forums, and I have not been able to locate any documentation directly specifying that this is indeed the case. Many people interpret the casual jargon from both Apple and Intel used to describe TB2 as possibly meaning that each pair within the 6 ports shares the 20Gb/s bandwidth.

    Which is it? Is there a *total* of 60Gb/s or 120Gb/s from the 6 ports (3 controllers) offered on the MacPro?

    Is it 3 controllers with 6 20Gb/s independent connections or 3 controllers at 20Gb/s offering two connections each?

    Thanks!

    1. I think the confusion comes from the word “connector” as opposed to “controller”. I find it very clear that this is about each connector.

      Regarding the current thunderbolt implementation:

      “A Thunderbolt connector is capable of providing two full-duplex channels. Each channel provides bi-directional 10 Gbps of bandwidth.” (https://thunderbolttechnology.net/tech/how-it-works)

      For thunderbolt 2:

      “It is achieved by combining the two previously independent 10Gbs channels into one 20Gbs bi-directional channel that supports data and/or display. Current versions of Thunderbolt, although faster than other PC I/O technologies on the market today, are limited to an individual 10Gbs channel” (http://blogs.intel.com/technology/2013/06/video-creation-bolts-ahead-%E2%80%93-intel%E2%80%99s-thunderbolt%E2%84%A2-2-doubles-bandwidth-enabling-4k-video-transfer-display-2/)

      Of course, I could be wrong — but this seems really clear to me.

      1. Thanks. Yeah, it seems clear to me too. But I was looking for something more solid – where it can’t be said: “Of Course, could be wrong”. 🙂 I have of course already seen the pages you referenced there.

        I guess the hunt continues… (I don’t get it, though; why oh why wouldn’t this be technically and clearly stated by Intel… or even Apple for that matter… seems so odd. One could even make a conspiracy theory outta this. :D)

  11. I’d like to direct your attention to one such discussion thread as mentioned above:

    http://forums.macrumors.com/showthread.php?t=1621534 from post 130 on.

    Basically the point of interest is that each *controller* MAY only have four version-2 PCIe lanes to utilize (according to Intel’s documentation), so the two connectors per controller are switched. That would mean there is actually only 20Gb/s total shared between the two connectors on each of the three controllers. http://en.wikipedia.org/wiki/PCIe – (notice the “Capacity” box at top right)

    So for example, if you had two 4-SSD RAID 0 TB2 enclosures, each connected to one of a single controller’s two ports, you could not get 20Gb/s sustained from both of them at the same time. Sustained, you would only get ~10Gb/s from each if both were attempting to saturate the link, although 20Gb/s would be possible from one at a time.

    How many lanes, and what PCIe version per connector, are at issue here. If indeed it is as above, then the two connectors per controller are not “20Gb/s each”. We shouldn’t say they are or we’d be misleading a lot of folks. We could say “20Gb/s per controller, two connectors per controller” or something like that.

    Or… we would have to show that each controller is using 8 v2 PCIe lanes. Or maybe some dance with 4 v3 PCIe lanes – if that’s even possible.

    Thanks for reading!
    I appreciate your time!

    Tess.

