Benchmarks and observations.
Ace computer builder FU Steve reports on benchmarking results for the HP100, now renamed the HP100Plus.
Power consumption:
The first thing I did with the Hackintosh HP100Plus was to finally measure real power use. Theoretical tables, where you add up the ratings of your components, seem to assume that everything is running full bore simultaneously, which is clearly unrealistic. So I used a Kill-a-Watt power consumption meter (Amazon – $20), inserted between the wall socket and the HP100Plus (computer only, not the displays), to determine real, live power use in a variety of situations. The HP100Plus uses a stock 500 watt power supply which did fine with the old nVidia 9800GTX+ card, and the new nVidia GTX 660 is rated at a lower wattage than the old card, so power use should not be an issue – but nothing beats measuring it.
The Kill-a-Watt monitoring real-time power use.
Here’s a table of findings:
HP100Plus power use.
So none of these activities remotely tax the 500 watt power supply, the most demanding being the Unigine rendering test, which uses the most sophisticated graphics around. To put these data in perspective, the CPU’s operating limit is 191°F, and the HP100Plus runs the Core i7 CPU at 4.3GHz, or 23% faster than the stock frequency of 3.5GHz – a modest overclock made possible by the excellent Coolermaster 212Plus aftermarket CPU cooler. The rest of the innards include a Gigabyte Z68 motherboard, a Core i7 Sandy Bridge 2600K CPU, 16GB of 1600MHz RAM, two 120GB SSDs (one SATA 3, one SATA 2), two 1TB 7200rpm SATA 3 HDDs, a USB socket card, and a wireless Airport card. Bottom line? Even for gamers, 500 watts should be sufficient.
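Even the pessimistic “add up the spec sheets” method supports that conclusion. Here is a minimal Python sketch summing rough published ratings – the wattages below are assumed nominal figures for illustration, not readings from the Kill-a-Watt:

```python
# Rough spec-sheet wattages (assumed for illustration only; real draw at the
# wall is lower still, because components almost never run flat out together).
nominal_watts = {
    "Core i7-2600K CPU (95W TDP)": 95,
    "nVidia GTX 660 (140W TDP)": 140,
    "Motherboard, RAM, fans (est.)": 50,
    "2 x SSD (est.)": 5,
    "2 x 7200rpm HDD (est.)": 16,
}

total = sum(nominal_watts.values())
psu_rating = 500

print(f"Spec-sheet total: ~{total}W of a {psu_rating}W supply "
      f"({total / psu_rating:.0%} of rating)")
```

Even that worst-case paper total comes to roughly 300 watts, comfortably inside the 500 watt supply.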
Benchmarks:
Traditional GPU performance benchmarking apps like Cinebench do not cut it; the app fails to test all the great new technologies in the Kepler cards being put out by nVidia. The GTX 660 has some 2.54 billion transistors, compared with a mere 800 million for the Core i7 CPU, and four times the memory of the old 9800GTX+ which the HP100 previously used. The new standard in GPU performance measurement is Unigine, which has immensely sophisticated video graphics – right down to fields of swaying grass blades – so that is what I used. Unigine refused to run on the 9800GTX+ as that card simply cannot hack it, so there are no comparative data.
In addition to installing nVidia’s drivers for the GTX 660, as explained yesterday, I also installed their CUDA drivers, which make the best use of the latest rendering technology in the new card. CUDA speeds complex math calculations and will typically halve the time taken to rip and encode a movie, from 14 to 7 minutes. For reference, my system rips (no compression) a 4GB movie in 6.4 minutes.
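To put those numbers in context, here is a quick back-of-the-envelope calculation in Python using only the figures quoted above (the 1GB = 1024MB convention is my assumption):

```python
# Figures quoted in the text.
movie_gb = 4            # size of the ripped movie, GB
rip_minutes = 6.4       # uncompressed rip time on this system
encode_before = 14      # typical encode time without CUDA, minutes
encode_after = 7        # typical encode time with CUDA, minutes

mb_per_second = movie_gb * 1024 / (rip_minutes * 60)
speedup = encode_before / encode_after

print(f"Rip throughput: ~{mb_per_second:.1f} MB/s")
print(f"CUDA encode speed-up: {speedup:.1f}x")
```

That works out to roughly 10–11 MB/s off the disc and a clean 2x gain on the encode.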
Luxmark is another rendering benchmark tool which I ran to simultaneously test CPU and GPU functions.
And finally, while Cinebench is outdated, I ran its GPU test one last time and the HP100Plus came in at the top of the heap.
Here are the screenshots:
Note that in the Cinebench run I also tested the integrated HD3000 GPU which comes with every i5 and i7 Sandy Bridge CPU. The current Ivy Bridge comes with the better HD4000 GPU and might be expected to deliver perhaps twice the frame rate of the HD3000. Call it 25fps, still leagues below what the GTX 660 delivers.
The CPU speed for all tests was 4.3GHz – not all the apps report it correctly. The same tests run without CUDA installed came in a few percent lower. Not dramatic, but why not install this enhancement?
Finally, Novabench is yet another benchmarking app, and in this case I was able to run it on both the old and new cards.
Novabench – 9800GTX+ GPU
Novabench – GTX 660 GPU
The significant change here is the doubling of the Graphics Tests score, much as predicted on theoretical grounds in yesterday’s piece.
Lightroom 4 and Photoshop CS5:
In practical use there is little change from the 9800GTX+. The old card was already blindingly fast in these relatively undemanding tasks. The main advantage of the new card is that it will drive larger monitors far better. Thomas’s three Dells are 1680 x 1050, while good 27″ displays now sport 2560 x 1440 resolutions. That’s more than twice the pixels per screen, and a lot more square inches to cover.
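The pixel arithmetic behind that claim is easy to check; a short Python snippet using the resolutions mentioned above:

```python
# Resolutions quoted above.
dell_px = 1680 * 1050       # one of Thomas's current Dell displays
new_27_px = 2560 * 1440     # a current 2560 x 1440 27" display

print(f"Dell 1680 x 1050: {dell_px:,} pixels")
print(f"27\" 2560 x 1440:  {new_27_px:,} pixels")
print(f"Ratio: {new_27_px / dell_px:.2f}x the pixels per screen")
print(f"Three such displays: {3 * new_27_px:,} pixels for the GPU to drive")
```

Roughly 3.7 million pixels per screen against 1.8 million today – about 2.1x per display, and over 11 million pixels if all three were upgraded.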
Other sockets:
The GTX 660 comes with two DVI sockets, one DisplayPort and one HDMI port. Thomas currently has two Dells connected to the two DVI sockets, with the third driven via USB using a DisplayLink adapter. I have read that the HDMI and DisplayPort outlets can be used at the same time as the two DVI ones to power two additional monitors, but until he gets the cables to test that I cannot comment. The advantage of this approach, if it works, is that higher resolutions can be supported, as the DisplayLink adapter is limited to 2048 pixels on the long side. That said, it has been super reliable, requiring only the occasional driver update as Apple introduces new major OS X releases.
Use with MacPro:
The GTX 660 only works with OS X Mountain Lion. It is not supported in Lion or earlier versions, and it seems nVidia has no plans to release drivers for those. The latest OS X builds are rumored to include nVidia drivers, and at least one much maligned and disregarded MacPro user has reported success installing the GTX 660 in a MacPro chassis running the latest version of 10.8.2 (with supplemental updates). I have not tested it, but any MacPro user still poking along with older video cards should try the upgrade or, better yet, build a HackPro.
PCIe x16:
To make sure you are using the full 16-lane (x16) bandwidth to communicate with the GTX 660, turn off TurboSATA/USB3 in BIOS – Integrated Peripherals. Your USB3 devices will still work fine provided the USB3 driver is installed. Doing so ensures that the x16 data path is used rather than x8, which is what you will get – even with only one card installed – if the BIOS is wrongly set. Also make sure that the card is in the x16 slot, not the x8. On the Z68 Gigabyte motherboards, the x16 slot is the one nearest the CPU.
PCIe = x16 correctly set.
Does it make a difference? Yes. In the Unigine bench test an occasional minor stutter at x8 disappears at x16.
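If you would rather confirm the lane width from within OS X than reboot into the BIOS, System Information reports it for the graphics card. Here is a minimal Python sketch that shells out to system_profiler and filters for the relevant lines – the exact field label (e.g. “PCIe Lane Width”) can vary by OS X version, so treat this as a rough check:

```python
import subprocess

# Ask OS X's system_profiler for the graphics/displays section and pick out
# any lines mentioning the PCIe link, e.g. "PCIe Lane Width: x16".
report = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

for line in report.splitlines():
    if "Lane" in line or "Link Width" in line:
        print(line.strip())
```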
Use of two SSDs:
The HP100Plus uses two 120GB SSDs which store the OS and all apps, cloned nightly using CarbonCopyCloner. I highly recommend this setup as it makes major upgrades, like this one, very easy. The backup drive is used as a test bed and, if anything blows – as it usually does – it can be restored in a matter of two minutes with an incremental restore in CCC. I mean two minutes! Ask me how I know …. Life is simply too short to do major upgrades using spinning disk drives.
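For readers who want to see the incremental-clone idea in concrete terms, here is a deliberately simplified sketch using rsync instead of CCC – my illustration only, with made-up volume names. CCC is what the HP100Plus actually uses, and it takes care of bootability, ownership and extended attributes that this bare command does not:

```python
import subprocess

# Hypothetical volume names for illustration.
SOURCE = "/Volumes/BootSSD/"   # trailing slash: copy the contents, not the folder
DEST = "/Volumes/CloneSSD/"

# -a        preserve permissions, timestamps, symlinks, etc.
# --delete  remove files on the clone that no longer exist on the source,
#           which is what makes the nightly run incremental and fast.
subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
```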
Cold start:
The time from the Apple logo splash screen to the Login screen is 14 seconds. For comparison the 2012 MacBook Air takes 10 seconds.
Warranty:
The Zotac USA warranty is for two years. No need to waste money on AppleCare ….
Thank you, FU Steve. Here’s to the next time you decide I need something upgraded.
Update, March 2013:
Apple has just released OS X 10.8.3, which includes native support for nVidia GTX 660 cards, whether from EVGA, Zotac, PNY or any other brand. They did this because one of the 2012 iMac variants uses the 660M GPU, the mobile (and less speedy) version of the real thing. Once you upgrade to 10.8.3 you can delete these two lines from the Boot Drive/Extra/org.chameleon.Boot.plist file on your Hackintosh, as the presence of native drivers means the Hack no longer has to be told that a 660 Kepler card is installed:
The above lines can be deleted.
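If you would rather not hand-edit the file, Python 3’s plistlib can read the Chameleon boot plist so you can see exactly which entries are present before removing the Kepler-related pair shown above. This sketch only lists the keys and changes nothing; the path assumes the Extra folder sits at the root of your startup volume, as in the text:

```python
import plistlib

# Path quoted in the text; adjust if your Extra folder lives elsewhere.
PLIST_PATH = "/Extra/org.chameleon.Boot.plist"

with open(PLIST_PATH, "rb") as f:
    boot_settings = plistlib.load(f)

# Print every key/value pair so the two Kepler-related entries shown in the
# screenshot above can be identified before deleting them by hand.
for key, value in boot_settings.items():
    print(f"{key}: {value}")
```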
nVidia also released an update to its CUDA driver, which you can download through the related System Preferences pane, after which you will see this:
Latest CUDA driver.
The above System Information display is also updated to reflect the use of native nVidia 660 drivers:
I can confirm that all works well with 10.8.3.