Every new custom PC build is a bit of an experiment. You may know all the specs of the components, but once it’s all assembled, it’s time to find out if your expectations were correct, or if there are any hidden issues to work out. The PC I built in the earlier accompanying article began by running completely within my expectations, then quickly exceeded them, and all at a rather reasonable cost for a high-performance computer — around US $1,600. The rest of this article will go into more depth on the design choices of the build and how it all worked out.
“Unlocked” (Overclockable) i7 CPU
There are two major considerations in choosing a CPU — overall performance and projected useful life. High performance comes primarily from a combination of clock speed and number of cores, but cache memory and memory controller technology also have a significant impact. Without a doubt, as performance and features go up, so does the price. The difficult part of the decision process is determining what kind of software you run now and what you expect to run in the future — and how the needs of that software will determine what CPU capabilities are essential.
MSTS is highly CPU-bound, so a fast CPU will certainly keep framerates up. MSTS, however, is largely obsolete in terms of software design. It’s single-threaded, so multiple CPU cores are irrelevant. MSTS will run well on any modern CPU as long as you have a graphics card and driver that it won’t fight with — which pretty much means anything from Nvidia. But MSTS is virtually useless as a baseline since it’s such old software, beyond the fact that I want the system to accommodate it (and other old games). Open Rails is a little better as a defining application — it’s evolving into a more modern application, with CPU and graphics processing split up. It’s not highly multithreaded, though, so any fast modern CPU is still appropriate.
Modern games are very dependent on the graphics processor and optimized graphics drivers. Strictly speaking, modern major “AAA” game titles can run just fine on a mid-range processor like the less expensive i5 CPU. They tend to use no more than two cores, though, and rarely take advantage of hyperthreading, which keeps an i5 or similar processor a popular choice for purely mainstream gaming.
In addition to train sims, I also run Flight Simulator X, and that carries with it more complicated demands despite its age. With some modifications to its configuration, FSX can be coaxed into using more than one thread. In practice, you can shift its workload onto at least two cores, with intermittent usage of two more. While this is still within the realm of what a mid-range CPU can do well, FSX itself is poorly optimized and will simply grab all the resources it can. Additionally, even with DirectX 10 support enabled, FSX remains far more CPU-bound than GPU-bound. The i7 and higher-end processors come into play here simply because they have more capacity for whatever an application throws at them. High-end processors like the i7 have more cache memory and the best overall memory bandwidth, which makes them better at moving a massive workload between the active cores and memory.
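To give a concrete sense of the kind of configuration tweak involved, here’s a minimal sketch of how the commonly used AffinityMask value for fsx.cfg can be worked out. The core assignments below are assumptions for illustration only; the right mask depends on your own CPU.

```python
# Sketch: compute an AffinityMask value for the [JOBSCHEDULER] section of fsx.cfg.
# The mask is a bit field: if bit N is set, FSX may schedule work on logical core N.
# Assumes a 4-core/8-thread CPU and, purely as an example, keeps logical core 0 free for Windows.

def affinity_mask(allowed_cores):
    """Return the integer bitmask for the given logical core numbers."""
    mask = 0
    for core in allowed_cores:
        mask |= 1 << core
    return mask

mask = affinity_mask(range(1, 8))   # logical cores 1 through 7
print(f"AffinityMask={mask}")       # prints AffinityMask=254
```

The printed line is what would go under the [JOBSCHEDULER] heading in fsx.cfg, assuming that tweak suits your setup.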
I use virtual machines periodically, for testing alternate configurations and software that might conflict with my normal system configuration. The virtualization optimizations, along with the extra cores and cache a VM host benefits from, are typically found in higher-end processors like the i7.
Finally, I do periodically need to re-encode/transcode media files that I own into other formats in order to play them on various devices. For that, I tend to use the open-source Handbrake encoder. It can do some really amazing work, but it will drive a CPU to its absolute limits; so much so that encoding high-resolution video with Handbrake has become a common way to benchmark CPU performance. To avoid tying up my computer for a half hour or more at a time, the fastest, most efficient CPU gives the most benefit.
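For jobs like that, it’s convenient to drive the encoder from a script rather than the GUI. The sketch below is only an illustration of that workflow using HandBrakeCLI; the folder paths and the preset name are placeholders, not part of my actual setup.

```python
# Sketch: batch-transcode a folder of videos with HandBrakeCLI (assumed to be on PATH).
# Paths and the preset name are placeholders; pick whatever preset suits your devices.
import subprocess
from pathlib import Path

SOURCE = Path("D:/Video/originals")   # hypothetical source folder
DEST = Path("D:/Video/converted")
DEST.mkdir(parents=True, exist_ok=True)

for src in SOURCE.glob("*.mkv"):
    out = DEST / (src.stem + ".mp4")
    # HandBrakeCLI will peg every core it can get, so run one job at a time.
    subprocess.run([
        "HandBrakeCLI",
        "-i", str(src),
        "-o", str(out),
        "--preset", "Fast 1080p30",   # example preset name; substitute your own
    ], check=True)
```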
My use case leans toward a high-end processor. While a normal i7 could satisfy all these needs, an “unlocked” overclockable processor allows me to fine-tune its performance beyond the factory defaults. Like it or not, speed and efficiency are major factors in determining how useful a system remains as it ages. The extra speed gained in overclocking helps make an unlocked processor a better value over the long term.
Ultimately, the i7 6700K CPU made the most sense. It can easily handle the wide variety of uses I tend to put my computer to, and the benefit of overclocking will help it remain useful longer — which means I can upgrade the less expensive components around it over time and easily keep the system viable for at least five years, possibly more now that advances in processor technology are coming more slowly and tend to focus on power efficiency rather than speed.
Why not AMD? It’s true that AMD’s processors are attractively priced, but they fall behind Intel’s offerings in terms of individual thread and core performance. That’s why AMD has been pushing more cores in their CPUs, to try to offset their per-core performance deficits. Unfortunately, the side effect of that is more power consumption and more heat output. That might change once AMD’s new “Zen” architecture goes into production, but for now, their current offerings can’t compete with Intel for my uses.
“Enthusiast” Grade Motherboard
The motherboard is one of the three most expensive components of the system, the other two being the CPU and the graphics card. Once you decide on a CPU, you’re locked into a range of motherboards based on that CPU’s socket and chipset requirements. Nevertheless, there are good values out there if you need to save some money on a motherboard without crippling yourself on performance.
To a large extent, the chipset defines the baseline capability of any motherboard. Using the i7 6700K processor automatically dictates using the Z170 chipset. By itself, the Z170 is quite full-featured. The main criticism leveled at it is that its PCI-E lane support is somewhat limited compared to other chipsets, but that’s a necessary trade-off to support its SATA bus architecture for high-speed devices as well as other features. For multi-GPU setups, there are other platforms that do better — such as Intel’s X99 platform, but that also requires a different (and more expensive) CPU family. I only plan on using one graphics card, so Z170 is fine for my purposes.
While I want to extract the best day-to-day performance out of my PC, I don’t do competitive overclocking as a hobby, so my needs fall somewhere in the middle of the “enthusiast” category of motherboards. There’s lots to choose from. I settled on the Maximus VIII Hero board from ASUS. It’s actually the entry-level full-ATX-size board in their gaming and enthusiast line. There’s not much to set it apart from their more mainstream performance-grade Z170 boards, but it does have some extra attention paid to good onboard audio circuitry, and it generally has the best balance of useful features focused on raw performance. You really wouldn’t go wrong with one of ASUS’ or other manufacturers’ slightly less expensive Z170 designs — this one just fits the bill for me exceptionally well. I don’t need the fancier lighting and radical overclocking support of the more expensive boards in the Maximus or similar lines, and this “entry-level” board is anything but stripped down. In fact, it forms the core of everything offered in the more expensive boards.
In particular, the “M8H” as it’s sometimes called offers a particularly refined and feature-rich UEFI BIOS. It can store and switch between multiple complete configurations, which makes testing and fine-tuning a system easier.
It’s possible to flash the firmware from a USB flash drive without even booting to the BIOS configuration screen — simply provide power to the board, insert a USB stick with the correct firmware file, and press a dedicated hardware button. This feature is common on many ASUS motherboards, and it’s a surprisingly useful, nearly foolproof way to flash the firmware. It’s rare to have a firmware flash operation go wrong, or to have settings get corrupted and make re-flashing difficult — but if you’ve ever experienced a BIOS firmware failure, this kind of low-level flashing capability is invaluable.
Overall, the Maximus series from ASUS has a reputation for particularly good control over CPU and memory parameters and power delivery, which makes for very stable overclocking. That’s not to say that boards from other manufacturers aren’t equally good. Gigabyte and MSI, for instance, offer equal performance in different feature combinations. ASRock is also a good alternative for slightly lower cost and competitive features in enthusiast/gaming category boards; in fact, they’re somewhat of an independent spinoff from ASUS.
Original Graphics Card — Overclocked Nvidia GTX960 with 4GB VRAM
(See update immediately below this section; a new graphics card has been added.) I had to save a little money somewhere — and unlike many gaming enthusiasts, I chose the graphics card as the place to compromise and shave some cost. However, I managed to sacrifice as little as possible.
Since I run MSTS as well as other older games, an Nvidia GPU was basically the only choice. AMD definitely offers outstanding performance for any given price in their GPUs, but their drivers are certainly not the easiest to get working with MSTS. In general, Nvidia’s GPUs and drivers tend to offer the best support for retro-gaming.
At the time I selected my components, Nvidia was still producing the 9xx GPU series, and the 10xx series was still officially in development, but expected to be released in the next year. So spending a lot on a graphics card that would very soon be made outdated by what was expected to be a significant generation jump didn’t make much sense.
Additionally, I’m somewhat constrained in my desk space, and the biggest monitor that will effectively fit is a 16:10 widescreen monitor with 1440×900 resolution. My screen just won’t support the resolutions of the popular larger screen sizes — so it’s a waste to spend too much on a graphics card optimized for higher resolutions until I replace both my desk and screen. I also prefer not to use large amounts of anti-aliasing in my graphics settings. AA tends to make things look blurry too easily to my eyes; I’d rather have crisp, sharp images even if it comes with a little edge aliasing sometimes. And I also generally don’t care for hyper-real lighting effects with excessive bloom and camera lens flare effects. I prefer my game graphics to look real-world realistic — not like movie special effects all the time.
The optimum choice then worked out to be a GTX960 GPU with 4GB of memory and a factory overclock of 1.25 GHz. That’s twice the VRAM and a significantly higher clock speed than a typical GTX960 — probably enough to wring the most performance possible out of the GTX960, and likely to be adequate for my preferred game graphics settings, which tend to be high-quality but intentionally light on special effects processing.
The particular GTX960 I chose happened to be from ASUS as well. Branding isn’t necessarily a major concern with graphics cards; the GPU is going to be made by Nvidia (or AMD if you choose that family). It’s best to choose based on overall construction and component choices on the card, the efficiency of the heatsink and fan assembly, and the overclock characteristics of the card. In my search, ASUS happened to offer the best value for a highly overclocked card, particularly when paired with additional VRAM.
Graphics cards all use drivers direct from Nvidia or AMD; any software included is optional and non-essential. However, for overclocking any video card, MSI’s Afterburner and EVGA’s Precision X software are the most popular. I use MSI Afterburner to manage graphics overclocking on my system even though it’s an ASUS card — the overclock software works hand-in-hand with the graphics card’s standard hardware interface so there’s no manufacturer lock-in.
New High-Performance Graphics Card — Overclocked Nvidia GTX1080
(Updated July 24, 2017)
The beauty of custom PCs is that upgrades are generally very easy. As I noted above, the original graphics card was a budget consideration, particularly in anticipation of the GTX 10xx series cards from Nvidia. The new cards arrived and promptly outclassed the 9xx-series cards by a wide margin, so the strategy of starting with an excellent budget card and then upgrading to the new series played out well.
The next step was to figure out the best value among the new generation of Nvidia cards, based on their capabilities plus market factors. I waited a bit to see if prices would continue to fall once the initial rollout’s high prices settled down, but that never really happened to any large degree. Two factors were in play — manufacturers rushed initial versions to market, then doubled back and began producing improved revisions of their designs. As of mid-2017, most manufacturers have gone through at least one or two revisions of their new GeForce card lineups. This tends to create scarcity as one lineup goes out of production and a new one begins, keeping prices up. Also, DRAM has been in short supply globally, which puts limits on production schedules and drives prices up.
This graphics card purchase was intended to be the long-term card for this PC, so I didn’t want to skimp on performance. By the numbers, a GTX 1060 would offer excellent performance but might show its age sooner as graphics demands from software increase. A GTX 1080 is on everybody’s wishlist, but its price tends to be a high hurdle to cross. The GTX 1070 tends to sit in the best price/performance “sweet spot”, particularly if, like me, you don’t necessarily plan on going to a 4K monitor right now. (A monitor upgrade is planned, but I’m not going for 4K resolution.) But then something happened — because of supply limits due to production delays resulting from design lifecycles and DRAM shortages, GTX 1070 cards became scarce. They’d get bought up quickly at full price, and then prices on the limited remaining inventory would climb due to short supply. Suddenly, GTX 1070 cards weren’t such a good value after all. I didn’t want to step down to a GTX 1060, so I waited a bit longer and snatched a GTX 1080 card on one of Amazon’s price swings, just before it went out of stock again. I was at least able to capture a price below US $600, which isn’t too bad for a higher-end GTX 1080. Note that this is a factory-overclocked card, not a “reference” style stock-clock card, so naturally the price is higher — but that’s the market sector I was interested in.
The card I chose was EVGA’s GTX 1080 FTW2 card, with their new “ICX” cooling package. This particular card is factory-overclocked to 1726 MHz, with the boost clock speed raised to 1860 MHz — that’s more than enough to run anything I want at maxed-out graphics settings, or nearly so. It’s possible to overclock the card further, but at this time there’s just no need for that. EVGA went through a period with some cooling issues on their 10xx-series cards, which have since been resolved. They then went on to work up their advanced “ICX” cooling system, which includes monitoring of multiple temperature sensors and allows asynchronous control of the fans for optimal cooling with less noise. That was what made this card particularly attractive — good cooling tends to prolong the life of components.
Changing to an EVGA card also necessitates changing from MSI Afterburner to EVGA’s PrecisionXOC software to monitor and control advanced features on the card, such as its fan profiles. EVGA’s software isn’t quite as nice to use for overclocking as MSI Afterburner, but it’s very full-featured and it does sidestep the occasional problem where the combination of MSI Afterburner and its companion RivaTuner Statistics Server could prevent some applications with Java front-ends or components from launching.
The card is well-made, with a robust heat sink and a metal heat spreader on the back of the card. Like all GTX 10xx cards, its power consumption is relatively low for its capability, although it does still use two eight-pin PCI-E power connectors to provide a stable power input. Like nearly everything else targeted at the high-performance PC market, it has RGB lighting. The lighting can be managed from the PrecisionXOC software and set to something reasonably subtle and attractive, or even turned off. It also has indicator LEDs for the primary GPU, memory, and power regulator temperature sensors, which can be turned on or off and have their colors set to respond to the temperatures on the card — which is actually a thoughtful feature.
In operation, the card is quiet and doesn’t put any undue load on the power supply; as I expected, the new generation of graphics cards, with their lower power consumption, would be fine without an upgrade to the PC’s power supply. When the card is running at full clock speed, there is some appreciable heat thrown off, but the card’s own heatsink and fans do an excellent job of keeping its components’ temperatures low — so far, I’ve never seen it exceed its low-to-middle temperature range, up to 67C or so. The PC’s large case fans are more than up to the task of exhausting the graphics card’s heat output, and since the CPU is liquid-cooled, there’s little chance of the added heat-soak that can affect the heatsink of an air cooler.
Memory
Memory is frequently somewhat misunderstood. Memory performance has a significant impact on overall processing speed. More capacity is better — up to a point. Faster memory speed is good — up to a point. In the simplest terms, the object is to have plenty of memory available to the CPU, and for it to always be ready to be read from or written to whenever the CPU requires.
Typical gaming systems work well with 8 GB of memory; that’s plenty to load the game code into and have lots of room for Windows to work with without getting in the way. Since I also run virtual machines, I opted to double the capacity up to 16 GB, so I can run a well-provisioned virtual machine without impacting the host operating system and any applications open at the same time.
Memory speed is harder to gauge, and I did a lot of research before settling on the DDR4-3000 memory I ultimately chose. Benchmark comparisons show that up to around DDR4-2800, memory gives steady performance gains, but then the improvement starts to fall off. There are still gains beyond that, but the voltage applied to the memory starts to increase, which also increases heat — and for smaller and smaller performance gains. It also becomes more difficult to achieve memory timings that coincide with the CPU’s demands without forcing it to wait to sync up. So it’s possible to increase memory speed beyond the point of getting any useful performance improvement. From my research, the best point tended to be around DDR4-3000. It takes a little more voltage to achieve that speed, but the memory modules can still keep latency (the speed at which memory can respond and “sync up” to CPU requests) at the level of overall slower memory, and without the significant price jump that comes in on memory rated at DDR4-3200 and above.
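The latency trade-off is easier to see with a quick calculation. The sketch below converts a few rated speed and CAS-latency combinations into first-word latency in nanoseconds; the CAS values are typical retail figures assumed for illustration, not the specs of my particular kit.

```python
# Sketch: first-word latency (ns) for a few DDR4 speed / CAS-latency combinations.
# DDR transfers data twice per clock, so the actual clock in MHz is the data rate divided by 2.

def latency_ns(transfer_rate_mts, cas_cycles):
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000   # clock period is in microseconds; x1000 gives ns

for rate, cl in [(2400, 15), (2666, 16), (3000, 15), (3200, 16)]:
    print(f"DDR4-{rate} CL{cl}: {latency_ns(rate, cl):.1f} ns")

# DDR4-3000 CL15 and DDR4-3200 CL16 both respond in roughly 10 ns, versus about 12.5 ns
# for DDR4-2400 CL15 — the faster kits mostly add bandwidth rather than responsiveness.
```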
Additionally, I chose to populate the memory with two modules instead of four. Two high-density modules will have slightly better voltage and current characteristics for the motherboard to manage compared to four. It’s also a bit simpler for the CPU’s memory manager to regulate traffic to and from two modules as opposed to four — there’s somewhat of a reduction in the chances of having timing problems across the modules. Although high-performance memory is sold in matched sets, each module is always slightly different, so minimizing the chances of varying tolerances helps performance.
Hard Drive Storage
I’m somewhat conservative about storage. Solid-state drives are fast, but they don’t yet have decades of long-term use to document their reliability. Also, SSDs offer excellent performance for large block transfers — like booting Windows, launching applications, and minimizing loading times in games when moving between map areas — but they don’t necessarily perform much better, if at all, on small-file read/write accesses when compared to fast mechanical hard drives. My computer doesn’t have to reboot often, and I can wait for applications to launch or game maps to load. But train sims and flight simulation tend to rely on reading lots of small files to load scenery and objects. SSD performance for that kind of work isn’t definitively better for the cost.
I opted to stay with mechanical hard drives for now, although I chose high-performance Western Digital Black series drives running at 7200 RPM. I also split up how data is stored across them; one is for the Windows OS and normal applications like web browsers and Microsoft Office, and the other is where large, I/O-intensive games and simulators are installed. This helps separate read/write accesses across the two different drive assemblies, so that Windows’ activity won’t conflict as much with a running game or simulation. Since I’m using drives of 1 and 2 terabytes capacity, there’s still a compelling price difference that favors mechanical drives.
Power Supply
The Skylake series processors are designed to be more power-efficient, and modern CPUs in general are designed with low power consumption in mind. Additionally, I wasn’t planning on running a terribly power-hungry graphics card now, and the anticipated next generation of graphics cards was likely to need little more. After calculating the expected power demand of my build from the manufacturers’ spec sheets, it appeared a 650 watt power supply would be all that was necessary: not likely to be loaded beyond 50% capacity most of the time, and occasionally up to 70% at most. That’s well within the desired load range for efficiency. So Antec’s Edge 650 PSU would fit the bill nicely.
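As a rough illustration of that sizing exercise, here’s the kind of back-of-the-envelope arithmetic involved. The wattage figures are ballpark assumptions for illustration, not the actual spec-sheet numbers I used.

```python
# Sketch: rough peak-draw estimate used to sanity-check the 650 W PSU choice.
# All wattages below are assumed ballpark figures, not manufacturer spec-sheet values.
estimated_draw_w = {
    "i7 6700K, overclocked":         120,
    "GTX960-class graphics card":    130,
    "Motherboard, RAM, fans, pump":   60,
    "Hard drives and optical drive":  25,
}

total = sum(estimated_draw_w.values())      # about 335 W
psu_capacity = 650
print(f"Estimated peak draw: {total} W")
print(f"Load on a {psu_capacity} W PSU: {total / psu_capacity:.0%}")   # about 52%
```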
Antec’s power supplies are actually built by SeaSonic, which has earned an excellent reputation for reliability. This particular one uses two separate 12V rails, so the CPU and graphics card can have separate power feeds. In the unlikely event something electrically bad happens, the non-faulted rail is more thoroughly isolated for safety. Additionally, load-induced voltage fluctuations are somewhat better isolated in this design.
A fully-modular design means that all power cables can be disconnected from the power supply itself, which makes building, managing cables, and making upgrades later considerably easier.
Case
The Antec 900 case that I chose might be dismissed as outdated by some system builders. It definitely doesn’t have the tool-less hard drive bays and ample cable management capabilities of newer designs, but with care and planning, it’s possible to run cables neatly and out of airflow paths. Its advantage is the large and quiet 200mm fan on top, which can remove heat from the case very efficiently. In general, the case has an industrial, no-nonsense style vaguely reminiscent of big IBM hardware from the ’90s. It doesn’t scream “3L1T3 G@M3R!!” like a lot of cases designed for high-performance builds, nor is it a bland metal box like some popular minimalist designs. It’s just down-to-business hi-tech.
I wanted to avoid the more enclosed “low-noise” cases because all of them seemed to use dual 120mm top fans, which tend to be noisy, and they often used solid front panels with narrow, baffled air intakes that tend to be more adversely affected by any dust that gets into them. The Antec 900’s big metal-mesh front will still pull in plenty of air even if a little dust accumulates, and that dust is easy to remove with a swipe of the fingers across the mesh. My approach to moderating noise was to use a case with plenty of airflow so that the fans wouldn’t have to run as fast except during heavy CPU loads, which the 900 case can do quite well, despite the age of its design.
The case is an integral part of cooling the system, even with a water cooling loop. Although the 900 case was originally designed for use with large air-cooling heatsinks, its airflow characteristics appeared to lend themselves well to use with an all-in-one closed-loop liquid cooler.
Closed-Loop CPU Water Cooler
From the beginning, I decided to use a closed-loop, or “all-in-one”, water cooler instead of a more traditional tower-style fan-cooled heatsink. The thinner circuit board substrate used on Skylake processors had quickly drawn early reports of being too easily bent and damaged by large, heavy heatsinks. The much smaller, lighter water block of a liquid cooler would be safer. I decided early on that this build, and any other Skylake builds I might construct, would use some form of water cooling if they needed anything beyond the stock Intel heatsink.
The case I chose had an impact on the cooler to use. Since I intended to mount an optical drive in the case, I would lose the ability to mount a dual-fan radiator in the front — a typical water cooler setup — because the optical drive would take up part of the upper drive bay area and the hard drives would mount in the lower bay, leaving only the middle drive bay area unused. A dual-fan radiator takes up the height of two drive bay sections — each three-drive bay is the size of one 120mm fan.
I did have a viable option with the Corsair H80i V2 cooler. Its radiator is sized for a single 120mm fan, but it’s extra-thick for more surface area and uses push-pull fans to overcome the air resistance of the thicker radiator. In performance tests, it comes close to the dual-fan H100i in cooling performance, and I suspected that with a slightly non-standard fan arrangement in the Antec 900 case, I could get the cooling performance I wanted.
The trick was to mount the radiator and its fans in what’s normally the exhaust fan location at the back of the case, but to arrange the fans to pull fresh air in across the radiator instead. The big 120mm top fan would then act as the exhaust fan. By itself, the 200mm fan can move more air than a pair of 120mm fans, and at lower speed and therefore less noise.
Here’s a diagram of how the airflow works in the case:

Modified air flow in an Antec 900 case with an H80i water cooler in the exhaust fan location. Air flow in the back of the case is reversed to pull fresh air through the radiator, and the main exhaust is through the top 200mm fan. For larger graphics cards, the push-pull fan arrangement in the front isn’t necessary, as the card will extend to the drive cage and sit in the airflow. For a smaller card, the push-pull arrangement ensures a strong airflow to the GPU heatsink.
The push-pull fan arrangement in the front of the case ensures ample airflow to the GPU heatsink when the card isn’t terribly large. For larger cards, the “pull” fan can be omitted, since the edge of the card is closer to the empty drive cage “duct” and well into the airflow from the front fan. In either case, the graphics card splits the airflow so that half is directed up into the CPU and RAM area of the motherboard, and the other half is directed down and across the graphics card heatsink and fans. The power supply fan pulls the heat from the lower area of the case out. The normal vent holes in the back of the case and in the expansion slot covers also allow heat in the lower area to exit freely.
July 2017 Update:
The new GTX 1080 video card is longer and wider than the original card. Because of this, the interior “pull” fan has been removed. The end of the card is directly in front of, and extends completely across, the duct formed by the empty drive bay. It still splits the airflow cleanly between the upper and lower sections of the case, so the benefit is effectively the same. With the smaller card, the “pull” fan helped prevent the airflow from diffusing too soon after leaving the duct. The larger card simply sits in the fully ducted airflow. In testing, there has been no need to increase the speed curve of the “push” fan at the front of the case, although that remains an option if it ever becomes necessary.
While the power supply fan is supposed to pull heat down and away from the GPU area, there is still a significant gap between the case side and the side edge of the graphics card. The large 200mm top fan generates quite a bit of upward draft, and pulls a lot of the graphics card’s heat up the side of the case and out the top. This is one of the inherent benefits of a top-vented case.
Results
How does it all come together? Quite well. In fact, it exceeded my expectations. Based on published reviews, I expected to reach 4.5 to 4.6 GHz on the CPU. That turned out to be almost trivially easy to achieve. Continued careful fine-tuning resulted in overclocking the CPU to 5 GHz in testing, and settling on 4.9 GHz for its everyday configuration. In its final configuration, the CPU temperature never exceeds 80C under maximum load, and normally stays below 70C, with only brief bursts near that level, when running Flight Simulator X. Modern games tend to run at 50-60C because they rely more heavily on the GPU.
Speeds and temperatures like that validate the cooling efficiency that comes from combining the H80i V2 water cooler with the case airflow optimization.
I also fine-tuned the graphics card primarily with the popular Unigine “Valley” benchmark and can get a stable 60 FPS at very high settings, even with only a GTX960. That’s partially because I prefer not to run heavy full-screen anti-aliasing, but even so, I still use moderately strong multisample anti-aliasing (MSAA) at 8x or 4x and can still run games on high detail settings. Open Rails will run to ridiculously high frame rates unless I turn on V-Sync to lock it to the monitor’s 60Hz refresh rate. And Flight Simulator X, notorious for frame rate headaches, runs smoothly at the recommended 30 FPS with dense scenery settings everywhere except frame-rate-killing places like LaGuardia Airport / New York City.
Ultimately, this build was so successful that I wound up duplicating it in another build for my wife, who needed a new computer. Hers has a larger 24-inch HD monitor; that, combined with the release of Nvidia’s new 10xx-series graphics cards (which meant GTX960 cards like mine were discontinued), led to substituting a GTX1060 card with 6GB of VRAM, which handles the higher pixel count of her screen just fine.
The cost of this system typically works out to US $1,600 to $1,700. That’s not cheap, but it’s still quite inexpensive for the performance it’s capable of. OEM gaming systems with lower specs tend to cost closer to US $2,000, and custom-built high-end systems from boutique builders often reach $3,000 for similar specs and performance. So in the end, it’s an excellent value and well worth taking the time to build.