The Wonders of Technology: A Tale of Two GPUs

Tormaid

GT 210 and GTX 970

This week, I completed the rather drawn-out process—given my lack of free time during the semester—of converting my old media server into a gaming rig for my girlfriend. This involved: replacing the stock cooler with an old AIO water loop, overclocking the 1st-generation Core i3 530 to a more respectable 3.97 GHz, upping the memory to 8 GB, and (finally) replacing the GPU. It's this last step that I want to focus on, because the sheer difference in performance between the two cards floored me.

The 1st-generation Core platform in this machine didn't give me usable on-board graphics, so in order to actually use the system, and to take advantage of GPU-assisted DXVA video decoding, I had installed an EVGA-branded Nvidia GT 210. Just to be clear, this was very much a budget card at the time of purchase. I'm not trying to compare these cards purely on performance; they were designed for different tasks. I merely want to share my reaction to discovering just how much faster the reference GTX 970 I upgraded to is. I think the spec sheets speak for themselves:

GPU specs compared

The GTX 970 isn't just "a lot faster" than the GT 210; it's on the order of one hundred times faster if you go by core count alone! If you were to compare the day-to-day performance of my first computer, a Compaq Windows 95 machine, with the MacBook I'm using right now, it still wouldn't come close to the night-and-day difference between these two components, which were released a mere five years apart. Yes, they sit at opposite ends of the performance ladder relative to their contemporaries, but even if you compare apples to apples and look at performance gains within the same market segment over time, it becomes clear that GPU performance has scaled orders of magnitude faster than consumer CPU performance. Indeed, the equally old, similarly bottom-of-the-pile Core i3 in this system performs quite adequately, despite my fears that it wouldn't hold up in demanding games like Titanfall.
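
To put a rough number on that core-count claim, here's the napkin math, assuming the commonly published CUDA core counts (16 for the GT 210, 1664 for the GTX 970); it ignores clock speed and architecture entirely, so treat it as a sanity check rather than a benchmark:

```python
# Napkin math only: core counts ignore clock speed and architecture.
# Figures are the commonly published CUDA core counts for each card.
gt210_cores = 16      # GeForce GT 210 (GT218)
gtx970_cores = 1664   # GeForce GTX 970 (GM204)

ratio = gtx970_cores / gt210_cores
print(f"The GTX 970 has roughly {ratio:.0f}x the CUDA cores of the GT 210")  # ~104x
```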

Titanfall gameplay

This got me thinking: where exactly is Moore's law with regard to CPUs, if generational performance increases are typically only around 10%? Then it hit me: that's only consumer hardware. On the professional side, Intel has been churning out Xeons with ever more cores, massive L3 caches, and huge memory bandwidth for years. Somewhere along the line, though, it was decided that dual- and quad-core CPUs were "enough," and that's what we've been stuck with ever since the Core 2 era. The more reading I do, the more it seems like this has to do primarily with software. It's very difficult to take advantage of large numbers of CPU threads, and even more difficult to make your program scale easily, so nobody bothers, except in the most extreme use cases—most of which are professional anyway.
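
To make "taking advantage of threads" a bit more concrete, here's a minimal, hypothetical sketch (not from my actual setup) of the best case: an embarrassingly parallel workload where each chunk is independent, so a process pool can simply fan the work out across every core. Most real software has shared state and ordering constraints that make it far messier than this, which is exactly why so few programs scale this way.

```python
# Best-case parallelism: each chunk is independent, so a process pool
# can spread the work across all available cores with no coordination.
from multiprocessing import Pool, cpu_count

def process_chunk(chunk_id: int) -> int:
    # Stand-in for real per-chunk work (filtering a frame, encoding a block, ...)
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    chunks = range(64)
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_chunk, chunks)
    print(f"Processed {len(results)} chunks across {cpu_count()} cores")
```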

It seems to me that, since we've hit a thermal ceiling on clock speeds, the way forward is better multi-threaded software. Ideally, that software would take advantage of the massive amount of computing power in today's GPUs as well. As for my own needs, I've been pleasantly surprised by the level of maturity Vapoursynth has achieved in the last couple of years. All of my tests indicate that it scales very well. I hope, though, that more plugin developers will embrace GPU-accelerated filters, since we lowly commoners cannot possibly afford 18-core Xeons.
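
For what it's worth, as I understand it, part of why Vapoursynth scales so well is that its core parallelizes frame requests across a pool of worker threads on its own; the script just describes the filter chain. Here's a minimal sketch of what that looks like with the current Python API; the ffms2 source plugin and the input path are assumptions on my part, not something from my actual setup:

```python
# Minimal VapourSynth script: the core schedules frame requests across
# worker threads by itself; num_threads just caps how many it may use.
# Assumes the ffms2 source plugin is installed; the path is a placeholder.
import vapoursynth as vs

core = vs.core
core.num_threads = 8  # defaults to the machine's logical core count

clip = core.ffms2.Source(source='input.mkv')              # hypothetical input file
clip = core.resize.Bicubic(clip, width=1280, height=720)  # any simple filter chain
clip.set_output()
```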