
Reviewing the RTX 4090


Nvidia recently unveiled the GeForce RTX 4090, a significant upgrade over its predecessor in nearly every respect. Even so, my enthusiasm falls short. The exorbitant price tag is comparable to that of a second-hand car, and there are other concerns that deserve attention.


RTX 3090 vs RTX 4090

Two years ago, Nvidia launched the GeForce RTX 3090 at a hefty $1,500, billing it as a Titan-class GPU. However, the release of the RTX 3090 Ti just six months ago, in March 2022, undermined that claim: it arrived at $2,000 with only a slight bump in specifications, and neither card offers true Titan-class floating-point compute features. Today, thanks to the crash in GPU demand and competition from AMD, both cards sell for around a thousand dollars or even less. The RTX 4090, which launches at $1,600, offers the same amount of VRAM as the previous-gen RTX 3090, but the similarities end there. Its memory is as fast as the RTX 3090 Ti's, and it packs over 50% more CUDA cores, each clocked over 35% higher. That alone is a substantial upgrade for a modest 7% increase in MSRP, only half the rate of inflation since 2020. Still, despite the impressive specifications, the RTX 4090 costs as much as an entire mid-range gaming PC. So, what additional advantages does it bring to the table?
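As a quick sanity check on that generational math, here is a minimal sketch in Python using only the figures quoted above (the launch MSRPs and the rough core-count and clock deltas; no independent measurements):

```python
# Back-of-the-envelope math for the figures quoted above. All inputs
# come from this article, not from independent testing.

msrp_3090 = 1500   # RTX 3090 launch MSRP (USD)
msrp_4090 = 1600   # RTX 4090 launch MSRP (USD)

price_increase = (msrp_4090 - msrp_3090) / msrp_3090
print(f"MSRP increase: {price_increase:.1%}")   # ~6.7%, the "modest 7%"

# Naive raw-throughput uplift from the quoted spec deltas:
# >50% more CUDA cores, each clocked >35% higher.
core_factor = 1.50
clock_factor = 1.35
print(f"Naive compute uplift: {core_factor * clock_factor - 1:.0%}")
# Roughly double, in line with Nvidia's "up to 2x" marketing claim.
```

That naive roughly-2x figure lines up with Nvidia's own claims, though real workloads rarely scale that cleanly.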


New Architecture

Nvidia asserts that its CUDA cores have been significantly enhanced over the previous Ampere architecture in several respects, yielding performance improvements of up to two times the capabilities of the 30 series. This is mainly attributed to a nearly doubled L1 cache and a significant change to the core layout. The move to TSMC's new N4 process also shrinks the total die area by roughly 150 square millimeters compared to its predecessor. To verify Nvidia's claims, we ran our tests on a newly acquired Socket AM5 bench with a fresh installation of Windows 11 21H2; we avoided 22H2 due to reported issues that Nvidia has yet to resolve. Now, let's turn to the primary subject at hand: gaming performance.


PCIe Gen 5

Looking at the specifications, you may have questions about why Nvidia doesn't support PCI Express Gen 5 and why DisplayPort 2.0 is missing. The answer to both, whether you like it or not, is that Nvidia believes these features are unnecessary on a graphics card. While it's true that a GPU using 16 lanes of PCI Express Gen 5 isn't especially practical at present, a GPU running 8 lanes of Gen 5 would be genuinely useful, particularly for users who need plenty of NVMe storage, which makes it a natural fit for a card in this class. Keep in mind that the faster the PCI Express link, the fewer lanes are needed to meet bandwidth requirements; Nvidia is surely aware of this. As for DisplayPort 2.0, Nvidia's official stance is that DisplayPort 1.4 already supports 8K resolution at 60Hz and that consumer displays won't demand more for a while. There is some merit to this: DisplayPort 1.4 can already reach 120Hz at 4K, and even 268Hz without chroma subsampling by using Display Stream Compression, which is said to be visually lossless. Displays that refresh faster than that are still relatively uncommon, but leaning on compression is arguably not the optimal experience for such premium hardware, and 240Hz 4K displays already exist, with DisplayPort 2.0 support on the way.
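The lane-count argument is easy to put numbers on. The sketch below uses approximate post-encoding PCIe throughput figures and ignores display blanking intervals, so treat it as rough arithmetic rather than a spec quote:

```python
# Rough bandwidth arithmetic behind the arguments above.

# Approximate usable PCIe throughput per lane (GB/s, after encoding overhead).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable PCIe link bandwidth in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen4 x16: {pcie_bandwidth(4, 16):.1f} GB/s")  # what the 4090 ships with
print(f"Gen5 x8:  {pcie_bandwidth(5, 8):.1f} GB/s")   # same bandwidth, 8 lanes freed

# DisplayPort side: uncompressed video bandwidth needed (ignores blanking).
def video_gbit(w: int, h: int, hz: int, bits_per_pixel: int = 24) -> float:
    return w * h * hz * bits_per_pixel / 1e9

print(f"4K 120Hz: {video_gbit(3840, 2160, 120):.1f} Gbit/s")  # vs DP 1.4's ~25.9 usable
print(f"4K 240Hz: {video_gbit(3840, 2160, 240):.1f} Gbit/s")  # needs DSC or DP 2.0
```

The article's point falls straight out of the numbers: a Gen5 x8 link matches today's Gen4 x16 bandwidth while freeing eight lanes for NVMe drives, and uncompressed 4K beyond 120Hz simply doesn't fit in DisplayPort 1.4's payload without compression.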


DisplayPort 2.0

Intel's Arc GPUs have already implemented this feature, and AMD's RDNA 3 has been officially confirmed to support DisplayPort 2.0 since May. It is evident that Nvidia is attempting to cut costs once again, on a GPU priced equivalent to the combined cost of an Xbox Series X, PlayStation 5, Nintendo Switch, and Steam Deck. Alternatively, it is possible these graphics cards sat in a warehouse for longer than we initially thought. It is quite amusing, then, that this might be the first GPU capable of effectively running games at 8K.




Benchmarking from LTT

Most of the time, we can achieve 60 FPS at native 4K. In Forza Horizon 5, minimum frame rates exceed 120 FPS at 4K, a significant improvement over the sub-90 FPS of the 3090 Ti. Assassin's Creed Valhalla also sees a boost of about 50 percent on the 4090, allowing 4K 120 FPS gameplay without any compromises. It's important to note that these improvements are in rasterization performance, which Nvidia may have been trying to downplay by emphasizing RT performance in its press materials. The improvement in Far Cry 6 and other Ubisoft titles is less dramatic, averaging around 30 percent across the board; still a decent gain, but one that pales next to the results above. In Shadow of the Tomb Raider, we start to run into CPU limitations at 4K, leading to more modest improvements.

Interestingly, CS:GO flips the script: the RTX 3090 Ti outperforms the 4090 across multiple runs, despite both GPUs maintaining high core clocks and load. The anomaly disappears at 1440p, suggesting a driver bug at 4K.

Now, back to Cyberpunk. Relative to traditional rasterization, the ray-traced performance improvement is even larger, almost double that of the 3090 Ti, though without DLSS we still can't quite reach 60 FPS at 4K. Enabling ray tracing in the older Shadow of the Tomb Raider, however, the 4090 nearly doubles the 3090 Ti's minimum FPS, so the game now holds a buttery smooth 100 to 120 FPS at 4K even without DLSS. With DLSS in performance mode, Cyberpunk gets close to 100 FPS in minimum frame rates, a smooth experience with G-Sync enabled, as long as you can tolerate a slight decrease in image quality. It's worth noting that the 3090 Ti falls short of 90 FPS in its 5% lows, while the 4090 is fast enough to comfortably drive a 144Hz 4K monitor.
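For readers who want those relative claims as concrete arithmetic, here is a tiny sketch; the FPS values are just the rough bounds quoted above, not fresh measurements:

```python
# Uplift and frame-time arithmetic using the bounds quoted above.
def uplift(old_fps: float, new_fps: float) -> float:
    """Fractional improvement going from old_fps to new_fps."""
    return (new_fps - old_fps) / old_fps

# Forza Horizon 5 4K minimums: sub-90 FPS (3090 Ti) vs 120+ FPS (4090),
# so the real uplift is at least this large.
print(f"Forza 4K minimums: {uplift(90, 120):+.0%} or better")  # >= +33%

# "Comfortably driving a 144Hz 4K monitor" in frame-time terms: every
# frame has to be delivered within this budget.
print(f"144Hz frame budget: {1000 / 144:.2f} ms")              # ~6.94 ms
```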



Productivity Applications

If you're not a dedicated gamer, there's good news: this card is also a productivity powerhouse. In Blender, it achieves over double the samples per minute in both the "Monster" and "Junkshop" scenes, and just under double in the older "Classroom" scene, which translates into significant time savings if you're a 3D artist. Similarly, our 4K DaVinci Resolve export finishes nearly a minute faster, a roughly 25 percent improvement that accumulates over time, especially on projects with extensive graphical effects. The 4090 shows substantial gains across various software, with the most notable improvements in 3ds Max, Maya, Medical, and SolidWorks, where scores nearly double. Creo sees a more modest increase of around 10 percent, but it's evident that this GPU belongs to a different class altogether, not just a typical generational improvement. Given all these gains, Nvidia may have had to price this GPU at $1,600 simply to avoid undercutting the surplus of 3090s left over from the mining boom. And that's not all: another crucial aspect of productivity, one that companies like Intel and Apple have notably focused on, is video encoding.
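That export-time claim is easy to quantify. A minimal sketch, assuming a hypothetical workload of 20 such exports a week (only the per-export figures come from the article):

```python
# A ~25% cut that saves ~1 minute implies a baseline export of ~4 minutes.
baseline_min = 4.0
saved_per_export_min = baseline_min * 0.25        # ~1 minute per export

exports_per_week = 20                             # hypothetical workload
hours_saved_per_year = saved_per_export_min * exports_per_week * 52 / 60
print(f"~{hours_saved_per_year:.0f} hours saved per year")  # ~17 hours
```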



Let's Talk Power

The power target Nvidia has set for the RTX 4090 all but demands the new 16-pin connector introduced with the ATX 3.0 specification, and adapters are included in the box. As the specifications suggest, our RTX 4090 draws nearly as much power under gaming load as the RTX 3090 Ti, though it stays closer to 425 watts than its rated 450 watts. This contrasts sharply with the RTX 3090 and 6950 XT, both of which consistently draw nearly 100 watts less, and it raises questions about the potential power consumption of a future RTX 4090 Ti. Fortunately, the power targets on the 4080-series cards are lower, although we don't have any of those to test today. When we subject the RTX 4090 to a more demanding load in MSI Kombustor, power consumption spikes, pinning both the RTX 4090 and the 3090 Ti at the red line. Interestingly, about halfway through, the 4090 dips down to around 440 watts, which is peculiar. Meanwhile, the RTX 3090 once again doesn't exceed the 350-watt mark, and neither does the 6950 XT. With great power, of course, comes substantial thermal output.
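If you want to log this kind of behaviour on your own card, a minimal polling loop over nvidia-smi (which ships with the NVIDIA driver) looks something like this; the sample count and interval are arbitrary choices:

```python
# Poll nvidia-smi once per second and print power, temperature and SM clock.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=power.draw,temperature.gpu,clocks.sm",
    "--format=csv,noheader,nounits",
]

for _ in range(60):  # sample for roughly one minute
    out = subprocess.run(QUERY, capture_output=True, text=True).stdout
    watts, temp_c, sm_mhz = (field.strip() for field in out.split(","))
    print(f"{watts} W  {temp_c} C  {sm_mhz} MHz")
    time.sleep(1)
```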


Thermals and Clock Speeds

The RTX 4090's thermal performance while gaming tracks its power consumption. The massive three-and-a-half-slot cooler keeps the hot spot temperature around 80 degrees Celsius under gaming loads, placing the card between the RTX 3090 and the 6950 XT and showcasing Nvidia's effective cooler design. Core clocks, on the other hand, reach significantly higher levels than previous-generation cards, holding steady at around 2.6 to 2.7 gigahertz throughout the run.


Under MSI Kombustor, however, the hot spot temperature climbs past the 80-degree mark, alongside the RTX 3090 Ti, with a slight dip in performance halfway through the run. The RTX 3090, by contrast, stays below 70 degrees Celsius. Core clocks also settle considerably lower, at around 2.25 gigahertz on the RTX 4090, while the 3090 Ti throttles down below even its less power-hungry sibling.
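To watch for this kind of clock behaviour programmatically, NVML works too (via the nvidia-ml-py package). One caveat: NVML's standard temperature query reports the core temperature, not the hot spot value cited here, so this is only an approximate view:

```python
# Sample core temperature, SM clock and power draw via NVML.
import time

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

for _ in range(30):
    temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    sm_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM)
    watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # NVML reports milliwatts
    print(f"{temp_c} C  {sm_mhz} MHz  {watts:.0f} W")
    time.sleep(1)

pynvml.nvmlShutdown()
```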


For context, these tests were conducted inside a Corsair 5000D Airflow case equipped with a 360-millimeter radiator at the top and three 120-millimeter intake fans at the front, ensuring ample airflow. Even so, both the RTX 4090 and 3090 Ti pushed internal case air temperatures to a peak of 39 to 40 degrees Celsius, with significantly higher minimum internal temperatures than the RTX 3090, at an ambient temperature of roughly 21 degrees Celsius. Consequently, if your case can only just cope with an RTX 3090, managing an RTX 4090 may prove challenging.




Closing Thoughts

Indeed, the RTX 4090 boasts monumental performance that currently outshines anything else available, though as we saw with the RTX 3090, that supremacy won't last forever. The current situation is peculiar: the RTX 4090 consumes power on par with the RTX 3090 Ti, yet it delivers upgrades over both that card and the standard RTX 3090 in nearly every respect that far outstrip the price increase. For content creators, the RTX 4090 is an easy choice. However, I cannot in good conscience recommend that gamers with deep pockets but little practical need invest in a piece of hardware that cannot fully drive the equally expensive displays that could truly harness its potential, especially when a more affordable and less power-hungry option, perhaps one of Nvidia's own RTX 4080 cards, might suffice.

