PC: To upgrade or not to upgrade? RTX 4080 Super

Where you live greatly affects the price/performance equation for comparisons. If you're not a US resident, chances are you'll have to do the bang-for-buck calculations yourself.
Here in Oz, for instance, the RX 7800 XT was never $100 AUD cheaper than the entry-level RTX 4070s, let alone $100 USD.
Then there are use cases that might apply to you but are rarely evaluated in most reviews, such as AMD cards generally being worse for VR, mostly due to headset manufacturer support (Quest Link/Air Link is awful, the DPVR E4 and Varjo Aero are incompatible with AMD cards, etc.), or Nvidia's issues with Linux drivers (proprietary vs. open source, although I understand that's improved a lot).
I'm looking to upgrade myself, and since VR is a major use I'll probably shift back to Nvidia. My last two cards were a 1660 Ti and a 6600 XT, which is the sort of bang for buck I prefer, but the lower range just won't cut it for some of the VR games and applications I'm trying to run (using a Quest 3, so 4128x2208 native across the panels, and I prefer to run at 1.2x supersampling to allow for the lens distortion correction; rough math below). I'm thinking of grabbing one of the run-out 4080s, which have hit $1499 AUD (previously retailed at $1899). 4080 Supers are priced at $1870 AUD. I'll wait a week to see if prices drop as they have with the other Super launches though.
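(For anyone wondering what that supersampling actually costs, here's a back-of-envelope sketch. I'm assuming the 1.2x multiplier applies per axis, the way the Oculus render-scale setting works; SteamVR's percentage scales total pixels instead, so treat the numbers as illustrative.)

    # Rough PCVR render-load estimate for a Quest 3 at 1.2x supersampling.
    # Assumes the multiplier applies per axis (Oculus-style render scale).
    NATIVE_W, NATIVE_H = 4128, 2208   # both eyes combined
    SS = 1.2                          # supersampling factor
    REFRESH_HZ = 120

    render_w = round(NATIVE_W * SS)   # ~4954
    render_h = round(NATIVE_H * SS)   # ~2650
    mp_per_frame = render_w * render_h / 1e6
    gp_per_second = render_w * render_h * REFRESH_HZ / 1e9

    print(f"{render_w}x{render_h}: {mp_per_frame:.1f} MP/frame, "
          f"{gp_per_second:.2f} GP/s at {REFRESH_HZ} Hz")
    # ~13.1 MP/frame and ~1.58 GP/s, roughly 1.6x the pixel load of
    # native 4K at 120 Hz, which is why lower-range cards won't cut it.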
 
Where you live greatly affects the price/performance equation for comparisons.
The purchase date itself affects the price/performance ratio. Price floors change all the time, sometimes drastically from month to month. That's why I like that both Passmark and TechPowerUp show the price next to the score calculations.
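(If anyone wants to roll their own version of that, the calculation is trivial. A minimal sketch; the scores and prices here are made-up placeholders, not real Passmark/TechPowerUp numbers:)

    # Minimal bang-for-buck ranking: benchmark score per dollar.
    # Scores and prices below are placeholders for illustration only.
    cards = {
        "RTX 4070":   {"score": 100, "price": 855},
        "RX 7800 XT": {"score":  95, "price": 799},
        "RTX 4080":   {"score": 145, "price": 1499},
    }

    ranked = sorted(cards.items(),
                    key=lambda kv: kv[1]["score"] / kv[1]["price"],
                    reverse=True)
    for name, c in ranked:
        print(f"{name}: {1000 * c['score'] / c['price']:.1f} points per $1000")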
Went for the 4080 Super TUF (non-OC), paired with a 7950X3D and 64GB of 5200MHz RAM. Hopefully I squeeze good juice out of it for the next 5-7 years at 1440p.
Beautiful. The baseline variant of a card is almost always the most sensible purchase. Yeah, you should be set. That's a setup almost any PC gamer alive envies. I'm guessing you secured the 4080 Super on preorder.
 
A few things:
5000 series cards are believed to be coming out sometime in 2025. You're probably looking at a year or more of waiting for those.

While people are right that nobody can guarantee it will still be good in 5 years, the likelihood is that it will be more than good for 5 years. I mean, the 1080 is 7-8 years old at this point and it still runs modern games fine; you just can't crank the settings.

The crux of your question I can't really provide guidance on without knowing more. What kind of games are you playing? What resolution? Are you trying to hit a high frame rate? Do you care about ray tracing?
 
The purchase date itself affects the price/performance ratio. Price floors change all the time, sometimes drastically from month to month. That's why I like that both Passmark and TechPowerUp show the price next to the score calculations.

Beautiful. The baseline variant of a card is almost always the most sensible purchase. Yeah, you should be set. That's a setup almost any PC gamer alive envies. I'm guessing you secured the 4080 Super on preorder.
Next-day delivery; in this case I moved it to Friday as I'm away tomorrow. Do you think a sag bracket would be of use, or should the 3-slot design hold up on its own? Anyway, that's about it for big PC spending for a while now, I hope.
 
The purchase date itself affects the price/performance ratio. Price floors change all the time, sometimes drastically from month to month. That's why I like that both Passmark and TechPowerUp show the price next to the score calculations.

The US price/performance calculations and resulting conclusions are basically always useless here though, because we get very different prices across ranges for both the RRP and actual street prices. Prices here tend to remain fairly stable until there's a shortage, official price drop and/or new product launch.
So when the RX 7800 XT was getting reviewed as the clear choice for upper midrange value across the majority of tech channels, it just wasn't the case here as the entry point for the RX 7800 XT and RTX 4070 was $799 AUD and $855 AUD respectively (although weirdly the price on Asus 4070s went up $100 AUD just before the 7800 XT released and has only just come back down). Due to audience numbers, even Hardware Unboxed tends to use US pricing for their calculations (both RRP and street price) and will usually just mention when local prices and conclusions are significantly different.
Makes a big difference in product evaluations and recommendations, since reviews and comparisons are largely based on price/performance and there's so much regional variation. The performance numbers are still valid of course.
 
Went for the 4080 Super TUF (non-OC), paired with a 7950X3D and 64GB of 5200MHz RAM. Hopefully I squeeze good juice out of it for the next 5-7 years at 1440p.
Lol that is overkill for 1440P at the moment. You'll be able to play anything at 4K flawlessly.

Congrats on the purchase
 
The US price/performance calculations and resulting conclusions are basically always useless here though, because we get very different prices across ranges for both the RRP and actual street prices.
Naturally. I couldn't remember off the top of my head what Nameless's location was.
 
Yes. I hate having to admit it, because NVIDIA are so often dicks, but it's true. The 4090 still enjoys a gaping advantage over its nearest competitor (RX 7900 XTX) in raw rasterization, around 20%+ at 4K, and the architecture more generally is just superior, as can be observed in metrics like energy efficiency.

What matters most to any consumer is what you get for your dollar, though. But all those features I've long dismissed as inconsequential gimmicks are becoming increasingly important as we advance into a graphical landscape of diminishing gains from rasterization. And NVIDIA is further ahead than ever before. AMD finally put out their own frame generation, and as with other features, it's an ersatz version of what NVIDIA pioneered. AMD is hustling, but they're always chasing the leader.
  • DLSS > FSR (this includes frame generation technologies)
  • RTX Ray-Tracing > AMD Ray-Tracing
  • Gsync > Freesync
  • RTX Super Resolution > Radeon Super Resolution
  • ULMB 2 (Ultra Low Motion Blur) > FMF (Fluid Motion Frames)
  • NVENC > AMF (AMD's encoder)
  • RTX Voice > AMD Audio Noise Suppression
  • NVIDIA Reflex > nothing

And there's a whole bunch of stuff in its infancy, a lot of it associated with AI, like RTX Remix, that NVIDIA is already working to mature, where AMD will be forced to scramble to throw out a hastily assembled Frankenstein once that feature really gets going and starts generating buzz at some point in the next several years. That's in addition to the fact that NVIDIA's implementations of technologies both companies support tend to be superior, such as DirectX 12 Ultimate features (e.g. DirectStorage performance). These are also continuing to become more relevant.

Driver stability/support has mostly been equalized as far as I can tell (though it's tough to tell), and software interfaces are mostly subjective (GeForce Experience vs. Adrenalin).

Ultimately, I think that AMD is viable, and often offers a much better value at various purchase points depending on the date. But I don't know where you're getting this notion that AMD is ahead of NVIDIA in terms of GPU engineering.

AMD is getting much better, and the gap is way less than what people think in terms of software. I personally wouldn't touch a 4080 Super because it isn't that good of a card and leans heavily on NVIDIA software, which isn't that much different anymore. FSR 3 and AMD frame gen are legit. The gap between the two is less than 12 months in my opinion. Ultimately, I would wait, and I would only pick NVIDIA if you are using AI or applications that require NVIDIA support. In my opinion the software gap is going to close fast as FSR gets ported everywhere, including consoles, while DLSS is stuck on NVIDIA cards. Wait, or if you find something cheap, go with that. Or wait until the next gen comes out and buy a cheap last-gen card. Graphics cards are generally outpacing the software: you keep seeing the same few games being used as benchmark tools because not a lot stretches them.
 
AMD is getting much better, and the gap is way less than what people think in terms of software. I personally wouldn't touch a 4080 Super because it isn't that good of a card and leans heavily on NVIDIA software, which isn't that much different anymore. FSR 3 and AMD frame gen are legit. The gap between the two is less than 12 months in my opinion. Ultimately, I would wait, and I would only pick NVIDIA if you are using AI or applications that require NVIDIA support. In my opinion the software gap is going to close fast as FSR gets ported everywhere, including consoles, while DLSS is stuck on NVIDIA cards. Wait, or if you find something cheap, go with that. Or wait until the next gen comes out and buy a cheap last-gen card. Graphics cards are generally outpacing the software: you keep seeing the same few games being used as benchmark tools because not a lot stretches them.
Look over my posts from the past three years. I've mocked DLSS as some sort of consolation prize for a gaping rasterization deficit at relatively equal price points here in the US, specifically within the context of the RTX 2000 series vs. RX 5000 series head-to-head. I talked about how few titles were actually supported and, for the first few years, how almost none of those enjoyed DLSS 2+, back when the original DLSS was a shitshow not worth turning on at all. I applauded AMD on those grounds when it launched FSR with such a large number of titles supported. At the time, they were outhustling NVIDIA. That hasn't been true since.

Because it's not 2019 (or even 2022) anymore. We're in 2024. It's up to 378 games supported by DLSS 2+ now. That's a sizeable enough body of games for the technology to meaningfully influence a rational buyer's decision. AMD hasn't kept pace. FSR 2+ support is limited to just 140 games. So it's gone from 29 to 140 since October 2022 when I made that post [+111]. To reiterate, DLSS 2+ is at 378 games; it's gone from 165 to 378 during that span [+213]. Ergo, while AMD came out of the gate fast, NVIDIA has been adding support at roughly double AMD's rate since late 2022.
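(The "roughly double" figure falls straight out of those counts:)

    # Sanity check on the adoption-rate claim, using the game counts
    # quoted above (Oct 2022 -> early 2024).
    fsr2_added  = 140 - 29    # +111 games
    dlss2_added = 378 - 165   # +213 games
    print(dlss2_added / fsr2_added)   # ~1.92, i.e. roughly double the rate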

And as I've also highlighted more recently, FSR isn't keeping pace in terms of quality. In fact, I thought XeSS looked better in TechPowerUp's recent comparisons. This was really apparent in the Assassin's Creed Mirage showdown. It's game to game, however, and FSR doesn't always look worse.

Meanwhile, ULMB 2 is magic. I discussed this in a thread recently with @KaNesDeath, as it matters to competitive MP gamers. BenQ's DyAc held a supreme advantage over all rival monitor manufacturers' technologies for years. Then NVIDIA swoops in and undermines them at the GPU level. Any monitor manufacturer simply has to accommodate it. AMD is hapless. They have nothing.

Same thing goes for (live) stream encoding. Remember when the RX 5000 series launched and blew all the reviewers' minds? They actually care about that stuff more than gaming because it's part of their everyday workload. AMD was outperforming NVIDIA. Not anymore. AMD's hardware handles the standard codecs fine, so it doesn't lose in the context of general media consumption, but if you're looking to leverage your GPU as a streamer, whether on Twitch/YouTube or even privately on Discord, NVIDIA has leapt ahead again with NVENC.

It's stuff like that. I've championed AMD, but I've had to watch as NVIDIA has become increasingly comfortable in the driver's seat over the past two years.
 
And as I've also highlighted more recently, FSR isn't keeping pace in terms of quality. In fact, I thought XeSS looked better in TechPowerUp's recent comparisons. This was really apparent in the Assassin's Creed Mirage showdown. It's game to game, however, and FSR doesn't always look worse.

FSR 3 is no joke. When you buy an NVIDIA card, you are basically buying into a software license that only holds for that card. FSR is compatible with any graphics card, so going forward, all upgrades work on old AMD and NVIDIA cards. I'll grant that FSR is behind, but most of the comparisons are based on older versions of FSR, not FSR 3.0. Like I said earlier, I think FSR is 12 months behind NVIDIA when it comes to software. Most people who buy NVIDIA cards are basically paying for software, because very few use the card for AI or video/graphic editing. The difference between the brands for gaming only is large when you get to the 4090, and you can buy two 7900 XTXs for what it goes for.
 
FSR 3 is no joke. When you buy an NVIDIA card, you are basically buying into a software license that only holds for that card. FSR is compatible with any graphics card, so going forward, all upgrades work on old AMD and NVIDIA cards. I'll grant that FSR is behind, but most of the comparisons are based on older versions of FSR, not FSR 3.0. Like I said earlier, I think FSR is 12 months behind NVIDIA when it comes to software. Most people who buy NVIDIA cards are basically paying for software, because very few use the card for AI or video/graphic editing. The difference between the brands for gaming only is large when you get to the 4090, and you can buy two 7900 XTXs for what it goes for.
Are you not reading what I'm writing? I have pointed out AMD often provides better values at various price points. And I pull for AMD like anyone else due to their open-source philosophy. It's venerable. But that isn't what this is about. This is about industry leadership.

Where did I say that FSR 3 is a joke? I explicitly pointed out the comparisons that matter are FSR 2 vs. DLSS 2. The reason is because those are the two technologies that are actually matured. They're out there. They're in games. They matter.

We can discuss DLSS 3.5 and FSR 3 two years down the road when they are no longer a tech demo. Know how many games DLSS 3.5 runs on right now? 3 games. Know how many FSR 3 runs on? 11. So who cares?
 
Are you not reading what I'm writing? I have pointed out AMD often provides better values at various price points. And I pull for AMD like anyone else due to their open-source philosophy. It's venerable. But that isn't what this is about. This is about industry leadership.

Where did I say that FSR 3 is a joke? I explicitly pointed out the comparisons that matter are FSR 2 vs. DLSS 2. The reason is because those are the two technologies that are actually matured. They're out there. They're in games. They matter.

We can discuss DLSS 3.5 and FSR 3 two years down the road when they are no longer a tech demo. Know how many games DLSS 3.5 runs on right now? 3 games. Know how many FSR 3 runs on? 11. So who cares?

I am not meaning to offend or anything. I just felt you were downplaying AMD pretty hard when the only major difference is at the 4090 level, and that card is priced on a completely different tier.
 
I am not meaning to offend or anything. I just felt you were downplaying AMD pretty hard when the only major difference is at the 4090 level, and that card is priced on a completely different tier.
It's not about value. He was asking why I thought NVIDIA was so far ahead in terms of GPU engineering technology. I think they're at least a year ahead, both in terms of hardware and software engineering.
 
ULMB 2 (Ultra Low Motion Blur) > FMF (Fluid Motion Frames)
I'm confused about this comparison. FMF is the frame generation (or rather they call it AFMF), it's a different thing to ULMB.
 
I'm confused about this comparison. FMF is the frame generation (or rather they call it AFMF), it's a different thing to ULMB.
It might not be the best analogy, because I'm not certain they incorporate any actual backlight strobing, but FMF supposedly includes their image smoothing software.

They've rebranded everything, including the anti-latency stuff, under the "HYPR-RX" label.
So "AMD Anti-Lag" is probably a partial analogue to NVIDIA Reflex, and FMF appears to subsume their image smoothing techniques, but it might just be more accurate to jot down "nothing" for both in comparison to Reflex and ULMB.

I haven't perused any PDF literature.
 
It might not be the best analogy, because I'm not certain they incorporate any actual backlight strobing, but FMF supposedly includes their image smoothing software.

They've rebranded everything, including the anti-latency stuff, under the "HYPR-RX" label.
So "AMD Anti-Lag" is probably a partial analogue to NVIDIA Reflex, and FMF appears to subsume their image smoothing techniques, but it might just be more accurate to jot down "nothing" for both in comparison to Reflex and ULMB.

I haven't perused any PDF literature.
Hmm I think anti-lag+ is/was the competitor to Reflex but it got shelved due to the fuck up from like 3 months ago with how they implemented it.
 
Hmm I think anti-lag+ is/was the competitor to Reflex but it got shelved due to the fuck up from like 3 months ago with how they implemented it.
Which is why I didn't even think to name it up there off the top of my head. Perhaps not enough games supported it. Not that Reflex is generating a lot of demand. I have a Reflex-capable mouse, but NVIDIA has barely added any new mice to the list in the past few years despite the growth of all these other technologies.

When NVIDIA debuted their Frame Generation technology it shipped as part of DLSS 3, and they've since folded it into the broader DLSS branding. For now, at least in label, they seem to be keeping ULMB separate from this; they're not even developing a separate label for frame generation, just tucking it under the DLSS banner. However, in both cases, generating additional frames, even if they are merely black frames as is the case with traditional backlight strobing, is the essence of low motion blur technologies. But I'm not sure that is what ULMB (or DyAc) is doing, as they are the only two that don't sacrifice any brightness. So I'm not sure where FMF fits in there. I just know that nobody talks about it, while ULMB somehow rocketed past DyAc as the preeminent low motion blur technology.
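(To make the trade-off concrete, here's a toy model of why black-frame/strobing approaches normally cost brightness. It's a sketch of the generic technique, not of how ULMB 2 or DyAc are actually implemented:)

    # Toy model of black frame insertion / backlight strobing: the image
    # is only lit for a fraction (duty cycle) of each refresh interval.
    # Shorter duty = less sample-and-hold blur, but proportionally dimmer.
    def perceived_nits(panel_nits: float, duty_cycle: float) -> float:
        return panel_nits * duty_cycle

    for duty in (1.0, 0.5, 0.25):
        print(f"duty {duty:.2f}: {perceived_nits(400, duty):.0f} nits perceived")
    # That brightness penalty is exactly what ULMB 2 and DyAc claim to
    # avoid, which is what makes them stand out.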
 
The 4080s at $1499 were immediately out of stock, so I ended up grabbing a 4070 Ti Super (hasn't arrived yet). Picked one up for $1300 AUD, which is the best I've seen.
Not as good value as the 4070 Super (could get one for $970 AUD), but the 4070 Super's 12GB VRAM and 192-bit bus appear to make 4K a stretch already.
My main use is PCVR with a Quest 3 (4128x2208 @ 120Hz), and I prefer to supersample to account for lens distortion. A 7900 XT ($1149 AUD) was another possibility, but I had so many issues with VR and AMD that I didn't think the improved raster was worth the hassle.
Still seems like stupid money for an upper-midrange card, but the 4070 Ti Super should more than double the performance of my poor 6600 XT at least.
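(The bus-width point is easy to quantify. A quick sketch, assuming the 21 Gbps effective memory speed that I believe both the 4070 Super and 4070 Ti Super use per their public specs:)

    # Back-of-envelope memory bandwidth from bus width and data rate.
    # The 21 Gbps effective GDDR6X figure is an assumption on my part.
    def bandwidth_gb_s(bus_bits: int, gbps: float) -> float:
        return bus_bits / 8 * gbps   # bytes per second, in GB/s

    print(f"192-bit: {bandwidth_gb_s(192, 21):.0f} GB/s")  # ~504 GB/s
    print(f"256-bit: {bandwidth_gb_s(256, 21):.0f} GB/s")  # ~672 GB/s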
 
With GPU and CPU chips, there's something known as the "silicon lottery": different chips will be stable at higher clock speeds than others. Case in point: I won the silicon lottery with my 9700K. Been running it at 4600MHz for 5 years with no issues...


I was able to get 4.5GHz out of my trusty old i5-4690K and felt pretty good about it. Running off just a Hyper 212 EVO, that thing would stay between 55-65 degrees while gaming. Lots of people crapped out and were only able to hit 4.2-4.3GHz on their chips, but for the most part people could hit 4.4-4.6GHz off their 4690K chips. Some were lucky and managed to get theirs to 4.7-5.0GHz, but they had to use custom water loops just to keep them cool enough.

I might have been able to hit 4600MHz off that chip too, but I would have had to bump up the vcore and it would have run a little hotter than I'd like. In fact, I think I might have even tried that, but I couldn't keep it stable after stress tests without pushing the voltage past 1.3V, and the temps were getting off the charts. I probably would have needed better cooling to make it happen, but I was able to hit 4.5GHz at 1.235V on that chip, which was more than enough to keep it stable and still keep the temperatures in check.
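(Those two operating points show why the voltage bump hurts so much: dynamic power scales roughly with voltage squared times frequency. A rough sketch using that standard scaling rule:)

    # Relative dynamic power, P ~ C * V^2 * f, for the two operating
    # points mentioned above (4.5 GHz @ 1.235 V vs 4.6 GHz @ ~1.30 V).
    def rel_power(v: float, f: float, v0: float = 1.235, f0: float = 4.5) -> float:
        return (v / v0) ** 2 * (f / f0)

    print(f"{rel_power(1.30, 4.6):.2f}x")
    # ~1.13x the heat for a ~2% clock bump, which is why the temps
    # were "getting off the charts".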

That CPU is still going strong, although I'm not gaming on that machine anymore. It was a beast for its time though! I don't even bother with manual overclocking anymore. On the newer Intel models it just takes care of itself: let the chip draw as much wattage as it wants, and as long as your system keeps the temperatures in check, it will boost itself. If not, you just tweak the boost multiplier, and that's really all there is to it these days. Setting up an overclock is not the headache it used to be. Even for GPUs there's software that will run a bunch of tests and determine the optimal settings for you. I guess this stuff has come a long way.
 
The 4080s at $1499 were immediately out of stock, so I ended up grabbing a 4070 Ti Super (hasn't arrived yet). Picked one up for $1300 AUD, which is the best I've seen.
Not as good value as the 4070 Super (could get one for $970 AUD), but the 4070 Super's 12GB VRAM and 192-bit bus appear to make 4K a stretch already.
My main use is PCVR with a Quest 3 (4128x2208 @ 120Hz), and I prefer to supersample to account for lens distortion. A 7900 XT ($1149 AUD) was another possibility, but I had so many issues with VR and AMD that I didn't think the improved raster was worth the hassle.
Still seems like stupid money for an upper-midrange card, but the 4070 Ti Super should more than double the performance of my poor 6600 XT at least.
Lol, I had a poor 6600 XT as well but just upgraded to a 7800 XT since it was on sale for almost the same price in CAD. It's crazy how expensive half-decent GPUs are these days (at least outside the US).
 
