AMD Radeon R9 380 to Radeon RX 560 upgrade/downgrade remarks (WHL #73)

Groan, computer hardware again, please spare us… :mad:

Well, in this day and age of mismanagement, “chip shortage” and even war in Europe, you get what you get and you don’t get upset. I scored a graphics card on eBay (aftermath of my Covid quarantine) and I need to get rid of the old one as quickly as possible. As insane as GPU prices are nowadays, I don’t think a card that was released in June 2015 (close to fucking seven years ago) and that still has a market value equal to or higher than what I paid for it in 2018 will ever go up in price again. So let’s compare them and sell it off. This is what we do today.

Don’t get me wrong – the 380 is still 100% suitable for my “gaming” needs, because basically every graphics chip with less than four rebadging cycles is. We’re no longer in the late ’90s or early 2000s, where a two-year-old computer component – GPU, CPU, memory, networking card, hard disk, power supply, basically anything – was out of date and just garbage. This 7-year-old card, despite being mid-range at the time, still produces enough FPS for my needs and has DX 12_0 feature level support just like the new one. But we’re in driver, support and feature decline here. All Radeon 200 and 300 series cards including the Fury models are now in legacy driver status, meaning there has not been a driver update since June 2021 (what an idiotic time to end driver support…), and there’s no Windows 11 driver as a result. While that doesn’t pose ANY issue since “drivers” are just bloatware nowadays, it still indicates the cards are getting old.

Starting from a Radeon 380, my perceived upgrade path options are as follows:
560: -20% performance
470: +20%
570: +25%
480: +35%
5500: +40%
580: +50%
5500XT: +55%

(very rough numbers, but I wouldn’t like to include the 590 and 5600 for size and/or price reasons – I do not need more performance, I just need a more modern card that is roughly comparable)

Well, here are the contenders: a Sapphire Radeon R9 380 4GB OC-but-not-really-the-OC-badged-version, and an Asus Strix RX 560 “O4G-Gaming”, the O once again standing for “overclocked”, in contrast to “4G-Gaming” which is, well, also overclocked :roll:

The Sapphire runs 985 MHz GPU and 1450 MHz RAM against stock clocks of 970 MHz and 1425 MHz, so that is a 15/25 MHz or 1.5%/1.8% advantage compared to a reference model. The OC variant adds another 25 MHz GPU clock (total 1010 MHz or +4.1%) and exactly no additional RAM clock. Rather silly OC advertising if you ask me, but whatever.

And this is the RX 560, which runs a 1326 MHz GPU clock in “Gaming mode”, or 1336 MHz in “OC mode”. It is beyond me why they didn’t go for 1337 MHz in 1337 mode; after all, they added a single software-addressable RGB LED since that clearly gives an FPS advantage :roll: . Anyway, RAM always sits at 1750 MHz, which is identical to the reference board. Stock GPU clock would be 1175 MHz with up to 1275 MHz max boost – identical to what Asus advertises for the non-OC model, which however rarely reaches it. So for my OC model that is a 4.0% to 4.8% GPU clock advantage best-case; for the non-OC model it is 0% OC in “Gaming mode”, or 1285 MHz / 0.8% in “OC Mode”, because, let’s face it, they also need to OC the non-OC model since every idiot manufacturer in the world does. Again, silly numbers – 5% best-case technically is overclocked, but it’s clearly not earth-shattering on one of the smallest models available.
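All these percentages are easy to mangle, so here’s a quick back-of-the-envelope check – a minimal Python sketch using the clocks quoted above (the function name is mine, obviously):

# Advertised "OC" margins over the reference clocks, in percent.
def oc_margin(card_mhz, reference_mhz):
    return (card_mhz / reference_mhz - 1) * 100

print(f"380 GPU:    {oc_margin(985, 970):+.1f} %")   # +1.5 %
print(f"380 RAM:    {oc_margin(1450, 1425):+.1f} %") # +1.8 %
print(f"380 OC GPU: {oc_margin(1010, 970):+.1f} %")  # +4.1 %
print(f"560 Gaming: {oc_margin(1326, 1275):+.1f} %") # +4.0 % vs. max boost
print(f"560 OC:     {oc_margin(1336, 1275):+.1f} %") # +4.8 % vs. max boost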

All ranting aside, here are the two side by side:

The 560 is 4 to 5 cm shorter, depending on whether you count the cooler overlapping the PCB. It also stands significantly taller – check the slot brackets for comparison: the 560 has two protruding heat pipes, while the 380 uses the entire width for additional PCB space and larger fans. Both are dual-slot cards.

Note also the number of decoupling capacitors on the back of the chip – the 380’s Tonga die is almost 3x the physical size, since it is manufactured in 28 nm instead of 14 nm (both marketing figures) and has ~66% more transistors, buying e.g. 75% more compute cores and a memory interface twice as wide. All of that translates to 788 g vs 490 g total mass for these cards – larger heat sinks and more copper instead of aluminium.

In terms of connectors, well, there hasn’t been much progress since AMD started selling Eyefinity cards as the HD 5000 series in late 2009. Since then, every card should theoretically be able to drive 6 displays, but most manufacturers still opt for a variety of DP, HDMI and DVI connectors; some Asian vendors apparently still include physical VGA ports as well. VGA can also be used via the DVI-I port (no longer present on the 560!), or, as I would do it, via a cheap adapter on DisplayPort. Just ditch that DVI and HDMI crap, DP is the way to go.

A small detail that is insignificant for the 560 card, but has big implications for more modern PCIe 4.0 cards like the 5500XT: AMD now only uses 8 PCIe lanes for low(er)-end cards. As you can see, the gold fingers for the other 8 lanes are there for physical stability (hope some vendors make true x8 variants!), but are not connected to anything – there are suddenly no vias behind them, they are just fake contacts.

This gets important and notable for the newer cards. While the 560 is still a PCIe 3.0 card, and most somewhat modern boards do at least offer a x16 3.0 slot for it, the successor uses PCIe 4.0. While 4.0 x8 technically offers bandwidth identical to 3.0 x16, there’s a problem: those are lower-end cards, typically aimed at people running “older” systems. Older systems however do not have PCIe 4.0, as AMD only introduced it in mid 2019 (Ryzen 3000) and Intel in early 2021 (Rocket Lake, Gen 11), so exactly one year ago. So while still compatible, such a card can only use a 3.0 x8 link in systems older than these, meaning it would be capable of transferring 16 GB/s over PCIe but can only do 8 GB/s due to board/CPU restrictions. That still sounds like a lot, but it is terribly slow compared to the 112 GB/s (560) or even 186 GB/s (380) these cards get to their onboard RAM. And exactly those lower-end cards are of course sold with less RAM than their high-performance brothers and sisters, so they run into memory shortages much quicker than those with PCIe 4.0 boards and x16-wide GPU links (5600 and up). Reviewers found that the 5500XT suffers basically zero disadvantage when everything fits inside the onboard RAM, but for tailored, worst-case workloads, a 5500XT 4GB on PCIe 3.0 can be up to 50% slower than an otherwise identical 5500XT 8GB on PCIe 4.0. Overfull VRAM plus a slow PCIe link can be absolutely crippling.
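For reference, all those bandwidth figures fall out of simple arithmetic; a minimal sketch (per-direction payload rates after 128b/130b line coding, further protocol overhead ignored; names are mine):

# PCIe payload bandwidth per lane: 3.0 ≈ 0.985 GB/s (8 GT/s),
# 4.0 ≈ 1.969 GB/s (16 GT/s), both with 128b/130b encoding.
GB_PER_LANE = {3: 0.985, 4: 1.969}

def pcie_bw(gen, lanes):
    return GB_PER_LANE[gen] * lanes

print(f"PCIe 3.0 x8:  {pcie_bw(3, 8):.1f} GB/s")    # ~7.9,  the "8 GB/s" case
print(f"PCIe 3.0 x16: {pcie_bw(3, 16):.1f} GB/s")   # ~15.8, the "16 GB/s" case
print(f"PCIe 4.0 x8:  {pcie_bw(4, 8):.1f} GB/s")    # ~15.8, same as 3.0 x16

# VRAM bandwidth: effective data rate (GT/s) x bus width (bytes)
print(f"RX 560 VRAM:  {7.0 * (128 / 8):.1f} GB/s")  # 112.0
print(f"R9 380 VRAM:  {5.8 * (256 / 8):.1f} GB/s")  # 185.6 at the 1450 MHz OC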

Oh, and while we’re at considering future upgrade paths: I specifically bought this card for its H.265 hardware encoder support in software like Handbrake, more on that at the very end. While the 300 series did have some hardware encoder support, it never made it into Handbrake. The 400 and 500 series are supported (600 was only an OEM rebrand), and so is the 5000 series. 6000 series? Well, AMD decided the low-end models (6400/6500 XT) would be paired with their new CPUs/APUs. Those got the hardware de-/encoders in the CPU – so they fucking dropped them from the graphics card. Well, no 6000 series for me then, thank you very much. And the 5000 series is postponed until I have a PCIe 4.0 capable platform, which the next logical steps for me, socket 2011-3 with v4 processors (Broadwell, Gen 5) and socket 2066 (Skylake-X, Gen 7 or 9), do not offer. I’ll probably upgrade along the 570-580-590 path in case I need more GPU power in the near future.

Now, benchmark data. Known system with the Fujitsu board with NVMe (#P42), Xeon E5-2697v3, 64 GB Reg ECC RAM, the Mellanox ConnectX-3 (WHL #42F2) and not much else. Power supply is a Seasonic Focus Gold 450W with, ha, you guessed it, 80 Plus Gold rating.
I’m not in the mood for fancy graphics, so here it goes:

Data with the R9 380 always first, RX 560 second.

Windows 10 idle: 54W vs 47W, -7W or -13% improvement.

Prime95 as a scaling-factor test for the power supply – v30.7b9, small FFTs. Starting at 210W vs. 207W, steady-state (yeah, it does move) after ten minutes 226W vs 211W. So differences in CPU usage plus conversion efficiency on main board and power supply can easily make up something like -7% total power consumption. Not a great start.

Furmark, 3440×1440 fullscreen. The 380 does 894 MHz / 1450 MHz as intended, the 560 immediately downclocks to 1247-1267 MHz / 1750 MHz after the initial 1326 MHz “boost”. Thanks for that, but the boost clock is just marketing BS then.
252W -> 257W after ten minutes vs. 162W -> 142W. So while the old 380 card uses more power over time due to fans running faster and leakage increasing with temperature, the 560 actually uses less because it cannot sustain its boost perpetually. Great.
FPS min/max/avg are 33/34/33 vs 20/23/20, so for just 55% of the total system power consumption, frames also drop to about 60%. That’s not bad, given that a third of that power draw is static and there likely is a bit of CPU load in Furmark as well. Oh, temperatures: 80°C vs. 76°C, neither running its fans at 100%.
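The percentages above and below all come from the same trivial math; here is a sketch with the Furmark numbers plugged in (note these are whole-system watts, not GPU watts, so fps per watt is only indicative):

# Relative power, relative frame rate, and fps per (system) watt.
def compare(p_380, p_560, fps_380, fps_560):
    print(f"power: {p_560 / p_380:.0%} of the 380 system,",
          f"fps: {fps_560 / fps_380:.0%},",
          f"fps/W: {fps_380 / p_380:.3f} vs {fps_560 / p_560:.3f}")

compare(p_380=257, p_560=142, fps_380=33, fps_560=20)
# power: 55% of the 380 system, fps: 61%, fps/W: 0.128 vs 0.141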

Path of Exile, just standing in the hideout (as used in #P28, but there likely was some game optimization as well in addition to the 2011v3 platform change), 3440×1440, DX11 game version, no chat, no ingame performance monitor (only the Windows game bar thing):
290W at the start, rising to 308W after 10 minutes of idling, vs. a flat 155W from start to end. It takes so long to get there that the initial boost isn’t visible here. -50% power usage is mightily impressive given that FPS only drop from 55-56 to 37-38 on the 560, so -33% (again, ~50W of that is static idle draw already). CPU usage also dropped from 6-8% to 3-6%, which is strange, but maybe the newer architecture has some advantages. RAM usage was 11% in both tests, VRAM usage at 36% as well.

And then there’s Shadow Tactics which I currently play quite a bit. Idling at the start of the first mission, just zoomed out at 3440×1440 “60 Hz V-Sync unlocked” (which is clearly not the case), High GFX settings.
The 380 uses 311W, increasing to 317W after ten minutes; the 560 just takes 162W -> 167W. Again a -48% drop in total power consumption. FPS go down from 61-62 to 44, so -29%.
CPU usage also decreases here, from 8-12% to 5-8%, and strangely the 4GB 380 reports 55% VRAM usage while the 4GB 560 only says 35% – compression at work? System RAM usage was 8% in both cases.

And finally Handbrake, the initial reason for thinking about an upgrade. I have to state that the x265 software encoder isn’t all that great for high-core-count machines, since adding a single file to the conversion queue only utilizes the 2697v3 (14C28T) to around 1/4 to 1/3. Someone on the interwebs said it is limited to 6 threads, which doesn’t quite add up either, but is in the ballpark. Adding a second file does not slow down the queue, but increases CPU usage and power draw significantly. Adding a third one starts showing a couple percent slowdown for the others, so 3 is probably the sweet spot and 4, at around 95% average CPU usage, is maybe too much.
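One way to exploit that sweet spot without babysitting the queue is to run three HandBrakeCLI instances in parallel – a hypothetical Python sketch, not my actual setup (folder, file pattern and preset name are placeholders; double-check the preset names against HandBrake’s preset list):

import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def encode(src):
    # One x265 software encode per call; HandBrakeCLI does the heavy lifting.
    out = src.with_suffix(".x265.mkv")
    return subprocess.run(["HandBrakeCLI", "-i", str(src), "-o", str(out),
                           "--preset", "H.265 MKV 1080p30"]).returncode

files = sorted(Path("to_convert").glob("*.mkv"))
with ThreadPoolExecutor(max_workers=3) as pool:  # 3 = the sweet spot from above
    print(list(pool.map(encode, files)))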
Still, when running a single file conversion with the h265/x265 1080p preset, the CPU encodes at 71 fps avg and the entire system (with the 380) draws 211W. When replaced with the 560 and using the AMD H.265 VCE encoder instead, it’s suddenly 143 fps avg – at only 122W. +101% performance at -42% power consumption, or 3.5x fps per Watt… that is what hardware acceleration can do.
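For the curious: in CLI terms, switching to the hardware path is just an encoder swap. Roughly like this (again a sketch; “vce_h265” is the encoder name HandBrake uses for AMD hardware H.265, if memory serves, and the quality value is a placeholder):

import subprocess
# Same source, but encoded by the 560's VCE block instead of the CPU:
subprocess.run(["HandBrakeCLI", "-i", "in.mkv", "-o", "out.mkv",
                "-e", "vce_h265",   # hardware H.265 instead of "-e x265"
                "-q", "24"])        # constant-quality target (placeholder)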

Technically a downgrade for games but an upgrade for the stuff that got hardware acceleration – I’m very satisfied with the results. Great card for a very competitive price (given how fucked up the current market really is), and it simplifies my future upgrade options. 470/480 cards are now out of the question since I will not downgrade, the relevant 6000 series cards are unexpectedly crossed off the list for their missing encoder features, and the 5500 4GB requires a much more modern overall platform. RX 570 and 580 (4GB each) are my next sniping goals, or maybe the 8GB versions plus the 5500XT 8GB if for some reason prices drop or the budget increases. Pretty sure the RX 560 will serve me well for the rest of this year, and maybe 2023 is the year of “buy whatever you need, it’s in stock for a reasonable price”. Maybe it’s not.


postnut

I found it weird for you to have downgraded in terms of performance, but you must not play any new titles (and there are very few worth playing).
I keep an eye on new GPU prices quite often, and for most developed countries the cost of a new GPU has now dropped from “crazy” to “bad”, and is now “somewhat reasonable”.
Still, those who don’t need the performance but want modern goodies like adaptive sync, integer scaling, modern encoders/decoders, and modern HDMI and DP connections will have to get a somewhat expensive card, like a 6600 or 3060 Ti/3070, or a somewhat garbage card (3050) – and the 6500XT doesn’t even have the codec engines…
That said, despite oil prices, GPU prices are still dropping by the day. Maybe soon, with the release of new cards and the now-inevitable Ethereum PoS update, those of us not chasing the greatest in performance will be able to scoop up a modern RDNA or Turing/Ampere card.