Monday | August 10, 2020
AMD Radeon R9 NANO Review

Back when AMD started talking in detail about their Fury GPUs, almost all of the information was about the Fury X, their liquid-cooled high-end part. Some mention was made of the Fury, a more “traditional” style of GPU, but Fury X was the focus… until later in the presentation, when AMD teased the Nano: a product based on that same Fury X GPU, but air cooled and in a form factor suited to ITX builds. Since then little has been said of the product, but today the NDA lifts and we can talk about performance in our AMD Radeon R9 NANO Review.

AMD Radeon R9 NANO Review – The Fury NANO

[Images: AMD Radeon R9 NANO, front and back]

At just six inches long this is one compact high-end card. Barely extending past the PCIe connector, the R9 NANO uses a black PCB and a dual-slot cooler. Underneath the aluminium shroud and single fan sits a dual vapor chamber cooler with multiple heatpipes and aluminium fins, while a front plate adds rigidity to the card. AMD rates the cooler at approximately 42dBA under normal use.


Flipping the card around we find the standard Fury layout of 3x DisplayPort and a single HDMI 1.4 connector. Using a DisplayPort hub we can power up to six displays and, for the average user, 4K monitors/TVs are supported, as is audio over the DisplayPort/HDMI cable. The opposite end of the card is open ended, and this is where we find the single 8-pin power connector which feeds the card's 175W power requirement.

[Images: R9 NANO display outputs and ITX size comparison]

All of the usual features are present, such as support for DirectX 12, FreeSync, OpenCL, DirectCompute and the like, with AMD also noting two key features on their Fury-based cards. The first is Virtual Super Resolution, which benefits those on 1920×1080 (for example) screens. The card renders the game at a higher resolution (such as 4K) and then scales the image down to smooth out jagged edges and enhance image quality. The card also supports Frame Rate Target Control (FRTC), which allows us to specify a maximum framerate and have the card hit that rather than its maximum, reducing power use and heat generated. This is ideal for games such as DOTA 2 where 300fps doesn't really benefit the player, so we could for example limit to 90fps and save power, reduce heat and lower noise without really impacting the gaming experience.
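The effect of a framerate cap can be sketched as a render loop that sleeps away its spare frame time. This is purely an illustration (the function name and numbers are ours); FRTC itself enforces the cap in the driver, so it is the GPU that sits idle, which is where the power, heat and noise savings come from.

```python
import time

def capped_loop(render_frame, target_fps, frames):
    """Run a render loop capped at target_fps.

    Illustrative sketch only: FRTC does this in the driver, holding
    frames back so the GPU idles instead of the CPU sleeping.
    """
    frame_budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # stand-in for the actual frame render
        spare = frame_budget - (time.perf_counter() - start)
        if spare > 0:
            time.sleep(spare)  # idle time = power/heat/noise saved
```

A game that could render 300fps spends roughly two-thirds of each frame budget idle under a 90fps cap, which is the saving the feature is after.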

As far as the key specs go, this is a Fiji GPU, the same silicon used on the Fury X and not the cut-down part used on the Fury. It is a DirectX 12 compatible, 28nm GPU with 4096 stream processors, 64 compute units and 256 texture units. ROPs number 64 and the memory interface is listed as 4096-bit. With a core speed of up to 1000MHz and 4GB of memory at 500MHz, that gives us a bandwidth of 512GB/s. This is of course AMD's new HBM memory, which allows them to significantly enhance bandwidth, reduce power use and minimise component size.
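The quoted bandwidth follows directly from the listed specs: a 4096-bit interface at 500MHz with two transfers per clock (double data rate). A quick sanity check of the arithmetic:

```python
def memory_bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = clock_mhz * 1e6 * transfers_per_clock
    return bytes_per_transfer * transfers_per_sec / 1e9

# R9 NANO: 4096-bit HBM interface at 500MHz, double data rate
print(memory_bandwidth_gbs(4096, 500))  # 512.0
```

The same formula shows why HBM can run so slowly and still win: a 256-bit GDDR5 card needs a far higher memory clock to approach the same figure.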


AMD Radeon R9 NANO Review – Performance

Testing was performed on an Intel Core i7-5960X running on an X99 board with 16GB of DDR4 and a Samsung 850 Pro SSD. Windows 10 was the OS, and the OS and all games were fully patched.

All testing was performed on a BenQ BL3201 4K display.


NVIDIA Driver: 355.82
AMD Driver: 15.x Beta

NOTE: Our GTX 970 testing uses the 970 ITX, the closest comparison for the NANO's form factor. The Fury OC card is Sapphire's model; you can see that review here.


[Charts: 3DMark, temperatures and power draw, plus 1440p and 4K results in Total War: Attila, Battlefield Hardline, Project CARS, DOTA 2, GTA V and The Witcher 3]

AMD Radeon R9 NANO Review – Conclusion

Let's start this conclusion with something you are going to hear a lot about regarding the NANO… noise. Not the noise you would normally expect, though: despite this card's compact cooler and AMD's patchy history with reference cooler noise, the R9 NANO actually has a pretty quiet load noise profile, no worse than any other high-end card. It doesn't spin its fan down at idle, which is a shame, but overall it is quiet. The noise we have to deal with comes from the other components on the card. Coil whine, as it is commonly called, is very much an issue on the R9 NANO. At above-average framerates it's almost a rattle, and as the FPS escalates it turns into a near whistle. It's the worst case of this phenomenon we have heard to date, and it came close to costing the R9 NANO any sort of award or recommendation.

In the end we have given it one, and there are a number of reasons for this. Firstly, AMD are really pushing the limits of what this form factor is capable of in terms of performance. Pushing limits in technology almost always results in compromise, and in the case of the NANO it's a design which suffers from whine; for other products the compromise is heat, fan noise or a tweaked feature set. That's a pretty weak reason to award the card, though, and something really should have been done to minimise the noise. Our second reason is that the whine can be reduced using Frame Rate Target Control. In a compact build you want to balance power, noise, heat and framerates as best you can, and certainly much more actively than in a standard desktop. Enabling FRTC allows us to cap the card at a particular framerate and minimise heat, fan speed and power use, and the silver lining is that the lower framerate also reduces the coil noise. Finally, and again it is a barely acceptable “excuse”, we often game with headphones or with decent volume through speakers, and in that scenario the whine is not noticeable. Sit the R9 NANO on an open bench and game in silence or at low volume and it will drive you mad; sit it in a case and fire up a game with some immersive sound levels and you are probably OK… just. For the record, at the desktop and when watching media the card is whine free.

The only other significant negative with this product is the branding… it wouldn't be an AMD launch if they didn't try to overcomplicate things. The R9 branding has been in use for nearly two years now, and the NANO has less in common with the cards released back then than with the new range. This is a Fiji/Fury GPU… own it, AMD. This is the Fury Nano.

Anyway… as far as overall design goes, there is great stuff here: a compact card, nice matt PCB, compact cooler and metal shroud. A backplate would have been nice to have, as would some sort of grille on the back end of the card next to the power socket, but overall no complaints.

As for performance, as noted earlier this is a low-noise card, power use for its class is excellent and thermals are very impressive considering the spec. Framerates were also very good: it compares well with the standard Fury at both 1440p and 4K, and against the GTX 970, its direct competition in the ITX form factor, the R9 NANO provides higher framerates across a wide range of games.

Summary: A product which pushes boundaries… sometimes too far. It isn't cheap, but for consumers willing to work around its issues it brings new performance levels to small form factor systems.

Performance Award

Reviewed Item: AMD Radeon R9 NANO

About Author

Stuart Davidson


  1. IvanV

So… This is essentially the full Fury X (minus 50MHz), only presumably without boost and with stricter power controls? And thanks to that, it doesn't need water cooling and uses 100W less than the stronger card, but still puts out strong performance.

I would be interested in seeing one thing: there are scenarios where the Fury X opens a big lead over the Nano (Witcher 3 at 4K, for example). I would like to see the performance of the Nano at +20% power set in CCC.

  2. |2A|N

I’ve said it elsewhere and I’ll say it here too. You can only rename and sell the same card so many times before people start catching on. This has been going on for years with the Radeon brand. Does anyone remember the ATI Radeon 9800, 9800 Pro, 9800 XT debacle? Same card, but one would cost you $500 because it had XT branded onto it.

  3. Coil whine has been a problem across NVIDIA and AMD cards from various manufacturers for a few years and has only gotten worse… In fact a recent 970 had such a howl of a whine that people 30ft away thought there was a siren going off outside, it was so horribly loud; even I thought something was very wrong and was expecting smoke to start billowing out. However, one thing I’ve come to learn having dealt with a range of GPUs from a range of manufacturers, using either NVIDIA or AMD, is that the coil whine issue seems to have become commonplace and it’s the luck of the draw whether a card exhibits it at any level.

    One thing to consider, though, is that the coil whine appears to be temporary on most if not all cards. Having dealt with the issue, how often you put the card through its cycles will determine how long it takes for the whine to either completely dissipate or drop to nearly inaudible levels. The 970, for example, that was a bloody siren took about 2 weeks to basically stop doing it. The R9 290X I had took about 2 weeks, the R9 285 I had took about 2 weeks as well, and the 980 Ti took nearly 3 weeks. The list goes on: usually things calm down within about 2 weeks, give or take, depending on usage.

    Considering that, and the fact that we KNOW coil whine has been affecting nearly every product on the market, with people reporting the nasty noise from all over the world, it should now be acknowledged as commonplace and almost even expected, unfortunately (it would be nice to see either manufacturer sort the issue out). I don’t think it’s reasonable to give it a “failed” conclusion, so to speak, without either seeing what happens in a few weeks (which, for a review, isn’t likely possible) or swapping it for another sample, as I know there have been a number of Fury Nanos that don’t produce a single bit of noise out of the box either.

    In response to |2A|N, this isn’t a rebrand… this is entirely new silicon in every which way… the fact that it uses HBM should indicate that MASSIVE changes had to be made at fundamental levels in order to make this work. While sure, plenty of components are similar to the previous generation GPUs, just like every GPU on the market, it is the furthest thing from rebranding. Then again I’m not sure what you’re referring to… just the “versions” of the same thing, essentially? The 9700 and 9800 days were among some of the most wonderful, when NVIDIA had been really hitting hard with extreme pricing… ATI brought a product to market that not only was trumping anything NVIDIA had, but usually forcing prices down through aggressive competition and reasonable pricing, and through that managed to bring about various binned models. Most people jumped on the non-Pros… and some managed to soft-mod them to Pros or even XTs… sometimes it worked, sometimes it didn’t… still, that’s not “rebranding”, that’s just model differences. Rebranding is when you take a previous generation product… let’s say ATI’s 8000 series, which came before the 9700s and 9800s… if they had made the 9700 and 9800 out of the 8000s, not only would ATI have been in deep crap, but those chips being basically the EXACT same GPU would have been considered a rebranding… much like the HD 7750 being dumped into the R7 250… they are in fact the EXACT same card… so much so that the HD 7750s are even recognised by the newer drivers as an R7 200 series card, and apps/programs see them as such. The mobile GPUs listed as HD 8xxx ARE rebrands… they are the HD 7xxx with the model number changed to 8xxx. The nice thing is that there are indeed newer chips in the range, but they’re mixed in… and the Rx 3xx models are rebranded too, though they do include a few noticeable changes that make, for example, an R9 390 perform on par with an R9 290X.

    What makes things interesting is that NVIDIA’s 9xx cards cannot run async compute with graphics and compute simultaneously… that’s why they are in hot water right now: when it comes to DirectX 12 and the Mantle/Vulkan APIs, if game developers don’t put in an exclusion to DISABLE async compute, NVIDIA’s GPUs get hit with a pretty big disadvantage. Disabling it gives them back most of the speed they should have. Async compute should significantly improve performance, and on AMD GPUs it does because AMD has implemented it properly.

    • 00blahblahblah00

      Coil whine never actually “goes away” it just moves to a different part of the sound spectrum because the frequency changed for whatever reason and our human ears can no longer hear it. Those little copper coils on PCBs are designed to not be audible to humans when operating independently but when you combine certain parts that were never tested together by the individual manufacturers like certain PSUs, motherboards and GPUs which all have those little coily bastards on them, it is possible that together they will produce a sort of harmony whose frequency is audible to us humanoids even if all the parts used are expensive and high end. Why the frequency changes over time, I have no idea, but that noise technically doesn’t go away.

      Also, I think what |2A|N was talking about isn’t re-branding from one generation to another, he was referring to the 2 Fury X/Fiji cards in the current 300 series lineup. Essentially, they are both 4,096 shader, 4 GB HBM Fiji GPUs but with different style coolers. I don’t think he realizes, though, that the regular Fury has 3,584 shaders and is basically a binned version of the Fury X/Nano and not a simple re-brand.

      • Indeed, the noise doesn’t technically disappear… it just gradually changes over time. You can run into this on almost any electronic device, and the amount of power, the fluctuation in voltages and amperage, and the frequencies all the components run at will determine what kind of noise it makes and at what frequency. My best guess is that what happens over time with frequent use of the card is basically a burn-in: the components get hot, expand and contract several times, and each time those micro changes occur they shift the frequency of the sound, either lowering or more likely raising it. This also explains why one of my customers’ dogs disliked being in the room where their computer sat: the machine initially had bad coil whine that appeared to disappear, but for the dog it was still incredibly intolerable. Either way, coil whine has always been present, but for us humanoids it has only gotten really bad in the last few years, and people are treating it as something “new”… Considering the complexity of the chips and components, the raw increase in power, and the demand for greater voltage and power regulation in order to save power, all of this puts significantly more stress/variables on the card that end up causing these things to occur. So while it’s completely reasonable to complain about it, it’s still unreasonable to consider it a fault at the start without giving it time to potentially resolve itself, given that EVERY midrange-to-high-end card on the market now has a high chance of producing the same problem and also self-resolving over time.

        • 00blahblahblah00

          lol, poor dog. The owner was probably wondering why the dog barked every time he moved the mouse around.

  4. 00blahblahblah00

    [AMD GPU owner here (R9 290)]… $650? C’mon son. I know this is a tiny little card that can fit in your pocket and uses less power, produces less heat and pumps out more frames than a 970, but the launch price is more than double the 970’s going by the games tested above. In the resolutions tested you are getting, what, a 20-25% increase in average FPS? In some games the 970 comes within less than 10% of it or flat out beats it. C’mon son. It seems like the “roles” have suddenly reversed in that AMD’s new architecture is seemingly more efficient when it comes to thermals and power, their cards are now quieter, not to mention that nifty high bandwidth memory gizmo that looks like it is stapled to the GPU core. But you are paying a huge premium for those features vs an NVIDIA card which is technically a class below it but only slightly slower and much, much cheaper. When it comes to currently available DX11 games, it doesn’t look like HBM is worth this huge price premium over something like a 970 or even a Hawaii card. Even at 4K, HBM doesn’t seem to be giving Fiji much of an advantage over a standard GDDR5 GPU such as the 390X or 970; that advantage is probably the result of it simply having more shaders than everyone else. My guess is that if the Fury cards had the same shader count and core clock as the 390X, we wouldn’t see much of a difference. In fact, in some cases the 390X might actually be better since it has double the frame buffer. Do the math: 4,096 - 2,816 = a difference of 1,280 shaders, or roughly 30%. I didn’t bust out the calculator for every game above, but I don’t think the Nano beats the 390X in any game by an average FPS of more than 30%. Assuming HBM really is a huge leap forward, shouldn’t it beat its little sister in at least a few games by more than 30%?

    • Considering its intended purpose, its performance figures and its ultra-small form factor, and that AMD is clearly targeting a niche within a niche market, I can only conclude that this GPU/card is mostly just an example, an engineering dream made real to show off the benefits of HBM; call it a practical demo/showcase which consumers are allowed to purchase. I don’t expect AMD to sell many, and I don’t expect that they expect to sell many either. Considering the costs involved, while it would be AMAZING to see this card in the $450 range… let’s not kid ourselves. The card performs only a smidge worse than the full Fury X and draws fairly little power as well. You generally always have to pay a premium for something small, and for them to attempt to sell this GPU at $450 or even $550 would be royally kicking themselves in many regards. So while I too am a bit taken aback by the price, as I had a different figure in mind at the time, re-evaluating the situation and the reasoning behind the product, it does indeed make sense for it to cost what the Fury X does.

      To be honest, I was looking at a Fury Nano from the start when they announced them; the size/power/heat and all that seemed like an amazing deal, and I thought jamming 3 of those together into a single machine would allow me to power through a 3x 4K setup fairly reasonably. But that kinda turned into a pipe dream with the advent of the 4GB limit on VRAM, since we know 4GB alone is basically a minimum for any high-end 4K gaming, even though the 285s, which are 2GB, run 4K very well considering.

      Another point to ponder or consider is that AMD and NVIDIA have been trying their best to get their hands on 20nm but it just isn’t happening… and it sounds like everything will skip straight to 16/14nm, which leads me to believe one of many potential scenarios.

      AMD likely had a 28nm plan for a potential HBM-based graphics product drawn up for quite a long time, and they also had 20nm plans drawn up that were intended to be Fiji. But due to the 20nm issues they had to back down and take a bit of a hit by dumping resources into a 28nm part, their newer concepts simply not working on anything larger than 20nm, scaling things back and changing things up. And that’s not discounting that we’re all sure NVIDIA was banking on 20nm as well, with prepped materials, only to be forced to make arrangements to build the 9xx out of something that was intended for the smaller process.

      Really, I think both GPU manufacturers are sitting on some serious gold. Even though the 9xx showed some significant improvements over the previous generation, especially in power consumption for graphics tasks (their power consumption climbs and even surpasses AMD’s competing products under heavy compute tasks, which almost no one appears to be aware of), we’ve otherwise seen some fairly unimpressive performance improvements for the last number of years, and we’re due to be wowed. I’m feeling the next product launches from both on 16/14nm will bring about some killer stuff, and hopefully by then AMD will have HBM 2.0 sorted with 8GB or even 16GB VRAM support with ease, not to mention 4K @ 60-120fps being a reality. NVIDIA’s Pascal should have HBM… but we don’t know what’s happening there just yet, as AMD is likely to still have priority on the tech, just like they did with GDDR5 and GDDR4.

      • 00blahblahblah00

        Yeah, it really is more of a proof of concept than something they intended to make money on. It is kind of like the Bugatti Veyron: something Volkswagen decided to make because they could, but which actually cost them money on each one sold. In AMD’s case with the Fiji cards it isn’t that extreme; in fact they are probably making money on each card/chip sold, but not much, despite the massive price-to-performance ratio compared to something like a 970 or 290X. The real winners here are future mid-range customers who will get many of the features found in Fiji on some future mid-range card (something that might be called the 560X, for example) a few years down the line. But, as you pointed out, that won’t happen until TSMC starts pumping out 14nm FinFET desktop parts or AMD and NVIDIA find another silicon fabrication company who can do it better and cheaper at the same scale. There are rumors that Samsung might be making NVIDIA GPUs one day, which is a result, in large part, of TSMC’s inability to stay on a consistent die shrink schedule (basically Moore’s law).

        On a side note, when they do get down to under 14nm for desktop GPU parts, it would be cool to see NVIDIA work with Intel to put something like a GTX 960 into an i5 and sell it for like $350. I’d buy that over an i7 all day. AMD will probably have something like that in their APU line in 2 or 3 years, with 8 CPU cores to boot. We know they can fit 8 cores and the equivalent of an R9 260X on a single die thanks to the Xbox One and PS4, and they are doing it on 28nm. Give them transistors spaced half the distance apart and they should easily be able to fit the equivalent of an R9 285 onto an APU die along with an octa-core that can clock way higher than the current console chips.
