Tuesday | February 21, 2017
Popular Review Links:

AMD Radeon R9 NANO Review

AMD Radeon R9 NANO Review – The Fury NANO

AMD Radeon R9 NANO Review – Performance

AMD Driver: 15.x Beta

AMD Radeon R9 NANO Review – Conclusion

Reviewed Item: AMD Radeon R9 NANO
  1. IvanV

    I would be interested in seeing one thing: there are scenarios where the Fury X opens a big lead over the Nano (Witcher 3 at 4K, for example). I would like to see the Nano’s performance with the power limit set to +20% in CCC.

  2. |2A|N

    I’ve said it elsewhere and I’ll say it here too. You can only rename and sell the same card so many times before people start catching on. This has been going on for years with the Radeon brand. Does anyone remember the ATI Radeon 9800, 9800 Pro, 9800 XT debacle? Same card, but one would cost you $500 because it had “XT” branded on it.

  3. Coil whine has been a problem across Nvidia and AMD cards from various manufacturers for a few years now, and it has only gotten worse. In fact, a recent 970 had such a howl of a whine that people 30 ft away thought a siren was going off outside; even I thought something was very wrong and expected smoke to start billowing out. One thing I’ve learned from dealing with a range of GPUs from a range of manufacturers, using either Nvidia or AMD chips, is that coil whine has become commonplace, and it’s the luck of the draw whether, and how badly, a given card exhibits it.

    One thing to consider, though, is that the coil whine appears to be temporary on most if not all cards. In my experience, how often you put the card through load cycles determines how long it takes for the whine to either dissipate completely or drop to nearly inaudible levels. The 970 that sounded like a siren took about two weeks to basically stop. The R9 290X I had took about two weeks, as did my R9 285; the 980 Ti took nearly three weeks. In general, things calm down within about two weeks, give or take, depending on usage.

    Given that, and the fact that coil whine affects nearly every product on the market, with people worldwide reporting the noise, it should by now be acknowledged as commonplace and almost expected (though it would be nice to see either manufacturer sort the issue out). I don’t think it’s reasonable to mark it as a “fail” in the conclusion without either waiting a few weeks to see what happens (which, for a review, isn’t really possible) or swapping the card for another, since plenty of Fury Nanos produce no noise at all out of the box.

    In response to |2A|N: this isn’t a rebrand. This is entirely new silicon in every way; the fact that it uses HBM alone indicates that massive changes had to be made at a fundamental level to make it work. Sure, plenty of components are similar to the previous generation, as with every GPU on the market, but it is the furthest thing from a rebrand.

    Then again, I’m not sure what you’re referring to; just the different versions of essentially the same chip? The 9700 and 9800 days were some of the most wonderful, back when Nvidia was hitting hard with extreme pricing. ATI brought a product to market that not only trumped anything Nvidia had, but usually forced prices down through aggressive competition and reasonable pricing, and along the way produced various binned models. Most people jumped on the non-Pros, and some managed to soft-mod them to Pros or even XTs; sometimes it worked, sometimes it didn’t. That isn’t rebranding, that’s just model differentiation.

    Rebranding is when you take a previous-generation product and reissue it as new. If ATI had built the 9700 and 9800 out of the 8000-series chips that preceded them, those basically identical GPUs would have counted as rebrands (and ATI would have been in deep trouble). The HD 7750 being dumped into the R7 250 is a real example: they are the exact same card, so much so that newer drivers recognize an HD 7750 as an R7 200-series card and applications see it as such. The mobile HD 8xxx GPUs are rebrands too, HD 7xxx parts with the model number bumped to 8xxx, though there are genuinely new chips mixed into that range. And the Rx 3xx models are rebrands as well, but with a few noticeable changes; an R9 390, for example, performs on par with an R9 290X.

    What makes things interesting is that Nvidia’s 9xx series cannot run graphics and compute work simultaneously (async compute); that’s why they are in hot water right now. With DirectX 12 and the Mantle/Vulkan APIs, unless game developers explicitly disable async compute, Nvidia’s GPUs take a pretty big performance hit; disabling it gives them back most of the speed they should have. Async compute should significantly improve performance, and on AMD GPUs it does, because AMD has implemented it properly.
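A toy model makes the async-compute point concrete. This is not vendor or driver code; the per-frame millisecond costs below are invented numbers, and the model simply assumes a GPU with working async compute can overlap independent graphics and compute work, while one without it must run them back to back:

```python
# Toy frame-time model: why async compute matters (illustrative numbers only).

def frame_time_serial(graphics_ms: float, compute_ms: float) -> float:
    """GPU that serializes graphics and compute passes (no async compute)."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms: float, compute_ms: float,
                     overlap: float = 1.0) -> float:
    """GPU that overlaps the two workloads; overlap=1.0 means the shorter
    pass hides entirely under the longer one."""
    return graphics_ms + compute_ms - min(graphics_ms, compute_ms) * overlap

graphics, compute = 12.0, 4.0   # hypothetical per-frame costs in ms
print(frame_time_serial(graphics, compute))  # 16.0 ms
print(frame_time_async(graphics, compute))   # 12.0 ms, a 25% shorter frame
```

Disabling async compute in a game corresponds to falling back from the second function to the first, which is the performance hit described above.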

    • 00blahblahblah00

      Coil whine never actually “goes away”; it just moves to a different part of the sound spectrum because the frequency changes for whatever reason, and our human ears can no longer hear it. Those little copper coils on PCBs are designed not to be audible to humans when operating independently, but when you combine parts that were never tested together by their individual manufacturers (certain PSUs, motherboards and GPUs all have those little coily bastards on them), it’s possible that together they produce a harmonic whose frequency is audible to us humanoids, even if every part is expensive and high end. Why the frequency changes over time, I have no idea, but technically the noise doesn’t go away.

      Also, I think what |2A|N was talking about isn’t re-branding from one generation to another, he was referring to the 2 Fury X/Fiji cards in the current 300 series lineup. Essentially, they are both 4,096 shader, 4 GB HBM Fiji GPUs but with different style coolers. I don’t think he realizes, though, that the regular Fury has 3,584 shaders and is basically a binned version of the Fury X/Nano and not a simple re-brand.

      • Indeed, the noise doesn’t technically disappear; it just gradually changes over time. You can run into this with almost any electronic device, and the amount of power, the fluctuation in voltages and amperage, and the frequencies the components run at all determine what kind of noise it makes and at what pitch. My best guess is that frequent use of the card amounts to a burn-in: the components get hot, expand and contract several times, and each of those micro-changes shifts the frequency of the sound, lowering it or, more likely, raising it. That would also explain why one of my customers’ dogs disliked being in the room where the computer sat: the machine initially had bad coil whine that appeared to disappear, but for the dog it was still incredibly intolerable.

        Either way, coil whine has always been present; for us humanoids it has only become really bad in the last few years, so people treat it as something “new”. The complexity of the chips and components, the raw increase in power, and the demand for finer voltage and power regulation to save energy all put significantly more stress and variability on the card, which is what causes this. So while it’s completely reasonable to complain about coil whine, it’s unreasonable to count it as a fault out of the box without giving it time to resolve itself, given that every mid-range to high-end card on the market now has a high chance of producing the same problem, and of self-resolving over time.

        • 00blahblahblah00

          lol, poor dog. The owner was probably wondering why the dog barked every time he moved the mouse around.

  4. 00blahblahblah00

    [AMD GPU owner here (R9 290)]… $650? C’mon, son. I know this is a tiny little card that can fit in your pocket, uses less power, produces less heat and pumps out more frames than a 970, but the launch price is more than double the 970’s, looking at the games tested above. At the resolutions tested you’re getting, what, a 20–25% increase in average FPS? In some games the 970 comes within less than 10% of it or flat out beats it. C’mon, son. It seems the roles have suddenly reversed: AMD’s new architecture is now the more efficient one in thermals and power, their cards are quieter, and there’s that nifty high-bandwidth memory gizmo that looks like it’s stapled to the GPU core. But you are paying a huge premium for those features versus an Nvidia card that is technically a class below, only slightly slower, and much, much cheaper. In currently available DX11 games, HBM doesn’t look worth this huge price premium over something like a 970 or even a Hawaii card. Even at 4K, HBM doesn’t seem to give Fiji much of an advantage over a standard GDDR5 GPU such as the 390X or 970; the advantage it does have is probably the result of simply having more shaders than everyone else. My guess is that if the Fury cards had the same shader count and core clock as the 390X, we wouldn’t see much of a difference. In fact, in some cases the 390X might actually be better, since it has double the frame buffer. Do the math: 4,096 − 2,816 = 1,280 extra shaders, which means the Nano has roughly 45% more shaders than the 390X. I didn’t bust out the calculator for every game above, but I don’t think the Nano beats the 390X in any game by an average FPS of anywhere near that. Assuming HBM really is a huge leap forward, shouldn’t it beat its little sister in at least a few games by more than its shader advantage alone?
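Since the percentage depends on which card’s shader count you divide by, the arithmetic is worth pinning down (the counts are AMD’s published specs for Fiji and the 390X):

```python
# Published stream-processor counts: Fiji (Nano/Fury X) vs the R9 390X.
nano_shaders = 4096
r9_390x_shaders = 2816

diff = nano_shaders - r9_390x_shaders     # 1280 extra shaders on the Nano
deficit_390x = diff / nano_shaders        # 390X relative to Nano: ~31% fewer
advantage_nano = diff / r9_390x_shaders   # Nano relative to 390X: ~45% more

print(diff)                               # 1280
print(round(100 * deficit_390x, 2))       # 31.25
print(round(100 * advantage_nano, 2))     # 45.45
```

For the question “does the Nano ever beat the 390X by its full shader advantage”, the relevant figure is the ~45% one (relative to the 390X’s count), not the ~31% one, ignoring clock-speed differences.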

    • Considering its intended purpose, its performance figures, and its ultra-small form factor, plus the fact that AMD is clearly targeting a niche within a niche market, I can only conclude that this card is mostly an engineering dream made real to show off the benefits of HBM: a practical demo/showcase that consumers are allowed to purchase. I don’t expect AMD to sell many, and I don’t expect they expect to sell many either. Considering the costs involved, while it would be amazing to see this card in the $450 range, let’s not kid ourselves: it performs only a smidge worse than the full Fury X and draws fairly little power, and you generally have to pay a premium for something small. For AMD to sell this GPU at $450 or even $550 would be royally kicking themselves in many regards. So while I too am a bit taken aback by the price, re-evaluating the situation and the logic of the product, it does make sense for it to cost what the Fury X does.

      To be honest, I was looking at a Fury Nano from the moment they were announced; the size/power/heat story seemed like an amazing deal. I thought jamming three of them into a single machine would let me power a 3x 4K setup fairly reasonably, but that turned into a pipe dream with the 4 GB VRAM limit, since 4 GB is basically a minimum for any high-end 4K gaming, even though the 2 GB 285s run 4K very well, all things considered.

      Another point to consider is that AMD and Nvidia have been trying their best to get their hands on 20nm, but it just isn’t happening; it sounds like everything will skip straight to 16/14nm, which leads me to believe one of several potential scenarios.

      AMD likely had a 28nm plan for a potential HBM-based graphics product drawn up for quite a long time, along with 20nm plans that were intended to become Fiji. Due to the 20nm issues, they had to back down and take a hit by dumping resources into a 28nm part, scaling things back and changing things up because their newer concepts simply didn’t work on anything larger than 20nm. And that’s not discounting that Nvidia was surely banking on 20nm as well, with materials prepped, only to be forced to build the 9xx series out of designs intended for the smaller process.

      Really, I think both GPU manufacturers are sitting on some serious gold. Even though the 9xx series showed significant improvements over the previous generation, especially in power consumption for pure graphics tasks (under heavy compute loads its power draw climbs and even surpasses AMD’s competing products, which almost no one seems aware of), we’ve otherwise seen fairly unimpressive performance gains for the last several years. We’re due to be wowed, and I feel the next launches from both vendors on 16/14nm will bring some killer stuff. Hopefully by then AMD will have HBM 2.0 sorted, with 8 GB or even 16 GB of VRAM supported with ease, and 4K at 60–120 fps a reality. Nvidia’s Pascal should have HBM too, but we don’t know what’s happening there just yet, as AMD is likely to still have priority on the tech, just as they did with GDDR5 and GDDR4.
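On the earlier “4 GB is basically a minimum for 4K” point: the render targets themselves are small, so the memory pressure at 4K comes almost entirely from texture and geometry assets. A back-of-envelope sketch (the buffer counts are illustrative assumptions, not any real engine’s layout):

```python
# One 32-bit RGBA full-screen buffer at 4K.
width, height = 3840, 2160
bytes_per_pixel = 4

buffer_mb = width * height * bytes_per_pixel / 1024**2
print(round(buffer_mb, 1))                   # 31.6 MB per full-screen buffer

# A generous deferred-rendering setup: triple-buffered swap chain,
# five G-buffer targets, one depth buffer (assumed counts).
buffers = 3 + 5 + 1
print(round(buffers * buffer_mb / 1024, 2))  # 0.28 GB total
```

Even a heavyweight buffer setup stays well under half a gigabyte; it is the high-resolution textures needed to look sharp at 4K that eat the rest of the 4 GB.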

      • 00blahblahblah00

        Yeah, it really is more of a proof of concept than something they intended to make money on. It’s a bit like the Bugatti Veyron: something Volkswagen decided to make because they could, but which actually cost them money on every one sold. In AMD’s case with the Fiji cards it isn’t that extreme; in fact, they are probably making money on each card/chip sold, just not much, given the steep price relative to the performance of something like a 970 or 290X. The real winners here are future mid-range customers, who will get many of Fiji’s features on some future mid-range card (something that might be called the 560X, for example) a few years down the line. But, as you pointed out, that won’t happen until TSMC starts pumping out 14nm FinFET desktop parts, or AMD and Nvidia find another silicon fab that can do it better and cheaper at the same scale. There are rumors that Samsung might one day make Nvidia GPUs, which is a result, in large part, of TSMC’s inability to stay on a consistent die-shrink schedule (basically Moore’s law).

        On a side note, when they do get under 14nm for desktop GPU parts, it would be cool to see Nvidia work with Intel to put something like a GTX 960 into an i5 and sell it for around $350. I’d buy that over an i7 all day. AMD will probably have something like that in their APU line in two or three years, with eight CPU cores to boot. We know they can fit eight cores and the equivalent of an R9 260X on a single die, thanks to the Xbox One and PS4, and they’re doing it on 28nm. Give them transistors spaced half the distance apart, and they should easily fit the equivalent of an R9 285 onto an APU die, along with an octa-core that clocks way higher than the current console chips.
