Friday | January 20, 2017
Popular Review Links:

AMD Radeon R9 NANO Review

AMD Radeon R9 NANO Review – The Fury NANO

AMD Radeon R9 NANO Review – Performance

AMD Driver: 15.x Beta

AMD Radeon R9 NANO Review – Conclusion

Reviewed Item: AMD Radeon R9 NANO

About Author

Stuart Davidson


  1.

    In response to |2A|N: this isn't a rebrand. This is entirely new silicon in every way; the fact that it uses HBM should indicate that massive changes had to be made at a fundamental level to make it work. Sure, plenty of components are similar to the previous generation of GPUs, just as with every GPU on the market, but it is the furthest thing from a rebrand. Then again, I'm not sure what you're referring to; perhaps just the "versions" of essentially the same thing?

    The 9700 and 9800 days were among the most wonderful. Nvidia had been hitting hard with extreme pricing, and ATI brought a product to market that not only trumped anything Nvidia had but usually forced prices down through aggressive competition and reasonable pricing, and through that managed to bring about various binned models. Most people jumped on the non-Pros, and some managed to soft-mod them to Pros or even XTs. Sometimes it worked, sometimes it didn't. Still, that's not "rebranding"; that's just model differences.

    Rebranding is when you take a previous-generation product and reissue it. Take ATI's 8000 series, which preceded the 9700 and 9800: if ATI had made the 9700 and 9800 out of the 8000s, not only would they have been in deep trouble, but those chips, being basically the exact same GPU, would have been a rebrand. Much like the HD 7750 being dumped into the R7 250: they are in fact the exact same card, so much so that newer drivers recognize an HD 7750 as an R7 200 series card, and apps and programs see it as such. The mobile GPUs listed as HD 8xxx are rebrands; they are HD 7xxx parts with the model number changed to 8xxx. The nice thing is that there are indeed newer chips mixed into the range. The Rx 300 models are rebrands too, though they include a few noticeable changes that let, for example, an R9 390 perform on par with an R9 290X.

    What makes things interesting is that Nvidia's 9xx cards cannot run graphics and compute work simultaneously. That's why they are in hot water right now: with DirectX 12 and the Mantle/Vulkan APIs, if game developers don't put in an exclusion to disable async compute, Nvidia's GPUs take a pretty big performance hit, and disabling it gives them back most of the speed they should have. Async compute should significantly improve performance, and on AMD GPUs it does, because AMD has implemented it properly.
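The async-compute point can be illustrated outside any GPU API. Below is a toy Python model (an illustrative sketch with made-up durations, not graphics code): threads stand in for hardware queues and sleeps stand in for work, showing that two independent workloads finish in roughly the time of the longer one when overlapped, versus the sum when serialized.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for independent GPU workloads; durations are invented.
def graphics_pass():
    time.sleep(0.10)  # pretend rasterization work

def compute_pass():
    time.sleep(0.06)  # pretend compute work

# Serial submission: compute waits for graphics, as on hardware that
# cannot run the two queue types concurrently.
t0 = time.perf_counter()
graphics_pass()
compute_pass()
serial = time.perf_counter() - t0

# Overlapped submission: both "queues" run at once.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    g = pool.submit(graphics_pass)
    c = pool.submit(compute_pass)
    g.result()
    c.result()
overlapped = time.perf_counter() - t0

print(f"serial: {serial*1000:.0f} ms, overlapped: {overlapped*1000:.0f} ms")
```

On hardware that cannot overlap the two queue types, the serial figure is what you get even when the API exposes a separate compute queue, which is the disadvantage described above.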

    • 00blahblahblah00

      Coil whine never actually "goes away"; it just moves to a different part of the sound spectrum because the frequency changed for whatever reason, and our human ears can no longer hear it. The little copper coils on PCBs are designed not to be audible to humans when operating independently. But when you combine parts that were never tested together by their individual manufacturers, like certain PSUs, motherboards and GPUs, all of which have those little coily bastards on them, it is possible that together they will produce a sort of harmony whose frequency is audible to us humanoids, even if all the parts used are expensive and high end. Why the frequency changes over time, I have no idea, but the noise technically doesn't go away.

      Also, I think what |2A|N was talking about isn't re-branding from one generation to another; he was referring to the two Fury X/Fiji cards in the current 300 series lineup. Essentially, both are 4,096-shader, 4 GB HBM Fiji GPUs with different style coolers. I don't think he realizes, though, that the regular Fury has 3,584 shaders and is basically a binned version of the Fury X/Nano, not a simple re-brand.

      • Indeed, the noise doesn't technically disappear; it just gradually changes over time. You can run into this on almost any electronic device: the amount of power, the fluctuation in voltages and amperage, and the frequencies all the components run at determine what kind of noise it makes and at what pitch. My best guess is that what happens over time with frequent use is basically a burn-in. The components get hot, expand and contract several times, and each of those micro changes ends up affecting the frequency of the sound, either lowering or, more likely, raising it.

        This also explains why one of my customers' dogs disliked being in the room where the computer sat: the machine initially had bad coil whine that appeared to disappear, but for the dog it was still incredibly intolerable. Either way, coil whine has always been present; it has only gotten really bad for us humanoids in the last few years, and people are treating it as something "new". The complexity of the chips and components, the raw increase in power, and the demand for tighter voltage and power regulation to save power all put significantly more stress and variables on the card. So while it's completely reasonable to complain about it, it's unreasonable to call it a fault from the start without giving it time to potentially resolve itself, since every midrange to high-end card on the market now has a high chance of producing the same problem and of self-resolving over time.
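The dog anecdote lines up with the usual hearing-range figures: humans hear roughly 20 Hz to 20 kHz, while dogs hear up to somewhere around 45 kHz. A tiny illustrative check (the ranges are approximate textbook values, not measurements of any card):

```python
# Approximate hearing ranges in Hz (rough textbook figures).
HUMAN_RANGE = (20, 20_000)
DOG_RANGE = (67, 45_000)

def audible(freq_hz, hearing_range):
    """True if a pure tone at freq_hz falls inside the given range."""
    lo, hi = hearing_range
    return lo <= freq_hz <= hi

# A whine drifting from 15 kHz up past 20 kHz "disappears" for humans
# but stays well inside a dog's range.
for f in (15_000, 22_000, 30_000):
    print(f"{f} Hz -> human: {audible(f, HUMAN_RANGE)}, dog: {audible(f, DOG_RANGE)}")
```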

        • 00blahblahblah00

          lol, poor dog. The owner was probably wondering why the dog barked every time he moved the mouse around.

  2. 00blahblahblah00

    [AMD GPU owner here (R9 290)]… $650? C'mon, son. I know this is a tiny little card that can fit in your pocket, uses less power, produces less heat and pumps out more frames than a 970, but the launch price is more than double the 970's, looking at the games tested above. In the resolutions tested you are getting, what, a 20-25% increase in average FPS? In some games the 970 comes within less than 10% of it or flat out beats it. C'mon, son.

    It seems like the roles have suddenly reversed: AMD's new architecture is now the more efficient one in thermals and power, their cards are now quieter, not to mention that nifty high bandwidth memory gizmo that looks like it is stapled to the GPU core. But you are paying a huge premium for those features versus an Nvidia card which is technically a class below it but only slightly slower and much, much cheaper. For currently available DX11 games, HBM doesn't look worth this huge price premium over something like a 970 or even a Hawaii card. Even at 4K, HBM doesn't seem to give Fiji much of an advantage over a standard GDDR5 GPU such as the 390X or 970; the advantage it does have is probably the result of it simply having more shaders than everyone else. My guess is that if the Fury cards had the same shader count and core clock as the 390X, we wouldn't see much of a difference. In fact, in some cases the 390X might actually be better, since it has double the frame buffer. Do the math: 4,096 – 2,816 = a difference of 1,280 shaders, or roughly 30%. I didn't bust out the calculator for every game above, but I don't think the Nano beats the 390X in any game by an average FPS of more than 30%. Assuming that HBM really is a huge leap forward, shouldn't it beat its little sister in at least a few games by more than 30%?
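A quick check of the shader arithmetic in the comment above: the 1,280-shader gap is about 31% when measured against Fiji's count, but about 45% when measured against the 390X's count, so the "roughly 30%" figure depends on which card is taken as the baseline.

```python
# Public shader counts quoted above (full Fiji vs. Hawaii/390X).
fiji_shaders = 4096    # Fury X / Nano
hawaii_shaders = 2816  # R9 390X

diff = fiji_shaders - hawaii_shaders
pct_of_fiji = 100 * diff / fiji_shaders      # deficit relative to Fiji
pct_of_hawaii = 100 * diff / hawaii_shaders  # uplift relative to the 390X

print(f"gap: {diff} shaders")
print(f"390X trails Fiji by {pct_of_fiji:.2f}% of Fiji's count")
print(f"Fiji has {pct_of_hawaii:.2f}% more shaders than the 390X")
```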

    • Considering its intended purpose, its performance figures and its ultra-small form factor, and given that AMD is clearly targeting a niche within a niche market, I can only conclude that this GPU is mostly an engineering dream made real to show off the benefits of HBM; call it a practical demo or showcase which consumers are allowed to purchase. I don't expect AMD to sell many, and I don't expect they expect to sell many either. Considering the costs involved, while it would be amazing to see this card in the $450 range, let's not kid ourselves: the card performs only a smidge worse than the full Fury X and draws fairly little power as well. You generally have to pay a premium for something small, and for AMD to attempt to sell this GPU at $450 or even $550 would be royally kicking themselves in many regards. So while I too am a bit taken aback by the price, having had a lower figure in mind at the time, re-evaluating the situation and the reasoning behind the product, it does make sense for it to cost what the Fury X does.

      To be honest, I was looking at a Fury Nano from the start when they announced it; the size, power and heat all seemed like an amazing deal. I thought jamming three of those into a single machine would let me power through a 3x 4K setup fairly reasonably, but that kind of turned into a pipe dream with the 4 GB limit on VRAM, since we know 4 GB alone is basically a minimum for any high-end 4K gaming, even though the 285s, which are 2 GB, run 4K very well all things considered.

      Another point to ponder is that AMD and Nvidia have been trying their best to get their hands on 20nm, but it just isn't happening, and it sounds like everything will skip straight to 16/14nm, which leads me to believe one of several potential scenarios.

      AMD likely had a 28nm plan for a potential HBM-based graphics product drawn up for quite a long time, alongside 20nm plans that were intended to be Fiji. But due to the 20nm issues, they had to back down and take a hit by dumping resources into a 28nm part, scaling things back and changing things up because their newer concepts simply wouldn't work on anything larger than 20nm. And that's not discounting that Nvidia was surely banking on 20nm as well, with materials prepped, only to be forced to build the 9xx out of something intended for the smaller process.

      Really, I think both GPU manufacturers are sitting on some serious gold. Even though the 9xx showed significant improvements over the previous generation, especially in power consumption for graphics tasks (in heavy compute tasks their power consumption climbs and even surpasses AMD's competing products, which almost no one appears to be aware of), we've otherwise seen fairly unimpressive performance improvements for a number of years. We're due to be wowed, and I feel the next product launches from both on 16/14nm will bring about some killer stuff. Hopefully by then AMD will have HBM 2.0 sorted, with 8 GB or even 16 GB of VRAM supported with ease, not to mention 4K at 60-120 FPS being a reality. Nvidia's Pascal should have HBM, but we don't know what's happening there just yet, as AMD is likely to still have priority on the tech, just as they did with GDDR5 and GDDR4.

      • 00blahblahblah00

        Yeah, it really is more of a proof of concept than something they intended to make money on. It is kind of like the Bugatti Veyron: something Volkswagen decided to make because they could, but which actually cost them money on each one sold. AMD's case with the Fiji cards isn't that extreme; in fact, they are probably making money on each card/chip sold, just not much, given the steep price relative to its performance compared to something like a 970 or 290X. The real winners here are future mid-range customers who will get many of Fiji's features on some future mid-range card (something that might be called the 560X, for example) a few years down the line. But, as you pointed out, that won't happen until TSMC starts pumping out 14nm FinFET desktop parts, or AMD and Nvidia find another silicon fabricator who can do it better and cheaper at the same scale. There are rumors that Samsung might one day be making Nvidia GPUs, which is the result, in large part, of TSMC's inability to stay on a consistent die-shrink schedule (basically Moore's law).

        On a side note, when they do get down to under 14nm for desktop GPU parts, it would be cool to see Nvidia work with Intel to put something like a GTX 960 into an i5 and sell it for around $350. I'd buy that over an i7 all day. AMD will probably have something like that in their APU line in two or three years, with eight CPU cores to boot. We know they can fit eight cores and the equivalent of an R9 260X on a single die thanks to the Xbox One and PS4, and they are doing it on 28nm. Give them transistors spaced half the distance apart and they should easily be able to fit the equivalent of an R9 285 onto an APU die, along with an octa-core that can clock way higher than the current console chips.
