Discussion in 'Overclocking, Benching & Modding' started by IvanV, Feb 5, 2013.
So the beta5 drivers are valid but not the 12.10 set?
12.10? Heck, 13.1 isn't valid! Only betas 3, 4 and 5. Of those, I haven't tried 3, 4 crashes, and 5 works, but it's a couple of percent slower than 13.1.
Also, Windows 8 scores a bit higher than 7.
What I read is that the WHQL 13.1 drivers were preventing certain items from rendering properly. That's the only reason they were deemed invalid. AMD worked with FM on these beta drivers to correct whatever the problem was and is why the betas are accepted. If the same problem existed in the 12.10 drivers, then that would explain the rejection of those, too. I just don't know if that's it or not.
Looks like I won a copy of the Advanced Edition from a Galaxy giveaway. They'll have a new one soon, so, if you care, like the page and wait for it.
Last time, they limited the number of participants to 50 and eventually upped the number of winners to 20, so chances to win were pretty good. The next question will probably be about something from the Ice Storm demo, there is a link to the HD version on their page.
Did a new run with the 13.2beta5 driver.
A small improvement on the first test but lower on the other two compared with the 12.10 set.
In case anyone is interested, the second giveaway is underway. Again 50 participants and 20 winners. Hurry, because they started a few hours ago, so not many spots left. The question is how many times does the Galaxy logo appear in the Ice Storm demo. Also, don't forget to like the page!
EDIT: I got my key today and did another run. Windows 8 and the 32-bit version of the test seem to be the winning combo:
Gotta run my benches! This week sometime for sure.
Relatively recent run through with the beta 5 drivers so I could get a "valid" result.
This test was run with 1 GPU
If you want to test Crossfire, use Catalyst 13.2 beta 5, they work far better than any others.
I still find it mildly amusing that physics scores really don't appear to factor into the total score much at all.
I mean, look at press's and mine... the 4-core Xeon vs the 3930K... 2 extra cores... in the first test it isn't that big of a difference... only about a 14,000 score difference...
but in the next two the gap widens considerably each time.
Meanwhile my poor card's overall performance can't touch a 7970... that's understandable..
But I'm just thinking... at this rate of physics testing... they're going to have to provide a 3DMark where both tests have a significant impact on the total. Keep the 3D and physics results as subtotals, but instead of adding them together and applying a ratio to get the total, multiply them... that way the physics total impacts things A LOT.
I'm sure someone would complain about 3DMark having physics, which isn't 3D... which would be idiotic, since physics IS 3D... it's what drives the 3D world and significantly affects how that world looks to us.
NOW where'd our openCL physics processing capabilities go?
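A rough sketch of the difference being proposed, with hypothetical scores and an assumed 85/15 additive weighting (these numbers and weights are illustrations, not 3DMark's actual formula): under an additive total the extra cores barely register, while a multiplicative total makes the physics gap show up clearly.

```python
# Hypothetical scores: same GPU, a quad-core vs a hex-core CPU
graphics = 40_000
physics_quad, physics_hex = 8_000, 12_000

def additive(g, p, wg=0.85, wp=0.15):
    # additive total with an assumed heavy graphics weight
    return wg * g + wp * p

def multiplicative(g, p):
    # product-style total (geometric mean), as proposed above
    return (g * p) ** 0.5

# additive:       35200 vs 35800 -- under 2% apart
# multiplicative: ~17889 vs ~21909 -- over 20% apart
```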
I think they try to be representative of performance in actual gaming, where the fact that you have paired your 69** with a SB hex rather than an IB quad really isn't that significant.
I'm not sure what formula they are using, but it's not arithmetic mean. Whenever scores from individual tests are far apart, the end result is closer to the lower one. Example: in Cloud Gate, I have just under 30k graphics score and a bit over 3k physics score; my total score is 10k. That's a bit higher than geometric mean, but way lower than arithmetic, so I think that it's some sort of "weighted geometric mean" that slightly favours the graphics score.
EDIT: What Blibbax said makes sense.
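To make that concrete, here's a quick check of those Cloud Gate numbers: fitting the weight w in a weighted geometric mean, graphics^w * physics^(1-w), to the observed total. This is only an illustration of the "weighted geometric mean" guess above, not Futuremark's published formula.

```python
import math

# Observed Cloud Gate scores from the post above (rounded)
graphics, physics, total = 30_000, 3_000, 10_000

arithmetic = (graphics + physics) / 2        # 16500 -- way above the total
geometric = math.sqrt(graphics * physics)    # ~9487 -- just below it

# Solve graphics**w * physics**(1 - w) == total for the weight w
w = (math.log(total) - math.log(physics)) / (math.log(graphics) - math.log(physics))
# w comes out around 0.52: a slight tilt toward the graphics score
```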
I understand that for the older 3DMarks... but with this new one they are pushing the envelope toward future games and graphics/physics systems.
I'm still dying to see, or even play back, a pre-programmed Unreal Engine 4 demo. Something they are investing heavily in with Unreal Engine 4 is the physics system. Granted, the consolized version clearly won't be as heavy... but they DO appear to want to make physics part of the "new" thing on the consoles that the previous ones couldn't do, along with the graphical improvements. Personally I'm looking forward to physics-based animation playback rather than pre-scripted and totally unrealistic physics animation.
Throw it on a modern computer and I'm betting we'll start seeing more OpenCL- and CPU-intensive physics rolling out, and multi-core CPUs are really going to start feeling the pressure.
I think the main reason is that OpenCL/DirectCompute works on both AMD and NV GPUs. It just isn't interesting to benchmark with PhysX since it's only available on NV GPUs.
I wasn't talking about PhysX... I was talking about OpenCL physics.
Test with CF enabled and 13.2 beta 5 driver
Yeah, the CrossFire scaling puts SLI to shame...
it looks even more impressive when the resolution starts exceeding 2560x1440...