Could that be it? Nvidia Beta 313.96
Not long ago AMD released a new driver that was ~9% faster than the earlier one. They said they had found (and fixed) a bug in the clock driver routine.
So why should nVidia be error free?
As Murphy said:
If it happens, it must be possible!
No change in speed on 2x 650 Ti with 2 units each.
OK, I used a GT 640 1GB. Whatever happened, I don't know. It's on the secondary PCIe port. I forgot to mention: running XP 32-bit.
Certainly not 10 minutes faster on my 670, at least, but maybe a minute or two faster on its first couple of WUs.
No change in runtime on my GTX 470M.
No good news; back to normal, I think. It was a one-day wonder.
CPU usage rose by 100%, and so WUs took 600 seconds longer again. They're all different; I can't pin down a score.
It's a bit of a hack, but the way I measure performance is to take a copy of a current job and run it standalone (i.e. not under BOINC). Somewhere I read that BOINC applications can be run directly, so with a bit of fiddling (delete status.cpt and the output files) you can run the apps directly and measure the exact time taken. Although I've only done this under Linux, I guess the same trick will work for Mac and Windows.
Here's my scrappy script:-
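(The script itself is missing from this post; the following is a minimal sketch of what such a runjob.sh could look like, not the poster's actual script. The executable name and its arguments are placeholders; only status.cpt is named in the post, and the real command line has to be taken from a current job.)

    #!/bin/sh
    # Hypothetical runjob.sh sketch: run a copied BOINC job standalone.
    # Run this from a COPY of the job's slot directory, with BOINC stopped.
    rm -f status.cpt           # delete the checkpoint so the run starts fresh
    rm -f output*              # delete old output files (names are job-specific)
    ./app_binary --arg value   # placeholder: paste the job's real command line here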
On Linux, I run it as "time sh runjob.sh"; Mac OS, I guess, will be the same, and the Windows equivalent is probably similar (but it would need to be a .BAT and use whatever Windows has in place of the 'time' command). You need to get the command line from a current job, though (as the above is specific to my set-up).
And that makes me think that if there were a "standard candle" job, we'd really be able to tell the difference between drivers, cards, etc.!
Sorry, I'm not familiar with Linux etc.; I don't think I'd get the job done ;)
I'm also poor at installing Nvidia drivers under Linux; if I started that "project", it would take weeks, I swear. XD