RE: my last cuda wu
Hi,
is your Q6600 overclocked or running at default speed? I'm looking for some performance numbers to compare different cards. I want to know which card gives the most bang for the buck.
hotze
RE: RE: my last cuda wu
2.4 GHz.
My card is a GTX 260 Core 216, and I'm running the newest driver...
The 295 is 2*240 cores, so it's the most bang for the buck; but be sure your power supply can handle it...
I think the GTX 295 can knock one out in under 2 hours with an i7, if the benchmarks are any guide.
I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/
RE: Reminiscent of : - the
1 MB equals 1 million bytes - officially, anyway. There are binary prefixes for use with bytes, e.g. MiB. Unfortunately, no operating system currently in existence honors these prefixes. (And "mebibyte" doesn't sound quite as charming as "megabyte".)
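Just to make the gap concrete, here is a tiny C sketch (my own illustration, not from any project code) printing the same byte count with both kinds of prefix:
[pre]
/* Decimal vs. binary size prefixes: the same byte count, two labels. */
#include <stdio.h>

int main(void)
{
    double bytes = 500e9;                               /* a "500 GB" drive, as marketed   */
    double gb  = bytes / 1e9;                           /* GB  = 10^9 bytes (SI prefix)    */
    double gib = bytes / (1024.0 * 1024.0 * 1024.0);    /* GiB = 2^30 bytes (binary prefix) */

    printf("%.0f bytes = %.1f GB (decimal) = %.1f GiB (binary)\n", bytes, gb, gib);
    /* Prints: 500000000000 bytes = 500.0 GB (decimal) = 465.7 GiB (binary) */
    return 0;
}
[/pre]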
RE: 295 is 2*240 core so
Don't agree with you there, ZPM.
The GTX295 is certainly the most powerful card, so the most bang...
...but for the buck? I don't think so. Compared with single-processor 240 shader cards such as the GTX275, the shader clock is slower, so you don't get twice the performance: but you pay more than twice the price. I reckon you get almost 25% more bang for the buck with a 275, compared with the 295.
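For anyone who wants to redo the arithmetic, here is a rough C sketch. The shader counts and clocks are the reference specs; the prices are only placeholder launch-era figures, so plug in current street prices before drawing any conclusion:
[pre]
/* Rough "bang per buck": shader count x shader clock per dollar.
   Prices below are placeholders - substitute what you would actually pay. */
#include <stdio.h>

struct card { const char *name; int shaders; double shader_mhz; double price_usd; };

int main(void)
{
    struct card cards[] = {
        { "GTX 275", 240, 1404.0, 249.0 },   /* single GPU               */
        { "GTX 295", 480, 1242.0, 499.0 },   /* 2 x 240 shaders, dual GPU */
    };

    for (int i = 0; i < 2; i++) {
        double bang = cards[i].shaders * cards[i].shader_mhz;   /* crude throughput proxy */
        printf("%s: %.0f shader-MHz, %.1f shader-MHz per dollar\n",
               cards[i].name, bang, bang / cards[i].price_usd);
    }
    return 0;
}
[/pre]
With placeholder numbers like these the single-GPU card comes out ahead per dollar; how far ahead depends entirely on what you actually pay for each card.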
I have crunching a CUDA WU
I have crunched a CUDA WU with a GTX 285 in 5.5 hours. My wingman crunched the same WU with a CPU in 6 hours.
This is not a big difference...
Intel I7 930 - GTX 480 - Windows 7 64
Join BOINC Synergy, the best team in the galaxy!
@ Einstein@home: Have you
@ Einstein@home:
Have you used the latest CUDA to compile? There are some corrections in 2.3.
Distributed.net have done this as well - bug fixes, including some for WU corruption.
Different field, but still good advice.
Post the system specs - it's all BETA.
So the more details the better.
Speed should be expected to improve in the future, or am I wrong?
RE: @ Einstein@home: Have
OK :
ASUS P5E3
Q9300
2x1 GB DDR3 1333
GTX 285, driver 190.38
Vista Ultimate
BOINC 6.6.36
The other task was crunched with a X7900 - Darwin 9.8.0
Intel I7 930 - GTX 480 - Windows 7 64
Join BOINC Synergy, the best team in the galaxy!
If I understand correctly
If I understand correctly, the current run is a CUDA-assisted CPU run. The GPU is used for some calculations, but not all. Right now you are mainly limited by the CPU; the GPU gives you only a boost of roughly 20-50%.
If your Q9300 is not overclocked, then any 4 GHz Core 2 runs faster than your CPU with CUDA help.
Once there is a standalone app for the GPU, yours will fly ;)
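To see why a partly offloaded run stays CPU-bound, here is a minimal Amdahl's-law sketch in C. The offloaded fraction and the GPU speedup on that fraction are made-up illustration values, not measurements of this app:
[pre]
/* Amdahl's law for a CUDA-assisted CPU run: only the offloaded fraction
   of the work benefits from the GPU, so the overall speedup is capped.
   The numbers below are illustrative guesses, not measurements. */
#include <stdio.h>

static double overall_speedup(double offloaded_fraction, double gpu_speedup)
{
    /* time_new = (1 - p) + p / s, relative to the pure-CPU run */
    return 1.0 / ((1.0 - offloaded_fraction) + offloaded_fraction / gpu_speedup);
}

int main(void)
{
    double fractions[] = { 0.2, 0.3, 0.4 };   /* share of the work running on the GPU      */
    double gpu_speedup = 10.0;                /* assumed GPU-vs-CPU speedup on that share  */

    for (int i = 0; i < 3; i++)
        printf("offload %.0f%% of the work -> overall boost %.0f%%\n",
               fractions[i] * 100.0,
               (overall_speedup(fractions[i], gpu_speedup) - 1.0) * 100.0);
    /* With these guesses: ~22%, ~37%, ~56% - the same 20-50% ballpark as above. */
    return 0;
}
[/pre]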
RE: If I understand
Ok, we'll see then :-)
Thanks
Intel I7 930 - GTX 480 - Windows 7 64
Join BOINC Synergy, the best team in the galaxy!
I have been getting the same
I have been getting the same sort of errors within seconds of starting execution, both on 3.07 and 3.10:
The system cannot find the path specified. (0x3) - exit code 3 (0x3)
Activated exception handling...
[10:30:57][9612][INFO ] Starting data processing...
[10:30:57][9612][INFO ] Using CUDA device #0 "GeForce 8500 GT" (44.06 GFLOPS)
[10:30:57][9612][INFO ] Checkpoint file unavailable: status.cpt (No such file or directory).
------> Starting from scratch...
[10:30:57][9612][INFO ] Header contents:
------> Original WAPP file: p2030_54162_47225_0054_G48.85-01.81.C_4.wapp
------> Sample time in microseconds: 128
------> Observation time in seconds: 268.9792
------> Time stamp (MJD): 54162.546585648146
------> Number of samples/record: 512
------> Center freq in MHz: 1440
------> Channel band in MHz: 0.390625
------> Number of channels/record: 256
------> Nifs: 1
------> RA (J2000): 192747.923054
------> DEC (J2000): 131107.828389
------> Galactic l: 48.7926
------> Galactic b: -1.8848
------> Name: G48.85-01.81.C
------> Lagformat: 0
------> Sum: 1
------> Level: 3
------> AZ at start: 348.1477
------> ZA at start: 5.1279
------> AST at start: 0
------> LST at start: 0
------> Project ID: p2030
------> Observers: Kevin
------> File size (bytes): 16190754
------> Data size (bytes): 16179201
------> Number of samples: 2097152
------> Trial dispersion measure: 186.6 cm^-3 pc
------> Scale factor: 7711.9
[10:30:58][9612][INFO ] Seed for random number generator is 1009522123.
[10:31:00][9612][ERROR] Error creating CUDA FFT plan (error code: 2)
[10:31:00][9612][ERROR] Demodulation failed (error: 3)!
called boinc_finish
Host number is 480469
The WUs appear to fail with both the CUDA 2.1 and 2.3 DLLs - the driver version installed is 190.38.
Am I missing something really obvious? The host runs SETI CUDA WUs okay. I am not complaining, just reporting - I know improvements can't be made without reports from testers.
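For what it's worth: if the app passes the cuFFT status straight through, error code 2 would correspond to CUFFT_ALLOC_FAILED, i.e. the FFT plan could not get GPU memory. Below is a minimal host-side sketch of that kind of check - the 2^21-point size is only my guess from the "Number of samples: 2097152" line, not necessarily what the app actually plans:
[pre]
/* Minimal cuFFT plan-creation check (host code, link with -lcufft).
   The FFT size is a guess taken from the "Number of samples: 2097152"
   header line; the real application may plan something different. */
#include <stdio.h>
#include <cufft.h>

int main(void)
{
    cufftHandle plan;
    int nx = 2097152;                              /* 2^21 points (assumed) */

    cufftResult res = cufftPlan1d(&plan, nx, CUFFT_C2C, 1);
    if (res != CUFFT_SUCCESS) {
        /* In the cufftResult enum, 2 == CUFFT_ALLOC_FAILED: the plan's GPU
           memory allocation failed - plausible on a card with as little
           memory as an 8500 GT typically carries. */
        fprintf(stderr, "cufftPlan1d failed, error code %d\n", (int)res);
        return 1;
    }

    printf("Plan created OK\n");
    cufftDestroy(plan);
    return 0;
}
[/pre]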
Soli Deo Gloria