Just got a few CUDA work units on my GT120, but it's really not worth the effort. Even though it only utilises 50% of the GPU plus 100% of a CPU core, it made my display sluggish.
If I were to install another graphics card, is there any way to configure BOINC so the CUDA app crunches on only one GPU, while the other GPU is used mainly for the display?
RE: Just got a few CUDA
There are two options that come to mind: you could either tell BOINC not to use the GPU at all while you are working at the PC (detected by keyboard and mouse activity, IIRC).
The other, as you said, would be to use a dedicated crunching card plus one card for "real" graphics display.
This page http://boinc.berkeley.edu/wiki/Client_configuration describes how you can locally force BOINC to ignore a particular GPU card; see the description of the relevant tag there.
See also this thread http://einsteinathome.org/node/194673 for things to keep in mind when using more than one GPU with BOINC (e.g. some platforms will only enable CUDA if there's actually a display plugged into the card ... :-) ).
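For concreteness, here is a minimal sketch of what such a local configuration could look like. This is only an illustration: the `ignore_cuda_dev` tag and its placement are as described on the Client_configuration page linked above, but tag names and syntax have varied between client releases, so check the page against your own BOINC version.

```xml
<!-- cc_config.xml, placed in the BOINC data directory -->
<cc_config>
  <options>
    <!-- Tell BOINC to skip CUDA device 1 for crunching
         (device numbers as reported in the client's startup
         messages), leaving that card free for the display. -->
    <ignore_cuda_dev>1</ignore_cuda_dev>
  </options>
</cc_config>
```

After saving the file, restart the client (or re-read the config files from the manager) so the option takes effect.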
CU
H-B
RE: There are two options
Interesting... I know it sounds stupid to have two identical cards, but what if I have such a scenario? BOINC can't differentiate between two identical cards with that tag, can it?
I'm not worried about CUDA not enabled on a second GPU, since I've got another monitor lying around unused.
RE: Interesting... I know
It can, since the devices are distinguished by number, not by type. (They are numbered by the operating system :-) For example:
NVIDIA GPU 0: GeForce 9800 GT (driver version 19107, CUDA version 2030, compute capability 1.1, 512MB, 351 GFLOPS peak)
NVIDIA GPU 1: GeForce 9800 GT (driver version 19107, CUDA version 2030, compute capability 1.1, 512MB, 351 GFLOPS peak)
Regards,
Gundolf
Computers aren't everything in life. (Just a little joke)
Is there any way to increase
Is there any way to increase the max number of WUs per day per core?
I am asking because on my i7-920 I turned HT off (so I'm processing with 4 cores), cranked the CPU to 4.4 GHz, and pushed the shaders on my GTX-295 to 1728, with the end result of processing two ABP2 WUs every 17 to 18 minutes. I am now looking at just a couple of WUs left and a message saying I have reached my quota for the day, but I'm not ready to be finished yet :-(
Steve
--------------------------
- Crunch, Crunch, Crunch -
--------------------------
I know I have low-end NVidia
I know I have low-end NVIDIA cards (8400 GS and 9500 GS), but the processing times I'm getting using my GPUs are worse than if I don't use them at all (e.g. a Q6600 with no CUDA beat my Q8400 with an 8400 GS by almost a factor of 2 -- I doubt the Q6600 is overclocked that much). I've disabled use of my GPUs by Einstein (if I had better cards, I'd use them).
RE: I know I have low-end
Indeed, a GeForce 8400 GS is rated at only 33.60 GFLOPS theoretical peak performance by BOINC:
Quote:
[10:10:38][3628][INFO ] Using CUDA device #0 "GeForce 8400 GS" (33.60 GFLOPS)
This means that your card is 20 to 30 times (!) slower than modern high end GPUs.
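As a quick sanity check of that factor (not an authoritative spec for any particular card), the stated 20-to-30-times range implies roughly the following peak throughput for the high-end GPUs being compared against:

```python
# Rough arithmetic behind the "20 to 30 times slower" claim, using the
# 33.60 GFLOPS figure BOINC reports for the GeForce 8400 GS above.
low_end_peak = 33.60  # GFLOPS, from the quoted BOINC log line

# Implied peak range for the high-end cards the comparison refers to:
high_end_range = (low_end_peak * 20, low_end_peak * 30)
print(f"{high_end_range[0]:.0f} to {high_end_range[1]:.0f} GFLOPS")
# -> 672 to 1008 GFLOPS
```

That range is consistent with high-end consumer GPUs of that era, which is why the 8400 GS can end up slowing a host down: the CPU core it ties up would often do more useful work on its own.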
HB
Except I seem to be having
Except I seem to be having difficulty disallowing use of my GPUs. I changed my Einstein@home preferences to not allow work on the GPU, updated my BOINC client, and it was still using my GPU. I even reset the project and it's still using my GPU. Do I need to reinstall the client?
RE: Except I seem to be
No, that will not be necessary. I'd think that units already downloaded for the GPU will still be completed. Are you saying that it also gets new units for the GPU?
CU
HBE
RE: ...I'd think that
I thought that too, but after a reset there shouldn't be any left. :-)
Computers aren't everything in life. (Just a little joke)
RE: No, that will not be
Yes, after I had changed my Einstein@home preferences to disallow use of the GPU and reset the project, it's still downloading CUDA units and using the GPU. The client doesn't appear to be following the preferences (and I do have a 6.10+ client).
Any information on what to do would be greatly appreciated (I guess I could reinstall the client and see what happens).
Thanks,
joe