Exactly. The version number 3 in "BRP3" was chosen because the code is a direct successor to ABP2. It actually does the same thing, but the CUDA flavor of the app does less on the CPU and more on the GPU. The only reason the app is not called "ABP3" is that the workunits it crunches are no longer from Arecibo. Currently they are from Parkes Observatory in Australia, but nothing in the app is specific to any particular observatory. So it's now called the "Binary Radio Pulsar, Version 3" app.
I could use a clarification. The present requirements for these CUDA BRP3s are:
Quote:
- Windows
- BOINC 6.10.x
- CUDA driver >= 3.2 (>= 260.00)
- 100% GPU (it just uses up to 75%)
- 20% CPU
- 300 MB RAM required
- Speed up compared to CPU: up to 20x (240 cores)
The 300MB of RAM required, is that system RAM or video card RAM?
I ask because the CPU BRP3s use 270-300MB of RAM, and I could see the CPU part that supports the GPU app using the same amount of system RAM. But does it?
Quote:
The 300MB of RAM required, is that system RAM or video card RAM?
The BRP3 GPU app uses up to 300MB of video RAM, but only 42MB of system RAM on my system.
I have two GPUs, one of which has less than 300MB of video RAM. The project keeps trying to use that card and incessantly restarting tasks after 5 seconds. The tasks are not aborted, just shuffled through the queue. The other card works fine with the new app. My cards are archaic, but the project or the client should be able to tell whether each card can be used.
FYI, the card that is failing is:
NVIDIA GPU 1: Quadro NVS 295 (driver version 26658, CUDA version 3020, compute capability 1.1, 231MB, 21 GFLOPS peak)
While the card that is crunching fine is:
NVIDIA GPU 0: GeForce 8400 GS (driver version 26658, CUDA version 3020, compute capability 1.1, 488MB, 22 GFLOPS peak)
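As a minimal sketch (independent of whatever the BOINC client does internally), the per-card memory and compute capability shown in those startup lines can be queried for every installed card through the CUDA runtime, so in principle a 300MB minimum could be checked per device:

#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;
        /* totalGlobalMem is in bytes; compare it against the 300MB requirement */
        printf("GPU %d: %s, compute capability %d.%d, %zu MB%s\n",
               i, prop.name, prop.major, prop.minor,
               (size_t)(prop.totalGlobalMem >> 20),
               prop.totalGlobalMem >= 300u * 1024 * 1024 ? "" : " (below 300MB)");
    }
    return 0;
}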
Thanks for the suggestion, but I was not asking how to turn that card off for all projects. Both cards run at about the same turtle's pace on other projects, and I would rather use both cards on another project than have only one utilized by E@H with the other completely off.
Quote:
I would rather use both cards on another project than have only one utilized by E@H with the other completely off.
No way - DA has refused for many years to implement anything like assigning specific resources to specific projects...
Quote:
I have two GPUs, one of which has less than 300MB of video RAM. The project keeps trying to use that card and incessantly restarting tasks after 5 seconds. The tasks are not aborted, just shuffled through the queue.
Assuming these two cards are in the same machine, you must have a configuration file that tells the client to use both GPUs. So you are a bit on your own there; there's nothing the BOINC client or the project can do for you. The client detects and reports only the parameters of the "best" card, and the project scheduler sends work for those parameters. Pity that they don't fit your smaller card, but by default the BOINC client wouldn't use it anyway.
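For reference, and assuming a standard BOINC installation, the configuration file in question is normally a cc_config.xml in the BOINC data directory; telling the client to use every usable GPU instead of only the "best" one is done with the use_all_gpus option, roughly like this:

<cc_config>
  <options>
    <!-- use every usable GPU, not only the one BOINC rates as best -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>

The client has to be restarted (or told to re-read its config file) before the change takes effect.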
We are still not sure how much memory the BRP computation actually takes; from the reports we get, it looks like this varies a lot between different cards, at least on Windows. It might be a driver issue. At least there were quite a few 256MB cards that couldn't run these tasks successfully. For now we have raised the memory requirement to 300MB, just to be on the safe side, and we'll keep monitoring the actual memory usage of our application (currently on Linux only). When we're sure of what's happening there, we may lower this requirement again.
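Purely as an illustration (this is not the project's actual instrumentation), one way to watch device memory usage from inside a CUDA application is the runtime call cudaMemGetInfo(), logged before and after the large allocations:

#include <cstdio>
#include <cuda_runtime.h>

/* Hypothetical helper: print how much device memory is free right now. */
static void log_device_memory(const char *label)
{
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) == cudaSuccess)
        printf("%s: %zu MB free of %zu MB total\n",
               label, free_bytes >> 20, total_bytes >> 20);
}

int main(void)
{
    log_device_memory("before allocations");
    /* ... cudaMalloc the FFT and data buffers here ... */
    log_device_memory("after allocations");
    return 0;
}

The difference between the two numbers gives a rough idea of how much the app itself has allocated, which is the kind of figure the 300MB requirement is meant to cover.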
Quote:
When we're sure of what's happening there, we may lower this requirement again.
It would be great if that could be done. I have a lot of CUDA cards with only 256MB, and for those machines CUDA is now the only way to keep running alongside the new machines. In fact, CUDA gives these rigs a second life, because P4 and Pentium D (and of course Celerons) are getting too old for the computation wars.