I don't have any experience yet with AMD OpenCL, but with the CUDA app, the optimal setup for HT-enabled CPUs is to set the maximum CPU usage to 50% to avoid HT performance penalties. Since the OpenCL app is also CPU dependent, setting cores to 50% could be beneficial if your CPU has HT. This change may also improve performance for Bulldozer processors, which share one FPU between every two integer cores, but I haven't tested that myself.
Thanks for the responses folks.
CPU is a Phenom II X4 (no hyperthreading).
Ah. Setting max CPU CORE usage to 75% (I have four cores) gets GPU usage to 80-90%. Setting it back to 0% (i.e., no value) results in GPU usage plummeting again.
Setting CPU usage to 50% (not core usage, but CPU usage) results in about 60% of my CPU being used.
Even suspending CPU usage entirely (with the suspend option) results in only about 80-90% of the GPU being used by E@H.
The problem with these settings is that I then lose usage of some of my CPU.
What Collatz does is give the CPU thread that feeds the GPU task high priority, to ensure the GPU will always be fed. It also seems to allocate only 0.01 CPUs. In reality it is using about 0.2% of my CPU, or <1% of a core.
E@H claims to use 0.5 cores in BOINC, apparently set to "below normal" priority (which is higher than the "idle" priority the other threads are using). In reality it's using 6% of my CPU time, or 25% of one core.
You may wish to consult with the Collatz folks to discover how they get it to use 100% of the GPU without needing to "disable" BOINC's use of a core.
Hello! Thank you for the explanation. I have the 1 GB model with a Core i7-2600 (8 virtual cores) and a GPU utilization factor of 0.5. Nevertheless, I have only 9 tasks running at a time. Why is that? I thought there would be 10 tasks (8 cores and 2 GPU tasks). Two of the tasks are marked "0.5 CPUs + 0.5 GPUs" - what's that supposed to mean?
That actually makes perfect sense. Think about it: the 7 CPU tasks that are running require one core each, leaving only one core free. The 2 GPU tasks that are running require only 0.5 CPUs each (for a total of one whole core). So 7 CPU tasks consume 7 CPU cores, while the 2 GPU tasks consume the 8th and final CPU core, for a total of 9 tasks running at any one time.
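The arithmetic above can be sketched as a toy model (my own illustration, not actual BOINC scheduler code; it assumes each GPU task reserves 0.5 CPUs, as the task info shows, and that the accumulated reservation rounds down to whole cores):

```python
# Toy model of how BOINC fills cores: GPU tasks each reserve a CPU
# fraction; the accumulated reservation, rounded down to whole cores,
# is taken away from the cores available for pure CPU tasks.
def tasks_running(n_cores, n_gpu_tasks, cpus_per_gpu_task):
    reserved_cores = int(n_gpu_tasks * cpus_per_gpu_task)
    cpu_tasks = n_cores - reserved_cores
    return cpu_tasks + n_gpu_tasks

# Core i7-2600 (8 virtual cores), 2 GPU tasks at 0.5 CPUs each:
print(tasks_running(8, 2, 0.5))  # → 9 (7 CPU tasks + 2 GPU tasks)
```

With only one GPU task, the 0.5-CPU reservation rounds down to zero whole cores, so all 8 cores still run CPU tasks and 9 tasks run in total.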
As always, your explanation is greatly appreciated.
But I still don't understand how the number of CPU cores occupied with supporting GPU tasks depends on the quantity of GPU tasks run simultaneously. As far as I understood, no matter how many tasks you run on the GPU simultaneously, only one CPU core will be busy supporting them. Am I right? So if I run only one task at a time on the GPU, one CPU core will still be occupied by the GPU task, correct?
No, that's not correct. It all depends on the "0.5 CPUs + 1 GPU" part; i.e., if you run one GPU task, no CPU core will be reserved. If you run two or three GPU tasks concurrently, one CPU core will be used to support them. If you run four or five GPU tasks, two CPU cores will be set aside...
Regards,
Gundolf
Computers aren't everything in life. (Just kidding)
Quote:
As far as I understood, no matter how many tasks you run on the GPU simultaneously, only one CPU core will be busy supporting them. Am I right? So if I run only one task at a time on the GPU, one CPU core will still be occupied by the GPU task, correct?
This is incorrect, as Gundolf pointed out. I'm not sure I understand his answer completely, but to elaborate on it, this is how things work as I understand them:

If I run a single BRP4 task on my AMD/ATI GPU, the info for that task in the status column of the BOINC manager will read "0.5 CPUs + 1 ATI GPUs". In other words, that single BRP4 task will consume half of a CPU core. If I change my web preferences so that 2 BRP4 tasks run simultaneously, the task info for each task will then read "0.5 CPUs + 0.5 ATI GPUs". Each of those tasks will consume approximately half of the GPU, but since they're both still consuming half a CPU core each, that amounts to 1 full core. Naturally, if I run 4 BRP4 tasks simultaneously, they'll each consume one quarter of the GPU, but still half a CPU core each, amounting to 2 full cores.

So you can see that GPU tasks can easily consume more than a single CPU core if 1) a single GPU task consumes an appreciable fraction of a CPU core (like Einstein@Home BRP4 ATI tasks, which consume half of a CPU core each), and 2) enough of those tasks are run simultaneously. In stark contrast, Milkyway@Home ATI GPU tasks only consume 0.05 CPUs apiece, so if the GPU and/or its memory didn't become a limitation first, 20 MW@H tasks could be handled by a single CPU core; that is, I'd have to run 21 or more MW@H tasks at once to consume more than a single CPU core. Not so with Einstein BRP4 ATI tasks, as I showed above, which can easily consume more than a single CPU core if you run more than 2 of them at once.
I should also note that the CPU portion of the task info ("0.5 CPUs + 1 ATI GPUs") has always seemed more like an estimated value to me than an actual value, even though it can be altered with an app_info.xml file. Using an app_info.xml file (for other projects and applications, not Einstein@Home BRP4 ATI tasks), I've played with those parameters, and it didn't seem to do anything for my CPU/GPU utilization or run times. So if you really want to know how much CPU is being consumed by a particular GPU task, suspend all other projects/applications in BOINC, open the Windows Task Manager, run a single task on the GPU, and monitor CPU usage there.
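The cores-consumed comparison above boils down to a single multiplication; here is a hypothetical sketch using the 0.5 and 0.05 CPU fractions quoted in the posts:

```python
def cpu_cores_consumed(n_gpu_tasks, cpu_fraction_per_task):
    """CPU cores eaten by the feeder threads of n simultaneous GPU tasks."""
    return n_gpu_tasks * cpu_fraction_per_task

# Einstein@Home BRP4 ATI tasks: 0.5 CPUs each -> 3 at once already exceed a core
print(cpu_cores_consumed(3, 0.5))        # → 1.5
# Milkyway@Home ATI tasks: 0.05 CPUs each -> roughly 20 fit in a single core
print(cpu_cores_consumed(21, 0.05) > 1)  # → True (21 tasks spill past one core)
```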
To expand on Sunny's point: with my 3.2 GHz CPU, the E@H GPU app uses about 1/4 of a core. That means it costs me one core to run 4 E@H GPU tasks.
Collatz runs 100 GPU tasks per core.
I have no interest in the "points" and believe E@H to be a more worthwhile project (physics > maths ;-) ), but for simple optimal use of my hardware, I'll go with Collatz until the E@H GPU app becomes more CPU friendly.
Edit: Rather than Task Manager, on Windows use Process Explorer - http://technet.microsoft.com/en-gb/sysinternals/bb896653 - now by Microsoft. It even tells you how much of your GPU a specific task is using, so you don't need to stop other tasks.
If you're looking for a project application that utilizes the GPU much better and the CPU much less than the Einstein@Home BRP4 ATI application, but you don't want to waste resources on prime numbers (or any pure math, for that matter), try Milkyway@Home if your ATI GPU is double-precision capable (I can't tell, because you have your computer hidden). My HD 6950 averages 99% utilization (as opposed to Collatz's 97%), and while it uses approximately 5 times as much CPU, a MW@H GPU task still only consumes ~5% of a single CPU core.
Also, thanks for the reference to Process Explorer - I'll have to check it out.
*EDIT* - I just downloaded and installed Process Explorer, and it turns out Milkyway@Home GPU tasks only consume 0.11-0.12% of my 6-core CPU (or 0.66-0.72% of a single core)... so I guess they actually consume quite a bit less CPU than the default "0.05 CPUs" shown in the BOINC manager.
If you are experiencing the error below on Ubuntu 12.04 64-bit with an AMD/ATI graphics card, the fix is rather simple. In '/etc/OpenCL/vendors/', create a file called 'amdocl32.icd', enter the single line 'libamdocl32.so' in that file, and save it. The next time the client is started, E@H can use the card for processing.
Quote:
7.0.28
process exited with code 255 (0xff, -1)
[16:06:07][3184][INFO ] Application startup - thank you for supporting Einstein@Home!
[16:06:07][3184][INFO ] Starting data processing...
[16:06:07][3184][ERROR] Failed to get OpenCL platform/device info from BOINC (error: -1)!
[16:06:07][3184][ERROR] Demodulation failed (error: -1)!
16:06:07 (3184): called boinc_finish
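The fix above amounts to writing a single line into a single file. Here is a minimal sketch as a hypothetical helper, shown with a configurable directory so it can be tried without root; on a real system the directory is /etc/OpenCL/vendors, you would run it with root privileges, and you would restart the BOINC client afterwards:

```python
from pathlib import Path

def register_amd_icd(vendors_dir="/etc/OpenCL/vendors"):
    """Create the ICD registry entry that points the OpenCL loader at
    AMD's 32-bit implementation. Assumes libamdocl32.so is already
    installed by the graphics driver."""
    icd = Path(vendors_dir) / "amdocl32.icd"
    icd.parent.mkdir(parents=True, exist_ok=True)
    icd.write_text("libamdocl32.so\n")
    return icd
```

The equivalent shell command is simply echoing 'libamdocl32.so' into '/etc/OpenCL/vendors/amdocl32.icd' as root.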
No double precision here, I'm afraid (already investigated MW@H). I'm stuck with Collatz for now. But at least it's doing /something/.
It works nicely on a notebook with this AMD Llano 3610MX plus a Seymour ATI GPU. I run 4 CPU processes and 2 GPU processes. Good work! Sorry for my bad English.