Total GPU flops on Einstein?

Peter Hucker
Joined: 12 Aug 06
Posts: 207
Credit: 155,408,870
RAC: 1,072,610
Topic 220983

Is there somewhere I can see total GPU flops for all of Einstein?  Just out of interest.  The server status page mentions "CPU TFLOPS" (under "workunits and tasks", "computing" on the right); does that include GPUs?  Also near the bottom is "GPU productivity", but it only lists the number of hosts and credit.  Then under that is "computing capacity" in TFLOPS, which seems to equal the CPU total I mentioned first.  I guess I could get a rough estimate from the credit earned by GPUs, but why isn't it shown on the status page directly?  By the way, my rough estimate can't be right: I worked it out as 450 TFLOPS from all AMD and Nvidia GPUs, which would be only a tenth of what's being done on CPUs.
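For what it's worth, a back-of-the-envelope conversion from credit to FLOPS can be sketched like this. It assumes BOINC's nominal cobblestone definition (a host computing at a sustained 1 GFLOPS earns about 200 credits per day); granted credit actually varies by project and app, so treat the result as a rough estimate only.

```python
# Rough BOINC credit-to-FLOPS conversion, assuming the nominal
# cobblestone definition: a host computing at 1 GFLOPS earns
# 200 credits per day, so sustained GFLOPS ~= RAC / 200.

def rac_to_tflops(rac: float) -> float:
    """Convert recent average credit (credits/day) to sustained TFLOPS."""
    gflops = rac / 200.0      # 200 credits/day per GFLOPS (nominal)
    return gflops / 1000.0    # GFLOPS -> TFLOPS

# Example: a combined GPU RAC of 90,000,000 credits/day
print(rac_to_tflops(90_000_000))  # 450.0 TFLOPS
```

By this yardstick, a 450 TFLOPS figure would correspond to about 90 million credits per day of RAC across all GPU hosts combined.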

Jan Vaclavik
Joined: 1 Sep 05
Posts: 8
Credit: 842,438
RAC: 4,807

I am wondering about Intel

I am wondering about Intel GPUs. Despite their large installed base, they remain a rather exotic platform as far as distributed computing goes. AFAIK, Einstein is currently one of just two active projects supporting them.

Peter Hucker
Joined: 12 Aug 06
Posts: 207
Credit: 155,408,870
RAC: 1,072,610


Jan Vaclavik wrote:
I am wondering about Intel GPUs. Despite their large installed base, they remain a rather exotic platform as far as distributed computing goes. AFAIK, Einstein is currently one of just two active projects supporting them.

On every CPU I've looked at, using the onboard GPU is no better than just using it as a CPU.  When the GPU section is in use, the CPU part slows down or has fewer cores available.  It's handy to have it there so you don't need a graphics card for display or even games, but as a platform for computing like Einstein, I fail to see the point; just use it as a CPU.

Having said that, if you want to use it, don't most projects allow it?  Since OpenCL works on them as well as on AMD and Nvidia, any GPU project should support Intel GPUs.

Jan Vaclavik
Joined: 1 Sep 05
Posts: 8
Credit: 842,438
RAC: 4,807


Peter Hucker wrote:

On every CPU I've looked at, using the onboard GPU is no better than just using it as a CPU.  When the GPU section is in use, the CPU part slows down or has fewer cores available.  It's handy to have it there so you don't need a graphics card for display or even games, but as a platform for computing like Einstein, I fail to see the point; just use it as a CPU.

Having said that, if you want to use it, don't most projects allow it?  Since OpenCL works on them as well as on AMD and Nvidia, any GPU project should support Intel GPUs.

The apps are usually opencl-ati or opencl-nvidia and the system looks for the respective graphics card. Einstein and the now inactive Seti have opencl-intel_gpu.

My experience with the Intel GPU is a bit complex as well. I use an ancient HD Graphics P4000 (the first OpenCL-capable GPU generation from Intel, no less) along with a 4C/8T CPU. Running the GPU app did not seem to affect the CPU, which maintained the expected boost frequency. To make the GPU work at its boost frequency, however, I had to sacrifice a single CPU thread (I suspect the GPU app was CPU-starved and the GPU underclocked because it was not fully utilized), but it seemed like a worthwhile trade, as Seti was running faster on the GPU than on a CPU thread. I can't really compare Einstein, as BRP4 seems to be reserved for odd platforms like ARM, PPC, or the Intel GPU.
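The plan-class matching described above (the scheduler only sends an app to hosts with a matching GPU) can be sketched roughly. The plan-class names are the ones from the post; the vendor-matching logic is a simplified illustration, not the actual BOINC scheduler code, which also checks driver versions, GPU RAM, and more.

```python
# Toy sketch of BOINC-style plan-class matching (simplified; the real
# scheduler also checks driver versions, available RAM, etc.).
# Plan-class names are the ones mentioned in the thread; the
# matching logic here is illustrative only.

PLAN_CLASS_VENDOR = {
    "opencl-ati": "amd",
    "opencl-nvidia": "nvidia",
    "opencl-intel_gpu": "intel",
    "cuda": "nvidia",
}

def runnable_apps(host_gpus, project_apps):
    """Return the project app versions this host's GPUs could run."""
    vendors = set(host_gpus)
    return [app for app in project_apps
            if PLAN_CLASS_VENDOR.get(app) in vendors]

# A host with only an Intel iGPU gets only the Intel OpenCL app:
print(runnable_apps(["intel"],
                    ["opencl-ati", "opencl-nvidia", "opencl-intel_gpu"]))
# ['opencl-intel_gpu']
```

So whether Intel GPUs get work depends entirely on whether the project ships an `opencl-intel_gpu` app version at all, not on anything the host does.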

Peter Hucker
Joined: 12 Aug 06
Posts: 207
Credit: 155,408,870
RAC: 1,072,610


Jan Vaclavik wrote:
My experience with the Intel GPU is a bit complex as well. I use an ancient HD Graphics P4000 (the first OpenCL-capable GPU generation from Intel, no less) along with a 4C/8T CPU. Running the GPU app did not seem to affect the CPU, which maintained the expected boost frequency. To make the GPU work at its boost frequency, however, I had to sacrifice a single CPU thread (I suspect the GPU app was CPU-starved and the GPU underclocked because it was not fully utilized), but it seemed like a worthwhile trade, as Seti was running faster on the GPU than on a CPU thread. I can't really compare Einstein, as BRP4 seems to be reserved for odd platforms like ARM, PPC, or the Intel GPU.


It probably depends on the CPU.  All mine got approximately the same total flops across everything running on the CPU and GPU sides, whether I ran a GPU app or not, and whether I freed up a core or not.

I thought any project that used opencl automatically made it available for every type of GPU.  For example every project which has AMD work also has Nvidia work, but not vice versa (as Nvidia also runs Cuda).

Keith Myers
Joined: 11 Feb 11
Posts: 647
Credit: 562,689,003
RAC: 1,015,901


Quote:
I thought any project that used opencl automatically made it available for every type of GPU.  For example every project which has AMD work also has Nvidia work, but not vice versa (as Nvidia also runs Cuda).

Not necessarily.  It all depends on which individual OpenCL applications the project developers write.  They could very well decide to offer only OpenCL-ATI and OpenCL-Intel applications, skip an OpenCL-Nvidia app entirely, and develop a native CUDA application instead because it is faster than OpenCL.

Then they leave it up to the scheduler to send out whatever application matches what the host can run.  There certainly have been many cases over the years of Microsoft sending out Nvidia drivers with CUDA enablement but without any OpenCL component.  This has tripped up Seti users for years: BOINC would report only CUDA drivers and no OpenCL drivers at startup, so those hosts would not run any Astropulse OpenCL tasks.


DanNeely
Joined: 4 Sep 05
Posts: 1,302
Credit: 1,582,341,572
RAC: 990,637


Keith Myers wrote:
Quote:
I thought any project that used opencl automatically made it available for every type of GPU.  For example every project which has AMD work also has Nvidia work, but not vice versa (as Nvidia also runs Cuda).

Not necessarily.  It all depends on which individual OpenCL applications the project developers write.  They could very well decide to offer only OpenCL-ATI and OpenCL-Intel applications, skip an OpenCL-Nvidia app entirely, and develop a native CUDA application instead because it is faster than OpenCL.

Then they leave it up to the scheduler to send out whatever application matches what the host can run.  There certainly have been many cases over the years of Microsoft sending out Nvidia drivers with CUDA enablement but without any OpenCL component.  This has tripped up Seti users for years: BOINC would report only CUDA drivers and no OpenCL drivers at startup, so those hosts would not run any Astropulse OpenCL tasks.


That's a problem here too; after a major Windows update you need to reinstall the Nvidia drivers to restore the now-missing OpenCL support.
