Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Matt White
Matt White
Joined: 9 Jul 19
Posts: 120
Credit: 280,798,376
RAC: 0

My GW host completed its first O2MD GPU task this morning. Run time was 4,444 seconds, or about 1.23 hours. Still waiting for validation. I have about 8 more of these tasks in the work queue. This is the Win7/NVIDIA box.

Clear skies,
Matt
Zalster
Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

Have done 330 GPU O2MD1 tasks: 200 pending, 130 validated, no errors, no invalids. 3.2 minutes average.

Anonymous

Zalster wrote:
Have done 330 GPU O2MD1 tasks: 200 pending, 130 validated, no errors, no invalids. 3.2 minutes average.

37 minutes per unit for me on a newly commissioned Linux box with a single AMD Radeon RX 560 Series (3970MB) GPU. Quite a bit of difference in time.

This box is dedicated to O2MD1 (v2.01) WUs only.

Keith Myers
Keith Myers
Joined: 11 Feb 11
Posts: 4,802
Credit: 17,875,529,819
RAC: 2,708,440

Must be a big difference in the OpenCL stack between Nvidia and AMD for the hardware to be that far apart. Or the compiler uses much better optimization for Nvidia hardware compared to AMD hardware.

cecht
cecht
Joined: 7 Mar 18
Posts: 1,460
Credit: 2,561,914,090
RAC: 2,042,542

robl wrote:

37 minutes per unit for me on a newly commissioned Linux box with a single AMD Radeon RX 560 Series (3970MB) GPU. Quite a bit of difference in time.

This box is dedicated to O2MD1 (v2.01) WUs only.

Is that while running at 1X? My RX 560 2GB (which is listed as a second RX 460 for that Linux host) gives ~16.5 min/unit running at 3X concurrent tasks.
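
Running multiple concurrent tasks per GPU, as mentioned here, is normally configured with an app_config.xml in the project's data directory. A minimal sketch, assuming the application name einstein_O2MD1 that appears later in this thread (verify the exact name in your client's event log):

<app_config>
    <app>
        <name>einstein_O2MD1</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

With <gpu_usage> set to 0.33, the client schedules three of these tasks per GPU.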

Ideas are not fixed, nor should they be; we live in model-dependent reality.

archae86
archae86
Joined: 6 Dec 05
Posts: 3,149
Credit: 7,110,524,931
RAC: 639,478

We have had a situation here at Einstein for some time where modern AMD cards over-perform on the Gamma-Ray Pulsar tasks relative to modern Nvidia cards, compared with their relative standing in widely published competitive reviews.

Now we seem to have a developing situation in which modern Nvidia cards over-perform relative to modern AMD cards on the current (and a previous few) GW tasks.

If this persists, one might speculate that it would be inviting to equip a PC with one each of suitable AMD and Nvidia cards, directing the GW tasks to the green side and the GRP tasks to the red side.

Is this easy to do? Hard to do but possible, or out of the question?

And, yes, I suspect that for people with large fleets it probably would make much more sense to build dedicated boxes of flavor GW and flavor GRP. But there are plenty of us with one to four boxes who might find such a setup interesting.

Zalster
Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

It is possible to prevent certain work types from running on specific GPUs by using the exclude option in cc_config.xml, specifying the GPU type and application. I know I did it a long time ago for this project; I would need to brush up on it.

Anonymous

Keith Myers wrote:
Must be a big difference in the OpenCL stack between Nvidia and AMD for the hardware to be that far apart. Or the compiler uses much better optimization for Nvidia hardware compared to AMD hardware.

I may have made a mistake. Should I be looking at "run times" or "CPU times"? My earlier value was based on "run time".

Anonymous

cecht wrote:
robl wrote:

37 minutes per unit for me on a newly commissioned Linux box with a single AMD Radeon RX 560 Series (3970MB) GPU. Quite a bit of difference in time.

This box is dedicated to O2MD1 (v2.01) WUs only.

Is that while running at 1X? My RX 560 2GB (which is listed as a second RX 460 for that Linux host) gives ~16.5 min/unit running at 3X concurrent tasks.

Running at 3X concurrent.

Keith Myers
Keith Myers
Joined: 11 Feb 11
Posts: 4,802
Credit: 17,875,529,819
RAC: 2,708,440

Use an <exclude_gpu> statement in the cc_config.xml file for the GPU/app-name combination.

https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration

<exclude_gpu>
   <url>project_URL</url>
   [<device_num>N</device_num>]
   [<type>NVIDIA|ATI|intel_gpu</type>]
   [<app>appname</app>]
</exclude_gpu>

You can set up different exclusions for AMD types and Nvidia types.

<exclude_gpu>
    <url>http://einstein.phys.uwm.edu/</url>
    <device_num>0</device_num>
    <type>NVIDIA</type>
    <app>hsgamma_FGRPB1G</app>
</exclude_gpu>

<exclude_gpu>
    <url>http://einstein.phys.uwm.edu/</url>
    <device_num>1</device_num>
    <type>ATI</type>
    <app>einstein_O2MD1</app>
</exclude_gpu>
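
Note that <exclude_gpu> statements only take effect when placed inside the <options> section of cc_config.xml in the BOINC data directory. A minimal sketch wrapping the first example:

<cc_config>
    <options>
        <exclude_gpu>
            <url>http://einstein.phys.uwm.edu/</url>
            <device_num>0</device_num>
            <type>NVIDIA</type>
            <app>hsgamma_FGRPB1G</app>
        </exclude_gpu>
    </options>
</cc_config>

After editing, re-read the config files from the BOINC Manager (Options -> Read config files) or restart the client for the exclusions to apply.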

 
