Gamma-ray pulsar binary search #1 on GPUs

petri33
Joined: 4 Mar 20
Posts: 123
Credit: 4043395819
RAC: 7023493

Hi,

Just saw LATeah30xxx (fast) and LATeah40xxx (slow) tasks. All the GPUs are in the same machine, which runs no CPU tasks.

On my TITAN V the difference is 12 seconds -- 8.6 % more time [151 s vs 139 s].

On the RTX 2080 Ti the difference is 23 seconds -- 13 % more time [200 s vs 177 s].

On the GTX 1080 the difference is 44 seconds -- 15 % more time [340 s vs 296 s].
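
For anyone who wants to check the arithmetic, here is a quick sketch (Python; the runtimes are just the ones listed above) of how those percentages fall out:

    # Runtimes in seconds per card: (slow LATeah40xxx task, fast LATeah30xxx task)
    runtimes = {
        "TITAN V": (151, 139),
        "RTX 2080 Ti": (200, 177),
        "GTX 1080": (340, 296),
    }

    for card, (slow, fast) in runtimes.items():
        diff = slow - fast
        pct = 100 * diff / fast  # extra time relative to the fast task
        print(f"{card}: +{diff} s ({pct:.1f} % more time)")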

I think that both the 30xxx and 40xxx tasks pay a fair amount of 'credit'. I'll have to wait for my toaster anyway.

Burned
Joined: 25 Jun 21
Posts: 32
Credit: 388221900
RAC: 0

I am (very) late to BOINC, so I have missed all the backstory, especially around the Theory of Points. I do see there is a huge disparity between GPU and CPU units, but it's my understanding (at least from Folding, which is my home project) that GPUs can do a lot more science per unit of time and cost, and so are encouraged by the points system. Over there, work units range widely in complexity and ongoing progress depends on work-unit completion, so the basic metric is how fast you return your result. Here, work units are discrete and mostly equivalent (as I understand it, anyway), so you just make your deadline to get your "benchmarked" points.

I've also noticed that the Gravitational Wave search rewards a lot more points per unit of CPU time than the Gamma-ray pulsar search, so my assumption is that the scientists want us to process O2MD1 work first. So that's what I'm doing.

Cameron
Joined: 26 Apr 05
Posts: 15
Credit: 75287230
RAC: 194854

I've always felt that the 'Credit' awarded was fair, maybe a fraction generous.

Still a fair and equitable amount for an hour of my GPU time.

rbpeake
Joined: 18 Jan 05
Posts: 266
Credit: 1131787797
RAC: 755361

I seem to recall that assigning points was a bit of a random process. 

At the beginning of the project, around 16 years ago, they used the default system, in which BOINC assigned points based on a complicated formula, and nobody was happy with that. Next they went to a fixed-point award system, and those awards seemed to be somewhat randomly assigned. For instance, there is a big discrepancy between the points awarded for the Gamma-ray pulsar binary search #1 on GPUs (3,465 points) and the new Gravitational Wave search O3 All-Sky #1 (1,000 points).

My 2-cents.

Burned
Joined: 25 Jun 21
Posts: 32
Credit: 388221900
RAC: 0

The GPU points disparity exists on every project that I am aware of.

My AMD 6900XT can do 480 pulsar searches a day at 3,465 points per task, or 1,663,200 ppd. The Ryzen 7 5800X that feeds it can do about 7 Gravitational Wave searches per day per core. I run them on the remaining 6 cores that aren't feeding the GPU, so about 42 tasks per day at 1,000 points per task, or 42,000 ppd. That is about a 40-to-1 ratio.

In Folding@home, the same machine does about 4,250,000 ppd on the GPU and 272,000 ppd on the CPU, so about a 15-to-1 ratio. Obviously, the absolute number of points is arbitrary, set by each project's benchmarking process, and those processes aren't equivalent.

Two notes on those ratios, though. Einstein is optimized for AMD, Folding for NVIDIA. Einstein can also run two GPU work units at once and accomplish more work. So you would expect Einstein to "outperform" Folding@home on an AMD card.

My gut feeling is that the points disparity is fair, given the dollar cost to deploy and power a GPU and the amount of science a GPU accomplishes versus a CPU for a given project. It would be nice if this were explained in the FAQ.
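
To make the comparison concrete, here is a small sketch (Python; the task rates and point values are just the figures quoted above) of how those ratios come out:

    # Einstein@Home: points per day (ppd) = tasks/day * points/task
    einstein_gpu_ppd = 480 * 3465    # 6900XT on pulsar search -> 1,663,200 ppd
    einstein_cpu_ppd = 6 * 7 * 1000  # 6 free cores * 7 GW tasks/day * 1,000 pts -> 42,000 ppd
    print(f"Einstein GPU/CPU: {einstein_gpu_ppd / einstein_cpu_ppd:.1f} to 1")  # ~40 to 1

    # Folding@home on the same machine, using the quoted ppd figures
    print(f"Folding GPU/CPU: {4_250_000 / 272_000:.1f} to 1")  # ~15 to 1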

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3956
Credit: 46869072642
RAC: 64439560

Burned wrote:

Einstein is optimized for AMD

I would rephrase that as "Einstein is not well optimized for Nvidia" rather than "Einstein is optimized for AMD".

Nvidia cards can go significantly faster; inefficiencies in the Nvidia app are holding them back.

Guðni Már Gilbert
Joined: 30 Jun 20
Posts: 12
Credit: 439124302
RAC: 229391

My GTX 1070 card usually needs 11-12 minutes to finish each GPU task.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

My RX 570 under Ubuntu 20.04.2 took a little under 10 minutes for the FGRP tasks.

https://einsteinathome.org/host/12878436/tasks/2/40

As I recall, it was using around 110 watts. That is really not much different from my GTX 1070 in efficiency.

But I think the greater efficiency of the AMD cards shows up in the GW work. My RX 570 was doing them 1X in 16.5 minutes at about 95 watts when supported by two or more cores of a Ryzen 3600. The last time I checked, the GTX 1070 was about half as power efficient.

And the GTX 1070 is at least one generation later than the RX 570.  But it is good enough for now.
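
As a rough check on that efficiency claim, here is a sketch (Python; the 95 W and 16.5 min figures are the ones above, and the GTX 1070 line just restates my "about half as efficient" estimate):

    # Energy per GW task: power draw (W) * runtime (h) = watt-hours per task
    rx570_wh = 95 * (16.5 / 60)  # ~26 Wh per task on the RX 570
    print(f"RX 570: {rx570_wh:.1f} Wh per GW task")

    # "About half as power efficient" would put the GTX 1070 near double that:
    print(f"GTX 1070 (implied): ~{2 * rx570_wh:.0f} Wh per task")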

TRAPPIST-713
Joined: 13 May 20
Posts: 12
Credit: 2487802831
RAC: 1417660

FGRPB1G looks like it will be over in ~2 weeks. Is FGRPB2G on the horizon?

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3956
Credit: 46869072642
RAC: 64439560

Looks like the FGRPB1G validator is having some kind of issue; validation appears to have stopped around 15:00 UTC today.

https://einsteinathome.org/content/suddenly-most-tasks-marked-validate-error
