Vega56: much more efficient on FGRP than on GW-O2-MD?

fastbunny
Joined: 20 Apr 06
Posts: 22
Credit: 91424422
RAC: 0
Topic 223707

I notice that on my Vega 56, the 'Gamma-ray pulsar binary search' tasks finish much quicker (6m42) than the 'Gravitational Wave search O2 Multi-directional' tasks (22m43). However, the credit for the quicker task is much higher (3450) than for the slower tasks (1000). This seems the wrong way around.

If I were to run only the faster tasks, which run 3.4 times as fast and give 3.5 times the credit, I would get almost 12 times as much credit as by running only the slower tasks. My computer mostly gets sent the slower tasks, however.
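That "almost 12 times" figure can be checked quickly from the runtimes and credit values quoted above:

```python
# Credit-per-second comparison, using the runtimes and credits quoted above.
gr_credit, gr_seconds = 3450, 6 * 60 + 42    # Gamma-ray task: 3450 credits in 6m42s
gw_credit, gw_seconds = 1000, 22 * 60 + 43   # GW O2-MD task: 1000 credits in 22m43s

gr_rate = gr_credit / gr_seconds   # ~8.58 credits/s
gw_rate = gw_credit / gw_seconds   # ~0.73 credits/s

print(round(gr_rate / gw_rate, 1))  # 11.7, i.e. "almost 12 times as much credit"
```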

Is this huge difference in credit reward caused by a wrong valuation of the work done, or is my computer simply MUCH more efficient running the faster tasks?
In the first case, that's fine, I care more about the project than the credits.
However, in the second case, I should adjust my project preferences so that I only get the faster tasks, because then the project would benefit much more from my contributed computing power.

I'm looking forward to your insights.

Edit: perhaps I should add that with both types of work I run two tasks at the same time on the GPU. I remember from the past this was optimal, when I tested it, but I have been out of the loop for a while.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3963
Credit: 47183642642
RAC: 65413422

One thing to keep in mind is that your CPU plays a more significant role in the GPU processing with the GW tasks than with the GR tasks. That's part of the reason the GW tasks run slower, especially if you have a rather weak (or overcommitted) CPU. Also, GW work uses a lot more GPU memory than the GR tasks, but with 8 GB on your card, 2x should be fine.

Are you running CPU projects also? Are you running the CPU at high load on those other projects? Make sure you are not running your CPU at 100% if you are running other projects.

I agree with you that GW should pay more to better represent the computational effort necessary.

_________________________________________________________________________

fastbunny
Joined: 20 Apr 06
Posts: 22
Credit: 91424422
RAC: 0

That's good to know, thanks.

Yes I am also using the CPU for Einstein and Rosetta or LHC. I am running a Ryzen 1700X, but at max 44% of the (logical) cores used. That means 7 cores maximum. Usually 6 CPU tasks are running and 2 GPU tasks, so each GPU task always has a full core at its disposal. I guess because the GW tasks require 0.9 CPUs that it's possible to go over this 7 cores slightly (6 + 0.9 + 0.9).

Perhaps a faster CPU would help me. GPU utilization is nowhere near 100% when running two GW tasks.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229411545
RAC: 1150008

The Einstein project takes the position that credit rate should reflect the useful computational results realized. This is rather inexact (as shown by the very round numbers they sometimes use for standard credit awards), but it is the intent. So an appeal to increase one rate relative to the other just because systems score lower on it does not align with the stated objectives.

While the ratio varies quite a bit on different systems, I think all of us see drastically higher credit rate/hour on current GPU Gamma-ray work than on current GPU Gravity-Wave work.

Some folks here attribute this to the immaturity of the GW application.  I think a bigger issue may simply be that the GW computation being undertaken is less fully parallelizable on current GPU architectures.  

If you want maximum credit, at the moment the simple answer is to run GR GPU only.  If you want your system to run smoothly, without extreme fluctuation in work fetch, it is prudent to allow one or the other, but not both.  I, personally, choose to run one system exclusively GW GPU, and two systems exclusively GR GPU.

The GR GPU task in the current variant is famous for requiring very little of the host system CPU and PCI bus.  So very economical systems succeed with old slow CPUs and old slow motherboards.  The current GW task is far more demanding of these resources, so is much more likely to show improvement with a given GPU if faster CPU or motherboard support is provided.  I suspect it is also more common to see a big improvement by relieving the host system of other tasks (such as BOINC CPU work) and perhaps by raising the priority of the support task for GW that runs on the CPU.

One complication in my usual advice to "test, test, test", is that unlike the current GR GPU tasks, the current GW GPU tasks vary quite a lot in how long they take to complete because of internal task characteristics.

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3963
Credit: 47183642642
RAC: 65413422

fastbunny wrote:

That's good to know, thanks.

Yes I am also using the CPU for Einstein and Rosetta or LHC. I am running a Ryzen 1700X, but at max 44% of the (logical) cores used. That means 7 cores maximum. Usually 6 CPU tasks are running and 2 GPU tasks, so each GPU task always has a full core at its disposal. I guess because the GW tasks require 0.9 CPUs that it's possible to go over this 7 cores slightly (6 + 0.9 + 0.9).

Perhaps a faster CPU would help me. GPU utilization is nowhere near 100% when running two GW tasks.

 

Keep an eye on real-world CPU usage, not just the limit you've set. Make sure you're not hitting 100%; if you are, you're just causing bottlenecks for everything, especially the GW GPU tasks, which need the CPU to do work for the GPU.

"BOINC math" is a fickle thing, where 0.9 = 0 and 0.9 + 0.9 = 1.0. So it ends up thinking you have more free threads than you really do, and will try to allocate them to other tasks if you've allowed it. This is compounded by the fact that the GW GPU tasks routinely use MORE than 1.0 threads anyway, sometimes up to 1.5 depending on the CPU.

Your best bet is to use an app_config.xml file to force BOINC to allocate 1.0 CPU per GPU task rather than the project's default of 0.9. It's also best to set your max CPU usage % in your compute preferences to 90-95% (adjust as necessary to keep real CPU usage under 100%) to prevent over-allocation of CPU resources.
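A minimal app_config.xml sketch of that suggestion. The app_name and plan_class values below are examples only; the actual names for your host are listed in client_state.xml in your BOINC data directory. The file goes in the Einstein@Home project directory and is picked up via Options → Read config files (or a client restart).

```xml
<app_config>
  <app_version>
    <!-- App and plan-class names are examples; verify yours in client_state.xml -->
    <app_name>einstein_O2MD1</app_name>
    <plan_class>GW-opencl-ati</plan_class>
    <avg_ncpus>1.0</avg_ncpus> <!-- reserve a full CPU thread per GW GPU task -->
    <ngpus>0.5</ngpus>         <!-- 0.5 GPU per task = two tasks per GPU -->
  </app_version>
</app_config>
```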

_________________________________________________________________________

fastbunny
Joined: 20 Apr 06
Posts: 22
Credit: 91424422
RAC: 0

It's not really about credits; the question is whether the (lack of) credits means that I'm not contributing as much as I could. It's strange that an Nvidia 1050 Ti takes the same amount of time to complete a GW workunit as my Vega 56. I saw that on one of my workunits, where the other participant had a 1050 Ti. So I thought: perhaps I could utilize my GPU better if I only ran the other tasks.

I'll try to limit CPU use a bit further still. My CPU has 16 threads, so if I run 5 instead of 6 CPU tasks, there should be plenty of resources left for 2 GPU tasks.

San-Fernando-Valley
Joined: 16 Mar 16
Posts: 411
Credit: 10240753455
RAC: 19922048

BOINC Manager tab "Properties" shows/says for "Estimated computation size" for a WU of type

    GW   -->     144,000 Flops       Credit  1,000    

    GR    -->     525,000 Flops       Credit  3,465

What did Einstein say about "Time"?

Seems OK to me.
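Spelling that out with the numbers quoted above: the credit per unit of estimated computation size comes out nearly equal for the two task types, which is why the awards look consistent.

```python
# Credit per unit of "Estimated computation size", using the values quoted above.
gw_credit_per_unit = 1_000 / 144_000    # ~0.00694
gr_credit_per_unit = 3_465 / 525_000    # ~0.00660

# The two rates agree to within ~5%, so credit tracks the estimated size.
print(round(gw_credit_per_unit / gr_credit_per_unit, 2))  # 1.05
```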

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3963
Credit: 47183642642
RAC: 65413422

San-Fernando-Valley wrote:

BOINC Manager tab "Properties" shows/says for "Estimated computation size" for a WU of type

    GW   -->     144,000 Flops       Credit  1,000    

    GR    -->     525,000 Flops       Credit  3,465

What did Einstein say about "Time"?

Seems OK to me.

Where do you see this? There is no tab in BOINC Manager labeled "Properties"; I see [Notices/Projects/Tasks/Transfers/Statistics/Disk].

And the Properties button on the Projects tab does not show this info.

 

--edit--

Found it: on the "Tasks" tab, select an individual task and then click the Properties button on the panel on the left side.

But I'd be interested to know where those values come from: whether they are set by the project, or estimated by BOINC (BOINC isn't good at FLOP counting, especially for GPUs).

_________________________________________________________________________

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

I am running both GR and GW tasks on my Intel i5 and GTX 1650 with 4 GB VRAM. When I was running them on an AMD Ryzen 5 1400 with a GTX 1060 and 3 GB video RAM, I was severely scolded because they said that 3 GB is not sufficient for GW tasks. But now in GPU-Z I see that they are using less than 2 GB VRAM. I also see that the GTX 1650 (Turing) uses less power than the GTX 1060 (Pascal).

Tullio

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3963
Credit: 47183642642
RAC: 65413422

tullio wrote:

I am running both GR and GW tasks on my Intel i5 and GTX 1650 with 4 GB VRAM. When I was running them on an AMD Ryzen 5 1400 with a GTX 1060 and 3 GB video RAM, I was severely scolded because they said that 3 GB is not sufficient for GW tasks. But now in GPU-Z I see that they are using less than 2 GB VRAM. I also see that the GTX 1650 (Turing) uses less power than the GTX 1060 (Pascal).

Tullio

Back then, some GW tasks used more than 3 GB, but now they generally use less; the project admins made changes to both the scheduler and the tasks. Things change.

 

But this has nothing to do with the topic at hand; please stay on topic. We are discussing credit earned per task type, not how much memory is being used.

_________________________________________________________________________

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

OK. Even to me the credits seem off: GW tasks should earn the same credit as GR tasks.

Tullio
