Fastest GPU for Einstein@home

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

Rolf_13 wrote:
The guy has two R9 cards; that probably explains why - each card does maybe 150 FGRPB1G jobs a day? The OpenCL code might not be optimised for the latest AMD cards either.

AMD have made a couple of dual-GPU cards, the R9 295X2 and the HD 7990, so this might be one of those.

I used to run the 7990 but power of ~500 W and temperatures >85 °C became too much for carbon-based life forms in the same room.  Silicon-based life forms - no problem.  My RX 480 is averaging about 620K at 3x (GPU utilization factor 0.33) whilst behaving nicely as a daily driver and is, mostly, carbon-life-form friendly.

I can't remember if AMD fixed the bug with running multiple OpenCL tasks on Windows on GCN 1.2(?) cards and above.

Chooka
Joined: 11 Feb 13
Posts: 117
Credit: 3230260814
RAC: 6

Thanks Rolf - Patience was all I needed for the changes to kick in :)

AgentB wrote:
I used to run the 7990 but power of ~500 W and temperatures >85 °C became too much for carbon-based life forms in the same room.  Silicon-based life forms - no problem.  My RX 480 is averaging about 620K at 3x (GPU utilization factor 0.33) whilst behaving nicely as a daily driver and is, mostly, carbon-life-form friendly.

LOL AgentB. :D Sounds like my R9 390.

I've yet to compare my old 390 to this Vega. Not much difference I'm guessing.

I've got 2 x WU's running now. I'll wait and see how that goes.

Thx guys


Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3291744532
RAC: 1531208

AgentB wrote:
I can't remember if AMD fixed the bug with running multiple OpenCL tasks on Windows on GCN 1.2(?) cards and above.

Doesn't seem so. I have retested Polaris (RX 480) and Fiji (Fury X) with the latest drivers and it's still a no-go.
But Vega works well.

-----

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109380066183
RAC: 35973212

Chooka wrote:
Thanks Rolf - Patience was all I needed for the changes to kick in :)

You don't need to be patient if you know a few tricks :-).

When you change the GPU utilization factor on the website, the change is propagated to your host(s) when new tasks of that particular type are downloaded.  So, doing an 'update' is not sufficient if it doesn't result in new work.  The quickest way to get the change propagated is to make a temporary small increase in your work cache size so that an 'update' will cause new work to be sent.  You can then revert the cache size change.

The other way to configure concurrent GPU tasks is to use BOINC's application configuration mechanism.  It might look unduly complicated but most of the entries are optional and a basic file to control GPU task concurrency will be quite simple.  It can be quite useful if you want to do a series of experiments to see what works best.  Edit a parameter in the file, click 'reread config files' in BOINC Manager, immediate change - job done.

Apart from that aspect, it gives you greater control over the allocation of CPU cores to support GPU tasks.  The default is 1 CPU core per GPU task instance.  That is needed for nvidia but not for AMD.  I'm running a few hosts with Pentium dual-core CPUs and dual Pitcairn series (HD7850, R7 370, etc.) GPUs.  I've also got a host with dual RX 460s.  All of these are using app_config.xml files with gpu_usage set to 0.5 and cpu_usage set to 0.2.  Since 4 x 0.2 = 0.8 (i.e. less than 1 full core), this means that no CPU cores are actually 'reserved' for GPU support, even though 4 GPU tasks are running concurrently.  It will actually work (somewhat more slowly :-) ) like this - I have done it by accident for a short while (and wondered why the GPU tasks were taking a bit longer than normal) until I woke up and set BOINC's CPU usage preference to 50% (which I was supposed to have done beforehand) :-).
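
For reference, the app_config.xml on those dual GPU hosts follows exactly the same pattern as the suggested file at the bottom of this message, just with gpu_usage at 0.5 instead of 0.33 - roughly like this (an illustrative sketch, not a copy of any particular host's file):

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>   <!-- 2 concurrent GPU tasks per GPU, i.e. 4 across dual GPUs -->
            <cpu_usage>0.2</cpu_usage>   <!-- 4 x 0.2 = 0.8, so no full CPU core is reserved -->
        </gpu_versions>
    </app>
</app_config>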

The advantage of using the BOINC 50% of CPUs setting is that BOINC will only fetch CPU tasks for one CPU rather than two.  If I set 100% of CPUs and set cpu_usage to 0.25 in app_config.xml,  I get the same outcome of 4 GPU tasks and 1 CPU task crunching concurrently but BOINC will actually still make work requests (and potentially overfetch) as if two CPUs were crunching.

I had a quick look at your host with Vega - very interesting.  Your crunch times seem to have gone from (roughly) around 400s (or 800s for two consecutively) to around 630s for two concurrently.  That's a nice speedup.  I see you are doing GW tasks on your CPUs in quite a nice time (still a good CPU despite its age).  The concurrent CPU tasks would have reduced from 3 to 2 with the change to running two concurrent GPU tasks, assuming BOINC is allowed to use 100% of CPU cores.
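
(In throughput terms: two consecutive tasks used to take about 2 x 400s = 800s, while two concurrent tasks now take about 630s, so the host is getting through roughly 800/630, or about 27%, more work - an effective ~315s per task.  Ballpark figures only, based on the times quoted above.)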

If it were my host (wishful thinking :-) ) my next experiment would be to try 3x on the GPU without further reducing CPU usage.  It's entirely up to you but you could do just that by using an app_config.xml file.  If you want to do that, you could copy and paste just the entries at the bottom of this message into a plain text editor (very important) and save it in your Einstein project directory, making sure it is called exactly app_config.xml.  The parameters given would allow 3x GPU tasks without reserving any CPU cores.  In fact, no CPU cores would be reserved until you tried to run 5 GPU tasks - I'm certainly not suggesting that might be worthwhile, though.

So (also very important) you should set BOINC's % of CPUs allowed to either 25% or 50% or 75%, depending on how many cores you want to reserve.  I'd suggest 50% so that only two cores are allowed.  You can set this quickly through BOINC Manager - local preferences.  That way the setting only applies to this host and not any others.  I tend to suspect that 3x GPU tasks and 2x CPU tasks might be optimum for your host but the only way to be sure is to do the experiments :-).

Doing it exactly as described above (with the suggested parameters below) gives you the ability to play with CPU task concurrency without having to further edit app_config.xml.  Just change the BOINC % of CPUs value locally -> instant change.  To really know if a setting is 'better' you need to accumulate lots of results at each setting and work out averages.  Patience is a virtue :-).

========== Suggested content for an app_config.xml file ===========

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

==========   End of content for an app_config.xml file   ===========

There is a lot to digest above and you are bound to have questions.  If you do, please start a new thread so we don't further divert the conversation here.

Cheers,
Gary.

Chooka
Joined: 11 Feb 13
Posts: 117
Credit: 3230260814
RAC: 6

Hi Gary,

Thank you very much for taking the time to explain that! I do use an app_config file for Milkyway@Home.

I must admit, I feel really useless when I have to deal with those files. I'm not an IT guru but you have explained it really well. I think it was my R9 280X or R9 390 which first resulted in having to use one of those files.

I'm happy to leave it running 2 x WU's at the moment. I'll sit back and watch how it goes for a while then make some tweaks :) I'm really happy to see I've cracked the 1Mil credits per day running 2 WU's. Little bit worried about my power bill though :/ (Being Australian.... you know what I mean!)

For now...off to work.

Thanks once again, I appreciate your feedback!

Andrew


Variable
Joined: 6 Oct 13
Posts: 31
Credit: 832903158
RAC: 886455

FWIW, I tried an experiment yesterday limiting to 0.5 cpu usage per task on one of my machines with Nvidia cards (1070 & 750), and it doesn't seem to have slowed them down.  Task run times overnight are pretty much the same as before when it was using a full thread per Einstein task.
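
(In case anyone wants to try the same thing: the 0.5 figure goes in the cpu_usage entry of an app_config.xml, the same mechanism Gary described above.  Something along these lines - the gpu_usage value of 1 here is just an assumption for a single task per GPU, adjust it to whatever you normally run:)

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>   <!-- assumed: one task per GPU; set to your usual value -->
            <cpu_usage>0.5</cpu_usage>   <!-- budget half a CPU thread per GPU task for scheduling -->
        </gpu_versions>
    </app>
</app_config>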

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023064931
RAC: 1831842

Variable wrote:
an experiment yesterday limiting to 0.5 cpu usage per task

That is a misconception of the behavior of the control you were exercising.

The CPU usage number you adjusted is an estimate used for scheduling purposes.  The thing which is directly controlled is the number and types of BOINC tasks which are started.  Once started, unless suspended, they use what they use - not influenced by the value of that number.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109380066183
RAC: 35973212

Variable wrote:
... Task run times overnight are pretty much the same as before when it was using a full thread per Einstein task.

That statement doesn't mean anything if you don't also indicate how many CPU cores your machine has and how many of them were fully loaded with other compute-intensive tasks.  As your computers are hidden, we can't know that, but I suspect you may not be running CPU tasks at all.  As archae86 explains, your change will make no difference to what a GPU task needs (and will take) for CPU support.  If there are CPU cores doing little else, those needs can easily be met with no significant change to the crunch time.

Try the same experiment with all available CPU cores running 100% loads and then see what happens :-).

Cheers,
Gary.

Variable
Joined: 6 Oct 13
Posts: 31
Credit: 832903158
RAC: 886455

My compute machines run other projects on CPU as well, so the CPUs are normally kept loaded to 90% or so.

Jonathan Jeckell
Joined: 11 Nov 04
Posts: 114
Credit: 1341945207
RAC: 1

Sorry if this is a tangent, but does anyone know why NVIDIA performance on macOS became so abysmal when we switched to FGRPB1G/OpenCL?  I have a GTX 950 on Windows that does 1 WU/35 min or 2/55 min, but my GTX 960 takes 1 hr 1 min on average on the Mac.

I thought this might be Apple or NVIDIA's fault, but OpenCL works great on the R9 280X, and the NVIDIA card benchmarks as expected outside of this project.
