Former GPU mining rig now running BOINC

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0
Topic 219169

Hello,

I finally bought a former mining rig on eBay and contributed it to Einstein@Home, see https://einsteinathome.org/de/host/12781345. It has 5 (soon 6) AMD RX 580 GPUs but is constrained by a non-hyperthreading dual-core CPU.

When I bought the machine I thought I would quickly have to substitute the CPU with a faster model. But so far I have found the CPU mostly idle: unlike the NVidia cards running Einstein, the AMD OpenCL implementation does not demand much CPU overhead at all. So I do not really want to make an apparently unnecessary investment in another CPU. I have already installed extra memory (though the initially installed 4 GB seem to have been sufficient) and an SSD (the USB stick provided would likely also have worked, but what the heck). The only problem is that the BOINC scheduler keeps requesting some 0.98 CPUs per task, so only 3 of the 5 cards are used when running E@H. Is there any trick to override that setting locally? Or should there perhaps be an adjustment of the CPU demands for AMD GPUs on the project side? SETI, Collatz and Primegrid use all 5 GPUs on this machine, but those are not what I want to run.

I have some hope that more of the laid-off altcoin mining hardware finds its way to the sciences via BOINC. If Einstein could spread the news that no investment beyond the typical mining gear is required, I sense that would be important.

Many thanks!

QuantumHelos
Joined: 5 Nov 17
Posts: 190
Credit: 64239858
RAC: 0

For a start, the CPU's PCIe lanes would improve with a better CPU. Find out what socket it is first...

As for configuration alterations ... https://einsteinathome.org/content/best-appconfigxml

Holmis
Joined: 4 Jan 05
Posts: 1118
Credit: 1055935564
RAC: 0

steffen_moeller wrote:
The only problem is that the BOINC scheduler keeps requesting some 0.98 CPUs per task, so only 3 of the 5 cards are used when running E@H. Is there any trick to override that setting locally?

Read up on app_config.xml and then adjust it to your needs!

If you can't make sense of the documentation then ask and I or someone else will help you get started. ;)

As a start I use this:

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
         <gpu_versions>
            <gpu_usage>0.33</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
         </gpu_versions>
   </app>
   <app>
      <name>einstein_O1OD1I</name>
         <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.16</cpu_usage>
         </gpu_versions>
    </app>
</app_config>
steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

Works! I happily see five running processes:

12480 boinc   30  10 1589916 155428  90992 R  17,9  0,5  0:10.41 hsgamma_FGRPB1G
12482 boinc   30  10 1589892 154684  90304 S  17,9  0,5  0:10.40 hsgamma_FGRPB1G
12506 boinc   30  10 1590600 221096  90556 S  17,5  0,7  0:10.48 hsgamma_FGRPB1G
12548 boinc   30  10 1589760 220088  90224 R  17,5  0,7  0:10.49 hsgamma_FGRPB1G
12455 boinc   30  10 1589684 154748  90596 R  17,2  0,5  0:10.45 hsgamma_FGRPB1G

Funnily enough I had app_config.xml on my radar only for self-compiled executables. Thank you so much!

I decided to leave it at 1 task per GPU for now, since the cards are already close to maximum power (at least that is what lm-sensors states; I still need to invest a bit more into finding the right monitoring tools for Ubuntu), and will let the settings evolve over time.
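
In the meantime, something like this reads the amdgpu sensors directly from sysfs - just a minimal sketch, assuming the hwmon file names that recent amdgpu kernel drivers expose (temp1_input in millidegrees, power1_average in microwatts); lm-sensors reads the same files, so exact paths may differ on other kernel versions:

#!/usr/bin/env python3
# Sketch: print temperature and power draw per amdgpu card from sysfs.
import glob
import os

def read_int(path):
    """Return the integer contents of a sysfs file, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

for card in sorted(glob.glob("/sys/class/drm/card[0-9]")):
    for hwmon in glob.glob(os.path.join(card, "device/hwmon/hwmon*")):
        temp = read_int(os.path.join(hwmon, "temp1_input"))      # millidegrees Celsius
        power = read_int(os.path.join(hwmon, "power1_average"))  # microwatts
        temp_s = f"{temp / 1000:.0f} C" if temp is not None else "n/a"
        power_s = f"{power / 1e6:.1f} W" if power is not None else "n/a"
        print(f"{os.path.basename(card)}: temp={temp_s}  power={power_s}")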

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5840
Credit: 109039247092
RAC: 33841597

steffen_moeller wrote:
... I should quickly substitute the CPU with a faster model.

The CPU should be fine - it's fast enough - but it would be nice to have a couple more cores for 5 (6??) GPUs :-).  The most important thing is not to run any CPU tasks as I would guess that would have a very negative effect on GPU crunch times.  I have a lot of experience with AMD GPUs under Linux and the very low CPU support requirement is very nice to have.  I use some quite old (and slow) CPUs with RX 570 GPUs and the crunch time hardly changes at all compared to a much more modern CPU.  However I've never tried to run more than 2 GPUs in the one host.  That's unknown territory for me :-).

steffen_moeller wrote:
... the BOINC scheduler keeps requesting some 0.98 CPUs per task, so only 3 of the 5 cards are used when running E@H. Is there any trick to override that setting locally?

Holmis has pointed you in the correct direction but I think you'd be wise to avoid the new GW GPU app, at least until you get some experience with how the Gamma-ray pulsar app works.  He has shown the file he uses and you will need to make some changes for your situation.  For example, the first block of his file is designed to run 3 concurrent GPU tasks (0.33 gpu_usage) per GPU.  If you tried to use that with 5 GPUs, that would give a total of 15 tasks, each one set to 'reserve' 0.2 CPUs.  So 15 x 0.2 = 3 full CPUs - but you only have 2 - so it can't work properly without being amended for your situation.

It's always best to start conservatively, increase gradually, and test each step to make sure you're gaining, not losing.  You should start by testing just one task per GPU.  From his file, remove the 2nd <app> ... </app> block, and in the remaining first block change <gpu_usage> to 1 and <cpu_usage> to 0.16.  I've chosen that 2nd number as a precaution.  You suggested you were going to have 6 GPUs, and if you did, and if you also tried to run 2 concurrent tasks per GPU at some later stage, then you wouldn't want the total CPU support requirement to exceed 2 cores.  Since 2 x 6 x 0.16 = 1.92, it doesn't exceed 2 cores, so BOINC wouldn't prevent you from using 6 GPUs @ 2 tasks per GPU.  I'm certainly NOT suggesting that this will work out to be a viable operating condition, but since it's unknown territory, you won't know until you decide to try it.  Just be assured that the 0.16 setting for cpu_usage will not cause a GPU task to be denied CPU support when you are running fewer GPU tasks.
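
Put together, the amended file would look something like this (keeping the hsgamma_FGRPB1G block from Holmis' example and just changing those two values):

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.16</cpu_usage>
      </gpu_versions>
   </app>
</app_config>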

So if you install the amended version of the file in the Einstein project directory and restart BOINC, that should allow BOINC to run a single GPU task on either 5 or 6 GPUs, whatever you had installed.  I strongly suggest you let it run that way for a day or three until you get comfortable with seeing how each individual GPU performs.  From what is showing currently on the website, everything looks pretty good.  Ignoring the stuff earlier than July 4, you have no errors and only 1 invalid for a total of 167 valid results (when I looked).  An invalid rate of 1% or so is quite normal. Yours (so far) is less than 1% so all is well.

The crunch times for your tasks seem to be around the 720 - 740 secs mark.  If you were running a single RX 580, I would consider that to be just a little slow.  With 3 out of your 5 GPUs actually running, it may well be quite OK.  If you get 5 running, it will probably become a little slower.  There's bound to be some effect from the extra PCIe bandwidth needed to support the extra GPUs.  Please take the time to observe what happens as you run a single task on each installed GPU.  Please report any slowdown you see when you get the extra GPUs working.  Once you get a feel for how well things work and if there are any problems that show up, there will be plenty of time later to try running more than one task per GPU :-).

For your information, I think you are doing a great thing - getting an ex-mining rig to do some useful work for a change :-).  I'm sure many people will be interested in your experiences so please do keep us all informed :-).

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5840
Credit: 109039247092
RAC: 33841597

steffen_moeller wrote:
Works! I happily see five running processes

Sorry, I hadn't seen your 'success news' until after I posted my previous response.  Now you should check for any effect on crunch times :-).

I'm really glad you got all cards working properly.

Cheers,
Gary.

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

Thank you, Gary.

The biggest problem so far was finding a Windows machine to flash the original BIOS back - and that machine still harbors the 6th card until a new USB riser arrives for the rig. With the mining BIOS the cards were not stable even when used alone, and with just two crunching in parallel they lasted only minutes before the whole machine went unresponsive. All is fine with the original BIOS.

I found a little switch on the RX 580 that supposedly selects a mining mode the card offers. That may also be responsible for it being a bit on the slow side; I'll find a wattmeter and investigate. The machine is not running at the moment, to help distinguish the 3- and 5-card setups.

Cheers,
Steffen

cecht
Joined: 7 Mar 18
Posts: 1407
Credit: 2431552799
RAC: 1513927

steffen_moeller wrote:
I found a little switch on the RX 580 that is supposedly a mining mode the card offers.

If the card is an XFX model, then the forward position (toward the ports) is the mining position. With the factory default dual-bios settings, I've found the mining bios to work well in a Linux system. Perhaps the previous owner flashed the mining bios to suit a specific cryptomining function such that it isn't compatible with E@H crunching? Sapphire cards also have a mining bios (quiet mode), but I don't know which switch position it is.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

cecht wrote:
steffen_moeller wrote:
I found a little switch on the RX 580 that is supposedly a mining mode the card offers.
If the card is an XFX model, then the forward position (toward the ports) is the mining position. With the factory default dual-bios settings, I've found the mining bios to work well in a Linux system. Perhaps the previous owner flashed the mining bios to suit a specific cryptomining function such that it isn't compatible with E@H crunching? Sapphire cards also have a mining bios (quiet mode), but I don't know which switch position it is.

Yes, that's the one: https://www.sapphiretech.com/en/consumer/nitro-rx-580-4g-g5 The cards are running in mining mode - the same "forward" switch setting. And they are indeed silent.

With the 4th and 5th cards also active, total compute times have increased by about 30 seconds to roughly 765 s. The CPU time is invariant at 145 s. Since these are all independent PCIe x1 connections, I would expect the bottleneck to be at the CPU, as suggested by QUANTUMHELOS. But then again, reading through https://www.gigabyte.com/Motherboard/GA-H110-D3A-rev-10#sp, I saw that these are only PCIe 2.0 x1 slots. The only PCIe 3.0 slot (the "x16") is not yet occupied - it will be interesting to learn what difference that slot makes. The motherboard's chipset is specified to support only 6 PCIe lanes. Hm. https://ark.intel.com/content/www/us/en/ark/products/90590/intel-h110-chipset.html

So, unless the community suggests otherwise in this thread, I will get both an x1 and a full-length riser for the 6th card and try it in the only PCIe 3.0 slot, to assess the effect of the number of PCIe lanes at the card's disposal.
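
For that comparison, the link each card actually negotiates can be read from the standard PCI sysfs attributes - a rough sketch (lspci -vv reports the same values under LnkSta/LnkCap):

#!/usr/bin/env python3
# Sketch: show the negotiated PCIe link speed/width per GPU.
import glob
import os

def read(path):
    """Return the stripped contents of a sysfs file, or 'n/a' if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for card in sorted(glob.glob("/sys/class/drm/card[0-9]")):
    dev = os.path.join(card, "device")
    speed = read(os.path.join(dev, "current_link_speed"))   # e.g. "2.5 GT/s PCIe"
    width = read(os.path.join(dev, "current_link_width"))   # e.g. "1"
    max_speed = read(os.path.join(dev, "max_link_speed"))
    max_width = read(os.path.join(dev, "max_link_width"))
    print(f"{os.path.basename(card)}: x{width} @ {speed}  (card max: x{max_width} @ {max_speed})")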

Holmis
Joined: 4 Jan 05
Posts: 1118
Credit: 1055935564
RAC: 0

steffen_moeller wrote:

Works! I happily see five running processes.

Funnily enough I had app_config.xml on my radar only for self-compiled executables. Thank you so much!

Great!

App_info.xml is used to describe a self-compiled executable so BOINC can use it.
App_config.xml is used to fine-tune how BOINC runs a specific search/application, and it supports tuning multiple searches/applications at once.
Do be aware that once you've used an app_config.xml, if you ever want to go back to using only the project preferences, you will need to reset the project to revert all changes.

Good luck and have fun fine-tuning your new rig - and I totally agree with Gary that it's finally doing something useful!

Gary Roberts wrote:
Holmis has pointed you in the correct direction but...

Thank you for giving some more detail; I didn't have time to write a more detailed message.

cecht
Joined: 7 Mar 18
Posts: 1407
Credit: 2431552799
RAC: 1513927

Steffen_Moeller wrote:
Since these are all independent PCIe x1 connections, I would expect the bottleneck to be at the CPU, as suggested by QUANTUMHELOS. But then again, reading through https://www.gigabyte.com/Motherboard/GA-H110-D3A-rev-10#sp, I saw that these are only PCIe 2.0 x1 slots. The only PCIe 3.0 slot (the "x16") is not yet occupied - it will be interesting to learn what difference that slot makes. The motherboard's chipset is specified to support only 6 PCIe lanes. Hm. https://ark.intel.com/content/www/us/en/ark/products/90590/intel-h110-chipset.html

The H110 chipset supports only 6 PCIe lanes, while your Pentium G4400 CPU can support a maximum of 16 PCIe lanes, so, yeah, it would seem that the mobo chipset is the limiting factor, right? With a mobo limit of 6 PCIe lanes and 5 lanes taken by the 5 cards running on x1 risers, any card put in the x16 slot would only have the remaining one PCIe lane available to it, which would hobble its performance. (What I just wrote taps out my really limited understanding of PCIe.)  As I recall, there was another E@H discussion within the past few months where someone said they had good E@H GPU performance with x4 lanes and passable performance with x2 lanes, but things suffered with only x1 lane for a GPU.

In any event, thanks for making me aware of the differences in PCIe support among chipsets. More on those specs here, https://www.intel.com/content/www/us/en/products/chipsets/view-all.html

Ideas are not fixed, nor should they be; we live in model-dependent reality.
