Multiple tasks on GTX 1080 Ti

Alessio Susi
Joined: 7 Mar 15
Posts: 31
Credit: 217846544
RAC: 102
Topic 215495

Hi. How can I run multiple tasks on a GTX 1080 Ti? Does this increase the number of completed tasks per day?

ASUS X570 E-Gaming
AMD Ryzen 9 3950X, 16 core / 32 thread 4.4 GHz
AMD Radeon Sapphire RX 480 4GB Nitro+
Nvidia GTX 1080 Ti Gaming X Trio
4x16 GB Corsair Vengeance RGB 3466 MHz

Keith Myers
Joined: 11 Feb 11
Posts: 4704
Credit: 17546095283
RAC: 6401655

You would need to use an app_config.xml file for the project to increase the number of GPU tasks per card.  It would be applied to both the AMD and Nvidia cards unless you made a much more complicated app_config.xml to break out the individual card types.  This is an example of mine.


<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
  <project_max_concurrent>2</project_max_concurrent>
</app_config>

 

This runs two GPU tasks on my Nvidia 1080 Ti card and also limits the total number of concurrent tasks for the project to two.  You could even try a gpu_usage of 0.33 for three tasks per card.  You have enough CPU cores in the Ryzen 9 to support that.
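For three tasks per card, a minimal sketch of that variant might look like the following (it reuses the hsgamma_FGRPB1G app name from the example above; the 0.33 and 3 values are illustrative, not a recommendation):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
  <project_max_concurrent>3</project_max_concurrent>
</app_config>

The file goes in the Einstein@Home project folder inside your BOINC data directory, and BOINC Manager should pick it up via Options > Read config files, or after a client restart.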

 

Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1421692400
RAC: 790985

Or you could do it the easy way in your computing preferences. Set "GPU utilization factor of FGRP apps" using the same logic: 0.5 = 2 tasks, 0.33 = 3 tasks. Save changes and then update the project on the computer.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109398346673
RAC: 35721078

Alessio Susi wrote:
... Does this increase the number of completed tasks per day?

I'll answer the bit that wasn't answered :-).

Yes, it does, and by enough to make it quite worthwhile if you're interested in increasing your daily output.  The biggest increase comes from going from x1 to x2 - ie. changing the GPU utilization factor from 1 to 0.5 in your project preferences.  Be aware that the change only kicks in when new work is downloaded; just clicking update won't apply it if no work fetch occurs.

For the impatient, the easiest way to guarantee immediate work fetch is to make a small (but sufficient) increase to your work cache minimum setting locally in BOINC Manager.  You can easily reverse this once the new work has downloaded.  When the change arrives, a second task will start, and BOINC will attempt to double the number of tasks on board because it now sees two tasks being processed while the runtime estimates are still the old x1 values.  As soon as tasks with the longer x2 running times finish, BOINC will immediately correct the estimates.
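If you would rather edit the local setting directly, the work cache minimum lives in global_prefs_override.xml in the BOINC data directory.  A minimal sketch, assuming the standard element names (the 0.5 / 0.1 day values are only placeholders):

<global_preferences>
  <work_buf_min_days>0.5</work_buf_min_days>
  <work_buf_additional_days>0.1</work_buf_additional_days>
</global_preferences>

Changing it through the BOINC Manager computing preferences dialog writes the same file, which is usually the safer route.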

With a 1080 Ti, you will be able to go higher than x2, but it will be a case of diminishing returns.  The only way to know what is best for you is to experiment and observe the crunch times at each setting for a sufficiently large number of tasks to get a good average.  Try something like 20 tasks at each setting and see if most of the results are pretty close to the average value.  Also note that you will produce a bit more heat and consume a bit more power.  You will get a bit better efficiency, particularly if you try to ensure that all task instances aren't starting and finishing simultaneously.  It's fairly easy to stagger the start times for each task and, because run times are fairly uniform, the stagger tends to be maintained once established.

The one thing to keep in mind is that each GPU task instance needs to 'reserve' a CPU core for support duties.  As you have a 16 core machine, this shouldn't be a problem.  If 15 of your cores are currently crunching CPU tasks, one of those tasks will stop temporarily when a second GPU task starts.  It might be worthwhile (through your compute preferences in BOINC Manager) to further limit the number of CPU cores that crunch CPU tasks, if you really want to extract the best performance from your GPU.
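As a sketch of what that preference looks like under the hood, the CPU limit is stored as a percentage in the same global_prefs_override.xml file mentioned above (75% of 32 threads is just an example figure, not a recommendation):

<global_preferences>
  <max_ncpus_pct>75</max_ncpus_pct>
</global_preferences>

Setting "Use at most X % of the CPUs" in BOINC Manager's computing preferences achieves the same thing.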

Whilst the GPU utilization factor is the easy way to do what you want if you don't need anything special, you can get more fine-grained control if you are prepared to master the application configuration system which uses a special file called app_config.xml.  Check out the link for the documentation.
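As one example of the finer-grained control on offer, app_config.xml also accepts a per-application <max_concurrent> alongside the project-wide limit shown earlier.  A hedged sketch, again reusing the hsgamma_FGRPB1G app name (values are illustrative only):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>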

 

Cheers,
Gary.

Keith Myers
Joined: 11 Feb 11
Posts: 4704
Credit: 17546095283
RAC: 6401655

I didn't post the "easy" method of the project preferences utilization factor ... because I forgot about it.  Einstein is the only project that offers it as far as I know.  Every other project I contribute to needs the app_config.xml file to manage utilization.

Also, the app_config.xml file would be the only method to set task utilization separately for the AMD and Nvidia cards.
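For anyone who does want the two cards set differently, the usual app_config.xml approach is <app_version> blocks keyed by plan class.  A rough sketch follows; the plan class names (FGRPopencl1K-nvidia and FGRPopencl-ati) are assumptions about what the Einstein scheduler sends to this host, so check the names actually shown in your event log or task list before copying them:

<app_config>
  <app_version>
    <app_name>hsgamma_FGRPB1G</app_name>
    <plan_class>FGRPopencl1K-nvidia</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>0.5</ngpus>
  </app_version>
  <app_version>
    <app_name>hsgamma_FGRPB1G</app_name>
    <plan_class>FGRPopencl-ati</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <ngpus>1</ngpus>
  </app_version>
</app_config>

Here <ngpus> plays the role of gpu_usage (0.5 = two tasks per card) and <avg_ncpus> the role of cpu_usage.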

 

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Yes Keith, but your method allows you to set the number of work units you want on your machine, both CPU and GPU, without having to guess as you would with the other methods.
