Automating changes to task multiples

cecht
Joined: 7 Mar 18
Posts: 703
Credit: 758,850,273
RAC: 644,497

It's actually working well for 1- and 2-GPU systems! (I haven't tested it with more GPUs.)

Latest version has been uploaded (link in OP). 

  • Now looks ahead at the n tasks ready to run, given n GPUs.
  • Evaluates all waiting tasks and all n ready tasks to determine which has the highest DF, and bases increment/decrement decisions on that.
  • Task suspensions are less frequent: evaluation conditions have been optimized, and the script now pauses long enough for GPU memory usage to get up to speed after a task suspension, and after a task completes and a new one starts.
  • The .cfg settings for VRAM usages of the three O2MDF DF task classes have been revamped.
  • Log output is more readable.
  • The findDF.sh utility script has been updated to report the n tasks ready to run, calculated as n GPUs * current task multiple.

Details are in the comments of the .cfg and .sh files.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

cecht
Joined: 7 Mar 18
Posts: 703
Credit: 758,850,273
RAC: 644,497

A new and improved version of taskXDF and related files is up on the Dropbox page:

https://www.dropbox.com/sh/kiftk65mg59ezm6/AACeHeeKYvvrj65VawsRM107a?dl=0

In addition to improved look-ahead functions for matching VRAM usage to task multiplicities, a timer script can now run the main script at timed intervals; no more need for systemd.
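The idea of a timer script replacing a systemd service is simple enough to sketch. The script path and interval here are assumptions for illustration, not necessarily what the actual timer script uses:

```shell
#!/bin/sh
# Minimal sketch of a timer wrapper: run the main evaluation script on a
# fixed interval, with no systemd unit needed. Path and interval below
# are illustrative assumptions.

interval=300            # seconds between evaluations
script="./taskXDF.sh"   # hypothetical path to the main script

run_loop() {
    while :; do
        [ -x "$script" ] && "$script"   # adjust multiples if present
        sleep "$interval"
    done
}

# run_loop   # uncomment to start; stop with Ctrl-C
echo "would run $script every $interval seconds"
```

Running it under a plain shell session (or nohup) is all that's needed, which is what makes the systemd unit unnecessary.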

Given the slow release of tasks with higher delta frequencies (DF), I decided not to wait, and put the new scripts out now so folks can have a play with them, praise me, curse me, etc. :)

The only things missing from the taskXDF.cfg file are VRAM GB values for tasks above a DF of 0.40. Those values will need to be added as higher-DF tasks roll out (if they ever do). The scripts now report the average VRAM GB used by running tasks, along with each running task's DF, so when all running tasks have the same DF, you'll know the VRAM use for that DF class.

As before, it's still only for Linux systems running supported AMD cards. I'm working on a Python version, and have dreams of Windows implementation, but that might be a while....


cecht
Joined: 7 Mar 18
Posts: 703
Credit: 758,850,273
RAC: 644,497

Update: .cfg file has new VRAM GB memory values, all taskXDF scripts have minor improvements in reporting and tweaks to logic.


cecht
Joined: 7 Mar 18
Posts: 703
Credit: 758,850,273
RAC: 644,497

Update: Added feature to periodically report and log average task times and multiplicities. Improved formatting of reports for added clarity of data.


DF1DX
Joined: 14 Aug 10
Posts: 61
Credit: 1,251,473,779
RAC: 421,124

Here is another data point:

Host 12801270 (Linux, Radeon VII) shows a memory usage of 12.64 GB for 6x O2MDF tasks.

This is the highest value for this host so far. I only use the vramDF-timer script. Thanks for that.
