The VRAM info is, for example, shown in the nice tool from TechPowerUp named "GPU-Z".
Welcome.

Argh, Windows only. Would you know of a similar Linux tool? All I could find were some rather uninspiring command-line tools that didn't really give any useful information.
hadron wrote: Argh, Windows only. Would you know of a similar Linux tool?
Try alternativeto.net/software/gpu-z/?platform=linux
cheers
A simple nvidia-smi command will print out nearly everything you want to know at an instantaneous level; it is included with the Nvidia driver (if you're planning to use an Nvidia GPU).
If you're looking for AMD metrics, you can use the gpu-utils tool: https://github.com/Ricks-Lab/gpu-utils
(This tool actually works for both AMD and Nvidia.)
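For what it's worth, a couple of sample invocations along those lines (the field names come from `nvidia-smi --help-query-gpu`; these obviously need an Nvidia GPU and driver installed to run):

```shell
# One-line snapshot of the stats asked about above: VRAM, load, temperature.
nvidia-smi --query-gpu=name,memory.total,memory.used,utilization.gpu,temperature.gpu \
           --format=csv

# Or refresh the full status table every 2 seconds, top-style:
nvidia-smi -l 2
```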
Ian&Steve C. wrote: if you're looking for AMD metrics, you can use the gpu-utils tool.
That's the one. Thanks.
Hadron,
Good luck on your quest.
Hope we have provided enough information for you to move forward.
Hope I am wrong about running these two apps at the same time slowing both down. With All-Sky GW you often get increased total production running 2x, 3x, etc.; with BRP7/MeerKAT, running 2x usually slows down production.
But only testing will show which is most productive.
Respectfully,
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Tom M wrote: Hadron, Good luck on your quest.
Thanks. I think I have all I need; everything else I can determine through trial and error. I'm not really all that interested in "best/most productive", since I am also running Rosetta and all 3 LHC tasks, and I have a 12-core/24-thread CPU. I just discovered that my system needs 4 threads to run really smoothly (I had been assuming it would work with just 2, but some issues arose which suggested the computing power allotted to the system wasn't quite adequate).
So what I am really interested in is "acceptable" productivity, with the decision on the meaning of that being mine alone :D
I need some more help with this, this time on exactly how <cpu_usage> works. I've been completely unable to find any decent explanation of it.
What, for example, would <cpu_usage>1.0</cpu_usage> do? If it reserves a whole CPU thread for the whole time a task is running, then with only 20 threads total available for all BOINC tasks, I can hardly use this for the 8 (total) GPU tasks it looks like I should be able to run.
If, however, BOINC will do something like suspend another task so the GPU task can have a CPU thread only when it needs one, that is something I could probably live with. Otherwise, I'll have to allocate only a fraction of a thread to each GPU task, limiting the total number of threads reserved for those tasks to 2 (so my setting would be <cpu_usage>.25</cpu_usage> for the 8 total tasks).
You, like most people, have no understanding of what the gpu and cpu usage reservations mean and do.
They are only useful as 'guidance' for the project scheduler to figure out how many tasks a host is able to compute.
They have no actual influence on how ANY task actually runs. That is decided solely by the application itself and how the application developer wrote it.
For cpu tasks, a task always occupies a full cpu core irrespective of any cpu_usage setting.
For gpu tasks, a task will use however much or however little cpu support the gpu application needs. How the developer wrote the application, and how smart they were about using more or less of the gpu's resources at any time, determines how much cpu support any one task needs at any moment.
So some tasks will need a full cpu core to support the gpu computation, and some may need as little as 0.1 of a cpu core to support and feed the task. You have no control over this; it is all up to the gpu application itself.
You can set the cpu_usage for a gpu task in the app_config file as guidance for how many tasks will run sensibly on the host, and to guide the scheduler in how much work to deliver to you.
Suggest reading the application_configuration section of this document.
https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
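For reference, the kind of app_config.xml being discussed might look like the sketch below. The app name here is a placeholder (the real application names can be copied from client_state.xml in the BOINC data directory), and the fractions are only the client's bookkeeping figures, not enforced limits:

```xml
<!-- app_config.xml lives in the project's subdirectory of the BOINC data dir. -->
<app_config>
  <app>
    <name>example_gw_app</name>       <!-- placeholder; use the real name from client_state.xml -->
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>     <!-- run 4 tasks per GPU -->
      <cpu_usage>0.25</cpu_usage>     <!-- budget 1/4 of a CPU thread per task -->
    </gpu_versions>
  </app>
</app_config>
```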
Keith Myers wrote: You like most people have no understanding of what the gpu and cpu usage reservations mean and do.

Well, du-uhh. Why do you think I am asking?
But it would seem that you don't understand this quite as well as you think you do. Read on.

Quote:
They are only useful as 'guidance' for the project scheduler to figure out how many tasks a host is able to compute.
They have no actual influence on how ANY task actually runs. That is decided solely by the application itself and how the application developer wrote it.
For cpu tasks, a task always occupies a full cpu core irrespective of any cpu_usage setting.

Well, for CPU-only tasks, that is not at all correct. I have my settings on LHC@H to send only tasks configured to run on a single core. In the app_config file for LHC, I override that to run ATLAS on 2 cores and CMS (when available) on 4. It works, as intended.
Moreover, <cpu_usage> is found in the <gpu_versions> section of the app_config file, and has nothing to do with CPU-only tasks. Those are controlled by the <avg_ncpus> line in the <app_version> section, and by setting the same number as a command-line parameter passed to the VM (the <cmdline> line, also in the <app_version> section). Like I said, that works, as intended. But here we are not speaking of CPU-only tasks, but of GPU tasks.

Quote:
For gpu tasks, a task will use however much or however little cpu support the gpu application needs.

Well, then there wouldn't seem to be much point in having the <cpu_usage> line in there in the first place, would there?
But what I actually asked goes beyond that. I specifically asked this:
Does a core get dedicated to each GPU task for as long as that task is running, or does it get allocated only when the task needs it?

Quote:
You can set the cpu_usage for a gpu task in the app_config file as guidance for how many tasks will run sensibly on the host, and to guide the scheduler in how much work to deliver to you.

According to everyone else, that is actually set by the <gpu_usage> line.
Well, du-uhh again. Did it not occur to you that I might have already found this, and that this:
cpu_usage: The number of CPU instances (possibly fractional) used by GPU versions of this app.
is about as useful here as breasts on a boar?
The science app will use however much CPU support it needs. There are no settings in BOINC that will change that. BOINC's role is to launch the science app; it has no control over what the app does after that.
The cpu_usage setting only informs BOINC of how much CPU is being used by the app, for the client's own bookkeeping of available resources. If you set it to 1.0, BOINC will reserve a whole thread for the GPU task while it's running, taking that thread away from the pool of threads available to other tasks. If the GPU task is suspended, the CPU thread will be used for something else, up to the total CPU usage set in your compute preferences.
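The bookkeeping described here can be illustrated with a toy calculation. This is just a sketch of the arithmetic, not BOINC's actual code; the function name is made up for illustration:

```python
# Toy model (NOT BOINC's source) of the client's CPU-thread bookkeeping:
# cpu_usage only tells the client how much CPU to set aside per GPU task.
def threads_reserved(gpu_tasks: int, cpu_usage: float) -> float:
    """Total CPU threads the client budgets for a set of identical GPU tasks."""
    return gpu_tasks * cpu_usage

# 8 GPU tasks with <cpu_usage>1.0</cpu_usage> take 8 whole threads out of the pool:
assert threads_reserved(8, 1.0) == 8.0

# ...while <cpu_usage>0.125</cpu_usage> budgets only one thread for all 8 tasks:
assert threads_reserved(8, 0.125) == 1.0

# With 20 threads allowed for BOINC, that leaves 19 in the budget for CPU tasks:
print(20 - threads_reserved(8, 0.125))
```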
Ian&Steve C. wrote: The science app will use however much CPU support it needs.
What I was expecting to hear. Thanks.
So, if I run 8 tasks total on the GPU and set cpu_usage to 0.125, will that still reserve only a single thread? Would things change if I ran 2 apps, 4 tasks each? Depending on the impact on completion time, I could live with that -- one thread per task I cannot. I could probably still live with giving the GPU tasks 2 threads for their use.