A single task of the acemd type produces nearly 100% GPU utilization, so there is no benefit in trying to run more than one at a time.
The Python tasks can use as much as 6 GB of VRAM in spurts, so more than a single task will not fit on a typical 8 GB GPU.
The main issue with the Python tasks is their heavy use of system RAM. A single task uses 10 GB or more of main memory, so unless you have multiple GPUs and plenty of system memory (64 GB or more), it is difficult to run more than one at a time there as well.
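If you want to estimate how many of these tasks your own card could hold, here is a minimal sketch based on the rough figures above (~6 GB VRAM and ~10 GB system RAM per Python task). The `nvidia-smi` query flags are standard, but the per-task thresholds are just the numbers quoted in this post, not anything official from GPUGRID:

```python
import subprocess

# Rough per-task requirements quoted in this thread (MiB); adjust as needed.
PYTHON_TASK_VRAM_MIB = 6 * 1024   # ~6 GB VRAM spurts
PYTHON_TASK_RAM_MIB = 10 * 1024   # ~10 GB system RAM

def tasks_that_fit(free_vram_mib, free_ram_mib):
    """How many GPUGRID Python tasks fit, limited by VRAM and system RAM."""
    by_vram = free_vram_mib // PYTHON_TASK_VRAM_MIB
    by_ram = free_ram_mib // PYTHON_TASK_RAM_MIB
    return int(min(by_vram, by_ram))

def query_free_vram_mib():
    """Ask nvidia-smi for free VRAM per GPU (requires an Nvidia driver)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"], text=True)
    return [int(line) for line in out.splitlines() if line.strip()]
```

For example, with 8 GB of free VRAM and 64 GB of RAM this reports that only one Python task fits, which matches the observation above.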
Would you not be better off running more than one per GPU? Or is there insufficient VRAM?
My understanding is that the GPUGRID tasks consume 100% of the GPU resources per task.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
GPUGRID acemd3 (and some acemd4) tasks will consume ~95%+ of the GPU core and use a lot of PCIe bus bandwidth, though VRAM utilization and VRAM bus utilization are low.
Other acemd4 tasks will show ~80% GPU core use, high VRAM bus use, and low PCIe bus use.
Conversely, the GPUGRID Python tasks use very little GPU core but a lot of VRAM, and also a lot of CPU threads (though at low utilization per thread).
So GPUGRID has a wide variety of tasks now; you need to be specific about which task type you're talking about.
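To make the distinction concrete, the profiles above can be sketched as a rough classifier. The thresholds are only my reading of the numbers in this thread (95%, 80%, and "very little" core use), not anything the project publishes:

```python
def guess_gpugrid_task_type(gpu_core_pct, vram_bus, pcie_bus):
    """Guess the GPUGRID task type from a utilization profile.

    gpu_core_pct: GPU core utilization in percent.
    vram_bus, pcie_bus: "high" or "low", as read from a monitoring tool.
    Thresholds are informal, taken from the descriptions in this thread.
    """
    if gpu_core_pct >= 95 and pcie_bus == "high" and vram_bus == "low":
        return "acemd3 (or some acemd4)"
    if gpu_core_pct >= 80 and vram_bus == "high" and pcie_bus == "low":
        return "other acemd4"
    if gpu_core_pct < 50:
        return "Python (ML) task"
    return "unknown"
```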
"As I stated the tasks use a mixed cpu-gpu application. You will see very little gpu usage then periodic spurts of activity and then back to low utilization."
So I assumed you had the GPU idle quite a lot.
I haven't tried one myself, as I can't get hold of any tasks, but I don't have any cards with more than 4 GB of VRAM. Will these just fail?
First, it depends on what kind of card you have (your hosts are hidden). GPUGRID only has CUDA apps, so they will only run on Nvidia cards.
And yes, while the Python tasks have rather low GPU core utilization, they have high memory requirements; a 4 GB card probably isn't enough.
I didn't ask it to hide them; no idea why that's happening.
I thought the new GPUGrid Python apps ran on AMD as well?
Oh well, off to Folding.
Einstein, being a European project, is bound by the GDPR, so the default is hidden. You need to unhide them yourself.
GPUGRID had a beta AMD GPU application about 7 years ago, but I don't think it ever went mainstream; their apps have otherwise always been CUDA based, and AMD cards can't run CUDA.
I have no interest in the EU, GDPR, or whatever, and will not adjust settings to fix their mistakes. I was never asked whether I wanted my hosts public or not; the option was not given. Or maybe it was in one of those infamous cookie notices I've adblocked.
You'll notice other projects ignore that rule.
Somebody somewhere said the new Python stuff could run on OpenCL, and so on any card.
I couldn't care less what you do, lol. But I'm not aware of any European-based project that is "ignoring" GDPR, since it's law.
The short answer is that whoever told you that GPUGRID Python tasks could run on AMD was wrong. They don't. GPUGRID only sends CUDA apps.
I'm pretty sure I've run those on GPUs with 4 GB on them.
Mikey, were those the test beta tasks or the new production-run tasks?
If I remember correctly, the test/trial setup tasks were not as demanding of GPU VRAM because they were not really doing any proper machine learning; they were just being used to figure out the correct packaging of the app and tasks.
Since the Python tasks use PyTorch, you would need the PyTorch-on-ROCm build, which the tasks don't package.
They only package the Nvidia-based packages; the project still focuses on the Nvidia platform.
Machine learning is very poorly supported on AMD GPUs.
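If you want to see for yourself which backend a locally installed PyTorch build was compiled against, a small check like this works. `torch.version.cuda` and `torch.version.hip` are real PyTorch attributes (the latter is set on ROCm builds); the function just reports what it finds, or that torch isn't installed at all:

```python
import importlib.util

def pytorch_gpu_backend():
    """Report which GPU backend (if any) the installed PyTorch build supports."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # torch.version.hip is set on ROCm builds, torch.version.cuda on CUDA builds
    if getattr(torch.version, "hip", None):
        return "rocm"
    if torch.version.cuda:
        return "cuda"
    return "cpu-only build"
```

A CUDA-only app like GPUGRID ships would correspond to the "cuda" case; without a ROCm build packaged, AMD cards have nothing to run.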
I mean it’s kind of weird that Peter created a new account under a new name. Did your other one get the ban hammer?