One of my two hosts running O3 work has just 7 validations vs. 554 pending. Spot-checking through the pending list shows that nearly all of the tasks sent to my machine from December 3 until now show the second task required to form a quorum as unsent.
I wonder if some similarity/dissimilarity task dispatch rules are in effect that might orphan some machines?
While my second host running O3 initially had good success at getting quorum partners and validating, it now also has hundreds of pending tasks for which the required quorum-partner task is unsent.
Keith Myers wrote:These
Surely a fair number of us (including myself) are happy to run beta tasks?
The only time I've avoided beta was a certain batch on a certain old GPU that crashed them all.
If beta testing is needed to make a better version of the other work, then it has to be done.
If this page takes an hour to load, reduce posts per page to 20 in your settings; then the tinpot 486 Einstein uses can handle it.
Not when they impact all the
Not when they impact all the other users who don't want to run beta applications.
The O3MD* work generators are running unthrottled and producing more than enough work for the few users who want to run beta tasks.
But they have overloaded the RTS buffers, and everybody else running Gamma Ray and BRP4/7 work gets nothing when requesting work, even though there is plenty of it in the Ready to Send categories.
The beta work is swamping the download servers and schedulers and preventing all the other work from being sent out.
I am down over a thousand tasks on my 3-card hosts from my set cache levels and continuing to fall without replenishment. I will be out of work in just 8 hours.
I've completed 8 CPU tasks on
I've completed 8 CPU tasks on my hyper-threaded Windows 10 i7-6700K system with a wide variation of CPU time:
"The ultimate test of a moral society is the kind of world that it leaves to its children." - Dietrich Bonhoeffer
archae86 wrote:Elphidieus
Beta Settings: Run Test Applications = Yes, as long as they are NATIVE ARM apps, neither Intel nor Legacy apps...
Allow non-preferred apps = Already No...
Looks like I have to turn Beta Settings OFF then... sad...
Thanks archae86...
archae86 wrote: My two hosts
I'm seeing the same thing: lots of tasks sent out with no wingman, even though it says 'initial replication 2 tasks'.
Each CPU task is requiring ~2
Each CPU task is requiring ~2 GB of RAM (!). I don't think I have ever seen tasks with such large memory requirements. Our systems are chewing away at them, but wow, very memory intensive.
Boca Raton Community HS
I'm not sure about currently, but I know at one time Rosetta@home was also using about 2GB per task.
GPUGRID's Python tasks, which are hybrid CUDA/MT tasks, use ~10 GB of system RAM, ~3 GB of VRAM, and 32+ cores each, lol.
Ian&Steve C. wrote: Boca
True- I forgot about those GPUGRID tasks- those are something! I ran those for a while but it really limited what else we could run at the same time.
See Task 1390224758. I'm
See Task 1390224758.
I'm getting
XLAL Error - MAIN (/home/jenkins/workspace/workspace/EaH-GW-OpenCL-Testing/SLAVE/LIBC215/TARGET/linux-x86_64/EinsteinAtHome/source/lalsuite/lalapps/src/pulsar/GCT/HierarchSearchGCT.c:2240): Internal function call failed: Generic failure
2022-12-12 14:30:34.0385 (16431) [CRITICAL]: ERROR: MAIN() returned with error '-1'
when I try to run two O3MDF tasks together on the same GPU. One on its own is fine, as is one paired with a gamma-ray pulsar task on the same GPU. The device is a GTX 1660 Super with 6 GB, so it's not simply a case of 'twice 2 GB breaks the bank'. This is going to take some managing.
OpenCL kernel failed with
From your task: these O3MDF tasks use ~3200 MB per task, so running two at once on a 6 GB card won't work.
If you want to mix in gamma-ray work without ever running O3MDF at 2x, try this in app_config:
set GPU usage for O3MDF to 0.6
set GPU usage for gamma-ray to 0.4
This will allow you to run O3+GR (0.6 + 0.4 = 1.0) or GR+GR (0.8), but never O3+O3 (1.2).
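If it helps, here is a minimal app_config.xml sketch along those lines. It goes in the Einstein@Home project folder and is picked up with Options > Read config files (or a client restart). The <name> values are my guesses, not confirmed: check client_state.xml on your host for the exact app names (hsgamma_FGRPB1G is the usual gamma-ray GPU app; einstein_O3MDF is only assumed here for the O3MDF app).

<app_config>
   <app>
      <name>einstein_O3MDF</name>      <!-- assumed name; verify in client_state.xml -->
      <gpu_versions>
         <gpu_usage>0.6</gpu_usage>    <!-- each O3MDF task claims 0.6 of the GPU -->
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>hsgamma_FGRPB1G</name>     <!-- usual gamma-ray pulsar GPU app name -->
      <gpu_versions>
         <gpu_usage>0.4</gpu_usage>    <!-- each gamma-ray task claims 0.4 of the GPU -->
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

With those fractions the client can pair O3+GR (1.0) or GR+GR (0.8) on one card, but two O3MDF tasks would add up to 1.2 GPUs and never be started together.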