I'm not sure I understand the question. In O3ASHF1 we are analyzing O3 data in a "high" (for GW) frequency range (800-1500Hz), at 2Hz per workunit. The 2Hz of a workunit are split in half and processed in two 1Hz passes. Does that help?
Yes, very much. Thank you!
First such "new-style" "bundled" tasks have been generated and sent (labeled "O3HS1b"). Some have a short deadline to get faster feedback, sorry. Depending on that feedback I'll start continuous workunit generation later today.
[Edit] The validator has to be adapted, and I don't think I will have enough results to test it with before Monday, so the new tasks will likely not be validated over the weekend.
BM
Thanks for the reply. Monitoring this was indeed a bit tricky, as running nvidia-smi or nvtop alone is enough to wake the GPU up. I had to monitor power through s-tui without anything directly probing the GPU. I verified the s-tui numbers with a wattage meter, so they should be trustworthy for my system.
My system is a P1 Gen 5 (i9-12900H + A5500). What I observed was that an idling GPU consumes around 30W and limits CPU power (core + package) to 25W. Only when the GPU is completely off is the CPU allowed to use 40W. The moment I run nvidia-smi, the GPU idle power kicks in and CPU power gets throttled until some short period after the GPU powers off again. During the entire non-GPU period of an E@H task, the system behaves as if it had an idle GPU. However, if I don't run any GPU WUs, I get back the 40W envelope. So I concluded that the memory allocation alone is enough to keep the GPU powered up, though I can't rule out a firmware bug or some other more subtle behavior.
Anyway, it looks like my question will soon be irrelevant for good, and it's not worth any effort given the complexity. Freeing memory in the middle definitely isn't a good idea. Not only would it not really help my case, it could also waste all the work if the second memory allocation fails for any reason. I do see memory utilization change a lot for graphics tasks like video when I'm actively using the system. Better to hold onto the buffers until the WU finishes. :-)
Is any work being sent out right now for O3? It looks like I am not receiving any more.
It is. "New" workunits ("O3ASHF1b") are assigned to apps with the "GW-opencl-*-2" plan classes, which are set to require only 3GB of VRAM and use only 2.5GB when running. This is very conservative - all results I've looked at so far report a max memory usage of less than 2GB.
BM
Bernd Machenschalk wrote: It is.
I don't see a GW-opencl-nvidia-2 app published for Windows on the Applications page. If all new tasks are this -2 plan class, that's probably why his Windows machine isn't getting anything.
also check out this post: https://einsteinathome.org/content/multi-gpu-not-consistantly-applied-boincs-request
Usually I've seen this behavior when the app is inadvertently hard-coded to use device 0 rather than the device BOINC communicates to the app. Can you confirm this isn't the case? Has anyone else with a multi-GPU setup had this problem?
Ian&Steve C. wrote: I don't see a GW-opencl-nvidia-2 app published for Windows on the Applications page.
Oh dear, the output of "update_versions" (the BOINC tool to publish apps) is a bit confusing, particularly if you have many app versions present already. Anyway, something went wrong when publishing this app version and I didn't notice. It should be fixed soon.
BM
Nice, I see the new app published now. That should fix the issue for Boca Raton.
Are you planning to re-enable the previous "non-2" apps that use more VRAM but run faster? I think many folks with enough VRAM would probably like that. I assume you have some mechanism to compare VRAM on the host: if it has 4GB or less, send the -2 app, and if it has more than 4GB, send the original app.
That did it. Thank you! They are behind some MeerKAT tasks now but I am curious to see how they run.