Einstein's scheduler has more logic built into it than most projects'. It can make distribution decisions based on host details, like how much VRAM a GPU has or, in the case of Nvidia cards, its compute capability and card generation. It can withhold work from GPUs without enough VRAM (as far as BOINC reports it, based on the first/best GPU), and it uses plan classes to differentiate which kinds of devices get which kind of work, much like how they keep ARM devices to BRP4, and faster x86 CPUs and Intel iGPUs to BRP4G.
They have a lot of options to fine-tune data distribution.
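To illustrate: plan classes of this kind are defined server-side in BOINC's plan_class_spec.xml. A minimal sketch, using the GW-opencl-nvidia name from the scheduler log quoted below; the element names follow BOINC's plan-class format, but the specific thresholds are only assumptions:

```xml
<!-- Hypothetical entry in a project's plan_class_spec.xml; thresholds invented. -->
<plan_class>
    <name>GW-opencl-nvidia</name>
    <gpu_type>nvidia</gpu_type>
    <opencl/>
    <!-- Withhold work from cards below this compute capability -->
    <min_nvidia_compcap>300</min_nvidia_compcap>
    <!-- Withhold work from GPUs with less VRAM than this -->
    <min_gpu_ram_mb>512</min_gpu_ram_mb>
    <!-- VRAM the application is expected to use per task -->
    <gpu_ram_used_mb>300</gpu_ram_used_mb>
</plan_class>
```

A host whose best GPU fails any of these gates simply isn't offered tasks from that plan class.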
Disagree with this. My PC is not able to run the "Gravitational Wave search O3 All-Sky" (it can, however, run the "Gamma-ray pulsar binary search #1 (GPU)" in about 30-40 minutes). When that search is enabled in my settings, the scheduler happily sends out the tasks, but they time out long before completion. When I complained about the scheduler assigning tasks to systems incapable of running them, I only received dismissive comments.
Nvidia puts more information about each GPU's capabilities into its drivers, and into what BOINC reports, than AMD does. Nvidia clearly groups its cards under a metric called "compute capability," which is reported to BOINC and transmitted to the project to use at its discretion. Nvidia groups cards by this metric specifically to track actual compute features and capabilities, but the groupings fall roughly along generational/architectural lines. AMD has no comparable metric, so when an application requires features not available in an older-generation product, BOINC, and hence projects, have no way to know that your device is incapable of processing it.
Sure, they get the device name/model, but there are endless models and names, and it would be too burdensome to have the project(s) keep a running list of all the different models, which ones fall into which generations, which ones work and don't work, etc.
Maybe you tried gravitational wave tasks when they were new, before the project started restricting by memory. Your 1 GB card shouldn't be getting GW tasks anyway (even setting aside the fact that the old AMD GPUs can't run them), as I think they all use more than that. When I ran some gravitational wave tasks a few years ago, the scheduler wouldn't send tasks to GPUs with less memory than it estimated was needed. That was with the older O2MDF search; it absolutely checked for memory size.
Though it seems they are under-estimating again with O3AS: I just tried, and it gives an estimate of 300 MB needed, where actual use shows >1100 MB.
Quote:
[version] Checking plan class 'GW-opencl-nvidia'
[version] parsed project prefs setting 'gpu_util_gw': 1.000000
[version] GPU RAM calculated: min: 512 MB, use: 300 MB, WU#646951747 CPU: 300 MB
[version] NVidia compute capability: 601
[version] Peak flops supplied: 2.956e+11
[version] plan class ok
But the scheduler certainly has the capability to check these things, and it's implemented, though admittedly in some instances not perfectly.
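For illustration, the admission logic visible in the quoted log can be modeled roughly like this. This is a sketch, not the project's actual scheduler code; the function name and default thresholds are mine:

```python
# Rough model of a plan-class admission check like the one in the log.
# Not Einstein@Home's real scheduler code; names and defaults are invented.

def plan_class_ok(gpu_ram_mb: int, compcap: int,
                  min_ram_mb: int = 512, min_compcap: int = 300) -> bool:
    """True if the host's first/best GPU passes the plan-class gates."""
    if gpu_ram_mb < min_ram_mb:
        return False  # withhold work: not enough VRAM
    if compcap < min_compcap:
        return False  # withhold work: card generation too old
    return True

# The logged host (compute capability 601, plenty of VRAM) passes:
print(plan_class_ok(gpu_ram_mb=6144, compcap=601))  # True
# The weak spot: gating on an *estimated* 300 MB lets a 1 GB card
# through even when actual O3AS usage exceeds 1100 MB.
print(plan_class_ok(gpu_ram_mb=1024, compcap=601))  # True
```

The check is only as good as the estimate: a too-low `gpu_ram_used_mb` means under-equipped hosts still receive work.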
I have a good and a poor GPU in one system. Is there a way to get Gamma on the poor one and Gravity on the good one? If I allow both Gravity and Gamma at the server end, I just get Gravity, so the poor GPU has no work to do (I forbid Gravity locally).
Also, I'm not getting any Radio work for GPU. Should I be? Is this ready to send yet? Have I got the wrong type of GPU?
I have about 12 Tahiti cards (AMD R9 280X), and one Fury card (AMD R9 Nano). The Nano shares a computer with a 280X. The Nano will run Gravity, the Tahiti will not.
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
The only way I have been able to get gamma ray and gravity on the same computer at the same time is to set up two profiles and switch my system between the two.
I have not been able to get the server to download both on the same profile.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
You can also load up two instances of BOINC on your PC, exclude GPU 1 from Einstein in one and GPU 0 in the other, and use different venues that way as well.
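As an alternative to two client instances: a single BOINC client can exclude a GPU per project and per application with `<exclude_gpu>` in cc_config.xml. A sketch, assuming device 0 is the fast card and device 1 the slow one; the short application names used here (hsgamma_FGRPB1G for gamma-ray, einstein_O3AS for gravitational wave) are my assumption, so check the names your client actually logs:

```xml
<cc_config>
  <options>
    <!-- Keep gravitational-wave tasks off the slow card (device 1)... -->
    <exclude_gpu>
      <url>https://einsteinathome.org/</url>
      <device_num>1</device_num>
      <app>einstein_O3AS</app>
    </exclude_gpu>
    <!-- ...and keep gamma-ray tasks off the fast card (device 0). -->
    <exclude_gpu>
      <url>https://einsteinathome.org/</url>
      <device_num>0</device_num>
      <app>hsgamma_FGRPB1G</app>
    </exclude_gpu>
  </options>
</cc_config>
```

Restart the client (or tell it to re-read config files) for this to take effect; without the `<app>` element, the exclusion applies to all of the project's GPU applications on that device.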
Ok nevermind, I'll stick to Gamma, or do Gravity + Milkyway.
But where are the Radio ones? Are those being issued for AMD GPUs yet?
No, they aren't being sent out for AMD (or Nvidia) yet. I'm sure Bernd will make a post here when it's ready. His last post indicated they are still working on the application. Timeline: ~1 month.
@Bernd: After two months, does the progress of the BRP search (for your PhD student) seem adequate to get it all done in sufficient time?
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
I'm guessing we need the GPUs on it for that. I haven't seen that running yet.
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
Hey Bernd, any progress update?
O3AS is out of work, and FGRPB1G looks like it'll run out in ~2 weeks.
It's no rush, just curious ;)