Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Mr P Hucker
Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519371204
RAC: 15292

robl wrote:

Total agreement here.  I too am in my mid 70s.  While the mind is willing, the body is not.  A lot of stuff I used to fix is now hired out.  I hate it, because you have to admit that you're aging.  Oh well, it is what the future holds for all of us.  Just be accepting and move on.

My neighbour is 94.  His wife is 95.  Anything needs doing, they just ask me (I'm 44).  No point throwing money at rip-off tradesmen.

If this page takes an hour to load, reduce posts per page to 20 in your settings; then the tinpot 486 that Einstein uses can handle it.

Anonymous

Peter Hucker wrote:

robl wrote:

Total agreement here.  I too am in my mid 70s.  While the mind is willing, the body is not.  A lot of stuff I used to fix is now hired out.  I hate it, because you have to admit that you're aging.  Oh well, it is what the future holds for all of us.  Just be accepting and move on.

My neighbour is 94.  His wife is 95.  Anything needs doing, they just ask me (I'm 44).  No point throwing money at rip-off tradesmen.

Everyone's circumstances are different.  There is no one-size-fits-all.

EDIT:  Peter, you and I are now both guilty of causing this thread to "drift off topic", so let us both fix that problem.

Ian&Steve C.
Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47232882642
RAC: 65389549

Starting to have doubts about this mythical "27%" number for Nvidia GPUs and OpenCL.

First, we have reports of 3GB AMD GPUs failing too.

Second, when you look at the actual GPU memory used by the GW app, you see it using way more than this 27% limit. On 3GB cards, the memory use climbs until the card runs out of memory and fails, all the way up to 100%.

Now I've swapped in a 4GB card (GTX 1650), which we know works fine, and the GW app is using over 3200MB, which is over 80% of the available memory, and it's processing just fine.

I don't think there is such a limit anymore. The problem with GW doesn't seem to be any OpenCL-related limitation on Nvidia cards; it's the plain memory limit: the card simply doesn't have enough to begin with. 3200MB > 3GB, so it fails.

_________________________________________________________________________

Richie
Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

The meaning of that 27% limit in practice has been unclear to me too. Why does that limit matter if AMD cards also load just as much stuff into their VRAM? I know tasks vary in size, but at the moment, for example, I see my GTX 1060 6GB using 3.5GB while running 1 task, maybe 200-300 MB of that for showing the desktop environment. So, if the card is only able to use 27% for OpenCL computing, does that mean the task is actually around 0.27 x 3.2GB = 864MB? Are Nvidia cards really running only tasks that are this "small", with the excess 2.4GB needed to reserve enough 'usable' VRAM? And why are AMDs then using 3.2GB for a task, or over 6GB for 2 tasks?

Ian&Steve C.
Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47232882642
RAC: 65389549

3.2GB is more than 50% of the RAM on your 6GB card.

3.2GB is more than 80% of the RAM on a 4GB card.

That's how much is actually being used by the application, and both of those are obviously greater than 27%. This large value exceeding the card's total VRAM on GPUs with 3GB or less is why they are failing: all of the data can't be loaded onto the GPU.

This is why I think the 27% limit is either no longer a thing, or its implementation and meaning aren't as well understood as some think.

_________________________________________________________________________

TBar
TBar
Joined: 3 Apr 20
Posts: 24
Credit: 891961726
RAC: 0

The 27% thing is from a very long time ago; even the ATI cards had a restriction. I was wondering when someone would realize it doesn't apply anymore. My 4GB GTX 970 will run the Vela tasks showing 3.3GB in use all day; my 3GB GTX 1060 won't. Back when it was in force there was a simple hack to avoid it: all you had to do was add this to your .profile file in your home directory:

# GPU driver options
export GPU_MAX_ALLOC_PERCENT=100
export GPU_MAX_HEAP_SIZE=100

Now you don't even have to use those variables. The driver tells BOINC how much VRAM can be used at startup; that is the working number.

Ian&Steve C.
Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47232882642
RAC: 65389549

TBar wrote:

The driver tells BOINC how much VRAM can be used at startup; that is the working number.

That value doesn't get passed to the science application, though. What the science app does is independent of whatever BOINC detects; BOINC just starts the app. That's why, despite BOINC's broken VRAM detection for Nvidia cards with more than 4GB, you can still use more than 4GB with no problem (provided the GPU actually has enough memory) when running multiple WU instances per GPU — for example, running 2x GW tasks on, say, a GTX 1080 Ti 11GB or an RTX 2070 8GB. It will work, even though stock BOINC says you only have 4GB available.

_________________________________________________________________________

TBar
TBar
Joined: 3 Apr 20
Posts: 24
Credit: 891961726
RAC: 0

That's because BOINC isn't listening to the driver correctly; it's only reading the 32-bit number. The app usually also asks the driver how much it has, and the app reads it correctly. Look at the stderr from SETI: you can see the numbers are correct there, and it's a great deal more than 27%.

Ian&Steve C.
Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47232882642
RAC: 65389549

Can't check SETI now that it's down at the moment, but I'll follow up. It's not as simple as "read the driver correctly", since that implies that BOINC and the special app are doing the same thing; they are not.

I'm aware, and have been, that the SETI special app reads the memory correctly. It was actually my idea for Ville to implement in our team BOINC client whatever method Petri was using in his special app, to try to fix that problem. Ville said that what Petri was doing to get the available memory was very different from what BOINC was doing, and would have required a hefty code rewrite. But after digging into it a bit more, he discovered a newer version of the same API call/method (I'll update the exact method when I can check the exact post) that could be implemented very easily in BOINC. If I recall correctly, he simply added "_v2" to the method/function name after discovering it, and made some minor logic tweaks to fail over to the old method if the new one wasn't found, in case someone was using an old driver version without it. As far as I know the information has been passed to the devs; it's up to them to implement it in the official version. You'll note that everyone running the new GPUUG team BOINC client shows the correct GPU memory for Nvidia cards over 4GB.

My point was that whatever BOINC detects for available VRAM has absolutely nothing to do with what the science application is doing. They are basically independent of each other, except that BOINC tells the science app which device to use. That's about it.

_________________________________________________________________________

TBar
TBar
Joined: 3 Apr 20
Posts: 24
Credit: 891961726
RAC: 0

Jesus....

The simple fact is the 27% thing hasn't been around for over 8 years. Even back then, people were setting environment variables to avoid it. The driver determines how much VRAM can be used and reports it to whoever is listening. Whether or not a particular app reads the driver correctly is up to the developer. Hopefully the Einstein developers are aware of this simple fact... I believe they are.
