Can I increase the GPUs' memory in Windows from the max of 4GB?

GWGeorge007
Joined: 8 Jan 18
Posts: 3074
Credit: 4975047686
RAC: 1431718
Topic 224388

I'm running Windows 10 on a 3950X CPU set to 4.0GHz, with 30 of my 32 threads in use and 2 x RTX 2070 Super GPUs. They are doing OK, but I'd like them to do more.  As I understand it, the 4GB limit on the GPUs is imposed by BOINC, not by Einstein or Milkyway or any other project.  At the present time I do not wish to change over to Linux (I already have my second machine on Linux).

Going by the task values in BOINC Manager, my GPUs are running GW-GPU tasks, 2 per GPU, taking approximately 22 to 24 minutes per task.  When I run Gamma-ray pulsar binary search #1 on GPUs v1.22, each task takes well over 2 hours.

With 32GB of memory installed, I'm presently running at ~64% use, or roughly ~21GB of memory (with no other major Windows program in use).  GPU #0 is running at 2,450MB of allocated memory, though it did use as much as 4,300MB, which is over the 4GB limitation.  GPU #1 is running at nearly 3,000MB with a max allocation of 4,675MB.  How can this be if BOINC sets a limit of 4GB of video memory?  Oh, and these values are from HWiNFO64 v6.34.

So I am asking: Can I increase the GPUs' memory in Windows 10 from the max of 4GB to 8GB, or whatever the memory capacity of my GPUs is?

[EDIT]

Maybe I should clarify something.

As I understand it, BOINC reports a 4GB limitation on the CUDA compute capability, with an additional 3,550MB shown as available per GPU.  The OpenCL 1.2 CUDA line is at 8GB, with the same 3,550MB available, again per GPU.

I still don't understand the significance of the 3,550 MB available for OpenCL.

George

Proud member of the Old Farts Association

Keith Myers
Joined: 11 Feb 11
Posts: 4969
Credit: 18771314179
RAC: 7242398

Quote:
As I understand it, the 4GB limit on the GPUs is imposed by BOINC,

What!?  There is no 4GB limit on GPUs.  Ignore what BOINC reports for memory on Nvidia cards.

BOINC is simply reporting what it can get from a 32-bit API call.  The BOINC developers have been told what is wrong with the BOINC GPU probing code and how to fix it, but they have ignored the problem and continue to do so.  Still, it is not a worry, as the underreporting of a card's VRAM has no impact on crunching.

The science applications use all of a card's memory, if that is how they are designed.

For a brief while there was a problem with GW app tasks that needed to use more than 4GB of memory; those tasks bombed out on cards with 4GB or less of memory on board.  That issue has been resolved, and I believe no tasks are designed to use that much memory anymore.

[Edit] Your whole post is basically wrong.  No difference in CUDA or OpenCL reporting of memory.

A card has what it has for memory as built, no more, no less.

MarkJ
Joined: 28 Feb 08
Posts: 437
Credit: 139002861
RAC: 0

Keith Myers wrote:

Your whole post is basically wrong.  No difference in CUDA or OpenCL reporting of memory.

A card has what it has for memory as built, no more, no less.



For iGPUs it is possible to change the maximum allocated memory, since they use the PC's main memory. A discrete GPU has its memory soldered onto the graphics card, so you can't change it.

The OpenCL code correctly reports the total memory for the GPU, but the CUDA one is wrong. This is under Linux; it might also be wrong under Windows.

3/01/2021 7:51:44 AM    Starting BOINC client version 7.16.11 for x86_64-pc-linux-gnu   
3/01/2021 7:51:44 AM    CUDA: NVIDIA GPU 0: GeForce GTX 1660 Ti (driver version 450.80, CUDA version 11.0, compute capability 7.5, 4096MB, 3972MB available, 5484 GFLOPS peak)    
3/01/2021 7:51:44 AM    OpenCL: NVIDIA GPU 0: GeForce GTX 1660 Ti (driver version 450.80.02, device version OpenCL 1.2 CUDA, 5942MB, 3972MB available, 5484 GFLOPS peak)   

Having said that, Keith is correct that it doesn't affect the crunching, as that is done by the project's science app.

GWGeorge007
Joined: 8 Jan 18
Posts: 3074
Credit: 4975047686
RAC: 1431718

Keith and Mark, I respect your opinions and take you at your word, but I can't get over the fact that BOINC apparently doesn't show what it actually means in the BOINC Manager Event Log.


01/02/21 1:43:53 PM |  | CUDA: NVIDIA GPU 0: GeForce RTX 2070 SUPER (driver version 456.55, CUDA version 11.1, compute capability 7.5, 4096MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | CUDA: NVIDIA GPU 1: GeForce RTX 2070 SUPER (driver version 456.55, CUDA version 11.1, compute capability 7.5, 4096MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | OpenCL: NVIDIA GPU 0: GeForce RTX 2070 SUPER (driver version 456.55, device version OpenCL 1.2 CUDA, 8192MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | OpenCL: NVIDIA GPU 1: GeForce RTX 2070 SUPER (driver version 456.55, device version OpenCL 1.2 CUDA, 8192MB, 3549MB available, 9062 GFLOPS peak)


Keith, I'm trying to understand this so please bear with me.

The BOINC Manager Event Log shows that I have 4096MB of compute capability (and I realize you're saying don't believe it), but it also says that I have 3549MB available for additional(?) computing.  And if the event log also indicates for OpenCL that I have 8192MB (which I do), and it still says that 3549MB is available, then why can't I use it?

If I have my 'GPU utilization factor of GW apps' set to 0.5, then I would expect to get 2 GW tasks per GPU, as I currently have.  By what I believe you're saying, if I change it to 0.33 then I should get 3 GW tasks per GPU, and 0.25 would give me 4 GW tasks per GPU.  But I can't: I have changed the setting to 0.33, exited, rebooted, and relaunched, and it still only gives me 2 tasks per GPU.
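For reference, I gather the same ratio can also be set locally with an app_config.xml file in the project's folder under the BOINC data directory.  Here is a sketch of what I mean; the <name> value below is just my guess, and would need to be checked against the actual GW app name in client_state.xml:

<app_config>
   <app>
      <name>einstein_O2MD1</name>        <!-- hypothetical; use the GW app name from client_state.xml -->
      <gpu_versions>
         <gpu_usage>0.33</gpu_usage>     <!-- 1/3 of a GPU per task = 3 tasks per GPU -->
         <cpu_usage>1.0</cpu_usage>      <!-- one CPU thread reserved to support each task -->
      </gpu_versions>
   </app>
</app_config>

After saving it, Options -> 'Read config files' in BOINC Manager should pick it up without a reboot.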

I realize that I'm running Windows and you're on Linux, and there are differences between the two.  But either I throw out what BOINC is telling me and just DO IT! (as Nike says), or BOINC needs to change what it tells us, at least in the event log, so dummies like me don't get hung up on things like this.

Windows Task Manager is showing me that I'm using approximately 80%-85% of CUDA on each GPU.  Would that explain why I don't get additional tasks when I change my 'GPU utilization factor of GW apps' from 0.5 to 0.33 or 0.25?  And I'm just going to ask the dumb question because I can: why doesn't Task Manager show the GPUs' allocated memory at ~80%-85% when the CUDA values are that high?  Are CUDA cores different from memory?  I thought CUDA cores were a software language protocol used within GPU memory.

George

Proud member of the Old Farts Association

Keith Myers
Joined: 11 Feb 11
Posts: 4969
Credit: 18771314179
RAC: 7242398

The CUDA reporting of memory is wrong on BOTH Windows and Linux.  The problem is that BOINC uses the wrong API call in the gpu_nvidia.cpp module, which has used an old, deprecated 32-bit call to determine a card's onboard memory since the beginning of GPU usage at Seti.

The reason that CUDA detection only reports 4GB of memory is that 32 bits can only represent 4GB max (2^32 bytes = 4,294,967,296 bytes = 4096MB).

The gpu_opencl.cpp module, however, calls the correct 64-bit API in both Windows and Linux environments, and so reports the correct amount of VRAM on a card.
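For comparison, the OpenCL device query hands back a 64-bit value by design.  A minimal sketch of that call (error handling omitted, and simplified from what BOINC actually does):

#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_ulong bytes = 0;   // cl_ulong is 64 bits, so an 8GB card reports in full
    clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(bytes), &bytes, NULL);
    printf("OpenCL global memory: %llu MB\n",
           (unsigned long long)(bytes / (1024 * 1024)));
    return 0;
}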

Our GPU developer in the Seti GPUUG group figured out the issue and reported to the BOINC developers exactly what they did wrong: simply not using the current, correct API in the driver to report the proper amount of memory.

This is documented as Issue#1773 at the BOINC github repository.

https://github.com/BOINC/boinc/issues/1773

It is a simple matter to change the little bit of code in the gpu_nvidia.cpp module to use the correct API call and have Nvidia cards detected properly.
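To illustrate the difference, here is a condensed sketch of the idea, not the actual gpu_nvidia.cpp code (BOINC resolves the driver symbols by name at runtime, which is how it ends up stuck on the old entry point):

#include <cstdio>
#include <cuda.h>   // CUDA driver API

// Legacy pre-CUDA-3.2 export: CUresult cuDeviceTotalMem(unsigned int *bytes, CUdevice dev);
//   A 32-bit byte count can never express more than 4GB.
// Current export:             CUresult cuDeviceTotalMem_v2(size_t *bytes, CUdevice dev);
//   A 64-bit byte count returns the card's full VRAM.

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    size_t bytes = 0;
    cuDeviceTotalMem(&bytes, dev);   // modern cuda.h maps this name to the _v2 symbol
    printf("Total VRAM: %zu MB\n", bytes / (1024 * 1024));
    return 0;
}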

I run a corrected client, and every one of my cards displays its full VRAM.

Not a single official BOINC developer has reviewed the code and made a PR for merging.  It looks like the can has simply been kicked down the road to some future milestone for a later client branch, likely whenever the developers drop support for 32-bit clients entirely.

As I stated, the Event Log's reporting of an Nvidia card's memory for its CUDA capabilities has NO bearing on the actual amount of memory an application can use.  The science application alone uses as much or as little memory as it needs.  BOINC has no control at ALL over this.

Keith Myers
Joined: 11 Feb 11
Posts: 4969
Credit: 18771314179
RAC: 7242398

You should be able to run 3X GW tasks in 8GB of memory . . . IF the GW tasks are only about 2GB each (3 x 2GB = 6GB, which fits within 8GB).

For a long while they took over 4GB of memory, and then of course there is not enough memory to run 3X per card.  But I believe their size has been reduced, so the tasks will run on 3GB and 4GB cards now.

Also, you have to have enough CPU support to run 3X per card, which normally means dropping some of your CPU cores from crunching to support the GPU tasks.

My testing on Nvidia cards has NEVER shown that running more than one task per card is better than running just a single task per card.  Production at 2X or 3X does not scale on Nvidia.

Apparently it is a completely different case for AMD, however.

WARNING!!

Your mileage may vary.

mikey
Joined: 22 Jan 05
Posts: 12702
Credit: 1839107474
RAC: 3620

George wrote:

Keith and Mark, I respect your opinions and take you at your word, but I can't get over the fact that BOINC apparently doesn't show what it actually means in the BOINC Manager Event Log.


01/02/21 1:43:53 PM |  | CUDA: NVIDIA GPU 0: GeForce RTX 2070 SUPER (driver version 456.55, CUDA version 11.1, compute capability 7.5, 4096MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | CUDA: NVIDIA GPU 1: GeForce RTX 2070 SUPER (driver version 456.55, CUDA version 11.1, compute capability 7.5, 4096MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | OpenCL: NVIDIA GPU 0: GeForce RTX 2070 SUPER (driver version 456.55, device version OpenCL 1.2 CUDA, 8192MB, 3549MB available, 9062 GFLOPS peak)
01/02/21 1:43:53 PM |  | OpenCL: NVIDIA GPU 1: GeForce RTX 2070 SUPER (driver version 456.55, device version OpenCL 1.2 CUDA, 8192MB, 3549MB available, 9062 GFLOPS peak)


Keith, I'm trying to understand this so please bear with me.

The BOINC Manager Event Log shows that I have 4096MB of compute capability (and I realize you're saying don't believe it), but it also says that I have 3549MB available for additional(?) computing.  And if the event log also indicates for OpenCL that I have 8192MB (which I do), and it still says that 3549MB is available, then why can't I use it?

If I have my 'GPU utilization factor of GW apps' set to 0.5, then I would expect to get 2 GW tasks per GPU, as I currently have.  By what I believe you're saying, if I change it to 0.33 then I should get 3 GW tasks per GPU, and 0.25 would give me 4 GW tasks per GPU.  But I can't: I have changed the setting to 0.33, exited, rebooted, and relaunched, and it still only gives me 2 tasks per GPU.

I realize that I'm running Windows and you're on Linux, and there are differences between the two.  But either I throw out what BOINC is telling me and just DO IT! (as Nike says), or BOINC needs to change what it tells us, at least in the event log, so dummies like me don't get hung up on things like this.

Let me see if I can help a little bit here.... BOINC was, and still is, written to be compatible with 32-bit hardware.  When 64-bit hardware came along, they made it compatible with the new 64-bit stuff BUT never changed some of the underlying 32-bit code, and GPU memory reporting is one of those holdovers.  They are working on a 64-bit-only version of BOINC, which will fix all of the 32-bit holdover quirks, and then BOINC will be able to report the correct amount of memory on all GPU cards.

Can you imagine having a 20GB GPU card and it only showing 4GB of RAM?  IOW, BOINC KNOWS you have 8GB or 20GB or whatever on your GPU, but due to programming restrictions it can't report the correct number, so it stops at 4GB.  My 3GB 7970 is reported correctly, as it's under the 4GB reporting limit.  My 4GB AMD 460 is also reported correctly, but my AMD 580 with 8GB of RAM is reported as only having 4GB.

GWGeorge007
Joined: 8 Jan 18
Posts: 3074
Credit: 4975047686
RAC: 1431718

Keith Myers wrote:

You should be able to run 3X GW tasks in 8GB of memory . . . IF the GW tasks are only about 2GB each (3 x 2GB = 6GB, which fits within 8GB).

For a long while they took over 4GB of memory, and then of course there is not enough memory to run 3X per card.  But I believe their size has been reduced, so the tasks will run on 3GB and 4GB cards now.

Also, you have to have enough CPU support to run 3X per card, which normally means dropping some of your CPU cores from crunching to support the GPU tasks.

My testing on Nvidia cards has NEVER shown that running more than one task per card is better than running just a single task per card.  Production at 2X or 3X does not scale on Nvidia.

Apparently it is a completely different case for AMD, however.

WARNING!!

Your mileage may vary.

Okay Keith, once again you've given me much to think about... and that's a good thing.  I never stop thinking about BOINC, and I will always want to know the 'why' and the 'what if', because that's who I am.  My dad was an engineer, and he taught me to question everything.

So I'm going to think on this awhile, likely overnight, and we will see what comes of it tomorrow.

LOL  "Your mileage may vary."  Very funny!!!

George

Proud member of the Old Farts Association

GWGeorge007
Joined: 8 Jan 18
Posts: 3074
Credit: 4975047686
RAC: 1431718

mikey wrote:

Let me see if I can help a little bit here.... BOINC was, and still is, written to be compatible with 32-bit hardware.  When 64-bit hardware came along, they made it compatible with the new 64-bit stuff BUT never changed some of the underlying 32-bit code, and GPU memory reporting is one of those holdovers.  They are working on a 64-bit-only version of BOINC, which will fix all of the 32-bit holdover quirks, and then BOINC will be able to report the correct amount of memory on all GPU cards.  Can you imagine having a 20GB GPU card and it only showing 4GB of RAM?  IOW, BOINC KNOWS you have 8GB or 20GB or whatever on your GPU, but due to programming restrictions it can't report the correct number, so it stops at 4GB.  My 3GB 7970 is reported correctly, as it's under the 4GB reporting limit.  My 4GB AMD 460 is also reported correctly, but my AMD 580 with 8GB of RAM is reported as only having 4GB.

Thanks Mikey, I'd sort of surmised that from my chats with Keith.  I can only hope that BOINC comes around and does something soon, so that no one else gets confused by the issues at hand as I have.  Until then...

I guess I should apologize to Keith (HEY KEITH, you listening?) for not thinking back to what we talked about long ago.  Once you brought up 32-bit vs 64-bit, I started to think back, and I seem to remember we had this conversation (in part) before.  So Keith, I do apologize for jumping in so hastily without remembering the topic at hand.  I'm sorry!

George

Proud member of the Old Farts Association

Keith Myers
Joined: 11 Feb 11
Posts: 4969
Credit: 18771314179
RAC: 7242398

No problem George.

Just wanted to comment that Mikey's 8GB AMD card is simply a problem of his 32-bit AMD drivers.  The newer AMD cards that have 8GB or more of memory and use the latest drivers report all their memory correctly.

The Nvidia cards do too with their current drivers, for every program except BOINC that I know of.  A simple call to the Nvidia 64-bit API function cuDeviceTotalMem_v2 is all it takes to make BOINC work correctly.

BOINC is just using the old 32-bit cuDeviceTotalMem API call, and that is the problem.

But until the developers incorporate the fix and compile a new client, you will have to live with the reporting issue, unless you compile your own client with the code fix.
