All things Navi 10

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2908328759
RAC: 2134235

Tom M wrote:
I have not been able to locate an AMD GPU memory utility.  The docs for the AMD GPU utilities don't clearly explain how to get it to display memory usage :(

Memory Load % is displayed in amdgpu-utils. How does that differ from memory usage?

Ideas are not fixed, nor should they be; we live in model-dependent reality.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7225174931
RAC: 1042149

cecht wrote:
Memory Load % is displayed in amdgpu-utils. How does that differ from memory usage?

Some utilities show the fraction of available memory transfer rate actually used, and also the fraction of memory storage used.  On Windows, HWiNFO uses the terms "Memory Controller Utilization" in percent, and "GPU memory" in megabytes.  GPU-Z uses "Memory Controller Load" in percent, and "Memory Used" in megabytes.

Do you think "Memory Load %" is showing something related to transfer traffic, or to actual memory occupancy?  The dynamics should give a clear clue.
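
On Linux, one way to let the dynamics speak is to poll both candidate metrics while tasks start and stop. Below is a minimal Python sketch, assuming an amdgpu card at card0 and a kernel new enough to expose mem_busy_percent (controller load) alongside the VRAM occupancy files; if "Memory Load %" tracks the first value it is transfer traffic, and if it steps when tasks start or finish, like the second, it is occupancy:

    import time
    from pathlib import Path

    # Assumes the GPU is card0; adjust the path on multi-GPU systems.
    dev = Path("/sys/class/drm/card0/device")
    vram_total = int((dev / "mem_info_vram_total").read_text())

    for _ in range(30):
        busy = int((dev / "mem_busy_percent").read_text())    # controller load, %
        used = int((dev / "mem_info_vram_used").read_text())  # occupancy, bytes
        print(f"controller load {busy:3d}%  VRAM occupancy {used / vram_total:.1%}")
        time.sleep(1)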

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3958
Credit: 46992482642
RAC: 64852117

cecht wrote:

Tom M wrote:
I have not been able to locate an AMD GPU memory utility.  The docs for the AMD GPU utilities don't clearly explain how to get it to display memory usage :(

Memory Load % is displayed in amdgpu-utils. How does that differ from memory usage?

"memory load" to me sounds like memory controller load. how hard the is controller working. not how much space is being used.

He wants to know how to check the percentage or the actual amount of space used, like 6GB/8GB or 1GB/8GB.
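
For what it's worth, on Linux the amdgpu driver reports exactly that pair of numbers in sysfs (in bytes); a minimal Python sketch, assuming the card shows up as card0:

    from pathlib import Path

    # Standard amdgpu sysfs entries; values are in bytes.
    dev = Path("/sys/class/drm/card0/device")
    used = int((dev / "mem_info_vram_used").read_text())
    total = int((dev / "mem_info_vram_total").read_text())

    gib = 1024 ** 3
    print(f"{used / gib:.1f}GB/{total / gib:.0f}GB ({used / total:.0%} of VRAM used)")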

_________________________________________________________________________

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2908328759
RAC: 2134235

archae86 wrote:

Some utilities show the fraction of available memory transfer rate actually used, and also the fraction of memory storage used.  On Windows, HWiNFO uses the terms "Memory Controller Utilization" in percent, and "GPU memory" in megabytes.  GPU-Z uses "Memory Controller Load" in percent, and "Memory Used" in megabytes.

Do you think "Memory Load %" is showing something related to transfer traffic, or to actual memory occupancy?  The dynamics should give a clear clue.

Ian&Steve C. wrote:

"memory load" to me sounds like memory controller load. how hard the is controller working. not how much space is being used.

He wants to know how to check the percentage or the actual amount of space used, like 6GB/8GB or 1GB/8GB.

Got it! Thanks!

Tom M, you might raise that as a request on the amdgpu-utils GitHub Issues page and ask if it's possible to include memory usage as a feature of the utility's monitoring function.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Tom M
Joined: 2 Feb 06
Posts: 6458
Credit: 9581383859
RAC: 7164217

Quote:

Quote:
He wants to know how to check the percentage or the actual amount of space used, like 6GB/8GB or 1GB/8GB.

Got it! Thanks!

Tom M, you might raise that as a request on the amdgpu-utils GitHub Issues page and ask if it's possible to include memory usage as a feature of the utility's monitoring function.

Thank you for the reminder and the link.  I am already subscribed to that email list.  I have just added a new issue about memory usage.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2908328759
RAC: 2134235

Performance data and initial optimization for RX 5600 XT running gravitational wave or binary pulsar tasks.
GPU: Sapphire Pulse RX 5600 XT, 6GB, dual BIOS switch set to default performance mode (away from video ports)
CPU: Pentium G5600, 2c/4t, 3.9 GHz
System: Ubuntu 18.04.4, Linux kernel 5.3.0, OpenCL installed from AMD's Radeon driver 20.10; 24 GB DDR4 2133 MHz

Gamma-ray binary pulsar, FGRPB1G, app 1.18 (FGRPopencl1K-ati)

gpu_usage | T-task | W   | Wh/task | SCLK | MCLK | Limit | PPM  | clock speed note
1         | 8.5    | 101 | 14.4    | 1780 | 875  | 160   | Comp | default
0.50      | 6.5    | 127 | 13.7    | 1780 | 875  | 160   | Comp | default
0.33      | 6.0    | 133 | 13.2    | 1780 | 875  | 160   | Comp | default
0.25      | 6.0    | 134 | 13.3    | 1780 | 875  | 160   | Comp | default
0.33      | 6.0    | 123 | 12.3    | 1700 | 875  | 150   | Comp | custom
0.33      | 6.7    | 98  | 11.0    | 1560 | 750  | 150   | Boot | AMD ref card
0.33      | 6.6    | 124 | 13.7    | 1780 | 750  | 150   | Boot | "quiet" BIOS

Gravitational Wave, WU series O2MDFV2g_VelaJr1, app 2.08 (GW-opencl-ati)

gpu_usage | T-task | W-c | Wh/task | SCLK | MCLK | Limit | PPM  | T-cpu | clock speed note
1         | 13.1   | 89  | 19.4    | 1780 | 875  | 160   | Comp | 95%   | default
0.50      | 9.6    | 118 | 18.8    | 1780 | 875  | 160   | Comp | 80%   | default
0.33      | 15.2   | 97  | 24.6    | 1780 | 875  | 160   | Comp | 43%   | default
0.50      | 11.5   | 102 | 17.3    | 1700 | 875  | 150   | Comp | 66%   | custom
0.50      | 12.1   | 86  | 17.3    | 1560 | 750  | 150   | Boot | 68%   | AMD ref card

T-task: normalized task time, in minutes, from average run time * gpu_usage
W-c: crunch Watts = Watts-at-wall while running tasks minus resting Watts, averaged over several hours
Wh/task: Watt-hours per day divided by the calculated number of tasks per day (see the worked example after these notes)
SCLK: endpoint shader clock frequency, in MHz
MCLK: endpoint memory clock frequency, in MHz; values may read as doubled in some monitoring apps (Windows?)
PPM: power performance mode; Comp=COMPUTE, Boot=BOOTUP_DEFAULT
Limit: power limit, in Watts
T-cpu: CPU run time as a percentage of task run time
notes: For all runs, used default p-states and default VDDC curve points (0: 800 MHz/707 mV; 1: 1290/750; 2: 1780/959).
Card run parameters were monitored and set with amdgpu-utils, from GitHub.
Tables were formatted for BBCODE at https://theenemy.dk/table/.
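
As a quick sanity check of the Wh/task arithmetic, here is a minimal Python sketch using the first gamma-ray row above:

    # Wh/task for the first gamma-ray row: 8.5 min/task at 101 W.
    t_task = 8.5                                # normalized task time, minutes
    watts = 101                                 # average power draw while crunching

    tasks_per_day = 24 * 60 / t_task            # ~169 tasks/day
    wh_per_task = watts * 24 / tasks_per_day    # same as watts * t_task / 60
    print(round(wh_per_task, 1))                # 14.3, matching the ~14.4 in the table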

In summary: for gamma-ray tasks, my preferred settings are 3 concurrent tasks (0.33 gpu_usage, 3x) with a 1700 MHz shader clock, an 875 (or 1750) MHz memory clock, and either a 150 W or 160 W power limit; for gravitational wave tasks, my preference is 2 concurrent tasks at the card's default settings.

One interesting observation for GW tasks is that at 1x concurrency the CPU time is nearly equal to the total run time, but the proportion decreases with increasing task concurrency. With AMD RX 5xx cards, CPU time stays constant at about 45% of run time.

For GW tasks, the RX 5600 XT had no sets of exceptionally "long" runs at 1x or 2x, but did at 3x, where "long" means run times 4 to 10 times longer than the more frequent short runs. This may be because of GPU memory limitations; it's a 6 GB card. CPU resources did not appear to be limiting: at 3x, CPU usage was ~54% with a load average of 2.4.

No task time differences were seen with different PPMs at the same clock settings.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Tom M
Joined: 2 Feb 06
Posts: 6458
Credit: 9581383859
RAC: 7164217

cecht wrote:

Performance data and initial optimization for RX 5600 XT running gravitational wave or binary pulsar tasks.
GPU: Sapphire Pulse RX 5600 XT, 6GB, dual BIOS switch set to default performance mode (away from video ports)

Thank you for the report.  I appreciate the additional data.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2908328759
RAC: 2134235

Just an update on my last post here, 30 May, on RX 5600 XT performance for gravitational wave tasks. The slowdown seen when going from running 2x to 3x tasks does seem to be a GPU memory issue.

At 2x tasks, VRAM Use is ~82% and memory use (out of 6 GB) is 0.8%.
At 3x tasks, VRAM Use is ~99.6% and memory use (out of 6 GB) is 9%.

Memory use data were from the 'amdgpu-monitor' command of the latest amdgpu-utils Master on GitHub. VRAM total is the same as the card's total physical memory (from the file /sys/class/drm/card1/device/mem_info_vram_total). So the more card memory, the more VRAM, and the better the performance when running multiple tasks.
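
If the 3x slowdown really is tasks spilling out of VRAM, the overflow should also show up in the GTT counters (the shared system-memory aperture) that the amdgpu driver exposes next to the VRAM files; a minimal Python sketch, assuming the same card1 path as above:

    from pathlib import Path

    # Same card1 device path as mem_info_vram_total above; GTT is the
    # system-RAM aperture the driver falls back on when VRAM fills up.
    dev = Path("/sys/class/drm/card1/device")

    def pct(used_file, total_file):
        used = int((dev / used_file).read_text())
        total = int((dev / total_file).read_text())
        return f"{used / total:.1%}"

    print("VRAM used:", pct("mem_info_vram_used", "mem_info_vram_total"))
    print("GTT used: ", pct("mem_info_gtt_used", "mem_info_gtt_total"))

If GTT use jumps when going from 2x to 3x, that would support the spill-over idea.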

That's my working hypothesis anyway. I hope somebody has Navi 10 data or other ideas to support or refute it.

Happy crunching!

Ideas are not fixed, nor should they be; we live in model-dependent reality.

T-800
Joined: 21 Jun 16
Posts: 2
Credit: 4691520
RAC: 0

**Disclaimer** I'm pretty new to crunching (hello by the way!) and most of my understanding of this stuff comes from building gaming PCs, so what I've written below might be complete guff or already common knowledge; I apologise if this is the case!

My understanding of your memory numbers is that VRAM refers to the dedicated memory on the graphics card itself whereas "memory usage" refers to main system memory (RAM) that the card can access if its own memory is full.

For example, the attached screenshot shows my GPU info in Windows Task Manager: the total GPU memory is 8.0 GB, made up of 2.0 GB of "dedicated GPU memory" (VRAM on the card itself) and 6.0 GB of "shared GPU memory", which is main system RAM that the GPU is allowed to access.

[screenshot: Windows Task Manager GPU memory details]

6GB seems to be the default amount of main system memory Windows allows the GPU to utilise; at least, it is 6GB on both of my machines, despite one having 12GB of system RAM and the other 16GB.

GPUs using their dedicated VRAM first, and then falling back on an area of main system memory if it runs out, has been standard practice for both Nvidia and AMD cards for some time, I believe. Indeed, modern integrated graphics (Intel, or in AMD APUs) rely entirely on a portion of main system memory for graphics purposes.

Your result for 2x shows only 82% of the card's VRAM is utilised, so it hasn't had to access the shared main system memory.

Your result for 3x shows the VRAM is full (99.6% is basically 100%) and it has spilled over into the shared system memory.

I would guess Einstein's application has issues handling this when it happens, which results in the slowdowns you measured. So yes, the more VRAM on your graphics card, the more tasks you should be able to run concurrently before it has to utilise system memory and slowdowns are observed.

I would therefore assume that this same behaviour would be observed with other cards as well, not just your 5600 XT.

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2908328759
RAC: 2134235

T-800 wrote:

....Your result for 3x shows the VRAM is full (99.6% is basically 100%) and it has spilled over into the shared system memory.

I would guess Einstein's application has issues handling this when it happens, which results in the slowdowns you measured. So yes, the more VRAM on your graphics card, the more tasks you should be able to run concurrently before it has to utilise system memory and slowdowns are observed.

I would therefore assume that this same behaviour would be observed with other cards as well, not just your 5600 XT.

That all makes sense.  Thanks!

Ideas are not fixed, nor should they be; we live in model-dependent reality.
