All things Nvidia GPU

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6578
Credit: 306851431
RAC: 188409

Well the VPN is via our National Broadband Network which is optic fibre to the node locally. That gives about 100Mbps download and 40 upload currently, even though they are just next door.

It's important that I see the data within minutes of acquisition, but not within seconds so some patience is required. The idea is that I can interrogate the database of the imaging provider to make my own clinical judgments in the very near future, rather than awaiting a formal radiological interpretation - which takes at least several hours typically. While it's nice to speak with the radiologist ad hoc & promptly ( if they're available ! ) when needed, we still need to be looking at the same image when conversing. Bland/traditional X/R data is a doddle - just digitised planar films which I'm very used to - it's the 3D aspect that is becoming increasingly telling at a clinical level. Being able to rotate a viewpoint off the canonical anatomy planes is so simple but ever so helpful. Plus it makes me look really cool/clever ..... but don't tell anyone that I've had to rapidly re-jig some of my anatomy knowledge to cope with that. ;-)

Now I'm amenable to some future proofing, so maybe a better class of system than strictly required would help eg. threadripper ( SSD and oodles of memory is a given here ). I haven't had a major IT outlay for around five years and probably won't do so for another five.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Keith Myers
Joined: 11 Feb 11
Posts: 4893
Credit: 18431228429
RAC: 5737516

I got a chance to work with a company doing 3D holography of MRI-based imaging of tumor volumes, for integration into the stereotactic planning computer for the first multi-leaf collimators in the linacs that I was installing.

Neat to grab a hologram and rotate it around to change the viewing angles.

I know what you need now. The hardware of today will have no issues, but I think it's smart to build in some future proofing.


Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6578
Credit: 306851431
RAC: 188409

Yep, you got it all right. That would have been a fascinating project.

Interestingly, recent medical graduates typically know their anatomy via teaching using similar technology, e.g. 3D immersive systems. What a treat this is ! Hence a really fast machine with multiple video slots is, I think, a good choice. Plus it seems the video card market is slanting towards buyers now.

I did anatomy* in 1981**, pre-computerised tomography et al. I knew a few students then that were highly challenged by an inability to imagine/visualise in 3D terms ( 'aphantasia', a sort of dyslexia variant ) and subsequently did poorly with interpreting 2D slices of 3D objects. They just couldn't 'see' it and many either dropped ( or were pushed ) out of the course. Some very clever & earnest people lost from medicine ....

Cheers, Mike

* Two components basically : the identity or naming of parts, and the rather more difficult relations of parts. 

** pre-just-about-every-other-gadget too. We saw neither the utility nor the perversions of The Internet then ( on balance a winner though ).


GWGeorge007
Joined: 8 Jan 18
Posts: 2994
Credit: 4926034438
RAC: 161522

I've got another curious one for Nvidia.

I've just finished setting up the 3950X with my 1000W PSU & cables from my 5950X computer, plus I've installed the two 2070S's from the 5950X as well.  How come my NVIDIA X Server Settings shows the Graphics Clock and Memory Transfer Rate Offset for "GPU 0", but not for "GPU 1"?

Look at my pics:

Any ideas?

Both GPUs are working, crunching Einstein about the same as when they were in the 5950X.  In the 3950X I have an ASUS Prime X470 Pro motherboard, whereas my 5950X computer has an ASUS ROG X570 Crosshair VIII with Wi-Fi.  My thought is that it's related to the motherboard, but...  should I purge the NVIDIA X Server Settings file and download it again?  I just don't know.

George

Proud member of the Old Farts Association

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3911
Credit: 43670185976
RAC: 63105779

GWGeorge007 wrote:

I've got another curious one for Nvidia.

I've just finished setting up the 3950X with my 1000W PSU & cables from my 5950X computer, plus I've installed the two 2070S's from the 5950X as well.  How come my NVIDIA X Server Settings shows the Graphics Clock and Memory Transfer Rate Offset for "GPU 0", but not for "GPU 1"?

you need to run the coolbits command again to enable overclocking for the new GPU. the command will run on all GPUs in the system at time of running it. so if you add a GPU later, you need to run it again to enable for the new GPU. I'm guessing when you first setup this system, you only had one GPU, then added another later on. or maybe you moved the GPU from one slot to another slot (changing the PCI address, and thus making it look like a new GPU).

re-run this:

sudo nvidia-xconfig --thermal-configuration-check --cool-bits=28 --enable-all-gpus

then reboot.
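For anyone wanting to verify the result after the reboot, a quick sketch (assuming the stock Xorg config path on your distro; adjust if yours writes the file elsewhere) is to check that every GPU's Device section picked up the Coolbits option, and that the driver actually sees both cards:

```shell
# Assumes the default xorg.conf location; some distros use /etc/X11/xorg.conf.d/ instead.
# Each GPU's "Device" section should carry its own Coolbits option:
grep -n 'Coolbits' /etc/X11/xorg.conf

# Sanity-check that the driver enumerates both cards (index, name, PCI address):
nvidia-smi --query-gpu=index,name,pci.bus_id --format=csv
```

If only one `Coolbits` line comes back, the second GPU's Device section was never generated, which matches the missing offset controls in NVIDIA X Server Settings.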


GWGeorge007
Joined: 8 Jan 18
Posts: 2994
Credit: 4926034438
RAC: 161522

Ian&Steve C. wrote:

GWGeorge007 wrote:

I've got another curious one for Nvidia.

I've just finished setting up the 3950X with my 1000W PSU & cables from my 5950X computer, plus I've installed the two 2070S's from the 5950X as well.  How come my NVIDIA X Server Settings shows the Graphics Clock and Memory Transfer Rate Offset for "GPU 0", but not for "GPU 1"?

you need to run the coolbits command again to enable overclocking for the new GPU. the command will run on all GPUs in the system at time of running it. so if you add a GPU later, you need to run it again to enable for the new GPU. I'm guessing when you first setup this system, you only had one GPU, then added another later on. or maybe you moved the GPU from one slot to another slot (changing the PCI address, and thus making it look like a new GPU).

re-run this:

sudo nvidia-xconfig --thermal-configuration-check --cool-bits=28 --enable-all-gpus

then reboot.

Ahhh...  Thank you Ian!!   That's what I forgot about.  DUH!

I had a 2060 GPU before, and I took that one out and replaced it with the two 2070S's, never thinking about Coolbits.

Thanks again.

George


Ronald McNichol
Joined: 28 Feb 22
Posts: 27
Credit: 99853798
RAC: 0

Color me perplexed!

I was wondering about the relative performance of my 4 systems. The FP performance fell into what I thought was a reasonable range, but I was astounded by what BOINC thought of the integer performance of my systems.
Results:

CPU              Integer
i7-8750H         12,389.41
R9 5900X         21,395.30
Pi4 (32-bit OS)  56,344.55 !!!
Pi4 (64-bit OS)  75,488.77 !!!

Something HAS to be wrong with the way they measure these things for INTEGER performance!

Both Pis were roughly half as fast as my laptop on FP operations (which I think says a lot for the Pis).
The laptop was about 2/3 the performance of a current desktop (which says a lot for my older laptop).


Keith Myers
Joined: 11 Feb 11
Posts: 4893
Credit: 18431228429
RAC: 5737516

Because BOINC benchmark code was written before ARM ever came on the scene.  It does not handle ARM processors correctly and produces outlandish numbers that are not realistic.

Compare ARM32 and ARM64 processors against an Intel i7 desktop processor for realistic numbers.

Android Benchmarks For 32 Bit and 64 Bit CPUs from ARM, Intel and MIPS

 

Ronald McNichol
Joined: 28 Feb 22
Posts: 27
Credit: 99853798
RAC: 0

Keith Myers wrote:

Because BOINC benchmark code was written before ARM ever came on the scene.  It does not handle ARM processors correctly and produces outlandish numbers that are not realistic.

Compare ARM32 and ARM64 processors against an Intel i7 desktop processor for realistic numbers.

Android Benchmarks For 32 Bit and 64 Bit CPUs from ARM, Intel and MIPS



Thanks for the link to the table.

 

Ronald McNichol
Joined: 28 Feb 22
Posts: 27
Credit: 99853798
RAC: 0

I just did an experiment with 2 of my 3 Raspberry Pi 4s. The two in question are running Buster (32-bit Debian).

Running 1 job takes ~3.6 hours, or 6.67 jobs/day.

Running 2 jobs takes ~4.5 hours, or 10.67 jobs/day.

Running 4 jobs takes ~5 hours, or 19.20 jobs/day.

So, my worry about cache misses slowing things down as more cores access memory simultaneously was well founded, but not by enough to stop running extra jobs. The numbers above are somewhat askew, as they were taken from too few samples, but I think they are good enough that I am going to keep all 4 cores going on my 32-bit workaday Pis, and 2 jobs on my file server (which I did the experiments on, and just added to the mix yesterday). (The first row was only one sample.)
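The jobs/day figures above follow from a simple formula: throughput = concurrent jobs × 24 / hours per job. A small sketch reproducing the numbers (the timing pairs are taken straight from the post):

```shell
# jobs/day = concurrent_jobs * 24 / hours_per_job
# Pairs below are "N hours" as reported above.
for pair in "1 3.6" "2 4.5" "4 5.0"; do
  set -- $pair
  awk -v n="$1" -v h="$2" \
      'BEGIN { printf "%d concurrent: %.2f jobs/day\n", n, n * 24 / h }'
done
# 1 concurrent: 6.67 jobs/day
# 2 concurrent: 10.67 jobs/day
# 4 concurrent: 19.20 jobs/day
```

It also makes the scaling penalty easy to see: perfect scaling from 1 to 4 cores would give 26.67 jobs/day, so the observed 19.20 is about 72% efficiency.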

My 64-bit Pi, running a different app version (still beta?), gets around 18.46 jobs/day on the same basis. Obviously not optimized for Bullseye. :(

 
