All things Nvidia GPU

Tom M
Joined: 2 Feb 06
Posts: 6258
Credit: 8913923658
RAC: 10410866


Keith Myers wrote:

No. The article is about Vulkan technology. What Ian uses is CUDA technology.

Not the same.

Are they taking advantage of the same concepts though?

Clearly they are running on the same hardware :)
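For what it's worth, the closest CUDA-side analogue of the article's Vulkan multi-queue idea is streams: independent command queues whose work can overlap on a single device. A minimal sketch, with the kernel, buffer names, and sizes invented purely for illustration:

#include <cuda_runtime.h>

// Toy kernel: multiply every element of a buffer by f.
__global__ void scale(float *x, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= f;
}

int main() {
    const int n = 1 << 20;
    float *a = nullptr, *b = nullptr;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Two streams = two independent command queues on one GPU,
    // roughly the CUDA counterpart of Vulkan's multiple queues.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads, 0, s1>>>(a, n, 2.0f);  // queued on stream 1
    scale<<<blocks, threads, 0, s2>>>(b, n, 0.5f);  // queued on stream 2

    // The two launches MAY run concurrently if the first leaves SMs idle;
    // if each launch saturates the GPU, they simply run back to back.
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}

That last comment is really the whole story: both APIs expose multiple queues, but you only gain throughput when a single stream of work leaves part of the chip idle.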

 

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3912
Credit: 43739029309
RAC: 63197548


It’s not really applicable to BOINC. BOINC treats each GPU as a single, standalone device. The use case is different.

_________________________________________________________________________

KLiK
Joined: 1 Apr 14
Posts: 56
Credit: 364244110
RAC: 1482090


Tom M wrote:

https://towardsdatascience.com/parallelizing-heavy-gpu-workloads-via-multi-queue-operations-50a38b15a1dc

I ran across this, and I remember a discussion about running Nvidia GPUs in a specific mode that allowed more performance but did not allow multiple different GPU applications.

I am wondering if the underlying technology described in the above article is what Ian&Steve discussed as that high-performance mode for Nvidia GPUs?

Tom M

Some projects can run this way (see the config sketch below). For example, SETI@home could run several instances at the same time.

Yes, the GPU does not have HT, but it has so many cores that this can be done on some apps.
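For reference, the standard BOINC-side way to run several tasks on one GPU is an app_config.xml in the project's data directory. A minimal sketch; the app name below is a placeholder, and the real one has to come from the project's own app list:

<app_config>
  <app>
    <name>example_gpu_app</name>   <!-- placeholder; use the project's actual app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- each task claims half a GPU: two run at once -->
      <cpu_usage>1.0</cpu_usage>   <!-- one CPU core reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>

The client reads this at startup (or via Options > Read config files) and then schedules 1/gpu_usage tasks per card.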

Tom M
Joined: 2 Feb 06
Posts: 6258
Credit: 8913923658
RAC: 10410866


https://youtu.be/yKOg1DjtB50?si=iIm3P8QS4sroeZcM

Shades of the RTX 3080 Ti vs. the Titan V.

Competitive bang in a certain niche for a lot less buck!

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3912
Credit: 43739029309
RAC: 63197548


I'm having trouble understanding how that L40S video has anything to do with a 3080 Ti or Titan V.

The L40S looks basically like a passively cooled (server airflow needed) RTX 6000 Ada, for more money.

_________________________________________________________________________

Tom M
Joined: 2 Feb 06
Posts: 6258
Credit: 8913923658
RAC: 10410866


Ian&Steve C. wrote:

I'm having trouble understanding how that L40S video has anything to do with a 3080 Ti or Titan V.

The L40S looks basically like a passively cooled (server airflow needed) RTX 6000 Ada, for more money.

Sorry. YAOP (yet another obscure post).

The L40S provides competitive 8-bit FP performance compared to the H100, at a much lower price point. Apparently this is a good thing for AI inference.

The Titan V provides higher 64-bit FP performance than the RTX 3080 Ti, at a lower price point. Apparently this is a good thing for some BOINC projects/tasks.

That was where the obscure "niche" reference was coming from.
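To put rough numbers on that niche (spec-sheet figures, so treat them as approximate): the Titan V runs FP64 at 1/2 of its FP32 rate, about 7.4 TFLOPS, while the RTX 3080 Ti runs FP64 at 1/64 of its ~34 TFLOPS FP32 rate, i.e. about 0.5 TFLOPS. On paper, the older Titan V is on the order of 14x faster at double precision.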

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Tom M
Joined: 2 Feb 06
Posts: 6258
Credit: 8913923658
RAC: 10410866


I stumbled across V100 SXM2 GPUs for $199 apiece on eBay.

So I went looking for motherboards that support that non-PCIe interface. I found one for $1,700 (case, PSU, etc.).

I suspect these GPUs need active coolers too?

So my suspicion is that this solution runs into the noisy-server issue?

The same GPUs in PCIe form are much more expensive.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3912
Credit: 43739029309
RAC: 63197548


SXM2 is a proprietary format from Nvidia. PCIe, power, and NVLink connections are all provided via the two mezzanine socket connectors. It mounts flat to a socket, sort of like how a CPU mounts. You need a custom/proprietary daughterboard to accept it, which usually requires a whole server solution, as the daughterboard will be either vendor-locked or need some proprietary connection to the motherboard. And yes, it will be incredibly loud.
 

Mounting the GPU to the board is very tricky and requires extreme care and precision to get it right. If you have trouble mounting an EPYC CPU, I wouldn’t attempt anything with SXM.

 

That $1795 “system” is JUST the daughterboard/chassis setup for it with proprietary connectors. Doesn’t include the other half of the chassis, PSUs, motherboard, or anything else. 

_________________________________________________________________________

Tom M
Joined: 2 Feb 06
Posts: 6258
Credit: 8913923658
RAC: 10410866


Talk about confusing. It is enough to drive you back to PCIe-only systems.

:)

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3912
Credit: 43739029309
RAC: 63197548


And you can't put newer GPUs into the SXM2 systems; SXM3 and SXM4 are not compatible. So there is no hope of putting in anything faster than a V100 (which is still competent, but it squashes all dreams of dropping in an A100 or something later).

That's why the GPUs are so cheap: they only make sense to buy as a replacement for something you already have. The systems that accept them are still very expensive and make little sense to buy unless you get a screaming deal, and you also don't mind similar levels of "screaming" noise.

_________________________________________________________________________
