A few words about the future of tech & NPUs? What do you know? What do you think?
Are NPUs going to be included in E@h? What about some other project?
Will it be limited to CPU-integrated NPUs (like Intel's or AMD's), or will it include some other NPU devices, such as Google's Tensor & Coral TPU/NPU? Falcon cards?
Let's start the discussion here...
An NPU is designed to do neural network operations, like inferencing. This project does not use NNs, so an NPU would not be useful here. It's like the Tensor or RT cores on your GPU: they're there, but won't be used.
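For context, a minimal sketch (hypothetical, not anything from this project) of the kind of primitive an NPU is built around: low-precision multiply-accumulate, the core operation of NN inference. CUDA exposes a comparable int8 primitive as the __dp4a intrinsic (sm_61+); the kernel name and launch shape here are illustrative only.

    // Illustration only: each __dp4a call multiply-accumulates four
    // packed int8 pairs, the basic building block of NN inference.
    __global__ void int8_dot(const int *a, const int *b, int n, int *out) {
        int acc = 0;
        for (int i = 0; i < n; ++i)
            acc = __dp4a(a[i], b[i], acc);  // 4 int8 MACs per call
        if (threadIdx.x == 0 && blockIdx.x == 0)
            *out = acc;                     // toy launch: <<<1,1>>>
    }

Nothing in E@h's signal-processing math is expressed in that form, which is why the hardware would sit idle.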
_________________________________________________________________________
If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.
Unless it became "turnkey", it doesn't sound likely.
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor)
Tom M wrote: If e@h were to start using AI inference for data analysis, you might see it used here...
What about the NVIDIA Tensor Cores? Would they be useful at e@h if the software could be rewritten to make use of them?
Filipe wrote: What about the NVIDIA Tensor Cores? Would they be useful at e@h if the software could be rewritten to make use of them?
Sounds like an answer Petri could help with; he's VERY good at optimizing the crunching software for GPUs.
For most projects, Tensor cores won't be used unless support is coded into the app (and it makes sense for the app to do so). Like an NPU, Tensor cores are just another kind of ASIC: they can do one kind of operation, matrix FMAs (fused multiply-adds). Most GPU apps won't need this; it's more for ML and AI work. I think only a couple of the GPUGRID apps would use this. No other projects that I know of.
Unless the project wants to move to AI-based signal detection, and rewrite all their apps, and change their whole workflow, AND have a working/reliable trained model… I just don't see this being a feasible or realistic option. They have little enough time and resources as it is.
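To illustrate why Tensor core use has to be coded in explicitly, here is a minimal sketch (not Einstein@Home code; names and tile sizes are illustrative) using CUDA's WMMA API, the direct way a kernel requests the matrix-FMA hardware. One warp computes a single 16x16x16 tile of D = A*B + C, and it requires sm_70+ with half-precision inputs:

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp computes a single 16x16x16 tile of D = A*B + C.
    __global__ void tile_mma(const half *A, const half *B,
                             const float *C, float *D) {
        // Per-warp fragments that map onto the Tensor core datapath.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::load_matrix_sync(a, A, 16);                      // ld = 16
        wmma::load_matrix_sync(b, B, 16);
        wmma::load_matrix_sync(acc, C, 16, wmma::mem_row_major);
        wmma::mma_sync(acc, a, b, acc);                        // Tensor core op
        wmma::store_matrix_sync(D, acc, 16, wmma::mem_row_major);
    }

Launched with one 32-thread warp, e.g. tile_mma<<<1, 32>>>(dA, dB, dC, dD). Ordinary CUDA math (adds, FFT butterflies, etc.) never touches this path, which is why existing crunching kernels get nothing from Tensor cores unless they are restructured around matrix FMAs.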
_________________________________________________________________________
mikey wrote: Sounds like an answer Petri could help with; he's VERY good at optimizing the crunching software for GPUs.
NVIDIA could train a model to recognize a CUDA code pattern, a CPU/GPU I/O pattern, or a certain compiler output that would wake up a special AI phase for compiler-chain-assisted AI optimizations (ccaiop) to automagically produce SASS code that uses Tensor cores.
--
petri33
petri33 wrote: NVIDIA could train a model to recognize a CUDA code pattern... to automagically produce SASS code that uses Tensor cores.
Sounds simple! (I jest)
Ian&Steve C, do some of the current GPUGRID apps actually use tensor cores?
Also, is there a way to tell if tensor cores are being actively used, other than the word of the app creator?
Boca Raton Community HS wrote: ...is there a way to tell if tensor cores are being actively used, other than the word of the app creator?
They are supposed to be used, since the underlying software packages are supposed to be able to use them.
The only way to see Tensor core usage would be to use the nvprof profiler in the CUDA toolkit. I haven't gotten around to trying it yet.
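For anyone who wants to try, a hedged example of what that check might look like. The metric name is the Volta-era nvprof one (verify with nvprof --query-metrics on your setup); on Turing and newer, nvprof is deprecated and Nsight Compute has equivalent counters:

    # Volta-class GPU: nonzero tensor utilization means the app
    # is actually exercising the Tensor cores.
    nvprof --metrics tensor_precision_fu_utilization ./app

    # Turing and newer: the Nsight Compute equivalent, e.g.
    ncu --metrics sm__inst_executed_pipe_tensor.sum ./app

If the counter stays at zero across a work unit, the app never issued a Tensor core instruction, regardless of what the underlying packages are capable of.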
_________________________________________________________________________