If the EPYC and Threadripper sockets share a form factor, could you use quieter Threadripper liquid or air cooling on the CPUs instead of the really loud server-class fans/CPU coolers?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Tom M wrote: If the EPYC and Threadripper sockets share a form factor, could you use quieter Threadripper liquid or air cooling...
The Supermicro 4U cooler does a very competent job keeping the EPYC CPUs cool, and they really aren't that loud.
But yes, you can use a Threadripper liquid or air cooler too, as long as it isn't too wide and doesn't impede on the RAM sticks, which tend to sit a little closer to the CPU socket on EPYC boards. Three out of my four ASRock EPYC boards are running with a Threadripper EK waterblock.
Just keep in mind that the Threadripper socket is rotated 90 degrees from the EPYC socket, so if you put a Threadripper air cooler on an EPYC board you will have airflow across the board 90 degrees from normal, toward or away from the PCIe slots.
_________________________________________________________________________
Ian&Steve C. wrote: just
)
Ah. That is what I missed.
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Hmmm.... a Threadripper 1950X and a few 2950Xs are actually going for less than the 5950X on eBay. (All are 16c/32t CPUs.)
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Don't waste your time with the 1950X, IMO.
Honestly, I wouldn't get anything X399; it's a dead platform.
_________________________________________________________________________
Ian&Steve C. wrote: dont
)
Even though I own an X399 platform and CPU, I agree with Ian's assessment.
Dead platform. Even with the latest TRX40 chipset, I don't see any future development from AMD.
Windows 10 Pro 64, 6 core (Intel/NVidia laptop), 12 core (AMD/AMD):
I have been running whatever Einstein@Home has thrown at me. I run 5 to 6 jobs on my laptop and 11 jobs on my desktop.
I was wondering how cacheable these jobs are, and whether they are being slowed down by actual memory accesses. I would think SIMD code would be pretty cacheable, but I was looking around for a way to keep track. If there are a lot of cache misses/memory accesses, would it be better to run fewer jobs to avoid the thrashing? I am not seeing a lot of page faults/paging going on, so I think I am OK there.
BTW, on my laptop 2 of the 5 or 6 jobs run on my two GPUs: one on the dedicated NVidia GPU and the other on the Intel APU (a surprise to me). Naturally, the NVidia one runs a LOT faster.
Bottom line: what is everyone here running to keep track of cache misses? (To see whether memory is a bottleneck for N jobs.)
That is actually a difficult thing to do unless you are a developer and can debug your code.
I Googled the issue and came up with this single answer from the Stack Overflow forum:
Intel® VTune™ Profiler
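VTune gives the full picture, but for a quick sanity check on Linux you can also count cache misses for an already-running science app with `perf stat`. A minimal sketch in Python (the PID lookup and the 30-second window are placeholders; `perf` must be installed and the kernel must allow counting events on the target process):

#!/usr/bin/env python3
# Rough cache-miss check for a running task on Linux (sketch only).
# Wraps `perf stat` around an existing process for a fixed window and
# prints perf's counter summary. perf may need root or a relaxed
# kernel.perf_event_paranoid setting.
import subprocess
import sys

def sample_cache_misses(pid: int, seconds: int = 30) -> str:
    # perf stat prints its counter summary to stderr; the trailing
    # `sleep` just bounds how long we watch the target process.
    result = subprocess.run(
        ["perf", "stat", "-e", "cache-references,cache-misses",
         "-p", str(pid), "sleep", str(seconds)],
        capture_output=True, text=True,
    )
    return result.stderr

if __name__ == "__main__":
    target_pid = int(sys.argv[1])   # e.g. the PID of one Einstein science app
    print(sample_cache_misses(target_pid))

If the cache-miss ratio climbs noticeably as you add concurrent tasks, that is a hint that memory, rather than the cores, is becoming the limit.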
Hi,
I've seen people talk about cache misses and testing. Some CPUs have tens of cores, but due to cache misses they have been set to run tasks on only some of the cores. Some cores are left 'free' to handle OS/disk/network/IRQ/... and some cores are dedicated to feeding GPU tasks. I do not run any CPU tasks - GPU only.
People first run N tasks at a time and keep track of the run time. Then they increase or decrease N by M, and so on, until they hit the sweet spot. Then fine tuning. Experimentation!
petri33 wrote: ...Then fine tuning. Experimentation!
This is by far better than trying to measure or reason your way from secondary parameters, assuming you can find a way to get appropriate measurements.
Here at Einstein, for weeks at a time, the GPU tasks distributed for "Gamma-ray pulsar binary search #1 on GPUs" are remarkably well-matched in work content, so it is very easy to use experiments to explore the productivity impact not only of number of simultaneous tasks, but other adjustments. From time to time there have been abrupt shifts, so one must avoid being put off by those boundaries.
However, other Einstein work types at other times have had a great deal of variation even in the short term, which makes experimentation rather more difficult--but still the right way to go.
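The sweep petri33 and archae86 describe can also be scripted. A minimal sketch follows; the workload command, the candidate job counts, and the tasks/hour figure are all stand-ins for illustration, since with real Einstein tasks you would instead vary BOINC's "Use at most X% of the CPUs" (or app_config.xml) and compare the run times of completed tasks:

#!/usr/bin/env python3
# Throughput-sweep sketch: time N concurrent copies of a fixed workload
# and report tasks/hour for each N. The WORKLOAD command is a stand-in
# job; the measurement idea is what matters: judge total throughput,
# not per-task time.
import subprocess
import time

WORKLOAD = ["python3", "-c", "sum(i * i for i in range(20_000_000))"]  # stand-in job

def tasks_per_hour(n: int) -> float:
    start = time.monotonic()
    procs = [subprocess.Popen(WORKLOAD) for _ in range(n)]  # launch N jobs at once
    for p in procs:
        p.wait()
    elapsed = time.monotonic() - start
    return n / elapsed * 3600.0

if __name__ == "__main__":
    for n in (2, 4, 6, 8, 11):      # candidate job counts; adjust for your CPU
        print(f"{n:2d} concurrent jobs: {tasks_per_hour(n):8.1f} tasks/hour")

The N with the highest tasks/hour is the sweet spot, even if each individual task runs longer there.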