Hello all,
I'm currently running Einstein on a dedicated 24/7 server and often on my desktop. However, I am planning to move out in 2 to 3 years, and I will definitely set up a high-performance server. I was looking at the NVIDIA Tesla P100, as it is relatively low power (250 W TDP) and rated at 9+ teraflops. Would it technically be possible to run Einstein on it? It supports OpenCL and CUDA, but it's, as you can imagine, a very high-performance dedicated server card. I was also wondering how fast BOINC or Einstein can deliver tasks, or is throughput purely dependent on GPU performance?
If anyone else has a different option for a high-performance 24/7 server, feel free to add your ideas (money-wise it can be up to €11,000, and it has to be energy efficient).
Many thanks,
TheAstroneer.
Einstein can of course run on Tesla GPUs; I was running BRP4G and BRP6 on a Tesla K20c, but the performance wasn't as great as one might expect. It was comparable to a desktop GPU with similar parameters (shader count, clock). Teslas are much better at double precision (FP64) and feature ECC memory, which improves reliability, but ECC mode costs some performance. Running pure FP64 code (Milkyway@Home) was comparable to an HD 7950 (which also has a good FP64 ratio). So cost-wise, it's not worth using a Tesla for such projects.
Also note that the current FGRPB app is OpenCL-only, which runs sub-optimally on NVIDIA GPUs. When I tried these tasks on the Tesla K20c, the performance was very poor, far below much cheaper mid-range GPUs. I don't know why that is; OpenCL performance on Teslas is probably just heavily reduced. On the other hand, Milkyway's code is also OpenCL and runs pretty well there.
There's a CUDA app in the works here, but we don't know how well it will perform.
-----
Hmmm, interesting. So I might do better with a set of two servers running two 1080 Tis each rather than one server with a Tesla?
I haven't seen any performance results for these GPUs yet, but I believe that 2 x 1080 Ti would give better results.
And for the current FGRPB (OpenCL) app I believe AMD cards do better, especially if one considers the price. Several other users and I have posted performance results in other threads here: https://einsteinathome.org/content/observations-fgrbp1-118-windows
AFAIK, the top NVIDIA Pascal models were only slightly faster at processing these OpenCL tasks than the AMDs. I have a few AMDs running; check the run times. All except the HD 7950 run one task at a time; the HD 7950 does two tasks at once. From what I have seen, the RX 480, for example, runs two tasks in a similar time to the GTX 1080 (950-1000 s).
-----
Hmmm, I still have a long road ahead of me; who knows what the future will bring, especially the Vega GPUs. For now I'm running an HD 5870, but I'm seeing nice results from RX 460s. It's difficult to find info on them, but by the looks of it I might give myself an upgrade to an RX 560 once they are released and make my system a whole lot more energy efficient. Thanks so far!
Hello THEASTRONEER,
I am running an i7 7700, 64 GB, GTX 1080 (8 GB) and am getting ~60,000 credits per day. I was getting 20k per day until I uninstalled and reinstalled without UAC. I don't know how much the Intel HD graphics helps, though.
NOTE: BOINC is incorrectly reading my video card memory as 4095 MB instead of 8190 MB.
Hi,
@KUP_70: 60k credits per day running one 1080 24/7 is quite poor. It should be capable of 500k to 650k per day.
I used to have a GTX 1070, which made 510k to 520k per day.
Right now I am running a GTX 1080 Ti with 3 WUs at a time, and it takes about 1000 s per WU. So it should be capable of about 880k per day, from GPU credit only.
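To show the arithmetic behind that: 3 tasks in parallel at ~1000 s each is one completed task every ~333 s, i.e. about 86,400 / 333 ≈ 260 tasks per day; at the roughly 3,400 credits per FGRPB task that the 880k/day figure implies, the numbers are consistent.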
I can really recommend the GTX 1070 for energy-saving, high-speed crunching (at 60-70% of its 170 W power target).
And for peak crunching the GTX 1080 Ti is also highly recommended: at 90% of its 250 W power target it delivers an impressive >800k credits per day.
Do you have a dedicated CPU core reserved for each GPU task? It is needed, otherwise your GPU tasks will suffer greatly.
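In case it helps, this can be forced with an app_config.xml placed in the Einstein@Home project folder. A minimal sketch, assuming the GPU app is the FGRPB1G search and that you want 3 tasks per card (check the exact app name in your client_state.xml or the event log; hsgamma_FGRPB1G is just what my hosts report):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>  <!-- run 3 tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve one full CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>

After saving it, use Options -> Read config files in the BOINC Manager (or restart the client) so it takes effect.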