All my hosts have run dry on GPU tasks now. Bernd said on 11 Dec 2019: " "V2" will be finished next week, though, we will continue (on the GPUs) with an extension of "G2" ". Maybe that time is now and some switches still need to be flicked at the main complex before G2 starts flowing into the pipes.
edit: After a break of a few hours the GPUs have now started crunching O2MDFG2e tasks. "e" as in electro-vitamin edition.
Yes, Einstein is PCIe dependent, and RAM speed matters as well. If you are running anything less than x16 you will see longer completion times.
Thank you very much for the info! Yes, the computer with multiple GPUs also happens to have slightly slower RAM compared to my other rig, so the performance I get more or less lines up. However, I realized my 1080 is actually on x16; not sure why it's still so slow. I guess I will have to stay away from e@h on that computer until I upgrade my motherboard.
The GW (O2MD) GPU app has difficulties making use of fast GPUs, regardless of CPU performance, RAM speed and whatnot. The runtimes don't scale well with GPU performance.
If you want to make full use of your 1080's performance, you should go with the FGRP app. It also gives a lot more credit than the GW tasks.
AFAIK the Fermi (FGRP) app is much less PCIe dependent and also doesn't care much what CPU it's paired with. The GW GPU app, OTOH, is still at the stage where it only uses the GPU to accelerate part of the work, with the CPU doing the remainder, and is very chatty between the two as a result.
Those O2MDFG2_G34731 tasks run mostly fine for me. They're not as efficient as FGRP, but that's more because of how efficient FGRP is compared to its CPU counterpart; AFAIK not many GPU projects yield that amount of speedup over the CPU equivalent. However, O2MDFV2_VelaJr1 really struggles. I think we did some calculations in previous comments, and I am better off running its CPU version if I care about power efficiency. The funny part is that I use my GPUs as heaters, and it's getting cold now. I'd rather not have a workload that doesn't really use the GPU but occupies it without generating enough heat. :-P
Unfortunately I can't pick one O2MD flavor over the other, since it's the same application. :-( I noticed that the server seems to have stopped sending out VelaJr1 tasks 2-3 days ago. Not sure if it's a coincidence, or if BOINC or the project admins are tracking the best workload to send out to each system.
https://einsteinathome.org/goto/comment/174809
11 Dec 2019 ... " "V2" will be finished next week, though, we will continue (on the GPUs) with an extension of "G2" "
Can someone please provide me with an example of a cc_config.xml that changes the (OS) process priority of the O2MDF GPU tasks?
I can't get it working.
Thank you.
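(For reference, a minimal cc_config.xml using the client's priority options would presumably look like the sketch below -- assuming a reasonably recent BOINC client that supports them. The file goes in the BOINC data directory and is picked up after "Options -> Read config files" or a client restart. As far as I understand the client docs, valid values run from 0 (lowest) to 4 (highest), and the _special value is the one applied to GPU and wrapper apps.)

<cc_config>
   <options>
      <!-- priority of ordinary CPU tasks: 0 = lowest (default) ... 4 = highest -->
      <process_priority>1</process_priority>
      <!-- priority of GPU, wrapper and non-compute-intensive tasks -->
      <process_priority_special>3</process_priority_special>
   </options>
</cc_config>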
I never could get the cc_config.xml priority parameters, <process_priority>N</process_priority> and <process_priority_special>N</process_priority_special>, to be obeyed correctly. Instead I set the GPU applications to my desired process priority through an application called schedtool:
sudo apt install schedtool
I just run a script that constantly resets the process priority of my GPU applications; a rough sketch follows below.
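(Something along these lines -- a sketch only, assuming schedtool is installed and that the GW GPU binaries match the pattern below. The pattern, nice level and interval are just examples; check with ps what the app is actually called on your host.)

#!/bin/bash
# Keep re-applying a friendlier nice level to running Einstein GW GPU tasks.
# BOINC normally launches science apps at a low priority, so even nice 0 is
# a bump; negative values need root.
PATTERN='einstein.*O2MD'   # adjust to the real binary name on your host
NICELEVEL=0

while true; do
    for pid in $(pgrep -f "$PATTERN"); do
        schedtool -n "$NICELEVEL" "$pid" >/dev/null 2>&1
    done
    sleep 30   # new tasks start regularly, so keep re-applying
done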
Oh ok, that's also a way to do it, thanks.