- Is there some kind of optimized app (Win XP or 7 - 32/64) for Einstein like the ones available for SETI?
Not for some years now, though there was a string of amazingly improved execution-time applications created by akosf back then. Trial versions are now tested on Albert--where you are also welcome--but here it is restricted to the released applications, which helps keep the science clean, you know.
Quote:
- In any of the forums, is there a thread just for passing some time on totally off-topic things? Humor, talk about beverages, trips, etc. -- something really just to pass the time and make some friends (or enemies, who knows)?
That would be the Cafe Einstein forum, though it is less active than you might be used to. Perhaps you will help activate it!
Quote:
- Is there any place we could see test results that would help optimize the number of WU tasks per GPU? I just don't want to lose time trying to reinvent the wheel.
Some such results have been posted here in the Cruncher's Corner. I already pointed you to mine, so I won't repeat that, especially since they were on an older version which had slightly different characteristics.
I think the general observation is that going from one to two tasks per GPU is almost always a big help, while going above that seldom gives much help and can begin to hurt--depending on the details of the system. I also think a general observation is that anything which reduces the latency of the CPU support application serving the GPU is hugely helpful--including reducing the number of pure-CPU BOINC tasks allowed, improving memory speed, and fiddling with priorities (for example with Process Lasso or the efmer priority-adjusting application). There is some folklore here, as elsewhere, so you'll want to explore for yourself.
Given that you tend to have systems with huge GPU capacity relative to CPU capacity compared to most of us, my honest suggestion is to start with two GPU tasks and ZERO CPU tasks, then inch upward in GPU and CPU counts one alternating step at a time.
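In case it helps to see that walk spelled out, here is a minimal sketch in Python. The measure_throughput() function is hypothetical--you would supply it yourself by running each setting long enough to finish a few validated tasks and computing tasks per hour:

```python
# A minimal sketch of the alternating tuning walk suggested above --
# not a tested tool. measure_throughput(gpu_tasks, cpu_tasks) is
# hypothetical: time a few validated tasks at each setting yourself.

def tune(measure_throughput, max_gpu=4, max_cpu=8):
    gpu_tasks, cpu_tasks = 2, 0   # suggested start: two GPU tasks, zero CPU
    best = measure_throughput(gpu_tasks, cpu_tasks)
    improved = True
    while improved:
        improved = False
        # Try one upward step on each side, GPU first, alternating.
        for g, c in ((gpu_tasks + 1, cpu_tasks), (gpu_tasks, cpu_tasks + 1)):
            if g > max_gpu or c > max_cpu:
                continue
            score = measure_throughput(g, c)
            if score > best:                   # keep only improving steps
                gpu_tasks, cpu_tasks, best = g, c, score
                improved = True
                break
    return gpu_tasks, cpu_tasks, best
```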
While another poster has a different recollection, I continue to think you are likely to see very bad performance during any period when a SETI and an Einstein GPU task are simultaneously executing on the same GPU. I think you can avoid this by setting differing numbers of simultaneous GPU tasks for the SETI and Einstein sides. I don't currently run SETI on GPU, so I will be very interested in your report on how this turns out on your systems.
Oh, and in case you did not spot it yet, here at Einstein you don't need to go to app_info just to run multiple tasks per GPU. It is an item in the Einstein Computing preferences list:
GPU utilization factor of BRP apps
where 1 means run 1 task, 0.5 means run 2, 0.33 means run 3, and so on (the factor is just 1 divided by the number of simultaneous tasks).
The one point which confuses folks on first try is that the change propagates to your host (after you have set it on the preferences page at the web site) ONLY the next time work is downloaded to that host. But once propagated, it applies to ALL Einstein BRP GPU work on that host--including work downloaded before you asked for the change.
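Spelled out as arithmetic, the preference value is simply the reciprocal of the task count. A trivial illustration--not project code:

```python
# Trivial illustration of the mapping described above -- not project code.
def utilization_factor(tasks_per_gpu: int) -> float:
    """Preference value that runs `tasks_per_gpu` BRP tasks on each GPU."""
    return round(1.0 / tasks_per_gpu, 2)

for n in (1, 2, 3):
    print(f"{n} task(s) per GPU -> factor {utilization_factor(n)}")
# 1 -> 1.0, 2 -> 0.5, 3 -> 0.33
```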
One very good thing for optimization here: the GPU tasks are astonishingly closely matched in actual computation needed--time variation will come from your system, which is a big help in tuning. However, some of the sources of system variation (usually latency delays external to the GPU) are not so easy to nail down, so you'll find at least a little challenge in tuning.
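Since the work content is so uniform, the spread of your elapsed times is a decent proxy for system-induced latency. A quick sketch, with hypothetical placeholder times you would replace with values from your own task list:

```python
# Quick look at run-time spread -- paste in your own elapsed times.
# Because BRP tasks carry near-identical computation, a large relative
# spread points at system effects (CPU-service latency, contention),
# not at the tasks themselves. These sample values are made up.
import statistics

elapsed = [4805.2, 4790.8, 4988.1, 4811.5, 4799.9]  # seconds, hypothetical

mean = statistics.mean(elapsed)
stdev = statistics.stdev(elapsed)
print(f"mean {mean:.0f} s, stdev {stdev:.0f} s, "
      f"spread {100 * stdev / mean:.1f}% of mean")
```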
I really believe running SETI and Einstein at the same time, with so many different hosts and configurations, will be a challenge for me (don't forget my hosts do a lot of other types of work for my company, most of them on a 24/7 cycle). Of course I will need a lot of help to do that, but I'm sure I can count on a lot of unknown friends ready to help me, as always happens in the SETI forums, and I expect nothing less here.
With several hosts, the easier way to get Einstein and SETI working is to run each project on different hosts. I've set up some hosts as Einstein crunchers, and on those hosts SETI is set as a backup project with 0% resource share; on the other hosts the roles are inverted. That way each host can be given a different cache size as needed (you won't need a huge cache on Einstein measured in days, though the disk space needed will be bigger anyway), and it's also easy to choose the right number of free CPU cores (as Einstein needs more than SETI).
Not to mention the number of concurrent tasks on the GPU. It's not a big issue, but if the projects use different counts you will end up with fractions of the GPU unused--when the next task needs 0.5 of a GPU and the space available is, for example, only 0.33.
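A worked illustration of that packing arithmetic, with made-up per-task fractions (0.33 for one project, 0.5 for the other):

```python
# Made-up illustration of the fractional-GPU point above. BOINC starts
# a GPU task only if the task's whole GPU fraction fits on one GPU.
gpu_free = 1.0

for frac in (0.33, 0.33):   # two tasks from the 0.33-per-task project
    gpu_free -= frac        # 0.34 of the GPU remains

next_task_needs = 0.5       # the other project's per-task fraction
if next_task_needs > gpu_free:
    print(f"{gpu_free:.2f} of the GPU sits idle; "
          f"the next task needs {next_task_needs}")
```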
(I know that having SETI as a backup project sounds weird. The truth is that those hosts were running SETI and were already optimized for it, so instead of detaching them I've just set it as backup. That way, if I want to change which project a host works on, the only thing I need to change is the venue.)
Quote:
Oh, and in case you did not spot it yet, here at Einstein you don't need to go to app_info just to run multiple tasks per GPU...
That's the info I was looking for. I have two very hungry water-cooled, heavily OC'ed GTX 480s that are looking for an electronic meal.
What I learned when I previously crunched Einstein is that hyperthreading does help.
The importance of low latency on the CPU support application for GPUs may complicate that relationship a bit.
Having said that, I do currently have HT turned on for my single host which has it, but I only let that host run a single CPU task, despite the fact that it is supporting a mere single GTX 460 (at maximum it could run eight CPU tasks). This gives slightly less output than two CPU tasks would, but not by enough for two to be a win on productivity per unit of power consumption--which I happen to value. At least I thought so when I made the choice; perhaps with the revised applications the optimum CPU job count has moved up, as I think the latest BRP GPU application is somewhat less CPU-hungry.
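To make that trade-off concrete, here is the bare arithmetic with purely hypothetical numbers (not measurements from my host):

```python
# Purely hypothetical numbers, not measurements -- just the arithmetic
# behind "slightly more output, but not a win per unit of power".
configs = {
    "HT on, 1 CPU task":  {"output_per_day": 20_000, "watts": 300},
    "HT on, 2 CPU tasks": {"output_per_day": 20_600, "watts": 330},
}
for name, c in configs.items():
    print(f"{name}: {c['output_per_day'] / c['watts']:.1f} output per watt")
# ~66.7 vs ~62.4: raw output rises ~3%, power ~10%, so efficiency drops.
```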
Perhaps the strongest argument against HT is one that msattler has pushed--that if you are committed to overclocking you may find that the same processor running HT has to run slower, by enough to ruin the HT improvement, even though there nearly always is an improvement when holding clock rate constant.
For non-overclockers, I think careful comparisons have quite generally shown an HT benefit, though often by rather modest ratios. Power efficiency, again, is a bit of a weak point, not usually reported in such comparisons.
Quote:
Not for some years now, though there was a string of amazingly improved execution-time applications created by akosf back then...
For anyone newish to the project who is wondering what happened and why those apps are no longer available: after the conclusion of the first science run, during which Akos was tweaking app performance, E@H ended up hiring him as a consultant to optimize the official apps.
Thanks for the info. As you can see, a few friends from the SETI GPUUG Team have already joined us, so there are a lot of newish hungry hosts around.
Just another curiosity: do you know how often the stats are updated here?
Err ... forgive my ignorance ... 'kitties' are OK, right? Do they scratch or bite? :-)
Cheers, Mike.
Naw, the kitties are my constant companions, my confidants, and my computer snoopervisors. They keep an eye on the rigs whilst I am away from home.
I love all kitties, and am known as the kittyman over on Seti.
Badger on board.
Another computer ("Badger") at work.
I'm aboard...let's sail!