Gary Roberts wrote, quoting Bernd's announcement in the Technical News thread:

"The server that collects the results of this search is filling up faster than I can get the data off it. I'll suspend sending new GW tasks for a couple of days."

If I'd seen this an hour or two ago, it would have saved me a *lot* of angst.

I had just been trying to start a machine that was properly configured but had run out of work because it had insufficient disk space. After doubling the partition size and making sure BOINC was allowed to use all of it, I kept getting a bare 'not sending any work' type of message with no explanation of why.

It would be very useful to others wondering why if that message could be supplemented with something like 'work generation temporarily suspended server-side', so that people don't waste a lot of effort trying to work out how they might have stuffed up their local configuration. Lots of people probably don't read the boards regularly, but some sort of high-visibility announcement might help.

Due to the relatively short deadline for O3AS, there will be lots of people with small cache sizes quickly running out of work and wondering why they can't get more. If work is off for "a couple of days", all of mine will run out. That will be quite painful for me because I've developed a 'task staggering' system that needs quite a bit of user intervention to get it properly started. The performance gains make it very worthwhile, so I guess I'll just have to suck it up.
Has anyone else noticed a trend where their systems' task run times seem to be creeping up?
Mine used to run in under an hour (maybe 40-45 minutes?), but lately they are topping an hour. I have dropped from 4 tasks per GPU back to 3 to see if that helps.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Tom, you should give the new CUDA version a shot. Info and download link are in the Technical News thread.
Ian&Steve C. wrote: "you should give the new CUDA version a shot. Info and download link in the technical news thread."
I have now read what you referenced. I would have to drop my heavy mixed load of E@H CPU tasks to do it. I think I have only one CPU application successfully implemented in app_info.xml; my attempts to get the other one to work keep acting like I am missing something.
If/when the new version is pushed out as a beta task, my current setup will pick it up automagically.
I am pretty sure it is the CPU processing speed rather than the GPU processing speed that was slowing things down.
Tom M
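For anyone wrestling with the same app_info.xml problem: each application needs its own complete set of blocks, and the names must match exactly across them. The sketch below is only an illustration of the required structure, not a working file — the app names, file names, and version numbers here are placeholders, and the real values must be copied from your client_state.xml.

```xml
<app_info>
    <!-- First CPU application (placeholder names throughout) -->
    <app>
        <name>example_cpu_app_one</name>
    </app>
    <file_info>
        <name>example_cpu_app_one_1.00_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>example_cpu_app_one</app_name>   <!-- must match <app><name> -->
        <version_num>100</version_num>
        <avg_ncpus>1.0</avg_ncpus>
        <file_ref>
            <file_name>example_cpu_app_one_1.00_x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
    </app_version>

    <!-- Second CPU application: needs its own complete <app>, <file_info>,
         and <app_version> blocks. A common "missing something" is an <app>
         entry with no matching <app_version>, or a <file_ref> file name
         that doesn't match its <file_info> entry. -->
    <app>
        <name>example_cpu_app_two</name>
    </app>
    <file_info>
        <name>example_cpu_app_two_1.00_x86_64-pc-linux-gnu</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>example_cpu_app_two</app_name>
        <version_num>100</version_num>
        <avg_ncpus>1.0</avg_ncpus>
        <file_ref>
            <file_name>example_cpu_app_two_1.00_x86_64-pc-linux-gnu</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```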
Tom M wrote: "I am pretty sure it is the CPU processing speed rather than the GPU processing speed that was slowing things down."
Tom, which computer are you talking about?
Proud member of the Old Farts Association
GWGeorge007 wrote: "Tom, which computer are you talking about?"
https://einsteinathome.org/host/13149171
Tom M
As an experiment, I jumped my allowed CPU tasks for E@H to 246 for each application. The result was 12-15 hour CPU processing times at more than double the threads (it was 64); it was more like 8 hours previously.
The Free-DC result for "yesterday" dropped significantly, so I have put the CPU tasks back down to 64 per app to see if the Free-DC numbers recover.
Tom M
Gary Roberts wrote: "I've developed a 'task staggering' system that needs quite a bit of user intervention to get it properly started. The performance gains make it very worthwhile."
** I didn't want to sidetrack the Technical News thread, so I am quoting here.
Did you do any long-term testing of trying to stagger tasks? I messed around with it a bit, but there was no way I was going to babysit tasks by hand. I played around with a few scripts hooking into gpu_busy_percent, but it seemed to be a race without an end. The runtime variance on individual tasks is large enough that I was either constantly starting/pausing/resuming tasks, or the tasks would sync up anyway. I ended up just leaving it alone.
My 2x Radeon VII host (12883788) has been performing basically the same just leaving it alone, running 4x tasks per GPU.
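For anyone curious what the gpu_busy_percent approach looks like, here is a hypothetical minimal sketch (not the actual scripts mentioned above). It assumes a Linux host where the amdgpu driver exposes GPU utilization as an integer 0-100 in /sys/class/drm/card<N>/device/gpu_busy_percent; the threshold and polling interval are arbitrary choices, and the actual pause/resume step (e.g. via boinccmd) is left as a comment.

```shell
#!/bin/sh
# Hypothetical staggering sketch. The sysfs path below is what the amdgpu
# driver exposes on Linux; card index and threshold are assumptions.
GPU_SYSFS="/sys/class/drm/card0/device/gpu_busy_percent"

# Decide whether to hold back the next task: "hold" while the GPU is
# already saturated, "release" once utilization drops below the threshold.
should_hold() {
    busy="$1"
    threshold="$2"
    if [ "$busy" -ge "$threshold" ]; then
        echo "hold"
    else
        echo "release"
    fi
}

# Example poll loop (commented out so the sketch has no side effects):
# while sleep 5; do
#     busy=$(cat "$GPU_SYSFS")
#     if [ "$(should_hold "$busy" 95)" = "hold" ]; then
#         : # pause a queued task here, e.g. with boinccmd
#     else
#         : # resume it here
#     fi
# done

should_hold 97 95   # prints "hold"
should_hold 40 95   # prints "release"
```

As noted above, the hard part isn't the polling, it's that task runtimes vary enough that any fixed threshold tends to oscillate or let the tasks sync up again.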
The server status page is showing "0" available to send for All-Sky GW (again).
I have "other tasks" set, so mine has been downloading BRP7/MeerKAT work.
Tom M
So has "it" died or not?
Tom M
Bernd turned off the work generator on March 27th because too many tasks were getting completed and the server was filling up with results faster than they could be archived off. Not sure what the current status is.
You can ask here: https://einsteinathome.org/content/all-sky-gravitational-wave-search-o3-data-o3ashf1?page=5#comment-223664