While we wait for the LSC to release the data from their third observing run ("O3"), we will test the workunit setup and the application planned to analyze that new data in a short "engineering" run on generated mock data. This will be the locality-scheduling application for the next few weeks.
The app is currently in "beta test" status, and workunit generation is limited until we find the app reliable enough. We will probably release the app to the larger public tomorrow and generate workunits continuously.
BM
Some initial observations are already posted in comment 185223.
A reminder for those who are running 2x on this app.
If you run them with staggered starting times you will get a major improvement in throughput.
My GTX 1660s need about 33 minutes without staggered starts, whereas staggering them reduces the run time to ~28 minutes.
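Rough arithmetic on those numbers (my own back-of-the-envelope, assuming the two staggered tasks settle into a half-run offset):

    unstaggered: 2 tasks / 33 min ≈ 0.061 tasks/min
    staggered:   2 tasks / 28 min ≈ 0.071 tasks/min
    gain:        33/28 − 1 ≈ 18%

so "major improvement" is about an 18% boost in throughput.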
Betreger wrote: A reminder for those who are running 2x on this app...
If we could have an app mutex on the GPU with 2x tasks, the two tasks would run in parallel for the first 20 seconds, one doing CPU work and initial GPU setup, the other doing the full GPU calculations. When the GPU calculations of the second one finished, the first one could start the full GPU work. The second one would then do 200+ seconds of CPU checking while the first one used the GPU. When the second one finished its CPU work, it would report, start a new task, and do its 20 seconds of CPU and minor GPU setup.
That was used in the SETI NVIDIA Linux app to run only one GPU-intensive part of a task at a time and to overlap the initial setup, cleanup, and reporting stages.
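For illustration only, here is a minimal sketch of such a cross-process GPU mutex on Linux, built on a POSIX named semaphore. Nothing below is Einstein@Home or SETI code: the semaphore name, the phase functions, and the phase durations are hypothetical placeholders based on petri33's description.

#include <semaphore.h>  // POSIX named semaphores (link with -pthread)
#include <fcntl.h>      // O_CREAT
#include <cstdio>

// Hypothetical stand-ins for the three phases petri33 describes.
void do_cpu_and_setup() { /* ~20 s of CPU work and initial GPU setup */ }
void do_gpu_compute()   { /* the GPU-intensive bulk of the task */ }
void do_cpu_checking()  { /* the 200+ s CPU-only verification tail */ }

int main() {
    // Every instance on the host opens the same named semaphore with an
    // initial count of 1, so it behaves as a cross-process mutex.
    sem_t* gpu_mutex = sem_open("/o3as_gpu_mutex", O_CREAT, 0644, 1);
    if (gpu_mutex == SEM_FAILED) { perror("sem_open"); return 1; }

    do_cpu_and_setup();   // overlaps freely with the other instance

    sem_wait(gpu_mutex);  // block until the GPU is free
    do_gpu_compute();     // only one instance runs this section at a time
    sem_post(gpu_mutex);  // hand the GPU to the waiting instance

    do_cpu_checking();    // CPU-only tail overlaps the other task's GPU phase

    sem_close(gpu_mutex);
    return 0;
}

With two instances started at the same time, whichever one loses the race simply waits on the semaphore, so the pair falls into the staggered pattern automatically instead of relying on the user to offset the start times by hand.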
petri33 wrote: If we could have an app mutex on the GPU with 2x tasks...
Perfect use case for this! Especially since there are several minutes of CPU-only processing at the tail end of every GW task.
Curious George here,
Why couldn't we use this sort of formula for O2MD1 tasks?
Proud member of the Old Farts Association
GWGeorge007 wrote: Curious George here, why couldn't we use this sort of formula for O2MD1 tasks?
We could have, but I think the CPU "wrap-up" portion at the end of the task is significantly longer with O3AS than it was with O2MD, so it has a bigger effect now than before.
In either case, it's up to the Einstein developers to implement something like this; it is logic that needs to be coded into the application.
GWGeorge007 wrote: Why couldn't we use this sort of formula for O2MD1 tasks?
Because O2MD1 is a CPU-only search and doesn't use the GPU.
Before the O3AS engineering test run started, there was a GPU search using O2 data called O2MDF. It's been gone for a while and there are no apps listed for it.
If you were talking about O2MDF rather than O2MD1, it's highly unlikely that it would be used again. The emphasis now would be to use GPUs only for the new O3 data, since that is the data most likely to provide a detection.
Cheers,
Gary.
Nothing but transient HTTP upload errors on O3AS tasks for the past day plus. I had just switched from GRPB on several computers and haven't had issues with those tasks. Several tasks did get credit or are pending, but I now have hundreds failing to upload.
Can't upload, and can't download different work because of the stuck uploads.
Changed back to Gamma-ray and reset the project.
STILL was getting O3 tasks over and over.
I could change locations and the project would still send this junk.
I could completely de-select ATI or NV and it would still send tasks for those cards.
I could select CPU only for a GPU app, set everything else to No, and STILL get GPU work.
Each PC was getting 12 tasks no matter the speed or GPU count.
Server is F'd up
mmonnin wrote: Each PC was getting 12 tasks no matter the speed or GPU count.
Sounds like the server is doing what it always does if you don't properly remove allocated work. Resetting the project doesn't remove what has already been allocated; you need to abort unwanted tasks first and make sure they are properly reported before you reset a project.
The other thing to check is that you don't have 'Allow non-preferred apps' set to 'yes' for the location a particular host is assigned to.
EDIT: I was about to set up a host to run O3AS work to check whether there really is a widespread upload problem. I haven't started running any of this yet, so I don't know the situation. I'm surprised that a lot more people aren't complaining if this is truly widespread. This is a serious issue (it being a weekend), since it doesn't take long for stuck uploads to destroy new work fetch and completely jam up a system.
Are you sure it's not something to do with your ISP for example?
Cheers,
Gary.