Spot the New CPU Application einstein_O1Spot1THi

Darren Peets
Joined: 19 Nov 09
Posts: 37
Credit: 98721552
RAC: 28197

I left a comment earlier on the problems/bug reports forum:  These new tasks are memory hogs.  My machine has 4 physical cores (8 hyperthreaded) and 8GB RAM (which I'd consider a reasonable amount).  It can run only 3 of these tasks simultaneously, with the fourth "waiting for memory" (I don't run more tasks than physical cores).  Each task typically eats 1.75GB swap plus 0.85-1.75GB RAM.

If these tasks are sent to computers with what I'd consider average amounts of RAM, and are not diluted by gamma-ray tasks, expect to lose a lot of clock cycles as the tasks wait for memory.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

These work units are definitely using a lot of memory: 1.324 GB physical and 1.324 GB virtual memory.  Darren, you might consider restricting how many run at one time to free up some RAM.
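Restricting concurrent tasks can be done with BOINC's app_config.xml mechanism. A minimal sketch; the application name below is only a guess based on this thread's title, so check the `<name>` entries in your client_state.xml for the real one:

```xml
<!-- Place in the Einstein@Home project directory, then use
     "Options > Read config files" in the BOINC Manager (or restart the client).
     The app name here is an assumption; verify it in client_state.xml. -->
<app_config>
  <app>
    <name>einstein_O1Spot1</name>
    <max_concurrent>3</max_concurrent>
  </app>
</app_config>
```

With `max_concurrent` set to 3, the client keeps the fourth task queued instead of starting it and letting it stall in "waiting for memory".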

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 127440526
RAC: 354713

Yes, we are aware of the high memory requirement. Unfortunately there is little we can do to lower it. This is not a problem with the application; it's inherent to how this search works. For higher search frequencies we need to provide more data to look for a continuous gravitational wave, which means more memory usage. Normally this would be evened out by tasks that do not use a lot of memory, but BOINC and the locality scheduling system we use to minimize the amount of data you need to download were not designed for a case where the memory requirements differ from 400 MB to 1600 MB.

Nevertheless, we identified that this particular test run uses up to 200 MB more memory (at higher frequencies) than we estimated, and we are trying to find out why, so that this problem does not recur when the science run starts.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

I have 32 GB memory on my Linux machines and 24 GB on my Windows machine.  That was sized for the CERN projects using VirtualBox and I don't pay any attention to it for Einstein.  But as a general rule, I always favor using more memory if it provides a performance benefit.  And of course you will do what you must for the science, which is why we are here.

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 127440526
RAC: 354713

FYI: I stopped sending out work for O1Spot1 because of the validation problems. We are trying to find the cause of this incompatibility and determine how it affects the scientific results.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Rats! Just as I started to crunch some of those.  7 hr 45 min to complete. Hope you figure it out. I still have a bunch in my cache; I'll let those crunch through just the same.  Side note: I too always try to use 32 GB in my builds.

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 127440526
RAC: 354713

We just canceled the search because of multiple problems and will restart tomorrow. We will grant credit for already uploaded tasks. If you have unstarted tasks, they should be aborted automatically right now; if not, please abort them yourself. If you have tasks running, you can still upload them and will get credit later.

Jimbocous
Joined: 31 Mar 17
Posts: 51
Credit: 1113013755
RAC: 1009108

All mine showed "Completed, can't validate", my wingman's units show "cancelled by server", and no credit was granted.

Is this the Windows build problem?

https://einsteinathome.org/host/12512253/tasks/invalid

Dirk Broer
Joined: 10 Sep 05
Posts: 13
Credit: 27627153
RAC: 7763

Darren Peets wrote:
These new tasks are memory hogs.  My machine has 4 physical cores (8 hyperthreaded) and 8GB RAM (which I'd consider a reasonable amount).

I always build my BOINC systems with a minimum of 2 GB per thread; the latest ones even have 4 GB per thread.
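The RAM-per-thread rule of thumb is easy to check on an existing host. A small sketch for Linux (it reads /proc/meminfo, so it won't work on Windows or macOS):

```shell
#!/bin/sh
# Report total RAM divided by hardware thread count (Linux only).
threads=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
echo "${threads} threads, ${mem_mb} MB RAM -> $(( mem_mb / threads )) MB per thread"
```

Darren's 8 GB / 8-thread box works out to about 1 GB per thread, which is why tasks in the 1.6 GB range leave one core "waiting for memory".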

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 127440526
RAC: 354713

Update on the status of this testrun:

New data files were produced and distributed to our mirror network. I started the work generator, and I already see work being sent out. I deprecated the old 1.00 version, and we are now using version 1.01. The two versions were not scientifically compatible, and we need all results processed with the same application. Another positive effect is that each task should now use less memory than before and not exceed the amount used by the previous multi-directed search.

I also already granted credit for all invalid or inconclusive tasks that were reported until half an hour ago. Because of the Pentathlon, I'm going to do this again later today and tomorrow for tasks that are currently in progress and get reported in the meantime. After that, we will wait until the last of those old in-progress tasks has either been reported or timed out, then do a final cleanup and grant credit for those too.

I would like to apologize on behalf of the Einstein@Home team for this drastic measure, but it was necessary. That is science: you can prepare everything as well as you can, but there will always be one thing that slips through and forces you to redo the whole run. Let's just say: that's why we do test runs.
