ATLAS joining Einstein@home - RFC

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686042913
RAC: 589795

RE: Only if those systems

Message 84864 in response to message 84863

Quote:

Only if those systems were allocated to other projects. Some people concentrate on a single project. For example, my Pentium 4 is attached to 3 projects now, but it is currently only doing Einstein work, as SETI and Cosmology are both set to not get any new work.

True, but even for single-project hosts there might be some effect from the DCF problem, because some hosts were given more work than they could actually complete by the deadline. The projection of the run's progress and duration on the server status page is based on results created, not on results returned (!), so the initial "over-creation" biases the projection a bit. Currently the WU creation rate is below 0.2% of the total effort per day, i.e. about 5 days for one percent of progress. I guess this needs some more time to stabilize.

The credit output is now around 12 million credits per day, and the total credit for the whole run should be (based on the current credit scheme) around 5.5 billion credits, so at the current pace the run would take about 450 days, but that's without Atlas and with the current apps.
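
Just to make the arithmetic explicit, here is a quick back-of-the-envelope check in Python, using only the figures quoted above (so it inherits all of their uncertainty):

# Rough run-length estimate from the figures quoted in this post.
daily_credit = 12e6           # ~12 million credits granted per day
total_run_credit = 5.5e9      # ~5.5 billion credits expected for the whole run

days_at_current_pace = total_run_credit / daily_credit
print(f"Estimated run length: {days_at_current_pace:.0f} days")    # ~458 days

# Same logic for the workunit creation rate mentioned above:
creation_rate_per_day = 0.002                # below 0.2% of the total effort per day
print(f"Days per percent of progress: {0.01 / creation_rate_per_day:.0f}")   # ~5 days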

CU
Bikeman

Stranger7777
Stranger7777
Joined: 17 Mar 05
Posts: 436
Credit: 417499564
RAC: 33879

My two cents to set ATLAS as

My two cents: set ATLAS up as a standalone user.
Moreover, it would be great to reset its statistics (at least back to the start of S5R4), just to watch how it climbs through the top-users list from the ground up and to make decisions and comparisons along the way.

About the project completion:
Bikeman, I said somewhere here earlier this summer that we would finish S5R3 before autumn started, and I won that bet :)

Now I predict (provided the run doesn't have to start over because of new optimisations) that it will finish successfully in 8-9 months.

So we'll see whose predictions turn out to be accurate enough.

Blue Northern Software
Blue Northern S...
Joined: 10 Dec 05
Posts: 38
Credit: 144621
RAC: 0

I think the best solution is

I think the best solution is to just transfer ATLAS's credits to my account where I'll hug and pet and squeeze and kiss them daily. Knowing that they have found a good home should make everyone happy.....right?

_________________
*** I used to have a little friend, but now he don't move no more.... Tell me about the credits again, George.

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023314931
RAC: 1824054

If you like the BOINCstats

If you like the way BOINCstats displays a user, you might like to check out Atlas at this URL:

BOINCStats page for ATLAS on Einstein

It only updates once a day from the published XML, so there is a time lag, and so far it has seen only one day; but in that one day Atlas has already surged ahead of 95% of all individual Einstein users.
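
(Side note: the daily XML that stats sites like BOINCstats read is normally the project's db_dump export. Purely as an illustration, here is a minimal Python sketch of how such a percentile could be computed from a locally downloaded user export, assuming the usual <name>/<total_credit> fields; the file name and account name below are placeholders, not real paths or accounts.)

# Sketch: percentile rank of one account in a BOINC user-stats XML export.
# Assumes the standard db_dump format with <user><name>...<total_credit>... records.
import xml.etree.ElementTree as ET

tree = ET.parse("user.xml")                       # placeholder path to the export
users = [(u.findtext("name"), float(u.findtext("total_credit") or 0.0))
         for u in tree.iter("user")]

target = "ATLAS AEI Hannover"                     # hypothetical account name
target_credit = dict(users)[target]
below = sum(1 for _, credit in users if credit < target_credit)
print(f"{target} is ahead of {100.0 * below / len(users):.1f}% of users")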

tullio
tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

I think it is important for

I think it is important for BOINC that big iron like ATLAS runs an @home program. I hope Bruce Allen points this out at the Grenoble meeting; it may teach the people at CERN something.
Tullio

Stranger7777
Stranger7777
Joined: 17 Mar 05
Posts: 436
Credit: 417499564
RAC: 33879

As of BoincSynergy stats it

As of the BoincSynergy stats, it looks like only 1162 processors are working for E@H, and it is currently in 1463rd place by total score and 7th by productivity. Looks promising... But when will all the cores start crunching?

Bruce Allen
Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: As of BoincSynergy

Message 84870 in response to message 84869

Quote:
As of the BoincSynergy stats, it looks like only 1162 processors are working for E@H, and it is currently in 1463rd place by total score and 7th by productivity. Looks promising... But when will all the cores start crunching?

In comparison to Einstein@Home, Atlas is very general-purpose. It offers high I/O bandwidth, rapid access to more than 1 Petabyte of data, fast interprocessor communication, 'reliable' hardware, and other features that E@H lacks. Since there are many types of gravitational wave searches other than searches for Continuous Wave sources, our hope is that Atlas is primarily used for those, and that the Atlas cores are occupied running analyses that cannot be done on E@H.

For example, two of the significant activities on Atlas just now are the post-processing of the E@H S5R1 and S5R3 results. In the past week, Holger Pletsch has completed a first pass through the S5R1 results. This work requires a resource like Atlas to carry out.

Cheers,
Bruce Allen

Director, Einstein@Home

Brian Silvers
Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: RE: As of

Message 84871 in response to message 84870

Quote:
Quote:
As of the BoincSynergy stats, it looks like only 1162 processors are working for E@H, and it is currently in 1463rd place by total score and 7th by productivity. Looks promising... But when will all the cores start crunching?

In comparison to Einstein@Home, Atlas is very general-purpose. It offers high I/O bandwidth, rapid access to more than 1 Petabyte of data, fast interprocessor communication, 'reliable' hardware, and other features that E@H lacks. Since there are many types of gravitational wave searches other than searches for Continuous Wave sources, our hope is that Atlas is primarily used for those, and that the Atlas cores are occupied running analyses that cannot be done on E@H.

For example, two of the significant activities on Atlas just now are the post-processing of the E@H S5R1 and S5R3 results. In the past week, Holger Pletsch has completed a first pass through the S5R1 results. This work requires a resource like Atlas to carry out.

Cheers,
Bruce Allen

Thanks for the explanation, Dr. Allen... Post-processing is definitely a high priority...

Brian

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686042913
RAC: 589795

Hi! I wonder: Is maybe

Hi!

I wonder: is preliminary post-processing of the S5R3 data maybe being used to prioritize the work generator for S5R4? We are seeing so many jumps in the frequency of the workunits; it's not like S5R3, where we pretty much started with the low frequencies and progressed steadily upwards through the frequency range.

Cheers
Bikeman

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244922893
RAC: 16808

RE: I wonder: Is maybe

Message 84873 in response to message 84872

Quote:
I wonder: is preliminary post-processing of the S5R3 data maybe being used to prioritize the work generator for S5R4? We are seeing so many jumps in the frequency of the workunits; it's not like S5R3, where we pretty much started with the low frequencies and progressed steadily upwards through the frequency range.


Random distribution is how it's designed to work. But since S5R2 already covered the lower frequencies of S5R3, and we used the same data files for both runs, people's machines already had the lower-frequency data files at the start of S5R3. Together with the "locality scheduling" that tries to minimize additional downloads, this led to the frequency band being eaten up from bottom to top.
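
As an illustration of that effect, here is a minimal sketch (Python, with entirely made-up data structures; the real BOINC scheduler is far more involved) of how a scheduler that prefers data files a host already has will keep handing out the same frequency band until it is exhausted:

# Toy model of locality scheduling (hypothetical structures, not actual BOINC code).
# Each workunit needs one large data file named after its frequency band.

def pick_workunit(host_files, pending_workunits):
    """Prefer a workunit whose data file the host already has;
    fall back to a new download only if nothing matches."""
    for wu in pending_workunits:
        if wu["data_file"] in host_files:
            return wu, False                      # no new download needed
    wu = pending_workunits[0]                     # fall back: send something new
    return wu, True                               # host must fetch wu["data_file"]

# A host that still has low-frequency files left over from S5R2 ...
host_files = {"h1_0050.00", "h1_0050.05"}
pending = [{"name": "wu_" + f, "data_file": f}
           for f in ("h1_0700.00", "h1_0050.00", "h1_0050.05")]

wu, needs_download = pick_workunit(host_files, pending)
print(wu["name"], "- new download needed:", needs_download)   # picks the low-frequency WU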

BM
