Plans for the near future of E@H?

Filipe
Joined: 10 Mar 05
Posts: 175
Credit: 365526743
RAC: 45157

RE: It is still under

Quote:
It is still under discussion whether we will run the third follow-up for "S6Bucket" in-house (on Atlas) or on Einstein@Home. With currently envisioned parameters on Einstein@Home it would run only for about two weeks.

Will the third follow-up run be done on Einstein?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4266
Credit: 244923706
RAC: 16785

It was decided today that the

It was decided today that the third "S6Bucket" follow-up run will be done on Einstein@home. It will start in about two weeks from now and will be rather demanding for participating hosts in terms of data throughput. About a million WUs with an anticipated runtime of 2h and rather short deadlines (2-3d). The whole run should be done within about a week.

BM
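
For a rough sense of scale, here is a back-of-envelope sketch of what that announcement implies, assuming a quorum of two tasks per workunit (which BM confirms later in this thread) and hosts crunching around the clock:

# Rough estimate of the aggregate compute the run needs.
# Assumptions: quorum of 2 tasks per WU, hosts busy 24/7.
workunits      = 1_000_000
tasks_per_wu   = 2
hours_per_task = 2
run_days       = 7

total_core_hours = workunits * tasks_per_wu * hours_per_task   # 4,000,000
cores_needed     = total_core_hours / (run_days * 24)          # ~23,800

print(f"{total_core_hours:,} core-hours, ~{cores_needed:,.0f} cores busy full-time")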


astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402500833
RAC: 1068853

Hallo BM ! Very, very nice to

Hallo BM!
Very, very nice to read.
I have just adjusted my "Maintain enough work for an additional xxx days" setting in the preferences to suit.

Kind regards and happy crunching
Martin
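
For anyone tuning that preference, a minimal sketch of why the cache size matters with such short deadlines (the numbers are simply those quoted above; round-the-clock crunching is an assumption):

# With 2-3 day deadlines, the work buffer should stay well below the
# shortest deadline, or tasks at the back of the queue may time out.
deadline_days        = 2.0   # shortest deadline mentioned for the run
task_hours           = 2.0   # anticipated runtime per task
crunch_hours_per_day = 24.0  # assume the host runs around the clock

max_safe_cache_days = deadline_days - task_hours / crunch_hours_per_day
print(f"Keep 'additional days of work' below ~{max_safe_cache_days:.1f} days")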

Sasa Jovicic
Joined: 17 Feb 09
Posts: 75
Credit: 76794864
RAC: 48716

I am ready! Any app changes?

I am ready! Any app changes?

Filipe
Joined: 10 Mar 05
Posts: 175
Credit: 365526743
RAC: 45157

RE: It was decided today

Quote:

It was decided today that the third "S6Bucket" follow-up run will be done on Einstein@home. It will start in about two weeks from now and will be rather demanding for participating hosts in terms of data throughput. About a million WUs with an anticipated runtime of 2h and rather short deadlines (2-3d). The whole run should be done within about a week.

BM

Sign me in!

Daniels_Parents
Joined: 9 Feb 05
Posts: 101
Credit: 1877689213
RAC: 0

RE: ... About a million WUs

Quote:
... About a million WUs with an anticipated runtime of 2h and rather short deadlines (2-3d). The whole run should be done within about a week.
BM


What do you mean by "anticipated runtime of 2 hours"? For host 4546148, for example, it is already clear that it will need 5 to 6 hours to complete (extrapolating from 40% done after 2:15 hours).

Thanks for any clarification in advance,
Arthur

I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me [Danny Hillis, Long Now]
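
The extrapolation mentioned in the question is simple arithmetic, shown here with the figures quoted above:

# Estimate total runtime from partial progress (40% done after 2:15 h).
elapsed_hours = 2 + 15 / 60
fraction_done = 0.40

estimated_total = elapsed_hours / fraction_done
print(f"~{estimated_total:.1f} h total")   # ~5.6 h, i.e. 5 to 6 hours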

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023354931
RAC: 1816661

RE: For the host

Quote:
For the host 4546148

That reports as an i7-2600K supporting two graphics cards, of which at least one is a GTX760.

So it is a quad-core Sandy Bridge nominally running at 3.4 GHz, hyperthreaded so that it reports as 8 cores.

While in aggregate an 8-core Sandy Bridge is a quite capable machine, hyperthreading means not only that individual task times are longer than a fully dedicated core could deliver, but also that mixed loads (of disparate CPU tasks) may give both longer and shorter elapsed times than expected due to sharing effects. In particular, a task which requests I/O services much more frequently than the other tasks currently active will get considerably less than a "fair share" of real computation, but because all eight of the "cores" are accounted as busy all the time, that under-served task will report a longer run time. This effect, if applicable, stretches both reported CPU time and reported elapsed time. Conversely, a task requesting external service much less frequently than the others will "hog" the resource and complete with lower CPU time and elapsed time than when paired with exact peers.

Additionally, if you have not restricted the CPU tasks to run on fewer than all 8 "cores" available, supporting your graphics cards will consume considerable resources, further extending elapsed times, though perhaps not reported CPU times.
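
As a sketch of that arithmetic (the one-free-logical-core-per-GPU-task rule of thumb is an assumption, not something stated above), BOINC's "Use at most X% of the CPUs" computing preference could be chosen like this:

# Pick a CPU limit that leaves one logical core free per running GPU task.
logical_cores = 8   # i7-2600K: 4 cores with hyperthreading
gpu_tasks     = 2   # assume one task per graphics card

cpu_tasks = logical_cores - gpu_tasks
percent   = 100 * cpu_tasks / logical_cores

print(f"Run {cpu_tasks} CPU tasks, i.e. 'Use at most {percent:.0f}% of the CPUs'")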

Daniels_Parents
Joined: 9 Feb 05
Posts: 101
Credit: 1877689213
RAC: 0

RE: That reports as a

Quote:
That reports as an i7-2600K supporting two graphics cards, of which at least one is a GTX760.


Thanks! That is clear; sorry, I could even have left out the example. What I would like to know is whether E@H has a reference computer on which the "anticipated runtime of 2h" is based, and if so, how it is configured.

BTW: BOINC is not able to recognize that the second GPU is a GTX560Ti.

I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me [Danny Hillis, Long Now]

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023354931
RAC: 1816661

RE: BOINC is not able to

Quote:
BOINC is not able to recognize that the second GPU is a GTX560Ti.


When there is more than one, and generations differ, I believe the BOINC behavior is that they are all reported as being whichever one reports the highest CUDA compute capability. This is more likely to be the more modern card than necessarily the most powerful card.

For example, two of my hosts report as [2] NVIDIA GeForce GTX 750, while each in fact has one 750 and one 660. Much as I admire the 750 for very high power efficiency, ease of cooling, and quiet running, it is clearly slower on Einstein work than the 660.
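
Purely as an illustration of the behaviour described above (this is not BOINC's actual code), the reporting rule amounts to labelling every card with the model of the one that has the highest CUDA compute capability; the capability values below are the published ones for these chips:

# Toy model of the reporting behaviour: all cards are listed under the
# model name of the card with the highest CUDA compute capability.
gpus = [
    {"model": "GTX 760",    "compute_capability": (3, 0)},   # Kepler
    {"model": "GTX 560 Ti", "compute_capability": (2, 1)},   # Fermi
]

best = max(gpus, key=lambda g: g["compute_capability"])
print(f"[{len(gpus)}] NVIDIA GeForce {best['model']}")   # [2] NVIDIA GeForce GTX 760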

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4266
Credit: 244923706
RAC: 16785

RE: RE: ... About a

Quote:
Quote:
... About a million WUs with an anticipated runtime of 2h and rather short deadlines (2-3d). The whole run should be done within about a week.
BM

What do you mean by "anticipated runtime of 2 hours"? For host 4546148, for example, it is already clear that it will need 5 to 6 hours to complete (extrapolating from 40% done after 2:15 hours).

Thanks for any clarification in advance,
Arthur

Well, 1M WUs means 2M tasks. Given that we still get fresh Arecibo data for BRP4G tasks, this would push our DB to its limits. So we decided to "bundle" four WUs together (as we already do in BRP4G/6 and did in S6BucketFU1UB). In the end we'll have ~250k WUs of about 8h each.

BM
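
Spelled out, the bundling arithmetic from the post above looks like this:

# Bundling four original WUs into one reduces the number of DB entries
# while keeping the total amount of work the same.
workunits    = 1_000_000
tasks_per_wu = 2            # "1M WUs means 2M tasks" (quorum of 2)
bundle_size  = 4            # four original WUs per bundled WU
hours_each   = 2            # anticipated runtime per original WU

bundled_wus   = workunits // bundle_size      # 250,000 WUs
bundled_hours = bundle_size * hours_each      # ~8 h per bundled WU
bundled_tasks = bundled_wus * tasks_per_wu    # 500,000 tasks instead of 2M

print(bundled_wus, bundled_hours, bundled_tasks)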

