Now, I'm not the most educated person, nor am I a scientist man like you scientist men out there.
But I do have a knack, maybe because I'm a salesman, for stirring up conversations.
First, if you can, view this video: http://youtube.com/watch?v=vLujLtgBJC0
It shows that Virginia Tech clustered 1,100 computers and turned them into, at that time, the 3rd fastest computer in the world (I don't know what you scientist men and women define as "fastest", but I say it because I am a layman and have an excuse to throw terms around lazily ^_^).
If Virginia Tech decided to add those computers to the Einstein network, and they began to analyze data the way our own PCs are doing now, would it be a big boost in analyzing the rest of the data that needs to be analyzed?
In other words, would we all pack our bags and go home if these 1,100 computers went online and started helping us analyze the data?
By data I mean the stuff we download, compute, and then re-upload to the server as results.
Also, results have to match other results, right? So if these 1,100 Macs did a terabyte of work per day, that terabyte would still have to be independently verified by other computers, and *thus we don't move any quicker.
*I used the word "thus" because I thought it would make me sound smarter.
I hope I didn't disrupt the space-time continuum for you scientist men.
What if 1100 computers joined the fray?
Ah, but you've joined E@H, so you'd be forgiven then ... :-)
Again, not to be held against you ... :-)
Droool .....
It seems the networking hardware is HOT also.
Yup, no doubt.
No, E@H is a signal processing exercise. Searching and searching again for needles in haystacks. There really is no end to the types (templates) of such signals that can be searched for. The work can always expand to fill the computer mojo that is provided.
No, computers within those 1100 would either partner up in quorums with others within that set, or more likely with others in the general E@H pool. The assignment of a given work unit's copies to separate boxes is essentially random.
And, thus you did ... :-)
Not at all. You don't know who the Sys Admin is for this cluster, do you? Could be worth a referral to our bribery program subcommittee .... :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Hi! I think most of the
Hi!
I think most of the increase in the statistical "Floating Point Speed" of the Einstein@Home network since December stems from the intensified participation of large science clusters in E@H:
Prime "suspects" are the Albert Einstein Institute's own Merlin/Morgane clusters and the German Astronomy Community Grid (GACG).
Both clusters/grids are doing really well in Einstein@Home, and this also demonstrates that the BOINC platform can deal just fine with such large contributors.
Each of the two mentioned clusters provides CPU time worth roughly half a million credits per day atm.
CU
Bikeman
RE: RE: Now I'm not the
Ahh, so even with over a thousand Macs working together, the beast that is Einstein@Home will always find a way to expand its appetite, no matter how many computers are added.
Now, my understanding is that LIGO and the other interferometers produce a boatload of data, and then we sift through it and check to see if we can find any gravitational waves, right?
So what we're doing is basically analyzing information that has already been captured by the interferometers.
How much data are we talking about here?
With so many computers I'd like to think (layman talk here) that we'd be over and done with in no time.
RE: How much data are we
I've no idea how much data could be generated for the entire S5 science run of the LIGO interferometers, but for the current S5R3 Einstein@Home analysis run, we have 7,369,434 workunits. Now, every workunit has to be processed by two computers independently, so we'll have 14,738,868 results.
A single quad-core Core 2 can process (say) 14 of them per day, so a pair of quad-cores would take about 0.5 million days, or over 1,400 years, to complete the analysis.
:-).
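Just to make that back-of-the-envelope arithmetic explicit, here is a rough sketch in Python (the 14 results per day is the example figure above, not a measured benchmark):

# Rough check of the "over 1,400 years" estimate above.
workunits = 7_369_434
results = workunits * 2           # each workunit is processed by two computers
results_per_day = 2 * 14          # a pair of quad-cores at ~14 results/day each
days = results / results_per_day
print(f"{days:,.0f} days, about {days / 365.25:,.0f} years")
# -> roughly 526,000 days, i.e. on the order of 1,400 years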
Bikeman
I wondered about
I wondered about Merlin/Morgane. I have been paired with more than 30 different computers with that name. Seems like they like AMD processors.
Ligo's server farm was built
LIGO's server farm was built when it was Opteron vs. "Preshott"; naturally, they went AMD.
RE: A single Quadcore
Depends on the platform... ;-) Peanut's Mac system would only take about 1180-1190 years...
RE: I wondered about
It's a cluster at the Albert Einstein Institute in Germany (where Bernd works), made exclusively of AMD nodes.
CU
Bikeman
RE: Ahh so even with over a
Quite right. The real-time signal from the interferometer(s) (IFO), essentially numbers representing the moment-to-moment difference between the lengths of the two (4 km) arms, is sequentially stored. The 'good' data is the data taken when the interferometer is 'locked', that is, properly comparing those two distances.
At Hanford there are two IFOs in the vacuum system, one twice the length of the other. There we expect a real wave detection to show twice the signal in the longer one as in the shorter. This is a cross-check.
At Livingston there is an IFO the same length as Hanford's longer one. It is not only many miles away, but is also aligned as close to a 90-degree orientation relative to Hanford as Earth's curvature allows. Because of the geometry of gravitational wave propagation, this not only improves the chances of confirming a detection but also helps in locating its source in the sky. You work backwards from the difference in arrival times of a signal at the IFOs.
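To put a rough number on that working-backwards step: the two LIGO sites are separated by roughly 3,000 km, so a signal travelling at the speed of light arrives at one site at most about 10 milliseconds before the other, and the measured delay confines the source to a ring on the sky. A minimal sketch of the geometry, assuming a simple delta_t = (d / c) * cos(theta) relation and an approximate baseline:

# Toy illustration: the arrival-time difference between two detectors
# constrains the angle between the source direction and the site-to-site baseline.
import math

c = 299_792_458.0       # speed of light in m/s
d = 3.0e6               # Hanford-Livingston separation, roughly 3,000 km (approximate)

max_delay = d / c       # largest possible arrival-time difference, ~10 ms
print(f"maximum delay: {max_delay * 1e3:.1f} ms")

measured_delay = 0.005  # a hypothetical measured delay of 5 ms
theta = math.degrees(math.acos(measured_delay / max_delay))
print(f"source lies on a cone about {theta:.0f} degrees from the baseline")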
There are other IFOs: VIRGO (Italy), GEO (Germany), and TAMA (Japan), whose data either have been, are, or will be part of the E@H 'pipeline'.
Other signal factors are the assumed waveform 'shape', which depends on a long list of astronomical considerations - like the source event, its orientation in space, and its distance and direction from Earth.
So you see there's a blizzard of possible tests/combinations that could conceivably be applied. A work unit roughly translates to a single, simple search for one of them. Presently a 'hierarchical' search strategy is being used: basically a first look at a particular sky position, with follow-up even closer look(s) if a promising lead is found. If I remember rightly, earlier E@H work had three (four, even?) computers per work unit (quorum) to enhance reliability. (At worst, work units can be re-issued into the pool.)
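For anyone curious what the quorum idea looks like in code, here is a toy sketch (an illustration only, not BOINC's actual validator; the function name and tolerance are made up): results returned for the same work unit are compared, and only when enough of them agree is one accepted as canonical; otherwise the work unit goes back into the pool.

# Toy quorum check (illustration only, not BOINC's real validation code).
def validate(results, min_quorum=2, tolerance=1e-6):
    """Return an accepted result if at least min_quorum results agree, else None."""
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tolerance]
        if len(agreeing) >= min_quorum:
            return candidate   # this becomes the "canonical" result
    return None                # no quorum reached: reissue the work unit

print(validate([0.4231, 0.4231, 0.9999]))  # two agree -> 0.4231 accepted
print(validate([0.1, 0.2]))                # no agreement -> None (reissue)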
So, indeed the E@H appetite is large, larger, largest ....... :-)
The project scientists have selected the details of the signal processing to suit likely success in answering (in the positive or the negative) interesting astronomical questions.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: Prime "suspects" are
Merlin/Morgane are actually two clusters at the AEI: Merlin is now four years old and is dying node by node (on the order of 1-2 per week). It originally consisted of 180 dual-CPU Athlon MP machines. Morgane is a cluster of 615 single-socket dual-core Opterons.
The two "e-science" / "Grid" accounts don't actually correspond to single clusters. They are bunches of Einstein@home tasks that are submitted to various clusters / supercomputers (of two different sub-structures) of "The Grid" as ordinary computing jobs. Some of them are clusters of the Numerical Relativity group at AEI, and one is the old Medusa cluster at UWM that Bruce Allen set up 7 years ago, after which Merlin once was modeled and which was the predecessor of Nemo.
Oh, and to the original question: 10 TFlops are impressive, especially at the price, but they are less than 20% of the computing power Einstein@Home delivers 24/7 (not just for the duration of a benchmark, as for the Top500).
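Spelling that comparison out (rough arithmetic only, using the figures quoted above):

# If 10 TFlops is less than 20% of what the project sustains around the clock,
# the sustained total must be above roughly 50 TFlops.
cluster_tflops = 10.0              # the Virginia Tech-class figure from the question
max_share = 0.20                   # "less than 20%" of the project total
print(cluster_tflops / max_share)  # -> 50.0 TFlops as a lower bound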
BM