First search on the advanced-generation LIGO detector data

The first E@h search on the advanced-generation LIGO detector data (O1) has started! We are searching the sky for gravitational-wave signals with frequencies between 20 Hz and 100 Hz. We have packed two searches into a single application: one for standard, ever-lasting continuous gravitational waves and the other for continuous signals lasting only a few days. The run was designed to last no more than a few months, because we have a long list of exciting searches that we want to launch on the O1 data: we want to look at frequencies above 100 Hz and also concentrate our computing power on a few specific promising objects.

Since the last gravitational-wave run we have developed a faster application that hinges on the power of the Fast Fourier Transform algorithm. There is no such thing as a free lunch, so we are paying a price for this: the performance of our application now depends on the size of the cache of the volunteer computer that it is running on. In order to assign credit fairly for the work done by all volunteer hosts, and to balance the computational load well among the different hosts, we have split the work for this search into two separate runs for different host classes. A work-unit from either of these runs is equally likely to harbour a signal, and both runs are crucial to the search!

M.Alessandra Papa for the E@H team

Comments

Bill592
Bill592
Joined: 25 Feb 05
Posts: 786
Credit: 70,825,065
RAC: 0

First search on the advanced-generation LIGO detector data

Thanks Marialessandra!

Exciting !

Bill

PhiAlpha
PhiAlpha
Joined: 8 Nov 04
Posts: 34
Credit: 836,625,900
RAC: 1

Wonderful news!

Wonderful news!

"Everything should be made as simple as possible, but not simpler." A. Einstein

LG Training
LG Training
Joined: 1 Nov 10
Posts: 1
Credit: 1,551,306
RAC: 0

Excellent news. Let's hope

Excellent news. Let's hope that LIGO continues to live up to a spectacular beginning!

Benva
Benva
Joined: 19 Jul 08
Posts: 4
Credit: 12,838,249
RAC: 0

Great news! Thanks a lot!

Great news! Thanks a lot!

Todderbert
Todderbert
Joined: 3 Jun 15
Posts: 1,285
Credit: 645,963,019
RAC: 0

This is great, glad to be a

This is great, glad to be a participant. Thank you.

Sasa Jovicic
Sasa Jovicic
Joined: 17 Feb 09
Posts: 75
Credit: 82,813,982
RAC: 38,657
Jordan Kallinen
Jordan Kallinen
Joined: 15 Aug 15
Posts: 35
Credit: 135,136,067
RAC: 1

I'm looking forward to

I'm looking forward to crunching some data!

Jonathan Jeckell
Jonathan Jeckell
Joined: 11 Nov 04
Posts: 114
Credit: 1,341,945,207
RAC: 0

So I assume you are alluding

So I assume you are alluding to the "O1I" and "O1F" applications. If so, which is which, and what does the last position denote?

Thunder
Thunder
Joined: 18 Jan 05
Posts: 138
Credit: 46,754,541
RAC: 0

RE: So I assume you are

Quote:
So I assume you are alluding to the "O1I" and "O1F" applications. If so, which is which, and what does the last position denote?

Well, F denotes "Fast" hosts and I is "I don't know, but it's all the others". ;-)

(Fast hosts are apparently those with sufficient cache for the tasks)

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,578
Credit: 306,786,019
RAC: 194,420

This is so cool !

This is so cool ! :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Bob Mazanec
Bob Mazanec
Joined: 8 Nov 15
Posts: 1
Credit: 21,738
RAC: 0

RE: [...] the performance

Quote:
[...] the performance of our application depends now on the size of the cache of the volunteer computer [...]

So there's a connection between Space & Time...? :-)

erpol[CactusCo]
erpol[CactusCo]
Joined: 5 Feb 09
Posts: 1
Credit: 16,892,746
RAC: 0

And VIRGO? Or is its data

And VIRGO? Or is its data folded in with LIGO's?

Ciao,
Ermanno, who, decades ago, worked 50 m from a gravitational antenna (an aluminium cylinder)

Davissimo
Davissimo
Joined: 14 Oct 08
Posts: 2
Credit: 378,727,100
RAC: 378,889

I am now running on a newer

I am now running a newer computer with 6 GB of memory. Can I assume that it has enough cache to handle this newer work, or is some adjustment to the settings needed so that it can be handled? My little desktop does indeed want to join in the crunching.

Thunder
Thunder
Joined: 18 Jan 05
Posts: 138
Credit: 46,754,541
RAC: 0

RE: I am now running on a

Quote:
I am now running a newer computer with 6 GB of memory. Can I assume that it has enough cache to handle this newer work, or is some adjustment to the settings needed so that it can be handled? My little desktop does indeed want to join in the crunching.

The amount of cache depends on what model of CPU you have (different models are made with larger or smaller caches based on many factors, like price, intended purpose, etc.). The amount of RAM that you mentioned is completely independent of the cache.

That said, your Core i3-2130 (which is similar to my i3-4130, just a bit older) should be able to handle the "I" tasks, so it will probably get them and certainly contribute. I notice back on the 4th and 5th it completed 2 of the "tuning" tasks, so in that way, it already has. :-)
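For anyone wanting to check their own L3 cache size, here is a small sketch. It assumes the Linux sysfs layout (nothing E@H-specific); on other systems it simply reports None.

```python
import glob
import os

def l3_cache_kib():
    """Return the CPU's L3 cache size in KiB by reading Linux sysfs,
    or None if it cannot be determined (non-Linux, or no L3 cache)."""
    for index_dir in glob.glob("/sys/devices/system/cpu/cpu0/cache/index*"):
        try:
            with open(os.path.join(index_dir, "level")) as f:
                level = f.read().strip()
            with open(os.path.join(index_dir, "size")) as f:
                size = f.read().strip()          # e.g. "6144K"
        except OSError:
            continue
        if level == "3" and size.endswith("K"):
            return int(size[:-1])
    return None

print(l3_cache_kib())
```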

Andrew Wade
Andrew Wade
Joined: 23 Nov 15
Posts: 1
Credit: 36,598,384
RAC: 0

Why exactly is the updated

Why exactly does the updated use of the FFT require more cache?

Also do you have an elog of your development?

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,305
Credit: 248,747,331
RAC: 32,002

RE: Why exactly is the

Quote:
Why exactly does the updated use of the FFT require more cache?

The code that does the actual search is all new and has never been used before on E@H. In previous versions we didn't use an actual FFT (like FFTW); you could think of what we did as a very narrow-band FFT (16 bins). This was highly efficient when targeting specific frequencies (and in memory terms it still is), but required a lot of "looping" when the frequency is one of the things you're searching for. The new code "resamples" the time-domain data and then uses a single FFT to get the whole frequency range. This actually uses FFTW and is more efficient with a larger cache than with a smaller one. It's not that it actually requires larger caches, it just runs a lot faster with them.
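As a toy illustration of the idea (my own NumPy sketch, not the E@H code): once the data are resampled, one large FFT yields every frequency bin of the search band at once, instead of looping a few-bin transform over each candidate frequency.

```python
import numpy as np

# Toy illustration (not the E@H code): recover an unknown signal frequency
# with a single FFT over the whole band, instead of looping a narrow-band
# (few-bin) transform over many candidate frequencies.
fs = 1024.0                            # sampling rate in Hz (arbitrary)
t = np.arange(0, 64, 1 / fs)           # 64 s of data
f_signal = 90.55                       # injected signal frequency, Hz
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f_signal * t) + 0.1 * rng.standard_normal(t.size)

# One FFT covers all bins at once; with N samples the bin spacing is fs/N.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(round(f_peak, 2))                # recovers ~90.55 Hz
```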

Quote:
Also do you have an elog of your development?

Development of the application is done in a git repo that is publicly visible. However, it's not really obvious which changes will affect an E@H application. The application currently running is built from lalapps/src/pulsar/GCT/HierarchSearchGCT.c, the main ingredients for the actual search code are in lalpulsar/src, and technical stuff to make this a BOINC application is in lalapps/src/pulsar/EinsteinAtHome.

BM

BM

Robert Rizzotto
Robert Rizzotto
Joined: 5 Nov 15
Posts: 3
Credit: 144,773
RAC: 0

I am so very impressed. Now I

I am so very impressed. Now I ask this question: why can we not explore the gravitational waves within our own solar system? Should we not be able to pick up gravitational waves better just outside of the Oort cloud?

Harm
Harm
Joined: 24 Aug 05
Posts: 6
Credit: 16,606,027
RAC: 0

I think they would be too

I think they would be too weak! Remember, the only gravitational waves found so far (not by this project, but in data from the same LIGO detectors this project gets its data from) were determined to be caused by the collision of two black holes. So on the one hand it seems a pity we cannot detect gravitational waves within our own solar system; on the other hand we are lucky, as a black hole (or any other object massive enough to cause detectable gravitational waves) being so close would likely destroy our planet and turn it into the equivalent of a string of spaghetti due to the massive tidal gradient.

Harm
Harm
Joined: 24 Aug 05
Posts: 6
Credit: 16,606,027
RAC: 0

Also putting up detectors for

Also putting up detectors for gravitational waves in space is a totally valid idea that is currently being explored.

Mad_Max
Mad_Max
Joined: 2 Jan 10
Posts: 154
Credit: 2,191,255,067
RAC: 426,559

RE: I think they would be

Quote:
I think they would be too weak!


Not only weak, but they also have very low frequencies, tiny fractions of 1 Hz, because in our solar system we do not have any fast-spinning objects of cosmic mass scale. All orbital and spin periods lie in the range from hours to years. A 1-hour period = 0.00028 Hz; a 1-day period = 0.000012 Hz.

And no current GW detector is sensitive to such low frequencies. Their working range is something like 10-1000 Hz.
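The arithmetic behind those numbers, spelled out as a trivial sketch:

```python
# Converting a rotation/orbital period into a frequency in Hz shows why
# solar-system motions sit far below the ~10-1000 Hz band of current
# ground-based detectors.
def period_to_hz(period_seconds):
    return 1.0 / period_seconds

hour = 3600.0
day = 24 * hour
print(f"{period_to_hz(hour):.5f} Hz")  # 1-hour period -> 0.00028 Hz
print(f"{period_to_hz(day):.6f} Hz")   # 1-day period  -> 0.000012 Hz
```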

Gary Charpentier
Gary Charpentier
Joined: 13 Jun 06
Posts: 2,036
Credit: 103,631,828
RAC: 33,486

RE: RE: I think they

Quote:
Quote:
I think they would be too weak!

Not only weak, but they also have very low frequencies, tiny fractions of 1 Hz, because in our solar system we do not have any fast-spinning objects of cosmic mass scale. All orbital and spin periods lie in the range from hours to years. A 1-hour period = 0.00028 Hz; a 1-day period = 0.000012 Hz.

And no current GW detector is sensitive to such low frequencies. Their working range is something like 10-1000 Hz.


Well, let's look at the closest object to us, the Moon. If we built a detector that could look at those low frequencies we would have a problem: tides. As the gravitational waves would be in sync with the tides, it becomes a real issue to subtract the tidal force out of the data and leave just the GW. I do strongly suspect that at the distance of the Moon (inverse square law) the GW energy is above the threshold of a detector; it's just that we haven't yet figured out how to isolate the detector from the tidal forces, or how to keep our detector quiet for the month-long period of the wave. I'm also not sure whether the wave front would require detectors at the poles or the equator. We may also have to find a way to remove the Earth's rotation, as the arms may need to point the same way in space for such a low-frequency detection. If at the poles, I suppose you could build a huge lazy Susan to allow the Earth to rotate under the lab!

PeteBB
PeteBB
Joined: 24 Nov 05
Posts: 6
Credit: 449,559
RAC: 0

This is probably a dumb

This is probably a dumb (ignorant, yes) question!?
Would not a Forward Gravitational Detector be of use here? My understanding is that it is VERY sensitive.

PsiberMan
PsiberMan
Joined: 16 Apr 07
Posts: 11
Credit: 793,546
RAC: 0

My question is probably

My question is probably dumber than most... It concerns measuring the frequencies below 10 hz...

With all the issues around frequencies, revolutions, tidal forces and such, would it not be possible to build and station a GW detector in a neutral orbit in space? Of course, nothing can definitively be said to be stationary except relatively speaking. I hear people talking about isolating movements using lazy Susans at Earth's poles.

I would suppose the only argument against this would be the need to maintain the absolute positioning required for the interferometric measurements. Even so, how do they maintain the absolute positioning required for extended time-lapse photography with Hubble, Spitzer, COBE, et al.?

Sign me, "Curious George"

-_=G.g

rbpeake
rbpeake
Joined: 18 Jan 05
Posts: 266
Credit: 1,093,967,800
RAC: 711,294

RE: My question is probably

Quote:

My question is probably dumber than most... It concerns measuring the frequencies below 10 hz...

With all the issues around frequencies, revolutions, tidal forces and such, would it not be possible to build and station a GW detector in a neutral orbit in space? Of course, nothing can definitively be said to be stationary except relatively speaking. I hear people talking about isolating movements using lazy Susans at Earth's poles.

I would suppose the only argument against this would be the need to maintain the absolute positioning required for the interferometric measurements. Even so, how do they maintain the absolute positioning required for extended time-lapse photography with Hubble, Spitzer, COBE, et al.?

Sign me, "Curious George"


Have you seen this thread? https://einsteinathome.org/node/196945

MAGIC Quantum Mechanic
MAGIC Quantum M...
Joined: 18 Jan 05
Posts: 1,855
Credit: 1,344,659,380
RAC: 1,506,899

RE: This is probably a dumb

Quote:
This is probably a dumb (ignorant, yes) question!?
Would not a Forward Gravitational Detector be of use here? My understanding is that it is VERY sensitive.

Doesn't sound like it would work here.

https://en.wikipedia.org/wiki/Robert_L._Forward

Betreger
Betreger
Joined: 25 Feb 05
Posts: 991
Credit: 1,551,691,834
RAC: 696,605

I assume this new LIGO data

I assume this new LIGO data is the project's highest current priority, so I will turn off the other CPU work fetches.

Muskoka
Muskoka
Joined: 20 Dec 09
Posts: 2
Credit: 1,177,059,525
RAC: 0

For our einstein@home team

For our einstein@home team (at geopense.net), we would really like to see more crunching clients that run on the Raspberry Pi. Why? Because this $35 board has brought millions of people into the maker movement, and a sizeable number of those are also into citizen science. The GPU in the Pi is proprietary, but with the impending Nvidia Pascal GPUs, FFT algorithms may really start to pay off. Here's the thing: the Pi has had FFT on the GPU for over two years!

https://www.raspberrypi.org/blog/accelerating-fourier-transforms-using-the-gpu/

Quote:
"The scientific spirit is of more value than its products"

- Thomas Huxley

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 770
Credit: 568,367,012
RAC: 130,256

Any chance for a GPU version

Any chance of a GPU version of the new apps? They should run the FFT well. And integrated Intel GPUs would have access to unusually large caches, i.e. the CPU's L3 and L4 (if present). Most importantly, from my point of view: the Skylake GPUs can't run BRP at all, probably due to driver problems, so maybe the new app could work.

MrS

Scanning for our furry friends since Jan 2002

Poorman
Poorman
Joined: 25 Apr 15
Posts: 6
Credit: 1,135,424
RAC: 0

I have two questions: 1.

I have two questions:

1. On the home page for GEO600 it states: "GEO600 ist Teil des weltweiten Netzwerks von Gravitationswellen-Detektoren und ist derzeit der einzige Detektor, der durchgängig Messdaten aufnimmt." (GEO600 is part of the worldwide network of gravitational-wave detectors and is at present the only detector that continuously records measurement data.)
Does any of this data from GEO600 find its way into the E@H analysis?

2. Both Hanford and Livingston are producing data. How is this handled by E@H, i.e. is the data handled separately or is it combined? If it is handled separately, can one see from the data packet downloaded by E@H which site the data is coming from?

Thanks,

Larry

Betreger
Betreger
Joined: 25 Feb 05
Posts: 991
Credit: 1,551,691,834
RAC: 696,605

How are the estimated run

How are the estimated run times calculated? I recently added a CPU-only host in order to process more of this data. The project sends it what appear to be twin packs, because they are awarded 2K in credit. At first the estimated run time was well over 1 day, when they actually take a bit over 13 hrs. Now the estimated time is down to 17 h 7 m.

Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

For a new host Boinc uses the

For a new host, Boinc uses the benchmark and the estimated-size value sent by the server to calculate how long a task will take. As tasks complete, Boinc adjusts the DCF (Duration Correction Factor) to track the actual time the tasks take, and so calibrates the estimates for future tasks.
Tasks taking a shorter time than the estimate make Boinc adjust the DCF down only a little; a task taking longer than the estimate makes Boinc adjust the DCF to account for the whole difference, so that future estimates match the long-running task.
There is only one DCF per project in Boinc, so running several applications from the same project can, and probably will, make the DCF, and with it the estimates, swing up and down.

Newer server software handles the estimate calculation on the server side, and Boinc then disables the use of DCF for that project. Unfortunately Einstein has yet to adopt this, so we are still stuck with DCF.
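The asymmetric behaviour described above can be sketched like this (a toy model of the update rule; the exact formula is my assumption, not BOINC's actual code):

```python
def update_dcf(dcf, estimated, actual, inertia=0.1):
    """Toy model of BOINC's asymmetric Duration Correction Factor update.

    A task that ran LONGER than estimated bumps the DCF immediately by
    the whole overrun, so future estimates cover the long task; a task
    that finished EARLY only eases the DCF down by a small fraction.
    (The exact formula here is an illustrative assumption.)
    """
    ratio = actual / estimated
    if ratio > 1.0:
        return dcf * ratio                       # jump up by the full overrun
    return dcf + inertia * dcf * (ratio - 1.0)   # drift down slowly

dcf = 1.0
dcf = update_dcf(dcf, estimated=10.0, actual=15.0)  # overrun: DCF jumps to 1.5
dcf = update_dcf(dcf, estimated=10.0, actual=5.0)   # early finish: small dip
print(round(dcf, 3))
```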

DanNeely
DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0

RE: I have two

Quote:

I have two questions:

1. On the home page for GEO600 it states: "GEO600 ist Teil des weltweiten Netzwerks von Gravitationswellen-Detektoren und ist derzeit der einzige Detektor, der durchgängig Messdaten aufnimmt." (GEO600 is part of the worldwide network of gravitational-wave detectors and is at present the only detector that continuously records measurement data.)
Does any of this data from GEO600 find its way into the E@H analysis?

2. Both Hanford and Livingston are producing data. How is this handled by E@H, i.e. is the data handled separately or is it combined? If it is handled separately, can one see from the data packet downloaded by E@H which site the data is coming from?

In previous searches, Hanford data came in work units starting with an 'H' and Livingston data in work units starting with an 'L'. I assume this is still the case, but I only have 'h' tasks on my PCs at present. GEO600 data was never sent out to E@H because the detector, due to its smaller size, is significantly less sensitive despite having equivalent hardware installed.

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,870
Credit: 115,780,002,413
RAC: 35,227,570

RE: Does any of this data

Quote:
Does any of this data from GEO600 find its way into the E@H analysis?


That data is not processed by E@H volunteers. I believe it is processed 'in house' using the Atlas supercomputer.

Quote:
Both Hanford and Livingston are producing data. How is this handled by E@H, i.e. is the data handled separately or is it combined?


The data is provided in pairs of files at a particular frequency, the two members of a 'pair' coming from the two observatories. A data file starting with h1_... has come from Hanford and the corresponding l1_... has come from Livingston. The app crunching a single task will be processing the data in (usually) 6 pairs of files with closely related frequencies. There are a large number of tasks that can use the same 6 pairs of data files - this is the essence of locality scheduling.

Here is a complete example, chosen from one of my machines. It happens to come from the 'F' series. It's quite similar for 'I' with just small differences in the names:-

[pre]Full task name: h1_0090.45_O1C01Cl2In3__O1AS20-100F_90.55Hz_2380_0
Workunit name: h1_0090.45_O1C01Cl2In3__O1AS20-100F_90.55Hz_2380
Large data files - pair 1: h1_0090.45_O1C01Cl2In3.we4j l1_0090.45_O1C01Cl2In3.we4j
Large data files - pair 2: h1_0090.50_O1C01Cl2In3.ljGc l1_0090.50_O1C01Cl2In3.ljGc
Large data files - pair 3: h1_0090.55_O1C01Cl2In3.Gxec l1_0090.55_O1C01Cl2In3.Gxec
Large data files - pair 4: h1_0090.60_O1C01Cl2In3.ddDA l1_0090.60_O1C01Cl2In3.ddDA
Large data files - pair 5: h1_0090.65_O1C01Cl2In3.nUUp l1_0090.65_O1C01Cl2In3.nUUp
Large data files - pair 6: h1_0090.70_O1C01Cl2In3.x34r l1_0090.70_O1C01Cl2In3.x34r[/pre]

Some points about the above.

    * The term 'workunit' describes the group of identical tasks which gets sent to enough computers to ensure that at least two results are returned which can be checked against each other for validation purposes.
    * The scheduler sends out two copies initially. These are the primary tasks, and they have extensions to the workunit name (_0 and _1) to form their task names. If either of these two fails, further identical copies (_2, _3, _4, etc.) are sent out as necessary to replace the failed tasks. A workunit is completed when two successful tasks that agree with each other have been returned and processed by the validator.
    * A task is just a set of parameters which is interpreted by the app. Each computer receiving a copy of a task also needs the complete set of 12 large data files (and other standard files) that the set of parameters will point to.
    * Both the workunit name and the task name have a 'sequence' number attached - _2380 in the above example. This number is just a position in a sequence from some high starting number all the way down to zero. The full sequence represents all the tasks that will use exactly the same set of large data files. The scheduler issues tasks in reverse numerical order so when you see sequence numbers getting down towards zero, you know that the tasks for that particular sequence are just about all issued. If we call the above example the "90.45Hz set or sequence", it's very convenient for the scheduler to also give you tasks for the adjacent sequence - 90.50Hz. For tasks in this sequence, you could use 10 out of the previous 12 data files and just add two new ones, h1_0090.75_.... and l1_0090.75_.... - a very minimal extra download.
    * At any one time, the workunit generator keeps just a small number of tasks in each sequence. This small number can quickly (if just temporarily) run out if a host asks for a lot of tasks (or several hosts ask almost simultaneously). In that case you will be switched to an adjacent sequence (0.05Hz away) for the balance of the tasks needed. For this reason, a fast multicore machine can have several adjacent sequences 'on the go' at one time. When more work is generated, a depleted sequence will be 'topped up' again. The scheduler will always try to give you work for the data files you already have - locality scheduling.
    * For low frequency sequences (e.g. 20Hz) the number of tasks in a full sequence is relatively small - just a few hundred. This number becomes progressively larger as the frequency increases. Above 90Hz, as in the above example, the number of tasks is around 2,500 or more.
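Putting the naming scheme above into a quick parser (the field names and the pattern are my reading of the description, not an official E@H specification):

```python
import re

# Parse the example task name from the post above into its parts.
# The pattern is inferred from the description, not an official spec.
task = "h1_0090.45_O1C01Cl2In3__O1AS20-100F_90.55Hz_2380_0"

m = re.fullmatch(
    r"(?P<detector>[hl]1)_(?P<base_freq>[\d.]+)_(?P<calibration>[A-Za-z0-9]+)__"
    r"(?P<run>[A-Za-z0-9-]+)_(?P<freq>[\d.]+)Hz_(?P<seq>\d+)_(?P<copy>\d+)",
    task,
)
fields = m.groupdict()
print(fields["detector"])  # h1 -> Hanford data
print(fields["run"])       # O1AS20-100F -> the 'F' run
print(fields["seq"])       # 2380 -> position in the issue sequence
print(fields["copy"])      # 0 -> first of the two primary copies
```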

Cheers,
Gary.

Poorman
Poorman
Joined: 25 Apr 15
Posts: 6
Credit: 1,135,424
RAC: 0

Thank both of you for the

Thank both of you for the detailed information!

Larry

Honza_C
Honza_C
Joined: 30 Nov 13
Posts: 4
Credit: 16,158,997
RAC: 0

I'm going to buy a new comp.

I'm going to buy a new computer. Will an HP ProBook 450 G2 be suitable for "F"? Thanks for advice. Honza

Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

RE: I'm going to buy a new

Quote:
I'm going to buy a new computer. Will an HP ProBook 450 G2 be suitable for "F"? Thanks for advice. Honza


No; as per Christian's post here, Intel i3 processors cannot get the "F" tasks, but they can get the "I" tasks, which are equally important.

Honza_C
Honza_C
Joined: 30 Nov 13
Posts: 4
Credit: 16,158,997
RAC: 0

Thanks for your answer, but

Thanks for your answer, but the HP ProBook 450 G2 contains an Intel Core i5-5200U. What about this processor? Thanks. Honza

Honza_C
Honza_C
Joined: 30 Nov 13
Posts: 4
Credit: 16,158,997
RAC: 0

Hi Alessandra, what size has

Hi Alessandra, what cache size is needed for the FFT tasks, please? Honza

Christian Beer
Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 172,082,033
RAC: 271,465

Hi Honza, I don't know what

Hi Honza, I don't know what you mean by "cache for FFT tasks". Did you see the FAQ on the O1AS search? I think it should answer your question, if by FFT tasks you mean the O1AS20-100F tasks.

Stefan
Stefan
Joined: 11 Feb 16
Posts: 2
Credit: 24,222,160
RAC: 0

After processing a big chunk

After processing a big chunk of the currently available data, are there already any indications that something interesting might have been found?