When he said search tools were inadequate, did that mean the collected raw data (assuming it is still available) couldn't be processed with different tools?
Tom M
Yes, I would assume the data is still available for other researchers to use as they wish. After all, there were several times when they just kept reissuing the same tasks over and over again, simply to keep users from leaving for other projects. Of course, they threw out the new results, since they already had results showing there was nothing to see in those tasks, but they did hand out credit for the reissued work.
Yes, I'm aware. The science database was full of RFI.
But a new algorithm was proposed, something that could use ML/AI to better analyse and classify the data and results.
Budget, however, was always a problem.
From what I've gleaned from David Anderson's detailed explanations of Nebula, one problem turned out to be the 'piggybacking' approach that was taken. The SETI@home receiver initially used a separate antenna next to the Gregorian dome above Arecibo's 300 m dish; later it used the multibeam ALFA receiver. Depending on the date and time, the telescope points at some spot near the zenith, and Earth's rotation moves the sky background by a fraction of a degree in the roughly 100 seconds covered by one SETI@home workunit. A workunit therefore covers a narrow strip of sky.

Over the years, some (37%?) of the sky visible from Arecibo has been covered: millions of short stripes, each recorded as a begin right ascension/declination, some waypoints (ra1, dec1), (ra2, dec2), ..., and an end RA/Dec, mostly spanning about 0.4 degrees (the 'angle_range'). If the telescope was briefly slewed to a target for another scientist's observation (the receiver platform is moved by steel cables), the workunit covered only 0.01 to 0.03 degrees (a low-angle-range workunit).
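To make that ~0.4 degree figure concrete, here is a small back-of-the-envelope sketch (my own illustration, not Nebula code; the 107 s workunit length and the +18 degree declination near Arecibo's zenith are my assumptions):

```python
import math

SIDEREAL_DAY_S = 86164.1   # one full rotation of the sky, in seconds
WORKUNIT_S = 107.0         # assumed workunit length (~100 s in the post)

def angle_range_deg(declination_deg: float, duration_s: float = WORKUNIT_S) -> float:
    """Length of the sky strip swept by a fixed (transit) telescope beam.

    The sky moves at 360 deg per sidereal day; the apparent motion
    shrinks by cos(declination) away from the celestial equator.
    """
    rate_deg_per_s = 360.0 / SIDEREAL_DAY_S
    return rate_deg_per_s * duration_s * math.cos(math.radians(declination_deg))

# Near Arecibo's zenith (declination ~ +18 deg):
print(f"{angle_range_deg(18.0):.2f} deg")   # ~0.43 deg
```

At the sidereal rate of about 0.0042 degrees per second, ~107 seconds of drift gives almost exactly the 0.4 degree strip described above.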
SETI@home searched for signal peaks ('spikes') and tried to establish their celestial origin: as Earth's rotation sweeps the telescope's aim across a fixed source, the signal strength should rise, reach a maximum, and fall again, and the data were tested against that profile (Gaussian fitting). In addition, short pulses recurring two and three times ('pulses' and 'triplets') were searched for. The problem with the persistent signals, the ones that must show a Gaussian strength profile, is that re-observations of the same point in the sky over the years never covered the same stripe; the stripes lie at angles to one another and merely cross. That makes it almost impossible for Nebula to verify a signal, i.e. to show that the same signal was recorded again months or years later and can be correlated with earlier ones from the same spot, while picking candidates out of an ocean of RFI. Nebula does it as best it can; the result was a list of candidates for re-observation.
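As a rough illustration of the Gaussian-fit idea (a minimal sketch with made-up numbers and names, not the actual SETI@home fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def beam_gaussian(t, amplitude, t_peak, sigma, baseline):
    """Power vs. time as the beam drifts across a fixed sky source."""
    return baseline + amplitude * np.exp(-0.5 * ((t - t_peak) / sigma) ** 2)

# Synthetic 107 s power series: noise plus one beam-shaped transit.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 107.0, 256)
power = beam_gaussian(t, amplitude=5.0, t_peak=55.0, sigma=12.0, baseline=1.0)
power += rng.normal(scale=0.3, size=t.size)

# Least-squares fit; a real search would then threshold on fit quality
# to separate sky-like transits from steady terrestrial RFI.
popt, _ = curve_fit(beam_gaussian, t, power, p0=(1.0, 50.0, 10.0, 1.0))
print("fitted amplitude, t_peak, sigma, baseline:", popt)
```

A steady RFI source stays at constant strength while the beam moves, so it fails this fit; a genuine sky source rises and falls with the beam pattern.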
When the project was defined, none of the scientists initially thought about the final evaluation of such a gigantic heap of data; Nebula required a powerful compute cluster for that. For the last ~10 years the SETI@home servers were financed only by volunteer donations, and keeping things running for millions of data-hungry clients was a constant struggle. Bruce Allen generously allowed Nebula to use the Einstein@Home cluster. Likewise, no one initially thought of a multibeam receiver, which came along a decade later and covers several closely spaced stripes of sky at once. The GPU revolution also only happened years later.
Later versions of the SETI@home app greatly increased the analysis accuracy, since sufficient computing power from millions of hosts/GPUs was available. The range of evaluated Doppler drift rates was greatly expanded to include hypothetical transmitters on satellites orbiting a planet, and the analysis of narrowband signals was improved too. For this purpose, old recordings were re-analyzed (partly, of course, for lack of new data).
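For what it's worth, the usual way such a drift search works is to 'de-chirp' the time series at each candidate drift rate before the FFT. A minimal sketch (my own, assuming the commonly quoted 9765.625 Hz workunit sub-band; function names are made up):

```python
import numpy as np

FS = 9765.625   # Hz, complex sample rate of one workunit sub-band (assumed)

def dedrift(samples: np.ndarray, drift_hz_per_s: float) -> np.ndarray:
    """Remove a linear Doppler drift by multiplying with a conjugate chirp.

    A transmitter accelerating along the line of sight (rotating planet,
    orbiting satellite) appears at f(t) = f0 + d*t; de-chirping at the
    matching rate d collapses it back into a single FFT bin.
    """
    t = np.arange(samples.size) / FS
    return samples * np.exp(-1j * np.pi * drift_hz_per_s * t**2)

# Demo: a tone drifting at 5 Hz/s smears over ~200 fine FFT bins; after
# de-drifting at the right rate it peaks sharply in a single bin.
t = np.arange(2**16) / FS
chirped = np.exp(2j * np.pi * (1000.0 * t + 0.5 * 5.0 * t**2))  # f0=1 kHz, d=5 Hz/s
spectrum = np.abs(np.fft.fft(dedrift(chirped, 5.0)))
print("peak at", spectrum.argmax() * FS / t.size, "Hz")         # ~1000 Hz
```

Widening the range of drift rates d that get tried is exactly what made room for hypothetical transmitters on fast-moving satellites, at the cost of re-running the FFTs once per candidate rate.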
Unfortunately, during the development of Nebula it turned out that the computationally expensive fine frequency resolution for narrowband signals, obtained with large Fourier transforms (max FFT length 2^18 = 262144 points), naturally comes with poor time resolution. The jumps in time, and therefore in sky position, between successive FFTs become so large that when Nebula correlates two signals, i.e. two short stripes in the sky recorded months or years apart, only a single point in time is covered by both. Only the small Fourier transforms (cheap to compute) used for broadband signals since the project's start give precise time resolution, but poor frequency resolution. So narrowband signals can hardly be verified by Nebula, and it is precisely those that are not radiated by natural phenomena and are of particular interest.
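The trade-off is easy to see numerically. A small sketch (again assuming the 9765.625 Hz workunit sample rate, so ~107 s is about 2^20 complex samples):

```python
FS = 9765.625              # Hz, workunit sample rate (assumed: 2.5 MHz band / 256)
WORKUNIT_SAMPLES = 2**20   # ~107 s of complex samples (assumption: 107.4 s * FS)

for n_fft in (2**3, 2**11, 2**18):
    delta_f = FS / n_fft               # frequency resolution: bin width in Hz
    delta_t = n_fft / FS               # time resolution: duration of one FFT in s
    steps = WORKUNIT_SAMPLES // n_fft  # how many FFTs (sky positions) per workunit
    print(f"N=2^{n_fft.bit_length() - 1}: "
          f"df={delta_f:.5f} Hz, dt={delta_t:.3f} s, {steps} time steps")
```

With N = 2^18 there are only four FFT steps across the whole ~107 s strip, which is why the jumps in sky position Nebula has to correlate are so coarse for narrowband signals.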
(I'm not a mathematician. Please excuse the lousy explanation of Fourier transforms)
The University of California, Los Angeles has launched a SETI project based on Zooniverse. I watched a very pleasant webinar.
Tullio