I don't think it's coming back.
You need a lot more than just a single fast system for the server. If you could do all this with even $10,000 worth of hardware, SETI would never have stopped, but the scope of the costs is well beyond that. You need many systems, all with defined tasks, and a fat pipe to the internet; a single server sitting on a 1Gb residential fiber connection ain't gonna cut it. Raw data from the listening stations needs to be pre-processed and formatted before it ever gets to the distribution stage. Does anyone here know how to do that, or what hardware/software is required for such tasks? Probably not. I remember seeing a video several years back looking into some of the pre-processing SETI was doing, and they had custom hardware (FPGAs) doing some of that work. If you think writing GPU code is hard, FPGA programming is a whole new ballgame of black magic.
Even if someone managed to get a grassroots setup going to process data, what then? Processing data just to process data, with no researchers to analyze or sort through it, is pointless. Might as well just run Collatz if you want to aimlessly crunch something.
I apologize for my bleak outlook on this, but I'm trying to be realistic here.
_________________________________________________________________________
I doubt I have anywhere near the knowledge required to even direct someone who HAS the knowledge to do what Ian is saying. So... I guess it is all for naught. Sadly, we'll just say goodbye for now and HOPE that someone picks up the ball again in the future.
Proud member of the Old Farts Association
If Artificial Intelligence tools are being applied to SETI, then supporting AI on BOINC might help? (ML@HOME)
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Btw, besides constantly twiddling with your cooling system, you could ask the (official) developers to take a look at hardware support for the PTX atomic global add on f32, at least on NVIDIA hardware.
5 seconds cut away on the first try.
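For anyone wondering what that buys you: on NVIDIA hardware, CUDA's atomicAdd() on a float compiles down to the single PTX instruction being referred to (red/atom.global.add.f32), whereas portable OpenCL 1.x code usually has to emulate a float atomic add with a compare-and-swap loop, which is much slower under contention. Here is a minimal CUDA sketch to illustrate the difference; the accumulate() kernel is hypothetical, not the Einstein@Home code:

// Hypothetical sketch, not the Einstein@Home code. It shows what hardware
// support for the PTX global atomic add on f32 buys you on NVIDIA GPUs:
// CUDA's atomicAdd(float*, float) compiles to a single red/atom.global.add.f32
// instruction, while the portable OpenCL 1.x fallback (in the comment below)
// emulates the same operation with a compare-and-swap loop.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread adds its partial result into one global accumulator.
__global__ void accumulate(const float* partials, float* bin, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(bin, partials[i]);   // hardware float atomic on global memory
    }
}

/* OpenCL 1.x software emulation of the same operation, for comparison:
   every contended update retries the whole read-modify-write loop.

   void atomic_add_f32(volatile __global float* p, float v) {
       union { unsigned int u; float f; } oldv, newv;
       do {
           oldv.f = *p;
           newv.f = oldv.f + v;
       } while (atomic_cmpxchg((volatile __global unsigned int*)p,
                               oldv.u, newv.u) != oldv.u);
   }
*/

int main()
{
    const int n = 1 << 20;
    std::vector<float> h(n, 1.0f);     // every thread contributes 1.0f

    float *d_partials = nullptr, *d_bin = nullptr;
    cudaMalloc(&d_partials, n * sizeof(float));
    cudaMalloc(&d_bin, sizeof(float));
    cudaMemcpy(d_partials, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_bin, 0, sizeof(float));

    accumulate<<<(n + 255) / 256, 256>>>(d_partials, d_bin, n);

    float sum = 0.0f;
    cudaMemcpy(&sum, d_bin, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.1f (expected %d)\n", sum, n);   // expect 1048576.0

    cudaFree(d_partials);
    cudaFree(d_bin);
    return 0;
}

Compile with nvcc and the printed sum should equal the element count; the commented OpenCL fallback shows why the hardware instruction matters.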
The next advice is to take a look at the access pattern of twiddle_dee: that would shave off some 50 seconds on NVIDIA and 17% on some ATI model I know of.
Keep on crunching!
Ever since the Arecibo telescope bit the dust, I sort of have to doubt SETI will be making a comeback anytime soon. Sad day.
Petri33, could I ask what the "access pattern of twiddle_dee" is?
Proud member of the Old Farts Association
It’s a section of the OpenCL code in the Einstein GPU apps that causes big slowdowns for NVIDIA GPUs.
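To give a flavour of what an "access pattern" problem looks like in practice: this is a generic sketch, not the actual twiddle_dee code, and I'm assuming it reads a table of precomputed twiddle-style factors. On NVIDIA hardware the threads of a warp want to read consecutive addresses so the loads coalesce into a few wide transactions; a strided gather through the same table turns each warp read into dozens of separate transactions, and that alone can cost the kind of time quoted above. Hypothetical CUDA kernels showing the two patterns:

// Generic sketch of the kind of access-pattern issue being discussed,
// NOT the actual twiddle_dee code from the Einstein OpenCL app.
// Both kernels multiply a signal by a table of precomputed complex factors;
// the only difference is how the table is indexed.

#include <cuda_runtime.h>
#include <vector>

// Slow pattern: consecutive threads gather from the table with a large
// stride, so a warp's 32 loads hit many different memory segments.
__global__ void apply_strided(const float2* table, float2* data,
                              int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    size_t idx = ((size_t)i * stride) % n;          // uncoalesced gather
    float2 w = table[idx];
    float2 x = data[i];
    data[i] = make_float2(x.x * w.x - x.y * w.y,    // complex multiply
                          x.x * w.y + x.y * w.x);
}

// Fast pattern: consecutive threads read consecutive table entries, so a
// warp's loads coalesce into one or two 128-byte transactions.
__global__ void apply_coalesced(const float2* table, float2* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float2 w = table[i];                            // contiguous, coalesced read
    float2 x = data[i];
    data[i] = make_float2(x.x * w.x - x.y * w.y,
                          x.x * w.y + x.y * w.x);
}

int main()
{
    const int n = 1 << 20;
    std::vector<float2> h(n, make_float2(1.0f, 0.0f));

    float2 *d_table = nullptr, *d_data = nullptr;
    cudaMalloc(&d_table, n * sizeof(float2));
    cudaMalloc(&d_data,  n * sizeof(float2));
    cudaMemcpy(d_table, h.data(), n * sizeof(float2), cudaMemcpyHostToDevice);
    cudaMemcpy(d_data,  h.data(), n * sizeof(float2), cudaMemcpyHostToDevice);

    // Time each variant with cudaEvent_t to see the gap; plain launches here.
    apply_strided  <<<(n + 255) / 256, 256>>>(d_table, d_data, n, 4097);
    apply_coalesced<<<(n + 255) / 256, 256>>>(d_table, d_data, n);
    cudaDeviceSynchronize();

    cudaFree(d_table);
    cudaFree(d_data);
    return 0;
}

Same arithmetic in both kernels; only the table indexing changes. A real fix would either reorder the table to match the thread indexing or stage the reused factors through shared or constant memory.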
_________________________________________________________________________
A new (Beta) version of the GW app released on Thursday has significantly increased NVIDIA GPU speeds. Whether the two matters, the twiddle_dee slowdown and the new app, are related is still a moot point.
Hmmm..... Let's hope!
Proud member of the Old Farts Association
I'm skeptical that they added the improvements from the other thread, since I tested the code myself and saw no change with the GW app in an apples-to-apples comparison. The timing of the new app release also seems too fast for those changes to have been implemented (and I don't believe they would blindly make a change without some testing on their end). I honestly think they changed something else.
It's also much harder to standardize the testing of GW apps, since there's a rather large variance in nominal run times even with no app changes. Small differences in the frequency and delta frequency seem to induce rather large variances in total run time. It's not like the GR tasks, which are very homogeneous: every task runs within +/- 1 sec at steady state on the same device, which makes performance much easier to track.
The GW GPU app certainly needs some improvement, and the reliance on CPU processing needs to shift back over to the GPU, to increase GPU utilization and speed things up.
_________________________________________________________________________