Next ABP generation

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244922893
RAC: 16808

RE: So whenever I checked

Message 96964 in response to message 96962

Quote:
So whenever I checked during the last few days, the throughput of ABP2 was between 101% and 168% of the data acquisition rate. I guess this means that the ABP2 search IS now running at sustained real-time speed or better (compared to ca. 40-50% of real-time speed during the ABP1 search)!!!

Yep, seen this too. Actually that's a posting for "Milestones" I guess...

BM

hotze33
Joined: 10 Nov 04
Posts: 100
Credit: 368387400
RAC: 1355

Just some observation: With

Just some observation:
With the longer CUDA WUs the BOINC scheduler (6.10.18) is completely off.
Before the new WUs I had to crunch CPU WUs because of the 128-task-per-day limit. That is now gone. Now I've got something like 160 ABP2 3.08 tasks and 400 S5R6sse2 tasks. They won't be finished, because BOINC is crunching the CUDA stuff.
So my wingmen will have to wait two weeks before the work is reassigned.

Gundolf Jahn
Joined: 1 Mar 05
Posts: 1079
Credit: 341280
RAC: 0

The BOINC scheduler is off,

Message 96966 in response to message 96965

The BOINC scheduler is off, but not because of the longer CUDA tasks.

Anyhow, you could abort surplus tasks that have not been started yet, to shorten the waiting time.
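
If you prefer the command line to BOINC Manager for that, something like this rough Python sketch can do it via boinccmd (the project URL and task names below are placeholders; list your own task names with "boinccmd --get_tasks" first):

    # Abort a hand-picked list of not-yet-started tasks through boinccmd.
    # The project URL and task names are examples, not values to copy blindly.
    import subprocess

    PROJECT_URL = "http://einstein.phys.uwm.edu/"   # Einstein@Home master URL (check yours)
    surplus_tasks = [
        "example_task_name_1",                      # placeholder task name
    ]

    for name in surplus_tasks:
        # boinccmd syntax: --task <project_url> <task_name> abort
        subprocess.run(["boinccmd", "--task", PROJECT_URL, name, "abort"], check=True)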

Regards,
Gundolf

Computers aren't everything in life. (Just a little joke)

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023304931
RAC: 1827262

RE: Anyhow, you could abort

Message 96967 in response to message 96966

Quote:
Anyhow, you could abort surplus tasks that have not been started yet, to shorten the waiting time.

Or you could set No New Task and work off the queue, rather than adding to it.

But your point, I presume, is that when left to itself, as intended, without active user intervention, it produces this undesirable result.

I don't think there is much disagreement that the scheduler is challenged in dealing with mixed streams of work whose actual performance varies relative to the estimates. This applies whether the variation comes from totally different task types (as in ABP vs. GW, or CPU vs. GPU), or just from the sort of variation seen with Angle Range in SETI, or across the systematic variation waves of Einstein GW work.

But users who themselves push things away from the default settings (say with a longer queue, an app_info restriction, etc.) in my mind take on, by that active intervention, a degree of responsibility for watching the outcomes and taking appropriate action to avoid undesirable ones.

Quote:
not because of the longer CUDA tasks

It did actually make a difference, as hotze33 mentioned. Specifically, highly productive hosts, whether GPU-driven or modern CPU, could process more than their daily acquisition limit of 32 ABP2 tasks/CPU/day (is there a separate GPU limit of 128 tasks/GPU/day?) with the 1x tasks, and thus were effectively throttled to less than 100%. With the 4x tasks the daily limit was effectively quadrupled overnight, meaning few if any hosts are now constrained to less ABP2 4x work than their capacity allows.
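
To put rough numbers on that (purely illustrative; the ten-minute figure is an assumed runtime for a 1x ABP2 task on a fast CUDA host, not a measured value), a little Python sketch:

    # Illustration of the quota arithmetic: with 1x tasks the daily task limit
    # can bite before the hardware does; with 4x tasks it no longer can.
    MINUTES_PER_1X_TASK = 10.0   # assumed GPU runtime for an old (1x) ABP2 task
    DAILY_TASK_LIMIT = 128       # per-day quota mentioned earlier in the thread

    def tasks_hardware_can_do(minutes_per_task):
        """How many tasks the hardware could finish in 24 hours of crunching."""
        return 24 * 60 / minutes_per_task

    hw_1x = tasks_hardware_can_do(MINUTES_PER_1X_TASK)       # ~144 tasks/day possible
    print("1x: hardware ~%.0f/day, quota %d -> quota-limited" % (hw_1x, DAILY_TASK_LIMIT))

    # A 4x task bundles four old tasks, so the same 128-task quota now covers
    # four times as much science; the hardware becomes the limit instead.
    hw_4x = tasks_hardware_can_do(4 * MINUTES_PER_1X_TASK)   # ~36 bundles/day possible
    print("4x: hardware ~%.0f/day, quota %d -> hardware-limited" % (hw_4x, DAILY_TASK_LIMIT))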

Mad_Max
Joined: 2 Jan 10
Posts: 153
Credit: 2134779636
RAC: 446606

RE: Hi! Sampling

Message 96968 in response to message 96947

Quote:

Hi!

Sampling frequency:
OK, but if you want to record on tape, say, a morse code transmission over AM radio, what would be your choice of sampling frequency?
You would pick a sampling frequency sufficient to capture the acoustic signal that is *modulated* onto the carrier frequency, and not something on the order of the carrier frequency itself
(hundreds of kHz for AM).

For the pulsars, we are not interested in the exact waveform of the EM emissions of the pulsar, we just want to capture and time the pulses as such.
The fastest-spinning pulsars will send a pulse every few milliseconds, so a sampling frequency of 1 / 128 microseconds seems reasonable to me, because that's enough to catch the modulated "signal" we are interested in.

I know that some kind of compression is used for the sampled data, but because it's mostly noise, you cannot expect large compression ratios.

CU
HBE


I think now I understand :)
So this means that the original data files from ARECIBO do not contain the raw data from the radio telescope, but rather the result of pre-processing (done at ARECIBO itself), showing only the changes in intensity of the EM radiation in a certain frequency range typical of pulsar emission (about 1440 MHz, I think, based on information from the ABP2 log)?
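
Incidentally, the sampling numbers from the quoted post work out roughly like this (only the 128 microsecond interval comes from this thread; the 1.4 ms spin period is just an assumed example for one of the fastest known millisecond pulsars):

    # Back-of-the-envelope check of the 128 microsecond sampling interval.
    SAMPLE_INTERVAL_S = 128e-6                # one sample every 128 microseconds
    sample_rate_hz = 1.0 / SAMPLE_INTERVAL_S  # ~7812.5 samples per second

    pulse_period_s = 1.4e-3                   # assumed ~1.4 ms spin period
    samples_per_pulse = pulse_period_s / SAMPLE_INTERVAL_S

    print("sample rate: %.1f Hz" % sample_rate_hz)        # ~7812.5 Hz
    print("samples per pulse: %.1f" % samples_per_pulse)  # ~10.9

So even for the fastest pulsars there are still roughly ten samples per rotation, enough to catch and time the pulses as such.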

Mad_Max
Joined: 2 Jan 10
Posts: 153
Credit: 2134779636
RAC: 446606

RE: I guess one thing to

Message 96969 in response to message 96956

Quote:

I guess one thing to have an eye on is how BOINC clients are coping with the new units, scheduler-wise. Are the predicted runtimes for the new units about right?

CU
Bikeman


No. My BOINC client set the predicted runtime for the new-generation ABP2 tasks to about 24 hours,
compared to ~3-3.5 hours for the old ABP2 tasks and ~15-16 hours for GW tasks (the latter is about right; on an Athlon XP 2600+ the real runtimes are close to that).
The 24 hours is not correct: real runtimes are ~14-15 hours on the Athlon XP 2600+.
On the second computer (a Core 2 Duo) the predicted runtimes for the new-generation ABP2 tasks are somewhat overestimated too.
But it is not a big problem, because the overestimation is not too large and hardly interferes with normal operation.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752669280
RAC: 1469953

I have a test rig which is so

I have a test rig which is so temporary and bare-bones that it doesn't even have a case. Both the projects it was attached to ran out of work, so I attached it here as a temporary measure - host 2367295

It got allocated 40 tasks on 'first contact', which felt like much more than the two days' work that venue defaults to: I completed the last task today, nine days after issue. The 31 tasks still visible on this website show a total CPU time of just under 6.5 days (3.22 days elapsed - it's a dual-core): it ended with a DCF of 1.560676, so when first issued those 31 tasks alone must have been estimated at over 2 days' work. Next time, I'll turn on work fetch debug logging first!
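
For the record, the arithmetic behind that figure is roughly this (treating DCF as a plain multiplier on the raw estimate, which is a simplification of what the client actually does):

    # Rough reconstruction: the client scales the server's raw runtime estimate
    # by the duration correction factor (DCF), so if DCF ended up at ~1.56 the
    # raw (DCF = 1.0) estimate for the same work was about actual / 1.56.
    elapsed_days = 3.22       # wall-clock time the 31 tasks took on the dual-core host
    final_dcf = 1.560676      # DCF reported when the queue was finished

    initial_estimate_days = elapsed_days / final_dcf
    print("implied initial estimate: ~%.2f days" % initial_estimate_days)  # ~2.06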

What I have noticed was that the first half (roughly) of the tasks were GW tasks, which did take longer than estimated. Then, when I got onto the second tranche of ABP2 tasks, the revised estimates (on the basis of the GW DCF) were too high, and ABP2 tasks processed more quickly, comparatively speaking, than GW.

Also, the ABP2 tasks came in two sizes. Although the task names were similar, some were clearly resends of just a single task, and some were the combined quadruple tasks. Both estimates and runtimes varied by a factor of four, which is entirely appropriate.

Billy
Joined: 2 Jun 06
Posts: 30
Credit: 3514004
RAC: 0

RE: You need to have

Message 96971 in response to message 96961

Quote:

You need to have appropriate clauses in your app_info.xml pointing to the 3.08 executable. Make sure you put the 308 clause before the 306 one. That way the newly downloaded tasks will be correctly 'branded' as 3.08. If you have the 306 clause first, new tasks will still be 'branded' as 3.06. They will crunch and validate successfully but will possibly cause consternation to someone looking at the task details on the website and wondering how a new 'long' task could possibly be crunched without error seemingly after using the old app. Of course, the old app wasn't used - it was just mis-reported that way :-).

The changeover (for the CPU only app) is very simple. You can do exactly the following:-

    * Perform the edits on app_info.xml in-situ, or overwrite the current one with a pre-prepared new copy (make really sure the executable is specified as 3.08 in all relevant places).
    * Drop in a copy of the new 3.08 executable.
    * Stop BOINC.
    * Start BOINC.

The new app will be fired up and will correctly read the last written checkpoint and continue from where the old app had got to. As I don't have any CUDA capable graphics cards, I have no experience with a CUDA changeover.

If anything is not clear - please ask.
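
One quick way to double-check the clause ordering described in the quote above, assuming the usual app_info.xml layout with app_version elements that carry app_name and version_num children, is a few lines of Python run from the project directory:

    # List the <app_version> clauses of app_info.xml in file order, so you can
    # confirm the 308 entry for the ABP2 app really comes before the 306 one.
    import xml.etree.ElementTree as ET

    tree = ET.parse("app_info.xml")
    for av in tree.getroot().iter("app_version"):
        print(av.findtext("app_name"), av.findtext("version_num"))
    # The first line printed for the ABP2 app should show 308.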

I tried to download the Mac intel file 5.09, but just got gibberish.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109385900147
RAC: 35922248

RE: I tried to download the

Message 96972 in response to message 96971

Quote:
I tried to download the Mac intel file 5.09, but just got gibberish.


I presume you're not really saying that the download was 'gibberish' but rather that when you tried to run what you downloaded, you got some sort of error response. Is that correct?

Version 5.09 of the ABP2 application was for CUDA capable Macs. It's not the version you should be running if you don't have a suitable GPU. It's also not the current version. There is a more recent 5.11 CUDA version that I presume has replaced the 5.09 version.

Your recently completed tasks show you are actually using 5.08 quite successfully, even though the app version reported at the bottom of the details page is being misreported as version 5.09. You should fix this up so that you are not sending back incorrect information.

Cheers,
Gary.

Speedy
Joined: 11 Aug 05
Posts: 39
Credit: 22006889
RAC: 521

RE: Over the last weekend

Quote:
Over the last weekend we tried to push up work generation for ABP2 such that we would process the data as fast as we get it from ARECIBO.


Why are you trying to process ABP2 data in real time from ARECIBO? How often is the ABP status updated?
