Questions, comments and problems on new Fermi LAT gamma-ray pulsar search

Meteor Wayne
Joined: 15 Mar 05
Posts: 3
Credit: 887,632
RAC: 0

Nice "Goofy" Freudian slip :)

Nice "Goofy" Freudian slip :)

archae86
Joined: 6 Dec 05
Posts: 2,604
Credit: 2,077,887,978
RAC: 2,335,758

RE: The App 0.17 sent out

Quote:
The App 0.17 sent out minutes ago should have most issues of the 0.16 fixed (checkpointing).

My Q9550 host ran Task 237400650 remarkably soon after downloading it. I did not spot it and hand-expedite it; I only saw it when it had completed, and expedited reporting.

The message log reports that the work request which got this task was made at 6:16:02 a.m. today, that file downloads of the executable and required files completed with JPLEF.405 at 6:16:54, and that the task started that same second. Computation finished at 4:11:20 p.m. I can't see the means by which it jumped to the head of the queue. Just curious; it seems a good thing to do, actually.

astro-marwil
Joined: 28 May 05
Posts: 419
Credit: 148,722,774
RAC: 4,888

Hallo archae86! The reason

Hallo archae86!
The reason for this is the date of expiration of the LAT-files during this tests, which became set to 5 days, where as normal project files have 14 days. And as the boincmanager priviliges the files with the most early date of expiration, it jumps up to the forefront. It´s a simple but effective trick by BM to accelerate these tests. When I got my first file of this, I was also very much astonished about this behavior.

Kind regards
Martin

archae86
Joined: 6 Dec 05
Posts: 2,604
Credit: 2,077,887,978
RAC: 2,335,758

RE: And as the

Quote:
And as the BOINC manager prioritizes the files with the earliest deadline, it jumps to the forefront.

Not the whole truth, but part of the truth, I think. On reflection I now believe that the host which picked up this task had a work queue with an estimated completion time of over five days. As a result the system calculated that, left alone, the gamma-ray task would not complete in time, so it immediately started it in high-priority mode and kept it there until it finished.

When it does not forecast deadline risk, the system does not routinely prioritize near deadlines over farther ones; the host which picked up my previous gamma-ray task had a shorter queue and did not start the task until I suspended all the competing work.
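The behavior described above can be reduced to a simple deadline-risk check. The sketch below is only an illustration of the idea, not the actual BOINC client scheduler code; the function name and units are invented for the example:

```python
def runs_high_priority(queue_hours: float, deadline_days: float) -> bool:
    """Illustrative only (not BOINC client code): a task is pushed into
    high-priority mode when the estimated time to drain the existing
    work queue exceeds the time left before the task's deadline."""
    return queue_hours > deadline_days * 24

# A 5-day-deadline LAT task behind a work queue estimated at 6 days
# jumps straight to high priority; behind a 3-day queue it waits its turn.
print(runs_high_priority(6 * 24, 5))  # True  -> runs immediately
print(runs_high_priority(3 * 24, 5))  # False -> waits in the queue
```

Under a rule like this, the short 5-day LAT deadline only matters on hosts whose queue is longer than five days, which would match both observations above.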

archae86
Joined: 6 Dec 05
Posts: 2,604
Credit: 2,077,887,978
RAC: 2,335,758

RE: The App 0.17 sent out

Quote:

The App 0.17 sent out minutes ago should have most issues of the 0.16 fixed (checkpointing).

Results won't validate against those of the 0.16 version, but this should only affect < 5 WUs.

In looking for some more test WUs (simply by trying WU numbers near my own), I spotted some completions reporting use of version 0.18, and in one case a mixed quorum of 0.17 and 0.18 validated.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 1,909
Credit: 130,017,799
RAC: 63,140

RE: RE: The App 0.17 sent

Quote:
Quote:

The App 0.17 sent out minutes ago should have most issues of the 0.16 fixed (checkpointing).

Results won't validate against those of the 0.16 version, but this should only affect < 5 WUs.

In looking for some more test WUs (simply by trying WU numbers near my own), I spotted some completions reporting use of version 0.18, and in one case a mixed quorum of 0.17 and 0.18 validated.


Keep an eye on the applications page. Looks like they had to take a second bite at the Linux build, resulting in v0.18 for Linux coming out a day after the v0.17 for Windows and Mac.

archae86
Joined: 6 Dec 05
Posts: 2,604
Credit: 2,077,887,978
RAC: 2,335,758

RE: RE: The App 0.17 sent

Quote:
Quote:
The App 0.17 sent out minutes ago should have most issues of the 0.16 fixed (checkpointing).
My Q9550 host ran Task 237400650 remarkably soon after it downloaded it.

Quorum partner has reported in and this one validated. Credit was reduced from the rather generous 350 awarded for my test v0.16 unit to a rather parsimonious 200, much less than parity with gravitational wave S6 on my hosts.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 1,909
Credit: 130,017,799
RAC: 63,140

I missed out on the second

I missed out on the second released batch, but got task 237933356 as a resend. Checkpointing is properly reported in the BOINC event log, but still only once per 2.000% of progress - that's about once every 12 minutes on my Q9300. stderr_txt is less verbose, but still 2.5x the maximum size that BOINC will display - the web display starts with the checkpoint for skypoint 30. Runtime is considerably longer than S6Bucket (see valid tasks for computer 1226365), but it yields fewer gollum-points.

robertmiles
Joined: 8 Oct 09
Posts: 122
Credit: 5,946,662
RAC: 1,050

Depends on how that feature

Depends on how that feature is implemented. If it's implemented in a way that requires no more than having each workunit download a file specific to your settings, app_info.xml shouldn't be needed. If it's implemented in a way that requires the OpenCL feature allowing GPUs to run more than one program at a time, it may well have to wait until sometime after a new version of BOINC provides more support for OpenCL GPU workunits.

Quote:

Hi Bernd, et al.

I guess the feature requested is setting the number of parallel CUDA tasks. This would eliminate the need to use app_info.xml entirely.

Michael

Quote:

I'm currently running BRP3 and would like to try the gamma-ray pulsar search. The problem is, I'm using a 560 Ti card with 2048 MB, and it is running 6 tasks simultaneously because I'm using an app_info.xml file to run 6 tasks. Can anybody tell me what I should add to the app_info.xml file to be able to run this CPU task?


Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,515
Credit: 426,307,797
RAC: 197,300

RE: Depends on how that

Quote:
Depends on how that feature is implemented. If it's implemented in a way that requires no more than having each workunit download a file specific to your settings, app_info.xml shouldn't be needed. If it's implemented in a way that requires the OpenCL feature allowing GPUs to run more than one program at a time, it may well have to wait until sometime after a new version of BOINC provides more support for OpenCL GPU workunits.

I think you missed Richard's point: he already *has* an app_info.xml file installed, in order to run several BRP3/4 WUs in parallel. If you are using an app_info.xml file, only the apps listed in that file will get work; all other apps that the project supports, but that are missing from the app_info.xml file, are ignored. So the question was what needs to be included in the app_info.xml file to support the Fermi search.
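For illustration, a CPU app stanza in app_info.xml has the following general shape. The app name and executable file name below are placeholders only; the real names would have to be copied from the entries the project's scheduler sends for this application (e.g. in client_state.xml):

```xml
<app_info>
  <!-- Declare the application (name here is a placeholder) -->
  <app>
    <name>hsgamma_FGRP</name>
  </app>
  <!-- Declare the executable file (filename is a placeholder) -->
  <file_info>
    <name>hsgamma_FGRP_0.17_windows_intelx86.exe</name>
    <executable/>
  </file_info>
  <!-- Tie the app version to its main program -->
  <app_version>
    <app_name>hsgamma_FGRP</app_name>
    <version_num>17</version_num>
    <file_ref>
      <file_name>hsgamma_FGRP_0.17_windows_intelx86.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```

Anyone already running an app_info.xml for BRP3/4 would merge the `<app>`, `<file_info>`, and `<app_version>` elements into their existing `<app_info>` element rather than creating a second file.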

HBE
