Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Matt White
Joined: 9 Jul 19
Posts: 120
Credit: 280798376
RAC: 0

My GW host finished the first O2MD1 tasks this morning, both valid, with run times of 64,594 and 63,598 seconds (~17.8 hrs). By comparison, the same machine crunches the O2AS tasks in around 113,500 seconds (31.5 hrs). Roughly a 43% decrease in crunch time, if I figured it correctly.
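
As a quick check of that arithmetic, using just the numbers quoted above:

```python
# Crunch-time comparison on the same host: O2AS vs the first O2MD1 CPU tasks.
o2md1_runs = [64594, 63598]  # seconds, the two validated O2MD1 tasks
o2as_run = 113500            # seconds, typical O2AS run time on this host

o2md1_avg = sum(o2md1_runs) / len(o2md1_runs)  # ~64,096 s (~17.8 hrs)
decrease = (1 - o2md1_avg / o2as_run) * 100    # ~43.5 %
print(f"O2MD1 avg {o2md1_avg / 3600:.1f} h: {decrease:.1f}% less crunch time than O2AS")
```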

Clear skies,
Matt
Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

So, is it still scientifically useful to run v1.01 CPU tasks? What did the sensitivity announcement mean for the fate of these tasks?

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117724358963
RAC: 35010406

Matt White wrote:
My GW host finished the first O2MD1 tasks this morning, both valid, with run times of 64,594 and 63,598 seconds (~17.8 hrs). By comparison, the same machine crunches the O2AS tasks in around 113,500 seconds (31.5 hrs). Roughly a 43% decrease in crunch time, if I figured it correctly.

The ones you have completed so far are CPU tasks using the V1.01 app.  Your most recent tasks downloaded are for the V2.0 CPU app.  These will probably take longer.

Since you have both versions still in your cache of work, it looks like the project will be allowing the old-version tasks to run rather than canceling them.  I checked your validated tasks (there are now three of them) and two were validated against the V1.10 GPU app.  So, to answer Richie's (and my own) question about what to do with the former-version stuff, it looks like the answer is to let them run and be validated against a corresponding V1.01 or V1.10 task.

I'll try a little experiment with the V1.10 tasks I still have, currently suspended waiting for an official word on what the staff want us to do with them.  I'll cause a couple to become 'lost' and then see what the scheduler decides to do about it.  Normally, it would immediately replace them using the 'resend lost tasks' mechanism.  If that happens, then the answer is that the project expects you to crunch them - or abort them if you really don't want to crunch them.
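
For the curious, the usual way to make a task 'lost' (a sketch only; the state-file layout is standard BOINC, but the task name below is a made-up placeholder) is to stop the client and remove that task's <result> entry from client_state.xml, so the client stops reporting it as in progress:

```python
# Sketch: make one BOINC task 'lost' by deleting its <result> entry from
# client_state.xml. Stop the BOINC client first and keep a backup copy.
import xml.etree.ElementTree as ET

STATE_FILE = "client_state.xml"  # lives in the BOINC data directory
TASK_NAME = "h1_1234.00_O2C02Cl1In0__O2MD1_example_0"  # placeholder name

tree = ET.parse(STATE_FILE)
root = tree.getroot()  # <client_state> holds <result> entries directly
for result in root.findall("result"):
    name = result.find("name")
    if name is not None and name.text == TASK_NAME:
        root.remove(result)  # the client 'forgets' this task
        break

tree.write(STATE_FILE)
# On the next scheduler contact the server sees the task missing from the
# client's in-progress list; with 'resend lost tasks' enabled it sends a
# replacement (which, as the EDIT below shows, need not be the same app version).
```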

The other possibility is that they won't be resent, in which case the project doesn't really need them to be done.  I'd like to know that, as there's no point crunching something that's only going to be thrown away.

I guess a further possibility is that they might be resent as tasks for the new version :-).  That would be quite cool :-).  However, I'd be surprised if the scheduler was that smart :-).

I'll report back after I try the experiment.

**** EDIT: ****

I created a couple of 'lost' results as mentioned above.  The scheduler promptly sent replacements - but as CPU tasks and NOT the GPU tasks they were before becoming 'lost'.  There was also a download of the V1.02 CPU app (GWold plan class).  So it looks like they are prepared to have all these 'lower sensitivity' tasks done.  I had deliberately excluded the use of the CPU and had disallowed running CPU versions when GPU versions are available, but the scheduler ignored all that and sent the CPU versions anyway.

I aborted the unwanted CPU tasks and then tried to get more GPU tasks for the newly listed 2.01 (GW-opencl-ati) (beta test) entry on the applications page.  The machine still has about 12 hours' worth of FGRPB1G and I want to get some of the V2.01 GPU tasks before the FGRPB1G tasks run out and before I decide what to do with the remaining V1.10 O2MD1 GPU tasks.  My preference is not to waste time on "lower sensitivity" work if I can get the newer stuff.  So far the scheduler keeps claiming "no app version available (einstein_O2MD1)", but I'll keep trying intermittently.

From past experience, it can take a little while to convince the scheduler to be cooperative :-).

Cheers,
Gary.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Looks like the new work units are called O2MD1Gn as opposed to O2MD1G.  I've done a few on the GPU, but the CPU tasks will probably not be crunched until later tonight.

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2911002034
RAC: 2078651

So far, I have about 90 O2MD1 v2.01 pending and running tasks on my RX570 host and about 30 on my RX460 host, with no valids yet, but also no invalids or errors.  Per-task times are about 7 min when run at 4x on the RX570, which are shorter than FGRPB1G times (yay!), but times are very variable. I'm not sure whether that's because of the nature of the O2MD1 data or because most of my v2.01 runs were co-crunched with other GW and binary pulsar GPU work, which run at different task multiplicities. From the few stretches of v2.01-only work I've seen, it looks like there is inherent variability in crunch times. All this is with no CPU-specific tasks running.

EDIT: Running only v2.01 tasks now and times are consistent, given slight differences between the cards. With my RX570s running at 4x, times average ~7 min per task; RX460 cards at 3x, times average ~14 min.
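
For anyone new to running GPU tasks at 2x/3x/4x: the multiplicity isn't a project setting; it's normally set with an app_config.xml in the Einstein@Home project directory. A minimal sketch, assuming the app name is einstein_O2MD1 (that string appears in the scheduler message Gary quoted; check the <name> fields in your own client_state.xml before relying on it):

```xml
<!-- app_config.xml in projects/einstein.phys.uwm.edu/ -->
<app_config>
  <app>
    <name>einstein_O2MD1</name>
    <gpu_versions>
      <gpu_usage>0.25</gpu_usage>  <!-- 0.25 GPU per task = 4 tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- one CPU core reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

Read it in with Options > Read config files in BOINC Manager, or restart the client.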

Ideas are not fixed, nor should they be; we live in model-dependent reality.

solling2
Joined: 20 Nov 14
Posts: 219
Credit: 1577614639
RAC: 19838

cecht wrote:
...O2MD1 v2.01 ... but times are very variable. I'm not sure whether that's because of the nature of the O2MD1 data or whether it is because most of my v2.01 runs were co-crunched with other ...

Same here. Another hint at inherent variability is the estimated GFLOPs figure shown in task properties. Also, granted credits vary: I got 360 for a small one as opposed to 1,000 for a normal one. However, valid is valid. :-)

solling2
Joined: 20 Nov 14
Posts: 219
Credit: 1577614639
RAC: 19838

Archae86 mentioned the other day that Nvidia cards are doing better with O2... tasks than with FGRP tasks on a relative basis. Recalling that observation, I pulled an old 750 Ti out of a dusty corner. With no slot free, I reactivated it via a riser card in a PCIe x1 slot, with the GPU tied to the outside of the open case. A strange construction, but the 750 Ti is now doing O2... (beta) tasks about on par with your RX460, which isn't bad given that the latter has 40% more shaders. :-)
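
The shader comparison checks out against the published specs (RX 460: 896 stream processors; GTX 750 Ti: 640 CUDA cores):

```python
# "40% more shaders": RX 460 vs GTX 750 Ti, using vendor-published counts.
rx460, gtx750ti = 896, 640
print(f"RX 460 has {(rx460 / gtx750ti - 1) * 100:.0f}% more shaders")  # 40%
```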

cecht
Joined: 7 Mar 18
Posts: 1535
Credit: 2911002034
RAC: 2078651

Woohoo! Woke up this morning to see 6 valids for v2.01 tasks (4 tasks from the RX570s, 2 from the RX460s). No invalids, hundreds pending. All were validated with Windows v2.00 GWnew CPU partners. As solling2 mentioned, credits vary.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Anonymous

Same as cecht's and solling2's experience with the v2.01 tasks on Linux with AMD GPUs.

Matt White
Joined: 9 Jul 19
Posts: 120
Credit: 280798376
RAC: 0

Gary Roberts wrote:

The ones you have completed so far are CPU tasks using the V1.01 app.  Your most recent tasks downloaded are for the V2.0 CPU app.  These will probably take longer.

Since you have both versions still in your cache of work, it looks like the project will be allowing the old-version tasks to run rather than canceling them.  I checked your validated tasks (there are now three of them) and two were validated against the V1.10 GPU app.  So, to answer Richie's (and my own) question about what to do with the former-version stuff, it looks like the answer is to let them run and be validated against a corresponding V1.01 or V1.10 task.

I'll try a little experiment with the V1.10 tasks I still have, currently suspended waiting for an official word on what the staff want us to do with them.  I'll cause a couple to become 'lost' and then see what the scheduler decides to do about it.  Normally, it would immediately replace them using the 'resend lost tasks' mechanism.  If that happens, then the answer is that the project expects you to crunch them - or abort them if you really don't want to crunch them.

That explains why the version number didn't change. I guess I didn't understand that they were pumping new data into the old app. I see 4 V2.00 tasks being crunched right now. There are also a few V2.01 GPU tasks in the queue. I'm currently running all the GW GPU work at 3x and this host is dedicated to GW work. Some Gamma-ray tasks snuck in overnight; I aborted those this morning and unchecked the option for accepting non-preferred work.

I'm interested to see how the GPU tasks perform.

Clear skies,
Matt
