GW GPU Issue 22 oddity

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,570
Credit: 81,820,059,951
RAC: 67,121,669

Sorry, I've been busy and not able to post an immediate response.

I believe this is something we've seen before with GRP tasks.  Many years ago you asked about a similar observation, and I gave this response, in which I mentioned that Bernd had described the process of 'slicing and dicing' the data to give equally sized workunits, and that there would be "short ends" or "corner cases" (to use his terminology) where the run time and the corresponding credit award would be some fraction of the normal value.

You actually followed my search suggestion at the time and found the Bernd comment and posted a link to it.  Unfortunately, that link is no longer valid - for some odd reason.

If you look through the announcement thread for the current GW search, there is also a comment from Bernd about the difficulty of producing 'equally sized' work for the current search.  The current low crunch time/credit award at a particular issue number has been in every h1_nnnn.nn_... task series all along.  It just happens to be at the _22 or _23 issue number (or thereabouts) at the moment for the particular frequencies being issued.  I've got three of these at the moment.

As an example of previous behaviour, I dug out some data I recorded almost two months ago - May 3rd.  At that time, I was crunching tasks in the h1_1348.15_... and h1_1348.20_... series and the "short end" happened to be at an issue number of _7.  It was easy to spot since the task estimate was very obviously different from all the others.  From my observations, there seems to be a 'rule' for which issue number will be the short end.  I've observed quite a few over the last couple of months and it always seems to be the highest issue number in the lowest DF group.

The table below shows a small selection of data from that day.  I had been testing different groups of three (with combinations of different DF values), and because of the low estimate on the first task of the 0.20 DF group, I chose to stick with all three tasks from the one DF group to get a direct comparison for the low-estimate task.  For comparison, I've included the groups of three that were crunched immediately before and after the group that included the _7 task.  The _7 task is the highest issue number in the 0.20 DF group.

Task Series     Multi  DF Values          Issue Numbers used  Crunch Times (min)
h1_1348.15...    x3    0.75, 0.25, 0.25   _278, _15, _14      45.5, 37.2, 37.6
h1_1348.15...    x3    0.75, 0.25, 0.25   _277, _13, _12      47.2, 38.9, 37.3
h1_1348.15...    x3    0.75, 0.25, 0.25   _276, _11, _10      47.0, 39.0, 39.4
h1_1348.15...    x3    0.75, 0.25, 0.25   _275,  _9,  _8      50.0, 36.4, 37.8
h1_1348.15...    x3    0.20, 0.20, 0.20     _7,  _6,  _5      22.8, 32.6, 32.9
h1_1348.15...    x3    0.20, 0.20, 0.20     _4,  _3,  _2      32.8, 32.6, 32.3

The _7 task was estimated at about two-thirds of all the others, and you can see that the crunch time ended up very close to that fraction of what the other two tasks returned.
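
As a quick sanity check (a minimal sketch using only the crunch times from the table above), the short-end fraction works out like this:

```python
# Sanity check of the "short end" fraction, using only the crunch
# times from the 0.20 DF group in the table above.
short_end = 22.8                # crunch time (min) of the _7 task
normal = (32.6 + 32.9) / 2      # mean of its two companion tasks

fraction = short_end / normal
print(f"short-end fraction: {fraction:.2f}")  # roughly 0.70, close to 2/3
```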

There are other interesting "corner cases" that I'll save for a further message.  You can't spot them from any difference in the estimate but they can have quite an impact when crunching multiple tasks.  I'm still assembling sufficient data to be sure about those.

Cheers,
Gary.

Keith Myers
Joined: 11 Feb 11
Posts: 2,406
Credit: 6,707,522,892
RAC: 23,200,367

Thanks for the post, Gary, and for solving the "mystery".


Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,570
Credit: 81,820,059,951
RAC: 67,121,669

You're most welcome!

Cheers,
Gary.

GWGeorge007
Joined: 8 Jan 18
Posts: 904
Credit: 1,736,253,272
RAC: 6,776,556

FYI, on my AMD 3950X computer I have found one task (issue "_21", task 969720357) out of 1,394 valid tasks which was awarded a credit of 240 and a CPU time of 3.2 min, versus the others with a credit of 2,000 and CPU run times of ~8.9 to ~16.3 min.  This "_21" task took roughly 36% to 20% of the CPU time of those that received a credit of 2,000.
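
Those percentages check out (a quick sketch using only the CPU times quoted above):

```python
# Compare the short task's CPU time (3.2 min) against the fastest
# and slowest of the 2,000-credit tasks quoted above.
short = 3.2
fastest, slowest = 8.9, 16.3
print(f"{short / fastest:.0%} to {short / slowest:.0%}")  # 36% to 20%
```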

I have found no other tasks with a credit award below 2,000 on this computer.

George

A proud member of the O.F.A. (Old Farts Association)

archae86
Joined: 6 Dec 05
Posts: 3,071
Credit: 6,019,069,927
RAC: 2,441,826

My GW host happened to receive an issue 22 task with an unusually low forecast ET compared to standard units, so I took a little detailed data.

Name: h1_1637.25_O2C02Cl4In0__O2MDFV2i_VelaJr1_1637.50Hz_22_2

Forecast ET of all other tasks just before running (at 2X): 21:50
DCF just before running: 1.332246
Forecast ET of this issue 22 task just before running: 0:39

Actual ET for this task: 2:34

DCF just after running: 5.233207
Forecast ET for other tasks just after running: 1:25:47
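
Those figures hang together if one assumes the usual BOINC relation, forecast ET = raw estimate × DCF, with DCF jumping to actual/raw-estimate when a task overruns (a sketch of the arithmetic, not the client's exact logic):

```python
# Reproduce the DCF jump from the figures above, assuming
# forecast ET = raw estimate * DCF (standard BOINC behaviour).
dcf_before = 1.332246
forecast_22 = 39                 # issue 22 forecast: 0:39 -> 39 s
actual_22 = 2 * 60 + 34          # actual ET: 2:34 -> 154 s

raw_estimate = forecast_22 / dcf_before
dcf_after = actual_22 / raw_estimate
print(f"implied DCF: {dcf_after:.3f}")        # close to the observed 5.233207

# The new forecast for the other tasks follows the same relation.
forecast_other = 21 * 60 + 50                 # 21:50 -> 1310 s
new_forecast = (forecast_other / dcf_before) * 5.233207
print(f"new forecast: {new_forecast / 60:.1f} min")  # ~85.8 min, i.e. 1:25:47
```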

When I unsuspended tasks after the test, High Priority naturally kicked in, and one running task was set aside in favor of another with a deadline six seconds sooner.

Credit received was 60 (compared to 2000 for standard GW GPU tasks).

I'd not be surprised if there are some even more extreme cases out there, but this is the most extreme I have encountered.

If one wishes to find these, an easy method for tasks already downloaded but not yet run is to look for an unusually short estimated run time; a sort on the estimated-time column in the task list of either boincmgr or BOINCTasks is convenient for this.  For tasks already run, it is easy to select GW GPU Valid tasks on the user account web page and sort by either the run-time or the granted-credit column.

While mildly entertaining to some of us, I don't claim this is of much importance.  It just might explain a sudden departure of a usually well-behaved system into High Priority mode.  But if left alone, it will probably recover quickly.

archae86
Joined: 6 Dec 05
Posts: 3,071
Credit: 6,019,069,927
RAC: 2,441,826

archae86 wrote:

Credit received was 60 (compared to 2000 for standard GW GPU tasks).

I'd not be surprised if there are some even more extreme cases out there, but this is the most extreme I have encountered.

I happened to notice that my GW machine suddenly went into high priority processing.

The cause was clearly task 979040627

This was an issue 19 task, which completed in an elapsed time of 113 seconds, but that was so far over the estimate that the TDCF moved up enough to trigger HP mode.  Credit received was 40, so lower than the 60 on my previous champion.  DCF moved up at least to 5.67, possibly higher, as I did not look immediately.

As this was an issue 19 task, it illustrates Gary Roberts' point that this behaviour is not found purely at issue 22, though that number was popular near our operating point at the time I started this thread.

Looking through the task list of my quorum partner on this WU showed that the system had processed a yet more severe case (shorter elapsed time, and a lower credit of only 20) with an issue number of 20.
