Multi-Directed Continuous GW production work begins

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,551
Credit: 79,054,268,418
RAC: 65,151,561

archae86 wrote:
I don't know what the distinction is, nor whether 7-day deadline issuance continues at the current time.

Based on what has sometimes happened in the past, and based on the large number of 'fiddly bits' to remember and attend to just before launch, I would guess that someone initially 'forgot' to reset the previous 'test run' 7-day setting back to the full 14 days!

However, it could have just been a bit of 'protection' in case of some unintended 'hiccup' with the production launch.

 

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,551
Credit: 79,054,268,418
RAC: 65,151,561

Jim1348 wrote:
I usually keep the default buffer though (0.1 + 0.5 days), and sometimes shorter.

A small work buffer is the perfect way to avoid over-fetch issues (which subsequently trigger panic mode) if the initial estimates of crunch time happen to be large underestimates.  This can easily occur when a new run is launched.  Once the first tasks are returned and the estimates are adjusted by the client, the buffer size can be raised to whatever is desired without the risk of panic mode.  With new runs, I always reduce the setting to 0.2 days total until I see how things are going.  Ultimately, when things become stable, I try to keep about 3 days total to protect against an outage just after 'close of business' on a Friday (Murphy's Law).

If I want to keep X days of work, I use X days + 0.01 days rather than spreading the total between the two settings.  I don't particularly like the idea of the work on hand being able to fluctuate between a 'high water mark' and a 'low water mark'.  That's just a personal preference of mine to try to keep a constant amount of work on hand.
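The difference between the two styles can be seen with a toy work-fetch simulation (a minimal sketch with made-up numbers; the real client's scheduler is considerably more involved):

```python
# Toy model of BOINC-style work fetch (NOT the real scheduler logic):
# the client lets cached work drain until it drops below min_buffer,
# then tops it up to min_buffer + extra in one fetch.
def fetch_points(min_buffer, extra, hours=96, step=0.25):
    """Sample the cache level (in days of work) every 'step' hours."""
    cache = min_buffer + extra            # start with a full cache
    levels = []
    t = 0.0
    while t < hours:
        cache -= step / 24.0              # crunch 'step' hours of work
        if cache < min_buffer:            # low-water mark hit: fetch
            cache = min_buffer + extra    # refill to high-water mark
        levels.append(cache)
        t += step
    return levels

# 0.5 + 2.5 days: work on hand swings between roughly 0.5 and 3.0 days.
split = fetch_points(0.5, 2.5)
# 3.0 + 0.01 days: work on hand stays pinned near 3.0 days.
flat = fetch_points(3.0, 0.01)
print(max(split) - min(split))   # large fluctuation (about 2.5 days)
print(max(flat) - min(flat))     # essentially none
```

With the split settings, the cache sawtooths between the low and high water marks; with "X + 0.01" it stays effectively constant at X.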

 

Cheers,
Gary.

robl
Joined: 2 Jan 13
Posts: 1,701
Credit: 1,423,574,067
RAC: 10,113

Just got credit for two of these jobs: 30 and 210 credits respectively.  I agree with the earlier comment that it is nice to see these WUs finally online.  Woohoo!!!

robl
Joined: 2 Jan 13
Posts: 1,701
Credit: 1,423,574,067
RAC: 10,113

Really!?  Why does an mdcg task with a 50037 s run time earn 1000 pts while a BRP4G-Beta-opencl-ati task at 4502 s also earns 1000 pts?

Just curious.  

[EDIT]  Also, the run times and CPU times for the mdcgs are almost the same (about 50000 s each in the above example).

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513,211,304
RAC: 0

robl wrote:
Really!?  Why does an mdcg task with a 50037 s run time earn 1000 pts while a BRP4G-Beta-opencl-ati task at 4502 s also earns 1000 pts?

BRP4G uses a GPU coprocessor, and mdcg doesn't.  Or am I missing something?

robl
Joined: 2 Jan 13
Posts: 1,701
Credit: 1,423,574,067
RAC: 10,113

AgentB wrote:
robl wrote:
Really!?  Why does an mdcg task with a 50037 s run time earn 1000 pts while a BRP4G-Beta-opencl-ati task at 4502 s also earns 1000 pts?

BRP4G uses a GPU coprocessor, and mdcg doesn't.  Or am I missing something?

I was thinking (or maybe not) that a 13 hour crunch earns 1000 credits while a 1.25 hour crunch also earns 1000 credits.  Does not seem equitable.  But don't get me wrong, I am not a credit whore.

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513,211,304
RAC: 0

robl wrote:
I was thinking (or maybe not) that a 13 hour crunch earns 1000 credits while a 1.25 hour crunch also earns 1000 credits.  Does not seem equitable.  But don't get me wrong, I am not a credit whore.

LOL - GPUs are 10x to 200x faster for some applications, so the credit rate reflects that difference.

Source https://boinc.berkeley.edu/wiki/GPU_computing
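In other words, credit is meant to track the computation performed rather than the wall-clock time, so the same 1000-credit award corresponds to very different credit rates. A quick back-of-the-envelope check using the run times quoted above:

```python
# Same credit award, very different credit *rates*.
# Run times are the ones quoted in the thread, in seconds.
cpu_runtime_s = 50037   # GW (mdcg) task on the CPU
gpu_runtime_s = 4502    # BRP4G-Beta-opencl-ati task on the GPU
credit = 1000

cpu_rate = credit / (cpu_runtime_s / 3600)  # credits per hour, CPU
gpu_rate = credit / (gpu_runtime_s / 3600)  # credits per hour, GPU
print(round(cpu_rate), round(gpu_rate))     # roughly 72 vs 800
print(round(gpu_rate / cpu_rate, 1))        # the GPU earns ~11x faster
```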

DanNeely
Joined: 4 Sep 05
Posts: 1,359
Credit: 2,929,215,914
RAC: 2,977,956

Jim1348 wrote:
archae86 wrote:
In my case I had most of my hosts not running CPU work.  The newly running Multi-Directed Continuous GW work takes enough longer than the runtime estimated based on my GPU work that several of my hosts fell into "panic mode" shortly after they arrived. While this is not a dire problem, I've chosen to lower the requested work queue from 3.5 to 2.5 days on two machines, hoping to quell it.

The recent versions of BOINC are supposed to calculate this separately for the GPU and the CPU.  It has not been a problem for me with 7.6.33 (Win7 64-bit and Ubuntu 16.xx).  I have a wide variety of CPU and GPU work, but the only time I see a panic mode is when there is an initial misestimation of the run times, combined with an unrealistically short deadline.  I usually keep the default buffer though (0.1 + 0.5 days), and sometimes shorter.

 

Is this a change that was only done in the beta releases?  If so, it might be the first beta client I'll have installed in a few years.  

 

My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x.    GW tasks look like they're in about the right ballpark judging by ETAs on a few that are ~50% done, but the variable runtimes make it harder to estimate there.
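One plausible explanation for the opposite-direction errors (my understanding of the older estimation scheme, not something stated in the thread): clients of that vintage keep a single duration correction factor (DCF) per project, and one multiplier cannot fix two apps whose raw estimates err in opposite directions. A sketch with hypothetical numbers matching the 2x errors above:

```python
# Older BOINC clients apply ONE per-project duration correction factor:
#     corrected_estimate = raw_estimate * dcf
# With two apps whose raw estimates err in opposite directions,
# no single dcf can make both estimates accurate at once.
raw_estimates = {"FGRP": 2.0, "BRP4": 8.0}   # hours (hypothetical)
actual_times  = {"FGRP": 4.0, "BRP4": 4.0}   # FGRP 2x low, BRP4 2x high

for app in raw_estimates:
    dcf = actual_times[app] / raw_estimates[app]   # fit dcf to this app
    corrected = {a: round(raw_estimates[a] * dcf, 1) for a in raw_estimates}
    # Fixing one app's estimate makes the other app's error worse.
    print(f"dcf fitted to {app}: {dcf} -> {corrected}")
```

The dcf that makes FGRP accurate doubles BRP4's overestimate, and the dcf that makes BRP4 accurate halves FGRP's already-short estimate.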

ku4eto
Joined: 29 Oct 16
Posts: 25
Credit: 152,116
RAC: 0

DanNeely wrote:
Jim1348 wrote:
archae86 wrote:
In my case I had most of my hosts not running CPU work.  The newly running Multi-Directed Continuous GW work takes enough longer than the runtime estimated based on my GPU work that several of my hosts fell into "panic mode" shortly after they arrived. While this is not a dire problem, I've chosen to lower the requested work queue from 3.5 to 2.5 days on two machines, hoping to quell it.

The recent versions of BOINC are supposed to calculate this separately for the GPU and the CPU.  It has not been a problem for me with 7.6.33 (Win7 64-bit and Ubuntu 16.xx).  I have a wide variety of CPU and GPU work, but the only time I see a panic mode is when there is an initial misestimation of the run times, combined with an unrealistically short deadline.  I usually keep the default buffer though (0.1 + 0.5 days), and sometimes shorter.

 

Is this a change that was only done in the beta releases?  If so, it might be the first beta client I'll have installed in a few years.  

 

My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x.    GW tasks look like they're in about the right ballpark judging by ETAs on a few that are ~50% done, but the variable runtimes make it harder to estimate there.

On my home PC, the BRP4 estimates are around 3x higher than they should be.  The FRPSSE estimates are only around 20% higher.  For the GW tasks, the estimate reads XX:XX:XX, but for every 5 seconds of elapsed time, only about 1-2 seconds are deducted from the remaining time.

Jim1348
Joined: 19 Jan 06
Posts: 453
Credit: 233,112,569
RAC: 784

DanNeely wrote:
My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x.    GW tasks look like they're in about the right ballpark judging by ETAs on a few that are ~50% done, but the variable runtimes make it harder to estimate there.

My BRP4G-Beta-cuda55 estimates are too long by a factor of 2 also on my GTX 750 Ti's (Win7 64-bit, BOINC 7.6.33).  They are estimated at 1 hour 12 minutes, but actually take only 36 minutes.

The GW CV tasks are estimated on my i7-4771 at 3 hours 48 minutes, but are usually taking between 12 and 13 hours thus far, with the shortest being 4 hours.  Those estimates might change later, however.

EDIT: The GW CV are now estimated at 8 hours, so they are closing the gap.

 

 
