archae86 wrote: I don't know what the distinction is, nor whether 7-day deadline issuance continues at the current time.
Based on what has sometimes happened in the past, and on the large number of 'fiddly bits' to remember and attend to just before launch, I would guess that someone initially 'forgot' to reset the previous 'test run' 7-day setting back to the full 14 days!
However, it could have just been a bit of 'protection' in case of some unintended 'hiccup' with the production launch.
Cheers,
Gary.
Jim1348 wrote: I usually keep the default buffer though (0.1 + 0.5 days), and sometimes shorter.
A small work size buffer is the perfect way to avoid overfetch issues (which subsequently trigger panic mode) if the initial estimates of crunch time happen to be large underestimates. This can easily occur when a new run is launched. Once the first tasks are returned and the estimates are adjusted by the client, the buffer size can be adjusted upwards to what is desired without the risk of panic mode. With new runs, I always reduce the setting to 0.2 days total until I see how things are going. Ultimately, when things become stable, I try to keep about 3 days total to protect against an outage just after 'close of business' on a Friday (Murphy's Law).
If I want to keep X days of work, I use X days + 0.01 days rather than spreading the total between the two settings. I don't particularly like the idea of the work on hand being able to fluctuate between a 'high water mark' and a 'low water mark'. That's just a personal preference of mine to try to keep a constant amount of work on hand.
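Gary's 'X + 0.01' preference can be seen with a toy model of the two cache settings (this is an illustrative sketch, not the actual BOINC work-fetch code; the function name and thresholds are invented):

```python
def fetch_request_days(on_hand, min_days, extra_days):
    # Toy model of the two cache settings: request more work only when the
    # cache falls below 'min_days', then top it up to 'min_days + extra_days'.
    if on_hand < min_days:
        return (min_days + extra_days) - on_hand
    return 0.0

# Split settings (e.g. 0.5 + 2.5): the cache drains from ~3.0 down to 0.5
# days before a large refill, so work on hand swings between the two marks.
print(fetch_request_days(0.4, 0.5, 2.5))   # large top-up, ~2.6 days

# "X + 0.01" settings (3.0 + 0.01): a refill starts almost as soon as the
# cache dips below 3.0 days, so work on hand stays nearly constant.
print(fetch_request_days(2.9, 3.0, 0.01))  # small top-up, ~0.11 days
```

With the split settings the cache cycles between the 'low water mark' and 'high water mark'; with 'X + 0.01' the two marks nearly coincide, which is the constant-work behaviour described above.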
Cheers,
Gary.
Just got credit for two of these jobs: 30 and 210 credits respectively. I agree with the earlier comment that it is nice to see these WUs finally online. Woohoo!!!
Really!? Why does an mdcg with a run time of 50037 s earn 1000 pts, while a BRP4G-Beta-opencl-ati at 4502 s also earns 1000 pts?
Just curious.
[EDIT] Also, the run times and CPU times for the mdcgs are almost the same (about 50000 s each in the above example).
robl wrote: Why does an mdcg with a run time of 50037 s earn 1000 pts, while a BRP4G-Beta-opencl-ati at 4502 s also earns 1000 pts?
BRP4G uses a GPU coprocessor, and mdcg doesn't. Or am I missing something?
AgentB wrote: BRP4G uses a GPU coprocessor, and mdcg doesn't.
I was thinking (or maybe not): a 13-hour crunch earns 1000 credits while a 1.25-hour crunch also earns 1000 credits. That does not seem equitable. But don't get me wrong, I am not a credit whore.
robl wrote: I was thinking a 13-hour crunch earns 1000 credits while a 1.25-hour crunch earns 1000 credits. Does not seem equitable.
LOL - GPUs are 10x to 200x faster for some applications, so that reflects the difference.
Source: https://boinc.berkeley.edu/wiki/GPU_computing
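For these particular tasks, the run times quoted above land right inside that range (simple arithmetic using the figures from the earlier post):

```python
cpu_runtime_s = 50037  # mdcg run time on the CPU, from the post above
gpu_runtime_s = 4502   # BRP4G-Beta-opencl-ati run time on the GPU

speedup = cpu_runtime_s / gpu_runtime_s
print(f"GPU task finished about {speedup:.1f}x faster")  # ~11.1x
# ~11x sits inside the quoted 10x-200x range, so equal credit for the
# two tasks reflects roughly equal computation, not equal wall time.
```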
archae86 wrote: In my case I had most of my hosts not running CPU work. The newly running Multi-Directed Continuous GW work takes enough longer than the runtime estimated based on my GPU work that several of my hosts fell into "panic mode" shortly after they arrived. While this is not a dire problem, I've chosen to lower the requested work queue from 3.5 to 2.5 days on two machines, hoping to quell it.
Jim1348 wrote: The recent versions of BOINC are supposed to calculate this separately for the GPU and the CPU. It has not been a problem for me with 7.6.33 (Win7 64-bit and Ubuntu 16.xx). I have a wide variety of CPU and GPU work, but the only time I see a panic mode is when there is an initial misestimation of the run times, combined with an unrealistically short deadline. I usually keep the default buffer though (0.1 + 0.5 days), and sometimes shorter.
Is this a change that was only done in the beta releases? If so, it might be the first beta client I'll have installed in a few years.
My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x. GW tasks look like they're in about the right ballpark judging by ETAs on a few that are ~50% done, but the variable runtimes make it harder to estimate there.
DanNeely wrote: My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x.
On my home PC, the BRP4 estimates are around 3x higher than they should be. The FGRPSSE estimates are only around 20% higher. For the GW tasks, the Estimated time shows XX:XX:XX, but for every 5 Elapsed seconds only about 1-2 seconds of Remaining time are deducted.
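That 'Remaining barely moves' symptom is what you'd expect when the total estimate is several times too short. A toy model (not the client's actual remaining-time formula; both totals below are assumed for illustration):

```python
true_total_s = 30000.0  # what the task will actually take (assumed)
est_total_s = 10000.0   # the client's estimate, ~3x too short (assumed)

def remaining_s(elapsed_s):
    # Toy model: displayed Remaining = static estimate scaled by the
    # task's real fraction done.
    fraction_done = elapsed_s / true_total_s
    return est_total_s * (1.0 - fraction_done)

deducted = remaining_s(100.0) - remaining_s(105.0)  # 5 s of elapsed time
print(round(deducted, 2))  # only ~1.67 s of Remaining removed per 5 s
```

With a 3x underestimate, each second of wall time removes only about a third of a second of displayed Remaining, matching the 1-2 seconds per 5 elapsed seen here.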
DanNeely wrote: My 7.6.22 is currently sitting with FGRP ETAs that are too short by a factor of 2, and BRP4 tasks with ETAs too long by 2x.
My BRP4G-Beta-cuda55 estimates are also too long by a factor of 2 on my GTX 750 Ti's (Win7 64-bit, BOINC 7.6.33). They are estimated at 1 hour 12 minutes, but actually take only 36 minutes.
The GW CV are estimated on my i7-4771 at 3 hours 48 minutes, but have usually been taking between 12 and 13 hours thus far, with the shortest being 4 hours. Those estimates might change later, however.
EDIT: The GW CV are now estimated at 8 hours, so they are closing the gap.
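The jump from 3 h 48 min to 8 hours is consistent with the client nudging its estimate toward each completed task's actual runtime. A rough sketch (BOINC's real correction scheme is more involved; the 0.5 weight is invented for illustration):

```python
def update_estimate(estimate_s, actual_s, weight=0.5):
    # Move the runtime estimate partway toward the observed runtime.
    return estimate_s + weight * (actual_s - estimate_s)

est = 3 * 3600 + 48 * 60   # initial GW CV estimate: 3 h 48 min
actual = 12.5 * 3600       # typical observed runtime: ~12.5 h
for _ in range(3):
    est = update_estimate(est, actual)
    print(f"{est / 3600:.2f} h")  # climbs toward 12.5 h
# The very first update already lands near 8 h, matching the EDIT above.
```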