How long should it take?

Allen
Joined: 23 Jan 06
Posts: 75
Credit: 656297928
RAC: 1212952

I'll keep that in mind, Keith, thanks.  I'll run what I have for now, and after all the dust settles I'll try cutting back on the number of WUs I run on both GPUs and see what happens.

I found on Milkyway that if I ran three at a time, I got the best production there, so.....

Thanks all!

Allen

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18716123102
RAC: 6371259

Sadly, no GPUs work on Milkyway anymore.  That project was one where you could really load up the multiplicity on GPU WUs and the card wouldn't even be breathing heavy.


Allen
Joined: 23 Jan 06
Posts: 75
Credit: 656297928
RAC: 1212952

Hi Keith,

Well, the dust has settled, and it seems like my numbers and machines are doing better.

But I have a couple of machines that are acting oddly.

They all have the same problem, it seems. 

On the machine that has two RX 560s, one of the GPUs seems to get a terrible count going in the BOINC manager's Elapsed/Remaining columns.  The remaining time starts counting up, way up, but if I shut down BOINC and start it up again, it shows reasonable times without losing any of the accrued processing time.

Any thoughts?

Btw, I have shut down almost all CPU work on Einstein now, just so you know.

Allen

Keith Myers
Joined: 11 Feb 11
Posts: 4964
Credit: 18716123102
RAC: 6371259

Well, you've been crunching enough to develop an accurate APR (average processing rate), that is, if Einstein actually used that metric in their server software, so the estimated times should be accurate.  It normally takes 11 validated tasks to develop an accurate APR for a host on the standard BOINC server software.

But when a project doesn't use the normal mechanism and implements its own choices, you can't go by that.

Einstein still uses the old DCF (duration correction factor) host mechanism.  Unfortunately, that applies to ALL applications and tasks on the project; only one DCF value is allowed per host across all applications.  So the DCF falls when your host runs some fast-running tasks of one type and then moves on to long-running tasks of a different type.  The DCF that was calculated for the fast tasks is now applied to the long-running tasks, and it is too low.  The DCF directly scales the remaining and estimated times for a task.

So the times are out of whack for the long-running ones until enough of them have been run to calculate a new DCF that is more in line with the actual value, which then slowly pulls the estimated times back toward reality.

Rinse and repeat for every new task type the host runs.  The only way out of this situation is to run only one type of task on a host, so the DCF stabilizes and produces accurate estimated times and deadlines.
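
To see how that skew plays out, here's a toy back-of-the-envelope simulation in Python.  The 10% smoothing step is just an assumption for illustration; the real client update rule is more involved (it jumps up quickly when a task overruns and decays slowly otherwise), but the direction of the error is the same:

    # Toy model: one DCF shared by a fast and a slow task type.
    def update_dcf(dcf, actual_runtime, base_estimate, rate=0.1):
        # Nudge the DCF toward the observed actual/estimate ratio.
        ratio = actual_runtime / base_estimate
        return dcf + rate * (ratio - dcf)

    dcf = 1.0

    # A run of fast tasks: raw server estimate 1000 s, actual 500 s.
    for _ in range(10):
        dcf = update_dcf(dcf, actual_runtime=500, base_estimate=1000)

    print(f"DCF after the fast tasks: {dcf:.2f}")        # ~0.67

    # A long-running task type arrives: raw estimate 1000 s, actual ~4000 s.
    print(f"Displayed estimate: {1000 * dcf:.0f} s vs. ~4000 s actual")

With the DCF dragged down to about 0.67 by the fast tasks, the long task shows an estimate of roughly 670 s against ~4000 s of actual runtime, so the remaining time climbs as the task runs.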

You can also manipulate the DCF in the client_state.xml file to force it to a low value like 0.01.  This overrides what the server calculates and can keep tasks from getting forced into high-priority mode and pushing other work off the host.
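
For reference, the value lives in the matching <project> block of client_state.xml; the block would look something like the sketch below (stop the client before editing, since it rewrites the file on shutdown, and keep a backup; the other fields in the block will vary by host):

    <project>
        <master_url>https://einsteinathome.org/</master_url>
        ...
        <duration_correction_factor>0.010000</duration_correction_factor>
    </project>

Each attached project has its own block, so set it in the one for Einstein.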


Allen
Joined: 23 Jan 06
Posts: 75
Credit: 656297928
RAC: 1212952

Okay, so I guess no damage done, and I'll wait for it to sort itself out.  If it doesn't after a week, I'll have a look at that client_state.xml file and see if I can reset it.

As always, thanks a bunch!

Allen
