Duration Correction Factor (DCF) Variation

Nothing But Idle Time
Joined: 24 Aug 05
Posts: 158
Credit: 289204
RAC: 0

RE: TDCF gives ...some feel

Message 84096 in response to message 84095

Quote:
TDCF gives ...some feel for how closely the benchmarks approximate the real time performance of the CPU.


I stated the task runtimes for my old P4/HT: 17.75 hours before and 24.75 hours now; you can see how efficient that machine is!

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: RE: TDCF gives

Message 84097 in response to message 84096

Quote:
Quote:
TDCF gives ...some feel for how closely the benchmarks approximate the real time performance of the CPU.

I stated the task runtimes for my old P4/HT: 17.75 hours before and 24.75 hours now; you can see how efficient that machine is!

Better than mine...

27.39 hours...

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9352143
RAC: 0

RE: RE: RE: TDCF gives

Message 84098 in response to message 84097

Quote:
Quote:
Quote:
TDCF gives ...some feel for how closely the benchmarks approximate the real time performance of the CPU.

I stated the task runtimes for my old P4/HT: 17.75 hours before and 24.75 hours now; you can see how efficient that machine is!

Better than mine...

27.39 hours...

LOL...

OK, point taken. :-)

I should have qualified that by saying if the estimates from the project are reasonably accurate, then you can infer something about the benchmarks. ;-)

One thing is for sure: once a project has gotten its parameters dialed in, a big shift in TDCF almost always indicates something bad happened on the host, usually in the form of a bad benchmark run.
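Roughly, and as a sketch rather than the actual client code (the names here are illustrative, not BOINC's), the relationship looks like this:

    # A rough sketch (not the real BOINC client code) of how benchmark
    # results and the DCF combine into a runtime estimate, and why a bad
    # benchmark run shows up later as a big shift in the DCF.

    def estimated_runtime(task_fpops_est, benchmark_flops, dcf):
        # Base estimate from the project's flops count and the host's
        # measured benchmark, scaled by the current correction factor.
        return task_fpops_est / benchmark_flops * dcf

    def update_dcf(dcf, estimated, actual):
        # The DCF the host should have had for this task.
        ideal = dcf * actual / estimated
        if ideal > dcf:
            return ideal                      # raise at once when tasks overrun
        return dcf + 0.1 * (ideal - dcf)      # otherwise drift down slowly

    # Example: a task estimated at 10 h that actually took 15 h.
    print(update_dcf(1.0, estimated=10.0, actual=15.0))   # -> 1.5

If a benchmark run goes bad and, say, reports half the host's true speed, every base estimate doubles and the DCF then has to drift a long way back down, which is exactly the kind of shift you notice.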

Regarding the increase in runtimes: since it has been noted elsewhere here in NC that the new S5R4 apps are essentially the same as the last S5R3 power apps from a performance POV, the only conclusion you can draw from longer runtimes is that there is more work in the new tasks than before.

The ramifications of that observation are the subject of other threads running here in NC. ;-)

Alinator

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6534
Credit: 284730859
RAC: 105773

RE: ..... conclusion you

Message 84099 in response to message 84098

Quote:
..... conclusion you can draw from longer run times is that there is more work in the new tasks than before .....


Yup, I gather they represent longer integrations (in true IFO time) compared with S5R3. There are queries on the Science board about this. I intend to leave it a week or so and then bravely poke one of the project scientists with an email to invite a brief lowdown on the changes/goals for S5R4. But they're really busy right now. Actually they always are, so 'busier' is more accurate. :-)

However, I feel safe in stating that the intra-project 'science-done per credit-awarded' ratio has gone up.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109407521239
RAC: 35308682

RE: RE: The intention was

Message 84100 in response to message 84087

Quote:
Quote:
The intention was to get DCF closer to 1.0 for S5R4 units ...

I still don't see why it was changed.

It was changed because a few people like me requested Bernd to look into doing something about the wildly inaccurate estimates that were being seen by new hosts joining towards the end of S5R3. I was converting Windows hosts to Linux, which meant that each one was receiving a new hostID. In a particular case, the first task received was estimated to take 60 hours when I knew it would only take around 12. That was OK for me: I could stop BOINC, edit the DCF from 1.0 to 0.2 and bingo - end of problem.
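To put numbers on that (a worked sketch; the value being edited lives in the duration_correction_factor element of the project's section in client_state.xml):

    # Worked numbers from the case above: a fresh hostID starts at the
    # default DCF of 1.0 and shows a 60-hour estimate for a task that
    # actually takes about 12 hours.
    base_estimate_h = 60.0                         # estimate at DCF = 1.0
    true_runtime_h = 12.0
    ideal_dcf = true_runtime_h / base_estimate_h   # 12 / 60 = 0.2
    print(base_estimate_h * ideal_dcf)             # 12.0 - estimate now matches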

A new participant, on the other hand, may well be scared off by seeing the 60-hour estimate, not knowing it was a complete furphy. Even if he decides to let it run, how impressed is he likely to be when he finds out how inaccurate the estimate really is? It might not be too bad for multi-core machines, which can refine the estimate fairly quickly. It's painfully slow for an older single-CPU machine, taking many weeks to get it right without manual intervention.

I'm sure the basic idea behind BOINC is to have DCF much closer to 1.0 than 0.2.

Quote:
to me I tended to use DCF as a measure of the efficiency of the applications on all projects.

To some extent, I believe you are deceiving yourself, particularly if you don't know whether or not anybody has been tweaking the estimates in the WUG (workunit generator) for one project and not in another. One project could be tweaking upwards whilst another might be doing nothing and a third might be tweaking downwards. You would certainly be seeing changes in DCF for two of the three projects, but it would have nothing to do with application efficiency.
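As a sketch of that effect (illustrative numbers only): hold the app's real runtime fixed and let the project retune the flops estimate in the WUG, and the DCF a host settles at moves inversely, even though nothing about the application changed.

    # Fixed real runtime, three different WUG flops estimates: the settled
    # DCF tracks the estimate, not the efficiency of the application.
    actual_runtime_s = 10 * 3600        # the app really takes 10 hours
    benchmark_flops = 2e9               # host benchmark, unchanged
    for fpops_est in (1.8e13, 3.6e13, 7.2e13):
        base_estimate_s = fpops_est / benchmark_flops
        print(fpops_est, actual_runtime_s / base_estimate_s)
    # -> settled DCFs of 4.0, 2.0 and 1.0 from the same host and app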

Quote:
The DCFs for SETI (default app), Einstein (S5R3 default app) and CPDN on my computers were in the same ballpark.

I would suggest coincidence rather than deliberate design. Personally, I think that projects should try to keep the estimates built into the task as much in line with reality as possible, so that the DCF remains near to 1.0. I'm not suggesting that this is easy or even half achievable, considering the variety of platforms and the variety of tasks running on those platforms. If they could even narrow the range to say 0.6 - 1.5, that would be good enough to prevent the problems that a wildly inaccurate estimate can cause.

Cheers,
Gary.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6534
Credit: 284730859
RAC: 105773

Bernd has answered the key

Bernd has answered the key point on S5R4 here.

Quote:
The S5 LSC Science run lasted about two and a half years; when we started S5R2/3, only the data from the first year was available. S5R4 is looking at the rest of the data from S5, where the sensitivity of the detectors was higher than in the first year. We also got more usable data out of the second year, which improves the sensitivity of the analysis. That, however, also means that the data volume per workunit has increased, and with it the amount of computational work needed to process each one.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686138814
RAC: 555564

As for the DCF, my C2D under

As for the DCF, my C2D under Linux has now completed a lot of S5R4 units and its DCF is ca. 1.1. Not that bad. So at least for people now joining the project (and starting with a 1.0 default DCF), the initial estimates should be much better than they were before.

CU
Bikeman

Winterknight
Joined: 4 Jun 05
Posts: 1221
Credit: 312342927
RAC: 647845

RE: As for the DCF, my C2D

Message 84103 in response to message 84102

Quote:

As for the DCF, my C2D under Linux has now completed a lot of S5R4 units and its DCF is ca. 1.1. Not that bad. So at least for people now joining the project (and starting with a 1.0 default DCF), the initial estimates should be much better than they were before.

CU
Bikeman


Maybe fine for Linux, but for Windows my quad is at 1.55. And as there are many more Windows users, it is not very accurate. In fact, it's not much more accurate than S5R3, just off in the opposite direction.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: RE: As for the DCF,

Message 84104 in response to message 84103

Quote:
Quote:

As for the DCF, my C2D under Linux has now completed a lot of S5R4 units and its DCF is ca. 1.1. Not that bad. So at least for people now joining the project (and starting with a 1.0 default DCF), the initial estimates should be much better than they were before.

CU
Bikeman


Maybe fine for Linux, but for Windows my quad is at 1.55. And as there are many more Windows users, it is not very accurate. In fact, it's not much more accurate than S5R3, just off in the opposite direction.

Here's what my P4 is showing...

Average CPU efficiency 0.958477
Task duration correction factor 1.999301

Now, my AMD 3700+, which is still doing R3 work:

Average CPU efficiency 0.976424
Task duration correction factor 0.291247

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686138814
RAC: 555564

RE: RE: As for the DCF,

Message 84105 in response to message 84103

Quote:
Quote:

As for the DCF, my C2D under Linux has now completed a lot of S5R4 units and its DCF is ca. 1.1. Not that bad. So at least for people now joining the project (and starting with a 1.0 default DCF), the initial estimates should be much better than they were before.

CU
Bikeman


Maybe fine for Linux, but for Windows my quad is at 1.55. And as there are many more Windows users, it is not very accurate. In fact, it's not much more accurate than S5R3, just off in the opposite direction.

It's much better for me than in S5R3.

A DCF of 1.55 means the estimated time for newcomers will be too short by a factor of 1.55. For S5R3, my C2D had a DCF of 0.14 (!), meaning that the predicted runtime for newcomers with this computer would be too long by a factor of about 7! So it's much closer now. Can it really be that different for Windows?
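In other words (a quick sketch of the arithmetic; the function is just for illustration):

    # How far off the default estimate is for a newcomer, who starts at
    # DCF = 1.0: above 1.0, tasks overrun the estimate; below it, they
    # finish early.
    def estimate_error(settled_dcf):
        if settled_dcf >= 1.0:
            return settled_dcf, "too short"
        return 1.0 / settled_dcf, "too long"

    print(estimate_error(1.55))   # (1.55, 'too short') - the Windows quad
    print(estimate_error(0.14))   # (~7.14, 'too long') - my S5R3 C2D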

CU
Bikeman
