Woohoo! I got my first S5R4 task. :-D

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9352143
RAC: 0

Agreed, he was comparing the

Agreed, he was comparing the basis to the rate. They are not the same thing.

The basis in Cobblestones is a unit of work. The rate in CPCS is a unit of power.

Therefore, if you were running the SSE app on R3, then you are still running it on R4 by default. So even though your rate should drop by the 28% correction factor for that host, the basis of how much a task is worth in Cobblestones should not, since R4 tasks contain more work than R3 tasks.
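
To put the two units side by side, here is a minimal Python sketch; every number in it is invented purely to show the relationship credit-per-task = rate × runtime, and nothing in it is an official project figure:

    # Basis vs. rate, with invented numbers (nothing here is an official figure).
    # basis = Cobblestones granted for one task: an amount of work, fixed by definition.
    # rate  = CPCS, Cobblestones per CPU second: a power-like quantity, host dependent.
    basis = 200.0                 # invented per-task credit
    runtime = 40 * 3600           # invented runtime for one host, in seconds

    rate = basis / runtime        # that host's CPCS on this task

    # Scaling the rate (e.g. a 28% correction aimed at an optimised app) is not the
    # same operation as scaling the basis, because the runtime sits between the two.
    print(f"basis = {basis:.1f} Cobblestones/task, rate = {rate:.6f} Cobblestones/s")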

@Winterknight:

Let's say for the sake of argument that you have a host which can only do FPU or MMX. The correction for the MMX efficiency increase was made at the start of S5.

So you're telling me that it has somehow been able to improve on that in the interim and is now claiming more than it should?

This is the basic problem of using any set of hosts in the field to set the basis on. It violates the definition of the Cobblestone, which is supposed to be a fixed and constant entity. The only 'machine' which can set the basis is the reference computer, even though it's a hypothetical construct.

Alinator

Winterknight
Joined: 4 Jun 05
Posts: 1481
Credit: 386279073
RAC: 465190

RE: RE: As you are so

Message 83162 in response to message 83160

Quote:
Quote:

As you are so concerned, why don't you find a host of that spec? At the moment none of my wingmen are using those families of CPU.

:sigh:

Quote:

And I don't think Eric's method has been implemented here yet, because if you calculate the reduction on the original S5R4 granted credit you will see 194.48 / 237.53 * 100 = 81.88%, right on the button for a 28% reduction.

A 28% reduction from 100% is 72% of the baseline/prior value, not 81.88%, or approximately 82%. 82% would be an 18% reduction from a baseline of 100% (or 1, depending on how you reference your baseline).


You're right, that's what comes of trying to do things in a hurry. When I did it much earlier I factored in the 11% increase in processing time from S5R3 to S5R4 using the default apps. So it should be (194.48 * 35.8) / (237.53 * 40.1) ≈ 0.73.
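
For anyone who wants to check the arithmetic, here is a minimal Python sketch. The credits and runtimes are simply the figures quoted in this exchange, used as examples rather than official values:

    # Compare the per-task credit ratio with the credit-rate ratio.
    # 237.53 / 194.48 = example R3 / R4 per-task credits quoted above;
    # 35.8 h / 40.1 h = example runtimes for the default apps quoted above.
    r3_credit, r4_credit = 237.53, 194.48
    r3_hours, r4_hours = 35.8, 40.1

    # Naive ratio of per-task credit (ignores the longer R4 runtime):
    credit_ratio = r4_credit / r3_credit          # ~0.82, i.e. roughly an 18% cut

    # Ratio of credit rates (credit per hour), which lands near the 28%
    # correction factor being discussed in this thread:
    rate_ratio = (r4_credit / r4_hours) / (r3_credit / r3_hours)   # ~0.73

    print(f"per-task ratio: {credit_ratio:.3f}  ({(1 - credit_ratio):.1%} reduction)")
    print(f"rate ratio:     {rate_ratio:.3f}  ({(1 - rate_ratio):.1%} reduction)")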

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: Agreed, he was

Message 83163 in response to message 83161

Quote:

Agreed, he was comparing the basis to the rate. They are not the same thing.

The basis in Cobblestones is a unit of work. The rate in CPCS is a unit of power.

Therefore, if you were running the SSE app on R3, then you are still running it on R4 by default. So even though your rate should drop by the 28% correction factor for that host, the basis of how much a task is worth in Cobblestones should not, since R4 tasks contain more work than R3 tasks.

@Winterknight:

Let's say for the sake of argument that you have a host which can only do FPU or MMX. The correction for the MMX efficiency increase was made at the start of S5.

So you're telling me that it has somehow been able to improve on that in the interim and is now claiming more than it should?

Also, it is stated that the following are both true:

1. More work is being done per task.
2. Each task takes longer to run.

With the credit decrease to come up with some mythical "parity", the implied "value" of work performed here decreased. Even though those of us in the know actually know that we are doing more science, to others it appears as though your work isn't valued as much as it used to be.

The cpcs is decreased naturally by the extension of runtimes alone. The only problem with cross-project parity is...runtimes are not the same across tasks, much less across projects.

It is far easier to "sell" the idea of "getting less" if it only takes longer to get the same amount. You can then justify the decrease in cpcs by stating that more science is being performed. However, when you tell someone that more science is being performed AND total credit will be less, that becomes a much harder "sell"...

@Others:

PLEASE do not come at me with a "it should be about the science, not the credits" angle. If you do, then let's not be timid about that stance and abolish the credit system entirely. Stop being wishy washy about it. Stand firm on the conviction of your beliefs...

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9352143
RAC: 0

RE: Also, it is stated

Message 83164 in response to message 83163

Quote:

Also, it is stated that the following are both true:

1. More work is being done per task.
2. Each task takes longer to run.

With the credit decrease to come up with some mythical "parity", the implied "value" of work performed here decreased. Even though those of us in the know actually know that we are doing more science, to others it appears as though your work isn't valued as much as it used to be.

The cpcs is decreased naturally by the extension of runtimes alone. The only problem with cross-project parity is...runtimes are not the same across tasks, much less across projects.

Hmmm...

I'm not sure that came out the way you wanted (or I just didn't read it right).

Let's look at it a different way:

Say the team decided not to make a credit correction right now at the start of R4, and we will ignore the known sequence number variation within a template for the moment.

Then for the case of using the power app on R3 compared to now on R4 there should not be any change in the rate (CPCS) for the host. The extended runtime would be reflected as an increase in the granted credit for each individual task at comparable templates, because as we now know, there is more work in each task and the basis is supposed to be constant by definition.

IOWs, the host doesn't get less 'powerful' just because there is more work to do in the tasks.
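
In numbers (invented ones, chosen only for illustration), that 'no correction' scenario would look something like this:

    # Invented numbers: if the basis simply tracked the extra work in R4 tasks,
    # the host's rate (CPCS) would stay put and only credit-per-task would rise.
    cpcs = 5.0 / 3600            # host earns 5 Cobblestones per CPU hour, say
    r3_runtime = 36 * 3600       # an R3 task takes this host 36 hours
    r4_runtime = 40 * 3600       # the fatter R4 task takes 40 hours

    r3_credit = cpcs * r3_runtime    # 180 Cobblestones per R3 task
    r4_credit = cpcs * r4_runtime    # 200 Cobblestones per R4 task

    # Same rate, bigger tasks: the host hasn't become any less "powerful",
    # it just gets paid more per (larger) task.
    print(f"rate: {cpcs * 3600:.1f} c/h unchanged; "
          f"credit per task: {r3_credit:.0f} -> {r4_credit:.0f}")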

Stated that way, and taking into account that (the way it works now) the only way to effect a rate change for a certain subset of hosts is to change the basis for the whole population, your preceding and following comments follow better logically, at least to me. ;-)

If that was what you were intending, then we're still on the same page. :-)

Quote:

It is far easier to "sell" the idea of "getting less" if it only takes longer to get the same amount. You can then justify the decrease in cpcs by stating that more science is being performed. However, when you tell someone that more science is being performed AND total credit will be less, that becomes a much harder "sell"...

@Others:

PLEASE do not come at me with a "it should be about the science, not the credits" angle. If you do, then let's not be timid about that stance and abolish the credit system entirely. Stop being wishy washy about it. Stand firm on the conviction of your beliefs...

LOL...

Amen!!! No argument there. I've said it before and I'll say it again:

Whereas the objective of the individual projects is the specific science they are doing, which has worth in and of itself regardless of how you score it...

The science of BOINC itself has everything to do with credit, and with answering the question of how you allow a widely disparate population of hosts and projects to participate and reward them all fairly and consistently.

Alinator

Odd-Rod
Joined: 15 Mar 05
Posts: 38
Credit: 19755165
RAC: 48405

RE: Hi Rod, Welcome to the

Message 83165 in response to message 83153

Quote:

Hi Rod,
Welcome to the message boards. I see you have been a member for quite a while but this is your first post here.

I also notice that you are in both the "yoyo@home" projects - and an impressive list of other projects as well :-).


Thanks, Gary. That 'impressive list of projects' is the reason I haven't posted here. Einstein's Long Term Debt (LTD) means that my hosts aren't requesting work here at the moment, and I tend to visit forums when I see work being crunched for those projects (ok, I know it's actually fora, but forums sounds more like the plural of forum!). But I like your style of posting - polite, yet correcting if necessary.

The whole BOINC crunching 'thing' is my hobby; that's why I attach to every project I find, but slowly I trim some off.

Well, that seems to be off topic, but then again with a thread title of "Woohoo! I got my first S5R4 task. :-D" almost anything could be regarded as off-topic ;)

Quote:

I hope you enjoy your stay here and many thanks for your kind words. It's always nice to receive positive feedback like yours.

Thanks, and I'm sure I will be posting again. Now to adjust resources and also detach some hosts from projects so that Einstein can get a chance!

Regards
Rod

Odd-Rod
Joined: 15 Mar 05
Posts: 38
Credit: 19755165
RAC: 48405

RE: RE: I made a complete

Message 83166 in response to message 83156

Quote:
Quote:
I made a complete pig's ear of the transition from S5R3 to S5R4 on my timing run machine.

Well at least it was a complete one! Keep at it! I look forward to viewing a beautiful pig's ear in due course ...... :-)

Cheers, Mike.


LOVL (Laugh Out Very Loud)! I've never heard of that one, but it's what I did when I read this!

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9352143
RAC: 0

LOL... Or perhaps

LOL...

Or perhaps everything could be considered on topic...

Conversation here is usually allowed to be free flowing, as long as it stays civil. ;-)

Fortunately for Mike, becoming uncivil is almost never a problem here, or at least that's the way it seems to me.

In any event, welcome to the family of EAH crunchers and the NC community.

Alinator

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: I'm not sure that came

Message 83168 in response to message 83164

Quote:

I'm not sure that came out the way you wanted (or I just didn't read it right).

Or perhaps I assumed that people could follow my train of thought without maximum verbosity... ;-)

What I'm thinking is that the credit per task could've been recalibrated to be nearly identical to the S5R3 run. A 237.53-credit R3 task would be replaced by a 237.53-credit R4 task. If the R4 task takes longer, then the cpcs is reduced.

IOW, what I'm thinking more and more is that what's "fair" inside of a single project is a flat / fixed amount. This is the only real way to guarantee that "Host A", a host that processed 100 tasks at 100 credits apiece in 2005, and "Host B", a host that processed 100 tasks at 100 credits apiece in 2008 at a higher level of science, would be at the same "rank" as far as tasks processed / credit rewarded.

One could argue that "Host B" should be higher, since they did more total science. However, the trend over time has been deflationary within a project. I'm willing to change my view on that if someone can show me that all of the decreases in credit have been offset for all hosts processing at any two given points in time by the "performance index" of the application. I doubt someone can do that, but I'm willing to entertain the thought that all the ups and downs just happened to "average out" for everyone so that any two hosts selected at random will be correctly ranked in the current paradigm by their credits. Otherwise, I would guess a host that has credits of, say, 8,000 has a higher "science ranking" than a host with 10,000 credits.

Oh, wait, we don't have a "science ranking" category. We have rankings based on credits or RAC.

:scratches head:

How to determine which host is higher in the standings?????

If you can come up with a more eloquent way of stating this conundrum *AND* be able to sell the idea to the CPP folks *AND* get the CPP folks to understand that all this is is their "cross-project" view merely refocused upon the "intra-project" view, THEN convince them that cross-project gerrymandering has a side-effect of skewing intra-project statistics, THEN maybe we could get down to coming up with a real and permanent solution....
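
To make the skew concrete, here is a small sketch with invented figures; the per-task credits, the cut, and the "science units" are all hypothetical, chosen only to echo the 8,000 vs. 10,000 example above:

    # Hypothetical illustration of intra-project credit deflation skewing rankings.
    host_a_tasks, credit_per_task_2005 = 100, 100.0    # invented 2005 figures
    host_b_tasks, credit_per_task_2008 = 100, 80.0     # invented post-cut figure
    science_2005, science_2008 = 1.0, 1.3              # arbitrary "science units" per task

    host_a_credit = host_a_tasks * credit_per_task_2005    # 10,000 credits
    host_b_credit = host_b_tasks * credit_per_task_2008    #  8,000 credits
    host_a_science = host_a_tasks * science_2005           # 100 science units
    host_b_science = host_b_tasks * science_2008           # 130 science units

    # Ranked by credit, Host A leads; ranked by science delivered, Host B leads.
    print(f"credit:  A = {host_a_credit:,.0f}, B = {host_b_credit:,.0f}")
    print(f"science: A = {host_a_science:.0f}, B = {host_b_science:.0f}")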

Winterknight
Joined: 4 Jun 05
Posts: 1481
Credit: 386279073
RAC: 465190

RE: RE: I'm not sure

Message 83169 in response to message 83168

Quote:
Quote:

I'm not sure that came out the way you wanted (or I just didn't read it right).

Or perhaps I assumed that people could follow my train of thought without maximum verbosity... ;-)

What I'm thinking is that the credit per task could've been recalibrated to be nearly identical to the S5R3 run. A 237.53-credit R3 task would be replaced by a 237.53-credit R4 task. If the R4 task takes longer, then the cpcs is reduced.

IOW, what I'm thinking more and more is that what's "fair" inside of a single project is a flat / fixed amount. This is the only real way to guarantee that "Host A", a host that processed 100 tasks at 100 credits apiece in 2005, and "Host B", a host that processed 100 tasks at 100 credits apiece in 2008 at a higher level of science, would be at the same "rank" as far as tasks processed / credit rewarded.

One could argue that "Host B" should be higher, since they did more total science. However, the trend over time has been deflationary within a project. I'm willing to change my view on that if someone can show me that all of the decreases in credit have been offset for all hosts processing at any two given points in time by the "performance index" of the application. I doubt someone can do that, but I'm willing to entertain the thought that all the ups and downs just happened to "average out" for everyone so that any two hosts selected at random will be correctly ranked in the current paradigm by their credits. Otherwise, I would guess a host that has credits of, say, 8,000 has a higher "science ranking" than a host with 10,000 credits.

Oh, wait, we don't have a "science ranking" category. We have rankings based on credits or RAC.

:scratches head:

How to determine which host is higher in the standings?????

If you can come up with a more eloquent way of stating this conundrum *AND* be able to sell the idea to the CPP folks *AND* get the CPP folks to understand that all this is is their "cross-project" view merely refocused upon the "intra-project" view, THEN convince them that cross-project gerrymandering has a side-effect of skewing intra-project statistics, THEN maybe we could get down to coming up with a real and permanent solution....


Surely over time since Einstein started on BOINC the trend has been inflationary, or else they wouldn't be trying to reduce credits now.

And the same is true on Seti, where if you look in the NC archives you will find the evidence - if you have a spare month, that is.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 324490920
RAC: 174183

RE: Or perhaps everything

Message 83170 in response to message 83167

Quote:
Or perhaps everything could be considered on topic...


As pointed out: 'Woohoo! I got my first S5R4 task. :-D' is rather inclusive. It'd fit in any of the forums here, actually. :-)

Quote:
Conversation here is usually allowed to be free flowing, as long as it stays civil. ;-)


Yup.

Quote:
Fortunately for Mike, becoming uncivil is almost never a problem here, or at least that's the way it seems to me.


Touch wood. Or put another way .... if it's that bad you won't see it, for long. :-)

It's probably worth mentioning that if a post is moderated [or 'hidden' to be exact as mods see extra mod-related buttons on the page ] the user is emailed with a categorisation of the reason and at our option an explanation. Most of the time we get it back to ourselves as we are the ones doing moving/un-double-posting/stickying/unstickying/locking/title-editing etc. There are a couple of advantages to this, the obvious one being the confining of, then the dousing of flames. [ There's an old saw - Q: If you could take only one thing, what would you take out of your house if it was burning? A: The fire!! ]. It's a bit like getting pulled over by a traffic cop - moved out of what could well be busy lanes.

I mention this not to encourage any takers for the experience, but because I recently read on another project a 'difficult exchange' b/w a mod and a contributor over a simple deletion of a double post and misunderstanding of same. So transparency of mechanism is advantageous. And if you do get pulled over, it's not personal. It'll feel like it, but it really and truly isn't.

Mostly it's because of the appearance of the post ( as intent is harder to gauge ), i.e. The Duck Test [ if it looks like one, quacks like one, waddles like one - it is one ]. It sounds contradictory, but connotation ( implied sense ) can be just as important as denotation ( literal sense ) here. For instance, trying to take refuge in rhetorical constructs like 'I'm not calling you a bloody idiot, even though I could have' will not fly. This type of attempt, and others, to subvert moderation policy within the cloth of innocence gets a punt.

Transparency runs the other way too, as on the forum index page and to your left when composing a post are our rules. So most, if any, discussion when 'pulled over' is re-displaying that. For instance the common trolling tactics of endless quibbling, must-have-the-last-word, I-said-it-you-prove-me-wrong, covert context shifts, swings between literal and implied meanings, martyrdom, unreasonable analogy, feigned indifference, feigned innocence, sotto voce ( under the breath ) - to name but a few - get very short shrift indeed. There are grey areas, and we can and have got it wrong ourselves - known in soccer as an 'own goal'. :-)

Usually when there's a significant change in the project, like the current changes, there's an increase in the general hubbub. I use this opportunity of increased focus to get this type of explanation in. I trust it is appreciated for the purpose for which it is intended - not so much a Riot Act reading as a friendly nod. :-)

Quote:
In any event, welcome to the family of EAH crunchers and the NC community.


Ditto.

As my fellow mods are possibly groaning from the loss of prized secrets, I'll shut up now.
Quick Igor, saddle the horses - it's the pitchforks and the firebrands all over again ..... :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
