Credits per unit dropped from 19.94 to 16.53 on Mac PPC G4, why?

Keck_Komputers
Joined: 18 Jan 05
Posts: 376
Credit: 5,744,955
RAC: 0

RE: I don't fully agree -

Message 44038 in response to message 44034

Quote:

I don't fully agree - equal work should get equal credits, and it's true that faster computers get more credits per hour, but better, more optimised code using SIMD unlocks the performance of the processor: FLOPs rise, so credit should rise accordingly.

Credit is a direct function of FLOPs, so more FLOPs = more credit; this can be achieved by optimisation as well as by brute force, and should be rewarded accordingly.


You are missing something here: most of the optimisations result in fewer FLOPs for the same work. Using special instructions and more efficient code structures reduces the number of operations needed to accomplish a task. So naturally the credit should be reduced.
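To illustrate the point (my own toy example in C, not code from any project app): evaluating a degree-n polynomial naively and with Horner's rule gives the identical result, but the naive version burns roughly n^2/2 extra multiplies. A FLOPs-based credit would pay the naive code more for the same science.

    /* Naive: recomputes x^i for each term, ~i multiplies per term. */
    double poly_naive(const double *c, int n, double x)
    {
        double sum = 0.0;
        for (int i = 0; i <= n; i++) {
            double p = 1.0;
            for (int j = 0; j < i; j++)
                p *= x;                 /* i multiplies to form x^i */
            sum += c[i] * p;            /* 1 multiply + 1 add */
        }
        return sum;
    }

    /* Horner's rule: the same answer with just n multiplies and n adds. */
    double poly_horner(const double *c, int n, double x)
    {
        double sum = c[n];
        for (int i = n - 1; i >= 0; i--)
            sum = sum * x + c[i];       /* 1 multiply + 1 add per step */
        return sum;
    }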

BOINC WIKI

BOINCing since 2002/12/8

Andy Lee Robinson
Joined: 3 Jun 06
Posts: 5
Credit: 860,465
RAC: 0

RE: RE: I don't fully

Message 44039 in response to message 44038

Quote:
Quote:

I don't fully agree - equal work should get equal credits, and it's true that faster computers get more credits per hour, but better, more optimised code using SIMD unlocks the performance of the processor: FLOPs rise, so credit should rise accordingly.

Credit is a direct function of FLOPs, so more FLOPs = more credit; this can be achieved by optimisation as well as by brute force, and should be rewarded accordingly.


You are missing something here: most of the optimisations result in fewer FLOPs for the same work. Using special instructions and more efficient code structures reduces the number of operations needed to accomplish a task. So naturally the credit should be reduced.

Yes, perhaps I'm missing something - I think there are many ways of looking at this. Algorithmic optimisations result in fewer instructions, fewer FLOPs, and hence less credit, so where then is the incentive? (Apart from the science, which is the important thing - though natural human competitiveness really enhances the drive forward.)
Using underutilised capabilities of a processor to enhance performance, performing equivalent FLOPs with fewer SIMD instructions, should perhaps increase credit. So what, then, is a FLOP? SIMD does several operations in parallel, but do we count that as just one op, because it saves many ops?
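To make the counting question concrete, here's a toy sketch in plain C (my own illustration, assuming a hypothetical 4-wide SIMD unit). Both functions perform the same 2*n floating-point operations; the second just groups them the way a vector unit would, issuing roughly a quarter of the instructions.

    /* Scalar dot product: 2*n FLOPs in ~n loop iterations. */
    float dot_scalar(const float *a, const float *b, int n)
    {
        float sum = 0.0f;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];              /* 1 multiply + 1 add */
        return sum;
    }

    /* "SIMD-style" dot product: the same 2*n FLOPs grouped into 4-wide
       steps - on real hardware (AltiVec, SSE) the inner loop would be a
       single vector multiply-add instruction. */
    float dot_simd4(const float *a, const float *b, int n)
    {
        float lane[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        int i;
        for (i = 0; i + 4 <= n; i += 4)
            for (int l = 0; l < 4; l++)      /* one vector op in hardware */
                lane[l] += a[i + l] * b[i + l];
        float sum = lane[0] + lane[1] + lane[2] + lane[3];
        for (; i < n; i++)                   /* leftover elements */
            sum += a[i] * b[i];
        return sum;
    }

Count FLOPs and the two score identically; count instructions and the SIMD version looks four times smaller. That's exactly the ambiguity.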

The whole credit system is a semantic mess anyway! Perhaps a PhD awaits whoever can produce a unified theory of everything and reconcile the credit system across projects, apps, processors, optimisations and all their variations.

Projects aren't equal - they have different algorithms, each with a different mix of instructions.

Perhaps an accurate measurement is just intractable.

Pooh Bear 27
Joined: 20 Mar 05
Posts: 1,376
Credit: 20,312,671
RAC: 0

Another way to look at this.

Another way to look at this: most projects are not-for-profit and need to work on getting grants. If one project were over-compensating with credits and brought those numbers to a potential funder, and project 2 then went to the same people with credits on par with everyone else, how would that skew the view of the people trying to decide where to donate their money?

The same goes for us volunteering our time: if one project over-compensates, more people will gravitate there. The people on the BOINC side of things, by trying to keep the projects in check, make it a level playing field for all.

I hope this sheds some light on why it's important to keep even numbers across the board.

Keck_Komputers
Joined: 18 Jan 05
Posts: 376
Credit: 5,744,955
RAC: 0

RE: Perhaps an accurate

Message 44041 in response to message 44039

Quote:
Perhaps an accurate measurement is just intractable.


You are probably right there. I still think the best idea in theory is benchmarks * time. Unfortunately, in practice this has been just barely satisfactory at its best and a joke at its worst.
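For what it's worth, the classic claim boils down to something like the sketch below (my own paraphrase in C, assuming the usual cobblestone calibration of roughly 100 credits per day of CPU time on a host whose two benchmarks average 1000 MFLOPS/MIPS - treat the constant and the names as approximations, not the actual BOINC symbols):

    /* Rough sketch of "benchmarks * time" claimed credit.
       Assumes ~100 credits per day on a host averaging 1000
       MFLOPS/MIPS across the two benchmarks. */
    double claimed_credit(double whetstone_mflops,  /* FP benchmark */
                          double dhrystone_mips,    /* integer benchmark */
                          double cpu_seconds)       /* reported run time */
    {
        double avg_bench = (whetstone_mflops + dhrystone_mips) / 2.0;
        return (cpu_seconds / 86400.0) * (avg_bench / 1000.0) * 100.0;
    }

Both inputs come straight from the client, which is why it has been so easy to game: inflate either the benchmark or the reported run time and the claim rises with it.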

BOINC WIKI

BOINCing since 2002/12/8

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,322
Credit: 250,932,938
RAC: 37,570

There are two completely

There are two completely different "things" you can give "credit" for:

1. The contribution of a machine, i.e. the time the CPU has spent on that project, the power of the CPU (which is not necessarily reflected in the power consumption of the machine), and maybe also the main memory or disk space. I see two problems with that approach, at least in the current BOINC framework: 1. To be fair, this would mean an individual credit for each machine; even averaging over a single workunit is already an injustice. 2. The information from the client (about run-time, benchmark etc.) can be wrong, be it by accident or by intention. We have seen a lot of this on Einstein@Home, and even more on SETI.

2. The contribution to the project's goal. If all WUs are of equal size, this is reflected in the number of WUs a certain machine has "crunched". On Einstein@Home we have workunits of different sizes, but the contribution to the project goal can easily be measured in the number of "templates" a machine has analyzed. These templates are unequally split up into workunits, but as we grant a constant credit per template, the workunits get different credits. The only thing that needs to be ensured is that credits remain more or less comparable between projects. Note that changing the overall credit level of a project doesn't affect the competition within this project at all (apart from the short transition time, where some caches are more filled with "old work" than others, etc.).
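In code terms, the idea is as simple as this (a toy sketch with made-up names; the per-template rate stands in for whatever calibration keeps the project comparable to others):

    /* Toy sketch of per-template credit: the grant depends only on how
       much of the search a workunit covers, never on which machine
       crunched it or how fast. CREDIT_PER_TEMPLATE is a hypothetical
       calibration constant, not an actual project value. */
    #define CREDIT_PER_TEMPLATE 0.5

    double granted_credit(int templates_in_workunit)
    {
        return templates_in_workunit * CREDIT_PER_TEMPLATE;
    }

Two workunits with different template counts then legitimately earn different credit, and a faster app changes credit per hour but not credit per template.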

Because of the mentioned problems with the first possibility, on Einstein@Home we chose to implement the second (with the additional advantage of reduced redundancy).

BM

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

RE: There are two

Message 44043 in response to message 44042

Quote:

There are two completely different "things" you can give "credit" for:

1. The contribution of a machine, i.e. the time the CPU has spent on that project, the power of the CPU (which is not necessarily reflected in the power consumption of the machine), and maybe also the main memory or disk space. I see two problems with that approach, at least in the current BOINC framework: 1. To be fair, this would mean an individual credit for each machine; even averaging over a single workunit is already an injustice. 2. The information from the client (about run-time, benchmark etc.) can be wrong, be it by accident or by intention. We have seen a lot of this on Einstein@Home, and even more on SETI.

2. The contribution to the project's goal. If all WUs are of equal size, this is reflected in the number of WUs a certain machine has "crunched". On Einstein@Home we have workunits of different sizes, but the contribution to the project goal can easily be measured in the number of "templates" a machine has analyzed. These templates are unequally split up into workunits, but as we grant a constant credit per template, the workunits get different credits. The only thing that needs to be ensured is that credits remain more or less comparable between projects. Note that changing the overall credit level of a project doesn't affect the competition within this project at all (apart from the short transition time, where some caches are more filled with "old work" than others, etc.).

Because of the mentioned problems with the first possibility, on Einstein@Home we chose to implement the second (with the additional advantage of reduced redundancy).

BM

Agreed; however, even with the latest credit adjustment here, which was justified by the release of a further-optimized Einstein app (even though the 30% drop in "value" looked alarming at first! :-) ), EAH and SAH are still at credit parity from what I can tell from my machines' results.

One problem is that people have a tendency to get hung up on one metric or another, especially RAC, which is almost useless as a figure of merit for determining "fair" scoring.

However, one of the things I have always been impressed with is the consistency you and Dr. Allen have tried to maintain with regard to credit since I started running EAH, over a year ago now. At this point, any further reductions in the value of the work would take you below what I had established as my EAH baseline back in S4, before the Akos optimizations hit the scene.

As I said in an earlier post, for better or worse, somebody has to be the "standard" against which other projects are compared for credit purposes. My opinion is that both you and SAH should hold off on further adjustments from the current levels for a while (maybe 3 months or so). This would give the other projects an opportunity to bring their scoring in line with the new procedures for credit that have been implemented in the BOINC framework this year, as well as give us all a chance to collect some more data on the latest credit tweak here. ;-)

Regards,

Alinator

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,322
Credit: 250,932,938
RAC: 37,570

RE: However, one of the

Message 44044 in response to message 44043

Quote:
However, one of the things I have always been impressed with is the consistency you and Dr. Allen have tried to maintain with regard to credit since I started running EAH, over a year ago now. At this point, any further reductions in the value of the work would take you below what I had established as my EAH baseline back in S4, before the Akos optimizations hit the scene.

Thank you for your kind words!

We'll keep an eye on the credit, but from what I see currently, I think we've got it right by now. The problem was that I couldn't say at first how much the optimized apps would speed up the project on average - there are simply too many different machines. So we preferred to be too generous with credit for the first shot, and we underestimated the speedup (I'm still a bit surprised...).

There's some code I'm currently working on for the Macs (Intel and PPC) that will give some speedup there, too, but due to the limited number of these machines in the project it won't make much of a difference to the (average) credit.

BM

Stick
Joined: 24 Feb 05
Posts: 790
Credit: 33,139,331
RAC: 934

RE: RE: Perhaps an

Message 44045 in response to message 44041

Quote:
Quote:
Perhaps an accurate measurement is just intractable.

You are probably right there. I still think the best idea in theory is benchmarks * time. Unfortunately, in practice this has been just barely satisfactory at its best and a joke at its worst.

I agree! And I think the current tinkering has lost track of the objectives. It is "way too focused" on computer performance metrics. The model should be concerned with valuing the work produced (as opposed to accounting for the computer time that is donated). I don't care how you slice it, I just "don't buy" the idea that work produced now (under v4.24) is somehow "less equal to" the work produced a week ago (under v4.02).

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,322
Credit: 250,932,938
RAC: 37,570

RE: RE: RE: Perhaps an

Message 44046 in response to message 44045

Quote:
Quote:
Quote:
Perhaps an accurate measurement is just intractable.

You are probably right there. I still think the best idea in theory is benchmarks * time. Unfortunately, in practice this has been just barely satisfactory at its best and a joke at its worst.

I agree! And I think the current tinkering has lost track of the objectives. It is "way too focused" on computer performance metrics. The model should be concerned with valuing the work produced (as opposed to accounting for the computer time that is donated). I don't care how you slice it, I just "don't buy" the idea that work produced now (under v4.24) is somehow "less equal to" the work produced a week ago (under v4.02).

I guess, then, the question is: would you rather have an inflation of credits or an inflation of work?

BM

Stick
Joined: 24 Feb 05
Posts: 790
Credit: 33,139,331
RAC: 934

RE: I guess, then, the

Message 44047 in response to message 44046

Quote:

I guess, then, the question is: would you rather have an inflation of credits or an inflation of work?

BM

Bernd,

I saw an earlier post which (I think sarcastically) suggested that someone could do a PhD thesis on the credits issue. My immediate thought was: it should be in the field of Economics. (If you think about it, the parallels are many.) So with regard to your question about inflation, I believe we should strive for "price stability" and that our "currency" should be tied to the production of science. If we can expand the economy by improving productivity, that is not inflationary. The recent devaluation has negative implications, though. It certainly discourages the working class, as we now must produce 30% more work for the same pay.

Taking the metaphor a little further: Einstein's management has recently been taking away our incentives for producing more. Think about the enthusiasm for the project that Akos generated during S4 and, more recently, when v4.24 was introduced as a Beta app. We had the opportunity to "soup up our engines" and make our computers work better. It was fun, and we felt like we had a stake in the success of the enterprise. But now we know the truth: we are just laborers, and our wages have just been cut.

Just some observations and a bit of ranting from someone who doesn't really care about credits anyway. ;-)

Stick
