A story of success

Astro
Joined: 18 Jan 05
Posts: 257
Credit: 1000560
RAC: 0

For full disclosure (if anyone goes back to look at my boincstats pie charts), I have set all projects but Einstein to NNW/NNT, so I only have one project to run dry when, hopefully, BoincSimap starts having work in the next couple of days. So ... after tomorrow's update, the posted images will no longer reflect the given resource shares.

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

What Astro describes is a simplified form of an "exchange"-based calibration technique I described in what is probably tedious detail in the old UBW. The intent was to stop this credit inflation/deflation/fluctuation and allow the "system" to self-adjust ... it had anti-cheating and redundancy and a bunch of other things built in ... but the idea was that we would, in fact, benchmark the machines by running the science applications ...

Since a benchmark is only a synthetic item of work, the best way to "benchmark" a system is to run the actual workloads on the machine and then determine efficiency from that data.

So, I can run EAH on the 4 PCs I have here at hand and then see which one runs them most effectively. From there I can know which system runs the work fastest.

Obviously, factors like the amount of time a work unit takes to complete are NOT constant, even on the same system, so you have to get into averaging ...
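
As a rough sketch of that averaging idea, in Python, with made-up runtimes rather than real EAH numbers:

    from statistics import mean

    # Hypothetical hours taken by each PC for the same small set of
    # reference work units; these numbers are invented, not real data.
    runtimes_hours = {
        "PC-1": [6.2, 5.9, 6.4],
        "PC-2": [9.8, 10.1, 9.5],
        "PC-3": [4.1, 4.3, 4.0],
        "PC-4": [7.7, 8.0, 7.5],
    }

    for machine, hours in sorted(runtimes_hours.items(), key=lambda kv: mean(kv[1])):
        print(f"{machine}: average {mean(hours):.2f} h per reference WU")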

Well, if it was easy, then anyone could play ....

Winterknight
Joined: 4 Jun 05
Posts: 1478
Credit: 382929092
RAC: 366292

But don't we already have numbers for each project on each computer that measure the efficiency of the applications?
As the BOINC benchmark for a computer is common to all the projects that computer is attached to, comparing the DCF (duration correction factor) for each project will tell you which application is most efficient on that computer. But that will not ensure maximum cr/hr, if that is your aim.
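
As a small sketch of that comparison (the DCF values here are invented, and relative efficiency is just taken as the inverse of the DCF):

    # Hypothetical DCF values for one computer; a lower DCF means the app
    # finishes work faster than the benchmark-based estimate predicted.
    dcf = {
        "Einstein@Home": 0.45,
        "SETI@home":     0.80,
        "Rosetta@home":  1.30,
    }

    best = min(dcf.values())
    for project, value in sorted(dcf.items(), key=lambda kv: kv[1]):
        print(f"{project}: DCF {value:.2f} "
              f"(~{best / value:.0%} as efficient as the best app on this box)")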

But if credits/time were calculated from application efficiency, then we could not have cross-project parity until all applications were at the same level of efficiency. And that will probably never happen.

And how many computers would be needed at each project HQ to ensure their calculations were correct? Put two identical CPUs on motherboards with chipsets of different design and you probably will not get the same performance; then add in a mix of different RAM suppliers and the idea snowballs out of control again. And we haven't even considered using CPUs from different families or manufacturers.

[edit] I didn't mention mixes of OS either.
Have you wondered why the top computers on Seti are all Macs? [/edit]

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

*MY* aim was never max CR/hr ...

My aim was to ensure fair and equitable allocation of credit to participants.

To THIS end, I developed, back when I could think these things through ... a way to use the redundant processing feature of BOINC to our advantage. Anyway, it is moot, in that suggesting changes to the credit system to fix the problems we proved back then was a dead letter in BOINC Beta ...

Now, the issues are so entrenched that it is not at all likely that we shall see a change.

Just as with the keyboard layout designed to stop fast typists from jamming mechanical typewriters: we are still using that layout even though it was designed to SLOW down the typist. So we suffer carpal tunnel because of a "smart" design decision that has not been relevant for decades ...

The point should be that we determine what the speed of the system is in processing our specific workload and assign a value for that (whatever that value is), and once we had some "known" systems we could then use those to "calibrate" the other systems based on comparative analysis.

In a sense it is the same way we extend calibration of physical instruments. We have top-level standards that we use to calibrate the next level of standards down; those are more numerous but less accurate, and we can use them to calibrate the next level down with lower confidence, and so on ... I forget how many levels there are, but in the Navy all of our equipment could trace its calibration up through the chain of standards used to make sure it was measuring accurately.

Anyway, that is one of the reasons that the proposal that virtually no-one ever read was written, and why it is another one of those long things ... if it was easy, anyone could play ... if it was easy, it would have been done by now ...

Winterknight
Joined: 4 Jun 05
Posts: 1478
Credit: 382929092
RAC: 366292

Paul,
I know your aim is to ensure fair and equitable allocation of credit to participants. But Tony did say in his proposal, item "3) have each project take one moderately new PC".
As he knows from the data he produced for Seti units on his mainly AMD farm last Nov/Dec, before he got his Phenom and Core 2 Quad, I said at the time that I was seeing much different cr/hr across the AR (angle range) curve using Intels of Dothan (Pentium M) or later vintage.
So Seti would, in fairness, have to run at least one Intel and one AMD CPU. And you yourself published different performance figures for, if I remember correctly, two P4 systems with different motherboards. I would therefore think that the projects, if they were to follow Tony's proposal, would have to run tens of computers to get average figures for a reasonable range of CPU/chipset/RAM/OS combinations.

And yes, I did like your proposal based on a "standards" principle, and said so at the time. And although the use now of FLOP counting is fairer than the benchmark * time system, it is still flawed because FLOPs are not equal, i.e. add and multiply are fast, whilst divide and sqrt take much longer.
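
A quick way to see that on any given box; exact ratios are machine-dependent, but divide and sqrt typically come out several times slower than add and multiply:

    import timeit
    import numpy as np

    a = np.random.rand(1_000_000) + 0.5   # keep operands away from zero
    b = np.random.rand(1_000_000) + 0.5

    for name, op in [("add",      lambda: a + b),
                     ("multiply", lambda: a * b),
                     ("divide",   lambda: a / b),
                     ("sqrt",     lambda: np.sqrt(a))]:
        seconds = timeit.timeit(op, number=100)
        print(f"{name:8s}: {seconds:.3f} s for 100 passes over 1e6 elements")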

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

Winterknight,

The point is that we are already running the computers ... and the working point of the concept was that we use real working computers with real work and go from there. The work became the benchmarking workload, though the actual work unit(s) used would be the same ones for everyone (I suggested the use of several different WUs for various reasons).

But, as I said, it is moot ... never going to happen ...

Winterknight
Joined: 4 Jun 05
Posts: 1478
Credit: 382929092
RAC: 366292

Yeah, I know it's not going to happen, but we can have our dreams.

RandyC
Joined: 18 Jan 05
Posts: 6678
Credit: 111139797
RAC: 0

Message 80121 in response to message 80119

Quote:

Winterknight,

The point is that we are already running the computers ... and the working point of the concept was that we use real working computers with real work and go from there. The work became the benchmarking workload, though the actual work unit(s) used would be the same ones for everyone (I suggested the use of several different WUs for various reasons).

But, as I said, it is moot ... never going to happen ...

Paul,

You've been away quite a while and may have missed all the hoopla over David Anderson's proposal for an all-new credit system. There was/is a big ruckus over it, some of which is covered in this thread.

Seti Classic Final Total: 11446 WU.

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

Message 80122 in response to message 80121

Quote:
You've been away quite a while and may have missed all the hoopla over David Anderson's proposal for an all-new credit system. There was/is a big ruckus over it, some of which is covered in this thread.


Apparently not long enough ... not sure which to put, smile or frown.

For those of us from BOINC Beta days this was vanilla stuff, in that it was always part of the suggested potential of BOINC. For example, redundant storage. I usually have a spare TB around that could be used by a project. Somewhere I discussed this, probably still in the UBW, where the project would slice up what they wanted to store externally and send it to, say ... 10 participants ... then if they need the data back, well, it is unlikely all 10 would be offline at the same time ... not necessarily the most efficient storage, but secure, with 10 copies spread around the world.
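
A minimal sketch of that slice-and-replicate idea, with made-up participant names and a replication factor of 10 (a real scheme would also have to track which hosts are currently online):

    import random

    def replicate(data: bytes, hosts: list[str], chunk_size: int = 1024,
                  copies: int = 10) -> dict[int, list[str]]:
        """Record which hosts hold each chunk (the sketch skips the actual sending)."""
        placement = {}
        n_chunks = (len(data) + chunk_size - 1) // chunk_size
        for chunk_id in range(n_chunks):
            # data[chunk_id * chunk_size : (chunk_id + 1) * chunk_size]
            # would be what actually gets sent out to these hosts
            placement[chunk_id] = random.sample(hosts, k=min(copies, len(hosts)))
        return placement

    hosts = [f"participant-{n:02d}" for n in range(25)]
    placement = replicate(b"x" * 10_000, hosts)   # stand-in for real archive data
    print(placement[0])                           # the 10 hosts holding the first chunk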

How to measure ... I think it should be a different metric my own-self.

The point of the CS was to BE project-independent but roughly comparable for the same computer over the same period of time. In OTHER words, using the stock application it should take x seconds to process a work unit and that is worth so many CS ... since Cobblestones are measured in seconds. ANY change in the equation, a faster application, processor, memory, etc., will increase that computer's CS speed. But that CS speed should be roughly comparable across multiple projects.
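
For reference, the commonly cited definition is that one Cobblestone is 1/200 of a day on a reference machine doing 1,000 MFLOPS on the Whetstone benchmark, so a benchmark-times-time style claim works out roughly like this sketch (the 2,400 MFLOPS figure is invented):

    SECONDS_PER_DAY = 86_400
    CREDIT_PER_GFLOPS_DAY = 200       # 1 Cobblestone = 1/200 day at 1,000 MFLOPS

    def claimed_credit(cpu_seconds: float, whetstone_mflops: float) -> float:
        gflops = whetstone_mflops / 1_000
        return gflops * (cpu_seconds / SECONDS_PER_DAY) * CREDIT_PER_GFLOPS_DAY

    # An invented example: 6 CPU hours on a box benchmarking at 2,400 MFLOPS
    print(claimed_credit(cpu_seconds=6 * 3600, whetstone_mflops=2_400))   # ~120 CS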

Because they insisted on using that absolutely horrid benchmark, we got to where we are today ...

Now each project seems to be inventing its own way of measuring and rewarding work. Though I have EAH at share 80 and RAH at 100, I have a higher RAC and daily earnings at EAH, if I read the graphs right ...

We might have gotten this right before we were much out of beta if those of us who looked into this problem had been listened to ...

If you take 32 hours to do the WU and I get it done in 15 min, sorry, same CS ... I am just faster ... I saw a graph where projects were compared to each other and you could tell where you could earn the most CS ... for those interested only in that ... well, they will go there ...

Stupid as it is, it bothers me that I want to support a number of projects, but, based on "payment", some projects may seem to value my "work" less because they pay me less per hour of CPU time on the same machine.

Not to beat the horse, OK, it is to beat the horse, but that is why I made that proposal that used standard work units ...

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

Message 80123 in response to message 80122

Quote:

If you take 32 hours to do the WU and I get it done in 15 min, sorry, same CS ... I am just faster ...

I mentioned that over yonder in the middle of that long thread, and had someone try to tell me that I was advocating two vastly different machines getting the same cr/hr (?!?!?!?!?!). I was like, ummmm, no, I'm advocating a normalized amount of credit per completed task, so if one machine is 4 times faster than the other, its cr/hr would be 4x the slower machine's...

...of course, I was the one "not listening".... :shrug:
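
A tiny worked example of that normalization, with an invented 50 CS per WU:

    # Fixed credit per task: cr/hr scales directly with machine speed.
    credit_per_wu = 50.0
    hours_per_wu = {"fast box": 0.25, "slow box": 1.0}   # hypothetical runtimes
    for machine, hours in hours_per_wu.items():
        print(f"{machine}: {credit_per_wu / hours:.0f} cr/hr")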
