Optimized BOINC Client Discussion Thread

Stick
Joined: 24 Feb 05
Posts: 790
Credit: 33,143,489
RAC: 908
Topic 190958

With all the discussion going on here regarding Akosf's (wonderful) optimized Albert applications, I thought it would be useful to start a companion thread for discussing the various optimized BOINC Clients available.

I would guess that most of us who are using one of Akosf's applications are also using an optimized BOINC Client, but I know some are not, and they may not realize what optimized BOINC Clients do.

That is, optimized BOINC Clients score higher benchmarks than the standard BOINC Client and, therefore, when used with an optimized application, tend to "normalize" claimed credit back to the range that a standard application with the standard BOINC Client would yield.
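
To make that concrete, here is a rough Python sketch of the classic benchmark-based claim, assuming the standard cobblestone definition (100 credits per CPU-day on a host benchmarking 1,000 MFLOPS Whetstone and 1,000 MIPS Dhrystone); all the numbers below are made up, purely for illustration:

    # Classic benchmark-based credit claim (illustrative sketch, not project code).
    # Assumes 100 credits per day of CPU time on a 1,000 MFLOPS / 1,000 MIPS host.
    def claimed_credit(cpu_seconds, whetstone_mflops, dhrystone_mips):
        avg_bench = (whetstone_mflops + dhrystone_mips) / 2.0
        return cpu_seconds / 86400.0 * avg_bench / 1000.0 * 100.0

    # Standard app, standard client:
    print(claimed_credit(30000, 1500, 2500))  # ~69 credits

    # An optimized app halves the CPU time, so the claim drops...
    print(claimed_credit(15000, 1500, 2500))  # ~35 credits

    # ...and an optimized client's higher benchmarks pull it back up:
    print(claimed_credit(15000, 3000, 5000))  # ~69 credits again

That is the "normalization" I mean: faster application, higher benchmarks, and roughly the same claim as before.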

My hope is that people will use this thread to post their experiences with the various optimized BOINC Clients and, more importantly, to post links to sites providing such software, as well as links to other threads that might provide useful information on the subject.

For starters, the Other sources of BOINC client software page has links to a variety of sites where you can read about and download optimized BOINC Clients.

LiborA
Joined: 8 Dec 05
Posts: 74
Credit: 337,135
RAC: 0

Optimized BOINC Client Discussion Thread

My recommendation is the Calibrating BOINC core client from http://boinc.truxoft.com/

Santas little helper
Joined: 11 Feb 05
Posts: 36
Credit: 10,036,417
RAC: 26,815

IMHO credit is worth nothing

IMHO credit is worth nothing because it isn't involved in the data processing, so why should one use an optimized BOINC client? Wouldn't it be better to use a standard BOINC client without private optimizations, so as to give no grounds for credit disputes? (This is unrelated to the fact that the BOINC source code is open; that's wonderful.)

Greetings, Santas little helper

Michael Karlinsky
Joined: 22 Jan 05
Posts: 888
Credit: 23,502,182
RAC: 0

RE: IMHO credit is worth

Message 26530 in response to message 26529

Quote:
IMHO credit is worth nothing because it isn't involved in the data processing, so why should one use an optimized BOINC client? Wouldn't it be better to use a standard BOINC client without private optimizations, so as to give no grounds for credit disputes? (This is unrelated to the fact that the BOINC source code is open; that's wonderful.)

Hear, Hear!

Michael

archae86
Joined: 6 Dec 05
Posts: 3,159
Credit: 7,245,769,965
RAC: 1,323,204

I currently run trux's

I currently run trux's calibrating client.

If you are interested, see truXoft Calibrating Client

This is a delicate topic, with many people expressing concern that the non-calibrating optimized clients (which usually had the benchmark code optimized, thus giving higher benchmark results for the same computer, and thus posting higher claims) were unfair when used on projects for which the user did not have a correspondingly optimized science application.

That is indeed one side of the unfairness, but the other side is underclaiming, which happens when one runs a science app far more efficient than the credit system assumes; the symptom is consistent, even drastic, underclaiming compared to the other members of a diverse set of quora.

Both sides of this unfairness are undesirable, for two reasons. First, credit is a bit like price in a conventional economy: mispriced goods misguide people in the decisions they base on them. Second, in the strange social-engineering world of BOINC there are quite a few people who respond to the credit stimulus, and a situation they perceive as unfair in general, or as hurting them in particular, can lead users to stop volunteering the machine time that makes the whole system go.

To the extent I understand trux's calibrating client as currently distributed, it contains mildly tweaked benchmarks that are likely to score your machine's floating-point and integer capability a bit higher than the distributed client does.

More importantly, if you enable calibration for a project, the client monitors the difference between the expected claim (based on the predicted work content the project sends out with each result) and the actual claim for each result. It appears to use heavy filtering and some form of integration (likely a cousin of the exponentially weighted moving average, EWMA) so that a pattern of consistent underclaiming is gradually countered by adjusting the only two parameters it uses to adjust claims: CPU time and floating-point Gops/sec.
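
Since trux has not published the algorithm in detail, the following is only my guess at the general shape of such a calibration loop; the class, the smoothing constant, and the even split of the correction across the two parameters are all my own assumptions, sketched in Python:

    # Hypothetical EWMA-style calibration loop -- my guess at the shape of the
    # method, not trux's actual code or constants.
    class ClaimCalibrator:
        def __init__(self, alpha=0.05):   # small alpha = heavy filtering
            self.alpha = alpha
            self.ratio = 1.0              # smoothed expected/actual claim ratio

        def observe(self, expected_claim, actual_claim):
            # Fold each result's expected/actual ratio into the running EWMA.
            sample = expected_claim / actual_claim
            self.ratio += self.alpha * (sample - self.ratio)

        def adjusted_claim_params(self, cpu_seconds, fp_gops):
            # Never reduce claims (mirroring the non-SETI behavior I note below),
            # and split the correction evenly across the two reported parameters.
            factor = max(self.ratio, 1.0) ** 0.5
            return cpu_seconds * factor, fp_gops * factor

With heavy filtering (a small alpha), the roughly 30 results to converge that I describe below seems plausible.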

At this point I should acknowledge that adjusting both parameters away from the actual observed values is a form of falsification. However, it is done openly: truXoft logs the adjustments in the stderr output file, which is posted to public view for each result for users who do not choose to hide their computers.

Now, my personal user experience:

As trux warns on his site, starting up his client on a fresh stream of results for a stable system (CPU + science app + actual work units) takes a while to converge. Roughly 30 results seems about right here, and for the first dozen it often actually moves the "wrong" way, which has discouraged many users and generated endless fruitless posts.

Other than my own, I've seen no comments on what happens when one introduces a huge performance change, of the order of going from the distributed Einstein app to akosf's S-39L.

I had been running trux's calibrating client for many weeks. It was doing no compensation (other than that implied by its somewhat more optimistic benchmark) because the calibration procedure had decided I was overclaiming, and the client is currently set by trux not to reduce claims on non-SETI projects (I wish he would allow it to reduce, but that is hypothetical).

So on each of my four machines it took quite a while after I switched from the distributed science app to C-37 before the calibration reached the break-even point at which it began to increase my claims. Roughly 30 results, which of course took many more hours on my Pentium IIIs than on my P4 EE.

The initial behavior was also a bit odd. Just after break-even, the client was actually considerably _reducing_ my claimed CPU time from the observed value while _increasing_ my claimed FP capability. Of course, at the crossover the net increase in claim was tiny. A couple dozen results later, the claim increase was rising about 3% per result at peak, through relative increases in both CPU time and FP capability.

Right now all four of my machines are claiming about 50 to 75% more cobblestones than an unmodified client would. Based on quorum opinion, they are not overclaiming, although some may be near a "fair claim". They are all still rising, some rapidly, so I don't know the end state. I suspect it may settle into modest overclaiming.

While it is a separate topic from claiming, I should mention that trux has coded a few additional features beyond the stock client. I eagerly await his project-specific CPU dedication. As is well known, SETI runs at dramatically improved efficiency when it runs on just one, but not both, of the virtual CPUs of a hyperthreaded P4. I currently don't like to run SETI at more than about a 15% resource share on my P4, because above that the fraction of double-run time starts picking up. If he codes this, I could probably go to 50%. (The difference is something like 45 minutes per SETI result per CPU when run with Einstein on the other side, versus about 75 minutes when run with another SETI on the other side--really big; the back-of-envelope arithmetic is sketched below.)
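
For what it's worth, here is that arithmetic in Python, using the rough minute figures I just quoted from my own machines:

    # SETI throughput on a hyperthreaded P4, using the rough figures above.
    mixed  = 60.0 / 45.0       # SETI results/hour with Einstein on the other side
    paired = 2 * 60.0 / 75.0   # SETI results/hour with SETI on both logical CPUs

    print(f"SETI + Einstein: {mixed:.2f} SETI results/hr, plus Einstein output")
    print(f"SETI + SETI:     {paired:.2f} SETI results/hr, no Einstein output")
    print(f"CPU cost per SETI result: {75.0 / 45.0:.2f}x when paired with itself")

Each SETI result costs about 1.67 times the CPU time when SETI shares the chip with itself, which is why a per-project CPU cap would help.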

If trux implements that feature and it behaves as I hope, it will represent a genuine increase in science output by the client. The rest of this is all on the social-engineering side.

Santas little helper
Joined: 11 Feb 05
Posts: 36
Credit: 10,036,417
RAC: 26,815

Has the trux algorithm been

Has the trux algorithm been mentioned on the official BOINC message boards?
Maybe it could influence the main developer(s) to implement it, or a modified version of it. I have nothing against private modifications, but if this algorithm is a good idea, then the improved fairness should be a benefit for all (and be implemented in the standard version). Good ideas are in demand everywhere :-)

Greetings, Santas little helper

archae86
Joined: 6 Dec 05
Posts: 3,159
Credit: 7,245,769,965
RAC: 1,323,204

RE: Maybe this could

Message 26533 in response to message 26532

Quote:
Maybe this could influence the main developer(s) to implement this or a modified version.

I believe the official direction is to move to science applications that report back the number of floating-point operations performed--and to award credit based on that.

From that point of view the calibrating client method is an interim measure.

And, yes, trux's client has been discussed quite a bit on the SETI message boards. I think most of the user opinion is favorable.

Those who think all score-adjusting clients are bad seem to think his is the least bad.

Those who think such a client is sometimes a good idea think his is fairer more often than the others.

Odysseus
Joined: 17 Dec 05
Posts: 372
Credit: 20,701,367
RAC: 8,764

RE: I believe the official

Message 26534 in response to message 26533

Quote:
I believe the official direction is to move to science applications which report floating point operations performed back--and to award credit based on that.

Yes; the SETI@home Enhanced app, in beta testing, does just that. Despite wildly disparate crunching times from different systems, the (few) credit claims I've seen so far agree with each other to within a percent or so, as long as all the hosts are running a fairly recent (v5.x?) BOINC client.
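
If I understand the scheme correctly, the arithmetic behind it reduces to a single conversion constant. A minimal Python sketch, assuming only the canonical cobblestone rate (100 credits per day of work at 1 GFLOPS) and a made-up operation count:

    # Flops-counted credit, assuming 100 credits per day at 1 GFLOPS,
    # i.e. one credit per 8.64e11 floating-point operations.
    CREDIT_PER_FPOP = 100.0 / (86400.0 * 1e9)

    def fpops_credit(fpops_counted):
        return fpops_counted * CREDIT_PER_FPOP

    # The same work unit yields the same claim whether the host took
    # two hours or two days to crunch it:
    print(fpops_credit(4.3e13))  # ~49.8 credits, regardless of host speed

Benchmarks and CPU time drop out of the claim entirely, which would explain the agreement to within a percent.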

While this seems very sensible in terms of making crediting more consistent than is possible under the present system, I have mixed feelings about it. I guess I'm somewhat 'left-wing' on the question, in that I believe slower systems deserve more credit than their throughput alone would indicate, as compensation for the 'overheads' of electricity, user supervision, maintenance, and so on, which are the same regardless of processing speed. While I recognize that productivity is what counts from the POV of the science, if a project's users feel insufficiently rewarded for crunching--and for many, credit is a more important source of motivation than the benefit to science--they'll contribute less work.

Pav Lucistnik
Joined: 7 Mar 06
Posts: 136
Credit: 853,388
RAC: 0

Haven't thought there could

I hadn't thought there could be left-wing distributed computing fans :) But absolutely not: you are credited for the work you have done, not for your intentions. That's fair.

Odysseus
Joined: 17 Dec 05
Posts: 372
Credit: 20,701,367
RAC: 8,764

RE: Haven't thought there

Message 26536 in response to message 26535

Quote:
I hadn't thought there could be left-wing distributed computing fans :) But absolutely not: you are credited for the work you have done, not for your intentions. That's fair.

The present system already involves a compromise between simply measuring CPU time and simply counting results; otherwise benchmarking would be irrelevant to credit claims, and clearly it is not.

See the BOINC Wiki FAQ on credit for some discussion of related issues, as well as the article on credit claims.

m.mitch
Joined: 11 Feb 05
Posts: 187
Credit: 11,025,628
RAC: 0

RE: IMHO credit is worth

Message 26537 in response to message 26529

Quote:
IMHO credit is worth nothing because it isn't involved in the data processing, so why should one use an optimized BOINC client? Wouldn't it be better to use a standard BOINC client without private optimizations, so as to give no grounds for credit disputes? (This is unrelated to the fact that the BOINC source code is open; that's wonderful.)

NO! A faster client means more science!
