Krazy Kenzie’s Kredit Krunch

kenzieB
kenzieB
Joined: 10 Apr 07
Posts: 42
Credit: 584424
RAC: 0
Topic 194205

This thread was inspired by a little debate in the MilkyWay project over a few users who were threatening to leave because they felt it unfair that the Admin over there had lowered credit allocations.

Lots of arguing ensued, with plenty of facts bandied about, so I decided to do some back-of-the-envelope research myself.

Here is the plan: I am going to run Q-Baby 24/7, all four cores, for a week on each of four different projects: SETI, Einstein, PrimeGrid and MilkyWay. For the purposes of the experiment, I am only going to be using stock, project supplied applications.

I will be using this basic formula to estimate the week's worth of credit earned:

Credit Gain = (Ending Credit + New Pending[1][2]) - (Start Credit + Start Pending[2])

[1] Pending credit acquired during the week.
[2] For projects with a large discrepancy between pending and awarded credit, I will manually guesstimate the difference.

Also, if a project runs out of work during the week, I will extrapolate credit to fill the idle hours.
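To make the bookkeeping concrete, here is a minimal Python sketch of the formula above, with the footnoted pending-credit adjustment and the idle-hour extrapolation as optional steps. All the ending numbers are hypothetical placeholders, not real results:

```python
# Sketch of the credit-gain bookkeeping described above.
# The pending_adjustment term is the manual guesstimate from footnote [2].

def credit_gain(start_credit, start_pending, end_credit, new_pending,
                pending_adjustment=0.0):
    """Credit Gain = (Ending Credit + New Pending) - (Start Credit + Start Pending)."""
    return (end_credit + new_pending + pending_adjustment) \
        - (start_credit + start_pending)

def extrapolate_for_idle(gain, hours_crunched, hours_in_week=168.0):
    """Scale an observed gain up to a full week if the project ran dry."""
    return gain * hours_in_week / hours_crunched

# Hypothetical week: start at 324930 credit + 1302 pending,
# end at 330000 credit + 800 new pending, with 8 idle hours.
gain = credit_gain(324930, 1302, 330000, 800)
full_week = extrapolate_for_idle(gain, hours_crunched=160)
```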

Before anyone points out the boincstats link that shows all of this already or, even worse, starts crying that my experiment is neither fair nor impartial: I admit all of the above.

I just want to make my own measurements. And, being an admitted attention-seeking-person, I figured I would invite you all along for the ride.

Additionally, in the interests of fairness, I am starting similar threads in the other three concerned projects.

Finally, I freely admit that I have my own preconceived ideas about how all of this will turn out. (My guess, lowest to highest: SETI, Einstein, PrimeGrid, and, by a comfortable margin in the credit lead, MilkyWay.) But I think that most of you know me well enough to know that if the facts turn out contrary, I’ll fess up and eat proper crow.

I am only going to be posting to this thread to provide weekly updates, so feel free to discuss amongst yourselves the many errors of my ways.

I will begin crunching SETI at 19:00 PST, February 23, 2009.
SETI will end and Einstein begin at 19:00 PST, March 2, 2009.
Einstein will end and PrimeGrid begin at 19:00 PST, March 9, 2009.
PrimeGrid will end and MilkyWay will begin at 19:00 PST, March 16, 2009.
MilkyWay will end at 19:00 PST, March 23.

Results posted by 21:00 PST, March 23.

Your prototypical Generation 'Y' slacker, and damn proud of it.

Help feed the world's hungry. Free Rice.

John Clark
John Clark
Joined: 4 May 07
Posts: 1087
Credit: 3143193
RAC: 0

Krazy Kenzie’s Kredit Krunch

Go for the comparison; I look forward to seeing the results, in due time.

Shih-Tzu are clever, cuddly, playful and rule!! Jack Russell are feisty!

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109393893397
RAC: 35828994

RE: Here is the plan: I am

Quote:
Here is the plan: I am going to run Q-Baby 24/7, all four cores, for a week on each of four different projects: SETI, Einstein, PrimeGrid and MilkyWay. For the purposes of the experiment, I am only going to be using stock, project supplied applications.

I think you'll find it a bit of a pain to be switching projects at regular intervals like that. When you reach the end of each weekly period, what do you intend to do with partly crunched/unstarted tasks that remain in your cache?

Seeing as you have four cores, why don't you consider doing all four projects simultaneously, giving each one a 25% resource share? If you reset all the debts just before you are ready to start, a single task from each project should continue to run at all times, I would imagine. If you started immediately after the Seti weekly outage and built up a cache progressively to about 3 or 4 days, you might avoid the risk of any one project ever running dry - with any luck. I don't know if Seti is the only project that can be unpredictable with work supply.

Cheers,
Gary.

kenzieB
kenzieB
Joined: 10 Apr 07
Posts: 42
Credit: 584424
RAC: 0

RE: I think you'll find it

Message 90579 in response to message 90578

Quote:


I think you'll find it a bit of a pain to be switching projects at regular intervals like that. When you reach the end of each weekly period, what do you intend to do with partly crunched/unstarted tasks that remain in your cache?

Seeing as you have four cores, why don't you consider doing all four projects simultaneously, giving each one a 25% resource share? If you reset all the debts just before you are ready to start, a single task from each project should continue to run at all times, I would imagine. If you started immediately after the Seti weekly outage and built up a cache progressively to about 3 or 4 days, you might avoid the risk of any one project ever running dry - with any luck. I don't know if Seti is the only project that can be unpredictable with work supply.


Going with a one day cache and NNT about 12 - 18 hours before the end. Probably will lose a few wu’s at the end, tho. (Apologies in advance to my wing-peeps.)

I had considered crunching all four at once but figured that things could get skewed if BOINC decided some WUs are getting too close to deadline and it goes into high-priority mode.

I am using the KISS principle throughout. (Keep It Simple, Silly!)

Remember, this is not meant to be a strictly scientific experiment, but rather a practical, real-world test to see what sort of credit various projects hand out.

:o)

Your prototypical Generation 'Y' slacker, and damn proud of it.

Help feed the world's hungry. Free Rice.

Chris S
Chris S
Joined: 27 Aug 05
Posts: 2469
Credit: 19550265
RAC: 0

Good for you Kenz, and I am

Good for you Kenz, and I am sure that your results will be more meaningful to us in the real world than all the nerds in cyberspace.

Waiting for Godot & salvation :-)

Why do doctors have to practice?
You'd think they'd have got it right by now

Alinator
Alinator
Joined: 8 May 05
Posts: 927
Credit: 9352143
RAC: 0

Here's a copy of the relevant

Here's a copy of the relevant information about my hosts from The Great Crunchoff Grandstand thread I started at MW for conversation about the experiments.

Quote:

FWIW:

I set up all my hosts to run my project set with equal resource shares several months ago to collect comparative data. In general, it is not as rigorous a test as CFL's, since all these machines have other assigned duties on my personal network and thus are not what I would classify as 'dedicated' crunch boxes.

Anyway, here are the current details (2/22/09) for them, but they aren't hidden on any project they run. So if you're from Missouri, you can check them yourself. ;-)

Application is stock unless otherwise noted.

BOINC CC versions:

Unit 1: 5.10.38

Unit 8: 5.8.16

All the rest are 5.10.13.

Unit 1:

400 MHz G3 iMac running Panther, 512 MB RAM

CPCS:

MW: 0.001255
EAH: 0.000431
SAH: 0.000332
LC: No compatible application

Unit 2:

1.83 GHz T2400 (CD Yonah) running XP Pro SP3, 2 GB RAM

MW: 0.004569
EAH: 0.004191
SAH: 0.004789 (AKv8 SSE3)
LC: 0.002666

Unit 3:

2.66 GHz P4 (Northwood) running XP Pro SP3, 1 GB RAM

MW: 0.005952
EAH: 0.003282
SAH: 0.004758 (AKv8 SSE2)
LC: 0.002319

Unit 4:

550 MHz PIII (Katmai) running 2K Pro SP4, 384 MB RAM

MW: 0.001541
EAH: 0.000652
SAH: 0.000701 (AKv8 SSE)
LC: 0.000753

Unit 5:

450 MHz PII (Deschutes) running XP Pro SP3, 384 MB RAM

MW: 0.001579
EAH: 0.000467
SAH: 0.000411 (KWSN 2.4 MMX)
LC: 0.000637

Unit 6:

500 MHz K6-2 running 2K Pro SP4, 256 MB RAM

MW: 0.001568 (Gipsel 0.19)*
EAH: 0.000287
SAH: 0.000196 (KWSN 2.4 MMX)
LC: 0.000567

Unit 7:

450 MHz K6-3 running 2K Server SP4, 384 MB RAM

MW: 0.001129 **
EAH: 0.000354
SAH: 0.000175 (KWSN 2.4 MMX)
LC: 0.000548

Unit 8:

300 MHz K6 running NT4 Server SP6a, 192 MB RAM

MW: 0.000857
EAH: No compatible application
SAH: 0.000126 (KWSN 2.4 MMX)
LC: 0.000335

* Gipsel app used for dual boot 9x compatibility

** either Gipsel or zslip 0.19 used for same reason as before, just not sure which right now. ;-)

The other two hosts I have are K6-2/500's, generally similar to but not identical to the one listed here. The one I chose is the one which consistently performs the best of the three.
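Assuming CPCS in the table above means credits per CPU second, a quick Python sketch shows how those figures let you compare projects on a single host. The dictionary below uses Unit 3's listed numbers; the baseline-ratio idea is mine, not Alinator's:

```python
# CPCS figures for Unit 3 (2.66 GHz P4), taken from the table above.
# Assumption: CPCS = credits awarded per CPU second of crunching.
unit3_cpcs = {"MW": 0.005952, "EAH": 0.003282, "SAH": 0.004758, "LC": 0.002319}

def relative_to(baseline, cpcs):
    """Express each project's pay rate as a multiple of a baseline project."""
    return {proj: rate / cpcs[baseline] for proj, rate in cpcs.items()}

ratios = relative_to("EAH", unit3_cpcs)
# On this host, MW pays roughly 1.8x what EAH pays per CPU second.
```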

Alinator

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109393893397
RAC: 35828994

RE: I had considered

Message 90582 in response to message 90579

Quote:
I had considered crunching all four at once but figured that things could get skewed if BOINC decided some WUs are getting too close to deadline and it goes into high-priority mode.


If you keep your cache at reasonable levels and only make gradual changes if you really need to, it's highly unlikely that BOINC would ever go into HP mode. Even if it did, when it dropped out again it would "pay back" to those projects that had lost time anyway. Essentially, what you are saying is that BOINC can't be trusted on its own so you'll need to intervene in order to make up for BOINC's deficiencies. It seems to me that intervention is not really KISS.

Quote:
Remember, this is not meant to be a strictly scientific experiment, but rather a practical, real-world test ...

Since the great majority of people use "set and forget" with BOINC, a practical real world test would be to emulate that behaviour. If you left it alone for a month or two with equal resource shares you might be quite surprised at how well BOINC would do in assigning equal time to each project. I have a number of old dual cores that are set 50/50 between EAH and SAH. They've been running a long time and they always have 1 task from each project running simultaneously whenever I choose to look (which admittedly isn't all that often). They have a cache size of around 4 days and never seem to run out of work or go into HP mode.

Another example is a couple of Q6600s which are set 75/25 EAH/SAH. They always have 3 cores running EAH and one core running SAH and I would look at them each day. They have 3 day caches and have survived the current SAH outage and have now replenished their caches again. The only thing I had to do was to encourage a few uploads to occur so that the number of tasks stuck in upload could drop below 8. BOINC refuses to ask for new work if the number of stuck uploads exceeds 2*cpu cores. As soon as the stuck uploads had dropped to 8, BOINC was able to get a whole bunch of new SAH tasks on both hosts and thereby avoid the looming "out-of-work" situation. Both are still running the 3/1 ratio with no problems.
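The work-fetch gate Gary describes can be written as a one-line condition. This is only a sketch of the rule as he states it for the BOINC client of that era (fetch blocked while stuck uploads exceed twice the CPU count), not the client's actual source:

```python
# Work-fetch gate as described above: BOINC won't request new tasks
# while the number of tasks stuck in upload exceeds 2 x CPU cores.

def may_fetch_work(stuck_uploads: int, ncpus: int) -> bool:
    return stuck_uploads <= 2 * ncpus

# On a quad-core Q6600, 8 stuck uploads is the limit; 9 blocks fetching.
```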

I'm of the opinion that the bad rap that BOINC gets for unnecessary HP mode and glitches in work fetch and apparent non-honouring of resource share are pretty much entirely due to unskilled micro-management. Left to its own devices, BOINC seems to do just fine.

It would actually be quite interesting to set up equal resource shares on your 4 projects and then record both the crunch time and credit awarded for each completed task into a spreadsheet. After a couple of weeks of set and forget operation you could sum the 4 crunch time columns and the four credit columns and see how "fair" BOINC had been to each project. You would then be able to compare the credit granting behaviour of each project in terms of credit per cpu second for your host.
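The spreadsheet Gary suggests is easy to mock up in Python: log one row per completed task, sum crunch time and credit per project, and divide. The task rows below are hypothetical, purely to show the aggregation:

```python
# Per-project credit-per-CPU-second tally, as suggested above.
# Each row: (project, crunch seconds, credit awarded) -- hypothetical data.
from collections import defaultdict

tasks = [
    ("EAH", 14400, 110.0),
    ("SAH", 10800, 55.0),
    ("EAH", 15000, 112.5),
    ("MW",   3600, 21.3),
]

totals = defaultdict(lambda: [0.0, 0.0])   # project -> [seconds, credit]
for project, seconds, credit in tasks:
    totals[project][0] += seconds
    totals[project][1] += credit

credit_per_sec = {p: c / s for p, (s, c) in totals.items()}
```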

Cheers,
Gary.

kenzieB
kenzieB
Joined: 10 Apr 07
Posts: 42
Credit: 584424
RAC: 0

Okay, here we go. Now

Okay, here we go. Now crunching SAH MB with Berkeley supplied apps.

Starting credit is:

324 930 + 1302 pending = 326 232

@ Gary Roberts and Alinator sorry that you disapprove of my methods.

I am mainly just doing this to satisfy my own curiosity after never being able to get a straight answer to the question.

I just figured I’d make a thread because some might be interested in how things develop. Or not, as the case may be. ;o)

Your prototypical Generation 'Y' slacker, and damn proud of it.

Help feed the world's hungry. Free Rice.

Rod
Rod
Joined: 3 Jan 06
Posts: 4396
Credit: 811266
RAC: 0

I have been crunching two

I have been crunching two projects on this computer since May, with no changes: CPDN and Einstein. I just look at RAC. At present my RAC is 597 for Einstein and 694 for CPDN... I'm happy.

There are some who can live without wild things and some who cannot. - Aldo Leopold

Paul D. Buck
Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

@Gary, If we have

@Gary,

If we have patience, maybe we can convince kenzieB to do a second run using the established parameters you cite. I agree that BOINC does well in "fire and forget" mode, but we only suppose that it divides things "fairly" ...

If we allow this test and gather the data, we would then have a baseline against which to compare. In other words, month one: run single projects and collect data ...

Month two: run the projects in parallel and see if the system stays as balanced as we suppose it will ... or even just week 5 ...

What I mean by that: we should expect the system in week 5 to do 1/4 of what it did in weeks one through four (or, if we allow the parallel run to go the entire month, the numbers should be the same).

Though I did not do rigorous data collection, in the last couple of months I have made a push on several projects, and with roughly the same computing power applied, the daily "earnings" are vastly different between ABC@Home, Cosmology@Home, and Rosetta@Home, with RaH by far the worst at 6K per day while the other two projects were over 10K per day (ABC was higher because I was also able to put my Power Mac to work on the task)...

Richard Haselgrove
Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752771530
RAC: 1443234

It would certainly be

It would certainly be interesting to run the 'parallel' test as well as the 'serial' test, and compare the credit total (sum of all four projects) for the two months. I suspect that BOINC will turn out to be more productive overall running different projects on the different cores (less competition for resources), but that's open for debate/experiment too.
