Linux vs Windows performance.

Alan Deforge
Joined: 10 Mar 05
Posts: 10
Credit: 124900
RAC: 0

Good point Paul. I hadn't

Good point, Paul. I hadn't considered the difference in WUs.
The idea was just to let them run and, over the course of time, see who won.
Moe, Larry, and Curly are 24/7 crunchboxes without the routine distractions.
I believe that the WUs will average out after 10 each or so. Some WUs are just bigger than others (probably genetics) and each computer takes its chances when it draws from the deck. However, over the long run, they should average out.
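A quick back-of-the-envelope sketch of how that averaging behaves (toy numbers of my own, nothing from the project): if every box draws from the same deck of WU sizes, the per-box average is still pretty noisy after 10 draws and only really settles down over a much larger sample.

# Toy simulation, not Einstein@Home code: every box draws WU lengths
# at random from the same deck, and we watch the average settle.
import random

random.seed(1)
DECK_HOURS = [0.5, 1.0, 2.0, 3.5, 6.0]   # hypothetical WU lengths in hours

def average_after(n_draws):
    """Mean WU length after drawing n_draws work units from the deck."""
    draws = [random.choice(DECK_HOURS) for _ in range(n_draws)]
    return sum(draws) / n_draws

for n in (10, 100, 1000):
    print(f"after {n:4d} WUs the average length is ~{average_after(n):.2f} h")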

ADDMP
Joined: 25 Feb 05
Posts: 104
Credit: 7332049
RAC: 0

RE: I believe that the WU's

Message 24854 in response to message 24853

Quote:
I believe that the WUs will average out after 10 each or so. Some WUs are just bigger than others (probably genetics) and each computer takes its chances when it draws from the deck. However, over the long run, they should average out.

I think you are probably right about averaging units over the VERY LONG run,
but on a time frame of days or even weeks, my observation of my own computers is that the WUs drawn are very non-random. If one computer starts getting 30-minute WUs, those short WUs just seem to keep coming to that same computer.

I have read here in the past that part of the data sent to a client can be processed over and over using different parameters, so it is more efficient for the servers to just reprocess the same data for a while rather than send new data. Saves bandwidth.

I'm not sure I have that right, but I can see the non-randomness in WUs easily.

That makes it tough to do good comparative benchmarking.

ADDMP

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 1580

Continuity among work is

Continuity among work is really erratic. My main machine does a big WU in 3.5 hours; with a 3-day queue I've had as many as 6 datasets scheduled at once: 2 short, 4 long.

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

The server sends you a

The server sends you a "block" of work ... that chunk has, say, 25 work units in it. You start peeling them off as they get assigned and you ask for work. The other participants that have the block do the same. Over time all the work units are processed. If my computer is very fast and has a high-speed connection, my tendency is going to be to get less work per chunk. The dial-up boys and girls get the block and can then pull a larger share, and don't have to pull down as much new work.
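Purely as an illustration (made-up numbers, not the real BOINC scheduler logic), here is the kind of thing I mean: one shared block of 25 work units being peeled off by a broadband host that asks often and takes little, and a dial-up host that connects rarely and takes more each time.

# Illustrative sketch only -- not the real BOINC scheduler.
# A shared "block" of 25 work units is peeled off by two hosts that
# request different amounts of work per connection.
block = list(range(25))                  # 25 hypothetical work units
per_request = {"broadband box": 2,       # connects often, takes little
               "dial-up box": 6}         # connects rarely, takes more

tally = {host: 0 for host in per_request}
hosts = list(per_request)
turn = 0
while block:
    host = hosts[turn % len(hosts)]
    take = min(per_request[host], len(block))
    del block[:take]                     # those WUs are now assigned
    tally[host] += take
    turn += 1

for host, count in tally.items():
    print(f"{host}: {count} of the 25 WUs")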

I have an example of that in the Wiki; look in the Einstein@Home section's FAQ. It's not hard to find, that FAQ is pretty short ...

I probably should ask Bruce if I can steal his questions and answers and add them in there too ...

======

Depending on system speed and projects, on time and the like, my playing with the numbers says that a week gets you into the ballpark. A month is better ... beyond that you are just bouncing around because of noise ...

I used to try to make this point in Classic, where people moaned that some got more of the "short" AR and -9 work units and so were unfairly advantaged. Played "fair", it is not an issue. Those that did "cherry pick" for AR did get slightly more done per unit time, but spent it in non-productive activities (in my opinion), and how could they REALLY be proud of their effort ... there will always be that nagging little dissatisfaction in the back of their minds ...

Of course, my 61K work units can make me look suspect because the number is much higher than most. And that is what annoyed me most about the patina the cheaters spread around. But *I* know ... and that is one of the reasons why the BOINC system is the way it is ... to stop some of those "exploits". If the benchmark plan had worked we would have had a pretty good system. The problem is, of course, that it does not work well ... :(

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: I probably should ask

Message 24857 in response to message 24856

Quote:
I probably should ask Bruce if I can steal his questions and answers and add them in there too ...


You bet. But please correct any mistakes that you notice!

Director, Einstein@Home

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

RE: RE: I probably should

Message 24858 in response to message 24857

Quote:
Quote:
I probably should ask Bruce if I can steal his questions and answers and add them in there too ...

You bet. But please correct any mistakes that you notice!


Sure, don't I always ...

Alan Deforge
Joined: 10 Mar 05
Posts: 10
Credit: 124900
RAC: 0

Preliminary results are

Preliminary results are inconclusive. Larry (Linux) grabbed some small WUs, Curly (2000) grabbed a medium WU, and poor Moe (XP) is STILL chewing on what I estimate to be the mother of all WUs. Add in the fact that all three were turned off overnight on the first day due to a janitor's misunderstanding, and it seems that I won't have a clear answer for a while yet.

Funny thing: Larry is running a slight bit hotter than the others (well within operating parameters); this could be because it is working harder. They all report 100% CPU usage when crunching.

Sigh, it seems a straight answer is playing hard to get. However, time will tell. My next question is ... do different versions of Linux process better?

wumpus
Joined: 17 Feb 05
Posts: 50
Credit: 7809074
RAC: 0

Couldn't a server be set up

Couldn't a server be set up to feed the exact same workunits to 'benchmark' clients? You could put in a different URL to point to the 'benchmarks' and the server would give everyone the same 5 workunits. These workunits could then be compared between the clients.
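Just to sketch the comparison step that would fall out of that -- the runtimes below are made up and no such benchmark server exists, but once every client has crunched the same 5 workunits the ranking is trivial to compute.

# Made-up runtimes for a hypothetical benchmark server -- nothing real.
# Every client crunches the same 5 workunits; lower total time is faster.
runtimes_hours = {
    "Larry (Linux)":   [1.0, 1.1, 0.9, 1.0, 1.0],
    "Curly (Win2000)": [1.5, 1.6, 1.4, 1.5, 1.5],
    "Moe (WinXP)":     [3.9, 4.1, 4.0, 3.8, 4.0],
}

fastest_total = min(sum(times) for times in runtimes_hours.values())
for host, times in sorted(runtimes_hours.items(), key=lambda kv: sum(kv[1])):
    total = sum(times)
    # speed relative to the fastest client (1.00 = fastest)
    print(f"{host:16} total {total:5.1f} h  relative speed {fastest_total / total:.2f}")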

Alan Deforge
Joined: 10 Mar 05
Posts: 10
Credit: 124900
RAC: 0

I decided not to "benchmark"

I decided not to "benchmark" using identical work units as it was not going to improve my stats. Also, I am lazy.

Preliminary results are interesting ... Linux is ahead (big surprise).

The Linux box (Larry) is 50% faster than the 2000 box (Curly) and almost 400% faster than the XP machine (Moe). I expected some margins, but this much is staggering. It seems that XP has an enormous overhead. Older MS operating systems like 2000 are not quite so burdensome. I have not tried 95, 98, ME, and the like; I am assuming they will perform like 2000.

IMHO: if you want a dedicated crunchbox, stay away from XP and lean strongly towards Linux. Even my older PIII 800 running Linux beat the XP performance of the 2 gig box.

These are my results; your actual mileage may vary.

Michael Karlinsky
Joined: 22 Jan 05
Posts: 888
Credit: 23502182
RAC: 0

RE: I decided not to

Message 24862 in response to message 24861

Quote:

I decided not to "benchmark" using identical work units as it was not going to improve my stats. Also, I am lazy.

Preliminary results are interesting ... Linux is ahead (big surprise).

The Linux box (Larry) is 50% faster than the 2000 box (Curly) and almost 400% faster than the XP machine (Moe). I expected some margins, but this much is staggering. It seems that XP has an enormous overhead. Older MS operating systems like 2000 are not quite so burdensome. I have not tried 95, 98, ME, and the like; I am assuming they will perform like 2000.

IMHO: if you want a dedicated crunchbox, stay away from XP and lean strongly towards Linux. Even my older PIII 800 running Linux beat the XP performance of the 2 gig box.

These are my results; your actual mileage may vary.

Please keep in mind that WUs are NOT equal in length. This most likely accounts for these big differences in runtime.

Michael
