BOINC on iPod.

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

According to Dr. Allen it's

According to Dr. Allen it's based strictly on the benchmarks (and the CPCS cutoff is 0.0013, btw). You can find Dr. Allen's comments by doing a keyword search of the forums for 'cpcs'.

Keep in mind the BM's tend to overestimate a host's capability (on Windows), so if you try to calculate it from result data you end up with a lower value than what gets reported in the contact log. For example, my PIII is currently reporting a CPCS of 0.001057, but its "true" CPCS as calculated from the last 260 results it's run is 0.000777.

Here's the formula for determining your current CPCS:

Reported CPCS = (FLOPS + IOPS) / 1728000

where FLOPS and IOPS are your current Floating Point and Integer benchmark values (in Megs, i.e. millions of operations per second).

Therefore the "Magic Number" you need to post is a value of 2246.4 for your combined benchmarks.

Alinator

Cmdr, it looks like your P4M is benching low for its speed class, especially the Floating Point. IIRC, haven't low benchmark values been a common problem with the Linux versions of the client? I don't see why running diskless should make a difference, since the BM calculations typically stay in the processor cache on late-model CPUs.

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494,410
RAC: 0

Linux benchmarks suck, I can

Linux benchmarks suck, I can confirm that from my own experience. I don't want to say the software sucks ;-) but the score you get does, compared to Windows. After installing Linux (Debian) on my laptop the benchmark score was halved, while actual WU completion times got about 5% better. So either Windows machines are grossly overrated or Linux benchmarks are way too low. No idea which. But you certainly can't compare the two.
@Dex: Thanks, but I already experienced that myself ;-). What I am wondering is if the difference will be very noticeable on only 256 MB of memory and a generally slow machine.

Lt. Cmdr. Daze
Joined: 19 Apr 06
Posts: 756
Credit: 82,361
RAC: 0

Alinator, thanks for the

Message 55764 in response to message 55762

Alinator, thanks for the info!

Quote:
Cmdr, it looks like your P4M is benching low for its speed class, especially the Floating Point. IIRC, haven't low benchmark values been a common problem with the Linux versions of the client? I don't see why running diskless should make a difference, since the BM calculations typically stay in the processor cache on late-model CPUs.


Indeed, the benchmarks come in quite low under Linux. I never considered that a problem, since the science app is still fast enough. Yet before the diskless run I only got long WUs, which, given your explanation, seems "impossible".

I found something interesting about the scheduler and cache performance of Linux. Unfortunately I'm not really familiar with the material, but here it is:

Quote:


Before the 2.6 kernel, the scheduler had a significant limitation when many tasks were active. This was due to the scheduler being implemented using an algorithm with O(n) complexity. In this type of scheduler, the time it takes to schedule a task is a function of the number of tasks in the system. In other words, the more tasks (n) are active, the longer it takes to schedule a task. At very high loads, the processor can be consumed with scheduling and devote little time to the tasks themselves. Thus, the algorithm lacked scalability.

The pre-2.6 scheduler also used a single runqueue for all processors in a symmetric multiprocessing system (SMP). This meant a task could be scheduled on any processor -- which can be good for load balancing but bad for memory caches. For example, suppose a task executed on CPU-1, and its data was in that processor's cache. If the task got rescheduled to CPU-2, its data would need to be invalidated in CPU-1 and brought into CPU-2.

The prior scheduler also used a single runqueue lock; so, in an SMP system, the act of choosing a task to execute locked out any other processors from manipulating the runqueues. The result was idle processors awaiting release of the runqueue lock and decreased efficiency.

Finally, preemption wasn't possible in the earlier scheduler; this meant that a lower priority task could execute while a higher priority task waited for it to complete.


I interpret it as: BOINC puts a high load on the CPU, and because I have a 2.4 kernel, more CPU time is spent scheduling tasks. That would mean crunching times increase, and it could also explain the lower benchmark scores, I guess.
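
To make the O(n) point a bit more concrete, here is a toy sketch in Python (nothing like real kernel code; the task names and priorities are made up) of scanning the whole runqueue versus keeping per-priority queues the way the 2.6 scheduler does:

```python
# Toy illustration only -- not kernel code; task names and priorities are invented.
runnable = [("einstein_app", 10), ("boinc_client", 8), ("kswapd", 5), ("bash", 3)]

def pick_next_scan(tasks):
    # Pre-2.6 style: scan every runnable task for the best candidate.
    # The cost grows with the number of runnable tasks -- O(n).
    return max(tasks, key=lambda t: t[1])

def pick_next_buckets(buckets):
    # 2.6 style, roughly: one queue per priority level. Picking the next task
    # means taking the head of the highest non-empty bucket; since the number
    # of priority levels is fixed, the cost doesn't grow with the task count.
    for prio in sorted(buckets, reverse=True):
        if buckets[prio]:
            return buckets[prio][0]

print(pick_next_scan(runnable))
print(pick_next_buckets({10: ["einstein_app"], 8: ["boinc_client"], 3: ["bash"]}))
```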

@Annika

Quote:

Linux benchmarks suck, I can confirm that from my own experience. I don't want to say the software sucks ;-) but the score you get does, compared to Windows. After installing Linux (Debian) on my laptop the benchmark score was halved, while actual WU completion times got about 5% better. So either Windows machines are grossly overrated or Linux benchmarks are way too low. No idea which. But you certainly can't compare the two.


Halved?? Wow, I didn't realize that. No wonder E@H thinks I'm running a 1 GHz computer (I actually own a 2 GHz one). Nevertheless, Debian is not a very good choice for running Einstein (no doubt your choice didn't depend on BOINC). I started running Debian as well, but when I installed Gentoo, crunching times improved by about 15%...

Happy crunching,
Bert

Somnio ergo sum

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

Thanks for the link Cmdr.

Thanks for the link, Cmdr. Interesting article; it goes a long way toward explaining why earlier Linux distros seemed "sluggish" in some regards compared to Windows, and in theory it could explain why the BMs come in low on Linux if they run at the same priority as the CC itself.

Keep in mind though, MS has always used architectural "cheats" to improve performance. Many times it was out of necessity, given the limitations of the PC platform. Since Linux has its architectural roots in UNIX, cheating like that was just not allowed by definition. ;-)

In any event, there is a workaround for the benchmark problem in Linux. You can use the average of a number of results returned to determine the "true" CPCS, then disable benchmarking in the CC (the -no_time_test switch) and manually enter values in the client_state file to reflect the empirical data. You might have to use a Dev version of the client, because some of the "debug" features are stripped from the release versions.

For example, your P4M has a "true" CPCS of 0.002980 based on the average of 11 results from the same data pack, which represents a combined BM value of ~5150. So you could enter values of 2575 each for the FLOPS and IOPS, which would then get reported to the project, and the scheduler would send work based on that.
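
For anyone who wants to do the same back-calculation, here is a rough sketch of the arithmetic (my assumption here is that the "true" CPCS is simply granted credit divided by CPU seconds over a sample of returned results; the sample figures and function names are made up for illustration):

```python
# Rough sketch of the back-calculation described above, not code from the
# BOINC client. Assumption: "true" CPCS = granted credit / CPU seconds over
# a sample of returned results; the sample data below is invented.
def true_cpcs(results):
    """results: list of (granted_credit, cpu_seconds) pairs from the host's results page."""
    total_credit = sum(credit for credit, _ in results)
    total_cpu = sum(cpu for _, cpu in results)
    return total_credit / total_cpu

def benchmark_entries(cpcs):
    """Invert the CPCS formula and split the combined value evenly between FP and Int."""
    combined = cpcs * 1728000.0
    return combined / 2, combined / 2

sample = [(55.2, 18500.0), (54.8, 18300.0), (56.1, 18900.0)]   # invented numbers
print(true_cpcs(sample))              # ~0.00298 credit per CPU second
print(benchmark_entries(0.002980))    # ~(2575, 2575), i.e. a combined BM of ~5150
```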

If you run more than one project, you may want to collect some data about all of them and use that to decide what values to use for the BMs.

Alinator

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0

Won't the DCF fix the

Won't the DCF fix the inadequate scheduler problem after a few dozen WUs?

Ensor
Joined: 9 Feb 05
Posts: 49
Credit: 1,450,362
RAC: 0

RE: I wasn't able to find

Message 55767 in response to message 55743


Quote:
I wasn't able to find any specs on that particular model (probably need an NDA to get them), but decoding video needs the equivalent of a several-hundred-MHz Pentium-class FPU, so it can't be too limited....


I don't know any specifics about the chip used inside the iPod, but if it's anything like the PowerPC-based chips which IBM manufactures for use in set-top boxes, the audio/video decoding is done by HARDWARE decoders onboard the chip!

That's how they can get away with using such a low (by today's standards) clock speed. All the processor cores will be doing is streaming data from the storage medium into the appropriate decoders....

I used to work as an embedded systems engineer, and had a little experience with the ARM7TDMI about 5 years ago (used in Nintendo's "GameBoy Advance" of all things). It doesn't have an onboard FPU, and although it does contain a "32-bit ALU", that's integer-only - if you want floating point you have to attach an appropriate custom co-processor (think i386/i387), which AFAIK ARM don't manufacture anyway.

Unless the "PortalPlayer PP5021C" chip contains such a co-processor (highly unlikely) we're SOL on that point - and as you suggested, that information is most certainly protected by NDA's. :-(

Also, I certainly wouldn't like to run an iPod 24/7....it just wasn't designed for it. As others have said in this thread, overheating would almost certainly be an issue.

All of which is a real shame, because otherwise running BOINC on such a device is a damn good idea!!

TTFN - Pete.

ARM7TDMI Data Sheet


Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

RE: Won't the DCF fix the

Message 55768 in response to message 55766

Quote:
Won't the DCF fix the inadequate scheduler problem after a few dozen WUs?

Not really, since the RDCF is designed to compensate for the benchmarks' shortcomings when it comes to filling out your work cache for a given setting.

AFAIK the CPCS cutoff is an EAH-specific server tweak, and results from the decision to stay with the fixed 2-week deadline.

In any event, we're wandering pretty far off from the topic of BOINC on iPods specifically, so we should probably spawn a new thread to continue this. ;-)

Alinator

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494,410
RAC: 0

RE: Halved?? Wow, didn't

Message 55769 in response to message 55764

Quote:

Halved?? Wow, I didn't realize that. No wonder E@H thinks I'm running a 1 GHz computer (I actually own a 2 GHz one).


Well, in my case, the CPU gets identified correctly, but with half the scores... I don't mind, since all my projects except HashClash (which is paused atm anyway) give server-determined credit. But it's really quite a huge difference.

Quote:

Nevertheless, Debian is not a very good choice for running Einstein (no doubt your choice didn't depend on BOINC). I started running Debian as well, but when I installed Gentoo, crunching times improved by about 15%...
Happy crunching,
Bert

Well, I can't confirm that one at all. Before I switched to Debian I actually had Gentoo installed for a couple of weeks, but I switched because I had problems with my WLAN card and also realized that I preferred the handling of Debian (just personal preference/habit, so no need to flame me, you Gentoo fans out there ;-) they are both very good distros). The switch didn't affect my WU completion times at all. Both are better than Windows (which I guess is largely due to the low memory on my laptop), but roughly the same speed as each other, and my laptop is crunching just fine under Debian.

I've heard quite a few times that Gentoo is "faster" (which was actually one of the reasons I installed it in the first place), but I've since read on a couple of Linux sites that this is more of a theoretical advantage unless you are REALLY into tuning your system (which Gentoo makes easier), and that's my impression as well. Debian seems quite "lean and mean", too: if it is installed correctly and you cut down on the graphical stuff a bit, it's no problem getting good performance out of it. At least on my system it isn't. Maybe it varies with the computer you use.

clownius
Joined: 16 Jun 06
Posts: 42
Credit: 2,164,665
RAC: 0

RE: Nope, earlier PIII's

Message 55770 in response to message 55751

Quote:

Nope, earlier PIII's aren't fast enough to draw long results. I know 550 and 600 MHz Katmai aren't. A 700 MHz Coppermine might just squeak in barely.

Generally I've found you need to be in the 1 GHz ballpark for the longs.

Alinator

I have an old 700 MHz Celeron Coppermine. It doesn't do Einstein atm, but when it did it could still easily do a long WU in around a day and a bit. These old workhorses still get results in; they just take a while. The only project I wouldn't throw them at is Climate Prediction; otherwise they make deadlines easily enough.

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494,410
RAC: 0

No, I wouldn't recommend

No, I wouldn't recommend crunching CPDN on a Coppermine. That could get really ugly. My experiences with my laptop (1.3 GHz Celeron CPU, Banias core, and 496 MB of memory) were already bad enough: the furthest I ever got into a model was 8 hours. And no, it was NOT overheating, and apart from CPDN it was running (and crunching) without any problems. Leave the somewhat weaker boxes (like old P3s or similarly slow laptops) for projects like Einstein, and attach only really fast computers to CPDN.
Thanks for sharing your experiences with P3 boxes, guys. That sounded really encouraging; it should definitely be worth attaching the thing now that I'm back from vacation.
