Hello Bikeman,
I have not noticed that Windows S5R3 is faster than S5R2, as my credit has dropped from a bit over 16 to about 14 cr/h now (AMD 4800+, Win XP). All it shows is that there has not been as big a drop as in the Linux application.
Hi
Most Windows users have reported a more or less constant credits/hour rate when comparing S5R2 and S5R3, but with the huge variation in runtime it's difficult to tell unless you run quite a lot of units, as Annika already noted.
Many people have reported that the original S5R3 Linux app is slower than the original S5R3 Windows app. The latest Linux beta app closed this gap considerably, but some of its speed improvements (there's different detection/handling code for floating-point anomalies) will also benefit the Windows app once they are used there as well, so after the next Windows beta app we will be able to assess how much of a gap is still left.
CU
H-BE
Hopefully that will be coming along soon, looking forward to it! :)
I finished my first 4.09 run. It is faster than 4.02. But what is the meaning of this?
FPU status flags: COND_3 PRECISION
FPU masked exceptions now: 37f: PRECISION UNDERFLOW OVERFLOW ZERO_DIVIDE DENORMALIZED INVALID
FPU masked exceptions set: 37e: PRECISION UNDERFLOW OVERFLOW ZERO_DIVIDE DENORMALIZED
Tullio
This is part of the new code that handles floating point exceptions. What you see in the output is the contents of the FPU Status Word, which contains several flags for various conditions the FPU can be in. Some of those conditions are harmless; others indicate a problem with either the hardware or the software. Those that are serious will now cause an exception that produces a stack dump. The other conditions are reported in the output just to verify that this new feature is working as expected.
CU
H-BE
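For anyone curious how that reporting works in code: the log above suggests the app inspects the x87 status and control words directly; a portable approximation of the same idea ("report the harmless flags, let the serious ones trap") using standard C99 <fenv.h> might look like the following sketch, which is only an illustration and not the actual Einstein@Home source:

/* Hypothetical sketch: report and clear floating-point exception flags.
 * The real app apparently reads the x87 FPU words directly; this uses the
 * portable C99 <fenv.h> interface instead.
 * (Strictly conforming code also needs: #pragma STDC FENV_ACCESS ON) */
#include <fenv.h>
#include <stdio.h>

static void report_fp_flags(void)
{
    int raised = fetestexcept(FE_ALL_EXCEPT);      /* flags raised so far */

    printf("FP status flags:");
    if (raised & FE_INEXACT)   printf(" PRECISION");
    if (raised & FE_UNDERFLOW) printf(" UNDERFLOW");
    if (raised & FE_OVERFLOW)  printf(" OVERFLOW");
    if (raised & FE_DIVBYZERO) printf(" ZERO_DIVIDE");
    if (raised & FE_INVALID)   printf(" INVALID");
    printf("\n");

    /* Harmless conditions are merely reported and cleared; a real app could
     * additionally unmask the serious ones (platform-specific) so that they
     * raise SIGFPE and produce a stack dump. */
    feclearexcept(FE_INEXACT | FE_UNDERFLOW);
}

int main(void)
{
    volatile double tiny = 1e-320;    /* subnormal: provokes underflow/inexact */
    volatile double y = tiny / 3.0;
    (void)y;
    report_fp_flags();
    return 0;
}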
Well, I started processing these S5R3 Beta WU's to see if there was a speed-up in processing time (and consequently an increase in credit per hour).
S5R3 application 4.02 started out at over 66,000 seconds, slowly came down to a bit over 60,000 seconds, then dropped suddenly to between 52,570 and 50,065 seconds.
Credit started at just 12.02 cr/h, rising to 15.96 cr/h (S5R2 gave me 21.4+ cr/h).
I then switched to the Beta app as I was so disappointed with the new S5R3 WU's.
S5R3 application 4.09 started well, with the first 3 showing a great drop in processing times:
40,307.92 seconds for WU 87374313
40,033.85 seconds for WU 87374374
43,621.74 seconds for WU 87375124
Giving credit of 19.82, 19.96 and 18.32 cr/h respectively.
Unfortunately the next one went back to the same level as the non-Beta application, with 52,867.22 seconds for WU 87376902,
dropping cr/h back to 15.11.
So S5R2 was giving 24 cr/h, slowly dropping over the last few months to 21.43 with the last of the S5R2 work units.
S5R3 started by giving me 12 and averaged out at 14.4 cr/h, a 32.76% Drop from S5R2.
The first 3 Beta WU's gave me an average of 19.34 cr/h, a 9.75% Drop from S5R2 but a 25.54% Increase over the first S5R3 WU's.
The last one is back to a 29.49% Drop from S5R2 and only a 4.63% Increase over the first S5R3 WU's.
What happened to the speed up?
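(For reference, the percentage figures in these reports appear to be plain relative changes; for example the drop from S5R2 works out as (21.43 - 14.41) / 21.43 ≈ 0.3276, i.e. the quoted 32.76%, taking the unrounded 14.41 cr/h average.)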
Forgot to mention that all my S5R3 WU's, including the Beta WU's, and the last S5R2 ones that I did are all of the same 0518.05 frequency (on my AMD Opteron 285 Linux machine).
Have now processed 7 new Beta WU's on the above machine and added another computer as well (also an AMD Opteron 285), which has done 10 Beta WU's.
Overall I am a lot happier with this application than I was with the first S5R3 application.
On my First Opteron Computer (Linux Fedora Core 3)
Last S5R2 batch 0518.05 took 76,111 sec for 21.47 cr/h
First S5R3 batch 0518.05 took 66,500 sec dropping to 60,500 sec then to 50,000 sec (4.02 app / 12 results / ave 56,037 sec) for an ave 14.41 cr/h
Ave Processing speed INCREASE over S5R2 is 26.37%
Credit per Hour is a 32.88% DROP from S5R2
First BETA S5R3 batch 0518.05 took 52,800 sec dropping to 38,900 sec (4.09 app / 7 results / ave 43,125 sec) for an ave 18.53 cr/h
Processing speed INCREASE over S5R2 is 39.40%
Processing speed INCREASE over S5R3 app 4.02 = 23.04%
Credit per Hour is a 13.69% DROP from S5R2, 22.23% INCREASE on S5R3
On my 2nd Opteron computer (Linux Fedora Core 6), the BETA S5R3 4.09 app gives:
Batch 0512.75 took 46,900 seconds dropping to 40,600 seconds (4 results)
Batch 0512.80 took 36,500 seconds dropping to 35,300 seconds (6 results)
Combined batches 0512.75 and 0512.80 had an average processing time of 39,085 seconds and an average cr/h of 20.44 cr/h
On this computer, the previous S5R2 batch 0512.xx processed in 71,650 sec for 22.50 cr/h.
Beta 4.09 S5R3 gives a Processing Speed INCREASE of 45.45%
and Credit per Hour is a 9.16% DROP from S5R2.
The speed-up is very good and seems to be getting better. Cr/h is starting to increase and approach what we had in the S5R2 batches.
So I believe the BETA 4.09 application should be released as the mainstream application that Linux users should be using.
It looks like there is a slight (i.e. rarely showing up) problem with the new checkpointing code that's in 4.07 and in 4.09, too (see here). I'd like to look into that first.
BM
What do you mean by "rarely showing"? Is it worth putting the beta app on a laptop which atm refuses to go into hibernate, thus forcing me to shut down completely (and therefore losing my BOINC work from the last checkpoint)?
The variation in runtime in S5R3 is much higher than in S5R2; hopefully this can be reflected in the future by more fine-grained credit allocation. The variation is routinely 20% and sometimes more.
In my case, nearly a 33% variation between fastest & slowest so far, with the same credit ...
http://einsteinathome.org/host/1028744/tasks
Looking forward to those tweaks.
My last 3 results crashed
http://einsteinathome.org/task/87551982
http://einsteinathome.org/task/87647494
http://einsteinathome.org/task/87690846
The last 2 seem to be the same error.
- Knorr
What do you mean by "rarely
)
What do you mean by "rarely showing"? Is it worth putting the beta app on a laptop which atm refuses to go into hibernate, thus forcing me to shut down completely (and therefore losing my BOINC work from the last checkpoint)?
RE: The variation of
)
In my case a nearly 33% variation between fastest & slowest so far with the same credit ...
I got a couple of compute errors under heavy system load (load average up to 20!).
The system was sometimes hardly responsive to any user action.
There was (mostly) no memory shortage, but heavy I/O load (tar-gzip-split from disk to disk with data sizes around 10 GB, a simultaneous YOU online update, etc.) and CPU load. Other projects running at the same time: ABC@H (64-bit app): some problems; Seti@H enhanced (64-bit app): no problems.
System in short: AMD64 X2 4400 at 2.2 GHz, 2 GB RAM, no overclocking, 64-bit OpenSUSE 10.2, 64-bit BOINC client 5.10.8.
All WUs ended with the same error:
5.10.8
process got signal 11
2007-10-14 22:36:19.4634 [normal]: Built at: Oct 4 2007 22:05:46
2007-10-14 22:36:19.4636 [normal]: Start of BOINC application 'einstein_S5R3_4.09_i686-pc-linux-gnu'.
[...]
Usual WU-times are around 41k-55k seconds.
The WUs crashed after the following times:
result after 32,572.69166 sec.
result after 44,278.523226 sec.
result after 15,650.6221 sec.
result after 0.048002 sec
result after 4,440.561517 sec
result after 1.736107 sec.
result after 12,303.488919 sec.
result after 519.168445 sec.
Today the system is running under normal conditions and has successfully finished a result.
I hope this information is somehow helpful for further development of the Einstein application.
cu,
Michael
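A side note on the "process got signal 11" lines above: the new app apparently installs handlers for fatal signals so it can log a stack dump before exiting, as H-BE described earlier in the thread. A minimal, hypothetical sketch of that general Linux/glibc technique (not the actual Einstein@Home code) would be:

/* Hypothetical sketch: print a stack dump when a fatal signal arrives.
 * Uses glibc's backtrace(); a production handler has to be far more careful
 * about async-signal-safety than this illustration is. */
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    void *frames[64];
    int n = backtrace(frames, 64);                   /* capture the call stack */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);  /* write it to stderr     */
    _exit(128 + sig);                                /* skip normal cleanup    */
}

int main(void)
{
    signal(SIGSEGV, crash_handler);   /* "signal 11" */
    signal(SIGFPE,  crash_handler);   /* unmasked floating-point exceptions */

    /* ... the actual computation would run here ... */
    return 0;
}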