Windows S5R3 SSE power App 4.36 available

Ed1934158
Joined: 10 Nov 04
Posts: 62
Credit: 14,481,483
RAC: 0

RE: Before this app can be

Message 79665 in response to message 79663

Quote:
Before this app can be released for general use it needs to be modified to detect whether SSE is available on the system. If it doesn't, we'll have perhaps thousands of hosts bombing as soon as it starts.


How hard is it to program that feature?

Nothing But Idle Time
Joined: 24 Aug 05
Posts: 158
Credit: 289,204
RAC: 0

RE: .... So to have the

Message 79666 in response to message 79660

Quote:
.... So to have the correct estimated time to completion, I should have an RDCF of 0.302000, which in my opinion is way too low for a project like Einstein.


Are you implying that the task estimates are too high given the newer and faster app? Is a RDCF that approaches zero a bad precedent?

archae86
Joined: 6 Dec 05
Posts: 3,008
Credit: 4,848,848,110
RAC: 3,350,565

RE: RE: .... So to have

Message 79667 in response to message 79666

Quote:
Quote:
.... So to have the correct estimated time to completion, I should have an RDCF of 0.302000, which in my opinion is way too low for a project like Einstein.

Are you implying that the task estimates are too high given the newer and faster app? Is a RDCF that approaches zero a bad precedent?

If so, Einstein is only just now catching up to SETI on Windows XP platforms.

I've been running the Windows Einstein apps since they came out.

For comparison, here are the task/result duration correction factors for my four hosts on SETI and Einstein:

CPU     SETI   Einstein
Q6600   .154   .207
E6600   .138   .205
Cppmn   .681   .530
Bnias   .235   .302


Granted, the Einstein numbers are trending downward, and will continue somewhat further, but I suspect on the 4.36 release they won't get below the SETI values on my main hosts.

I'm not sure what bad effect Ageless has in mind, but it seems to me that the main one is that new hosts, or newly reset hosts, will prefetch less work than the long-term value would give them. Many folks who post on the SETI forums to complain about large queues of abandoned work might regard that as a feature, rather than a problem.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282,700
RAC: 0

RE: RE: RE: .... So to

Message 79668 in response to message 79667

Quote:
Quote:
Quote:
.... So to have the correct estimated time to completion, I should have an RDCF of 0.302000, which in my opinion is way too low for a project like Einstein.

Are you implying that the task estimates are too high given the newer and faster app? Is a RDCF that approaches zero a bad precedent?

If so, Einstein is only just now catching up to SETI on Windows XP platforms.

I've been running the Windows Einstein apps since they came out.

For comparison, here are the task/result duration correction factors for my four hosts on SETI and Einstein:

CPU     SETI   Einstein
Q6600   .154   .207
E6600   .138   .205
Cppmn   .681   .530
Bnias   .235   .302

Granted, the Einstein numbers are trending downward, and will continue somewhat further, but I suspect on the 4.36 release they won't get below the SETI values on my main hosts.

You are quite correct that Einstein is only just now able to catch up with the potential over at SETI. Sure, the stock app won't yield those low values, but the optimized app will.

Just for kicks, here's how my AMD stacks up:

CPU     SETI   Einstein
3700+   .195   .302

I don't know what Jord is seeing as a "problem", unless it is a concern about huge queues and abandoned work...

Jord
Joined: 26 Jan 05
Posts: 2,952
Credit: 5,779,100
RAC: 5

archae86 wrote: I'm not sure

Message 79669 in response to message 79667

archae86 wrote:
I'm not sure what bad effect Ageless has in mind


At this moment a severe headache, which makes thinking problematic.

Nothing But Idle Time wrote:
Are you implying that the task estimates are too high given the newer and faster app? Is a RDCF that approaches zero a bad precedent?


The problem with the low RDCF is that eventually we'll change to another application, and in the long term to a new S5 run. Since the effects of the new application, or the run time of the completely new run's initial tasks, won't be known in advance, machines running with this low an RDCF will fetch far more work than they can possibly crunch within the allotted time.

So yes to your second question, it's bad. With a lower RDCF, BOINC estimates the times of the present work more correctly. But change the work, which will eventually happen, and tasks that now run in 4 hours will then run in 48 hours (for example).

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,516
Credit: 468,381,959
RAC: 57,948

RE: archae86 wrote:I'm not

Message 79670 in response to message 79669

Quote:
archae86 wrote:
I'm not sure what bad effect Ageless has in mind

At this moment a severe headache, which makes thinking problematic.

Nothing But Idle Time wrote:
Are you implying that the task estimates are too high given the newer and faster app? Is a RDCF that approaches zero a bad precedent?

The problem with the low RDCF is that eventually we'll change to another application, and in the long term to a new S5 run. Since the effects of the new application, or the run time of the completely new run's initial tasks, won't be known in advance, machines running with this low an RDCF will fetch far more work than they can possibly crunch within the allotted time.

So yes to your second question, it's bad. With a lower RDCF, BOINC estimates the times of the present work more correctly. But change the work, which will eventually happen, and tasks that now run in 4 hours will then run in 48 hours (for example).

I'm not sure I get your point. The effect you mention will happen whenever a project goes from short-running WUs/apps to longer-running ones, no matter what the absolute value of the RDCF is (be it 0.1 or 100), right?

There must be some events that reset the RDCF to 1.0, though I can't remember what those are. Anyone?

CU

Bikeman

Nothing But Idle Time
Joined: 24 Aug 05
Posts: 158
Credit: 289,204
RAC: 0

RE: There must be some

Message 79671 in response to message 79670

Quote:
There must be some events that reset the RDCF to 1.0, though I can't remember what those are. Anyone?
CU
Bikeman


Resetting the project also resets the DCF to 1.0; I've observed it many times.
As for a mix of short- and long-running tasks: the DCF will be middle of the road, moving up and down slightly in response to the varying task lengths. But Ageless is right too; a consistently low task time will ultimately settle the DCF at some low value, and then when a new set of longer-running tasks comes along there will be a tendency to overload the cache, though it's anybody's guess whether that will lead to missed deadlines.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282,700
RAC: 0

RE: RE: There must be

Message 79672 in response to message 79671

Quote:
Quote:
There must be some events that reset the RDCF to 1.0, though I can't remember what those are. Anyone?
CU
Bikeman

Resetting the project also resets the DCF to 1.0; I've observed it many times.
As for a mix of short- and long-running tasks: the DCF will be middle of the road, moving up and down slightly in response to the varying task lengths. But Ageless is right too; a consistently low task time will ultimately settle the DCF at some low value, and then when a new set of longer-running tasks comes along there will be a tendency to overload the cache, though it's anybody's guess whether that will lead to missed deadlines.

I think Jord has a point, but only with regard to people running stock applications, which are the majority of the user base. I think it would be a good idea to try to keep the DCF fairly close to 1 for the stock app. Those of us using optimized apps should be expected to "keep up with the bouncing ball" and realize that there will be a period of relearning when longer-running tasks come out.

The "pain" of all of this could be lessened by testing new applications / new longer-running tasks in a separate beta area. This would allow testing / debugging of the application as well as the estimates used by the WU generator, before releasing out to the GUM (Great Unwashed Masses)...

Edit: One large caveat is if feature detection is worked out so that one application can go out to the GUM and it will do x87 on processors that don't support SSE, but SSE on those that do...

Winterknight
Joined: 4 Jun 05
Posts: 482
Credit: 189,581,804
RAC: 108,343

The problem of RDCF or should

The RDCF (or should that be TDCF now?) is a problem at the moment on SetiBeta. They are running two applications: SETI Enhanced v6 and AstroPulse (AP) v4.26. Enhanced is fairly well optimised and lowers the DCF, but AP is relatively new, still ironing out basic bugs, and drives the DCF up.

If you have run Enhanced v6 for some time and the project then wants to test the next AP app, you end up downloading too many 50+ hr units; and vice versa, you end up not downloading enough 30 min Enhanced units.

hotze33
Joined: 10 Nov 04
Posts: 100
Credit: 367,079,550
RAC: 3,088

I have the first results from

I have the first results from my E8400 @ 4 GHz:

4.36
Frequency : 828.5
Period of task cycle = 140.8
Number of points = 13
Minimum runtime in data = 10024.09
Maximum runtime in data = 10483.66
Estimated peak runtime = 11905
Estimated average runtime = 10631
Estimated trough runtime = 9904
Estimated runtime variance = 0.168

4.32
Frequency : 828.55
Period of task cycle = 140.8
Number of points = 9
Minimum runtime in data = 12291.81
Maximum runtime in data = 14205.61
Estimated peak runtime = 15986
Estimated average runtime = 13648
Estimated trough runtime = 12314
Estimated runtime variance = 0.23

So there's also a decrease in runtime variance (as on my Q6600). Keep up the good work.
