D has to be a complete unknown. But if either app in D were to be made the stock app, then it would have to be substituted into A.
See, that's where this wrapper thing that Bernd is working on would allow them to bundle both codepaths / apps in a single version. If it works out, the x87 code, which would equate to stock SETI, would be called if the processor support wasn't there for the SSE version, which would equate to the anonymous-platform apps (optimized SETI).
Since it is all bundled in one, if the credit target for the SSE side is set for the x87 (stock) SETI app, then you are not comparing apples to apples...
(or penguins to penguins...see your PM...and sorry for the length)
The detection of SSE is also in the Seti stock app, and that's why some people do not see an improvement from installing an optimised app on Seti.
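For context, the runtime detection described in the two posts above — one binary that checks the CPU and takes the SSE path only when the hardware supports it, falling back to x87 otherwise — can be sketched roughly like this. It is a generic illustration, not Bernd's wrapper or the actual Seti/Einstein code, and the kernel function names are invented:

```cpp
#include <cstdio>

// Two implementations of the same science kernel; the bodies are placeholders.
void process_workunit_sse() { /* vectorized (SSE/SSE2) codepath */ }
void process_workunit_x87() { /* plain x87 fallback codepath   */ }

int main() {
    // GCC/Clang builtin that queries CPUID at run time; other compilers
    // would read CPUID directly instead.
    if (__builtin_cpu_supports("sse2")) {
        std::puts("SSE2 detected: using the optimized codepath");
        process_workunit_sse();
    } else {
        std::puts("No SSE2: falling back to the x87 codepath");
        process_workunit_x87();
    }
    return 0;
}
```

Bundled this way, the same executable earns credit under one application name regardless of which codepath actually ran, which is part of why the credit-target question in the quote above matters.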
I asked Joe about that, but never got a response... So you're saying that the only increase my systems see by having the optimized app is totally due to SSE2, and that if I were to install the SSE version, it would perform identically to the stock release?
Edit: Sounds like a good test scenario, so I'll run down my work here and at Cosmology and then do a test...
I don't know first hand; all the old computers were retired from Seti, too costly to run. An AMD Athlon CPU alone uses more power than an entire Pentium M system (desktop with graphics card). But the latest post on Seti is 729729.
I'll follow up some over there...
FYI and FWIW, a test doesn't seem to be needed at this point, since Joe clarified some things.
OK, I've been running the same projects with the same resource shares on the same machines for 9 days now.
Those settings are (resource shares, not %):
Einstein 25
Rosetta 25
ABC 25
and
Malaria control 50
Is Malaria twice the credit/week of the others, which should all be similar to one another? Remember, Rosetta is the closest to the BOINC standard benchmark method. It's interesting and reaffirming that Malaria Control IS roughly twice that of Rosetta.
As a reminder, I am using the 64-bit ABC Linux app, which does more in the same time than the 32-bit one, and I'm using the 4.38 Einstein app for power users. So if we are to adjust credits upwards for an "optimized" app, then I have to assume both the ABC and Einstein apps are 3x faster than their "standard" apps, and that Malaria and Rosetta don't have one iota of optimization/improvement in their apps.
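A note on reading those numbers: Malaria Control has twice the resource share of the other three (50 vs. 25), so "twice the credit/week" works out to roughly the same credit per unit of CPU time. A minimal sketch of that normalization, with invented weekly-credit figures standing in for the real totals:

```cpp
#include <cstdio>

struct Project { const char* name; double share; double weekly_credit; };

int main() {
    // Resource shares are from the post above; the weekly credit numbers
    // are invented purely to illustrate the normalization.
    Project projects[] = {
        {"Einstein",        25, 1000},
        {"Rosetta",         25, 1000},
        {"ABC",             25, 1000},
        {"Malaria Control", 50, 2000},
    };
    // Credit per unit of resource share: if two projects grant the same
    // credit per CPU-hour, this ratio comes out equal even though Malaria
    // gets twice the CPU time.
    for (const Project& p : projects)
        std::printf("%-16s %6.1f credit per share unit\n",
                    p.name, p.weekly_credit / p.share);
    return 0;
}
```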
I'm absolutely sure there's some reason the following proposal wouldn't work, but am too tired to think it through. Anyway, here goes.
1) eradicate the benchmark scoring method.
2) enforce a project-fixed credit method.
3) have each project take one moderately new PC at their disposal and attach it to 9 other projects (at equal shares) chosen at random (like throwing a dart at a full list of projects), and run them on that PC for a month.
4) at the end of the month, adjust their credit to match the average of the other 9 projects. (note: if the project doesn't issue steady work it would require another dart throw).
5) lather, rinse, and repeat for the next month.
Eventually, all the projects would gravitate towards some average that is a lot closer than where things are now. Also, each project would be able to see what other projects were doing and gauge their own credit based upon this. They also might be able to better judge how their own project compares to others without having to depend upon the "word" of a user who reports their findings (like my earlier one).
It's probably not doable, but was fun thinking up.
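For what it's worth, step 4 of the proposal boils down to a simple rescaling. A toy sketch, assuming made-up credit-per-hour figures for the nine sampled projects and for the adjusting project itself:

```cpp
#include <cstdio>
#include <vector>

double average(const std::vector<double>& v) {
    double sum = 0.0;
    for (double x : v) sum += x;
    return sum / v.size();
}

int main() {
    // Credit per CPU-hour the reference PC observed at each of the nine
    // randomly drawn projects over the month (invented example figures).
    std::vector<double> observed_rates = {9.5, 11.0, 23.0, 10.2, 12.8,
                                          15.0, 9.9, 14.1, 13.3};
    double our_rate = 30.0;  // what our own project currently grants per hour

    // Step 4: scale our per-workunit credit so that next month's rate
    // matches the average of the other nine projects.
    double target = average(observed_rates);
    double scale  = target / our_rate;
    std::printf("target %.2f cr/hr, current %.2f cr/hr -> scale credits by %.3f\n",
                target, our_rate, scale);
    return 0;
}
```

Repeated every month (step 5), each project's rate gets pulled toward the moving cross-project average, which is the gravitation described above.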
Sounds good to me. There's too much fluctuation in it anyway, not to mention the potential for cheating it.
Quote:
2) enforce a project-fixed credit method.
Do you mean server-side, like here, or would flop counting also work for you?
Quote:
3) have each project take one moderately new PC at their disposal and attach it to 9 other projects (at equal shares) chosen at random (like throwing a dart at a full list of projects), and run them on that PC for a month.
4) at the end of the month, adjust their credit to match the average of the other 9 projects. (note: if the project doesn't issue steady work it would require another dart throw).
5) lather, rinse, and repeat for the next month.
Sounds OK to me... I was going to bring up how the odds would allow a random selection of a bulk set of faster projects or slower projects on the same "dart throw", due to optimized code vs. unoptimized code, but as long as you're doing it over time, it should start normalizing...
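On the flop-counting alternative mentioned just above: the idea is that the application tallies the floating-point operations it actually performs and claims credit from that count, rather than from a whole-host benchmark. A bare-bones sketch of the counting idea only; report_flops() is a made-up placeholder, not the real BOINC API:

```cpp
#include <cstdio>
#include <vector>

// Placeholder standing in for whatever call reports the operation count
// to the project; not a real BOINC function.
void report_flops(double flops) {
    std::printf("claiming credit for %.3e floating-point ops\n", flops);
}

int main() {
    std::vector<double> a(1000, 1.5), b(1000, 2.5);
    double flop_count = 0.0;

    // Inner loop of some science kernel: one multiply and one add per
    // element, so the tally goes up by 2 per iteration.
    double dot = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        flop_count += 2.0;
    }

    // The credit claim reflects work actually done, independent of how
    // fast (or how inflated) the host's benchmark happens to be.
    report_flops(flop_count);
    std::printf("dot = %f\n", dot);
    return 0;
}
```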
It's a real dilemma.
If you adopt a scheme like the one Astro proposed and change the credits every month, the credits will be useful for cross-project comparisons on platforms similar to the reference system used, but useless for pretty much everything else.
It might be a rather radical approach, but what if every project completely ignored the other projects, defined its own credit measures, and third-party stats sites like BOINCstats etc. calculated monthly "exchange rates", either based on the input they get from crunchers (as with the popular cross-project stats matrix that uses the credit stats of multi-project hosts) or from their own experiments? Basically, the "BOINC combined" stats would no longer be the mere sum of project credits for one day but the inner product with a scaling vector that would be updated weekly or monthly. To stress the whole point, projects should even give different names to what is now called "cobblestones", e.g. at E@H you might get "271.3 Alberts per unit", and on Seti@Home you might get 1000 ALFs instead... whatever :-).
CU
Bikeman
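Bikeman's "inner product with a scaling vector" is easy to make concrete: the combined total becomes the sum over projects of (project credit × that project's current exchange rate), with the rate vector refreshed by the stats sites every month. A toy sketch with invented daily credits and exchange rates:

```cpp
#include <cstdio>

int main() {
    // Daily credit earned at each project, in that project's own unit
    // ("Alberts", "ALFs", ...), plus the exchange rate a stats site might
    // publish for the current month. All numbers are invented.
    struct Entry { const char* project; double daily_credit; double exchange_rate; };
    Entry entries[] = {
        {"Einstein@Home", 542.6, 1.10},
        {"SETI@home",    2000.0, 0.25},
        {"Rosetta@home",  180.0, 1.00},
    };

    // The "BOINC combined" figure is no longer a plain sum of project credits
    // but the inner product of the credit vector with this month's scaling
    // (exchange-rate) vector.
    double combined = 0.0;
    for (const Entry& e : entries)
        combined += e.daily_credit * e.exchange_rate;

    std::printf("combined daily credit: %.1f\n", combined);
    return 0;
}
```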
I'm currently running Malaria on an XP 2600+ w/Win XP Pro. Malaria returns approx. 9.4-9.5 cr/hr while Einstein returns just under 23 cr/hr using the 4.36 power Windows app. My system actually claims about 10.6 cr/hr at Malaria, but due to lower Linux benchmarks I usually only get the 9.4-9.5 cr/hr granted.