So the validation process deemed the two results insufficiently similar.
I'll speculate that at least one of the two CPUs committed an error, but not of a kind that generated an access violation or one of the (few) other things that get caught at run time.
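For what it's worth, a validator of this kind typically compares the two results' candidate values numerically rather than byte-for-byte, so a silently corrupted number is exactly what it catches. A minimal sketch of the idea in Python; the layout and the 1% tolerance are my assumptions, not Einstein@Home's actual validator:
[pre]
# Sketch of tolerance-based result comparison, roughly how a BOINC
# project validator might work. The 1% tolerance is an illustrative
# assumption, not Einstein@Home's actual rule.

def results_similar(a, b, rel_tol=0.01):
    """Compare lists of candidate values returned by two hosts."""
    if len(a) != len(b):
        return False  # different number of candidates: fail outright
    for x, y in zip(a, b):
        # A hardware error that corrupts a value without crashing the
        # app shows up here, not as a run-time access violation.
        if abs(x - y) > rel_tol * max(abs(x), abs(y), 1e-30):
            return False
    return True

print(results_similar([1.000, 2.500], [1.000, 2.500]))  # True
print(results_similar([1.000, 2.500], [1.000, 2.534]))  # False -> invalid
[/pre]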
Since the Einstein app isn't a FLOP counter, I assume the claim is based on benchmarks and the DCF in some manner. S5R2 has a much higher DCF than S5R1 did.
There is an effort underway to make all projects on the BOINC system, which includes Rosetta, award the same amount of credit/hr (or whatever other unit of time you wish to use). The idea put forth by the Einstein staff is that they were giving "too much". The alternative view is that the other projects were giving "too little".
Both of these viewpoints accomplish the same thing. Hypothetical example:
Einstein offers 10. Rosetta offers 8. Einstein is granting 2 more than Rosetta.
Einstein offers 8. Rosetta offers 8. Einstein and Rosetta offer the same.
Einstein offers 10. Rosetta offers 10. Einstein and Rosetta offer the same.
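Either way it is just a per-project scaling factor applied to the granted rate; a quick sketch of the arithmetic, using the hypothetical rates above:
[pre]
# Cross-project credit parity reduces to one scaling factor per project.
# The rates are the hypothetical credits/hr from the example above.
rates = {"Einstein": 10.0, "Rosetta": 8.0}

target = min(rates.values())    # the "Einstein gives too much" view
# target = max(rates.values())  # the "others give too little" view

scale = {p: target / r for p, r in rates.items()}
print({p: r * scale[p] for p, r in rates.items()})  # both equal now
[/pre]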
Edit: My objection to lowering EAH credits is that I feel other projects have not put as much time into their applications or credit systems as EAH has. EAH should be considered the "leader", meaning that the other projects should strive to bring up their application performance or credit levels, rather than have the hard work that the staff here have put in be continually "devalued".
Wow.... the Floating Point Speed is taking a steep dive...!!!
Any ideas why?
RE: RE: Checked, but no
Thanks for the info, archae86.
I have 82 client errors on my
I have 82 client errors on my P4 2.8C PC with HT enabled.
http://einsteinathome.org/host/452134/tasks&offset=0
RE: Since the Einstein app
The claimed credit, if not FLOPS-based, is benchmark × time based. The DCF (result duration correction factor) is a correction factor for predicting the completion time of work units for that computer crunching that project. If the predicted time, calculated again from the benchmarks, is accurate, the DCF will be 1. If the computer is faster than predicted, the DCF is lowered slowly; if it is slower, it is increased immediately.
The DCF is used to adjust the amount of work downloaded. As most computers had a low DCF before S5R2, it is possible they downloaded too many new units, and some will not make the deadline.
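A sketch of that correction loop as I understand it; the 10% smoothing step for fast hosts is my guess at "slowly", and the real client uses its own constants:
[pre]
# Sketch of the DCF update described above. The 10% smoothing for
# fast hosts is an illustrative guess; BOINC's client has its own
# constants.

def update_dcf(dcf, predicted_hours, actual_hours):
    ratio = actual_hours / predicted_hours
    if ratio < 1.0:
        # Faster than predicted: move only 10% toward the new value.
        return dcf * (1.0 + 0.1 * (ratio - 1.0))
    # Slower than predicted: jump up immediately, so the next work
    # fetch doesn't over-download.
    return dcf * ratio

dcf = 0.4   # a typical low pre-S5R2 value
dcf = update_dcf(dcf, predicted_hours=10.0, actual_hours=54.0)
print(round(dcf, 2))  # 2.16 -- raised at once after one S5R2 unit
[/pre]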
Andy
First R2 WU done, few days
First R2 WU done a few days ago... it took over 54 h non-stop (it would have been a few hours faster if I hadn't had to compile some software). Got over 400 credits.
Details below:
http://einsteinathome.org/workunit/33377773
RE: First R2 WU done, few
Yes, that's ~7.3 credits/hour (a bit over 400 credits in ~55 hours), which compares with ~13.6 for the most recent R1 on that machine (Athlon XP 2500+).
My Baby (AMD K6/2 500 MHz) got 1.32 credits/hour on its first R2, compared with its most recent R1 at ~0.75.
Swings and slides ..... :-)
Both of these are to be expected, as per the earlier discussion in this thread.
Cheers, Mike.
(edit) That also shows that the BOINCView estimate on that Baby WU, at ~1.7, was a bit high.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: That also shows that
Looking a bit further into it, I think that BOINCView shows - accurately - the benchmark*time credit claim. I had one on SETI Astropulse Beta, which doesn't yet use FLOPs, which claimed 300+, and BOINCView got it right to two decimal places.
From what I said earlier in the thread (BOINCView over-estimates my Celeron and Mike's K6, is about right on elderly P4s, and under-estimates Core 2s), I hope that's another nail in the coffin of benchmark credit claims.
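For reference, the claim BOINCView seems to reproduce is (as far as I know) just the classic formula below; the cobblestone scaling is my reading of the BOINC docs, not checked against the client source:
[pre]
# Classic benchmark * time credit claim. Assumes the reference host
# (1 GFLOPS Whetstone + 1 GIPS Dhrystone) claims 100 credits per CPU
# day -- my reading of the BOINC docs, not verified against the source.

def claimed_credit(cpu_seconds, whetstone_flops, dhrystone_iops):
    ops_per_sec = (whetstone_flops + dhrystone_iops) / 2.0
    return cpu_seconds / 86400.0 * ops_per_sec / 1e9 * 100.0

print(claimed_credit(86400, 1e9, 1e9))  # 100.0 for the reference host
[/pre]
That would also explain the skew: the benchmarks flatter some architectures (my Celeron, the K6) and short-change others (Core 2), and the claim inherits that bias directly.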
I think that a LOT of
I think that a LOT of workunits won't make their deadlines, which are VERY short for these enormous WUs. Nowadays I mostly crunch Rosetta, and for some reason I got 2 S5R2 units at the same time. Since the upcoming weekend is extra long ('Valborg' in Sweden) and I only crunch at work, there is a great chance that my first WU won't be finished in time, leading to ~9 hours of wasted crunching. There is no way that my second WU will make it, leading to ~2 hours of wasted crunching.
I have 2 processors, so Einstein obviously thought it was a good idea to send me 2 WUs and, for some reason, to cycle the crunching between them. I find that particularly stupid. I know I can pause one of them or increase the amount of Einstein work, blah blah blah, but I don't want to do that. I want BOINC to do its work on its own.
I think the deadlines must be greatly increased, or else no WUs will be delivered back to you.
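The arithmetic behind this is simple enough that the scheduler could do it at download time; a sketch, with made-up numbers standing in for my work-only schedule:
[pre]
# Will a WU finish before its deadline on a part-time host? The
# schedule numbers are made up for illustration.

def meets_deadline(cpu_hours_left, hours_to_deadline, on_fraction):
    # Crunching hours actually available before the deadline.
    available = hours_to_deadline * on_fraction
    return available >= cpu_hours_left

# ~8 h/day of crunching at work, a long holiday weekend ahead:
print(meets_deadline(cpu_hours_left=54, hours_to_deadline=120,
                     on_fraction=8 / 24))  # False -> wasted crunching
[/pre]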
I got my first error with the
I got my first error with the new work units: http://einsteinathome.org/task/83639058
It was error '10'. The work unit has crashed on 2 other computers, with one computer finishing it.
I also noticed that one of my faster computers is now taking ~5 hours more to finish the new work units: 14:45 versus 9:30.
RE: There is an effort
Good point; let each project run whatever level of points they think their project is worth to them. We then make our choice on which projects we crunch. I notice that TANPAKU will always slide in small WUs with a very near deadline to try to "overpower" the allocations that I have set up for the projects I crunch.
[pre]SETI@home classic workunits = 5,906 with CPU time of 60,377 hours[/pre]
RE: Wow.... the Floating
The floating point speed chart you mentioned is based on the awarded credit.
1) I'm not sure the data collection for this chart takes S5R2 credit into account yet; e.g. the charts on the same page showing the percentage of completion of the analysis are obviously still using S5R1 statistics.
2) Many workunits never get credited, because there are quite a lot of client errors and validation errors at the moment. As only finished and validated work units result in credit, the overall estimated floating point speed based on credit is bound to suffer.
3) A short database maintenance service interruption yesterday may also have some effect.
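On point 2 in particular: if the chart is derived from recent granted credit via the cobblestone definition, every errored or unvalidated result drops straight out of the estimate. A sketch; the scaling constant follows the published cobblestone definition, and whether this chart uses exactly that constant is an assumption on my part:
[pre]
# How an aggregate FLOPS estimate can be derived from granted credit,
# and why failed validations drag it down. The constant follows the
# cobblestone definition (1 credit = 1/200 day at 1 GFLOPS); whether
# this chart uses exactly that is an assumption.

COBBLESTONES_PER_GFLOPS_DAY = 200.0

def estimated_flops(credits_granted_per_day):
    return credits_granted_per_day / COBBLESTONES_PER_GFLOPS_DAY * 1e9

healthy = estimated_flops(1000 * 400)        # 1000 results/day, ~400 cr
ailing = estimated_flops(1000 * 0.7 * 400)   # only 70% validate
print(f"{healthy / 1e12:.1f} vs {ailing / 1e12:.1f} TFLOPS")  # 2.0 vs 1.4
[/pre]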
BRM