Crossed the 1 PFLOP barrier now

astro-marwil
astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402300833
RAC: 1070889
Topic 197275

Hello!
Today E@H passed the 1 PFLOPS barrier for the first time since the competition. At the end of the competition we reached a maximum of 1.050 PFLOPS, but it dropped relatively rapidly to a disappointing 850 TFLOPS. After a long valley, the crunching speed of the project increased slowly but almost constantly, and we can hope for further sustained growth.
The next steps in technology will be the 14 nm process node coming to market next year, and 10 nm some years later. 7 and 8 nm nodes are under research and development; I think we will see them by the end of this decade. So the technology will provide the basis for further growth at limited electrical power consumption. Good for science!

Kind regards and happy crunching
Martin

Betreger
Betreger
Joined: 25 Feb 05
Posts: 987
Credit: 1421490742
RAC: 811376

Crossed the 1 PFLOP barrier now

I am happy to be a small part of this. Distributed computing is very cool.

FalconFly
FalconFly
Joined: 16 Feb 05
Posts: 191
Credit: 15650710
RAC: 0

Hm, BOINCstats lists Einstein

Hm, BOINCstats lists the Einstein RAC at about 479.5 TFLOPS, with yesterday clocking in at ~530 TFLOPS...

Note that Berkeley at some point changed the GFLOPS formula from RAC/100 (the one I remember from the very beginning of BOINC) to RAC/200; I don't know exactly when that happened or why.

http://boinc.berkeley.edu/wiki/Computation_credit

So we may have to wait some time for the magic 1 PFLOPS barrier...

(I only recently stumbled across the "new" formula when I thought I had finally crossed my personal TFLOPS barrier - turned out I had not :p )
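For reference, the conversion under both formulas is simple arithmetic. A minimal Python sketch (the divisors 100 and 200 are from the formulas quoted above; the sample RAC value is invented to match the ~479.5 TFLOPS figure from BOINCstats):

```python
# Minimal sketch of the RAC-to-FLOPS conversion discussed above.
# The divisors (100 = old formula, 200 = current formula) come from the
# BOINC wiki page linked in the post; the sample RAC value is invented
# to roughly match the ~479.5 TFLOPS figure quoted from BOINCstats.

def rac_to_teraflops(rac: float, divisor: int = 200) -> float:
    """RAC / divisor gives GFLOPS; divide by 1000 for TFLOPS."""
    return (rac / divisor) / 1000.0

project_rac = 95_900_000  # hypothetical project-wide RAC
print(rac_to_teraflops(project_rac, divisor=100))  # old formula: 959.0 TFLOPS
print(rac_to_teraflops(project_rac, divisor=200))  # new formula: 479.5 TFLOPS
```

So the very same RAC reads as half the FLOPS under the newer formula, which is why the project looks further from 1 PFLOPS on BOINCstats than under the old conversion.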

astro-marwil
astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402300833
RAC: 1070889

Hello FalconFly! According to

Hello FalconFly!
According to the page history, that change took place sometime between Sept. 2008 and May 2010, but no reason is given for it. I refer to the E@H server status page, and that figure is also used in official papers. Interesting discrepancy; maybe Bernd will comment on this.

Kind regards and happy crunching
Martin

FalconFly
FalconFly
Joined: 16 Feb 05
Posts: 191
Credit: 15650710
RAC: 0

Yeah, in this case it would

Yeah, in this case it would be interesting to hear what the Einstein team itself thinks of that... After all, these figures are an important measure of a project's success.

(I would assume such a change would have been thoroughly discussed by the Berkeley devs with all project admins - turns out that maybe wasn't the case)

Richard Haselgrove
Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648717
RAC: 1486471

RE: Yeah, in this case it

Quote:

Yeah, in this case it would be interesting to hear what the Einstein team itself thinks of that... After all, these figures are an important measure of a project's success.

(I would assume such a change would have been thoroughly discussed by the Berkeley devs with all project admins - turns out that maybe wasn't the case)


I attended the BOINC Workshop in London, in August 2010, and I remember the change being announced to all attending projects (Bernd was there too) in David Anderson's "end of term report".

It was a technical correction of a programming error, discovered while re-checking the maths for CreditNew: benchmark speeds (which had been used as the basis for credit up until that point) had been summed instead of averaged, or vice versa.

It still strikes me as odd that Flops are derived from credit, rather than the other way round.
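The summed-versus-averaged discrepancy described above can be shown with a toy calculation (everything here is invented for illustration; it is not BOINC's actual code):

```python
# Toy illustration of the sum-vs-average aggregation error described
# above. All names and numbers are invented; this is not BOINC code.

def host_gflops_summed(per_core_gflops: list[float]) -> float:
    # Summing per-core benchmark results gives a whole-host figure.
    return sum(per_core_gflops)

def host_gflops_averaged(per_core_gflops: list[float]) -> float:
    # Averaging gives a single-core figure instead.
    return sum(per_core_gflops) / len(per_core_gflops)

cores = [3.0, 3.0, 3.0, 3.0]  # hypothetical 4-core host, 3 GFLOPS per core
print(host_gflops_summed(cores))    # 12.0
print(host_gflops_averaged(cores))  # 3.0
```

Mixing the two up skews every host's estimate by its core count, which is exactly the kind of error that forces a later correction to any formula calibrated against the wrong figure.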

FalconFly
FalconFly
Joined: 16 Feb 05
Posts: 191
Credit: 15650710
RAC: 0

Hm, in order to reverse the

Hm, in order to reverse the calculation, BOINC would need a very comprehensive benchmark suite, regularly optimized for real-world project performance. IMHO they always shunned those efforts.
I also think that in the past many project admins made it clear that they sometimes had no way of predicting the GFLOPS count of their distributed workunits due to their client design (that came up during the first discussions about implementing fixed credit allocations, where those counts were known).

I was working on BOINC improvement suggestions for the devs myself many years ago, and at least back in those days, they... let's say they weren't too cooperative even towards readily presented solutions, and outright refused to see some issues as actual issues. I guess they had other priorities.

Anyway, off-topic, but ever since several of the "easy credit" projects established themselves (yielding horrendous credit compared to other projects), RAC has IMHO lost quite a bit of credibility anyway.

Richard Haselgrove
Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648717
RAC: 1486471

Well, there are people around

Well, there are people around who have tried replacing BOINC's benchmark with a more scientifically sound one, and - with a few other bugfixes - verified that the result gives more consistent and plausible results under CreditNew. Possibly only theoretically so far - I don't think they've reached the stage of even trying to ask a project administrator to alpha-test the results yet.

FalconFly
FalconFly
Joined: 16 Feb 05
Posts: 191
Credit: 15650710
RAC: 0

Oh well, at least they're

Oh well, at least they're working on it.

I remember at one time - simply because total granted credit numbers were growing so high for the first time - the BOINC devs actually discussed regularly cutting all already-granted credit totals by 100 or 1000 (with the argument that new users would have a better chance to improve their ranks).
At least those plans were never realized back then; that was a terrible idea ;)
