Can Einstein@Home pass the 1 Petaflop (1000 Teraflop) barrier?

The computing power of Einstein@Home has exceeded 950 Teraflops for the first time since the project began in 2005. Based on the rate at which our computing power has been growing, I am hopeful that Einstein@Home will pass the 1 Petaflop barrier before the end of 2012. Einstein@Home volunteers: please keep your computers running over the holiday season, and please sign up any new ones that you might receive as a gift!

Bruce Allen
Director, Einstein@Home

Comments

Kathryn Tombaugh-Weber
Joined: 12 Aug 11
Posts: 4
Credit: 22271
RAC: 0

Okie Dokie Smokey. We are crunching in Oklahoma.

dskagcommunity
Joined: 16 Mar 11
Posts: 89
Credit: 1157182194
RAC: 266552

We've done all the GPU BRP4 units already, it seems ^^

DSKAG Austria Research Team: http://www.research.dskag.at

Jeffery A. Rogers
Joined: 20 Aug 10
Posts: 3
Credit: 3790585
RAC: 0

Just fired up my i7 2600K and GTX 580. I want a pulsar by year's end, so it's running 24/7, hehe.

dutchie
Joined: 10 Jun 06
Posts: 34
Credit: 6102332
RAC: 0

Running 6 cores, one of them is the GPU. Einstein 4 ever, happy days!!!!

microchip
Joined: 10 Jun 06
Posts: 50
Credit: 110839566
RAC: 173189

I don't think we'll reach it before the end of 2012. The FLOPS have been decreasing for days now. I guess most if not all of the SETI people have gone back to their home project, and that's why we're seeing such a decrease.

Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

Quote:
I don't think we'll reach it before the end of 2012. The FLOPS have been decreasing for days now. I guess most if not all of the SETI people have gone back to their home project, and that's why we're seeing such a decrease.


Yep... it doesn't look like it.

Vivek Saini
Joined: 8 Oct 12
Posts: 2
Credit: 393485
RAC: 0

Just fired up my i7 and Nvidia GPU,
all 9 cores, 24x7.
1 petaflop, here I come!
Vivek

David S
Joined: 6 Dec 05
Posts: 2473
Credit: 22936222
RAC: 0

I finally activated the new GT630 in one of my older crunchers. It immediately downloaded 16 Setis and 10 Einsteins, and got to work on the Einsteins at high priority (HP).

David

Miserable old git
Patiently waiting for the asteroid with my name on it.

Mark W. Patton
Joined: 2 May 09
Posts: 19
Credit: 105147736
RAC: 0

I got another one of my machines up and running. It is a 2 core AMD with no extra video processing power. I think if I put more RAM in it, it will be able to run a GPU.

David S
Joined: 6 Dec 05
Posts: 2473
Credit: 22936222
RAC: 0

Quote:
I got another one of my machines up and running. It is a 2 core AMD with no extra video processing power. I think if I put more RAM in it, it will be able to run a GPU.


System RAM shouldn't matter to the GPU. The GPU has its own RAM (unless it's a laptop).

My new GT630 is crunching merrily away. Einstein units are getting downloaded in large quantities and processed immediately at HP, giving it little time to do Seti. (I suppose it could be working off debt from when it had a bunch of Seti to do on CPU and I wouldn't let it download any more for either project.)

David

Miserable old git
Patiently waiting for the asteroid with my name on it.

LAPLACE Luigi
Joined: 21 Nov 12
Posts: 2
Credit: 2490735
RAC: 0

Adding a GTX 570 1280 MB (Fermi) to my 8-core Mac Pro running OS X 10.8.2 to help you!

Mac Pro 12 Core 2009 2.4 GHz (2 x Xeon 6 core) 8 GB DDR3 1333 intel SSD 330 180 GB nVidia GTX 680 2048 MB Flashed for Mac running OSX 10.10.1

Bojan
Joined: 16 Nov 12
Posts: 5
Credit: 3724078
RAC: 0

I just stopped my other (SETI) project to start getting Einstein data, and what happened? I was getting 512 WU and it said that this is the limit per day. OK, no problem, but the problem is that I get about 500 WU for the CPU, maybe a few less, and around 15 for the GPU. So, what can my system, an i7 and 2 GTX 460s, do about that? Oh well, keep the CPU busy for about 10 days and I can take the GPUs out of the system. Beehh.

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500091245
RAC: 217075

Hi Bojan,

here @Einstein you do not need a work buffer of 10 days; a setting of 0.5 days is quite sufficient.
I run my systems with a work-buffer setting of 0.05 days and I have never run out of work.
That's very different from SETI.

Bojan
Joined: 16 Nov 12
Posts: 5
Credit: 3724078
RAC: 0

Quote:

Hi Bojan,

here @Einstein you do not need a work buffer of 10 days; a setting of 0.5 days is quite sufficient.
I run my systems with a work-buffer setting of 0.05 days and I have never run out of work.
That's very different from SETI.

I know that, but in the past (when SETI was down, at the start of December) I had limited my project to 1 GB of HDD. I was always getting around 20 GPU tasks and about 10 CPU tasks, and the disk space allowed for BOINC was full. It seems the project knew what my system needed, but now? I'm getting 500 CPU tasks for 10 days and almost nothing for the GPU (the GPU on my system is much faster); it will finish all its work in less than a day...
Sorry for my bad English, I hope you understand what I'm telling you.

David S
Joined: 6 Dec 05
Posts: 2473
Credit: 22936222
RAC: 0

Quote:
I finally activated the new GT630 in one of my older crunchers. It immediately downloaded 16 Setis and 10 Einsteins, and got to work on the Einsteins at high priority (HP).


Well, offsetting that, my i7 finally finished the big load of Einstein it downloaded when Seti was down. Now I think it's paying back debt to Seti. The net change in my Einstein production overall seems to be slightly negative.

David

Miserable old git
Patiently waiting for the asteroid with my name on it.

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500091245
RAC: 217075

Quote:

I know that, but in the past (when SETI was down, at the start of December) I had limited my project to 1 GB of HDD. I was always getting around 20 GPU tasks and about 10 CPU tasks, and the disk space allowed for BOINC was full. It seems the project knew what my system needed, but now? I'm getting 500 CPU tasks for 10 days and almost nothing for the GPU (the GPU on my system is much faster); it will finish all its work in less than a day...
Sorry for my bad English, I hope you understand what I'm telling you.

No problem. As soon as your system requires new GPU work, you will get it; your GPU will never run dry. My systems usually fetch new work when the downloaded WUs are at about 95% done; that's early enough.
Don't forget to keep one cpu-core free to feed the gpu.

Bojan
Joined: 16 Nov 12
Posts: 5
Credit: 3724078
RAC: 0

"Don't forget to keep one

"Don't forget to keep one cpu-core free to feed the gpu."

I understand that, but I have read every post on every forum on BOINC projects and I still don't understand it.

What do you count as one core on an i7, which Intel says has 8 cores?

I'm running 6 WU, so 2 "HT" cores are free; or should I run only 3 WU and release 1 real core rather than an HT one?

Thanks for your answer.

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500091245
RAC: 217075

RE: "Don't forget to keep

Quote:

"Don't forget to keep one cpu-core free to feed the gpu."

I understand that, but I have read every post on every forum on BOINC projects and I still don't understand it.

What do you count as one core on an i7, which Intel says has 8 cores?

I'm running 6 WU, so 2 "HT" cores are free; or should I run only 3 WU and release 1 real core rather than an HT one?

Thanks for your answer.

I use the same i7 as you do. I've set the number of processors to 75%. This system has 2 AMD GPUs running 4 WUs at the same time, plus 4 CPU WUs. Setting the processor usage to a higher value decreases the performance of the GPU tasks dramatically.
When I stop GPU work, this system runs 6 CPU WUs.
Since you have one GPU and run 2 WUs on it, you should have 2 GPU + 5 CPU WUs running. Since you have an Nvidia GPU, which uses less CPU, other settings might be better.
It takes some experimenting with these numbers to find the optimum setting. The settings I use are good for a system that is also used for daily work and needs to stay responsive.
You can check your CPU usage live with a nice sidebar gadget from addgadgets.com.

Mark W. Patton
Joined: 2 May 09
Posts: 19
Credit: 105147736
RAC: 0

How are we doing with the Petaflop challenge? I was able to get a dual-core machine running again that had fried a power supply. Hope we can make it.
Mark

JHMarshall
Joined: 24 Jul 12
Posts: 17
Credit: 1018018169
RAC: 0

Mark,

On the Einstein home page, under Tech Stuff, you will find a server status link. The current average is 915.7 TFLOPS. We will need a lot more help to pass the PFLOP barrier.

Are you the same Mark who recently posted over at MW? If so, I responded to your post there.

Joe

Mark W. Patton
Joined: 2 May 09
Posts: 19
Credit: 105147736
RAC: 0

Yep, that was me. Thank you for your quick response.
Mark

Sid
Joined: 12 Jun 07
Posts: 12
Credit: 119429416
RAC: 1


Tossing in a 560 Ti-448 and a 660 till New Year's!

steffen_moeller
Joined: 9 Feb 05
Posts: 78
Credit: 1773655132
RAC: 0

Hello,

Quote:
I'm getting 500 CPU tasks for 10 days and almost nothing for the GPU (the GPU on my system is much faster); it will finish all its work in less than a day...


I presume your system is just very fast and the BOINC client got a bit overly excited when asking for all those tasks. To keep your GPU fed with data in a nice way, it is common practice to reduce the maximum CPU load to (n-1)/n*100%, with n the number of cores, so you always have one free hand to feed the GPU.
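
As a tiny worked example of that rule of thumb (just a sketch; the helper name is mine, and the percentage would go into BOINC's "use at most X% of the processors" computing preference):

def cpu_percentage(n_cores):
    # Percentage of processors to let BOINC use, leaving one core free to feed the GPU.
    return 100.0 * (n_cores - 1) / n_cores

print(cpu_percentage(8))  # -> 87.5, i.e. 7 of the 8 logical cores of a hyper-threaded i7
print(cpu_percentage(4))  # -> 75.0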

I never understood why SETI did not long ago start seeking help with external bandwidth, i.e. move at least part of its infrastructure out of Berkeley. The current setup costs them dearly. Anyway, the Einstein servers just work, completely painlessly, at least from Europe. So, as long as your GPU is not idling, just relax about being swamped with too many work units. You may want to reduce the gigabytes available to BOINC if the situation does not improve. I am confident that the BOINC client gives priority to the GPU so that it always has units available. Together with the responsive E@H servers, this should be fine, even if the split looks disproportionate. If you run out of jobs for the GPU, but they come back, say, when you suspend running work units, then that is a (new) bug.

Steffen

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
I just stopped my other (SETI) project to start getting Einstein data, and what happened? I was getting 512 WU and it said that this is the limit per day. OK, no problem, but the problem is that I get about 500 WU for the CPU, maybe a few less, and around 15 for the GPU.


Hi Bojan,

I have read all your messages and the replies. People have given reasonable answers but I think I can add more information to help explain what you are seeing here at Einstein.

You mentioned (later message) that you have a 1GB disk limit for BOINC. When you first started to ask for work here, you were probably under that limit by a reasonable margin. The very first CPU task you received would have required a considerable download of both the apps themselves and a series of large data files (unless you already had them on board). If it was a GW task (S6LV1), the download could easily have been greater than 100MB. If it was a FGRP2 task, the initial download would have been more modest. However you still take a big hit in disk space as soon as the first GW task arrives. In both cases, once you have taken that hit, you can get many extra tasks that use the same data so your ongoing requirement for extra space is minimal until those large data files are 'used up'. For GW tasks, this can be hundreds or even thousands of extra tasks.

For BRP4 tasks, the situation is quite different. For every single task you need 16MB of disk space. If you want to keep just 10 tasks in your cache, you need 160MB of disk space. On the other hand, if you wanted to have an extra 10 GW tasks, you may not even need a single extra MB. Extra GW tasks are simply extra sets of parameters to be applied to the existing large data files. These parameter sets are quite small and are stored within the state file (client_state.xml). They leave no additional footprint on your disk at the time of download.

It's likely that the first request for work used up most of your available disk space. Once the limit was reached, you would not get more BRP4 tasks. However, the scheduler could choose extra CPU tasks that could use your existing data files and because you had a large cache setting, it would do so at every opportunity. The only way to get an extra BRP4 task would be to finish and report an existing one and free up 16MB of space. The scheduler could then send you one additional task - effectively just a replacement.

Quote:
So, what can my system, an i7 and 2 GTX 460s, do about that? Oh well, keep the CPU busy for about 10 days and I can take the GPUs out of the system. Beehh.


Assuming that you would simply like to see more GPU tasks in the cache, here's what I would do. Temporarily set your large cache setting to something much smaller so as to prevent any further CPU tasks (if you have plenty). You can change this locally or on the website, but if the latter, you need to 'update' your client so that it knows not to ask for more CPU work. If you change your prefs locally, the client will already know about it.

You would then decide on how many extra GPU tasks you wanted to have. Let's say you wanted an extra 20 tasks. That translates into 320MB of space to store them. You would need to change your 1GB disk limit to 1.32GB. Again 'update' if necessary, to make sure the client knows about it. If your (temporarily reduced) cache setting is still large enough to allow it, the client should immediately ask for GPU work only and the server should send about 20 in total, perhaps over a couple of request cycles.
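
As a back-of-the-envelope check of that arithmetic (only a sketch; the 16MB-per-BRP4-task figure is the one quoted above, and the helper name is mine):

MB_PER_BRP4_TASK = 16  # approximate disk footprint of one BRP4 task (figure from the post above)

def new_disk_limit_gb(current_limit_gb, extra_gpu_tasks):
    # Disk limit needed to hold `extra_gpu_tasks` additional BRP4 tasks on top of the current limit.
    return current_limit_gb + extra_gpu_tasks * MB_PER_BRP4_TASK / 1000.0

print(new_disk_limit_gb(1.0, 20))  # -> 1.32 (GB), matching the 20-task example above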

Have a think about the above and please ask questions if anything is not clear.

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
... in the past (when SETI was down, at the start of December) I had limited my project to 1 GB of HDD. I was always getting around 20 GPU tasks and about 10 CPU tasks, and the disk space allowed for BOINC was full. It seems the project knew what my system needed ....


That was probably just good luck :-).

When you first started, you probably only had a single frequency range for the large data files. As we are nearing the end of the S6LV1 run, it becomes increasingly likely that the scheduler will want to send 'left over' tasks for other frequency bins and this would require more large data files. Your ability to store GPU tasks is probably being swamped by the scheduler needing to give you more large data files. If you want to continue getting more GPU tasks you will probably need to allocate a bit more disk space from time to time. When the new run starts in January, it should be easier to manage your requirements.

Quote:
... but now? I'm getting 500 CPU tasks for 10 days and almost nothing for the GPU (the GPU on my system is much faster); it will finish all its work in less than a day...
Sorry for my bad English, I hope you understand what I'm telling you.


Your English is fine :-). Just use the 'trick' I outlined in my previous message.

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
What do you count as one core on an i7, which Intel says has 8 cores?


With modern CPUs you seem to always get a benefit by setting the BIOS to enable HT. So you will probably get the best outcome if you use the 8 virtual cores. If you were supporting one GPU, you would probably find that the best outcome would come from running 7 CPU tasks. With 2 GPUs, the best will probably come from running just 6 CPU tasks.

As with most things, you need to try out various settings and see what works best for you. Another factor to consider is the number of simultaneous tasks you will run on each GPU. You may get the best performance from running 6 CPU tasks and 2x on each GPU which would give 10 simultaneous tasks. You could also try running 5/4 to see if releasing a further virtual core gave sufficient improvement to outweigh the loss of a CPU task - I suspect not, but that's just a guess.
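
A rough sketch of that arithmetic (my own helper, assuming the usual rule of thumb of reserving one virtual core per GPU; it is not an E@H tool):

def task_mix(logical_cores, gpus, tasks_per_gpu, cores_reserved_per_gpu=1):
    # CPU tasks left after reserving one virtual core per GPU, plus the concurrent GPU tasks.
    cpu_tasks = logical_cores - gpus * cores_reserved_per_gpu
    gpu_tasks = gpus * tasks_per_gpu
    return cpu_tasks, gpu_tasks, cpu_tasks + gpu_tasks

print(task_mix(8, 1, 1))  # -> (7, 1, 8): one GPU supported by 7 CPU tasks
print(task_mix(8, 2, 2))  # -> (6, 4, 10): 6 CPU tasks plus 2x on each of the 2 GPUs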

Quote:
I'm running 6 WU, so 2 "HT" cores are free; or should I run only 3 WU and release 1 real core rather than an HT one?


I think virtual cores are good enough for GPU support. It would be a real waste to only run 3 CPU tasks :-).

Cheers,
Gary.

astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402190833
RAC: 1072834

Hallo!
I think no one believes that we will cross the 1 PFLOPS barrier within this year (only another 40 h to go). But for 5 days now we have seen a clear daily increase in crunching speed, so I will try a forecast of when we will reach our intermediate goal.
Over the last 8 weeks we had an average increase of 4.10 +/- 0.29 TFLOPS/d, which gives a range from the evening of 13 Jan. to the morning of 16 Jan., most likely the evening of 14 Jan.
If I take the increase over the last 5 days, 9.4 +/- 0.69 TFLOPS/d, I get the late night of 5 Jan. to the late night of 6 Jan., most likely midday on 6 Jan.
So it's a wide range. Let's see what happens.
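
For what it's worth, a small sketch of that linear extrapolation (the two growth rates are Martin's; the ~940 TFLOPS starting value on 30 Dec. is my own assumption, used only to illustrate the arithmetic):

from datetime import date, timedelta

def date_reaching(target, current, rate_per_day, start):
    # Linear extrapolation: days until `target` TFLOPS at the given growth rate in TFLOPS/day.
    return start + timedelta(days=round((target - current) / rate_per_day))

start = date(2012, 12, 30)
print(date_reaching(1000.0, 940.0, 4.10, start))  # 8-week average rate -> 2013-01-14
print(date_reaching(1000.0, 940.0, 9.4, start))   # last-5-days rate    -> 2013-01-05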

Best wishes for a happy, healthy and wealthy NEW YEAR 2013!!!!

Kind regards and happy crunching
Martin

Bojan
Joined: 16 Nov 12
Posts: 5
Credit: 3724078
RAC: 0

Thanks, Gary Roberts and others, for all your replies. I just stopped getting CPU work for now and it's fine. I still have around 300 CPU tasks to finish by 9.1.2013, and I hope I will manage all of them by that date.

Thanks again to all, and good crunching.

Mr. GoodWrench
Joined: 19 Nov 12
Posts: 1
Credit: 20198052
RAC: 0

Greetings! Just added a new notebook with Nvidia to the cause. 24/7, but it only connects when I'm stopped for the night.

Vegard
Joined: 2 Sep 11
Posts: 8
Credit: 13739271
RAC: 0

Is the rise in computing power over the last few days (about 940 TFLOPS at the moment) real, or just due to the very large credits-per-FLOPS ratio of the new FGRP2 tasks? Is this taken into account in the calculation?

Cheers!

astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402190833
RAC: 1072834

Hallo Vegard!

Quote:
.... due to the very large credits-per-FLOPS ratio of the new FGRP2 tasks? Is this taken into account in the calculation?


I believe it's not taken into account. It results most likely from the very high spread in crunching time, which is difficult to simulate in test runs. See here. For me this gives about 2.5-fold more credit/hour. For the purposes of testing, this is OK.

Kind regards and happy crunching
Martin

Vegard
Joined: 2 Sep 11
Posts: 8
Credit: 13739271
RAC: 0

Hi Martin,

I believe that the spread in crunching time, or the credit/hour, is not the crucial point here. I assume that the overall number of floating-point operations is (almost) the same for each FGRP2 task, regardless of how long it runs on a certain machine. So, if I know this number and count the jobs per day, I can calculate the overall FLOPS of the project. But does it work like that? Or are the total FLOPS calculated from the CREDIT using a fixed(!?) credit/FLOPS conversion factor? If that is the case, the FLOPS numbers given on the server status page are meaningless (or at least very uncertain), since the credit/FLOPS differ between the 3 subprojects (but it would explain the recent increase since FGRP2 started running). It would be nice if an expert could clarify the method of computing the FLOPS (and thereby explain the HUGE differences between FLOPS numbers on different BOINC webpages; e.g. boincstats.com says 435 TFLOPS at the moment...).

Cheers!

DACJ
Joined: 6 Aug 11
Posts: 1
Credit: 444821
RAC: 0

Where are we at as of today?

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5779100
RAC: 0

Go to http://einstein.phys.uwm.edu/server_status.html and scroll all the way down to "Computing capacity".

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500091245
RAC: 217075

The link points to

Floating point speed
(from recent average credit of all users) 945.8 TFLOPS

This includes the higher credit for FGRP2, because it is calculated with credit as the base.

Nearer the top of the server status page, the 'COMPUTING' line gives the following info:

CPU TFLOPS
(from successful tasks last week)

This does not necessarily mean 'from average credit', but since the numbers are very similar I assume it means the same.

So the question

... due to the very large credits-per-FLOPS ratio of the new FGRP2 tasks? Is this taken into account in the calculation?

must be answered with YES.

Happy New Year to all of you!

Alexander

Vegard
Joined: 2 Sep 11
Posts: 8
Credit: 13739271
RAC: 0

Oh, I overlooked that line of text, sorry... But I disagree that the answer is YES. I rather think that whether the numbers make sense or not depends on the conversion factors that are applied. It would be nice if an uncertainty could be calculated and written next to the numbers. Otherwise it will not be clear whether in the future we have really passed a nice barrier like 1 PFLOPS or so... (And there are still the large discrepancies between the various webpages.) I am just surprised that we are celebrating the 950 TFLOPS (the first post by Bruce), while it is far from obvious (at least to the users) that this is correct.

Cheers and happy new year!

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
... I am just surprised that we are celebrating the 950 TFLOPS (the first post by Bruce), while it is far from obvious (at least to the users) that this is correct.


The thing that is being celebrated is really the rate of increase in the processing power of E@H. Not too long ago it was only half the current number. We have to measure it with some metric and as long as we keep using the same system and can be confident that the numbers are increasing, we can celebrate that fact and can mark out some milestones to celebrate as we pass them.

I'm just a volunteer like everyone else outside the actual project staff. I think the staff do a great job of maintaining a reliable and responsive system and this encourages increased participation. Strangely enough so does setting some sort of target to aim for. I've been pleasantly surprised at the number of messages that have popped up recently in various threads from average volunteers committing to help reach the target by switching resources here, or adding new resources, or dragging old hardware out of retirement, etc.

When Seti came back on line, I was expecting progress to drop backwards sharply. What I wasn't expecting was the quick recovery that has now occurred. Sure, a significant proportion of this upward spurt is due to the fact that tasks in the new FGRP2 run are crunching rather quickly. But I'm sure there is also the help of both existing and new volunteers who are adding resources in response to Bruce's messages.

In my own case, in the last couple of months I've added 15 new budget GPU crunchers. I didn't do that in response to any 'call to arms' but it turns out the timing was pretty good to help Bruce's current call :-). My reasoning was quite different. I had discovered the very good power efficiency of Kepler-series GPUs and when the GTX650 came out at a nice cheap price, I couldn't resist. The original plan was to shut down a swag of high-consumption, 3-year-old, CPU-only crunchers and pay for the upgrade with electricity savings whilst actually increasing my RAC.

The plan would have worked admirably. The new GPU crunchers have a combined RAC of around 400K. The original CPU fleet was producing 250K. When it came time to start shutting a lot of them down, Bruce had started his comments in these news threads so I didn't have the heart to withdraw them. Now that FGRP2 has arrived, my RAC has shot up even further. I've suddenly got a goal of my own - to reach 1M RAC by the end of the year. A week ago I didn't imagine it would be possible. But with three hours to go here in Brisbane, Queensland, my RAC is now over 997K and rising at a rate that should get me there.

I imagine Bruce is very pleased with his cunning psychology to lure people in and boost the contributions to the cause :-).

Cheers,
Gary.

Gavin
Joined: 21 Sep 10
Posts: 191
Credit: 40643354503
RAC: 1592339

Gary,

You've done it! And with time to spare.
What a fantastic way to enter 2013.
Congratulations and all the very best for a prosperous New Year :-)

Gavin.

Edit: Bugger, I may have spoken too soon, you just dipped under again!

astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402190833
RAC: 1072834

Hallo Vegard!

Quote:
I rather think that whether the numbers make sense or not depends on the conversion factors that are applied.


For as long as I have been with this project, which is more than 7 years now, the conversion factor has been stable at 1.02*10^-5 TFLOPS/(Cobblestone/d) - for E@H!!! I verified this by taking the daily data from the server status page, putting them into a graph and doing a least-squares fit. The correlation coefficient R^2 for this is 0.99991, so very good. The intended factor, though, is 1e-5 --- 100,000 Cobblestones/d for 1 TFLOPS. I think the difference is somewhat difficult to pin down here.
The Cobblestone, as the unit of crunching work done, was defined to allow better comparison between the different BOINC projects.
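
A quick illustration of that conversion (a sketch only; the 1.02e-5 factor is Martin's fitted value, and the total-RAC figure below is an assumed example, not a number from the thread):

FITTED_FACTOR = 1.02e-5   # TFLOPS per (Cobblestone/day), the fitted value above
NOMINAL_FACTOR = 1.0e-5   # intended definition: 100,000 Cobblestones/day = 1 TFLOPS

total_rac = 93.0e6        # assumed project-wide recent average credit (Cobblestones/day), for illustration only
print(total_rac * FITTED_FACTOR)   # -> ~948.6 TFLOPS
print(total_rac * NOMINAL_FACTOR)  # -> 930.0 TFLOPS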

By the way, we will get a new peak crunching power this afternoon. The old maximum was 953.8 @ Dec. 10th 20:35. Now it's 952.5.

Best wishes for a happy, healthy and wealthy NEW YEAR 2013!!!!

Kind regards and happy crunching
Martin

Vegard
Joined: 2 Sep 11
Posts: 8
Credit: 13739271
RAC: 0

Hi Martin,

interesting, thanks! In your fit you make the assumption that there is only one conversion factor in the FLOPS calculation, which I guess is known to be true. But since the conversion factor of FGRP2 tasks SHOULD be completely different from that of the other tasks at the moment (in the current testing phase), and since this subproject accounts for something like a fifth of the current official computing power, it has a significant impact on the current total FLOPS. In that sense it can easily explain the recent increase in computing power (after we lost some SETI people).

I think I understand everything better now, but I also think that we are only reaching this new peak by definition. Let's see how everything develops in the future.

Thanks and cheers!

Sid
Joined: 12 Jun 07
Posts: 12
Credit: 119429416
RAC: 1

Quote:

The old maximum was 953.8 @ Dec. 10th 20:35. Now it's 952.5.

New Einstein Max: 954.2 TFLOPS!

. . . let's keep pushing!

microchip
Joined: 10 Jun 06
Posts: 50
Credit: 110839566
RAC: 173189

I was right. No 1 PFLOPS at the end of 2012. Let's hope we can reach it shortly into 2013.

astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402190833
RAC: 1072834

Hi Vegard!
It's just what BM meant in Message 121517 of his thread, dated Dec. 21st:

Quote:
... Flops estimation and Credit will be fine-tuned when we have more data (i.e. tasks returned), but possibly not this year anymore.


Obviously the amount of data in the different tasks, and hence the amount of crunching work and the crunching time, varies considerably from task to task. From my observations of 244 tasks so far, I conclude that we are currently getting about a factor of 2.5 too much credit per task compared to S6LV1.

Kind regards and happy crunching
Martin

astro-marwil
Joined: 28 May 05
Posts: 511
Credit: 402190833
RAC: 1072834

Hallo Sid!
Be careful. Shortly before midnight (CET) there is most often a drop back of about 0.5% (5 TFLOPS). I don't know why this happens.

Kind regards and happy crunching
Martin

Sid
Joined: 12 Jun 07
Posts: 12
Credit: 119429416
RAC: 1


Martin:

I figure that with 500 more late-model GPUs, 1 petaflop today is possible.

I'm trying to drum up some support in the shoutbox at BOINCstats.

. . . chugga, chugga.

David S
Joined: 6 Dec 05
Posts: 2473
Credit: 22936222
RAC: 0

Quote:
When Seti came back on line, I was expecting progress to drop backwards sharply. What I wasn't expecting was the quick recovery that has now occurred. Sure, a significant proportion of this upward spurt is due to the fact that tasks in the new FGRP2 run are crunching rather quickly. But I'm sure there is also the help of both existing and new volunteers who are adding resources in response to Bruce's messages.


I tend to think the current upsurge is at least partially due to Seti's latest problem with downloads getting stuck again. That said, I just watched my i7 suddenly clear out a large backlog of Seti downloads in just a few minutes, at speeds approaching 60Kbps. However, the host with the new GT 630 still has a whole bunch that aren't going anywhere at any speed.

David

Miserable old git
Patiently waiting for the asteroid with my name on it.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
Edit: Bugger, I may have spoken too soon, you just dipped under again!


Hi Gavin, thanks for your congratulations and good wishes.

Yes, I did 'make it', I believe, but I didn't stay up to see it.

The data bandwidth required to feed the GPUs is quite onerous. My ISP has a peak allowance and a very much larger off-peak allowance. I take advantage of this by restricting comms to the off-peak period (2.00am to noon local time). I don't pay for uploads so results are uploaded during peak time (noon to 2.00am) but (unless I'm very keen) are not reported. Last night I reported the backlog on the GPU hosts around 10:30pm and saw that there would be enough to push the RAC above 1M by midnight.

When I got up this morning, the off-peak period was well underway and all hosts had requested work to replenish caches and had reported their completed backlogs. So I'm now well above 1M. When I look at daily increments in total credit, those are running at around 1.4M. So RAC will continue to rise until the credit allocation for FGRP2 is adjusted and/or I decide to complete my strategy of actually turning off the 'gas guzzlers' :-). I'll be leaving them on until we hit 1PF at least :-).

It's amazing how much we get sucked in by rising numbers :-).

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Quote:
... partially due to Seti's latest problem with downloads getting stuck again.


I don't follow Seti so I'm quite oblivious to whether the project is running normally or not. I do understand that Seti supporters tend to keep large caches and tend to use E@H as a backup. Unless Seti has not been supplying work for a few days, I wouldn't think we would see much effect here yet. Sure, caches might be having the shortfall supplied by E@H but until Seti tasks actually run out, I wouldn't think that more E@H tasks are being crunched yet.

Cheers,
Gary.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752617967
RAC: 1502394

Quote:
Quote:
... partially due to Seti's latest problem with downloads getting stuck again.

I don't follow Seti so I'm quite oblivious to whether the project is running normally or not. I do understand that Seti supporters tend to keep large caches and tend to use E@H as a backup. Unless Seti has not been supplying work for a few days, I wouldn't think we would see much effect here yet. Sure, caches might be having the shortfall supplied by E@H but until Seti tasks actually run out, I wouldn't think that more E@H tasks are being crunched yet.


I think there's a fair chance we should be able to declare the Petaflop within the next 36 hours.

SETI has been limping for a while, with a recurrence of the scheduler timeout problem it suffered at the very beginning of November. And around 02:50 UTC this morning, one of their collection of creaky and cranky servers left the party: no work has been sent out or reported since then.

They are also due for scheduled maintenance - probably tomorrow, after the holiday. That means we should get the benefit of the big post-holiday office switch on tomorrow, when their volunteers fetch from backup projects. And even if we miss this opportunity, Berkeley has another outage scheduled for electrical supply work between 4-6 January...

965.7 TFLOPS and rising.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109377229549
RAC: 35976731

Hi Richard, thanks very much for the update. Interesting days ahead!! Looks like we might need to get ready with the welcome mat once again!! :-).

I hope Bernd, HB and Oliver aren't on an extended break. It seems the FGRP2 validator might need a bit of a hurry-up some time soon :-).

Cheers,
Gary.