Report deadline: 1 week!

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

Message 3398 in response to message 3395

> After reading the comments, I'm still undecided about what to do. So in the
> short term I'm not going to do anything. In the longer term, the option of
> having a shorter time before completed WUs are purged from the database, and a
> longer deadline, seems reasonable. I'll mull this over.

Bruce,

To be honest, I have only looked at my results to see something for my documentation. So, for what it is worth, I would be fine with having results deleted faster to keep the database size down.

One other question I have had for, like, forever, is whether the indexes placed on the results table are part of the solution or part of the problem. Without a large data set there is no way I can test this on my own. But I know that with Oracle databases (and every article on other databases says the same) you only index for read performance. Since this is primarily a transactional database, reducing the number of indexes is normally the smart option.

I raised this question early in the BOINC beta and never got an answer as to whether this was ever tested. I have not found any mention in the MySQL documentation of a way to get a good grip on the dynamic performance of the database. Again, in Oracle you can actually look at the dynamic performance tables to get a clear indication of whether the indexes are even being used...
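
For illustration, here is roughly what such a check looks like on each side; the index and column names below are hypothetical, not taken from the actual BOINC schema:

-- Oracle (9i and later): flag an index for usage monitoring, run the
-- normal workload for a while, then check whether it was ever used.
ALTER INDEX ind_res_wuid MONITORING USAGE;
SELECT index_name, monitoring, used FROM v$object_usage;

-- MySQL (4.x) has no comparable dynamic performance view; the closest
-- you can get is listing a table's indexes and asking the optimizer
-- which one a given query would use.
SHOW INDEX FROM result;
EXPLAIN SELECT id FROM result WHERE workunitid = 12345;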

The last schema and constraint files I looked at had about 10 indexes on Results. Just food for thought.

Jim Baize
Joined: 22 Jan 05
Posts: 116
Credit: 582144
RAC: 0

Message 3399 in response to message 3398

Uh... yeah... what he said!

> > After reading the comments, I'm still undecided about what to do. So in the
> > short term I'm not going to do anything. In the longer term, the option of
> > having a shorter time before completed WUs are purged from the database,
> > and a longer deadline, seems reasonable. I'll mull this over.
>
> Bruce,
>
> To be honest, I have only looked at my results to see something for my
> documentation. So, for what it is worth, I would be fine with having results
> deleted faster to keep the database size down.
>
> One other question I have had for, like, forever, is whether the indexes
> placed on the results table are part of the solution or part of the problem.
> Without a large data set there is no way I can test this on my own. But I
> know that with Oracle databases (and every article on other databases says
> the same) you only index for read performance. Since this is primarily a
> transactional database, reducing the number of indexes is normally the smart
> option.
>
> I raised this question early in the BOINC beta and never got an answer as to
> whether this was ever tested. I have not found any mention in the MySQL
> documentation of a way to get a good grip on the dynamic performance of the
> database. Again, in Oracle you can actually look at the dynamic performance
> tables to get a clear indication of whether the indexes are even being
> used...
>
> The last schema and constraint files I looked at had about 10 indexes on
> Results. Just food for thought.

Jim

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0

Sorry to come back to this discussion. It just happened that I lost 2 results. They were reported back but received 0.00 credit because the deadline was missed:
http://einsteinathome.org/workunit/345699
http://einsteinathome.org/workunit/345684
There were still several more WUs on my computer with the deadline also being Feb. 25th, so I reset the project because I did not want to waste computer time. Those 7 WUs are lost, as well as another 8 with a deadline of March 3rd.

After restarting the project I assumed that the client would download fewer WUs, since the computer reports an average turnaround time of 6.75 days and is set to contact the project every 2 days (needed for SETI@Home, since they have had many server problems lately).

Guess what: instead of downloading 4 WUs (it's a dual-processor machine) the client downloaded 16 and still asks for more (luckily there is a download limit of 16)! At least 8 of them will time out again.

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0

Message 3401 in response to message 3395

> After reading the comments, I'm still undecided about what to do. So in the
> short term I'm not going to do anything. In the longer term, the option of
> having a shorter time before completed WUs are purged from the database, and a
> longer deadline, seems reasonable. I'll mull this over.
>
> [Oh, in response to one question: no, after the WU is purged from the
> database, it is only available in our archives, not to other projects,
> statistics servers, etc.]
>
> Cheers,
> Bruce
>

Hi Bruce,

Cutting it back to 4 days should be OK. Please do not make it shorter than that, because otherwise users do not have a chance to check after e.g. a weekend (3 days + 1 day timezone difference).

But PLEASE make the deadline longer!!!!

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

Message 3402 in response to message 3400

Martin P.,

> Sorry to come back to this discussion. It just happened that I lost 2
> results. They were reported back but received 0.00 credit because the
> deadline was missed:
> http://einsteinathome.org/workunit/345699
> http://einsteinathome.org/workunit/345684
> There were still several more WUs on my computer with the deadline also
> being Feb. 25th, so I reset the project because I did not want to waste
> computer time. Those 7 WUs are lost, as well as another 8 with a deadline of
> March 3rd.
>
> After restarting the project I assumed that the client would download fewer
> WUs, since the computer reports an average turnaround time of 6.75 days and
> is set to contact the project every 2 days (needed for SETI@Home, since they
> have had many server problems lately).
>
> Guess what: instead of downloading 4 WUs (it's a dual-processor machine) the
> client downloaded 16 and still asks for more (luckily there is a download
> limit of 16)! At least 8 of them will time out again.

2005-02-28 09:39:43 [normal ] OS version Darwin 7.8.0
2005-02-28 09:39:43 [normal ] Request [HOST#18791] Database [HOST#18791] Request [RPC#16] Database [RPC#15]
2005-02-28 09:39:43 [normal ] Processing request from [USER#17942] [HOST#18791] [IP 213.229.0.106] [RPC#16] core client version 4.19
2005-02-28 09:39:43 [normal ] [HOST#18791] got request for 529800.652800 seconds of work; available disk 3.000000 GB
2005-02-28 09:39:43 [debug ] [HOST#18791]: has file H1_0388.4
2005-02-28 09:39:43 [debug ] in_send_results_for_file(H1_0388.4, 0) prev_result.id=1317096
2005-02-28 09:39:43 [debug ] Sorted list of URLs follows [host timezone: UTC+3600]
2005-02-28 09:39:43 [debug ] zone=+3600 url=http://einstein.aei.mpg.de
2005-02-28 09:39:43 [debug ] zone=-21600 url=http://einstein.phys.uwm.edu
2005-02-28 09:39:43 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:39:43 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:39:43 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:39:43 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:39:43 [normal ] [HOST#18791] Sending [RESULT#1493531 H1_0388.4__0388.6_0.1_T19_Test02_1] (fills 49239.02 seconds)
2005-02-28 09:39:43 [debug ] in_send_results_for_file(H1_0388.4, 1) prev_result.id=1493531
2005-02-28 09:39:43 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:39:43 [debug ] make_more_work_for_file(H1_0388.4, 1)=0
2005-02-28 09:39:49 [debug ] in_send_results_for_file(H1_0388.4, 2) prev_result.id=1493531
2005-02-28 09:39:49 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:39:49 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:39:49 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:39:49 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:39:49 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:39:49 [normal ] [HOST#18791] Sending [RESULT#1494335 H1_0388.4__0388.7_0.1_T19_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:39:49 [debug ] in_send_results_for_file(H1_0388.4, 3) prev_result.id=1494335
2005-02-28 09:39:49 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:39:49 [debug ] make_more_work_for_file(H1_0388.4, 3)=0
2005-02-28 09:39:55 [debug ] in_send_results_for_file(H1_0388.4, 4) prev_result.id=1494335
2005-02-28 09:39:55 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:39:55 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:39:55 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:39:55 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:39:55 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:39:55 [normal ] [HOST#18791] Sending [RESULT#1494339 H1_0388.4__0388.8_0.1_T19_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:39:55 [debug ] in_send_results_for_file(H1_0388.4, 5) prev_result.id=1494339
2005-02-28 09:39:55 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:39:55 [debug ] make_more_work_for_file(H1_0388.4, 5)=0
2005-02-28 09:40:01 [debug ] in_send_results_for_file(H1_0388.4, 6) prev_result.id=1494339
2005-02-28 09:40:01 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:01 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:40:01 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:40:01 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:40:01 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:01 [normal ] [HOST#18791] Sending [RESULT#1494343 H1_0388.4__0388.9_0.1_T19_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:40:02 [debug ] in_send_results_for_file(H1_0388.4, 7) prev_result.id=1494343
2005-02-28 09:40:02 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:40:02 [debug ] make_more_work_for_file(H1_0388.4, 7)=0
2005-02-28 09:40:08 [debug ] in_send_results_for_file(H1_0388.4, 8) prev_result.id=1494343
2005-02-28 09:40:08 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:08 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:40:08 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:40:08 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:40:08 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:08 [normal ] [HOST#18791] Sending [RESULT#1494347 H1_0388.4__0388.5_0.1_T20_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:40:08 [debug ] in_send_results_for_file(H1_0388.4, 9) prev_result.id=1494347
2005-02-28 09:40:08 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:40:08 [debug ] make_more_work_for_file(H1_0388.4, 9)=0
2005-02-28 09:40:14 [debug ] in_send_results_for_file(H1_0388.4, 10) prev_result.id=1494347
2005-02-28 09:40:14 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:14 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:40:14 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:40:14 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:40:14 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:14 [normal ] [HOST#18791] Sending [RESULT#1494351 H1_0388.4__0388.6_0.1_T20_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:40:14 [debug ] in_send_results_for_file(H1_0388.4, 11) prev_result.id=1494351
2005-02-28 09:40:14 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:40:14 [debug ] make_more_work_for_file(H1_0388.4, 11)=0
2005-02-28 09:40:20 [debug ] in_send_results_for_file(H1_0388.4, 12) prev_result.id=1494351
2005-02-28 09:40:20 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:20 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:40:20 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:40:20 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:40:20 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:20 [normal ] [HOST#18791] Sending [RESULT#1494359 H1_0388.4__0388.7_0.1_T20_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:40:21 [debug ] in_send_results_for_file(H1_0388.4, 13) prev_result.id=1494359
2005-02-28 09:40:21 [debug ] touched ../locality_scheduling/need_work/H1_0388.4: need work for file H1_0388.4
2005-02-28 09:40:21 [debug ] make_more_work_for_file(H1_0388.4, 13)=0
2005-02-28 09:40:27 [debug ] in_send_results_for_file(H1_0388.4, 14) prev_result.id=1494359
2005-02-28 09:40:27 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:27 [debug ] [HOST#18791] Sending app_version einstein powerpc-apple-darwin 478
2005-02-28 09:40:27 [debug ] [HOST#18791] Already has file H1_0388.4
2005-02-28 09:40:27 [debug ] [HOST#18791] reducing disk needed for WU by 12144000 bytes (length of H1_0388.4)
2005-02-28 09:40:27 [debug ] est cpu dur 28928.564374; running_frac 0.587513; rsf 1.000000; est 49239.015667
2005-02-28 09:40:27 [normal ] [HOST#18791] Sending [RESULT#1494363 H1_0388.4__0388.8_0.1_T20_Test02_0] (fills 49239.02 seconds)
2005-02-28 09:40:27 [normal ] [HOST#18791] Sent 8 results

The scheduler thinks that the jobs will take 28,928 CPU seconds to complete. It also estimates that the code runs 58% of the time on your machine, and thus each job will complete in 49,239 wallclock seconds. Hence, since the machine requested 529,800 seconds of work, it got 8 WUs. The next request, a minute later, got the same response because you have a two-CPU machine.
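
To spell out the arithmetic, using the numbers from the log above:

\[
t_{\text{wall}} \;=\; \frac{t_{\text{cpu}}}{f_{\text{running}}}
\;=\; \frac{28928.56\ \text{s}}{0.587513}
\;\approx\; 49239\ \text{s} \;\approx\; 13.7\ \text{hours.}
\]

The request was for 529,800 s, i.e. about 529800/49239 ≈ 10.8 such jobs; the log shows the scheduler stopping at 8 results per request, and two requests of this size account for the 16 WUs that were downloaded.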

Note that according to this (rsf 1.000000), the E@H scheduler believes that this machine has 100% of its BOINC resources devoted to E@H. If you are also using it for SETI@Home, then you need to reduce the resource share fraction, I think.
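
(For reference, assuming the standard BOINC definition, the resource share fraction is just a project's share divided by the sum of the shares of all attached projects:

\[
\text{rsf} \;=\; \frac{\text{share}_{\text{E@H}}}{\sum_p \text{share}_p}\,,
\qquad \text{e.g.}\quad \frac{60}{60+40} \;=\; 0.6\,,
\]

so a 60/40 split between E@H and SETI@Home should show up in the log as rsf 0.6 rather than 1.0.)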

Bruce

Director, Einstein@Home

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0

Message 3403 in response to message 3402

>
> The scheduler thinks that the jobs will take 28,928 CPU seconds to complete.
> It also estimates that the code runs 58% of the time on your machine, and
> thus each job will complete in 49,239 wallclock seconds. Hence, since the
> machine requested 529,800 seconds of work, it got 8 WUs. The next request, a
> minute later, got the same response because you have a two-CPU machine.
>
> Note that according to this (rsf 1.000000), the E@H scheduler believes that
> this machine has 100% of its BOINC resources devoted to E@H. If you are also
> using it for SETI@Home, then you need to reduce the resource share fraction,
> I think.
>
> Bruce

Hi Bruce,

Thanks for replying! Unfortunately I do not understand anything from the first part of your message (I apologize, but I am just a normal user, and my mother tongue is German, not English)! My resource share is set to 60% E@H and 40% SETI@Home. However, this does not work on Mac OS X anyway. The Mac client ignores any setting in the preferences, because it is a command-line client that runs when you tell it to and quits when you quit the Terminal window.

BTW: I am just about to reset my other Mac (http://einsteinathome.org/host/18591/tasks). It just finished 2 WUs and started crunching on 2 more that will expire tomorrow (with another 7 WUs waiting that also expire tomorrow or on March 3rd). Time to complete one WU: approx. 69,000 seconds. The computer runs approx. 14 hours/day.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250480775
RAC: 35261

What makes you think the resource share doesn't work with command-line clients? That shouldn't have anything to do with the interface...

BM

Saenger
Joined: 15 Feb 05
Posts: 403
Credit: 33009522
RAC: 0

Message 3405 in response to message 3404

> What makes you think the resource share doesn't work with command-line
> clients? That shouldn't have anything to do with the interface...

I think BOINC generally 'forgets' the other projects. I've set my limit to 1 day, and always have enough to crunch for the whole weekend, and not just CPDN ;)

Greetings from Sänger

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0

Message 3406 in response to message 3404

> What makes you think the resource share doesn't work with command-line
> clients? That shouldn't have anything to do with the interface...
>
> BM

When I start either SETI@Home or E@H in Terminal and look into Activity Monitor, it shows approx. 95% processor load (unless I do something else, of course). When I start both SETI@Home and E@H in Terminal, Activity Monitor always shows approx. 46% processor load for each, regardless of what my resource share settings are (I think they call it linear scaling in UNIX language). On my Windows machine the client will stop crunching one project and start the other one according to the settings; e.g. I have set E@H to 200 and SETI@Home to 100, so E@H will run for 2 hours, then stop, and SETI@Home takes over for 1 hour.
Please keep in mind: there is no BOINC GUI for the Mac. We have to run it from Terminal in separate windows for each project. Each window represents a completely independent process.

Sorry, but it is too tedious for me to translate all of this into German again. Is that OK?

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0

Message 3407 in response to message 3405

> I think BOINC generally 'forgets' the other projects. I've set my limit to
> 1 day, and always have enough to crunch for the whole weekend, and not just
> CPDN ;)

I believe so too. Obviously BOINC assumes that:
1. only one project is being run, and
2. the computer runs 24/7, doing nothing else but BOINC.
This is probably also the reason for the absurdly large number of downloaded WUs. I simply don't understand why it does not take the average turnaround time into consideration when calculating the number of seconds of work requested; that would correct exactly this error.
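
A purely illustrative sketch of that suggestion, not how the 2005 client actually worked: cap the queue so that, at the observed turnaround rate, every queued result can still meet its deadline,

\[
n_{\max} \;=\; \left\lfloor n_{\text{CPU}} \times \frac{\text{deadline}}{\text{avg.\ turnaround}} \right\rfloor
\;=\; \left\lfloor 2 \times \frac{7\ \text{days}}{6.75\ \text{days}} \right\rfloor
\;=\; 2\,.
\]

With the 1-week deadline from this thread's title and the 6.75-day average turnaround reported above, this dual-processor machine would queue at most 2 results at a time instead of 16.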

