End of S4 work

Pav Lucistnik
Joined: 7 Mar 06
Posts: 136
Credit: 853,388
RAC: 0
Topic 191415

So if I understand correctly, we're out of unsent S4 results now?

22:18:10 Message from server: No work sent
22:18:10 Message from server: To get more Einstein@Home work, finish current work, stop BOINC, remove app_info.xml file, and restart.

[B@H] Ray
Joined: 4 Jun 05
Posts: 621
Credit: 49,583
RAC: 0

End of S4 work

They have completed making them; there's still a couple of weeks' worth to send out.

Both S4 & S5 units are going out now. I have one system only getting S5 units and one still only receiving S4 units. The 2nd system has been receiving a new S4 every 55 min. or so as it turns one in.

Just the luck of the draw as to which you get, but it looks like after you receive an S5 you will get all S5 units.

Could be that certain types of systems are getting the S5 units first; the one of mine getting the S5s is the slower one.

EDIT
Looks like 5 of your 6 are getting the S5 units now.


Try the Pizza@Home project, good crunching.

ErichZann
Joined: 11 Feb 05
Posts: 120
Credit: 81,582
RAC: 0

RE: So if I understand

Quote:

So if I understand correctly, we're out of unsent S4 results now?

22:18:10 Message from server: No work sent
22:18:10 Message from server: To get more Einstein@Home work, finish current work, stop BOINC, remove app_info.xml file, and restart.

No, your problem is that app_info.xml file, and as the message tells you, you should just delete it. That's a file no one needs any more, but it prevents you from downloading the client for the new WUs. I suppose your client wanted to download an S5 WU, but I think there are still S4s flying around, so after that you could get an S4 one again. But they will be "empty" soon, so...
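For anyone unsure of the mechanics, the procedure the server message describes can be sketched as a shell session. This is only a sketch: the paths are assumptions that vary by installation, and it is simulated here in a scratch directory so nothing real is touched.

```shell
# Hedged sketch of the "remove app_info.xml" procedure. On a live host
# you would stop the BOINC client first, e.g. with your platform's
# service manager, before touching the project directory.
PROJECT_DIR="$(mktemp -d)/projects/einstein.phys.uwm.edu"  # path is an assumption
mkdir -p "$PROJECT_DIR"
touch "$PROJECT_DIR/app_info.xml"   # stands in for the real file

# While app_info.xml exists, the client runs in "anonymous platform"
# mode and the scheduler will only send work for the apps listed in it.
[ -f "$PROJECT_DIR/app_info.xml" ] && echo "anonymous platform active"

# Deleting it lets the scheduler send the stock application for the
# new workunits on the next request (after BOINC is restarted).
rm "$PROJECT_DIR/app_info.xml"
[ -f "$PROJECT_DIR/app_info.xml" ] || echo "stock app can be sent"
```

The same idea applies on any platform; only the data-directory location and the way you stop/restart the client differ.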

Pav Lucistnik
Joined: 7 Mar 06
Posts: 136
Credit: 853,388
RAC: 0

Hmm, I was hoping sticking

Hmm, I was hoping that sticking with the 4.61 app would work - just to pick up any S4 work left and then be gone; this machine is too slow for the large S5 workunits.

Guess my masterplan is in vain now.

ErichZann
Joined: 11 Feb 05
Posts: 120
Credit: 81,582
RAC: 0

RE: Guess my masterplan is

Message 38839 in response to message 38838

Quote:

Guess my masterplan is in vain now.

hehe... yeah.. hm

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,872
Credit: 117,804,148,515
RAC: 34,701,282

RE: Hmm, I was hoping

Message 38840 in response to message 38838

Quote:
Hmm, I was hoping sticking with 4.61 app will work - just to pick up any S4 work left and then be gone ....

Your plan worked and you got as much S4 work as was available. Normally when there is a transition to a new and more interesting dataset, people grumble about "When is this old stuff gonna finish so I can get some sexy new stuff..." and nobody wants to finish up the old. This time however I'll bet there were a significant number of people who were out there increasing caches and soaking up every last drop of S4 they could find :). I'm really glad they did that because they probably also inherited quite a number of large data file downloads just for some small dregs of remaining S4 :).

Also, this time around, the scheduler is probably a lot smarter than it was on previous occasions. I haven't really looked in detail but I've seen very few examples of downloading a large data file just to do a single (or a few) remaining result/s. I seemed to notice that a lot last time we went through this transition.

Quote:
this machine is too slow for large S5 workunits.

I presume you are referring to your P4 1400MHz that was taking about 5 hours per S4 result? I've just looked at 3 x PIII-450MHz boxes I'm running that were also taking 5+ hours to do S4 work using Akos' S41.07. All of the three are now crunching long S5 work and one of them has actually turned in a result. It took 58.5 hours and claimed 140.39 and is still pending. It outperformed the other member of the quorum :). By contrast, your box would probably take about 24 hours to do a long S5. Why do you think that's too slow?

Your comment about slowness has prompted me to get out my soapbox. Please understand that these comments are general and not at all aimed at you or any one else for that matter :).

A lot of people are expressing the view that the new work is taking too long now. It's almost like they think this somehow devalues the science they are doing. If a person is donating X hours of CPU time to the project, why is there any difference if that X hours happens to equate to 1 extremely large workunit or 10 normal workunits or 100 short workunits? It's still X hours of science being done. The exciting thing is that this science is the most sensitive available and we are now firmly in the thick of it. As far as the project itself is concerned, there are significant benefits to the server and database loading to be had by sending out the X hours of work as a smaller number of workunits rather than a larger number. We should be happy that the reliability of the project is being protected.

Quote:
Guess my masterplan is in vain now.

I would presume that there is now unlikely to be any "previously unsent" S4 work left, so you certainly can't expect any more "big runs" of it. However, there will still be S4 work coming: repeat work issued when the server detects work failing validation, work exceeding a deadline, work with a client error, or work aborted by the user. Therefore people should not delete their old S4 applications or data files for a couple of weeks yet. I've already seen people assuming they can do so. If you do, and the server decides to send you some S4 work, it will have to resend you the S4 app and data files as well, i.e., unnecessary bandwidth wastage. You would also be sent the standard app and not an optimised one.

Please note that these comments are directed generally and not specifically at the OP :).

Cheers,
Gary.

[B@H] Ray
Joined: 4 Jun 05
Posts: 621
Credit: 49,583
RAC: 0

The S4 is still all that my

The S4 is still all that my Celeron 2.96 is getting; it has 23 in the queue, almost an 18-hour supply of them. I keep thinking it will download an S5, but no luck so far.

Can't complain, will take the credits and let the P4 crunch the S5 for now.


Try the Pizza@Home project, good crunching.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,872
Credit: 117,804,148,515
RAC: 34,701,282

RE: The S4 is still all

Message 38842 in response to message 38841

Quote:
The S4 is still all that my Celeron 2.96 is getting ....

That's why I emphasised "previously unsent". I think the server is smart enough to know that you (and others) already have the appropriate large data file, so it won't immediately send that work to someone else who doesn't have it. I've noticed a couple of mine like this and seen them pretty quickly switch to S5 with a message "received server request to delete file r1_xxxx.x..." or something similar. Pav, with his app_info.xml file, couldn't get more S4 work, so there mustn't be any previously unsent stuff available.

Cheers,
Gary.

Pav Lucistnik
Joined: 7 Mar 06
Posts: 136
Credit: 853,388
RAC: 0

RE: Pav with his

Message 38843 in response to message 38842

Quote:
Pav with his app_info.xml file couldn't get more S4 work so there mustn't be any previously unsent stuff available.

I'm not so sure about this, man, as others are still receiving S4 results. I don't mind downloading a data file for just one result from it; this is a box with a fast connection.

Well, whatever - it's doing SIMAP now. It will do HashClash once they re-enable the feeder.

As to why I don't like 24-hour jobs? Well, there's no fun in peeking at them any more. I like something I can watch swooshing by. Some fast-paced action. Thrill! Suspense! Yeah..
