Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40156217
RAC: 0
2 Feb 2007 9:55:02 UTC
Topic 192375
Hi,
how can I get the return-results-immediately command working in the new client? With the truxoft optimized client this was easy, but how do I do that with the PPC and the Windows client 5.8.8?
How to customize the new 5.8.8 client?
The projects do not want this, because it causes database problems. It does not hurt anything to not report immediately.
If you want to report immediately, change your connect-to-network interval to 0.01 days; that way it is one in, one out.
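For reference, that "connection time" is the "connect to network about every X days" general preference. A minimal sketch of setting it locally, assuming the client reads a global_prefs_override.xml file in the BOINC data directory and honours a work_buf_min_days tag (both assumptions to verify against your 5.8-era version; the same value can also simply be set on the project's web preferences page):

<global_preferences>
    <work_buf_min_days>0.01</work_buf_min_days>
</global_preferences>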
RE: The projects do not
Then you should also remember there is a risk of running out of work with such a small connection time when the Einstein server is down. If you want to avoid idle time for your CPU, either accept not to report immediately, or attach to one/two backup project(s).
Somnio ergo sum
The only other alternative is
The only other alternative is to grab the new source tarball and re-implement the RRI functionality yourself, or find a third-party version which includes it.
Alinator
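Alinator's "roll your own" suggestion boils down to forcing the client's "ready to report" check to trigger a scheduler contact as soon as a result finishes, instead of waiting for the next work request or the deadline. The sketch below is purely hypothetical and much simplified, not actual BOINC client source; the names (Result, should_contact_scheduler, the return_results_immediately flag) are stand-ins for whatever the real tarball uses.

#include <iostream>
#include <vector>

// Hypothetical, simplified stand-in for a finished task.
struct Result {
    bool output_uploaded;    // all output files are already on the server
    bool already_reported;   // the scheduler has been told about it
};

// Decide whether the client should contact the scheduler right now.
// With the RRI flag forced on, any finished-but-unreported result
// triggers a scheduler RPC immediately.
bool should_contact_scheduler(const std::vector<Result>& results,
                              bool return_results_immediately) {
    for (const Result& r : results) {
        if (r.output_uploaded && !r.already_reported &&
            return_results_immediately) {
            return true;
        }
    }
    return false;  // otherwise wait for the normal triggers
}

int main() {
    std::vector<Result> results = {{true, false}, {false, false}};
    std::cout << should_contact_scheduler(results, true)  << "\n";  // 1: report now
    std::cout << should_contact_scheduler(results, false) << "\n";  // 0: wait
}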
RE: The only other
Not nice to advocate this, since projects are already having database issues. It would be best if reporting just one unit at a time were simply not allowed, but that can not happen because of slower machines, etc.
Reporting 20 results in a single request is about as much work for the database as reporting 1. If we all reported one result at a time, the database would never be available and would probably crash more often.
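To put rough numbers on mikey's point: a host that finishes 20 results a day makes one scheduler request per day if it reports them as a batch, but 20 requests per day if it reports each result as it finishes, so the fixed per-request cost on the database is multiplied about twentyfold even though the number of results reported is exactly the same.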
RE: RE: The only other
Being 'nice' has nothing to do with it; the OP asked for possible solutions, mine was merely one possible alternative, and I was not advocating a course of action either way.
That being said, I have read most of the discussion on this issue, including Rom's thorough and objective rationale on why RRI is a bad idea from the projects' POV. In that context, I can understand why, when a project is running on a shoestring budget with limited and/or antiquated hardware (for the application), they ask that we participants refrain from using it.
OTOH, the point which gets lost in much of the debate is that, as a result, a participant is required to carry more of the risk of having their donated computing effort lost due to a malfunction on either end.
When one considers that the project backend is presumably run on enterprise-class hardware and software, and is therefore more reliable than the client hosts, it logically follows that it would be preferable to get all data off the hosts as soon as possible, in order to minimize potential wasted effort on both ends.
If it turns out a project cannot handle the load under all possible scenarios, then in the final analysis the problem is theirs, and not the participants' per se. In fact, your own suggestion of running a CI of 0.01 days puts essentially as much load on the backend with a late-model fast host as RRI with a longer CI does; the difference is that in the first case, when the project goes down, the host will in all likelihood run out of work and go idle (assuming no backup project). One could go as far as to say that if a project's DB server cannot handle the load from the WUs out in the field, then the most appropriate course of action is for them to limit the total number in progress to a value they can handle reliably, rather than complain about people deciding when their hosts report back completed work.
Also, if this is truly deprecated and/or diagnostic functionality, then a little more care should be taken in preparing the public release versions of the client, so that when you issue X:\boinc\boinc /? at the command prompt you don't see:
...
-return_results_immediately contact server when have results
...
as a switch option (at least through 5.4.11 and the dev versions of 5.8).
In any event, I'll take the liberty of saying that anyone who hangs out in the forums is aware that EAH and SAH are having some backend trouble right at the moment, so it only makes sense for us to carry a little more risk on our end and minimize our use of CC functionality which could bring the backend down, since it's in nobody's interest to drive them offline completely.
Alinator
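As an aside on the "limit the total number in progress" suggestion above: later BOINC server releases do grow a knob along these lines, a per-host cap on workunits in progress set in the project's config.xml (max_wus_in_progress, as far as I know); whether a 2007-era scheduler has anything equivalent is an open question.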
RE: In any event, and I'll
Oh, I don't know... if I follow mikey's analogy here, there are only between 7,854 (one-result reporters) and 6,646 (multiple-result reporters) computers that could report results each day anyway. ;-)
(there are only 86,400 seconds in a day, which works out to roughly 11 to 13 seconds per report...)
Mikey's analogy would only be true if the database allowed just one person to report at a time. I don't really see the use of such a database though, nor is it a good description of a database.
What other projects could do is send an extra message with the scheduler reply when their server is overloaded. PrimeGrid does this when you try to report while the server load is too high. Uploads and downloads carry on as if nothing is happening, since they don't add (much) server load, but reports do. So maybe a similar tactic could be used for clients that report results immediately.
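Mechanically, the scheduler reply already has the two pieces such throttling needs: a user-visible message and a "don't come back for N seconds" delay. A hypothetical, much-simplified server-side check might look like the sketch below; the type and field names are illustrative stand-ins, not the actual BOINC scheduler API.

#include <iostream>
#include <string>
#include <vector>

// Illustrative stand-in for the reply a scheduler sends back to a client.
struct SchedulerReply {
    double request_delay = 0;           // seconds the client should wait before the next contact
    std::vector<std::string> messages;  // user-visible messages shown in the client's log
};

// If the database is overloaded, defer result reports instead of processing them.
bool maybe_defer_reports(double db_load, double load_limit, SchedulerReply& reply) {
    if (db_load <= load_limit) {
        return false;  // normal path: go ahead and process the reports
    }
    reply.request_delay = 3600;  // ask the client to retry in an hour
    reply.messages.push_back(
        "Server load is too high; completed results will be reported later.");
    return true;
}

int main() {
    SchedulerReply reply;
    if (maybe_defer_reports(9.5, 5.0, reply)) {
        std::cout << reply.messages[0]
                  << " (retry in " << reply.request_delay << " s)\n";
    }
}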
Well, Alinator, I'm a bit
Well, Alinator, I'm a bit sceptical about "client PCs being less reliable and therefore more likely to lose the work". I've been doing BOINC for a bit more than a year now (no big farm, but still) and I've never yet lost a result after it was completed. There were a few (maybe 8 or 10 in all those months) WUs that crashed while they were running due to some problem with the OS or so, but finished results? Beats me how that could happen, unless you format your whole hard disk or at least delete the result by accident. Or am I missing something here?
If anyone is looking for
If anyone is looking for skins for the new BOINC version, I have a few available for download that I made a while back when testing 5.8.0. Here is the link to the download section of my site:
http://dexter-d3xt3r.sytes.net/downloads.html
NOTE: Some of them are crude, but the Pulsar skin is not bad.
d3xt3r.net
RE: Well, Alinator, I'm a
Actually, on this one I will have to differ a little. I have got a veggie patch (small farm) growing, and some of the older hardware is a little prone to failure. It can be a week or more at times until I manage to find replacement hardware at a reasonable price, so completed results do get lost when I can't report them quickly enough. I generally run a 0.25 day cache for this reason, but due to a bit of instability in my favorite projects at the moment I'm running a 2.5 day cache until everything settles down.
Okay, that is of course a
Okay, that is of course a good point. I think for this kind of hardware it's a different issue. What I was referring to was hardware that is also still used "normally", 'cause that's the only kind I've been able to get hold of so far...