Since this is taking longer than anticipated, would it be possible to do another deadline extension? I ask because I have several completed workunits that still aren't uploading, but that are due in less than two days.
The Retry resulted in a couple of items clearing off the Upload list, but after that it just results in everything going into project backoff. I'll just be patient and let it clear when it's ready.
I am in the same boat. Retry helps, but I could be doing this all night, so like you I'll be patient and let the computers take care of it.
All Uploads completed and reported. Back to normal, for me. :-)
According to the event log, I appear to have gotten some uploads done; but the server is apparently being hammered hard enough that after a few uploads it asks my client to back off for a few minutes. My remaining upload backlog in turn is preventing me from doing any fresh downloads.
The recovery rate seems to be slowly increasing: In the ~8 hours between when I checked last night and when I got up this morning, the upload backlog on my main PC dropped from 15 screens worth of files to 14. During the 10 hours I was at work it dropped to 11 screens; a bit over twice as fast. Unless it continues to improve though, I'm still ~40 hours from being caught up. *sigh*
Dunno if it's the server slowly recovering, or just that people with short upload queues are getting them emptied and reducing the load.
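For anyone who wants to sanity-check that estimate, here's a rough back-of-the-envelope sketch. It just treats a "screen of files" as a unit and assumes the most recent drain rate stays constant, which it obviously may not:

```python
# Rough estimate of time left to clear an upload backlog, assuming the
# most recent drain rate stays constant (a big "if" while the server is
# still recovering). Units are "screens of files", as in the post above.

def hours_remaining(backlog, cleared, hours):
    """How long the remaining backlog takes at the observed rate."""
    rate = cleared / hours            # screens cleared per hour
    return backlog / rate

# Overnight: 15 -> 14 screens in ~8 h   => ~0.125 screens/hour
# Workday:   14 -> 11 screens in ~10 h  => ~0.3   screens/hour
print(hours_remaining(backlog=11, cleared=3, hours=10))  # ~37 h, i.e. roughly 40 hours
```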
Open BOINC Manager at the transfers tab. Select them all by clicking the first one and then shift-clicking the last one. Click the "retry now" button. If the process stalls, or if a significant number of the entries revert to "retrying in ...", just click the retry button again. Rinse and repeat until the entire list has cleared. This might take several minutes for a particularly long list of stuck transfers, but the entire list can be cleared this way.
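If babysitting the Manager gets tedious, the same retry loop can in principle be scripted against the client with boinccmd instead. The sketch below is only an illustration, not a tested tool: it assumes boinccmd is on the PATH, that the project URL shown is replaced with your own, and that the plain-text output of "boinccmd --get_file_transfers" lists each pending file on a "name:" line (the exact output format varies between client versions, so treat the parsing as a guess):

```python
# Minimal sketch: repeatedly ask the local BOINC client to retry stuck
# uploads via the boinccmd CLI. Assumes boinccmd is on the PATH and that
# "boinccmd --get_file_transfers" prints one "name: <file>" line per
# pending transfer (the output format may differ between client versions).
import subprocess
import time

# Placeholder - use your project's master URL exactly as the client knows it
# (see "boinccmd --get_project_status").
PROJECT_URL = "https://einsteinathome.org/"

def pending_files():
    out = subprocess.run(["boinccmd", "--get_file_transfers"],
                         capture_output=True, text=True, check=True).stdout
    return [line.split("name:", 1)[1].strip()
            for line in out.splitlines() if "name:" in line]

while True:
    files = pending_files()
    if not files:
        break
    for f in files:
        # Equivalent of clicking "retry now" on a single transfer.
        subprocess.run(["boinccmd", "--file_transfer", PROJECT_URL, f, "retry"])
    time.sleep(60)   # give the uploads a minute before poking the client again
```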
Unfortunately, as David Anderson just pointed out, it takes far more than just several minutes (try several hours) to manually push through all those tasks... I've only got 20 tasks on one host, and even that will take more than several minutes. I can only imagine the nightmare it would be to try to force through the 358 tasks on my other host! I'm just gonna ride out the storm and let the server absorb these completed tasks as it sees fit... and hopefully someone on the server end is keeping an eye on task deadlines and extending them accordingly.
I have more than 80 machines. At this moment, none of them have any 'stuck' uploads. Of course there may be the odd few from work completed since they were last cleared out. I've just been through all of them and cleared any uploads they had.
As I said previously, "If the process stalls ...". That includes any "Project Backoff: ..." type messages. Just click retry. The BOINC client backs off for too long an interval when the server is really not 'down', just 'slow'. Most of the machines had a very small number of uploads and could be cleared in less than a minute or two. I remember just two that had a significant number - more than 10-20. They were cleared in 5-10 mins or so. I've been doing a general cleanout on each day that uploads have been available, and this has kept the upload queue manageable.
As for stuff going to /dev/null, I guess you could turn off comms and keep everything locally if you were paranoid. I imagine the Devs wouldn't have re-enabled uploads if they felt the files would just be trashed.
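For what it's worth, the "stops retrying for too long" behaviour is what any exponential-backoff scheme does: each failure roughly doubles the wait before the next automatic attempt, up to some cap, so after a run of failures the client can sit idle far longer than a human clicking retry would. The numbers below are purely illustrative, not BOINC's actual algorithm or constants:

```python
# Generic exponential-backoff illustration (NOT BOINC's actual algorithm
# or constants): each failed attempt roughly doubles the delay before the
# next automatic retry, up to a cap, with some jitter so clients don't all
# hammer the server again at the same moment.
import random

def next_delay(failures, base=60, cap=4 * 3600):
    """Delay in seconds after 'failures' consecutive failed uploads."""
    delay = min(cap, base * 2 ** failures)
    return delay * random.uniform(0.5, 1.0)

for n in range(8):
    print(f"after {n} failures: wait ~{next_delay(n) / 60:.0f} min")
```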
Well, I wish my personal experience matched yours, but as I said before, the uploads just don't happen that fast for me - and I'm talking about WHEN I'm there to babysit and manually hit the retry button over and over! As you stated, the project back-off times are unnecessarily large, resulting in uploads occurring far less often than they theoretically COULD happen if you aren't there to manually hit the retry button. But if it's between babysitting for hours to get hundreds of completed tasks to upload and letting them take longer than necessary to upload on their own, I choose the latter.
I think Gary is doing mainly/only FGRP work, and those results are less prone to upload failure (per task) because each task produces just two, rather tiny, result files. A GW S6BucketFU1UB task, however, generates 8 result files, each around 200 KB. So to successfully upload the result set from a GW task, you'll need more retries on average when the upload server is not working perfectly and is rejecting a certain percentage of upload attempts.
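A rough way to put numbers on that: if each individual upload attempt succeeds with probability p while the server is struggling, a result set of n files needs about n/p attempts on average, and the chance the whole set goes through with no retries at all is p^n. The p = 0.7 below is an arbitrary illustrative value, not a measured figure:

```python
# Why multi-file results suffer more: expected attempts and the chance of a
# clean first pass, for an assumed per-attempt success probability p.
# (p = 0.7 is an arbitrary illustrative value, not a measured figure.)
p = 0.7

for n, label in [(2, "FGRP-style (2 small files)"), (8, "GW-style (8 files, ~200 KB each)")]:
    expected_attempts = n / p        # mean attempts to get all n files accepted
    clean_first_pass = p ** n        # probability that no file needs a retry
    print(f"{label}: ~{expected_attempts:.1f} attempts on average, "
          f"{clean_first_pass:.1%} chance of no retries")
```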
Coming up on 5 days - BOY I hope this is fixed soon!
There is some uploading, but more importantly, I got new workunits!!