Now my uploads are stranded because of the timeout, and only terse findings are coming from the people in charge.
Completely free to crunch, but unfortunately it's no fun.
The same here with two tasks from yesterday.
Both are resends (_4) and will time out on Friday at 4:54 UTC.
Looks like I have 6 of these things waiting. Here are some selected lines from the event log, in case I've correctly guessed which parts are useful:
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: HTTP/1.1 413 Request Entity Too Large
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: Server: nginx
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: Date: Thu, 18 Oct 2018 01:23:26 GMT
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: Content-Type: text/html
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: Content-Length: 192
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: Connection: close
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server:
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: <html>
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: <head><title>413 Request Entity Too Large</title></head>
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: <body bgcolor="white">
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: <center><h1>413 Request Entity Too Large</h1></center>
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: <hr><center>nginx</center>
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: </body>
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Received header from server: </html>
Thu 18 Oct 2018 09:23:27 AM CST | | [http_xfer] [ID#468] HTTP: wrote 192 bytes
Thu 18 Oct 2018 09:23:27 AM CST | Einstein@Home | [http] [ID#468] Info: Closing connection 621
...
Thu 18 Oct 2018 09:23:28 AM CST | Einstein@Home | [file_xfer] http op done; retval -224 (permanent HTTP error)
Thu 18 Oct 2018 09:23:28 AM CST | Einstein@Home | [file_xfer] http op done; retval -224 (permanent HTTP error)
Thu 18 Oct 2018 09:23:28 AM CST | Einstein@Home | [file_xfer] file transfer status -224 (permanent HTTP error)
Thu 18 Oct 2018 09:23:28 AM CST | Einstein@Home | Backing off 05:34:24 on upload of h1_0086.50_O1C02Cl2In0__O1OD1_86.60Hz_39_2_0
Peeking on the server side, I found "timed out - no response" for many units that are sitting in my upload queue at home, each having been told to back off a number of times. In total I have about 40 units across two rigs frozen in upload, many of which I guess await a similar fate when their deadlines run out today.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Upload problem now resolved.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
The number of invalid/inconclusive results suggests that something needs adjusting before these WUs go mainstream.
DanNeely wrote: The number of invalid/inconclusive results suggests that something needs adjusting before these WUs go mainstream.
Indeed. Currently we're not sending any such tasks (not even the 'unsent' ones) while we are investigating.
We do run some tests of the application, e.g. on a limited scale, but this type of problem could only be found when testing on a larger variety of systems than we can provide.
I doubt that the problem that occurred can be fixed in the validator; I'd rather say that a certain part of the application needs to be reworked. Anyway, don't worry, we'll manually grant credit for the tasks that turn out invalid; it's certainly not your fault.
BM
We fixed the problem in validation and are generating workunits again. This is still a beta test; the OSX app version seems broken, so it is disabled for now.
BM
One task completed and validated on my SuSE Leap 15.0 machine; two others are running.
Tullio
The binary radio pulsar search is running on my UleFone smartphone with an ARM64 CPU and Android 7.1.1.
Been running a bunch of these on my i7-8700s at stock speed (3.2 GHz base clock). I tried 6 at a time and 12 at a time: running 6-up came in around 19,000 seconds, running 12-up around 25,500 seconds (approx. 7 hours). Memory usage seems to be around 137 MB each. This is under Linux. Times might vary with different frequency ranges.
BOINC blog