Did we run out of work, or is something broken?
For the last day I keep getting this all the time:
But the status page shows 'GRPB1G search progress' while 'Tasks to send' ~ 0.
New tasks for FGRBP1G became available just a moment ago. Try clicking update...
Yes, there are work units available again.
Regarding the observation of MAD_MAX: from what I understand, the project has an overall pool of work units available, which is shown in 'work still remaining'. These are fed into the queue in portions, which is reflected by 'Tasks to send'.
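The mechanism described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Einstein@Home scheduler code: the pool size, portion size, and function names are all made up for the example.

```python
# Sketch of the two counters described above: a large pool of work units
# ("work still remaining") is drained into a small send queue ("Tasks to
# send") in portions, so 'Tasks to send' can hit 0 even while plenty of
# work remains in the pool.
from collections import deque

def refill_queue(work_remaining: int, queue: deque, portion: int) -> int:
    """Move up to `portion` work units from the pool into the send queue.

    Returns the number of work units still remaining in the pool.
    """
    batch = min(portion, work_remaining)
    for i in range(batch):
        queue.append(f"task-{work_remaining - i}")  # illustrative task names
    return work_remaining - batch

pool = 10                       # 'work still remaining' (illustrative number)
send_queue: deque = deque()     # 'Tasks to send'

pool = refill_queue(pool, send_queue, portion=4)
print(pool, len(send_queue))    # 6 left in the pool, 4 queued for sending
```

Under this picture, the periodic "Tasks to send = 0" moments are just the queue waiting for its next refill, not the pool being empty.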
Yeah, the flow of new GPU tasks resumed right after I wrote my previous post.
Now the work queue is full again.
But anyway: it looks like GRPB1G is ending very soon. Is there a new data set for GR coming, or will we switch GPUs to another sub-project, like resuming BRP4G or FGRP5G, for example?
GW on GPU will not be ready yet, I guess...
It's business as usual on the GPU side. The total amount of work gets incremented every week or so, when a new data file is imported into the system, with the result that it always looks like we're about to run out in the near future. Actual availability is a black box on the user end, because we have zero insight into the amount of data ready to be fed into the system. However, my understanding is that Fermi is generating a continuous stream of new data to be looked over, so we shouldn't be in danger of a long-term outage.
The task queue has been empty again for about a day.
Tasks to send = 0, and the FGRPB1G work generator shows 'Not Running'.
This issue has already been raised in more appropriate parts of the forum.
Occasionally things happen that are beyond anybody's control (usually at the weekend, when staff levels are short or non-existent). We are all entitled to at least one day off, after all!
Whatever the current problem is, it will be sorted tomorrow; if not, an announcement will be made as to the cause of the issue and the anticipated time of a fix.
We are all in the same boat so you are not losing out ;-)
There seems to be a problem. I have been getting only erroring WUs today; until yesterday everything worked fine.
My computer and its jobs:
For the last failing job I captured the error message:
29.04.2019 09:48:19 | Einstein@Home | Starting task LATeah1049U_172.0_0_0.0_18544470_0
29.04.2019 09:48:23 | Einstein@Home | Computation for task LATeah1049U_172.0_0_0.0_18544470_0 finished
29.04.2019 09:48:23 | Einstein@Home | Output file LATeah1049U_172.0_0_0.0_18544470_0_0 for task LATeah1049U_172.0_0_0.0_18544470_0 absent
29.04.2019 09:48:23 | Einstein@Home | Output file LATeah1049U_172.0_0_0.0_18544470_0_1 for task LATeah1049U_172.0_0_0.0_18544470_0 absent
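The failure pattern in the log above (a task "finishes" within seconds but its output files are reported absent) is easy to scan for mechanically. Below is a small, hypothetical helper, not part of BOINC itself; the log line format is assumed from the quoted lines.

```python
# Scan BOINC event-log lines for output files reported absent, the
# symptom shown in the quoted log above. The line layout is assumed to
# match "... | Output file NAME for task TASK absent".
def absent_outputs(log_lines):
    """Return the output-file names reported absent in the given lines."""
    absent = []
    for line in log_lines:
        if "Output file" in line and "absent" in line:
            # The file name is the first token after "Output file".
            name = line.split("Output file", 1)[1].split()[0]
            absent.append(name)
    return absent

log = [
    "29.04.2019 09:48:23 | Einstein@Home | Output file "
    "LATeah1049U_172.0_0_0.0_18544470_0_0 for task "
    "LATeah1049U_172.0_0_0.0_18544470_0 absent",
]
print(absent_outputs(log))  # ['LATeah1049U_172.0_0_0.0_18544470_0_0']
```

A few absent-output hits right after near-instant task completions usually point at the application crashing before it writes results, which matches the deprecation notice in the next post.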
Sorry. The app version 1.21 was meant to fix the "thread priority" issue as described here, but apparently has another problem. It is deprecated for now.
BM
nice try ^^