By "so screwy", do you really mean "a lot shorter than normal"? Also, what do the "60 min." and "39 min." refer to?
Quote:
"Elapsed 1:44:43 Progress 19.354% To Completion 11:37:54 Running"
I forgot. None of this appears on Einstein pages, they are only on Grid Republic Desktop. It shows in the Advanced View the time it came in, the estimated time it will take, what's elapsed, progress, completion....for all tasks in progress or those downloaded to be worked on next.
Sometimes I do the math in my head, the Progress/time..
"here" is my computer...I think I am getting confused.
I would be lost without GRD. The task page in Einstein gives me only the times after the fact. Nothing about progress.
I was used to getting 880 for the LAteahs, then 660, and I think for the screwy one, I got 66 points.
When I get a new task, I like to check to see approximately when it will be done. Looking at "In Progress" on the Task List doesn't show that.
Will the GPU tasks be
Will the GPU tasks be returning at some point?
Hi Bernd, No problems, at
Hi Bernd,
No problems, at least I don't think there is one.
Am working on an FGRP3 and the time is so screwy I wonder if there will be an error when it's complete.
LAteahs are usually pretty long, coming in at 60 min. and leaving here maybe at 39 min. or so. The one I'm worried about is LAteah56c_32.0_3296_-1.79e-10_2
It has at this moment:
Elapsed 1:44:43 Progress 19.354% To Completion 11:37:54 Running
I count that as less than 14 hours. Isn't that weird for an LAteah?
Not too much credit on this one, unlike the typical FGRP3s...
Thanks...
RE: Will the GPU tasks be
Yes.
We are still trying to find out how to prevent clients on which these tasks would crash from getting these tasks at all.
For now we are preparing a new version of the app that reserves the last 1% of the progress counter for the "follow-up" stage. I agree that what the current app does (stay at 100% done for an indefinitely long time) is, say, suboptimal.
The problem in general is that we may be able to (approximately) predict the remaining runtime at the beginning of the follow-up stage, but not earlier.
BM
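To make the progress-counter change concrete: the idea is that the main search stage fills 0% to 99% of the displayed progress, and the follow-up stage of unpredictable length fills the reserved final 1%. A minimal Python sketch of such a mapping, with invented names - not the actual app code:

```python
def report_progress(main_done: float, in_followup: bool, followup_done: float = 0.0) -> float:
    """Map internal stage progress to the 0..1 fraction shown to BOINC.

    The main search stage fills 0%..99%; the final 1% is reserved for
    the "follow-up" stage, whose runtime is only predictable once it
    starts. (Hypothetical sketch, not the real FGRP3 code.)
    """
    if not in_followup:
        # Main stage: scale its own 0..1 progress into 0..0.99.
        return 0.99 * min(max(main_done, 0.0), 1.0)
    # Follow-up stage: fill the reserved last 1%.
    return 0.99 + 0.01 * min(max(followup_done, 0.0), 1.0)
```

With this mapping the task no longer sits at "100% done" while the follow-up stage is still running; it sits just under 100% and creeps up as the follow-up progresses.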
RE: Am working on a FGRP3
By "so screwy", do you really mean "a lot shorter than normal"? Also, what do the "60 min." and "39 min." refer to?
I took a look at the task you identified. It took somewhat less than 9 hours, whereas FGRP3 tasks normally seem to take a bit over a day - perhaps 26 hours or so - on your machine. Bernd has explained the reason for this a few times previously. Here is a quote from this message posted last June.
Quote:
As with previous FGRP WUs, we try to cut the data sets into equally sized workunits. However, there always remain a few "short ends"; some WUs that are significantly shorter.
Your task is one of these "short ends". It took about 1/3rd of the time and was awarded about 1/3rd of the credit. Quite unremarkable :-).
Quote:
Elapsed 1:44:43 Progress 19.354% To Completion 11:37:54 Running
I count that as less than 14 hours. Isn't that weird for an LAteah?
You really should ignore estimates if you want to know how long a running task is likely to take. In the above example the task is going to take (1.75 / 0.19354) = 9.0 hours approximately, which is a much closer value than what you get by adding the current 'elapsed' time to the 'to completion' estimate.
Cheers,
Gary.
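Gary's rule of thumb - projected total runtime equals elapsed time divided by the fraction done - is easy to script. A minimal Python sketch (the helper name is invented), checked against the numbers quoted above:

```python
def projected_total_hours(elapsed: str, progress_percent: float) -> float:
    """Estimate a task's total runtime from elapsed time and progress.

    elapsed is "H:MM:SS" as shown by the BOINC client or GRD;
    progress_percent is the displayed percentage (e.g. 19.354).
    """
    h, m, s = (int(x) for x in elapsed.split(":"))
    elapsed_hours = h + m / 60 + s / 3600
    return elapsed_hours / (progress_percent / 100)

# The task from this thread: about 9.0 hours total, not the ~13.4 hours
# you get by adding elapsed (1:44:43) to "to completion" (11:37:54).
print(round(projected_total_hours("1:44:43", 19.354), 1))  # -> 9.0
```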
RE: RE: By "so screwy",
)
RE: RE: Will the GPU
Since your information started out with GPU tasks, I didn't pay attention, since I only deal in CPUs. If you'd said "leftovers from ALL FGRP3s", I might have read it. Maybe, maybe not... there was a chance. Thank you.
We enabled GPU App versions
We enabled GPU App versions (NVidia & ATI) again for testing over the weekend. For testing, the GPU RAM requirement has been set to 2 GB; we will lower it subsequently.
In parallel we are testing a new App version with different progress counting over at Albert@Home; this should be made available here early next week.
BM
It doesn't seem very
It doesn't seem very resource-efficient. On my GTX 670, one WU never got over 50% GPU load, and was frequently at 0%. After changing my GPU utilization factor to 0.25 in preferences, to match BRP, I am currently running 3x FGRP and 1x BRP5; GPU load floats between 75% and 95%.
BTW, I also note that each WU uses approximately 600 MB of GPU RAM.
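As a side note, the same effect as the "GPU utilization factor" preference can be had on the client side: the BOINC client reads an app_config.xml from the project directory, and a gpu_usage of 0.25 lets four tasks share one GPU. A sketch under assumptions - the app name below is a guess, so check the names your client's event log reports:

```xml
<!-- Sketch of an app_config.xml for the Einstein@Home project directory.
     gpu_usage 0.25 allows four GPU tasks per card; cpu_usage is the CPU
     fraction reserved per task. The app name is an assumption; verify
     it locally before using this. -->
<app_config>
    <app>
        <name>hsgamma_FGRP3</name>
        <gpu_versions>
            <gpu_usage>0.25</gpu_usage>
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```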
Running at about 25% GPU
Running at about 25% GPU usage on my fast AMDs: 2 hours, 660 credits. Even running 4x they'd show low usage. As far as credit goes, the BRP5 WUs are already lower than on any other project I know of. These are a fraction of that.
I dedicated one of my hosts
I dedicated one of my hosts (7181095) to running FGRP3 on the GPU. With the GPU utilization factor set to 0.25, the FGRP3 tasks appear to run stably. With the factor set to 0.20, 8 tasks failed to run successfully even though the GPU load was at 86% with that configuration. I am currently running with a factor of 0.25 and have the remaining two of six cores running S6CasA. Thank you for the work done to add GPU support for FGRP3.