But I would expect the change to take effect only when new work is allocated, not just on a simple project update. He's had about 8 new tasks since he posted, so hopefully it's switched by now.
Didn't know the change didn't take effect immediately. Works fine now, thanks.
Can someone here help a noob and post an app_info file that will let me run 2 at a time on a GTX 460? I'm only at 68% GPU load, 55 °C, and 30% memory usage running 1.
Thanks in advance.
Thank you for implementing this via the raw 'count' figure, not the inverse "Run 1, 2, 3... tasks", which might have been more intuitive for new users.
There are a couple of reasons for this:
* This is not a setting to be used by newbies
* It allows finer tuning, which might be particularly necessary when combining different Apps of the same or different projects
* It really is a factor that gets multiplied by the ncudas/natis value set by the project, so it should still work even if we one day have an app that uses 0.6 or 2.0 GPUs.
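To spell out that multiplication with a rough worked example (the 1.0 baseline below is an assumption; check what the project actually assigns to the BRP4cuda app version):

    1.0 GPUs per task (project's ncudas) x 0.5 (GPU utilization factor) = 0.5 GPUs per task

so the client books half a GPU per task and runs two BRP4cuda tasks on one GPU at a time; a factor of about 0.33 would allow three.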
Can someone here help a noob and post an app_info file that will let me run 2 at a time on a GTX 460? I'm only @ 68% GPU, 55c temp and 30% memory running 1.
Thanks in advance.
We implemented this setting precisely to avoid the necessity for an app_info.xml. Please use this setting instead.
BM
Thanks.
Am I correct in assuming that it doesn't go into effect until the client contacts the server again?
To be precise, this will not take effect until the client gets new tasks for a BRP4cuda plan class; a manual project update alone may not have that effect. Edit: However, you should do a manual update anyway, so that the client reads the new setting and can include it in the work request.
BM
Done. Thanks again.
I have to let my queue of E@H tasks wind down before I can stop using an app_info.xml file. Bernd, I'm essentially doing as you suggested, only a bit differently. My queue currently has both Gravitational Wave tasks (some S6LV1s, but mostly S6Buckets) and BRP CUDA tasks, and the latter would run out long before the CPU tasks do if I simply set the project to "no new tasks."

So what I've done instead is disable CPU computing through my project web preferences; this way I'll only continue to get GPU tasks for the time being. When the number of CPU tasks in the queue gets much closer to zero, I'll be able to calculate much more accurately how much time is needed to crunch the remaining CPU tasks. Once I know that, I can set "no new tasks" so that the GPU tasks run dry at about the same time, and re-enable CPU computing since NNT will already be set. Then I can get rid of the app_info.xml file.
It's now running 2 at a time. Seems to be about 24% more efficient than running 1.
Thanks to all for the help.
The GPU utilization factor option is working great for me. Thanks for implementing this feature, as it eliminates the need for app_info.xml management.
For anyone modifying or removing the app_info.xml file, I would suggest backing up your Einstein project data first, since you may lose your work when swapping applications. That way you can simply copy the data back. This worked well on my system, and I avoided having to download two days of resent tasks.
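If it helps, here is a minimal Python sketch of that backup step. The data-directory path is an assumption for a default Linux install (it differs on Windows and macOS), and the BOINC client should be stopped before copying:

    import shutil
    from pathlib import Path

    # Assumed default Linux BOINC data directory; adjust for your install
    # (e.g. C:\ProgramData\BOINC on Windows).
    boinc_data = Path("/var/lib/boinc-client")
    project_dir = boinc_data / "projects" / "einstein.phys.uwm.edu"
    backup_dir = Path.home() / "einstein_backup"

    # Back up the whole Einstein project folder before touching app_info.xml.
    shutil.copytree(project_dir, backup_dir)

    # Later, if work went missing after the switch, restore the saved files:
    # shutil.copytree(backup_dir, project_dir, dirs_exist_ok=True)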