Bernd Machenschalk wrote: For the time being I raised the credit to 5000 for newly generated BRP5 workunits.
Was this perhaps a declaration of intention rather than an already accomplished fact? I've done a little browsing looking for the first 5000 credit awards, and seen none, including for more than one WU created hours after this announcement post.
Here is one which is listed as "created 31 May 2013 22:07:39 UTC".
Just curious, and I do understand it is now the weekend.
As Bernd wrote, and as you cited, it will be for new workunits, not for those which are currently queued on the server.
Sorry for being a novice at this, but could you explain the 1x, 2x and how to configure this for a GPU?
There are only 10 kinds of people in the world: those that understand binary and those that don't!
eeqmc2_52 wrote: Sorry for being a novice at this, but could you explain the 1x, 2x and how to configure this for a GPU?
When a single BOINC task is running on a GPU, depending on the details of the calculation it is common that the GPU must frequently wait on the rest of the system--perhaps for computation carried out on the CPU, or for data transfers over the I/O bus. One manifestation of this, visible in a GPU monitoring application such as GPU-Z, is that GPU load when running 1x is often much less than 100%.
Running more than one task on the GPU at a time is something of a misnomer, as in fact there is rapid context switching among the currently active tasks. In the Einstein BOINC case, each copy running on the GPU has a separate copy of the CPU support task running. With your GTX 660s, I'd expect that running 2x would significantly increase total system throughput--at the cost of noticeably higher power consumption.
While in the old days one had to fiddle with configuration details in a file called app_info.xml, on Einstein this matter of running multiple tasks per GPU has been very easy to arrange for quite some time. One just goes to one's account page on the Einstein web site, clicks on the Einstein@home preferences link, and edits the "GPU utilization factor of BRP apps" value applicable to the location (a.k.a. venue) of the computer in question. A value of 1 means 1x, 0.5 means 2x, and so on. However, unlike most preference revisions, it does not necessarily take effect the next time your host does an update; it takes effect only after at least one new GPU job has been requested and downloaded following the change.
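As a rough illustration of the arithmetic described above--this is only a sketch of the rule of thumb, not anything taken from BOINC or project code, and the factor values are just examples:

```python
# Sketch: how the "GPU utilization factor of BRP apps" preference described
# above translates into concurrent tasks per GPU (1 -> 1x, 0.5 -> 2x, ...).
# Illustration only; not taken from BOINC source code.

def tasks_per_gpu(utilization_factor: float) -> int:
    """Number of BRP tasks run concurrently on one GPU for a given factor."""
    if not 0.0 < utilization_factor <= 1.0:
        raise ValueError("factor is expected to be in (0, 1]")
    return int(1.0 // utilization_factor)

for factor in (1.0, 0.5, 0.33, 0.25):
    print(f"GPU utilization factor {factor:4} -> {tasks_per_gpu(factor)}x")
```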
Running this way does increase power consumption, CPU load, and I/O load. You may find that the optimum number of CPU jobs to run is lower--as controlled by the computing preferences section parameter:
On multiprocessors, use at most nn.n% of the processors
Enforced by version 6.1+
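For a concrete feel for that processor-percentage preference, here is a small sketch; the host size, percentages, and the "leave a core per GPU task" rule of thumb are assumptions for illustration, not the project's guidance or BOINC's actual scheduler code:

```python
# Sketch: how "On multiprocessors, use at most nn.n% of the processors"
# limits the number of CPU jobs, and how lowering it can leave cores free
# for the CPU support tasks that accompany each running GPU task.
# Illustration only; the 8-core host and percentages are made-up examples.
import math

def cpu_jobs_allowed(n_cpus: int, percent: float) -> int:
    """Approximate number of CPU tasks BOINC will run for this preference."""
    return max(1, math.floor(n_cpus * percent / 100.0))

n_cpus = 8             # assumed example host
gpu_support_tasks = 2  # e.g. one GPU running 2x, each task with a CPU helper
for percent in (100.0, 87.5, 75.0):
    jobs = cpu_jobs_allowed(n_cpus, percent)
    free = n_cpus - jobs
    print(f"{percent:5.1f}% of {n_cpus} CPUs -> {jobs} CPU jobs, "
          f"{free} core(s) left over for {gpu_support_tasks} GPU support tasks")
```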
Thank you for the detailed explanation - extremely helpful.
There are only 10 kinds of people in the world: those that understand binary and those that don't!
RE: I've done a little browsing looking for the first 5000 credit awards, and seen none, including for more than one WU created hours after this announcement post.
To set minds at ease, they're starting to appear:
http://einsteinathome.org/workunit/166339605
Good news. That one was created at 1 Jun 2013 18:53:39 UTC, whereas this one, still at 4000, was created at 1 Jun 2013 7:19:20 UTC, so it seems the actual change was for work newly created sometime mid-day on June 1 UTC.
Your post was very timely, as I was just doing a new search in which I had failed to find any 5000 credit Valid postings, with my most recent Valid finding being the one I've included here.
Actually those are the times the WUs are sent. They're created days earlier but have to work their way through the queue. The reason the one above showed up now is that it's a fairly fast GPU (7850) on a machine that also runs GPUGrid (and therefore has to have a tiny queue). Credits/day have been down, down, down since BRP5 hit, but partly that's because of the long pendings on large WUs. Not sure why the project has such a long due date on GPU WUs. I thought that a fast turnaround time was useful to Einstein, but maybe not. I don't see the purpose of a 2-week turnaround, or even 1 week for that matter.
Really? I'd have thought the time shown in the "time sent" column was the actual time sent for a particular task, and that the time shown as "time created" was the time created.
For example, for Work Unit 166270593 the creation date is shown as 1 Jun 2013 7:19:20 UTC, while the two component tasks show as having been sent at 1 Jun 2013 8:31:08 UTC, and 1 Jun 2013 10:44:09 UTC.
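As a quick check of those numbers (just a sketch using the times quoted above; it assumes the workunit page really does list them in the "created" and "sent" senses):

```python
# Sketch: compare the listed creation time of WU 166270593 with the times
# its two tasks were sent. Times are the ones quoted above.
from datetime import datetime

FMT = "%d %b %Y %H:%M:%S"
created = datetime.strptime("1 Jun 2013 7:19:20", FMT)
sent_times = ["1 Jun 2013 8:31:08", "1 Jun 2013 10:44:09"]

for s in sent_times:
    delta_h = (datetime.strptime(s, FMT) - created).total_seconds() / 3600.0
    print(f"sent {s} UTC: {delta_h:.1f} hours after the listed creation time")
# Both tasks went out one to a few hours after the creation time shown,
# not days later.
```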
Why do you think the time listed as creation date is actually the sent date, and what, then, do you suppose the times shown as the sent dates to be?
[edited to remove third p from suppose]
Just so long as nobody finds
)
Just so long as nobody finds a WU which was sent before it was created...