Multi-Directed Gravitational Wave Search

Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0


ExtraTerrestrial Apes wrote:
Zalster wrote:

Yes, I understand many believe as you do, and many more will post here that their observations are similar to yours.

However, those are not my observations, nor those of many of my teammates, after testing both methods.

It is as I have stated, which is why we choose to use app_config.xml to maximize our output.

 

I don't understand how you reach that conclusion. Do you use an alternative BOINC client? Do you change core affinities? Do you have differing numbers of concurrent tasks in your comparison?

If both answers are "no", I fail to see how the two methods in question differ. Both are just different ways to launch the same number of CPU and GPU tasks, if configured properly. Then the Windows/OS scheduler takes over and distributes those tasks over the cores according to their priorities.

How did you measure that productivity difference? When you speak about CPU tasks taking full cores and starving the GPUs, remember that the CPU portion of the BRP 1.57 GPU app hardly causes any CPU load, even if cores are available. Note that I'm not disputing that your method of limiting the maximum number of concurrent tasks works well; I just think you're fiddling around with things more than necessary (and you seemed to be rather frustrated by that).

Regarding your other answers: OK!

MrS

I run Boinc Manager, BoincTasks, SIVX64, Process Lasso, Nvidia Inspector and Precision X, which let me see not only how much CPU each work unit takes, but physical as well as virtual memory.

I've tested these theories on Einstein, Seti Main and Seti Beta, and the results were reproducible by my teammates.

Have I set CPU affinity before? Yes, and I prefer not to do that, as it causes problems when work units exceed physical cores.

Yes, GPU use of a CPU core is minimal, but you still have to have something, even if it is less than 0.2 of a core per work unit (0.2 x 12 = 2.4). If the CPU has 16 cores and 16 GW tasks take 100% of all of them, the GPUs are starved, since there is nothing left (16 cores at 100%, with 12 GPU work units waiting for a minimum of 2.4 cores' worth of CPU cycles). Let that go on long enough and the GPU work units will error out.
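As a back-of-the-envelope sketch of the arithmetic above (all numbers are the ones from the post):

```python
# Starvation scenario described above, as a core-budget check.
# Numbers from the post: 16-core CPU, 16 GW CPU tasks each taking
# a full core, 12 GPU tasks each needing ~0.2 of a core of support.
total_cores = 16
cpu_tasks = 16           # GW CPU work units, ~100% of a core each
gpu_tasks = 12
cpu_per_gpu_task = 0.2   # support load each GPU work unit asks for

free_cores = total_cores - cpu_tasks        # 0 cores left over
gpu_demand = gpu_tasks * cpu_per_gpu_task   # 2.4 cores of demand

# 0 free cores against 2.4 cores of demand: the GPU tasks starve.
print(free_cores, gpu_demand)
```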

I don't expect you to believe me or even agree. I point this out so others know there are other ways to control how work is done on their systems.

Productivity differences can be measured by counting how many work units one configuration completes in a set amount of time, then running the other configuration and seeing how many work units it completes in the same amount of time.
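That comparison boils down to a throughput figure per configuration; a trivial sketch (the counts below are made-up illustration values, not measurements):

```python
# Throughput comparison sketch: work units completed per hour under
# two configurations, measured over the same wall-clock window.
def wus_per_hour(completed, hours):
    return completed / hours

config_a = wus_per_hour(48, 6.0)  # hypothetical: 48 WUs in 6 hours
config_b = wus_per_hour(42, 6.0)  # hypothetical: 42 WUs in 6 hours
print(config_a, config_b)
```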

Varying the amount of work per GPU was only done at the beginning, to find the most productive load for each card (i.e. 650, 750, 780, 780Ti, 980, 980Ti, Titan X, Titan Black, Titan Z, 1080f, 1070ftw). Once that is established, there is no need to change that number.
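For readers who haven't used this mechanism: a minimal app_config.xml along these lines caps concurrent tasks and sets the per-task GPU and CPU shares. This is a hypothetical sketch; the app name below is a placeholder, and the real names for your host are listed in client_state.xml.

```xml
<!-- Hypothetical sketch; substitute the app name from your
     client_state.xml for the application you want to limit. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP4G</name>    <!-- placeholder app name -->
    <max_concurrent>12</max_concurrent>  <!-- cap on simultaneous tasks -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- two tasks share one GPU -->
      <cpu_usage>0.2</cpu_usage>  <!-- CPU fraction reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```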

So, I think we are drifting far from the core topic of this thread.

What we really should be looking at is why certain GW tasks are using so much system RAM, and what can be done about it?

Harri Liljeroos
Joined: 10 Dec 05
Posts: 3,498
Credit: 2,869,008,913
RAC: 983,965


One point is that when you are controlling CPU usage with app_config and for some reason you run out of GPU work (or your GPU fails), the freed CPU cores will automatically be used for CPU work instead of sitting idle.

Just my two cents.

Jonathan Jeckell
Joined: 11 Nov 04
Posts: 114
Credit: 1,341,945,207
RAC: 13


Seeing the new 1.02 version in the wild.  Hopefully my Linux machines will stop puking them up and complete them reliably this time.

Also, the other flavors are marked as SSE and/or AVX optimized (yes, I saw the comment in this thread not to take the AVX and SSE labels too seriously, because the units can fall back to a lower instruction set if the processor requires it). Does the Mac OS X version have any such optimization (like SSE2 or AVX)? It makes a HUGE difference over in SETI, where SSSE3-optimized units use the processor about 30% more efficiently than the stock SETI app, and AVX even more so.

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 530,064,336
RAC: 205,159


Zalster wrote:
What we really should be looking at is why certain GW tasks are using so much system RAM, and what can be done about it?

We know the frequency at which the analysis is performed has a drastic influence on the runtime, so it's natural to assume this is also causing the difference in memory requirement. I have no insider knowledge of this, though.

And you're right, we're getting far off topic with the other discussion. But still, I suspect there should be a simple explanation for what you're reporting. For example, you're talking about how 100% CPU load starves the GPUs. This is well established, but that's not what we're comparing here, is it? What I understand you're talking about is this:

Config 1: 16-core machine running 12 GPU WUs and 12 CPU WUs, with BOINC set to run at most 12 of those CPU tasks.

Config 2: 16-core machine running 12 GPU WUs (each at e.g. 0.01 CPU) and 12 CPU WUs, with BOINC set to use at most 75% (12) of the cores.
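For concreteness, Config 2's core cap can be set without app_config, for example via BOINC's local preferences override file. A sketch (the exact file location depends on your BOINC data directory):

```xml
<!-- global_prefs_override.xml in the BOINC data directory:
     use at most 75% of the CPUs, i.e. 12 of 16 cores. -->
<global_preferences>
  <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
```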

If I understand you correctly, Config 1 would somehow be more productive, which I highly doubt. But if you're comparing anything else (e.g. if Config 2 ran more CPU tasks), then we might have found a very simple answer. That's why I was asking you about the specifics of your observation.

MrS

Scanning for our furry friends since Jan 2002

JBird
Joined: 22 Dec 14
Posts: 1,963
Credit: 4,046,216,051
RAC: 0


Hey Jonathan, I too noticed v1.02 appear (on the SSP), but I only see it on 1 of the 2 machines I use.

JBird
Joined: 22 Dec 14
Posts: 1,963
Credit: 4,046,216,051
RAC: 0


OK, update: 40 minutes after I posted my "missing v1.02 on one machine", it appeared in my folder. Thank you!

JBird
Joined: 22 Dec 14
Posts: 1,963
Credit: 4,046,216,051
RAC: 0


Maybe a dumb question, but is v1.02 faster?

If it is, how can I change to it for the remainder of my queue? Would I need to delete v1.01 from my folder (if that is even possible)? BOINC didn't deprecate the 1.01 exe.

I've already timed out way too many with today's 10/21 deadline, and I'm afraid even the remaining tasks won't make it at this rate.

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513,211,304
RAC: 0


JBird wrote:
Maybe a dumb question, but is v1.02 faster?

Too early to say, but judging from the only 2 validated results I have: not faster.

JBird
Joined: 22 Dec 14
Posts: 1,963
Credit: 4,046,216,051
RAC: 0


OK thanks anyway AgentB

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0


JBird wrote:

If it is, how can I change to it for the remainder of my queue? Would I need to delete v1.01 from my folder (if that is even possible)? BOINC didn't deprecate the 1.01 exe.

I've already timed out way too many with today's 10/21 deadline, and I'm afraid even the remaining tasks won't make it at this rate.

It isn't possible to change the version for tasks that are already downloaded. But it wouldn't be a big deal to just abort a bunch of tasks from the queue, to help the remaining ones complete in time.
