Gravitational Wave Engineering run on LIGO O1 Open Data

mmonnin
mmonnin
Joined: 29 May 16
Posts: 290
Credit: 3,212,239,020
RAC: 36,328

Also taking a guess: tell BOINC to re-read the config files, and if the name isn't correct a message will appear in the event log listing the known applications.
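The file in question is BOINC's per-project app_config.xml. A minimal sketch of its structure is below; the `<name>` value is a placeholder, since (as noted above) the event log will list the valid application names if the guess is wrong:

```xml
<app_config>
   <app>
      <name>APPLICATION_NAME</name>      <!-- placeholder: copy from the event log -->
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>      <!-- fraction of a GPU per task (0.5 = 2 tasks/GPU) -->
         <cpu_usage>1.0</cpu_usage>      <!-- CPUs budgeted per GPU task -->
      </gpu_versions>
   </app>
</app_config>
```

The file goes in the project's directory under the BOINC data folder and is picked up by the manager's read-config-files command or a client restart.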

tolafoph
tolafoph
Joined: 14 Sep 07
Posts: 122
Credit: 74,659,937
RAC: 0

I ran 3 tasks on my i7 quad core, once with hyper-threading (HT) on as usual, and then again with it turned off.

The 3 tasks with HT on used about 50% of the CPU and at most 70% of the GPU. They ran about 9600s each.

The 3 tasks with HT off used about 80% of the CPU and at most 75% of the GPU. They ran about 8300s each.
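A quick throughput check on the two runs above:

```python
# Tasks per hour for 3 concurrent tasks, using the run times reported above.
ht_on = 3 / 9600 * 3600    # ~1.13 tasks/hour with HT on
ht_off = 3 / 8300 * 3600   # ~1.30 tasks/hour with HT off
print(ht_on, ht_off)
```

So HT off was roughly a 16% throughput improvement in this test (9600/8300 ≈ 1.16), despite leaving logical cores idle.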

To compare the other times I got yesterday with HT on, I will run 1 and 2 tasks with HT off.

archae86
archae86
Joined: 6 Dec 05
Posts: 3,142
Credit: 6,977,984,931
RAC: 1,867,018

crashtech wrote:

I am curious to try running this app starting with 4 at a time, with each instance having its own physical core, like so:

[code]
      <gpu_usage>0.25</gpu_usage>
      <cpu_usage>2.0</cpu_usage>
[/code]
Does anyone think this might work, and if so, does anyone know the right project name to place in the app_config?

Bear in mind that this sort of directive influences how many tasks BOINC starts up to run simultaneously, but not the actual assignment of tasks to CPUs (virtual or physical).  So your 2.0 will lower the number of simultaneous tasks, but you have no assurance that at any given moment Windows won't assign two of your four support tasks to the two HT instances of the same physical core for a while.

If you really think that's a good thing, you could try using Process Lasso to set the support tasks' CPU affinity to only the odd (or only the even) numbered CPUs.  I've actually done that at some point in the past.  You may or may not like the results.  Affinity assignment consequences are often different from our expectations.
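On the odd/even CPU numbering: affinity tools ultimately take a bitmask, and assuming the common Windows enumeration where logical CPUs 2n and 2n+1 are the HT siblings of one physical core (worth verifying on your own machine), the even-only mask is easy to compute:

```python
# Affinity bitmask selecting only the even-numbered logical CPUs
# (0, 2, 4, 6) on an 8-thread part, so no two pinned tasks land on
# sibling HT instances of the same physical core.
logical_cpus = 8
even_mask = sum(1 << cpu for cpu in range(0, logical_cpus, 2))
print(hex(even_mask))  # 0x55
```

That hex value is what you would hand to a tool that accepts raw masks (e.g. the Win32 SetProcessAffinityMask call); Process Lasso lets you pick the CPUs from a list instead.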

crashtech
crashtech
Joined: 16 Mar 17
Posts: 3
Credit: 2,534,638,697
RAC: 981,553

I think I understand it in terms of making a reasonable effort to see that enough resources are available for the task. I don't think these WUs are using more than one logical core (single-threaded, perhaps), but I do notice that running another CPU project even at moderate levels reduces GPU usage, so there is contention for other resources (cache, etc.), not for cores.

Possibly disabling SMT/HT would be helpful for these units; I might need to check that.

Edit:

BTW, I am getting much better utilization of my GTX 1060, from ~15% to ~86%, by running 4 instances at once. That's not to say 4 is the sweet spot, though; that's just a guess for now, and the number will of course vary by GPU and possibly even more by platform/CPU, since these tasks seem highly dependent on the CPU.

Zalster
Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

So validations are slow in coming for the GW engineering work units on GPU.

Here's what I'm seeing currently on my systems.

GWE on i7 6900@4GHz is around 24000 seconds (range of low 20,654 to high 26,219)

GWE gpu on 1080Ti supported by i7 5930K@4GHz is averaging 3330 seconds. 

Going to give it a few more days before I try running more than 1 per card. But so far, that's pretty impressive.

Jim1348
Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

Zalster wrote:

GWE on i7 6900@4GHz is around 24000 seconds (range of low 20,654 to high 26,219)

GWE gpu on 1080Ti supported by i7 5930K@4GHz is averaging 3330 seconds. 

Going to give it a few more days before I try running more than 1 per card. But so far, that's pretty impressive.

The 1080 Ti produces about 7.2 times the output of a single core of an i7-6900.  But the TDP of a 1080 Ti is 250 watts, while that of an i7-6900 is 140 watts, or 17.5 watts per core. 

So the GTX 1080 Ti uses 14.3 times as much power to produce 7.2 times the output, and so is half as efficient.  I would just use more cores of the CPU.  I expect it would cost less too.
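Spelling out the arithmetic above (TDP figures as quoted; the i7-6900 is an 8-core part):

```python
gpu_tdp = 250.0                 # GTX 1080 Ti rated TDP, watts
cpu_tdp = 140.0                 # i7-6900 rated TDP, watts
cores = 8
watts_per_core = cpu_tdp / cores              # 17.5 W per core

speedup = 7.2                   # GPU output vs. one CPU core, from the task times
power_ratio = gpu_tdp / watts_per_core        # ~14.3x the power
relative_efficiency = speedup / power_ratio   # ~0.50: about half as efficient
```

Note this assumes both parts actually run at their rated TDP, which (as the thread goes on to discuss) is not a given for either device.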

EDIT: I realize that the GPU is not fully loaded yet, and is using much less than its rated power.  But the CPU may be using less too.  Someone will need to measure the real numbers.  But we might as well wait until the software is finalized.  This is all very preliminary thus far.  I mention it just to give something to think about for comparison.  Output per watt (and dollar) is significant for me.

Zalster
Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

Jim1348 wrote:

The 1080 Ti produces about 7.2 times the output of a single core of an i7-6900.  But the TDP of a 1080 Ti is 250 watts, while that of an i7-6900 is 140 watts, or 17.5 watts per core. 

So the GTX 1080 Ti uses 14.3 times as much power to produce 7.2 times the output, and so is half as efficient.  I would just use more cores of the CPU.  I expect it would cost less too.

Hey Jim, 

Don't want to start an argument, but I think the math is wrong.  I don't know any Intel chip that runs at the stated TDP.  I wish it ran at 140 W, haha. More like 450 watts on a good day, especially when that auto boost is allowed. So figure 37.5 W per thread for 6.6 hours is 250 watts.  Yes, the GPU can push 250 W when fully loaded, but it's not currently, so it's lower.  Of course, once I start to run more than 1 per card that electric use will go up, but still, overall the GPU is more efficient both time-wise and in electricity.  At least that is how I see it.  Have a good day and keep crunching..

 

Z

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,838
Credit: 108,509,777,058
RAC: 33,657,962

Zalster wrote:
... the GPU is more efficient in both time wise and electricity.  At least that is how I see it.

Without having any hard figures to support it, I'd be very surprised if that is not the case, even given the current incomplete state of development of the GPU app.  Just using the raw TDP specs can be very misleading.

If someone wanted to do a proper comparison and come up with a meaningful result, the power draw from the wall should be measured for two configurations.  Firstly, with no discrete GPU installed, measure the wall power used and the crunch times returned when running from just one task up to the maximum number possible.  As both the power draw and the crunch time will probably change in each separate case, test them all individually and then work out the optimum (tasks completed per kWh used).  You might expect that running a task on every thread would be best, but I'm not so sure.  I suspect that it might actually be more efficient to run fewer than the maximum possible.

Secondly, install the GPU (disable CPU crunching completely) and then repeat the exercise from a single GPU task to however many can be run concurrently while still showing a lower 'per task' crunch time, i.e. the concurrent crunch time still being less than the same number of tasks crunched consecutively.  Some early reports seem to indicate that this might have quite a large effect on the 'per task' time, since the GPU utilization for a single task seems quite low.  It would be good to know the extra power consumed with each extra concurrent task so that the most efficient configuration could be determined.
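The 'tasks completed per kWh used' figure of merit from the paragraphs above reduces to a short calculation once the wall power and crunch times are measured; the numbers in the example are purely hypothetical placeholders:

```python
def tasks_per_kwh(wall_watts, seconds_per_task, concurrent_tasks=1):
    """Tasks completed per kWh drawn at the wall, for a steady-state
    configuration running concurrent_tasks at once."""
    joules_per_task = wall_watts * seconds_per_task / concurrent_tasks
    return 3.6e6 / joules_per_task  # 1 kWh = 3.6e6 joules

# Hypothetical example: 300 W at the wall, 4 concurrent GPU tasks,
# 3600 s elapsed per task -> each task costs 270,000 J.
print(tasks_per_kwh(300, 3600, 4))
```

Computing this for each configuration (1..N tasks, CPU-only and GPU) and picking the maximum gives the optimum Gary describes.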

When the GPU app requires less CPU support than now, this will all change.  If the GPU app currently has a better output per kWh, it may improve further as the app matures.  The optimum number of concurrent tasks would also likely decrease at that point.  Having the figures for now would be useful for when apps with greater GPU utilization are tested in the future.  I'll probably try to do something like this when a Linux app becomes available.

 

Cheers,
Gary.

Jim1348
Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

Zalster wrote:
Don't want to start a argument but I think the math is wrong.  I don't know any Intel chip that runs on the stated TDP.  I wish it ran at 140 W haha. More like 450 watts on a good day.  Especially when that auto boost is allowed. So figure 37.5 w per thread for 6.6 hours is 250 watts.  Yes the GPU can push 250W when fully loaded but it's not currently so it's lower.  Of course, once I start to run more than 1 per card that electric use will go up but still overall, the GPU is more efficient in both time wise and electricity.  At least that is how I see it.   Have a good day and keep crunching..

I agree that the GPUs are not being pushed anywhere near to their full TDP at the moment.  But 450 watts for a CPU?  I don't think you have seriously looked into that one.

Zalster
Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

Gary Roberts wrote:

If someone wanted to do a proper comparison and come up with a meaningful result, the power draw from the wall should be measured for two configurations.  Firstly, with no discrete GPU installed, measure the wall power used and the crunch times returned when running from just one task to the maximum number possible.  As both the power draw and the crunch time will probably change with each separate case, test them all individually and then work out the optimum (tasks completed per kWh used).

Easy enough, I have a few Kill A Watt meters lying around. In a few hours I should be able to get wattage with the CPU fully loaded with only CPU tasks, as well as with a single-threaded work unit. The GPU eval will have to wait until Thursday, when I can go over to that Windows machine and plug in the Kill A Watt.
