Multi-Directed Gravitational Wave Search

TimeLord04
Joined: 8 Sep 06
Posts: 1,442
Credit: 72,378,840
RAC: 0

I HIGHLY recommend an app_config.xml file to set things appropriately.

 

[My Mac app_config.xml File:]

My Mac has TWO EVGA GTX 750 Ti SC cards.  I crunch TWO units at a time per card.  I DO NOT crunch CPU units.  My Mac has a 4-core CPU, and through BOINC Settings ---> Computing Preferences I limit CPU cores and usage to 50% each.

In my app_config.xml file (below), the <max_concurrent>4</max_concurrent> tag caps each application at 4 tasks running at once, which works out to 2 units per card across my two GPUs.  The <cpu_usage> tags are set so BOINC budgets as little as 0.20 of a CPU core for BRP4G work and 0.50 of a core for ALL other work.  The <gpu_usage> value is set to 0.50 for ALL work types; because each task claims only half a GPU, 2 units of that work type run on each GPU at the same time.

 

<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <max_concurrent>4</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.5</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <max_concurrent>4</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.2</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>4</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

 

 

[My Windows XP Pro x64 app_config.xml File:]

My Windows system has ONE EVGA GTX-760.  I crunch TWO units at a time on this card as well.  As before, I DO NOT crunch CPU units.  The Windows system has a dual-core AMD A6-6400K APU; I DO NOT use the attached ATI Radeon card.

On this computer I also use BOINC Settings ---> Computing Preferences to limit the CPU cores and usage to 50% each.  In this app_config.xml file (below), <max_concurrent>2</max_concurrent> allows 2 units to be crunched at one time on the GTX-760.

As before, the <cpu_usage> tags let BOINC budget as little as 0.20 of a core for BRP4G GPU work and 0.50 of a core for ALL other GPU work.

 

<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.5</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.2</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRP3</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.5</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

 

I hope this is informative and helps people make the best use of their individual GPU setups.  :-)

On the Mac I recommend TextWrangler to create the app_config.xml file.  In Windows, use Notepad.exe; when saving, change "Save as type" to "All Files" and make sure the "Encoding" drop-down below it is set to ANSI.  Otherwise you may end up with "app_config.xml.txt"  <<-----  THIS does NOT work.

 

TL

[EDIT:]

The app_config.xml file should be saved in the BOINC data directory, inside the projects folder, in the einstein.phys.uwm.edu folder.  Once the file is in place, suspend BOINC task processing, then in BOINC Manager go to Options ---> Read config files.  Then check the Event Log to make sure that app_config.xml was read in with no errors.
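
If you like to double-check the file before hitting "Read config files", a quick well-formedness test saves a trip back to the Event Log.  This is only a minimal sketch using Python's standard library, not anything BOINC itself provides, and the example path in the comment is an assumption for a default Windows install; point it at wherever your own projects folder lives.

# check_app_config.py -- minimal sketch (not part of BOINC): verify that
# app_config.xml is named correctly and is well-formed XML before asking
# BOINC to re-read it.  Paths/names in the comments are examples only.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

# Pass the file location on the command line, e.g.
#   python check_app_config.py "C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu\app_config.xml"
path = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("app_config.xml")

if path.name != "app_config.xml":
    sys.exit(f"File is named '{path.name}' -- BOINC only reads 'app_config.xml' "
             "(watch out for a hidden '.txt' extension).")

try:
    root = ET.parse(path).getroot()
except ET.ParseError as err:
    sys.exit(f"Not well-formed XML: {err}")

if root.tag != "app_config":
    sys.exit(f"Root element is <{root.tag}>, expected <app_config>.")

# Print a summary of each <app> entry so typos in names or values stand out.
for app in root.findall("app"):
    print(app.findtext("name", "(missing <name>)"),
          "max_concurrent =", app.findtext("max_concurrent", "-"),
          "gpu_usage =", app.findtext("gpu_versions/gpu_usage", "-"),
          "cpu_usage =", app.findtext("gpu_versions/cpu_usage", "-"))

print("Looks OK -- now use Options ---> Read config files in BOINC Manager.")

BOINC will still report any remaining problems in the Event Log, so treat this as nothing more than a convenience check.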

 

TL

TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join SETI Refugees

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,870
Credit: 115,918,914,159
RAC: 35,379,911

Richie_9 wrote:
Eskomorko wrote:
Those CV 1.00 tasks took 20hrs to complete. Is that normal?

I looked at your host and my amateurish estimate is that those CV 1.00 tasks might take that long if you run your system under very heavy load... close to 100%. How many tasks were you running in parallel at that time?

Thanks very much for your help in various places around the boards.  Helping others is a good way to improve your own knowledge :-).

Richie_9 wrote:
If you want to run both GPU and CPU work at the same time (FGRPB1G and CV 1.00), I would personally set up your system to run perhaps 4 CPU tasks + 2 GPU tasks. Or with a cc_config.xml file you could lower the dedication factor of CPU per GPU task to 0.5 CPU / 0.5 GPU (the default is 1 CPU / 1 GPU), and then a nice combination might be 5 CPU + 2 GPU tasks. Others are better able to give advice on tweaking that CPU factor... I haven't tried that kind of management through the config file.

The E3-1231 processor is a quad core with HT.  If there are CPU tasks occupying 2 threads on a single core, that would certainly slow down those tasks in comparison to one thread per core.  I think your suggestion of 4 CPU tasks plus 2 GPU tasks is worthwhile for testing.  Processing of FGRPB1G tasks would seem to be a whole new ball game so comparisons with previous BRP apps could be very misleading.  It will take time for various people to experiment and report before we get some idea of the best way to configure things.

There are different ways to achieve that suggested mix.  One way is to use an app_config.xml file (not cc_config.xml).  I posted an example in this message together with a link to the documentation.  If gpu_usage was set to 0.5 and cpu_usage was set to 2.0, there should be 4 CPU tasks and 2 GPU tasks in progress simultaneously if the HT remained enabled.  The big advantage of this method is the ease of experimenting with setting changes - just edit the file and then use the 're-read config files' option in BOINC Manager to get an instant change in the task mix.  If the change is not helpful, it's very easy to back it out and try something else.
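
As a rough sketch of what such a file might contain for that 4 CPU + 2 GPU mix, assuming the hsgamma_FGRPB1G app name used earlier in the thread (treat the numbers as a starting point for experimentation, not a recommendation):

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>2.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

With gpu_usage at 0.5, two GPU tasks run together; with each of them budgeted at 2.0 CPUs, four of the eight HT threads stay free for CPU tasks, giving the suggested 4 + 2 mix.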

A second way to achieve the same 4+2 mix (with HT still enabled) would be to set the compute preference setting for BOINC to use 75% of the cores and then set the project preference setting for GPU utilization factor of FGRP apps to 0.5.  That way, there would be no need to create an app_config.xml file. The disadvantage of this method is that changes in the GPU utilization factor aren't seen by the client (even on an 'update') unless new tasks are downloaded.  Another disadvantage is that the cpu_usage value can't be changed from the default (1 CPU) when the GPU utilization factor is changed, so options for tweaking are more limited.

 

Cheers,
Gary.

Eskomorko
Joined: 15 Jan 09
Posts: 39
Credit: 870,934,733
RAC: 2

That 4+2 combo did the trick. RAC is rising again and my machine isn't overburdened by the work anymore.

Thanks for the advice!

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3,311,823,207
RAC: 368,895

Richie_9 wrote:
Eskomorko wrote:
It's running 8 tasks at the same time. One per CPU.

Most likely that's too much. It's generally a good idea to leave one or two of those eight threads/cores free, so they can be dedicated to handling system operations. There's so much traffic that somebody needs to do it. That way the circus will keep flying.

 

That's not needed at all. Run 8 threads and set the priorities, and the CPU tasks will just use what's left.

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0

mmonnin wrote:
That's not needed at all. Run 8 threads and set the priorities, and the CPU tasks will just use what's left.

Where and how should those priorities be set?

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3,311,823,207
RAC: 368,895

Process Lasso can keep permanent affinities/priorities.  Although I think that by default the exes for GPU tasks run at a priority a notch higher than the CPU tasks, so they take the CPU cycles they need to feed the GPUs.  I've run my 3770K with 8 CPU tasks and 6 E@H BRP4G GPU tasks on 2x GPUs, plus several other NCI apps, and saw no change in GPU performance compared with leaving the CPU idle.  I haven't tried the new tasks yet, but it seems like 3x per GPU isn't needed; I'd still run 8 CPU tasks.  They may all run a bit slower on average, but more work gets done.
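
Process Lasso is a GUI tool; for anyone who would rather script the same idea, here is a rough sketch using the third-party Python psutil package on Windows.  The process-name patterns are assumptions for illustration only (check Task Manager for the real executable names on your machine).

# bump_gpu_feeders.py -- illustrative sketch only (Windows): raise the priority
# of Einstein@Home GPU app processes so they keep the GPUs fed while 8 CPU
# tasks run.  Requires the third-party psutil package (pip install psutil).
# The name patterns below are ASSUMPTIONS -- check Task Manager for your
# actual executable names before trusting them.
import psutil

GPU_APP_PATTERNS = ("hsgamma_FGRPB1G", "einsteinbinary_BRP")  # hypothetical patterns

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if any(pattern.lower() in name.lower() for pattern in GPU_APP_PATTERNS):
        try:
            # One notch above normal, roughly what Process Lasso would be set to do.
            proc.nice(psutil.ABOVE_NORMAL_PRIORITY_CLASS)
            print("Raised priority of", name, "(pid %d)" % proc.pid)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            print("Could not adjust", name, "(pid %d)" % proc.pid)

Unlike Process Lasso, a one-shot script like this has to be re-run each time new tasks start, and it may need an elevated prompt to touch the BOINC client's child processes.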

solling2
Joined: 20 Nov 14
Posts: 219
Credit: 1,575,898,144
RAC: 50,625

Christian Beer wrote:

We just started to distribute the first tasks for the next search for continuous gravitational waves on Einstein@Home. It works similarly to the recent all-sky search but also has some new features. Here is an overview:

  • We are using a refined dataset based on the data gathered during the first Observational Run (O1) of the LIGO detectors.  ...  

So these are tasks from the first Observational Run, and crunching of that data continues. For the second Observational Run the data are being analysed online (see http://www.ligo.org/news/index.php#O2Jan2017update ), so there are no tasks to crunch on Einstein@Home as of now. Two event candidates have already been identified in run two. I just wonder whether crunching will continue after run one when candidates are turning up online?

 

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

solling2 wrote:
For the second Observational Run the data are being analysed online (see http://www.ligo.org/news/index.php#O2Jan2017update ), so there are no tasks to crunch on Einstein@Home as of now. Two event candidates have already been identified in run two. I just wonder whether crunching will continue after run one when candidates are turning up online?

I am also wondering whether this affects the development of a GPU version.  If there is not so much data to crunch, they may not be so interested.  Or maybe a GPU can allow them to crunch in better ways than before?  Some enlightenment would be helpful.

Shawn Kwang
Joined: 3 Nov 15
Posts: 289
Credit: 2,987,808
RAC: 1,589

solling2 wrote:

So these are tasks from the first Observational Run, and crunching of that data continues. For the second Observational Run the data are being analysed online (see http://www.ligo.org/news/index.php#O2Jan2017update ), so there are no tasks to crunch on Einstein@Home as of now. Two event candidates have already been identified in run two. I just wonder whether crunching will continue after run one when candidates are turning up online?

 

There are a number of different analyses being run on the data from LIGO. E@H is part of the continuous wave (CW) analysis, where the entire dataset is collected before being sent out to our volunteers. The analysis that runs as the data are being collected, and that is referred to on the Web site, is the compact binary coalescence (CBC) search, which produces a different scientific result. (All this is a simplification of the science being done with the data collected at LIGO. There are multiple analyses, not just the two I mentioned.)

I believe there will be more CW data for all of you to crunch in the future.

Einstein@Home Project

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

"Continuous Gravitational

"Continuous Gravitational Wave Search R" just recently showed up on the Project Preferences page.  What is it? 

It was enabled by default, but I don't recall seeing an explanation.

(Maybe this should be in a separate topic, since it is not marked "Multi-Directed").

 
