I see now they are on average 12 hours. That changed.
This will happen every time a CPU task completes.
Because you are running GPU tasks which complete very quickly, these will be affecting the estimates for all tasks, both CPU and GPU. If you watch closely, you will see a drop in the estimates for every other task as each GPU task completes. If enough GPU tasks complete before another CPU task does, the estimates could come right down again. When a further single CPU task does complete (perhaps taking around 12 hours again), the estimates will all go right back up in one big jump. This is the way the duration correction factor (DCF) works at Einstein.
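To make that asymmetry concrete, here is a minimal Python sketch of the kind of update the client applies to the single per-project DCF. The constants and thresholds are assumptions from memory rather than the current BOINC source, so take the behaviour (quick to rise, slow to fall) as the point, not the exact numbers.

def update_dcf(dcf, estimated_secs, actual_secs):
    # Illustrative sketch of a BOINC-style DCF update; constants are assumed.
    ratio = actual_secs / estimated_secs   # how wrong the raw estimate was
    if ratio > dcf:
        # Underestimates are corrected in one jump: a single long CPU task
        # drags the estimates for every task straight back up.
        return ratio
    # Overestimates are corrected cautiously: each quick GPU task only
    # nudges the factor (and hence every estimate) down a little.
    return 0.9 * dcf + 0.1 * ratio

# Hypothetical walk-through: raw estimates of 6 hours, GPU tasks actually
# taking half an hour, then one CPU task actually taking 12 hours.
dcf = 2.0
for _ in range(5):                          # five quick GPU completions
    dcf = update_dcf(dcf, 6 * 3600, 0.5 * 3600)
print(round(dcf, 2))                        # only drifts down to about 1.2
dcf = update_dcf(dcf, 6 * 3600, 12 * 3600)  # one slow CPU completion
print(round(dcf, 2))                        # jumps straight back up to 2.0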
There are strategies you can use to partially mitigate these big swings.
Firstly, stop the GPU tasks from finishing so quickly :-). If you run two concurrent tasks they will each take longer, but less than twice as long, so you should be able to increase GPU task throughput. The side benefit is that the reduction in the estimates of the other tasks won't be quite so aggressive. With enough GPU RAM you can go higher than two, but the throughput gains diminish and your machine may become sluggish.
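If you want to try two concurrent tasks, the usual mechanism is an app_config.xml in the Einstein@Home project directory. A minimal sketch is below; the application name is my assumption for the FGRPB1G GPU search, so verify it against the <app> entries in your own client_state.xml before relying on it.

<app_config>
  <!-- Save as app_config.xml inside the project folder, e.g.
       projects/einstein.phys.uwm.edu/ in the BOINC data directory. -->
  <app>
    <!-- App name is an assumption - check client_state.xml. -->
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task means two tasks share one GPU;
           1.0 CPUs per task budgets a full core for GPU support. -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving the file, use Options -> Read config files in BOINC Manager (or restart the client) so the change takes effect.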
Secondly, experiment with how CPU support is provided to GPU tasks. The default for NVIDIA GPUs is to 'reserve' one CPU core for each running GPU task. That full core really is needed, but with HT enabled BOINC will use a single virtual core. I don't have any relevant experience with HT-capable CPUs and NVIDIA GPUs, but others may be prepared to share their experiences. Looking at your returned results, the difference between CPU time and elapsed (run) time for CPU tasks (2000+ secs) suggests that CPU tasks are losing some CPU cycles to other things - perhaps GPU support.
Some volunteers choose to either partially or fully restrict CPU tasks, giving the GPU unfettered access to CPU support. This is also beneficial for making the most efficient use of power and limiting heat production, and it reduces the impact on the non-crunching work the machine needs to handle. You need to work out what suits you best, so just try experimenting with different setups.
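If you want to try the partial restriction, the simplest lever is the 'Use at most N% of the CPUs' computing preference in BOINC Manager. The same setting can be put in a local override file; the sketch below assumes the standard global_prefs_override.xml element name, so check it against your BOINC version before using it.

<global_preferences>
  <!-- Save as global_prefs_override.xml in the BOINC data directory.
       On an 8-thread CPU, 75% leaves two threads free of CPU tasks -
       handy for GPU support and ordinary desktop work. -->
  <max_ncpus_pct>75</max_ncpus_pct>
</global_preferences>

Have BOINC Manager re-read the local preferences (or restart the client) after editing.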
Dennis_82 wrote:
So E@H hasn't got short units of max 2 hours, right?
That's correct. There are two different CPU searches and the tasks for each will take longer than that. Turning off HT will reduce the crunch time, but not enough to prevent a drop in throughput for CPU work with only half the number of tasks running. CPU tasks are designed to take a certain time so as not to overload the servers with overly frequent requests. GW tasks do take longer than FGRPB1 tasks, but this is allowed for in the credit awards. Many people choose to run GW tasks because of the prospect of the first-ever detection of continuous gravitational waves - exactly what the GW search is designed to do.
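To see why, here is a back-of-the-envelope comparison. The numbers are purely hypothetical (not measured Einstein@Home times) and the per-task speed-up from disabling HT is an assumption for illustration only.

# Hypothetical figures: HT on runs 8 CPU tasks at ~12 h each,
# HT off runs 4 tasks that (assumed) speed up to ~8 h each.
tasks_per_day_ht = 8 * 24 / 12.0      # 16 tasks per day
tasks_per_day_no_ht = 4 * 24 / 8.0    # 12 tasks per day
print(tasks_per_day_ht, tasks_per_day_no_ht)

Each task finishes sooner with HT off, but with only half as many running at once the daily output is still lower.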
Thanks for all the info. At the moment my 24/7 system runs on a 2-core CPU. It runs 2 GPU tasks at the same time, giving something like 300K per day on point.
On my other system, which only runs a few hours a day, I've got the 4770K, but damn, those units take a long time.
I would like to run Gamma-ray pulsar binary search #1 on GPUs v1.20 on my Windows 10 64-bit machine. Is there something I need to install beyond Einstein@Home and BOINC?
N528221
Besides being off topic in this thread, your NVIDIA NVS 4200M (512 MB) doesn't have enough GPU RAM to support the Gamma-ray pulsar binary search #1 on GPUs v1.20. It needs at least 1 GB of GPU RAM.
After performing the last "tuning", today we started the new search for Continuous Gravitational Waves from the Galactic Center ("O1Spot1"). The first workunits were generated and the first tasks are being sent. As we already did in the "tuning" run, the work is split into a "high" and a "low" frequency part, the former being processed by the rather fast (CPU) hosts, the latter by the slower ones. This search is set up to run for a couple of months.
Hello Bernd!
Do I understand right that for this search only a CPU application exists? And CPU time is now meaningful for the Einstein@Home project?
Thank you for the amazing science!
Yes, for this particular search there is only a CPU app. A GPU app is proposed for future GW searches. There is no firm indication of when that might be. It will be ready when it's ready.
hoarfrost wrote:
And CPU time is now meaningful for the Einstein@Home project?
It always has been.
I am assuming the O1 designation implies this data is from the first science run of the improved LIGO. I look forward to data from the second run of the further "souped-up" LIGO.
O1 is code for 'Observation run #1' for the advanced LIGO detectors. This is already 'souped-up' data by comparison with what came before.
The second observation run (O2) started last December and is still in progress. The latest update of LIGO news has it scheduled to finish on August 25. It is going to be quite a while after that before data is prepared and ready for distribution to E@H volunteer hosts. No doubt there will have been some enhancements made to the detectors between O1 and O2 so there should be some modest further improvement over O1 data. We can probably expect incremental improvements for years to come.
Detections of BH-BH mergers are becoming somewhat routine these days - that same news page talks about a 3rd confirmed detection - so the more exciting stuff will be the first confirmation of continuous GW emissions. This is why E@H (with its large CPU resources) is ideally placed to be involved in that. If you wait around for even more 'souped up' data, you might miss the boat.
Hello!
From which sources are we currently looking for a signal? From a pair consisting of the central huge black hole (n*10^6 Msun) and another presumed black hole (m*10^5 Msun)?
Thank you!
This search is primarily going for CONTINUOUS waves. These are expected from asymmetric pulsars. However, this question is more related to the science section than to this thread, which primarily refers to the app. Anyway, just keep on crunching!