Apparently Linux Gamma-Ray #5 CPU tasks have the following results on two different AMD model CPUs.
Tom M wrote: Apparently Linux Gamma-Ray #5 CPU tasks have the following results on two different AMD model CPUs.
A more likely explanation for the difference could be that the two machines were analysing different data files at the time you recorded the results. The first part of a task name (e.g. something like LATeah1026F_...) indicates that such tasks would be using a data file named LATeah1026F.dat.
Quite often successive data files do give similar crunch times, particularly if the file sizes are the same. Recently there has been a reasonable turnover in these files, and the sizes have been markedly different. For example, LATeah1025F.dat, issued May 21, had a size of 368,588 bytes, whilst the next in the series, LATeah1026F.dat, issued May 24, had a size of 5,134,792 bytes - very much larger. I don't really know all the factors that affect crunch time, but I wouldn't be surprised if file size was one of them.
The only way to have a fair comparison is to choose results for the same data file. Such results do tend to have a pretty constant crunch time on specific hardware - perhaps a slightly rising crunch time as the frequency term in the task name increases. So, for a really rigorous comparison, use tasks having the same data file and frequency, which is the 2nd field in the task name.
If it does turn out that the results from both CPUs were for the same data file, then that is a surprisingly large difference, much more than I would have expected.
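To make the task-name anatomy concrete, here is a small shell sketch of how the fields break apart. The full task name below is made up for illustration; only the data-file prefix and the frequency being the 2nd underscore-separated field are described above.

    # Hypothetical FGRP task name - only the "prefix -> LATeah1026F.dat"
    # mapping and "2nd field = frequency" come from the explanation above.
    task="LATeah1026F_1128.0_284232_0.0_1"

    datafile="${task%%_*}.dat"                 # 1st field plus .dat extension
    freq=$(printf '%s' "$task" | cut -d_ -f2)  # 2nd field: the frequency term

    echo "data file: $datafile"                # -> LATeah1026F.dat
    echo "frequency: $freq"                    # -> 1128.0

Two results are directly comparable when both of those values match.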
Cheers,
Gary.
For FGRP tasks on CPU, don't forget to create a wisdom file; it speeds up processing a lot.
On my 2700X, when running 5 FGRP tasks in parallel, execution times are either 1 h 20 min or around 4 h. There seem to be two flavors of FGRP tasks coming through currently.
Rolf wrote: For FGRP tasks on CPU, don't forget to create a wisdom file; it speeds up processing a lot.
What is "a wisdom file"?
mikey wrote: What is "a wisdom file"?
And how do you create it?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
For how to create a wisdom.dat file and where to put it, see the first post at this link:
https://einsteinathome.org/content/gamma-ray-pulsar-search-fgrpb1-cpu-app-version-108
I recommend running the command while there is some load on the CPU; in my experience that gives the best result. So ideally, run the command while processing a few FGRP tasks at the same time. There is a discussion about this in the same thread, I think - if in doubt, you can try it out yourself: just create several wisdom.dat files and see which one is best.
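For anyone who doesn't want to click through: the wisdom file is FFTW planner wisdom, created with the fftwf-wisdom tool that ships with FFTW 3 (the single-precision variant). Below is a minimal sketch only - the transform sizes are placeholders rather than the canonical list from the linked post, and the project path is an assumption based on a stock Linux BOINC package install.

    # Plan single-precision transforms and save the wisdom to wisdom.dat.
    # -v shows progress; the sizes here are placeholders - take the real
    # list from the post linked above. Planning can take an hour or more.
    fftwf-wisdom -v -o wisdom.dat rof1048576 rob1048576

    # Put the file where the FGRP app can find it (path is an assumption
    # based on the standard boinc-client package layout):
    sudo cp wisdom.dat /var/lib/boinc-client/projects/einstein.phys.uwm.edu/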
Thanks for the tip. I never realized there was a wisdom binary available for profiling in Linux. Ran it on both hosts and chopped 20-30 minutes off the crunch time.
Yes, great tip! Just ran wisdom, and the first Gamma-ray pulsar search #5 v1.08 task came in with a time 40 minutes less than the average of the three shortest tasks from the previous 20 tasks crunched without wisdom. That's on a 4-thread Pentium running a single CPU task while the AMD GPU ran 3x pulsar tasks. I'll try to remember to report some average times later.
Thank you ROLF!
Ideas are not fixed, nor should they be; we live in model-dependent reality.
My first estimate must have been with partially completed tasks that were started without a wisdom file. Now that I have completed some more tasks started from the beginning with a wisdom file, it looks like the improvement is on the order of 60 minutes on average.
Thank you ROLF!
Rolf wrote: For how to create a wisdom.dat file and where to put it, see the first post at this link...
Ok, what am I missing?
When I run this command, it goes away and doesn't come back until I press Ctrl-C?
Tom M
Tom M wrote: When I run this command, it goes away and doesn't come back until I press Ctrl-C?
What I was missing is that it can take an hour or more to generate.
I think.
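In case it helps the next person: the planner simply runs silently for a long time. Two flags from the fftw-wisdom man page make the wait visible and bounded (the transform sizes are placeholders again):

    # -v reports each transform as it gets planned, so the command no
    # longer looks hung; -t stops planning after the given number of hours.
    fftwf-wisdom -v -t 1.5 -o wisdom.dat rof1048576 rob1048576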
Tom M