On an AMD Phenom II from ca. 2009, however, while also crunching FGRBP on two GPUs, Gamma-Ray#5 CPU time was slooow to begin with and wisdom seemed to make it even slower; slow enough that I aborted the task before it was 20% completed.
But as Gary has been harping on, you have to make sure you are comparing the same family of tasks in a comparison run. There can be very large differences in crunch times between task species, and even some variation within a task species of the same run at slightly different frequencies.
Ideally, the only proper way to compare would be to run the same task over and over again with differently generated wisdom files in an offline benchmark.
I ran the 3950X profile with off-and-on BOINC load because I was still reading the thread and doing other stuff. When I moved to the other host and knew what to expect, I just stopped BOINC, ran the profile, and did something else for a while until I came back and saw the process had finished.
But I don't see much difference in crunch times between the two hosts on the same task species, other than the expected improvement on the Zen 2 processor.
The proper test would be offline runs with no BOINC load, partial BOINC load and full BOINC load to generate different wisdom files and then test each wisdom file on the same task over and over again in the benchmark.
And since we don't really know what is going on under the hood with the wisdom generator or the FGRP5 science application, a wisdom file could work really well on one task species and not on others.
I should try that experiment in Rick's BenchMT tool. I have profiled other changes to other applications with it.
I have another cpu task related question.
Are there any processing advantages to claiming you have 128 CPUs when the CPU chip in question is an 8c/16t CPU?
I can imagine a Beowulf distributed-processing system that could use that. And our current top producer seems to own that many discrete systems.
I am presuming there is no software that allows the kind of Beowulf aggregation his system could be displaying.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Tom M wrote: I have another cpu task related question.
I don't know, because I know nothing about Beowulf clusters. I would think that spoofing your CPU count is only useful for keeping your cache size at the level you want.
I assume each of his individual hosts just gets whatever a system of normal size gets, and the results are being aggregated.
Tom M wrote: I have another cpu task related question.
My answer is slightly off your question but might help explain something you may see.
The project sets daily download task count limits. These limits adjust to the reported CPU count and to some property measures of the GPUs present. But it has been common at this project for people with highly capable GPUs running the GRP work to find the daily task limit too confining. An easy way to loosen the limit is to adjust one's system to report more CPUs than it actually has. Many of us have done this.
For example, the system on which I am typing this note at one time hosted a Radeon VII. The actual CPU chip is an i5-4690K which has four physical cores and no hyperthreading. So, without tampering, you would see it on my host list as having 4 processors, and I would have failed to receive enough tasks to keep it occupied. Instead, it is reported as 16.
[Edit to add: I see that Keith commented on the download facilitation motivation while I was typing]
Since I don't know, what is the project daily download limit?
archae86 wrote: My answer is slightly off your question but might help explain something you may see. The project sets daily download task count limits.
Bingo!!!
That is what I was trying to get at.
And since high-volume systems run at the edge of "acceptable" behavior, we have had to "spoof" our systems to get enough tasks to keep them busy.
Thank you!
Tom M
Keith Myers wrote: Since I don't know, what is the project daily download limit?
Thinking way back, and if I remember correctly, a 4-core non-HT CPU with two GPUs hit the limit at something like 512 daily tasks.
Gav.
Thanks for the info. I know I have never stumbled across that limit before, mainly because my GPUs run multiple GPU projects all the time.
Looking at my BoincTasks info, I seem to be averaging around 200 GPU tasks a day on each host.
Back in July 2012 there was discussion on this topic, and in that thread Bikeman made a post containing the then recently updated formula.
Max tasks per day = 32 * (number of usable CPU cores) + 160 * (number of reported GPUs), where the number of cores is limited to max 64 and the max # of GPUs is 8.
I suspect this has been updated a time or two since then, but not often, and quite possibly the structural form remains the same.
Note the word "usable" means that if you limit CPU usage by preference that your daily maximum goes down.
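A minimal sketch of that formula, assuming it still has the structural form from Bikeman's 2012 post (including the caps on cores and GPUs):

```python
def daily_quota(cpu_cores: int, gpus: int) -> int:
    """Daily task limit per Bikeman's 2012 formula:
    32 per usable CPU core (capped at 64 cores) plus
    160 per reported GPU (capped at 8 GPUs)."""
    return 32 * min(cpu_cores, 64) + 160 * min(gpus, 8)

# An unspoofed 4-core host with one GPU vs. the same host reporting 16 CPUs:
print(daily_quota(4, 1))   # 288
print(daily_quota(16, 1))  # 672
```

For what it's worth, a 768/day limit is consistent with, for example, 4 reported cores plus 4 GPUs (128 + 640), and with both caps hit the formula tops out at 3328/day.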
The place I've actually seen this limit stated is in the last-contact log for each host, posted at a URL containing the Einstein host number:
https://einsteinathome.org/host/hostnumber/log
When the limit has actually constrained a host, that log contains the text
reached daily quota of yourhostmax tasks
I've personally seen these messages within the last few years on two hosts, for one of which the limit number was 768, and the second was much lower--maybe 384 or somewhere near that. I don't know whether I had already spoofed the CPU count on the one that showed a 768 limit.
I guess these values from my last contact log are not representative of the truth.
I doubt that the project will send me 2000K jobs.