DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0
17 Jan 2006 5:26:00 UTC
Topic 190624
I've done it manually this evening for other reasons (I wanted to starve another app of CPU time to see what would happen), and am wondering if there's any gain to doing it regularly?
I don't believe anyone has done a study that collected data to prove or disprove this question. You could be the first ... :)
The place where it may come into effect is to "pair" up work from two different projects so that the most effective use is made of the processors and threads are not switched willy-nilly, causing cache misses ...
But my suspicion is that the gains, if any, will be marginal. Because of the variability in the run times of work, unless you have a good, long baseline of timings, say 250-1,000 results, it would be difficult to make a convincing case from measurements.
The studies I have seen suggest a small improvement on true dual-processor (or dual-core) systems, with no improvement and sometimes degradation on hyperthreaded systems.
As Paul stated, the reason for this is cache hits/misses. On a true dual system, setting process affinity will always increase cache hits. On a hyperthreaded system the OS can actually reduce misses by changing which logical CPU a process is running on. Also, since there is only one cache, it can be flushed by whatever the other logical CPU is doing.
Looking at my runtimes for the last 20 WUs (10 of each), I seem to be averaging 13.1 ksec/WU instead of 13.5 ksec/WU, a 3% gain. My WU sizes have been fairly constant, with ETAs at 4h ±10m, a 4% spread. (The 18 ksec WUs before these two batches were with my CPU throttled by a sixth while troubleshooting a hardware problem.)
Is there a benefit to setting CPU affinity on a dual-core system?