Hi all,
I have two PCs, and one runs SETI@home much faster than the other. With Einstein@Home, though, the PC that runs SETI@home faster takes longer to finish Einstein@Home work units.
Does Einstein@Home change the "size" of the work unit to match the processor speed? Or is there some other explanation for this apparently odd behaviour?
many thanks
How is WU size matched to CPU?
The code the different BOINC projects use might be better suited to certain CPU architectures than to others. E.g. it seems that E@H performs better on AMD CPUs than on Intel ones (with comparable clock rates and overall performance), while SETI may be the other way round. The WUs of E@H are constant (well, small differences may occur). There are some threads here discussing the runtimes of E@H WUs on different (types of) machines; one title I remember is "Curious run times".
BM
> The code the different BOINC
> The code the different BOINC projects use might be better suited to certain CPU
> architectures than to others. E.g. it seems that E@H performs better on AMD
> CPUs than on Intel ones (with comparable clock rates and overall performance),
> while SETI may be the other way round. The WUs of E@H are constant (well,
> small differences may occur). There are some threads here discussing the
> runtimes of E@H WUs on different (types of) machines; one title I remember is
> "Curious run times".
>
> BM
Many thanks
I searched for "Curious run times" and found the thread. CPU architecture is most likely the answer, but both my CPUs are Intel without HT, so there must be another aspect at play, maybe L2 cache size? I guess Einstein@Home is less influenced by L2 cache size. (With SETI, the nominally slower CPU actually runs SETI@home faster, but that CPU has more L2 cache, which seems to let SETI@home run more efficiently.)
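To show the effect I have in mind, here is a small sketch (my own toy benchmark, not code from either project; the buffer sizes and the 256 MiB traffic figure are arbitrary choices) of how a plain summation loop slows down once its working set no longer fits in L2 cache:

#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Time repeated summation over a buffer of `bytes` bytes. When the
// buffer fits in L2 cache the loop runs much faster than when it has
// to stream from main memory. Total traffic is held constant so each
// test does the same amount of arithmetic.
static double sum_rate_mb_per_s(std::size_t bytes) {
    std::vector<int> data(bytes / sizeof(int), 1);
    volatile long long sink = 0;              // keep the sum from being optimized away
    const std::size_t total_bytes = std::size_t(1) << 28; // ~256 MiB per test
    const int passes = int(total_bytes / bytes);

    auto t0 = std::chrono::steady_clock::now();
    for (int p = 0; p < passes; ++p)
        sink = sink + std::accumulate(data.begin(), data.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    return double(bytes) * passes / secs / 1e6; // MB/s processed
}

int main() {
    // 128 KiB fits in a typical L2 cache of the era; 8 MiB does not,
    // so the largest case is limited by main-memory bandwidth.
    for (std::size_t kib : {128, 512, 2048, 8192})
        std::printf("%5zu KiB working set: %8.0f MB/s\n",
                    kib, sum_rate_mb_per_s(kib * 1024));
}

On two CPUs with different L2 sizes, the point where the rate drops off should shift accordingly, which would explain one machine winning at SETI and losing at E@H.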
> I guess Einstein@Home is
> I guess Einstein@Home is less influenced by L2 cache size.
That's been my observation; one of my fastest PCs is actually running a 1.8 GHz AMD Duron processor, which is quite a surprise given that the Duron has a much smaller L2 cache than the Athlon XP/MP line, much as the Celeron does compared to the P4 on the Intel side.
"Chance is irrelevant. We will succeed."
- Seven of Nine
In short, the WUs are not
In short, the WUs are not generated specially for any particular machine. Some projects have more than one application (Protein Predictor has MFold and Charmm, and Pirates has a dozen or so). Some of these different applications may require more memory or more run time than is available on a particular machine, and such a machine will not be sent WUs of that type.
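As a rough illustration only (this is not the actual BOINC scheduler code; every struct field and limit below is invented for the example), the scheduler-side decision boils down to a feasibility check along these lines:

#include <cstdint>
#include <string>

// Hypothetical, simplified view of what a BOINC-style scheduler checks.
// Field names and limits are made up for illustration.
struct HostInfo {
    std::uint64_t ram_bytes;
    std::uint64_t free_disk_bytes;
    double        flops;            // measured floating-point speed
};

struct AppWorkunit {
    std::string   app_name;
    std::uint64_t min_ram_bytes;    // memory bound declared for this app's WUs
    std::uint64_t disk_bytes;       // disk footprint of inputs and outputs
    double        est_flop_count;   // estimated total computation
    double        deadline_seconds; // time allowed before the WU expires
};

// A WU of a given application is only sent if the host can hold it
// and can plausibly finish it before the deadline.
bool feasible(const HostInfo& host, const AppWorkunit& wu) {
    if (wu.min_ram_bytes > host.ram_bytes)       return false;
    if (wu.disk_bytes    > host.free_disk_bytes) return false;
    double est_runtime = wu.est_flop_count / host.flops;
    return est_runtime <= wu.deadline_seconds;
}

A host that fails the check for one application can still pass it for another, which is why some machines simply never see certain WU types.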
BOINC WIKI
> In short, the WUs are not
> In short, the WUs are not generated specially for any particular machine.
> Some projects have more than one application (Protein Predictor has MFold and
> Charmm, and Pirates has a dozen or so). Some of these different applications
> may require more memory or more run time than is available on a particular
> machine, and such a machine will not be sent WUs of that type.
FWIW, all the E@H workunits have the same size and memory restrictions and should take the same number of cycles to run. Strictly speaking, though, this is not true: depending on what is in the data, the search algorithm may have to do a different amount of work, and the memory and file usage may be quite different. But when the scheduler sends out work, the disk/memory/CPU requirements for all the WUs are the same.
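A toy example of that data dependence (invented for illustration; the real search code is far more elaborate): two inputs of identical size can trigger very different amounts of follow-up work.

#include <cmath>
#include <cstdio>
#include <vector>

// Stand-in for a follow-up computation that is only triggered by
// "interesting" samples; deliberately expensive.
double follow_up(double x) {
    double acc = 0.0;
    for (int i = 0; i < 10000; ++i)
        acc += std::sin(x + i);
    return acc;
}

// Two inputs of the same size can cost very different amounts of work:
// the run time depends on how many samples cross the threshold.
double search(const std::vector<double>& samples, double threshold) {
    double score = 0.0;
    for (double s : samples) {
        if (s > threshold)          // data-dependent branch
            score += follow_up(s);  // extra work only for candidates
    }
    return score;
}

int main() {
    std::vector<double> quiet(100000, 0.0); // no candidates: finishes quickly
    std::vector<double> busy(100000, 1.0);  // every sample is a candidate: slow
    std::printf("quiet score %g, busy score %g\n",
                search(quiet, 0.5), search(busy, 0.5));
}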
Bruce
Director, Einstein@Home
Delete response! Info
Delete response! Info already in this thread and/or the one referred to... pipeline length vs. randomness of code, etc.
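For anyone who wants to see that pipeline-vs.-randomness effect concretely, here is a small sketch (my own, not from either thread): the same loop over the same values runs much faster once the branch it contains becomes predictable.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Time a branchy sum over `data`. Deeply pipelined CPUs (e.g. the P4)
// pay a large penalty every time this branch is mispredicted.
static double time_branchy_sum(const std::vector<int>& data) {
    volatile long long sink = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int pass = 0; pass < 100; ++pass) {
        long long sum = 0;
        for (int v : data)
            if (v >= 128)               // predictable only if data is sorted
                sum += v;
        sink = sink + sum;
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    std::vector<int> data(1 << 20);
    std::mt19937 rng(42);
    for (int& v : data) v = rng() % 256;  // random values: branch is unpredictable

    double random_time = time_branchy_sum(data);
    std::sort(data.begin(), data.end());  // same values, branch now predictable
    double sorted_time = time_branchy_sum(data);

    std::printf("unpredictable branch: %.3f s\nsorted (predictable): %.3f s\n",
                random_time, sorted_time);
}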