It just seems weird that I could go to the trouble of (for example) setting up a GUI-free, kernel-optimised, 3-gig-plus-brained, seriously skinny Linux box (well, I can't tweak XP much more than I have - grin), and then have some bloke come along with a 650 MHz CPU, running Norton's A/V and around 100 pieces of spyware (I have seen it happen...), doing a WU every 24 hours - and that single WU gets him the same credit as (for example) the 6 work units done by my hypothetical box. I appreciate that he has spent 24 hours getting there, and I have spent 24 hours getting my 6 WUs done, but surely 6 WUs mean more to science than one?
I was wondering when you would appear - I assume you do sleep sometime? (grin)
I suppose I could have done it the way you suggest, but I tend to be amazingly honest (in my not totally humble opinion), so cannot do that (even if I do enjoy micromanagement...). Thanks for the suggestion.
Gray
It would have been nice to earn credits per WU instead of by time.
RE: It just seems weird
Try Trux's Calibrating Client to adjust the credit per WU.
[Edit to fix URL]
Seti Classic Final Total: 11446 WU.
What you can do is to use a
What you can do is use a client that scales your claimed credit upwards. There are some that just bogusly inflate the benchmarks. The problem with these is that they inflate your score for every project, not just the ones you've got an optimized client for, and the amount they boost by is completely unrelated to the size of the science app's optimization. For this reason their use is often perceived as cheating.
However, the trux client is different: it adjusts your claimed credit based on how long the project says it should take. It needs about two dozen work units to dial in, but after that it claims numbers very close to what the standard BOINC client would get with the standard science app. It will still be slightly lower, but generally within the range of variance between different OS/platforms. The slightly low credit is by design, to avoid accusations of cheating.
http://boinc.truxoft.com/
http://einsteinathome.org/node/190958
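For what it's worth, a benchmark-times-runtime claim of the kind described above can be sketched like this. Everything here - the scale constant, function name, and numbers - is purely illustrative, not BOINC's actual code; the point is only that the claim never looks at which science app ran, so inflated benchmarks inflate every project's claim:

```python
SCALE = 100.0 / 86400.0  # illustrative only: ~100 credits per "GFLOPS-day"

def claimed_credit(cpu_seconds, whetstone_gflops, dhrystone_gips):
    # Sketch of a benchmark-times-runtime claim: average the two
    # synthetic benchmarks and multiply by CPU time. Nothing here looks
    # at which science app ran, or how well optimized it was.
    avg_bench = (whetstone_gflops + dhrystone_gips) / 2.0
    return SCALE * avg_bench * cpu_seconds

# Inflating the reported benchmarks inflates the claim on every project
# equally, regardless of whether an optimized app was involved.
honest = claimed_credit(3600, 1.0, 1.0)
inflated = claimed_credit(3600, 2.0, 2.0)  # benchmarks doubled -> claim doubled
```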
RE: However the trux client
HA!!! Beat ya by 7 secs.
Seti Classic Final Total: 11446 WU.
RE: HA!!! Beat ya by 7
Yeah, but you didn't tell him why he should use it, so neeener neeener neener. :-P~~~
Thx for the pointers. I have
Thx for the pointers. I have installed the truXoft software, but now I just get an error when starting BOINC.
"The procedure entry point ClientLibraryStartup could not be located in the dynamic link library boinc.dll."
Hmmm I must have done something wrong. ;)
Any ideas folks?
RE: Greetings Michael I
Gray, I feel for ya, buddy. I wasn't suggesting you do the OC/bench dance I described, just using that to illustrate a point.
microcraft
"The arc of history is long, but it bends toward justice" - MLK
RE: It would be been nice
RD,
I agree, and more than a few times I've noticed a scheme for benching/crediting based on TFLOPs performed discussed on the developers' boards, so an equitable crediting system is being worked on, at some priority. Such a plan does present its own problems - for example, different projects emphasize different levels and types of calculations.
Respects,
Michael R.
microcraft
"The arc of history is long, but it bends toward justice" - MLK
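The appeal of crediting by operations performed, as discussed above, is easy to sketch. The scale constant and the flop count below are made-up illustrative numbers, not any project's actual figures:

```python
SCALE = 100.0 / (1e9 * 86400.0)  # illustrative only: ~100 credits per GFLOPS-day

def credit_from_flops(fpops_performed):
    # Credit tied to the operations a work unit actually required, not to
    # wall-clock time on whatever host ran it: a slow host and a fast host
    # earn the same per WU, and the fast host simply finishes more WUs.
    return SCALE * fpops_performed

slow_host_one_wu = credit_from_flops(8.64e13)       # one WU in 24 hours
fast_host_six_wus = 6 * credit_from_flops(8.64e13)  # six WUs in 24 hours
```

Under such a scheme, six completed work units are simply worth six times one, which addresses the original complaint directly.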
RE: "The procedure entry
My first guess is that you may not have complied with this part of the installation instructions:
"Copy also all other files in the distribution zip archive (except of the readme files) to the main BOINC directory. Then restart BOINC again."
As the message suggests, the libraries actually present in the execution directory don't contain an expected entry point.
Some of the libraries in the trux zip file have the same names as, and therefore must replace, ones which your previous installation placed in that same BOINC primary directory. Depending on what unzip/copy method you used, you probably needed to say "yes" to an overwrite inquiry.
RE: What you can do is to
Dan,
I'm curious. (big surprise there!) My rig is currently slicing WUs from 4 different datafiles. Does that imply that trux's CC will need to process 30 or more of each before it reaches equilibrium, and then will have to re-calibrate each time a new datafile enters the picture?
edit - added last phrase
microcraft
"The arc of history is long, but it bends toward justice" - MLK
RE: My rig is currently
The trux Calibrating Client handles each project entirely separately, but assumes that work within a project should have an equivalent relationship between measured effort and "real work". You are right that different datafiles have slightly different errors (for that matter, it drifts within the same datafile), so you will see the calibration hunt around a little, but in most cases it is quite minor. A bigger issue is that a temporary disturbance in your machine's operating conditions that changes reported CPU times can put a transient into the calibration, which will drift away from equilibrium and then return to it.
With Einstein, at least, so far this is quite minor stuff compared to the initial calibration--no reason to worry.
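A toy model of the behaviour described here - a per-project correction factor that needs a couple dozen results to settle and recovers from transients - might look like the following. The class, the smoothing constant, and the under-claim margin are purely illustrative assumptions, not trux's actual algorithm:

```python
class Calibrator:
    """Toy per-project calibration factor (NOT trux's actual code):
    track the ratio of granted to claimed credit, smoothed so that one
    odd result only nudges the factor, and transients decay away."""

    def __init__(self, smoothing=0.1):
        self.factor = 1.0        # multiplier applied to future claims
        self.smoothing = smoothing

    def record(self, granted, claimed):
        # Move part of the way toward the observed granted/claimed ratio;
        # after roughly two dozen results this settles near the true value,
        # and a brief disturbance drifts off and then returns.
        ratio = granted / claimed
        self.factor += self.smoothing * (ratio - self.factor)

    def adjust(self, raw_claim):
        # Claim slightly under the calibrated value, by design.
        return 0.98 * self.factor * raw_claim
```

A single outlier result only nudges the factor by the smoothing fraction, which is why a transient "moves off, then back" rather than wrecking the calibration.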