Is that "too hard to get" for an US- citizen?
BTW: "Bomber", or "Hairstyle"?
That was callous and untoward. BTW, he/she's from Denmark.
Ok. Done wrong. Ashes on my head. Accepted.
But that was "btw".
Still waiting for some "official announcements" here.
Regards, bonnyscott
Correct, Denmark is my native country.
With regards to the nick, I can honestly say that I have always been a huge fan of The B-52's, and though I'm 46, I still love punk music.
Why all the complaining? If optimizing for instruction sets shows so much potential for improvement, why aren't all the ubergeek Linux folks swarming all over the code themselves, and writing their own optimizations? Isn't that what they're supposed to be good at? And if it turns out in the end that the code is just too thick and convoluted to address those arcane instruction sets, isn't that a clue that it's just an inferior OS?
Is there any project where Linux is particularly well-suited, where it has the advantage over the others?
Clueless :-)
microcraft
"The arc of history is long, but it bends toward justice" - MLK
Why all the complaining? If optimizing for instruction sets shows so much potential for improvement, why aren't all the ubergeek Linux folks swarming all over the code themselves, and writing their own optimizations? Isn't that what they're supposed to be good at?
Please correct me if I am wrong, but AFAIK the albert application is closed-source, so optimization is not that easy for the ubergeeks if you first have to reverse engineer the code.
That said, akosf apparently managed not only to reverse engineer the code but to improve the efficiency of the albert application by a factor of 4. And he did it in his spare time ;)
I live in Germany, where we do not have a lot of natural resources and have to rely on brainpower to compete in a globalized world. Sadly, the whole educational and science system here is heavily underfunded, so every Euro counts.
And an efficiency improvement of that magnitude translates into serious money. Isn't it the case that the LIGO/GEO-600 science output is still computationally bound?
As far as I understand it, there are taxpayer-funded computing clusters still working at or below 25% efficiency without akosf's improved methods, and I ask myself: why is the science application closed source, making it so hard for the community to contribute optimizations?
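To put rough numbers on that argument (all figures below are made up for illustration, not taken from the project): a 4x speedup means the same donated CPU time yields four times the completed work units, which is economically equivalent to quadrupling the cluster budget. A minimal sketch:

```python
# Back-of-envelope sketch with hypothetical numbers: what a 4x per-client
# speedup means for a fixed pool of donated CPU time.

def work_units_completed(cpu_hours: float, speedup: float,
                         hours_per_wu: float = 10.0) -> float:
    """WUs finished within a CPU-time budget.

    10 CPU-hours per unoptimized WU is an assumed baseline, not a
    measured figure from the albert application.
    """
    return cpu_hours * speedup / hours_per_wu

baseline = work_units_completed(100_000, speedup=1.0)   # unoptimized client
optimized = work_units_completed(100_000, speedup=4.0)  # akosf-style 4x client

print(baseline, optimized)  # 10000.0 40000.0
```

Whatever the real per-WU cost is, the ratio is what matters: the optimized client delivers 4x the science per Euro of electricity and hardware.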
Why all the complaining? If optimizing for instruction sets shows so much potential for improvement, why aren't all the ubergeek Linux folks swarming all over the code themselves, and writing their own optimizations?
I'm thinking about an optimised albert for Linux, but first I have to solve some problems.
Well, that was a decent food fight if ever I saw one! :-)
A good example of how where you arrive at ( conclusions ) often depends on where you started ( assumptions )!
I'm sure our project managers will have the good sense to account for the concerns expressed here, and I'm also sure that, as usual, time/money/finite resources etc. limit immediate implementation of any of the good ideas expressed here. They have had a major conference recently. They are in the middle of a big science run. They are not indifferent to concerns; indeed, they are grateful for the feedback that occurs here. The optimisations ( all hail akosf!!! ), while not yet fully implemented for all comers, are an excellent example of the brilliant interplay that can occur with distributed computing ( dare I say an 'emergent property' ). It is excellent that such vigor can be witnessed here, even if the views are disparate ( at whatever granular level ).
The original poster did say he'd come back later when conditions are more suitable, and who else could or should decide that?!! Him, of course. So let's not soil the threshold/doorway that he may return through, please. Recently, in another thread, the purpose of credit was discussed. I'll add to that theme by identifying another usage: it clearly illustrates a user's dedication to this project when, for example, 300K+ credit is generated in some 12 or so months, notwithstanding any difficulties!
So enough sermonizing! :-)
To Wurgl ( and friends ) I'll re-iterate:
Certainly many thanks to you for your assistance, your considerable contributions, and you're quite welcome back at any time! :-)
I too look forward to when optimisations also flow to non-Windows platforms.....
Cheers, Mike.
( edit ) and please no false logic by implying that I've said that users with less than the above example are thus less dedicated. That does not sensibly follow as a conclusion so don't bother. :-)
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
I'm sure our project managers will have the good sense to account for the concerns expressed here, and I'm also sure that, as usual, time/money/finite resources etc. limit immediate implementation of any of the good ideas expressed here.
Okay. As I have read in some messages from Bruce, there are already optimized -- maybe not as fast as those from akosf -- binaries for other platforms.
These will be distributed at some point. Now there are two ways to solve this puzzle:
1.) Wait until the hardware can handle the quadruple load.
This is the easiest way for the guys sitting on the server side, because they just have nothing to do :^) Since I tend to be lazy, I would choose this way :-)
2.) Distribute the faster applications and restrict the number of WUs to a safe limit, so that the current hardware can still handle it. Still not much to do on the server side: just make things ready for distribution, fiddle around with a few parameters, and watch whether the hardware can handle more or not.
When you choose the 2nd way, you are not only helping your science, you are also helping other projects, by freeing resources on the boxes which are currently just doing your science stuff. You win, because your hardware can do whatever it is able to do, and others win too.
And when your hardware is fit enough, you just need to change the WU-per-day limitation to something closer to infinite.
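Wurgl's second option amounts to a throttling knob: cap WUs per host per day so that a 4x faster client cannot quadruple server traffic, then raise the cap as the server hardware grows into it. A hedged sketch (the parameter names and all numbers are mine, not the project's):

```python
# Sketch of option 2 with hypothetical figures: a per-host daily WU cap
# keeps server load bounded even when the client gets much faster.

def requests_per_day(hosts: int, wu_hours: float, speedup: float,
                     daily_cap: int) -> int:
    """Total WU downloads per day across all hosts, with a per-host cap."""
    # WUs a single host can finish in 24 hours with the given client speed.
    uncapped = int(24 / (wu_hours / speedup))
    return hosts * min(uncapped, daily_cap)

# 10,000 hosts, 6 CPU-hours per WU with the unoptimized client:
old_load = requests_per_day(10_000, 6.0, speedup=1.0, daily_cap=8)
new_load = requests_per_day(10_000, 6.0, speedup=4.0, daily_cap=8)

print(old_load, new_load)  # 40000 80000
```

With these made-up numbers, the 4x client would request 16 WUs/day per host uncapped; an 8/day cap means server load merely doubles instead of quadrupling, and raising `daily_cap` later is just the parameter-fiddling Wurgl describes.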
These will be distributed at some point. Now there are two ways to solve this puzzle:
1.) Wait until the hardware can handle the quadruple load.
This is the easiest way for the guys sitting on the server side, because they just have nothing to do :^) Since I tend to be lazy, I would choose this way :-)
2.) Distribute the faster applications and restrict the number of WUs to a safe limit, so that the current hardware can still handle it. Still not much to do on the server side: just make things ready for distribution, fiddle around with a few parameters, and watch whether the hardware can handle more or not.
...
Quote:
What do you think?
I think option #2 would be perfect. HW admins would just have to install a gadget to filter out the rumble from all those not wanting to crunch for more than one project.
Metod ...
2.) Distribute the faster applications and restrict the number of WUs to a safe limit, so that the current hardware can still handle it. Still not much to do on the server side: just make things ready for distribution, fiddle around with a few parameters, and watch whether the hardware can handle more or not.
--snip--
What do you think?
I concur and would like to add:
3.) Release the optimized applications for the non-Windows platforms now.
This would not have much impact on the server side, because these platforms are in the minority anyway.
Michael
Team Linux Users Everywhere
3.) Release the optimized applications for the non-Windows platforms now.
This would not have much impact on the server side, because these platforms are in the minority anyway.
This seems like a good option, but it isn't when you consider the credit granted (there are some people around who care). It defeats the idea (which Wurgl is actually complaining about) that time per unit of work done should be roughly the same regardless of OS for the same (or a comparable) computer.
Metod ...
Okay. As I have read in some messages from Bruce, there are already optimized -- maybe not as fast as those from akosf -- binaries for other platforms.
These will be distributed at some point. Now there are two ways to solve this puzzle:
1.) Wait until the hardware can handle the quadruple load.
This is the easiest way for the guys sitting on the server side, because they just have nothing to do :^) Since I tend to be lazy, I would choose this way :-)
2.) Distribute the faster applications and restrict the number of WUs to a safe limit, so that the current hardware can still handle it. Still not much to do on the server side: just make things ready for distribution, fiddle around with a few parameters, and watch whether the hardware can handle more or not.
When you choose the 2nd way, you are not only helping your science, you are also helping other projects, by freeing resources on the boxes which are currently just doing your science stuff. You win, because your hardware can do whatever it is able to do, and others win too.
And when your hardware is fit enough, you just need to change the WU-per-day limitation to something closer to infinite.
What do you think?
Nice one!! I can follow that. :-)
I do recall some really passionate crunchers wanting to increase the limit because they felt, or seemed to have felt, a bit betrayed that their total dedication ( they didn't crunch for anyone other than E@H ) was not being 'serviced' at a given WU/day ceiling. Others saw this as perhaps odd on the face of it, but it was quite deeply felt.
That's what I was, probably quite poorly, trying to politely imply by 'vigor' and 'disparate'. The technical points everyone has made are pretty accurate as far as I can tell, within their assumptions. I do wonder what scenario will unfold from the admins, devs, etc.
( I'm a volunteer moderator on another continent, so it's important to understand I really don't have any other special status or info pipeline here. )
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal