PS... One other thing: there is a credit war in the making within Milky Way right now, as someone apparently has made an optimized client and is getting 1.5 million credits a day... Just one more reason that, the more I look at other projects besides Einstein and CPDN, the more I'm glad I found the two best!!
You're wrong.
Please explain?
There is a discussion going on at Milkyway because one user has a RAC of roughly 140,000 (the sum of all the computers he has dedicated to that project) with his own optimized app. This user has written a lot of optimized apps to give Milkyway's users more performance whenever his platforms were at a disadvantage compared with the others...
He now has his own private app version; the work is good and the credits are the same, but the run time is very, very low....
It's open source, isn't it? Well, anybody with the same skills as that user can do it too... or not...? ;)
Best regards.
(And sorry for my bad English...)
Logan.
This just exposed me to a short period of getting an unusually high number of credits, and honestly, I didn't feel I was doing anything productive; I was just gobbling up those figures.
OK. So the big question is: why did you (or do you) feel that the work being done was not productive? Did you read through this message from the Project Scientist?
Quote:
PS... One other thing: there is a credit war in the making within Milky Way right now, as someone apparently has made an optimized client and is getting 1.5 million credits a day... Just one more reason that, the more I look at other projects besides Einstein and CPDN, the more I'm glad I found the two best!!
The individual in question there is within his rights to do what he has done if the project makes the application open source and they have declared his results as scientifically valid. If he does not distribute it, then no "wrong-doing" has occurred. If he distributes it, then just like over at SETI, he will need to make sure that people can access his modified source code.
Also, I think that people need to lighten up about "absurd credits" being issued by projects that are classified as "beta" projects. The disaster at Cosmology should serve as an example of what can happen, even if it appears as though the project is running just fine for an extended period of time...
I would tentatively agree, Brian, in the sense that the project folks were partially vindicated in paying way too much credit for a time when, due to its beta nature, a whole lotta honestly computed work got thrown out and nobody got credit for it anyhow.
However... (and I can't say this over there because I'd be crucified for even thinking it) I would propose that in some ways they created the situation that contributed to their downfall BY paying the enormous credit. Simply put, they got waaaaaay too many computers waaaay too fast because those volunteers that really, really value credit just FLOCKED to that project. (Yeah, I know, everyone arguing against CPP says that never happens, but uh... I saw it happening very clearly, and all it took was a Google search or two to see it was being talked about quite a bit.)

My point isn't that they were a huge portion of BOINC participants overall (no, I don't think overpaying projects rob huge amounts from others), but that they were more than the project could handle. The moment they changed the credit to a more reasonable amount for the newer work, those that almost purely chase credit bailed like rats from a sinking ship (no, I'm not saying they are rats... it's just a good visual analogy for 2/3rds of the user base going bye-bye). That led to even more problems (imagine 80-90% of all E@H workunits being in 'limbo' and having to time out as not returned), and now those that would like to continue no matter the credit cannot, because the whole project is pretty much FUBAR. I mean... you KNOW a project has gone bad when the admin posts his last words (paraphrased as) "Don't worry, you'll have a new punching bag to abuse soon." :-\
If they hadn't 'paid' out 3-4 times the credit of 90% of the other BOINC projects, would they be in the situation they are now? I tend to doubt it. Ironically, I found a post I made there about 4 months ago that predicted quite a bit of what's ultimately come to pass (though I wish it hadn't). :-(
Of course, I'm probably reading more into the situation than really exists, so feel free to tell me I don't have a clue! :-D
Quote:
PS... One other thing: there is a credit war in the making within Milky Way right now, as someone apparently has made an optimized client and is getting 1.5 million credits a day... Just one more reason that, the more I look at other projects besides Einstein and CPDN, the more I'm glad I found the two best!!
Although 1.5 M credits a day seems unreal, depending on what he produced before the optimization, the improvement may call into question the skill of the original programmer. The important point is that if he is producing valid results but just doing it "faster and/or smarter", then he should be rewarded accordingly!
Stan
Quote:
PS... One other thing: there is a credit war in the making within Milky Way right now, as someone apparently has made an optimized client and is getting 1.5 million credits a day... Just one more reason that, the more I look at other projects besides Einstein and CPDN, the more I'm glad I found the two best!!
I don't think that's ever happened, Arion. I believe I know exactly who you're talking about, and yes, I'd imagine that if he ever ran all of his computers 100% MW he could probably generate that, but from what I've seen, he purposefully leaves the resource share for MW very low (or runs it on very few computers... I can't remember which) and mostly crunches other projects.
The client is approximately 25 TIMES faster than the stock science app and he's supposedly already shared the necessary info with the project developers, so I guess the ball's in their court to implement it.
I actually like the project a lot, and as an astronomer who has very little training in astrophysics, I can understand what they're doing a lot more easily than I can get my head around how to detect gravity waves. I'm not running it on all my computers because they're still in the throes of beta growing pains there, but I'll probably go 20% each to CPDN, E@H, Cosmo, MilkyWay and Orbit once (hopefully) they're all running correctly. (Though some processors are better than others on certain projects, so the mix won't be perfect, as I'll not run some projects on some platforms. For instance, AMDs suck at Cosmo but chew up Milky Way like butter, slow-GHz Core2s suck at MilkyWay but rock at Einstein, etc.)
Quote:
Simply put, they got waaaaaay too many computers waaaay too fast because those volunteers that really, really value credit just FLOCKED to that project.
Note to mods: I'm going to try to keep the discussion of other projects to a minimum, but this has a lot to do with the cross-project parity quest...
As I pointed out there, 0.52% of the total BOINC user population does not a "flocking" make... While you do mention that you realize the numbers aren't all that high, let's put some context to things...
Here's the view from BOINCstats:
[pre]
            Cosmology                        BOINC
          Total     Active            Total       Active
Users     7,583      3,559        1,453,500      324,601
Hosts    22,197      8,336        3,311,862      555,713
[/pre]
22,197 is a mere 0.67% of the total hosts, and 8,336 is a mere 1.5% of active hosts. 3,559 is around 1.1% of active users, and is likely on the high side now due to the Northern Hemisphere summer decline in active participants that happens every year...
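If you want to sanity-check those shares yourself, here's a quick throwaway Python calculation using nothing but the BOINCstats figures quoted above (the variable names are just mine, not anything from BOINCstats):
[pre]
# Cosmology@Home's share of the BOINC-wide totals, taken from the table above.
cosmo = {"total_users": 7_583, "active_users": 3_559,
         "total_hosts": 22_197, "active_hosts": 8_336}
boinc = {"total_users": 1_453_500, "active_users": 324_601,
         "total_hosts": 3_311_862, "active_hosts": 555_713}

for key in cosmo:
    print(f"{key}: {100 * cosmo[key] / boinc[key]:.2f}% of BOINC")
# prints 0.52% (total users), 1.10% (active users),
#        0.67% (total hosts), 1.50% (active hosts)
[/pre]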
No, they didn't run into hard times because of a "flocking", but because of a series of very poor decisions:
*Releasing the first "lensing" application without significantly testing it.
*When they realized how badly the lensing application was broken, they removed lensing without reverting to the prior application.
*Not instituting a lower quota, given that they had not reverted to the prior application and given how fast the new application without lensing was running.
*Overly-restrictive Homogeneous Redundancy.
*Re-releasing a lensing application that was not significantly tested.
*Cross-version validation problems due to lack of internal testing of the validator.
*Continued restrictive Homogeneous Redundancy.
*Blind aborts of tasks without giving anyone credit for work they had already completed (Speculation on my part: Likely due to the howling of various individuals who *STILL* think that even though cr/hr is now down around what LHC offers, it might be a little bit "too much".)
Quote:
My point isn't that they were a huge portion of BOINC participants overall (no, I don't think overpaying projects rob huge amounts from others), but that they were more than the project could handle. The moment they changed the credit to a more reasonable amount for the newer work, those that almost purely chase credit bailed like rats from a sinking ship
The thing is, if the number of hosts at the maximum was "more than the project could handle", then a decrease in hosts should've caused things to improve. It has not. The problem is that they just don't have their project configured properly. This is what "beta" can mean; however, I think a large part of the issue is that they have been over-confident. You just don't go about distributing such poorly tested applications out into the wild... and then NOT have someone available for support *AND* tell your "boss" that things are "functional"...
Quote:
Of course, I'm probably reading more into the situation than really exists, so feel free to tell me I don't have a clue! :-D
No, you have a clue; I just think it's a bit of the wrong one... ;-) There was a period of time when things ran smoothly there, with about the same load as there appears to have been early on in this last botched lensing introduction... The project just isn't configured correctly. That's the bottom line. Until they seek external help from someone more experienced, I doubt things will get better. My take is, they've been "too proud" (or too confident) to ask...
IMO, YMMV, etc, etc, etc...
2nd note to the mods: Yes, I know, another lengthy post about another project. I beg for leniency... ;-)
Now I finally found an old post of mine, to demonstrate how recurrent this theme is. Excerpt:
Quote:
- those who care only about the science must care about the concerns of those who don't. This way the credit can accumulate for the latter, and the science is done irrespective of their motivation.
- those who care only about the credit should consider the following. Suppose I have a project called IAC@H. This has all the usual BOINC structure; the WUs simply count the number of integers between successive multiples of one hundred. We start at 1 - 100 and continue on to infinity. Call it 'Integer Axiom Concordance At Home', as it verifies that each century interval has precisely one hundred integers (which would test that multiple successive additions are equivalent to a single block subtraction - see the toy sketch after this excerpt). As this is clearly ludicrous, you could easily re-title it 'Inane Aimless Computation At Home'. Would any credit-focussed participants be satisfied with credit from such a silly activity? If you would, then stop reading here, as I won't affect your view - ever. If you wouldn't, then why? Hopefully it is because the credit, by proxy, means something in the world outside of the computer! Here at E@H it is the ( eventual ) detection of gravity waves.
- the likely reality is that we are all at least a bit of each. I'll admit to earlier being really keen to get into the top 100 for RAC. I did that quite a while ago, briefly, but having done it I'm now indifferent. Mind you, I could be bluffing you all here, as I see it's a lot steeper slope there now. So I could be faking lack of concern ... :-)
( You have to crack about 4500 to sneak in now, I did it at just over 3000 - how far we come! )
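Just to make the joke concrete, here is roughly what a single IAC@H "workunit" would look like (a throwaway Python sketch; the function name and interval handling are mine, not anything any real project ships):
[pre]
# Hypothetical IAC@H workunit: count the integers in one "century" interval
# one at a time, then check the count against a single block subtraction.
def iac_workunit(start: int) -> bool:
    count = 0
    for n in range(start, start + 100):        # many successive additions
        count += 1
    return count == (start + 99) - start + 1   # one block subtraction

# The "project" grinds through 1-100, 101-200, 201-300, ... forever,
# and every result is, unsurprisingly, True.
print(all(iac_workunit(1 + 100 * k) for k in range(10)))  # True
[/pre]
Every workunit validates, so any credit it pays measures nothing beyond the electricity burned, which is exactly the point of the thought experiment.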
So I'm actually talking about how we deal with each other in this collaborative enterprise! :-)
What's that French phrase - plus ça change - or whatever? :-)
Cheers, Mike.
( edit ) Golly gosh, I used to write some really long posts back then too :-)
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
If ahh, erm... everyone could just ignore my last two posts in this thread please?
I've only been on Cosmo for 4-5 months, and it's not exactly been a good 4-5 months, so my views are a bit jaded. I didn't read the MilkyWay boards before opening my big mouth here, so I didn't know this had turned into the flambé du jour over there.
Honestly, I now don't want to touch either topic with a 10 parsec pole, so just please forget they exist. :P
Folks... So long, and thank you for the fish...
Bye...
Logan.
BOINC FAQ Service (Ahora, también disponible en Español/Now available in Spanish)
What last two posts? ;-)