Even Seti at one point didn't have the resources to look at the results we were sending back, so they were storing them on a shelf for 'some day' when they could review them. It got so bad that they were resending out the SAME units that had already been crunched, just so people wouldn't leave the project. The 'new' results were simply discarded, as they already had results from those workunits sitting on the shelf.
Are you absolutely certain of the exact truth of the specifics of this?
....
....
I just want to be *sure* that you have your facts precisely correct before I send a letter to Berkeley, the California legislature, and the EPA about the money and the gigawatts of power being wasted by SETI for results that are immediately discarded.
I don't think it's at all necessary to consider such actions, for a whole variety of reasons. You should perhaps consider finding out more about what really happened, and when, and why, first. I did a quick bit of searching and came across this message from Matt Lebofsky which gives a bit of background. I was running Seti classic at the time, was looking into transitioning to the BOINC version, and because of the heat and turmoil between the Seti BOINC aye-sayers and the Seti Classic naysayers, decided to start the BOINC adventure by joining the brand new Einstein project for my very first taste. If you want an education in vitriol, go read the 2005 posts leading up to the switching off of Seti Classic at the end of the year.
Here is an excerpt from Matt's message, linked above (the emphasis is mine).
Quote:
These outages, to fix broken indexes or corrupt tables or simply add rows to tables, etc., result in a "behind the scenes" outage. All the data servers are up serving data, but we are unable to split data - so we fall behind. Yes, this means users get overly redundant data at this time - we decided more people would rather have data to chew on than another full outage which typically results in message board posts of the sort "Berkeley is down again - those idiots have no idea what they are doing, blah blah blah".
There are other messages that put numbers to the level of redundancy that happened in times of crisis. Here is another Matt post that contains this revealing comment in response to a question about redundancy levels. Once again, the emphasis is mine.
Quote:
In the classic SETI@home project, as of RIGHT NOW, the numbers are:
I know the web site is reporting 1.9 billion results - not sure why. Just noticed the discrepancy myself, and am looking into it. At any rate, the seemingly obvious result is that we have gotten 7.25 results per workunit on average.
Bear in mind this key word here is average, and therefore is heavily skewed by several short periods of time where we were unable to create new work and so the redundancy levels were ridiculously high (sometimes as high as 20 results per workunit or more). But the mode is more like 4 to 5 results per workunit during the course of the project (it would be a sufficient drain on our busy database to calculate the exact number right now). In 1999, we were lucky to get 2 or 3 results per workunit, but thanks to Moore's law and the unexpectedly large number of users motivated by the credit-gathering competition, we now have more CPU power than we need. Now, in 2005, if all systems are working at full speed, we get around 5 to 6 results per workunit - and that's *after* adding twice as much scientific computation to each workunit a couple of years ago.
This was a known problem with the classic SETI@home model where users could get as much work as they wanted whenever they wanted, regardless of scientific merit. We never expected such an amazing surplus of CPU power (which is a wonderful thing). So we never built in a graceful method for turning users away.
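For perspective, the figures in that quote imply some striking totals. Here is a quick back-of-the-envelope check: the 1.9 billion results and the 7.25 average come from the quote itself, while the assumption that roughly 3 copies per workunit were scientifically useful is mine, purely for illustration.

```python
# Sanity-check of the redundancy figures quoted above.
# total_results and avg_results_per_wu come from the quoted post;
# useful_per_wu is an assumption for illustration only.
total_results = 1.9e9
avg_results_per_wu = 7.25

workunits = total_results / avg_results_per_wu
useful_per_wu = 3  # assumed scientifically useful copies per workunit
wasted_fraction = (avg_results_per_wu - useful_per_wu) / avg_results_per_wu

print(f"Implied workunits: {workunits:,.0f}")            # roughly 262 million
print(f"Crunching beyond {useful_per_wu} copies: {wasted_fraction:.0%}")
```

On those assumptions, well over half of all Classic-era crunching exceeded what the science strictly needed, which is exactly the point being argued over in this thread.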
You have to remember, in the 'Classic' era, you couldn't just tell people to divert their resources to other projects since there was nothing comparable having the same degree of appeal. The advent of BOINC became the 'graceful method' that allowed Seti to stop the over-redundancy. So, from that point of view, the problem was managed a long time ago by 'self-regulation' without the need of a 'big stick, complaining to the authorities' type mechanism.
Just so everyone understands, my "threat" to "complain" was always empty. If I were going to complain it would be that we don't have enough telescope time or the in-house computational resources to do some things that obviously need doing. I hope that those who bothered reading my former diatribe could at least credit me with having a point about wastefulness.
However... as I sit here illuminated by an inadequate LED lamp, it does occur to me that all of these projects were intended to run on "spare cycles" via a screen saver and that it is only the most dedicated number-crunchers with their farms which have produced the situation where energy is wasted. No project scientist ever suggested the building and 24/7 running of crunching-machines. In fact, I would say that most don't, haven't, and won't.
One can hardly blame project scientists for the excess enthusiasm of some participants.
If we were all still using only "spare cycles" of our CPUs while the computer was waiting to do useful work, we probably would not have ever run out of data. Certainly the idea of using spare cycles uses some additional energy, but *I* am an extreme waster of time, money, and electricity by building machines for no purpose except to crunch numbers. Mea culpa.
With no clear mechanism for me, the volunteer, to judge the usefulness of my expenditures, I cannot contribute only when there is scientific value in doing so. There's little way (no practical way, really) for me to know.
I think it would be a responsible practice to stop sending work for the sake of giving volunteers something to run. My machines would have switched to Einstein long ago had BOINC been given a clear provision for priority crunching, which in plain English would be: "I want only work from 'x'. Failing work from 'x', I will accept work from 'y'. If no work is available from 'x' or 'y', please send me work from 'z'." Then, when SETI@Home was out of *useful* work I would have crunched Einstein instead.
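The strict-priority scheme described in that paragraph could be sketched like this. It is purely hypothetical: the project names come from the post, but the `fetch_work` callables and the whole mechanism are illustrative stand-ins (real BOINC uses resource shares rather than strict priority).

```python
# Hypothetical sketch of strict-priority work fetch: take work from the
# first project in preference order that has any available.
def fetch_with_priority(projects):
    for name, fetch_work in projects:
        tasks = fetch_work()
        if tasks:
            return name, tasks
    return None, []

# Illustration: SETI has run out of useful work, so Einstein is crunched instead.
projects = [
    ("SETI",     lambda: []),                    # out of useful work
    ("Einstein", lambda: ["FGRP4_task_001"]),    # has work -> chosen
    ("LHC",      lambda: ["sixtrack_task_007"]),
]
source, tasks = fetch_with_priority(projects)
print(source, tasks)
```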
I do think it would be irresponsible for any project which currently practices the sending of useless work not to STOP generating it. It is *hugely* wasteful.
I think (and believe) it is immoral to tell someone they are contributing to a scientific project when that induces them to burn their midnight oil at their own expense when there is no need of the light. It's roughly the equivalent of asking volunteers to send money and then piling it up and burning it. I have no problem "spending" money. Burning money? Who would care to defend that?
In other academic settings it would be considered unethical to knowingly waste the time and resources of a volunteer. That's the sort of thing that has to be reviewed and could get a researcher fired.
Can you imagine the amount of money that was wasted, quite apart from the resources used?
How much better it would be (in this day) to have a project switch to something, even BitCoin (or equivalent), for that project's own funding rather than *completely* waste the host-owner's time and money. I'm not bitter, angry, or vengeful. I'm just disappointed in the choice made, but more disappointed that it hasn't been revisited and changed.
"Classic" was a long time ago. Maybe it made sense then. I don't believe there is an excuse, now.
The problem in 2005 was that BOINC didn't seem to want to work. That was my first taste of Einstein, as well. In fact, early BOINC troubles were responsible for my stopping all projects for a while.
If Matt wanted to avoid slanderous accusations of idiocy, all he had to do was say, "We have temporarily run out of work, but we expect new work to be available ______________. Please leave your screensaver running as we will need your continued support once data is available." The volunteers have learned patience (of a sort) over time.
So, let us admit that the past waste is done. That ship has sailed. It's water under the bridge. There's no use crying over spilled milk.
Going forward we have a different situation in the present and hopefully one eye cast to what can be done to improve the future.
If I were to complain it would be to a project scientist directly and ask them to stop. (not that I could get their attention)
...and all of this seeming ill-will on my part is probably made worse by non-BOINC and non-SETI and non-Einstein experiences I've had with organizations who asked for my support only to waste or misappropriate the funds they were entrusted with.
"Classic" was a long time ago. Maybe it made sense then. I don't believe there is an excuse, now.
Indeed. It was shut down in 2005, a few days before my joining date here (those two facts are related - I diversified very quickly after I was forced to bite the bullet and transfer to BOINC. And the rest, as they say, is history).
I'm fairly certain that the allegations of "waste" - distributing work purely to keep volunteers' machines active - relate exclusively to the 'Classic' period, and hence that it hasn't happened for the last 10 years. Yes, SETI does reprocess work when a new algorithm (autocorrelation) or pre-processing technique (radar blanking) becomes available - but that's for scientific purposes. Einstein does the same.
I keep a database of all the SETI tasks I've processed, and publish summaries of the results in SETI data distribution history. I've scanned that database looking for evidence of 'make-work' wastage, and I haven't found any. It's slightly confusing, and hard to analyse, because SETI has very long deadlines, and timed-out tapes appear as resends months after they were split. But the stated policy - and I have found no evidence to contradict it - is that 'modern' SETI won't re-issue work without scientific purpose.
Around the time that the SETI project began there was some truth to the belief that one could do useful distributed computation using "spare cycles" that were nearly free in terms of power consumption.
But long ago, modern processors and operating systems began cooperating to greatly reduce power draw when nothing is being done, compared to the power consumed when a BOINC task is active. So there are NO free "spare cycles", anywhere.
If you actually compare the incremental power required to process a given amount of distributed computation by consuming "spare cycles" on any CPU currently in service against that required on even a moderately modern GPU (as would be present in pretty much any purpose-built crunching system put together in the last couple of years), the purpose-built system is greatly superior in both overall power efficiency and marginal power efficiency.
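One way to make that comparison concrete is marginal energy per task: the extra watts drawn while crunching, divided by throughput. All of the wattage and throughput figures below are illustrative assumptions of my own, not measurements of any particular hardware.

```python
# Marginal energy per task = (power under load - power at idle) / tasks per hour.
# All figures are made-up illustrations of the general point, not benchmarks.
def marginal_wh_per_task(idle_w, load_w, tasks_per_hour):
    return (load_w - idle_w) / tasks_per_hour

cpu_spare = marginal_wh_per_task(idle_w=40, load_w=110, tasks_per_hour=1)   # "spare cycles" CPU
gpu_rig   = marginal_wh_per_task(idle_w=60, load_w=260, tasks_per_hour=10)  # purpose-built GPU box

print(f"CPU spare cycles: {cpu_spare:.0f} Wh/task")
print(f"GPU cruncher:     {gpu_rig:.0f} Wh/task")
```

Even though the GPU box draws more total power, on these assumed figures it uses a fraction of the energy per completed task - which is the marginal-efficiency argument above.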
In summary, I disagree with the general line expressed by
Quote:
projects were intended to run on "spare cycles" via a screen saver and that it is only the most dedicated number-crunchers with their farms which have produced the situation where energy is wasted. No project scientist ever suggested the building and 24/7 running of crunching-machines. In fact, I would say that most don't, haven't, and won't.
The energy waste was a function of the way Seti classic worked. It has been severely reduced with the introduction of BOINC because of the mechanism to limit copies of a WU to the number required to form a quorum. No longer did projects have to send out excess copies just to make sure enough would come back to allow the results to be validated.
I entirely dispute the claim that people with farms cause energy waste. A long time ago, Bruce Allen made a statement about how much the volunteers were saving the project by the simple fact that volunteers provided hardware and paid for electricity that the project would otherwise have had to provide itself. The clear impression was that the data needed to be processed and that he was grateful to have volunteers step up so that he didn't have to do it 'in house'.
And here is the killer bit. By far the biggest component of volunteered resources came from average people donating their spare cycles from existing equipment and NOT from purpose built farms. And if you asked a project scientist if they would like you to build a bunch of machines to help with the crunching, the answer would most likely be a resounding 'YES'.
Quote:
If we were all still using only "spare cycles" of our CPUs while the computer was waiting to do useful work, we probably would not have ever run out of data. Certainly the idea of using spare cycles uses some additional energy, but *I* am an extreme waster of time, money, and electricity by building machines for no purpose except to crunch numbers. Mea culpa.
Rubbish - on a number of fronts :-). Volunteers here are not extreme wasters, no matter how many machines they dedicate to crunching. I don't comment on all projects, only those with a genuine scientific purpose like this one. Just ask yourself how many brand new pulsars would have been discovered without the entire population of volunteers deciding that this project was worthy of support. Most people have enough common sense to recognise projects that have a worthy scientific aim. When a project gets this level of support, it encourages the scientists to think bigger and develop algorithms to analyse more data in finer detail. It encourages the development of newer and more sensitive hardware to obtain better quality data which in turn accelerates the discovery process. Governments have universally cut back on the support of science. Volunteers have helped mitigate some of the effects of this. This is not waste.
Quote:
How much better it would be (in this day) to have a project switch to something, even BitCoin (or equivalent), for that projects' own funding rather than *completely* waste the host-owner's time and money. I'm not bitter, angry, or vengeful. I'm just disappointed in the choice made, but more disappointed that it hasn't been revisited and changed.
I really don't understand what's troubling you here. Classic was designed to send out X copies of each WU with the hope that a fraction would be returned - enough to allow the results to be validated. Sure, there was waste because X was quite often something like 6 to 8 in 'normal' times and as large as 20-40 when there were 'back-end' issues that prevented splitting of new data. All that finished when Classic died. When BOINC started, the actual number required for a quorum was issued. If one failed, only then was an extra copy created. So where is the continuing waste? Other projects quickly joined the BOINC camp so there were always viable alternatives if a particular project ran 'dry'. So as far as I'm aware, for 11 years now there hasn't been 'wasteful work' sent out like there was with Classic.
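The quorum mechanism described here can be sketched as follows. This is a deliberate simplification: real BOINC also handles deadlines, validation states, and configurable target result counts, none of which are modelled here.

```python
# Simplified sketch of BOINC-style issuing: send exactly the quorum number
# of copies up front, and create one replacement copy only when a result
# fails (or times out) - in contrast to Classic's blanket 6-8+ copies.
class Workunit:
    def __init__(self, name, quorum=2):
        self.name = name
        self.quorum = quorum
        self.issued = 0
        self.returned = 0

    def initial_issue(self):
        copies = self.quorum - self.issued
        self.issued += copies
        return copies

    def report(self, success):
        if success:
            self.returned += 1
            return 0  # quorum still on track, no extra copy needed
        self.issued += 1
        return 1      # exactly one replacement copy

wu = Workunit("wu_0001", quorum=2)
print(wu.initial_issue())        # 2 copies, not 6-8
print(wu.report(success=False))  # one failure -> 1 replacement
```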
Quote:
The problem in 2005 was that BOINC didn't seem to want to work. That was my first taste of Einstein, as well. In fact, early BOINC troubles were responsible for my stopping all projects for a while.
I started running BOINC and Einstein in early 2005. Sure, it was a bit primitive compared with what we have today, both BOINC and the early Einstein apps. Perhaps I have rose-tinted glasses, but my recollection is that it was quite workable. In that first year I had about 9 machines: 3 office machines that had been running Classic, and a further 6 purpose-built once I saw what BOINC could do. I ran Einstein, Seti and LHC. I thought BOINC was just the bee's knees. I had set up the 6 new machines and had gone away on a 3-week business trip. I returned to find them still crunching away. I didn't get the same impression that BOINC "didn't seem to want to work".
In fact, I was so taken by how well things seemed to be running early on that I started threads like Project Ramp-Up -- Post your observations or comments here. This was started about 2 weeks after the project opened its doors to the public. This was the time when I added the 2nd and 3rd office machines to the project. Brings back all sorts of memories about how refreshing a change it was to be contributing to a new and exciting (and properly working) project. It was extremely impressive to see how responsive to the users Bruce was. A real sense of how much your contribution was valued.
I apologize to everyone on the forum if I kicked some kind of hornet's nest with this topic.
But I have had an interesting time trying to figure out how each of these projects awards credit, so I can optimize my contributions with the available hardware. Some of them are a real puzzle and defy my predictions.
For example: I just upped the clock of my i7-5820k from the stock 3.3GHz to 4GHz and it seems to have made no discernible difference in how quickly it processes an FGRP4 work unit.
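That null result is informative in itself: a purely clock-bound task should speed up roughly in proportion to the clock. A quick check - only the two clock speeds come from the post; the runtimes are hypothetical numbers I've made up to illustrate the comparison:

```python
# If FGRP4 were purely clock-bound, runtime should shrink by the clock ratio.
# An unchanged runtime suggests the bottleneck is elsewhere (e.g. memory).
stock_ghz, oc_ghz = 3.3, 4.0
expected_speedup = oc_ghz / stock_ghz          # ~1.21x if clock-bound

stock_runtime_s    = 30_000   # hypothetical runtime at stock clocks
observed_runtime_s = 29_800   # hypothetical: essentially unchanged after OC

observed_speedup = stock_runtime_s / observed_runtime_s
print(f"expected {expected_speedup:.2f}x, observed {observed_speedup:.2f}x")
```

When the observed speedup sits that far below the clock ratio, the extra GHz is mostly being wasted, which would fit a memory-bandwidth-limited workload.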
It's also fascinating to see how different variations of optimizations for the Mac OS X client for SETI affect time/credit. Apparently the AVX and SSSE3 versions are up to twice as efficient (depending on the machine).
Now I wish I was proficient enough to contribute to porting/optimization of these clients. Think of the energy we could save there...
Hello all.
I am a bit amazed with the conversation I read here...
When I contributed for the first time to Seti@home, around 15 years ago, the only reward, if my memory serves, was the fact that a task had been completed and validated by SETI. A cruncher's score was simply the number of completed tasks.
Before moving to BOINC, the SETI team did not consider it its responsibility to take into account a cruncher's Internet connection speed, computer speed, or anything else. If one wished to do better, one was free to move to another city where the Internet was faster, or to buy a new computer...
(note: I was proud of my 145 tasks despite the fact that some others were above 40,000. And like any cruncher when SETI moved to BOINC, I was quite disappointed to learn that my 145 were lost forever!)
Of course, things have changed since then: the first SETI used only data from Arecibo, and the slicing into elementary tasks was, I suppose, easier. Now the tasks differ significantly from one to another, and so they do not need the same computer resources.
It is fair, from this point of view, to give different "awards" for different tasks.
But let me still be thinking that it's not the main issue of our quest.
Getting 4400 or 650 or 872.56 or whatever has for me no real importance.
Our credits cannot be converted into Dollars, Euros, Yen, Yuan, Won, Roubles or anything else, no?
The issue is to get relevant results for a global benefit :
the project itself and the science it helps, and all of us, the crunchers, who might have nothing to do if BOINC stops.
Best regards to all and thanks to Jonathan who has put the subject on the table.
No taboos here, I hope.
The credit is important to me insomuch as it helps me optimize my contribution with what I have available. If the credit reflects the value of the information produced by cranking out a work unit, then I want to crank out as many "points" for the amount of CPU cycles I have available as possible. Well, within projects anyway.
As I noted at the beginning, this project awards substantially more credit per CPU second than SETI or Rosetta, yet not as much as many others out there. I still contribute to those other projects, but the points help me figure out which of my machines does a better job, so I can task the best asset for the work and maximize my contribution.
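What I'm doing amounts to ranking hosts by credit earned per CPU-second. A sketch with made-up host names and figures (the identical 693-credit award is purely illustrative, standing in for the same workunit run on different machines):

```python
# Rank hosts by credit earned per CPU-second (all figures hypothetical).
hosts = {
    "i7-5820k": {"credit": 693.0, "cpu_seconds": 30_000},
    "old-xeon": {"credit": 693.0, "cpu_seconds": 55_000},
    "mac-mini": {"credit": 693.0, "cpu_seconds": 42_000},
}

def credit_rate(name):
    h = hosts[name]
    return h["credit"] / h["cpu_seconds"]

ranked = sorted(hosts, key=credit_rate, reverse=True)
print(ranked)  # most credit per CPU-second first
```

The same ranking divided by each host's wall-plug wattage would give credit per watt-hour, which is the number that actually matters for the energy argument earlier in the thread.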
And again, I wish I had the skill to help optimize these clients, because trying to reverse-engineer the credit structure has revealed just how much some of those optimizations (for SSE4.1, SSSE3, and AVX for SETI, for example) can improve productivity.
Oh, and while the credit provides me with a nice benchmark to optimize my contributions, I honestly do get a little nerd satisfaction by comparing my contributions (stretching back to 2004 for Einstein, and 1999 for SETI) with others. But that's about it. Mentioning this at parties in most of my social circles results in kind of an awkward pause, and, frankly, open disdain from my wife (who thinks the new computer I built is for video editing). ;-)
Yes, it is very hard to believe in global warming, especially at this time of year, especially in Russia.
RE: RE: Even Seti at
)
I don't think it's at all necessary to consider such actions, for a whole variety of reasons. You should perhaps consider finding out more about what really happened, and when, and why, first. I did a quick bit of searching and came across this message from Matt Lebofsky which gives a bit of background. I was running Seti classic at the time, was looking into transitioning to the BOINC version, and because of the heat and turmoil between the Seti BOINC ayesayers and the Seti Classic naysayers, decided to start the BOINC adventure by joining the brand new Einstein project for my very first taste. If you want an education in vitriol, go read the 2005 posts leading up to the switching off of Seti Classic at the end of the year.
Here is an excerpt from Matt's message, linked above (the emphasis is mine).
There are other messages that put numbers to the level of redundancy that happened in times of crisis. Here is another Matt post that contains this revealing comment in response to a question about redundancy levels. Once again, the emphasis is mine.
You have to remember, in the 'Classic' era, you couldn't just tell people to divert their resources to other projects since there was nothing comparable having the same degree of appeal. The advent of BOINC became the 'graceful method' that allowed Seti to stop the over-redundancy. So, from that point of view, the problem was managed a long time ago by 'self-regulation' without the need of a 'big stick, complaining to the authorities' type mechanism.
Cheers,
Gary.
RE: So, from that point
)
Just so everyone understands, my "threat" to "complain" was always empty. If I were going to complain it would be that we don't have enough telescope time or the in-house computational resources to do some things that obviously need doing. I hope that those who bothered reading my former diatribe could at least credit me with having a point about wastefulness.
However... as I sit here illuminated by an inadequate LED lamp, it does occur to me that all of these projects were intended to run on "spare cycles" via a screen saver and that it is only the most dedicated number-crunchers with their farms which have produced the situation where energy is wasted. No project scientist ever suggested the building and 24/7 running of crunching-machines. In fact, I would say that most don't, haven't, and won't.
One can hardly blame project scientists for the excess enthusiasm of some participants.
If we were all still using only "spare cycles" of our CPUs while the computer was waiting to do useful work, we probably would not have ever run out of data. Certainly the idea of using spare cycles uses some additional energy, but *I* am an extreme waster of time, money, and electricity by building machines for no purpose except to crunch numbers. Mea culpa.
With no clear mechanism for me, the volunteer, to judge the usefulness of my expenditures, I cannot only contribute when there is scientific value to doing-so. There's little way (no practical way, really) for me to know.
I think it would be a responsible practice to stop sending work for the sake of giving volunteers something to run. My machines would have switched to Einstein long ago had BOINC been given a clear provision for priority crunching which in plain English would be, "I want only work from "x". Failing work from "x" I will accept work from "y." If no work is available from "x" or "y," please send me work from "z."" Then, when SETI@Home was out of *useful* work I would have crunched Einstein instead.
I do think it would be irresponsible for any project which currently practices the sending of useless work not to STOP generating it. It is *hugely* wasteful.
I think (and believe)it is immoral to tell someone they are contributing to a scientific project when that induces someone to burn their midnight oil at their expense when there is no need of the light. It's roughly the equivalent of asking volunteers to send money then piling it up and burning it. I have no problem "spending" money. Burning money? Who would care to defend that?
In other academic settings it would be considered unethical to knowingly waste the time and resources of a volunteer. That's the sort of thing that has to be reviewed and could get a researcher fired.
Can you imagine the amount of money that was wasted, quite apart from the resources used?
How much better it would be (in this day) to have a project switch to something, even BitCoin (or equivalent), for that projects' own funding rather than *completely* waste the host-owner's time and money. I'm not bitter, angry, or vengeful. I'm just disappointed in the choice made, but more disappointed that it hasn't been revisited and changed.
"Classic" was a long time ago. Maybe it made sense then. I don't believe there is an excuse, now.
The problem in 2005 was that BOINC didn't seem to want to work. That was my first taste of Einstein, as well. In fact, early BOINC troubles were responsible for my stopping all projects for a while.
If Matt wanted to avoid slanderous accusations of idiocy, all he had to-do was say, "We have temporarily run out of work, but we expect new work to be available ______________. Please leave your screensaver running as we will need your continued support once data is available." The volunteers have learned patience (of a sort) over time.
So, let us admit that the past waste is done. That ship has sailed. It's water under the bridge. There's no use crying over spilled milk.
Going forward we have a different situation in the present and hopefully one eye cast to what can be done to improve the future.
If I were to complain it would be to a project scientist directly and ask them to stop. (not that I could get their attention)
...and all of this seemingly ill-will on my part is probably made worse by non-BOINC and non-SETI and non-Einstein experiences I've had with organizations who asked for my support only to waste or misappropriate the funds they were entrusted-with.
RE: "Classic" was a long
)
Indeed. It was shut down in 2005, a few days before my joining date here (those two facts are related - I diversified very quickly after I was forced to bite the bullet and transfer to BOINC. And the rest, as they say, is history).
I'm fairly certain that the allegations of "waste" - distributing work purely to keep volunteers' machines active - relate exclusively to the 'Classic' period, and hence hasn't happened for the last 10 years. Yes, SETI does reprocess work when a new algorithm (autocorrelation) or pre-processing technique (radar blanking) becomes available - but that's for scientific purposes. Einstein does the same.
I keep a database of all the SETI tasks I've processed, and publish summaries of the results in SETI data distribution history. I've scanned that database looking for evidence of 'make-work' wastage, and I haven't found any. It's slightly confusing, and hard to analyse, because SETI has very long deadlines, and timed-out tapes appear as resends months after they were split. But the stated policy - and I have found no evidence to contradict it - is that 'modern' SETI won't re-issue work without scientific purpose.
Around the time that the SETI
)
Around the time that the SETI project began there was some truth to the belief that one could do useful distributed computation using "spare cycles" that were nearly free rebel regarding power consumption.
But very long ago modern processors and operating systems collaborated greatly to reduce power when nothing is being done compared to the power consumed when a boinc task is active. So there are NO free "spare cycles", anywhere.
If you actually look at a comparison of the incremental power consumption required to process an equivalent amount of distributed computation by consuming "spare cycles" on any CPU currently in service to that required on an even moderately modern GPU (as would be present on pretty much any purpose built crunching system put together in the last couple of years) the purpose built system is greatly superior both in overall power efficiency and in marginal power efficiency.
in summary I disagree with the general line expressed by
RE: ... it is only the most
)
The energy waste was a function of the way Seti classic worked. It has been severely reduced with the introduction of BOINC because of the mechanism to limit copies of a WU to the number required to form a quorum. No longer did projects have to send out excess copies just to make sure enough would come back to allow the results to be validated.
I entirely dispute the claim that people with farms cause energy waste. A long time ago, Bruce Allen made a statement about how much the volunteers were saving the project by the simple fact that volunteers provided hardware and paid for electricity that the project would otherwise have to do. The clear impression was that the data needed to be processed and that he was grateful to have volunteers step up so that he didn't have to do it 'in house'.
And here is the killer bit. By far the biggest component of volunteered resources came from average people donating their spare cycles from existing equipment and NOT from purpose built farms. And if you asked a project scientist if they would like you to build a bunch of machines to help with the crunching, the answer would most likely be a resounding 'YES'.
Rubbish - on a number of fronts :-). Volunteers here are not extreme wasters, no matter how many machines they dedicate to crunching. I don't comment on all projects, only those with a genuine scientific purpose like this one. Just ask yourself how many brand new pulsars would have been discovered without the entire population of volunteers deciding that this project was worthy of support. Most people have enough common sense to recognise projects that have a worthy scientific aim. When a project gets this level of support, it encourages the scientists to think bigger and develop algorithms to analyse more data in finer detail. It encourages the development of newer and more sensitive hardware to obtain better quality data which in turn accelerates the discovery process. Governments have universally cut back on the support of science. Volunteers have helped mitigate some of the effects of this. This is not waste.
I really don't understand what's troubling you here. Classic was designed to send out X copies of each WU with the hope that a fraction would be returned - enough to allow the results to be validated. Sure, there was waste because X was quite often something like 6 to 8 in 'normal' times and as large as 20-40 when there were 'back-end' issues that prevented splitting of new data. All that finished when Classic died. When BOINC started, the actual number required for a quorum was issued. If one failed, only then was an extra copy created. So where is the continuing waste? Other projects quickly joined the BOINC camp so there were always viable alternatives if a particular project ran 'dry'. So as far as I'm aware, for 11 years now there hasn't been 'wasteful work' sent out like there was with Classic.
I started running BOINC and Einstein in early 2005. Sure, it was a bit primitive compared with what we have today, both BOINC and the early Einstein apps. Perhaps I have rose tinted glasses but my recollection is that it was quite workable. In that first year I had about 9 machines, 3 office machines that had been running Classic and a further 6 purpose built once I saw what BOINC could do. I ran Einstein, Seti and LHC. I thought BOINC was just the bees knees. I had set up the 6 new machines and had gone away on a 3 week business trip. I returned to find them still crunching away. I didn't get the same impression that BOINC "didn't seem to want to work".
In fact, I was so taken by how well things seemed to be running early on that I started threads like Project Ramp-Up -- Post your observations or comments here. This was started about 2 weeks after the project opened its doors to the public. This was the time when I added the 2nd and 3rd office machines to the project. Brings back all sorts of memories about how refreshing a change it was to be contributing to a new and exciting (and properly working) project. It was extremely impressive to see how responsive to the users Bruce was. A real sense of how much your contribution was valued.
Cheers,
Gary.
I apologize to everyone on the forum if I kicked some kind of hornet's nest with this topic.
But I have had an interesting time trying to figure out how each of these projects awards credit, so I can optimize my contributions with the available hardware. Some of them are a real puzzle and defy my predictions.
For example: I just upped the clock of my i7-5820k from the stock 3.3GHz to 4GHz and it seems to have made no discernible difference in how quickly it processes an FGRP4 work unit.
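A quick back-of-envelope check shows why that result is surprising. If FGRP4 run time were purely bound by CPU clock speed, the overclock should shorten it roughly in proportion to the frequency ratio (this is a naive model; in practice a flat result usually hints the bottleneck is elsewhere, such as memory bandwidth or cache):

```python
# Naive clock-bound model: run time scales inversely with frequency.
stock_ghz = 3.3
oc_ghz = 4.0

expected_speedup = oc_ghz / stock_ghz  # ~1.21, i.e. ~21% shorter run times
print(f"expected speedup if clock-bound: {expected_speedup:.2f}x")
```

Seeing effectively no change where ~21% was predicted suggests the workunit is not limited by raw clock speed on that machine.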
It's also fascinating to see how different variations of optimizations for the Mac OS X client for SETI affect time/credit. Apparently the AVX and SSSE3 versions are up to twice as efficient (depending on the machine).
Now I wish I was proficient enough to contribute to porting/optimization of these clients. Think of the energy we could save there...
Hello all.
I am a bit amazed with the conversation I read here...
When I contributed for the first time to Seti@home, around 15 years ago, the only reward, if my memory serves, was the fact that a task had been completed and validated by SETI. The cruncher's score was simply the number of completed tasks.
Before moving to BOINC, the SETI team did not consider it their responsibility to take into account the cruncher's Internet connection speed, computer speed, or anything else. If you wished to do better, you were free to move to another city where the Internet was faster, or to buy a new computer...
(Note: I was proud of my 145 tasks, despite the fact that some others were above 40,000. And like many crunchers, when SETI moved to BOINC I was quite disappointed to learn that my 145 were lost forever!)
Of course, things have changed since then: the first SETI used only data from Arecibo, and slicing it into elementary tasks was, I suppose, easier. Now the tasks differ significantly from one to another, so they do not need the same computer resources.
It is fair, from this point of view, to give different "awards" for the different tasks.
But I still think this is not the main issue of our quest.
Getting 4400 or 650 or 872.56 or whatever has no real importance for me.
Our credits cannot be converted into Dollars, Euros, Yen, Yuan, Won, Roubles or anything else, can they?
The issue is to get relevant results for a global benefit:
the project itself and the science it helps, and all of us, the crunchers who might have nothing to do if BOINC stops.
Best regards to all and thanks to Jonathan who has put the subject on the table.
No taboos here, I hope.
The credit is important to me insofar as it helps me optimize my contribution with what I have available. If the credit reflects the value of the information produced by cranking out a work unit, then I want to crank out as many "points" as possible from the CPU cycles I have available. Well, within projects anyway.
As I noted in the beginning, this project awards substantially more credit per CPU second than SETI or Rosetta, yet not as much as many others out there. Even so, I still contribute to those other projects. The points help me figure out which of my machines does a better job, so I can task the best asset for the work and maximize my contribution.
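One way to make that machine-to-machine comparison concrete is to normalise credit by CPU time. The machine names and numbers below are entirely made up for illustration; the idea is just to rank hosts by credit earned per CPU-hour:

```python
# Hypothetical per-machine stats: (machine name, credits earned, CPU-hours spent).
# All figures are invented for the sake of the example.
stats = [
    ("desktop-a", 4400, 10.0),
    ("old-laptop", 650, 8.0),
    ("office-box", 872, 6.0),
]

def credit_rate(credits, cpu_hours):
    """Credits earned per CPU-hour."""
    return credits / cpu_hours

# Rank machines from most to least productive per CPU-hour.
ranked = sorted(stats, key=lambda s: credit_rate(s[1], s[2]), reverse=True)
for name, credits, hours in ranked:
    print(f"{name}: {credit_rate(credits, hours):.1f} credits/CPU-hour")
```

With numbers like these you can direct the longest-running project to whichever host tops the list.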
And again, I wish I had the skill to help optimize these clients, because trying to reverse-engineer the credit structure has revealed just how much some of those optimizations (for SSE4.1, SSSE3, and AVX for SETI, for example) can improve productivity.
Oh, and while the credit provides me with a nice benchmark to optimize my contributions, I honestly do get a little nerd satisfaction by comparing my contributions (stretching back to 2004 for Einstein, and 1999 for SETI) with others. But that's about it. Mentioning this at parties in most of my social circles results in kind of an awkward pause, and, frankly, open disdain from my wife (who thinks the new computer I built is for video editing). ;-)