RE: 10% of active hosts (last 2 weeks) w/ GPU seems to provide 50% o...
You posted the answer yourself: 90% of the users do not use GPUs.
The last few weeks were the exception: a bunch of mad crunchers (me included) with large, hungry GPU fleets came to the E@H seas and gave the project a hand, so the number of GPUs and WUs processed rose, but that's the exception, not the rule.
One measure of a project's success is the number of users who have joined it, not the power of their hosts. If only power counted, then why let old, small GPUs stay in the project, when you could develop a more optimized app that uses only the capabilities of the newest GPU series and forgets the design limitations of the old ones?
No, the project's success is built on CPU-based hosts; GPUs are only an aid to make them more productive.
Also, GPUs only benefit the pulsar search (BRP4); they aren't used in the gravitational wave work (S6LV1), where only CPU power counts.
RE: so /me is really wondering when the break-even is reached to focus only on GPU related systems?
While GPUs can do 10 TIMES the work of a CPU in the same amount of time, there is STILL work that just can't be made to run on a GPU yet: it is too complicated, or just isn't written to take advantage, or can't take advantage, of the specific way GPUs do things. If you can't fit a workunit entirely inside the GPU's memory you are always swapping, which slows the GPU down and negates the point of using one. Rosetta, for one, says they can't make their app work on a GPU because the workunit uses too many things besides just the math capabilities of a CPU.
The other thing is that at some projects the units are bigger than the 1 GB of RAM most modern GPUs have, so IF you make it so only the top 1% of GPUs can crunch the unit, is it worth the trouble? Probably not. Some projects, Collatz for example, are actually talking about lowering the memory required of a GPU to enable MORE, and older, GPUs to be used; at Collatz a GPU with as little as 256 MB can crunch workunits. On other projects, options are now being tested to run multiple workunits in the GPU's onboard memory at once; it works in some places and not in others. Einstein actually has a website preference setting letting you run multiple workunits, but most projects require an additional configuration file.
The Beta versions of BOINC 7.0.40 and above will recognize SOME onboard GPUs, such as the Intel Sandy Bridge kind, and you can crunch with them. The NEWEST Beta, 7.0.42, lets you use a new add-on file to control the GPU even better and more easily.
http://boinc.berkeley.edu/dev/forum_thread.php?id=8051&postid=46718
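For what it's worth, the "additional file" mentioned above is presumably the app_config.xml mechanism from the 7.0.4x clients: a file you drop into a project's directory to run more than one task per GPU. A minimal sketch follows; the app name einsteinbinary_BRP4 and the directory path are assumptions — copy the exact <app><name> from client_state.xml on your own host. (Einstein itself exposes the same knob as a "GPU utilization factor" web preference instead.)

```xml
<!-- app_config.xml: goes in the project directory, e.g.
     BOINC/projects/einstein.phys.uwm.edu/ (path is illustrative).
     The app name below is an assumption; copy the exact <app><name>
     from client_state.xml. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task, so two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- fraction of a CPU core reserved per GPU task -->
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Only try this if two tasks' working sets actually fit in the card's onboard memory at once; otherwise the swapping described above eats the gain. The client re-reads the file via "Read config files" or on restart.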
ALL versions of BOINC can be downloaded from here:
http://boinc.berkeley.edu/dl/?C=M;O=D
BUT BE CAREFUL using Beta versions, they CAN AND OFTEN DO have 'issues' that cause newer versions to come out to 'fix' them.
RE: Collatz for example...
The stock GPU memory requirement for Collatz is 128 MB (unless it's been lowered again), and has been for well over a year; others have run it on 64 MB and 48 MB GPUs
(since I proved the CUDA app would run on my 128 MB 8400M GS and got Slicker to lower the stock GPU requirement).
They start out as development versions, more alpha than beta; calling one a Beta version implies alpha testing of that version has already happened,
when really alpha testing only begins once the latest client appears.
If you're going to post links to the download repository, you should also post the link to the BOINC 7 Change Log and news thread,
so people know what the changes are, what they are letting themselves in for, and which versions are horribly broken.
Claggy
RE: One of the success of a...
Hmm, not sure about that. In my opinion the time spent and the energy used are what matter.
And, from the standpoint of saving energy, shouldn't we be allowed to ask whether it is really worth supporting old (and inefficient) lame CPU hardware?
RE: RE: Collatz for...
I think this is an interesting discussion and I would like to hear more opinions on it. Naturally there are different expectations, and I often hear somewhat conflicting requests:
- Some would like to see the apps specifically optimized for the latest and greatest GPUs.
- Others are disappointed that their legacy hardware is not supported, even though it might still be useful in the sense that it allows faster processing than a CPU.
As we are already dealing with more than 20 different app versions here at E@H, we are a bit reluctant to add new ones without a good reason, and that's true both for supporting super-legacy hardware and for fine-grained optimization versions. And it certainly would not make sense to do so if volunteers are by and large happy with the current situation w.r.t. hardware requirements (e.g. we still support something like an 8800 GT (which is actually not that slow!!) and HD 5xxx series ATI/AMD cards, provided they have enough RAM).
Cheers
HB
Several thoughts:
BOINC was supposed to use the power of idle computers to do something more useful than sit there showing flying toasters while the user was busy doing something else away from the keyboard...
So, with that in mind, any project should have at least one "not highly optimized" version of its apps that runs on almost any hardware...
But there is also the matter of energy consumption and its effects on the planet; in this case it would be good to discourage old and inefficient hardware in some way... I mean, if (for example) an old P4 were not supported, then its owners would not think about using it for crunching 24/7, and most probably they would save energy without affecting too much the amount of work the project gets done...
On the other hand, having highly optimized apps specifically designed to make the most of each different piece of hardware could work as a big motivation for heavy crunchers and credit hunters, which in the end could give the project that elusive petaflop...
From the point of view of the project, my questions, knowing that it is not possible to maintain thousands of versions, would be: where does the work done come from? Is the amount provided by the top xx% of users/hardware really significant? Is it worth giving them more optimized apps at the cost of losing support for the legacy/bottom hardware/users?
Of course, I'll be happy with ultra-optimized versions specially designed for my specific hardware, but if all those optimizations are only going to give me 1% extra, my happiness won't be very noticeable... :b
RE: ...we still support something like an 8800 GT (which is actually not that slow!!)
It works on an 8400GS 512 MB too (slowly), and I'm using the 9600 GT in some hosts (3,500 s in a fast host, 5,000 s in a slow one), which is a slower card than the 8800 GT (312 vs. 504 GFLOPS, according to this list on Wikipedia).
Each GeForce range seems to overlap several other ranges above and below it. For example, according to that list, the 8800 GT still outranks everything up to a GT 640 in the current range.
RE: Also GPUs only benefit...
Which makes me wonder why I often get no new workunits (within the last 1-2 weeks) for that search (I deselected BRP4, FWIW).
RE: RE: Also GPUs only...
Have a look at http://einsteinathome.org/node/196689