But now I need to estimate nVidia card prices. The one I am thinking about using is about three years old. (I was going to get a cheap off-lease computer to drive my TV and make up for the low performance with the card.) Obviously it is out of date and was only $80 or so at the time.
Several model numbers have been given without prices. A couple of performance improvements have been given and met with skepticism. If the best improvement were true I could drag out the emergency spare from the closet and have a screamer.
But now I need real numbers on the price of the nVidia card vs. 4-core CPUs.
I knew I should have kept my mouth shut.
Hi Matt,
Wait a moment. Do not just look for prices.
I have a GTX260-192 (the older one with a built-in bug) which is unable to crunch GPUGRID; all wu's fail at ~20% done. It can crunch Einstein, MW and SETI. At the moment I have access to a GTS250, which is good for Einstein but not for MW and SETI.
Here in this forum I did not find any information about the CUDA capabilities required for Einstein.
First of all we should ask the developers or the moderators: which cards will work, and which will not? 'Will not work' does not only mean older cards; I see many posts in other forums saying that even Fermi cards will not crunch CUDA apps.
The second question: about two months ago there was a post telling us that a new CUDA app would come soon. Or will that be an OpenCL app? What happens to the current CUDA app? Will it be removed?
I would say, make your decision once these questions have been answered. It makes no sense to buy a cheap (old) card if you can only use it for a few months; that would make it expensive.
Regards, Alexander
So we haven't been purchasing new hardware and buying processing capacity outside the project then?
Cheers, Mike.
Hi Mike,
The post says: 'Yesterday we set up a new server for downloading ABP data files.'
'New' may include an investment in new hardware; the post does not give details. It also says: 'The machine we used previously is now just used for handling the result files (upload, validation, assimilation, archiving).'
This sounds like the new server is dedicated to downloads, while some of the work is now done on another machine, the old one. It does not look like the project is buying processing capacity outside. 'Outsourced' here just means done on another machine.
HTH.
Regards, Alexander
Quote:
Here in this forum I did not find any information about the CUDA capabilities required for Einstein.
There's an entire thread devoted to this. Perhaps you know of it? The most recent post is yours .... :-)
Quote:
First of all we should ask the developers or the moderators: which cards will work, and which will not? 'Will not work' does not only mean older cards; I see many posts in other forums saying that even Fermi cards will not crunch CUDA apps.
The second question: about two months ago there was a post telling us that a new CUDA app would come soon. Or will that be an OpenCL app? What happens to the current CUDA app? Will it be removed?
See this recent statement from the developer: NB the word 'testing' means we will discover what will or won't work. That hence has implications for the CUDA line of apps. Also see the first post of that thread, or the rest of them for that matter.
Re servers: Alex, I'm stifling a chuckle. Perhaps I've been too subtle, or it's a language issue. The project has simply re-arranged the hardware assigned to various processes. This is no biggie. It happens all the time .....
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Seti has 249,513 active hosts.
Milkyway has 34,464 active hosts.
See the difference there already? Now just imagine that all 249,513 hosts try to contact Seti at the same time, telling the project they want to upload, download and report work. That's a pretty sturdy server base one needs, to be able to cope with that number of computers hitting your servers 24 hours a day. Seti manages this, on a shoestring budget, and has managed to do so for the last 11 years.
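To put a rough number on that difference (purely illustrative: it assumes each active host contacts its project's scheduler about once an hour, which is a made-up figure, not a measured one):

ACTIVE_HOSTS = {"SETI": 249_513, "Milkyway": 34_464}   # figures from this post
CONTACTS_PER_HOST_PER_HOUR = 1                         # assumption, not measured

for project, hosts in ACTIVE_HOSTS.items():
    per_second = hosts * CONTACTS_PER_HOST_PER_HOUR / 3600
    print(f"{project}: ~{per_second:.0f} scheduler requests per second on average")

That is roughly 70 requests a second for Seti against 10 for Milkyway, before you even account for the burst that follows an outage.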
Hi Ageless,
SETI is a pioneer in DC, and in no way do I want to denigrate their effort. What I wanted to point out is that when problems come up, they need a solution.
Einstein seems to have enough money, so they could invest in a new server.
Orbit@Home changed its validation policy.
What I am missing here is your answer to my post: is it really necessary that a SETI CUDA wu finishes within 3 to 15 minutes? Make them 10 times longer and you have ten times less traffic and validation work.
Simplified, one wu needs two server contacts, which (worst case) means two contacts within three minutes. Make the wu's longer and you get two contacts in, let's say, two hours. Would that help?
GPUGRID, for example, has wu's running for more than 24 hours. I mean, this is one way to reduce server load.
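To make that concrete, here is the simplification I have in mind (my own toy model: one GPU crunching back to back, two server contacts per wu, nothing else going on):

# Back-of-envelope: scheduler contacts per GPU per day for different wu lengths.
def contacts_per_day(wu_minutes, contacts_per_wu=2):
    wus_per_day = 24 * 60 / wu_minutes
    return wus_per_day * contacts_per_wu

for minutes in (3, 15, 30, 120):
    print(f"{minutes:3d}-minute wu: ~{contacts_per_day(minutes):.0f} contacts per day")

Stretching a 3-minute wu to two hours cuts that one GPU's scheduler traffic by a factor of 40.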
I'm also missing your comment on the words 'MW produces these wu's on demand'.
Sometimes I get messages like 'no cuda wu's available', and every day I get a lot of 'no work for AP available' messages. What's wrong with the idea of generating wu's on demand? Currently there's a lot of traffic - for nothing.
Quote:
At Milkyway it's 5,000 or 10,000 tasks per CPU per day. Now what's the use in numbers like that when - as you say - they only have 2,000 or less available at any time? Especially on a 3 day deadline.
MW has a deadline of more than a week, so I assume you are referring to the three-day outage of the SETI servers.
Does it really make sense to distribute wu's knowing that the returns cannot be handled?
Quote:
Madness (personal opinion).
Whose?
But the main theme here is not SETI, it is 'the coming distributed processing explosion'. It is not coming, it is already here. We see a lot of problems, and many more will arise. Different projects need different strategies, but I'm sure that clever people will find solutions for their specific problems.
Regards, Alexander
Hi Mike,
The thread you're referring to is a very large one. I did not check every word, but looking around I could not find a clear statement about the CUDA capabilities required. Maybe the answer can be found in the name of the app, which includes 'cuda23'.
The most recent post: did you find an answer to my question? I still do not know whether or not it will work.
The statement from the developer: will it be the same CUDA version? That question is not answered.
Servers: language is always an issue if it's not your native language. But to make that clear: crunching is my hobby, not my profession, and a hobby should always come with some fun. And if I am the main player in this case, so be it. I'm not angry, I'm laughing with you. Misinterpretations can be very funny.
Many, many years ago I worked for a service company. The guy from logistics wanted to point out why he always asks for part numbers. He posted:
Technician: Two bulbs, four watt please.
Logistics: Two what?
T: No, four!
L: Four what?
T: Yes.
L: No.
So let's have fun. But hopefully we can get the answers; they're the basis for Matt's and my next investment.
Regards, Alexander
Quote:
What I am missing here is your answer to my post: is it really necessary that a SETI CUDA wu finishes within 3 to 15 minutes? Make them 10 times longer and you have ten times less traffic and validation work.
They can't. The Seti Enhanced (Multibeam) tasks are the same for both CPUs and GPUs. Those are still essentially what Seti Classic had: 107 seconds' worth of raw data needing to be analyzed.
It's not as simple as making them 214 or 1,070 seconds' worth of data, as that would require a complete rewrite of just about all the back-end programs the project is using (not the BOINC back-end).
Just see how long Astropulse took to develop: over 6 years. Those tasks - looking at the broadband spectrum for signals, pulsars, black holes and as-yet unknown astrophysical phenomena - take quite a while on a CPU, yet when run through the beta app developed by the 3rd-party Lunatics on ATI GPUs, they also finish in mere minutes.
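There is also a size trade-off hiding in the numbers already in this thread, about 350 KB per 107 seconds of raw data for an Enhanced task versus 8 MB for Astropulse. Assuming, purely as a simplification, that the download grows linearly with the recorded time:

# Rough scaling of download size with task length; linear growth is an
# assumption on my part, not something the project has stated.
KB_PER_TASK = 350       # Seti Enhanced download size, from this thread
SECONDS_PER_TASK = 107  # raw data per task, from this thread

for factor in (1, 2, 10):
    print(f"{SECONDS_PER_TASK * factor:5d} s of data -> "
          f"~{KB_PER_TASK * factor / 1024:.1f} MB per task")

So tenfold-longer tasks would also mean downloads approaching half an Astropulse each; you trade scheduler contacts for bandwidth rather than making the load disappear.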
Quote:
What's wrong with the idea of generating wu's on demand? Currently there's a lot of traffic - for nothing.
They are generating tasks on demand. What you see in the 'ready to send' list isn't what's in the feeder. The feeder holds a maximum of 100 tasks, covering all the hardware platforms the project supports: roughly speaking, 50 tasks for Seti Enhanced and 50 for Astropulse.
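As a toy picture of that (this is not the real BOINC feeder code, just a sketch of a small fixed buffer split between two apps and topped up from a far larger backlog; every number below is made up except the 100 slots):

from collections import deque

FEEDER_SLOTS = {"enhanced": 50, "astropulse": 50}   # ~100 slots in total

def refill(feeder, ready_to_send):
    # Top each app's slice of the feeder back up from the backlog.
    for app, limit in FEEDER_SLOTS.items():
        while len(feeder[app]) < limit and ready_to_send[app]:
            feeder[app].append(ready_to_send[app].pop())

# A huge 'ready to send' backlog (arbitrary sizes), yet only 100 tasks
# ever sit in the window that gets handed out.
ready = {"enhanced": list(range(500_000)), "astropulse": list(range(80_000))}
feeder = {"enhanced": deque(), "astropulse": deque()}
refill(feeder, ready)
print({app: len(queue) for app, queue in feeder.items()})   # 50 and 50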
Yet there's a problem here. Seti Enhanced tasks are 350KB, whereas Astropulse tasks are 8MB. Seti has one 100Mbit connection into the lab, so when they allow Astropulse tasks to go out, they can send one at a time, with a second slowly filtering through on the rest of the bandwidth. One AP already takes up 64Mbit of the connection. Most people out there getting the tasks don't even have a speedy 100Mbit connection themselves. They may have it on their local network, but not on their internet connection, so as long as two computers are downloading Astropulse from Seti, all the other 249,511 that may be knocking on the door at that time will have to wait: saturated bandwidth.
Before you say 'increase the feeder' or 'add another feeder'... it won't help, as they only have one 100Mbit connection into the lab serving all hosts out there. The Space Sciences Lab just had a 1,000Mbit connection laid up the mountain, yet Seti is still only getting 100Mbit of that. Laying their own gigabit connection down the mountain would cost approximately 60,000 dollars. Money they do not have.
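Putting rough numbers on that (idealised: no protocol overhead, and the whole 100Mbit pipe available for downloads):

LINK_MBIT_PER_S = 100
AP_TASK_MB = 8            # Astropulse download, from above
ENHANCED_KB = 350         # Seti Enhanced download, from above

ap_bits = AP_TASK_MB * 8 * 1_000_000
print(f"One AP task is ~{ap_bits / 1e6:.0f} Mbit of data")
print(f"Best-case AP download time on the shared link: "
      f"{ap_bits / (LINK_MBIT_PER_S * 1e6):.2f} s")
print(f"Enhanced downloads that fit in the same bandwidth: "
      f"~{ap_bits // (ENHANCED_KB * 8 * 1_000):.0f}")

And that 0.64 seconds is the ideal case; a host on a slow line occupies its share of the pipe far longer, which is how a couple of AP downloads can end up starving everyone else.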
Quote:
MW has a deadline of more than a week, so I assume you are referring to the three-day outage of the SETI servers.
Stop assuming things. I was talking about their old deadline. I haven't run MW in a long time; I only ran a test on my ATI HD4850 a couple of months ago. But even a 7-day deadline isn't very useful if you are allowed to download 5,000 to 10,000 tasks for the CPU. Not unless those tasks take no more than 60 seconds apiece.
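The arithmetic behind that last remark (illustrative only: one task at a time on a single core):

SECONDS_PER_DAY = 86_400

def days_of_work(task_count, seconds_per_task):
    return task_count * seconds_per_task / SECONDS_PER_DAY

for tasks in (5_000, 10_000):
    for seconds in (60, 300):
        print(f"{tasks:6d} tasks x {seconds:3d} s "
              f"= {days_of_work(tasks, seconds):4.1f} days of work")

10,000 tasks at 60 seconds each is already about 6.9 days of work, so anything slower than a minute per task cannot meet even a 7-day deadline.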
Quote:
Quote:
Madness (personal opinion).
Whose?
Uhm... make an educated guess. Assume something. You're good at doing that.
http://einsteinathome.org/node/195306
And your definition of 'outsourced' is?
Cheers, Mike.