Bernd, Bruce, Gary, Mike, a question for you!

LucaB76 - BOINC.Italy
Joined: 16 Jan 06
Posts: 14
Credit: 754,232
RAC: 0
Topic 191466

Since S5 work first appeared on the E@H stage, on my HT-enabled P4 Prescott I've received only 1 short WU and a bunch of long WUs.

With HT disabled I spent about 5k seconds on the short WU and I'm spending about 40k seconds on the long ones. If I enable HT, times peak at 70k seconds (about 20 hours!!).

In this scenario I turn on my PC in the morning to begin a new WU, and when I go to sleep late at night the same WU is still there... Obviously it's impossible to share time with work units from other projects. I get frustrated when I look at my 0.5-day cache... nothing happens... just a small percentage that goes up very slowly.

I'm an amateur astronomer from Pisa, Italy. I live near the Virgo complex, the French-Italian facility employed in the search for gravitational waves. I know the strict tolerances required by the mathematical computation on this kind of data, and I'm not surprised that Akos's app failed to meet the requirements of the project... Clearly mathematical precision and big speed improvements aren't such good friends!!

I'm also aware that you decided on soooo loooong WUs to reduce the entries in the results database and leave free room for new users...

My question is the following: is it possible to establish a trade-off between time spent on long WUs and new entries in the database? Here is my idea!

Since the celestial sphere goes from -90° to +90° in declination, would it be possible to split a long WU into blocks of 20, 30 or 45 degrees and let the application scan only one of these blocks?

The only overhead introduced would be a process that splits the WUs at the beginning and gathers and compacts the results after crunching!

With such an expedient:
- you still reduce the entries in the results database, 'cause the WUs are longer than the S4 ones;
- we get shorter crunching times and see more movement in our cache;
- and you still get precise results for the science, 'cause nothing changes in the app's accuracy.
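The splitting idea can be sketched in a few lines (a hypothetical illustration only; `split_declination` is an invented helper for this sketch, not part of the real E@H workunit generator):

```python
# Hypothetical sketch of the proposed declination split: cover the
# full -90..+90 degree range with fixed-width blocks, so that each
# sub-workunit scans only one block. NOT the real E@H generator.

def split_declination(block_deg):
    """Return (start, end) declination blocks covering -90 to +90 degrees."""
    if 180 % block_deg != 0:
        raise ValueError("block size must divide 180 degrees evenly")
    return [(d, d + block_deg) for d in range(-90, 90, block_deg)]

# 45-degree blocks split one long WU into four smaller ones:
print(split_declination(45))   # [(-90, -45), (-45, 0), (0, 45), (45, 90)]
print(len(split_declination(30)), len(split_declination(20)))   # 6 9
```

With the timings quoted above, a 20-degree split would turn one ~40k-second WU into nine pieces of very roughly 4.5k seconds each, at the cost of nine database entries instead of one.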

Do you think that this trade-off is possible?
Did I miss something?
Thanks for your attention and patience!

Luca B. from Pisa, Italy

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,388
Credit: 51,643,933,784
RAC: 69,911,771

Bernd, Bruce, Gary, Mike, a question for you!

Quote:

With HT disabled I spent about 5k seconds on the short WU and I'm spending about 40k seconds on the long ones. If I enable HT, times peak at 70k seconds (about 20 hours!!).....

I have a 2.6GHz Northwood with HT enabled. It is shared 50/50 with Seti and I manage to keep 1 project on each virtual CPU most of the time. As has been mentioned in other posts, this does significantly speed up each thread. Long EAH results take about 18 hours. For Seti, the range is from about 3.5 hours to 8 hours (after a very casual look), with an average around 6 hours. So on an average day that machine will produce, very approximately, 1.33 EAH results and 4 Seti results in total. I'm happy with that.

Quote:
My question is the following: Is it possible to establish a trade-off between time spent on long wu & new entries on the database?
.....

I really don't understand why people get upset at not seeing huge numbers of results flash past. In my case, in 18 hours of crunching I get 1 EAH result and about 176 credits are added to the total. I don't have short results on that machine yet, but if I did they would take about 2 hours each and I would do 12 per day, each returning around 18-20 credits. So in 18 hours of crunching I would see 9 results and about 175-180 credits. In each case, whether 1 long result or 9 short results, I have contributed much the same science to the project and have received much the same credit. I can't really see why I need to worry about how many results that science was split into.
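The credit arithmetic above can be checked directly; all the figures (18 hours, ~176 credits, 2-hour shorts, ~18-20 credits each) are the ones quoted in the post:

```python
# Same 18 hours of crunching, counted two ways, using the figures
# from the post: one long result vs nine 2-hour short results.

hours = 18
long_credits = 176                    # 1 long EAH result in 18 hours
short_results = hours // 2            # 2 hours each -> 9 results
short_credits = short_results * 19.5  # around 18-20 credits each

print(short_results, short_credits)            # 9 175.5
print(abs(short_credits - long_credits) < 5)   # True: much the same credit
```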

Please realise that everything I say is my personal opinion. I'm not an employee of the project and I do not speak for the Developers. I'm a supporter interested in seeing the project remain healthy and viable just like you. I respect the right of the Developers to organise and maintain their project in the best interests of the project.

Please also realise that I'm not attacking you, or anyone else for that matter. I can understand that it would be far better to do 9 results rather than 1 result if that meant 9 times the science was being done. Or even twice the science, for that matter. But it doesn't. There is a certain amount of work to do, and the number of small units it is split into is irrelevant from the point of view of the total amount of work. It does, however (as you have acknowledged), make a huge difference to server load. Let me put it to you this way. Which would have the most value to you: (a) a 1 kg gold bar, or (b) 100 × 10 g gold mini-bars, if every one had a serial number stamped on it that you had to write down in a ledger and keep track of?

It is far more important to keep advocating the ongoing process of application optimisation in an acceptable and controlled manner. Over the life of S5, I expect to see incremental improvements that will reduce my 18 hours to something like 12-14 or even less per result. Akos has shown that it is possible to do so. The project Developers do understand the value in both hardware and electricity that we volunteers contribute. I'm certain that they will make every effort to give us the most efficient apps possible, bearing in mind that they have to protect the integrity of the project first and foremost.

On the point of workunit size, I'm sure they would have put a lot of careful consideration into the choice, bearing in mind the problems that the power crunchers had with S4 data once Akos' optimisations came unexpectedly on the scene.

Quote:

Do you think that this trade-off is possible?
Did I miss somthing?
Thanks for your attention and patience!

Luca B. from Pisa, Italy

I have answered from a personal perspective because I believe that your questions and comments deserve a response. I'm sure the Developers would be looking very carefully at all that has happened and will keep us advised as they have done in the past. They gave us a trouble free transition to S5 and a new credit system that seems very fair and reasonable. I'm sure they will properly assess the further optimisation of the stock application and any particular issues regarding workunit size and will advise us at an appropriate time.

Cheers,
Gary.

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

RE: I have a 2.6GHz

Message 40768 in response to message 40767

Quote:

I have a 2.6GHz Northwood with HT enabled. It is shared 50/50 with Seti and I manage to keep 1 project on each virtual CPU most of the time. As has been mentioned in other posts, this does significantly speed up each thread. Long EAH results take about 18 hours. For Seti, the range is from about 3.5 hours to 8 hours (after a very casual look), with an average around 6 hours. So on an average day that machine will produce, very approximately, 1.33 EAH results and 4 Seti results in total. I'm happy with that.

Really??!! How did you do that? I was under the impression that HT was disabled in Northwoods except for the 3.06 GHz model.

Alinator

Never mind, yours must be a later P4C. Rats, I thought there might be a "cheap" way to improve my P4B 2.66.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,388
Credit: 51,643,933,784
RAC: 69,911,771

RE: Never mind, yours

Message 40769 in response to message 40768

Quote:

Never mind, yours must be a later P4C. Rats, I thought there might be a "cheap" way to improve my P4B 2.66.

LOL!! Yes, it's a 2.6C. HT capability started with the 2.4C, if I remember correctly. Yours has a 533 MHz FSB (4×133) and is not capable of HT. Mine is 800 MHz (4×200).

Cheers,
Gary.

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

LOL, yep this jogged my

LOL, yep, this jogged my memory. Intel had HT enabled on the 3.06B, but they wanted something ridiculous extra for it, which is why I got the 2.66 instead. It was expensive enough as I recall! ;-)

Par for the course for them. :-)

Alinator

[B@H] Ray
Joined: 4 Jun 05
Posts: 621
Credit: 49,583
RAC: 0

They built a few 2.66

They built a few 2.66 Northwoods with HT near the end of the Northwood run, but with an 800 FSB like the Prescott. I sure wish they had made more with HT and the 533 FSB, as that is the only kind of HT CPU my MB will run.

----------------

One bad thing:
All Prescott P4s have HT in them, but many were sold as non-HT with it turned off in the core, with no way of turning it back on. That allowed Intel to charge more for the ones with HT turned on, which give more speed at a given clock at no extra cost to Intel.

All businesses do things like that when they can, so don't get upset at Intel over it. But it sure would be great if there were a way of turning HT on in a Prescott that shipped with it turned off. Taking the chip apart and fixing something so small that you cannot see it would only ruin the chip.

---------------

If you get a new MB, be sure that it has all the new stuff on it, even if you have no plans to use it now. I got a socket 478 MB capable of running HT when I built the Celeron, but it will only run a 533 FSB, so I am stuck with non-HT, or the 3.06 533 FSB if I want to upgrade later. But since I built the whole system for under US$200 after rebates (new case, MB, memory, CPU and HD), I can't really complain about that. If I really want to upgrade in the future, I guess I can go with another MB.
My wife is really happy with the way it runs now, so I will leave it this way for a couple of years.


Try the Pizza@Home project, good crunching.

LucaB76 - BOINC.Italy
Joined: 16 Jan 06
Posts: 14
Credit: 754,232
RAC: 0

RE: I have a 2.6GHz

Message 40772 in response to message 40767

Quote:
I have a 2.6GHz Northwood with HT enabled. It is shared 50/50 with Seti and I manage to keep 1 project on each virtual CPU most of the time. As has been mentioned in other posts, this does significantly speed up each thread.

Gary, how do you obtain this configuration? Which manager do you use? AFAIK only the TruXoft manager can set CPU affinity... Is that your case? Don't tell me you manually suspend n-1 of the WUs in the cache... Is there any other way with the official 5.4.9?

Quote:
I really don't understand why people get upset with not seeing huge numbers of results flash past. [CUT] I have answered from a personal perspective because I believe that your questions and comments deserve a response.[CUT]

Obviously, from the science side, it's the same to send 1 big WU for 100 credits or 10 small WUs for 10 credits each.
But it's different for the cruncher, who may prefer the second option to see work flow regularly; and it's different for the project's servers, which are kept busier by 10 small connections! This is simple and clear!
The first option is just... boring... but that's only my feeling! ;-) No problem with that!

Thanks for all the other answers!

Udo
Joined: 19 May 05
Posts: 203
Credit: 8,945,570
RAC: 0

RE: Gary, how do you

Message 40773 in response to message 40772

Quote:

Gary, how do you obtain this configuration? Which manager do you use? AFAIK only the TruXoft manager can set CPU affinity... Is that your case? Don't tell me you manually suspend n-1 of the WUs in the cache... Is there any other way with the official 5.4.9?

Luca,
if you set your resource share to the same value for both projects (50/50), they will both run on one CPU each.
I had a server with 3 CPUs and had E@H at 66% and another project at 33%, so I had 2 WUs for E@H and 1 WU of the other project running at the same time.
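Both of those examples follow a simple proportion, which can be sketched as below. This is a rough illustration only, not the real BOINC scheduler (which balances long-term debt over time rather than pinning tasks to CPUs), and `expected_wus` is an invented helper for this sketch:

```python
# Rough illustration, NOT the real BOINC scheduler: on average, an
# n-CPU host runs concurrent WUs in proportion to each project's
# resource share.

def expected_wus(n_cpus, shares):
    """Approximate steady-state concurrent WUs per project."""
    total = sum(shares.values())
    return {proj: round(n_cpus * s / total) for proj, s in shares.items()}

print(expected_wus(2, {"Einstein": 100, "Seti": 100}))  # {'Einstein': 1, 'Seti': 1}
print(expected_wus(3, {"Einstein": 66, "Other": 33}))   # {'Einstein': 2, 'Other': 1}
```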

Udo


LucaB76 - BOINC.Italy
Joined: 16 Jan 06
Posts: 14
Credit: 754,232
RAC: 0

RE: Luca, if you set your

Message 40774 in response to message 40773

Quote:
Luca,
if you set your resource share to the same value for both projects (50/50), they will both run on one CPU each.
I had a server with 3 CPUs and had E@H at 66% and another project at 33%, so I had 2 WUs for E@H and 1 WU of the other project running at the same time.
Udo


I'm not so lucky, Udo :-(
I've got 3 Einstein long WUs and 14 Rosetta WUs. The resource share is 100 for both. The short- and long-term debts are the same for both projects... they are even... BUT... Rosy is running two tasks and Einstein is preempted...

The only thing that worked for me in the past was TruXoft's tx36, but now I'm using the official 5.4.9... and nothing doing!!

Thanks!

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

RE: They built a few 2.66

Message 40775 in response to message 40771

Quote:

They built a few 2.66 Northwoods with HT near the end of the Northwood run, but with an 800 FSB like the Prescott. I sure wish they had made more with HT and the 533 FSB, as that is the only kind of HT CPU my MB will run.

----------------

One bad thing:
All Prescott P4s have HT in them, but many were sold as non-HT with it turned off in the core, with no way of turning it back on. That allowed Intel to charge more for the ones with HT turned on, which give more speed at a given clock at no extra cost to Intel.

All businesses do things like that when they can, so don't get upset at Intel over it. But it sure would be great if there were a way of turning HT on in a Prescott that shipped with it turned off. Taking the chip apart and fixing something so small that you cannot see it would only ruin the chip.

---------------

If you get a new MB, be sure that it has all the new stuff on it, even if you have no plans to use it now. I got a socket 478 MB capable of running HT when I built the Celeron, but it will only run a 533 FSB, so I am stuck with non-HT, or the 3.06 533 FSB if I want to upgrade later. But since I built the whole system for under US$200 after rebates (new case, MB, memory, CPU and HD), I can't really complain about that. If I really want to upgrade in the future, I guess I can go with another MB.
My wife is really happy with the way it runs now, so I will leave it this way for a couple of years.

Oh no, I don't blame them for making a buck (or 200! :-D) while they can. It costs a bundle to bring new silicon to market.

At this point I think the next box I build will be a dual socket Opty server, and use it to replace a number of single processor old timers I have doing backend chores.

The cost of juice is just getting to the point where it makes good sense, even though the box will be significantly more expensive to put together.

Alinator

Steve Cressman
Joined: 9 Feb 05
Posts: 104
Credit: 139,654
RAC: 0

RE: RE: Luca, if you set

Message 40776 in response to message 40774

Quote:
Quote:
Luca,
if you set your resource share to the same value for both projects (50/50), they will both run on one CPU each.
I had a server with 3 CPUs and had E@H at 66% and another project at 33%, so I had 2 WUs for E@H and 1 WU of the other project running at the same time.
Udo

I'm not so lucky, Udo :-(
I've got 3 Einstein long WUs and 14 Rosetta WUs. The resource share is 100 for both. The short- and long-term debts are the same for both projects... they are even... BUT... Rosy is running two tasks and Einstein is preempted...

The only thing that worked for me in the past was TruXoft's tx36, but now I'm using the official 5.4.9... and nothing doing!!

Thanks!


I think that if you reset your debts with BoincDebtViewer they would probably go back to sharing 50/50. If one project has more debt than the other, because of a lack of work at some point in time, that would make BOINC end up running two tasks from the same project instead of what you want. Give it a try and see; it can't hurt anything.
:)

98SE XP2500+ @ 2.1 GHz Boinc v5.8.8
