Efficiency of crunches per downloaded chunk of data.

Heffed
Joined: 18 Jan 05
Posts: 257
Credit: 12368
RAC: 0

RE: ... I think you mean

Message 13376 in response to message 13375

Quote:
... I think you mean 4.45.


Perhaps 4.25. ;)

Kilcock
Joined: 1 Jun 05
Posts: 41
Credit: 2604
RAC: 0

RE: With BOINC 2.25 and

Message 13377 in response to message 13374

Quote:

With BOINC 2.25 and above you can remote control BOINC on all your other computers. Now, if you can remote control the other computers, you could write a program that finds out which data sets the other computers you manage have. The program would then download all data sets it didn't have from the other computers, and upload the data sets the other computers are missing, so that all the computers you manage have a copy of all downloaded data sets. This would mostly be of benefit to modem users.

Ziran, what you describe here is a replication system (often used where multiple servers contain the same data, but each serves a different geographical location).
Applying this idea to BOINC would only be of advantage to modem (and broadband) users if different clients in the network would otherwise attempt to download the same WUDF.
The efficiency I would like to see is that every client gets and uses only the data it needs and not, as in the worst case, a 1-use-to-28-downloads ratio.
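The replication idea quoted above can be sketched roughly as follows. This is a minimal sketch only: the host names, file names, and the `replication_plan` helper are made up for illustration, and BOINC's remote control does not actually expose such an API.

```python
# Hypothetical sketch of the replication idea: given the data files each
# managed host already holds, work out which files every host is still
# missing and pick a peer that can supply a copy.

def replication_plan(holdings):
    """holdings: dict of host -> set of data-file names.
    Returns dict of host -> {missing_file: source_host}."""
    all_files = set().union(*holdings.values())
    plan = {}
    for host, files in holdings.items():
        missing = all_files - files
        plan[host] = {
            f: next(h for h, fs in holdings.items() if f in fs)
            for f in sorted(missing)
        }
    return plan

# Made-up example: two machines, each holding a different data file.
holdings = {
    "pc1": {"H1_0940.0", "H1_0940.5"},
    "pc2": {"H1_0940.5", "L1_0941.0"},
}
print(replication_plan(holdings))
```

After the plan is computed, each host would fetch its missing files from the listed peer instead of from the project server.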

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

The message with all the

The message with all the percentages is just telling you why you will not get more work. In short, it is not turned on enough.

So, your download is constrained.

This is a server-side piece of protection for those that are too ambitious... :)

There is a fine line that the projects walk between satisfying the need of participants to have work on their systems and their need to keep the amount of work "in-flight" to a small number. In Einstein@Home's case this is in part because of the size of the server they have.

The capability of the server is also why they are so aggressive in purging the database as soon as possible.

Kilcock
Joined: 1 Jun 05
Posts: 41
Credit: 2604
RAC: 0

RE: The message with all

Message 13379 in response to message 13378

Quote:

The message with all the percentages is just telling you why you will not get more work. In short, it is not turned on enough.

So, your download is constrained.


Yep, by moving WUs to another computer, I reduce the work for the first, with the result that it runs idle half the night (unless you expect me to wake up in the night to go online). So, like a dog, I byte myself in the tail.

That said, I have a workaround to avoid the merge operation, and both machines are spawning WUs. #1 has 1 waiting and #2 has 2 waiting, which leaves the option to include more machines (but I'll pass on that one).

Quote:

This is a server-side piece of protection for those that are too ambitious... :)

There is a fine line that the projects walk between satisfying the need of participants to have work on their systems and their need to keep the amount of work "in-flight" to a small number. In Einstein@Home's case this is in part because of the size of the server they have.

The capability of the server is also why they are so aggressive in purging the database as soon as possible.

I guess there is always another side to a story. Thanks for bringing that up, Paul.

With all the complications encountered, I really hope that what is quoted below will bring the solution to this issue.

Quote:


But I think there is more to it:

With the arrival of the S4 dataset we got WUDFs of 6+ MB. To my surprise, my computers need the same time to crunch a WU. This shows that there is no relation between the length of the WUDF and the crunch time, and leads me to believe that every WU crunches on a piece of the WUDF. If this is the case, then it would be most efficient to reduce the WUDF size to 0.5+ MB, so every WU has just the data it needs and not a chain containing 28 times the data it needs.

Hopefully the above is the case, and if so, it would be relatively easy to solve, yet would give the maximum efficiency for all users.
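A quick back-of-the-envelope calculation makes the hoped-for saving concrete. The 6 MB and 0.5 MB figures come from the post above, and the 28-slice worst case is the ratio quoted earlier in the thread; the single-WU-per-file scenario is an assumption.

```python
# Worst case described in the thread: a WUDF is a chain of 28 slices,
# but a host only ever crunches one of them before moving on.
SLICES_PER_WUDF = 28   # ratio quoted earlier in the thread
WUDF_MB = 6.0          # current S4 data file size (from the post)
PER_WU_MB = 0.5        # proposed per-WU file size (from the post)

efficiency_now = 1 / SLICES_PER_WUDF   # fraction of the download used
print(f"worst-case efficiency: {efficiency_now:.1%}")   # 3.6%
print(f"download per result: {WUDF_MB} MB now vs {PER_WU_MB} MB proposed")
```

For a modem user in that worst case, shipping only the 0.5 MB slice a WU actually needs would cut the per-result download by more than a factor of ten.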

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

RE: With all the

Message 13380 in response to message 13379

Quote:
With all the complications encountered, I really hope that what is quoted below will bring the solution to this issue.
Quote:


But I think there is more to it:

With the arrival of the S4 dataset we got WUDFs of 6+ MB. To my surprise, my computers need the same time to crunch a WU. This shows that there is no relation between the length of the WUDF and the crunch time, and leads me to believe that every WU crunches on a piece of the WUDF. If this is the case, then it would be most efficient to reduce the WUDF size to 0.5+ MB, so every WU has just the data it needs and not a chain containing 28 times the data it needs.

Hopefully the above is the case, and if so, it would be relatively easy to solve, yet would give the maximum efficiency for all users.

This is part and parcel of what I have been trying to say on the SETI@Home message boards. The size of a data file, or the size of a program, is not an indication of speed. If you go to the old site and search on "Good SQL", it should take you to a statement (yes, one line of code) that, when printed in a normal 10-point font, is about 18 inches in height. It only took about 15 ms to run ... much longer to print the data (a sample is shown in the definition of SQL, I think).

So, much smaller files and longer run times... good thing I am in this for the money ....

Kilcock
Joined: 1 Jun 05
Posts: 41
Credit: 2604
RAC: 0

RE: The size of the

Quote:
The size of a data file, or the size of a program, is not an indication of speed.


While crunching a data file, one assumes it starts at A and ends at Z. The comparison with a program is not valid, because a program can have many switches, and with switches you can skip the execution of code.

Quote:
So, much smaller files and longer run times...

If you want to put it that simply: yes, as long as it improves the efficiency for modem and broadband users alike. Once achieved, it will save a lot of resources, which will be a benefit for many years to come, I presume.

Eric.ie

Kilcock
Joined: 1 Jun 05
Posts: 41
Credit: 2604
RAC: 0

Update on

Update on mini-Experiment:

With the introduction of the new set of WUs and the reduction of the WUDF size, the usefulness of this test has been minimised. At the moment I'm finishing the last assigned WUs and look forward to being able to smell the flowers in the west of Ireland. Continuation or redoing of this test is, under the present circumstances, neither useful nor likely.
