SETI Orphans. Can we go back to E@H?

Tom M
Joined: 2 Feb 06
Posts: 6464
Credit: 9592113663
RAC: 6676915

Speedy wrote:
Quote:
And why such waste??

When you say the above, may I ask what you are referring to?

 

Possibly referring to the CPU cycles/time (2+ days?) spent on processing that produced nothing because the task missed the deadline.

 

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Raistmer*
Joined: 20 Feb 05
Posts: 208
Credit: 181428947
RAC: 6029

Gary Roberts wrote:
Raistmer* wrote:
Why such small deadlines?? Gravitational waves can't wait a month to be discovered??

The deadline for Einstein CPU tasks has pretty much always been 14 days.  That's not small - it's just the standard.  Even for a task taking 3 days, there is still plenty of time to meet the deadline.

It was a pretty big deal when GW was first directly measured (a cataclysmic BH-BH collision).  It will be an even bigger deal and a true triumph for those who have developed the technology when the much weaker continuous emissions from rapidly spinning massive objects like neutron stars are finally detected.  Hardly surprising that there is a race to be the first to make that detection! :-).

Gary, FYI, it wasn't E@h that made the first detection. And it seems you're missing what the whole process of making such a detection involves: it's not a single task completion. The first detection took a few months of verification and confirmation.

So let's not link this "race" to the relatively small deadlines implemented here for rather computationally big tasks.

 

Raistmer*
Joined: 20 Feb 05
Posts: 208
Credit: 181428947
RAC: 6029

Speedy wrote:
Quote:
And why such waste??

When you say the above, may I ask what you are referring to?

I'm referring to the roughly 3 days of processing that were ultimately lost because of the missed deadline.

A deadline should be justified, and for now the 14-day deadline doesn't seem justified to me. Can't the servers handle more tasks in flight, or what?...

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3963
Credit: 47146232642
RAC: 65386770

Raistmer, is your problem that your system has too many tasks downloaded to meet the deadline of tasks further down the line? Certainly your system is capable of completing a single work unit in the 14 days, right? Or do you not leave your systems on 24/7?

 

Einstein downloading too many tasks is a common problem. I recommend setting the Einstein resource share to 0 (it will then only send 1 task per device, plus maybe a spare; mine gets a spare sometimes), or setting the cache limits really low on your system.

 

What I do is set resource share to 0.
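
For reference, a really low cache can be set locally with a global_prefs_override.xml in the BOINC data directory. The two tags below are the standard BOINC buffer preferences; the values are just an example of a small cache, not a recommendation:

    <global_preferences>
        <!-- keep at least ~2.4 hours of work on hand -->
        <work_buf_min_days>0.1</work_buf_min_days>
        <!-- fetch at most an extra ~6 hours beyond that -->
        <work_buf_additional_days>0.25</work_buf_additional_days>
    </global_preferences>

Then use the Manager's option to re-read the local prefs file (or restart the client) and it picks the new values up.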


Raistmer*
Joined: 20 Feb 05
Posts: 208
Credit: 181428947
RAC: 6029

Ian&Steve C. wrote:

Raistmer, is your problem that your system has too many tasks downloaded to meet the deadline of tasks further down the line? Certainly your system is capable of completing a single work unit in the 14 days, right? Or do you not leave your systems on 24/7?

 

Einstein downloading too many tasks is a common problem. I recommend setting the Einstein resource share to 0 (it will then only send 1 task per device, plus maybe a spare; mine gets a spare sometimes), or setting the cache limits really low on your system.

 

What I do is set resource share to 0.

Share is 0 (a backup project), but that PC isn't running BOINC 24/7.

 

BTW, it's interesting that the 2 iGPU apps are configured differently.

BRPS asks for 0.5 CPU, so 2 CPU cores stay busy with CPU tasks alongside the iGPU task.

And the Gamma-ray search asks for a full CPU, so only a single CPU task runs alongside it on the same host.
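
If the reserved CPU fraction bothers you, it can be overridden locally with an app_config.xml in the project's directory under the BOINC data folder; this is BOINC's standard mechanism, though the app name below is a placeholder, so check the actual app names your client reports:

    <app_config>
        <app>
            <!-- placeholder name: substitute the real Einstein app name -->
            <name>hsgamma_FGRPB1G</name>
            <gpu_versions>
                <!-- run one task per GPU, reserving half a CPU core for it -->
                <gpu_usage>1.0</gpu_usage>
                <cpu_usage>0.5</cpu_usage>
            </gpu_versions>
        </app>
    </app_config>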

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117749605467
RAC: 34887636

Raistmer* wrote:
Gary, FYI, it wasn't E@h that made the first detection.

Raistmer, FYI, nobody has yet made the first direct detection of the continuous GW that we know must fill the entire universe.  It's incredibly hard to do compared to the now quite common detections of BH or NS merger events.  Einstein, because of its large number of volunteers, is well equipped (though there's no guarantee) to be the very first.  It will be a big deal (for someone) when it happens.

The deadlines have been 2 weeks for the last 15 years.  You don't have to like it but you do have to lump it.  That's entirely up to you.

Cheers,
Gary.

Raistmer*
Joined: 20 Feb 05
Posts: 208
Credit: 181428947
RAC: 6029

Gary Roberts wrote:

It will be a big deal (for someone) when it happens.

The deadlines have been 2 weeks for the last 15 years.  You don't have to like it but you do have to lump it.  That's entirely up to you.

And there is another possibility: to draw attention to the inefficiency and finally bring about changes to remove it.

And the world knows many examples of mistakes that lasted MUCH longer than 15 years ;)

UPDATE: BTW, the server statistics provide a means to estimate this inefficiency:

 

https://clip2net.com/s/475Ffyf

 

And it seems that outright failures account for the bigger share for now, so app stability comes first.
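
The estimate itself is just a ratio over the per-outcome counts a server status page shows. A sketch with made-up numbers (the real figures are in the screenshot linked above):

    # Hypothetical counts from a server status page; the real figures
    # are behind the link above, these are only for illustration.
    valid, error, timed_out = 900_000, 70_000, 30_000
    total = valid + error + timed_out
    print(f"lost to errors:    {error / total:.1%}")
    print(f"lost to deadlines: {timed_out / total:.1%}")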

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117749605467
RAC: 34887636

Raistmer,
It really is great that you're all fired up wanting to fix things.  I truly hope you stay that way.

By posting on these boards, you are probably missing your intended target.  There is very little interaction with the Technical Staff (probably just Bernd), and effectively zero (that we know about) with the Scientists/Researchers.  It wasn't always like that, but for quite a while now, it's been fairly evident that the people who matter don't have the time (and perhaps not the interest) to monitor what goes on here.

That's not a criticism, it's just the way things seem.  We can only guess at the pressures within the various research groups.  They design the what/when/how/where that they need.  We can choose to participate by crunching data - or not.

If you are convinced that there are better ways to do things, you need to get the attention of a different group of people and you need to convince them.

I understand that you really don't like what I'm trying to suggest but that doesn't bother me in the slightest.  Like a completely broken record, all I can do is continue to make the same suggestions.

I wish you all the best in whatever you decide to do.

Cheers,
Gary.

Raistmer*
Joined: 20 Feb 05
Posts: 208
Credit: 181428947
RAC: 6029

Gary Roberts wrote:

Raistmer,
By posting on these boards, you are probably missing your intended target.  There is very little interaction with the Technical Staff (probably just Bernd), and effectively zero (that we know about) with the Scientists/Researchers.  It wasn't always like that, but for quite a while now, it's been fairly evident that the people who matter don't have the time (and perhaps not the interest) to monitor what goes on here.

That's an important point indeed.

 

Gary Charpentier
Joined: 13 Jun 06
Posts: 2061
Credit: 106676006
RAC: 60468

Late, but perhaps pertinent or not.  Many Seti people long ago may have set their cache to something like 10 days, so their machine held enough work to cover the Tuesday outage.  Most other projects do not have outages and have much shorter deadlines, CPDN being the exception.  A huge cache in such a case is almost sure to result in many WUs going into EDF and blowing deadlines.  Most other projects' admins assume the client is set to the default of 1 day and optimize work for that.  Before blasting about deadlines, you might see if setting default values for your client's work-fetch cache makes the problem go away.

Less pertinent, but still on the subject, is work ratios.  Anyone who sets wildly different resource shares between projects is setting themselves up for problems: something like Project A at 100,000 share and Project B at 1 share.  The credit debt will run away because work fetch has no way to account for the fact that Project A only rarely has work.  What will happen long term is you may actually run out of work and have idle cores as the client tries to maintain the ratio.  In this case, if you have two projects, set A to the default 100 and B to zero.  Work fetch will look at A and, if there is no work, grab a little from B.  Not sure how it works for three (or more) projects, with C also at zero.  Might be a question for the BOINC Dev board.
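
To illustrate the runaway, here is a toy simulation. It is not BOINC's actual scheduler, just a sketch of the fair-share "debt" idea; the shares, the no-work interval, and the update rule are all assumptions for illustration:

    # Toy model: why extreme resource shares run away when the
    # high-share project rarely has work. Not BOINC's real scheduler.
    def simulate(days=100, share_a=100_000, share_b=1, a_has_work_every=30):
        total = share_a + share_b
        debt_a = 0.0  # how far Project A is behind its fair share (device-days)
        for day in range(1, days + 1):
            if day % a_has_work_every == 0:
                # A has work: it gets the whole day, repaying a tiny sliver of debt
                debt_a -= 1 - share_a / total
            else:
                # A has no work: B runs, and A falls behind by its huge fair share
                debt_a += share_a / total
        return debt_a

    print(f"Project A is owed about {simulate():.0f} device-days after 100 days")

With a 100:0 split instead, the zero-share project accrues no entitlement at all, so there is nothing to run away.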
