Observations on FGRBP1 1.18 for Windows

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

Jim1348 wrote:
I wish we were given some explanation. Is it temporary (maybe due to server limitations), or longer-term because there is not enough work? People tend to assume the worst, which is sometimes accurate.

There has been some explanation recently about beta tasks running out.  Perhaps try turning off beta tasks and see if that changes things.

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3518065231
RAC: 1635337

Since v1.18 seems to run very well, I think it might be time to promote it to stable apps.

-----

TimeLord04
Joined: 8 Sep 06
Posts: 1442
Credit: 72378840
RAC: 0

Mumak wrote:
Since v1.18 seems to run very well, I think it might be time to promote it to stable apps.

+1

I wholeheartedly agree. :-)

TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join SETI Refugees

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

Gary Roberts wrote:
Holmis wrote:
... try setting your "additional" cache setting to a very low number ...

This is good advice because otherwise BOINC waits until the 'low water mark' (the 1st setting) is reached before trying to fill up to the 'high water mark' (1st setting plus additional).  For most people, I don't really see the point of having this work cache 'range'.  Just set what you want in the first setting and leave the other as 0.01 days.  BOINC will then always be trying to maintain the full value.

By the way, what's the difference between setting 0 and 0.01 for that additional cache? I have never understood why that setting is there at all; I have always kept the additional cache at 0. I just did a quick test to see whether either value changed how often the client sends work requests (while the cache is far from full), but I didn't notice any difference.
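
For anyone who wants to experiment with this, the two settings being discussed correspond to work_buf_min_days and work_buf_additional_days in BOINC's local global_prefs_override.xml (the override file may contain just the preferences you want to change). A minimal sketch, with values matching Gary's suggestion; the 0.5-day cache is only an example:

   <global_preferences>
      <!-- 'store at least X days of work': set this to the cache you actually want -->
      <work_buf_min_days>0.5</work_buf_min_days>
      <!-- 'store up to an additional X days': keep tiny so BOINC tops up continuously -->
      <work_buf_additional_days>0.01</work_buf_additional_days>
   </global_preferences>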

floyd
Joined: 12 Sep 11
Posts: 133
Credit: 186610495
RAC: 0

Mumak wrote:
Since v1.18 seems to run very well, I think it might be time to promote it to stable apps.


I see an unusual number of invalids for my only host, which runs two different GPUs. Someone with a better overview should carefully compare the 1.18 and 1.17 results, and I'm sure that is being done right now. Let them take their time to make sure the 1.18 results are generally correct.

By the way, the host mentioned above runs Linux, but I don't see how the Windows version is so special that it needs its own thread, so I felt free to join in here.

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

floyd_7 wrote:
I see an unusual number of invalids for my only host

Not sure what you mean by unusual?

I see from my records some samples (v1.05 was FGRPSSE):

Version   #invalid / #valid
v1.05       8 /  5865
v1.17       3 /  3611
v1.18       9 /  3248
BRP6       27 / 27066
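
Expressed as failure rates (just arithmetic on the samples above, #invalid / (#invalid + #valid)):

v1.05:  8/5873  ≈ 0.14%
v1.17:  3/3614  ≈ 0.08%
v1.18:  9/3257  ≈ 0.28%
BRP6:  27/27093 ≈ 0.10%

So the v1.18 sample rate is roughly three times that of v1.17, though with counts this small the difference may not mean much.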


Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

floyd_7 wrote:
I see an unusual number of invalids for my only host, running two different GPUs. Someone with a better overview should carefully compare the 1.18 and 1.17 results and I'm sure that's what they're doing now. Let them take their time to make sure the 1.18 results are in general correct.

I see only five invalids, which I don't think is a large number. It is a bit more than I get on Win7 64-bit running two GTX 750 Ti's, however. https://einsteinathome.org/host/11368189/tasks

But the GTX 750 Ti's are not overclocked, and always run cool.  Maybe your cards are factory overclocked a little too much?


denjoR
Joined: 9 Apr 08
Posts: 4
Credit: 139110089
RAC: 0

Gavin_14 wrote:
Mumak wrote:

I don't know what happened, but suddenly all my hosts take more time to finish v1.18:
Fury X x1: 450 -> 520 s
RX 480 x1: 660 -> 750 s
HD7950 x2: 1230 -> 1400 s
GTX 1050 Ti x1: 1470 -> 1540 s

Has anybody else observed similar behavior? Was there some change in the amount of work?

I can confirm that over the last maybe 4-5 days I have observed runtime increases somewhere in the region of 30-60 seconds per task. So you are not alone :-)
Perhaps we are now sifting through data from a different frequency that's slightly more demanding?


Something changed (RX 480):

The 2 WUs I'm running finish at the same time, every time. That was not the case in the days before.

So I put a 3-minute gap between the two WUs, and as a result the average GPU load increased from 87% to 93%. Unfortunately the gap drifts over time, so the efficiency (GPU load) of the app is quite inconsistent.

(Average GPU load with 2 WUs that finish at the same time is only about 83%.)

If you run 2 WUs, put a gap between them, because when both WUs reach the 89% point, your GPU load drops to nearly 0% for 90 seconds.
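
To put rough numbers on that (illustrative arithmetic only, using the ~750 s RX 480 runtime quoted above): with two ~750 s WUs started together, both reach the 89% point at the same moment, so the GPU idles for the full 90-second tail, about 90/750 ≈ 12% of the run. With a 3-minute offset the two tails no longer overlap, which fits the observed rise in average GPU load from 87% to 93%.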

@Gavin_14:

Why are you running the Fury and RX with only one WU at a time?! You could double your output. Oo

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3518065231
RAC: 1635337

I guess that was a question for me, not Gavin?

When I try to run more than one WU on the Fury X or RX 480, the runtimes get much longer than 2 x 1 WU. I don't know why that is, only that GPU memory-controller utilization dropped to almost 0 most of the time (see above in this thread)... For example, on the Fury one WU took 450 s, while two took 1500-1800 s.
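
In throughput terms that is simple arithmetic: two WUs run one at a time finish in about 2 x 450 = 900 s, while two run concurrently take 1500-1800 s for the same two results, i.e. only half to two-thirds of the throughput.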

-----

denjoR
Joined: 9 Apr 08
Posts: 4
Credit: 139110089
RAC: 0

That problem comes up when I run more than 2 GPU WUs, but 2 at a time should be no problem. Oo

Interesting that this happened to you with 2 WUs.

Each WU needs a free CPU thread, but I think you know that ;)
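
For anyone who wants to try two WUs per GPU with a full CPU thread reserved for each, the usual BOINC mechanism is an app_config.xml in the Einstein@Home project directory. A minimal sketch; the app name hsgamma_FGRPB1G is my assumption for the gamma-ray GPU search, so check the name your client actually reports if it differs:

   <app_config>
      <app>
         <name>hsgamma_FGRPB1G</name>
         <gpu_versions>
            <!-- 0.5 GPUs per task, so two tasks share one GPU -->
            <gpu_usage>0.5</gpu_usage>
            <!-- reserve one full CPU thread per task -->
            <cpu_usage>1.0</cpu_usage>
         </gpu_versions>
      </app>
   </app_config>

After saving the file, 'Options -> Read config files' in the BOINC Manager's advanced view applies it without restarting the client.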

