Betreger wrote: IMO if the new work is 5X longer the credits should be 5X.
Well, only if the original credit was accurate. I think we got too much credit for the shorter tasks, and I view it as a bonus for testing a new application.
My take on the amount of credit for a task is that it should not matter which GPU app/science run you choose here, as they should all pay about the same amount of credit per unit of time. The same goes for a comparison among the CPU apps/science runs.
Holmis wrote: I think we got too much credit for the shorter tasks and I view it as a bonus for testing a new application.
When the great BRP6G CUDA 55 apps were running I was averaging over 90k RAC; when they went away and BRP4G took over, I lost about 20k. The way it is now, the same GPU will get well under 30k. My only solace is that all the other Windoz machines are suffering equally.
I hope this is a work in progress and the Windoz users can eventually get throughput similar to the Linux users.
Over on the Technical News thread, there was a report a few hours ago of a current-generation result receiving credit of 1,365 instead of the 3,465 awarded on the first day.
Now my short-queue host has also received 1,365 for the six most recently returned units, so I suspect this is the new standard rate (until the next revision). I thought 700 was very, very skimpy and 3,465 lavishly generous. I think 1,365 is somewhat skimpy, personally.
archae86 wrote: I thought 700 was very, very skimpy and 3,465 lavishly generous. I think 1,365 is somewhat skimpy, personally.
I agree with your assessment. 693 has been the standard award for many iterations of the CPU-only versions of the various FGRP runs, from FGRP1 to FGRP4 to FGRPB1. I could easily be wrong, but my understanding was that each successive run was processing tasks with the same amount of 'science content'. If so, the award of a constant 693 seems quite appropriate.
The bit I am unsure of is whether the GPU app has still been processing tasks with the same science content. It seemed likely, because the award continued to be 693 while the beta test was on and each quorum was split between a CPU task and a GPU task. I read the announcement that the science content was being increased by a factor of 5, and I wasn't really surprised when the award was 700, because there have been examples in the past where the crunch load/crunch time changed and the credit was adjusted somewhat later. I took 700 to be an interim adjustment until a more appropriate value could be worked out.
In the past there have also been examples of apps with improved efficiency able to crunch the same scientific payload in a shorter time. In those cases, the credit award was maintained even when the crunch time was quite a bit shorter. So if the current situation really is a 5x increase in content but only a ~3.5x increase in crunch time due to some sort of efficiency improvement, I wouldn't have a problem with the seemingly over-generous 5 x 693 = 3,465 credit award. No doubt Christian will comment at some point. I suspect that 1,365 is some sort of unintended consequence that crept in when some other task generation parameter was being adjusted.
Cheers,
Gary.
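To put the relative rates discussed in the post above into numbers, here is a rough back-of-the-envelope sketch in Python. It only uses figures quoted in this thread (the old 693-credit award, new tasks taking roughly 3.5x as long, and the 700 / 1,365 / 3,465 awards); the baseline runtime of 1.0 is an arbitrary unit, so only the ratios are meaningful.

```python
# Back-of-the-envelope comparison of credit per unit of crunch time.
# All figures are taken from the thread above; the baseline runtime is
# an arbitrary unit, so only the relative rates matter.

OLD_CREDIT = 693                  # long-standing FGRP award per task
OLD_RUNTIME = 1.0                 # arbitrary baseline crunch time
NEW_RUNTIME = 3.5 * OLD_RUNTIME   # new tasks reportedly take ~3.5x longer

new_awards = {"interim 700": 700, "current 1,365": 1365, "first-day 3,465": 3465}

old_rate = OLD_CREDIT / OLD_RUNTIME
print(f"old rate: {old_rate:.0f} credits per unit time")

for label, award in new_awards.items():
    rate = award / NEW_RUNTIME
    print(f"{label}: {rate:.0f} credits per unit time "
          f"({rate / old_rate:.2f}x the old rate)")
```

Run as written, this gives roughly 200, 390 and 990 credits per unit time for the three awards (about 0.29x, 0.56x and 1.43x the old rate), which lines up with the sentiment above that 700 was very skimpy, 1,365 somewhat skimpy, and 3,465 on the generous side.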
I guess Christmas would not be complete without a CreditNew debate.
...what a seasonal gift from the Project Team for Christmas: more CPU power, more GPU memory, more runtime, less credit. No need to begin a new discussion. It is what it is.
BR
Merry Christmas and a happy New Year.
Der Mann mit der Ledertasche
Greetings from the North
AgentB wrote: ... a CreditNew debate.
Thankfully CreditNew doesn't apply here, or else there probably would be a 'debate'. I should confess that I know pretty much zero about CreditNew, other than to say I did read a lot of incendiary commentary on the topic, somewhere, quite some time ago, which I promptly forgot about as I was scooting off into the far distance at quite a rate of knots.
I'm very happy with our current system where the Devs decide on fixed credit and there is an opportunity for well-mannered political lobbying that may (or may not) be taken notice of. I think it's quite OK for people to put forward reasonable, politely presented points of view but would be inclined to take action if somebody started abusing that.
Cheers,
Gary.
My thoughts on this app have evolved over time.
1: My GTX 660 was cranking out approximately 80k of throughput running BRP6 CUDA 55; now it is struggling to do 12 of these devils a day running 2 at a time (a rough credit sketch follows after this list).
2: It uses a CPU core per work unit, which detracts from my contribution to my other project.
3: 5 times the credit is not the answer; that would reward the Linux hosts with an ungodly, unreasonable amount of credit.
4: The solution should be a better-written app for Windows, so the Windows hosts get throughput similar to the Linux hosts.
5: I shall continue on because I believe in this project, as my RAC continues to drop from 90k to who knows what.
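For a rough sense of what point 1 means in credit terms, here is a small sketch assuming the 1,365-credit award quoted earlier in the thread. RAC converges toward average credit earned per day, so treat this as an approximation only.

```python
# Rough sketch: what ~12 tasks/day implies for RAC, assuming the current
# 1,365-credit award quoted earlier in the thread. RAC converges toward
# average daily credit, so this is only an approximation.

TASKS_PER_DAY = 12
CREDIT_PER_TASK = 1365

daily_credit = TASKS_PER_DAY * CREDIT_PER_TASK
print(f"approximate steady-state RAC: {daily_credit:,}")  # ~16,380 vs ~80,000 before
```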
Betreger wrote:
3: 5 times the credit is not the answer; that would reward the Linux hosts with an ungodly, unreasonable amount of credit.
4: The solution should be a better-written app for Windows, so the Windows hosts get throughput similar to the Linux hosts.
They should do what is best for the science, which is to maximize the output of each host as much as possible. Then the technology decides how much credit is given (equal pay for equal work is what most projects use, and it makes the most sense to me). Forced equality for the sake of political correctness is a distortion of the data that I like to avoid; it creates the wrong incentives for maximizing output by providing the wrong rewards.
I agree, the Windoz hosts are not producing the throughput the Linux hosts are. A better app needs to be written for Windoz.
I am sure the project developers are aware of this and working on it, but I have no evidence of that.