Understanding credits

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1594325658
RAC: 767824
Topic 198643

I've been with this project for a while, and let me preface my question by saying I'm not in it for the credit; I'm here because I think distributed computing is a brilliant idea and I'm interested in what is being done here. So no one should take this as a complaint.
The examples I'm using were run on the same computer under the same conditions; both are very typical results for that box. I do understand E@H uses fixed credits.
[pre]
Reported (UTC)         Status                    Run time (s)  CPU time (s)  Credit    Application
11 Jun 2016, 3:44:03   Completed and validated   46,910.45     25,309.04     2,000.00  Gravitational Wave search O1 all-sky I v1.04 (X64O1I), windows_x86_64
11 Jun 2016, 21:04:21  Completed and validated   34,056.09     17,118.32     693.00    Gamma-ray pulsar binary search #1 v1.00, windows_intelx86
[/pre]
The gravitational wave task took approximately 13 hrs to complete for 2,000 credits, a bit over 153 cr/hr.
The gamma-ray task took about 9.5 hrs for 693 credits, or roughly 73 cr/hr.
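For anyone checking my arithmetic, here's a throwaway C sketch using the run times (the first seconds column) from the task list above:

[pre]
#include <stdio.h>

int main(void) {
    /* run time (s) and granted credit, taken from the two tasks above */
    double gw_secs    = 46910.45, gw_credit    = 2000.00;
    double fermi_secs = 34056.09, fermi_credit =  693.00;

    printf("GW:    %5.2f hrs, %5.1f cr/hr\n",
           gw_secs / 3600.0, gw_credit / (gw_secs / 3600.0));
    printf("Fermi: %5.2f hrs, %5.1f cr/hr\n",
           fermi_secs / 3600.0, fermi_credit / (fermi_secs / 3600.0));
    return 0;
}
/* prints:  GW:    13.03 hrs, 153.5 cr/hr
            Fermi:  9.46 hrs,  73.3 cr/hr */
[/pre]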
My question is why the huge discrepancy?

Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

Understanding credits

My understanding is that the project doubled the credits for the Gravitational Wave search O1 all-sky I application to get it done in a hurry.
Personally I'm a big fan of awarding credits for work done (FLOPS), but since the "I" application has already finished, the credit doubling seems to have worked.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 734762712
RAC: 1298574

The intention of the project

The intention of the project is to award credit evenly across different searches, so that ideally the GW search and the Fermi search would, on identical hardware, give you the same amount of credit per hour of computing time.

It was especially hard to do this for the latest GW search: the search performs most of its work on data that is only on the order of several MB, which means that on some CPUs the most computationally expensive steps can be done with almost all of the data in the CPU cache, making the computation really fast on those hosts. On other CPUs the data will not fit into the CPU cache, and those CPUs will take longer. The Fermi search, however, needs to operate on a larger dataset, which will never completely fit into the CPU cache. So on the "faster" hosts, especially those with a larger cache (per CPU core you are using for BOINC), the GW app earns credit faster relative to the Fermi search than it does on the slower, smaller-cache hosts.
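To illustrate the effect (a toy C sketch only, not the actual search code, and the working-set sizes are arbitrary assumptions): both calls below perform the same total number of additions, but one walks a buffer small enough to stay in cache while the other streams a buffer far larger than any cache. On a typical desktop CPU the first call finishes noticeably faster even though the arithmetic is identical; that is essentially the gap between the "fast" and "slower" hosts.

[pre]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a buffer of n doubles 'passes' times; the total number of
 * additions is the same for both benchmarks below. */
static double walk(double *buf, size_t n, int passes) {
    double sum = 0.0;
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    return sum;
}

static void bench(const char *label, size_t n, int passes) {
    double *buf = malloc(n * sizeof *buf);
    if (!buf) { perror("malloc"); exit(1); }
    for (size_t i = 0; i < n; i++) buf[i] = 1.0;
    clock_t t0 = clock();
    double sum = walk(buf, n, passes);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%s: %.2f s (checksum %.0f)\n", label, secs, sum);
    free(buf);
}

int main(void) {
    bench("small, ~2 MB, cache-resident", (size_t)1 << 18, 4096);
    bench("large, ~256 MB, RAM-resident", (size_t)1 << 25, 32);
    return 0;
}
[/pre]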

This is a bit unsatisfactory, so it was decided to try to identify the "fast" and "slower" hosts based on heuristic criteria (from the CPU type name) and to define a sub-run of the GW search for each of the two groups, awarding twice as many credits to the "slower" hosts so that both groups of hosts would get roughly the same credits/hour compared to the Fermi search.
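Schematically, the classification amounts to something like the sketch below; the marker substrings are invented for illustration, and the real server-side criteria are more involved.

[pre]
#include <stdio.h>
#include <string.h>

/* Invented example only: bucket a host into the "fast" (big-cache) or
 * "slower" GW sub-run from its CPU model string, doubling the base
 * credit for the slower group. These substrings are NOT the project's
 * real criteria. */
static int gw_credit(const char *cpu_model, int base_credit) {
    const char *fast_markers[] = { "i7", "Xeon" };  /* assumed big-cache parts */
    for (size_t i = 0; i < sizeof fast_markers / sizeof *fast_markers; i++)
        if (strstr(cpu_model, fast_markers[i]))
            return base_credit;        /* "fast" sub-run: base credit */
    return 2 * base_credit;            /* "slower" sub-run: doubled */
}

int main(void) {
    printf("%d\n", gw_credit("Intel(R) Core(TM) i7-4790K", 1000));  /* 1000 */
    printf("%d\n", gw_credit("AMD Athlon(tm) II X4 640",   1000));  /* 2000 */
    return 0;
}
[/pre]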

The heuristic division into the "fast" and "slower" groups is not perfect, and if you see that your host gets more credits per hour for the GW search than for the Fermi search, it's an indication that your CPU should have been classified as a fast one but was actually classified as a "slower" one.

There are factors other than the CPU type that determine how fast your GW units will run, e.g. whether hyperthreading is enabled, how many cores are allowed to run BOINC tasks, and what kinds of BOINC tasks are running in parallel with the GW search tasks, so there will always be cases where the "fast"/"slower" host classification fails.

HB

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Betreger, Be glad that

Betreger,

Be glad that the system "misclassified" your cpu as "slow" otherwise you would have only gotten 1000 credits for those 13 hours of work. Think about that.

Those with "fast" got penalized for having fast CPUs.

my 2 cents... or 0.02 euro, since they're close at that low monetary value, lol

AgentB
Joined: 17 Mar 12
Posts: 915
Credit: 513211304
RAC: 0

RE: Be glad that the system

Quote:

Be glad that the system "misclassified" your cpu as "slow" otherwise you would have only gotten 1000 credits for those 13 hours of work. Think about that.

Those with "fast" got penalized for having fast CPUs.

This issue will reappear as "Why was I getting 2K before but now I'm getting 1K?" for the same type of OAS task.

There is an ongoing thread on the BOINC forums, [Discussion] 4th Generation BOINC credit system, and I feel that credit is a true dilemma(*) for project admins when "deciding".

In simplest terms, imagine three applications and three hosts, and the average time each host takes to run each app, say in hours.

[pre]
       AppX  AppY  AppZ
hostA    2     1     1
hostB    1     2     1
hostC    1     1     2
[/pre]

You can easily construct this by saying one host has fast memory and a slow processor, or supports a certain instruction set well (e.g. double precision / AVX), etc.

Looking at this, you get a feeling that maybe each app should get the same credit; but if you knew hostA was representative of 90% of the population, you might say AppX should get more credit than AppY or AppZ.

As time goes by, maybe hostB becomes more representative, so the feeling changes: maybe AppY should get more credit (see the sketch below).
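To make that concrete, the toy C calculation below weights the runtimes in the table by a hypothetical host-population mix (the 90/5/5 splits are made up). Same table, different population, opposite conclusion about which app deserves more credit.

[pre]
#include <stdio.h>

/* The table above: hours for host (row) to run app (column). */
static const double hours[3][3] = {
    { 2, 1, 1 },   /* hostA */
    { 1, 2, 1 },   /* hostB */
    { 1, 1, 2 },   /* hostC */
};
static const char *apps[3] = { "AppX", "AppY", "AppZ" };

/* Population-weighted mean runtime per app; if credit is meant to track
 * the average time invested, each app's credit would scale with this. */
static void mean_runtimes(const double weight[3]) {
    for (int a = 0; a < 3; a++) {
        double t = 0.0;
        for (int h = 0; h < 3; h++)
            t += weight[h] * hours[h][a];
        printf("%s: %.2f h average\n", apps[a], t);
    }
}

int main(void) {
    double mostly_A[3] = { 0.90, 0.05, 0.05 };  /* hostA dominates */
    double mostly_B[3] = { 0.05, 0.90, 0.05 };  /* later, hostB dominates */

    puts("90% hostA:");
    mean_runtimes(mostly_A);   /* AppX: 1.90 h, AppY: 1.05 h, AppZ: 1.05 h */
    puts("90% hostB:");
    mean_runtimes(mostly_B);   /* AppX: 1.05 h, AppY: 1.90 h, AppZ: 1.05 h */
    return 0;
}
[/pre]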

All that said, I'd crunch OAS for 0 credits. It's exciting.

(*) A long time ago I knew what a lemma(**) was, and of course we all know what a dilemma is; only recently did I connect the dots: a di-lemma is exactly two lemmas.
(**) algebra
