Credits

Lester Lane
Joined: 30 Aug 05
Posts: 7
Credit: 124,098,105
RAC: 61,994
Topic 215289

I've had a hunt around but can't find my question even though I am sure it has been asked before!  Can someone please explain why job h1_0368.70_O2C02Cl1In0__O2AS20-500_368.80Hz_598_0 running for 13.26 hrs on the CPU gives 1000 credits whereas LATeah1003L_132.0_0_0.0_22090264_0 running for about 11 mins on my GPU gives me 3,465 credits?  Many thanks, especially for putting up with another dumb question!

Lester

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,954
Credit: 31,019,082,521
RAC: 28,882,440

All GW searches and earlier FGRP searches have used a CPU application.  The credit award is based on the 'work content' of a task. Traditionally, there has been more work in the GW tasks than the FGRP tasks - hence the 1000 versus 693 credit awards.  After a lot of effort (and assistance from an external programmer), an application was developed to perform the FGRP search on modern GPUs and take advantage of the parallel processing capability available there by virtue of the very large number of 'stream processors' in a GPU core.  To be able to do this you need a problem that can be solved in a massively parallel way.  Fortunately the FGRP search has turned out to be suitable for this approach.

Because GPU tasks turned out to finish very quickly, the 'task size' or work content was increased by a factor of 5.  So 5 x 693 = 3,465, which is the current credit award for these five-times-larger tasks.

 

Cheers,
Gary.

raghuram
Joined: 8 Jul 18
Posts: 6
Credit: 4,652,415
RAC: 9,671

Hi,

I'm new here, but lately I've been losing some of my "Valid" tasks.

My total tasks count keeps decreasing.

Any idea why that's happening?

 

Thanks

Holmis
Joined: 4 Jan 05
Posts: 1,001
Credit: 710,831,136
RAC: 774,963

If every task that has ever been processed by Einstein@home were to be saved, the data storage capacity and database volume required would be astounding. So completed and valid tasks/workunits are deleted after some time (2 weeks?), but the result of each task is saved in another database that only the project has access to. You do get to keep the credits that were awarded for completing the tasks!

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,954
Credit: 31,019,082,521
RAC: 28,882,440

Further to what Holmis has explained, the online database will retain your tasks for a period of just a few days after the quorum has been completed and the results validated.  That could be quite a short period if all tasks in the quorum are returned quickly and validate with each other immediately.  Some tasks may have errors or may not be returned at all and so have to be re-issued, perhaps multiple times in a worst case scenario.  In such cases, it can be many weeks or months until a quorum is finally complete.  That is why, if you browse your complete current list of tasks, there will always be some quite 'old' entries still showing.

If you wish to review (and perhaps analyse) all your previously completed tasks, the necessary information is available to you locally, without having to copy and keep the transient stuff from the online database.  In your BOINC data directory (listed in the startup messages when BOINC starts) there is a text file named job_log_einstein.phys.uwm.edu.txt that contains lots of records, one for every single task your computer has ever returned.  Here is a small snip from one of my hosts.


1532637064 ue 6307.531381 ct 62.764000 fe 525000000000000 nm LATeah1011L_156.0_0_0.0_25887232_1 et 2577.617728 es 0
1532638508 ue 6307.531381 ct 64.032000 fe 525000000000000 nm LATeah1008L_148.0_0_0.0_38925446_2 et 2581.024172 es 0
1532638935 ue 44152.719665 ct 36603.540000 fe 105000000000000 nm LATeah0028F_1272.0_91877_0.0_2 et 36735.385641 es 0
1532639643 ue 6307.531381 ct 64.980000 fe 525000000000000 nm LATeah1011L_156.0_0_0.0_26032391_0 et 2578.479450 es 0
1532641102 ue 6307.531381 ct 62.966000 fe 525000000000000 nm LATeah1011L_164.0_0_0.0_1262394_1 et 2593.221940 es 0
1532642237 ue 6307.531381 ct 61.516000 fe 525000000000000 nm LATeah1011L_156.0_0_0.0_28201621_0 et 2592.169989 es 0
1532643680 ue 6307.531381 ct 61.140000 fe 525000000000000 nm LATeah1011L_148.0_0_0.0_5315429_2 et 2577.128542 es 0
1532644815 ue 6307.531381 ct 61.278000 fe 525000000000000 nm LATeah1011L_156.0_0_0.0_25890494_1 et 2577.129998 es 0
1532646254 ue 6307.531381 ct 61.714000 fe 525000000000000 nm LATeah1011L_156.0_0_0.0_26069904_0 et 2572.838006 es 0

All data fields (except the very first) have a 2-letter identification in front of them. Here is a list of the main ones.
ue  - the crunch time estimated by the project for this task.
ct  - the final CPU time actually used to crunch the task (not the same as elapsed time, particularly for GPU tasks).
fe  - the FLOPS estimate, ie. the 'work content' of a task as estimated by the project.
nm  - the full name of the task.
et  - the final elapsed time (or run time) consumed by the completed task.

The very first field (no identification label) is the time that the task was returned to the project.  It is the number of seconds since the Unix 'epoch' - 1970-01-01  00:00:00 UTC.  If you wanted to see the details of all work ever performed by your computer, you could import the file into a spreadsheet and process the information in whatever way suited you best.
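If a spreadsheet isn't your thing, the same analysis can be scripted. Here is a minimal Python sketch (not an official BOINC tool) that parses one job log record into a dictionary using the field layout described above, and converts the leading Unix timestamp to a human-readable UTC date. The function name `parse_job_log_line` is my own; only the record format comes from the snippet above.

```python
from datetime import datetime, timezone

def parse_job_log_line(line):
    """Parse one BOINC job_log record into a dict.

    Assumed format (from the snippet above): a Unix timestamp,
    then two-letter-keyed fields: ue, ct, fe, nm, et, es.
    """
    tokens = line.split()
    record = {"returned": int(tokens[0])}  # seconds since the Unix epoch
    # The remaining tokens come in key/value pairs.
    for key, value in zip(tokens[1::2], tokens[2::2]):
        # 'nm' (task name) is a string; everything else is numeric.
        record[key] = value if key == "nm" else float(value)
    return record

line = ("1532637064 ue 6307.531381 ct 62.764000 fe 525000000000000 "
        "nm LATeah1011L_156.0_0_0.0_25887232_1 et 2577.617728 es 0")
rec = parse_job_log_line(line)
# When the task was returned, as a UTC date:
print(datetime.fromtimestamp(rec["returned"], tz=timezone.utc))
print(rec["nm"], rec["et"])
```

From there it is a short step to reading the whole file with a loop and, say, summing `et` per task type or plotting run times over time.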

 

Cheers,
Gary.

raghuram
Joined: 8 Jul 18
Posts: 6
Credit: 4,652,415
RAC: 9,671

Thanks a lot Holmis and Gary!

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,954
Credit: 31,019,082,521
RAC: 28,882,440

You're welcome!

At your request, I've removed the duplicate question.

 

Cheers,
Gary.

gebruiker
Joined: 20 Sep 16
Posts: 1
Credit: 74,880
RAC: 0

Hello, I am new here and have a question: what are the credits for? Do they get paid in bitcoin?

anniet
Joined: 6 Feb 14
Posts: 1,326
Credit: 4,729,827
RAC: 689

Hello, Gebruiker, and welcome to the forums :) I don't do any mining, nor am I an expert in anything - but do you mean Gridcoin?

I'm not sure if this post (from the news thread) is the most up-to-date announcement. If I find something more recent, I'll post a link to it for you.

 

Please wait here. Further instructions could pile up at any time. Thank you.

Holmis
Joined: 4 Jan 05
Posts: 1,001
Credit: 710,831,136
RAC: 774,963

gebruiker_5 wrote:
Hello, I am new here and have a question: what are the credits for? Do they get paid in bitcoin?

Credits earned running Einstein@home can't be exchanged for anything; they're only a measure of how much you have contributed to the project, and can be used to compare your contribution with that of other volunteers running Einstein@home.

I've heard talk of other projects that can earn you e-currency, but I don't know anything about them.

Attila
Joined: 28 Feb 12
Posts: 3
Credit: 26,257,649
RAC: 106

Gary Roberts wrote:

All GW searches and earlier FGRP searches have used a CPU application.  The credit award is based on the 'work content' of a task. Traditionally, there has been more work in the GW tasks than the FGRP tasks - hence the 1000 versus 693 credit awards.  After a lot of effort (and assistance from an external programmer), an application was developed to perform the FGRP search on modern GPUs and take advantage of the parallel processing capability available there by virtue of the very large number of 'stream processors' in a GPU core.  To be able to do this you need a problem that can be solved in a massively parallel way.  Fortunately the FGRP search has turned out to be suitable for this approach.

Because GPU tasks turned out to finish very quickly, the 'task size' or work content was increased by a factor of 5.  So 5 x 693 = 3,465, which is the current credit award for these five-times-larger tasks.

 


But couldn't that be a problem? Looking at the top 50 computers, all of them run FGRP exclusively on GPU and none of them crunch GW or the other searches, so by the looks of it those searches lose out. If you want to be in the top 50, you have to run FGRP on GPU and halt the rest, so fewer workunits get processed in those searches than in FGRP. Or am I wrong?
