I can only see results on my 3 machines for WUs since Oct 31; everything else is gone. One machine (12087190) shows the 4 CPU tasks from when Asteroids@Home was not sending work, plus 2 GPU tasks, even though it's been working on Einstein since September and has 3 GPU tasks in the queue now.
Where are my WU results???
They move them offline to keep the database running quickly, although that purge does seem to happen pretty quickly. When I look at my own PCs I see workunits from before the middle of October.
We had to decrease the time
We had to decrease the time that tasks are shown after the workunit is completed, which means that they get moved into our Archive sooner.
The workunits mikey mentioned are ones that have at least one resend and therefore were only recently completed (by some other computer).
Can we see the archive? I'm
Can we see the archive?
I'm in need of real data on my machines' performance to decide which project to turn their GPU and CPU over to.
There are only 6 data points to extrapolate from on how Einstein compares to the other GPU projects my machines can run. It seems that Einstein is better than SETI, Collatz, Prime or Moo! because it uses much less CPU time and thus doesn't interfere with my CPU-only projects, but I need more data.
Workunits are purged 7 days
Workunits are purged 7 days after all results have arrived and been successfully validated. That should give you enough data points to extrapolate from if you run Einstein@Home exclusively for a week on your GPU. There is enough work available for BRP4G at the moment. There is no performance data in the archives, only science data.
RE: I'm in need of real
If you look in the BOINC data directory, you will find job logs (one per project) for every project supported by that machine. These are text files that are appended to every time a task is completed. I don't know if other projects have exactly the same fields and meanings, but here is an example line from one of my machines doing FGRP4 tasks.
1440672822 ue 43390.076859 ct 61580.130000 fe 105000000000000 nm LATeah1061E_..... et 62817.037961 es 0
The first field is the time in seconds since the Unix epoch (midnight UTC at the start of 1st Jan 1970) at which the task was completed. After that, there is a series of fields, each preceded by a two-letter label:
ue - the runtime estimate
ct - the CPU component of elapsed time
fe - the FLOPS estimate for the amount of work in the task
nm - the task name
et - the elapsed (run) time
I don't know what es is, but it always seems to be zero here. All times are given in seconds.
It is quite easy to insert the data as space-separated values into a spreadsheet and extract whatever statistical information you want. The file contains all task data from when you started, unless you have deleted it at some intermediate point; BOINC recreates it if it gets deleted. It is far, far easier to get the data from these files than to try to catch stuff before it gets deleted from the online database.
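If a spreadsheet feels clumsy, a short script does the same job. Here is a minimal Python sketch based on the field layout just described; the data directory path is an assumption for a typical Linux install, and the file name follows the job_log_<project>.txt pattern.

```python
# Minimal sketch: parse a BOINC job log using the field layout described above.
# The data directory path below is an assumption - adjust it for your install.
from pathlib import Path

log_file = Path("/var/lib/boinc-client") / "job_log_einstein.phys.uwm.edu.txt"

tasks = []
for line in log_file.read_text().splitlines():
    fields = line.split()
    record = {"completed": int(fields[0])}  # first field: unix timestamp
    # Remaining fields alternate: two-letter label, then its value.
    for label, value in zip(fields[1::2], fields[2::2]):
        record[label] = value if label == "nm" else float(value)
    tasks.append(record)

# Example: mean elapsed time and mean CPU-to-elapsed ratio over all tasks.
mean_et = sum(t["et"] for t in tasks) / len(tasks)
mean_cpu = sum(t["ct"] / t["et"] for t in tasks) / len(tasks)
print(f"{len(tasks)} tasks, mean elapsed {mean_et:.0f} s, mean CPU/elapsed {mean_cpu:.1%}")
```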
Cheers,
Gary.
RE: RE: I'm in need of
The job log files are written by the BOINC client, so for a given version of BOINC the columns should be the same for all projects on the same host computer.
Some of the values, such as "fe", are decided by the project and, like credit awards, differ from project to project.
I checked the BOINC source and "es" is the exit status; I have only ever seen the value 0 logged. I suspect this is a future enhancement to the job logs to record error codes and aborts - it gets a brief mention in the wiki for RPCProtocol.
++ on that.
However, note that errors and aborts are not logged at all, while invalids ARE logged in these files, indistinguishable from valid completed results.
The online database does, however, show "Errors" and "Invalids". These should be uncommon, but there is no useful record of these events on the host.
hth.
RE: RE: I'm in need of
)
Well there ya go!
job_log_einstein.phys.uwm.edu.txt
Thanks for interpreting the line for me; Notepad++ even formats it properly.
This must be a CPU WU:
1419383623 ue 43305.180068 ct 58371.220000 fe 105000000000000 nm LATeah0061E_48.0_84_-2.9e-11_1 et 64876.048696
And this must be a GPU WU:
1420654934 ue 11593.143370 ct 948.907400 fe 280000000000000 nm p2030.20140113.G200.96-00.67.N.b0s0g0.00000_496_0 et 11858.336256 es 0
Only some lines have "es 0" at the end.
This same machine and GPU have cut the CPU requirement by two-thirds since 2014, and I'm not sure what changed to make the improvement. I kept the 337.88 nVidia driver because upgrading it was supposed to cause trouble for one project. Most probably the improvement is in their app: I'm seeing about 1.5% CPU usage relative to elapsed time instead of the described 20% for nVidia or 50% on ATI.
1446651843 ue 11455.278881 ct 343.249000 fe 280000000000000 nm p2030.20150905.G38.00-01.43.N.b2s0g0.00000_3280_1 et 11788.811134 es 0
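As a quick sanity check, plugging the ct values from the two p2030 lines quoted above into a couple of lines of Python bears out the roughly two-thirds cut:

```python
# CPU seconds (ct) from the 2014 and 2015 p2030 log lines quoted above.
ct_2014 = 948.907400
ct_2015 = 343.249000
print(f"CPU time reduction: {1 - ct_2015 / ct_2014:.0%}")  # ~64%, roughly two-thirds
```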
----
@AgentB
I should be able to discern most of the computation error results, as they should be shorter and not typical of this machine's work, but that would mess with my analysis. Thanks for pointing this out, and thanks to everyone for their help!
RE: This must be a CPU
Yes. The 'LAT' in the task name refers to the Large Area Telescope on the Fermi satellite, which produced the data for the gamma-ray pulsar search - a CPU-only science run.
Yes, these are GPU tasks for the BRP4G run, which uses Arecibo radio telescope data. The names of all such tasks start with 'p2030'. The other type of GPU task, using Parkes radio telescope data, has names starting with 'PM'.
There were some significant optimisations made to the apps earlier this year which have made reductions to both CPU and elapsed times. There have also been other improvements which have resulted in more of the work being done by the GPU and less by the CPU over a longer time frame.
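Those naming rules are easy to turn into code if you want to tag job-log entries automatically. The helper below is just a sketch, built only from the prefixes described in this thread:

```python
# Hypothetical helper: classify an Einstein@Home task by its name prefix,
# using only the naming conventions described in this thread.
def classify_task(name: str) -> str:
    if name.startswith("LATeah"):
        return "FGRP gamma-ray pulsar search (Fermi LAT data, CPU)"
    if name.startswith("p2030"):
        return "BRP4G (Arecibo radio data, GPU)"
    if name.startswith("PM"):
        return "BRP (Parkes radio data, GPU)"
    return "unknown"

print(classify_task("p2030.20150905.G38.00-01.43.N.b2s0g0.00000_3280_1"))
```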
Cheers,
Gary.