I've been crunching for a few years now, but I've never had a problem until recently.
I bought and installed an Ampere GPU (an RTX 3070), and now my machine has a problem with the new GPU work units. I had to abort something like 160 of them. After reading in another thread that they had prevented GRP work units from being downloaded onto machines like mine, I set it to allow new tasks, and the first one it downloaded was a GRP work unit. I aborted that one as well.
I went into the settings and disabled GRP work units, so now I only have BRP and GW work units enabled. I saved the settings and updated my client, then set it to allow new tasks. But several hours later, it hasn't downloaded any new work units.
Do I just need to be more patient, or is there some penalty for aborting work units? My only other GPU project was GPUgrid which hasn't given new work units for weeks now, so my GPU is idle.
Well, there are no BRP4 GPU work units except for Intel iGPUs.
And the server has run out of GW GPU work units.
So that explains why your GPU is not getting any work.
I just noticed that I got a delivery of GW work units. If I had to guess, I'd say something around 200 or so.
Estimated time of completion is 1:13 per work unit. We'll see how they actually perform though...
You have 765 of them already - please reduce your work cache size to stop the flood. They take a lot longer than the estimate suggests.
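For anyone wondering how an optimistic estimate turns into a flood: the client requests roughly enough tasks to fill the cache, so the count scales inversely with the estimated runtime. A minimal sketch of that arithmetic (illustrative only; the real scheduler also weighs resource shares and deadlines, and the 0.65-day cache here is an assumed value, not from this thread):

```python
# Rough sketch of how a BOINC client sizes a work request (illustrative;
# the real work-fetch logic also accounts for shares and deadlines).

def tasks_fetched(cache_days: float, est_runtime_s: float, ngpus: int = 1) -> int:
    """Approximate number of tasks needed to fill the work cache."""
    seconds_wanted = cache_days * 86400 * ngpus
    return int(seconds_wanted // est_runtime_s)

# With a badly low estimate of 73 s (1:13) and an assumed 0.65-day cache,
# the client asks for hundreds of tasks...
print(tasks_fetched(0.65, 73))    # -> 769

# ...while a realistic ~20-minute runtime would warrant far fewer.
print(tasks_fetched(0.65, 1200))  # -> 46
```

The fix Gary suggests works at either end: shrink the cache, or wait for the estimate to correct itself as completed tasks report their true runtimes.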
Cheers,
Gary.
I am guessing that my RTX 2080 is the reason I am no longer getting any work units. This system also has a GTX 1660 Ti. There is a notice to look here:
JYSArea51 see scheduler log messages on https://einsteinathome.org/host/12627784/log
but there was no obvious explanation in that log as to what was causing the problem.
It's not really a problem with your card(s) (they aren't broken or anything); it's a "problem" with the new GR LATeah3001L00 dataset, which is for some unknown reason incompatible with Nvidia Volta, Turing, and Ampere GPUs. Both your 2080 and 1660 Ti are Turing. The project admins have disabled GR work fetch for systems with these GPUs until the problem is resolved on their end.
You'll have to switch that system to GW work, or crunch a different project, until they find whatever the issue is with the new GR work on these nvidia GPUs.
Thanks, did the switch and I'm getting the GW now. I temporarily switched to Milkyway while figuring out the problem. MW was taking 8 minutes per work unit per GPU, and 16 minutes if I ran 4 at a time per GPU. That is nice, but it's still significantly slower than those much older AMD boards with their strong double-precision floating point.
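The timings above (8 min for one task, 16 min for four at a time) are really a throughput comparison; a quick sketch using the numbers from this post shows why running four concurrently is the better deal:

```python
# Throughput comparison for running GPU tasks singly vs. concurrently,
# using the rough timings mentioned in the post above.

def throughput(tasks: int, minutes: float) -> float:
    """Tasks completed per hour."""
    return tasks / minutes * 60

single = throughput(1, 8)   # 1 task in 8 min   -> 7.5 tasks/hour
quad = throughput(4, 16)    # 4 tasks in 16 min -> 15.0 tasks/hour
print(single, quad)         # concurrency doubles throughput here
```

Per-task latency doubles, but the GPU finishes twice as many tasks per hour, which is what matters for total credit.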
Yeah, they're taking 8-10 minutes when I'm not using the computer and up to 21 minutes when I am using it.
I aborted a bunch and the rest I should be able to finish and upload before the deadline, assuming we don't crash the servers when uploading starts working again.
And the estimated time is now accurate, so I should be getting the amount of WUs I can actually complete before the deadlines.
The server status page shows that neither the BRP4 nor the BRP4G / BRP4G e1 work generators are running, so no WUs are being created. Is this on purpose? I didn't find a hint using the search feature on this board.
Well, they haven't generated any BRP4G tasks in years, so you will never see that generator turned back on. That sub-project is long finished.
They turn on the BRP4 generator sporadically to meet the low demand for tasks as needed. Just wait a bit and you will get tasks again.
Hi Keith,
I don't think the project is finished yet. See
http://einstein6.aei.uni-hannover.de/EinsteinAtHome/download/progress/BRP4-progress/
for the results. There are many more WUs required to finish this.