The error rate of the new WUs seems pretty low, at least not worse than that of the old, so I started the workunit generator again. The deadline was set to 7 days. I doubled the "fpops estimate" of the tasks, which affects the predicted runtime and also the credit.
I expect to have a validator working today or tomorrow at the latest.
There are a few optimizations planned for the app for the new workunits, but I doubt that we'll get that app version out the door before the holiday season; it's planned for January.
Also, I still couldn't get the CUDA app built for Windows; I'll continue to work on that. I couldn't find a guide or anything on how to cross-compile a CUDA app for Windows with mingw-gcc. The cross-build process of our BRP App only works up to CUDA ~6 or so, and our GW App requires at least CUDA 8. Any help with that is welcome.
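(For context, a rough sketch of the two knobs mentioned above, with made-up numbers rather than the actual O3AS values; in a BOINC work generator these values end up in the workunit's rsc_fpops_est and delay_bound fields:)

```cpp
// Illustrative sketch only: the old per-task fpops estimate below is a made-up
// number, not the real O3AS value. In a BOINC work generator these quantities
// are written into the workunit's rsc_fpops_est and delay_bound fields.
#include <cstdio>

int main() {
    const double old_fpops_est = 1.0e17;               // hypothetical old "fpops estimate"
    const double new_fpops_est = 2.0 * old_fpops_est;  // doubled for the new WUs
    const int    delay_bound   = 7 * 24 * 3600;        // 7-day deadline, in seconds

    // The client's predicted runtime scales with rsc_fpops_est, and (as noted
    // in the post above) the credit is affected by it as well.
    std::printf("rsc_fpops_est: %.3e\n", new_fpops_est);
    std::printf("delay_bound:   %d s\n", delay_bound);
    return 0;
}
```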
BM
The lower frequency run (160-250 Hz) is planned to last 5-6 months; I don't really know yet what we'll do after that. If we don't have a new idea in time, we may pick up the high frequency run again.
BM
John wrote: I haven't ...
uploaded again =)
Link wrote: ...the deadline wouldn't be an issue if the estimated runtime wasn't completely wrong; it's around 20% of the old tasks, while the actual runtime is pretty much exactly 3x that of the old tasks (on my system at least). That's a huge discrepancy.
That's the point I was trying to make; I wasn't making any remark about responsibility or the lack of it. Well, we'll see how things go with the new configuration...
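(Side note on where the client's estimate comes from: it is essentially the workunit's fpops estimate divided by the speed the client projects for the app version, so doubling the fpops estimate should roughly double the displayed estimate; a minimal sketch with made-up numbers:)

```cpp
// Minimal sketch of how a BOINC client arrives at an estimated task duration:
// roughly rsc_fpops_est divided by the FLOPS it projects for the app version.
// All numbers here are made up for illustration.
#include <cstdio>

// Estimated duration in seconds for a given fpops estimate and projected speed.
double estimated_duration(double rsc_fpops_est, double projected_flops) {
    return rsc_fpops_est / projected_flops;
}

int main() {
    const double projected_flops = 5.0e11;   // hypothetical projected GPU app speed
    const double old_fpops_est   = 1.0e17;   // hypothetical old estimate
    const double new_fpops_est   = 2.0e17;   // doubled for the new WUs

    std::printf("old estimate: %.0f s\n", estimated_duration(old_fpops_est, projected_flops));
    std::printf("new estimate: %.0f s\n", estimated_duration(new_fpops_est, projected_flops));
    return 0;
}
```

Other client-side factors (the projected app-version speed and, on some setups, a duration correction factor) feed into this as well, which is presumably where the larger mismatch between displayed estimate and actual runtime comes from.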
Soli Deo Gloria
Interesting, some hosts, like for example this one, complete the new tasks in the same time as the old tasks. Any ideas why?
The run times have really changed ... BADLY !!
Bernd, any timeline on when the validator will be running on the O3ASBu units?
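(For anyone wondering what that involves: a BOINC project validator mostly implements a small comparison interface, along the lines of the sample validators shipped with the BOINC server code. The sketch below is a trivial placeholder, not the actual O3AS validator, which presumably compares the result files numerically with some tolerance:)

```cpp
// Sketch of a BOINC validator in the style of the sample validators that ship
// with the BOINC server code (links against the validator framework; see
// sched/validate_util2.h). A real validator would read both result files and
// compare them with a tolerance instead of accepting everything.
#include "boinc_db.h"
#include "validate_util2.h"

// Called once per result: load/parse whatever is needed for comparison.
int init_result(RESULT& /*result*/, void*& data) {
    data = nullptr;   // nothing to cache in this trivial sketch
    return 0;
}

// Called for each pair of results in a quorum: decide whether they match.
int compare_results(RESULT& /*r1*/, void* /*data1*/,
                    RESULT const& /*r2*/, void* /*data2*/, bool& match) {
    match = true;     // placeholder for a real tolerance-based comparison
    return 0;
}

// Free whatever init_result allocated.
int cleanup_result(RESULT const& /*r*/, void* /*data*/) {
    return 0;
}
```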
Unless the owner of the machine comes in and discloses how many concurrent tasks are running on the GPU and CPU, we can't know how they actually compare, because the settings may not be the same between the old and new task runs.
You can't compare one host with another host. That should be obvious.
I compare the old run times from one host with the new run times from the SAME host.
Of course with the same load on GPUs + CPUs.
AND that shows me a really significant increase in run time.
sfv
But the configuration on the same host could have changed. I've changed back to 1 task per GPU for the LF WUs, because there is little to be gained from interleaving CPU and GPU work when the CPU work is a much smaller fraction of a task.
I'm also seeing 3x time on the new tasks.