I haven't looked at the code, of course, but since the only thing that matters here is GPU memory consumption, would it be possible to, say, do the calculations first on one half of an array and then on the second half, instead of uploading it all at once?
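Just to illustrate the general idea (a toy sketch in the spirit of the question, not the actual BRP/FGRP code, which I haven't seen either): process the data in fixed-size chunks so that only one chunk has to be resident on the device at a time. The array, chunk count and per-element operation below are made up; the comments mark where the GPU upload/compute/download steps would go.

    import numpy as np

    data = np.random.rand(1_000_000)    # stand-in for the full input array
    chunk_size = len(data) // 2         # e.g. process the array in two halves
    results = []

    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        # 1) upload: in a real GPU code, only this chunk would be copied to
        #    device memory, so peak device usage is one chunk, not the array
        # 2) compute: placeholder for the per-element work done on the device
        partial = np.sqrt(chunk)
        # 3) download: copy the partial result back and reuse the device buffer
        results.append(partial)

    result = np.concatenate(results)

Whether this works at all depends on whether the computation on one chunk needs data from the other chunks, which is exactly the problem pointed out in the reply below.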
It seems that the code looks up values throughout the whole table, so there is no way to divide the job into parts. That would require new code and a new scheme for splitting WUs. The only optimization I can see is zipping the BRP data files. Looking through the BRP4 files, I found that they can be deflated considerably. This would again require changes to the splitter code and to the application (if it is not already incorporated in the BOINC code).
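As a rough sketch of the application-side change this would imply (assuming the splitter simply gzips the data file; the file names are placeholders, not the real BRP/FGRP names), the app would only need to unpack the input once before the science run:

    import gzip
    import shutil

    def unpack_input(compressed_path, output_path):
        # Decompress a gzipped input file once, before the main computation
        # reads it; from then on the existing code can work unchanged.
        with gzip.open(compressed_path, "rb") as src, \
             open(output_path, "wb") as dst:
            shutil.copyfileobj(src, dst)

    # e.g. unpack_input("datafile.bin.gz", "datafile.bin")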
BTW, we are a bit off topic now. This thread is for FGRP1. Is it possible to move these messages to the appropriate thread?
What steps do you use to browse the code? I remember getting a bit lost in the repository that Einstein@Home uses.
If there's a serious chance of pursuing this, that might be a good idea.
I haven't browsed the code yet. I browsed the BRP4 files and found a lot of zeroes in them. I tried zipping them and found that it works well. A long time ago I heard that BOINC intended to incorporate compression of data before transfer, but I haven't heard any news since then.
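A quick way to check how much they shrink (just a sketch; the file name is a placeholder for one of the downloaded BRP4 data files) is to deflate the file in memory and compare the sizes:

    import zlib

    # Placeholder path -- substitute one of the BRP4 data files here.
    with open("datafile.bin", "rb") as f:
        raw = f.read()

    # Deflate at the highest compression level and report the size ratio.
    packed = zlib.compress(raw, 9)
    print("original: %d bytes, deflated: %d bytes (%.1f%% of original)"
          % (len(raw), len(packed), 100.0 * len(packed) / len(raw)))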
I've found that some BOINC projects use compression/decompression programs that, at least under Windows Vista, work properly in only one direction. Therefore, for any new application version that uses zipping or other compression, I'd recommend two new modes: one where it sends back both the compressed and the uncompressed output files, and one where it sends back only the compressed output files. At least the first batch of workunits for the new version should use the first mode, with the validator relying mainly on the uncompressed output files and only checking that the compressed ones match them. If that works well enough, switch the next batch of workunits to the second mode, since the compressed output files are then sufficient.
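The check for the first mode could be as simple as the following (a sketch only, not the actual Einstein@Home validator, and assuming gzip with placeholder file names): unpack the compressed copy and confirm it is byte-identical to the uncompressed result the validator already trusts.

    import gzip

    def compressed_copy_matches(uncompressed_path, compressed_path):
        # Read the uncompressed result file the validator already uses ...
        with open(uncompressed_path, "rb") as f:
            reference = f.read()
        # ... unpack the compressed copy returned alongside it ...
        with gzip.open(compressed_path, "rb") as f:
            unpacked = f.read()
        # ... and require the two to be byte-identical.
        return unpacked == reference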
Credit for (newly generated) FGRP1 WUs has been raised to 337. This reflects the ratio of the average runtimes of FGRP1 and S6Bucket.
BM
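(For scale, assuming the previous value was the 200 credits mentioned further down: 337 / 200 ≈ 1.69, which is in the same range as the 1.5:1 to 1.7:1 runtime ratios reported in the posts that follow.)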
Can you do a retroactive credit adjustment for older tasks?
Looking at my results, there seems to be a fairly large spread in ratios across architectures. My i7-9xx boxes are at 1.6:1 and 1.5:1, with the heavily overclocked box having the higher ratio. My old Core 1 laptop is only at 1.3:1. Does anyone have ratio numbers for other CPU architectures?
Not with reasonable effort.
BM
No problem. I've done 4 with the 200 credit tag. My next one is pending and should receive 337 credits.
Tullio
My ratio is 1.7 (Q6600 @ 3.6 GHz), so the credit adjustment is perfect.