Quote:
Holy Higgs Boson..... I can't imagine ever having 167 million WUs finished on my machines.
No holiness required. Most of the credits (the WU count is around two orders of magnitude lower) originate from cluster burn-in sessions at work (MPI/FKF). Every newly purchased cluster is tested for some weeks to bring errors in memory, disk and chipset to light. This could be done with artificial benchmarks, but we prefer to donate these CPU cycles to our Max Planck colleagues at the Albert Einstein Institute in Potsdam and Hannover. Over the years this accumulates to "billions and billions of cobblestones", to quote Carl Sagan ;-).
Well, I tend to read these topics just in case, but as you can see I don't run CUDA on any of the 5 machines I am currently running.
Nor on any of the others I have had, or that are waiting for me to bring them back to life.
CAL ATI Radeon HD 2300/2400/3200 (RV610) (256MB), driver 1.4.635, on my top 3 machines right now... so just processors for me.
Maybe when I get one of my former PCs back to life I will add a CUDA card.
GC S5HF for CPU (SSE2) + 3 BRP3 cuda (1.04 + 1.05). (BRP3 for the CPU is not supported).
The missing files can be downloaded here.
<app_info>
  <app>
    <name>einstein_S5GC1HF</name>
    <user_friendly_name>Global Correlations S5 HF search #1</user_friendly_name>
  </app>
  <app>
    <name>einsteinbinary_BRP3</name>
    <user_friendly_name>Binary Radio Pulsar Search</user_friendly_name>
  </app>
  <file_info><name>einstein_S5GC1HF_3.06_windows_intelx86__S5GCESSE2.exe</name><executable/></file_info>
  <file_info><name>einstein_S5R6_3.01_graphics_windows_intelx86.exe</name><executable/></file_info>
  <file_info><name>cufft32_23.dll</name></file_info>
  <file_info><name>cudart32_23.dll</name></file_info>
  <file_info><name>einsteinbinary_BRP3_1.04_windows_intelx86__BRP3cuda32.exe</name><executable/></file_info>
  <file_info><name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</name><executable/></file_info>
  <file_info><name>einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe</name><executable/></file_info>
  <file_info><name>cudart_xp32_32_16.dll</name></file_info>
  <file_info><name>cufft_xp32_32_16.dll</name></file_info>
  <file_info><name>db.dev.win.4330b3e5</name></file_info>
  <file_info><name>dbhs.dev.win.4330b3e5</name></file_info>
  <file_info><name>db.dev.win.96b133b1</name></file_info>
  <file_info><name>dbhs.dev.win.96b133b1</name></file_info>
  <app_version>
    <app_name>einsteinbinary_BRP3</app_name>
    <version_num>104</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.200000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <plan_class>BRP3cuda32</plan_class>
    <api_version>6.13.0</api_version>
    <file_ref><file_name>einsteinbinary_BRP3_1.04_windows_intelx86__BRP3cuda32.exe</file_name><main_program/></file_ref>
    <file_ref><file_name>cudart_xp32_32_16.dll</file_name><open_name>cudart32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>cufft_xp32_32_16.dll</file_name><open_name>cufft32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe</file_name><open_name>graphics_app</open_name></file_ref>
    <file_ref><file_name>db.dev.win.4330b3e5</file_name><open_name>db.dev</open_name></file_ref>
    <file_ref><file_name>dbhs.dev.win.4330b3e5</file_name><open_name>dbhs.dev</open_name></file_ref>
    <coproc>
      <type>CUDA</type>
      <count>0.330000</count>
    </coproc>
    <gpu_ram>220200960.000000</gpu_ram>
  </app_version>
  <app_version>
    <app_name>einsteinbinary_BRP3</app_name>
    <version_num>105</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.200000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <plan_class>BRP3cuda32</plan_class>
    <api_version>6.13.0</api_version>
    <file_ref><file_name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</file_name><main_program/></file_ref>
    <file_ref><file_name>cudart_xp32_32_16.dll</file_name><open_name>cudart32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>cufft_xp32_32_16.dll</file_name><open_name>cufft32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe</file_name><open_name>graphics_app</open_name></file_ref>
    <file_ref><file_name>db.dev.win.96b133b1</file_name><open_name>db.dev</open_name></file_ref>
    <file_ref><file_name>dbhs.dev.win.96b133b1</file_name><open_name>dbhs.dev</open_name></file_ref>
    <coproc>
      <type>CUDA</type>
      <count>0.330000</count>
    </coproc>
    <gpu_ram>220200960.000000</gpu_ram>
  </app_version>
  <app_version>
    <app_name>einstein_S5GC1HF</app_name>
    <version_num>306</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>S5GCESSE2</plan_class>
    <api_version>6.13.0</api_version>
    <file_ref><file_name>einstein_S5GC1HF_3.06_windows_intelx86__S5GCESSE2.exe</file_name><main_program/></file_ref>
    <file_ref><file_name>einstein_S5R6_3.01_graphics_windows_intelx86.exe</file_name><open_name>graphics_app</open_name></file_ref>
  </app_version>
</app_info>
RE: GC S5HF for CPU (SSE2)
Thank you very much!!!
I'm running 3 tasks on my GTX 260. Each task takes around 9700 s; if I run only one task it takes around 3900 s. The GPU load goes up to 85%.
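For what it's worth, those timings still favour running three at once; a quick back-of-the-envelope throughput check (plain Python, using only the numbers quoted above):

```python
# Throughput from the GTX 260 timings quoted above:
# 3 concurrent tasks finish in ~9700 s each, a single task in ~3900 s.
three_up = 3 / 9700   # tasks completed per second, three at a time
one_up = 1 / 3900     # tasks completed per second, one at a time
print(f"speedup from running 3 tasks: {three_up / one_up:.2f}x")
```

So despite each task taking 2.5 times longer, three-at-a-time delivers roughly 1.21x the overall throughput.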
RE: GC S5HF for CPU (SSE2)
I can't test the following comments as I don't possess any CUDA-capable cards, so I'll simply warn that whilst I believe the following points are correct, I can't give any guarantees. YMMV, so please think carefully before adopting any of the suggestions.
Firstly, if your intent is to maximise production and you never intend to display the graphics, you could simply leave out the specification of the graphics app, i.e. drop every file_ref that names a graphics executable (einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe opened as graphics_app, and likewise for the S5GC1HF search) together with the matching file_info entries.
Secondly, there is a better way to handle the transition from V1.04 to V1.05. What you have written ensures that any tasks in your cache branded as 1.04 get done with the V1.04 app. This would be important if (for example) there had been any change in the checkpoint and/or output formats between the two versions. I believe this isn't the case and that the difference is to do with thread priorities only. In that situation, because of the time saving advantages, it would be preferable to have 1.04 branded tasks crunched with the 1.05 app - even partly crunched ones. You can make a modification to app_info.xml to achieve this.
Instead of specifying the two app versions separately, as you have done (one clause for 1.04 and one for 1.05), do it like this (just the essential bits to do with the version number shown):
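The snippets originally pasted here did not survive, so the following is a sketch of the arrangement being described, based on the full app_info.xml earlier in the thread. Treat it as an illustration rather than a tested file: both app_version clauses name the 1.05 executable as the main program, and only the version_num differs.

```xml
<!-- Sketch only: both version clauses run the 1.05 binary. All other
     fields (plan_class, dll and data file_refs, coproc, gpu_ram, etc.)
     stay exactly as in the full app_info.xml shown earlier. -->
<app_version>
  <app_name>einsteinbinary_BRP3</app_name>
  <version_num>105</version_num>
  <file_ref>
    <file_name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</file_name>
    <main_program/>
  </file_ref>
</app_version>
<app_version>
  <app_name>einsteinbinary_BRP3</app_name>
  <version_num>104</version_num>
  <file_ref>
    <!-- tasks branded 1.04 are crunched with the 1.05 app -->
    <file_name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</file_name>
    <main_program/>
  </file_ref>
</app_version>
```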
Note in particular that the second clause specifies that any task branded as 1.04 can be done with the V1.05 app. This means you will get the benefit of the new app for all 1.04 tasks in your cache. The messages tab will say that V1.04 is being used but if you check, you will actually find that it's indeed using the new app.
If you want to ensure that newly downloaded tasks are branded 1.05 rather than 1.04, make sure the clauses are in the order shown above, i.e. the 105 clause listed before the 104 clause.
As I stated at the outset, I believe the above is correct but I don't have the ability to test it out. Anyone implementing these suggestions should test for themselves. If you find anything that seems odd, please let me know. I did make a couple of cut and paste mistakes while composing but I think I've sorted that out (hopefully).
I've tried to check it out by visual inspection and by inference with the behaviour of a similar file used for CPU tasks under similar conditions. The version number change strategy works perfectly for CPU tasks and I believe it should also be much the same for GPUs.
Cheers,
Gary.
RE: This could be done with
Superb! To paraphrase Red Riding Hood 'what a big RAC you have'. So let's hope you at least get Xmas cards from MPG/AEI ..... :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: No holiness required.
Yes, I meant to say "credits" and not actually "WUs".
But I do know that most of the members with more credits than I have use clusters at work or at schools.
I have been doing this since SETI Classic, and I'm just using machines that I bought myself and run at home; I have spent more thousands of dollars than I even want to add up.
Since I live in the same state as Bill Gates, maybe he will set me up with a cluster over in Redmond (at Microsoft).
But the info here is always good to see since, as you know, there are quite a few doing this.
RE: RE: GC S5HF for CPU
We could also make a copy of the 1.05 application and rename it as the 1.04 one ...
My previous app_info (GC S5HF for CPU (SSE2) + 3 BRP3 cuda (1.04 + 1.05)) is working, but the scheduler still hands out WUs for the 1.04 application.
Below is an app_info that uses only 1.05. To switch over:
1. Disable fetching of new WUs.
2. Finish all BRP3 tasks.
3. Update the project.
4. Replace app_info.xml.
5. Allow fetching of new WUs again.
If you replace app_info before completing all 1.04 BRP3 WUs, all of those WUs will be aborted.
<app_info>
  <app>
    <name>einstein_S5GC1HF</name>
    <user_friendly_name>Global Correlations S5 HF search #1</user_friendly_name>
  </app>
  <app>
    <name>einsteinbinary_BRP3</name>
    <user_friendly_name>Binary Radio Pulsar Search</user_friendly_name>
  </app>
  <file_info><name>einstein_S5GC1HF_3.06_windows_intelx86__S5GCESSE2.exe</name><executable/></file_info>
  <file_info><name>einstein_S5R6_3.01_graphics_windows_intelx86.exe</name><executable/></file_info>
  <file_info><name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</name><executable/></file_info>
  <file_info><name>einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe</name><executable/></file_info>
  <file_info><name>cudart_xp32_32_16.dll</name></file_info>
  <file_info><name>cufft_xp32_32_16.dll</name></file_info>
  <file_info><name>db.dev.win.96b133b1</name></file_info>
  <file_info><name>dbhs.dev.win.96b133b1</name></file_info>
  <app_version>
    <app_name>einsteinbinary_BRP3</app_name>
    <version_num>105</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.200000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <plan_class>BRP3cuda32</plan_class>
    <api_version>6.13.0</api_version>
    <file_ref><file_name>einsteinbinary_BRP3_1.05_windows_intelx86__BRP3cuda32.exe</file_name><main_program/></file_ref>
    <file_ref><file_name>cudart_xp32_32_16.dll</file_name><open_name>cudart32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>cufft_xp32_32_16.dll</file_name><open_name>cufft32_32_16.dll</open_name></file_ref>
    <file_ref><file_name>einsteinbinary_BRP3_1.00_graphics_windows_intelx86.exe</file_name><open_name>graphics_app</open_name></file_ref>
    <file_ref><file_name>db.dev.win.96b133b1</file_name><open_name>db.dev</open_name></file_ref>
    <file_ref><file_name>dbhs.dev.win.96b133b1</file_name><open_name>dbhs.dev</open_name></file_ref>
    <coproc>
      <type>CUDA</type>
      <count>1.0000</count>
    </coproc>
    <gpu_ram>220200960.000000</gpu_ram>
  </app_version>
  <app_version>
    <app_name>einstein_S5GC1HF</app_name>
    <version_num>306</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>S5GCESSE2</plan_class>
    <api_version>6.13.0</api_version>
    <file_ref><file_name>einstein_S5GC1HF_3.06_windows_intelx86__S5GCESSE2.exe</file_name><main_program/></file_ref>
    <file_ref><file_name>einstein_S5R6_3.01_graphics_windows_intelx86.exe</file_name><open_name>graphics_app</open_name></file_ref>
  </app_version>
</app_info>
Your file cannot receive ABP2 (Arecibo Binary Pulsar Search (STSP)) work units.