Depends on how that feature is implemented. If it's implemented in a way that requires no more than having each workunit download a file specific to your settings, app_info.xml shouldn't be needed. If it's implemented in a way that requires the OpenCL feature that allows GPUs to run more than one program at a time, it may well take until sometime after a new version of BOINC provides more support for OpenCL GPU workunits.
I think you missed Richard's point: he already *has* an app_info.xml file installed, in order to run several BRP3/4 WUs in parallel. If you are using an app_info.xml file, only the apps listed in that file will get work; all other apps that are supported by the project but missing from the app_info.xml file are ignored. So the question was how to obtain the data that needs to be included in the app_info.xml file to support the Fermi search.
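For readers who haven't built one before, the missing piece would be a section along these lines. This is only a minimal sketch - the app name hsgamma_FGRP1, the executable file name and the version number are my guesses for illustration, not anything confirmed by the project:
[pre]<app_info>
  <app>
    <name>hsgamma_FGRP1</name>                         <!-- assumed app name -->
  </app>
  <file_info>
    <name>hsgamma_FGRP1_0.22_i686-pc-linux-gnu</name>  <!-- assumed executable name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>hsgamma_FGRP1</app_name>
    <version_num>22</version_num>                      <!-- would correspond to app v0.22 -->
    <file_ref>
      <file_name>hsgamma_FGRP1_0.22_i686-pc-linux-gnu</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>[/pre]
And of course every app you still want work for (BRP3/4, S6 GC) has to stay listed in the same file, or it will stop getting tasks.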
HBE
Actually, I think that was Sid, in message 112884 - I'm running Einstein au naturel, as nature intended. But Bikeman's point is well made - it's only if you have an app_info.xml file already that you would be excluded from the LAT tests without manual updating.
And if you did have Fermi-LAT in an app_info, you might have been caught out by the silent upgrade to v0.22 last Thursday: it's good to see some new test WUs.
One question, perhaps for Bernd: I see 240202413 and 240204286, created less than ten minutes apart, in the queue for the same computer - yet the second is estimated to run for five times as long as the first (2hr 08 vs. 10hr 42). Are we trying out some test scheme for processing variable length WUs, too?
The tests with the last app version 0.22 were pretty successful, and we intended to start generating WUs continuously today. However, there appears to be a bug in the validator (and a less important one in the workunit generator), which led us to disable both for now until it has been solved.
It could actually be that the workunits end up in different 'sizes', which in this case would mean different numbers of sky points (visible in stderr). Flops estimation and credit should be adjusted automatically.
BM
A new GPU application? Great!
I think you're missing something. Have you considered the possibility of having each workunit download a user-specific file, not app_info.xml, that contains all the information needed to tell the application programs they can run more than one workunit (of the same type) at once on GPUs?
I suppose you might find it easier just to offer the missing app_info.xml section, though.
How can I opt IN?
When I looked at my preferences, there was no checkbox for this search, and it ignored all my attempts to add one.
For my laptop only, though; not my desktop at present.
Those are the applications I can choose from:
[pre]Run only the selected applications
    Binary Radio Pulsar Search
    Binary Radio Pulsar Search
    Gravitational Wave S6 GC search
    Gamma-ray pulsar search #1[/pre]
Regards,
Gundolf
Computers aren't everything in life. (Just a little joke.)
This is something you need to tell the BOINC Core Client, not the (Einstein@Home) application. The Client schedules the GPU devices, not the App.
BM
Well, actually where it belongs is in the <app_version> section of client_state.xml (specifically, the <count> tag within <coproc>). You, the user, can put it there yourself via an app_info.xml file, or the project can put it there via the application details on the server.
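As a rough sketch of that fragment for running two BRP4 tasks per GPU - the app name and version number here are made up for illustration, so check your own client_state.xml for the real ones:
[pre]<app_version>
  <app_name>einsteinbinary_BRP4</app_name>  <!-- assumed app name -->
  <version_num>128</version_num>            <!-- illustrative only -->
  ...
  <coproc>
    <type>CUDA</type>
    <count>0.5</count>  <!-- each task reserves half a GPU, so two run at once -->
  </coproc>
</app_version>[/pre]
A <count> of 0.5 tells the client that each task needs half a GPU, so it schedules two per device; 0.33 would give you three, and so on.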
The trouble with asking the project to do it is that the server setting applies to everyone, and not every user wants to go down this route - some cards (like my 9800GTs, for example) perform well on single BRP4s, but lack the hardware context switching facilities of the later Fermi-generation GPUs which is what makes running multiple instances viable in the first place.
Anyway, all of this is out of place amongst the technical news of the gamma-ray search - this is a CPU application only, at least for the time being.
@ Bernd - now that we have three different classes of application running on the project, and Fermi-LAT tasks are still much rarer than the other two, it's getting quite difficult to pick out and monitor results for a particular application. I know you have a long-term project (perhaps stalled while the new apps roll out) to upgrade the entire website, but in the meantime would there be any possibility of importing the result-display-filtering code from newer copies of results.php? There was a further useful modification a couple of months ago, in [trac]changeset:23624[/trac], which added task counts to the filter links in the page header - you can see the combined effect at SETI. I'd particularly like the ability to filter the user task list down to Fermi-LAT only, for quick monitoring.
Hello,
I still do not have units for this application?!
Hi!
Have some patience :-). I have not received any of them myself so far. The scheduler will eventually give some of those to us as well.
However, the last time your PC contacted E@H, it didn't even ask for more CPU work. Maybe it's busy with other projects. The GW search units you got earlier were all aborted by you, which will eventually limit the amount of work you are able to download.
HBE