Support for (integrated) Intel GPUs (Ivy Bridge and later)

Rick A. Sponholz
Joined: 27 Aug 05
Posts: 7
Credit: 53164814
RAC: 0

RE: Is the nVidia starved

Quote:

Is the nVidia starved for CPU support and hence downclocks itself into a power-saving mode? How is the GPU load?

MrS

Thanks for your reply MrS,
I don't think the GPU is starved for CPU support, since there are 8 CPU cores and only 5 GPU cores. GPU load varies, but is usually about 90%, plus or minus 10%. Regards, Rick

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 578166875
RAC: 203462

Yes, it sounds like this is

Yes, it sounds like this is not the problem. Still, I'd take a look at GPU-Z: the newer versions report the reason for the current power state, provided you've got a reasonably recent driver. The reason could be "power", "utilization", probably temperature, or others. This would be most interesting for the misbehaving GPU.

MrS

Scanning for our furry friends since Jan 2002

Rick A. Sponholz
Joined: 27 Aug 05
Posts: 7
Credit: 53164814
RAC: 0

I am happy to report

I am happy to report successful use of two iGPUs (both on 4770s). For me, it required the BIOS setting for the iGPU to be set to "Always On", my monitor attached to my GTX 690s, AND a VGA dummy plug attached to the iGPU. They have been running for 5 hours now without a problem. So far only SETI Beta WUs have run, but I hope to get Einstein WUs soon. I'll report back when some have run successfully. Getting my 3770 iGPU working using the same techniques was a dismal failure. I'll keep on trying, though. Thanks to all for the help. Regards, Rick

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2957429672
RAC: 714764

Also been experimenting with

I've also been experimenting with the HD 4600 in host 8864187 (i5-4570), both here and at SETI Beta. I attached to both projects and received work with no trouble at all - even the factory-installed drivers (Dell Optiplex 9020) were immediately ready to run. Maybe using the iGPU as the primary display - indeed the only display adapter - helps.

Two issues:

1) I couldn't get either project to supply work under anonymous platform. SETI declined politely, saying no work was available; Einstein recorded an internal server error. Since the BOINC server scheduler code update for intel_gpu was contributed by Oliver Bock from this project, could we check for bugs under anonymous platform sometime, please? I can forward the app_info.xml file I was trying to use, and I'm available to test from this end.

2) I'm intrigued by the difference in CPU usage between the two applications.

All tasks for computer 8864187 Einstein - CPU used for 3% of runtime.
All tasks for computer 67008 SETI Beta - CPU used for 99% of runtime.

(under some circumstances, the SETI app does need to do some pseudo-science - shaped random noise generation - on the CPU, but that doesn't apply in most of these cases)

Even though the Einstein app uses very little CPU time, I did find it needed a free CPU core: these tasks were run (on both projects) with BOINC set to use 75% of the processors, so three instances of SIMAP were running alongside, and Einstein tasks were taking 11 minutes. With four SIMAP instances running (100% CPU usage), the Einstein app had barely passed the halfway point after 45 minutes - a huge difference.

I also noted a significant difference in power consumption. With Einstein running on the iGPU, total system draw is 88 W; with SETI Beta, power draw rises by 10%, to ~98 W. I'm in discussion with the SETI app developer (Raistmer) about why this might be the case - he speculates that it might be a difference in kernel lengths. But it seems unlikely that OpenCL spin-wait loops would use so much power.
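For reference, a host-side spin-wait is just a tight status-polling loop. The sketch below is mine, not code from either app, and assumes an OpenCL event returned by clEnqueueNDRangeKernel(); it contrasts busy-polling the event with sleeping between polls, which is the usual way to trade a little latency for a mostly idle CPU core:

// Sketch only - not taken from either project's source. "ev" is an event
// returned by clEnqueueNDRangeKernel() for the kernel we're waiting on.
#include <CL/cl.h>
#include <chrono>
#include <thread>

// Busy-wait: keeps one CPU core near 100% until the kernel finishes.
void wait_spinning(cl_event ev) {
    cl_int status = CL_QUEUED;
    while (status > CL_COMPLETE) {   // CL_COMPLETE == 0, errors are negative
        clGetEventInfo(
            ev, CL_EVENT_COMMAND_EXECUTION_STATUS,
            sizeof(status), &status, NULL
        );
    }
}

// Polite wait: same check, but yield between polls so the core can idle,
// at the cost of up to ~1 ms extra latency per kernel.
void wait_sleeping(cl_event ev) {
    cl_int status = CL_QUEUED;
    while (status > CL_COMPLETE) {
        clGetEventInfo(
            ev, CL_EVENT_COMMAND_EXECUTION_STATUS,
            sizeof(status), &status, NULL
        );
        if (status > CL_COMPLETE) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
}

Whether the runtime behaves more like the first or the second function is down to the driver and the app's choice of wait call.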

Observations?

Claggy
Joined: 29 Dec 06
Posts: 560
Credit: 2699403
RAC: 0

RE: Two issues: There's

Quote:
Two issues:


There's a third issue too:

the 'Use INTEL GPU' preference ('Enforced by version 7.0.27+') doesn't work at SETI Beta or Einstein. With all venues set to 'no', the client still makes work requests for the intel_gpu, although it correctly blocks CPU, NVIDIA and AMD/ATI work requests:

25/09/2013 14:06:00 | Einstein@Home | [sched_op] Starting scheduler request
25/09/2013 14:06:00 | Einstein@Home | Sending scheduler request: To fetch work.
25/09/2013 14:06:00 | Einstein@Home | Requesting new tasks for intel_gpu
25/09/2013 14:06:00 | Einstein@Home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
25/09/2013 14:06:00 | Einstein@Home | [sched_op] NVIDIA work request: 0.00 seconds; 0.00 devices
25/09/2013 14:06:00 | Einstein@Home | [sched_op] intel_gpu work request: 259200.00 seconds; 1.00 devices
25/09/2013 14:06:01 | Einstein@Home | Starting task h1_0632.90_S6Directed__S6CasAf40a_633.05Hz_540_1 using einstein_S6CasA version 105 (SSE2) in slot 6
25/09/2013 14:06:03 | Einstein@Home | work fetch suspended by user
25/09/2013 14:06:04 | Einstein@Home | Scheduler request completed: got 0 new tasks
25/09/2013 14:06:04 | Einstein@Home | [sched_op] Server version 611
25/09/2013 14:06:04 | Einstein@Home | No work sent
25/09/2013 14:06:04 | Einstein@Home | see scheduler log messages on http://einstein.phys.uwm.edu//host_sched_logs/8941/8941572
25/09/2013 14:06:04 | Einstein@Home | Jobs for Intel GPU are available, but your preferences are set to not accept them
25/09/2013 14:06:04 | Einstein@Home | Project requested delay of 60 seconds
25/09/2013 14:06:04 | Einstein@Home | [sched_op] Deferring communication for 00:01:00
25/09/2013 14:06:04 | Einstein@Home | [sched_op] Reason: requested by project

and at SETI Beta it still asks for intel_gpu work when the preference is set to 'no':

25/09/2013 14:24:56 | SETI@home Beta Test | [sched_op] Starting scheduler request
25/09/2013 14:24:56 | SETI@home Beta Test | Sending scheduler request: To fetch work.
25/09/2013 14:24:56 | SETI@home Beta Test | Requesting new tasks for intel_gpu
25/09/2013 14:24:56 | SETI@home Beta Test | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
25/09/2013 14:24:56 | SETI@home Beta Test | [sched_op] NVIDIA work request: 0.00 seconds; 0.00 devices
25/09/2013 14:24:56 | SETI@home Beta Test | [sched_op] intel_gpu work request: 259200.00 seconds; 1.00 devices
25/09/2013 14:24:56 | SETI@home Beta Test | update requested by user
25/09/2013 14:25:00 | SETI@home Beta Test | Scheduler request completed: got 0 new tasks
25/09/2013 14:25:00 | SETI@home Beta Test | [sched_op] Server version 702
25/09/2013 14:25:00 | SETI@home Beta Test | No tasks sent
25/09/2013 14:25:00 | SETI@home Beta Test | No tasks are available for AstroPulse v6
25/09/2013 14:25:00 | SETI@home Beta Test | Tasks for CPU are available, but your preferences are set to not accept them
25/09/2013 14:25:00 | SETI@home Beta Test | Tasks for NVIDIA GPU are available, but your preferences are set to not accept them
25/09/2013 14:25:00 | SETI@home Beta Test | Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
25/09/2013 14:25:00 | SETI@home Beta Test | Tasks for Intel GPU are available, but your preferences are set to not accept them
25/09/2013 14:25:00 | SETI@home Beta Test | Project has no tasks available
25/09/2013 14:25:00 | SETI@home Beta Test | Project requested delay of 7 seconds
25/09/2013 14:25:00 | SETI@home Beta Test | [sched_op] Deferring communication for 00:00:07
25/09/2013 14:25:00 | SETI@home Beta Test | [sched_op] Reason: requested by project

and because anonymous platform isn't working, the scheduler won't resend the host's one lost task:

25/09/2013 14:10:37 | SETI@home Beta Test | [sched_op] Starting scheduler request
25/09/2013 14:10:37 | SETI@home Beta Test | Sending scheduler request: To fetch work.
25/09/2013 14:10:37 | SETI@home Beta Test | Requesting new tasks for intel_gpu
25/09/2013 14:10:37 | SETI@home Beta Test | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
25/09/2013 14:10:37 | SETI@home Beta Test | [sched_op] NVIDIA work request: 0.00 seconds; 0.00 devices
25/09/2013 14:10:37 | SETI@home Beta Test | [sched_op] intel_gpu work request: 259200.00 seconds; 1.00 devices
25/09/2013 14:10:40 | SETI@home Beta Test | Scheduler request completed: got 0 new tasks
25/09/2013 14:10:40 | SETI@home Beta Test | [sched_op] Server version 702
25/09/2013 14:10:40 | SETI@home Beta Test | No tasks sent
25/09/2013 14:10:40 | SETI@home Beta Test | No tasks are available for AstroPulse v6
25/09/2013 14:10:40 | SETI@home Beta Test | Tasks for CPU are available, but your preferences are set to not accept them
25/09/2013 14:10:40 | SETI@home Beta Test | Tasks for NVIDIA GPU are available, but your preferences are set to not accept them
25/09/2013 14:10:40 | SETI@home Beta Test | Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
25/09/2013 14:10:40 | SETI@home Beta Test | Project has no tasks available
25/09/2013 14:10:40 | SETI@home Beta Test | Project requested delay of 7 seconds
25/09/2013 14:10:40 | SETI@home Beta Test | [sched_op] Deferring communication for 00:00:07
25/09/2013 14:10:40 | SETI@home Beta Test | [sched_op] Reason: requested by project

Claggy

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2957429672
RAC: 714764

RE: RE: Two

Quote:
Quote:
Two issues:

There's a third issue too:

the 'Use INTEL GPU' preference ('Enforced by version 7.0.27+') doesn't work at SETI Beta or Einstein. With all venues set to 'no', the client still makes work requests for the intel_gpu, although it correctly blocks CPU, NVIDIA and AMD/ATI work requests:


I traced this one back to a problem in the client code: the 'inhibit requests' flag is being properly issued by the server (as a 1), but is not being acted on by the client.

Claggy
Joined: 29 Dec 06
Posts: 560
Credit: 2699403
RAC: 0

RE: RE: RE: Two

Quote:
Quote:
Quote:
Two issues:

There's a third issue too:

the 'Use INTEL GPU' preference ('Enforced by version 7.0.27+') doesn't work at SETI Beta or Einstein. With all venues set to 'no', the client still makes work requests for the intel_gpu, although it correctly blocks CPU, NVIDIA and AMD/ATI work requests:


I traced this one back to a problem in the client code: the 'inhibit requests' flag is being properly issued by the server (as a 1), but is not being acted on by the client.


Looking through cs_account.cpp, there's no mention of parsing for no_intel_gpu preferences:

Quote:
// This file is part of BOINC.
// http://boinc.berkeley.edu
// Copyright (C) 2008 University of California
//
// BOINC is free software; you can redistribute it and/or modify it
// under the terms of the GNU Lesser General Public License
// as published by the Free Software Foundation,
// either version 3 of the License, or (at your option) any later version.
//
// BOINC is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
// See the GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with BOINC.  If not, see <http://www.gnu.org/licenses/>.

#include "cpp.h"

#ifdef _WIN32
#include "boinc_win.h"
#else
#include "config.h"
// (the system include names were stripped by the forum software)
#if HAVE_SYS_STAT_H
#include <sys/stat.h>
#endif
#endif

#include "error_numbers.h"
#include "filesys.h"
#include "parse.h"
#include "str_replace.h"
#include "str_util.h"
#include "url.h"

#include "client_msgs.h"
#include "client_state.h"
#include "file_names.h"
#include "log_flags.h"
#include "project.h"

using std::string;
using std::sort;

// write account_*.xml file.
// NOTE: this is called only when
// 1) attach to a project, and
// 2) after a scheduler RPC
// So in either case PROJECT.project_prefs
// (which normally is undefined) is valid
//
int PROJECT::write_account_file() {
    char path[MAXPATHLEN];
    FILE* f;
    int retval;

    get_account_filename(master_url, path);
    f = boinc_fopen(TEMP_ACCT_FILE_NAME, "w");
    if (!f) return ERR_FOPEN;

    // (the XML format strings were stripped by the forum software: the
    //  function writes master_url, authenticator, project_name,
    //  project_prefs and gui_urls to the account file)

    fclose(f);
    retval = boinc_rename(TEMP_ACCT_FILE_NAME, path);
    if (retval) return ERR_RENAME;
    return 0;
}

static void handle_no_rsc_pref(PROJECT* p, const char* name) {
    int i = rsc_index(name);
    if (i < 0) return;
    p->no_rsc_pref[i] = true;
}

// parse an account_*.xml file, ignoring <venue> elements
// (since we don't know the host venue yet)
//
int PROJECT::parse_account(FILE* in) {
    char buf2[256];
    int retval;
    bool in_project_prefs = false, btemp;

    // (the start of the parse loop and the <project_preferences>/<venue>
    //  handling were mangled by the forum software)

        } else if (xp.parse_str("master_url", master_url, sizeof(master_url))) {
            canonicalize_master_url(master_url, sizeof(master_url));
            continue;
        } else if (xp.parse_str("authenticator", authenticator, sizeof(authenticator))) continue;
        else if (xp.parse_double("resource_share", resource_share)) continue;
        else if (xp.parse_bool("no_cpu", btemp)) {
            if (btemp) handle_no_rsc_pref(this, "CPU");
            continue;
        }
        // deprecated
        else if (xp.parse_bool("no_cuda", btemp)) {
            if (btemp) handle_no_rsc_pref(this, GPU_TYPE_NVIDIA);
            continue;
        }
        else if (xp.parse_bool("no_ati", btemp)) {
            if (btemp) handle_no_rsc_pref(this, GPU_TYPE_ATI);
            continue;
        }
        else if (xp.parse_str("no_rsc", buf2, sizeof(buf2))) {
            handle_no_rsc_pref(this, buf2);
            continue;
        }
        else if (xp.parse_str("project_name", project_name, sizeof(project_name))) continue;
        else if (xp.match_tag("gui_urls")) {
            string foo;
            retval = copy_element_contents(xp.f->f, "</gui_urls>", foo);
            if (retval) return retval;
            gui_urls = "<gui_urls>\n" + foo + "</gui_urls>\n";
            continue;
        } else if (xp.match_tag("project_specific")) {
            retval = copy_element_contents(
                xp.f->f,
                "</project_specific>",
                project_specific_prefs
            );
            if (retval) return retval;
            continue;
        } else {
            // don't show unparsed XML errors if we're in project prefs
            //
            if (!in_project_prefs && log_flags.unparsed_xml) {
                msg_printf(0, MSG_INFO,
                    "[unparsed_xml] PROJECT::parse_account(): unrecognized: %s\n",
                    xp.parsed_tag
                );
            }
        }
    }
    return ERR_XML_PARSE;
}

// scan an account_*.xml file, looking for a <venue> element
// that matches this host's venue,
// and parsing that for resource share and prefs.
// Call this only after client_state.xml has been read
// (so that we know the host venue)
//
int PROJECT::parse_account_file_venue() {
    char attr_buf[256], venue[256], path[MAXPATHLEN], buf2[256];
    int retval;
    bool in_right_venue = false, btemp;

    get_account_filename(master_url, path);
    FILE* in = boinc_fopen(path, "r");
    if (!in) return ERR_FOPEN;

    MIOFILE mf;
    XML_PARSER xp(&mf);
    mf.init_file(in);
    while (!xp.get_tag(attr_buf, sizeof(attr_buf))) {
        if (xp.match_tag("/account")) {
            fclose(in);
            return 0;
        } else if (xp.match_tag("venue")) {
            parse_attr(attr_buf, "name", venue, sizeof(venue));
            if (!strcmp(venue, host_venue)) {
                using_venue_specific_prefs = true;
                in_right_venue = true;

                // reset these
                //
                // (the reset loop and the skipping of non-matching venues
                //  were mangled by the forum software)
            }
            continue;
        }
        if (!in_right_venue) continue;
        if (xp.match_tag("/venue")) {
            in_right_venue = false;
            continue;
        } else if (xp.match_tag("project_specific")) {
            retval = copy_element_contents(
                xp.f->f,
                "</project_specific>",
                project_specific_prefs
            );
            if (retval) return retval;
            continue;
        } else if (xp.parse_double("resource_share", resource_share)) {
            continue;
        }
        else if (xp.parse_bool("no_cpu", btemp)) {
            if (btemp) handle_no_rsc_pref(this, "CPU");
            continue;
        }
        // deprecated syntax
        else if (xp.parse_bool("no_cuda", btemp)) {
            if (btemp) handle_no_rsc_pref(this, GPU_TYPE_NVIDIA);
            continue;
        }
        else if (xp.parse_bool("no_ati", btemp)) {
            if (btemp) handle_no_rsc_pref(this, GPU_TYPE_ATI);
            continue;
        }
        else if (xp.parse_str("no_rsc", buf2, sizeof(buf2))) {
            handle_no_rsc_pref(this, buf2);
            continue;
        }
        else {
            // skip project preferences the client doesn't know about
            //
            xp.skip_unexpected();
        }
    }
    fclose(in);
    return ERR_XML_PARSE;
}

// (the rest of the file -- PROJECT::parse_account_file(),
//  CLIENT_STATE::parse_account_files_venue(), CLIENT_STATE::parse_account_files(),
//  the DAILY_STATS parsing, PROJECT::write_statistics_file(),
//  CLIENT_STATE::add_project() and CLIENT_STATE::parse_preferences_for_user_files()
//  -- was too badly mangled by the forum software to be worth reproducing here)
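To show the mechanism the file does have: any <no_rsc>NAME</no_rsc> element goes through handle_no_rsc_pref(), so an Intel GPU could only be excluded via that generic path - there's no dedicated no_intel_gpu branch the way there is for no_cpu/no_cuda/no_ati. A small standalone sketch (mine, not BOINC code, with made-up resource names) of that lookup:

// Standalone sketch - not BOINC source. It imitates handle_no_rsc_pref():
// map a resource name from a preference element to an index in a
// per-resource "don't fetch" array. The resource names here are assumptions.
#include <cstring>
#include <cstdio>

static const char* RSC_NAMES[] = { "CPU", "NVIDIA", "ATI", "intel_gpu" };
static const int   NUM_RSC     = 4;
static bool        no_rsc_pref[NUM_RSC] = { false, false, false, false };

// Equivalent of rsc_index(): -1 if the name is unknown.
static int rsc_index(const char* name) {
    for (int i = 0; i < NUM_RSC; i++) {
        if (!strcmp(name, RSC_NAMES[i])) return i;
    }
    return -1;
}

// Equivalent of handle_no_rsc_pref(): silently ignore unknown names.
static void handle_no_rsc_pref(const char* name) {
    int i = rsc_index(name);
    if (i < 0) return;
    no_rsc_pref[i] = true;
}

int main() {
    // A <no_rsc>intel_gpu</no_rsc> preference would arrive here...
    handle_no_rsc_pref("intel_gpu");
    // ...whereas a hypothetical <no_intel_gpu>1</no_intel_gpu> tag has no
    // parser branch in the pasted file, so it would fall through unparsed.
    for (int i = 0; i < NUM_RSC; i++) {
        printf("%-10s %s\n", RSC_NAMES[i], no_rsc_pref[i] ? "excluded" : "allowed");
    }
    return 0;
}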

Claggy

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 578166875
RAC: 203462

RE: I also noted a

Quote:

I also noted a significant difference in power consumption. With Einstein running on the iGPU, total system draw is 88 W; with SETI Beta, power draw rises by 10%, to ~98 W. I'm in discussion with the SETI app developer (Raistmer) about why this might be the case - he speculates that it might be a difference in kernel lengths. But it seems unlikely that OpenCL spin-wait loops would use so much power.

Observations?


That difference in power consumption could well be caused by the CPU thread being used. We had a similar, or at least "probably related", problem at Collatz. The app was tuned for good GPU utilization but consumed an entire thread, despite being programmed not to do so. I experimented with various parameters and could reduce the CPU time to next to nothing by using shorter kernels (I think - I can't remember the exact term). This reduced GPU utilization significantly, but that could be made up for by running 2 or 3 WUs in parallel. I think one thread still had to be left free, but at least it wasn't working hard any more.
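To put the "shorter kernels" idea into code terms: one way to do it - a sketch of mine, not the actual Collatz app; the queue, kernel and sizes are placeholders - is to split a single large launch into slices using the global work offset, so that no single enqueue, and no single wait, runs for very long:

// Sketch only, assuming a 1-D kernel and an in-order command queue.
#include <CL/cl.h>

cl_int run_in_chunks(
    cl_command_queue queue, cl_kernel kernel,
    size_t total_items, size_t chunk_items
) {
    for (size_t offset = 0; offset < total_items; offset += chunk_items) {
        size_t count = total_items - offset;
        if (count > chunk_items) count = chunk_items;

        // Launch only a slice of the work; the kernel still sees the true
        // global ID because of the global_work_offset parameter.
        cl_int err = clEnqueueNDRangeKernel(
            queue, kernel,
            1,          // work_dim
            &offset,    // global_work_offset
            &count,     // global_work_size
            NULL,       // let the runtime pick the local size
            0, NULL, NULL
        );
        if (err != CL_SUCCESS) return err;

        // Finish each slice before queuing the next, so no single wait
        // is longer than one short kernel.
        err = clFinish(queue);
        if (err != CL_SUCCESS) return err;
    }
    return CL_SUCCESS;
}

The trade-off is exactly the one described above: every clFinish() is a synchronisation point, so the GPU idles briefly between slices unless a second or third WU is running to fill the gaps.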

Quote:
even the factory-installed drivers (Dell Optiplex 9020) were immediately ready to run. Maybe using the iGPU as the primary display - indeed the only display adapter - helps.


Nah.. it's just a question of installing any half-recent driver.

MrS

Scanning for our furry friends since Jan 2002

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2957429672
RAC: 714764

Oliver, when you get a chance

Oliver, when you get a chance to look at this, refresh yourself with
http://boinc.berkeley.edu/trac/changeset/7e48057f4436343b0f5d885945d8b8f15ef432bc/boinc-v2

There are two active mechanisms for reporting server preferences back to the client.

Old:
<no_cpu_apps>1</no_cpu_apps>
<no_cuda_apps>1</no_cuda_apps>
<no_ati_apps>0</no_ati_apps>
<no_intel_gpu_apps>1</no_intel_gpu_apps>

New:
<no_rsc_apps>CPU</no_rsc_apps>
<no_rsc_apps>ATI</no_rsc_apps>
<no_rsc_apps>intel_gpu</no_rsc_apps>

Einstein is sending the 'old' version properly, including <no_intel_gpu_apps>, but the client is only recognising cpu/ati/cuda. With your old server code, it isn't sending the 'new' format at all. (NB - small bug: it's actually duplicating the old format and sending it twice.)
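To make "recognising" concrete, here's the sort of parsing I mean - my own sketch, not the actual client code, and the tag name for the Intel GPU case is my assumption - folding both reply formats into one per-resource flag array:

// Sketch only - not the real BOINC scheduler-reply parser. It shows how both
// the old per-resource tags and the new <no_rsc_apps>NAME</no_rsc_apps> form
// could be folded into the same per-resource flags.
#include <cstring>
#include <string>

static const char* RSC_NAMES[] = { "CPU", "NVIDIA", "ATI", "intel_gpu" };
static const int   NUM_RSC     = 4;

struct SchedulerReplyPrefs {
    bool no_rsc_apps[NUM_RSC] = { false, false, false, false };

    void set(const char* name, bool value) {
        for (int i = 0; i < NUM_RSC; i++) {
            if (!strcmp(name, RSC_NAMES[i])) { no_rsc_apps[i] = value; return; }
        }
    }

    // Feed each parsed (tag, contents) pair from the reply through here.
    void handle_tag(const std::string& tag, const std::string& contents) {
        bool flag = (contents == "1");
        if      (tag == "no_cpu_apps")       set("CPU", flag);        // old format
        else if (tag == "no_cuda_apps")      set("NVIDIA", flag);     // old format
        else if (tag == "no_ati_apps")       set("ATI", flag);        // old format
        else if (tag == "no_intel_gpu_apps") set("intel_gpu", flag);  // old format (tag name assumed)
        else if (tag == "no_rsc_apps")       set(contents.c_str(), true); // new format
    }
};

A client that did something like this would keep working against both old and new servers, which is the point below.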

The fundamental design principle of BOINC - sadly neglected in recent times - was to maintain backward and forward compatibility between client and server at all times. That's why the old format is still live (even if deprecated), and IMHO it should be extended so that new clients like my v7.2.16 interoperate with old servers like yours - and vice versa after your server upgrade.

That's why BOINC databases are stuffed full of XML blobs - which can expand and contract at will, as features are added and retired - instead of properly normalised, typed and indexed fields.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250495061
RAC: 34612

Hm. The problem with that

Hm. The problem with that would lie not only in the server (i.e. scheduler) code, but would extend into existing DB content and web code that is meant to be project-specific, i.e. managed by the project. I don't remember a mention of such a change on the relevant mailing list.

BM

Edit: as far as I can see, the commit and code you are referring to are only relevant for the client-manager communication, not (necessarily) the server-client one. Could you point me to a changeset that would affect the server in the same way?

BM
