First, let me make it clear that I'm experienced in running OpenCL on HD Graphics 4000, HD Graphics 4600 and HD Graphics 530 - I run the Beta apps, and apart from a slightly high validation failure rate (~10%, mostly from the 530), I'm very happy with them. They churn out 62 credits every 10 minutes, day in, day out, and using very little CPU time.
I've been shopping, and have just taken delivery of this [*]:
11/02/2019 12:51:13 | | OpenCL: Intel GPU 0: Intel(R) UHD Graphics 620 (driver version 23.20.16.4973, device version OpenCL 2.1 NEO, 3166MB, 3166MB available, 211 GFLOPS peak)
11/02/2019 12:51:13 | | OpenCL CPU: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 7.6.0.611, device version OpenCL 2.1 (Build 611))
11/02/2019 12:51:13 | | Processor: 4 GenuineIntel Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz [Family 6 Model 142 Stepping 10]
11/02/2019 12:51:13 | | OS: Microsoft Windows 10: Professional x64 Edition, (10.00.17134.00)
It's https://einsteinathome.org/host/12766736
Running einsteinbinary_BRP4 version 134 (opencl-intel_gpu-Beta), I see four differences:
1) The app doesn't make any real progress. Under a standard BOINC client it shows the usual pseudo-progress (starting after about a minute, it advances more and more slowly, asymptotically approaching but never reaching 100%; see the sketch after this list), but under a client with that silly code compiled out, it stays at zero.
2) Stderr reaches
Activated exception handling...
[15:33:33][6924][INFO ] Starting data processing...
[15:33:33][6924][INFO ] Using OpenCL platform provided by: Intel(R) Corporation
[15:33:33][6924][INFO ] Using OpenCL device "Intel(R) UHD Graphics 620" by: Intel(R) Corporation
[15:33:34][6924][INFO ] Checkpoint file unavailable: p2030.20170413.G36.17+01.63.N.b1s0g0.00000_1573.cpt (No such file or directory).
------> Starting from scratch...
[15:33:34][6924][INFO ] Header contents:
------> Original WAPP file: ./p2030.20170413.G36.17+01.63.N.b1s0g0.00000_DM157.30
------> Sample time in microseconds: 65.4762
------> Observation time in seconds: 274.62705
------> Time stamp (MJD): 57856.4063944347
------> Number of samples/record: 0
------> Center freq in MHz: 1214.289551
------> Channel band in MHz: 0.336182022
------> Number of channels/record: 960
------> Nifs: 1
------> RA (J2000): 185128.515499
------> DEC (J2000): 34704.2441001
------> Galactic l: 0
------> Galactic b: 0
------> Name: G36.17+01.63.N
------> Lagformat: 0
------> Sum: 1
------> Level: 3
------> AZ at start: 0
------> ZA at start: 0
------> AST at start: 0
------> LST at start: 0
------> Project ID: --
------> Observers: --
------> File size (bytes): 0
------> Data size (bytes): 0
------> Number of samples: 4194304
------> Trial dispersion measure: 157.3 cm^-3 pc
------> Scale factor: 0.00104312
[15:33:35][6924][INFO ] Seed for random number generator is 1171371230.
[15:33:37][6924][INFO ] Derived global search parameters:
------> f_A probability = 0.08
------> single bin prob(P_noise > P_thr) = 1.32531e-008
------> thr1 = 18.139
------> thr2 = 21.241
------> thr4 = 26.2686
------> thr8 = 34.6478
------> thr16 = 48.9581
and goes no further. The next line would normally be a 'checkpoint committed' message, but it never appears.
3) Unsurprisingly, given the above, it never writes a checkpoint file.
4) After the three seconds or so of initial setup, it continues to use a full CPU core, whereas on other hardware types CPU usage drops to 3% or less.
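On point 1: the pseudo-progress is the client's own estimate, shown when an app never reports a real fraction done. As a minimal sketch of the general idea only (not the BOINC client's actual code; the exponential form and the estimated_runtime parameter are my assumptions):

#include <cmath>

// Sketch of an asymptotic progress estimate: it rises quickly at first,
// slows down, and approaches but never reaches 1.0, however long the
// task runs. Purely an illustration of the behaviour described above.
double pseudo_progress(double elapsed_seconds, double estimated_runtime) {
    if (elapsed_seconds < 60.0) return 0.0;   // holds at zero for the first minute
    return 1.0 - std::exp(-elapsed_seconds / estimated_runtime);
}

With that estimate compiled out, the client only shows what the app itself reports, which in this case is nothing at all.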
So, on this hardware and with this driver, I think we have a looper. Given that it's a Beta app, can I set a logging flag or command line switch to get more diagnostics?
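On point 4, for what it's worth: one common pattern that produces a pinned CPU core with GPU apps is the runtime spin-polling for kernel completion rather than blocking. Purely to illustrate the distinction, and not a claim about what this app or driver actually does:

#include <CL/cl.h>

// Spin-polling: repeatedly queries the event status and burns a full
// CPU core while the GPU works. Exits on completion or on error
// (error codes are negative, CL_COMPLETE is 0).
void wait_by_polling(cl_event ev) {
    cl_int status = CL_QUEUED;
    while (status > CL_COMPLETE) {
        clGetEventInfo(ev, CL_EVENT_COMMAND_EXECUTION_STATUS,
                       sizeof(status), &status, nullptr);
    }
}

// Blocking wait: hands the decision to the driver; a well-behaved driver
// puts the thread to sleep, though some implementations still spin
// internally, which looks identical in Task Manager.
void wait_by_blocking(cl_event ev) {
    clWaitForEvents(1, &ev);
}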
* Last year's Dell XPS 13 ultra-portable - I'm going travelling, and wanted something light. And Dell often have a discounted stock clearance sale around this time of year.
Added to the above, I'm aware of the comment in https://einsteinathome.org/content/many-errors-w-multiple-tasks-intel-gpu#comment-166209 that "I've got a laptop with a 6xxx series Intel CPU that I'm running SETI on because they're the only project I'm aware of to've written a special app version that uses alternate (slower, but less cumulative error generating) calculations for the IGP that are able to validate against other devices."
I worked with the SETI developer - indeed, I purchased the HD 530 specially - to test out that bug fix. I've run SETI tests specifically on the UHD 620 to see how it runs.
http://setiathome.berkeley.edu/results.php?hostid=8670176 will have the outcomes when my wingmates show up.
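For anyone wondering what "slower, but less cumulative error generating" calculations can mean in practice, compensated (Kahan) summation is the classic example: a few extra operations per addition so that rounding errors don't pile up over millions of samples. A generic sketch of the idea, not the SETI app's actual code:

#include <vector>

// Naive accumulation: rounding error grows with the number of terms,
// the kind of drift that can push a GPU result outside a validator's tolerance.
float naive_sum(const std::vector<float>& v) {
    float s = 0.0f;
    for (float x : v) s += x;
    return s;
}

// Kahan (compensated) summation: slower, but the lost low-order bits
// are carried forward in c instead of being discarded.
float kahan_sum(const std::vector<float>& v) {
    float s = 0.0f, c = 0.0f;          // c holds the running compensation
    for (float x : v) {
        float y = x - c;
        float t = s + y;
        c = (t - s) - y;               // recovers the part of y that was lost
        s = t;
    }
    return s;
}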
I've got a laptop with a 520 running SETI for the same reason. The bugfest that several generations of Intel iGPUs have been for compute has me skeptical of Intel's ongoing effort to produce high-performance dGPUs in the next few years.
Looking through those results a few days later, it's clear that almost all of them came back "weakly similar" - close enough not to be thrown out entirely, but needing a third computer to confirm which calculation was correct before anything went into the results database.
That feels like another loss of precision in my new UHD 620, and probably makes it unsuitable for running BOINC. All of my SETI tasks could have been completed by the other two wingmates working together, and my effort (and my electricity!) didn't help at all. And the endless loop in the Einstein app helped even less.
When I get time, I'll run some bench tests and send them to the SETI developer so he can see where the precision error occurs, but I've got more important things to do right now.
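For context, my understanding of "weakly similar" is that the validator compares two results element by element: values that agree within a tight tolerance are "strongly similar" and can validate as a pair, while values that only agree within a looser tolerance need a third result to break the tie. A rough, generic sketch of that idea (not the project's actual validator code; the thresholds here are made up):

#include <algorithm>
#include <cmath>
#include <vector>

enum class Similarity { Strong, Weak, Different };

// Compare two result vectors by worst-case relative error.
Similarity compare_results(const std::vector<double>& a,
                           const std::vector<double>& b) {
    if (a.size() != b.size()) return Similarity::Different;
    double worst = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        double denom = std::max(std::abs(a[i]), std::abs(b[i]));
        double rel = denom > 0.0 ? std::abs(a[i] - b[i]) / denom : 0.0;
        worst = std::max(worst, rel);
    }
    if (worst < 1e-5) return Similarity::Strong;   // pair can validate on its own
    if (worst < 1e-3) return Similarity::Weak;     // needs a third opinion
    return Similarity::Different;
}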
That's a pity. We can only hope that once Intel makes dGPUs, their drivers will be better than the ones for iGPUs.
Having paid no attention to Intel's graphics for several years, I was pleasantly surprised to see that they are taking it seriously.
https://www.tomshardware.com/news/intel-xe-gpu-specs-features,38246.html
It appears they are trying to develop their own ecosystem, though that will take years to pull off. But with their ability to integrate CPUs and GPUs in close proximity, they could do good things for the distributed computing world.
I'm soon taking possession of a ThinkPad X1 Extreme with the i7-8750H. Should I disable Intel GPU crunching on everything? From what I'm reading, it's giving a fairly high error rate. I have a few other Ivy Bridge machines with Intel HD graphics though, one of them lacking an alternative GPU.
Sorry, most of this is over my head. Just want to make sure I won't be wasting the processing power with no result.
One other thing to think about is the heat generated when using the Intel GPU to crunch in a laptop; it can be very high and cause your chip to die sooner than it should. So if you decide to try it, be sure to monitor the temperatures, and consider putting the laptop in a room with very good cooling, with a fan underneath it too. Some people get fans that force air into the laptop, but most laptops aren't really designed for good cooling in the first place.
I run CPU-only units on my laptops, leaving at least one CPU core free. Yes, I have hyper-threading turned on, so for instance on my i7 quad core (8 cores through HT) only 6 cores are crunching CPU units. My dual-core Mac is only using a single CPU core to crunch. I did that so the laptops are still usable for other things; right now, for example, I'm using the i7 in my recliner as I type this while it crunches 6 WUs at the same time.
What I do not understand is how an official OpenCL 2.1 device can fail to meet the specification, when the company involved (Intel) has such a high reputation for CPUs.
Intel GPU technology should be much the same problem as adapting AVX code to and from AMD CPUs; the technology of a GPU is pretty much floating point, AVX/AVX2/AVX-512-style vector units, and lots of RAM. The point being that AMD and Intel are both capable of so much, and improving the record should be priority No. 1.
Thanks kindly RS
https://science.n-helix.com/2017/04/boinc.html
https://science.n-helix.com/2018/01/microprocessor-bug-meltdown.html
Would crunching with only the GPU net more/better results than the CPU?
By that, I mean disable CPU crunching entirely. Would this generate the same amount of heat as CPU only crunching, or does it add more? Or, as with many things, is it laptop dependent?
It will probably be laptop-dependent. I've run both CPU tasks and GPU tasks on a MacBook Pro Retina 15" (Core i7-4980HQ CPU @ 2.80GHz + AMD Radeon R9 M370X Compute Engine) and find that it runs cooler on GPU-only tasks than on CPU-only tasks, and that's with the CPU tasks using just one of its 8 cores. For my other MacBooks that don't have an Einstein-capable GPU, I run CPU tasks on 1 or 2 cores at ~70% CPU time. As Mikey said, laptops are not good at getting rid of heat.
I've been a Mac user at home from the beginning, but it irks me that their newer laptops have these powerful chips that can't get anywhere near their full capability for any sustained period because the unit cannot get rid of the heat. Although, if you run an E@H laptop cool(ish) on just a fraction of its CPU cores, it can give favorable credit/kWh productivity. Slow crunching, but efficient.
Ideas are not fixed, nor should they be; we live in model-dependent reality.