Differences between Intel iGPU work units and Nvidia / AMD

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49
Topic 205981

Hey all,

Just curious what the differences would be. I remember reading on another project I do CPU work for that the Android ARM units are smaller than the x86 units for desktops. Are the work units different for Intel's GPU lineup vs a dedicated card from Nvidia / AMD? I run a MacBook under Windows for the iGPU units because macOS does not support them, and my old 560 Ti cranks through tons of work because I hardly ever game. By different, I mean in size, what they accomplish, etc.

I apologize if there are other posts on this that I did not catch; a search did not seem to bring up much, and since the iGPU support seems more recent, I figure there is not as much info out there.

solling2
Joined: 20 Nov 14
Posts: 219
Credit: 1577384662
RAC: 21048

A rough estimate of how much has been crunched is given by the credit that your machine accumulates. Check after the tasks are validated, because the claimed credit value is much lower than what is finally granted. iGPUs are usually much less capable than dedicated GPU cards, so the tasks have to be customized for them. Their results are welcome anyway. I don't know how the application is built, but my best guess would be that a smaller task simply covers a smaller piece of the parameter space being scanned.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49

I know the iGPU is way less capable; I just have lots of spare computers, so the more I can wring out of them, the better. IIRC, Intel GPUs get only certain tasks from this project because the other applications have not been written for them?

Also, maybe there is some other factor, but the fan does not spin as hard as, say, when I am running a game, which makes me think the GPU isn't being pushed hard enough. My 560 Ti can barely draw the OS screens, let alone a YouTube video, when crunching, but my laptop plays everything at full speed even while running tasks. So I am wondering if there is something I am missing to make my Intel chip run at max capacity, or if it is down to the design (lack of dedicated video memory, speed of components, etc.).

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117282561729
RAC: 36032856

Domenic_2 wrote:
I know the iGPU is way less capable; I just have lots of spare computers, so the more I can wring out of them, the better. IIRC, Intel GPUs get only certain tasks from this project because the other applications have not been written for them?

Because your computers are hidden, all of us mere mortals haven't really got a clue about what you have and what you might be able to "wring out of them" :-). You need to specify, in a bit of detail, the CPU model, RAM, GPU, etc., of the particular machines you might be commenting on. There have been a number of 'generations' of Intel iGPUs, and there have been many issues about driver versions that would allow particular models to return valid results. There are long-running threads (like this one) about this that you could peruse.

If you want to know what type of tasks are available for particular GPU types, just go to your account dashboard and look down in the bottom right-hand corner for the 'applications' link. If you open that page, you will see that the only work available for Intel GPUs at the moment comes from the Arecibo binary radio pulsar search. The tasks in this search are deliberately 'small' as they are designed for 'mobile'-type devices and Intel GPUs. In the past, much larger tasks were available for discrete GPUs. It's unlikely they will return, because the data supply can't keep up with the demand.

There is a much greater supply of work for discrete GPUs from the gamma-ray pulsar binary search (FGRPB1G). These are 'large' tasks, suited to modern mid-range to high-end cards. The app uses OpenCL (not CUDA), which is why older NVIDIA cards struggle in the way you describe for a 560 Ti. At some point in the future there may be a CUDA app, which should improve this.

Cheers,
Gary.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49

Sorry, I tend to keep things hidden because I usually just leave the default privacy at, well... private.

Anyway, I have a late-2012 Retina MacBook with:

8GB RAM
Intel i5 3210M
Intel HD 4000

This is the only iGPU I was talking about. I apologize for putting in excess information that was not needed.

Anyway, it is currently running the latest Windows drivers (I am on W10). I knew that the iGPU application list was much shorter; I just did not realize the work units were so much smaller.

I just assumed that since my Nvidia card has trouble drawing the desktop when running at full throttle and the Intel does not, the Intel GPU portion was not being pushed to max capacity. I knew that the iGPU shares system memory, so maybe it cannot be fed fast enough? I have 4 threads and they are all working on another project. If I dedicated one "core" to feeding the GPU, would that increase the performance of the GPU part?

From your comment about larger tasks previously being available, will larger tasks not come back to iGPUs?

As for the 560, while not part of the original question, that is good to know about OpenCL vs CUDA. I was planning to get a 1080 Ti, as I usually buy my GPUs with the intent of using them for a long, long time, but if it will still struggle with OpenCL, I may have to reconsider.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117282561729
RAC: 36032856

Domenic_2 wrote:
Sorry, I tend to keep things hidden because I usually just leave the default privacy at, well... private.

That's quite OK. I have pretty much the same attitude to privacy matters. The older I get, the more I tend to cherish what might remain of mine, and despair at how willingly some much younger people seem to abandon theirs :-). For details about hosts I own, I usually provide a link to a hostID if I want to share the details of that host. That way people can see both the hardware/OS details and the crunching performance.

Domenic_2 wrote:
If I dedicated one "core" to feeding the GPU, would that increase the performance of the GPU part?

I've never tried to run tasks on the iGPU because I run Linux and there were no drivers available at the time. I don't know if that has changed more recently; I'm much more involved with discrete GPUs. From what I remember of what others have posted, performance will suffer with iGPUs if you also load all the CPU cores.

With discrete GPUs, the default is to require a 'free' CPU core to support each GPU task, for both AMD and NVIDIA. You can see for yourself whether this is really needed by looking at hosts with particular GPUs of interest, e.g. in the top hosts list. All NVIDIA GPUs I've ever looked at have CPU times almost as large as the elapsed time. In other words, a full CPU core is being 'used' for pretty much the entire duration of each GPU task, so the default really makes sense for NVIDIA GPUs.

It's quite a different story for AMD GPUs: the CPU time used is very much smaller than the elapsed time. I've found from experience that, for the GPUs I use, I don't get any significant degradation in performance even when running as many as 4 GPU tasks over 2 GPUs with just a single CPU core (from a Pentium dual-core processor) for support. The GPU model is an R7 370 (for both) and the CPU component is only ~140s out of a ~2050s elapsed time. I use the app_config.xml mechanism to remove the default allocation of 1 CPU core per GPU task, so running 4 GPU tasks actually 'reserves' zero CPU cores. I then reserve a single core by setting computing preferences to allow BOINC to use only 50% of the cores.
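
For anyone who wants to try the same thing, here is a minimal sketch of such an app_config.xml (it lives in the project's directory under the BOINC data folder). The app name hsgamma_FGRPB1G is my assumption for the FGRPB1G search; copy the exact string from the <name> field in client_state.xml on your own host:

<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 2 tasks per GPU, i.e. 4 tasks across 2 GPUs -->
      <cpu_usage>0.1</cpu_usage>  <!-- budget ~0.1 cores per GPU task, so 4 tasks reserve no whole core -->
    </gpu_versions>
  </app>
</app_config>

After saving the file, use the Manager's 'Read config files' option (or restart BOINC) for it to take effect.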

Domenic_2 wrote:
From your comment about larger tasks previously being available, will larger tasks not come back to iGPUs?

That comment was about discrete GPUs. I don't believe there have ever been 'larger' tasks for iGPUs. There would be a lot of 'back end' overhead in maintaining different task sizes for 'mobile' devices as opposed to iGPUs, so I wouldn't expect any change in the current situation.

Domenic_2 wrote:
As for the 560, while not part of the original question, that is good to know about OpenCL vs CUDA. I was planning to get a 1080 Ti, as I usually buy my GPUs with the intent of using them for a long, long time, but if it will still struggle with OpenCL, I may have to reconsider.

I specifically said that older NVIDIA cards struggle. The top hosts list shows that recent-generation high-end NVIDIA cards do very nicely, thank you, even though you still need to use the default 1 CPU core per GPU task for support. If you have the money for a 1080 Ti, don't let my comments discourage you in any way. The evidence shows it will crunch very well if you provide the support.

My philosophy is that I object to paying an unfair premium just because a company decides it can get away with it. The classic example is high-end Intel processors, which have suddenly become a lot cheaper now that there is viable competition. In this country there is an 'Australia Tax' as well, where components are significantly more expensive than what you would pay in the US, for example.

So I tend to take advantage of the significant discounts available on older models when the new replacements are first launched, particularly if it appears likely that the replacement will have only minor potential for improved performance anyway. This seems to work quite well for AMD but not so much for NVIDIA. The latter always seems to be disproportionately expensive here, so I end up buying AMD most of the time. That does mean higher power costs, but that has changed quite a bit with the RX 400 series and should be even better for the next series.

Cheers,
Gary.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49

Interesting... I have never dedicated a full core to my 560, and it has never seemed to affect the crunching speed when it is the only project running and has all the CPU to itself. But I did notice that the CPU count recently changed from 0.2 to 1, so maybe it would now. I shall have to try that.

I also do not agree with paying a premium just because a company can demand it. But I figure that even if the price drops, I will not be changing GPUs for a LONG time because I do not follow the video game curve (mostly because the new ones are crap), and as long as the card works without malfunctioning, it will be serving up work for years to come. I am okay paying a premium knowing that I will get a long service life out of the card.

And good to know about the iGPU task size. Kind of sad, but with their limited power/heat headroom, I can see why it is set up the way it is. If only Google would allow OpenCL on Android...

Oh, and thanks for slogging through my question. Reading it back now, I can see I worded it horribly, like I had no idea what I was trying to ask. But you managed to answer it all anyway :)

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117282561729
RAC: 36032856

Domenic_2 wrote:
Interesting... I have never dedicated a full core to my 560, and it has never seemed to affect the crunching speed when it is the only project running and has all the CPU to itself. But I did notice that the CPU count recently changed from 0.2 to 1, so maybe it would now. I shall have to try that.

OK, I'm wondering if I haven't explained things properly. Since the current FGRP GPU app is a completely different beast from what went before, when the BRP4G/BRP6 searches were in progress on discrete GPUs, I neglected to point out the big change in the CPU support needed. At the time of the BRP searches, NVIDIA apps were based on CUDA and the default CPU 'support' estimate was set at 0.2 CPUs per GPU task. There was essentially little or no penalty if you ran 2 or 3 GPU tasks on mid-range GPUs without 'reserving' any CPU cores. If you weren't modifying the default with the app_config.xml mechanism, you would have needed to be running 5 concurrent GPU tasks (5 x 0.2 = 1.0) before any automatic 'reservation' of a CPU core by BOINC kicked in.

With the new OpenCL-based FGRP GPU app, the default immediately became a reservation of 1 full CPU core per GPU task, and this is what you are alluding to with your comment, "changed recently from .2 to 1". If you weren't doing anything to modify the default, each new FGRP GPU task would automatically have restricted CPU task crunching by one core anyway. It's unlikely you could do any better by reserving an additional core over and above what BOINC does automatically. I have GTX 650s which don't improve at all, even if no CPU tasks are running. The crunch times are very slow and the screen lag is woeful, so I've shut down many of these hosts. I'd certainly restart them if a better-performing CUDA app became available.
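
If you do decide to experiment, app_config.xml can also override the reservation on a per-app-version basis. Here is a minimal sketch; both the app name and the plan class are my assumptions, so copy the real strings from client_state.xml:

<app_config>
  <app_version>
    <app_name>hsgamma_FGRPB1G</app_name>
    <plan_class>FGRPopencl-nvidia</plan_class>  <!-- assumed plan class; check client_state.xml -->
    <avg_ncpus>0.2</avg_ncpus>                  <!-- restore the old-style 0.2 CPU budget per GPU task -->
    <ngpus>1.0</ngpus>
  </app_version>
</app_config>

Keep in mind that avg_ncpus only changes BOINC's scheduling arithmetic, not how much CPU the task actually consumes, so an OpenCL task that genuinely needs a full core will still crawl if that core is busy with something else.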

Cheers,
Gary.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49

Oh, I gotcha. That also explains why my screen is so laggy. Okay, this makes a lot more sense now. At least I do not have to tweak any settings, because it will do it for me. I forget that there are subprojects within Einstein and, as such, the requirements change.

It is unfortunate, because I have a 5870 lying around, but I believe it has heat issues and I have not opened it up to try to fix it yet. I may have to switch over until there is a CUDA app like you mentioned.

Apologies for it not clicking immediately.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117282561729
RAC: 36032856

Domenic_2 wrote:
... I have a 5870 lying around, but I believe it has heat issues and I have not opened it up to try to fix it yet.

A 5870 would probably do quite well here. Its major problem would be power consumption compared with more modern cards. Heat issues are usually fairly easy to fix. The two most likely causes are (i) thermal grease that has deteriorated into what is essentially an insulating layer, and (ii) a fan with bad/dry bearings that stop it from running at the proper speed. I have a bunch of 4850s, used at Milkyway until quite recently, when a server 'upgrade' stopped them from being recognised. So they just gather dust now. They only support OpenCL 1.0 and the minimum here is 1.1, so I've never been able to use them here. They crunched at Milkyway for about 7 years and most had heat problems at various stages.

If your 5870 is anything like my 4850s, it's a pretty trivial exercise to replace the thermal grease if that's the problem. I've done that on most of mine and it immediately dropped the temperatures by a large amount (100C+ down to 70-80C). If it's the fan, it may be possible to re-oil the bearings. Mine were sealed, but I was able to drill through the plastic and inject some oil with a diabetes syringe. I found that would only last for 6-12 months, so in the end I zip-tied a decent-sized server-style fan directly to the heat sink. That seemed a fairly permanent solution, as I never needed to revisit any of the ones with that 'mod' :-).

Cheers,
Gary.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 49

I shall try to fix it then. In comparison to my 560, the power consumption would certainly be higher, but not so much as to make a hugely noticeable difference. And if it would do a lot better, I'll take the power hit as well as the drop in gaming performance.
