CPU and GPU av. credit

John
John
Joined: 1 Nov 13
Posts: 59
Credit: 573081286
RAC: 0
Topic 218020

Hi guys, Happy new year to everyone!

I've seen some quite big differences in average credit when comparing CPUs and GPUs, and some of them don't make sense (to me). Especially now that I'm planning to upgrade or buy some new machines, I'm not sure how best to mix the CPUs and GPUs.

1.

On the top computers list, rank 4 is a dual RX Vega with 8 GB of RAM, doing 2.644 million units. The CPU is an i5-3570K with 128 processors?!? Since when can a 3rd-gen i5 have 128 threads? Is this some sort of reading error, maybe because of Win 10?

Rank 6 has the same GPUs but is doing only 2.005 million units. The CPU is an i7-8700K with 32 threads/processors (this looks a bit more realistic, but still... the standard processor has 6 cores/12 threads, so how do you get to 32?).

How do you explain the big difference, from 2.64 million down to about 2 million, considering the newer CPU in the rank 6 machine?

2.

On the machine I use, which GPU do you think would be best to add? An RX 580 seems too weak, and another R9 Nano is pretty hard to find. So I was thinking about an RX Vega 56, to be similar to the R9 Nano. What do you think?

3.

Rank 7 vs. rank 8: 4 x RX 570 gives an almost identical average to a single RX Vega. Comparing prices (in the EU), four 570s are about 1000 EUR/1150 USD, while an RX Vega is about 900 EUR/1030 USD. I know the 570 is not a good idea and it's better to have 580s, but for comparison, which is the better way to go: one single powerful GPU, or four of something like the 580? And why?

4.

Rank 3: the CPU is an i7-6700K @ 4.00GHz [Family 6 Model 94 Stepping 3] (256 processors). How can that be? It's 4 cores/8 threads, which translates to 8 processors. But from 8 to 256, how does that work? :) And is this just a display thing, or is it really more powerful?

Thanks a lot!

 

 

 

mikey
mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828123638
RAC: 206352


John wrote:

Hi guys, Happy new year to everyone!

I've seen some quite big differences in average credit when comparing CPUs and GPUs, and some of them don't make sense (to me). Especially now that I'm planning to upgrade or buy some new machines, I'm not sure how best to mix the CPUs and GPUs.

1.

On the top computers list, rank 4 is a dual RX Vega with 8 GB of RAM, doing 2.644 million units. The CPU is an i5-3570K with 128 processors?!? Since when can a 3rd-gen i5 have 128 threads? Is this some sort of reading error, maybe because of Win 10?

Rank 6 has the same GPUs but is doing only 2.005 million units. The CPU is an i7-8700K with 32 threads/processors (this looks a bit more realistic, but still... the standard processor has 6 cores/12 threads, so how do you get to 32?).

How do you explain the big difference, from 2.64 million down to about 2 million, considering the newer CPU in the rank 6 machine?

2.

On the machine I use, which GPU do you think would be best to add? An RX 580 seems too weak, and another R9 Nano is pretty hard to find. So I was thinking about an RX Vega 56, to be similar to the R9 Nano. What do you think?

3.

Rank 7 vs. rank 8: 4 x RX 570 gives an almost identical average to a single RX Vega. Comparing prices (in the EU), four 570s are about 1000 EUR/1150 USD, while an RX Vega is about 900 EUR/1030 USD. I know the 570 is not a good idea and it's better to have 580s, but for comparison, which is the better way to go: one single powerful GPU, or four of something like the 580? And why?

4.

Rank 3: the CPU is an i7-6700K @ 4.00GHz [Family 6 Model 94 Stepping 3] (256 processors). How can that be? It's 4 cores/8 threads, which translates to 8 processors. But from 8 to 256, how does that work? :) And is this just a display thing, or is it really more powerful?

Thanks a lot!   

Most of those people are using virtual machines to simulate more cores etc. As long as you have enough RAM it works great: just set up a small Linux distro in the virtual environment and crunch away.

As for the GPUs, more physical GPUs per machine means a LOT more heat in that machine, so that's a major concern. It also means you will need a bigger power supply, which means more heat again. I only put one GPU per machine, as that also gives me more CPU cores to crunch with.

As for the 580 versus the 570 versus the Vega: buy what you can afford, because next week there will be a better one and yours will be behind again. MOST projects can take Nvidia cards, while at some you can't use AMD cards at all, and at other projects the AMD cards are faster. So the idea is to figure out where you want to crunch right now and work out which of the GPUs you want will be best for those choices. For instance, some projects use double-precision crunching; Nvidia cards do that by default, but some are much better at it than others, while some AMD GPUs don't do it at all.

The same kind of thinking goes into which CPU you want. If you are crunching at PrimeGrid, hyper-threading actually slows down the crunching, while at other projects the added cores mean you can crunch more WUs at the same time. Get the latest model of what you can afford, but also spend some money on the RAM. RAM means things happen faster, so crunching is faster. Dual or even quad channel RAM is best, so buying 2 or 4 sticks of smaller, faster RAM is better than buying 1 or 2 huge sticks of slower RAM, even if the total amount of RAM is the same. For instance, my new Ryzen+ CPU uses quad channel RAM, so I got 4 x 8 GB sticks of RAM to utilize that instead of 2 x 16 GB sticks. Same total amount, but it runs faster because of the quad channel.

You didn't mention the other slowdown point, and that's the hard drive. Get an SSD for your C: drive, use a regular drive as your data drive, and load most programs onto the data drive. That way Windows or Linux will run off the C: drive but BOINC will run off the data drive. BOINC doesn't need to access the drive very often, so running it on an SSD doesn't help much, but the boot-up process will happen MUCH faster with an SSD as your C: drive. I recommend a 500 GB C: drive, as some programs won't let you choose a different drive and it gives plenty of storage space for them. They are under $100 US now, so it isn't much to pay for a faster system.

Gavin
Gavin
Joined: 21 Sep 10
Posts: 191
Credit: 40643434495
RAC: 1515611


Hi John,

The current number 4 machine is mine and I can assure you it is an i5 with a 4-core/4-thread processor and 8 GB of RAM. There is a mechanism within BOINC that allows you to 'simulate' CPU cores, and it is used by those of us with fast enough machines to circumvent work fetch issues.

In the recent past we had a run of tasks that completed so quickly that some of my computers hit the built-in BOINC/Einstein limit for daily work downloads, which meant my fast hosts were running out of work after 12 hours. To get around this, some of us 'simulated' our CPU count in order to get sufficient work. There is a daily work-fetch limit to prevent a consistently 'bad' host from returning errored tasks, and a 'back off' period is then implemented. The limit is based on a calculation (which others may be able to explain better) of CPU core count and number of GPUs. Hence 'simulating' more CPUs results in more work being sent, to provide fast hosts with enough work... Am I making sense?

Using the 'simulate' CPUs option is NOT for everyone and I won't explain how to do it. Used correctly, with a fast enough host and a small work cache of X days, it can be useful when required, and by small I mean less than 2 days of total work ;-)

Hopefully I have explained this in an understandable way, but basically you do not need to worry about the erroneously reported core counts of some of the Top Hosts in that list. Some of us have just fudged it in the past to get enough work to see the day through.

@ Mikey,

I can't speak for Guarav (No. 1), but I'd be very surprised if any host in the top 10, or even the top 50, at Einstein was a VM. I'm also not convinced that using a Linux OS is significantly faster nowadays; I reverted my Kubuntu hosts back to Windows 12 months ago and have had little to no negative impact on production, and surprisingly less hassle.

If I were looking to build another machine for Einstein, the choices would simply be based on an Intel CPU and an AMD GPU. Depending on motherboard/CPU choice, don't overdo the RAM... a little fast RAM = fast, lots of fast RAM can = slower ;-) From a crunching perspective, using an SSD will not make any gains whatsoever, other than to perceived user experience and faster response, of course.

Gav.

 

 

 

 

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023644931
RAC: 1808012


Gavin wrote:
In the recent past we had a run of tasks that completed so quickly that some of my computers hit the built-in BOINC/Einstein limit for daily work downloads

The tasks being issued now, and for somewhat over two days, seem to be of that same sort, so this issue may be upon us again.

John
John
Joined: 1 Nov 13
Posts: 59
Credit: 573081286
RAC: 0


Thanks for the tips, guys.

Gav, I got it. I think it only makes sense when there's a stable power network. I imagine it could be problematic if the computer shuts down (due to a power cut) and, when you turn it back on, the settings could be wrong, you might forget to start the VM, and so on, especially if you have auto-start on for BOINC.

But just out of curiosity, with 16 GB of RAM, how high can you go from 8 processors? After all, RAM is the cheapest part compared to a new GPU, CPU or whole computer. And I have a feeling that, if done right, it can make a difference. I would try this trick only with a very big UPS, which could hold the computer for at least 24 hours.

archae86
archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023644931
RAC: 1808012


John wrote:
... a very big UPS, which could hold the computer for at least 24 hours.

That would be huge, by the standards of most UPS systems marketed for individual PC use.

Running Windows 10, I've not figured out how to get the Einstein tasks to suspend when the system is running on UPS power, though it should be possible, as two of my three systems are in successful communication with their Cyberpower UPS systems over a data connection, so in some sense the "running on battery" status is available at the PC level.

If I learned how to get that to work, my existing UPS systems might get me well past half an hour, but getting past a day would require a beast, even with that working. Then I'd get to replace all those batteries about once every three years, unless I found a UPS that does a better job of looking after battery health.
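One untested idea, just a sketch: assuming the OS exposes the UPS as a system battery (as Windows does for many USB-connected units) and that boinccmd is on the PATH and allowed to talk to the client, a small watchdog script could suspend BOINC whenever mains power is lost and resume it when power returns.

# Untested sketch of a UPS watchdog for BOINC.
# Assumes the OS reports the UPS as a battery and boinccmd can reach the client.
import subprocess
import time

import psutil  # third-party; pip install psutil

POLL_SECONDS = 30

def on_mains_power() -> bool:
    batt = psutil.sensors_battery()
    # If no battery/UPS is visible to the OS, assume we are on mains power.
    return batt is None or batt.power_plugged

def set_boinc_mode(mode: str) -> None:
    # "never" suspends all crunching, "auto" restores normal behaviour.
    subprocess.run(["boinccmd", "--set_run_mode", mode], check=False)

suspended = False
while True:
    if not on_mains_power() and not suspended:
        set_boinc_mode("never")
        suspended = True
    elif on_mains_power() and suspended:
        set_boinc_mode("auto")
        suspended = False
    time.sleep(POLL_SECONDS)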

mikey
mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828123638
RAC: 206352


archae86 wrote:
John wrote:
... a very big UPS, which could hold the computer for at least 24 hours.

That would be huge, by the standards of most UPS systems marketed for individual PC use.

Running Windows 10, I've not figured out how to get the Einstein tasks to suspend when the system is running on UPS power, though it should be possible, as two of my three systems are in successful communication with their Cyberpower UPS systems over a data connection, so in some sense the "running on battery" status is available at the PC level.

If I learned how to get that to work, my existing UPS systems might get me well past half an hour, but getting past a day would require a beast, even with that working. Then I'd get to replace all those batteries about once every three years, unless I found a UPS that does a better job of looking after battery health.

A generator can get you past the half-hour mark. And until they start using lithium batteries in UPSes, they will continue to have problems with battery management. The good news is that could be happening pretty soon, as the older battery types are being replaced by the newer lithium ones across the whole battery field. There are even newer technologies coming, but I don't see UPSes skipping over lithium as it drops in price.

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109394963383
RAC: 35803227


John wrote:
... I got it. I think it only makes sense when there's a stable power network. I imagine it could be problematic if the computer shuts down (due to a power cut) and, when you turn it back on, the settings could be wrong, you might forget to start the VM, and so on, especially if you have auto-start on for BOINC.

Unfortunately, I don't think you really have 'got it' :-).

Using the BOINC feature to simulate extra cores is not like using a VM, and there are no bad settings or events that can happen when you "turn it back on".  It has nothing to do with whether you have stable power or not.  It's simply a fudge factor that allows a fast machine to avoid completely running out of work in certain circumstances, as explained below.

If you set that BOINC feature to simulate some huge number of cores (say 96), there is no change in the number of tasks that the machine can crunch.  If a machine has 8 real cores or threads and one GPU, it can still only crunch the same number of CPU plus GPU tasks simultaneously when you change to 96 simulated cores, and the time to crunch those tasks won't change.  If there's a power failure and the machine has to be restarted, nothing bad will happen.  The same tasks that were crunching before the power failure will be restarted from their respective saved checkpoints.

The only reason that people at Einstein have been using simulated cores is to allow their daily GPU task quota to be increased.  At times there are GPU tasks that can be crunched much faster than the usual 'normal' time, so that for fast, high-end GPUs the daily allowance may not last for a full day.  That allowance is a value calculated by the project servers depending on the hardware in each machine.  Even though we are talking about GPU tasks, the server uses BOTH the number of GPUs AND the number of CPUs in working out the quota.  If you 'pretend' to have a lot more CPU cores than you actually have, the server will allow you a bigger quota (for everything) and you can avoid running out of work when you have the fast-crunching GPU tasks.  End of story.
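To put some purely illustrative numbers on that, here is a little sketch.  The base quota and the GPU weighting below are invented values, not the actual figures the project servers use, but it shows why 'pretending' to have 96 cores raises the daily limit without changing crunching speed at all.

# Toy model only: the real quota calculation is done by the project servers
# and the numbers below are assumptions, not Einstein's actual values.

def daily_task_quota(n_cpus: int, n_gpus: int,
                     base_per_device: int = 32, gpu_weight: int = 4) -> int:
    """Rough model: the quota grows with BOTH the CPU count and the GPU count."""
    return base_per_device * (n_cpus + gpu_weight * n_gpus)

print(daily_task_quota(8, 1))    # real 8-thread host with 1 GPU  -> 384 tasks/day
print(daily_task_quota(96, 1))   # same host 'pretending' 96 cores -> 3200 tasks/day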

John wrote:
But just out of curiosity, with 16 GB of RAM, how high can you go from 8 processors? After all, RAM is the cheapest part compared to a new GPU, CPU or whole computer. And I have a feeling that, if done right, it can make a difference. I would try this trick only with a very big UPS, which could hold the computer for at least 24 hours.

If you understand that the 'simulated cores' feature doesn't have any benefit for the number of simultaneous tasks or for individual crunch times, then there is no need to think about adding more RAM or a huge UPS.  If your machine has enough RAM to avoid swapping, there won't be a tangible benefit to crunch times from adding more of it.

Going back to the original set of questions in your opening post, I get the impression you are looking for advice about how to get the best 'bang for your buck' when you set up some additional machines for crunching.  Please be aware that there are many pitfalls in trying to compare the very small number of hosts that you can see at the top of the top computers list.  You really have to focus on the fact that the values shown are a single snapshot only.  The RAC values you are comparing can't be stable unless there has been something like a month of unchanged crunching conditions.

What were the values yesterday, last week, last month?  A machine shows a particular GPU today.  Was it the same GPU yesterday, last week, last month?  Does the machine run 24/7 or not?  Has the owner had it switched off recently while on holidays and just turned it back on?  Has the owner changed the number of GPUs recently?  Does the machine also support other projects besides Einstein?  Unless you consider a lot of factors like these and watch machines of interest over a period of several weeks, at least, you can never be sure that the snapshot credit values you see today have any true meaning whatsoever.

Fortunately, many people do seem to have fairly stable operating conditions.  If you are monitoring a given host and nothing much changes over say a week, you can infer that the current values are probably very close to the long term stable values.  Even then, you still can't infer 24/7 operation.  Some people may schedule their computer to stop crunching during set times, eg to avoid peak electricity costs.  The RAC might be stable but lower than what 24/7 operation would give.

Another factor is that comparisons between different hosts having supposedly similar hardware and similar hours of operation may still give a false impression because one may be crunching multiple GPU tasks concurrently whilst the other may be processing them one at a time.  To work that out, you have to analyse the individual task crunch times for both.

So you really can get a false impression from the current RAC shown on the top hosts list at a single point in time, even if it appears to be stable.

 

Cheers,
Gary.

Gavin
Gavin
Joined: 21 Sep 10
Posts: 191
Credit: 40643434495
RAC: 1515611


Gary. Thank you for taking the time to explain in a much clearer way than I did. You are a true gent! 

I particularly liked your use of the word 'pretend' to describe the use of 'simulating' cpu's and I hope the OP is now better informed :-)

To try and expand on what Gary has said about building a 'best bang for buck' machine: looking at the Top Hosts list is not the way to put your best foot forward ;-) There are too many variables involved, as Gary has explained, but to add to that you need to consider your intended use: cruncher only, daily driver for office-type stuff, or is it going to be a gaming rig, etc...

Stick to a budget! Don't overdo the specification unless you have a real need; less is often more. For example, there is no point fitting 4 sticks of RAM when your chosen CPU only supports dual channel memory. If you have a requirement for 16 GB of RAM and a dual channel CPU, use 2 x 8 GB sticks; using 4 x 4 GB sticks (if the motherboard supports it) to get your 16 GB will actually slow you down to a certain degree, because the CPU will be working a little harder just to address the extra lanes. In the same manner, installing 2 GPUs will not give you double the credit output of a single GPU. I could list lots of reasons to justify 'less is more', but with any build or upgrade the choice of path is personal, and how far you take it is up to you :-) It's also worthwhile to consider the choice of OS, BIOS/UEFI settings, Windows/Linux power settings and (mostly) which GPU driver version you use.
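As a rough back-of-the-envelope on why it's the channel count (not the stick count) that sets the bandwidth ceiling, the little calculation below uses DDR4-3200 purely as an example figure; substitute whatever your board actually runs.

# Rough theoretical peak memory bandwidth: channels x transfer rate x 8 bytes.
# DDR4-3200 is just an example figure, not a claim about any host in this thread.

def peak_bandwidth_gb_s(channels: int, transfers_per_sec: float) -> float:
    return channels * transfers_per_sec * 8 / 1e9

print(peak_bandwidth_gb_s(2, 3200e6))  # dual channel  -> ~51.2 GB/s
print(peak_bandwidth_gb_s(4, 3200e6))  # quad channel  -> ~102.4 GB/s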

Also don't forget that 'vintage' computers still have legs with a modern gpu or two installed. The latest and greatest kit is not always better ;-)

Above all, if you choose to 'shoot for the stars' and go for a Top Host slot, consider the not insubstantial cost of the power to get there and stay there!!!

Gav.

 

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109394963383
RAC: 35803227


Gavin, thanks very much for your kind words and for providing the extra comments.  I agree with what you have added.  I had considered adding more information to my own reply at the time, but I thought that what I had was already too long.  I decided to wait and see if the OP had further questions or provided more information about his ultimate aims.

I like your "less is more" comment.  It's often quite true, as is the "vintage computers" thought.  John did mention in the opening post that "I'm planning to upgrade or buy some ...".  If he has some older hardware that is (most importantly) suitable and functional, the cheapest way to get a dramatic boost in productivity is to upgrade by adding a modern GPU.  By 'suitable' I'm thinking of at least the 'Core 2' generation or later - say around 2010 or later - the later the better :-).

As an example, nearly 2 years ago, I was considering retiring a group of 6 machines, first built in late 2008.  They had Intel Q6600 2.4GHz quad core processors with 2x1GB sticks of DDR2 RAM and 300W 80+ efficient PSUs.  They had been doing CPU task crunching all their life.  All 6 are still running, with GPUs ranging from RX 460 to RX 580.  I've upgraded the PSUs for any that have more than an RX 460.  I've upgraded the RAM to at least 3GB (which actually is sufficient, even with a full graphics desktop environment).  They don't run CPU tasks any more, although they did each run a single CPU task until the summer heat arrived last October.  There is virtually no difference in output from those old clunkers when compared with the same GPU in a much more modern machine.

For John's benefit: if contemplating the upgrade of older hardware, you need to pay close attention to the condition of components like the capacitors and chokes on the motherboard.  You also need to look closely at the power ratings of the PSU.  You really want 80+ efficiency if possible, and you must have a decent modern design that can supply virtually its full output at 12V.  It's wise to use a simple plug-in power meter to see what the machine draws from the wall at full load.  Make sure you're not drawing more than about 60% of the PSU's rated output; PSUs are most efficient in the 40-60% output range.  Make sure the PSU fan spins freely.  If you can, leave the cases completely open to assist with heat dissipation.
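As a worked example of that 60% rule of thumb (the wattages and efficiency figure below are invented for illustration, not measurements from my machines):

# Quick PSU headroom check. Wall draw and efficiency values are illustrative only.

def psu_load_fraction(wall_watts: float, rated_watts: float,
                      efficiency: float = 0.88) -> float:
    # The PSU delivers less than it pulls from the wall because of
    # conversion losses, so DC output is roughly wall draw x efficiency.
    return wall_watts * efficiency / rated_watts

load = psu_load_fraction(280, 450)     # e.g. 280 W at the wall on a 450 W unit
print(f"PSU output load: {load:.0%}")  # ~55%, inside the 40-60% sweet spot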

 If you have further questions, please feel free to ask.

 

Cheers,
Gary.

kb9skw
kb9skw
Joined: 25 Feb 05
Posts: 21
Credit: 372896707
RAC: 95297


Gavin and Gary, how does one go about setting up this mechanism? I recently brought a machine online with two RX 570 GPUs, each running 2x, and I find it idle in the morning. The two GPUs are being driven by an old Core 2-based Pentium E2200, and I suspect that running four work units at once on a dual-core CPU is the issue. The CPU is not doing any crunching of its own, as the four GPU tasks are already using 50-65% of it.

 

Thoughts?
