Flops, measured versus real

KentC
Joined: 21 Mar 13
Posts: 4
Credit: 9023
RAC: 0
Topic 196872

I am currently looking over the workloads I have done in the past few days, and I have also allocated some more processing power to run the programs.

If I'm not mistaken, my workload and workunit output have increased since I raised the number of cores available to the BOINC client from 2 to 4. I was interested in seeing how that affected my measured operations. However, to my disappointment, both measurements decreased: from 3k and 16k with two cores to 2.1k and 11k with four. I am unsure how this measurement dropped, since I believe (and hopefully not falsely) that I have seen increased throughput.

Real performance and measured performance are of course different, and the latter is not reliable as the workloads change. However, I am very curious to know how the same machine was measured at a lower rating on 4 cores than on 2 cores, with no changes other than the allocated number of processors.

Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

Flops, measured versus real

Do you mean the 'measured speed' of your host, as shown on this page? If so, it's notoriously unreliable as a guide to actual performance (benchmarks often are).

What really matters (and what you can use as a guide) is how long it takes to process a task, and that's shown first by the BOINC manager on your computer and later (once the task has been uploaded and reported) on the website.
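For a host without the manager (e.g. one driven by the command-line client), a rough sketch like the following can pull per-task progress out of `boinccmd --get_tasks`. The exact field labels ("current CPU time", "fraction done", ...) vary a little between client versions, so treat the parsing as illustrative rather than definitive.

# Rough sketch: list the client's tasks and their CPU time via boinccmd.
# Assumes boinccmd is on PATH and the client is running locally; adjust the
# keys of interest to match your client version's output.
import subprocess

def get_tasks():
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    tasks, current = [], {}
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("name:"):
            if current:
                tasks.append(current)
            current = {"name": line.split("name:", 1)[1].strip()}
        elif ":" in line and current:
            key, _, val = line.partition(":")
            current[key.strip()] = val.strip()
    if current:
        tasks.append(current)
    return tasks

if __name__ == "__main__":
    for t in get_tasks():
        print(t.get("name"),
              "cpu time:", t.get("current CPU time", "?"),
              "done:", t.get("fraction done", "?"))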

What sort of video card/chip do you have? Some tasks can be hugely sped up by using the graphics processor (GPU).

KentC
Joined: 21 Mar 13
Posts: 4
Credit: 9023
RAC: 0

Indeed, which is why it is

Indeed, which is why it is only a curiosity. I understand the benchmarks are not reliable and, as mentioned, have no bearing on actual performance. I do not have a graphics card on the host computer, as it is used to run programs with no graphical dependencies (thus I am also not using the BOINC manager but the command-line client instead).

Yes, the measured speed is what I am referring to: the million operations per second figures, both floating point and fixed point.

Of course, graphics cards are for the most part the top end when it comes to putting out flops; many mid-range to high-end graphics cards can outperform my current processor (AMD 3.5 GHz Vishera 8-core) in terms of flops.

This isn't a concern about the amount of work the CPU is doing right now; of course more would be better, but again it's a donated service and I'm only running this while I have the spare CPUs. I was just curious to find out why the benchmark would show a drop, of all things, in what it has measured.

As I am trying to figure out what sort of workload the CPU could potentially handle, finding out its real capabilities in flops would be both helpful and interesting to know, although I don't know how to go about finding that out from the work units or tasks that have been completed.

Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

When I joined here, I too was

When I joined here, I too was unhappy with seeing a drop in the 'measured' performance. But after a period of time, I realised it actually just bounces around and doesn't directly relate to work done.

Really, 'work done' only has meaning if you compare to other people running the same type of work. BRP (pulsar) work is a good example: it's slow on a CPU compared to a GPU. Other work (like direct gravitational wave detection) doesn't use GPUs at all, and your CPU should do really well there.

For example, look at this work unit you've processed; here's the same type of work unit processed by me and my wingman (click the links in the 'computer' column to see the host details).
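On the earlier question of working out "real" flops from completed tasks: one rough back-of-envelope approach is to combine the CPU time shown for a task with the nominal definition of BOINC credit (the cobblestone: 200 credits per day on a 1 GFLOPS reference machine, i.e. roughly 4.32e11 floating-point operations per credit). Einstein@Home grants fixed credit per task type, so this is only a ballpark, and the numbers in the Python sketch below are made up for illustration, not real task data.

# Ballpark "effective FLOPS" from a finished task, assuming the nominal
# BOINC cobblestone definition: 200 credits per day on a 1 GFLOPS machine,
# i.e. 1 credit ~ 4.32e11 floating-point operations.
FLOP_PER_CREDIT = 1e9 * 86400 / 200        # ~4.32e11 FLOP per credit

def effective_gflops(granted_credit, cpu_time_seconds):
    """Rough sustained GFLOP/s for one task on one core."""
    total_flop = granted_credit * FLOP_PER_CREDIT
    return total_flop / cpu_time_seconds / 1e9

# Hypothetical example: a task granted 250 credits that took 30,000 s of CPU time.
print(round(effective_gflops(250.0, 30000.0), 2), "GFLOP/s")   # ~3.6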

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828158831
RAC: 203172

RE: Indeed, which is why it

Quote:

Indeed, which is why it is only a curiosity. I understand the benchmarks are not reliable and, as mentioned, have no bearing on actual performance. I do not have a graphics card on the host computer, as it is used to run programs with no graphical dependencies (thus I am also not using the BOINC manager but the command-line client instead).

Yes, the measured speed is what I am referring to: the million operations per second figures, both floating point and fixed point.

Of course, graphics cards are for the most part the top end when it comes to putting out flops; many mid-range to high-end graphics cards can outperform my current processor (AMD 3.5 GHz Vishera 8-core) in terms of flops.

This isn't a concern about the amount of work the CPU is doing right now; of course more would be better, but again it's a donated service and I'm only running this while I have the spare CPUs. I was just curious to find out why the benchmark would show a drop, of all things, in what it has measured.

As I am trying to figure out what sort of workload the CPU could potentially handle, finding out its real capabilities in flops would be both helpful and interesting to know, although I don't know how to go about finding that out from the work units or tasks that have been completed.

Theoretically it could be that using all 4 cores in your quad-core machine has overwhelmed the other resources of the machine, so your numbers go down. On every machine there is a sweet spot where everything just cruises along; when you start to push the RAM, hard drive, etc. towards their upper limits, the numbers can actually go down. So for instance, if you are using a really old motherboard that is not very efficient and only has 2 or even 3 GB of older, slower RAM in it, it could be overwhelmed and actually not be running as efficiently as it was before.

Now, RAM speed is a tenuous factor: I upgraded my older PCs to faster RAM and it made no difference, but MORE RAM made a huge difference. Of course, on 32-bit Windows machines the roughly 3.5 GB limit that Windows sees IS a problem, a lot of it because Windows has TONS of background things running all the time. I see your one machine is running Linux; I do not know if the max RAM problem exists on Linux-based machines. BUT older motherboards do have a max limit on how much RAM they can take, and that IS an ongoing problem.
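A quick way to see this kind of contention is to time the same floating-point loop with different numbers of worker processes running at once; the per-worker rate typically drops as more cores are loaded. (On the Vishera/Piledriver CPU mentioned above, pairs of cores in each module share a floating-point unit, so some drop in per-core FP numbers under full load is expected.) The toy Python sketch below uses arbitrary loop sizes and is only an illustration, not the BOINC benchmark.

# Toy experiment: run the same floating-point loop in 1, 2 and 4 processes
# at once and compare the per-process rate.  How much the rate drops depends
# on shared resources (FPUs, caches, memory bandwidth, turbo headroom).
import time
from multiprocessing import Pool

ITERS = 5_000_000

def fp_loop(_):
    t0 = time.perf_counter()
    x = 0.0
    for i in range(ITERS):
        x += i * 0.5 - x * 1e-9      # trivial floating-point work
    return ITERS / (time.perf_counter() - t0)   # iterations/s for this worker

if __name__ == "__main__":
    for workers in (1, 2, 4):
        with Pool(workers) as pool:
            rates = pool.map(fp_loop, range(workers))
        per_worker = sum(rates) / len(rates) / 1e6
        print(f"{workers} worker(s): {per_worker:.2f} M iterations/s per worker")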

Nobody316
Joined: 14 Jan 13
Posts: 141
Credit: 2008126
RAC: 0

RE: I see your one machine

Quote:
I see your one machine is running Linux, I do not know if the max ram problem exists in Linux based machines.

A 32-bit OS is the same whether it's Windows, Linux or Mac OS: it can only handle up to 4 GB max (and Windows needs a patch to see and be able to use all of that). Most of the time it's 3.5 GB in practice. 64-bit is better not just because of more RAM, but also because, if need be, you can use a page file larger than 4096 MB. And yes, each series of motherboards does have its limits.

PC setup MSI-970A-G46 AMD FX-8350 8 core OC'd 4.45GHz 16GB ram PC3-10700 Geforce GTX 650Ti Windows 7 x64 Einstein@Home

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0

Just one detail, the flops

Just one detail: the flops figure is a kind of average, and it represents the performance of each core.
If the benchmarks were perfect, they would always give the same result no matter how many cores you were using for BOINC.
Well, not exactly the same, because when you use more cores, other resources that are shared among the cores may limit their performance, so a slight decrease in the benchmark results is expected behaviour.

But also, the BOINC benchmarks are not perfect, and they were never intended to show actual performance; they are meant as an auxiliary measure to help estimate task duration for new hosts when there is no statistical data.
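To make that last point concrete: the client's first guess at a task's runtime on a new host is roughly the work unit's estimated operation count divided by the benchmark speed. The Python sketch below just illustrates that arithmetic; the field names mirror what appears in the client's client_state.xml (rsc_fpops_est for the task estimate, p_fpops for the Whetstone result), and the real client applies further corrections once actual runtimes are known.

# Back-of-envelope version of the initial runtime estimate for a new host:
# estimated operations for the work unit divided by the host's benchmarked
# floating-point speed.
def estimated_runtime_hours(rsc_fpops_est, p_fpops):
    """rsc_fpops_est: estimated FLOPs for the task; p_fpops: Whetstone score in FLOP/s."""
    return rsc_fpops_est / p_fpops / 3600.0

# Hypothetical numbers: a 4e13-FLOP work unit on a core benchmarked at 2.5 GFLOP/s.
print(f"{estimated_runtime_hours(4e13, 2.5e9):.1f} h")   # ~4.4 h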

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

My 32-bit Linux is pae, this

My 32-bit Linux is PAE-enabled, which means it can use up to 64 GB of RAM. I am using 8 GB, the maximum allowed by the mainboards in my 2 Linux boxes.
Tullio

Nobody316
Joined: 14 Jan 13
Posts: 141
Credit: 2008126
RAC: 0

RE: My 32-bit Linux is

Quote:
My 32-bit Linux is PAE-enabled, which means it can use up to 64 GB of RAM. I am using 8 GB, the maximum allowed by the mainboards in my 2 Linux boxes.
Tullio

Ah, nice. That is new to me; I just looked it up. I thought they would have had this many years ago, but everything takes time. I may have to dig deeper into this, since I see it's also available for Windows. If anyone cares to take a look at this... http://msdn.microsoft.com/en-us/library/windows/desktop/aa366796%28v=vs.85%29.aspx

Thanks for the info Tullio

PC setup MSI-970A-G46 AMD FX-8350 8 core OC'd 4.45GHz 16GB ram PC3-10700 Geforce GTX 650Ti Windows 7 x64 Einstein@Home

Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

PAE works well on linux, I'd

PAE works well on Linux, and I'd imagine it does on Windows too (no reason why not). There are many 32-bit Linux systems running more than 4 GB here (up to 32 GB, in fact). There is a small theoretical hit compared to a 64-bit kernel, but then 64-bit binaries have their own performance issues (and in the case of E@H the binaries are 32-bit anyway).

Also, on Linux you can boot either a 64-bit or a 32-bit kernel and everything just keeps on working (i.e. a 32-bit userspace runs fine with a 64-bit kernel). This seems a pretty good set-up, since only a limited number of applications benefit from 64-bit: systems that only run 32-bit apps boot a 32-bit kernel, while systems that also run 64-bit apps boot a 64-bit kernel with userspace support for both 32- and 64-bit binaries.
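If you want to check which combination a particular box is running, a small sketch like this compares the kernel architecture with the bitness of a userspace process (here, the Python interpreter itself); a 64-bit kernel with a 32-bit userspace shows up as 'x86_64' plus a 32-bit process.

# Quick check for a mixed 32/64-bit setup: kernel architecture versus the
# bitness of the running userspace process (the Python interpreter here).
import platform
import struct

kernel_arch = platform.machine()              # e.g. 'x86_64' or 'i686'
userspace_bits = struct.calcsize("P") * 8     # pointer size of this process

print(f"kernel reports: {kernel_arch}")
print(f"this process is {userspace_bits}-bit")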
