I just didn't understand whether allowing the Intel chip to run with Hyper-Threading caused BOINC to misread the installation. I have only one CPU here, but my profile shows 2.
That's OK.
Hyper-Threading is just a way to make a single hardware CPU look like two logical processors. This can allow more productive use of the CPU's available execution resources. Since they are really a single CPU, the gain is not 100% but rather something like 40-60%.
[edit]
Things are different with recent dual-core CPUs: they really are two complete CPUs on one silicon die, with the exception of the memory controller on dual-core AMD chips - that one is still shared between the two cores.
[/edit]
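If you want to see the logical-versus-physical distinction for yourself, here is a minimal Python sketch (not how BOINC actually probes the hardware); it assumes the third-party psutil package is installed for the physical-core count.
[code]
# Minimal sketch: why a Hyper-Threaded single-core chip reports 2 processors.
# Assumes the third-party `psutil` package is available.
import os
import psutil

logical = os.cpu_count()                    # logical processors the OS exposes (HT counts double)
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"Logical processors: {logical}")
print(f"Physical cores:     {physical}")
if logical and physical and logical > physical:
    print("SMT/Hyper-Threading is presenting extra logical CPUs.")
[/code]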
All modern CPUs have multiple integer (ALU) and floating-point units which can be used in parallel - and indeed they are.
Modern processors have long pipelines. A pipeline effectively defines how many stages an instruction has to pass through before it is completely executed - generally one stage per clock cycle. Example: calculating the sum of two numbers may take, say, 10 clock cycles (involving a memory access, the calculation itself and another memory access). Splitting the operation into many stages allows a new instruction to start executing every clock cycle instead of waiting for the previous instruction to finish. The latency of such an instruction is still, say, 10 clock cycles, while the instruction throughput is much higher.
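To put rough numbers on the latency/throughput distinction, here is a back-of-the-envelope sketch; the 10-stage pipeline and 1000 independent instructions are made-up figures, not measurements of any real CPU.
[code]
# Illustrative arithmetic only - all numbers are assumptions.
PIPELINE_STAGES = 10   # latency of one instruction, in clock cycles
N_INSTRUCTIONS = 1000  # independent instructions to execute

# Without pipelining: each instruction waits for the previous one to finish.
unpipelined_cycles = N_INSTRUCTIONS * PIPELINE_STAGES

# With pipelining: once the pipe is full, one instruction completes per cycle.
pipelined_cycles = PIPELINE_STAGES + (N_INSTRUCTIONS - 1)

print(f"Unpipelined: {unpipelined_cycles} cycles")   # 10000
print(f"Pipelined:   {pipelined_cycles} cycles")     # 1009
print(f"Speed-up:    {unpipelined_cycles / pipelined_cycles:.1f}x")
[/code]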
Processors also tend to run different parts of a program in parallel just to gain speed. For example: if a program has a branch (if something1 then something2 else something3), the processor first starts to evaluate the condition - the something1 part. If some other part of the CPU goes idle, the processor then speculatively chooses one of the branches to execute without knowing the result of the condition. If the CPU is highly parallel, it might also execute the other branch (something3). When the branch condition becomes known, execution of the wrong branch is terminated. If the CPU was executing only the wrong branch, some CPU cycles were wasted, but without a wall-clock penalty; if it was executing the right branch, we get a speed-up.
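A toy cost model makes the "wasted cycles, no wall-clock penalty" point concrete. The cycle counts below are illustrative assumptions, not a simulation of any real branch predictor.
[code]
# Toy model of the branch-speculation argument above - assumed numbers only.
COND_CYCLES = 10     # cycles to evaluate the condition (something1)
BRANCH_CYCLES = 10   # cycles to execute the chosen branch body (something2 or something3)

# No speculation: wait for the condition, then run the branch.
no_speculation = COND_CYCLES + BRANCH_CYCLES

# Guessed right: the branch body ran on otherwise-idle units while the
# condition was still being evaluated, so the two overlap.
guessed_right = max(COND_CYCLES, BRANCH_CYCLES)

# Guessed wrong: the speculative work is discarded and the correct branch
# starts after the condition resolves - same wall-clock time as not
# speculating, just with some idle-unit cycles wasted.
guessed_wrong = COND_CYCLES + BRANCH_CYCLES

print(f"No speculation : {no_speculation} cycles")  # 20
print(f"Guessed right  : {guessed_right} cycles")   # 10
print(f"Guessed wrong  : {guessed_wrong} cycles")   # 20 (wasted work, no extra wall-clock time)
[/code]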
Of course there were Intel processors before the 8088, but it's the 8088 that started the era of PCs. It shares its ancestry (the Intel 8080) with Zilog's Z80, which powered many home computers in the early 80's, such as the Sinclair Spectrum.
Not only home systems; my first job in the graphic-arts trade included preparing text for typesetting on a Kaypro II. It originally came with a Z-80 CPU, which my boss fried in an overclocking attempt. He then installed a Z-80B, which he managed to boost to a screaming 4.0 MHz!