How Do Multi-Core Processors Work?

A multi-core processor is a single computing component comprising two or more CPUs ("cores") that read and execute program instructions.


The individual cores can execute multiple instructions in parallel, increasing the performance of software that is written to take advantage of the architecture.


A dual-core setup is somewhat comparable to having multiple separate processors installed in the same computer, but because the two processors are plugged into the same socket, the connection between them is faster.



Ideally, a dual-core processor would be nearly twice as powerful as a single-core processor. In practice, performance gains are said to be about fifty percent: a dual-core processor is likely to be about one-and-a-half times as powerful as a single-core processor.


Multi-core processing is a growing industry trend as single-core processors rapidly reach the physical limits of possible complexity and speed. Most current systems are multi-core. Systems with a large number of processor cores (tens or hundreds) are sometimes referred to as many-core or massively multi-core systems.


Multi-core processors have to solve three problems:


  1. How do they boot?

  2. How do they communicate through memory?

  3. How do they interrupt each other?


Different designs will solve these problems in different ways.


Booting doesn’t happen very often, and tends to be idiosyncratic. Intel processors have a common specification for how initialization works, but it is full of special cases for the different generations and models of processors that most people would recognize as versions of the x86 architecture. You can read all about it in chapter 8, “Multiple Processor Management,” of Volume 3 of the Intel 64 and IA-32 Architectures Software Developer’s Manual.


Generally, one core is chosen as the boot processor; it sets up the system so that the other cores can use a shorter initialization sequence, and it keeps the different processors from stepping on each other’s toes during boot.


After a system is up and running, the different cores mostly communicate through shared memory. Most multiprocessors these days have coherent shared memory, so that changes to memory made by one core will be visible to the others.


This is convenient, but sometimes the changes are not seen in the same order by different cores, so an elaborate system of locks and fences, governed by what are called “memory ordering rules,” is needed. The different cores are co-equal and simply run programs independently; any coordination is a matter for software, either the OS or application code.


Coherent memory isn’t actually necessary for a multiprocessor: Silicon Graphics famously made multiprocessors out of MIPS R3000s (I think!) that did not have coherent memory. This required more care by the OS to manage all the cache flushes necessary for the thing to work.


The final thing you need is some mechanism for sending interprocessor interrupts (IPIs), so that one processor can get the attention of another. These are used by modern operating systems for many purposes, but as an easy example, if an application running on one core has gotten wedged into a tight loop, there has to be a way for another core to break it loose.


IPIs are also used for scheduling, cache flushes, TLB shootdowns, and many other purposes. On Intel processors, interprocessor interrupts are provided by the on-chip APIC (Advanced Programmable Interrupt Controller).


It is certainly possible to run a multiprocessor without IPIs: you can just have a periodic local timer interrupt on every core that checks a communications area in memory, but that would slow things down.


When you get down to the minimum requirements, it is just booting and some means of communication! At this level, a multi-core system is just like a bunch of individual computers on a LAN. Almost anything can be made to work.