It is hard to overstate how far computers have come and how they have changed nearly every aspect of our lives. From basic devices like toasters to advanced ones like spacecraft, you can hardly find a device that doesn't use some kind of computing power.
At the heart of each of these devices is a central processing unit, responsible for executing program instructions as well as coordinating the other components that make up the computer. For an in-depth explanation of CPU design and how a CPU works internally, check out this amazing series here at TechSpot. In this article, however, the focus is on one aspect of CPU design: the multi-core architecture and how it works in modern CPUs.
Unless your computer is over two decades old, a multi-core CPU is likely at the heart of your system, and not only in full-size desktop and server systems, but in mobile and low-power devices as well. To cite a common example, the Apple Watch Series 7 uses a dual-core CPU. For a small device that wraps around your wrist, that shows how important design innovations are in boosting a computer's performance.
On the desktop side, a look at a recent Steam hardware survey shows just how much multi-core CPUs dominate the PC market: more than 70% of Steam users own a CPU with 4 cores or more. But before we dive in, it's a good idea to define some terms, and even though we limit the scope to desktop CPUs, most of what we discuss applies equally to mobile and server CPUs of various capacities.
First of all, let's define what a "core" is. A core is a complete, independent microprocessor capable of running a computer program. A core typically includes arithmetic, logic, and control units, as well as caches and data buses, which allow it to execute program instructions on its own. Multiple cores are contained in a single processor package, where they act as a unit. This arrangement allows the cores to share some common resources, such as cache, which helps speed up program execution. Ideally, you would expect performance to scale linearly with the number of cores in a CPU, but this is usually not the case, as we'll discuss later in this article.
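Why scaling isn't linear is often illustrated with Amdahl's law: the overall speedup is capped by whatever fraction of a program must still run serially. Here is a minimal sketch of the formula (the function name and example numbers are illustrative, not from the article):

```python
def amdahl_speedup(parallel_fraction, num_cores):
    """Amdahl's law: overall speedup when `parallel_fraction` of the
    work can be spread across `num_cores` and the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / num_cores)

# Even with 95% of the work parallelized, 8 cores give well under 8x,
# and throwing 64 cores at the problem still falls far short of 64x.
print(amdahl_speedup(0.95, 8))   # ~5.9x
print(amdahl_speedup(0.95, 64))  # ~15.4x
```

Note how the serial 5% dominates as the core count grows, which is exactly why core counts alone don't translate into proportional performance.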
Another aspect of CPU design that causes confusion for many people is the distinction between a physical core and a logical core. A physical core refers to the actual hardware unit: the transistors and circuitry that make up the core. A logical core, on the other hand, refers to the core's ability to execute an independent thread of instructions. This behavior is made possible by a number of features beyond the CPU core itself, and it depends on the operating system to schedule those threads. Another important factor is that the program being run must be written to be multithreaded, which can sometimes be difficult because the instructions that make up a single program are rarely fully independent.
Furthermore, logical cores map onto the underlying resources of the physical core, so if a physical resource is in use by one thread, other threads that require the same resource must wait, which affects performance. In other words, a physical core can be designed to run more than one thread at a time, and the number of logical cores represents the number of threads that can run simultaneously.
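You can see the logical core count from software and spread independent work across it. A minimal sketch in Python (the prime-counting workload is just an illustrative stand-in for CPU-bound work):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound work: count primes below `limit` by trial division."""
    return sum(
        1 for n in range(2, limit)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

if __name__ == "__main__":
    # os.cpu_count() reports *logical* cores -- the total number of
    # hardware threads the OS can schedule at once.
    logical = os.cpu_count() or 1
    print(f"Logical cores visible to the OS: {logical}")

    # One independent chunk of work per logical core; the OS schedules
    # each worker process onto an available hardware thread.
    with ProcessPoolExecutor(max_workers=logical) as pool:
        results = list(pool.map(count_primes, [20_000] * logical))
    print(results)
```

On an SMT-capable CPU with two threads per physical core, the logical count reported here is typically twice the physical core count; whether the two threads on one core actually run at full speed depends on the resource contention described above.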
A quick look at the pre-multi-core era lets us appreciate how far we've come. A single-core CPU, as the name suggests, is a central processing unit with a single physical core. The first commercially available CPU was the Intel 4004, a technical marvel when it was released in 1971.
This 750 kHz, 4-bit CPU revolutionized not only microprocessor design but the entire integrated circuit industry. Around the same time, other notable processors, such as the Texas Instruments TMS-0100, were developed to compete in similar markets, including calculators and control systems. From then on, improvements in CPU performance came largely from increases in clock frequency and expansion of the data/address bus widths. This is evident in designs such as the Intel 8086, a single-core processor released in 1978 with a maximum clock frequency of 10 MHz, a 16-bit data bus, and a 20-bit address bus.
The transition from the Intel 4004 to the 8086 brought a 10-fold increase in transistor count, a trend that held for later generations as specifications grew. Beyond rising clock frequencies and wider buses, other innovations that helped improve CPU performance included dedicated floating-point units and multipliers, as well as improvements and extensions to the instruction set architecture (ISA).
Further research and investment led to the first pipelined CPU design with the Intel i386 (80386). Pipelining allows the CPU to work on multiple instructions in parallel by splitting instruction execution into separate stages: while one instruction occupies one stage, other instructions can proceed through the remaining stages.
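The throughput benefit of that staging is easy to quantify with the textbook ideal-pipeline model (a sketch that ignores stalls and hazards; the function names are illustrative):

```python
def pipeline_cycles(num_instructions, num_stages):
    """Cycles to retire all instructions on an ideal pipeline: the first
    instruction takes num_stages cycles to flow through, after which one
    instruction completes every cycle."""
    return num_stages + num_instructions - 1

def sequential_cycles(num_instructions, num_stages):
    """Non-pipelined design: each instruction occupies all stages in
    turn before the next one starts."""
    return num_instructions * num_stages

n, s = 100, 5
print(pipeline_cycles(n, s))    # 104
print(sequential_cycles(n, s))  # 500
```

For long instruction streams the ideal speedup approaches the number of stages, which is why pipelining was such a milestone; real pipelines fall short of that due to branches and data dependencies.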
Superscalar architecture was also introduced, and it can be considered a precursor to multi-core design. Superscalar execution duplicates some instruction execution units, allowing the CPU to run multiple instructions at the same time, provided there are no dependencies among the instructions being executed. The first commercial CPUs to implement this technology were the Intel i960CA, the AMD 29000 series, and the Motorola MC88100.
One of the main factors behind the rapid generational increases in CPU performance was transistor technology, which allowed transistor sizes to keep shrinking. This significantly reduced the transistors' operating voltage and allowed CPUs to pack in huge numbers of transistors at the chip level, making room for larger caches and other dedicated accelerators.
Photo: Matthieu Riegler
Back in 1999, AMD introduced the classic and beloved Athlon CPU, which months later reached an astonishing clock frequency of 1 GHz, incorporating all the technologies we've covered so far. The chip delivered significant performance. Even better, CPU designers kept improving and innovating with new features such as branch prediction and multithreading. By that time, Intel's Pentium 4 processor reached clock frequencies of up to 3.8 GHz and supported two threads. Back then, many of us expected clock frequencies to keep climbing and dreamed of processors running at 10 GHz and beyond, but our ignorance can be forgiven: the average computer user didn't have access to the information available today.
Increasing clock frequency and shrinking transistors yields faster designs, but at the cost of higher power consumption, because of the proportional relationship between frequency and power. The extra power also increases leakage current, which doesn't seem like a problem on a chip with 25,000 transistors but is a big one on modern chips with billions of them.
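That proportional relationship comes from the standard first-order model of CMOS dynamic power, P ≈ α·C·V²·f. A quick sketch (the capacitance and voltage values below are made-up placeholders, chosen only to show the scaling):

```python
def dynamic_power(c_eff, v_dd, freq, activity=1.0):
    """First-order CMOS dynamic power: P = activity * C * V^2 * f.
    Power is linear in frequency but quadratic in supply voltage."""
    return activity * c_eff * v_dd ** 2 * freq

base = dynamic_power(c_eff=1e-9, v_dd=1.2, freq=3e9)

# Doubling the clock doubles power...
print(dynamic_power(1e-9, 1.2, 6e9) / base)  # 2.0
# ...while doubling the voltage quadruples it.
print(dynamic_power(1e-9, 2.4, 3e9) / base)  # 4.0
```

In practice it's even worse: pushing frequency higher usually requires raising the voltage too, so power grows faster than linearly with clock speed.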
Higher power also raises chip temperatures significantly, and excessive heat can damage chips when it cannot be dissipated effectively. This practical ceiling on clock frequency meant designers had to rethink CPU design if significant progress in the optimization process was to continue.

Enter the multi-core era
A single-core processor with multiple logical cores is like a person with as many arms as logical cores, while a multi-core processor is like a person with multiple brains and a corresponding number of arms. Technically speaking, having multiple brains means your ability to think is greatly multiplied. But before we get carried away with the character we just envisioned, let's take a step back and look at another computer design that came before multi-core: the multiprocessor system.
These are systems with more than one physical CPU, sharing main memory and peripherals on the motherboard. Like many system innovations, these designs were aimed primarily at specific workloads and applications, the kind we see in supercomputers and servers. The concept never took off on the desktop because of its poor performance for most typical consumer applications. Since the CPUs had to communicate over an external bus and through RAM, they had to cope with significant delays. RAM is "fast," but it is very slow compared to the registers and caches inside a CPU core. Moreover, since most desktop applications were not designed to take advantage of these systems, the cost of building a multiprocessor system for home and desktop use simply wasn't worth it.
By contrast, in a multi-core CPU the cores are much closer together, built on a single package with faster communication buses. In addition, the cores have shared caches alongside their private ones, which improves communication between cores by significantly reducing latency. The tighter cohesion and cooperation between cores means better performance than multiprocessor systems, and desktop applications can actually benefit from it. In 2001 we saw the first true multi-core processor, released by IBM under its POWER4 brand and, as expected, designed for workstation and server applications. In 2005, however, Intel introduced its first consumer-focused dual-core, multi-core-design processor, and later that year AMD released its answer, the Athlon 64 X2.
The GHz race slowed, and designers had to focus on other innovations to improve CPU performance, chiefly a series of design refinements and overall architectural improvements. A key avenue was the multi-core design, which sought to increase the core count with each generation. The defining moment for multi-core designs was the release of the Intel Core 2 series, which started as dual-core CPUs and grew to four cores in subsequent generations. AMD likewise followed suit with the Athlon 64 X2, a dual-core design, and later the Phenom series, which included three- and four-core designs.
A brief history of multi-core CPUs
On another front, AMD has confirmed the introduction of what it calls 3D V-Cache, which allows a large cache to be stacked directly on top of the processor cores, reducing latency and dramatically increasing performance. This implementation represents a new form of chip packaging and is an area of research with great potential for the future. Transistor sizes, meanwhile, keep shrinking: 5 nm currently appears to be the leading edge, and although companies like TSMC and Samsung have announced testing at 3 nm, we seem to be approaching the 1 nm limit very quickly. We'll have to wait and see what comes next.
A lot of work is now going into finding suitable alternatives to silicon, such as carbon nanotubes, which are smaller than silicon transistors and could help keep sizes shrinking for longer. Another area of research concerns how transistors are structured and packaged, with approaches such as AMD's V-Cache stacking and Intel's Foveros 3D, which can go a long way toward improving IC integration and increasing performance.
Another domain that promises to revolutionize computing is optical processing. Unlike traditional semiconductor technology built around electrons, photonic processors use light (photons) instead. Because light faces far less resistance than electrons moving through metal wiring, the approach has tremendous potential. Realistically, it may take decades to build full-fledged optical computers, but in the next few years we could see hybrid machines that combine optical CPUs with traditional electronic motherboards and peripherals to deliver the performance we want.
Another popular, yet computationally very different, model is the quantum computer. It is still in its infancy, but the amount of research and development going into it is enormous.
The first 1-qubit processors were introduced a while ago, and then came the 54-qubit processor announced by Google in 2019, with a claim of achieving quantum supremacy; that's an interesting way of saying their processor can do something a traditional CPU cannot do in a realistic amount of time. Not to be outdone, a team of Chinese researchers unveiled their 66-qubit supercomputer in 2021, and the competition keeps heating up, with companies like IBM announcing its 127-qubit quantum computing chip and Microsoft announcing its own efforts to develop quantum computers.
Even if you won't be running any of these systems in your gaming PC anytime soon, it's quite possible that at least some of these new technologies will make their way into the consumer space in one form or another. Widespread adoption of new technologies has always been one way to bring costs down and pave the way for more investment in better technology.
That wraps up our brief history of multi-core CPUs, the designs that preceded them, and the forward-looking models that could replace the multi-core CPUs we know today. If you want to dive deeper into the CPU, check out Anatomy of the CPU (and the whole Anatomy of Hardware series), as well as our complete history of the CPU.