CPUs process data using instructions stored in the computer's memory, or RAM. RAM is a temporary storage area that holds information and instructions for the microprocessor until they are required. The two main processor classifications are the Reduced Instruction Set Computer (RISC) and the Complex Instruction Set Computer (CISC). The primary difference between the two is that a RISC-based chip uses a smaller set of simpler instructions, which allows a higher clock frequency and more work per clock cycle than a CISC processor. CISC-based …
L1 Cache
The L1 cache is the first tier in a computer processor's memory cache system, and it increases the speed at which the processor delivers results to the user. The L1 cache sits between the processor and the computer's RAM (Random Access Memory) and stores the user's most frequently accessed data so the processor can retrieve it more quickly. The L1 cache is usually between 8 KB and 64 KB in size, making it the smallest memory cache in a multi-level cache system.
How L1 Cache Works
Because the L1 cache …
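The lookup behavior described above can be sketched with a toy model: a small, fast cache in front of a larger backing store, where repeated access to a small working set mostly hits the cache. The capacity, eviction policy, and data here are illustrative assumptions, not real hardware behavior.

```python
# Toy sketch of an L1 cache in front of RAM. Capacity and FIFO eviction
# are simplifying assumptions for illustration only.

L1_CAPACITY = 4  # toy capacity; a real L1 cache holds far more

ram = {addr: f"data@{addr}" for addr in range(100)}  # backing store ("RAM")
l1 = {}                                              # the cache itself
hits = misses = 0

def load(addr):
    """Return the value at addr, filling the L1 cache on a miss."""
    global hits, misses
    if addr in l1:              # L1 hit: fast path
        hits += 1
        return l1[addr]
    misses += 1                 # L1 miss: fetch from slower RAM
    if len(l1) >= L1_CAPACITY:
        l1.pop(next(iter(l1)))  # evict the oldest entry (simple FIFO policy)
    l1[addr] = ram[addr]
    return l1[addr]

# Repeated access to a small working set: only the first pass misses.
for _ in range(10):
    for addr in (1, 2, 3):
        load(addr)

print(hits, misses)  # → 27 3
```

The first pass over the three addresses misses and fills the cache; the remaining nine passes hit, which is why real programs with small, hot working sets benefit so much from L1.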
EM64T (Intel 64)
EM64T (Extended Memory 64 Technology), now more commonly known as Intel 64 or, together with AMD64, as x64, is a set of 64-bit extensions to the x86 instruction set implemented by central processing units (CPUs). It is widely used in Intel's processors, including the Pentium 4, Pentium D, Pentium Extreme Edition, Celeron D, Xeon, Pentium Dual Core, and Core 2 processors. Originally codenamed Yamhill, EM64T was first announced in 2004, when the Intel chairman at the time, Craig Barrett, announced that it was underway. This technology went through quite a bit of name …
PIO (Programmed Input/Output)
PIO stands for Programmed I/O, the oldest method of transferring data over an IDE/ATA interface. The PIO technique uses the CPU and support hardware to control the transfer of data between the system itself and the hard drive. The speed of a PIO transfer is called a PIO mode. The modes start at Mode 0, and each successive mode has a faster cycle time in nanoseconds. Higher modes are therefore better, because data can be transferred much faster …
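The relationship between a mode's cycle time and its throughput is simple arithmetic: the ATA data bus moves 16 bits (2 bytes) per cycle, so throughput is 2 bytes divided by the cycle time. A quick sketch using the standard ATA cycle times for each PIO mode:

```python
# Throughput per PIO mode: 2 bytes per cycle / cycle time.
# Cycle times are the standard ATA figures for PIO Modes 0-4.

pio_cycle_ns = {0: 600, 1: 383, 2: 240, 3: 180, 4: 120}

for mode, cycle_ns in sorted(pio_cycle_ns.items()):
    mb_per_s = 2 / (cycle_ns * 1e-9) / 1e6  # bytes/s converted to MB/s
    print(f"PIO Mode {mode}: {cycle_ns} ns cycle -> {mb_per_s:.1f} MB/s")
```

This reproduces the familiar figures of roughly 3.3 MB/s for Mode 0 up to 16.7 MB/s for Mode 4, showing directly why a shorter cycle time means a faster mode.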
What is a Gigaflop?
FLOPS is an acronym for a unit of measurement known as "FLoating point Operations Per Second". This measurement is important in determining the number of operations that computer technologies can handle. In today's computing, different hardware options can handle gigaflops of data. A gigaflop is one billion floating point operations per second. This unit is useful for modern computers because they process so many operations in a given second. The fastest modern supercomputers are capable of performing at the petaflop level, …
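The unit itself can be demonstrated with a rough timing sketch: count floating point operations performed, divide by elapsed time. Interpreter overhead means pure Python measures far below what the hardware can actually do, so the number printed only illustrates the unit, not the machine's real capability.

```python
import time

# Rough, illustrative FLOPS estimate: perform a known number of floating
# point operations and divide by elapsed wall-clock time. Pure Python is
# slow, so this vastly understates the hardware's true rate.

N = 1_000_000
x = 1.000001
start = time.perf_counter()
acc = 0.0
for _ in range(N):
    acc = acc * x + 1.0   # two floating point operations per iteration
elapsed = time.perf_counter() - start

flops = 2 * N / elapsed
print(f"~{flops:,.0f} FLOPS ({flops / 1e9:.4f} gigaflops)")
```

A gigaflop is then just `1e9` on this scale, a teraflop `1e12`, and a petaflop `1e15`.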
High Performance Computing
High performance computers have been around for a little more than two decades, but it wasn't until the early 1990s that the international community decided it needed a new way of assessing and measuring supercomputers. In 1993, a benchmark test and its accompanying list, known as the Top500, was born. The Top500 categorizes the power and capabilities of the top 500 high performance computers in the world. The majority of this article lists the manufacturers of the top 10 supercomputers on that list and describes the …
What is a Teraflop?
FLOPS is an acronym for a unit of measurement known as "Floating point Operations Per Second". A teraflop is one trillion floating point operations per second. A variety of technologies can now offer more than a teraflop of computing power, the most recognizable being servers and desktop supercomputers.
Why Teraflops are Useful
A teraflop represents a large amount of computing power. High-end electronics such as servers must be capable of handling large amounts of information. Handling a large amount of strain while remaining fast and capable of doing background work, and …
Checksum Error
In order to understand what a checksum error is, it is important to first learn what a checksum is. A checksum is a redundancy check performed during a computer's start-up process, which makes sure that the computer's data is intact and has not been tampered with. The data is scanned and tested for accuracy, either based on how well it relates to data elsewhere or based on previous data that was stored on the same computer. Essentially, all of the bits of data in a particular document or file are added up and a …
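The "added up" idea can be shown with a minimal additive checksum: sum every byte of the data and keep the low 8 bits. If a recomputed checksum does not match the stored one, the data was altered, which is exactly the condition reported as a checksum error. The data strings below are invented for the sketch.

```python
# Minimal additive checksum of the kind the entry describes: sum all the
# bytes and keep the low 8 bits. A mismatch against the stored value
# signals a checksum error. The sample data is made up for illustration.

def checksum(data: bytes) -> int:
    return sum(data) % 256

original = b"BIOS settings block"
stored = checksum(original)

corrupted = b"BIOS settings bl0ck"  # one byte flipped to simulate corruption

print(checksum(original) == stored)    # → True: data intact
print(checksum(corrupted) == stored)   # → False: checksum error detected
```

Real systems use stronger checks (CRCs, cryptographic hashes) for the same purpose, since a simple byte sum misses many kinds of corruption, such as two errors that cancel out.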
HyperTransport
HyperTransport is a CPU-to-I/O and CPU-to-CPU bus design. HyperTransport is an open standard that has been incorporated into AMD's Opteron and Athlon 64 64-bit x86 processors, Transmeta's Efficeon x86 processor, Broadcom's BCM1250 64-bit MIPS processor, and PMC-Sierra's RM9000 64-bit MIPS processor family. Integrating HyperTransport into the CPU eliminates the Front Side Bus, along with the performance penalties usually associated with that bus. HyperTransport affects more than the CPU, though: it is a complete system bus that integrates PCI, PCI-X, USB, FireWire, AGP 8x, InfiniBand, …
CPU (Central Processing Unit)
CPU is an acronym that stands for central processing unit. The central processing unit is responsible for performing all of the mathematical calculations required for a computer to function properly. Because a computer cannot function without the CPU (which may also be referred to as the central processor or just the processor), it is not uncommon to hear people refer to the CPU as the "brains" of a computer.
How does the CPU work?
To properly perform its job, the CPU must complete a cycle of four steps. …
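The entry is cut off before naming its four steps, so the breakdown below follows the common textbook cycle of fetch, decode, execute, and write-back; the instruction format and register names are invented for this sketch.

```python
# Toy CPU loop illustrating the classic four-step instruction cycle:
# fetch, decode, execute, write-back. This breakdown and the instruction
# set here are textbook conventions, not taken from the truncated entry.

program = [
    ("LOAD", "r0", 5),            # r0 = 5
    ("LOAD", "r1", 7),            # r1 = 7
    ("ADD",  "r2", "r0", "r1"),   # r2 = r0 + r1
    ("HALT",),
]

registers = {"r0": 0, "r1": 0, "r2": 0}
pc = 0  # program counter

while True:
    instr = program[pc]          # 1. fetch the next instruction
    op, *args = instr            # 2. decode the opcode and operands
    pc += 1
    if op == "HALT":
        break
    if op == "LOAD":             # 3. execute the operation
        dest, result = args
    elif op == "ADD":
        dest, a, b = args
        result = registers[a] + registers[b]
    registers[dest] = result     # 4. write back the result

print(registers["r2"])  # → 12
```

Real CPUs overlap these steps via pipelining, working on several instructions at once, but the logical cycle per instruction is the same.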