Table of Contents 

Section 1: Introduction
  1.1: What's New
  1.2: Disclaimer
Section 2: General Hardware Knowledge
  2.1: The Central Processing Unit
  2.2: The Graphics Card
  2.3: The Memory
  2.4: The Motherboard
  2.5: The Power Supply
Section 3: Getting Started
  3.1: Case Management
    3.1A: Airflow Management [031A]
    3.1B: Cable Management [031B]
    3.1C: Lighting And Other Visuals [031C]
  3.2: Hardware Installation
    3.2A: The Central Processing Unit [032A]
    3.2B: The Graphics Cards And Expansion Cards [032B]
    3.2C: The Hard Drives And Bay Drives [032C]
    3.2D: The Memory [032D]
    3.2E: The Motherboard [032E]
    3.2F: The Power Supply [032F]
    3.2G: External Hardware - Input Devices [032G]
    3.2H: External Hardware - Monitors [032H]
    3.2I: External Hardware - Speakers And Headphones [032I]
  3.3: Component Cooling
    3.3A: Convection Cooling [033A]
    3.3B: Liquid Cooling [033B]
    3.3C: Peltier/TEC Cooling [033C]
    3.3D: Phase Change/Refrigeration Cooling [033D]
    3.3E: Thermal Paste [033E]
Section 4: The Basics Of SLI
  4.1: Fundamentals Of SLI
    4.1A: SLI Installation And Activation [041A]
    4.1B: SLI Rendering Modes [041B]
    4.1C: SLI Rendering Profiles [041C]
  4.2: Standard SLI Requirements
    4.2A: nForce SLI Chipset Specifications [042A]
    4.2B: SLI Configuration Power Consumption [042B]
  4.3: Graphics Cards And GPU Hierarchy
Section 5: Credits

Key: Bright green numbers indicate a main topic, a mid-tone green number indicates a parent topic, and a dark green number indicates a child topic, increasing in specificity in that order. The numbers in brackets next to the topics are index identifiers; enter one into your browser's "find" function to jump between topics quickly!

Section 1: Introduction 
Hello everybody, this is my attempt at creating a general-purpose guide (please note this is not a FAQ article) to help those who would like to learn more about PC components, whether to spot the best upgrade or simply to expand their general knowledge of the subject. I'll be adding, removing, and editing much of the content here on my own, but I'd also like as much input as possible from you, the readers, to make this more user-friendly. If you have any questions, comments, ideas, or suggestions, please use the feedback thread. I'll be covering hardware here for the most part unless drivers are involved, but if need be, I can add whatever software coverage may be necessary. I realize that the length of this article may be a bit unsettling, but I did my best to write the descriptions in detail while keeping them easy to understand. Many, many hours have gone into writing this guide, and I hope they have all gone to good use!

What's New 
This is a massive revision of the original H/SLI guide, which has now been broken down into five major sections separated by post. Additionally, there are links at the top of each post to allow readers to skip to certain sections. Several definitions have been clarified and elaborated upon to make the guide more informative, and some large paragraphs have been broken up to make it more user-friendly. Perhaps the most noticeable addition is the inclusion of specification details for all nForce SLI-ready chipsets, along with single- and dual-video-card configuration power consumption figures. As always, if you have a suggestion for material or notice an inaccuracy, please post it here and let me know!

Disclaimer 
All information contained within this article is for educational purposes only; it is not to be considered or referred to as official documentation, nor may it substitute for professional assistance or knowledge. All content contained within this guide is subject to being false or incorrectly represented, and may be edited, removed, or otherwise altered without notice. Any advice and/or opinions in this guide are not those of NVIDIA and should not replace those of a professional. By reading this guide, you absolve NVIDIA, its employees and affiliates, the author, and all other forum members of all responsibility for your actions. This guide may be freely linked to or duplicated electronically on the condition that proper credit is given to the author and a direct link to the original article is provided.
Section 2: General Hardware Knowledge 
This section covers all of the essential hardware components: the CPU, the graphics card, the system memory, the motherboard, and the power supply. Efforts have been made to provide you with the best (yet easiest to understand) explanation of the critical aspects of each component: its purpose, base functions, performance examples, and recommendations. The terms included with every component's description are points for consideration, such as the socket type of a CPU or the difference between DDR and DDR2 RAM. Enough talk already; let's get down to the bones of it all!
The Central Processing Unit 
The CPU, or Central Processing Unit, does exactly what its name implies. The CPU is like the brain of the computer, able to handle intense mathematical calculations and execute complex sets of instructions. Intel and AMD are the two mainstream CPU vendors, supplying a wide spectrum of specialized processors to the world's markets. What kind of processor you should purchase depends heavily on what you will be using it for, such as video gaming, server work, or mobile computing, and variations of these processors exist to let the user purchase a speed or format suited to their needs. To find the processor that is best for you, here are a few points to consider: will you be multitasking frequently, running intense graphics and games often, engaging in video editing or 3D modeling, or all of the above? If so, you're probably going to want the best of the best, which is a dual-core 64-bit processor. Before we get ahead of ourselves, let's break down the important aspects of a CPU:
- CPU Clock: This is what separates a slow processor from a quick one with the smallest explanation required, or at least on a general basis. It is the numerical representation of how many clock cycles per second the processor performs, and the value is the product of the base (bus) clock and the CPU multiplier. The higher the value, the better the processor will be at handling large amounts of information at any given time. However, a higher-clocked processor will not necessarily outperform one with a lower clock, because the microarchitectures of the CPUs differ. A Pentium Extreme Edition 965 clocked at 3.73 GHz sounds impressive, but in actuality it is easily outperformed by a Core 2 Duo E6600 clocked at 2.4 GHz, because the latter has a more efficient processor architecture and a smaller manufacturing process that allows for a higher number of transistors, resulting in more computing horsepower. Overclocking is the raising of the base clock, the CPU multiplier, or both simultaneously to increase the overall processor speed. In some cases, the processor may demand more power than its current voltage setting can supply once a certain speed has been reached, at which point raising the CPU voltage is necessary for stability and for any further speed increases. The downsides to overclocking and/or overvolting are a reduction in the processor's lifespan, increased heat production, and possible component failure.
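As a quick illustration of the clock arithmetic above, here is a small Python sketch. The function name and the base clock/multiplier figures are my own for the example, not tied to any particular CPU:

```python
def cpu_clock_mhz(base_clock_mhz, multiplier):
    """Effective CPU frequency: the base (bus) clock times the multiplier."""
    return base_clock_mhz * multiplier

# A hypothetical 266 MHz base clock with a 9x multiplier:
print(cpu_clock_mhz(266, 9))  # 2394, i.e. roughly 2.4 GHz
# Overclocking raises either factor, and the product with it:
print(cpu_clock_mhz(300, 9))  # 2700
```

Raising the base clock also speeds up everything else tied to it (the FSB and memory), which is why multiplier changes are often the gentler overclocking route.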
- Data Bus Width: How large the internal data buses of a CPU are determines the maximum amount of RAM that can be addressed by the processor, with the two most common bus sizes being 32-bit and 64-bit. Those numbers represent how wide the lanes are (in bits) for the processor to send or retrieve information from the memory subsystem. For instance, a 32-bit processor can address a maximum of 4 gigabytes of memory, found with this equation: 2^32 = 4,294,967,296 bytes; divide by 1,024 three times (to convert to kilobytes, then megabytes, then gigabytes) and you get 4 gigabytes. On a 64-bit processor, however, the maximum amount of RAM is 2^64 = 18,446,744,073,709,551,616 bytes, which is well over 17 billion gigabytes (16 exabytes!). Unfortunately, current and most likely future operating systems map system and video RAM together under a virtual cap, so it is highly unlikely that one could ever expect to see more than a tiny fraction of that estimate utilized in a standard system. That aside, the maximum RAM grows exponentially with bus width rather than linearly, which is why 64-bit processors don't allow for a meager total of just 8 GB. However, all of this extra addressing ability comes at a price: in order for all 64 bits to be utilized, a 64-bit operating system (OS) needs to be installed on the system. From there, only games and applications coded for a 64-bit architecture will truly take advantage of the added headroom, and that typically requires recoding much of the application. On the flip side, programs are not required to be coded in native 64-bit to run successfully on such a CPU; through compatibility modes or emulation, 32-bit applications can be handled by the CPU just as any other program would be, barring any compatibility issues. Despite all the requirements, when multi-threaded 64-bit applications become common, there will be a substantial benefit over single-threaded 32-bit versions.
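The 32- versus 64-bit address-space arithmetic above is easy to verify with a few lines of Python (purely illustrative; the function name is my own):

```python
def max_addressable_bytes(bus_width_bits):
    """Largest flat address space a bus of the given width can reach."""
    return 2 ** bus_width_bits

GIB = 1024 ** 3  # bytes per gigabyte: dividing by 1,024 three times

print(max_addressable_bytes(32) // GIB)  # 4 (gigabytes)
print(max_addressable_bytes(64) // GIB)  # 17179869184 -- over 17 billion GB
```

Each extra address bit doubles the reachable memory, which is why the jump from 32 to 64 bits multiplies the ceiling by over four billion rather than merely doubling it.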
- Dual- And Multi-Core: Dual- and multi-core processors are like their single-core kin but contain two or more identical processors side by side on the CPU die, each separately connected to the Front-Side Bus. The terms dual- and multi-core can be confused with dual- and multi-processor; the difference is that a multi-processor system has more than one physical CPU present on the motherboard, whereas a multi-core processor packages its cores in a single chip. The perks of a dual- or multi-core CPU are that more than one core is able to handle a workload, each core has an independent connection to the FSB, and each has cache allocated to it. This provides enough system resources for the operating system to run multiple applications in parallel. Multi-core processors have their own demons, though: much like 64-bit processors, programs usually have to be coded to take advantage of a multi-core workload distribution. Applications like video, photo, or audio editing can naturally harness the extra power because all of the information is available to the CPU at once to distribute accordingly, whereas video games, in their current state, are not a likely candidate for multi-core processing since the data is processed in real time and little or no information is available beforehand.
- Front-Side Bus: The FSB, or Front-Side Bus, is the communication bridge between the CPU and the Northbridge. The frequency of the FSB is related to the maximum memory bandwidth, but since the Front-Side Bus performs four data transfers per clock, compared to two transfers per clock for system memory, half of the FSB's effective speed is the maximum usable speed for the memory. With the release of new CPUs sporting a 1333 MHz FSB, memory clocked at or around an effective 667 MHz will be fully utilized, but most mainstream processors have only a 1066 MHz FSB, meaning only memory near the 533 MHz range will be used fully. The reason the maximum memory bandwidth is half of the effective FSB speed is that the FSB has a quad data rate (QDR) while even the most advanced memory has a double data rate (DDR); only if system RAM used QDR technology could it match the FSB clock. The FSB clock can be increased only if both the CPU and motherboard allow it, and the same risks apply to overclocking the FSB as to overclocking the CPU itself. While it is possible to raise the FSB clock higher than originally configured, and though it will allow for more memory bandwidth, most motherboards place a limit on the RAM's bandwidth, which can render the exercise rather pointless. There are high-speed DDR2 modules available with effective clocks anywhere between 800 MHz and 1.25 GHz (and higher for DDR3 modules), and while the FSB on most processors is limited to speeds well below that, there will be an increase in system performance. However, the improvement will be marginal and is not likely to fully justify the cost of such RAM. In some cases, faster RAM can even degrade performance because of its looser timings compared to modules operating at a lower frequency, which are able to use tighter timings.
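The quad-pumped versus double-pumped relationship above can be sketched like this (function names are my own; 333 MHz is the base clock behind a "1333 MHz" FSB):

```python
def fsb_effective_mhz(base_clock_mhz):
    """A quad-pumped FSB performs four transfers per base clock."""
    return base_clock_mhz * 4

def max_usable_memory_mhz(fsb_effective):
    """DDR transfers twice per clock, so it saturates at half the FSB's effective rate."""
    return fsb_effective / 2

print(fsb_effective_mhz(333))       # 1332 -- marketed as a "1333 MHz" FSB
print(max_usable_memory_mhz(1332))  # 666.0 -- DDR2-667 territory
```

This is why pairing DDR2-667 with a 1333 MHz FSB is a balanced match, while faster modules spend most of their headroom idle.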
- L1/L2 Cache: In order to reduce latencies between the processor and the memory subsystem, CPUs have dedicated, high-speed storage areas to hold frequently accessed data. Larger amounts of cache allow for better performance, with the L1 cache being small in size but very fast while the L2 cache has a larger capacity but is not as responsive. This is where the CPU writes the results of an operation before transmitting them to the appropriate subsystem; it is much faster than the system RAM, though far smaller in capacity than system memory or the hard disks.
- Manufacturing Process: As technology improves, so does the manufacturing process of the CPU and its inner components. This measurement is taken in nanometers from the thinnest-gauge wire inside the processor; the smaller the number, the thinner the wire. While this does not represent the width of all of the wires in the processor, it does indicate that more transistors can be fitted thanks to the space saved by the smaller wires. The manufacturing process shouldn't be considered a definitive method of comparing two processors' computing power, but a smaller process generally means the CPU has more transistors, which tends to lead to better performance.
- Socket Format: Though the socket type of a processor has no major impact on its performance, it does limit the component to approved motherboards. A few examples are AMD's Socket 939 and AM2, which contain 939 and 940 pins respectively, and Intel's Socket T (a.k.a. LGA 775) with 775 contacts. Typically, CPU vendors arrange the pins in such a way that processors from an earlier socket (say, a Socket 939 CPU in an AM2 board) will not seat correctly, which forces the user to upgrade.
The Graphics Card 
Graphics cards have come a long way from their ancestors. Nowadays graphics cards have specially designed dedicated memory and highly efficient graphics processing units (GPUs) that can churn out the amazing, high-definition 3D scenes of today's most advanced video games. The resurrection of an old concept has become a new PC trend: installing two video cards on a single motherboard to operate in tandem and theoretically double the performance. This feature used to be found only on dated Voodoo video cards, but NVIDIA brought the old idea to brilliant life, labeling it "SLI" (Scalable Link Interface.) ATI, the second name in video cards, countered with their CrossFire concept but have not seen the same success as NVIDIA's SLI. SLI will be explained further in Section 4: The Basics Of SLI; for now, we will review the key points in deciding what video card may be right for you.
- Card Interface: Most video cards now use one of two interfaces: AGP or PCI-E. AGP (Accelerated Graphics Port) has been around for some time and has only recently been surpassed by the newcomer, PCI-E. Cards that utilize the Peripheral Component Interconnect Express interface gain large advantages over AGP, such as higher maximum bandwidth and a bi-directional communication bus. For PCI-E cards, the maximum bandwidth is almost double the fastest AGP (8x) bandwidth, and upstream speeds are determined by the application's requirements rather than the hardware. On an AGP 8x slot, the maximum upstream bandwidth is 266 MB/s (with a downstream of 2.1 GB/s), while a PCI-E x16 slot offers a bi-directional 4.0 GB/s in each direction.
- Core Slowdown Threshold: The Core Slowdown Threshold, or CST, is a predetermined temperature that, when reached, causes the card to automatically reduce its core, shader (if applicable), and memory clock speeds significantly in order to prevent damage. This value varies from card to card, hovering between 120C and 130C, and cannot be changed. The threshold is well below the point where the card will suffer critical damage; however, if a graphics card's junction temperature ever meets or exceeds the CST value, inspection of the card's heatsink and observation of the internal airflow is greatly encouraged, as the problem can result from an offset heatsink, poor airflow, a blocked fan, or any combination of those, among other causes.
- DVI: DVI, or Digital Visual Interface, is a method of transporting image information between the video card and a display. LCD screens use a DVI input to obtain the image to project, while CRTs use what is called an analog input. Though DVI uses a different connector to relay information, a CRT monitor can accept a DVI-supplied image through a small cable adapter.
- DirectX Support: DirectX is Microsoft's standardized programming interface for 3D applications, most notably video games. The interface has been progressively streamlined for better visual, audio, and networking performance and efficiency, with the most recent release being DirectX 10. A given DirectX version is only supported when the hardware, the software, and the operating system are all compatible with it, but there is normally a fallback version in place if compatibility modes need to be used.
- GPU Clock: Like the CPU clock, this is the numerical representation of how fast the graphics card's processor is. As with CPU speed, the higher the value, the better the GPU will perform much of the time. Enthusiasts are able to overclock this speed as well, but graphics cards are much more sensitive to overclocking than CPUs are. Even today's best GPU speeds have yet to exceed one gigahertz, and pushing the processor speed up by 100 megahertz without instabilities is a rarity that will almost always require a third-party cooling solution to combat the increased heat output. In addition to the dangers of overclocking that apply to all PC components, graphics processors can present unique symptoms when clocked too high or run too hot. When this happens, bits of information are lost or corrupted and show up as what are called "artifacts." These are often persistent or flashing discolored pixels that appear on textures, though they may also result from other, non-hardware issues.
- Pixel Pipelines: Despite its name, the term "pipeline" is more of a metaphor than a physical object. What it describes is the series of arithmetic stages a scene has to pass through before it is finally rendered and displayed on screen. The more pixel pipelines there are, the more efficiently the graphics processor is able to handle and compute the data assigned to it. The downside to this method is that when there is an overabundance of a certain type of object in a scene, the sequence has to repeat before the scene can be passed off to be rendered. This has been solved by NVIDIA's stream processors, explained below.
- Stream Processors: NVIDIA came up with a simple solution to a common problem when they introduced the stream processor. Made to replace the pixel pipeline concept, stream processors are flexible, individual processing units capable of handling a multitude of different rendering tasks. For example, when a shadow-heavy hallway had to be rendered by a card using pixel pipelines, the aforementioned sequence needed to be repeated because certain steps are strictly assigned to completing certain functions. With stream processors, nearly any graphical task called to be rendered can be distributed across all of the stream processors and sorted accordingly. Whether a room is filled with shadows or heavy on bump-mapping and vertices, the stream processors can handle all of the operations at once, in tandem with other processors performing similar tasks. The end result is a shorter wait for a scene to be completed, achieved with a simple yet effective system. In a DirectX 10 environment, the stream processors are capable of handling the task of building the geometry in a scene, which until now was left to the CPU. The stream processors are much faster at processing this sort of information and can immediately shift the load to other stream processors for vertex and pixel shading. This greatly reduces the time it takes for the geometry to be built and sent to the graphics cards, and it also lessens the severity of CPU bottlenecks.
- Video Memory: All video cards have a predetermined amount of onboard memory installed, which serves to store the frame buffer. The frame buffer is a collection of all the information in the scene being rendered by the video card, including textures, anti-aliasing data, and more, and the more space that is available for it, the faster the scene will be rendered. When the frame buffer becomes overloaded with data, the video card has to swap out textures, sometimes referred to as "texture thrashing," which always results in decreased performance. On systems using shared memory, when the frame buffer becomes too large it is partially dumped into system memory in an effort to prevent considerable performance loss, though the slowdown will still be rather evident.
- Video Memory Bandwidth: How much information the video memory is able to send or receive is called the memory bandwidth, measured in gigabytes per second (GB/s). This is the maximum theoretical bandwidth of the memory, not its sustained average. A simple equation is all that's needed to calculate it: divide the memory bit-interface by 8 and multiply by the effective memory speed. For example, a video card with a 256-bit memory interface and a memory clock of 1300 MHz would have a maximum bandwidth of 41,600 MB/s, or roughly 40 GB/s.
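That calculation can be written out directly (the function name is my own; the figures are the ones from the example above):

```python
def peak_bandwidth_mb_s(bus_width_bits, effective_clock_mhz):
    """Bytes per transfer (bus width / 8) times millions of transfers per second."""
    return (bus_width_bits / 8) * effective_clock_mhz

mb_s = peak_bandwidth_mb_s(256, 1300)
print(mb_s)         # 41600.0 MB/s
print(mb_s / 1024)  # 40.625 -- roughly 40 GB/s
```

Note that doubling either the bus width or the effective clock doubles the peak figure, which is why high-end cards pursue wide interfaces as aggressively as fast memory chips.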
- Video Memory Interface: Just like the bus size of, say, the PCI-Express slot, this represents how many bits wide the buses to the video memory are. As the bus size increases, the total amount of information that can be read or written at any given moment grows as well, which of course has a large impact on the responsiveness of the memory and, as a result, on performance.
- Video Memory Type: Referred to as GDDR, or Graphics Double Data Rate, this is memory that has been specifically manufactured for use in games and other graphics-related applications. Rather than using standard DDR memory like that found in your system, graphics cards use GDDR due to its graphics-oriented nature. Today the most common forms of GDDR found on video cards range from GDDR2 to GDDR4, with low-range cards utilizing the former and ATI's high-end CrossFire edition cards fitted with the latter. As the suffix number grows, expect higher memory clocks as well as lower power consumption. ATI's GDDR4 is predicted to consume as much, if not less, power than GDDR3 while exceeding effective speeds of 2 GHz.
The Memory 
No matter what size the CPU cache is or how large the hard drive is, there is always going to be a need for dedicated temporary storage: enter the system memory. If you will, think of RAM (Random Access Memory, or system memory) as a small, extremely fast, temporary hard drive of sorts. Inside the RAM are data banks filled with transistors that hold bits of information. As the RAM cycles, or refreshes, the data banks, these transistors are either charged to a value of 1 or discharged to a default value of 0; to stay at 1, a transistor needs to be recharged every cycle or it will fall back to 0. As with all other storage methods, a larger size usually equates to better performance, and RAM is no exception. Programs such as video editing software and video games store resources in the RAM because of its high bandwidth in comparison to hard drives, so larger memory sizes allow a greater number of files to be kept in memory. There are quite a few aspects of RAM that can be confusing if you don't know what they are, so here are some important terms to learn:
- Bit/Byte: These are the basic units of measurement for storage media. Bits are the smallest units, representing a 1 or a 0. A byte is equivalent to 8 bits, and bytes serve as the base unit for larger measurements such as the Megabyte (1,024 Kilobytes) and the Gigabyte (1,024 Megabytes.) When abbreviated, bytes are shown with a capitalized letter whereas bits use a lowercase letter, such as MB (Megabytes) and Mb (Megabits.)
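The conversions are simple enough to sketch in Python (illustrative only; the "54 Mb/s" figure is an arbitrary example, not from the guide):

```python
BITS_PER_BYTE = 8

def megabits_to_megabytes(mb):
    """Mb -> MB: divide by the eight bits in a byte."""
    return mb / BITS_PER_BYTE

def bytes_to_gigabytes(n):
    """Bytes -> KB -> MB -> GB, dividing by 1,024 at each step."""
    return n / 1024 / 1024 / 1024

print(megabits_to_megabytes(54))       # 6.75 -- a "54 Mb/s" link moves 6.75 MB/s
print(bytes_to_gigabytes(4294967296))  # 4.0
```

Keeping the capital-B/lowercase-b distinction straight matters most when comparing network speeds (usually quoted in bits) against storage sizes (usually quoted in bytes).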
- CAS Latency: This is the most important timing of any type of memory, the CAS (Column Access Strobe) latency. This value represents how many cycles pass between a column address being activated and the data being available thereafter. The lower this value, the faster the memory can respond to new data and thus the better the memory will perform at high speeds. For example, memory with a CAS latency of 2 can theoretically handle data twice as fast as memory with a CAS latency of 4, although in practice this never quite happens due to other factors. In fact, increasing or decreasing this timing usually only has a moderate impact unless it is changed by several whole values, by which point system stability may have been compromised. Timings in general are covered later, and though this is technically a timing, it can have a major impact on the performance of a memory module and the system overall.
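One of the "other factors" is that CAS latency is counted in clock cycles, so the absolute delay also depends on the clock speed. A small Python sketch with illustrative numbers (not taken from any particular module):

```python
def cas_delay_ns(cas_cycles, io_clock_mhz):
    """Absolute CAS delay in nanoseconds: cycles divided by the I/O clock rate."""
    return cas_cycles / io_clock_mhz * 1000

# CL2 at 200 MHz and CL4 at 400 MHz take the same wall-clock time:
print(cas_delay_ns(2, 200))  # 10.0 ns
print(cas_delay_ns(4, 400))  # 10.0 ns
```

This is why a higher-clocked module with looser timings is not automatically slower to respond than a lower-clocked module with tight ones.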
- DIMM: DIMM stands for Dual In-Line Memory Module, meaning that the module has independent electrical contacts and memory banks on both sides of the board. This allows signals to be sent to each side of the module separately, effectively doubling the width of the data path compared to a SIMM (Single In-Line Memory Module), whose opposing contacts are redundant. It is currently very difficult to purchase SIMMs from typical retailers, online or otherwise.
- Dual Channel: Dual channel is a motherboard and chipset innovation that allows for increased bandwidth by utilizing separate memory channels. By separating the channels, the memory modules have freer access to the memory controller, increasing memory bandwidth. To use this feature, a dual channel-ready motherboard and chipset are required, and a matched pair of identical modules is essentially required (and certainly ideal) for hassle-free performance.
- EPP: EPP, or Enhanced Performance Profiles, is a feature being implemented in many mainstream DDR2 and DDR3 memory modules that improves stability while making overclocking the memory much simpler. RAM with EPP has an embedded EEPROM chip that holds a reference chart of sorts: predetermined timings (and sometimes voltages) for given frequencies, which are automatically applied when a certain speed is reached. This tiny chip makes it much easier for new overclockers to experiment with different setups without endangering their systems. Those settings, however, are not aimed at producing the fastest results, only the most stable.
- Memory Types: Currently, there are three mainstream desktop variations of system memory: DDR, DDR2, and DDR3. DDR stands for Double Data Rate, meaning that two bits of data can be transferred or received per cycle on each data line. This is essentially twice as fast as SDR, which was only able to send or receive one bit per cycle. DDR is most commonly found as the standard 184-pin memory modules seen in everyday PCs. DDR memory usually does not exceed an effective speed of 400 MHz, though faster (and of course slower) modules are available. The advantages of DDR memory are that it usually has very tight timings relative to its rated speed and that it does not produce a lot of heat. DDR2 memory is the mainstream variant, as the technological improvements it brings over DDR are well respected in gaming and overclocking enthusiast communities everywhere. DDR2 modules have 240 pins and have proven to be leaps and bounds ahead of DDR speed-wise. A practical ceiling for DDR2 speeds has yet to be reached, and companies such as OCZ have made modules exceeding an effective speed of 1.25 GHz. With all that power come drawbacks, however: the heat produced by such high-speed components is enough to require manufacturers to install elaborate and exotic heatspreaders to keep the memory cool. Also, due to the high speeds, precautions have to be taken to keep data from becoming corrupted or lost, in this case by using higher timings. It is not uncommon to see a DDR2 module with a CAS latency of 5 or 6 clocks and other timings well into the teens. DDR3 modules can almost be considered modified DDR2 modules, since both use the 240-pin configuration. DDR3 uses an improved silicon technology and a remapped data structure that allow the memory to be clocked at higher speeds while using less power, much in the same way DDR2 improved upon standard DDR memory. Like all other hardware components, memory of all types can be overclocked.
Motherboards sometimes synchronize the CPU speed to the memory speed, but this isn't true of all of them. Overclocking the RAM has what could be considered the fewest repercussions in comparison to the CPU or graphics processor, but it too has its own pile of potential issues. When overclocking RAM, there are several key values to take into account. As the speed of the RAM increases, the timings will need to be loosened so data doesn't become corrupted. When RAM is clocked too high, it will become unstable even with properly adjusted timings and voltages, at which point the entire system can fail. Seeing the "Blue Screen Of Death," failing to pass POST, freezing, or lockups can indicate that your memory has been overclocked past its limits, but there is something else to consider as well: heat output. Just like essentially every other component in a PC, when the RAM is overclocked its heat output will undoubtedly increase. Improved cooling methods will need to be implemented to prevent a probable meltdown from an overzealous overclock, but heat is not the only factor that contributes to the failure of RAM, so care needs to be exercised when overclocking your memory.
- Registered/Unbuffered: Most memory modules on the market today targeted at desktop use are unbuffered, while those aimed at server use are registered. Unbuffered memory allows the memory controller to address each memory bank on the modules individually, but as memory sizes increase this becomes a hindrance to the system's performance and stability because of the larger time window required to read or write data. For systems such as internet servers that contain a large amount of memory, registered modules are brought into play. This type of RAM uses buffers to hold back data, which sacrifices performance in favor of high-speed stability. While registered memory is a good idea for systems that require zero data loss, it is not necessary until the amount of memory has surpassed what many high-end desktop motherboards can support, meaning its use in a standard system would be neither beneficial nor practical.
- Timings: The timing (or latency) ratings of RAM indicate how quickly, in clocks, the memory is able to read or write data. There are dozens of different timings, but most vendors only supply the four most important ones on specification sheets (other than the CAS latency): the Row-To-Column (tRCD) delay, the Row Precharge (tRP/tRCP) delay, the Row Active (tRA/tRD/tRAS) delay, and finally the Command Rate (CMD.) The Row-To-Column delay covers activating a row address, the Row Precharge delay covers deactivating a row address, the Row Active delay represents how many cycles pass between the activation and deactivation of a row address, and the Command Rate is the delay between chip select and the initial command. Timings are something to look for when purchasing RAM, especially in modules clocked to a higher frequency, as they can make the difference between two otherwise identical modules when one set runs tighter than the other.
Many people consider the CPU to be the most vital component of the computer, and some would say the graphics card, but both are nothing without a circuit board to tell them what to do. Motherboards provide the foundation that defines the performance of such hardware, and without them the PC as we know it would cease to exist. The largest and arguably one of the most complex pieces of hardware in a PC, the motherboard is littered with all sorts of electrical components and designated slots. The socket for the CPU, memory slots for system memory modules, PCI expansion slots for hardware such as sound cards or wireless cards, dedicated graphics ports such as AGP or PCI-E, and much more can all be found on the motherboard. There is more than meets the eye however, so let's get started!
- BIOS And CMOS: The Basic Input/Output System (BIOS) is one of the most basic yet vital aspects of the motherboard. The BIOS is responsible for remembering all of the basic system settings, such as RAID preferences, component voltages and speeds, security features, time and date, and many other seemingly unimportant bits of information. All of this data is stored on the CMOS (Complementary Metal-Oxide Semiconductor) chip, which is accessed when the system starts up. The amount of information stored is negligible, therefore it is not stored on the hard drive (for a multitude of other reasons as well,) and because of how infrequently the data is read, only a replaceable button battery is needed to supply a sufficient current to keep the data intact for years.
- Form Factor: How the key component interfaces are arranged on the motherboard is called its form factor. The two most common form factors are ATX and BTX, each with unique advantages and downsides. ATX (Advanced Technology Extended) has been the motherboard standard for many years now and has become the most widely accepted motherboard layout because of it. For ATX, the expansion slots are located at the bottom left-hand corner of the motherboard, the CPU socket near the center, and the DIMM slots to the right of the processor socket. Not all ATX motherboards have their components arranged in exactly the same layout, but they will abide by that general definition. Sadly, this aging design is proving to be less effective when high-end processors and video cards are installed because of their higher heat output. BTX (Balanced Technology Extended) is a much more recent invention designed to combat the growing amount of heat produced by core components like the CPU and graphics cards through careful component placement. On a BTX motherboard, the DIMM and expansion slots have switched places, situating all the heat-producing components in alignment so they can be cooled with fewer fans. BTX was also built to compact the I/O shield to allow for more possible expansion slots; a standard BTX I/O shield is smaller than a micro ATX's, and there are two still smaller BTX versions. One of the few disadvantages to purchasing a BTX motherboard is that the mounting holes are situated in such a way that an ATX case cannot accommodate it unless otherwise specified by the manufacturer.
- Graphics Interface: The interface upon which the graphics card is installed is not only important for card compatibility, it is also an indication of the performance cap you can expect from that card. If the motherboard does not have an integrated graphics accelerator (normally found on low-end boards,) the two types of interfaces are AGP and PCI-E; you may refer to the definition of Card Interface in Section 2.2: The Graphics Card for more information. Only a video card of the matching type may be installed, and with AGP slots the card may be restricted to a certain speed. An 8x AGP video card cannot run at full speed in a 4x AGP slot, while a 4x AGP card may run in an 8x slot but will see no benefit over a 4x slot. AGP and PCI-E buses can be overclocked, but the hazards are equivalent to those of overclocking the GPU or graphics memory. As of now, there is no retail video card that meets or exceeds the 4.0GB/s bi-directional bandwidth of the PCI-E bus, so though overclocking the bus is an option, it is rather pointless for the time being.
- Memory Standard: What type of memory the motherboard accepts is very important due to the differences between the standards: DDR and DDR2/DDR3. DDR RAM has 184 pins across the bottom while DDR2 and DDR3 have 240, meaning they require appropriate DIMM slots to be installed. Because the pins on DDR2/DDR3 modules are more tightly spaced, a standard 184-pin DDR module is incapable of properly seating in such a slot, and vice versa. Before purchasing any motherboard it is advised that you check to make sure it supports the type of memory you plan on installing; it's also worth noting the motherboard's physical memory cap so you don't try to install more memory than the chipset can allocate.
- Northbridge/Southbridge: The Northbridge and Southbridge are extremely important to the motherboard and all of the system's components. These are essentially the information relay centers of the PC, and both chips make up what is called the chipset. With older chipsets, the Northbridge is the bridge between all of the major components: the CPU, the memory, and the graphics cards. For SLI, specialized Northbridges are required to handle the extra communications and also optimize the connections between the subsystems. The Southbridge handles mostly data storage and transfer, such as SATA hard drives, FireWire (IEEE 1394) and USB. Most data bound for the hard drives, for example, will be relayed through the Southbridge. Concerning newer chipsets, the flow of data has been greatly simplified, and the Northbridge and Southbridge now share many duties to keep the rate of communication more balanced. Additionally, some tasks have been switched, even isolated, between the chips in order to keep bandwidth at its maximum for high data-rate subcomponents, for example the PCI-Express lanes. Depending on the age and technology of the motherboard, these chips can be separate or conjoined. Many new motherboards have a combined chipset, but there is no real performance advantage to a combined chipset over a divided one. Combined chipsets are used more commonly to free up space on the motherboard and also to use a single large heatsink rather than two.
- PCI: PCI (Peripheral Component Interconnect) slots may be rather dated, but their flexible functionality and usefulness are to this day unparalleled. Despite being clocked at a fixed speed of 33 MHz, a wide range of expansion cards can be used freely, such as sound cards, modems, RAID controllers, and much more. Several video cards can be used with the standard PCI interface, but because it is slower than AGP (which was specially designed for such a purpose) they are best suited for individuals who do not require more than minimal graphics processing power. Even with the rather lowly clock, PCI slots are considerably faster than the age-old ISA technology, making PCI ideal for high-end sound cards and modems alike.
- Power Connector: As with all other components, power is required to keep everything running under the hood. Motherboards have by far the largest power connector, typically with either 20 or 24 pins. How many pins the motherboard power connector requires depends on the board's power consumption; motherboards aimed at speed enthusiasts with speedy CPUs and exotic chipsets will require a 24-pin connector and possibly one or two additional 4-pin connectors to supply extra power to the components. Modern power supplies often come equipped with what's called a 20+4 pin connector, meaning that the power connector has the 20 standard pins (so it can be used with older motherboards) and a detachable 4-pin side connector for boards that require all 24 pins.
- Socket Type: Similar to the graphics interface, it is important you keep an eye out for the socket type accepted by the motherboard. Unlike graphics cards however, there is no general interface that every CPU can be installed upon; a CPU will only seat in a socket designed specifically for its format. There are a handful of adapters available for installing a CPU in a socket it doesn't match, but compatibility issues may surface if one is used as a permanent solution. Depending on the vendor and what technology is offered on the motherboard, the CPU socket is likely to change accordingly. Understandably, you're unlikely to find an AM2 (940) socket on a motherboard that only accepts AGP video cards and SDRAM because of how outdated those technologies are. Motherboard vendors typically tailor new releases towards the high-end CPU market at first, and the technology may occasionally trickle down to older sockets over time.
The Power Supply 
No animal can survive without a heart, and no machine can function without a power source. For the PC, that power source is the power supply. In a sense, think of it as an AC-to-DC converter for your system. AC power can be delivered to destinations hundreds of miles away, but it does so by constantly alternating the voltage and current in the line. Fed directly to PC components, that fluctuation would cause component failure and system instability. DC power can only be transmitted a limited distance without the aid of a substation to boost it, but it does not alternate its current to do so. This means a steady and dependable stream of power that is ideal for a computer, hence why it is used in such a scenario. Current power supplies aren't much different in principle from their predecessors, but there are still quite a few things to take into account.
- ATX/ATX12V: Of course, the first step to purchasing the right power supply is knowing what type to search for. ATX power supplies are the successor to AT units in several ways: they have +3.3V rails, a single 20-pin main connector for the motherboard, and a "soft-off" feature that allows the operating system and other software to shut down the system without any physical interaction required on the user's part. ATX has since been dethroned by the now mainstream ATX12V power supplies, which have many different versions that vary accordingly (though their physical size does not deviate from the standard ATX specification.) ATX12V v1.0 was the first release of the design spec; initially the fan on a power supply was set to pull cool air from outside the unit and exhaust it into the PC case. This was quickly seen as a bad move by many power supply manufacturers, was not widely adhered to for much longer, and was eventually removed from the design. ATX12V v1.2 came along soon after, and the major change was the removal of the -5V rail from the power distribution tables. ATX12V v1.3 was the first series to provide a 4-pin +12V connector specifically for delivering extra power to the CPU as well as a 6-pin auxiliary connector carrying +3.3V and +5V. ATX12V v1.3 also introduced the 15-pin SATA power connector, but other than that little changed until ATX12V v2.0 was released, which removed the 6-pin auxiliary connector and changed the main ATX connector from 20 pins to 24.
- Connectors: There's no sense in purchasing a power supply that doesn't have what you need for supplying your system with power. Take note of your motherboard's main power connector requirement, but also consider what the CPU and graphics cards will need as well. Older motherboards (pre-ATX12V v2.0) use the old 20-pin configuration, while some new power supplies only provide a 24-pin connector. Most PSU manufacturers caught on to this and quickly developed the 20+4-pin connector: the standard 20-pin main connector for older boards with a detachable 4-pin connector for the newer boards. In the event that your central processor requires additional power, power supply manufacturers provide specially designed 4-pin connectors which draw power straight off of a +12V rail, which you can read more about later. New graphics cards (typically PCI-E) require more power than can be provided stably from a 4-pin Molex, and thus have their own 6-pin connector made just for that job; these connectors also draw from the +12V rail. Exotic high-wattage power supplies have as many as four of these connectors since they are aimed at quad (7950 GX2) SLI or SLI with two 8800-series cards.
- Efficiency: Purchasing an efficient power supply might seem like something pushed by tree-huggers, but it is in reality a smart decision. The efficiency of the power supply is represented by a percentage: the total DC output relative to the AC input drawn. No power supply will ever achieve a 100% efficiency rating because some of the input is inevitably lost in the conversion as heat. Low-efficiency power supplies require a higher AC input to provide their rated DC output, meaning that more power has to be drawn from the wall. For example, a 600W power supply with an 85% efficiency needs to draw approximately 706W from the AC line, meaning about 106 watts are lost in the conversion from AC to DC. An identical 600W power supply with only a 75% efficiency needs to draw 800 watts, nearly 100 more watts to compensate for the loss. As one could imagine, a higher efficiency translates into lower electrical bills in comparison to a power supply with even a marginally lower rating. It is worth noting that efficiency is rated at load, not when idle, and the efficiency of some power supplies can drop as low as sixty or seventy percent when idling.
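The efficiency arithmetic above can be sketched in a few lines of Python. The 600W figures mirror the worked example in the text; the function name is my own invention.

```python
# Rough AC draw and conversion loss for a PSU at a given DC output
# and efficiency rating: AC input = DC output / efficiency.

def ac_draw(dc_output_w, efficiency):
    """AC input (watts) needed to deliver dc_output_w at the given efficiency."""
    return dc_output_w / efficiency

for eff in (0.85, 0.75):
    draw = ac_draw(600, eff)
    print(f"600W output at {eff:.0%} efficiency: "
          f"draws {draw:.0f}W, loses {draw - 600:.0f}W as heat")
```

Running this reproduces the text's numbers: roughly 706W drawn at 85% efficiency versus 800W at 75%, with the difference ending up as heat inside the unit.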
- Continuous Power Output: What kind of PC user you are determines what strength of power supply to purchase for the most effective experience. The wattage rating a power supply receives is found by multiplying the amperage on each rail by its voltage and totaling the results, and this is indicative of how many devices or components can be concurrently powered. Careful inspection of the +12V rails should be considered, as they are vital to supplying the most power-hungry components (CPU and graphics cards) with sufficient power. When running an SLI or CrossFire setup of graphics cards, it is important to find a power supply with no less than 34 amps (total) running across the +12V rails to ensure stable and reliable performance. In systems running only a single high-end graphics card, such as a 7900 GTX, only 24A will be needed on the +12V rails, and a meager 18A for low- and mid-range cards. While it is important not to overshoot your peak power consumption, it is just as crucial that you do not fall below it. Power supplies rated at, say, 650 watts can provide a continuous, stable stream of energy at or near that rating; if your system's power requirements are above it, the power supply will inevitably fail. No power supply can produce above its maximum rating for more than half a minute (at best,) and it is quite dangerous to do so. In the interest of your system's health, and for the sake of future proofing, it's generally a good idea to go about 50W beyond your PC's peak power consumption, but if budget permits, it's always best to go with the most powerful unit available.
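Since power on the +12V rails is simply amperage times voltage, the amperage guidelines above translate directly into watt figures. The short Python sketch below uses the guide's own rules of thumb; the thresholds are not a formal specification.

```python
# Translate the guide's +12V amperage rules of thumb into watts.
# W = A * V, so e.g. 34A across the +12V rails is 408W of capacity.

def rail_watts(amps, volts=12.0):
    """Power available on the +12V rail(s) for a given total amperage."""
    return amps * volts

guidelines = {
    "SLI/CrossFire":         34,  # guide's minimum for dual-card setups
    "single high-end card":  24,  # e.g. a 7900 GTX
    "low/mid-range card":    18,
}

for setup, amps in guidelines.items():
    print(f"{setup}: {amps}A on +12V = {rail_watts(amps):.0f}W")
```

Comparing those watt figures against your components' combined +12V draw is a quick way to judge whether a given unit leaves you the roughly 50W of headroom recommended above.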
- Modular Cabling: Modular cabling is an enthusiast-targeted feature for anyone with a case where space is limited and/or a side viewing window. It allows the user to connect only the cables necessary for their uses into the power supply through a set of plugs, effectively reducing cable clutter and promoting improved airflow as well as case aesthetics. Semi-modular power supplies come with the main connector and usually one PCI-E 6-pin connector permanently installed, since most systems require these cables at the most basic level. Unfortunately, there is the possibility that the added electrical resistance at the plug interfaces can cause voltage drops, translating into lost power and eventually leading to system instability and component failure. Additionally, the pins on modular cables are susceptible to being bent, breaking, burning and corroding; a possibility some users prefer not to risk.
- PFC: Power Factor Correction, or PFC, is a technique used to counteract a Power Factor that drops below a value of 1. In essence, the Power Factor is how the power grid sees the attached device; a PF of 1 is seen as a purely resistive load and is less of a problem for the grid. Numbers lower than one indicate reactive loads, meaning that their impedance (reactivity to AC current) is dynamic and the load can be seen as capacitive, inductive or a mixture of both. These reactive loads can cause upstream difficulties, which is why Great Britain and Europe require PFC on all power supplies sold there. The Power Factor is the ratio between the power put to use and the power available from the source, or in this case the output capacity of the power supply and the current supplied to it from the outlet it's plugged into. As previously covered, when the PF value is one, the device is putting all of the power available to use, which is ideal. Unfortunately, common devices with a PF of 1 are items like the heating element found in an oven or toaster, and such a Power Factor cannot be achieved by a device with a fluctuating power consumption. For a more subject-related example, by this reasoning a 500W power supply with a Power Factor of 0.85 (85%) loses about 75W of usable capacity, meaning its total continuous output can only be something to the effect of 425W. To put it all into perspective, let's assume that you have a 1000W power supply with a power factor of 0.95 (95%) and an 85% efficiency rating (at all times, just to make things easy.) On top of the 1000 watts the power supply must draw (at full load,) there will be an additional 176 watts drawn to compensate for the energy lost in the AC-DC conversion (efficiency,) equating to approximately 1176W peak draw. The power supply itself will consume approximately 50 watts (Power Factor,) meaning that it will only be able to dispense a maximum of 950W continuously instead of its rated continuous power of 1000W. 
Modern power supplies usually come with one type of PFC unit or another: passive or active. Passive PFC units require no power to function, and usually have a power factor between 0.7 and 0.8 (70% and 80% respectively.) Active PFC units do require power (however very little) to handle the power load and produce nearly ideal Power Factors anywhere in the neighborhood of 0.9 (90%) to 0.99 (99%,) but these units in turn cost more because of their complexity. Power supplies with a lower Power Factor won't necessarily cause your power bill to rise considerably, but it means that your power supply won't be putting all the energy you're pumping into it to good use.
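For those who want the relationships in one place, here is a small Python sketch using the conventional definition of Power Factor (real power divided by apparent power). The 1000W / 85% / 95% numbers reuse the example above; the function names are illustrative, and note that framing PF this way describes the extra volt-amps the grid must supply rather than watts subtracted from the unit's output.

```python
# Relationship between DC output, efficiency, and power factor,
# using the conventional definition PF = real power / apparent power.
# A sketch of the arithmetic, not a model of any specific unit.

def input_real_watts(dc_output_w, efficiency):
    """Real (billable) AC power drawn to deliver the DC output."""
    return dc_output_w / efficiency

def apparent_va(real_w, power_factor):
    """Volt-amps the grid must actually supply for that real power."""
    return real_w / power_factor

real_w = input_real_watts(1000, 0.85)   # ~1176 W real draw at full load
va = apparent_va(real_w, 0.95)          # ~1238 VA apparent power
print(f"Real draw: {real_w:.0f} W, apparent power: {va:.0f} VA")
```

The gap between the two figures is what active PFC shrinks: pushing PF from ~0.7 toward 0.99 brings the volt-amps the grid must deliver down close to the real wattage consumed.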
- Protective Measures: Because the power supply is connected to every other component in the system, a failure there poses the largest risk of damaging all of the devices attached to it. Power supply manufacturers have devised numerous countermeasures to protect users and their machines in several worst-case scenarios, all of which are now standard; you will be unable to find a modern power supply without them. Several such protection features are overvoltage, overload/overcurrent, and short-circuit protection. Overvoltage protection detects when output voltages exceed the specified limits of the power supply and will automatically shut down the unit if an elevated voltage is detected. Overload/overcurrent protections are fairly similar to overvoltage protection methods; they detect excessive input currents or power loads, which also cause an immediate shutdown. Last is short-circuit protection, which detects short-circuits within the power supply and again triggers an emergency shutdown. As stated earlier, these safety precautions are present on every modern power supply, and they are for the better of your system's health.
- Single/Multiple +12V Rails: All power supply units have different types of power distribution lines known as "rails" which vary in both voltage and amperage. The reason these rails are varied is to provide sufficient power to a component without wasting or isolating current, and the three main types of rails (not including the grounds) are the +3.3V, +5V and the +12V rail(s.) The +3.3V rails are used for components with a very small power draw such as PCI cards, LED nodes and cathode lamps. +5V rails can power the DIMMs, low-speed fans, submersible/inline pumps, and optical and hard drives (though the drives also utilize the +12V rails to power the spindle motors.) The +12V rails are reserved for the components with the highest draw, namely the CPU and video cards. When a power supply conforms to ATX12V regulations, a limit of 20A is placed on each +12V rail, meaning a maximum of 240VA (Volt-Amps, or Watts) can be shipped across a single rail. Considering that modern computers can draw more than 240W from one +12V rail with little effort, companies are designing PSUs with multiple +12V rails to handle the extra draw. The CPU can have its own dedicated rail while the graphics cards share another (and in some cases have independent rails) to allow for stability at high loads. While this idea seems logical, there are hidden dangers in this design. First, when a power supply abides by the ATX12V guidelines and a power draw exceeds the 240VA limit on a rail, the unit will fail and shut down immediately. Second, when anything less than 240VA is drawn off of a rail, the remaining power becomes isolated and useless unless current can be shared between the rails; not all units allow for this, however. 
Power supplies with multiple rails use what's called a virtual rail which is a technology that allows for multiple rails to draw off of one large, unrestricted rail thereby eliminating the isolated power issue, although it does not remove the possibility of an instantaneous shutdown from a draw exceeding 240VA. High-quality power supply manufacturers rely on single +12V rails to distribute power to the system, and for good reason: using a single, massive rail allows for an incredible amount of current to be shipped anywhere in the system without limitation. Processors and graphics engines with a high power draw do not pose the difficulty to a single rail unit that they do to a multi-rail system for that reason, adding a great deal of appeal to performance enthusiasts. No power supply is "truly" multi-railed since the current is fed to the power leads via one transformer; very few units exist that contain more than one transformer.
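To make the 240VA ceiling concrete, the sketch below checks a hypothetical load plan against the per-rail cap discussed above. The rail assignments and component wattages are made-up illustrative numbers, not measurements from any real system.

```python
# Check a hypothetical per-rail load plan against the ATX12V cap:
# 20A * 12V = 240VA maximum per +12V rail on a compliant unit.

RAIL_LIMIT_VA = 20 * 12   # 240 VA per +12V rail under ATX12V

loads = {                  # illustrative draws, in watts
    "rail 1 (CPU)":      130,
    "rail 2 (graphics)": 255,
}

for rail, watts in loads.items():
    status = "OK" if watts <= RAIL_LIMIT_VA else "EXCEEDS 240VA cap"
    print(f"{rail}: {watts}W -> {status}")
```

In this example the graphics rail would trip the 240VA limit and shut the unit down, while the 110W left unused on the CPU rail sits isolated; this is exactly the pair of problems the single large +12V rail design avoids.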
Section 3: Getting Started 
In this section, we'll deal with getting your system to start screaming: first with airflow planning, cable management, and lighting suggestions, then onto the fun stuff: hardware and cooling installation! I'll do my best to provide accurate and easy-to-follow steps for the installation guide, but please keep in mind that this section is not meant to substitute for your component's manual or your common sense, meaning that neither I nor anyone else can accept responsibility for your actions. The items in all sections have been listed in alphabetical order and it is advised that you consult with your manual or a professional if you are unsure about the order of hardware installation for your system.
Case Management 
While it is important to know what parts to purchase to suit your computing needs, cabling arrangement and airflow management are just as important as installing the components themselves. By reducing cable clutter and strategically installing fans, you can reduce your system's ambient temperature while also improving its appearance. Here we will cover airflow management, cabling management, and lighting and other visual installation procedures.
Airflow Management [031A]
In order to keep everything working cool and smoothly, intelligent airflow management is the key. While the construction of your case, motherboard form factor, and preferred noise levels will call for different styles of fans, the principles remain unchanged: better airflow over critical heat-producing components will translate into cooler system temperatures. There are all sorts of fans available on the market tailored to specific needs; high-speed fans aimed at top performance and slower fans for quieter solutions are common examples.
Before purchasing any fans, it is a good idea to draw out and plan just how they will be situated in your case. Take into consideration which components you wish to cool down and how that air will be removed afterwards; for example, placing a fan behind your CPU heatsink to exhaust the warm air and a fan on the front of the case to draw in cool air. Users with an ATX-styled motherboard will normally be required to incorporate side fans and/or fans situated on or near the bottom of the case to cool the video card(s) because of the layout of the motherboard. BTX users are lucky in the sense that all of the heat-producing components are aligned in a linear pattern, eliminating the need for a myriad of side fans and reducing the number of fans in general required to cool the system's hardware. Some users like to install top-mounted fans to exhaust any additional warm air that may escape the other fans, but under most circumstances this will not cause a dramatic decrease in temperatures as long as there is an appropriate balance of intake and exhaust airflow. Balancing the amount of air entering and exiting the case can be rather important for your system's health, as several scenarios can come into play if there is an under or overabundance of either. When more air is being pushed into the system than can be exhausted, warm air stalls as it waits to be pushed out the rear of the case. On the other hand, when more air leaves the case than enters it, warm air is quickly removed, but more dust is likely to be drawn into the case. When the airflow is balanced, exactly what one would expect to happen happens: warm air is replaced at an equal rate by cool air with an average dust intake.
Fans that move massive amounts of air can be quite noisy, which may deter users who prefer a calm computing environment, but they will obviously produce better temperature reductions and improved airflow. Those who yearn for a quiet computer should look into low-speed fans, which sacrifice airflow for silence. The size of the fan can affect how noisy it is relative to its CFM rating, because a smaller fan needs to turn much faster to move the same amount of air as a larger fan. The fan's RPM rating, blade angle, and blade count can also have an effect on its noise level: a high-RPM fan with 9 acutely angled blades will be a real screamer, while an equally fast fan with 5 aggressively angled blades will most likely produce less noise (in terms of frequency.) Those factors alongside the fan's size and CFM rating should all be meditated upon before buying a fan. Most companies provide decibel ratings to give the customer a better picture of the fan's performance; you should find the quieter (albeit slower) fans to be within or below the 35dB range, with high-performance fans going into the 60dB range (as loud as a window air conditioning unit.) Some fans come bundled with or are compatible with fan controllers, which feature rheostats and voltage regulators to control the speed of the fan either by an automatic temperature-controlled setting or manual control. Controllers can be useful in systems that are only infrequently under heavy load, since the fans can spin in relation to a temperature probe's readout rather than running at full speed while accomplishing very little.
If you're concerned about the amount of dust being sucked into your system, several companies make fan filters normally composed of plastic foam, metal mesh, or a combination of both. These will help to reduce how much dust the fans draw in, but they will not totally eliminate it, and filters will hinder the fan's CFM (cubic feet/minute) rating. To make things simpler these filters can usually be easily removed and cleaned, and foam filters can even be washed in a dishwasher or under water. Ultimately, the kind of fans you choose again comes down to personal preference.
Cable Management [031B]
No matter if you've got the best components money can buy and a killer cooling solution to push it along, a messy cabling job can destroy the internal aesthetic value of your case. While it is more difficult to plan than airflow management, there are a few things you can do to help ease the pain. As a general rule, always keep cables away from heat-producing sources: heatsinks are the main obstacle to keep an eye out for. Also be sure that when hiding or binding your cables you do not do so in such a way that there is a constant tension on a cable, especially one that is in use, since it can come unplugged and have rather devastating effects. So you don't mess up your intelligent airflow blueprints, keep cables and wires neatly secured to the walls, floors, and any supports that are inside your case while keeping all the aforementioned cautions in the back of your mind. This is a precaution to pay extra attention to if you are still using ribbon-style IDE cables, which are infamous for killing airflow in one fell swoop. Keeping cables together and branching off only when required can take a bit of effort, but when placed out of airflow and sight, it will pay off tremendously. If at all possible, hide cables that are too long and need to be looped, or cables that aren't in use at all, behind the motherboard tray or in the 5.25" bays. Doing so will make cable switching or rerouting much easier. Unfortunately, since power supplies don't come with interchangeable short and long cables, it's helpful to first find a spot (post-installation) to stow your unused cables. On that note, it would be in your best interest to purchase a modular power supply, whose definition you may refer to in Section 2.5: The Power Supply. Even with a modular power supply there will still be a need for advanced cable management. 
Some cases have specialized channels to make this easy, but you can sometimes get lucky by finding a space behind the motherboard tray or folding the cables up in the front bays if you've got any.
Hiding cables isn't the only aspect of cable management however: binding a mass of cables and wires together or directing them through clever use of zip-ties and other such products also plays an important role. By using zip-ties, Velcro, purse locks, and the like you will be able to tidy up your cable clutter effectively and easily. Zip-ties are long, plastic strips with a square hole at the top and a slightly pointed bottom with grooves all the way up the body. They vary in length and width but will undoubtedly provide the best grip on any thickness of cable they can be secured around. Because zip-ties are built to not lose their hold once secured, they are the most difficult to remove when necessary. Using wire cutters or scissors is the only way to free a cable from a zip-tie, which presents its major flaw: if your hand were to slip even slightly, it could mean an exposed or severed cable. Velcro is the little brother to the zip-tie, but it has its own strengths that its big brother can't match. Velcro uses thousands of extraordinarily tiny loops and hooks (each on a different side of the strip) that when pressed together produce a respectable grip; Velcro straps are also reusable. However, because Velcro is two-sided and somewhat thick, it will have some degree of difficulty maintaining a grip on smaller-gauge wire such as fan connectors or USB cables, so it is more ideal for large cables. Purse locks are U-shaped bits of plastic with ball-tipped ends, and can be considered the baby brother of the lot. Purse locks are not constructed to hold a large amount of wiring together; they cannot wrap around smaller cables, and have no real locking mechanism. Their use is normally reserved for water cooling, where they can keep tubes together without obstructing the view of the fluid within as strongly as zip-ties or Velcro strips would.
Lighting And Other Visuals [031C]
For some people, only their case can provide them with a sense of pride. Some would agree that performance means nothing if it doesn't look good; after all, you wouldn't buy an Enzo Ferrari if it looked like a Ford Focus, would you? Of course not, and this is exactly why lighting and other visual improvements can mean so much to gamers and enthusiasts. First, we'll cover the most prominent type of case modification: lighting. Light kits come in three forms: Cold Cathode tubes (CCFL,) Neon tubes, and LEDs.
Cold Cathode lamps are filled with a mercury vapor and coated with a special phosphor that converts the ultraviolet light emitted by the ionizing mercury vapor into a predetermined color. Among the most common colors are the three primaries, UV, and white, along with a few others. Not to worry, though: UV lamps -- in reality specialized black lights -- are coated to block the harmful UV-B and UV-C rays known to damage the skin and cause cancer. UV CCFLs are most popular in cases with liquid cooling systems that use UV-reactive fluids or components, because the purplish glow they emit is not nearly as overpowering as the other colors listed above. Cold Cathode lamps are very useful for drawing attention to a case with brilliant light without using much space. Lamps range from 4" to 15" in length and thus have a multitude of potential applications in any size of case. All CCFL kits require a small inverter (DC-AC) which must be kept cool at all times, else it can burn out and even catch fire. Keeping the inverter near a fan or in a direct air pathway is a good idea unless you have a very chilly case (below 40C ambient.) Cold Cathode lamps do produce a certain amount of heat themselves, but in most cases it is negligible.
Neon tubes are very similar to Cold Cathode tubes except that they are filled with low-pressure Neon gas, which produces a brilliant thunderbolt-like pattern along the rim of the tube when an electrical current is applied. Neon lights are available in just about all the flavors that CCFL kits are, and they also use an inverter with identical cooling requirements. Because the thunderbolt pattern undulates, the amount of light given off fluctuates, creating a subtle pulsating effect.
On the other side of the lighting scene are LEDs, which are still relatively uncommon though their popularity is rising because of their flexibility. LEDs require no inverter to convert their power, consume very little energy, come in many different colors, rarely burn out, produce a great amount of light for their size, take up little space, produce very little heat, and cost next to nothing to make. Obviously, LEDs open the doors to different and unique customization possibilities, but there are a few drawbacks. With the bulb being so small, creating ambient light is difficult unless dozens of LEDs are strategically and painstakingly situated throughout the case or the light is passed through some kind of diffuser. Understandably, this makes LEDs better suited to spotlight duty, which they naturally do well.
While lighting is all good and well for the inside of the case, there is still the matter of the outside. When it comes to making the outside of your case look as good as the inside, vinyls, stickers, windows, fan grates, and laser-cut side panels are what to look for. Vinyls and stickers are fairly self-explanatory: graphics and/or lettering with an adhesive backing, applied directly to a flat surface to enhance the case's appearance or display the owner's support for a company or product. Windows, normally made from acrylic plastic or Plexiglas, are becoming more common on cases geared towards gamers and enthusiasts. They allow other people to peer inside the belly of the beast while keeping the components safe (as opposed to having no side or an open side) and can make a case look much less plain and boring. Window shapes can range from a simple box or rectangle to intricate designs such as a game logo (think Unreal Tournament or Half-Life) cut by a precision laser. Fan grates are decorative Aluminum pieces that sit on the outside of the case on the fan's mounting screws, placed only to draw attention; like windows, they can take the shape of symbols, logos, or other designs, of course on a smaller scale. Purely aesthetic, fan grates have yet to find a following among everyday users.
Hardware Installation 
Now that the functions and purposes of your system's core hardware have been reviewed, we've discussed how to maintain balanced and beneficial airflow, pondered where and how you should manage your cables, and revealed what can make your case stand out, it's time to put it all together! Please keep in mind that the information below is sorted in alphabetical order and is not meant in any way to substitute for your product's manual or professional knowledge (like everything else in this guide.) The descriptions here will be relatively short, since the processes are generally simple and self-explanatory, but elaboration will be provided where necessary to avoid confusion. All descriptions assume that you have compatible hardware and know how to handle it properly!
Note: This section will not cover the installation process of non-stock cooling solutions (i.e. heatsinks and waterblocks) due to vast differences between them even in their own respects. The physics behind them are discussed later on as well as what to look for, similar to what you saw in Section 2: General Hardware Knowledge, but you should always consult an experienced person if you have any questions about installation or what would be best for your system.
The Central Processing Unit [032A]
As with every new hardware installation, make sure that your system is fully powered down and the power cable is removed from the power supply. ATX power supplies never fully shut down while plugged in, since they must be able to turn on the computer at any given time, so this precaution must be taken. After discharging any electrostatic buildup that may have accumulated, remove the CPU from its packaging and, if provided, leave the heatsink within arm's reach for post-installation attachment. If the heatsink has a pre-applied thermal pad, leave its protective cover in place; prematurely removing it may expose the thermal material to dirt and dust, which can lead to inefficient thermal transfer. In the bottom-right corner of the processor's silicon base there may be an arrow or triangle which indicates the proper way to seat it in the socket. When you're ready to install the CPU, raise the small lever beside the socket (this opens the socket) and gently set the CPU atop the plastic block. Lightly push the processor into place, then lock the socket by returning the lever to its initial position. Attach the heatsink as directed by its manual, then plug its power connector into the dedicated motherboard header unless specified otherwise. Double-check that the heatsink is seated properly and that everything is in order, and you should be good to go!
The Graphics Cards And Expansion Cards [032B]
Before removing the expansion or graphics card from its packaging, check the area where the card will sit for stray cables or any other obstruction that could make installation difficult or impossible. If a permanent object, such as a tall Northbridge/Southbridge cooler, seems to prevent the card from being installed, do not continue with the process; remedy the situation however necessary, if at all possible. If all that blocks the area is a cable or two, adjust them as necessary and, per your case's manual, remove as many PCI brackets as needed. PCI brackets are the strips of metal on the back of your case which protect the PCI slots from dust and the like, and come in either the old-school metal-with-mounting-screw style or a "tool-less" styling. The old-fashioned PCI brackets need to be bent or otherwise physically manipulated to be removed from the expansion area and use screws to hold expansion cards in place. This has the advantage of being sturdy and straightforward, though it can be slightly agitating for someone who constantly swaps expansion cards. Tool-less PCI brackets use some form of plastic retention to hold cards in place; the design varies from vendor to vendor, but the principles remain generally the same. However, some users dislike this style because the plastic construction gives the impression of being flimsy and weak, though its ease of use may redeem its shortcomings for a few. Whatever the situation, remove your video card from its anti-static packaging, handling it only by its outer edges (and never by its connector pins, as that could spell trouble.) Line up the card's I/O plate with its respective bracket opening and slowly move the card towards the motherboard, keeping an eye on its pitch and angle as well as any unnoticed objects that might obstruct or touch it. If all looks well, go ahead and begin pushing the card into the PCI slot. 
Some effort may need to be exerted in order for the card to seat, but don't push too much; there may be another reason why it isn't installing so easily. Double-, even triple-check if anything is in the card's way, and if it still looks alright then continue applying pressure to the card. Be careful not to bend the card while installing, but once it does eventually fall into place, secure it however your case is constructed to do so. At this point, all that's left to do is connect any required cables: power, communication, and so on. Install any drivers that may be needed, and you're set.
The Hard Drives And Bay Drives [032C]
To avoid any problems that may arise down the road, always install any necessary drivers for your motherboard (i.e. RAID) before installing a new hard drive or bay drive. Once the driver installations are complete, power down the system and remove the power cord as usual. Set aside the necessary cables for your drive before fitting the unit into place, including power, audio, communications, and any other required connectors. Once that is done, ready the drive by removing it from its packaging and gathering any required tools for the installation, such as screws and a screwdriver if your case uses old-fashioned screw mounting. Before sliding the drive into place, if you're installing a hard drive, check that its jumpers are set for the drive's position on the channel (i.e. primary master, secondary master, etc.) Most new cases use a tool-less retention system much like the PCI brackets', in which case you will most likely not need to resort to screws at all. Check whether your drive bays require screws or brackets on both sides, though this is uncommon with newer cases. Either way, slide the drive into its bay (hard drives in 3.5", bay drives in 5.25" unless otherwise specified) and align the holes on the side of the drive with those of the cage. When aligned, secure the drive evenly on both sides if necessary (most new cases do not require this, but it never hurts to check.) Once you're happy with how the drive sits in the bay, connect the cables you set aside earlier, then close up the case -- you're finished!
The Memory [032D]
System RAM is probably one of the easiest components to install, yet ironically it is one that demands a good amount of attention. Don't let the simple installation tempt you into skipping standard procedure, though: power down the PC, unplug the power cord, and rid yourself of any static. Of course, you should have already double-checked your motherboard's specifications for the kind of RAM it accepts (184-pin DDR, or 240-pin DDR2 or DDR3) since the types are not interchangeable between slot types. Additionally, you should have purchased only as much as your CPU, motherboard, and operating system will allow; you can learn about that by referring to the definition of Bus Sizes in Section 2.1: The Central Processing Unit. The DIMM module has a notch in its connector pin strip which indicates the direction to install it into the motherboard and will only allow the RAM to seat one way. If your memory does not fit into the DIMM slot in either orientation, you may have purchased the wrong type of RAM. There are plastic latches at the ends of the RAM slots which need to be moved out of the way before pushing the memory into place and returned afterwards. Those latches help hold the modules steady; the slot itself keeps the RAM in place, but closing them is still recommended. If you have two modules that will run in Dual-Channel, do not install them side-by-side; rather, leave one slot between them. The DIMM slots are normally numbered on the motherboard, from left to right: A1, A2, B1, and B2. In this example you would install the modules in slots A1 and B1; if you only have one (or three) module(s), install the odd one in either slot A2 or B2 -- but just to be safe, consult your motherboard's manual. It should be pointed out that mixing different RAM modules may lead to system instability and should be avoided at all costs. 
When expanding an existing memory subsystem, it would be in your best interest to purchase modules from the same manufacturer with the same speeds, timings, and voltages. Module density can vary in pairs (such as two 1 GB modules and two 512 MB modules) but other than size, mixing RAM is something to be cautious about. Doing so will not necessarily produce any immediate or delayed problems, but it can be dangerous and is generally not accepted as good practice.
The Motherboard [032E]
Assuming your case and power supply are compatible with your motherboard, ready your screwdriver and make as much room as possible within the case, then unplug the power supply and so on. The layout of your motherboard dictates how and where it will be installed in the case, so check your case's manual if you are unsure. Once you have identified the proper locations for the brass standoffs, screw them in by hand (using a screwdriver only if absolutely necessary) and lay the motherboard (still in its anti-static wrapping) atop them to ensure they are in the correct positions. If the holes match up, you may now remove the mainboard from its packaging and place it on top of the mounts. Using screws provided with either the case or your motherboard, secure the motherboard to the case or removable tray (for those fortunate enough to have such a convenience) and connect the proper power cables once it has been fully screwed down. Refer to your motherboard's manual for any power cables that may be required beyond the main connector, such as a 4-pin Molex for SLI's EZ-Plug. For ease of future component installation, refrain from attaching any additional cables, such as IDE or SATA cables for hard drives or CD/DVD drives, until the component in question has been installed in the system.
The Power Supply [032F]
Since you will be handling the supplier of all the electrical power for your system here, it would be a smart move not to plug in the power cord until your system is actually ready for use; otherwise exposed power prongs (such as 3-pin fan connectors) could become live and, if they happen to touch any metal, create a spark that could damage the unit or any system components inside or touching the case. That aside, where your power supply will sit depends on what type of case you own, with ATX cases typically locating the PSU at the top and BTX cases at the bottom. This is not universal, but the cut-out in the case for the unit is difficult to miss. The mounting screw holes on the back of the power supply only allow it to be installed one way, so incorrect placement shouldn't be a concern here. Lead the power connectors into the case and slide the unit into the cut-out. All that's left is to align the holes with those on the case, secure the unit with its mounting screws, route the power leads as necessary, and move on to the next component!
External Hardware - Input Devices [032G]
Input devices include anything that sends real-time information to the computer, whether it be a mouse, keyboard, microphone or web cam. Most if not all of these devices use USB (and sometimes FireWire/IEEE 1394) ports to communicate with the computer because of the interfaces' high bandwidth and versatility across many different gadgets. These ports are unmistakable thanks to their small rectangular shape and the plastic tongue-like key which prevents devices from being plugged in upside-down and also guides them into the port. Unless the device needs an external power source, which for these devices is highly unlikely, all of the power the gadget consumes is supplied by the USB/FireWire port, which leaves only driver installation (if needed) before you can begin using your device.
External Hardware - Monitors [032H]
Since analog and DVI monitor output ports do not supply power to the monitor, you will need a free wall socket to power your monitor, no matter its size. Once you can sit the monitor where you'll be using it, or close enough that you can still access the back panel, plug the power cord into the monitor (whether or not the cord is live does not matter.) Depending on your monitor, you will be using either a DVI cable or an analog (VGA) cable, usually associated with LCD screens and CRT screens respectively. LCD displays can sometimes accept either signal; DVI keeps the signal digital end-to-end, which on an LCD generally yields a slightly cleaner image. DVI ports are larger and feature a distinct cross-shaped key on one side of the connector, while analog ports are smaller D-Sub connectors with a dense collection of pins aligned so that the plug will only install one way. Both require you to screw the plugs in snugly (so the connection is never lost under normal circumstances,) and adapters, normally supplied with the video card, are available if your cable does not match. Most of the time drivers are unnecessary since monitors are plug-and-play, but some LCDs and CRTs come with software that lets users adjust color and display options through a program rather than the monitor's menu buttons. From there, all that remains is to adjust the monitor's display options to your liking and you're set.
External Hardware - Speakers And Headphones [032I]
As with monitors, audio jacks do not provide enough current to power your speakers on their own, especially if your speakers include a subwoofer. From here on out, we'll assume your speakers are hooked to a subwoofer which also provides them with power. Your subwoofer should have clearly marked ports for certain cables on the back of the unit, as well as cables that lead to your PC. The color coding of PC audio ports differs and can sometimes be absent entirely, so refer to your motherboard's manual (for onboard sound) or your sound card manufacturer's manual for clues as to what plugs in where. Generally, lime green indicates the line out, red or pink the microphone input, and blue the line in for outside audio devices (CD players, stereos, etc.) Sound devices that support more than dual-channel audio will have additional ports for the extra speakers, and as stated before the color scheme differs between makes and models, so be sure to check that your cables are properly connected.
Component Cooling 
When purchasing a replacement cooler for your system, it's vital that you know exactly what you're looking for and what suits your needs. Cooling solutions nowadays range from simple arrangements of aluminum fins with a fan to specialized refrigeration units with all sorts of other concepts in between. This section will cover all of the main types of coolers: convection, liquid, Peltier/TEC, phase change and other exotic concepts. We'll cover the physics of each type of cooler, their unique advantages and disadvantages, and recommended applications for them. After all, it's no use buying some outrageously expensive and high-performance cooler if you won't need it, right? Right -- so let's get to it!
Convection Cooling [033A]
- Thermodynamics: Convection cooling uses the simplest method of removing heat: absorbing it through thermally conductive metal. In a nutshell, the heatsink sits atop the component and the heat naturally flows into the metal of the heatsink, where it is radiated into the case and eventually exhausted by case fans. Modern heatsinks are fitted with dozens or even hundreds of Aluminum or Copper fins that may be serrated, curved or otherwise shaped to maximize the heatsink's surface area; a larger surface area allows more heat to be carried away from the base of the heatsink, which results in lower component temperatures. Passive heatsinks are nothing but a collection of metal fins (sometimes utilizing heat pipe technology, covered in a moment) that don't always provide the best reduction in temperature but are desired by enthusiasts looking for a noiseless cooling solution. Most standard convection heatsinks are equipped with a fan that blows air directly across the fins to aid in the removal of heat rather than letting it dissipate passively. High-performance convection heatsinks feature heat pipes: hollow tubes filled with a pressurized liquid (generally water or Acetone) and lined with a spongy material known as the "wick." When the heat pipe is exposed to heat the liquid boils and vaporizes into a gas, which naturally flows to a cooler place (away from the heat source) where it condenses back into a liquid, releasing its heat at the cool end of the pipe. This efficient cycle is what makes heat pipes so desirable; to keep the liquid moving at a constant rate, the wick draws the condensed liquid back to its starting position to begin the process anew.
- Advantages And Disadvantages: Convection cooling is without a doubt the most common type of cooling solution used in computer systems to date because it is cost-effective and provides satisfactory results. Convection heatsinks are easy to come across, simple to install, come in all sorts of shapes and sizes, are fairly inexpensive, can be fanless, and generally produce little noise. The only downsides to convection cooling are that the heatsinks can be rather large when intended for high-end processors and cannot achieve temperatures below the ambient temperature. This means that if the temperature of the room (not the case) is 23C/73F, the component cannot be cooled below that.
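The ambient-temperature floor follows directly from the simple thermal-resistance model used to rate heatsinks. Below is a minimal sketch of that arithmetic; the wattage and resistance figures are illustrative assumptions, not measurements from any particular cooler:

```python
def component_temp(ambient_c, power_w, resistance_c_per_w):
    """Steady-state temperature of a part under a convection heatsink.

    A heatsink is commonly rated by its thermal resistance in degrees
    Celsius per Watt: every Watt the component dissipates raises it
    that many degrees above the surrounding air. Because the power
    term can never be negative, the result can never drop below
    ambient_c -- which is why convection cooling cannot reach
    sub-ambient temperatures.
    """
    return ambient_c + power_w * resistance_c_per_w

# Illustrative numbers: a 65 W CPU under a 0.5 C/W tower heatsink
# in a 23 C room settles at roughly 23 + 65 * 0.5 degrees.
print(component_temp(23.0, 65.0, 0.5))   # 55.5
```

Note how a lower C/W rating (bigger fins, more airflow) shrinks only the rise above ambient; the room temperature itself is the hard lower bound.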
- Recommended Use: Convection heatsinks can be used on just about every component with a significant heat signature, such as the CPU, GPUs, the chipset, system RAM, and even the hard drives on occasion. While they cannot cool components to sub-ambient levels, these heatsinks do a sufficient job of removing heat. Convection heatsinks are best used in systems with only a moderate (if any) overclock and a good amount of internal airflow, and should be avoided when going for a significant overclock, when airflow is poor, or when the ambient temperature is high.
Liquid Cooling [033B]
- Thermodynamics: Liquid cooling is a relatively new member of the PC cooling scene that takes a slightly ironic approach to ridding PC components of heat: using liquid. Liquid cooling is more commonly known as water cooling, yet plain water is rarely used in a liquid cooling loop because it promotes algae growth. On occasion PC enthusiasts will use distilled water because it's readily available, relatively inexpensive, and discourages algae and bacterial growth. More often than not, however, you will find manufactured coolants made specifically for PC liquid cooling. These special liquids carry unique properties such as UV reactivity, reduced electrical conductivity, algae/bacteria-inhibiting formulas, and/or lowered thermal resistance (better heat transfer.) Liquid cooling systems require a considerable number of components to function: the pump, the reservoir, the radiator, waterblocks, and tubing. The pump is, obviously, a device that pushes the coolant through the cooling loop using an electric motor and impeller. The reservoir is a hollow block or container that holds additional coolant; it is an optional, though strongly advised, component to include in a loop. The radiator is a large block of metal fins with Copper tubing running through the inside. Fans usually sit on top of the radiator and drive cool air through the fins, which absorb the heat from the Copper tubing in the same way a convection heatsink would. A waterblock is a specially designed piece of Copper or Aluminum made exclusively for a particular component (such as the CPU), with water channels bored through strategic locations in the block, sometimes with fins along the channels to help draw heat away from the source. Tubing is obviously what ties all of these components together and can be made from several types and sizes of plastic, which can be transparent, opaque, or even UV-reactive. 
The components can be arranged in any sort of fashion the user desires or deems necessary, and every once in a while a second pump or radiator will be added to help with certain systems.
- Advantages And Disadvantages: Liquid cooling systems hold a large advantage over convection coolers in thermal resistance, since a liquid is much denser than a gas and can therefore transfer heat more efficiently under identical conditions. However, because a liquid cooling system still relies only on air and liquid to carry heat away, rather than on some form of active refrigeration, temperatures cannot drop below the ambient room temperature. The component list can also become expensive rather quickly, and assembling it is best left to an experienced hand. And since this type of cooling uses liquid, any broken seal or tube poses a serious risk of an electrical short that could very well destroy the entire computer in mere seconds.
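The density advantage can be put in numbers with the basic heat-capacity relation Q = m · c · ΔT. A small sketch, using the textbook specific heat of water (the 200 W load and 5 C temperature rise below are illustrative assumptions, not specs for any real loop):

```python
def flow_rate_lpm(heat_w, delta_t_c, specific_heat=4186.0, density=1000.0):
    """Coolant flow needed to carry away heat_w Watts while the liquid
    warms by delta_t_c degrees across the loop (from Q = m * c * dT).

    Defaults are textbook values for water (J/kg*K and kg/m^3);
    manufactured coolants differ somewhat.
    """
    kg_per_s = heat_w / (specific_heat * delta_t_c)   # mass flow
    return kg_per_s / density * 1000.0 * 60.0         # -> litres/minute

# Illustrative: removing 200 W with only a 5 C coolant temperature
# rise takes well under one litre per minute -- liquid's high heat
# capacity is exactly why it outperforms air.
print(round(flow_rate_lpm(200.0, 5.0), 2))   # 0.57
```

Air would need orders of magnitude more volume per second to move the same heat at the same temperature rise, which is the quantitative core of liquid cooling's advantage.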
- Recommended Use: Liquid cooling systems may be great performers and add a considerable amount of "cool factor" to your PC, but they should only be used by experienced individuals who know exactly what they're doing and what is needed to get it done. Liquid cooling systems should be used in cases that have seriously overclocked components that require an additional amount of cooling. Even though this type of cooling system does not base its performance solely on air, this does not mean that fans can be eliminated from the case altogether; circulation is still necessary to exhaust warm air radiated from components not included in the cooling loop as well as warm air created by the radiator.
Peltier/TEC Cooling [033C]
- Thermodynamics: Peltier units, or TECs (thermoelectric coolers,) utilize what's known as the Peltier effect: simply put, when a current flows through a junction of two different semiconductor materials, one side of the plate becomes warm and the other becomes cold. These two materials are sandwiched between ceramic substrates and internally aligned in an alternating fashion. When a current is introduced, a temperature differential develops across the TEC. As electrons move between the two materials they shift between high and low energy states, carrying heat with them; in the process, heat is efficiently pumped away from the cold side of the unit. When a heatsink or other cooling method is attached to the hot side, that heat can be removed from the TEC by conventional means, improving the Peltier unit's effective cooling capacity.
- Advantages And Disadvantages: Thermoelectric coolers vary in power and cooling capacity, allowing users of all extremes to purchase a suitably powered unit for their cooling needs. Ranging from a few dozen Watts to several hundred, TECs can be used for anything from an old Pentium without a fitting heatsink to a performance enthusiast's quad-core CPU overclocked past 4 GHz. Another plus is that TECs have no moving parts to break, wear out, or malfunction, almost guaranteeing flawless operation. While the heat-pumping ability of a TEC is almost unrivaled, the power consumption of high-end units usually warrants the addition of a second power supply. High-powered thermoelectric coolers also raise the ambient temperature of the case and the room it occupies, since the hot side of the unit faces away from the component being cooled and freely radiates heat into the case. Additionally, these high-powered coolers almost always require some form of insulation around the cooler and component to prevent condensation from forming and shorting out the computer. Undershooting the heat output of the component to be cooled can also have dangerous effects, as the component will not be cooled to a satisfactory level.
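The power-supply and case-heating complaints above come from one piece of arithmetic: a Peltier does not destroy heat, it moves it, and it adds its own electrical input on top. A minimal sketch (the 90 W CPU and 120 W TEC below are illustrative assumptions, not figures for a specific product):

```python
def tec_hot_side_watts(component_w, tec_input_w):
    """Heat the hot-side heatsink of a Peltier must dissipate.

    A TEC pumps the component's heat across the plate AND dumps its
    own electrical input there too, so the hot side always sheds more
    heat than the bare chip ever would.
    """
    return component_w + tec_input_w

# Illustrative: a 90 W CPU under a 120 W Peltier means the hot-side
# heatsink has to shed 210 W into the case -- and that 120 W draw is
# why a second power supply is often recommended.
print(tec_hot_side_watts(90.0, 120.0))   # 210.0
```

This is also why undershooting is dangerous: a TEC rated below the component's output cannot keep its cold side cold while the extra input wattage still heats the case.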
- Recommended Use: Peltier units should only be used by experienced computer users who know exactly what is required for optimal cooling and safety. However tempting these units' promising cooling capabilities may be, there is much to consider in regards to the power of the unit, power draw, heat output and condensation prevention. For that reason, TECs should only be used and installed by an expert.
Phase Change/Refrigeration Cooling [033D]
- Thermodynamics: To help paint a better picture before diving into the specifics, a phase change unit is essentially what you would find on the back of your refrigerator, scaled down and specially suited for use in or with a PC. Now that that's out of the way, on to the details. All phase change units circulate a chemical refrigerant that changes state along the way, which is where the name "phase change" comes from. The refrigerant is first passed (as a gas) through a compressor and pressurized to a high degree. From there it moves into the condenser, where the refrigerant is cooled and changed into a liquid. The transition from gas to liquid is exothermic, meaning heat is given off as the gas condenses. Once the refrigerant has become a liquid, it is forced into a Copper capillary tube which feeds it into a low-pressure evaporation chamber. This chamber allows the liquid to boil back into a gas, an endothermic change; in other words, heat is absorbed. Phase change units position this evaporation chamber directly over the CPU, allowing the maximum amount of heat to be carried away for best results.
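The reason evaporation absorbs so much heat is the refrigerant's latent heat of vaporization: Q = m · h_fg. A rough sketch of the scale involved; the 200 kJ/kg latent heat below is a ballpark assumption for common refrigerants, and the 150 W load is illustrative, not a spec for any real unit:

```python
def refrigerant_flow_gps(heat_w, latent_heat_j_per_kg=200_000.0):
    """Mass of refrigerant that must evaporate per second to absorb
    heat_w Watts in the evaporator (from Q = m * h_fg).

    The default latent heat is only a ballpark for common
    refrigerants, not a figure for any particular cooler.
    """
    return heat_w / latent_heat_j_per_kg * 1000.0   # grams per second

# Illustrative: absorbing 150 W from a CPU takes under a gram of
# refrigerant boiling off per second -- the phase change itself does
# almost all of the work, which is why these units cool so deeply.
print(round(refrigerant_flow_gps(150.0), 2))   # 0.75
```

Compare this with the liquid-cooling calculation earlier in the section: evaporating a gram of refrigerant absorbs far more heat than warming a gram of water by a few degrees, which is the whole point of moving to a phase change design.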
- Advantages And Disadvantages: Phase change coolers are an overclocker's dream come true: they're relatively easy to install and manage, and they let the user seriously overclock a processor while keeping it below the freezing point. Some of the world's best overclocking feats have been accomplished with a phase change unit because of its impressive cooling ability. That ability comes at a hefty price, however: these units are expensive and quite dangerous. The processor must be thoroughly protected against condensation, and care must be taken not to puncture, cut, or over-bend any part of the unit. One of the more obvious drawbacks is size: store-bought phase change units can be a quarter the size of the case itself, and nearly as loud as the household refrigerator they resemble.
- Recommended Use: These coolers should only be handled by seasoned experts who know what it is they're playing with. While their potential is astounding and tempting, these aren't for the faint of heart. Phase change units are by far the most advanced, and the most dangerous, mainstream form of cooling and aren't meant to serve as an everyday cooling solution. Many companies warn that these units are to be used for overclocking and benchmarking only; using one for day-to-day tasks can compromise its lifespan and potentially destroy not only the unit but your system.
Thermal Paste [033E]
- Thermodynamics: Thermal paste is a thick, thermally conductive substance that is applied between a component and a heatsink to help improve thermal exchange. When the heatsink is secured, it compresses the paste and pushes it into the microscopic hills and valleys of the heatsink, displacing air in the process. By removing the air and filling these gaps, it increases the overall contact surface area, which translates into better thermal transfer from the heat-producing component to the heatsink. Not a lot of paste is required; most of the time a drop the size of a grain of white, uncooked rice will be more than enough. Some users who apply thermal paste themselves spread the paste over the component to ensure even coverage, usually with a credit card, guitar pick or anything of that nature. A variant of the thermal paste is the thermal pad: a spongy material with special tape on either side of the pad to make it adhere to the component. This is not as strongly preferred by professionals because if this tape happens to pick up any dirt or grease before it is installed, the strength of the adhesive is reduced, which can allow the pad to fall off of the component over time.
- Advantages And Disadvantages: Thermal paste is all ups and no downs, surprisingly enough. It displaces air in the microscopic valleys in a heatsink to make heat transfer more efficient, it aids in reducing component temperatures and is very easy to come across. Even when the paste is liberally applied and either is pushed off of the substrate or falls onto a circuit board, it is very easy to clean up and doesn't leave a residue (unless it is "baked" on.) Be mindful that all pastes have to cure before they become as thermally efficient as possible, which can range between 24 and 48 hours. In that time, it is a good idea to place a heavy load on whatever component the paste has been applied to and also let it cool down for some time afterwards.
- Recommended Use: If any component produces enough heat to need a heatsink, it'll need thermal paste! It's as simple and ubiquitous as that. Application is easy as is cleaning up any mess.
Section 4: The Basics Of SLI 
This section will focus on NVIDIA's new dual-GPU technology known as SLI, or Scalable Link Interface. Here we will cover: how SLI works, the different SLI rendering modes and features, standard hardware requirements for SLI, an overview of SLI-ready nForce chipset features, dual-GPU power consumption rates and the various graphics cards that can be used in SLI.
Fundamentals Of SLI 
SLI is, simply put, two graphics processors doing the work of one. Each graphics card is assigned 50% of the visual workload for a given scene and both GPUs render their share concurrently, nearly doubling the output. As we know, no system is 100% efficient, but SLI can come as close as a 90% performance gain under supported titles and applications. In this section we'll briefly cover how to get your SLI rig up and running, how it works and how to tune it to suit your needs!
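The scaling arithmetic above can be sketched in a few lines. This is an illustrative sketch only: the function name and the example frame rates are made up, and the 90% figure is the guide's own best case, not a guarantee.

```python
# Rough sketch of SLI scaling arithmetic (illustrative numbers only).
# "scaling" is the fraction of the second GPU's throughput actually
# realized; the guide cites up to ~90% in supported titles.

def sli_fps(single_gpu_fps, scaling=0.9):
    """Approximate frame rate with two GPUs at a given scaling efficiency."""
    return single_gpu_fps * (1 + scaling)

# A card managing 40 FPS alone would reach about 76 FPS at 90% scaling,
# but only 60 FPS if an unsupported title scales at a poor 50%:
print(sli_fps(40))
print(sli_fps(40, 0.5))
```

This also shows why an unsupported application can disappoint: with 0% scaling the second card adds nothing at all.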
SLI Installation And Activation [041A]
Here we will be focusing on how to install your SLI-ready video cards, prepare your system on a hardware level for SLI and finally enabling it through software in your operating system. The process will be explained as clearly as possible and will be going on the assumption that you are familiar with your operating system's interface as well as programs and utilities provided by NVIDIA. Bear in mind that issues may arise that can possibly prevent the activation of SLI, in which case you should seek assistance on these forums.
- Installation And Preparation: First things first: getting your two cards installed and prepped for SLI mode. Before seating the cards, be sure to check your motherboard for what's known as an "SLI selector chip." This is a small card that appears on lower-end or older SLI-ready motherboards and is the manual form of enabling SLI on a hardware level (though you will still have to enable SLI through the NVIDIA Control Panel.) If you find such a card, flip it into the proper position and proceed with the installation process as necessary. For those with motherboards that lack a selector chip, you may be required to enter the BIOS and set the PCI-Express Broadcast Aperture to "Auto" for future ease. At this time, you may install your video cards in the appropriate PCI-Express lanes. If you know of a difference between the two cards' clock speeds or VRAM size, look below at the SLI-Ready Graphics Card item in the Section 4.2: Standard SLI Requirements section. Once the cards are bridged with the motherboard-provided connector and powered via dedicated power leads, it's time to close things up and get ready to kick SLI into gear!
- Activation: Assuming nothing has gone wrong between the beginning of this section and now, bring your PC to life and let Windows load. (Sorry - if you have Linux and need help enabling SLI, I'd have to ask you to either make a post asking about it on forums or tell me how to so I can put it here!) If you haven't done so by this point, install your video drivers (or upgrade if you so choose.) Once all of the background tasks and miscellaneous programs have loaded, right-click in an empty space on your desktop and click on "NVIDIA Control Panel." If you don't have the context-menu shortcut, just open the Windows Control Panel and there should be an icon with the same name amongst the others. You'll be greeted with a fairly basic interface comprised of a few images and titles, but for now all that is required is access to the "3D Settings" category which contains the option to enable SLI under the "Set SLI configuration" section. Inside should be the option to either disable or enable SLI, and of course you'll want to enable it for now. If you have multiple monitors installed, all but one will go blank because multi-monitor setups are not supported in SLI at this time, but you can always disable SLI for whenever more than one display is necessary. Enabling or disabling SLI with a 3D application active warrants a full system reboot, so be sure to close out any programs that may cause this to occur.
SLI Rendering Modes [041B]
SLI works in two different modes: Alternate Frame Rendering (AFR) and Split-Frame Rendering (SFR.) There is no "better" rendering mode in terms of process or efficacy since they're both doing the same amount of work, just in different fashions. Either mode will surely provide a significant improvement over single-GPU rendering in compatible applications, that much is certain.
- Alternate Frame Rendering: This is a rendering mode where one video card renders even-numbered frames while the other renders odd-numbered frames. This mode usually works best for both supported and some unsupported games and applications since it has a fairly solid track record with both, considering it isn't as algorithmically intense as SFR. AFR has two modes, aptly named Alternate Frame Rendering 1 and Alternate Frame Rendering 2. AFR2 can be considered a "safe mode" version of AFR in the sense that some games that have trouble with AFR1 may still be able to use AFR2, and it can even produce better frame rates in some situations where both modes work.
- Split-Frame Rendering: This is a rendering mode usually reserved for games and applications with official SLI profiles and support due to the way the load is distributed amongst the processors. SFR draws a line where half of the visual information lies on the screen and allocates one half per GPU, hence the name "split-frame." The only problem with this rendering mode is that the visual workload cannot be divided equally without a specific profile for the application, meaning that an official SLI profile must be in place for it to work correctly.
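The two modes can be pictured with a couple of toy functions: AFR simply alternates whole frames between the GPUs, while SFR has to guess where to draw the dividing line so both halves cost about the same, which is why it leans on per-application profiles. Nothing below reflects actual driver internals; the names and the per-row "cost" numbers are made up for the sake of the example.

```python
# Toy illustrations of the two SLI rendering modes (hypothetical model).

def afr_assign(frame_number):
    """AFR: even-numbered frames go to one GPU, odd-numbered to the other."""
    return "GPU0" if frame_number % 2 == 0 else "GPU1"

def sfr_split_row(row_costs):
    """SFR: pick the screen row where the cumulative 'visual cost'
    reaches half the total, so each GPU gets a similar workload."""
    total = sum(row_costs)
    running = 0
    for i, cost in enumerate(row_costs):
        running += cost
        if running >= total / 2:
            return i
    return len(row_costs) - 1

print([afr_assign(n) for n in range(4)])   # frames alternate GPU0/GPU1
print(sfr_split_row([1] * 10))             # uniform scene: split near the middle
print(sfr_split_row([5, 5, 5, 1, 1, 1, 1, 1, 1, 1]))  # detail up top pulls the split up
```

The second `sfr_split_row` call mirrors the "room full of detail" case described later under the visual indicators: concentrated detail shifts the split line away from center.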
- SLI Visual Indicators: The SLI visual indicator is not a separate rendering mode, merely an extension of whichever is currently being used. This function is at its best, and most easily understood, when enabled with Split-Frame mode, however it will work with Alternate Frame rendering as well. For SFR, a horizontal band will move vertically, marking where the two halves of the visual workload meet, and depending on the concentration of visual detail the position of said line can differ. In a basic cuboidal room with no objects or actors, the line will hover right around the middle of the screen. In a room with stairs, windows, ceiling fans and light fixtures, doors, support columns and whatnot, the load balance is going to be thrown way off since there is more visual content in one part of the screen than the other. The visual indicator for Alternate Frame rendering modes isn't as simple or as helpful, though it is still functional. For AFR, an empty vertical bar is displayed on the left side of the screen with a green box in the middle. As the amount of scaling increases, the box will grow accordingly, and of course as the amount of scaling decreases the box will shrink. When the box is small, this means that the application is being restricted by the CPU, but when the box extends the height of the monitor it indicates that the video subsystem is holding the system back. To enable or disable the SLI visual indicators, open the NVIDIA Control Panel, and in the "3D Settings" category, under the menu aptly named "3D Settings," you will find this feature.
- SLI Anti-Aliasing: While not an actual rendering mode, this is an Anti-Aliasing setting exclusive to SLI configurations. SLI Anti-Aliasing, otherwise known as SLIAA, takes advantage of the presence of the two processors by splitting the Anti-Aliasing workload on both processors rather than one alone. The lowest setting is SLI8x, allocating a 4x process to each GPU. Most users can enable SLI16x which applies an 8xS process over each GPU, though this setting does come with a high performance hit. For the lucky few who have (working) Quad-SLI, they have the option to enable SLI32x, though it too applies an 8xS AA process to each video processor like SLI16x does. Enabling SLIAA means that the scene will only be rendered with the primary GPU, and although it can lead to some performance gain, the gains may not be as significant compared to AFR or SFR.
SLI Rendering Profiles [041C]
In this section we'll review the numerous graphics options available to you through the NVIDIA Control Panel's "3D Options" category. With or without an SLI computer, you have the option to set numerous visual options to optimize your games and applications the way you choose. There are "global" settings and then there are program-specific settings, both of which are fully customizable to suit your needs. We'll go over the majority of the options here, covering their definition and how they help improve the quality of an image or scene. Additionally, suggested settings for those options will be noted to help keep both the image quality and frames-per-second count in good balance.
- Anisotropic Filtering: This is an image improvement technique which reduces the amount of blurriness of textures when they are viewed at an oblique angle, such as a runway in a flight simulator or a billboard in a racing game. Instead of using square or rectangular buffers which do not properly simulate perspective, a dynamic trapezoidal buffer is used to allow for minimal distortion even at extreme viewing angles. Rather than forcing a single trapezoidal shape onto all textures, the degree of texture distortion directly relates to the angle of the trapezoidal buffer used, meaning this technique can improve image quality of textures being viewed at all angles, not just one. Levels range from zero to 16x: zero of course applies no special filtering, while 16x will filter virtually any texture drawn on the screen regardless of distance. In a general sense, the "best" setting for 6-series users would be about 4x, 7-series about 8x, and 8-series users can use 16x AF without a problem. Results can vary according to your video hardware specifications, level of detail, resolution and scene complexity.
- Anti-Aliasing: This is a visual feature which reduces the amount of jagged lines on a 3D object by enlarging the scene, frame by frame, then scaling it back down to the specified resolution. When the scene is being shrunk back down to the original size, the pixels smooth out and look less abrupt, which gives the scene an added sense of realism. The settings of Anti-Aliasing range from zero up to SLIAA 32x with many steps in between. 2x Anti-Aliasing samples each pixel twice, giving a reasonable reduction in jagged edges without a high performance cost. 2xQ is the same as 2x AA, but it was designed to provide the same image quality as a 4x process without the performance hit, which it does by merging neighboring pixels; the downside is that the resulting blurriness causes a reduction in image quality. 4x AA samples each pixel four times and gives the best balance of image quality to system performance. 8xS is a "combination" Anti-Aliasing setting that uses a 1x2 Supersampling buffer with a 2x Multisampling buffer, often resulting in a spectacular image without much of a performance reduction. With higher-end video cards there are also 16x and 16xQ processes, however this time the "Q" stands for "quality" since it uses entirely different sampling sizes than standard 16x and leads to better image quality, of course at the cost of some performance. Beyond those modes are the SLI AA modes that were explained previously. High-end users will be able to apply an 8xS or 16x/16xQ process without suffering much of a performance cost, while mid-range and lower-end users may still be able to use 4x and 8xS processes without seeing a drastic reduction in frame rates. As noted above, users can enable certain levels of SLI Anti-Aliasing depending on their hardware. As with Anisotropic filtering, how well your system performs at certain settings depends on the resolution, level of detail, scene complexity and video hardware specs.
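One reason the higher AA levels carry a performance hit is the extra per-pixel sample storage. The model below is a simplified assumption (color plus depth stored per sample; real drivers vary and may compress samples), but it shows why stepping from 2x to 4x roughly doubles the buffer footprint:

```python
# Rough estimate of multisampled framebuffer memory under an assumed
# model: 4 bytes of color + 4 bytes of depth stored per sample.
# Real hardware differs; this is for intuition only.

def msaa_buffer_mb(width, height, samples, bytes_per_sample=8):
    """Approximate sample-buffer size in MB for a given AA level."""
    return width * height * bytes_per_sample * samples / (1024 ** 2)

# At 1280x1024, 4x AA needs twice the sample storage of 2x AA:
print(round(msaa_buffer_mb(1280, 1024, 2), 1))  # 20.0 (MB)
print(round(msaa_buffer_mb(1280, 1024, 4), 1))  # 40.0 (MB)
```

The same arithmetic explains why SLIAA helps: splitting the sampling work (and its bandwidth cost) across two GPUs keeps each card's share manageable.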
- Conformant Texture Clamp: This OpenGL-specific setting merely instructs the system how to handle texture coordinates. This setting only needs to be disabled if texture problems are noticed in an OpenGL application.
- Error Reporting: All this option determines is whether or not programs are allowed to check for informational errors. This can cause performance reductions in OpenGL applications that constantly check for errors.
- Extension Limit: Modern applications use longer strings of code to instruct the system to perform a certain task, and what this option does is limit the driver extension string, or how long a string of code can be. If enabled while running a modern OpenGL application, code can be cut off which can result in the program crashing or freezing. It's best to leave this disabled for newer programs, but enabled for older ones to prevent string looping.
- Force Mipmaps: Mipmaps are used in virtually every game on the market today because they allow for trilinear and Anisotropic filtering to be applied. This is used to reduce "shimmering," or the unnatural bright coloration of a pixel, that can appear on textures a certain distance away from the camera. Forcing mipmaps allows for more advanced texture filtering to be applied in the event that your game doesn't use its own mipmaps, and you can force either Bilinear or Trilinear mipmaps. Bilinear mipmaps may show abrupt transitions between mipmap levels, whereas Trilinear mipmaps create smooth transitions between them.
- Hardware Acceleration: This is a setting to ensure best performance and compatibility with single or multi-monitor configurations. When using a single monitor, this should be set as "Single display performance mode" unless significant visual artifacting (unrelated to GPU overclocking or overheating) is seen, in which case you should select "Compatibility performance mode." If using more than one monitor, this needs to be set at "Multiple monitor performance mode." If you notice artifacting on a multi-display setup, you may select the compatibility mode to remedy the situation.
- Negative LOD Bias: A LOD bias, or a Level Of Detail bias, is something set by a game or other 3D application to determine how sharp a texture appears after a certain distance. Some applications use a negative LOD bias to sharpen stationary textures, but when the texture is moved shimmering can be observed. Setting this to "Clamp" restricts the LOD bias from dropping below zero, and setting it to "Allow" allows for a negative LOD bias.
- SLI Performance Mode: Pretty self-explanatory - this permits the use of one of several SLI rendering modes, all of which were explained previously. The choices here are Split-Frame rendering, two Alternate Frame rendering modes, Single-GPU rendering and NVIDIA's recommended rendering mode that differs from program to program.
- Texture Filtering: Here you can choose one of four levels of texture detail: High Quality, Quality, Performance and High Performance, in descending order of image quality. The Quality modes use few or no optimization measures and provide the best image quality at the cost of performance. The Performance modes apply optimizations that degrade the overall quality of the image for the sake of performance, but with these modes additional shimmering can be observed.
- Threaded Optimization: This option allows for applications to utilize more than one physical CPU and thus should be left at "On" or "Auto" with newer applications, though it should be disabled when using more dated programs to prevent crashing.
- Transparency Anti-Aliasing: This is a specialized sort of Anti-Aliasing, however it is only applied to textures with alpha layers, or transparencies, to help reduce the appearance of jagged edges. Only two transparency Anti-Aliasing modes are available: Multisampling and Supersampling. Multisampling will offer only a small visual improvement though at an equally small performance hit, but for serious image improvement, Supersampling is the way to go even though the performance cost is significantly higher. Of course, this option can also be disabled entirely if it's raw FPS you seek.
- Triple Buffering: Triple buffering is a technique that uses a portion of the video memory to prepare the images that are about to be displayed. The first buffer is the image being displayed, the second buffer is the next frame to be displayed, and the third buffer is the image that's being created by the GPU. This technique uses one more buffer (and as a result, 50% more memory) than double buffering but helps to reduce tearing. This should be enabled with V-Sync to minimize the occurrence of tearing.
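The memory overhead is easy to put a number on, assuming a plain 32-bit color buffer per stage (actual driver allocation differs, but the ratio holds):

```python
# Back-of-the-envelope buffer memory at a given resolution, assuming
# 4 bytes (32-bit color) per pixel per buffer. Illustrative only.

def buffer_memory_mb(width, height, buffers, bytes_per_pixel=4):
    """Approximate total buffer memory in MB for N full-screen buffers."""
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

double = buffer_memory_mb(1600, 1200, 2)   # front + back buffer
triple = buffer_memory_mb(1600, 1200, 3)   # one extra in-flight frame
print(round(double, 1), round(triple, 1))  # 14.6 22.0 (MB)
```

At 1600x1200 the third buffer costs about 7 MB more, a modest slice of a 256 MB card in exchange for smoother V-Sync behavior.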
- V-Sync: V-Sync, or Vertical Synchronization, is the process of limiting and synchronizing the frame output of the GPU(s) to the refresh rate of the monitor. For example, a display with a vertical refresh rate of 75Hz updates the screen 75 times per second; when V-Sync is enabled the GPU(s) synchronize their output frames to that of the monitor. When V-Sync is disabled, the GPU(s) are not limited by the refresh rate of the monitor and produce as many frames as their hardware allows. While it may appear that your system performance has increased, you may actually be seeing less than if you had V-Sync enabled. Since the frames are not in sync with the monitor's update rate, some frames are skipped or only partially displayed; the latter consequence is known as tearing. Tearing is when a frame is not completely displayed on the screen and can have either a black line running across the bottom or half of the screen missing, which is a common result of V-Sync being disabled.
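What V-Sync enforces can be mimicked with a toy frame-pacing loop. To be clear about the assumption: real V-Sync is handled by the driver against the display's vertical blank, not with `sleep()`; this sketch only shows the "never present faster than the refresh interval" idea.

```python
import time

# Toy frame-pacing loop mimicking what V-Sync enforces: never present a
# frame faster than the monitor's refresh interval.
REFRESH_HZ = 75
FRAME_INTERVAL = 1.0 / REFRESH_HZ       # ~13.3 ms per refresh at 75 Hz

def render_frame():
    pass  # stand-in for the actual GPU work

start = time.monotonic()
for _ in range(5):
    frame_start = time.monotonic()
    render_frame()
    elapsed = time.monotonic() - frame_start
    if elapsed < FRAME_INTERVAL:              # the GPU finished early,
        time.sleep(FRAME_INTERVAL - elapsed)  # so wait for the next "refresh"
total = time.monotonic() - start
print(f"5 frames paced over ~{total:.2f}s at {REFRESH_HZ} Hz")
```

Note the flip side the section describes: if a frame takes longer than one refresh interval, a V-Synced GPU has to wait for the next refresh, which is why frame rates can drop in steps (75, 37.5, 25...) rather than gradually.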
Standard SLI Requirements 
While SLI may sound simple enough, there are a few items on the system checklist that need to be accounted for before making your next purchase. NVIDIA has begun certifying numerous types of products as "SLI-Ready," but what does that mean? In this section, we're going to cover just that, and whether you even need SLI-certified components!
- SLI-Ready Graphics Card: This should be fairly obvious - if you don't have an SLI-ready video card, you can't run SLI! In order to enable SLI, the two video cards must be of the same family and model. For example, two 8800 GTXs could run in SLI while an 8800 GTX and an 8800 GTS could not, because there are numerous discrepancies between them that could not be compensated for despite the fact they are closely related. Unlike the days when SLI had just been brought back from the dead, the two cards do not have to be from the same manufacturer, have the same BIOS, be the same speed or even have the same amount of VRAM. While it isn't terribly important that you keep a close eye on the VGA BIOS version or who the card is produced by, you do need to be mindful of the card's clocks and memory size. If there is a difference in the clock speeds (GPU or memory, doesn't matter which) you've got a decision to make: sacrifice speed for safety or take the risks of forcing an overclock, and the latter does not always work. If you install the slower of the two cards in the primary PCI-Express slot, the system will automatically underclock the second (faster) card to keep them in step. This is usually the recommended configuration because it has the lowest chance for failure or future problems. The other option is to install the faster card as the master, forcing the other card to be overclocked, but as stated earlier this configuration may not always force an overclock or allow SLI to be enabled at all. While it can result in improved performance, the system may become less stable, and the initially slower card becomes more prone to overheating and visual artifacts. If there is a VRAM subsystem size difference, say between two 8800 GTSs, the card with the smaller subsystem must be installed as the master to keep both cards on the same playing field. You will also be required to alter the CoolBits registry string by changing its value to 18.
This will automatically disable certain memory banks to level the VRAM size of each card, thus allowing them to operate at the same speed. It should be noted that this is not an officially supported feature and will not work every time.
- SLI-Ready Memory: SLI-Ready memory modules are a specialized sort of EPP module aimed at use with nForce motherboards. Enhanced Performance Profiles, or EPP, essentially simplify the process of overclocking system memory by depending on a predetermined set of values to ensure maximum reliability. While these values aren't necessarily the fastest configuration, that isn't the aim: it is to be the best stable speed possible. RAM with EPP support, in a nutshell, consists of modules that have been tested on a multitude of motherboards and have built-in profiles for each which maintain internal stability. SLI-ready modules are more or less the same, however they have been tested on nForce platforms and constructed for similar reliability and overclockability. SLI-ready RAM is not a necessity for enabling SLI; it is completely optional and in no way will affect SLI performance.
- SLI-Ready Motherboard: Even if you have two SLI-certified video cards, you're out of luck if you don't have an SLI-ready motherboard. NVIDIA nForce chipsets are specially-made MCPs (Northbridges) that handle information very differently than traditional chipsets, of course due to SLI. Unless you are a prodigious hardware modifier who can tweak video drivers like no one's business, SLI can only be enabled on an nForce chipset; not an Intel, CrossFire or ULi chipset, or any other. Even if the SLI gods smile down upon you and allow you to enable SLI on an uncertified motherboard, don't expect spectacular results.
- SLI-Ready Power Supply: It's obvious that two video cards consume more power than one, but it's not like adding another hard drive or a fan; graphics cards pull some serious power and you'll need a quality power supply to do the job. SLI-ready power supplies are, much of the time, certified to work with not one but two of the most powerful cards available at the time of their debut. These units (more often than not) have large +12V rails or a good number of them dedicated to the PCI-Express leads, a crucial feature in feeding the video cards enough power for them to do their work in comfort. An SLI-ready power supply isn't a necessity to run SLI, however it would be wise to invest in one just to be on the safe side of things.
- SLI-Ready Case: SLI-certified cases are fairly new and are the only item here that is not optimized or specialized in any way to accommodate additional system components or enhance airflow or anything to that effect. Actually, these cases started appearing after the release of the 8800 GTX to denote which cases could contain the 10.5" beast, and usually that's the only characteristic that makes a case "SLI-ready." One of MaximumPC's members wrote a helpful list of cases that can and cannot accommodate the 8800 GTX, which you can view by clicking here.
nForce SLI Chipset Specifications [042A]
Although you do not need a specific nForce chipset to run a certain pair of video cards, there are many differences between the three tiers: nForce 600, nForce 500 and nForce 4. All three chipset families have different subsets with different features, intended for different levels of enthusiasts. Here we will cover only the versions that support SLI, as there are some that do not.
nForce 600, AMD: Unlike the nForce 500 series, where a myriad of chipset subsets are available, there is but one 600-series MCP available: the nForce 680a SLI.
- nForce 680a SLI: The 680a chipset does support SLI, however only the first two (of the four available) PCI-Express slots allow for it. There are two 16x PCI-E slots and two 8x PCI-E slots, but there have been no announcements of dual-SLI or quad-SLI support on this platform. Aside from that, the 680a features DualNet and FirstPacket technology and high-definition audio, which seem to be rather standard on most other chipsets.
nForce 600, Intel: For Intel platforms, there is a wider selection compared to AMD. These chipsets are built with the overclocker in mind, complete with a plethora of overclocking options and a streamlined component distribution amongst the Northbridge and Southbridge. SLI-ready chipsets include: nForce 680i SLI, nForce 680i LT SLI and the nForce 650i SLI. The only chipset that does not support SLI is the 650i Ultra.
- nForce 680i SLI: The 680i SLI is the top-of-the-line Intel MCP available at this time, complete with 16x SLI support and a third 8x PCI-Express slot dedicated for future graphics expansion. Additionally, the 680i supports CPUs with 1333 MHz FSBs, SLI-ready RAM with EPP, native dual Gigabit technology with network teaming along with FirstPacket technology, DualDDR2 and high-definition audio.
- nForce 680i LT SLI: The 680i LT is a chipset designed for those who want extreme overclocking capabilities and 16x SLI without having to worry about breaking the bank. Although it supports everything the standard 680i MCPs do, some overclocking features have been removed.
- nForce 650i SLI: Like the 680i LT, the 650i is aimed more at mainstream users who aren't concerned with intricately tuning and overclocking their system, as it does not have the third "expansion" 8x PCI-Express slot. It also doesn't support SLI-ready RAM with Enhanced Performance Profiles, nor does it have native dual Gigabit network facilities. That aside, it supports the other technologies that the 680i chipsets do.
nForce 500, AMD: The 500-series MCPs made for AMD configurations range greatly in features and also in number. Starting from the most advanced, there is: nForce 590 SLI, nForce 570 SLI, nForce 570 LT SLI, nForce 560 SLI and nForce 500 SLI. Chipsets that do not support SLI include: nForce 570 Ultra, nForce 550, nForce 520, nForce 520 LE, nForce 500 Ultra and the nForce 500.
- nForce 590 SLI: As the top contender of the 500-series, this chipset supports full 16x SLI, Gigabit network teaming, FirstPacket technology and HD audio, some of which are shared with other high-end chipsets.
- nForce 570 SLI: The only real difference between the 590 and 570 chipsets is quite minute: the 570 does not support full 16x SLI and instead runs at 8x with both video cards installed, like all of the other 500-series chipsets do. The 570 still supports the DualNet and FirstPacket technologies as well as high-definition audio.
- nForce 570 LT SLI: 570 LT chipsets are nearly identical to the standard 570 chipsets, however they do not feature the DualNet technology. Everything else supported by the 570 chipsets can be found on 570 LT ones.
- nForce 560/500 SLI: Geared towards the more budget-conscious enthusiast, 560 and 500 MCPs lack the dual Gigabit network ports, FirstPacket technology and HD audio. Like previous chipsets, SLI will turn down the PCI-E buses to 8x.
nForce 500, Intel: While there aren't as many 500-series chipsets for Intel users, the 590 and 570 MCPs are a quality set, both supporting some of the top-of-the-line technologies.
- nForce 590 SLI: As with previous top-end chipsets, the 590 boasts full 16x SLI, supports SLI-ready RAM, DualDDR2 technology, DualNet and FirstPacket technology and also high-definition audio.
- nForce 570 SLI: The 570 MCP drops support for 16x SLI, running at 8x like so many others, and also doesn't make use of SLI-ready RAM and DualNet technology. By upgrading the chipset drivers 570 users can attain support for FirstPacket technology, which is not initially supported.
nForce4: In order from the most full-featured to the more basic in the nForce4 series, there is the nForce4 SLI x16, nForce4 SLI and nForce4 SLI XE. Chipsets that do not support SLI include the nForce4 Ultra and the nForce4.
- nForce4 SLI x16: The x16 chipsets are the only nForce4 that support dual 16x PCI-Express slots in SLI, although there are no performance gains associated with 16x SLI in comparison to 8x SLI. With that said, it is aimed at high-end enthusiasts who are concerned with exceeding the bandwidth of a PCI-E 8x bus, though at this time no card is capable of doing just that.
- nForce4 SLI: This is the "standard" nForce4 SLI chipset, with all of the supported features found on the x16 chipset save the 16x SLI. It too is found on AMD and Intel platforms, supported on socket 939, 940 and LGA 775 motherboards.
- nForce4 SLI XE: The XE is a special nForce4 chipset, exclusive only to Intel LGA 775 socket-bearing motherboards. It only supports 8x SLI and doesn't feature TCP/IP acceleration.
SLI Configuration Power Consumption [042B]
It is important that your hardware can support SLI, but it's also important that your power supply is up to the task of providing a sufficient level of power to the system. Since the graphics card is one of the most power-hungry components in the system, adding a second one can sometimes warrant the use of a stronger PSU. These figures are the approximate peak power draw of an SLI setup, provided in both total Watts drawn and Amps drawn from the +12V rail. Single-card power consumption is equal to roughly half of the figures provided.
- 8800 Ultra: 248W / 20.6A
- 8800 GTX: 246W / 20.5A
- 8800 GTS 640 MB: 200W / 16.6A
- 8800 GTS 320 MB: 197W / 16.4A
- 8600 GTS: 94W / 7.8A
- 8600 GT: 84W / 7A
- 8500 GT: 80W / 6.6A
- 7950 GX2: 205W / 17A
- 7950 GT: 125W / 10.4A
- 7900 GTX: 159W / 13.2A
- 7900 GT 512 MB: 109W / 9.0A
- 7900 GT 256 MB: 95W / 7.9A
- 7900 GS: 89W / 7.4A
- 7800 GTX 512 MB: 177W / 14.7A
- 7800 GTX 256 MB: 154W / 12.8A
- 7800 GT: 109W / 9A
- 7600 GT: 72W / 6A
- 7600 GS: 57W / 4.7A
- 7300 GT: 52W / 4.3A
- 7300 GS: 36W / 3A
- 7300 LE: 34W / 2.8A
- 7100 GS: 34W / 2.8A
- 6800 Ultra: 140W / 8.4A
- 6800 GT: 108W / 6A
- 6800 GS: 105W / 5.9A
- 6800: 77W / 4.9A
- 6800 XT: 77W / 4.9A
- 6800 LE: 77W / 4.9A
- 6600 GT: 95W / 5.4A
- 6600: 57W / 3.5A
- 6600 LE: 57W / 3.5A
Note: The 6-series graphics cards draw more Amps off of the +5V rail than the +12V rail.
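The Watts and Amps columns above are related by simple P = V × I arithmetic: the +12V amperage is just the peak wattage divided by 12 V, assuming essentially all of the power comes off the +12V rail (which is why the 6-series cards, which shift part of their load to the +5V rail per the note above, show lower +12V amperage than the formula predicts). A minimal sketch of the arithmetic, using the 8800 GTX figure from the table:

```python
# Approximate +12V current draw from peak wattage: I = P / V.
# Wattage figures are the SLI totals from the table above;
# single-card draw is roughly half of the SLI figure.

RAIL_VOLTAGE = 12.0

def amps_on_12v(peak_watts):
    """Current (A) drawn from the +12V rail, assuming all power comes off it."""
    return peak_watts / RAIL_VOLTAGE

def single_card_watts(sli_watts):
    """Approximate per-card draw: half of the two-card SLI figure."""
    return sli_watts / 2

# 8800 GTX SLI: 246 W total
print(round(amps_on_12v(246), 1))  # 20.5 A, matching the table
print(single_card_watts(246))      # 123.0 W per card
```

Note this only estimates the graphics cards' share; when sizing a PSU you still need headroom for the CPU, drives and other components on the same rails.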
Graphics Cards And GPU Hierarchy 
This section covers all of the GeForce graphics cards and processors that can be used under SLI, ordered by chipset series and model number. Beside each card is its GPU codename, for reference. Only the 8-series GeForce video processors are DirectX 10-compliant.
- 8800 Ultra (G80)
- 8800 GTX (G80)
- 8800 GTS 640 MB (G80)
- 8800 GTS 320 MB (G80)
- 8600 GTS (G86)
- 8600 GT (G86)
- 8500 GT* (G86)
- 7950 GX2 (G71)
- 7950 GT (G71)
- 7900 GTX (G71)
- 7900 GT 512 MB (G71)
- 7900 GT 256 MB (G71)
- 7900 GS (G71)
- 7800 GTX 512 MB (G70)
- 7800 GTX 256 MB (G70)
- 7800 GT (G70)
- 7600 GT (G73)
- 7600 GS (G73)
- 7300 GT (G73)
- 7300 GS* (G72)
- 7300 LE* (NV47)
- 7100 GS* (NV44)
- 6800 Ultra (NV40)
- 6800 GT (NV40)
- 6800 GS (NV42)
- 6800 (NV40)
- 6800 XT (NV42)
- 6800 LE (NV41)
- 6600 GT (NV43)
- 6600 (NV43)
- 6600 LE* (NV43-V)
* = No SLI bridge required
Section 5: Credits 
This section needs no explanation - these are all the people who helped me write this guide and keep the information accurate!
A big thanks goes to all of those who helped, and thank you for reading! If you have any questions, comments, ideas or suggestions, please let me know via the feedback thread.
Edited by Kelly - 27 Oct 2007 at 12:53am