
Saturday, 08 December 2007

Alternatives to Graphics Card Overclocking

Flashing and unlocking are two popular ways to gain extra performance from a video card without technically overclocking it.

Flashing refers to using the BIOS of another card, based on the same core and design specifications, to "override" the original BIOS, effectively turning the card into a higher model. Flashing can be difficult, however, and a bad flash is sometimes irreversible. Stand-alone software for modifying BIOS files, such as NiBiTor, can sometimes be found (the GeForce 6/7 series are well regarded in this respect). It is not always necessary to acquire a BIOS file from a better model of video card, although the card whose BIOS is used should be compatible: the same base model, design and/or manufacturing process, revision, and so on. For example, video cards with 3D accelerators (the vast majority of today's market) have two voltage and speed settings, one for 2D and one for 3D, but were designed to operate with three voltage stages. The third stage lies somewhere between the other two and serves as a fallback when the card overheats, or as a middle stage when moving from 2D to 3D operation. It can therefore be wise to configure this middle stage before "serious" overclocking, precisely because of this fallback ability: the card can drop down to this speed, losing a few (or sometimes a few dozen, depending on the setting) percent of its performance while it cools down, without leaving 3D mode, and afterwards return to the desired full-speed clock and voltage settings.
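The three-stage behaviour described above can be pictured as a simple stage-selection rule. The sketch below is purely illustrative: the stage table, clock and voltage numbers, and the overheat threshold are all hypothetical values, not data from any real video BIOS.

```python
# Illustrative three-stage clock/voltage table: a 2D stage, a middle
# fallback stage, and a full-speed 3D stage. All numbers are made-up
# examples, not values from a real BIOS.
STAGES = [
    {"name": "2D",       "core_mhz": 300, "voltage_v": 1.1},
    {"name": "fallback", "core_mhz": 400, "voltage_v": 1.2},
    {"name": "3D",       "core_mhz": 500, "voltage_v": 1.4},
]

OVERHEAT_C = 95  # hypothetical temperature at which the card throttles

def select_stage(in_3d_mode, gpu_temp_c):
    """Pick a clock/voltage stage as described in the text: the 2D
    stage when idle, full 3D speed when rendering, and the middle
    stage when the card overheats, so it can cool down without
    leaving 3D mode."""
    if not in_3d_mode:
        return STAGES[0]
    if gpu_temp_c >= OVERHEAT_C:
        return STAGES[1]  # drop to the fallback stage and cool down
    return STAGES[2]
```

For instance, `select_stage(True, 98)` returns the fallback stage, modelling the card reducing its speed under heat without dropping out of 3D mode.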

Some cards also have certain abilities not directly connected with overclocking. For example, NVIDIA's GeForce 6600GT (AGP flavor) features a temperature monitor (used internally by the card), which is invisible to the user in the 'vanilla' version of the card's BIOS. Modifying the BIOS (taking it out, reprogramming the values and flashing it back in) can allow a 'Temperature' tab to become visible in the card driver's advanced menu.

Unlocking refers to enabling extra pipelines and/or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only) and the Radeon X800 Pro VIVO were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipelines enabled, they share the same 16×6 GPU core as a 6800GT or Ultra, but may not have passed inspection with all of their pipelines and shaders unlocked. In more recent generations, both ATI and Nvidia have laser-cut pipelines to prevent this practice.[citation needed]

Graphics cards in the same series all share a processor based on the same architecture; for example, all 7-series cards use the 7-series GPU architecture. The differences between cards are the number of transistors in the processor and the speed at which it is clocked. A higher number in the series indicates a higher transistor count: an 8800, for example, has more transistors than an 8600. A processor with a higher clock speed is one that has been thoroughly tested at that speed, with the output checked by ATI or NVIDIA to ensure there are no silent errors, small errors that would go undetected unless the output is explicitly examined. Lower models of the processor have not been certified at higher speeds, but can often be run faster than specified.
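This rating process can be pictured as speed binning: each part is tested at descending speeds under worst-case conditions and stamped with the highest speed it passes. A minimal sketch, in which the speed grades and the pass/fail predicate are hypothetical stand-ins for the manufacturer's actual test suite:

```python
# Hypothetical speed grades (MHz) a product line might be sold at,
# listed fastest first. Illustrative numbers only.
SPEED_GRADES = [650, 600, 550, 500]

def rate_chip(passes_at, grades=SPEED_GRADES):
    """Return the highest grade at which the chip passes the
    manufacturer's functionality tests, or None if it fails them all.
    `passes_at` is a predicate (speed in MHz -> bool) standing in for
    the real test procedure."""
    for speed in grades:
        if passes_at(speed):
            return speed
    return None
```

A chip whose tests pass at 600 MHz but not 650 MHz would be rated 600; note that a chip deliberately sold at a lower grade for market reasons may still pass well above its label, which is the headroom overclockers exploit.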

It is important to remember that while pipeline unlocking sounds very promising, there is no way to determine whether the 'unlocked' pipelines will operate without errors, or at all; this information is solely at the manufacturer's discretion. In a worst-case scenario, the card may never start up again, leaving a 'dead' piece of equipment. It is possible to revert to the card's previous settings, but doing so involves manual BIOS flashing with special tools and a copy of the card's original BIOS.

Overclocking graphics cards

Graphics cards can also be overclocked with utilities such as NVIDIA's Coolbits or the PEG Link Mode on ASUS motherboards. Overclocking a video card usually yields a much larger improvement in gaming performance than overclocking the processor or memory. As with processor overclocking, sufficient cooling is a must: many graphics cards overheat and burn out when overclocked too far.

Sometimes it is possible to tell that a graphics card is being pushed beyond its limits, before any permanent damage is done, by observing on-screen distortions known as artifacts. Two such warning signs are widely recognized: green, flashing, random triangles appearing on the screen usually correspond to overheating of the GPU (graphics processing unit) itself, while white, flashing dots appearing randomly (usually in groups) often mean that the card's RAM is overheating. It is common to run into one of these problems when overclocking graphics cards. Showing both symptoms at once usually means the card is being pushed severely beyond its heat, speed, and voltage limits. If these artifacts appear at normal speed, voltage, and temperature, they may indicate a fault in the card itself.
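The two warning signs above can be summarised as a small diagnostic mapping. This is a toy encoding of the rules of thumb just described, not a substitute for actual temperature monitoring:

```python
def diagnose(green_triangles, white_dots):
    """Map the two on-screen artifact symptoms described in the text
    to their commonly understood causes. Purely illustrative."""
    if green_triangles and white_dots:
        return "card pushed severely beyond heat/speed/voltage limits"
    if green_triangles:
        return "GPU core likely overheating"
    if white_dots:
        return "video RAM likely overheating"
    return "no artifacts observed"
```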

Some overclockers use a hardware voltage modification where a potentiometer is applied to the video card to manually adjust the voltage. This results in much greater flexibility, as overclocking software for graphics cards is rarely able to freely adjust the voltage. Voltage mods are very risky and may result in a dead video card, especially if the voltage modification ("voltmod") is applied by an inexperienced individual. It is also worth mentioning that adding physical elements to the video card immediately voids the warranty (even if the component has been designed and manufactured with overclocking in mind, and has the appropriate section in its warranty).

Measuring effects of overclocking

Benchmarks are used to evaluate performance. The benchmarks can themselves become a kind of 'sport', in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark).
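A qualified benchmark score of the kind described above might be produced as follows. This is a sketch: `run_benchmark` is a stand-in for any real benchmark invocation, and the reporting format is invented for illustration.

```python
def qualified_score(run_benchmark, runs=5):
    """Run a benchmark several times and report the best score
    together with how often it actually ran to completion, in the
    spirit of the qualified results described in the text.
    `run_benchmark` should return a numeric score, or raise an
    exception / return None when the run fails."""
    scores = []
    for _ in range(runs):
        try:
            result = run_benchmark()
        except Exception:
            result = None  # a crash counts as a failed run
        if result is not None:
            scores.append(result)
    return {
        "best_score": max(scores) if scores else None,
        "completed": len(scores),
        "attempted": runs,
    }
```

A report of `{"best_score": 105, "completed": 2, "attempted": 5}` corresponds to the honest disclosure that the benchmark only ran to completion 2 times in 5.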

Given only benchmark scores, it may be difficult to judge the difference overclocking makes to the overall computing experience. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without considering how higher speeds in that aspect improve system performance as a whole. Outside demanding applications such as video encoding, high-demand databases, and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user, depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions, but because some tests involve non-deterministic physics, such as ragdoll motion, the scene differs slightly on each run, and small differences in test score are swamped by the noise floor.[citation needed]
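One crude way to account for that noise floor is to treat a score gain as meaningful only when it clearly exceeds the run-to-run spread of the baseline. The function below is a minimal sketch with that assumption; a rigorous comparison would use a proper statistical test.

```python
from statistics import mean, stdev

def gain_exceeds_noise(baseline_scores, oc_scores, k=2.0):
    """Treat an overclock's score improvement as real only if it
    exceeds k times the run-to-run spread of the baseline scores
    (a crude stand-in for the noise floor). Illustrative only."""
    noise_floor = k * stdev(baseline_scores)
    return (mean(oc_scores) - mean(baseline_scores)) > noise_floor
```

With baseline runs of [100, 102, 98, 101, 99], a jump to scores around 120 clears the noise floor, while a nominal bump to around 101 does not.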

Factors allowing overclocking

Overclockability arises in part due to the economics of the manufacturing processes of CPUs. In most cases, CPUs with different rated clock speeds are manufactured via exactly the same process. The clock speed that the CPU is rated for is at or below the speed at which the CPU has passed the manufacturer's functionality tests when operating in worst-case conditions (for example, the highest allowed temperature and lowest allowed supply voltage). Manufacturers must also leave additional margin for reasons discussed below. Sometimes manufacturers have an excess of similarly high-performing parts and cannot sell them all at the flagship price, so some are marked as medium-speed chips to be sold for medium prices. The performance of a given CPU stepping usually does not vary as widely as the marketing clock levels[citation needed].

When a manufacturer rates a chip for a certain speed, it must ensure that the chip functions properly at that speed over the entire range of allowed operating conditions. When overclocking a system, the operating conditions are usually tightly controlled, making the manufacturer's margin available as free headroom. Other system components are generally designed with margins for similar reasons; overclocked systems absorb this designed headroom and operate at lower tolerances. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".[8]

Some of what appears to be spare margin is actually required for proper operation of the processor throughout its lifetime. As semiconductor devices age, effects such as hot carrier injection, negative-bias temperature instability and electromigration reduce circuit performance. When overclocking a new chip it is possible to take advantage of this margin, but as the chip ages this can result in situations where a processor that has operated correctly at overclocked speeds for years spontaneously fails to operate at those same speeds. If the overclocker is not actively testing for system stability when these effects become significant, errors are likely to be blamed on sources other than the overclocking.
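The failure mode just described can be sketched as a toy aging model, in which a chip's maximum stable speed starts above its rating and erodes over time. All names and numbers below are hypothetical illustrations, not measured degradation rates.

```python
def max_stable_mhz(rated_mhz, fresh_margin_mhz, wear_mhz_per_year, age_years):
    """Toy aging model: a new chip runs fresh_margin_mhz above its
    rating, and wear effects (electromigration, NBTI, ...) erode that
    margin linearly over time. Illustrative numbers only."""
    return rated_mhz + fresh_margin_mhz - wear_mhz_per_year * age_years

def overclock_still_stable(oc_mhz, rated_mhz, fresh_margin_mhz,
                           wear_mhz_per_year, age_years):
    """True while the chosen overclock is still within the (shrinking)
    stable envelope at the given age."""
    return oc_mhz <= max_stable_mhz(rated_mhz, fresh_margin_mhz,
                                    wear_mhz_per_year, age_years)
```

Under these made-up parameters, a 2000 MHz part with 300 MHz of fresh margin and 50 MHz/year of wear runs a 2250 MHz overclock fine when new but falls out of its stable envelope after two years, exactly the "worked for years, then failed" pattern described above.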

Stability and functional correctness

As an overclocked component operates outside the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. An unstable overclocked system, while it may work fast, can be frustrating to use. Another risk is silent data corruption: errors that initially go undetected. Overclockers generally claim that testing can ensure an overclocked system is stable and functioning correctly, but although software tools are available for testing hardware stability, it is generally impossible for anyone (even the processor manufacturer) to thoroughly test the functionality of a processor. A particular "stress test" verifies only the functionality of the specific instruction sequences it uses, in combination with the specific data, and may not detect faults in other operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected. Achieving good fault coverage requires immense engineering effort, and despite all the resources manufacturers dedicate to validation, mistakes can still be made. To further complicate matters, in process technologies such as silicon on insulator, devices display hysteresis: a circuit's performance is affected by past events, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked speeds in one situation but not in another, even at the same voltage and temperature. Often, an overclocked system that passes stress tests still experiences instability in other programs.[7]
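The "correct result, incorrect flags" failure can be made concrete with a toy software model of an 8-bit adder. Everything here is a deliberately simplified illustration, not real hardware behaviour: a faulty unit returns the right sum but a stuck carry flag, so a test that checks only results passes while a test that also checks flags catches the fault.

```python
def alu_add_8bit(a, b):
    """Reference 8-bit add: returns (result, carry flag)."""
    total = a + b
    return total & 0xFF, total > 0xFF

def faulty_alu_add_8bit(a, b):
    """Toy faulty unit: the sum is correct but the carry flag is stuck
    low, modelling the 'correct result, incorrect flags' case above."""
    total = a + b
    return total & 0xFF, False  # bug: carry is never set

def stress_results_only(add, cases):
    """A stress test that checks only the numeric result."""
    return all(add(a, b)[0] == alu_add_8bit(a, b)[0] for a, b in cases)

def stress_with_flags(add, cases):
    """A stress test that also checks the carry flag."""
    return all(add(a, b) == alu_add_8bit(a, b) for a, b in cases)
```

Running both tests on the faulty unit with cases like (200, 100), which should set the carry, shows the point: the results-only test reports "stable" while the flag-checking test detects the fault. This is fault coverage in miniature.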

In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically-intensive application for testing video cards, or a processor-intensive application for testing processors). Popular stress tests include Prime95, Super PI, SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days.
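Structurally, such long-duration torture testing amounts to the loop below. This is a generic sketch, not the implementation of any of the tools named above: `workload` stands in for the stress kernel, `check` for its result verification, and the duration is a parameter (hours or days in practice).

```python
import time

def stress_loop(workload, check, duration_s):
    """Repeatedly run a workload for a fixed wall-clock duration and
    verify each result, in the spirit of hours-long torture tests.
    `workload` produces a result; `check` validates it. Returns the
    number of iterations completed and the number of errors detected."""
    end = time.monotonic() + duration_s
    iterations = errors = 0
    while time.monotonic() < end:
        result = workload()
        iterations += 1
        if not check(result):
            errors += 1
    return iterations, errors
```

The longer the loop runs, the more input combinations and thermal states it exercises, which is why a zero-error count after days of testing earns the label "stable" (without, as the previous section notes, actually proving full correctness).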

Manufacturer and vendor overclocking

Commercial system builders and component resellers sometimes overclock components to sell items at higher profit margins. The retailer makes more money by buying lower-rated components, overclocking them, and selling them at prices appropriate to a non-overclocked system at the new speed. In some cases an overclocked component is functionally identical to a non-overclocked one of the new speed; however, if an overclocked system is marketed as a non-overclocked one (it is generally assumed that, unless a system is specifically marked as overclocked, it is not), the practice is considered fraudulent.

Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, an attractive option for enthusiasts seeking improved performance without sacrificing common warranty protections. Such factory-overclocked products typically command a modest price premium over reference-clocked components, but the performance increase and cost savings can sometimes outweigh the price of similar, albeit higher-performing, offerings from the next product tier.

Manufacturers would naturally prefer that enthusiasts pay extra for profitable high-end products, and they are also concerned that less reliable components and shortened product life spans could harm their brand image. Such concerns are widely thought to motivate overclocking-prevention mechanisms such as CPU locking. These measures are sometimes marketed as a consumer-protection benefit, which typically draws a negative reception from overclocking enthusiasts.