These innovations whet our collective appetite for more interactive and compelling 3D experiences. Satisfying this demand is what motivated the development of the Cg language.
In the mid-1990s, the world's fastest graphics hardware consisted of multiple chips that worked together to render images and display them to a screen. The most complex computer graphics systems consisted of dozens of chips spread over several boards. As time progressed and semiconductor technology improved, hardware engineers incorporated the functionality of complicated multichip designs into a single graphics chip.
This development resulted in tremendous economies of integration and scale. You may be surprised to learn that the GPU now exceeds the CPU in the number of transistors present in each microchip. (Transistor count is a rough measure of how much hardware logic is devoted to a microchip.) In the early days of the PC, the VGA controller was what we now call a "dumb" frame buffer: the CPU was responsible for updating all the pixels. Today the CPU rarely manipulates pixels directly.
Instead, graphics hardware designers build the "smarts" of pixel updates into the GPU. Industry observers have identified four generations of GPU evolution so far. Each generation delivers better performance and evolving programmability of the GPU feature set. Each generation also influences and incorporates the functionality of the two major 3D programming interfaces, OpenGL and DirectX. DirectX is an evolving set of Microsoft multimedia programming interfaces, including Direct3D for 3D programming.
The graphics systems developed by these companies introduced many of the concepts, such as vertex transformation and texture mapping, that we take for granted today. These systems were very important to the historical development of computer graphics, but because they were so expensive, they did not achieve the mass-market success of single-chip GPUs designed for PCs and video game consoles. Today, GPUs are far more powerful and much cheaper than any prior systems.
First-generation GPUs are capable of rasterizing pre-transformed triangles and applying one or two textures. They also implement the DirectX 6 feature set. However, GPUs in this generation suffer from two clear limitations.
First, they lack the ability to transform vertices of 3D objects; instead, vertex transformations occur in the CPU. Second, they have a quite limited set of math operations for combining textures to compute the color of rasterized pixels. Fast vertex transformation was one of the key capabilities that differentiated high-end workstations from PCs prior to this generation.
Although the set of math operations for combining textures and coloring pixels expanded in this second generation to include cube map textures and signed math operations, the possibilities are still limited. Put another way, this generation is more configurable, but still not truly programmable.
The third generation of GPUs provides vertex programmability rather than merely offering more configurability. Instead of supporting the conventional transformation and lighting modes specified by OpenGL and DirectX 7, these GPUs let the application specify a sequence of instructions for processing vertices. Considerably more pixel-level configurability is available, but these modes are not powerful enough to be considered truly programmable.
Because these GPUs support vertex programmability but lack true pixel programmability, this generation is transitional. DirectX 8 pixel shaders and various vendor-specific OpenGL extensions expose this generation's fragment-level configurability.
The fourth generation of GPUs provides both vertex-level and pixel-level programmability. This level of programmability opens up the possibility of offloading complex vertex transformation and pixel-shading operations from the CPU to the GPU. This is the generation of GPUs where Cg gets really interesting. The notes highlight the most significant improvements in each design. Performance rates may not be comparable with designs from other hardware vendors. Future GPUs will further generalize the programmable aspects of current GPUs, and Cg will make this additional programmability easy to use.
A pipeline is a sequence of stages operating in parallel and in a fixed order. Each stage receives its input from the prior stage and sends its output to the subsequent stage. Like an assembly line where dozens of automobiles are manufactured at the same time, with each automobile at a different stage of the line, a conventional graphics hardware pipeline processes a multitude of vertices, geometric primitives, and fragments in a pipelined fashion.
The figure shows the graphics hardware pipeline used by today's GPUs.
The 3D application sends the GPU a sequence of vertices batched into geometric primitives: typically polygons, lines, and points. As shown in the figure, there are many ways to specify geometric primitives. Every vertex has a position but also usually has several other attributes such as a color, a secondary or specular color, one or multiple texture coordinate sets, and a normal vector.
The normal vector indicates what direction the surface faces at the vertex, and is typically used in lighting calculations.
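The per-vertex attributes listed above can be pictured as a simple record. The following is a minimal sketch in Python (a hypothetical illustration, not the actual data layout used by any GPU or API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vertex:
    """A hypothetical vertex record holding the attributes described above."""
    position: List[float]  # (x, y, z) position; the only mandatory attribute
    color: List[float] = field(default_factory=lambda: [1.0, 1.0, 1.0])
    specular: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    texcoords: List[List[float]] = field(default_factory=list)  # zero or more (s, t) sets
    normal: List[float] = field(default_factory=lambda: [0.0, 0.0, 1.0])  # surface direction

v = Vertex(position=[0.0, 1.0, 0.0], texcoords=[[0.5, 1.0]])
```

Only the position is required; the other attributes take defaults when the application does not supply them, mirroring the "usually has several other attributes" wording above.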
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color. We will explain many of these tasks in subsequent chapters. The transformed vertices flow in sequence to the next stage, called primitive assembly and rasterization.
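The core of vertex transformation is multiplying each position by a 4x4 matrix and then dividing by the resulting w component. Here is a minimal sketch in Python (the helper names `mat_vec` and `transform_vertex` are inventions for illustration, not part of any graphics API):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_vertex(mvp, position):
    """Transform an object-space position by a combined model-view-projection
    matrix, then perform the perspective divide by w, as the vertex
    transformation stage does for every vertex."""
    x, y, z, w = mat_vec(mvp, [position[0], position[1], position[2], 1.0])
    return [x / w, y / w, z / w]

# The identity matrix leaves a position unchanged.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

# A matrix translating by (2, 0, 0) shifts only the x coordinate.
translate = [[1.0, 0.0, 0.0, 2.0],
             [0.0, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 0.0],
             [0.0, 0.0, 0.0, 1.0]]

print(transform_vertex(identity, [0.25, -0.5, 0.75]))   # [0.25, -0.5, 0.75]
print(transform_vertex(translate, [0.25, -0.5, 0.75]))  # [2.25, -0.5, 0.75]
```

A real GPU performs this same kind of matrix math in hardware, for many vertices in parallel; later chapters show how Cg expresses it.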
First, the primitive assembly step assembles vertices into geometric primitives based on the geometric primitive batching information that accompanies the sequence of vertices. This results in a sequence of triangles, lines, or points.
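The grouping rule depends on the batching mode. Two common modes can be sketched as follows (a simplified illustration in Python; real strip assembly also alternates winding order, which this sketch ignores):

```python
def assemble_triangles(vertices):
    """Independent triangles: every three consecutive vertices form one primitive."""
    return [tuple(vertices[i:i + 3]) for i in range(0, len(vertices) - 2, 3)]

def assemble_strip(vertices):
    """Triangle strip: each new vertex forms a triangle with the previous two."""
    return [tuple(vertices[i:i + 3]) for i in range(len(vertices) - 2)]

print(assemble_triangles([0, 1, 2, 3, 4, 5]))  # [(0, 1, 2), (3, 4, 5)]
print(assemble_strip([0, 1, 2, 3]))            # [(0, 1, 2), (1, 2, 3)]
```

Strips generate one triangle per additional vertex rather than one per three, which is why applications often prefer them for large meshes.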
These primitives may require clipping to the view frustum (the view's visible region of 3D space), as well as to any enabled application-specified clip planes. The rasterizer may also discard polygons based on whether they face forward or backward. This process is known as culling. Polygons that survive these clipping and culling steps must be rasterized.
Rasterization is the process of determining the set of pixels covered by a geometric primitive. Polygons, lines, and points are each rasterized according to the rules specified for each type of primitive.
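For triangles, coverage is commonly determined by testing each candidate pixel center against the triangle's three edges. The sketch below shows one simple (and deliberately unoptimized) way to do this in Python; the function names and the pixel-center sampling rule are illustrative assumptions, not the exact rules any particular GPU uses:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: non-negative when (px, py) is on the inner side
    of the directed edge a -> b for a counter-clockwise triangle."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(a, b, c):
    """Return the integer pixel locations whose centers lie inside the
    counter-clockwise triangle abc, scanning its bounding box."""
    xs, ys = [a[0], b[0], c[0]], [a[1], b[1], c[1]]
    covered = []
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            if (edge(b[0], b[1], c[0], c[1], px, py) >= 0 and
                edge(c[0], c[1], a[0], a[1], px, py) >= 0 and
                edge(a[0], a[1], b[0], b[1], px, py) >= 0):
                covered.append((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (4, 0), (0, 4))
print(len(pixels))  # 10 pixel centers fall inside this small triangle
```

Each covered location produced here corresponds to one fragment, which is exactly the vertex-count-versus-fragment-count disconnect discussed next.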
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments! Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location.
A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary specular color, and one or more texture coordinate sets.
These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel."
Once a primitive is rasterized into a collection of zero or more fragments, the interpolation, texturing, and coloring stage interpolates the fragment parameters as necessary, performs a sequence of texturing and math operations, and determines a final color for each fragment.
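The interpolation step blends the values at the primitive's vertices according to where the fragment lies. For a triangle this is a weighted (barycentric) average, sketched below in Python; the `interpolate` helper is a hypothetical name used only for this illustration:

```python
def interpolate(w0, w1, w2, a, b, c):
    """Blend per-vertex values (colors, texture coordinates, ...) using
    normalized barycentric weights, component by component."""
    total = w0 + w1 + w2
    return [(w0 * av + w1 * bv + w2 * cv) / total
            for av, bv, cv in zip(a, b, c)]

red, green, blue = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# A fragment exactly at the first vertex receives that vertex's color.
print(interpolate(1, 0, 0, red, green, blue))  # [1.0, 0.0, 0.0]

# A fragment at the triangle's center receives an equal blend of all three.
print(interpolate(1, 1, 1, red, green, blue))
```

The same blending applies to texture coordinates, which is how a single texture image stretches smoothly across a triangle's interior.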
In addition to determining the fragment's final color, this stage may also determine a new depth or may even discard the fragment to avoid updating the frame buffer's corresponding pixel. Allowing for the possibility that the stage may discard a fragment, this stage emits one or zero colored fragments for every input fragment it receives.
- The Cg Tutorial - Chapter 1. Introduction;
The raster operations stage performs a final sequence of per-fragment operations immediately before updating the frame buffer. During this stage, hidden surfaces are eliminated through a process known as depth testing. Other effects, such as blending and stencil-based shadowing, also occur during this stage. The raster operations stage checks each fragment based on a number of tests, including the scissor, alpha, stencil, and depth tests.
These tests involve the fragment's final color or depth, the pixel location, and per-pixel values such as the depth value and stencil value of the pixel.
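Of these tests, depth testing is the one that eliminates hidden surfaces. Its essential logic can be sketched in a few lines of Python (a minimal illustration of a "less than" depth test over a toy frame buffer, not how real hardware is organized):

```python
def depth_test(frame_color, frame_depth, fragment):
    """Apply a 'less than' depth test: a fragment nearer than the stored
    depth updates the pixel; otherwise it is discarded as hidden.
    `fragment` is an (x, y, depth, color) tuple."""
    x, y, depth, color = fragment
    if depth < frame_depth[y][x]:
        frame_depth[y][x] = depth
        frame_color[y][x] = color
        return True   # fragment survived and updated the pixel
    return False      # fragment discarded; pixel unchanged

color_buf = [[None]]   # a 1x1 frame buffer, for illustration
depth_buf = [[1.0]]    # depth cleared to the far plane

depth_test(color_buf, depth_buf, (0, 0, 0.5, "red"))   # passes: 0.5 < 1.0
depth_test(color_buf, depth_buf, (0, 0, 0.8, "blue"))  # fails: 0.8 is behind 0.5
print(color_buf[0][0])  # red
```

The blue fragment arrives later but lies farther away, so the red surface correctly remains visible regardless of drawing order; that order-independence is the point of depth testing.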