Software Rasterizer Pipeline
This project was incredibly interesting for many reasons. First of all, it went beyond what libraries like DirectX or OpenGL do, stripping away all the modern-day optimization and helper machinery to get to the roots of 3D graphics. This in turn helped me gain a deeper understanding of how the core of those libraries works.
This image shows all the basic steps required in a graphics pipeline for 3D rendering. It is these steps, most of which are handled behind the scenes by either the graphics card or OpenGL/DirectX, that I aimed to replicate. With this, all the optimization done behind the scenes by GPUs and OpenGL is gone, and the entirety of the graphics pipeline is now in my hands.
This project started simply enough, with a few basic but important facts:
- Almost all screens used today use tiny points to represent images. Pixels.
- A pixel has an RGB color.
- Anything drawn on the screen has to be drawn one point at a time.
Since this project has the goal of truly understanding the graphics pipeline in 3D games, a compromise of sorts is needed. I'll be using a framework with OpenGL to handle the window, and a text editor where I can input data to the rasterizer. However, everything else will be handled using only the CPU and two simple functions that call OpenGL.
Straightforward enough: these functions manually set the current drawing colour and draw a single point on the screen. This is the lowest level of modern graphics possible (albeit highly inefficient, since it cuts away years of optimization by both OpenGL and GPUs).
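For reference, here's roughly what those two calls look like, a minimal sketch assuming legacy immediate-mode OpenGL with an orthographic projection mapped to the window's pixels (the wrapper names are my own):

#include <GL/gl.h>  // legacy immediate-mode OpenGL

// Set the colour used for every point drawn after this call.
void SetColor(float r, float g, float b)
{
    glColor3f(r, g, b);
}

// Draw a single pixel at screen coordinates (x, y).
void DrawPoint(int x, int y)
{
    glBegin(GL_POINTS);
    glVertex2i(x, y);
    glEnd();
}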
The simple framework translates text commands into the 3D geometry and rasterization parameters. This means that every single piece of the pipeline needs a command, with parameters, that has to be translated. Again, highly inefficient, but simple enough for this project. This is an example of the text in the framework.
fillmode point // Tells the rasterizer whether to render the vertices in point, line (wireframe) or fill mode.
drawbegin triangle // Tells the rasterizer how to consume the vertices: 1->point, 2->line, 3->triangle, 4->quad.
color 1 0 0 // Sets the color of the vertex to the RGB values entered.
vertex2 10 10 // Creates a 2D vertex with screen-equivalent coordinates.
drawend // Wraps up all the geometry entered and renders it according to the parameters stated.
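To give an idea of the translation step, here is a minimal sketch of how one of those command lines might be dispatched. The handler names (SetCurrentColor, AddVertex, Rasterize) are hypothetical stand-ins, not the framework's real API:

#include <sstream>
#include <string>

// Hypothetical rasterizer entry points (illustrative only).
void SetCurrentColor(float r, float g, float b);
void AddVertex(int x, int y);
void Rasterize();

// Read one command per line and forward its parameters.
void ProcessLine(const std::string& line)
{
    std::istringstream in(line);
    std::string cmd;
    in >> cmd;

    if (cmd == "color") {
        float r, g, b;
        in >> r >> g >> b;
        SetCurrentColor(r, g, b);   // stored until the next vertex
    } else if (cmd == "vertex2") {
        int x, y;
        in >> x >> y;
        AddVertex(x, y);            // buffered until drawend
    } else if (cmd == "drawend") {
        Rasterize();                // consume the buffered vertices
    }
    // ... fillmode, drawbegin, etc. are handled the same way
}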
Since this is a rather dull and relatively uninteresting part of the project, I won't go into further detail about it. Suffice it to say that this is the interface to the rasterizer, and that every single part of it, from geometry to parameters, will have to go through it.
Another integral part of graphics is math. Graphics and math go hand in hand, so the creation of a small but useful math library is necessary. It will grow as the project does, but for the time being it only needs the 2D basics: a vector and a 3x3 matrix for 2D transformations. (I won't go deep into the math, since I'll be writing a post about my game engine library in the future.)
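As a rough idea of the starting point, here's a minimal sketch of those two types; the actual library has more operations than shown:

#include <cmath>

// Minimal 2D vector.
struct Vec2 {
    float x = 0.0f, y = 0.0f;

    Vec2 operator+(const Vec2& v) const { return { x + v.x, y + v.y }; }
    Vec2 operator-(const Vec2& v) const { return { x - v.x, y - v.y }; }
    Vec2 operator*(float s)       const { return { x * s, y * s }; }
};

// Row-major 3x3 matrix; 2D points are treated as (x, y, 1) so that
// translation can also be expressed as a matrix multiply.
struct Mat3 {
    float m[3][3] = { {1,0,0}, {0,1,0}, {0,0,1} };  // identity

    Vec2 Transform(const Vec2& p) const {
        return {
            m[0][0] * p.x + m[0][1] * p.y + m[0][2],
            m[1][0] * p.x + m[1][1] * p.y + m[1][2],
        };
    }

    static Mat3 Translation(float tx, float ty) {
        Mat3 t;
        t.m[0][2] = tx;
        t.m[1][2] = ty;
        return t;
    }

    static Mat3 Rotation(float radians) {
        Mat3 r;
        const float c = std::cos(radians), s = std::sin(radians);
        r.m[0][0] = c; r.m[0][1] = -s;
        r.m[1][0] = s; r.m[1][1] = c;
        return r;
    }
};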
The first step is the most basic: rendering a single point.
The next step is rendering lines. This posed a couple of interesting problems I had never truly thought about before. First and foremost is the fact that in grid-based (pixel) graphics there is no such thing as a true straight line that isn't vertical, horizontal or at 45°. To represent any other line, it has to be approximated to fit the grid.
The second problem comes from the fact that only two vertices are created by the user: only two sets of positions and colors to create the whole line. The solution to both? Linear interpolation. (GPUs are particularly good at this.)
Linear interpolation (LERP) is perhaps second only to matrix multiplication among GPU operations. This is because every single line in the geometry must be created by interpolating between two points. (The only other options are inefficient, like having an actual vertex for each point in the geometry.) The number of LERP operations only increases when triangles are filled (more on that later). LERP is important because it allows us not only to dynamically create all the points in a line between two points, but also to interpolate all of the vertex's data, like colour, normal and texture coordinates.
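Here's a minimal sketch of that idea, reusing the Vec2 from the math sketch above: a whole vertex is interpolated by applying a + t * (b - a) to each of its attributes.

// A vertex, for now, is a position plus a colour.
struct Vertex {
    Vec2  pos;
    float r, g, b;
};

// The basic LERP: blend from a to b by t in [0, 1].
float Lerp(float a, float b, float t) { return a + t * (b - a); }

// Interpolate every attribute of the vertex the same way.
Vertex LerpVertex(const Vertex& a, const Vertex& b, float t)
{
    return {
        { Lerp(a.pos.x, b.pos.x, t), Lerp(a.pos.y, b.pos.y, t) },
        Lerp(a.r, b.r, t),
        Lerp(a.g, b.g, t),
        Lerp(a.b, b.b, t),
    };
}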
The way it works is by taking a value t between 0 and 1 which represents the percentage of the way along the line at which point C sits, giving C = A + t(B - A). Now, since we're working in screen coordinates for position, we must round the interpolated values to the nearest pixel.
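Putting the pieces together, a line can then be drawn by stepping t across enough increments to cover every pixel between the endpoints; here is a sketch using the helpers from the previous snippets:

#include <algorithm>
#include <cmath>

// Step t from 0 to 1, interpolate the vertex, and snap the
// interpolated position onto the pixel grid.
void DrawLine(const Vertex& a, const Vertex& b)
{
    const float dx = b.pos.x - a.pos.x;
    const float dy = b.pos.y - a.pos.y;

    // Enough steps that neighbouring points land on adjacent pixels.
    const int steps =
        std::max(1, (int)std::max(std::fabs(dx), std::fabs(dy)));

    for (int i = 0; i <= steps; ++i) {
        const float t = (float)i / (float)steps;
        const Vertex v = LerpVertex(a, b, t);
        SetColor(v.r, v.g, v.b);
        DrawPoint((int)std::lround(v.pos.x), (int)std::lround(v.pos.y));
    }
}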