FPGA 3D Graphics Engine
The goal of this project was to have a Linux userspace application issue commands to a GPU implemented on an FPGA to render and manipulate 3D objects. The GPU shares a common bus with the other peripherals on the FPGA.
A software rasterization program parses 3D objects specified in the Wavefront .obj file format and performs a series of transformations and tests to produce pixel data. This data is then sent to a rendering backend (either OpenGL as a reference, or the VGA Subsystem of Intel's DE1-SoC Computer System).
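A minimal sketch of the kind of .obj parsing the software rasterizer performs (the structures and function names here are illustrative, not the project's actual code): only `v` vertex records and triangular `f` face records are handled, which is enough to feed a basic pipeline.

```cpp
#include <array>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int v[3]; };  // 0-based vertex indices

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<Face> faces;
};

// Parse the two record types a basic renderer needs: "v x y z" vertex
// positions and "f a b c" triangular faces. Texture/normal suffixes
// ("f 1/2/3 ...") and other record types are ignored in this sketch.
Mesh parse_obj(std::istream& in) {
    Mesh mesh;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v{};
            ls >> v.x >> v.y >> v.z;
            mesh.vertices.push_back(v);
        } else if (tag == "f") {
            Face f{};
            for (int i = 0; i < 3; ++i) {
                std::string tok;
                ls >> tok;                    // e.g. "4" or "4/1/2"
                f.v[i] = std::stoi(tok) - 1;  // .obj indices are 1-based
            }
            mesh.faces.push_back(f);
        }
        // "vn", "vt", comments, etc. fall through and are skipped
    }
    return mesh;
}
```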
When completed, the GPU will perform the vertex transformations and tests and communicate with the VGA Subsystem on the CPU's behalf; the application software will only have to parse the .obj file.
Results
The reference OpenGL implementation achieves an average frame-time of 10 ms when rendering the Utah Teapot on an Intel i7-6700HQ. The ARM Cortex-A9 on the DE1-SoC achieves an average frame-time of 280 ms (running under Linux).
NOTE: The GPU hardware is currently a work in progress. Its results will be posted above when completed.
The two rendering backends produce very similar visual results. The difference in background colour exists because there is no easy way to set a background colour on the VGA Subsystem: it would require a traversal of the entire framebuffer.
The VGA Subsystem also uses a more limited 16-bit colour format than OpenGL.
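Assuming the 16-bit format is RGB565 (an assumption about the exact bit layout, not confirmed by the text above), quantizing a 24-bit colour down to it, and the per-pixel cost of setting a background colour, can be sketched as:

```cpp
#include <cstdint>
#include <vector>

// Pack an 8-bit-per-channel colour into 16 bits (assumed RGB565 layout:
// 5 bits red, 6 bits green, 5 bits blue). The dropped low-order bits are
// what makes the 16-bit output visibly coarser than OpenGL's.
uint16_t to_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Setting a background colour means touching every pixel, which is why it
// is costly on the VGA framebuffer: width * height writes over the bus.
void clear_framebuffer(std::vector<uint16_t>& fb, uint16_t colour) {
    for (auto& px : fb) px = colour;
}
```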
For guidance on understanding the 3D Graphics pipeline, I used the following sources:
Additional Rendered Objects
Stanford Bunny
Stanford Dragon
GPU RTL Progress
- Avalon Interfaces
- Command Stream Avalon MM Slave
- Vertex Fetch Avalon Master
- Raster Backend Avalon Master
- Vertex Processor
- Vertex Transform
- Vertex Shader
- Rasterization Engine
- Triangle Setup
- Bounding Box Function
- Edge Function & Half Plane Test
- Depth Test & Z Buffer
- Linux Kernel Module
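The rasterization-engine stages listed above (triangle setup with a bounding box, the edge function and half-plane test, and the depth test against a Z buffer) can be modelled in software as follows. This is an illustrative sketch of the algorithm, not the RTL itself; names and conventions (counter-clockwise winding, smaller-z-is-closer depth) are assumptions.

```cpp
#include <algorithm>
#include <vector>

struct V2 { float x, y, z; };  // screen-space position plus depth

// Edge function: twice the signed area of triangle (a, b, p). Its sign
// tells which side of edge a->b the point p lies on (the half-plane test).
float edge(const V2& a, const V2& b, const V2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Rasterize one triangle into a depth buffer, returning the number of
// pixels that passed both the coverage and depth tests. A real pipeline
// would also interpolate attributes and write colour to the framebuffer.
int raster_triangle(const V2& v0, const V2& v1, const V2& v2,
                    std::vector<float>& zbuf, int width, int height) {
    // Triangle setup: screen-clamped bounding box of the three vertices.
    int x0 = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int y0 = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int x1 = std::min(width - 1,  (int)std::max({v0.x, v1.x, v2.x}));
    int y1 = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    float area = edge(v0, v1, v2);
    if (area <= 0) return 0;  // back-facing or degenerate triangle

    int covered = 0;
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            V2 p{x + 0.5f, y + 0.5f, 0};  // sample at the pixel centre
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // Half-plane test: inside iff on the same side of all edges.
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;
            // Barycentric depth, then the depth test against the Z buffer.
            float z = (w0 * v0.z + w1 * v1.z + w2 * v2.z) / area;
            float& zref = zbuf[y * width + x];
            if (z < zref) { zref = z; ++covered; }
        }
    }
    return covered;
}
```

In hardware, the per-pixel edge evaluations are well suited to incremental updates (each step in x or y adds a constant to each edge value), which is a common simplification when moving this loop into RTL.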