Computer Graphics
\end{bmatrix}
</math>
===Transformation matrix===
<math>
L = T * R * S
</math>
Depending on implementation, it may be more memory-efficient or compute-efficient to represent affine transformations as their own class rather than 4x4 matrices. For example, a rotation can be represented with 3 floats in angle-axis or 4 floats in quaternion coordinates rather than a 3x3 rotation matrix.
For example, see
* [https://eigen.tuxfamily.org/dox/classEigen_1_1Transform.html Eigen::Transform]
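As a sketch of the composition <math>L = T * R * S</math> using plain 4x4 row-major matrices (no particular library assumed; all function names and values here are illustrative):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def apply(m, v):
    """Apply a 4x4 matrix to a homogeneous point (x, y, z, 1)."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Scale first, then rotate, then translate: L = T * R * S.
L = mat_mul(translation(1, 2, 3),
            mat_mul(rotation_z(math.pi / 2), scale(2, 2, 2)))
# (1,0,0) -> scaled to (2,0,0) -> rotated to (0,2,0) -> translated to (1,4,3)
p = apply(L, [1, 0, 0, 1])
```

Note that the rightmost matrix (the scale) is applied to the point first, which is why L = T * R * S scales, then rotates, then translates.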
===Barycentric Coordinates===
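As a sketch of the standard signed-area formula (function names illustrative): the barycentric coordinates <math>(u, v, w)</math> of a 2D point <math>p</math> with respect to triangle <math>(a, b, c)</math> satisfy <math>p = u a + v b + w c</math> with <math>u + v + w = 1</math>.

```python
def cross2(o, p, q):
    """Twice the signed area of triangle (o, p, q)."""
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def barycentric(p, a, b, c):
    area = cross2(a, b, c)
    u = cross2(p, b, c) / area  # weight of vertex a
    v = cross2(p, c, a) / area  # weight of vertex b
    w = cross2(p, a, b) / area  # weight of vertex c
    return u, v, w

# The centroid of a triangle has all three weights equal to 1/3:
u, v, w = barycentric((1/3, 1/3), (0, 0), (1, 0), (0, 1))
```

A point is inside the triangle exactly when all three weights are non-negative, which is why rasterizers use this test for triangle coverage and for interpolating vertex attributes.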
==MVP Matrices==
To convert from model coordinates <math>v</math> to screen coordinates <math>w</math>, multiply by the MVP matrices: <math>w=P*V*M*v</math>
* The model matrix <math>M</math> applies the transform of your object. This includes the position and rotation. <math>M*v</math> is in world coordinates.
* The view matrix <math>V</math> applies the inverse transform of your camera. <math>V*M*v</math> is in camera or view coordinates (i.e. coordinates relative to the camera).
* The projection matrix <math>P</math> applies the projection of your camera, typically an orthographic or a perspective camera. The perspective camera shrinks objects in the distance.
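The chain above can be sketched with plain 4x4 row-major matrices. Here <math>P</math> is an orthographic projection for simplicity, and the camera placement and bounds are illustrative:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def orthographic(r, t, n, f):
    """OpenGL-style ortho: x in [-r,r], y in [-t,t], z in [-f,-n] -> [-1,1]^3."""
    return [[1 / r, 0, 0, 0],
            [0, 1 / t, 0, 0],
            [0, 0, -2 / (f - n), -(f + n) / (f - n)],
            [0, 0, 0, 1]]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

M = translation(0, 1, 0)      # model: the object sits at y = 1
V = translation(0, 0, -5)     # view: inverse of a camera placed at z = +5
P = orthographic(2, 2, 1, 9)
MVP = mat_mul(P, mat_mul(V, M))
w = apply(MVP, [0, 0, 0, 1])  # the object's origin in clip coordinates
```

The object's origin lands at view-space <math>(0, 1, -5)</math>, which the orthographic matrix maps to <math>(0, 0.5, 0)</math> in NDC.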
[https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix]
https://www.songho.ca/opengl/gl_projectionmatrix.html
The projection matrix applies a perspective projection based on the field of view of the camera. This is done by dividing the x,y view coordinates by the z-coordinate so that farther objects appear closer to the center. Note that the output is typically in normalized device coordinates (NDC) <math>[-1, 1]\times[-1, 1]</math> rather than image coordinates <math>\{0, \dots, W-1\} \times \{0, \dots, H-1\}</math>. Additionally, in NDC, the y-coordinate typically points upwards, unlike image coordinates.
The z-coordinate output by the projection matrix is a remapped version of the z-depth, i.e. the depth along the camera's forward axis. In OpenGL, this maps z=-f to 1 and z=-n to -1, where -z is forward.
Notes: In computer vision, this is analogous to the calibration matrix <math>K</math>.
It contains the intrinsic parameters of your pinhole camera such as field of view and focal length. The focal length determines the field of view, i.e. the zoom, of your output.
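As a sketch, an OpenGL-style perspective matrix (parameters illustrative) exhibits the z-remapping described above: after the perspective divide, z=-n lands at NDC z=-1 and z=-f at NDC z=+1.

```python
import math

def perspective(fov_y, aspect, n, f):
    """OpenGL-style perspective projection; -z is the camera's forward axis."""
    t = 1.0 / math.tan(fov_y / 2)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, (n + f) / (n - f), 2 * n * f / (n - f)],
            [0, 0, -1, 0]]

def project(P, v):
    """Apply P to (x, y, z, 1) and do the perspective divide by w."""
    h = v + [1]
    clip = [sum(P[i][k] * h[k] for k in range(4)) for i in range(4)]
    return [c / clip[3] for c in clip[:3]]  # normalized device coordinates

P = perspective(math.radians(60), 1.0, 1.0, 100.0)
near = project(P, [0, 0, -1])    # z = -n maps to NDC z = -1
far = project(P, [0, 0, -100])   # z = -f maps to NDC z = +1
```

Because the divide is by <math>-z</math>, the NDC depth is nonlinear: most of the <math>[-1, 1]</math> range is spent near the near plane, which is why depth-buffer precision degrades in the distance.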
===Inverting the projection===
{{main | Wikipedia: Lambertian reflectance}}
This is a way to model diffuse (matte) materials.
<math>I_D = (\mathbf{L} \cdot \mathbf{N}) * C * I_{L}</math>
* <math>\mathbf{N}</math> is the normal vector.
* <math>\mathbf{L}</math> is the vector to the light.
* <math>C</math> is the color.
* <math>I_{L}</math> is the intensity of light.
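A minimal sketch of the formula above, clamping the dot product to zero so back-facing light contributes nothing (all values illustrative):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(N, L, C, I_L):
    """I_D = max(N . L, 0) * C * I_L with unit-length N and L."""
    k = max(dot(normalize(N), normalize(L)), 0.0)
    return [k * c * I_L for c in C]

# A light 45 degrees above a surface facing +z scales the color by cos(45°):
color = lambert([0, 0, 1], [0, 1, 1], [1.0, 0.5, 0.0], 1.0)
```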
===Phong reflection model===
Here, the image is a linear combination of ambient, diffuse, and specular colors.
If <math>\mathbf{N}</math> is the normal vector, <math>\mathbf{V}</math> is a vector from the vertex to the viewer, <math>\mathbf{L}</math> a vector from the vertex to the light, and <math>\mathbf{R}</math> the reflection of <math>\mathbf{L}</math> about <math>\mathbf{N}</math> (i.e. <math>\mathbf{L}</math> mirrored across the normal) then
* Ambient is a constant color for every pixel.
* The diffuse coefficient is <math>\mathbf{N} \cdot \mathbf{L}</math>.
* The specular coefficient is <math>(\mathbf{R} \cdot \mathbf{V})^n</math> where <math>n</math> is the ''shininess''.
The final color is <math>k_{ambient} * ambientColor + k_{diffuse} * (\mathbf{N} \cdot \mathbf{L}) * diffuseColor + k_{specular} * (\mathbf{R} \cdot \mathbf{V})^n * specularColor</math>.
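The model above can be sketched as follows. Here <math>\mathbf{L}</math> points from the surface toward the light, <math>\mathbf{R} = 2(\mathbf{N} \cdot \mathbf{L})\mathbf{N} - \mathbf{L}</math>, and all coefficients and colors are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def phong(N, L, V, ambient, diffuse, specular, ka, kd, ks, shininess):
    N, L, V = normalize(N), normalize(L), normalize(V)
    R = [2 * dot(N, L) * n - l for n, l in zip(N, L)]  # L reflected about N
    d = max(dot(N, L), 0.0)                 # diffuse coefficient
    s = max(dot(R, V), 0.0) ** shininess    # specular coefficient
    return [ka * a + kd * d * df + ks * s * sp
            for a, df, sp in zip(ambient, diffuse, specular)]

# Light and viewer both head-on: every term contributes fully.
color = phong([0, 0, 1], [0, 0, 1], [0, 0, 1],
              [0.1, 0.1, 0.1], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0],
              ka=1.0, kd=1.0, ks=1.0, shininess=10)
```

Note the dot products are clamped to zero, so surfaces facing away from the light receive only the ambient term.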
;Notes
===Physically Based===
See [https://static1.squarespace.com/static/58586fa5ebbd1a60e7d76d3e/t/593a3afa46c3c4a376d779f6/1496988449807/s2012_pbs_disney_brdf_notes_v2.pdf pbs disney brdf notes] and the [http://www.pbr-book.org/ pbr-book]
In frameworks and libraries, these are often referred to as ''standard materials'', or in Blender, the ''Principled BSDF''.
==Blending and Pixel Formats==
===Pixel Formats===
===Blending===
To output transparent images, i.e. images with alpha, you'll generally want to blend using [[Premultiplied Alpha]]. Rendering in premultiplied alpha prevents your RGB color values from getting mixed with the background color of empty pixels.
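A sketch of the ''over'' compositing operator with premultiplied colors (names illustrative): because the RGB channels are already multiplied by alpha, source and destination combine with a single blend factor.

```python
def over_premultiplied(src, dst):
    """Composite src over dst; both are premultiplied (r, g, b, a) tuples."""
    a = src[3]
    return tuple(s + (1 - a) * d for s, d in zip(src, dst))

# A 50%-opaque red over a fully transparent pixel: the result keeps its
# red hue instead of being darkened by the (black) empty background.
out = over_premultiplied((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 0.0))
```

With straight (non-premultiplied) alpha, the same composite would have to special-case zero-alpha destinations to avoid pulling in their meaningless RGB values.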
===Rendering===
For rasterization, the render loop typically consists of:
# Render the shadow map.
# Render all opaque objects front-to-back.
## Opaque objects write to the depth buffer.
# Render all transparent objects back-to-front.
## Transparent objects do not write to the depth buffer.
Rendering opaque objects front-to-back minimizes overdraw, where a pixel gets drawn to multiple times in a single frame.
Rendering transparent objects back-to-front is needed for proper blending of transparent materials.
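The ordering above can be sketched as a sort step before issuing draw calls (the object data here is illustrative; "depth" is distance from the camera):

```python
# (name, depth_from_camera, is_opaque)
objects = [("wall", 8.0, True), ("glass", 5.0, False),
           ("crate", 2.0, True), ("smoke", 9.0, False)]

# Opaque: front-to-back, so the depth buffer rejects hidden pixels early.
opaque = sorted((o for o in objects if o[2]), key=lambda o: o[1])

# Transparent: back-to-front, so each blend composites over what's behind it.
transparent = sorted((o for o in objects if not o[2]), key=lambda o: -o[1])

draw_order = opaque + transparent
```

Sorting per-object by depth is an approximation: intersecting or mutually overlapping transparent objects can still blend incorrectly, which is what order-independent transparency techniques address.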
==Anti-aliasing==
For high-quality anti-aliasing, you'll generally want to use multisample anti-aliasing (MSAA).
This causes the GPU to render the depth buffer at a higher resolution to determine subsample coverage, i.e. the contribution of your fragment shader's color to the final image.
See https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing for more details.
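The idea can be sketched as follows: coverage is tested at several subsample positions while the fragment shader's color is computed only once per pixel, then weighted by the covered fraction (sample positions and edge below are illustrative):

```python
# Four subsample positions inside a unit pixel (a common MSAA 4x layout idea).
SUBSAMPLES = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def msaa_pixel(covers, shaded, background):
    """covers(x, y) -> bool; blend shaded vs background by subsample coverage."""
    hits = sum(covers(x, y) for x, y in SUBSAMPLES)
    t = hits / len(SUBSAMPLES)
    return tuple(t * s + (1 - t) * b for s, b in zip(shaded, background))

# A triangle edge at x = 0.5 covers the left half of the pixel, so a red
# fragment over a black background resolves to half-intensity red.
color = msaa_pixel(lambda x, y: x < 0.5, (1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

This is the key difference from supersampling (SSAA), which would run the fragment shader once per subsample instead of once per pixel.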
==More Terms==