The "Phong Light" Summary
Preface
Lighting is hell. For me personally, it's a pain to write lighting shaders. I don't mean a simple ambient light everywhere. But whenever I tried to make something a little fucking bit more complex, like combining multiple lights with the object material, everything fucked up.
I knew what the main problem was, but I always deferred this shit because "it's not time to learn this complex stuff". Yes. The main issue was my weak understanding of the Phong lighting theory.
And after a hard week of investigating this shit, I can finally say that I understand it well enough.
But I'll definitely forget it, and that's the reason for this summary.
Let's talk about the data
Data is everything, and graphics programming obeys this law as well.
Lighting shaders can look really different depending on the vertex input data, so let's take a look at the most popular data layouts!
Vertex data
The vertex attributes are essential and commonly consist of the vertex position, the normal, and the texture coordinates:
layout(location = 0) in vec3 a_position;
layout(location = 1) in vec3 a_normal;
layout(location = 2) in vec2 a_uv;
Nothing really special. LearnOpenGL describes it perfectly. So, what could go wrong?
Model or Model View?
Once we're done with our attributes, we really need to provide the transform matrices:
- Model: usually the game object transformation;
- View: the camera transformation;
- Projection: describes the viewport transformation.
And we know why and how they're used:
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
void main() {
gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
}
But sometimes you'll meet another version of this code:
uniform mat4 u_model_view;
uniform mat4 u_projection;
void main() {
gl_Position = u_projection * u_model_view * vec4(a_position, 1.0);
}
And there are two reasons to pass the model and view matrices already multiplied.
First, performance. A model can have thousands of vertices, and the whole scene can have millions, so it's expensive to multiply the matrices inside the shader for every vertex. Using SIMD instructions (like the glm library does), you can do this multiplication once on the CPU before passing it to the shader.
Second, convenience: with everything in view space, the camera sits at the origin, which simplifies parts of the lighting math (we'll see this in the specular section).
These two versions also change how we calculate the lighting, as we'll see.
Don't forget about normal!
If you read LearnOpenGL really pedantically, my congratulations! I didn't get this part for 2 years.
But we cannot just multiply a_normal by u_model, because shear and non-uniform scale will affect the normals as well:
As you can see in the image above, that multiplication makes the normals useless, since they're no longer perpendicular to the face. To fix it, we need to build a transformation matrix that transforms the normals the right way.
The best visualization of this problem I've found is on webglfundamentals.org, and the best math explanation is on lighthouse3d.
You can check them out and figure out how it works, but I'll write some thoughts here for myself. You can skip them, as I'd been doing till this week...
Normal Matrix Calculation
- The normal is always perpendicular to the face, which means the angle between them is 90 degrees, or the cosine between their normalized vectors is 0;
- Knowing this shit, the dot product of the normal and any surface vector "a" lying on the face is 0 as well: dot(a_normal, a) == 0;
- So, let's say X is such a transform that the dot product between the transformed normal and the transformed surface vector "a" stays 0: dot(X * a_normal, u_model * a) == 0;
- We can replace the dot product with a plain matrix product by transposing the left part: transpose(X * a_normal) * u_model * a == 0, which expands to transpose(a_normal) * transpose(X) * u_model * a == 0;
- Since transpose(a_normal) * a is already 0, there's a simple way to get 0 here: the product of transpose(X) and u_model should give us the identity matrix: transpose(X) * u_model == I;
- So, the last transformation is getting X from the line above: X = transpose(inverse(u_model))
Now we can happily calculate this shit on the CPU, since it's too expensive to compute per vertex, and pass it to the shader through the u_normal_matrix uniform:
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
uniform mat3 u_normal_matrix;
Notice that u_normal_matrix is a 3x3 matrix. We don't need the 4th dimension, since the normal is a direction and shouldn't be translated. So you can make your program faster by converting the model matrix to 3x3 first: inverting and transposing a 3x3 matrix is cheaper!
const mat3x3 normal_matrix{
    transpose(inverse(mat3x3{ object.transform }))
};

pipeline.set_uniform("u_model", object.transform);
pipeline.set_uniform("u_normal_matrix", normal_matrix);
Normal matrix with ModelView transform
In the case of ModelView, you build the normal matrix from the combined transform instead:
const mat4x4 model_view{ camera.make_view() * object.transform };
const mat3x3 normal_matrix{
    transpose(inverse(mat3x3{ model_view }))
};

pipeline.set_uniform("u_model_view", model_view);
pipeline.set_uniform("u_normal_matrix", normal_matrix);
And it's useful to know that you don't even need to transpose it yourself, since OpenGL has the glUniformMatrix* functions with a transpose option:
void glUniformMatrix3fv(
    GLint location,
    GLsizei count,
    GLboolean transpose, // You could use it!
    const GLfloat *value
);
const mat4x4 model_view{ camera.make_view() * object.transform };
const mat3x3 normal_matrix{
    inverse(mat3x3{ model_view }) // no transpose here: OpenGL will do it
};

pipeline.set_uniform("u_model_view", model_view);
pipeline.set_uniform("u_normal_matrix", normal_matrix,
    { .transpose = true }
);
Final possible vertex shaders
- If you just need the code, here it is
- Or, if you're a super optimizer, you could take the model-view version
Base model and its modifications
Before we start cutting the Phong-based lighting into its components, there's an important idea we need to highlight:
The base model can be modified to get different light sources
That means the description below explains the base model, which doesn't give us realistic light. But by modifying this base model — adding attenuation by distance or changing the material and light properties — we can reach much better lighting.
The Light Components
Phong lighting is a simulation. You've heard a thousand times that it's impossible to compute real lighting behavior with computers. So we just lie to each other and happily continue to develop.
The Phong light consists of 3 great deceptions:
Ambient Lie
The AMBIENT lie pretends to be the light from the Sun or the Moon that has been reflected a million times. If you power off all the light sources in your home at night and cover all the windows, you're probably insane, you know? Anyway, I did it too, and after 30 minutes I noticed that I actually could see. Maybe it's a brain trick, but I guess it's the ambient light.
In graphics programming, we just take the global light color and multiply it by the surface scale factor. As if the surface itself were a source of light!
If the global light color is obvious, what about the surface scale factor?
The surface scale factor can come from a material parameter, a texture, or just a constant shared by every object in your game:
vec3 ambient = u_material.ambient_scale * u_light.color;
So, that's the easiest of the light components.
AMBIENT is a simulation of global illumination, as if the light rays were reflected thousands of times.
Diffuse Lie
The DIFFUSE is a better liar than the AMBIENT. It simulates the impact of a light source. We all know that the closer the light source is to the surface, the brighter the surface gets.
And everything makes sense once you remember that the shortest ray from a point to a surface is perpendicular to the surface.
So, it's easier to use the dot product between the normalized vector to the light source and the fragment normal:
If you take a look at this image, you'll notice that the first angle between the normal and the light direction is smaller than the second one. And the same relation holds for the distance.
The closer the to-light vector is to the normal, the higher the cosine, and the light impact as well.
So the dot product of the to_light vector and the normal gives us a nice scale factor — except that it can be negative.
To cut off the negative scale, we can just use the max function:
// to_light and normal ARE normalized!!!!
float diffuse_impact = max(dot(to_light, normal), 0.0);
But why can't we just use the distance? The answer is pretty straightforward:
We use the cosine instead of the distance so we don't have to pass a "max_distance" and interpolate against it to get a [0.0; 1.0] impact scale factor. But you could!
So, what to do with it? Well, you can scale your diffuse texture color by this factor, or take the material color and scale that instead. Here's the calculation pipeline:
The 4th step can be changed as you want. Instead of the light color, you could use the material color or a texel (texture pixel):
vec3 to_light_direction = normalize(u_light.position - frag_position);
That's how it looks in the fragment shader. The normalization is required!
float diffuse_impact = max(dot(to_light_direction, normal), 0.0);
And the second step gives us the scale factor. With this scale, we can make the color darker when the angle is too big.
vec3 diffuse = texture(u_diffuse_texture, frag_uv).rgb * diffuse_impact;
We don't need to colorize with the light color at this step. We only scale the fragment color from black to its actual color:
If you have multiple light sources, you can sum the factors! But be careful: you could, but probably don't want to, end up with a scale greater than 1.0.
If you're thinking about the distance, keep reading this summary. It's just a subtype or modification of this base lie.
Specular Lie
As you may have noticed, the diffuse light doesn't actually highlight the object the way real lighting does. It slightly turns pixels on when the light source gets closer to the surface, but what about the patch of reflected light?
The SPECULAR fakes the reflected light. Well, it's actually the most honest of the components, because here we really do need to find the reflected light vector. Let's figure it out!
The specular "prints" the light color onto the surface. You can see it in real life as well:
Look at that book. The center of the light reflection is mostly the color of the light source, and the closer the reflection direction is to our eyes, the brighter it is.
That's the same idea as the diffuse impact, but the direction to the camera plays the role of the normal vector, while the "to-light" role is taken by the reflected light direction. Sounds hard? Well, let's summarize:
The closer the reflected light kicks your eye, the more pain you feel
==
The smaller the angle between the reflected light vector and the vector to your eye, the harder it hurts
So, the calculation pipeline is a little harder than the diffuse one, but it still just looks like this:
- The first step is getting the reflection itself. To do so, we take our to_light vector, reverse it by multiplying by -1 (or just -to_light), and apply the reflect function relative to the fragment normal: vec3 reflection = reflect(-to_light, normal);
- Then, we need to find the vector to the view position. It's as simple as finding the to_light direction:
- If you use the u_model_view matrix, you don't need any extra information, since the camera sits at the origin: vec3 to_view = normalize(/* camera_pos */ -frag_position);
- Otherwise, you need to provide the camera position as a uniform: vec3 to_view = normalize(u_camera_pos - frag_position);
- Then we find the specular impact, aka the cosine, using the dot product: float specular_impact = max(dot(reflection, to_view), 0.0);
- The cosine falls off too gradually, but we see the light reflection only in a small area. So, to concentrate the light color in the center of the reflection, we apply a reshaping function. It's usually the power function.
The shininess is the parameter of that reshaping function; in our (and the common) case, it's the power exponent. Let's take a look at this gif:
The X coordinate is the specular_impact value, while the Y coordinate is the result of pow(specular_impact, u_material.shininess):
- The higher the shininess, the faster the result drops to 0.0;
- That means we bend the gradual cosine falloff into a sharp curve, so our light glint won't be too big.
Combination of Lies
When you get what each component of the Phong light lies about, remember that we just need to sum those components. That's the basic light:
Example with static material and a single texture: (link)
#version 330 core
in vec3 frag_position;
in vec3 frag_normal;
in vec2 frag_uv;
layout(location = 0) out vec4 frag_color;
struct Light {
vec3 position;
vec3 color;
};
struct Material {
vec3 ambient;
vec3 diffuse;
vec3 specular;
float shininess;
};
uniform sampler2D u_texture;
uniform Light u_light;
uniform Material u_material;
uniform vec3 u_view_position; // ONLY IF WE DON'T USE MODEL_VIEW!
void main() {
vec3 normal = normalize(frag_normal);
vec3 texel = texture(u_texture, frag_uv).rgb;
// AMBIENT COMPONENT
vec3 ambient = u_material.ambient * u_light.color; // you could use texel as well
// DIFFUSE COMPONENT
vec3 to_light = normalize(u_light.position - frag_position);
float diffuse_impact = max(dot(to_light, normal), 0.0);
vec3 diffuse = u_material.diffuse * diffuse_impact;
// SPECULAR COMPONENT
vec3 reflected = normalize(reflect(-to_light, normal));
vec3 to_eye = normalize(u_view_position - frag_position); // ONLY IF WE DON'T USE MODEL_VIEW!
// vec3 to_eye = normalize(-frag_position); // If we use MODEL_VIEW
float specular_impact = max(dot(reflected, to_eye), 0.0);
float specular_reformed = pow(specular_impact, u_material.shininess);
vec3 specular = u_material.specular * u_light.color * specular_reformed;
// GETTING RESULT FRAGMENT COLOR:
frag_color = vec4((ambient + diffuse + specular) * texel, 1.0);
}
Notice that if we transform everything with the view transform, the eye position becomes the origin of the coordinate system. So, when using the ModelView method, we don't need to pass the camera position: we calculate the to_eye vector by subtracting frag_position from the [0, 0, 0] vector, which is just a vector negation.
You can find the example with textured materials here!
This light is super fake, since it doesn't account for the real light distance or the power of the light source. But this is the skeleton of lighting. You can modify, add, or delete properties of each component to achieve better lights. We'll take a look at those modifications in the next topic!
What's next?
This summary described the base model. Still, there are a lot of unanswered questions left behind. I'm tired of writing this exact page, so I'll continue after a little break and answer those questions later:
- How to restrict the light to get a local light source like a lamp or a torch? What about the Sun lighting, and of course I want the player's flashlight! (Directional, Point and Spotlight);
- How to push multiple light sources into the scene, and how to combine them inside the shaders? How do those fucking game engines handle thousands of light sources!?
- Why do some people add ambient, diffuse and specular colors to the light structure?
- How to make the lighting more realistic (Blinn-Phong, shadows)?