Refer to: https://learnopengl.com/#!Advanced-Lighting/Shadows/Shadow-Mapping

### 1. Shadows in games

A shadow is the result of occluded light: when light from a source cannot reach an object's surface because another object blocks it, that surface is in shadow. Shadows are everywhere in the real world, but rendering a convincing shadow is not easy. Real-time rendering still has no perfect shadow algorithm; there are many approximate techniques, and each has its own weaknesses and trade-offs.

Currently, one of the most widely used techniques is shadow mapping, a shadow algorithm that is easy to extend. Achieving high quality with shadow mapping is still difficult, but we can first understand the concept and try to render the simplest possible shadow.

### 2. Shadow mapping

The principle of shadow mapping is very simple: render the scene from the light's point of view; every fragment the light can see is lit, and everything it cannot see must be in shadow. The projection also matters: generally an orthographic projection is used for directional light, and a perspective projection for point lights and other sources.

A point light, however, has no single view direction; it can be thought of as a 360° omnidirectional view, so its shadows require rendering depth values into a cube map. To get started, we will only consider directional light (orthographic projection) for now, which is simpler.

Remember the OpenGL depth test? Perhaps we can use depth values to judge whether a fragment is occluded from the current light source.

As shown in the left figure below, the yellow fragments are directly lit and the black fragments are occluded: for every black fragment, some other visible fragment must lie on the line between it and the light source. So what we need to do is shown in the lower-right figure: for each fragment, determine whether another fragment with a smaller depth exists along that line, i.e. whether the fragment is directly lit by the light source.

Specific steps:

- Switch to a coordinate space with the light source as the viewpoint and render a depth map using the depth test: each value in the depth map is the depth of the first fragment visible from the light's point of view
- After obtaining the depth map, in the normal render pass, for every fragment being shaded: transform it into the light's coordinate space, index the depth map to get the nearest visible depth from the light's perspective, and compare that with the fragment's own depth to decide whether the fragment is occluded; if it is, ignore the current light's contribution to the fragment's lighting
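The depth comparison at the heart of these two steps can be sketched on the CPU with a one-dimensional "scene". This is a minimal illustration, not real rendering code; `renderDepthMap` and `inShadow` are hypothetical helpers standing in for the depth pass and the shadow test:

```cpp
#include <algorithm>
#include <vector>

// First pass: build a 1-D "depth map" -- for each texel, keep only the
// depth of the fragment nearest to the light (what the depth test does).
std::vector<float> renderDepthMap(const std::vector<std::vector<float>>& fragsPerTexel) {
    std::vector<float> depthMap;
    for (const auto& frags : fragsPerTexel)
        depthMap.push_back(*std::min_element(frags.begin(), frags.end()));
    return depthMap;
}

// Second pass: a fragment is shadowed if something nearer to the light
// was recorded in the depth map at its texel.
bool inShadow(const std::vector<float>& depthMap, int texel, float fragDepth) {
    return fragDepth > depthMap[texel];
}
```

For example, if texel 0 sees fragments at depths 0.3 and 0.7, the depth map stores 0.3; the fragment at depth 0.7 is then judged occluded, while the one at 0.3 is lit.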

### 3. The depth map

To render the depth map, we need to define another framebuffer. Since only depth values matter, the texture format can be GL_DEPTH_COMPONENT, and OpenGL can be told not to read or write any color buffer via glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE). The resolution is also up to you; it does not have to match the screen size.

```cpp
GLuint depthFBO;
glGenFramebuffers(1, &depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
GLuint* depthColorBuffer = getAttachmentTexture(1, true);
glBindTexture(GL_TEXTURE_2D, depthColorBuffer[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthColorBuffer[0], 0);
// No color buffer is needed
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    cout << "ERROR::FRAMEBUFFER:: Framebuffer is not complete!" << endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

Things to consider when drawing:

- If the depth map has a custom resolution, call glViewport() to set the drawing area accordingly
- With the light source as the viewpoint, compute the light-space projection and view matrices; multiplying them gives the light-space matrix

In the code, for simplicity, the point light is temporarily treated as a directional light, so the projection matrix is an orthographic one.

- glm::ortho takes six parameters: left, right, bottom, and top define the cross-section of the orthographic frustum, while near and far define the depth range it covers

```cpp
while (!glfwWindowShouldClose(window))
{
    // Depth pass: render into the custom-resolution depth map
    glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
    glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // Light-space matrix = orthographic projection * light view matrix
    glm::mat4 lightProjection, lightView;
    glm::mat4 lightSpaceMatrix;
    GLfloat near_plane = 0.2f, far_plane = 15.0f;
    lightProjection = glm::ortho(-20.0f, 20.0f, -20.0f, 20.0f, near_plane, far_plane);
    lightView = glm::lookAt(glm::vec3(-4.0f, 2.6f, -0.25f), glm::vec3(0.0f), glm::vec3(0.0, 1.0, 0.0));
    lightSpaceMatrix = lightProjection * lightView;

    shaderDepth.Use();
    glUniformMatrix4fv(glGetUniformLocation(shaderDepth.Program, "lightSpaceMatrix"), 1, GL_FALSE, glm::value_ptr(lightSpaceMatrix));
    //wall.Draw(shaderDepth, 3);
    wood.Draw(shaderDepth, 8);
    //ground.Draw(shaderDepth, groundIndex + 1);
    //lightObj.Draw(shaderDepth, 1);
    //......
}
```

The shaders are nothing special: since no color buffer is attached, the fragment shader needs no body at all.

If everything is correct, the depth map looks like this when visualized: it appears red because, of the three RGB channels, only the R channel holds a value, and that value is the depth.

```glsl
// Vertex shader
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 5) in mat4 model;

uniform mat4 lightSpaceMatrix;

void main()
{
    gl_Position = lightSpaceMatrix * model * vec4(position, 1.0f);
}
```

```glsl
// Fragment shader: empty, only depth is written
#version 330 core

void main()
{
}
```

### 4. Rendering shadows

After obtaining the depth map, start the normal render pass, but this time pass the shadow map to the fragment shader and the light-space matrix to the vertex shader for the light-space transformation.

For the vertex shader, here is the complete code:

LightSpaceFragPosIn is the fragment's position in light space, corresponding to the coordinate in the figure; it is passed to the fragment shader for the next step of the calculation.

```glsl
#version 420 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texture;
layout (location = 3) in vec3 tangent;
layout (location = 4) in vec3 bitangent;
layout (location = 5) in mat4 model;

out VS_OUT {
    vec2 texIn;
    vec3 normalIn;
    vec3 fragPosIn;
    vec4 LightSpaceFragPosIn;
    mat3 TBN;
} vs_out;

uniform mat4 lightSpaceMatrix;

layout (std140, binding = 0) uniform Matrices
{
    mat4 view;       // view matrix
    mat4 projection; // projection matrix
};

void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0);
    vs_out.fragPosIn = vec3(model * vec4(position, 1.0f));
    vs_out.texIn = texture;
    mat3 normalMat = transpose(inverse(mat3(model)));
    vs_out.normalIn = normalMat * normal;
    vec3 T = normalize(normalMat * tangent);
    vec3 N = normalize(normalMat * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(T, N);
    vs_out.TBN = mat3(T, B, N);
    vs_out.LightSpaceFragPosIn = lightSpaceMatrix * vec4(vs_out.fragPosIn, 1.0);
}
```

For the fragment shader, the modified parts are as follows:

The ShadowCalculation function judges whether the current fragment is occluded. If it is, the current light's contribution to the fragment's color is skipped in the lighting calculation below. As for the logic inside ShadowCalculation:

- When the vertex shader writes a position to gl_Position, OpenGL performs the perspective division automatically; since we compute the light-space position ourselves, we must map it to NDC manually, which is easy: divide the first three components by w
- Then comes a common operation: convert the coordinate range from [-1, 1] to [0, 1]
- The third step is to sample the depth map to get the stored depth value for comparison
- Finally, if the fragment's depth is greater than the corresponding depth in the depth map, the fragment is occluded

```glsl
#version 330 core

uniform sampler2D shadowMap;
//......

void main()
{
    //......
    //vec3 result =
    //......
    for (int i = 0; i <= 0; i++)
    {
        float shadow = ShadowCalculation(LightSpaceFragPosIn);
        result = result + (1.0 - shadow) * CalcPointLight(pointLights[i], normal, fragPos, viewDir);
    }
    //......
    lightColor = vec4(result.rgb, 1.0);
}

float ShadowCalculation(vec4 fragPosLightSpace)
{
    // Perform the perspective division
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // Transform to the [0, 1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get the closest depth from the light's point of view
    // (using fragPosLight in the [0, 1] range as coordinates)
    float closestDepth = texture(shadowMap, projCoords.xy).r;
    // Get the current fragment's depth from the light's point of view
    float currentDepth = projCoords.z;
    // Check whether the current fragment is in shadow
    float shadow = currentDepth > closestDepth ? 1.0 : 0.0;
    return shadow;
}
```
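The logic in ShadowCalculation is pure arithmetic, so it can be checked outside the shader. Here is a CPU port of the function (a sketch: the `closestDepth` parameter stands in for the `texture(shadowMap, ...)` lookup, which needs a GL context):

```cpp
struct Vec4 { float x, y, z, w; };

// CPU port of the shader's ShadowCalculation: perspective division,
// remap [-1, 1] -> [0, 1], then compare against the stored depth.
float shadowCalculation(Vec4 fragPosLightSpace, float closestDepth) {
    // Perspective division -> NDC
    float px = fragPosLightSpace.x / fragPosLightSpace.w;
    float py = fragPosLightSpace.y / fragPosLightSpace.w;
    float pz = fragPosLightSpace.z / fragPosLightSpace.w;
    // Transform to [0, 1], the range the depth map is stored in
    float currentDepth = pz * 0.5f + 0.5f;
    (void)px; (void)py; // in the shader these index the depth map
    // Farther than the stored depth -> occluded
    return currentDepth > closestDepth ? 1.0f : 0.0f;
}
```

For instance, a light-space position with NDC depth 0.5 remaps to 0.75; it is in shadow if the depth map stores anything smaller than 0.75 at that texel, and lit otherwise.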

If there is no problem, you can get the following results:

There is now a shadow effect, but also many problems, including shadow acne and obvious aliasing. These need to be optimized, but this chapter is already long enough, so let's split it into parts.