0 Preface
The notes in this section correspond to the learning content on the official website: Basic Lighting
In Light 01 we first learned about the concept of color and tried out a simple ambient light.
In this section, we will learn some simple lighting models.
The original tutorial presents the Phong lighting model; these notes present the Blinn-Phong lighting model instead.
1 An overview of the principle of a simple lighting model (Blinn-Phong lighting model)
OpenGL lighting uses a simplified model that approximates reality.
These lighting models are based on our understanding of the physics of light.
One of these models is called the Blinn-Phong lighting model.
The main structure of the Blinn-Phong lighting model consists of three components: ambient, diffuse and specular lighting.
- Ambient lighting: in the teacup image above, some parts are backlit but still have color because they receive various kinds of indirect light. Reflected light from the environment is complex; to simulate it we use an ambient constant that always gives objects some color.
- Diffuse lighting: simulates the directional impact of a light source on an object. It is the most visually significant component of the Blinn-Phong model; the more directly a part of the object faces the light source, the brighter it is.
- Specular lighting: simulates the highlight that appears on shiny objects. The specular color is closer to the color of the light than to the color of the object.
Before building these three terms, we fix a few conventions.
Shading is evaluated at a shading point, and the surface is treated as locally planar around that point.
At the shading point we define unit vectors: the surface normal n, the view direction v, and the light direction l, together with the surface material parameters.
We shade each point by itself, ignoring the influence of other objects, i.e. no shadows or other global effects.
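For reference, with the shading point at position p, the light at x_light and the camera at x_eye (these position symbols are chosen here just for notation), the directions are

$$\mathbf{l}=\frac{x_{\text{light}}-p}{\lVert x_{\text{light}}-p\rVert},\qquad \mathbf{v}=\frac{x_{\text{eye}}-p}{\lVert x_{\text{eye}}-p\rVert},\qquad \lVert\mathbf{n}\rVert=\lVert\mathbf{l}\rVert=\lVert\mathbf{v}\rVert=1.$$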
1.1 Diffuse reflection
- A beam of light hitting the surface of an object is diffusely reflected in all directions.
- However, the brightness of the surface changes with the angle between the surface orientation and the light. Picture the black-bordered purple square below as a shading point, and the light as discrete rays of equal energy: how much energy the shading point receives per unit area determines how bright it is. At different angles the "number" of rays received per unit area differs, so the received energy differs. Lambert's cosine law states that the received energy is proportional to the cosine of the angle between the normal and the light direction.
- Assume the light comes from a point light source that emits energy continuously; that energy spreads outward over expanding spherical shells. As the light propagates, the shell's surface area grows, so the energy arriving per unit area keeps decreasing. Assuming no loss (a vacuum), if we define the intensity at distance 1 from the light source as I, then at distance r it becomes I/r². In other words, the received intensity falls off with the inverse square of the distance.
- The distance tells us how much light reaches the neighborhood of the point, and the angle tells us how much of it the surface picks up, so we can write down the formula for diffuse reflection (given right after this list). The max function expresses that light arriving at more than 90 degrees from the normal contributes nothing. The leading coefficient describes how the surface material absorbs and reflects the incoming light. Since diffuse reflection scatters equally in all directions, the result looks the same from every viewing direction, i.e. the formula does not involve the view direction.
- Here are some renderings to help understand.
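Written out, using the conventions above (k_d is the diffuse material coefficient and I the intensity at unit distance), the attenuation and the diffuse term are

$$E(r)=\frac{I}{r^{2}}\qquad\text{(intensity arriving at distance } r\text{)}$$

$$L_d = k_d\,\frac{I}{r^{2}}\,\max(0,\ \mathbf{n}\cdot\mathbf{l})$$

The max term implements Lambert's cosine law and discards light arriving from behind the surface, and nothing here depends on the view direction v.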
1.2 Specular reflection/highlight (compared with Phong lighting model)
- We see a specular reflection (highlight) when the viewing direction is close enough to the mirror-reflection direction of the light.
- The viewing direction being close to the reflection direction is equivalent to the halfway vector h (the bisector of v and l) being close to the normal n, so to decide whether a highlight is visible we only need to check how close n and h are (the formulas are written out after this list). The leading coefficient is the specular reflection coefficient; in the Blinn-Phong lighting model the way the surface absorbs specular light is simplified.
- Although the cosine from the dot product of n and h does measure how close n and h are, it falls off too slowly with the angle. To control the angular range over which the highlight is visible, the formula raises this cosine to an exponent; in this lighting model the exponent is typically somewhere around 100 to 200.
- The following figure shows the effect of the formula.
- If, instead of the halfway vector, we use the reflection direction and the view direction directly, we get the Phong lighting model; Blinn-Phong is an improvement on it (the Phong form is also written out after this list). Computationally, the halfway vector is much cheaper to obtain than the reflection direction. Also, with Phong, when the angle between the view direction and the reflection direction exceeds 90 degrees, the highlight is treated as having disappeared; we will compare the two in the implementation later.
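In the same notation (k_s is the specular coefficient and p the control exponent), the halfway vector and the two specular terms are

$$\mathbf{h}=\frac{\mathbf{v}+\mathbf{l}}{\lVert \mathbf{v}+\mathbf{l}\rVert}$$

$$L_s^{\text{Blinn-Phong}} = k_s\,\frac{I}{r^{2}}\,\max(0,\ \mathbf{n}\cdot\mathbf{h})^{p}$$

$$L_s^{\text{Phong}} = k_s\,\frac{I}{r^{2}}\,\max(0,\ \mathbf{v}\cdot\mathbf{R})^{p},\qquad \mathbf{R}=2(\mathbf{n}\cdot\mathbf{l})\,\mathbf{n}-\mathbf{l}$$

The exponent p is what narrows the highlight; the implementation below uses values between roughly 10 and 200.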
1.3 Ambient Light
For ambient light we simply assign a constant to the shading point.
Assume the ambient light intensity is I_a and the surface's ambient coefficient is k_a;
this gives a (hypothetical) constant ambient term.
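As a formula:

$$L_a = k_a\,I_a$$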
1.4 Synthesis
Finally, summing the three terms gives the complete Blinn-Phong lighting model.
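In the notation above:

$$L = L_a + L_d + L_s = k_a I_a \;+\; k_d\,\frac{I}{r^{2}}\max(0,\ \mathbf{n}\cdot\mathbf{l}) \;+\; k_s\,\frac{I}{r^{2}}\max(0,\ \mathbf{n}\cdot\mathbf{h})^{p}$$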
2 Implementation
The Blinn-Phong lighting model is just an improvement on the specular reflection of the Phong lighting model.
Then we can implement the Phong lighting model first, and then make a slight change to get the Blinn-Phong lighting model.
2.1 Phong lighting model
We will use the normal vector, so add it to the input of the vertex shader
layout(location = 3) in vec3 aNormal; // The attribute location of the normal vector is 3
Then we go back and modify the vertex array: add the normal vector to each vertex, and append a set of vertices for the ground.
// position (3) | uv (2) | normal (3)
float vertices[] = {
    // back face (normal 0, 0, -1)
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,   0.0f,  0.0f, -1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,   0.0f,  0.0f, -1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  0.0f, -1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  0.0f, -1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  0.0f, -1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,   0.0f,  0.0f, -1.0f,
    // front face (normal 0, 0, 1)
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  0.0f,  1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  0.0f,  1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,   0.0f,  0.0f,  1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,   0.0f,  0.0f,  1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,   0.0f,  0.0f,  1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  0.0f,  1.0f,
    // left face (normal -1, 0, 0)
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    // right face (normal 1, 0, 0)
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   1.0f,  0.0f,  0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   1.0f,  0.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   1.0f,  0.0f,  0.0f,
    // bottom face (normal 0, -1, 0)
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f, -1.0f,  0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f, -1.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   0.0f, -1.0f,  0.0f,
    // top face (normal 0, 1, 0)
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
    // ground (normal 0, 1, 0)
    -15.0f, -3.0f, -15.0f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f, -15.0f,  1.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f,  15.0f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f,  15.0f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -15.0f, -3.0f,  15.0f,  0.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -15.0f, -3.0f, -15.0f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
};
Add the ground to the render loop
if (i == 0) {
    glDrawArrays(GL_TRIANGLES, 36, 6);
}
change vertex attributes
// position attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// color attribute (no longer used)
//glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
//glEnableVertexAttribArray(1);
// uv attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(2);
// normal attribute
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(5 * sizeof(float)));
glEnableVertexAttribArray(3);
Rewrite the vertex shader: we no longer need the color attribute, and we output the shading point position and the normal.
#version 330 core
layout(location = 0) in vec3 aPos;      // position attribute, location 0
//layout(location = 1) in vec3 aColor;  // color attribute, location 1 (unused)
layout(location = 2) in vec2 aTexCoord; // uv attribute, location 2
layout(location = 3) in vec3 aNormal;   // normal attribute, location 3

//out vec4 vertexColor;
out vec2 TexCoord;
// shading point position and normal
out vec3 FragPos;
out vec3 Normal;

//uniform mat4 transform;
uniform mat4 modelMat;
uniform mat4 viewMat;
uniform mat4 projMat;

void main(){
    gl_Position = projMat * viewMat * modelMat * vec4(aPos.xyz, 1.0);
    Normal = mat3(transpose(inverse(modelMat))) * aNormal;
    FragPos = (modelMat * vec4(aPos.xyz, 1.0)).xyz;
    //vertexColor = vec4(aColor, 1.0);
    TexCoord = aTexCoord;
}
- We need to pass the normal vector from the vertex shader to the fragment shader and transform it into world space, but this cannot be done by simply multiplying it by the model matrix.
- The normal is just a direction vector: we only want to rotate and scale it, so we use only the upper-left 3×3 part of the model matrix. Also, if the model matrix contains a non-uniform scale, transforming the normal with it would leave it no longer perpendicular to the surface. The fix is the normal matrix, the transpose of the inverse of the model matrix, which is what mat3(transpose(inverse(modelMat))) computes in the shader above.
- Even for shaders, matrix inversion is an expensive operation, so it should be avoided in shader code where possible, because here it runs once per vertex on the GPU. Doing it this way is fine for learning, but an application that cares about performance should compute the normal matrix on the CPU before drawing and pass it to the shader via a uniform (like the model matrix); a short sketch follows below.
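A minimal sketch of the CPU-side version, using GLM and assuming we add a mat3 uniform named normalMat (a hypothetical name, not present in the code of these notes):

// compute the normal matrix once per object on the CPU
glm::mat3 normalMat = glm::mat3(glm::transpose(glm::inverse(modelMat)));
// upload it next to the other uniforms
glUniformMatrix3fv(glGetUniformLocation(myShader->ID, "normalMat"), 1, GL_FALSE, glm::value_ptr(normalMat));
// the vertex shader would then use: Normal = normalMat * aNormal;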
Next we will change the fragment shader; it will need the light's position and color, and a redefined ambient light. Back in main, add these after the existing uniform-setting code, changing the ambient light and removing the object color from the previous note.
glUniform3f(glGetUniformLocation(myShader->ID, "ambientColor"), 0.2f, 0.2f, 0.2f);
glUniform3f(glGetUniformLocation(myShader->ID, "lightPos"), 0.0f, 8.0f, 8.0f);
glUniform3f(glGetUniformLocation(myShader->ID, "lightColor"), 1.0f, 1.0f, 1.0f);
Now modify the fragment shader according to the formula; for the moment we only use diffuse reflection and ambient light.
#version 330 core
//in vec4 vertexColor;
in vec2 TexCoord;
// shading point position and normal
in vec3 FragPos;
in vec3 Normal;

out vec4 FragColor;

uniform sampler2D ourTexture;
uniform sampler2D ourFace;
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 ambientColor;

void main(){
    //FragColor = mix(texture(ourTexture,TexCoord),texture(ourFace,TexCoord),texture(ourFace,TexCoord).a*0.2);
    vec3 r = lightPos - FragPos;
    // light direction
    vec3 lightDir = normalize(r);
    // distance attenuation factor
    float coefficient = 180 / (pow(r.x,2) + pow(r.y,2) + pow(r.z,2));
    // normal
    vec3 normal = normalize(Normal);
    // diffuse reflection
    float diff = max(dot(normal, lightDir), 0.0);
    vec3 diffuse = coefficient * diff * lightColor;
    // material
    vec4 objColor = mix(texture(ourTexture,TexCoord), texture(ourFace,TexCoord), texture(ourFace,TexCoord).a*0.2);
    FragColor = vec4(diffuse + ambientColor, 1.0) * objColor;
}
We turned the original dark green background black
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
At this point, running the program should produce an image without any highlight (schematic).
Next we deal with specular reflection.
Pass the camera position to the fragment shader in the main function
glUniform3f(glGetUniformLocation(myShader->ID, "cameraPos"), camera.Position.x, camera.Position.y, camera.Position.z);
Receive it in the fragment shader:
uniform vec3 cameraPos;
......
// observation (view) direction
vec3 cameraVec = normalize(cameraPos - FragPos);
Then compute the specular term in the fragment shader:
// Phong lighting model specular reflection
vec3 reflectDir = reflect(-lightDir, normal);
float specularAmount = pow(max(dot(cameraVec, reflectDir), 0.0), 100);
vec3 specular = coefficient * specularAmount * lightColor;
FragColor = vec4(diffuse + ambientColor + specular, 1.0) * objColor;
- The reflect function expects its first argument to point from the light source toward the fragment, but lightDir currently points the other way (from the fragment to the light source), so we negate it.
- The second argument must be a normal vector, so we pass the normalized normal.
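For reference, GLSL's built-in reflect computes

$$\text{reflect}(\mathbf{I},\mathbf{N}) = \mathbf{I} - 2\,(\mathbf{N}\cdot\mathbf{I})\,\mathbf{N},$$

which only gives the correct mirror direction when N is a unit vector, which is why the normalized normal is passed.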
We can change the position of the light source to see the effect more directly
glUniform3f(glGetUniformLocation(myShader->ID, "lightPos"), 0.0f,3.0f,3.0f);
For clarity, we also switch the material to a plain solid color
vec4 objColor=vec4(1.0f, 0.5f, 0.31f,0.0f);
and we get the result
2.2 Blinn-Phong lighting model/comparison
Building on the Phong lighting model, we use the halfway vector to compute the specular reflection, and for comparison we use the B key to toggle which lighting model is used.
Note that, in general, to obtain a specular highlight of similar intensity with the Blinn-Phong model, the specular exponent needs to be about 2 to 4 times the one used with the Phong model.
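A rough way to see where that rule of thumb comes from (a back-of-the-envelope estimate, not part of the original tutorial): the angle between n and h is about half the angle θ between v and the reflection direction, so for small angles

$$\left(\cos\tfrac{\theta}{2}\right)^{p'} \approx e^{-p'\theta^{2}/8} = \left(e^{-\theta^{2}/2}\right)^{p'/4} \approx (\cos\theta)^{p'/4},$$

i.e. a Blinn-Phong exponent p' falls off roughly like a Phong exponent p'/4, so matching a given Phong exponent takes a Blinn-Phong exponent on the order of 2 to 4 times larger.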
Since the two models differ only in the specular term, we ignore the rest for this comparison: set coefficient = 1
and drop the diffuse term, so that only specular plus the constant ambient remains:
FragColor=vec4(specular+ambientColor,1.0)*objColor;
We also make the exponent smaller to enlarge the highlight area.
In the fragment shader, add:
uniform int Blinn;
......
if(Blinn == 0){
    // Phong lighting model specular reflection
    vec3 reflectDir = reflect(-lightDir, normal);
    specularAmount = pow(max(dot(cameraVec, reflectDir), 0.0), 10);
    specular = coefficient * specularAmount * lightColor;
}else{
    // Blinn-Phong lighting model specular reflection
    vec3 halfwayDir = normalize(lightDir + cameraVec);
    specularAmount = pow(max(dot(normal, halfwayDir), 0.0), 40);
    specular = coefficient * specularAmount * lightColor;
}
then in the main function
int Blinn = 0;
void processInput(GLFWwindow* window) {
    ......
    if (glfwGetKey(window, GLFW_KEY_B) == GLFW_PRESS)
        Blinn = abs(Blinn - 1);
}
......
glUniform1i(glGetUniformLocation(myShader->ID, "Blinn"), Blinn);
Let's see the effect
The first is the highlight of the Phong lighting model
Then the highlights of the Blinn-Phong lighting model
The highlight with the Blinn-Phong model looks more continuous: the specular reflection on distant parts of the ground can still be seen. With the Phong model, for distant points the angle between the reflection vector and the view vector exceeds 90 degrees, so the distant specular reflection disappears.
So relatively speaking, the effect of Blinn-Phong is more realistic, and its calculation is more convenient.
3 Complete code
This time both shaders and the main function were modified; they are given below.
main.cpp
#include <iostream>
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "Shader.h"
#include "Camera.h"
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

#pragma region Model Data
// position (3) | uv (2) | normal (3)
float vertices[] = {
    // back face (normal 0, 0, -1)
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,   0.0f,  0.0f, -1.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 0.0f,   0.0f,  0.0f, -1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  0.0f, -1.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  0.0f, -1.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  0.0f, -1.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 0.0f,   0.0f,  0.0f, -1.0f,
    // front face (normal 0, 0, 1)
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  0.0f,  1.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  0.0f,  1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,   0.0f,  0.0f,  1.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 1.0f,   0.0f,  0.0f,  1.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 1.0f,   0.0f,  0.0f,  1.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  0.0f,  1.0f,
    // left face (normal -1, 0, 0)
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f,  0.5f, -0.5f,  1.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    -0.5f,  0.5f,  0.5f,  1.0f, 0.0f,  -1.0f,  0.0f,  0.0f,
    // right face (normal 1, 0, 0)
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   1.0f,  0.0f,  0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   1.0f,  0.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   1.0f,  0.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   1.0f,  0.0f,  0.0f,
    // bottom face (normal 0, -1, 0)
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f, -0.5f,  1.0f, 1.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f, -1.0f,  0.0f,
     0.5f, -0.5f,  0.5f,  1.0f, 0.0f,   0.0f, -1.0f,  0.0f,
    -0.5f, -0.5f,  0.5f,  0.0f, 0.0f,   0.0f, -1.0f,  0.0f,
    -0.5f, -0.5f, -0.5f,  0.0f, 1.0f,   0.0f, -1.0f,  0.0f,
    // top face (normal 0, 1, 0)
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f, -0.5f,  1.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
     0.5f,  0.5f,  0.5f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -0.5f,  0.5f,  0.5f,  0.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -0.5f,  0.5f, -0.5f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
    // ground (normal 0, 1, 0)
    -15.0f, -3.0f, -15.0f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f, -15.0f,  1.0f, 1.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f,  15.0f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
     15.0f, -3.0f,  15.0f,  1.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -15.0f, -3.0f,  15.0f,  0.0f, 0.0f,   0.0f,  1.0f,  0.0f,
    -15.0f, -3.0f, -15.0f,  0.0f, 1.0f,   0.0f,  1.0f,  0.0f,
};

glm::vec3 cubePositions[] = {
    glm::vec3( 0.0f,  0.0f,   0.0f),
    glm::vec3( 2.0f,  5.0f, -15.0f),
    glm::vec3(-1.5f, -2.2f,  -2.5f),
    glm::vec3(-3.8f, -2.0f, -12.3f),
    glm::vec3( 2.4f, -0.4f,  -3.5f),
    glm::vec3(-1.7f,  3.0f,  -7.5f),
    glm::vec3( 1.3f, -2.0f,  -2.5f),
    glm::vec3( 1.5f,  2.0f,  -2.5f),
    glm::vec3( 1.5f,  0.2f,  -1.5f),
    glm::vec3(-1.3f,  1.0f,  -1.5f)
};
#pragma endregion

#pragma region Camera Declare
//build camera
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
Camera camera(cameraPos, cameraTarget, cameraUp);
#pragma endregion

#pragma region Input Declare
//for movement
float deltaTime = 0.0f; // time difference between the current frame and the previous frame
float lastFrame = 0.0f; // time of the previous frame
int Blinn = 0;

void processInput(GLFWwindow* window) {
    //exit when ESC is pressed
    if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS) {
        glfwSetWindowShouldClose(window, true);
    }
    //smoother camera system
    if (deltaTime != 0) {
        camera.cameraPosSpeed = 5 * deltaTime;
    }
    //the camera moves back and forth, left and right according to the direction of the lens
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
        camera.PosUpdateForward();
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
        camera.PosUpdateBackward();
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
        camera.PosUpdateLeft();
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
        camera.PosUpdateRight();
    if (glfwGetKey(window, GLFW_KEY_Q) == GLFW_PRESS)
        camera.PosUpdateUp();
    if (glfwGetKey(window, GLFW_KEY_E) == GLFW_PRESS)
        camera.PosUpdateDown();
    if (glfwGetKey(window, GLFW_KEY_B) == GLFW_PRESS)
        Blinn = abs(Blinn - 1);
}

float lastX;
float lastY;
bool firstMouse = true;
//the mouse controls the camera direction
void mouse_callback(GLFWwindow* window, double xpos, double ypos) {
    if (firstMouse == true) {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }
    float deltaX, deltaY;
    float sensitivity = 0.05f;
    deltaX = (xpos - lastX) * sensitivity;
    deltaY = (ypos - lastY) * sensitivity;
    lastX = xpos;
    lastY = ypos;
    camera.ProcessMouseMovement(deltaX, deltaY);
};

//zoom
float fov = 45.0f;
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset) {
    if (fov >= 1.0f && fov <= 45.0f)
        fov -= yoffset;
    if (fov <= 1.0f)
        fov = 1.0f;
    if (fov >= 45.0f)
        fov = 45.0f;
}
#pragma endregion

unsigned int LoadImageToGPU(const char* filename, GLint internalFormat, GLenum format, int textureSlot) {
    unsigned int TexBuffer;
    glGenTextures(1, &TexBuffer);
    glActiveTexture(GL_TEXTURE0 + textureSlot);
    glBindTexture(GL_TEXTURE_2D, TexBuffer);
    // set the wrapping and filtering methods for the currently bound texture object
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // load and generate the texture
    int width, height, nrChannel;
    unsigned char* data = stbi_load(filename, &width, &height, &nrChannel, 0);
    if (data) {
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else {
        printf("Failed to load texture");
    }
    stbi_image_free(data);
    return TexBuffer;
}

int main() {
#pragma region Open a Window
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    //Open GLFW Window
    GLFWwindow* window = glfwCreateWindow(800, 600, "My OpenGL Game", NULL, NULL);
    if (window == NULL) {
        printf("Open window failed.");
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    //hide the mouse cursor
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
    //callback that listens for mouse movement
    glfwSetCursorPosCallback(window, mouse_callback);
    //callback that listens to the scroll wheel
    glfwSetScrollCallback(window, scroll_callback);

    //Init GLEW
    glewExperimental = true;
    if (glewInit() != GLEW_OK) {
        printf("Init GLEW failed.");
        glfwTerminate();
        return -1;
    }

    glViewport(0, 0, 800, 600);
    glEnable(GL_DEPTH_TEST);
#pragma endregion

#pragma region Init Shader Program
    Shader* myShader = new Shader("vertexSource.vert", "fragmentSource.frag");
#pragma endregion

#pragma region Init and Load Models to VAO,VBO
    unsigned int VAO;
    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);

    unsigned int VBO;
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // position attribute
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    // color attribute (no longer used)
    //glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
    //glEnableVertexAttribArray(1);
    // uv attribute
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(2);
    // normal attribute
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(5 * sizeof(float)));
    glEnableVertexAttribArray(3);
#pragma endregion

#pragma region Init and Load Textures
    //flip texture coordinates vertically
    stbi_set_flip_vertically_on_load(true);
    //material
    unsigned int TexBufferA;
    TexBufferA = LoadImageToGPU("container.jpg", GL_RGB, GL_RGB, 0);
    unsigned int TexBufferB;
    TexBufferB = LoadImageToGPU("awesomeface.png", GL_RGBA, GL_RGBA, 1);
#pragma endregion

#pragma region Prepare MVP matrices
    //model
    glm::mat4 modelMat;
    //view
    glm::mat4 viewMat;
    //projection
    glm::mat4 projMat;
#pragma endregion

    while (!glfwWindowShouldClose(window)) {
        //Process Input
        processInput(window);

        //Clear Screen
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        for (unsigned int i = 0; i < 10; i++) {
            //Set Model matrix
            modelMat = glm::translate(glm::mat4(1.0f), cubePositions[i]);
            float angle = 20.0f * i;
            modelMat = glm::rotate(modelMat, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
            //Set view matrix
            viewMat = camera.GetViewMatrix();
            //Set projection matrix
            projMat = glm::perspective(glm::radians(fov), 800.0f / 600.0f, 0.1f, 100.0f);

            //Set Material -> Shader Program
            myShader->use();
            //Set Material -> Textures
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, TexBufferA);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, TexBufferB);
            //Set Material -> Uniforms
            glUniform1i(glGetUniformLocation(myShader->ID, "ourTexture"), 0);
            glUniform1i(glGetUniformLocation(myShader->ID, "ourFace"), 1);
            glUniformMatrix4fv(glGetUniformLocation(myShader->ID, "modelMat"), 1, GL_FALSE, glm::value_ptr(modelMat));
            glUniformMatrix4fv(glGetUniformLocation(myShader->ID, "viewMat"), 1, GL_FALSE, glm::value_ptr(viewMat));
            glUniformMatrix4fv(glGetUniformLocation(myShader->ID, "projMat"), 1, GL_FALSE, glm::value_ptr(projMat));
            glUniform3f(glGetUniformLocation(myShader->ID, "ambientColor"), 0.2f, 0.2f, 0.2f);
            glUniform3f(glGetUniformLocation(myShader->ID, "lightPos"), 0.0f, 3.0f, 3.0f);
            glUniform3f(glGetUniformLocation(myShader->ID, "lightColor"), 1.0f, 1.0f, 1.0f);
            glUniform3f(glGetUniformLocation(myShader->ID, "cameraPos"), camera.Position.x, camera.Position.y, camera.Position.z);
            glUniform1i(glGetUniformLocation(myShader->ID, "Blinn"), Blinn);

            //Set Model
            glBindVertexArray(VAO);
            //DrawCall
            glDrawArrays(GL_TRIANGLES, 0, 36);
            //draw the ground once
            if (i == 0) {
                glDrawArrays(GL_TRIANGLES, 36, 6);
            }
        }

        //Clean up, prepare for next render loop
        glfwSwapBuffers(window);
        glfwPollEvents();

        //Record the time
        float currentFrame = glfwGetTime();
        deltaTime = currentFrame - lastFrame;
        lastFrame = currentFrame;
    }

    //Exit program
    glfwTerminate();
    return 0;
}
vertex shader
#version 330 core
layout(location = 0) in vec3 aPos;      // position attribute, location 0
//layout(location = 1) in vec3 aColor;  // color attribute, location 1 (unused)
layout(location = 2) in vec2 aTexCoord; // uv attribute, location 2
layout(location = 3) in vec3 aNormal;   // normal attribute, location 3

//out vec4 vertexColor;
out vec2 TexCoord;
// shading point position and normal
out vec3 FragPos;
out vec3 Normal;

//uniform mat4 transform;
uniform mat4 modelMat;
uniform mat4 viewMat;
uniform mat4 projMat;

void main(){
    gl_Position = projMat * viewMat * modelMat * vec4(aPos.xyz, 1.0);
    Normal = mat3(transpose(inverse(modelMat))) * aNormal;
    FragPos = (modelMat * vec4(aPos.xyz, 1.0)).xyz;
    //vertexColor = vec4(aColor, 1.0);
    TexCoord = aTexCoord;
}
fragment shader
#version 330 core
//in vec4 vertexColor;
in vec2 TexCoord;
// shading point position and normal
in vec3 FragPos;
in vec3 Normal;

out vec4 FragColor;

uniform sampler2D ourTexture;
uniform sampler2D ourFace;
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 ambientColor;
uniform vec3 cameraPos;
uniform int Blinn;

void main(){
    //FragColor = mix(texture(ourTexture,TexCoord),texture(ourFace,TexCoord),texture(ourFace,TexCoord).a*0.2);
    vec3 r = lightPos - FragPos;
    // light direction
    vec3 lightDir = normalize(r);
    // distance attenuation factor
    float coefficient = 50 / (pow(r.x,2) + pow(r.y,2) + pow(r.z,2));
    //coefficient = 1;
    // normal
    vec3 normal = normalize(Normal);
    // diffuse reflection
    float diff = max(dot(normal, lightDir), 0.0);
    vec3 diffuse = coefficient * diff * lightColor;
    // material
    //vec4 objColor = mix(texture(ourTexture,TexCoord),texture(ourFace,TexCoord),texture(ourFace,TexCoord).a*0.2);
    vec4 objColor = vec4(1.0f, 0.5f, 0.31f, 1.0f);
    // observation (view) direction
    vec3 cameraVec = normalize(cameraPos - FragPos);
    vec3 specular;
    float specularAmount;
    if(Blinn == 0){
        // Phong lighting model specular reflection
        vec3 reflectDir = reflect(-lightDir, normal);
        specularAmount = pow(max(dot(cameraVec, reflectDir), 0.0), 100);
        specular = coefficient * specularAmount * lightColor;
    }else{
        // Blinn-Phong lighting model specular reflection
        vec3 halfwayDir = normalize(lightDir + cameraVec);
        specularAmount = pow(max(dot(normal, halfwayDir), 0.0), 200);
        specular = coefficient * specularAmount * lightColor;
    }
    FragColor = vec4(specular + diffuse + ambientColor, 1.0) * objColor;
}