Unity shader learning notes

Unity shader learning notes (IX)

20 non-photorealistic rendering

20.1 cartoon style rendering

There are many ways to implement cartoon rendering; one of them is tone-based shading. In this implementation, the diffuse coefficient is used to sample a one-dimensional ramp texture that controls the tone of the diffuse shading. In addition, a cartoon style usually requires drawing outlines along the edges of the object. Unlike the earlier screen post-processing approach, which outlines the rendered screen image, here we use a model-based outlining method, which is simpler to implement.

20.1.1 rendering contours

There are five common approaches to drawing model outlines:

① Outlines based on the view direction and the surface normal.

This method uses the dot product of the view direction and the surface normal to obtain the outline information. It is simple and fast and produces a result in a single Pass, but it is very limited: for some models the rendered outlines are unsatisfactory.
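For intuition, here is a minimal sketch of this first approach (it is not the method implemented later in these notes): a fragment shader that paints fragments whose normal is nearly perpendicular to the view direction with the outline color. The _EdgeThreshold property and the v2f members are assumptions made only for this illustration.

// Sketch of method 1: fragments near the silhouette get the outline color.
// _EdgeThreshold is a hypothetical property; _Color and _OutlineColor are as in
// the toon shader below; i.worldNormal / i.worldPos are assumed v2f members.
fixed4 frag(v2f i) : SV_Target
{
    fixed3 worldNormal = normalize(i.worldNormal);
    fixed3 worldViewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
    // edge approaches 0 near the silhouette and 1 where the surface faces the camera
    fixed edge = saturate(dot(worldNormal, worldViewDir));
    fixed3 color = lerp(_OutlineColor.rgb, _Color.rgb, step(_EdgeThreshold, edge));
    return fixed4(color, 1.0);
}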

② Procedural geometric contour rendering.

This method uses two Passes. The first Pass renders the back-facing triangles and uses some technique to make their silhouette visible; the second Pass renders the front-facing triangles normally. Its advantages are that it is fast and effective and suits most models with smooth surfaces; its drawback is equally obvious: it is not suitable for flat models such as a cube.

③ Contour rendering based on image processing.

The edge detection discussed earlier belongs to this category. Its advantage is that it applies to any kind of model, but it also has clear limitations: contours with very small changes in depth and normal cannot be detected, such as a sheet of paper lying on a desk.

④ Contour rendering based on silhouette edge detection

The biggest problem with the first three approaches is that the outline style cannot be controlled; for example, we may want outlines rendered in a particular artistic style. To do that, we need to detect the silhouette edges precisely and then render them directly. The test for whether an edge is a silhouette edge is whether the two triangles adjacent to the edge satisfy this formula:
(n0 · v > 0) ≠ (n1 · v > 0)
where n0 and n1 are the normals of the two triangles adjacent to the edge, and v is the direction from the viewpoint to any vertex of that edge. The essence of this formula is to test whether one of the adjacent triangles is front-facing while the other is back-facing. The test can be performed in a geometry shader. This method also has drawbacks: it is relatively complex to implement, and there are animation coherence problems, because the silhouette is extracted independently every frame, the outlines can jump between frames.
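As a small illustration, the per-edge test could be written as a helper function like the following sketch (how the adjacent-face normals are gathered, for example in a geometry shader or a preprocessing step, is omitted here):

// Hypothetical helper illustrating the silhouette-edge test above: n0 and n1 are
// the normals of the two triangles sharing the edge, v is the direction from the
// viewer towards a vertex of the edge. The edge is a contour edge when exactly one
// of the two faces is front-facing.
bool IsContourEdge(float3 n0, float3 n1, float3 v)
{
    bool face0FrontFacing = dot(n0, v) > 0.0;
    bool face1FrontFacing = dot(n1, v) > 0.0;
    return face0FrontFacing != face1FrontFacing;
}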

⑤ The fifth is actually a mixture of the first four.

First, the precise silhouette edges are found and the model and silhouette edges are rendered into a texture; the contour lines are then identified by image processing and rendered in image space.

Here we use the procedural geometry method to outline the model: the model is rendered with two Passes. In the first Pass, only the back faces are rendered with the outline color, and the vertices are pushed outward along the normal direction by a small distance in view space, which makes the back-facing silhouette visible:

viewPos = viewPos + viewNormal * _Outline;

However, the vertex normal cannot be used directly for the expansion, because for concave models the expanded back faces may occlude the front faces. Therefore, before expanding the back-face vertices, we first set the z component of the view-space normal to a fixed value and renormalize the normal before expanding the vertex. The benefit is that the expanded back faces become flatter, which reduces the chance of occluding the front faces.

viewNormal.z = -0.5;
viewNormal = normalize(viewNormal);
viewPos = viewPos + viewNormal * _Outline;

20.1.2 adding highlights

To achieve the cartoon-style look of solid color blocks with clear boundaries on the model, we implement the Blinn-Phong model and modify it accordingly. The specular term was originally computed as:

float spec = pow(max(0, dot(normal, halfDir)), _Gloss);

In cartoon rendering, we instead compare the dot product of the normal and halfDir with a threshold: if it is below the threshold, the specular coefficient is 0; otherwise it is 1.

float spec = dot(worldNormal, worldHalfDir);
spec = step(threshold, spec);

We use the step function for the comparison. step takes two parameters: the first is the reference value and the second is the value to compare. It returns 1 if the second parameter is greater than or equal to the first, and 0 otherwise.

However, this hard cutoff produces aliasing at the boundary of the highlight, because the edge of the highlight is not a smooth gradient but an abrupt jump from 0 to 1. We therefore need to smooth a small region around the boundary.

float spec = dot(worldNormal, worldHalfDir);
spec = lerp(0, 1, smoothstep(-w, w, spec - threshold));

The meaning of the w parameter in smoothstep is: when spec - threshold is less than -w, the result is 0; when it is greater than w, the result is 1; otherwise it interpolates between 0 and 1. In this way the spec value transitions smoothly from 0 to 1 within the [-w, w] band around the highlight boundary, which achieves anti-aliasing. The value of w can be chosen by hand, but a better choice is to use the fwidth function to obtain an approximate derivative between neighboring pixels.
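As a hedged sketch tying the smoothstep code above to fwidth (the actual implementation below uses _SpecularScale instead of a generic threshold):

// Sketch: anti-aliased cartoon highlight. fwidth estimates how much spec changes
// between neighboring pixels, so the smoothing band roughly covers one pixel.
fixed spec = dot(worldNormal, worldHalfDir);
fixed w = fwidth(spec) * 2.0;   // approximate per-pixel derivative of spec
spec = lerp(0, 1, smoothstep(-w, w, spec - threshold));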

20.1.3 implementation

① Declare the attributes that need to be used

Properties{
    _Color ("Color Tint", Color) = (1,1,1,1)
    _MainTex ("Main Tex", 2D) = "white" {}
    _Ramp ("Ramp Texture", 2D) = "white" {}
    _Outline ("Outline", Range(0, 1)) = 0.1
    _OutlineColor ("Outline Color", Color) = (0,0,0,1)
    _Specular ("Specular", Color) = (1,1,1,1)
    _SpecularScale ("Specular Scale", Range(0, 0.1)) = 0.01
}

_Ramp is the ramp texture that controls the diffuse shading gradient, _Outline controls the outline width, _OutlineColor is the outline color, _Specular is the specular color, and _SpecularScale controls the threshold used to compute the specular highlight.

② Define the Pass used to render the outline. This Pass renders only the back-facing triangles.

Set the correct rendering state first

Pass{
    NAME "OUTLINE"
    Cull Front
}

The Cull command culls the front-facing triangles so that only the back faces are rendered.

③ Define the vertex and fragment shaders needed for the outline

v2f vert (a2v v) 
{
	v2f o;
			
	float4 pos = mul(UNITY_MATRIX_MV, v.vertex); 
	float3 normal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);  
	normal.z = -0.5;
	pos = pos + float4(normalize(normal), 0) * _Outline;
	o.pos = mul(UNITY_MATRIX_P, pos);
				
	return o;
}
			
float4 frag(v2f i) : SV_Target 
{ 
	return float4(_OutlineColor.rgb, 1);               
}

In the vertex shader we first transform the vertex and normal into view space, where the outline expansion works best. We then set the z component of the normal, renormalize it, and expand the vertex along the adjusted normal to obtain the expanded vertex position. Processing the normal this way helps prevent the expanded back faces from occluding the front faces. Finally, the vertex is transformed from view space to clip space.

The fragment shader does just one thing: it renders the entire back face with the outline color.

④ Next comes the Pass that contains the lighting model and renders the front of the model. Because the lighting model needs Unity to provide lighting and other information, we must set the Pass tags accordingly and add the corresponding compilation directives.

Pass{
    Tags {"LightMode" = "ForwardBase"}
    
    Cull Back
        
    CGPROGRAM
        
    #pragma vertex vert
    #pragma fragment frag
        
    #pragma multi_compile_fwdbase
    ......;
}

⑤ Define vertex shader

struct v2f{
    float4 pos : POSITION;
    float2 uv : TEXCOORD0;
    float3 worldNormal : TEXCOORD1;
    float3 worldPos : TEXCOORD2;
    SHADOW_COORDS(3);
};

v2f vert(a2v v){
    v2f o;
    
    o.pos = UnityObjectToClipPos( v.vertex);
	o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
	o.worldNormal  = UnityObjectToWorldNormal(v.normal);
	o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
				
	TRANSFER_SHADOW(o);
				
	return o;
}

⑥ The fragment shader contains the lighting model calculation:

float4 frag(v2f i) : SV_Target 
{ 
    //Normalize the direction vector required for later calculation
	fixed3 worldNormal = normalize(i.worldNormal);
	fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
	fixed3 worldViewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
	fixed3 worldHalfDir = normalize(worldLightDir + worldViewDir);
	
    //Samples the texture and calculates the reflection coefficient of the material
	fixed4 c = tex2D (_MainTex, i.uv);
	fixed3 albedo = c.rgb * _Color.rgb;
	
    //Calculate ambient lighting
	fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
			
    //Calculates the shadow value in the current world coordinates
	UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
				
    //Compute the diffuse coefficient using the half-Lambert lighting model
	fixed diff =  dot(worldNormal, worldLightDir);
	diff = (diff * 0.5 + 0.5) * atten;
    //Use the diffuse coefficient to sample the ramp texture _Ramp and compute the final diffuse lighting
	fixed3 diffuse = _LightColor0.rgb * albedo * tex2D(_Ramp, float2(diff, diff)).rgb;
				
    //Calculate the specular reflection and use the fwidth function to accurately calculate the w value.
	fixed spec = dot(worldNormal, worldHalfDir);
	fixed w = fwidth(spec) * 2.0;
	fixed3 specular = _Specular.rgb * lerp(0, 1, smoothstep(-w, w, spec + _SpecularScale - 1)) * step(0.0001, _SpecularScale); // step(0.0001, _SpecularScale) removes the specular highlight entirely when _SpecularScale is 0
				
	return fixed4(ambient + diffuse + specular, 1.0);
}

20.2 sketch style rendering

The real-time sketch-style rendering is achieved with sketch textures generated in advance; these textures form a tonal art map (TAM).

This requires rendering with multiple sketch textures of different stroke densities, or generating textures with multiple levels of fading strokes.

① Declare the required attributes (here we directly choose to use six sketch textures to achieve sketch style rendering)

Properties{
    _Color ("Color Tint", Color) = (1,1,1,1)
    _TileFactor ("Tile Factor", Float) = 1
    _Outline ("Outline", Range(0, 1)) = 0.1
    _Hatch0 ("Hatch 0", 2D) = "white" {}
    _Hatch1 ("Hatch 1", 2D) = "white" {}
    _Hatch2 ("Hatch 2", 2D) = "white" {}
    _Hatch3 ("Hatch 3", 2D) = "white" {}
    _Hatch4 ("Hatch 4", 2D) = "white" {}
    _Hatch5 ("Hatch 5", 2D) = "white" {}
}

_Color controls the model color, _TileFactor is the texture tiling factor (the larger it is, the denser the sketch strokes on the model), and _Hatch0 to _Hatch5 are the six sketch textures.

② The OUTLINE Pass defined in 20.1 is also reused here to render the outline

SubShader {
    Tags {"RenderType" = "Opaque" "Queue" = "Geometry"}
    UsePass ".../.../.../Toon Shading/OUTLINE"
}

Note that Unity will convert all Pass names into uppercase, so we need to use uppercase Pass names in UsePass.

③ Defines the Pass where the lighting model is located

Pass{
    Tags {"LightMode" = "ForwardBase"}
    
    CGPROGRAM
        
    #pragma vertex vert
    #pragma fragment frag
        
    #pragma multi_compile_fwdbase
}

④ Calculate the blending weights of six textures in the vertex shader, and add corresponding variables in the v2f structure

struct v2f{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    fixed3 hatchWeights0 : TEXCOORD1;
    fixed3 hatchWeights1 : TEXCOORD2;
    float3 worldPos : TEXCOORD3;
    SHADOW_COORDS(4);
};

We declared six textures, so we need six blend weights, which we store in two fixed3 variables (hatchWeights0 and hatchWeights1). We also declare worldPos and use the SHADOW_COORDS macro to declare the shadow texture sampling coordinates so the model can receive shadows.

⑤ The key part: the vertex shader

v2f vert(a2v v){
    v2f o;
    
    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.texcoord.xy * _TileFactor;
    
    fixed3 worldLightDir = normalize(WorldSpaceLightDir(v.vertex));
    fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
    fixed diff = saturate(dot(worldLightDir, worldNormal));
    
    o.hatchWeights0 = fixed3(0,0,0);
    o.hatchWeights1 = fixed3(0,0,0);
    
    float hatchFactor = diff * 7.0;
	if (hatchFactor > 6.0) {
		// Pure white, do nothing
	} else if (hatchFactor > 5.0) {
		o.hatchWeights0.x = hatchFactor - 5.0;
	} else if (hatchFactor > 4.0) {
		o.hatchWeights0.x = hatchFactor - 4.0;
		o.hatchWeights0.y = 1.0 - o.hatchWeights0.x;
	} else if (hatchFactor > 3.0) {
		o.hatchWeights0.y = hatchFactor - 3.0;
		o.hatchWeights0.z = 1.0 - o.hatchWeights0.y;
	} else if (hatchFactor > 2.0) {
		o.hatchWeights0.z = hatchFactor - 2.0;
		o.hatchWeights1.x = 1.0 - o.hatchWeights0.z;
	} else if (hatchFactor > 1.0) {
		o.hatchWeights1.x = hatchFactor - 1.0;
		o.hatchWeights1.y = 1.0 - o.hatchWeights1.x;
	} else {
		o.hatchWeights1.y = hatchFactor;
		o.hatchWeights1.z = 1.0 - o.hatchWeights1.y;
	}
    
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    
    TRANSFER_SHADOW(o);
    
    return o;
}

We use _TileFactor to compute the texture sampling coordinates, then compute per-vertex lighting to obtain the diffuse coefficient diff. The weights are first initialized to 0, diff is scaled into the range [0, 7] to get hatchFactor, and that range is divided evenly into 7 sub-intervals; the sub-interval that hatchFactor falls into determines the blend weights of the corresponding textures. Finally, we compute the world-space vertex position and use the TRANSFER_SHADOW macro to compute the shadow texture sampling coordinates.

⑥ The fragment shader

fixed4 frag(v2f i) : SV_Target 
{			
	fixed4 hatchTex0 = tex2D(_Hatch0, i.uv) * i.hatchWeights0.x;
	fixed4 hatchTex1 = tex2D(_Hatch1, i.uv) * i.hatchWeights0.y;
	fixed4 hatchTex2 = tex2D(_Hatch2, i.uv) * i.hatchWeights0.z;
	fixed4 hatchTex3 = tex2D(_Hatch3, i.uv) * i.hatchWeights1.x;
	fixed4 hatchTex4 = tex2D(_Hatch4, i.uv) * i.hatchWeights1.y;
	fixed4 hatchTex5 = tex2D(_Hatch5, i.uv) * i.hatchWeights1.z;
	fixed4 whiteColor = fixed4(1, 1, 1, 1) * (1 - i.hatchWeights0.x - i.hatchWeights0.y - i.hatchWeights0.z - i.hatchWeights1.x - i.hatchWeights1.y - i.hatchWeights1.z);
				
	fixed4 hatchColor = hatchTex0 + hatchTex1 + hatchTex2 + hatchTex3 + hatchTex4 + hatchTex5 + whiteColor;
				
	UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
								
	return fixed4(hatchColor.rgb * _Color.rgb * atten, 1.0);
}

After obtaining the blend weights of the six textures in the vertex shader, the fragment shader samples each texture and multiplies it by its weight to get that texture's contribution. The contribution of pure white is also included; it is obtained by subtracting the sum of the six weights from 1. The final color is the sum of all these contributions, multiplied by the shadow/attenuation value and the model color.

21 noise

Noise has many uses, such as dissolve effects, flames, water ripples, and so on.

21.1 ablation effect

In games you often see objects such as maps burning away; this is the dissolve (ablation) effect, and it is also used for character deaths and similar situations.

The principle of the effect is: noise texture + alpha test. We compare the noise texture sample against a threshold that controls the degree of dissolution; if the sample is below the threshold, the clip function discards the corresponding fragment, and those discarded regions are the burned-away areas. The charred look at the edge of the hole is obtained by blending two burn colors, sharpening the result with the pow function, and mixing it with the original texture color.

① Declare each attribute

Properties{
    _BurnAmount ("Burn Amount", Range(0.0, 1.0)) = 0.0
    _LineWidth ("Burn Line Width", Range(0.0, 2.0)) = 0.1
    _MainTex ("Base (RGB)", 2D) = "White"{}
    _BumpMap ("Normal Map", 2D) = "bump" {}
    _BurnFirstColor("Burn First Color", Color) = (1, 0, 0, 1)
	_BurnSecondColor("Burn Second Color", Color) = (1, 0, 0, 1)
	_BurnMap("Burn Map", 2D) = "white"{}
}

_BurnAmount controls the degree of dissolution: at 0 the object looks completely normal, and at 1 it is completely burned away. _LineWidth controls the width of the line used to simulate the burning effect; the larger it is, the wider the glowing edge spreads. _MainTex and _BumpMap are the object's original diffuse texture and normal map. _BurnFirstColor and _BurnSecondColor are the two colors of the burning edge, and _BurnMap is the key noise texture.

② Define the Pass required for ablation

Pass{
    Tags {"LightMode" = "ForwardBase"}
    
    Cull Off
    
    CGPROGRAM
        
    #pragma vertex vert
    #pragma fragment frag
        
    #include "Lighting.cginc"
    #include "AutoLight.cginc"
        
    #pragma multi_compile_fwdbase
}

The key point is using the Cull command to turn off face culling, so that both the front and back of the model are rendered. This is because dissolution exposes the model's interior; if only the front faces were rendered, the result would look wrong.

③ Define vertex shader

struct a2v{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 tangent : TANGENT;
    float4 texcoord : TEXCOORD0;
};

struct v2f{
    float4 pos : SV_POSITION;
    float2 uvMainTex : TEXCOORD0;
    float2 uvBumpMap : TEXCOORD1;
    float2 uvBurnMap : TEXCOORD2;
    float3 lightDir : TEXCOORD3;
    float3 worldPos : TEXCOORD4;
    SHADOW_COORDS(5);
};

v2f vert(a2v v){
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    
    o.uvMainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
    o.uvBumpMap = TRANSFORM_TEX(v.texcoord, _BumpMap);
    o.uvBurnMap = TRANSFORM_TEX(v.texcoord, _BurnMap);
    
    TANGENT_SPACE_ROTATION;
    o.lightDir = mul(rotation, ObjSpaceLightDir(v.vertex)).xyz;
    
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    
    TRANSFER_SHADOW(o);
    
    return o;
}

We use TRANSFORM_TEX to compute the sampling coordinates of the three textures, then transform the light direction from object space into tangent space. Finally, to obtain shadow information, we compute the world-space vertex position and use TRANSFER_SHADOW to compute the shadow texture sampling coordinates.

④ Simulate the dissolve effect in the fragment shader

fixed4 frag(v2f i) : SV_Target{
    fixed3 burn = tex2D(_BurnMap, i.uvBurnMap).rgb;
    
    clip(burn.r - _BurnAmount);
    
    float3 tangentLightDir = normalize(i.lightDir);
    fixed3 tangentNormal = UnpackNormal(tex2D(_BumpMap, i.uvBumpMap));
    
    fixed3 albedo = tex2D(_MainTex, i.uvMainTex).rgb;
    
    fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
    
    fixed3 diffuse = _LightColor0.rgb * albedo * saturate(dot(tangentNormal, tangentLightDir));
    
    fixed t = 1 - smoothstep(0.0, _LineWidth, burn.r - _BurnAmount);
    fixed3 burnColor = lerp(_BurnFirstColor, _BurnSecondColor, t);
    burnColor = pow(burnColor, 5);
    
    UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
    fixed3 finalColor = lerp(ambient + diffuse * atten, burnColor, t * step(0.001, _BurnAmount));
    
    return fixed4(finalColor, 1);
}

First, the noise texture is sampled and the result is compared with _BurnAmount, which controls the degree of dissolution: their difference is passed to clip, and when the result is less than 0 the fragment is discarded and not displayed. Then the material albedo is obtained from the diffuse texture, the ambient term is computed, and from these the diffuse lighting is computed. Next comes the charred color: we want to simulate a burn-colored band of width _LineWidth around the dissolve boundary. The first step reuses the earlier boundary-detection idea and uses smoothstep to compute a blend coefficient t: when t is 1 the fragment lies on the dissolve boundary, and when t is 0 it shows the normal model color. The value fed to smoothstep is burn.r - _BurnAmount, which measures how close the fragment is to being burned away. This t is then used to blend the two flame colors, _BurnFirstColor and _BurnSecondColor; to make the scorch marks more convincing, the result is processed with the pow function. Finally, t is used to blend the normal lighting with the charred color, and a step function ensures that no burn effect is shown when _BurnAmount is 0; the blended color finalColor is returned.

⑤ In addition to the Pass above, we also define a shadow-casting Pass, to avoid the situation where the clipped-away regions still cast shadows as if the object were whole.

Pass{
    Tags {"LightMode" = "ShadowCaster"}
    
    CGPROGRAM
        
    #pragma vertex vert
	#pragma fragment frag
			
	#pragma multi_compile_shadowcaster
}

Attributes to declare:

fixed _BurnAmount;
sampler2D _BurnMap;
float4 _BurnMap_ST;

Vertex shader and fragment shader:

struct v2f 
{
	V2F_SHADOW_CASTER;
	float2 uvBurnMap : TEXCOORD1;
};
			
v2f vert(appdata_base v) 
{
	v2f o;
			
	TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
				
	o.uvBurnMap = TRANSFORM_TEX(v.texcoord, _BurnMap);
				
	return o;
}
			
fixed4 frag(v2f i) : SV_Target 
{
	fixed3 burn = tex2D(_BurnMap, i.uvBurnMap).rgb;
				
	clip(burn.r - _BurnAmount);
				
	SHADOW_CASTER_FRAGMENT(i)
}

The key point of shadow casting is that we must clip fragments or animate vertices in the same way as the normal Pass, so that the shadow matches the normally rendered object. In the custom shadow-casting Pass we use Unity's built-in macros V2F_SHADOW_CASTER, TRANSFER_SHADOW_CASTER_NORMALOFFSET and SHADOW_CASTER_FRAGMENT to compute the variables required for shadow casting. In the code above we first use V2F_SHADOW_CASTER inside the v2f struct to declare the variables shadow casting needs, then compute the noise texture sampling coordinates uvBurnMap in the vertex shader, then in the fragment shader sample the noise texture and clip fragments in the same way as before, and finally use SHADOW_CASTER_FRAGMENT to let Unity complete the shadow casting and write the results into the depth map and shadow map.

With these three built-in macros we can easily customize the shadow-casting Pass we need. The inconvenience is that the macros rely on specific input variable names: for example, TRANSFER_SHADOW_CASTER_NORMALOFFSET expects the input structure to be called v, and v must contain the vertex position v.vertex and the vertex normal v.normal. You can use the built-in appdata_base structure directly. If you need vertex animation, modify v.vertex in the vertex shader before passing it to TRANSFER_SHADOW_CASTER_NORMALOFFSET.
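For example, a shadow-casting Pass for a vertex-animated object might look like the following sketch; the _Magnitude and _Frequency properties are assumed here purely for illustration, and the Pass setup (ShadowCaster LightMode tag, multi_compile_shadowcaster) is the same as above.

// Hedged sketch (not part of the dissolve shader): a shadow-caster Pass whose
// vertices are animated, so the cast shadow follows the animated surface.
struct v2f
{
	V2F_SHADOW_CASTER;
};

v2f vert(appdata_base v)
{
	v2f o;
	// Apply the same vertex animation as the normal rendering Pass,
	// *before* the macro uses v.vertex
	v.vertex.x += sin(_Time.y * _Frequency + v.vertex.z) * _Magnitude;
	TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
	return o;
}

fixed4 frag(v2f i) : SV_Target
{
	SHADOW_CASTER_FRAGMENT(i)
}

Because v.vertex is displaced before the macro is invoked, the cast shadow matches the deformed surface rather than the undeformed mesh.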

21.2 water wave effect

To simulate a real-time water surface we again use a noise texture, this time as a height map that continuously modifies the water surface normals. To make the water appear to flow, a time-dependent variable is used when sampling the noise texture. After obtaining the normals, reflection and refraction are computed to produce the final rippling water effect; the two are blended using Fresnel reflection.

We use a cubemap as the environment texture, use GrabPass to obtain a render texture of the current screen, offset the fragment's screen coordinates with the tangent-space normal, and sample the grabbed screen texture with the offset coordinates to simulate refraction. To blend the reflection and refraction colors, we use the Fresnel coefficient to determine the blend factor dynamically:
fresnel = pow(1 - saturate(v · n), 4)
where v and n are the view direction and the normal direction. The smaller the angle between them, the smaller the fresnel value, and thus the weaker the reflection and the stronger the refraction. The Fresnel coefficient is also commonly used for rim lighting.

① Declare the attributes that need to be used

Properties {
    _Color ("Main Color", Color) = (0, 0.15, 0.115, 1)
    _MainTex ("Base (RGB)", 2D) = "White" {}
    _WaveMap ("Wave Map", 2D) = "bump" {}
    _CubeMap ("Environment CubeMap", Cube) = "_Skybox" {}
    _WaveXSpeed ("Wave Horizontal Speed", Range(-0.1, 0.1)) = 0.01
    _WaveYSpeed ("Wave Vertical Speed", Range(-0.1, 0.1)) = 0.01
    _Distortion ("Distortion", Range(0, 100)) = 10
}

_Color controls the water color, _MainTex is the water surface texture, _WaveMap is a normal map generated from a noise texture, _CubeMap is the cubemap used to simulate reflection, _Distortion controls the amount of distortion in the simulated refraction, and _WaveXSpeed and _WaveYSpeed control the scrolling speed of the normal map in the X and Y directions.

② Use GrabPass to get the screen image

SubShader{
    Tags {"Queue" = "Transparent" "RenderType" = "Qpaque"}
    GrabPass {"_RefractionTex"}
}

In the SubShader tags we set the render Queue to Transparent and the RenderType to Opaque. Setting the Queue to Transparent ensures that all opaque objects have already been rendered when this object is drawn; otherwise we could not see the scene through the water. Setting RenderType lets the object render correctly during shader replacement, which typically matters when the camera's depth and normal textures are needed. We then define a Pass that grabs the screen image via GrabPass; the string inside it determines the name of the texture in which the grabbed screen image is stored.

③ Define the variables needed by the water surface Pass:

fixed4 _Color;
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _WaveMap;
float4 _WaveMap_ST;
samplerCUBE _CubeMap;
fixed _WaveXSpeed;
fixed _WaveYSpeed;
float _Distortion;
sampler2D _RefractionTex;
float4 _RefractionTex_TexelSize;

_RefractionTex_TexelSize holds the texel size of the texture grabbed by GrabPass. We will use this variable when offsetting the screen-image sampling coordinates.

④ Define vertex shader

struct v2f {
    float4 pos : SV_POSITION;
    float4 scrPos : TEXCOORD0;
    float4 uv : TEXCOORD1;
    float4 TtoW0 : TEXCOORD2;
    float4 TtoW1 : TEXCOORD3;
    float4 TtoW2 : TEXCOORD4;
};

v2f vert (a2v v){
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    
    o.scrPos = ComputeGrabScreenPos(o.pos);
    
    o.uv.xy = TRANSFORM_TEX(v.texcoord, _MainTex);
    o.uv.zw = TRANSFORM_TEX(v.texcoord, _WaveMap);
    
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
    fixed3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
    fixed3 worldBinormal = cross(worldNormal, worldTangent) * v.tangent.w;
    
    o.TtoW0 = float4(worldTangent.x, worldBinormal.x, worldNormal.x, worldPos.x);
    o.TtoW1 = float4(worldTangent.y, worldBinormal.y, worldNormal.y, worldPos.y);
    o.TtoW2 = float4(worldTangent.z, worldBinormal.z, worldNormal.z, worldPos.z);   
    
    return o;
}

After the vertex transformation, ComputeGrabScreenPos is called to obtain the sampling coordinates of the grabbed screen image. We then compute the sampling coordinates of _MainTex and _WaveMap and store them in the xy and zw components of the float4 uv variable. Finally, the tangent-to-world transformation matrix is built as in earlier chapters, with the world position packed into the w components.

⑤ Define the fragment shader

fixed4 frag(v2f i) : SV_Target 
{
	float3 worldPos = float3(i.TtoW0.w, i.TtoW1.w, i.TtoW2.w);
	fixed3 viewDir = normalize(UnityWorldSpaceViewDir(worldPos));
	float2 speed = _Time.y * float2(_WaveXSpeed, _WaveYSpeed);
				
	//Then obtain normal information in tangent space
	fixed3 bump1 = UnpackNormal(tex2D(_WaveMap, i.uv.zw + speed)).rgb;
	fixed3 bump2 = UnpackNormal(tex2D(_WaveMap, i.uv.zw - speed)).rgb;
	fixed3 bump = normalize(bump1 + bump2);
				
	//Calculates the offset in tangent space
	float2 offset = bump.xy * _Distortion * _RefractionTex_TexelSize.xy;
	i.scrPos.xy = offset * i.scrPos.z + i.scrPos.xy;
	fixed3 refrCol = tex2D( _RefractionTex, i.scrPos.xy/i.scrPos.w).rgb;
				
	// Transfer normals to world space
	bump = normalize(half3(dot(i.TtoW0.xyz, bump), dot(i.TtoW1.xyz, bump), dot(i.TtoW2.xyz, bump)));
	fixed4 texColor = tex2D(_MainTex, i.uv.xy + speed);
	fixed3 reflDir = reflect(-viewDir, bump);
	fixed3 reflCol = texCUBE(_CubeMap, reflDir).rgb * texColor.rgb * _Color.rgb;
				
	fixed fresnel = pow(1 - saturate(dot(viewDir, bump)), 4);
	fixed3 finalColor = reflCol * fresnel + refrCol * (1 - fresnel);
				
	return fixed4(finalColor, 1);
}

First, the world position is reconstructed from the w components of the TtoW vectors and used to compute the fragment's view direction. The built-in _Time.y variable together with the _WaveXSpeed and _WaveYSpeed properties gives the current offset of the normal map, and the normal map is sampled twice with opposite offsets to simulate two crossing layers of water waves. The resulting tangent-space normal, together with _Distortion and _RefractionTex_TexelSize, is then used to offset the screen-image sampling coordinates to simulate refraction: the larger _Distortion is, the larger the offset, and the more distorted the objects behind the water appear. Multiplying the offset by the z value of the screen coordinate simulates stronger refraction at greater depth; alternatively, the offset could simply be added to the screen coordinates directly. Finally the screen coordinates are divided by w (the perspective division), and the grabbed screen texture _RefractionTex is sampled with them to obtain the simulated refraction color.

Finally, the normal is transformed from tangent space to world space, and the reflection of the view direction about the normal is computed. The cubemap is sampled with that reflection direction, and the result is multiplied by the main texture color (and _Color) to obtain the reflection color. The main texture is also scrolled over time to help simulate the water waves.

In order to mix refraction and reflection colors, we calculate the Fresnel coefficient.

21.3 use noise to achieve global fog effect

Earlier, we implemented a global fog effect based on screen post-processing and the depth texture: the depth texture is used to reconstruct each pixel's world-space position, a height-based formula computes the fog blend factor, and this factor is used to blend the fog color with the original screen color.

That was a uniform, height-based fog: the fog density is the same at any given height. Most of the time, however, we want to simulate non-uniform fog that drifts continuously.

This is achieved with a noise texture. The implementation is essentially the same as before, except that the noise contribution is added to the height-based density calculation in the fragment shader.

(1) C# script

① The script inherits from the PostEffectsBase base class used previously

② Declare the shader required for this effect and create the corresponding material:

public Shader fogShader;
private Material fogMaterial = null;

public Material material{
    get{
        fogMaterial = CheckShaderAndCreateMaterial(fogShader, fogMaterial);
        return fogMaterial;
    }
}

③ We need the camera's parameters, such as the clip plane distances and the FOV, as well as the camera's forward, up and right directions in world space. We therefore declare two variables to store the camera's Camera and Transform components.

private Camera myCamera;
public Camera camera{
    get{
        if (myCamera == null){
            myCamera = GetComponent<Camera>();
        }
        return myCamera;
    }
}

private Transform myCameraTransform;
public Transform cameraTransform{
    get{
        if (myCameraTransform == null){
            myCameraTransform = camera.transform;
        }
        return myCameraTransform;
    }
}

④ Define the parameters needed to simulate the fog effect

[Range (0.1f, 3.0f)]
public float fogDensity = 1.0f;

public Color fogColor = Color.white;

public float fogStart = 0.0f; 
public float fogEnd = 2.0f;

public Texture noiseTexture;

[Range (-0.5f, 0.5f)]
public float fogXSpeed = 0.1f;

[Range (-0.5f, 0.5f)]
public float fogYSpeed = 0.1f;

[Range (0.0f, 3.0f)]
public float noiseAmount = 1.0f;

fogDensity controls the fog density, fogColor controls the fog color, fogStart and fogEnd control the start and end heights of the fog, noiseTexture is the noise texture we will use, fogXSpeed and fogYSpeed simulate the drifting of the fog, and noiseAmount controls the amount of noise; when it is 0, no noise is applied and we get a purely height-based uniform fog.

⑤ The OnEnable function sets the corresponding state of the camera

void OnEnable(){
    camera.depthTextureMode |= DepthTextureMode.Depth;
}

⑥ OnRenderImage function

void OnRenderImage(RenderTexture src, RenderTexture dest){
    if (material != null){
        Matrix4x4 frustumCorners = Matrix4x4.identity;
        
        //Calculate the vectors corresponding to the four corners of the near clipping plane. See the global fog effect in screen post-processing for the specific mathematical theory
        float fov = camera.fieldOfView;
        float near = camera.nearClipPlane;
        float far = camera.farClipPlane;
        float aspect = camera.aspect;
        
        float halfHeight = near * Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
        Vector3 toRight = cameraTransform.right * halfHeight * aspect;
        Vector3 toTop = cameraTransform.up * halfHeight;
        
        Vector3 topLeft = cameraTransform.forward * near + toTop - toRight;
        float scale = topLeft.magnitude / near;
        
        topLeft.Normalize();
        topLeft *= scale;
        
		Vector3 topRight = cameraTransform.forward * near + toRight + toTop;
		topRight.Normalize();
		topRight *= scale;

		Vector3 bottomLeft = cameraTransform.forward * near - toTop - toRight;
		bottomLeft.Normalize();
		bottomLeft *= scale;

		Vector3 bottomRight = cameraTransform.forward * near + toRight - toTop;
		bottomRight.Normalize();
		bottomRight *= scale;
		
        frustumCorners.SetRow(0, bottomLeft);
		frustumCorners.SetRow(1, bottomRight);
		frustumCorners.SetRow(2, topRight);
		frustumCorners.SetRow(3, topLeft);
        
        //Pass the matrix storing the four corner vectors to the material
        material.SetMatrix("_FrustumCornersRay", frustumCorners);
        
        material.SetFloat("_FogDensity", fogDensity);
        material.SetColor("_FogColor", fogColor);
        material.SetFloat("_FogStart", fogStart);
        material.SetFloat("_FogEnd", fogEnd);
        
        material.SetTexture("_NoiseTex", NoiseTex);
        material.SetFloat("_FogXSpeed", FogXSpeed);
        material.SetFloat("_FogYSpeed", FogYSpeed);
        material.SetFloat("_NoiseAmount", NoiseAmount);
        
        Graphics.Blit(src, dest, material);
    }else{
        Graphics.Blit(src, dest);
    }
}

(2) Shader

① Declare each attribute

Properties{
    _MainTex ("Base (RGB)", 2D) = "white" {}
	_FogDensity ("Fog Density", Float) = 1.0
	_FogColor ("Fog Color", Color) = (1, 1, 1, 1)
	_FogStart ("Fog Start", Float) = 0.0
	_FogEnd ("Fog End", Float) = 1.0
	_NoiseTex ("Noise Texture", 2D) = "white" {}
	_FogXSpeed ("Fog Horizontal Speed", Float) = 0.1
	_FogYSpeed ("Fog Vertical Speed", Float) = 0.1
	_NoiseAmount ("Noise Amount", Float) = 1
}

② Use CGINCLUDE to organize the code

③ Declare variables based on attributes

float4x4 _FrustumCornersRay;
		
sampler2D _MainTex;
half4 _MainTex_TexelSize;
sampler2D _CameraDepthTexture;
half _FogDensity;
fixed4 _FogColor;
float _FogStart;
float _FogEnd;
sampler2D _NoiseTex;
half _FogXSpeed;
half _FogYSpeed;
half _NoiseAmount;

④ Since the vertex shader is the same as in the screen post-processing global fog effect, it is not repeated here; we focus on the fragment shader, which is different:

fixed4 frag(v2f i) : SV_Target 
{
	float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth));
	float3 worldPos = _WorldSpaceCameraPos + linearDepth * i.interpolatedRay.xyz;
			
	float2 speed = _Time.y * float2(_FogXSpeed, _FogYSpeed);
	float noise = (tex2D(_NoiseTex, i.uv + speed).r - 0.5) * _NoiseAmount;
					
	float fogDensity = (_FogEnd - worldPos.y) / (_FogEnd - _FogStart); 
	fogDensity = saturate(fogDensity * _FogDensity * (1 + noise));
			
	fixed4 finalColor = tex2D(_MainTex, i.uv);
	finalColor.rgb = lerp(finalColor.rgb, _FogColor.rgb, fogDensity);
		
	return finalColor;
}

First, the pixel's world-space position is reconstructed from the depth texture. Then the built-in _Time.y variable, together with the _FogXSpeed and _FogYSpeed properties, gives the current offset of the noise texture, which is sampled to obtain the noise value. The noise value is added into the fog density calculation to obtain the noise-modulated blend factor fogDensity, and this factor is used to blend the fog color with the original color.

⑤ Defines the Pass required for fog rendering

Pass {          	
	CGPROGRAM  
			
	#pragma vertex vert  
	#pragma fragment frag  
			  
	ENDCG
}

21.4 about noise texture

What is a noise texture? It can be regarded as a procedural texture generated by the computer with an appropriate algorithm. Perlin noise and Worley noise are the two most commonly used types: Perlin noise produces more natural-looking noise, while Worley noise is usually used to simulate cellular or porous patterns such as stone, water, or paper. Like normal maps and other textures, noise textures can also be produced in tools such as Photoshop, but if you want finer control over the generated noise, you need to understand how these two kinds of noise are implemented and write the generation yourself.
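As a minimal illustration of the idea (this is simple value noise, a cruder relative of Perlin noise, not Perlin or Worley themselves), a 2D noise function can be written directly in shader code roughly like this sketch:

// Hedged sketch: hash-based 2D value noise. It hashes the four corners of the
// lattice cell containing p and interpolates between them with a smooth curve.
float hash21(float2 p)
{
    // Pseudo-random value in [0,1) derived from the lattice point p
    return frac(sin(dot(p, float2(127.1, 311.7))) * 43758.5453);
}

float valueNoise(float2 p)
{
    float2 i = floor(p);                  // lattice cell
    float2 f = frac(p);                   // position inside the cell
    float2 u = f * f * (3.0 - 2.0 * f);   // smoothstep-style interpolation weights

    float a = hash21(i);
    float b = hash21(i + float2(1.0, 0.0));
    float c = hash21(i + float2(0.0, 1.0));
    float d = hash21(i + float2(1.0, 1.0));

    return lerp(lerp(a, b, u.x), lerp(c, d, u.x), u.y);
}

Summing several octaves of such a function at different frequencies and amplitudes (fractal Brownian motion) produces the cloud-like patterns that are typically baked into noise textures.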
