Android OpenGL ES - inverse, exposure, contrast, saturation, hue filter

With the groundwork laid in the previous posts, we should now have a basic understanding of OpenGL, so it's time to put it into practice. It's here, it's here, the real filters are finally here!

Let's put the renderings up front.

Learning filters is a step-by-step process. In this chapter we'll cover simple filters first, which should also demystify camera filters for readers.

OpenGL ES - simple filter

After the previous posts, we should be able to implement the filters shown in the renderings above. I have also refactored the code from the earlier project; the class diagram is shown below. Please see the link at the end of the article for the complete code.

The filter code that follows focuses only on the core shader code, because the GL environment setup was explained in great detail in the previous posts. If any readers are unfamiliar with it, please review those posts first.

Default filter

vertex shader

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main() {
   gl_Position = position;
   textureCoordinate = inputTextureCoordinate.xy;
}

fragment shader

precision mediump float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}

The default filter, as mentioned earlier, simply samples the texture with the texture2D function; in other words, OpenGL just displays the image as-is.

Inversion

The definition of image inversion is easy to understand. We already know that in GL a color is represented by R, G, B and A components, each ranging from 0.0 to 1.0. If the original color is color = vec4(r, g, b, a), then the inverted color is
invert_color = vec4((1.0 - color.rgb), color.a);

So the inversion fragment shader naturally looks like this:

fragment shader

precision mediump float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    // textureColor.w is the texture's alpha (transparency). We leave it out of
    // the inversion: the source alpha is 1.0 (fully opaque), and inverting it
    // would give 0.0 (fully transparent).
    gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
}
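To make the math concrete, here is a minimal Python sketch of the same per-pixel inversion (purely illustrative; the real work happens in the shader), with components as floats in the 0.0 to 1.0 range:

```python
# Illustrative CPU-side version of the inversion filter.
# Components are floats in [0.0, 1.0], matching GLSL conventions.
def invert(color):
    r, g, b, a = color
    # Invert RGB, leave alpha untouched (inverting alpha would make
    # an opaque pixel fully transparent).
    return (1.0 - r, 1.0 - g, 1.0 - b, a)

# Pure red inverts to cyan; alpha stays 1.0.
cyan = invert((1.0, 0.0, 0.0, 1.0))
```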

The inversion effect looks like this:

Brightness

PS
Brightness is a physical quantity describing the human eye's perception of the intensity of light emitted by a luminous body or reflected from an illuminated surface. In everyday language, brightness and light intensity are often confused. In short, if photographs of any two surfaces come out equally bright, or the two surfaces look equally bright to the eye, then they have the same brightness.

Brightness (lightness) reflects how light or dark a color is. Together with hue (H) and saturation (S), it forms the HSL color space. To adjust brightness, we simply add the same offset to each of the R, G and B components.

fragment shader

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float brightness;
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
}

Here we define a uniform lowp float brightness, and we modify this value dynamically by dragging a SeekBar in the UI. Remember how to assign values to uniform variables in GLSL: the glUniform* family of functions.
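As a sanity check, the brightness math can be sketched in plain Python (illustrative only). Note that the GPU clamps gl_FragColor components to [0, 1] when writing to a standard framebuffer, which the sketch reproduces with an explicit clamp:

```python
# Illustrative CPU-side version of the brightness filter.
def adjust_brightness(rgb, brightness):
    # Add the same offset to R, G and B, then clamp to [0, 1]
    # (the GPU clamps gl_FragColor the same way on output).
    return tuple(min(max(c + brightness, 0.0), 1.0) for c in rgb)

brighter = adjust_brightness((0.5, 0.5, 0.5), 0.25)
```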
Here is the rendering:

Exposure

PS
Exposure works on basically the same principle as brightness. Brightness is a uniform linear increase of all color values, while exposure is an exponential scaling of the original color values (reds get redder, greens greener, blues bluer, and white light brighter).

fragment shader

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float exposure;
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = vec4(textureColor.rgb * pow(2.0, exposure), textureColor.w); // rgb * 2 ^ exposure
}

Here, the GLSL built-in function pow performs the exponentiation.
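The same scaling in a short, illustrative Python sketch (with output clamping, since the framebuffer clamps to [0, 1]):

```python
# Illustrative CPU-side version of the exposure filter: rgb * 2^exposure.
def adjust_exposure(rgb, exposure):
    factor = 2.0 ** exposure
    return tuple(min(c * factor, 1.0) for c in rgb)

# exposure = 0.0 leaves the image unchanged (factor = 1.0);
# each +1.0 stop doubles the color values.
doubled = adjust_exposure((0.25, 0.25, 0.25), 1.0)
```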

Contrast

Having covered brightness and exposure, let's look at the slightly more complex contrast.

PS
Contrast is the ratio between black and white, that is, the gradient from black to white. The larger the ratio, the more gradations there are from black to white, and the richer the color reproduction.

Contrast matters a great deal to the visual effect. Generally speaking, the greater the contrast, the sharper the image and the more vivid the colors; with low contrast, the whole picture looks gray.

In short, contrast amplifies the difference between each pixel's color and a middle value: it makes bright colors brighter and dull colors darker.

Here is a simple linear contrast algorithm, using 0.5 as the midpoint:

result = (color - midpoint) * contrast + midpoint
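A minimal Python sketch of this formula (illustrative only):

```python
# Illustrative CPU-side version of the linear contrast filter.
MIDPOINT = 0.5

def adjust_contrast(rgb, contrast):
    # Scale each channel's distance from the midpoint;
    # contrast = 1.0 is the identity, > 1.0 increases contrast.
    return tuple((c - MIDPOINT) * contrast + MIDPOINT for c in rgb)

punchier = adjust_contrast((0.75, 0.25, 0.5), 2.0)
```

As with brightness, the GPU clamps the final gl_FragColor to [0, 1], so very high contrast values saturate toward pure black and white.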

fragment shader

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float m_contrast;
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = vec4(((textureColor.rgb - vec3(0.5)) * m_contrast + vec3(0.5)), textureColor.w);
}

Saturation

PS
Saturation refers to the vividness of a color, also known as its purity. Saturation depends on the ratio between the chromatic component and the achromatic component (gray) in the color: the larger the chromatic component, the higher the saturation; the larger the achromatic component, the lower the saturation. Pure colors such as bright red and bright green are highly saturated. Colors mixed with white, gray or other hues are unsaturated, such as mauve, pink or yellowish brown. Completely unsaturated colors have no hue at all, such as the various grays between black and white.

Summary of the idea: result = x * color + y * gray, where x + y = 1.

The gray value is computed with the luma formula: gray = R * 0.2125 + G * 0.7154 + B * 0.0721
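A short illustrative Python sketch of the formula, with x = saturation and y = 1 - saturation:

```python
# Illustrative CPU-side version of the saturation filter.
# Luma weights matching the shader's luminanceWeighting constant.
WEIGHTS = (0.2125, 0.7154, 0.0721)

def adjust_saturation(rgb, saturation):
    luminance = sum(c * w for c, w in zip(rgb, WEIGHTS))
    # mix(gray, color, s) = gray * (1 - s) + color * s:
    # s = 0.0 gives pure grayscale, s = 1.0 leaves the color unchanged.
    return tuple(luminance * (1.0 - saturation) + c * saturation for c in rgb)

grayscale = adjust_saturation((1.0, 0.0, 0.0), 0.0)
```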

fragment shader

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
const mediump vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);
uniform lowp float saturation;
void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float luminance = dot(textureColor.rgb, luminanceWeighting);
    lowp vec3 greyScaleColor = vec3(luminance);
    gl_FragColor = vec4(mix(greyScaleColor, textureColor.rgb, saturation), textureColor.w);
     // GLSL built-in function mix(x,y,a) = x*(1-a)+y*a, which just meets the formula definition of saturation.
}

Hue

Finally, hue. Adjusting an image's tone is a step beyond the introductory techniques above; there is a lot of deep theory here that takes time to absorb. Let's first understand what hue is and how it is defined.

PS
Hue does not refer to the property of a single color, but to the overall evaluation of a picture. Although a painting may use many colors, it generally has one overall tone: bluish or reddish, warm or cool, and so on.

Color model

Before tackling hue, let's first cover color models. There are three color models commonly used in digital image processing: RGB, HSI and CMYK (note: these are color models, not file formats).

(1) The most commonly used RGB color model.

RGB is a color space defined according to the colors recognized by the human eye and can represent most colors. It is the most basic and most commonly used hardware-oriented color space in image processing, and it is an additive system of mixed light.

As can be seen, the RGB color model uses a point in three-dimensional space to represent a color. Each point has three components, representing the brightness values of red, green and blue, limited to [0, 1]. In the RGB cube, the origin corresponds to black, whose three components are all 0; the vertex farthest from the origin corresponds to white, whose three components are all 1. The grays between black and white lie on the line connecting these two points, called the gray line. The remaining points of the cube correspond to the other colors: the three primaries red, green and blue, and their mixtures yellow, magenta and cyan.

(2) HSI color model, used in visual perception and communication

The HSI color space starts from the human visual system and describes color by hue (H), saturation or chroma (S), and intensity or brightness (I).

H -- the phase angle of the color. Red, green and blue are separated by 120 degrees; complementary colors differ by 180 degrees. This is the category of the color.
S -- the ratio of the purity of the color to its maximum purity, range [0, 1]; that is, the depth of the color.
I -- the brightness of the color, range [0, 1]. The human eye is very sensitive to brightness!

As can be seen, the HSI and RGB color spaces are just different representations of the same physical quantity, so there is a conversion between them: in HSI, hue represents the category of the color, saturation is inversely proportional to the amount of white light mixed into the color (representing the ratio of gray to hue), and intensity is the relative brightness of the color.

(3) CMYK model, the color mode for printed matter, which relies on reflection

CMYK is a color mode that relies on reflection. How do we read a newspaper? Sunlight or lamplight hits the newspaper and is reflected into our eyes, and that is how we see the content. It needs an external light source; you cannot read a newspaper in a dark room. Images displayed on a screen are expressed in RGB, while images seen on printed matter are expressed in CMYK. Most devices that deposit pigments on paper, such as color printers and copiers, take CMY data and perform the RGB-to-CMY conversion internally.

Cyan, magenta and yellow are the secondary colors of light and the primary colors of pigment. K is the last letter of "black"; the first letter is not used in order to avoid confusion with blue (B). Mixing the three primaries red, green and blue produces white, while mixing cyan, magenta and yellow produces black. In theory, the three CMY inks alone would suffice, but because current manufacturing processes cannot produce high-purity inks, the result of mixing CMY is actually a dark red, hence the separate black ink.

Hue describes the overall color effect. Of the three color models, HSI comes closest to the intuitive concepts of hue, saturation and brightness, while RGB is inconvenient for this calculation. Here it is recommended to convert to the YIQ color space for the computation.

fragment shader

precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform mediump float hueAdjust;
const highp vec4 kRGBToYPrime = vec4 (0.299, 0.587, 0.114, 0.0);
const highp vec4 kRGBToI = vec4 (0.595716, -0.274453, -0.321263, 0.0);
const highp vec4 kRGBToQ = vec4 (0.211456, -0.522591, 0.31135, 0.0);
const highp vec4 kYIQToR = vec4 (1.0, 0.9563, 0.6210, 0.0);
const highp vec4 kYIQToG = vec4 (1.0, -0.2721, -0.6474, 0.0);
const highp vec4 kYIQToB = vec4 (1.0, -1.1070, 1.7046, 0.0);
void main ()
{
     // Sample the input pixel
     highp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     // Convert to YIQ
     highp float YPrime = dot (color, kRGBToYPrime);
     highp float I = dot (color, kRGBToI);
     highp float Q = dot (color, kRGBToQ);
     // Calculate the hue and chroma
     highp float hue = atan (Q, I);
     highp float chroma = sqrt (I * I + Q * Q);
     // Apply the user's adjustment
     hue += (-hueAdjust);
     // Convert back to YIQ
     Q = chroma * sin (hue);
     I = chroma * cos (hue);
     // Convert back to RGB
     highp vec4 yIQ = vec4 (YPrime, I, Q, 0.0);
     color.r = dot (yIQ, kYIQToR);
     color.g = dot (yIQ, kYIQToG);
     color.b = dot (yIQ, kYIQToB);
     // Save the result
     gl_FragColor = color;
}
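To see what the shader is doing numerically, here is an illustrative Python sketch of the same YIQ hue rotation on a single RGB pixel (the matrix constants mirror those in the shader; this is not part of the Android code). Note that these YIQ matrices are only approximate inverses of each other, so a zero rotation reproduces the input color up to small rounding error:

```python
import math

# RGB -> YIQ and YIQ -> RGB weights, mirroring the shader constants.
RGB_TO_Y = (0.299, 0.587, 0.114)
RGB_TO_I = (0.595716, -0.274453, -0.321263)
RGB_TO_Q = (0.211456, -0.522591, 0.31135)
YIQ_TO_R = (1.0, 0.9563, 0.6210)
YIQ_TO_G = (1.0, -0.2721, -0.6474)
YIQ_TO_B = (1.0, -1.1070, 1.7046)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_hue(rgb, hue_adjust):
    # Convert to YIQ: Y carries brightness, (I, Q) carry chroma.
    y = dot(rgb, RGB_TO_Y)
    i = dot(rgb, RGB_TO_I)
    q = dot(rgb, RGB_TO_Q)
    # Hue is the angle of the (I, Q) vector, chroma its length.
    hue = math.atan2(q, i) - hue_adjust
    chroma = math.hypot(i, q)
    # Rotate the chroma vector, then convert back to RGB.
    i, q = chroma * math.cos(hue), chroma * math.sin(hue)
    yiq = (y, i, q)
    return (dot(yiq, YIQ_TO_R), dot(yiq, YIQ_TO_G), dot(yiq, YIQ_TO_B))

shifted = rotate_hue((0.5, 0.2, 0.8), math.pi / 2)
```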

Summary

Github code

Posted by ChibiGuy on Mon, 09 May 2022 20:38:27 +0300