CesiumJS 2022^ Principles [5]: Shader-related encapsulation design

None of the interfaces covered in this article appear in the public API documentation. You need to clone the source code from GitHub and generate the documentation yourself with private classes included (use whichever package manager you prefer):

npm run generateDocumentation -- --private
yarn generateDocumentation -- --private
pnpm generateDocumentation -- --private

And of course, this article does not cover shader algorithms themselves.

1. Encapsulation of WebGL interface

Any WebGL 3D library with ambition encapsulates the native WebGL interface. CesiumJS has been built and battle-tested for ten years, and WebGL itself has been out for eleven years since its 2011 release; over such a span, continual small repairs are inevitable.

Moreover, CesiumJS is a geographic 3D framework written in JavaScript, and its source design has two characteristics:

  • object-oriented
  • modularization

As for the modularization strategy, CesiumJS switched from RequireJS to the native ES module format in version 1.63. WebGL is an imperative-style API driven by global state; recasting it in an object-oriented style requires encapsulation. Among general-purpose Web3D libraries, Three.js is the representative example of WebGL encapsulation.

Encapsulation has another advantage: more than a decade of changes to the underlying WebGL interface can be shielded behind it, while the API surface seen by upper layers stays essentially unchanged.

1.1. Buffer object encapsulation

CesiumJS wraps WebGLBuffer and the vertex array object (VAO, natively supported in WebGL 2.0 and available through an extension in 1.0) into the Buffer and VertexArray classes respectively.

The Buffer class is relatively simple and provides static creation methods in the simple factory pattern:

// Create and store vertex buffer objects
Buffer.createVertexBuffer = function (options) {
  // ...

  return new Buffer({
    context: options.context,
    bufferTarget: WebGLConstants.ARRAY_BUFFER,
    typedArray: options.typedArray,
    sizeInBytes: options.sizeInBytes,
    usage: options.usage,
  });
};

// Create vertex index buffer object
Buffer.createIndexBuffer = function (options) {
  // ...
};
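For reference, a minimal usage sketch (assuming a context object is already in hand, per Section 1.4, and that BufferUsage and IndexDatatype are imported from the corresponding private modules):

// Sketch: create an index buffer from 16-bit indices
const indexBuffer = Buffer.createIndexBuffer({
  context: context,
  typedArray: new Uint16Array([0, 1, 2]),
  usage: BufferUsage.STATIC_DRAW,
  indexDatatype: IndexDatatype.UNSIGNED_SHORT,
});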

When a Buffer object is instantiated, it creates a WebGLBuffer and uploads the typed array:

// Buffer constructor
const buffer = gl.createBuffer();
gl.bindBuffer(bufferTarget, buffer);
gl.bufferData(bufferTarget, hasArray ? typedArray : sizeInBytes, usage);
gl.bindBuffer(bufferTarget, null);

Besides these two static creation methods, there are also methods for copying buffer data, which I won't list one by one.

Note that the Buffer object does not retain the original vertex typed array; this saves JavaScript memory.

The VertexArray class encapsulates the vertex array object (VAO) data model from the OpenGL world. In WebGL, a VAO exists to avoid the performance cost of re-binding multiple vertex buffers to the global state object on every draw.

Creating a CesiumJS vertex array object is also very simple: just assemble Buffer objects according to the WebGL vertex attribute format:

const positionBuffer = Buffer.createVertexBuffer({
  context: context,
  sizeInBytes: 12,
  usage: BufferUsage.STATIC_DRAW
})
const normalBuffer = Buffer.createVertexBuffer({
  context: context,
  sizeInBytes: 12,
  usage: BufferUsage.STATIC_DRAW
})
const attributes = [
  {
    index: 0,
    vertexBuffer: positionBuffer,
    componentsPerAttribute: 3,
    componentDatatype: ComponentDatatype.FLOAT
  },
  {
    index: 1,
    vertexBuffer: normalBuffer,
    componentsPerAttribute: 3,
    componentDatatype: ComponentDatatype.FLOAT
  }
]
const va = new VertexArray({
  context: context,
  attributes: attributes
})

If you try the code above, you will find you cannot actually run it: there is no Context object to pass to Buffer or VertexArray. The Context class wraps low-level interfaces such as the WebGL rendering context; without a Context you cannot create raw objects like WebGLBuffer.

Therefore, Buffer and VertexArray are not standalone APIs and must be used together with other wrappers; at minimum they depend on a Context object. Section 1.4 introduces how the Context class encapsulates the underlying WebGL interface and how to get hold of a Context object.

You rarely need to create Buffer and VertexArray directly. Using these two interfaces implies your data is already in VBO layout; any human-readable data format must be converted to VBO layout before these two classes can be used directly. They come in handy when you need the command objects described in Section 2.

1.2. Texture and sampling parameter encapsulation

Texture is a very complex topic in WebGL.

Let's start with sampling parameters. WebGL 1.0 had no native sampler API; the WebGLSampler interface only arrived in WebGL 2.0. CesiumJS therefore encapsulates a simple Sampler class:

function Sampler(options) {
  // ...
  this._wrapS = wrapS;
  this._wrapT = wrapT;
  this._minificationFilter = minificationFilter;
  this._magnificationFilter = magnificationFilter;
  this._maximumAnisotropy = maximumAnisotropy;
}

Packing the WebGL 1.0 texture sampling parameters into one object is, in fact, not difficult.
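For instance, a minimal construction sketch (the enum objects below are the private TextureWrap / TextureMinificationFilter / TextureMagnificationFilter modules in the source):

const sampler = new Sampler({
  wrapS: TextureWrap.CLAMP_TO_EDGE,
  wrapT: TextureWrap.CLAMP_TO_EDGE,
  minificationFilter: TextureMinificationFilter.LINEAR,
  magnificationFilter: TextureMagnificationFilter.LINEAR,
});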

The Texture class is the wrapper around WebGLTexture. It encapsulates not only the texture object itself but also data upload; just hand it the image data.

Like Buffer and VertexArray, Texture also needs a context parameter.

import {
  Texture, Sampler,
} from 'cesium'

new Texture({
  context: context,
  width: 1920,
  height: 936,
  source: new Float32Array([/* ... */]), // RGBA image data with gray value of 0 ~ 255
  // Optional sampling parameters
  sampler: new Sampler()
})

You can find the code that creates imagery tile textures in the ImageryLayer.js module:

ImageryLayer.prototype._createTextureWebGL = function (context, imagery) {
  // ...
  return new Texture({
    context: context,
    source: image,
    pixelFormat: this._imageryProvider.hasAlphaChannel
      ? PixelFormat.RGBA
      : PixelFormat.RGB,
    sampler: sampler,
  });
}

In addition to creating textures, CesiumJS also provides texture copying utilities, such as creating a texture from a framebuffer object:

Texture.fromFramebuffer = function (/* ... */) { /* ... */ }

Texture.prototype.copyFromFramebuffer = function (/* ... */) { /* ... */ }

Or create a mipmap:

Texture.prototype.generateMipmap = function (/* ... */) { /* ... */ }
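A hedged usage sketch of the two together (the width and height are illustrative; when no framebuffer option is passed, fromFramebuffer reads from the currently bound default framebuffer):

// Sketch: copy the current framebuffer into a new texture, then build its mipmaps
const copied = Texture.fromFramebuffer({
  context: context,
  width: 256,
  height: 256,
});
copied.generateMipmap();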

1.3. Shader encapsulation

As we all know, WebGL's shader-related APIs are WebGLShader and WebGLProgram. A vertex shader plus a fragment shader constitute a shader program object. A frame is rendered in multiple passes, and before each draw is triggered a pass usually has to switch the shader program to achieve its particular effect.

CesiumJS rendering is far more complex than general-purpose Web3D, which means a large number of shader programs. With that many objects, management is needed: CesiumJS both wraps the underlying API and adds a caching mechanism.

CesiumJS uses the ShaderSource class to manage shader source text, the ShaderProgram class to manage WebGLProgram and WebGLShader, the ShaderCache class to cache ShaderPrograms, and the ShaderFunction, ShaderStruct and ShaderDestination classes to help ShaderSource process GLSL functions, struct members and macro definitions in the shader source.
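As a small taste of ShaderSource, here is a minimal sketch (commonGlsl and mainGlsl are assumed GLSL text fragments): the sources are concatenated in order, and each entry in defines becomes a #define line prepended to the combined source:

const source = new ShaderSource({
  defines: ["USE_NORMAL"],         // becomes #define USE_NORMAL
  sources: [commonGlsl, mainGlsl], // GLSL snippets, concatenated in order
});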

In addition, there is a ShaderBuilder class that assists in creating ShaderPrograms.

Like Buffer, VertexArray and Texture, this pile of private classes cannot be used in isolation; they are usually used together with the various command objects of Section 2.

Here is an example. It uses ShaderProgram's static fromCache method to create a shader program object; the method both creates the object and caches it in a ShaderCache. Those interested can read the caching code themselves.

const vertexShaderText = `attribute vec3 position;
 void main() {
   gl_Position = czm_projection * czm_view * czm_model * vec4(position, 1.0);
 }`
const fragmentShaderText = `uniform vec3 u_color;
 void main(){
   gl_FragColor = vec4(u_color, 1.0);
 }`
 
const program = ShaderProgram.fromCache({
  context: context,
  vertexShaderSource: vertexShaderText,
  fragmentShaderSource: fragmentShaderText,
  attributeLocations: {
    "position": 0,
  },
})

The complete example can be found in my previous article on drawing triangles with DrawCommand.

1.4. The Context object and render passes

The encapsulation of the low-level WebGL interface lives essentially in the Context class, whose core is the rendering context object (WebGLRenderingContext or WebGL2RenderingContext). The Context also carries some important rendering-related functions and members:

  • Flags for features that are native in WebGL 2.0 and available as extensions in WebGL 1.0
  • Support information for compressed textures
  • The UniformState object
  • The PassState object
  • The RenderState object
  • Functions that take part in frame rendering, such as draw and readPixels
  • Creation of PickIds for picking
  • Creation and validation of Framebuffer objects

Generally, the Context object can be reached through the FrameState object on the Scene.
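For example (a sketch; note that both accessors are private API):

const context = viewer.scene.context
// or, inside the render flow:
const context2 = viewer.scene.frameState.context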

WebGL rendering context objects expose many constants. CesiumJS gathers the constants on the rendering context, plus others it may need, into the object exported by the WebGLConstants.js module.

Another thing worth noting is the pass. WebGL has no pass API, yet switching shaders for multi-pass rendering within a single frame is very common; each round of state setup that triggers a draw is called a pass.

CesiumJS packages the rendering behavior of high-level 3D objects into three kinds of command objects, described in Section 2. These command objects carry priorities, which CesiumJS models as passes in the Pass.js module. There are currently ten priorities:

const Pass = {
  ENVIRONMENT: 0,
  COMPUTE: 1,
  GLOBE: 2,
  TERRAIN_CLASSIFICATION: 3,
  CESIUM_3D_TILE: 4,
  CESIUM_3D_TILE_CLASSIFICATION: 5,
  CESIUM_3D_TILE_CLASSIFICATION_IGNORE_SHOW: 6,
  OPAQUE: 7,
  TRANSLUCENT: 8,
  OVERLAY: 9,
  NUMBER_OF_PASSES: 10,
};

The NUMBER_OF_PASSES member indicates that there are currently ten priorities.
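A command object declares its priority through its pass member; a minimal sketch:

const command = new DrawCommand({
  // ...other options omitted...
  pass: Pass.TRANSLUCENT, // sorted into the translucent pass of the frame
});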

On the frame state object, there is also a passes member:

// FrameState constructor
this.passes = {
  render: false,
  pick: false,
  depth: false,
  postProcess: false,
  offscreen: false,
};

These five Boolean values indicate which kind of pass is currently being rendered.
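A typical pattern in the update methods of primitives is to branch on these flags before queuing commands; a hedged sketch (drawCommand and pickCommand are assumed to be already-prepared DrawCommands):

// Sketch: inside some primitive's update(frameState)
if (frameState.passes.render) {
  frameState.commandList.push(drawCommand);
}
if (frameState.passes.pick) {
  frameState.commandList.push(pickCommand);
}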

The pass value on a command object and the pass flags on the frame state object together constitute the concept of a "pass" in CesiumJS's large abstract model.

Frankly, I think this design leads to a large number of if checks and sorting steps within a single Scene frame, which feels somewhat redundant. Providing a pass encoder like WebGPU's might simplify the pass concept.

1.5. uniform encapsulation

By "uniforms" I mean WebGL uniform values; readers unfamiliar with them should first study the relevant WebGL concepts.

Within each frame, a large number of state values differ from the previous frame and must be pushed to the shaders promptly. CesiumJS encapsulates these frequently changing uniforms in the AutomaticUniforms object, each member of which is an instance of the AutomaticUniform class:

// AutomaticUniforms.js
function AutomaticUniform(options) {
  this._size = options.size;
  this._datatype = options.datatype;
  this.getValue = options.getValue;
}

Here is one member of the exported AutomaticUniforms object:

czm_projection: new AutomaticUniform({
  size: 1,
  datatype: WebGLConstants.FLOAT_MAT4,
  getValue: function (uniformState) {
    return uniformState.projection;
  },
})

This uniform is the camera's projection matrix. Its getValue function takes a UniformState parameter and fetches the value in real time from the uniform state object (of type UniformState).

The Context object has a read-only uniformState getter pointing at a private member. When Scene executes the command lists on the frame state, it calls Context's draw function, and the Context.js module goes on to call the shader program object's _setUniforms method:

shaderProgram._setUniforms(
  uniformMap,
  context._us,
  context.validateShaderProgram
);

This function sets both the custom uniformMap passed down from the command object and the automatic uniforms onto the WebGLProgram held by the ShaderProgram; that is, it completes uniform upload to the shaders.
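A custom uniformMap is simply an object whose members are parameterless functions returning the current value; automatic uniforms need no entries because they are resolved from UniformState. A minimal sketch (u_color is assumed to match a uniform vec4 u_color declared in the fragment shader):

const uniformMap = {
  u_color: function () {
    return Color.RED; // re-evaluated each time the command is drawn
  },
};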

1.6. Render container encapsulation

"Render containers" here mainly means framebuffer objects and renderbuffer objects.

CesiumJS encapsulates the renderbuffer object as the Renderbuffer class, a very thin wrapper over WebGLRenderbuffer. I won't go into detail, except for one thing worth mentioning: if MSAA is enabled, the multisample storage function is called:

// Renderbuffer.js

function Renderbuffer(options) {
  // ...
  const gl = context._gl;
  this._gl = gl;

  // ...
  this._renderbuffer = gl.createRenderbuffer();

  gl.bindRenderbuffer(gl.RENDERBUFFER, this._renderbuffer);
  if (numSamples > 1) {
    gl.renderbufferStorageMultisample(
      gl.RENDERBUFFER,
      numSamples,
      format,
      width,
      height
    );
  } else {
    gl.renderbufferStorage(gl.RENDERBUFFER, format, width, height);
  }
  gl.bindRenderbuffer(gl.RENDERBUFFER, null);
}

Next, let's talk about the encapsulation of frame buffer.

The ordinary framebuffer, i.e. the regular WebGLFramebuffer, is wrapped in the Framebuffer class. It has several array members that hold the color attachments, plus members for the textures or renderbuffers serving as depth-stencil attachments.

function Framebuffer(options) {
  const context = options.context;
  //>>includeStart('debug', pragmas.debug);
  Check.defined("options.context", context);
  //>>includeEnd('debug');

  const gl = context._gl;
  const maximumColorAttachments = ContextLimits.maximumColorAttachments;

  this._gl = gl;
  this._framebuffer = gl.createFramebuffer();

  this._colorTextures = [];
  this._colorRenderbuffers = [];
  this._activeColorAttachments = [];

  this._depthTexture = undefined;
  this._depthRenderbuffer = undefined;
  this._stencilRenderbuffer = undefined;
  this._depthStencilTexture = undefined;
  this._depthStencilRenderbuffer = undefined;
  
  // ...
}

It is also very simple to use: just call the bind-related methods on the prototype chain. CesiumJS supports MRT, so there is a corresponding bindDraw method:

Framebuffer.prototype.bindDraw = function () {
  const gl = this._gl;
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, this._framebuffer);
};
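Constructing a Framebuffer is mostly a matter of passing attachments; a hedged sketch (colorTexture is assumed to be a Texture created as in Section 1.2):

const framebuffer = new Framebuffer({
  context: context,
  colorTextures: [colorTexture],
  depthRenderbuffer: new Renderbuffer({
    context: context,
    format: RenderbufferFormat.DEPTH_COMPONENT16,
    width: colorTexture.width,
    height: colorTexture.height,
  }),
});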

For MSAA there is the MultisampleFramebuffer class. CesiumJS also has a FramebufferManager class to manage framebuffer objects; it is used in post-processing, OIT, picking, Scene's own framebuffer management, and other modules.

2. The three kinds of commands

CesiumJS does not draw geographic 3D objects directly. Instead, in the various update flows, each 3D object generates objects called "commands" and submits them to the relevant render queues on the frame state object.

These command objects hide the differences between the various high-level, "human-friendly" 3D data objects, so that Context can process the data resources (buffers, textures) and behavior (shaders) they carry easily and uniformly.

These command objects fall into three categories:

  • Draw commands: the DrawCommand class, responsible for rendered drawing
  • Clear commands: the ClearCommand class, responsible for clearing the drawing area
  • General compute commands: the ComputeCommand class, for GPU parallel computing via WebGL

Here is a brief explanation.

2.1. Draw commands

That is, the DrawCommand class, located in the Renderer/DrawCommand.js module.

Draw commands are generated by the various high-level 3D objects during each frame update of the Scene object and added to the frame state object for rendering.

I once wrote an article on drawing the simplest triangle with a DrawCommand. If you open my article list and can't find it, you may be reading a pirated copy of this article :)

In short, creating a DrawCommand requires both data (VertexArray, uniformMap, RenderState) and behavior (ShaderProgram), and most of the materials needed to create one require a Context object.
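A minimal construction sketch (va, program and uniformMap are assumed to be the objects built in the previous sections):

const command = new DrawCommand({
  vertexArray: va,
  shaderProgram: program,
  uniformMap: uniformMap,
  renderState: RenderState.fromCache(), // default render state, cached
  primitiveType: PrimitiveType.TRIANGLES,
  pass: Pass.OPAQUE,
  modelMatrix: Matrix4.IDENTITY,
});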

Its execution process is as follows:

DrawCommand.prototype.execute
  Context.prototype.draw
    fn beginDraw
    fn continueDraw

The article on how Scene renders a frame already covered how these commands are executed: starting from the updateAndExecuteCommands method on the Scene prototype, all the way to the executeCommand function, and finally into the execute methods of the various command objects.

In the simplified flow above, the beginDraw module function binds the framebuffer object and the render state, and binds the ShaderProgram to the WebGL global state:

function beginDraw(/* ... */) {
  // ...
  bindFramebuffer(context, framebuffer);
  applyRenderState(context, renderState, passState, false);
  shaderProgram._bind();
  // ...
}

Next, the continueDraw function sets (updates) the uniforms into the WebGL global state:

// function continueDraw
shaderProgram._setUniforms(
  uniformMap,
  context._us,
  context.validateShaderProgram
);

Then comes WebGL's conventional draw flow: bind the VertexArray, check whether an index buffer is used, and draw the vertex data in the corresponding branch:

// function continueDraw
va._bind();
const indexBuffer = va.indexBuffer;

if (defined(indexBuffer)) {
  // ...
} else {
  count = defaultValue(count, va.numberOfVertices);
  if (instanceCount === 0) {
    context._gl.drawArrays(primitiveType, offset, count);
  } else {
    context.glDrawArraysInstanced(
      primitiveType,
      offset,
      count,
      instanceCount
    );
  }
}

va._unBind();

The call va._bind() binds the various vertex data.

2.2. Clear commands

The clear command serves the same purpose as WebGL's clear method: empty the color and depth-stencil portions of the current framebuffer (or canvas) and fill them with specified values. It is encapsulated in the ClearCommand class.

The clear command is relatively simple, and it executes in the same flow as draw commands, in the executeCommand function of the Scene.js module:

// function executeCommand
if (command instanceof ClearCommand) {
  command.execute(context, passState);
  return;
}

You can see that once a clear command has executed, the function returns immediately instead of continuing into the draw command code below.

What executes next is the clear method on the Context prototype. It likewise binds the framebuffer and sets render state, and finally calls gl.clear to flush the configured clear color, depth and stencil values. The process is simple, so the source is not pasted here.
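Constructing one directly looks like this (a sketch; when the framebuffer option is omitted, the default framebuffer, i.e. the canvas, is cleared):

const clearCommand = new ClearCommand({
  color: new Color(0.0, 0.0, 0.0, 1.0), // clear color
  depth: 1.0,                           // clear depth value
});
clearCommand.execute(context);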

The Scene object owns several clear command members. During rendering, the updateAndClearFramebuffers function executes the color clear command, while the depth and stencil clear commands are executed by the executeCommands function:

// Scene.js module

function executeCommandsInViewport(/* ... */) {
  // ...
  if (firstViewport) {
    if (defined(backgroundColor)) {
      updateAndClearFramebuffers(scene, passState, backgroundColor);
    }
    // ...
  }
  executeCommands(scene, passState);  
}

function updateAndClearFramebuffers(scene, passState, clearColor) {
  // ...
  const clear = scene._clearColorCommand;
  Color.clone(clearColor, clear.color);
  clear.execute(context, passState);
  // ...
}

function executeCommands(scene, passState) {
  // ...
  const clearDepth = scene._depthClearCommand;
  const clearStencil = scene._stencilClearCommand;
  // ...
  
  for (let i = 0; i < numFrustums; ++i) {
    // ...
    clearDepth.execute(context, passState);
    if (context.stencilBuffer) {
      clearStencil.execute(context, passState);
    }
    // ...  Execute the branching logic of other commands
  }
}

Of course, Scene is not the only owner of ClearCommands; they appear elsewhere too. You can search the source globally for the keyword new ClearCommand.

2.3. Compute commands

Early WebGL 1.0 had poor support for general-purpose GPU computing. To make the GPU simulate ordinary parallel computation, you must encode the data into a texture, sample and process it through the render pipeline, output to a framebuffer object, and then read the result back with WebGL's pixel-reading interface.

Compute shaders for WebGL 2.0 were a long time coming (the WebGL 2.0 Compute effort ultimately stalled).

CesiumJS uses ComputeCommand to distinguish this kind of computation from the DrawCommands used for rendering tasks.

Compute commands are rarely used in the CesiumJS source. The classic example is imagery layer reprojection:

ImageryLayer.prototype._reprojectTexture = function (/**/) {
  // ...
  const computeCommand = new ComputeCommand({
    persists: true,
    owner: this,
    preExecute: function (command) {
      reprojectToGeographic(command, context, texture, imagery.rectangle);
    },
    postExecute: function (outputTexture) {
      imagery.texture = outputTexture;
      that._finalizeReprojectTexture(context, outputTexture);
      imagery.state = ImageryState.READY;
      imagery.releaseReference();
    },
    canceled: function () {
      imagery.state = ImageryState.TEXTURE_LOADED;
      imagery.releaseReference();
    },
  });
  // ...
}

The "housekeeper" that executes it is not the Context but the ComputeEngine class.

ComputeCommand.prototype.execute = function (computeEngine) {
  computeEngine.execute(this);
};

Of course, looking at the ComputeEngine class constructor, it is just a wrapper over Context in the decorator pattern:

function ComputeEngine(context) {
  this._context = context;
}

Looking at ComputeEngine.prototype.execute, the core of the execution actually uses a DrawCommand and a ClearCommand to run the incoming ShaderProgram against a standalone Framebuffer:

ComputeEngine.prototype.execute = function (computeCommand) {
  // ...
  const outputTexture = computeCommand.outputTexture;
  const width = outputTexture.width;
  const height = outputTexture.height;

  const context = this._context;
  const vertexArray = defined(computeCommand.vertexArray)
    ? computeCommand.vertexArray
    : context.getViewportQuadVertexArray();
  const shaderProgram = defined(computeCommand.shaderProgram)
    ? computeCommand.shaderProgram
    : createViewportQuadShader(context, computeCommand.fragmentShaderSource);
  // Use outputTexture as the drawing result carrier of fbo
  const framebuffer = createFramebuffer(context, outputTexture);
  const renderState = createRenderState(width, height);
  const uniformMap = computeCommand.uniformMap;

  // Use the variables in the module to complete fbo screen clearing
  const clearCommand = clearCommandScratch;
  clearCommand.framebuffer = framebuffer;
  clearCommand.renderState = renderState;
  clearCommand.execute(context);

  // Use the variables in the module to complete the execution of fbo rendering pipeline
  const drawCommand = drawCommandScratch;
  drawCommand.vertexArray = vertexArray;
  drawCommand.renderState = renderState;
  drawCommand.shaderProgram = shaderProgram;
  drawCommand.uniformMap = uniformMap;
  drawCommand.framebuffer = framebuffer;
  drawCommand.execute(context);

  framebuffer.destroy();
  // ...
}

The specific principles of compute shading and texture encoding are not covered; they belong to shader techniques, and this article (and this series) focuses on architectural design.

3. Custom shaders

CesiumJS has some public APIs that let developers write their own shading.

Before the Cesium team invested heavily in next-generation 3D Tiles and the new Model architecture, this capability was relatively weak: only the Fabric material specification could customize the material effects of existing geometry, and documentation was scarce.

With the experimental launch of next-generation 3D Tiles and the new Model class comes the far more flexible CustomShader API, which not only has complete documentation but also gives developers the greatest freedom to alter rendering.

3.1. Custom shaders via the earlier Fabric material specification

When writing against the Primitive API, there is a field like this:

new Primitive({
  //...
  appearance: new MaterialAppearance({
    material: Material.fromType('Color'),
    faceForward: true
  })
})

This MaterialAppearance is a subclass of the Appearance class. Besides the two attributes above, you can also pass your own vertex shader code.

However, vertex and fragment shaders are usually not handed directly to Appearance subclasses, because the shaders an Appearance object needs come with extra requirements. Instead, you usually write the material GLSL function when creating the Material:

const fabricMaterial = new Material({
  fabric: {
    uniforms: {
      my_var: 0.5,
    },
    source: `czm_material czm_getMaterial(czm_materialInput materialInput) {
      czm_material material = czm_getDefaultMaterial(materialInput);
      material.diffuse = vec3(materialInput.st, 0.0);
      material.alpha = my_var;
      return material;
    }`
  }
})

Then pass this Fabric-compliant material object to the appearance object:

new MaterialAppearance({
  material: fabricMaterial
})

The Fabric material specification is not covered here; I may write a separate article on it someday. Simply put, you pass a JavaScript object to Material's fabric member; this object can define a custom material with its own uniforms, using a GLSL function that returns a czm_material struct as the material.

Although you can create a GLSL struct as a material, it only affects part of the fragment shading process.

The Appearance API serves the Primitive objects produced by the Primitive API. Appearance objects do support passing the two complete shaders a Primitive needs, but with restrictions: certain vertex attributes and varyings must exist, and you must handle the render pipeline conversions yourself. There is little material available on this.

With the code below you can dump the two built-in default shaders generated by the simplest MaterialAppearance object, as a starting point for your own modifications:

const appearance = new Cesium.MaterialAppearance({
  material: new Cesium.Material({}),
})

const vs = appearance.vertexShaderSource
const fs = appearance.fragmentShaderSource
const fsWithFabricMaterial = appearance.getFragmentShaderSource()

// Print these three variables, which are glsl code strings
// console.log(vs, fs, fsWithFabricMaterial)

3.2. Custom shaders in post-processing

CesiumJS actually ships many common post-processors, such as bloom and FXAA; see the static fields exported by the PostProcessStageLibrary class.

Although these are the most basic whole-framebuffer post-processing effects, the results are only average.

You can access the post-processor container via scene.postProcessStages. For example, fast approximate anti-aliasing (FXAA) is enabled like this:

viewer.scene.postProcessStages.fxaa.enabled = true

The built-in ambient occlusion (AO) and bloom stages must run before all other post-processors, and FXAA after all of them. These three stages are the post-processors CesiumJS creates by default.

You can create a standalone post-process stage, or a composite one, as a post-processor and add it to the stage collection container; if the officially provided algorithms don't fit, you can also write your own shaders, as sketched below.
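A hedged sketch of the composite form (stage1 and stage2 are assumed to be PostProcessStage instances like the ones in the examples below):

const composite = new Cesium.PostProcessStageComposite({
  stages: [stage1, stage2], // by default each stage samples the previous one's output
})
viewer.scene.postProcessStages.add(composite)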

The official document reads:

The input texture of each post-processing stage is the texture rendered by Scene or the output texture of the previous post-processing stage.

Referring to the documentation of the PostProcessStage class, two official examples are provided:

// Example 1: rough color modification
const fs = `uniform sampler2D colorTexture;
varying vec2 v_textureCoordinates;
uniform float scale;
uniform vec3 offset;
void main() {
  vec4 color = texture2D(colorTexture, v_textureCoordinates);
  gl_FragColor = vec4(color.rgb * scale + offset, 1.0);
}`
scene.postProcessStages.add(new Cesium.PostProcessStage({
  fragmentShader: fs,
  uniforms: {
    scale: 1.1,
    offset: function() {
      return new Cesium.Cartesian3(0.1, 0.2, 0.3);
    }
  }
}))

Post-processing also supports testing whether an object was picked; that is example 2, recoloring the picked object:

const fs = `uniform sampler2D colorTexture;
varying vec2 v_textureCoordinates;
uniform vec4 highlight;
void main() {
  vec4 color = texture2D(colorTexture, v_textureCoordinates);
  if (czm_selected()) {
    vec3 highlighted = 
      highlight.a * highlight.rgb + (1.0 - highlight.a) * color.rgb;
    gl_FragColor = vec4(highlighted, 1.0);
  } else {
    gl_FragColor = color;
  }
}`
const stage = scene.postProcessStages.add(new Cesium.PostProcessStage({
  fragmentShader: fs,
  uniforms: {
    highlight: function() {
      return new Cesium.Color(1.0, 0.0, 0.0, 0.5);
    }
  }
}))
stage.selected = [cesium3DTileFeature]

PostProcessStage's selected is a JS array. It accepts objects with a pickId accessor (such as Cesium3DTileFeature, Label, Billboard) or objects with a pickIds accessor (such as Model, Cesium3DTilePointFeature) as "selected objects", and creates a selection texture during the update flow (PostProcessStage.prototype.update).

Post-processing may get a dedicated applied article in the future.

3.3. The CustomShader API of the new architecture

This section may change at any time as the new ModelExperimental architecture evolves (as of May 2022, this architecture is beginning to replace the pieces related to the original Model class).

At present, the CustomShader API can only be used with the Cesium3DTileset and ModelExperimental classes; the supplied custom shader applies to every tile or model.

An example:

import {
  CustomShader, UniformType, TextureUniform, VaryingType
} from 'cesium'
const customShader = new CustomShader({
  uniforms: {
    u_colorIndex: {
      type: UniformType.FLOAT,
      value: 1.0
    },
    u_normalMap: {
      type: UniformType.SAMPLER_2D,
      value: new TextureUniform({
        url: "http://example.com/normal.png"
      })
    }
  },
  varyings: {
    v_selectedColor: VaryingType.VEC3
  },
  vertexShaderText: `void vertexMain(
    VertexInput vsInput,
    inout czm_modelVertexOutput vsOutput
  ) {
    v_selectedColor = mix(
      vsInput.attributes.color_0,
      vsInput.attributes.color_1, u_colorIndex
    );
    vsOutput.positionMC += 0.1 * vsInput.attributes.normal;
  }`,
  fragmentShaderText: `void fragmentMain(
    FragmentInput fsInput,
    inout czm_modelMaterial material
  ) {
    material.normal = texture2D(
      u_normalMap, fsInput.attributes.texCoord_0
    );
    material.diffuse = v_selectedColor;
  }`
})

The relevant specification document can be found in the source tree at Documentation/CustomShaderGuide/README.md.
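Attaching the shader is a single assignment; a sketch based on the experimental API as of mid-2022 (the tileset URL is hypothetical, and depending on your release the experimental model architecture may also need to be enabled, which is an assumption to verify against your version):

const tileset = viewer.scene.primitives.add(
  new Cesium.Cesium3DTileset({
    url: 'http://example.com/tileset.json', // hypothetical URL
    customShader: customShader,
  })
)
// or swap it at runtime:
tileset.customShader = customShader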

You can set the uniforms required in the render pipeline, declare the varyings exchanged between the two shaders, and access the two structs CesiumJS builds for you, VertexInput and FragmentInput, inside the two shaders. They expose values in as much detail as possible, such as:

  • Vertex attributes
  • Feature/batch IDs (FeatureID/BatchID)
  • Attribute metadata from the 3D Tiles 1.1 specification

Vertex attributes provide vertex information in as much detail as possible: positions, normals, texture coordinates, colors and so on, each available in several coordinate systems. For example, you can access the vertex position in eye (camera) coordinates in the vertex shader like this:

void vertexMain(
  VertexInput vsInput,
  inout czm_modelVertexOutput vsOutput
) {
  vsOutput.positionMC = czm_projection * (
    vec4(vsInput.attributes.positionEC, 1.0) + vec4(0.0, 0.0, 10.0, 0.0)
  );
}

The code above adds 10 units to the z value in eye coordinates, then multiplies by the built-in projection matrix and writes the result to the output position (vsOutput.positionMC).

All built-in GLSL functions, constants and automatic variables provided by CesiumJS can be used in the CustomShader API.

This undoubtedly provides great convenience for developers who want to modify the shape and effect of the model.

4. Summary

Personally, I think this article has covered CesiumJS's WebGL encapsulation. Higher-level application encapsulation, such as OIT, GPU picking, and the shader and command generation flows of the various objects, builds on the content here and happens during Scene's rendering flow.

Finally, let me emphasize again: this article is only an introduction to the architecture, not a detailed explanation of shader algorithms. The shader algorithms are where CesiumJS's most ingenious ideas live, and one article could not contain them.

With the rendering architecture and the WebGL encapsulation groundwork in place, it is time to look at the rendering architecture of the classics: Model (glTF) and 3D Tiles. The old Model architecture is being replaced by the new one, so the explanation will be based directly on the new architecture.
