What is GLSL? What role does it play in the graphics rendering pipeline?
Tests: GLSL fundamentals.
Answer:
GLSL (the OpenGL Shading Language) is OpenGL's official shading language, used to write programs for the GPU's programmable rendering pipeline. It is a C-like language designed specifically for graphics work, giving developers direct control over vertex processing and pixel shading to implement a wide range of visual effects.
Core characteristics:
Position in the rendering pipeline:
Main roles:
// Vertex shader example
attribute vec3 position;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// Fragment shader example
precision mediump float;
uniform vec3 color;
void main() {
gl_FragColor = vec4(color, 1.0);
}
Practical applications:
What are the versions of GLSL? What are the main differences between different versions?
Tests: GLSL version evolution.
Answer:
GLSL versions track OpenGL/WebGL releases, and each version introduced new features and syntax changes. Version numbers are written as three digits (e.g., 110, 330, 450) that mirror the major and minor version of the corresponding OpenGL release.
Major version comparison:
1. GLSL 1.10-1.20 (OpenGL 2.0-2.1)
// Uses the attribute and varying keywords
attribute vec3 position;
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
2. GLSL 1.30-1.50 (OpenGL 3.0-3.2)
The in and out keywords replace attribute and varying:
#version 130
in vec3 position;
out vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
3. GLSL 3.30+ (OpenGL 3.3+)
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;
out vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
4. GLSL ES (WebGL)
// WebGL 1.0
precision mediump float;
attribute vec3 position;
varying vec2 vUv;
// WebGL 2.0
#version 300 es
precision mediump float;
in vec3 position;
out vec2 vUv;
Version selection advice:
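A practical way to apply this advice at runtime is to probe for WebGL 2.0 first and pick the matching GLSL ES dialect; a minimal sketch (the canvas id is an assumption):
const canvas = document.getElementById('c');
let gl = canvas.getContext('webgl2');
const isWebGL2 = !!gl;
if (!isWebGL2) {
gl = canvas.getContext('webgl'); // fall back to WebGL 1.0
}
// Pick the shader dialect that matches the context
const vertexSource = isWebGL2
? '#version 300 es\nin vec3 position;\nvoid main() { gl_Position = vec4(position, 1.0); }'
: 'attribute vec3 position;\nvoid main() { gl_Position = vec4(position, 1.0); }';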
What is the basic structure of a GLSL program? How to write the simplest shader?
Tests: shader program structure.
Answer:
A GLSL program consists of a vertex shader and a fragment shader that cooperate to render an image. Every shader must define a main() function as its entry point, and data moves between stages through qualified variables.
Basic building blocks:
Simplest possible shaders:
// Vertex shader - minimal version
attribute vec3 position;
void main() {
gl_Position = vec4(position, 1.0);
}
// Fragment shader - minimal version
precision mediump float;
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red
}
Full structure example:
// Vertex shader - full version
#version 300 es
precision highp float;
// Input variables
in vec3 position;
in vec3 normal;
in vec2 uv;
// Uniform variables
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
// Outputs to the fragment shader
out vec3 vNormal;
out vec2 vUv;
out vec3 vPosition;
void main() {
vNormal = normal;
vUv = uv;
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vPosition = worldPosition.xyz;
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
// Fragment shader - full version
#version 300 es
precision mediump float;
// Received from the vertex shader
in vec3 vNormal;
in vec2 vUv;
in vec3 vPosition;
// Uniform variables
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform sampler2D diffuseTexture;
// Output
out vec4 fragColor;
void main() {
// Texture sampling
vec4 texColor = texture(diffuseTexture, vUv);
// Simple lighting calculation
vec3 normal = normalize(vNormal);
vec3 lightDir = normalize(lightPosition - vPosition);
float diff = max(dot(normal, lightDir), 0.0);
vec3 finalColor = texColor.rgb * diff;
fragColor = vec4(finalColor, texColor.a);
}
Key points:
What is the compilation and linking process in GLSL?
Tests: shader compilation mechanics.
Answer:
GLSL shaders are compiled and linked at runtime by the WebGL/OpenGL driver, unlike traditional ahead-of-time compiled languages. The process has three main stages: compiling the sources, linking them into a program, and validating the program.
Compile-and-link workflow:
1. Create shader objects
// In a WebGL context
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
2. Load the shader source
const vertexSource = `
attribute vec3 position;
void main() {
gl_Position = vec4(position, 1.0);
}
`;
const fragmentSource = `
precision mediump float;
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
`;
gl.shaderSource(vertexShader, vertexSource);
gl.shaderSource(fragmentShader, fragmentSource);
3. Compile the shaders
gl.compileShader(vertexShader);
gl.compileShader(fragmentShader);
// Check for compile errors
if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
console.error('Vertex shader compile error:', gl.getShaderInfoLog(vertexShader));
}
4. Create a program object
const program = gl.createProgram();
5. Attach the shaders
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
6. Link the program
gl.linkProgram(program);
// Check for link errors
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('Program link error:', gl.getProgramInfoLog(program));
}
7. Use the program
gl.useProgram(program);
Complete example:
function createShaderProgram(gl, vertexSource, fragmentSource) {
// Compile the vertex shader
const vertexShader = compileShader(gl, gl.VERTEX_SHADER, vertexSource);
// Compile the fragment shader
const fragmentShader = compileShader(gl, gl.FRAGMENT_SHADER, fragmentSource);
// Link the program
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
// Validate the program
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
const info = gl.getProgramInfoLog(program);
throw new Error('Program linking failed: ' + info);
}
// Delete the shader objects (optional once linked)
gl.deleteShader(vertexShader);
gl.deleteShader(fragmentShader);
return program;
}
function compileShader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
const info = gl.getShaderInfoLog(shader);
gl.deleteShader(shader);
throw new Error('Shader compilation failed: ' + info);
}
return shader;
}
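Typical usage of the helper above, reusing the source strings from step 2:
const program = createShaderProgram(gl, vertexSource, fragmentSource);
gl.useProgram(program);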
Common compile errors:
Debugging tips:
Use gl.getShaderInfoLog() to get detailed error messages; use gl.validateProgram() to check the program's state.
What are the basic data types in GLSL? What are their characteristics?
Tests: basic data types.
Answer:
GLSL provides a rich type system: scalar, vector, matrix, and sampler types, all tailored to graphics computation and efficient parallel processing.
Scalar types:
// Floating point
float a = 1.0;
float b = 1.5e-3; // scientific notation
// Integers
int count = 10;
int negative = -5;
// Unsigned integers (GLSL 1.30+)
uint index = 0u;
// Booleans
bool isVisible = true;
bool flag = false;
Vector types:
// Float vectors
vec2 position = vec2(1.0, 2.0); // 2D vector
vec3 color = vec3(1.0, 0.5, 0.0); // 3D vector / RGB
vec4 vertex = vec4(0.0, 1.0, 0.0, 1.0); // 4D vector / RGBA
// Integer vectors
ivec2 pixel = ivec2(100, 200);
ivec3 indices = ivec3(0, 1, 2);
ivec4 data = ivec4(1, 2, 3, 4);
// Boolean vectors
bvec2 flags = bvec2(true, false);
bvec3 mask = bvec3(false, true, false);
bvec4 test = bvec4(true);
// Ways to construct vectors
vec3 v1 = vec3(1.0); // (1.0, 1.0, 1.0)
vec3 v2 = vec3(1.0, 2.0, 3.0); // (1.0, 2.0, 3.0)
vec4 v3 = vec4(v2, 4.0); // (1.0, 2.0, 3.0, 4.0)
vec4 v4 = vec4(vec2(1.0, 2.0), vec2(3.0, 4.0)); // (1.0, 2.0, 3.0, 4.0)
Matrix types:
// Float matrices
mat2 rotation = mat2(1.0, 0.0, 0.0, 1.0); // 2x2 matrix
mat3 transform = mat3(1.0); // 3x3 identity matrix
mat4 mvp = mat4(1.0); // 4x4 identity matrix
// Non-square matrices (GLSL 1.20+)
mat2x3 m23 = mat2x3(1.0); // 2 columns, 3 rows
mat3x4 m34 = mat3x4(1.0); // 3 columns, 4 rows
// Matrix construction
mat3 m = mat3(
1.0, 0.0, 0.0, // first column
0.0, 1.0, 0.0, // second column
0.0, 0.0, 1.0 // third column
);
Sampler types:
// 2D texture samplers
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
// Cube map sampler
uniform samplerCube envMap;
// 3D texture sampler
uniform sampler3D volumeTexture;
// Shadow sampler
uniform sampler2DShadow shadowMap;
Type characteristics:
1. Strong typing
float a = 1.0;
int b = 1;
// float c = a + b; // error: type mismatch
float c = a + float(b); // correct: explicit conversion
2. Vector component access
vec4 color = vec4(1.0, 0.5, 0.0, 1.0);
// Access via xyzw / rgba / stpq
float r = color.r; // or color.x or color.s
float g = color.g; // or color.y or color.t
vec3 rgb = color.rgb;
vec2 rg = color.rg;
// Swizzle operations
vec4 swizzled = color.bgra; // (0.0, 0.5, 1.0, 1.0)
vec3 repeated = color.rrr; // (1.0, 1.0, 1.0)
3. Column-major matrices
mat3 m = mat3(1.0, 2.0, 3.0, // first column
4.0, 5.0, 6.0, // second column
7.0, 8.0, 9.0); // third column
// Element access
float value = m[0][1]; // first column, second row = 2.0
Type conversion rules:
// Explicit conversions
int i = 5;
float f = float(i); // 5.0
float f2 = 3.7;
int i2 = int(f2); // 3 (truncated)
// Vector conversions
vec3 v3 = vec3(1.0, 2.0, 3.0);
vec4 v4 = vec4(v3, 4.0); // widen
vec2 v2 = v3.xy; // narrow
// Implicit conversion is not allowed in assignments or arithmetic
// (constructors such as vec3(1, 2, 3) do convert, but explicitly)
// float wrong = 1; // error: int does not implicitly convert to float
float correct = float(1); // correct: explicit conversion
Practical applications:
What are the vector types in GLSL? How to access vector components?
Tests: vector types and swizzling.
Answer:
Vectors are the core data structures of GLSL graphics computation. They come in 2-, 3-, and 4-component forms and support flexible component access (swizzling). Vector operations are heavily optimized and run in parallel on the GPU, making them the foundation of shader programming.
Vector type categories:
// Float vectors (most common)
vec2 texCoord = vec2(0.5, 0.5); // texture coordinates
vec3 position = vec3(1.0, 2.0, 3.0); // 3D position
vec4 color = vec4(1.0, 0.0, 0.0, 1.0); // RGBA color
// Integer vectors
ivec2 pixel = ivec2(100, 200); // pixel coordinates
ivec3 indices = ivec3(0, 1, 2); // indices
ivec4 data = ivec4(1, 2, 3, 4); // integer data
// Unsigned integer vectors (GLSL 1.30+)
uvec2 size = uvec2(1024u, 768u);
uvec3 count = uvec3(10u, 20u, 30u);
uvec4 mask = uvec4(0xFFu);
// Boolean vectors
bvec2 flags = bvec2(true, false);
bvec3 mask3 = bvec3(true);
bvec4 test = bvec4(false, true, true, false);
Component access:
1. Coordinate-style access (xyzw)
vec4 pos = vec4(1.0, 2.0, 3.0, 4.0);
float x = pos.x; // 1.0
float y = pos.y; // 2.0
float z = pos.z; // 3.0
float w = pos.w; // 4.0
vec3 xyz = pos.xyz; // vec3(1.0, 2.0, 3.0)
vec2 xy = pos.xy; // vec2(1.0, 2.0)
2. Color-style access (rgba)
vec4 color = vec4(1.0, 0.5, 0.0, 1.0);
float red = color.r; // 1.0
float green = color.g; // 0.5
float blue = color.b; // 0.0
float alpha = color.a; // 1.0
vec3 rgb = color.rgb; // vec3(1.0, 0.5, 0.0)
3. Texture-style access (stpq)
vec4 texCoord = vec4(0.0, 1.0, 0.5, 0.5);
float s = texCoord.s; // 0.0
float t = texCoord.t; // 1.0
float p = texCoord.p; // 0.5
float q = texCoord.q; // 0.5
vec2 st = texCoord.st; // vec2(0.0, 1.0)
Swizzle operations:
1. Reordering
vec4 color = vec4(1.0, 0.5, 0.0, 1.0);
vec4 bgra = color.bgra; // vec4(0.0, 0.5, 1.0, 1.0)
vec3 bgr = color.bgr; // vec3(0.0, 0.5, 1.0)
vec2 gr = color.gr; // vec2(0.5, 1.0)
2. Repeating components
vec3 color = vec3(1.0, 0.5, 0.0);
vec3 rrr = color.rrr; // vec3(1.0, 1.0, 1.0)
vec4 rrrr = color.rrrr; // vec4(1.0, 1.0, 1.0, 1.0)
vec2 rg = color.rg; // vec2(1.0, 0.5)
3. Arbitrary combinations
vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
vec4 wzyx = v.wzyx; // vec4(4.0, 3.0, 2.0, 1.0)
vec3 xxy = v.xxy; // vec3(1.0, 1.0, 2.0)
vec2 zw = v.zw; // vec2(3.0, 4.0)
Vector arithmetic:
vec3 a = vec3(1.0, 2.0, 3.0);
vec3 b = vec3(4.0, 5.0, 6.0);
// Vector addition
vec3 sum = a + b; // vec3(5.0, 7.0, 9.0)
// Vector subtraction
vec3 diff = b - a; // vec3(3.0, 3.0, 3.0)
// Vector multiplication (component-wise)
vec3 product = a * b; // vec3(4.0, 10.0, 18.0)
// Scalar multiplication
vec3 scaled = a * 2.0; // vec3(2.0, 4.0, 6.0)
// Dot product
float dotProduct = dot(a, b); // 32.0 (1*4 + 2*5 + 3*6)
// Cross product
vec3 crossProduct = cross(a, b); // vec3(-3.0, 6.0, -3.0)
// Vector length
float len = length(a); // sqrt(14.0) ≈ 3.742
// Normalization
vec3 normalized = normalize(a); // a / length(a)
Handy techniques:
1. Color conversion
// RGB to grayscale
vec3 color = vec3(1.0, 0.5, 0.0);
float gray = dot(color, vec3(0.299, 0.587, 0.114));
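// Worked example: for color = (1.0, 0.5, 0.0),
// gray = 0.299*1.0 + 0.587*0.5 + 0.114*0.0 ≈ 0.59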
// Alpha blending
vec4 color1 = vec4(1.0, 0.0, 0.0, 0.7);
vec4 color2 = vec4(0.0, 1.0, 0.0, 0.5);
vec3 blended = mix(color2.rgb, color1.rgb, color1.a);
2. Coordinate manipulation
// Swap the X and Y coordinates
vec2 uv = vec2(0.3, 0.7);
vec2 swapped = uv.yx; // vec2(0.7, 0.3)
// Mirror the coordinates
vec2 mirrored = vec2(1.0) - uv; // vec2(0.7, 0.3)
3. Partial component updates
vec4 data = vec4(1.0, 2.0, 3.0, 4.0);
// Modify only some components
data.xy = vec2(5.0, 6.0); // vec4(5.0, 6.0, 3.0, 4.0)
data.w = 0.5; // vec4(5.0, 6.0, 3.0, 0.5)
// Assign through a swizzle
data.rgb = vec3(0.0); // vec4(0.0, 0.0, 0.0, 0.5)
Caveats:
Components from different sets cannot be mixed in a single swizzle (e.g., color.xg is invalid).
What are uniform, attribute, and varying variables? What are their differences?
Tests: understanding variable qualifiers.
Answer:
uniform, attribute, and varying are GLSL variable qualifiers that control how data flows between the application and the shader stages. They are the bridge between JavaScript (the application layer) and the GPU shaders, and understanding their differences is fundamental to shader programming.
The three qualifiers compared:
| Property | uniform | attribute | varying |
|---|---|---|---|
| Data source | Set from JavaScript | Vertex buffer | Vertex shader output |
| Scope | Shared by all vertices/fragments | Per vertex | Vertex → fragment, interpolated |
| Mutability | Read-only | Read-only | Written by vertex stage, read by fragment stage |
| Stage | Vertex + fragment | Vertex only | Vertex → fragment |
1. uniform variables (global constants)
// Vertex shader
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float time;
attribute vec3 position;
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// Fragment shader
uniform vec3 lightColor;
uniform vec3 objectColor;
void main() {
gl_FragColor = vec4(objectColor * lightColor, 1.0);
}
Setting uniforms from JavaScript:
// Look up the uniform locations
const timeLocation = gl.getUniformLocation(program, 'time');
const colorLocation = gl.getUniformLocation(program, 'objectColor');
// Set the uniform values
gl.uniform1f(timeLocation, Date.now() / 1000);
gl.uniform3f(colorLocation, 1.0, 0.5, 0.0);
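Matrix uniforms go through the uniformMatrix* family; a minimal sketch (the identity data below is a stand-in for a real transform, and 'modelViewMatrix' is assumed to be declared in the shader):
const matrixLocation = gl.getUniformLocation(program, 'modelViewMatrix');
// Data is column-major; the second argument must be false in WebGL (no transpose)
gl.uniformMatrix4fv(matrixLocation, false, new Float32Array([
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
]));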
uniform characteristics:
2. attribute variables (per-vertex attributes)
// Vertex shader
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
attribute vec3 color;
void main() {
gl_Position = vec4(position, 1.0);
}
Setting attributes from JavaScript:
// Create a buffer
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
-1.0, -1.0, 0.0,
1.0, -1.0, 0.0,
0.0, 1.0, 0.0
]), gl.STATIC_DRAW);
// Look up the attribute location
const positionLocation = gl.getAttribLocation(program, 'position');
// Enable the attribute
gl.enableVertexAttribArray(positionLocation);
// Describe the attribute layout
gl.vertexAttribPointer(
positionLocation,
3, // number of components
gl.FLOAT, // data type
false, // normalize?
0, // stride
0 // offset
);
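With the buffer bound and the layout described, a draw call completes the flow; this draws the three vertices uploaded above:
gl.drawArrays(gl.TRIANGLES, 0, 3); // 3 vertices → one triangle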
attribute characteristics:
3. varying variables (interpolated between vertices)
// Vertex shader
attribute vec3 position;
attribute vec2 uv;
attribute vec3 normal;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
vUv = uv;
vNormal = normal;
vPosition = position;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// Fragment shader
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
uniform sampler2D texture;
void main() {
vec4 texColor = texture2D(texture, vUv);
vec3 normal = normalize(vNormal);
// Use the interpolated data
float lighting = max(dot(normal, vec3(0.0, 0.0, 1.0)), 0.0);
gl_FragColor = vec4(texColor.rgb * lighting, 1.0);
}
How varying interpolation works:
// Colors at the triangle's three vertices:
// vertex 1: vColor = vec3(1.0, 0.0, 0.0) // red
// vertex 2: vColor = vec3(0.0, 1.0, 0.0) // green
// vertex 3: vColor = vec3(0.0, 0.0, 1.0) // blue
// The fragment shader receives interpolated values;
// at the triangle's center: vColor ≈ vec3(0.33, 0.33, 0.33) // gray
varying characteristics:
GLSL 3.00+ (WebGL 2.0) syntax:
// in/out replace attribute/varying
// Vertex shader
#version 300 es
in vec3 position; // replaces attribute
in vec2 uv;
out vec2 vUv; // replaces varying (output)
out vec3 vPosition;
void main() {
vUv = uv;
vPosition = position;
gl_Position = vec4(position, 1.0);
}
// Fragment shader
#version 300 es
precision mediump float;
in vec2 vUv; // replaces varying (input)
in vec3 vPosition;
out vec4 fragColor; // replaces gl_FragColor
void main() {
fragColor = vec4(vUv, 0.0, 1.0);
}
Typical end-to-end usage:
// Vertex shader - complete example
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vWorldPosition;
void main() {
// Pass the UV coordinates along
vUv = uv;
// Transform the normal
vNormal = normalMatrix * normal;
// Compute the world-space position
vec4 worldPos = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPos.xyz;
// Compute the clip-space position
gl_Position = projectionMatrix * viewMatrix * worldPos;
}
// Fragment shader
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vWorldPosition;
uniform sampler2D diffuseMap;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
void main() {
// Texture sampling
vec4 texColor = texture2D(diffuseMap, vUv);
// Lighting
vec3 normal = normalize(vNormal);
vec3 lightDir = normalize(lightPosition - vWorldPosition);
vec3 viewDir = normalize(cameraPosition - vWorldPosition);
// Diffuse term
float diff = max(dot(normal, lightDir), 0.0);
// Specular term
vec3 reflectDir = reflect(-lightDir, normal);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);
vec3 finalColor = texColor.rgb * (diff + spec);
gl_FragColor = vec4(finalColor, texColor.a);
}
Best practices:
Name varyings with a v prefix (e.g., vUv, vNormal).
How to use matrix types in GLSL? What are the common matrix operations?
Tests: applying matrix types.
Answer:
Matrices are GLSL's core tool for geometric transforms: translation, rotation, scaling, projection, and more. GLSL stores matrices in column-major order (constructors and indexing are column-first), which differs from the row-major convention many math texts use, so read carefully.
Matrix type declarations:
// Square matrices
mat2 rotation2D = mat2(1.0); // 2x2 identity matrix
mat3 rotation3D = mat3(1.0); // 3x3 identity matrix
mat4 transform = mat4(1.0); // 4x4 identity matrix
// Non-square matrices (GLSL 1.20+)
mat2x3 m23 = mat2x3(1.0); // 2 columns, 3 rows
mat2x4 m24 = mat2x4(1.0); // 2 columns, 4 rows
mat3x2 m32 = mat3x2(1.0); // 3 columns, 2 rows
mat3x4 m34 = mat3x4(1.0); // 3 columns, 4 rows
mat4x2 m42 = mat4x2(1.0); // 4 columns, 2 rows
mat4x3 m43 = mat4x3(1.0); // 4 columns, 3 rows
Matrix construction:
// Identity matrix
mat4 identity = mat4(1.0);
// Equivalent to
mat4 identity = mat4(
1.0, 0.0, 0.0, 0.0, // first column
0.0, 1.0, 0.0, 0.0, // second column
0.0, 0.0, 1.0, 0.0, // third column
0.0, 0.0, 0.0, 1.0 // fourth column
);
// Construct from column vectors
vec4 col0 = vec4(1.0, 0.0, 0.0, 0.0);
vec4 col1 = vec4(0.0, 1.0, 0.0, 0.0);
vec4 col2 = vec4(0.0, 0.0, 1.0, 0.0);
vec4 col3 = vec4(0.0, 0.0, 0.0, 1.0);
mat4 m = mat4(col0, col1, col2, col3);
// Construct a mat4 from a mat3
mat3 m3 = mat3(1.0);
mat4 m4 = mat4(m3); // bottom-right element becomes 1.0
Matrix access:
mat4 m = mat4(
1.0, 2.0, 3.0, 4.0, // first column
5.0, 6.0, 7.0, 8.0, // second column
9.0, 10.0, 11.0, 12.0, // third column
13.0, 14.0, 15.0, 16.0 // fourth column
);
// Access a column
vec4 column0 = m[0]; // vec4(1.0, 2.0, 3.0, 4.0)
vec4 column1 = m[1]; // vec4(5.0, 6.0, 7.0, 8.0)
// Access an element
float value = m[1][2]; // 2nd column, 3rd row = 7.0
// Modify elements
m[0][1] = 99.0; // set 1st column, 2nd row
m[2] = vec4(0.0); // replace the entire 3rd column
Common matrix operations:
1. Matrix multiplication
mat4 modelMatrix = mat4(1.0);
mat4 viewMatrix = mat4(1.0);
mat4 projectionMatrix = mat4(1.0);
// Multiply matrices (order matters)
mat4 mvMatrix = viewMatrix * modelMatrix;
mat4 mvpMatrix = projectionMatrix * mvMatrix;
// Matrix-vector multiplication
vec4 position = vec4(1.0, 2.0, 3.0, 1.0);
vec4 transformed = mvpMatrix * position;
2. Coordinate transforms
attribute vec3 position;
uniform mat4 modelMatrix; // model matrix
uniform mat4 viewMatrix; // view matrix
uniform mat4 projectionMatrix; // projection matrix
void main() {
// Option 1: step-by-step
vec4 worldPos = modelMatrix * vec4(position, 1.0);
vec4 viewPos = viewMatrix * worldPos;
vec4 clipPos = projectionMatrix * viewPos;
gl_Position = clipPos;
// Option 2: combined matrix
mat4 mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
3. Transforming normals
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat3 normalMatrix; // normal matrix (inverse transpose of modelMatrix)
varying vec3 vNormal;
void main() {
// Correct normal transform
vNormal = normalMatrix * normal;
// Wrong (do not do this)
// vNormal = mat3(modelMatrix) * normal; // breaks under non-uniform scaling
}
4. Built-in matrix functions
mat4 m = mat4(1.0);
// Transpose
mat4 transposed = transpose(m);
// Inverse (when one exists)
mat4 inverted = inverse(m);
// Determinant (square matrices only)
float det = determinant(m);
// Component-wise product (not matrix multiplication)
mat4 result = matrixCompMult(m, m); // multiplies element by element
Useful transform matrices:
1. 2D rotation matrix
mat2 rotate2D(float angle) {
float c = cos(angle);
float s = sin(angle);
return mat2(
c, s, // first column
-s, c // second column
);
}
// Usage
vec2 point = vec2(1.0, 0.0);
float angle = radians(45.0);
vec2 rotated = rotate2D(angle) * point;
2. 3D rotation matrix (about the Z axis)
mat3 rotateZ(float angle) {
float c = cos(angle);
float s = sin(angle);
return mat3(
c, s, 0.0, // first column
-s, c, 0.0, // second column
0.0, 0.0, 1.0 // third column
);
}
3. Scale matrix
mat4 scale(vec3 s) {
return mat4(
s.x, 0.0, 0.0, 0.0,
0.0, s.y, 0.0, 0.0,
0.0, 0.0, s.z, 0.0,
0.0, 0.0, 0.0, 1.0
);
}
4. Translation matrix
mat4 translate(vec3 t) {
return mat4(
1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
t.x, t.y, t.z, 1.0 // the last column holds the translation
);
}
Complete worked example:
// Vertex shader
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
varying vec3 vNormal;
varying vec3 vWorldPosition;
void main() {
// Transform the normal
vNormal = normalize(normalMatrix * normal);
// Compute the world-space position
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
// MVP transform
mat4 mvMatrix = viewMatrix * modelMatrix;
mat4 mvpMatrix = projectionMatrix * mvMatrix;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
varying vec3 vNormal;
varying vec3 vWorldPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
void main() {
vec3 normal = normalize(vNormal);
vec3 lightDir = normalize(lightPosition - vWorldPosition);
vec3 viewDir = normalize(cameraPosition - vWorldPosition);
// Phong lighting
float diff = max(dot(normal, lightDir), 0.0);
vec3 reflectDir = reflect(-lightDir, normal);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);
vec3 color = vec3(diff + spec);
gl_FragColor = vec4(color, 1.0);
}
Performance tips:
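One common tip is to build the combined matrices once per frame on the CPU rather than per vertex on the GPU; a sketch assuming the gl-matrix library and shader uniforms named mvpMatrix and normalMatrix:
import { mat3, mat4 } from 'gl-matrix';
function updateMatrixUniforms(gl, program, model, view, projection) {
const mv = mat4.create();
const mvp = mat4.create();
const normal = mat3.create();
mat4.multiply(mv, view, model); // MV = V * M
mat4.multiply(mvp, projection, mv); // MVP = P * MV
mat3.normalFromMat4(normal, mv); // inverse transpose of MV's upper 3x3
gl.uniformMatrix4fv(gl.getUniformLocation(program, 'mvpMatrix'), false, mvp);
gl.uniformMatrix3fv(gl.getUniformLocation(program, 'normalMatrix'), false, normal);
}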
What is a Sampler in GLSL? What types are there?
Tests: texture samplers.
Answer:
A sampler is the special GLSL type for accessing texture data; it represents a texture object together with its sampling parameters. Samplers let shaders read colors, normals, depth, and other data from textures, and they underpin complex material effects.
Common sampler types:
1. 2D texture samplers
uniform sampler2D diffuseMap; // diffuse map
uniform sampler2D normalMap; // normal map
uniform sampler2D specularMap; // specular map
uniform sampler2D roughnessMap; // roughness map
varying vec2 vUv;
void main() {
// Sample the 2D textures
vec4 diffuse = texture2D(diffuseMap, vUv);
vec4 normal = texture2D(normalMap, vUv);
gl_FragColor = diffuse;
}
2. Cube map samplers
uniform samplerCube envMap; // environment map
uniform samplerCube skybox; // skybox
varying vec3 vReflect;
void main() {
// Sample the cube map (takes a 3D direction vector)
vec4 envColor = textureCube(envMap, vReflect);
gl_FragColor = envColor;
}
3. 3D texture sampler (volume textures; WebGL 2.0 / desktop GL)
uniform sampler3D volumeTexture; // volume texture
varying vec3 vUvw;
void main() {
// Sample the 3D texture
vec4 volumeColor = texture3D(volumeTexture, vUvw);
gl_FragColor = volumeColor;
}
4. Shadow samplers (desktop GLSL / WebGL 2.0)
uniform sampler2DShadow shadowMap; // shadow map
varying vec4 vShadowCoord;
void main() {
// Depth-comparison sampling
float shadow = shadow2D(shadowMap, vShadowCoord.xyz).r;
gl_FragColor = vec4(vec3(shadow), 1.0);
}
Types added in WebGL 2.0 / GLSL ES 3.00:
#version 300 es
precision mediump float;
// Integer texture samplers
uniform isampler2D intTexture; // signed integer texture
uniform usampler2D uintTexture; // unsigned integer texture
// Array textures
uniform sampler2DArray texArray; // 2D texture array
// Multisample sampler (desktop GL / GLSL ES 3.1; not exposed by WebGL 2.0)
uniform sampler2DMS msTexture; // multisample texture
in vec2 vUv;
out vec4 fragColor;
void main() {
// The unified texture() function (WebGL 2.0)
vec4 color = texture(texArray, vec3(vUv, 0.0));
fragColor = color;
}
Texture sampling functions:
1. WebGL 1.0 / GLSL ES 1.00
// 2D texture sampling
vec4 color = texture2D(sampler2D, vec2 coord);
vec4 color = texture2D(sampler2D, vec2 coord, float bias); // with a mipmap LOD bias
// Cube map sampling
vec4 color = textureCube(samplerCube, vec3 coord);
// 3D texture sampling (where supported)
vec4 color = texture3D(sampler3D, vec3 coord);
// Projective texture sampling
vec4 color = texture2DProj(sampler2D, vec3 coord); // coord.xy / coord.z
vec4 color = texture2DProj(sampler2D, vec4 coord); // coord.xy / coord.w
2. WebGL 2.0 / GLSL ES 3.00
#version 300 es
precision mediump float;
uniform sampler2D tex;
in vec2 vUv;
out vec4 fragColor;
void main() {
// The unified texture() function
vec4 color = texture(tex, vUv);
// Sampling with an explicit LOD
vec4 colorLod = textureLod(tex, vUv, 2.0); // select the mipmap level
// Query the texture size
ivec2 size = textureSize(tex, 0); // size of mip level 0
// Offset sampling
vec4 offset = textureOffset(tex, vUv, ivec2(1, 0));
// Gradient sampling
vec4 grad = textureGrad(tex, vUv, vec2(1.0, 0.0), vec2(0.0, 1.0));
fragColor = color;
}
Practical sampling techniques:
1. Blending multiple textures
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform float mixFactor;
varying vec2 vUv;
void main() {
vec4 color1 = texture2D(texture1, vUv);
vec4 color2 = texture2D(texture2, vUv);
// Linear blend
vec4 finalColor = mix(color1, color2, mixFactor);
gl_FragColor = finalColor;
}
2. Manipulating texture coordinates
uniform sampler2D texture;
uniform float time;
varying vec2 vUv;
void main() {
// UV animation
vec2 animatedUV = vUv + vec2(time * 0.1, 0.0);
// UV scaling
vec2 scaledUV = vUv * 2.0;
// UV rotation
float angle = time;
mat2 rotation = mat2(cos(angle), sin(angle), -sin(angle), cos(angle));
vec2 rotatedUV = rotation * (vUv - 0.5) + 0.5;
// Sample
vec4 color = texture2D(texture, animatedUV);
gl_FragColor = color;
}
3. Normal map sampling
uniform sampler2D normalMap;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
void main() {
// Sample the normal map (values stored in [0,1])
vec3 normalMapColor = texture2D(normalMap, vUv).rgb;
// Remap to [-1,1]
vec3 tangentNormal = normalize(normalMapColor * 2.0 - 1.0);
// Build the TBN matrix
mat3 TBN = mat3(
normalize(vTangent),
normalize(vBitangent),
normalize(vNormal)
);
// Transform into world space
vec3 worldNormal = normalize(TBN * tangentNormal);
gl_FragColor = vec4(worldNormal * 0.5 + 0.5, 1.0);
}
4. Environment mapping
uniform samplerCube envMap;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 cameraPosition;
void main() {
// Compute the reflection direction
vec3 viewDir = normalize(vPosition - cameraPosition);
vec3 reflectDir = reflect(viewDir, normalize(vNormal));
// Sample the environment map
vec4 envColor = textureCube(envMap, reflectDir);
gl_FragColor = envColor;
}
Sampler setup (JavaScript):
// Create a texture
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Upload the texture data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
// Set the filtering modes
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// Set the wrap modes
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
// Generate mipmaps
gl.generateMipmap(gl.TEXTURE_2D);
// Bind it to the sampler
const samplerLocation = gl.getUniformLocation(program, 'diffuseMap');
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(samplerLocation, 0); // use texture unit 0
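Binding several samplers follows the same pattern, one texture unit per sampler; a sketch assuming diffuseTexture and normalTexture objects created as above:
// Unit 0: diffuse map
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, diffuseTexture);
gl.uniform1i(gl.getUniformLocation(program, 'diffuseMap'), 0);
// Unit 1: normal map
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, normalTexture);
gl.uniform1i(gl.getUniformLocation(program, 'normalMap'), 1);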
Performance advice:
What is the role of vertex shaders? Where are they in the rendering pipeline?
Tests: vertex shader concepts.
Answer:
The vertex shader is the first programmable stage of the graphics pipeline. It processes each vertex's position, color, texture coordinates, and other attributes, transforms the vertex from model space into clip space, and prepares the data that is interpolated for rasterization and the fragment shader.
Position in the rendering pipeline:
Vertex data input
↓
[Vertex shader] ← we are here
↓
Primitive assembly
↓
Rasterization
↓
Fragment shader
↓
Tests and blending
↓
Framebuffer output
The vertex shader's main responsibilities:
1. Coordinate transforms (the core job)
attribute vec3 position;
uniform mat4 modelMatrix; // model transform
uniform mat4 viewMatrix; // view transform
uniform mat4 projectionMatrix; // projection transform
void main() {
// Model space → world space
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
// World space → view space (camera space)
vec4 viewPosition = viewMatrix * worldPosition;
// View space → clip space
gl_Position = projectionMatrix * viewPosition;
// Or combine them into a single MVP matrix
// mat4 mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
// gl_Position = mvpMatrix * vec4(position, 1.0);
}
2. Transforming normals
attribute vec3 normal;
uniform mat3 normalMatrix; // normal matrix (inverse transpose of the model matrix)
varying vec3 vNormal;
void main() {
// Transform the normal into world space
vNormal = normalize(normalMatrix * normal);
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
3. Passing data to the fragment shader
attribute vec3 position;
attribute vec2 uv;
attribute vec3 normal;
attribute vec3 color;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vColor;
varying vec3 vPosition;
void main() {
// Pass the texture coordinates
vUv = uv;
// Pass the normal
vNormal = normalMatrix * normal;
// Pass the vertex color
vColor = color;
// Pass the world position (for lighting)
vec4 worldPos = modelMatrix * vec4(position, 1.0);
vPosition = worldPos.xyz;
gl_Position = projectionMatrix * viewMatrix * worldPos;
}
4. Vertex animation
attribute vec3 position;
attribute vec3 normal;
uniform float time;
uniform float amplitude;
varying vec3 vNormal;
void main() {
vec3 pos = position;
// Wave animation
float wave = sin(position.x * 2.0 + time) * amplitude;
pos.y += wave;
// Update the normal (approximate)
vec3 tangent = vec3(1.0, cos(position.x * 2.0 + time) * amplitude * 2.0, 0.0);
vNormal = normalize(cross(tangent, vec3(0.0, 0.0, 1.0)));
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(pos, 1.0);
}
Common input attributes:
// Standard vertex attributes
attribute vec3 position; // vertex position
attribute vec3 normal; // vertex normal
attribute vec2 uv; // texture coordinates
attribute vec2 uv2; // second UV set
attribute vec3 color; // vertex color
attribute vec3 tangent; // tangent
attribute vec3 bitangent; // bitangent
// Skeletal animation
attribute vec4 skinIndex; // bone indices
attribute vec4 skinWeight; // bone weights
Built-in outputs:
// Required output
gl_Position // vec4: clip-space position
// Optional output
gl_PointSize // float: point size in pixels (when rendering points)
Complete vertex shader example:
// Vertex attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
// Transform matrices
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
// Outputs to the fragment shader
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vWorldPosition;
varying vec3 vViewPosition;
void main() {
// 1. Compute the world-space position
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
// 2. Compute the view-space position
vec4 viewPosition = viewMatrix * worldPosition;
vViewPosition = viewPosition.xyz;
// 3. Compute the clip-space position (the output)
gl_Position = projectionMatrix * viewPosition;
// 4. Transform the normal
vNormal = normalize(normalMatrix * normal);
// 5. Pass the UV coordinates
vUv = uv;
}
Advanced use cases:
1. Instanced rendering
attribute vec3 position;
attribute mat4 instanceMatrix; // per-instance transform matrix
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main() {
vec4 worldPosition = instanceMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
2. Skeletal animation (skinning)
attribute vec3 position;
attribute vec4 skinIndex;
attribute vec4 skinWeight;
uniform mat4 boneMatrices[50]; // bone matrix array
void main() {
mat4 skinMatrix =
skinWeight.x * boneMatrices[int(skinIndex.x)] +
skinWeight.y * boneMatrices[int(skinIndex.y)] +
skinWeight.z * boneMatrices[int(skinIndex.z)] +
skinWeight.w * boneMatrices[int(skinIndex.w)];
vec4 skinnedPosition = skinMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * modelMatrix * skinnedPosition;
}
3. Morph targets
attribute vec3 position;
attribute vec3 morphTarget0;
attribute vec3 morphTarget1;
uniform float morphWeight0;
uniform float morphWeight1;
void main() {
vec3 morphedPosition = position +
morphTarget0 * morphWeight0 +
morphTarget1 * morphWeight1;
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(morphedPosition, 1.0);
}
Vertex shader performance considerations:
Practical applications:
What is gl_Position? How to use it?
Tests: built-in output variables.
Answer:
gl_Position is the vertex shader's built-in output variable: the vertex's final position in clip space. It is the one variable every vertex shader must write, and the GPU uses it for all subsequent rasterization.
Key characteristics:
Type: vec4 (homogeneous coordinates).
What gl_Position's components mean:
gl_Position = vec4(x, y, z, w);
// x, y, z: clip-space coordinates
// w: homogeneous component (used for the perspective divide)
// The GPU divides by w automatically to obtain NDC coordinates:
// NDC.x = x / w (range -1 to 1)
// NDC.y = y / w (range -1 to 1)
// NDC.z = z / w (range -1 to 1; the depth value)
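// Worked example: a clip-space position (2.0, 1.0, 0.5, 2.0)
// becomes NDC (1.0, 0.5, 0.25) after the divide by w = 2.0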
Standard usage:
1. Full MVP transform
attribute vec3 position;
uniform mat4 modelMatrix; // model matrix
uniform mat4 viewMatrix; // view matrix
uniform mat4 projectionMatrix; // projection matrix
void main() {
// Option 1: step-by-step
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vec4 viewPosition = viewMatrix * worldPosition;
gl_Position = projectionMatrix * viewPosition;
// Option 2: combined matrix (more efficient)
mat4 mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
2. Simplified 2D rendering
attribute vec2 position;
void main() {
// Output the 2D coordinates directly (already in NDC)
gl_Position = vec4(position, 0.0, 1.0);
// position is expected to lie in -1..1
}
3. Orthographic projection
attribute vec3 position;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
// With an orthographic projection, w is typically 1.0,
// so the perspective divide leaves the coordinates unchanged
}
Advanced use cases:
1. Billboards
attribute vec3 position;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 billboardPosition;
void main() {
// Extract the camera's right and up vectors
vec3 cameraRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
vec3 cameraUp = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// Build a quad that faces the camera
vec3 worldPosition = billboardPosition
+ cameraRight * position.x
+ cameraUp * position.y;
gl_Position = projectionMatrix * viewMatrix * vec4(worldPosition, 1.0);
}
2. Custom projection
attribute vec3 position;
uniform float customFov;
uniform float aspectRatio;
void main() {
// Hand-rolled perspective projection
float f = 1.0 / tan(customFov / 2.0);
float nearPlane = 0.1;
float farPlane = 100.0;
vec4 clipPos;
clipPos.x = position.x * f / aspectRatio;
clipPos.y = position.y * f;
clipPos.z = (farPlane + nearPlane) / (nearPlane - farPlane) * position.z
+ (2.0 * farPlane * nearPlane) / (nearPlane - farPlane);
clipPos.w = -position.z; // the key to the perspective divide
gl_Position = clipPos;
}
3. Screen-space effects
attribute vec3 position;
void main() {
// Output screen-space coordinates (fullscreen quad)
gl_Position = vec4(position.xy, 0.0, 1.0);
// Commonly used for post-processing
}
Common pitfalls:
1. Understand the coordinate spaces
// Wrong: outputting model-space coordinates directly
gl_Position = vec4(position, 1.0); // ❌ wrong coordinate range
// Right: transform into clip space first
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0); // ✅
2. The homogeneous w component
// Perspective projection: w != 1
gl_Position = projectionMatrix * viewPosition;
// w typically equals -viewPosition.z (camera-space z)
// Orthographic projection: w = 1
gl_Position = vec4(position.xy, 0.0, 1.0);
3. Controlling the depth value
// Pull an object toward the viewer
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
gl_Position.z = 0.0; // NDC depth becomes 0.0, in front of most geometry
// Push an object behind everything
gl_Position.z = gl_Position.w; // NDC depth becomes 1.0, the far plane
Debugging tips:
// Method 1: copy into a varying and visualize
varying vec4 vClipPosition;
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
vClipPosition = gl_Position; // visualize it in the fragment shader
}
// Method 2: simplify the test
void main() {
// Check whether the coordinate transform is the problem
gl_Position = vec4(position * 0.5, 1.0); // output scaled model coordinates directly
}
Performance considerations:
Precompute mvpMatrix = P * V * M on the CPU to cut per-vertex GPU work.
Practical applications:
How to perform coordinate transformations in vertex shaders?
Tests: the MVP transform.
Answer:
Coordinate transformation in the vertex shader is the heart of 3D rendering: a chain of matrix multiplications carries each vertex from model space to clip space. The standard sequence is the model (M), view (V), and projection (P) transforms, collectively known as the MVP transform.
Coordinate-space pipeline:
Model space → World space → View space → Clip space → NDC space → Screen space
(Model) (World) (View) (Clip) (NDC) (Screen)
↓ ↓ ↓ ↓ ↓
M matrix V matrix P matrix perspective divide viewport transform
Full MVP implementation:
attribute vec3 position; // model-space position
attribute vec3 normal; // model-space normal
attribute vec2 uv; // texture coordinates
// Transform matrices
uniform mat4 modelMatrix; // M: model matrix
uniform mat4 viewMatrix; // V: view matrix
uniform mat4 projectionMatrix; // P: projection matrix
uniform mat3 normalMatrix; // normal transform matrix
// Outputs to the fragment shader
varying vec3 vWorldPosition; // world-space position
varying vec3 vViewPosition; // view-space position
varying vec3 vNormal; // world-space normal
varying vec2 vUv;
void main() {
// 1. Model space → world space
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
// 2. World space → view space
vec4 viewPosition = viewMatrix * worldPosition;
vViewPosition = viewPosition.xyz;
// 3. View space → clip space
gl_Position = projectionMatrix * viewPosition;
// 4. Transform the normal (rotation and scale only; no translation)
vNormal = normalize(normalMatrix * normal);
// 5. Pass the texture coordinates straight through
vUv = uv;
}
The transform matrices in detail:
1. Model matrix
// modelMatrix = translation × rotation × scale
uniform mat4 modelMatrix;
// Purpose: move the model from its local frame into world space
vec4 worldPos = modelMatrix * vec4(position, 1.0);
// Building a model matrix by hand (normally done on the CPU)
mat4 buildModelMatrix(vec3 translation, vec3 rotation, vec3 scale) {
// In practice this is assembled on the CPU side;
// shown here only to illustrate the concept
mat4 T = translationMatrix(translation);
mat4 R = rotationMatrix(rotation);
mat4 S = scaleMatrix(scale);
return T * R * S;
}
2. View matrix
// The view matrix is the inverse of the camera's transform
uniform mat4 viewMatrix;
// Purpose: world space → camera space
vec4 viewPos = viewMatrix * worldPos;
// The view matrix encodes the camera's position and orientation
// viewMatrix = inverse(cameraMatrix)
3. Projection matrix
// Perspective projection matrix
uniform mat4 projectionMatrix;
// Purpose: view space → clip space
gl_Position = projectionMatrix * viewPos;
// Perspective projection traits:
// - distant objects appear smaller
// - w component ≈ -viewPos.z
// - requires the perspective divide
// Orthographic projection traits:
// - no size change with distance
// - w component = 1.0
// - the perspective divide is effectively a no-op
Optimization: pre-combined matrices
// Option 1: precompute the MVP matrix on the CPU
uniform mat4 mvpMatrix; // projectionMatrix * viewMatrix * modelMatrix
void main() {
gl_Position = mvpMatrix * vec4(position, 1.0);
// one matrix multiply per vertex; best performance
}
// Option 2: pre-combined modelView matrix
uniform mat4 modelViewMatrix; // viewMatrix * modelMatrix
uniform mat4 projectionMatrix;
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * mvPosition;
// use this when view-space coordinates are still needed
}
Transforming normals (important):
// Normals need special handling
uniform mat3 normalMatrix; // transpose(inverse(upper 3x3 of modelViewMatrix))
void main() {
// Correct normal transform
vec3 transformedNormal = normalize(normalMatrix * normal);
// Wrong (fails under non-uniform scaling)
// vec3 wrongNormal = mat3(modelMatrix) * normal; // ❌
}
// Why the inverse transpose?
// 1. A normal is perpendicular to the surface's tangent plane
// 2. Non-uniform scaling does not preserve perpendicularity
// 3. The inverse transpose restores it
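For reference, the standard one-line derivation: a normal must stay perpendicular to every tangent vector $t$, i.e. $n^\top t = 0$. If tangents transform as $t' = Mt$ and we choose $n' = (M^{-1})^\top n$, then
$$n'^\top t' = n^\top M^{-1} M t = n^\top t = 0,$$
so perpendicularity is preserved for any invertible $M$.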
Worked examples:
1. Standard 3D object rendering
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUv;
void main() {
// Full MVP transform
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vec4 viewPosition = viewMatrix * worldPosition;
gl_Position = projectionMatrix * viewPosition;
// Pass data on to the fragment shader
vNormal = normalMatrix * normal;
vPosition = viewPosition.xyz;
vUv = uv;
}
2. Skeletal animation transform
attribute vec3 position;
attribute vec4 skinIndex; // bone indices
attribute vec4 skinWeight; // bone weights
uniform mat4 boneMatrices[50];
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main() {
// Blend the bone matrices
mat4 skinMatrix =
skinWeight.x * boneMatrices[int(skinIndex.x)] +
skinWeight.y * boneMatrices[int(skinIndex.y)] +
skinWeight.z * boneMatrices[int(skinIndex.z)] +
skinWeight.w * boneMatrices[int(skinIndex.w)];
// Apply the skinning transform
vec4 skinnedPosition = skinMatrix * vec4(position, 1.0);
// Continue with the standard transform
vec4 viewPosition = viewMatrix * skinnedPosition;
gl_Position = projectionMatrix * viewPosition;
}
3. Instanced rendering
attribute vec3 position;
attribute mat4 instanceMatrix; // per-instance model matrix
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main() {
// Transform with the instance matrix
vec4 worldPosition = instanceMatrix * vec4(position, 1.0);
vec4 viewPosition = viewMatrix * worldPosition;
gl_Position = projectionMatrix * viewPosition;
}
Special transform tricks:
1. Billboards (always facing the camera)
uniform mat4 viewMatrix;
uniform vec3 objectPosition;
void main() {
// Extract the camera's basis vectors
vec3 cameraRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
vec3 cameraUp = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
// Build a quad that always faces the camera
vec3 worldPos = objectPosition
+ cameraRight * position.x
+ cameraUp * position.y;
gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);
}
2. Skybox transform
attribute vec3 position;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main() {
// Strip the translation from the view matrix
mat4 viewRotation = mat4(mat3(viewMatrix));
vec4 pos = projectionMatrix * viewRotation * vec4(position, 1.0);
// Force the depth to the far plane
gl_Position = pos.xyww; // z = w, so z = 1.0 after the perspective divide
}
Performance advice:
Common mistakes:
// Mistake 1: wrong multiplication order
gl_Position = vec4(position, 1.0) * mvpMatrix; // ❌ wrong order
// Correct
gl_Position = mvpMatrix * vec4(position, 1.0); // ✅
// Mistake 2: transforming the normal with the model matrix directly
vNormal = mat3(modelMatrix) * normal; // ❌ wrong under non-uniform scaling
// Correct
vNormal = normalMatrix * normal; // ✅
// Mistake 3: forgetting homogeneous coordinates
gl_Position = projectionMatrix * vec3(position); // ❌ type error
// Correct
gl_Position = projectionMatrix * vec4(position, 1.0); // ✅
Practical applications:
What are the commonly used built-in variables in vertex shaders?
Tests: using built-in variables.
Answer:
GLSL vertex shaders expose a set of built-in variables: inputs, outputs, and special-purpose variables. They simplify shader programming and form the interface to the GPU's fixed-function stages.
Built-in outputs (required or common):
1. gl_Position (required)
// Type: vec4
// Purpose: the vertex's clip-space position
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
2. gl_PointSize (when rendering points)
// Type: float
// Purpose: the point sprite's size in pixels
void main() {
gl_PointSize = 10.0; // the point is drawn 10 pixels across
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Distance-based point size
void main() {
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
float distance = length(mvPosition.xyz);
gl_PointSize = 50.0 / distance; // farther points appear smaller
gl_Position = projectionMatrix * mvPosition;
}
Built-in input variables:
1. gl_VertexID (GLSL 1.30+ / GLSL ES 3.00)
// Type: int
// Purpose: the index of the current vertex
// Commonly used for procedural geometry
void main() {
// Derive a position from the vertex ID
float angle = float(gl_VertexID) * 0.1;
vec3 pos = vec3(cos(angle), sin(angle), 0.0);
gl_Position = mvpMatrix * vec4(pos, 1.0);
}
2. gl_InstanceID (instanced rendering)
// Type: int
// Purpose: the index of the current instance
// Each instance gets its own ID during instanced rendering
uniform vec3 instancePositions[100];
void main() {
// Offset the position by instance
vec3 offset = instancePositions[gl_InstanceID];
vec3 worldPos = position + offset;
gl_Position = mvpMatrix * vec4(worldPos, 1.0);
}
Other special built-ins:
1. gl_ClipDistance (clip-plane distances)
// Type: float[]
// Purpose: user-defined clip planes
// GLSL 1.30+
uniform vec4 clipPlane; // clip-plane equation
void main() {
vec4 worldPos = modelMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * worldPos;
// Distance to the clip plane
gl_ClipDistance[0] = dot(worldPos, clipPlane);
// Vertices with a distance < 0 are clipped
}
Examples:
1. Complete basic vertex shader
// Input attributes (from the vertex buffers)
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
attribute vec4 color;
// Uniforms (from the application)
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
// Outputs to the fragment shader
varying vec2 vUv;
varying vec3 vNormal;
varying vec4 vColor;
void main() {
// Write the built-in outputs
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
gl_PointSize = 5.0;
// Pass data along
vUv = uv;
vNormal = normalMatrix * normal;
vColor = color;
}
2. Instanced rendering with gl_InstanceID
attribute vec3 position;
attribute vec3 normal;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
// Per-instance data
uniform vec3 instanceOffsets[1000];
uniform vec3 instanceColors[1000];
varying vec3 vColor;
void main() {
// Fetch this instance's data via gl_InstanceID
vec3 offset = instanceOffsets[gl_InstanceID];
vec3 instanceColor = instanceColors[gl_InstanceID];
// Apply the instance offset
vec3 worldPos = (modelMatrix * vec4(position, 1.0)).xyz + offset;
gl_Position = projectionMatrix * viewMatrix * vec4(worldPos, 1.0);
vColor = instanceColor;
}
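On the JavaScript side, the instances are launched with an instanced draw call; a sketch assuming WebGL 2.0 and known vertexCount/instanceCount values:
// WebGL 2.0: gl_InstanceID runs from 0 to instanceCount - 1
gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);
// WebGL 1.0 equivalent via extension:
// const ext = gl.getExtension('ANGLE_instanced_arrays');
// ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, vertexCount, instanceCount);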
3. Animated point sprites (using gl_PointSize)
attribute vec3 position;
attribute float size;
attribute float alpha;
uniform mat4 mvpMatrix;
uniform float time;
varying float vAlpha;
void main() {
// Animated point size
float pulse = sin(time * 2.0 + position.x) * 0.5 + 0.5;
gl_PointSize = size * (1.0 + pulse * 0.5);
gl_Position = mvpMatrix * vec4(position, 1.0);
vAlpha = alpha;
}
4. Procedural geometry (using gl_VertexID)
uniform mat4 mvpMatrix;
uniform float radius;
uniform int segments;
varying vec3 vColor;
void main() {
// Place vertices on a circle based on the vertex ID
float angle = float(gl_VertexID) * 6.28318 / float(segments);
vec3 pos = vec3(
cos(angle) * radius,
sin(angle) * radius,
0.0
);
gl_Position = mvpMatrix * vec4(pos, 1.0);
// Derive the color from the angle as well
vColor = vec3(
cos(angle) * 0.5 + 0.5,
sin(angle) * 0.5 + 0.5,
0.5
);
}
Built-in uniforms in WebGL/Three.js:
// Uniforms that Three.js provides automatically:
// - projectionMatrix
// - modelViewMatrix
// - viewMatrix
// - modelMatrix
// - normalMatrix
// - cameraPosition
// Using them inside a Three.js vertex shader
const vertexShader = `
// Three.js injects these uniforms automatically
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
// User-defined attributes
attribute vec3 position;
attribute vec3 normal;
varying vec3 vNormal;
void main() {
vNormal = normalMatrix * normal;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`;
Version differences:
GLSL ES 1.0 (WebGL 1.0)
// Uses attribute and varying
attribute vec3 position;
varying vec2 vUv;
void main() {
gl_Position = vec4(position, 1.0);
}
GLSL ES 3.0 (WebGL 2.0)
#version 300 es
// Uses in and out
in vec3 position;
out vec2 vUv;
void main() {
gl_Position = vec4(position, 1.0);
}
Practical applications:
How to pass data from vertex shader to fragment shader?
Tests: passing data through varyings.
Answer:
The vertex shader hands data to the fragment shader through varying variables (out variables in GLSL ES 3.0). The GPU interpolates these values across each primitive automatically, so every fragment receives a smoothly blended value.
Basic mechanism:
// ========== Vertex shader ==========
attribute vec3 position;
attribute vec2 uv;
attribute vec3 normal;
attribute vec4 color;
uniform mat4 mvpMatrix;
uniform mat3 normalMatrix;
// varying declarations (outputs)
varying vec2 vUv;
varying vec3 vNormal;
varying vec4 vColor;
varying vec3 vPosition;
void main() {
// Compute and write the varyings
vUv = uv;
vNormal = normalize(normalMatrix * normal);
vColor = color;
vPosition = position;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// ========== Fragment shader ==========
precision mediump float;
// varying declarations (inputs; they must match the vertex shader)
varying vec2 vUv;
varying vec3 vNormal;
varying vec4 vColor;
varying vec3 vPosition;
void main() {
// Use the interpolated values
vec3 normal = normalize(vNormal);
gl_FragColor = vColor;
}
How interpolation works:
// Vertex shader (a triangle's three vertices)
// vertex 1: vColor = vec4(1.0, 0.0, 0.0, 1.0) red
// vertex 2: vColor = vec4(0.0, 1.0, 0.0, 1.0) green
// vertex 3: vColor = vec4(0.0, 0.0, 1.0, 1.0) blue
// Fragment shader (at the triangle's center)
// automatically interpolated: vColor ≈ vec4(0.33, 0.33, 0.33, 1.0), a blend
// Interpolation formula (barycentric coordinates):
// fragmentValue = w1 * vertex1Value + w2 * vertex2Value + w3 * vertex3Value
// where w1 + w2 + w3 = 1.0
Common kinds of data to pass:
1. Texture coordinates
// Vertex shader
attribute vec2 uv;
varying vec2 vUv;
void main() {
vUv = uv; // pass through unchanged
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
varying vec2 vUv;
uniform sampler2D diffuseMap;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
gl_FragColor = texColor;
}
2. Normals
// Vertex shader
attribute vec3 normal;
uniform mat3 normalMatrix;
varying vec3 vNormal;
void main() {
// Transform, then pass along
vNormal = normalize(normalMatrix * normal);
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
varying vec3 vNormal;
void main() {
// Must re-normalize (interpolation can change the length)
vec3 N = normalize(vNormal);
gl_FragColor = vec4(N * 0.5 + 0.5, 1.0); // visualize the normal
}
3. World-space positions
// Vertex shader
attribute vec3 position;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
varying vec3 vWorldPosition;
void main() {
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
// Fragment shader
uniform vec3 lightPosition;
varying vec3 vWorldPosition;
void main() {
// Light the fragment using the world position
vec3 lightDir = normalize(lightPosition - vWorldPosition);
// ... lighting calculation
}
4. Vertex colors
// Vertex shader
attribute vec4 color;
varying vec4 vColor;
void main() {
vColor = color;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
varying vec4 vColor;
void main() {
gl_FragColor = vColor; // an automatically interpolated gradient
}
Advanced techniques:
1. Passing several coordinate spaces at once
// Vertex shader
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
// Data in multiple spaces
varying vec3 vWorldPosition;
varying vec3 vViewPosition;
varying vec3 vWorldNormal;
varying vec3 vViewNormal;
void main() {
// 世界空间
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
vWorldNormal = normalize(mat3(modelMatrix) * normal);
// View space
vec4 viewPosition = viewMatrix * worldPosition;
vViewPosition = viewPosition.xyz;
vViewNormal = normalize(normalMatrix * normal);
gl_Position = projectionMatrix * viewPosition;
}
2. Custom interpolated data
// Vertex shader
attribute vec3 position;
uniform float time;
varying float vDistance;
varying float vTime;
void main() {
vDistance = length(position); // distance to the origin
vTime = time; // a uniform can be forwarded through a varying
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
varying float vDistance;
varying float vTime;
void main() {
// Distance-based pulse effect
float pulse = sin(vDistance * 5.0 - vTime * 3.0);
vec3 color = vec3(pulse * 0.5 + 0.5);
gl_FragColor = vec4(color, 1.0);
}
3. Flat shading (the flat qualifier)
// GLSL 1.30+
// Vertex shader
#version 130
in vec3 position;
in vec3 color;
flat out vec3 vColor; // not interpolated; constant across the primitive
void main() {
vColor = color;
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// Fragment shader
#version 130
flat in vec3 vColor; // receives the non-interpolated color
void main() {
gl_FragColor = vec4(vColor, 1.0); // one color per triangle
}
Performance advice:
1. Keep the varying count down
// Bad: passing too many varyings
varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vUv;
varying vec4 vColor;
varying vec3 vTangent;
varying vec3 vBitangent;
varying float vAO;
// ... and more
// Better: pass only what is necessary
varying vec3 vNormal;
varying vec2 vUv;
// Recompute the rest in the fragment shader or read it from textures
2. Pack channels together
// Bad: several separate floats
varying float vMetalness;
varying float vRoughness;
varying float vAO;
// Better: pack them into a vec3
varying vec3 vMaterialProps; // xyz = metalness, roughness, ao
Common mistakes:
// Mistake 1: mismatched varying names
// Vertex shader
varying vec2 vUv;
// Fragment shader
varying vec2 uv; // ❌ the names must match exactly
// Mistake 2: mismatched types
// Vertex shader
varying vec3 vColor;
// Fragment shader
varying vec4 vColor; // ❌ the types must be identical
// Mistake 3: missing declaration
// Vertex shader
vNormal = normal; // ❌ varying never declared
// Correct
varying vec3 vNormal;
vNormal = normal; // ✅
Practical applications:
What is the role of fragment shaders? What data does it process?
Tests: fragment shader concepts.
Answer:
The fragment shader is the last programmable stage of the graphics pipeline: it computes the final color of every pixel (fragment). It runs after rasterization, once for each covered pixel, and is where most visual effects are implemented.
Core role:
Data it processes:
1. Inputs (interpolated from the vertex shader)
precision mediump float;
// Interpolated data from the vertex shader
varying vec2 vUv; // texture coordinates
varying vec3 vNormal; // normal
varying vec3 vPosition; // position
varying vec4 vColor; // vertex color
// Uniform data (from JavaScript)
uniform sampler2D diffuseMap; // texture sampler
uniform vec3 lightPosition; // light position
uniform vec3 cameraPosition; // camera position
uniform float time; // time
void main() {
// Process the interpolated data...
}
2. Outputs
void main() {
// A color must be written
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // RGBA
// In WebGL 2.0 the output is declared explicitly instead:
// out vec4 fragColor;
// fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Basic fragment shader examples:
1. Constant color
precision mediump float;
void main() {
// Output solid red
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
2. Using varying data
precision mediump float;
varying vec2 vUv;
varying vec3 vNormal;
void main() {
// Show the UVs as a color (a debugging aid)
gl_FragColor = vec4(vUv, 0.0, 1.0);
// Or visualize the normal
vec3 normalColor = vNormal * 0.5 + 0.5; // remap -1..1 to 0..1
gl_FragColor = vec4(normalColor, 1.0);
}
3. Texture sampling
precision mediump float;
varying vec2 vUv;
uniform sampler2D diffuseMap;
void main() {
// Read a color from the texture
vec4 texColor = texture2D(diffuseMap, vUv);
gl_FragColor = texColor;
}
Common tasks:
1. Lighting
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 objectColor;
void main() {
// Normalize the normal
vec3 normal = normalize(vNormal);
// Light direction
vec3 lightDir = normalize(lightPosition - vPosition);
// Diffuse term
float diff = max(dot(normal, lightDir), 0.0);
vec3 diffuse = diff * lightColor;
// Ambient term
vec3 ambient = vec3(0.1);
// Final color
vec3 result = (ambient + diffuse) * objectColor;
gl_FragColor = vec4(result, 1.0);
}
2. Multi-texture blending
precision mediump float;
varying vec2 vUv;
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform float mixFactor;
void main() {
vec4 color1 = texture2D(texture1, vUv);
vec4 color2 = texture2D(texture2, vUv);
// Blend the two textures linearly
vec4 finalColor = mix(color1, color2, mixFactor);
gl_FragColor = finalColor;
}
3. Transparency
precision mediump float;
varying vec2 vUv;
uniform sampler2D diffuseMap;
uniform float opacity;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
// Alpha test (fully transparent or fully opaque)
if (texColor.a < 0.5) {
discard; // drop this fragment
}
// Alpha blending
gl_FragColor = vec4(texColor.rgb, texColor.a * opacity);
}
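The alpha written here only takes effect if blending is enabled on the JavaScript side; a typical setup:
gl.enable(gl.BLEND);
// Standard 'over' compositing: src.rgb * src.a + dst.rgb * (1 - src.a)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);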
4. Procedural textures
precision mediump float;
varying vec2 vUv;
uniform float time;
void main() {
// Checkerboard pattern
float checker = mod(floor(vUv.x * 8.0) + floor(vUv.y * 8.0), 2.0);
// Animated ripple
float wave = sin(vUv.x * 10.0 + time) * sin(vUv.y * 10.0 + time);
vec3 color = vec3(checker) * (0.5 + wave * 0.5);
gl_FragColor = vec4(color, 1.0);
}
Fragment shader execution flow:
Rasterization → fragment generation → fragment shader → depth test → alpha blending → framebuffer
↓ ↓ ↓ ↓ ↓
produce fragments compute colors Z-buffer blend transparency final pixel
Performance characteristics:
1. Invocation count
// The fragment shader runs once per covered pixel:
// 1920x1080 = 2,073,600 invocations per frame
// at 60 FPS that is 124,416,000 invocations per second
// so the fragment shader is where bottlenecks most often appear
2. Optimization principles
// Bad: heavy computation in the fragment shader
void main() {
// computing a matrix per pixel (very slow)
mat4 transform = complexMatrixCalculation();
// ...
}
// Good: precompute in the vertex shader or on the CPU
// Vertex shader
varying vec3 vTransformedPos;
void main() {
vTransformedPos = (transformMatrix * vec4(position, 1.0)).xyz;
}
// The fragment shader just consumes the result
varying vec3 vTransformedPos;
void main() {
// use it directly; nothing to recompute
}
Special capabilities:
1. The discard statement
void main() {
// Drop the fragment; nothing is written to the framebuffer
if (someCondition) {
discard;
}
gl_FragColor = vec4(1.0);
}
2. Writing the depth value
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
// Override the depth value (GLSL 1.30+ / GLSL ES 3.00)
gl_FragDepth = customDepth;
}
Practical applications:
What is the difference between gl_FragColor and gl_FragData?
Tests: color output variables.
Answer:
gl_FragColor and gl_FragData are both fragment shader outputs that set the fragment's final color. The difference is the number of render targets: gl_FragColor writes to a single target, while gl_FragData serves multiple render targets (MRT).
gl_FragColor (single target):
precision mediump float;
varying vec2 vUv;
uniform sampler2D diffuseMap;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
// Written to the default framebuffer or the currently bound render target
gl_FragColor = texColor;
// Type: vec4 (R, G, B, A)
}
gl_FragColor characteristics:
Type: vec4.
gl_FragData (multiple render targets):
precision mediump float;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
uniform sampler2D diffuseMap;
void main() {
// Deferred-rendering G-buffer outputs
// Target 0: diffuse color
gl_FragData[0] = texture2D(diffuseMap, vUv);
// Target 1: normal
gl_FragData[1] = vec4(vNormal * 0.5 + 0.5, 1.0);
// Target 2: position
gl_FragData[2] = vec4(vPosition, 1.0);
// Target 3: other attributes (metalness, roughness, ...)
gl_FragData[3] = vec4(0.5, 0.8, 0.0, 1.0);
}
gl_FragData characteristics:
A vec4 array indexed as gl_FragData[n], n = 0, 1, 2, …; the maximum count is given by GL_MAX_DRAW_BUFFERS (typically 4-8).
Version differences:
GLSL ES 1.0 (WebGL 1.0)
// Single target
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
// Multiple targets (requires an extension)
#extension GL_EXT_draw_buffers : require
void main() {
gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0);
}
GLSL ES 3.0 (WebGL 2.0)
#version 300 es
precision mediump float;
// Declare the outputs explicitly
layout(location = 0) out vec4 outColor;
layout(location = 1) out vec4 outNormal;
layout(location = 2) out vec4 outPosition;
void main() {
outColor = vec4(1.0, 0.0, 0.0, 1.0);
outNormal = vec4(0.0, 1.0, 0.0, 1.0);
outPosition = vec4(0.0, 0.0, 1.0, 1.0);
}
Use cases:
1. Deferred rendering
// G-buffer pass
#extension GL_EXT_draw_buffers : require
precision mediump float;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
varying vec3 vTangent;
uniform sampler2D albedoMap;
uniform sampler2D normalMap;
uniform sampler2D roughnessMap;
void main() {
// Write to multiple render targets (MRT)
vec4 albedo = texture2D(albedoMap, vUv);
vec3 normal = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
float roughness = texture2D(roughnessMap, vUv).r;
// Route each value to its render target
gl_FragData[0] = albedo; // color buffer
gl_FragData[1] = vec4(normal, 1.0); // normal buffer
gl_FragData[2] = vec4(vPosition, 1.0); // position buffer
gl_FragData[3] = vec4(roughness, 0.0, 0.0, 1.0); // material attribute buffer
}
2. Multi-pass post-processing
// First pass: extract the bright regions
void main() {
vec4 color = texture2D(sceneTexture, vUv);
float brightness = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
gl_FragData[0] = color; // original color
gl_FragData[1] = brightness > 1.0 ? color : vec4(0.0); // bright regions
}
Checking for MRT support:
// WebGL 1.0
const ext = gl.getExtension('WEBGL_draw_buffers');
if (ext) {
const maxDrawBuffers = gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL);
console.log('Max draw buffers:', maxDrawBuffers);
}
// WebGL 2.0
const maxDrawBuffers = gl.getParameter(gl.MAX_DRAW_BUFFERS);
console.log('Max draw buffers:', maxDrawBuffers);
Caveats:
1. The two cannot be mixed
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // ❌
gl_FragData[0] = vec4(0.0, 1.0, 0.0, 1.0); // ❌
// pick one mechanism or the other
}
2. Performance impact
// MRT multiplies bandwidth usage:
// 4 render targets = 4x the memory writes
// be especially careful on mobile GPUs
3. Size consistency
// All render targets must have the same dimensions
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// Attach multiple color attachments
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, ...);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, ...);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT2, ...);
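After attaching, the shader outputs still have to be routed to the attachments; in WebGL 2.0 this is one call (WebGL 1.0 uses the analogous ext.drawBuffersWEBGL):
gl.drawBuffers([
gl.COLOR_ATTACHMENT0, // receives outColor / gl_FragData[0]
gl.COLOR_ATTACHMENT1, // receives outNormal / gl_FragData[1]
gl.COLOR_ATTACHMENT2 // receives outPosition / gl_FragData[2]
]);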
Practical applications:
How to perform texture sampling in fragment shaders?
Tests: texture sampling basics.
Answer:
Texture sampling, reading color values from a texture through a sampler, is one of the most common fragment shader operations. GLSL provides a family of sampling functions covering different texture types and sampling needs.
Basic sampling:
precision mediump float;
// Texture sampler (bound from JavaScript)
uniform sampler2D diffuseMap; // 2D texture
// Texture coordinates (interpolated from the vertex shader)
varying vec2 vUv;
void main() {
// Basic sample
vec4 texColor = texture2D(diffuseMap, vUv);
gl_FragColor = texColor;
}
Main sampling functions:
1. texture2D (2D textures)
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// Standard sample
vec4 color = texture2D(diffuseMap, vUv);
// Sample with a mipmap LOD bias
vec4 colorBias = texture2D(diffuseMap, vUv, bias);
gl_FragColor = color;
}
2. textureCube (cube maps)
uniform samplerCube envMap;
varying vec3 vReflect; // reflection direction
void main() {
// Sample the cube map (takes a 3D direction vector)
vec4 envColor = textureCube(envMap, vReflect);
gl_FragColor = envColor;
}
3. texture2DProj (projective sampling)
uniform sampler2D shadowMap;
varying vec4 vShadowCoord; // homogeneous coordinates
void main() {
// Projective sample (the perspective divide is applied automatically)
vec4 shadowColor = texture2DProj(shadowMap, vShadowCoord);
gl_FragColor = shadowColor;
}
Working with texture coordinates:
1. UV transforms
uniform sampler2D diffuseMap;
uniform vec2 uvOffset;
uniform vec2 uvScale;
uniform float uvRotation;
varying vec2 vUv;
void main() {
// Offset and scale
vec2 uv = vUv * uvScale + uvOffset;
// Rotate
float s = sin(uvRotation);
float c = cos(uvRotation);
mat2 rotationMatrix = mat2(c, -s, s, c);
uv = rotationMatrix * (uv - 0.5) + 0.5;
vec4 texColor = texture2D(diffuseMap, uv);
gl_FragColor = texColor;
}
2. UV repeating and mirroring
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// Option 1: repeat with fract
vec2 uv = fract(vUv * 3.0); // repeat 3 times
// Option 2: mirrored repeat
vec2 mirrorUv = abs(fract(vUv * 0.5) * 2.0 - 1.0);
vec4 texColor = texture2D(diffuseMap, uv);
gl_FragColor = texColor;
}
Multi-texture blending:
1. Simple blend
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform float mixFactor;
varying vec2 vUv;
void main() {
vec4 color1 = texture2D(texture1, vUv);
vec4 color2 = texture2D(texture2, vUv);
// Linear blend
vec4 finalColor = mix(color1, color2, mixFactor);
gl_FragColor = finalColor;
}
2. Driving the blend with a blend map
uniform sampler2D baseTexture;
uniform sampler2D detailTexture;
uniform sampler2D blendMap;
varying vec2 vUv;
void main() {
vec4 base = texture2D(baseTexture, vUv);
vec4 detail = texture2D(detailTexture, vUv * 4.0); // detail texture tiled
float blend = texture2D(blendMap, vUv).r;
vec4 finalColor = mix(base, detail, blend);
gl_FragColor = finalColor;
}
3. Triplanar mapping
uniform sampler2D textureX;
uniform sampler2D textureY;
uniform sampler2D textureZ;
varying vec3 vPosition;
varying vec3 vNormal;
void main() {
vec3 absNormal = abs(vNormal);
// Derive the weights from the normal
vec3 weights = absNormal / (absNormal.x + absNormal.y + absNormal.z);
// Sample along the three axes
vec4 colorX = texture2D(textureX, vPosition.yz) * weights.x;
vec4 colorY = texture2D(textureY, vPosition.xz) * weights.y;
vec4 colorZ = texture2D(textureZ, vPosition.xy) * weights.z;
gl_FragColor = colorX + colorY + colorZ;
}
Advanced techniques:
1. Explicit mipmap sampling
#extension GL_EXT_shader_texture_lod : enable
uniform sampler2D diffuseMap;
varying vec2 vUv;
uniform float mipmapLevel;
void main() {
// Sample a specific mipmap level
vec4 color = texture2DLodEXT(diffuseMap, vUv, mipmapLevel);
gl_FragColor = color;
}
2. Derivatives and filtering
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// Partial derivatives of the UV coordinates
vec2 dx = dFdx(vUv);
vec2 dy = dFdy(vUv);
// Gradient-based filtered sample
vec4 color = texture2DGradEXT(diffuseMap, vUv, dx, dy);
gl_FragColor = color;
}
3. Texture array sampling (WebGL 2.0)
#version 300 es
precision mediump float;
precision mediump sampler2DArray;
uniform sampler2DArray textureArray;
in vec2 vUv;
in float vTextureIndex;
out vec4 fragColor;
void main() {
// Sample from the texture array
vec4 color = texture(textureArray, vec3(vUv, vTextureIndex));
fragColor = color;
}
Texture filtering:
// The GPU picks the filter according to the texture parameters
// Magnification:
// - NEAREST: nearest neighbor (blocky)
// - LINEAR: bilinear interpolation (smooth)
// Minification:
// - NEAREST: nearest neighbor
// - LINEAR: bilinear interpolation
// - NEAREST_MIPMAP_NEAREST: nearest mip level, nearest sample
// - LINEAR_MIPMAP_NEAREST: nearest mip level, bilinear sample
// - NEAREST_MIPMAP_LINEAR: blend two mip levels, nearest samples
// - LINEAR_MIPMAP_LINEAR: full trilinear filtering
性能优化:
// 1. 减少纹理采样次数
// 不好
void main() {
vec4 c1 = texture2D(tex, vUv);
vec4 c2 = texture2D(tex, vUv); // 重复采样
gl_FragColor = (c1 + c2) * 0.5;
}
// 好
void main() {
vec4 c = texture2D(tex, vUv);
gl_FragColor = c; // 只采样一次
}
// 2. 使用合适的纹理格式和大小
// - 2的幂次方尺寸(256, 512, 1024)更高效
// - 压缩纹理格式(DXT, ETC, ASTC)减少带宽
// 3. Mipmap使用
// - 启用mipmap可以提高缩小时的性能和质量
// - 减少纹理缓存未命中
实际应用:
How to implement simple color blending in fragment shaders?
How to implement simple color blending in fragment shaders?
考察点:颜色计算基础。
答案:
颜色混合是片段着色器中的基础操作,用于组合多个颜色值创建最终输出。GLSL提供了内置函数和数学运算来实现各种混合效果。
基础混合方法:
1. 线性插值(mix函数)
precision mediump float;
uniform vec3 color1;
uniform vec3 color2;
uniform float mixFactor; // 0.0到1.0
void main() {
// 线性混合:result = color1 * (1-t) + color2 * t
vec3 blendedColor = mix(color1, color2, mixFactor);
gl_FragColor = vec4(blendedColor, 1.0);
}
2. 加法混合
varying vec4 vColor1;
varying vec4 vColor2;
void main() {
// 简单相加
vec3 result = vColor1.rgb + vColor2.rgb;
// 限制在0-1范围
result = clamp(result, 0.0, 1.0);
// 或者使用min
result = min(result, vec3(1.0));
gl_FragColor = vec4(result, 1.0);
}
3. 乘法混合
uniform sampler2D diffuseMap;
uniform vec3 tintColor;
varying vec2 vUv;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
// 乘法混合(调色)
vec3 result = texColor.rgb * tintColor;
gl_FragColor = vec4(result, texColor.a);
}
纹理与颜色混合:
1. 纹理调色
uniform sampler2D diffuseMap;
uniform vec3 tintColor;
uniform float tintStrength;
varying vec2 vUv;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
// 线性混合纹理颜色和调色
vec3 tinted = mix(texColor.rgb, texColor.rgb * tintColor, tintStrength);
gl_FragColor = vec4(tinted, texColor.a);
}
2. 顶点颜色与纹理混合
uniform sampler2D diffuseMap;
varying vec2 vUv;
varying vec4 vColor;
void main() {
vec4 texColor = texture2D(diffuseMap, vUv);
// 纹理颜色 × 顶点颜色
vec4 finalColor = texColor * vColor;
gl_FragColor = finalColor;
}
常见混合模式:
1. 正常混合(Alpha混合)
vec4 source = texture2D(sourceTexture, vUv);
vec4 destination = texture2D(destTexture, vUv);
// Alpha混合公式
vec3 result = source.rgb * source.a + destination.rgb * (1.0 - source.a);
float alpha = source.a + destination.a * (1.0 - source.a);
gl_FragColor = vec4(result, alpha);
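补充:若使用预乘alpha(premultiplied alpha)的纹理,over混合公式更简单,还能避免半透明边缘出现黑边(示意,假设source.rgb已预先乘过source.a):
// 预乘alpha的over混合:源RGB直接相加,不再乘source.a
vec3 result = source.rgb + destination.rgb * (1.0 - source.a);
float alpha = source.a + destination.a * (1.0 - source.a);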
2. 叠加混合(Overlay)
vec3 overlay(vec3 base, vec3 blend) {
return mix(
2.0 * base * blend,
1.0 - 2.0 * (1.0 - base) * (1.0 - blend),
step(0.5, base)
);
}
void main() {
vec3 baseColor = texture2D(baseTexture, vUv).rgb;
vec3 blendColor = texture2D(blendTexture, vUv).rgb;
vec3 result = overlay(baseColor, blendColor);
gl_FragColor = vec4(result, 1.0);
}
3. 屏幕混合(Screen)
vec3 screen(vec3 base, vec3 blend) {
return 1.0 - (1.0 - base) * (1.0 - blend);
}
void main() {
vec3 base = texture2D(texture1, vUv).rgb;
vec3 blend = texture2D(texture2, vUv).rgb;
vec3 result = screen(base, blend);
gl_FragColor = vec4(result, 1.0);
}
4. 正片叠底(Multiply)
void main() {
vec3 base = texture2D(texture1, vUv).rgb;
vec3 blend = texture2D(texture2, vUv).rgb;
vec3 result = base * blend;
gl_FragColor = vec4(result, 1.0);
}
渐变效果:
1. 线性渐变
uniform vec3 color1;
uniform vec3 color2;
varying vec2 vUv;
void main() {
// 水平渐变
vec3 gradient = mix(color1, color2, vUv.x);
// 垂直渐变
// vec3 gradient = mix(color1, color2, vUv.y);
// 对角渐变
// float t = (vUv.x + vUv.y) * 0.5;
// vec3 gradient = mix(color1, color2, t);
gl_FragColor = vec4(gradient, 1.0);
}
2. 径向渐变
uniform vec3 centerColor;
uniform vec3 edgeColor;
uniform vec2 center;
varying vec2 vUv;
void main() {
float dist = distance(vUv, center);
// 径向渐变
vec3 gradient = mix(centerColor, edgeColor, dist);
gl_FragColor = vec4(gradient, 1.0);
}
3. 平滑过渡
uniform vec3 color1;
uniform vec3 color2;
varying vec2 vUv;
void main() {
// 使用smoothstep创建平滑过渡
float t = smoothstep(0.3, 0.7, vUv.x);
vec3 gradient = mix(color1, color2, t);
gl_FragColor = vec4(gradient, 1.0);
}
多颜色混合:
1. 三色混合
uniform vec3 color1;
uniform vec3 color2;
uniform vec3 color3;
varying vec2 vUv;
void main() {
float t = vUv.x;
vec3 result;
if (t < 0.5) {
result = mix(color1, color2, t * 2.0);
} else {
result = mix(color2, color3, (t - 0.5) * 2.0);
}
gl_FragColor = vec4(result, 1.0);
}
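同样的三段渐变也可以写成无分支形式,避免if/else在部分GPU上的发散开销(示意):
// 无分支三色渐变:用clamp把t分别映射到前后两段
vec3 blended = mix(color1, color2, clamp(t * 2.0, 0.0, 1.0));
blended = mix(blended, color3, clamp(t * 2.0 - 1.0, 0.0, 1.0));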
2. 基于遮罩的混合
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform sampler2D maskTexture;
varying vec2 vUv;
void main() {
vec4 color1 = texture2D(texture1, vUv);
vec4 color2 = texture2D(texture2, vUv);
float mask = texture2D(maskTexture, vUv).r;
// 使用遮罩混合两个纹理
vec4 result = mix(color1, color2, mask);
gl_FragColor = result;
}
实用混合函数库:
// 加法混合
vec3 add(vec3 base, vec3 blend) {
return min(base + blend, vec3(1.0));
}
// 减法混合
vec3 subtract(vec3 base, vec3 blend) {
return max(base - blend, vec3(0.0));
}
// 变暗混合
vec3 darken(vec3 base, vec3 blend) {
return min(base, blend);
}
// 变亮混合
vec3 lighten(vec3 base, vec3 blend) {
return max(base, blend);
}
// 柔光混合
vec3 softLight(vec3 base, vec3 blend) {
return mix(
2.0 * base * blend + base * base * (1.0 - 2.0 * blend),
sqrt(base) * (2.0 * blend - 1.0) + 2.0 * base * (1.0 - blend),
step(0.5, blend)
);
}
void main() {
vec3 base = texture2D(texture1, vUv).rgb;
vec3 blend = texture2D(texture2, vUv).rgb;
// 选择需要的混合模式
vec3 result = softLight(base, blend);
gl_FragColor = vec4(result, 1.0);
}
实际应用:
What are the commonly used mathematical functions in GLSL? What are their application scenarios?
What are the commonly used mathematical functions in GLSL? What are their application scenarios?
考察点:数学函数应用。
答案:
GLSL提供了丰富的内置数学函数,涵盖三角函数、指数函数、通用数学函数等。这些函数在GPU上高度优化,是实现各种着色器效果的基础工具。合理使用数学函数能够实现复杂的视觉效果和物理模拟。
常用数学函数分类:
1. 三角函数
float angle = 1.57; // 90度(弧度制)
// 基础三角函数
float s = sin(angle); // 正弦值
float c = cos(angle); // 余弦值
float t = tan(angle); // 正切值
// 反三角函数
float asinVal = asin(0.5); // 反正弦
float acosVal = acos(0.5); // 反余弦
float atanVal = atan(1.0); // 反正切
vec2 dir = vec2(1.0, 1.0);
float atan2Val = atan(dir.y, dir.x); // 两参数反正切(可区分象限)
// 弧度角度转换
float rad = radians(180.0); // 度转弧度:π(注意:变量不可与内置函数radians同名)
float deg = degrees(3.14159); // 弧度转度:约180
应用场景:
2. 指数和对数函数
float x = 2.0;
// 指数函数
float powVal = pow(x, 3.0); // x的3次方:8.0
float expVal = exp(x); // e的x次方
float exp2Val = exp2(x); // 2的x次方:4.0
// 对数函数
float logVal = log(x); // 自然对数
float log2Val = log2(x); // 以2为底的对数:1.0
// 平方根
float sqrtVal = sqrt(x); // 平方根:1.414
float invSqrtVal = inversesqrt(x); // 平方根倒数(快速)
应用场景:
3. 通用数学函数
float value = -3.5;
// 绝对值和符号
float absVal = abs(value); // 绝对值:3.5
float signVal = sign(value); // 符号:-1.0
// 取整函数
float floorVal = floor(value); // 向下取整:-4.0
float ceilVal = ceil(value); // 向上取整:-3.0
float fractVal = fract(value); // 小数部分:0.5
float roundVal = round(value); // 四舍五入(GLSL 1.30+)
// 取值范围控制
float minVal = min(value, 0.0); // 最小值:-3.5
float maxVal = max(value, 0.0); // 最大值:0.0
float clampVal = clamp(value, -2.0, 2.0); // 限制范围:-2.0
// 取模
float modVal = mod(value, 2.0); // 取模运算
应用场景:
4. 插值和混合函数
float a = 0.0;
float b = 1.0;
float t = 0.5;
// 线性插值
float mixVal = mix(a, b, t); // 线性混合:0.5
// 阶跃函数
float stepVal = step(0.5, t); // t >= 0.5 ? 1.0 : 0.0
// 平滑插值
float smoothVal = smoothstep(0.0, 1.0, t); // 平滑过渡:0.5
应用场景:
实用代码示例:
1. 波浪效果
// 顶点着色器中的波浪变形
uniform float time;
attribute vec3 position;
void main() {
vec3 pos = position;
// 使用sin函数创建波浪
float wave = sin(pos.x * 2.0 + time) * 0.5;
wave += sin(pos.z * 1.5 + time * 0.7) * 0.3;
pos.y += wave;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
2. 光照衰减
// 片段着色器中的距离衰减
uniform vec3 lightPosition;
varying vec3 vWorldPosition;
void main() {
float distance = length(lightPosition - vWorldPosition);
// 使用平方衰减(物理正确)
float attenuation = 1.0 / (1.0 + distance * distance);
// 或使用更平滑的衰减
float smoothAttenuation = 1.0 / pow(distance + 1.0, 2.0);
vec3 finalColor = lightColor * attenuation;
gl_FragColor = vec4(finalColor, 1.0);
}
3. 程序化纹理
// 使用数学函数生成棋盘格图案
varying vec2 vUv;
void main() {
// 使用floor和mod创建棋盘格
float checker = mod(floor(vUv.x * 8.0) + floor(vUv.y * 8.0), 2.0);
vec3 color = vec3(checker);
gl_FragColor = vec4(color, 1.0);
}
4. 脉冲动画
uniform float time;
varying vec2 vUv;
void main() {
// 组合多个三角函数创建复杂动画
float pulse = sin(time * 3.0) * 0.5 + 0.5;
float ring = sin((length(vUv - 0.5) - time * 0.1) * 20.0) * 0.5 + 0.5;
vec3 color = vec3(pulse * ring);
gl_FragColor = vec4(color, 1.0);
}
性能优化建议:
- x * x 比 pow(x, 2.0) 更快
- inversesqrt(x) 比 1.0 / sqrt(x) 更快
函数组合技巧:
// 创建周期性边界框
float box(vec2 p, vec2 size) {
vec2 d = abs(p) - size;
return length(max(d, 0.0)) + min(max(d.x, d.y), 0.0);
}
// 平滑最小值(用于形状混合)
float smin(float a, float b, float k) {
float h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
return mix(b, a, h) - k * h * (1.0 - h);
}
// 重映射函数
float remap(float value, float inMin, float inMax, float outMin, float outMax) {
return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}
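remap的一个使用示意(noiseValue为假设的输入变量):
// 把[-1,1]范围的噪声值重映射到[0.2,0.8]的亮度区间
float brightness = remap(noiseValue, -1.0, 1.0, 0.2, 0.8);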
实际应用:
What are the vector and matrix operation functions in GLSL?
What are the vector and matrix operation functions in GLSL?
考察点:向量矩阵函数。
答案:
GLSL提供了专门的向量和矩阵运算函数,这些函数针对GPU的并行架构优化,能够高效处理图形计算中常见的线性代数运算。掌握这些函数是编写高性能着色器的关键。
向量运算函数:
1. 向量长度和距离
vec3 v = vec3(3.0, 4.0, 0.0);
vec3 a = vec3(1.0, 2.0, 3.0);
vec3 b = vec3(4.0, 6.0, 8.0);
// 向量长度(模)
float len = length(v); // sqrt(3²+4²+0²) = 5.0
// 两点距离
float dist = distance(a, b); // length(b - a)
// 点乘(标量积)
float dotProduct = dot(a, b); // a.x*b.x + a.y*b.y + a.z*b.z
// 叉乘(向量积)- 仅适用于vec3
vec3 crossProduct = cross(a, b); // 垂直于a和b的向量
2. 向量归一化和反射
vec3 v = vec3(3.0, 4.0, 0.0);
vec3 normal = vec3(0.0, 1.0, 0.0);
// 归一化(单位向量)
vec3 normalized = normalize(v); // v / length(v)
// 法向量朝向调整(变量不可与内置函数faceforward同名)
vec3 facingNormal = faceforward(normal, v, normal); // 使法向量与入射向量v方向相对
// 反射向量
vec3 incident = normalize(vec3(1.0, -1.0, 0.0));
vec3 reflected = reflect(incident, normal); // 光线反射
// 折射向量
float eta = 0.67; // 折射率比n1/n2(如空气到玻璃:1.0/1.5≈0.67)
vec3 refracted = refract(incident, normal, eta); // 光线折射
矩阵运算函数:
1. 矩阵乘法
mat4 modelMatrix = mat4(1.0);
mat4 viewMatrix = mat4(1.0);
mat4 projectionMatrix = mat4(1.0);
// 矩阵 × 矩阵
mat4 mvMatrix = viewMatrix * modelMatrix;
mat4 mvpMatrix = projectionMatrix * mvMatrix;
// 矩阵 × 向量
vec4 position = vec4(1.0, 2.0, 3.0, 1.0);
vec4 transformed = mvpMatrix * position;
// 向量 × 矩阵(注意:不同于矩阵×向量)
vec4 result = position * mvpMatrix; // 等价于 transpose(mvpMatrix) * position
2. 矩阵转置和逆
mat3 m = mat3(1.0);
// 转置矩阵(GLSL 1.20+)
mat3 transposed = transpose(m);
// 逆矩阵(GLSL 1.40+)
mat3 inverted = inverse(m);
// 行列式(GLSL 1.50+)
float det = determinant(m);
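注意transpose/inverse/determinant在WebGL 1.0(GLSL ES 1.0)中不可用。若确实需要在着色器内求逆,下面是一个伴随矩阵法的mat3求逆示意(通常建议在CPU端计算后以uniform传入):
// 伴随矩阵法求mat3逆(GLSL ES 1.0可用的示意实现,未处理行列式为0的情况)
mat3 inverse3(mat3 m) {
float a00 = m[0][0], a01 = m[0][1], a02 = m[0][2];
float a10 = m[1][0], a11 = m[1][1], a12 = m[1][2];
float a20 = m[2][0], a21 = m[2][1], a22 = m[2][2];
float b01 = a22 * a11 - a12 * a21;
float b11 = -a22 * a10 + a12 * a20;
float b21 = a21 * a10 - a11 * a20;
float det = a00 * b01 + a01 * b11 + a02 * b21;
// 按列主序填充伴随矩阵,再除以行列式
return mat3(
b01, -a22 * a01 + a02 * a21, a12 * a01 - a02 * a11,
b11, a22 * a00 - a02 * a20, -a12 * a00 + a02 * a10,
b21, -a21 * a00 + a01 * a20, a11 * a00 - a01 * a10
) / det;
}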
实用向量函数示例:
1. 光照计算基础
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
void main() {
// 归一化法向量
vec3 N = normalize(vNormal);
// 光照方向
vec3 L = normalize(lightPosition - vPosition);
// 视线方向
vec3 V = normalize(cameraPosition - vPosition);
// 漫反射强度
float diffuse = max(dot(N, L), 0.0);
// 反射向量
vec3 R = reflect(-L, N);
// 镜面反射强度
float specular = pow(max(dot(R, V), 0.0), 32.0);
vec3 color = vec3(diffuse + specular);
gl_FragColor = vec4(color, 1.0);
}
2. 环境映射(反射和折射)
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 cameraPosition;
uniform samplerCube envMap;
uniform float refractionRatio;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// 反射
vec3 reflectDir = reflect(-V, N);
vec4 reflectColor = textureCube(envMap, reflectDir);
// 折射
vec3 refractDir = refract(-V, N, refractionRatio);
vec4 refractColor = textureCube(envMap, refractDir);
// 菲涅尔效应
float fresnel = pow(1.0 - dot(V, N), 3.0);
// 混合反射和折射
vec4 finalColor = mix(refractColor, reflectColor, fresnel);
gl_FragColor = finalColor;
}
3. 向量投影和正交化
// 向量投影
vec3 projectVector(vec3 a, vec3 b) {
return dot(a, b) / dot(b, b) * b;
}
// 格拉姆-施密特正交化
void orthogonalize(inout vec3 tangent, vec3 normal) {
tangent = normalize(tangent - dot(tangent, normal) * normal);
}
// 构建切线空间矩阵
mat3 buildTBN(vec3 normal, vec3 tangent) {
vec3 N = normalize(normal);
vec3 T = normalize(tangent);
T = normalize(T - dot(T, N) * N); // 格拉姆-施密特正交化
vec3 B = cross(N, T); // 副切线
return mat3(T, B, N);
}
矩阵变换示例:
1. 手动构建变换矩阵
// 平移矩阵
mat4 translate(vec3 v) {
return mat4(
1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
v.x, v.y, v.z, 1.0
);
}
// 缩放矩阵
mat4 scale(vec3 v) {
return mat4(
v.x, 0.0, 0.0, 0.0,
0.0, v.y, 0.0, 0.0,
0.0, 0.0, v.z, 0.0,
0.0, 0.0, 0.0, 1.0
);
}
// 绕Z轴旋转矩阵
mat4 rotateZ(float angle) {
float c = cos(angle);
float s = sin(angle);
return mat4(
c, s, 0.0, 0.0,
-s, c, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0
);
}
2. 矩阵分解和提取
// 从变换矩阵提取缩放
vec3 extractScale(mat4 m) {
return vec3(
length(vec3(m[0][0], m[0][1], m[0][2])),
length(vec3(m[1][0], m[1][1], m[1][2])),
length(vec3(m[2][0], m[2][1], m[2][2]))
);
}
// 从变换矩阵提取旋转
mat3 extractRotation(mat4 m) {
vec3 scale = extractScale(m);
return mat3(
m[0].xyz / scale.x,
m[1].xyz / scale.y,
m[2].xyz / scale.z
);
}
// 从变换矩阵提取平移
vec3 extractTranslation(mat4 m) {
return m[3].xyz;
}
性能优化技巧:
// 优化1:避免重复计算
// 不好
void main() {
vec3 lightDir = normalize(lightPos - vPos);
float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
float specular = pow(max(dot(reflect(-lightDir, normalize(vNormal)), viewDir), 0.0), 32.0);
// normalize(vNormal)被调用了两次
}
// 好
void main() {
vec3 N = normalize(vNormal); // 只计算一次
vec3 L = normalize(lightPos - vPos);
float diffuse = max(dot(N, L), 0.0);
float specular = pow(max(dot(reflect(-L, N), viewDir), 0.0), 32.0);
}
// 优化2:使用内置函数而非手动实现
// 不好:手动计算归一化
vec3 normalize_manual(vec3 v) {
float len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
return v / len;
}
// 好:使用内置函数
vec3 normalized = normalize(v); // GPU优化的实现
向量swizzle组合:
vec4 color = vec4(1.0, 0.5, 0.25, 1.0);
// swizzle读取
vec3 rgb = color.rgb;
vec2 rg = color.rg;
float r = color.r;
// swizzle重排
vec4 bgra = color.bgra;
vec3 grb = color.grb;
// swizzle写入
color.rgb = vec3(0.0);
color.xy = vec2(1.0);
// 重复分量
vec3 rrr = color.rrr;
vec4 xxxx = color.xxxx;
实际应用:
What are the texture lookup functions in GLSL? What are their differences?
What are the texture lookup functions in GLSL? What are their differences?
考察点:纹理函数对比。
答案:
GLSL提供了多种纹理查找(采样)函数,支持不同类型的纹理和采样方式。这些函数在不同版本的GLSL中有所不同,理解它们的区别对于正确编写着色器至关重要。
GLSL ES 1.0 / WebGL 1.0 纹理函数:
1. 基础纹理采样
precision mediump float;
uniform sampler2D diffuseMap; // 2D纹理采样器
uniform samplerCube envMap; // 立方体纹理采样器
varying vec2 vUv;
varying vec3 vReflect;
void main() {
// 2D纹理采样
vec4 color = texture2D(diffuseMap, vUv);
// 立方体纹理采样
vec4 envColor = textureCube(envMap, vReflect);
gl_FragColor = color;
}
2. 带LOD(Level of Detail)的采样
// 顶点着色器中使用(顶点阶段无自动导数,必须显式指定LOD)
vec4 color = texture2DLod(diffuseMap, vUv, 0.0);
// 片段着色器中指定mipmap级别
vec4 color = texture2DLodEXT(diffuseMap, vUv, 2.0); // 需要扩展
// 立方体纹理LOD
vec4 envColor = textureCubeLodEXT(envMap, vReflect, 1.0);
3. 投影纹理采样
// 透视投影纹理(自动除以w分量)
vec4 projCoord = vec4(vUv.x, vUv.y, 0.0, 2.0);
vec4 color = texture2DProj(diffuseMap, projCoord);
// 等价于: texture2D(diffuseMap, projCoord.xy / projCoord.w)
// 4D投影坐标
vec4 shadowCoord = vec4(vUv, depth, 1.0);
vec4 shadowColor = texture2DProj(shadowMap, shadowCoord);
GLSL ES 3.0 / WebGL 2.0 纹理函数:
1. 统一的texture函数
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
uniform samplerCube envMap;
uniform sampler2DArray textureArray;
uniform sampler3D volumeTexture;
in vec2 vUv;
in vec3 vReflect;
out vec4 fragColor;
void main() {
// 统一使用texture函数(自动根据采样器类型选择)
vec4 color2D = texture(diffuseMap, vUv);
vec4 colorCube = texture(envMap, vReflect);
vec4 colorArray = texture(textureArray, vec3(vUv, 0.0));
vec4 color3D = texture(volumeTexture, vec3(vUv, 0.5));
fragColor = color2D;
}
2. 纹理尺寸查询
// 获取纹理尺寸
ivec2 texSize = textureSize(diffuseMap, 0); // 0是mipmap级别
// 计算像素坐标
vec2 pixelCoord = vUv * vec2(texSize);
3. 纹理偏移采样
// 偏移采样(用于边缘检测、模糊等)
vec4 color = textureOffset(diffuseMap, vUv, ivec2(1, 0));
// 多次偏移采样实现模糊
vec4 blur = vec4(0.0);
blur += textureOffset(diffuseMap, vUv, ivec2(-1, -1));
blur += textureOffset(diffuseMap, vUv, ivec2( 0, -1));
blur += textureOffset(diffuseMap, vUv, ivec2( 1, -1));
blur += textureOffset(diffuseMap, vUv, ivec2(-1, 0));
blur += textureOffset(diffuseMap, vUv, ivec2( 0, 0));
blur += textureOffset(diffuseMap, vUv, ivec2( 1, 0));
blur += textureOffset(diffuseMap, vUv, ivec2(-1, 1));
blur += textureOffset(diffuseMap, vUv, ivec2( 0, 1));
blur += textureOffset(diffuseMap, vUv, ivec2( 1, 1));
blur /= 9.0;
4. 手动梯度采样
// 使用显式梯度进行采样(用于自定义mipmap选择)
vec2 dUvdx = dFdx(vUv);
vec2 dUvdy = dFdy(vUv);
vec4 color = textureGrad(diffuseMap, vUv, dUvdx, dUvdy);
5. 纹理采集(Texel Fetch)
// 直接读取指定像素(不使用滤波)
ivec2 coord = ivec2(100, 200);
vec4 texel = texelFetch(diffuseMap, coord, 0); // 0是mipmap级别
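texelFetch的典型用途是"数据纹理":把纹理当作只读数组使用。下面是一个示意(dataTexture、vInstanceId等名称均为假设):
#version 300 es
precision highp float;
uniform sampler2D dataTexture; // 假设:每个纹素存一个粒子的位置(RGBA浮点纹理)
flat in int vInstanceId; // 假设:由顶点着色器以flat方式传入
out vec4 fragColor;
void main() {
// 按整数坐标直接读取第vInstanceId个粒子的数据,无滤波、无归一化坐标
vec4 particlePos = texelFetch(dataTexture, ivec2(vInstanceId, 0), 0);
fragColor = vec4(particlePos.xyz, 1.0);
}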
纹理函数对比表:
| 函数 | GLSL版本 | 采样器类型 | 特点 | 用途 |
|---|---|---|---|---|
| `texture2D` | ES 1.0 | sampler2D | 基础2D采样 | 最常用 |
| `textureCube` | ES 1.0 | samplerCube | 立方体贴图 | 环境映射 |
| `texture2DProj` | ES 1.0 | sampler2D | 投影纹理 | 阴影映射 |
| `texture2DLod` | ES 1.0 | sampler2D | 指定LOD | 顶点着色器 |
| `texture` | ES 3.0 | 所有类型 | 统一接口 | 通用采样 |
| `textureLod` | ES 3.0 | 所有类型 | 指定LOD | Mipmap控制 |
| `textureSize` | ES 3.0 | 所有类型 | 查询尺寸 | 像素计算 |
| `textureOffset` | ES 3.0 | 2D/3D/Array | 偏移采样 | 滤波效果 |
| `textureGrad` | ES 3.0 | 所有类型 | 手动梯度 | 自定义LOD |
| `texelFetch` | ES 3.0 | 所有类型 | 直接读取 | 数据纹理 |
实用代码示例:
1. 多纹理混合(WebGL 1.0)
precision mediump float;
uniform sampler2D texture1;
uniform sampler2D texture2;
uniform sampler2D blendMask;
uniform float mixFactor;
varying vec2 vUv;
void main() {
vec4 color1 = texture2D(texture1, vUv);
vec4 color2 = texture2D(texture2, vUv);
vec4 mask = texture2D(blendMask, vUv);
// 使用遮罩混合
vec4 finalColor = mix(color1, color2, mask.r * mixFactor);
gl_FragColor = finalColor;
}
2. 法线贴图采样
precision mediump float;
uniform sampler2D normalMap;
varying vec2 vUv;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vNormal;
void main() {
// 采样法线贴图
vec3 normalTex = texture2D(normalMap, vUv).xyz;
// 从[0, 1]转换到[-1, 1]
normalTex = normalTex * 2.0 - 1.0;
// 构建TBN矩阵(切线空间到世界空间)
mat3 TBN = mat3(vTangent, vBitangent, vNormal);
// 转换法向量到世界空间
vec3 normal = normalize(TBN * normalTex);
gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}
3. 环境映射(立方体贴图)
precision mediump float;
uniform samplerCube envMap;
uniform vec3 cameraPosition;
varying vec3 vPosition;
varying vec3 vNormal;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// 计算反射向量
vec3 R = reflect(-V, N);
// 采样立方体贴图
vec4 envColor = textureCube(envMap, R);
gl_FragColor = envColor;
}
4. 纹理数组采样(WebGL 2.0)
#version 300 es
precision mediump float;
uniform sampler2DArray textureArray;
uniform int layerIndex;
in vec2 vUv;
out vec4 fragColor;
void main() {
// 采样纹理数组的指定层
vec3 coord = vec3(vUv, float(layerIndex));
vec4 color = texture(textureArray, coord);
fragColor = color;
}
5. 自定义Mipmap级别(WebGL 2.0)
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
uniform float lodLevel;
in vec2 vUv;
out vec4 fragColor;
void main() {
// 手动指定mipmap级别
vec4 color = textureLod(diffuseMap, vUv, lodLevel);
fragColor = color;
}
6. 边缘检测(Sobel算子)
#version 300 es
precision mediump float;
uniform sampler2D inputTexture;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Sobel算子
float sobelX = 0.0;
float sobelY = 0.0;
// X方向卷积核
sobelX += textureOffset(inputTexture, vUv, ivec2(-1, -1)).r * -1.0;
sobelX += textureOffset(inputTexture, vUv, ivec2(-1, 0)).r * -2.0;
sobelX += textureOffset(inputTexture, vUv, ivec2(-1, 1)).r * -1.0;
sobelX += textureOffset(inputTexture, vUv, ivec2( 1, -1)).r * 1.0;
sobelX += textureOffset(inputTexture, vUv, ivec2( 1, 0)).r * 2.0;
sobelX += textureOffset(inputTexture, vUv, ivec2( 1, 1)).r * 1.0;
// Y方向卷积核
sobelY += textureOffset(inputTexture, vUv, ivec2(-1, -1)).r * -1.0;
sobelY += textureOffset(inputTexture, vUv, ivec2( 0, -1)).r * -2.0;
sobelY += textureOffset(inputTexture, vUv, ivec2( 1, -1)).r * -1.0;
sobelY += textureOffset(inputTexture, vUv, ivec2(-1, 1)).r * 1.0;
sobelY += textureOffset(inputTexture, vUv, ivec2( 0, 1)).r * 2.0;
sobelY += textureOffset(inputTexture, vUv, ivec2( 1, 1)).r * 1.0;
float edge = sqrt(sobelX * sobelX + sobelY * sobelY);
fragColor = vec4(vec3(edge), 1.0);
}
性能优化建议:
- 顶点着色器中使用 texture2DLod 显式指定LOD,避免隐式导数计算
常见应用:
How to use GLSL geometric functions (length, distance, dot, cross, etc.)?
How to use GLSL geometric functions (length, distance, dot, cross, etc.)?
考察点:几何计算函数。
答案:
GLSL的几何函数是图形编程中最基础和常用的工具,用于处理向量和空间计算。这些函数在光照计算、碰撞检测、距离场等场景中必不可少。
核心几何函数:
1. length - 向量长度
// 计算向量的模(长度)
vec3 v = vec3(3.0, 4.0, 0.0);
float len = length(v); // sqrt(3² + 4² + 0²) = 5.0
// 实际应用:计算距离衰减
varying vec3 vPosition;
uniform vec3 lightPosition;
void main() {
float dist = length(lightPosition - vPosition);
float attenuation = 1.0 / (1.0 + dist * dist);
gl_FragColor = vec4(vec3(attenuation), 1.0);
}
2. distance - 两点距离
// 计算两点之间的欧几里得距离
vec3 point1 = vec3(0.0, 0.0, 0.0);
vec3 point2 = vec3(3.0, 4.0, 0.0);
float dist = distance(point1, point2); // 5.0
// 等价于: length(point2 - point1)
// 实际应用:距离场绘制
varying vec2 vUv;
uniform vec2 center;
void main() {
float d = distance(vUv, center);
float circle = step(d, 0.5); // 半径0.5的圆
gl_FragColor = vec4(vec3(circle), 1.0);
}
3. dot - 点乘(标量积)
// 计算两个向量的点乘
vec3 a = vec3(1.0, 0.0, 0.0);
vec3 b = vec3(0.0, 1.0, 0.0);
float dotProduct = dot(a, b); // 0.0 (垂直)
// 点乘的几何意义:
// dot(a, b) = |a| * |b| * cos(θ)
// - 结果 > 0:夹角 < 90°(同向)
// - 结果 = 0:夹角 = 90°(垂直)
// - 结果 < 0:夹角 > 90°(反向)
// 实际应用:Lambert漫反射
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
// Lambert模型:光照强度与法向量和光线方向夹角的余弦成正比
float diffuse = max(dot(N, L), 0.0);
gl_FragColor = vec4(vec3(diffuse), 1.0);
}
4. cross - 叉乘(向量积)
// 计算两个vec3向量的叉乘(仅支持vec3)
vec3 a = vec3(1.0, 0.0, 0.0);
vec3 b = vec3(0.0, 1.0, 0.0);
vec3 c = cross(a, b); // vec3(0.0, 0.0, 1.0)
// 叉乘的特性:
// 1. 结果向量垂直于a和b
// 2. |c| = |a| * |b| * sin(θ)
// 3. 遵循右手定则
// 实际应用:构建TBN矩阵(切线空间)
attribute vec3 position;
attribute vec3 normal;
attribute vec3 tangent;
varying mat3 vTBN;
void main() {
vec3 N = normalize(normalMatrix * normal);
vec3 T = normalize(normalMatrix * tangent);
// 计算副切线(Bitangent)
vec3 B = cross(N, T);
// 构建TBN矩阵
vTBN = mat3(T, B, N);
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
5. normalize - 归一化
// 将向量转换为单位向量(长度为1)
vec3 v = vec3(3.0, 4.0, 0.0);
vec3 normalized = normalize(v); // vec3(0.6, 0.8, 0.0)
// 等价于: v / length(v)
// 注意:对零向量normalize会产生未定义行为
vec3 zero = vec3(0.0);
vec3 bad = normalize(zero); // 未定义!
// 安全的归一化
vec3 safeNormalize(vec3 v) {
float len = length(v);
return len > 0.0 ? v / len : vec3(0.0, 0.0, 1.0);
}
6. faceforward - 面向正确方向的法向量
// 确保法向量朝向观察者
vec3 N = vec3(0.0, 0.0, 1.0); // 原始法向量
vec3 I = vec3(0.0, 0.0, -1.0); // 入射向量
vec3 Nref = vec3(0.0, 0.0, 1.0); // 参考法向量
// 如果dot(Nref, I) < 0,返回N,否则返回-N
vec3 facing = faceforward(N, I, Nref);
// 实际应用:双面材质
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
// 确保法向量总是朝向摄像机(此处vViewDirection应为入射方向,即摄像机指向表面)
vec3 normal = faceforward(vNormal, vViewDirection, vNormal);
gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}
7. reflect - 反射向量
// 计算反射向量
vec3 I = normalize(vec3(1.0, -1.0, 0.0)); // 入射向量
vec3 N = vec3(0.0, 1.0, 0.0); // 法向量
vec3 R = reflect(I, N); // 反射向量
// 公式: R = I - 2.0 * dot(N, I) * N
// 实际应用:镜面反射(Phong高光)
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
// 计算反射光线
vec3 R = reflect(-L, N);
// Phong高光
float specular = pow(max(dot(R, V), 0.0), 32.0);
gl_FragColor = vec4(vec3(specular), 1.0);
}
8. refract - 折射向量
// 计算折射向量(Snell定律)
vec3 I = normalize(vec3(0.0, -1.0, 0.0)); // 入射向量
vec3 N = vec3(0.0, 1.0, 0.0); // 法向量
float eta = 0.75; // 折射率比(n1/n2,如空气到水:1.0/1.33≈0.75)
vec3 T = refract(I, N, eta); // 折射向量
// 全内反射情况:当入射角度大于临界角时,返回零向量
// 临界角 = asin(n2/n1)
// 实际应用:玻璃/水材质
precision mediump float;
uniform samplerCube envMap;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 cameraPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// 折射(空气到玻璃:1.0/1.52)
vec3 refractVec = refract(-V, N, 1.0 / 1.52);
vec4 refractColor = textureCube(envMap, refractVec);
// 反射
vec3 reflectVec = reflect(-V, N);
vec4 reflectColor = textureCube(envMap, reflectVec);
// 菲涅尔效应
float fresnel = pow(1.0 - max(dot(V, N), 0.0), 3.0);
gl_FragColor = mix(refractColor, reflectColor, fresnel);
}
实用几何函数组合:
1. 软阴影/光晕效果
varying vec2 vUv;
uniform vec2 lightPos;
uniform float radius;
void main() {
float dist = distance(vUv, lightPos);
// 方式1:硬边缘
float hardGlow = step(dist, radius);
// 方式2:线性衰减
float linearGlow = 1.0 - clamp(dist / radius, 0.0, 1.0);
// 方式3:平滑衰减
float smoothGlow = 1.0 - smoothstep(0.0, radius, dist);
// 方式4:二次衰减
float quadraticGlow = 1.0 / (1.0 + dist * dist);
gl_FragColor = vec4(vec3(smoothGlow), 1.0);
}
2. 边缘检测(Rim Lighting)
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 cameraPosition;
uniform vec3 rimColor;
uniform float rimPower;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// 使用点乘检测边缘
float rim = 1.0 - max(dot(N, V), 0.0);
rim = pow(rim, rimPower); // 控制边缘宽度
vec3 color = rimColor * rim;
gl_FragColor = vec4(color, 1.0);
}
3. 平面投影
// 将点投影到平面上
vec3 projectPointOnPlane(vec3 point, vec3 planeNormal, vec3 planePoint) {
vec3 v = point - planePoint;
float dist = dot(v, planeNormal);
return point - dist * planeNormal;
}
// 点到平面的距离
float distanceToPlane(vec3 point, vec3 planeNormal, vec3 planePoint) {
return abs(dot(point - planePoint, planeNormal));
}
4. 半球光照
precision mediump float;
varying vec3 vNormal;
uniform vec3 skyColor;
uniform vec3 groundColor;
void main() {
vec3 N = normalize(vNormal);
vec3 up = vec3(0.0, 1.0, 0.0);
// 使用点乘在天空和地面颜色之间插值
float hemisphere = dot(N, up) * 0.5 + 0.5;
vec3 color = mix(groundColor, skyColor, hemisphere);
gl_FragColor = vec4(color, 1.0);
}
5. 距离场应用(SDF)
// 球体距离场
float sdSphere(vec3 p, float radius) {
return length(p) - radius;
}
// 盒子距离场
float sdBox(vec3 p, vec3 b) {
vec3 d = abs(p) - b;
return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}
// 圆环距离场
float sdTorus(vec3 p, vec2 t) {
vec2 q = vec2(length(p.xz) - t.x, p.y);
return length(q) - t.y;
}
// 应用:软阴影
float softShadow(vec3 ro, vec3 rd, float mint, float maxt, float k) {
float res = 1.0;
for(float t = mint; t < maxt; ) {
float h = sdSphere(ro + rd * t, 1.0);
if(h < 0.001) return 0.0;
res = min(res, k * h / t);
t += h;
}
return res;
}
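上面的SDF可以配合最小的光线步进(raymarching)循环使用,示意如下(rayMarch及其参数均为示例命名;固定次数的for循环在WebGL 1.0中同样可编译):
// 最小raymarching:沿射线以SDF值为安全步长前进,足够近即视为命中
float rayMarch(vec3 ro, vec3 rd) {
float t = 0.0;
for(int i = 0; i < 64; i++) {
float d = sdSphere(ro + rd * t, 1.0); // 复用上面的球体SDF
if(d < 0.001) return t; // 命中表面,返回行进距离
t += d; // SDF保证这一步不会穿过表面
if(t > 100.0) break; // 超出最大距离
}
return -1.0; // 未命中
}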
性能优化建议:
// 优化1:避免重复计算length
// 不好
float dist1 = length(v);
float dist2 = length(v);
// 好
float dist = length(v);
// 优化2:比较距离时可以使用平方距离(避免sqrt)
// 不好
if(length(v) < threshold) { }
// 好
float sqrThreshold = threshold * threshold;
if(dot(v, v) < sqrThreshold) { } // dot(v,v) = length(v)²
// 优化3:归一化向量后避免重复归一化
// 不好
vec3 L = normalize(lightDir);
float diffuse = dot(N, normalize(lightDir));
// 好
vec3 L = normalize(lightDir);
float diffuse = dot(N, L);
实际应用场景:
How to use interpolation functions (mix, smoothstep, step, etc.) in GLSL?
How to use interpolation functions (mix, smoothstep, step, etc.) in GLSL?
考察点:插值函数应用。
答案:
插值函数是GLSL中非常强大的工具,用于在两个值之间进行平滑或阶跃过渡。它们在颜色混合、动画缓动、边缘软化、阈值判断等场景中应用广泛。
核心插值函数:
1. mix - 线性插值
// 基础语法:mix(a, b, t)
// 返回: a * (1 - t) + b * t
// 标量插值
float a = 0.0;
float b = 1.0;
float t = 0.5;
float result = mix(a, b, t); // 0.5
// 向量插值
vec3 color1 = vec3(1.0, 0.0, 0.0); // 红色
vec3 color2 = vec3(0.0, 0.0, 1.0); // 蓝色
vec3 blended = mix(color1, color2, 0.5); // 紫色
// 实际应用:颜色渐变
varying vec2 vUv;
uniform vec3 startColor;
uniform vec3 endColor;
void main() {
vec3 color = mix(startColor, endColor, vUv.x);
gl_FragColor = vec4(color, 1.0);
}
2. step - 阶跃函数
// 基础语法:step(edge, x)
// 返回: x < edge ? 0.0 : 1.0
float edge = 0.5;
float value = 0.7;
float result = step(edge, value); // 1.0
// 实际应用:阈值判断
varying vec2 vUv;
void main() {
// 创建一个圆形遮罩
float dist = length(vUv - 0.5);
float circle = step(dist, 0.3); // 半径0.3的圆
gl_FragColor = vec4(vec3(circle), 1.0);
}
3. smoothstep - S曲线平滑插值
// 基础语法:smoothstep(edge0, edge1, x)
// 当 x < edge0: 返回 0.0
// 当 x > edge1: 返回 1.0
// 当 edge0 < x < edge1: 返回平滑插值(3阶Hermite多项式)
float edge0 = 0.0;
float edge1 = 1.0;
float x = 0.5;
float result = smoothstep(edge0, edge1, x); // ~0.5(平滑)
// smoothstep的数学公式:
// t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
// return t * t * (3.0 - 2.0 * t);
// 实际应用:边缘软化
varying vec2 vUv;
void main() {
float dist = length(vUv - 0.5);
// 硬边缘圆(使用step)
float hardCircle = step(dist, 0.3);
// 软边缘圆(使用smoothstep)
float softCircle = 1.0 - smoothstep(0.25, 0.35, dist);
gl_FragColor = vec4(vec3(softCircle), 1.0);
}
实用代码示例:
1. 多色渐变
varying vec2 vUv;
void main() {
vec3 color1 = vec3(1.0, 0.0, 0.0); // 红
vec3 color2 = vec3(0.0, 1.0, 0.0); // 绿
vec3 color3 = vec3(0.0, 0.0, 1.0); // 蓝
float t = vUv.x;
vec3 color;
// 方式1:三段渐变
if(t < 0.5) {
color = mix(color1, color2, t * 2.0);
} else {
color = mix(color2, color3, (t - 0.5) * 2.0);
}
// 方式2:使用smoothstep控制权重(两段平滑过渡,无需分支)
float w1 = smoothstep(0.0, 0.5, t);
float w2 = smoothstep(0.5, 1.0, t);
color = mix(mix(color1, color2, w1), color3, w2);
gl_FragColor = vec4(color, 1.0);
}
2. 缓动函数
// 自定义easing函数
float easeInQuad(float t) {
return t * t;
}
float easeOutQuad(float t) {
return t * (2.0 - t);
}
float easeInOutQuad(float t) {
return t < 0.5 ? 2.0 * t * t : -1.0 + (4.0 - 2.0 * t) * t;
}
float easeInCubic(float t) {
return t * t * t;
}
float easeOutCubic(float t) {
float f = t - 1.0;
return f * f * f + 1.0;
}
// 指数缓动
float easeInExpo(float t) {
return t == 0.0 ? 0.0 : pow(2.0, 10.0 * (t - 1.0));
}
// 应用
uniform float time;
varying vec2 vUv;
void main() {
float t = fract(time);
float easedT = easeInOutQuad(t);
float y = vUv.y;
float animatedY = easedT;
float line = smoothstep(animatedY - 0.02, animatedY, y) -
smoothstep(animatedY, animatedY + 0.02, y);
gl_FragColor = vec4(vec3(line), 1.0);
}
3. 柔和阴影/光晕
varying vec2 vUv;
uniform vec2 lightPos;
uniform float lightRadius;
uniform float softness;
void main() {
float dist = distance(vUv, lightPos);
// 硬边缘光
float hardLight = step(dist, lightRadius);
// 线性衰减
float linearLight = 1.0 - clamp(dist / lightRadius, 0.0, 1.0);
// 平滑衰减
float softLight = 1.0 - smoothstep(0.0, lightRadius, dist);
// 可控软度的光晕
float innerRadius = lightRadius * (1.0 - softness);
float glow = 1.0 - smoothstep(innerRadius, lightRadius, dist);
gl_FragColor = vec4(vec3(glow), 1.0);
}
4. 条纹/网格图案
varying vec2 vUv;
void main() {
// 方式1:硬边缘条纹
float stripes1 = step(0.5, fract(vUv.x * 10.0));
// 方式2:软边缘条纹
float t = fract(vUv.x * 10.0);
float stripes2 = smoothstep(0.4, 0.6, t);
// 网格
float gridX = step(0.95, fract(vUv.x * 10.0));
float gridY = step(0.95, fract(vUv.y * 10.0));
float grid = max(gridX, gridY);
gl_FragColor = vec4(vec3(grid), 1.0);
}
5. 进度条
varying vec2 vUv;
uniform float progress; // 0.0 到 1.0
void main() {
// 硬边缘进度条
float bar1 = step(vUv.x, progress);
// 软边缘进度条
float bar2 = smoothstep(vUv.x - 0.02, vUv.x + 0.02, progress);
// 带边框的进度条
float barHeight = step(0.4, vUv.y) - step(0.6, vUv.y);
float barFill = step(vUv.x, progress);
float barOutline = step(0.38, vUv.y) - step(0.62, vUv.y);
vec3 fillColor = vec3(0.2, 0.8, 0.3);
vec3 bgColor = vec3(0.1);
vec3 outlineColor = vec3(1.0);
vec3 color = mix(bgColor, fillColor, barFill * barHeight);
color = mix(color, outlineColor, barOutline * (1.0 - barHeight));
gl_FragColor = vec4(color, 1.0);
}
6. 阈值和遮罩
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D maskMap;
uniform float threshold;
uniform float feather;
varying vec2 vUv;
void main() {
vec4 color = texture2D(diffuseMap, vUv);
float mask = texture2D(maskMap, vUv).r;
// 硬阈值
float hardMask = step(threshold, mask);
// 软阈值
float softMask = smoothstep(threshold - feather, threshold + feather, mask);
// 应用遮罩
color.a *= softMask;
gl_FragColor = color;
}
7. 距离场字体渲染
precision mediump float;
uniform sampler2D fontAtlas; // 距离场纹理
uniform float smoothing;
varying vec2 vUv;
void main() {
float dist = texture2D(fontAtlas, vUv).a;
// 使用smoothstep实现抗锯齿边缘
float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, dist);
vec3 color = vec3(1.0);
gl_FragColor = vec4(color, alpha);
}
8. 渐变与混合模式
varying vec2 vUv;
void main() {
// 线性渐变
vec3 gradient = vec3(vUv.x);
// 径向渐变
float radial = length(vUv - 0.5);
vec3 radialGradient = vec3(1.0 - radial * 2.0);
// 角度渐变
float angle = atan(vUv.y - 0.5, vUv.x - 0.5);
vec3 angularGradient = vec3((angle + 3.14159) / (2.0 * 3.14159));
// 平滑的菱形渐变
vec2 center = vUv - 0.5;
float diamond = abs(center.x) + abs(center.y);
vec3 diamondGradient = vec3(1.0 - smoothstep(0.0, 0.7, diamond));
gl_FragColor = vec4(diamondGradient, 1.0);
}
插值函数对比:
| 函数 | 特点 | 连续性 | 使用场景 |
|---|---|---|---|
| `mix` | 线性插值 | C0连续 | 颜色混合、简单动画 |
| `step` | 硬阈值 | 不连续 | 遮罩、二值化 |
| `smoothstep` | S曲线 | C1连续 | 柔和过渡、边缘软化 |
性能优化:
// 优化1:避免重复计算
// 不好
void main() {
float c1 = smoothstep(0.0, 1.0, vUv.x);
float c2 = smoothstep(0.0, 1.0, vUv.x); // 重复计算
}
// 好
void main() {
float c = smoothstep(0.0, 1.0, vUv.x);
// 复用c
}
// 优化2:使用内置函数而非手动实现
// 不好:手动实现smoothstep
float mySmoothstep(float e0, float e1, float x) {
float t = clamp((x - e0) / (e1 - e0), 0.0, 1.0);
return t * t * (3.0 - 2.0 * t);
}
// 好:使用内置函数
float result = smoothstep(e0, e1, x);
实际应用场景:
How to implement Lambert diffuse lighting model in shaders?
How to implement Lambert diffuse lighting model in shaders?
考察点:漫反射计算。
答案:
Lambert漫反射是最基础的光照模型,基于物理观察:表面接收的光照强度与光线方向和表面法向量夹角的余弦值成正比。它简单高效,广泛应用于实时渲染中。
Lambert漫反射原理:
Lambert定律(余弦定律):
Diffuse = max(dot(N, L), 0.0) * lightColor * materialColor
其中:
- N:归一化的表面法向量
- L:归一化的光源方向(从表面指向光源)
- dot(N, L):两向量夹角的余弦值
- max(..., 0.0):确保背面不被照亮
基础实现:
顶点着色器
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix; // transpose(inverse(mat3(modelMatrix)))
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
// 变换法向量到世界空间
vNormal = normalize(normalMatrix * normal);
// 变换顶点位置到世界空间
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vPosition = worldPosition.xyz;
// 计算最终位置
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
片段着色器
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 materialColor;
uniform vec3 ambientColor;
void main() {
// 归一化法向量(插值后需要重新归一化)
vec3 N = normalize(vNormal);
// 计算光照方向
vec3 L = normalize(lightPosition - vPosition);
// Lambert漫反射
float diffuse = max(dot(N, L), 0.0);
// 最终颜色 = 环境光 + 漫反射
vec3 ambient = ambientColor * materialColor;
vec3 diffuseColor = diffuse * lightColor * materialColor;
vec3 finalColor = ambient + diffuseColor;
gl_FragColor = vec4(finalColor, 1.0);
}
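一个常见的补充步骤:若光照在线性空间计算而直接输出到sRGB屏幕,最后通常需要一次伽马校正,否则画面整体偏暗(近似伽马2.2的示意):
// 线性空间 → sRGB近似
vec3 corrected = pow(finalColor, vec3(1.0 / 2.2));
gl_FragColor = vec4(corrected, 1.0);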
完整的Lambert光照实现:
1. 单光源带衰减
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform float lightIntensity;
uniform vec3 materialColor;
uniform vec3 ambientColor;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
// 计算距离衰减
float distance = length(lightPosition - vPosition);
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
// Lambert漫反射
float diffuse = max(dot(N, L), 0.0);
// 应用衰减和强度
vec3 diffuseColor = diffuse * lightColor * lightIntensity * attenuation;
// 最终颜色
vec3 ambient = ambientColor * materialColor;
vec3 finalColor = (ambient + diffuseColor) * materialColor;
gl_FragColor = vec4(finalColor, 1.0);
}
2. 多光源实现
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPositions[4];
uniform vec3 lightColors[4];
uniform float lightIntensities[4];
uniform int numLights;
uniform vec3 materialColor;
uniform vec3 ambientColor;
void main() {
vec3 N = normalize(vNormal);
// 环境光
vec3 ambient = ambientColor * materialColor;
// 累加所有光源的漫反射
vec3 totalDiffuse = vec3(0.0);
for(int i = 0; i < 4; i++) {
if(i >= numLights) break;
vec3 L = normalize(lightPositions[i] - vPosition);
float distance = length(lightPositions[i] - vPosition);
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
float diffuse = max(dot(N, L), 0.0);
totalDiffuse += diffuse * lightColors[i] * lightIntensities[i] * attenuation;
}
vec3 finalColor = (ambient + totalDiffuse) * materialColor;
gl_FragColor = vec4(finalColor, 1.0);
}
3. 使用纹理的Lambert
precision mediump float;
uniform sampler2D diffuseMap;
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUv;
uniform vec3 lightPosition;
uniform vec3 lightColor;
void main() {
// 从纹理采样颜色
vec3 materialColor = texture2D(diffuseMap, vUv).rgb;
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
// Lambert漫反射
float diffuse = max(dot(N, L), 0.0);
// 应用光照
vec3 finalColor = materialColor * lightColor * diffuse;
gl_FragColor = vec4(finalColor, 1.0);
}
4. Half-Lambert(Valve提出的改进版)
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 materialColor;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
// Half-Lambert:将[-1, 1]映射到[0, 1]
float halfLambert = dot(N, L) * 0.5 + 0.5;
// 可选:增加对比度
halfLambert = pow(halfLambert, 2.0);
vec3 finalColor = materialColor * lightColor * halfLambert;
gl_FragColor = vec4(finalColor, 1.0);
}
5. 双面Lambert
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 frontColor;
uniform vec3 backColor;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
float diffuse = dot(N, L);
// 正面和背面使用不同颜色
vec3 color = diffuse > 0.0 ? frontColor : backColor;
float intensity = abs(diffuse);
vec3 finalColor = color * intensity;
gl_FragColor = vec4(finalColor, 1.0);
}
6. Oren-Nayar漫反射(粗糙表面)
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform vec3 materialColor;
uniform float roughness; // 0.0 = 光滑(Lambert), 1.0 = 粗糙
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
float NdotL = max(dot(N, L), 0.0);
float NdotV = max(dot(N, V), 0.0);
// Oren-Nayar简化模型
float roughnessSq = roughness * roughness;
float A = 1.0 - 0.5 * roughnessSq / (roughnessSq + 0.33);
float B = 0.45 * roughnessSq / (roughnessSq + 0.09);
// 计算方位角
vec3 lightProjected = normalize(L - N * dot(N, L));
vec3 viewProjected = normalize(V - N * dot(N, V));
float cosPhiDiff = max(dot(lightProjected, viewProjected), 0.0);
float alpha = max(acos(NdotV), acos(NdotL));
float beta = min(acos(NdotV), acos(NdotL));
float orenNayar = NdotL * (A + B * cosPhiDiff * sin(alpha) * tan(beta));
vec3 finalColor = materialColor * orenNayar;
gl_FragColor = vec4(finalColor, 1.0);
}
实际应用优化:
顶点着色器中计算(性能优化)
// ========== 顶点着色器 ==========
attribute vec3 position;
attribute vec3 normal;
uniform mat4 mvpMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
uniform mat4 modelMatrix;
varying float vDiffuse;
void main() {
// 在顶点着色器计算光照(逐顶点)
vec3 N = normalize(normalMatrix * normal);
vec4 worldPos = modelMatrix * vec4(position, 1.0);
vec3 L = normalize(lightPosition - worldPos.xyz);
vDiffuse = max(dot(N, L), 0.0);
gl_Position = mvpMatrix * vec4(position, 1.0);
}
// ========== 片段着色器 ==========
precision mediump float;
varying float vDiffuse;
uniform vec3 materialColor;
uniform vec3 lightColor;
void main() {
vec3 color = materialColor * lightColor * vDiffuse;
gl_FragColor = vec4(color, 1.0);
}
性能对比:
| 计算位置 | 优点 | 缺点 | 适用场景 |
|---|---|---|---|
| 顶点着色器 | 性能高 | 效果粗糙 | 低模模型、移动设备 |
| 片段着色器 | 效果精细 | 性能开销大 | 高质量渲染、特写镜头 |
常见问题和解决方案:
1. 法向量未归一化
// 错误:插值后的法向量不是单位向量
void main() {
float diffuse = max(dot(vNormal, L), 0.0); // 错误!
}
// 正确:重新归一化
void main() {
vec3 N = normalize(vNormal); // 必须归一化
float diffuse = max(dot(N, L), 0.0);
}
2. 法向量变换错误
// 错误:直接使用modelMatrix变换法向量
vec3 worldNormal = mat3(modelMatrix) * normal; // 错误!
// 正确:使用法向量矩阵(模型矩阵的逆转置)
// 注意:inverse()/transpose()需GLSL ES 3.0,WebGL 1.0中应在CPU端计算后以uniform传入
mat3 normalMatrix = transpose(inverse(mat3(modelMatrix)));
vec3 worldNormal = normalMatrix * normal;
3. 背面消除问题
// 单面光照
float diffuse = max(dot(N, L), 0.0);
// 双面光照
float diffuse = abs(dot(N, L));
实际应用场景:
How to implement Phong specular effect in fragment shaders?
How to implement Phong specular effect in fragment shaders?
考察点:镜面反射计算。
答案:
Phong高光模型用于模拟光滑表面的镜面反射效果,通过计算反射光线与视线方向的夹角来确定高光强度。它是经典的实时光照模型之一,效果自然且计算高效。
Phong高光原理:
Phong公式:
Specular = pow(max(dot(R, V), 0.0), shininess) * lightColor * specularColor
其中:
- R:反射向量(光线关于法向量的反射)
- V:视线方向(从表面指向摄像机)
- shininess:高光指数(控制高光大小,越大越聚焦)
- dot(R, V):反射向量与视线夹角的余弦值
基础Phong实现:
顶点着色器
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
// 世界空间法向量
vNormal = normalize(normalMatrix * normal);
// 世界空间位置
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vPosition = worldPosition.xyz;
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
片段着色器
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform vec3 lightColor;
uniform vec3 specularColor;
uniform float shininess;
void main() {
// 归一化向量
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
// 计算反射向量
vec3 R = reflect(-L, N);
// Phong高光
float spec = pow(max(dot(R, V), 0.0), shininess);
vec3 specular = spec * lightColor * specularColor;
gl_FragColor = vec4(specular, 1.0);
}
完整的Phong光照模型(Ambient + Diffuse + Specular):
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
// 光源参数
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform float lightIntensity;
// 摄像机位置
uniform vec3 cameraPosition;
// 材质参数
uniform vec3 ambientColor;
uniform vec3 diffuseColor;
uniform vec3 specularColor;
uniform float shininess; // 通常范围: 1-256
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
vec3 R = reflect(-L, N);
// 1. 环境光
vec3 ambient = ambientColor;
// 2. 漫反射(Lambert)
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * diffuseColor * lightColor;
// 3. 镜面反射(Phong)
float spec = pow(max(dot(R, V), 0.0), shininess);
vec3 specular = spec * specularColor * lightColor;
// 4. 距离衰减
float distance = length(lightPosition - vPosition);
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
// 最终颜色
vec3 finalColor = ambient + (diffuse + specular) * attenuation * lightIntensity;
gl_FragColor = vec4(finalColor, 1.0);
}
Blinn-Phong高光(改进版):
Blinn-Phong使用半向量(H)代替反射向量(R),计算更高效且更符合物理规律。
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform vec3 lightColor;
uniform vec3 specularColor;
uniform float shininess;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
// 计算半向量(光线和视线的中间方向)
vec3 H = normalize(L + V);
// Blinn-Phong高光
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specular = spec * lightColor * specularColor;
gl_FragColor = vec4(specular, 1.0);
}
Phong vs Blinn-Phong对比:
| 模型 | 计算向量 | 计算成本 | 高光特点 |
|---|---|---|---|
| Phong | 反射向量R | 稍高 | 高光更聚焦 |
| Blinn-Phong | 半向量H | 更低 | 高光更柔和,更符合物理 |
使用纹理的Phong光照:
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap; // 高光强度贴图
uniform sampler2D normalMap;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform float shininess;
void main() {
// 采样纹理
vec3 diffuseColor = texture2D(diffuseMap, vUv).rgb;
float specularIntensity = texture2D(specularMap, vUv).r;
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
vec3 R = reflect(-L, N);
// 漫反射
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * diffuseColor;
// 镜面反射(使用specularMap控制强度)
float spec = pow(max(dot(R, V), 0.0), shininess);
vec3 specular = spec * specularIntensity * vec3(1.0);
vec3 finalColor = diffuse + specular;
gl_FragColor = vec4(finalColor, 1.0);
}
多光源Phong:
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
const int MAX_LIGHTS = 4;
uniform int numLights;
uniform vec3 lightPositions[MAX_LIGHTS];
uniform vec3 lightColors[MAX_LIGHTS];
uniform float lightIntensities[MAX_LIGHTS];
uniform vec3 cameraPosition;
uniform vec3 materialDiffuse;
uniform vec3 materialSpecular;
uniform float shininess;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
vec3 totalDiffuse = vec3(0.0);
vec3 totalSpecular = vec3(0.0);
for(int i = 0; i < MAX_LIGHTS; i++) {
if(i >= numLights) break;
vec3 L = normalize(lightPositions[i] - vPosition);
vec3 R = reflect(-L, N);
float distance = length(lightPositions[i] - vPosition);
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
// 漫反射
float diff = max(dot(N, L), 0.0);
totalDiffuse += diff * lightColors[i] * lightIntensities[i] * attenuation;
// 镜面反射
float spec = pow(max(dot(R, V), 0.0), shininess);
totalSpecular += spec * lightColors[i] * lightIntensities[i] * attenuation;
}
vec3 finalColor = totalDiffuse * materialDiffuse + totalSpecular * materialSpecular;
gl_FragColor = vec4(finalColor, 1.0);
}
高光衰减和菲涅尔效应:
precision mediump float;
varying vec3 vNormal;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform float shininess;
uniform float fresnelPower;
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
vec3 H = normalize(L + V);
// Blinn-Phong高光
float spec = pow(max(dot(N, H), 0.0), shininess);
// 菲涅尔效应(Schlick近似)
float fresnel = pow(1.0 - max(dot(V, N), 0.0), fresnelPower);
// 结合菲涅尔的高光
vec3 specular = spec * fresnel * vec3(1.0);
gl_FragColor = vec4(specular, 1.0);
}
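上例使用的是简化的边缘菲涅尔;更通用的Schlick近似带基础反射率F0(非金属常取0.04),示意如下:
// Schlick菲涅尔近似:cosTheta为dot(V, N)(需为非负),F0为法向入射时的反射率
vec3 fresnelSchlick(float cosTheta, vec3 F0) {
return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
// 使用示例
vec3 F = fresnelSchlick(max(dot(V, N), 0.0), vec3(0.04));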
各向异性高光(Anisotropic Specular):
precision mediump float;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vPosition;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform float roughnessX; // 切线方向粗糙度
uniform float roughnessY; // 副切线方向粗糙度
void main() {
vec3 N = normalize(vNormal);
vec3 T = normalize(vTangent);
vec3 B = cross(N, T);
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
vec3 H = normalize(L + V);
// 各向异性高光计算
float dotTH = dot(T, H);
float dotBH = dot(B, H);
float dotNH = dot(N, H);
float exponent = (dotTH * dotTH / (roughnessX * roughnessX) +
dotBH * dotBH / (roughnessY * roughnessY)) /
(1.0 - dotNH * dotNH);
float spec = exp(-exponent);
gl_FragColor = vec4(vec3(spec), 1.0);
}
性能优化技巧:
// 优化1:避免重复计算normalize
// 不好
void main() {
vec3 R = reflect(-normalize(lightPos - vPos), normalize(vNormal));
float spec = pow(max(dot(R, normalize(camPos - vPos)), 0.0), shininess);
}
// 好
void main() {
vec3 N = normalize(vNormal);
vec3 L = normalize(lightPos - vPos);
vec3 V = normalize(camPos - vPos);
vec3 R = reflect(-L, N);
float spec = pow(max(dot(R, V), 0.0), shininess);
}
// 优化2:使用Blinn-Phong代替Phong(少一次reflect计算)
// Phong: 需要计算reflect
vec3 R = reflect(-L, N);
float spec = pow(max(dot(R, V), 0.0), shininess);
// Blinn-Phong: 只需要加法和归一化
vec3 H = normalize(L + V);
float spec = pow(max(dot(N, H), 0.0), shininess);
// 优化3:预计算常量
// CPU端预计算shininess相关的系数
uniform float shininessNormalization; // = (shininess + 8.0) / (8.0 * PI)
float spec = pow(max(dot(N, H), 0.0), shininess) * shininessNormalization;
常见问题和解决方案:
1. 高光出现在背面
// 问题:当光源在背后时,高光仍然可见
float spec = pow(max(dot(R, V), 0.0), shininess);
// 解决:只在被照亮的面计算高光
float diff = max(dot(N, L), 0.0);
float spec = pow(max(dot(R, V), 0.0), shininess);
spec *= (diff > 0.0 ? 1.0 : 0.0); // 注意:step(0.0, diff)恒为1.0(diff≥0),起不到裁剪作用
2. 高光过亮
// 能量守恒的高光计算
float normalization = (shininess + 2.0) / (2.0 * 3.14159);
float spec = pow(max(dot(N, H), 0.0), shininess) * normalization;
3. shininess参数调整
// Phong和Blinn-Phong的shininess需要不同值才能达到相似效果
// 通常 shininess_blinn ≈ 4 * shininess_phong
// Phong
float specPhong = pow(max(dot(R, V), 0.0), 32.0);
// Blinn-Phong(需要更高的值)
float specBlinn = pow(max(dot(N, H), 0.0), 128.0);
实际应用场景:
How to implement Normal Mapping in shaders?
How to implement Normal Mapping in shaders?
考察点:法线贴图技术。
答案:
法线贴图是一种纹理映射技术,通过在纹理中存储法向量信息,为低模模型添加精细的表面细节,显著提升视觉效果而不增加几何复杂度。实现需要构建切线空间(TBN)矩阵进行坐标变换。
法线贴图原理:
法线贴图存储的是切线空间(Tangent Space)中的法向量:
基础实现:
顶点着色器 - 构建TBN矩阵
attribute vec3 position;
attribute vec3 normal;
attribute vec3 tangent;
attribute vec2 uv;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vPosition;
void main() {
// 变换到世界空间
vNormal = normalize(normalMatrix * normal);
vTangent = normalize(normalMatrix * tangent);
// 计算副切线(Bitangent/Binormal)
vBitangent = cross(vNormal, vTangent);
// 世界空间位置
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vPosition = worldPosition.xyz;
// 传递UV
vUv = uv;
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
片段着色器 - 应用法线贴图
precision mediump float;
uniform sampler2D normalMap;
uniform sampler2D diffuseMap;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vPosition;
void main() {
// 1. 构建TBN矩阵(切线空间 → 世界空间)
vec3 N = normalize(vNormal);
vec3 T = normalize(vTangent);
vec3 B = normalize(vBitangent);
mat3 TBN = mat3(T, B, N);
// 2. 从法线贴图采样(变量不可与采样器normalMap同名,否则编译报错)
vec3 normalTex = texture2D(normalMap, vUv).rgb;
// 3. 将法向量从[0,1]映射到[-1,1]
normalTex = normalTex * 2.0 - 1.0;
// 4. 转换到世界空间
vec3 worldNormal = normalize(TBN * normalTex);
// 5. 使用转换后的法向量进行光照计算
vec3 L = normalize(lightPosition - vPosition);
float diffuse = max(dot(worldNormal, L), 0.0);
vec3 diffuseColor = texture2D(diffuseMap, vUv).rgb;
vec3 finalColor = diffuseColor * diffuse;
gl_FragColor = vec4(finalColor, 1.0);
}
完整的法线贴图光照:
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform sampler2D specularMap;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform vec3 lightColor;
uniform float shininess;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vPosition;
void main() {
// 构建TBN矩阵
vec3 N = normalize(vNormal);
vec3 T = normalize(vTangent);
// 格拉姆-施密特正交化
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
mat3 TBN = mat3(T, B, N);
// 从法线贴图采样并转换
vec3 tangentNormal = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
vec3 worldNormal = normalize(TBN * tangentNormal);
// 采样其他纹理
vec3 diffuseColor = texture2D(diffuseMap, vUv).rgb;
float specularIntensity = texture2D(specularMap, vUv).r;
// 光照计算
vec3 L = normalize(lightPosition - vPosition);
vec3 V = normalize(cameraPosition - vPosition);
vec3 H = normalize(L + V);
// 漫反射
float diff = max(dot(worldNormal, L), 0.0);
vec3 diffuse = diff * diffuseColor * lightColor;
// 镜面反射
float spec = pow(max(dot(worldNormal, H), 0.0), shininess);
vec3 specular = spec * specularIntensity * lightColor;
// 最终颜色
vec3 finalColor = diffuse + specular;
gl_FragColor = vec4(finalColor, 1.0);
}
优化方案一:在切线空间计算(性能更好)
将光照计算转移到切线空间进行,避免在片段着色器中构建TBN矩阵。
顶点着色器
attribute vec3 position;
attribute vec3 normal;
attribute vec3 tangent;
attribute vec2 uv;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
varying vec2 vUv;
varying vec3 vLightDir; // 切线空间中的光照方向
varying vec3 vViewDir; // 切线空间中的视线方向
void main() {
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
// 构建TBN矩阵
vec3 N = normalize(normalMatrix * normal);
vec3 T = normalize(normalMatrix * tangent);
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
// TBN为正交矩阵,其逆等于转置;GLSL ES 1.0无transpose(),用点乘手动变换
// 将光照和视线方向转换到切线空间(世界空间 → 切线空间)
vec3 lightVec = lightPosition - worldPosition.xyz;
vec3 viewVec = cameraPosition - worldPosition.xyz;
vLightDir = vec3(dot(T, lightVec), dot(B, lightVec), dot(N, lightVec));
vViewDir = vec3(dot(T, viewVec), dot(B, viewVec), dot(N, viewVec));
vUv = uv;
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
片段着色器
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform float shininess;
varying vec2 vUv;
varying vec3 vLightDir;
varying vec3 vViewDir;
void main() {
// 从法线贴图采样(已经在切线空间)
vec3 normal = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
normal = normalize(normal);
// 归一化方向向量
vec3 L = normalize(vLightDir);
vec3 V = normalize(vViewDir);
vec3 H = normalize(L + V);
// 光照计算(在切线空间)
float diff = max(dot(normal, L), 0.0);
float spec = pow(max(dot(normal, H), 0.0), shininess);
vec3 diffuseColor = texture2D(diffuseMap, vUv).rgb;
vec3 finalColor = diffuseColor * diff + vec3(spec);
gl_FragColor = vec4(finalColor, 1.0);
}
视差贴图(Parallax Mapping)扩展:
视差贴图在法线贴图基础上添加深度感。
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;
uniform float heightScale;
varying vec2 vUv;
varying vec3 vViewDir; // 切线空间
void main() {
// 视差偏移
vec3 V = normalize(vViewDir);
float height = texture2D(depthMap, vUv).r;
// 简单视差
vec2 offset = V.xy / V.z * (height * heightScale);
vec2 parallaxUv = vUv - offset;
// 使用偏移后的UV采样
vec3 normal = texture2D(normalMap, parallaxUv).rgb * 2.0 - 1.0;
vec3 color = texture2D(diffuseMap, parallaxUv).rgb;
// ... 光照计算
gl_FragColor = vec4(color, 1.0);
}
陡峭视差映射(Steep Parallax Mapping):
precision mediump float;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;
uniform float heightScale;
uniform int numLayers;
varying vec2 vUv;
varying vec3 vViewDir;
void main() {
vec3 V = normalize(vViewDir);
// 计算每层的深度步长
float layerDepth = 1.0 / float(numLayers);
float currentLayerDepth = 0.0;
// 计算UV偏移步长
vec2 deltaUv = V.xy / V.z * heightScale / float(numLayers);
vec2 currentUv = vUv;
// 当前深度值
float currentDepthMapValue = texture2D(depthMap, currentUv).r;
// 沿视线方向步进,直到找到交点
// 注意:GLSL ES 1.0不支持条件不定的while循环,用固定次数for循环+break代替
const int MAX_STEPS = 32;
for(int i = 0; i < MAX_STEPS; i++) {
if(currentLayerDepth >= currentDepthMapValue) break;
currentUv -= deltaUv;
currentDepthMapValue = texture2D(depthMap, currentUv).r;
currentLayerDepth += layerDepth;
}
// 使用偏移后的UV
vec3 normal = texture2D(normalMap, currentUv).rgb * 2.0 - 1.0;
vec3 color = texture2D(diffuseMap, currentUv).rgb;
gl_FragColor = vec4(color, 1.0);
}
法线贴图的常见问题:
1. 切线未正交化
// 问题:插值后的切线可能不再与法线垂直
vec3 N = normalize(vNormal);
vec3 T = normalize(vTangent);
mat3 TBN = mat3(T, cross(N, T), N); // 可能不正交
// 解决:格拉姆-施密特正交化
vec3 N = normalize(vNormal);
vec3 T = normalize(vTangent);
T = normalize(T - dot(T, N) * N); // 正交化
vec3 B = cross(N, T);
mat3 TBN = mat3(T, B, N);
2. 法线贴图颜色空间错误
// 错误:直接使用sRGB颜色空间的法线
vec3 normal = texture2D(normalMap, vUv).rgb;
// 正确:将[0,1]映射到[-1,1]
vec3 normal = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
3. 副切线方向错误
// 某些模型需要翻转副切线
vec3 B = cross(N, T) * tangent.w; // tangent.w通常为±1
法线贴图压缩技术:
只存储XY分量(DXT5nm格式)
// 法线贴图只存储R和G通道
vec2 normalXY = texture2D(normalMap, vUv).rg * 2.0 - 1.0;
// 重建Z分量(假设是单位向量;max防止滤波误差导致对负数开方)
float normalZ = sqrt(max(0.0, 1.0 - dot(normalXY, normalXY)));
vec3 normal = vec3(normalXY, normalZ);
性能优化:
| 方法 | 优点 | 缺点 |
|---|---|---|
| 世界空间计算 | 直观易懂 | 每像素构建TBN矩阵 |
| 切线空间计算 | 性能更好 | 顶点着色器计算量增加 |
| 预计算TBN | 减少计算 | varying变量增多(9个float) |
实际应用场景:
Three.js中的应用示例:
const material = new THREE.MeshStandardMaterial({
map: diffuseTexture,
normalMap: normalTexture,
normalScale: new THREE.Vector2(1, 1),
displacementMap: heightTexture,
displacementScale: 0.1
});
How to handle lighting calculations for multiple light sources in shaders?
How to handle lighting calculations for multiple light sources in shaders?
考察点:多光源处理。
答案:
多光源光照计算是实时渲染中的常见需求,需要累加每个光源的贡献。实现时需要考虑性能优化、光源类型差异和衰减模型。
基础多光源实现(循环累加):
片段着色器
precision mediump float;
const int MAX_LIGHTS = 4;
// 光源数据
uniform int numLights;
uniform vec3 lightPositions[MAX_LIGHTS];
uniform vec3 lightColors[MAX_LIGHTS];
uniform float lightIntensities[MAX_LIGHTS];
// 场景数据
uniform vec3 cameraPosition;
uniform vec3 ambientLight;
// 材质参数
uniform vec3 materialDiffuse;
uniform vec3 materialSpecular;
uniform float shininess;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// 环境光(所有光源共享)
vec3 ambient = ambientLight * materialDiffuse;
// 累加所有光源的贡献
vec3 totalDiffuse = vec3(0.0);
vec3 totalSpecular = vec3(0.0);
for(int i = 0; i < MAX_LIGHTS; i++) {
if(i >= numLights) break;
// 光照方向和距离
vec3 L = lightPositions[i] - vPosition;
float distance = length(L);
L = normalize(L);
// 距离衰减
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
// 漫反射
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * materialDiffuse * lightColors[i] * lightIntensities[i] * attenuation;
// 镜面反射(Blinn-Phong)
vec3 H = normalize(L + V);
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specular = spec * materialSpecular * lightColors[i] * lightIntensities[i] * attenuation;
totalDiffuse += diffuse;
totalSpecular += specular;
}
// 最终颜色
vec3 finalColor = ambient + totalDiffuse + totalSpecular;
gl_FragColor = vec4(finalColor, 1.0);
}
多光源类型支持:
precision mediump float;
const int MAX_LIGHTS = 8;
const int LIGHT_TYPE_DIRECTIONAL = 0;
const int LIGHT_TYPE_POINT = 1;
const int LIGHT_TYPE_SPOT = 2;
struct Light {
int type;
vec3 position;
vec3 direction;
vec3 color;
float intensity;
float range;
float innerCone;
float outerCone;
};
uniform int numLights;
uniform Light lights[MAX_LIGHTS];
uniform vec3 cameraPosition;
uniform vec3 materialDiffuse;
uniform vec3 materialSpecular;
uniform float shininess;
varying vec3 vNormal;
varying vec3 vPosition;
// Point light
vec3 calculatePointLight(Light light, vec3 N, vec3 V, vec3 fragPos) {
vec3 L = light.position - fragPos;
float distance = length(L);
L = normalize(L);
// Range-based attenuation
float attenuation = 1.0 - clamp(distance / light.range, 0.0, 1.0);
attenuation *= attenuation;
// Diffuse
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * materialDiffuse * light.color;
// Specular
vec3 H = normalize(L + V);
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specular = spec * materialSpecular * light.color;
return (diffuse + specular) * light.intensity * attenuation;
}
// Directional light
vec3 calculateDirectionalLight(Light light, vec3 N, vec3 V) {
vec3 L = normalize(-light.direction);
// Diffuse
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * materialDiffuse * light.color;
// Specular
vec3 H = normalize(L + V);
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specular = spec * materialSpecular * light.color;
return (diffuse + specular) * light.intensity;
}
// Spot light
vec3 calculateSpotLight(Light light, vec3 N, vec3 V, vec3 fragPos) {
vec3 L = light.position - fragPos;
float distance = length(L);
L = normalize(L);
// Cone falloff (innerCone/outerCone are the cosines of the cone angles)
float theta = dot(L, normalize(-light.direction));
float epsilon = light.innerCone - light.outerCone;
float spotIntensity = clamp((theta - light.outerCone) / epsilon, 0.0, 1.0);
// Distance attenuation
float attenuation = 1.0 - clamp(distance / light.range, 0.0, 1.0);
attenuation *= attenuation;
// Diffuse
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * materialDiffuse * light.color;
// Specular
vec3 H = normalize(L + V);
float spec = pow(max(dot(N, H), 0.0), shininess);
vec3 specular = spec * materialSpecular * light.color;
return (diffuse + specular) * light.intensity * attenuation * spotIntensity;
}
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
vec3 totalLight = vec3(0.0);
for(int i = 0; i < MAX_LIGHTS; i++) {
if(i >= numLights) break;
if(lights[i].type == LIGHT_TYPE_POINT) {
totalLight += calculatePointLight(lights[i], N, V, vPosition);
} else if(lights[i].type == LIGHT_TYPE_DIRECTIONAL) {
totalLight += calculateDirectionalLight(lights[i], N, V);
} else if(lights[i].type == LIGHT_TYPE_SPOT) {
totalLight += calculateSpotLight(lights[i], N, V, vPosition);
}
}
gl_FragColor = vec4(totalLight, 1.0);
}
Deferred rendering approach:
Deferred rendering postpones lighting to a post-processing pass, which makes it particularly suitable for scenes with many lights.
Pass 1: geometry pass (writes the G-Buffer)
#version 300 es
precision highp float;
in vec2 vUv;
in vec3 vNormal;
in vec3 vPosition;
layout(location = 0) out vec4 gPosition;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gAlbedoSpec;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
void main() {
// Write the position
gPosition = vec4(vPosition, 1.0);
// Write the normal
gNormal = vec4(normalize(vNormal), 1.0);
// Write albedo and specular intensity
gAlbedoSpec.rgb = texture(diffuseMap, vUv).rgb;
gAlbedoSpec.a = texture(specularMap, vUv).r;
}
Pass 2: lighting pass
#version 300 es
precision highp float;
in vec2 vUv;
out vec4 fragColor;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;
const int MAX_LIGHTS = 32; // deferred shading can afford many more lights
uniform int numLights;
uniform vec3 lightPositions[MAX_LIGHTS];
uniform vec3 lightColors[MAX_LIGHTS];
uniform float lightIntensities[MAX_LIGHTS];
uniform vec3 cameraPosition;
void main() {
// Read the G-Buffer
vec3 fragPos = texture(gPosition, vUv).rgb;
vec3 normal = texture(gNormal, vUv).rgb;
vec3 albedo = texture(gAlbedoSpec, vUv).rgb;
float specular = texture(gAlbedoSpec, vUv).a;
vec3 V = normalize(cameraPosition - fragPos);
vec3 lighting = vec3(0.0);
// Accumulate all lights
for(int i = 0; i < MAX_LIGHTS && i < numLights; i++) {
vec3 L = lightPositions[i] - fragPos;
float distance = length(L);
L = normalize(L);
float attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance * distance);
// Diffuse
float diff = max(dot(normal, L), 0.0);
vec3 diffuse = diff * albedo * lightColors[i];
// Specular
vec3 H = normalize(L + V);
float spec = pow(max(dot(normal, H), 0.0), 32.0);
vec3 specularColor = spec * specular * lightColors[i];
lighting += (diffuse + specularColor) * lightIntensities[i] * attenuation;
}
fragColor = vec4(lighting, 1.0);
}
Performance tips:
1. Early exit
for(int i = 0; i < MAX_LIGHTS; i++) {
if(i >= numLights) break;
// Distance culling
float distance = length(lightPositions[i] - vPosition);
if(distance > lights[i].range) continue;
// ... lighting calculation
}
2. Light sorting
// Put the important lights first and minor lights last;
// sort dynamically by distance or importance (a sketch follows below)
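A host-side sketch of that idea (the `lights` array shape and helper names are assumptions):
// Keep the most relevant lights in the first slots of the fixed-size
// uniform arrays; surplus lights are simply dropped this frame.
function selectLights(lights, cameraPos, maxLights) {
return lights
.map(light => ({
light,
// crude importance metric: intensity over squared distance
score: light.intensity / (1e-4 + distanceSq(light.position, cameraPos))
}))
.sort((a, b) => b.score - a.score)
.slice(0, maxLights)
.map(entry => entry.light);
}
function distanceSq(a, b) {
const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
return dx * dx + dy * dy + dz * dz;
}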
3. Uniform Buffer Objects (UBO) - WebGL 2.0
#version 300 es
layout(std140) uniform LightBlock {
vec4 positions[32];
vec4 colors[32];
int count;
};
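On the host side the block is fed through the WebGL 2.0 UBO API. A sketch assuming `gl` is a WebGL2 context, `program` links the block above, and `lightCount` holds the number of active lights; under std140 each array element occupies a 16-byte slot:
// Create a buffer matching LightBlock's std140 layout:
// 32 vec4 positions + 32 vec4 colors + count (int, padded to 16 bytes)
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
const data = new Float32Array(32 * 4 + 32 * 4 + 4);
// ... write positions/colors into data; the int count goes through an Int32Array view:
new Int32Array(data.buffer)[32 * 4 + 32 * 4] = lightCount;
gl.bufferData(gl.UNIFORM_BUFFER, data, gl.DYNAMIC_DRAW);
// Bind the buffer to binding point 0 and point the shader's block at it
const blockIndex = gl.getUniformBlockIndex(program, 'LightBlock');
gl.uniformBlockBinding(program, blockIndex, 0);
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, ubo);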
Forward+ (tiled forward) rendering:
Culls lights per screen tile, then shades each fragment with only the lights affecting its tile.
#version 300 es
precision highp float;
// Results of tile-based light culling
uniform sampler2D lightGrid; // per-tile light list
uniform int tileSize;
// ... other uniforms
void main() {
// Find the tile this fragment belongs to
ivec2 tileID = ivec2(gl_FragCoord.xy) / tileSize;
// Fetch the lights affecting this tile from lightGrid
// ... accumulate their contributions
}
Practical recommendations:
| Scene | Light count | Recommended approach | Reason |
|---|---|---|---|
| Mobile | 1-4 | Forward rendering | Simple and efficient |
| Desktop, indoor | 10-50 | Deferred rendering | Many small lights |
| Large outdoor | 1-2 key lights + several point lights | Forward+ | Balances performance and quality |
Common problems and fixes:
1. Loop-unrolling restrictions
// Problem: some GPUs (GLSL ES 1.00) cannot loop on a dynamic bound
for(int i = 0; i < numLights; i++) { }
// Fix: use a fixed upper bound
for(int i = 0; i < MAX_LIGHTS; i++) {
if(i >= numLights) break;
}
2. Light popping
// Fade out smoothly instead of cutting off at the range boundary
float fadeStart = light.range * 0.8;
float fade = 1.0 - smoothstep(fadeStart, light.range, distance);
Typical applications:
How to implement texture Mipmapping in GLSL?
Focus: mipmapping.
Answer:
Mipmapping is a texture-filtering technique that pregenerates a chain of progressively lower-resolution versions of a texture and automatically selects the appropriate level based on viewing distance. It reduces moiré artifacts and improves both rendering performance and quality. In GLSL it is driven through the sampling functions and LOD control.
Mipmap basics:
Mip chain: original texture → 1/2 resolution → 1/4 resolution → … → 1x1 pixel
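The chain length follows directly from the texture size; a quick host-side check:
// Number of mip levels in a full chain for a width×height texture
const levels = Math.floor(Math.log2(Math.max(width, height))) + 1;
// e.g. 1024×512 → floor(log2(1024)) + 1 = 11 levels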
Automatic mip selection (fragment shader):
precision mediump float;
uniform sampler2D diffuseMap; // mipmapping must be enabled on this texture
varying vec2 vUv;
void main() {
// The mip level is selected automatically from screen-space derivatives
vec4 color = texture2D(diffuseMap, vUv);
gl_FragColor = color;
}
Manually specifying the mip level:
WebGL 1.0
// Use texture2DLod in the vertex shader (there are no implicit derivatives there)
attribute vec3 position;
attribute vec2 uv;
uniform sampler2D diffuseMap;
varying vec4 vColor;
void main() {
// The LOD must be specified explicitly in a vertex shader
vColor = texture2DLod(diffuseMap, uv, 0.0); // LOD 0
gl_Position = vec4(position, 1.0);
}
WebGL 2.0 / GLSL ES 3.0
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
uniform float lodLevel;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Explicitly select the mip level
vec4 color = textureLod(diffuseMap, vUv, lodLevel);
fragColor = color;
}
Distance-based LOD selection:
#extension GL_EXT_shader_texture_lod : require
precision mediump float;
uniform sampler2D diffuseMap;
uniform vec3 cameraPosition;
uniform float maxDistance;
varying vec2 vUv;
varying vec3 vPosition;
void main() {
// Distance to the camera
float distance = length(cameraPosition - vPosition);
// Derive an LOD level from the distance
float lod = log2(distance / maxDistance + 1.0) * 3.0;
lod = clamp(lod, 0.0, 8.0); // keep it in a sensible range
// Sample at the computed LOD (the extension must also be enabled on the JS
// side via gl.getExtension('EXT_shader_texture_lod'))
vec4 color = texture2DLodEXT(diffuseMap, vUv, lod);
gl_FragColor = color;
}
Manual gradient control (textureGrad):
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Supply the UV gradients manually
vec2 dUvdx = dFdx(vUv);
vec2 dUvdy = dFdy(vUv);
// Scaling the gradients biases the mip selection
dUvdx *= 0.5; // picks a higher-resolution mip
dUvdy *= 0.5;
vec4 color = textureGrad(diffuseMap, vUv, dUvdx, dUvdy);
fragColor = color;
}
Querying mipmap information:
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Size of the texture at a given mip level
ivec2 size0 = textureSize(diffuseMap, 0); // level 0 size
ivec2 size1 = textureSize(diffuseMap, 1); // level 1 size
// Query the LOD that would be sampled: textureQueryLod is desktop GLSL 4.00+
// only and is not available in GLSL ES 3.00 / WebGL 2.0
float lod = textureQueryLod(diffuseMap, vUv).x;
vec4 color = texture(diffuseMap, vUv);
fragColor = color;
}
Visualizing mip levels:
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Estimate this pixel's mip level from the UV footprint
vec2 dx = dFdx(vUv * vec2(textureSize(diffuseMap, 0)));
vec2 dy = dFdy(vUv * vec2(textureSize(diffuseMap, 0)));
float delta = max(dot(dx, dx), dot(dy, dy));
float mipLevel = 0.5 * log2(delta);
// Color-code the level
vec3 colors[8];
colors[0] = vec3(1.0, 0.0, 0.0); // red - level 0
colors[1] = vec3(1.0, 0.5, 0.0); // orange - level 1
colors[2] = vec3(1.0, 1.0, 0.0); // yellow - level 2
colors[3] = vec3(0.0, 1.0, 0.0); // green - level 3
colors[4] = vec3(0.0, 1.0, 1.0); // cyan - level 4
colors[5] = vec3(0.0, 0.0, 1.0); // blue - level 5
colors[6] = vec3(0.5, 0.0, 1.0); // purple - level 6
colors[7] = vec3(1.0, 0.0, 1.0); // magenta - level 7
int level = clamp(int(mipLevel), 0, 7);
fragColor = vec4(colors[level], 1.0);
}
Approximating anisotropic filtering:
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
uniform int maxSamples;
in vec2 vUv;
out vec4 fragColor;
void main() {
vec2 dx = dFdx(vUv);
vec2 dy = dFdy(vUv);
// Find the dominant (longer) footprint axis
float dxLen = length(dx);
float dyLen = length(dy);
vec2 majorAxis;
float samples;
if(dxLen > dyLen) {
majorAxis = dx;
samples = min(dxLen / dyLen, float(maxSamples));
} else {
majorAxis = dy;
samples = min(dyLen / dxLen, float(maxSamples));
}
// Take multiple samples along the major axis
vec4 color = vec4(0.0);
for(float i = 0.0; i < samples; i += 1.0) {
float t = (i + 0.5) / samples - 0.5;
vec2 offset = majorAxis * t;
color += texture(diffuseMap, vUv + offset);
}
color /= samples;
fragColor = color;
}
Mipmap setup on the JavaScript side:
// WebGL 1.0
const gl = canvas.getContext('webgl');
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Upload the texture data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
// Generate the mip chain automatically
gl.generateMipmap(gl.TEXTURE_2D);
// Filtering mode
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// WebGL 2.0 - upload each mip level manually
for(let level = 0; level < mipLevels.length; level++) {
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, mipLevels[level]);
}
Mipmap filtering modes:
| Mode | MIN_FILTER setting | Effect | Cost |
|---|---|---|---|
| Nearest | NEAREST_MIPMAP_NEAREST | Blocky, hard level switches | Cheapest |
| Linear | LINEAR_MIPMAP_NEAREST | Smooth texels, hard level switches | Cheap |
| Bilinear + level blend | NEAREST_MIPMAP_LINEAR | Blocky texels, smooth level blend | Moderate |
| Trilinear | LINEAR_MIPMAP_LINEAR | Smoothest | Slightly slower |
Performance tips:
Common problems:
1. Non-power-of-two texture sizes
// WebGL 1.0 requires POT textures for mipmapping
// For NPOT (non-power-of-two) textures you must use:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// WebGL 2.0 supports mipmaps on NPOT textures
2. Sampling textures in the vertex shader
// texture2DLod is required because there are no implicit derivatives
vec4 color = texture2DLod(sampler, uv, 0.0);
Typical applications:
How to implement Cubemap sampling in shaders?
Focus: cube textures.
Answer:
A cubemap is a texture composed of six 2D faces, used for environment mapping, skyboxes, and panoramic reflections. In GLSL it is sampled with a 3D direction vector, and the matching face is selected automatically.
Basic cubemap sampling:
precision mediump float;
uniform samplerCube envMap; // cube-texture sampler
varying vec3 vReflectDir; // reflection direction
void main() {
// Sample with a 3D direction vector
vec4 envColor = textureCube(envMap, vReflectDir);
gl_FragColor = envColor;
}
Environment mapping (reflection):
Vertex shader
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
uniform vec3 cameraPosition;
varying vec3 vReflectDir;
void main() {
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vec3 worldNormal = normalize(normalMatrix * normal);
// View direction
vec3 viewDir = normalize(worldPosition.xyz - cameraPosition);
// Reflection direction
vReflectDir = reflect(viewDir, worldNormal);
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
Fragment shader
precision mediump float;
uniform samplerCube envMap;
varying vec3 vReflectDir;
void main() {
vec4 reflectColor = textureCube(envMap, vReflectDir);
gl_FragColor = reflectColor;
}
Refraction:
precision mediump float;
uniform samplerCube envMap;
uniform float refractionRatio; // IOR ratio (e.g. 1.0/1.33, air into water)
varying vec3 vPosition;
varying vec3 vNormal;
uniform vec3 cameraPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// Refraction vector
vec3 refractDir = refract(-V, N, refractionRatio);
vec4 refractColor = textureCube(envMap, refractDir);
// Reflection vector
vec3 reflectDir = reflect(-V, N);
vec4 reflectColor = textureCube(envMap, reflectDir);
// Blend with a Fresnel term
float fresnel = pow(1.0 - max(dot(V, N), 0.0), 3.0);
vec4 finalColor = mix(refractColor, reflectColor, fresnel);
gl_FragColor = finalColor;
}
Skybox:
Vertex shader
attribute vec3 position;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
varying vec3 vDirection;
void main() {
// Use the vertex position as the sampling direction
vDirection = position;
// Strip the translation from the view matrix (keeps the skybox centered on the camera)
mat4 viewRotation = mat4(mat3(viewMatrix));
vec4 pos = projectionMatrix * viewRotation * vec4(position, 1.0);
// Force the depth to the far plane (z/w = 1.0); draw with gl.depthFunc(gl.LEQUAL)
gl_Position = pos.xyww;
}
Fragment shader
precision mediump float;
uniform samplerCube skybox;
varying vec3 vDirection;
void main() {
vec4 color = textureCube(skybox, vDirection);
gl_FragColor = color;
}
WebGL 2.0 / GLSL ES 3.0:
#version 300 es
precision mediump float;
uniform samplerCube envMap;
in vec3 vReflectDir;
out vec4 fragColor;
void main() {
// texture() replaces textureCube() in GLSL ES 3.0
vec4 envColor = texture(envMap, vReflectDir);
fragColor = envColor;
}
Cubemap sampling with LOD:
#version 300 es
precision mediump float;
uniform samplerCube envMap;
uniform float roughness; // 0.0 = smooth, 1.0 = rough
in vec3 vReflectDir;
out vec4 fragColor;
void main() {
// Pick a mip level based on roughness
float lod = roughness * 8.0; // assuming an 8-level mip chain
vec4 envColor = textureLod(envMap, vReflectDir, lod);
fragColor = envColor;
}
Chromatic dispersion (per-channel refraction):
precision mediump float;
uniform samplerCube envMap;
uniform vec3 cameraPosition;
varying vec3 vPosition;
varying vec3 vNormal;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// Slightly different index of refraction per wavelength
float etaR = 1.0 / 1.52; // red
float etaG = 1.0 / 1.53; // green (between red and blue)
float etaB = 1.0 / 1.54; // blue (highest)
vec3 refractR = refract(-V, N, etaR);
vec3 refractG = refract(-V, N, etaG);
vec3 refractB = refract(-V, N, etaB);
float r = textureCube(envMap, refractR).r;
float g = textureCube(envMap, refractG).g;
float b = textureCube(envMap, refractB).b;
gl_FragColor = vec4(r, g, b, 1.0);
}
Dynamic cubemaps (real-time reflections):
JavaScript setup:
// Create a cube render target (one face per direction)
const cubeRenderTarget = new THREE.WebGLCubeRenderTarget(512);
const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeRenderTarget);
// Update every frame
function render() {
// Refresh the cubemap
reflectiveMesh.visible = false; // hide the reflective object itself
cubeCamera.update(renderer, scene);
reflectiveMesh.visible = true;
// Render the scene with the updated cubemap
renderer.render(scene, camera);
}
Parallax-corrected (box-projected) cubemaps:
precision mediump float;
uniform samplerCube envMap;
uniform vec3 probePosition; // cubemap capture position
uniform vec3 boxMin; // bounding-box minimum
uniform vec3 boxMax; // bounding-box maximum
varying vec3 vPosition;
varying vec3 vNormal;
uniform vec3 cameraPosition;
void main() {
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
vec3 R = reflect(-V, N);
// Intersect the reflection ray with the bounding box
vec3 firstPlaneIntersect = (boxMax - vPosition) / R;
vec3 secondPlaneIntersect = (boxMin - vPosition) / R;
vec3 furthestPlane = max(firstPlaneIntersect, secondPlaneIntersect);
float distance = min(min(furthestPlane.x, furthestPlane.y), furthestPlane.z);
// Corrected sampling direction
vec3 intersectPos = vPosition + R * distance;
vec3 correctedR = intersectPos - probePosition;
vec4 envColor = textureCube(envMap, correctedR);
gl_FragColor = envColor;
}
Creating cubemaps in JavaScript:
// Option 1: load six images
const loader = new THREE.CubeTextureLoader();
const cubemap = loader.load([
'px.jpg', // positive x
'nx.jpg', // negative x
'py.jpg', // positive y
'ny.jpg', // negative y
'pz.jpg', // positive z
'nz.jpg' // negative z
]);
// Option 2: build a cubemap from an equirectangular panorama
const panorama = new THREE.TextureLoader().load('panorama.jpg');
const cubeRT = new THREE.WebGLCubeRenderTarget(512).fromEquirectangularTexture(renderer, panorama);
const cubemap = cubeRT.texture;
Typical applications:
How are texture coordinate wrap modes reflected in shaders?
Focus: texture addressing modes.
Answer:
The wrap mode controls what a sampler returns when texture coordinates fall outside [0,1]. Wrap modes are configured on the WebGL side, but their behavior can also be reproduced manually in a shader for special effects.
Setting wrap modes in WebGL:
const gl = canvas.getContext('webgl');
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Wrap modes
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT); // horizontal repeat
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT); // vertical repeat
// Other modes:
// gl.CLAMP_TO_EDGE - stretch the edge texels
// gl.MIRRORED_REPEAT - mirrored repeat
// gl.REPEAT - plain repeat
Reproducing wrap modes in a shader:
1. REPEAT
precision mediump float;
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// fract() gives the repeat behavior
vec2 uv = fract(vUv);
vec4 color = texture2D(diffuseMap, uv);
gl_FragColor = color;
}
2. CLAMP_TO_EDGE (edge stretch)
precision mediump float;
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// Clamp into [0, 1]
vec2 uv = clamp(vUv, 0.0, 1.0);
vec4 color = texture2D(diffuseMap, uv);
gl_FragColor = color;
}
3. MIRRORED_REPEAT (mirrored repeat)
precision mediump float;
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
vec2 uv = vUv;
// Mirrored repeat by hand
uv = abs(fract(uv * 0.5) * 2.0 - 1.0);
vec4 color = texture2D(diffuseMap, uv);
gl_FragColor = color;
}
4. CLAMP_TO_BORDER (border color) - WebGL has no native CLAMP_TO_BORDER, so it is emulated here (ES 3.0 syntax)
#version 300 es
precision mediump float;
uniform sampler2D diffuseMap;
uniform vec4 borderColor;
in vec2 vUv;
out vec4 fragColor;
void main() {
// Out-of-range check
if(vUv.x < 0.0 || vUv.x > 1.0 || vUv.y < 0.0 || vUv.y > 1.0) {
fragColor = borderColor;
} else {
fragColor = texture(diffuseMap, vUv);
}
}
Useful wrap-mode combinations:
1. Local detail tiling
precision mediump float;
uniform sampler2D baseMap;
uniform sampler2D detailMap;
uniform float detailScale;
varying vec2 vUv;
void main() {
// Base texture uses CLAMP
vec4 base = texture2D(baseMap, clamp(vUv, 0.0, 1.0));
// Detail texture uses REPEAT
vec4 detail = texture2D(detailMap, fract(vUv * detailScale));
// Blend
vec4 finalColor = base * detail;
gl_FragColor = finalColor;
}
2. Custom mirror mode
precision mediump float;
uniform sampler2D diffuseMap;
varying vec2 vUv;
vec2 mirrorWrap(vec2 uv) {
vec2 result = uv;
// Mirror within each unit interval
for(int i = 0; i < 2; i++) {
float coord = i == 0 ? result.x : result.y;
float cell = floor(coord);
float frac = fract(coord);
// Mirror the odd cells
if(mod(abs(cell), 2.0) > 0.5) {
frac = 1.0 - frac;
}
if(i == 0) result.x = frac;
else result.y = frac;
}
return result;
}
void main() {
vec2 uv = mirrorWrap(vUv);
vec4 color = texture2D(diffuseMap, uv);
gl_FragColor = color;
}
3. Checkerboard wrap (alternating textures)
precision mediump float;
uniform sampler2D texture1;
uniform sampler2D texture2;
varying vec2 vUv;
void main() {
vec2 cell = floor(vUv);
vec2 localUv = fract(vUv);
// Choose a texture in a checker pattern
float pattern = mod(cell.x + cell.y, 2.0);
vec4 color = pattern < 0.5
? texture2D(texture1, localUv)
: texture2D(texture2, localUv);
gl_FragColor = color;
}
4. Radial wrap (polar coordinates)
precision mediump float;
uniform sampler2D diffuseMap;
varying vec2 vUv;
void main() {
// Convert to polar coordinates
vec2 center = vec2(0.5);
vec2 delta = vUv - center;
float radius = length(delta);
float angle = atan(delta.y, delta.x) / (2.0 * 3.14159) + 0.5;
// Repeat radially
radius = fract(radius * 5.0);
vec2 polarUv = vec2(angle, radius);
vec4 color = texture2D(diffuseMap, polarUv);
gl_FragColor = color;
}
Worked examples:
1. Seamless tiling
precision mediump float;
uniform sampler2D tileMap;
uniform float tileScale;
varying vec2 vUv;
void main() {
vec2 uv = fract(vUv * tileScale);
// Fade smoothly near the tile edges
vec2 fade = smoothstep(0.0, 0.1, uv) * smoothstep(1.0, 0.9, uv);
float alpha = min(fade.x, fade.y);
vec4 color = texture2D(tileMap, uv);
color.a *= alpha;
gl_FragColor = color;
}
2. Texture animation (scrolling)
precision mediump float;
uniform sampler2D diffuseMap;
uniform float time;
uniform vec2 scrollSpeed;
varying vec2 vUv;
void main() {
// Scroll the texture
vec2 uv = vUv + scrollSpeed * time;
uv = fract(uv); // REPEAT wrap
vec4 color = texture2D(diffuseMap, uv);
gl_FragColor = color;
}
Common problems and fixes:
1. Visible seams
// Problem: seams show when REPEAT tiles a non-seamless texture
// Fix: make the texture itself seamless, or blend across the seam
// Triplanar projection (avoids stretching and seams)
vec3 blendWeights = abs(vNormal);
blendWeights = normalize(max(blendWeights, 0.00001));
vec4 xaxis = texture2D(tex, vPosition.yz);
vec4 yaxis = texture2D(tex, vPosition.xz);
vec4 zaxis = texture2D(tex, vPosition.xy);
vec4 color = xaxis * blendWeights.x + yaxis * blendWeights.y + zaxis * blendWeights.z;
2. POT vs NPOT textures
// WebGL 1.0 restrictions on NPOT textures
if(!isPowerOfTwo(image.width) || !isPowerOfTwo(image.height)) {
// NPOT textures must use CLAMP_TO_EDGE (and cannot be mipmapped)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
}
Typical applications:
How to create and use custom shaders in WebGL?
Focus: WebGL shader integration.
Answer:
Creating a custom shader in WebGL means compiling the shader sources, linking them into a program, wiring up attributes and uniforms, and using the program at draw time. The full flow covers compilation, linking, error handling, and data binding.
The complete shader-creation flow:
// 1. Compile a shader
function createShader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
// Check for compile errors
const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
if (!success) {
console.error('Shader compilation error:', gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
// 2. Create the shader program
function createProgram(gl, vertexShader, fragmentShader) {
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
// Check for link errors
const success = gl.getProgramParameter(program, gl.LINK_STATUS);
if (!success) {
console.error('Program linking error:', gl.getProgramInfoLog(program));
gl.deleteProgram(program);
return null;
}
return program;
}
// 3. Use the shaders
const canvas = document.querySelector('#canvas');
const gl = canvas.getContext('webgl');
// Vertex shader source
const vertexShaderSource = `
attribute vec4 a_position;
attribute vec2 a_texCoord;
uniform mat4 u_matrix;
varying vec2 v_texCoord;
void main() {
gl_Position = u_matrix * a_position;
v_texCoord = a_texCoord;
}
`;
// Fragment shader source
const fragmentShaderSource = `
precision mediump float;
uniform sampler2D u_texture;
uniform vec4 u_color;
varying vec2 v_texCoord;
void main() {
vec4 texColor = texture2D(u_texture, v_texCoord);
gl_FragColor = texColor * u_color;
}
`;
// Create the shaders and the program
const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertexShaderSource);
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSource);
const program = createProgram(gl, vertexShader, fragmentShader);
// 4. Look up attribute and uniform locations
const positionLocation = gl.getAttribLocation(program, 'a_position');
const texCoordLocation = gl.getAttribLocation(program, 'a_texCoord');
const matrixLocation = gl.getUniformLocation(program, 'u_matrix');
const textureLocation = gl.getUniformLocation(program, 'u_texture');
const colorLocation = gl.getUniformLocation(program, 'u_color');
// 5. Create and bind buffers
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
const positions = [
-1.0, -1.0,
1.0, -1.0,
-1.0, 1.0,
1.0, 1.0,
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
// 6. Set up the attribute
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// 7. Use the program and set the uniforms
gl.useProgram(program);
// Matrix uniform
const matrix = new Float32Array(16);
// ... fill in the matrix data
gl.uniformMatrix4fv(matrixLocation, false, matrix);
// Color uniform
gl.uniform4f(colorLocation, 1.0, 0.5, 0.2, 1.0);
// Texture uniform
gl.uniform1i(textureLocation, 0); // texture unit 0
// 8. Draw
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
A reusable utility class:
class ShaderProgram {
constructor(gl, vertexSource, fragmentSource) {
this.gl = gl;
this.program = this.createProgram(vertexSource, fragmentSource);
this.attributes = {};
this.uniforms = {};
this.cacheLocations();
}
createShader(type, source) {
const gl = this.gl;
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
const info = gl.getShaderInfoLog(shader);
gl.deleteShader(shader);
throw new Error(`Shader compilation failed: ${info}`);
}
return shader;
}
createProgram(vertexSource, fragmentSource) {
const gl = this.gl;
const vertexShader = this.createShader(gl.VERTEX_SHADER, vertexSource);
const fragmentShader = this.createShader(gl.FRAGMENT_SHADER, fragmentSource);
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
const info = gl.getProgramInfoLog(program);
gl.deleteProgram(program);
throw new Error(`Program linking failed: ${info}`);
}
// The shaders can be deleted once linked into the program
gl.deleteShader(vertexShader);
gl.deleteShader(fragmentShader);
return program;
}
cacheLocations() {
const gl = this.gl;
const program = this.program;
// Cache all attribute locations
const numAttributes = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
for (let i = 0; i < numAttributes; i++) {
const info = gl.getActiveAttrib(program, i);
const location = gl.getAttribLocation(program, info.name);
this.attributes[info.name] = location;
}
// Cache all uniform locations
const numUniforms = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
for (let i = 0; i < numUniforms; i++) {
const info = gl.getActiveUniform(program, i);
const location = gl.getUniformLocation(program, info.name);
this.uniforms[info.name] = location;
}
}
use() {
this.gl.useProgram(this.program);
}
setAttribute(name, buffer, size, type = this.gl.FLOAT, normalized = false, stride = 0, offset = 0) {
const gl = this.gl;
const location = this.attributes[name];
if (location === undefined) {
console.warn(`Attribute ${name} not found`);
return;
}
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(location);
gl.vertexAttribPointer(location, size, type, normalized, stride, offset);
}
setUniform(name, value) {
const gl = this.gl;
const location = this.uniforms[name];
if (location == null) { // covers both undefined (never cached) and null
console.warn(`Uniform ${name} not found`);
return;
}
// Pick the uniform setter from the value's type
if (typeof value === 'number') {
gl.uniform1f(location, value);
} else if (value.length === 2) {
gl.uniform2fv(location, value);
} else if (value.length === 3) {
gl.uniform3fv(location, value);
} else if (value.length === 4) {
gl.uniform4fv(location, value);
} else if (value.length === 16) {
gl.uniformMatrix4fv(location, false, value);
}
}
dispose() {
this.gl.deleteProgram(this.program);
}
}
// Usage
const shader = new ShaderProgram(gl, vertexSource, fragmentSource);
shader.use();
shader.setAttribute('a_position', positionBuffer, 3);
shader.setUniform('u_color', [1.0, 0.0, 0.0, 1.0]);
shader.setUniform('u_matrix', matrix);
Typical applications:
How to use ShaderMaterial in Three.js?
Focus: Three.js shader materials.
Answer:
Three.js's ShaderMaterial allows fully custom shaders while automatically injecting Three.js's built-in uniforms and attributes. It is the main tool for building custom effects and materials.
Basic ShaderMaterial usage:
const material = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0.0 },
color: { value: new THREE.Color(0xff0000) },
texture1: { value: textureLoader.load('texture.jpg') }
},
vertexShader: `
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
uniform float time;
uniform vec3 color;
uniform sampler2D texture1;
varying vec2 vUv;
void main() {
vec4 texColor = texture2D(texture1, vUv);
gl_FragColor = vec4(color * texColor.rgb, 1.0);
}
`
});
// Drive the uniforms from the render loop
function animate() {
requestAnimationFrame(animate);
material.uniforms.time.value += 0.01;
renderer.render(scene, camera);
}
animate();
Built-in Three.js uniforms and attributes:
// === Built-in uniforms ===
uniform mat4 modelMatrix; // model matrix
uniform mat4 modelViewMatrix; // model-view matrix
uniform mat4 projectionMatrix; // projection matrix
uniform mat4 viewMatrix; // view matrix
uniform mat3 normalMatrix; // normal matrix
uniform vec3 cameraPosition; // camera position
// === Built-in attributes ===
attribute vec3 position; // vertex position
attribute vec3 normal; // normal vector
attribute vec2 uv; // UV coordinates
attribute vec2 uv2; // second UV set
attribute vec4 tangent; // tangent
Worked examples:
1. Wave effect
const material = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 }
},
vertexShader: `
uniform float time;
varying vec2 vUv;
void main() {
vUv = uv;
vec3 pos = position;
pos.z += sin(pos.x * 2.0 + time) * 0.5;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
`,
fragmentShader: `
varying vec2 vUv;
void main() {
gl_FragColor = vec4(vUv, 0.5, 1.0);
}
`
});
2. Fresnel effect
const material = new THREE.ShaderMaterial({
uniforms: {
fresnelColor: { value: new THREE.Color(0x00ff00) }
},
vertexShader: `
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
vNormal = normalize(normalMatrix * normal);
vPosition = (modelViewMatrix * vec4(position, 1.0)).xyz;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
uniform vec3 fresnelColor;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
vec3 viewDir = normalize(-vPosition);
float fresnel = pow(1.0 - dot(viewDir, vNormal), 3.0);
gl_FragColor = vec4(fresnelColor * fresnel, 1.0);
}
`,
transparent: true
});
How to use WebGL attributes and uniforms in shaders?
Focus: data-passing mechanisms.
Answer:
Attributes and uniforms are the two main mechanisms for passing data from JavaScript into shaders. An attribute carries per-vertex data; a uniform is a global constant for the draw call.
Attributes (per-vertex data):
// JavaScript side
const gl = canvas.getContext('webgl');
// Look up attribute locations
const positionLocation = gl.getAttribLocation(program, 'a_position');
const colorLocation = gl.getAttribLocation(program, 'a_color');
// Create and fill a buffer
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
const positions = new Float32Array([0, 0, 0, 1, 0, 0, 1, 1, 0]);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
// Enable and describe the attribute
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(
positionLocation, // attribute location
3, // 3 components per vertex (x, y, z)
gl.FLOAT, // data type
false, // normalize?
0, // stride (0 = tightly packed)
0 // offset
);
// GLSL vertex shader
attribute vec3 a_position;
attribute vec3 a_color;
varying vec3 v_color;
void main() {
gl_Position = vec4(a_position, 1.0);
v_color = a_color;
}
Uniforms (global values):
// JavaScript side
// Look up uniform locations
const colorLocation = gl.getUniformLocation(program, 'u_color');
const matrixLocation = gl.getUniformLocation(program, 'u_matrix');
const timeLocation = gl.getUniformLocation(program, 'u_time');
// Set uniform values
gl.uniform4f(colorLocation, 1.0, 0.5, 0.0, 1.0); // vec4
gl.uniform3fv(colorLocation, [1.0, 0.5, 0.0]); // vec3 (array form)
gl.uniform1f(timeLocation, 0.5); // float
gl.uniform1i(textureLocation, 0); // sampler2D (texture unit)
gl.uniformMatrix4fv(matrixLocation, false, matrix);// mat4
// GLSL fragment shader
precision mediump float;
uniform vec4 u_color;
uniform float u_time;
uniform sampler2D u_texture;
varying vec3 v_color;
void main() {
vec4 texColor = texture2D(u_texture, v_color.xy);
gl_FragColor = u_color * texColor;
}
Attribute vs uniform:
| Property | Attribute | Uniform |
|---|---|---|
| Scope | Vertex shader only | Vertex + fragment shaders |
| Data | Differs per vertex | Same for all vertices |
| Update frequency | Every draw call | Reusable across draws |
| Typical use | Positions, colors, UVs | Matrices, time, textures |
Type-to-setter reference:
// Uniform setter functions
gl.uniform1f(loc, v); // float
gl.uniform1i(loc, v); // int, sampler2D
gl.uniform2f(loc, x, y); // vec2
gl.uniform3f(loc, x, y, z); // vec3
gl.uniform4f(loc, x, y, z, w); // vec4
gl.uniform2fv(loc, [x, y]); // vec2 (array form)
gl.uniform3fv(loc, [x, y, z]); // vec3 (array form)
gl.uniform4fv(loc, [x,y,z,w]); // vec4 (array form)
gl.uniformMatrix2fv(loc, false, m);// mat2
gl.uniformMatrix3fv(loc, false, m);// mat3
gl.uniformMatrix4fv(loc, false, m);// mat4
How to debug GLSL shader code? What are the common debugging methods?
Focus: shader debugging techniques.
Answer:
Shaders are hard to debug because you cannot set breakpoints inside them. Common techniques include color visualization, incremental testing, compile-error checking, and dedicated tooling.
1. Color visualization (most common)
// Visualize a vector (remapped into the color range)
gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
// Visualize UV coordinates
gl_FragColor = vec4(vUv, 0.0, 1.0);
// Visualize a scalar (grayscale)
gl_FragColor = vec4(vec3(someValue), 1.0);
// Conditional debugging (check which branch runs)
if (someCondition) {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red
} else {
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); // green
}
// Range check
float value = clamp(debugValue, 0.0, 1.0);
gl_FragColor = vec4(value, value, value, 1.0);
2. Compile and link error checking
function checkShaderCompilation(gl, shader) {
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
const info = gl.getShaderInfoLog(shader);
console.error('Shader compilation error:', info);
// Surface the specific error lines and messages
const lines = info.split('\n');
lines.forEach(line => {
if (line.includes('ERROR')) {
console.error(' →', line);
}
});
return false;
}
return true;
}
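Driver messages reference line numbers within the source string, so a small helper that logs a numbered listing makes them far easier to locate (a sketch):
// Print the shader source with line numbers so messages like
// "ERROR: 0:17: ..." can be matched to the offending line
function logNumberedSource(source) {
source.split('\n').forEach((line, i) => {
console.log(`${String(i + 1).padStart(3)}: ${line}`);
});
}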
3. WebGL Inspector (Chrome extension)
4. SpectorJS (frame-analysis tool)
// Integrate SpectorJS
const spector = new SPECTOR.Spector();
spector.displayUI();
// Capture a single frame for analysis
spector.captureCanvas(canvas);
5. Incremental testing
// Output intermediate results one step at a time
void main() {
vec3 normal = normalize(vNormal);
// Test 1: output only the normal
// gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
// return;
vec3 lightDir = normalize(lightPos - vPosition);
// Test 2: output the light direction
// gl_FragColor = vec4(lightDir * 0.5 + 0.5, 1.0);
// return;
float diff = max(dot(normal, lightDir), 0.0);
// Test 3: output the diffuse term
gl_FragColor = vec4(vec3(diff), 1.0);
}
6. Helper functions
// Heatmap visualization
vec3 heatmap(float value) {
vec3 colors[5];
colors[0] = vec3(0.0, 0.0, 1.0); // blue
colors[1] = vec3(0.0, 1.0, 1.0); // cyan
colors[2] = vec3(0.0, 1.0, 0.0); // green
colors[3] = vec3(1.0, 1.0, 0.0); // yellow
colors[4] = vec3(1.0, 0.0, 0.0); // red
value = clamp(value, 0.0, 1.0) * 4.0;
int idx = int(floor(value));
float t = fract(value);
if (idx >= 4) return colors[4];
return mix(colors[idx], colors[idx + 1], t);
}
// Usage
gl_FragColor = vec4(heatmap(someValue), 1.0);
// Grid-line helper
float grid(vec2 uv, float scale) {
vec2 coord = fract(uv * scale);
vec2 grid = abs(fract(coord - 0.5) - 0.5) / fwidth(coord);
float line = min(grid.x, grid.y);
return 1.0 - min(line, 1.0);
}
7. Debug switches with #define
#define DEBUG_NORMALS 0
#define DEBUG_UV 0
#define DEBUG_LIGHTING 1
void main() {
#if DEBUG_NORMALS
gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
return;
#endif
#if DEBUG_UV
gl_FragColor = vec4(vUv, 0.0, 1.0);
return;
#endif
// Normal rendering path
vec3 color = calculateLighting();
gl_FragColor = vec4(color, 1.0);
}
8. RenderDoc (professional tool)
Diagnosing common symptoms:
// Symptom 1: black screen - check value ranges
gl_FragColor = vec4(abs(someVector), 1.0); // ensure non-negative values
// Symptom 2: flickering - check for NaN (isnan/isinf require GLSL ES 3.00)
if (isnan(value) || isinf(value)) {
gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); // magenta flags the error
return;
}
// Symptom 3: nothing visible - check the alpha channel
gl_FragColor = vec4(color, 1.0); // force alpha = 1
// Symptom 4: slow - simplify the shader
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // solid-color test
Practical debugging checklist:
How to implement Physically Based Rendering (PBR) in shaders?
Focus: implementing PBR.
Answer:
PBR (physically based rendering) is a rendering approach grounded in physical principles: energy conservation, the Fresnel effect, and microfacet theory, producing realistic material appearance. The mainstream implementation is the Cook-Torrance BRDF model.
Core PBR concepts:
Cook-Torrance BRDF implementation:
precision highp float;
const float PI = 3.14159265359;
// Normal distribution function - GGX/Trowbridge-Reitz
float DistributionGGX(vec3 N, vec3 H, float roughness) {
float a = roughness * roughness;
float a2 = a * a;
float NdotH = max(dot(N, H), 0.0);
float NdotH2 = NdotH * NdotH;
float nom = a2;
float denom = (NdotH2 * (a2 - 1.0) + 1.0);
denom = PI * denom * denom;
return nom / denom;
}
// Geometry function - Schlick-GGX
float GeometrySchlickGGX(float NdotV, float roughness) {
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float nom = NdotV;
float denom = NdotV * (1.0 - k) + k;
return nom / denom;
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness) {
float NdotV = max(dot(N, V), 0.0);
float NdotL = max(dot(N, L), 0.0);
float ggx2 = GeometrySchlickGGX(NdotV, roughness);
float ggx1 = GeometrySchlickGGX(NdotL, roughness);
return ggx1 * ggx2;
}
// Fresnel equation - Schlick approximation
vec3 fresnelSchlick(float cosTheta, vec3 F0) {
return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
// Fresnel with a roughness term
vec3 fresnelSchlickRoughness(float cosTheta, vec3 F0, float roughness) {
return F0 + (max(vec3(1.0 - roughness), F0) - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
// PBR main shader
varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vUv;
uniform vec3 cameraPosition;
uniform vec3 lightPositions[4];
uniform vec3 lightColors[4];
uniform sampler2D albedoMap;
uniform sampler2D normalMap;
uniform sampler2D metallicMap;
uniform sampler2D roughnessMap;
uniform sampler2D aoMap;
void main() {
// Sample the material textures
vec3 albedo = pow(texture2D(albedoMap, vUv).rgb, vec3(2.2)); // sRGB to linear
float metallic = texture2D(metallicMap, vUv).r;
float roughness = texture2D(roughnessMap, vUv).r;
float ao = texture2D(aoMap, vUv).r;
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vPosition);
// F0: Fresnel reflectance at normal incidence
vec3 F0 = vec3(0.04); // default for dielectrics
F0 = mix(F0, albedo, metallic);
// Reflectance equation
vec3 Lo = vec3(0.0);
for(int i = 0; i < 4; ++i) {
vec3 L = normalize(lightPositions[i] - vPosition);
vec3 H = normalize(V + L);
float distance = length(lightPositions[i] - vPosition);
float attenuation = 1.0 / (distance * distance);
vec3 radiance = lightColors[i] * attenuation;
// Cook-Torrance BRDF
float NDF = DistributionGGX(N, H, roughness);
float G = GeometrySmith(N, V, L, roughness);
vec3 F = fresnelSchlick(max(dot(H, V), 0.0), F0);
vec3 numerator = NDF * G * F;
float denominator = 4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0) + 0.0001;
vec3 specular = numerator / denominator;
// kS is the specular fraction, kD the diffuse fraction
vec3 kS = F;
vec3 kD = vec3(1.0) - kS;
kD *= 1.0 - metallic; // metals have no diffuse component
float NdotL = max(dot(N, L), 0.0);
Lo += (kD * albedo / PI + specular) * radiance * NdotL;
}
// Ambient term (a simplified stand-in for IBL)
vec3 ambient = vec3(0.03) * albedo * ao;
vec3 color = ambient + Lo;
// HDR tone mapping
color = color / (color + vec3(1.0));
// Gamma correction
color = pow(color, vec3(1.0/2.2));
gl_FragColor = vec4(color, 1.0);
}
Image-based lighting (IBL):
uniform samplerCube irradianceMap;
uniform samplerCube prefilterMap;
uniform sampler2D brdfLUT;
void main() {
// ... same setup as in the previous listing
// IBL ambient term
vec3 F = fresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec3 kS = F;
vec3 kD = 1.0 - kS;
kD *= 1.0 - metallic;
// Diffuse irradiance
vec3 irradiance = textureCube(irradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;
// Specular reflection; MAX_LOD is the top mip of the prefiltered map (assumed
// constant here). In WebGL 1.0 use textureCubeLodEXT from EXT_shader_texture_lod.
const float MAX_LOD = 4.0;
vec3 R = reflect(-V, N);
vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_LOD).rgb;
vec2 brdf = texture2D(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * brdf.x + brdf.y);
vec3 ambient = (kD * diffuse + specular) * ao;
vec3 color = ambient + Lo;
// HDR + Gamma
color = color / (color + vec3(1.0));
color = pow(color, vec3(1.0/2.2));
gl_FragColor = vec4(color, 1.0);
}
Typical applications:
How to implement Screen Space Reflection (SSR) using shaders?
Focus: SSR techniques.
Answer:
SSR (screen-space reflection) ray-marches in screen space, using the depth buffer to detect where the reflected ray intersects the scene, producing real-time reflections. It suits planar and near-planar surfaces.
Basic SSR implementation:
#version 300 es
precision highp float;
uniform sampler2D sceneTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform mat4 projectionMatrix;
uniform mat4 invProjectionMatrix;
uniform vec2 resolution;
in vec2 vUv;
out vec4 fragColor;
// Screen space to view space
vec3 screenToView(vec3 screenPos) {
vec4 clipPos = vec4(screenPos * 2.0 - 1.0, 1.0);
vec4 viewPos = invProjectionMatrix * clipPos;
return viewPos.xyz / viewPos.w;
}
// View space to screen space
vec3 viewToScreen(vec3 viewPos) {
vec4 clipPos = projectionMatrix * vec4(viewPos, 1.0);
vec3 ndcPos = clipPos.xyz / clipPos.w;
return ndcPos * 0.5 + 0.5;
}
void main() {
// Read the G-Buffer
vec3 normal = texture(normalTexture, vUv).xyz;
float depth = texture(depthTexture, vUv).r;
if (depth >= 1.0) {
fragColor = vec4(0.0);
return;
}
// Reconstruct the view-space position
vec3 viewPos = screenToView(vec3(vUv, depth));
vec3 viewNormal = normalize(normal);
vec3 viewDir = normalize(viewPos);
// Reflection direction
vec3 reflectDir = reflect(viewDir, viewNormal);
// Ray marching
const int maxSteps = 32;
const float stepSize = 0.1;
float hitDist = 0.0;
bool hit = false;
for (int i = 0; i < maxSteps; i++) {
vec3 rayPos = viewPos + reflectDir * hitDist;
vec3 screenPos = viewToScreen(rayPos);
// Out-of-bounds check
if (screenPos.x < 0.0 || screenPos.x > 1.0 ||
screenPos.y < 0.0 || screenPos.y > 1.0) break;
float sampledDepth = texture(depthTexture, screenPos.xy).r;
vec3 sampledViewPos = screenToView(vec3(screenPos.xy, sampledDepth));
// Depth-difference hit test
if (rayPos.z < sampledViewPos.z && rayPos.z > sampledViewPos.z - 0.5) {
hit = true;
break;
}
hitDist += stepSize;
}
if (hit) {
vec3 hitScreenPos = viewToScreen(viewPos + reflectDir * hitDist);
vec4 reflectColor = texture(sceneTexture, hitScreenPos.xy);
// Fade near the screen edges
vec2 edgeFade = smoothstep(0.0, 0.1, hitScreenPos.xy) *
smoothstep(1.0, 0.9, hitScreenPos.xy);
float fade = edgeFade.x * edgeFade.y;
fragColor = vec4(reflectColor.rgb * fade, 1.0);
} else {
fragColor = vec4(0.0);
}
}
Optimized SSR (hierarchical stepping):
// Coarse stepping + fine refinement
const int coarseSteps = 16;
const int fineSteps = 8;
// Phase 1: coarse march until the ray crosses the depth buffer
for (int i = 0; i < coarseSteps; i++) {
// ...coarse hit test
}
// Phase 2: binary-search the last interval to pin down the hit point
for (int i = 0; i < fineSteps; i++) {
hitDist *= 0.5;
// ...refine toward the intersection
}
How are shaders organized in Deferred Rendering?
Focus: deferred rendering architecture.
Answer:
Deferred rendering splits the frame into a geometry pass and a lighting pass: geometric attributes are written to the G-Buffer first, then lighting is computed from that buffer, which scales well to scenes with many lights.
Geometry pass (G-Buffer generation):
#version 300 es
precision highp float;
in vec3 vPosition;
in vec3 vNormal;
in vec2 vUv;
layout(location = 0) out vec4 gPosition;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gAlbedoSpec;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
void main() {
gPosition = vec4(vPosition, 1.0);
gNormal = vec4(normalize(vNormal), 1.0);
gAlbedoSpec.rgb = texture(diffuseMap, vUv).rgb;
gAlbedoSpec.a = texture(specularMap, vUv).r;
}
Lighting pass:
#version 300 es
precision highp float;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;
struct Light {
vec3 position;
vec3 color;
float radius;
};
uniform Light lights[32];
uniform int numLights;
uniform vec3 viewPos;
in vec2 vUv;
out vec4 fragColor;
void main() {
vec3 fragPos = texture(gPosition, vUv).rgb;
vec3 normal = texture(gNormal, vUv).rgb;
vec3 albedo = texture(gAlbedoSpec, vUv).rgb;
float specular = texture(gAlbedoSpec, vUv).a;
vec3 lighting = albedo * 0.1; // hard-coded ambient term
vec3 viewDir = normalize(viewPos - fragPos);
for(int i = 0; i < numLights; ++i) {
float distance = length(lights[i].position - fragPos);
if(distance < lights[i].radius) {
vec3 lightDir = normalize(lights[i].position - fragPos);
vec3 diffuse = max(dot(normal, lightDir), 0.0) * albedo * lights[i].color;
lighting += diffuse * (1.0 - distance/lights[i].radius);
}
}
fragColor = vec4(lighting, 1.0);
}
How to implement Volumetric Lighting effects in shaders?
Focus: volumetric lighting.
Answer:
Volumetric lighting uses ray marching to simulate how light scatters in a participating medium, creating "god ray" effects.
precision highp float;
uniform sampler2D depthTexture;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;
uniform mat4 invViewProjectionMatrix;
varying vec2 vUv;
const int NUM_SAMPLES = 32;
vec3 worldPosFromDepth(vec2 uv, float depth) {
vec4 clipPos = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
vec4 worldPos = invViewProjectionMatrix * clipPos;
return worldPos.xyz / worldPos.w;
}
void main() {
float depth = texture2D(depthTexture, vUv).r;
vec3 worldPos = worldPosFromDepth(vUv, depth);
vec3 rayStart = cameraPosition;
vec3 rayEnd = worldPos;
vec3 rayDir = rayEnd - rayStart;
float rayLength = length(rayDir);
rayDir = normalize(rayDir);
float stepSize = rayLength / float(NUM_SAMPLES);
vec3 step = rayDir * stepSize;
vec3 currentPos = rayStart;
float volumetricLight = 0.0;
for(int i = 0; i < NUM_SAMPLES; i++) {
vec3 toLight = lightPosition - currentPos;
float dist = length(toLight);
float attenuation = 1.0 / (1.0 + dist * dist * 0.01);
volumetricLight += attenuation;
currentPos += step;
}
volumetricLight /= float(NUM_SAMPLES);
gl_FragColor = vec4(vec3(volumetricLight), 1.0);
}
What is the implementation principle of Shadow Mapping in shaders?
Focus: shadow mapping.
Answer:
Shadow mapping uses two render passes: the first renders depth from the light's viewpoint; the second compares each fragment's light-space depth against that map to decide whether it is in shadow.
// Pass 1: render the shadow (depth) map
void main() {
gl_FragColor = vec4(gl_FragCoord.z);
}
// Pass 2: use the shadow map
uniform sampler2D shadowMap;
uniform vec2 shadowMapSize; // e.g. vec2(2048.0), passed from the host
uniform mat4 lightSpaceMatrix;
uniform vec3 lightColor;
varying vec3 vPosition;
float shadowCalculation(vec4 fragPosLightSpace) {
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
projCoords = projCoords * 0.5 + 0.5;
float closestDepth = texture2D(shadowMap, projCoords.xy).r;
float currentDepth = projCoords.z;
float bias = 0.005;
// Single-sample hard shadow, shown for reference:
// float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
// PCF soft shadow (3x3 kernel); shadowMapSize comes in as a uniform because
// textureSize() only exists in GLSL ES 3.00
float shadowSum = 0.0;
vec2 texelSize = 1.0 / shadowMapSize;
for(int x = -1; x <= 1; ++x) {
for(int y = -1; y <= 1; ++y) {
float pcfDepth = texture2D(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
shadowSum += currentDepth - bias > pcfDepth ? 1.0 : 0.0;
}
}
return shadowSum / 9.0;
}
void main() {
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(vPosition, 1.0);
float shadow = shadowCalculation(fragPosLightSpace);
vec3 lighting = (1.0 - shadow) * lightColor;
gl_FragColor = vec4(lighting, 1.0);
}
How to implement water surface effects using shaders? What factors need to be considered?
Focus: water rendering.
Answer:
A water surface has to account for: wave animation, normal perturbation, reflection and refraction, the Fresnel effect, and depth-based attenuation.
uniform float time;
uniform sampler2D normalMap;
uniform samplerCube envMap;
uniform vec3 cameraPosition; // required by the Fresnel term below
varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vUv;
void main() {
// 1. Wave normals: blend two scrolling normal-map samples
vec2 uv1 = vUv + vec2(time * 0.1, 0.0);
vec2 uv2 = vUv + vec2(0.0, time * 0.08);
vec3 normal1 = texture2D(normalMap, uv1).xyz * 2.0 - 1.0;
vec3 normal2 = texture2D(normalMap, uv2).xyz * 2.0 - 1.0;
vec3 normal = normalize(normal1 + normal2);
// 2. Fresnel reflectance
vec3 viewDir = normalize(cameraPosition - vPosition);
float fresnel = pow(1.0 - dot(viewDir, normal), 3.0);
// 3. Reflection and refraction
vec3 reflectDir = reflect(-viewDir, normal);
vec4 reflectColor = textureCube(envMap, reflectDir);
vec3 refractDir = refract(-viewDir, normal, 0.75);
vec4 refractColor = textureCube(envMap, refractDir);
vec4 waterColor = mix(refractColor, reflectColor, fresnel);
gl_FragColor = waterColor;
}
How to implement particle system rendering in shaders?
Focus: particle shaders.
Answer:
A GPU particle system computes particle positions and attributes in the vertex shader, enabling large-scale particle effects at high performance.
// Vertex shader (projectionMatrix/modelViewMatrix assumed provided, e.g. as Three.js built-ins)
attribute float particleId;
uniform float time;
uniform sampler2D positionTexture;
void main() {
vec2 uv = vec2(mod(particleId, 512.0) / 512.0, floor(particleId / 512.0) / 512.0);
vec4 particleData = texture2D(positionTexture, uv);
vec3 position = particleData.xyz;
float life = particleData.w;
position.y += time * 2.0 - life * 10.0;
float size = (1.0 - life) * 10.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
gl_PointSize = size;
}
// Fragment shader
void main() {
vec2 coord = gl_PointCoord - 0.5;
float dist = length(coord);
if(dist > 0.5) discard;
float alpha = 1.0 - dist * 2.0;
gl_FragColor = vec4(1.0, 0.5, 0.2, alpha);
}
How to implement post-processing effects (such as Bloom, blur) using shaders?
Focus: post-processing techniques.
Answer:
Post-processing applies image operations to the rendered frame in screen space; common effects include bloom, blur, and color grading.
// Gaussian blur (horizontal pass shown)
uniform sampler2D inputTexture;
uniform vec2 resolution;
varying vec2 vUv;
void main() {
vec2 texelSize = 1.0 / resolution;
vec4 result = vec4(0.0);
// GLSL ES 1.00 has no array initializers, so fill the Gaussian weights per element
float weights[5];
weights[0] = 0.227027; weights[1] = 0.1945946; weights[2] = 0.1216216;
weights[3] = 0.054054; weights[4] = 0.016216;
result += texture2D(inputTexture, vUv) * weights[0];
for(int i = 1; i < 5; ++i) {
result += texture2D(inputTexture, vUv + vec2(texelSize.x * float(i), 0.0)) * weights[i];
result += texture2D(inputTexture, vUv - vec2(texelSize.x * float(i), 0.0)) * weights[i];
}
gl_FragColor = result;
}
// Bloom composite (scene + blurred bright pass; see the bright-pass sketch below)
uniform sampler2D sceneTexture;
uniform sampler2D bloomTexture;
varying vec2 vUv;
void main() {
vec4 scene = texture2D(sceneTexture, vUv);
vec4 bloom = texture2D(bloomTexture, vUv);
gl_FragColor = scene + bloom * 0.5;
}
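The bloom input above comes from a bright-pass filter that this snippet assumes has already run; a minimal sketch (the threshold uniform is an assumption):
// Bright pass: keep only pixels above a luminance threshold; the result
// is blurred (e.g. with the Gaussian pass above) before compositing
precision mediump float;
uniform sampler2D sceneTexture;
uniform float threshold; // e.g. 0.8
varying vec2 vUv;
void main() {
vec4 color = texture2D(sceneTexture, vUv);
float luminance = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
gl_FragColor = luminance > threshold ? color : vec4(0.0);
}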
How to implement procedural texture generation in shaders?
Focus: procedural textures.
Answer:
Procedural textures are generated at run time from mathematical functions: no images to load, effectively unlimited resolution, and parametric control.
// Hash-based pseudo-random noise (a cheap stand-in, not true Perlin noise)
float noise(vec2 p) {
return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}
// Checkerboard
float checkerboard(vec2 uv, float scale) {
vec2 c = floor(uv * scale);
return mod(c.x + c.y, 2.0);
}
// Voronoi pattern (the scalar hash places feature points along a diagonal; a vec2 hash improves it)
float voronoi(vec2 uv) {
vec2 i = floor(uv);
vec2 f = fract(uv);
float minDist = 1.0;
for(int y = -1; y <= 1; y++) {
for(int x = -1; x <= 1; x++) {
vec2 neighbor = vec2(float(x), float(y));
vec2 point = noise(i + neighbor) + neighbor;
float dist = length(point - f);
minDist = min(minDist, dist);
}
}
return minDist;
}
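A short sketch of how these generators plug into a fragment shader (the uScale uniform is an assumption, and the function bodies above are pasted in):
precision mediump float;
uniform float uScale; // tiling factor (assumed)
varying vec2 vUv;
// ... noise/checkerboard/voronoi definitions from above ...
void main() {
// Checker tint modulated by voronoi cell distance
float cells = voronoi(vUv * uScale);
float checker = checkerboard(vUv, uScale);
vec3 color = mix(vec3(0.2, 0.3, 0.8), vec3(0.9), checker) * cells;
gl_FragColor = vec4(color, 1.0);
}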
What are the performance optimization strategies for GLSL shaders?
Focus: shader optimization.
Answer:
Key strategies: reduce per-fragment math, avoid divergent branches, minimize texture samples, use the lowest precision that suffices, and precompute constants on the CPU.
Optimization techniques:
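A hedged sketch combining several of these ideas in one fragment shader (the precomputed uniform and tangent-space varying are assumptions about the host setup):
precision mediump float; // lowest precision that suffices
uniform sampler2D map;
uniform vec3 lightColorTimesIntensity; // precomputed on the CPU (color * intensity)
varying vec2 vUv;
varying vec3 vLightDirTS; // light direction computed once per vertex
void main() {
// One texture fetch, reused (instead of sampling the same map twice)
vec4 texel = texture2D(map, vUv);
// max() instead of an if/else keeps the code branch-free
float diff = max(normalize(vLightDirTS).z, 0.0);
gl_FragColor = vec4(texel.rgb * lightColorTimesIntensity * diff, texel.a);
}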
How to reduce the performance impact of branches and loops in shaders?
Focus: branch optimization.
Answer:
Replace branches with math functions and conditional expressions, fix loop bounds at compile time, and exploit the GPU's lock-step parallelism.
// Bad: divergent dynamic branch
if(condition) {
color = vec3(1.0, 0.0, 0.0);
} else {
color = vec3(0.0, 1.0, 0.0);
}
// Good: branch-free mix (condition is assumed to be a float of 0.0 or 1.0)
float t = step(0.5, condition);
color = mix(vec3(0.0, 1.0, 0.0), vec3(1.0, 0.0, 0.0), t);
// Bad: variable-length loop
for(int i = 0; i < numIterations; i++) { }
// Good: fixed upper bound + early exit
for(int i = 0; i < MAX_ITERATIONS; i++) {
if(i >= numIterations) break;
}
What is the impact of shader precision on performance and quality?
Focus: precision control.
Answer:
Precision affects both computational accuracy and speed; on mobile, mediump is usually the right balance of performance and quality.
precision mediump float; // global default
uniform highp mat4 mvpMatrix; // matrices need high precision
varying mediump vec2 vUv; // medium precision is enough for UVs
varying lowp vec4 vColor; // low precision is enough for colors
How to implement GPU computing (GPGPU)? What is the role of shaders in it?
Focus: GPGPU applications.
Answer:
GPGPU uses the GPU for general-purpose computation: a fragment shader processes texture-encoded data in parallel, enabling physics simulation, image processing, and similar workloads (see the ping-pong setup after the shader).
// Particle physics update
uniform sampler2D positionTexture;
uniform sampler2D velocityTexture;
uniform float deltaTime;
void main() {
vec4 position = texture2D(positionTexture, vUv);
vec4 velocity = texture2D(velocityTexture, vUv);
// Gravity
velocity.y -= 9.8 * deltaTime;
// Integrate the position
position += velocity * deltaTime;
// Ground collision with energy loss
if(position.y < 0.0) {
position.y = 0.0;
velocity.y *= -0.5;
}
gl_FragColor = position;
}
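The shader reads the previous state and writes the next, so two render targets are swapped each frame ("ping-pong"). A host-side sketch in which createStateTexture is an assumed helper:
// Read from one state texture, render the update into the other, then swap
let readTex = createStateTexture(gl, size); // assumed helper
let writeTex = createStateTexture(gl, size); // assumed helper
const fbo = gl.createFramebuffer();
function stepSimulation() {
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, writeTex, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, readTex); // previous state as input
gl.drawArrays(gl.TRIANGLES, 0, 6); // full-screen quad runs the update shader
[readTex, writeTex] = [writeTex, readTex]; // swap roles for the next frame
}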
How to design reusable shader libraries? What are the best practices for modular shaders?
Focus: shader architecture design.
Answer:
Modular shaders achieve code reuse and flexible composition through a chunk system, macro definitions, and shared function libraries.
// Basic lighting chunk
vec3 calculateLighting(vec3 normal, vec3 lightDir, vec3 viewDir, vec3 albedo, float roughness) {
float diff = max(dot(normal, lightDir), 0.0);
vec3 h = normalize(lightDir + viewDir);
float spec = pow(max(dot(normal, h), 0.0), (1.0 - roughness) * 256.0);
return albedo * diff + vec3(spec);
}
// Main shader composed from chunks (#include here is resolved by a preprocessor
// such as Three.js's ShaderChunk system; it is not standard GLSL)
#include <common>
#include <lighting>
#include <fog>
void main() {
vec3 lighting = calculateLighting(normal, lightDir, viewDir, albedo, roughness);
lighting = applyFog(lighting, fogDensity, fogColor);
gl_FragColor = vec4(lighting, 1.0);
}
Best practices:
Done! All 48 GLSL interview questions now have written answers.
What is Three.js? What role does it play in Web 3D development?
Focus: Three.js fundamentals.
Answer:
Three.js is a JavaScript 3D graphics library built on WebGL that simplifies creating and displaying 3D content in the browser. It wraps the complex WebGL API in a higher-level, friendlier interface for creating 3D scenes, geometry, materials, lights, and animation.
Core roles:
Key features:
Suitable scenarios:
Practical applications:
What are the core components of Three.js? What are the relationships between them?
Focus: understanding the core architecture.
Answer:
Three.js's core architecture follows the classic 3D rendering pipeline and is built around four components: Scene, Camera, Renderer, and Object3D. Together they form the complete rendering flow.
Core components:
1. Scene
const scene = new THREE.Scene();
// The scene is the container for every 3D object
scene.add(mesh); // add a mesh
scene.add(light); // add a light
2. Camera
// Perspective camera
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;
3. Renderer
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
4. Object3D
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({color: 0x00ff00});
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
Component relationships:
Rendering flow:
How to create a basic 3D scene in Three.js?
Focus: basic scene setup.
Answer:
Creating a basic 3D scene follows the standard Three.js flow: scene → camera → renderer → objects → render loop. This is the template underlying every Three.js application.
Basic steps:
1. Initialize the core components
// Create the scene
const scene = new THREE.Scene();
// Create the camera (field of view, aspect ratio, near plane, far plane)
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
// Create the renderer
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
2. Create a 3D object
// Geometry
const geometry = new THREE.BoxGeometry(1, 1, 1);
// Material
const material = new THREE.MeshBasicMaterial({color: 0x00ff00});
// Mesh = geometry + material
const cube = new THREE.Mesh(geometry, material);
// Add it to the scene
scene.add(cube);
3. Position the camera
camera.position.z = 5;
4. Create the render loop
function animate() {
requestAnimationFrame(animate);
// Spin the cube
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
// Render the scene
renderer.render(scene, camera);
}
animate();
Complete example:
// The complete basic-scene code
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({color: 0x00ff00});
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
camera.position.z = 5;
function animate() {
requestAnimationFrame(animate);
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
renderer.render(scene, camera);
}
animate();
Notes:
What are the commonly used geometries in Three.js? How to create and use them?
Focus: applying geometries.
Answer:
Three.js ships a rich set of built-in geometry types, from basic boxes to complex parametric surfaces. A geometry defines an object's shape and vertex data and is the basic building block of a 3D scene.
Common geometry types:
1. Basic geometries
// Box
const boxGeometry = new THREE.BoxGeometry(1, 1, 1); // width, height, depth
// Sphere
const sphereGeometry = new THREE.SphereGeometry(1, 32, 32); // radius, width segments, height segments
// Plane
const planeGeometry = new THREE.PlaneGeometry(1, 1, 32, 32); // width, height, width segments, height segments
// Cylinder
const cylinderGeometry = new THREE.CylinderGeometry(1, 1, 2, 32); // top radius, bottom radius, height, radial segments
2. Advanced geometries
// Torus
const torusGeometry = new THREE.TorusGeometry(1, 0.3, 16, 100); // ring radius, tube radius, radial segments, tubular segments
// Cone
const coneGeometry = new THREE.ConeGeometry(1, 2, 32); // base radius, height, radial segments
// Dodecahedron
const dodecahedronGeometry = new THREE.DodecahedronGeometry(1); // radius
3. Custom geometry
// Build a custom geometry with BufferGeometry
const customGeometry = new THREE.BufferGeometry();
const vertices = new Float32Array([
-1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
1.0, 1.0, 1.0
]);
customGeometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));
Using a geometry:
// Create a mesh object
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial({color: 0x00ff00});
const mesh = new THREE.Mesh(geometry, material);
// Add it to the scene
scene.add(mesh);
// Transforms
mesh.scale.set(2, 1, 1); // scale
mesh.rotation.x = Math.PI / 4; // rotate
mesh.position.set(1, 0, 0); // translate
Performance notes:
Call geometry.dispose() to free GPU memory when a geometry is no longer needed.
Typical applications:
What is Material? What are the commonly used material types in Three.js?
Focus: understanding the material system.
Answer:
A material determines a 3D object's appearance: color, texturing, reflectance, transparency, and so on. Combined with a geometry it forms a mesh, and it is a core part of the Three.js rendering system. A material is effectively a high-level wrapper around a shader program.
Common material types:
1. Basic materials
// MeshBasicMaterial - unlit base material
const basicMaterial = new THREE.MeshBasicMaterial({
color: 0xff0000, // color
wireframe: true, // wireframe mode
transparent: true, // transparency enabled
opacity: 0.5 // opacity
});
// MeshNormalMaterial - visualizes normals, handy for debugging
const normalMaterial = new THREE.MeshNormalMaterial();
2. Lit materials
// MeshLambertMaterial - diffuse-only shading
const lambertMaterial = new THREE.MeshLambertMaterial({
color: 0x00ff00,
emissive: 0x004400 // emissive color
});
// MeshPhongMaterial - specular highlights
const phongMaterial = new THREE.MeshPhongMaterial({
color: 0x0000ff,
specular: 0x111111, // specular color
shininess: 100 // specular sharpness
});
3. Physically based materials
// MeshStandardMaterial - PBR standard material
const standardMaterial = new THREE.MeshStandardMaterial({
color: 0x806060,
metalness: 0.2, // metalness
roughness: 0.8 // roughness
});
// MeshPhysicalMaterial - extended PBR material
const physicalMaterial = new THREE.MeshPhysicalMaterial({
color: 0xffffff,
metalness: 0,
roughness: 0,
clearcoat: 1.0, // clear-coat layer
clearcoatRoughness: 0.1
});
4. Textured materials
// Load a texture
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load('path/to/texture.jpg');
const texturedMaterial = new THREE.MeshStandardMaterial({
map: texture, // diffuse map
normalMap: normalTexture, // normal map
roughnessMap: roughnessTexture // roughness map
});
Material property reference:
const material = new THREE.MeshStandardMaterial({
// Basic properties
color: 0xffffff, // base color
transparent: false, // transparency flag
opacity: 1.0, // opacity
side: THREE.FrontSide, // which faces render (front/back/double)
// Physical properties
metalness: 0.0, // metalness (0-1)
roughness: 1.0, // roughness (0-1)
// Texture maps
map: null, // color map
normalMap: null, // normal map
envMap: null // environment map
});
Performance considerations:
Suitable scenarios:
How to add and control light sources in Three.js?
Focus: lighting basics.
Answer:
Lights are the key to realistic rendering in Three.js. The different light types model real-world illumination and combine with materials to produce rich visual effects. Only objects using lit materials (MeshLambertMaterial, MeshPhongMaterial, and so on) are affected by lights.
Common light types:
1. AmbientLight
// Ambient light - illuminates all objects uniformly
const ambientLight = new THREE.AmbientLight(0x404040, 0.6); // color, intensity
scene.add(ambientLight);
2. DirectionalLight
// Directional light - models sunlight
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(1, 1, 1); // light direction (from position toward target)
directionalLight.target.position.set(0, 0, 0); // target to aim at
scene.add(directionalLight);
scene.add(directionalLight.target);
3. PointLight
// Point light - models a bulb
const pointLight = new THREE.PointLight(0xff6600, 1, 100); // color, intensity, distance
pointLight.position.set(10, 10, 10);
scene.add(pointLight);
// A helper visualizes the light's position
const pointLightHelper = new THREE.PointLightHelper(pointLight, 1);
scene.add(pointLightHelper);
4. 聚光灯(SpotLight)
// 聚光灯 - 模拟手电筒或舞台灯
const spotLight = new THREE.SpotLight(0xffffff, 1, 100, Math.PI / 6, 0.25, 1);
// 颜色, 强度, 距离, 角度, 边缘模糊度, 衰减度
spotLight.position.set(0, 10, 0);
spotLight.target.position.set(0, 0, 0);
scene.add(spotLight);
scene.add(spotLight.target);
光源控制技巧:
1. 动态光源控制
// 动画控制光源
function animateLight() {
const time = Date.now() * 0.001;
pointLight.position.x = Math.cos(time) * 10;
pointLight.position.z = Math.sin(time) * 10;
pointLight.intensity = Math.sin(time * 2) * 0.5 + 0.5;
}
2. 光源阴影设置
// 开启渲染器阴影
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;
// 光源投射阴影
directionalLight.castShadow = true;
directionalLight.shadow.mapSize.width = 2048;
directionalLight.shadow.mapSize.height = 2048;
// 对象接收和投射阴影
mesh.castShadow = true; // 投射阴影
floor.receiveShadow = true; // 接收阴影
3. 光源性能优化
// 限制光源影响范围
pointLight.distance = 50; // 设置衰减距离
pointLight.decay = 2; // 设置衰减率
// 合理设置阴影质量
light.shadow.camera.near = 0.1;
light.shadow.camera.far = 25;
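方向光的阴影相机是正交相机,其视野范围直接影响阴影贴图的利用率,可按场景尺寸收紧(以下数值仅为示意):
// 收紧方向光阴影相机的正交视野,提升阴影精度(数值随场景调整)
directionalLight.shadow.camera.left = -10;
directionalLight.shadow.camera.right = 10;
directionalLight.shadow.camera.top = 10;
directionalLight.shadow.camera.bottom = -10;
directionalLight.shadow.camera.updateProjectionMatrix();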
典型光照设置:
// 三点光照法 - 经典摄影布光
const keyLight = new THREE.DirectionalLight(0xffffff, 1); // 主光
const fillLight = new THREE.DirectionalLight(0x8888ff, 0.3); // 补光
const backLight = new THREE.DirectionalLight(0xff8888, 0.2); // 背光
keyLight.position.set(2, 2, 1);
fillLight.position.set(-1, 1, 1);
backLight.position.set(0, 0, -1);
scene.add(keyLight, fillLight, backLight);
scene.add(new THREE.AmbientLight(0x404040, 0.1)); // 微弱环境光
适用场景:
What types of cameras are there in Three.js? What are their characteristics?
考察点:相机系统理解。
答案:
相机定义了观察3D世界的视角和投影方式,是Three.js渲染管线的关键组件。不同类型的相机适用于不同的应用场景,影响最终的视觉效果和用户体验。
主要相机类型:
1. 透视相机(PerspectiveCamera)
// 透视相机 - 模拟人眼视觉
const camera = new THREE.PerspectiveCamera(
75, // fov: 视野角度(度)
window.innerWidth / window.innerHeight, // aspect: 宽高比
0.1, // near: 近裁剪面
1000 // far: 远裁剪面
);
camera.position.set(0, 0, 5);
透视相机特点:
2. 正交相机(OrthographicCamera)
// 正交相机 - 平行投影
const frustumSize = 10;
const aspect = window.innerWidth / window.innerHeight;
const camera = new THREE.OrthographicCamera(
-frustumSize * aspect / 2, // left
frustumSize * aspect / 2, // right
frustumSize / 2, // top
-frustumSize / 2, // bottom
0.1, // near
1000 // far
);
camera.position.set(0, 0, 5);
正交相机特点:
相机控制技术:
1. 相机移动和旋转
// 基础变换
camera.position.set(x, y, z); // 设置位置
camera.rotation.set(x, y, z); // 设置旋转
camera.lookAt(target); // 看向目标点
// 围绕目标旋转
const target = new THREE.Vector3(0, 0, 0);
camera.position.x = Math.cos(angle) * radius;
camera.position.z = Math.sin(angle) * radius;
camera.lookAt(target);
2. 相机控制器
// OrbitControls - 轨道控制器
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true; // 启用阻尼
controls.dampingFactor = 0.1; // 阻尼系数
controls.enableZoom = true; // 启用缩放
// FlyControls - 飞行控制器
import { FlyControls } from 'three/examples/jsm/controls/FlyControls.js';
const flyControls = new FlyControls(camera, renderer.domElement);
flyControls.movementSpeed = 10;
flyControls.rollSpeed = Math.PI / 24;
3. 相机动画
// 使用Tween.js实现平滑相机动画
import * as TWEEN from '@tweenjs/tween.js';
function animateCamera(targetPosition, targetLookAt) {
new TWEEN.Tween(camera.position)
.to(targetPosition, 1000)
.easing(TWEEN.Easing.Quadratic.Out)
.start();
new TWEEN.Tween(controls.target)
.to(targetLookAt, 1000)
.easing(TWEEN.Easing.Quadratic.Out)
.start();
}
相机参数优化:
1. 视野角度调整
// 广角镜头效果 (FOV > 75)
camera.fov = 90; // 更宽视野,边缘可能有畸变
// 长焦镜头效果 (FOV < 50)
camera.fov = 35; // 较窄视野,更适合特写
camera.updateProjectionMatrix(); // 更新投影矩阵
2. 裁剪面优化
// 合理设置近远裁剪面
camera.near = 0.1; // 过小可能产生Z-fighting
camera.far = 1000; // 过大影响深度精度
camera.updateProjectionMatrix();
响应式相机设置:
// 窗口大小变化时更新相机
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
window.addEventListener('resize', onWindowResize);
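正交相机的响应式处理略有不同:需按新的宽高比重新计算左右边界。以下为沿用上文 frustumSize 的示意:
// 正交相机的resize处理:按宽高比更新视锥边界
function onOrthoResize() {
const aspect = window.innerWidth / window.innerHeight;
camera.left = -frustumSize * aspect / 2;
camera.right = frustumSize * aspect / 2;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}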
应用场景对比:
How to load external 3D models in Three.js?
考察点:模型加载机制。
答案:
Three.js支持多种3D模型格式的加载,通过不同的加载器(Loader)处理各种格式文件。模型加载是构建复杂3D场景的重要技术,涉及异步加载、材质处理、性能优化等多个方面。
常用模型格式和加载器:
1. GLTF/GLB格式(推荐)
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
const loader = new GLTFLoader();
// 基础加载
loader.load(
'path/to/model.gltf',
function (gltf) {
// 成功回调
scene.add(gltf.scene);
// 访问模型组件
const model = gltf.scene;
const animations = gltf.animations;
const cameras = gltf.cameras;
},
function (progress) {
// 进度回调
console.log('Loading progress:', progress.loaded / progress.total * 100 + '%');
},
function (error) {
// 错误回调
console.error('Loading error:', error);
}
);
2. OBJ格式
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader.js';
// 先加载材质文件(可选)
const mtlLoader = new MTLLoader();
mtlLoader.load('path/to/model.mtl', function(materials) {
materials.preload();
const objLoader = new OBJLoader();
objLoader.setMaterials(materials);
objLoader.load('path/to/model.obj', function(object) {
scene.add(object);
});
});
3. FBX格式
import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader.js';
const fbxLoader = new FBXLoader();
fbxLoader.load('path/to/model.fbx', function(object) {
// FBX模型通常需要调整缩放
object.scale.setScalar(0.01);
scene.add(object);
});
加载器配置和优化:
1. 添加加载管理器
import { LoadingManager } from 'three';
const manager = new LoadingManager();
// 监听加载事件
manager.onStart = function(url, itemsLoaded, itemsTotal) {
console.log('Started loading:', url);
};
manager.onLoad = function() {
console.log('All assets loaded');
// 隐藏加载界面,开始渲染
};
manager.onProgress = function(url, itemsLoaded, itemsTotal) {
console.log('Loading progress:', itemsLoaded / itemsTotal * 100 + '%');
};
manager.onError = function(url) {
console.error('Loading error:', url);
};
// 使用管理器
const loader = new GLTFLoader(manager);
2. 纹理路径配置
// 设置纹理路径
loader.setPath('models/');
loader.load('mymodel.gltf', function(gltf) {
scene.add(gltf.scene);
});
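加载器还提供基于Promise的 loadAsync,便于用 async/await 组织加载流程(示意):
// 以async/await方式加载,错误统一用try/catch捕获
async function loadModel(url) {
try {
const gltf = await loader.loadAsync(url);
scene.add(gltf.scene);
return gltf;
} catch (error) {
console.error('Model loading failed:', error);
}
}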
模型后处理:
1. 模型变换
loader.load('model.gltf', function(gltf) {
const model = gltf.scene;
// 缩放调整
model.scale.setScalar(2);
// 位置调整
model.position.set(0, -1, 0);
// 旋转调整
model.rotation.y = Math.PI;
// 遍历模型子对象
model.traverse(function(child) {
if (child.isMesh) {
child.castShadow = true;
child.receiveShadow = true;
}
});
scene.add(model);
});
2. 动画处理
let mixer;
loader.load('animated-model.gltf', function(gltf) {
scene.add(gltf.scene);
// 创建动画混合器
if (gltf.animations.length > 0) {
mixer = new THREE.AnimationMixer(gltf.scene);
// 播放第一个动画
const action = mixer.clipAction(gltf.animations[0]);
action.play();
}
});
// 在渲染循环中更新动画
const clock = new THREE.Clock();
function animate() {
requestAnimationFrame(animate);
if (mixer) {
mixer.update(clock.getDelta());
}
renderer.render(scene, camera);
}
性能优化策略:
1. 模型压缩
// 使用Draco压缩
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('js/libs/draco/');
loader.setDRACOLoader(dracoLoader);
2. 渐进式加载
// 先加载低精度模型,再替换高精度模型
loader.load('low-res-model.gltf', function(gltf) {
scene.add(gltf.scene);
// 后台加载高精度模型
loader.load('high-res-model.gltf', function(highResGltf) {
scene.remove(gltf.scene);
scene.add(highResGltf.scene);
});
});
错误处理和调试:
loader.load(
'model.gltf',
function(gltf) {
// 检查模型完整性
if (!gltf.scene) {
console.error('Model scene is empty');
return;
}
// 输出模型信息用于调试
console.log('Model loaded:', {
scene: gltf.scene,
animations: gltf.animations.length,
materials: gltf.materials?.length || 0
});
scene.add(gltf.scene);
},
undefined,
function(error) {
console.error('Model loading failed:', error);
// 加载备用模型或显示错误提示
}
);
实际应用:
How to implement basic animation effects in Three.js?
考察点:动画基础。
答案:
Three.js动画系统基于时间循环和属性插值,通过持续更新对象属性创建流畅的动画效果。动画是3D交互体验的核心要素,从简单的旋转到复杂的骨骼动画都有对应的实现方法。
基本动画实现方式:
1. 手动动画(直接属性修改)
let cube;
const clock = new THREE.Clock();
function animate() {
requestAnimationFrame(animate);
const elapsedTime = clock.getElapsedTime();
// 旋转动画
cube.rotation.x = elapsedTime * 0.5;
cube.rotation.y = elapsedTime * 0.3;
// 位置动画(正弦波)
cube.position.y = Math.sin(elapsedTime * 2) * 2;
// 缩放动画
const scale = 1 + Math.sin(elapsedTime * 3) * 0.3;
cube.scale.setScalar(scale);
renderer.render(scene, camera);
}
animate();
2. Tween.js补间动画
import * as TWEEN from '@tweenjs/tween.js';
// 位置补间动画
function animatePosition(object, targetPosition, duration = 1000) {
new TWEEN.Tween(object.position)
.to(targetPosition, duration)
.easing(TWEEN.Easing.Quadratic.Out)
.onUpdate(() => {
// 动画更新回调
})
.onComplete(() => {
console.log('Animation completed');
})
.start();
}
// 旋转补间动画
function animateRotation(object, targetRotation, duration = 1000) {
new TWEEN.Tween(object.rotation)
.to(targetRotation, duration)
.easing(TWEEN.Easing.Elastic.Out)
.start();
}
// 在渲染循环中更新Tween
function animate() {
requestAnimationFrame(animate);
TWEEN.update(); // 更新所有补间动画
renderer.render(scene, camera);
}
// 使用示例
animatePosition(cube, {x: 5, y: 2, z: -3}, 2000);
animateRotation(cube, {x: 0, y: Math.PI, z: 0}, 1500);
3. Three.js内置动画系统
// 创建关键帧轨道
const positionKF = new THREE.VectorKeyframeTrack(
'.position',
[0, 1, 2],
[0, 0, 0, 5, 0, 0, 0, 0, 0]
);
const scaleKF = new THREE.VectorKeyframeTrack(
'.scale',
[0, 1, 2],
[1, 1, 1, 2, 2, 2, 1, 1, 1]
);
// 创建动画剪辑
const clip = new THREE.AnimationClip('Action', 2, [positionKF, scaleKF]);
// 创建动画混合器
const mixer = new THREE.AnimationMixer(cube);
const action = mixer.clipAction(clip);
action.play();
// 更新动画
function animate() {
requestAnimationFrame(animate);
const delta = clock.getDelta();
mixer.update(delta);
renderer.render(scene, camera);
}
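AnimationAction 还提供循环模式与淡入淡出接口,常用于动作间的平滑切换。以下为示意,其中 walkAction、runAction 为假设已通过 clipAction 创建的动作:
action.setLoop(THREE.LoopPingPong, Infinity); // 往返循环播放
walkAction.fadeOut(0.3); // 0.3秒内淡出当前动作(walkAction为假设对象)
runAction.reset().fadeIn(0.3).play(); // 淡入并播放新动作(runAction为假设对象)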
动画控制技术:
1. 动画链和序列
// 使用Tween.js创建动画序列
function createAnimationSequence(object) {
// 第一个动画:移动
const tween1 = new TWEEN.Tween(object.position)
.to({x: 5, y: 0, z: 0}, 1000)
.easing(TWEEN.Easing.Quadratic.Out);
// 第二个动画:旋转
const tween2 = new TWEEN.Tween(object.rotation)
.to({x: 0, y: Math.PI, z: 0}, 800)
.easing(TWEEN.Easing.Bounce.Out);
// 第三个动画:缩放
const tween3 = new TWEEN.Tween(object.scale)
.to({x: 2, y: 2, z: 2}, 600)
.easing(TWEEN.Easing.Elastic.InOut);
// 链接动画
tween1.chain(tween2);
tween2.chain(tween3);
// 启动序列
tween1.start();
}
2. 相机动画
function animateCamera(targetPosition, targetLookAt) {
// 相机位置动画
new TWEEN.Tween(camera.position)
.to(targetPosition, 2000)
.easing(TWEEN.Easing.Cubic.InOut)
.start();
// 相机朝向动画(通过控制器target)
if (controls) {
new TWEEN.Tween(controls.target)
.to(targetLookAt, 2000)
.easing(TWEEN.Easing.Cubic.InOut)
.start();
}
}
3. 材质动画
function animateMaterial(material) {
// 颜色动画
const startColor = {r: 1, g: 0, b: 0};
const endColor = {r: 0, g: 1, b: 1};
new TWEEN.Tween(startColor)
.to(endColor, 2000)
.onUpdate(() => {
material.color.setRGB(startColor.r, startColor.g, startColor.b);
})
.start();
// 透明度动画
new TWEEN.Tween(material)
.to({opacity: 0.3}, 1000)
.easing(TWEEN.Easing.Quadratic.InOut)
.yoyo(true)
.repeat(Infinity)
.start();
}
性能优化:
1. 动画性能监控
const stats = new Stats();
document.body.appendChild(stats.dom);
function animate() {
stats.begin();
// 渲染代码
TWEEN.update();
renderer.render(scene, camera);
stats.end();
requestAnimationFrame(animate);
}
2. 动画复用和池化
// 动画对象池
class AnimationPool {
constructor() {
this.pool = [];
this.activeAnimations = [];
}
getTween() {
return this.pool.pop() || new TWEEN.Tween();
}
returnTween(tween) {
tween.stop();
this.pool.push(tween);
}
}
交互式动画:
// 鼠标悬停动画
function setupHoverAnimation(object) {
const originalScale = object.scale.clone();
object.userData.onMouseEnter = () => {
new TWEEN.Tween(object.scale)
.to({x: 1.2, y: 1.2, z: 1.2}, 200)
.easing(TWEEN.Easing.Back.Out)
.start();
};
object.userData.onMouseLeave = () => {
new TWEEN.Tween(object.scale)
.to(originalScale, 200)
.easing(TWEEN.Easing.Back.Out)
.start();
};
}
实际应用:
How to handle mouse interaction events in Three.js?
考察点:交互事件处理。
答案:
Three.js的交互事件处理基于射线投射(Raycasting)技术,将2D屏幕坐标转换为3D空间射线,检测与3D对象的交互。交互事件是实现沉浸式3D体验的关键技术。
射线投射基础:
1. 基本射线投射设置
// 创建射线投射器和鼠标坐标
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
// 更新鼠标坐标(标准化到 -1 到 1)
function updateMousePosition(event) {
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}
// 射线检测
function raycast() {
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(scene.children, true);
if (intersects.length > 0) {
const object = intersects[0].object;
console.log('Hit object:', object);
return intersects[0];
}
return null;
}
2. 鼠标事件处理
// 鼠标移动事件
renderer.domElement.addEventListener('mousemove', (event) => {
updateMousePosition(event);
const intersect = raycast();
if (intersect) {
// 悬停效果
document.body.style.cursor = 'pointer';
// 触发对象的悬停事件
if (intersect.object.userData.onMouseEnter && !intersect.object.userData.isHovered) {
intersect.object.userData.onMouseEnter();
intersect.object.userData.isHovered = true;
}
} else {
// 取消悬停效果
document.body.style.cursor = 'default';
// 清除所有对象的悬停状态
scene.traverse((child) => {
if (child.userData.isHovered) {
if (child.userData.onMouseLeave) {
child.userData.onMouseLeave();
}
child.userData.isHovered = false;
}
});
}
});
// 鼠标点击事件
renderer.domElement.addEventListener('click', (event) => {
updateMousePosition(event);
const intersect = raycast();
if (intersect) {
// 触发点击事件
if (intersect.object.userData.onClick) {
intersect.object.userData.onClick(intersect);
}
}
});
交互事件类型:
1. 对象选择和高亮
let selectedObject = null;
const originalMaterials = new Map();
function setupSelection() {
renderer.domElement.addEventListener('click', (event) => {
updateMousePosition(event);
const intersect = raycast();
if (intersect) {
// 取消之前选择
if (selectedObject) {
selectedObject.material = originalMaterials.get(selectedObject);
}
// 选择新对象
selectedObject = intersect.object;
if (!originalMaterials.has(selectedObject)) {
originalMaterials.set(selectedObject, selectedObject.material);
}
// 应用高亮材质
selectedObject.material = new THREE.MeshBasicMaterial({
color: 0xff6600,
wireframe: true
});
}
});
}
2. 拖拽功能
class DragControls {
constructor(objects, camera, domElement) {
this.objects = objects;
this.camera = camera;
this.domElement = domElement;
this.raycaster = new THREE.Raycaster();
this.mouse = new THREE.Vector2();
this.isDragging = false;
this.dragObject = null;
this.dragPlane = new THREE.Plane();
this.intersection = new THREE.Vector3();
this.setupEvents();
}
setupEvents() {
this.domElement.addEventListener('mousedown', this.onMouseDown.bind(this));
this.domElement.addEventListener('mousemove', this.onMouseMove.bind(this));
this.domElement.addEventListener('mouseup', this.onMouseUp.bind(this));
}
onMouseDown(event) {
this.updateMouse(event);
this.raycaster.setFromCamera(this.mouse, this.camera);
const intersects = this.raycaster.intersectObjects(this.objects);
if (intersects.length > 0) {
this.isDragging = true;
this.dragObject = intersects[0].object;
// 设置拖拽平面
this.dragPlane.setFromNormalAndCoplanarPoint(
this.camera.getWorldDirection(new THREE.Vector3()),
intersects[0].point
);
}
}
onMouseMove(event) {
this.updateMouse(event);
if (this.isDragging && this.dragObject) {
this.raycaster.setFromCamera(this.mouse, this.camera);
if (this.raycaster.ray.intersectPlane(this.dragPlane, this.intersection)) {
this.dragObject.position.copy(this.intersection);
}
}
}
onMouseUp() {
this.isDragging = false;
this.dragObject = null;
}
updateMouse(event) {
this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}
}
// 使用拖拽控制器
const dragControls = new DragControls([cube, sphere], camera, renderer.domElement);
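Three.js 的 examples 中也自带 DragControls,与上面的自定义实现思路一致,简单场景可直接使用:
// 使用内置DragControls(与OrbitControls并用时,拖拽期间需禁用轨道控制)
import { DragControls as ThreeDragControls } from 'three/examples/jsm/controls/DragControls.js';
const builtinDrag = new ThreeDragControls([cube, sphere], camera, renderer.domElement);
builtinDrag.addEventListener('dragstart', () => { controls.enabled = false; });
builtinDrag.addEventListener('dragend', () => { controls.enabled = true; });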
3. 对象信息显示
// 创建信息提示框
function createTooltip() {
const tooltip = document.createElement('div');
tooltip.style.position = 'absolute';
tooltip.style.background = 'rgba(0,0,0,0.8)';
tooltip.style.color = 'white';
tooltip.style.padding = '5px 10px';
tooltip.style.borderRadius = '3px';
tooltip.style.pointerEvents = 'none';
tooltip.style.display = 'none';
document.body.appendChild(tooltip);
return tooltip;
}
const tooltip = createTooltip();
renderer.domElement.addEventListener('mousemove', (event) => {
updateMousePosition(event);
const intersect = raycast();
if (intersect && intersect.object.userData.info) {
// 显示提示信息
tooltip.style.display = 'block';
tooltip.style.left = event.clientX + 10 + 'px';
tooltip.style.top = event.clientY - 30 + 'px';
tooltip.textContent = intersect.object.userData.info;
} else {
tooltip.style.display = 'none';
}
});
高级交互技术:
1. 多点触控支持
// 触摸事件处理
renderer.domElement.addEventListener('touchstart', handleTouchStart);
renderer.domElement.addEventListener('touchmove', handleTouchMove);
renderer.domElement.addEventListener('touchend', handleTouchEnd);
function handleTouchStart(event) {
if (event.touches.length === 1) {
// 单点触摸,类似鼠标点击
const touch = event.touches[0];
updateMousePosition(touch);
raycast();
}
}
2. 性能优化
// 事件节流
function throttle(func, delay) {
let timeoutId;
let lastExecTime = 0;
return function(...args) {
const currentTime = Date.now();
if (currentTime - lastExecTime > delay) {
func.apply(this, args);
lastExecTime = currentTime;
} else {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => {
func.apply(this, args);
lastExecTime = Date.now();
}, delay - (currentTime - lastExecTime));
}
};
}
// 使用节流的鼠标移动事件
const throttledMouseMove = throttle((event) => {
updateMousePosition(event);
raycast();
}, 16); // 60 FPS
renderer.domElement.addEventListener('mousemove', throttledMouseMove);
对象交互状态管理:
// 为对象添加交互行为
function makeInteractive(object, options = {}) {
object.userData.interactive = true;
object.userData.onClick = options.onClick || (() => {});
object.userData.onMouseEnter = options.onMouseEnter || (() => {});
object.userData.onMouseLeave = options.onMouseLeave || (() => {});
object.userData.info = options.info || '';
}
// 使用示例
makeInteractive(cube, {
onClick: () => console.log('Cube clicked!'),
onMouseEnter: () => cube.material.color.setHex(0xff0000),
onMouseLeave: () => cube.material.color.setHex(0x00ff00),
info: 'This is a cube'
});
实际应用:
What is the rendering principle of Three.js? How does the rendering loop work?
考察点:渲染机制理解。
答案:
Three.js渲染原理基于现代图形渲染管线,通过WebGL与GPU通信,将3D场景转换为2D屏幕像素。渲染循环是实时3D应用的核心,负责连续更新和绘制场景内容。
渲染管线流程:
1. 几何处理阶段
// 顶点着色器处理
// 1. 模型变换(Model Transform)
object.matrixWorld.multiplyMatrices(parent.matrixWorld, object.matrix);
// 2. 视图变换(View Transform)
camera.matrixWorldInverse.copy(camera.matrixWorld).invert(); // getInverse已废弃,改用copy().invert()
// 3. 投影变换(Projection Transform)
camera.updateProjectionMatrix(); // 根据fov/aspect/near/far重建投影矩阵
// 4. 视口变换(Viewport Transform)
renderer.setViewport(x, y, width, height);
2. 光栅化和片元处理
// 片元着色器处理
// 1. 纹理采样
const texture = textureLoader.load('image.jpg');
material.map = texture;
// 2. 光照计算
const light = new THREE.DirectionalLight(0xffffff, 1);
material.needsUpdate = true; // 触发着色器重编译
// 3. 最终颜色输出
gl_FragColor = vec4(finalColor, opacity);
渲染循环架构:
1. 基础渲染循环
function renderLoop() {
// 1. 请求下一帧
requestAnimationFrame(renderLoop);
// 2. 更新时间
const deltaTime = clock.getDelta();
const elapsedTime = clock.getElapsedTime();
// 3. 更新场景对象
updateScene(deltaTime);
// 4. 更新动画
if (mixer) {
mixer.update(deltaTime);
}
// 5. 更新控制器
if (controls) {
controls.update();
}
// 6. 执行渲染
renderer.render(scene, camera);
// 7. 性能监控
stats.update();
}
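除手动调用 requestAnimationFrame 外,也可以使用渲染器内置的 setAnimationLoop;在 WebXR 应用中必须采用这种方式。以下示意沿用上面的 clock 与 updateScene:
// 用renderer.setAnimationLoop驱动渲染循环(WebXR下必须如此)
renderer.setAnimationLoop(() => {
const deltaTime = clock.getDelta();
updateScene(deltaTime);
renderer.render(scene, camera);
});
// 传入null可停止循环:renderer.setAnimationLoop(null)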
2. 多阶段渲染循环
class RenderManager {
constructor(renderer, scene, camera) {
this.renderer = renderer;
this.scene = scene;
this.camera = camera;
this.renderTargets = [];
this.postProcessing = [];
}
render() {
// 阴影贴图渲染
this.renderShadowMaps();
// 主场景渲染
this.renderMainScene();
// 后期处理
this.applyPostProcessing();
// UI渲染
this.renderUI();
}
renderShadowMaps() {
// 渲染深度贴图用于阴影(概念示意;实际Three.js在render()内部自动完成阴影贴图渲染)
this.scene.traverse((child) => {
if (child.isLight && child.castShadow) {
this.renderer.setRenderTarget(child.shadow.map);
this.renderer.render(this.scene, child.shadow.camera);
}
});
this.renderer.setRenderTarget(null);
}
renderMainScene() {
this.renderer.render(this.scene, this.camera);
}
}
渲染优化技术:
1. 视锥剔除(Frustum Culling)
// Three.js自动进行视锥剔除
const frustum = new THREE.Frustum();
const cameraMatrix = new THREE.Matrix4();
function updateFrustum() {
cameraMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(cameraMatrix);
}
function isObjectVisible(object) {
// 包围盒定义在局部空间,检测前需变换到世界空间
if (!object.geometry.boundingBox) object.geometry.computeBoundingBox();
const worldBox = object.geometry.boundingBox.clone().applyMatrix4(object.matrixWorld);
return frustum.intersectsBox(worldBox);
}
2. 批量渲染
// 实例化渲染优化:InstancedMesh自动管理实例矩阵
// (顶点属性itemSize上限为4,mat4需拆成4个vec4,InstancedMesh已在内部处理)
const instanceCount = 1000;
const instancedMesh = new THREE.InstancedMesh(baseGeometry, material, instanceCount);
const matrix = new THREE.Matrix4();
// 批量更新实例矩阵
for (let i = 0; i < instanceCount; i++) {
matrix.setPosition(x, y, z); // x/y/z为各实例的位置
instancedMesh.setMatrixAt(i, matrix);
}
instancedMesh.instanceMatrix.needsUpdate = true;
scene.add(instancedMesh);
3. LOD(Level of Detail)系统
// 距离基础LOD
const lod = new THREE.LOD();
lod.addLevel(highDetailMesh, 0); // 0-50 单位距离
lod.addLevel(mediumDetailMesh, 50); // 50-100 单位距离
lod.addLevel(lowDetailMesh, 100); // 100+ 单位距离
scene.add(lod);
// 在渲染循环中更新LOD
function updateLOD() {
scene.traverse((child) => {
if (child.isLOD) {
child.update(camera);
}
});
}
渲染状态管理:
1. 渲染器状态
class RenderState {
constructor() {
this.currentProgram = null;
this.currentGeometry = null;
this.currentMaterial = null;
this.drawCalls = 0;
this.triangles = 0;
}
reset() {
this.drawCalls = 0;
this.triangles = 0;
}
update(info) {
this.drawCalls = info.render.calls;
this.triangles = info.render.triangles;
}
}
const renderState = new RenderState();
// 渲染信息监控
function logRenderInfo() {
const info = renderer.info;
console.log({
drawCalls: info.render.calls,
triangles: info.render.triangles,
geometries: info.memory.geometries,
textures: info.memory.textures
});
}
2. 渲染队列管理
// 自定义渲染顺序
scene.traverse((child) => {
if (child.isMesh) {
// 透明物体后渲染
if (child.material.transparent) {
child.renderOrder = 1000;
}
// 不透明物体先渲染
else {
child.renderOrder = 0;
}
}
});
多相机渲染:
function renderMultiCamera() {
// 开启裁剪测试,否则第二次render会清掉整个画布
renderer.setScissorTest(true);
// 主视角渲染
renderer.setViewport(0, 0, window.innerWidth * 0.7, window.innerHeight);
renderer.setScissor(0, 0, window.innerWidth * 0.7, window.innerHeight);
renderer.render(scene, mainCamera);
// 小地图渲染
renderer.setViewport(
window.innerWidth * 0.7,
window.innerHeight * 0.7,
window.innerWidth * 0.3,
window.innerHeight * 0.3
);
renderer.setScissor(
window.innerWidth * 0.7,
window.innerHeight * 0.7,
window.innerWidth * 0.3,
window.innerHeight * 0.3
);
renderer.render(scene, miniMapCamera);
// 恢复默认视口状态
renderer.setScissorTest(false);
renderer.setViewport(0, 0, window.innerWidth, window.innerHeight);
}
性能监控和调试:
// 渲染性能分析
class PerformanceMonitor {
constructor() {
this.frameTime = 0;
this.fps = 0;
this.lastTime = performance.now();
}
update() {
const now = performance.now();
this.frameTime = now - this.lastTime;
this.fps = 1000 / this.frameTime;
this.lastTime = now;
// 性能警告
if (this.fps < 30) {
console.warn('Low FPS detected:', this.fps.toFixed(1));
}
}
}
实际应用:
How to implement complex material effects in Three.js? What is the role of shaders?
考察点:着色器应用。
答案:
着色器(Shader)是运行在GPU上的程序,负责控制3D对象的渲染效果。Three.js的内置材质实际上是预写好的着色器程序,而自定义着色器则可以实现各种复杂的视觉效果,是高级3D图形编程的核心技术。
着色器基础概念:
1. 着色器类型
顶点着色器(Vertex Shader):逐顶点执行,负责顶点坐标变换,并向片元阶段传递插值数据
片元着色器(Fragment Shader):逐片元执行,负责计算每个像素的最终颜色
2. GLSL语法基础
// 顶点着色器示例
attribute vec3 position;
attribute vec2 uv;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// 片元着色器示例
uniform float time;
uniform vec3 color;
varying vec2 vUv;
void main() {
float wave = sin(vUv.x * 10.0 + time) * 0.1;
vec3 finalColor = color + vec3(wave);
gl_FragColor = vec4(finalColor, 1.0);
}
Three.js中的自定义着色器:
1. ShaderMaterial基础用法
const shaderMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0.0 },
color: { value: new THREE.Color(0xff6600) },
texture1: { value: textureLoader.load('texture.jpg') }
},
vertexShader: `
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
uniform float time;
uniform vec3 color;
uniform sampler2D texture1;
varying vec2 vUv;
void main() {
vec4 texColor = texture2D(texture1, vUv);
float wave = sin(vUv.y * 20.0 + time * 5.0) * 0.1 + 0.9;
gl_FragColor = texColor * vec4(color * wave, 1.0);
}
`,
transparent: true
});
// 在渲染循环中更新uniform
const clock = new THREE.Clock();
function animate() {
requestAnimationFrame(animate);
shaderMaterial.uniforms.time.value = clock.getElapsedTime();
renderer.render(scene, camera);
}
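若只想在内置材质的基础上做局部修改(保留光照、阴影等既有功能),可以使用 onBeforeCompile 注入代码片段。以下为示意,begin_vertex 与 transformed 是Three.js着色器模板中的既有片段和变量:
// 通过onBeforeCompile向内置材质注入顶点动画
const patchedMaterial = new THREE.MeshStandardMaterial({ color: 0x3366ff });
patchedMaterial.onBeforeCompile = (shader) => {
shader.uniforms.time = { value: 0 };
shader.vertexShader = 'uniform float time;\n' + shader.vertexShader;
shader.vertexShader = shader.vertexShader.replace(
'#include <begin_vertex>',
'#include <begin_vertex>\ntransformed.y += sin(position.x * 2.0 + time) * 0.1;'
);
patchedMaterial.userData.shader = shader; // 保存引用,便于每帧更新time
};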
复杂材质效果实现:
1. 水面效果着色器
const waterMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 },
waterColor: { value: new THREE.Color(0x0077be) },
foamColor: { value: new THREE.Color(0xffffff) },
normalMap: { value: normalTexture },
envMap: { value: cubeTexture }
},
vertexShader: `
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
uniform float time;
void main() {
vUv = uv;
vNormal = normalize(normalMatrix * normal);
// 顶点波浪动画
vec3 pos = position;
pos.z += sin(pos.x * 0.5 + time) * 0.1;
pos.z += cos(pos.y * 0.3 + time * 0.7) * 0.05;
vPosition = (modelViewMatrix * vec4(pos, 1.0)).xyz;
gl_Position = projectionMatrix * vec4(vPosition, 1.0);
}
`,
fragmentShader: `
uniform float time;
uniform vec3 waterColor;
uniform vec3 foamColor;
uniform sampler2D normalMap;
uniform samplerCube envMap;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
// 动态法向贴图
vec2 animUv = vUv + vec2(time * 0.01, time * 0.02);
vec3 normalColor = texture2D(normalMap, animUv).rgb;
vec3 normal = normalize(vNormal + (normalColor - 0.5) * 0.3);
// 环境反射
vec3 viewDir = normalize(-vPosition);
vec3 reflectDir = reflect(-viewDir, normal);
vec3 envColor = textureCube(envMap, reflectDir).rgb;
// 菲涅尔效应
float fresnel = pow(1.0 - dot(viewDir, normal), 3.0);
// 泡沫效果
float foam = sin(vUv.x * 50.0 + time * 3.0) * sin(vUv.y * 30.0 + time * 2.0);
foam = smoothstep(0.7, 1.0, foam);
vec3 finalColor = mix(waterColor, envColor, fresnel * 0.7);
finalColor = mix(finalColor, foamColor, foam * 0.3);
gl_FragColor = vec4(finalColor, 0.9);
}
`,
transparent: true
});
2. 全息效果材质
const hologramMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 },
glitchIntensity: { value: 0.1 },
scanlineSpeed: { value: 2.0 },
hologramColor: { value: new THREE.Color(0x00ffff) }
},
vertexShader: `
varying vec2 vUv;
varying vec3 vPosition;
uniform float time;
uniform float glitchIntensity;
void main() {
vUv = uv;
// 顶点扰动产生全息不稳定效果
vec3 pos = position;
float glitch = sin(pos.y * 50.0 + time * 10.0) * glitchIntensity;
pos.x += glitch * (sin(time * 7.0) * 0.5 + 0.5);
vPosition = pos;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
`,
fragmentShader: `
uniform float time;
uniform float scanlineSpeed;
uniform vec3 hologramColor;
varying vec2 vUv;
varying vec3 vPosition;
void main() {
// 扫描线效果
float scanline = sin(vUv.y * 100.0 + time * scanlineSpeed);
scanline = smoothstep(0.0, 1.0, scanline);
// 边缘光效
float edge = 1.0 - smoothstep(0.0, 0.1,
min(min(vUv.x, 1.0 - vUv.x), min(vUv.y, 1.0 - vUv.y)));
// 噪点效果
float noise = fract(sin(dot(vUv + time, vec2(12.9898, 78.233))) * 43758.5453);
// 组合效果
float intensity = scanline * 0.5 + edge * 2.0 + noise * 0.1;
vec3 finalColor = hologramColor * intensity;
// 透明度基于强度
float alpha = intensity * 0.8;
gl_FragColor = vec4(finalColor, alpha);
}
`,
transparent: true,
side: THREE.DoubleSide,
blending: THREE.AdditiveBlending
});
着色器优化技术:
1. Uniform缓存和批处理
class ShaderManager {
constructor() {
this.uniformCache = new Map();
this.shaderPrograms = new Map();
}
updateUniforms(material, uniforms) {
// 只更新变化的uniform
Object.keys(uniforms).forEach(key => {
const newValue = uniforms[key];
const cachedValue = this.uniformCache.get(material.uuid + key);
if (!this.isEqual(cachedValue, newValue)) {
material.uniforms[key].value = newValue;
this.uniformCache.set(material.uuid + key, newValue);
}
});
}
isEqual(a, b) {
if (a instanceof THREE.Color && b instanceof THREE.Color) {
return a.equals(b);
}
return a === b;
}
}
2. 条件编译优化
function createShaderMaterial(options = {}) {
let defines = '';
if (options.useNormalMap) defines += '#define USE_NORMALMAP\n';
if (options.useSpecularMap) defines += '#define USE_SPECULARMAP\n';
if (options.useEnvironmentMap) defines += '#define USE_ENVMAP\n';
const fragmentShader = `
${defines}
uniform vec3 diffuse;
#ifdef USE_NORMALMAP
uniform sampler2D normalMap;
#endif
#ifdef USE_SPECULARMAP
uniform sampler2D specularMap;
#endif
void main() {
vec3 color = diffuse;
#ifdef USE_NORMALMAP
// 法向贴图处理
#endif
gl_FragColor = vec4(color, 1.0);
}
`;
return new THREE.ShaderMaterial({
fragmentShader,
// ... 其他配置
});
}
后期处理着色器:
// 自定义后期处理Pass(Pass与FullScreenQuad来自examples,并不在THREE命名空间下)
import { Pass, FullScreenQuad } from 'three/examples/jsm/postprocessing/Pass.js';
class CustomPass extends Pass {
constructor(uniforms = {}) {
super();
this.material = new THREE.ShaderMaterial({
uniforms: {
tDiffuse: { value: null },
time: { value: 0 },
...uniforms
},
vertexShader: `
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
uniform sampler2D tDiffuse;
uniform float time;
varying vec2 vUv;
void main() {
vec4 texel = texture2D(tDiffuse, vUv);
// 色彩偏移效果
float offset = sin(time) * 0.01;
vec4 cr = texture2D(tDiffuse, vUv + vec2(offset, 0));
vec4 cga = texture2D(tDiffuse, vUv);
vec4 cb = texture2D(tDiffuse, vUv - vec2(offset, 0));
gl_FragColor = vec4(cr.r, cga.g, cb.b, cga.a);
}
`
});
this.fsQuad = new FullScreenQuad(this.material);
}
render(renderer, writeBuffer, readBuffer) {
this.material.uniforms.tDiffuse.value = readBuffer.texture;
this.material.uniforms.time.value = performance.now() * 0.001;
renderer.setRenderTarget(this.renderToScreen ? null : writeBuffer);
this.fsQuad.render(renderer); // 用全屏四边形绘制,而不是重新渲染场景
}
}
实际应用:
How does the Scene Graph work in Three.js?
考察点:场景图机制。
答案:
场景图是一个树形的层次结构,用于组织和管理3D场景中的所有对象。每个节点代表一个3D对象,节点间的父子关系决定了变换的继承和对象的渲染顺序。场景图是现代3D引擎的核心架构模式。
场景图结构:
1. 基础层次结构
// 场景图示例
scene (Scene)
├── car (Group)
│ ├── body (Mesh)
│ ├── wheels (Group)
│ │ ├── frontLeft (Mesh)
│ │ ├── frontRight (Mesh)
│ │ ├── rearLeft (Mesh)
│ │ └── rearRight (Mesh)
│ └── lights (Group)
│ ├── headlight1 (PointLight)
│ └── headlight2 (PointLight)
└── environment (Group)
├── ground (Mesh)
└── skybox (Mesh)
2. 创建层次结构
// 创建汽车场景图
const car = new THREE.Group();
car.name = 'car';
// 车身
const bodyGeometry = new THREE.BoxGeometry(2, 0.5, 4);
const bodyMaterial = new THREE.MeshStandardMaterial({color: 0xff0000});
const body = new THREE.Mesh(bodyGeometry, bodyMaterial);
body.name = 'body';
// 车轮组
const wheels = new THREE.Group();
wheels.name = 'wheels';
// 创建单个车轮
function createWheel() {
const wheelGeometry = new THREE.CylinderGeometry(0.3, 0.3, 0.2, 12);
const wheelMaterial = new THREE.MeshStandardMaterial({color: 0x333333});
return new THREE.Mesh(wheelGeometry, wheelMaterial);
}
const frontLeft = createWheel();
frontLeft.position.set(-0.8, -0.5, 1.2);
frontLeft.rotation.z = Math.PI / 2;
const frontRight = createWheel();
frontRight.position.set(0.8, -0.5, 1.2);
frontRight.rotation.z = Math.PI / 2;
// 构建层次关系
wheels.add(frontLeft, frontRight);
car.add(body, wheels);
scene.add(car);
变换继承机制:
1. 本地变换和世界变换
// 本地变换(相对于父对象)
car.position.set(5, 0, 0); // 汽车位置
car.rotation.y = Math.PI / 4; // 汽车旋转
wheels.rotation.x = 0.1; // 车轮组旋转(相对于汽车)
frontLeft.rotation.y += 0.05; // 前左轮旋转(相对于车轮组)
// 获取世界变换
car.updateMatrixWorld();
const worldPosition = new THREE.Vector3();
const worldRotation = new THREE.Quaternion();
const worldScale = new THREE.Vector3();
frontLeft.matrixWorld.decompose(worldPosition, worldRotation, worldScale);
console.log('前左轮世界坐标:', worldPosition);
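Object3D 也提供了更直接的辅助方法获取世界空间信息,内部同样基于 matrixWorld:
// 等价的便捷写法
const worldPos = frontLeft.getWorldPosition(new THREE.Vector3());
const worldQuat = frontLeft.getWorldQuaternion(new THREE.Quaternion());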
2. 矩阵链式变换
// Three.js内部变换计算
function updateMatrixWorld(parent) {
// 更新本地矩阵
this.matrix.compose(this.position, this.quaternion, this.scale);
// 计算世界矩阵
if (parent) {
this.matrixWorld.multiplyMatrices(parent.matrixWorld, this.matrix);
} else {
this.matrixWorld.copy(this.matrix);
}
// 递归更新子对象
this.children.forEach(child => {
child.updateMatrixWorld(this);
});
}
场景图遍历:
1. 深度优先遍历
// 遍历场景图中的所有对象
function traverseScene(object, callback) {
callback(object);
object.children.forEach(child => {
traverseScene(child, callback);
});
}
// 使用Three.js内置traverse方法
scene.traverse((object) => {
if (object.isMesh) {
console.log('发现网格对象:', object.name);
object.castShadow = true;
object.receiveShadow = true;
}
if (object.isLight) {
console.log('发现光源:', object.type);
}
});
2. 条件遍历和过滤
// 按类型查找对象
function findObjectsByType(root, type) {
const results = [];
root.traverse((object) => {
if (object.type === type || object.constructor.name === type) {
results.push(object);
}
});
return results;
}
// 按名称查找对象
function findObjectByName(root, name) {
let result = null;
root.traverse((object) => {
if (object.name === name && !result) {
result = object;
}
});
return result;
}
// 使用示例
const allMeshes = findObjectsByType(scene, 'Mesh');
const carBody = findObjectByName(scene, 'body');
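Three.js 也内置了按名称/属性查找的方法,效果与上面的辅助函数一致:
// 内置查找API(内部同样是遍历场景图)
const body = scene.getObjectByName('body');
const found = scene.getObjectByProperty('uuid', someUuid); // someUuid为假设的已知uuid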
场景图管理:
1. 动态添加和移除
class SceneManager {
constructor(scene) {
this.scene = scene;
this.objects = new Map();
}
addObject(id, object) {
this.objects.set(id, object);
this.scene.add(object);
// 设置对象引用以便查找
object.userData.id = id;
}
removeObject(id) {
const object = this.objects.get(id);
if (object) {
this.scene.remove(object);
this.objects.delete(id);
// 清理资源
this.disposeObject(object);
}
}
disposeObject(object) {
object.traverse((child) => {
if (child.geometry) {
child.geometry.dispose();
}
if (child.material) {
if (Array.isArray(child.material)) {
child.material.forEach(mat => mat.dispose());
} else {
child.material.dispose();
}
}
});
}
}
2. 层级管理
// 创建分层管理系统
class LayerManager {
constructor() {
this.layers = {
background: new THREE.Group(),
objects: new THREE.Group(),
ui: new THREE.Group(),
effects: new THREE.Group()
};
// 设置渲染顺序
this.layers.background.renderOrder = 0;
this.layers.objects.renderOrder = 100;
this.layers.ui.renderOrder = 200;
this.layers.effects.renderOrder = 300;
}
addToLayer(layerName, object) {
if (this.layers[layerName]) {
this.layers[layerName].add(object);
}
}
setLayerVisible(layerName, visible) {
if (this.layers[layerName]) {
this.layers[layerName].visible = visible;
}
}
}
性能优化:
1. 批量更新
// 批量更新对象变换
class BatchUpdater {
constructor() {
this.updateQueue = [];
}
addUpdate(object, transform) {
this.updateQueue.push({ object, transform });
}
processUpdates() {
this.updateQueue.forEach(({ object, transform }) => {
if (transform.position) {
object.position.copy(transform.position);
}
if (transform.rotation) {
object.rotation.copy(transform.rotation);
}
if (transform.scale) {
object.scale.copy(transform.scale);
}
});
this.updateQueue.length = 0;
// 批量更新世界矩阵
scene.updateMatrixWorld();
}
}
2. 视锥剔除优化
// 基于场景图的视锥剔除
function cullSceneGraph(camera, root) {
const frustum = new THREE.Frustum();
const matrix = new THREE.Matrix4();
matrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(matrix);
root.traverse((object) => {
if (object.isMesh) {
// 计算包围盒
if (!object.geometry.boundingBox) {
object.geometry.computeBoundingBox();
}
const box = object.geometry.boundingBox.clone();
box.applyMatrix4(object.matrixWorld);
// 视锥剔除
object.visible = frustum.intersectsBox(box);
}
});
}
实际应用:
How to optimize the performance of Three.js applications? What are the common optimization strategies?
考察点:性能优化策略。
答案:
Three.js性能优化是构建高质量3D Web应用的关键技术,涉及渲染管线的各个环节。优化策略需要从几何体、材质、光照、动画等多个维度综合考虑,平衡视觉效果和运行性能。
渲染性能优化:
1. 减少Draw Call
// 使用InstancedMesh批量渲染相同对象
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({color: 0x00ff00});
const count = 1000;
const instancedMesh = new THREE.InstancedMesh(geometry, material, count);
const matrix = new THREE.Matrix4();
for (let i = 0; i < count; i++) {
matrix.setPosition(
Math.random() * 100 - 50,
Math.random() * 100 - 50,
Math.random() * 100 - 50
);
instancedMesh.setMatrixAt(i, matrix);
}
instancedMesh.instanceMatrix.needsUpdate = true;
scene.add(instancedMesh);
2. 几何体优化
// LOD系统实现
const lod = new THREE.LOD();
// 不同精度的几何体
const highDetail = new THREE.SphereGeometry(1, 32, 32);
const mediumDetail = new THREE.SphereGeometry(1, 16, 16);
const lowDetail = new THREE.SphereGeometry(1, 8, 8);
lod.addLevel(new THREE.Mesh(highDetail, material), 0);
lod.addLevel(new THREE.Mesh(mediumDetail, material), 50);
lod.addLevel(new THREE.Mesh(lowDetail, material), 100);
// 几何体合并减少对象数量(BufferGeometryUtils需从examples单独引入)
import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries([geo1, geo2, geo3]);
const mergedMesh = new THREE.Mesh(mergedGeometry, material);
3. 材质和纹理优化
// 纹理压缩和复用
class TextureManager {
constructor() {
this.cache = new Map();
this.loader = new THREE.TextureLoader();
}
load(url, options = {}) {
if (this.cache.has(url)) {
return this.cache.get(url);
}
const texture = this.loader.load(url);
// 纹理优化设置
texture.generateMipmaps = options.mipmap !== false;
texture.minFilter = options.minFilter || THREE.LinearMipmapLinearFilter;
texture.magFilter = options.magFilter || THREE.LinearFilter;
// 各向异性过滤:提升掠射角下的纹理清晰度
texture.anisotropy = renderer.capabilities.getMaxAnisotropy();
this.cache.set(url, texture);
return texture;
}
}
// 材质复用
const materialCache = new Map();
function getMaterial(params) {
const key = JSON.stringify(params);
if (!materialCache.has(key)) {
materialCache.set(key, new THREE.MeshStandardMaterial(params));
}
return materialCache.get(key);
}
内存管理优化:
1. 资源释放
class ResourceManager {
constructor() {
this.geometries = new Set();
this.materials = new Set();
this.textures = new Set();
}
track(resource) {
if (resource.isGeometry || resource.isBufferGeometry) {
this.geometries.add(resource);
} else if (resource.isMaterial) {
this.materials.add(resource);
} else if (resource.isTexture) {
this.textures.add(resource);
}
}
dispose() {
this.geometries.forEach(geo => geo.dispose());
this.materials.forEach(mat => mat.dispose());
this.textures.forEach(tex => tex.dispose());
this.geometries.clear();
this.materials.clear();
this.textures.clear();
}
}
// 对象池模式
class ObjectPool {
constructor(createFn, resetFn) {
this.createFn = createFn;
this.resetFn = resetFn;
this.pool = [];
}
get() {
return this.pool.pop() || this.createFn();
}
release(obj) {
this.resetFn(obj);
this.pool.push(obj);
}
}
2. 视锥剔除
// 自定义视锥剔除
function performFrustumCulling(camera, objects) {
const frustum = new THREE.Frustum();
const matrix = new THREE.Matrix4();
matrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(matrix);
objects.forEach(object => {
if (object.geometry && object.geometry.boundingSphere) {
object.visible = frustum.intersectsSphere(object.geometry.boundingSphere);
}
});
}
How to implement physics effects in Three.js? How to integrate with physics engines?
考察点:物理系统集成。
答案:
物理效果为3D场景增加真实性和交互性,Three.js本身不包含物理引擎,需要与第三方物理库集成。常用的物理引擎包括Cannon.js、Ammo.js(Bullet物理引擎的JavaScript版本)等。
Cannon.js集成:
1. 基础物理世界设置
import * as CANNON from 'cannon-es';
// 创建物理世界
const world = new CANNON.World({
gravity: new CANNON.Vec3(0, -9.82, 0), // 重力
});
world.broadphase = new CANNON.NaiveBroadphase(); // 碰撞检测算法
// 物理材质
const physicsMaterial = new CANNON.Material('physics');
const physicsContactMaterial = new CANNON.ContactMaterial(
physicsMaterial,
physicsMaterial,
{
friction: 0.4, // 摩擦力
restitution: 0.3, // 弹性恢复系数
}
);
world.addContactMaterial(physicsContactMaterial);
// 地面物理体
const groundShape = new CANNON.Plane();
const groundBody = new CANNON.Body({
mass: 0, // 质量为0表示静态物体
shape: groundShape,
material: physicsMaterial
});
groundBody.quaternion.setFromAxisAngle(new CANNON.Vec3(-1, 0, 0), Math.PI * 0.5);
world.addBody(groundBody);
2. 刚体创建和同步
class PhysicsObject {
constructor(mesh, shape, mass = 1) {
this.mesh = mesh;
// 创建物理体
this.body = new CANNON.Body({
mass: mass,
shape: shape,
material: physicsMaterial
});
// 同步初始位置
this.body.position.copy(mesh.position);
this.body.quaternion.copy(mesh.quaternion);
world.addBody(this.body);
}
update() {
// 将物理体状态同步到渲染网格
this.mesh.position.copy(this.body.position);
this.mesh.quaternion.copy(this.body.quaternion);
}
}
// 创建掉落的立方体
function createFallingBox(position) {
const boxGeometry = new THREE.BoxGeometry(1, 1, 1);
const boxMaterial = new THREE.MeshStandardMaterial({color: Math.random() * 0xffffff});
const boxMesh = new THREE.Mesh(boxGeometry, boxMaterial);
const boxShape = new CANNON.Box(new CANNON.Vec3(0.5, 0.5, 0.5)); // Box参数为半边长,与1x1x1几何体匹配
const physicsBox = new PhysicsObject(boxMesh, boxShape, 1);
boxMesh.position.copy(position);
physicsBox.body.position.copy(position);
scene.add(boxMesh);
return physicsBox;
}
3. 物理更新循环
const physicsObjects = [];
const timeStep = 1 / 60; // 60 FPS
function updatePhysics() {
// 更新物理世界
world.step(timeStep);
// 同步所有物理对象
physicsObjects.forEach(obj => {
obj.update();
});
}
function animate() {
requestAnimationFrame(animate);
updatePhysics();
renderer.render(scene, camera);
}
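实际帧率会波动,cannon-es 的 world.step 支持同时传入固定步长、真实帧间隔和最大子步数,使物理模拟与渲染帧率解耦(示意):
// step(固定步长, 实际帧间隔, 最大子步数)
const clock = new THREE.Clock();
function animateWithFixedStep() {
requestAnimationFrame(animateWithFixedStep);
world.step(timeStep, clock.getDelta(), 3); // 掉帧时最多补3个物理子步
physicsObjects.forEach(obj => obj.update());
renderer.render(scene, camera);
}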
How to implement particle systems in Three.js?
考察点:粒子系统开发。
答案:
粒子系统用于模拟火焰、烟雾、雨雪、爆炸等自然现象。Three.js中可以通过Points对象、Sprite或自定义着色器实现高效的粒子效果。
基础粒子系统:
1. Points粒子系统
class ParticleSystem {
constructor(count = 1000) {
this.count = count;
this.particles = [];
// 创建几何体
this.geometry = new THREE.BufferGeometry();
this.positions = new Float32Array(count * 3);
this.colors = new Float32Array(count * 3);
this.sizes = new Float32Array(count);
this.velocities = [];
this.initParticles();
this.createMaterial();
this.createPoints();
}
initParticles() {
for (let i = 0; i < this.count; i++) {
// 位置
this.positions[i * 3] = (Math.random() - 0.5) * 20;
this.positions[i * 3 + 1] = Math.random() * 10;
this.positions[i * 3 + 2] = (Math.random() - 0.5) * 20;
// 颜色
this.colors[i * 3] = Math.random();
this.colors[i * 3 + 1] = Math.random();
this.colors[i * 3 + 2] = Math.random();
// 大小
this.sizes[i] = Math.random() * 3 + 1;
// 速度
this.velocities[i] = {
x: (Math.random() - 0.5) * 0.1,
y: Math.random() * 0.1 + 0.02,
z: (Math.random() - 0.5) * 0.1
};
}
this.geometry.setAttribute('position', new THREE.BufferAttribute(this.positions, 3));
this.geometry.setAttribute('color', new THREE.BufferAttribute(this.colors, 3));
this.geometry.setAttribute('size', new THREE.BufferAttribute(this.sizes, 1));
}
createMaterial() {
this.material = new THREE.PointsMaterial({
size: 2,
sizeAttenuation: true,
vertexColors: true,
transparent: true,
opacity: 0.8,
blending: THREE.AdditiveBlending
});
}
createPoints() {
this.points = new THREE.Points(this.geometry, this.material);
}
update(deltaTime) {
const positions = this.geometry.attributes.position.array;
for (let i = 0; i < this.count; i++) {
const i3 = i * 3;
// 更新位置
positions[i3] += this.velocities[i].x;
positions[i3 + 1] += this.velocities[i].y;
positions[i3 + 2] += this.velocities[i].z;
// 重置超出范围的粒子
if (positions[i3 + 1] > 10) {
positions[i3 + 1] = 0;
positions[i3] = (Math.random() - 0.5) * 20;
positions[i3 + 2] = (Math.random() - 0.5) * 20;
}
}
this.geometry.attributes.position.needsUpdate = true;
}
}
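使用方式示意:实例化后把内部的 points 加入场景,并在渲染循环中调用 update:
const particleSystem = new ParticleSystem(2000);
scene.add(particleSystem.points);
const particleClock = new THREE.Clock();
function animateParticles() {
requestAnimationFrame(animateParticles);
particleSystem.update(particleClock.getDelta());
renderer.render(scene, camera);
}
animateParticles();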
2. 高级粒子着色器
const particleShaderMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 },
pointTexture: { value: textureLoader.load('particle.png') }
},
vertexShader: `
attribute float size;
attribute vec3 customColor;
varying vec3 vColor;
uniform float time;
void main() {
vColor = customColor;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
// 动态大小变化
float dynamicSize = size * (sin(time + position.x) * 0.3 + 1.0);
gl_PointSize = dynamicSize * (300.0 / -mvPosition.z);
gl_Position = projectionMatrix * mvPosition;
}
`,
fragmentShader: `
uniform sampler2D pointTexture;
varying vec3 vColor;
void main() {
gl_FragColor = vec4(vColor, 1.0);
gl_FragColor = gl_FragColor * texture2D(pointTexture, gl_PointCoord);
// 软粒子效果
float alpha = 1.0 - length(gl_PointCoord - 0.5) * 2.0;
gl_FragColor.a *= alpha;
}
`,
blending: THREE.AdditiveBlending,
depthTest: false,
transparent: true,
vertexColors: true
});
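该着色器依赖几何体上的 size 与 customColor 自定义属性,创建 Points 前需要手动写入(以下数值仅为占位示意):
const particleCount = 1000;
const shaderGeometry = new THREE.BufferGeometry();
shaderGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(particleCount * 3), 3));
shaderGeometry.setAttribute('size', new THREE.BufferAttribute(new Float32Array(particleCount).fill(2), 1));
shaderGeometry.setAttribute('customColor', new THREE.BufferAttribute(new Float32Array(particleCount * 3).fill(1), 3));
scene.add(new THREE.Points(shaderGeometry, particleShaderMaterial));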
What is post-processing in Three.js? How to implement it?
考察点:后期处理技术。
答案:
后期处理是在场景渲染完成后对整个画面进行的图像处理效果,如景深、辉光、色彩校正等。Three.js通过EffectComposer和各种Pass实现后期处理管线。
后期处理管线:
1. EffectComposer设置
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';
// 创建后期处理组合器
const composer = new EffectComposer(renderer);
// 渲染通道
const renderPass = new RenderPass(scene, camera);
composer.addPass(renderPass);
// 辉光效果
const bloomPass = new UnrealBloomPass(
new THREE.Vector2(window.innerWidth, window.innerHeight),
1.5, // 强度
0.4, // 半径
0.85 // 阈值
);
composer.addPass(bloomPass);
// 渲染循环中使用composer
function animate() {
composer.render();
}
How to implement shadow effects in Three.js? What are the differences between shadow types?
考察点:阴影渲染技术。
答案:
阴影为3D场景提供深度感和真实感。Three.js支持多种阴影算法,每种都有不同的质量和性能特征。
阴影系统配置:
1. 基础阴影设置
// 启用阴影渲染
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap; // 阴影类型
// 光源阴影设置
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.castShadow = true;
directionalLight.shadow.mapSize.width = 2048;
directionalLight.shadow.mapSize.height = 2048;
directionalLight.shadow.camera.near = 0.5;
directionalLight.shadow.camera.far = 50;
// 对象阴影属性
mesh.castShadow = true; // 投射阴影
ground.receiveShadow = true; // 接收阴影
2. 阴影类型对比
// PCF阴影 - 平滑边缘
renderer.shadowMap.type = THREE.PCFShadowMap;
// PCF软阴影 - 更平滑的边缘
renderer.shadowMap.type = THREE.PCFSoftShadowMap;
// VSM阴影 - 方差阴影贴图
renderer.shadowMap.type = THREE.VSMShadowMap;
// 基础阴影 - 硬边缘,性能最好
renderer.shadowMap.type = THREE.BasicShadowMap;
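阴影的常见问题是表面自遮挡产生的条纹(shadow acne),一般通过偏移参数缓解,数值需按场景微调(示意):
// bias沿深度方向偏移,normalBias沿法线方向偏移
directionalLight.shadow.bias = -0.0005;
directionalLight.shadow.normalBias = 0.02;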
How to handle rendering of large-scale scenes in Three.js?
考察点:大场景渲染策略。
答案:
大规模场景渲染是3D应用性能的关键挑战,需要采用多种优化策略来处理大量几何体、复杂光照和海量数据。核心思路是减少不必要的渲染计算和内存占用。
空间分割和剔除:
1. 八叉树空间分割
class Octree {
constructor(center, size, maxObjects = 10, maxLevels = 5) {
this.center = center;
this.size = size;
this.maxObjects = maxObjects;
this.maxLevels = maxLevels;
this.level = 0;
this.objects = [];
this.nodes = [];
}
split() {
const subSize = this.size / 2;
const x = this.center.x;
const y = this.center.y;
const z = this.center.z;
this.nodes[0] = new Octree(new THREE.Vector3(x - subSize / 2, y + subSize / 2, z - subSize / 2), subSize, this.maxObjects, this.maxLevels);
this.nodes[1] = new Octree(new THREE.Vector3(x + subSize / 2, y + subSize / 2, z - subSize / 2), subSize, this.maxObjects, this.maxLevels);
// ... 同理创建其余6个子节点(子节点中心偏移为size/4),并设置 node.level = this.level + 1
// 将对象分配到子节点
this.objects.forEach(obj => {
const index = this.getIndex(obj);
if (index !== -1) {
this.nodes[index].insert(obj);
}
});
this.objects = [];
}
insert(object) {
if (this.nodes.length > 0) {
const index = this.getIndex(object);
if (index !== -1) {
this.nodes[index].insert(object);
return;
}
}
this.objects.push(object);
if (this.objects.length > this.maxObjects && this.level < this.maxLevels) {
if (this.nodes.length === 0) {
this.split();
}
}
}
retrieve(frustum, returnObjects = []) {
if (this.intersectsFrustum(frustum)) {
returnObjects.push(...this.objects);
this.nodes.forEach(node => {
node.retrieve(frustum, returnObjects);
});
}
return returnObjects;
}
}
2. 层次LOD系统
class LODManager {
constructor(camera) {
this.camera = camera;
this.lodLevels = new Map();
}
registerLOD(object, levels) {
// levels: [{distance: 0, mesh: highDetailMesh}, {distance: 50, mesh: lowDetailMesh}]
this.lodLevels.set(object.uuid, levels);
}
update() {
this.lodLevels.forEach((levels, objectId) => {
const object = scene.getObjectByProperty('uuid', objectId);
if (!object) return;
const distance = this.camera.position.distanceTo(object.position);
// 选择合适的LOD级别
let selectedLevel = levels[levels.length - 1];
for (const level of levels) {
if (distance <= level.distance) {
selectedLevel = level;
break;
}
}
// 切换模型
if (object.visible && object.geometry !== selectedLevel.mesh.geometry) {
object.geometry = selectedLevel.mesh.geometry;
object.material = selectedLevel.mesh.material;
}
});
}
}
流式加载系统:
1. 分块加载
class ChunkManager {
constructor(chunkSize = 100) {
this.chunkSize = chunkSize;
this.loadedChunks = new Map();
this.loadingPromises = new Map();
}
getChunkKey(position) {
const x = Math.floor(position.x / this.chunkSize);
const z = Math.floor(position.z / this.chunkSize);
return `${x}_${z}`;
}
async loadChunk(chunkKey) {
if (this.loadedChunks.has(chunkKey)) {
return this.loadedChunks.get(chunkKey);
}
if (this.loadingPromises.has(chunkKey)) {
return this.loadingPromises.get(chunkKey);
}
const promise = this.fetchChunkData(chunkKey);
this.loadingPromises.set(chunkKey, promise);
try {
const chunkData = await promise;
this.loadedChunks.set(chunkKey, chunkData);
this.loadingPromises.delete(chunkKey);
return chunkData;
} catch (error) {
this.loadingPromises.delete(chunkKey);
throw error;
}
}
updateVisibleChunks(cameraPosition, viewDistance) {
const requiredChunks = new Set();
// 计算视野范围内的块
for (let x = -viewDistance; x <= viewDistance; x++) {
for (let z = -viewDistance; z <= viewDistance; z++) {
const chunkPos = {
x: cameraPosition.x + x * this.chunkSize,
z: cameraPosition.z + z * this.chunkSize
};
requiredChunks.add(this.getChunkKey(chunkPos));
}
}
// 加载需要的块
requiredChunks.forEach(chunkKey => {
this.loadChunk(chunkKey);
});
// 卸载远离的块
this.loadedChunks.forEach((chunk, chunkKey) => {
if (!requiredChunks.has(chunkKey)) {
this.unloadChunk(chunkKey);
}
});
}
}
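使用时在渲染循环中按相机位置刷新可见块即可。注意 fetchChunkData 与 unloadChunk 是假设由业务侧实现的方法:
const chunkManager = new ChunkManager(100);
function updateWorldStreaming() {
chunkManager.updateVisibleChunks(camera.position, 2); // 视距为2个块
}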
How to implement VR/AR applications in Three.js?
考察点:VR/AR开发。
答案:
Three.js通过WebXR API支持VR和AR应用开发,提供沉浸式的3D体验。VR应用需要处理双眼渲染、头部追踪和手柄交互,而AR应用则需要将虚拟内容与现实世界的相机画面融合,并依赖命中测试(hit-test)等能力把对象锚定到真实平面上。
VR应用实现:
1. WebXR VR设置
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';
// 启用XR支持
renderer.xr.enabled = true;
document.body.appendChild(VRButton.createButton(renderer));
// VR控制器设置
const controller1 = renderer.xr.getController(0);
const controller2 = renderer.xr.getController(1);
controller1.addEventListener('selectstart', onSelectStart);
controller1.addEventListener('selectend', onSelectEnd);
scene.add(controller1);
scene.add(controller2);
// 控制器射线
const geometry = new THREE.BufferGeometry().setFromPoints([
new THREE.Vector3(0, 0, 0),
new THREE.Vector3(0, 0, -1)
]);
const line = new THREE.Line(geometry, new THREE.LineBasicMaterial({color: 0xffffff}));
controller1.add(line.clone());
controller2.add(line.clone());
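如需显示与设备匹配的手柄模型,可使用 examples 中的 XRControllerModelFactory:
import { XRControllerModelFactory } from 'three/examples/jsm/webxr/XRControllerModelFactory.js';
const controllerModelFactory = new XRControllerModelFactory();
const grip1 = renderer.xr.getControllerGrip(0);
grip1.add(controllerModelFactory.createControllerModel(grip1));
scene.add(grip1);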
2. VR交互实现
class VRInteractionManager {
constructor(scene, controllers) {
this.scene = scene;
this.controllers = controllers;
this.raycaster = new THREE.Raycaster();
this.interactables = [];
}
addInteractable(object, callbacks = {}) {
object.userData.isInteractable = true;
object.userData.onHover = callbacks.onHover;
object.userData.onSelect = callbacks.onSelect;
this.interactables.push(object);
}
update() {
this.controllers.forEach(controller => {
// 获取控制器射线
this.raycaster.setFromXRController(controller);
const intersects = this.raycaster.intersectObjects(this.interactables);
if (intersects.length > 0) {
const intersect = intersects[0];
// 触发悬停事件
if (intersect.object.userData.onHover) {
intersect.object.userData.onHover(intersect);
}
// 更新射线长度到交点
const line = controller.getObjectByName('line');
if (line) {
line.scale.z = intersect.distance;
}
}
});
}
onSelectStart(controller) {
const intersects = this.raycaster.intersectObjects(this.interactables);
if (intersects.length > 0) {
const intersect = intersects[0];
if (intersect.object.userData.onSelect) {
intersect.object.userData.onSelect(intersect);
}
}
}
}
AR应用实现:
1. WebXR AR设置
import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';
// AR环境设置
renderer.xr.enabled = true;
document.body.appendChild(ARButton.createButton(renderer, {
requiredFeatures: ['hit-test']
}));
// AR命中测试
let hitTestSource = null;
let hitTestSourceRequested = false;
function onSelect() {
if (reticle.visible) {
// 在命中点放置对象
const object = new THREE.Mesh(geometry, material);
object.position.copy(reticle.position);
object.quaternion.copy(reticle.quaternion);
scene.add(object);
}
}
controller.addEventListener('select', onSelect);
2. AR平面检测
class ARPlaneDetection {
constructor(renderer) {
this.renderer = renderer;
this.hitTestSource = null;
this.reticle = this.createReticle();
}
createReticle() {
const geometry = new THREE.RingGeometry(0.15, 0.2, 32).rotateX(-Math.PI / 2);
const material = new THREE.MeshBasicMaterial();
const reticle = new THREE.Mesh(geometry, material);
reticle.matrixAutoUpdate = false;
reticle.visible = false;
return reticle;
}
async initHitTest(session) {
const viewerSpace = await session.requestReferenceSpace('viewer');
this.hitTestSource = await session.requestHitTestSource({ space: viewerSpace });
}
update(frame) {
if (this.hitTestSource) {
const hitTestResults = frame.getHitTestResults(this.hitTestSource);
if (hitTestResults.length > 0) {
const hit = hitTestResults[0];
const referenceSpace = this.renderer.xr.getReferenceSpace();
const hitPose = hit.getPose(referenceSpace);
if (hitPose) {
this.reticle.visible = true;
this.reticle.matrix.fromArray(hitPose.transform.matrix);
}
} else {
this.reticle.visible = false;
}
}
}
}
How to design a high-performance Three.js application architecture?
考察点:架构设计能力。
答案:
高性能Three.js应用架构需要考虑模块化设计、资源管理、渲染优化、状态管理等多个层面。良好的架构能够支撑复杂应用的开发和维护,同时保证运行性能。
核心架构设计:
1. 分层架构模式
// 应用架构分层
class ThreeApp {
constructor(container) {
this.container = container;
// 核心层
this.core = new CoreEngine();
// 渲染层
this.renderer = new RenderManager(this.core);
// 场景层
this.sceneManager = new SceneManager(this.core);
// 资源层
this.resourceManager = new ResourceManager();
// 输入层
this.inputManager = new InputManager(this.container);
// 系统层
this.systems = new SystemManager();
this.init();
}
init() {
this.setupCore();
this.setupSystems();
this.setupEventListeners();
this.startRenderLoop();
}
}
// 核心引擎
class CoreEngine {
constructor() {
this.scene = new THREE.Scene();
this.camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
this.renderer = new THREE.WebGLRenderer({ antialias: true });
this.clock = new THREE.Clock();
this.setupRenderer();
}
setupRenderer() {
this.renderer.setSize(window.innerWidth, window.innerHeight);
this.renderer.shadowMap.enabled = true;
this.renderer.shadowMap.type = THREE.PCFSoftShadowMap;
this.renderer.outputEncoding = THREE.sRGBEncoding;
this.renderer.toneMapping = THREE.ACESFilmicToneMapping;
}
}
2. 组件化系统设计
// 实体组件系统 (ECS)
class Entity {
constructor(id) {
this.id = id;
this.components = new Map();
this.active = true;
}
addComponent(component) {
this.components.set(component.constructor.name, component);
component.entity = this;
return this;
}
getComponent(componentType) {
return this.components.get(componentType.name);
}
hasComponent(componentType) {
return this.components.has(componentType.name);
}
}
// 组件基类
class Component {
constructor() {
this.entity = null;
this.enabled = true;
}
}
// 变换组件
class TransformComponent extends Component {
constructor(position = new THREE.Vector3(), rotation = new THREE.Euler(), scale = new THREE.Vector3(1, 1, 1)) {
super();
this.position = position;
this.rotation = rotation;
this.scale = scale;
this.object3D = new THREE.Object3D();
}
updateMatrix() {
this.object3D.position.copy(this.position);
this.object3D.rotation.copy(this.rotation);
this.object3D.scale.copy(this.scale);
this.object3D.updateMatrix();
}
}
// 渲染组件
class MeshComponent extends Component {
constructor(geometry, material) {
super();
this.geometry = geometry;
this.material = material;
this.mesh = new THREE.Mesh(geometry, material);
}
}
// 系统管理器
class SystemManager {
constructor() {
this.systems = [];
this.entities = new Map();
}
addSystem(system) {
this.systems.push(system);
system.init();
}
addEntity(entity) {
this.entities.set(entity.id, entity);
// 通知所有系统有新实体
this.systems.forEach(system => {
if (system.matchesEntity(entity)) {
system.addEntity(entity);
}
});
}
update(deltaTime) {
this.systems.forEach(system => {
if (system.enabled) {
system.update(deltaTime);
}
});
}
}
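一个最小的使用示意:组装实体并交给系统管理器驱动,其中 systemManager 为假设已创建的 SystemManager 实例:
const entity = new Entity('player');
entity.addComponent(new TransformComponent(new THREE.Vector3(0, 1, 0)));
entity.addComponent(new MeshComponent(
new THREE.BoxGeometry(1, 1, 1),
new THREE.MeshStandardMaterial({ color: 0x44aa88 })
));
systemManager.addEntity(entity);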
性能优化架构:
1. 渲染管理器
class RenderManager {
constructor(core) {
this.core = core;
this.renderQueue = new RenderQueue();
this.postProcessing = new PostProcessingManager();
this.cullingManager = new CullingManager();
this.stats = {
drawCalls: 0,
triangles: 0,
frameTime: 0
};
}
render() {
const startTime = performance.now();
// 视锥剔除
const visibleObjects = this.cullingManager.cull(this.core.camera, this.core.scene);
// 排序渲染队列
this.renderQueue.sort(visibleObjects, this.core.camera);
// 批量渲染
this.batchRender(this.renderQueue.opaque);
this.batchRender(this.renderQueue.transparent);
// 后期处理
this.postProcessing.render(this.core.renderer, this.core.scene, this.core.camera);
// 性能统计
this.stats.frameTime = performance.now() - startTime;
this.updateStats();
}
batchRender(objects) {
// 按材质分组减少状态切换
// (setMaterial/renderObject为本架构约定的内部接口,并非WebGLRenderer公开API)
const materialGroups = this.groupByMaterial(objects);
materialGroups.forEach(group => {
this.core.renderer.setMaterial(group.material);
group.objects.forEach(obj => {
this.core.renderer.renderObject(obj);
});
});
}
}
// 渲染队列
class RenderQueue {
constructor() {
this.opaque = [];
this.transparent = [];
}
sort(objects, camera) {
this.opaque.length = 0;
this.transparent.length = 0;
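// 下文排序依赖的 distanceToCamera 需要先计算,这里给出一种简化做法:缓存每个对象到相机的平方距离
const worldPos = new THREE.Vector3();
objects.forEach(obj => {
obj.getWorldPosition(worldPos);
obj.distanceToCamera = worldPos.distanceToSquared(camera.position); // 平方距离即可满足排序需要
});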
objects.forEach(obj => {
if (obj.material.transparent) {
this.transparent.push(obj);
} else {
this.opaque.push(obj);
}
});
// 不透明物体从前到后排序(早期Z测试剔除)
this.opaque.sort((a, b) => {
return a.distanceToCamera - b.distanceToCamera;
});
// 透明物体从后到前排序(正确的alpha混合)
this.transparent.sort((a, b) => {
return b.distanceToCamera - a.distanceToCamera;
});
}
}
2. 资源管理系统
class ResourceManager {
constructor() {
this.cache = new Map();
this.loadingQueue = [];
this.loadedResources = new Set();
this.refCounts = new Map();
this.loaders = {
texture: new THREE.TextureLoader(),
gltf: new GLTFLoader(),
audio: new THREE.AudioLoader()
};
}
async load(url, type = 'auto') {
if (this.cache.has(url)) {
this.addRef(url);
return this.cache.get(url);
}
const resourceType = type === 'auto' ? this.detectType(url) : type;
const loader = this.loaders[resourceType];
if (!loader) {
throw new Error(`Unsupported resource type: ${resourceType}`);
}
try {
const resource = await this.loadWithLoader(loader, url);
this.cache.set(url, resource);
this.addRef(url);
return resource;
} catch (error) {
console.error(`Failed to load resource: ${url}`, error);
throw error;
}
}
unload(url) {
this.removeRef(url);
if (this.getRefCount(url) <= 0) {
const resource = this.cache.get(url);
if (resource && resource.dispose) {
resource.dispose();
}
this.cache.delete(url);
}
}
addRef(url) {
this.refCounts.set(url, (this.refCounts.get(url) || 0) + 1);
}
removeRef(url) {
const count = this.refCounts.get(url) || 0;
this.refCounts.set(url, Math.max(0, count - 1));
}
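// 以下为上文引用但未给出的辅助方法的简化示意
getRefCount(url) {
return this.refCounts.get(url) || 0;
}
detectType(url) {
// 根据扩展名推断资源类型(假设的映射规则)
const ext = url.split('.').pop().toLowerCase();
if (['glb', 'gltf'].includes(ext)) return 'gltf';
if (['mp3', 'ogg', 'wav'].includes(ext)) return 'audio';
return 'texture';
}
loadWithLoader(loader, url) {
// 将回调风格的 loader.load 封装为 Promise
return new Promise((resolve, reject) => {
loader.load(url, resolve, undefined, reject);
});
}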
}
实际应用架构:
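在实际项目中,这些分层由一个入口类组装起来协同工作。下面是一个简化的使用示意(假设上文分层构造函数所属的类名为 Engine,且构造函数接收容器元素):
const engine = new Engine(document.getElementById('app'));
// 资源层加载纹理后,通过 ECS 创建实体并交给系统层驱动
engine.resourceManager.load('textures/diffuse.jpg', 'texture').then((texture) => {
const player = new Entity('player')
.addComponent(new TransformComponent())
.addComponent(new MeshComponent(
new THREE.BoxGeometry(),
new THREE.MeshStandardMaterial({ map: texture })
));
engine.systems.addEntity(player);
});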
How to implement custom shaders in Three.js? What are the key points of GLSL programming?
How to implement custom shaders in Three.js? What are the key points of GLSL programming?
考察点:着色器编程。
答案:
自定义着色器是实现高级视觉效果的核心技术,通过GLSL(OpenGL Shading Language)直接控制GPU渲染管线。掌握GLSL编程能够创造出Three.js内置材质无法实现的复杂效果。
GLSL基础语法:
1. 数据类型和变量
// 基础数据类型
float value = 1.0; // 浮点数
int count = 10; // 整数
bool isVisible = true; // 布尔值
// 向量类型
vec2 position = vec2(1.0, 2.0); // 2D向量
vec3 color = vec3(1.0, 0.5, 0.0); // 3D向量/颜色
vec4 vertex = vec4(0.0, 0.0, 0.0, 1.0); // 4D向量/位置
// 矩阵类型
mat3 rotation = mat3(1.0); // 3x3矩阵
mat4 transform = mat4(1.0); // 4x4矩阵
// 纹理采样器
sampler2D diffuseMap; // 2D纹理
samplerCube envMap; // 立方体纹理
2. 变量修饰符
// 顶点着色器
attribute vec3 position; // 顶点属性(只读)
uniform mat4 modelMatrix; // 全局变量
varying vec2 vUv; // 传递给片元着色器
// 片元着色器
uniform float time; // 全局变量
varying vec2 vUv; // 从顶点着色器接收
高级着色器实现:
1. 程序化纹理着色器
const proceduralMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 },
resolution: { value: new THREE.Vector2(1024, 1024) },
noiseScale: { value: 5.0 },
colorA: { value: new THREE.Color(0xff6b35) },
colorB: { value: new THREE.Color(0x004e89) }
},
vertexShader: `
varying vec2 vUv;
varying vec3 vPosition;
void main() {
vUv = uv;
vPosition = position;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
uniform float time;
uniform vec2 resolution;
uniform float noiseScale;
uniform vec3 colorA;
uniform vec3 colorB;
varying vec2 vUv;
varying vec3 vPosition;
// 噪声函数
float random(vec2 st) {
return fract(sin(dot(st.xy, vec2(12.9898,78.233))) * 43758.5453123);
}
float noise(vec2 st) {
vec2 i = floor(st);
vec2 f = fract(st);
float a = random(i);
float b = random(i + vec2(1.0, 0.0));
float c = random(i + vec2(0.0, 1.0));
float d = random(i + vec2(1.0, 1.0));
vec2 u = f * f * (3.0 - 2.0 * f);
return mix(a, b, u.x) + (c - a) * u.y * (1.0 - u.x) + (d - b) * u.x * u.y;
}
float fbm(vec2 st) {
float value = 0.0;
float amplitude = 0.5;
float frequency = 0.0;
for (int i = 0; i < 6; i++) {
value += amplitude * noise(st);
st *= 2.0;
amplitude *= 0.5;
}
return value;
}
void main() {
vec2 st = vUv * noiseScale;
st += time * 0.1;
// 分形布朗运动噪声
float n = fbm(st);
// 动态图案
float pattern = sin(vUv.x * 10.0 + time) * cos(vUv.y * 10.0 + time * 0.5);
pattern = (pattern + 1.0) * 0.5;
// 混合颜色
vec3 color = mix(colorA, colorB, n * pattern);
gl_FragColor = vec4(color, 1.0);
}
`
});
2. 变形动画着色器
const morphMaterial = new THREE.ShaderMaterial({
uniforms: {
time: { value: 0 },
morphIntensity: { value: 1.0 },
waveFrequency: { value: 2.0 }
},
vertexShader: `
uniform float time;
uniform float morphIntensity;
uniform float waveFrequency;
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
// 3D噪声函数
vec3 mod289(vec3 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x) {
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x) {
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r) {
return 1.79284291400159 - 0.85373472095314 * r;
}
float snoise(vec3 v) {
const vec2 C = vec2(1.0/6.0, 1.0/3.0);
const vec4 D = vec4(0.0, 0.5, 1.0, 2.0);
vec3 i = floor(v + dot(v, C.yyy));
vec3 x0 = v - i + dot(i, C.xxx);
vec3 g = step(x0.yzx, x0.xyz);
vec3 l = 1.0 - g;
vec3 i1 = min(g.xyz, l.zxy);
vec3 i2 = max(g.xyz, l.zxy);
vec3 x1 = x0 - i1 + C.xxx;
vec3 x2 = x0 - i2 + C.yyy;
vec3 x3 = x0 - D.yyy;
i = mod289(i);
vec4 p = permute(permute(permute(
i.z + vec4(0.0, i1.z, i2.z, 1.0))
+ i.y + vec4(0.0, i1.y, i2.y, 1.0))
+ i.x + vec4(0.0, i1.x, i2.x, 1.0));
float n_ = 0.142857142857;
vec3 ns = n_ * D.wyz - D.xzx;
vec4 j = p - 49.0 * floor(p * ns.z * ns.z);
vec4 x_ = floor(j * ns.z);
vec4 y_ = floor(j - 7.0 * x_);
vec4 x = x_ * ns.x + ns.yyyy;
vec4 y = y_ * ns.x + ns.yyyy;
vec4 h = 1.0 - abs(x) - abs(y);
vec4 b0 = vec4(x.xy, y.xy);
vec4 b1 = vec4(x.zw, y.zw);
vec4 s0 = floor(b0)*2.0 + 1.0;
vec4 s1 = floor(b1)*2.0 + 1.0;
vec4 sh = -step(h, vec4(0.0));
vec4 a0 = b0.xzyw + s0.xzyw*sh.xxyy;
vec4 a1 = b1.xzyw + s1.xzyw*sh.zzww;
vec3 p0 = vec3(a0.xy,h.x);
vec3 p1 = vec3(a0.zw,h.y);
vec3 p2 = vec3(a1.xy,h.z);
vec3 p3 = vec3(a1.zw,h.w);
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot(m*m, vec4(dot(p0,x0), dot(p1,x1), dot(p2,x2), dot(p3,x3)));
}
void main() {
vUv = uv;
vNormal = normalize(normalMatrix * normal);
// 变形计算
vec3 pos = position;
float noiseValue = snoise(pos * waveFrequency + time);
pos += normal * noiseValue * morphIntensity;
vPosition = (modelViewMatrix * vec4(pos, 1.0)).xyz;
gl_Position = projectionMatrix * vec4(vPosition, 1.0);
}
`,
fragmentShader: `
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPosition;
void main() {
// 基于法向量的颜色
vec3 color = normalize(vNormal) * 0.5 + 0.5;
// 边缘光效
vec3 viewDirection = normalize(-vPosition);
float fresnel = 1.0 - dot(vNormal, viewDirection);
fresnel = pow(fresnel, 2.0);
color += vec3(0.2, 0.6, 1.0) * fresnel;
gl_FragColor = vec4(color, 1.0);
}
`
});
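这两个材质都依赖 time uniform 驱动动画,需要在渲染循环中持续更新才会产生效果(示意,假设已有 renderer/scene/camera):
const clock = new THREE.Clock();
function animate() {
requestAnimationFrame(animate);
const elapsed = clock.getElapsedTime();
proceduralMaterial.uniforms.time.value = elapsed;
morphMaterial.uniforms.time.value = elapsed;
renderer.render(scene, camera);
}
animate();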
How to implement complex 3D data visualization in Three.js?
How to implement complex 3D data visualization in Three.js?
考察点:数据可视化设计。
答案:
3D数据可视化将抽象数据转换为直观的三维表现形式,通过位置、颜色、大小、动画等维度映射数据属性。设计有效的3D数据可视化需要考虑数据结构、视觉编码、交互设计和性能优化。
数据映射策略:
1. 多维数据映射
class DataVisualization {
constructor(data) {
this.data = data;
this.scales = this.createScales();
this.colorScale = this.createColorScale();
this.scene = new THREE.Scene();
}
createScales() {
// 创建数据尺度映射
return {
x: d3.scaleLinear()
.domain(d3.extent(this.data, d => d.x))
.range([-10, 10]),
y: d3.scaleLinear()
.domain(d3.extent(this.data, d => d.y))
.range([0, 20]),
z: d3.scaleLinear()
.domain(d3.extent(this.data, d => d.z))
.range([-10, 10]),
size: d3.scaleSqrt()
.domain(d3.extent(this.data, d => d.value))
.range([0.1, 2])
};
}
createColorScale() {
return d3.scaleSequential(d3.interpolateViridis)
.domain(d3.extent(this.data, d => d.category));
}
createVisualization() {
this.data.forEach((dataPoint, index) => {
const geometry = new THREE.SphereGeometry(this.scales.size(dataPoint.value), 16, 16);
const material = new THREE.MeshStandardMaterial({
color: new THREE.Color(this.colorScale(dataPoint.category)),
transparent: true,
opacity: 0.8
});
const sphere = new THREE.Mesh(geometry, material);
sphere.position.set(
this.scales.x(dataPoint.x),
this.scales.y(dataPoint.y),
this.scales.z(dataPoint.z)
);
// 存储原始数据用于交互
sphere.userData = dataPoint;
this.scene.add(sphere);
});
}
}
2. 时间序列数据可视化
class TimeSeriesVisualization {
constructor(timeSeriesData) {
this.data = timeSeriesData;
this.currentTime = 0;
this.animationSpeed = 0.02;
this.trails = new Map(); // 轨迹记录
}
createAnimatedVisualization() {
// 为每个数据系列创建轨迹
this.data.series.forEach((series, seriesIndex) => {
const trailGeometry = new THREE.BufferGeometry();
const trailMaterial = new THREE.LineBasicMaterial({
color: this.getSeriesColor(seriesIndex),
transparent: true,
opacity: 0.6
});
const trailLine = new THREE.Line(trailGeometry, trailMaterial);
this.scene.add(trailLine);
this.trails.set(series.id, {
line: trailLine,
points: [],
maxPoints: 100
});
});
}
updateVisualization() {
this.data.series.forEach(series => {
const currentDataPoint = this.interpolateDataAtTime(series, this.currentTime);
if (currentDataPoint) {
const position = new THREE.Vector3(
currentDataPoint.x,
currentDataPoint.y,
currentDataPoint.z
);
// 更新轨迹
const trail = this.trails.get(series.id);
trail.points.push(position);
if (trail.points.length > trail.maxPoints) {
trail.points.shift();
}
// 更新几何体
const positions = new Float32Array(trail.points.length * 3);
trail.points.forEach((point, index) => {
positions[index * 3] = point.x;
positions[index * 3 + 1] = point.y;
positions[index * 3 + 2] = point.z;
});
trail.line.geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
trail.line.geometry.setDrawRange(0, trail.points.length);
}
});
this.currentTime += this.animationSpeed;
}
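// 以下两个方法在上文被引用,这里给出简化示意
// 在相邻采样点之间做线性插值(假设 series.points 按 time 升序排列)
interpolateDataAtTime(series, time) {
const points = series.points;
if (!points || points.length === 0) return null;
if (time <= points[0].time) return points[0];
if (time >= points[points.length - 1].time) return points[points.length - 1];
const i = points.findIndex(p => p.time > time);
const p0 = points[i - 1], p1 = points[i];
const t = (time - p0.time) / (p1.time - p0.time);
return {
x: p0.x + (p1.x - p0.x) * t,
y: p0.y + (p1.y - p0.y) * t,
z: p0.z + (p1.z - p0.z) * t
};
}
// 简单的调色板轮换取色
getSeriesColor(index) {
const palette = [0xe63946, 0x457b9d, 0x2a9d8f, 0xf4a261];
return palette[index % palette.length];
}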
}
What is the memory management strategy for Three.js applications? How to avoid memory leaks?
What is the memory management strategy for Three.js applications? How to avoid memory leaks?
考察点:内存管理能力。
答案:
Three.js应用的内存管理涉及几何体、材质、纹理、渲染器等多个层面的资源释放。不当的内存管理会导致内存泄漏,影响应用性能甚至造成浏览器崩溃。
资源释放策略:
1. 自动资源管理系统
class ResourceTracker {
constructor() {
this.resources = new Set();
}
track(resource) {
if (!resource) return resource;
// 追踪各种类型的资源
if (resource.dispose || resource.isTexture || resource.isGeometry || resource.isMaterial) {
this.resources.add(resource);
}
return resource;
}
untrack(resource) {
this.resources.delete(resource);
}
dispose() {
for (const resource of this.resources) {
if (resource.dispose) {
resource.dispose();
}
}
this.resources.clear();
}
}
// 使用示例
const resourceTracker = new ResourceTracker();
// 自动追踪创建的资源
const geometry = resourceTracker.track(new THREE.BoxGeometry());
const material = resourceTracker.track(new THREE.MeshBasicMaterial());
const texture = resourceTracker.track(textureLoader.load('image.jpg'));
// 统一释放
resourceTracker.dispose();
2. 对象池管理
class ObjectPool {
constructor(createFn, resetFn, initialSize = 10) {
this.createFn = createFn;
this.resetFn = resetFn;
this.pool = [];
this.active = new Set();
// 预创建对象
for (let i = 0; i < initialSize; i++) {
this.pool.push(this.createFn());
}
}
get() {
let obj = this.pool.pop();
if (!obj) {
obj = this.createFn();
}
this.active.add(obj);
return obj;
}
release(obj) {
if (this.active.has(obj)) {
this.active.delete(obj);
this.resetFn(obj);
this.pool.push(obj);
}
}
releaseAll() {
this.active.forEach(obj => {
this.resetFn(obj);
this.pool.push(obj);
});
this.active.clear();
}
dispose() {
this.pool.forEach(obj => {
if (obj.dispose) obj.dispose();
});
this.active.forEach(obj => {
if (obj.dispose) obj.dispose();
});
this.pool.length = 0;
this.active.clear();
}
}
// 粒子对象池
const particlePool = new ObjectPool(
() => new THREE.Mesh(
new THREE.SphereGeometry(0.1, 8, 8),
new THREE.MeshBasicMaterial()
),
(particle) => {
particle.position.set(0, 0, 0);
particle.visible = false;
scene.remove(particle);
}
);
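典型的取用/归还流程如下(emitterPosition 为假设的发射点):
// 发射:从池中取出并激活
const particle = particlePool.get();
particle.position.copy(emitterPosition);
particle.visible = true;
scene.add(particle);
// 生命周期结束:归还到池中,避免频繁创建/销毁带来的GC压力
particlePool.release(particle);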
How to implement real-time collaboration features in Three.js applications?
How to implement real-time collaboration features in Three.js applications?
考察点:实时协作设计。
答案:
实时协作功能允许多用户同时操作3D场景,需要解决状态同步、冲突处理、网络通信等技术挑战。核心是建立可靠的实时通信机制和状态管理系统。
实时同步架构:
1. WebSocket通信层
class CollaborationManager {
constructor(scene, userId) {
this.scene = scene;
this.userId = userId;
this.socket = null;
this.collaborators = new Map();
this.operationQueue = [];
this.lastSyncTime = 0;
this.setupWebSocket();
}
setupWebSocket() {
this.socket = new WebSocket('ws://localhost:8080');
this.socket.onmessage = (event) => {
const message = JSON.parse(event.data);
this.handleMessage(message);
};
this.socket.onopen = () => {
this.sendMessage({
type: 'join',
userId: this.userId,
timestamp: Date.now()
});
};
}
handleMessage(message) {
switch (message.type) {
case 'object_update':
this.applyObjectUpdate(message.data);
break;
case 'user_joined':
this.addCollaborator(message.data);
break;
case 'user_left':
this.removeCollaborator(message.data.userId);
break;
case 'operation_sync':
this.applyOperation(message.data);
break;
}
}
sendObjectUpdate(objectId, transform) {
const message = {
type: 'object_update',
data: {
objectId: objectId,
transform: {
position: transform.position.toArray(),
rotation: transform.rotation.toArray(),
scale: transform.scale.toArray()
},
timestamp: Date.now(),
userId: this.userId
}
};
this.socket.send(JSON.stringify(message));
}
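// applyObjectUpdate:将远端变换应用到本地场景(示意,假设对象以 objectId 作为 name 注册)
applyObjectUpdate(data) {
if (data.userId === this.userId) return; // 忽略自己发出的回显消息
const object = this.scene.getObjectByName(data.objectId);
if (!object) return;
object.position.fromArray(data.transform.position);
object.rotation.fromArray(data.transform.rotation);
object.scale.fromArray(data.transform.scale);
}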
}
2. 操作冲突解决
class OperationalTransform {
constructor() {
this.operations = [];
this.appliedOperations = new Set();
}
// 操作变换算法
transform(op1, op2) {
// 根据操作类型进行变换
if (op1.type === 'move' && op2.type === 'move') {
return this.transformMoveOperations(op1, op2);
} else if (op1.type === 'delete' && op2.type === 'move') {
return this.transformDeleteMoveOperations(op1, op2);
}
// ... 其他操作组合
return [op1, op2];
}
transformMoveOperations(op1, op2) {
if (op1.objectId === op2.objectId) {
// 相同对象的移动操作,应用相对变换
const deltaPosition = new THREE.Vector3().subVectors(
op2.newPosition, op2.oldPosition
);
return [
{
...op1,
oldPosition: new THREE.Vector3().addVectors(op1.oldPosition, deltaPosition),
newPosition: new THREE.Vector3().addVectors(op1.newPosition, deltaPosition)
},
op2
];
}
return [op1, op2];
}
applyOperation(operation) {
if (this.appliedOperations.has(operation.id)) {
return; // 操作已应用
}
// 应用变换到之前的操作
const transformedOps = this.operations.map(existingOp =>
this.transform(existingOp, operation)[0]
);
this.operations = transformedOps;
this.operations.push(operation);
this.appliedOperations.add(operation.id);
// 应用到场景
this.executeOperation(operation);
}
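// executeOperation:把变换后的操作真正作用到场景(示意,假设持有 this.scene 引用且对象以 objectId 命名)
executeOperation(operation) {
const object = this.scene.getObjectByName(operation.objectId);
if (!object) return;
switch (operation.type) {
case 'move':
object.position.copy(operation.newPosition);
break;
case 'delete':
this.scene.remove(object);
break;
}
}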
}
How to implement procedurally generated complex geometries in Three.js?
How to implement procedurally generated complex geometries in Three.js?
考察点:程序化生成技术。
答案:
程序化生成通过算法创建复杂的3D几何体,常用于地形生成、建筑建模、自然现象模拟等场景。核心技术包括噪声函数、L-系统、细分算法等。
地形生成系统:
1. 基于噪声的地形
class TerrainGenerator {
constructor(width = 100, height = 100, segments = 50) {
this.width = width;
this.height = height;
this.segments = segments;
this.heightMap = [];
this.generateHeightMap();
}
// Perlin噪声实现
perlinNoise(x, y, octaves = 4, persistence = 0.5, scale = 0.1) {
let value = 0;
let amplitude = 1;
let frequency = scale;
let maxValue = 0;
for (let i = 0; i < octaves; i++) {
value += this.noise(x * frequency, y * frequency) * amplitude;
maxValue += amplitude;
amplitude *= persistence;
frequency *= 2;
}
return value / maxValue;
}
// 简单噪声函数
noise(x, y) {
const n = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
return 2 * (n - Math.floor(n)) - 1;
}
generateHeightMap() {
for (let y = 0; y <= this.segments; y++) {
this.heightMap[y] = [];
for (let x = 0; x <= this.segments; x++) {
const worldX = (x / this.segments - 0.5) * this.width;
const worldY = (y / this.segments - 0.5) * this.height;
// 多层噪声叠加
let height = this.perlinNoise(worldX, worldY, 4, 0.5, 0.02) * 20;
height += this.perlinNoise(worldX, worldY, 8, 0.3, 0.05) * 5;
height += this.perlinNoise(worldX, worldY, 16, 0.1, 0.1) * 1;
this.heightMap[y][x] = height;
}
}
}
generateMesh() {
const geometry = new THREE.PlaneGeometry(
this.width,
this.height,
this.segments,
this.segments
);
const positions = geometry.attributes.position.array;
// 应用高度贴图
for (let i = 0; i < positions.length; i += 3) {
const x = Math.floor((i / 3) % (this.segments + 1));
const y = Math.floor((i / 3) / (this.segments + 1));
positions[i + 2] = this.heightMap[y][x]; // Z坐标
}
geometry.attributes.position.needsUpdate = true;
geometry.computeVertexNormals();
// 基于高度生成顶点颜色(该函数实际生成的是 color 属性,而非 UV)
this.generateVertexColors(geometry);
return geometry;
}
generateVertexColors(geometry) {
const positions = geometry.attributes.position.array;
const colors = new Float32Array(positions.length);
for (let i = 0; i < positions.length; i += 3) {
const height = positions[i + 2];
// 基于高度的颜色映射
if (height < 2) {
// 水面 - 蓝色
colors[i] = 0.2;
colors[i + 1] = 0.4;
colors[i + 2] = 0.8;
} else if (height < 8) {
// 沙滩 - 黄色
colors[i] = 0.9;
colors[i + 1] = 0.8;
colors[i + 2] = 0.4;
} else if (height < 15) {
// 草地 - 绿色
colors[i] = 0.3;
colors[i + 1] = 0.7;
colors[i + 2] = 0.2;
} else {
// 山峰 - 灰白色
colors[i] = 0.8;
colors[i + 1] = 0.8;
colors[i + 2] = 0.9;
}
}
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
}
}
2. L-系统植物生成
class LSystemGenerator {
constructor(axiom, rules, iterations) {
this.axiom = axiom;
this.rules = rules;
this.iterations = iterations;
this.current = axiom;
}
generate() {
for (let i = 0; i < this.iterations; i++) {
let next = '';
for (const char of this.current) {
next += this.rules[char] || char;
}
this.current = next;
}
return this.current;
}
createTreeGeometry(lString) {
const segments = [];
const stack = [];
const turtle = {
position: new THREE.Vector3(0, 0, 0),
direction: new THREE.Vector3(0, 1, 0),
up: new THREE.Vector3(0, 0, 1),
angle: Math.PI / 6, // 30度
length: 1,
radius: 0.1
};
for (const char of lString) {
switch (char) {
case 'F': // 前进并绘制
const endPos = turtle.position.clone().add(
turtle.direction.clone().multiplyScalar(turtle.length)
);
segments.push({
start: turtle.position.clone(),
end: endPos,
radius: turtle.radius
});
turtle.position = endPos;
turtle.radius *= 0.95; // 逐渐变细
break;
case '+': // 左转
turtle.direction.applyAxisAngle(turtle.up, turtle.angle);
break;
case '-': // 右转
turtle.direction.applyAxisAngle(turtle.up, -turtle.angle);
break;
case '[': // 保存状态
stack.push({
position: turtle.position.clone(),
direction: turtle.direction.clone(),
up: turtle.up.clone(),
radius: turtle.radius
});
break;
case ']': // 恢复状态
const state = stack.pop();
turtle.position = state.position;
turtle.direction = state.direction;
turtle.up = state.up;
turtle.radius = state.radius;
break;
}
}
return this.createMeshFromSegments(segments);
}
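// createMeshFromSegments:为每段生成一个圆柱并装入 Group(简化示意,返回 Group 而非单一几何体)
createMeshFromSegments(segments) {
const group = new THREE.Group();
const material = new THREE.MeshStandardMaterial({ color: 0x8b5a2b });
segments.forEach(seg => {
const direction = new THREE.Vector3().subVectors(seg.end, seg.start);
const length = direction.length();
const geometry = new THREE.CylinderGeometry(seg.radius * 0.9, seg.radius, length, 6);
const cylinder = new THREE.Mesh(geometry, material);
// 圆柱默认沿Y轴:先放到段中点,再旋转到段方向
cylinder.position.copy(seg.start).addScaledVector(direction, 0.5);
cylinder.quaternion.setFromUnitVectors(new THREE.Vector3(0, 1, 0), direction.normalize());
group.add(cylinder);
});
return group;
}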
}
// 使用示例:生成分形树
const treeRules = {
'F': 'F[+F]F[-F]F'
};
const lSystem = new LSystemGenerator('F', treeRules, 4);
const treeString = lSystem.generate();
const treeGeometry = lSystem.createTreeGeometry(treeString);
How to implement Physically Based Rendering (PBR) in Three.js?
How to implement Physically Based Rendering (PBR) in Three.js?
考察点:PBR渲染技术。
答案:
基于物理的渲染(PBR)模拟真实世界的光照和材质行为,通过物理准确的着色模型实现逼真的视觉效果。Three.js提供了完整的PBR材质系统和光照环境支持。
PBR材质系统:
1. MeshStandardMaterial配置
class PBRMaterialManager {
constructor() {
this.textureLoader = new THREE.TextureLoader();
this.envMapLoader = new THREE.CubeTextureLoader();
}
createPBRMaterial(materialConfig) {
const material = new THREE.MeshStandardMaterial();
// 基础属性
material.color = new THREE.Color(materialConfig.baseColor || 0xffffff);
material.metalness = materialConfig.metalness || 0.0;
material.roughness = materialConfig.roughness || 1.0;
// 纹理贴图
if (materialConfig.albedoMap) {
material.map = this.textureLoader.load(materialConfig.albedoMap);
material.map.encoding = THREE.sRGBEncoding;
}
if (materialConfig.normalMap) {
material.normalMap = this.textureLoader.load(materialConfig.normalMap);
material.normalScale = new THREE.Vector2(1, 1);
}
if (materialConfig.metalnessMap) {
material.metalnessMap = this.textureLoader.load(materialConfig.metalnessMap);
}
if (materialConfig.roughnessMap) {
material.roughnessMap = this.textureLoader.load(materialConfig.roughnessMap);
}
if (materialConfig.aoMap) {
material.aoMap = this.textureLoader.load(materialConfig.aoMap);
material.aoMapIntensity = materialConfig.aoIntensity || 1.0;
}
return material;
}
setupEnvironmentMapping(material, envMapPath) {
// HDR环境贴图
const pmremGenerator = new THREE.PMREMGenerator(renderer);
new RGBELoader()
.load(envMapPath, (texture) => {
const envMap = pmremGenerator.fromEquirectangular(texture).texture;
material.envMap = envMap;
material.envMapIntensity = 1.0;
// 场景环境
scene.environment = envMap;
scene.background = envMap;
texture.dispose();
pmremGenerator.dispose();
});
}
}
// 使用示例
const materialManager = new PBRMaterialManager();
const goldMaterial = materialManager.createPBRMaterial({
baseColor: 0xffd700,
metalness: 1.0,
roughness: 0.1,
normalMap: 'textures/gold_normal.jpg',
roughnessMap: 'textures/gold_roughness.jpg'
});
materialManager.setupEnvironmentMapping(goldMaterial, 'hdri/studio.hdr');
2. 高级PBR着色器
const advancedPBRMaterial = new THREE.ShaderMaterial({
uniforms: {
// 基础属性
albedo: { value: new THREE.Color(0.5, 0.5, 0.5) },
metallic: { value: 0.0 },
roughness: { value: 1.0 },
ao: { value: 1.0 },
// 纹理贴图
albedoMap: { value: null },
normalMap: { value: null },
metallicMap: { value: null },
roughnessMap: { value: null },
aoMap: { value: null },
// 环境光照
envMap: { value: null },
envMapIntensity: { value: 1.0 },
// 光照参数
lightPositions: { value: [] },
lightColors: { value: [] },
lightIntensities: { value: [] }
},
vertexShader: `
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vWorldPosition;
void main() {
vUv = uv;
vNormal = normalize(normalMatrix * normal);
vPosition = (modelViewMatrix * vec4(position, 1.0)).xyz;
vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;
gl_Position = projectionMatrix * vec4(vPosition, 1.0);
}
`,
fragmentShader: `
uniform vec3 albedo;
uniform float metallic;
uniform float roughness;
uniform float ao;
uniform sampler2D albedoMap;
uniform sampler2D normalMap;
uniform sampler2D metallicMap;
uniform sampler2D roughnessMap;
uniform sampler2D aoMap;
uniform samplerCube envMap;
uniform float envMapIntensity;
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vWorldPosition;
const float PI = 3.14159265359;
// 法向分布函数 (GGX)
float DistributionGGX(vec3 N, vec3 H, float roughness) {
float a = roughness * roughness;
float a2 = a * a;
float NdotH = max(dot(N, H), 0.0);
float NdotH2 = NdotH * NdotH;
float num = a2;
float denom = (NdotH2 * (a2 - 1.0) + 1.0);
denom = PI * denom * denom;
return num / denom;
}
// 几何函数
float GeometrySchlickGGX(float NdotV, float roughness) {
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float num = NdotV;
float denom = NdotV * (1.0 - k) + k;
return num / denom;
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness) {
float NdotV = max(dot(N, V), 0.0);
float NdotL = max(dot(N, L), 0.0);
float ggx2 = GeometrySchlickGGX(NdotV, roughness);
float ggx1 = GeometrySchlickGGX(NdotL, roughness);
return ggx1 * ggx2;
}
// 菲涅尔反射
vec3 fresnelSchlick(float cosTheta, vec3 F0) {
return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
void main() {
// 采样纹理
vec3 albedoColor = texture2D(albedoMap, vUv).rgb * albedo;
float metallicValue = texture2D(metallicMap, vUv).r * metallic;
float roughnessValue = texture2D(roughnessMap, vUv).r * roughness;
float aoValue = texture2D(aoMap, vUv).r * ao;
vec3 N = normalize(vNormal);
vec3 V = normalize(cameraPosition - vWorldPosition);
// 基础反射率
vec3 F0 = vec3(0.04);
F0 = mix(F0, albedoColor, metallicValue);
// 环境光照
vec3 F = fresnelSchlick(max(dot(N, V), 0.0), F0);
vec3 kS = F;
vec3 kD = vec3(1.0) - kS;
kD *= 1.0 - metallicValue;
vec3 irradiance = textureCube(envMap, N).rgb;
vec3 diffuse = irradiance * albedoColor;
vec3 ambient = (kD * diffuse) * aoValue;
gl_FragColor = vec4(ambient * envMapIntensity, 1.0);
}
`
});
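上面的片段着色器只输出了基于环境贴图的间接光照项;已定义的 GGX 分布、几何遮蔽和菲涅尔函数通常还要参与直接光照的 Cook-Torrance 计算。下面是一段示意性的补充(假设在 uniform 区声明了 uniform vec3 lightPositions[4]; uniform vec3 lightColors[4]; uniform float lightIntensities[4];),可追加在 main() 的环境光计算之后:
// 直接光照累加(示意)
vec3 Lo = vec3(0.0);
for (int i = 0; i < 4; i++) {
vec3 L = normalize(lightPositions[i] - vWorldPosition);
vec3 H = normalize(V + L);
float dist = length(lightPositions[i] - vWorldPosition);
vec3 radiance = lightColors[i] * lightIntensities[i] / (dist * dist);
// Cook-Torrance BRDF
float NDF = DistributionGGX(N, H, roughnessValue);
float G = GeometrySmith(N, V, L, roughnessValue);
vec3 Fd = fresnelSchlick(max(dot(H, V), 0.0), F0);
vec3 specular = (NDF * G * Fd) / (4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0) + 0.0001);
vec3 kDiff = (vec3(1.0) - Fd) * (1.0 - metallicValue);
float NdotL = max(dot(N, L), 0.0);
Lo += (kDiff * albedoColor / PI + specular) * radiance * NdotL;
}
gl_FragColor = vec4(ambient * envMapIntensity + Lo, 1.0);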
What are the performance monitoring and debugging strategies for Three.js applications?
What are the performance monitoring and debugging strategies for Three.js applications?
考察点:性能监控能力。
答案:
Three.js应用的性能监控需要从渲染性能、内存使用、GPU利用率等多个维度进行分析。建立完善的监控和调试体系有助于及时发现性能瓶颈并进行优化。
性能监控系统:
1. 实时性能监控
class PerformanceMonitor {
constructor() {
this.stats = {
fps: 0,
frameTime: 0,
drawCalls: 0,
triangles: 0,
geometries: 0,
textures: 0,
memoryUsage: 0
};
this.frameCount = 0;
this.lastTime = performance.now();
this.fpsHistory = [];
this.frameTimeHistory = [];
this.setupGUI();
}
update(renderer) {
const currentTime = performance.now();
const deltaTime = currentTime - this.lastTime;
this.frameCount++;
this.stats.frameTime = deltaTime;
this.frameTimeHistory.push(deltaTime);
// 每秒更新一次FPS
if (this.frameCount % 60 === 0) {
this.stats.fps = 1000 / (this.frameTimeHistory.reduce((a, b) => a + b) / this.frameTimeHistory.length);
this.fpsHistory.push(this.stats.fps);
// 保持历史记录长度
if (this.fpsHistory.length > 120) {
this.fpsHistory.shift();
}
if (this.frameTimeHistory.length > 60) {
this.frameTimeHistory.length = 0;
}
}
// 渲染统计
const info = renderer.info;
this.stats.drawCalls = info.render.calls;
this.stats.triangles = info.render.triangles;
this.stats.geometries = info.memory.geometries;
this.stats.textures = info.memory.textures;
// 内存使用估算
this.estimateMemoryUsage(renderer);
this.lastTime = currentTime;
this.updateDisplay();
}
estimateMemoryUsage(renderer) {
const info = renderer.info;
// 估算GPU内存使用
let textureMemory = 0;
let geometryMemory = 0;
// 简化的内存计算
textureMemory = info.memory.textures * 4 * 1024 * 1024; // 假设每个纹理4MB
geometryMemory = info.memory.geometries * 1024 * 1024; // 假设每个几何体1MB
this.stats.memoryUsage = (textureMemory + geometryMemory) / (1024 * 1024); // MB
}
setupGUI() {
// 创建性能显示面板
this.panel = document.createElement('div');
this.panel.style.position = 'fixed';
this.panel.style.top = '10px';
this.panel.style.left = '10px';
this.panel.style.backgroundColor = 'rgba(0,0,0,0.8)';
this.panel.style.color = 'white';
this.panel.style.padding = '10px';
this.panel.style.fontSize = '12px';
this.panel.style.fontFamily = 'monospace';
document.body.appendChild(this.panel);
}
updateDisplay() {
this.panel.innerHTML = `
FPS: ${this.stats.fps.toFixed(1)}
Frame Time: ${this.stats.frameTime.toFixed(2)}ms
Draw Calls: ${this.stats.drawCalls}
Triangles: ${this.stats.triangles.toLocaleString()}
Geometries: ${this.stats.geometries}
Textures: ${this.stats.textures}
GPU Memory: ${this.stats.memoryUsage.toFixed(1)}MB
`;
// 性能警告
if (this.stats.fps < 30) {
this.panel.style.backgroundColor = 'rgba(255,0,0,0.8)';
} else if (this.stats.fps < 50) {
this.panel.style.backgroundColor = 'rgba(255,165,0,0.8)';
} else {
this.panel.style.backgroundColor = 'rgba(0,0,0,0.8)';
}
}
getPerformanceReport() {
return {
averageFPS: this.fpsHistory.reduce((a, b) => a + b, 0) / this.fpsHistory.length,
minFPS: Math.min(...this.fpsHistory),
maxFPS: Math.max(...this.fpsHistory),
currentStats: { ...this.stats }
};
}
}
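监控器在渲染循环中每帧调用一次 update 即可生效(示意):
const monitor = new PerformanceMonitor();
function animate() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
monitor.update(renderer); // 渲染之后读取 renderer.info 的统计数据
}
animate();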
2. GPU性能分析
class GPUProfiler {
constructor(renderer) {
this.renderer = renderer;
this.gl = renderer.getContext();
this.timerExt = null;
this.queries = [];
this.results = new Map();
this.initTimerExtension();
}
initTimerExtension() {
// GPU计时依赖 disjoint timer query 扩展
// WebGL 2.0 使用 EXT_disjoint_timer_query_webgl2(配合 gl.createQuery/beginQuery)
// WebGL 1.0 的 EXT_disjoint_timer_query 方法名带 EXT 后缀,此处从略
if (this.gl instanceof WebGL2RenderingContext) {
this.timerExt = this.gl.getExtension('EXT_disjoint_timer_query_webgl2');
} else {
this.timerExt = null;
}
}
beginQuery(label) {
if (!this.timerExt) return null;
const query = this.gl.createQuery();
this.gl.beginQuery(this.timerExt.TIME_ELAPSED_EXT, query);
this.queries.push({ label, query, startTime: performance.now() });
return query;
}
endQuery() {
if (!this.timerExt) return;
this.gl.endQuery(this.timerExt.TIME_ELAPSED_EXT);
}
getResults() {
if (!this.timerExt) return this.results;
// 从后往前遍历,避免在迭代过程中 splice 导致跳过元素
for (let i = this.queries.length - 1; i >= 0; i--) {
const queryInfo = this.queries[i];
const available = this.gl.getQueryParameter(queryInfo.query, this.gl.QUERY_RESULT_AVAILABLE);
if (available) {
const gpuTime = this.gl.getQueryParameter(queryInfo.query, this.gl.QUERY_RESULT) / 1000000; // 纳秒转毫秒
this.results.set(queryInfo.label, {
gpuTime: gpuTime,
cpuTime: performance.now() - queryInfo.startTime
});
this.gl.deleteQuery(queryInfo.query);
this.queries.splice(i, 1);
}
}
return this.results;
}
}
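GPU 查询是异步的,结果往往要在之后的若干帧才可读取,典型用法如下(示意):
const profiler = new GPUProfiler(renderer);
function animate() {
requestAnimationFrame(animate);
profiler.beginQuery('mainPass');
renderer.render(scene, camera);
profiler.endQuery();
const results = profiler.getResults(); // 异步结果,可能滞后若干帧
const mainPass = results.get('mainPass');
if (mainPass) {
console.log(`mainPass GPU耗时: ${mainPass.gpuTime.toFixed(2)}ms`);
}
}
animate();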
How to implement deep integration between Three.js and other front-end frameworks?
How to implement deep integration between Three.js and other front-end frameworks?
考察点:技术栈整合。
答案:
Three.js与前端框架的集成需要处理生命周期管理、状态同步、事件通信等问题。不同框架有各自的集成模式和最佳实践。
React集成方案:
1. React + Three.js Hook
import React, { useEffect, useRef, useState } from 'react';
import * as THREE from 'three';
// 自定义Hook
function useThreeJS(initFn, updateFn, deps = []) {
const mountRef = useRef();
const sceneRef = useRef();
const cameraRef = useRef();
const rendererRef = useRef();
const animationIdRef = useRef();
useEffect(() => {
// 初始化场景
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
mountRef.current.appendChild(renderer.domElement);
sceneRef.current = scene;
cameraRef.current = camera;
rendererRef.current = renderer;
// 执行初始化函数
if (initFn) {
initFn(scene, camera, renderer);
}
// 渲染循环
const animate = () => {
if (updateFn) {
updateFn(scene, camera, renderer);
}
renderer.render(scene, camera);
animationIdRef.current = requestAnimationFrame(animate);
};
animate();
// 清理函数
return () => {
if (animationIdRef.current) {
cancelAnimationFrame(animationIdRef.current);
}
// 清理Three.js资源
scene.traverse((child) => {
if (child.geometry) child.geometry.dispose();
if (child.material) {
if (Array.isArray(child.material)) {
child.material.forEach(mat => mat.dispose());
} else {
child.material.dispose();
}
}
});
renderer.dispose();
if (mountRef.current && renderer.domElement) {
mountRef.current.removeChild(renderer.domElement);
}
};
}, deps);
return {
mountRef,
scene: sceneRef.current,
camera: cameraRef.current,
renderer: rendererRef.current
};
}
// React组件
function ThreeScene({ data, onObjectClick }) {
const [selectedObject, setSelectedObject] = useState(null);
const { mountRef, scene, camera, renderer } = useThreeJS(
// 初始化函数
(scene, camera, renderer) => {
camera.position.z = 5;
// 添加基础光照
const ambientLight = new THREE.AmbientLight(0x404040);
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(1, 1, 1);
scene.add(ambientLight, directionalLight);
},
// 更新函数
(scene, camera, renderer) => {
// 基于React状态更新场景
scene.children.forEach(child => {
if (child.userData.selectable && child === selectedObject) {
child.material.color.setHex(0xff0000);
} else if (child.userData.selectable) {
child.material.color.setHex(0x00ff00);
}
});
},
[data, selectedObject] // 依赖项
);
// 数据变化时更新场景
useEffect(() => {
if (!scene || !data) return;
// 清除旧对象
const objectsToRemove = scene.children.filter(child => child.userData.fromData);
objectsToRemove.forEach(obj => scene.remove(obj));
// 添加新对象
data.forEach((item, index) => {
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshStandardMaterial({ color: item.color });
const cube = new THREE.Mesh(geometry, material);
cube.position.set(item.x, item.y, item.z);
cube.userData = { fromData: true, selectable: true, data: item };
scene.add(cube);
});
}, [scene, data]);
// 处理点击事件
useEffect(() => {
if (!renderer) return;
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
const handleClick = (event) => {
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
raycaster.setFromCamera(mouse, camera); // 使用Hook返回的相机,而不是在场景子节点中查找
const intersects = raycaster.intersectObjects(
scene.children.filter(child => child.userData.selectable)
);
if (intersects.length > 0) {
const clickedObject = intersects[0].object;
setSelectedObject(clickedObject);
if (onObjectClick) {
onObjectClick(clickedObject.userData.data);
}
}
};
renderer.domElement.addEventListener('click', handleClick);
return () => {
renderer.domElement.removeEventListener('click', handleClick);
};
}, [renderer, scene, camera, onObjectClick]);
return <div ref={mountRef} style={{ width: '100%', height: '100vh' }} />;
}
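该组件在父组件中按普通 React 组件使用,数据与点击回调通过 props 传入(示意):
function App() {
const [points] = useState([
{ x: -2, y: 0, z: 0, color: 0x00ff00 },
{ x: 2, y: 1, z: -1, color: 0x0000ff }
]);
return (
<ThreeScene
data={points}
onObjectClick={(item) => console.log('clicked:', item)}
/>
);
}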
2. Vue.js集成方案
<template>
<div>
<div ref="threeContainer" class="three-container"></div>
<div class="controls">
<button @click="addCube">添加立方体</button>
<button @click="clearScene">清空场景</button>
</div>
</div>
</template>
<script>
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
export default {
name: 'ThreeViewer',
props: {
sceneData: {
type: Array,
default: () => []
}
},
data() {
return {
scene: null,
camera: null,
renderer: null,
controls: null,
animationId: null
};
},
mounted() {
this.initThree();
this.createScene();
this.animate();
},
beforeUnmount() {
this.cleanup();
},
watch: {
sceneData: {
handler: 'updateSceneFromData',
deep: true
}
},
methods: {
initThree() {
// 创建场景
this.scene = new THREE.Scene();
this.scene.background = new THREE.Color(0xf0f0f0);
// 创建相机
this.camera = new THREE.PerspectiveCamera(
75,
this.$refs.threeContainer.clientWidth / this.$refs.threeContainer.clientHeight,
0.1,
1000
);
this.camera.position.set(5, 5, 5);
// 创建渲染器
this.renderer = new THREE.WebGLRenderer({ antialias: true });
this.renderer.setSize(
this.$refs.threeContainer.clientWidth,
this.$refs.threeContainer.clientHeight
);
this.renderer.shadowMap.enabled = true;
// 添加到DOM
this.$refs.threeContainer.appendChild(this.renderer.domElement);
// 添加控制器
this.controls = new OrbitControls(this.camera, this.renderer.domElement);
this.controls.enableDamping = true;
// 响应式处理
window.addEventListener('resize', this.onWindowResize);
},
createScene() {
// 添加光照
const ambientLight = new THREE.AmbientLight(0x404040, 0.6);
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
directionalLight.position.set(10, 10, 5);
directionalLight.castShadow = true;
this.scene.add(ambientLight, directionalLight);
// 添加地面
const groundGeometry = new THREE.PlaneGeometry(20, 20);
const groundMaterial = new THREE.MeshLambertMaterial({ color: 0xffffff });
const ground = new THREE.Mesh(groundGeometry, groundMaterial);
ground.rotation.x = -Math.PI / 2;
ground.receiveShadow = true;
ground.userData.ground = true; // 标记地面,clearScene 时据此保留
this.scene.add(ground);
},
updateSceneFromData() {
// 清除之前的数据对象
const objectsToRemove = [];
this.scene.traverse((child) => {
if (child.userData.fromVueData) {
objectsToRemove.push(child);
}
});
objectsToRemove.forEach(obj => this.scene.remove(obj));
// 根据props创建新对象
this.sceneData.forEach((item, index) => {
const geometry = new THREE.BoxGeometry(item.size || 1, item.size || 1, item.size || 1);
const material = new THREE.MeshStandardMaterial({
color: item.color || 0x00ff00
});
const cube = new THREE.Mesh(geometry, material);
cube.position.set(item.x || 0, item.y || 0, item.z || 0);
cube.castShadow = true;
cube.userData.fromVueData = true;
cube.userData.id = item.id;
this.scene.add(cube);
});
},
addCube() {
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshStandardMaterial({
color: Math.random() * 0xffffff
});
const cube = new THREE.Mesh(geometry, material);
cube.position.set(
(Math.random() - 0.5) * 10,
Math.random() * 5 + 0.5,
(Math.random() - 0.5) * 10
);
cube.castShadow = true;
this.scene.add(cube);
// 触发Vue事件
this.$emit('object-added', {
type: 'cube',
position: cube.position
});
},
clearScene() {
const objectsToRemove = [];
this.scene.traverse((child) => {
if (child.isMesh && !child.userData.ground) {
objectsToRemove.push(child);
}
});
objectsToRemove.forEach(obj => this.scene.remove(obj));
},
animate() {
this.animationId = requestAnimationFrame(this.animate);
this.controls.update();
this.renderer.render(this.scene, this.camera);
},
onWindowResize() {
const width = this.$refs.threeContainer.clientWidth;
const height = this.$refs.threeContainer.clientHeight;
this.camera.aspect = width / height;
this.camera.updateProjectionMatrix();
this.renderer.setSize(width, height);
},
cleanup() {
if (this.animationId) {
cancelAnimationFrame(this.animationId);
}
window.removeEventListener('resize', this.onWindowResize);
// 清理Three.js资源
this.scene.traverse((child) => {
if (child.geometry) child.geometry.dispose();
if (child.material) {
if (Array.isArray(child.material)) {
child.material.forEach(mat => mat.dispose());
} else {
child.material.dispose();
}
}
});
this.renderer.dispose();
}
}
};
</script>
<style scoped>
.three-container {
width: 100%;
height: 70vh;
border: 1px solid #ccc;
}
.controls {
margin-top: 10px;
}
button {
margin-right: 10px;
padding: 8px 16px;
background: #007bff;
color: white;
border: none;
border-radius: 4px;
cursor: pointer;
}
button:hover {
background: #0056b3;
}
</style>
What are the optimization and adaptation strategies for Three.js on mobile devices?
What are the optimization and adaptation strategies for Three.js on mobile devices?
考察点:移动端适配。
答案:
移动端3D应用面临GPU性能限制、电池消耗、触摸交互等挑战。需要从渲染优化、交互适配、性能监控等方面进行专门优化。
移动端渲染优化:
1. 自适应质量控制
class MobileOptimizer {
constructor(renderer, scene, camera) {
this.renderer = renderer;
this.scene = scene;
this.camera = camera;
this.isMobile = this.detectMobile();
this.deviceTier = this.detectDeviceTier();
this.adaptiveSettings = this.getAdaptiveSettings();
this.applyOptimizations();
}
detectMobile() {
return /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(navigator.userAgent);
}
detectDeviceTier() {
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) return 'low';
const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
const renderer = debugInfo ? gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) : '';
// 简化的GPU分级
if (renderer.includes('Mali') || renderer.includes('Adreno 3')) {
return 'low';
} else if (renderer.includes('Adreno 5') || renderer.includes('PowerVR')) {
return 'medium';
} else {
return 'high';
}
}
getAdaptiveSettings() {
const settings = {
low: {
pixelRatio: 0.75,
shadowMapSize: 512,
maxLights: 2,
particleCount: 100,
lodDistance: [10, 30],
antialias: false,
postProcessing: false
},
medium: {
pixelRatio: 1,
shadowMapSize: 1024,
maxLights: 4,
particleCount: 500,
lodDistance: [25, 75],
antialias: false,
postProcessing: true
},
high: {
pixelRatio: Math.min(window.devicePixelRatio, 2),
shadowMapSize: 2048,
maxLights: 8,
particleCount: 1000,
lodDistance: [50, 150],
antialias: true,
postProcessing: true
}
};
return settings[this.deviceTier];
}
applyOptimizations() {
// 设置像素比
this.renderer.setPixelRatio(this.adaptiveSettings.pixelRatio);
// 阴影优化
if (this.renderer.shadowMap.enabled) {
this.renderer.shadowMap.type = THREE.BasicShadowMap; // 移动端使用基础阴影
}
// 材质优化
this.optimizeMaterials();
// LOD设置
this.setupLOD();
// 性能监控
this.setupPerformanceMonitoring();
}
optimizeMaterials() {
this.scene.traverse((child) => {
if (child.isMesh && child.material) {
const material = child.material;
// 简化材质
if (this.deviceTier === 'low') {
if (material.isMeshStandardMaterial) {
// 替换为更简单的材质
const simpleMaterial = new THREE.MeshLambertMaterial({
color: material.color,
map: material.map
});
child.material = simpleMaterial;
material.dispose();
}
}
// 纹理优化
if (material.map && this.deviceTier !== 'high') {
material.map.minFilter = THREE.LinearFilter;
material.map.generateMipmaps = false;
}
}
});
}
setupLOD() {
const lodDistance = this.adaptiveSettings.lodDistance;
// 先收集再替换,避免在 traverse 回调中修改场景图
const candidates = [];
this.scene.traverse((child) => {
if (child.userData.needsLOD) candidates.push(child);
});
candidates.forEach((child) => {
const parent = child.parent; // addLevel 会把 child 挂到 LOD 节点下,先记录原父节点
const lod = new THREE.LOD();
lod.position.copy(child.position);
child.position.set(0, 0, 0);
// 高精度版本
lod.addLevel(child, 0);
// 中精度版本
if (child.userData.mediumLOD) {
lod.addLevel(child.userData.mediumLOD, lodDistance[0]);
}
// 低精度版本
if (child.userData.lowLOD) {
lod.addLevel(child.userData.lowLOD, lodDistance[1]);
}
// 用 LOD 节点替换原对象(add 会自动把对象从旧父节点移除)
parent.add(lod);
});
}
setupPerformanceMonitoring() {
let frameCount = 0;
let lastTime = performance.now();
const monitor = () => {
frameCount++;
const currentTime = performance.now();
if (currentTime - lastTime > 1000) { // 每秒检查
const fps = frameCount;
frameCount = 0;
lastTime = currentTime;
// 动态调整质量
if (fps < 25 && this.deviceTier !== 'low') {
this.downgradeQuality();
} else if (fps > 50 && this.deviceTier !== 'high') {
this.upgradeQuality();
}
}
requestAnimationFrame(monitor);
};
monitor();
}
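// downgradeQuality / upgradeQuality 的一种最小实现示意:只调节渲染分辨率
// (假设仅通过像素比升降级,实际项目中还可联动阴影、粒子数量等设置)
downgradeQuality() {
const current = this.renderer.getPixelRatio();
this.renderer.setPixelRatio(Math.max(0.5, current - 0.25));
}
upgradeQuality() {
const current = this.renderer.getPixelRatio();
this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, current + 0.25));
}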
}
2. 触摸交互适配
class MobileTouchControls {
constructor(camera, domElement) {
this.camera = camera;
this.domElement = domElement;
this.isUserInteracting = false;
this.rotateSpeed = 0.005;
this.zoomSpeed = 0.002;
this.panSpeed = 0.001;
this.spherical = new THREE.Spherical();
this.sphericalDelta = new THREE.Spherical();
this.target = new THREE.Vector3();
this.lastPosition = new THREE.Vector3().copy(camera.position);
this.setupEventListeners();
}
setupEventListeners() {
// passive: false 允许在处理函数内调用 preventDefault 阻止默认的滚动/缩放行为
this.domElement.addEventListener('touchstart', this.onTouchStart.bind(this), { passive: false });
this.domElement.addEventListener('touchmove', this.onTouchMove.bind(this), { passive: false });
this.domElement.addEventListener('touchend', this.onTouchEnd.bind(this), { passive: false });
}
onTouchStart(event) {
event.preventDefault(); // 阻止页面滚动等默认触摸行为
this.isUserInteracting = true;
if (event.touches.length === 1) {
// 单点触摸 - 旋转
this.rotateStart = {
x: event.touches[0].clientX,
y: event.touches[0].clientY
};
} else if (event.touches.length === 2) {
// 双点触摸 - 缩放和平移
this.zoomStart = this.getDistance(event.touches[0], event.touches[1]);
this.panStart = {
x: (event.touches[0].clientX + event.touches[1].clientX) / 2,
y: (event.touches[0].clientY + event.touches[1].clientY) / 2
};
}
}
onTouchMove(event) {
event.preventDefault();
if (!this.isUserInteracting) return;
if (event.touches.length === 1 && this.rotateStart) {
// 旋转控制
const deltaX = event.touches[0].clientX - this.rotateStart.x;
const deltaY = event.touches[0].clientY - this.rotateStart.y;
this.sphericalDelta.theta -= deltaX * this.rotateSpeed;
this.sphericalDelta.phi -= deltaY * this.rotateSpeed;
this.rotateStart.x = event.touches[0].clientX;
this.rotateStart.y = event.touches[0].clientY;
} else if (event.touches.length === 2) {
// 缩放控制
const distance = this.getDistance(event.touches[0], event.touches[1]);
const zoomDelta = (distance - this.zoomStart) * this.zoomSpeed;
this.camera.position.multiplyScalar(1 - zoomDelta);
this.zoomStart = distance;
// 平移控制
const panX = (event.touches[0].clientX + event.touches[1].clientX) / 2;
const panY = (event.touches[0].clientY + event.touches[1].clientY) / 2;
const deltaX = (panX - this.panStart.x) * this.panSpeed;
const deltaY = (panY - this.panStart.y) * this.panSpeed;
const panOffset = new THREE.Vector3(-deltaX, deltaY, 0);
panOffset.applyQuaternion(this.camera.quaternion);
this.camera.position.add(panOffset);
this.panStart.x = panX;
this.panStart.y = panY;
}
this.update();
}
onTouchEnd(event) {
this.isUserInteracting = false;
this.rotateStart = null;
}
getDistance(touch1, touch2) {
const dx = touch1.clientX - touch2.clientX;
const dy = touch1.clientY - touch2.clientY;
return Math.sqrt(dx * dx + dy * dy);
}
update() {
// 应用旋转
this.spherical.setFromVector3(this.camera.position.clone().sub(this.target));
this.spherical.theta += this.sphericalDelta.theta;
this.spherical.phi += this.sphericalDelta.phi;
// 限制phi角度
this.spherical.phi = Math.max(0.1, Math.min(Math.PI - 0.1, this.spherical.phi));
this.camera.position.setFromSpherical(this.spherical).add(this.target);
this.camera.lookAt(this.target);
// 重置delta
this.sphericalDelta.set(0, 0, 0);
}
}
实际应用:
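两者组合即可构成移动端的基础适配层(示意,假设已创建 renderer/scene/camera):
const optimizer = new MobileOptimizer(renderer, scene, camera);
const touchControls = new MobileTouchControls(camera, renderer.domElement);
function animate() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
}
animate();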
What is WebGL? How does it differ from traditional 2D Canvas?
What is WebGL? How does it differ from traditional 2D Canvas?
考察点:WebGL基础概念。
答案:
WebGL(Web Graphics Library)是一个基于OpenGL ES的JavaScript API,允许在浏览器中进行硬件加速的3D图形渲染。它通过GPU并行计算来实现高性能的2D和3D图形绘制,是现代Web 3D图形技术的核心。
主要区别:
渲染能力:
编程模式:
// 2D Canvas 示例
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
ctx.fillRect(10, 10, 100, 100);
// WebGL 示例
const gl = canvas.getContext('webgl');
const program = createShaderProgram(gl, vertexShader, fragmentShader);
gl.useProgram(program);
性能差异:
适用场景:
What are the main stages of the WebGL rendering pipeline?
What are the main stages of the WebGL rendering pipeline?
考察点:渲染管线理解。
答案:
WebGL渲染管线是图形数据从输入到最终像素输出的处理流程。它遵循现代GPU的可编程渲染管线架构,主要包括以下阶段:
主要阶段:
顶点处理阶段:
图元装配阶段:
光栅化阶段:
片段处理阶段:
测试与混合阶段:
数据流示例:
顶点数据 → 顶点着色器 → 图元装配 → 光栅化 → 片段着色器 → 帧缓冲区
理解渲染管线有助于优化性能和实现复杂的视觉效果。
What are shaders? What are the roles of vertex shaders and fragment shaders?
What are shaders? What are the roles of vertex shaders and fragment shaders?
考察点:着色器基础概念。
答案:
着色器(Shader)是运行在GPU上的小程序,用GLSL(OpenGL Shading Language)编写。它们是WebGL渲染管线中的可编程阶段,允许开发者自定义图形处理逻辑。
顶点着色器(Vertex Shader):
处理每个顶点的属性数据,主要职责包括:
// 顶点着色器示例
attribute vec3 position;
attribute vec2 texCoord;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec2 vTexCoord;
void main() {
vTexCoord = texCoord;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
片段着色器(Fragment Shader):
处理每个像素片段,决定最终的像素颜色:
// 片段着色器示例
precision mediump float;
uniform sampler2D texture;
varying vec2 vTexCoord;
void main() {
vec4 color = texture2D(texture, vTexCoord);
gl_FragColor = color;
}
工作流程:
What is the coordinate system in WebGL? How to perform coordinate transformations?
What is the coordinate system in WebGL? How to perform coordinate transformations?
考察点:坐标系统理解。
答案:
WebGL使用右手坐标系,坐标变换是3D图形渲染的核心。理解坐标系统和变换矩阵对实现正确的3D渲染至关重要。
坐标系统层次:
模型坐标系(Model Space):
世界坐标系(World Space):
视图坐标系(View Space):
裁剪坐标系(Clip Space):
坐标变换流程:
// 变换矩阵计算
const modelMatrix = mat4.create();
const viewMatrix = mat4.create();
const projectionMatrix = mat4.create();
// 模型变换
mat4.translate(modelMatrix, modelMatrix, [x, y, z]);
mat4.rotate(modelMatrix, modelMatrix, angle, [0, 1, 0]);
mat4.scale(modelMatrix, modelMatrix, [sx, sy, sz]);
// 视图变换
mat4.lookAt(viewMatrix, eyePosition, targetPosition, upVector);
// 投影变换
mat4.perspective(projectionMatrix, fov, aspect, near, far);
// 组合变换矩阵
const mvpMatrix = mat4.create();
mat4.multiply(mvpMatrix, projectionMatrix, viewMatrix);
mat4.multiply(mvpMatrix, mvpMatrix, modelMatrix);
常用变换类型:
实际应用:
How to draw a simple triangle in WebGL?
How to draw a simple triangle in WebGL?
考察点:基础绘制能力。
答案:
在WebGL中绘制三角形是最基础的渲染操作,涉及着色器编译、缓冲区创建、属性绑定等核心步骤。
主要步骤:
创建着色器程序:
// 顶点着色器源码
const vertexShaderSource = `
attribute vec2 position;
void main() {
gl_Position = vec4(position, 0.0, 1.0);
}
`;
// 片段着色器源码
const fragmentShaderSource = `
precision mediump float;
void main() {
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // 红色
}
`;
编译和链接着色器:
function createShader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
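// 注:生产代码应在此检查 gl.getShaderParameter(shader, gl.COMPILE_STATUS),失败时通过 gl.getShaderInfoLog 输出编译日志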
return shader;
}
function createProgram(gl, vertexShader, fragmentShader) {
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
return program;
}
准备顶点数据:
// 三角形顶点坐标
const vertices = new Float32Array([
0.0, 0.5, // 顶部
-0.5, -0.5, // 左下
0.5, -0.5 // 右下
]);
// 创建缓冲区
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
设置属性和绘制:
// 使用着色器程序
gl.useProgram(program);
// 获取属性位置
const positionLocation = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// 绘制三角形
gl.drawArrays(gl.TRIANGLES, 0, 3);
完整示例:
// 初始化WebGL上下文并绘制三角形
const canvas = document.getElementById('canvas');
const gl = canvas.getContext('webgl');
// ... 着色器和缓冲区设置 ...
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);
这个过程展示了WebGL渲染的基本流程:着色器→缓冲区→属性→绘制。
What are buffers in WebGL? What types are there?
What are buffers in WebGL? What types are there?
考察点:缓冲区系统。
答案:
缓冲区(Buffer)是WebGL中存储顶点数据和索引数据的GPU内存对象。它们是CPU和GPU之间数据传输的桥梁,用于高效地向着色器传递大量数据。
缓冲区类型:
顶点缓冲区(Vertex Buffer - ARRAY_BUFFER):
// 创建顶点缓冲区
const vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
const vertices = new Float32Array([
// 位置坐标 纹理坐标
-1.0, -1.0, 0.0, 0.0,
1.0, -1.0, 1.0, 0.0,
0.0, 1.0, 0.5, 1.0
]);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
索引缓冲区(Index Buffer - ELEMENT_ARRAY_BUFFER):
// 创建索引缓冲区
const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
const indices = new Uint16Array([
0, 1, 2, // 第一个三角形
0, 2, 3 // 第二个三角形
]);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
数据使用模式:
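bufferData 的第三个参数就是使用模式提示,它影响驱动对显存的分配与更新策略:
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW); // 写入一次、绘制多次(静态几何体)
gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW); // 反复修改、绘制多次(动画顶点数据)
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STREAM_DRAW); // 写入一次、绘制少量几次(流式数据)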
常见数据类型:
缓冲区操作流程:
// 1. 创建缓冲区
const buffer = gl.createBuffer();
// 2. 绑定缓冲区
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
// 3. 传输数据
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
// 4. 设置顶点属性指针
gl.vertexAttribPointer(location, size, type, normalized, stride, offset);
// 5. 启用顶点属性
gl.enableVertexAttribArray(location);
性能优化:
What are textures? How to use textures in WebGL?
What are textures? How to use textures in WebGL?
考察点:纹理系统基础。
答案:
纹理(Texture)是WebGL中用于在3D物体表面贴图的2D图像数据。它可以为几何体添加细节、颜色和材质效果,是实现逼真渲染的关键技术。
纹理使用步骤:
创建和配置纹理:
// 创建纹理对象
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// 设置纹理参数
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
加载纹理图像:
function loadTexture(gl, url) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// 创建临时像素
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0,
gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array([255, 0, 255, 255]));
const image = new Image();
image.onload = function() {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);
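// 注:WebGL 1.0 只能为宽高均为2的幂(POT)的纹理生成mipmap;NPOT纹理需改用 CLAMP_TO_EDGE 与 LINEAR 过滤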
};
image.src = url;
return texture;
}
在着色器中使用纹理:
// 顶点着色器
attribute vec2 position;
attribute vec2 texCoord;
varying vec2 vTexCoord;
void main() {
vTexCoord = texCoord;
gl_Position = vec4(position, 0.0, 1.0);
}
// 片段着色器
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexCoord;
void main() {
gl_FragColor = texture2D(uTexture, vTexCoord);
}
纹理参数设置:
包装模式(Wrapping):
gl.REPEAT: 重复纹理gl.CLAMP_TO_EDGE: 边缘拉伸gl.MIRRORED_REPEAT: 镜像重复过滤模式(Filtering):
gl.NEAREST: 最近邻过滤,像素化效果gl.LINEAR: 线性过滤,平滑效果常见纹理类型:
性能优化:
What are the matrix transformations in WebGL? What are their functions?
What are the matrix transformations in WebGL? What are their functions?
考察点:矩阵变换理解。
答案:
矩阵变换是WebGL中实现3D图形变换的数学基础。通过4×4矩阵的线性变换,可以实现物体的移动、旋转、缩放和投影等操作。
主要矩阵变换类型:
模型变换矩阵(Model Matrix):
// 平移矩阵
const translationMatrix = mat4.fromTranslation(mat4.create(), [x, y, z]);
// 旋转矩阵
const rotationMatrix = mat4.fromRotation(mat4.create(), angle, [0, 1, 0]);
// 缩放矩阵
const scaleMatrix = mat4.fromScaling(mat4.create(), [sx, sy, sz]);
// 组合变换矩阵
const modelMatrix = mat4.create();
mat4.multiply(modelMatrix, translationMatrix, rotationMatrix);
mat4.multiply(modelMatrix, modelMatrix, scaleMatrix);
视图变换矩阵(View Matrix):
// 摄像机视图矩阵
const viewMatrix = mat4.lookAt(
mat4.create(),
[eyeX, eyeY, eyeZ], // 摄像机位置
[targetX, targetY, targetZ], // 目标位置
[upX, upY, upZ] // 向上方向
);
投影变换矩阵(Projection Matrix):
// 透视投影矩阵
const perspectiveMatrix = mat4.perspective(
mat4.create(),
fieldOfView, // 视野角度
aspect, // 宽高比
nearPlane, // 近裁剪面
farPlane // 远裁剪面
);
// 正交投影矩阵
const orthographicMatrix = mat4.ortho(
mat4.create(),
left, right, bottom, top, near, far
);
矩阵变换的作用:
变换应用顺序:
// 标准变换链:模型 → 视图 → 投影
const mvpMatrix = mat4.create();
mat4.multiply(mvpMatrix, projectionMatrix, viewMatrix);
mat4.multiply(mvpMatrix, mvpMatrix, modelMatrix);
// 在顶点着色器中应用
// gl_Position = uMVPMatrix * vec4(position, 1.0);
实际应用场景:
性能优化:
How to handle user interaction events in WebGL?
How to handle user interaction events in WebGL?
考察点:交互事件处理。
答案:
WebGL本身不直接提供交互事件处理,需要结合HTML5事件系统和射线检测等技术来实现3D场景中的用户交互。
主要交互事件类型:
鼠标交互事件:
canvas.addEventListener('mousedown', (event) => {
const rect = canvas.getBoundingClientRect();
const x = event.clientX - rect.left;
const y = event.clientY - rect.top;
// 将屏幕坐标转换为WebGL坐标
const normalizedX = (x / canvas.width) * 2 - 1;
const normalizedY = -((y / canvas.height) * 2 - 1);
handleMouseClick(normalizedX, normalizedY);
});
canvas.addEventListener('mousemove', handleMouseMove);
canvas.addEventListener('wheel', handleMouseWheel);
触摸交互事件:
canvas.addEventListener('touchstart', (event) => {
event.preventDefault();
const touch = event.touches[0];
const rect = canvas.getBoundingClientRect();
const x = touch.clientX - rect.left;
const y = touch.clientY - rect.top;
handleTouchStart(x, y);
});
canvas.addEventListener('touchmove', handleTouchMove);
canvas.addEventListener('touchend', handleTouchEnd);
3D物体选择实现:
射线检测(Ray Casting):
function screenToWorldRay(mouseX, mouseY, viewMatrix, projectionMatrix) {
// 计算视图投影矩阵的逆矩阵,用于把NDC坐标变换回世界坐标
const inverseViewProjection = mat4.create();
mat4.multiply(inverseViewProjection, projectionMatrix, viewMatrix);
mat4.invert(inverseViewProjection, inverseViewProjection);
// NDC中的近平面点和远平面点
const nearPoint = vec4.fromValues(mouseX, mouseY, -1, 1);
const farPoint = vec4.fromValues(mouseX, mouseY, 1, 1);
vec4.transformMat4(nearPoint, nearPoint, inverseViewProjection);
vec4.transformMat4(farPoint, farPoint, inverseViewProjection);
// 透视除法(除以w分量)
vec4.scale(nearPoint, nearPoint, 1 / nearPoint[3]);
vec4.scale(farPoint, farPoint, 1 / farPoint[3]);
// 射线起点为近平面点,方向为近点指向远点的单位向量
const rayStart = vec3.fromValues(nearPoint[0], nearPoint[1], nearPoint[2]);
const rayDirection = vec3.create();
vec3.subtract(rayDirection, vec3.fromValues(farPoint[0], farPoint[1], farPoint[2]), rayStart);
vec3.normalize(rayDirection, rayDirection);
return { start: rayStart, direction: rayDirection };
}
包围盒检测:
function rayIntersectAABB(rayStart, rayDirection, aabbMin, aabbMax) {
const t1 = vec3.create();
const t2 = vec3.create();
vec3.subtract(t1, aabbMin, rayStart);
vec3.divide(t1, t1, rayDirection);
vec3.subtract(t2, aabbMax, rayStart);
vec3.divide(t2, t2, rayDirection);
const tMin = Math.max(Math.min(t1[0], t2[0]),
Math.min(t1[1], t2[1]),
Math.min(t1[2], t2[2]));
const tMax = Math.min(Math.max(t1[0], t2[0]),
Math.max(t1[1], t2[1]),
Math.max(t1[2], t2[2]));
return tMax >= 0 && tMin <= tMax;
}
常见交互模式:
摄像机控制:
物体操作:
性能优化:
What is the basic structure of a WebGL program?
What is the basic structure of a WebGL program?
考察点:程序结构理解。
答案:
WebGL程序遵循现代图形渲染的标准架构,包含初始化、资源管理、渲染循环等核心模块。理解这个结构有助于构建可维护的3D应用。
基本程序结构:
初始化阶段:
class WebGLApplication {
constructor(canvasId) {
this.canvas = document.getElementById(canvasId);
this.gl = this.canvas.getContext('webgl');
if (!this.gl) {
throw new Error('WebGL not supported');
}
this.initWebGL();
this.loadResources();
this.setupScene();
this.startRenderLoop();
}
}
着色器管理:
initWebGL() {
// 编译着色器
this.vertexShader = this.compileShader(this.gl.VERTEX_SHADER, vertexSource);
this.fragmentShader = this.compileShader(this.gl.FRAGMENT_SHADER, fragmentSource);
// 创建程序
this.program = this.createProgram(this.vertexShader, this.fragmentShader);
// 获取属性和uniform位置
this.programInfo = {
attribs: {
position: this.gl.getAttribLocation(this.program, 'position'),
normal: this.gl.getAttribLocation(this.program, 'normal'),
texCoord: this.gl.getAttribLocation(this.program, 'texCoord')
},
uniforms: {
modelViewMatrix: this.gl.getUniformLocation(this.program, 'uModelViewMatrix'),
projectionMatrix: this.gl.getUniformLocation(this.program, 'uProjectionMatrix'),
normalMatrix: this.gl.getUniformLocation(this.program, 'uNormalMatrix')
}
};
}
资源加载:
async loadResources() {
// 加载几何体数据
this.meshes = await this.loadMeshes();
// 加载纹理资源
this.textures = await this.loadTextures();
// 创建缓冲区
this.buffers = this.createBuffers();
}
场景设置:
setupScene() {
// 设置视口
this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
// 启用深度测试
this.gl.enable(this.gl.DEPTH_TEST);
// 设置清除颜色
this.gl.clearColor(0.0, 0.0, 0.0, 1.0);
// 初始化摄像机
this.camera = new Camera();
// 创建场景对象
this.scene = new Scene();
}
渲染循环:
startRenderLoop() {
const render = (currentTime) => {
this.update(currentTime);
this.draw();
requestAnimationFrame(render);
};
requestAnimationFrame(render);
}
update(deltaTime) {
// 更新动画
this.updateAnimations(deltaTime);
// 更新物理
this.updatePhysics(deltaTime);
// 更新摄像机
this.camera.update();
}
draw() {
// 清除画布
this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
// 使用着色器程序
this.gl.useProgram(this.program);
// 设置矩阵
this.setMatrices();
// 绘制场景对象
this.scene.render(this.gl, this.programInfo);
}
模块化架构:
错误处理和调试:
compileShader(type, source) {
const shader = this.gl.createShader(type);
this.gl.shaderSource(shader, source);
this.gl.compileShader(shader);
if (!this.gl.getShaderParameter(shader, this.gl.COMPILE_STATUS)) {
const error = this.gl.getShaderInfoLog(shader);
this.gl.deleteShader(shader);
throw new Error(`Shader compilation error: ${error}`);
}
return shader;
}
这种结构化设计确保了代码的可维护性、扩展性和性能优化。
What is the context creation and initialization process in WebGL?
What is the context creation and initialization process in WebGL?
考察点:上下文管理。
答案:
WebGL上下文创建和初始化是WebGL应用的基础,涉及硬件检测、扩展加载、状态设置等关键步骤。正确的初始化流程确保应用的兼容性和稳定性。
上下文创建流程:
获取WebGL上下文:
function createWebGLContext(canvas, options = {}) {
const contextNames = ['webgl2', 'webgl', 'experimental-webgl'];
let gl = null;
for (const name of contextNames) {
try {
gl = canvas.getContext(name, {
alpha: options.alpha !== false,
depth: options.depth !== false,
stencil: options.stencil === true,
antialias: options.antialias !== false,
premultipliedAlpha: options.premultipliedAlpha !== false,
preserveDrawingBuffer: options.preserveDrawingBuffer === true,
powerPreference: options.powerPreference || 'default'
});
if (gl) break;
} catch (e) {
console.warn(`Failed to create ${name} context:`, e);
}
}
if (!gl) {
throw new Error('WebGL is not supported');
}
return gl;
}
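Hypothetical usage of the helper above (the canvas selector and option values are assumptions):
const canvas = document.querySelector('canvas');
const gl = createWebGLContext(canvas, {
antialias: true,
powerPreference: 'high-performance'
});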
Checking WebGL support and extensions:
function checkWebGLSupport(gl) {
// Check basic WebGL support (guard the WebGL2 check so it does not
// throw in browsers where WebGL2RenderingContext is undefined)
const supported = {
webgl2: typeof WebGL2RenderingContext !== 'undefined' && gl instanceof WebGL2RenderingContext,
extensions: {},
limits: {}
};
// Check important extensions
const extensions = [
'OES_texture_float',
'OES_standard_derivatives',
'WEBGL_depth_texture',
'EXT_texture_filter_anisotropic'
];
extensions.forEach(name => {
supported.extensions[name] = gl.getExtension(name);
});
// Query WebGL limit parameters
supported.limits = {
maxTextureSize: gl.getParameter(gl.MAX_TEXTURE_SIZE),
maxTextureUnits: gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS),
maxVertexAttribs: gl.getParameter(gl.MAX_VERTEX_ATTRIBS),
maxVaryingVectors: gl.getParameter(gl.MAX_VARYING_VECTORS),
maxFragmentUniforms: gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS)
};
return supported;
}
Initializing WebGL state:
function initializeWebGLState(gl) {
// Set the viewport
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
// Enable depth testing
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
// Enable back-face culling
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);
// Configure blending
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// Set the clear values
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clearDepth(1.0);
// Set pixel-store parameters
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
}
Handling context loss:
function setupContextLossHandling(canvas, gl) {
canvas.addEventListener('webglcontextlost', (event) => {
event.preventDefault();
console.warn('WebGL context lost');
// Stop the render loop (renderLoop is assumed to hold the requestAnimationFrame id)
cancelAnimationFrame(renderLoop);
});
canvas.addEventListener('webglcontextrestored', () => {
console.log('WebGL context restored');
// Recreate GPU resources
reinitializeWebGL();
// Restart the render loop
startRenderLoop();
});
}
Performance-oriented context options:
const optimizedContextOptions = {
alpha: false, // disable the alpha channel for performance
depth: true, // enable the depth buffer
stencil: false, // enable the stencil buffer only when needed
antialias: false, // disable MSAA for performance
premultipliedAlpha: false, // avoid premultiplied alpha
preserveDrawingBuffer: false, // do not preserve the drawing buffer
powerPreference: 'high-performance' // prefer the high-performance GPU
};
Error handling and debugging (a small helper sketch follows):
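A minimal debug helper (a sketch, not from the original text); gl.getError() stalls the pipeline, so strip such checks from production builds:
function checkGLError(gl, label) {
const error = gl.getError();
if (error !== gl.NO_ERROR) {
console.error(`WebGL error after ${label}: 0x${error.toString(16)}`);
}
}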
A correct initialization flow is the foundation of a stable WebGL application and must account for edge cases as well as performance.
How to implement lighting effects for 3D objects in WebGL?
Focus: implementing a lighting system.
Answer:
Lighting in WebGL is computed in shaders, using physically inspired lighting models to simulate how light interacts with surfaces. The main components are the ambient, diffuse, and specular terms.
Basic lighting model (Phong):
Vertex shader - computing the lighting vectors:
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec2 aTexCoord;
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat3 uNormalMatrix;
uniform vec3 uLightPosition;
varying vec3 vNormal;
varying vec3 vLightDirection;
varying vec3 vViewDirection;
varying vec2 vTexCoord;
void main() {
vec4 worldPosition = uModelMatrix * vec4(aPosition, 1.0);
vec4 viewPosition = uViewMatrix * worldPosition;
vNormal = normalize(uNormalMatrix * aNormal);
vLightDirection = normalize(uLightPosition - worldPosition.xyz);
vViewDirection = normalize(-viewPosition.xyz);
vTexCoord = aTexCoord;
gl_Position = uProjectionMatrix * viewPosition;
}
Fragment shader - lighting computation:
precision mediump float;
uniform vec3 uLightColor;
uniform vec3 uAmbientColor;
uniform vec3 uMaterialColor;
uniform float uShininess;
uniform sampler2D uTexture;
varying vec3 vNormal;
varying vec3 vLightDirection;
varying vec3 vViewDirection;
varying vec2 vTexCoord;
void main() {
vec3 normal = normalize(vNormal);
vec3 lightDir = normalize(vLightDirection);
vec3 viewDir = normalize(vViewDirection);
// Ambient term
vec3 ambient = uAmbientColor;
// Diffuse term
float diffuseStrength = max(dot(normal, lightDir), 0.0);
vec3 diffuse = diffuseStrength * uLightColor;
// Specular term
vec3 reflectDir = reflect(-lightDir, normal);
float specularStrength = pow(max(dot(viewDir, reflectDir), 0.0), uShininess);
vec3 specular = specularStrength * uLightColor;
// Texture color
vec3 textureColor = texture2D(uTexture, vTexCoord).rgb;
// Final color
vec3 finalColor = (ambient + diffuse + specular) * textureColor * uMaterialColor;
gl_FragColor = vec4(finalColor, 1.0);
}
Supporting multiple lights:
// Managing multiple lights from JavaScript
class LightingSystem {
constructor() {
this.lights = [];
this.maxLights = 8;
}
addLight(type, position, color, intensity) {
if (this.lights.length >= this.maxLights) return;
this.lights.push({
type: type, // 'directional', 'point', 'spot'
position: position,
color: color,
intensity: intensity,
direction: [0, -1, 0], // for directional/spot lights
cutoff: 30.0 // for spot lights
});
}
updateUniforms(gl, program) {
const lightCount = Math.min(this.lights.length, this.maxLights);
gl.uniform1i(gl.getUniformLocation(program, 'uLightCount'), lightCount);
for (let i = 0; i < lightCount; i++) {
const light = this.lights[i];
const prefix = `uLights[${i}]`;
gl.uniform3fv(gl.getUniformLocation(program, `${prefix}.position`), light.position);
gl.uniform3fv(gl.getUniformLocation(program, `${prefix}.color`), light.color);
gl.uniform1f(gl.getUniformLocation(program, `${prefix}.intensity`), light.intensity);
}
}
}
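The GLSL side of this uniform layout is not shown above; a minimal sketch of the struct and accumulation loop it assumes (the struct fields mirror the JavaScript names; GLSL ES 1.00 requires constant loop bounds, hence looping to MAX_LIGHTS and breaking early):
#define MAX_LIGHTS 8
struct Light {
vec3 position;
vec3 color;
float intensity;
};
uniform Light uLights[MAX_LIGHTS];
uniform int uLightCount;
vec3 accumulateLights(vec3 normal, vec3 worldPos) {
vec3 result = vec3(0.0);
for (int i = 0; i < MAX_LIGHTS; i++) {
if (i >= uLightCount) break; // only the first uLightCount lights are active
vec3 lightDir = normalize(uLights[i].position - worldPos);
float diff = max(dot(normal, lightDir), 0.0);
result += uLights[i].color * uLights[i].intensity * diff;
}
return result;
}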
Advanced lighting techniques:
Normal mapping:
// Using a normal map in the fragment shader
vec3 normalMap = texture2D(uNormalTexture, vTexCoord).rgb * 2.0 - 1.0;
vec3 tangent = normalize(vTangent);
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 perturbedNormal = normalize(TBN * normalMap);
Light attenuation:
float distance = length(uLightPosition - vWorldPosition);
float attenuation = 1.0 / (1.0 + 0.09 * distance + 0.032 * distance * distance);
vec3 lightColor = uLightColor * attenuation;
Common light types: directional (parallel rays, e.g. sunlight), point (omnidirectional with distance attenuation), and spot (a cone with a cutoff angle).
Performance: move work to the vertex shader where acceptable, cap the light count, and consider deferred shading when many lights are required.
How do depth testing and blending work in WebGL?
Focus: render state management.
Answer:
Depth testing and blending are important stages of the WebGL pipeline, handling depth ordering between pixels and transparency effects. Understanding and configuring them correctly is essential for correct 3D rendering.
Depth testing:
Depth testing produces correct occlusion by comparing each incoming fragment's depth against the value stored in the depth buffer.
Enabling and configuring depth testing:
// Enable depth testing
gl.enable(gl.DEPTH_TEST);
// Set the depth comparison function
gl.depthFunc(gl.LEQUAL); // pass if less than or equal
// Control writes to the depth buffer
gl.depthMask(true); // allow depth writes
// Clear the depth buffer
gl.clearDepth(1.0);
gl.clear(gl.DEPTH_BUFFER_BIT);
Depth function types:
// Common depth functions
gl.depthFunc(gl.NEVER); // never passes
gl.depthFunc(gl.LESS); // passes if less (the WebGL default)
gl.depthFunc(gl.EQUAL); // passes if equal
gl.depthFunc(gl.LEQUAL); // passes if less than or equal
gl.depthFunc(gl.GREATER); // passes if greater
gl.depthFunc(gl.NOTEQUAL); // passes if not equal
gl.depthFunc(gl.GEQUAL); // passes if greater than or equal
gl.depthFunc(gl.ALWAYS); // always passes
Blending:
Blending handles transparent and semi-transparent surfaces by combining the source color with the destination color to produce the final pixel.
Enabling and configuring blending:
// Enable blending
gl.enable(gl.BLEND);
// Set the blend function
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// Set the blend equation
gl.blendEquation(gl.FUNC_ADD);
// Configure RGB and alpha blending separately
gl.blendFuncSeparate(
gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, // RGB blend
gl.ONE, gl.ONE_MINUS_SRC_ALPHA // alpha blend
);
Common blend modes:
// Standard alpha blending
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// Additive blending (glow effects)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
// Multiplicative blending (darkening/shadow effects)
gl.blendFunc(gl.DST_COLOR, gl.ZERO);
// Screen blending (brightening effects)
gl.blendFunc(gl.ONE_MINUS_DST_COLOR, gl.ONE);
Correct ordering for transparent rendering:
function renderTransparentObjects(objects) {
// 1. Render all opaque objects first
gl.enable(gl.DEPTH_TEST);
gl.depthMask(true);
gl.disable(gl.BLEND);
objects.opaque.forEach(obj => obj.render());
// 2. Sort transparent objects by distance (back to front)
objects.transparent.sort((a, b) => {
return b.distanceToCamera - a.distanceToCamera;
});
// 3. Render the transparent objects
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false); // keep testing depth but stop writing it
objects.transparent.forEach(obj => obj.render());
// 4. Restore depth writes
gl.depthMask(true);
}
Stencil testing:
Stencil testing adds another layer of per-pixel control and is commonly used for effects such as outlines, mirrors, and masking.
// Enable stencil testing
gl.enable(gl.STENCIL_TEST);
// Set the stencil function
gl.stencilFunc(gl.EQUAL, 1, 0xFF);
// Set the stencil operations
gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
// Clear the stencil buffer
gl.clearStencil(0);
gl.clear(gl.STENCIL_BUFFER_BIT);
Advanced rendering techniques:
Depth peeling:
renders multiple layers of transparency correctly by peeling one depth layer per pass; a minimal pass-loop sketch follows
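A minimal depth-peeling loop (a sketch under assumptions: two framebuffers with depth textures, a shader that discards fragments not strictly behind the previous layer via a uPrevDepth sampler, and a final compositing step; none of these helpers come from the original text):
function renderDepthPeeled(gl, scene, layers, numPasses = 4) {
for (let pass = 0; pass < numPasses; pass++) {
const target = layers[pass % 2];
const previous = layers[(pass + 1) % 2];
gl.bindFramebuffer(gl.FRAMEBUFFER, target.framebuffer);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// The shader samples the previous layer's depth and discards anything
// at or in front of it, peeling one layer per pass (pass 0 peels nothing)
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, pass === 0 ? null : previous.depthTexture);
scene.renderTransparent();
}
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
compositeLayers(layers, numPasses); // blend the peeled layers back to front
}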
Order-independent transparency (OIT):
avoids CPU-side sorting of transparent objects altogether (e.g. weighted blended OIT)
Early z-testing:
lets the GPU reject occluded fragments before the fragment shader runs
Performance tips: draw opaque geometry front to back to exploit early-z, minimize blend-state changes, and keep transparent overdraw low.
Correctly configuring depth testing and blending is the foundation of high-quality 3D rendering.
How to optimize the performance of WebGL applications? What are the common optimization techniques?
Focus: performance optimization strategies.
Answer:
Performance optimization is key to keeping a 3D WebGL application smooth. It must be approached from both the CPU and GPU side and touches every stage of the rendering pipeline.
CPU-side strategies:
Reducing draw calls:
// Batch rendering - merge multiple objects into a single buffer
class BatchRenderer {
constructor(gl) {
this.gl = gl;
this.vertices = [];
this.indices = [];
this.currentVertexCount = 0; // running vertex count used to offset indices
this.maxVertices = 65536;
}
addMesh(mesh, transform) {
if (this.vertices.length + mesh.vertices.length > this.maxVertices) {
this.flush();
}
// Apply the transform and append to the batch buffer
const transformedVertices = this.transformVertices(mesh.vertices, transform);
this.vertices.push(...transformedVertices);
this.indices.push(...mesh.indices.map(i => i + this.currentVertexCount));
this.currentVertexCount += mesh.vertices.length / 3;
}
flush() {
if (this.vertices.length === 0) return;
// Draw all batched geometry in a single call
this.updateBuffers();
this.gl.drawElements(this.gl.TRIANGLES, this.indices.length,
this.gl.UNSIGNED_SHORT, 0);
this.clear();
}
}
Instanced rendering:
// Instanced rendering via the ANGLE_instanced_arrays extension (WebGL1)
const instanceExt = gl.getExtension('ANGLE_instanced_arrays');
// Set up per-instance attributes
gl.bindBuffer(gl.ARRAY_BUFFER, instanceMatrixBuffer);
for (let i = 0; i < 4; i++) {
gl.enableVertexAttribArray(matrixLocation + i);
gl.vertexAttribPointer(matrixLocation + i, 4, gl.FLOAT, false, 64, i * 16);
instanceExt.vertexAttribDivisorANGLE(matrixLocation + i, 1);
}
// Instanced draw call
instanceExt.drawArraysInstancedANGLE(gl.TRIANGLES, 0, vertexCount, instanceCount);
Frustum culling:
class FrustumCuller {
constructor(camera) {
this.camera = camera;
this.frustumPlanes = new Array(6);
}
updateFrustum() {
const viewProjectionMatrix = mat4.create();
mat4.multiply(viewProjectionMatrix, this.camera.projectionMatrix, this.camera.viewMatrix);
// Extract the six clipping planes
this.extractPlanes(viewProjectionMatrix);
}
isVisible(boundingBox) {
for (const plane of this.frustumPlanes) {
if (this.distanceToPlane(boundingBox, plane) < 0) {
return false;
}
}
return true;
}
}
GPU-side strategies:
Texture optimization:
// A texture atlas reduces texture binds
class TextureAtlas {
constructor(gl, size = 1024) {
this.gl = gl;
this.size = size;
this.texture = this.createAtlasTexture();
this.regions = new Map();
this.packer = new RectanglePacker(size, size);
}
addTexture(name, image) {
const region = this.packer.pack(image.width, image.height);
if (region) {
this.gl.bindTexture(this.gl.TEXTURE_2D, this.texture);
this.gl.texSubImage2D(this.gl.TEXTURE_2D, 0,
region.x, region.y,
this.gl.RGBA, this.gl.UNSIGNED_BYTE, image);
this.regions.set(name, {
x: region.x / this.size,
y: region.y / this.size,
width: region.width / this.size,
height: region.height / this.size
});
}
}
}
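The createAtlasTexture() helper referenced above is assumed; a minimal version simply allocates an empty RGBA texture for texSubImage2D to write into:
createAtlasTexture() {
const gl = this.gl;
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Allocate size x size RGBA storage with no initial data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this.size, this.size, 0,
gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
return texture;
}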
LOD (level of detail) system:
class LODManager {
constructor() {
this.lodLevels = [
{ distance: 50, meshIndex: 0 }, // high-detail mesh
{ distance: 200, meshIndex: 1 }, // medium-detail mesh
{ distance: 500, meshIndex: 2 }, // low-detail mesh
{ distance: Infinity, meshIndex: 3 } // minimal mesh
];
}
selectLOD(object, cameraPosition) {
const distance = vec3.distance(object.position, cameraPosition);
for (const level of this.lodLevels) {
if (distance < level.distance) {
return object.meshes[level.meshIndex];
}
}
// Fall back to the lowest-detail mesh
const lowest = this.lodLevels[this.lodLevels.length - 1];
return object.meshes[lowest.meshIndex];
}
}
Shader optimization:
Precision:
// Choose the lowest precision that suffices; only one default precision
// per type is in effect at a time (qualifiers can also be set per variable)
precision mediump float; // sufficient for most computation
// lowp suits colors and similar data; use highp only where truly required
// Avoid expensive math where a cheaper form exists
float lengthSq = dot(v, v); // squared length instead of length()
float invLen = inversesqrt(dot(v, v)); // fast factor for normalization
Branch optimization:
// Avoid dynamic branching; blend with mix() instead
// Problematic pattern
if (useTexture) {
color = texture2D(sampler, uv);
} else {
color = materialColor;
}
// Optimized pattern (useTexture as a 0.0/1.0 float uniform)
vec4 texColor = texture2D(sampler, uv);
color = mix(materialColor, texColor, useTexture);
Memory management:
class ResourceManager {
constructor(gl) {
this.gl = gl;
this.textures = new Map();
this.buffers = new Map();
this.memoryUsage = 0;
this.maxMemory = 256 * 1024 * 1024; // 256MB
}
createTexture(name, width, height, format) {
// Check the memory budget before allocating
const size = width * height * this.getBytesPerPixel(format);
if (this.memoryUsage + size > this.maxMemory) {
this.freeOldestTexture();
}
const texture = this.gl.createTexture();
this.textures.set(name, { texture, size, lastUsed: Date.now() });
this.memoryUsage += size;
return texture;
}
freeOldestTexture() {
// Evict the least recently used texture
let oldest = null;
let oldestTime = Date.now();
for (const [name, data] of this.textures) {
if (data.lastUsed < oldestTime) {
oldest = name;
oldestTime = data.lastUsed;
}
}
if (oldest) {
this.deleteTexture(oldest);
}
}
}
Performance monitoring:
class PerformanceMonitor {
constructor() {
this.frameCount = 0;
this.lastTime = performance.now();
this.fps = 0;
this.frameTime = 0;
}
update() {
this.frameCount++;
const currentTime = performance.now();
this.frameTime = currentTime - this.lastTime;
if (this.frameCount % 60 === 0) {
this.fps = 1000 / this.frameTime;
console.log(`FPS: ${this.fps.toFixed(2)}, Frame Time: ${this.frameTime.toFixed(2)}ms`);
}
this.lastTime = currentTime;
}
}
Common optimization checklist: batch draw calls, cull aggressively, cap texture sizes and use mipmaps, minimize state and shader switches, avoid mid-frame GPU readbacks, and measure before and after every change. A draw-call counting sketch follows.
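A minimal draw-call counter, as referenced in the checklist (a sketch, not from the original text): it wraps the draw entry points so per-frame statistics come for free.
function instrumentDrawCalls(gl) {
const stats = { drawCalls: 0 };
const originalDrawElements = gl.drawElements.bind(gl);
const originalDrawArrays = gl.drawArrays.bind(gl);
gl.drawElements = (...args) => { stats.drawCalls++; originalDrawElements(...args); };
gl.drawArrays = (...args) => { stats.drawCalls++; originalDrawArrays(...args); };
return stats; // reset stats.drawCalls = 0 at the start of each frame
}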
How to implement multi-texturing and texture mapping in WebGL?
Focus: advanced texturing techniques.
Answer:
Multi-texturing and texture mapping are key techniques for complex material effects in WebGL. Combining several textures enables rich visuals such as normal mapping, environment mapping, and layered materials.
Binding and using multiple textures:
Setting up multiple texture units:
// Activate and bind several texture units
function bindMultipleTextures(gl, textures, program) {
// Diffuse map
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textures.diffuse);
gl.uniform1i(gl.getUniformLocation(program, 'uDiffuseMap'), 0);
// Normal map
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, textures.normal);
gl.uniform1i(gl.getUniformLocation(program, 'uNormalMap'), 1);
// Specular map
gl.activeTexture(gl.TEXTURE2);
gl.bindTexture(gl.TEXTURE_2D, textures.specular);
gl.uniform1i(gl.getUniformLocation(program, 'uSpecularMap'), 2);
// Environment map
gl.activeTexture(gl.TEXTURE3);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, textures.environment);
gl.uniform1i(gl.getUniformLocation(program, 'uEnvironmentMap'), 3);
}
Sampling multiple textures in the shader:
// Fragment shader - multi-texture material
precision mediump float;
uniform sampler2D uDiffuseMap;
uniform sampler2D uNormalMap;
uniform sampler2D uSpecularMap;
uniform samplerCube uEnvironmentMap;
varying vec2 vTexCoord;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vViewDirection;
varying vec3 vReflectDirection;
void main() {
// Diffuse color
vec3 diffuseColor = texture2D(uDiffuseMap, vTexCoord).rgb;
// Normal mapping
vec3 normalMap = texture2D(uNormalMap, vTexCoord).rgb * 2.0 - 1.0;
mat3 TBN = mat3(normalize(vTangent), normalize(vBitangent), normalize(vNormal));
vec3 normal = normalize(TBN * normalMap);
// Specular intensity
float specularStrength = texture2D(uSpecularMap, vTexCoord).r;
// Environment reflection
vec3 environmentColor = textureCube(uEnvironmentMap, vReflectDirection).rgb;
// Combine into the final color
vec3 finalColor = diffuseColor + specularStrength * environmentColor;
gl_FragColor = vec4(finalColor, 1.0);
}
Common texture mapping techniques:
Planar mapping:
// Derive texture coordinates from the world position
vec2 planarMapping(vec3 worldPos, vec3 planeNormal) {
vec3 tangent = normalize(cross(planeNormal, vec3(0.0, 1.0, 0.0)));
vec3 bitangent = cross(planeNormal, tangent);
float u = dot(worldPos, tangent);
float v = dot(worldPos, bitangent);
return vec2(u, v);
}
Spherical mapping:
// Spherical environment mapping
vec2 sphericalMapping(vec3 normal) {
float u = atan(normal.z, normal.x) / (2.0 * 3.14159) + 0.5;
float v = asin(normal.y) / 3.14159 + 0.5;
return vec2(u, v);
}
Cube mapping:
// Creating a cube map
function createCubeMap(gl, images) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
const faces = [
gl.TEXTURE_CUBE_MAP_POSITIVE_X, // right
gl.TEXTURE_CUBE_MAP_NEGATIVE_X, // left
gl.TEXTURE_CUBE_MAP_POSITIVE_Y, // top
gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, // bottom
gl.TEXTURE_CUBE_MAP_POSITIVE_Z, // front
gl.TEXTURE_CUBE_MAP_NEGATIVE_Z // back
];
faces.forEach((face, index) => {
gl.texImage2D(face, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[index]);
});
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
return texture;
}
Advanced texture techniques:
Procedural texture generation:
// Generating procedural textures in the shader
float checkerboard(vec2 uv, float frequency) {
vec2 grid = floor(uv * frequency);
float checker = mod(grid.x + grid.y, 2.0);
return checker;
}
float noise(vec2 uv) {
return fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453);
}
Texture animation:
// Scrolling texture animation
class TextureAnimator {
constructor() {
this.scrollSpeed = 0.01;
this.offset = [0, 0];
}
update(deltaTime) {
this.offset[0] += this.scrollSpeed * deltaTime;
this.offset[1] += this.scrollSpeed * deltaTime;
// Wrap the offsets to avoid precision drift
if (this.offset[0] > 1.0) this.offset[0] -= 1.0;
if (this.offset[1] > 1.0) this.offset[1] -= 1.0;
}
updateUniforms(gl, program) {
gl.uniform2fv(gl.getUniformLocation(program, 'uTextureOffset'), this.offset);
}
}
Texture compression and optimization:
// Using compressed texture formats
function loadCompressedTexture(gl, url, format) {
const texture = gl.createTexture();
fetch(url)
.then(response => response.arrayBuffer())
.then(data => {
gl.bindTexture(gl.TEXTURE_2D, texture);
// Check compressed-format support; width and height must be parsed
// from the file's container header (e.g. DDS), not assumed
const extension = gl.getExtension('WEBGL_compressed_texture_s3tc');
if (extension && format === 'DXT1') {
gl.compressedTexImage2D(gl.TEXTURE_2D, 0,
extension.COMPRESSED_RGBA_S3TC_DXT1_EXT,
width, height, 0, new Uint8Array(data));
}
// generateMipmap() cannot be used with compressed textures; each
// mip level has to be uploaded explicitly
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
});
return texture;
}
Performance tips: sort draws by texture to minimize binds, use atlases or texture arrays, prefer compressed formats where supported, and enable mipmaps for minified textures.
Multi-texturing gives WebGL applications rich visual expressiveness and is a foundation of modern 3D rendering.
What are framebuffers? How to use them?
Focus: framebuffer usage.
Answer:
A framebuffer is WebGL's mechanism for off-screen rendering. It lets you render into a texture instead of the screen, enabling post-processing, shadow mapping, reflections, and other advanced techniques.
Framebuffer basics:
A framebuffer is a container that can hold several attachments: one or more color attachments (textures or renderbuffers), a depth attachment, and a stencil attachment.
Creating and configuring a framebuffer:
function createFramebuffer(gl, width, height) {
// Create the framebuffer object
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
// Create the color texture attachment
const colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// Attach the color texture to the framebuffer
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
gl.TEXTURE_2D, colorTexture, 0);
// Create a depth renderbuffer
const depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
gl.RENDERBUFFER, depthBuffer);
// Check framebuffer completeness
const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
throw new Error(`Framebuffer not complete: ${status}`);
}
// Restore the default framebuffer
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
return {
framebuffer: framebuffer,
colorTexture: colorTexture,
depthBuffer: depthBuffer,
width: width,
height: height
};
}
Rendering off-screen with a framebuffer:
function renderToFramebuffer(gl, framebufferData, renderFunction) {
// Bind the framebuffer
gl.bindFramebuffer(gl.FRAMEBUFFER, framebufferData.framebuffer);
// Match the viewport to the target size
gl.viewport(0, 0, framebufferData.width, framebufferData.height);
// Clear the framebuffer
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Run the supplied render callback
renderFunction();
// Restore the default framebuffer and viewport
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
}
Multiple render targets (MRT):
// WebGL2 supports multiple color attachments
function createMRTFramebuffer(gl, width, height) {
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
const colorTextures = [];
const drawBuffers = [];
// Create several color attachments
for (let i = 0; i < 4; i++) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
gl.TEXTURE_2D, texture, 0);
colorTextures.push(texture);
drawBuffers.push(gl.COLOR_ATTACHMENT0 + i);
}
// Select the draw buffers
gl.drawBuffers(drawBuffers);
return { framebuffer, colorTextures };
}
Common use cases:
Post-processing effects:
class PostProcessing {
constructor(gl, width, height) {
this.gl = gl;
this.framebuffer = createFramebuffer(gl, width, height);
this.blurShader = createBlurShader(gl);
}
applyBlur(sceneTexture) {
// Pass 1: horizontal blur
this.renderToFramebuffer(() => {
this.gl.useProgram(this.blurShader);
this.gl.uniform2f(this.gl.getUniformLocation(this.blurShader, 'uDirection'), 1.0, 0.0);
this.drawFullscreenQuad(sceneTexture);
});
// Pass 2: vertical blur
this.renderToFramebuffer(() => {
this.gl.uniform2f(this.gl.getUniformLocation(this.blurShader, 'uDirection'), 0.0, 1.0);
this.drawFullscreenQuad(this.framebuffer.colorTexture);
});
}
}
Shadow mapping:
function createShadowMap(gl, size) {
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
// Create the depth texture (WebGL1 requires the WEBGL_depth_texture extension)
const depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0,
gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
gl.TEXTURE_2D, depthTexture, 0);
// Disable color output (drawBuffers/readBuffer are WebGL2 APIs;
// WebGL1 code typically just omits these calls)
gl.drawBuffers([gl.NONE]);
gl.readBuffer(gl.NONE);
return { framebuffer, depthTexture };
}
Environment mapping:
function renderToCubemap(gl, size, position, renderScene) {
const cubemapFaces = [
{ target: gl.TEXTURE_CUBE_MAP_POSITIVE_X, up: [0, -1, 0], dir: [1, 0, 0] },
{ target: gl.TEXTURE_CUBE_MAP_NEGATIVE_X, up: [0, -1, 0], dir: [-1, 0, 0] },
{ target: gl.TEXTURE_CUBE_MAP_POSITIVE_Y, up: [0, 0, 1], dir: [0, 1, 0] },
{ target: gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, up: [0, 0, -1], dir: [0, -1, 0] },
{ target: gl.TEXTURE_CUBE_MAP_POSITIVE_Z, up: [0, -1, 0], dir: [0, 0, 1] },
{ target: gl.TEXTURE_CUBE_MAP_NEGATIVE_Z, up: [0, -1, 0], dir: [0, 0, -1] }
];
const framebuffer = createFramebuffer(gl, size, size);
for (const face of cubemapFaces) {
// Point the camera at the current face
const viewMatrix = mat4.lookAt(mat4.create(), position,
vec3.add(vec3.create(), position, face.dir),
face.up);
// Render into the framebuffer
renderToFramebuffer(gl, framebuffer, () => {
renderScene(viewMatrix);
});
// Copy into the matching cube map face (the cube map must be bound)
gl.copyTexSubImage2D(face.target, 0, 0, 0, 0, 0, size, size);
}
}
Performance tips: reuse render targets across frames, size targets to what the effect actually needs, and avoid redundant framebuffer switches.
Framebuffers are the basic tool behind advanced rendering effects; mastering them is essential for complex 3D applications.
How to implement shadow effects in WebGL?
Focus: shadow rendering techniques.
Answer:
Shadows add a great deal of realism to a 3D scene. In WebGL, real-time shadows are mainly implemented with shadow mapping.
How shadow mapping works:
Shadow mapping is a two-pass technique: first, scene depth is rendered from the light's point of view into a shadow map; then the scene is rendered from the camera, and each fragment's light-space depth is compared against the shadow map to decide whether it is lit.
Implementation steps:
Creating the shadow map framebuffer:
function createShadowMapFramebuffer(gl, size = 1024) {
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
// Create the depth texture
const shadowTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, shadowTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0,
gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
// Set the texture parameters
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// Attach the depth texture
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
gl.TEXTURE_2D, shadowTexture, 0);
// Verify completeness
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
throw new Error('Shadow map framebuffer not complete');
}
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
return { framebuffer, shadowTexture, size };
}
Shadow map generation shaders:
// Vertex shader - shadow map generation
attribute vec3 aPosition;
uniform mat4 uLightViewProjectionMatrix;
uniform mat4 uModelMatrix;
void main() {
gl_Position = uLightViewProjectionMatrix * uModelMatrix * vec4(aPosition, 1.0);
}
// Fragment shader - shadow map generation
precision mediump float;
void main() {
// WebGL writes the depth value to the depth buffer automatically
gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
Shadow receiving shaders:
// Vertex shader - shadow receiving
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec2 aTexCoord;
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat4 uLightViewProjectionMatrix;
uniform mat3 uNormalMatrix;
varying vec3 vWorldPosition;
varying vec3 vNormal;
varying vec2 vTexCoord;
varying vec4 vLightSpacePosition;
void main() {
vec4 worldPosition = uModelMatrix * vec4(aPosition, 1.0);
vWorldPosition = worldPosition.xyz;
vNormal = normalize(uNormalMatrix * aNormal);
vTexCoord = aTexCoord;
vLightSpacePosition = uLightViewProjectionMatrix * worldPosition;
gl_Position = uProjectionMatrix * uViewMatrix * worldPosition;
}
// Fragment shader - shadow receiving
precision mediump float;
uniform sampler2D uDiffuseTexture;
uniform sampler2D uShadowMap;
uniform vec3 uLightDirection;
uniform vec3 uLightColor;
uniform vec3 uAmbientColor;
varying vec3 vWorldPosition;
varying vec3 vNormal;
varying vec2 vTexCoord;
varying vec4 vLightSpacePosition;
float calculateShadow(vec4 lightSpacePos) {
// Perspective divide
vec3 projCoords = lightSpacePos.xyz / lightSpacePos.w;
// Map into the [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// Outside the shadow map: treat as lit
if (projCoords.z > 1.0 || projCoords.x < 0.0 || projCoords.x > 1.0 ||
projCoords.y < 0.0 || projCoords.y > 1.0) {
return 0.0;
}
// Closest depth stored in the shadow map
float closestDepth = texture2D(uShadowMap, projCoords.xy).r;
// Depth of the current fragment
float currentDepth = projCoords.z;
// Bias to avoid shadow acne
float bias = 0.005;
// 1.0 = in shadow, 0.0 = lit
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
void main() {
vec3 normal = normalize(vNormal);
vec3 lightDir = normalize(-uLightDirection);
// Diffuse lighting
float diff = max(dot(normal, lightDir), 0.0);
vec3 diffuse = diff * uLightColor;
// Shadow factor
float shadow = calculateShadow(vLightSpacePosition);
vec3 lighting = uAmbientColor + (1.0 - shadow) * diffuse;
vec3 color = texture2D(uDiffuseTexture, vTexCoord).rgb;
gl_FragColor = vec4(color * lighting, 1.0);
}
Shadow render manager:
class ShadowRenderer {
constructor(gl, shadowMapSize = 1024) {
this.gl = gl;
this.shadowMapData = createShadowMapFramebuffer(gl, shadowMapSize);
this.lightViewMatrix = mat4.create();
this.lightProjectionMatrix = mat4.create();
this.lightViewProjectionMatrix = mat4.create();
}
updateLightMatrices(lightPosition, lightTarget, lightUp,
left, right, bottom, top, near, far) {
// Light view matrix
mat4.lookAt(this.lightViewMatrix, lightPosition, lightTarget, lightUp);
// Light projection matrix (orthographic, suits directional lights)
mat4.ortho(this.lightProjectionMatrix, left, right, bottom, top, near, far);
// Combined view-projection matrix
mat4.multiply(this.lightViewProjectionMatrix,
this.lightProjectionMatrix, this.lightViewMatrix);
}
renderShadowMap(objects, shadowMapShader) {
// Bind the shadow map framebuffer
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.shadowMapData.framebuffer);
this.gl.viewport(0, 0, this.shadowMapData.size, this.shadowMapData.size);
// Clear the depth buffer
this.gl.clear(this.gl.DEPTH_BUFFER_BIT);
// Cull front faces to reduce shadow artifacts
this.gl.cullFace(this.gl.FRONT);
// Use the shadow map shader
this.gl.useProgram(shadowMapShader);
this.gl.uniformMatrix4fv(
this.gl.getUniformLocation(shadowMapShader, 'uLightViewProjectionMatrix'),
false, this.lightViewProjectionMatrix
);
// Render every shadow-casting object
objects.forEach(obj => obj.render(shadowMapShader));
// Restore back-face culling
this.gl.cullFace(this.gl.BACK);
// Restore the default framebuffer
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);
}
}
Advanced shadow techniques:
PCF soft shadows (percentage closer filtering):
float calculatePCFShadow(vec4 lightSpacePos, sampler2D shadowMap) {
vec3 projCoords = lightSpacePos.xyz / lightSpacePos.w * 0.5 + 0.5;
float currentDepth = projCoords.z;
float bias = 0.005;
float shadow = 0.0;
// textureSize() needs GLSL ES 3.00; in WebGL1, pass the shadow map
// resolution in as a uniform instead (uShadowMapSize assumed here)
vec2 texelSize = 1.0 / uShadowMapSize;
// 3x3 PCF sampling
for(int x = -1; x <= 1; ++x) {
for(int y = -1; y <= 1; ++y) {
float pcfDepth = texture2D(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;
}
}
shadow /= 9.0;
return shadow;
}
Cascaded shadow maps (CSM):
handle shadows in large scenes by splitting the view frustum into several cascades, each rendered into its own shadow map; a split-computation sketch follows
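A minimal cascade-split computation (a sketch, not from the original text; it blends logarithmic and uniform splits, a common practical heuristic):
function computeCascadeSplits(near, far, cascadeCount, lambda = 0.75) {
const splits = [];
for (let i = 1; i <= cascadeCount; i++) {
const p = i / cascadeCount;
const logSplit = near * Math.pow(far / near, p); // logarithmic scheme
const uniformSplit = near + (far - near) * p; // uniform scheme
splits.push(lambda * logSplit + (1 - lambda) * uniformSplit);
}
return splits; // far boundary of each cascade
}
Each frame, an orthographic light projection is fitted to each cascade's sub-frustum and rendered into its own shadow map; the fragment shader then picks the cascade by view-space depth.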
Performance: render the shadow map at the lowest acceptable resolution, update it only when the light or the casters move, and cull objects against the light's frustum.
Shadow effects greatly improve the perceived realism of a 3D scene and are a core part of modern real-time rendering.
How to handle rendering of large amounts of geometric data in WebGL?
Focus: batch rendering techniques.
Answer:
Handling large amounts of geometry is the central performance challenge in WebGL. Batching, instancing, and LOD techniques can dramatically improve rendering performance for large scenes.
Batch rendering:
Static batching:
class StaticBatchRenderer {
constructor(gl) {
this.gl = gl;
this.vertexData = [];
this.indexData = [];
this.batches = [];
this.maxVertices = 65536;
}
addMesh(mesh, transform) {
const startVertex = this.vertexData.length / 8; // assumes 8 floats per vertex
// Start a new batch if this mesh would overflow the current one
if (startVertex + mesh.vertices.length / 8 > this.maxVertices) {
this.finalizeBatch();
}
// Apply the transform and append the vertex data
for (let i = 0; i < mesh.vertices.length; i += 8) {
const vertex = [
mesh.vertices[i], mesh.vertices[i+1], mesh.vertices[i+2], 1.0
];
const transformedVertex = vec4.transformMat4(vec4.create(), vertex, transform);
this.vertexData.push(
transformedVertex[0], transformedVertex[1], transformedVertex[2],
mesh.vertices[i+3], mesh.vertices[i+4], mesh.vertices[i+5], // normal
mesh.vertices[i+6], mesh.vertices[i+7] // UV
);
}
// Append the index data
mesh.indices.forEach(index => {
this.indexData.push(index + startVertex);
});
}
finalizeBatch() {
if (this.vertexData.length === 0) return;
// Create the VBO and IBO
const vbo = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, vbo);
this.gl.bufferData(this.gl.ARRAY_BUFFER, new Float32Array(this.vertexData),
this.gl.STATIC_DRAW);
const ibo = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ELEMENT_ARRAY_BUFFER, ibo);
this.gl.bufferData(this.gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(this.indexData),
this.gl.STATIC_DRAW);
this.batches.push({
vbo: vbo,
ibo: ibo,
indexCount: this.indexData.length
});
// Reset the staging arrays
this.vertexData = [];
this.indexData = [];
}
render(program) {
this.batches.forEach(batch => {
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, batch.vbo);
this.gl.bindBuffer(this.gl.ELEMENT_ARRAY_BUFFER, batch.ibo);
// Set up the vertex attributes
this.setupVertexAttributes(program);
// Draw
this.gl.drawElements(this.gl.TRIANGLES, batch.indexCount,
this.gl.UNSIGNED_SHORT, 0);
});
}
}
Dynamic batching:
class DynamicBatchRenderer {
constructor(gl, maxVertices = 10000) {
this.gl = gl;
this.maxVertices = maxVertices;
// Create dynamic buffers
this.vertexBuffer = gl.createBuffer();
this.indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, maxVertices * 32, gl.DYNAMIC_DRAW); // 32 bytes per vertex
this.vertices = new Float32Array(maxVertices * 8);
this.indices = new Uint16Array(maxVertices * 3);
this.vertexCount = 0;
this.indexCount = 0;
}
beginBatch() {
this.vertexCount = 0;
this.indexCount = 0;
}
addQuad(position, size, color, texture) {
if (this.vertexCount + 4 > this.maxVertices) {
this.flush();
this.beginBatch();
}
const startIndex = this.vertexCount;
// Append the four corner vertices
this.addVertex(position[0] - size, position[1] - size, 0.0, 1.0, color, texture);
this.addVertex(position[0] + size, position[1] - size, 1.0, 1.0, color, texture);
this.addVertex(position[0] + size, position[1] + size, 1.0, 0.0, color, texture);
this.addVertex(position[0] - size, position[1] + size, 0.0, 0.0, color, texture);
// Append the two triangles
this.indices[this.indexCount++] = startIndex;
this.indices[this.indexCount++] = startIndex + 1;
this.indices[this.indexCount++] = startIndex + 2;
this.indices[this.indexCount++] = startIndex;
this.indices[this.indexCount++] = startIndex + 2;
this.indices[this.indexCount++] = startIndex + 3;
}
flush() {
if (this.vertexCount === 0) return;
// Upload the staged data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.vertexBuffer);
this.gl.bufferSubData(this.gl.ARRAY_BUFFER, 0,
this.vertices.subarray(0, this.vertexCount * 8));
this.gl.bindBuffer(this.gl.ELEMENT_ARRAY_BUFFER, this.indexBuffer);
this.gl.bufferSubData(this.gl.ELEMENT_ARRAY_BUFFER, 0,
this.indices.subarray(0, this.indexCount));
// Draw
this.gl.drawElements(this.gl.TRIANGLES, this.indexCount,
this.gl.UNSIGNED_SHORT, 0);
}
}
Spatial data structures:
// An octree for spatial partitioning and culling
class Octree {
constructor(center, size, maxDepth = 5, maxObjects = 10) {
this.center = center;
this.size = size;
this.maxDepth = maxDepth;
this.maxObjects = maxObjects;
this.objects = [];
this.children = null;
}
insert(object) {
if (!this.contains(object.boundingBox)) return false;
if (this.objects.length < this.maxObjects || this.maxDepth === 0) {
this.objects.push(object);
return true;
}
if (!this.children) {
this.subdivide();
}
for (const child of this.children) {
if (child.insert(object)) return true;
}
this.objects.push(object);
return true;
}
query(frustum, result = []) {
if (!this.intersects(frustum)) return result;
// Collect objects stored at this node
for (const obj of this.objects) {
if (frustum.contains(obj.boundingBox)) {
result.push(obj);
}
}
// Recurse into the children
if (this.children) {
for (const child of this.children) {
child.query(frustum, result);
}
}
return result;
}
}
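The contains/intersects/subdivide helpers used above are assumed; a minimal subdivide() that creates the eight child octants could look like this:
subdivide() {
this.children = [];
const half = this.size / 2;
for (let i = 0; i < 8; i++) {
// Use the three low bits of i to pick the octant direction on each axis
const offset = [
(i & 1 ? 0.5 : -0.5) * half,
(i & 2 ? 0.5 : -0.5) * half,
(i & 4 ? 0.5 : -0.5) * half
];
const center = vec3.add(vec3.create(), this.center, offset);
this.children.push(new Octree(center, half, this.maxDepth - 1, this.maxObjects));
}
}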
GPU-driven (multi-draw) rendering:
class GPUDrivenRenderer {
constructor(gl) {
this.gl = gl;
// True indirect drawing (glMultiDrawArraysIndirect) does not exist in
// WebGL; the WEBGL_multi_draw extension is the closest equivalent and
// still submits many draws from a single JavaScript call
this.ext = gl.getExtension('WEBGL_multi_draw');
this.firsts = new Int32Array(0);
this.counts = new Int32Array(0);
}
setupDrawCommands(drawCommands) {
// Flatten the command list into the typed arrays the extension expects
this.firsts = new Int32Array(drawCommands.map(cmd => cmd.first));
this.counts = new Int32Array(drawCommands.map(cmd => cmd.count));
}
renderMultiDraw() {
if (this.ext) {
this.ext.multiDrawArraysWEBGL(this.gl.TRIANGLES,
this.firsts, 0, this.counts, 0, this.firsts.length);
} else {
// Fallback: one draw call per command
for (let i = 0; i < this.firsts.length; i++) {
this.gl.drawArrays(this.gl.TRIANGLES, this.firsts[i], this.counts[i]);
}
}
}
}
LOD system:
class LODSystem {
constructor() {
this.lodLevels = new Map();
}
addLODLevels(objectId, levels) {
// levels: [{distance: 100, mesh: highDetailMesh}, ...]
this.lodLevels.set(objectId, levels.sort((a, b) => a.distance - b.distance));
}
selectLOD(objectId, distanceToCamera) {
const levels = this.lodLevels.get(objectId);
if (!levels) return null;
for (const level of levels) {
if (distanceToCamera < level.distance) {
return level.mesh;
}
}
return levels[levels.length - 1].mesh; // lowest detail
}
renderWithLOD(objects, cameraPosition) {
const lodGroups = new Map();
// Group objects by their selected LOD mesh
for (const obj of objects) {
const distance = vec3.distance(obj.position, cameraPosition);
const mesh = this.selectLOD(obj.id, distance);
if (!lodGroups.has(mesh)) {
lodGroups.set(mesh, []);
}
lodGroups.get(mesh).push(obj);
}
// Batch-render each LOD group
for (const [mesh, group] of lodGroups) {
this.batchRender(mesh, group);
}
}
}
Performance strategy: combine these techniques - cull first, batch what survives, instance what repeats, and reduce detail with distance.
Efficiently rendering massive geometry means applying several optimizations together, keeping the frame rate smooth without sacrificing visual quality.
What is instanced rendering in WebGL? How to implement it?
Focus: instanced rendering.
Answer:
Instanced rendering is an efficient technique for drawing many copies of the same geometry. A single draw call renders many instances, each with its own transform matrix, color, and other attributes, greatly reducing CPU overhead and draw-call count.
Basic concept:
Instanced rendering reuses the same geometry data across many object instances; per-instance attributes stay constant within an instance but vary between instances.
Implementing instanced rendering in WebGL:
Getting the extension and setting up instance attributes:
class InstancedRenderer {
constructor(gl) {
this.gl = gl;
// Get the instancing extension (this functionality is core in WebGL2)
this.instanceExt = gl.getExtension('ANGLE_instanced_arrays');
if (!this.instanceExt) {
throw new Error('Instanced rendering not supported');
}
this.setupGeometry();
this.setupInstanceData();
}
setupGeometry() {
// Base geometry (e.g. a cube)
const vertices = new Float32Array([
// position normal UV
-1, -1, -1, 0, 0, -1, 0, 0,
1, -1, -1, 0, 0, -1, 1, 0,
1, 1, -1, 0, 0, -1, 1, 1,
-1, 1, -1, 0, 0, -1, 0, 1,
// ... vertices of the remaining faces
]);
this.vertexBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.vertexBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, vertices, this.gl.STATIC_DRAW);
}
setupInstanceData(instanceCount = 1000) {
this.instanceCount = instanceCount;
// Create the per-instance transform data
const instanceMatrices = new Float32Array(instanceCount * 16);
const instanceColors = new Float32Array(instanceCount * 3);
for (let i = 0; i < instanceCount; i++) {
// Random position and rotation
const position = [
(Math.random() - 0.5) * 100,
(Math.random() - 0.5) * 100,
(Math.random() - 0.5) * 100
];
const rotation = Math.random() * Math.PI * 2;
const scale = 0.5 + Math.random() * 1.5;
// Build the transform matrix
const matrix = mat4.create();
mat4.translate(matrix, matrix, position);
mat4.rotateY(matrix, matrix, rotation);
mat4.scale(matrix, matrix, [scale, scale, scale]);
// Store the matrix (16 floats)
for (let j = 0; j < 16; j++) {
instanceMatrices[i * 16 + j] = matrix[j];
}
// Random color
instanceColors[i * 3] = Math.random();
instanceColors[i * 3 + 1] = Math.random();
instanceColors[i * 3 + 2] = Math.random();
}
// Create the instance buffers
this.matrixBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.matrixBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, instanceMatrices, this.gl.STATIC_DRAW);
this.colorBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.colorBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, instanceColors, this.gl.STATIC_DRAW);
}
}
Instancing shaders:
// Vertex shader
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec2 aTexCoord;
// Instance attributes (a 4x4 matrix split into four vec4s)
attribute vec4 aInstanceMatrix0;
attribute vec4 aInstanceMatrix1;
attribute vec4 aInstanceMatrix2;
attribute vec4 aInstanceMatrix3;
attribute vec3 aInstanceColor;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
varying vec3 vNormal;
varying vec3 vColor;
varying vec2 vTexCoord;
void main() {
// Reassemble the instance matrix
mat4 instanceMatrix = mat4(
aInstanceMatrix0,
aInstanceMatrix1,
aInstanceMatrix2,
aInstanceMatrix3
);
// Apply the instance transform
vec4 worldPosition = instanceMatrix * vec4(aPosition, 1.0);
gl_Position = uProjectionMatrix * uViewMatrix * worldPosition;
// Transform the normal
mat3 normalMatrix = mat3(instanceMatrix);
vNormal = normalize(normalMatrix * aNormal);
vColor = aInstanceColor;
vTexCoord = aTexCoord;
}
// Fragment shader
precision mediump float;
uniform vec3 uLightDirection;
uniform sampler2D uTexture;
varying vec3 vNormal;
varying vec3 vColor;
varying vec2 vTexCoord;
void main() {
vec3 normal = normalize(vNormal);
float lighting = max(dot(normal, normalize(-uLightDirection)), 0.2);
vec3 textureColor = texture2D(uTexture, vTexCoord).rgb;
vec3 finalColor = vColor * textureColor * lighting;
gl_FragColor = vec4(finalColor, 1.0);
}
Setting up attributes and rendering:
render(program) {
this.gl.useProgram(program);
// Bind the per-vertex geometry attributes
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.vertexBuffer);
const positionLocation = this.gl.getAttribLocation(program, 'aPosition');
this.gl.enableVertexAttribArray(positionLocation);
this.gl.vertexAttribPointer(positionLocation, 3, this.gl.FLOAT, false, 32, 0);
const normalLocation = this.gl.getAttribLocation(program, 'aNormal');
this.gl.enableVertexAttribArray(normalLocation);
this.gl.vertexAttribPointer(normalLocation, 3, this.gl.FLOAT, false, 32, 12);
const texCoordLocation = this.gl.getAttribLocation(program, 'aTexCoord');
this.gl.enableVertexAttribArray(texCoordLocation);
this.gl.vertexAttribPointer(texCoordLocation, 2, this.gl.FLOAT, false, 32, 24);
// Set up the instance matrix attributes
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.matrixBuffer);
for (let i = 0; i < 4; i++) {
const location = this.gl.getAttribLocation(program, `aInstanceMatrix${i}`);
this.gl.enableVertexAttribArray(location);
this.gl.vertexAttribPointer(location, 4, this.gl.FLOAT, false, 64, i * 16);
// Divisor 1: the attribute advances once per instance
this.instanceExt.vertexAttribDivisorANGLE(location, 1);
}
// Set up the instance color attribute
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.colorBuffer);
const colorLocation = this.gl.getAttribLocation(program, 'aInstanceColor');
this.gl.enableVertexAttribArray(colorLocation);
this.gl.vertexAttribPointer(colorLocation, 3, this.gl.FLOAT, false, 12, 0);
this.instanceExt.vertexAttribDivisorANGLE(colorLocation, 1);
// Instanced draw call
this.instanceExt.drawArraysInstancedANGLE(
this.gl.TRIANGLES, 0, 36, this.instanceCount
);
// Reset the divisors so later draws are unaffected
for (let i = 0; i < 4; i++) {
const location = this.gl.getAttribLocation(program, `aInstanceMatrix${i}`);
this.instanceExt.vertexAttribDivisorANGLE(location, 0);
}
this.instanceExt.vertexAttribDivisorANGLE(colorLocation, 0);
}
Dynamic instancing:
class DynamicInstanceRenderer {
updateInstanceData(newInstanceData) {
// Update some or all of the instance data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.matrixBuffer);
this.gl.bufferSubData(this.gl.ARRAY_BUFFER, 0, newInstanceData.matrices);
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.colorBuffer);
this.gl.bufferSubData(this.gl.ARRAY_BUFFER, 0, newInstanceData.colors);
}
renderAnimatedInstances(time) {
// Animate the instance transforms
const matrices = new Float32Array(this.instanceCount * 16);
for (let i = 0; i < this.instanceCount; i++) {
const offset = time * 0.001 + i * 0.1;
const position = [
Math.sin(offset) * 10,
Math.cos(offset * 0.7) * 5,
Math.sin(offset * 1.3) * 8
];
const matrix = mat4.create();
mat4.translate(matrix, matrix, position);
mat4.rotateY(matrix, matrix, offset);
for (let j = 0; j < 16; j++) {
matrices[i * 16 + j] = matrix[j];
}
}
this.updateInstanceData({ matrices });
this.render();
}
}
Advanced instancing techniques:
// Instance culling in a compute shader. Note: compute shaders are not
// available in WebGL/WebGL2 (they require OpenGL ES 3.1 or WebGPU), so
// this is a forward-looking sketch
#version 310 es
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer InputBuffer {
mat4 inputMatrices[];
};
layout(std430, binding = 1) buffer OutputBuffer {
mat4 outputMatrices[];
};
layout(std430, binding = 2) buffer CounterBuffer {
uint visibleInstanceCount; // atomic counter used below
};
uniform mat4 uViewProjectionMatrix;
uniform vec3 uCameraPosition;
uniform float uMaxDistance;
void main() {
uint index = gl_GlobalInvocationID.x;
if (index >= inputMatrices.length()) return;
mat4 instanceMatrix = inputMatrices[index];
vec3 instancePosition = instanceMatrix[3].xyz;
// Distance culling
float distance = length(instancePosition - uCameraPosition);
if (distance > uMaxDistance) return;
// Frustum culling
vec4 clipPosition = uViewProjectionMatrix * vec4(instancePosition, 1.0);
if (clipPosition.w <= 0.0) return;
vec3 ndc = clipPosition.xyz / clipPosition.w;
if (any(lessThan(ndc, vec3(-1.0))) || any(greaterThan(ndc, vec3(1.0)))) {
return;
}
// Passed all culling tests: append to the output buffer
uint outputIndex = atomicAdd(visibleInstanceCount, 1u);
outputMatrices[outputIndex] = instanceMatrix;
}
Advantages of instanced rendering: thousands of objects in a single draw call, far less CPU-to-GPU traffic, and shared geometry memory.
Typical use cases: vegetation, particles, crowds, debris, and any scene with many repeated meshes.
Instanced rendering is one of the core techniques modern 3D engines use to handle large-scale scenes.
How to implement post-processing effects in WebGL?
Focus: post-processing techniques.
Answer:
Post-processing operates on the final image after the scene has been rendered. Using framebuffers and screen-space shaders, it enables bloom, blur, tone mapping, anti-aliasing, and many other effects.
Basic post-processing framework:
Render target management:
class PostProcessingManager {
constructor(gl, width, height) {
this.gl = gl;
this.width = width;
this.height = height;
// Create the render targets
this.renderTargets = {
scene: this.createRenderTarget(width, height),
temp1: this.createRenderTarget(width, height),
temp2: this.createRenderTarget(width, height),
half: this.createRenderTarget(width / 2, height / 2),
quarter: this.createRenderTarget(width / 4, height / 4)
};
// Fullscreen quad
this.fullscreenQuad = this.createFullscreenQuad();
// Post-processing shaders
this.shaders = {
blur: this.createBlurShader(),
bloom: this.createBloomShader(),
tonemap: this.createTonemapShader(),
fxaa: this.createFXAAShader()
};
}
createRenderTarget(width, height) {
const framebuffer = this.gl.createFramebuffer();
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, framebuffer);
// Color texture
const colorTexture = this.gl.createTexture();
this.gl.bindTexture(this.gl.TEXTURE_2D, colorTexture);
this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0,
this.gl.RGBA, this.gl.UNSIGNED_BYTE, null);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.LINEAR);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.LINEAR);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.framebufferTexture2D(this.gl.FRAMEBUFFER, this.gl.COLOR_ATTACHMENT0,
this.gl.TEXTURE_2D, colorTexture, 0);
// Depth renderbuffer
const depthBuffer = this.gl.createRenderbuffer();
this.gl.bindRenderbuffer(this.gl.RENDERBUFFER, depthBuffer);
this.gl.renderbufferStorage(this.gl.RENDERBUFFER, this.gl.DEPTH_COMPONENT16, width, height);
this.gl.framebufferRenderbuffer(this.gl.FRAMEBUFFER, this.gl.DEPTH_ATTACHMENT,
this.gl.RENDERBUFFER, depthBuffer);
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);
return { framebuffer, colorTexture, depthBuffer, width, height };
}
}
Creating the fullscreen quad:
createFullscreenQuad() {
const vertices = new Float32Array([
-1, -1, 0, 0, // bottom left
1, -1, 1, 0, // bottom right
-1, 1, 0, 1, // top left
1, 1, 1, 1 // top right
]);
const buffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, vertices, this.gl.STATIC_DRAW);
return { buffer, vertexCount: 4 };
}
renderFullscreenQuad(shader, uniforms = {}) {
this.gl.useProgram(shader);
// Set uniforms; sampler uniforms hold texture-unit indices and must be
// set with uniform1i, so a naming convention separates them here
for (const [name, value] of Object.entries(uniforms)) {
const location = this.gl.getUniformLocation(shader, name);
if (location !== null) {
if (typeof value === 'number') {
if (/Texture$|Map$/.test(name)) {
this.gl.uniform1i(location, value); // texture unit index
} else {
this.gl.uniform1f(location, value);
}
} else if (value.length === 2) {
this.gl.uniform2fv(location, value);
} else if (value.length === 3) {
this.gl.uniform3fv(location, value);
}
}
}
// Bind the vertex data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.fullscreenQuad.buffer);
const posLocation = this.gl.getAttribLocation(shader, 'aPosition');
const uvLocation = this.gl.getAttribLocation(shader, 'aTexCoord');
this.gl.enableVertexAttribArray(posLocation);
this.gl.vertexAttribPointer(posLocation, 2, this.gl.FLOAT, false, 16, 0);
this.gl.enableVertexAttribArray(uvLocation);
this.gl.vertexAttribPointer(uvLocation, 2, this.gl.FLOAT, false, 16, 8);
// Draw
this.gl.drawArrays(this.gl.TRIANGLE_STRIP, 0, 4);
}
Implementing common post-processing effects:
Gaussian blur:
// Blur vertex shader
attribute vec2 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main() {
vTexCoord = aTexCoord;
gl_Position = vec4(aPosition, 0.0, 1.0);
}
// Blur fragment shader
precision mediump float;
uniform sampler2D uTexture;
uniform vec2 uDirection;
uniform vec2 uResolution;
varying vec2 vTexCoord;
void main() {
vec2 texelSize = 1.0 / uResolution;
vec4 color = vec4(0.0);
// Gaussian weights (GLSL ES 1.00 has no array constructors, so the
// elements are assigned individually)
float weights[5];
weights[0] = 0.227027;
weights[1] = 0.1945946;
weights[2] = 0.1216216;
weights[3] = 0.054054;
weights[4] = 0.016216;
// Center texel
color += texture2D(uTexture, vTexCoord) * weights[0];
// Sample in both directions
for (int i = 1; i < 5; i++) {
vec2 offset = uDirection * texelSize * float(i);
color += texture2D(uTexture, vTexCoord + offset) * weights[i];
color += texture2D(uTexture, vTexCoord - offset) * weights[i];
}
gl_FragColor = color;
}
Bloom:
applyBloom(sceneTexture) {
// 1. Extract the bright regions
this.renderToTarget(this.renderTargets.temp1, () => {
this.gl.activeTexture(this.gl.TEXTURE0);
this.gl.bindTexture(this.gl.TEXTURE_2D, sceneTexture);
this.renderFullscreenQuad(this.shaders.bloom.extract, {
uTexture: 0,
uThreshold: 1.0
});
});
// 2. Downsample and blur horizontally
this.renderToTarget(this.renderTargets.half, () => {
this.gl.activeTexture(this.gl.TEXTURE0);
this.gl.bindTexture(this.gl.TEXTURE_2D, this.renderTargets.temp1.colorTexture);
this.renderFullscreenQuad(this.shaders.blur, {
uTexture: 0,
uDirection: [1.0, 0.0],
uResolution: [this.renderTargets.half.width, this.renderTargets.half.height]
});
});
// 3. Vertical blur
this.renderToTarget(this.renderTargets.quarter, () => {
this.gl.activeTexture(this.gl.TEXTURE0);
this.gl.bindTexture(this.gl.TEXTURE_2D, this.renderTargets.half.colorTexture);
this.renderFullscreenQuad(this.shaders.blur, {
uTexture: 0,
uDirection: [0.0, 1.0],
uResolution: [this.renderTargets.quarter.width, this.renderTargets.quarter.height]
});
});
// 4. Combine the scene with the bloom result
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);
this.gl.activeTexture(this.gl.TEXTURE0);
this.gl.bindTexture(this.gl.TEXTURE_2D, sceneTexture);
this.gl.activeTexture(this.gl.TEXTURE1);
this.gl.bindTexture(this.gl.TEXTURE_2D, this.renderTargets.quarter.colorTexture);
this.renderFullscreenQuad(this.shaders.bloom.combine, {
uSceneTexture: 0,
uBloomTexture: 1,
uBloomStrength: 0.8
});
}
FXAA anti-aliasing:
// FXAA fragment shader
precision mediump float;
uniform sampler2D uTexture;
uniform vec2 uResolution;
varying vec2 vTexCoord;
#define FXAA_SPAN_MAX 8.0
#define FXAA_REDUCE_MUL (1.0/FXAA_SPAN_MAX)
#define FXAA_REDUCE_MIN (1.0/128.0)
vec3 fxaa(sampler2D tex, vec2 fragCoord, vec2 resolution) {
vec2 inverseVP = 1.0 / resolution;
vec3 rgbNW = texture2D(tex, fragCoord + vec2(-1.0, -1.0) * inverseVP).rgb;
vec3 rgbNE = texture2D(tex, fragCoord + vec2(1.0, -1.0) * inverseVP).rgb;
vec3 rgbSW = texture2D(tex, fragCoord + vec2(-1.0, 1.0) * inverseVP).rgb;
vec3 rgbSE = texture2D(tex, fragCoord + vec2(1.0, 1.0) * inverseVP).rgb;
vec3 rgbM = texture2D(tex, fragCoord).rgb;
vec3 luma = vec3(0.299, 0.587, 0.114);
float lumaNW = dot(rgbNW, luma);
float lumaNE = dot(rgbNE, luma);
float lumaSW = dot(rgbSW, luma);
float lumaSE = dot(rgbSE, luma);
float lumaM = dot(rgbM, luma);
float lumaMin = min(lumaM, min(min(lumaNW, lumaNE), min(lumaSW, lumaSE)));
float lumaMax = max(lumaM, max(max(lumaNW, lumaNE), max(lumaSW, lumaSE)));
vec2 dir = vec2(-((lumaNW + lumaNE) - (lumaSW + lumaSE)),
((lumaNW + lumaSW) - (lumaNE + lumaSE)));
float dirReduce = max((lumaNW + lumaNE + lumaSW + lumaSE) * (0.25 * FXAA_REDUCE_MUL),
FXAA_REDUCE_MIN);
float rcpDirMin = 1.0 / (min(abs(dir.x), abs(dir.y)) + dirReduce);
dir = min(vec2(FXAA_SPAN_MAX), max(vec2(-FXAA_SPAN_MAX), dir * rcpDirMin)) * inverseVP;
vec3 rgbA = 0.5 * (texture2D(tex, fragCoord + dir * (1.0/3.0 - 0.5)).rgb +
texture2D(tex, fragCoord + dir * (2.0/3.0 - 0.5)).rgb);
vec3 rgbB = rgbA * 0.5 + 0.25 * (texture2D(tex, fragCoord + dir * -0.5).rgb +
texture2D(tex, fragCoord + dir * 0.5).rgb);
float lumaB = dot(rgbB, luma);
if ((lumaB < lumaMin) || (lumaB > lumaMax)) {
return rgbA;
} else {
return rgbB;
}
}
void main() {
gl_FragColor = vec4(fxaa(uTexture, vTexCoord, uResolution), 1.0);
}
Managing a post-processing chain:
class PostProcessChain {
constructor(postProcessor) {
this.postProcessor = postProcessor;
this.passes = [];
}
addPass(passConfig) {
this.passes.push(passConfig);
}
execute(inputTexture) {
let currentTexture = inputTexture;
let currentTarget = 0;
for (const pass of this.passes) {
const outputTarget = this.getNextTarget(currentTarget);
this.postProcessor.renderToTarget(outputTarget, () => {
this.postProcessor.gl.activeTexture(this.postProcessor.gl.TEXTURE0);
this.postProcessor.gl.bindTexture(this.postProcessor.gl.TEXTURE_2D, currentTexture);
this.postProcessor.renderFullscreenQuad(pass.shader, {
...pass.uniforms,
uTexture: 0
});
});
currentTexture = outputTarget.colorTexture;
currentTarget = 1 - currentTarget;
}
return currentTexture;
}
}
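Hypothetical usage of the chain above (the shader handles and getNextTarget's ping-pong target selection are assumptions):
const chain = new PostProcessChain(postProcessor);
chain.addPass({ shader: postProcessor.shaders.blur, uniforms: { uDirection: [1, 0] } });
chain.addPass({ shader: postProcessor.shaders.blur, uniforms: { uDirection: [0, 1] } });
chain.addPass({ shader: postProcessor.shaders.fxaa, uniforms: {} });
const finalTexture = chain.execute(sceneTexture);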
Performance tips: run heavy blurs at reduced resolution, merge passes where possible, and reuse ping-pong targets instead of allocating new ones per frame.
Post-processing is an important part of the modern 3D rendering pipeline and significantly raises the visual and artistic quality of the result.
How to design a high-performance WebGL rendering engine architecture?
Focus: engine architecture design.
Answer:
Designing a high-performance WebGL rendering engine involves modular architecture, performance optimization, resource management, and extensibility. A good architecture balances performance, maintainability, and feature completeness.
Core architecture design:
Layered architecture:
// Core engine architecture
class WebGLEngine {
constructor(canvas, options = {}) {
// Core layer
this.core = new EngineCore(canvas, options);
// Rendering layer
this.renderer = new Renderer(this.core.gl, options.renderer);
// Scene management layer
this.sceneManager = new SceneManager();
// Resource management layer
this.resourceManager = new ResourceManager(this.core.gl);
// Systems layer
this.systems = {
animation: new AnimationSystem(),
physics: new PhysicsSystem(),
input: new InputSystem(canvas),
audio: new AudioSystem()
};
// Utility layer
this.utils = {
math: new MathUtils(),
loader: new AssetLoader(),
profiler: new Profiler()
};
}
initialize() {
return Promise.all([
this.core.initialize(),
this.renderer.initialize(),
this.resourceManager.initialize(),
this.initializeSystems()
]);
}
update(deltaTime) {
// Update each subsystem
this.systems.input.update();
this.systems.physics.update(deltaTime);
this.systems.animation.update(deltaTime);
// Update the scene
this.sceneManager.update(deltaTime);
// Render
this.renderer.render(this.sceneManager.currentScene);
// Performance statistics
this.utils.profiler.endFrame();
}
}
Renderer architecture:
class Renderer {
constructor(gl, options) {
this.gl = gl;
this.renderQueue = new RenderQueue();
this.renderPasses = new Map();
// Render passes
this.setupRenderPasses();
// State management
this.stateManager = new StateManager(gl);
// Command buffer
this.commandBuffer = new CommandBuffer();
// Render statistics
this.stats = new RenderStats();
}
setupRenderPasses() {
// Depth pre-pass
this.renderPasses.set('depth-prepass', new DepthPrePass(this.gl));
// Shadow map pass
this.renderPasses.set('shadow-map', new ShadowMapPass(this.gl));
// Opaque pass
this.renderPasses.set('opaque', new OpaquePass(this.gl));
// Transparent pass
this.renderPasses.set('transparent', new TransparentPass(this.gl));
// Post-processing pass
this.renderPasses.set('post-process', new PostProcessPass(this.gl));
// UI pass
this.renderPasses.set('ui', new UIPass(this.gl));
}
render(scene) {
this.stats.beginFrame();
// Build the render queue
this.renderQueue.clear();
this.buildRenderQueue(scene);
// Execute the render passes
this.executeRenderPasses();
this.stats.endFrame();
}
buildRenderQueue(scene) {
const camera = scene.activeCamera;
const frustum = camera.getFrustum();
// Frustum culling
const visibleObjects = scene.culling(frustum);
// Classify the render objects
for (const object of visibleObjects) {
const distance = vec3.distance(object.position, camera.position);
if (object.material.transparent) {
this.renderQueue.addTransparent(object, distance);
} else {
this.renderQueue.addOpaque(object, distance);
}
}
// Sort the render queue
this.renderQueue.sort();
}
}
High-performance scene management:
class SceneManager {
constructor() {
this.scenes = new Map();
this.currentScene = null;
// Spatial index
this.spatialIndex = new Octree();
// Object pools
this.objectPools = new Map();
// Batch management
this.batchManager = new BatchManager();
}
addObject(object) {
// Insert into the spatial index
this.spatialIndex.insert(object);
// Batch where possible
if (this.canBatch(object)) {
this.batchManager.addToBatch(object);
}
this.currentScene.addChild(object);
}
culling(frustum) {
const startTime = performance.now();
// Spatial query
const candidates = this.spatialIndex.query(frustum);
// Precise frustum test
const visibleObjects = [];
for (const obj of candidates) {
if (this.isVisible(obj, frustum)) {
visibleObjects.push(obj);
}
}
this.cullTime = performance.now() - startTime;
return visibleObjects;
}
isVisible(object, frustum) {
// Bounding box test
if (!frustum.intersectsBoundingBox(object.boundingBox)) {
return false;
}
// Occlusion culling
if (this.occlusionCulling && this.isOccluded(object)) {
return false;
}
// Distance culling
if (object.maxDistance && object.distanceToCamera > object.maxDistance) {
return false;
}
return true;
}
}
Performance optimization strategies:
Render state management:
class StateManager {
constructor(gl) {
this.gl = gl;
this.currentState = {};
this.stateChanges = 0;
}
setShader(program) {
if (this.currentState.program !== program) {
this.gl.useProgram(program);
this.currentState.program = program;
this.stateChanges++;
}
}
setTexture(unit, texture) {
const key = `texture${unit}`;
if (this.currentState[key] !== texture) {
this.gl.activeTexture(this.gl.TEXTURE0 + unit);
this.gl.bindTexture(this.gl.TEXTURE_2D, texture);
this.currentState[key] = texture;
this.stateChanges++;
}
}
setBlendMode(srcFactor, dstFactor) {
const blendKey = `${srcFactor}-${dstFactor}`;
if (this.currentState.blendMode !== blendKey) {
this.gl.blendFunc(srcFactor, dstFactor);
this.currentState.blendMode = blendKey;
this.stateChanges++;
}
}
}
Command buffer system:
class CommandBuffer {
constructor() {
this.commands = [];
this.currentIndex = 0;
}
clear() {
this.commands.length = 0;
this.currentIndex = 0;
}
setViewport(x, y, width, height) {
this.commands.push({
type: 'viewport',
x, y, width, height
});
}
drawElements(mode, count, type, offset) {
this.commands.push({
type: 'drawElements',
mode, count, type, offset
});
}
execute(gl, stateManager) {
for (const cmd of this.commands) {
switch (cmd.type) {
case 'viewport':
gl.viewport(cmd.x, cmd.y, cmd.width, cmd.height);
break;
case 'drawElements':
gl.drawElements(cmd.mode, cmd.count, cmd.type, cmd.offset);
break;
}
}
}
}
Resource management architecture:
class ResourceManager {
constructor(gl) {
this.gl = gl;
this.resources = new Map();
this.loadingPromises = new Map();
this.memoryUsage = 0;
this.maxMemory = 512 * 1024 * 1024; // 512MB
}
async loadTexture(url, options = {}) {
if (this.resources.has(url)) {
return this.resources.get(url);
}
if (this.loadingPromises.has(url)) {
return this.loadingPromises.get(url);
}
const promise = this.doLoadTexture(url, options);
this.loadingPromises.set(url, promise);
try {
const texture = await promise;
this.resources.set(url, texture);
this.loadingPromises.delete(url);
return texture;
} catch (error) {
this.loadingPromises.delete(url);
throw error;
}
}
checkMemoryUsage() {
if (this.memoryUsage > this.maxMemory) {
this.garbageCollect();
}
}
garbageCollect() {
// Evict the least recently used resources
const sortedResources = Array.from(this.resources.entries())
.sort((a, b) => a[1].lastUsed - b[1].lastUsed);
let freedMemory = 0;
const targetFree = this.maxMemory * 0.2; // free 20% of the budget
for (const [url, resource] of sortedResources) {
if (freedMemory >= targetFree) break;
this.gl.deleteTexture(resource.texture);
this.resources.delete(url);
freedMemory += resource.size;
this.memoryUsage -= resource.size;
}
}
}
Multi-threaded architecture:
class WorkerPool {
constructor(workerCount = navigator.hardwareConcurrency || 4) {
this.workers = [];
this.taskQueue = [];
this.availableWorkers = [];
for (let i = 0; i < workerCount; i++) {
const worker = new Worker('engine-worker.js');
this.workers.push(worker);
this.availableWorkers.push(worker);
worker.onmessage = this.handleWorkerMessage.bind(this, worker);
}
}
submitTask(task) {
return new Promise((resolve, reject) => {
const taskData = { ...task, resolve, reject, id: this.generateId() };
if (this.availableWorkers.length > 0) {
this.executeTask(taskData);
} else {
this.taskQueue.push(taskData);
}
});
}
executeTask(task) {
const worker = this.availableWorkers.pop();
task.worker = worker;
worker.postMessage({
id: task.id,
type: task.type,
data: task.data
});
}
}
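A hypothetical engine-worker.js counterpart (the file name comes from the code above; the task types and helper functions are assumptions):
self.onmessage = (event) => {
const { id, type, data } = event.data;
let result;
switch (type) {
case 'decode-geometry':
result = decodeGeometry(data); // assumed helper: parse a vertex buffer
break;
case 'compute-bounds':
result = computeBoundingBox(data.positions); // assumed helper
break;
}
// Transfer typed-array buffers back without copying where possible
self.postMessage({ id, result }, result && result.buffer ? [result.buffer] : []);
};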
Key design principles: keep layers decoupled behind small interfaces, make state changes explicit and minimal, treat the GPU as an asynchronous resource, and measure everything.
Designing a high-performance rendering engine is a complex systems-engineering problem that must balance features, performance, and maintainability.
How to implement complex shader effects in WebGL? What are the advanced GLSL programming techniques?
How to implement complex shader effects in WebGL? What are the advanced GLSL programming techniques?
考察点:高级着色器编程。
答案:
复杂着色器效果的实现需要掌握高级GLSL编程技巧,包括数学优化、算法实现、性能调优等方面。通过这些技术可以创造出丰富的视觉效果。
高级着色器编程技巧:
程序纹理生成:
// 噪声函数实现
vec2 hash(vec2 p) {
p = vec2(dot(p, vec2(127.1, 311.7)), dot(p, vec2(269.5, 183.3)));
return -1.0 + 2.0 * fract(sin(p) * 43758.5453123);
}
float noise(vec2 p) {
const float K1 = 0.366025404; // (sqrt(3)-1)/2
const float K2 = 0.211324865; // (3-sqrt(3))/6
vec2 i = floor(p + (p.x + p.y) * K1);
vec2 a = p - i + (i.x + i.y) * K2;
vec2 o = (a.x > a.y) ? vec2(1.0, 0.0) : vec2(0.0, 1.0);
vec2 b = a - o + K2;
vec2 c = a - 1.0 + 2.0 * K2;
vec3 h = max(0.5 - vec3(dot(a, a), dot(b, b), dot(c, c)), 0.0);
vec3 n = h * h * h * h * vec3(dot(a, hash(i + 0.0)),
dot(b, hash(i + o)),
dot(c, hash(i + 1.0)));
return dot(n, vec3(70.0));
}
// 分形布朗运动
float fbm(vec2 p) {
float value = 0.0;
float amplitude = 0.5;
float frequency = 1.0;
for (int i = 0; i < 6; i++) {
value += amplitude * noise(p * frequency);
amplitude *= 0.5;
frequency *= 2.0;
}
return value;
}
体积渲染技术:
// 体积云渲染
precision highp float;
uniform vec3 uCameraPosition;
uniform vec3 uLightDirection;
uniform float uTime;
uniform sampler2D uNoiseTexture;
varying vec3 vRayDirection;
float sampleDensity(vec3 pos) {
// 用多次2D切片采样近似3D噪声
vec3 uvw = pos * 0.01 + vec3(uTime * 0.1, 0.0, 0.0);
float noise1 = texture2D(uNoiseTexture, uvw.xy).r;
float noise2 = texture2D(uNoiseTexture, uvw.yz * 2.0).g;
float noise3 = texture2D(uNoiseTexture, uvw.xz * 4.0).b;
float density = noise1 * 0.5 + noise2 * 0.3 + noise3 * 0.2;
// 云层形状控制
float height = (pos.y + 1000.0) / 2000.0;
float heightFactor = smoothstep(0.0, 0.1, height) * (1.0 - smoothstep(0.9, 1.0, height));
return density * heightFactor;
}
float lightMarch(vec3 pos) {
float totalDensity = 0.0;
vec3 lightStep = uLightDirection * 50.0;
for (int i = 0; i < 8; i++) {
pos += lightStep;
totalDensity += sampleDensity(pos);
}
return exp(-totalDensity * 0.1);
}
void main() {
vec3 rayPos = uCameraPosition;
vec3 rayDir = normalize(vRayDirection);
float stepSize = 25.0;
vec4 color = vec4(0.0);
float transmittance = 1.0;
// 体积光线投射
for (int i = 0; i < 64; i++) {
rayPos += rayDir * stepSize;
float density = sampleDensity(rayPos);
if (density > 0.01) {
float lightTransmittance = lightMarch(rayPos);
vec3 luminance = vec3(1.0, 0.9, 0.7) * lightTransmittance;
float alpha = 1.0 - exp(-density * stepSize * 0.1);
color.rgb += luminance * alpha * transmittance;
transmittance *= (1.0 - alpha);
if (transmittance < 0.01) break;
}
}
gl_FragColor = vec4(color.rgb, 1.0 - transmittance);
}
屏幕空间反射:
// 屏幕空间反射 (SSR)
precision highp float;
uniform sampler2D uColorTexture;
uniform sampler2D uDepthTexture;
uniform sampler2D uNormalTexture;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat4 uInverseViewMatrix;
uniform mat4 uInverseProjectionMatrix;
uniform vec2 uScreenSize;
uniform vec3 uCameraPosition; // main()中用于计算视线方向,原文遗漏声明
varying vec2 vTexCoord;
vec3 reconstructWorldPos(vec2 screenPos, float depth) {
vec4 clipPos = vec4(screenPos * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
vec4 viewPos = uInverseProjectionMatrix * clipPos;
viewPos /= viewPos.w;
vec4 worldPos = uInverseViewMatrix * viewPos;
return worldPos.xyz;
}
vec4 screenSpaceRaycast(vec3 startPos, vec3 rayDir) {
vec3 currentPos = startPos;
float stepSize = 0.5;
for (int i = 0; i < 32; i++) {
currentPos += rayDir * stepSize;
// 转换到屏幕空间
vec4 clipPos = uProjectionMatrix * uViewMatrix * vec4(currentPos, 1.0);
vec3 screenPos = clipPos.xyz / clipPos.w;
screenPos = screenPos * 0.5 + 0.5;
// 检查边界
if (screenPos.x < 0.0 || screenPos.x > 1.0 ||
screenPos.y < 0.0 || screenPos.y > 1.0) break;
// 采样深度
float sceneDepth = texture2D(uDepthTexture, screenPos.xy).r;
// 深度比较
if (screenPos.z > sceneDepth + 0.001) {
// 找到交点,进行二分搜索
return vec4(screenPos.xy, 1.0, 1.0);
}
stepSize *= 1.1; // 自适应步长
}
return vec4(0.0);
}
void main() {
vec3 normal = normalize(texture2D(uNormalTexture, vTexCoord).xyz * 2.0 - 1.0);
float depth = texture2D(uDepthTexture, vTexCoord).r;
vec3 worldPos = reconstructWorldPos(vTexCoord, depth);
vec3 viewDir = normalize(worldPos - uCameraPosition);
vec3 reflectDir = reflect(viewDir, normal);
// 屏幕空间光线投射
vec4 reflection = screenSpaceRaycast(worldPos, reflectDir);
if (reflection.a > 0.0) {
vec3 reflectionColor = texture2D(uColorTexture, reflection.xy).rgb;
gl_FragColor = vec4(reflectionColor, reflection.a);
} else {
gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
}
}
性能优化技巧:
精度优化和分支消除:
// 使用混合代替分支
float conditionalValue = mix(valueA, valueB, condition);
// 使用step函数代替if判断
float mask = step(0.5, input);
// 快速数学函数
float fastSqrt = inversesqrt(x) * x; // 等价于sqrt(x),在部分GPU上更快
float lengthSq = dot(v, v); // 长度的平方,比较距离时可直接使用,避免开方运算
// 预计算常量
const float PI = 3.14159265359;
const float TWO_PI = 6.28318530718;
const float INV_PI = 0.31830988618;
纹理采样优化:
// 使用textureGrad避免隐式导数计算
vec4 color = textureGrad(uTexture, uv, dFdx(uv), dFdy(uv));
// 手动LOD计算
float lod = log2(max(length(dFdx(uv)), length(dFdy(uv))) * uTextureSize); // uTextureSize为纹理边长uniform,避免与内置函数textureSize重名
vec4 color = textureLod(uTexture, uv, lod);
// 使用纹理数组减少绑定
uniform sampler2DArray uTextureArray;
vec4 color = texture(uTextureArray, vec3(uv, layerIndex));
高级渲染算法实现:
// 时域抗锯齿(TAA)着色器
// 注意:textureOffset需要GLSL ES 3.0(WebGL2);本例中texture2D/gl_FragColor为ES 1.0写法,实际使用时需统一版本
uniform sampler2D uCurrentFrame;
uniform sampler2D uPreviousFrame;
uniform sampler2D uMotionVectors;
uniform float uBlendFactor;
varying vec2 vTexCoord; // 原文遗漏声明
vec3 tonemap(vec3 color) {
return color / (1.0 + color);
}
vec3 untonemap(vec3 color) {
return color / (1.0 - color);
}
void main() {
vec2 motion = texture2D(uMotionVectors, vTexCoord).xy;
vec2 previousUV = vTexCoord - motion;
vec3 currentColor = texture2D(uCurrentFrame, vTexCoord).rgb;
vec3 previousColor = texture2D(uPreviousFrame, previousUV).rgb;
// 色彩空间转换提高混合质量
currentColor = tonemap(currentColor);
previousColor = tonemap(previousColor);
// 邻域裁剪
vec3 nearColor0 = tonemap(textureOffset(uCurrentFrame, vTexCoord, ivec2(-1, -1)).rgb);
vec3 nearColor1 = tonemap(textureOffset(uCurrentFrame, vTexCoord, ivec2( 1, -1)).rgb);
vec3 nearColor2 = tonemap(textureOffset(uCurrentFrame, vTexCoord, ivec2(-1, 1)).rgb);
vec3 nearColor3 = tonemap(textureOffset(uCurrentFrame, vTexCoord, ivec2( 1, 1)).rgb);
vec3 minColor = min(currentColor, min(min(nearColor0, nearColor1), min(nearColor2, nearColor3)));
vec3 maxColor = max(currentColor, max(max(nearColor0, nearColor1), max(nearColor2, nearColor3)));
previousColor = clamp(previousColor, minColor, maxColor);
// 自适应混合
float adaptiveBlend = mix(0.05, 0.2, clamp(length(motion) * 50.0, 0.0, 1.0));
vec3 finalColor = mix(previousColor, currentColor, adaptiveBlend);
gl_FragColor = vec4(untonemap(finalColor), 1.0);
}
着色器调试技巧:
// 调试可视化宏
#define DEBUG_MODE 1
#if DEBUG_MODE
// 颜色编码调试
vec3 debugHeatmap(float value) {
vec3 colors[5] = vec3[](
vec3(0.0, 0.0, 1.0), // 蓝色 (低)
vec3(0.0, 1.0, 1.0), // 青色
vec3(0.0, 1.0, 0.0), // 绿色 (中)
vec3(1.0, 1.0, 0.0), // 黄色
vec3(1.0, 0.0, 0.0) // 红色 (高)
);
value = clamp(value, 0.0, 1.0) * 4.0;
int index = int(value);
float t = fract(value);
return mix(colors[index], colors[min(index + 1, 4)], t);
}
// 法线可视化
vec3 debugNormal(vec3 normal) {
return normal * 0.5 + 0.5;
}
#endif
关键优化原则:减少分支与高开销数学运算、优化纹理采样方式、利用时域信息复用上一帧的计算结果。
高级着色器编程需要深入理解GPU架构和GLSL语言特性,通过优化算法和数据结构来实现复杂的视觉效果。
What are the memory management and resource optimization strategies for WebGL applications?
What are the memory management and resource optimization strategies for WebGL applications?
考察点:资源管理能力。
答案:
WebGL应用的内存管理和资源优化是确保应用稳定运行和良好性能的关键。需要从GPU内存、CPU内存、资源生命周期等多个维度进行优化。
GPU内存管理:
缓冲区管理策略:
class GPUBufferManager {
constructor(gl) {
this.gl = gl;
this.bufferPools = new Map();
this.activeBuffers = new Set();
this.totalGPUMemory = 0;
this.maxGPUMemory = 256 * 1024 * 1024; // 256MB限制
}
allocateBuffer(size, usage) {
const poolKey = `${size}-${usage}`;
const pool = this.bufferPools.get(poolKey) || [];
if (pool.length > 0) {
const buffer = pool.pop();
this.activeBuffers.add(buffer);
return buffer;
}
// 检查内存限制
if (this.totalGPUMemory + size > this.maxGPUMemory) {
this.garbageCollect();
}
const buffer = {
glBuffer: this.gl.createBuffer(),
size: size,
usage: usage,
lastUsed: Date.now(),
refCount: 1
};
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffer.glBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, size, usage);
this.totalGPUMemory += size;
this.activeBuffers.add(buffer);
return buffer;
}
releaseBuffer(buffer) {
if (--buffer.refCount <= 0) {
this.activeBuffers.delete(buffer);
const poolKey = `${buffer.size}-${buffer.usage}`;
const pool = this.bufferPools.get(poolKey) || [];
pool.push(buffer);
this.bufferPools.set(poolKey, pool);
}
}
garbageCollect() {
const threshold = Date.now() - 30000; // 30秒未使用
let freedMemory = 0;
for (const [poolKey, pool] of this.bufferPools) {
for (let i = pool.length - 1; i >= 0; i--) {
const buffer = pool[i];
if (buffer.lastUsed < threshold) {
this.gl.deleteBuffer(buffer.glBuffer);
pool.splice(i, 1);
freedMemory += buffer.size;
this.totalGPUMemory -= buffer.size;
}
}
}
console.log(`GC freed ${freedMemory} bytes of GPU memory`);
}
}
纹理内存优化:
class TextureManager {
constructor(gl) {
this.gl = gl;
this.textureCache = new Map();
this.compressionFormats = this.detectCompressionSupport();
this.streamingQueue = new PriorityQueue(); // PriorityQueue为假设的优先级队列实现
this.maxTextureMemory = 128 * 1024 * 1024; // 128MB
this.currentMemoryUsage = 0;
}
detectCompressionSupport() {
const formats = {};
formats.s3tc = this.gl.getExtension('WEBGL_compressed_texture_s3tc');
formats.etc1 = this.gl.getExtension('WEBGL_compressed_texture_etc1');
formats.astc = this.gl.getExtension('WEBGL_compressed_texture_astc');
return formats;
}
async loadTexture(url, options = {}) {
const cacheKey = `${url}-${JSON.stringify(options)}`;
if (this.textureCache.has(cacheKey)) {
const textureData = this.textureCache.get(cacheKey);
textureData.lastAccessed = Date.now();
textureData.accessCount++;
return textureData.texture;
}
// 选择最优压缩格式
const format = this.selectOptimalFormat(options.format);
const textureUrl = this.getCompressedTextureUrl(url, format);
const textureData = await this.loadTextureData(textureUrl, options);
// 内存检查
if (this.currentMemoryUsage + textureData.size > this.maxTextureMemory) {
await this.freeUnusedTextures(textureData.size);
}
const texture = this.createGLTexture(textureData);
this.textureCache.set(cacheKey, {
texture: texture,
size: textureData.size,
lastAccessed: Date.now(),
accessCount: 1,
priority: options.priority || 0
});
this.currentMemoryUsage += textureData.size;
return texture;
}
async freeUnusedTextures(requiredMemory) {
const textures = Array.from(this.textureCache.entries())
.sort((a, b) => a[1].lastAccessed - b[1].lastAccessed);
let freedMemory = 0;
for (const [key, data] of textures) {
if (freedMemory >= requiredMemory) break;
this.gl.deleteTexture(data.texture);
this.textureCache.delete(key);
freedMemory += data.size;
this.currentMemoryUsage -= data.size;
}
}
}
资源流送系统:
class ResourceStreaming {
constructor(engine) {
this.engine = engine;
this.loadQueue = new Map();
this.visibilityTracker = new VisibilityTracker();
this.lodSystem = new LODSystem();
this.maxConcurrentLoads = 4;
this.currentLoads = 0;
}
updateStreaming(camera) {
// 基于摄像机位置和方向预测需要的资源
const visibleObjects = this.visibilityTracker.getVisibleObjects(camera);
const predictedObjects = this.visibilityTracker.getPredictedObjects(camera);
// 优先加载可见和即将可见的资源
for (const obj of [...visibleObjects, ...predictedObjects]) {
const lodLevel = this.lodSystem.calculateLOD(obj, camera);
this.queueResourceLoad(obj, lodLevel);
}
// 卸载远离摄像机的资源
this.unloadDistantResources(camera);
}
queueResourceLoad(object, lodLevel) {
const resourceKey = `${object.id}-${lodLevel}`;
if (this.loadQueue.has(resourceKey)) return;
const priority = this.calculateLoadPriority(object, lodLevel);
this.loadQueue.set(resourceKey, {
object: object,
lodLevel: lodLevel,
priority: priority,
timestamp: Date.now()
});
this.processLoadQueue();
}
async processLoadQueue() {
if (this.currentLoads >= this.maxConcurrentLoads) return;
// 按优先级排序
const sortedQueue = Array.from(this.loadQueue.entries())
.sort((a, b) => b[1].priority - a[1].priority);
for (const [key, loadRequest] of sortedQueue) {
if (this.currentLoads >= this.maxConcurrentLoads) break;
this.currentLoads++;
this.loadQueue.delete(key);
try {
await this.loadResource(loadRequest);
} catch (error) {
console.warn('Resource loading failed:', error);
} finally {
this.currentLoads--;
// 递归处理剩余队列
setTimeout(() => this.processLoadQueue(), 0);
}
}
}
}
内存监控和分析:
class MemoryProfiler {
constructor(gl) {
this.gl = gl;
this.snapshots = [];
this.isMonitoring = false;
}
startMonitoring(interval = 1000) {
this.isMonitoring = true;
const monitor = () => {
if (!this.isMonitoring) return;
const snapshot = this.takeSnapshot();
this.snapshots.push(snapshot);
// 保持最近100个快照
if (this.snapshots.length > 100) {
this.snapshots.shift();
}
// 检测内存泄漏
this.detectMemoryLeaks();
setTimeout(monitor, interval);
};
monitor();
}
takeSnapshot() {
const info = this.gl.getExtension('WEBGL_debug_renderer_info');
return {
timestamp: Date.now(),
jsHeapUsed: performance.memory?.usedJSHeapSize || 0,
jsHeapTotal: performance.memory?.totalJSHeapSize || 0,
renderer: info ? this.gl.getParameter(info.UNMASKED_RENDERER_WEBGL) : 'unknown',
textureCount: this.getActiveTextureCount(),
bufferCount: this.getActiveBufferCount(),
programCount: this.getActiveProgramCount()
};
}
detectMemoryLeaks() {
if (this.snapshots.length < 10) return;
const recent = this.snapshots.slice(-10);
const trend = this.calculateMemoryTrend(recent);
if (trend.jsHeap > 1024 * 1024) { // 1MB增长
console.warn('Potential JS memory leak detected', trend);
}
if (trend.textures > 10) {
console.warn('Potential texture leak detected', trend);
}
}
generateReport() {
const latest = this.snapshots[this.snapshots.length - 1];
return {
currentMemoryUsage: {
jsHeap: latest.jsHeapUsed,
textures: latest.textureCount,
buffers: latest.bufferCount
},
memoryTrend: this.calculateMemoryTrend(this.snapshots.slice(-20)),
recommendations: this.generateRecommendations()
};
}
}
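MemoryProfiler中引用的calculateMemoryTrend未给出实现,下面是一个假设性的补全示意(取快照序列首尾差值作为增长趋势):
calculateMemoryTrend(snapshots) {
  if (snapshots.length < 2) return { jsHeap: 0, textures: 0, buffers: 0 };
  const first = snapshots[0];
  const last = snapshots[snapshots.length - 1];
  return {
    jsHeap: last.jsHeapUsed - first.jsHeapUsed, // 字节增长量
    textures: last.textureCount - first.textureCount,
    buffers: last.bufferCount - first.bufferCount
  };
}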
性能优化策略:合理组合缓冲区池化、纹理压缩与流送、LRU回收和内存监控,才能让WebGL应用在有限的内存预算内稳定运行。
How to implement Physically Based Rendering (PBR) in WebGL?
How to implement Physically Based Rendering (PBR) in WebGL?
考察点:PBR渲染技术。
答案:
基于物理的渲染(PBR)通过遵循物理定律来实现更真实的光照效果。PBR的核心是能量守恒和使用物理准确的材质参数。
PBR基础理论:
PBR基于双向反射分布函数(BRDF),主要包含漫反射(Lambert)和镜面反射(Cook-Torrance)两个部分,并以金属度/粗糙度工作流描述材质参数。
PBR着色器实现:
// PBR顶点着色器
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec3 aTangent;
attribute vec2 aTexCoord;
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat3 uNormalMatrix;
varying vec3 vWorldPos;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec2 vTexCoord;
void main() {
vWorldPos = (uModelMatrix * vec4(aPosition, 1.0)).xyz;
vNormal = normalize(uNormalMatrix * aNormal);
vTangent = normalize(uNormalMatrix * aTangent);
vBitangent = cross(vNormal, vTangent);
vTexCoord = aTexCoord;
gl_Position = uProjectionMatrix * uViewMatrix * vec4(vWorldPos, 1.0);
}
// PBR片段着色器
precision highp float;
// 材质纹理
uniform sampler2D uAlbedoMap;
uniform sampler2D uNormalMap;
uniform sampler2D uMetallicMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uAOMap;
// 环境光照
uniform samplerCube uEnvironmentMap;
uniform samplerCube uIrradianceMap;
uniform sampler2D uBRDFLUT;
// 光源
uniform vec3 uLightPositions[4];
uniform vec3 uLightColors[4];
uniform float uLightIntensities[4];
uniform int uLightCount;
uniform vec3 uCameraPos;
varying vec3 vWorldPos;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec2 vTexCoord;
const float PI = 3.14159265359;
// 法线分布函数 (GGX/Trowbridge-Reitz)
float DistributionGGX(vec3 N, vec3 H, float roughness) {
float a = roughness * roughness;
float a2 = a * a;
float NdotH = max(dot(N, H), 0.0);
float NdotH2 = NdotH * NdotH;
float num = a2;
float denom = (NdotH2 * (a2 - 1.0) + 1.0);
denom = PI * denom * denom;
return num / denom;
}
// 几何函数
float GeometrySchlickGGX(float NdotV, float roughness) {
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float num = NdotV;
float denom = NdotV * (1.0 - k) + k;
return num / denom;
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness) {
float NdotV = max(dot(N, V), 0.0);
float NdotL = max(dot(N, L), 0.0);
float ggx2 = GeometrySchlickGGX(NdotV, roughness);
float ggx1 = GeometrySchlickGGX(NdotL, roughness);
return ggx1 * ggx2;
}
// Fresnel方程
vec3 fresnelSchlick(float cosTheta, vec3 F0) {
return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
vec3 fresnelSchlickRoughness(float cosTheta, vec3 F0, float roughness) {
return F0 + (max(vec3(1.0 - roughness), F0) - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
void main() {
// 材质属性
vec3 albedo = pow(texture2D(uAlbedoMap, vTexCoord).rgb, vec3(2.2)); // sRGB转线性,pow的两个参数类型需一致
float metallic = texture2D(uMetallicMap, vTexCoord).r;
float roughness = texture2D(uRoughnessMap, vTexCoord).r;
float ao = texture2D(uAOMap, vTexCoord).r;
// 法线贴图
vec3 normalMap = texture2D(uNormalMap, vTexCoord).rgb * 2.0 - 1.0;
mat3 TBN = mat3(normalize(vTangent), normalize(vBitangent), normalize(vNormal));
vec3 N = normalize(TBN * normalMap);
vec3 V = normalize(uCameraPos - vWorldPos);
// 计算F0(表面反射率)
vec3 F0 = vec3(0.04);
F0 = mix(F0, albedo, metallic);
// 反射方程
vec3 Lo = vec3(0.0);
// 直接光照
for(int i = 0; i < uLightCount && i < 4; ++i) {
vec3 L = normalize(uLightPositions[i] - vWorldPos);
vec3 H = normalize(V + L);
float distance = length(uLightPositions[i] - vWorldPos);
float attenuation = 1.0 / (distance * distance);
vec3 radiance = uLightColors[i] * uLightIntensities[i] * attenuation;
// Cook-Torrance BRDF
float NDF = DistributionGGX(N, H, roughness);
float G = GeometrySmith(N, V, L, roughness);
vec3 F = fresnelSchlick(max(dot(H, V), 0.0), F0);
vec3 kS = F;
vec3 kD = vec3(1.0) - kS;
kD *= 1.0 - metallic;
vec3 numerator = NDF * G * F;
float denominator = 4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0) + 0.0001;
vec3 specular = numerator / denominator;
float NdotL = max(dot(N, L), 0.0);
Lo += (kD * albedo / PI + specular) * radiance * NdotL;
}
// 环境光照 (IBL)
vec3 F = fresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec3 kS = F;
vec3 kD = 1.0 - kS;
kD *= 1.0 - metallic;
// 漫反射环境光
vec3 irradiance = textureCube(uIrradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;
// 镜面反射环境光
vec3 R = reflect(-V, N);
const float MAX_REFLECTION_LOD = 4.0;
vec3 prefilteredColor = textureCubeLodEXT(uEnvironmentMap, R, roughness * MAX_REFLECTION_LOD).rgb; // WebGL1片段着色器需声明#extension GL_EXT_shader_texture_lod并使用textureCubeLodEXT
vec2 brdf = texture2D(uBRDFLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * brdf.x + brdf.y);
vec3 ambient = (kD * diffuse + specular) * ao;
vec3 color = ambient + Lo;
// HDR色调映射
color = color / (color + vec3(1.0));
// Gamma校正
color = pow(color, vec3(1.0/2.2));
gl_FragColor = vec4(color, 1.0);
}
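以下是一个假设性的JS端示意,展示如何向上述PBR着色器上传光源数组uniform(program为已链接的着色器程序,光源数据为示例假设):
// 将4个点光源的数据铺平后一次性上传
const lightPositions = new Float32Array([0, 5, 0, 3, 2, 1, -3, 2, 1, 0, 2, -3]);
const lightColors = new Float32Array([1, 1, 1, 1, 0.8, 0.6, 0.6, 0.8, 1, 1, 1, 0.9]);
const lightIntensities = new Float32Array([10, 5, 5, 3]);
gl.useProgram(program);
gl.uniform3fv(gl.getUniformLocation(program, 'uLightPositions'), lightPositions);
gl.uniform3fv(gl.getUniformLocation(program, 'uLightColors'), lightColors);
gl.uniform1fv(gl.getUniformLocation(program, 'uLightIntensities'), lightIntensities);
gl.uniform1i(gl.getUniformLocation(program, 'uLightCount'), 4);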
环境光照预计算:
// IBL预计算管理器
class IBLPrecomputer {
constructor(gl) {
this.gl = gl;
this.cubemapSize = 512;
this.irradianceSize = 32;
this.prefilterSize = 128;
this.brdfSize = 512;
}
async precomputeIBL(hdrTexture) {
// 1. 生成环境立方体贴图
const envCubemap = this.generateEnvironmentCubemap(hdrTexture);
// 2. 预计算辐照度贴图
const irradianceMap = this.precomputeIrradiance(envCubemap);
// 3. 预过滤环境贴图
const prefilterMap = this.prefilterEnvironmentMap(envCubemap);
// 4. 生成BRDF查找表
const brdfLUT = this.generateBRDFLUT();
return {
environment: envCubemap,
irradiance: irradianceMap,
prefilter: prefilterMap,
brdfLUT: brdfLUT
};
}
precomputeIrradiance(envCubemap) {
const shader = this.createIrradianceShader();
const framebuffer = this.createCubemapFramebuffer(this.irradianceSize);
// 对立方体贴图的每个面进行积分
const faces = [
{ target: this.gl.TEXTURE_CUBE_MAP_POSITIVE_X, up: [0, -1, 0], right: [0, 0, -1] },
{ target: this.gl.TEXTURE_CUBE_MAP_NEGATIVE_X, up: [0, -1, 0], right: [0, 0, 1] },
{ target: this.gl.TEXTURE_CUBE_MAP_POSITIVE_Y, up: [0, 0, 1], right: [1, 0, 0] },
{ target: this.gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, up: [0, 0, -1], right: [1, 0, 0] },
{ target: this.gl.TEXTURE_CUBE_MAP_POSITIVE_Z, up: [0, -1, 0], right: [1, 0, 0] },
{ target: this.gl.TEXTURE_CUBE_MAP_NEGATIVE_Z, up: [0, -1, 0], right: [-1, 0, 0] }
];
faces.forEach((face, index) => {
this.gl.framebufferTexture2D(this.gl.FRAMEBUFFER, this.gl.COLOR_ATTACHMENT0,
face.target, framebuffer.texture, 0);
this.renderCubeFace(shader, face.up, face.right);
});
return framebuffer.texture;
}
}
PBR渲染的关键是理解物理光照模型和正确实现材质工作流,这为3D应用提供了更加真实和一致的视觉效果。
How to implement complex particle systems in WebGL?
How to implement complex particle systems in WebGL?
考察点:粒子系统设计。
答案:
复杂粒子系统需要高效的GPU计算、灵活的参数控制和优化的渲染管线。通过变换反馈、计算着色器等技术可以实现大规模、高性能的粒子效果。
GPU粒子系统架构:
class GPUParticleSystem {
constructor(gl, maxParticles = 10000) {
this.gl = gl;
this.maxParticles = maxParticles;
// 粒子数据结构 (position.xyz, velocity.xyz, life, size)
this.particleData = new Float32Array(maxParticles * 8);
this.activeParticles = 0;
// 双缓冲用于变换反馈
this.buffers = {
current: this.createParticleBuffer(),
next: this.createParticleBuffer()
};
// 着色器程序
this.updateProgram = this.createUpdateProgram();
this.renderProgram = this.createRenderProgram();
// 发射器配置
this.emitters = [];
// 物理参数
this.gravity = [0, -9.8, 0];
this.wind = [0, 0, 0];
}
createUpdateProgram() {
const vertexShader = `
attribute vec3 aPosition;
attribute vec3 aVelocity;
attribute float aLife;
attribute float aSize;
uniform float uDeltaTime;
uniform vec3 uGravity;
uniform vec3 uWind;
uniform float uDamping;
// 变换反馈输出
varying vec3 vNewPosition;
varying vec3 vNewVelocity;
varying float vNewLife;
varying float vNewSize;
void main() {
if (aLife <= 0.0) {
// 死亡粒子保持不变
vNewPosition = aPosition;
vNewVelocity = aVelocity;
vNewLife = aLife;
vNewSize = aSize;
} else {
// 物理更新
vec3 acceleration = uGravity + uWind;
vec3 newVelocity = aVelocity + acceleration * uDeltaTime;
newVelocity *= uDamping;
vec3 newPosition = aPosition + newVelocity * uDeltaTime;
float newLife = aLife - uDeltaTime;
vNewPosition = newPosition;
vNewVelocity = newVelocity;
vNewLife = newLife;
vNewSize = aSize;
}
gl_Position = vec4(0.0); // 不需要光栅化
}
`;
// compileProgram为假设的辅助方法:编译着色器并通过transformFeedbackVaryings注册以下输出变量
return this.compileProgram(vertexShader, null, [
'vNewPosition', 'vNewVelocity', 'vNewLife', 'vNewSize'
]);
}
update(deltaTime) {
// 发射新粒子
this.emitParticles(deltaTime);
// 使用变换反馈更新粒子
this.gl.useProgram(this.updateProgram);
// 设置uniform
this.gl.uniform1f(this.gl.getUniformLocation(this.updateProgram, 'uDeltaTime'), deltaTime);
this.gl.uniform3fv(this.gl.getUniformLocation(this.updateProgram, 'uGravity'), this.gravity);
this.gl.uniform3fv(this.gl.getUniformLocation(this.updateProgram, 'uWind'), this.wind);
this.gl.uniform1f(this.gl.getUniformLocation(this.updateProgram, 'uDamping'), 0.98);
// 绑定当前缓冲区为输入
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.buffers.current);
this.setupVertexAttributes(this.updateProgram);
// 绑定下一个缓冲区为变换反馈输出(需用bindBufferBase绑定到索引绑定点)
this.gl.bindBufferBase(this.gl.TRANSFORM_FEEDBACK_BUFFER, 0, this.buffers.next);
// 执行变换反馈
this.gl.enable(this.gl.RASTERIZER_DISCARD);
this.gl.beginTransformFeedback(this.gl.POINTS);
this.gl.drawArrays(this.gl.POINTS, 0, this.activeParticles);
this.gl.endTransformFeedback();
this.gl.disable(this.gl.RASTERIZER_DISCARD);
// 交换缓冲区
[this.buffers.current, this.buffers.next] = [this.buffers.next, this.buffers.current];
}
render(viewMatrix, projectionMatrix) {
this.gl.useProgram(this.renderProgram);
// 设置矩阵
this.gl.uniformMatrix4fv(
this.gl.getUniformLocation(this.renderProgram, 'uViewMatrix'),
false, viewMatrix
);
this.gl.uniformMatrix4fv(
this.gl.getUniformLocation(this.renderProgram, 'uProjectionMatrix'),
false, projectionMatrix
);
// 绑定粒子缓冲区
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, this.buffers.current);
this.setupVertexAttributes(this.renderProgram);
// 启用混合
this.gl.enable(this.gl.BLEND);
this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE_MINUS_SRC_ALPHA);
// 渲染粒子
this.gl.drawArrays(this.gl.POINTS, 0, this.activeParticles);
}
}
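GPUParticleSystem中引用的createParticleBuffer未给出实现,下面是一个假设性的补全示意(每个粒子8个float:位置3+速度3+生命1+大小1):
createParticleBuffer() {
  const gl = this.gl;
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  // DYNAMIC_COPY(WebGL2):数据由GPU(变换反馈)写入、再供GPU绘制读取
  gl.bufferData(gl.ARRAY_BUFFER, this.maxParticles * 8 * 4, gl.DYNAMIC_COPY);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  return buffer;
}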
高级粒子效果实现:
// 高级粒子渲染着色器
// 顶点着色器
attribute vec3 aPosition;
attribute vec3 aVelocity;
attribute float aLife;
attribute float aSize;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uMaxLife;
varying float vLife;
varying vec3 vVelocity;
void main() {
vec4 viewPos = uViewMatrix * vec4(aPosition, 1.0);
gl_Position = uProjectionMatrix * viewPos;
// 基于生命值的大小变化
float lifeRatio = aLife / uMaxLife;
float sizeMultiplier = smoothstep(0.0, 0.1, lifeRatio) * (1.0 - smoothstep(0.8, 1.0, lifeRatio));
gl_PointSize = aSize * sizeMultiplier;
vLife = lifeRatio;
vVelocity = aVelocity;
}
// 片段着色器
precision mediump float;
uniform sampler2D uParticleTexture;
uniform vec3 uStartColor;
uniform vec3 uEndColor;
varying float vLife;
varying vec3 vVelocity;
void main() {
// 圆形粒子形状
vec2 coord = gl_PointCoord - vec2(0.5);
float dist = length(coord);
if (dist > 0.5) discard;
// 软边缘
float alpha = 1.0 - smoothstep(0.3, 0.5, dist);
// 基于生命值的颜色插值
vec3 color = mix(uStartColor, uEndColor, 1.0 - vLife);
// 基于速度的颜色调制
float speedFactor = length(vVelocity) * 0.1;
color += vec3(speedFactor * 0.5, speedFactor * 0.2, 0.0);
// 纹理采样
vec4 texColor = texture2D(uParticleTexture, gl_PointCoord);
gl_FragColor = vec4(color * texColor.rgb, alpha * texColor.a * vLife);
}
复杂粒子行为系统:
class ParticleBehaviorSystem {
constructor() {
this.behaviors = [];
}
addBehavior(behavior) {
this.behaviors.push(behavior);
}
update(particles, deltaTime) {
for (const behavior of this.behaviors) {
behavior.apply(particles, deltaTime);
}
}
}
// 涡流行为
class VortexBehavior {
constructor(center, strength, radius) {
this.center = center;
this.strength = strength;
this.radius = radius;
}
apply(particles, deltaTime) {
for (let i = 0; i < particles.length; i++) {
const particle = particles[i];
if (particle.life <= 0) continue;
const offset = vec3.subtract(vec3.create(), particle.position, this.center);
const distance = vec3.length(offset);
if (distance < this.radius) {
const factor = 1.0 - (distance / this.radius);
const force = vec3.cross(vec3.create(), offset, [0, 1, 0]);
vec3.normalize(force, force);
vec3.scale(force, force, this.strength * factor);
vec3.add(particle.velocity, particle.velocity,
vec3.scale(vec3.create(), force, deltaTime));
}
}
}
}
// 碰撞检测行为
class CollisionBehavior {
constructor(planes) {
this.planes = planes; // 碰撞平面数组
this.restitution = 0.5; // 弹性系数
}
apply(particles, deltaTime) {
for (let i = 0; i < particles.length; i++) {
const particle = particles[i];
if (particle.life <= 0) continue;
for (const plane of this.planes) {
const distance = this.distanceToPlane(particle.position, plane);
if (distance < 0) {
// 发生碰撞
this.resolveCollision(particle, plane);
}
}
}
}
resolveCollision(particle, plane) {
// 计算反射速度
const dotProduct = vec3.dot(particle.velocity, plane.normal);
const reflection = vec3.scale(vec3.create(), plane.normal, 2 * dotProduct);
vec3.subtract(particle.velocity, particle.velocity, reflection);
vec3.scale(particle.velocity, particle.velocity, this.restitution);
// 将粒子移出碰撞表面
const offset = vec3.scale(vec3.create(), plane.normal, 0.01);
vec3.add(particle.position, particle.position, offset);
}
}
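CollisionBehavior中引用的distanceToPlane未给出实现,下面是一个假设性的补全示意(假设平面以{ normal, d }表示,满足dot(normal, p) + d = 0):
// 返回有符号距离:正值位于法线一侧,负值表示已穿透平面
distanceToPlane(position, plane) {
  return vec3.dot(position, plane.normal) + plane.d;
}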
GPU计算着色器粒子系统(前瞻示例):
// 注意:标准WebGL2并不支持计算着色器与SSBO,以下写法对应OpenGL ES 3.1等环境,仅作思路参考
#version 300 es
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer ParticleBuffer {
vec4 positions[];
};
layout(std430, binding = 1) buffer VelocityBuffer {
vec4 velocities[];
};
uniform float uDeltaTime;
uniform vec3 uGravity;
uniform int uParticleCount;
void main() {
uint index = gl_GlobalInvocationID.x;
if (index >= uParticleCount) return;
vec3 position = positions[index].xyz;
float life = positions[index].w;
vec3 velocity = velocities[index].xyz;
if (life > 0.0) {
// 更新物理
velocity += uGravity * uDeltaTime;
position += velocity * uDeltaTime;
life -= uDeltaTime;
// 边界检查
if (position.y < 0.0) {
position.y = 0.0;
velocity.y *= -0.5;
}
// 写回数据
positions[index] = vec4(position, life);
velocities[index] = vec4(velocity, 0.0);
}
}
性能优化策略:用变换反馈或计算着色器将模拟完全放在GPU端、双缓冲避免读写冲突、以点精灵渲染减少几何开销。
复杂粒子系统是现代3D图形的重要组成部分,通过合理的架构设计和GPU优化可以实现丰富的视觉效果。
How to handle WebGL compatibility issues across different devices and browsers?
How to handle WebGL compatibility issues across different devices and browsers?
考察点:兼容性处理。
答案:
WebGL兼容性处理需要考虑硬件差异、浏览器实现差异、性能等级等多个方面,通过功能检测、降级方案和适配策略来确保应用的广泛可用性。
兼容性检测框架:
class WebGLCompatibilityManager {
constructor() {
this.capabilities = {};
this.performanceLevel = 'unknown';
this.limitations = {};
}
async detectCapabilities(canvas) {
const gl = this.createWebGLContext(canvas);
if (!gl) {
throw new Error('WebGL not supported');
}
// 基础功能检测
this.capabilities = {
webgl2: typeof WebGL2RenderingContext !== 'undefined' && gl instanceof WebGL2RenderingContext,
maxTextureSize: gl.getParameter(gl.MAX_TEXTURE_SIZE),
maxTextureUnits: gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS),
maxVertexAttribs: gl.getParameter(gl.MAX_VERTEX_ATTRIBS),
maxFragmentUniforms: gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS),
maxVertexUniforms: gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS),
maxVaryings: gl.getParameter(gl.MAX_VARYING_VECTORS),
maxRenderbufferSize: gl.getParameter(gl.MAX_RENDERBUFFER_SIZE),
// 扩展检测
extensions: this.detectExtensions(gl),
// 精度检测
precision: this.detectPrecision(gl),
// 硬件信息
hardware: this.getHardwareInfo(gl)
};
// 性能基准测试
this.performanceLevel = await this.benchmarkPerformance(gl);
// 生成限制配置
this.limitations = this.generateLimitations();
return this.capabilities;
}
detectExtensions(gl) {
const extensions = {
// 纹理相关
textureFloat: gl.getExtension('OES_texture_float'),
textureHalfFloat: gl.getExtension('OES_texture_half_float'),
depthTexture: gl.getExtension('WEBGL_depth_texture'),
compressedTextureS3TC: gl.getExtension('WEBGL_compressed_texture_s3tc'),
compressedTextureETC1: gl.getExtension('WEBGL_compressed_texture_etc1'),
// 渲染相关
instancedArrays: gl.getExtension('ANGLE_instanced_arrays'),
drawBuffers: gl.getExtension('WEBGL_draw_buffers'),
vertexArrayObject: gl.getExtension('OES_vertex_array_object'),
// 调试相关
debugShaders: gl.getExtension('WEBGL_debug_shaders'),
debugRendererInfo: gl.getExtension('WEBGL_debug_renderer_info'),
// 性能相关
disjointTimerQuery: gl.getExtension('EXT_disjoint_timer_query')
};
return Object.fromEntries(
Object.entries(extensions).map(([key, value]) => [key, !!value])
);
}
detectPrecision(gl) {
const vertexShaderPrecision = {
highpFloat: gl.getShaderPrecisionFormat(gl.VERTEX_SHADER, gl.HIGH_FLOAT),
mediumpFloat: gl.getShaderPrecisionFormat(gl.VERTEX_SHADER, gl.MEDIUM_FLOAT),
lowpFloat: gl.getShaderPrecisionFormat(gl.VERTEX_SHADER, gl.LOW_FLOAT)
};
const fragmentShaderPrecision = {
highpFloat: gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT),
mediumpFloat: gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.MEDIUM_FLOAT),
lowpFloat: gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.LOW_FLOAT)
};
return {
vertex: vertexShaderPrecision,
fragment: fragmentShaderPrecision
};
}
async benchmarkPerformance(gl) {
const tests = [
this.testDrawCallPerformance(gl),
this.testShaderComplexity(gl),
this.testTextureUpload(gl),
this.testFramebufferPerformance(gl)
];
const results = await Promise.all(tests);
const score = results.reduce((sum, result) => sum + result.score, 0) / results.length;
if (score > 80) return 'high';
if (score > 50) return 'medium';
if (score > 20) return 'low';
return 'minimal';
}
generateLimitations() {
const limitations = {
maxParticles: 1000,
maxLights: 4,
shadowMapSize: 512,
maxTextureSize: 512,
instancedRendering: false,
postProcessing: false,
complexShaders: false
};
switch (this.performanceLevel) {
case 'high':
limitations.maxParticles = 10000;
limitations.maxLights = 8;
limitations.shadowMapSize = 2048;
limitations.maxTextureSize = 2048;
limitations.instancedRendering = this.capabilities.extensions.instancedArrays;
limitations.postProcessing = true;
limitations.complexShaders = true;
break;
case 'medium':
limitations.maxParticles = 5000;
limitations.maxLights = 6;
limitations.shadowMapSize = 1024;
limitations.maxTextureSize = 1024;
limitations.instancedRendering = this.capabilities.extensions.instancedArrays;
limitations.postProcessing = true;
break;
case 'low':
limitations.maxParticles = 1000;
limitations.maxLights = 4;
limitations.shadowMapSize = 512;
limitations.maxTextureSize = 512;
break;
}
return limitations;
}
}
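上面引用的createWebGLContext未给出实现,下面是一个假设性的补全示意(优先尝试WebGL2,失败后逐级回退):
createWebGLContext(canvas) {
  const attributes = { antialias: true, failIfMajorPerformanceCaveat: false };
  return canvas.getContext('webgl2', attributes)
      || canvas.getContext('webgl', attributes)
      || canvas.getContext('experimental-webgl', attributes);
}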
自适应质量系统:
class AdaptiveQualityManager {
constructor(compatibilityManager) {
this.compatibility = compatibilityManager;
this.currentSettings = {};
this.targetFrameRate = 60;
this.frameTimeHistory = [];
this.adjustmentCooldown = 0;
}
initializeSettings() {
const limitations = this.compatibility.limitations;
this.currentSettings = {
renderScale: this.getInitialRenderScale(),
shadowQuality: this.getInitialShadowQuality(),
particleCount: limitations.maxParticles,
lightCount: limitations.maxLights,
postProcessing: limitations.postProcessing,
antialiasing: this.getInitialAntialiasing(),
textureQuality: this.getInitialTextureQuality(),
lodBias: 0
};
}
update(frameTime) {
this.frameTimeHistory.push(frameTime);
if (this.frameTimeHistory.length > 60) {
this.frameTimeHistory.shift();
}
if (this.adjustmentCooldown > 0) {
this.adjustmentCooldown--;
return;
}
const averageFrameTime = this.getAverageFrameTime();
const currentFPS = 1000 / averageFrameTime;
if (currentFPS < this.targetFrameRate * 0.8) {
this.decreaseQuality();
} else if (currentFPS > this.targetFrameRate * 1.1) {
this.increaseQuality();
}
}
decreaseQuality() {
let adjusted = false;
// 降级策略优先级
if (this.currentSettings.postProcessing) {
this.currentSettings.postProcessing = false;
adjusted = true;
} else if (this.currentSettings.shadowQuality > 0) {
this.currentSettings.shadowQuality--;
adjusted = true;
} else if (this.currentSettings.renderScale > 0.5) {
this.currentSettings.renderScale *= 0.9;
adjusted = true;
} else if (this.currentSettings.particleCount > 100) {
this.currentSettings.particleCount *= 0.8;
adjusted = true;
}
if (adjusted) {
this.adjustmentCooldown = 120; // 2秒冷却
this.notifyQualityChange();
}
}
increaseQuality() {
const limitations = this.compatibility.limitations;
let adjusted = false;
// 升级策略
if (this.currentSettings.renderScale < 1.0) {
this.currentSettings.renderScale = Math.min(1.0, this.currentSettings.renderScale * 1.1);
adjusted = true;
} else if (this.currentSettings.shadowQuality < 2 && limitations.shadowMapSize > 512) {
this.currentSettings.shadowQuality++;
adjusted = true;
} else if (!this.currentSettings.postProcessing && limitations.postProcessing) {
this.currentSettings.postProcessing = true;
adjusted = true;
}
if (adjusted) {
this.adjustmentCooldown = 180; // 3秒冷却
this.notifyQualityChange();
}
}
}
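AdaptiveQualityManager中引用的getAverageFrameTime未给出实现,一个假设性的补全示意如下:
getAverageFrameTime() {
  if (this.frameTimeHistory.length === 0) return 1000 / this.targetFrameRate;
  const sum = this.frameTimeHistory.reduce((acc, t) => acc + t, 0);
  return sum / this.frameTimeHistory.length;
}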
着色器兼容性处理:
class ShaderCompatibilityManager {
constructor(gl, capabilities) {
this.gl = gl;
this.capabilities = capabilities;
this.shaderVariants = new Map();
}
compileShader(vertexSource, fragmentSource, options = {}) {
const processedVertex = this.processShaderSource(vertexSource, 'vertex');
const processedFragment = this.processShaderSource(fragmentSource, 'fragment');
try {
return this.createShaderProgram(processedVertex, processedFragment);
} catch (error) {
// 尝试降级版本
return this.compileFallbackShader(vertexSource, fragmentSource, error);
}
}
processShaderSource(source, type) {
let processedSource = source;
// 精度处理
if (type === 'fragment') {
const supportHighp = this.capabilities.precision.fragment.highpFloat.precision > 0;
if (!supportHighp) {
processedSource = processedSource.replace(/precision\s+highp\s+float/g, 'precision mediump float');
}
}
// 扩展处理
if (!this.capabilities.extensions.textureFloat) {
processedSource = this.replaceFloatTextures(processedSource);
}
if (!this.capabilities.extensions.drawBuffers) {
processedSource = this.removeMultipleRenderTargets(processedSource);
}
// 移动设备优化
if (this.isMobileDevice()) {
processedSource = this.optimizeForMobile(processedSource);
}
return processedSource;
}
optimizeForMobile(source) {
// 减少复杂计算
source = source.replace(/normalize\(/g, 'fastNormalize(');
// 添加快速数学函数
const fastMathFunctions = `
vec3 fastNormalize(vec3 v) {
return v * inversesqrt(dot(v, v));
}
`;
return fastMathFunctions + source;
}
compileFallbackShader(vertexSource, fragmentSource, originalError) {
console.warn('Primary shader compilation failed, trying fallback', originalError);
// 简化着色器
const simpleFragment = `
precision mediump float;
uniform vec3 uColor;
void main() {
gl_FragColor = vec4(uColor, 1.0);
}
`;
return this.createShaderProgram(vertexSource, simpleFragment);
}
}
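ShaderCompatibilityManager中引用的createShaderProgram未给出实现,下面是一个假设性的补全示意(编译、链接并在失败时抛出带日志的异常,供上文的降级逻辑捕获):
createShaderProgram(vertexSource, fragmentSource) {
  const gl = this.gl;
  const compile = (type, source) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      const log = gl.getShaderInfoLog(shader);
      gl.deleteShader(shader);
      throw new Error(`Shader compile failed: ${log}`);
    }
    return shader;
  };
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(`Program link failed: ${gl.getProgramInfoLog(program)}`);
  }
  return program;
}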
跨平台测试框架:
class CrossPlatformTester {
constructor() {
this.testResults = new Map();
this.knownIssues = this.loadKnownIssues();
}
async runCompatibilityTests(canvas) {
const tests = [
this.testBasicRendering(canvas),
this.testTextureFormats(canvas),
this.testShaderPrecision(canvas),
this.testExtensions(canvas),
this.testPerformance(canvas)
];
const results = await Promise.allSettled(tests);
return {
passed: results.filter(r => r.status === 'fulfilled').length,
failed: results.filter(r => r.status === 'rejected').length,
details: results
};
}
loadKnownIssues() {
return {
'Intel HD Graphics': {
issues: ['precision_issues', 'texture_size_limit'],
workarounds: ['use_mediump', 'limit_texture_size_512']
},
'PowerVR': {
issues: ['shader_compilation_slow'],
workarounds: ['cache_shaders']
},
'Mali': {
issues: ['bandwidth_limited'],
workarounds: ['reduce_texture_usage']
}
};
}
generateCompatibilityReport() {
return {
deviceInfo: this.getDeviceInfo(),
testResults: this.testResults,
recommendations: this.generateRecommendations(),
fallbackOptions: this.getFallbackOptions()
};
}
}
关键兼容性策略:以功能检测代替UA嗅探、按性能分级动态降级、为着色器准备回退版本,并针对已知问题GPU维护专门的规避方案。
How to implement real-time global illumination effects in WebGL?
How to implement real-time global illumination effects in WebGL?
考察点:高级光照技术。
答案:
实时全局光照是现代3D渲染的前沿技术,通过光线追踪、辐射度、屏幕空间技术等方法实现光线的多次反弹和间接照明效果。
球谐光照(Spherical Harmonics Lighting):
// 球谐光照实现
precision highp float;
// 9个球谐系数(3阶)
uniform vec3 uSHL2[9];
vec3 evaluateSH(vec3 normal) {
// L0
vec3 result = 0.282095 * uSHL2[0];
// L1
result += 0.488603 * normal.y * uSHL2[1];
result += 0.488603 * normal.z * uSHL2[2];
result += 0.488603 * normal.x * uSHL2[3];
// L2
result += 1.092548 * normal.x * normal.y * uSHL2[4];
result += 1.092548 * normal.y * normal.z * uSHL2[5];
result += 0.315392 * (3.0 * normal.z * normal.z - 1.0) * uSHL2[6];
result += 1.092548 * normal.x * normal.z * uSHL2[7];
result += 0.546274 * (normal.x * normal.x - normal.y * normal.y) * uSHL2[8];
return max(result, 0.0);
}
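以下是一个假设性的JS端示意,展示如何把9组RGB球谐系数上传到上述uSHL2(系数通常由离线工具或对辐照度环境贴图做SH投影得到,此处以占位数据代替):
// 9个vec3系数铺平为27个float;真实数据应来自SH投影结果
const shCoefficients = new Float32Array(9 * 3);
shCoefficients[0] = shCoefficients[1] = shCoefficients[2] = 0.8; // L0:环境平均色(占位)
gl.useProgram(program);
gl.uniform3fv(gl.getUniformLocation(program, 'uSHL2'), shCoefficients);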
屏幕空间全局光照(SSGI):
// SSGI实现
uniform sampler2D uColorTexture;
uniform sampler2D uDepthTexture;
uniform sampler2D uNormalTexture;
uniform sampler2D uNoiseTexture;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat4 uInverseProjectionMatrix;
const int SAMPLE_COUNT = 16;
const float RADIUS = 0.5;
uniform vec3 hemisphereSamples[SAMPLE_COUNT]; // 切线空间半球采样核,由JS端生成并上传,原文遗漏声明
vec3 reconstructWorldPos(vec2 screenPos, float depth) {
vec4 clipPos = vec4(screenPos * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
vec4 viewPos = uInverseProjectionMatrix * clipPos;
return viewPos.xyz / viewPos.w;
}
vec3 calculateSSGI(vec2 screenCoord) {
float depth = texture2D(uDepthTexture, screenCoord).r;
vec3 worldPos = reconstructWorldPos(screenCoord, depth);
vec3 normal = normalize(texture2D(uNormalTexture, screenCoord).rgb * 2.0 - 1.0);
// 随机旋转
vec2 noiseScale = vec2(800.0 / 4.0, 600.0 / 4.0); // 屏幕尺寸/噪声纹理尺寸,实际应以uniform传入
vec3 randomVec = texture2D(uNoiseTexture, screenCoord * noiseScale).xyz;
// 构建切线空间基
vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
vec3 indirectLight = vec3(0.0);
float totalWeight = 0.0;
// 半球采样
for(int i = 0; i < SAMPLE_COUNT; i++) {
vec3 sampleDir = TBN * hemisphereSamples[i];
vec3 samplePos = worldPos + sampleDir * RADIUS;
// 转换到屏幕空间
vec4 clipSamplePos = uProjectionMatrix * uViewMatrix * vec4(samplePos, 1.0);
vec3 screenSamplePos = clipSamplePos.xyz / clipSamplePos.w;
screenSamplePos = screenSamplePos * 0.5 + 0.5;
if(screenSamplePos.x < 0.0 || screenSamplePos.x > 1.0 ||
screenSamplePos.y < 0.0 || screenSamplePos.y > 1.0) continue;
// 深度测试
float sampleDepth = texture2D(uDepthTexture, screenSamplePos.xy).r;
vec3 sampleWorldPos = reconstructWorldPos(screenSamplePos.xy, sampleDepth);
float distance = length(sampleWorldPos - worldPos);
if(distance < RADIUS) {
vec3 sampleColor = texture2D(uColorTexture, screenSamplePos.xy).rgb;
float weight = max(0.0, dot(normal, normalize(sampleWorldPos - worldPos)));
indirectLight += sampleColor * weight;
totalWeight += weight;
}
}
return totalWeight > 0.0 ? indirectLight / totalWeight : vec3(0.0);
}
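上述SSGI着色器依赖的hemisphereSamples采样核需在JS端生成并上传,下面是一个假设性的生成示意(切线空间半球内随机采样,并让样本向着色点聚集):
function generateHemisphereSamples(count) {
  const samples = new Float32Array(count * 3);
  for (let i = 0; i < count; i++) {
    // z取非负,保证方向位于法线半球内
    let x = Math.random() * 2 - 1;
    let y = Math.random() * 2 - 1;
    let z = Math.random();
    const len = Math.hypot(x, y, z) || 1;
    let scale = i / count;
    scale = 0.1 + 0.9 * scale * scale; // 二次插值,使多数样本靠近着色点
    samples[i * 3] = (x / len) * scale;
    samples[i * 3 + 1] = (y / len) * scale;
    samples[i * 3 + 2] = (z / len) * scale;
  }
  return samples;
}
gl.uniform3fv(gl.getUniformLocation(program, 'hemisphereSamples'), generateHemisphereSamples(16));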
光传播体积(Light Propagation Volumes):
class LightPropagationVolumes {
constructor(gl, volumeSize = 32) {
this.gl = gl;
this.volumeSize = volumeSize;
// 创建3D纹理存储光照信息
this.lpvTextures = {
red: this.create3DTexture(),
green: this.create3DTexture(),
blue: this.create3DTexture()
};
// RSM (Reflective Shadow Map)
this.rsmFramebuffer = this.createRSMFramebuffer();
// 注入和传播着色器
this.injectionShader = this.createInjectionShader();
this.propagationShader = this.createPropagationShader();
}
update(lightPosition, lightDirection) {
// 1. 生成反射阴影贴图
this.generateRSM(lightPosition, lightDirection);
// 2. 注入光源到LPV
this.injectLights();
// 3. 迭代传播光线
for (let i = 0; i < 4; i++) {
this.propagateLight();
}
}
generateRSM(lightPos, lightDir) {
this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.rsmFramebuffer);
// 从光源视角渲染场景,输出位置、法线、颜色
const lightViewMatrix = mat4.lookAt(mat4.create(), lightPos,
vec3.add(vec3.create(), lightPos, lightDir),
[0, 1, 0]);
this.renderSceneToRSM(lightViewMatrix);
}
injectLights() {
this.gl.useProgram(this.injectionShader);
// 将RSM中的像素注入到LPV网格
this.gl.uniform1i(this.gl.getUniformLocation(this.injectionShader, 'uRSMPosition'), 0);
this.gl.uniform1i(this.gl.getUniformLocation(this.injectionShader, 'uRSMNormal'), 1);
this.gl.uniform1i(this.gl.getUniformLocation(this.injectionShader, 'uRSMColor'), 2);
// 渲染到LPV纹理
this.renderToLPV();
}
propagateLight() {
this.gl.useProgram(this.propagationShader);
// 使用前一帧的LPV作为输入
this.bindLPVTextures();
// 执行光传播计算
this.renderLightPropagation();
}
}
体素化全局光照(Voxel-based GI,前瞻示例):
// 体素化几何着色器。注意:WebGL2不支持几何着色器与image load/store,以下写法对应桌面OpenGL/ES 3.2环境,仅作思路参考
#version 300 es
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform mat4 uProjectionMatrices[3]; // X, Y, Z轴投影
uniform int uVoxelResolution;
in vec3 vWorldPos[];
in vec3 vNormal[];
in vec2 vTexCoord[];
flat out int vAxis;
out vec3 gWorldPos;
out vec3 gNormal;
out vec2 gTexCoord;
void main() {
// 选择主导轴
vec3 normal = abs(normalize(cross(vWorldPos[1] - vWorldPos[0],
vWorldPos[2] - vWorldPos[0])));
int axis = 0;
if (normal.y > normal.x && normal.y > normal.z) axis = 1;
else if (normal.z > normal.x && normal.z > normal.y) axis = 2;
vAxis = axis;
// 投影到选定轴
for (int i = 0; i < 3; i++) {
gWorldPos = vWorldPos[i];
gNormal = vNormal[i];
gTexCoord = vTexCoord[i];
gl_Position = uProjectionMatrices[axis] * vec4(vWorldPos[i], 1.0);
EmitVertex();
}
EndPrimitive();
}
// 体素化片段着色器
precision highp float;
uniform sampler2D uAlbedoTexture;
uniform usampler3D uVoxelTexture;
uniform int uVoxelResolution;
flat in int vAxis;
in vec3 gWorldPos;
in vec3 gNormal;
in vec2 gTexCoord;
layout(r32ui) uniform uimage3D uVoxelImage;
void main() {
// 计算体素坐标
vec3 voxelPos = (gWorldPos + 1.0) * 0.5 * float(uVoxelResolution);
ivec3 voxelCoord = ivec3(voxelPos);
// 获取材质颜色
vec4 albedo = texture(uAlbedoTexture, gTexCoord);
// 存储到3D纹理
uint colorValue = packColor(albedo.rgb); // packColor为自定义的颜色打包函数(如按8位/通道打包进32位)
imageAtomicMax(uVoxelImage, voxelCoord, colorValue);
}
实时光线追踪近似:
// 屏幕空间反射和折射
vec3 screenSpaceReflection(vec3 worldPos, vec3 normal, vec3 viewDir) {
vec3 reflectDir = reflect(viewDir, normal);
// 光线步进
vec3 currentPos = worldPos;
for (int i = 0; i < 32; i++) {
currentPos += reflectDir * 0.1;
vec4 screenPos = uViewProjectionMatrix * vec4(currentPos, 1.0);
screenPos.xyz /= screenPos.w;
screenPos.xyz = screenPos.xyz * 0.5 + 0.5;
if (screenPos.x < 0.0 || screenPos.x > 1.0 ||
screenPos.y < 0.0 || screenPos.y > 1.0) break;
float depth = texture2D(uDepthTexture, screenPos.xy).r;
vec3 sampleWorldPos = reconstructWorldPos(screenPos.xy, depth);
if (length(sampleWorldPos - currentPos) < 0.1) {
return texture2D(uColorTexture, screenPos.xy).rgb;
}
}
return textureCube(uEnvironmentMap, reflectDir).rgb;
}
实时全局光照通过多种技术的组合来逼近真实的光照效果,在性能和质量之间找到平衡点。
How to implement compute shader functionality in WebGL? What are the new features of WebGL2?
How to implement compute shader functionality in WebGL? What are the new features of WebGL2?
考察点:WebGL2新特性。
答案:
WebGL2基于OpenGL ES 3.0,引入了许多新特性,虽然不直接支持计算着色器,但可以通过变换反馈等技术实现类似功能。
WebGL2核心新特性:
// 3D纹理支持
const texture3D = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, texture3D);
gl.texImage3D(gl.TEXTURE_3D, 0, gl.RGBA8, 64, 64, 64, 0,
gl.RGBA, gl.UNSIGNED_BYTE, textureData);
// 纹理数组
const textureArray = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, textureArray);
gl.texImage3D(gl.TEXTURE_2D_ARRAY, 0, gl.RGBA8, 512, 512, 16, 0,
gl.RGBA, gl.UNSIGNED_BYTE, arrayData);
// 片段着色器输出多个颜色
#version 300 es
precision highp float;
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;
layout(location = 2) out vec4 outColor2;
void main() {
outColor0 = vec4(albedo, 1.0);
outColor1 = vec4(normal * 0.5 + 0.5, 1.0);
outColor2 = vec4(metallic, roughness, ao, 1.0);
}
变换反馈模拟计算着色器:
class WebGL2ComputeShader {
constructor(gl, computeShaderSource) {
this.gl = gl;
this.program = this.createTransformFeedbackProgram(computeShaderSource);
this.inputBuffers = [];
this.outputBuffers = [];
}
createTransformFeedbackProgram(computeSource) {
const vertexShader = `#version 300 es
${computeSource}
void main() {
compute();
gl_Position = vec4(0.0); // 不需要位置输出
}
`;
const gl = this.gl;
const program = gl.createProgram();
const vs = this.compileShader(gl.VERTEX_SHADER, vertexShader);
gl.attachShader(program, vs);
// 设置变换反馈输出
gl.transformFeedbackVaryings(program, this.getOutputVaryings(), gl.INTERLEAVED_ATTRIBS); // getOutputVaryings为假设的辅助方法,返回要捕获的输出变量名列表
gl.linkProgram(program);
return program;
}
dispatch(workGroupX, workGroupY = 1, workGroupZ = 1) {
const gl = this.gl;
gl.useProgram(this.program);
// 绑定输入缓冲区
this.bindInputBuffers();
// 绑定输出缓冲区到变换反馈
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, this.transformFeedback); // 假设构造时已通过gl.createTransformFeedback()创建
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, this.outputBuffers[0]);
// 执行计算
gl.enable(gl.RASTERIZER_DISCARD);
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, workGroupX * workGroupY * workGroupZ);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
}
}
统一缓冲对象(Uniform Buffer Object):
class UniformBufferManager {
constructor(gl) {
this.gl = gl;
this.buffers = new Map();
}
createUniformBuffer(name, data, binding) {
const gl = this.gl;
const buffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, buffer);
gl.bufferData(gl.UNIFORM_BUFFER, data, gl.DYNAMIC_DRAW);
// 绑定到绑定点
gl.bindBufferBase(gl.UNIFORM_BUFFER, binding, buffer);
this.buffers.set(name, { buffer, binding, size: data.byteLength });
return buffer;
}
updateUniformBuffer(name, data, offset = 0) {
const bufferInfo = this.buffers.get(name);
if (!bufferInfo) return;
const gl = this.gl;
gl.bindBuffer(gl.UNIFORM_BUFFER, bufferInfo.buffer);
gl.bufferSubData(gl.UNIFORM_BUFFER, offset, data);
}
}
// 在着色器中使用
const shaderSource = `#version 300 es
layout(std140) uniform CameraUniforms {
mat4 viewMatrix;
mat4 projectionMatrix;
vec3 cameraPosition;
float time;
};
layout(std140) uniform LightUniforms {
vec3 lightPositions[8];
vec3 lightColors[8];
int lightCount;
};
`;
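以下是一个假设性的使用示意,展示UniformBufferManager如何与上述CameraUniforms块配合(std140布局下mat4占64字节,vec3按16字节对齐,time紧随其后位于字节偏移140;program为已链接的着色器程序):
const ubo = new UniformBufferManager(gl);
// 两个mat4(32个float)+ cameraPosition(vec3,对齐到16字节)+ time,共144字节
const cameraData = new Float32Array(36);
ubo.createUniformBuffer('camera', cameraData, 0);
// 将着色器中的uniform块绑定到同一绑定点0
const blockIndex = gl.getUniformBlockIndex(program, 'CameraUniforms');
gl.uniformBlockBinding(program, blockIndex, 0);
// 每帧只更新time字段(字节偏移 64 + 64 + 12 = 140)
ubo.updateUniformBuffer('camera', new Float32Array([performance.now() / 1000]), 140);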
// WebGL2内置实例化支持
#version 300 es
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;
layout(location = 2) in mat4 aInstanceMatrix;
void main() {
gl_Position = uViewProjectionMatrix * aInstanceMatrix * vec4(aPosition, 1.0);
}
// JavaScript实例化绘制
gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);
gl.drawElementsInstanced(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0, instanceCount);
GPU粒子系统使用变换反馈:
// 粒子更新计算着色器(变换反馈)
#version 300 es
in vec3 aPosition;
in vec3 aVelocity;
in float aLife;
in float aSize;
uniform float uDeltaTime;
uniform vec3 uGravity;
uniform vec3 uWind;
out vec3 vNewPosition;
out vec3 vNewVelocity;
out float vNewLife;
out float vNewSize;
void compute() {
if (aLife <= 0.0) {
vNewPosition = aPosition;
vNewVelocity = aVelocity;
vNewLife = aLife;
vNewSize = aSize;
return;
}
vec3 forces = uGravity + uWind;
vec3 newVelocity = aVelocity + forces * uDeltaTime;
vec3 newPosition = aPosition + newVelocity * uDeltaTime;
// 边界碰撞
if (newPosition.y < 0.0) {
newPosition.y = 0.0;
newVelocity.y *= -0.5;
}
vNewPosition = newPosition;
vNewVelocity = newVelocity;
vNewLife = aLife - uDeltaTime;
vNewSize = aSize;
}
顶点数组对象(VAO):
// VAO简化顶点属性管理
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
// 设置顶点属性
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.enableVertexAttribArray(1);
gl.vertexAttribPointer(1, 3, gl.FLOAT, false, 0, 0);
// 后续只需绑定VAO即可
gl.bindVertexArray(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
查询对象进行性能测量:
class GPUProfiler {
constructor(gl) {
this.gl = gl;
this.queries = new Map();
// GPU计时依赖EXT_disjoint_timer_query_webgl2扩展,核心WebGL2并无gl.TIME_ELAPSED
this.ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
}
beginQuery(name) {
if (!this.ext) return;
const query = this.gl.createQuery();
this.gl.beginQuery(this.ext.TIME_ELAPSED_EXT, query);
this.queries.set(name, query);
}
endQuery(name) {
if (!this.ext) return;
this.gl.endQuery(this.ext.TIME_ELAPSED_EXT);
}
getResult(name) {
const query = this.queries.get(name);
if (!query) return null;
const available = this.gl.getQueryParameter(query, this.gl.QUERY_RESULT_AVAILABLE);
if (available) {
const result = this.gl.getQueryParameter(query, this.gl.QUERY_RESULT);
return result / 1000000; // 纳秒转毫秒
}
return null;
}
}
WebGL2相比WebGL1的主要优势:原生支持3D纹理与纹理数组、多渲染目标、变换反馈、UBO、VAO和实例化渲染,同时GLSL ES 3.0带来了更完整的语言特性。
What are the debugging and performance analysis tools for WebGL applications?
What are the debugging and performance analysis tools for WebGL applications?
考察点:调试分析能力。
答案:
WebGL调试和性能分析需要多层面的工具和技术,包括浏览器开发工具、专业调试插件、自定义分析器等。
浏览器内置调试工具:
// 启用WebGL调试
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', {
preserveDrawingBuffer: true,
antialias: false
});
// 在Console中检查WebGL状态
console.log('WebGL Renderer:', gl.getParameter(gl.RENDERER));
console.log('WebGL Version:', gl.getParameter(gl.VERSION));
console.log('Max Texture Size:', gl.getParameter(gl.MAX_TEXTURE_SIZE));
// 使用Firefox的canvas检查器
// 可以查看每帧的绘制调用、纹理、着色器等
WebGL调试扩展库:
// 使用WebGL Debug库
const WebGLDebugUtils = {
init: function(gl, throwOnError = true) {
// 反查错误码对应的常量名,便于输出可读的错误信息
function getErrorName(ctx, err) {
for (const key in ctx) {
if (ctx[key] === err) return key;
}
return 'UNKNOWN_ERROR(' + err + ')';
}
function makeDebugContext(ctx) {
function makeWrapper(fname) {
return function() {
let rv;
try {
rv = ctx[fname].apply(ctx, arguments);
} catch (e) {
console.error('WebGL error in', fname, ':', e);
if (throwOnError) throw e;
}
const err = ctx.getError();
if (err !== ctx.NO_ERROR) {
const errorName = getErrorName(ctx, err);
console.error('WebGL error', errorName, 'in', fname);
if (throwOnError) {
throw new Error(`WebGL error ${errorName} in ${fname}`);
}
}
return rv;
};
}
const wrapped = {};
for (let prop in ctx) {
if (typeof ctx[prop] === 'function') {
wrapped[prop] = makeWrapper(prop);
} else {
wrapped[prop] = ctx[prop];
}
}
return wrapped;
}
return makeDebugContext(gl);
}
};
Spector.js WebGL调试器:
// 集成Spector.js进行帧分析
const spector = new SPECTOR.Spector();
// 开始捕获一帧
spector.captureNextFrame(canvas);
// 或者设置触发条件
spector.captureContext(gl, 500, false, false); // 捕获500次draw call
// 自定义命令捕获
class WebGLCapture {
constructor(gl) {
this.gl = gl;
this.commands = [];
this.isCapturing = false;
}
startCapture() {
this.isCapturing = true;
this.commands = [];
this.wrapDrawCalls();
}
wrapDrawCalls() {
const originalDrawArrays = this.gl.drawArrays;
const originalDrawElements = this.gl.drawElements;
this.gl.drawArrays = (...args) => {
if (this.isCapturing) {
this.captureDrawCall('drawArrays', args);
}
return originalDrawArrays.apply(this.gl, args);
};
this.gl.drawElements = (...args) => {
if (this.isCapturing) {
this.captureDrawCall('drawElements', args);
}
return originalDrawElements.apply(this.gl, args);
};
}
captureDrawCall(method, args) {
this.commands.push({
method: method,
arguments: args,
state: this.captureWebGLState(),
timestamp: performance.now()
});
}
captureWebGLState() {
return {
viewport: this.gl.getParameter(this.gl.VIEWPORT),
program: this.gl.getParameter(this.gl.CURRENT_PROGRAM),
arrayBuffer: this.gl.getParameter(this.gl.ARRAY_BUFFER_BINDING),
elementArrayBuffer: this.gl.getParameter(this.gl.ELEMENT_ARRAY_BUFFER_BINDING),
framebuffer: this.gl.getParameter(this.gl.FRAMEBUFFER_BINDING),
activeTexture: this.gl.getParameter(this.gl.ACTIVE_TEXTURE)
};
}
}
性能分析器:
class WebGLProfiler {
constructor(gl) {
this.gl = gl;
this.metrics = {
drawCalls: 0,
vertices: 0,
triangles: 0,
textureBinds: 0,
shaderBinds: 0,
bufferBinds: 0,
stateChanges: 0
};
this.frameMetrics = [];
this.isEnabled = true;
}
beginFrame() {
if (!this.isEnabled) return;
this.frameStartTime = performance.now();
this.resetMetrics();
if (!this.isWrapped) { // 只包装一次,避免每帧重复嵌套包装绘制调用
this.wrapWebGLCalls();
this.isWrapped = true;
}
}
endFrame() {
if (!this.isEnabled) return;
const frameTime = performance.now() - this.frameStartTime;
this.frameMetrics.push({
...this.metrics,
frameTime: frameTime,
fps: 1000 / frameTime
});
// 保持最近60帧数据
if (this.frameMetrics.length > 60) {
this.frameMetrics.shift();
}
}
wrapWebGLCalls() {
// 包装绘制调用
const originalDrawArrays = this.gl.drawArrays;
this.gl.drawArrays = (mode, first, count) => {
this.metrics.drawCalls++;
this.metrics.vertices += count;
if (mode === this.gl.TRIANGLES) {
this.metrics.triangles += count / 3;
}
return originalDrawArrays.call(this.gl, mode, first, count);
};
// 包装纹理绑定
const originalBindTexture = this.gl.bindTexture;
this.gl.bindTexture = (target, texture) => {
this.metrics.textureBinds++;
return originalBindTexture.call(this.gl, target, texture);
};
// 包装着色器使用
const originalUseProgram = this.gl.useProgram;
let currentProgram = null;
this.gl.useProgram = (program) => {
if (program !== currentProgram) {
this.metrics.shaderBinds++;
currentProgram = program;
}
return originalUseProgram.call(this.gl, program);
};
}
generateReport() {
const recent = this.frameMetrics.slice(-30);
const avgFPS = recent.reduce((sum, frame) => sum + frame.fps, 0) / recent.length;
const avgDrawCalls = recent.reduce((sum, frame) => sum + frame.drawCalls, 0) / recent.length;
const avgTriangles = recent.reduce((sum, frame) => sum + frame.triangles, 0) / recent.length;
return {
performance: {
averageFPS: avgFPS,
averageFrameTime: 1000 / avgFPS,
minFPS: Math.min(...recent.map(f => f.fps)),
maxFPS: Math.max(...recent.map(f => f.fps))
},
rendering: {
averageDrawCalls: avgDrawCalls,
averageTriangles: avgTriangles,
averageTextureBinds: recent.reduce((sum, frame) => sum + frame.textureBinds, 0) / recent.length
},
recommendations: this.generateRecommendations(recent)
};
}
generateRecommendations(frames) {
const recommendations = [];
const avgDrawCalls = frames.reduce((sum, f) => sum + f.drawCalls, 0) / frames.length;
const avgFPS = frames.reduce((sum, f) => sum + f.fps, 0) / frames.length;
if (avgDrawCalls > 100) {
recommendations.push('考虑使用批处理减少draw calls');
}
if (avgFPS < 30) {
recommendations.push('性能较低,建议降低渲染质量');
}
return recommendations;
}
}
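WebGLProfiler中引用的resetMetrics未给出实现,下面给出一个假设性的补全,并附一个典型的每帧使用示意(renderScene为假设的场景渲染函数):
// resetMetrics的假设性补全:每帧将所有计数器清零
resetMetrics() {
  for (const key of Object.keys(this.metrics)) {
    this.metrics[key] = 0;
  }
}
// 使用示意:在渲染循环前后打点,需要时输出报告
const profiler = new WebGLProfiler(gl);
function frame() {
  profiler.beginFrame();
  renderScene();
  profiler.endFrame();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
console.log(profiler.generateReport());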
GPU内存监控:
class GPUMemoryMonitor {
constructor(gl) {
this.gl = gl;
this.allocatedTextures = new Map();
this.allocatedBuffers = new Map();
this.totalTextureMemory = 0;
this.totalBufferMemory = 0;
}
trackTexture(texture, width, height, format, type) {
const size = this.calculateTextureSize(width, height, format, type);
this.allocatedTextures.set(texture, size);
this.totalTextureMemory += size;
}
untrackTexture(texture) {
const size = this.allocatedTextures.get(texture);
if (size) {
this.allocatedTextures.delete(texture);
this.totalTextureMemory -= size;
}
}
calculateTextureSize(width, height, format, type) {
let bytesPerPixel;
switch (format) {
case this.gl.RGBA:
bytesPerPixel = type === this.gl.UNSIGNED_BYTE ? 4 : 8;
break;
case this.gl.RGB:
bytesPerPixel = type === this.gl.UNSIGNED_BYTE ? 3 : 6;
break;
default:
bytesPerPixel = 4; // 默认估计
}
return width * height * bytesPerPixel;
}
getMemoryReport() {
return {
textureMemory: this.totalTextureMemory,
bufferMemory: this.totalBufferMemory,
totalGPUMemory: this.totalTextureMemory + this.totalBufferMemory,
textureCount: this.allocatedTextures.size,
bufferCount: this.allocatedBuffers.size
};
}
}
着色器分析工具:
class ShaderAnalyzer {
analyzeShader(shaderSource, type) {
const analysis = {
complexity: this.calculateComplexity(shaderSource),
instructions: this.estimateInstructions(shaderSource),
registers: this.estimateRegisterUsage(shaderSource),
textureReads: this.countTextureReads(shaderSource),
branches: this.countBranches(shaderSource)
};
return analysis;
}
calculateComplexity(source) {
let score = 0;
// 数学函数计分
score += (source.match(/sin|cos|tan|sqrt|pow|exp|log/g) || []).length * 2;
score += (source.match(/normalize|cross|dot|reflect/g) || []).length * 1;
// 纹理采样计分
score += (source.match(/texture2D|textureCube/g) || []).length * 3;
// 循环和分支计分
score += (source.match(/for\s*\(/g) || []).length * 5;
score += (source.match(/if\s*\(/g) || []).length * 2;
return score;
}
generateOptimizations(analysis) {
const suggestions = [];
if (analysis.textureReads > 8) {
suggestions.push('考虑减少纹理采样次数');
}
if (analysis.branches > 5) {
suggestions.push('尝试使用mix()替代if分支');
}
if (analysis.complexity > 100) {
suggestions.push('着色器复杂度较高,考虑简化计算');
}
return suggestions;
}
}
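ShaderAnalyzer中引用的countTextureReads与countBranches未给出实现,下面是一个基于正则的假设性补全示意(静态文本统计只能作为粗略估计):
countTextureReads(source) {
  // 统计常见采样函数调用(texture2D/textureCube/texture/textureLod等)
  return (source.match(/\btexture(2D|Cube|Lod|Grad|Offset)?\s*\(/g) || []).length;
}
countBranches(source) {
  // 只统计if分支;循环已在calculateComplexity中单独计分
  return (source.match(/\bif\s*\(/g) || []).length;
}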
关键调试策略:组合使用错误包装、帧捕获(如Spector.js)、性能计数与内存追踪,从CPU与GPU两侧共同定位瓶颈。
How to implement collaboration between WebGL and Web Workers to improve performance?
How to implement collaboration between WebGL and Web Workers to improve performance?
考察点:多线程优化。
答案:
Web Workers可以与WebGL协作进行CPU密集型任务的并行处理,如几何计算、物理模拟、资源加载等,避免阻塞主线程的渲染循环。
OffscreenCanvas实现离屏渲染:
// 主线程
class WebGLWorkerManager {
constructor() {
this.workers = [];
this.offscreenCanvases = [];
}
async initializeWorkers(workerCount = 4) {
for (let i = 0; i < workerCount; i++) {
const worker = new Worker('webgl-worker.js');
// An OffscreenCanvas is itself transferable; transferControlToOffscreen()
// exists only on HTMLCanvasElement, so transfer the canvas object directly
const offscreen = new OffscreenCanvas(512, 512);
worker.postMessage({
type: 'init',
canvas: offscreen,
workerId: i
}, [offscreen]);
this.workers.push(worker);
this.offscreenCanvases.push(offscreen); // detached on this thread after transfer
}
}
renderToTexture(geometryData, materialData, workerId = 0) {
return new Promise((resolve) => {
const worker = this.workers[workerId];
// Note: overwriting onmessage assumes one in-flight render per worker;
// concurrent requests would need per-task IDs
worker.onmessage = (event) => {
if (event.data.type === 'renderComplete') {
resolve(event.data.imageData);
}
};
worker.postMessage({
type: 'render',
geometry: geometryData,
material: materialData
});
});
}
}
// webgl-worker.js
class OffscreenWebGLRenderer {
constructor() {
this.gl = null;
this.shaderProgram = null;
}
initialize(canvas) {
this.gl = canvas.getContext('webgl2');
if (!this.gl) {
throw new Error('WebGL2 not supported in worker');
}
this.setupShaders();
this.setupBuffers();
}
render(geometryData, materialData) {
const gl = this.gl;
// Upload the new geometry
this.updateGeometry(geometryData);
// Draw
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(this.shaderProgram);
this.setUniforms(materialData);
gl.drawElements(gl.TRIANGLES, geometryData.indexCount, gl.UNSIGNED_SHORT, 0);
// Read back the pixels
const pixels = new Uint8Array(512 * 512 * 4);
gl.readPixels(0, 0, 512, 512, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
return pixels;
}
}
let renderer = null;
self.onmessage = function(event) {
// The main thread posts flat messages (type, canvas, geometry, ...),
// so read fields directly off event.data rather than a nested data object
const message = event.data;
switch (message.type) {
case 'init':
renderer = new OffscreenWebGLRenderer();
renderer.initialize(message.canvas);
break;
case 'render':
const result = renderer.render(message.geometry, message.material);
self.postMessage({
type: 'renderComplete',
imageData: result
});
break;
}
};
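Putting both sides together, a sketch of main-thread usage (geometry, material, gl, and targetTexture are assumed to exist, and the browser must support OffscreenCanvas):
const manager = new WebGLWorkerManager();
await manager.initializeWorkers(2);
// Render on worker 0, then upload the returned pixels into a main-context texture
const pixels = await manager.renderToTexture(geometry, material, 0);
gl.bindTexture(gl.TEXTURE_2D, targetTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);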
Parallel geometry processing:
// Geometry-processing Worker, inlined via a Blob URL
class GeometryProcessor {
static createWorker() {
const workerCode = `
class GeometryWorker {
processVertices(vertices, transform) {
const result = new Float32Array(vertices.length);
for (let i = 0; i < vertices.length; i += 3) {
const x = vertices[i];
const y = vertices[i + 1];
const z = vertices[i + 2];
// Apply the transform matrix (column-major 4x4)
result[i] = transform[0] * x + transform[4] * y + transform[8] * z + transform[12];
result[i + 1] = transform[1] * x + transform[5] * y + transform[9] * z + transform[13];
result[i + 2] = transform[2] * x + transform[6] * y + transform[10] * z + transform[14];
}
return result;
}
calculateNormals(vertices, indices) {
const normals = new Float32Array(vertices.length);
const faceNormals = [];
// Compute per-face normals
for (let i = 0; i < indices.length; i += 3) {
const i1 = indices[i] * 3;
const i2 = indices[i + 1] * 3;
const i3 = indices[i + 2] * 3;
const v1 = [vertices[i1], vertices[i1 + 1], vertices[i1 + 2]];
const v2 = [vertices[i2], vertices[i2 + 1], vertices[i2 + 2]];
const v3 = [vertices[i3], vertices[i3 + 1], vertices[i3 + 2]];
const normal = this.calculateFaceNormal(v1, v2, v3);
faceNormals.push(normal);
}
// Accumulate face normals onto each vertex
for (let i = 0; i < indices.length; i += 3) {
const faceIndex = Math.floor(i / 3);
const normal = faceNormals[faceIndex];
for (let j = 0; j < 3; j++) {
const vertexIndex = indices[i + j] * 3;
normals[vertexIndex] += normal[0];
normals[vertexIndex + 1] += normal[1];
normals[vertexIndex + 2] += normal[2];
}
}
// Normalize the accumulated vertex normals
for (let i = 0; i < normals.length; i += 3) {
const length = Math.sqrt(normals[i] ** 2 + normals[i + 1] ** 2 + normals[i + 2] ** 2);
if (length > 0) {
normals[i] /= length;
normals[i + 1] /= length;
normals[i + 2] /= length;
}
}
return normals;
}
calculateFaceNormal(v1, v2, v3) {
// Cross product of the two triangle edges, normalized
const e1 = [v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2]];
const e2 = [v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2]];
const n = [
e1[1] * e2[2] - e1[2] * e2[1],
e1[2] * e2[0] - e1[0] * e2[2],
e1[0] * e2[1] - e1[1] * e2[0]
];
const len = Math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2) || 1;
return [n[0] / len, n[1] / len, n[2] / len];
}
generateLOD(vertices, indices, targetReduction) {
// Simplified LOD generation
const vertexCount = vertices.length / 3;
const targetVertexCount = Math.floor(vertexCount * (1 - targetReduction));
// simplifyMesh (not shown) would implement a mesh-simplification
// algorithm such as edge collapse
return this.simplifyMesh(vertices, indices, targetVertexCount);
}
}
const processor = new GeometryWorker();
self.onmessage = function(event) {
const { type, data, taskId } = event.data;
let result;
switch (type) {
case 'processVertices':
result = processor.processVertices(data.vertices, data.transform);
break;
case 'calculateNormals':
result = processor.calculateNormals(data.vertices, data.indices);
break;
case 'generateLOD':
result = processor.generateLOD(data.vertices, data.indices, data.reduction);
break;
}
self.postMessage({ taskId, result });
};
`;
return new Worker(URL.createObjectURL(new Blob([workerCode], { type: 'application/javascript' })));
}
}
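A quick sketch of driving the inlined worker (the identity matrix stands in for a real transform; processVertices expects column-major 4x4 layout):
const geoWorker = GeometryProcessor.createWorker();
const triangle = new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]);
const identity = new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]);
geoWorker.onmessage = (event) => {
console.log(event.data.taskId, event.data.result); // 1, transformed positions
};
geoWorker.postMessage({
type: 'processVertices',
taskId: 1,
data: { vertices: triangle, transform: identity }
});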
Asynchronous asset-loading system:
class AssetLoaderWorker {
constructor() {
this.workers = [];
this.taskQueue = [];
this.pendingTasks = new Map(); // dispatched tasks awaiting worker results
}
async initialize(workerCount = 2) {
for (let i = 0; i < workerCount; i++) {
const worker = new Worker('asset-loader-worker.js');
worker.onmessage = (event) => this.handleWorkerMessage(event);
this.workers.push(worker);
}
}
loadModel(url, format) {
return this.addTask('loadModel', { url, format });
}
loadTexture(url, options) {
return this.addTask('loadTexture', { url, options });
}
processImages(imageData, filters) {
return this.addTask('processImages', { imageData, filters });
}
addTask(type, data) {
const taskId = Date.now() + Math.random();
return new Promise((resolve, reject) => {
this.taskQueue.push({
id: taskId,
type: type,
data: data,
resolve: resolve,
reject: reject
});
this.processQueue();
});
}
processQueue() {
if (this.taskQueue.length === 0) return;
// Find an idle worker
const availableWorker = this.workers.find(worker => !worker.busy);
if (!availableWorker) return;
const task = this.taskQueue.shift();
availableWorker.busy = true;
availableWorker.currentTaskId = task.id;
this.pendingTasks.set(task.id, task);
availableWorker.postMessage({
taskId: task.id,
type: task.type,
data: task.data
});
}
handleWorkerMessage(event) {
const { taskId, result, error } = event.data;
const worker = event.target;
worker.busy = false;
worker.currentTaskId = null;
const task = this.pendingTasks.get(taskId);
if (task) {
this.pendingTasks.delete(taskId);
if (error) {
task.reject(new Error(error));
} else {
task.resolve(result);
}
}
// Dispatch the next queued task
this.processQueue();
}
}
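Usage might look like the following; the URLs and option names are placeholders, and asset-loader-worker.js (not shown here) is assumed to handle the matching task types:
const loader = new AssetLoaderWorker();
await loader.initialize(2);
// Both loads run in parallel on separate workers
const [model, texture] = await Promise.all([
loader.loadModel('assets/ship.glb', 'glb'),
loader.loadTexture('assets/hull.png', { flipY: true })
]);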
Physics-simulation Worker:
// physics-worker.js
class PhysicsEngine {
constructor() {
this.objects = [];
this.gravity = [0, -9.81, 0];
this.deltaTime = 1/60;
}
addObject(object) {
this.objects.push({
id: object.id,
position: object.position,
velocity: object.velocity || [0, 0, 0],
acceleration: object.acceleration || [0, 0, 0],
mass: object.mass || 1.0,
radius: object.radius || 1.0,
restitution: object.restitution || 0.5
});
}
update() {
for (let object of this.objects) {
// Apply gravity
object.acceleration[0] = this.gravity[0];
object.acceleration[1] = this.gravity[1];
object.acceleration[2] = this.gravity[2];
// Integrate velocity
object.velocity[0] += object.acceleration[0] * this.deltaTime;
object.velocity[1] += object.acceleration[1] * this.deltaTime;
object.velocity[2] += object.acceleration[2] * this.deltaTime;
// Integrate position
object.position[0] += object.velocity[0] * this.deltaTime;
object.position[1] += object.velocity[1] * this.deltaTime;
object.position[2] += object.velocity[2] * this.deltaTime;
// Ground collision: clamp to the floor and bounce with restitution
if (object.position[1] < object.radius) {
object.position[1] = object.radius;
object.velocity[1] *= -object.restitution;
}
}
// Pairwise collision detection between objects
this.detectCollisions();
return this.objects;
}
detectCollisions() {
for (let i = 0; i < this.objects.length; i++) {
for (let j = i + 1; j < this.objects.length; j++) {
const obj1 = this.objects[i];
const obj2 = this.objects[j];
const dx = obj1.position[0] - obj2.position[0];
const dy = obj1.position[1] - obj2.position[1];
const dz = obj1.position[2] - obj2.position[2];
const distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
if (distance < obj1.radius + obj2.radius) {
this.resolveCollision(obj1, obj2);
}
}
}
}
resolveCollision(obj1, obj2) {
// Minimal impulse response: reflect the relative velocity
// along the collision normal, scaled by restitution and mass
const n = [
obj1.position[0] - obj2.position[0],
obj1.position[1] - obj2.position[1],
obj1.position[2] - obj2.position[2]
];
const dist = Math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2) || 1;
const normal = [n[0] / dist, n[1] / dist, n[2] / dist];
const relVel =
(obj1.velocity[0] - obj2.velocity[0]) * normal[0] +
(obj1.velocity[1] - obj2.velocity[1]) * normal[1] +
(obj1.velocity[2] - obj2.velocity[2]) * normal[2];
if (relVel > 0) return; // already separating
const restitution = Math.min(obj1.restitution, obj2.restitution);
const impulse = -(1 + restitution) * relVel / (1 / obj1.mass + 1 / obj2.mass);
for (let k = 0; k < 3; k++) {
obj1.velocity[k] += (impulse / obj1.mass) * normal[k];
obj2.velocity[k] -= (impulse / obj2.mass) * normal[k];
}
}
}
const engine = new PhysicsEngine();
self.onmessage = function(event) {
const { type, data } = event.data;
switch (type) {
case 'addObject':
engine.addObject(data);
break;
case 'update':
const updatedObjects = engine.update();
self.postMessage({
type: 'physicsUpdate',
objects: updatedObjects
});
break;
case 'setGravity':
engine.gravity = data.gravity;
break;
}
};
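From the main thread, the physics worker can be driven as follows (a sketch; syncing positions back into the render scene is left abstract):
const physics = new Worker('physics-worker.js');
physics.onmessage = (event) => {
if (event.data.type === 'physicsUpdate') {
// Copy event.data.objects positions into the corresponding render objects
}
};
physics.postMessage({ type: 'addObject', data: { id: 1, position: [0, 10, 0], radius: 0.5 } });
physics.postMessage({ type: 'update' });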
Main-thread render coordinator:
class RenderCoordinator {
constructor(canvas) {
this.canvas = canvas;
this.gl = canvas.getContext('webgl2');
this.geometryWorker = GeometryProcessor.createWorker();
this.physicsWorker = new Worker('physics-worker.js');
this.assetLoader = new AssetLoaderWorker();
this.renderLoop = this.renderLoop.bind(this);
this.isRunning = false;
}
async initialize() {
await this.assetLoader.initialize();
// Wire up Worker message handlers
this.physicsWorker.onmessage = (event) => {
if (event.data.type === 'physicsUpdate') {
this.updateRenderObjects(event.data.objects);
}
};
}
start() {
this.isRunning = true;
this.lastTime = performance.now();
requestAnimationFrame(this.renderLoop);
// Start the physics update loop
this.startPhysicsLoop();
}
renderLoop(currentTime) {
if (!this.isRunning) return;
const deltaTime = currentTime - this.lastTime;
this.lastTime = currentTime;
// The main thread only renders; simulation and loading run in workers
this.render();
requestAnimationFrame(this.renderLoop);
}
startPhysicsLoop() {
const physicsUpdate = () => {
if (this.isRunning) {
this.physicsWorker.postMessage({ type: 'update' });
setTimeout(physicsUpdate, 16); // ~60 Hz physics tick
}
};
physicsUpdate();
}
async loadScene(sceneData) {
// Load assets in parallel
const loadPromises = sceneData.assets.map(asset => {
if (asset.type === 'model') {
return this.assetLoader.loadModel(asset.url, asset.format);
} else if (asset.type === 'texture') {
return this.assetLoader.loadTexture(asset.url, asset.options);
}
});
const assets = await Promise.all(loadPromises);
// Process the geometry data in a Worker
const processedGeometry = await this.processGeometry(assets);
return processedGeometry;
}
}
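A bootstrap sketch tying the pieces together (render(), updateRenderObjects(), and processGeometry() are assumed to be implemented elsewhere in the class, and sceneData is supplied by the application):
const canvas = document.querySelector('#scene');
const coordinator = new RenderCoordinator(canvas);
await coordinator.initialize();
await coordinator.loadScene(sceneData);
coordinator.start();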
Core advantages:
- The render loop stays responsive: heavy CPU work never competes with requestAnimationFrame
- OffscreenCanvas moves actual GPU work off the main thread where the browser supports it
- Worker pools and task queues scale naturally with the number of available cores
Through collaboration with Web Workers, a WebGL application can take full advantage of modern multi-core processors, achieving smoother rendering and handling more complex scenes.