shadow mapping - transforming a view space position to the shadow map space

I use deferred rendering and store fragment positions in camera view space. When I perform the shadow calculation, I need to transform from camera view space to shadow map space. I build the shadow matrix like this:

shadowMatrix  = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;

shadowBiasMatrix shifts values from the [-1,1] range to [0,1]. lightProjectionMatrix is an orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and contains the light direction. inverseCameraViewMatrix transforms a fragment position from camera view space to world space.
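For reference, the bias matrix is just a scale-and-translate that remaps [-1,1] NDC coordinates to [0,1] texture coordinates. A minimal sketch in plain Python (row-major storage and the `mat_vec` helper are illustrative assumptions, not part of the question's code):

```python
# Bias matrix that remaps NDC coordinates from [-1, 1] to [0, 1]:
# scale each axis by 0.5, then translate by 0.5.
shadow_bias = [
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# The NDC corner (-1, -1, -1) maps to the texture-space origin (0, 0, 0),
# and (1, 1, 1) maps to (1, 1, 1).
print(mat_vec(shadow_bias, [-1.0, -1.0, -1.0, 1.0]))
print(mat_vec(shadow_bias, [1.0, 1.0, 1.0, 1.0]))
```

Note that row-major vs. column-major conventions differ between math libraries; in GLSL the same matrix is typically written column by column.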

I'm wondering: is it correct to multiply the inverse camera view matrix into the other matrices? Or should I apply the inverse camera view matrix separately?

First case:

vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);

Second case, applying the inverse camera view matrix separately:

vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;

Precomputing the shadow matrix as described is the correct approach and should work as expected.

Due to the associativity of matrix multiplication, both cases produce the same result (ignoring floating-point precision), so they are interchangeable. But since these computations run in the fragment shader, it is best to premultiply the matrices in the host program so that each fragment does as little work as possible.
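The equivalence of the two cases is just (A·B)·v = A·(B·v). A quick numeric check in plain Python (the matrices here are simple stand-ins, not real projection/view matrices):

```python
def mat_mul(a, b):
    """4x4 row-major matrix product."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    """4x4 row-major matrix times a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Stand-ins for (bias * lightProjection * lightView) and inverseCameraView.
A = [[0.5, 0, 0, 0.5], [0, 0.5, 0, 0.5], [0, 0, 0.5, 0.5], [0, 0, 0, 1]]
B = [[1, 0, 0, 2], [0, 1, 0, -3], [0, 0, 1, 1], [0, 0, 0, 1]]
v = [1.0, 2.0, 3.0, 1.0]

# Case 1: one precomputed matrix.  Case 2: two separate multiplies.
case1 = mat_vec(mat_mul(A, B), v)
case2 = mat_vec(A, mat_vec(B, v))
assert case1 == case2
```

With real matrices the two paths can differ in the last floating-point bits, but never in a visually meaningful way; the precomputed version simply saves a matrix-vector multiply per fragment.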

I am also currently writing a deferred renderer and compute my matrices in the same way, without any problems.

// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);

// perspective divide
lightspace_pos /= lightspace_pos.w;

// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);

// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth  = lightspace_pos.z;

I also found a tutorial that uses a similar approach, which may be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx

float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
    mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir,1);
    projectedEyeDir = projectedEyeDir/projectedEyeDir.w;

    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5,0.5) + vec2(0.5,0.5);

    const float bias = 0.0001;
    float depthValue = texture2D( tShadowMap, textureCoordinates ).r - bias;
    return float(projectedEyeDir.z * 0.5 + 0.5 < depthValue);
}

The eyeDir that comes in as input is in view space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from Camera View Space into World Space, then into Light View Space, then into Light Projection Space/Clip Space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.

Once we are in the right space we calculate the texture coordinates and are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map so that a point does not end up shading itself because of rounding errors. We shift the whole map back a bit, so that all the values in the map are slightly smaller than they should be.
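As a concrete illustration of why the bias matters (the numbers are made up; the exact side on which the offset is applied varies between implementations):

```python
# A fragment's stored depth and the depth re-derived during shading can
# differ by a tiny rounding error. Without a bias, the fragment may fail
# the depth test against its own stored value ("shadow acne").
stored_depth   = 0.500000   # value written into the shadow map
fragment_depth = 0.500004   # same point, re-projected with rounding error
bias = 0.0001

lit_without_bias = fragment_depth <= stored_depth         # self-shadowed
lit_with_bias    = fragment_depth - bias <= stored_depth  # correctly lit
print(lit_without_bias, lit_with_bias)
```

Too large a bias causes the opposite artifact ("peter-panning", where shadows detach from their casters), so the value is usually tuned per scene.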