Distributed Ray-Tracing

原始链接: https://www.4rknova.com//blog/2019/02/24/distributed-raytracing

## Distributed Ray-Tracing: Towards Photorealism

Traditional ray-tracing, such as the Whitted algorithm, creates images by tracing light paths, simulating reflection and refraction. However, it has limitations: hard-edged shadows, a limited set of simulated light paths, and no support for effects such as depth of field. It essentially takes only a single "light-interaction sample".

**Distributed ray-tracing** (or stochastic ray-tracing) improves realism by adopting a probabilistic approach. Instead of a single-point calculation, it uses multiple samples. For example, to simulate realistic shadows with a soft penumbra, it picks random points on the light source and traces shadow rays to determine the occlusion probability.

This "Monte Carlo" approach extends to other effects: glossy surfaces are rendered by tracing multiple reflection rays, and depth of field is achieved by sampling the lens geometry. By averaging these samples, distributed ray-tracing yields a closer approximation of the complex interactions described by the rendering equation, producing more realistic images.

## Distributed Ray-Tracing and the Challenges of Lens Simulation

A discussion on Hacker News centered on the challenges Physically Based Rendering (PBRT) faces when simulating complex lenses. One user had difficulty modeling a doublet (a lens composed of two materials), because PBRT's system struggles with shapes that touch multiple media at once.

Proposed solutions included creating a single surface with a computed ratio of refractive indices (IOR), or adopting "priority tracing", in which material transitions are decided by IOR priority. However, priority tracing adds overhead that hurts ray-traversal performance, a key concern for GPU-accelerated rendering.

PBRT's developers explained that they chose to omit this feature to minimize the ray payload size for GPU efficiency, to leave it as a suitable textbook exercise, and because of the book's length constraints. The conversation also touched on the evolution of ray-tracing techniques (Whitted, distributed, path tracing) and the trade-off between accuracy and performance. Finally, one user asked for recommendations for an open-source, GPU-backed ray tracer that can accurately simulate multi-material lenses, even at the cost of speed.

## Original Article

Distributed ray-tracing is a term that is commonly misconstrued and often associated with the concept of parallel computing, where the calculations required to render an image are distributed across a network of processing nodes. The more appropriate term for that concept, ‘parallel ray-tracing’, is typically used to resolve the ambiguity.

In the traditional Whitted algorithm, a ray is spawned for every pixel on the screen. That ray is tested against the geometry of the scene to check whether an intersection point exists. If an intersection is found, depending on the properties of the surface, a limited number of additional purpose-specific rays may be generated.

These rays can either be shadowing rays, which check whether the resolved point is visible to the light sources in the scene, or reflection / transmission rays, which recursively trace a specular light path to model reflection or transmission events in perfect mirrors and transparent media.
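As a rough sketch, the core of that recursion might look like the following. This is not the Whitted paper's or xtracer's actual code; the Scene, Ray, Hit, Light and Color types, and helpers such as scene.occluded() and reflect(), are hypothetical stand-ins for a renderer's interfaces.

Color trace(const Scene &scene, const Ray &ray, int depth)
{
    Hit hit;
    if (depth <= 0 || !scene.intersect(ray, hit)) return scene.background();

    Color color(0, 0, 0);

    // One binary shadow query per (point) light source.
    for (const Light &light : scene.lights()) {
        Ray shadow_ray(hit.position, light.position - hit.position);
        if (!scene.occluded(shadow_ray))
            color += hit.material->shade(hit, light);
    }

    // Perfect mirror reflection: a single recursive specular ray.
    if (hit.material->is_reflective()) {
        Ray reflected(hit.position, reflect(ray.direction, hit.normal));
        color += hit.material->reflectance() * trace(scene, reflected, depth - 1);
    }

    // A transmission (refraction) ray would be handled analogously.
    return color;
}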

While the algorithm can produce aesthetically pleasing results and is able to model interactions that traditional rasterization fails to represent at the same level of visual fidelity, it can only simulate a limited set of interactions and light paths, most of which are idealized and not typically observed in the real world. In a mathematical sense, it’s intuitive to think of the limitations in terms of the rendering equation, which requires the evaluation of several nested integrals. Conventional ray-tracing estimates illumination using a single sample across the entire domain, which constitutes a particularly crude approximation.
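For reference, the rendering equation at a surface point $x$, in its standard hemispherical form, is:

$$ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i $$

where $L_o$ is the outgoing radiance, $L_e$ the emitted radiance, $f_r$ the BRDF, $L_i$ the incident radiance and $n$ the surface normal at $x$. Whitted-style tracing evaluates the integrand only along a handful of fixed directions (towards point lights and along the perfect mirror and refraction directions), so the integral over the hemisphere $\Omega$ is approximated by isolated samples rather than estimated over its full domain.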

In summary, some of the limitations are:

  • Shadows have a hard edge, as only infinitesimally small point light sources of zero volume can be simulated, with binary shadow queries that use a single ray.
  • Reflection / Refraction can only simulate a limited set of light paths, for perfect mirror surfaces, or perfectly homogeneous transparent media.
  • More complex effects like depth of field are not supported.

Distributed ray-tracing, also known as ‘stochastic ray-tracing’, takes a few additional steps towards photo-realism, adding support for simulating smoothly varying optical phenomena.

To simulate light sources of arbitrary size and shape, shadowing queries are required to yield non-binary results. In the real world, a light emitter can be visible in its entirety, partially occluded or fully occluded from a specific point on a surface. The resulting shadow has a characteristic gradient border, typically called the shadow’s ‘penumbra’. Distributed ray-tracing simulates this effect by adopting a probabilistic approach. A point on the light emitter’s surface is selected at random and a shadow ray is constructed, originating from the point that is being shaded, towards that randomly picked position. Multiple such samples are integrated to approximate the occlusion probability for the shaded point.
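A minimal sketch of that occlusion estimate is shown below. The AreaLight, Scene and Vector3f types, the light.sample_point() helper and the uniform() PRNG are assumptions for illustration, not part of a specific renderer.

// Estimate the visibility of an area light at a shaded point by averaging
// binary shadow-ray queries towards random points on the emitter's surface.
float estimate_visibility(const Scene &scene, const Vector3f &shading_point,
                          const AreaLight &light, int num_samples)
{
    int unoccluded = 0;

    for (int i = 0; i < num_samples; ++i) {
        // Pick a random point on the emitter's surface.
        Vector3f light_point = light.sample_point(uniform(), uniform());

        // Shadow ray from the shaded point towards the sampled position.
        Ray shadow_ray(shading_point, light_point - shading_point);
        if (!scene.occluded(shadow_ray))
            ++unoccluded;
    }

    // The fraction of unoccluded samples approximates the probability that
    // the emitter is visible; intermediate values form the penumbra.
    return (float)unoccluded / (float)num_samples;
}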

The same approach is used to simulate a variety of interactions and optical effects:

  • Different degrees of glossiness can be simulated by generating multiple reflection rays towards random samples on a specular lobe (a sketch follows this list).
  • Optical depth of field is computed by distributing multiple integration samples on a thin lens geometry.
  • Motion blur can be achieved by integrating multiple samples in the time domain.
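As a sketch of the glossy case, assuming a hypothetical perturb_within_lobe() helper that jitters a direction inside a cone whose width grows with surface roughness, and the same hypothetical scene types as in the earlier sketch:

// Approximate a glossy reflection by averaging several reflection rays
// distributed around the perfect mirror direction.
Color shade_glossy(const Scene &scene, const Hit &hit, const Vector3f &view_dir,
                   float roughness, int num_samples, int depth)
{
    Vector3f mirror_dir = reflect(view_dir, hit.normal);
    Color accum(0, 0, 0);

    for (int i = 0; i < num_samples; ++i) {
        // Jitter the mirror direction within the specular lobe.
        Vector3f sample_dir = perturb_within_lobe(mirror_dir, roughness);
        accum += trace(scene, Ray(hit.position, sample_dir), depth - 1);
    }

    // Average the samples to approximate the glossy reflection integral.
    return accum / (float)num_samples;
}

Motion blur follows the same pattern, except that the random variable being sampled is the time at which the ray observes the scene rather than a direction.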

Looking at the rendering equation once again, it’s easy to see that the Monte Carlo method used in distributed ray-tracing samples the integrand at multiple points across the domain, averaging the result to calculate a far better approximation.
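Concretely, each of the effects above is an instance of the standard Monte Carlo estimator, which averages $N$ samples of the integrand drawn from a probability density $p$:

$$ \int_{\Omega} f(\omega)\, d\omega \;\approx\; \frac{1}{N} \sum_{i=1}^{N} \frac{f(\omega_i)}{p(\omega_i)}, \qquad \omega_i \sim p $$

The estimate is unbiased, and its noise decreases as more samples are taken (the standard deviation falls off as $O(1/\sqrt{N})$), which is why distributed ray-tracing trades render time for smoother penumbrae, glossy highlights and depth-of-field blur.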

Below you can find some examples that I generated using my own renderer, xtracer. The project is open source and you can access the code on GitHub.

On the left side, a simple scene is rendered using the Whitted algorithm. On the right side, the same scene is rendered using distributed ray-tracing, producing softer shadows.

Distributed ray-tracing used to simulate glossy surfaces.

Depth of field is simulated using a thin lens model.

Soft shadows look more intricate on complex geometry.

Here is an example of how to implement the thin lens model shown above. The full source is available on GitHub.

Ray Perspective::get_primary_ray(float x, float y, float width, float height)
{
    // Note that the direction vector of the ray is not normalized.
    // The DoF ray calculation depends on this at the moment.
    Ray ray;

    scalar_t aspect_ratio = (scalar_t)width / (scalar_t)height;
    ray.origin = position;

    // Calculate the ray's intersection point on the projection plane.
    ray.direction.x = (2.0 * (scalar_t)x / (scalar_t)width) - 1.0;
    ray.direction.y = ((2.0 * (scalar_t)y / (scalar_t)height) - 1.0) / aspect_ratio;
    ray.direction.z = 1.0 / tan(fov * RADIAN / 2.0);

    /*
        Setting up the look-at matrix is easy when you consider that a matrix
        is basically a rotated unit cube formed by three vectors (the 3x3 part)
        at a particular position (the 1x3 part).

        We already have one of the three vectors:
            - The z-axis of the matrix is simply the view direction.
            - The x-axis of the matrix is a bit tricky: if the camera is not
              tilted, then the x-axis of the matrix is perpendicular to the
              z-axis and the vector (0, 1, 0).
            - The y-axis is perpendicular to the other two, so we simply
              calculate the cross product of the x-axis and the z-axis to
              obtain the y-axis. Note that the y-axis is calculated using the
              reversed z-axis. The image will be upside down without this
              adjustment.
    */

    // Calculate the camera direction vector and normalize it.
    calculate_transform(m_transform);

    // Calculate the deviated ray direction for DoF.
    if (flength > 0) {
        Ray fray;
        fray.origin = ray.direction;

        scalar_t half_aperture = aperture / 2.f;
        fray.origin.x += prng_c(-half_aperture, half_aperture);
        fray.origin.y += prng_c(-half_aperture, half_aperture);

        // Find the intersection point on the focal plane.
        Vector3f fpip = ray.direction + flength * ray.direction.normalized();
        fray.direction = fpip - fray.origin;

        ray = fray;
    }

    // Transform the direction vector.
    ray.direction.transform(m_transform);
    ray.direction.normalize();

    // Transform the origin of the ray for DoF.
    if (flength > 0) {
        ray.origin.transform(m_transform);
        ray.origin += position;
    }

    return ray;
}
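In practice, the depth-of-field blur only emerges after averaging many such primary rays per pixel. A sketch of the outer sampling loop is given below; the trace() function, the Color type and the framebuffer are hypothetical stand-ins rather than xtracer's actual interfaces.

// Average several primary rays per pixel; each call to get_primary_ray()
// draws a fresh random point on the lens aperture, so the mean converges
// to the depth-of-field blur.
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        Color accum(0, 0, 0);

        for (int s = 0; s < samples_per_pixel; ++s) {
            Ray ray = camera.get_primary_ray((float)x, (float)y,
                                             (float)width, (float)height);
            accum += trace(scene, ray, max_depth);
        }

        framebuffer.set_pixel(x, y, accum / (float)samples_per_pixel);
    }
}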