Real-Time Simulation of Time-of-Flight Sensors via Path Tracing using OptiX
This article explains the simulation of a Time-of-Flight sensor using Path Tracing and the generation of depth images from it. The simulation can be roughly divided into two phases. First, the Time-of-Flight sensor and the propagation of light are simulated on the GPU using Path Tracing. The result of this simulation is the individual phase images, which were explained in Capturing Depth Information: Principles and Techniques of Time-of-Flight Sensors. From these phase images, the depth values are then calculated on the CPU, taking lens scattering and random error into account.
The following subchapters will first explain the Path Tracing algorithm in detail and then discuss the calculation of depth values.
Path Tracing
The Path Tracing algorithm is an implementation based on Kajiya [Kaj86]. For every pixel of the image, rays are sent into the scene and tested for intersections with the scene geometry. If a ray hits a surface, the algorithm determines whether this point is illuminated by light sources and calculates the proportion of radiation reflected toward the observer. To account for indirect illumination by other surfaces, a new ray in a random direction is generated at the intersection point. This new ray is again tested against the scene, and the reflection toward its origin is calculated. The process repeats until a maximum path length is reached or the ray leaves the scene. Figure 2 illustrates the operation of the algorithm for several sight rays calculated for the same pixel. Each ray leaves the observer in the same direction and receives a new random direction at each intersection point. For each intersection point, the algorithm calculates whether it is directly illuminated by one or more light sources.
The Path Tracing algorithm was developed using the NVIDIA OptiX Ray Tracing Engine, a framework that provides an API for creating Ray Tracing applications [PRS+10]. In addition to installing OptiX, the CUDA Toolkit is required, and its use necessitates a graphics card that supports CUDA. CUDA is an API developed by NVIDIA that allows parts of a program to be executed on the graphics card using General-Purpose Computing on Graphics Processing Units (GPGPU). Computation on the graphics card is characterized by highly parallel processing. In the case of Path Tracing, a separate thread can be used to calculate each ray, so the program can be executed for thousands of rays simultaneously.
The OptiX framework provides interfaces through which parts of the functionality are defined as CUDA C kernels; these are explained below. The implementation of a Path Tracing algorithm using OptiX involves the following steps:
- Implementing a Ray Generation program that defines how the rays are shot into the scene.
- Defining a Bounding Box program, which is needed to accelerate the algorithm: it describes a coarse test that reports whether the ray is near the object before the precise and computationally intensive intersection calculation with the object's surface is performed.
- Implementing an Intersection program that performs the precise intersection calculation of the ray with the surface.
- Defining an Any Hit and a Closest Hit program for each surface type. These describe the behavior of the ray when it collides with the surface. For each object, a specific Closest Hit program can be defined that describes the object's BRDF. The Any Hit program is needed to calculate shadows: it describes the transparent properties of the object and defines which portion of the radiation passes through if an object lies between the intersection point and the radiation source.
- Implementing a Miss program, which defines the behavior if the ray leaves the scene and no further intersection can occur.
The Bounding Box and Intersection programs were taken from the Ray Tracer example of the OptiX SDK, so they are not detailed here. The explanation of the Any Hit program is also omitted, since this work assumes that all surfaces are completely opaque, which makes its implementation trivial.
In contrast to the implementation by Meister et al. [MNK13], this work uses a unidirectional Path Tracer, as the light source is in the immediate vicinity of the sensor and the rendered image is almost entirely directly illuminated. A bidirectional Path Tracer shows its strengths particularly in complex lighting situations where the scene is mostly indirectly illuminated, which is why the additional complexity of calculating rays originating from the light source is omitted in this work.
In the implementation of Path Tracing, it is assumed for simplicity that photons travel through a vacuum and are not subject to atmospheric scattering. In addition, the refractive index of air at sea level and room temperature is assumed for the calculation of Fresnel reflection.
Ray Generation Program
The Ray Generation Program forms the starting point of the Path Tracing algorithm. It generates the rays that are shot into the scene and accumulates the partial results of the individual rays into a final result. In this work, a progressive Path Tracing algorithm is implemented: for each rendered image, a number of rays is generated and the intermediate result is presented to the user while the next rays are already being computed on the graphics card, so the image progressively converges toward the final result over time. As soon as the scene changes, e.g., the camera is moved, the image is recalculated from scratch.
As illustrated in Figure 3, the algorithm continuously sends new rays into the scene for each pixel. After a certain number of rays per pixel, referred to as Samples, the program interrupts the process to write the intermediate result to a memory area (here referred to as Buffer) that is presented to the user in real time. Each of these intermediate results is referred to as a Frame. Subsequently, the processing of the next frame is started and the accumulated result is refined with additional samples.
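Accumulating the frames in this way corresponds to a running average. If $\bar{c}_{n-1}$ denotes the buffer content after $n-1$ frames and $c_n$ the average of the samples of frame $n$, the new buffer content is
\begin{equation}
\bar{c}_{n} = \bar{c}_{n-1} + \frac{1}{n}\left(c_{n} - \bar{c}_{n-1}\right) = \frac{(n-1)\,\bar{c}_{n-1} + c_{n}}{n},
\end{equation}
which is exactly the linear interpolation with the factor 1/frame_number performed at the end of the Ray Generation Program excerpt below.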
When the rays for a pixel are calculated, the distortion caused by the lens, as explained in Understanding Errors in Time of Flight Depth Sensors, is already taken into account. For this purpose, a buffer with undistorted rays was precomputed and copied into the graphics card's memory: for each pixel coordinate $(u, v)$, an undistorted pixel coordinate $(u', v')$ was determined and the corresponding ray calculated from it. First, a projection matrix $P$, which maps a 3D point in space to a 2D pixel coordinate on the screen, was computed with the ArUco library from the camera matrix obtained through calibration with a chessboard. To calculate the rays, this projection matrix is inverted so that for each 2D pixel coordinate a 3D point is obtained that lies at an arbitrary position on the corresponding ray. This point is then normalized to obtain a direction:
\begin{equation}
\left<p_{x}, p_{y}, p_{z}, p_{w}\right> = P^{-1} \cdot \left<u', v', 1, 1\right>
\end{equation}
\begin{equation}
\vec d = \left<d_{x}, d_{y}, d_{z}\right> = \frac{\left<p_{x}, p_{y}, p_{z}\right>}{\lvert\left<p_{x}, p_{y}, p_{z}\right>\rvert}.
\end{equation}
Since using only a single ray per pixel would lead to aliasing artifacts at edges, Ray Tracing applications often generate a random ray within the pixel for each sample to produce a smoother image. Because the aperture is not infinitely small, this work likewise does not use just one ray per pixel, but approximates the aperture opening by choosing a random point $(u', v')$ within a radius for each sample. Since computing an undistorted pixel coordinate for randomly generated pixel coordinates $(u, v)$ on the graphics card would be too expensive, a buffer at ten times the resolution is prepared, precomputing 100 rays per pixel; one of these precomputed rays is chosen at random for each sample. Figure 4 illustrates this principle by showing the pixel coordinates used to calculate the rays.
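A sketch of how one of the precomputed rays could be selected is shown below. The layout of rayDirectionsBuffer (ten precomputed directions per pixel in each dimension) and the helper rnd, a per-thread uniform random number generator as used in the OptiX SDK samples, are assumptions for illustration only:
// Assumed layout: a 2D buffer holding 10x10 precomputed, undistorted
// ray directions per image pixel.
rtBuffer<float3, 2> rayDirectionsBuffer;
// Hypothetical device helper: picks one of the 100 precomputed directions
// belonging to the pixel at launch_index, using two random numbers.
__device__ float3 pick_precomputed_ray(uint2 launch_index, unsigned int& seed)
{
    const unsigned int du = (unsigned int)(rnd(seed) * 10.0f) % 10u;
    const unsigned int dv = (unsigned int)(rnd(seed) * 10.0f) % 10u;
    const uint2 buffer_index = make_uint2(launch_index.x * 10 + du,
                                          launch_index.y * 10 + dv);
    return rayDirectionsBuffer[buffer_index];
}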
The following simplified code excerpt describes the process that takes place in the Ray Generation Program:
RT_PROGRAM void pathtrace_camera() {
// For each pass, a certain number of rays are generated for each pixel
float3 result = make_float3(0.0f);
unsigned int samples_per_pixel = sqrt_num_samples * sqrt_num_samples;
do {
// Calculate the pixel coordinate for which the ray should be calculated
float2 relativeCoordinate = calculatePixel(launch_index);
// Calculate the ray for the pixel
float3 calculated_ray = calculate_ray_for_pixel(rayDirectionsBuffer, relativeCoordinate);
// Transform the ray from the camera coordinate system to the world coordinate system
float3 ray_origin = eye_position;
float3 ray_direction = normalize(calculated_ray.x*U + calculated_ray.y*V - calculated_ray.z*W);
// Each iteration calculates a segment of the ray
PerRayData_radiance prd;
// Initialize the per-ray data before the path is traced
prd.result = make_float3(0.0f);
prd.attenuation = make_float3(1.0f);
prd.depth = 0;
prd.done = false;
for(;;) {
Ray ray = make_Ray(ray_origin, ray_direction, radiance_ray_type, scene_epsilon, RT_DEFAULT_MAX);
// At this point, a Closest Hit or a Miss program is called, and the result is written into the variable prd.
rtTrace(top_object, ray, prd);
prd.depth++;
prd.result += prd.radiance;
// If a termination condition was met, the tracking of the ray is ended
if(prd.done || prd.depth >= max_depth)
break;
// The ray is updated for the next pass
// The new direction is determined in the Closest Hit program
ray_origin = prd.origin;
ray_direction = prd.direction;
}
result += prd.result;
} while (--samples_per_pixel);
float3 pixel_color = result/(sqrt_num_samples*sqrt_num_samples);
// The result is transferred to the buffer, which is then displayed
if (frame_number > 1) {
// Linear interpolation
float a = 1.0f / (float)frame_number;
float3 old_color = make_float3(output_buffer[launch_index]);
output_buffer[launch_index] = make_float4( lerp( old_color, pixel_color, a ), 1.0f);
} else {
output_buffer[launch_index] = make_float4(pixel_color, 1.0f);
}
}
Each thread on the graphics card runs the program in parallel for a different pixel of the image. This process is repeated until the number of samples for the pixel is reached. Subsequently, the frame_number is incremented for the next frame and the program execution is repeated. If the user moves the camera and the image has to be recalculated, the frame_number is reset to 0, so that the previous results in the output buffer are overwritten by the else branch of the final buffer write. The camera coordinate system is passed in the form of a rotation matrix
\begin{equation}
R = \begin{pmatrix} \hat u_{right} & \hat v_{up} & \hat w_{forward} \end{pmatrix}
\end{equation}
and the camera position $\mathbf{p}_{eye}$. The rotation matrix consists of three normalized vectors, which are used in the computation of ray_direction to transform the ray from the camera coordinate system to the world coordinate system.
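Written out, the assignment of ray_direction in the excerpt corresponds to
\begin{equation}
\vec d_{world} = \frac{d_{x} \cdot \hat u_{right} + d_{y} \cdot \hat v_{up} - d_{z} \cdot \hat w_{forward}}{\lvert d_{x} \cdot \hat u_{right} + d_{y} \cdot \hat v_{up} - d_{z} \cdot \hat w_{forward} \rvert},
\end{equation}
where the negative sign of the $z$ component presumably reflects the convention that the camera looks along the negative $z$ axis of its own coordinate system.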
Closest Hit Program
The Closest Hit Program is executed when the ray collides with a surface. A separate Closest Hit Program can be defined for each surface. In this work, only one program was implemented for all surfaces, and the material properties are passed to the graphics card as textures. These textures contain the diffuse reflectance as a 24-bit RGB color, the roughness of the surface as an 8-bit grayscale value, the metallic properties as an 8-bit mask, and, in a further texture, details of the surface structure in the form of surface normals encoded as a 24-bit RGB color. Figure 5 shows a selected example from the Sponza test scene created by Crytek, which is often used for testing light simulations.
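In OptiX, such material textures can be bound to the Closest Hit Program as texture samplers. The following declarations are only an illustrative sketch; the names are assumptions, not the identifiers used in the actual implementation:
// Hypothetical texture samplers for the material properties described above
rtTextureSampler<float4, 2> albedo_map;     // diffuse reflectance (24-bit RGB)
rtTextureSampler<float4, 2> roughness_map;  // roughness (8-bit grayscale)
rtTextureSampler<float4, 2> metallic_map;   // metallic mask (8-bit)
rtTextureSampler<float4, 2> normal_map;     // tangent-space surface normals (24-bit RGB)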
The information passed in the textures is used in the calculation of the BRDF. The Closest Hit Program calculates the reflection properties after a collision of the ray with the surface. As explained in Fundamentals of Photo Realistic Rendering, a new ray in a random direction is generated at each intersection point, so that a path is followed for each sample. As an optimization, this work uses a cosine-weighted distribution of these rays, so the multiplication with $\cos \theta_i$ from the rendering equation cancels out (a sketch of such a sampling routine follows after the excerpt below). The following code excerpt illustrates the calculation of the indirect lighting in the Closest Hit Program:
RT_PROGRAM void microfacet_closest_hit() {
// The traveled distance of the ray is summed up here
float3 hit_point = old_ray_origin + (ray_length * old_ray_direction);
prd_radiance.ir_traveled_distance += length(old_ray_origin - hit_point);
// Here, the influence of direct light is calculated
// This differs depending on the chosen modulation
// ...
// The new ray originates at the intersection point
prd_radiance.origin = hit_point;
// A new direction for the ray is calculated
prd_radiance.direction = cosine_sample_hemisphere(surface_normal);
// The attenuation factor of the reflection is determined:
// It is known from which direction the ray comes and in which
// direction it is reflected.
// There is a distinction between metals and non-metals
float3 fresnel;
if(metallic) {
// Here, the Fresnel reflectance is calculated if the surface is metallic
fresnel = diffuse_reflectance * FrConductor(dot(prd_radiance.direction, half_vector), current_index_of_refraction, index_of_refraction, absorption_coefficient);
} else {
// Here, the Fresnel reflectance is calculated if the surface is non-metallic
fresnel = FrDielectric(dot(prd_radiance.direction, half_vector), current_index_of_refraction, index_of_refraction);
}
// Calculation of the diffuse reflection component using Oren Nayar BRDF
float3 diffuse = OrenNayar_f(diffuse_reflectance, surface_normal, -old_ray_direction, prd_radiance.direction, roughness) * (1.0f - fresnel) * (1.0f - metallic);
// Calculation of the specular reflection component using Torrance Sparrow BRDF
float3 specular = TorranceSparrow_f(surface_normal, -old_ray_direction, prd_radiance.direction, fresnel, roughness);
// New calculation of the attenuation factor according to the calculated reflectance
prd_radiance.attenuation = (diffuse + specular) * prd_radiance.attenuation * PI;
}
Again, this is a highly simplified excerpt of the implementation, serving only for illustration. The calculation of direct lighting has been omitted here, as it differs for every type of light source. The implementation of direct lighting is therefore detailed in the following subchapters for each type of light source.
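The helper cosine_sample_hemisphere used in the excerpt above is not shown. A minimal sketch of cosine-weighted hemisphere sampling, assuming two uniform random numbers u1 and u2 in [0, 1) are drawn from the per-ray random state, could look like this:
// Sketch of cosine-weighted hemisphere sampling around the surface normal n.
// A point is sampled on the unit disk and projected up onto the hemisphere.
__device__ float3 sample_cosine_weighted(const float3& n, float u1, float u2)
{
    const float r   = sqrtf(u1);
    const float phi = 2.0f * M_PIf * u2;
    const float x = r * cosf(phi);
    const float y = r * sinf(phi);
    const float z = sqrtf(fmaxf(0.0f, 1.0f - u1));   // cos(theta)
    // Build an orthonormal basis around the normal and transform to world space
    const float3 helper = fabsf(n.x) > 0.9f ? make_float3(0.0f, 1.0f, 0.0f)
                                            : make_float3(1.0f, 0.0f, 0.0f);
    const float3 t = normalize(cross(helper, n));
    const float3 b = cross(n, t);
    return normalize(x * t + y * b + z * n);
}
Because the probability density of this sampling is $\cos\theta / \pi$, the $\cos\theta_i$ factor of the rendering equation cancels and a factor of $\pi$ remains, which is why the attenuation in the excerpt above is multiplied by PI.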
Pulse Width Modulation
The following explains the calculation of the intensity images for pulse width modulation. As explained in Capturing Depth Information: Principles and Techniques of Time-of-Flight Sensors, two intensity images are created in pulse width modulation, from which the depth image is subsequently calculated. Figure 6 shows the two intensities of the buckets $C_1$ and $C_2$, which are calculated in the simulation. For this purpose, the light calculation in the Path Tracing algorithm is extended by the time the light needs to travel from the source, be reflected by the surfaces, and hit the sensor. As explained in the Closest Hit Program subsection, the distance the light has traveled is summed up in the calculation of indirect lighting. At every intersection of the ray with a surface, it is determined whether this point is directly illuminated. To do this, a ray is sent from the intersection point in the direction of the infrared LEDs and tested for obstacles on the direct path between the surface and the light source. If the light source is visible from the intersection point, the surface is directly illuminated by the modulated light of the infrared LEDs. To calculate the portion of the reflected light that falls into the period during which the buckets are open, the distance traveled by the path so far is added to the distance from the light source to the intersection point, giving the total distance the light has traveled. From this distance and the speed of light, the time delay with which the light pulse arrives at the sensor is determined. This allows calculating which part of the reflected light is stored in bucket $C_1$ and which in bucket $C_2$.
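As a small worked example (numbers chosen purely for illustration): a total traveled distance corresponding to a delay of $\Delta t = 0.25 \cdot t_{pulse}$ means the reflected pulse overlaps the window of $C_1$ for the remaining $0.75 \cdot t_{pulse}$ and the window of $C_2$ for $0.25 \cdot t_{pulse}$, so 75% of the reflected intensity is added to $C_1$ and 25% to $C_2$; the delay itself corresponds to a distance of $d = \frac{c \cdot \Delta t}{2}$.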
The following excerpt of the implementation fills the gap left for the direct lighting in the previous code excerpt:
// ...
// Calculation of the pulse length based on the used frequency
const double pulselength = (1.0f / frequency) * 0.5f;
// As the reflected light is divided into two buckets, here a 2D vector is used for storing
float2 ir_result_pulse = make_float2(0.0f);
unsigned int num_ir_lights = ir_lights.size();
for(int i = 0; i < num_ir_lights; ++i) {
// Calculation of some useful variables
BasicLight light = ir_lights[i];
float3 light_direction = light.pos - hit_point;
float light_distance = sqrt(dot(light_direction, light_direction));
light_direction = light_direction / light_distance;
float NdotL = saturate(dot(surface_normal, light_direction));
if (NdotL > 0.0f) {
// Calculate a ray from the intersection point to the light source to determine if the surface is in shadow or directly illuminated
PerRayData_shadow shadow_prd;
Ray shadow_ray = make_Ray( hit_point, light_direction, shadow_ray_type, scene_epsilon, light_distance - scene_epsilon );
rtTrace(top_object, shadow_ray, shadow_prd);
if(!shadow_prd.inShadow) {
// Here, just as already described, the diffuse and specular component of the reflection is calculated
// ...
float Intensity = prd_radiance.ir_attenuation * luminanceCIE((diffuse + specular) * (light.color * light.intensity * (NdotL / (light_distance * light_distance))));
// Calculate the time delay at which the radiation hits the sensor again based on the traveled distance
float sourceToSensorDistance = light_distance + prd_radiance.ir_traveled_distance;
float deltaTime = sourceToSensorDistance / speedOfLight;
// Determine the start time when the buckets begin to collect light
float C1_start = 0.0f;
float C2_start = pulselength;
// Determine the start time and end time of the reflected light pulse
float begin = deltaTime;
float end = deltaTime + pulselength;
// Determine the portion of charge collected by C1
if((end >= C1_start && end <= C1_start + pulselength) || (begin >= C1_start && begin <= C1_start + pulselength)) {
if(begin > C1_start)
ir_result_pulse.x += (((C1_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_pulse.x += ((end - C1_start) / pulselength) * Intensity;
}
// Determine the portion of charge collected by C2
if((end >= C2_start && end <= C2_start + pulselength) || (begin >= C2_start && begin <= C2_start + pulselength)) {
if(begin > C2_start)
ir_result_pulse.y += (((C2_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_pulse.y += ((end - C2_start) / pulselength) * Intensity;
}
}
}
}
// Return the result to the Ray Generation Program
prd_radiance.radiance = ir_result_pulse;
// ...
Rectangular Wave Function
The simulation of phase images for Continuous-Wave Modulation is similar to the implementation of Pulse Width Modulation. Continuous-Wave Modulation differs from Pulse Width Modulation in two aspects:
- As explained in Capturing Depth Information: Principles and Techniques of Time-of-Flight Sensors, four buckets instead of two are used to capture the reflected light.
- Instead of a pulse, a wave is emitted, where the rectangular wave function can be considered as a periodically repeating Pulse Width Modulation.
The similarity of the rectangular wave function to Pulse Width Modulation is exploited, and the charge of the buckets is simulated in the same way. For this purpose, the calculation is first extended by two additional buckets, and the time window during which each bucket is open is calculated as before. Since, in contrast to Pulse Width Modulation, a periodic wave is emitted, all preceding light pulses must be considered in the calculation. The light pulse is therefore shifted by twice its pulse length, and the distribution to the four buckets is calculated for each copy of the pulse until the entire wave has been considered in the calculation of the bucket charges.
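Because the emitted rectangular wave has the period $2 \cdot t_{pulse}$ (on for $t_{pulse}$, off for $t_{pulse}$), the reflected signal consists of copies of the pulse arriving at the delays
\begin{equation}
\Delta t_n = \Delta t + n \cdot 2 \, t_{pulse}, \quad n \in \mathbb{Z},
\end{equation}
and every copy that overlaps the time window of a bucket contributes to its charge.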
The following excerpt from the implementation illustrates the calculation of phase images for the Continuous-Wave Modulation of rectangular wave functions:
// ...
// As the reflected light is divided into four buckets, here a 4D vector is used for storing
float4 ir_result_rect = make_float4(0.0f);
// ...
// Calculate the time period during which the buckets are open and collecting light
float C1_start = 0.0f;
float C2_start = pulselength;
float C3_start = pulselength * 0.5f;
float C4_start = (pulselength * 0.5f) + pulselength;
// Shift the light pulse until it lies beyond the measurement window of the four buckets
while(deltaTime < C4_start + pulselength) {
deltaTime = deltaTime + (pulselength * 2.0);
}
// Calculate the intensity distribution analogously to Pulse Width Modulation for each individual light pulse
while(deltaTime + pulselength > 0.0f) {
float begin = deltaTime;
float end = deltaTime + pulselength;
if((end >= C1_start && end <= C1_start + pulselength) || (begin >= C1_start && begin <= C1_start + pulselength)) {
if(begin > C1_start)
ir_result_rect.x += (((C1_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_rect.x += ((end - C1_start) / pulselength) * Intensity;
}
if((end >= C2_start && end <= C2_start + pulselength) || (begin >= C2_start && begin <= C2_start + pulselength)) {
if(begin > C2_start)
ir_result_rect.y += (((C2_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_rect.y += ((end - C2_start) / pulselength) * Intensity;
}
if((end >= C3_start && end <= C3_start + pulselength) || (begin >= C3_start && begin <= C3_start + pulselength)) {
if(begin > C3_start)
ir_result_rect.z += (((C3_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_rect.z += ((end - C3_start) / pulselength) * Intensity;
}
if((end >= C4_start && end <= C4_start + pulselength) || (begin >= C4_start && begin <= C4_start + pulselength)) {
if(begin > C4_start)
ir_result_rect.w += (((C4_start + pulselength) - begin) / pulselength) * Intensity;
else
ir_result_rect.w += ((end - C4_start) / pulselength) * Intensity;
}
// Shift the light pulse by twice the pulse length to process the next periodic copy of the light pulse
deltaTime = deltaTime - (pulselength * 2.0);
}
// ...
The charges resulting from the simulation for each bucket are illustrated in the form of phase images in Figure 7.
Sinusoidal Wave Function
For the sinusoidal wave function, the calculation of the bucket charges is the simplest. For each bucket, the area under the wave function over the interval during which the bucket is open is calculated:
\begin{equation}Q_{k}=\frac{\alpha \cdot \Big(\cos(2 \pi f \cdot t_{start}) - \cos(2 \pi f \cdot t_{end}) \Big)}{2 \pi f} - (\beta \cdot t_{start}) + (\beta \cdot t_{end}).\end{equation}
Here, $\alpha$ is the amplitude of the function, $f$ the frequency, $\beta$ the offset, $k$ the index of the bucket, and $t_{start}$ and $t_{end}$ the start and end times at which the bucket is open.
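The helper getArea used in the excerpt below is not shown in the implementation excerpts. A minimal sketch that evaluates the equation above, assuming the amplitude alpha and the offset beta are available as program variables, could look like this:
// Sketch of a helper that integrates alpha * sin(2*pi*f*t) + beta over the
// interval [t_start, t_end], corresponding to the equation above.
// alpha (amplitude) and beta (offset) are assumed to be defined elsewhere.
__device__ float getArea(float frequency, float t_start, float t_end)
{
    const float two_pi_f = 2.0f * M_PIf * frequency;
    return (alpha * (cosf(two_pi_f * t_start) - cosf(two_pi_f * t_end))) / two_pi_f
         + beta * (t_end - t_start);
}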
The following code excerpt finally shows the calculation of phase images for the sinusoidal wave function:
// ...
// As the reflected light is divided into four buckets, here a 4D vector is used for storing
float4 ir_result_sin = make_float4(0.0f);
// ...
ir_result_sin.x += Intensity * getArea(frequency, C1_start - deltaTime, (C1_start - deltaTime) + pulselength);
ir_result_sin.y += Intensity * getArea(frequency, C2_start - deltaTime, (C2_start - deltaTime) + pulselength);
ir_result_sin.z += Intensity * getArea(frequency, C3_start - deltaTime, (C3_start - deltaTime) + pulselength);
ir_result_sin.w += Intensity * getArea(frequency, C4_start - deltaTime, (C4_start - deltaTime) + pulselength);
// ...
The phase images differ only minimally from those of the rectangular wave function, which can be seen in Figure 7.
Miss Program
The Miss Program is called as soon as the ray leaves the scene and can no longer collide with the geometry. Here, the path can either simply be terminated, or a radiance can be assigned to the background, which leads to ambient lighting. Since the simulation of ambient light affects the random error of the depth values, it is relevant in this work.
The following code excerpt illustrates the use of a constant background as a radiation source:
RT_PROGRAM void miss()
{
prd_radiance.radiance = prd_radiance.attenuation * background_color;
prd_radiance.done = true;
}
If a value greater than zero is chosen for the background color, the environment acts as a radiation source whenever a ray leaves the scene (see Figure 8a). Alternatively, a measured ambient illumination can be used as the radiation source (see Figure 8b). For this, an Environment Map is captured and projected onto a sphere surrounding the camera. If the ray leaves the scene, the intersection with the sphere is calculated and the radiation from the corresponding direction is looked up.
The following code excerpt illustrates the use of an Environment Map as a radiation source:
rtTextureSampler<float4, 2> envmap;
RT_PROGRAM void envmap_miss()
{
float theta = atan2f( ray.direction.x, ray.direction.z );
float phi = M_PIf * 0.5f - acosf( ray.direction.y );
float u = (theta + M_PIf) * (0.5f * M_1_PIf);
float v = 0.5f * ( 1.0f + sinf(phi) );
prd_radiance.radiance = prd_radiance.attenuation * make_float3( tex2D(envmap, u, v) );
prd_radiance.done = true;
}
This implementation allows for a dynamic response to rays that don't hit any object in the scene, either by applying a simple constant background color or by using a more complex environmental illumination from an Environment Map.
Depth Image Generation from Phase Images
This section discusses how depth images are calculated from the individual phase images. Depending on the chosen modulation and the number of buckets used, the simulation of light propagation using Path Tracing provides two or four phase images from which the depth information is calculated. As explained in Capturing Depth Information: Principles and Techniques of Time-of-Flight Sensors, the distance is either calculated directly from the ratio of the two buckets or determined via the phase shift. The depth information is a radial distance from the camera's aperture to the object, i.e., the length of a vector whose origin coincides with the origin of the camera coordinate system. Since rays were already calculated for each pixel for the Path Tracing algorithm, the direction of this vector is known for every pixel. To calculate a 3D coordinate from the depth information, the normalized ray direction is scaled by the radial distance determined during the phase calculation.
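Stated in their common form (the continuous-wave variant matches the distance computation in the code excerpt further below), the two cases are: for pulse width modulation the distance follows from the ratio of the two bucket charges,
\begin{equation}
d = \frac{c \cdot t_{pulse}}{2} \cdot \frac{C_2}{C_1 + C_2},
\end{equation}
and for Continuous-Wave Modulation the phase shift is first recovered from the four bucket charges (with the bucket timing used above: $C_1$ at $0^\circ$, $C_2$ at $180^\circ$, $C_3$ at $90^\circ$, $C_4$ at $270^\circ$),
\begin{equation}
\varphi = \arctan\left(\frac{C_3 - C_4}{C_1 - C_2}\right), \qquad d = \frac{c}{4 \pi f} \cdot \varphi.
\end{equation}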
Time-of-Flight camera systems often deliver a depth value instead of a radial distance. For this, the 3D coordinate in space is first calculated, and only its Z-value is passed to the user in a depth image. This step is omitted here; henceforth, the term "distance images" is used.
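For completeness, the omitted conversion corresponds to projecting the radial distance onto the optical axis,
\begin{equation}
z = d_{radial} \cdot \left(\vec d \cdot \hat z_{cam}\right),
\end{equation}
where $\vec d$ is the normalized per-pixel ray direction and $\hat z_{cam}$ the optical axis of the camera.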
As explained in Understanding Errors in Time of Flight Depth Sensors, the distances are subject to external influences, leading to errors in the distance image. The simulation of these errors is explained in the following.
Simulation of Lens Scattering and the Mixed Pixels Error
In the detailed examination of the Mixed Pixels error in Understanding Errors in Time of Flight Depth Sensors, it was hypothesized that this error can be attributed to the Lens Scattering effect. Therefore, only the Lens Scattering effect is simulated, which consequently also covers the Mixed Pixels error. In the investigation of the influence of bright pixels on adjacent pixels in Understanding Errors in Time of Flight Depth Sensors, a point spread function was determined that replicates this effect. From this function, a convolution matrix is defined and applied to each phase image. This spreads the intensities to neighboring pixels and thereby causes the corresponding errors in the depth image.
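A minimal sketch of this convolution on the CPU, with hypothetical names and kernel layout (the point spread function is assumed to be given as a square kernel with side length 2 * psf_radius + 1), could look like this:
#include <algorithm>
#include <vector>
// Hypothetical helper: applies the point spread function as a convolution to
// one phase image stored as a row-major float array of size width x height.
// Border pixels are handled by clamping the sample coordinates.
void apply_lens_scattering(const std::vector<float>& phase_in,
                           std::vector<float>& phase_out,
                           int width, int height,
                           const std::vector<float>& psf, int psf_radius)
{
    const int k = 2 * psf_radius + 1;  // kernel side length
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int ky = -psf_radius; ky <= psf_radius; ++ky) {
                for (int kx = -psf_radius; kx <= psf_radius; ++kx) {
                    const int sx = std::min(std::max(x + kx, 0), width - 1);
                    const int sy = std::min(std::max(y + ky, 0), height - 1);
                    sum += phase_in[sy * width + sx] * psf[(ky + psf_radius) * k + (kx + psf_radius)];
                }
            }
            phase_out[y * width + x] = sum;
        }
    }
}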
Simulation of Random Error
The random error is implemented following the equation \begin{equation}\sigma = \frac{c}{4 \sqrt{2} \pi f} \cdot \frac{\sqrt{A+B}}{c_{d} \cdot A}\label{eq:timeofflighterror}\end{equation} introduced in Understanding Errors in Time of Flight Depth Sensors. For this, the intensity $A$ of the infrared signal and the offset $B$ are calculated from the phase images. The ambient light, simulated by the Miss Program, additionally contributes to the offset $B$, while the intensity $A$ contains only the modulated radiation of the infrared LEDs, even if the ambient light influences the individual intensity images. Pulse Width Modulation is an exception, as the ambient light cannot be calculated out of its phase images.
The random error is added to the resulting distance image. For this, the implementation of the normal distribution from the Standard Template Library is used; it expects the mean, here the undisturbed distance, and the standard deviation $\sigma$ calculated with the equation above as parameters.
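The computation of $\sigma$ itself is not part of the following excerpt. A minimal sketch, assuming the amplitude $A$, the offset $B$, and the modulation contrast $c_d$ have already been derived from the phase images, could look like this:
#include <cmath>
// Hypothetical helper: evaluates the standard deviation of the distance noise
// according to the equation above, from the amplitude A, the offset B, the
// modulation contrast c_d, and the modulation frequency in Hz.
double distance_noise_sigma(double A, double B, double c_d, double frequency)
{
    const double speedOfLight = 299792458.0;           // m/s
    const double pi = 3.14159265358979323846;
    return (speedOfLight / (4.0 * std::sqrt(2.0) * pi * frequency))
         * (std::sqrt(A + B) / (c_d * A));
}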
The following code excerpt demonstrates the calculation of the modified distance using a normal distribution:
// ...
// Definition of a random number generator
std::default_random_engine g_generator;
// ...
float distance = (speedOfLight / (4.0f * M_PIf * frequency)) * phi;
if (distance != 0.0f && m_noise_enabled)
{
// Calculation of the noise using the normal distribution
std::normal_distribution<double> distribution(distance, standard_deviation);
double imperfectDistance = distribution(g_generator);
// Conversion from meters to millimeters
output_depth[launch_index] = imperfectDistance * 1000.0f;
}
// ...
This implementation accounts for the random error in the distance measurement by incorporating the calculated standard deviation into the normal distribution, simulating the impact of various factors, including ambient light and intensity of the infrared signal, on the accuracy of the depth image.
If you want to dive into the nitty-gritty, the full code is available: just Grab the Complete Source Code and see for yourself how everything works under the hood.
Bibliography
[Kaj86] James T. Kajiya. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '86). ACM Press, 1986.
[PRS+10] Steven G. Parker, Austin Robison, Martin Stich, James Bigler, Andreas Dietrich, Heiko Friedrich, Jared Hoberock, David Luebke, David McAllister, Morgan McGuire, and Keith Morley. OptiX: A general purpose ray tracing engine. In ACM SIGGRAPH 2010 Papers. ACM Press, 2010. doi:10.1145/1833349.1778803.
[MNK13] Stephan Meister, Rahul Nair, and Daniel Kondermann. Simulation of time-of-flight sensors using global illumination. 2013. doi:10.2312/pe.vmv.vmv13.033040.