# Survey on 3D Surface Reconstruction

## Abstract

The recent advent of increasingly affordable and powerful 3D scanning devices, capable of capturing high resolution range data about real-world objects and environments, has fueled research into effective 3D surface reconstruction techniques that convert the raw point cloud data produced by many of these devices into a form usable in a variety of application domains. This paper therefore provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results, demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods that have been evaluated on the same benchmark data sets, along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.

## 1. Introduction

The recent advent of increasingly diverse, affordable and powerful 3D scanning devices has made it possible to conveniently gather large amounts of high resolution range data about real-world objects and environments. These devices typically operate by projecting several beams of light or radiation (e.g., laser, infrared or x-ray) onto the object or environment which needs to be scanned and then detecting the beams when they strike a surface. The exact position of the point on the surface where the beam strikes can then be determined using various techniques such as time-of-flight or triangulation [1]. The result of this scanning process is a point cloud: a set of discrete points in 3D space, sampled wherever a surface lies within the scanning range.

However, the raw point cloud data generated by these devices typically needs to be converted into a meaningful digital representation of the scanned objects to facilitate its use in various application domains such as computer-aided design, medical imaging, reverse engineering, virtual reality, and architectural modelling [2,3]. Since reconstructing the object surfaces captured in point clouds is the first step in this direction, several different methods for surface reconstruction have been proposed over the past two decades or so, and research in this area continues to accelerate [4].

This paper therefore aims to survey the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes and summarizes the most well-known techniques for this task and briefly analyzes their reported empirical evaluation results to demarcate their possible advantages and disadvantages, providing a benchmarking basis for cross-comparing the quality of the reconstructed surfaces. The objective is to provide an overview of the work published on surface reconstruction so far in order to facilitate and inspire further research in this area.

The survey is organized as follows: Section 2 defines surface reconstruction and surface qualifiers. Section 3 discusses some issues and constraints encountered in surface reconstruction. Section 4 provides an overview of various techniques used for surface reconstruction. Section 5 presents qualification factors and experimental results reported for these techniques. Section 6 concludes the survey with a brief discussion about the current state of the art and identifies areas that require further work.

## 2. Surface Reconstruction and Qualification Factors

Surface reconstruction is the process of retrieving a 3D model of a real object from input data provided by 3D scanning devices. As explained above, the input data is usually in the form of points scattered in 3D space. These points can be either unstructured or structured. For an unstructured point set, the only information provided is the coordinates of the points, while for a structured point set, additional information is also given - either geometrical (such as partial connectivity or normal vectors of faces defined by at least three point coordinates) or topological (e.g., the overall shape that the points may fit into, such as cylindrical or spherical). An appropriate method is then used to reconstruct the surface. Reconstructed surfaces are usually in triangulated form; i.e., points are connected into triangles that share edges and/or vertices. The aim of the process is to create a 3D model of a real object that can be visualized.

A pre-processing step may be required before surface reconstruction to remove noise, filter out unwanted points, simplify the input data and segment the original data set into clusters representing actual objects. Different input data types call for different reconstruction methods, and the results may change drastically depending on the selected method and the data type (these issues are discussed further in Section 3).

The surface representations resulting from these methods can be broadly classified into two categories: explicit and implicit. However, other categorizations have also been suggested in the literature, based on whether the reconstructed surface contains a subset of the actual data points or is simply estimated from their properties, and on whether or not the surface contains sharp edges or corners. In the following discussion we therefore classify reconstructed surfaces, and correspondingly the methods that produce them, as implicit vs. explicit, approximated vs. interpolated and isotropic vs. anisotropic.

### 2.1 Explicit vs. Implicit Surfaces

[5] describes an explicit surface as a prescription of the precise location of the surface. That is, an explicit representation of a surface reconstructed from a point cloud passes through the scanned points themselves. The surface is usually triangulated, with the input points serving as the vertices of the triangles or lying on the triangles of the surface.

There are two different kinds of explicit surfaces: (1) parametric and (2) triangulated. A parametric surface is a deformation of a primitive model that covers an arbitrary portion of the points. Commonly used primitives include B-splines, NURBS, planes, spheres and ellipsoids. Parametric surfaces are topologically limited by the initial model, which means complicated surfaces are not easily representable and may require a mixture of primitives.

Triangulated surfaces are the most intuitive surface representation: the surface is described by connected triangles built from the input points, typically using the k-nearest neighbors of each point to establish connectivity. A triangulated representation of a surface is also called a simplicial surface [6].
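As a toy illustration of the connectivity step, the following stdlib-only Python sketch performs the brute-force k-nearest-neighbor query such methods rely on; real implementations use spatial indices such as kd-trees, and the point values here are hypothetical:

```python
# Brute-force k-nearest-neighbor search: the building block many
# triangulated explicit methods use to establish point connectivity.
import math

def k_nearest(points, query, k):
    """Return the k points closest to `query` (excluding exact self-matches)."""
    ranked = sorted(
        (p for p in points if p != query),
        key=lambda p: math.dist(p, query),
    )
    return ranked[:k]

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
         (5.0, 5.0, 5.0), (0.1, 0.1, 0.0)]
print(k_nearest(cloud, (0.0, 0.0, 0.0), 2))
```

The O(n log n) sort per query is fine for illustration; with a kd-tree the query cost drops to roughly O(log n) per point.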

An implicit surface is defined by a function, one of whose isocontours closely approximates the input data; implicit surfaces are defined as particular isocontours of scalar functions in [5]. More specifically, implicit surface reconstruction is the process of finding a function that best fits the input data. An implicit representation of a surface must be post-processed in order to be visualized; marching cubes [7] is the best-known method for generating a triangulated surface from an implicit representation.
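To make the zero-set idea concrete, this stdlib-only sketch defines the unit sphere implicitly as f(p) = ‖p‖² − 1 and locates the surface crossing along a single grid edge by bisecting a sign change, a stand-in for what marching cubes does on every edge of a volumetric grid (the function and edge here are illustrative):

```python
# The implicit surface is the zero set of f; the surface crosses a grid
# edge wherever f changes sign along it.
def f(x, y, z):
    return x * x + y * y + z * z - 1.0   # unit sphere: f = 0 on the surface

def crossing_on_edge(p0, p1, steps=40):
    """Bisect the segment p0-p1 for the point where f changes sign."""
    lerp = lambda a, b, t: tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    lo, hi = 0.0, 1.0
    assert f(*lerp(p0, p1, lo)) * f(*lerp(p0, p1, hi)) < 0, "no sign change"
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(*lerp(p0, p1, lo)) * f(*lerp(p0, p1, mid)) <= 0:
            hi = mid
        else:
            lo = mid
    return lerp(p0, p1, 0.5 * (lo + hi))

print(crossing_on_edge((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # ~ (1.0, 0.0, 0.0)
```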

Implicit surfaces are typically a summation of radially symmetric basis functions. This symmetry is a problem when a surface has corners or edges; i.e., a typical implicit surface cannot capture asymmetric characteristics of the surface such as edges and corners. Therefore, such surfaces cannot represent complex surfaces well.

Variational implicit surfaces, unlike typical ones, use different types of basis functions to address this inability to describe drastic changes on the surface, i.e., corners and edges [8]. A variational implicit surface represents an object through volumetric regularization or energy minimization; i.e., the surface is the isocontour of the basis function where the function is minimized [8]. In fact, the implicit surface is the zero set of the basis function. This kind of representation is more powerful than the typical one for complex shapes with twisted topologies [8].

Two different types of functions are used for implicit surfaces: distance functions and convolutions of symmetric functions. The former relates to energy minimization and variational surfaces [8], while the latter combines symmetric functions with different parameters to find the best fit for the given points [8].

Least Squares (LS), Partial Differential Equation (PDE), Hausdorff Distance (HD) and Radial Basis Function (RBF) formulations are some of the variational implicit surface representations used in surface reconstruction methods; the zero sets of these functions over the input data approximate the surfaces. Mixtures of Poisson distributions with different parameters, combinations of Gaussian kernels with various μ and σ, and blends of heterogeneous polynomial functions are some typical implicit surface representation approaches.
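As a minimal, stdlib-only illustration of function fitting with radial basis functions (a 1D toy, not any specific published method), the following sketch fits a Gaussian-RBF interpolant by solving the small dense linear system so that the fitted function reproduces the samples exactly; the sample values and sigma are made up:

```python
# Gaussian-RBF interpolant: s(x) = sum_i w_i * exp(-(x - x_i)^2 / (2*sigma^2)).
# Solving Phi w = y makes s pass through every sample; implicit methods fit
# such functions to signed distances so the zero set approximates the surface.
import math

def gaussian_rbf_fit(xs, ys, sigma=1.0):
    n = len(xs)
    phi = [[math.exp(-(xs[i] - xs[j]) ** 2 / (2 * sigma ** 2)) for j in range(n)]
           for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system.
    aug = [row + [y] for row, y in zip(phi, ys)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(n):
            if r != col:
                factor = aug[r][col] / aug[col][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    w = [aug[i][n] / aug[i][i] for i in range(n)]
    return lambda x: sum(wi * math.exp(-(x - xi) ** 2 / (2 * sigma ** 2))
                         for wi, xi in zip(w, xs))

s = gaussian_rbf_fit([0.0, 1.0, 2.0], [0.0, 1.0, 0.0])
print(round(s(1.0), 6))  # reproduces the sample value at x = 1
```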

### 2.2 Interpolated vs. Approximated Surfaces

The term “interpolation” is used for surfaces built from input data that may suffer from non-uniformity or from undesired holes (missing data) due to occlusion. An interpolated reconstructed surface is one on which at least a subset of the input points lies, resolving the abnormalities in the given data.

An approximated surface, on the other hand, is a representation on which none of the input points need lie; the surface is merely an estimate of the shape enclosing the input set.

Almost all explicit surfaces are considered interpolated surfaces, particularly triangulated surfaces, in which the input points rest on the surface. Implicit surfaces are usually approximated, since the reconstruction methods for these surfaces are based on energy minimization or combinatorial function fitting.
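The distinction can be seen in one dimension: a least-squares fit approximates the samples without necessarily passing through any of them, whereas an interpolating polyline would pass through all of them. A stdlib-only sketch with made-up sample points:

```python
# A least-squares line *approximates* three samples; every residual below
# is nonzero, so no sample lies on the fitted line (unlike interpolation).
def least_squares_line(pts):
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, y in pts)
    a = num / den
    return a, my - a * mx          # slope, intercept

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
a, b = least_squares_line(pts)
residuals = [y - (a * x + b) for x, y in pts]
print(a, b, residuals)  # slope 0, intercept 1/3; no residual is zero
```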

### 2.3 Anisotropic vs. Isotropic Surfaces

Isotropy or anisotropy is a characteristic tied to the topology of the surface. A model is called isotropic when it has a very smooth surface without any sharp edges or corners. In fact, isotropy refers to a set of features that do not change with direction; i.e., a surface feature is isotropic if it carries the same value when measured in various directions [8]. A surface is therefore isotropic when all its measurable features are isotropic.

An anisotropic model is a model with at least one anisotropic feature, i.e., a property that differs when measured in different directions [8]. The distances between points in the presence of sharp edges and corners are anisotropic features of a surface, because the distance between points along an edge differs from the distance across the edge [8]. Anisotropy, heterogeneity and asymmetry are often used interchangeably.

## 3. Issues and Constraints

Issues and limitations of surface reconstruction techniques depend heavily on the input data type. Some methods address the issues by making assumptions about the input data and accepting imperfection and inaccuracy in the result, while others overcome the constraints at the cost of computation speed and space complexity.

### 3.1 Unstructured Data

More often than not, unstructured input data means that it includes just the coordinates of the points and nothing more. How well a method deals with unstructured data has become one of the main criteria for evaluating surface reconstruction methods.

Problems in surface reconstruction from unstructured point clouds, caused by the non-uniformity of the input set, include implementation difficulties, inaccurate results and high computational demands, since less information is available about the overall structure of the scanned objects.

Some reconstruction techniques address these issues by feeding a dense point cloud to the method in order to infer connectivity information and ease the surface computation. Although a densely sampled input set works in many applications, it may cause other problems: a larger input size, higher computation cost in time and memory, and redundancy in the input set, which can make noise indistinguishable and produce outliers in the reconstructed surface.

On the other hand, densely sampled data is not always available. Surface reconstruction methods that rely on condensed data to resolve the unstructuredness of the input suffer from incompleteness and inaccuracy when the data is not dense enough.

### 3.2 Dataset Size

The other major constraint on surface reconstruction methods is the dataset size. Some methods cannot deal with a large data set because of how the method is implemented and how the input data and the reconstructed surface are stored; i.e., an inadequate data structure and implementation limit an approach to the available hardware resources. Methods with upper limits on the data size suffer from low speed because of the computation cost and memory usage of the algorithm; even a method that can handle an enormous data set may require a large amount of memory to reconstruct the surface. There is therefore a tradeoff between accuracy and performance.

Increasing the amount of data may result in higher accuracy and more complete surfaces, while lowering the performance of the method in computation time or memory consumption. Even when a condensed data set is used as a remedy for reconstructing surfaces from unstructured points, the method may suffer from slow computation and high memory consumption, or in the worst case may be unable to handle the large input set.

The dataset size problem has been addressed by filtering and simplification, both of which reduce the input size. Filtering removes parts of the input data that are undesired or unnecessary in the reconstructed surface. Simplification reduces the amount of data by uniformly sampling the input. Filtering and simplification are the two preprocessing steps in some reconstruction methods, which is also considered an overhead.
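A common form of uniform simplification is voxel-grid downsampling: snap the points into a uniform grid and keep one representative per occupied cell. A stdlib-only sketch (the voxel size and sample coordinates are illustrative):

```python
# Voxel-grid downsampling: one representative point (the centroid) per
# occupied voxel, reducing the input size roughly uniformly in space.
from collections import defaultdict

def voxel_downsample(points, voxel=1.0):
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(int(c // voxel) for c in p)].append(p)
    return [tuple(sum(cs) / len(cs) for cs in zip(*group))
            for group in buckets.values()]

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (3.5, 3.5, 3.5)]
print(voxel_downsample(cloud, voxel=1.0))  # 2 representatives remain
```

Keeping the centroid (rather than an arbitrary member) also averages out some noise within each cell.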

### 3.3 Performance

Performance is a bottleneck in virtually all computer-based applications, and software solutions are qualified and ranked with respect to it. Performance has three aspects: (1) time efficiency, (2) memory efficiency and (3) accuracy and completeness.

No surface reconstruction technique satisfies all three aspects at once, but some methods keep each of them at an acceptable level (a tradeoff between input set size and performance). In brief, the larger the dataset, the lower the speed, the higher the memory consumption and the better the accuracy.

Some surface reconstruction techniques resolve the time and memory efficiency issues through preprocessing steps such as filtering and simplification of the input set, at the cost of the third aspect of performance: accuracy and completeness. Conversely, accurate and complete methods may sacrifice time and memory efficiency.

### 3.4 Redundancy vs. Sparseness

Redundancy occurs when the input set is dense and points in the set overlap. Although this may ease the surface reconstruction implementation by resolving the unstructuredness of the points, it can hurt performance.

Moreover, redundancy leads to concentrated datasets, which also affects system performance: degraded time and memory efficiency in exchange for improved accuracy.

Additionally, redundant data can interfere with the noise-reduction preprocessing steps. When the data is redundant, the noisy regions are also condensed, so the noisy and noise-free regions of the data are not readily distinguishable; this generates outliers in the output surface.

In most cases, the issue is resolved by simplification methods in the preprocessing step.

Dispersion in the input data is called sparseness. Point sets are sometimes sparse because of weaknesses in the scanning devices, such as a low sampling rate or poor sampling quality. Scattered points are the opposite of redundant, highly dense data. Sparseness reduces accuracy, increases the chance of noise being included in the reconstruction and may produce coarse surfaces.

As with the other issues, preprocessing is the first remedy for sparseness. Interpolating the points to increase the density, and thereby producing a smoother surface, is one way of resolving the issue in the preprocessing step, at the cost of incompleteness and outlier inclusion.

### 3.5 Noisy Data

Scanning devices are not perfect; therefore, the input data may include erroneous and noisy points. Noise affects the result of the surface reconstruction: it may create outliers in the reconstructed surface or, even worse, the result may not represent the scanned object at all. In effect, noise and error in the input translate into inaccuracy.

Assuming the input is noise-free is one resolution adopted by some reconstruction techniques, although it sacrifices accuracy and produces outliers. Filtering in a preprocessing step is another way to reduce erroneous data.
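One common filtering strategy, sketched below with the stdlib only, is statistical outlier removal: score each point by its mean distance to its k nearest neighbors and drop points whose score is unusually large. The parameters k and alpha, and the sample cloud, are illustrative:

```python
# Statistical outlier removal: points far from their neighbors are likely
# scanner noise and are dropped before reconstruction.
import math
from statistics import mean, pstdev

def remove_outliers(points, k=2, alpha=1.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the overall mean by more than alpha standard deviations."""
    def knn_mean(p):
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        return mean(dists[:k])
    scores = [knn_mean(p) for p in points]
    mu, sd = mean(scores), pstdev(scores)
    return [p for p, s in zip(points, scores) if s <= mu + alpha * sd]

cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (10, 10, 10)]
print(remove_outliers(cloud))  # the isolated point (10, 10, 10) is dropped
```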

### 3.6 Incompleteness

Incompleteness in the reconstructed surface occurs when the input data is missing some parts because of occlusion, holes or an inappropriate scanning device. A surface is incomplete when it is not a faithful counterpart of the real object. The two major types of incompleteness are missing a real hole in the surface because the points are highly dense, and introducing extra holes because of occlusion.

To resolve incompleteness, preprocessing steps such as reproducing or interpolating the missing data to remove unwanted holes, or simplification to recover the actual holes, must be performed. Here again, though, there is a tradeoff between accuracy and performance.
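A toy 1D version of hole repair by interpolation: where consecutive samples along a scan line are farther apart than a threshold, insert a linearly interpolated midpoint. This is only a sketch of the idea, not any particular published method, and the threshold and samples are made up:

```python
# Fill gaps in a sorted scan line by inserting linearly interpolated
# midpoints wherever the spacing between samples exceeds max_gap.
def fill_gaps(samples, max_gap):
    """samples: (x, y) pairs sorted by x; insert midpoints across large gaps."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        if cur[0] - prev[0] > max_gap:
            out.append(((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2))
        out.append(cur)
    return out

scan = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]
print(fill_gaps(scan, 2.0))  # midpoint (2.5, 2.5) fills the hole
```

Note that interpolated points are fabricated data: they smooth the surface but may hide a hole that really exists in the object, which is exactly the accuracy tradeoff described above.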

## 4. Surface Reconstruction Techniques

Surface reconstruction techniques are categorized according to the type of surface they generate. Here, the more prominent implicit and explicit surface reconstruction approaches are briefly discussed and compared; the methods are presented in chronological order within each sub-section.

### 4.1 Explicit Methods

Explicit surface reconstruction techniques, as mentioned in Section 2.1, are either parametric or triangulated, according to how they represent the reconstructed surfaces. This section discusses explicit methods according to the type of surface they produce.

#### 4.1.1 Triangulated explicit surface reconstruction methods

[9] represents one of the fundamental and earliest techniques of surface reconstruction. The Delaunay triangulation is the unique triangulation of a point set in which the circumcircle of no triangle contains any other point of the set. The Voronoi diagram is the dual of the Delaunay triangulation of a point set and consists of Voronoi faces. A Voronoi face is a cell (in 2-dimensional space, a polygon) enclosing a single point of the set, with edges that are equidistant from the closest neighboring points of the set. Since the Voronoi diagram of a point set is the dual of its Delaunay triangulation, the two are convertible into each other with a simple transformation.

Algorithms for computing the Delaunay triangulation of a point set include flipping, plane sweep and divide-and-conquer [10,11] in 2D, as well as incremental and gift-wrapping algorithms [12] in both 2D and 3D. For more details, readers are referred to [9].
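The empty-circumcircle property that defines the Delaunay triangulation can be checked directly. A stdlib-only 2D sketch (with illustrative coordinates) computing a triangle's circumcircle and testing whether other points violate the property:

```python
# The defining Delaunay predicate: the circumcircle of any triangle must
# contain no other input point.
def circumcircle(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2
    return (ux, uy), r2          # circumcenter, squared circumradius

def is_delaunay(tri, others):
    (ux, uy), r2 = circumcircle(*tri)
    return all((px - ux) ** 2 + (py - uy) ** 2 >= r2 for px, py in others)

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(is_delaunay(tri, [(2.0, 2.0)]))  # True: point lies outside the circle
print(is_delaunay(tri, [(0.6, 0.6)]))  # False: point lies inside
```

Flipping algorithms repeatedly apply exactly this test to an edge's two adjacent triangles and flip the edge when it fails.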

In [13], a new Voronoi-based technique, referred to as the Crust algorithm, is proposed. Amenta et al. first explain the 2-dimensional method and then extend the algorithm to 3-dimensional space. The Crust algorithm builds on the Voronoi diagram and Delaunay triangulation to provide a smooth surface from unstructured points.

The Ball Pivoting Algorithm (BPA) presented in [14] is another explicit surface reconstruction method, which takes a completely different approach to reconstructing a 3D surface model from a point set. The technique used in BPA is very simple and intuitive: a ball with various radii rolls over the points to generate triangles. Three consecutive points touched by the rolling ball create a new triangle if the triangle does not encompass any other point. This iterative operation starts from a seed triangle, creates new triangles by rolling the ball over the edges until all edges have been visited, and then starts with another seed triangle until all points have been processed. The varying radii help the algorithm overcome non-uniformity and uneven sampling densities.
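The geometric core of a single pivot step can be sketched in 2D: a ball (circle) of radius r touching both endpoints of an edge must have its center on the edge's perpendicular bisector, at a computable offset. This hedged, stdlib-only sketch covers only that one geometric step, not the full BPA pipeline:

```python
# One 2D ball-pivot computation: the centers of the two radius-r circles
# that pass through both endpoints of an edge. BPA pivots the ball about
# the edge between these positions while searching for the next point.
import math

def ball_centers(p, q, r):
    """Centers of the two radius-r circles through points p and q (2D)."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    half = math.dist(p, q) / 2
    if r < half:
        return []                        # ball too small to touch both points
    h = math.sqrt(r * r - half * half)   # offset along the perpendicular bisector
    ux, uy = (q[0] - p[0]) / (2 * half), (q[1] - p[1]) / (2 * half)
    return [(mx - uy * h, my + ux * h), (mx + uy * h, my - ux * h)]

for c in ball_centers((0.0, 0.0), (2.0, 0.0), 2.0):
    print(c)  # both centers lie at distance 2 from each endpoint
```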

[15,16] describe other region-growing techniques for surface reconstruction that resemble the Ball Pivoting Algorithm.

Amenta and Bern [17] propose a simple combinatorial algorithm that extends the Voronoi diagram, Delaunay triangulation and the Crust algorithm to reconstruct surfaces from point sets. This technique relies on post-processing the Delaunay triangulated surface to filter out undesired triangles. The authors prove the correctness of the suggested technique in [17].

Another method based on Delaunay triangulation is proposed by Gopi et al. [18]. The algorithm uses an incremental triangulation method to produce the surface from unorganized 3D range data. It has four steps: normal computation, candidate point selection, Delaunay neighbor computation and triangulation. The first step computes the normal vectors of all sample points. The second step selects a subset of points that are possible neighboring vertices in the final triangulated surface. The Delaunay neighbor computation step determines, for each candidate point, its local Delaunay neighborhood. From the candidate points and the points in the local Delaunay neighborhoods, the triangulated surface is generated in the last step. A similar approach with more assumptions about the dataset is presented in [19].

Amenta et al. [20] introduce a new Delaunay based algorithm for surface reconstruction which is an improvement to the Crust algorithm that they had proposed earlier. They add further restrictions in post-filtering the Delaunay triangles. The new algorithm is called Cocone.

The Crust algorithm is revised in [21] to overcome the artifacts generated by under-sampling. The algorithm estimates the medial axis transform (MAT) and uses the inverse transformation to recover the surface. The result is called the Power Crust.

Gopi and Krishnan [22] present a fast and memory-efficient incremental projection-based algorithm for surface reconstruction. This approach builds the surface starting from an arbitrary point: it first finds all adjacent vertices in the triangulation boundary of the initial point, and then processes the adjacent points in breadth-first fashion to find their neighboring vertices. Processing all the points in the dataset and incrementally generating triangles leads the algorithm to the triangulated reconstructed surface. The algorithm includes three steps: (1) bucketing, (2) point pruning and (3) triangulation.

The Dimension algorithm presented by Dey et al. [23] provides a Voronoi-based method to detect the dimension of the shape associated with a sample point set. [23] also presents the CoconeShape algorithm to approximate shapes of arbitrary dimension from the sample points. The CoconeShape algorithm behaves like the Cocone algorithm, but with the extra dimension information obtained by the Dimension algorithm.

Dey and Goswami [24] present an extension of the Cocone algorithm called Tight Cocone. It interpolates the holes that the Cocone algorithm may leave in the reconstructed surface through a post-filling process: the algorithm uses the Cocone approach to produce a preliminary surface from the input set, then repairs the holes caused by local under-sampling with a post-processing filling method referred to as Mark and Peel.

Another incremental Delaunay-based surface reconstruction approach is presented by Cohen-Steiner and Da [25]. Their algorithm uses a greedy method to select triangles: first, the Delaunay triangulation of the input set is generated; then, according to some topological constraints, candidate triangles are selected. The output surface is reconstructed incrementally by picking triangles from the prioritized list of candidates and stitching them together.

Kolluri et al. [26] present a spectral surface reconstruction method. The algorithm first creates a Delaunay tetrahedralization of the point set and then prunes the surface by spectral graph partitioning, which decides whether each tetrahedron is inside or outside the surface. Where an interior tetrahedron meets an exterior one, a triangle is created on the shared face and included in the final set of surface triangles. The surface reconstructed by this method is called the Eigencrust.

Another region-growing surface reconstruction method, which produces an approximate minimum-weight triangulated surface, is introduced by Lin et al. [27]. The approach starts with a seed triangle and incrementally grows the surface by finding a point that forms a new triangle attached to the existing ones, using an intrinsic property of the input set named the sampling uniformity degree. The algorithm stops when no point can form a new triangle; i.e., the surface no longer grows. The algorithm is called Intrinsic Property Driven (IPD).

Mederos et al. [28] present a modified version of the Power Crust algorithm that makes it robust to noise; the Power Crust algorithm is changed by filtering the polar balls of some Voronoi vertices. For more details about the algorithm, refer to [28].

A dynamic surface reconstruction approach is proposed by Allegre et al. [29] for large unstructured point sets. The method consists of two major components: a selective reconstruction algorithm that simplifies the dataset and reconstructs the surface, and an algorithm that dynamically and locally refines or coarsens the surface. The reconstructive component is based on geometric convection, and the update algorithm works with respect to defined local sampling constraints. Allegre et al. [29] also introduce a new data structure, a combination of a kd-tree and Delaunay triangulation, that helps the reconstructive component deal with large datasets of more than a million points. An extension of this approach is proposed by Allegre et al. [30].

One of the most recent surface reconstruction algorithms is introduced by Digne et al. [5]. The method starts with an initial estimate of the surface obtained by Delaunay triangulation and then iteratively simplifies the surface by filtering out triangles through optimal transport; the filtering lowers the distance between the actual points in the dataset and their corresponding triangles on the surface.

#### 4.1.2 Parametric explicit surface reconstruction methods

A parametric explicit surface reconstruction technique is proposed in [31]. Deformation and blending are the two main operations used to fit primitive shapes such as cylinders, spheres or tori to a given dataset. “Given two shapes that can be defined parametrically on a common material coordinate space, blended shapes are constructed by the linear interpolation of two shapes using a blending function that specifies the relative contribution of each shape on the resulting blended shape” [31].

Other shape representations use cylinders [32,33], superquadrics [34,35], hyperquadrics [36] and blobby models [37]. The geometry of projecting blending surfaces [38], modeling surfaces with dynamic particles [39], algebraic curve and surface fitting [40] and constraints on deformable models [41] are other related studies revolving around deformation and parametric surfaces.

### 4.2 Implicit Methods

The major difference between implicit and explicit methods is that implicit algorithms must be followed by a triangulation technique, because their outputs are manifold surfaces usually defined by a complex function. That manifold surface needs to be triangulated by a subsequent method, the best known of which is Marching Cubes. Numerical methods are also used to render implicitly reconstructed surfaces; in that case, the best-fitting isocontour of the implicit function is found by numerically searching the space.

As discussed in Section 2.1, these methods can be categorized into typical and variational. Variational implicit surface reconstruction techniques are distinguished by the approximation function, or combination of functions, that they use to estimate their outputs.

#### 4.2.1 Variational implicit surface reconstruction methods

One of the earliest implicit surface reconstruction algorithms is proposed by Hoppe et al. [6]. The input of this technique is an unorganized point set and the output is an approximated surface. The method does not need additional information about the dataset, such as the topology of the surface or its boundaries; all required parameters are inferred from the input data. Hoppe et al. [6] approximate the surface with the zero set of a signed geometric distance function. For visualization, the manifold surface is fed to a triangulation algorithm to generate a simplicial surface.

Dinh et al. [8] introduce an implicit surface reconstruction method using anisotropic basis functions. The method preserves anisotropic features of the surface, including sharp edges and corners, by letting the basis functions vary locally. Principal Component Analysis (PCA) is performed on small neighborhoods of points in the dataset to detect the anisotropy of the surface, and an approximation of the surface is then represented by non-uniformly scaled radial basis functions. This approach does not require a priori knowledge about the surface, such as its topology or point normals.
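The PCA-based anisotropy detection can be illustrated in 2D with the stdlib only: the eigenvalues of a neighborhood's covariance matrix are nearly equal for an isotropic neighborhood and very unequal for an edge-like one. The closed-form 2x2 eigenvalues are used here, and the neighborhoods are made up; this is a sketch of the idea, not of the method in [8]:

```python
# PCA on a 2D point neighborhood: unequal covariance eigenvalues signal
# an anisotropic (edge-like) neighborhood; equal ones signal isotropy.
import math

def covariance_eigenvalues(pts):
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    cxx = sum((x - mx) ** 2 for x, _ in pts) / n
    cyy = sum((y - my) ** 2 for _, y in pts) / n
    cxy = sum((x - mx) * (y - my) for x, y in pts) / n
    t, d = cxx + cyy, cxx * cyy - cxy * cxy     # trace and determinant
    s = math.sqrt(max(t * t / 4 - d, 0.0))
    return t / 2 + s, t / 2 - s                 # largest, smallest eigenvalue

line = [(x, 0.0) for x in (0.0, 1.0, 2.0, 3.0)]      # edge-like neighborhood
lam1, lam2 = covariance_eigenvalues(line)
print(lam1, lam2)  # strongly anisotropic: all variance in one direction
```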

A surface reconstruction of 3D point sets by fitting a radial basis function is presented by Carr et al. in [42]. This technique uses radial basis functions as distance functions to approximate the surface; the surface is the zero set of the approximation function, i.e., minimizing the approximation function represents the surface. It starts with a general radial basis function as an initial guess and then reduces the RBF centers using a greedy algorithm. The efficiency of the RBF fitting and evaluation methods makes this approach fast and able to deal with large datasets.

Another implicit surface reconstruction method has been proposed by Zhao et al. [43]. It uses variational and partial differential equation (PDE) methods including the level set, fast sweeping and tagging algorithms. The method estimates the surface with an unsigned distance function, with some special manipulations and improvements. The algorithm first creates an estimated convection model from the dataset and then uses the level set method and a numerical implementation to output a smooth surface.

Alexa et al. [44] introduce an implicit surface reconstruction method that approximates the surface with the Moving Least Squares (MLS) mechanism applied to local differential geometry. Due to the locality of the computation, the input dataset may be partitioned to split the computation cost among different processing units. After surface estimation, the approach may down-sample to reduce the size of the output, or even to remove unexpected outliers; conversely, up-sampling may be used to fill holes caused by occlusion or improper initial sampling.
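The core MLS operation is projecting a point onto the surface implied by the samples. The following is a simplified one-step sketch (our own reduction; the bandwidth `h` is an illustrative parameter): fit a plane with Gaussian weights centered at the query, then project onto it. The full procedure of [44] iterates this local reference plane and additionally fits a bivariate polynomial over it.

```python
import numpy as np

def mls_project(x, points, h=0.5):
    """One simplified step of MLS projection: fit a plane to the
    samples with Gaussian weights exp(-d^2/h^2) centered at x, then
    project x onto that plane."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)
    c = (w[:, None] * points).sum(axis=0) / w.sum()  # weighted centroid
    q = points - c
    cov = (q * w[:, None]).T @ q                     # weighted covariance
    n = np.linalg.eigh(cov)[1][:, 0]                 # plane normal
    return x - np.dot(x - c, n) * n                  # projection onto plane
```

Because each projection uses only nearby samples, the computation parallelizes naturally, which is the locality property noted above.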

An RBF surface reconstruction technique for noisy datasets is presented by Carr et al. [45]. In this study, they convolve a radial basis function with a smoothing filter to reconstruct the surface. In addition, they introduce the idea of substituting the basic RBF in the approximated surface to overcome the generation of outliers and artifacts caused by noise in the dataset. “The key feature of this approach is that it avoids resampling the RBF on a fine grid or performing a numerical convolution” [45], which makes the approach fast and efficient.

Partial differential equations and distance functions are used, once again, to reconstruct 3D surfaces from point sets by Duan et al. [46]. A Laplacian tangential smoothing filter is used to generate a smooth surface that preserves the geometry and topology of the dataset, even in regions with sharp features. The approach starts with an initial coarse model defined by a deformable PDE and refines the surface iteratively to compute the closest possible model to the sample points.

Hornung and Kobbelt’s implicit surface reconstruction technique [47] uses an unsigned distance function and approximates surfaces represented by the zero set of this function. Their method extends algorithms that use signed distance functions; this attribute enables the algorithm to approximate surfaces without normal information and also avoids the outliers caused by misalignment of 3D scans. Finally, a minimum cut of a weighted spatial graph structure is used to represent the surface.

More recently, Alliez et al. [48] presented a Voronoi-based implicit surface reconstruction technique. They first compute a “tensor field whose principal axes and eccentricities locally represent respectively the most likely direction of the normal to the surface” [48], using the Voronoi diagram of the input point cloud. The algorithm then determines an implicit function whose gradient best fits the principal axes of the tensor field.

Another Principal Component Analysis-based surface reconstruction method is proposed by Huang et al. [49]. A preprocessing operator, weighted locally optimal projection, is used to denoise the input set and make it outlier-free and evenly distributed. After denoising, a PCA-based method computes an initial estimate of the normals. Finally, a surface is reconstructed using a priority-driven normal propagation scheme and an orientation-aware PCA mechanism.
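PCA yields normal directions but not consistent orientations. A minimal stand-in for the propagation step (a plain breadth-first traversal rather than the priority-driven scheme of [49]; function name and `k` are our own) can be sketched as:

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def orient_normals(points, normals, k=8):
    """Propagate a consistent orientation over a k-nearest-neighbor
    graph: starting from one seed point, flip a neighbor's normal
    whenever it disagrees with an already-oriented one. A simple BFS
    stand-in for priority-driven propagation."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k + 1)  # +1: query returns the point itself
    oriented = np.zeros(len(points), dtype=bool)
    oriented[0] = True
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in knn[i][1:]:
            if not oriented[j]:
                if np.dot(normals[i], normals[j]) < 0:
                    normals[j] = -normals[j]
                oriented[j] = True
                queue.append(j)
    return normals
```

The priority-driven variant of [49] visits points in order of orientation confidence rather than FIFO order, which makes the propagation more robust near sharp features.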

Oztireli et al. [50] address the issues of noise and loss of detail in surface reconstruction using the Moving Least Squares (MLS) mechanism. Their implicit approximation technique builds on MLS with a focus on local kernel regression.

#### 4.2.2 Typical implicit surface reconstruction methods

A Poisson surface reconstruction technique is introduced by Kazhdan et al. [51]. In this method, surface estimation is formulated as a Poisson problem, which makes the algorithm resilient to noise. Later, Bolitho et al. [52] and Li et al. [53] introduced parallel and improved versions of Poisson surface reconstruction based on Kazhdan et al.’s work [51].
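The core of the formulation can be stated compactly: the oriented samples define a vector field V, and the method seeks an indicator function χ (1 inside the model, 0 outside) whose gradient best matches V; the surface is then extracted as an iso-contour of χ.

```latex
\min_{\chi} \int \left\lVert \nabla\chi(x) - \vec{V}(x) \right\rVert^{2}\, dx
\quad\Longrightarrow\quad
\Delta\chi = \nabla\cdot\vec{V}
```

Because the right-hand side integrates the contribution of all samples, isolated noisy points have little influence on the solution, which is the source of the noise resilience noted above.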

Other typical methods in this category are algebraic surface drawing by Blinn [54], volumetric shape description using blobby models by Muraki [37] and smooth implicit surfaces from polygonal meshes by Yngve and Turk [55]. These approaches use Gaussian functions to represent surfaces.
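The Gaussian-based implicit function shared by these approaches is simple to state. A minimal sketch (function name and parameters are our own labels):

```python
import numpy as np

def blobby(x, centers, a, b):
    """Blobby-model implicit function: f(x) = sum_i a_i * exp(-b_i * |x - c_i|^2).
    Each 'blob' is a Gaussian; a_i controls its strength and b_i its
    radius of influence. The surface is an iso-level f(x) = T."""
    d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
    return float(np.sum(a * np.exp(-b * d2)))
```

Nearby blobs blend smoothly because their Gaussian fields overlap and sum, which is what gives these models their characteristic soft, organic surfaces.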

## 5. Experimental Results and Observations

Surface reconstruction methods are evaluated with respect to computation time, coarseness or fineness of the output, and whether they preserve intrinsic properties. In this section, each technique mentioned in the previous sections is analyzed to provide a rough estimate of what to expect when it is used in practice. Even though the computation times of some of the techniques are reported in the original papers, a fair tabular cross-comparison is not possible because each approach was evaluated on a different hardware platform.

There are standard datasets, such as the Stanford datasets, that most reconstruction techniques use for evaluation. The only way some of these methods can be compared is to evaluate how coarsely or finely they reconstruct the surface from the same dataset, in addition to how well they preserve the details of the surface.

In this section, the evaluation and experimental results of each technique presented in sections 3 and 4 will be discussed first. Then a rough cross-comparison of these techniques with respect to the aforementioned criteria will be shown in tabular format.

### 5.1 Triangulated Explicit Methods

The Voronoi diagram and Delaunay triangulation method presented in [9] is a brute-force technique for reconstructing an explicit triangulated surface from a point set. The overall performance of this method with various implementations is explained in [9].

The most important advantage of the Voronoi diagram and Delaunay triangulation technique is that it generates an accurate, interpolated surface. On the other hand, the method is not fast: as the size of the input set increases, the performance drops drastically. Moreover, this technique is not robust to noise and may not generate a smooth surface from an unstructured and sparse dataset.

The Crust algorithm [13] theoretically guarantees that, given well-sampled data, a smooth surface can be generated that is topologically equivalent to the original surface. In practice, however, the algorithm may not be fast and efficient as the dataset size increases, and its result may suffer from outliers when the dataset is noisy. In other words, the Crust algorithm comes with an important condition for satisfying efficiency and overcoming sparseness: noise-free, dense and relatively small samples are the key to an efficient, smooth, accurate and interpolated reconstructed surface. Further qualifications and experimental results of the Crust algorithm are presented in [13].

The Ball Pivoting Algorithm [19] is very simple and efficient in terms of memory and computing resource consumption. Moreover, it is robust to noise and capable of handling real range scan data. A performance assessment of the Ball Pivoting Algorithm, including results on the Stanford Bunny, Stanford Dragon, Stanford Buddha and Michelangelo’s Florentine Pieta, is provided in [14].

The performance of the system proposed in [17] is dominated by the Delaunay triangulation step; i.e., for large datasets the algorithm is not practical due to the rapid increase in computing time. In addition, the results of this extended version of the Crust algorithm suffer where the data is noisy and not dense enough. An example of a reconstructed surface and the method’s intolerance of unevenly sampled data is shown in [17].

The surface reconstruction method presented in [18] assumes nothing about the geometry, topology or any additional information about the input data, such as face normal vectors or boundaries. It only presumes that the dataset is densely sampled from a real object. As a result, the method is not noise tolerant and may suffer from artifacts in the resulting surface.

The main contributions of Gopi et al. [18] are a new sampling criterion, a fast and memory-efficient surface reconstruction algorithm, a normal estimation algorithm and fast Delaunay neighborhood computation, in addition to the system implementation.

Based on the results shown in [18], it can be deduced that the algorithm is fast and accurate enough, given its sampling assumption, in comparison with its siblings.

The Cocone algorithm introduced in [20] is Delaunay-based and builds on the Crust algorithm. The same issues as in the Crust algorithm, such as dealing with large datasets, noise, sparseness and non-uniformly dense point sets, still affect the accuracy and performance of the Cocone algorithm. [20] presents the performance of the Cocone approach in reconstructing the foot and golf club datasets, with results shown as both triangulated and manifold surfaces. The authors also prove that the algorithm guarantees geometric and surface homeomorphism by comparing the extracted manifold surface with the triangulated one.

One of the main drawbacks of the Power Crust algorithm [21] is that the output surface is not triangulated. The algorithm is claimed to be robust to noise, but the Voronoi diagram computation remains the bottleneck of the processing time. As expected by the authors, small holes do not appear in the output.

Gopi and Krishnan [22] impose three major assumptions on the input dataset, which reduce the generality of the algorithm because they are often unsatisfied on real datasets. Local uniformity, distinguishability of points in different layers and smoothness of the underlying surface are the three presumptions about the input data that must hold to produce a smooth, accurate and noise-free surface. Constructing different triangulations for different seed points, inability to produce sharp edges and corners, and intolerance of under-sampled and non-uniform input data are some of the shortcomings of this approach.

The main contributions of Gopi and Krishnan’s work [22] are memory and time efficiency, in addition to a high level of noise tolerance in some special cases. The performance and results of their work are displayed in [22].

Since the Dimension and CoconeShape algorithms presented in [23] are Voronoi-based and their surface approximation algorithm resembles the Cocone method, Dey et al.’s approach [23] shares the shortcomings of the Cocone and of all Voronoi-based surface reconstruction algorithms: intolerance of noise and sparseness, and performance issues on large datasets. The results of the Dimension algorithm for the curve, foot, engine and ball datasets are illustrated in [23]. From the results provided in [23] and [20], it appears that the dimension estimation in [23] increases the total computing time for the foot dataset in comparison with [20].

The Tight Cocone algorithm [24] relies on the assumption that under-sampling is local. It is therefore not productive when this condition is unmet, as is the case for noisy datasets. In addition, the algorithm cannot reconstruct internal voids. It is claimed that the algorithm provides a watertight surface given a well-sampled and noise-free input set. Illustrations of the performance and results of this technique are presented in [24].

The results and performance of the greedy algorithm introduced in [25] show that the algorithm can compete with similar approaches, but the correctness of the output topology is not guaranteed.

Kolluri et al. [26] explain that their proposed method “can ignore outliers, patch holes and under-sampled regions, and surmount ambiguity due to measurement errors” [26]. The reconstructed surfaces of this method are provided in [26]. However, the surface reconstruction time of this algorithm is high and it cannot generate sharp edges.

Many of the region-growing surface reconstruction methods depend on user-defined parameters. The Intrinsic Property Driven algorithm proposed in [27] instead relies on a self-descriptive property of the point cloud to reconstruct the surface, which makes the algorithm independent of, and less limited by, user-defined parameters. Moreover, the algorithm uses a special selection method to choose new vertices, which makes it effective at reconstructing triangulated surfaces with few topological errors. The reconstruction and performance results of this method are presented in [27].

Mederos et al. [28] introduce a modification to the Power Crust algorithm to make it robust to noise. This approach has been tested on the Stanford bunny, hip-bone and dragon datasets in the presence of noise; [28] contains the results.

The dynamic surface reconstruction technique in [29] provides an efficient algorithm for unorganized and large datasets. To prove the efficiency of the method, Allegre et al. [29] test their algorithm on datasets containing millions of points. The results are shown in [29], both visually and in terms of performance. For some of the reconstructed surfaces (Lucy and St. Matthew), the tests were run on multiple cores. In [29], ρ_{geom} is defined as the locality parameter for the local update algorithm.

Digne et al. [5] claim that their algorithm is robust to noise and outliers and that it preserves intrinsic features of the surface such as edges and sharp corners. They also suggest that the algorithm can be used as a post-processing step on the smooth surfaces of other reconstruction methods to recover sharp boundaries. Despite the simplicity of its formulation, the algorithm has a major drawback: a high computing time for large datasets. It is not guaranteed that the algorithm can reconstruct the surface of a reasonably large point set.

### 5.2 Parametric Explicit Methods

The blended deformable parametric surface reconstruction technique can only represent models that are deformable from primitive shapes. Due to this lack of generality, the method cannot represent a large class of objects. Some experimental results and performance measurements of this technique are displayed in [31].

Even though [31] proposes one of the few parametric surface reconstruction techniques, such techniques are not advantageous with respect to performance or the descriptive power of the reconstructed surfaces.

### 5.3 Variational Implicit Methods

The implicit surface reconstruction method introduced by Hoppe et al. [6] is an approximation algorithm. In terms of time complexity, the method is slow due to its three computation steps: (1) EMST graph construction, (2) k-nearest neighbor calculation for each point and (3) computation of the tangent plane nearest to each point. In addition, the method requires a triangulation algorithm as an extra step to reconstruct a simplicial surface. Besides the computing time, the algorithm needs a parameter tuning step. The performance of the algorithm, including the computation of the triangulated surface using marching cubes, is shown in [6], along with the reconstructed surfaces. Notably, the algorithm does not preserve some intrinsic properties of surfaces, such as boundaries and sharp edges.

Carr et al.’s radial basis function-based surface reconstruction method [42] is fast, memory efficient and able to deal with large datasets. Moreover, this implicit technique generates smooth interpolated surfaces and is robust to noise. Being scale-independent makes the algorithm suitable even for non-uniformly sampled data. The results of this technique are presented in [42].

The implicit surface reconstruction technique proposed by Zhao et al. [43] is claimed to be fast, robust to noise and feature-preserving. [43] displays the performance results of this method and some reconstructed surfaces in tabular and figurative formats. The initial timing reported in [43] is the surface approximation time spent in the sweeping and tagging algorithms; the total timing is the initial timing plus the time for solving the partial differential equations and reconstructing the actual surface using the level set method and its numerical implementation.

Duan et al. [46] evaluate their system with different types of input including volumetric data, 3D point clouds and 2D image scans. Providing an approximated surface at different levels of granularity is one of the major advantages of this method; [46] provides experimental results on 3D point clouds from coarse to fine levels of granularity.

The Voronoi-based variational surface reconstruction method presented by Alliez et al. [48] is designed to work with unoriented input point sets. In addition, the algorithm is claimed to be robust to noise thanks to an optimized implicit representation of the surface. Moreover, the method offers control over the smoothness of the surface and the tightness of its fit to the data.

Huang et al. [49] use a preprocessing step to make the input dataset noise-free and outlier-free. This preceding operation, which the authors call consolidation, also ensures that the input set is evenly distributed. As a result, the reconstruction technique is robust to noise and sparseness, and yields better results on the topology and geometry of the input data.

In [50], Oztireli et al. address the issues of noisy data and preservation of intrinsic surface features. Their method produces surfaces that are resilient to noise and preserve details such as sharp edges. Other advantages of this approach mentioned by the authors include stability under sparse sampling, efficiency and ease of implementation.

### 5.4 Typical Implicit Methods

An interesting comparison of the Poisson surface reconstruction technique with other algorithms on the Stanford bunny is presented by Kazhdan et al. [51]. As mentioned, this method is robust to noise and sparseness. One drawback, explained in [51], is that holes present in the original surface may not be recovered in the reconstruction.

### 5.5 Cross-Comparison and Overall Trends

To cross-compare the surface reconstruction methods presented in the previous sections, we selected only those methods that have been evaluated on the same datasets and compared them with respect to how coarse or fine the reconstructed surface is. To measure coarseness and fineness, the final triangulated surfaces are analyzed: the more triangular faces a surface has, the finer or smoother it is, while fewer triangles indicate a coarser surface. Table 1 illustrates this comparison among the algorithms tested on the Stanford bunny, foot and golf club datasets. The timings presented in Table 1 are not a good basis for comparison because, as noted earlier, the methods were run on different hardware platforms. Among all the reconstruction methods discussed, Bernardini et al.’s Ball Pivoting Algorithm [14] and the dynamic surface reconstruction method by Allegre et al. [29] are the approaches that can handle large datasets on the order of millions of points. All numbers in Table 1 are collected from the original papers.

Some of the overall trends observed in the literature are:

- Preserving the intrinsic properties of the reconstructed surfaces
- Reducing the computation time of surface reconstruction methods (though this may affect the accuracy of the surface)
- Reconstructing fine, accurate and robust-to-noise surfaces
- Producing reasonably acceptable surfaces from minimally informative input data, i.e., just a set of coordinates in space and nothing else
- Pursuing two opposite directions, approximated surfaces and interpolated ones, where the former are typically implicit techniques and the latter explicit ones
- Reconstructing surfaces from noisy, unstructured, under-sampled and large datasets

As 3D scanners become more accurate and the acquired point clouds become less noisy and denser, the overall size of the clouds keeps growing. Therefore, two trends are presumably more significant than the others for future work in this field: preserving the intrinsic properties of the reconstructed surfaces and reducing the computation time (which is a challenge when dealing with large datasets).

## 6. Conclusion

This paper aimed to provide a brief review of the various techniques for surface reconstruction from 3D point cloud data. Since this subject is useful in a wide variety of scientific fields, extensive research has been conducted in this area in recent years, and many different types of surface reconstruction techniques have been proposed, categorized by their output surface representations and other qualification factors.

Many issues and constraints need to be taken into account in order to build a smooth, feature-preserving and accurate 3D surface from a scattered point set. There is always a tradeoff between accuracy and efficiency: the higher the accuracy demanded, the lower the efficiency in terms of memory usage and computation time. Noisy, sparse and non-uniformly distributed data make the surface reconstruction process challenging.

In this survey, research studies since the early 1990s have been reviewed and compared in several respects. In order to choose the best solution for an application, several key factors of the output surfaces need to be evaluated. Moreover, some techniques do not preserve intrinsic properties of the surface such as sharp edges and corners. Feature-preserving approaches may suffice to recover the geometric and topological characteristics of a surface, but they may suffer in other aspects such as performance and intolerance of noisy data.

Since the methods have been evaluated on different datasets and run on different platforms, it is not possible to cross-compare all the techniques in terms of accuracy and efficiency. However, we have provided a sub-comparison of a few methods that were evaluated on the same datasets to provide some insight into their relative performance.

Despite several techniques being proposed for this task in recent years, our survey of the literature demonstrates that 3D surface reconstruction from point cloud data remains a challenging problem. As 3D data acquisition technologies continue to evolve and their use becomes more widespread in various application domains, the need to improve upon existing methods and to develop new techniques for surface reconstruction is ever more compelling.

## References

## Biography

**Alireza Khatamian**

He received his Ph.D. degree in Computer Science from the University of Georgia (Athens, GA, USA) in 2016. He currently works as Senior Software Architect at Peak Soil Indexes, LLC and as Lab Assistant at the Complex Carbohydrate Research Center. His research interests include 3D model understanding, video processing and synchronization, 3D model reconstruction, video motion magnification, distributed computing and big data, Hadoop clusters and graph processing systems, machine learning, planning and decision making, computer vision, image processing, 3D computer graphics, and robot programming and robotics (unmanned aerial vehicles, wheeled mobile robots, manipulators).

**Hamid R. Arabnia**

Hamid R. Arabnia received a Ph.D. degree in Computer Science from the University of Kent (Canterbury, England) in 1987. He is currently a Professor of Computer Science at the University of Georgia (Athens, GA, USA), where he has been since October 1987. His research interests include parallel and distributed processing techniques and algorithms, supercomputing, big data analytics (in the context of scalable HPC), imaging science (image processing, computer vision, and computer graphics), and other compute-intensive problems. Methodologies that promote cross-disciplinary education are also of great interest. Applications of interest include health informatics, medical imaging, and security. His most recent activities include studying ways to promote legislation that would prevent cyber-stalking, cyber-harassment, and cyber-bullying. Prof. Arabnia is Editor-in-Chief of The Journal of Supercomputing (one of the oldest journals in Computer Science). He is on the editorial and advisory boards of 40 other journals. He is the book series editor-in-chief of “Transactions of Computational Science and Computational Intelligence” (Springer) and editor-in-chief of the book series entitled “Emerging Trends in Computer Science and Applied Computing” (Elsevier). He is the founder of the WORLDCOMP annual congress, which has served thousands of scientists. Prof. Arabnia has published extensively in journals and refereed conference proceedings. He has been a PI/Co-PI on about $8 million in externally funded projects/initiatives and on about $200,000 in internally funded projects. He has also contributed projects for justification of equipment purchases (grant proposals worth over $4 million, awarded). During his tenure as Graduate Coordinator/Director of Computer Science (2002–2009), he secured the largest level of funding in the history of the department for supporting the research and education of graduate students (Ph.D., M.S.).