Continuity and Interpolation Techniques for Computer Graphics

Continuity and interpolation have been crucial topics for computer graphics since its very beginnings. Every time we want to interpolate values across some area, we need to take a set of samples over that region. However, interpolating those samples faithfully, so that the results closely match the underlying functions, can be a tricky task: the sampled functions may not be smooth and, in the worst case, interpolation may even be impossible when they are not continuous. In those situations, providing the required continuity is not easy, and much work has been done to solve this problem. In this paper, we focus on the state of the art in continuity and interpolation in three stages of the real-time rendering pipeline. We study these problems and their current solutions in texture space (2D), object space (3D) and screen space. With this review of the literature in these areas, we hope to shed new light on, and foster research in, these fundamental, yet not completely solved, problems in computer graphics.


Introduction
In computer graphics applications, it is common practice to use a standard pipeline that starts by texturing three-dimensional (3D) models to apply material properties to them. Then, once the models are textured, they are deformed to create new poses that are more appropriate for the needs of a certain scene. Finally, those models are visualized with a rendering algorithm. It is evident that mesh texturing, mesh deformation and rendering are key parts of the computer graphics pipeline (see Figure 1). Much research has been done on these topics, resulting in methods that allow computer-generated images to be created in a more flexible, robust and efficient way.
Despite all these efforts, many of those approaches suffer from continuity problems that prevent the correct usage of interpolation procedures. Frequently, the underlying reason why evaluating samples taken near discontinuous regions produces serious continuity issues is that the underlying functions are not smooth (class C^1), or not even continuous (class C^0), at the region boundaries. Some applications require even higher-order continuity, up to requiring the functions to be of class C^∞. Achieving the required degree of continuity is a challenging, yet crucial, aspect of the different stages of a rendering pipeline in computer graphics.
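For reference, these smoothness classes have a precise meaning; the following block states the standard textbook definition (it is not specific to any cited work):

```latex
% Standard continuity classes for a function f : U \subseteq \mathbb{R}^m \to \mathbb{R}^n.
f \in C^0(U) \iff f \text{ is continuous on } U,
\qquad
f \in C^k(U) \iff \text{all partial derivatives of } f \text{ up to order } k
                  \text{ exist and are continuous on } U,
\qquad
C^\infty(U) \;=\; \bigcap_{k \ge 0} C^k(U).
```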
In this paper, we present some important concepts that are frequently used throughout the field, as well as some techniques involving continuity and interpolation methods for interactive computer graphics applications. We first introduce a generic formulation for parameterization and continuity on surfaces in texture space (2D), object space (3D) and screen space, as a way to understand the problems and challenges that we need to solve. Then, we introduce the concept of interpolation and explain several approaches to it. After that, we describe the state of the art for continuity and interpolation in each of these spaces.

Parameterization and continuity
A mapping or parameterization of a surface can be viewed as a one-to-one correspondence from the surface to a suitable domain [FH05]. In general, as the mapping domain itself is a surface, constructing a parameterization means mapping one surface into another. Usually, surfaces are represented (or approximated) by triangular meshes, and in this case, the mappings are piecewise linear. Surface parameterization was first introduced in computer graphics as a way to map textures onto surfaces, but it has also been successfully and widely used in other areas: repair of CAD models, mesh editing and compression, remeshing, surface fitting, detail mapping, etc. In the following subsections, we describe parameterizations in several different spaces and explain the discontinuities they introduce in the mapping function from one domain to the other, which will later be addressed by the techniques reviewed in this paper.

Texture space
Given a surface S ⊂ R^3 and a point p ∈ S, let us define a domain T ⊂ R^2 and a parameterization (or mapping) function m_T : S → T such that t = m_T(p), with t ∈ T (see Figure 2). As our surfaces are represented by triangular meshes, the objective of m_T is to map each polygon of the mesh in S to a polygon of the 2D space T. Ideally, a good parameterization m_T would transform each triangle of the mesh from S to T without any kind of distortion. Thus, depending on the measure that m_T minimizes, we can classify parameterizations as follows:

• Conformal parameterizations. A parameterization m_T is conformal, or angle preserving, if the angle of intersection between every pair of edges of each triangle of the mesh is the same in S as in T. Thus, a conformal parameterization m_T tries to minimize the distortion of the angles between each pair of edges when they are mapped to T.

• Equiareal parameterizations. A parameterization m_T is equiareal if every triangle of the mesh is mapped onto T with the same area. So, an equiareal parameterization m_T has as its main objective the minimization of the distortion of the triangle areas of S when they are mapped to T.

• Isometric parameterizations. A parameterization m_T of a triangular surface is isometric when it preserves both the angles of the triangles and their areas. Isometric mappings can be considered ideal, as they preserve everything we could ask for: areas, angles and, as a consequence, lengths. However, isometric mappings only exist for very special cases. What many approaches try to do is to find a parameterization that is either conformal, or equiareal, or minimizes some combination of angle and area distortion.
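To make these distortion measures concrete, the following sketch (our own illustration; the triangle values are toy data and the function names are ours, not from any cited method) compares the interior angles and the area of a triangle on S with those of its image in T:

```python
import numpy as np

def triangle_angles(tri):
    """Interior angles of a triangle given as a (3, d) array of vertices."""
    a, b, c = tri
    def ang(p, q, r):  # angle at p between edges p->q and p->r
        u, v = q - p, r - p
        cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosine, -1.0, 1.0))
    return np.array([ang(a, b, c), ang(b, c, a), ang(c, a, b)])

def triangle_area(tri):
    """Triangle area in 2D or 3D via the cross product."""
    a, b, c = tri
    e1, e2 = b - a, c - a
    if tri.shape[1] == 2:
        return 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    return 0.5 * np.linalg.norm(np.cross(e1, e2))

# tri3d: a triangle on S; tri2d: its image in T under m_T (toy values).
tri3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
tri2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.4]])

angle_distortion = np.abs(triangle_angles(tri3d) - triangle_angles(tri2d)).sum()
area_ratio = triangle_area(tri2d) / triangle_area(tri3d)
print(angle_distortion, area_ratio)  # a conformal map gives ~0; an equiareal map ~1;
                                     # an isometric map gives both (0, 1)
```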
Let us define a function a that retrieves all the attributes associated with a surface point. Without loss of generality, let us assume that all attributes are float tuples that can be arranged linearly in an n-dimensional array, so that a : S → R^n. Thus, a(p) = ⟨attribute_1, attribute_2, . . . , attribute_n⟩ is a tuple of n real values storing all the attributes of point p (e.g. the normal, the texture coordinates, the colour, etc.). In practice, in computer graphics, the domain T is usually discretized into a grid of cells G ⊂ Z^2 by a function m_G : T → G, such that for each parameterized point t ∈ T that falls in a given cell g ∈ G, we store a single representative value a(p) in g, with p = m_T⁻¹(t). This new space G is usually known as texture space (see Figure 3).
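As a minimal illustration of the discretization performed by m_G, the following sketch (hypothetical names of ours; nearest-cell lookup only, with no filtering) maps a parameterized point t ∈ T = [0, 1]² to its cell g ∈ G and fetches the representative attribute stored there:

```python
import numpy as np

def m_G(t, resolution):
    """Map a parameterized point t in [0, 1]^2 to an integer cell g of G."""
    g = np.clip(np.floor(np.asarray(t) * resolution).astype(int), 0, resolution - 1)
    return tuple(g)

# One representative attribute tuple a(p) per cell (here n = 3, e.g. an RGB colour).
resolution = 4
texture = np.random.rand(resolution, resolution, 3)

t = (0.3, 0.9)            # t = m_T(p) for some surface point p
g = m_G(t, resolution)    # the cell that t falls into
print(g, texture[g])      # the representative value a(p) stored in g
```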
When parameterizing a generic object, it is common practice [BKP*10] to partition the object surface S into multiple charts C_i, verifying that S = ∪_i C_i and C_i ∩ C_j = ∅ for i ≠ j. When dividing the mesh surface S into charts, seams are introduced, which are cuts applied over S [FH05]. These charts are then mapped by the parameterization function m_T into disjoint parts of T and, as a consequence, into disjoint parts of G as well. Unwrapping an entire mesh as a single chart can create parameterizations with large distortion and less uniform sampling than what can be achieved with multiple local charts, in particular for surfaces of high genus. For multi-chart parameterizations, discontinuities are introduced at the boundaries between charts, where the seams are located. For these parameterizations, we can distinguish two types of discontinuities:

• Spatial discontinuities. This type of discontinuity appears because neighbouring triangles in S (in 3D) are not neighbours in T (in 2D) and, as a consequence, neither in G. In this way, a continuous mesh S is transformed by m_T (and m_G) into a discontinuous piecewise mesh in T (and G). As an example, in Figure 3, we can observe two neighbouring charts C_i and C_j in S. Those charts are parameterized by m_T into disconnected regions of T.
• Sampling discontinuities. Sampling discontinuities appear because charts that are neighbours in S (in 3D) are mapped into T by m_T with different distortion, rotation or scaling and, as a consequence, their sampling in G produced by m_G differs, too. As a result, cells from G do not match up at chart boundaries when back-projected to S. As an example, in Figure 3, we show two charts C_i and C_j and a point p lying on the boundary between them in S. Given that the mapping produced by m_T may differ for triangles of charts C_i and C_j, the sampling of the colour in g = m_G(m_T(p)) may differ, too. Thus, when the texture from G is mapped onto S, neither the cells nor the colours stored in them will match at both sides of the boundary between the charts.

Object space
The previously defined 2D parameterizations transform a surface S ⊂ R^3 to a smaller (or simpler) domain T ⊂ R^2, where we can operate more easily. Now, let us define a parameterization or mapping function m_O that does not reduce the dimension of the output domain, but instead reduces its complexity by mapping every point p of the input domain S with respect to another, simpler mesh C ⊂ R^3 that encloses S. The mesh C is piecewise defined by a set of connected submeshes C_i that form a partition of C, such that C = ∪_i C_i and C_i ∩ C_j = ∅ for i ≠ j. C is characterized by the fact that it usually has a similar, but simpler, shape than S (see Figure 4). This is an important point: by using m_O, we will be able to perform any operation on C and then transfer it back to S. As a consequence, we can reduce the complexity of the calculations involved. For that purpose, we first define a function w : S × V(C_i) → R^n that binds each surface point p to the set of vertices of its enclosing submesh (denoted as V(C_i)) by a set of weights. As we will show in the following sections, there are several ways to define such a binding through the usage of different coordinate types (see Section 3.2). Each type of coordinate has its own properties, but most of them share two key features: they are completely defined and smooth (C^∞) inside a submesh C_i but, on the contrary, they have continuity problems at submesh boundaries (ranging from C^0 continuity down to outright discontinuity). Now, let us specify the parameterization function m_O such that p = m_O(w(p, V(C_i)), V(C_i)). In this way, when C is not deformed, the surface S remains smooth and continuous under the mapping function m_O. However, if we transform mesh C into C′ and transform each point of the surface as p′ = m_O(w(p, V(C_i)), V(C_i′)), discontinuities will appear in regions near the cage boundaries (see Figure 4). As a result, we obtain non-smooth deformations of S. That is, as we get closer to a boundary between two deformed controlling submeshes C_i′ and C_j′ of C′, discontinuities may appear when transferring the deformation to the enclosed surface through m_O. As a consequence, all the attributes associated with the enclosed surface become discontinuous.
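Once the binding w has been computed, the deformation-transfer step of m_O reduces to a weighted combination of cage vertices. A minimal sketch, assuming generic precomputed weights (it is not tied to any particular coordinate type reviewed later):

```python
import numpy as np

def transfer_deformation(weights, cage_vertices):
    """
    Evaluate p' = m_O(w(p, V(C_i)), V(C'_i)) = sum_j w_j(p) * v'_j for a batch
    of surface points; weights has shape (#points, #cage_vertices).
    """
    return weights @ cage_vertices

# Toy example: one point bound to a tetrahedral submesh C_i.
cage = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
weights = np.array([[0.25, 0.25, 0.25, 0.25]])   # w(p, V(C_i)), summing to 1

p = transfer_deformation(weights, cage)          # undeformed cage: m_O reproduces p
cage_deformed = cage.copy()
cage_deformed[3] = [0., 0., 2.]                  # move one cage vertex
p_deformed = transfer_deformation(weights, cage_deformed)
print(p, p_deformed)
```

Discontinuities arise precisely because w is computed per submesh: two nearby surface points bound to different submeshes C_i and C_j receive unrelated weight vectors, so their deformed positions need not vary smoothly across the boundary.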

Screen space
Usually, once we have a triangular mesh positioned in 3D space, we want to visualize it. For that purpose, the user defines, among other things, the scene properties (e.g. materials), the lighting settings and the cameras used to render the model from a given viewpoint. To display the resulting image, the scene undergoes a set of transformations that map each point p of a surface in S to the screen by a mapping function m_S (see Figure 5). Once mapped to the screen, the transformed model lies in a space commonly known as screen space. That space is merely a discretization of the image projected by the camera to be displayed on a physical device.
As we have commented previously, given a point p ∈ S in our scene, we can retrieve all its attributes with a(p). Even if a is a continuous, smooth function, discontinuities may appear when projecting a(p) to screen space S_S by the mapping function m_S(a(p)), as a consequence of the sampling performed to represent the scene on the discretized image plane. This happens when our sampling rate is not high enough to represent the signal defined by the projection of the 3D model. This is simply a consequence of the Nyquist sampling theorem, which states that a bandlimited analogue signal can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2ν samples per second, where ν is the highest frequency of the original signal. An example of the problems that appear due to insufficient sampling is the loss of detail when projecting sharp features of a model in screen space. Conversely, a side effect of sampling is that smooth areas of a(p) can be over-sampled, unnecessarily increasing the computational time of the image generation in those regions.
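A small numerical illustration of this sampling limit (a toy example of ours, unrelated to any cited work): a sinusoid of frequency ν sampled below the rate 2ν is indistinguishable from a lower-frequency alias.

```python
import numpy as np

nu = 5.0                  # highest frequency in the signal (Hz)
fs = 7.5                  # sampling rate below the Nyquist rate 2 * nu = 10 Hz
t = np.arange(8) / fs     # sample times

original = np.sin(2 * np.pi * nu * t)
alias = np.sin(2 * np.pi * (nu - fs) * t)   # a 2.5 Hz sinusoid

print(np.allclose(original, alias))  # True: the samples cannot tell them apart
```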
Thus, let us consider a partition of a mesh S in 3D space obtained by grouping nearby points p that have similar features or attributes a(p). Each group will be called a cluster. So, given two points p_1 ∈ S and p_2 ∈ S, they will belong to the same cluster if they are similar enough, that is, if the distance between their features is smaller than a given threshold: dist(a(p_1), a(p_2)) ≤ Th. In the simplest cases, we can compute dist(a(p_1), a(p_2)) as |a(p_1) − a(p_2)|, but more general metrics can be used.
Finally, consider two neighbouring clusters C_i and C_j. For points p lying on the boundary between C_i and C_j, the features or attributes given by a(p) may change abruptly (see left image in Figure 6). As we already said, this is due to two main reasons: the high frequencies that can exist in the signal of any of the features of the mesh, and the discretization performed to project it on the screen. When this happens, we can observe a discontinuity or an area of non-homogeneity. So, if we project the value a(p) to screen space by m_S(a(p)), the discontinuity detected in S will still be reflected in screen space S_S (see right image in Figure 6). This allows for easy detection of discontinuities, further promoting a better and faster reconstruction of the final image on our physical device.
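A minimal sketch of this clustering criterion (our own flood-fill variant with toy attribute values; dist is the simple norm mentioned above, and the helper names are ours):

```python
import numpy as np
from collections import deque

def cluster_samples(attributes, th):
    """
    Group grid samples into clusters by flood fill: two 4-neighbours belong to
    the same cluster when dist(a(p1), a(p2)) = |a(p1) - a(p2)| <= th.
    attributes: (H, W, n) array of per-sample attribute tuples a(p).
    Returns an (H, W) label array; a label change marks a discontinuity.
    """
    h, w, _ = attributes.shape
    labels = -np.ones((h, w), dtype=int)
    current = 0
    for seed in np.ndindex(h, w):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                        and np.linalg.norm(attributes[y, x] - attributes[ny, nx]) <= th):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        current += 1
    return labels

# Two flat regions separated by an abrupt attribute change -> two clusters.
a = np.zeros((4, 8, 1)); a[:, 4:] = 1.0
print(cluster_samples(a, th=0.5))
```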

Interpolation
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In engineering and science, users often have a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate (i.e. estimate) the value of that function for an intermediate value of the independent variable. This may be achieved by curve fitting or regression analysis. Similarly, in computer graphics, interpolation is defined as the creation of new values that lie between a set of known values. These known values usually represent a smooth signal and, as a consequence, the newly created values represent a smooth signal, too (although some detail can be lost). One cannot expect a high-quality interpolation if the initial values contain some sort of discontinuity. As an example of interpolation, when a triangle is rasterized onto a 2D image from its vertices, all the pixels between those vertices are filled in by an interpolation algorithm, which determines their attributes (colours, normals, texture coordinates, etc.). Another example of interpolation happens when an image generated by a video game is rendered at sub-HD resolution and then upscaled to be displayed on a monitor that supports full-HD mode. There, the missing information is again obtained by interpolation.
When performing an interpolation over a set of values, there are a number of interpolation functions that can be used. The following approaches are the most used in computer graphics, as they are easy to calculate, stable and, for the most part, implemented in current graphics hardware. Let us review the types of interpolation (see Figure 7; a code sketch of the first three follows this list):

• Linear interpolation. Linear interpolation (see Figure 7a) is the simplest and fastest method of interpolation. Given two known points x_0 and x_1 in a 1D space, the linear interpolant is the straight line that joins both points. For a point p that lies in the interval [x_0, x_1], we can define linear interpolation mathematically as p = (1 − λ) x_0 + λ x_1, with λ ∈ [0, 1]. When λ is equal to 0, p = x_0, and when λ is equal to 1, p = x_1. Let us note that piecewise linear interpolation produces smoothness discontinuities at the sample points, e.g. at x_1 when the points x_1 and x_2 do not lie on the same line as x_0 and x_1.

• Bilinear interpolation. In mathematics, bilinear interpolation (see Figure 7b) is considered an extension of linear interpolation for functions of two variables (X and Y) on a regular 2D grid. We can view bilinear interpolation as a linear interpolation in the first variable X followed by a linear interpolation in the second variable Y. Bilinear interpolation considers the closest 2 × 2 neighbourhood of known point values surrounding the unknown point's location, taking a weighted average of those four samples to compute the final interpolated value. The weight on each of the four point values is based on the computed point's distance (in 2D space) from each of the known points. So, suppose that we want to find the value of a function f at a point p = (x, y), and that we know the value of f at the four points (x_0, y_0), (x_0, y_1), (x_1, y_0) and (x_1, y_1). We first perform a linear interpolation in the x-direction,

  f(x, y_0) ≈ ((x_1 − x) / (x_1 − x_0)) f(x_0, y_0) + ((x − x_0) / (x_1 − x_0)) f(x_1, y_0),
  f(x, y_1) ≈ ((x_1 − x) / (x_1 − x_0)) f(x_0, y_1) + ((x − x_0) / (x_1 − x_0)) f(x_1, y_1),

and then we interpolate those results in the y-direction:

  f(x, y) ≈ ((y_1 − y) / (y_1 − y_0)) f(x, y_0) + ((y − y_0) / (y_1 − y_0)) f(x, y_1).   (1)

In image processing, bilinear interpolation is one of the basic re-sampling techniques, while in texture mapping, it is also known as bilinear texture mapping.
• Trilinear interpolation. As bilinear interpolation is the extension of linear interpolation to 2D spaces (e.g. texture space), we can refer to trilinear interpolation (see Figure 7c) as the extension of linear interpolation to 3D space. Trilinear interpolation requires the eight adjacent pre-defined values surrounding the interpolation point, so it is the most expensive of these interpolation modes. Suppose we have a periodic cubic lattice with spacing 1; all we have to do is follow the same procedure as before with one more dimension added. First, we interpolate along the x axis, then we interpolate the obtained values along the y axis and, finally, we do the same along the z axis. Each interpolation can be seen as pushing in the corresponding face of the cube and performing a linear interpolation.
• Anisotropic filtering. For oblique viewing angles, trilinear interpolation might not be enough, as different parts of a texture would require evaluating different mipmap levels. For these cases, it is convenient to use anisotropic filtering techniques, which scale either of the dimensions of a mipmap by a ratio relative to the perspective distortion of the texture [AMHH08]. This ratio depends on the maximum sampling value specified.
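The sketch below (plain Python of our own, not hardware code) implements the first three modes exactly as described above; texture units perform conceptually the same arithmetic:

```python
import numpy as np

def lerp(x0, x1, lam):
    """Linear interpolation: p = (1 - lam) * x0 + lam * x1, lam in [0, 1]."""
    return (1.0 - lam) * x0 + lam * x1

def bilerp(f00, f10, f01, f11, tx, ty):
    """Bilinear: lerp along x on both rows, then lerp the results along y."""
    return lerp(lerp(f00, f10, tx), lerp(f01, f11, tx), ty)

def trilerp(c, tx, ty, tz):
    """Trilinear: bilinear on the two z-faces of the cell, then lerp along z.
    c is a (2, 2, 2) array of corner values indexed as c[ix, iy, iz]."""
    front = bilerp(c[0, 0, 0], c[1, 0, 0], c[0, 1, 0], c[1, 1, 0], tx, ty)
    back  = bilerp(c[0, 0, 1], c[1, 0, 1], c[0, 1, 1], c[1, 1, 1], tx, ty)
    return lerp(front, back, tz)

print(lerp(0.0, 10.0, 0.25))                                     # 2.5
print(bilerp(0.0, 1.0, 1.0, 2.0, 0.5, 0.5))                      # 1.0
print(trilerp(np.arange(8.0).reshape(2, 2, 2), 0.5, 0.5, 0.5))   # 3.5
```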

Continuity and Interpolation Techniques
In this section, we are going to review some of the most well-known techniques related to continuity and interpolation in texture space, object space and screen space.

Parameterization using 2D and 3D data structures
Continuity and interpolation in the texture space domain are strongly related to texture parameterization [Hec86], which intends to solve the problem of continuity in texture space [FH05]. Several approaches have been proposed to achieve this. Some of them rely on 2D data structures for texturing purposes, while others use 3D data structures:

• Parameterization techniques with 2D data structures. Levy et al. [LPRM02] presented a multi-chart parameterization technique with a new quasi-conformal mapping, based on a least-squares approximation of the Cauchy-Riemann equations, as well as a new packing method for the generated charts. Geometry images [GGH02] unwrap an entire mesh into a single chart, creating parameterizations with greater distortion and less uniform sampling than can be achieved with multiple local charts, in particular for surfaces of high genus. Carr et al. [CHCH06] and Sander et al. [SWG*03] presented extensions that parameterize them into multiple charts, and Purnomo et al. [PCK04] described a new type of seamless quadrilateral chart-based atlas, where the texels of neighbouring charts are copied into a one-pixel gutter on the boundary of the original chart. This work was an extension of the one by Carr and Hart [CH02], where mipmappable atlases were created with one chart per triangle, also using a gutter to sample across seams. Zhou et al. [ZSGS04] presented a fully automatic method to create texture atlases on arbitrary meshes. Seams are still present to create a chartification of the mesh, but the method allows the user to balance the number of charts against the resulting texture stretch (see Figure 8). Another line of work proposed a tile-based approach that uses virtual implicit texture coordinates aligned with the parameterization that comes from the underlying Catmull-Clark base surface. Schäfer et al. [SKS13] followed the same approach by introducing real-time local displacement mapping with a dynamic graphics processing unit (GPU) memory management system, which was later extended [SKNS14] by adding dynamic allocation of GPU memory only in regions when and where needed, thus avoiding costly central processing unit (CPU)-GPU memory transfers. To finish with the most important work done on parameterizations using 2D data structures, it is worth mentioning the work done on global parameterizations: Gu and Yau [GY03] presented a new approach that solved the computation of a global parameterization for surfaces with complex topologies, mostly preserving conformality and avoiding discontinuities. The following year, Gu et al. [GWC*04] presented a global parameterization approach, applied to the cortical surface matching problem, that depends neither on the mesh structure nor on the resolution of the input mesh. Jin et al. [JKLG08] presented a unified framework for discrete surface Ricci flow algorithms that has direct and practical value for global surface parameterizations. Although it is not related to global parameterizations, we do not want to finish this paragraph without citing the work by Zhao et al. [ZSG*13], who presented a novel and efficient area-preserving unwrapping method using the optimal mass transport approach. They show how the combination of conformal and optimal mass transport mappings can be effectively used for several computer-graphics-oriented applications, including medical imaging.
• Parameterization techniques with 3D data structures. Benson and Davis [BD02] presented in 2002 a technique called Octree Textures, showing how a 3D hierarchical data structure could be used to efficiently store colour information along a mesh surface without computing texture coordinates. Several applications, such as surface painting and simulations, were shown. However, it had several drawbacks. First, any filtering method needed to be implemented in fragment shaders, which increased the cost of the final render, mainly because access to the data structure is more costly than with classic 2D parameterizations, as GPUs are extremely efficient at displaying filtered standard 2D textures. Second, it produced a memory overhead in contrast to common 2D texture maps and, as a consequence, the resolution of the octrees was lower, producing lower quality renderings. Tarini et al. [THCM04] proposed a seamless parameterization of a mesh that uses the surface of a polycube, whose shape is roughly similar to that of the given model, as the texture domain. Even if the parameterization is seamless by construction, the user has to build the polycube manually. Once it is generated, if the geometry or topology of the model is too complex, the technique can have problems handling it, as the complexity of the polycube increases too and, as a consequence, so does the memory consumption. Moreover, if the polycube is so simple that it does not capture all the features of the model, visible rendering artefacts may appear. Lefebvre and Dachsbacher [LD07] presented another 3D-like parameterization called TileTrees, with the objective of seamless texturing of a surface without wasted space (empty regions between charts) and with adaptive resolution. This is done by storing square texture tiles in the leaves of an octree surrounding the 3D model. Then, at rendering time, the surface is projected onto the tiles, and the colour is retrieved by regular 2D texture fetches. Finally, Yuksel et al. introduced a novel technique called mesh colours [YKH08], which stores colours at vertices defined over the mesh, with the parameterization defined directly by the mesh itself. Even though all these techniques are seamless (see Figure 9), they are not free of problems: usually, artists paint 3D models with multi-chart parameterizations. So, if we want to use one of these 3D parameterization approaches, a texture transfer process must be applied between the 3D method and the atlas and, as a consequence, some blurring and loss of detail may appear. This happens no matter how many samples are used for the evaluation during the process. Moreover, seams in the original textures will be transferred as artefacts to the new textures, producing an incorrect visualization of the models (see Figure 10).

Seam location and visibility
As seams are responsible for the discontinuities, and given that they cannot be avoided, there exist some methods (see Figure 11, which shows examples from [SH02, KSG03, GP09] and [RNLL10]) that focus on where to place seams to make them less visible: Sheffer and Hart [SH02] present a method that finds places to put seams where they will be least noticeable. A different approach is Ptex, which requires no explicit parameterization, eliminating the need for assigning texture coordinates; as a consequence, it allows the creation of much more efficient texturing pipelines for computer-animated films. Despite its usefulness, Ptex can be considered unfeasible for real-time rendering. As a consequence, in 2009, Gonzalez and Patow [GP09] introduced a technique called continuity mapping to solve the spatial discontinuities present at chart boundaries in texture space. It also avoids the sampling mismatch at chart boundaries by using a set of virtual triangles defined in texture space that connect texels in different charts, providing continuous sampling. Continuity mapping uses the original artist-designed model and parameterization, thus requiring neither any re-parameterization nor the usage of texture transfers. It is a GPU-friendly method evaluated at runtime using a single-pass fragment shader, but it requires extra computations at the shader level. More recently, Ray et al. [RNLL10] proposed a solution to make multi-chart parameterization seams invisible while still outputting a standard texture atlas. They use a global parameterization to produce a set of texture coordinates that align the texel grid across boundaries, introducing, at the same time, some constraints on the colour information at the boundaries. As a result, seams are not visible, at the cost of altering the artist's input texture.
Other techniques (see Figure 12) use some type of blending approach to hide the seams: in lapped textures [PFH00], the perceptibility of seams is reduced by applying alpha-blending at the edges of the pasted texture patches. Losasso and Hoppe [LH04] used textures as a height field and blended heights between mipmap levels. In a similar approach, presented by De Toledo et al. [dTWL08], neighbouring charts must share boundaries with each other, resulting in some overlap between them. The overlapping area is necessary to avoid cracks at rendering time, and depth information is correctly generated to achieve a seamless reconstruction. However, some undesirable artefacts appear in regions with strong curvature, and self-shadows are very difficult to compute. Even though seams can be alleviated in some form, none of these techniques completely eliminates them from the parameterizations. Lefebvre and Hoppe [LH06] presented a method to solve the spatial discontinuities caused by multi-chart parameterization in order to synthesize a texture over a discontinuous atlas. Their technique can only be used for simple simulations in texture space and does not provide any solution for the seam visibility problem in 3D space (sampling discontinuities). Their method works in a similar way to the atlas transition functions defined by Grimm and Hughes [GH95], but differs in that the functions are defined in the surrounding area outside the charts, as also done by Gonzalez and Patow [GP09].

Object space
Much research effort has been devoted in computer graphics to the deformation and manipulation of triangular meshes [BKP*10]. In mesh deformation, we can distinguish two main groups depending on how the deformation is applied to the surface:

• Surface deformation techniques. Surface deformation methods modify the surface of the mesh directly and, as a consequence, they can preserve the details and shape of the mesh quite easily. However, these approaches usually need more computational resources.
• Space deformation techniques. In comparison, space deformation techniques have received much less attention than surface deformation methods. Here, the deformation is applied over a volume or some pre-defined space. The most basic space deformation technique defines a lattice with a rather small number of control points that encloses the model to be deformed. When the control points are displaced, a deformation is induced on the model. The main advantages of these techniques over surface methods are their simplicity and speed. In general, these methods are oblivious to the surface representation and free of discretization errors.
However, in this paper, we follow a more pragmatic approach and group the papers according to whether they present coordinates for mesh deformation or techniques that use these coordinates in some original way.

Coordinates
In recent years, cage-based deformation methods have gained interest and are considered one of the most important mesh deformation approaches. They are characterized by the use of a cage to drive the deformation of an enclosed model (see Figure 13). A cage is a rather low polygon-count polyhedron, which typically has a topology and geometry similar to those of the enclosed object. The first method based on 3D regular lattices was introduced by Sederberg and Parry [SP86]. Later, this method was extended to handle general lattices [Coq90] and LOD management [ST00]. More recently, new deformation methods have been proposed based on the use of coordinates computed with respect to the vertices of a single enclosing cage. Floater and co-workers [Flo03, FH05, JSW05] introduced mean value coordinates (MVCs) (see first image in Figure 14) as a method for constructing an interpolant for closed triangular meshes with a closed-form formulation, which is able to reproduce linear functions. MVCs are well defined and smooth (C^∞) both inside and outside the control mesh, but they are only C^0 continuous across the cage faces. Lipman et al. [LKCOL07] presented an alternative, non-negative coordinate definition, positive mean value coordinates (PMVCs) (see third image in Figure 14); these coordinates are computed numerically using a GPU-friendly approach. Harmonic coordinates (HCs) (see second image in Figure 14) were introduced for character articulation; they are positive and C^∞ continuous inside the cage, C^0 continuous on the boundary and have no definition outside the cage. In contrast to MVCs, HCs are guaranteed to be positive everywhere in the cage interior, while their influence decreases with distance as measured within the control mesh. However, as they have no explicit expression, they force the usage of a multi-grid finite-difference solver to compute the coordinates.
Later, Lipman et al. [LLCO08] proposed a new shape-preserving space deformation approach called Green coordinates (GCs) (see fourth image in Figure 14). The work, motivated by Green's third integral identity, produces conformal mappings in 2D that extend naturally to quasi-conformal mappings in 3D by using both the vertex positions and the face orientations of the cage. GCs are C^∞ continuous inside and outside the cage but discontinuous at its boundary, although they can be extended in a natural analytic manner to the exterior of the cage, allowing the employment of partial cages.
A table summarizing the continuity properties of each of these coordinates is shown in Figure 15. As can be seen, all the coordinates for cage-based approaches present some kind of discontinuity at cage boundaries when deforming the mesh, and some of them are not even defined everywhere.
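As a concrete example of one such coordinate system, the following sketch implements mean value coordinates for a closed polygonal cage in 2D (the planar analogue of the formulation in [Flo03, JSW05]; a minimal version of ours that omits the robustness handling needed when p approaches the cage boundary):

```python
import numpy as np

def mean_value_coords(p, cage):
    """
    2D mean value coordinates of point p w.r.t. a closed polygon 'cage'
    (vertices in counter-clockwise order, shape (n, 2)). For p strictly
    inside, the weights are smooth, sum to 1 and reproduce p exactly.
    """
    d = cage - p                          # vectors p -> v_i
    r = np.linalg.norm(d, axis=1)         # distances r_i = |v_i - p|
    d_next = np.roll(d, -1, axis=0)
    # signed angle alpha_i at p between (v_i - p) and (v_{i+1} - p)
    cross = d[:, 0] * d_next[:, 1] - d[:, 1] * d_next[:, 0]
    dot = np.einsum('ij,ij->i', d, d_next)
    alpha = np.arctan2(cross, dot)
    tan_half = np.tan(alpha / 2.0)
    # w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / r_i, then normalize
    w = (np.roll(tan_half, 1) + tan_half) / r
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
p = np.array([0.3, 0.6])
w = mean_value_coords(p, square)
print(w @ square, p)   # the coordinates reproduce p (linear precision)
```

Inside a single cage these weights vary smoothly in p, which is exactly why single-cage deformations are smooth; the continuity problems reviewed next appear where several such pieces meet.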

Techniques
Jacobson et al. [JBPS11] proposed bounded biharmonic weights, a linear blending scheme that is able to produce smooth, intuitive and flexible deformations for 2D and 3D shapes using handles of different topology (points, lines and cages). Contrary to single-cage-based approaches, it can naturally use partial cages to locally deform a mesh without any special restriction or consideration.
A hierarchical approach based on a set of pre-defined cages was introduced by Zheng et al. [ZFCO*11], where users can group a set of controllers to obtain a hierarchy with which to deform a mesh. This approach is only applicable to man-made models and uses a small set of representative controllers. As the authors mention, they discarded cages as handles given the discontinuities encountered at cage boundaries.
Langer et al. [LBS08] developed a criterion for the construction of smooth maps, called Bézier maps, which are piecewise homogeneous polynomials in generalized barycentric coordinates. To avoid discontinuities, they had to increase the number of control points and the order of the polynomials, thus increasing computational costs. In the work by Ben-Chen et al. [BCWG09], the challenge was to find a harmonic map from a domain in such a way that it satisfies the constraints specified by the user, is detail-preserving and is intuitive to control. Huang et al. [HCLB09] presented a mesh deformation technique using modified barycentric coordinates with a tetrahedral control mesh that avoids first-order discontinuities across the cage boundaries. Finally, even though they do not use cages, Botsch and Kobbelt [BK05] proposed a real-time freeform shape editing technique that allows users to pose modelling constraints directly on the surface.
Another GC-based technique to locally deform a mesh contained in an automatically generated umbrella-shaped cell was presented by Li et al. [LLD*10]. Although their cage is local, they need to bind coordinates for all mesh vertices, thus increasing memory consumption. A hybrid approach that combines surface-based and cage-based deformations was presented by Borosan et al. [BHZN10] but, as they note, it is not smooth at the boundary of the cage, and meshes that are too coarse limit its effectiveness, making their approach suitable only for local deformations. Landreneau and Schaefer [LS10] introduced a Poisson-based method to reduce the storage needs of the coordinates for animated meshes, aiming at making a coordinate system local while still using a single global cage. This technique is based on a set of initial poses, and it only guarantees smooth deformations if a new pose is similar enough to one of the principal ones. This limits the usage of this method to previously known deformations, making it unsuitable for industrial modelling packages.
Recently, Gonzalez et al. [GPCP13] introduced a technique called *Cages (star-cages), a cage-based deformation approach that involves a hierarchical set of cages. First, leaf cages bound the object in a piecewise manner, allowing the creation of localized as well as smooth deformations (no discontinuities at cage boundaries). Then, upper-level cages further allow deformation at multiple levels of detail of the mesh, producing faster deformations and consuming less memory than previous approaches. *Cages can accommodate any coordinate system defined within a cage (i.e. MVC, HC or GC), so in that sense, it can be considered a generalization of previous approaches.
Cage-based deformations have also been applied to planar domains. One related work was introduced by Meng et al. [MSW*09] (see the first image in Figure 16), who designed a method to keep the shape of images during the deformation of a region of interest, but whose continuity depends on the cage coordinates used (MVC, HC or GC). Later, Weber et al. [WBCG09] generalized the concept of barycentric coordinates from real numbers to complex numbers, although this is only applicable to 2D shape deformations (see the second image in Figure 16). In [MZDP11], the authors show how to use cage coordinates (MVC/HC/GC) to deform a 2D image while keeping its original shape. First, they unify the existing deformation coordinates into the concept of cage coordinates (CCs), providing a generalized formulation, and then they propose a GPU-friendly implementation for interactive deformation (see the third image in Figure 16).

Screen space
Once the objects are ready for rendering, the pipeline continues by evaluating their projection on screen and determining the final colour of each pixel. This process can be computed either by direct projection, i.e. directly transforming the object to screen space and rasterizing its surface to determine the affected pixels, or by tracing rays from the eye towards the objects. In both cases, we need to determine a set of sampling points p on the object surfaces and retrieve the corresponding set of attributes a(p). If there are discontinuities in the points p or in the attributes, these discontinuities will unavoidably be translated to screen space. Those discontinuities are not necessarily bad: for instance, consider the discontinuity introduced by an object silhouette, which represents an abrupt change in all the parameters, from depth to the attributes a(p), which will differ from those of the background or any object behind. Once we locate a point p and its attributes, we need to compute its shading, which could involve texture sampling, as already described, and perhaps some non-local computations that might imply tracing further rays (e.g. for reflections, refractions or global illumination computations). Thus, based on the values found during the evaluation of the image pixels, we can group them into clusters of similar features and avoid further computations. In this section, we review techniques that exploit the continuity and smoothness within each cluster by applying interpolation techniques inside it. To simplify further explanations, we split these methods into three sets: those that deal with the reuse of ray traversal data structures, those that simplify complex shading evaluations and those that reuse samples and then apply interpolation techniques.

Ray traversal data structures
State-of-the-art ray-tracing methods rely on acceleration structures to optimize and reduce the number of ray-traversal operations. These structures allow greater flexibility at the expense of inducing storage overhead. Rendering performance depends on the trade-off between the time for querying and updating the data structure, and the reduction in the number and accuracy of the operations. We can classify these techniques as inter- and intra-frame techniques (a minimal reprojection sketch follows this list):

• Inter-frame techniques. Inter-frame acceleration techniques (see Figure 17) try to use information from previous frames for the current one. Data reprojection is one of the most common strategies [NSL*07, SaLY*08]; it exploits the temporal coherence of animation sequences by caching the expensive intermediate shading calculations performed at each frame. However, data reprojection is an inter-frame method that cannot avoid unexpected performance drops during fast movements or drastic view changes, because of the cache misses and shading recomputations in the new view. Moreover, these recomputed results are not updated in the cache, forbidding further reuse by nearby samples within the generation of the same frame.
• Intra-frame techniques. On the other hand, intra-frame acceleration techniques for ray tracing (see Figure 18) usually rely on acceleration structures to speed up the traversal of primary and secondary rays. While this problem has been well studied for CPUs [WMG*07], only a few recent approaches have provided GPU-efficient dynamic ray-traversal acceleration data structures [ZHWG08, LGS*09, GPM11].
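The reprojection sketch announced above (a toy version of the reverse-reprojection idea behind [NSL*07]; matrix conventions, buffer layout and the depth tolerance are our assumptions):

```python
import numpy as np

def reproject(world_pos, prev_view_proj, prev_depth, prev_shading, eps=1e-3):
    """
    Reverse reprojection (toy version): given the world position seen through a
    current-frame pixel, look it up in the previous frame's buffers and reuse
    the cached shading if the depths agree (i.e. the point was visible).
    prev_depth/prev_shading are (H, W) and (H, W, 3) buffers; returns None on miss.
    """
    h, w = prev_depth.shape
    clip = prev_view_proj @ np.append(world_pos, 1.0)
    ndc = clip[:3] / clip[3]                      # perspective divide
    x = int((ndc[0] * 0.5 + 0.5) * w)             # NDC -> pixel coordinates
    y = int((ndc[1] * 0.5 + 0.5) * h)
    if not (0 <= x < w and 0 <= y < h):
        return None                               # off-screen in the last frame
    if abs(prev_depth[y, x] - ndc[2]) > eps:
        return None                               # occluded in the last frame
    return prev_shading[y, x]                     # cache hit: reuse the shading

# Toy usage: an identity "camera" and 2x2 buffers.
vp = np.eye(4)
depth = np.zeros((2, 2)); shading = np.ones((2, 2, 3))
print(reproject(np.array([0.0, 0.0, 0.0]), vp, depth, shading))
```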
In general, most ray-traversal data structures do not reduce the number of traced rays; they just improve the speed of ray traversal and scene hits. However, image-space techniques often show advantages over object-space techniques, because they can decouple algorithmic and scene complexities, which avoids wasting computations on off-screen portions of the scene. Szirmay-Kalos et al. (see Figure 19, left) approximate the hits of secondary rays by looking up an approximated result from an environment map. Yang et al. [YSL08] (see Figure 19, middle) proposed accelerating rendering by using a subsampled image and an edge-preserving (differences in depth and normal) upsampling approach to obtain the final resolution. The main drawback of this method is that it requires processing the scene twice, making it suitable only for pixel-bounded scenes. For ray-traced visualization, this requirement would decrease the possible gain from upsampling the shading evaluations, making it suitable only for rasterized visualizations or very simple ray-traced scenes. It may also show some artefacts when reconstructing the rendering of highly tessellated models, because of the large differences in normals and depths from one pixel to its neighbours. More recently, Novák and Dachsbacher [ND12] (see Figure 19, right) suggested accelerating ray tracing by converting complex meshes into a set of rasterized height fields intersected by simple ray marching. Their performance improvement is more noticeable when highly tessellated meshes are involved (from thousands to millions of triangles), making this method less suitable when simpler models are used, which are normally the ones used for interactive or real-time applications. Moreover, their performance boost is more important for primary rays, as they have a higher degree of coherence than secondary rays: a given ray and its neighbours are going to hit the same region of the scene. Secondary rays lose most of the coherence between them, resulting in a less impressive gain over classic ray tracing.
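A sketch of the edge-preserving upsampling idea (a simplified nearest-valid-sample variant of ours, not the exact filter of [YSL08]): each full-resolution pixel takes the shading of the low-resolution sample whose depth and normal best match its own, so shading is reused within clusters but never across their boundaries.

```python
import numpy as np

def upsample(shading_lo, depth_lo, normal_lo, depth_hi, normal_hi,
             sigma_d=0.1, sigma_n=0.5):
    """
    Edge-aware 2x upsampling: for each high-res pixel, pick among the nearby
    low-res samples the one minimizing a depth + normal dissimilarity.
    """
    H, W = depth_hi.shape
    h, w = depth_lo.shape
    out = np.empty((H, W) + shading_lo.shape[2:])
    for y in range(H):
        for x in range(W):
            best, best_cost = None, np.inf
            cy, cx = y // 2, x // 2
            for ny in (cy, min(cy + 1, h - 1)):
                for nx in (cx, min(cx + 1, w - 1)):
                    cost = (abs(depth_lo[ny, nx] - depth_hi[y, x]) / sigma_d
                            + np.linalg.norm(normal_lo[ny, nx]
                                             - normal_hi[y, x]) / sigma_n)
                    if cost < best_cost:
                        best, best_cost = shading_lo[ny, nx], cost
            out[y, x] = best
    return out

# Toy usage: a depth edge in the middle; shading is never blended across it.
depth_lo = np.zeros((2, 2)); depth_lo[:, 1] = 1.0
depth_hi = np.zeros((4, 4)); depth_hi[:, 2:] = 1.0
normal_lo = np.zeros((2, 2, 3)); normal_hi = np.zeros((4, 4, 3))
shading_lo = np.zeros((2, 2)); shading_lo[:, 1] = 1.0
print(upsample(shading_lo, depth_lo, normal_lo, depth_hi, normal_hi))
```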

Simplifying complex shading evaluations
Automatically simplifying complex shading evaluations improves rendering performance by using less complex shaders in place of the original ones. The first system, proposed by Olano et al. [OKS03], only considered code transformations that replace a texture with its average colour, overlooking many possible opportunities for source-level modifications. On a similar basis, Pellacini [Pel05] considered arbitrary source-level simplifications with a brute-force optimization strategy that can easily miss profitable areas of the shader code. More recently, Sitthi-Amorn et al. [SAMWL11] proposed a more effective technique based on genetic programming optimization. However, while shader simplification provides a clear trade-off between performance and accuracy without additional memory requirements, it cannot reduce the number of ray traversals. Furthermore, shader simplification requires a manual user selection of representative rendering sequences in order to estimate performance and approximation error, while data reuse strategies can be fine-tuned automatically more easily.

Sample reuse and interpolation techniques
There is a set of approaches that reduce the time needed to render a scene mainly based on the reuse and interpolation of samples in homogeneous regions. Akimoto et al. [AMS91] proposed a method that exploits the similarity between adjacent pixels to reduce the number of evaluations. Although they use a simple sampling pattern, their procedure requires several verifications at runtime, which introduces a considerable overhead. Moreover, they use pixel intensities to determine similarity between samples, resulting in texture interpolation problems. Bala et al. [BDT99] proposed a CPU-based system that uses per-surface interpolants to approximate and accelerate radiance computations. They decouple the acceleration of visibility and shading operations by exploiting temporal coherence and interpolating radiance samples. A hierarchical data structure called linetrees is used at runtime, and its maintenance is one of the main drawbacks of this technique, as it is complex and costly. Adamson et al. [AAN05] presented an acceleration method for intersectable models that exploits spatial coherence by adjusting the sampling resolution. Its main drawback lies in the lack of support for popular techniques such as shadows and ambient occlusion, which involve the computation of secondary rays. Moreover, as they shift the ray-scene intersection computations to the CPU, the performance of GPU-based ray tracers may drop due to data transfer operations.

Conclusions
We have presented a summary of the most relevant work connected to continuity and interpolation in three main areas of the rendering pipeline of modern computer graphics applications: texture space (2D), object space (3D) and screen space. In every case, we have focused on the most relevant work related to sampling and interpolation across boundaries, avoiding the discontinuities that frequently appear in these situations. We can see that, in spite of the many valuable efforts in these areas, the basic challenges remain open research problems:

• In texture space, the appearance of seams is unavoidable, as a natural consequence of most parameterizations. In these cases, the real challenge is to make those seams disappear without introducing any distortion in the artist-provided material. Other parameterizations are built to naturally avoid seams, but they require either (potentially harmful) texture transfer operations, or the constraint of using tools specifically tailored for them, thus forcing artists to deviate from their usual mainstream tools. Although the latter might be the best solution for large teams that can afford the extra development effort, for smaller ones it can pose serious problems that usually end up in traditional transfer operations, with a loss in texture quality. Fortunately, this loss can be small, but it is unavoidable whenever attributes must be transferred from one parameterization (the one used by the artist) to another (used by the final rendering system).
• In object space, mesh deformation poses a serious challenge by requiring easy-to-use tools that at the same time allow both global and fine-grained control over the mesh. Up to now, no single-cage method has been able to provide such a degree of control, so multi-cage or hybrid mechanisms are being developed. The main issue is to create deformation systems that do not have discontinuities between the different regions into which a model is divided. On the other hand, skeleton-based mechanisms have proven to be a reliable solution and are widely used in commercial animation software. However, their usage is strictly limited to animation tasks, not allowing a broader kind of mesh deformation beyond those defined by the skeleton itself. Some works combine both approaches, like the work by Jacobson et al. [JBPS11], which introduces a very attractive way to combine the strengths of both worlds.
• In screen space, efforts have focused on exploiting the spatial and temporal coherence of the rendering process to avoid complex calculations, with varying levels of success. First, ray traversal data structures aim to reuse visibility and shading calculations by interpolating parts of the data (usually stored as ray traversal trees). Second, some techniques look for ways to reduce complex shading operations by simplifying the shaders used, based on parameters like the distance to the observer, reducing calculations in barely perceivable areas of the image. Finally, sample reuse and interpolation techniques take advantage of the smoothness observed in homogeneous areas of the screen to reduce the number of evaluations. However, these techniques are not artefact-free, introducing defects that result from interpolating across discontinuous areas in screen space. Another common limitation is that most of these techniques are not general enough to account for all situations, being unable to render important effects like ambient occlusion or, even more significantly, shadows.
All these challenges are non-trivial to solve, and we truly believe that research in each of these areas will provide exciting and strong new results in the future.

Future Perspectives
In this final section, we would like to offer some future perspectives that we foresee for research in the different areas dealt with in this paper.
• When talking about texturing parameterizations, we cannot forget that all texturing tasks start with the work of a dedicated artist, who prefers, in general, to work on 2D texture patches rather than painting directly on the 3D model. Artists also usually have their own preferred unwrapping mechanisms, which allow an almost perfect mapping for some selected features (e.g. a face). However, this imposes a serious constraint on the kind of allowable parameterizations. The conflict appears when we take into account the requirements imposed by modern graphics hardware for real-time rendering, for example the tessellation units, which require triangles (and their attributes) to be provided in certain specific ways. Thus, conflict is unavoidable between these two almost opposite forces: the artists' requirements and the hardware constraints. For the future, we foresee that more research should focus on helping to navigate the 2D/3D duality so that artists can seamlessly go from one to the other in a fully transparent way, with the system taking care of the continuity issues, which cannot be neglected. On the other hand, offline production pipelines do not suffer from the hardware-imposed constraints, but the problems resulting from patch-based parameterizations and their continuity still remain. Using seamless or parameterization-less structures seems to be a good option, but they do not unfold nicely for the 2D painting tasks artists usually perform, and they require specialized tools for their creation. Without those tools, the use of a texture transfer operation is unavoidable, resulting in all the above-mentioned problems.
• In our opinion, mesh deformation will evolve into systems that allow the integration of different mesh deformation techniques, much in the way introduced by *Cages [GPCP13] or bounded biharmonic weights [JBPS11]. That integration will allow the use of segments, vertices and cages, each with a different type of coordinate. Such a hybrid system would provide a much greater degree of freedom for artists, who would benefit from choosing the right tool for the right task (e.g. volume-preserving or simple free-form surface deformations), without caring about the continuity issues that might arise between regions using different deformation mechanisms.
• Finally, we think that in screen space the main challenge is to maximize the exploitation of coherence while avoiding the artefacts introduced by miscalculated frontiers between inhomogeneous regions (i.e. regions with different properties). Also, full support for a general rendering framework is of extreme importance, as modern rendering pipelines use a variety of effects that are crucial for the final result. Arriving at a general, yet efficient and precise (i.e. artefact-free) solution is, in our opinion, the final goal of continuity and interpolation research in screen space.