This paper examines the methods available for implementing real-time deformation in computer game environments. We analyse a number of published methods and examine how they can be used to produce real-time deformation in future game developments.
1. Introduction
To computer game players, the environment in which they play is all too often no more than an interesting cage. In the real world our environment affects our actions: it would not be wise to sit in a chair made of paper. But in a computer game the connection between visual properties and physical properties is far weaker. In the past this was due to the processing power needed to calculate mathematical physics. Recently this barrier has begun to fall with the creation of middleware like MathEngine, which realistically models physics for games.
1.1 Related Work
However, one aspect of the real world is still poorly represented in games. In the real world, if you fired a missile into a stone wall, large chunks of stone would be ripped off and scattered across the surrounding area, leaving a hole.
The deformation properties of the real world have not yet been successfully reproduced in game environments.
There are four main methods for building deformable worlds in games: height maps, voxels, polygon Constructive Solid Geometry (CSG) operations and implicit surfaces. In the following sections we discuss the advantages and disadvantages of each method when used to produce deformations in games.
1.1.2 Voxels
A game world could be built from implicit surfaces. At present the calculation of implicit surfaces is slow, but an implicit world would be entirely dynamic and deformable: walls could flow and bend, holes could be cut into any object within the world, and liquids could flow across undulating surfaces. Considerable work has been done by Marie-Paule Cani and her collaborators on the modelling and deformation of implicit surfaces in real-time applications.
A voxel is a point in space drawn as a pixel or small cube. Environments or objects can be defined by voxels. They are especially useful for creating natural or organic objects like landscapes and plants. Through procedural modelling voxels can display fractal and noisy
patterns, imitating those found in nature. A few computer games use voxel modelling to implement expansive gaming environments, including the "Delta Force" series and "Outcast" (see figure 2), both for the PC. These two games were set in large outdoor environments where voxels were used to create realistic grass and mountains, which is very hard to achieve using conventional polygon modelling techniques. Voxels are inefficient, however, for modelling artificial objects such as buildings; in "Delta Force" the buildings were built from polygons.
Figure 2: Outcast uses voxels to represent its vast environments.
Voxels are memory intensive and difficult to model with. However voxels could simulate deformable real world objects better than any other modelling technique. A voxel is like a LEGO brick. To build a better representation of the real world requires more, smaller LEGO bricks. If a voxel object requires deformation (e.g. a hole made in it) we delete or move the voxels that define the space of the hole.
However, creating geometric objects from voxels is inefficient. A sphere can be represented accurately as a polynomial surface (for example with NURBS), or approximated with polygons; defining a sphere as voxels is unattractive. Imagine how many LEGO bricks are required to build a smooth-looking LEGO sphere.
Ignoring the smoothness problem for now, how would we build a sphere from voxels? This question is answered in , the basic method being as follows.
This method of deformation has not been applied in a gaming environment due to the processing power needed to calculate the boolean operations. However one game developer, Volition, is producing a first-person game called "Red Faction" (see figure 4) which uses real-time polygonal deformation. This CSG code is called Geo-Mod (Geometric Modification) and was written by John Slagel.
Figure 4: Red Faction by Volition uses real-time CSG/boolean operations, allowing the player to shoot through walls. Note the hole in the pillar in this picture.
Because game companies do not release the specifics of their code, it is impossible to say exactly how Geo-Mod works; however it probably uses a method similar to that presented in , as follows.
Given two objects, object1 and object2, each face of object1 must be compared with all the faces of object2 to determine what portion of this face lies inside object2. This is repeated but with each face of object2 compared against all faces of object1. Then the required parts of these faces are kept depending on what sort of boolean operation is being performed, thus creating the new modified object.
A face_to_face_intersection algorithm determines where new vertices are to be created; these are then topologically sorted. To determine how two intersecting faces are to be divided, the face_to_face_intersection algorithm requires an edge_to_face algorithm. Using the plane equation (ax + by + cz + d = 0) to represent the polygon/face, we can determine where the point of intersection lies along the tested edge. We must also check that this intersection point lies inside the polygon, by projecting the edge onto the polygon.
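As a sketch in C++, the edge_to_face step might look like this (the names and structure are illustrative, not taken from any published implementation):

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Intersect the edge p0->p1 with the plane ax + by + cz + d = 0,
// where n = (a, b, c) is the plane normal. Returns the intersection
// point only if it lies within the edge segment.
std::optional<Vec3> edgeToFace(const Vec3& p0, const Vec3& p1,
                               const Vec3& n, double d) {
    Vec3 dir = { p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2] };
    double denom = dot(n, dir);
    if (std::fabs(denom) < 1e-12) return std::nullopt;  // edge parallel to plane
    double t = -(dot(n, p0) + d) / denom;               // parametric distance along edge
    if (t < 0.0 || t > 1.0) return std::nullopt;        // intersection outside segment
    return Vec3{ p0[0] + t * dir[0], p0[1] + t * dir[1], p0[2] + t * dir[2] };
}
```

A full implementation would then project the returned point onto the polygon to confirm it lies inside the polygon's boundary, as described above.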
But how could implicit surfaces be rendered? One method would be raytracing. If the ray passes from inside to outside of the implicit surface at a given point then the surface must be drawn at that point. This method would produce a very accurate and smooth surface but at present is too slow for use within a game engine. Another method of rendering the implicit surface is polygonization.
The best method for the polygonization of an implicit surface was presented by Jules Bloomenthal in .
First the space around the implicit surface must be broken down into cubes. This can be done in two ways: continuation and exhaustive search. In continuation, a starting point is picked on the surface and cubes are then created away from the starting point along the surface. Exhaustive search methods break down the whole of space into cubes (for example the marching cubes algorithm).  uses the continuation method as it requires O(n^2) function evaluations, whereas exhaustive search requires O(n^3) (where n is a measure of the size of the surface): exhaustive search must evaluate a volume, while continuation need only evaluate a surface area. However, exhaustive search may produce a more accurate polygonization if the surface is not continuous. For example, if parts of the implicit surface have become disconnected, the continuation method would not find them; the exhaustive search method would, as it breaks down all of the space occupied by the surface into cubes, including the gaps (see figure 6).
If a point is hit by a missile, the height of that point is lowered. This method would allow for some very interesting planet texturing. A point's position relative to the planet's surface is contained in the height field, and this could easily be translated into texture co-ordinates. The planets could have different-coloured internal layers.
The main disadvantage of using a height map in planets3D would be that as a point's height gets lower it will eventually be pushed out through the other side of the planet (see figure 7).
Figure 7: If a surface point is displaced too far it passes out of the far side of the sphere.
To produce holes that pass all the way through a planet using height maps would require a cutting function. This would edit the planet vertex list to delete vertices that had been displaced all the way through the planet. Then new vertices are added where required on the opposite side of the planet thus forming a 'tunnel'.
The CSG method of deformation would be implemented with simple spheres. The boolean operation would be performed on the planet object and a hole object. This hole object would probably be a sphere but could be any shape. The speed of the CSG operation is determined by the complexity of the objects used to perform the boolean operation.
The benefit of using polygonal deformation with the method outlined above is that the game world can be built using a traditional 3D modeller (for example Maya). However as planets3D is built from spheres, polygon deformation may be overly complicated. There is a more appropriate alternative.
In planets3D each planet could be defined as an implicit sphere.
Figure 8: In this screenshot from planets3D player 1 has blown considerable chunks from the Earth.
If a planet is hit, the vertex list for that planet needs to be updated, so the new implicit function must be repolygonized. Each planet starts as one positive sphere; every time it is hit, the program adds a negative sphere (representing a hole) to the planet's list. These holes can be made bigger or smaller depending on the force of the impact. The program then calls the Bloomenthal polygonizer , passing to it the planet to polygonize and the size of the cubes used to break down the surface: a larger cube size means a less detailed planet and a faster redraw time. The Bloomenthal code then recreates the vertex list for that planet, which can be drawn on the next frame. This polygonization process takes about a second, depending on the speed of the processor being used.
The rest of the game engine is fairly simple with an idle function waiting for key presses and then calling a draw function that redraws the world. If the player fires a projectile the program calls the fired function which redraws the progress of the projectile until one of the conditions discussed previously is met and the function exits.
Collision detection between the projectile and the planets is calculated by taking the projectile's current position and its position on the previous frame and plotting a line between these two points. This line is subdivided a predefined number of times, and the resulting points are tested against the planet. If a point is found to be outside the main positive sphere of the planet, the program knows that the projectile has not hit the planet at that point.
When OpenGL draws that polygon it interpolates the texture co-ordinates all the way across the texture instead of over one edge and into the following tile of the texture. When drawn the planet has the entire texture compressed into a thin strip running down one of its sides. This can be fixed by adding 1.0 to the low horizontal texture co-ordinates of any face found to have both high AND low texture co-ordinates. This shifts the low co-ordinates into the following texture tile. OpenGL then interpolates over the correct small area across two texture tiles.
Figure 9: OpenGL interpolates these three co-ordinates all the way across the texture even though they are only one face wide. They should be interpolated in the other direction, i.e. into the adjacent tile.
planets3D then takes the texturing one step further, allowing the inside of the planet to be coloured differently from the outside (see figure 10). This is done by dedicating a small strip at the bottom of the texture file to a desired colour (molten orange in the case of planets3D). The program works out whether the point to be tested is on the surface or inside it.
If it is on the surface, the point's vertical texture co-ordinate is bound between 0.1 and 1.0, thus excluding the inside colour strip. If the point is found to be inside the surface, its vertical co-ordinate is bound between 0.0 and 0.1, thus excluding the planet-surface part of the texture.
What about making a future game world from implicit surfaces? We would need new modelling tools (Maya and Max would need adapting) to allow artists to produce implicit surface environments. With implicit surfaces we could easily put holes in anything, and soft objects could be simulated fairly easily. The problem with implicit surfaces, however, is that they are not suited to producing flat, hard-edged surfaces like walls.
The ideal gaming environment of the future should use all three methods. Buildings and artificial objects could be built from polygons and deformable with polygon CSG operations. Plants could be made of voxels, whether these were massive oak trees or vast fields of grass. They could be modelled procedurally, using fractal geometry. Soft, curved objects and possibly water would be made from implicit surfaces, allowing them to blob and flow through the game world.
This environment excludes the possibility of an as yet uninvented alternative (an alternative that will probably be invented). Games of today all too often fall into the trap of trying to mimic one aspect of reality. They should be artistically pulling the player into a new way of thinking and seeing. The physical properties of the game world may be one important part of this, and the choice of modelling technique could define those properties. A game wall is normally two triangles because the wall is static. What if that wall could change its natural state? It could flow away, or bend and twist. The game player needs her point of view turning upside down and inside out before game worlds become real worlds with no rules.
4. Future Developments
planets3D is a turn-based game, the main consideration for this being that a small amount of time is needed to repolygonize the implicit surfaces. This takes place during the turn-switching phase. However, by experimenting with the various detail options within the game it is possible to get the polygonization process to run unnoticeably fast (real-time), especially on a high-end processor. The implications of this are important. In terms of planets3D, the game could be developed to run in real time, allowing for a faster, more arcade-like game where the players continuously try to shoot each other.
The fact that the polygonization can run in real time (albeit at low detail levels at present) means that it is possible for games, on a wider scale, to use implicit surfaces for visualisation.
Figure 11: planets3D finished.
Rob Edwards (Tutor)
Peter Comninos (Tutor)
References
 Claudio Montani and Roberto Scopigno. Spheres-to-Voxels Conversion. Graphics Gems. Consiglio Nazionale delle Ricerche, Pisa, Italy.
 Hongsheng Chen and Shiaofen Fang. A Volumetric Approach to Interactive CSG Modelling and Rendering. Department of Computer and Information Science, Indiana University, Purdue University, Indianapolis.
 Prof. Peter Comninos. Notes on the Anima II System. 3D Computer Animation. National Centre for Computer Animation. Bournemouth University, Bournemouth, UK.
 Eric Ferley, Marie-Paule Cani and Jean-Dominique Gascuel. Practical Volumetric Sculpting. iMAGIS a joint research project of CNRS/INRIA/UJF/INPG. iMAGIS-GRAVIR/IMAG, BP 53, 38041 Grenoble cedex 9 France.
1.1.1 Height Maps
1.1.3 Polygon CSG Operations
This method of deformation was used most notably about six years ago in "Starfighter 3000" (see figure 1). This game allowed the player to fly her space ship around and shoot holes through mountains. Although the deformation was not vital to the game it made it more fun to play.
Figure 1: Starfighter 3000 uses deformable height maps.
A terrain model can be represented by a rectangular grid of quadrilaterals. To generate this grid we need to store the heights of the grid vertices in an array of the same size, called the height field.
In the case of "Starfighter 3000" this was an array of 512x512 elements. The heights stored in this array can be altered at runtime, thus altering the shape of the terrain. For example, if the player shoots at a hill, the grid square where the missile hits has its height lowered and the surrounding squares have their height adjusted proportionally.
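The height-field deformation described above can be sketched as follows. The linear fall-off and the class layout are illustrative assumptions; the actual adjustment rule used by "Starfighter 3000" is not published.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Lower the struck grid square by `depth` and lower the surrounding
// squares in proportion to their distance from the impact.
struct HeightField {
    int size;
    std::vector<float> h;  // size*size heights, row-major
    explicit HeightField(int n) : size(n), h(n * n, 0.0f) {}
    float& at(int x, int y) { return h[y * size + x]; }

    void crater(int cx, int cy, int radius, float depth) {
        for (int y = std::max(0, cy - radius); y <= std::min(size - 1, cy + radius); ++y)
            for (int x = std::max(0, cx - radius); x <= std::min(size - 1, cx + radius); ++x) {
                int dist = std::max(std::abs(x - cx), std::abs(y - cy));
                // full depth at the impact point, fading towards the rim
                at(x, y) -= depth * (1.0f - float(dist) / float(radius + 1));
            }
    }
};
```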
Each element of the texture map stores the colour that relates to the height of the corresponding element of the height map. This allows the program to work out what texture to use based on the height of the square. In the case of "Starfighter 3000" the grid squares were textured from snow to lake depending on the value of the element. The texture artist can add extra features (like roads) to the texture map.
Divide the sphere into horizontal slices and then convert each slice to voxels. To work out how far from the centre of the sphere each new voxel should be created, Pythagoras' theorem should be used (see figure 3). In OpenGL all the voxels would be stored as an array or linked list of three-dimensional points; a draw function could step through the list and draw each point as a cube or sprite. When a hole is made in the voxel sphere, the points around the hole are deleted from the list. An extension of this effect would be not to delete those hole voxels but instead to animate them, giving the effect of an explosion.
Figure 3: The pixels located for a circle of radius R = 8.
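The slicing method can be sketched in C++ as follows; the `step` spacing parameter and the ring-based placement are illustrative assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Voxel { double x, y, z; };

// Voxelize a sphere by horizontal slices: Pythagoras' theorem gives each
// slice's radius, and voxels are placed around that circle.
std::vector<Voxel> voxelizeSphere(double R, double step) {
    const double PI = 3.14159265358979323846;
    std::vector<Voxel> out;
    for (double y = -R; y <= R; y += step) {
        double sliceR = std::sqrt(std::max(0.0, R * R - y * y));  // Pythagoras
        int count = std::max(1, int(2.0 * PI * sliceR / step));   // voxels in this ring
        for (int i = 0; i < count; ++i) {
            double a = 2.0 * PI * i / count;
            out.push_back({ sliceR * std::cos(a), y, sliceR * std::sin(a) });
        }
    }
    return out;
}
```

Making a hole is then just a matter of removing from the list every voxel that falls inside the hole volume.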
Further methods for the implementation of Volumetric/Voxel CSG conversions are presented in .
To texture voxels, an image map could be projected onto them. Alternatively a procedural method could be used, with the renderer determining each voxel's colour based on its position in 3D space.
Boolean operations can be performed on polygonal objects using a series of intersection algorithms. Using the method presented in  a polygonal object can be cut into or added onto using other polygonal objects.
Modern computer game environments are built from polygons. These polygonal worlds could be deformed using CSG operations, for example to produce holes in walls.
1.1.4 Implicit Surfaces
1.1.5 Deformation for planets3D
An implicit surface is a surface defined by a density function. A polygonal sphere requires a list of vertices to define its surface but this only gives the surface to a fixed resolution. Using an implicit surface, we would only need the sphere's centre and radius to calculate if a given point is inside or outside the sphere.
The implicit surface function of a sphere with radius r and centre C is F(P) = |P - C| - r, where P is the point to be tested. The function returns a density for the point P, where negative values are inside the surface and positive values are outside the surface.
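As a minimal sketch of this density function in C++:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Density of a point P against the implicit sphere (centre C, radius r):
// negative inside the surface, zero on it, positive outside.
double sphereDensity(const Vec3& P, const Vec3& C, double r) {
    double dx = P.x - C.x, dy = P.y - C.y, dz = P.z - C.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
}
```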
An implicit function can be used to describe any surface. This can be as simple as a sphere or as complex as a human figure (see figure 5). However implicit surfaces are suited to describing curved surfaces and not to geometric objects like buildings.
Figure 5: This torso was built entirely from implicit functions (see ) which are blended together to produce the smooth surface. This process of blending implicit functions is sometimes called blobby objects or metaballs.
Figure 6: This implicit surface produced in  uses the marching cubes algorithm to break down the implicit surface into cubes for polygonization. However, it only breaks down areas that have been modified, thus speeding up the process of decomposition.
Then these cubes can be broken down into tetrahedra if a more accurate surface is required. However the tetrahedra method is slower as it requires more function evaluations.
All the edges of the cubes/tetrahedra are tested against the surface to determine the point of intersection with that edge. This is done using binary subdivision with a fixed number of iterations. Binary subdivision finds the intersection point by taking two points known to lie either side of the surface (the two ends of the tested edge). Then these points are subdivided and the halfway point tested to determine if it is inside or outside the surface. This new point is then taken with the opposite end and the edge split again. This is repeated until the edge has been split enough times to give a fairly accurate approximation of the intersection point. This intersection point determines where a vertex should exist for that particular edge. All the vertices determined for a cube/tetrahedron are joined up to form a face and stored in memory. This list of vertices can be drawn by a separate function.
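Reduced to a single parameter t along the tested edge, the binary subdivision can be sketched as follows (the names are illustrative):

```cpp
#include <cmath>

// Binary subdivision along an edge: `tInside` and `tOutside` are the
// parametric positions of the two edge ends, known to lie on opposite
// sides of the surface. Halve the interval a fixed number of times,
// keeping the half that still straddles the surface.
template <typename Field>
double bisectEdge(Field density, double tInside, double tOutside, int iterations) {
    for (int i = 0; i < iterations; ++i) {
        double mid = 0.5 * (tInside + tOutside);
        if (density(mid) < 0.0)   // negative density: still inside the surface
            tInside = mid;
        else
            tOutside = mid;
    }
    return 0.5 * (tInside + tOutside);
}
```

Each iteration halves the uncertainty in the intersection point, so a small fixed number of iterations gives a good approximation.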
Given the four methods of deformation outlined above, which would be the best for producing a deformable world in planets3D?
2. Implementation
If a height map was to be used then the height information in each array element would represent the distance of that point from the centre of the sphere/planet.
If a hole is added to the planet then a negative implicit sphere is added to the total implicit function of that planet. The user can go on adding holes and consequently implicit spheres as required.
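One way to combine the positive planet sphere with its negative hole spheres is CSG-style subtraction on the signed densities, f = max(planet, -hole). The exact combination rule used by planets3D is not stated in the text, so the following is an illustrative sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Sphere { double cx, cy, cz, r; };

// Signed density against one sphere: negative inside, positive outside.
double sphereField(double x, double y, double z, const Sphere& s) {
    double dx = x - s.cx, dy = y - s.cy, dz = z - s.cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - s.r;
}

// Combine the positive planet sphere with its negative hole spheres
// using CSG subtraction on signed densities (an assumed rule).
double planetField(double x, double y, double z,
                   const Sphere& planet, const std::vector<Sphere>& holes) {
    double f = sphereField(x, y, z, planet);
    for (const Sphere& h : holes)
        f = std::max(f, -sphereField(x, y, z, h));  // carve the hole out
    return f;
}
```

Adding a hole is then simply appending another Sphere to the `holes` list before the planet is repolygonized.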
planets3D uses implicit surfaces to define the deformable planet objects. The player can shoot holes into and through the planets, which are then re-polygonized when hit. Implicit surfaces would not be the best method of producing a fully deformable game: they are suited to blobby, clay-like objects rather than rigid, simple objects like walls. However, the polygonization code has already been proven to work and is robust, so it is perfectly suited to putting holes in spheres for planets3D without much extra work (special thanks to James Fletcher , who updated Bloomenthal's code  to use the object-based features of C++).
This section describes how planets3D is written and how the implicit surface code is implemented within it.
2.1 Texturing of Implicits
planets3D is written in C++ and uses OpenGL and Glut, and is therefore platform independent. It has been tested on Microsoft Windows and SGI Irix systems. planets3D is based on an old 2D game that followed exactly the same rules. It is a two-player, turn-based game: each player takes a turn to angle her ship and fire, trying to hit the other player's ship with her projectile. In between the two ships are a variable number of planets which exert gravity on the fired projectile, causing it to curve as it passes through space. This is where the challenge lies, as the player must judge the curved path needed to hit the other ship. If a player hits the other player's ship she wins that game; the program raises her score by one and resets the world. The planet positions and sizes are set randomly every time the world is reset. If the projectile goes too far from the world origin, or has been active for too long, the current player's turn ends.
If the projectile hits a planet then a crater/dent/hole is put into the planet. Holes can be subsequently added to any planet so that tunnels can eventually form. Whole sections of planets can be blown away (see figure 8). This adds interest to the game as a player could win by creating shots that pass through and loop round numerous planets.
If the point is found to be inside the main positive sphere of the planet and inside any of the negative hole spheres of the planet then the program knows that the projectile has not hit the planet for that point. If the point is found to be inside the main positive sphere and outside all the negative hole spheres of the planet then the program knows that the projectile has hit that planet. A new hole sphere should then be added to the planet list and the planet repolygonized. A more accurate method of calculating intersections (and therefore collisions) with implicit surfaces is presented in .
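The whole collision test can be sketched as follows (the sample count and names are illustrative assumptions):

```cpp
#include <vector>

struct Point { double x, y, z; };
struct Ball  { Point c; double r; };

static bool insideBall(const Point& p, const Ball& b) {
    double dx = p.x - b.c.x, dy = p.y - b.c.y, dz = p.z - b.c.z;
    return dx * dx + dy * dy + dz * dz < b.r * b.r;
}

// Subdivide the segment between the projectile's previous and current
// positions, and report a hit if any sample lies inside the planet's
// positive sphere but outside every negative hole sphere.
bool projectileHit(const Point& prev, const Point& cur,
                   const Ball& planet, const std::vector<Ball>& holes,
                   int samples = 8) {
    for (int i = 0; i <= samples; ++i) {
        double t = double(i) / samples;
        Point p{ prev.x + t * (cur.x - prev.x),
                 prev.y + t * (cur.y - prev.y),
                 prev.z + t * (cur.z - prev.z) };
        if (!insideBall(p, planet)) continue;  // not in the planet at this sample
        bool inHole = false;
        for (const Ball& h : holes)
            if (insideBall(p, h)) { inHole = true; break; }
        if (!inHole) return true;              // in solid rock: a hit
    }
    return false;
}
```

A fast projectile can skip through a thin planet wall between two frames; increasing the sample count trades speed for accuracy.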
3. Implications of Deformation in Games
Extensive research into the texturing of implicit surfaces is presented in . It involves the implicit object being broken down into patches that can be interactively textured. Further work is presented in .
The planets in planets3D
are textured using the following method. As the surface is polygonized, the program takes the current point to be tested and calculates its spherical surface co-ordinates. The vertical spherical co-ordinate can vary between 0 and 180 degrees and the horizontal spherical co-ordinate between 0 and 360 degrees. These are converted to values between 0 and 1 and stored alongside the vertex's Cartesian co-ordinates; they then become the UV co-ordinates used by OpenGL to access the texture. The values are calculated by:

horizontal co-ord = arctan(z/x) / (2*PI)
vertical co-ord = arcsin(y/radius) / PI + 0.5
where x,y,z are the co-ordinates of the point to be tested and radius
is the radius of the planet.
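As a sketch in C++; note the use of atan2 rather than a plain arctan, so that the horizontal co-ordinate covers the full 0 to 360 degree range, shifted into 0..1:

```cpp
#include <cmath>

struct UV { double u, v; };

// Spherical texture co-ordinates for a surface point (x, y, z) on a
// planet of the given radius, both mapped into the range 0..1.
UV sphereUV(double x, double y, double z, double radius) {
    const double PI = 3.14159265358979323846;
    double u = std::atan2(z, x) / (2.0 * PI) + 0.5;  // horizontal (0..360 degrees)
    double v = std::asin(y / radius) / PI + 0.5;     // vertical (0..180 degrees)
    return { u, v };
}
```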
There is one known bug with this method, associated with the way OpenGL interpolates texture co-ordinates. An example planet polygon has horizontal texture co-ordinates of 0.4, 0.41 and 0.45.
This polygon would be textured with the correct, small part of the texture. However, say we had a polygon of three vertices with horizontal texture co-ordinates of 0.1, 0.2 and 0.9 (see figure 9). All the polygons running down one side of the planet have texture co-ordinates equivalent to this second example polygon. This is due to the nature of the maths: the values returned by arctan wrap around at one side of the planet.
Figure 10: An example of a textured planet in planets3D. Note that the inside is textured differently to the outside.
One thing that appears guaranteed about computers in the near future is that they will get faster. Assuming we had a computer of considerable processing power, what would the ideal game world look, or indeed feel, like? What if the game environment was built from voxels, each one representing a molecule or even an atom? In this game world the player could shoot holes in anything (if a future human has any desire to shoot holes in things at all). Liquids could be built from voxels. However, methods would be needed to control all these voxels in some mathematical, logical and realistic manner, which is no easy undertaking. Memory also presents a problem for the all-voxel method: at least one bit of storage is needed to represent each voxel. If we had a universe made of voxels then we would need as many voxels as there are atoms in the real universe, and consequently the entire real universe would be taken up representing the computer game world.
5. Conclusion
Our future game world could be like the game worlds of today, built entirely from polygons, albeit in much higher detail. Objects could be deformed using polygon CSG methods as in "Red Faction". But polygons are not suited to modelling liquids or plants. As yet, nobody has produced a convincing polygonal tree within a game environment, due to the large number of polygons needed. This isn't to say it would be impossible (we are discussing a future computer with considerably more power); it's just not the best way of making a tree.
How could this polygonization process be sped up, though? One method is presented in  and involves polygonizing only the part of the implicit surface that has changed. For a game like planets3D this would allow faster running but may not be ideal, as each planet's polygon list would steadily grow longer. For a first-person game like "Red Faction", however, it would be ideal.
One interesting improvement to the polygonization process would be an adaptive cube size. The polygons are calculated using a specified size of cube, but what if this cube size could be changed dynamically by the polygonizer? The program could examine the density functions and determine whether a high or low detail level was needed, then change the cube size for that part of the surface appropriately.
There are many other, smaller improvements that could be made to planets3D to make the game more fun to play. These include: rotating planets; power-ups (e.g. a laser sight); networked play allowing two players on different computers; more than two players per game; a split-screen mode with a panel for each player; and many other simpler graphical improvements.
As a finished game, planets3D (see figure 11) is fun to play and successfully uses implicit surfaces to allow the deformation of planets in space.
From the research presented in this report it can be seen that the scope exists for real time deformation in games. It is already being used by games like "Red Faction". This report has shown that implicit surfaces are a viable option for deformation in a simple game like planets3D. For a more complex game they are suited more to the sculpting, animation and deformation of soft, malleable objects.
However for implicit surfaces to be a viable real-time option, some development is needed to speed up the polygonization process.
There is a great deal of room for creative development in computer games and it is hoped that the game player of the future finds her gaming world a little more interactive than those of today.
 E. Ferley, Marie-Paule Cani and Jean-Dominique Gascuel. Virtual Sculpture. EUROGRAPHICS '99. iMAGIS/GRAVIR-IMAG, 220 rue de la chimie BP53, 38041 Grenoble cedex 9, France.
 Jules Bloomenthal. An Implicit Surface Polygonizer. Visual Information Technologies, George Mason University, Fairfax, VA 22030-4444.
 James Fletcher. Fletchercallin Ltd. Manchester, UK.
 Tom Duff. Interval Arithmetic and Recursive Subdivision for Implicit Functions and Constructive Solid Geometry. AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, New Jersey, 07974.
 Hans Kohling Pedersen. Decorating Implicit Surfaces. Department of Computer Science, University of North Carolina at Chapel Hill. The author can be reached at Centre for Integrated Systems, Stanford University, Stanford, CA 94305-4070.
 Fabrice Neyret and Marie-Paule Cani. Pattern-Based Texturing Revisited. iMAGIS/GRAVIR-IMAG. www-imagis.imag.fr/TEXTURES/
planets3d and this report are available at: