From the Ruins


This article originally appeared in Immerse, the IIT Madras Magazine on Research in Science and Engineering, published during Shaastra 2014. Click here to access the magazine in full. If you’d like your project to be featured on T5E/Immerse, write to us at t5e.iitm[at]gmail.com.


What happens when technology meets heritage? Good things, it would seem, as evidenced by the Indian Digital Heritage (IDH) project of the Department of Science and Technology (DST). The project aims to create virtual 3D interactive walkthroughs of ancient Indian monuments, starting with the ruins at Hampi, a UNESCO World Heritage Site. Various aspects of the project are being handled by academic and industrial research groups around the country, including one here at IIT Madras.

The group at IIT-M, headed by Professor A. N. Rajagopalan of the Department of Electrical Engineering, is working on creating virtual reconstructions of damaged objects, such as statues, in the ruins. Using techniques from computer vision, the group is attempting to make these damaged statues whole again, ensuring a pleasant walkthrough that lets the user appreciate these magnificent buildings as they were in their heyday, unblemished by the ravages of time. The challenge lies in reconstructing parts of a monument so that they look authentic and fit in seamlessly with the remaining, undamaged parts.

Pratyush Sahay, a former graduate student at IIT-M and a member of this group, explains, “There are three main steps in the process of virtual reconstruction. First, a 3D model of the damaged object has to be created. Then, the missing or damaged portion has to be accurately marked on the model. Finally, the missing portion has to be filled in correctly, with, say, a nose appearing where a nose ought to be, and not a head!” The main contribution of the IIT-M group is in the final step. The first step, however, was prohibitively expensive until a few years ago, when the field of 3D reconstruction made rapid progress. Earlier, one had to use expensive laser range scanners to create digital models of objects; these scanners bounce laser beams off an object and use the pattern of the reflected rays to determine its shape.

“The missing portion has to be filled in correctly, with, say, a nose appearing where a nose ought to be, and not a head!”

Now, however, it can be done with an off-the-shelf camera. Images of the object are taken from various viewpoints, and the technique of triangulation is used to determine the location of points on its surface: rays are projected back from the pixels corresponding to the object’s surface in each image, and the point in space where rays from different views meet gives the 3D location of a surface point.

One might ask: since an image may contain millions of pixels, how do we know which pixels in different images correspond to the same point on the object, and hence which ones to triangulate? One could mark the points manually, but the sheer number of pixels in a high-definition image makes this an extremely tedious task. To overcome this, a method known as corner point detection is used. It identifies pixels corresponding to the same point on the object across the set of images by finding maxima in the gradient map of the image. In the simplest version of this method, the intensity gradient at a pixel is found by taking the difference between the intensity values of the adjoining pixels along both the x- and y-axes. Recent methods compute a richer feature vector over a cluster of pixels, as opposed to just one pixel, which aids corner point detection.
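To make the triangulation step concrete, here is a minimal sketch of the classic direct linear transform in Python with NumPy: given two camera projection matrices and a matched pixel in each view (the kind of correspondence corner point detection provides), it recovers the 3D point where the back-projected rays meet. The camera matrices and coordinates are made-up illustrative values, not data from the IDH project.

```python
# Minimal two-view triangulation sketch (illustrative values only).
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates in two views.

    Each pixel (u, v) under a 3x4 projection matrix P contributes two
    linear constraints on the homogeneous 3D point X; the least-squares
    solution is the singular vector with the smallest singular value.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                        # dehomogenise

# Two toy cameras: same intrinsics, second camera shifted along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.3, -0.2, 5.0])            # a hypothetical surface point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

print(triangulate_point(P1, P2, x1, x2))       # ~ [0.3, -0.2, 5.0]
```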

Typically, an image with a million pixels will have a few thousand corner points, which are triangulated to create what is known as the point cloud of the object. This is just a collection of points floating in space, giving a rough idea of the shape of the object. To move towards a more concrete visualization, a mesh is created from the point cloud by methods such as Delaunay triangulation, in which points are connected by triangles satisfying certain properties, giving a tessellation of sorts of the object’s surface (see image below).

A triangular mesh representing a dolphin. Credit: Chrschn/Wikimedia Commons.
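As a toy illustration of this meshing step, the sketch below uses SciPy’s Delaunay triangulation on a made-up height-field point cloud, where triangulating the (x, y) coordinates yields a valid surface mesh. Real statue scans are not height fields and call for full 3D surface-reconstruction methods, so this only conveys the idea.

```python
# Toy point-cloud-to-mesh sketch: a synthetic height field, meshed by
# Delaunay triangulation of its (x, y) coordinates.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))     # scattered sample locations
z = np.exp(-(xy ** 2).sum(axis=1))         # a smooth bump as the "surface"
cloud = np.column_stack([xy, z])           # the 3D point cloud

tri = Delaunay(xy)                         # triangulate in the plane
print(f"{len(cloud)} points, {len(tri.simplices)} triangles")
# Each row of tri.simplices holds 3 indices into `cloud`,
# i.e. one triangle of the tessellated surface.
```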

Once the mesh has been created, the question naturally arises: how do we know where the damaged portions – let’s call them “holes” from now on, for simplicity – are? Marking the holes directly on the reconstructed 3D object would require pricey, complicated software, so the marking is done on the images themselves. Then, using the back-projection idea explained above, the damaged region on the 3D model can be obtained from these markings.
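A simplified sketch of this transfer might look as follows: project every vertex of the model through the camera matrix and flag those that land inside the marked region of the image. The function and values here are hypothetical, and a real pipeline would also have to handle occlusion (vertices hidden behind the surface).

```python
# Hypothetical sketch: transfer a hole marked in an image onto the model
# by projecting vertices and testing them against the marked pixel mask.
# Ignores occlusion; for illustration only.
import numpy as np

def mark_hole(points, P, mask):
    """points: (N, 3) model vertices; P: 3x4 camera matrix;
    mask: boolean image, True where the user marked damage."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    proj = homog @ P.T                           # homogeneous pixels, (N, 3)
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    h, w = mask.shape
    inside = (0 <= uv[:, 0]) & (uv[:, 0] < w) & (0 <= uv[:, 1]) & (uv[:, 1] < h)
    damaged = np.zeros(len(points), dtype=bool)
    damaged[inside] = mask[uv[inside, 1], uv[inside, 0]]
    return damaged                               # True for vertices in the hole
```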

Hole-filling in digital representations of objects has been done before, most notably in a similar context by Marc Levoy’s group at Stanford, which corrected small imperfections in the 3D scan of Michelangelo’s David. The key difference, however, is that they filled in small regions that were missing only from the 3D model, owing to issues in the laser-scanning process, but were actually present on the statue itself. Their methodology works for small missing regions, not for a hole as large as a nose or a face, because it is based on simple extension of the surface geometry, that is, interpolation. Filling in a large hole that contained a distinctively shaped feature, with no prior information about what the feature was, is infeasible this way. Imagine a missing nose: trying to reconstruct it by simply interpolating from the boundaries of the hole is likely to give an indistinct mess that looks nothing like a nose.

To overcome this, a database of structurally similar models is used, and points from these existing models are put in as estimates inside the hole of the incomplete model. Large databases exist for features such as faces and bodies, and they record the orientation and size of the models they contain, so a model can be appropriately rotated and scaled to match the one being reconstructed. Without this information, a database model cannot be used, since its pose and scale may not match the damaged model. This presents a problem for objects at heritage sites, where large databases with the necessary information do not exist. So, models are taken from statues at the same site or elsewhere and transformed with respect to the damaged model, using robust point cloud registration techniques, to recover their relative orientation and size; the estimation then proceeds as before.
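The core of such a registration step can be sketched as follows: given corresponding points on the database model and the damaged model, the classical Kabsch/Procrustes solution recovers the best-fit scale, rotation, and translation. This is only the alignment core; robust registration methods (ICP and its variants) also have to estimate the correspondences iteratively, which this sketch assumes are known.

```python
# Kabsch/Procrustes alignment sketch: best-fit similarity transform
# between corresponded point sets. Correspondences assumed given.
import numpy as np

def kabsch_align(src, dst):
    """Return scale s, rotation R, translation t minimising
    sum ||s * R @ src_i + t - dst_i||^2 over all corresponded points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                # centre both clouds
    U, S, Vt = np.linalg.svd(A.T @ B)            # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = (U @ D @ Vt).T                           # optimal rotation
    s = (S * [1.0, 1.0, d]).sum() / (A ** 2).sum()   # optimal scale
    t = mu_d - s * R @ mu_s                      # optimal translation
    return s, R, t
```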
A figure of a horse from Hampi (left) and its reconstruction (right). Credits: Prof. A. N. Rajagopalan and Pratyush Sahay.

Once the point cloud with the estimates for the hole is ready, a technique known as tensor voting is used to fill in the hole. The local geometry at each estimated point is determined by collecting “votes” from known, undamaged regions of the object, where a vote is a communication of structural preference. The estimated point that receives the highest number of votes is then included as part of the undamaged region, and the process is repeated. This continues until the entire hole has been filled.
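To give a flavour of the voting computation, here is a heavily simplified sketch: each undamaged point, equipped with a surface normal, casts a “stick tensor” vote at each candidate point, weighted by a Gaussian decay with distance. The votes accumulate into a 3×3 tensor per candidate, and the gap between its two largest eigenvalues measures how strongly the known surface agrees on that candidate. The full formulation also bends the voted orientation along an arc joining voter and receiver; the decay parameter and greedy loop here are illustrative assumptions, not the group’s exact implementation.

```python
# Toy tensor-voting sketch: score hole-filling candidates by the
# surface saliency of their accumulated stick-tensor votes.
import numpy as np

def surface_saliency(known_pts, known_normals, candidates, sigma=0.5):
    """Higher saliency = stronger agreement from the undamaged surface."""
    saliency = np.zeros(len(candidates))
    for i, q in enumerate(candidates):
        T = np.zeros((3, 3))
        for p, n in zip(known_pts, known_normals):
            w = np.exp(-np.sum((q - p) ** 2) / sigma ** 2)  # decay with distance
            T += w * np.outer(n, n)                         # stick-tensor vote
        lam = np.linalg.eigvalsh(T)                         # ascending eigenvalues
        saliency[i] = lam[-1] - lam[-2]                     # "surface-ness"
    return saliency

# Greedy filling would then repeatedly accept the candidate with the
# highest saliency, move it to the known set, and re-vote until the
# hole is full.
```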


A stone carving of Narasimha from Hampi (top left) and its reconstruction (top right). A lion face from Mahabalipuram (bottom left) and its reconstruction (bottom right). Credit: Prof. A. N. Rajagopalan and Pratyush Sahay.

Dr. Rajagopalan’s group used the methodology described above to reconstruct damaged statues at Hampi, with stunning results. The images above show a Narasimha statue pre- and post-reconstruction. The group is currently working on using similar techniques to estimate the intensity pattern of the damaged regions as well. This work is a beautiful example of how beneficial technology can be even for art and history, and one hopes to see more like it from research centres throughout the country.

We would love to hear from you, about this article and the magazine, Immerse. Do write to us at t5e.iitm[at]gmail.com, or leave your comments below.
