In the dense jungle canopies of Central America and the Yucatan, much of what remains of the Maya civilization is hidden from view. While important archaeological sites have been discovered and are now iconic tourist destinations, an understanding of how the Maya used the land – the extent of their settlements, the organization of towns and spheres of influence – remains largely unknown. Remote sensing technologies that can see through the thick canopy exist, but they are prohibitively expensive. Fortunately, the convergence of aerial drones and lightweight advanced sensor packages such as LiDAR has opened the door to creating the disruptive technology needed to explore the hidden secrets of the Maya. One goal of this project is to assemble a drone-based remote sensing device suited to the jungle.
Even already-discovered Maya ruins are hidden under dense jungle. As a consequence, many sites that are not popular tourist destinations, yet are instrumental to understanding Maya culture, are seen by very few people. Items found during excavation are typically claimed for preservation in museums, but large finds such as temple structures, by virtue of being buildings, cannot be shown to a wider audience. Our goal is to change how archaeological finds are shared by exploring digital methods for documentation and visualization.
This project is divided into three main parts:
- Aerial Surveys: To fly a drone over the jungle and use LiDAR technology to map the terrain.
- Tunnel Mapping: To build prototypes of hand-held 3D scanners to generate 3D models of tunnel excavations of ancient Maya temples.
- Virtual Reality Visualization: To create an immersive visualization of digital models of archaeological sites and artifacts.
The Tunnel Mapping project aims to lower the cost of digital documentation by experimenting with data collection methods. These methods include stereo-panoramic cameras, LiDAR, and experimental remote sensing systems based on the Microsoft Kinect camera or the Google Tango tablet. In prior expeditions to Guatemala, we brought a ground-based LiDAR system to make high-resolution scans of large excavated temples. One method we are particularly excited about is Structure from Motion (SfM), a low-cost technique for generating 3D models from photos taken with a traditional camera. This group is seeking highly motivated individuals to build data collection infrastructure for expeditions.
Below is a fly-through video created from a composite point cloud generated from 50 LiDAR scans.
This video shows a point cloud generated from photographs of a stucco mask inside an excavated temple on the site.
Virtual Reality Visualization
We seek to expand distribution by creating immersive visualizations of the many 3D models that we have collected over the years. We are looking for any person motivated by VR environments, video game creation, 3D modeling, and 3D point cloud manipulation.
Visualization of Structure M7-1 in El Zotz, Guatemala.
Part of the Engineers for Exploration Maya Archaeology Project
Created using point clouds from the Faro Focus 3D 120 and Leica BLK360.
Structure from Motion was used to collect material samples representing the environment and to capture color information from key elements.
Data Acquisition through Remote Scanning
Terrestrial Light Detection and Ranging (LiDAR)
The process for Terrestrial LiDAR Scanning (TLS) generally starts with setting the desired scan density, which in effect controls the number of points collected and the time needed per scan. For the Leica BLK360 TLS, medium quality was sufficient for our purposes and took 3 minutes per scan. Scan stations were placed a meter or more apart, depending on whether all of the environment's geometry could be reached from the previous scan's location. The set of individual point clouds was then registered using visual alignment in Leica's proprietary software, Cyclone Register 360. The output is a set of mutually aligned point clouds in the standard .ptx file format, which can then be brought into external software for further 3D point cloud and mesh processing.
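Conceptually, the registration step boils down to finding, for each scan, the rigid transform that maps its local coordinate frame into a shared frame, then merging the transformed clouds. The following is a minimal numpy sketch of that idea; the scans, the offset between stations, and the recovered transform are all made up for illustration:

```python
import numpy as np

def apply_rigid_transform(points, T):
    """Apply a 4x4 homogeneous rigid transform to an (N, 3) point array."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ T.T)[:, :3]

# Two toy "scans" of the same corner feature, taken from stations ~1 m apart.
scan_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
scan_b = scan_a - np.array([1.2, 0.0, 0.0])  # scan B's local frame is offset

# Registration (here done by hand) recovers the transform that maps
# scan B's frame into scan A's frame.
T_b_to_a = np.eye(4)
T_b_to_a[:3, 3] = [1.2, 0.0, 0.0]

# Merging the aligned clouds yields one composite point cloud.
merged = np.vstack([scan_a, apply_rigid_transform(scan_b, T_b_to_a)])
```

In practice the transforms come from the registration software's visual alignment rather than being written down by hand, and each exported .ptx carries its scan's pose so downstream tools can do exactly this merge.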
Structure from Motion (SfM)
SfM-derived point cloud models store accurate RGB data at a higher resolution, but with lower geometric accuracy. Because the RGB information collected during SfM has greater accuracy, it can be used to project color information back onto the mesh generated by LiDAR. Fine registration with the Iterative Closest Point (ICP) algorithm brings the two models into the same coordinate space.
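For intuition, a bare-bones point-to-point ICP alternates two steps: match each source point to its nearest neighbour in the target, then solve the closed-form (Kabsch/SVD) least-squares rigid fit for those matches. This is a toy numpy sketch on synthetic data, not the actual pipeline used on the M7-1 models:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=20):
    """Point-to-point ICP: iterate nearest-neighbour matching and Kabsch."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for toy-sized clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic test: dst is src rotated 10 degrees about z and slightly shifted.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
a = np.deg2rad(10)
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0,          0,         1]])
dst = src @ Rz.T + np.array([0.05, -0.02, 0.01])

aligned = icp(src, dst)
```

Production tools use spatial indexes instead of the brute-force distance matrix, plus outlier rejection, but the alternation above is the core of the algorithm.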
Performing SfM on larger environments can be time-consuming, both in collecting and in processing the data. Added constraints include keeping the scene lighting consistent and obtaining enough coverage and overlap between photos of the entire scene.
We therefore conceptually decompose an environment into its most abundant components and perform SfM on those components under controlled lighting, producing high-detail surface scans suitable for environmental texturing. The M7-1 excavation was classified into two materials: the limestone stucco of the general structure, and the natural elements comprising the walls and ceiling of the excavation. This surface information can then be fed into newer high-level development tools that use machine learning to upscale and synthesize material variations.
Aerial Light Detection and Ranging (LiDAR)
The Foundation for Maya Cultural and Natural Heritage (PACUNAM) funded a LiDAR initiative that surveyed a portion of the Guatemalan jungle, including the El Zotz region of the Protected Biotope San Miguel La Palotada. The LiDAR rapidly emits pulses of light, each of which can intersect multiple surfaces on its way down. Each intersection returns a signal whose time of flight yields a range, and from that a 3-dimensional coordinate and classification can be assigned to each point. The ground points can then be extracted to reveal the terrain, producing an accurate landscape that serves as a canvas for aligning TLS and SfM scans into a broader environment.
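The geometry behind each return is simple: the round-trip time of flight t gives a range d = c·t/2 along the pulse direction, and the last return of a pulse is the usual ground candidate. A small illustrative sketch (the flight altitude and ranges here are invented, not survey values):

```python
# Range from a LiDAR return's round-trip time of flight: d = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_seconds):
    """Convert a round-trip time of flight to a one-way range in meters."""
    return C * t_seconds / 2.0

def point_from_return(origin, direction, t_seconds):
    """3D coordinate of a return along a unit-length pulse direction."""
    d = range_from_tof(t_seconds)
    return tuple(o + d * u for o, u in zip(origin, direction))

# A canopy hit and a ground hit from the same pulse, fired straight down
# from 500 m altitude: the later (last) return is the ground candidate.
origin, down = (0.0, 0.0, 500.0), (0.0, 0.0, -1.0)
canopy = point_from_return(origin, down, 2 * 470.0 / C)  # 470 m range
ground = point_from_return(origin, down, 2 * 500.0 / C)  # 500 m range
```

Real ground classification is more involved (it filters last returns against a progressively densified terrain model rather than trusting them blindly), but the per-point coordinate computation is this direct.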
The Guatemala project is a collaboration between several organizations: Jason Paterniti (GEOS), Tom Garrison (USC), Edwin Roman-Ramirez (UT Austin), and Ryan Kastner, Albert Lin, and Curt Schurgers (UCSD, E4E, QI, NatGeo).
The project started in 2014 with two expeditions to Guatemala. Albert, Ryan, Curt, Perry Naughton, Eric Lo, and Dominique Meyer traveled to Guatemala in February to evaluate quadcopters as a method of surveying jungle-bound archaeological sites. The second expedition, in May, focused on data gathering and survey techniques at several archaeological sites, some of which contained Maya tombs. Curt and Ryan returned to Guatemala and brought Dustin Richmond, David Dantas, and Sabrina Trinh. In June 2016, PhD student Quentin Gautier focused on testing remote sensing tablet technology, and in June 2017, Curt, Quentin, and PhD student Peter Tueller went to the field to test a prototype portable 3D scanner for collecting data in the archaeological excavations. Starting in Fall 2018, Quentin spearheaded the project's shift to VR and began reaching out to TritonXR club members to help recreate these locations. The following summer, in 2019, Quentin, along with Giovanni Vindiola, returned to El Zotz, interviewed our collaborators Tom Garrison (UT Austin) and Edwin Roman-Ramirez (UT Austin) for narrations, and performed further scanning of structures and surfaces for our VR experience.
Leaders / Contact
- Nathan Hui (email@example.com)
In the Press
Calit2 Newsroom: Capturing Ancient Maya Sites from Both a Rat’s and a ‘Bat’s Eye View’ (Credit Tiffany Fox)
International Business Times: Drones, Lasers Help Archaeologists Study Ancient Mayan Ruins Hidden In Guatemala Jungle