Publications

[1]
E. Wong, I. Humphrey, S. Switzer, C. Crutchfield, N. Hui, C. Schurgers, and
R. Kastner, “Underwater depth calibration using a commercial depth camera,”
2022.


Depth cameras are increasingly used in research and industry in underwater settings. However, cameras that have been calibrated in air are notably inaccurate in depth measurements when placed underwater, and little research has been done to explore pre-existing depth calibration methodologies and their effectiveness in underwater environments. We used four methods of calibration on a low-cost, commercial depth camera both in and out of water. For each of these methods, we compared the predicted distance and length of objects from the camera with manually measured values to get an indication of depth and length accuracy. Our findings indicate that the standard methods of calibration in air are largely ineffective for underwater calibration and that custom calibration techniques are necessary to achieve higher accuracy.


Keywords: depth camera calibration, underwater stereo vision
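
The evaluation described in [1] boils down to comparing camera-reported distances and lengths against hand-measured ground truth. A minimal sketch of that comparison is below; the measurement values are placeholders, not data from the paper.

    # Hypothetical evaluation sketch: compare camera-predicted distances against
    # tape-measured ground truth (the values below are placeholders).
    import numpy as np

    predicted_m = np.array([0.52, 1.03, 1.57, 2.11])   # distances reported by the depth camera (m)
    measured_m  = np.array([0.50, 1.00, 1.50, 2.00])   # manually measured ground truth (m)

    abs_err = np.abs(predicted_m - measured_m)
    pct_err = 100.0 * abs_err / measured_m
    print(f"mean abs error: {abs_err.mean():.3f} m")
    print(f"mean percent error: {pct_err.mean():.1f} %")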

[2]
P. Bresnahan, T. Cyronak, R. J. Brewin, A. Andersson, T. Wirth, T. Martz,
T. Courtney, N. Hui, R. Kastner, A. Stern, T. McGrain, D. Reinicke,
J. Richard, K. Hammond, and S. Waters, “A high-tech, low-cost, internet of
things surfboard fin for coastal citizen science, outreach, and education,”
Continental Shelf Research, vol. 242, p. 104748, 2022.


Coastal populations and hazards are escalating simultaneously, leading to an increased importance of coastal ocean observations. Many well-established observational techniques are expensive, require complex technical training, and offer little to no public engagement. Smartfin, an oceanographic sensor–equipped surfboard fin and citizen science program, was designed to alleviate these issues. Smartfins are typically used by surfers and paddlers in surf zone and nearshore regions where they can help fill gaps between other observational assets. Smartfin user groups can provide data-rich time-series in confined regions. Smartfin comprises temperature, motion, and wet/dry sensing, GPS location, and cellular data transmission capabilities for the near-real-time monitoring of coastal physics and environmental parameters. Smartfin’s temperature sensor has an accuracy of 0.05 °C relative to a calibrated Sea-Bird temperature sensor. Data products for quantifying ocean physics from the motion sensor and additional sensors for water quality monitoring are in development. Over 300 Smartfins have been distributed around the world and have been in use for up to five years. The technology has been proven to be a useful scientific research tool in the coastal ocean—especially for observing spatiotemporal variability, validating remotely sensed data, and characterizing surface water depth profiles when combined with other tools—and the project has yielded promising results in terms of formal and informal education and community engagement in coastal health issues with broad international reach. In this article, we describe the technology, the citizen science project design, and the results in terms of natural and social science analyses. We also discuss progress toward our outreach, education, and scientific goals.


Keywords: Coastal oceanography, Citizen science, Surfing, Sea surface temperature, Outreach

[3]
S. Perry, V. Tiwari, N. Balaji, E. Joun, J. Ayers, M. Tobler, I. Ingram,
R. Kastner, and C. Schurgers, “Pyrenote: a web-based, manual annotation tool
for passive acoustic monitoring,” in 2021 IEEE 18th International
Conference on Mobile Ad Hoc and Smart Systems (MASS), pp. 633–638, Oct. 2021.


Passive acoustic monitoring (PAM) involves deploying audio recorders across a natural environment over a long period of time to collect large quantities of audio data. To parse through this data, researchers have worked with automated annotation techniques stemming from Digital Signal Processing and Machine Learning to identify key species calls and judge a region’s biodiversity. To apply and evaluate those techniques, one must acquire strongly labeled data that marks the exact temporal location of audio events in the data, as opposed to weakly labeled data which only labels the presence of an audio event across a clip. Pyrenote was designed to fit the demand for strong manual labels in PAM data. Based on Audino, an open-source, web-based, and easy-to-deploy audio annotation tool, Pyrenote displays a spectrogram for audio annotation, stores labels in a database, and streamlines the labeling process by simplifying the user interface to produce high-quality annotations in a short time frame. This paper documents Pyrenote’s functionality, how this challenge informed the design of the system, and how it compares to other labeling systems.
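
Pyrenote itself is a web application, but the view an annotator labels is essentially a spectrogram of the clip. A minimal sketch of rendering such a spectrogram is below, assuming librosa and matplotlib and a placeholder file name; it is illustrative, not the tool’s implementation.

    # Minimal spectrogram-rendering sketch (Pyrenote is a web app; this only
    # illustrates the kind of view an annotator labels). File name is a placeholder.
    import librosa
    import librosa.display
    import matplotlib.pyplot as plt
    import numpy as np

    y, sr = librosa.load("field_recording.wav", sr=None)            # load clip at its native rate
    S_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
    img = librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="hz")
    plt.colorbar(img, format="%+2.0f dB")
    plt.title("Spectrogram view presented for manual annotation")
    plt.show()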

[4]
C. L. Crutchfield, J. Sutton, A. Ngo, E. Zadorian, G. Hourany, D. Nelson,
A. Wang, F. McHenry-Crutchfield, D. Forster, S. C. Strum, R. Kastner, and
C. Schurgers, “Baboons on the move: Enhancing understanding of collective
decision making through automated motion detection from aerial drone
footage,” in 12th International Conference on Methods and Techniques in
Behavioral Research and 6th Seminar on Behavioral Methods, vol. 1,
pp. 33–39, Oct. 2021.

[5]
P. Tueller, R. Maddukuri, P. Paxson, V. Suresh, A. Ashok, M. Bland, R. Wallace,
J. Guerrero, B. Semmens, and R. Kastner, “FishSense: Underwater RGBD imaging
for fish measurement and classification,” in OCEANS 2021 MTS/IEEE San Diego,
IEEE, Sept. 2021.


There is a need for reliable underwater fish monitoring systems that can provide oceanographers and researchers with valuable data about life underwater. Most current methods rely heavily on human observation, which is both error prone and costly. FishSense provides a solution that accelerates the use of depth cameras underwater, opening the door to 3D underwater imaging that is fast, accurate, cost effective, and energy efficient. FishSense is a sleek handheld underwater imaging device that captures both depth and color images. This data has been used to calculate the length of fish, which can be used to derive biomass and health. The FishSense platform has been tested through two separate deployments. The first deployment imaged a toy fish of known length and volume within a controlled testing pool. The second deployment was conducted within a 70,000 gallon aquarium tank with multiple species of fish. A Receiver Operating Characteristic (ROC) curve has been computed based on the detector’s performance across all images, and the mean and standard deviation of the length measurements of the detections have been computed.
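
A hedged sketch of the length computation this kind of RGBD imaging enables: deproject two annotated pixels (snout and tail) through a pinhole model using their depths, then take the 3D distance between them. The intrinsics, pixel coordinates, and depths below are assumptions, and underwater refraction correction is omitted.

    # Sketch of length-from-RGBD: back-project two annotated pixels (snout, tail)
    # with a pinhole model, then measure the 3D distance between them.
    # Intrinsics, pixels, and depths are placeholders; refraction effects ignored.
    import numpy as np

    fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # assumed pinhole intrinsics (pixels)

    def deproject(u, v, depth_m):
        """Back-project pixel (u, v) with depth (meters) into camera coordinates."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.array([x, y, depth_m])

    snout = deproject(210, 255, 1.42)   # pixel + depth at the fish's snout
    tail  = deproject(415, 260, 1.45)   # pixel + depth at the fish's tail
    print(f"estimated length: {np.linalg.norm(snout - tail):.3f} m")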

[6]
J. Ayers, Y. Jandali, Y. J. Hwang, G. Steinberg, E. Joun, M. Tobler, I. Ingram,
R. Kastner, and C. Schurgers, “Challenges in applying audio classification
models to datasets containing crucial biodiversity information,” in
38th International Conference on Machine Learning, vol. 38, July 2021.


The acoustic signature of a natural soundscape can reveal consequences of climate change on biodiversity. Hardware costs, human labor time, and the expertise dedicated to labeling audio are impediments to conducting acoustic surveys across a representative portion of an ecosystem. These barriers are quickly eroding away with the advent of low-cost, easy-to-use, open-source hardware and the expansion of the machine learning field, which provides pre-trained neural networks to test on retrieved acoustic data. One consistent challenge in passive acoustic monitoring (PAM) is that neural networks which show promising results on publicly available training and test sets are unreliable on audio recordings collected in the field, which contain crucial biodiversity information. To demonstrate this challenge, we tested a hybrid recurrent neural network (RNN) and convolutional neural network (CNN) binary classifier trained for bird presence/absence on two Peruvian bird audio datasets. The RNN achieved an area under the receiver operating characteristic curve (AUROC) of 95% on a dataset collected from Xeno-canto and Google’s AudioSet ontology, in contrast to 65% across a stratified random sample of field recordings collected from the Madre de Dios region of the Peruvian Amazon. In an attempt to alleviate this discrepancy, we applied various audio data augmentation techniques in the network’s training process, which led to an AUROC of 77% across the field recordings.
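
The headline numbers above are AUROC scores for a binary presence/absence classifier. A minimal evaluation sketch using scikit-learn is below; the labels and model scores are placeholders, not the paper’s data.

    # Evaluation sketch: AUROC of a binary presence/absence classifier on a
    # labeled clip set (scores and labels are placeholders, not the paper's data).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])                            # 1 = bird present in clip
    scores = np.array([0.91, 0.40, 0.62, 0.77, 0.55, 0.18, 0.83, 0.31])    # classifier outputs
    print(f"AUROC: {roc_auc_score(labels, scores):.2f}")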

[7]
K. L. Qi, “Mangroves from the sky: Comparing remote sensing methods for
regional analyses in Baja California Sur,” 2021.


Consequences of global warming are causing mangrove migration from tropical habitats towards temperate zones. Forests at range limits and in transition zones are important to monitor in order to promote local management and conservation efforts. The advancement of remote sensing technology in the past decade has allowed more insight into these habitats at large scales, and recent studies using satellite imagery have succeeded in creating baselines for global mangrove extent. However, the high surveying range comes at the cost of reduced resolution, causing gaps in areas with high fragmentation or low canopy height, such as in dwarf mangrove habitats. By using drones, we were able to conduct detailed analyses of canopy height distribution for dwarf mangroves in Baja California Sur. This new model provides a focused approach to analyzing parameters that contribute to the multidimensionality of mangrove forests with primarily remote sensing data. Additionally, improved biomass models were constructed with the drone data and compared against satellite data. Due to its inaccuracies in approximating mangrove extent and canopy height, satellite imagery significantly underestimates aboveground biomass and carbon measurements in this region, and potentially for dwarf mangroves in general. The pairing of satellite and drone imagery allows for a more robust view of mangrove ecosystems, which is critical in understanding their poleward movement with respect to climate change.
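
Canopy-height analyses of this kind are commonly derived by differencing a drone surface model against a terrain model (CHM = DSM − DTM). The sketch below shows that step with rasterio under the assumption of two co-registered GeoTIFFs with placeholder names; it is not the thesis workflow itself.

    # Hypothetical canopy-height-model step: CHM = DSM - DTM for co-registered
    # drone rasters (file names are placeholders, not the thesis data).
    import numpy as np
    import rasterio

    with rasterio.open("drone_dsm.tif") as dsm, rasterio.open("drone_dtm.tif") as dtm:
        chm = dsm.read(1).astype(float) - dtm.read(1).astype(float)

    chm = np.clip(chm, 0, None)                    # treat negative heights as noise
    print(f"mean canopy height: {np.nanmean(chm):.2f} m")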

[8]
N. T. Hui, E. K. Lo, J. B. Moss, G. P. Gerber, M. E. Welch, R. Kastner, and
C. Schurgers, “A more precise way to localize animals using drones,”
Journal of Field Robotics, 2021.


Radio telemetry is a commonly used technique in conservation biology and ecology, particularly for studying the movement and range of individuals and populations. Traditionally, most radio telemetry work is done using handheld directional antennae and either direction-finding and homing techniques or radio-triangulation techniques. Over the past couple of decades, efforts have been made to utilize unmanned aerial vehicles to make radio-telemetry tracking more efficient or cover more area. However, many of these approaches are complex and have not been rigorously field-tested. To provide scientists with reliable, high-quality tracking data, tracking systems need to be rigorously tested and characterized. In this paper, we present a novel, drone-based, radio-telemetry tracking method for tracking the broad-scale movement paths of animals over multiple days, along with its implementation and deployment under field conditions. During a two-week field period in the Cayman Islands, we demonstrated the system’s ability to localize multiple targets simultaneously in daily 10-minute tracking sessions, generating more precise estimates than comparable efforts using manual triangulation techniques.


Keywords: aerial robotics, environmental monitoring, exploration, rotorcraft

[9]
A. J. Hsu, J. Dorian, K. Qi, E. Lo, and B. G. Martinez, “Drone imagery
processing procedure,” in UC San Diego Conferences, UC San Diego,
2021.

[10]
J. G. Ayers, S. Perry, V. Tiwari, M. Blue, N. Balaji, C. Schurgers, R. Kastner,
M. Tobler, and I. Ingram, “Reducing the barriers of acquiring ground-truth
from biodiversity rich audio datasets using intelligent sampling
techniques,” in NeurIPS 2021 Workshop on Tackling Climate Change with
Machine Learning, 2021.

[11]
D. Hicks, R. Kastner, C. Schurgers, A. Hsu, and O. Aburto, “Mangrove ecosystem
detection using mixed-resolution imagery with a hybrid-convolutional neural
network,” in Thirty-fourth Conference on Neural Information Processing
Systems Workshop: Tackling Climate Change with Machine Learning, 2020.


Mangrove forests are rich in biodiversity and are a large contributor to carbon sequestration, which is critical in the fight against climate change. However, they are currently under threat from anthropogenic activities, so monitoring their health, extent, and productivity is vital to our ability to protect these important ecosystems. Traditionally, lower resolution satellite imagery or high resolution unmanned air vehicle (UAV) imagery has been used independently to monitor mangrove extent, with both offering helpful features for predicting mangrove extent. To take advantage of both of these data sources, we propose the use of a hybrid neural network, which combines a Convolutional Neural Network (CNN) feature extractor with a Multilayer Perceptron (MLP), to accurately detect mangrove areas using both medium resolution satellite and high resolution drone imagery. We present a comparison of our novel hybrid CNN with algorithms previously applied to mangrove image classification on a data set of dwarf mangroves that we collected with consumer UAVs in Baja California Sur, Mexico, and show a 95% intersection over union (IOU) score for mangrove image classification, outperforming all our baselines.
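
A hedged PyTorch sketch of the hybrid idea described above: a small CNN extracts features from a high-resolution drone patch, which are concatenated with co-located satellite band values and passed through an MLP. The layer sizes, patch size, and band count are assumptions, not the paper’s architecture.

    # Sketch of a hybrid CNN + MLP classifier: CNN features from a drone image
    # patch are concatenated with co-located satellite band values, then an MLP
    # predicts mangrove / not-mangrove. Sizes and band counts are assumptions.
    import torch
    import torch.nn as nn

    class HybridMangroveNet(nn.Module):
        def __init__(self, satellite_bands=4):
            super().__init__()
            self.cnn = nn.Sequential(                       # drone-patch feature extractor
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            self.mlp = nn.Sequential(                       # fuse CNN + satellite features
                nn.Linear(32 + satellite_bands, 64), nn.ReLU(),
                nn.Linear(64, 2),
            )

        def forward(self, drone_patch, satellite_pixel):
            feats = self.cnn(drone_patch)
            return self.mlp(torch.cat([feats, satellite_pixel], dim=1))

    # Shape check with random stand-ins: 64x64 RGB drone patches, 4 satellite bands.
    model = HybridMangroveNet()
    logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 4))
    print(logits.shape)   # torch.Size([8, 2])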

[12]
Q. K. Gautier, T. G. Garrison, F. Rushton, N. Bouck, E. Lo, P. Tueller,
C. Schurgers, and R. Kastner, “Low-cost 3D scanning systems for cultural
heritage documentation,” Journal of Cultural Heritage Management and
Sustainable Development, vol. 10, no. 4, pp. 437–455, 2020.


Digital documentation techniques for tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), are able to create precise three-dimensional models of excavations with millimeter to centimeter accuracy to complement traditional forms of documentation. However, these techniques require either expensive pieces of equipment or a long processing time that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms for creating digital scans of archaeological excavations.


Keywords: archaeology, cultural heritage, documentation, surveying and recording, mapping

[13]
N. Hui, “Efficient drone-based radio tracking of wildlife,” 2019.


Radio telemetry is a critical technique in conservation ecology, particularly for studying the movement and range of individuals and populations. Traditionally, most radio telemetry work is done using handheld directional antennae and either direction-finding and homing techniques or radio-triangulation techniques. Over the past couple of decades, efforts have been made to utilize aerial vehicles to make radio telemetry tracking more efficient or cover more area. However, many of these approaches require the use of manned aircraft and specialist skill sets. The proliferation of small unmanned aerial systems (SUAS) with high reliability and ease of use, as well as recent development and application of robotic sensing and estimation, opens up the possibility of leveraging SUAS to conduct radio telemetry studies. In this thesis, I present the results of five years of development, as well as the testing and deployment, of a drone-based radio-telemetry tracking system that is able to track multiple targets simultaneously while operating in field conditions as part of a field expedition.


Keywords: drone, radio tracking, SUAS, wildlife telemetry

[14]
A. J. Hsu, E. Lo, J. Dorian, K. Qi, M. T. Costa, and B. G. Martinez, “Lessons
on monitoring mangroves,” in UC San Diego: Aburto Lab, UC San Diego,
2019.

[15]
C. Beluso, A. Xu, E. Patamasing, B. Sebastian, E. Lo, C. Schurgers, R. Kastner,
L. Chen, X. Yu, D. Sturm, and R. Barlow, “D-SEA: The underwater depth
sensing device for standalone time-averaged measurements,” in 2019 IEEE 16th
International Conference on Mobile Ad Hoc and Sensor Systems Workshops
(MASSW), pp. 101–105, 2019.


Access to accurate depth information is important for a wide variety of oceanographic science applications. For example, it is crucial in the creation of 3D models. Currently, divers manually measure depth using dive watches, but this method is inconsistent because of variable depth readings caused by changing wave heights and human error. To combat these problems, we created the Depth-Sensor Enclosed Application (D-SEA) to automatically collect and average pressure data while displaying the calculated depth readings underwater. To use D-SEA, the user places it on top of the area of study, where it measures and gathers underwater depth readings over time. We are working on an affordable, waterproof prototype with a display that is readable underwater, an automatic transition between on and off states when submerged in seawater, and automatic data logging onto an SD card. Testing of the most recent prototype shows that D-SEA lasted for weeks in the sleep state and days in the wake state at depths of up to 4.40 meters.
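
The core computation D-SEA performs is converting time-averaged gauge pressure to depth. A minimal sketch using the hydrostatic relation depth = P / (ρ g) is below; the pressure samples, seawater density, and sampling details are placeholders.

    # Sketch of the pressure-to-depth step: average raw gauge-pressure samples,
    # then apply the hydrostatic relation depth = P / (rho * g).
    # Sample values, density, and sampling details are placeholders.
    import numpy as np

    RHO_SEAWATER = 1025.0   # kg/m^3, nominal seawater density (assumption)
    G = 9.81                # m/s^2

    pressure_pa = np.array([44100.0, 44350.0, 43980.0, 44220.0])   # gauge pressure samples (Pa)
    mean_depth_m = pressure_pa.mean() / (RHO_SEAWATER * G)
    print(f"time-averaged depth: {mean_depth_m:.2f} m")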

[16]
M. P. Epperson, J. A. Rotenberg, G. L. Bryn, E. K. Lo, S. Afshari, R. Kastner,
C. Schurgers, and A. Thomas, “Seeing the forest from the trees: using drone
imagery and deep learning to characterize rainforest in southern Belize,” in
2018 ESA Annual Meeting (August 5–10), ESA, 2018.


Tropical rainforests worldwide are negatively impacted by a variety of human-caused threats. Unfortunately, our ability to study these rainforests is impeded by logistical problems such as their physical inaccessibility, expensive aerial imagery, and/or coarse satellite data. One solution is the use of low-cost Unmanned Aerial Vehicles (UAVs), commonly referred to as drones. Drones are now widely recognized as a tool for ecology, environmental science, and conservation, collecting imagery that is superior to satellite data in resolution. We asked: Can we take advantage of this sub-meter, high-resolution imagery to detect specific tree species or groups, and use these data as indicators of rainforest functional traits and characteristics?

We demonstrate a low-cost method for obtaining high-resolution aerial imagery in a rainforest of Belize using a drone over three sites in two rainforest protected areas. We built a workflow that uses Structure from Motion (SfM) on the drone images to create a large orthomosaic and a deep Convolutional Neural Network (CNN) to classify indicator tree species. We selected: 1) Cohune Palm (Attalea cohune), as it is indicative of past disturbance and current soil condition; and 2) the dry-season deciduous tree group, since deciduousness is an important ecological factor of rainforest structure and function.
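
A hedged sketch of the step that feeds a CNN classifier in a workflow like this: tile the SfM orthomosaic into fixed-size patches. The raster name, tile size, and the classify() call are placeholders, not the study’s pipeline.

    # Sketch of the tiling step that feeds orthomosaic patches to a CNN classifier
    # (e.g., palm / deciduous / other). Raster name, tile size, and the `classify`
    # model are placeholders.
    import rasterio

    TILE = 128   # pixels per square tile (assumption)

    with rasterio.open("orthomosaic.tif") as src:
        mosaic = src.read([1, 2, 3])                      # RGB bands, shape (3, H, W)

    _, height, width = mosaic.shape
    tiles = [
        mosaic[:, r:r + TILE, c:c + TILE]
        for r in range(0, height - TILE + 1, TILE)
        for c in range(0, width - TILE + 1, TILE)
    ]
    print(f"{len(tiles)} tiles ready for CNN classification")
    # labels = [classify(t) for t in tiles]               # hypothetical trained CNN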

[17]
B. Cain, Z. Merchant, I. Avendano, D. Richmond, and R. Kastner, “PynqCopter –
an open-source FPGA overlay for UAVs,” in 2018 IEEE International Conference
on Big Data (Big Data), pp. 2491–2498, 2018.


FPGAs are computing platforms that excel at signal processing, control, networking, and security in a high-performance and power-efficient manner. This makes FPGAs attractive for unmanned aerial vehicles (UAVs), especially as UAVs require smaller payloads and must process multiple high-data-rate input sources (e.g., cameras, lidar, radar, gyroscopes, accelerometers). Unfortunately, FPGAs are notoriously difficult to program, and they require significant hardware design expertise. However, newly released design tools aim to make FPGAs easier to use, which drove the initial hypothesis for this paper: could three undergraduates program an FPGA to control a UAV in 10 weeks? The result of the experiment is PynqCopter – an open-source control system implemented on an FPGA. We created and tested a UAV overlay that is able to run multiple computations in parallel, allowing it to process large amounts of data at runtime.

[18]
D. Webber, N. Hui, R. Kastner, and C. Schurgers, “Radio receiver design for
unmanned aerial wildlife tracking,” in 2017 International Conference on
Computing, Networking and Communications (ICNC), pp. 942–946, 2017.


The use of radio collars is a common method wildlife biologists use to study behavior patterns in animals. Tracking a radio collar from the ground is time consuming and arduous. This task becomes more difficult as collar size and output power decrease to accommodate animals as small as an iguana. Our solution is to fly a low-cost Unmanned Aerial System equipped with a sensitive receiver chain to locate several transponders at once. The challenge is that the system needs to be low cost and able to detect a transponder within a range of tens of feet. Initial ground tests indicate that the system, built for under $100, was able to detect a collar 70 feet away.


Keywords: radio receiver design, unmanned aerial wildlife tracking, radio collars, wildlife biologists, behavior patterns, low cost unmanned aerial system, sensitive receiver chain, transponders

[19]
I. Tolkova, L. Bauer, A. Wilby, R. Kastner, and K. Seger, “Automatic
classification of humpback whale social calls,” The Journal of the
Acoustical Society of America, vol. 141, no. 5, pp. 3605–3605, 2017.


Acoustic methods are an established technique to monitor marine mammal populations and behavior, but developments in computer science can expand the current capabilities. A central aim of these methods is the automated detection and classification of marine mammal vocalizations. While many studies have applied bioacoustic methods to cetacean calls, there has been limited success with humpback whale (Megaptera novaeangliae) social call classification, which has largely remained a manual task in the bioacoustics community. In this project, we automated this process by analyzing spectrograms of calls using PCA-based and connected-component-based methods, and derived features from relative power in the frequency bins of these spectrograms. These features were used to train and test a supervised Hidden Markov Model (HMM) algorithm to investigate classification feasibility. We varied the number of features used in this analysis by varying the sizes of frequency bins. Generally, we saw an increase in precision, recall, and accuracy for all three classified groups, across the individual data sets, as the number of features decreased. We will present the classification rates of our algorithm across multiple model parameters. Since this method is not specific to humpback whale vocalizations, we hope it will prove useful in other acoustic applications.
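
A hedged sketch of the supervised HMM classification idea: fit one Gaussian HMM per call class on sequences of per-frame frequency-bin power features, then label a new call by the model with the highest log-likelihood. It uses hmmlearn with random stand-in features; the feature construction and model settings are assumptions, not the study’s.

    # Sketch of per-class HMM classification: fit one Gaussian HMM per call type
    # on per-frame frequency-bin features, then pick the class whose model gives
    # the new call the highest log-likelihood. Data below are random stand-ins.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    n_bins = 8                                            # coarse frequency bins (assumption)

    def fit_class_model(sequences):
        X = np.vstack(sequences)                          # frames x features
        lengths = [len(s) for s in sequences]
        model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    train = {
        "call_A": [rng.random((40, n_bins)) for _ in range(5)],
        "call_B": [rng.random((40, n_bins)) + 0.5 for _ in range(5)],
    }
    models = {label: fit_class_model(seqs) for label, seqs in train.items()}

    new_call = rng.random((40, n_bins)) + 0.5             # unlabeled call features
    predicted = max(models, key=lambda label: models[label].score(new_call))
    print(f"predicted class: {predicted}")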

[20]
D. E. Meyer, M. De Villa, I. Salameh, E. Fraijo, R. Kastner, C. Schurgers, and
F. Kuester, “Rapid design and manufacturing of task-specific autonomous
paragliders using 3D printing,” in 2017 IEEE Aerospace Conference,
pp. 1–9, 2017.


This paper explores a paraglider unmanned aerial vehicle (UAV) concept, using rapid design and payload manufacturing techniques to achieve task-specific functions. Autonomous fixed-wing, multi-rotor, and mono-rotor vehicles require prolonged durations of design, manufacturing, and tuning to obtain reliable UAVs. Using 3D printing on the meter scale, we are able to rapidly integrate sensors and alternative payloads into the suspended fuselage of the paraglider. Additive manufacturing has allowed complex designs to be created that provide greater strength and versatility at lower cost compared to traditional machining methods, and has allowed us to produce weekly prototypes for testing. The latest parafoils have yielded higher airspeeds and stable collapse-recovery behavior, making them interesting for UAV use beyond dirigible parachutes. The pendulum nature of the platform is self-stabilizing and allows the discrete proportional-integral-derivative (PID) controller to adapt to mass alterations of the suspended body. We describe modular designs, stabilization algorithms, and applications in the imaging of cultural heritage sites for conservation.
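
A minimal sketch of a discrete PID update of the kind mentioned above, applied to a heading-hold example; the gains, setpoint, and loop rate are placeholders rather than the vehicle’s tuning.

    # Minimal discrete PID controller (heading hold as an example).
    # Gains, setpoint, and timestep are placeholders, not the paper's tuning.
    class DiscretePID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = DiscretePID(kp=1.2, ki=0.1, kd=0.05, dt=0.02)     # 50 Hz loop (assumption)
    command = pid.update(setpoint=90.0, measurement=84.0)    # desired vs. measured heading (deg)
    print(f"control output: {command:.2f}")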

[21]
R. Yeakle, P. Naughton, R. Kastner, and C. Schurgers, “Inter-node distance
estimation from ambient acoustic noise in mobile underwater sensor arrays,”
in OCEANS 2016 MTS/IEEE Monterey, pp. 1–8, 2016.


As the number of units in underwater sensor arrays grows, low-cost localization becomes increasingly important to maintain network scalability. Methods using ambient ocean noise are promising solutions because they require neither external infrastructure nor expensive on-board sensors. Here we extend past work in stationary array element localization from correlations of ambient noise to a mobile sensor array [1]. After obtaining inter-node distance estimates using ambient noise correlations, these distances can be used to determine a relative localization of an array of mobile underwater sensor platforms without introducing any external infrastructure or on-board localization sensors. In this work we explore the effects of receiver mobility on inter-node distance estimation via correlations of ambient acoustic noise. Through analysis and simulation, we develop an exact solution, along with a more tractable approximation, for the peak amplitude of the Time-Domain Green’s Function between the two mobile receivers, which provides an estimate of their spatial separation. We demonstrate that the mobile noise correlation amplitude at the time delay for a sound wave traveling from one receiver to the other can be modeled with the wideband ambiguity function of a single sound source. We then use this approximation to discuss the selection of design parameters and their effects on the noise correlation function.
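
The underlying distance estimate rests on the fact that the peak of the ambient-noise cross-correlation between two receivers occurs near their inter-node travel time, so distance ≈ delay × c. The sketch below demonstrates that step on synthetic signals with a nominal sound speed; the paper’s mobility and ambiguity-function analysis is not reproduced.

    # Sketch of the core idea: the peak of the noise cross-correlation between two
    # receivers occurs at the inter-node travel time, so distance = delay * c.
    # Synthetic signals and the nominal sound speed are stand-ins for real data.
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    fs = 10_000                  # sample rate (Hz), assumption
    c = 1500.0                   # nominal sound speed in seawater (m/s)
    true_delay_s = 0.040         # 60 m separation -> 40 ms travel time

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(fs * 2)                      # shared ambient noise field
    rx1 = noise + 0.1 * rng.standard_normal(noise.size)
    rx2 = np.roll(noise, int(true_delay_s * fs)) + 0.1 * rng.standard_normal(noise.size)

    corr = correlate(rx2, rx1, mode="full")
    lags = correlation_lags(rx2.size, rx1.size, mode="full")
    delay_s = lags[np.argmax(corr)] / fs
    print(f"estimated separation: {abs(delay_s) * c:.1f} m")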

[22]
A. Wilby, E. Slattery, A. Hostler, and R. Kastner, “Autonomous acoustic
trigger for distributed underwater visual monitoring systems,” in
Proceedings of the 11th ACM International Conference on Underwater Networks
& Systems, WUWNet ’16, (New York, NY, USA), Association for Computing
Machinery, 2016.


The ability to obtain reliable, long-term visual data in marine habitats has the potential to transform biological surveys of marine species. However, the underwater environment poses several challenges to visual monitoring: turbidity and light attenuation impede the range of optical sensors, biofouling clouds lenses and underwater housings, and marine species typically range over a large area, far outside of the range of a single camera sensor. Due to these factors, a continuously-recording or time-lapse visual sensor will not be gathering useful data the majority of the time, wasting battery life and filling limited onboard storage with useless images. These limitations make visual monitoring difficult in marine environments, but visual data is invaluable to biologists studying the behaviors and interactions of a species. This paper describes an acoustic-based, autonomous triggering approach to counter the current limitations of underwater visual sensing, and motivates the need for a distributed sensor network for underwater visual monitoring.


Keywords: autonomous monitoring, underwater cameras, acoustic triggering, biological surveys
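
A hedged sketch of the trigger concept: compute short-window RMS energy from the hydrophone stream and fire the camera when it crosses a threshold. The synthetic signal, window length, threshold, and capture stub are placeholders, not the system’s detector.

    # Sketch of an energy-threshold acoustic trigger: fire the camera when the
    # short-window RMS of the hydrophone signal exceeds a threshold.
    # Synthetic signal, window, threshold, and capture stub are placeholders.
    import numpy as np

    fs = 48_000                          # hydrophone sample rate (Hz), assumption
    window = fs // 10                    # 100 ms analysis window
    threshold_rms = 0.2                  # trigger level (placeholder units)

    def capture_image(t_s):
        print(f"camera triggered at t = {t_s:.2f} s")

    rng = np.random.default_rng(2)
    signal = 0.05 * rng.standard_normal(fs * 3)
    signal[fs:fs + window] += 0.5 * np.sin(2 * np.pi * 2000 * np.arange(window) / fs)  # loud event at t = 1 s

    for start in range(0, signal.size - window, window):
        chunk = signal[start:start + window]
        if np.sqrt(np.mean(chunk ** 2)) > threshold_rms:
            capture_image(start / fs)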

[23]
A. Wilby, R. Kastner, A. Hostler, and E. Slattery, “Design of a low-cost and
extensible acoustically-triggered camera system for marine population
monitoring,” in OCEANS 2016 MTS/IEEE Monterey, pp. 1–9, 2016.


As the health of the ocean continues to decline, more and more marine populations are at risk of extinction. A significant challenge facing conservation biologists is effectively monitoring at-risk populations in the difficult underwater environment. Obtaining visual data on a marine species typically requires significant time spent by human observers in the field, which is both costly and time-consuming and often yields a small amount of data. We present a low-cost, acoustically-triggered camera system to enable remote monitoring and identification of marine populations.

[24]
D. E. Meyer, E. Lo, S. Afshari, A. Vaughan, D. Rissolo, and F. Kuester,
“Utility of low-cost drones to generate 3D models of archaeological sites
from multisensor data,” The SAA Archaeological Record, vol. 16, no. 2,
pp. 22–24, 2016.


With the emergence of low-cost multicopters on the market, archaeologists have rapidly integrated aerial imaging and photogrammetry with more traditional methods of site documentation. Unmanned Aerial Vehicles (UAVs) serve as simple yet transformative tools that can rapidly map archaeological sites.

The ancient Maya port site of Conil is located along the Laguna Holbox of northern Quintana Roo, Mexico. Established as early as 200 B.C., Conil supported a sizable population well into the Colonial period (Andrews 2020). Initial excavations were conducted by William T. Sanders (1955, 1960). Conil appears to have played a significant role in facilitating coastal trade along the northern coast of the Yucatan Peninsula. The aim of the aerial surveying was to obtain an accurate Digital Elevation Model (DEM) of the site that could be compared to a model that was created using a ground total station (Glover 2006).

[25]
T. G. Garrison, D. Richmond, P. Naughton, E. Lo, S. Trinh, Z. Barnes, A. Lin,
C. Schurgers, R. Kastner, S. E. Newman, et al., “Tunnel vision: Documenting
excavations in three dimensions with lidar technology,” Advances in
Archaeological Practice, vol. 4, no. 2, pp. 192–204, 2016.


Archaeological tunneling is a standard excavation strategy in Mesoamerica. The ancient Maya built new structures atop older ones that were no longer deemed usable, whether for logistical or ideological reasons. This means that as archaeologists excavate horizontal tunnels into ancient Maya structures, they are essentially moving back in time. As earlier constructions are encountered, these tunnels may deviate in many directions in order to document architectural remains. The resultant excavations often become intricate labyrinths, extending dozens of meters. Traditional forms of archaeological documentation, such as photographs, plan views, and profile drawings, are limited in their ability to convey the complexity of tunnel excavations. Terrestrial Lidar (light detection and ranging) instruments are able to generate precise 3D models of tunnel excavations. This article presents the results of a model created with a Faro™ Focus 3D 120 Scanner of tunneling excavations at the site of El Zotz, Guatemala. The lidar data document the excavations inside a large mortuary pyramid, including intricately decorated architecture from an Early Classic (A.D. 300–600) platform buried within the present form of the structure. Increased collaboration between archaeologists and scholars with technical expertise maximizes the effectiveness of 3D models, as does presenting digital results in tandem with traditional forms of documentation.

[26]
D. Meyer, M. Hess, E. Lo, C. E. Wittich, T. C. Hutchinson, and F. Kuester,
“UAV-based post-disaster assessment of cultural heritage sites following the
2014 South Napa earthquake,” in 2015 Digital Heritage, vol. 2,
pp. 421–424, 2015.


On Sunday, August 24, 2014, the American Canyon (South Napa) Earthquake occurred at 3:20 a.m. local time with a moment magnitude MW of 6.1, causing damage with an estimated economic impact of over one billion dollars. Many historic landmarks were severely damaged; some were too dangerous to access, while others were simply difficult or impossible to reach quickly by conventional means. This paper explores semi-automatic surveying techniques using unmanned aerial vehicles (UAVs) to support immediate post-earthquake perishable data collection and damage assessment in the context of a case study of the Trefethen Family Vineyard. This case study examines the methods used to create 3D models through Structure from Motion, which uses photogrammetric data to recreate the geometry of the site being imaged. The ability to rapidly recreate accurate models of real-world objects using images from low-cost drones underlines the increasing feasibility of using UAVs for emergency response scenarios.

[27]
G. A. M. D. Santos, Z. Barnes, E. Lo, B. Ritoper, L. Nishizaki, X. Tejeda,
A. Ke, H. Lin, C. Schurgers, A. Lin, and R. Kastner, “Small unmanned aerial
vehicle system for wildlife radio collar tracking,” in 2014 IEEE 11th
International Conference on Mobile Ad Hoc and Sensor Systems, pp. 761–766,
2014.


This paper describes a low-cost system for tracking wildlife equipped with radio collars. Currently, researchers have to physically go into the field with a directional antenna to try to pinpoint the VHF (very high frequency) signal originating from a wildlife tracking collar. Depending on the terrain, it can take an entire day to locate a single animal. To vastly improve upon this traditional approach, the system proposed here utilizes a small fixed-wing drone with a simple radio on board, flying an automated mission. Received signal strength is recorded and used to create a heat map that shows the collar’s position. A prototype of this system was built using off-the-shelf hardware and custom signal processing algorithms. Initial field tests confirm the system’s capabilities and its promise for wildlife tracking.


Keywords: signal to noise ratio, global positioning system, finite element analysis, wildlife, hardware, receivers, aircraft, autonomous aerial vehicles, directional antennas, radio tracking, signal processing, small unmanned aerial vehicle system, wildlife radio collar tracking, VHF signal, custom signal processing algorithms, small fixed-wing aircraft drone, wildlife telemetry, software-defined radio, digital signal processing
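
A hedged sketch of the heat-map step: bin received-signal-strength samples by the drone’s position onto a grid and treat the strongest cell as a rough collar estimate. The positions, the toy path-loss model, and the grid size are synthetic placeholders, not the deployed system’s algorithm.

    # Sketch of the heat-map step: bin RSSI samples by drone position on a grid
    # and treat the strongest cell as a rough collar estimate. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    east = rng.uniform(0, 200, 500)                          # drone easting samples (m)
    north = rng.uniform(0, 200, 500)                         # drone northing samples (m)
    collar = np.array([120.0, 80.0])                         # hidden "true" collar location
    dist = np.hypot(east - collar[0], north - collar[1])
    rssi = -40 - 20 * np.log10(dist + 1) + rng.normal(0, 2, dist.size)   # toy path-loss model (dB)

    bins = np.linspace(0, 200, 21)                           # 10 m grid cells
    sums, _, _ = np.histogram2d(east, north, bins=[bins, bins], weights=rssi)
    counts, _, _ = np.histogram2d(east, north, bins=[bins, bins])
    mean_rssi = np.divide(sums, counts, out=np.full_like(sums, -np.inf), where=counts > 0)

    i, j = np.unravel_index(np.argmax(mean_rssi), mean_rssi.shape)
    print(f"strongest cell center: ({bins[i] + 5:.0f} m E, {bins[j] + 5:.0f} m N)")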