EXPLORATION THROUGH TECHNOLOGY
Engineers for Exploration (E4E) is a one-of-a-kind program centered on multidisciplinary and collaborative student research projects with the broad goals of protecting the environment, uncovering mysteries related to cultural heritage, and providing experiential learning for undergraduate and graduate students. We team student engineers with scientists from a wide range of disciplines, such as ecology, oceanography, and archaeology. Students create new technologies to aid these scientists in their work and then accompany them on field deployments around the world. Projects train students in embedded systems and software, machine learning, electronics integration, mechanical design, system building, as well as project management and team leadership.
Interested in joining?
SPONSOR OUR RESEARCH
Your support allows us to involve more students and to continue our research on existing and future projects. Donating is easy, and even a small amount can make a big impact. Donate to UCSD E4E.
Introducing FishSense Scout
Factors like geographical location, water conditions, and ocean depth can present significant challenges for diver-operated fish ecosystem surveys. To overcome these hurdles, remotely operated vehicles (ROVs) have emerged as valuable tools for collecting fish data in lesser-documented areas of the ocean. Previous E4E systems, like FishSense Lite, target fishery monitoring through citizen science. With the introduction of Scout, our horizons broaden beyond waters safe for human deployment. Beyond increasing access to those under-documented waters, Scout also aims to reduce the need for trained experts in the process and to allow longer deployments over broader areas, working toward more streamlined data collection. Scout explores deploying the existing fish detection, length estimation, and identification pipeline on embedded edge computing devices paired with underwater camera technology for real-time computation of fish length and species.

With a new underwater camera loaned to us by Deep Water Exploration (DWE) that is specifically designed to correct for underwater aberrations, our first objective for Scout has been to collect and analyze data from this camera. We began by reviving an old watertight enclosure from a previous FishSense Lite build, along with its Jetson TX2, for collecting camera calibration data.

DWE's StellarHD Aquagon, specifically designed to correct for underwater optics, with a laser mount attached

To calibrate and test the camera's performance underwater, we have performed weekly pool tests for data collection. Students Theo Darci-Maher and Nykolas Rekasius took the camera, in the old FishSense enclosure, into a pool and collected data for camera calibration. Students Annabelle Chen and Sijan Shrestha worked on the pool deck with their PhD mentor, Chris Crutchfield, to provide support and debug the camera and the TX2.
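Why this calibration footage matters can be sketched with a simple two-term radial (Brown-Conrady) distortion model: distortion grows with distance from the optical center, so points near the frame corners shift far more than points near the middle. A toy sketch in Python, where the coefficients are made-up placeholders rather than values for this camera:

```python
import math

def radial_displacement(x, y, k1, k2):
    """Displacement (in normalized image coordinates) of point (x, y)
    under a two-term Brown-Conrady radial distortion model."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return math.hypot(x * scale - x, y * scale - y)

# Hypothetical coefficients, for illustration only.
k1, k2 = -0.12, 0.02

center_shift = radial_displacement(0.05, 0.05, k1, k2)  # near the optical center
corner_shift = radial_displacement(0.9, 0.6, k1, k2)    # near a frame corner
```

Because the shift scales roughly with the cube of the distance from the center, imaging the target in every corner of the frame captures worst-case distortion that a center-only dataset would miss.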
Theo handling the enclosure, pointing the camera towards the calibration board

Example footage captured from the test, featuring Nyk holding the calibration board

After this initial pool test, we analyzed the collected camera footage and determined the need for a better calibration target. During our second round of testing, we therefore used a segmented calibration board rather than a checkered one, making sure to position it in each corner so that we could later quantify the amount of distortion occurring at the edges of the frame. Since a fair amount of color distortion was visible in the previous test (see above), we also incorporated a color card into the calibration. And since motion blur was a possible concern in the results of the first test, we weighed the enclosure down and moved the calibration boards in front of it rather than moving the enclosure itself.

The segmented calibration board, positioned to capture distortion in the lower left corner of the frame

The color card positioned at the center of the frame

After these first few pool tests, the TX2 stopped booting due to a flash error. In response, we swapped the TX2 out for Qualcomm's Rubik Pi 3 and began our work towards on-device computation. Because of the board's substantial power draw, and hence the heat it generates, our next pool tests focused on how best to manage the heat of the Pi within the underwater enclosure. Theo adapted the existing enclosure to accommodate the size and structure of the new board and its accompanying battery pack. To characterize the maximum amount of heat the board could generate, we ran CPU stress tests in different physical environments. During these tests, we monitored the onboard temperature and analyzed that data to determine the most effective and feasible option for heat transfer given the existing enclosure.
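The temperature analysis behind these stress tests comes down to simple time-series bookkeeping: sample the onboard sensor at a fixed interval, then report the peak and the time taken to cross a thermal limit. A minimal sketch with synthetic data, where the interval, threshold, and warm-up curve are illustrative rather than our measured values:

```python
import math

def summarize_thermal_log(temps_c, interval_s, threshold_c):
    """Return (peak temperature, seconds until the log first crosses
    threshold_c, or None if it never does)."""
    peak = max(temps_c)
    t_cross = None
    for i, temp in enumerate(temps_c):
        if temp >= threshold_c:
            t_cross = i * interval_s
            break
    return peak, t_cross

# Synthetic 30-minute log sampled every 60 s: exponential warm-up
# from 35 C toward ~70 C (an illustrative curve, not measured data).
log = [35 + 35 * (1 - math.exp(-i / 8)) for i in range(31)]
peak, t_cross = summarize_thermal_log(log, interval_s=60, threshold_c=60)
```

Comparing these summaries across physical setups (bare board, board against the endcap, board with a heat spreader, and so on) is what lets the team rank heat-transfer options for the enclosure.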
The Rubik Pi 3 and its power bank attached to the enclosure's endcap

The Rubik Pi enclosure setup, along with the camera

Plotted data from a 30-minute stress test

Future work with Scout includes further stress testing of the Rubik Pi's remaining processing units (GPU, NPU), analysis of our current camera data, testing in more realistic conditions, and potentially exploring other commercially available computer-operated cameras. With these changes leading to future deployments and collaborations, we will expand FishSense's ability to improve the accessibility of data collection in under-documented ocean ecosystems!

A (very official) post-pool-testing debriefing session
Bear-ly Audible: Tracking Panda Vocalizations with STM32
Do you know what sound a panda makes? Over the last eight weeks, our team has been collaborating with the San Diego Zoo Wildlife Alliance to find ways to record, label, and store the various vocalizations produced by panda bears, polar bears, and several other species using specialized collars. While seemingly simple, the problem has posed several questions relevant to the field of computer hardware at large: how do you keep a low-power device running for periods of over a year? Can you collect only the relevant sounds while avoiding environmental noise? What is the domain shift between a noisy MEMS microphone and high-quality training data?

This project is split into several engineering problems and a larger, overarching design problem: collecting microphone data, denoising that data, running lightweight inference, and storing the result are all fairly trivial issues on their own, but they become more complicated when resources are heavily constrained and the target audience–intended to both collect and access the data–is unfamiliar with the technology. In particular, power-consumption optimization and ease of use are dissimilar–and, in some cases, incompatible–goals. Thus, the goal of this project is not only to develop a bioacoustic device, but to solve an optimization problem. This project was made possible thanks to the efforts of Milo Akerman, Francisco Irazaba, AnMei Dasbach-Prisk, and Hayden Dosseh, along with the teams who have worked on the hardware and software previously.

Crossing the STM Barrier

The first issue our team faced was the system architecture. STM32 boards are, by design, more complex to work with than microcontrollers running on ESP32 or ESP8266. On one hand, peeling away layers of abstraction and interacting with the board at the register level (bare-metal programming) allows for lightning-fast operations and very low power overhead. On the other, it means you may be stuck for weeks debugging a linker file.
The latter is a more than acceptable sacrifice for this project, but it did mean we would occasionally get stuck struggling against simple peripherals like USART or SPI. Once we got a firm grasp of the STM32H7, we realized the boon that was its low-level design. At the end of the day, we were able to optimize our peripheral usage without needing to swap boards or mess with solder bridges–and it was then that we could really get to work.

Output spectrogram from the on-board MEMS microphone. Notice the noise at mel bands 20-30 and 60.

20 Milliamps or Less

If you want a board running continuously for a year on 6 kg of batteries (just slightly less than 1% of the body weight of a female panda bear), you need, on average, a current draw of 20 mA or less. Concerningly, previous studies placed the power estimates for a system like ours at about 100 mA for inference alone on a TinyML model. Therefore, our first task was to build a system as power-efficient as possible. This is how we arrived at our various prototypes for sleep systems, a task made easy by the STM32H7's various built-in low-power modes and difficult by the lack of a good wake-up trigger. A convenient side effect of triggered wake-up is that, since data is only collected when something of interest occurs, there is no need to sift through or store noise, thereby saving processing power, storage space, and much-needed time for the scientists. For weeks we combed papers and scribbled over our whiteboard, trying to determine the best way to issue a wake-up signal, before arriving at two possible solutions: either implement digital signal processing (DSP) on the board's second, less power-hungry core (the Cortex-M4), or design and stress-test an analog signal-processing circuit.

Left: System diagram of our collar, with a simplified piezo wake-up circuit (analog signal processing) at bottom. Right: A similar, complete piezo wake-up circuit.
Marzetti et al., 2020

Due to the complexity of the latter system, along with a lack of in-house prototyping material and a wealth of possible failure modes, we decided to go with the DSP approach. We settled on a system that performs very basic operations on the M4 core, storing the audio data in memory shared with its sister M7 core. This allows us to switch between the two quickly, only ever having one active at a time, while relaying the data seamlessly.

We conducted two power studies on our system: one early on, which showed high figures mostly due to unoptimized code and improper clock frequencies, and a second with full peripheral integration. By the end, our system was showing promise in terms of power consumption, allowing us to deploy for durations exceeding both the original estimates and the project specifications.

                     First Study    Second Study
Baseline - Wake      13.43 mA       6.99 mA
Baseline - Sleep     2.9 mA         2.7 mA
Mic + SAI + DMA      23.6 mA        10.37 mA
DSP + Inference      16.0 mA        12.46 mA
MicroSD Write        14.1 mA        7.74 mA

Better-than-Algorithmic Compression

Finally, we arrive at another important issue: how do you store data securely, for a year or more, under restrictive and unpredictable conditions? There is a simple solution: SD cards are small, light, and mostly lossless, in addition to having fast and power-efficient write cycles. The biggest issue with this approach is that the data can't be accessed until the deployment is complete, which can prove very inconvenient depending on the use case. Given that our collaborators have an established LoRa (Long Range) network, we did a lot of work integrating wireless communication into our system as an alternative to SD storage. Famously, wireless communications are very power-hungry. LoRa uses less power by transmitting data through "chirps", at the cost of significantly lower bandwidth and longer uplink times. Even so, LoRa is still by far the most power-intensive part of our system.
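The 20 mA budget can be sanity-checked with duty-cycle arithmetic using the second-study figures: the average draw is a time-weighted mix of the sleep and active currents. A quick sketch, where the active-time fraction is an assumed value for illustration, not a measured one:

```python
# Duty-cycle check of the power budget using the second-study figures.
SLEEP_MA = 2.7      # Baseline - Sleep (second study)
ACTIVE_MA = 12.46   # DSP + Inference (second study)
DUTY = 0.05         # assumed fraction of time spent awake and processing

avg_ma = (1 - DUTY) * SLEEP_MA + DUTY * ACTIVE_MA

# Battery capacity needed to sustain that average draw for one year.
needed_mah = avg_ma * 365 * 24
```

Under these assumptions the average draw sits around 3 mA, comfortably under the 20 mA ceiling, which is why aggressive sleeping matters far more than shaving the active-mode current.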
To mitigate this, and to avoid blocking the network for too long, it quickly became apparent that we would need to compress our audio files significantly. In fact, if we compressed them enough for a full transmission, we could avoid running inference on-device at all and classify the sound files with more powerful compute resources elsewhere in the network. For now, we decided to see how far we could take the compression. And, after implementing SD writes with FatFs, we stumbled upon an interesting–and promising–alternative to audio codecs: autoencoders.

The idea behind autoencoders is as follows: a multi-layer perceptron (the encoder) is trained to compress, for example, an image of a spectrogram into a set of embeddings, which another MLP (the decoder) can then decompress into a reasonably accurate representation of the original image. If you sever the two parts of the model, you get a section that can encode images into a smaller representation (known as the "latent space") and a section that can decode it back to its original state. Why would you use this instead of compressing your audio algorithmically (e.g., MP3)? In theory, an autoencoder can reduce the amount of information significantly more than an algorithm by ignoring aspects of the audio irrelevant to classification and focusing only on the features that distinguish the spectrograms you care about.

Sample autoencoder input and output; spectrograms courtesy of Anu Jajodia

Putting everything together, we arrive at a cutting-edge device capable of acting as a low-power edge node within and without an external network: a versatile device for analyzing animal behavioral data, which ultimately has the potential to aid countless conservation efforts. We hope our work will be of use to the San Diego Zoo and will continue to be iterated on for years to come.
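The encoder/decoder split described above can be illustrated with a toy linear autoencoder in NumPy: data lying near a low-dimensional subspace is squeezed through a small latent layer and reconstructed. This is a sketch of the idea only, on synthetic data, and is far simpler than a real spectrogram autoencoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectrogram patches": 64-dim vectors lying near a
# 4-dim subspace, standing in for real spectrogram data.
basis = rng.normal(size=(4, 64))
X = rng.normal(size=(500, 4)) @ basis + 0.01 * rng.normal(size=(500, 64))

# Linear autoencoder: encode 64 -> 8 (the latent space), decode 8 -> 64.
W_enc = 0.01 * rng.normal(size=(64, 8))
W_dec = 0.01 * rng.normal(size=(8, 64))

def recon_error(X, W_enc, W_dec):
    return float(((X - (X @ W_enc) @ W_dec) ** 2).mean())

err_before = recon_error(X, W_enc, W_dec)

lr = 0.002
for _ in range(300):
    Z = X @ W_enc                     # encode into the latent space
    G = 2 * (Z @ W_dec - X) / len(X)  # gradient of the squared error
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

err_after = recon_error(X, W_enc, W_dec)
```

After severing the two halves, only the 8-number latent vector needs to cross the LoRa link, an 8x reduction here, while the decoder runs on the better-resourced side of the network.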
Come See the Summer 2025 REU Final Presentations
Engineers for Exploration is once again hosting final presentations for this year's REU students! This year's presentations will showcase recent work by the REU students on exploratory methods for audio data mining, development of edge devices for acoustics, improvements to the FishSense system, improvements to secure chip access, and updates on the latest Smartfin device. The presentations will be on Thursday, August 28, from 10:30 to 11:50 am. If you are interested in attending, please fill out the RSVP at https://forms.gle/psHRRH8euGga9nTw5.
2025 Summer Research Students
This summer, we are hosting 14 students from around the world in our 2025 summer research program at UC San Diego. Meet our students by reading their bios below:
Kendall-Frost Marsh Acoustic Deployment!
South of UCSD lies Mission Bay, a popular tourist spot that was once a biodiversity-rich marshland before being destroyed through a combination of dredging and the redirection of the San Diego River. However, recent efforts have begun to preserve and restore some of the original habitats, particularly in the Kendall-Frost Marsh, one of several biodiversity reserves managed by the UC Natural Reserve System. This 40-acre marshland lies in the northern section of Mission Bay and is a popular spot for migrating birds and two endangered species: Ridgway's rail and Belding's savannah sparrow [1, 2].

To aid in the preservation of this wetland and protect our avian friends, the Acoustic Species Identification team at Engineers for Exploration has started conducting acoustic deployments at the wetland. Efforts led by first-year master's student Wesley Wu and first-year PhD student and project lead Sean Perry have resulted in two audio recording devices monitoring continuously for three weeks near Ridgway's rail nesting sites and closer to the coastline. The goal: to later create new systems for monitoring bird activity around the marsh and the impact of anthropogenic noise on the incredible local species that call the marsh their home.

Love Your Wetlands Day: February 1st, 2025

After Wesley reached out to the folks at the UC Natural Reserve System and made plans to start investigating passive acoustic monitoring to protect species at the reserve, he and Sean began by scouting out the marsh during an open house event held as part of Love Your Wetlands Day at the Kendall-Frost Marsh. Hundreds of people and organizations were there to explore this ecosystem and meet the many people working to protect it.

View of Kendall-Frost Marsh looking southeast from Crown Point Drive, taken by Sean Perry at 11:04 AM during Love Your Wetlands Day. This image was taken at high tide, so not much of the beach is visible and the area was quite flooded.
There are several artificial mounds designed to act as nesting platforms for the Ridgway's rail and many other species. For Wesley and Sean, the main focus was less on the event and more on scouting out the area. The team was aiming for a March deployment, and finding a good location for the audio devices was paramount, especially since we would not be able to deploy close to the nesting platforms: the nesting season would start in just a few weeks, and so as not to disturb the birds, researchers are not allowed near the platforms. However, the team was able to scout out a small peninsula about 10 meters away from the closest nesting platform, across a small river. Notably, this peninsula had a small hill that would overlook the area during high tide, as well as several small grasses and shrubs that held the ground together near one of the platforms.

Two views of Inner Crown Point, the area explored as a possible deployment site for the audio devices, taken by Sean Perry. On the left is a view facing west, and on the right a view facing north from roughly the same location. On the left, you can observe the hill in the distance; the right shows where the various, reasonably dry shrubs are. Notice in the right image how close this area is to a nearby nesting platform!

After collecting GPS coordinates and various views of the area, we decided this was a fantastic spot for the deployment. We could get close to the nesting platforms without disturbing the bird species and potentially still keep the recorders above the water line. With that, we headed back home to start prepping for the deployment.

Preparation

Two key things were needed to get ready: approval for the deployment and the equipment setup. Wesley handled the documentation work and networking with the Natural Reserve System to approve the deployment (which was quite a bit of work!). Sean primarily worked on acquiring and setting up the recording equipment.
The devices of interest were the HydroMoth and AudioMoth. Both devices were created by Open Acoustic Devices, with the biggest difference between the two being that the HydroMoth is rated for underwater use [3]. The AudioMoth, being less waterproof, would be placed further from the waterline than the HydroMoth in the upcoming deployment.

Photos of the HydroMoth (left) and AudioMoth (right), taken by Wesley Wu. These devices are inexpensive, omnidirectional recorders intended for passive acoustic deployments.

Deployment: March 8th, 2025

The day of the deployment at the Kendall-Frost Marsh, with a much lower tide than during scouting. Taken by Wesley Wu.

With forms signed, devices ready, and permissions granted for the deployment, the team set off early (for a Saturday morning) back to the Kendall-Frost Marsh! Upon arrival, we donned high-vis vests, borrowed a couple of wooden stakes, and began trekking down to the marsh. The HydroMoth was placed among the weeds and grasses closer to the tidal zone and the nesting platforms. Even with the night's high tide, we were confident as we staked the recording device into the ground that we would get some great audio recordings from it.

The HydroMoth, deployed closer to the river and within the tidal zone. The closest nesting platform can be seen on the right edge of the image. Taken by Wesley Wu.

Meanwhile, the AudioMoth, which was not rated for underwater use, was placed on a nearby hill.

Wesley checking that the AudioMoth is secure and recording. Wesley and Sean took turns prepping the devices while the other person recorded the GPS location and photos documenting the deployment. Taken by Sean Perry.

The Future

These devices will record continuously for three weeks at a sample rate of 48 kHz. The team has been working on machine learning classifiers for San Diego bird species to identify local species in audio data.
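Three weeks of continuous recording adds up quickly; uncompressed storage needs follow directly from the sample rate. A quick estimate, assuming 16-bit mono WAV (the devices' actual recording configuration may differ):

```python
# Back-of-the-envelope storage estimate for one continuous deployment.
SAMPLE_RATE = 48_000   # Hz, per the deployment settings
BYTES_PER_SAMPLE = 2   # 16-bit mono, an assumption for this estimate
SECONDS_PER_DAY = 86_400
DAYS = 21              # three-week deployment

bytes_total = SAMPLE_RATE * BYTES_PER_SAMPLE * SECONDS_PER_DAY * DAYS
gb_total = bytes_total / 1e9   # roughly 174 GB per device
```

Volumes on this order are exactly why the team is interested in automated classifiers: hand-labeling weeks of continuous audio does not scale.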
Once we collect the devices, we will focus on labeling as much data as possible to identify species of interest. If we can make this process cheaper, ecologists could potentially use audio recordings as another method for studying these species, which are slowly recovering as the original habitat continues to improve. Then, not only can people cherish and love their wetlands, but they can also hear and recognize the many birds that call these wetlands their home.

Sean (left) and Wesley (right) celebrating their work with one last photo of the marsh. Taken by Sean Perry.

[1] UC Natural Reserve System. July 2015. Available at: https://ucnrs.org/reserves/kendall-frost-mission-bay-marsh-reserve/
[2] UC San Diego Natural Reserve System. 2025. Available at: https://nrs.ucsd.edu/reserves/kendall-frost/index.html
[3] Open Acoustic Devices. 2025. Available at: https://www.openacousticdevices.info/audiomoth
Winter 2025 Info Session
Info session will be held in CSE 1242 on January 14th at 2:00 PM! Engineers for Exploration is a one-of-a-kind program that develops intelligent systems to aid research in conservation, cultural heritage, and exploration. We work closely with archaeologists, biologists, ecologists, and marine scientists to create technologies that aid them in their scientific research. Applications range from determining population counts for endangered animals and studying animal behavior to capturing large-scale ecological data and visualizing archaeological discoveries. Engineers for Exploration centers around student-led teams who tackle the design process from beginning to end, from planning and prototyping various designs to deploying the system in the field alongside scientists and explorers. This is a unique opportunity to work on a project with real-world impact for our collaborators. This quarter, we are looking for students to join seven different projects, working on topics such as radio tracking pandas and lizards, detecting bird species using machine learning and sound, monitoring sea surface temperature with surfers, and much more! If you are interested, please fill out the application online at https://e4e.ucsd.edu/join. Applications for this quarter should be submitted by January 26th.
Acoustic Species ID goes to NeurIPS 2024!
Our work, “A Deep Learning Approach to the Automated Segmentation of Bird Vocalizations from Weakly Labeled Crowd-sourced Audio,” was accepted and presented at NeurIPS 2024 in the “Tackling Climate Change with Machine Learning” workshop hosted by Climate Change AI! Congrats to the authors: Jacob Ayers, Sean Perry, Samantha Prestrelski, Tianqi Zhang, Ludwig von Schoenfeldt, Mugen Blue, Gabriel Steinberg, Mathias Tobler, Ian Ingram, Curt Schurgers, and Ryan Kastner.

View of Canada Place, an iconic landmark of Vancouver, from the east side of the Vancouver Convention Center. The conference took place in both buildings, with an underground tunnel connecting the two, as seen on the bottom right of the image. Taken by Sean Perry.

The paper focuses on the issue of weakly labeled datasets, often associated with large, crowdsourced bioacoustic datasets. Traditional methods frequently use approaches rooted in digital signal processing to identify the species of interest; the paper tests these methods against deep learning models. It can be found here. Credit to Mathias Tobler for first conceptualizing the idea.

Key contributions to this work include PyHa, the Python repository where the main technologies used in the paper are stored. Credit goes primarily to Jacob Ayers for creating the repo and for his vision for the project, and to Samantha Prestrelski for developing it and carrying out experiments. Further thanks to Gabriel Steinberg for his technical contributions with isolation techniques and chunking methods, and to Mugen Blue for his training of TweetyNet, which was the most successful method used (as seen in the paper). Shout out to Sean Perry for developing Pyrenote, the tool used to label the data used in the project. Last week, Sean Perry and Ludwig von Schoenfeldt attended NeurIPS 2024 and presented the work!
The two traveled out of the country to Vancouver, Canada, to attend most of the conference, getting to see hundreds of posters and amazing research in machine learning, and to present their own work! It was an inspiring moment, getting to see where the future of the field could be heading. Acoustic Species is planning many exciting extensions to this work. We will continue to evaluate how these methods may influence the behavior of upstream models as we work to improve bioacoustic machine learning techniques for identifying species of interest.

View of North Vancouver, taken from the west side of Canada Place looking northwest. It had rained the previous day, and the storm had started to move on north, appearing over the valleys of the mountains and the ski resorts. Taken by Sean Perry.
Engineers for Exploration (E4E) Summer Research Program at UC San Diego - Application Submission by February 15th, 2025
UC San Diego’s Engineers for Exploration Summer Research Program is an NSF REU (Research Experiences for Undergraduates) Site centered on multidisciplinary and collaborative student research with the broad goals of protecting the environment, uncovering mysteries related to cultural heritage, and providing experiential learning. Our program is a full-time paid research experience from June 23rd to August 29th in which students work in multidisciplinary research teams to aid scientists from the San Diego Zoo, Scripps Institution of Oceanography, and UC San Diego in tackling problems in fields such as ecology, physical oceanography, and archeology. During this program, students will create and apply technologies in novel ways to aid scientists in their work, and may have opportunities to accompany these technologies on field deployments around the world. Through these projects, students can expect to learn about embedded systems and software, machine learning, electronics integration, mechanical design, system building, as well as project management and team leadership. For more information, please visit our website at https://e4e.ucsd.edu. Applications are currently being accepted at https://e4e.ucsd.edu/apply through February 15. Applications will continue to be accepted through March 30 if positions are not yet filled. If you have any questions, comments, or concerns, please feel free to contact me at nthui@ucsd.edu.
Smartfin - New Fin Potting
Over the past couple of years, members of the Smartfin team have been hard at work putting together a new procedure for potting fins at UC San Diego. Just before Thanksgiving break, our team managed to pull a sample fin from our molds. Throughout this journey, our team has had to tackle many different problems, ranging from designing new molds and determining the correct fin makeup to maintaining the pressure vessels used for potting. As we move into the new year, our team will focus on refining our process and integrating all of the knowledge from previous efforts so that we can finally get fully potted Smartfins ready.

Many thanks to the following people who have contributed to this effort: Megan Martinez, Theanie Baskevitch, Antara Chugh, Hannah Cutler, Meena Annamalai, Adrian Zugehar, Ela Lucas, Melissa Chan, Sara An, Michael Hobrock, Gus Blankenberg, Tommy Sardarian, Riley Meehan, Dalton Rust, Taylor Wirth, Nathan Hui, Phil Bresnahan, Todd Martz, and Ryan Kastner.

Some photos of the progress over the past couple of years:

Antara, Hannah, and Meena from the 2024 Summer REU potting a fin

New potting setup at the SIO Makerspace

Finished fins
FishSense Published in UC San Diego Today
We are pleased to announce that FishSense has been recognized in the UC San Diego Today magazine. You can read the online article here.
Aqua3D Promo Video
Aqua3D is thrilled to announce the release of our latest promotional video showcasing our cutting-edge underwater camera technology! At Aqua3D, we specialize in developing advanced depth cameras designed for underwater exploration and research. Our technology plays a pivotal role in supporting innovative initiatives like the FishSense project. Check out the video to see how we’re pushing the boundaries of underwater depth imaging!
Acoustic Species Will Be at NeurIPS 2024!
Our paper, “A Deep Learning Approach to the Automated Segmentation of Bird Vocalizations from Weakly Labeled Crowd-sourced Audio,” has been accepted for a spotlight talk and poster presentation at Climate Change AI’s “Tackling Climate Change with Machine Learning” workshop at NeurIPS 2024! Congratulations to the authors: Jacob Ayers, Sean Perry, Samantha Prestrelski, Tianqi Zhang, Ludwig von Schoenfeldt, Mugen Blue, Gabriel Steinberg, Mathias Tobler, Ian Ingram, Curt Schurgers, and Ryan Kastner!











