Publications
Denoising autoencoder for reconstructing sensor observation data and predicting evapotranspiration: Noisy and missing values repair and uncertainty quantification
Publisher: Water Resources Research, 2025
Authors: Timothy K Johnsen, Xiangyu Bi, Chunwei Chou, Charuleka Varadharajan, Yuxin Wu, Jonathan Skone, Lavanya Ramakrishnan
Abstract
Machine learning (ML) methods applied in scientific research often deal with interrelated features in high-dimensional data. Reducing data noise and redundancy is needed to increase prediction accuracy and efficiency, especially when dealing with data from field sensors. We explored an unsupervised learning method, the denoising autoencoder (DAE), to extract the underlying data structure from noisy raw data in the context of predicting hydrologic quantities from multiple field sensors. These sensors have intrinsic instrumental noise and occasional malfunctions that cause missing values. Our DAE neural network reconstructed meteorological sensor data containing noise and missing values to predict evapotranspiration (ET) in a mountainous watershed. The DAE reconstructed the sensor variables with a mean coefficient of determination of 0.77 across 15 dimensions representing individual sensors. It reduced variance and bias uncertainties compared to a classical autoencoder model. The reconstruction quality varied across dimensions depending on their cross-correlation and alignment with the underlying data structure. Uncertainties arising from the model structure were overall higher than those resulting from data corruption. We attached the DAE structure to a downstream ET-prediction neural network in three formats and achieved reasonably accurate ET predictions. The use of the DAE notably reduced variance uncertainty in ET prediction. However, excessive variance reduction may be accompanied by an increase in bias due to the intrinsic bias-variance tradeoff. Our method of evaluating and reducing uncertainties in aggregated data from different sources can be used to improve predictive models, process understanding, and uncertainty quantification for better water resource management.
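The denoising objective described above (reconstruct clean sensor values from noisy, partially missing inputs through a low-dimensional bottleneck) can be sketched with a minimal linear stand-in. This is not the paper's neural DAE or its meteorological data: the 15 "sensor" channels, 3 latent factors, and corruption rates below are illustrative assumptions, and the encoder/decoder are solved in closed form rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for multi-sensor data: 15 correlated "sensor" channels driven
# by 3 latent factors (synthetic, not the paper's meteorological data).
n, d, k = 500, 15, 3
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

def corrupt(data, noise=0.5, missing=0.1):
    """Gaussian instrument noise plus zero-masked 'missing' entries."""
    out = data + noise * rng.normal(size=data.shape)
    out[rng.random(data.shape) < missing] = 0.0
    return out

# Linear denoising autoencoder, solved in closed form: encode the corrupted
# data through a k-dimensional PCA bottleneck, then fit the decoder by least
# squares against the *clean* data -- the denoising objective.
Xc = corrupt(X)
_, _, Vt = np.linalg.svd(Xc - Xc.mean(axis=0), full_matrices=False)
H = Xc @ Vt[:k].T                               # bottleneck codes
decoder, *_ = np.linalg.lstsq(H, X, rcond=None)
Xhat = H @ decoder                              # reconstruction

mse_corrupt = np.mean((Xc - X) ** 2)            # error left in the raw input
mse_recon = np.mean((Xhat - X) ** 2)            # error after reconstruction
```

Because the clean signal lives in a low-dimensional subspace, the bottleneck discards most of the noise, so the reconstruction error falls well below the corruption error.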
SmartDepth: Motion-Aware Depth Prediction with Intelligent Computing for Navigation
Publisher: 21st International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), 2025
Authors: Timothy K. Johnsen, Mengting Yang, Ian Harshbarger, Matteo Mendula, Marco Levorato
Abstract
Advancements in robotic platforms emphasize the need for efficient algorithm designs that minimize resource usage for sensing and computing. However, this objective clashes with the complexity of advanced perception tasks that are essential to robotics operations, such as depth perception and navigation. In this paper, we present SmartDepth, a new perception and navigation framework that significantly reduces resource usage compared to traditional approaches. SmartDepth considers a navigation model whose logic requires a chain of depth maps for a sequential number of steps. At its core, SmartDepth leverages highly accurate DNN-predicted depth maps to initialize the chain. To optimize resource efficiency, it then employs Motion-Aware Depth Prediction (MADP)—our novel, low-complexity, geometry-based algorithm—to extrapolate high-quality depth maps for the rest of the chain. The length of a chain is determined by a predefined efficiency level and should be set to ensure that the quality of MADP-predicted depth maps remains useful. In fact, extending the chain beyond this point would result in degradation, rendering the predicted depth maps ineffective. Experimental results show that, compared to a state-of-the-art navigation framework that uses relatively expensive depth extraction at each step, SmartDepth effectively reduces computational costs: execution time by 19%, average power consumption by 26%, and energy expenditure by 20%, at a nominal degradation in navigation accuracy of 0.98% and a small increase in path length of 12%. While larger computational savings can potentially be obtained, this comes with a larger degradation in navigation accuracy and path length. Thus, we posit that SmartDepth is an intuitive evolution in perception-based robotics that improves computing efficiency, with a controlled accuracy loss.
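The core geometric idea of extrapolating a depth map under known camera motion can be illustrated with a simplified sketch. This assumes pure forward translation along the optical axis and a pinhole camera, which is far simpler than the paper's MADP algorithm; the function name, camera intrinsics, and test scene are hypothetical.

```python
import numpy as np

def extrapolate_depth(depth, t_z, f, cx, cy):
    """Warp a depth map forward under pure camera translation t_z along the
    optical axis (a simplified, hypothetical stand-in for motion-aware
    depth extrapolation)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    # Back-project each pixel to 3D camera coordinates.
    Z = depth
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    Zn = Z - t_z                              # camera moved forward by t_z
    valid = Zn > 0.1
    # Re-project the translated points into the new image.
    un = np.round(f * X / Zn + cx).astype(int)
    vn = np.round(f * Y / Zn + cy).astype(int)
    inb = valid & (un >= 0) & (un < w) & (vn >= 0) & (vn < h)
    out = np.full_like(depth, np.inf)
    # Z-buffer splat: keep the nearest depth hitting each target pixel.
    np.minimum.at(out, (vn[inb], un[inb]), Zn[inb])
    out[np.isinf(out)] = 0.0                  # holes left by disocclusion
    return out

depth = np.full((9, 9), 5.0)                  # fronto-parallel wall 5 m away
pred = extrapolate_depth(depth, t_z=1.0, f=20.0, cx=4.0, cy=4.0)
```

After moving 1 m toward the wall, the predicted map reads 4 m at the image center, while unfilled pixels (disocclusion holes) are one reason extrapolated chains degrade and must be bounded in length.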
Single- and multi-mineral classification using dual-band Raman spectroscopy for planetary surface missions
Publisher: American Mineralogist, 2025
Authors: Timothy K. Johnsen, Virginia C. Gulick
Abstract
Planetary surface missions have greatly benefitted from intelligent systems capable of semi-autonomous navigation and surveying. However, instruments onboard these missions are not similarly equipped with automated science analysis classifiers, which could further improve scientific yield and autonomy. Here, we present both single- and multi-mineral autonomous classifiers that integrate results from a co-registered dual-band Raman spectrometer. This instrument consecutively irradiates the same spot on the same sample using two excitation lasers of different wavelengths (532 and 785 nm). We identify the presence of mineral groups: pyroxene, olivine, potassium feldspar, quartz, mica, gypsum, and plagioclase, in 191 rocks. These minerals are among the major rock-forming mineral groups, so their presence or absence within a sample is key for understanding rock composition and the environment in which the rock formed. We present machine learning methods used to train classifiers and leverage the multiple modalities of the dual-band Raman spectrometer. When testing on a novel sample set for single-mineral classification, we show accuracy scores up to 100% (varying by mineral), with a total classification rate (all minerals) of 91%. When testing on a novel set of samples for multi-mineral classification, we show accuracy scores up to 96%, with a total classification rate of 73%. We end with several hypothesis tests demonstrating that dual-band Raman spectroscopy is more robust and improves the scientific yield for mineral classification over single-band spectroscopy, especially when combined with our multimodal neural network.
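One simple way to leverage two co-registered excitation bands is early fusion: concatenate the two spectra into a single feature vector before classification. The sketch below illustrates that idea on synthetic Gaussian-peak "spectra"; the peak positions, channel counts, and linear classifier are all assumptions for illustration, not the paper's data or multimodal network architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def spectrum(peaks, n_ch=64, width=2.0):
    """Synthetic Raman-like spectrum: Gaussian peaks plus noise (toy data,
    not measured mineral spectra)."""
    x = np.arange(n_ch)
    s = sum(np.exp(-0.5 * ((x - p) / width) ** 2) for p in peaks)
    return s + 0.05 * rng.normal(size=n_ch)

# The two excitation bands see different (hypothetical) peak positions.
peaks_532 = {"quartz": [10, 40], "gypsum": [22, 50]}
peaks_785 = {"quartz": [15, 30], "gypsum": [44, 55]}

X, y = [], []
for label, name in enumerate(["quartz", "gypsum"]):
    for _ in range(100):
        # Early fusion: concatenate the co-registered dual-band spectra.
        X.append(np.concatenate([spectrum(peaks_532[name]),
                                 spectrum(peaks_785[name])]))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

Because each band contributes distinct diagnostic peaks, the fused feature vector separates the classes more reliably than either band alone would for overlapping peaks.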
An Overview of Adaptive Dynamic Deep Neural Networks via Slimmable and Gated Architectures
Publisher: IEEE, 15th International Conference on Information and Communication Technology Convergence (ICTC), 2024
Authors: Timothy K Johnsen, Ian Harshbarger, Marco Levorato
Abstract
Deep Neural Networks (DNN) are omnipresent in systems that are developed for processing vast quantities of data, particularly images, for tasks such as perception, navigation, classification, detection, and segmentation. Such DNNs can be computationally demanding, requiring upwards of hundreds of billions of computations during execution. Dynamic Deep Neural Networks (DDNN) are an evolution that allows the number of computations in a DNN to be scaled down. However, lacking in DDNN methodologies is the functionality to control online when and how to scale down the number of computations. To respond to this need, Adaptive Dynamic Deep Neural Networks (ADDNN) are an evolving class of deep learning models used in high performance computing that attempt to minimize resource usage – memory and power – and latency while maintaining an acceptable task performance by adapting the model architecture in response to the current context. Thus, ADDNNs are a State-of-the-Art evolution that perceive context, on a case-by-case basis, and adapt the number of computations to that required by the difficulty of the problem at hand. This position paper is not only a survey of current DDNN methods in literature, but also an analysis of the considerations, design principles, and challenges in developing robust ADDNN systems and applications.
NaviSplit: Dynamic Multi-Branch Split DNNs for Efficient Distributed Autonomous Navigation
Publisher: IEEE, 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2024
Authors: Timothy K Johnsen, Ian Harshbarger, Zixia Xia, Marco Levorato
Abstract
Lightweight autonomous unmanned aerial vehicles (UAV) are emerging as a central component of a broad range of applications. However, autonomous navigation necessitates the implementation of perception algorithms, often deep neural networks (DNN), that process the input of sensor observations, such as those from cameras and LiDARs, for control logic. The complexity of such algorithms clashes with the severe constraints of these devices in terms of computing power, energy, memory, and execution time. In this paper, we propose NaviSplit, the first instance of a lightweight navigation framework embedding a distributed and dynamic multi-branched neural model. At its core is a DNN split at a compression point, resulting in two model parts: (1) the head model, which is executed at the vehicle and partially processes and compacts perception from sensors; and (2) the tail model, which is executed at an interconnected compute-capable device and processes the remainder of the compacted perception and infers navigation commands. Different from prior work, the NaviSplit framework includes a neural gate that dynamically selects a specific head model to minimize channel usage while efficiently supporting the navigation network. In our implementation, the perception model extracts a 2D depth map from a monocular RGB image captured by the drone using the robust simulator Microsoft AirSim. Our results demonstrate that the NaviSplit depth model achieves an extraction accuracy of 72.81% while transmitting an extremely small amount of data (1.218 KB) to the edge server. When using the neural gate, as utilized by NaviSplit, we obtain slightly higher navigation accuracy (by 0.3%) than a larger static network while significantly reducing the data rate by 95%. To the best of our knowledge, this is the first exemplar of a dynamic multi-branched model based on split DNNs for autonomous navigation.
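The head/tail split described above can be sketched in a few lines: the head compacts the observation into a small code on the vehicle, only that code crosses the wireless channel, and the tail decodes it into navigation commands at the edge. The layer sizes and random placeholder weights below are assumptions for illustration, not NaviSplit's trained model; a 304-float code is on the same kilobyte order as the transmitted data reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split point: random placeholder weights, not trained models.
W_head = rng.normal(size=(3072, 304)) * 0.01   # e.g. flattened 32x32x3 input
W_tail = rng.normal(size=(304, 4)) * 0.01      # e.g. 4 navigation commands

def head(obs):
    """Executed on the UAV: partially process and compact the observation."""
    z = np.maximum(obs @ W_head, 0.0)          # ReLU feature compaction
    return z.astype(np.float32)                # quantize for transmission

def tail(z):
    """Executed at the edge server: infer navigation commands from the code."""
    return z @ W_tail

obs = rng.normal(size=3072)                    # stand-in sensor observation
z = head(obs)                                  # 304 float32 values = 1216 bytes
commands = tail(z)
```

A neural gate, as in NaviSplit, would choose among several head models of different code sizes per step, trading channel usage against downstream accuracy.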
NaviSlim: Adaptive Context-Aware Navigation and Sensing via Dynamic Slimmable Networks
Publisher: IEEE, Ninth International Conference on Internet-of-Things Design and Implementation (IoTDI), 2024
Authors: Timothy K Johnsen, Marco Levorato
Abstract
Small-scale autonomous airborne vehicles, such as micro-drones, are expected to be a central component of a broad spectrum of applications ranging from exploration to surveillance and delivery. This class of vehicles is characterized by severe constraints in computing power and energy reservoir, which impairs their ability to support the complex state-of-the-art neural models needed for autonomous operations. The main contribution of this paper is a new class of neural navigation models – NaviSlim – capable of adapting the amount of resources spent on computing and sensing in response to the current context (i.e., difficulty of the environment, current trajectory, and navigation goals). Specifically, NaviSlim is designed as a gated slimmable neural network architecture that, different from existing slimmable networks, can dynamically select a slimming factor to autonomously scale model complexity, which consequently optimizes execution time and energy consumption. Moreover, different from existing sensor fusion approaches, NaviSlim can dynamically select power levels of onboard sensors to autonomously reduce power and time spent during sensor acquisition, without the need to switch between different neural networks. By means of extensive training and testing on the robust simulation environment Microsoft AirSim, we evaluate our NaviSlim models on scenarios of varying difficulty, observing a dynamically reduced model complexity of 57–92% on average and sensor utilization of 61–80%, compared to static neural networks designed to match the computing and sensing required by the most difficult scenario.
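The mechanics of a gated slimmable layer are simple to sketch: a single weight matrix whose leading columns form the slimmed sub-network, plus a gate that maps context difficulty to a slimming factor. The threshold-based gate rule and layer sizes below are illustrative assumptions, not NaviSlim's learned gate.

```python
import numpy as np

rng = np.random.default_rng(0)

# One slimmable layer: the leading columns of W form the slimmed sub-network
# (toy random weights; a real model shares these weights across widths).
W = rng.normal(size=(8, 64)) * 0.1

def slimmable_layer(x, width_factor):
    """Run the layer at a fraction of its full width."""
    k = max(1, int(64 * width_factor))   # number of active units
    return np.maximum(x @ W[:, :k], 0.0)

def gate(context_difficulty):
    """Illustrative gate policy: harder contexts get a wider network."""
    return 0.25 if context_difficulty < 0.3 else 1.0

x = rng.normal(size=8)                   # stand-in navigation features
easy = slimmable_layer(x, gate(0.1))     # easy context: 16 active units
hard = slimmable_layer(x, gate(0.9))     # hard context: all 64 units
```

Because widths share one weight matrix, scaling down skips computation without loading a different network, which is what makes per-step adaptation cheap.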
Elastic Net to Forecast COVID-19 Cases
Publisher: IEEE, International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT), 2020
Authors: Timothy K Johnsen, Jerry Z Gao
Abstract
Forecasting novel daily cases of COVID-19 is crucial for medical, political, and other officials who handle day-to-day COVID-19-related logistics. Current machine learning approaches, though robust in accuracy, can be black boxes, specific to one region, and/or hard to apply if the user has minimal knowledge of machine learning and programming. This weakens the integrity of otherwise robust machine learning methods, causing them to not be utilized to their full potential. Thus, the presented Elastic Net COVID-19 Forecaster, or EN-CoF for short, is designed to provide an intuitive, generic, and easy-to-apply forecaster. EN-CoF is a multi-linear regressor trained on time series data to forecast the number of novel daily COVID-19 cases. EN-CoF maintains a high accuracy on par with more complex models such as ARIMA and Bi-LSTM, while gaining the advantages of transparency, generalization, and accessibility.
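The general recipe of an elastic-net time-series forecaster (frame the series as lagged features, then fit a regularized linear regressor) can be sketched as follows. The synthetic series, 7-day lag window, and hyperparameters are illustrative assumptions, not EN-CoF's actual configuration or real COVID-19 data.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Toy daily-case series (synthetic): trend + weekly seasonality + noise.
t = np.arange(200)
cases = 100 + 2 * t + 20 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 5, 200)

def lag_matrix(series, n_lags=7):
    """Supervised framing: predict day t from the previous n_lags days."""
    X = np.stack([series[i:i - n_lags] for i in range(n_lags)], axis=1)
    y = series[n_lags:]
    return X, y

X, y = lag_matrix(cases)
model = ElasticNet(alpha=1.0, max_iter=5000).fit(X[:150], y[:150])
pred = model.predict(X[150:])
r2 = model.score(X[150:], y[150:])
```

The fitted coefficients are directly inspectable (one weight per lag day), which is the transparency advantage over black-box models such as Bi-LSTMs.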
A Multilayer Perceptron for Obtaining Quick Parameter Estimations of Cool Exoplanets from Geometric Albedo Spectra
Publisher: The Astronomical Society of the Pacific, 2020
Authors: Timothy K Johnsen, Mark S Marley, and Virginia C. Gulick
Abstract
Future space telescopes now in the concept and design stage aim to observe reflected light spectra of extrasolar planets. Assessing whether given notional mission and instrument design parameters will provide data suitable for constraining quantities of interest typically requires time-consuming retrieval studies in which tens to hundreds of thousands of models are compared to data with a given assumed signal-to-noise ratio, thereby limiting the rapidity of design iterations. Here we present a machine learning approach employing a Multilayer Perceptron (MLP) trained on model albedo spectra of extrasolar giant planets to estimate a planet’s atmospheric metallicity, gravity, effective temperature, and cloud properties given simulated observed spectra. The stand-alone C++ code we have developed can train new MLPs on new training sets within minutes to hours, depending upon the dimensions of input spectra, size of the training set, desired output, and desired accuracy. After the MLP is trained, it can classify new input spectra within a second, potentially helping to speed observation and mission design planning. Our MLPs were trained using a grid of model spectra that varied in metallicity, gravity, temperature, and cloud properties. The results show that a trained MLP is an elegant means for reliable in situ estimations when applied to model spectra. We analyzed the effect of using models in a grid range known to have degeneracies.
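The workflow described (train an MLP on a grid of forward-model spectra, then invert new spectra to parameter estimates) can be sketched with scikit-learn rather than the paper's stand-alone C++ code. The one-parameter toy forward model, noise level, and network size below are assumptions; real albedo spectra come from radiative-transfer grids over metallicity, gravity, temperature, and clouds.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy forward model: an "albedo spectrum" whose shape depends on a single
# parameter (stand-in for, e.g., metallicity); not a radiative-transfer model.
def model_spectrum(param, n_ch=40):
    x = np.linspace(0.0, 1.0, n_ch)
    return np.exp(-param * x) + 0.3 * param * np.sin(6 * x)

# Build a training grid of (spectrum, parameter) pairs with simulated noise.
params = rng.uniform(0.5, 3.0, 400)
spectra = np.array([model_spectrum(p) for p in params])
spectra += 0.01 * rng.normal(size=spectra.shape)

# Train the inverse mapping: noisy spectrum -> parameter estimate.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(spectra[:300], params[:300])
mae = np.mean(np.abs(mlp.predict(spectra[300:]) - params[300:]))
```

Once trained, evaluating the network on a new spectrum is a handful of matrix products, which is why inference takes well under a second compared to a full retrieval.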