About me

I'm a Ph.D. student at the University at Buffalo working with Dr. Karthik Dantu in the DRONES Lab. My research interests include active perception, learning, and field robotics.

Projects

Active Illumination Control in Low-Light Environments using NightHawk


Subterranean environments such as culverts present significant challenges to robot vision due to dim lighting and a lack of distinctive features. Although onboard illumination can help, it introduces issues such as specular reflections, overexposure, and increased power consumption. We propose NightHawk, a framework that combines active illumination with exposure control to optimize image quality in these settings. NightHawk formulates an online Bayesian optimization problem to determine the best light intensity and exposure time for a given scene. We introduce a novel feature-detector-based metric to quantify image utility and use it as the cost function for the optimizer. We built NightHawk as an event-triggered recursive optimization pipeline and deployed it on a legged robot navigating a culvert beneath the Erie Canal. Results from field experiments demonstrate improvements in feature detection and matching of 47–197%, enabling more reliable visual estimation in challenging lighting conditions.
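As a rough illustration of the optimization at NightHawk's core, the sketch below runs a small Bayesian optimization loop over light intensity and exposure time. Here `capture_image` and `feature_utility` are hypothetical placeholders for the camera/light interface and the feature-detector-based utility metric; the loop itself is a generic GP + expected-improvement scheme, not the exact pipeline from the paper.

```python
# Minimal sketch of online Bayesian optimization over (light intensity,
# exposure time). capture_image and feature_utility are hypothetical
# stand-ins; the real cost is a feature-detector-based utility metric.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def capture_image(intensity, exposure_ms):
    """Placeholder for commanding the light/camera and grabbing a frame."""
    rng = np.random.default_rng(0)
    return rng.random((480, 640))

def feature_utility(image):
    """Placeholder utility metric; higher should mean more usable features."""
    return float(image.std())

def expected_improvement(gp, X_cand, y_best):
    # EI for maximization: reward candidates likely to beat the best so far.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def optimize_lighting(n_iters=15, seed=1):
    rng = np.random.default_rng(seed)
    low, high = np.array([0.0, 1.0]), np.array([1.0, 50.0])  # intensity, ms
    # Seed the surrogate with a few random probes of the scene.
    X = rng.uniform(low, high, size=(4, 2))
    y = np.array([feature_utility(capture_image(*x)) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iters):
        gp.fit(X, y)
        cand = rng.uniform(low, high, size=(256, 2))
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
        y_next = feature_utility(capture_image(*x_next))
        X, y = np.vstack([X, x_next]), np.append(y, y_next)
    return X[np.argmax(y)]  # best (intensity, exposure) found so far
```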

Language-in-the-Loop Culvert Inspection on the Erie Canal


The culverts beneath the Erie Canal demand frequent, high-fidelity inspection due to age, wear, and heterogeneous environments. Long-tailed, site-specific degradation modes, limited labeled data, and shifting imaging conditions undermine closed-set detection and segmentation approaches. Open-vocabulary vision–language models (VLMs) offer a path around taxonomy lock-in, but remain difficult to adapt and fine-tune for niche infrastructure domains. We introduce VISION, an end-to-end autonomous inspection pipeline that couples web-scale VLMs with viewpoint planning to close the loop: see → decide → move → re-image. Deployed at Culvert 110 (Gasport, NY), VISION repeatedly and accurately localized, prioritized, and re-imaged defects, capturing targeted, high-resolution inspection imagery while producing structured descriptions that support downstream condition assessment.
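To make the see → decide → move → re-image loop concrete, here is a schematic Python sketch. The `robot` and `vlm` interfaces (`capture`, `describe_defects`, `plan_viewpoint`, `move_to`) and the `Finding` record are hypothetical names for illustration, not the deployed system's API.

```python
# Schematic sketch of a language-in-the-loop inspection cycle. All interfaces
# are hypothetical placeholders; the deployed system couples a web-scale VLM
# with a viewpoint planner on a legged robot.
from dataclasses import dataclass

@dataclass
class Finding:
    label: str        # open-vocabulary defect description from the VLM
    priority: float   # triage score used to pick what to re-image first
    bearing: tuple    # rough direction toward the defect in the image frame

def inspect(robot, vlm, max_rounds=5):
    report = []
    for _ in range(max_rounds):
        image = robot.capture()                        # see
        findings = vlm.describe_defects(image)         # decide (open-vocab)
        if not findings:
            break
        target = max(findings, key=lambda f: f.priority)
        viewpoint = robot.plan_viewpoint(target.bearing)
        robot.move_to(viewpoint)                       # move
        closeup = robot.capture()                      # re-image up close
        report.append((target.label, closeup))
    return report  # structured descriptions plus targeted imagery
```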

Improving Visual Odometry with PIXER


Accurate feature detection is fundamental for various computer vision tasks including autonomous robotics, 3D reconstruction, medical imaging, and remote sensing. Despite advancements in enhancing the robustness of visual features, no existing method measures the utility of visual information before processing by specific feature-type algorithms. To address this gap, we introduce PIXER and the concept of "Featureness", which reflects the inherent interest and reliability of visual information for robust recognition, independent of any specific feature type. Leveraging a generalization of Bayesian learning, our approach quantifies both the probability and uncertainty of a pixel's contribution to robust visual utility in a single-shot process, avoiding costly operations such as Monte Carlo sampling and permitting customizable featureness definitions adaptable to a wide range of applications. We evaluate PIXER on visual odometry with featureness selectivity, achieving an average 31% improvement in trajectory RMSE with 49% fewer features.
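A minimal sketch of how a featureness map might gate feature selection follows, assuming hypothetical per-pixel `featureness` and `uncertainty` maps produced by a single-shot network; ORB stands in for whatever downstream feature type is used.

```python
# Featureness-gated keypoint selection, in the spirit of PIXER. The
# `featureness` (probability) and `uncertainty` maps are hypothetical
# per-pixel network outputs; here they gate ORB keypoints.
import cv2
import numpy as np

def select_keypoints(image, featureness, uncertainty,
                     p_min=0.6, u_max=0.3):
    """Keep keypoints whose pixels are both useful and confidently so."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(image, None)
    kept = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if featureness[y, x] >= p_min and uncertainty[y, x] <= u_max:
            kept.append(kp)
    # Fewer but more reliable features go on to matching / visual odometry.
    keypoints_kept, descriptors = orb.compute(image, kept)
    return keypoints_kept, descriptors
```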

Empir3D: Multi-Dimensional Point Cloud Quality Assessment


In this work, we propose Empir3D, an evaluation framework for point clouds consisting of four metrics: resolution (Qr) to quantify the ability to distinguish between individual parts of the point cloud, accuracy (Qa) to measure registration error, coverage (Qc) to evaluate the portion of missing data, and artifact-score (Qt) to characterize the presence of artifacts. Through detailed analysis, we demonstrate the complementary nature of these dimensions and the improvement they provide over existing uni-dimensional measures. Further, we demonstrate the utility of Empir3D by comparing it with uni-dimensional metrics on two 3D perception applications (SLAM and point cloud completion). Empir3D advances our ability to reason about differences between point clouds and helps debug 3D perception applications by providing a richer evaluation of their performance.
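For intuition, below are simplified stand-ins for two of the four dimensions, accuracy (Qa) and coverage (Qc), built on nearest-neighbor distances against a reference cloud. These are illustrative sketches under assumed definitions, not the paper's exact formulations.

```python
# Illustrative stand-ins for two Empir3D dimensions using nearest-neighbor
# distances. Simplified sketches only; the paper's definitions differ.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_qa(cloud, reference):
    """Mean registration error: how far each point sits from the reference."""
    dists, _ = cKDTree(reference).query(cloud)
    return float(dists.mean())

def coverage_qc(cloud, reference, radius=0.05):
    """Fraction of reference points with a neighbor in the evaluated cloud
    within `radius` (meters, assumed); low values indicate missing data."""
    dists, _ = cKDTree(cloud).query(reference)
    return float((dists <= radius).mean())
```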

Publications

Active Illumination Control in Low-Light Environments using NightHawk


Yash Turkar, Youngjin Kim, Karthik Dantu

International Symposium on Experimental Robotics (ISER), 2025

Autonomous Culvert Inspection on the Erie Canal using Legged Robots


Kartikeya Singh*, Yash Turkar*, Youngjin Kim, Matthew Lengel, Karthik Dantu

Workshop on Field Robotics, International Conference on Robotics and Automation (ICRA), 2025

Simulation Environment for Terrain Excavation Robot Autonomy


Christo Aluckal, Roopesh Vinodh Kumar Lal, Sean Courtney, Yash Turkar, Yashom Dighe, Young-Jin Kim, Jake Gemerek, Karthik Dantu

International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2025

VRF: Vehicle Road-side Point Cloud Fusion


Kaleem Nawaz Khan, Ali Khalid, Yash Turkar, Karthik Dantu, Fawad Ahmad

Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services (MobiSys), 2024

Enhancing Archaeological Surveys with InSAR Imagery and UAV-Based GPR


Yash Turkar, Shaunak De, Charuvahan Adhivarahan, Luca Mottola, Alessandro Sebastiani, Davide Castelletti, Karthik Dantu

International Geoscience and Remote Sensing Symposium (IGARSS), 2024

Generative-Network based Multimedia Super-Resolution for UAV Remote Sensing


Yash Turkar, Christo Aluckal, Shaunak De, Varsha Turkar, Yogesh Agarwadkar

International Geoscience and Remote Sensing Symposium (IGARSS), 2022

Let's get in touch


Email: [email protected]
Phone: +1 (716)-222-3761