PhD researcher building intelligent legged robots — from lunar locomotion to adversarial robotic security — at Michigan Technological University.
I grew up in India with two things that never left me — a curiosity for how things move and a pencil in my hand. Before I ever wrote a line of code, I was a freelance artist, making portraits and 3D digital artwork on commission. I served as the Design Head of Yodha — The Warrior Within, a social service organization where I created posters, banners, and visual campaigns.
That creative instinct led me to engineering — first a B.E. in Mechanical Engineering from Visvesvaraya Technological University, then across the world to Michigan Tech in Houghton, MI for an M.S. in Mechatronics and now a PhD in Electrical Engineering. Somewhere along that journey, I fell deep into the world of legged robotics and reinforcement learning — teaching machines to walk so that one day, astronauts can walk better on the Moon.
Outside the lab, I'm always chasing something new. Since moving to the U.S., I've picked up skiing, golf, indoor rock climbing, clay sculpting, and I'm a serious 8-ball pool player. Back in India, it was chess, basketball, and cricket. The through-line is simple — I love learning things from scratch, whether it's a new gait controller or a new sport.
From discovering gaits for lunar astronauts to exposing security flaws in autonomous robots — each project pushes the boundary of what legged systems can do.
Discovering safe, energy-efficient locomotion strategies for astronauts on the Moon. Designed a custom BT-SLIP physics-based model for RL control, and converted the Gait10dof18musc musculoskeletal model from OpenSim to MuJoCo, achieving ~600× computational speedup with <5% error. Currently implementing RL-based controllers for gait discovery under partial-gravity conditions.
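As a flavor of the underlying physics, the stance phase of a simple planar SLIP (spring-loaded inverted pendulum) under lunar gravity can be sketched in a few lines. This is a generic textbook SLIP, not my BT-SLIP model, and all parameter values below are illustrative assumptions:

```python
import math

def slip_stance(r0=1.0, k=2000.0, m=70.0, g=1.62,
                r_dot0=-0.8, theta0=0.2, theta_dot0=-1.0,
                h=1e-4, t_max=1.0):
    """Integrate planar SLIP stance dynamics in polar coordinates
    (r = leg length, theta = leg angle from vertical) with semi-implicit
    Euler under lunar gravity g = 1.62 m/s^2. Returns time series of
    leg length and total mechanical energy."""
    r, th = r0, theta0
    rd, thd = r_dot0, theta_dot0
    lengths, energies = [], []
    t = 0.0
    while t < t_max and r <= r0:  # stance ends when the leg regains rest length
        # Equations of motion for a point mass on a massless spring leg:
        #   r''  = r*th'^2 - g*cos(th) + (k/m)*(r0 - r)
        #   th'' = (g*sin(th) - 2*r'*th') / r
        rdd = r * thd**2 - g * math.cos(th) + (k / m) * (r0 - r)
        thdd = (g * math.sin(th) - 2.0 * rd * thd) / r
        rd += h * rdd
        thd += h * thdd
        r += h * rd
        th += h * thd
        # Total mechanical energy: kinetic + gravitational + spring potential
        ke = 0.5 * m * (rd**2 + (r * thd)**2)
        pe = m * g * r * math.cos(th) + 0.5 * k * (r0 - r)**2
        lengths.append(r)
        energies.append(ke + pe)
        t += h
    return lengths, energies
```

With the reduced gravity term, the same touchdown state produces a longer, flatter stance than on Earth — the kind of effect the RL controllers are trained to exploit.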
Built a hierarchical control framework for autonomous object search with the Unitree Go1, integrating VLM-based navigation (Qwen3-32B) with RL locomotion control. Demonstrated the first history-based backdoor attack on LLM-controlled robots, with a >98% attack success rate while maintaining full utility in benign runs. The framework achieves 90% task completion in multi-room environments.
Designed a real-world pipeline combining CNN-based terrain classification via Intel RealSense depth camera with Unitree Go1 gait-switching logic. Validated in field trials with ~60% correct gait selection and 80% command-tracking accuracy for autonomous forest survey applications.
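The gait-switching logic can be sketched as a confidence-gated lookup: the classifier's terrain label only triggers a switch when its confidence clears a threshold, so noisy predictions don't cause jittery gait changes. The mapping, labels, and threshold below are hypothetical stand-ins, not the deployed table:

```python
# Illustrative terrain-to-gait mapping (assumed labels, not the field-trial classes)
TERRAIN_TO_GAIT = {
    "pavement": "trot",
    "grass": "walk",
    "gravel": "walk",
    "mud": "crawl",
}

def select_gait(terrain, confidence, previous_gait, threshold=0.6):
    """Return the gait for the predicted terrain, or hold the previous
    gait when the prediction is below the confidence threshold or the
    terrain label is unknown."""
    if confidence < threshold or terrain not in TERRAIN_TO_GAIT:
        return previous_gait
    return TERRAIN_TO_GAIT[terrain]
```

Holding the previous gait on low-confidence frames is a cheap form of temporal smoothing; a real deployment might instead average class probabilities over a short window.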
Developed a standardized MuJoCo benchmarking framework for the Unitree Go1. RL recovered 0.25s faster than MPC and achieved 41% lower energy cost, while MPC demonstrated 36% higher lateral stability. Open-source framework supported by NSF.
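A standard way to compare controllers' energy use, as in the RL-vs-MPC comparison above, is the dimensionless cost of transport. A minimal sketch (the 12 kg mass and 100 J figures below are illustrative, not benchmark results):

```python
def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
    """Dimensionless cost of transport: energy spent per unit of body
    weight per unit of distance traveled. Lower values mean a more
    energy-efficient gait, and the metric is comparable across robots."""
    return energy_j / (mass_kg * g * distance_m)

# Example: a ~12 kg quadruped spending 100 J to cover 5 m
cot = cost_of_transport(100.0, 12.0, 5.0)
```

Because it normalizes by weight and distance, cost of transport lets a quadruped benchmark be compared against published numbers for other legged platforms.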
Proposed a modified planar bipedal trunk spring-loaded inverted pendulum model in MuJoCo with actuated hip and prismatic joints augmented by passive spring-dampers. Trained with RL and validated against human experimental data, then conducted parametric studies on how leg compliance shapes emergent gait patterns across velocities.
Developed FractionalNet, a symmetric neural architecture trained on integer-order data to predict half-order derivatives, achieving ~84% accuracy on test inputs, a novel intersection of deep learning and fractional calculus.
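The target a network like this learns can be checked numerically with the classical Grünwald–Letnikov approximation of a fractional derivative. This sketch is the standard numerical scheme, not FractionalNet itself; for f(t) = t the half-order derivative has the closed form 2·sqrt(t/π):

```python
import math

def gl_fractional_derivative(f, t, alpha=0.5, n=2000):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f
    at t (lower terminal 0), using the recurrence
    w_k = w_{k-1} * (k - 1 - alpha) / k for the signed binomial weights
    w_k = (-1)^k * C(alpha, k)."""
    h = t / n
    w = 1.0
    total = f(t)  # k = 0 term, w_0 = 1
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)
    return total / h**alpha

# Half-order derivative of f(t) = t, compared with the closed form
approx = gl_fractional_derivative(lambda t: t, 1.0)
exact = 2.0 / math.sqrt(math.pi)
```

Generating training pairs this way from functions with known integer-order derivatives is one plausible route to the kind of dataset such a model is trained on.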
Open to research collaborations, speaking opportunities, and industry positions in robotics and autonomous systems.