Abstract
This paper presents a formulation and methodology for a vision-aided inertial navigation system designed for autonomous landing and proximity operations. Measurements from a monocular camera are fused with inertial measurements, leveraging the complementary strengths of the visual and inertial sensing modalities. A multiplicative extended Kalman filter (MEKF) estimates the relative pose with respect to the target as well as the sensor biases, maintaining the multiplicative parameterization of the error quaternion and the attitude kinematics throughout the filtering process. For feature extraction, we adapt the histogram of oriented gradients (HOG) algorithm and apply a series of template images, thereby improving its rotation invariance. A measurement model is formulated through an auxiliary optimization process over geometric data to estimate feature locations together with their uncertainties. The utility of the proposed navigation system and its framework is demonstrated through experimental analysis using the sensor module and a designated helipad target.