Optical Navigation Capabilities for Deep Space Missions - KinetX

OPTICAL NAVIGATION CAPABILITIES FOR DEEP SPACE MISSIONS

Coralie Jackman and Philip Dumont
Space Navigation and Flight Dynamics Practice, KinetX Aerospace, Simi Valley, California 93065

Paper AAS 13-443
23rd AAS/AIAA Space Flight Mechanics Conference, Kauai, Hawaii, February 10-14, 2013
AAS Publications Office, P.O. Box 28130, San Diego, CA 92198



OPTICAL NAVIGATION CAPABILITIES FOR DEEP SPACE MISSIONS

Coralie D. Jackman* and Philip J. Dumont†

KinetX Aerospace, a private corporation, is currently providing navigation for three NASA deep space missions: MESSENGER, New Horizons, and OSIRIS-REx. Two of these, New Horizons to the Pluto system, and OSIRIS-REx to the asteroid 1999 RQ36, rely heavily on optical navigation (OpNav) to ensure mission success. KinetX-developed OpNav software uses spacecraft imaging to determine the spacecraft trajectory and targets’ ephemerides. This paper describes the KinetX approach to optical navigation, simulating spacecraft images for testing, processing real data, and generating products for orbit determination. Also included are imaging simulations for New Horizons and OSIRIS-REx and real data results from New Horizons.

INTRODUCTION

The Space Navigation and Flight Dynamics Practice of KinetX Aerospace, a private corporation, is currently supplying the navigation for three NASA deep space missions: MESSENGER, New Horizons, and OSIRIS-REx. Two of these missions, New Horizons to the Pluto system and OSIRIS-REx to the asteroid 1999 RQ36, rely heavily on optical navigation (OpNav) to ensure mission success. The New Horizons and OSIRIS-REx missions are both being flown under the NASA New Frontiers Program. The New Horizons mission, which launched in January 2006, is led by principal investigator S. Alan Stern of the Southwest Research Institute. New Horizons will encounter the Pluto system in July 2015. The Johns Hopkins University Applied Physics Laboratory provided facilities for spacecraft assembly and serves as the center for project management and spacecraft operations. The principal investigator for the OSIRIS-REx mission is Dante Lauretta of the University of Arizona. OSIRIS-REx is scheduled for launch in September 2016 and will encounter its target asteroid in August 2018. NASA’s Goddard Space Flight Center serves as the center for project management. Lockheed Martin Space Systems will build the spacecraft, sampling mechanism, and sample return capsule and perform mission operations.

Optical navigation is the use of spacecraft imaging of the target objects to aid in the determination of the spacecraft trajectory and the targets’ ephemerides. Traditional radiometric navigation uses two-way ranging and Doppler tracking from NASA’s Deep Space Network to determine spacecraft line-of-sight range and velocity. For more precise target-relative navigation required during critical mission events, the optical navigation data products provide cross line-of-sight information to de-correlate estimates of the spacecraft state from the target body ephemeris. There are two types of optical navigation, star-based and landmark-based;1 this paper will discuss the former. Star-based OpNav uses background stars as reference points to determine the location of the spacecraft and target bodies.

* Optical Navigation Specialist, Space Navigation and Flight Dynamics Practice, KinetX, Inc., 21 West Easy Street, Suite 108, Simi Valley, CA 93065.
† Chief Optical Navigation Engineer, Space Navigation and Flight Dynamics Practice, KinetX, Inc., 21 West Easy Street, Suite 108, Simi Valley, CA 93065.

For New Horizons, star-based optical navigation plays a crucial role in navigation and operations during the Pluto system encounter and flyby. The navigation precision required by the science operations during encounter is only possible with the addition of optical data to the radiometric orbit determination solution. For a flyby mission such as New Horizons, approach OpNav imaging determines the location of the incoming asymptote of the spacecraft’s hyperbolic trajectory relative to the Pluto system, and imaging also improves the knowledge of the time of closest approach. However, the latter is only improved in the final days before closest approach, once the angle between the spacecraft-to-body vector and the incoming velocity vector has increased to a significant, detectable amount.

For OSIRIS-REx, star-based optical navigation plays a crucial role in successfully targeting the asteroid-relative maneuvers on approach and during preliminary survey. During the subsequent mission phases, the navigation team will transition to a landmark-based OpNav approach.

Several unique aspects of navigation for New Horizons and OSIRIS-REx have influenced the choice of algorithms and motivated the development of a high-fidelity image simulation capability. For New Horizons these include the long round-trip light-travel time to the spacecraft as it approaches the Pluto system, the low communication bandwidth and consequent limit on the number of OpNav images that will be shuttered, the spacecraft trajectory uncertainty, the Pluto system barycenter uncertainty (particularly along the Pluto-Sun vector), and the nature of the Pluto system environment, which has made the issue of hazard avoidance a priority. Additional navigation issues for OSIRIS-REx include the spacecraft attitude control and knowledge uncertainties, and the quick ground-based turnaround time required between closely spaced maneuvers.

SOFTWARE REQUIREMENTS AND GOALS

The nature of these navigation issues imposes numerous requirements and goals on the OpNav software. First and foremost, the tool must be able to find the geometric center of each target body in the image and calculate the partial derivatives of the position of the centers with respect to the spacecraft state. In order to achieve this, the software must be able to predict the location of spacecraft and target bodies and the spacecraft pointing at the epoch of the OpNav image. Then the tool must be able to identify stars in the image field and solve for the observed centers of the stars. The next requirement is to solve for an updated spacecraft pointing. Finally the tool must solve for the center of the target body and calculate the partials.

The OpNav centerfinding accuracy requirement is driven by the Mission Operations requirements for navigation accuracy. For each mission, there is a list of requirements from and on each of the subsystems, based on a series of trade studies, in order to ensure mission success. Mission design drives the spacecraft design and navigation requirements that must be met in order to achieve the science goals and work within the limitations of the instruments and spacecraft system. During mission development, trade studies and iterations between the navigation requirements and mission and spacecraft requirements are used to arrive at a final flight system capable of performing the mission objectives within the mission constraints.

For New Horizons, the DSN tracking constraints and downlink bandwidth impose a limit on the amount of radiometric and optical navigation data that can be acquired during the critical approach and closest approach, where most of the science observations are obtained. Several covariance analyses are performed to demonstrate the likelihood of meeting the navigation requirements given the tracking and optical schedules. The weighting on the optical navigation centerfinding in the covariance analysis drives the centerfinding accuracy goal for the OpNav tool. Generally, the requirement is to have observation residuals of less than one pixel, with the goal being closer to one-tenth of a pixel.

Considering the unique navigation scenarios for both missions, the ability to simulate image and ancillary data has become very important. Image simulation provides many benefits for software development and mission operations planning. The capability to produce simulated images and ancillary data can be used to test the OpNav software centerfinding against a known truth center in the presence of noise. The simulation capabilities also enable the team to conduct meaningful Operational Readiness Tests (ORTs) by simulating the operations scenario during critical mission phases.

Finally, a high-level goal in the development of this OpNav tool is to design the algorithms and infrastructure to be mission-independent, enabling its use in flying any interplanetary mission that requires optical navigation.

DEVELOPMENT PLATFORM

The architecture of the KinetX OpNav software has been strongly influenced by two choices made early in its development. First, MATLAB™ was chosen as the development platform for the software. This decision was made because of MATLAB’s extensive library of built-in image processing and optimization functions, as well as its rich library of visualization tools. The MATLAB environment, based on the experience of these authors, encourages rapid prototyping of code with a minimum of coding errors. In addition, the KinetX OpNav software uses NASA’s Navigation and Ancillary Information Facility’s (NAIF) SPICE toolbox2 to interrogate SPICE kernels delivered by the project and by NAIF. The information derived from these kernels includes spacecraft state and attitude in inertial space, the camera-to-inertial rotation matrix, conversion from Coordinated Universal Time (UTC) to Ephemeris Time (ET) including leap seconds, and the planet and satellite ephemerides. The NAIF/SPICE system has become the standard tool for organizing navigation and ancillary information in deep-space missions.

PREDICTING BODY AND SPACECRAFT GEOMETRY

For a given observation scenario and epoch, OpNav processing requires the following SPICE kernels:

1. The spacecraft ephemeris file (SPK), given as a function of time
2. The planet, satellite, comet, or asteroid ephemerides (SPK), given as a function of time
3. The Planetary Constants Kernel (PcK), containing information about the orientation, sizes, shapes, etc. of the planetary bodies
4. The Instrument Kernels (IK), containing instrument field-of-view, shape, and orientation
5. The C-kernel (CK), containing predicted or reconstructed spacecraft attitude information
6. The Frames Kernel (FK), containing rotations and relationships between reference frames
7. The Leapsecond Kernel (LSK), containing an updated list of leapseconds to convert between UTC and International Atomic Time (TAI), which is easily converted to ET
8. The Spacecraft Clock Time Kernel (SCLK), for converting between SCLK and ET
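In practice, kernels such as these are often gathered into a single meta-kernel text file that the software loads in one call. The sketch below shows the standard SPICE meta-kernel format; the file names are hypothetical placeholders, not actual mission products:

```
\begindata

KERNELS_TO_LOAD = (
    'kernels/spk/spacecraft_trajectory.bsp',
    'kernels/spk/satellite_ephemerides.bsp',
    'kernels/pck/planetary_constants.tpc',
    'kernels/ik/camera_instrument.ti',
    'kernels/ck/predicted_attitude.bc',
    'kernels/fk/mission_frames.tf',
    'kernels/lsk/leapseconds.tls',
    'kernels/sclk/spacecraft_clock.tsc'
)

\begintext
```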


Using the SPICE toolbox for MATLAB (MICE), a metakernel file, containing the list of desired kernels, is loaded into MATLAB to be interrogated by various functions to predict the spacecraft and target body geometry. With all these kernels loaded and a specified epoch, essentially all requisite geometric information can be extracted or calculated. This includes ET, UTC, spacecraft state (position and velocity vectors) with respect to any body or frame origin, body state with respect to any body or frame origin, spacecraft attitude quaternion, camera-to-inertial matrix, and the camera boresight vector and twist angle.

An important capability that has been derived from these functions is the ability to simulate kernels. For missions that are still in early planning phases, like OSIRIS-REx and perhaps proposed missions, there are no actual kernels available aside from the target ephemeris and the leapseconds kernel. In order to simulate images for early OpNav analyses, the ability to simulate various kernels has been developed. The instrument and frames kernels can be constructed using current or proposed camera designs, while assuming a simple rotation from the camera to the spacecraft frame. The mission design team can produce a simulated spacecraft ephemeris SPK file. Given all of these files, the user can specify the desired boresight RA/Dec (usually centered on the target body), and the software can generate a binary C-kernel file using a MICE utility. With this simulated C-kernel, the optical navigation team can generate simulated images and process them to conduct a variety of trade studies and analyses.

SIMULATED IMAGES

The ability to simulate high fidelity images will help the project to mitigate the consequences of uncertainties in the spacecraft and target states through a combination of ORTs and various contingency and off-nominal scenarios that bound the nominal mission scenario. The primary goal of image simulation is to place stars and bodies, with the appropriate brightness, in the field of a particular camera model and at a particular epoch with or without image noise. The correct object placement is critical while the fidelity of the photometry model is usually less of a concern.

A model of the camera is required to generate simulated images. The camera is modeled as a system with two major components: the telescope and the detector. For simulation purposes, the aperture of the stop and the focal length define the telescope. The detector is modeled by specifying the pixel size, the number of pixels, the Quantum Efficiency (QE) of the detector, the gain (electrons-to-Data Number), and the read noise. The physical size of the detector pixel and the telescope focal length define the footprint of the detector pixel (in radians) on the sky. For high fidelity simulations, a camera distortion model can be applied to place objects in the correct location in the camera Field of View (FOV).
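The pixel-footprint relation above can be made concrete with a short sketch. The flight code is MATLAB; this illustrative Python is not the flight implementation, and the 13.1 µm pixel size is an assumed value back-computed from the LORRI numbers quoted later in the paper (2.619 m focal length, 5 µrad per pixel):

```python
import math

def pixel_ifov_rad(pixel_size_m: float, focal_length_m: float) -> float:
    """Angular footprint (IFOV) of one detector pixel, in radians.
    For small angles, IFOV ~ pixel size / focal length."""
    return pixel_size_m / focal_length_m

def detector_fov_deg(n_pixels: int, pixel_size_m: float, focal_length_m: float) -> float:
    """Full field of view across an n_pixels-wide detector, in degrees."""
    return math.degrees(n_pixels * pixel_ifov_rad(pixel_size_m, focal_length_m))

# LORRI-like values: 2.619 m focal length; the ~13.1 um pixel size is
# assumed here so that one pixel subtends roughly 5 urad on the sky.
ifov = pixel_ifov_rad(13.1e-6, 2.619)          # ~5.0e-6 rad per pixel
fov = detector_fov_deg(1024, 13.1e-6, 2.619)   # ~0.29 deg across the array
```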

Image simulation requires the specification of the time (epoch) at which the image is shuttered. Given the epoch of the observation, the SPICE kernels return the inertial state and attitude of the spacecraft and the inertial positions of the target bodies (e.g. Pluto and its satellites) as viewed from the spacecraft. The spacecraft attitude and the rotation matrix from the camera reference frame to the spacecraft reference frame are combined to yield the rotation matrix from inertial space to the camera frame.

With this information from the SPICE kernels, the code calculates the camera boresight in inertial space (Right Ascension and Declination in J2000). Given the boresight and camera FOV, it can interrogate star catalogs to generate a list of candidate stars present in the image. Currently the software can access, from the MATLAB workspace, the Tycho-2, UCAC2, and UCAC4 star catalogs. The data returned for each candidate star includes the inertial direction at the observation epoch, corrected for proper motion, the magnitude of the star, and the stellar parallax if it is available. The software applies the correction for stellar aberration due to the motion of the spacecraft and updates the parallax correction to the value it would have at the location of the spacecraft. The updated inertial directions are mapped to the camera coordinate system using the camera-to-inertial rotation matrix. The camera distortion model is applied to this location to predict the (pixel, line) location on the detector at which the star will be imaged.
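The stellar aberration correction mentioned above can be sketched to first order in v/c: the apparent direction is the catalog unit vector nudged by the spacecraft velocity over the speed of light. This is an illustrative Python sketch, not the flight implementation:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def aberrate(u, v_sc_km_s):
    """First-order stellar aberration: shift the unit direction u to a star
    by the spacecraft velocity v (km/s), as seen in the spacecraft frame."""
    u = normalize(u)
    return normalize(tuple(ui + vi / C_KM_S for ui, vi in zip(u, v_sc_km_s)))

# A spacecraft moving at 30 km/s perpendicular to the line of sight
# deflects the apparent direction by ~v/c ~ 1e-4 rad (~20.6 arcsec).
u_app = aberrate((1.0, 0.0, 0.0), (0.0, 30.0, 0.0))
shift_rad = math.acos(max(-1.0, min(1.0, u_app[0])))
```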

Modeling Stars in Simulated Images

The photometry model assumes Vega as the standard candle. Hayes3,4 has measured the flux from Vega arriving at the Earth as a function of wavelength. This flux, converted to photons, is integrated over the band pass appropriate for the specific instrument. The number of detected photoelectrons from an object is determined by scaling the flux from Vega by the magnitude difference, the integration time of the exposure, the diameter of the stop, and the QE of the detector.
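The magnitude scaling described above can be written as a one-line model: each magnitude step changes the flux by a factor of 10^0.4. The function below is an illustrative sketch with made-up argument names, not the flight photometry code:

```python
import math

def detected_photoelectrons(flux_vega_ph, mag, exposure_s, aperture_diam_m, qe):
    """Scale a Vega-referenced photon flux (photons / s / m^2, already
    integrated over the instrument band pass) to detected photoelectrons
    for a star of magnitude `mag`, given the exposure time, the diameter
    of the stop, and the detector QE."""
    area = math.pi * (aperture_diam_m / 2.0) ** 2
    return flux_vega_ph * 10.0 ** (-0.4 * mag) * exposure_s * area * qe
```

A quick consistency check: a star 5 magnitudes fainter than Vega should yield exactly 100 times fewer photoelectrons.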

The Point Spread Function (PSF) of a camera is modeled as a two-dimensional Gaussian function. The parameterization of this non-rotationally symmetric model consists of two widths, characterized as standard deviations, for the two orthogonal directions, a rotation angle from the vertical axis of the detector, the amplitude of the function, and the (pixel, line) location of the center of the function. The PSF is normalized to an integrated value of one. For stars and unresolved target bodies, the normalized PSF is scaled to the integrated flux in detected photoelectrons derived from the photometry model.
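The rotated-Gaussian PSF with normalize-then-scale behavior can be sketched as follows; this illustrative Python stands in for the MATLAB flight code and uses simple nested lists rather than image arrays:

```python
import math

def gaussian_psf(nx, ny, x0, y0, sx, sy, theta, total_flux):
    """Sample a rotated 2D Gaussian PSF on an nx-by-ny pixel grid,
    normalized so the samples sum to total_flux (detected photoelectrons).
    (x0, y0) is the (pixel, line) center; sx, sy are the two widths in
    pixels; theta is the rotation from the detector axes in radians."""
    ct, st = math.cos(theta), math.sin(theta)
    img = [[0.0] * nx for _ in range(ny)]
    total = 0.0
    for j in range(ny):
        for i in range(nx):
            # rotate the pixel offsets into the PSF principal-axis frame
            dx, dy = i - x0, j - y0
            xr = ct * dx + st * dy
            yr = -st * dx + ct * dy
            img[j][i] = math.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
            total += img[j][i]
    # normalize to unity, then scale to the photometry-model flux
    return [[v * total_flux / total for v in row] for row in img]
```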

A side-by-side comparison of a simulated and a real image of the M7 star cluster is shown in Figure 1. The image was shuttered on August 31st, 2006 at 07:21:31.302 UTC with New Horizons’ Long Range Reconnaissance Imager (LORRI) instrument.5 LORRI has a diameter of 0.208 meters and a focal length of 2.619 meters. The detector is a 1024x1024 array with pixels that map to 5 micro-radians on the sky. The simulated image uses the UCAC2 star catalog and includes simulated stars brighter than 12th magnitude.

Figure 1. On the left is a real spacecraft image of M7 taken by the New Horizons LORRI instrument on August 31st, 2006 at 07:21:31.302 UTC. On the right is a simulated LORRI image at the same epoch, using the UCAC2 star catalog. The horizontal axis represents increasing pixel direction and the vertical axis represents increasing line.

Simulating Target Bodies in OpNav Images

The apparent inertial position of the geometric center of the target bodies relative to the spacecraft is calculated by the SPICE toolbox routine ‘spkezr’ from the data in the spacecraft trajectory kernel, the planetary ephemeris kernel, and the satellite ephemeris kernel. This position is corrected for the aberration induced by the spacecraft velocity and for the light-travel time from the target to the spacecraft, i.e. the position returned is the position of the target at the time the light left the body. This inertial position is mapped to the non-linear (pixel, line) space on the detector using the camera-to-inertial rotation matrix and the camera distortion model.

For resolved target bodies, it is necessary to calculate the detected flux that is collected by each detector pixel that maps onto the surface of the body. Scaling the apparent magnitude as observed from the Earth to the apparent magnitude as seen from the spacecraft is necessary to calculate the total integrated flux from the body that is sensed by the detector. This flux is calculated by exercising the photometry model to calculate the total number of detected photoelectrons from the target. To determine the flux collected by each detector pixel, it is first necessary to determine which pixels map onto the surface of the body. Given the diameter of the target body, the spacecraft state vector, and the camera model, a straightforward analytic geometry calculation determines if a given detector pixel maps onto the body surface. For these pixels, the angle between the Sun-body vector and the local surface normal (incidence angle) and the angle between the camera-pixel-to-surface-intercept vector and the local surface normal (scatter or emission angle) are readily calculated. Given a uniform surface albedo, or in the case of Pluto a latitude- and longitude-dependent surface albedo derived from the albedo map,6 and a surface scattering law, the calculation of I/F for each pixel is straightforward. Currently, two surface scattering laws, Lambert and Lommel-Seeliger, have been implemented. Normalizing the resulting set of illuminated pixels to unity and scaling the resulting image to the integrated detected photoelectrons yields the simulated image of the target body as sensed by the instrument.
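The two scattering laws named above have simple per-pixel forms in terms of the incidence angle i and emission angle e; a minimal sketch (illustrative Python, with unnormalized I/F returned up to a constant factor, consistent with the normalize-then-scale step in the text):

```python
import math

def lambert_i_over_f(inc_rad):
    """Lambert law: reflected intensity proportional to cos(i)."""
    return max(0.0, math.cos(inc_rad))

def lommel_seeliger_i_over_f(inc_rad, emis_rad):
    """Lommel-Seeliger law: I/F ~ cos(i) / (cos(i) + cos(e))."""
    mu0, mu = math.cos(inc_rad), math.cos(emis_rad)
    if mu0 <= 0.0:
        return 0.0  # the pixel is not illuminated by the Sun
    return mu0 / (mu0 + mu)
```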

For partially resolved target bodies, when the apparent diameter of the body is between 1 and 4 pixels, a special technique is used to calculate a more accurate flux for each pixel that intercepts the body. Each of these pixels is divided into 25 sub-pixels, for which an algorithm similar to that described above calculates the detected photoelectron value for each sub-pixel. The sub-pixel photoelectron values are then averaged to produce a more accurate photoelectron count for the whole pixel.
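The 25-sub-pixel averaging step can be sketched generically; here `flux_at` is a hypothetical callable standing in for the body-rendering model, and the 5x5 grid matches the 25 sub-pixels in the text (illustrative Python, not the flight code):

```python
def subsampled_pixel_value(flux_at, px, py, n_sub=5):
    """Average a pixel's detected flux over an n_sub x n_sub grid of
    sub-pixel sample points, for bodies only a few pixels across.
    `flux_at(x, y)` returns the modeled flux at a fractional
    (pixel, line) position."""
    total = 0.0
    for j in range(n_sub):
        for i in range(n_sub):
            # sample at the center of each sub-pixel cell
            x = px + (i + 0.5) / n_sub - 0.5
            y = py + (j + 0.5) / n_sub - 0.5
            total += flux_at(x, y)
    return total / (n_sub * n_sub)
```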

Figure 2 shows an example of a simulated LORRI image. This example image will be shuttered on July 9, 2015 at 04:29:00 as part of OpNav campaign 4. The figure on the left is on a log stretch to bring up the stars in the simulated image. The figure on the right is on a linear stretch. The locations of the green circles are the predicted positions of the stars in the image that have been returned from the UCAC2 catalog. The slight discrepancy between the positions of the simulated stars and the positions derived from the catalog is due primarily to the fact that the catalog positions have not had stellar aberration applied to them. Pluto (the larger body in the upper right) and Charon (the smaller body in the lower left) are visible in the image. Note the presence of albedo variations on the simulated image of Pluto.


Figure 2. On the left is a log-scaled simulated LORRI OpNav image of Pluto & Charon with epoch July 9, 2015 at 04:29:00. On the right is the same image with a linear stretch.

The ability to simulate images and kernels provides the analyst with the capability to undertake a variety of studies, including testing the OpNav system. First, it enables the team to check algorithms against truth simulations, which aids in rapid software prototyping and debugging. Second, it serves as a useful tool in the picture planning process. It allows the navigation team to check sequences for the requested pointing, check for a sufficient number and brightness of field stars, and even simulate when initial detection of a target body is expected. Finally, the simulation capability provides the OpNav input to ORTs. These tests simulate mission-critical scenarios to ensure the mission operations infrastructure and processes are in place and the team members are prepared. The ORTs are designed to test the entire navigation process, from simulating optical and radio data, through data processing, to integrating the observables into the orbit determination process. Various cases with different perturbations in the barycenter, time of flight, and B-plane target are used to test the process and ensure the team is capable of detecting the perturbations.

Figure 3 shows a simulated OSIRIS-REx PolyCam image of the asteroid 1999 RQ36 on the approach trajectory. It is shuttered on September 24, 2018, 28 days before the spacecraft arrives at the asteroid. The star field in the simulation represents the actual star field expected in the real images shuttered by the PolyCam instrument on that day. The asteroid, which is still sub-pixel, appears in the center surrounded by a magenta box. By simulating this image, and a slew of others on approach, the team is able to confirm that there are enough bright stars in the field to obtain a sufficient navigation solution.


Figure 3. A simulated OSIRIS-REx PolyCam image of asteroid 1999 RQ36 on September 24th, 2018. The asteroid is in the center of the magenta box, surrounded by field stars.

STAR CENTERFINDING

The processing steps involved in OpNav image analysis consist of estimating the geometric center of the catalog stars visible in the image, solving for the spacecraft attitude, and solving for the geometric center of the target bodies that are present in the image. The first of these steps is described in the following text.

For a given epoch, the nominal camera boresight on the sky is returned from the spacecraft attitude kernel. Given this boresight vector, combined with the nominal FOV of the camera, interrogation of the star catalogs returns a list of stars and their positions that are expected to be present in the image. The code maps these star positions onto the non-linear detector space, corrected for stellar aberration, parallax, and the camera distortion model. These are the a priori positions for the stars. The code then confirms the stars that should be present in the image. For each star in this subset, the code isolates a small region in the image centered on the a priori position for the star. To make the code less dependent on mission- and camera-specific constraints, several centerfinding algorithms have been implemented, as discussed in the following subsections.

The Matched Filter Algorithm

The first option is a matched filter algorithm, which is an optimal linear filter for detection. This method involves correlating a known signal with an unknown signal to detect the unknown signal in the presence of noise. The fundamental assumption underlying this method is that the spatial frequency content of the known and unknown signals is the same. The template for the filter is a point spread function (PSF) extracted from a well-exposed bright star in a real image. In use, the code extracts a circular portion of an OpNav image where a star brighter than a given magnitude is expected, centered at the predicted star center with a radius large enough to ensure the star is in the extracted data even in the presence of a large pointing error. The extracted area should be small enough to exclude multiple stars that could lead to false identifications. This extracted-area radius must be tailored to a specific spacecraft and instrument, given the differences in attitude knowledge, and can also be modified interactively in the event of an unusually large pointing error. A code option, which forces the user to manually examine each extracted area, has been implemented for the express purpose of excluding false positives that might ring the filter.


For each star, the code multiplies the fast Fourier Transform (FFT) of the extracted data with the complex conjugate of the FFT of the template PSF. The inverse Fourier Transform of this product is the cross-correlation of the filter template with the star. A paraboloid is then fitted to the peak of the matched filter result, which gives a (pixel, line) solution for the center of the star. The algorithm is repeated for each star.
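The correlate-then-fit idea can be sketched in a few lines. The flight code works in the Fourier domain; the illustrative Python below does the equivalent direct spatial correlation (fine for small sub-images) and refines the peak with 1D parabolas in each axis, a simplified stand-in for the paraboloid fit:

```python
def correlate(image, template):
    """Direct (non-FFT) cross-correlation of a small sub-image with a
    template PSF, as nested lists of floats."""
    nyt, nxt = len(template), len(template[0])
    ny, nx = len(image), len(image[0])
    out = [[0.0] * (nx - nxt + 1) for _ in range(ny - nyt + 1)]
    for j in range(len(out)):
        for i in range(len(out[0])):
            out[j][i] = sum(image[j + v][i + u] * template[v][u]
                            for v in range(nyt) for u in range(nxt))
    return out

def subpixel_peak(corr):
    """Locate the correlation peak, then refine each axis with a parabola
    through the peak and its two neighbors."""
    jp, ip = max(((j, i) for j in range(len(corr)) for i in range(len(corr[0]))),
                 key=lambda t: corr[t[0]][t[1]])

    def refine(cm, c0, cp):
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0.0 else 0.5 * (cm - cp) / d

    dx = refine(corr[jp][ip - 1], corr[jp][ip], corr[jp][ip + 1]) if 0 < ip < len(corr[0]) - 1 else 0.0
    dy = refine(corr[jp - 1][ip], corr[jp][ip], corr[jp + 1][ip]) if 0 < jp < len(corr) - 1 else 0.0
    return ip + dx, jp + dy
```

Correlating a Gaussian star image against a small Gaussian template and refining the peak recovers the star center to a fraction of a pixel, which is the behavior the text describes.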

Figure 4 shows the star used for the LORRI matched filter on the left and, on the right, an extracted area containing the star with the unknown center. Figure 5 shows the matched filter result. The (pixel, line) center for the star in the 16 by 16 pixel sub-array is (8.34, 10.56). The matched filter algorithm has proven useful for generating an initial guess for the center of the star. It has proven to be particularly robust in identifying faint stars in the presence of noise.

Figure 4. Left: the matched filter template star for LORRI. Right: the extracted sub-image containing the star of unknown center. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].

Figure 5. Left: the result of the matched filter. Right: the paraboloid used to fit the matched filter result. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].


The Cross-Correlation Algorithm

The second algorithm implemented in the software is a cross-correlation algorithm. Similar to the matched filter, it involves extracting a 16 by 16 pixel sub-image containing the star in question. The algorithm takes the FFT of the data and cross-correlates it with the FFT of a Gaussian PSF. A paraboloid is then fitted to the peak of the cross-correlation result, producing a (pixel, line) center solution for the star.

Figure 6 shows the Gaussian model function on the left and, on the right, the extracted area containing the same star with the unknown center as in Figure 4. Figure 7 shows the cross-correlation result and the fitted paraboloid. The (pixel, line) center for the star in the 16 by 16 pixel sub-array is (8.35, 10.70). Like the matched filter, the cross-correlation algorithm has proven useful for generating an initial guess for the center of the star. This algorithm, however, does not appear to be as robust as the matched-filter algorithm in pulling a weak signal (i.e., a dim star) out of the noise.

Figure 6. Left: the Gaussian model function for the cross-correlation. Right: the extracted sub-image containing the star of unknown center. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].

Figure 7. Left: the result from the cross-correlation algorithm using the model and data in Figure 6. Right: the paraboloid used to fit the cross-correlation result. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling of the function.


The Non-Linear Least Squares Estimator

The workhorse algorithm implements a non-linear least squares estimator. Using an initial guess from the matched-filter or cross-correlation algorithm, this estimator has proven robust against false positives and yields internally consistent results on both real and simulated images. The model function is a canonical Gaussian PSF. The parameters that are estimated include the amplitude, the two standard deviations, the rotation of the PSF, and most importantly, the (pixel, line) center of the star. Options to estimate the local background level and weight the data by the expected noise in each pixel have also been implemented. The code uses the MATLAB function ‘lsqcurvefit’ to perform the least squares fit.
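The Gauss-Newton core of such an estimator can be sketched in a stripped-down form that refines only the (pixel, line) center of a circular Gaussian of known amplitude and width. The full estimator also solves for the amplitude, two widths, rotation, and background using MATLAB's `lsqcurvefit`; the illustrative Python below is an assumption-laden simplification, not the flight code:

```python
import math

def fit_gaussian_center(img, x0, y0, amp, sigma, n_iter=10):
    """Gauss-Newton refinement of the (pixel, line) center of a circular
    Gaussian PSF with known amplitude `amp` and width `sigma`, fitted to
    the sub-image `img` (nested lists, img[line][pixel])."""
    ny, nx = len(img), len(img[0])
    for _ in range(n_iter):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for j in range(ny):
            for i in range(nx):
                m = amp * math.exp(-0.5 * ((i - x0) ** 2 + (j - y0) ** 2) / sigma ** 2)
                r = img[j][i] - m               # residual: data minus model
                dx = m * (i - x0) / sigma ** 2  # d(model)/d(x0)
                dy = m * (j - y0) / sigma ** 2  # d(model)/d(y0)
                a11 += dx * dx; a12 += dx * dy; a22 += dy * dy
                b1 += dx * r; b2 += dy * r
        det = a11 * a22 - a12 * a12
        if det == 0.0:
            break
        # solve the 2x2 normal equations for the center update
        x0 += (a22 * b1 - a12 * b2) / det
        y0 += (a11 * b2 - a12 * b1) / det
    return x0, y0
```

On noise-free synthetic data this converges from a sub-pixel-offset initial guess (such as a matched filter output) to the true center within a few iterations.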

The right image in Figure 8 shows the extracted image sub-array containing the star, weighted by the estimated noise in each pixel. The left image in Figure 8 shows the Gaussian model function resulting from the least squares fit to the star. Figure 9 shows the array resulting from subtracting the Gaussian model fit from the real sub-image containing the star. Note that the differences in Figure 9 do not show any residual traces of the star in the image, suggesting a very good result from the algorithm. The reduced chi-squared value for this fit is 1.8. The (pixel, line) center for the star in the 16 by 16 pixel sub-array is (8.46, 10.37).

Figure 8. Left: the Gaussian model function for the least squares fit to the real star on the right, which is weighted by the estimated noise in each pixel. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].


Figure 9. The resulting array from differencing the two images in Figure 8, the star sub-array and the Gaussian model fit. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].

Comparison Between the Three Centerfinding Algorithms

Table 1 compares the centerfinding results from the three algorithms previously described. The differences between the solutions are all sub-pixel, suggesting the algorithms are all valid for the example images. As previously mentioned, the matched filter and cross-correlation algorithms are preferred for the first iteration of the centerfinding, the ‘initial guess’. Once the calculated center is close to the truth, the least squares algorithm is generally used to solve for the final center.

Table 1. Results comparing the three star centerfinding algorithms described above

Algorithm          Pixel   Line    Pixel Difference from     Line Difference from
                                   Least Squares Solution    Least Squares Solution
Matched Filter     8.34    10.56   -0.12                     +0.19
Cross Correlation  8.35    10.70   -0.11                     +0.33
Least Squares      8.46    10.37    0.00                      0.00

SPACECRAFT REPOINTING SOLUTION

The attitude knowledge for some modern interplanetary spacecraft can deviate from the actual attitude by an amount that translates to more than 40 pixels in the image plane, an error attributable to the nature of the spacecraft's attitude control system. To predict the locations of the target bodies in the image to the precision required by the OD filter, it is necessary to estimate the attitude of the imager to an accuracy that translates to less than one pixel of uncertainty in the image plane. The KinetX OpNav software uses an estimator that solves for the spacecraft attitude that minimizes the residuals, in a least-squares sense, between the measured and predicted positions of stars in the image.

Three parameters are necessary to specify the spacecraft attitude. An estimator has been implemented that solves for two characterizations of these parameters. The first approach is image-plane based: it solves for the image plane shift, in two directions, and a rotation of the image that best registers the predicted star positions with their measured centers. The second algorithm is based directly on the spacecraft attitude, characterized by the camera-to-inertial rotation matrix. A non-linear least squares estimator is used to solve for the three Euler angles of this matrix that minimize the residuals between the predicted and measured star locations. This is an iterative process that updates the predicted star positions after each iteration. A tolerance is set in the code to return once the difference between the current and last iteration is sufficiently small.
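The image-plane form of the solution can be illustrated with a minimal sketch. For compactness this uses a closed-form rigid (Procrustes/Kabsch) registration rather than the iterative estimator described above; the function name and the synthetic star field are hypothetical:

```python
import numpy as np

def repoint_shift_rotation(predicted, measured):
    """Least-squares rigid registration (shift + rotation) of predicted star
    centers onto measured ones -- the image-plane form of the repointing fit."""
    pc = predicted - predicted.mean(axis=0)
    mc = measured - measured.mean(axis=0)
    # 2x2 cross-covariance; its SVD yields the optimal rotation (Kabsch)
    H = pc.T @ mc
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = measured.mean(axis=0) - R @ predicted.mean(axis=0)
    twist = np.arctan2(R[1, 0], R[0, 0])
    return R, t, twist

# Synthetic star field offset by (4.55, 12.27) pixels and twisted -0.0105 rad,
# roughly the magnitudes reported in Table 2
rng = np.random.default_rng(1)
predicted = rng.uniform(0, 1024, size=(20, 2))
twist_true = -0.0105
c, s = np.cos(twist_true), np.sin(twist_true)
R_true = np.array([[c, -s], [s, c]])
measured = predicted @ R_true.T + np.array([4.55, 12.27])
R, t, twist = repoint_shift_rotation(predicted, measured)
print(t, twist)
```

Because the fit runs over many stars at once, the recovered shift and twist are insensitive to the error in any single star center.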

Figure 10 displays the result of a spacecraft repointing solution for an image of the asteroid 2002 JF56 shuttered by the New Horizons MVIC7 camera on June 11, 2006 at 17:07:17 UTC. The plot on the left shows the position residuals of the stars in the image as a function of magnitude before the repointing solution. The plot on the right shows the residuals after the repointing solution has been applied. Table 2 contains the pixel, line, and twist deltas for each iteration, and the sum at the end of the last iteration. The differences between measured and predicted star positions, readily discernible in Figure 10, were around 4.5 pixels and 12.2 lines. The post-fit residuals have a scatter of less than 0.5 pixel, well within the 1-pixel requirement.

Figure 10. Left: Position residuals before the repointing solution. Right: Position residuals after the repointing solution.

Table 2. Results Comparing Repointing Deltas for Each Iteration

Iteration   Pixel         Line          Twist (deg)
1            4.5532       12.271        -1.0542e-02
2           -2.1563e-03    3.0982e-03   -5.9722e-04
3           -8.6684e-04   -7.3236e-04   -2.1321e-05
4            8.4557e-04    6.8703e-04    2.1630e-05
Total        4.5510       12.274        -1.1139e-02

TARGET BODY CENTERFINDING

The OpNav analysis products required from each image for orbit determination include the predicted and measured target body geometric centers and the state partial derivatives, which are the partial derivatives of the target body geometric center with respect to the spacecraft state in inertial space relative to a reference target body. For New Horizons, the reference target body is Pluto. For OSIRIS-REx, the reference target body is asteroid 1999 RQ36.

Unresolved Object Centerfinding

The algorithms in place for centerfinding on unresolved targets, or targets whose projected area maps to less than one detector pixel, are very similar to stellar centerfinding. The target body is modeled as a 2D Gaussian function and the three algorithms described in the star centerfinding section above are used.

Figure 11 shows a zoomed New Horizons image of Pluto shuttered by the LORRI camera in June 2012, with a circle centered on the a priori estimate of the center of the object. First, a repointing solution was derived for this image; Figure 12 shows a side-by-side comparison of the observed-minus-computed star-center residuals before and after the repointing solution. The image pointing knowledge was refined by 30.57 pixels in the pixel direction, 11.08 pixels in the line direction, and -0.003476 radians in twist, with final residuals less than 0.2 pixels. The pixel-line center of Pluto in this image was found to be (979.04, 34.29), and the derived right ascension and declination of the body after repointing are 270.7635 degrees and -14.6824 degrees, respectively. The residuals in right ascension*cos(declination) and declination are -0.08963 and 0.30037 arc-seconds, respectively. Centerfinding and pointing knowledge determinations at this level are sufficient for OpNav to support the required orbit determination accuracy.


Figure 11. A New Horizons image of Pluto taken by the LORRI imager in June 2012. The pixel-line center of Pluto was found to be (979.04, 34.29) and the derived (right ascension, declination) is (270.7635, -14.6824) degrees.

Figure 12. A side-by-side comparison of the star residuals before (left) and after (right) the repointing solution for the image in Figure 11. The derived image pointing solution was 30.57 pixels in the pixel direction, 11.08 pixels in the line direction, and -0.003476 radians in twist, with final residuals less than 0.2 pixels.

Extended Object Centerfinding

Three algorithms have been implemented for extended object centerfinding. Two of the algorithms require the ability to simulate an image of the target body, as described in the section above on simulating images. The third algorithm requires the ability to predict the surface brightness as seen from the spacecraft at any location on the body. The code first isolates a region in the image (after the repointing solution) based on either an a priori estimate of the target body center and size in pixels, or a user-identified center and radius supplied via an interactive mouse click and circling on the image itself.

Cross-Correlation Target Centerfinding. The Fourier Transform cross-correlation theorem is used to cross-correlate the model of the target with the data in a sub-array centered on an estimate of the target center. A paraboloid is fit to the area in the vicinity of the peak of the output. The parameters of this fit are used to derive the sub-pixel estimate of the target center.
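A minimal sketch of this technique, with two 1D parabola fits standing in for the paraboloid fit and a synthetic Gaussian 'target' in place of a rendered body model (all names are illustrative):

```python
import numpy as np

def subpixel_cross_correlate(data, model):
    """Cross-correlate via the FFT correlation theorem, then refine the peak
    with quadratic fits through the peak for a sub-pixel offset."""
    F = np.fft.fft2(data) * np.conj(np.fft.fft2(model))
    cc = np.fft.fftshift(np.fft.ifft2(F).real)
    py, px = np.unravel_index(np.argmax(cc), cc.shape)
    # 1D parabola through the peak sample and its two neighbours on each axis
    def refine(a, i):
        return 0.5 * (a[i - 1] - a[i + 1]) / (a[i - 1] - 2 * a[i] + a[i + 1])
    dy = refine(cc[:, px], py)
    dx = refine(cc[py, :], px)
    cy, cx = np.array(cc.shape) // 2
    return (px + dx - cx, py + dy - cy)   # (pixel, line) shift of data vs model

# Synthetic check: a Gaussian 'target' shifted by a known sub-pixel amount
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda x0, y0: np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 3.0 ** 2))
model = blob(32.0, 32.0)
data = blob(34.3, 30.6)                   # shifted by (+2.3, -1.4)
shift = subpixel_cross_correlate(data, model)
print(shift)
```

The recovered shift is the offset of the target center from the a priori estimate; adding it to the a priori center gives the sub-pixel center solution.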

The image on the left in Figure 13 is a simulated near-encounter image of Pluto that was used in a recent operational readiness test (ORT). This image was simulated with a perturbed Pluto barycenter ephemeris, shifting the Pluto system 4500 km closer to the Sun, radially along the Pluto-Sun line. The red circle around the body is centered on the derived center solution for the image. Since the incoming hyperbolic trajectory for New Horizons is only 15 degrees off this radial line, an ephemeris error of this nature is primarily along the spacecraft's downtrack direction and therefore difficult to detect using OpNav until the last few images. In this test, the analyst treats the simulated image as 'real' and processes it blind to the ephemeris perturbation. The simulated image generated by the software during processing is smaller (since Pluto is thought to be farther away), as shown in the difference plot in Figure 13 (right).


Figure 13. Left: A perturbed simulated 'real' image of Pluto. Right: The difference between the 'real' and an unperturbed simulated image. The bar on the right of each figure represents the color scaling in DN.

Non-linear Least Squares Target Centerfinding. The second algorithm uses a non-linear least squares estimator to find a center solution that minimizes the difference between the model (the simulated image) and the data in the sub-array, using the MATLAB 'lsqcurvefit' function.

Limb-Scanning Algorithm for Target Centerfinding. If the universe were populated solely by featureless spheres, the classical cross-correlation and least squares techniques would suffice to find the centers of these bodies. Nature, however, presents a diversity of targets with an almost infinite variety of shapes, textures, colors, and hues. Pluto provides an excellent example of the issues faced when trying to estimate the center of a body with significant albedo variations across its surface. The work reported by Buie et al.6, which derived a low-resolution albedo map for Pluto, motivated incorporating the ability to simulate images of Pluto with surface brightness variations based on this map. Simulations showed that these brightness variations can cause a significant bias in the center solution (sometimes greater than several pixels) when the model function has a uniform albedo surface. The bias depends on the viewing geometry, the face presented by Pluto to the observer, and the scattering law assumed for the surface. This result is not surprising: the classical estimators should be expected to pull a uniform albedo sphere toward regions of higher albedo and away from regions of lower albedo. The problem would be mitigated if the target albedo were stable with time. Buie et al.6, however, have found a significant secular variation in the surface albedo of Pluto over a time frame suggesting that the surface Pluto presents to the New Horizons spacecraft at encounter in July 2015 may not resemble the surface shown in Figure 14.

The Jet Propulsion Laboratory optical navigation group first used limb-scanning for optical navigation in the late 1970s during the Voyager Jupiter encounters.8 The KinetX team has implemented its own version of a limb-scanning algorithm. The algorithm assumes an a priori target body center, derived from a SPICE kernel and updated by the current value of the repointing solution. The performance of this algorithm is illustrated using the simulated image of Pluto displayed in Figure 14, to be shuttered by the LORRI camera on the New Horizons spacecraft at 2015-07-12 08:46:57 UTC. First, a set of vectors is defined, radiating from the a priori center, with sufficient length to guarantee crossing either the limb or the terminator. The model function is specified by the value of the uniform albedo, the surface scattering law, and the viewing geometry. The code calculates the observed brightness on a uniformly spaced set of points on each vector, which represents a single limb scan. The user manually chooses a subset of limb scans, driven by the requirements that each limb scan traverse a region of relatively uniform surface albedo, preferentially across the lit limb, and that the ensemble of scans span a sufficient range of angles to adequately constrain the center solution. Figure 15 shows the limb scans used for this example. The data are interpolated onto the (x, y) positions of each limb scan. From the set of simulated and real limb scans, the software cross-correlates each real limb scan with its simulated counterpart and associates the maximum of the resulting vector with the offset of the data from the a priori center. This set of offsets, in x and y, is fed to a least squares estimator, which adjusts the center to minimize the vector of offsets in a least squares sense. Figure 16 and Table 3 compare the centerfinding error obtained with this limb-scanning algorithm to the error obtained by cross-correlating the image with a simulated image derived from a uniform albedo sphere. Parenthetically, note that the limb scans chosen for this example constrain the center solution better in declination than in right ascension.
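A heavily simplified sketch of the limb-scan geometry: for a uniform-albedo disk, each scan's limb-crossing offset is, to first order, the projection of the 2D center shift onto the scan direction, so a least squares solve over the scans recovers the center. A half-maximum brightness crossing stands in here for the per-scan cross-correlation, and all names and the synthetic disk are illustrative:

```python
import numpy as np

def limb_crossing(image, center, direction, r_max=40.0, step=0.25):
    """Walk outward along one scan direction and return the radius where the
    brightness falls through half its on-disk value (the limb crossing)."""
    rs = np.arange(1.0, r_max, step)
    xs = center[0] + rs * direction[0]
    ys = center[1] + rs * direction[1]
    vals = image[np.round(ys).astype(int), np.round(xs).astype(int)]
    half = 0.5 * vals[0]
    below = np.nonzero(vals < half)[0]
    return rs[below[0]] if below.size else r_max

def limb_scan_center(image, center0, radius_model, angles):
    """Adjust the a priori center so the scan offsets (data limb radius minus
    model radius) are minimized in a least-squares sense."""
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    offsets = np.array([limb_crossing(image, center0, d) - radius_model
                        for d in dirs])
    # each offset projects the 2D center shift onto its ray: offsets ~ dirs @ dc
    dc, *_ = np.linalg.lstsq(dirs, offsets, rcond=None)
    return np.asarray(center0) + dc

# Synthetic uniform-albedo disk with truth center (66.4, 61.8) and radius 25,
# scanned from an a priori center of (64, 64)
yy, xx = np.mgrid[0:128, 0:128].astype(float)
disk = ((xx - 66.4) ** 2 + (yy - 61.8) ** 2 <= 25.0 ** 2).astype(float)
angles = np.deg2rad(np.arange(0, 360, 30))
center = limb_scan_center(disk, (64.0, 64.0), 25.0, angles)
print(center)
```

The flight-like algorithm instead cross-correlates each real scan against its simulated counterpart, which accommodates realistic scattering laws, albedo selection of scans, and terminator crossings that this sketch ignores.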

Figure 14. Pluto image used for the limb-scan solution, simulating the New Horizons LORRI imager on approach to Pluto on July 12, 2015, about 3 days before closest approach.


Figure 15. Limb scans used for the center solution of Pluto from the image in Figure 14. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].

Figure 16. Left: Difference between the 'real' data and the simulated image after a solution using the full-body cross-correlation estimator. Right: Difference between the 'real' data and the simulated image after a limb-scan center solution. The axes represent increasing pixel and line in the sub-array and the bar represents the linear color scaling [DN].

Table 3. Results comparing the centerfinding errors, in arc-seconds, for a full-body cross-correlation solution versus a limb scan solution.

Centerfinding Error    Full-Body Cross-Correlation   Limb Scan
(arc-seconds)          Solution                      Solution
Right Ascension        1.20                          0.12
Declination            0.47                          0.03

OUTPUT OBSERVABLES

As mentioned throughout this paper, the output observables required for the OD process include the (pixel, line) centers for each detected target body and the numerical partial derivatives of the target body centers with respect to the spacecraft state (position). The partial derivatives are calculated by perturbing the spacecraft state vector (with respect to the body center) one component at a time and calculating the resulting change in the target body (x, y) position in the image plane. Each derivative represents the sensitivity of the target position in the image to a change in the spacecraft state. This sensitivity matrix, together with the predicted and observed centers of each target body, comprises the observables integrated into the OD solution.
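The finite-difference construction of the sensitivity matrix can be sketched as follows; the pinhole projection and the geometry are hypothetical stand-ins for the camera model in the OpNav software:

```python
import numpy as np

def project(state, focal_length=1.0):
    """Pinhole projection of the target direction (camera-frame stand-in):
    state is the spacecraft position relative to the target body center."""
    los = -np.asarray(state)          # line of sight from spacecraft to body
    return focal_length * np.array([los[0] / los[2], los[1] / los[2]])

def center_partials(state, delta=1.0e-3):
    """Central-difference partials of the (x, y) image-plane center with
    respect to the three position components of the spacecraft state."""
    state = np.asarray(state, dtype=float)
    H = np.zeros((2, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = delta
        H[:, j] = (project(state + dp) - project(state - dp)) / (2 * delta)
    return H

# Example: spacecraft 1e5 km from the body along -z (hypothetical geometry)
state = np.array([120.0, -75.0, -1.0e5])
H = center_partials(state)
print(H)
```

Each column of H is the image-plane displacement of the target center per unit change in one spacecraft position component, which is exactly the sensitivity information the OD filter consumes alongside the predicted and observed centers.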

CONCLUSIONS

The use of MATLAB as the development platform provides flexibility in development and encourages rapid prototyping. The capability to simulate images and SPICE kernels has proven particularly valuable for testing and verification, operational readiness tests, and answering various contingency and off-nominal scenarios that bound nominal mission scenarios. The functions and algorithms in the KinetX optical navigation tool have been tested against both real and simulated data, and the results meet the requirements for deep space navigation. For both stellar and celestial body centerfinding, the derived center solutions from simulated images match the truth center at a sub-pixel level. For centerfinding on stars and unresolved bodies, the matched filter was found to be more robust than the cross-correlation method in identifying faint stars in the presence of noise. Either technique provides an initial guess to a non-linear least squares estimator that gives a robust final center estimate. The consistently small statistical scatter in the repointing solutions from real spacecraft images supports validation of the star centerfinding and repointing algorithms. For centerfinding on extended objects, three techniques are implemented: cross-correlation of a modeled body with the body image, a non-linear least squares algorithm, and a limb-scanning technique. The limb-scanning technique proved better at centerfinding on objects that have significant albedo uncertainty. At each step in processing optical navigation data, from predicting star and object locations in images, to stellar centerfinding and spacecraft repointing, and finally to target body centerfinding and calculating partial derivatives, the implemented algorithms have proven robust and ready for navigation at Pluto and asteroid 1999 RQ36.

ACKNOWLEDGMENTS

The work described in this paper was carried out by members of the Space Navigation and Flight Dynamics Practice of KinetX Aerospace, and was partially funded under sub-contracts with Johns Hopkins University Applied Physics Laboratory and with ai solutions in support of NASA projects. Additional funding was provided by KinetX Aerospace. The authors also acknowledge the very capable support and review of this paper by the New Horizons’ Lead OpNav Engineer, Dr. William Owen of the Jet Propulsion Laboratory, California Institute of Technology. Additional thanks are extended to Dr. Owen for providing the camera calibration models for the LORRI and MVIC instruments on New Horizons.

REFERENCES

1 R. W. Gaskell, "Optical Navigation Near Small Bodies." AAS Paper 11-220, AAS/AIAA Space Flight Mechanics Meeting, New Orleans, LA, 2011.

2 C. H. Acton, Jr., "Ancillary Data Services of NASA's Navigation and Ancillary Information Facility." Planetary and Space Science, Vol. 44, 1996, pp. 65-70.

3 D. S. Hayes, "An Absolute Spectrophotometric Calibration of the Energy Distribution of Twelve Standard Stars." Astrophysical Journal, Vol. 159, 1970, pp. 165-176.

4 D. S. Hayes and D. W. Latham, "A Rediscussion of the Atmospheric Extinction and the Absolute Spectral-Energy Distribution of Vega." Astrophysical Journal, Vol. 197, 1975, pp. 593-601.

5 A. F. Cheng, et al., "Long-Range Reconnaissance Imager on New Horizons." Space Science Reviews, Vol. 140, 2008, pp. 189-215.

6 M. W. Buie, W. M. Grundy, E. F. Young, L. A. Young, and S. A. Stern, "Pluto and Charon with the Hubble Space Telescope. II. Resolving Changes on Pluto's Surface and a Map for Charon." Astronomical Journal, Vol. 139, 2010, pp. 1128-1143.

7 D. C. Reuter, et al., "Ralph: A Visible/Infrared Imager for the New Horizons Pluto/Kuiper Belt Mission." Space Science Reviews, Vol. 140, 2008, pp. 129-154.

8 J. E. Riedel, W. M. Owen, Jr., J. A. Stuve, S. P. Synnott, and R. M. Vaughan, "Optical Navigation During the Voyager Neptune Encounter." AIAA/AAS Astrodynamics Conference, Portland, OR, 1990.