The F-35 fusion engine is the software module at the heart of the integrated mission systems capability on the aircraft. Fusion involves constructing an integrated description and interpretation of the tactical situation surrounding ownship [2]. It draws from onboard, cooperative, and off-board data sources to enhance situational awareness, lethality, and survivability [2]. The fusion functionality is divided into two major sub-functions: air target management (ATM) and surface target management (STM). The purposes of these functions are to optimize the quality of air and surface target information, respectively. Their functionality is implemented in three primary software modules: the A/A tactical situation model (AATSM), the A/S tactical situation model (ASTSM), and the sensor schedule (SS).
The AATSM software module receives data from onboard and off-board sources about air objects in the environment. It then integrates this information into kinematic and identification estimates for each air object. Similarly, the ASTSM software module receives data from onboard and off-board sources about surface objects in the environment. It then integrates this information into kinematic and identification estimates for each surface object.
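As a concrete sketch of what a TSM's per-object estimates might look like, the hypothetical Python data structures below pair a kinematic estimate with an identification estimate for a system track; the field names and types are assumptions for illustration, not the actual AATSM/ASTSM representations.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class KinematicEstimate:
    position: np.ndarray              # 3-D position estimate (m)
    velocity: np.ndarray              # 3-D velocity estimate (m/s)
    covariance: np.ndarray            # state covariance: the track accuracy estimate
    acceleration: np.ndarray | None = None  # optional, carried for maneuvering air tracks

@dataclass
class IdentificationEstimate:
    affiliation: str                  # e.g., "friend", "hostile", "unknown"
    platform_class: str               # e.g., "fighter", "transport", "SAM"
    platform_type: str                # specific platform estimate
    confidence: float                 # confidence in the declaration, 0..1
    roe_satisfied: bool = False       # set when the pilot-programmable ROE rule is met

@dataclass
class SystemTrack:
    track_id: int
    kinematics: KinematicEstimate
    identification: IdentificationEstimate
    sources: set = field(default_factory=set)  # contributing sensors and datalinks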
Objects that are ambiguous between air and surface are sent to both tactical situation models (TSMs). Each TSM assesses the quality of its tracks to identify any information needs. The system track information needs (STINs) are sent from the TSMs to the SS software module. The SS prioritizes the information needs by track and selects the appropriate sensor mode command to issue in order to satisfy each need. The SS provides autonomous control of the tactical sensors to balance track information needs against background volume search needs.
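The scheduling step can be illustrated with a minimal sketch: a priority-ordered pass over the STINs that issues one sensor mode command per need and falls back to volume search otherwise. The STIN fields, mode names, and threshold below are hypothetical placeholders, not the actual SS design.

from dataclasses import dataclass

@dataclass
class STIN:
    track_id: int
    need: str        # e.g., "kinematic_refinement" or "id_confirmation"
    priority: float  # larger value = more urgent

# Hypothetical mapping from an information need to a sensor mode command.
MODE_FOR_NEED = {
    "kinematic_refinement": "radar_track_update",
    "id_confirmation": "eots_id_look",
}

def schedule(stins, search_priority=0.5):
    """Issue sensor mode commands for the highest-priority track needs;
    fall back to background volume search when no need outranks it."""
    commands = []
    for stin in sorted(stins, key=lambda s: s.priority, reverse=True):
        if stin.priority <= search_priority:
            break
        commands.append((MODE_FOR_NEED[stin.need], stin.track_id))
    if not commands:
        commands.append(("radar_volume_search", None))
    return commands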
Measurement and track data are sent to fusion from the onboard sensors (e.g., radar, EW, CNI, EOTS, DAS) and off-board sources (e.g., MADL, Link 16). When this information is received at the TSM, the data enter the data association process. This process determines whether the new data constitute an update to an existing system (fusion) track or a potentially new track. After being associated with a new or existing track, the data are sent to state estimation to update the kinematic, identification, and rules of engagement (ROE) states of the object.
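One common way to implement this kind of association is chi-square gating on the squared Mahalanobis distance between a measurement and each track's predicted state; the sketch below assumes that approach and a simplified track record, and is not the specific F-35 algorithm.

import numpy as np

def associate(z, R, tracks, gate=9.21):
    """Nearest-neighbor association with chi-square gating (illustrative).
    z      : measured position (length-n vector)
    R      : measurement covariance (n x n)
    tracks : {track_id: (predicted_position, position_covariance)}
    gate   : squared-Mahalanobis gate; 9.21 is ~99% for 2 degrees of freedom
    Returns the id of the best-matching existing track, or None when the
    measurement falls outside every gate (a candidate new track)."""
    best_id, best_d2 = None, gate
    for track_id, (x_pred, P) in tracks.items():
        nu = z - x_pred                          # innovation
        S = R + P                                # innovation covariance
        d2 = float(nu @ np.linalg.inv(S) @ nu)   # squared Mahalanobis distance
        if d2 < best_d2:
            best_id, best_d2 = track_id, d2
    return best_id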
Kinematic estimation refers to the position and velocity estimate of an object. It can also include an acceleration estimate for maneuvering air tracks. The kinematic estimate also includes the covariance for the track, which is an estimate of the track's accuracy. Identification estimation provides an estimate and confidence of the affiliation, class, and type (platform) of the object. The identification process also evaluates the pilot-programmable ROE assistant rule to determine when the sensing states and confidences have been met for declaration. Estimation publishes the updated track state (kinematic, identification, and ROE statuses) to the system track file. At a periodic rate (about once a second), each track is prioritized and then evaluated to determine whether the kinematic and identification content meets the required accuracy and completeness. Any shortfall for a given track becomes a STIN. The STIN messages for the air and surface tracks are sent to the SS to make future tasking decisions for the onboard sensor resources. The process continues in a closed-loop fashion as new pieces of data arrive from the sensors or datalinks. Figure 15 illustrates this process.
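The periodic shortfall check can be sketched as follows, assuming each track exposes a position covariance and an identification confidence; the accuracy and confidence thresholds, need names, and priority scaling are illustrative placeholders rather than the actual requirements.

import numpy as np

def evaluate_track(track_id, position_cov, id_confidence,
                   pos_req_m=100.0, id_req=0.9):
    """Run roughly once a second per track: compare the track's kinematic
    accuracy and identification confidence against required values and turn
    any shortfall into (track_id, need, priority) tuples, i.e., STINs."""
    stins = []
    # RMS positional uncertainty taken from the position covariance diagonal.
    pos_sigma = float(np.sqrt(np.trace(position_cov) / position_cov.shape[0]))
    if pos_sigma > pos_req_m:
        stins.append((track_id, "kinematic_refinement", pos_sigma / pos_req_m))
    if id_confidence < id_req:
        stins.append((track_id, "id_confirmation", 1.0 + (id_req - id_confidence)))
    return stins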
The results of this fusion of information are provided to the other elements in mission systems. They are provided to the pilot/vehicle interface (PVI) for display, fire control and stores for weapon support, and EW for CM support. This allows these elements to perform their related mission functions to provide: 1) a clearer tactical picture, 2) improved spatial and temporal coverage, 3) improved kinematic accuracy and identification confidence, and 4) enhanced operational robustness.
For a clearer tactical picture, multiple detections of an entity are combined into a single track instead of multiple tracks. For improved spatial and temporal coverage, a target can be continuously tracked across multiple sensors and FOVs. This is made possible by the extended spatial and temporal coverage of the onboard sensors, as well as the off-board contributors. Improved kinematic accuracy and identification confidence require the effective integration of independent measurements of the track from multiple sensors or aircraft. This integration is what improves the detection, tracking, positional accuracy, and identification confidence. Enhanced operational robustness requires the ability to fuse observations from different sensors and to hand off targets between sensors. This leads to increased track resilience to sensor outages or countermeasures. Increased dimensionality of the measurement space (i.e., different sensors measuring various portions of the electromagnetic spectrum) further reduces vulnerability to denial of any single portion of the measurement space.
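The accuracy benefit of integrating independent measurements can be made concrete with textbook inverse-variance fusion of two position estimates; the uncertainty values below are illustrative, and this is not the specific fusion algorithm used on the aircraft.

# Two independent estimates of the same target position, one from onboard
# radar and one from an off-board contributor (numbers are illustrative).
sigma_radar = 50.0     # m, 1-sigma positional uncertainty, onboard radar
sigma_offboard = 80.0  # m, 1-sigma positional uncertainty, off-board source

# Inverse-variance (information) fusion of independent measurements.
fused_var = 1.0 / (1.0 / sigma_radar**2 + 1.0 / sigma_offboard**2)
print(round(fused_var ** 0.5, 1))  # ~42.4 m: tighter than either source alone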