NASA is seeking technologies that can support increased autonomous operations in space environments and support humans' ability to assess and confer trust in these operations. The team also seeks methods, metrics, and architectural approaches that help designers and stakeholders assure these technologies' trustworthiness early in development, as well as human-interface and performance Concepts of Operation that support such assessments during operations.

Relevant scenarios include, but are not limited to:

  • Satellite command and control; autonomous reconfiguration of multiple independent entities
  • Robotic surface operations; sensing and behaving, learning and adapting; confidence in planning actions
  • Space-based assembly, maintenance, and servicing; robotic capabilities; in situ V&V for quality control and how people assess it
  • Data management for intelligent downlinking and artifact/salient-datum identification; knowing what is interesting and what is spurious; converting data into information; keeping track of uncertainty in fused data
  • Atmospheric or surface data collection in hostile and/or communication-challenged environments; knowing what impact context has on sensed data; deep models of the environment, system, and objectives to infer and adapt objectives and priorities
  • Detection, tracking, and identification of near-Earth objects; timely and accurate sensing and classification; threat assessment
  • System fault/anomaly prediction, detection, isolation, and reconfiguration; system health maintenance and adaptation
  • Cybersecurity; how do you know when you have been hacked or spoofed, and what can you know about a design that convinces you it is robust to these threats
  • System resource management and multi-objective planning (e.g., timelines); dynamic response to new conditions and resource changes
  • Autonomous medical operations; modeling at different scales and how those models fit together; confidence in precise operations
  • Technology adoption methods and processes for highly autonomous systems
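A recurring thread in the scenarios above is tracking uncertainty as data from multiple sources are fused. As a minimal illustrative sketch only (not a NASA-specified method), inverse-variance weighting fuses two independent sensor estimates while carrying their uncertainty forward, so a downstream consumer can judge how much to trust the fused value:

```python
# Minimal sketch: inverse-variance-weighted fusion of two independent
# scalar sensor estimates, propagating uncertainty through the fusion
# step. Illustrative only; operational systems would use full state
# estimators (e.g., Kalman filters) over many, possibly correlated,
# measurements.

def fuse(x1: float, var1: float, x2: float, var2: float) -> tuple[float, float]:
    """Fuse two scalar estimates; return (fused_mean, fused_variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # weight each estimate by its confidence
    fused_var = 1.0 / (w1 + w2)       # fused variance is smaller than either input
    fused_mean = fused_var * (w1 * x1 + w2 * x2)
    return fused_mean, fused_var

mean, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # the lower-variance sensor dominates: 11.6 0.8
```

The key point for trust assessment is that the fused variance is reported alongside the fused value, rather than discarded.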


Accomplishing future objectives for space exploration, development, and investigation demands new methods such as increased autonomy. Space-trusted autonomy is technology that can operate with reduced human engagement, in the space environment and operational context, in a way that allows humans to assess and confer trust in its performance. For example, future technologies may conduct Mars terraforming and infrastructure development in preparation for human surface operations. NASA needs to better understand what it means to trust these new and increasingly sophisticated technologies, and how it can know the technologies are trustworthy. It needs to design technologies to do the work that needs to be done, even when it might not know in advance how to specifically instruct that work. In addition, NASA needs to understand what people in the system need to know to have trust in these autonomous technologies, both during design and in operations.

Ultimately, NASA seeks to identify: technologies that enable (to some degree) autonomous operations in space; methods, approaches, and technologies for characterizing trust in such complex operations; definitions of the evidence or understanding that supports trust assessment and attribution; and the attributes of human interfaces that support trustworthiness assessment.


Solutions will:

  • Support increased autonomous operation
  • Characterize trust in human/automation and human/robotic interfaces to ascertain the trustworthiness of performance throughout the system lifecycle and operational scenarios
  • Identify design and verification/validation methods, architectures, and metrics that support trust in automation and robotics

Additional features:

  • Extension of operations into the space environment, overcoming environmental challenges (dust, solar radiation, gravity, etc.) and operational challenges (distance, communication, security, etc.)


Possible Solution Areas

  • Robotics
  • Autonomous vehicles
  • Software agents
  • Artificial Intelligence, Machine Learning
  • Data fusion, Uncertainty Management
  • Cybersecurity – Threat Assessment and Assurance
  • Learning and Adaptive systems
  • Trust Assessment at Human Interfaces (Automation, Robotic, Human)
  • Collaboration and Coordination among Human/Robotic/Automated agents; Levels of Engagement, Roles, Mixed-Initiative control
  • Technology adoption
  • Design (methods, architectures) & Test (certification, assurance, validation, verification) for highly autonomous systems.


Desired Outcome of the Solution

Identify participants (with information on their work and contact details) to enable characterizing the state of the art in the trusted autonomy field: enabling technologies, design methods, assessment metrics and methods, test facilities for space operations, human interface requirements for assessment during operations, and technology adoption.

Field of Use and Intended Applications

Future autonomous operations in space.