AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC)

Nov 7, 2024
Brian Jalaian, Ph.D.
Abstract
The ATRACC Symposium focuses on the critical aspects of AI trustworthiness and risk assessment in challenged contexts. It aims to create a platform for discussions and explorations that will contribute to the development of innovative solutions for quantitatively trustworthy AI, especially in risk-averse domains such as healthcare, civil infrastructure, and military defense.
Date
Nov 7, 2024 9:00 AM — Nov 9, 2024 5:00 PM
Location

Westin Arlington Gateway

801 N Glebe Rd, Arlington, VA 22203

About The ATRACC Symposium Session

Artificial intelligence (AI) has become a transformative technology with revolutionary impact across various domains, including challenging contexts such as civil infrastructure, healthcare, and military defense. The ATRACC Symposium addresses the critical need for assessing AI trustworthiness and risk in these challenged contexts.

Key Topics

  • Assessment of non-functional requirements (explainability, transparency, accountability, privacy)
  • Methods for system reliability, uncertainty quantification, and guarding against over-generalization
  • Verification and validation (V&V) of AI systems
  • Enhancing reasoning in Large Language and Foundational Models (LLFMs)
  • Links between performance, trustworthiness, and trust
  • Architectures for Mixture-Of-Experts (MoE) and multi-agent systems
  • Evaluation of AI systems vulnerabilities, risks, and impact

Important Dates

  • Symposium Dates: November 7-9, 2024
  • Registration Fee Increase: October 4, 2024
  • Hotel Room Block Deadline: October 17, 2024

Registration

Early registration is recommended, as fees increase on October 4th. Hotel rooms should be booked as soon as possible due to limited availability.

For more information on topics and submission guidelines, please visit our Call for Papers page.

Authors

Brian Jalaian, Ph.D.
Associate Professor

My research focuses on developing safe, robust, and reliable AI systems, with an emphasis on large language models and foundational AI technologies.