Many scientific research projects currently use, or will soon use, an array of AI resources, from machine learning (ML) applied across the research data lifecycle to generative AI large language models (LLMs). These resources are part of a rapidly evolving landscape in which guidance on their cybersecurity and research security impacts remains limited. To address this critical gap, Trusted CI initiated the Secure Use of AI effort in January 2026 as part of its overall initiatives on AI. This new endeavor focuses on gathering and sharing information to help research cyberinfrastructure organizations and institutions of higher education understand the impact of AI on their research and cybersecurity programs, including the inherent limitations and vulnerabilities of different types of AI tools and systems.
Security risks associated with the use of AI resources in scientific research projects fall into two broad categories. The first involves adversarial attacks that deliberately target AI systems and their underlying components, such as models, data pipelines, or supporting infrastructure. The second involves operational risks arising from the behavior and limitations of AI systems themselves, including model hallucinations, design flaws, and improper handling or interpretation of AI-generated outputs. The Secure Use of AI team is mapping these concerns to frameworks for addressing them, thereby identifying urgent areas where scientific cybersecurity programs need to adapt or augment their existing approaches.

Additionally, as part of the initial phase of activities, the Trusted CI Secure Use of AI team will engage with community stakeholders to gather the insights needed to clarify the understanding, concerns, and challenges surrounding AI use. This effort will include interviews with community experts on AI and security, as well as interviews and round table activities with research cyberinfrastructure operators to determine their evolving needs. These activities will result in the socialization of guidance and other outputs from this project among NSF and the broader federally funded research community.
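As a concrete illustration of the second category, operational risk, consider a workflow in which an LLM extracts structured records from instrument logs. The snippet below is a minimal Python sketch, not a Trusted CI recommendation: the field names, value ranges, and JSON format are hypothetical. The point is simply that AI-generated output should be treated like any other untrusted input and validated before it enters an analysis pipeline.

```python
import json

# Hypothetical scenario: an LLM was asked to extract measurements from
# instrument logs and return a JSON object. We treat its output as
# untrusted input and check structure and plausibility before use.
raw_output = '{"sample_id": "A-17", "temperature_c": 21.4}'

def parse_llm_record(text: str) -> dict:
    """Parse and sanity-check one LLM-generated record.

    Raises ValueError instead of silently propagating malformed or
    hallucinated values into downstream analysis.
    """
    try:
        record = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM output is not valid JSON: {exc}") from exc

    if not isinstance(record, dict):
        raise ValueError("Expected a JSON object")
    if not isinstance(record.get("sample_id"), str):
        raise ValueError("Missing or non-string 'sample_id'")
    temp = record.get("temperature_c")
    # The plausible range here is an arbitrary placeholder; a real
    # pipeline would use limits appropriate to the instrument.
    if not isinstance(temp, (int, float)) or not (-80.0 <= temp <= 150.0):
        raise ValueError(f"Implausible temperature: {temp!r}")
    return record

print(parse_llm_record(raw_output))
```

Defenses of this kind do not address the first category, adversarial attacks on the models, data, or infrastructure themselves, which call for controls such as provenance tracking and integrity checking across the AI supply chain.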
Through the development and growth of relationships with NSF Major Facilities, and through collaboration with organizations such as CI Compass, the NSF SECURE Center, the National Artificial Intelligence Research Resource (NAIRR) Pilot and its NAIRR Secure effort, and others, Trusted CI will seek to foster a community of practice focused on the secure use of AI in research environments.
If you have questions or suggestions, or need help securing AI in your research project or organization, please contact Trusted CI at help@trustedci.org.