Friday, March 27, 2026

Navigating AI Governance And Security Frameworks: Quick Practical Considerations for Research Institutions

Executive Summary

Artificial intelligence is rapidly becoming embedded in research workflows, from data analysis and scientific modeling to autonomous agents and large language models (LLMs). A 2025 study indicates that AI adoption among researchers has surged to 84% globally, reflecting how quickly the technology is transforming research practices. At the same time, Microsoft asserts that generative AI is now used by roughly one in six people worldwide, while Stanford's AI Index 2025 notes a sharp increase in organizational stakeholders reporting AI use. These reports highlight the speed at which AI capabilities are diffusing across industries and institutions. As AI adoption accelerates, research organizations increasingly need structured governance and security frameworks to manage emerging risks and ensure responsible deployment of AI systems. This theme also surfaced during the AI, Security, and Research Cyberinfrastructure panel at the Trusted CI NSF Cybersecurity Summit in October 2025.

This blog post examines key governance and industry frameworks on AI and explains how research institutions can use them to build a practical approach for secure use of AI in their research environments. In addition to these frameworks, supporting resources such as MITRE and OWASP guidance provide structured insights into adversarial techniques and emerging AI security risks.

Why Industry AI Governance and Security Frameworks Matter in Research Environments

Artificial intelligence is evolving rapidly. National initiatives such as the White House Genesis Mission, policy guidance like OMB Memo M-25-21, and the National Science Foundation’s AI Strategy Plan signal a widespread push to integrate AI across research and scientific infrastructure. As these efforts accelerate, research facilities and cyberinfrastructure teams will need to consider how AI adoption may impact their cybersecurity programs.

In many research environments, AI capabilities may be adopted faster than industry frameworks can evolve to address them. Rather than waiting for a perfect framework, there is value in understanding the existing governance and security frameworks and evaluating how they can be applied to support the responsible and secure use of AI in research environments.

Current Use of AI

Across research institutions, AI systems are increasingly being used for tasks such as large-scale data analysis, scientific modeling, literature review, and hypothesis generation. These tools help researchers in fields like climate science, biomedical research, and materials science extract insights from complex datasets and accelerate discovery. As generative AI and machine learning capabilities mature, they are becoming integrated into everyday research workflows across universities, laboratories, and shared cyberinfrastructure environments.

Challenges

While AI is accelerating many aspects of research, it also introduces new challenges for security and governance. Many modern AI systems and agents operate with access to codebases, datasets, research infrastructure, external tools, and networked resources in order to automate tasks and support analysis. As these systems optimize predictions, process large datasets, and assist with troubleshooting or experimentation, questions arise around what level of access they should be granted and how their actions should be monitored. At the same time, the threat landscape surrounding AI is constantly evolving, making it increasingly difficult for institutions to fully understand and secure these systems, particularly in shared research environments where data, models, and infrastructure are widely distributed.

The Role of AI Governance and Security Frameworks

AI governance and security frameworks provide structured guidance for managing the risks associated with deploying AI systems within research infrastructure. They help institutions identify potential risks, establish governance practices, and implement safeguards across the AI lifecycle. For research environments that rely on shared datasets, computing resources, and collaborative platforms, the frameworks below offer practical reference points for adopting AI responsibly while protecting research infrastructure and scientific data.

Below is a summary of four existing frameworks, along with guidance on when a research institution might consider using each one.

1. NIST Artificial Intelligence Risk Management Framework

The NIST Artificial Intelligence Risk Management Framework (AI RMF) provides guidance for organizations and individuals involved in the AI lifecycle (often referred to as AI actors) to design, develop, deploy, and operate AI systems in a trustworthy and responsible manner. The framework is intended to be practical and adaptable as AI technologies evolve, enabling organizations to manage risks while continuing to benefit from AI innovation.

For research environments, the framework is particularly valuable because it focuses on how AI systems interact with organizational processes and infrastructure (core elements emphasized by the Trusted CI Framework’s approach to developing and managing a cybersecurity program). Key concepts include the four core functions below; a brief sketch of how they might be recorded in practice follows the list:

  • Govern - Establish organizational policies, oversight, and accountability to manage AI risks across the AI system lifecycle.
  • Map - Establish the context of an AI system by identifying its purpose, stakeholders, operational environment, and potential risks across the AI lifecycle.
  • Measure - Use testing, evaluation, verification, and validation (TEVV) methods and metrics to assess AI system performance, trustworthiness, and associated risks.
  • Manage - Prioritize and mitigate identified AI risks through continuous monitoring, incident response, and improvements across the AI system lifecycle.
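
To make these functions concrete, the minimal sketch below (our illustration, not part of the NIST framework itself) shows how a research computing team might record the four functions for a single AI system in a lightweight risk register. All names, fields, and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskRegisterEntry:
    """Hypothetical risk-register record organized around the four
    NIST AI RMF functions: Govern, Map, Measure, Manage."""
    system_name: str
    # Govern: accountability and the policy the system falls under
    owner: str
    policy_reference: str
    # Map: context -- purpose, stakeholders, operating environment
    purpose: str
    stakeholders: list[str] = field(default_factory=list)
    # Measure: TEVV metrics and their most recent observed values
    tevv_metrics: dict[str, float] = field(default_factory=dict)
    # Manage: prioritized mitigations and monitoring status
    mitigations: list[str] = field(default_factory=list)
    monitoring_enabled: bool = False

entry = AIRiskRegisterEntry(
    system_name="climate-downscaling-llm-assistant",
    owner="research-computing-security@example.edu",
    policy_reference="AI-USE-POLICY-2026-01",
    purpose="Summarize model runs and draft analysis notes",
    stakeholders=["PI", "research computing", "CISO office"],
    tevv_metrics={"sampled_hallucination_rate": 0.04},
    mitigations=["human review of generated text before publication"],
    monitoring_enabled=True,
)
```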

When Research Institutions May Consider Using It

Research institutions can use the NIST AI RMF when they need a structured approach for evaluating and managing AI risks across research infrastructure. The framework is particularly useful for establishing governance processes for how AI systems are developed, tested, deployed, and monitored within research environments. By adopting the NIST AI RMF, institutions can improve their ability to document AI system risks, evaluate tradeoffs between trustworthiness characteristics, and support informed decisions about whether AI systems should be deployed within research infrastructure. The framework also helps organizations strengthen accountability, improve information sharing about AI risks across teams, and develop more consistent TEVV practices over time.

2. NIST AI 600-1

NIST AI 600-1 is a companion document to the NIST Artificial Intelligence Risk Management Framework (AI RMF) that focuses specifically on risks associated with generative AI systems. It identifies risks that are unique to or amplified by generative AI and provides guidance for addressing them through considerations such as governance, content provenance, pre-deployment testing, and incident disclosure.

The document highlights several risk categories that are particularly relevant to research environments:

  • Confabulation (Hallucinations), where AI systems generate incorrect or fabricated information that may affect scientific accuracy
  • Data privacy risks when sensitive research datasets are used with generative AI tools
  • Information security risks, including the potential misuse of AI systems for cyber activities
  • Intellectual property concerns related to AI-generated research content and code
  • Dual-use risks, particularly in domains such as chemistry, biology, and materials science
  • Broader concerns such as bias, information integrity, and value-chain dependencies in AI systems

When Research Institutions May Consider Using It

NIST AI 600-1 can aid research institutions when evaluating risks associated with generative AI systems used in research workflows, particularly in environments involving sensitive datasets, dual-use research domains, or AI-assisted analysis. The document complements the NIST Artificial Intelligence Risk Management Framework by mapping generative AI risks to the RMF’s Govern, Map, Measure, and Manage functions, helping institutions integrate these risks into broader AI risk management practices.

3. Cloud Security Alliance AI Controls Matrix (CSA AICM)

The CSA AI Controls Matrix (AICM), developed by the Cloud Security Alliance, provides a comprehensive set of security controls specifically designed for AI systems. The framework outlines 243 control objectives across 18 security domains, covering the entire AI lifecycle from data pipelines and training environments to model deployment and third-party integrations. While governance frameworks such as the NIST AI RMF focus on risk management and oversight, the AICM emphasizes practical implementation by defining concrete security controls that organizations can apply when developing and operating AI systems.

One way to think about the relationship between these frameworks: if NIST outlines the why, and supporting resources such as MITRE show the how, the CSA AICM delivers the what.

For research environments where AI systems are used for data analysis, modeling, and experimentation, the AICM highlights several important areas:

  • Security controls for AI datasets and training pipelines, ensuring research data used for model development is protected
  • Controls for model development, deployment, and lifecycle management across AI systems
  • Identity and access management for AI services and infrastructure
  • Risk management for third-party AI models and external components used within research workflows
  • Monitoring and auditing mechanisms to track AI system behavior and operational activity (a brief sketch follows this list)
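
As a deliberately minimal illustration of the monitoring and auditing control area above, the sketch below wraps calls to an AI service so that each invocation leaves an audit record. The client interface and log fields are assumptions for illustration, not controls taken verbatim from the AICM:

```python
import json
import logging
import time
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_call(model_client, user_id: str, prompt: str) -> str:
    """Wrap a model invocation so each request leaves an audit record
    (who called, when, how large the exchange was, how long it took)."""
    started = time.monotonic()
    response = model_client(prompt)  # hypothetical callable client
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_chars": len(prompt),      # record sizes rather than content
        "response_chars": len(response),  # in case prompts are sensitive
        "latency_s": round(time.monotonic() - started, 3),
    }))
    return response

# Usage with a stand-in client; a real deployment would pass the
# institution's actual model client here.
echo_client = lambda p: f"[model output for: {p[:24]}...]"
audited_call(echo_client, "researcher-42", "Summarize the dataset README")
```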

When Research Institutions May Consider Using It

Research institutions can use the CSA AI Controls Matrix when implementing operational security controls for AI systems and supporting infrastructure. The AICM can help research cyberinfrastructure teams translate governance guidance from frameworks such as the NIST AI RMF into practical security controls for AI deployments.

4. Cisco’s Integrated AI Security and Safety Framework

The Cisco Integrated AI Security and Safety Framework provides a structured approach for understanding and managing risks associated with modern AI systems. It introduces a taxonomy that categorizes AI security and safety risks across the AI lifecycle, including data pipelines, models, system integrations, and agentic components.

For research environments where AI systems are used for modeling, experimentation, and data analysis, the framework highlights several important areas:

  • Lifecycle-aware risk analysis across data pipelines, model development, deployment, and operation
  • Threat taxonomy based on attacker objectives and techniques, helping teams understand how AI systems may be targeted
  • Coverage of emerging AI risks, including prompt injection, model manipulation, and supply-chain compromise (one mitigation pattern is sketched after this list)
  • Integration of safety and security considerations, recognizing that AI risks may involve both technical vulnerabilities and harmful outputs
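
The Cisco framework is a taxonomy rather than an implementation guide, but a small example helps make the agentic risks concrete. The sketch below shows one common mitigation pattern for prompt-injection-driven tool misuse: an allowlist gate that validates model-proposed tool calls before anything executes. All tool names and the dispatch interface are hypothetical:

```python
# Hypothetical allowlist: tool name -> permitted keyword arguments.
ALLOWED_TOOLS = {
    "search_papers": {"max_results"},
    "read_dataset_metadata": {"dataset_id"},
}

def dispatch_tool(name: str, **kwargs) -> str:
    """Validate a model-proposed tool call against an allowlist before
    anything executes, so injected instructions in untrusted text cannot
    trigger arbitrary actions."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    unexpected = set(kwargs) - ALLOWED_TOOLS[name]
    if unexpected:
        raise ValueError(f"unexpected arguments for '{name}': {unexpected}")
    # ...forward to the real tool implementation here...
    return f"ran {name} with {kwargs}"

print(dispatch_tool("search_papers", max_results=5))  # permitted
# dispatch_tool("delete_files", path="/data")         # raises PermissionError
```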

When Research Institutions May Consider Using It

Research institutions can reference this framework when evaluating security risks in complex AI architectures and when developing threat models for AI systems used in research workflows. It is particularly useful alongside frameworks such as the NIST AI RMF and NIST AI 600-1, where governance and risk identification are already established, and complements the CSA AICM by providing a system-level view of how threats may emerge across the AI lifecycle.

How These Four Frameworks Complement Each Other

Each of these frameworks addresses different dimensions of AI security and governance, and they can be used together to provide a more comprehensive approach to managing AI risks in research environments. The NIST AI RMF establishes the governance and risk management structure for AI systems. NIST AI 600-1 extends this by identifying risks specific to generative AI and mapping them to the RMF functions. CSA’s AICM complements these governance frameworks by translating risk management concepts into concrete security controls that can be implemented in practice. Cisco’s AI Security Framework adds a lifecycle and threat-modeling perspective, helping organizations understand how risks may emerge across AI systems and supporting infrastructure.

For example, a research institution’s team developing an AI model for analyzing biomedical datasets could use the NIST AI RMF to establish governance and risk management practices, NIST AI 600-1 to evaluate generative AI risks such as data leakage or hallucinated outputs, CSA AICM to implement operational controls around data access and model deployment, and Cisco’s framework to analyze potential attack surfaces across the AI system lifecycle.

Alignment with the NSF AI Strategy

The NSF AI Strategy emphasizes responsible and secure adoption of AI across research and operational environments through five pillars: governance, mission-driven AI use cases, data and infrastructure, responsible innovation, and workforce readiness. Applying frameworks such as the NIST AI RMF, NIST AI 600-1, CSA AICM, and Cisco AI Security Framework can help research institutions align with these pillars by establishing governance and risk management processes, improving security practices around AI-enabled data and infrastructure, and supporting the safe deployment of AI systems in research environments. Together, these frameworks provide practical guidance that helps translate the strategic goals outlined in the NSF plan into operational practices for AI systems used in scientific research.

Additional Resources for Understanding AI Security Threats 

Alongside governance and control frameworks, resources such as MITRE ATLAS and the OWASP Top 10 for Agentic Applications provide valuable threat intelligence for understanding how AI systems can be attacked or misused. These resources help research institutions identify common adversarial techniques, emerging attack patterns, and security weaknesses in AI-driven systems, supporting more informed threat modeling and risk assessment when deploying AI within research environments.

Conclusion

As AI becomes more embedded in research workflows, understanding and applying established security and governance frameworks becomes increasingly important. The four frameworks reviewed here (NIST AI RMF, NIST AI 600-1, CSA AI Controls Matrix, and Cisco AI Security Framework) provide complementary perspectives that help research institutions approach AI adoption in a structured and responsible manner. By leveraging these resources alongside threat intelligence guidance such as MITRE ATLAS and OWASP, research organizations can better anticipate risks, strengthen security practices, and support the secure and trustworthy use of AI within research cyberinfrastructure.

Tuesday, March 24, 2026

Trusted CI Launches Secure Use of AI Effort

A significant number of science research projects currently use, or will soon use, an array of AI resources to facilitate scientific research, from machine learning (ML) for elements of the research data lifecycle to generative AI large language models (LLMs). These resources are part of a rapidly evolving landscape where there is limited guidance on their cybersecurity and research security impacts. To address this critical gap, Trusted CI initiated the Secure Use of AI effort in January 2026 as part of its overall initiatives on AI. This new endeavor focuses on gathering and sharing information to help research cyberinfrastructure organizations and institutions of higher education understand the impact of AI on their research and cybersecurity programs, including the inherent limitations and vulnerabilities of different types of AI tools and systems.

Security risks associated with the use of AI resources in scientific research projects encompass two broad categories. The first involves adversarial attacks that deliberately target AI systems and their underlying components, such as the models, data pipelines, or supporting infrastructure. The second involves operational risks that arise from the behavior and limitations of AI systems, including model hallucinations, design flaws, or improper handling and interpretation of AI-generated outputs. The Secure Use of AI team is mapping this broad set of concerns to frameworks for addressing them, thereby identifying urgent areas where scientific cybersecurity programs should adapt or augment their existing approaches. Additionally, as part of the initial phase of activities, the Trusted CI Secure Use of AI team will engage with community stakeholders to gather insights needed to clarify understanding, concerns, and challenges of AI use. This effort will include interviews with community experts on AI and security, and interviews or round-table activities with research cyberinfrastructure operators to determine their evolving needs. The activities will result in the socialization of guidance and other outputs from this project among NSF and the broader federally funded research community.

Through development and growth of relationships with NSF Major Facilities and collaboration with organizations such as CI Compass, the NSF SECURE Center, the National Artificial Intelligence Research Resource (NAIRR) Pilot and its NAIRR Secure effort, and others, Trusted CI will seek to foster a community of practice focused on the secure use of AI in research environments.

If you have questions or suggestions, or need help securing AI in your research project or organization, please contact Trusted CI at help@trustedci.org.

Tuesday, March 17, 2026

Welcome to Our New Advisory Committee Members!

In support of our expanded mission, Trusted CI is thrilled to welcome Damian Clarke, Ph.D. and Manish Parashar, Ph.D. to the Trusted CI Advisory Committee. Dr. Clarke rejoins the Advisory Committee after two years serving as special advisor to the program. His experience in leadership positions at universities and university consortia, particularly in the U.S. Southeast, will be especially valuable to Trusted CI as it works to further engage with institutions of higher education to determine how best to address the cybersecurity requirements of research security. Dr. Parashar brings a wealth of experience related to national cyberinfrastructure and artificial intelligence. We look forward to his input on our new AI initiatives and to his help in defining our future strategy to support the community with the secure use of AI technologies.

In addition to Drs. Clarke and Parashar, we wish to express our gratitude to the entire Advisory Committee for the guidance they provide and for sharing their time and insights to maximize the value of Trusted CI’s programs to the communities we serve.

Monday, February 9, 2026

SPHERE and Trusted CI Collaborate to Strengthen Research Security

In February 2026, the NSF-funded Security and Privacy Heterogeneous Environment for Reproducible Experimentation (SPHERE) project hosted a week-long cybersecurity residency with Trusted CI, the National Science Foundation’s Cybersecurity Center of Excellence. The residency marked an important milestone in SPHERE’s transition from construction toward sustained operations, strengthening an already robust security posture through formal alignment with widely recognized best practices.

SPHERE previously partnered with Trusted CI during the 2024 Trusted CI Framework Cohort, where the SPHERE team adopted the Trusted CI Framework and completed a structured self-assessment of its cybersecurity program against the framework’s 16 Musts. The Musts identify the concrete, critical requirements for establishing and running a competent cybersecurity program. That cohort experience validated SPHERE’s foundational approach to security, while also highlighting an important next step: formally adopting a baseline cybersecurity control set and performing a gap analysis between that baseline and SPHERE’s existing controls. The Trusted CI Framework specifically recommends adoption of a recognized baseline control set in its Must 15.

Building on that groundwork, the February 2026 residency embedded Trusted CI staff directly with the SPHERE DevOps team for one intensive week at the USC Information Sciences Institute in Marina del Rey, CA. Working side by side, the teams aligned SPHERE’s existing cybersecurity controls with the CIS Critical Security Controls (CIS Controls v8), which SPHERE has now formally adopted as its baseline control set.

This work focused on mapping SPHERE’s existing practices to the CIS Controls, identifying gaps, and prioritizing future improvements. The residency also strengthened SPHERE’s alignment with NSF’s evolving expectations for research security, including closer alignment with the NSF Research Infrastructure Guide (RIG) and its set of 14 critical controls. By grounding its program in both the Trusted CI Framework and the CIS Controls, SPHERE gained a common language for documenting controls, reduced reliance on ad hoc decision-making, and ensured consistency with broadly accepted community standards.

During the residency, Trusted CI conducted site visits at all of the sites that host SPHERE physical infrastructure. The team visited the ISI and USC server rooms and met virtually with SPHERE co-PIs and their teams at Northeastern University Khoury College of Computer Sciences and the University of Utah Kahlert School of Computing. These discussions helped ensure that SPHERE’s distributed architecture is protected in a coordinated and consistent manner across institutions.

With the gap analysis complete, SPHERE is well positioned to prioritize future security investments as it moves toward full operations. The outcome directly supports SPHERE’s core mission of enabling realistic and reproducible experimentation without compromising trust in the facility or the science it supports. Achieving this mission requires protecting the underlying infrastructure from attack and security breaches, safeguarding the integrity and availability of shared resources, and ensuring strong isolation and protection of researcher experiments and data.

SPHERE will share lessons learned from the residency with the broader Trusted CI Research Infrastructure Security Community (RISC), contributing back to the ecosystem that helped shape its approach.


SPHERE (Security and Privacy Heterogeneous Environment for Reproducible Experimentation) is an NSF Mid-scale Research Infrastructure-1 project (Award #2330066) spanning USC Information Sciences Institute, Northeastern University, and the University of Utah. SPHERE provides a public testbed for reproducible science and experimentation tailored to the needs of cybersecurity and privacy researchers and educators.

Trusted CI, the NSF Cybersecurity Center of Excellence, is supported by the National Science Foundation under Interagency Agreement #A2407-049-089-064206.0. Trusted CI’s mission is to enable trustworthy NSF science by partnering with cyberinfrastructure operators to build and maintain effective cybersecurity programs, publishing resources for the broader NSF community, and advancing the processes, tools, and knowledge needed to secure research progress.

Wednesday, February 4, 2026

2026 Trusted CI Scholars Program Now Accepting Applications


As cybersecurity becomes increasingly vital across the scientific community, cultivating the next generation of cybersecurity leaders has never been more important. Trusted CI is proud to announce the Trusted CI Scholars Program (formerly Trusted CI Student Program), designed to equip students with essential skills, insights, mentorship, and hands-on experiences in cybersecurity.

The Trusted CI Scholars Program goes beyond technical training. It is about building a collaborative and innovative community of emerging leaders. If you are a student passionate about cybersecurity—or know someone who is—we encourage you to apply and join us in shaping a safer, more secure future for science and beyond.

Why Trusted CI Scholars Matter

Through mentorship, applied learning, and sustained interaction with cyberinfrastructure practitioners and the broader NSF community, Scholars learn from the processes, tools, and knowledge that Trusted CI advances to support secure research. In doing so, the program extends Trusted CI’s impact into the next generation of the cybersecurity workforce.

Additionally, as Trusted CI begins addressing the needs of higher education institutions related to research security, cybersecurity requirements, and artificial intelligence, early and proactive student engagement with these topics lays important groundwork for developing the skills, awareness, and readiness they may need to secure the nation’s science and research enterprise.

Goals of the Program

The Trusted CI Scholars Program is committed to:

  1. Providing Foundational Knowledge: Gain practical insights into cybersecurity through workshops, mentorship, and participation in the annual NSF Cybersecurity Summit.

  2. Growing Leadership Skills: Strengthen communication, collaboration, integrity, and adaptability.

  3. Empowering Advocacy: Serve as cybersecurity ambassadors within your communities, sharing knowledge with peers and connecting them to Trusted CI for advanced support.

  4. Building Long-Term Connections: Join a growing network of Trusted CI alumni, opening doors to coaching, networking, and career opportunities in the cybersecurity field.

Highlights for 2026

This year’s program includes exciting enhancements:

  • Focused Workshops and Mentorship: Scholars will engage in tailored workshops and one-on-one mentorship with Trusted CI staff and subject-matter experts.

  • Alumni Engagement: Past participants will continue to have access to resources and Summit reunion opportunities, fostering sustained learning and long-term relationships. Alumni are also encouraged to share their experiences through blog posts, presentations, and outreach activities to inspire future cybersecurity professionals.

  • Streamlined Application Process: Applicants will submit a personal statement, professional bio sketches, and letters of support, enabling a more holistic evaluation.

Applications are now open on our submission website and close March 6. 

For more information on how to apply, visit Trusted CI’s website or reach out to scholars@trustedci.org.

Together, we’re preparing the next generation of cybersecurity leaders!

Trusted CI 2025 Summit Report Now Available

Last October, Trusted CI convened the 2025 NSF Cybersecurity Summit. This yearly event provides a forum for National Science Foundation (NSF) scientists, researchers, cybersecurity and cyberinfrastructure (CI) professionals, and stakeholders to share effective technical practices and brainstorm solutions to everyday challenges facing cybersecurity professionals in research environments. When the community comes together for the Summit, its members learn from one another.

The 2025 Summit was held in person in Boulder, CO, at the Center Green Campus at UCAR and NSF NCAR.

Interested in reading more takeaways from this year's Summit? Download the full Summit Report from https://doi.org/10.5281/zenodo.18484621


Friday, January 16, 2026

Trusted CI Mission Expanding to Address Cybersecurity for Research Security and AI

As we enter 2026, Trusted CI leadership is excited to share some important updates regarding the expansion of our mission. We will begin addressing the needs of higher education institutions as they relate to research security and the cybersecurity requirements of NSPM-33. In addition, we will begin major new strategic initiatives focused on the secure use of AI in research. Both of these changes represent a significant expansion of our mission and of the number of institutions we will directly impact.

Our core mission continues to be supporting the security of research through cybersecurity excellence. This includes our existing community of NSF Major Facilities and Mid-Scales, a community we remain committed to supporting. We will continue to host the annual NSF Cybersecurity Summit and will expand the program to include topics related to research security and AI. In addition, this year we’ll host our first Regional Summit in partnership with the University of Alabama.

In support of our expanded mission, we will begin partnering strategically with the SECURE Center and NAIRR-related projects. The SECURE Center’s expertise in research security complements our cybersecurity expertise, and together we will provide comprehensive support to academic institutions that are navigating compliance with emerging NSPM-33 cybersecurity requirements. We will also partner with NAIRR stakeholders to support their cybersecurity program needs.

We have established our plans for 2026 inclusive of our new objectives. This includes pivoting our cohort model to new communities focused on research security in 2026.

We look forward to engaging with new community members in the coming year! Please send any comments or questions to info@trustedci.org.