Executive Summary
Artificial intelligence is rapidly becoming embedded in research workflows, from data analysis and scientific modeling to autonomous agents and large language models (LLMs). A 2025 study indicates that AI adoption among researchers has surged to 84% globally, reflecting how quickly the technology is transforming research practices. At the same time, Microsoft asserts that generative AI is now used by roughly one in six people worldwide, while Stanford's AI Index 2025 notes a growing increase in organizational stakeholders reporting AI use. These reports highlight the speed at which AI capabilities are diffusing across industries and institutions. As AI adoption accelerates, research organizations increasingly need structured governance and security frameworks to manage emerging risks and ensure responsible deployment of AI systems. This theme also surfaced during the AI, Security, and Research Cyberinfrastructure panel at the Trusted CI NSF Cybersecurity Summit in October 2025.
This blog post examines key governance and industry frameworks on AI and explains how research institutions can use them to build a practical approach for secure use of AI in their research environments. In addition to these frameworks, supporting resources such as MITRE and OWASP guidance provide structured insights into adversarial techniques and emerging AI security risks.
Why Industry AI Governance and Security Frameworks Matter in Research Environments
Artificial intelligence is evolving rapidly. National initiatives such as the The White House Genesis Mission, policy guidance like OMB Memo M-25-21, and the National Science Foundation’s AI Strategy Plan signal a widespread push to integrate AI across research and scientific infrastructure. As these efforts accelerate, research facilities and cyberinfrastructure teams will need to consider how AI adoption may impact their cybersecurity programs.
In many research environments, AI capabilities may be adopted faster than industry frameworks can evolve to address them. Rather than waiting for a perfect framework, there is value in understanding the existing governance and security frameworks and evaluating how they can be applied to support the responsible and secure use of AI in research environments.
Current Use of AI
Across research institutions, AI systems are increasingly being used for tasks such as large-scale data analysis, scientific modeling, literature review, and hypothesis generation. These tools help researchers in fields like climate science, biomedical research, and materials science, extract insights from complex datasets, and accelerate discovery. As generative AI and machine learning capabilities mature, they are becoming integrated into everyday research workflows across universities, laboratories, and shared cyberinfrastructure environments.
Challenges
While AI is accelerating many aspects of research, it also introduces new challenges for security and governance. Many modern AI systems and agents operate with access to codebases, datasets, research infrastructure, external tools, and networked resources in order to automate tasks and support analysis. As these systems optimize predictions, process large datasets, and assist with troubleshooting or experimentation, questions arise around what level of access they should be granted and how their actions should be monitored. At the same time, the threat landscape surrounding AI is constantly evolving, making it increasingly difficult for institutions to fully understand and secure these systems, particularly in shared research environments where data, models, and infrastructure are widely distributed.
The Role of AI Governance and Security Frameworks
AI governance and security frameworks provide structured guidance for managing the risks associated with deploying AI systems within research infrastructure. They help institutions identify potential risks, establish governance practices, and implement safeguards across the AI lifecycle. For research environments that rely on shared datasets, computing resources, and collaborative platforms, the frameworks below offer practical reference points for adopting AI responsibly while protecting research infrastructure and scientific data.
Below is a summary of four existing frameworks, along with guidance on when a research institution might consider using each one.
1. NIST Artificial Intelligence Risk Management Framework
The NIST Artificial Intelligence Risk Management Framework (AI RMF) provides guidance for organizations and individuals involved in the AI lifecycle (often referred to as AI actors) to design, develop, deploy, and operate AI systems in a trustworthy and responsible manner. The framework is intended to be practical and adaptable as AI technologies evolve, enabling organizations to manage risks while continuing to benefit from AI innovation.
For research environments, the framework is particularly valuable because it focuses on how AI systems interact with organizational processes and infrastructure (core elements emphasized by the Trusted CI Framework’s approach to developing and managing a cybersecurity program). Key concepts include:
- Govern - Establish organizational policies, oversight, and accountability to manage AI risks across the AI system lifecycle.
- Map - Establish the context of an AI system by identifying its purpose, stakeholders, operational environment, and potential risks across the AI lifecycle.
- Measure - Use testing, evaluation, verification, and validation (TEVV) methods and metrics to assess AI system performance, trustworthiness, and associated risks.
- Manage - Prioritize and mitigate identified AI risks through continuous monitoring, incident response, and improvements across the AI system lifecycle.
When Research Institutions May Consider Using It
Research institutions can use the NIST AI RMF when they need a structured approach for evaluating and managing AI risks across research infrastructure. The framework is particularly useful for establishing governance processes for how AI systems are developed, tested, deployed, and monitored within research environments. By adopting the NIST AI RMF, institutions can improve their ability to document AI system risks, evaluate tradeoffs between trustworthiness characteristics, and support informed decisions about whether AI systems should be deployed within research infrastructure. The framework also helps organizations strengthen accountability, improve information sharing about AI risks across teams, and develop more consistent approaches for testing, evaluating, verifying, and validating (TEVV) AI systems over time.
2. NIST AI 600-1
NIST AI 600-1 is a companion document to the NIST Artificial Intelligence Risk Management Framework (AI RMF) that focuses specifically on risks associated with generative AI systems. It identifies risks that are unique to or amplified by generative AI and provides guidance for addressing them through considerations such as governance, content provenance, pre-deployment testing, and incident disclosure.
The document highlights several risk categories that are particularly relevant to research environments:
- Confabulation (Hallucinations), where AI systems generate incorrect or fabricated information that may affect scientific accuracy
- Data privacy risks when sensitive research datasets are used with generative AI tools
- Information security risks, including the potential misuse of AI systems for cyber activities
- Intellectual property concerns related to AI-generated research content and code
- Dual-use risks, particularly in domains such as chemistry, biology, and materials science
- Broader concerns such as bias, information integrity, and value-chain dependencies in AI systems
When Research Institutions May Consider Using It
NIST AI 600-1 can aid research institutions when evaluating risks associated with generative AI systems used in research workflows, particularly in environments involving sensitive datasets, dual-use research domains, or AI-assisted analysis. The document complements the NIST Artificial Intelligence Risk Management Framework by mapping generative AI risks to the RMF’s Govern, Map, Measure, and Manage functions, helping institutions integrate these risks into broader AI risk management practices.
3. Cloud Security Alliance AI Controls Matrix (CSA AICM)
The CSA AI Controls Matrix (AICM), developed by the Cloud Security Alliance, provides a comprehensive set of security controls specifically designed for AI systems. The framework outlines 243 control objectives across 18 security domains, covering the entire AI lifecycle from data pipelines and training environments to model deployment and third-party integrations. While governance frameworks such as the NIST AI RMF focus on risk management and oversight, the AICM emphasizes practical implementation by defining concrete security controls that organizations can apply when developing and operating AI systems.
One way to think about the relationship between these frameworks: If NIST outlines the why and additional supporting resources such as MITRE shows the how, CSA AICM delivers the what.
For research environments where AI systems are used for data analysis, modeling, and experimentation, the AICM highlights several important areas:
- Security controls for AI datasets and training pipelines, ensuring research data used for model development is protected
- Controls for model development, deployment, and lifecycle management across AI systems
- Identity and access management for AI services and infrastructure
- Risk management for third-party AI models and external components used within research workflows
- Monitoring and auditing mechanisms to track AI system behavior and operational activity
When Research Institutions May Consider Using It
Research institutions can use the CSA AI Controls Matrix when implementing operational security controls for AI systems and supporting infrastructure. The AICM can help research cyberinfrastructure teams translate governance guidance from frameworks such as the NIST AI RMF into practical security controls for AI deployments.
4. Cisco’s Integrated AI Security and Safety Framework
The Cisco Integrated AI Security and Safety Framework provides a structured approach for understanding and managing risks associated with modern AI systems. It introduces a taxonomy that categorizes AI security and safety risks across the AI lifecycle, including data pipelines, models, system integrations, and agentic components.
For research environments where AI systems are used for modeling, experimentation, and data analysis, the framework highlights several important areas:
- Lifecycle-aware risk analysis across data pipelines, model development, deployment, and operation
- Threat taxonomy based on attacker objectives and techniques, helping teams understand how AI systems may be targeted
- Coverage of emerging AI risks, including prompt injection, model manipulation, and supply-chain compromise
- Integration of safety and security considerations, recognizing that AI risks may involve both technical vulnerabilities and harmful outputs
When Research Institutions May Consider Using It
Research institutions can reference this framework when evaluating security risks in complex AI architectures and when developing threat models for AI systems used in research workflows. It is particularly useful alongside frameworks such as the NIST AI RMF and NIST AI 600-1, where governance and risk identification are already established, and complements the CSA AICM by providing a system-level view of how threats may emerge across the AI lifecycle.
How These Four Frameworks Complement Each Other
Each of these frameworks address different dimensions of AI security and governance and can be used together to provide a more comprehensive approach to managing AI risks in research environments. NIST AI RMF establishes the governance and risk management structure for AI systems. NIST AI 600-1 extends this by identifying risks specific to generative AI and mapping them to the RMF functions. CSA’s AICM complements these governance frameworks by translating risk management concepts into concrete security controls that can be implemented in practice. Cisco’s AI Security Framework adds a lifecycle and threat-modeling perspective, helping organizations understand how risks may emerge across AI systems and supporting infrastructure.
For example, a research institution’s team developing an AI model for analyzing biomedical datasets could use the NIST AI RMF to establish governance and risk management practices, NIST AI 600-1 to evaluate generative AI risks such as data leakage or hallucinated outputs, CSA AICM to implement operational controls around data access and model deployment, and Cisco’s framework to analyze potential attack surfaces across the AI system lifecycle.
Alignment with the NSF AI Strategy
The NSF AI Strategy emphasizes responsible and secure adoption of AI across research and operational environments through five pillars: governance, mission-driven AI use cases, data and infrastructure, responsible innovation, and workforce readiness. Applying frameworks such as the NIST AI RMF, NIST AI 600-1, CSA AICM, and Cisco AI Security Framework can help research institutions align with these pillars by establishing governance and risk management processes, improving security practices around AI-enabled data and infrastructure, and supporting the safe deployment of AI systems in research environments. Together, these frameworks provide practical guidance that helps translate the strategic goals outlined in the NSF plan into operational practices for AI systems used in scientific research.
Additional Resources for Understanding AI Security Threats
Alongside governance and control frameworks, resources such as MITRE ATLAS and the OWASP Top 10 for Agentic Applications provide valuable threat intelligence for understanding how AI systems can be attacked or misused. These resources help research institutions identify common adversarial techniques, emerging attack patterns, and security weaknesses in AI-driven systems, supporting more informed threat modeling and risk assessment when deploying AI within research environments.
Conclusion
As AI becomes more embedded in research workflows, understanding and applying established security and governance frameworks becomes increasingly important. The four frameworks reviewed here (NIST AI RMF, NIST AI 600-1, CSA AI Controls Matrix, and Cisco AI Security Framework) provide complementary perspectives that help research institutions approach AI adoption in a structured and responsible manner. By leveraging these resources alongside threat intelligence guidance such as MITRE ATLAS and OWASP, research organizations can better anticipate risks, strengthen security practices, and support the secure and trustworthy use of AI within research cyberinfrastructure.