Showing posts with label compliance. Show all posts

Monday, January 6, 2025

From Silos to Community: The Rapid Rise of RRCoP to Support Regulated Research

The research landscape is evolving rapidly, adding complexity through new cybersecurity compliance requirements. Researchers and research support departments now face a growing list of cybersecurity and compliance tasks that extend beyond individual projects, elevating these obligations to the institutional level. Research institutions, built on principles of openness and collaboration, must navigate requests for compliance attestations on data handling, processing, sharing, and storage, areas often outside researchers’ expertise. Without robust training programs or a stable regulatory landscape, individuals are frequently left scrambling for current information, and they often lack local colleagues to consult, leaving them feeling isolated and uncertain. This fragmentation, seen across individuals, departments, institutions, and the national level, inspired the formal creation of the Regulated Research Community of Practice (RRCoP) in 2021, following a series of workshops focused on the common challenges facing institutions that support regulated research.

RRCoP brings together a rapidly growing network of professionals addressing the unique challenges of cybersecurity and compliance in academic research. Led by Trusted CI Co-PI Carolyn Ellis, Director of Research Cybersecurity and Compliance at Arizona State University, RRCoP fosters connections and builds expertise across institutions. Ellis co-founded the community while managing Purdue University’s research Controlled Unclassified Information (CUI) program, where she experienced firsthand the tensions between implementing complex compliance programs and maintaining the openness of academic research. “Today, RRCoP is more than a collection of resources or formal training,” Ellis explains. “We’ve built a community where professionals can learn from one another, collaborate, and tackle big challenges. This community is redefining how institutions support research subject to regulations.”

A map of the Regulated Research Community of Practice’s member locations. (Credit: Carolyn Ellis).

RRCoP informally began as a Slack channel in 2018, when Ellis went searching for ongoing conversations beyond conferences and came up empty. It has since grown into a dynamic, fast-expanding network connecting over 1,100 members from 330 institutions, ranging from R1 universities to community colleges and international partners. Daily Slack interactions foster collaboration and act as an early warning system as members share insights from diverse information sources. This connectivity bridges the many communities that contribute to the regulated research landscape.

RRCoP has also developed a wealth of resources, including a recorded monthly webinar series held on the second Wednesday of each month, co-located training seminars at conferences, mentoring opportunities, and an annual hands-on workshop designed to address the most pressing challenges in regulated research. In 2022, RRCoP facilitated a full-day workshop at the EDUCAUSE Cybersecurity and Privacy Professionals Conference, where attendees collaboratively wrote responses to 43 controls in a System Security Plan. The 2023 workshop brought higher education professionals together with certified assessors for a cost-effective day of dialogue. All RRCoP resources are offered at no cost to the community and are accessible on its comprehensive website at regulatedresearch.org. Most recently, two leaders of the RRCoP community, Ellis and Erik Deumens, collaborated on an article in Communications of the ACM about the pressing need for compliance requirements in research. Review the highlights of RRCoP’s 2024.

Trusted CI has expanded its mission to support regulated research by building on the expertise and resources developed by RRCoP. Trusted CI is able to sustain the valuable services RRCoP offers, and its team members will provide additional expertise, access to an extensive community, and established processes. Moving forward, RRCoP aims to use this collective voice to elevate the unique challenges faced by higher education to decision-makers. Additionally, extending Trusted CI’s established resources into the RRCoP-supported higher education community will strengthen both groups. Together, the Trusted CI and RRCoP communities will continue to grow by sharing services, expertise, and relationships, creating a stronger foundation for supporting regulated research across institutions.

The December 2024 RRCoP webinar featured a presentation titled Trusted CI & RRCoP’s Next Five Years, presented by Sean Peisert, Trusted CI Director and PI; Scott Russell, Trusted CI Deputy Director and Framework Lead; and Carolyn Ellis, Trusted CI Co-PI and Regulated Research Lead.

Monday, August 1, 2022

Analysis of NSPM-33: Cybersecurity Requirements for Federally Funded Research Organizations

By: Anurag Shankar and Scott Russell

This blog post provides research organizations with a summary of the “National Security Presidential Memorandum on United States Government-Supported Research and Development National Security Policy” (NSPM-33) and the recent Office of Science and Technology Policy (OSTP) / National Science and Technology Council (NSTC) guidance, along with an analysis of the requirements.

Summary

In January 2021, then-President Trump issued a directive, “National Security Presidential Memorandum on United States Government-Supported Research and Development National Security Policy” (NSPM-33), to all federal agencies to: 1) standardize disclosure requirements and 2) mandate a research security program for all institutions receiving a total of $50 million or more in federally funded research. In January 2022, the Office of Science and Technology Policy (OSTP) released further guidance on these requirements, including details on four elements specified in NSPM-33: cybersecurity, foreign travel security, research security training, and export control training. The cybersecurity guidance identifies 14 controls that it recommends as requirements for federal agencies to flow down to organizations receiving federal research funding. Twelve of these controls are included in the 17 “basic hygiene” controls specified by CMMC Level 1 and the 15 “minimum security controls” specified by FAR 52.204-21, “Basic Safeguarding of Covered Contractor Information Systems.” The remaining two are NSPM-33-specific, addressing training and ransomware/data integrity.

The OSTP guidance also includes a number of additional recommendations for federal agencies to flow down to research organizations, summarized below:

  1. Documentation: Research organizations should be required to document their research security program and provide this documentation within 30 days of a request from a research agency that is funding an award or considering an application for award funding.

  2. Certification: Research organizations should be required to provide certification of compliance with the research security program requirement. OSTP, in consultation with the NSTC Subcommittee on Research Security and OMB, plans to develop a single certification standard and process that will apply across all research agencies.

  3. Timeline: Research organizations should establish a research security program as soon as possible, but are given one year from the date of issuance of the formal requirement to comply. Organizations that become subject to the requirement in subsequent years are likewise to be provided one year to comply.

  4. Assistance: The Federal Government should provide technical assistance to support development of training content and program guidelines, tools, and best practices for research organizations to incorporate at their discretion. Agencies represented on the National Counterintelligence Task Force, in conjunction with the National Counterintelligence and Security Center, should jointly develop content that research organizations can leverage to meet requirements for research security programs and training. The Federal Government should consider supporting the formation of a community consortium to develop and maintain research security program information and implementation resources for research organizations, to include resources suitable for use within research security programs. The development of program content should be a collaborative effort between the government and organizations.

  5. Discretion: Research organizations should be provided flexibility to structure the organization’s research security program to best serve its particular needs, and to leverage existing programs and activities where relevant, provided that the organization implements all required program components. Research organizations should be given flexibility in how they choose to integrate research security requirements into existing programs, such as existing cybersecurity programs. Research organizations should be strongly encouraged to integrate some or all elements into a coherent research security program, where applicable and feasible.

  6. Funding agencies should consider integrating the research security program requirement into the Compliance Supplement’s Research and Development Cluster audit guidance as part of the single audit of Federal grant and assistance programs (2 C.F.R. Part 200, Appendix XI).

Analysis

The primary questions raised by NSPM-33 and the NSTC/OSTP guidance are: 1) how will these requirements be flowed down to research organizations; 2) to what extent will funding agencies follow the guidance put forth by the NSTC; and 3) what is the scope of the requirements?

Regarding the first question, NSPM-33 only directly impacts federal funding agencies (e.g., NSF, DOE): the NSPM does not impose any requirements directly on research institutions. Instead, it instructs federal funding agencies to impose these requirements on research institutions receiving federal research funding. While the NSTC/OSTP guidance specifies January 2023 as the deadline for eligible institutions to comply, it does not specify how the requirements should be imposed. Moreover, the provision of NSPM-33 that specifically mentions cybersecurity is only intended to apply to research institutions receiving over $50 million in federal research funding, without clarifying how these institutions should be identified.

Practically speaking, the funding agencies may impose these requirements on all *new* grants. So although existing grants are technically unaffected, research institutions that wish to continue receiving funding will be forced to implement the requirements regardless.

Moreover, it is also unclear to what extent federal funding agencies are bound by the NSTC guidance. NSPM-33 only instructs OSTP to “promulgate guidelines for research institutions to mitigate risks to research security and integrity”; it does not empower OSTP to dictate what requirements federal funding agencies impose. Indeed, neither OSTP nor the NSTC is mentioned in the subsection referencing research security programs and cybersecurity.

Scope is another issue. The guidance does not clarify whether the security program requirements apply only to researchers receiving federal funding or to every researcher within the organization. It specifies controls for programs to implement but does not explicitly state whether every system used by researchers (e.g., their workstations) is in scope, or institutional systems only. Since this has financial repercussions, clarity is needed on what the requirements cover.

A research security program clearly requires controls to secure projects. However, prescribing a set of controls that research systems must implement can be problematic, as research systems have unique needs that may not be met by traditional controls (instead requiring alternate controls to achieve their mission). Moreover, the focus on system-centric controls is not well suited to securing research workflows, which require more than technical controls alone. The uniqueness of research systems (telescopes, sensors, microscopes, etc.) requires different approaches than controls designed to secure generic “systems.” The Trusted CI Framework, for example, is a better fit for research programs: it includes controls but gives the institution flexibility to choose a baseline control set tailored to its mission, supplemented with additional and alternate controls that are particularly important in the research context, as research infrastructure often requires specialized protections. Securing research ultimately requires flexibility.

Applying the same level of security to all research is also unwise. How research is protected is currently scoped by data sensitivity and regulatory requirements, and for good reason: security is applied proportionally to risk in order to contain cost. Expanding it indiscriminately would be wasteful and unnecessary. Public data, for instance, does not need the same level of security as patient data.

The guidance asks agencies to allow flexibility on which program components institutions choose to implement but also directs them to “strongly encourage” choosing them all. With a documentation submission requirement, it is unclear how the program will be judged and what the impact of a “less than perfect choice” might be (e.g., of not having all of the controls in place).

The certification requirement is also likely to present challenges. As the CMMC rollout shows, designing a certification process for compliance at this scale is extremely difficult. And whereas CMMC is limited in scope, NSPM-33 is potentially much broader. Under CMMC, most organizations can limit scope by designing isolated environments for CUI; certifying compliance for research will be much more challenging, given the variety and complexity of research infrastructure.

Tuesday, January 11, 2022

Trusted CI engagement with OSC contributes to HECVAT 3.0


The EDUCAUSE Higher Education Information Security Council (HEISC) launched the latest version of the Higher Education Community Vendor Assessment Toolkit (HECVAT and HECVAT Lite v3). The new version has gone through a substantial overhaul to ensure the questions reflect the modern cloud research environment. More information about the new and improved HECVAT can be found on EDUCAUSE’s website.

The HECVAT is designed specifically for colleges and universities to measure vendor risk. It is presented as a questionnaire that focuses on the unique needs of a college or university. It can also be used by solution providers to demonstrate their organization’s adherence to the security expectations outlined by the HEISC. Providers are encouraged to fill out the HECVAT and share it in the Community Broker Index.

During the development of v3 of the HECVAT and HECVAT Lite, the HEISC Shared Assessments Working Group reached out to representatives of the higher ed community with expertise in industry standards (e.g., CIS Security Controls, HIPAA, ISO 27002:2013, various NIST frameworks, and the Trusted CI Framework) to conduct a “crosswalk.” Trusted CI contributed to the crosswalk by mapping the HECVAT questions to one or more of the 16 Musts in the Trusted CI Framework. Trusted CI has also published guidance on applying the HECVAT for NSF research projects. 
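At its core, a crosswalk like this is a many-to-many mapping between one framework's items and another's, which can then be inverted to ask which questions provide evidence for a given requirement. The sketch below illustrates the data structure; the question IDs and Must numbers are hypothetical placeholders, not the actual published HECVAT-to-Framework mapping.

```python
# Sketch of a vendor-assessment crosswalk as a many-to-many mapping.
# The question IDs and Must numbers are hypothetical placeholders,
# not the real HECVAT v3 crosswalk published by EDUCAUSE.
from collections import defaultdict

crosswalk = {
    "HECVAT-Q1": ["Must 1"],            # e.g., a governance question
    "HECVAT-Q2": ["Must 3", "Must 4"],  # one question may map to several Musts
    "HECVAT-Q3": ["Must 3"],
}

def questions_for_must(crosswalk, must):
    """Invert the mapping: which questions give evidence for a given Must?"""
    inverted = defaultdict(list)
    for question, musts in crosswalk.items():
        for m in musts:
            inverted[m].append(question)
    return sorted(inverted.get(must, []))

print(questions_for_must(crosswalk, "Must 3"))  # → ['HECVAT-Q2', 'HECVAT-Q3']
```

The inverted view is what makes a crosswalk useful in practice: an assessor reviewing one Framework Must can pull up exactly the completed HECVAT answers that speak to it.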

Our collaboration with EDUCAUSE on HECVAT v3 was prompted by Trusted CI’s recent engagement with the Ohio Supercomputer Center (OSC). During our Fall 2021 engagement, OSC successfully completed the HECVAT-Lite v3 questionnaire at the request of a research project at another university that planned to use OSC’s HPC services. OSC's completed HECVAT can be accessed through the Community Broker Index. We are very proud to have contributed to this important project.
Trusted CI will be presenting a webinar on the new version of the HECVAT on Monday January 24th at 11am Eastern. Registration information is available at trustedci.org/webinars.

Tuesday, August 17, 2021

Trusted CI webinar: NCSA Experience with SOC2 in the Research Computing Space August 30th @11am Eastern

NOTE: If you have any experience with SOC2 compliance and want to share resources, slideshows, presentations, etc., please email links and other materials to Jeannette Dopheide <jdopheid@illinois.edu> and we will share them during the presentation. 

NCSA's Alex Withers is presenting the talk, NCSA Experience with SOC2 in the Research Computing Space, on Monday August 30th at 11am (Eastern).

Please register here.

As the demand for research computing dealing with sensitive data increases, institutions like the National Center for Supercomputing Applications work to build the infrastructure that can process and store these types of data.  Along with the infrastructure can come a host of regulatory obligations including auditing and examination requirements.  We will present NCSA’s recent SOC2 examination of its healthcare computing infrastructure and how we ensured our controls, data collection and processes were properly documented, tested and poised for the examination.  Additionally, we will show how other research and educational organizations might handle a SOC2 examination and what to expect from such an examination.  From a broader perspective, the techniques and lessons learned can be applied to much more than a SOC2 examination and could potentially be used to save time and resources for any audit or examination.

Speaker Bio

Alex Withers is an Assistant Director for Cyber Security and the Chief Information Security Officer at the National Center for Supercomputing Applications (NCSA). Additionally, he is the security co-manager for the XSEDE project and NCSA’s HIPAA Security Liaison. He is also a PI and co-PI for a number of NSF-funded cybersecurity projects.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

 

Monday, July 19, 2021

Higher Education Regulated Research Workshop Series: A Collective Perspective

Regulated research data is a growing challenge for NSF-funded organizations in research and academia, with little guidance on how to tackle regulated research institutionally. Trusted CI would like to bring the community’s attention to an important report released today by the organizers of a recent, NSF-sponsored* Higher Education Regulated Research Workshop Series that distills the input of 155 participants from 84 higher education institutions. Motivated by the higher ed community’s desire to standardize strategies and practices, the facilitated** workshop sought to find efficient ways for institutions large and small to manage regulated research data and smooth the path to compliance. It identified six main pillars of a successful research cybersecurity compliance program: Ownership and Roles, Financials and Cost, Training and Education, Auditing, Clarity of Controls, and Scoping. The report presents each pillar as a chapter, complete with best practices, challenges, and recommendations for research enablers on campus. While it focuses on Department of Defense (DOD)-funded research, Controlled Unclassified Information (CUI), and health research, the report offers ideas and guidance on standing up a well-managed campus program that apply to all regulated research data. It represents a depth and breadth of community collaboration and institutional experience never before compiled in a single place.

Organized by Purdue University with co-organizers from Duke University, University of Florida, and Indiana University, the workshop comprised six virtual sessions between November 2020 and June 2021. Participants included research computing directors, information security officers, compliance professionals, research administration officers, and personnel who support and train researchers.

The full report is available at the EDUCAUSE Cybersecurity Resources page at https://library.educause.edu/resources/2021/7/higher-education-regulated-research-workshop-series-a-collective-perspective. It was co-authored by contributors from Purdue University, Duke University, University of Florida, Indiana University, Case Western Reserve University, University of Central Florida, Clemson University, Georgia Institute of Technology, and University of South Carolina.

See https://www.trustedci.org/compliance-programs for additional materials from Trusted CI on the topic of compliance programs.

* NSF Grant #1840043, “Supporting Controlled Unclassified Information with a Campus Awareness and Risk Management Framework”, awarded to Purdue University
** by Knowinnovation

Tuesday, June 1, 2021

Don't Miss Trusted CI at EDUCAUSE CPP Conference

Members of Trusted CI and partner projects will be presenting at the 2021 EDUCAUSE Cybersecurity and Privacy Professionals Conference (formerly known as the Security Professionals Conference), to be held Tuesday, June 8th through Thursday, June 10th. The conference "will focus on restoring, evolving, and transforming cybersecurity and privacy in higher education."

Below is a list of presentations that include Trusted CI team members and partners:
 

Regulated Research Community Workshops

Tuesday, June 08 | 12:15p.m. - 12:35p.m. ET

  • Anurag Shankar - Senior Security Analyst, Indiana University
  • Erik Deumens - Director UF Research Computing, University of Florida
  • Carolyn Ellis - Program Manager, Purdue University
  • Jay Gallman - Security IT Analyst, Duke University
Supporting institutional regulated research comes with a wide range of challenges, impacting units that haven't commonly worked together. Until recently, most institutions have looked internally to develop their regulated research programs. Since November 2020, 30 institutions have been gathering for six workshops to share their experiences and challenges in establishing regulated research programs. This session will share the process that made these workshops successful and the initial findings of this very specialized group.


Big Security on Small Budgets: Stories from Building a Fractional CISO Program

Thursday, June 10 | 2:00p.m. - 2:45p.m. ET

  • Susan Sons - Chief Security Analyst, Indiana University Bloomington

No one in cybersecurity has an infinite budget. However, those booting up cybersecurity programs in organizations whose leadership hasn't fully bought into the value of cybersecurity operations, bolting security onto an organization that has operated without it for too long, or leading cybersecurity for a small or medium-sized institution often have even less to work with: smaller budgets, less training, fewer personnel, less of every resource. Meanwhile, the mandate can seem infinite. In this talk, Susan Sons, Deputy Director of ResearchSOC and architect of the fractional CISO programs at ResearchSOC, OmniSOC, and IU's Center for Applied Cybersecurity Research, discusses approaches to right-sizing cybersecurity programs and getting the most out of limited resources for small and medium-sized organizations. This talk covers strategies for prioritizing security needs, selecting controls, and using out-of-the-box approaches to reduce costs while ensuring the right things get done. Bring your notepad: we'll refer to a number of outside references and resources you can use as you continue your journey.


SecureMyResearch at Indiana University

Thursday, June 10 | 1:00p.m. - 1:20p.m. ET

  • William Drake - Senior Security Analyst, Indiana University
  • Anurag Shankar - Senior Security Analyst, Indiana University

Cybersecurity in academia has achieved significant success in securing the enterprise and the campus community at large through effective use of technology, governance, and education. It has not been as successful in securing the research mission, however, owing to the diversity of the research enterprise, and of the time and other constraints under which researchers must operate. In 2019, Indiana University began developing a new approach to research cybersecurity based on its long experience in securing biomedical research. This resulted in the launch of SecureMyResearch, a first-of-its-kind service to provide cybersecurity and compliance assistance to researchers and stakeholders who support research. It was created not only to be a commonly available resource on campus but also to act as a crucible to test new ideas that depart from or are beyond enterprise cybersecurity practice. Those include baking security into workflows, use case analysis, risk acceptance, researcher-focused messaging, etc. A year later, we have much to share that is encouraging, including use cases, results, metrics, challenges, and stories that are likely to be of interest to those who are beginning to tackle research cybersecurity. We also will be sharing information and advice on a method of communicating the need for cybersecurity to researchers that proved to be highly successful, and other fresh ideas to take home and leverage on your own campus.


Lessons from a Real-World Ransomware Attack on Research

Thursday, June 10 | 12:25p.m. - 12:45p.m. ET

  • Andrew Adams - Security Manager / CISO, Carnegie Mellon University
  • Von Welch - Director, CACR, Indiana University
  • Tom Siu - CISO, Michigan State University

In this talk, co-presented by the Michigan State University (MSU) Information Security Office and Trusted CI, the NSF Cybersecurity Center of Excellence, we will describe the impact and lessons learned from a real-world ransomware attack on MSU researchers in 2020, and what researchers and information security professionals can do to prevent and mitigate such attacks. Ransomware attackers have expanded their pool of potential victims beyond those with economically valuable data. In the context of higher ed, this insidious development means researchers, who used to be uninteresting to cybercriminals, are now targets. During the first part of the presentation, we will explain the MSU ransomware incident and how it hurt research. During the second part, we will elaborate on mitigation strategies and techniques that could protect current and future academic researchers. Finally, we will conclude with a question-and-answer session in which audience members are encouraged to ask Trusted CI staff about how to engage researchers on information security. Trusted CI has unique expertise in building trust with the research community and in framing the cybersecurity information for them. Trusted CI regularly engages with researchers, rarely security professionals, and has a track record of success in communicating with researchers about cybersecurity risks.


Until We Can't Get It Wrong: Using Security Exercises to Improve Incident Response

Wednesday, June 09 | 2:00p.m. - 2:20p.m. ET

  • Josh Drake - Senior Security Analyst, Indiana University Bloomington
  • Zalak Shah - Senior Security Analyst, Indiana University

Incident response can be challenging at the best of times, and when one is responding to a major incident, it is rarely the best of times. A rigorous program of security exercises is the best way to ensure that any organization is prepared to meet the challenges that may come. The best cybersecurity teams have learned not just to practice until they can get it right, but to practice until they can't get it wrong. They use a regular program of security exercises coupled with postmortem analysis and follow-up to ensure that the whole team, and all of the technologists and organizational support they work with, get better at handling incidents over time. This session will teach you how to build a security exercise program from the ground up and use it to ensure that your incident response capabilities can be relied on no matter what happens.


Google Drive, the Unknown Unknowns

Wednesday, June 09 | 12:00p.m. - 12:45p.m. ET

  • Ishan Abhinit - Senior Security Analyst, Indiana University Bloomington
  • Mark Krenz - Chief Security Analyst, Indiana University

Every day, countless thousands of students and staff around the world use cloud storage systems such as Google Drive to store their data. This data may be classified as public or internal, or even confidential or restricted. Although Google Drive provides users with ways to control access to their data, my experience has shown that users often aren't aware that they are exposing their data beyond their expected trust boundary. In this talk I will briefly introduce the audience to Google Drive, sharing some of my own experiences dealing with security concerns. Then I will provide an overview of the issues that academic and research institutions face when using it. I'll highlight the security threats to your data and how to deal with various situations, such as when someone leaves a project, when data is accidentally deleted, or when data is shared and you don't know it. In the second half of the presentation I'll provide the audience with solutions to these security issues that are useful to institutions large and small, as well as to individual projects and people. Some of these solutions were developed by me and my team to solve our own issues, and I'll be sharing these solutions and tools with the community at large.
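The "shared and you don't know it" problem described above boils down to scanning each file's permission list for grantees outside your trust boundary. Below is a minimal sketch of that check over hand-written permission records shaped like those the Google Drive v3 API returns (files.list with a fields="files(id,name,permissions)" selector); the file names and addresses are invented examples, and a real audit would fetch the records with authorized API credentials rather than hard-coding them.

```python
# Sketch: flag files shared beyond a trusted domain, using permission
# records shaped like the Google Drive v3 API's Permissions resource.
# The sample records are invented; a real audit would fetch them via
# the API with proper credentials and pagination.

def externally_shared(files, trusted_domain):
    """Return names of files with any permission outside the trusted domain."""
    flagged = []
    for f in files:
        for perm in f.get("permissions", []):
            # 'anyone' means link sharing open to the whole internet
            if perm.get("type") == "anyone":
                flagged.append(f["name"])
                break
            # otherwise check the grantee's email domain
            email = perm.get("emailAddress", "")
            if email and not email.endswith("@" + trusted_domain):
                flagged.append(f["name"])
                break
    return flagged

sample = [
    {"name": "grant-budget.xlsx",
     "permissions": [{"type": "user", "emailAddress": "pi@example.edu"}]},
    {"name": "dataset-v2.csv",
     "permissions": [{"type": "anyone"}]},
    {"name": "draft-paper.docx",
     "permissions": [{"type": "user", "emailAddress": "collab@other.org"}]},
]
print(externally_shared(sample, "example.edu"))  # → ['dataset-v2.csv', 'draft-paper.docx']
```

Run periodically against an institution's Drive inventory, a check like this surfaces exactly the unexpected-sharing cases the talk describes, such as data still shared with a collaborator who has left a project.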


The full agenda, including the on-demand program, is available online.

Monday, November 2, 2020

PEARC20: Another successful workshop and training at PEARC

Trusted CI had another successful exhibition at PEARC20.

We hosted our Fourth Workshop on Trustworthy Scientific Cyberinfrastructure for our largest audience to date. The topics covered during this year's workshop were:

  • Community Survey Results from the Trustworthy Data Working Group (slides)
    • Presenters: Jim Basney, NCSA / Trusted CI; Jeannette Dopheide, NCSA / Trusted CI; Kay Avila, NCSA / Trusted CI; Florence Hudson, Northeast Big Data Innovation Hub / Trusted CI
  • Characterization and Modeling of Error Resilience in HPC Applications (slides)
    • Presenter: Luanzheng Guo, University of California-Merced 
  • Trusted CI Fellows Panel (slides)
    • Moderator: Dana Brunson, Internet2
    • Panelists: Jerry Perez, University of Texas at Dallas; Laura Christopherson, Renaissance Computing Institute; Luanzheng Guo, University of California, Merced; Songjie Wang, University of Missouri; Smriti Bhatt, Texas A&M University - San Antonio; Tonya Davis, Alabama A&M University
  • Analysis of attacks targeting remote workers and scientific computing infrastructure during the COVID19 pandemic at NCSA/UIUC (slides)
    • Presenters: Phuong Cao, NCSA / University of Illinois at Urbana-Champaign; Yuming Wu, Coordinated Science Laboratory / University of Illinois at Urbana-Champaign; Satvik Kulkarni, University of Illinois at Urbana-Champaign; Alex Withers, NCSA / University of Illinois at Urbana-Champaign; Chris Clausen, NCSA / University of Illinois at Urbana-Champaign
  • Regulated Data Security and Privacy: DFARS/CUI, CMMC, HIPAA, and GDPR (slides)
    • Presenters: Erik Deumens, University of Florida; Gabriella Perez, University of Iowa;  Anurag Shankar, Indiana University
  • Securing Science Gateways with Custos Services (slides)
    • Presenters: Marlon Pierce, Indiana University; Enis Afgan, Johns Hopkins University; Suresh Marru, Indiana University; Isuru Ranawaka, Indiana University; Juleen Graham, Johns Hopkins University

We will post links to the recordings when they are made public.

In addition to the workshop, Trusted CI team member Kay Avila co-presented a Jupyter security tutorial titled “The Streetwise Guide to Jupyter Security” (event page) with Rick Wagner.  This presentation was based on the “Jupyter Security” training developed by Rick Wagner, Matthias Bussonnier, and Trusted CI’s Ishan Abhinit and Mark Krenz for the 2019 NSF Cybersecurity Summit.

Tuesday, September 22, 2020

Trusted CI Webinar: Cybersecurity Maturity Model Certification (CMMC) on Tues Oct 6 @11am Eastern

Trusted CI's Scott Russell is presenting the webinar, Cybersecurity Maturity Model Certification (CMMC), on Tuesday October 6th at 11am (Eastern). 

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The US has historically taken a fairly minimalist approach to cybersecurity regulation, but recent years have evidenced a trend toward increasing regulation. The latest in this trend is the US Department of Defense’s “Cybersecurity Maturity Model Certification” (CMMC). CMMC has garnered quite a bit of attention recently, as it intends to impose cybersecurity compliance requirements on the entire Defense Industrial Base (DIB), over 300,000 organizations (including some universities). CMMC has emerged at a breakneck pace, and there is still a great deal of uncertainty regarding who is impacted, what is required, and how organizations should respond.

This talk will 1) introduce US cybersecurity regulation and compliance generally; 2) provide the background and context leading to CMMC; 3) overview CMMC; and 4) suggest approaches for thinking about cybersecurity compliance moving forward.
Speaker Bio:

Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research. Scott was previously the Postdoctoral Fellow in Information Security Law & Policy. Scott’s work thus far has emphasized private sector cybersecurity best practices, data aggregation and the First and Fourth Amendments, and cybercrime in international law. Scott studied Computer Science and History at the University of Virginia and received his J.D. from the Indiana University, Maurer School of Law.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Friday, September 4, 2020

Introducing the Law and Policy Student Affiliate Program

The CACR-Maurer Student Affiliate program is a collaboration between the IU Center for Applied Cybersecurity Research (CACR), which leads Trusted CI, and the IU Maurer School of Law, wherein law students with a demonstrated interest in privacy and cybersecurity are given an opportunity to work on real world legal problems. The student affiliates work directly with Scott Russell, who is a Senior Policy Analyst at CACR, Trusted CI team member, and a Maurer graduate, and contribute to law and policy guidance materials produced by Trusted CI.

Previous student affiliates have conducted research relating to Controlled Unclassified Information, the EU General Data Protection Regulation, the California Consumer Privacy Act, US Export Control Laws and Regulations, the DoD Cybersecurity Maturity Model Certification, and Artificial Intelligence & Ethics. Materials developed by these student affiliates have directly contributed to guidance Trusted CI has created for the NSF science community, including webinars, live presentations, trainings, blog posts, internal whitepapers, and memoranda.


For the Fall 2020 semester, there will be one student affiliate: Madeline Blaney. Madeline is a second-year law student at Maurer and the President of the Maurer Cybersecurity and Privacy Law Association.


The program is managed by Maurer professor Joseph Tomain, who also manages the Maurer Graduate Certificate in Cybersecurity Law and Policy and the Graduate Certificate in Information Privacy Law and Policy. Student affiliates receive 1 credit hour for participating in the program. Participation in the student affiliate program is typically reserved for students pursuing a Maurer Graduate Certificate in Cybersecurity Law and Policy but is also open to non-certificate students with sufficient background in privacy and cybersecurity law. This is CACR’s fourth semester with student affiliates, building on a long history of collaboration between CACR and Maurer.


Thursday, July 9, 2020

PEARC20: Join us at the Fourth Workshop on Trustworthy Scientific Cyberinfrastructure

Join us at the Fourth Workshop on Trustworthy Scientific Cyberinfrastructure at PEARC20 on Monday July 27th, 8:00am - 12:00pm Pacific Time (11:00am - 3:00pm Eastern Time / 15:00 - 19:00 UTC). The workshop provides an opportunity for sharing experiences, recommendations, and solutions for addressing cybersecurity challenges in research computing. It also provides a forum for information sharing and discussion among a broad range of attendees, including cyberinfrastructure operators, developers, and users.

The workshop is organized according to the following goals:

  • Increase awareness of activities and resources that support the research computing community's cybersecurity needs. 
  • Share information about cybersecurity challenges, opportunities, and solutions among a broad range of participants in the research computing community.
  • Identify shared cybersecurity approaches and priorities among workshop participants through interactive discussions.

Schedule

See our workshop page for the full presentation abstracts. The order of presentations is subject to change; the final schedule will be posted to the workshop page.
  • 8:00 am Pacific / 11:00 am Eastern 
    • Community Survey Results from the Trustworthy Data Working Group   
      • Presenters: Jim Basney, NCSA / Trusted CI
        Jeannette Dopheide, NCSA / Trusted CI
        Kay Avila, NCSA / Trusted CI
        Florence Hudson, Northeast Big Data Innovation Hub / Trusted CI
  • 8:30 am Pacific / 11:30 am Eastern 
    • Characterization and Modeling of Error Resilience in HPC Applications 
      • Presenter: Luanzheng Guo, University of California, Merced
  • 9:00 am Pacific / 12:00 pm Eastern
    • Trusted CI Fellows Panel
      • Moderator: Dana Brunson, Internet2 
      • Panelists: Jerry Perez, University of Texas at Dallas
        Laura Christopherson, Renaissance Computing Institute
        Luanzheng Guo, University of California, Merced
        Songjie Wang, University of Missouri
        Smriti Bhatt, Texas A&M University - San Antonio
        Tonya Davis, Alabama A&M University

  • 9:30 - 10:30 am Pacific / 12:30 pm - 1:30 pm Eastern ***Break/Lunch***
  • 10:30 am Pacific / 1:30 pm Eastern
    • Analysis of attacks targeting remote workers and scientific computing infrastructure during the COVID-19 pandemic at NCSA/UIUC
      • Presenters: Phuong Cao, NCSA/U of Illinois at Urbana-Champaign
        Yuming Wu, Coordinated Science Lab/UIUC
        Satvik Kulkarni, U of Illinois at Urbana-Champaign
        Alex Withers, NCSA/U of Illinois at Urbana-Champaign
        Chris Clausen, NCSA/U of Illinois at Urbana-Champaign
  • 11:00 am Pacific / 2:00 pm Eastern
    • Regulated Data Security and Privacy: DFARS/CUI, CMMC, HIPAA, and GDPR
      • Presenters: Erik Deumens, University of Florida
        Gabriella Perez, University of Iowa
        Anurag Shankar, Indiana University
  • 11:30 am Pacific / 2:30 pm Eastern
    • Securing Science Gateways with Custos Services
      • Presenters: Marlon Pierce, Indiana University
        Enis Afgan, Johns Hopkins University
        Suresh Marru, Indiana University
        Isuru Ranawaka, Indiana University
        Juleen Graham, Johns Hopkins University
For any questions regarding this workshop, please contact workshop-cfp@trustedci.org.

Thursday, March 19, 2020

Keeping Regulated Data Secure during the COVID-19 Outbreak

The social distancing measures against COVID-19 have resulted in a massive shift of the workforce to home offices. While this has allowed work to continue, it has raised concerns for organizations that are collecting COVID-19 data or handling other types of regulated research data, especially those without regulatory expertise or resources. We are therefore providing the following guidance to help organizations stay compliant with the privacy and security regulations that affect research data, whether COVID-19 related or not.

[Note: You can also check out our earlier blog post titled “Recommendations for reducing cybersecurity risk while working remotely”.]

1. HIPAA (Health Insurance Portability and Accountability Act) 

First, determine whether HIPAA applies. Not all personally identifiable health information is protected by HIPAA: only protected health information (PHI) created, received, maintained, or transmitted by covered entities (CEs) and their business associates (BAs). If you are neither, HIPAA may not apply to the health data you collect, even if it is personally identifiable. That said, you should still treat it as sensitive data and protect it using the applicable safeguards below.
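
The applicability test above can be pictured as a simple decision rule. A toy sketch in Python (the function and parameter names are hypothetical, and this is an illustration, not legal advice):

```python
def hipaa_likely_applies(handles_phi, is_covered_entity, is_business_associate):
    """HIPAA protects PHI only when it is handled by a covered entity (CE)
    or a business associate (BA); otherwise the data may fall outside HIPAA,
    though it should still be treated as sensitive."""
    return handles_phi and (is_covered_entity or is_business_associate)

# A researcher who is neither a CE nor a BA: HIPAA likely does not apply.
print(hipaa_likely_applies(True, False, False))  # prints False
```

Of course, the real determination depends on institutional counsel; the sketch only captures the CE/BA gate described above.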

Collecting and processing PHI:
  1. Only use tools institutionally approved for PHI. 
  2. Do not use a vendor with whom your institution does not have a HIPAA business associate agreement (BAA). Here is a list of some vendors you might consider if you do not have HIPAA-approved systems: 
    1. Qualtrics for surveys 
    2. SFax for e-faxing 
    3. Zoom for teleconferencing 
    4. Box for Healthcare for file sharing 
  3. Protect your workstations and mobile devices as described below. 
Protecting PHI when working from home:
  1. Follow institutional telework and IT policies and procedures. 
  2. Work with your IT professionals. 
  3. Secure your workstation (laptop/desktop). 
    1.  Use a workstation provided and secured by your institution. 
    2.  If you must use a shared workstation (e.g., a home PC), ensure you take the following security measures: 
      1. Do not use the workstation if it has an old and insecure operating system installed (e.g. Windows XP). 
      2. Create a separate account for yourself and password protect it. Access PHI only while logged into this account. 
      3. Do not share the account password. 
      4. Do not download PHI to the workstation. 
      5. Enable and password protect the screen saver. 
      6. Ensure that the firewall and antivirus are enabled. 
      7. Apply the latest patches. 
      8. Connect only to trusted, work-related websites. 
      9. Turn off the “Remember Password” feature in browsers/decline to store passwords to sensitive sites. 
      10. Do not back up the device to your personal cloud storage (e.g. Google or Apple) account. 
      11. Delete the account after you are back at work. 
  4. Secure your mobile device (smartphone/tablet).
    1. Use a mobile device provided/secured by your institution.
    2. If you must use a personally owned mobile device, take the following security measures:
      1. Follow your institutional policies/procedures regarding use of personal mobile devices for PHI.
      2. Do not download PHI to the device.
      3. Enable screen lock or PIN.
      4. Do not back up the device to your personal cloud storage (e.g. Google or Apple) account.
  5. Ensure encryption at rest and in transit.
    1. Ensure that your home WiFi network is using encryption.
    2. Ensure that the workstation/mobile device is full-disk encrypted.
    3. Ensure that the URL for sites you visit begins with https://.
    4. Use a VPN, especially when using an untrusted network.
    5. Use institutionally approved, encrypted communication tools for remote meetings. However, as of March 17th, the US Dept. of Health and Human Services' Office for Civil Rights (responsible for enforcing HIPAA) is allowing video chat tools such as Apple FaceTime, Facebook Messenger video chat, Google Hangouts video, Zoom, and Skype for COVID-19 response. Public-facing apps such as Facebook Live, TikTok, etc. are not allowed.
    6. Do not record meeting sessions.
    7. If you are backing up to external media, e.g., a USB disk, ensure that it is encrypted.
  6. Ensure physical security.
    1. Keep your device and any connected media in a physically secure location.
    2. Keep conversations private by restricting others' access to the home office space during meetings where PHI may be disclosed.
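
The "encryption in transit" items above lend themselves to a simple automated spot check. As a minimal sketch (the function name and example URLs are hypothetical), a script could flag saved links that do not use HTTPS:

```python
from urllib.parse import urlparse

def find_unencrypted_urls(urls):
    """Return the URLs whose scheme is not https (i.e., no TLS in transit)."""
    return [u for u in urls if urlparse(u).scheme != "https"]

# Only the plain-http link is flagged for follow-up.
print(find_unencrypted_urls([
    "https://portal.example.edu/survey",
    "http://legacy.example.edu/forms",
]))  # prints ['http://legacy.example.edu/forms']
```

A scheme check alone does not verify certificate validity; it is only a first-pass filter on the checklist item that URLs should begin with https://.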
Breach Notification:
  1. If you suspect an incident or a breach of PHI, immediately follow your institutional incident response process.
For the strictly privacy aspects of HIPAA, please refer to Dept. of Health and Human Service's guidance on HIPAA privacy and coronavirus.

2. GDPR (General Data Protection Regulation) 

COVID-19 related data on European Economic Area (EEA) persons falls under a “special category of personal data” under GDPR. 
  1. Processing this data requires consent from the subject. 
  2. Processing must be necessary for one or more of the following. 
    1. Allow an employer to function. 
    2. Protect the interest of the subject. 
    3. Reasons of substantial public interest. 
    4. Purposes of preventive or occupational medicine. 
    5. Reasons for public interest in the area of public health. 
  3. Records of data processing must be kept. 
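
As a toy illustration of the checklist above (the field and purpose names are hypothetical, and this is not legal advice), a record-keeping system might gate processing on consent plus at least one permitted purpose:

```python
# Hypothetical purpose labels mirroring the five grounds listed above.
ALLOWED_PURPOSES = {
    "employer_function",
    "subject_interest",
    "substantial_public_interest",
    "preventive_or_occupational_medicine",
    "public_health",
}

def processing_permitted(record):
    """Per the checklist above: consent from the subject, plus at least
    one permitted ground of necessity, and the record itself is the
    required documentation of processing."""
    has_consent = record.get("consent") is True
    has_purpose = bool(ALLOWED_PURPOSES & set(record.get("purposes", [])))
    return has_consent and has_purpose

print(processing_permitted({"consent": True, "purposes": ["public_health"]}))  # True
print(processing_permitted({"consent": True, "purposes": ["marketing"]}))      # False
```

Real GDPR compliance turns on legal analysis of Article 9, not a lookup table; the sketch only encodes the summary given here.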
3. DFARS 252.204-7012 (Defense Federal Acquisition Regulation Supplement) 

Protecting CUI while working from home: 
  1. Secure your workstation (laptop/desktop). 
    1. Work with your IT professionals. 
    2. If your institution provides it, use a web- or remote desktop-accessible virtual desktop interface (VDI) and a remote CUI enclave. 
    3. Use an institutionally provided and secured workstation. 
    4. Do not use a shared workstation such as a home PC. 
    5. Ensure both the firewall and antivirus are enabled. 
    6. Access CUI only while logged into your own user account. 
    7. Use a strong password. 
    8. Do not share the password. 
    9. Enable 2-factor authentication (e.g., fingerprint sensor) if possible. 
    10. Do not download CUI. 
  2. Mobile devices: 
    1.  Do not use mobile devices to access, store, or process CUI. 
  3. Ensure encryption at rest and in transit. 
    1. Ensure that your home WiFi network is using encryption.
    2. Ensure the workstation has full disk encryption. 
    3. Always use a VPN.
  4. Ensure physical security. 
    1. Keep your device and any connected media in a physically secure location. 
    2. Keep conversations private. Restrict others' access to the home office space during meetings where CUI may be disclosed.
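
The "use a strong password" item above can be screened automatically before an account is provisioned. A minimal sketch (the 14-character threshold and function name are assumptions, not a cited DFARS requirement):

```python
import string

def is_strong_password(pw, min_length=14):
    """Require a minimum length plus all four character classes
    (lowercase, uppercase, digit, punctuation). Thresholds are assumed."""
    classes = (
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    )
    return len(pw) >= min_length and all(classes)
```

Length-plus-complexity checks are a floor, not a ceiling; institutional policy and 2-factor authentication (item 9 above) still apply.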
Breach notification:
  1. If you suspect an incident or a breach, immediately follow your institutional incident response process. 
For more guidance, contact your Contracting Officer.

4. COVID-19 Phishing, Scams, and Fake News
  1. Beware of COVID-19 phishing tactics and scams
  2. Avoid COVID-19 fake news and misinformation
Contact us if you need additional help or information.

Monday, October 14, 2019

Trusted CI Webinar October 28th at 11am ET: Trends in Global Privacy: GDPR One Year Later with Scott Russell

CACR's Scott Russell is presenting the talk, "Trends in Global Privacy: GDPR One Year Later" on October 28th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The past few years have seen a resurgence of privacy laws around the globe, starting with the European Union’s General Data Protection Regulation (GDPR) and leading to proposed laws in South Korea, Brazil, and the United States. These laws may be targeted at enhancing privacy, but their biggest effect has been as a source of fear and confusion for those being regulated. This talk will build upon last year’s GDPR webinar, introduce CCPA, and then discuss trends in global privacy more broadly: what’s happening, what’s coming, and what you should do about it.
Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research (CACR), where his work focuses on privacy and cybersecurity policy. A lawyer and researcher, Scott received his B.A. in Computer Science and History from the University of Virginia, received his J.D. from Indiana University, interned at MITRE, and served as a postdoctoral fellow at CACR.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Tuesday, June 18, 2019

Trusted CI at the 2019 annual Great Plains Network All-Hands Meeting May 21-23

Ishan Abhinit conducting log analysis exercise at GPN AHM 2019
Following the successful workshops Trusted CI staff provided at the 2017 Great Plains Network All-Hands Meeting, GPN invited Trusted CI back to the event in 2019. Five members of the Trusted CI staff presented a series of three workshops from May 21st to 23rd at the 2019 Great Plains Network All-Hands Meeting. The workshops covered log analysis, risk management for regulated data, and developing information security programs for research projects and facilities.

Building a NIST Risk Management Framework for HIPAA and FISMA Compliance - Wednesday, May 22 (Anurag Shankar & Ryan Kiser)
Anurag Shankar and Ryan Kiser led a workshop to prepare attendees to effectively leverage NIST’s risk management guidelines as a tool to address the increasingly heavy demands of regulated data on research workflows. They provided an overview of the requirements for handling different types of regulated data such as PHI and CUI as well as a unified risk-based methodology for adhering to these requirements.

Security Log Analysis - Wednesday, May 22 (Mark Krenz & Ishan Abhinit)
Mark Krenz and Ishan Abhinit presented a half-day workshop on Security Log Analysis, including a 45-minute exercise developed by fellow Trusted CI colleague Kay Avila. The hands-on exercise involved analyzing an Apache web server log file to find attacks at six levels of difficulty. The workshop also covered important aspects of collecting, organizing, and analyzing log files, and provided specific techniques for finding different types of attacks. Real-time polling was used to engage attendees and gain insight into community practices.


A Practical Cybersecurity Framework for Open Science Projects and Facilities - Thursday, May 23 (Bob Cowles)
Bob conducted a workshop to give attendees a foundation in what it means to have a basic, competent cybersecurity program for open science projects. In addition to lively discussion from the participants, the four pillars of the Trusted CI Framework were presented along with the sixteen “musts” that compose the core framework requirements. Participants were provided with the tools for building a cybersecurity program and encouraged to use a set of rational, evidence-based controls as a component of their program.
Left to right: Bob, Anurag, Ishan, Michael, Mark, Ryan

Attending the conference also allowed Trusted CI staff to hold less formal cybersecurity discussions and consultations during social events. While visiting Kansas City, the Trusted CI team also had the opportunity to meet with Michael Grobe, a member of the distributed computing community and co-developer of Lynx, one of the first popular web browsers.

The materials presented by Trusted CI at this conference and others can be found on the Trusted CI website.

Monday, April 8, 2019

CCoE Webinar April 22nd at 11am ET: REED+: A cybersecurity framework for research data at Purdue University

Preston Smith is presenting the talk "REED+: A cybersecurity framework for research data at Purdue University" on Monday April 22nd at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The REED+ framework is built on NIST SP 800-171 and other related NIST publications. It serves as a standard for campus IT to align with security regulations and best practices, creates a single process for intake and contracting, and facilitates easy mapping of controlled research to CI resources for the sponsored programs, human subjects, and export control offices.

The framework allows researchers to experience faster intake of new funded projects and be more competitive for research dollars. Using student-developed training materials and instruction, researchers, administrators, and campus IT are now able to more clearly understand previously complicated data security regulations affecting research projects.

The ecosystem developed from this project enables new partnerships with government agencies and industry partners from the defense, aerospace, and life science sectors. Experiences and best practices in providing cyberinfrastructure and security awareness developed from this collaboration are documented and shared with the broader CI and campus community through conferences, journals, and workshops.

In addition to the IT challenges (security controls, technology, and regulations), the REED+ team will discuss the use of research facilitators dedicated to regulated research; building relationships among campus IT organizations, compliance offices, research administration, IRBs, and export control offices; and improving institutional processes.

Ultimately, the goal is to create a systematic approach that results in a rapid flow from contracts to actionable technical requirements to implementation to approval, so that research can begin in the minimum possible time frame.
Speaker bio:

Preston Smith is the Director of Research Computing Services at Purdue University. Purdue's Community Cluster program, which supports over 180 HPC faculty and 550 labs using research data systems, is a pioneering program for delivering "condo-style" HPC. At Purdue, his organization designs, builds, and operates compute systems, and delivers advanced research support to the campus community.

Presentations are recorded and include time for questions with the audience.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Monday, August 13, 2018

CCoE Webinar August 27th at 11am ET: NIST 800-171 Compliance Program at U. Connecticut

Jason Pufahl is presenting the talk "NIST 800-171 Compliance Program at University of Connecticut" on Monday August 27th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The Department of Defense established DFARS 252.204-7012, which specifies that any research containing Controlled Unclassified Information (CUI) be protected using NIST 800-171. This presentation will discuss the University of Connecticut's approach to implementing the NIST 800-171 framework, including: Contracting, Faculty Engagement, Infrastructure Implementation, Training, and Controls Review.
The intention of this presentation is to provide a complete picture of what compliance with the NIST Standard requires. I will endeavor to describe the entire compliance process starting from conceptualization of the technology solution through to the post implementation review. The talk will be designed to appeal to compliance staff, technical staff and project managers and will emphasize elements required to build and sustain the compliance program. I will discuss the technology elements of our solution, generally, but will focus on how the technologies chosen met our goals of managing as many of the compliance requirements centrally as practical while providing a flexible solution.
Jason Pufahl is the Chief Information Security Officer for the University of Connecticut. He has 20 years of infrastructure and information security experience and has spent the last 10 years dedicated to information security and privacy. He has responsibility for information security for the institution, encompassing security awareness and training, disaster recovery, risk management, identity management, security policy and regulatory compliance, security analytics, and controls implementation.

Jason works closely with both the administrative and academic areas of the University. He is a member of the University’s Data Governance Committee, Joint Audit and Compliance Committee, and Public Safety Advisory Committee. He is also member of the University IRB with a primary focus of improving data privacy and security practices related to institutional research.

Jason has a Master’s in Education Technology and a passion for professional development, security training, and awareness. He designed and ran an information security awareness game called HuskyHunt and founded the Connecticut Higher Education Roundtable on Information Security (CHERIS), a quarterly forum for sharing information security best practices among higher education institutions in Connecticut. He is active in the security community nationally, a frequent conference speaker, and a member of the NERCOMP vendor and licensing committee.

Presentations are recorded and include time for questions with the audience.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Monday, May 7, 2018

Trusted CI Webinar May 21st at 11am ET: The EU General Data Protection Regulation (GDPR)



CACR's Scott Russell is presenting the talk, "The EU General Data Protection Regulation (GDPR)" on May 21st at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The European Union’s General Data Protection Regulation (GDPR) is slated to come into effect on May 25, 2018, and organizations around the world are struggling to determine whether they are covered, what is required, and what will happen if they don’t satisfy its requirements.

This webinar will provide an introduction to GDPR, including an overview of the law's requirements, an in-depth discussion of when and to whom the law may apply, and potential strategies for organizations that are unsure of whether they are covered. The webinar will also provide insight into the motivation behind the law, the legal and practical ramifications of its enforcement outside of the EU, and highlight current uncertainties relating to the scope and impact of the law. Attendees will leave with an improved understanding of how GDPR may impact their organization, and will be equipped with basic strategies to manage risks arising from the enforcement of the law.

This webinar is a product of Trusted CI, the NSF Cybersecurity Center of Excellence. Trusted CI is supported by the National Science Foundation under Grant Number ACI-1547272. For more information about Trusted CI, please visit: http://trustedci.org/. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research (CACR), where his work focuses on privacy and cybersecurity policy. A lawyer and researcher, Scott received his B.A. in Computer Science and History from the University of Virginia, received his J.D. from Indiana University, interned at MITRE, and served as a postdoctoral fellow at CACR.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, June 7, 2017

NIST SP 800-171 and its potential impact on NSF science

By: Grayson Harbour, Scott Russell, Craig Jackson, and Bob Cowles

CTSC has recently seen an uptick in conversations in the community concerning NIST Special Publication 800-171 (SP 800-171) [1] and “Controlled Unclassified Information,” or CUI. In this post, we explain what CUI is, what SP 800-171 is, when and where SP 800-171 applies, and the impact it might have on the NSF science community. As elaborated below, we are concerned that the requirements of SP 800-171 may evolve into a general-purpose set of security requirements applied to a wide range of non-federal entities, something SP 800-171 was not intended to do and for which it is not a good fit. The NSF science community should keep SP 800-171’s limitations in perspective, particularly its focus on confidentiality-oriented controls, as opposed to the availability and integrity protections so critical to the science mission and of growing importance in the context of ransomware.

Background
NIST SP 800-171 and CUI both fall under the umbrella of federal cybersecurity. The primary federal cybersecurity law is FISMA, [2] which requires federal agencies to implement security controls for their systems commensurate with those systems’ risks. (This process is outlined in detail in the NIST Risk Management Framework, [3] and the controls used are compiled in NIST SP 800-53. [4]) Agencies also have narrower security requirements arising from other statutes, such as HIPAA. [5] These requirements are typically vague, though, resulting in a hodgepodge of different control sets based on varying agency interpretations of what each law requires. And this system only becomes more complex when agencies try to extend their specific requirements to third parties.

The specific security requirements imposed on federal entities don’t directly affect anyone who isn’t a federal entity. [6] However, these regulations can be (and are) imposed indirectly through binding agreements, like procurement contracts, grants, and cooperative agreements. The logic of this is fairly intuitive: data doesn’t suddenly stop needing protection when passed to a contractor. Although we normally associate these requirements with classified data, the same principle applies to unclassified data too. Which brings us to CUI . . .

What is CUI?
Controlled Unclassified Information, or CUI, is exactly what its name suggests: it is (1) unclassified information, that (2) is subject to federal regulatory control. So CUI quite simply is unclassified information that a federal law, regulation, or policy says needs to be protected in some way. But there’s a catch: since we’re in the world of federal cybersecurity, CUI only includes information that is made by or for the federal government. [7] Information non-federal entities make entirely on their own isn’t included, even if there is a law regulating that type of data. [8] This distinction will be important when we discuss scenarios where SP 800-171 could potentially apply.

What is the CUI program?
Before 2010, CUI was handled in different ways by different agencies: DoE, HHS, and DoJ each applied their own cybersecurity standards. In 2010, the White House decided that the federal government needed to standardize how it handled CUI, so it issued Executive Order 13556, creating the Controlled Unclassified Information program. The goal was to create a uniform minimum set of requirements for all CUI handled by the federal government. Rather than the prior hodgepodge, every federal agency would follow the same federal regulation, codified in 32 CFR 2002. [9]

What is SP 800-171?
NIST SP 800-171 is a corollary to the CUI program and 32 CFR 2002, designed to help federal agencies apply the new, uniform CUI standards to non-federal entities that may handle their CUI. It is intended to serve as a point of reference for the terms of binding agreements between federal entities and non-federal entities that receive CUI. Put simply, SP 800-171 is a guidance document; it has no independent regulatory force. This makes sense in context: CUI is by definition information that federal entities have to protect, and the requirement to protect CUI doesn’t disappear when the federal entities hand the CUI to a third party. So federal entities are supposed to incorporate SP 800-171 adherence into their binding agreements with the third parties to ensure those third parties apply the same protections the federal entity is obligated to apply.

In case the preceding paragraph didn’t make sense, it may help to understand how federal regulations work when applied to non-federal entities. Unlike Congress, which can legislate on almost anything, the executive branch can only regulate how executive branch agencies behave (or enforce powers given to it by Congress). But the executive branch can regulate non-federal entities indirectly through binding federal agreements (contracts, grants, etc.). It’s not the law of the land, but if you want to do business with the federal government, you have to follow its rules. This is why SP 800-171 isn’t just “the law” or “not the law”: it all comes down to what you’ve agreed to.

Ok, but what is SP 800-171?
Now that you know why it exists, what exactly does SP 800-171 require? Quite frankly, SP 800-171 is a list of security controls taken straight from the requirements for protecting confidentiality in NIST SP 800-53. [10] SP 800-171 is frequently analogized as SP 800-53’s little sibling, but that comparison is somewhat misleading: SP 800-171 is focused on confidentiality and doesn’t include controls whose primary purpose is integrity or availability. This actually makes perfect sense in context, since the CUI program mostly standardizes the implementation of federal laws (most of which are privacy laws) whose primary concern is typically confidentiality.

That being said, not all CUI is made the same, and under the federal CUI program, some CUI carries slightly higher requirements than the rest. For federal agencies, this is boiled down to “CUI-Basic” and “CUI-Specified.” Essentially, CUI-Basic is the floor below which no one may fall, whereas CUI-Specified means that some requirements are heightened. If this all sounds potentially confusing, don’t worry: the categorization of CUI is laid out in the CUI Registry [11], which helpfully tells you both whether information is CUI and whether it is CUI-Basic or CUI-Specified. If it isn’t in the CUI Registry, it isn’t CUI. (However, as discussed below, the specific terms of your agreement will always be the most important consideration.)

Now here’s where things get tricky: the CUI-Basic/CUI-Specified distinction and the CUI Registry apply to federal entities, not non-federal entities. Naturally, the intended effect is that they flow down to non-federal entities through binding agreements, but it’s easy to imagine some wires getting crossed in the process. So how will you know what to do?

How do I know if SP 800-171 applies to my project?
Despite all of the potential for confusion, the application of SP 800-171 is very simple: if SP 800-171 makes its way into your binding agreement, do SP 800-171. Otherwise, you are not directly obligated to adhere to it. As stated above, SP 800-171 is a guidance document; it is not mandatory for non-federal entities unless incorporated as a requirement through some binding legal agreement. So while the terms of this discussion often feel very broad (both “Controlled Unclassified Information” and “non-federal entity” enjoy enormously broad definitions), that is because they are just building blocks for agencies to use when drafting contracts and other binding agreements.

That being said, we’re glossing over a lot of details. For instance, CUI is subject to marking requirements, [12] so it is unlikely that you could unwittingly receive CUI. Similarly, there are provisions for challenging the designation of information as CUI if you believe certain data shouldn’t be covered. But the broader point that must always be emphasized is that everything ultimately depends on what you agree to in your binding agreement. If an organization agrees to meet SP 800-171 for all of its systems (not just those that handle CUI), it may well be held to that standard. The CUI program is intended only for CUI, but the standards it uses could be applied more broadly.

How might the CUI Program affect the science community?
Putting aside the strict question of “am I bound,” the potentially more salient question is “will I be bound in the future?” This is much harder to answer, but a quick perusal of the CUI Registry should give some indication of whether SP 800-171 might eventually apply to NSF science projects and facilities. The CUI Registry lists several categories of information that a researcher might encounter, store, and subsequently need to protect under the CUI program: student records, export-controlled research, critical infrastructure data, and controlled technical information [13] are all CUI. And as the CUI program continues to expand, more categories are expected to be added to the CUI Registry.

There are two basic ways you could imagine the CUI program impacting a researcher: (1) you receive CUI from the federal government specifically for research purposes, or (2) your grant or cooperative agreement incorporates the requirements of SP 800-171 in some form. [14] The first is fairly uncontroversial, but also very limited. The second, however, would be an enormous shift in the research community, and probably a bad one.

Would SP 800-171 be a good thing for the NSF science community?
To put it bluntly: no. SP 800-171 was not designed to advance the science community’s mission, and the requirements it puts in place impose a substantial burden without any evidence of a corresponding benefit. [15] Apart from this basic mismatch in scope and capabilities, we’re also concerned that SP 800-171’s approach reinforces a checklist mentality toward security (notoriously common in FISMA/NIST RMF environments, despite their lip service to risk management), where problems are solved by predetermined lists of controls. This incorrectly assumes that security is a solved problem and doesn’t allow for effective risk management. Also troubling is that SP 800-171’s focus on controls may lead organizations to view security as an add-on: something slapped onto existing systems to manage their problems, rather than a core component of system design. This is a particular concern because much of our community is designing and building new technology. More troubling still is that SP 800-171 is likely to be quite expensive to implement. After all, SP 800-171 is not a comprehensive information security framework [16]; it is simply a collection of confidentiality requirements arising from federal privacy laws. So despite its high cost, it is certainly not tailored for scientific research, given its notable lack of focus on integrity and availability, two critical concerns for science.

Ultimately, our biggest concern is that SP 800-171 will come to be seen as a sort of RMF-light: a security baseline routinely applied to any project or entity that has an agreement in place with a federal entity. As stated above, SP 800-171 was not drafted as a comprehensive security framework for anyone, let alone the science community, and its goals and structure are not in line with the community’s needs and capabilities. So, although some are beginning to recommend preemptively implementing standards like SP 800-171 as a form of future-proofing against possible federal regulation, we certainly would not go that far. However, considering the possibility that these requirements might find their way into grants and cooperative agreements, the science community should be aware of SP 800-171, its structure, limitations, and burdens, and be prepared to negotiate reasonable confidentiality provisions, perhaps modeled on controls already in place to protect confidential information at your facility or institution. Increasingly, the science community needs to show that it has effective, efficient approaches to cybersecurity tailored to its mission and the actual risks it faces.



[2] Federal Information Security Management Act of 2002 (FISMA), Pub. L. No. 107-347, Title III, 116 Stat. 2899, (2002), available at http://csrc.nist.gov/drivers/documents/FISMA-final.pdf.
[3] For an overview of the NIST Risk Management Framework, see, e.g., NIST SP 800-37 rev. 1, “Guide for Applying the Risk Management Framework to Federal Information Systems,” (Feb. 2010), available at http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r1.pdf.
[4] NIST SP 800-53 rev. 4, “Security and Privacy Controls for Federal Information Systems and Organizations,” (Apr. 2013), available at http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.
[5] For more information on the HIPAA Security Rule, see, e.g., “Security Rule Guidance,” HHS,  https://www.hhs.gov/hipaa/for-professionals/security/guidance/index.html?language=es.
[6] Certain laws, like HIPAA, regulate both federal and non-federal entities. Our discussion is limited to the ways these laws regulate federal entities.
[7] 32 CFR 2002.4(h) available at https://www.federalregister.gov/d/2016-21665/p-124, (defining “Controlled Unclassified Information” to include only “information the Government creates or possesses, or that an entity creates or possesses for or on behalf of the Government” (emphasis added)).
[8] Although it may not be CUI, the data may still be subject to security regulations, as with HIPAA and the HIPAA Security Rule.
[9] 32 CFR Part 2002, available at https://www.federalregister.gov/d/2016-21665.
[10] The specific controls selected are largely equivalent to SP 800-53 Moderate - Confidentiality, but SP 800-171 does omit a select few security controls that are: (1) entirely federal (and therefore wouldn’t make sense to apply to non-federal entities), (2) not directly related to the protection of CUI, or (3) assumed to be implemented by third parties already.
[11] The CUI Registry is maintained by the National Archives and Records Administration (NARA), available at https://www.archives.gov/cui/registry/category-list.
[12] The details of the marking requirement are laid out in 32 CFR 2002.20, available at https://www.federalregister.gov/d/2016-21665/p-303. For the various markings used, see https://www.archives.gov/cui/registry/category-marking-list.
[13] “Controlled Technical Information” means “technical information with military or space application that is subject to controls on the access, use, reproduction, modification, performance, display, release, disclosure, or dissemination.”
[14]  Although the definition of CUI only includes data that is made by or for the federal government, there is potentially a great deal of leeway in exactly what data is made “for the federal government,” which may allow for SP 800-171 to creep into a wide range of government agreements. But since this would be a new addition, such a broad interpretation of CUI could be resisted by the third party.
[15] For a discussion of the application and implementation of SP 800-171 in higher education institutions, see “An Introduction to NIST Special Publication 800-171 For Higher Education Institutions,” HEISC, (Oct. 2016), available at https://library.educause.edu/~/media/files/library/2016/4/nist800.pdf.
[16] For an example of a more comprehensive set of security controls, see, e.g., CIS Critical Security Controls, SANS https://www.cisecurity.org/controls/.