Monday, August 8, 2022

New Trusted CI Software Security Training Materials for the Community

In a world of continuous cyber attacks, cybersecurity is a responsibility of every person involved in the software development life cycle: managers, designers, developers, and testers. Trusted CI offers an evolving collection of training materials on software security covering topics such as secure design, secure implementation, testing, code auditing, dependency tools, static analysis tools, and fuzz testing.

The materials are freely available at https://www.cs.wisc.edu/mist/SoftwareSecurityCourse. Apart from videos and corresponding book chapters, they include hands-on exercises and quizzes for many of the topics. Classroom exercises and the solutions to the hands-on exercises and quizzes are provided to instructors by request. Most of the videos now have captions in both English and Spanish.

These materials are continuously updated as we develop new modules. The latest additions are modules on address space layout randomization (ASLR), memory safety checks, fuzz testing with AFL, and dependency analysis tools.

These materials have been used at conferences, workshops, and government agencies to train CI professionals in secure coding, design, and testing. They are also used at the University of Wisconsin-Madison to teach CS542, Introduction to Software Security.

Trusted CI Webinar: CIS Controls, August 22nd @11am Eastern

Trusted CI's Shane Filus and Mark Krenz will be giving a presentation on CIS Controls on Monday, August 22nd at 11am (Eastern).

Please register here.

The Trusted CI Information Security Office (ISO) team will be presenting a webinar on the CIS Controls. This will include background and information on the CIS controls, our recent experiences using the controls to assess Trusted CI’s own cybersecurity program and operations, and how that can be applied to your own project.
Topics include:
  • Who Trusted CI is and why we have a cybersecurity program.
  • Background on the CIS controls and what an assessment is.
  • What led us to perform a CIS assessment. 
  • Overview and discussion of our results. 
  • Differences between control versions 7.1 and 8. 
  • Discussion on methodology and tools that can be used in assessments.

Speaker Bios:

Shane Filus serves as a Senior Security Engineer at the Pittsburgh Supercomputing Center and works with the Trusted CI, XSEDE/ACCESS, and HuBMAP projects on all aspects of cybersecurity, from operations to incident response to policy, and everything in between.

Mark Krenz serves as Chief Security Analyst at Indiana University’s Center for Applied Cybersecurity Research. Mark’s focus is on cybersecurity operations, research and education. He has more than two decades of experience in system and network administration and has spent the last decade focused on cybersecurity. He serves as the CISO of the ResearchSOC and the Deputy CISO of Trusted CI.

---

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Monday, August 1, 2022

Analysis of NSPM-33: Cybersecurity Requirements for Federally Funded Research Organizations

By: Anurag Shankar and Scott Russell

This blog post provides research organizations with a summary of the “National Security Presidential Memorandum on United States Government-Supported Research and Development National Security Policy” (NSPM-33) and the recent Office of Science and Technology Policy (OSTP) / National Science and Technology Council (NSTC) guidance, along with an analysis of the requirements.

Summary

In January 2021, then-President Trump issued a directive, the “National Security Presidential Memorandum on United States Government-Supported Research and Development National Security Policy” (NSPM-33), instructing all federal agencies to: 1) standardize disclosure requirements and 2) mandate a research security program for all institutions receiving a total of $50 million or more in federally funded research. In January 2022, the Office of Science and Technology Policy (OSTP) released further guidance on these requirements, including details on four elements specified in NSPM-33: cybersecurity, foreign travel security, research security training, and export control training. The cybersecurity guidance identifies 14 controls that it recommends as requirements for federal agencies to flow down to organizations receiving federal research funding. Twelve of these controls are included in the 17 “basic hygiene” controls specified by CMMC Level 1 and the 15 “minimum security controls” specified by FAR 52.204-21, “Basic Safeguarding of Covered Contractor Information Systems.” The remaining two are NSPM-33-specific, addressing training and ransomware/data integrity.

The OSTP guidance also includes a number of additional recommendations for federal agencies to flow down to research organizations, summarized below:

  1. Documentation: Research organizations should be required to document their research security program and provide this documentation within 30 days of a request from a research agency that is funding an award or considering an application for award funding.

  2. Certification: Research organizations should be required to provide certification of compliance with the research security program requirement. OSTP, in consultation with the NSTC Subcommittee on Research Security and OMB, plans to develop a single certification standard and process that will apply across all research agencies.

  3. Timeline: Research organizations should establish a research security program as soon as possible, and should be given one year from the date of issuance of the formal requirement to comply. Organizations that become subject to the requirement in subsequent years should similarly be given one year to comply.

  4. Assistance: The Federal Government should provide technical assistance to support development of training content and program guidelines, tools, and best practices for research organizations to incorporate at their discretion. Agencies represented on the National Counterintelligence Task Force, in conjunction with the National Counterintelligence and Security Center, should jointly develop content that research organizations can leverage to meet requirements for research security programs and training. The Federal Government should consider supporting the formation of a community consortium to develop and maintain research security program information and implementation resources for research organizations, to include resources suitable for use within research security programs. The development of program content should be a collaborative effort between the government and organizations.

  5. Discretion: Research organizations should be provided flexibility to structure the organization’s research security program to best serve its particular needs, and to leverage existing programs and activities where relevant, provided that the organization implements all required program components. Research organizations should be given flexibility in how they choose to integrate research security requirements into existing programs, such as existing cybersecurity programs. Research organizations should be strongly encouraged to integrate some or all elements into a coherent research security program, where applicable and feasible.

  6. Auditing: Funding agencies should consider integrating the research security program requirement into the Compliance Supplement’s Research and Development Cluster audit guidance as part of the single audit of Federal grant and assistance programs (2 C.F.R. Part 200, Appendix XI).

Analysis

The primary questions raised by NSPM-33 and the NSTC/OSTP guidance are: 1) How will these requirements be flowed down to research organizations; 2) To what extent will funding agencies follow the guidance put forth by the NSTC; and 3) What is the scope of the requirements?

Regarding the first question, NSPM-33 only directly impacts federal funding agencies (e.g., NSF, DOE): the NSPM does not impose any requirements directly on research institutions. Instead, it instructs federal funding agencies to impose these requirements on research institutions receiving federal research funding. While the NSTC/OSTP guidance specifies January 2023 as the deadline for eligible institutions to comply, it does not specify how the requirements should be imposed. Moreover, the provision of NSPM-33 that specifically mentions cybersecurity is only intended to apply to research institutions receiving over $50 million in federal research funding, without clarifying how these institutions should be identified.

Practically speaking, the funding agencies may impose these requirements on all *new* grants. So although existing grants are technically unaffected, research institutions that wish to continue receiving funding will be forced to implement the requirements regardless.

Moreover, it is also unclear to what extent federal funding agencies are bound by the NSTC guidance. NSPM-33 only instructs OSTP to “promulgate guidelines for research institutions to mitigate risks to research security and integrity”: it is not empowered to dictate what requirements federal funding agencies impose. Indeed, neither OSTP nor the NSTC is mentioned in the subsection referencing research security programs and cybersecurity.

Scope is another issue. The guidance does not clarify whether the security program requirements apply only to researchers receiving federal funding or to every researcher within the organization. It specifies controls for programs to implement but does not explicitly state whether every system used by researchers (e.g., their workstations) is in scope or only institutional systems. Since this has financial repercussions, clarity is needed on what the requirements cover.

A research security program clearly requires controls to secure projects. However, prescribing a fixed set of controls that research systems must implement can be problematic, as research systems have unique needs that may not be met by traditional controls (instead requiring alternate controls to achieve their mission). Moreover, the focus on system-centric controls is poorly suited to securing research workflows, which require more than technical controls alone. The uniqueness of research systems (telescopes, sensors, microscopes, etc.) requires different approaches than controls designed to secure generic “systems.” For example, the Trusted CI Framework is a better fit for research programs: it includes controls but gives the institution flexibility in choosing a baseline control set tailored to the institution’s mission. This baseline control set is then supplemented with additional and alternate controls that are particularly important in the research context, as research infrastructure often requires specialized protections. Securing research ultimately requires flexibility.

Applying the same level of security to all research is also unwise. How research is protected is currently scoped to data by sensitivity and regulatory requirements. This is done for a reason, namely to apply security proportionally to risk to contain cost. Expanding it indiscriminately will be wasteful and unnecessary. For instance, public data does not need the same level of security as patient data.

The guidance asks agencies to allow flexibility on which program components institutions choose to implement but also directs them to “strongly encourage” choosing them all. With a documentation submission requirement, it is unclear how the program will be judged and what the impact of a “less than perfect choice” might be (e.g., of not having all of the controls in place).

The certification requirement is also likely to present challenges. As the CMMC rollout shows, designing a certification process for compliance at this scale is extremely challenging. And whereas CMMC is limited in scope, NSPM-33 is potentially much broader. While under CMMC most organizations can design isolated environments for controlled unclassified information (CUI) to limit scope, certifying compliance for research will be much more challenging, given the variety and complexity of research infrastructure.

Friday, July 29, 2022

Trusted CI Co-authors Identity Management Cookbook for NSF Major Facilities

Trusted CI’s Josh Drake has co-authored a new document addressing many identity management (IdM) challenges present at NSF Major Facilities. Due to their size and collaborative missions, Major Facilities often have many users, across multiple organizations, all with different access permissions to a diverse collection of CI resources. The Federated Identity Management Cookbook aims to address these challenges by providing time-tested “recipes” for building IdM capabilities, as well as a primer on the topic of IdM itself.

“While operating the IdM working group and CI Compass, we had many opportunities to engage with major facilities on identity and access management issues facing researchers. We were able to explore a variety of options to help researchers integrate federated identities into their cyberinfrastructure,” said Josh Drake. “This cookbook represents the distilled version of months of engagement with the MF community and a primer to identity management concepts that we hope will be of use to research cyberinfrastructure operators everywhere.” Trusted CI’s Ryan Kiser and Adrian Crenshaw also participated in the engagements that contributed to the cookbook.

This work was created in partnership with Erik Scott (RENCI) and CI Compass. CI Compass provides expertise and active support to cyberinfrastructure practitioners at NSF Major Facilities in order to accelerate the data lifecycle and ensure the integrity and effectiveness of the cyberinfrastructure upon which research and discovery depend.

The cookbook is available in the CI Compass Resource Library and on Zenodo. See CI Compass’s website to read the full press release.

Tuesday, July 26, 2022

Advancing the Cybersecurity of NSF Major Facilities: Trusted CI’s Inaugural Framework Cohort Successfully Completes Six-Month Program (June 2022)

Trusted CI’s first Framework Cohort has successfully completed its initial six-month period of workshops designed to improve NSF Major Facilities’ alignment to the Trusted CI Framework. Each cohort member adopted the Trusted CI Framework as the foundation for their cybersecurity program. Additionally, each cohort member worked closely with Trusted CI to produce 1) a validated self-assessment of their cybersecurity program’s alignment with the Trusted CI Framework; and 2) a draft Cybersecurity Program Strategic Plan identifying priorities and directions for further refining their cybersecurity programs.

The inaugural Cohort included the following NSF Major Facilities:

The success of the Framework Cohort is particularly notable as each of these facilities voluntarily adopted and rallied around the Trusted CI Framework as the foundation for their cybersecurity programs. 

The foundation of the Cohort program is the Trusted CI Framework, which was created as a minimum standard for cybersecurity programs. In contrast to cybersecurity guidance focused narrowly on cybersecurity controls, the Trusted CI Framework provides a more holistic and mission-focused standard for managing cybersecurity.

For GAGE, LIGO, NRAO, NSO, and OOI, the Cohort was their first formal training in the Trusted CI Framework’s “Pillars” and “Musts” and how to apply these fundamental principles to assess and strengthen their cybersecurity programs. NOIRLab contributed their experience as an early adopter of the Framework, having previously completed a one-on-one Framework engagement with Trusted CI.

Feedback from members of the first cohort on their experience has been strongly positive:

Eric Cross, Head of Information Technology, National Solar Observatory, said the following about his experience:

"The TrustedCI Framework Cohort was a valuable experience. The process required us to research and reflect on our internal cybersecurity policies and procedures. The Cohort provided a platform to meet with other facilities and work through challenges with feedback from peers. The experience resulted in formal documentation that provided our organization's leadership clear direction to improve our cybersecurity program with specific short-term and long-term goals. I highly recommend this exercise for all NSF facilities."

Craig Risien, CI Systems Project Manager, Ocean Observatories Initiative, said the following about his experience: 

“I found participating in Trusted CI’s first Framework Cohort to be exceptionally instructive and really enjoyed the opportunities to discuss cybersecurity challenges and lessons learned with Trusted CI and colleagues at other NSF Major Facilities. Working with Trusted CI on creating a validated self-assessment based on the Trusted CI Framework over the past six months has helped the Ocean Observatories Initiative (OOI) better understand the current state of its cybersecurity program. Being part of this cohort has also assisted the OOI with the development of a plan to fully implement the Trusted CI Framework and create a well-established and mature cybersecurity program. I look forward to the follow-on cohort sessions in the coming months.”

Trusted CI is continuing to support the first cohort through the end of 2022 by facilitating monthly workshops. Each facility will have the opportunity to lead a workshop in which they are encouraged to share their specific challenges and seek advice among the other cohort members.

Concurrently, Trusted CI is conducting its second cohort engagement leveraging the lessons learned from the first cohort. The second cohort includes the following organizations:

Trusted CI is excited to be working with these new facilities to advance their understanding and implementation of cybersecurity programs and best practices!

For more information, please contact us at info@trustedci.org.


Friday, July 15, 2022

Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research

This year, Trusted CI is conducting a year-long effort on the security of operational technology in science. Operational technology (OT) encompasses broad categories of computing and communication systems that in some way interact with the physical world. This includes devices that have sensing elements, control elements, or some combination of the two. Networked sensors and control systems are increasingly important in the context of science as they are critical in operating scientific instruments. Trusted CI is pleased to share its findings from this study, published in the following report:

Emily K. Adams, Daniel Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, Susan Sons, and John Zage. “Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research,” July 13, 2022. DOI: 10.5281/zenodo.6828675  https://doi.org/10.5281/zenodo.6828675

In support of this study, Trusted CI gratefully acknowledges the many individuals from the following NSF Major Facilities that contributed to this effort: IceCube Neutrino Observatory, NOIRLab, Ocean Observatories Initiative, and the United States Academic Research Fleet.

Now that Trusted CI has finished its examination of the current state of the security of OT in science, it will turn its focus to developing a roadmap of solutions to sustainably advance security of scientific operational technology, which will be published in late 2022.

Thursday, June 30, 2022

Trusted CI co-PI Bart Miller wins award for landmark paper on dependable computing

Bart Miller, Trusted CI co-PI, and his two student co-authors were honored with the 2022 Jean-Claude Laprie Award in Dependable Computing on June 28 in Baltimore, Md. Miller, along with L. Fredriksen, and B. So, were presented the award during the opening session of the Annual IEEE/IFIP International Conference on Dependable Systems and Networks.

The groundbreaking paper, “An Empirical Study of the Reliability of UNIX Utilities,” published in 1990, launched the field of fuzz random testing, commonly called fuzzing. The paper introduced a new, easy-to-use software testing technique and then used it to evaluate the reliability of UNIX utilities by finding inputs that crashed them. As part of this research, the authors also studied the root causes of the failures, and they released their code and data openly (a novelty at that time). The paper has been cited more than 1,300 times and was responsible for creating an entirely new branch of testing and security research. Hundreds of papers and dozens of PhD dissertations are produced each year in this area.
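The core idea from the 1990 study can be sketched in a few lines: generate random inputs, feed them to a target, and record every input that causes a crash. The toy target below is hypothetical (the original work fuzzed real UNIX utilities through their standard input), but the loop captures the essence of the technique:

```python
import random

def naive_parser(data: bytes) -> int:
    # Hypothetical target with a planted bug, standing in for the
    # UNIX utilities tested in the original study.
    if len(data) > 3 and data[0] == 0x25:  # b'%' -- unhandled "directive"
        raise ValueError("crash: unhandled format directive")
    return len(data)

def fuzz(target, trials=5000, max_len=16, seed=42):
    """Feed random byte strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)  # seeded so crashing inputs are reproducible
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # a crash: save the input for root-cause analysis
    return crashes

crashes = fuzz(naive_parser)
print(f"found {len(crashes)} crashing inputs")
```

Modern fuzzers such as AFL refine this brute-force loop with coverage feedback and input mutation, but the workflow (random input, crash detection, saved reproducers) is the same.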

Today, fuzzing is taught in introductory software testing and security courses, is a prominent area of focus at numerous conferences, and is widely adopted by major companies. For example, Microsoft recently published a paper on how it integrates fuzzing into the life cycle of almost all its products. Similarly, Google recently reported that 80 percent of the bugs it finds in production in the Chrome web browser are found by fuzzing.

Fuzzing is heavily used in security research and is often the tool of choice for penetration testers. Thus, this paper has important implications for reliability and security research.

About Bart Miller

Bart Miller with his Cessna TR182, which he bought in 1980. He has had his commercial pilot's license since 1979.

Barton Miller is the Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison and a co-PI on Trusted CI, where he leads the software assurance effort. His research interests include software security, in-depth vulnerability assessment, and binary code analysis. In 1988, Miller founded the field of fuzz random software testing, a foundation of many security and software engineering disciplines. In 1992, Miller and his then-student Jeffrey Hollingsworth founded the field of dynamic binary code instrumentation and coined the term “dynamic instrumentation.” Miller is a Fellow of the ACM.

About the Jean-Claude Laprie Award in Dependable Computing

The award was created in 2011, in honor of Jean-Claude Laprie (1944-2010), whose pioneering contributions to the concepts and methodologies of dependability were influential in defining and unifying the field of dependable and secure computing. The award recognizes outstanding papers that have significantly influenced the theory and/or practice of dependable computing.

About IFIP WG 10.4 on Dependable Computing and Fault Tolerance

IFIP Working Group 10.4 was established in 1980 with the aim of identifying and integrating approaches, methods, and techniques for specifying, designing, building, assessing, validating, operating, and maintaining dependable computer systems (those that are reliable, available, safe, and secure). Its 75 members from around the world meet twice a year to conduct in-depth discussions of important technical topics to further the understanding of the fundamental concepts of dependable computing.

About the International Federation for Information Processing

IFIP is a non-governmental, non-profit umbrella organization for national societies working in the field of information processing. It was established in 1960 under the auspices of UNESCO as a result of the first World Computer Congress held in Paris in 1959. It is the leading multinational, apolitical organization in Information and Communications Technologies and Sciences.