
Friday, December 15, 2023

Announcing publication of the Operational Technology Procurement Vendor Matrix

RCRV Photo: The Glosten Associates

The Trusted CI Secure by Design team has completed work on “The Operational Technology Procurement Vendor Matrix.” The purpose of this document is to assist those in leadership roles during the procurement process: it is meant to help them formulate questions for vendors about the security controls on devices that will be used for maritime research.

The matrix includes a list of controls, the requirements for each control, potential questions for vendors, tips, and real-world examples justifying a given control.

For example, Item #3 in the matrix is an inventory requirement stating that security vulnerabilities in vendor-provided software must be patched. The Threat Actor Example we cite to justify the requirement is the WannaCry ransomware attack. We include an example question that could be used when discussing the requirement with a vendor. (Click the image below to see it in more detail.)
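For readers who want a concrete sense of the matrix’s structure, a single row could be sketched as a simple record like the one below. The field names and wording here are ours, for illustration only; the published matrix contains the actual controls, requirements, tips, and example questions.

# A hypothetical, simplified representation of one matrix row. Field
# names and wording here are illustrative only; the published document
# contains the actual controls, requirements, tips, and example questions.
matrix_item = {
    "item": 3,
    "control": "Inventory and patching of vendor-provided software",
    "requirement": "Security vulnerabilities in vendor-provided software must be patched",
    "threat_actor_example": "WannaCry ransomware, which spread through unpatched systems",
    "vendor_question": "How are security patches for your software delivered and applied?",
}

# A procurement lead could pull the question for a given control when
# preparing for a vendor discussion.
print(f"Item {matrix_item['item']}: {matrix_item['vendor_question']}")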

The document can be viewed and downloaded here (Note: The file is available in many formats):

https://zenodo.org/doi/10.5281/zenodo.10257812

This document represents the work of many people, including critical feedback from maritime operational technology practitioners (Scripps Institution of Oceanography’s CCRV, and Oregon State University’s RCRV and OOI). We are grateful for their contributions to this effort.

Our goal is to share this matrix and continue to develop its utility after receiving feedback from the Trusted CI community. To contact us, email info@trustedci.org.

Monday, July 24, 2023

Updates on Trusted CI’s Efforts in Cybersecurity by Design of NSF Academic Maritime Facilities

As part of its “Annual Challenge” in 2023, Trusted CI has been engaging with current and future NSF Major Facilities undergoing design or construction with the goal of building security into those Facilities from the outset.  To date, this effort has focused on working with cyberinfrastructure operators in the academic maritime domain, and has included support of the cybersecurity aspects of the acceptance testing process of the NSF-funded Regional Class Research Vessels (RCRVs) at Oregon State University as well as Scripps Institution of Oceanography’s design of the California Coastal Research Vessel (CCRV).  These vessels are all expected to eventually become a part of the U.S. Academic Research Fleet (ARF).

In 2022, Trusted CI studied cybersecurity issues in operational technology (OT) in science and produced a roadmap to help lead to greater security of such systems, and thus Trusted CI’s security-by-design efforts with Major Facilities this year seek to both refine and apply the OT insights gained previously.  The U.S. Antarctic Program (USAP)’s design of the Antarctic Research Vessel (ARV) has also been contributing to Trusted CI’s understanding of cybersecurity issues in operational technology.  Trusted CI has also benefited from numerous conversations with domain experts in the academic maritime domain across a variety of ARF institutions, including IT personnel, marine technicians, oceanographers, ship captains, project leadership, and NSF Program Managers.

One of the highlights of this year's security-by-design efforts has been site visits to ships and facilities. The team has made site visits to the R/V Sally Ride and to Oregon State University’s Hatfield Marine Science Center in Newport, Oregon, where the R/V Taani — one of the initial three RCRVs being constructed — will be based upon completion of its construction.  These in-person visits, including extensive discussion with personnel involved with the facilities, have provided invaluable insight in support of Trusted CI’s efforts.

In the second half of 2023, Trusted CI will continue working on security by design with the aforementioned organizations and will also be working with the NSF Ocean Observatories Initiative (OOI) Major Facility, which is in the process of planning a refresh of its autonomous underwater vehicle (AUV) and glider fleets.

Recent site visit photographs:

Trusted CI’s Sean Peisert, left, in a crawlspace on the R/V Sally Ride, examining operational technology systems.

The R/V Sally Ride, docked in Alameda, CA.


Trusted CI’s Dan Arnold, left, conferring with marine technicians on the R/V Sally Ride.


Trusted CI’s John Zage, left, looks on as RCRV’s Chris Romsos, right, explains some of the scientific instruments that will be part of the newly constructed ships at the RCRV’s offices at OSU, Corvallis, OR.


Trusted CI’s John Zage, left, and RCRV’s Chris Romsos, right, view part of the expansive warehouse of items and gear to outfit the new ships under construction. OSU, Corvallis, OR.


Wednesday, January 25, 2023

Announcing the 2023 Trusted CI Annual Challenge: Building Security Into NSF Major Facilities By Design

The Trusted CI Annual Challenge is a year-long project focusing on a cybersecurity topic of importance for scientific computing environments.  In its first year, the Trusted CI Annual Challenge focused on improving trustworthy data for open science.  In its second year, the Annual Challenge focused on software assurance in scientific computing.  In its third year, 2022, the Annual Challenge focused on the security of operational technology in science.  

The 2022 Annual Challenge on the Security of Operational Technology in NSF Scientific Research reinforced the notion that NSF Major Facilities, once constructed, can deploy operational technology that can have an operational lifetime of 15-30 years.  However, there are typically no cybersecurity requirements during acquisition and design.  In the 2023 Annual Challenge, Trusted CI staff will engage with NSF Major Facilities undergoing construction or refreshes in a hands-on way to build security into those Facilities from the outset.  Trusted CI will directly support the planning for facility refreshes and construction with respect to operational technology and will particularly focus on the academic maritime domain, including supporting the acceptance testing of the NSF-funded Regional Class Research Vessels (RCRVs) at Oregon State University, supporting the U.S. Antarctic Program (USAP)’s design of the Antarctic Research Vessel (ARV), and supporting Scripps Institution of Oceanography’s design of the California Coastal Research Vessel (CCRV).

This year’s Annual Challenge is supported by a stellar team of Trusted CI staff, including Andrew Adams (Pittsburgh Supercomputing Center), Daniel Gunter (Berkeley Lab), Ryan Kiser (Indiana University), Mark Krenz (Indiana University), Michael Simpson (Indiana University), John Zage (University of Illinois, Urbana-Champaign), and Sean Peisert (Berkeley Lab; 2023 Annual Challenge Project Lead).

Wednesday, November 16, 2022

Publication of the Trusted CI Roadmap for Securing Operational Technology in NSF Scientific Research

Trusted CI is pleased to announce the publication of its Roadmap for Securing Operational Technology in NSF Scientific Research.  

In 2022, Trusted CI conducted a year-long effort examining the security of operational technology in science. Operational technology (OT) encompasses broad categories of computing and communication systems that in some way interact with the physical world.  This includes devices that either have sensing elements or control elements, or some combination of the two, and can include both bespoke scientific instrumentation as well as commercially-produced OT.  In both cases, networked sensors and control systems are increasingly important in the context of science as they are critical in operating Major Facilities.  

Trusted CI’s approach to this effort was to spend the first half of 2022 engaging with NSF personnel and operators of OT at NSF Major Facilities to understand the range of operational practices and evaluate potential deficiencies that lead to vulnerabilities and compromises.  In the second half of 2022, we leveraged our insights from the first half to develop a roadmap of solutions to sustainably advance the security of scientific operational technology.  The audiences for this roadmap include NSF, NSF Major Facilities, and Trusted CI itself.

In July 2022, Trusted CI published its findings from its study of the security of operational technology in science, conducted in the first half of 2022.  

Emily K. Adams, Daniel Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, Susan Sons, and John Zage. “Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research,” July 13, 2022. DOI: 10.5281/zenodo.6828675 https://doi.org/10.5281/zenodo.6828675

Now, with the publication of this roadmap, Trusted CI aims to help operators of operational technology in NSF cyberinfrastructure advance toward solutions.  The full citation for the solutions roadmap is as follows:

Andrew Adams, Emily K. Adams, Dan Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, and John Zage. “Roadmap for Securing Operational Technology in NSF Scientific Research,” November 16, 2022. DOI: 10.5281/zenodo.7327987 https://doi.org/10.5281/zenodo.7327987

Trusted CI gratefully acknowledges the many individuals from NSF as well as the following NSF Major Facilities that contributed to the year-long effort that has led to this roadmap: IceCube Neutrino Observatory, NOIRLab, Ocean Observatories Initiative, United States Academic Research Fleet, and the United States Antarctic Program.

In 2023, Trusted CI will turn its focus toward working closely with several maritime-centric NSF Major Facilities and Major Research Equipment and Facilities Construction (MREFC) projects to offer guidance and recommendations for integrating operational technology security into the planning, design, and construction of new and refreshed facilities and the instrumentation therein.


Friday, July 15, 2022

Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research

This year, Trusted CI is conducting a year-long effort on the security of operational technology in science. Operational technology (OT) encompasses broad categories of computing and communication systems that in some way interact with the physical world.  This includes devices that either have sensing elements or control elements, or some combination of the two.  Networked sensors and control systems are increasingly important in the context of science as they are critical in  operating scientific instruments.  Trusted CI is pleased to share its findings from this study, published in the following report:

Emily K. Adams, Daniel Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, Susan Sons, and John Zage. “Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research,” July 13, 2022. DOI: 10.5281/zenodo.6828675  https://doi.org/10.5281/zenodo.6828675

In support of this study, Trusted CI gratefully acknowledges the many individuals from the following NSF Major Facilities that contributed to this effort: IceCube Neutrino Observatory, NOIRLab, Ocean Observatories Initiative, and the United States Academic Research Fleet.

Now that Trusted CI has finished its examination of the current state of the security of OT in science, it will turn its focus to developing a roadmap of solutions to sustainably advance security of scientific operational technology, which will be published in late 2022.

Friday, May 13, 2022

Better Scientific Software (BSSw) Helps Promote Trusted CI Guide to Securing Scientific Software

Trusted CI is grateful to Better Scientific Software (BSSw) for helping to publicize the results of Trusted CI’s software security study, including its recently published findings report and Guide to Securing Scientific Software (GS3), via its widely-read blog.  Read the full blog post here.

Monday, February 14, 2022

Trusted CI Webinar: The Results of the Trusted CI Annual Challenge on Software, Mon Feb. 28 @ 1pm Eastern

Members of Trusted CI are presenting the Results of the Trusted CI Annual Challenge on Software on Monday, February 28th at 1pm (Eastern). Note the time is later than previous webinars.

Please register here.

This webinar presents the results of Trusted CI's 2021 examination of the state of software assurance in scientific computing, and also gives an overview of the contents of its recently released Guide to Securing Scientific Software (GS3), aimed at helping developers of software used in scientific computing improve the security of that software.

See our blog post announcing the report:
https://blog.trustedci.org/2021/12/publication-of-trusted-ci-guide-to.html

Speaker Bios

Dr. Elisa Heymann Pignolo is a Senior Scientist on the NSF Cybersecurity Center of Excellence at the University of Wisconsin, and an Associate Professor at the Autonomous University of Barcelona. She was in charge of the Grid/Cloud security group at the UAB, and participated in two major Grid European Projects: EGI‐InSPIRE and European Middleware Initiative (EMI). Heymann's research interests include security and resource management for Grid and Cloud environments. Her research is supported by the NSF, Spanish government, the European Commission, and NATO.

Prof. Barton Miller is the Vilas Distinguished Achievement Professor and Amar & Belinder Sohi Professor in computer science at the University of Wisconsin-Madison. Prof. Miller founded the field of fuzz random testing, which is foundational to computer security and software testing. In addition, he founded (with his then-student Prof. Jeffrey Hollingsworth) the field of dynamic binary instrumentation, which is a widely used, critical technology for cyberforensics. Prof. Miller advises the Department of Defense on computer security issues through his position at the Institute for Defense Analyses and was on the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee and the US Secret Service Electronic Crimes Task Force (Chicago Area). He is currently an advisor to the Wisconsin Security Research Council. Prof. Miller is a fellow of the ACM.

Dr. Sean Peisert leads applied research and development in computer security at the Berkeley Lab and UC Davis. He is also chief cybersecurity strategist for CENIC; co-lead of Trusted CI, the NSF Cybersecurity Center of Excellence; editor-in-chief of IEEE Security & Privacy; a member of the Distinguished Expert Review Panel for the NSA Annual Best Scientific Cybersecurity Paper Competition; a member of the DARPA Information Science and Technology (ISAT) Study Group; an ACSA Senior Fellow; past chair of the IEEE Technical Committee on Security & Privacy; and is a steering committee member and past general chair of the IEEE Symposium on Security and Privacy ("Oakland").

---

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, January 5, 2022

Announcing the 2022 Trusted CI Annual Challenge on Scientific OT/CPS Security

The Trusted CI Annual Challenge is a year-long project focusing on a cybersecurity topic of importance for scientific computing environments.  In its first year, the Trusted CI Annual Challenge focused on improving trustworthy data for open science.  In its second year, the Annual Challenge focused on software assurance in scientific computing.  Now in its third year, the Annual Challenge is focusing on the security of “operational technology” or “cyber-physical systems” in science.

Operational technology (OT) or cyber-physical systems (CPS) are networked systems connected to computing systems on one side and to either controls or sensors of physical systems on the other side.  Networked sensors and control systems are increasingly important in the context of science, as they are critical in operating scientific instruments like telescopes, biological and chemical reactors, and even vehicles used in scientific discovery.  Given their increasing importance in the process of scientific discovery, disruption of networked instruments can increasingly have negative consequences for the scientific mission.  And, as with OT/CPS everywhere, including commercial off-the-shelf (COTS) OT/IoT, any control system can by definition also have physical consequences in the real world, including equipment damage and loss of life. Indeed, NSF's recent update to the Research Infrastructure Guide (formerly known as the Major Facilities Guide) further clarified that OT is within the scope of information assets to be protected by the facilities' cybersecurity programs (see Sections 6.3.3.2 and 6.3.6.1).

Trusted CI has a long history in addressing the security of operational technology through its engagements with facilities that operate such equipment.  The 2022 Annual Challenge seeks to gain both broader and deeper insights into the security of these important and specialized systems.  To accomplish this, in the first half of the year we plan to have conversations with personnel involved with IT security and OT operations at a variety of NSF Major Facilities.  In the second half of the year, we will leverage this insight to develop a multi-year roadmap of solutions to advance the security of scientific operational technology. This guidance will offer security recommendations in a way most relevant to NSF facilities, unlike existing guides, which have different foci and audiences with different priorities and resources.

This year’s Annual Challenge is supported by a stellar team of Trusted CI staff, including Emily K. Adams (Indiana University), Ryan Kiser (Indiana University), Drew Paine (Berkeley Lab), Susan Sons (Indiana University), John Zage (University of Illinois, Urbana-Champaign), and Sean Peisert (Berkeley Lab; 2022 Annual Challenge Project Lead).


Tuesday, December 14, 2021

Publication of the Trusted CI Guide to Securing Scientific Software

Trusted CI is pleased to announce the publication of its Guide to Securing Scientific Software (GS3).  The GS3 was produced over the course of 2021 by seven Trusted CI members with the goal of broadly improving the robustness of software used in scientific computing with respect to security. The GS3 is the result of the 2021 Trusted CI Annual Challenge on Software Assurance and of the interviews we conducted with seven prominent scientific software development projects, which helped shape the team’s ideas about the community’s needs in software assurance.  The guide can be downloaded here:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “Guide to Securing Scientific Software,” December 2021. DOI: 10.5281/zenodo.5777646 https://doi.org/10.5281/zenodo.5777646

Note that this guide follows the publication of the team’s findings report from a few months ago:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

The GS3 is intended to continue to evolve and to be further integrated into Trusted CI’s array of activities, including training and engagements. We encourage those interested in software assurance to continue to watch this blog for more information, and to reach out to the authors of the GS3 with questions and feedback.

For those interested in hearing more about the GS3, please (virtually) join the Trusted CI webinar on software assurance scheduled for February 28, 2022 at 10am Pacific / 1pm Eastern. Register for the webinar at https://www.trustedci.org/webinars

Finally, Trusted CI gratefully acknowledges the contributions from the following teams to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory, as well as to all those who provided feedback on early versions of this guide.

More information on Trusted CI’s work in software assurance can be found at https://www.trustedci.org/software-assurance


Wednesday, September 29, 2021

Findings Report of the 2021 Trusted CI Annual Challenge on Software Assurance Published

As reported in this blog earlier this year, in 2021 Trusted CI is conducting its focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure.

In July, the 2021 Trusted CI Annual Challenge team posted its initial findings in this blog.  The team is now pleased to share its detailed findings report:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Now that the team has finished its examination of software assurance findings, it will turn its focus to solutions.  Accordingly, later this calendar year, the Trusted CI team will publish a guide of recommended best practices for scientific software development.

For those interested in hearing more about the 2021 Annual Challenge, please (virtually) come to the team’s panel session at the 2021 NSF Cybersecurity Summit at 3:05 EDT on October 13, 2021: https://www.trustedci.org/2021-summit-program


Tuesday, August 3, 2021

Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

In 2021, Trusted CI is conducting its focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the robustness of software used in scientific computing with respect to security. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and to identify opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings from the past six months.

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

In support of this effort, Trusted CI gratefully acknowledges the input from the following teams who contributed to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory.

At a high level, the team identified challenges that developers face in producing robust policy and process documentation; difficulties in identifying and staffing security leads and in ensuring strong lines of security responsibility among developers; difficulties in the effective use of code analysis tools; confusion about when, where, and how to find effective security training; and challenges with controlling the source code developed and the external libraries used, in order to ensure strong supply chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects in its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if some common themes could be identified and paired with proportional remediations, the state of software assurance in science might be significantly improved.

Trusted CI has observed that currently available software development resources often do not match well with the needs of scientific projects; the backgrounds of the developers, the available resources, and the way the software is used do not necessarily map to existing resources available for software assurance. Hence, Trusted CI put together a team with a range of security expertise, spanning backgrounds from academic research to operational security. That team then examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities among them related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several details that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and observations from developers about the way that they wish things might be different in the future, or if they were able to do things over again from the beginning.


Findings

The Annual Challenge team’s findings generally fall into one of five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy and process documentation, including policies relating to onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One cause is that software projects often start small and do not plan to grow or be used widely; when the software does grow and starts to be used broadly, it can be hard to develop formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations often do not budget for security. Further, where policy documentation does exist, it can easily go stale -- “documentation rot.” As a result, it would be helpful for Trusted CI to develop guides for, and examples of, such policies that could be used and implemented even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage with regard to DevSecOps. In terms of acquiring broader security skills, some projects attempted to use institutional “audit services” but found mixed results. Several projects struggled with the challenge of integrating security knowledge among different teams or individuals. Strong lines of responsibility can create valuable modularity but can also create points of weakness when interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by using processes for developing security policies around software, ensuring ongoing management support and enforcement of those policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in the project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues in a project. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to avoid using them. It was determined that it would be helpful for Trusted CI to develop tutorials that are appropriate for the developers in the scientific software community to learn how to properly use these tools and overcome their traditional weaknesses without being buried in useless results.
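As a minimal, hypothetical illustration of the kind of finding such tools surface (assuming a Python codebase and a scanner along the lines of Bandit), the first function below is the sort of pattern typically flagged as a command-injection risk, while the second avoids it:

import subprocess

def list_directory_unsafe(path):
    # Building a shell command by string concatenation is a classic
    # finding: static analyzers typically flag shell=True with
    # untrusted input as a command-injection risk.
    return subprocess.run("ls " + path, shell=True, capture_output=True, text=True).stdout

def list_directory_safer(path):
    # Passing an argument list and avoiding the shell removes the
    # injection vector and usually resolves the warning.
    return subprocess.run(["ls", "--", path], capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(list_directory_safer("/tmp"))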

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
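As a rough sketch of the inventory step that dependency-checking tools automate, the snippet below only enumerates installed Python packages and their versions; matching that inventory against vulnerability advisories is left to a real tool such as those built into GitHub.

from importlib import metadata

def installed_packages():
    """Return a sorted {name: version} inventory of installed Python distributions."""
    return dict(sorted(
        (dist.metadata["Name"], dist.version) for dist in metadata.distributions()
    ))

if __name__ == "__main__":
    # This inventory is what a dependency checker matches against
    # known-vulnerability advisories as new issues are disclosed.
    for name, version in installed_packages().items():
        print(f"{name}=={version}")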

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were not sufficiently restrictive to maintain a secure posture. Many projects leverage GitHub’s dependency-checking software; however, that tool is limited to checking only libraries within GitHub’s domain. A few projects developed their own software in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet these packages. In the Trusted CI software assurance guide, it would be useful to describe processes for leveraging two-factor authentication and for developing policies governing access controls, commits, pull requests, and the vetting of external libraries.
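One lightweight piece of such a vetting policy, sketched below under the assumption that a known-good SHA-256 checksum was recorded when an external library archive was first reviewed (the expected value supplied on the command line is a placeholder):

import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a downloaded archive."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    """Refuse an archive whose digest does not match the recorded value."""
    actual = sha256_of(path)
    if actual != expected_hex.lower():
        raise RuntimeError(f"checksum mismatch for {path}: got {actual}")

if __name__ == "__main__":
    # Usage: python verify_archive.py <archive> <expected-sha256>
    verify(sys.argv[1], sys.argv[2])
    print("checksum ok")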


Next Steps

The findings derived from our examination of several representative scientific software development projects will direct our efforts toward the new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide consisting of this material, targeted toward anyone who is planning a software project or has an ongoing one that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus will be on projects that provide a user-facing front end exposed to the Internet, because such software is most likely to be attacked.

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend various resources that should be leveraged in scientific software development, including the types of tools to run to expose vulnerabilities, best practices in coding, and procedures to follow in a large collaborative effort, including how to share code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance on how to minimize the risks incurred in creating and using scientific software.

Tuesday, March 30, 2021

Announcing the 2021 Trusted CI Annual Challenge on Software Assurance


The Trusted CI “Annual Challenge” is a year-long project focusing on a particular topic of importance to cybersecurity in scientific computing environments.  In its first year, the Trusted CI Annual Challenge focused on issues in trustworthy data.  Now, in its second year, the Annual Challenge is focusing on software assurance in scientific computing.

The scientific computing community develops large amounts of software.  At the largest scale, projects can have millions of lines of code.  And indeed, the software used in scientific computing, and the vulnerabilities present in it, can be similar to those in other domains.  At the same time, the developers have usually come from science-focused domains rather than traditional software engineering backgrounds.  And, in comparison to other domains, there's often less emphasis on software assurance.

Trusted CI has a long history in addressing the software assurance of scientific software, both through engagements with individual scientific software teams and through courses and tutorials frequently taught at conferences and workshops by Elisa Heymann and Barton Miller of the University of Wisconsin-Madison.  This year’s Annual Challenge seeks to complement those existing efforts in a focused way, leveraging a larger team.  Specifically, this year’s Annual Challenge seeks to broadly improve the robustness of software used in scientific computing with respect to security.  It will do this by spending the March–June 2021 timeframe engaging with developers of scientific software to understand the range of software development practices being used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities.  In the second half of 2021, we will leverage our insights to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community.

We seek to optimize the impact of our efforts in 2021 by focusing our effort on software that is widely used, is situated in vulnerable locations, and is developed mostly by individuals who do not have traditional software engineering backgrounds and training.

This year’s Annual Challenge is supported by a stellar team of Trusted CI staff, including Andrew Adams (Pittsburgh Supercomputing Center), Kay Avila (National Center for Supercomputing Applications), Ritvik Bhawnani (University of Wisconsin-Madison), Elisa Heymann (University of Wisconsin-Madison), Mark Krenz (Indiana University), Jason Lee (Berkeley Lab/NERSC), Barton Miller (University of Wisconsin-Madison), and Sean Peisert (Berkeley Lab; 2021 Annual Challenge Project Lead).