Tuesday, August 31, 2021

2021 Open OnDemand Engagement Concludes

Open OnDemand, funded by NSF OAC, is an open-source HPC portal based on the Ohio Supercomputer Center's original OnDemand portal. Its goal is to give system administrators an easy way to offer web access to their HPC resources.

Open OnDemand is seeing increased community adoption. As a result, it is becoming a critical production service for many HPC centers and clients. Open OnDemand engaged with Trusted CI to improve the overall security of the project, ensuring that it continues to be a trusted and reliable platform for the hundreds of centers and tens of thousands of clients that regularly use it.

Our engagement centered on providing the Open OnDemand team with the skills, tools, and resources needed to ensure the security of their software. This included using the First Principles Vulnerability Assessment (FPVA) methodology to conduct in-depth vulnerability assessments independently. In addition, we evaluated the static analysis and dependency checking tools used by Open OnDemand. This evaluation yielded notable findings about how such tools behave, along with a set of recommendations on which tools to use and how to configure them most effectively.

Trusted CI has performed in-depth assessments for NSF projects in the past. In this engagement with Open OnDemand, we took a step further: Trusted CI taught the project's own team how to perform such assessments themselves. The NSF community as a whole benefits from being able to carry out this kind of activity autonomously. In addition, the lessons from this engagement related to automated tools will benefit any NSF software project.

Open OnDemand software engineer Jeff Ohrstrom shared positive feedback regarding the value of the engagement, stating, “The biggest takeaway for me was just getting muscle memory around security to start to think about attack vectors in every change, every commit, every time.”

Our findings and recommendations are summarized in our engagement report, which can be found here.

Thursday, August 26, 2021

Trusted CI begins engagement with University of Arkansas

The University of Arkansas has engaged with Trusted CI and the Engagement and Performance Operations Center (EPOC) to review its plans for a Science DMZ that will serve institutions of higher education across Arkansas. Trusted CI and EPOC will also help create training and policy materials that can be reused by other institutions, both in the state of Arkansas and beyond.

A Science DMZ is a secure network architecture for providing high-throughput transfer of science data between two points. By placing data transfer nodes outside each institution's canonical network and into a specially controlled zone, a Science DMZ increases speed by reducing the friction created by firewalls, competing traffic, and switches and routers that are tuned for more diverse traffic.
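To make the performance goal concrete, here is a minimal sketch (not from the engagement itself) of the kind of throughput check commonly run when commissioning a path between data transfer nodes. It shells out to iperf3, a standard network benchmarking tool; the hostname dtn.example.edu is a hypothetical placeholder, and an iperf3 server is assumed to be listening on that node.

```python
"""Spot-check achievable throughput between two data transfer nodes (DTNs).

A minimal sketch: "dtn.example.edu" is a hypothetical placeholder, and an
iperf3 server (iperf3 --server) is assumed to be running on that host.
"""
import json
import subprocess

# Run a 4-stream TCP test and ask iperf3 for machine-readable output.
result = subprocess.run(
    ["iperf3", "--client", "dtn.example.edu", "--parallel", "4", "--json"],
    capture_output=True,
    text=True,
    check=True,
)
report = json.loads(result.stdout)

# For TCP tests, iperf3 reports aggregate receive-side throughput here.
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Received throughput: {gbps:.2f} Gbit/s")
```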

The University of Arkansas, via its Data Analytics that are Robust and Trusted (DART) project, is funded by NSF grant #194639 for EPSCoR RII.

Tuesday, August 24, 2021

Trusted CI Begins Engagement with Jupyter Security Coordinators

Project Jupyter is an open-source project which supports interactive data science and scientific computing across multiple programming languages. Project Jupyter has developed several interactive computing products including Jupyter Notebook, JupyterLab, and JupyterHub, which are used throughout the NSF community. This Trusted CI engagement is motivated by an upcoming Jupyter Security Best Practices Workshop funded by NumFOCUS as part of the Community Workshop series. The workshop is tentatively scheduled to be held April 2022 at the Ohio Supercomputer Center.

The goals of this engagement include the following tasks.

  • Review existing Jupyter deployment documentation related to security, identify gaps, and create recommendations for improvements.
  • Identify Jupyter deployment use-cases as targets for Jupyter Security Best Practices documentation. Example use-cases include DOE supercomputing centers, campus research clusters, workshops, small scientific projects, etc. Prioritize these use-cases based on which audiences would benefit most from new security documentation.
  • Write Jupyter Security Best Practices documentation for high priority use-cases identified above. Work through other use-cases as time permits.

The Jupyter Security Best Practices documentation produced by this engagement will be shared with Project Jupyter for inclusion in their documentation, and also presented at the workshop.
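As a taste of the deployment-level guidance such documentation might include, below is a minimal, illustrative sketch of security-relevant settings in a JupyterHub configuration file. The option names are real JupyterHub configuration traitlets, but the paths and user names are hypothetical placeholders; an actual deployment would tailor these to its environment.

```python
# jupyterhub_config.py -- a minimal, illustrative sketch; the paths and
# user names below are hypothetical placeholders.
c = get_config()  # noqa: F821 -- provided by JupyterHub at startup

# Serve the hub only over TLS.
c.JupyterHub.ssl_cert = "/etc/jupyterhub/tls/hub.crt"
c.JupyterHub.ssl_key = "/etc/jupyterhub/tls/hub.key"

# Allow only an explicit list of users to log in, rather than any
# account the authenticator can verify.
c.Authenticator.allowed_users = {"alice", "bob"}
c.Authenticator.admin_users = {"alice"}

# Persist the cookie secret in a file with restrictive permissions so
# sessions survive hub restarts without regenerating the secret.
c.JupyterHub.cookie_secret_file = "/srv/jupyterhub/cookie_secret"
```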

To read Jupyter's blog post about the engagement, click here.

Monday, August 23, 2021

Trusted CI Adopts Framework for its own Security Program

Trusted CI, the NSF Cybersecurity Center of Excellence, is pleased to announce that it has completed its adoption of the Trusted CI Framework for its own security program.  The previous security program, based on Trusted CI’s Guide for Cybersecurity Programs for NSF Science and Engineering Projects, provided Trusted CI with a usable but basic security program. As Trusted CI has matured and its impact on the community has expanded, we found our program was no longer adequate for our growing cybersecurity needs.  Thus, we began the process of rebuilding our program in order to strengthen our security posture.

The release of Trusted CI’s Framework was independent of our effort to revamp our security program, but serendipitously timed nonetheless.  We leveraged the Framework Implementation Guide (FIG) -- instructions for research cyberinfrastructure operators -- to rebuild our security program based on the 4 Pillars and 16 Musts constituting the Trusted CI Framework.

The documents that form Trusted CI’s updated security program include the top-level Master Information Security Policies and Procedures (MISPP), along with the supporting policies: Access Control Policy, Collaborator Information Policy, Document Labeling Policy, Incident Response Policy & Procedures, Information Classification Policy, Infrastructure Change Policy, and Onboarding / Offboarding Policy & Procedures.  Moreover, to track critical assets, asset owners for incident response, associated controls, and granted privilege escalations, the following “Asset Specific Access and Privilege Specifications” (ASAPS) were created: Apple (Podcasts), Badgr, Backup System (for G-Drive), Blogger, CloudPerm (G-Drive tool), DNS Registrar, GitHub, Group Service Account, IDEALS (@Illinois), Mailing Lists (@Indiana), Slack, Twitter, YouTube, Website (SquareSpace), Zenodo, and Zoom.


The effort to adopt the Trusted CI Framework took ½ FTE over four months. 

Registration is now open for the 2021 NSF Cybersecurity Summit

It is our great pleasure to announce that registration is now open for the 2021 NSF Cybersecurity Summit. Please join us for this virtual conference. Plenary: Oct 12-13; Trainings: Oct 15; Workshops: Oct 18-19. Attendees will include cybersecurity practitioners, technical leaders, and risk owners from within the NSF Large Facilities and CI community, as well as key stakeholders and thought leaders from the broader scientific and cybersecurity communities.


Registration: Complete the online registration form:
https://www.trustedci.org/2021-cybersecurity-summit

Thank you on behalf of the Program and Organizing Committee.

 

Tuesday, August 17, 2021

Trusted CI webinar: NCSA Experience with SOC2 in the Research Computing Space, August 30th @ 11am Eastern

NOTE: If you have any experience with SOC2 compliance and want to share resources, slideshows, presentations, etc., please email links and other materials to Jeannette Dopheide <jdopheid@illinois.edu> and we will share them during the presentation. 

NCSA's Alex Withers is presenting the talk, NCSA Experience with SOC2 in the Research Computing Space, on Monday, August 30th, at 11am (Eastern).

Please register here.

As the demand for research computing dealing with sensitive data increases, institutions like the National Center for Supercomputing Applications work to build the infrastructure that can process and store these types of data.  Along with the infrastructure can come a host of regulatory obligations including auditing and examination requirements.  We will present NCSA’s recent SOC2 examination of its healthcare computing infrastructure and how we ensured our controls, data collection and processes were properly documented, tested and poised for the examination.  Additionally, we will show how other research and educational organizations might handle a SOC2 examination and what to expect from such an examination.  From a broader perspective, the techniques and lessons learned can be applied to much more than a SOC2 examination and could potentially be used to save time and resources for any audit or examination.

Speaker Bio

Alex Withers is an Assistant Director for Cyber Security and the Chief Information Security Officer at the National Center for Supercomputing Applications (NCSA). Additionally, he is the security co-manager for the XSEDE project and NCSA’s HIPAA Security Liaison. He is also a PI and co-PI for a number of NSF-funded cybersecurity projects.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

 

Tuesday, August 10, 2021

Trusted CI Begins Engagement with Ohio Supercomputer Center

In July the Ohio Supercomputer Center (OSC) began an engagement with Trusted CI to address the challenge of security questionnaire response management for academic research service providers.

It is common for potential users with strong security concerns to submit security questionnaires to research service providers. Security staff at the research service provider must complete these questionnaires to give those users the information they need to assess whether the resource is appropriate for their concerns. Because these questionnaires block use of the resource, they become high-priority interrupts for security staff who have limited time to manage them. Moreover, the questionnaires are typically targeted at commercial cloud service providers, not research service providers at higher education institutions, resulting in a mismatch between the questions and the academic research environment.

The goal of the engagement is to produce guidance for academic research service providers (such as NSF HPC centers and campus NSF CC*/CICI awardees) that addresses the challenge of security questionnaire response management. Our approach is to produce a profile of the EDUCAUSE Higher Education Community Vendor Assessment Toolkit (HECVAT), specifically the HECVAT-Lite version, that is applicable to academic research service providers rather than commercial cloud service providers, so that research service providers can maintain responses to a single security questionnaire that should be broadly accepted by their users.

The profile should be applicable to HPC/HTC providers (like OSC, NCSA, OSG/PATh), NSF research testbeds (like FABRIC), academic research software providers (like CILogon, Globus, and Open OnDemand), and campus Science DMZs.

The co-lead of the HECVAT Users Community Group, Charlie Escue, has agreed to join us during this engagement to help provide guidance and insight into the HECVAT. Trusted CI and OSC are grateful for his contributions to this exciting project.

The engagement is planned to conclude in December with the resulting work to be published for the benefit of our CI community.

Friday, August 6, 2021

Michigan State University and Trusted CI Collaborate to Raise Awareness of Cybersecurity Threats to the Research Community

Ransomware is a form of cybercrime that the U.S. Department of Justice has elevated to the same level of concern as terrorism. The United States suffered more than 65,000 ransomware attacks last year, and victims paid $350 million in ransom, with an unknown amount of collateral costs due to lost productivity. Historically, research organizations have been largely ignored by cybercriminals since they do not typically have data that is easily sold or otherwise monetized. Unfortunately, since ransomware works by extorting payments from victims to get their own data back, research organizations are no longer immune to being targeted by criminals.

An attack of this nature occurred in the Physics and Astronomy department at Michigan State University (MSU), which experienced a ransomware incident in May 2020. While many organizations attempt to keep the public from finding out about cyberattacks for fear of reputational damage or follow-up attacks, MSU has decided to make details of its attack public in the interests of transparency, to encourage disclosure of similar types of attacks, and, perhaps more importantly, to educate the open-science community about the threat of ransomware and other destructive types of cyberattacks. The overarching goal is to raise awareness of rising cybersecurity threats to higher education in hopes of driving safe cyberinfrastructure practices across university communities.

To achieve this, the CIO’s office at MSU engaged with Trusted CI, the NSF Cybersecurity Center of Excellence, in a collaborative review and analysis of the ransomware attack suffered by MSU last year. The culmination of the engagement—based on interviews of those involved in the incident—is the report “Research at Risk: Ransomware attack on Physics and Astronomy Case Study,” which focuses on lessons learned during the analysis. The report contains mitigation strategies that other researchers and their colleagues can apply to protect themselves. In the experience of Trusted CI, there was nothing extraordinary about the issues that led to this incident, and hence, we share these lessons with the goal of motivating other organizations to prevent future negative impacts to their research mission.

The engagement ran from January 2021 to July 2021.


Tuesday, August 3, 2021

Trusted CI new co-PIs: Peisert and Shute

I am happy to announce that Sean Peisert and Kelli Shute have taken on co-PI roles with Trusted CI. Both already have substantial leadership roles with Trusted CI. Sean is leading the 2021 annual challenge on software assurance and Kelli has been serving as Trusted CI’s Executive Director since August of 2020.

Thank you to Sean and Kelli for being willing to step up and take on these responsibilities.

Von

Trusted CI PI and Director


Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

 In 2021, Trusted CI is conducting our focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the robustness of software used in scientific computing with respect to security. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings during the past six months. 

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

In support of this effort, Trusted CI gratefully acknowledges the input of the following teams: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory.

At a high level, the team identified challenges that developers face with robust policy and process documentation; difficulties in identifying and staffing security leads and in ensuring clear lines of security responsibility among developers; difficulties in the effective use of code analysis tools; confusion about when, where, and how to find effective security training; and challenges with controlling the source code developed and the external libraries used, to ensure strong supply chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects in its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if some common themes could be identified and paired with proportional remediations, the state of software assurance in science might be significantly improved.

Trusted CI has observed that currently available software development resources often do not match well with the needs of scientific projects; the backgrounds of the developers, the available resources, and the way the software is used do not necessarily map to existing resources available for software assurance. Hence, Trusted CI put together a team with a range of security expertise, from academic research to operational security. That team then examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities among them related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several questions that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and developers' observations about how they wish things might be different in the future, or what they would do differently if starting over from the beginning.


Findings

The Annual Challenge team’s findings generally fall into one of five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy and process documentation, including policies on onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One cause of this is that software projects often start small and do not plan to grow or be used widely. When the software does grow and starts to be used broadly, it can be hard to develop formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations often do not budget for security. Further, where policy documentation does exist, it can easily go stale -- “documentation rot.” As a result, it would be helpful for Trusted CI to develop guides for and examples of such policies that could be used and implemented even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage with regard to DevSecOps. In terms of acquiring broader security skills, some projects attempted to use institutional “audit services” but found mixed results. Several projects struggled with the challenge of integrating security knowledge across different teams or individuals. Strong lines of responsibility can create valuable modularity but can also create points of weakness when interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by using processes for developing security policies around software, ensuring ongoing management support and enforcement of policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in a project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to avoid using them. The team concluded that it would be helpful for Trusted CI to develop tutorials that teach developers in the scientific software community how to use these tools properly and overcome their traditional weaknesses without being buried in unhelpful results.
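One practical pattern for taming false positives, sketched below under stated assumptions, is to record a tool's current findings as a triaged baseline and fail continuous integration only on new findings. The example uses bandit, an open-source static analysis tool for Python; the source directory and baseline file names are hypothetical.

```python
"""Fail CI only on static analysis findings not already in a triaged baseline.

A minimal sketch using bandit; "src/" and "bandit-baseline.json" are
hypothetical names. The baseline is generated once with:
    bandit -r src/ -f json -o bandit-baseline.json
and regenerated after each triage pass.
"""
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-b", "bandit-baseline.json"],
    capture_output=True,
    text=True,
)
print(result.stdout)
# bandit exits nonzero only when it reports issues beyond the baseline,
# so new findings fail the build while already-triaged ones do not.
sys.exit(result.returncode)
```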

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
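As one illustration of dependency tracking beyond GitHub's built-in features, the following sketch runs pip-audit, a PyPA tool that checks pinned Python dependencies against the Python Packaging Advisory Database. The requirements file name is hypothetical, and analogous scanners exist for other language ecosystems.

```python
"""Check pinned Python dependencies for known vulnerabilities.

A minimal sketch using pip-audit; "requirements.txt" is a hypothetical,
fully pinned requirements file. Analogous tools exist for other
ecosystems (e.g., npm, Maven, Cargo).
"""
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
# pip-audit exits nonzero when any pinned dependency has a known
# vulnerability, which makes it easy to wire into CI.
sys.exit(result.returncode)
```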

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were often not sufficiently restricted to maintain a secure posture. Many projects leverage GitHub’s dependency checking software; however, that tool is limited to checking libraries within GitHub’s domain. A few projects developed their own software in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet these packages. In the Trusted CI software assurance guide, it would be useful to describe processes for leveraging two-factor authentication and for developing policies governing access controls, commits, pull requests, and the vetting of external libraries.
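A small piece of that vetting can be automated: confirming that an external artifact matches a digest published by its maintainers before it enters the build. The sketch below shows the idea in plain Python; the file path and expected digest are hypothetical placeholders, and pip's --require-hashes mode performs the same check automatically for Python package installs.

```python
"""Verify a vendored third-party artifact against a published digest.

A minimal sketch of supply-chain hygiene: the path and expected digest
are hypothetical placeholders. In practice the expected value comes from
a trusted source (a signed release page or a lock file); for Python
packages, "pip install --require-hashes -r requirements.txt" performs
this check at install time.
"""
import hashlib
import sys

ARTIFACT = "vendor/somelib-1.2.3.tar.gz"           # hypothetical path
EXPECTED_SHA256 = "replace-with-published-digest"  # hypothetical digest

with open(ARTIFACT, "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual != EXPECTED_SHA256:
    sys.exit(f"Hash mismatch for {ARTIFACT}; refusing to use this artifact.")
print(f"{ARTIFACT}: digest matches the published value.")
```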


Next Steps

The findings derived from our examination of several representative scientific software development projects will direct our efforts toward the new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide based on this material, targeted toward anyone who is planning, or already running, a software project that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus will be on projects that provide a user-facing front end exposed to the Internet, because such software is most likely to be attacked.

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend various resources that should be leveraged in scientific software, including the types of tools to run to expose vulnerabilities and best practices in coding, as well as procedures to follow when engaged in a large collaborative effort, including how to share code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance on how to minimize the risks incurred in creating and using scientific software.