Wednesday, October 20, 2021

Trusted CI Begins Engagement with OOI


The Ocean Observatories Initiative (OOI), funded by the NSF Division of Ocean Sciences (OCE) under award #1743430, is a science-driven ocean observing network that delivers real-time data from more than 800 instruments to address critical science questions regarding the world’s oceans. OOI data are freely available online to anyone with an Internet connection. 

The OOI dramatically increases the scope and timescale of observations of the world’s oceans. Present and future educators, scientists, and researchers will draw conclusions about climatological and environmental processes from these measurements, so the data must be accurate and carry a trustworthy pedigree. As a result, the OOI has a requirement to protect its data from being altered by any external agent.

To this end, OOI-CI (OOI Cyberinfrastructure) is seeking consultation from Trusted CI on evaluating its current security program, along with guidance on reviewing and evaluating potential alternatives for an enhanced security posture. In a kick-off meeting, Trusted CI and OOI discussed OOI’s concerns, questions, and goals, including: penetration testing; system and software vulnerability scanning and remediation; gaps in current policies and procedures; developing periodic security tasks; and identifying ‘unknowns’. These topics were refined and prioritized based on OOI’s needs into a subset of tasks outlining the goals of the engagement, specifically:

  1. Perform a review of OOI’s cyberinfrastructure using the Trusted CI Security Program Evaluation worksheet in order to assess the current state and target level of their cybersecurity.
  2. Review the 2015 Engagement final report and recommendations (covering OOI @Rutgers University) to determine whether any recommendations made at that time are still applicable and warranted.
  3. Using the information documented in task 1, take initial steps toward adopting the Trusted CI Framework by developing a ‘master information security policies and procedures’ (MISPP) document.
  4. Discuss and document missing policies and procedures from the Framework, including questions and concerns raised by OOI, and also unknowns discovered in above exercises.  
  5. Provide guidance on creating an asset inventory, applying a control set, and creating and maintaining a risk registry.

Additionally, broader impacts from this engagement can be realized because OOI-CI is connected to several locations around the country. Lessons learned and recommendations from the engagement will be implemented at the other sites: the Woods Hole Oceanographic Institution (WHOI) administration and the three Marine Implementing Organizations (MIOs) that provide data from Oregon State University, the University of Washington, and WHOI.

The engagement will run from September 2021 to December 2021.

Monday, October 18, 2021

Announcing Trusted CI's Open Science Cybersecurity Fellows Program (Applications due Nov. 12th)

Application Deadline: Friday, Nov. 12th. Apply here.

Overview

Trusted CI serves the scientific community as the NSF Cybersecurity Center of Excellence, providing leadership and assistance in cybersecurity in support of research. In 2019, Trusted CI established the Open Science Cybersecurity Fellows program. This program establishes and supports a network of Fellows with diversity in both geography and scientific discipline. These Fellows have access to training and other resources to foster their professional development in cybersecurity. In exchange, they champion cybersecurity for science in their scientific and geographic communities and communicate challenges and successful practices to Trusted CI.

About the program

The vision for the Fellows program is to identify members of the scientific community, empower them with basic knowledge of cybersecurity and the understanding of Trusted CI’s services, and then have them serve as cybersecurity liaisons to their respective communities. They would then assist members of the community with basic cybersecurity challenges and connect them with Trusted CI for advanced challenges. 

Trusted CI will select six Fellows each year. Fellows will receive recognition and cybersecurity professional development, consisting of training and travel funding. The Fellows’ training will consist of a Virtual Institute providing 20 hours of basic cybersecurity training over six months. The training will be delivered by Trusted CI staff and invited speakers. The Virtual Institute will be presented as a weekly series via Zoom and recorded for later public online viewing. Travel support is budgeted (during their first year only) to cover Fellows’ attendance at the NSF Cybersecurity Summit, PEARC, and one professional development opportunity agreed to with Trusted CI. The Fellows will be added to an email list where any challenges they encounter will receive prioritized attention from Trusted CI staff. Trusted CI will recognize the Fellows on its website and social media. Fellowships are funded for one year, but Fellows will be encouraged to continue participating in Trusted CI activities in the years following their fellowship year.

After the Virtual Institute, Fellows, with assistance from the Trusted CI team, will be expected to help their science community with cybersecurity and make them aware of Trusted CI for complex needs. By the end of the year, they will be expected to present or write a short white paper on the cybersecurity needs of their community and some initial steps they will take (or have taken) to address these needs. After the year of full support, Trusted CI will continue recognizing the cohort of Fellows and giving them prioritized attention. Over the years, this growing cohort of Fellows will broaden and diversify Trusted CI’s impact.

Application requirements

·   A description of their connection to the research community. Any connection to NSF projects should be clearly stated, ideally providing the NSF award number.

·   A statement of interest in cybersecurity

·   Two-page biosketch

·   Optional demographic info

·   A letter from their supervisor supporting their involvement and time commitment to the program

·   A commitment to fully participate in the Fellows activities for one year (and optionally thereafter)

The selection of Fellows will be made by the Trusted CI PIs and Senior Personnel based on the following criteria:

1.  Demonstrated connection to scientific research, with preference given to those who demonstrate a connection to NSF-funded science.

2.   Articulated interest in cybersecurity.

3.   The potential to broaden Trusted CI’s impact across all seven NSF research directorates (Trusted CI encourages applications from individuals with connections to NSF directorates other than CISE), connections to any of the NSF 10 Big Ideas, or the potential to increase the participation of underrepresented populations.

Who should apply?   

·   Professionals and post-docs interested in cybersecurity for science, with evidence of that interest in their past and current roles

·   Research Computing, Data, and IT technical or policy professionals interested in applying cybersecurity innovations to scientific research

·   Domain scientists interested in data integrity aspects of scientific research

·   Scientists from all across the seven NSF research directorates interested in how data integrity fits with their scientific mission

·   Researchers in the NSF 10 Big Ideas interested in cybersecurity needs

·   Regional network security personnel working across universities and facilities in their region

·   People comfortable collaborating and communicating across multiple institutions with IT / CISO / Research Computing and Data professionals

·    Anyone in a role relevant to cybersecurity for open science

More about the Fellowship

Fellows come from a variety of career stages. They demonstrate a passion for their area, the ability to communicate ideas effectively, and a real interest in the role of cybersecurity in research. Fellows are empowered to talk about cybersecurity to a wider audience, network with others who share a passion for cybersecurity for open science, and learn key skills that benefit them and their collaborators.

If you have questions about the Fellows program, please let us know by emailing fellows@trustedci.org.




Monday, October 11, 2021

Trusted CI webinar: The Trusted CI Framework: Overview and Recent Developments, Oct 25th @11am Eastern

Trusted CI's Scott Russell will be presenting the talk, The Trusted CI Framework: Overview and Recent Developments, on Monday October 25th at 11am (Eastern).

Please register here.

The Trusted CI Framework is a tool to help organizations establish and refine their cybersecurity programs. In response to an abundance of guidance focused narrowly on cybersecurity controls, Trusted CI set out to develop a new framework that would empower organizations to confront cybersecurity from a mission-oriented, programmatic, and full organizational lifecycle perspective. The Trusted CI Framework recommends organizations take control of their cybersecurity the same way they would any other important business concern: by adopting a programmatic approach.

This webinar will provide an introduction to the Trusted CI Framework, including a walkthrough of the 16 “Musts” for establishing a competent cybersecurity program. Then we will go on to cover recent developments with the Trusted CI Framework, including: 
  1. The publication of the first “Framework Implementation Guide,” which provides in-depth guidance on how to implement each Framework Must;
  2. The experiences of NOIRLab (NSF Major Facility) as the first official Framework adopter; and
  3. The announcement of the “Framework Cohort” for 2022, an initiative to help Major Facilities adopt and implement the Framework.

Speaker Bio

Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research. Scott was previously the Postdoctoral Fellow in Information Security Law & Policy. Scott’s work thus far has emphasized private sector cybersecurity best practices, data aggregation and the First and Fourth Amendments, and cybercrime in international law. Scott studied Computer Science and History at the University of Virginia and received his J.D. from the Indiana University Maurer School of Law.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, September 29, 2021

Findings Report of the 2021 Trusted CI Annual Challenge on Software Assurance Published

As reported in this blog earlier this year, in 2021 Trusted CI is conducting our focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure.

In July, the 2021 Trusted CI Annual Challenge team posted its initial findings in this blog.  The team is now pleased to share its detailed findings report:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Now that the team has finished its examination of software assurance findings, it will turn its focus to solutions.  In accordance with that, later this calendar year, the Trusted CI team will be publishing a guide for recommended best practices for scientific software development.

For those interested in hearing more about the 2021 Annual Challenge, please (virtually) come to the team’s panel session at the 2021 NSF Cybersecurity Summit at 3:05 EDT on October 13, 2021: https://www.trustedci.org/2021-summit-program


Wednesday, September 22, 2021

SGCI Webinar: Security recommendations for science gateways, Sept 29th @ 1pm EDT

This webinar announcement was originally posted on SGCI's website.

Security recommendations for science gateways

Wednesday, September 29, 2021, 1 pm Eastern/10 am Pacific

Presented by Mark Krenz, Chief Security Analyst, Center for Applied Cybersecurity Research, Indiana University

Trusted CI has recently published a four-page document targeted at small team science gateways. This document provides a prioritized list of security recommendations to help reduce overall security risk. In this webinar Mark Krenz, from Trusted CI, will be providing an introduction and overview of the document, as well as a discussion of the lessons learned from the last few years of providing security consultations for science gateways.

See SGCI's webinars page for the Zoom link and password.

Tuesday, September 14, 2021

Trusted CI webinar: Q-Factor: Real-time data transfer optimization, September 27th @11am Eastern

Members of FIU and ESnet are presenting the talk, Q-Factor: Real-time data transfer optimization leveraging In-band Network Telemetry provided by P4 data planes, on Monday September 27th at 11am (Eastern). Our presenters are Jeronimo Bezerra, Richard Cziva, and Dr. Julio Ibarra.

Please register here.

Q-Factor is a framework to enable data transfer optimization based on real-time network state information provided by programmable data planes. Communication networks are critical components of today’s scientific workflows. Researchers leverage long-distance ultra-high-speed networks to transfer massive data sets from acquisition sites to processing sites and to share measurements with scientists worldwide. However, while network bandwidth is continuously increasing, most data transfers are unable to efficiently utilize the added capacity due to inherent limitations in the parameter settings of network transport protocols and the lack of network state information at the end hosts. To address these challenges, Q-Factor plans to use sub-second network state data to dynamically configure transport protocol and operating system parameters to reach higher network utilization and, as a result, improve scientific workflows. Q-Factor leverages programmable network devices with the In-band Network Telemetry (INT) framework and delivers a software solution to process in-band measurements at the end hosts. Using Q-Factor on end hosts, for instance Data Transfer Nodes (DTNs), TCP/IP parameters will be configured according to temporal network characteristics such as round-trip time, network utilization, and network buffer occupancy. This tuning is expected to increase network utilization, shorten flow completion times, and significantly reduce packet drops caused by under-provisioned network buffers. Q-Factor is a collaboration between Florida International University and the Energy Sciences Network (ESnet).
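The buffer-tuning problem Q-Factor targets can be made concrete with the bandwidth-delay product (BDP): a TCP flow can only fill a path if its socket buffers can hold one full BDP of data. The helper below is an illustrative sketch of that arithmetic, not part of the Q-Factor software:

```python
def bdp_bytes(bandwidth_gbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes that must be in flight (and
    buffered at the end host) to keep the path fully utilized."""
    bytes_per_sec = bandwidth_gbps * 1e9 / 8
    return int(bytes_per_sec * rtt_ms / 1e3)

# A 100 Gbps path with 50 ms RTT needs ~625 MB of socket buffer;
# typical OS defaults of a few MB leave most capacity idle.
print(bdp_bytes(100, 50))  # 625000000
```

Because RTT and buffer occupancy change over time, a static buffer setting is always wrong for some flows; measuring these quantities in real time, as Q-Factor proposes, lets the host size buffers to current path conditions.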

Speaker Bio

Jeronimo Bezerra is the IT Associate Director of FIU’s Center for Internet Augmented Research and Assessment. Jeronimo has 19 years of IT and network engineering experience, most of them with R&E networks. He is responsible for AmLight network operation and engineering, including SDN deployment and operation. He is leading Q-Factor’s design, development, and deployment activities.

Richard Cziva is a software engineer at ESnet. He has a range of technical interests including traffic and performance analysis, data-plane programmability, high-speed packet processing, software-defined networking, and network function virtualization. Prior to joining ESnet in 2018, Richard was a Research Associate at University of Glasgow, where he looked at how advanced services (e.g., personalized firewalls, intrusion detection modules, measurement functions) can be implemented and managed inside wide area networks with programmable edge capabilities. Richard holds a BSc in Computer Engineering (2013) from Budapest University of Technology and Economics, Hungary and a Ph.D. in Computer Science (2018) from University of Glasgow, United Kingdom. He will lead the research activities in Q-Factor.

As the Assistant Vice President for Technology Augmented Research at FIU, Dr. Julio Ibarra is responsible for furthering the mission of the Center for Internet Augmented Research and Assessment (CIARA): to contribute to the pace and quality of research at FIU through the application of advanced cyberinfrastructure. He has 30+ years of experience in IT and telecom infrastructure management, 18 of those years specializing in research and education networks and project management. Dr. Ibarra will be responsible for overall project management and coordination.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Tuesday, September 7, 2021

Testbed Facility Security Workshop at 2021 NSF Cybersecurity Summit


The 2021 NSF Summit Workshop on Testbed Facility Security will be held Monday, October 18 from 1pm to 5pm Eastern Time as part of the 2021 NSF Cybersecurity Summit. The workshop will explore the unique cybersecurity challenges of testbed facilities, stemming from their mission to enable experimental use, including configuration of facility resources for novel networking and security experiments, which may span multiple facilities. The workshop is being co-organized by Chameleon, Colosseum, DETERLab, FABRIC, PAWR, and Trusted CI.
If you are interested in the cybersecurity challenges of experimental cloud-based testbeds, please plan to attend. Visit https://www.trustedci.org/2021-testbed-facility-security-workshop for more details.
The workshop is a follow-on activity from the Trusted CI FABRIC engagement. See https://blog.trustedci.org/search/label/FABRIC for more information about that engagement.
 

Tuesday, August 31, 2021

2021 Open OnDemand Engagement Concludes

Open OnDemand, funded by NSF OAC, is an open-source HPC portal based on the Ohio Supercomputer Center’s original OnDemand portal. The goal of Open OnDemand is to provide an easy way for system administrators to provide web access to their HPC resources.

Open OnDemand is facing increased community adoption. As a result, it is becoming a critical production service for many HPC centers and clients. Open OnDemand engaged with Trusted CI to improve the overall security of the project, ensuring that it continues to be a trusted and reliable platform for the hundreds of centers and tens of thousands of clients that regularly utilize it. 

Our engagement centered on providing the Open OnDemand team with the skills, tools, and resources needed to ensure their software’s security. This included using the First Principles Vulnerability Assessment (FPVA) methodology to conduct in-depth vulnerability assessments independently. In addition, we evaluated the static analysis and dependency checking tools used by Open OnDemand. This evaluation led to interesting findings regarding how the tools behave and a set of recommendations regarding which tools to use and how to configure them most effectively.

Trusted CI has performed in-depth assessments for NSF projects in the past. In this engagement with Open OnDemand, we took a step forward: Trusted CI taught a group how to perform the assessment themselves. The NSF community in general benefits from being able to carry out that kind of activity autonomously. In addition, the lessons from this engagement related to automated tools will benefit any NSF software project.

Open OnDemand Software Engineer, Jeff Ohrstrom, shared positive feedback regarding the value of the engagement, stating “The biggest takeaway for me was just getting muscle memory around security to start to think about attack vectors in every change, every commit, every time.”

Our findings and recommendations are summarized in our engagement report, which can be found here.

Thursday, August 26, 2021

Trusted CI begins engagement with University of Arkansas

The University of Arkansas has engaged with Trusted CI and the Engagement and Performance Operations Center (EPOC) to review their plans for a Science DMZ that will serve institutions for higher education across Arkansas. Trusted CI and EPOC will also help create training and policy materials that can be reused by other institutions both in the state of Arkansas and beyond.

A Science DMZ is a secure network architecture for providing high-throughput transfer of science data between two points. By placing data transfer nodes outside each institution's canonical network and into a specially controlled zone, the Science DMZ is able to increase speed by reducing the friction created by firewalls, competing traffic, and switches and routers that are tuned for more diverse traffic.
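Why does firewall "friction" matter so much? Even rare packet loss, such as from an overloaded stateful firewall, sharply caps a TCP flow's throughput on a long path. A rough illustration of this effect uses the well-known Mathis model (an upper bound of roughly MSS/(RTT·√p)); the numbers below are illustrative, not measurements from the Arkansas engagement:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate upper bound on one TCP flow's throughput (Mathis et al.):
    rate <= MSS / (RTT * sqrt(p)), with the constant factor taken as ~1."""
    bits_per_segment = mss_bytes * 8
    return bits_per_segment / ((rtt_ms / 1e3) * math.sqrt(loss_rate)) / 1e6

# With a 1460-byte MSS and 50 ms RTT, even 0.01% loss caps a single
# flow at roughly 23 Mbps, no matter how fast the underlying link is.
print(mathis_throughput_mbps(1460, 50.0, 1e-4))
```

This is the core performance argument for the Science DMZ: moving data transfer nodes out from behind loss-inducing middleboxes keeps the loss rate, and therefore the achievable throughput, in a usable range.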

The University of Arkansas, via its Data Analytics that are Robust and Trusted (DART) project, is funded by NSF grant #194639 for EPSCoR RII.

Tuesday, August 24, 2021

Trusted CI Begins Engagement with Jupyter Security Coordinators

Project Jupyter is an open-source project which supports interactive data science and scientific computing across multiple programming languages. Project Jupyter has developed several interactive computing products including Jupyter Notebook, JupyterLab, and JupyterHub, which are used throughout the NSF community. This Trusted CI engagement is motivated by an upcoming Jupyter Security Best Practices Workshop funded by NumFOCUS as part of the Community Workshop series. The workshop is tentatively scheduled to be held April 2022 at the Ohio Supercomputer Center.

The goals of this engagement include the following tasks.

  • Review existing Jupyter deployment documentation related to security, identify gaps, and create recommendations for improvements.
  • Identify Jupyter deployment use-cases as targets for Jupyter Security Best Practices documentation. Example use-cases include DOE supercomputing centers, campus research clusters, workshops, small scientific projects, etc. Prioritize these use-cases based on which audiences would benefit most from new security documentation.
  • Write Jupyter Security Best Practices documentation for high priority use-cases identified above. Work through other use-cases as time permits.

The Jupyter Security Best Practices documentation produced by this engagement will be shared with Project Jupyter for inclusion in their documentation, and also presented at the workshop.

To read Jupyter's blog post about the engagement, click here.

Monday, August 23, 2021

Trusted CI Adopts Framework for its own Security Program

Trusted CI, the NSF Cybersecurity Center of Excellence, is pleased to announce that it has completed its adoption of the Trusted CI Framework for its own security program.  The previous security program, based on Trusted CI’s Guide for Cybersecurity Programs for NSF Science and Engineering Projects, provided Trusted CI with a usable but basic security program. As Trusted CI has matured and its impact on the community has expanded, we found our program was no longer adequate for our growing cybersecurity needs.  Thus, we began rebuilding our program in order to strengthen our security posture.  

The release of Trusted CI’s Framework was independent of our effort to rebuild our security program, but it was serendipitously timed nonetheless.  We leveraged the Framework Implementation Guide (or FIG) -- instructions for cyberinfrastructure research operators -- to rebuild our security program based on the 4 Pillars and 16 Musts constituting the Trusted CI Framework.

The documents that form Trusted CI’s updated security program include the top-level Master Information Security Policies and Procedures (or MISPP), along with the supporting policies: Access Control Policy, Collaborator Information Policy, Document Labeling Policy, Incident Response Policy & Procedures, Information Classification Policy, Infrastructure Change Policy, and Onboarding / Offboarding Policy & Procedures.  Moreover, to track critical assets, asset owners for incident response, associated controls, and granted privilege escalations, the following “Asset Specific Access and Privilege Specifications” (ASAPS) were included: Apple (Podcasts), Badgr, Backup System (for G-Drive), Blogger, CloudPerm (G-Drive tool), DNS Registrar, GitHub, Group Service Account, IDEALS (@Illinois), Mailing Lists (@Indiana), Slack, Twitter, YouTube, Website (SquareSpace), Zenodo, and Zoom.


The effort to adopt the Trusted CI Framework took ½ FTE over four months. 

Registration is now open for the 2021 NSF Cybersecurity Summit

It is our great pleasure to announce that registration is now open for the 2021 NSF Cybersecurity Summit. Please join us for this virtual conference. Plenary: Oct 12-13; Trainings: Oct 15; Workshops: Oct 18-19. Attendees will include cybersecurity practitioners, technical leaders, and risk owners from within the NSF Large Facilities and CI community, as well as key stakeholders and thought leaders from the broader scientific and cybersecurity communities.


Registration: Complete the online registration form:
https://www.trustedci.org/2021-cybersecurity-summit

Thank you on behalf of the Program and Organizer Committee.

 

Tuesday, August 17, 2021

Trusted CI webinar: NCSA Experience with SOC2 in the Research Computing Space August 30th @11am Eastern

NOTE: If you have any experience with SOC2 compliance and want to share resources, slideshows, presentations, etc., please email links and other materials to Jeannette Dopheide <jdopheid@illinois.edu> and we will share them during the presentation. 

NCSA's Alex Withers is presenting the talk, NCSA Experience with SOC2 in the Research Computing Space, on Monday August 30th at 11am (Eastern).

Please register here.

As the demand for research computing dealing with sensitive data increases, institutions like the National Center for Supercomputing Applications work to build the infrastructure that can process and store these types of data.  Along with the infrastructure can come a host of regulatory obligations including auditing and examination requirements.  We will present NCSA’s recent SOC2 examination of its healthcare computing infrastructure and how we ensured our controls, data collection and processes were properly documented, tested and poised for the examination.  Additionally, we will show how other research and educational organizations might handle a SOC2 examination and what to expect from such an examination.  From a broader perspective, the techniques and lessons learned can be applied to much more than a SOC2 examination and could potentially be used to save time and resources for any audit or examination.

Speaker Bio

Alex Withers is an Assistant Director for Cyber Security and the Chief Information Security Officer at the National Center for Supercomputing Applications (NCSA). Additionally, he is the security co-manager for the XSEDE project and NCSA’s HIPAA Security Liaison. He is also a PI and co-PI for a number of NSF-funded cybersecurity projects.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

 

Tuesday, August 10, 2021

Trusted CI Begins Engagement with Ohio Supercomputing Center

In July the Ohio Supercomputing Center (OSC) began an engagement with Trusted CI to address the challenge of security questionnaire response management for academic research service providers.

It is a common occurrence for potential users with strong security concerns to submit security questionnaires to research service providers. Such questionnaires must be completed by security staff at the research service provider to provide those users with information about the security of the resource so they can assess if it is appropriate for their concerns. These security questionnaires are blockers to use of the resource, so they become high priority interrupts for security staff who have limited time to manage them. Also, the questionnaires are typically targeted to commercial cloud service providers, not research service providers at higher education institutions, resulting in a mismatch between the questions and the academic research environment.

The goal of the engagement is to produce guidance for academic research service providers (such as NSF HPC centers and campus NSF CC*/CICI awardees) that addresses the challenge of security questionnaire response management. Our approach is to produce a profile of the EDUCAUSE Higher Education Community Vendor Assessment Toolkit (HECVAT) (specifically, the HECVAT-Lite version) that is applicable to academic research service providers (rather than commercial cloud service providers), so that research service providers can maintain responses to a single security questionnaire that should be broadly accepted by their users.

The profile should be applicable to HPC/HTC providers (like OSC, NCSA, OSG/PATh), NSF research testbeds (like FABRIC), academic research software providers (like CILogon, Globus, and Open OnDemand), and campus Science DMZs.

The co-lead of the HECVAT Users Community Group, Charlie Escue, has agreed to join us during this engagement to help provide guidance and insight into the HECVAT. Trusted CI and OSC are grateful for his contributions to this exciting project.

The engagement is planned to conclude in December with the resulting work to be published for the benefit of our CI community.

Friday, August 6, 2021

Michigan State University and Trusted CI Collaborate to Raise Awareness of Cybersecurity Threats to the Research Community

Ransomware is a form of cybercrime that the U.S. Department of Justice has elevated to a level of concern comparable to terrorism. The United States suffered more than 65,000 ransomware attacks last year, and victims paid $350 million in ransom, with an unknown amount of collateral cost due to lost productivity. Historically, research organizations have largely been ignored by cybercriminals because they do not typically hold data that is easily sold or otherwise monetized. Unfortunately, since ransomware works by extorting payments from victims to get their own data back, research organizations are no longer immune to being targeted by criminals.

An event of this nature occurred in the Physics and Astronomy department at Michigan State University (MSU), which experienced a ransomware attack in May 2020. While many organizations attempt to keep the public from finding out about cyberattacks for fear of loss of reputation or follow-up attacks, MSU has decided to make elements and factors of its attack public in the interests of transparency, to encourage disclosure of similar types of attacks, and perhaps more importantly, to educate the open-science community about the threat of ransomware and other destructive types of cyberattacks. The overarching goal is to raise awareness about rising cybersecurity threats to higher education in hopes of driving safe cyberinfrastructure practices across university communities.

To achieve this, the CIO’s office at MSU engaged with Trusted CI, the NSF Cybersecurity Center of Excellence, in a collaborative review and analysis of the ransomware attack suffered by MSU last year. The culmination of the engagement—based on interviews of those involved in the incident—is the report “Research at Risk: Ransomware attack on Physics and Astronomy Case Study,” which focuses on lessons learned during the analysis. The report contains mitigation strategies that other researchers and their colleagues can apply to protect themselves. In the experience of Trusted CI, there was nothing extraordinary about the issues that led to this incident, and hence, we share these lessons with the goal of motivating other organizations to prevent future negative impacts to their research mission.

The engagement ran from January 2021 to July 2021.


Tuesday, August 3, 2021

Trusted CI new co-PIs: Peisert and Shute

I am happy to announce that Sean Peisert and Kelli Shute have taken on co-PI roles with Trusted CI. Both already have substantial leadership roles with Trusted CI. Sean is leading the 2021 annual challenge on software assurance and Kelli has been serving as Trusted CI’s Executive Director since August of 2020.

Thank you to Sean and Kelli for being willing to step up and take on these responsibilities.

Von

Trusted CI PI and Director


Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

 In 2021, Trusted CI is conducting our focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the robustness of software used in scientific computing with respect to security. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings during the past six months. 

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

Trusted CI gratefully acknowledges the input of the following teams who contributed to this effort: FABRIC; the Galaxy Project; High Performance SSH/SCP (HPN-SSH), by the Pittsburgh Supercomputing Center (PSC); Open OnDemand, by the Ohio Supercomputer Center; Rolling Deck to Repository (R2R), by Columbia University; and the Vera C. Rubin Observatory.

At a high level, the team identified challenges that developers face with robust policy and process documentation; difficulties in identifying and staffing security leads and in ensuring clear lines of security responsibility among developers; difficulties in the effective use of code analysis tools; confusion about when, where, and how to find effective security training; and challenges in controlling both in-house source code and external libraries to ensure strong supply-chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects over its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if common themes could be identified and paired with proportional remediations, the state of software assurance in science could be significantly improved.

Trusted CI has observed that currently available software development resources often do not match the needs of scientific projects; the backgrounds of the developers, the available resources, and the way the software is used do not necessarily map to existing software assurance resources. Hence, Trusted CI assembled a team with a range of security expertise, from academic research to operational practice. That team examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities among them related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several details that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and observations from developers about the way that they wish things might be different in the future, or if they were able to do things over again from the beginning.


Findings

The Annual Challenge team’s findings generally fall into one of five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy and process documentation, including policies for onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One common cause is that software projects start small and do not plan to grow or be used widely. When the software does grow and starts to be used broadly, it can be hard to develop formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations do not budget for security. Further, where policy documentation does exist, it can easily go stale ("documentation rot"). As a result, it would be helpful for Trusted CI to develop guides for and examples of such policies that could be used and implemented even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage with regard to DevSecOps. In terms of acquiring broader security skills, some projects attempted to use institutional “audit services” but found mixed results. Several projects struggled with the challenge of integrating security knowledge among different teams or individuals. Strong lines of responsibility can create valuable modularity but can also create points of weakness when interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by using processes for developing security policies around software, ensuring ongoing management support and enforcement of policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in the project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues in a project. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to avoid using them. It was determined that it would be helpful for Trusted CI to develop tutorials that are appropriate for the developers in the scientific software community to learn how to properly use these tools and overcome their traditional weaknesses without being buried in useless results.
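The rules-and-patterns idea behind such tools can be sketched in a few lines. The following is a hypothetical, minimal static-analysis check (not any specific tool) built on Python's standard `ast` module; the set of "risky" call names is an illustrative stand-in for a real rule set:

```python
import ast

# Hypothetical minimal static-analysis rule: flag calls to eval()/exec(),
# common sources of code-injection vulnerabilities. Real tools ship large,
# language-specific rule sets; this sketch shows only the mechanism.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line number, function name) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # A direct call like eval(x): the callee is a bare Name node.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

A rule this narrow produces few false positives; the difficulty the projects reported comes from running hundreds of broader rules at once, which is where tuning and suppression workflows matter.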

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
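The core of dependency checking is simple to illustrate: compare pinned dependency versions against an advisory feed of known-vulnerable releases. In this sketch both the advisory list and the package names are fictional placeholders for a real vulnerability database:

```python
# Hypothetical sketch of dependency checking. ADVISORIES stands in for a
# real vulnerability feed (e.g., a CVE database); all names and versions
# here are made up for illustration.
ADVISORIES = {
    "examplelib": {"1.0.2", "1.0.3"},  # versions with (fictional) known flaws
    "othertool": {"2.1.0"},
}

def vulnerable_deps(pinned: dict) -> list:
    """Return 'name==version' for each pinned dependency with an advisory."""
    return [
        f"{name}=={version}"
        for name, version in sorted(pinned.items())
        if version in ADVISORIES.get(name, set())
    ]

project_deps = {"examplelib": "1.0.2", "othertool": "2.2.0", "safepkg": "0.1"}
print(vulnerable_deps(project_deps))  # ['examplelib==1.0.2']
```

The hard part in practice is not this comparison but keeping the advisory data current and re-running the check continuously, which is why integrating it into CI (as GitHub's tooling does) matters.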

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were not sufficiently restricted to maintain a secure posture. Many projects leverage GitHub’s dependency checking software; however, that tool is limited to checking libraries within GitHub’s domain. A few projects developed their own software in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet these packages. In the Trusted CI software assurance guide, it would be useful to describe processes for leveraging two-factor authentication and developing policies governing access controls, commits, pull requests, and vetting of external libraries.


Next Steps

The findings derived from our examination of several representative scientific software development projects will direct our efforts toward the new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide consisting of this material, targeted toward anyone who is either planning or has an ongoing software project that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus of the guide will be on projects that provide a user-facing front end exposed to the Internet because such software is most likely to be attacked. 

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend resources that scientific software projects should leverage, including the types of tools to run to expose vulnerabilities, coding best practices, and procedures for large collaborative efforts, such as how to share code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance on how to minimize the risks incurred in creating and using scientific software.

Monday, July 19, 2021

Higher Education Regulated Research Workshop Series: A Collective Perspective

Regulated research data is a growing challenge for NSF-funded organizations in research and academia, with little guidance on how to tackle regulated research institutionally. Trusted CI would like to bring the community’s attention to an important report released today by the organizers of a recent, NSF-sponsored* Higher Education Regulated Research Workshop Series that distills the input of 155 participants from 84 higher education institutions. Motivated by the Higher Ed community’s desire to standardize strategies and practices, the facilitated** workshop sought to find efficient ways for institutions large and small to manage regulated research data and smooth the path to compliance. It identified six main pillars of a successful research cybersecurity compliance program: Ownership and Roles, Financials and Cost, Training and Education, Auditing, Clarity of Controls, and Scoping. The report presents each pillar as a chapter, complete with best practices, challenges, and recommendations for research enablers on campus. While it focuses on Department of Defense (DOD) funded research, Controlled Unclassified Information (CUI), and health research, the report offers ideas and guidance on how to stand up a well-managed campus program that applies to all regulated research data. It represents a depth and breadth of community collaboration and institutional experience never before compiled in a single place.

Organized by Purdue University with co-organizers from Duke University, University of Florida, and Indiana University, the workshop comprised six virtual sessions between November 2020 and June 2021. Participants included research computing directors, information security officers, compliance professionals, research administration officers, and personnel who support and train researchers.

The full report is available at the EDUCAUSE Cybersecurity Resources page at https://library.educause.edu/resources/2021/7/higher-education-regulated-research-workshop-series-a-collective-perspective. It was co-authored by contributors from Purdue University, Duke University, University of Florida, Indiana University, Case Western Reserve University, University of Central Florida, Clemson University, Georgia Institute of Technology, and University of South Carolina.

See https://www.trustedci.org/compliance-programs for additional materials from Trusted CI on the topic of compliance programs.

* NSF Grant #1840043, “Supporting Controlled Unclassified Information with a Campus Awareness and Risk Management Framework”, awarded to Purdue University
** by Knowinnovation

Tuesday, July 13, 2021

Trusted CI webinar: A capability-based authorization infrastructure for distributed High Throughput Computing July 26th @11am Eastern

Open Science Grid's Brian Bockelman is presenting the talk, A capability-based authorization infrastructure for distributed High Throughput Computing, on Monday July 26th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

The OSG Consortium provides researchers with the ability to bring their distributed high throughput computing (dHTC) workloads to a pool of resources consisting of hardware across approximately 100 different sites.  Using this “Open Science Pool” resource, projects can leverage opportunistic access (nodes that would otherwise sit idle at a site), dedicated hardware, or allocated time on large-scale NSF-funded resources.

While dHTC can be a powerful tool to advance scientific discovery, managing trust relationships with so many sites can be challenging; the OSG helps bootstrap the trust relationships between project and provider.  Further, authorization in the OSG ecosystem is an evolving topic.  On the national and international infrastructure, we are leading the transition from identity-based authorization, which bases decisions on “who you are,” to capability-based authorization.  Capability-based authorization focuses on “what can you do?” and is implemented through tools like bearer tokens.  Changing the mindset of an entire ecosystem is wide-ranging work, involving dedicated projects such as the new NSF-funded “SciAuth” and international partners like the Worldwide LHC Computing Grid.
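To make the identity-versus-capability distinction concrete, here is a minimal, hypothetical sketch in Python. Real deployments use standardized bearer-token formats (e.g., JWT-based SciTokens) with asymmetric signatures; the shared-secret HMAC scheme and scope names below are teaching stand-ins only:

```python
import base64
import hashlib
import hmac
import json

# Toy capability-based authorization: a bearer token carries a 'scope'
# claim stating what its holder may do. The authorization decision rests
# on what the token permits, not on who presents it.
SECRET = b"demo-signing-key"  # hypothetical shared secret, for illustration

def issue_token(scopes):
    """Mint a signed token whose payload lists granted capabilities."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"scope": " ".join(scopes)}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def authorize(token, needed_scope):
    """Allow an operation only if the token is authentic and grants it."""
    payload, _, sig = token.encode().partition(b".")
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False  # token was tampered with or not issued by us
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return needed_scope in claims["scope"].split()

token = issue_token(["read:/data", "write:/data/jobs"])
print(authorize(token, "read:/data"))   # True
print(authorize(token, "write:/etc"))   # False
```

Note that `authorize` never asks who the caller is; revocation, expiry, and audience checks, which real token infrastructures require, are omitted for brevity.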

In this talk, we’ll cover the OSG’s journey to capability-based authorization, as well as the challenges and opportunities of changing trust models for a functioning infrastructure.

Speaker Bio

Brian Bockelman is a Principal Investigator at the Morgridge Institute for Research and co-PI on the Partnership to Advance Throughput Computing (PATh) and Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP).  Within the OSG, he leads the Technology Area, which provides the software and technologies that underpin the OSG fabric of services.  He is also a co-PI on the new SciAuth project, led by Jim Basney, which aims to coordinate the deployment of capability-based authorization across the science and engineering cyberinfrastructure.

Before joining Morgridge, Bockelman received a joint PhD in Mathematics and Computer Science from the University of Nebraska-Lincoln (UNL) and was an integral member of the Holland Computing Center at UNL.  His team helps advance Research Computing activities at Morgridge and partners with the Center for High Throughput Computing (CHTC) at the University of Wisconsin-Madison.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, July 7, 2021

Trusted CI Concludes Engagement with FABRIC

FABRIC: Adaptive Programmable Research Infrastructure for Computer Science and Science Applications, funded under NSF grants 1935966 and 2029261, is a national-scale testbed that connects to existing NSF testbeds (e.g., PAWR), as well as NSF clouds (e.g., Chameleon and CloudLab), HPC facilities, and the real Internet. FABRIC aims to expand its outreach, enabling new science applications, using a diverse array of networks, integrating machine learning, and preparing the next generation of computer science researchers.

FABRIC received its initial funding in 2019 and is projected to go into operational phase in September of 2023. FABRIC reached out to Trusted CI to request a review of its software development process, the trust boundaries in the FABRIC system, and the FABRIC security and monitoring architecture.

The five-month engagement began in February and concluded in June. In that time the teams worked together to review FABRIC’s project documentation, which included a deep analysis of the security architecture. We moved on to completing an asset inventory and risk assessment, covering over 70 project assets, identifying attack surfaces and potential threats, and documenting current and planned security controls. Lastly, we documented engagement findings in an internal report shared with FABRIC project leadership.

FABRIC also assisted with the Trusted CI 2021 Annual Challenge (Software Assurance) by participating in an interview with members of the software assurance team. The results of that interview will provide input to Trusted CI's forthcoming guide on software assurance for NSF projects.

Tuesday, July 6, 2021

Join Trusted CI at PEARC21, July 19th - 22nd

PEARC21 will be held virtually on July 19th - 22nd, 2021 (PEARC website).

Trusted CI will be hosting two events, our annual workshop and our Security Log Analysis tutorial.

Both events are scheduled at the same time; please note this when planning your agenda.

The details for each event are listed below. 

Workshop: The Fifth Trusted CI Workshop on Trustworthy Scientific Cyberinfrastructure provides an opportunity for sharing experiences, recommendations, and solutions for addressing cybersecurity challenges in research computing.  

Monday July 19th @ 8am - 11am Pacific.

  • 8:00 am - Welcome and opening remarks
  • 8:10 am - The Trusted CI Framework: A Minimum Standard for Cybersecurity Programs
    • Presenters: Scott Russell, Ranson Ricks, Craig Jackson, and Emily Adams; Trusted CI / Indiana University’s Center for Applied Cybersecurity Research
  • 8:40 am - Google Drive: The Unknown Unknowns
    • Presenter: Mark Krenz; Trusted CI / Indiana University’s Center for Applied Cybersecurity Research
  • 9:10 am - Experiences Integrating and Operating Custos Security Services
    • Presenters: Isuru Ranawaka, Dimuthu Wannipurage, Samitha Liyanage, Yu Ma, Suresh Marru, and Marlon Pierce; Indiana University
    • Dannon Baker, Alexandru Mahmoud, Juleen Graham, and Enis Afgan; Johns Hopkins University
    • Terry Fleury, and Jim Basney; University of Illinois Urbana Champaign
  • 9:40 am - 10 minute Break
  • 9:50 am - Drawing parallels and synergies between NSF and NIH cybersecurity projects
    • Presenters: Enis Afgan, Alexandru Mahmoud, Dannon Baker, and Michael Schatz; Johns Hopkins University
    • Jeremy Goecks; Oregon Health and Sciences University
  • 10:20 am - How InCommon is helping its members to meet NIH requirements for federated credentials
    • Presenter: Tom Barton; Internet2
  • 10:50 am - Wrap up and final thoughts (10 minutes)

        More detailed information about the presentations is available on our website.


Tutorial: Security Log Analysis: Real world hands-on methods and techniques to detect attacks.  

Monday July 19th @ 8am - 11am Pacific.

A half-day training to tie together various log and data sources and provide a more rounded, coherent picture of a potential security event. It will also present log analysis as a life cycle (collection, event management, analysis, response), that becomes more efficient over time. Interactive demonstrations will cover both automated and manual analysis using multiple log sources, with examples from real security incidents.
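The "analysis" stage of that life cycle can be illustrated with a short sketch: scan auth-log lines for failed SSH logins and flag source IPs that exceed a threshold. The log lines, IP addresses, and threshold below are illustrative, not drawn from any real incident or from the tutorial's materials:

```python
import re
from collections import Counter

# Illustrative log-analysis step: count failed SSH password attempts per
# source IP in syslog-style auth logs and flag likely brute-force sources.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def brute_force_candidates(lines, threshold=3):
    """Return {ip: count} for IPs with >= threshold failed logins."""
    hits = Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
    return {ip: n for ip, n in hits.items() if n >= threshold}

sample_log = [
    "Oct 12 03:14:07 host sshd[811]: Failed password for root from 203.0.113.9 port 52100 ssh2",
    "Oct 12 03:14:09 host sshd[811]: Failed password for root from 203.0.113.9 port 52101 ssh2",
    "Oct 12 03:14:12 host sshd[811]: Failed password for invalid user admin from 203.0.113.9 port 52102 ssh2",
    "Oct 12 03:20:41 host sshd[902]: Accepted publickey for alice from 198.51.100.7 port 40022 ssh2",
]
print(brute_force_candidates(sample_log))  # {'203.0.113.9': 3}
```

A single check like this only becomes useful when fed by the collection stage and followed by response, which is the point of treating log analysis as a cycle rather than a one-off query.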


Thursday, June 24, 2021

The 2021 NSF Cybersecurity Summit Call For Participation - NOW OPEN - Deadline is Friday, July 2nd

It is our pleasure to announce that the 2021 NSF Cybersecurity Summit is scheduled to take place the week of October 11th, with the plenary sessions occurring on Tuesday, October 12th and Wednesday, October 13th. Due to the impact of the global pandemic, we will hold this year’s summit online instead of in person as originally planned.

The final program is still evolving, but we will maintain the mission to provide a format designed to increase the NSF community’s understanding of cybersecurity strategies that strengthen trustworthy science: what data, processes, and systems are crucial to the scientific mission, what risks they face, and how to protect them.

 

Call for Participation (CFP)

Program content for the summit is driven by our community. We invite proposals for presentations, breakout and training sessions, as well as nominations for student scholarships. The deadline for CFP submissions is July 5th. To learn more about the CFP, please visit: https://www.trustedci.org/2021-summit-cfp 

 More information can be found at https://www.trustedci.org/2021-cybersecurity-summit