Wednesday, December 15, 2021

Trusted CI Wraps Up Engagement with Jupyter Security Coordinators

Project Jupyter is an open-source project consisting of several products, including Jupyter Notebook/Server, JupyterHub, and JupyterLab, which are used throughout the NSF community. This Trusted CI engagement was originally motivated by a Jupyter Security Best Practices Workshop tentatively scheduled for April 2022. Due to the ongoing pandemic, the workshop has been canceled, and alternative avenues for discussing Jupyter security topics are being pursued.

Regardless, the engagees agreed that there was value in continuing the original engagement tasks, which include the following.

  • Perform a high-level survey of existing Jupyter documentation with a focus on the security aspects of installation and configuration. Identify gaps and suggest recommendations for improvement.
  • Identify common Jupyter deployment use-cases as targets for Jupyter Security Best Practices documentation.
  • Write security documentation for as many of these use-cases as time permits.

Three documents were produced from these engagement tasks.

  • A summary of all existing Jupyter documentation focused on security aspects of deployment and configuration. This survey was presented to the Jupyter community via Jupyter's Discourse.
  • Suggestions for revisions to Jupyter Notebook documentation related to security of a single-user (e.g., laptop) installation.
  • Suggestions for revisions to JupyterHub documentation related to security of a single-server / multi-user (e.g., small scientific project) installation.
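
The single-user guidance revolves around the notebook server's own access controls. As a sketch of the kind of settings involved (illustrative values only, not the engagement's actual recommendations), the classic configuration file `~/.jupyter/jupyter_notebook_config.py` exposes the relevant options:

```python
# Fragment of ~/.jupyter/jupyter_notebook_config.py; the `c` object is
# injected by Jupyter when it loads this file, so this is not standalone code.

# Listen only on the loopback interface so the server is unreachable
# from the network.
c.NotebookApp.ip = '127.0.0.1'

# Require a login password; generate the hash with `jupyter notebook password`.
c.NotebookApp.password = '<hashed-password>'  # placeholder, not a real hash

# Reject requests whose Host header is not local.
c.NotebookApp.allow_remote_access = False
```

Newer Jupyter Server releases rename these traits under `c.ServerApp`, so check the version-specific documentation before copying settings.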

All documentation produced during this engagement has been published to a GitHub repository.

Concurrent with this Trusted CI engagement, the Jupyter Security Coordination Team began working with the Jupyter Steering Council to address security issues across the Jupyter project as a whole. This effort has already produced several milestones.

This engagement represents the start of a bigger conversation focused on Jupyter security concerns. It is our hope that the documentation produced by this engagement will be incorporated by Jupyter developers into their project documentation to assist administrators and users in securing their deployments.

Tuesday, December 14, 2021

Publication of the Trusted CI Guide to Securing Scientific Software

Trusted CI is pleased to announce the publication of its Guide to Securing Scientific Software (GS3). The GS3 was produced over the course of 2021 by seven Trusted CI members with the goal of broadly improving the security robustness of software used in scientific computing. The GS3 is the result of the 2021 Trusted CI Annual Challenge on Software Assurance and of interviews conducted with seven prominent scientific software development projects, which helped shape the team’s ideas about the community’s needs in software assurance. The guide can be downloaded here:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “Guide to Securing Scientific Software,” December 2021. DOI:10.5281/zenodo.5777646 https://doi.org/10.5281/zenodo.5777646

Note that this guide follows the publication of the team’s findings report from a few months ago:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

The GS3 is intended to continue to evolve and to be further integrated into Trusted CI’s array of activities, including training and engagements. We encourage those interested in software assurance to watch this blog for more information and to reach out to the GS3 authors with questions and feedback.

For those interested in hearing more about the GS3, please (virtually) join the Trusted CI webinar on software assurance, scheduled for February 28, 2022, at 10am Pacific / 1pm Eastern: https://www.trustedci.org/webinars. Register for the webinar.

Finally, Trusted CI gratefully acknowledges the contributions from the following teams to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory, as well as to all those who provided feedback on early versions of this guide.

More information on Trusted CI’s work in software assurance can be found at https://www.trustedci.org/software-assurance


Monday, November 22, 2021

Trusted CI Webinar: Lessons learned from a real-world ransomware attack on researchers at MSU, Dec 6th @11am EST

Members of Trusted CI and MSU are presenting the talk, Lessons learned from a real-world ransomware attack on researchers at Michigan State University: What researchers need to know about the increased risk from ransomware attacks, on Monday December 6th at 11am (Eastern).

Update: The presentation video can be found here: https://youtu.be/_Ay2jUxthfw 

Slides are available here: http://hdl.handle.net/2142/112812

Ransomware report: https://hdl.handle.net/2022/26638

Please register here.

Cybercriminals are increasingly targeting researchers (along with hospitals, cities, schools, and utilities) because ransomware allows them to monetize attacks against a broader set of victims. By encrypting data and holding it for ransom until victims pay, attackers no longer need victims to hold data of direct financial value. The proliferation of ransomware attacks has led the U.S. Department of Justice to call it a growing national security threat.

The Physics and Astronomy department at Michigan State University (MSU) suffered a ransomware attack in 2020. The MSU Information Security Office partnered with Trusted CI, the NSF Cybersecurity Center of Excellence, to investigate the attack and produce a report for the research community on lessons learned.

This webinar, presented by MSU CISO Tom Siu and Trusted CI, will cover that report. MSU and Trusted CI will discuss the impact of and lessons learned from the attack and offer cybersecurity mitigation strategies for protecting academic researchers. The webinar will conclude with a Q&A session. Audience members are encouraged to ask about their own challenges engaging with researchers on the importance of information security.

Speaker Bios

Andrew Adams is the Principal Information Security Officer at the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the Security Manager for the Bridges-2 supercomputer. He also serves as the Chief Information Security Officer for Trusted CI, the NSF Cybersecurity Center of Excellence. Andrew holds M.S. degrees in both computer science and information science (U. Pittsburgh) and has 20+ years of experience in computer networking research as a former member of PSC’s Networking Group, including operational responsibilities in the 3ROX GigaPoP. In the field of security, he has designed and developed multiple security-oriented systems, performed risk assessments, developed security policies, and engaged with the open-science community 15+ times to improve its cybersecurity posture. At present, his focus is on methods to keep HPC secure during the pandemic.

Tom Siu joined MSU IT in October 2020 as chief information security officer. As CISO, Tom leads the Security Engineering; Security Operations; Incident Response; and Governance, Risk and Compliance teams within the Information Security department and is responsible for the university-wide information security strategy.

Prior to arriving at MSU, Tom served as CISO for Case Western Reserve University (CWRU) for 14 years where he oversaw the development of the information security program. His notable achievements include the deployment of multifactor authentication and passphrases to all core services for all users, transition to default-deny network posture, creation and operation of a secure research computing enclave, and the development of a highly capable team of information assurance professionals. As a culmination of his time at CWRU, Tom’s team, in combination with colleagues from the Cleveland Clinic Foundation, worked to provide a secured operational IT environment for the first 2020 Presidential Debate.


Von Welch is the associate vice president for Information Security and executive director for Cybersecurity Innovation at Indiana University, executive director for the OmniSOC, and the director of IU's Center for Applied Cybersecurity Research (CACR).

CACR has a unique focus: improving real-world cybersecurity for organizations with missions that challenge traditional cybersecurity approaches. Examples include research and development, open science, and highly distributed collaborations. CACR project partners and funders include the US Department of Defense, the National Science Foundation, the Department of Homeland Security, and private sector organizations, and Von’s roles span research, development, operations, and leadership.

He specializes in cybersecurity for distributed systems, particularly scientific collaborations and federated identity. His current roles include serving as PI and director for the NSF Cybersecurity Center of Excellence (Trusted CI), a project dedicated to helping NSF science projects with their cybersecurity needs. He is also PI and director of the Research Security Operations Center (ResearchSOC), a collaborative security response center that addresses the unique cybersecurity concerns of the research community.

---

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Monday, November 1, 2021

Trusted CI at SFSCon 2021

SFSCon was on hiatus last year due to the pandemic, but it's back this year with a virtual format. SFSCon 2021, to be held November 5-7, will be the fourth annual cybersecurity training and professional development event organized by Cal Poly Pomona (CPP) for CyberCorps Scholarship for Service (SFS) students and alumni nationwide. This year SFSCon will use the U.S. Cyber Range for hands-on student training. Trusted CI will be providing Identity and Access Management training and Security Log Analysis training, as in previous years, with training materials updated for the virtual format.

Wednesday, October 20, 2021

Trusted CI Begins Engagement with OOI


The Ocean Observatories Initiative (OOI), funded by the NSF OCE Division of Ocean Sciences #1743430, is a science-driven ocean observing network that delivers real-time data from more than 800 instruments to address critical science questions regarding the world’s oceans. OOI data are freely available online to anyone with an Internet connection. 

The OOI provides an exponential increase in the scope and timescale of observations of the world’s oceans. Present and future educators, scientists, and researchers will draw conclusions about climatological and environmental processes based on these measurements, which sets a requirement for the data to be accurate, with a flawless pedigree. As a result, the OOI has a requirement to protect its data from being altered by any external agent.

To this end, OOI-CI (OOI Cyberinfrastructure) is seeking consultation from Trusted CI on evaluation of their current security program, along with guidance on reviewing and evaluating potential alternatives for an enhanced security posture. Through a kick-off meeting, Trusted CI and OOI discussed their concerns, questions, and goals, including: penetration testing; system and software vulnerability scanning and remediation; gaps in current policies and procedures; developing periodic security tasks; and identifying ‘unknowns’. These topics were refined and prioritized based on their needs using a subset of tasks outlining the goals of the engagement, specifically:

  1. Perform a review of OOI’s cyberinfrastructure using the Trusted CI Security Program Evaluation worksheet in order to assess the current state and target level of their cybersecurity.
  2. Review the 2015 Engagement final report and recommendations (covering OOI @Rutgers University) with the goal to see if any recommendations made at that time are still applicable and warranted.
  3. Using the information documented in step 1, take initial steps toward adopting the Trusted CI Framework by developing a ‘master information security policies and procedures’ (MISPP) document.
  4. Discuss and document missing policies and procedures from the Framework, including questions and concerns raised by OOI, and also unknowns discovered in above exercises.  
  5. Provide guidance on creating an asset inventory, applying a control set, and creating and maintaining a risk registry.
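
On the last point, a risk registry is essentially a tracked list of identified risks with owners, likelihood/impact ratings, and mitigations. A minimal illustrative sketch (the fields, entries, and scoring convention here are hypothetical, not OOI's actual registry):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a minimal risk registry (illustrative fields only)."""
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    owner: str
    mitigation: str

    @property
    def score(self) -> int:
        # A common convention: rank risks by likelihood x impact.
        return self.likelihood * self.impact

registry = [
    RiskEntry("R-001", "Unpatched OS on data transfer node", 4, 4,
              "sysadmin team", "monthly patch cycle"),
    RiskEntry("R-002", "Loss of instrument telemetry feed", 2, 5,
              "ops team", "redundant network path"),
]

# Review highest-scoring risks first.
registry.sort(key=lambda r: r.score, reverse=True)
```

In practice a registry lives in a spreadsheet or ticketing system and is revisited on a fixed schedule; the point of the structure is that every risk has an owner and a comparable score.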

Additionally, this engagement can have broader impacts because OOI-CI is connected to several locations around the country. Lessons learned and recommendations from the engagement will be implemented at the other sites, which consist of Woods Hole Oceanographic Institution (WHOI) administration and the three Marine Implementing Organizations (MIOs) that provide data from Oregon State University, the University of Washington, and WHOI.

The engagement will run from September 2021 to December 2021.

Monday, October 18, 2021

Announcing Trusted CI's Open Science Cybersecurity Fellows Program (Applications due Nov. 12th)

Application Deadline: Friday, Nov. 12th. Apply here.

Overview

Trusted CI serves the scientific community as the NSF Cybersecurity Center of Excellence, providing leadership in and assistance with cybersecurity in support of research. In 2019, Trusted CI established its Open Science Cybersecurity Fellows program. This program establishes and supports a network of Fellows with diversity in both geography and scientific discipline. These Fellows have access to training and other resources to foster their professional development in cybersecurity. In exchange, they champion cybersecurity for science in their scientific and geographic communities and communicate challenges and successful practices to Trusted CI.

About the program

The vision for the Fellows program is to identify members of the scientific community, empower them with basic knowledge of cybersecurity and the understanding of Trusted CI’s services, and then have them serve as cybersecurity liaisons to their respective communities. They would then assist members of the community with basic cybersecurity challenges and connect them with Trusted CI for advanced challenges. 

Trusted CI will select six Fellows each year. Fellows will receive recognition and cybersecurity professional development consisting of training and travel funding. The Fellows’ training will consist of a Virtual Institute providing 20 hours of basic cybersecurity training over six months. The training will be delivered by Trusted CI staff and invited speakers. The Virtual Institute will be presented as a weekly series via Zoom and recorded for later public online viewing. Travel support is budgeted (during the first year only) to cover Fellows’ attendance at the NSF Cybersecurity Summit, PEARC, and one professional development opportunity agreed to with Trusted CI. The Fellows will be added to an email list where any challenges they encounter will receive prioritized attention from Trusted CI staff. Trusted CI will recognize the Fellows on its website and social media. Fellowships are funded for one year, but Fellows will be encouraged to continue participating in Trusted CI activities in the years following their fellowship year.

After the Virtual Institute, Fellows, with assistance from the Trusted CI team, will be expected to help their science community with cybersecurity and make them aware of Trusted CI for complex needs. By the end of the year, they will be expected to present or write a short white paper on the cybersecurity needs of their community and some initial steps they will take (or have taken) to address these needs. After the year of full support, Trusted CI will continue recognizing the cohort of Fellows and giving them prioritized attention. Over the years, this growing cohort of Fellows will broaden and diversify Trusted CI’s impact.

Application requirements

  • A description of their connection to the research community. Any connection to NSF projects should be clearly stated, ideally providing the NSF award number.
  • A statement of interest in cybersecurity
  • A two-page biosketch
  • Optional demographic info
  • A letter from their supervisor supporting their involvement and time commitment to the program
  • A commitment to fully participate in the Fellows activities for one year (and optionally thereafter)

The selection of Fellows will be made by the Trusted CI PIs and Senior Personnel based on the following criteria:

  1. Demonstrated connection to scientific research, with preference given to those who demonstrate a connection to NSF-funded science.
  2. Articulated interest in cybersecurity.
  3. Potential to broaden Trusted CI’s impact across all seven NSF research directorates (Trusted CI encourages applications from individuals with connections to NSF directorates other than CISE), connection to any of the NSF 10 Big Ideas, or potential to increase the participation of underrepresented populations.

Who should apply?

  • Professionals and post-docs interested in cybersecurity for science, with evidence of that interest in their past and current roles
  • Research computing, data, and IT technical or policy professionals interested in applying cybersecurity innovations to scientific research
  • Domain scientists interested in data integrity aspects of scientific research
  • Scientists from across the seven NSF research directorates interested in how data integrity fits with their scientific mission
  • Researchers in the NSF 10 Big Ideas interested in cybersecurity needs
  • Regional network security personnel working across universities and facilities in their region
  • People comfortable collaborating and communicating across multiple institutions with IT / CISO / Research Computing and Data professionals
  • Anyone in a role relevant to cybersecurity for open science

More about the Fellowship

Fellows come from a variety of career stages. They demonstrate a passion for their area, the ability to communicate ideas effectively, and a genuine interest in the role of cybersecurity in research. Fellows are empowered to talk about cybersecurity to a wider audience, network with others who share a passion for cybersecurity for open science, and learn key skills that benefit them and their collaborators.

If you have questions about the Fellows program, please let us know by emailing fellows@trustedci.org.




Monday, October 11, 2021

Trusted CI webinar: The Trusted CI Framework: Overview and Recent Developments, Oct 25th @11am Eastern

Trusted CI's Scott Russell will be presenting the talk, The Trusted CI Framework: Overview and Recent Developments, on Monday October 25th at 11am (Eastern).

Please register here.

The Trusted CI Framework is a tool to help organizations establish and refine their cybersecurity programs. In response to an abundance of guidance focused narrowly on cybersecurity controls, Trusted CI set out to develop a new framework that would empower organizations to confront cybersecurity from a mission-oriented, programmatic, and full organizational lifecycle perspective. The Trusted CI Framework recommends organizations take control of their cybersecurity the same way they would any other important business concern: by adopting a programmatic approach.

This webinar will provide an introduction to the Trusted CI Framework, including a walkthrough of the 16 “Musts” for establishing a competent cybersecurity program. Then we will go on to cover recent developments with the Trusted CI Framework, including: 
  1. The publication of the first “Framework Implementation Guide,” which provides in-depth guidance on how to implement each Framework Must;
  2. The experiences of NOIRLab (NSF Major Facility) as the first official Framework adopter; and
  3. The announcement of the “Framework Cohort” for 2022, an initiative to help Major Facilities adopt and implement the Framework.

Speaker Bio

Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research. Scott was previously the Postdoctoral Fellow in Information Security Law & Policy. Scott’s work thus far has emphasized private sector cybersecurity best practices, data aggregation and the First and Fourth Amendments, and cybercrime in international law. Scott studied Computer Science and History at the University of Virginia and received his J.D. from the Indiana University Maurer School of Law.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, September 29, 2021

Findings Report of the 2021 Trusted CI Annual Challenge on Software Assurance Published

As reported on this blog earlier this year, in 2021 Trusted CI has been conducting its focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure.

In July, the 2021 Trusted CI Annual Challenge team posted its initial findings in this blog.  The team is now pleased to share its detailed findings report:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Now that the team has finished its examination of software assurance findings, it will turn its focus to solutions. Accordingly, later this calendar year the Trusted CI team will publish a guide of recommended best practices for scientific software development.

For those interested in hearing more about the 2021 Annual Challenge, please (virtually) come to the team’s panel session at the 2021 NSF Cybersecurity Summit at 3:05 EDT on October 13, 2021: https://www.trustedci.org/2021-summit-program


Wednesday, September 22, 2021

SGCI Webinar: Security recommendations for science gateways, Sept 29th @ 1pm EDT

This webinar announcement was originally posted on SGCI's website.

Security recommendations for science gateways

Wednesday, September 29, 2021, 1 pm Eastern/10 am Pacific

Presented by Mark Krenz, Chief Security Analyst, Center for Applied Cybersecurity Research, Indiana University

Trusted CI has recently published a four-page document targeted at small team science gateways. This document provides a prioritized list of security recommendations to help reduce overall security risk. In this webinar Mark Krenz, from Trusted CI, will be providing an introduction and overview of the document, as well as a discussion of the lessons learned from the last few years of providing security consultations for science gateways.

See SGCI's webinars page for the Zoom link and password.

Tuesday, September 14, 2021

Trusted CI webinar: Q-Factor: Real-time data transfer optimization, September 27th @11am Eastern

Members of FIU and ESnet are presenting the talk, Q-Factor: Real-time data transfer optimization leveraging In-band Network Telemetry provided by P4 data planes, on Monday September 27th at 11am (Eastern). Our presenters are Jeronimo Bezerra, Richard Cziva, and Dr. Julio Ibarra.

Please register here.

Q-Factor is a framework to enable data transfer optimization based on real-time network state information provided by programmable data planes. Communication networks are critical components of today’s scientific workflows. Researchers leverage long-distance ultra-high-speed networks to transfer massive data sets from acquisition sites to processing sites and share measurements with scientists worldwide. However, while network bandwidth is continuously increasing, most data transfers are unable to efficiently utilize the added capacity due to inherent limitations in the parameter settings of network transport protocols and the lack of network state information at the end hosts. To address these challenges, Q-Factor plans to use sub-second network state data to dynamically configure transport protocol and operating system parameters to reach higher network utilization and, as a result, improve scientific workflows. Q-Factor leverages programmable network devices with the In-band Network Telemetry (INT) framework and delivers a software solution to process in-band measurements at the end hosts. With Q-Factor running on end hosts such as Data Transfer Nodes (DTNs), TCP/IP parameters will be configured according to temporal network characteristics such as round-trip time, network utilization, and network buffer occupancy. This tuning is expected to increase network utilization, shorten flow completion times, and significantly reduce packet drops caused by under-provisioned network buffers. Q-Factor is a collaboration between Florida International University and the Energy Sciences Network (ESnet).
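
The tuning described above starts from the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a path full. A minimal sketch of the calculation (illustrative code; `bdp_bytes` is not part of Q-Factor):

```python
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Bandwidth-delay product: bytes that must be in flight to fill the path."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# A 100 Gb/s path with a 50 ms round-trip time needs ~625 MB in flight,
# far beyond typical default TCP buffer limits.
needed = bdp_bytes(100e9, 0.050)
```

A host whose TCP buffers are smaller than the BDP simply cannot fill the path; frameworks like Q-Factor aim to derive the round-trip-time and utilization inputs from live telemetry rather than static estimates.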

Speaker Bio

Jeronimo Bezerra is the IT Associate Director of FIU’s Center for Internet Augmented Research and Assessment. Jeronimo has 19 years of IT and network engineering experience, most of it with R&E networks. He is responsible for AmLight network operation and engineering, including its SDN deployment and operation, and is leading Q-Factor’s design, development, and deployment activities.

Richard Cziva is a software engineer at ESnet. He has a range of technical interests including traffic and performance analysis, data-plane programmability, high-speed packet processing, software-defined networking, and network function virtualization. Prior to joining ESnet in 2018, Richard was a Research Associate at University of Glasgow, where he looked at how advanced services (e.g., personalized firewalls, intrusion detection modules, measurement functions) can be implemented and managed inside wide area networks with programmable edge capabilities. Richard holds a BSc in Computer Engineering (2013) from Budapest University of Technology and Economics, Hungary and a Ph.D. in Computer Science (2018) from University of Glasgow, United Kingdom. He will lead the research activities in Q-Factor.

As the Assistant Vice President for Technology Augmented Research at FIU, Dr. Julio Ibarra is responsible for furthering the mission of the Center for Internet Augmented Research and Assessment (CIARA): to contribute to the pace and quality of research at FIU through the application of advanced cyberinfrastructure. He has 30+ years of IT and telecom infrastructure management experience, including 18 years specializing in research and education networks and project management. Dr. Ibarra will be responsible for overall project management and coordination.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Tuesday, September 7, 2021

Testbed Facility Security Workshop at 2021 NSF Cybersecurity Summit


The 2021 NSF Summit Workshop on Testbed Facility Security will be held Monday, October 18 from 1pm to 5pm Eastern Time as part of the 2021 NSF Cybersecurity Summit. The workshop will explore the unique cybersecurity challenges of testbed facilities, stemming from their mission to enable experimental use, including configuration of facility resources for novel networking and security experiments, which may span multiple facilities. The workshop is being co-organized by Chameleon, Colosseum, DETERLab, FABRIC, PAWR, and Trusted CI.
If you are interested in the cybersecurity challenges of experimental cloud-based testbeds, please plan to attend. Visit https://www.trustedci.org/2021-testbed-facility-security-workshop for more details.
The workshop is a follow-on activity from the Trusted CI FABRIC engagement. See https://blog.trustedci.org/search/label/FABRIC for more information about that engagement.
 

Tuesday, August 31, 2021

2021 Open OnDemand Engagement Concludes

Open OnDemand, funded by NSF OAC, is an open-source HPC portal based on the Ohio Supercomputer Center’s original OnDemand portal. The goal of Open OnDemand is to provide an easy way for system administrators to provide web access to their HPC resources.

Open OnDemand is seeing increased community adoption. As a result, it is becoming a critical production service for many HPC centers and clients. Open OnDemand engaged with Trusted CI to improve the overall security of the project, ensuring that it continues to be a trusted and reliable platform for the hundreds of centers and tens of thousands of clients that regularly use it.

Our engagement centered on providing the Open OnDemand team with the skills, tools, and resources needed to ensure their software’s security. This included using the First Principles Vulnerability Assessment (FPVA) methodology to conduct in-depth vulnerability assessments independently. In addition, we evaluated the static analysis and dependency checking tools used by Open OnDemand. This evaluation led to interesting findings about how the tools behave and a set of recommendations on which tools to use and how to configure them most effectively.

Trusted CI has performed in-depth assessments for NSF projects in the past. In this engagement with Open OnDemand, we took a step forward: Trusted CI taught a group how to perform the assessment themselves. In general, the NSF community benefits from being able to carry out this kind of activity autonomously. In addition, the lessons from this engagement related to automated tools will benefit any NSF software project.

Open OnDemand Software Engineer, Jeff Ohrstrom, shared positive feedback regarding the value of the engagement, stating “The biggest takeaway for me was just getting muscle memory around security to start to think about attack vectors in every change, every commit, every time.”

Our findings and recommendations are summarized in our engagement report, which can be found here.

Thursday, August 26, 2021

Trusted CI begins engagement with University of Arkansas

The University of Arkansas has engaged with Trusted CI and the Engagement and Performance Operations Center (EPOC) to review their plans for a Science DMZ that will serve institutions for higher education across Arkansas. Trusted CI and EPOC will also help create training and policy materials that can be reused by other institutions both in the state of Arkansas and beyond.

A Science DMZ is a secure network architecture for providing high-throughput transfer of science data between two points. By placing data transfer nodes outside each institution's canonical network and into a specially controlled zone, the Science DMZ increases speed by reducing the friction created by firewalls, competing traffic, and switches and routers tuned for more general-purpose traffic.

The University of Arkansas, via its Data Analytics that are Robust and Trusted (DART) project, is funded by NSF grant #194639 for EPSCoR RII.

Tuesday, August 24, 2021

Trusted CI Begins Engagement with Jupyter Security Coordinators

Project Jupyter is an open-source project which supports interactive data science and scientific computing across multiple programming languages. Project Jupyter has developed several interactive computing products including Jupyter Notebook, JupyterLab, and JupyterHub, which are used throughout the NSF community. This Trusted CI engagement is motivated by an upcoming Jupyter Security Best Practices Workshop funded by NumFOCUS as part of the Community Workshop series. The workshop is tentatively scheduled to be held April 2022 at the Ohio Supercomputer Center.

The goals of this engagement include the following tasks.

  • Review existing Jupyter deployment documentation related to security, identify gaps, and create recommendations for improvements.
  • Identify Jupyter deployment use-cases as targets for Jupyter Security Best Practices documentation. Example use-cases include DOE supercomputing centers, campus research clusters, workshops, small scientific projects, etc. Prioritize these use-cases based on which audiences would benefit most from new security documentation.
  • Write Jupyter Security Best Practices documentation for high priority use-cases identified above. Work through other use-cases as time permits.

The Jupyter Security Best Practices documentation produced by this engagement will be shared with Project Jupyter for inclusion in their documentation, and also presented at the workshop.

To read Jupyter's blog post about the engagement, click here.

Monday, August 23, 2021

Trusted CI Adopts Framework for its own Security Program

Trusted CI, the NSF Cybersecurity Center of Excellence, is pleased to announce that it has completed its adoption of the Trusted CI Framework for its own security program. The previous security program, based on Trusted CI’s Guide for Cybersecurity Programs for NSF Science and Engineering Projects, provided Trusted CI with a usable but basic security program. As Trusted CI matured and its impact on the community expanded, we found our program was no longer adequate for our growing cybersecurity needs. Thus, we began rebuilding our program in order to strengthen our security posture.

The release of Trusted CI’s Framework was independent of our effort to rebuild our security program, but serendipitously timed nonetheless. We leveraged the Framework Implementation Guide (FIG), instructions for research cyberinfrastructure operators, to rebuild our security program based on the 4 Pillars and 16 Musts constituting the Trusted CI Framework.

The documents that form Trusted CI’s updated security program include the top-level Master Information Security Policies and Procedures (MISPP), along with the supporting policies: Access Control Policy, Collaborator Information Policy, Document Labeling Policy, Incident Response Policy & Procedures, Information Classification Policy, Infrastructure Change Policy, and Onboarding / Offboarding Policy & Procedures. Moreover, to track critical assets, asset owners for incident response, associated controls, and granted privilege escalations, the following “Asset Specific Access and Privilege Specifications,” or ASAPs, were included: Apple (Podcasts), Badgr, Backup System (for G-Drive), Blogger, CloudPerm (G-Drive tool), DNS Registrar, GitHub, Group Service Account, IDEALS (@Illinois), Mailing Lists (@Indiana), Slack, Twitter, YouTube, Website (SquareSpace), Zenodo, and Zoom.


The effort to adopt the Trusted CI Framework took ½ FTE over four months. 

Registration is now open for the 2021 NSF Cybersecurity Summit

It is our great pleasure to announce registration is now open for the 2021 NSF Cybersecurity Summit. Please join us for this virtual conference. Plenary: Oct 12-13, Trainings: Oct 15, Workshops: Oct 18-19. Attendees will include cybersecurity practitioners, technical leaders, and risk owners from within the NSF Large Facilities and CI community, as well as key stakeholders and thought leaders from the broader scientific and cybersecurity communities.


Registration: Complete the online registration form:
https://www.trustedci.org/2021-cybersecurity-summit

Thank you on behalf of the Program and Organizer Committee.

 

Tuesday, August 17, 2021

Trusted CI webinar: NCSA Experience with SOC2 in the Research Computing Space August 30th @11am Eastern

NOTE: If you have any experience with SOC2 compliance and want to share resources, slideshows, presentations, etc., please email links and other materials to Jeannette Dopheide <jdopheid@illinois.edu> and we will share them during the presentation. 

NCSA's Alex Withers is presenting the talk, NCSA Experience with SOC2 in the Research Computing Space, on Monday August 30th at 11am (Eastern).

Please register here.

As the demand for research computing dealing with sensitive data increases, institutions like the National Center for Supercomputing Applications work to build the infrastructure that can process and store these types of data.  Along with the infrastructure can come a host of regulatory obligations including auditing and examination requirements.  We will present NCSA’s recent SOC2 examination of its healthcare computing infrastructure and how we ensured our controls, data collection and processes were properly documented, tested and poised for the examination.  Additionally, we will show how other research and educational organizations might handle a SOC2 examination and what to expect from such an examination.  From a broader perspective, the techniques and lessons learned can be applied to much more than a SOC2 examination and could potentially be used to save time and resources for any audit or examination.

Speaker Bio

Alex Withers is an Assistant Director for Cyber Security and the Chief Information Security Officer at the National Center for Supercomputing Applications (NCSA). Additionally, he is the security co-manager for the XSEDE project and NCSA’s HIPAA Security Liaison. He is also a PI and co-PI for a number of NSF-funded cybersecurity projects.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

 

Tuesday, August 10, 2021

Trusted CI Begins Engagement with Ohio Supercomputing Center

In July the Ohio Supercomputing Center (OSC) began an engagement with Trusted CI to address the challenge of security questionnaire response management for academic research service providers.

Potential users with strong security concerns commonly submit security questionnaires to research service providers. Security staff at the provider must complete these questionnaires to give users the information they need to assess whether the resource is appropriate for their concerns. Because the questionnaires block use of the resource, they become high-priority interrupts for security staff who have limited time to manage them. Moreover, the questionnaires are typically written for commercial cloud service providers, not research service providers at higher education institutions, resulting in a mismatch between the questions and the academic research environment.

The goal of the engagement is to produce guidance for academic research service providers (such as NSF HPC centers and campus NSF CC*/CICI awardees) that addresses the challenge of security questionnaire response management. Our approach is to produce a profile of the EDUCAUSE Higher Education Community Vendor Assessment Toolkit (HECVAT) (specifically, the HECVAT-Lite version) that is applicable to academic research service providers (rather than commercial cloud service providers), so that research service providers can maintain responses to a single security questionnaire that should be broadly accepted by their users.

The profile should be applicable to HPC/HTC providers (like OSC, NCSA, OSG/PATh), NSF research testbeds (like FABRIC), academic research software providers (like CILogon, Globus, and Open OnDemand), and campus Science DMZs.

The co-lead of the HECVAT Users Community Group, Charlie Escue, has agreed to join us during this engagement to help provide guidance and insight into the HECVAT. Trusted CI and OSC are grateful for his contributions to this exciting project.

The engagement is planned to conclude in December with the resulting work to be published for the benefit of our CI community.

Friday, August 6, 2021

Michigan State University and Trusted CI Collaborate to Raise Awareness of Cybersecurity Threats to the Research Community

Ransomware is a form of cybercrime that the U.S. Department of Justice has elevated to the same level of concern as terrorism. The United States suffered more than 65,000 ransomware attacks last year, and victims paid $350 million in ransom, with an unknown amount of collateral cost due to lost productivity. Historically, research organizations have largely been ignored by cybercriminals since they do not typically hold data that is easily sold or otherwise monetized. Unfortunately, since ransomware works by extorting payments from victims to get their own data back, research organizations are no longer immune to being targeted by criminals.

An event of this nature occurred in the Physics and Astronomy department at Michigan State University (MSU), which experienced a ransomware attack in May 2020. While many organizations attempt to keep the public from finding out about cyberattacks for fear of loss of reputation or follow-up attacks, MSU has decided to make elements and factors of its attack public in the interests of transparency, to encourage disclosure of similar types of attacks, and perhaps more importantly, to educate the open-science community about the threat of ransomware and other destructive types of cyberattacks. The overarching goal is to raise awareness about rising cybersecurity threats to higher education in hopes of driving safe cyberinfrastructure practices across university communities.

To achieve this, the CIO’s office at MSU engaged with Trusted CI, the NSF Cybersecurity Center of Excellence, in a collaborative review and analysis of the ransomware attack suffered by MSU last year. The culmination of the engagement—based on interviews of those involved in the incident—is the report “Research at Risk: Ransomware attack on Physics and Astronomy Case Study,” which focuses on lessons learned during the analysis. The report contains mitigation strategies that other researchers and their colleagues can apply to protect themselves. In the experience of Trusted CI, there was nothing extraordinary about the issues that led to this incident, and hence, we share these lessons with the goal of motivating other organizations to prevent future negative impacts to their research mission.

The engagement ran from January 2021 to July 2021.


Tuesday, August 3, 2021

Trusted CI new co-PIs: Peisert and Shute

I am happy to announce that Sean Peisert and Kelli Shute have taken on co-PI roles with Trusted CI. Both already have substantial leadership roles with Trusted CI. Sean is leading the 2021 annual challenge on software assurance and Kelli has been serving as Trusted CI’s Executive Director since August of 2020.

Thank you to Sean and Kelli for being willing to step up and take on these responsibilities.

Von

Trusted CI PI and Director


Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

 In 2021, Trusted CI is conducting our focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the robustness of software used in scientific computing with respect to security. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings during the past six months. 

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

In support of this effort, Trusted CI gratefully acknowledges input from the following teams: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory.

At a high level, the team identified challenges that developers face with robust policy and process documentation; difficulties in identifying and staffing security leads and in ensuring strong lines of security responsibility among developers; difficulties in the effective use of code analysis tools; confusion about when, where, and how to find effective security training; and challenges in controlling the source code developed and the external libraries used, to ensure strong supply-chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects in its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if common themes could be identified and paired with proportionate remediations, the state of software assurance in science might be significantly improved.

Trusted CI has observed that currently available software development resources often do not match well with the needs of scientific projects; the backgrounds of the developers, the available resources, and the way the software is used do not necessarily map to existing resources available for software assurance. Hence, Trusted CI assembled a team with a range of security expertise, from academic research to operations. That team then examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several details that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and observations from developers about the way that they wish things might be different in the future, or if they were able to do things over again from the beginning.


Findings

The Annual Challenge team’s findings generally fall into five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy process documentation, including policies relating to onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One cause for this finding is often that software projects start small and do not plan to grow or be used widely. And when the software does grow and starts to be used broadly, it can be hard to develop formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations do not budget for security. Further, where policy documentation does exist, it can easily go stale -- “documentation rot.” As a result, it would be helpful for Trusted CI to develop guides for and examples of such policies that could be used and implemented even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage in regards to DevSecOps. In terms of acquiring broader security skills, some projects attempted to use institutional “audit services” but found mixed results. Several projects struggled with the challenge of integrating security knowledge among different teams or individuals. Strong lines of responsibility can create valuable modularity but can also create points of weakness when interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by using processes for developing security policies around software, ensuring ongoing management support and enforcement of policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in the project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues in a project. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to avoid using them. It was determined that it would be helpful for Trusted CI to develop tutorials that are appropriate for the developers in the scientific software community to learn how to properly use these tools and overcome their traditional weaknesses without being buried in useless results.
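One practical way to keep a static analyzer usable is to triage its output by severity and confidence before showing it to developers. The sketch below illustrates this filtering pattern; the finding records are made up for illustration, though tools such as Bandit emit similar severity/confidence fields.

```python
# Sketch: triaging static-analysis findings so developers are not buried in
# false positives. The finding data is hypothetical; the rule IDs mimic the
# style of Bandit's output but are used here only as examples.

findings = [
    {"id": "B602", "severity": "HIGH", "confidence": "HIGH",
     "msg": "subprocess call with shell=True"},
    {"id": "B101", "severity": "LOW", "confidence": "HIGH",
     "msg": "use of assert detected"},
    {"id": "B311", "severity": "LOW", "confidence": "MEDIUM",
     "msg": "pseudo-random generator not suitable for security purposes"},
]

def triage(findings, min_severity="MEDIUM", min_confidence="MEDIUM"):
    """Keep only findings at or above the given severity and confidence."""
    rank = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}
    return [f for f in findings
            if rank[f["severity"]] >= rank[min_severity]
            and rank[f["confidence"]] >= rank[min_confidence]]

for f in triage(findings):
    print(f"{f['id']}: {f['msg']}")
```

Starting with a strict filter and gradually lowering the thresholds as the team gains experience lets a project adopt the tool without an overwhelming initial backlog.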

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
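The core of dependency checking can be sketched as comparing each pinned version against known advisories. The advisory data and version numbers below are invented for illustration; real tools such as Dependabot consult an actual advisory database.

```python
# Sketch: flagging pinned dependencies that fall below the fixed version in a
# vulnerability advisory. All package versions and advisory IDs here are
# hypothetical examples.

pinned = {"requests": "2.19.0", "numpy": "1.24.0"}

# advisory database: package -> list of (first_fixed_version, advisory_id)
advisories = {"requests": [((2, 20, 0), "EXAMPLE-2018-0001")]}

def parse(version: str) -> tuple:
    """Turn '2.19.0' into (2, 19, 0) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

def vulnerable(pinned, advisories):
    """Return (package, advisory_id) pairs for pins below a fixed version."""
    hits = []
    for pkg, version in pinned.items():
        for fixed_in, advisory_id in advisories.get(pkg, []):
            if parse(version) < fixed_in:
                hits.append((pkg, advisory_id))
    return hits

for pkg, adv in vulnerable(pinned, advisories):
    print(f"{pkg}: affected by {adv}; upgrade required")
```

Because advisories accumulate over time, the value of such a check comes from running it continuously (e.g., on every build), not just once at release.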

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were not sufficiently restricted to maintain a secure posture. Many projects leverage GitHub’s dependency checking software; however, that tool only checks libraries within GitHub’s domain. A few projects developed their own software in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet these packages. In the Trusted CI software assurance guide, it would be useful to describe processes for leveraging two-factor authentication and for developing policies governing access controls, commits, pull requests, and vetting of external libraries.
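One lightweight vetting mechanism a project can adopt today is checksum pinning: record a hash of each external artifact at vetting time and refuse anything that does not match. The sketch below shows the idea with made-up artifact bytes; pip's hash-checking mode applies the same principle to real packages.

```python
# Sketch: vetting an external dependency by verifying its checksum against a
# value pinned when the library was originally reviewed. The artifact bytes
# here are hypothetical stand-ins for a real downloaded package.

import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches the hash recorded at vetting time."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

artifact = b"example library contents"
pinned = hashlib.sha256(artifact).hexdigest()  # recorded during the original review

assert verify_artifact(artifact, pinned)
assert not verify_artifact(artifact + b"tampered", pinned)
print("artifact verified")
```

Pinning does not judge whether the library was trustworthy in the first place, but it does guarantee that what runs in production is byte-for-byte what was reviewed.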


Next Steps

The findings derived from our examination of several representative scientific software development projects will direct our efforts towards addressing what new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide consisting of this material, targeted toward anyone who is either planning or has an ongoing software project that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus of the guide will be on projects that provide a user-facing front end exposed to the Internet because such software is most likely to be attacked. 

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend various resources that should be leveraged in scientific software, including the types of tools to run to expose vulnerabilities, best practices in coding, and some procedures that should be followed when engaged in a large collaborative effort and how to share the code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance around how to minimize possible risks incurred in creating and using scientific software.