Tuesday, August 3, 2021

Trusted CI new co-PIs: Peisert and Shute

I am happy to announce that Sean Peisert and Kelli Shute have taken on co-PI roles with Trusted CI. Both already have substantial leadership roles with Trusted CI. Sean is leading the 2021 annual challenge on software assurance and Kelli has been serving as Trusted CI’s Executive Director since August of 2020.

Thank you to Sean and Kelli for being willing to step up and take on these responsibilities.

Von

Trusted CI PI and Director


Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

In 2021, Trusted CI is conducting its focused “annual challenge” on the assurance of software used in scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the robustness of software used in scientific computing with respect to security. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and to identify opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings from the past six months.

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

Trusted CI gratefully acknowledges the input of the following teams who contributed to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory.

At a high level, the team identified several categories of challenges developers face: producing robust policy and process documentation; identifying and staffing security leads and ensuring clear lines of security responsibility among developers; making effective use of code analysis tools; knowing when, where, and how to find effective security training; and controlling the source code developed and the external libraries used, to ensure strong supply chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects over its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if some common themes could be identified and paired with proportional remediations, the state of software assurance in science might be significantly improved.

Trusted CI has observed that currently available software development resources often do not match well with the needs of scientific projects: the backgrounds of the developers, the resources available to them, and the way the software is used do not necessarily map to existing software assurance resources. Hence, Trusted CI put together a team with a range of security expertise, from academic research to operational practice. That team then examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities among them related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several details that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and observations from developers about the way that they wish things might be different in the future, or if they were able to do things over again from the beginning.


Findings

The Annual Challenge team’s findings are generally aligned with one of five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy and process documentation, including policies for onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One common cause is that software projects start small and do not plan to grow or be used widely; when the software does grow and starts to be used broadly, it can be hard to introduce formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations often do not budget for security. Further, where policy documentation does exist, it can easily go stale ("documentation rot"). As a result, it would be helpful for Trusted CI to develop guides for, and examples of, such policies that could be adopted even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage with regard to DevSecOps. In terms of acquiring broader security skills, some projects tried institutional “audit services” but reported mixed results. Several projects struggled with integrating security knowledge across different teams or individuals: strong lines of responsibility can create valuable modularity, but they can also create points of weakness when the interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by establishing processes for developing security policies around software, ensuring ongoing management support and enforcement of those policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in a project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to stop using them. The team concluded that it would be helpful for Trusted CI to develop tutorials appropriate for scientific software developers on how to use these tools properly and overcome their traditional weaknesses without being buried in useless results.
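
As a minimal sketch of one way to keep static analysis findings manageable in CI, the following assumes the open-source Python analyzer Bandit is installed and fails the build only on high-severity, high-confidence findings; the field names reflect Bandit's JSON output and may change between versions.

```python
"""Sketch: run a static analyzer in CI, but fail the build only on
high-confidence, high-severity findings so false positives stay manageable.
Assumes Bandit is installed (pip install bandit)."""
import json
import subprocess
import sys

def run_bandit(source_dir):
    # Bandit exits non-zero when it finds any issue, so don't use check=True.
    proc = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    return report.get("results", [])

def blocking_findings(results):
    # Only high-severity, high-confidence findings break the build;
    # everything else is still printed in the CI log for review.
    return [
        r for r in results
        if r.get("issue_severity") == "HIGH" and r.get("issue_confidence") == "HIGH"
    ]

if __name__ == "__main__":
    results = run_bandit(sys.argv[1] if len(sys.argv) > 1 else "src")
    blockers = blocking_findings(results)
    for r in blockers:
        print(f"{r['filename']}:{r['line_number']} {r['test_id']} {r['issue_text']}")
    print(f"{len(results)} total findings, {len(blockers)} blocking")
    sys.exit(1 if blockers else 0)
```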

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
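
The snippet below is an illustration of the idea behind dependency checking, not the specific tooling used by any of the projects above: it queries the public OSV vulnerability database (https://osv.dev) for a few hypothetical pinned dependencies. In practice, tools such as GitHub's Dependabot or pip-audit automate the same workflow.

```python
"""Sketch: check pinned dependencies against the OSV vulnerability database.
Uses only the standard library; package names and versions are placeholders."""
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name, version, ecosystem="PyPI"):
    # OSV accepts a package/version query and returns any matching advisories.
    query = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Hypothetical pinned dependencies; in practice read these from a lock file.
    pinned = {"requests": "2.25.1", "numpy": "1.21.0"}
    for name, version in pinned.items():
        for vuln in known_vulnerabilities(name, version):
            print(f"{name}=={version}: {vuln['id']} {vuln.get('summary', '')}")
```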

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were often not restrictive enough to maintain a secure posture. Many projects leverage GitHub’s dependency checking features; however, those are limited to checking libraries within GitHub’s domain. A few projects developed their own tooling in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet them. In the Trusted CI software assurance guide, it would be useful to describe processes for requiring two-factor authentication and for developing policies governing access controls, commits, pull requests, and the vetting of external libraries.
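
One simple vetting control such a guide could describe is pinning the hashes of external artifacts when they are first reviewed and refusing to use anything that no longer matches. The sketch below is hypothetical (the file name and digest are placeholders), but the mechanism is the same one used by lock files and pip's --require-hashes mode.

```python
"""Sketch: verify that a downloaded external package matches a hash recorded
when the dependency was originally vetted, so an unexpected upstream change
is caught before the code is used. Names and digests are placeholders."""
import hashlib
from pathlib import Path

# Digests recorded at review time (hypothetical values).
PINNED_SHA256 = {
    "thirdparty-lib-1.4.2.tar.gz": "replace-with-the-digest-recorded-at-review-time",
}

def verify_artifact(path):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED_SHA256.get(path.name)
    return expected is not None and digest == expected

if __name__ == "__main__":
    artifact = Path("downloads/thirdparty-lib-1.4.2.tar.gz")
    if not verify_artifact(artifact):
        raise SystemExit(f"Refusing to use {artifact.name}: not on the allowlist or hash mismatch")
    print(f"{artifact.name} matches its pinned hash")
```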


Next Steps

The findings from our examination of several representative scientific software development projects will direct our efforts toward the new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide consisting of this material, targeted toward anyone who is either planning or has an ongoing software project that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus of the guide will be on projects that provide a user-facing front end exposed to the Internet because such software is most likely to be attacked. 

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend resources that scientific software projects should leverage, including the types of tools to run to expose vulnerabilities, best practices in coding, and procedures to follow in large collaborative efforts, including how to share code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance on how to minimize the risks incurred in creating and using scientific software.

Monday, July 19, 2021

Higher Education Regulated Research Workshop Series: A Collective Perspective

Regulated research data is a growing challenge for NSF-funded organizations in research and academia, with little guidance available on how to tackle regulated research institutionally. Trusted CI would like to bring the community’s attention to an important report released today by the organizers of a recent, NSF-sponsored* Higher Education Regulated Research Workshop Series that distills the input of 155 participants from 84 Higher Education institutions. Motivated by the Higher Ed community’s desire to standardize strategies and practices, the facilitated** workshop sought to find efficient ways for institutions large and small to manage regulated research data and smooth the path to compliance. It identified six main pillars of a successful research cybersecurity compliance program, namely Ownership and Roles, Financials and Cost, Training and Education, Auditing, Clarity of Controls, and Scoping. The report presents each pillar as a chapter, complete with best practices, challenges, and recommendations for research enablers on campus. While it focuses on Department of Defense (DOD) funded research, Controlled Unclassified Information (CUI), and health research, the report offers ideas and guidance on how to stand up a well-managed campus program that applies to all regulated research data. It represents a depth and breadth of community collaboration and institutional experience never before compiled in a single place.

Organized by Purdue University with co-organizers from Duke University, University of Florida, and Indiana University, the workshop comprised six virtual sessions between November 2020 and June 2021. Participants included research computing directors, information security officers, compliance professionals, research administration officers, and personnel who support and train researchers.

The full report is available at the EDUCAUSE Cybersecurity Resources page at https://library.educause.edu/resources/2021/7/higher-education-regulated-research-workshop-series-a-collective-perspective. It was co-authored by contributors from Purdue University, Duke University, University of Florida, Indiana University, Case Western Reserve University, University of Central Florida, Clemson University, Georgia Institute of Technology, and University of South Carolina.

See https://www.trustedci.org/compliance-programs for additional materials from Trusted CI on the topic of compliance programs.

* NSF Grant #1840043, “Supporting Controlled Unclassified Information with a Campus Awareness and Risk Management Framework”, awarded to Purdue University
** by Knowinnovation

Tuesday, July 13, 2021

Trusted CI webinar: A capability-based authorization infrastructure for distributed High Throughput Computing July 26th @11am Eastern

Open Science Grid's Brian Bockelman is presenting the talk, A capability-based authorization infrastructure for distributed High Throughput Computing, on Monday July 26th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

The OSG Consortium provides researchers with the ability to bring their distributed high throughput computing (dHTC) workloads to a pool of resources consisting of hardware across approximately 100 different sites.  Using this “Open Science Pool” resource, projects can leverage opportunistic access (nodes that would otherwise be idle at a site), dedicated hardware, or allocated time at large-scale NSF-funded resources.

While dHTC can be a powerful tool to advance scientific discovery, managing trust relationships with so many sites can be challenging; the OSG helps bootstrap the trust relationships between project and provider.  Further, authorization in the OSG ecosystem is an evolving topic.  On the national and international infrastructure, we are leading the transition from identity-based authorization (basing decisions on “who you are”) to capability-based authorization, which focuses on “what can you do?” and is implemented through tools like bearer tokens.  Changing the mindset of an entire ecosystem is wide-ranging work, involving dedicated projects such as the new NSF-funded “SciAuth” and international partners like the Worldwide LHC Computing Grid.
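
As a simplified illustration of capability-based authorization (not OSG's implementation), the sketch below uses the PyJWT library to verify a bearer token and then checks what the token allows rather than who presented it. The issuer, audience, and scope names are hypothetical; real deployments such as SciTokens or WLCG tokens also discover issuer keys dynamically and interpret scopes as path prefixes.

```python
"""Sketch: authorize a request based on the capabilities (scopes) carried in a
bearer token rather than the client's identity. Requires PyJWT (pip install pyjwt);
issuer, audience, and scope values are hypothetical."""
import jwt

def authorize(token, issuer_public_key, required_scope):
    try:
        claims = jwt.decode(
            token,
            issuer_public_key,
            algorithms=["RS256"],
            audience="https://storage.example.org",  # this service's identifier
            issuer="https://tokens.example.org",      # the trusted token issuer
        )
    except jwt.InvalidTokenError:
        return False
    # Capability check: is the requested operation among the token's scopes?
    scopes = claims.get("scope", "").split()
    return required_scope in scopes

# Example use: a write to /store/alice requires a token carrying that capability.
# allowed = authorize(bearer_token, issuer_key_pem, "storage.modify:/store/alice")
```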

In this talk, we’ll cover the journey of the OSG to capability-based authorization, as well as the challenges and opportunities of changing trust models for a functioning infrastructure.

Speaker Bio

Brian Bockelman is a Principal Investigator at the Morgridge Institute for Research and co-PI on the Partnership to Advance Throughput Computing (PATh) and Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP).  Within the OSG, he leads the Technology Area, which provides the software and technologies that underpin the OSG fabric of services.  He is also a co-PI on the new SciAuth project, led by Jim Basney, which aims to coordinate the deployment of capability-based authorization across the science and engineering cyberinfrastructure.

Before joining Morgridge, Bockelman received a joint PhD in Mathematics and Computer Science from the University of Nebraska-Lincoln (UNL) and was an integral member of the Holland Computing Center at UNL.  His team helps advance Research Computing activities at Morgridge and are partners within the Center for High Throughput Computing (CHTC) at University of Wisconsin-Madison.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, July 7, 2021

Trusted CI Concludes Engagement with FABRIC

FABRIC: Adaptive Programmable Research Infrastructure for Computer Science and Science Applications, funded under NSF grants 1935966 and 2029261, is a national-scale testbed that connects to existing NSF testbeds (e.g., PAWR), as well as NSF Clouds (e.g., Chameleon and CloudLab), HPC Facilities, and the real Internet. FABRIC aims to expand its outreach by enabling new science applications, using a diverse array of networks, integrating machine learning, and preparing the next generation of computer science researchers.

FABRIC received its initial funding in 2019 and is projected to go into operational phase in September of 2023. FABRIC reached out to Trusted CI to request a review of its software development process, the trust boundaries in the FABRIC system, and the FABRIC security and monitoring architecture.

The five-month engagement began in February and concluded in June. In that time the teams worked together to review FABRIC’s project documentation, which included a deep analysis of the security architecture. We then completed an asset inventory and risk assessment, covering over 70 project assets, identifying attack surfaces and potential threats, and documenting current and planned security controls. Lastly, we documented the engagement findings in an internal report shared with FABRIC project leadership.

FABRIC also assisted with the Trusted CI 2021 Annual Challenge (Software Assurance) by participating in an interview with members of the software assurance team. The results of that interview will provide input to Trusted CI's forthcoming guide on software assurance for NSF projects.

Tuesday, July 6, 2021

Join Trusted CI at PEARC21, July 19th - 22nd

PEARC21 will be held virtually on July 19th - 22nd, 2021 (PEARC website).

Trusted CI will be hosting two events, our annual workshop and our Security Log Analysis tutorial.

Both events are scheduled at the same time; please keep that in mind when planning your agenda.

The details for each event are listed below. 

Workshop: The Fifth Trusted CI Workshop on Trustworthy Scientific Cyberinfrastructure provides an opportunity for sharing experiences, recommendations, and solutions for addressing cybersecurity challenges in research computing.  

Monday July 19th @ 8am - 11am Pacific.

  • 8:00 am - Welcome and opening remarks
  • 8:10 am - The Trusted CI Framework: A Minimum Standard for Cybersecurity Programs
    • Presenters: Scott Russell, Ranson Ricks, Craig Jackson, and Emily Adams; Trusted CI / Indiana University’s Center for Applied Cybersecurity Research
  • 8:40 am - Google Drive: The Unknown Unknowns
    • Presenter: Mark Krenz; Trusted CI / Indiana University’s Center for Applied Cybersecurity Research
  • 9:10 am - Experiences Integrating and Operating Custos Security Services
    • Presenters: Isuru Ranawaka, Dimuthu Wannipurage, Samitha Liyanage, Yu Ma, Suresh Marru, and Marlon Pierce; Indiana University
    • Dannon Baker, Alexandru Mahmoud, Juleen Graham, and Enis Afgan; Johns Hopkins University
    • Terry Fleury, and Jim Basney; University of Illinois Urbana Champaign
  • 9:40 am - 10 minute Break
  • 9:50 am - Drawing parallels and synergies between NSF and NIH cybersecurity projects
    • Presenters: Enis Afgan, Alexandru Mahmoud, Dannon Baker, and Michael Schatz; Johns Hopkins University
    • Jeremy Goecks; Oregon Health and Sciences University
  • 10:20 am - How InCommon is helping its members to meet NIH requirements for federated credentials
    • Presenters: Tom Barton; Internet2
  • 10:50 am - Wrap up and final thoughts (10 minutes)

More detailed information about the presentations is available on our website.


Tutorial: Security Log Analysis: Real world hands-on methods and techniques to detect attacks.  

Monday July 19th @ 8am - 11am Pacific.

A half-day training on tying together various log and data sources to provide a more rounded, coherent picture of a potential security event. It also presents log analysis as a life cycle (collection, event management, analysis, response) that becomes more efficient over time. Interactive demonstrations will cover both automated and manual analysis using multiple log sources, with examples from real security incidents.
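
As a flavor of the automated side of such analysis (an editorial illustration, not tutorial material), the sketch below scans an OpenSSH auth log and flags source IPs with repeated failed logins; the log path, message format, and threshold are assumptions.

```python
"""Sketch: flag source IPs with many failed SSH logins in an OpenSSH auth log."""
import re
from collections import Counter

FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)

def suspicious_sources(log_path, threshold=10):
    """Return {source_ip: failure_count} for IPs at or above the threshold."""
    counts = Counter()
    with open(log_path, errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    hits = suspicious_sources("/var/log/auth.log")
    for ip, n in sorted(hits.items(), key=lambda kv: -kv[1]):
        print(f"{ip}\t{n} failed logins")
```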


Thursday, June 24, 2021

The 2021 NSF Cybersecurity Summit Call For Participation - NOW OPEN - Deadline is Friday, July 2nd

It is our pleasure to announce that the 2021 NSF Cybersecurity Summit is scheduled to take place the week of October 11th with the plenary sessions occurring on Tuesday, October 12th and Wednesday October 13th. Due to the impact of the global pandemic, we will hold this year’s summit on-line instead of in-person as originally planned.

The final program is still evolving, but we will maintain the mission to provide a format designed to increase the NSF community’s understanding of cybersecurity strategies that strengthen trustworthy science: what data, processes, and systems are crucial to the scientific mission, what risks they face, and how to protect them.


Call for Participation (CFP)

Program content for the summit is driven by our community. We invite proposals for presentations, breakout and training sessions, as well as nominations for student scholarships. The deadline for CFP submissions is July 5th. To learn more about the CFP, please visit: https://www.trustedci.org/2021-summit-cfp 

 More information can be found at https://www.trustedci.org/2021-cybersecurity-summit

Monday, June 14, 2021

Trusted CI webinar: Investigating Secure Development In Practice: A Human-Centered Perspective Mon June 28th @1pm Eastern

University of Maryland's Michelle Mazurek is presenting the talk, Investigating Secure Development In Practice: A Human-Centered Perspective, on Monday June 28th at 1pm (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

Secure development is not just a technical problem: it’s a human and organizational problem as well. To understand the causes of insecurity, and find effective solutions, we must understand how and why security problems happen, and what barriers stand in the way of fixing them. How can we make it easier for developers to write secure code, even without special training? In this talk, I will report on findings from several recent studies addressing these questions. These include examining the effects of information resources and API design on developers' likelihood of writing secure code; using data from a secure programming contest to explore the kinds of security mistakes developers make; and exploring the benefits and barriers associated with adoption of a secure programming language.

Speaker Bio

Michelle Mazurek is an Associate Professor in the Computer Science Department and the Institute for Advanced Computer Studies at the University of Maryland, College Park, where she also directs the Maryland Cybersecurity Center. Her research aims to understand and improve the human elements of security- and privacy-related decision making. Recent projects include examining how and why developers make security and privacy mistakes; investigating the vulnerability-discovery process; evaluating the use of threat-modeling in large-scale organizations; and analyzing how users learn about and decide whether to adopt security advice. Her work has been recognized with an NSA Best Scientific Cybersecurity Paper award and three USENIX Security Distinguished Paper awards. She was Program Chair for the Symposium on Usable Privacy and Security (SOUPS) for 2019 and 2020 and is Program Chair for the Privacy Enhancing Technologies Symposium (PETS) for 2022 and 2023. 

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Thursday, June 10, 2021

Thank you and congratulations to Dana Brunson!

Dana Brunson joined Trusted CI in 2019 as a co-PI and was instrumental in developing and leading Trusted CI’s very successful Fellows program. Her proposal to create a Center of Excellence in workforce development was recently awarded. As a result, she is stepping away from Trusted CI to focus on her role as PI for the new Center of Excellence.

We wish Dana the best of luck with her new Center of Excellence and look forward to identifying opportunities to continue to collaborate.

Von

Trusted CI PI and Director


Wednesday, June 9, 2021

Trusted CI Materials as the Foundation for a University Course at the University of Wisconsin-Madison

Software security is important to the NSF community because it is critical to their support of science. For example, Trusted CI’s Community Benchmarking Survey consistently finds that the overwhelming majority of NSF projects and Large Facilities develop software and also adopt both open source and commercial software, whose quality they assess as part of their cybersecurity risk management. Trusted CI recognizes the importance of this issue and has focused the Trusted CI 2021 Annual Challenge on software assurance.

Trusted CI has been developing training materials to teach secure software design and implementation. These materials have been used at conferences, workshops, and government agencies to train CI professionals in secure coding, design, and testing. More recently, they were used at the University of Wisconsin-Madison to develop a new course on software security. The new course, CS542, Introduction to Software Security (http://www.cs.wisc.edu/~bart/cs542.html), is part of the computer science curriculum at the University of Wisconsin-Madison. The teaching materials support a blended (flipped) model: lectures are based on video modules and corresponding text chapters, and classroom time is used for collaborative exercises and discussions. The videos and text are supplemented by hands-on exercises for each module, delivered in virtual machines. The online nature of these materials proved to be of even greater value during the remote learning situation caused by the COVID-19 pandemic.

This new course covers security throughout the various stages of the software development life cycle (SDLC), including secure design, secure coding, and testing and evaluation for security.

These teaching materials are freely available at
https://www.cs.wisc.edu/mist/SoftwareSecurityCourse.

Some of the comments from the students at the end of the last class of the Spring 2021 course, taken from the chat window, include:

“Thank you for such an enlightening course! I had a lot of fun!”
“Thank you for a very insightful and interesting course.”
“Thanks for the semester! This class was very interesting and manageable I appreciate it”
“Is this only taught in the Spring? I'd like to recommend the class to some of my CS friends.”
300 students have benefitted from this course at the University of Wisconsin-Madison.

Tuesday, June 1, 2021

Don't Miss Trusted CI at EDUCAUSE CPP Conference

Members of Trusted CI and partner projects will be presenting at The 2021 EDUCAUSE Cybersecurity and Privacy Professionals Conference (formerly known as the Security Professionals Conference), to be held Tuesday June 8th - Thursday June 10th. The conference "will focus on restoring, evolving, and transforming cybersecurity and privacy in higher education."

Below is a list of presentations that include Trusted CI team members and partners:
 

Regulated Research Community Workshops

Tuesday, June 08 | 12:15p.m. - 12:35p.m. ET

  • Anurag Shankar - Senior Security Analyst, Indiana University
  • Erik Deumens - Director UF Research Computing, University of Florida
  • Carolyn Ellis - Program Manager, Purdue University
  • Jay Gallman - Security IT Analyst, Duke University
Supporting institutional regulated research comes with a wide range of challenges impacting units that haven't commonly worked together. Until recently, most institutions have looked internally to develop their regulated research programs. Since November 2020, 30 institutions have been gathering for six workshops to share their experiences and challenges in establishing regulated research programs. This session will share the process involved in making these workshops successful and the initial findings of this very specialized group.


Big Security on Small Budgets: Stories from Building a Fractional CISO Program

Thursday, June 10 | 2:00p.m. - 2:45p.m. ET

  • Susan Sons - Chief Security Analyst, Indiana University Bloomington

No one in cybersecurity has an infinite budget. However, those booting up cybersecurity programs in organizations whose leadership haven't fully bought in to the value of cybersecurity operations, bolting security on to an organization that has been operating without it for too long, or leading cybersecurity for a small or medium-sized institution often have even less to work with: smaller budgets, less training, fewer personnel, less of every resource. Meanwhile, the mandate can seem infinite. In this talk, Susan Sons, Deputy Director of ResearchSOC and architect of the fractional CISO programs at ResearchSOC, OmniSOC, and IU's Center for Applied Cybersecurity Research, discusses approaches to right-sizing cybersecurity programs and getting the most out of limited resources for small and medium-sized organizations. This talk covers strategies for prioritizing security needs, selecting controls, and using out-of-the-box approaches to reduce costs while ensuring the right things get done. Bring your note pad: we'll refer to a number of outside references and resources you can use as you continue your journey.


SecureMyResearch at Indiana University

Thursday, June 10 | 1:00p.m. - 1:20p.m. ET

  • William Drake - Senior Security Analyst, Indiana University
  • Anurag Shankar - Senior Security Analyst, Indiana University

Cybersecurity in academia has achieved significant success in securing the enterprise and the campus community at large through effective use of technology, governance, and education. It has not been as successful in securing the research mission, however, owing to the diversity of the research enterprise, and of the time and other constraints under which researchers must operate. In 2019, Indiana University began developing a new approach to research cybersecurity based on its long experience in securing biomedical research. This resulted in the launch of SecureMyResearch, a first-of-its-kind service to provide cybersecurity and compliance assistance to researchers and stakeholders who support research. It was created not only to be a commonly available resource on campus but also to act as a crucible to test new ideas that depart from or are beyond enterprise cybersecurity practice. Those include baking security into workflows, use case analysis, risk acceptance, researcher-focused messaging, etc. A year later, we have much to share that is encouraging, including use cases, results, metrics, challenges, and stories that are likely to be of interest to those who are beginning to tackle research cybersecurity. We also will be sharing information and advice on a method of communicating the need for cybersecurity to researchers that proved to be highly successful, and other fresh ideas to take home and leverage on your own campus.


Lessons from a Real-World Ransomware Attack on Research

Thursday, June 10 | 12:25p.m. - 12:45p.m. ET

  • Andrew Adams - Security Manager / CISO, Carnegie Mellon University
  • Von Welch - Director, CACR, Indiana University
  • Tom Siu - CISO, Michigan State University

In this talk, co-presented by the Michigan State University (MSU) Information Security Office and Trusted CI, the NSF Cybersecurity Center of Excellence, we will describe the impact and lessons learned from a real-world ransomware attack on MSU researchers in 2020, and what researchers and information security professionals can do to prevent and mitigate such attacks. Ransomware attackers have expanded their pool of potential victims beyond those with economically valuable data. In the context of higher ed, this insidious development means researchers, who used to be uninteresting to cybercriminals, are now targets. During the first part of the presentation, we will explain the MSU ransomware incident and how it hurt research. During the second part, we will elaborate on mitigation strategies and techniques that could protect current and future academic researchers. Finally, we will conclude with a question-and-answer session in which audience members are encouraged to ask Trusted CI staff about how to engage researchers on information security. Trusted CI has unique expertise in building trust with the research community and in framing the cybersecurity information for them. Trusted CI regularly engages with researchers, rarely security professionals, and has a track record of success in communicating with researchers about cybersecurity risks.


Until We Can't Get It Wrong: Using Security Exercises to Improve Incident Response

Wednesday, June 09 | 2:00p.m. - 2:20p.m. ET

  • Josh Drake - Senior Security Analyst, Indiana University Bloomington
  • Zalak Shah - Senior Security Analyst, Indiana University

Incident response can be challenging at the best of times, and when one is responding to a major incident, it is rarely the best of times. A rigorous program of security exercises is the best way to ensure that any organization is prepared to meet the challenges that may come. The best cybersecurity teams have learned not just to practice until they can get it right, but to practice until they can't get it wrong. They use a regular program of security exercises coupled with postmortem analysis and follow-up to ensure that the whole team, and all of the technologists and organizational support they work with, get better at handling incidents over time. This session will teach you how to build a security exercise program from the ground up and use it to ensure that your incident response capabilities can be relied on no matter what happens.


Google Drive, the Unknown Unknowns

Wednesday, June 09 | 12:00p.m. - 12:45p.m. ET

  • Ishan Abhinit - Senior Security Analyst, Indiana University Bloomington
  • Mark Krenz - Chief Security Analyst, Indiana University

Every day countless thousands of students and staff around the world use cloud storage systems such as Google Drive to store their data. This data may be classified as public, internal, or even confidential or restricted. Although Google Drive provides users with ways to control access to their data, my experiences have shown that users often aren't aware that they are exposing their data beyond their expected trust boundary. In this talk I will briefly introduce the audience to Google Drive, sharing some of my own experiences dealing with security concerns. Then I will provide an overview of the issues that academic and research institutions face when using it. I'll highlight the security threats to your data and how to deal with various situations, such as when someone leaves a project, when data is accidentally deleted, or when data is shared and you don't know it. In the second half of the presentation I'll provide the audience with some solutions to these security issues that are useful to a variety of institutions large and small as well as individual projects and people. Some of these solutions were developed by me and my team to solve our own issues, and now I'll be sharing these solutions and tools with the community at large.
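
As a hedged sketch of how an institution might hunt for this kind of exposure (editorial illustration, not material from the talk), the following uses the official google-api-python-client to list Drive files shared via "anyone with the link"; obtaining `creds` through OAuth or a service account is assumed and omitted.

```python
"""Sketch: list Google Drive files shared by link, one common way data leaks
past an intended trust boundary. Requires google-api-python-client and valid
credentials (`creds`), which are assumed here."""
from googleapiclient.discovery import build

def link_shared_files(creds):
    service = build("drive", "v3", credentials=creds)
    files, page_token = [], None
    while True:
        resp = service.files().list(
            q="visibility = 'anyoneWithLink'",
            fields="nextPageToken, files(id, name, owners, webViewLink)",
            pageToken=page_token,
        ).execute()
        files.extend(resp.get("files", []))
        page_token = resp.get("nextPageToken")
        if not page_token:
            return files

# Example use, once `creds` has been obtained:
# for f in link_shared_files(creds):
#     print(f["name"], f["webViewLink"])
```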


The full agenda, including the on-demand program, is available online.

Tuesday, May 25, 2021

Trusted CI webinars now available as a podcast

Want to catch up on the Trusted CI webinar series while you're on the go? Trusted CI is excited to announce the launch of a podcast version of our webinar. You can find us by searching for "Trusted CI podcast" on Apple, Google, Overcast, Luminary, Pocketcasts, and many other podcatchers.

Contact webinars@trustedci.org if you have any questions.

Tuesday, May 11, 2021

Trusted CI webinar: Identifying Vulnerable GitHub Repositories and Users, Mon May 24th @11am Eastern

Indiana University's Sagar Samtani is presenting the talk, Identifying Vulnerable GitHub Repositories in Scientific Cyberinfrastructure: An Artificial Intelligence Approach, on Monday May 24th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

The scientific cyberinfrastructure community heavily relies on public internet-based systems (e.g., GitHub) to share resources and collaborate. GitHub is one of the most powerful and popular systems for open source collaboration that allows users to share and work on projects in a public space for accelerated development and deployment. Monitoring GitHub for exposed vulnerabilities can save financial cost and prevent misuse and attacks of cyberinfrastructure. Vulnerability scanners that can interface with GitHub directly can be leveraged to conduct such monitoring. This research aims to proactively identify vulnerable communities within scientific cyberinfrastructure. We use social network analysis to construct graphs representing the relationships amongst users and repositories. We leverage prevailing unsupervised graph embedding algorithms to generate graph embeddings that capture the network attributes and nodal features of our repository and user graphs. This enables the clustering of public cyberinfrastructure repositories and users that have similar network attributes and vulnerabilities. Results of this research find that major scientific cyberinfrastructures have vulnerabilities pertaining to secret leakage and insecure coding practices for high-impact genomics research. These results can help organizations address their vulnerable repositories and users in a targeted manner.
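
The sketch below is a very rough, hypothetical rendering of that kind of pipeline, not the speaker's code: it builds a user-repository graph with networkx, embeds the nodes, and clusters them with scikit-learn. The edge list is a placeholder, and other graph-embedding methods (e.g., node2vec) could be substituted for the embedding step.

```python
"""Sketch: cluster GitHub users and repositories by their network context.
Requires networkx and scikit-learn; the edge list is a hypothetical placeholder."""
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

# Hypothetical (user, repository) contribution edges harvested from the GitHub API.
edges = [
    ("alice", "org/genomics-pipeline"),
    ("alice", "org/analysis-scripts"),
    ("bob", "org/genomics-pipeline"),
    ("carol", "org/telescope-control"),
]

graph = nx.Graph()
graph.add_edges_from(edges)

nodes = list(graph.nodes())
adjacency = nx.to_numpy_array(graph, nodelist=nodes)

# Embed nodes from the graph structure, then cluster the embeddings so that
# users/repositories with similar network context group together.
embedding = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(adjacency)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embedding)

for node, label in zip(nodes, labels):
    print(f"cluster {label}: {node}")
```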

Speaker Bio: Dr. Sagar Samtani is an Assistant Professor and Grant Thornton Scholar in the Department of Operations and Decision Technologies at the Kelley School of Business at Indiana University (2020 – Present). He is also a Fellow within the Center for Applied Cybersecurity Research (CACR) at IU. Samtani graduated with his Ph.D. in May 2018 from the Artificial Intelligence Lab in the Management Information Systems (MIS) department at the University of Arizona (UArizona). He also earned his MS in MIS and BSBA in 2014 and 2013, respectively, from UArizona. From 2014 – 2017, Samtani served as a National Science Foundation (NSF) Scholarship-for-Service (SFS) Fellow.

Samtani’s research centers around Explainable Artificial Intelligence (XAI) for Cybersecurity and cyber threat intelligence (CTI). Selected recent topics include deep learning, network science, and text mining approaches for smart vulnerability assessment, scientific cyberinfrastructure security, and Dark Web analytics. Samtani has published over two dozen journal and conference papers on these topics in leading venues such as MIS Quarterly, JMIS, ACM TOPS, IEEE IS, Computers and Security, IEEE Security and Privacy, and others. His research has received nearly $1.8M (in PI and Co-PI roles) from the NSF CICI, CRII, and SaTC-EDU programs. 

He also serves as a Program Committee member or Program Chair of leading AI for cybersecurity and CTI conferences and workshops, including IEEE S&P Deep Learning Workshop, USENIX ScAINet, ACM CCS AISec, IEEE ISI, IEEE ICDM, and others. He has also served as a Guest Editor on topics pertaining to AI for Cybersecurity at IEEE TDSC and other leading journals. Samtani has won several awards for his research and teaching efforts, including the ACM SIGMIS Doctoral Dissertation award in 2019. Samtani has received media attention from outlets such as Miami Herald, Fox, Science Magazine, AAAS, and the Penny Hoarder. He is a member of AIS, ACM, IEEE, INFORMS, and INNS.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."


Wednesday, April 28, 2021

Transition to practice success story: Pablo Moriano - technology readiness & understanding critical security issues in large-scale networked systems

Pablo Moriano is a research scientist in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL). He received Ph.D. and M.S. degrees in Informatics from Indiana University (IU). Previously, he received M.S. and B.S. degrees in Electrical Engineering from Pontificia Universidad Javeriana in Colombia.

Moriano’s research lies at the intersection of data science, network science, and cybersecurity. In particular, he develops data-driven and analytical methods to discover and understand critical security issues in large-scale networked systems. He relies on this approach to design and develop innovative solutions to address these. Applications of his research range across multiple disciplines, including the detection of exceptional events in social media, internet route hijacking, and insider threat behavior in version control systems. His research has been published in Computer Networks, Scientific Reports, Computers & Security, Europhysics Letters, and the Journal of Statistical Mechanics: Theory and Experiment, as well as the ACM CCS International Workshop on Managing Insider Security Threats.

In the past, he interned at Cisco with the Advanced Security Group. He is a member of IEEE, ACM, and SIAM and has received funding from Cisco Research.

Trusted CI sat down with Moriano to discuss his transition to practice journey, what he has learned, and his experience with the Technology Readiness Level Assessment tool.

Trusted CI: Tell us about your background and your broader research interests.

My background is in electrical engineering.

I was born and grew up in Colombia. I attended Pontificia Universidad Javeriana to pursue a degree in electrical engineering. I remember greatly enjoying my math-related and physics classes, which are the foundations of electrical engineering. I did pretty well in those topics.

In my engineering classes, at the end of the semester, we had the same kinds of final projects as in the US, called capstones. The idea of these projects was to integrate the learnings from different subjects to solve a real engineering challenge. In these types of activities, you usually measure the impact a technology has on solving a real problem.

In general, I enjoyed going beyond what I learned in classes. I participated in math-related contests, which allowed me to sharpen my analytical skills. Toward the end of my undergraduate studies, I had a professor who was always encouraging me to try research and go to grad school. I worked under his supervision to complete my undergraduate thesis, in which I developed real-time control algorithms for a non-linear laboratory plant that used magnetic levitation. That was a starting point for getting involved with research and pursuing opportunities in that direction later during grad school.

Currently at Oak Ridge National Laboratory (ORNL), I am a researcher in the computer science and mathematics division. I develop data-driven and analytical models for understanding and identifying anomalies in large-scale networked systems such as cyber-physical systems, communication systems, and socio-technological systems like social media.

This is broad, but common to these systems, also known as complex systems, is that they are made of a large number of elements and that these elements interact in non-linear ways, often producing collective behavior. This collective behavior cannot be explained by analyzing the aggregated behavior of the individual parts. For example, on the internet, a large number of independent and autonomous networks, also known as Autonomous Systems (ASes), such as internet service providers, corporations, and universities, are constantly interacting with each other to share reachability information about where to find destination IP addresses. To do so, ASes communicate using a protocol called the Border Gateway Protocol (BGP). The details of the protocol and the interactions between ASes are complex and subject to engineering and economic constraints. However, their aggregated behavior allows users around the globe to navigate the web (and use many other services) by allowing them to find the resources they need every time they search online.

In these networked systems such as the internet, their emergent behavior may sometimes be anomalous or substantially different. This idea in the cybersecurity space is really important because it may be an indication of a problem or in the worst case scenario an indication of an upcoming attack. A similar approach as described in the case of the internet may be used to study other real-world networked systems.

Trusted CI: Tell us about your experience using the Technology Readiness Level (TRL) assessment.

When I was finishing my studies at IU, I had the chance to participate in a Trusted CI workshop in Chicago. At that time Florence [Hudson] was leading that effort.

In addition to getting to interact with other researchers, the intention of the workshop was to provide an opportunity to share the latest research efforts in the cybersecurity space. The emphasis was also to showcase previous academic research that was subsequently translated to practice, delivering a solution to a practical need. That event was very fruitful and allowed me to interact with other peers, have a fresh perspective into transition to practice, and grow my network.

Later, I was invited to participate in the [Trusted CI] cohort. The intention of the cohort is to bring together researchers interested in solving real-world problems in cybersecurity and help them do so. During the process, you get mentorship through the process of transition to practice. In addition, the experience allows you to foster interactions with external stakeholders to receive feedback and support during the process.

The cohort, under the leadership of Ryan [Kiser], has been developing different useful tools like the TRL assessment and canvas proposition.

The TRL assessment idea is not new. In fact, it came from NASA in the 70s. However, it has not been widely used as a resource for transition to practice by cybersecurity researchers. In particular, the TRL assessment provides a tool—similar to a decision tree—to help classify the level of maturity of a technology. Originally, it was conceived using a nine-level scale (from one to nine) with nine being the most mature technology. The TRL assessment is super helpful, for example, to identify the next steps in the transition to practice journey. The fundamental assumption of the tool is that by recognizing where you are at the moment, you will have a clearer picture on how to proceed next.

For instance, when searching for funding opportunities, having a clear picture of where you are (with respect to the maturation of the technology) will allow you to better target specific sources of funding, enabling next steps in the transition to practice journey. In my experience at ORNL, it is an important decision element when deciding which funding steps to pursue in the overall R&D pipeline across several federal agencies.

Trusted CI: Talk about your experience with the funding you were pursuing.

Here at ORNL, there are different opportunities for funding, including specific ones for transitioning to practice your research. One of the fundamental advantages of working in a national laboratory is that it is an environment that bridges academia and industry. In that sense, the work we do is mission-driven and has real-world impact—often with some component of transition to practice as a measure of impact. That means that both research and development are tied together and highly appreciated.

I already applied to an internal funding opportunity for transition to practice. The main purpose of the solicitation was to look for technologies at a minimum of TRL 5 (requiring a working high-fidelity prototype which is beyond basic research) to support the necessary steps for technology maturation. The final goal was to help convert the prototype into an actual usable system that may open the door to commercialization opportunities.

By the time I applied, my technology was not at TRL 5 and of course that was the basis of the feedback that I received. I, however, enjoyed and learned during the process and realized that there are other solicitations that may be more adequate to help me to increase the TRL of my technology (from proof-of-concept to prototype). Throughout the process, I had the chance to talk with practitioners out there and learn about the practical challenges they faced with current deployed systems. I also learned about other federal agencies such as DOE, DHS, and DARPA (and people there) looking for proposals with the focus on transition to practice. That was encouraging.

Trusted CI: Tell us more about your technology.

It's a technology that aims to detect and inform network operators in near real-time about routing incidents (of different severity) by leveraging update messages transmitted in BGP. The fundamental characteristic of the intended system is that it is somehow automatic (leveraging AI/ML methods), detects incidents as soon as possible (allowing quick turnaround), and is able to detect subtle attacks in which only a small fraction of IP prefixes are affected (usually the ones performed through man-in-the-middle).
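
As a toy rendering of the core idea (not Moriano's system), the sketch below flags a prefix whose origin AS differs from the origins seen so far, one simple signal of a possible hijack; real detectors consume live BGP feeds (e.g., via BGPStream) and use far richer features. The updates shown are fabricated examples using documentation-range addresses and private-use AS numbers.

```python
"""Sketch: flag BGP announcements where a prefix's origin AS changes."""
from collections import defaultdict

def detect_origin_changes(updates):
    """updates: iterable of (prefix, origin_asn) announcements in arrival order."""
    seen_origins = defaultdict(set)
    alerts = []
    for prefix, origin in updates:
        # A previously seen prefix announced by a new origin is suspicious.
        if seen_origins[prefix] and origin not in seen_origins[prefix]:
            alerts.append((prefix, sorted(seen_origins[prefix]), origin))
        seen_origins[prefix].add(origin)
    return alerts

if __name__ == "__main__":
    example_updates = [
        ("192.0.2.0/24", 64500),
        ("192.0.2.0/24", 64500),
        ("192.0.2.0/24", 64511),  # unexpected new origin for the same prefix
    ]
    for prefix, known, new in detect_origin_changes(example_updates):
        print(f"ALERT: {prefix} announced by AS{new}, previously AS{known}")
```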

Trusted CI: Describe where you’d say you are in your transition to practice.

Through the Trusted CI cohort, I had the opportunity to use that TRL tool to evaluate the current state of my technology. By using the tool and the decision criteria behind it, I am pretty confident that the technology at this stage is on what is called Level 3 or proof-of-concept.

The next step will be to mature the technology to build a high-fidelity working prototype that can be used to detect routing incidents using real-time data.

This particular BGP project came from my dissertation research. I recently published a paper about it. However, beyond this project, I see that tools like the TRL assessment are essential to guide my next steps. For that reason, this experience easily translates to other ongoing research projects that go through the whole R&D pipeline.

Trusted CI: Where do you see your research heading down the road?

I'm pursuing the idea of maturing the BGP technology. The problem of BGP incident detection has been in the community for many years. BGP anomaly detection is a difficult space with little room for improvement. For that reason, you need to be very precise about the added value the technology is offering. I also started new projects in the cybersecurity space where I see a clear path between research and development. Currently, these are in earlier stages but may benefit from early consideration through the use of tools like the TRL assessment and the Trusted CI cohort experience.


Monday, April 12, 2021

Trusted CI webinar: Arizona State's Science DMZ, Mon April 26th @11am Eastern

Members of Arizona State University are presenting on their Science DMZ on Monday April 26th at 11am (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

Drawing upon its mission to enable access to discovery and scholarship, Arizona State University is deploying an advanced research network employing the Science DMZ architecture. While advancing knowledge of managing 21st-century cyberinfrastructure in a large public research university, this project also advances how network cyberinfrastructure supports research and education in science, engineering, and health.

Replacing existing edge network equipment and installing an optimized, tuned Data Transfer Node provides a friction-free wide area network path and streamlined research data movement. A strict router access control list and intrusion detection system provide security within the Science DMZ, and end-to-end network performance measurement via perfSONAR guards against issues such as packet loss.

Recognizing that the operation of the Science DMZ must not compromise the university’s network security profile, while at the same time avoiding the performance penalty associated with perimeter firewall devices, data access and transfer services will be protected by access control lists on the Science DMZ border router as well as host-level security measures. Additionally, the system architecture employs the anti-IP spoofing tool Spoofer, the Intrusion Detection System (IDS) Zeek, data-sharing honeypot tool STINGAR, traditional honeypot/darknet/tarpit tools, as well as other open-source software.

Finally, Science data flows are supported by a process incorporating user engagement, iterative technical improvements, training, documentation, and follow-up.

Speaker Bios:

Douglas Jennewein is Senior Director for Research Computing in the Research Technology Office at Arizona State University. He has supported computational and data-enabled science since 2003 when he built his first supercomputer from a collection of surplus-bound PCs. He currently architects, funds, and deploys research cyberinfrastructure including advanced networks, supercomputers, and big data archives. He has also served on the NSF XSEDE Campus Champions Leadership Team since 2016 and has chaired that group since 2020. Jennewein is a certified Software Carpentry instructor and has successfully directed cyberinfrastructure projects funded by the National Science Foundation, the National Institutes of Health, and the US Department of Agriculture totaling over $4M.

Chris Kurtz is the Senior Systems Architect for the Research Technology Office in the Office of Knowledge Enterprise at Arizona State University. Previously Chris was the Director of Public Cloud Engineering as well as the Splunk System Architect (and Evangelist) at ASU. He has been appointed as Splunk Trust Community MVP since its inception. Chris is a regular speaker on Splunk and Higher Education, including multiple presentations at Educause, Educause Security Professionals,  and Splunk’s yearly “.conf" Conference. Prior to architecting Splunk, he was the Systems Manager of the Mars Space Flight Facility at ASU, a NASA/JPL funded research group, where he supported numerous Mars Missions including TES, THEMIS, and the Spirit and Opportunity Rovers. Chris lives in Mesa, Arizona along with his wife, rescue dogs, and cat.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."


Wednesday, April 7, 2021

Michigan State University Engages with Trusted CI to Raise Awareness of Cybersecurity Threats in the Research Community

Cybersecurity exploits are on the rise across university communities, costing valuable resources and causing losses of productivity, research data, and personally identifiable information. A DXC report estimated that an average ransomware attack can take critical systems down for 16 days, and the overall worldwide cost of ransomware in 2020 was predicted to reach $170 billion. Additional reputational impacts of cybersecurity attacks, although hard to measure, regularly weigh in the minds of scientists and researchers.

An event of this nature occurred at Michigan State University (MSU), which experienced a ransomware attack in May 2020. While many organizations attempt to keep the public from finding out about cyberattacks for fear of loss of reputation or follow-up attacks, MSU has decided to make elements of its attack public in the interests of transparency, to encourage disclosure of similar types of attacks, and perhaps more importantly, to educate the open-science community about the threat of ransomware and other destructive types of cyberattacks. The overarching goal is to raise awareness about rising cybersecurity threats to higher education in hopes of driving safe cyberinfrastructure practices across university communities. 

To achieve this, the CIO’s office at MSU has engaged with Trusted CI, the NSF Cybersecurity Center of Excellence, in a collaborative review and analysis of the ransomware attack suffered by MSU last year. The culmination of the engagement will be a report focusing on lessons learned during the analysis, which will then be disseminated to the research community. We expect the published report to be a clear guide that helps researchers and the security professionals who work with them identify, manage, and mitigate the risk of ransomware and other types of attacks.

Thursday, April 1, 2021

Trusted CI Engagement Application Deadline Extended

 

Trusted CI Engagement Application Deadline Extended until April 9, 2021

Apply for a one-on-one engagement with Trusted CI for the second half of 2021

Trusted CI is accepting applications for one-on-one engagements to be executed July–December 2021. Applications are due April 9, 2021.

To learn more about the process and criteria, and to complete the application form, visit our site: 

http://trustedci.org/application


During Trusted CI’s first 5 years, we’ve conducted more than 24 one-on-one engagements with NSF-funded projects, Large Facilities, and major science service providers representing the full range of NSF science missions. We support a variety of engagement types, including assistance in developing, improving, or evaluating an information security program; software assurance-focused efforts; identity management; technology or architectural evaluation; training for staff; and more.

As the NSF Cybersecurity Center of Excellence, Trusted CI’s mission is to provide the NSF community a coherent understanding of cybersecurity’s role in producing trustworthy science and the information and know-how required to achieve and maintain effective cybersecurity programs.

Tuesday, March 30, 2021

Announcing the 2021 Trusted CI Annual Challenge on Software Assurance


The Trusted CI “Annual Challenge” is a year-long project focusing on a particular topic of importance to cybersecurity in scientific computing environments.  In its first year, the Trusted CI Annual Challenge focused on issues in trustworthy data.  Now, in its second year, the Annual Challenge is focusing on software assurance in scientific computing.

The scientific computing community develops large amounts of software.  At the largest scale, projects can have millions of lines of code.  And indeed, the software used in scientific computing, and the vulnerabilities present in it, can be similar to those found in other domains.  At the same time, the developers of that software usually come from science-focused domains rather than traditional software engineering backgrounds.  And, in comparison to other domains, there is often less emphasis on software assurance.

Trusted CI has a long history of addressing the software assurance of scientific software, both through engagements with individual scientific software teams and through courses and tutorials frequently taught at conferences and workshops by Elisa Heymann and Barton Miller of the University of Wisconsin-Madison.  This year’s Annual Challenge seeks to complement those existing efforts in a focused way and with a larger team.  Specifically, this year’s Annual Challenge seeks to broadly improve the robustness of software used in scientific computing with respect to security.  It will do this by spending the March–June 2021 timeframe engaging with developers of scientific software to understand the range of software development practices being used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities.  In the second half of 2021, we will leverage our insights to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community.

We seek to maximize the impact of our efforts in 2021 by focusing on software that is widely used, is situated in vulnerable locations, and is developed mostly by individuals who do not have traditional software engineering backgrounds and training.

This year’s Annual Challenge is supported by a stellar team of Trusted CI staff, including Andrew Adams (Pittsburgh Supercomputing Center), Kay Avila (National Center for Supercomputing Applications), Ritvik Bhawnani (University of Wisconsin-Madison), Elisa Heymann (University of Wisconsin-Madison), Mark Krenz (Indiana University), Jason Lee (Berkeley Lab/NERSC), Barton Miller (University of Wisconsin-Madison), and Sean Peisert (Berkeley Lab; 2021 Annual Challenge Project Lead).

Monday, March 29, 2021

Trusted CI and the CI CoE Pilot Complete Identity Management Engagement with GAGE

 

The Geodetic Facility for the Advancement of Geoscience (GAGE) is operated by UNAVCO and funded by the NSF and NASA. The GAGE project’s mission is to support the larger NSF investigator community for geodesy, earth sciences research, education, and workforce development. During the second half of 2020, GAGE and the Trusted CI/CI CoE Identity Management working group collaborated on an engagement to design a working proof of concept for integrating federated identity into GAGE’s researcher data portal.

The Cyberinfrastructure Center of Excellence Pilot (CI CoE) is a Trusted CI partner specializing in providing expertise and active support to CI practitioners at the NSF major facilities in order to accelerate the data lifecycle and ensure the integrity and effectiveness of the CI upon which research and discovery depend. The Identity Management working group is a joint effort between the CI CoE and Trusted CI to provide subject matter expertise and advice to major facilities on trust and identity issues, best practices, and implementation. The working group’s target audience is NSF-funded major facilities, but participation in the working group is open to anyone in higher education and IAM.

The engagement began in July 2020 with a month-long series of interviews between working group members and GAGE department leadership. GAGE came into the engagement with a set of needs that had arisen from practice and with a request from NSF to collect information on how its research data was being used. The working group used the interviews to identify key systems and areas of impact in order to present GAGE with a design for integrating federated identity into its data portal using elements of InCommon’s Trusted Access Platform.

Over the next three months, the engagement team met with members of GAGE’s software development team, CILogon, and COmanage to finalize and implement the proof-of-concept design. The design used CILogon to consume federated identities from other InCommon member institutions and then used COmanage Registry to store GAGE-specific attributes for those identities, such as permissions for accessing various data groups, membership in research projects, and home institutions. Identities and attributes stored in COmanage could then be passed to the GAGE data portal using OIDC claim tokens, granting permissions appropriately at the time of access and allowing GAGE to track which identities were requesting which permissions for their data.
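As a rough illustration of how a relying portal can consume such claims, the sketch below validates a CILogon-issued ID token with PyJWT and maps an assumed group-membership claim to portal permissions. The JWKS endpoint, client ID, claim name, and group names are assumptions made for the example (a real deployment would take them from CILogon’s OIDC discovery document and the attributes configured in COmanage); it is not a description of GAGE’s actual implementation.

```python
# Minimal sketch (not the GAGE implementation): verify a CILogon-issued OIDC
# ID token and translate an assumed group-membership claim into portal
# permissions. Endpoint, client ID, claim, and group names are illustrative.

import jwt                      # PyJWT
from jwt import PyJWKClient

JWKS_URI = "https://cilogon.org/oauth2/certs"   # assumed; see CILogon's OIDC discovery document
CLIENT_ID = "cilogon:/client_id/example"        # hypothetical client identifier


def portal_permissions(id_token: str) -> set:
    """Verify the ID token signature and map group claims to data-portal permissions."""
    signing_key = PyJWKClient(JWKS_URI).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer="https://cilogon.org",
    )
    # "isMemberOf" is a claim commonly used to carry COmanage group memberships;
    # the group-to-permission mapping below is purely illustrative.
    groups = set(claims.get("isMemberOf", []))
    permissions = set()
    if "gage-data-readers" in groups:        # hypothetical COmanage group
        permissions.add("read:datasets")
    if "gage-project-members" in groups:     # hypothetical COmanage group
        permissions.add("write:project-data")
    return permissions
```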

The engagement culminated with a 15-page report delivered to GAGE in February 2021 containing detailed observations from interviews, alternate design configurations and tools for the proof of concept, lessons learned through the implementation process, and identification of future opportunities for investment and collaboration in IAM. Additionally, findings from this engagement will be included in an IAM cookbook that the working group plans to release in 2022. The Identity Management working group meets monthly on the second Monday at 2pm Eastern time. For more information about the Identity Management working group, please see the Trusted CI IAM page, the CI CoE working group directory, or join our mailing list to receive updates on working group meetings and products.

GAGE is funded by an NSF award managed by the Division of Earth Sciences (Award #1724794) and is operated by UNAVCO. The CI CoE Pilot is supported by a grant managed by the NSF Office of Advanced Cyberinfrastructure (Award #1842042) and is a collaboration between the University of Southern California, University of North Carolina at Chapel Hill, University of Notre Dame, University of Utah, and Indiana University. The working group would like to thank the following institutions and organizations for their collaboration and contributions to the engagement: Internet2 and InCommon, the CILogon team, the COmanage team, and the Globus team.




Announcing the 2021 NSF Community Cybersecurity Benchmarking Survey

It's time again for the NSF Community Cybersecurity Benchmarking Survey (“Community Survey”). We’ve appreciated all the great participation in the past and look forward to seeing your responses again this year. The Community Survey, started in 2016, is a key tool used by Trusted CI to gauge the cybersecurity posture of the NSF science community. The twin goals of the Community Survey are: 1) To collect and aggregate information about the state of cybersecurity for NSF projects and facilities; and 2) To produce a report analyzing the results, which will help the community level-set and provide Trusted CI and other stakeholders a richer understanding of the community’s cybersecurity posture. (To view the previous years’ reports, see 2019 Report, 2017 Report, and 2016 Report.) To ensure the survey report is of maximum utility, we want to encourage a high level of participation, particularly from NSF Major Facilities. Please note that we are aggregating responses and minimizing the amount of project-identifying information we’re collecting, and any data that is released will be anonymized.

Survey Link: https://docs.google.com/forms/d/e/1FAIpQLSeooNKQdKx-W5kRol0vTYq0oLogBaT5Sy0G2tG6LwGWSoLc3g/viewform?usp=sf_link

Each NSF project or facility should submit only a single response to this survey. Completing the survey may require input from the PI, the IT manager, and/or the person responsible for cybersecurity (if those separate areas of responsibility exist). While answering specific questions is optional, we strongly encourage you to take the time to respond as completely and accurately as possible. If you prefer not to answer a particular question, or are unable to, we ask that you make that explicit (e.g., by using the “other:” inputs) and provide your reason.

The response period closes June 30, 2021.

Thank you.


Wednesday, March 24, 2021

Trusted CI’s Large Facilities Security Team Update Spring 2021


Trusted CI continues to address the cybersecurity needs of NSF’s Large Facilities (LFs) by coordinating the Large Facilities Security Team (LFST). The LFST comprises representatives from each of the LFs who are responsible for cybersecurity at their sites. The primary goal of the LFST is to encourage sharing of best practices, policies, and technologies among the team members to further cybersecurity at each of the LFs.

Communication among LFST participants takes place via a dedicated email list and monthly calls. The call format is either a facilitated discussion of a pre-selected topic or a presentation followed by Q&A. Topics during the past year included COVID-19 pandemic-related cybersecurity issues and response, a ResearchSOC overview, cybersecurity policy development, risk assessment, asset categorization, and supply chain vulnerability. The Trusted CI facilitators actively encourage input from all LFST members during these monthly calls, often producing informative insights on similarities and differences among site priorities and practices.

In service to the broader NSF cybersecurity community, input from the LFST was valuable to the development of Trusted CI’s recently released Framework Implementation Guide for Research Cyberinfrastructure Operators. The team is also reviewing NSF’s proposed revision to the Major Facilities Guide, which is currently open for comment.

We look forward to another year of learning and active cybersecurity collaboration among NSF’s Large Facilities!

For more information, or to join the LFST, email benninger@psc.edu or info@trustedci.org.


Tuesday, March 23, 2021

Trusted CI Begins Engagement with PATh

The Partnership to Advance Throughput Computing (PATh) is a project funded by NSF’s OAC Campus Cyberinfrastructure (CC*) program that brings together the Center for High Throughput Computing (CHTC) and the Open Science Grid (OSG) in order to advance the nation’s campuses and science communities through the use of distributed high throughput computing. The PATh project offers technologies and services that enable researchers to harness, through a single interface and from the comfort of their “home directory”, computing capacity offered by a global and diverse collection of resources.

PATh is collaborating with Trusted CI on adapting and rewriting PATh’s security program. Based on a pre-kickoff meeting and the proposed security program plan PATh submitted to the NSF, we have prioritized their needs into a set of tasks that outline the goals of the engagement, specifically:

  • Completing the Trusted CI Information Security Program Evaluation in order to assess PATh’s understanding of their systems
  • Assessing the existing security plan and current OSG policies
  • Revising relevant policies and superseding outdated policies with new documents reflecting the current and planned future operations of OSG and PATh
  • Aligning the security program with the Trusted CI Framework
  • Placing additional focus and emphasis on resiliency and availability of services, including monitoring, backups, disaster recovery, and operational upgrades and redundancy

The engagement began in January 2021 and will run until the end of June 2021.