
Tuesday, July 18, 2023

Trusted CI releases updated guide to software security

As part of its ongoing efforts to support software assurance, Trusted CI has released a major update (version 2.0) of its Guide to Securing Scientific Software.

The first version of this guide provided concrete advice for anyone involved in developing or managing software for scientific projects. This new version expands both the coverage and the depth of those topics, providing an understanding of many of the security issues faced when producing software and actionable advice on how to deal with them. New topics include approaches to avoiding software exploits (injection attacks, buffer overflows and overruns, numeric errors, exceptions, serialization, directory traversal, improperly set permissions, and web security); securing the software supply chain; secure design; software analysis tools; fuzz testing; and code auditing.
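
To give a flavor of one topic on this list, the sketch below (our own illustration, not an excerpt from the guide) shows a classic injection flaw in Python and the parameterized-query fix that standard secure-coding advice points toward; the table and values are hypothetical:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (run_id TEXT, value REAL)")
    conn.execute("INSERT INTO results VALUES ('run-1', 3.14)")

    def lookup_unsafe(run_id):
        # Vulnerable: attacker-controlled input is spliced into the SQL,
        # so a run_id like "x' OR '1'='1" returns every row in the table.
        query = "SELECT value FROM results WHERE run_id = '%s'" % run_id
        return conn.execute(query).fetchall()

    def lookup_safe(run_id):
        # Safe: the ? placeholder makes the driver treat run_id strictly
        # as data, never as SQL syntax.
        return conn.execute(
            "SELECT value FROM results WHERE run_id = ?", (run_id,)).fetchall()

    print(lookup_unsafe("x' OR '1'='1"))  # [(3.14,)] -- data leaked
    print(lookup_safe("x' OR '1'='1"))    # []        -- no match, no leak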

The new version of the guide is available at https://doi.org/10.5281/zenodo.8137009.

If you write code, this guide is for you. And if you write scientific software, your software is likely to be shared or deployed as a service. Once that happens, you and the people who use or deploy your software will be confronted with software security concerns.

To address these concerns, you will need a variety of skills. However, it may be daunting just to know which concerns to address and which skills you need. The goal of this guide is to provide an introduction to these topics.

You can read this guide beginning-to-end as a tutorial to introduce you to the topic of secure software development, or you can read it selectively to help understand specific issues. In either case, this guide will introduce you to a variety of topics and then provide you with a list of resources to dive deeper into those topics.

It is our hope that our continued efforts in the area of software assurance will help scientific software projects better understand and ameliorate some of the most important gaps in the security of scientific software, and also help policymakers appreciate those gaps so they can better assess the need to commit resources to improving the state of scientific software security. Ultimately, we hope that this effort will support scientific discovery itself by shedding light on the risks to science incurred in creating and using software.

Trusted CI releases a new report on ransomware

As part of its ongoing efforts to support software assurance, Trusted CI has released a new report describing the current landscape of ransomware.

Ransomware has become a global problem, striking almost every sector that uses computers, from industry to academia to government. Attacks affect the smallest businesses, the largest corporations, and research labs, and have even shut down IT operations at entire universities.

Given this landscape, our report takes a detailed technical approach to understanding ransomware.

We present a broad landscape of how ransomware can affect a computer system and suggest how the system designer and operator might prepare to recover from such an attack. Our report focuses on detection, recovery, and resilience; as such, we explicitly do not discuss how ransomware might enter a computer system. The assumption is that systems will be successfully attacked and rendered inoperative to some extent. Therefore, it is essential to have a recovery and continuity-of-operations strategy.

Some of the ransomware scenarios that we describe reflect attacks that are common and well understood. Many of these scenarios have active attacks in the wild. Other scenarios are less common and do not appear to have any active attacks. In many ways, these less common scenarios are the most interesting ones as they pose an opportunity to build defenses ahead of attacks. Such areas need more research into the possible threats and defenses against these threats.

We start with a discussion of the basic attack goals of ransomware and distinguish ransomware from purely malicious vandalism. We present a canonical model of a computing system, representing the key components of the system such as user processes, the file system, and the firmware. We also include representative external components such as database servers, storage servers, and backup systems. This system model then forms the basis of our discussion on specific attacks.

We then use the system model to methodically discuss ways in which ransomware can (and sometimes cannot) attack each component of the system that we identified. For each attack scenario, we describe how the system might be subverted, the ransom act, the impact on operations, the difficulty of accomplishing the attack, the cost of recovery, the ease of detecting the attack, and the frequency with which the attack is found in the wild. We also describe strategies that could be used to detect these attacks and recover from them.
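
As a rough illustration of how such a scenario catalog can be organized (the field names below are our own, not the report's schema), each scenario pairs a component of the system model with the attributes just listed:

    from dataclasses import dataclass

    @dataclass
    class RansomwareScenario:
        component: str        # part of the system model, e.g. "file system"
        subversion: str       # how the component is subverted
        ransom_act: str       # what the attacker holds for ransom
        impact: str           # effect on operations
        difficulty: str       # effort required to mount the attack
        recovery_cost: str    # cost to restore operations
        detectability: str    # ease of detecting the attack
        in_the_wild: bool     # whether active attacks have been observed

    # A hypothetical entry, for illustration only:
    example = RansomwareScenario(
        component="backup system",
        subversion="credentials stolen from the backup server",
        ransom_act="backups encrypted before the primary data",
        impact="recovery path destroyed",
        difficulty="moderate",
        recovery_cost="high",
        detectability="low until the ransom demand",
        in_the_wild=True,
    )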

Based on our study, we present our major takeaway observations and best practices that can help make a system more resilient to attack and easier to recover after an attack. Our report is available at https://doi.org/10.5281/zenodo.8140464.

Tuesday, November 1, 2022

Open Science Cyber Risk Profile (OSCRP) Updated with Science DMZ, Software Assurance, Operational Technology, and Cloud Computing Elements

Trusted CI has released an updated version of the Open Science Cyber Risk Profile (OSCRP), with additions based on insights from its 2021 study of scientific software assurance:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “Guide to Securing Scientific Software,” December 2021. DOI: 10.5281/zenodo.5777646

…and its 2022 study on scientific operational technology:

Emily K. Adams, Daniel Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, Susan Sons, and John Zage. “Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research,” July 13, 2022. DOI: 10.5281/zenodo.6828675

A new section on risk profiling of cloud computing was also added. The full reference for the OSCRP is:

Sean Peisert, Von Welch, Andrew Adams, RuthAnne Bevier, Michael Dopheide, Rich LeDuc, Pascal Meunier, Steve Schwab, and Karen Stocks. Open Science Cyber Risk Profile (OSCRP), Version 1.3.3. October 2022. DOI: 10.5281/zenodo.7268749

The OSCRP is a document, initially released in 2017, designed to help principal investigators and their supporting information technology professionals assess cybersecurity risks related to open science projects. The OSCRP was the culmination of extensive discussions with research and education community leaders, and has since become a widely-used resource, including numerous references in recent National Science Foundation (NSF) solicitations.

The OSCRP is a living document and will continue to be refreshed as technology and threats change, and as new insights are acquired.

Comments, questions, and suggestions about this post and these documents are always welcome at info@trustedci.org.


Friday, July 29, 2022

Trusted CI Co-authors Identity Management Cookbook for NSF Major Facilities

Trusted CI’s Josh Drake has co-authored a new document addressing many identity management (IdM) challenges present at NSF Major Facilities. Due to their size and collaborative missions, Major Facilities often have many users, across multiple organizations, all with different access permissions to a diverse collection of CI resources. The Federated Identity Management Cookbook aims to address these challenges by providing time-tested “recipes” for building IdM capabilities, as well as a primer on the topic of IdM itself.

“While operating the IdM working group and CI Compass, we had many opportunities to engage with major facilities on identity and access management issues facing researchers. We were able to explore a variety of options to help researchers integrate federated identities into their cyberinfrastructure,” said Josh Drake. “This cookbook represents the distilled version of months of engagement with the MF community and a primer to identity management concepts that we hope will be of use to research cyberinfrastructure operators everywhere.” Trusted CI’s Ryan Kiser and Adrian Crenshaw also participated in the engagements that contributed to the cookbook.

This work was created in partnership with Erik Scott (RENCI) and CI Compass. CI Compass provides expertise and active support to cyberinfrastructure practitioners at NSF Major Facilities in order to accelerate the data lifecycle and ensure the integrity and effectiveness of the cyberinfrastructure upon which research and discovery depend.

The cookbook is available in the CI Compass Resource Library and on Zenodo. See CI Compass’s website to read the full press release.

Monday, March 28, 2022

Trusted CI Publishes 2022 Report Summarizing its Impact on Over 500 NSF Projects

Trusted CI has published its second Impacts Report analyzing our impact on the NSF community. The first report was published in 2018 and summarized our impact from 2012 to 2018. This new report updates our analysis under the current NSF cooperative agreement, which began in 2019 (award #1920430).

We define "impact" as the number of NSF projects (awards) that have had an engagement with Trusted CI or have had staff attend a Trusted CI event, including the NSF Cybersecurity Summit, webinars, and training events. Using that metric, we find that since 2012, Trusted CI has interacted with over 500 NSF projects, including over 300 NSF projects during the last three years (2019-2021).

The full report includes more details about our impact broken down by NSF Directorate, our engagements, Summit attendance, and more. It is available at https://doi.org/10.5281/zenodo.6350540.

Tuesday, December 8, 2020

Report on the Trusted CI 2020 NSF Cybersecurity Summit is now available

The Report of the 2020 NSF Cybersecurity Summit for Cyberinfrastructure and Large Facilities is now available at http://hdl.handle.net/2142/108907. The report summarizes the eighth annual Summit, the first to be held entirely online, which took place September 22-24, 2020. The annual Summit provides a valuable opportunity for cybersecurity training and information exchange among members of the cybersecurity, cyberinfrastructure, and research communities who support NSF science projects. This sharing of challenges and experiences raises the level of cybersecurity awareness and gives Trusted CI important insights into current and evolving issues within the constituent communities.
 
This year’s Summit training and plenary sessions reiterated some observations from previous years, such as the high value of community member interaction and knowledge sharing. Several presentations again noted the value of federated identity management in facilitating project collaboration. The importance of workforce development was also emphasized, with a new highlight on the strength that diversity brings to teams. Other emerging trends noted among this year’s presentations included the threat presented by the rapid spread of misinformation and disinformation, and a broadening of the focus on data confidentiality to include the value of data integrity.
 
Day 1 of the Summit was dedicated to half-day and full-day training sessions. Days 2 and 3 comprised plenary presentations, panels, and keynotes that focused on the security of cyberinfrastructure projects and NSF Large Facilities. Recordings of many of the Summit sessions are available on YouTube. Slides from a subset of the presentations are also available.
 
With 2020’s no-cost virtual format, this year’s attendance totaled 287 (up from 143 in-person attendees in 2019), representing 142 NSF projects and 16 of the 20 NSF Large Facilities. The total attendance includes a significant increase in student participation, with 27 students attending, up from ten in 2019. For more information on the 2020 Summit student attendees, please see the Trusted CI blog post Student Program at the 2020 NSF Cybersecurity Summit. Evaluation and feedback on the 2020 Summit were very positive, with many requests to continue offering a virtual attendance option in the future. As we begin planning for the 2021 Summit, we will be mindful of the conditions and options to determine meeting formats that we think will best serve the community’s needs at that time.

Friday, November 20, 2020

Open Science Cyber Risk Profile (OSCRP), and Data Confidentiality and Data Integrity Reports Updated

In April 2017, Trusted CI released the Open Science Cyber Risk Profile (OSCRP), a document designed to help principal investigators and their supporting information technology professionals assess cybersecurity risks related to open science projects. The OSCRP was the culmination of extensive discussions with research and education community leaders, and has since become a widely used resource, including numerous references in recent National Science Foundation (NSF) solicitations.

The OSCRP has always been intended to be a living document. To gather material for continued refreshing of its ideas, Trusted CI has spent the past couple of years performing in-depth examinations of additional topics for inclusion in a revised OSCRP. In 2019, Trusted CI examined the causes of random bit flips in scientific computing and common measures used to mitigate their effects. Its report, “An Examination and Survey of Random Bit Flips and Scientific Computing,” was issued in December 2019. To address the community’s need for insights on how to start thinking about computing on sensitive data, in 2020 Trusted CI examined data confidentiality issues and solutions in academic research computing. Its report, “An Examination and Survey of Data Confidentiality Issues and Solutions in Academic Research Computing,” was issued in September 2020.

Both reports have now been updated, with the current versions available at the links in the report titles above. In conjunction, the Open Science Cyber Risk Profile (OSCRP) itself has been refreshed with insights from both the data confidentiality and data integrity reports.

All of these documents will continue to be living reports that will be updated over time to serve community needs. Comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.


Thursday, September 10, 2020

Data Confidentiality Issues and Solutions in Academic Research Computing

Many universities have needs for computing with “sensitive” data, such as data containing protected health information (PHI), personally identifiable information (PII), or proprietary information. Sometimes this data is subject to legal restrictions, such as those imposed by HIPAA, CUI, FISMA, DFARS, GDPR, or the CCPA; at other times, data may simply not be sharable under a data use agreement. It may be tempting to think that such data is typically only in the domain of DOD- and NIH-funded research, but that assumption is far from reality. This issue arises in scientific domains people might immediately think of, such as medical research, but also in numerous others: economics, sociology, and other social sciences that examine financial data, student data, or psychological records; chemistry and biology, particularly work related to genomic analysis and pharmaceuticals, manufacturing, and materials; engineering analyses, such as airflow dynamics and underwater acoustics; and even computer science and data analysis, including advanced AI research, quantum computing, and research involving system and network logs. Such research is funded by an array of sponsors, including the National Science Foundation (NSF) and private foundations.

Few organizations currently have computing resources appropriate for sensitive data. Many universities have started thinking about how to enable computing on sensitive data, but may not know where to start.

In order to address the community need for insights on how to start thinking about computing on sensitive data, in 2020, Trusted CI examined data confidentiality issues and solutions in academic research computing.  Its report, “An Examination and Survey of Data Confidentiality Issues and Solutions in Academic Research Computing,” was issued in September 2020.  The report is available at the following URL:

https://escholarship.org/uc/item/7cz7m1ws

The report examined both the varying needs involved in analyzing sensitive data and a variety of solutions currently in use, ranging from campus and PI-operated clusters, to cloud and third-party computing environments, to technologies like secure multiparty computation and differential privacy. We also discussed the procedural and policy issues involved when campuses handle sensitive data.
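
As a hint of what one of these technologies looks like in practice, here is a minimal sketch (ours, with illustrative parameters, not an implementation from the report) of differential privacy's Laplace mechanism, which releases a count only after adding calibrated noise:

    import math
    import random

    def dp_count(true_count, epsilon=0.5, sensitivity=1.0):
        # Laplace mechanism: noise scale = sensitivity / epsilon. A count
        # changes by at most 1 when one individual is added or removed,
        # so its sensitivity is 1; smaller epsilon means stronger privacy
        # and noisier answers.
        scale = sensitivity / epsilon
        u = random.random() - 0.5  # uniform on [-0.5, 0.5)
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_count + noise

    # Each released answer is perturbed; repeated queries about the same
    # data consume more of the privacy budget.
    print(dp_count(1234))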

Our report was the result of numerous conversations with members of the community.  We thank all of them and are pleased to acknowledge those who were willing to be identified here and also in the report:

  • Thomas Barton, University of Chicago, and Internet2
  • Sandeep Chandra, Director for the Health Cyberinfrastructure Division and Executive Director for Sherlock Cloud, San Diego Supercomputer Center, University of California, San Diego
  • Erik Deumens, Director of Research Computing, University of Florida
  • Robin Donatello, Associate Professor, Department of Mathematics and Statistics, California State University, Chico
  • Carolyn Ellis, Regulated Research Program Manager, Purdue University
  • Bennet Fauber, University of Michigan
  • Forough Ghahramani, Associate Vice President for Research, Innovation, and Sponsored Programs, Edge, Inc.
  • Ron Hutchins, Vice President for Information Technology, University of Virginia
  • Valerie Meausoone, Research Data Architect & Consultant, Stanford Research Computing Center
  • Mayank Varia, Research Associate Professor of Computer Science, Boston University

For the time being, this report is intended as a standalone initial draft for use by the academic computing community. Later in 2020, this report will be accompanied by an appendix with additional technical details on some of the privacy-preserving computing methods currently available.  

Finally, in late 2020, we also expect to integrate issues pertaining to data confidentiality into a future version of the Open Science Cyber Risk Profile (OSCRP). The OSCRP is a document first created in 2016 to develop a “risk profile” that helps scientists understand risks to their projects from threats posed through scientific computing. While the first version included issues in data confidentiality, a revised version will include some of the additional insights gained in developing this report.

As with many Trusted CI reports, both the data confidentiality report and the OSCRP are intended to be living reports that will be updated over time to serve community needs. It is our hope that this new report not only helps answer many of the questions that universities are asking, but also begins conversations in the community, yielding questions and feedback that will help us improve this report over time. Comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.

Going forward, the community can expect additional reports from us on the topics mentioned above, as well as a variety of other topics. Please watch this space for future blog posts on these studies.


Tuesday, June 23, 2020

Fantastic Bits and Why They Flip

In 2019, Trusted CI examined the causes of random bit flips in scientific computing and common measures used to mitigate their effects. (In a separate effort, we will also be issuing a similar report on data confidentiality needs in science.) Its report, “An Examination and Survey of Random Bit Flips and Scientific Computing,” was issued in December 2019, a few days before the winter holidays. As news of the report was buried amidst the holidays and New Year, we are pleased to highlight it in a bit more detail now. This post is longer than most of Trusted CI’s blog posts to give you a feel for the report and, we hope, entice you to read it.

For those reading this who are not computer scientists, some background: what in the world is a “bit,” how can one “flip,” and what makes one flip randomly? Binary notation is the base-2 representation of numbers as combinations of the digits 0 and 1, in contrast to the decimal notation most of us use in our daily lives, which represents numbers as combinations of the digits 0 through 9. A “bit” is the atomic element of binary notation: a single 1 or 0. Bits can be combined to represent numbers larger than 0 or 1, in the same way that decimal digits can be put together to represent numbers larger than 9.
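
As a concrete example (ours, for illustration):

    # The decimal number 13 is written 1101 in binary:
    #   1*8 + 1*4 + 0*2 + 1*1 = 13
    print(bin(13))         # '0b1101'
    print(int("1101", 2))  # 13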

Binary notation has been in use for many hundreds of years. The manipulation of binary numbers made significant advances in the mid-19th century through the efforts of George Boole, who introduced what was later referred to as Boolean algebra or Boolean logic. This advance in mathematics, combined with advances in electronic switching circuits and logic gates by Claude Shannon (and others) in the 1930s, led to binary storage and logic as the basis of computing. As such, binary notation, with numbers represented as bits, is the basis of how most computers have stored and processed information since the inception of electronic computers.

However, while we see the binary digits 0 and 1 as discrete, opposite, and rigid representations, in the same way that North and South represent directions, the components of a computer that underlie these representations are analog, and they reveal that 0 and 1 are in fact closer to shades of grey. In practice, bits are typically stored magnetically and transmitted as electrical charges, and both magnetism and electrical charge can degrade or be altered by external forces, including cosmic rays and other forms of radiation and magnetism. To a computer, a “bit flip” is the change of the representation of a number from a 0 to a 1 or vice versa. Underlying that flip could be a sudden burst of radiation that instantly altered magnetic storage or electrical transmission, or the slow degradation of a magnetically stored bit from something close to 1, a “full” magnetic charge, to something less than 0.5, at which point it would be read and interpreted as a 0.
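
The magnitude of the damage depends entirely on which bit flips. The short Python sketch below (our illustration) flips a single bit in the 64-bit pattern of a floating-point number: an exponent bit halves the value, while the lowest mantissa bit changes it almost imperceptibly.

    import struct

    def flip_bit(x, bit):
        # Reinterpret the float as its 64-bit pattern, XOR one bit,
        # and reinterpret the result as a float again.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return flipped

    print(flip_bit(1.0, 52))  # 0.5 -- an exponent bit: the value halves
    print(flip_bit(1.0, 0))   # 1.0000000000000002 -- a mantissa bit: tiny change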

The use of error correction in computing and communication was pioneered in the 1940s and 1950s by Richard Hamming, whose techniques use some form of redundancy to identify and mask the effects of bit flips. Despite the creation of these techniques 70–80 years ago, error correction is still not universally used. And even when it is, there is a limit to the number of errors that a particular blob of data (a number, a file, a database) can incur before they can no longer be corrected, or even detected at all.
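
A single parity bit, the simplest form of redundancy (our sketch below, not Hamming's full code), illustrates both the idea and its limits: one flipped bit is detected, but two flips cancel out and go unnoticed.

    def parity(bits):
        # Even parity: 0 if the word has an even number of 1s, else 1.
        return sum(bits) % 2

    word = [1, 0, 1, 1, 0, 1, 0, 0]
    stored_parity = parity(word)

    word[3] ^= 1  # one bit flips in storage
    print(parity(word) != stored_parity)  # True: the error is detected

    word[5] ^= 1  # a second bit flips
    print(parity(word) != stored_parity)  # False: two flips go undetected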

The report that Trusted CI published last year describes why bit flips occur. Causes include isolated single errors due to some kind of interference; bursty faults affecting a number of sequential bits, due to some kind of mechanical failure or electrical interference; and malicious tampering. The document then narrows its focus to isolated errors. Malicious tampering is the focus of future reports, as are data errors or loss due to improper scientific design, mis-calibrated sensors, and outright bugs, including unaccounted-for non-determinism in computational workflows, improper roundoff and truncation errors, hardware failures, and “natural” faults.

The report then describes why single-bit faults occur (for example, via cosmic rays, ionizing radiation, and corrosion in metal), the odds of faults occurring for a variety of different components in computing, and potential mitigation mechanisms. The goal is to help scientists understand the risk that bit faults can lead either to scientific data that is in some way incorrect or to an inability to reproduce scientific results in the future, which is of course a cornerstone of the scientific process.

As part of the process of documenting mitigation mechanisms, the authors of the report surveyed an array of scientists with scientific computing workflows, as well as operators of data repositories and of computing systems ranging from small clusters to large-scale DOE and NSF high-performance computing systems. The report also discusses the impact of bit flips on science. In some cases, including certain types of metadata, corrupt data might be catastrophic. In other cases, such as images, or situations where multiple data streams are being collected that cross-validate each other, the flip of a single bit or even a small handful of bits is largely or entirely lost in the noise. Finally, the report collects these mechanisms into a set of practices, organized by the components involved in scientific computing, that scientists may wish to consider implementing in order to protect their data and computation: for example, using strong hashing before storing or transmitting data, file systems with automated integrity repair built in, disks with redundancy built in, and, where possible, fault-tolerant algorithms.
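
For instance, the strong-hashing practice might look like the following sketch (ours; the file name is hypothetical): record a digest when data is written, and verify it after storage or transfer.

    import hashlib

    def sha256_of(path):
        # Hash the file in chunks so large scientific datasets do not
        # need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    digest_at_write = sha256_of("results.dat")  # store alongside the data
    # ... later, after storage or transfer ...
    if sha256_of("results.dat") != digest_at_write:
        raise IOError("results.dat failed its integrity check")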

For the time being, this report is intended as a standalone first draft for use by the scientific computing community. Later in 2020, this report will be combined with insights from the Trusted CI “annual challenge” on trustworthy data to more broadly offer guidance on integrity issues beyond bit flips. Finally, in late 2020, we expect to integrate issues pertaining to bit flips into a future version of the Open Science Cyber Risk Profile (OSCRP). The OSCRP is a document first created in 2016 to develop a “risk profile” that helps scientists understand risks to their projects from threats posed through scientific computing. While the first version included issues in data integrity, a revised version will include bit flips more directly and in greater detail.

As with many Trusted CI reports, both the bit flip report and the OSCRP are intended to be living reports that will be updated over time to serve community needs. As such, comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.

Going forward, the community can expect additional reports from us on the topics mentioned above, as well as a variety of other topics. Please watch this space for future blog posts on these studies.


Tuesday, October 20, 2015

CTSC Year 3 Report Published

CTSC's Year 3 report, covering activities from October of 2014 through September of 2015, has been accepted by NSF and is available online at http://trustedci.org/reports.

Monday, December 22, 2014

2014 NSF Summit Report Published

The Report of the 2014 NSF Cybersecurity Summit for Large Facilities and Cyberinfrastructure is available for download at http://trustedci.org/2014summit/. Many thanks to our colleagues who helped us document this community event and those who contributed white papers.

We'll be using the findings and recommendations, in part, to drive planning for the 2015 summit, to be held at the Westin Arlington Gateway, August 17 - 19.

Monday, October 6, 2014

CTSC Year Two Report

CTSC's Year Two Project Report has been submitted to NSF and is available at http://trustedci.org/reports/ (along with the year one report). The Executive Summary follows.



The Center for Trustworthy Scientific Cyberinfrastructure (CTSC) is transforming and improving the practice of cybersecurity and hence the trustworthiness of NSF scientific cyberinfrastructure (CI). CTSC is providing the NSF CI community with cybersecurity leadership, expertise, training, and the nexus of a community for sharing experiences and lessons learned. The vision of CTSC is an NSF CI community in which each project knows where it fits in a coherent cybersecurity ecosystem, has access to the tools and expertise to enact a cybersecurity program, participates in the sharing of experiences and collaboration between projects, and benefits greatly from leveraging services from universities and regional and national networks (e.g., CIC, SURA, Internet2).

This report covers CTSC project year two, from October 2013 through September 2014, during which time CTSC engaged with seven NSF CI projects, re-invigorated the NSF CI cybersecurity community by organizing the 2013 and 2014 NSF Cybersecurity Summits for Large Facilities and Cyberinfrastructure, provided the community with a guide and templates for developing a cybersecurity program, and provided training in secure coding, incident response and developing a cybersecurity program.

Nearly 150 individuals, representing over 70 projects, attended one or both of the Summits. The 2014 Summit was particularly successful in building community around a call for participation that resulted in the broader community presenting two training sessions and four experience reports.

Through its first two years, CTSC has now engaged with 13 NSF projects and trained over 130 CI professionals representing 30 projects. Those numbers include a significant impact on NSF Large Facilities, which accounted for 4 of CTSC’s engagements, 14 of the projects that have attended a Summit, and 9 of the projects benefitting from CTSC training.

Awareness of CTSC increased in its second year, with International Science Grid This Week publishing an article on CTSC’s work with LIGO, an NSF solicitation mentioning the CTSC-organized Summit, and CTSC’s blog and website receiving a significant number of views.

This report describes all of CTSC’s activities in detail, concluding with a set of lessons learned by CTSC over its first two years and the project’s plans for its third year.

Wednesday, February 12, 2014

NSF Cybersecurity Summit Report Published

CTSC is pleased to announce that the Report of the 2013 NSF Cybersecurity Summit for Cyberinfrastructure and Large Facilities is finalized and available on the CTSC website.


Trustworthy computational science and the complex interactions NSF projects have with scientific communities and research organizations bring unique cybersecurity challenges. Held from September 30 through October 2, the summit brought together community members from 24 NSF-funded projects and 30 organizations for tutorials, panels, presentations, and discussions around the theme, Designing Cybersecurity Programs in Support of Science. CTSC had the honor of rebooting the summit after a four-year hiatus. The report summarizes the event, its outcomes, and plans for future community activities.

The report is also available in the Trusted CI Forum, where questions and discussion are welcome: https://trustedci.groupsite.com/file_cabinet

Many thanks to all the community members who participated and assisted us in drafting the report. We look forward to collaborating with you in 2014.

Thursday, September 26, 2013

CTSC Year One Project Report Published

CTSC's Year One Project Report has been submitted to NSF and is available at http://trustedci.org/reports/. The Executive Summary follows.

The Center for Trustworthy Scientific Cyberinfrastructure (CTSC) is transforming and improving the practice of cybersecurity and hence the trustworthiness of NSF scientific cyberinfrastructure (CI). CTSC is providing readily available cybersecurity expertise and services, as well as leadership in advancing the state of practice and coordination across a broad range of NSF scientific CI projects via a series of engagements with NSF CI projects and a broader ongoing education, outreach and training effort.
The vision of CTSC is an NSF CI community in which 1) each project knows where it fits in a coherent cybersecurity ecosystem and can assess its own needs; 2) each project has access to the tools and needed help to enact a basic cybersecurity program and tackle the project’s advanced challenges; 3) sharing of experiences and collaboration between projects is the norm; and 4) cybersecurity is greatly benefited by leveraging services, universities, I2, and broader community best practices. 
Towards this vision, CTSC is organized by three thrusts: 1) Engagements with specific communities to address their individual challenges; 2) Education, Outreach and Training, providing the NSF scientific CI community with training, student education, best practice guides, and lessons learned documents; and 3) Cybersecurity Leadership, building towards a coherent, interoperable cybersecurity community and ecosystem. This report covers CTSC’s successful first year, in which it initiated seven engagements: it completed three (LTER Network Office, LIGO, Pegasus), is in the process of finalizing three more (DataONE, IceCube, CyberGIS), and is initiating a seventh (Globus Online).
Accomplishments include 1) developing a process for creating NSF CI cybersecurity programs that incorporates well-known best practices and tackles the NSF CI challenges of residing in a complicated, multi-institution ecosystem with unique science instruments and data; 2) re-starting and organizing the NSF Cybersecurity Summit, along with an online Trusted CI Forum, to foster an ongoing community focused on NSF CI cybersecurity; and 3) delivering seven training sessions by leveraging prior training materials from the University of Wisconsin team and creating two new tutorials.
Educational activities include 1) creating a new education module on cybersecurity for CI that is being utilized in a class at the University of Illinois this Fall; 2) mentoring a student in Indiana University’s Summer of Networking program; and 3) the ongoing membership of two graduate students on the CTSC team as research assistants. Our broader impacts include the publication of engagement products and three other papers to define community best practices.
Year two plans are described that continue the emphasis on these three thrusts, on building the community working on cybersecurity through the Trusted CI Forum, and on a vision for continued CI and Large Facility Cybersecurity Summits.