Friday, December 18, 2020

Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) and Global Ocean Biogeochemistry Array (GO-BGC) Complete Trusted CI CyberCheckup

The Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) project is a $21 million NSF-funded project (OPP 1425989 and OPP 1936222) to instrument the Southern Ocean and make the data publicly available. SOCCOM has deployed nearly 200 robotic profiling floats in the Southern Ocean (south of 30°S). These floats are part of the international Argo network and collect physical, chemical, and biological sensor data from the upper 2000 m of the water column every 10 days. The data are transmitted to shore via the Iridium satellite system and then passed through a series of institutional servers, where they are fully processed and quality controlled. The resulting science-quality data and the raw observations are made available within 24 hours with no restrictions. The data set has been used in more than 100 publications to assess physical, chemical, and biological processes in the Southern Ocean.

The biogeochemical float array in the Southern Ocean is now expanding to the world ocean under a new NSF-sponsored project, the Global Ocean Biogeochemistry (GO-BGC) Array (OCE 1946578). GO-BGC, funded by a $52.9 million grant from the Mid-scale Research Infrastructure-2 program, will deploy 500 robotic profiling floats throughout the ocean. The institutional float operators will expand from the University of Washington (UW), which operates the floats for SOCCOM, to include Scripps Institution of Oceanography (SIO) and Woods Hole Oceanographic Institution (WHOI). The Monterey Bay Aquarium Research Institute (MBARI) will maintain the biogeochemical data processing center for both programs.

SOCCOM and GO-BGC staff first used Trusted CI's "Securing Commodity IT in Scientific CI Projects" spreadsheet to evaluate four of their participating institutions: MBARI, UW, SIO, and WHOI. SOCCOM and GO-BGC staff next completed Trusted CI's "Information Security Program Evaluation" questionnaire, which was used to capture the current state of each participant's information security program and to find potential security policy gaps. The output from these two documents will be used by SOCCOM and GO-BGC to better secure their projects. In addition to the CyberCheckup, Trusted CI staff walked project members through the use of Trusted CI's guide to developing cybersecurity programs and the upcoming Trusted CI Framework for putting together a comprehensive cybersecurity program.

The SOCCOM data system includes servers at UW, which handle float communications through the Iridium system, data processing for the physical variables (temperature, salinity, and pressure), and transmission of the physical data to the Argo Data Assembly Center in Miami, which is maintained by NOAA. The UW system also links to the network at MBARI, where all of the biogeochemical data are processed and then transmitted to the Argo Data Assembly Center, where they are merged with the physical data. The GO-BGC data system (including float communications, raw data acquisition, data processing and quality control, and data dissemination and archiving) is more complicated, with networks at UW, WHOI, and SIO communicating with floats and distributing data to MBARI for processing. SOCCOM and GO-BGC performed a Trusted CI CyberCheckup to look at their needs for a comprehensive cybersecurity program. The CyberCheckup is an engagee-driven self-evaluation of a project's cybersecurity readiness. Trusted CI staff provided templates to be used for the CyberCheckup as well as assistance in reviewing them.
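
To make the data flow concrete, here is a minimal sketch of the pipeline stages described above. All names and structures are hypothetical and for illustration only; the projects' actual institutional systems are far more involved.

    # Illustrative sketch (hypothetical names) of the float-data pipeline:
    # Iridium reception -> physical QC (UW) -> biogeochemical QC (MBARI)
    # -> merged release via the Argo Data Assembly Center.

    from dataclasses import dataclass, field

    @dataclass
    class FloatProfile:
        float_id: str
        cycle: int
        physical: dict = field(default_factory=dict)        # temperature, salinity, pressure
        biogeochemical: dict = field(default_factory=dict)  # e.g., oxygen, nitrate, pH

    def process_physical(profile: FloatProfile) -> FloatProfile:
        """Stand-in for the UW step: quality-control the physical variables."""
        profile.physical = {k: v for k, v in profile.physical.items() if v is not None}
        return profile

    def process_biogeochemical(profile: FloatProfile) -> FloatProfile:
        """Stand-in for the MBARI step: quality-control the biogeochemical variables."""
        profile.biogeochemical = {k: v for k, v in profile.biogeochemical.items() if v is not None}
        return profile

    def submit_to_argo_dac(profile: FloatProfile) -> dict:
        """Stand-in for the Argo DAC step: merge physical and BGC data for release."""
        return {"float": profile.float_id, "cycle": profile.cycle,
                **profile.physical, **profile.biogeochemical}

    # A profile arriving via Iridium moves through each stage in turn:
    profile = FloatProfile(
        "F0001", 42,
        physical={"temp_C": 4.1, "salinity_psu": 34.2, "pressure_dbar": 1500.0},
        biogeochemical={"oxygen_umol_kg": 210.5, "nitrate_umol_kg": None},
    )
    print(submit_to_argo_dac(process_biogeochemical(process_physical(profile))))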

The multi-institutional SOCCOM and GO-BGC projects create a cybersecurity challenge because of the mix of institutional assets, policies, and infrastructure.  To accommodate the multi-institutional nature of the project, a two-tiered approach to cybersecurity will be implemented, which incorporates the practices outlined in the Trusted CI review.  A project level CyberSecurity Team will encompass representatives of each institution.  This team will be led by a CyberSecurity Coordinator from the science staff.   

Each of the institutional members directly involved in the flow of project data will then implement a local team. These local teams will include a cybersecurity professional from the information systems group at each location, a SOCCOM or GO-BGC science team representative, and a member of the SOCCOM or GO-BGC technical staff at the location. The diverse membership of the local teams is intended to ensure professional cybersecurity capabilities, a vision of the scientific requirements for data availability and protection, and a code-level view of the project infrastructure. The local CyberSecurity Teams are responsible for developing cybersecurity plans adapted to their local infrastructure and policies.

The Project CyberSecurity Team coordinates communications between the local teams and ensures that a system-wide review of security and vulnerabilities is conducted.  They ensure that the project-wide data system is functional, meets the broader community needs, and is capable of rapid recovery from a cyber attack. The Project CyberSecurity Team will conduct periodic reviews and tests (“fire drills”) of the local plans.  

As noted by Ken Johnson, the GO-BGC PI at MBARI, "The Trusted CI CyberCheckup has been a really important mechanism for us to review a critical path that often gets overlooked. Our program will be a lot stronger as a result of the review."

Now available: An “early look” at three additional chapters from the Trusted CI Framework Implementation Guide for Research Cyberinfrastructure Operators

Following the earlier release of the v0.9 chapter for Must 15, Trusted CI has released additional v0.9 chapters from the forthcoming Trusted CI Framework Implementation Guide (FIG) for Research Cyberinfrastructure Operators (RCOs). The chapters are:


  • Must 3: Organizations must establish and maintain documentation of information assets.
  • Must 4: Organizations must establish and implement a structure for classifying information assets as they relate to the organization's mission.
  • Must 16: Organizations must select and deploy additional and alternate controls as warranted.


These chapters provide RCOs with roadmaps and advice on addressing fundamental steps toward establishing a mature cybersecurity program. The chapters are the result of Trusted CI’s years of accumulated experience conducting research, training, assessments, consultations, and collaborating closely with the research community. They have been reviewed and vetted by the Framework Advisory Board. 


Trusted CI will publish v1.0 of the complete FIG on March 1, 2021.


Read on to learn more. For the latest information about the Framework, please see https://www.trustedci.org/framework and consider subscribing to Trusted CI's announce email list. For inquiries, please contact info@trustedci.org.


About the Trusted CI Framework


The Trusted CI Framework is a tool to help organizations establish cybersecurity programs. In response to an abundance of guidance focused narrowly on cybersecurity controls, Trusted CI set out to develop a framework that would empower organizations to confront their own cybersecurity challenges from a mission-oriented and full organizational lifecycle perspective. Trusted CI's mission is to lead the development of an NSF Cybersecurity Ecosystem that enables trustworthy science; the Framework fills a gap by emphasizing programmatic fundamentals.


The Trusted CI Framework is structured around 4 "Pillars," which make up the foundation of a competent cybersecurity program: Mission Alignment, Governance, Resources, and Controls.


Within these pillars are 16 “Musts” that identify the concrete, critical elements required for running a competent cybersecurity program. The 4 Pillars and the 16 Musts combined make up the “Framework Core,” which is designed to be applicable in any environment and for any organization and which is unlikely to change significantly over time.


About the forthcoming Framework Implementation Guide


A “Framework Implementation Guide” (FIG) is an audience-specific deep dive into how an organization would begin implementing the 16 Musts. FIGs provide detailed guidance and recommendations and are expected to be updated much more frequently than the Framework Core.


This Framework Implementation Guide is designed for direct use by research cyberinfrastructure operators. We define RCOs as organizations that operate on-premises, cloud-based, or hybrid computational and data/information management systems, instruments, visualization environments, networks, and/or other technologies that enable knowledge breakthroughs and discoveries. These include, but are not limited to, major research facilities, research computing centers within research institutions, and major computational resources that support research computing.


About the Framework Advisory Board (FAB)


As a product ultimately designed for use in the Research and Higher Education communities, this Framework Implementation Guide is being developed with significant input from stakeholders who represent a cross-section of the target audience. The Framework Advisory Board (FAB) includes 19 stakeholders with diverse interests and roles in the research and education communities. Over the course of 2020, Trusted CI's Framework project team has engaged the FAB on a monthly basis, and the group has provided substantial input on the draft material.


The Framework Advisory Board is:


Kay Avila (NCSA); Steve Barnet (IceCube); Tom Barton (University of Chicago); Jim Basney (NCSA); Jerry Brower (NOIRLab, Gemini Observatory); Jose Castilleja (NCAR / UCAR); Shafaq Chaudhry (UCF); Eric Cross (NSO); Carolyn Ellis (Purdue U.); Terry Fleury (NCSA); Paul Howell (Internet2); Tim Hudson (NEON / Battelle / Arctic); David Kelsey (UKRI/WISE); Tolgay Kizilelma (UC Merced); Nick Multari (PNNL); Adam Slagell (ESnet); Susan Sons (IU CACR); Alex Withers (NCSA / XSEDE); Melissa Woo (Michigan State U.)


Tuesday, December 8, 2020

Report on the Trusted CI 2020 NSF Cybersecurity Summit is now available

The Report of the 2020 NSF Cybersecurity Summit for Cyberinfrastructure and Large Facilities is now available at http://hdl.handle.net/2142/108907. The report summarizes the eighth annual Summit, the first to be held entirely online, which took place September 22-24, 2020. The annual Summit provides a valuable opportunity for cybersecurity training and information exchange among members of the cybersecurity, cyberinfrastructure, and research communities who support NSF science projects. This sharing of challenges and experiences raises the level of cybersecurity awareness and gives Trusted CI important insights into current and evolving issues within the constituent communities.
 
This year's Summit training and plenary sessions reiterated some observations from previous years, such as the high value of community member interaction and knowledge sharing. Several presentations again noted the value of federated identity management in facilitating project collaboration. Also emphasized was the importance of workforce development, with a new highlight on the strength that diversity brings to teams. Other emerging trends noted among this year's presentations included the threat presented by the rapid spread of misinformation and disinformation, and a broadening of the focus on data confidentiality to include the value of data integrity.
 
Day 1 of the Summit was dedicated to half-day and full-day training sessions. Days 2 and 3 comprised plenary presentations, panels, and keynotes that focused on the security of cyberinfrastructure projects and NSF Large Facilities. Recordings of many of the Summit sessions are available on YouTube. Slides from a subset of the presentations are also available.
 
With 2020’s no-cost virtual format, this year’s attendance totaled 287 (up from 143 in-person attendees in 2019), representing 142 NSF projects and 16 of the 20 NSF Large Facilities. The total attendance includes a significant increase in student participation, with 27 students attending, up from ten in 2019. For more information on the 2020 Summit student attendees, please see the Trusted CI blog post Student Program at the 2020 NSF Cybersecurity Summit. Evaluation and feedback on the 2020 Summit were very positive, with many requests to continue offering a virtual attendance option in the future. As we begin planning for the 2021 Summit, we will be mindful of the conditions and options to determine meeting formats that we think will best serve the community’s needs at that time.

Monday, December 7, 2020

Trusted CI Webinar Series: Planning for 2021, review of 2020

The 2020 season of the Trusted CI Webinar series has concluded, and we are looking forward to the presentations scheduled for next year.

The following topics have been booked for 2021:

  • January: SciTokens
  • February: Cyberattacks & the social sciences
  • March:  REED+ ecosystem
  • April: OSN and MGHPCC
  • May: Identifying Vulnerable GitHub Repositories
  • June: Trusted CI annual challenge - Software Assurance
  • July: Open Science Grid
  • August: NCSA's SOC Type 2 certification
  • September: Q-Factor project
  • October: Legal insights with Scott Russell
  • December: Trusted CI annual challenge - Software Assurance

In case you missed them, here are the webinars from 2020:

  • January ’20: REN-ISAC for Research Facilities & Projects with Kim Milford (Video)(Slides)
  • February ’20: FABRIC: Adaptive programmaBle networked Research Infrastructure for Computer science with Anita Nikolich (Video)(Slides)
  • March ’20: OnTimeURB: Multi-cloud Broker Framework for Creation of Secure and High-performance Science Gateways with Prasad Calyam (Video)(Slides)
  • April ’20: Trustworthy Decision Making and Artificial Intelligence with Arjan Durresi (Video)(Slides)
  • May ’20: Is your code safe from attack? with Barton Miller and Elisa Heymann (Video)(Slides)
  • June ’20: The ResearchSOC with Susan Sons (Video)(Slides)
  • July ’20: Whose line is it anyway? - Problem solving in complex networks with Doug Southworth (EPOC) (Video)(Slides)
  • August ’20: Transitioning Cybersecurity Research to Practice - Success stories and tools you can use, with Patrick Traynor, Florence Hudson, and Ryan Kiser (Video)(Slides)
  • September ’20: ACCORD: Integrating CI policy and mechanism to support research on sensitive data, with Ron Hutchinson, Tho Nguyen, and Neal Magee (Video)(Slides)
  • October ’20: RDP: Enforcing Security and Privacy Policies to Protect Research Data with Yuan Tian (Video)(Slides)
  • October ’20: Cybersecurity Maturity Model Certification (CMMC) with Scott Russell (Video)(Slides)
  • December ’20: Trustworthy Data panel (Video)(Slides)
Join Trusted CI's announcements mailing list for information about upcoming events. Our complete catalog of webinars and other presentations is available on our YouTube channel.


Friday, November 20, 2020

Open Science Cyber Risk Profile (OSCRP), and Data Confidentiality and Data Integrity Reports Updated

 In April 2017, Trusted CI released the Open Science Cyber Risk Profile (OSCRP), a document designed to help principal investigators and their supporting information technology professionals assess cybersecurity risks related to open science projects. The OSCRP was the culmination of extensive discussions with research and education community leaders, and has since become a widely-used resource, including numerous references in recent National Science Foundation (NSF) solicitations.

The OSCRP has always been intended to be a living document. To gather material for continued refreshing of ideas, Trusted CI has spent the past couple of years performing in-depth examinations of additional topics for inclusion in a revised OSCRP. In 2019, Trusted CI examined the causes of random bit flips in scientific computing and common measures used to mitigate their effects. Its report, "An Examination and Survey of Random Bit Flips and Scientific Computing," was issued in December 2019. To address the community's need for insights on how to start thinking about computing on sensitive data, in 2020 Trusted CI examined data confidentiality issues and solutions in academic research computing. Its report, "An Examination and Survey of Data Confidentiality Issues and Solutions in Academic Research Computing," was issued in September 2020.

Both reports have now been updated, with the current versions being made available at the links to the report titles above.  In conjunction, the Open Science Cyber Risk Profile (OSCRP) itself has also been refreshed with insights from both data confidentiality and data integrity reports.

All of these documents will continue to be living reports that are updated over time to serve community needs. Comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.


Thursday, November 19, 2020

Trusted CI Webinar: Trustworthy Data panel Mon Dec 7 @11am Eastern

The Trustworthy Data Working Group is hosting a panel on Monday, December 7th at 11am (Eastern) to discuss tools, standards, and community practices for trustworthy scientific data sharing. Our panelists are:

Please register here. Be sure to check your spam/junk folder for the registration confirmation email.

The Trustworthy Data Working Group (TDWG) is a collaborative effort of Trusted CI, the four NSF Big Data Innovation Hubs, the NSF CI CoE Pilot, the Ostrom Workshop on Data Management and Information Governance, the NSF Engagement and Performance Operations Center (EPOC), the Indiana Geological and Water Survey, the Open Storage Network, and other interested community members. The goal of the working group is to understand scientific data security concerns and provide guidance on ensuring the trustworthiness of data.

This year the TDWG published a survey report about data security concerns and practices among the scientific community. Building on the insights of the survey report, the working group published a guidance report on trustworthy data for science projects, including science gateways, that covers the topics the panel will be discussing.

This panel is an opportunity to discuss the work of the TDWG in the larger context of related work by PresQT, NIST, and RDA-US.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Thursday, November 12, 2020

Thank you to Trusted CI alumni

Trusted CI, the NSF Cybersecurity Center of Excellence, has relied on expertise from its staff, multiple internationally recognized institutions, its advisory committee, and its collaboration with numerous NSF-funded research organizations to address the ongoing cybersecurity challenges for higher education and high-performance computing scientific research. We also want to thank our alumni, who made significant contributions to our mission. We wish them the best in their ongoing endeavors. 

Go to Trusted CI alumni to see some of the contributions that alumni have made. (Some of our alumni have opted not to appear on a public website.)


Wednesday, November 4, 2020

Trusted CI Offering Office Hours by Appointment

The purpose of Trusted CI office hours is to provide direct assistance to members of the open science security community. We have decided to move office hours to a "by appointment" format to be more flexible with community members' schedules. To request an appointment, contact us with the subject, “Office Hours,” and any details you can provide about the question or problem you'd like to solve.

Monday, November 2, 2020

PEARC20: Another successful workshop and training at PEARC

Trusted CI had another successful exhibition at PEARC20.

We hosted our Fourth Workshop on Trustworthy Scientific Cyberinfrastructure for our largest audience to date. The topics covered during this year's workshop were:

  • Community Survey Results from the Trustworthy Data Working Group (slides)
    • Presenters: Jim Basney, NCSA / Trusted CI; Jeannette Dopheide, NCSA / Trusted CI; Kay Avila, NCSA / Trusted CI; Florence Hudson, Northeast Big Data Innovation Hub / Trusted CI
  • Characterization and Modeling of Error Resilience in HPC Applications (slides)
    • Presenter: Luanzheng Guo, University of California-Merced 
  • Trusted CI Fellows Panel (slides)
    • Moderator: Dana Brunson, Internet2
    • Panelists: Jerry Perez, University of Texas at Dallas; Laura Christopherson, Renaissance Computing Institute; Luanzheng Guo, University of California, Merced; Songjie Wang, University of Missouri; Smriti Bhatt, Texas A&M University - San Antonio; Tonya Davis, Alabama A&M University
  • Analysis of attacks targeting remote workers and scientific computing infrastructure during the COVID19 pandemic at NCSA/UIUC (slides)
    • Presenters: Phuong Cao, NCSA / University of Illinois at Urbana-Champaign; Yuming Wu, Coordinated Science Laboratory / University of Illinois at Urbana-Champaign; Satvik Kulkarni, University of Illinois at Urbana-Champaign; Alex Withers, NCSA / University of Illinois at Urbana-Champaign; Chris Clausen, NCSA / University of Illinois at Urbana-Champaign
  • Regulated Data Security and Privacy: DFARS/CUI, CMMC, HIPAA, and GDPR (slides)
    • Presenters: Erik Deumens, University of Florida; Gabriella Perez, University of Iowa;  Anurag Shankar, Indiana University
  • Securing Science Gateways with Custos Services (slides)
    • Presenters: Marlon Pierce, Indiana University; Enis Afgan, Johns Hopkins University; Suresh Marru, Indiana University; Isuru Ranawaka, Indiana University; Juleen Graham, Johns Hopkins University

We will post links to the recordings when they are made public.

In addition to the workshop, Trusted CI team member Kay Avila co-presented a Jupyter security tutorial titled “The Streetwise Guide to Jupyter Security” (event page) with Rick Wagner.  This presentation was based on the “Jupyter Security” training developed by Rick Wagner, Matthias Bussonnier, and Trusted CI’s Ishan Abhinit and Mark Krenz for the 2019 NSF Cybersecurity Summit.

Tuesday, October 27, 2020

Student Program at the 2020 NSF Cybersecurity Summit

In September we hosted our annual NSF Cybersecurity Summit for Large Facilities and Cyberinfrastructure. This year's event was hosted online, which presented some challenges as well as new opportunities to broaden our audience. This is especially the case with our student program. Without the budget constraints of travel and hotel accommodations, we were able to allocate more registration seats to students than in previous years. This year twenty-seven students, from inside and outside the US, joined us for three days of hands-on training, talks, panels, and active Q & A sessions.

Because our student attendance was significantly larger than in previous years, we decided to host a panel specifically targeted toward their interests. Prior to the Summit we asked the students to vote from a list of topics. They selected "Multidisciplinary in Cyber: Research, education, and industry." Our panelists (names listed below) included senior leadership in cybersecurity institutions as well as experts in crime and psychology. They talked about the various efforts in research, education, and industry to engage these domains to gain a more holistic approach to cybersecurity. One key takeaway was the panel's emphasis that a diverse group of backgrounds and interests can enhance an organization's or project's security posture. And some light but practical advice: remember to "Marie Kondo" your attack surface. The fewer apps you manage on your devices, the better off you will be.

We asked the students to share their thoughts on their experiences at the Summit. Below is a selection of their responses. These statements have been edited for brevity and clarity.

Posie Aagaard; Master's in Information Technology with Cyber Focus, University of Texas San Antonio (LinkedIn)

The conference was very well organized and information about the conference was clearly communicated in advance, which helped me plan. I appreciated the pre-recorded sessions and was able to view them in advance. Richard Biever’s and Ken Goodwin’s vulnerability scanning/honeypot session, and Pablo Morian’s session on cyberinfrastructure protection using machine learning, made me wonder, “Why didn’t I think of that?”

The session moderators did a super job of being punctual, informative, and keeping that human element in the virtual sessions. Thank you for recording some sessions, and thanks to the presenters for sharing slides as they were able.

I found Kate Starbird's keynote on disinformation to be fascinating and relevant to the discussions organizations should be having about cybersecurity. The humor and candor in Jerry Brower's "Gemini's Policy Laxative" was welcome and informative. Susan Sons's "Speak Their Language" included some gems of advice that I am sure I will reference in my future briefings.

The Tuesday training session was a great bonus. I liked the technical hands-on aspect and the ability to ask presenters questions along the way. I attended Bart Miller’s and Elisa Heymann’s session. They did a lot to prepare ahead of time.
Hristina Jovanoska; Associate's in Cybersecurity/Information Assurance, Ivy Tech Community College (LinkedIn)
I thank the NSF for choosing me to represent this year's student program. I learned so much that I will take with me in my career. It was a great experience for all three days of the Summit. Thank you again. Stay safe and healthy, and I'm looking forward to next year's Summit.
Krunal Mahant; Master's in Computer Security, Rochester Institute of Technology (LinkedIn)
This was my first experience as a student to be a part of an event that involves such a large number of candidates. The best part about the Summit was that everyone I interacted with had the same mindset as me: to share experiences and learn from one another. I attended the Web Security tools training session and was instantly inspired by the experiences shared by both Prof. Bart Miller and Elisa Heymann. Each session had so much information packed in a very organised way that it never felt overwhelming at all. Apart from the training day, I was part of the "Build a Cybersecurity Culture with Tabletop Security Exercises" by Josh Drake. It was a fun and insightful experience as there were topics discussed in the session that I had never imagined were important in building the security culture. It gave me good knowledge about how organizations in the industry use techniques to develop security habits. All in all, my experience was great and I would love to continue being part of NSF Cybersecurity Summit each year. Thanks a lot for accepting me as a participant.
Eric Tatman; Bachelor's in Intelligent Systems Engineering, Indiana University Bloomington (LinkedIn)
I really enjoyed keynote speaker Kate Starbird's presentation on disinformation. I thought it was a very insightful presentation, particularly when she shared overlays of her models depicting how the sources of disinformation not only come from opposing factions in contemporary social movements, but sometimes come from the same source. This showed that some disinformation sources are not only acting on both sides, but are using their ability to manipulate what each faction is focused on at a certain time.
Trusted CI thanks the members of the student panel, and the students themselves, for making this year's Summit a success. As always, the students' participation and enthusiasm is a rewarding affirmation of our commitment to community building.

Student Panel Moderators

  • Jeannette Dopheide - Senior Education, Outreach, and Training Coordinator at the National Center for Supercomputing Applications (NCSA) at the University of Illinois
  • Aunshul Rege - Associate Professor at Temple University

Panelists

  • ​Dr. Tonya Davis - Assistant Professor at Alabama A&M University
  • Kevin Metcalf - Chief Executive Officer for the National Child Protection Task Force
  • Helen Patton - Chief Information Security Officer at the Ohio State University
  • ​Rodney Petersen - Director of the National Initiative for Cybersecurity Education (or NICE) at the National Institute of Standards and Technology (NIST)


Thursday, October 15, 2020

Transition to practice success story: The behavioral side of cybersecurity

An interview with social scientist Aunshul Rege

Proactive cybersecurity must include the who and the why

Aunshul Rege, Ph.D., is an associate professor in the Department of Criminal Justice at Temple University. Dr. Rege is a social scientist who looks at the behavioral side of cybersecurity – “proactive cybersecurity, focusing on adversarial behavior, decision-making, movement, adaptation to disruption, and group dynamics.” (Also see her research website.)

She is also a 2019 Trusted CI Open Science Cybersecurity Fellow. Her current research includes two National Science Foundation grants.

She is also a recipient of a new NSF SaTC EDU grant that focuses on cybersecurity education that emphasizes the human factor.

Trusted CI spoke with Rege about her transition to practice (TTP) journey. We also asked her about her recent capture the flag event.

Trusted CI: Tell us about your research interests and how that's tied into your transition to practice journey.

A.R.: My background was in computer science. I worked for a couple of years in the private sector where we had our very first breach. This was back in the day when security wasn't even something that was taught in the curriculum. That got me thinking. What is going on? Who's doing this? Why are they doing this? Because if we don't understand the who and the why, we're not really fighting an effective fight. I quit my job and went back to school and studied criminology because I wanted to combine these two things together.

Currently what I look at is adversarial behavior. How do groups make decisions to get the attacks done? How do they adapt? If they're either stuck or if they just don't know enough, what do they do? Looking at their decision making and adaptation I think is important.

There's a whole side of this that maps to the technical domain called intrusion chains: how an attack progresses and where the mitigation points are. My work complements that because I'm looking at how attackers progress through the chain. Can we cut or break the chain? What does that do to their actions, and how can we perhaps generalize our understanding of their behavior and their adaptation to then predict what they might do? Can we build better defenses that are predictive and anticipatory as opposed to being reactionary?
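
Her intrusion-chain framing can be pictured with a toy stage model. The sketch below is purely illustrative (hypothetical stage names, not from Dr. Rege's research): blocking any single stage "breaks the chain" and stops everything downstream.

    # Toy intrusion-chain model (hypothetical stages; real chains are richer).
    CHAIN = ["reconnaissance", "initial_access", "privilege_escalation",
             "lateral_movement", "exfiltration"]

    def attack_succeeds(mitigated_stages: set) -> bool:
        """An attack progresses stage by stage; blocking any stage breaks the chain."""
        for stage in CHAIN:
            if stage in mitigated_stages:
                print(f"Chain broken at: {stage}")
                return False
        return True

    print(attack_succeeds({"lateral_movement"}))  # defense mid-chain stops exfiltration
    print(attack_succeeds(set()))                 # unmitigated attack runs to completion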

Trusted CI: How does that create things of value to others?

A.R.: “Transitioning to practice” can mean many different things. For social scientists, I think it's a very different thing than developing open source software or having a patent. 

It's how we can inform practice and policy. How we can develop better tools. How we can work with the computer scientists, so their alerts work better or train their machine learning algorithms. We can work hand in hand and do these types of things. 

For example, for my NSF CAREER grant, I worked with the Michigan Cyber Range. The Michigan Cyber Range is a program that leverages the physical range to develop world-class cybersecurity professionals. We would observe their events and analyze our observations. Then we take that data back and ask, “could you manipulate the environment or the exercise itself to bring hurdles into the environment or block the attacking team?” These were some recommendations that we could give to make the actual event a little bit more effective and useful to the training. That's one example of our transition to practice. 

We also worked with computer scientists who used big data analytics on qualitative data. We've done time-series analysis to look at how attackers might spend different amounts of time in different stages of the intrusion chain.

We've used social network analysis to look at group dynamics and group behavior. For example, do we see groups of people in a team coalesce at certain points because certain techniques are needed or not? Or if there's a disruption, how are they going to come together to solve it?

More recently we used qualitative data to train a machine-learning algorithm. We gave it about half to 2/3 of the data, trained the model, and then asked it to predict what would happen next. When we aligned it with our actual observations, we found that about 63% to 70% of the predictions were in sync with what we had observed.
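
As a generic illustration of that train-then-predict workflow (a sketch with invented data and a deliberately simple model, not the team's actual method), coded observations can be treated as a sequence, trained on the first two-thirds and scored on the remainder:

    # Generic sketch: train on ~2/3 of a coded observation sequence, predict
    # the remainder (illustrative data and method, not the team's actual work).
    from collections import Counter, defaultdict

    observed = ["recon", "recon", "access", "escalate", "access", "escalate",
                "move", "exfil", "recon", "access", "escalate", "move"]

    split = (2 * len(observed)) // 3
    train, test = observed[:split], observed[split:]

    # First-order model: predict the most frequent next stage seen in training.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(train, train[1:]):
        transitions[prev][nxt] += 1

    def predict_next(stage):
        counts = transitions.get(stage)
        return counts.most_common(1)[0][0] if counts else stage

    correct = sum(predict_next(prev) == actual
                  for prev, actual in zip(observed[split - 1:], test))
    print(f"Prediction accuracy: {correct / len(test):.0%}")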

Trusted CI: How do you measure outcomes and success?

A.R.: To look at success, again, is different from the hard sciences. It takes a lot to do qualitative research. It's very time consuming. You must observe a team for an eight-hour cybersecurity exercise, which is not easy to do. You can't interact with them. But moving beyond our qualitative observations was just as challenging. 

That was another sort of methodological intersection point where we could see how these qualitative and, if I can call them that, quantitative data complement each other. How do they supplement each other? How do they contradict each other? These are all important things to look at. It helps us improve our methodologies and become better social scientists and more useful in this space.

Another interesting multidisciplinary methodological intersection was applying big-data analytics to the qualitative human behavior observations. The fact that it was doable, that we could provide a methodological proof of concept, is one aspect of success in and of itself, because to me that was methodological innovation. 

Trusted CI: What has helped you the most in this journey?

A.R.: I think it takes a team. You need to have multiple disciplines coincide. I gave a talk once where I said that cyber is not just technical. It's got social. It's got psychological. It's ethical, it's legal, and so much more. It's all these things combined. 

Just doing qualitative research allows me to only go so far, but when I bring in the expertise of computer scientists, or I get big-data people excited about the things we can do together, it is so much more effective and powerful and can really take not just our disciplines in new directions but also the field of cybersecurity itself. 

If big data analytics or big data helps you look at how much or how often or when you know these types of things, and you combine that with the power of the how and the why that qualitative research offers, I think you just amplified what you could do when you bring all of this together and this goes back into getting that holistic perspective. A healthier perspective, one that's better grounded. Is it perfect? Of course not, there are still all these other elements, but I think that combination - that coming together - is probably what gets me excited about this stuff and that's what has helped me. You need to have the right mindset – find people who are willing to listen, to have an open mind, to bend a little, to experiment – that’s how we’re going to break boundaries.

Trusted CI: Tell us more about your collaborations.

A.R.: Out of the three grants that I had, one was the CAREER grant. That was a nice partnership that I developed with a couple of cyber ranges and computer scientists at Temple University. And even though the grant didn't necessarily call for these types of collaborations, they just organically came about. We've done time series. We've done social network analysis. We've done machine learning. We've done simulations. To me, that's a major contribution to the field of methodologies in and of itself, but also to cybersecurity. The cyber-physical systems grant, which looked at power grids, cyberattacks, and cyber defense, brought different disciplines to the table.

Then more recently Jay Yang, a cybersecurity researcher and professor at the Rochester Institute of Technology, and I worked on an NSF EAGER project, where we're trying to combine our methodologies. 

I think the biggest thing that I appreciate in Jay is that he is a computer scientist who listens. They run the Collegiate Penetration Testing Competition (CPTC) at Rochester Institute of Technology. Some of us went there for three years in a row to observe the teams. We had our qualitative observers and they provided technical observers. These were students from computer science or engineering who also did observations with us in real-time as the competition unfolded. They were looking at what is being typed on the screen and what does that translate into in terms of actions that the team is doing. We were looking for things like group dynamics, who's talking to who, is there a division of labor based on skill sets, etc. That became interesting in merging those two data points together. And then we also had the alerts. Because there were so many alerts, the observations really helped zoom into the alerts at certain times to extract what was going on at the time in the logs. 


The logs capture certain actions that we don't, and we capture certain actions that the logs don't. What you don't get out of logs, and what I think people need to understand, is the decision-making that went into it. It's a bunch of people in a room having a conversation and deciding on something before their fingers even reach the keyboard.

It's at that point that they start typing. That's the aftermath of that decision-making process, so the logs have lost a key portion of it: deciding how to allocate your skill sets to get things done. So we brought that picture into the data, and we also helped zoom in and identify which parts of the alerts to look at. That helped sift through large amounts of data in a more informed manner.

But these collaborations are primarily on the academic side. I want to also emphasize the collaborations with the ethical hacker community – this is one of the brightest, most passionate, and most supportive communities that I have engaged with. What I want to emphasize is to keep that open mind and interact with others outside your silo (not just the social sciences to include computer science or computer engineering, but even outside of academia altogether).

Trusted CI: What are some other examples of TTP that came out of your work? 

A.R.: One of the areas I (and I suspect many academic researchers) struggled with is access to data. Oftentimes attack data are just not shared (for an assortment of understandable reasons), or they can be for a hefty price, which academics like me certainly cannot afford. I run the Cybersecurity in Action, Research and Education (CARE) Lab, which works on several NSF-funded projects. 

For my NSF CAREER grant, my team and I had to do a literature review of ransomware attacks against critical infrastructures to get an idea of the threat landscape. As we came across various cases, we decided to rehash them into a simple dataset and over time this grew from 162 incidents last September to 747 incidents to date. We decided to make this open/free to the wider community in an effort to help other educators and students. Well, we were surprised when our dataset was requested by industry and government.

We started getting positive feedback and even requests, some of which we have fulfilled (for example, mapping our dataset onto the MITRE ATT&CK framework), and we added a submission form where you can let us know of a publicly disclosed incident that we missed in our dataset. We now have a dedicated page on the CARE Lab’s website for this dataset. To date we have had over 400 download requests from educators, students, government, industry, researchers, and journalists. To me, this is a measure of success – a transition to practice. In fact, our dataset was recently covered in Security Week and the CARE Lab was also listed as a contributor to ransomware research efforts.

We update the dataset regularly. We have set up alerts that notify us about various critical infrastructure ransomware incidents. Once a month, we release the next iteration of the dataset and we also document the changes in that iteration. 

Trusted CI: What's coming up next for you?

A.R.: When you talk about transition to practice, I think there's another area, and that's educational practices. I'm wrapping up my CAREER grant this year. Last year I did my annual report and sent it off to be reviewed by my program officers. They asked, "What's your contribution to the field?" And that really forced me to think beyond just this space, beyond just publishing. For example, what do I have that people can take and use beyond the dataset that I just mentioned?

Education, training, and awareness are among the things my team and I at the CARE Lab have been working on. We have a repository of experiential learning course projects on social engineering. What is social engineering? Humans are considered to be the weakest link in cyberattacks/security. Social engineering is the psychological manipulation of humans to gain access to sensitive information or systems. Social engineering is often used in the very first stage of the intrusion chain, which is reconnaissance.

A well-recognized example of social engineering is phishing, but it can take so many other different forms. Given that social engineering leverages the human/social aspect, it easily and naturally falls in the social science domain for research and education.

I had developed course projects for my cybercrime class since fall 2017. These were vetted by the ethics board, and after completing about three iterations, I decided I could share these with other educators who wouldn’t have to develop instructions and rubrics – they could literally ‘click-and-run’ these projects into their existing courses. 

My team and I have mapped the course projects onto the National Initiative for Cybersecurity Education (NICE) cybersecurity workforce framework. The NICE Framework (National Institute of Standards and Technology Special Publication 800-181) is a nationally focused resource that categorizes and describes cybersecurity work. The NICE Framework establishes a taxonomy and common lexicon that describes cybersecurity work and workers irrespective of where or for whom the work is performed. It comprises seven Workforce Categories with a subset of 33 Specialty Areas, as well as Work Roles, Tasks, and Knowledge, Skills, and Abilities (KSAs).
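
As a trivial illustration of what such a mapping looks like in machine-readable form (the project names are hypothetical; the category and specialty-area labels are drawn from SP 800-181, but the pairings are invented for this example):

    # Illustrative NICE Framework mapping (hypothetical projects and pairings).
    nice_mapping = {
        "Phishing email analysis": {
            "category": "Protect and Defend",
            "specialty_area": "Cyber Defense Analysis",
        },
        "OSINT reconnaissance exercise": {
            "category": "Analyze",
            "specialty_area": "All-Source Analysis",
        },
    }

    for project, nice in nice_mapping.items():
        print(f"{project} -> {nice['category']} / {nice['specialty_area']}")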

Currently, we have about five social engineering course projects that are complete with instructions and rubrics and have been mapped onto the NICE Framework. People can request to download them. We've had almost 200 downloads worldwide, not only from educators and students who are looking for something like this that is hands-on, experiential learning, but also from industry and government to train their employees using fun and active learning as opposed to online quizzes.

Interestingly, most of the educators looking at these projects are from computer science, which is funny because this was intended as a social science course project, but now it's available to everyone. There's a need for this type of activity as well. And we've also created a social engineering incident dataset that is available for free and is fairly popular. The course projects and the dataset are available at the CARE Lab website.

So for me, transition to practice in that sense is also an important thing. There are a lot of things we can do as social scientists not just methodologically and contributing to research but having concrete deliverables that people can use. 

Trusted CI: Tell us about your recent capture the flag competition.

A.R.: In October, we held our very first collegiate social engineering capture the flag competition (SECTF) as part of Cybersecurity Awareness Month. The CARE Lab partnered with the Layer 8 Conference, which is the only conference in the world whose sole focus is social engineering and open source intelligence (OSINT). The Collegiate SECTF was not a technical competition, because there are plenty of great ones already; it focuses solely on the human, social, and psychological aspects that come into play during cyberattacks and cybersecurity. We had professional social engineering experts who served as judges.

I wanted to create a competition that was open to all disciplines. Unlike technical CTFs, which cater exclusively to computer science and engineering students, this social engineering CTF is for all fields, since the human factor cuts across all domains. It was a three-day event, and I'm delighted to say that everything went very smoothly.

And this goes back to my earlier points about working outside academic silos and finding the right people who listen. I want to particularly thank Patrick Laverty, who is the co-organizer of the Layer 8 conference. When I pitched the idea to him, he said yes in a heartbeat – he was so passionate and driven, and he believed in my vision. I couldn’t have done it without his support. It’s amazing what you can accomplish when you find the right people. Patrick and I are sharing our experiences from this inaugural SECTF at the upcoming NICE conference on November 5, 2020.

I felt so strongly about the need for bringing social engineering to the wider domain that I applied for an NSF grant.

And I'm excited to share that I recently found out that it has been funded (SaTC: EDU: Educating STEM Students and Teachers about the Relevance of Social Engineering in Cyberattacks and Cybersecurity). It will start next year, and so the SECTF competition will continue. You can check out the SECTF website at https://sites.temple.edu/socialengineering/.

Trusted CI: Have you worked with any of the other organizations that are doing capture the flag events to try and coordinate the technical ones with what you are trying to do?

A.R.: That's a great question, and I thought long and hard about this. If it's a combo, you're going to dilute the experience for both the technical and the social sides. That's not to say it can't be done. The idea here is: can we design something with a pure emphasis on the social and psychological? As this grows, we might consider a combined event.

For now I’m just excited to look at how we can bring social engineering to the wider cybersecurity education curriculum, develop experiential learning SE course projects, offer a SECTF that is ethical, safe, and fun, and build on the SE dataset that we have already started. I want to engage with the wider community to make the social sciences more mainstream in the cybersecurity discourse.


Monday, October 12, 2020

Trusted CI Webinar: Enforcing Security and Privacy Policies to Protect Research Data Mon Oct 26 @11am Eastern

University of Virginia's Yuan Tian is presenting the webinar, Enforcing Security and Privacy Policies to Protect Research Data, on Monday October 26th at 11am (Eastern). 

Please register here. Be sure to check your spam/junk folder for the registration confirmation email.

Advances in computer systems over the past decade have laid a solid foundation for data collection at a staggering scale. Data generated from end-user devices has tremendous value to the research community. For example, mobile and Internet-of-Things devices can participate in large-scale Internet-based measurement or in monitoring patients' health conditions. While ground-breaking discoveries may occur, malicious attacks or unintentional data leaks threaten the research data. Such a threat is hard to predict and difficult to recover from once it happens. Preventative and defensive measures should be taken where data are generated in order to protect private, valuable data from attackers. Currently, there are efforts that try to regulate data management; for example, a research application might have a privacy policy that describes how the user data is being collected and protected. However, there is a disconnect between these documented policies and the implementations of a research project.
In this talk, I'll present our research, which interprets documented policies automatically with NLP (natural language processing) and enforces them in the code of research projects, in order to protect the privacy of research data. This work can significantly reduce researchers' overhead in implementing policy-compliant code and reduce the complexity of protecting research datasets.
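
To give a flavor of that policy-to-code flow (a toy sketch in which keyword matching stands in for the NLP models the research actually uses; all names and rules are hypothetical):

    # Toy sketch: parse a privacy policy into rules, then enforce the rules at
    # the point of data collection (keyword matching stands in for real NLP).
    import re

    POLICY = "We collect location data only with user consent. We do not collect audio."
    DATA_TYPES = ("audio", "location", "contacts")

    def extract_rules(policy_text):
        """Naive stand-in for NLP policy interpretation."""
        rules = {}
        for sentence in re.split(r"\.\s*", policy_text.lower()):
            for dtype in DATA_TYPES:
                if dtype in sentence and "do not collect" in sentence:
                    rules[dtype] = "forbidden"
                elif dtype in sentence and "only with user consent" in sentence:
                    rules[dtype] = "consent_required"
        return rules

    def collect(data_type, rules, has_consent):
        """Enforcement hook guarding each data-collection call site."""
        rule = rules.get(data_type)
        if rule == "forbidden" or (rule == "consent_required" and not has_consent):
            print(f"Blocked collection of {data_type} (policy rule: {rule})")
            return False
        return True

    rules = extract_rules(POLICY)
    collect("location", rules, has_consent=False)  # blocked: consent required
    collect("audio", rules, has_consent=True)      # blocked: forbidden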
Speaker Bio:

Yuan Tian is an Assistant Professor of Computer Science at the University of Virginia. Her research focuses on security and privacy and its interactions with systems and machine learning. Her work has a real-world impact on platforms (such as iOS, Chrome, and Azure). She is a recipient of an NSF CAREER Award (2020), an Amazon Faculty Fellowship (2019), a CSAW Best Paper Award (2019), and Rising Stars in EECS (2016).

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Friday, October 2, 2020

Trusted CI Engagement Applications Deadline Extended to October 16, 2020

 

The application deadline has been extended to October 16, 2020.

Apply now for a one-on-one engagement with Trusted CI.

Trusted CI is accepting applications for one-on-one engagements to be executed in January-June 2021. Applications are due October 16, 2020.

To learn more about the process and criteria, and to complete the application form, visit our site: 

http://trustedci.org/application


During Trusted CI's first 5 years, we've conducted more than 24 one-on-one engagements with NSF-funded projects, Large Facilities, and major science service providers representing the full range of NSF science missions. We support a variety of engagement types, including: assistance in developing, improving, or evaluating an information security program; software assurance-focused efforts; identity management; technology or architectural evaluation; training for staff; and more.

As the NSF Cybersecurity Center of Excellence, Trusted CI’s mission is to provide the NSF community a coherent understanding of cybersecurity’s role in producing trustworthy science and the information and know-how required to achieve and maintain effective cybersecurity programs.

 

Thursday, October 1, 2020

Requesting Feedback on Initial Report and Upcoming Webinar: Guidance for Trustworthy Data Management in Science Projects

The Trustworthy Data Working Group has published an initial draft report at https://doi.org/10.5281/zenodo.4056241 on guidance for trustworthy data management in science projects.

We invite the community’s feedback on the initial version of this report and input toward our revisions via the working group mailing list. You may also send input directly to Jim Basney at jbasney@illinois.edu. Please attend the Science Gateways webinar on Wednesday, October 7th at 1pm Eastern, where Jim will be presenting an overview of the guidance report. 

This report builds on key findings from the group's previously published survey report regarding trustworthy data and provides recommendations to address those concerns. The report covers stakeholders of trustworthy data, the definition of trustworthiness, findings from the survey report, barriers to trustworthiness, tools and technologies for trustworthy data, and communication of trustworthiness.

We thank all the members of the Trustworthy Data Working Group for their help with developing this guidance as well as their participation throughout the year. The working group will be revising its guidance in November, incorporating community input received in October, to be included in the working group's final report in December.

Working group membership is open to all who are interested. Please visit https://www.trustedci.org/2020-trustworthy-data for details.

Wednesday, September 30, 2020

Thank you and congratulations to Florence Hudson!

Florence Hudson has been leading Trusted CI's transition to practice (TTP) efforts since 2018. She has been instrumental in fostering connections between researchers and practitioners and leading the creation of a suite of TTP resources based on best practices and successes. September 30th marks Florence's last day with Trusted CI and we wish Florence all the best in her role as Executive Director for the Northeast Big Data Innovation Hub.

Ryan Kiser has been working closely with Florence on TTP and will assume leadership of Trusted CI's TTP effort, supported by Sean Peisert, who brings a strong history of both research and practice in cybersecurity.

Von

Trusted CI PI and Director

Monday, September 28, 2020

Announcing Trusted CI's Open Science Cybersecurity Fellows Program (Applications due Nov.6th)

Application Deadline: Friday, Nov. 6th. Apply here.

Overview

Trusted CI serves the scientific community as the NSF Cybersecurity Center of Excellence, providing leadership in and assistance with cybersecurity in support of research. In 2019, Trusted CI established the Open Science Cybersecurity Fellows program. This program establishes and supports a network of Fellows with diversity in both geography and scientific discipline. These Fellows have access to training and other resources to foster their professional development in cybersecurity. In exchange, they champion cybersecurity for science in their scientific and geographic communities and communicate challenges and successful practices to Trusted CI.

About the program

The vision for the Fellows program is to identify members of the scientific community, empower them with basic knowledge of cybersecurity and an understanding of Trusted CI's services, and then have them serve as cybersecurity liaisons to their respective communities. They then assist members of the community with basic cybersecurity challenges and connect them with Trusted CI for advanced challenges.

Trusted CI will select six Fellows each year. Fellows will receive recognition and cybersecurity professional development consisting of training and travel funding. The Fellows' training will consist of a Virtual Institute, providing 20 hours of basic cybersecurity training over six months. The training will be delivered by Trusted CI staff and invited speakers. The Virtual Institute will be presented as a weekly series via Zoom and recorded to be publicly available for later online viewing. Travel support is budgeted (during their first year only) to cover Fellows' attendance at the NSF Cybersecurity Summit, PEARC, and one professional development opportunity agreed to with Trusted CI. The Fellows will be added to an email list to discuss any challenges they encounter, which will receive prioritized attention from Trusted CI staff. Trusted CI will recognize the Fellows on its website and social media. Fellowships are funded for one year, but Fellows will be encouraged to continue participating in Trusted CI activities in the years following their fellowship year.

After the Virtual Institute, Fellows, with assistance from the Trusted CI team, will be expected to help their science community with cybersecurity and make them aware of Trusted CI for complex needs. By the end of the year, they will be expected to present or write a short white paper on the cybersecurity needs of their community and some initial steps they will take (or have taken) to address these needs. After the year of full support, Trusted CI will continue recognizing the cohort of Fellows and giving them prioritized attention. Over the years, this growing cohort of Fellows will broaden and diversify Trusted CI’s impact.

Application requirements

  • A description of their connection to the research community. Any connection to NSF projects should be clearly stated, ideally providing the NSF award number.
  • A statement of interest in cybersecurity
  • A two-page biosketch
  • Optional demographic info
  • A letter from their supervisor supporting their involvement and time commitment to the program
  • A commitment to fully participate in the Fellows activities for one year (and optionally thereafter)

The selection of Fellows will be made by the Trusted CI PIs and Senior Personnel based on the following criteria:

  1. Demonstrated connection to scientific research, with preference given to those who demonstrate a connection to NSF-funded science.
  2. Articulated interest in cybersecurity.
  3. Fellows who broaden Trusted CI's impact across all seven NSF research directorates (Trusted CI encourages applications from individuals with connections to NSF directorates other than CISE), with connections to any of the NSF 10 Big Ideas, or Fellows who increase the participation of underrepresented populations.

Who should apply?   

  • Professionals and post-docs interested in cybersecurity for science, with evidence of that in their past and current role
  • Research Computing, Data, and IT technical or policy professionals interested in applying cybersecurity innovations to scientific research
  • Domain scientists interested in data integrity aspects of scientific research
  • Scientists from all across the seven NSF research directorates interested in how data integrity fits with their scientific mission
  • Researchers in the NSF 10 Big Ideas interested in cybersecurity needs
  • Regional network security personnel working across universities and facilities in their region
  • People comfortable collaborating and communicating across multiple institutions with IT / CISO / Research Computing and Data professionals
  • Anyone in a role relevant to cybersecurity for open science

More about the Fellowship

Fellows come from a variety of career stages, they demonstrate a passion for their area, the ability to communicate ideas effectively, and a real interest in the role of cybersecurity in research. Fellows are empowered to talk about cybersecurity to a wider audience, network with others who share a passion for cybersecurity for open science, and learn key skills that benefit them and their collaborators.

If you have questions about the Fellows program, please email us at fellows@trustedci.org.

Application Deadline: Friday, Nov 6, 2020. Apply here.

Applicants will be notified by Jan 15, 2021.