Friday, November 20, 2020

Open Science Cyber Risk Profile (OSCRP), and Data Confidentiality and Data Integrity Reports Updated

In April 2017, Trusted CI released the Open Science Cyber Risk Profile (OSCRP), a document designed to help principal investigators and their supporting information technology professionals assess cybersecurity risks related to open science projects. The OSCRP was the culmination of extensive discussions with research and education community leaders and has since become a widely used resource, including numerous references in recent National Science Foundation (NSF) solicitations.

The OSCRP has always been intended to be a living document. To gather material for continued refreshing of ideas, Trusted CI has spent the past couple of years performing in-depth examinations of additional topics for inclusion in a revised OSCRP. In 2019, Trusted CI examined the causes of random bit flips in scientific computing and the common measures used to mitigate their effects. Its report, “An Examination and Survey of Random Bit Flips and Scientific Computing,” was issued in December 2019. To address the community's need for insights on how to start thinking about computing on sensitive data, in 2020 Trusted CI examined data confidentiality issues and solutions in academic research computing. Its report, “An Examination and Survey of Data Confidentiality Issues and Solutions in Academic Research Computing,” was issued in September 2020.

Both reports have now been updated, and the current versions are available at the links in the report titles above. In conjunction, the Open Science Cyber Risk Profile (OSCRP) itself has been refreshed with insights from both the data confidentiality and data integrity reports.

All of these documents will continue to be living reports, updated over time to serve community needs. Comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.


Thursday, November 19, 2020

Trusted CI Webinar: Trustworthy Data panel Mon Dec 7 @11am Eastern

The Trustworthy Data Working Group is hosting a panel on Monday, December 7th at 11am (Eastern) to discuss tools, standards, and community practices for trustworthy scientific data sharing. Our panelists are:

Please register here. Be sure to check your spam/junk folder for the registration confirmation email.

The Trustworthy Data Working Group (TDWG) is a collaborative effort of Trusted CI, the four NSF Big Data Innovation Hubs, the NSF CI CoE Pilot, the Ostrom Workshop on Data Management and Information Governance, the NSF Engagement and Performance Operations Center (EPOC), the Indiana Geological and Water Survey, the Open Storage Network, and other interested community members. The goal of the working group is to understand scientific data security concerns and provide guidance on ensuring the trustworthiness of data.

This year the TDWG published a survey report about data security concerns and practices amongst the scientific community. Building on the insights of the survey report, the working group published a guidance report on trustworthy data for science projects, including science gateways, that covers the topics the panel will be discussing.

This panel is an opportunity to discuss the work of the TDWG in the larger context of related work by PresQT, NIST, and RDA-US.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Thursday, November 12, 2020

Thank you to Trusted CI alumni

Trusted CI, the NSF Cybersecurity Center of Excellence, has relied on expertise from its staff, multiple internationally recognized institutions, its advisory committee, and its collaboration with numerous NSF-funded research organizations to address the ongoing cybersecurity challenges facing higher education and high-performance scientific computing research. We also want to thank our alumni, who made significant contributions to our mission. We wish them the best in their ongoing endeavors.

Go to Trusted CI alumni to see some of the contributions that alumni have made. (Some of our alumni have opted not to appear on a public website.)


Wednesday, November 4, 2020

Trusted CI Offering Office Hours by Appointment

The purpose of Trusted CI office hours is to provide direct assistance to members of the open science security community. We have decided to move office hours to a "by appointment" format to be more flexible with community members' schedules. To request an appointment, contact us with the subject line “Office Hours” and any details you can provide about the question or problem you'd like to solve.

Monday, November 2, 2020

PEARC20: Another successful workshop and training at PEARC

Trusted CI had another successful exhibition at PEARC20.

We hosted our Fourth Workshop on Trustworthy Scientific Cyberinfrastructure for our largest audience to date. The topics covered during this year's workshop were:

  • Community Survey Results from the Trustworthy Data Working Group (slides)
    • Presenters: Jim Basney, NCSA / Trusted CI; Jeannette Dopheide, NCSA / Trusted CI; Kay Avila, NCSA / Trusted CI; Florence Hudson, Northeast Big Data Innovation Hub / Trusted CI
  • Characterization and Modeling of Error Resilience in HPC Applications (slides)
    • Presenter: Luanzheng Guo, University of California-Merced 
  • Trusted CI Fellows Panel (slides)
    • Moderator: Dana Brunson, Internet2
    • Panelists: Jerry Perez, University of Texas at Dallas; Laura Christopherson, Renaissance Computing Institute; Luanzheng Guo, University of California, Merced; Songjie Wang, University of Missouri; Smriti Bhatt, Texas A&M University - San Antonio; Tonya Davis, Alabama A&M University
  • Analysis of attacks targeting remote workers and scientific computing infrastructure during the COVID-19 pandemic at NCSA/UIUC (slides)
    • Presenters: Phuong Cao, NCSA / University of Illinois at Urbana-Champaign; Yuming Wu, Coordinated Science Laboratory / University of Illinois at Urbana-Champaign; Satvik Kulkarni, University of Illinois at Urbana-Champaign; Alex Withers, NCSA / University of Illinois at Urbana-Champaign; Chris Clausen, NCSA / University of Illinois at Urbana-Champaign
  • Regulated Data Security and Privacy: DFARS/CUI, CMMC, HIPAA, and GDPR (slides)
    • Presenters: Erik Deumens, University of Florida; Gabriella Perez, University of Iowa; Anurag Shankar, Indiana University
  • Securing Science Gateways with Custos Services (slides)
    • Presenters: Marlon Pierce, Indiana University; Enis Afgan, Johns Hopkins University; Suresh Marru, Indiana University; Isuru Ranawaka, Indiana University; Juleen Graham, Johns Hopkins University

We will post links to the recordings when they are made public.

In addition to the workshop, Trusted CI team member Kay Avila co-presented a Jupyter security tutorial titled “The Streetwise Guide to Jupyter Security” (event page) with Rick Wagner.  This presentation was based on the “Jupyter Security” training developed by Rick Wagner, Matthias Bussonnier, and Trusted CI’s Ishan Abhinit and Mark Krenz for the 2019 NSF Cybersecurity Summit.

Tuesday, October 27, 2020

Student Program at the 2020 NSF Cybersecurity Summit

In September we hosted our annual NSF Cybersecurity Summit for Large Facilities and Cyberinfrastructure. This year's event was hosted online, which presented some challenges as well as new opportunities to broaden our audience. This is especially the case with our student program. Without the budget constraints of travel and hotel accommodations, we were able to allocate more registration seats to students than in previous years. This year twenty-seven students, from inside and outside the US, joined us for three days of hands-on training, talks, panels, and active Q & A sessions.

Because our student attendance was significantly larger than in previous years, we decided to host a panel specifically targeted toward their interests. Prior to the Summit we asked the students to vote from a list of topics. They selected "Multidisciplinary in Cyber: Research, education, and industry." Our panelists (names listed below) included senior leadership at cybersecurity institutions as well as experts in crime and psychology. They talked about the various efforts in research, education, and industry to engage these domains and gain a more holistic approach to cybersecurity. One key takeaway was the panel's emphasis that a diverse group of backgrounds and interests can enhance an organization's or project's security posture. And some light but practical advice: remember to "Marie Kondo" your attack surface. The fewer apps you manage on your devices, the better off you will be.

We asked the students to share their thoughts on their experiences at the Summit. Below is a selection of their responses. These statements have been edited for brevity and clarity.

Posie Aagaard; Master's in Information Technology with Cyber Focus, University of Texas San Antonio (LinkedIn)

The conference was very well organized and information about the conference was clearly communicated in advance, which helped me plan. I appreciated the pre-recorded sessions and was able to view them in advance. Richard Biever’s and Ken Goodwin’s vulnerability scanning/honeypot session, and Pablo Morian’s session on cyberinfrastructure protection using machine learning, made me wonder, “Why didn’t I think of that?”

The session moderators did a super job of being punctual, informative, and keeping that human element in the virtual sessions. Thank you for recording some sessions, and thanks to the presenters for sharing slides as they were able.

I found Kate Starbird’s keynote on disinformation to be fascinating and relevant to the discussions organizations should be having about cybersecurity. The humor and candor in Jerry Brower’s "Gemini’s Policy Laxative" was welcome and informative. Susan Sons’s "Speak Their Language" included some gems of advice that I am sure I will reference in my future briefings.

The Tuesday training session was a great bonus. I liked the technical hands-on aspect and the ability to ask presenters questions along the way. I attended Bart Miller’s and Elisa Heymann’s session. They did a lot to prepare ahead of time.

Hristina Jovanoska; Associate's in Cybersecurity/Information Assurance, Ivy Tech Community College (LinkedIn)

I thank the NSF for choosing me to represent this year's student program. I learned so much that I will take with me in my career. It was a great experience for all three days of the Summit. Thank you again. Stay safe and healthy, and I'm looking forward to next year's Summit.

Krunal Mahant; Master's in Computer Security, Rochester Institute of Technology (LinkedIn)

This was my first experience as a student being part of an event involving such a large number of participants. The best part about the Summit was that everyone I interacted with had the same mindset as me: to share experiences and learn from one another. I attended the Web Security tools training session and was instantly inspired by the experiences shared by both Prof. Bart Miller and Elisa Heymann. Each session had so much information packed in such an organized way that it never felt overwhelming at all. Apart from the training day, I was part of the "Build a Cybersecurity Culture with Tabletop Security Exercises" session by Josh Drake. It was a fun and insightful experience, as topics were discussed that I had never imagined were important in building a security culture. It gave me good knowledge about how organizations in industry use techniques to develop security habits. All in all, my experience was great, and I would love to continue being part of the NSF Cybersecurity Summit each year. Thanks a lot for accepting me as a participant.

Eric Tatman; Bachelor's in Intelligent Systems Engineering, Indiana University Bloomington (LinkedIn)

I really enjoyed keynote speaker Kate Starbird's presentation on disinformation. I thought it was a very insightful presentation, particularly when she shared overlays of her models depicting how disinformation not only comes from opposing factions in contemporary social movements, but sometimes comes from the same source. This showed that some disinformation sources are not only acting on both sides, but also using their ability to manipulate what each faction is focused on at a given time.

Trusted CI thanks the members of the student panel, and the students themselves, for making this year's Summit a success. As always, the students' participation and enthusiasm is a rewarding affirmation of our commitment to community building.

Student Panel Moderators

  • Jeannette Dopheide - Senior Education, Outreach, and Training Coordinator at the National Center for Supercomputing Applications (NCSA) at the University of Illinois
  • Aunshul Rege - Associate Professor at Temple University

Panelists

  • Dr. Tonya Davis - Assistant Professor at Alabama A&M University
  • Kevin Metcalf - Chief Executive Officer for the National Child Protection Task Force
  • Helen Patton - Chief Information Security Officer at the Ohio State University
  • Rodney Petersen - Director of the National Initiative for Cybersecurity Education (NICE) at the National Institute of Standards and Technology (NIST)


Thursday, October 15, 2020

Transition to practice success story: The behavioral side of cybersecurity

An interview with social scientist Aunshul Rege

Proactive cybersecurity must include the who and the why

Aunshul Rege, Ph.D., is an associate professor in the Department of Criminal Justice at Temple University. Dr. Rege is a social scientist who looks at the behavioral side of cybersecurity – “proactive cybersecurity, focusing on adversarial behavior, decision-making, movement, adaptation to disruption, and group dynamics.” (Also see her research website.)

She is also a 2019 Trusted CI Open Science Cybersecurity Fellow. Her current research includes two National Science Foundation grants: 

She is also a recipient of a new NSF SaTC EDU grant that focuses on cybersecurity education that emphasizes the human factor.

Trusted CI spoke with Rege about her transition to practice (TTP) journey. We also asked her about her recent capture the flag event.

Trusted CI: Tell us about your research interests and how that's tied into your transition to practice journey.

A.R.: My background was in computer science. I worked for a couple of years in the private sector where we had our very first breach. This was back in the day when security wasn't even something that was taught in the curriculum. That got me thinking. What is going on? Who's doing this? Why are they doing this? Because if we don't understand the who and the why, we're not really fighting an effective fight. I quit my job and went back to school and studied criminology because I wanted to combine these two things together.

Currently what I look at is adversarial behavior. How do groups make decisions to get the attacks done? How do they adapt? If they're either stuck or if they just don't know enough, what do they do? Looking at their decision making and adaptation I think is important.

There's a whole side of this that maps to the technical domain called intrusion chains—how does it progress and what are the mitigation points. My work complements that because I'm looking at how they progress through the chain. Can we cut or break the chain? What does that do to their actions, and how can we perhaps generalize our understanding of their behavior and their adaptation to then predict what they might do? Can we build better defenses that are predictive and anticipatory as opposed to being reactionary?

Trusted CI: How does that create things of value to others?

A.R.: “Transitioning to practice” can mean many different things. For social scientists, I think it's a very different thing than developing open source software or having a patent. 

It's how we can inform practice and policy. How we can develop better tools. How we can work with the computer scientists, so their alerts work better or train their machine learning algorithms. We can work hand in hand and do these types of things. 

For example, for my NSF CAREER grant, I worked with the Michigan Cyber Range. The Michigan Cyber Range is a program that leverages the physical range to develop world-class cybersecurity professionals. We would observe their events and analyze our observations. Then we take that data back and ask, “could you manipulate the environment or the exercise itself to bring hurdles into the environment or block the attacking team?” These were some recommendations that we could give to make the actual event a little bit more effective and useful to the training. That's one example of our transition to practice. 

We also worked with computer scientists who used big data analytics on qualitative data. We've done time-series analysis to look at how attackers might spend different amounts of time in different stages of the intrusion chain.

We've used social network analysis to look at group dynamics and group behavior. For example, do we see groups of people in a team coalesce at certain points because certain techniques are needed or not? Or if there's a disruption, how are they going to come together to solve it?

More recently we used qualitative data to train a machine-learning algorithm. We gave it about half to two-thirds of the data, trained the model, and then asked it to predict what would happen next. When we aligned the predictions with our actual observations, we found that for the most part about 63% to 70% of the predictions were in sync with what we had observed.
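For readers curious what that kind of train-and-predict split looks like in practice, here is a minimal, purely illustrative sketch. The stage labels and the simple most-frequent-successor model are assumptions made for this example, not the CARE Lab's actual pipeline: it codes observations as intrusion-chain stages, learns transition frequencies from the first two-thirds of the sequence, and measures agreement on the held-out remainder.

```python
# Illustrative toy only: the stage labels and the most-frequent-successor
# model are assumptions for this sketch, not the CARE Lab's actual method.
from collections import Counter, defaultdict

# Toy sequence of qualitatively coded attack stages.
observations = ["recon", "recon", "access", "access", "escalate", "exfil"] * 10

# Train on roughly the first two-thirds, hold out the rest.
split = (2 * len(observations)) // 3
train, test = observations[:split], observations[split:]

# Count which stage tends to follow which in the training portion.
transitions = defaultdict(Counter)
for prev, nxt in zip(train, train[1:]):
    transitions[prev][nxt] += 1

def predict_next(stage):
    """Predict the most frequently observed successor of a stage."""
    counts = transitions.get(stage)
    return counts.most_common(1)[0][0] if counts else stage

# Compare predictions against the held-out observations.
correct = sum(predict_next(p) == actual for p, actual in zip(test, test[1:]))
accuracy = correct / (len(test) - 1)
print(f"agreement with held-out observations: {accuracy:.0%}")
```

A real analysis would use far richer features (timing, team communication, alert data) and a proper sequence model; the point here is only the shape of the evaluation: train on part of the observed chain, predict the remainder, and score agreement with what was actually observed.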

Trusted CI: How do you measure outcomes and success?

A.R.: To look at success, again, is different from the hard sciences. It takes a lot to do qualitative research. It's very time consuming. You must observe a team for an eight-hour cybersecurity exercise, which is not easy to do. You can't interact with them. But moving beyond our qualitative observations was just as challenging. 

That was another methodological intersection point where we could see how these qualitative and, if I can call them that, quantitative data complement each other. How do they supplement each other? How do they contradict each other? These are all important things to look at. It helps us improve our methodologies and become better social scientists and more useful in this space.

Another interesting multidisciplinary methodological intersection was applying big-data analytics to the qualitative human behavior observations. The fact that it was doable, that we could provide a methodological proof of concept, is one aspect of success in and of itself, because to me that was methodological innovation. 

Trusted CI: What has helped you the most in this journey?

A.R.: I think it takes a team. You need to have multiple disciplines coincide. I gave a talk once where I said that cyber is not just technical. It's got social. It's got psychological. It's ethical, it's legal, and so much more. It's all these things combined. 

Just doing qualitative research allows me to only go so far, but when I bring in the expertise of computer scientists, or I get big-data people excited about the things we can do together, it is so much more effective and powerful and can really take not just our disciplines in new directions but also the field of cybersecurity itself. 

If big data analytics helps you look at the how much, how often, and when, and you combine that with the power of the how and the why that qualitative research offers, I think you amplify what you can do when you bring all of this together, and this goes back to getting that holistic perspective. A healthier perspective, one that's better grounded. Is it perfect? Of course not; there are still all these other elements. But I think that combination, that coming together, is probably what gets me excited about this stuff, and that's what has helped me. You need to have the right mindset – find people who are willing to listen, to have an open mind, to bend a little, to experiment – that’s how we’re going to break boundaries.

Trusted CI: Tell us more about your collaborations.

A.R.: Of the three grants I've had, one was the CAREER grant. That was a nice partnership I developed with a couple of cyber ranges and computer scientists at Temple University. Even though the grant didn't necessarily call for these types of collaborations, they just came about organically. We've done time series. We've done social network analysis. We've done machine learning. We've done simulations. To me, that's a major contribution to the field of methodologies in and of itself, but also to cybersecurity. The cyber-physical systems grant, which looked at power grids, cyberattacks, and cyber defense, brought different disciplines to the table.

Then more recently Jay Yang, a cybersecurity researcher and professor at the Rochester Institute of Technology, and I worked on an NSF EAGER project, where we're trying to combine our methodologies. 

I think the biggest thing that I appreciate in Jay is that he is a computer scientist who listens. They run the Collegiate Penetration Testing Competition (CPTC) at the Rochester Institute of Technology. Some of us went there for three years in a row to observe the teams. We had our qualitative observers and they provided technical observers. These were students from computer science or engineering who did observations with us in real time as the competition unfolded. They were looking at what was being typed on the screen and what that translated into in terms of actions the team was taking. We were looking for things like group dynamics: who's talking to whom, whether there's a division of labor based on skill sets, etc. It became interesting to merge those two data points together. And then we also had the alerts. Because there were so many alerts, the observations really helped us zoom into the alerts at certain times and extract what was going on in the logs.


The logs capture certain actions that we don't, and we capture certain actions that the logs don't. What you don't get out of logs, and what I think people need to understand, is the decision-making that went into them. It's a bunch of people in a room having a conversation and deciding on something before their fingers even reach the keyboard.

It's at that point that they start typing. That's the aftermath of the decision-making process, so you've just lost a key portion of that decision-making: deciding how to allocate your skill sets to get things done. So we brought that picture into the data, and we also helped zoom in and identify which parts of the alerts to look at. That helped sift through large amounts of data in a more informed manner.

But these collaborations are primarily on the academic side. I want to also emphasize the collaborations with the ethical hacker community – this is one of the brightest, most passionate, and most supportive communities that I have engaged with. What I want to emphasize is to keep that open mind and interact with others outside your silo (not just the social sciences to include computer science or computer engineering, but even outside of academia altogether).

Trusted CI: What are some other examples of TTP that came out of your work? 

A.R.: One of the areas I (and I suspect many academic researchers) struggled with is access to data. Oftentimes attack data are simply not shared (for an assortment of understandable reasons), or they come at a hefty price, which academics like me certainly cannot afford. I run the Cybersecurity in Action, Research and Education (CARE) Lab, which works on several NSF-funded projects.

For my NSF CAREER grant, my team and I had to do a literature review of ransomware attacks against critical infrastructure to get an idea of the threat landscape. As we came across various cases, we decided to distill them into a simple dataset, and over time this grew from 162 incidents last September to 747 incidents to date. We decided to make it open and free to the wider community in an effort to help other educators and students. Well, we were surprised when our dataset was requested by industry and government.

We started getting positive feedback and even requests, some of which we have fulfilled (for example, mapping our dataset onto the MITRE ATT&CK framework and adding a submission form where you can let us know of a publicly disclosed incident that we missed). We now have a dedicated page on the CARE Lab’s website for this dataset. To date we have had over 400 download requests from educators, students, government, industry, researchers, and journalists. To me, this is a measure of success – a transition to practice. In fact, our dataset was recently covered in Security Week, and the CARE Lab was also listed as a contributor to ransomware research efforts.

We update the dataset regularly. We have set up alerts that notify us about various critical infrastructure ransomware incidents. Once a month, we release the next iteration of the dataset and we also document the changes in that iteration. 

Trusted CI: What's coming up next for you?

A.R.: When you talk about transition to practice, I think there's another area, and that's educational practices. I'm wrapping up my CAREER grant this year. Last year, when I sent my annual report off to be reviewed by my program officers, they asked, “What's your contribution to the field?” That really forced me to think beyond just this space, beyond just publishing. For example, what do I have that people can take and use beyond the dataset I just mentioned?

Education, training, and awareness are among the things my team and I at the CARE Lab have been doing. We have a repository of experiential learning course projects on social engineering. What is social engineering? Humans are considered the weakest link in cyberattacks and cybersecurity. Social engineering is the psychological manipulation of humans to gain access to sensitive information or systems. It is often used in the very first stage of the intrusion chain: reconnaissance.

A well-recognized example of social engineering is phishing, but it can take so many other different forms. Given that social engineering leverages the human/social aspect, it easily and naturally falls in the social science domain for research and education.

I have been developing course projects for my cybercrime class since fall 2017. These were vetted by the ethics board, and after completing about three iterations, I decided I could share them with other educators, who wouldn’t have to develop instructions and rubrics – they could literally ‘click-and-run’ these projects in their existing courses.

My team and I have mapped the course projects onto the National Initiative for Cybersecurity Education (NICE) cybersecurity workforce framework. The NICE Framework (National Institute of Standards and Technology Special Publication 800-181) is a nationally focused resource that categorizes and describes cybersecurity work. It establishes a taxonomy and common lexicon describing cybersecurity work and workers, irrespective of where or for whom the work is performed. It comprises seven Workforce Categories with a subset of 33 Specialty Areas, as well as Work Roles, Tasks, and Knowledge, Skills, and Abilities (KSAs).

Currently, we have about five social engineering course projects that are complete with instructions and rubrics and have been mapped onto the NICE Framework. People can request to download them. We’ve had almost 200 downloads worldwide, not only from educators and students looking for this kind of hands-on, experiential learning, but also from industry and government looking to train their employees using fun, active learning as opposed to online quizzes.

Interestingly, most of the educators looking at these projects are from computer science, which is funny because this was intended as a social science course project, but now it's available to everyone. There's a need for this type of activity as well. We've also created a social engineering incident dataset that is available for free and is fairly popular. The course projects and the dataset are available at the CARE Lab website.

So for me, transition to practice in that sense is also an important thing. There is a lot we can do as social scientists, not just methodologically and by contributing to research, but also by producing concrete deliverables that people can use.

Trusted CI: Tell us about your recent capture the flag competition.

A.R.: In October, we held our very first collegiate social engineering capture the flag competition (SECTF) as part of Cybersecurity Awareness Month. The CARE Lab partnered with the Layer 8 Conference, the only conference in the world whose sole focus is social engineering and open source intelligence (OSINT). The Collegiate SECTF was not a technical competition – there are plenty of great ones already; it focused solely on the human, social, and psychological aspects that come into play during cyberattacks and cybersecurity. We had professional social engineering experts who served as judges.

I wanted to create a competition that was open to all disciplines. Unlike technical CTFs, which cater exclusively to computer science and engineering students, this social engineering CTF is for all fields, since the human factor cuts across all domains. It was a three-day event, and I’m delighted to say that everything went very smoothly.

And this goes back to my earlier points about working outside academic silos and finding the right people who listen. I want to particularly thank Patrick Laverty, who is the co-organizer of the Layer 8 conference. When I pitched the idea to him, he said yes in a heartbeat – he was so passionate and driven, and he believed in my vision. I couldn’t have done it without his support. It’s amazing what you can accomplish when you find the right people. Patrick and I are sharing our experiences from this inaugural SECTF at the upcoming NICE conference on November 5, 2020.

I felt so strongly about the need for bringing social engineering to the wider domain that I applied for an NSF grant.

And I’m excited to share that I recently found out that it has been funded (SaTC: EDU: Educating STEM Students and Teachers about the Relevance of Social Engineering in Cyberattacks and Cybersecurity). It will start next year, and the SECTF competition will continue. You can check out the SECTF website at https://sites.temple.edu/socialengineering/.

Trusted CI: Have you worked with any of the other organizations that are doing capture the flag events to try to coordinate the technical ones with what you are trying to do?

A.R.: That's a great question, and I thought long and hard about this. If it's a combo, you're going to dilute the experience for both the technical and the social. That's not to say it can't be done. The idea here is: can we design something with a pure emphasis on the social and psychological? As this grows, we might consider a combined event.

For now I’m just excited to look at how we can bring social engineering to the wider cybersecurity education curriculum, develop experiential learning SE course projects, offer a SECTF that is ethical, safe, and fun, and build on the SE dataset that we have already started. I want to engage with the wider community to make the social sciences more mainstream in the cybersecurity discourse.