Thursday, October 15, 2020

Transition to practice success story: The behavioral side of cybersecurity

An interview with social scientist Aunshul Rege

Proactive cybersecurity must include the who and the why

Aunshul Rege, Ph.D., is an associate professor in the Department of Criminal Justice at Temple University. Dr. Rege is a social scientist who looks at the behavioral side of cybersecurity – “proactive cybersecurity, focusing on adversarial behavior, decision-making, movement, adaptation to disruption, and group dynamics.” (Also see her research website.)

She is also a 2019 Trusted CI Open Science Cybersecurity Fellow. Her current research includes two National Science Foundation grants.

She is also a recipient of a new NSF SaTC EDU grant that focuses on cybersecurity education that emphasizes the human factor.

Trusted CI spoke with Rege about her transition to practice (TTP) journey. We also asked her about her recent capture the flag event.

Trusted CI: Tell us about your research interests and how that's tied into your transition to practice journey.

A.R.: My background was in computer science. I worked for a couple of years in the private sector where we had our very first breach. This was back in the day when security wasn't even something that was taught in the curriculum. That got me thinking. What is going on? Who's doing this? Why are they doing this? Because if we don't understand the who and the why, we're not really fighting an effective fight. I quit my job and went back to school and studied criminology because I wanted to combine these two things together.

Currently what I look at is adversarial behavior. How do groups make decisions to get the attacks done? How do they adapt? If they're either stuck or if they just don't know enough, what do they do? Looking at their decision making and adaptation I think is important.

There's a whole side of this that maps to the technical domain called intrusion chains: how an attack progresses and what the mitigation points are. My work complements that because I'm looking at how attackers progress through the chain. Can we cut or break the chain? What does that do to their actions, and how can we perhaps generalize our understanding of their behavior and their adaptation to then predict what they might do? Can we build better defenses that are predictive and anticipatory as opposed to reactionary?

Trusted CI: How does that create things of value to others?

A.R.: “Transitioning to practice” can mean many different things. For social scientists, I think it's a very different thing than developing open source software or having a patent. 

It's how we can inform practice and policy. How we can develop better tools. How we can work with the computer scientists, so their alerts work better or train their machine learning algorithms. We can work hand in hand and do these types of things. 

For example, for my NSF CAREER grant, I worked with the Michigan Cyber Range, a program that leverages the physical range to develop world-class cybersecurity professionals. We would observe their events and analyze our observations. Then we would take that data back and ask, “could you manipulate the environment or the exercise itself to bring hurdles into the environment or block the attacking team?” These were some recommendations we could give to make the actual event a little more effective and useful for training. That's one example of our transition to practice.

We also worked with computer scientists who used big data analytics on qualitative data. We've done time-series analysis to look at how attackers might spend different amounts of time in different stages of the intrusion chain.
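
As a toy sketch of that kind of time-series view (the stage names, timestamps, and helper function below are invented for illustration, not taken from the actual study), one can compute how long a team spent in each stage from timestamped observation codes:

```python
from datetime import datetime

# Hypothetical timestamped observation codes (times and stage names invented).
observations = [
    ("09:00", "reconnaissance"),
    ("09:45", "exploitation"),
    ("10:30", "lateral-movement"),
    ("11:10", "exfiltration"),
    ("11:40", "end"),
]

def stage_durations(obs):
    """Minutes spent in each stage, measured to the next observation."""
    fmt = "%H:%M"
    durations = {}
    for (t0, stage), (t1, _) in zip(obs, obs[1:]):
        delta = datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)
        durations[stage] = durations.get(stage, 0) + delta.seconds // 60
    return durations

print(stage_durations(observations))
```

Even this crude tally surfaces the kind of question the interview describes: which stages attackers linger in, and where a defender's disruption would cost them the most time.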

We've used social network analysis to look at group dynamics and group behavior. For example, do we see groups of people in a team coalesce at certain points because certain techniques are needed or not? Or if there's a disruption, how are they going to come together to solve it?
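
One simple network measure behind that kind of question is graph density: the fraction of possible member pairs that actually interacted in a given time window. A minimal sketch with invented team members and interactions (not data from the actual observations):

```python
from itertools import combinations

# Hypothetical who-talked-to-whom edges per exercise phase (names invented).
team = {"ana", "ben", "cai", "dee"}
windows = {
    "routine":    [("ana", "ben")],
    "disruption": [("ana", "ben"), ("ben", "cai"), ("ana", "cai"), ("cai", "dee")],
}

def density(edges, nodes):
    """Fraction of possible member pairs that actually interacted."""
    possible = len(list(combinations(sorted(nodes), 2)))
    unique = {frozenset(e) for e in edges}  # undirected, deduplicated
    return len(unique) / possible

for phase, edges in windows.items():
    print(f"{phase}: density {density(edges, team):.2f}")
```

A density that jumps during a disruption would be one quantitative trace of the team "coming together to solve it."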

More recently we used qualitative data to train a machine-learning algorithm. We gave it about half to two-thirds of the data, trained the model, and then asked it to predict what would happen next. When we aligned the predictions with our actual observations, we found that, for the most part, about 63% to 70% of the predictions were in sync with what we had observed.
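
As a rough illustration of the train-then-predict idea (the model, stage labels, and sequence below are invented, not the study's actual method or data), a simple Markov-style predictor can be fit on roughly the first two-thirds of a coded stage sequence and scored on the rest:

```python
from collections import Counter, defaultdict

# Hypothetical coded sequence of observed intrusion-chain stages.
sequence = ["recon", "recon", "exploit", "pivot", "exfil",
            "recon", "exploit", "pivot", "exfil",
            "recon", "exploit", "exploit", "pivot", "exfil"]

split = 2 * len(sequence) // 3               # train on roughly two-thirds
train, test = sequence[:split + 1], sequence[split:]

# Markov-style model: for each stage, the most frequently observed next stage.
transitions = defaultdict(Counter)
for cur, nxt in zip(train, train[1:]):
    transitions[cur][nxt] += 1
predict = {s: c.most_common(1)[0][0] for s, c in transitions.items()}

# Score the model against the held-out portion of the sequence.
hits = sum(predict.get(cur) == nxt for cur, nxt in zip(test, test[1:]))
print(f"accuracy: {hits / (len(test) - 1):.0%}")
```

Aligning the model's predictions with what was actually observed, as in the last two lines, is the same kind of comparison the 63% to 70% figure comes from.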

Trusted CI: How do you measure outcomes and success?

A.R.: Measuring success, again, is different than in the hard sciences. It takes a lot to do qualitative research; it's very time consuming. You must observe a team through an eight-hour cybersecurity exercise, which is not easy to do, and you can't interact with them. But moving beyond our qualitative observations was just as challenging.

That was another methodological intersection point where we could see how these qualitative and, if I can call them that, quantitative data complement each other. How do they supplement each other? How do they contradict each other? These are all important things to look at. It helps us improve our methodologies and become better social scientists and more useful in this space.

Another interesting multidisciplinary methodological intersection was applying big-data analytics to the qualitative human behavior observations. The fact that it was doable, that we could provide a methodological proof of concept, is one aspect of success in and of itself, because to me that was methodological innovation. 

Trusted CI: What has helped you the most in this journey?

A.R.: I think it takes a team. You need to have multiple disciplines coincide. I gave a talk once where I said that cyber is not just technical. It's got social. It's got psychological. It's ethical, it's legal, and so much more. It's all these things combined. 

Just doing qualitative research allows me to only go so far, but when I bring in the expertise of computer scientists, or I get big-data people excited about the things we can do together, it is so much more effective and powerful and can really take not just our disciplines in new directions but also the field of cybersecurity itself. 

If big data analytics helps you look at how much, how often, and when, and you combine that with the power of the how and the why that qualitative research offers, I think you just amplify what you can do. Bringing all of this together goes back to getting that holistic perspective. A healthier perspective, one that's better grounded. Is it perfect? Of course not; there are still all these other elements. But I think that combination, that coming together, is probably what gets me excited about this stuff, and that's what has helped me. You need to have the right mindset – find people who are willing to listen, to have an open mind, to bend a little, to experiment – that’s how we’re going to break boundaries.

Trusted CI: Tell us more about your collaborations.

A.R.: Out of the three grants that I had, one was the CAREER grant. That was a nice partnership that I developed with a couple of cyber ranges and computer scientists at Temple University. And even though the grant didn't necessarily call for these types of collaborations, they just organically came about. We've done time series. We've done social network analysis. We've done machine learning. We've done simulations. To me, that's a major contribution to the field of methodologies in and of itself, but also to cybersecurity. The cyber-physical systems grant, which looked at power grids, cyberattacks, and cyber defense, brought different disciplines to the table.

Then more recently Jay Yang, a cybersecurity researcher and professor at the Rochester Institute of Technology, and I worked on an NSF EAGER project, where we're trying to combine our methodologies. 

I think the biggest thing I appreciate about Jay is that he is a computer scientist who listens. RIT runs the Collegiate Penetration Testing Competition (CPTC). Some of us went there three years in a row to observe the teams. We had our qualitative observers, and they provided technical observers: students from computer science or engineering who also made observations with us in real time as the competition unfolded. They were looking at what was being typed on the screen and what that translated into in terms of the actions the team was taking. We were looking for things like group dynamics: who's talking to whom, is there a division of labor based on skill sets, and so on. Merging those two sets of observations became interesting. And then we also had the alerts. Because there were so many alerts, the observations really helped us zoom into the alerts at certain times to extract what was going on in the logs.

The logs capture certain actions that we don't, and we capture certain actions that the logs don't. What you don't get out of logs, and what I think people need to understand, is the decision-making that went into them. It's a bunch of people in a room having a conversation and deciding on something before their fingers even reach the keyboard.

It's at that point that they start typing. The typing is the aftermath of that decision-making process, so the logs have lost a key portion of it: deciding how to allocate skill sets to get things done. So we brought that picture into the data, and we also helped zoom in and identify which part of the alerts to look at. That helped sift through large amounts of data in a more informed manner.

But these collaborations are primarily on the academic side. I want to also emphasize the collaborations with the ethical hacker community – this is one of the brightest, most passionate, and most supportive communities that I have engaged with. What I want to emphasize is to keep that open mind and interact with others outside your silo (not just the social sciences to include computer science or computer engineering, but even outside of academia altogether).

Trusted CI: What are some other examples of TTP that came out of your work? 

A.R.: One of the areas I (and I suspect many academic researchers) have struggled with is access to data. Oftentimes attack data are simply not shared (for an assortment of understandable reasons), or they are only available for a hefty price, which academics like me certainly cannot afford. I run the Cybersecurity in Action, Research and Education (CARE) Lab, which works on several NSF-funded projects.

For my NSF CAREER grant, my team and I had to do a literature review of ransomware attacks against critical infrastructures to get an idea of the threat landscape. As we came across various cases, we decided to compile them into a simple dataset, and over time this grew from 162 incidents last September to 747 incidents to date. We decided to make this open and free to the wider community in an effort to help other educators and students. Well, we were surprised when our dataset was requested by industry and government.

We started getting positive feedback and even requests, some of which we have fulfilled (for example, mapping our dataset onto the MITRE ATT&CK framework), and we set up a submission form where you can let us know of a publicly disclosed incident that we missed in our dataset. We now have a dedicated page on the CARE Lab’s website for this dataset. To date we have had over 400 download requests from educators, students, government, industry, researchers, and journalists. To me, this is a measure of success – a transition to practice. In fact, our dataset was recently covered in SecurityWeek, and the CARE Lab was also listed as a contributor to ransomware research efforts.

We update the dataset regularly. We have set up alerts that notify us about various critical infrastructure ransomware incidents. Once a month, we release the next iteration of the dataset and we also document the changes in that iteration. 

Trusted CI: What's coming up next for you?

A.R.: When you talk about transition to practice, I think there's another area, and that's educational practices. I'm wrapping up my CAREER grant this year, and last year I did my annual report and sent it off to be reviewed by my program officers. They ask, “what’s your contribution to the field?” And that really forced me to think beyond just this space, beyond just publishing. For example, what do I have that people can take and use, beyond the dataset that I just mentioned?

Education, training, and awareness are among the things my team and I at the CARE Lab have been working on. We have a repository of experiential learning course projects on social engineering. What is social engineering? Humans are considered to be the weakest link in cybersecurity. Social engineering is the psychological manipulation of humans to gain access to sensitive information or systems. It is often used in the very first stage of the intrusion chain: reconnaissance.

A well-recognized example of social engineering is phishing, but it can take so many other different forms. Given that social engineering leverages the human/social aspect, it easily and naturally falls in the social science domain for research and education.

I have been developing course projects for my cybercrime class since fall 2017. These were vetted by the ethics board, and after about three iterations, I decided I could share them with other educators, who wouldn’t have to develop instructions and rubrics – they could literally ‘click-and-run’ these projects in their existing courses.

My team and I have mapped the course projects onto the National Initiative for Cybersecurity Education (NICE) cybersecurity workforce framework. The NICE Framework (National Institute of Standards and Technology Special Publication 800-181) is a nationally focused resource that categorizes and describes cybersecurity work. It establishes a taxonomy and common lexicon that describes cybersecurity work and workers irrespective of where or for whom the work is performed. It comprises seven Workforce Categories with a subset of 33 Specialty Areas, as well as Work Roles, Tasks, and Knowledge, Skills, and Abilities (KSAs).

Currently, we have about five social engineering course projects, complete with instructions and rubrics, that have been mapped onto the NICE Framework. People can request to download them. We’ve had almost 200 downloads worldwide, not only from educators and students looking for this kind of hands-on, experiential learning, but also from industry and government to train their employees using fun and active learning as opposed to online quizzes.

Interestingly, most of the educators looking at these projects are from computer science, which is funny because this was intended as a social science course project, but now it's available to everyone. There's a need for this type of activity as well. We’ve also created a social engineering incident dataset that is available for free and is fairly popular. The course projects and the dataset are available at the CARE Lab website.

So for me, transition to practice in that sense is also important. There is a lot we can do as social scientists, not just methodologically and in contributing to research, but in producing concrete deliverables that people can use.

Trusted CI: Tell us about your recent capture the flag competition.

A.R.: In October, we held our very first collegiate social engineering capture the flag competition (SECTF) as part of Cybersecurity Awareness Month. The CARE Lab partnered with the Layer 8 Conference, the only conference in the world whose sole focus is social engineering and open source intelligence (OSINT). The Collegiate SECTF was not a technical competition – there are plenty of great ones already; it focused solely on the human, social, and psychological aspects that come into play during cyberattacks and cybersecurity. We had professional social engineering experts who served as judges.

I wanted to create a competition that was open to all disciplines. Unlike technical CTFs, which cater exclusively to computer science and engineering students, this social engineering CTF is for all fields, since the human factor cuts across all domains. It was a three-day event, and I’m delighted to say that everything went very smoothly.

And this goes back to my earlier points about working outside academic silos and finding the right people who listen. I want to particularly thank Patrick Laverty, who is the co-organizer of the Layer 8 conference. When I pitched the idea to him, he said yes in a heartbeat – he was so passionate and driven, and he believed in my vision. I couldn’t have done it without his support. It’s amazing what you can accomplish when you find the right people. Patrick and I are sharing our experiences from this inaugural SECTF at the upcoming NICE conference on November 5, 2020.

I felt so strongly about the need for bringing social engineering to the wider domain that I applied for an NSF grant.

And I’m excited to share that I recently found out that it has been funded (SaTC: EDU: Educating STEM Students and Teachers about the Relevance of Social Engineering in Cyberattacks and Cybersecurity). It will start next year, and so the SECTF competition will continue. You can check out the SECTF website at

Trusted CI: Have you worked with any of the other organizations that are doing capture the flag events to try and coordinate the technical ones with what you are trying to do?

A.R.: That's a great question, and I thought long and hard about this. If it's a combo, you're going to dilute the experience for both the technical and the social sides. That's not to say it can't be done. The idea here is: can we design something with a pure emphasis on the social and psychological? As this grows, we might consider a combined event.

For now I’m just excited to look at how we can bring social engineering to the wider cybersecurity education curriculum, develop experiential learning SE course projects, offer a SECTF that is ethical, safe, and fun, and build on the SE dataset that we have already started. I want to engage with the wider community to make the social sciences more mainstream in the cybersecurity discourse.

Monday, October 12, 2020

Trusted CI Webinar: Enforcing Security and Privacy Policies to Protect Research Data Mon Oct 26 @11am Eastern

University of Virginia's Yuan Tian is presenting the webinar, Enforcing Security and Privacy Policies to Protect Research Data, on Monday October 26th at 11am (Eastern). 

Please register here. Be sure to check spam/junk folder for registration confirmation email.

Advances in computer systems over the past decade have laid a solid foundation for data collection at a staggering scale. Data generated from end-user devices has tremendous value to the research community. For example, mobile and Internet-of-Things devices can participate in large-scale Internet-based measurement or monitoring of patients' health conditions. While ground-breaking discoveries may occur, malicious attacks or unintentional data leaks threaten the research data. Such a threat is hard to predict and difficult to recover from once it happens. Preventative and defensive measures should be taken where data is generated in order to protect private, valuable data from attackers. Currently, there are efforts that try to regulate data management; for example, a research application might have a privacy policy that describes how the user data is being collected and protected. However, there is a disconnect between these documented policies and the implementations of a research project.
In this talk, I’ll present our research, which interprets the documented policies automatically with NLP (natural language processing) and enforces them in the code of research projects, in order to protect the privacy of research data. This work can significantly reduce researchers' overhead in implementing policy-compliant code and reduce the complexity of protecting research datasets.
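As a heavily simplified illustration of the idea (not the actual system, which uses real NLP rather than a regular expression; the policy text, field names, and helper below are all invented), one can extract the data types a policy permits and flag code-level collection outside that set:

```python
import re

# Toy policy sentence and "collected" fields, invented for illustration.
policy = "We collect location and heart rate to monitor patient health."

# Very naive stand-in for NLP: pattern-match the permitted data types.
match = re.search(r"collect ([\w ,]+?) to", policy)
permitted = {p.strip() for p in re.split(r",| and ", match.group(1))}

collected_in_code = {"location", "heart rate", "contacts"}  # hypothetical
violations = collected_in_code - permitted
print("policy violations:", violations)
```

The disconnect the abstract describes is exactly this set difference: data the code gathers that the documented policy never authorized.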
Speaker Bio:

Yuan Tian is an Assistant Professor of Computer Science at the University of Virginia. Her research focuses on security and privacy and their interactions with systems and machine learning. Her work has a real-world impact on platforms (such as iOS, Chrome, and Azure). She is a recipient of the NSF CAREER Award 2020, the Amazon Faculty Fellowship 2019, the CSAW Best Paper Award 2019, and Rising Stars in EECS 2016.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Friday, October 2, 2020

Trusted CI Engagement Applications Deadline Extended until October 16, 2020


The application deadline has been extended to Oct 16, 2020

Apply for a one-on-one engagement with Trusted CI for late 2020.


Trusted CI is accepting applications for one-on-one engagements to be executed in Jan-June 2021. Applications are due Oct 16, 2020.

To learn more about the process and criteria, and to complete the application form, visit our site:

During Trusted CI’s first 5 years, we’ve conducted more than 24 one-on-one engagements with NSF-funded projects, Large Facilities, and major science service providers representing the full range of NSF science missions. We support a variety of engagement types, including: assistance in developing, improving, or evaluating an information security program; software assurance-focused efforts; identity management; technology or architectural evaluation; training for staff; and more.

As the NSF Cybersecurity Center of Excellence, Trusted CI’s mission is to provide the NSF community a coherent understanding of cybersecurity’s role in producing trustworthy science and the information and know-how required to achieve and maintain effective cybersecurity programs.


Thursday, October 1, 2020

Requesting Feedback on Initial Report and Upcoming Webinar: Guidance for Trustworthy Data Management in Science Projects

The Trustworthy Data Working Group has published an initial draft report on guidance for trustworthy data management in science projects.

We invite the community’s feedback on the initial version of this report and input toward our revisions via the working group mailing list. You may also send input directly to Jim Basney. Please attend the Science Gateways webinar on Wednesday, October 7th at 1pm Eastern, where Jim will present an overview of the guidance report.

This report builds on key findings from the working group's previously published survey report regarding trustworthy data and provides recommendations to address those concerns. The report covers stakeholders of trustworthy data, the definition of trustworthiness, findings from the survey report, barriers to trustworthiness, tools and technologies for trustworthy data, and communication of trustworthiness.

We thank all the members of the Trustworthy Data Working Group for their help with developing this guidance as well as their participation throughout the year. The working group will be revising its guidance in November, incorporating community input received in October, to be included in the working group's final report in December.

Working group membership is open to all who are interested. Please visit for details.

Wednesday, September 30, 2020

Thank you and congratulations to Florence Hudson!

Florence Hudson has been leading Trusted CI's transition to practice (TTP) efforts since 2018. She has been instrumental in fostering connections between researchers and practitioners and leading the creation of a suite of TTP resources based on best practices and successes. September 30th marks Florence's last day with Trusted CI and we wish Florence all the best in her role as Executive Director for the Northeast Big Data Innovation Hub.

Ryan Kiser has been working closely with Florence on TTP and will assume leadership of Trusted CI's TTP effort, supported by Sean Peisert, who brings a strong history of both research and practice in cybersecurity.


Trusted CI PI and Director

Monday, September 28, 2020

Announcing Trusted CI's Open Science Cybersecurity Fellows Program (Applications due Nov.6th)

 Application Deadline: Friday, Nov. 6th. Apply here.


Trusted CI serves the scientific community as the NSF Cybersecurity Center of Excellence, providing leadership in and assistance with cybersecurity in support of research. In 2019, Trusted CI established an Open Science Cybersecurity Fellows program. This program establishes and supports a network of Fellows with diversity in both geography and scientific discipline. These Fellows have access to training and other resources to foster their professional development in cybersecurity. In exchange, they champion cybersecurity for science in their scientific and geographic communities and communicate challenges and successful practices to Trusted CI.

About the program

The vision for the Fellows program is to identify members of the scientific community, empower them with basic knowledge of cybersecurity and the understanding of Trusted CI’s services, and then have them serve as cybersecurity liaisons to their respective community. They would then assist members of the community with basic cybersecurity challenges and connect them with Trusted CI for advanced challenges. 

Trusted CI will select six Fellows each year. Fellows will receive recognition and cybersecurity professional development consisting of training and travel funding. The Fellows’ training will consist of a Virtual Institute, providing 20 hours of basic cybersecurity training over six months. The training will be delivered by Trusted CI staff and invited speakers. The Virtual Institute will be presented as a weekly series via Zoom and recorded to be publicly available for later online viewing. Travel support is budgeted (during their first year only) to cover Fellows’ attendance at the NSF Cybersecurity Summit, PEARC, and one professional development opportunity agreed to with Trusted CI. The Fellows will be added to an email list to discuss any challenges they encounter, which will receive prioritized attention from Trusted CI staff. Trusted CI will recognize the Fellows on its website and social media. Fellowships are funded for one year, but Fellows will be encouraged to continue participating in Trusted CI activities in the years following their fellowship year.

After the Virtual Institute, Fellows, with assistance from the Trusted CI team, will be expected to help their science community with cybersecurity and make them aware of Trusted CI for complex needs. By the end of the year, they will be expected to present or write a short white paper on the cybersecurity needs of their community and some initial steps they will take (or have taken) to address these needs. After the year of full support, Trusted CI will continue recognizing the cohort of Fellows and giving them prioritized attention. Over the years, this growing cohort of Fellows will broaden and diversify Trusted CI’s impact.

Application requirements

  • A description of their connection to the research community. Any connection to NSF projects should be clearly stated, ideally providing the NSF award number.
  • A statement of interest in cybersecurity
  • Two-page biosketch
  • Optional demographic info
  • A letter from their supervisor supporting their involvement and time commitment to the program
  • A commitment to fully participate in the Fellows activities for one year (and optionally thereafter)

The selection of Fellows will be made by the Trusted CI PIs and Senior Personnel based on the following criteria:

  1. Demonstrated connection to scientific research, with preference given to those who demonstrate a connection to NSF-funded science.
  2. Articulated interest in cybersecurity.
  3. Fellows who broaden Trusted CI’s impact across all seven NSF research directorates (Trusted CI encourages applications from individuals with connections to NSF directorates other than CISE), who have connections to any of the NSF 10 Big Ideas, or who increase the participation of underrepresented populations.

Who should apply?   

  • Professionals and post-docs interested in cybersecurity for science, with evidence of that in their past and current role
  • Research Computing, Data, and IT technical or policy professionals interested in applying cybersecurity innovations to scientific research
  • Domain scientists interested in data integrity aspects of scientific research
  • Scientists from all across the seven NSF research directorates interested in how data integrity fits with their scientific mission
  • Researchers in the NSF 10 Big Ideas interested in cybersecurity needs
  • Regional network security personnel working across universities and facilities in their region
  • People comfortable collaborating and communicating across multiple institutions with IT / CISO / Research Computing and Data professionals
  • Anyone in a role relevant to cybersecurity for open science

More about the Fellowship

Fellows come from a variety of career stages. They demonstrate a passion for their area, the ability to communicate ideas effectively, and a real interest in the role of cybersecurity in research. Fellows are empowered to talk about cybersecurity to a wider audience, network with others who share a passion for cybersecurity for open science, and learn key skills that benefit them and their collaborators.

If you have questions about the Fellows program, please let us know by email.

Application Deadline: Friday, Nov 6, 2020. Apply here.

Applicants will be notified by Jan 15, 2021.

Tuesday, September 22, 2020

Trusted CI Webinar: Cybersecurity Maturity Model Certification (CMMC) on Tues Oct 6 @11am Eastern

Trusted CI's Scott Russell is presenting the webinar, Cybersecurity Maturity Model Certification (CMMC), on Tuesday October 6th at 11am (Eastern). 

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The US has historically taken a fairly minimalist approach to cybersecurity regulation, but recent years have evidenced a trend toward increasing regulation. The latest in this trend is the US Department of Defense’s “Cybersecurity Maturity Model Certification” (CMMC). CMMC has garnered quite a bit of attention recently, as it intends to impose cybersecurity compliance requirements on the entire Defense Industrial Base (DIB), over 300,000 organizations (including some universities). CMMC has emerged at a breakneck pace, and there is still a great deal of uncertainty regarding who is impacted, what is required, and how organizations should respond.

This talk will 1) introduce US cybersecurity regulation and compliance generally; 2) provide the background and context leading to CMMC; 3) overview CMMC; and 4) suggest approaches for thinking about cybersecurity compliance moving forward.
Speaker Bio:

Scott Russell is a Senior Policy Analyst at the Indiana University Center for Applied Cybersecurity Research. Scott was previously the Postdoctoral Fellow in Information Security Law & Policy. Scott’s work thus far has emphasized private sector cybersecurity best practices, data aggregation and the First and Fourth Amendments, and cybercrime in international law. Scott studied Computer Science and History at the University of Virginia and received his J.D. from the Indiana University, Maurer School of Law.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."