Showing posts with label software assurance. Show all posts

Tuesday, July 18, 2023

Trusted CI releases updated guide to software security

As part of its ongoing efforts to support software assurance, Trusted CI has released a major update (version 2.0) of our Guide to Securing Scientific Software.

The first version of this guide provided concrete advice for anyone involved in developing or managing software for scientific projects. This new version expands both the coverage and depth of the topics. The guide provides an understanding of many of the security issues faced when producing software, along with actionable advice on how to deal with these issues. New topics include approaches to avoiding software exploits (injection attacks, buffer overflows and overruns, numeric errors, exceptions, serialization, directory traversal, improper setting of permissions, and web security); securing the software supply chain; secure design; software analysis tools; fuzz testing; and code auditing.
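To give a flavor of the injection-attack material, here is a minimal, illustrative Python example (not taken from the guide itself) contrasting an injectable SQL query with a parameterized one:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: string interpolation lets input such as
    # "nobody' OR '1'='1" rewrite the query logic
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Safe: the ? placeholder binds the input strictly as data,
    # so SQL metacharacters in it are inert
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

With the malicious input shown in the comment, the unsafe version returns every row in the table, while the parameterized version returns none.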

The new version of the guide is available at https://doi.org/10.5281/zenodo.8137009.

If you write code, this guide is for you. And if you write scientific software, your software is likely to be shared or deployed as a service. Once that step happens, you and the people who use or deploy your software will be confronted with software security concerns.

To address these concerns, you will need a variety of skills. However, it can be daunting just to know which concerns to address and which skills you need. The goal of this guide is to provide an introduction to these topics.

You can read this guide beginning-to-end as a tutorial to introduce you to the topic of secure software development, or you can read it selectively to help understand specific issues. In either case, this guide will introduce you to a variety of topics and then provide you with a list of resources to dive deeper into those topics.

It is our hope that our continued efforts in the area of software assurance will help scientific software projects better understand and ameliorate some of the most important gaps in the security of scientific software, and also to help policymakers understand those gaps so they can better understand the need for committing resources to improving the state of scientific software security. Ultimately, we hope that this effort will support scientific discovery itself by shedding light on the risks to science incurred in creating and using software.

Trusted CI releases a new report on ransomware

As part of its ongoing efforts to support software assurance, Trusted CI has released a new report describing the current landscape of ransomware.

Ransomware has become a global problem, striking almost every sector that uses computers, from industry to academia to government.

Our report takes a detailed technical approach to understanding this threat. Ransomware attacks affect the smallest businesses, the largest corporations, and research labs, and have even shut down IT operations at entire universities.

We present a broad landscape of how ransomware can affect a computer system and suggest how the system designer and operator might prepare to recover from such an attack. Our report focuses on detection, recovery, and resilience; as such, we explicitly do not discuss how ransomware might enter a computer system. The assumption is that systems will be successfully attacked and rendered inoperative to some extent. Therefore, it is essential to have a recovery and continuity of operations strategy.

Some of the ransomware scenarios that we describe reflect attacks that are common and well understood. Many of these scenarios have active attacks in the wild. Other scenarios are less common and do not appear to have any active attacks. In many ways, these less common scenarios are the most interesting ones as they pose an opportunity to build defenses ahead of attacks. Such areas need more research into the possible threats and defenses against these threats.

We start with a discussion of the basic attack goals of ransomware and distinguish ransomware from purely malicious vandalism. We present a canonical model of a computing system, representing the key components of the system such as user processes, the file system, and the firmware. We also include representative external components such as database servers, storage servers, and backup systems. This system model then forms the basis of our discussion on specific attacks.

We then use the system model to methodically discuss ways in which ransomware can (and sometimes cannot) attack each component of the system that we identified. For each attack scenario, we describe how the system might be subverted, the ransom act, the impact on operations, the difficulty of accomplishing the attack, the cost to recover, the ease of detecting the attack, and the frequency with which the attack is found in the wild. We also describe strategies that could be used to detect these attacks and recover from them.
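To give a flavor of what such detection strategies can look like, one common heuristic (shown here as an illustration, not necessarily one the report prescribes) is monitoring file entropy: encrypted data is nearly indistinguishable from random bytes, so a sudden jump in a file's Shannon entropy after a write can signal ransomware encryption in progress. A minimal Python sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: typical text sits well below 8;
    encrypted or compressed data approaches 8."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Heuristic: file contents jumping past the threshold after a
    # write is a possible sign of ransomware encryption
    return shannon_entropy(data) >= threshold
```

In practice, detectors combine signals like this with write-rate anomalies and file-type checks, since compressed archives also score high on entropy alone.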

Based on our study, we present our major takeaway observations and best practices that can help make a system more resilient to attack and easier to recover after an attack. Our report is available at https://doi.org/10.5281/zenodo.8140464.

Wednesday, July 5, 2023

Trusted CI Webinar: The Technical Landscape of Ransomware: Threat Models and Defense Models, July 17th @ 11am Eastern

Members of Trusted CI are presenting the talk, The Technical Landscape of Ransomware: Threat Models and Defense Models, on July 17th at 11am (Eastern).

Please register here.

Ransomware has become a global problem. Given the reality that ransomware will eventually strike your system, we focus on recovery rather than prevention. The assumption is that the attacker has entered the system and rendered it inoperative to some extent.

We start by presenting the broad landscape of how ransomware can affect a computer system, suggesting how the IT manager, system designer, and operator might prepare to recover from such an attack.

We show the ways in which ransomware can (and sometimes cannot) attack each component of the system. For each attack scenario, we describe how the system might be subverted, the ransom act, the impact on operations, the difficulty of accomplishing the attack, the cost to recover, the ease of detecting the attack, and the frequency with which the attack is found in the wild (if at all). We also describe strategies that could be used to recover from these attacks.

Some of the ransomware scenarios that we describe reflect attacks that are common and well understood. Many of these scenarios have active attacks in the wild. Other scenarios are less common and do not appear to have any active attacks. In many ways, these less common scenarios are the most interesting ones as they pose an opportunity to build defenses ahead of attacks.

Speaker Bios:

Barton Miller is the Vilas Distinguished Achievement Professor and the Amar & Belinder Sohi Professor in Computer Sciences at the University of Wisconsin-Madison. He is a co-PI on the Trusted CI NSF Cybersecurity Center of Excellence, where he leads the software assurance effort, and leads the Paradyn Tools project, which is investigating performance and instrumentation technologies for parallel and distributed applications and systems. His research interests include software security, in-depth vulnerability assessment, binary and malicious code analysis and instrumentation, extreme scale systems, and parallel and distributed program measurement and debugging. In 1988, Miller founded the field of fuzz random software testing, which is the foundation of many security and software engineering disciplines. In 1992, Miller (working with his then-student Prof. Jeffrey Hollingsworth) founded the field of dynamic binary code instrumentation and coined the term "dynamic instrumentation". Miller is a Fellow of the ACM and a recent recipient of the Jean-Claude Laprie Award for dependable computing.

Miller was the chair of the Institute for Defense Analysis Center for Computing Sciences Program Review Committee, member of the U.S. National Nuclear Safety Administration Los Alamos and Lawrence Livermore National Labs Cyber Security Review Committee (POFMR), member of the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee, and has been on the U.S. Secret Service Electronic Crimes Task Force (Chicago Area).

Elisa Heymann is a Senior Scientist on Trusted CI, the NSF Cybersecurity Center of Excellence at the University of Wisconsin-Madison, and an Associate Professor at the Autonomous University of Barcelona. She co-directs the MIST software vulnerability assessment at the Autonomous University of Barcelona, Spain.

She coordinates in-depth vulnerability assessments for NSF Trusted CI, was also in charge of the Grid/Cloud security group at the UAB, and participated in two major European Grid projects: EGI-InSPIRE and the European Middleware Initiative (EMI). Heymann's research interests include software security and resource management for Grid and Cloud environments. Her research is supported by the NSF, the Spanish government, the European Commission, and NATO.

---

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Tuesday, November 1, 2022

Open Science Cyber Risk Profile (OSCRP) Updated with Science DMZ, Software Assurance, Operational Technology, and Cloud Computing Elements

Trusted CI has released an updated version of the Open Science Cyber Risk Profile (OSCRP), with additions based on insights from its 2021 study of scientific software assurance:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “Guide to Securing Scientific Software,” December 2021. DOI: 10.5281/zenodo.5777646

…and its 2022 study on scientific operational technology:

Emily K. Adams, Daniel Gunter, Ryan Kiser, Mark Krenz, Sean Peisert, Susan Sons, and John Zage. “Findings of the 2022 Trusted CI Study on the Security of Operational Technology in NSF Scientific Research,” July 13, 2022. DOI: 10.5281/zenodo.6828675

A new section on risk profiling of cloud computing was also added. The full reference for the OSCRP is:

Sean Peisert, Von Welch, Andrew Adams, RuthAnne Bevier, Michael Dopheide, Rich LeDuc, Pascal Meunier, Steve Schwab, and Karen Stocks. Open Science Cyber Risk Profile (OSCRP), Version 1.3.3. October 2022. DOI: 10.5281/zenodo.7268749

The OSCRP is a document, initially released in 2017, designed to help principal investigators and their supporting information technology professionals assess cybersecurity risks related to open science projects. The OSCRP was the culmination of extensive discussions with research and education community leaders, and has since become a widely-used resource, including numerous references in recent National Science Foundation (NSF) solicitations.

The OSCRP is a living document and will continue to be refreshed as technology and threats change, and as new insights are acquired.

Comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.


Monday, August 8, 2022

New Trusted CI Software Security Training Materials for the Community

In a world of continuous cyber attacks, cybersecurity is a responsibility of every person involved in the software development life cycle: managers, designers, developers, and testers. Trusted CI offers an evolving collection of training materials on software security covering topics such as secure design, secure implementation, testing, code auditing, dependency tools, static analysis tools, and fuzz testing.

The materials are freely available at https://www.cs.wisc.edu/mist/SoftwareSecurityCourse. Apart from videos and corresponding book chapters, they include hands-on exercises and quizzes for many of the topics. Classroom exercises and the solutions to the hands-on exercises and quizzes are provided to instructors by request. Most of the videos now have captions in both English and Spanish.

These materials are being continuously updated as we develop new modules. The latest additions are modules on address space layout randomization (ASLR), memory safety checks, fuzz testing and using AFL, and dependency analysis tools.

These materials have been used at conferences, workshops, and government agencies to train CI professionals in secure coding, design, and testing. They are also used at the University of Wisconsin-Madison to teach CS542, Introduction to Software Security.

Friday, May 13, 2022

Better Scientific Software (BSSw) Helps Promote Trusted CI Guide to Securing Scientific Software

Trusted CI is grateful to Better Scientific Software (BSSw) for helping to publicize the results of Trusted CI’s software security study, including its recently published findings report and Guide to Securing Scientific Software (GS3), via its widely-read blog.  Read the full blog post here.

Monday, February 14, 2022

Trusted CI Webinar: The Results of the Trusted CI Annual Challenge on Software, Mon Feb. 28 @ 1pm Eastern

Members of Trusted CI are presenting the Results of the Trusted CI Annual Challenge on Software, on Monday February 28th at 1pm (Eastern). Note the time is later than previous webinars.

Please register here.

This webinar presents the results of Trusted CI's 2021 examination of the state of software assurance in scientific computing, and also gives an overview of the contents of its recently released Guide to Securing Scientific Software (GS3), aimed at helping developers of software used in scientific computing improve the security of that software.

See our blog post announcing the report:
https://blog.trustedci.org/2021/12/publication-of-trusted-ci-guide-to.html

Speaker Bios

Dr. Elisa Heymann Pignolo is a Senior Scientist on the NSF Cybersecurity Center of Excellence at the University of Wisconsin, and an Associate Professor at the Autonomous University of Barcelona. She was in charge of the Grid/Cloud security group at the UAB, and participated in two major European Grid projects: EGI-InSPIRE and the European Middleware Initiative (EMI). Heymann's research interests include security and resource management for Grid and Cloud environments. Her research is supported by the NSF, the Spanish government, the European Commission, and NATO.

Prof. Barton Miller is the Vilas Distinguished Achievement Professor and Amar & Belinder Sohi Professor in computer science at the University of Wisconsin-Madison. Prof. Miller founded the field of fuzz random testing, which is foundational to computer security and software testing. In addition, he founded (with his then-student Prof. Jeffrey Hollingsworth) the field of dynamic binary instrumentation, which is a widely used, critical technology for cyberforensics. Prof. Miller advises the Department of Defense on computer security issues through his position at the Institute for Defense Analysis and was on the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee and the US Secret Service Electronic Crimes Task Force (Chicago Area). He is currently an advisor to the Wisconsin Security Research Council. Prof. Miller is a fellow of the ACM.

Dr. Sean Peisert leads applied research and development in computer security at the Berkeley Lab and UC Davis. He is also chief cybersecurity strategist for CENIC; co-lead of Trusted CI, the NSF Cybersecurity Center of Excellence; editor-in-chief of IEEE Security & Privacy; a member of the Distinguished Expert Review Panel for the NSA Annual Best Scientific Cybersecurity Paper Competition; a member of the DARPA Information Science and Technology (ISAT) Study Group; an ACSA Senior Fellow; past chair of the IEEE Technical Committee on Security & Privacy; and a steering committee member and past general chair of the IEEE Symposium on Security and Privacy ("Oakland").

---

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Tuesday, December 14, 2021

Publication of the Trusted CI Guide to Securing Scientific Software

Trusted CI is pleased to announce the publication of its Guide to Securing Scientific Software (GS3). The GS3 was produced over the course of 2021 by seven Trusted CI members with the goal of broadly improving the security robustness of software used in scientific computing. The GS3 is the result of the 2021 Trusted CI Annual Challenge on Software Assurance and the interviews we conducted with seven prominent scientific software development projects, which helped shape the team’s ideas about the community’s needs in software assurance. The guide can be downloaded here:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “Guide to Securing Scientific Software,” December 2021. DOI:10.5281/zenodo.5777646 https://doi.org/10.5281/zenodo.5777646

Note that this guide follows the publication of the team’s findings report from a few months ago:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

The GS3 is intended to continue to evolve and be further integrated into Trusted CI’s array of activities, including training and engagements. We encourage those interested in software assurance to continue to watch this blog for more information, and to reach out to the GS3 authors with questions and feedback.

For those interested in hearing more about the GS3, please (virtually) join the Trusted CI webinar focused on the topic of software assurance, scheduled for February 28, 2022 at 10am Pacific / 1pm Eastern (https://www.trustedci.org/webinars). Register for the webinar.

Finally, Trusted CI gratefully acknowledges the contributions of the following teams to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory, as well as all those who provided feedback on early versions of this guide.

More information on Trusted CI’s work in software assurance can be found at https://www.trustedci.org/software-assurance


Wednesday, September 29, 2021

Findings Report of the 2021 Trusted CI Annual Challenge on Software Assurance Published

As reported in this blog earlier this year, in 2021 Trusted CI is conducting its focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure.

In July, the 2021 Trusted CI Annual Challenge team posted its initial findings in this blog.  The team is now pleased to share its detailed findings report:

Andrew Adams, Kay Avila, Elisa Heymann, Mark Krenz, Jason R. Lee, Barton Miller, and Sean Peisert. “The State of the Scientific Software World: Findings of the 2021 Trusted CI Software Assurance Annual Challenge Interviews,” September 2021.  https://hdl.handle.net/2022/26799

Now that the team has finished its examination of software assurance findings, it will turn its focus to solutions. Accordingly, later this calendar year the Trusted CI team will publish a guide of recommended best practices for scientific software development.

For those interested in hearing more about the 2021 Annual Challenge, please (virtually) come to the team’s panel session at the 2021 NSF Cybersecurity Summit at 3:05 EDT on October 13, 2021: https://www.trustedci.org/2021-summit-program


Tuesday, August 3, 2021

Initial Findings of the 2021 Trusted CI Annual Challenge on Software Assurance

In 2021, Trusted CI is conducting its focused “annual challenge” on the assurance of software used by scientific computing and cyberinfrastructure. The goal of this year-long project, involving seven Trusted CI members, is to broadly improve the security robustness of software used in scientific computing. The Annual Challenge team spent the first half of the 2021 calendar year engaging with developers of scientific software to understand the range of software development practices used and to identify opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In this blog post, the 2021 Trusted CI Annual Challenge team gives a high-level description of some of its more important findings during the past six months.

Later this year, the team will be leveraging its insights from open-science developer engagements to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community. Trusted CI will be reaching back out to the community sometime in the Fall for feedback on draft versions of that guide before the final version is published late in 2021.

In support of this effort, Trusted CI gratefully acknowledges the input from the following teams who contributed to this effort: FABRIC, the Galaxy Project, High Performance SSH/SCP (HPN-SSH) by the Pittsburgh Supercomputing Center (PSC), Open OnDemand by the Ohio Supercomputer Center, Rolling Deck to Repository (R2R) by Columbia University, and the Vera C. Rubin Observatory.

At a high level, the team identified challenges that developers face in producing robust policy and process documentation; difficulties in identifying and staffing security leads, and in ensuring clear lines of security responsibility among developers; difficulties in the effective use of code analysis tools; confusion about when, where, and how to find effective security training; and challenges in controlling the source code developed and the external libraries used, to ensure strong supply chain security. We now describe our examination process and findings in greater detail.


Goals and Approach

The motivation for this year’s Annual Challenge is that Trusted CI has reviewed many projects in its history and found significant anecdotal evidence of worrisome gaps in software assurance practices in scientific computing. We determined that if some common themes could be identified and paired with proportionate remediations, the state of software assurance in science might be significantly improved.

Trusted CI has observed that currently available software development resources often do not match the needs of scientific projects; the backgrounds of the developers, the available resources, and the way the software is used do not necessarily map to existing resources for software assurance. Hence, Trusted CI put together a team with a range of security expertise, from academic research to operational experience. That team then examined several software projects covering a range of sizes, applications, and NSF directorate funding sources, looking for commonalities among them related to software security. Our focus was on both procedures and the practical application of security measures and tools.

In preparing our examinations of these individual software projects, the Annual Challenge team enumerated several details that it felt would shed light on the software security challenges faced by scientific software developers, some of the most successful ways in which existing teams are addressing those challenges, and observations from developers about the way that they wish things might be different in the future, or if they were able to do things over again from the beginning.


Findings

The Annual Challenge team’s findings generally fall into five categories: process, organization/mission, tools, training, and code storage.

Process: The team found several common threads of challenges facing developers, most notably related to policy and process documentation, including policies for onboarding, offboarding, code commits and pull requests, coding standards, design, communication about vulnerabilities with user communities, patching methodologies, and auditing practices. One common cause is that software projects start small and do not plan to grow or be used widely. When the software does grow and starts to be used broadly, it can be hard to develop formal policies after developers are used to working in an informal, ad hoc manner. In addition, organizations do not budget for security. Further, where policy documentation does exist, it can easily go stale ("documentation rot"). As a result, it would be helpful for Trusted CI to develop guides for, and examples of, such policies that could be used and implemented even at early stages by the scientific software development community.

Organization and Mission: Most projects faced difficulties in identifying, staffing, or funding a security lead and/or project manager. The few projects that had at least one of these roles filled had an advantage in regards to DevSecOps. In terms of acquiring broader security skills, some projects attempted to use institutional “audit services” but found mixed results. Several projects struggled with the challenge of integrating security knowledge among different teams or individuals. Strong lines of responsibility can create valuable modularity but can also create points of weakness when interfaces between different authors or repositories are not fully evaluated for security issues. Developers can ease this tension by using processes for developing security policies around software, ensuring ongoing management support and enforcement of policies, and helping development teams understand the assignment of key responsibilities. These topics will be addressed in the software assurance guide that Trusted CI is developing.

Tools: Static analysis tools are commonly employed in continuous integration (CI) workflows to help detect security flaws, poor coding style, and potential errors in the project. A primary attribute of a static analysis tool is the set of language-specific rules and patterns it uses to search for style, correctness, and security issues in a project. One major issue with static analysis tools is that they report a high number of false positives, which, as the Trusted CI team found, can cause developers to avoid using them. The team concluded that it would be helpful for Trusted CI to develop tutorials appropriate for scientific software developers, showing how to use these tools properly and overcome their traditional weaknesses without being buried in useless results.
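As a deliberately simplified illustration of how such tools work (production analyzers are far more sophisticated), a static check parses code into a syntax tree and matches rules against it without executing anything. Here is a toy rule, written with Python's standard ast module, that flags calls to `eval`, a frequent source of injection flaws:

```python
import ast

def find_eval_calls(source):
    """Flag the line numbers where eval() is called by name,
    without ever running the code under analysis."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return sorted(hits)
```

Even this toy rule shows where false positives come from: a call to `eval` may be perfectly safe if its argument is a constant, but a purely syntactic rule cannot tell.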

The Trusted CI team found that dependency checking tools were commonly employed, particularly given some of the automation and analysis features built into GitHub. Such tools are useful to ensure the continued security of a project’s dependencies as new vulnerabilities are found over time. Thus, the Trusted CI team will explore developing (or referencing existing) materials to ensure that the application of dependency tracking is effective for the audience and application in question. It should be noted that tools in general could give a false sense of security if they are not carefully used.
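At its core, dependency checking cross-references the exact versions a project installs against a database of published advisories. A minimal sketch of that comparison (the package names and advisory data below are hypothetical):

```python
def vulnerable_deps(installed, advisories):
    """Report installed packages whose version has a known advisory.

    installed:  {package_name: installed_version}
    advisories: {package_name: set of versions with known vulnerabilities}
    """
    return sorted(
        (name, version)
        for name, version in installed.items()
        if version in advisories.get(name, ())
    )
```

Real tools add the hard parts this sketch omits: resolving transitive dependencies, matching version ranges rather than exact strings, and keeping the advisory database current.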

Training: Projects shared that developers of scientific software received almost no specific training on security or secure software development. A few of the projects that attempted to find online training resources reported finding themselves lost in a quagmire of tutorials. In some cases, developers had computer science backgrounds and relied on what they learned early in their careers, sometimes decades ago. In other cases, professional training was explored but found to be at the wrong level of detail to be useful, had little emphasis on security specifically, or was extremely expensive. In yet other cases, institutional training was leveraged. We found that any kind of ongoing training tended to be seen by developers as not worth the time and/or expense. To address this, Trusted CI should identify training resources appropriate for the specific needs, interests, and budgets of the scientific software community.

Code Storage: Although most projects were using version control in external repositories, the access control methods governing pull requests and commits were not sufficiently restricted to maintain a secure posture. Many projects leverage GitHub’s dependency checking software; however, that tool is limited to checking libraries within GitHub’s domain. A few projects developed their own software in an attempt to navigate a dependency nightmare. Further, there was often little ability or attempt to vet external libraries; these were often accepted without inspection, mainly because there is no straightforward mechanism in place to vet these packages. In the Trusted CI software assurance guide, it would be useful to describe processes for leveraging two-factor authentication and developing policies governing access controls, commits, pull requests, and the vetting of external libraries.
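One straightforward, widely available vetting mechanism is hash pinning (pip's hash-checking mode, `--require-hashes`, implements it): record a cryptographic digest of each dependency at the time it is vetted, and refuse any later download that does not match. A minimal sketch of the underlying check:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Accept a downloaded dependency only if its SHA-256 digest
    matches the hash recorded when the package was vetted."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256
```

Hash pinning does not make the initial vetting any easier, but it guarantees that whatever was vetted is exactly what gets installed thereafter.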


Next Steps

The findings derived from our examination of several representative scientific software development projects will direct our efforts toward the new content we believe is most needed by the scientific software development community.

Over the next six months, the Trusted CI team will be developing a guide consisting of this material, targeted toward anyone who is either planning or has an ongoing software project that needs a security plan in place. While we hope that the guide will be broadly usable, a particular focus of the guide will be on projects that provide a user-facing front end exposed to the Internet because such software is most likely to be attacked. 

This guide is meant as a “best practices” approach to the software lifecycle. We will recommend various resources that should be leveraged in scientific software, including the types of tools to run to expose vulnerabilities, best practices in coding, and some procedures that should be followed when engaged in a large collaborative effort and how to share the code safely. Ultimately, we hope the guide will support scientific discovery itself by providing guidance around how to minimize possible risks incurred in creating and using scientific software.

Monday, June 14, 2021

Trusted CI webinar: Investigating Secure Development In Practice: A Human-Centered Perspective Mon June 28th @1pm Eastern

University of Maryland's Michelle Mazurek, is presenting the talk,
Investigating Secure Development In Practice: A Human-Centered Perspective,
on Monday June 28th at 1pm (Eastern).

Please register here. Be sure to check spam/junk folder for registration confirmation email.

Secure development is not just a technical problem: it’s a human and organizational problem as well. To understand the causes of insecurity, and find effective solutions, we must understand how and why security problems happen, and what barriers stand in the way of fixing them. How can we make it easier for developers to write secure code, even without special training? In this talk, I will report on findings from several recent studies addressing these questions. These include examining the effects of information resources and API design on developers' likelihood of writing secure code; using data from a secure programming contest to explore the kinds of security mistakes developers make; and exploring the benefits and barriers associated with adoption of a secure programming language.

Speaker Bio

Michelle Mazurek is an Associate Professor in the Computer Science Department and the Institute for Advanced Computer Studies at the University of Maryland, College Park, where she also directs the Maryland Cybersecurity Center. Her research aims to understand and improve the human elements of security- and privacy-related decision making. Recent projects include examining how and why developers make security and privacy mistakes; investigating the vulnerability-discovery process; evaluating the use of threat-modeling in large-scale organizations; and analyzing how users learn about and decide whether to adopt security advice. Her work has been recognized with an NSA Best Scientific Cybersecurity Paper award and three USENIX Security Distinguished Paper awards. She was Program Chair for the Symposium on Usable Privacy and Security (SOUPS) for 2019 and 2020 and is Program Chair for the Privacy Enhancing Technologies Symposium (PETS) for 2022 and 2023. 

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Wednesday, June 9, 2021

Trusted CI Materials as the Foundation for a University Course at the University of Wisconsin-Madison

Software security is important to the NSF community because it is critical to their support of science. For example, Trusted CI’s Community Benchmarking Survey consistently finds that the overwhelming majority of NSF projects and Large Facilities develop software and also adopt both open source and commercial software, whose quality they assess as part of their cybersecurity risk management. Trusted CI recognizes the importance of this issue and has focused the Trusted CI 2021 Annual Challenge on software assurance.

Trusted CI has been developing training materials to teach secure software design and implementation. These materials have been used at conferences, workshops, and government agencies to train CI professionals in secure coding, design, and testing. More recently, they were used at the University of Wisconsin-Madison to develop a new course on software security. The new course, CS542, Introduction to Software Security (http://www.cs.wisc.edu/~bart/cs542.html), is part of the computer science curriculum at the University of Wisconsin-Madison. The teaching materials support a blended (flipped) model: lectures are based on video modules and corresponding text chapters, and classroom time is used for collaborative exercises and discussions. The videos and text are supplemented by hands-on exercises for each module, delivered in virtual machines. The online nature of these materials proved even more valuable during the remote learning forced by the COVID-19 pandemic.

This new course covers security throughout the various stages of the software development life cycle (SDLC), including secure design, secure coding, and testing and evaluation for security.
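To give a flavor of the secure-coding material, a classic lesson in such courses (this sketch is our illustration, not an exercise taken from CS542) is avoiding SQL injection by passing user input as a bound parameter rather than splicing it into the query text:

```python
import sqlite3

# Set up a throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

def find_email_unsafe(name):
    # VULNERABLE: attacker-controlled `name` is spliced into the SQL text,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = "SELECT email FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_email_safe(name):
    # SAFE: a parameterized query treats `name` strictly as data.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injected input dumps every row through the unsafe query...
assert find_email_unsafe("' OR '1'='1") == [("alice@example.org",)]
# ...but matches no user when passed as a bound parameter.
assert find_email_safe("' OR '1'='1") == []
```

The same principle applies to any language or database driver: the query text and the data must travel through separate channels.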

These teaching materials are freely available at
https://www.cs.wisc.edu/mist/SoftwareSecurityCourse.

Some of the comments from the students at the end of the last class of the Spring 2021 course, taken from the chat window, include:

“Thank you for such an enlightening course! I had a lot of fun!”
“Thank you for a very insightful and interesting course.”
“Thanks for the semester! This class was very interesting and manageable I appreciate it”
“Is this only taught in the Spring? I'd like to recommend the class to some of my CS friends.”
300 students have benefitted from this course at the University of Wisconsin-Madison.

Tuesday, March 30, 2021

Announcing the 2021 Trusted CI Annual Challenge on Software Assurance


The Trusted CI “Annual Challenge” is a year-long project focusing on a particular topic of importance to cybersecurity in scientific computing environments.  In its first year, the Trusted CI Annual Challenge focused on issues in trustworthy data.  Now, in its second year, the Annual Challenge is focusing on software assurance in scientific computing.

The scientific computing community develops large amounts of software. At the largest scale, projects can have millions of lines of code. Indeed, the software used in scientific computing, and the vulnerabilities present in it, can be similar to those in other domains. At the same time, its developers usually come from science-focused domains rather than traditional software engineering backgrounds, and, compared to other domains, there is often less emphasis on software assurance.

Trusted CI has a long history of addressing the software assurance of scientific software, both through engagements with individual scientific software teams and through courses and tutorials frequently taught at conferences and workshops by Elisa Heymann and Barton Miller of the University of Wisconsin-Madison. This year’s Annual Challenge seeks to complement those existing efforts in a focused way, leveraging a larger team. Specifically, it seeks to broadly improve the robustness of software used in scientific computing with respect to security. It will do this by spending the March–June 2021 timeframe engaging with developers of scientific software to understand the range of software development practices being used and identifying opportunities to improve practices and code implementation to minimize the risk of vulnerabilities. In the second half of 2021, we will leverage our insights to develop a guide specifically aimed at the scientific software community that covers software assurance in a way most appropriate to that community.

We seek to optimize the impact of our efforts in 2021 by focusing our effort on software that is widely used, is situated in vulnerable locations, and is developed mostly by individuals who do not have traditional software engineering backgrounds and training.

This year’s Annual Challenge is supported by a stellar team of Trusted CI staff, including Andrew Adams (Pittsburgh Supercomputing Center), Kay Avila (National Center for Supercomputing Applications), Ritvik Bhawnani (University of Wisconsin-Madison), Elisa Heymann (University of Wisconsin-Madison), Mark Krenz (Indiana University), Jason Lee (Berkeley Lab/NERSC), Barton Miller (University of Wisconsin-Madison), and Sean Peisert (Berkeley Lab; 2021 Annual Challenge Project Lead).

Monday, May 4, 2020

Trusted CI Webinar May 18th at 11am ET: Is Your Code Safe from Attack? with Barton Miller and Elisa Heymann

University of Wisconsin-Madison's Barton Miller and Elisa Heymann are presenting the talk, "Is Your Code Safe from Attack?" on May 18th at 11am (Eastern).  

This month's webinar is one week early to accommodate the Memorial Day holiday.

Please register here. Be sure to check spam/junk folder for registration confirmation email.
The science and cyberinfrastructure community writes a huge quantity of software in the form of services, web applications, and infrastructure to support its mission. Each deployed software component can open your organization to the risk of attack, creating violations of data integrity and privacy, and providing unauthorized access to your computing and science infrastructure. An important part of preventing such attacks is an in-depth review of your code.
The goal of an in-depth code review is to understand the structure of your software, identify the critical parts of the code and the resources they control, understand trust and privilege, and then use this information to focus on key parts of the code. Such a review can identify design issues, coding problems, and deployment mistakes. By focusing on the software structure and resources, you can anticipate types of vulnerabilities that have not yet been seen in the wild. This type of review goes beyond the capabilities of penetration testing.
We will briefly describe our First Principles Vulnerability Assessment (FPVA), which we have applied to a wide variety of real-world software under the aegis of Trusted CI and other organizations. This software has included systems such as HTCondor, Wireshark, Singularity, Google Chrome, and even software that controls almost half the container shipping ports in the world.
We will describe our experiences with such assessments and discuss how you, as an organization that writes or deploys custom software, can access or create such an assessment and how you would work with the assessment team. And, importantly, we will discuss how to respond to the identification of vulnerabilities in your software.
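As one illustration of the kind of flaw such a resource-and-privilege-focused review looks for (our example, not one from the talk), consider a service that hands user-supplied paths to the filesystem. It must resolve the path before checking containment, or a "../" sequence can escape the intended directory:

```python
import os

def safe_open(base_dir, user_path):
    """Open a file only if it resolves inside base_dir.

    Resolving symlinks and ".." components *before* the containment
    check is the key step; comparing raw strings is not enough.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, target]) != base:
        raise PermissionError("path escapes base directory: %r" % user_path)
    return open(target, "rb")
```

With this guard in place, a request such as `safe_open("/srv/data", "../../etc/passwd")` is rejected instead of reading a sensitive system file.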
Speaker Bios:

Barton Miller is the Vilas Distinguished Achievement Professor, and Amar & Belinder Professor of Computer Sciences at the University of Wisconsin-Madison. He is also Chief Scientist for the DHS Software Assurance Marketplace (SWAMP) research facility, leads the software assurance effort for the NSF Cybersecurity Center of Excellence (Trusted CI), and co-directs the MIST software vulnerability assessment project in collaboration with his colleagues at the Autonomous University of Barcelona. He also leads the Paradyn Parallel Performance Tool project, which is investigating performance and instrumentation technologies for parallel and distributed applications and systems. His research interests include systems security, binary and malicious code analysis and instrumentation, extreme scale systems, parallel and distributed program measurement and debugging, and mobile computing. Miller's research is supported by the U.S. Department of Homeland Security, U.S. Department of Energy, National Science Foundation, NATO, and various corporations.

In 1988, Miller founded the field of fuzz random software testing, which is the foundation of many security and software engineering disciplines. In 1992, Miller (working with his then-student, Prof. Jeffrey Hollingsworth) founded the field of dynamic binary code instrumentation and coined the term "dynamic instrumentation". Dynamic instrumentation forms the basis for his current efforts in malware analysis and instrumentation.
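The original fuzz idea is simple enough to sketch in a few lines (a simplified illustration of the concept, not Miller's original tool): throw random inputs at a target and record which ones make it crash:

```python
import random

def fuzz(target, trials=2000, max_len=64, seed=0):
    """Feed random byte strings to `target`; collect inputs that raise."""
    rng = random.Random(seed)  # seeded so crashing inputs are reproducible
    crashes = []
    for _ in range(trials):
        n = rng.randrange(max_len)
        data = bytes(rng.randrange(256) for _ in range(n))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # save the crashing input for replay
    return crashes

# A toy "parser" with a planted bug: it assumes its input is non-empty.
def fragile_parse(data):
    return data[0]  # IndexError on b""

crashing = fuzz(fragile_parse)
```

Even this naive version finds the planted bug, because empty inputs occur regularly among the random trials; production fuzzers add coverage feedback and input mutation on top of the same loop.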

Miller was the chair of the IDA Center for Computing Sciences Program Review Committee, a member of the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee, and has been on the U.S. Secret Service Electronic Crimes Task Force (Chicago Area). Miller is a Fellow of the ACM.

Elisa Heymann is a Senior Scientist on the NSF Cybersecurity Center of Excellence at the University of Wisconsin-Madison, and an Associate Professor at the Autonomous University of Barcelona. She co-directs the MIST software vulnerability assessment at the Autonomous University of Barcelona, Spain.

She coordinates in-depth vulnerability assessments for NSF Trusted CI, was previously in charge of the Grid/Cloud security group at the UAB, and participated in two major European Grid projects: EGI-InSPIRE and the European Middleware Initiative (EMI). Heymann's research interests include software security and resource management for Grid and Cloud environments. Her research is supported by the NSF, the Spanish government, the European Commission, and NATO.

Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

Monday, April 20, 2020

Trusted CI Releases Assessment Report for Singularity


In the first half of 2019, Trusted CI collaborated with the Sylabs team and the Open Science Grid (OSG) to assess the security of Singularity (https://sylabs.io/singularity/), an open source container platform optimized for high-performance computing (HPC) and scientific environments. This software assurance engagement is one of the most recent performed by Trusted CI; previous ones have included Open OnDemand and HTCondor-CE.

The goal of Singularity is to provide an easy-to-use, secure, and reproducible environment for scientists to transport their studies between computational resources. As more communities use Singularity and collaborate with Sylabs, an in-depth security assessment becomes an important part of the software development process.

In the Trusted CI engagement, we conducted a thorough architectural and code review, performing an in-depth vulnerability assessment of Singularity by applying the First Principles Vulnerability Assessment (FPVA) methodology. The FPVA analysis started by mapping out the architecture and resources of the system (see figure 1 below), paying attention to trust and privilege used across the system, and identifying the high value assets in the system. From there we performed a detailed code inspection of the parts of the code that have access to the high value assets.

Overall, Singularity is well-engineered with careful attention to detail. In our engagement final report we discuss the parts of Singularity that were inspected and in which no issues were found. These parts included the majority of the functionality in the execution of a Singularity container. Though it is impossible to certify that code is free of vulnerabilities, we have substantially increased our confidence in the security of those parts of the code. We also commented on design complexities where we see no current problems in the code but that need special care to prevent future vulnerabilities from being introduced when the software is updated. We made a couple of suggestions to enhance the security of Singularity, and worked with the Singularity team to help improve their documentation related to security features.

Trusted CI, in agreement with Sylabs, published the engagement final report at the following URL: http://hdl.handle.net/2142/104612.
Figure 1. Architectural diagram for Singularity run/exec/shell.
1 James A. Kupsch, Barton P. Miller, Eduardo César, and Elisa Heymann, “First Principles Vulnerability Assessment”, 2010 ACM Cloud Computing Security Workshop (CCSW), Chicago, IL, October 2010.

Monday, September 30, 2019

Spotlight on Software Assurance and Secure Coding

Bart & Elisa at Cal-Poly Pomona, 09/27/19
Software assurance is the secure design, coding, and assessment of software to ensure it is free from vulnerabilities and works as intended. Since its inception, Trusted CI has dedicated a portion of its engagements and community outreach to software assurance. Much of this work has been led by Profs. Barton P. Miller and Elisa Heymann of the University of Wisconsin-Madison. Through engagements, training events, talks, and curricula, Bart and Elisa teach programmers, analysts, and managers how to design and build secure software, and how to assess software to find flaws and make it harder to attack.

Bart and Elisa have conducted numerous engagements for Trusted CI and other organizations. During one engagement for Trusted CI they conducted an in-depth vulnerability assessment of Singularity, an open source container platform optimized for high-performance computing (HPC) and scientific environments. The Open Science Grid engagement involved a vulnerability assessment of OSG's installation of HTCondor, a program that manages jobs submitted to the batch system. In another collaboration outside of Trusted CI, they evaluated Total Soft Bank's (TSB) Terminal Operating System, a system for managing maritime freight shipping that manages about 40 percent of the world's container terminals. That work resulted in significant improvements in the security of international shipping, reported in a paper published in Port Technology International.

The pair has conducted workshops for Internet2, Supercomputing, the Science Gateways Community Institute (SGCI), IEEE, O’Reilly, and the New Jersey FAA, and has traveled to Australia, Germany, South America, and India to give trainings. Much of their work is publicly accessible to reach the widest audience possible. Their course, “Introduction to Software Security,” has recently been added to UW-Madison’s Spring 2020 undergraduate curriculum. A pilot version of the course had 120 students enrolled, and they are optimistic the spring course will be well attended. These training resources focus on real scenarios and hands-on learning to make a lasting impact on students. The training exercises have evolved over time to cover different languages and operating systems; depending on the language, some security problems can be reduced, but they never entirely go away.

The future of secure coding relies on as much education as possible. The number of people writing programs has increased at a breathtaking rate. The resources available to them must scale to meet these demands.

Updates about upcoming Trusted CI trainings are regularly posted on our home page. Applications for an engagement with Trusted CI during the early 2020 session are due October 2nd.


Monday, November 14, 2016

NTP Rescue: one year later

Over the past two weeks I've gotten to take a look back at one of CTSC's 2015 projects, the rescue of the Network Time Protocol reference implementation, and see how far-reaching its impact has been and will be. It began with a presentation titled "Saving Time" at O'Reilly Security Conference. In this presentation I talked about the rescue and what it meant as a model for saving other failing infrastructure software.

I told the story of how NTP had become a liability not just to the science projects that depend on accurate time, but to the internet as a whole.  CTSC had a chance to make a difference in a failing system by partnering with nonprofit ICEI in a short, intense intervention. About a year later the work we made possible has been carried on by others. The NTP Security Project (NTPSec) has taken the lead, resulting in a new life for this critical infrastructure:
  • NTPSec's code base is down to 75 kloc (75,000 lines of code) from the original 227 kloc. That two-thirds reduction in attack surface has paid off: NTPSec was already immune, before disclosure, to about half of old NTP's vulnerabilities, and to 84% of those disclosed in the past year.
  • NTPSec's code is now stored in a standard git repository, accessible to all.  Its documentation has been brought up to date, and the project has begun onboarding and training new developers.
  • NTPSec's success has helped increase awareness of critical infrastructure in need, and made fixing it approachable.  Recent articles by Brady Dale of the NY Observer and the (in)famous Cory Doctorow helped spread the story.
At the time it felt like a scurrying few months amid a busy year. It seemed like a last ditch effort to ensure that our friends in science could get accurate time signals without taking on a security nightmare.  It's nice to see how much more it became.

Wednesday, December 3, 2014

Security for Software Cyberinfrastructure

NSF's CIF21 Software Vision (NSF 12-113) recognizes that "software is a critical and pervasive component of cyberinfrastructure for science, engineering, and education" and that cyberinfrastructure (CI) software "must be reliable, robust, and secure." What are community best practices for developing reliable, robust, and secure software, and what unique challenges do NSF CI software development projects face?

CTSC will be exploring this topic over the coming months by supplementing CTSC’s existing training materials on secure coding practices with guides that cover additional security topics throughout the software development lifecycle, such as:

  • identifying security objectives and addressing security threats during the software design phase to avoid patching for security issues later in the process
  • software release engineering to support the integrity and maintenance of deployed software, including security hygiene for developers to safeguard credentials and revoke credentials if compromised
  • vulnerability handling processes and software update mechanisms to address software vulnerabilities when they occur
  • software maintenance and dependency management for keeping up-to-date on security standards and fixes
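For the credential-hygiene item above, one widely used pattern (a generic sketch; `MYPROJECT_API_TOKEN` is a made-up variable name) is to keep secrets out of the source tree entirely and load them from the environment at run time:

```python
import os

def get_api_token(env_var="MYPROJECT_API_TOKEN"):
    """Fetch a secret from the environment instead of hardcoding it.

    Keeping credentials out of the repository means that a shared or
    public copy of the code does not also leak the secret, and a
    compromised token can be rotated without changing any code.
    """
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            "set %s in the environment; never commit credentials" % env_var)
    return token
```

The same idea extends to secrets files outside the repository and to dedicated secret-management services; the invariant is that nothing sensitive is ever committed to version control.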

We welcome your input and questions as we develop materials (and gather pointers to existing materials) on these topics. Please join the discussion on the CTSC Security Discussion email list.

Tuesday, April 22, 2014

Secure Coding tutorial accepted at XSEDE'14

Prof. Bart Miller will present his Secure Coding tutorial at XSEDE'14 on July 14th. With the recent coding flaws found in OpenSSL, this subject has become even more timely.

Watch this blog or the CTSC Twitter feed for more details.

Friday, October 11, 2013

Position paper accepted

The CTSC had a position paper accepted for the Workshop on Sustainable Software for Science: Practice and Experiences. Randy Heiland, Betsy Thomas, Von Welch, and Craig Jackson contributed "Toward a Research Software Security Maturity Model". Their paper, along with other accepted papers, can be found at wssspe.researchcomputing.org.uk/contributions/. The workshop will be held November 17 in conjunction with SC13.