Thursday, April 30, 2020
Community Input on 2020 NSF Cybersecurity Summit Amid COVID-19 Crisis
The 2020 NSF Cybersecurity Summit organizers recognize the disruption the Coronavirus has caused to our professional and personal lives and the great uncertainty we are all experiencing. We feel it is critical to have the community weigh in on this year's Summit planning process, and we would appreciate your input on the following four questions about this year's NSF Cybersecurity Summit. Stay tuned for an announcement about the status of the Summit in the coming weeks.
Tuesday, April 21, 2020
Study of scientific data security concerns and practices
The Trustworthy Data Working Group invites scientific researchers and the cyberinfrastructure professionals who support them to complete a short survey about scientific data security concerns and practices.
The working group is a collaborative effort of Trusted CI, the four NSF Big Data Innovation Hubs, the NSF CI CoE Pilot, the Ostrom Workshop on Data Management and Information Governance, the NSF Engagement and Performance Operations Center (EPOC), the Indiana Geological and Water Survey, the Open Storage Network, and other interested community members. The goal of the working group is to understand scientific data security concerns and provide guidance on ensuring the trustworthiness of data.
The purpose of this survey is to:
- Improve broad understanding of the range of data security concerns and practices for open science
- Provide input and help shape new guidance for science projects and cyberinfrastructure providers
- Serve as an opportunity to consider local data security concerns during a voluntary follow-up interview
Survey results, along with the analysis and applicable guidance, will be published by the Trustworthy Data Working Group as a freely available report by the end of 2020. Please visit https://trustedci.org/trustworthy-data for updated information about the study.
For any questions or comments, please contact Jim Basney at jbasney@illinois.edu.
Monday, April 20, 2020
Trusted CI Releases Assessment Report for Singularity
In the first half of 2019, Trusted CI collaborated with the Sylabs team and the Open Science Grid (OSG) to assess the security of Singularity (https://sylabs.io/singularity/), an open source container platform optimized for high-performance computing (HPC) and scientific environments. This software assurance engagement is one of the most recent performed by Trusted CI; previous ones have included Open OnDemand and HTCondor-CE.
The goal of Singularity is to provide an easy-to-use, secure, and reproducible environment for scientists to transport their studies between computational resources. As more communities use Singularity and collaborate with Sylabs, an in-depth security assessment becomes an important part of the software development process.
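To make that portability concrete, here is a minimal, hedged sketch of the kind of reproducible execution Singularity enables. The image name, script path, and input file below are invented for illustration; only the standard "singularity exec" invocation reflects Singularity's actual command-line interface.

    # Hedged illustration: run the same analysis inside the same container
    # image on a laptop or on an HPC login node. The image and command are
    # made-up examples, not part of the Trusted CI assessment.
    import subprocess

    IMAGE = "analysis.sif"  # a container image built elsewhere (example name)
    COMMAND = ["python3", "/opt/analysis/run.py", "--input", "data.csv"]  # example

    # "singularity exec <image> <command>" runs the command inside the
    # container as the invoking, unprivileged user.
    subprocess.run(["singularity", "exec", IMAGE, *COMMAND], check=True)

Because the container image carries the analysis environment with it, the same command behaves the same way on each system it is moved to.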
In the Trusted CI engagement, we conducted a thorough architectural and code review, performing an in-depth vulnerability assessment of Singularity by applying the First Principles Vulnerability Assessment (FPVA) methodology. The FPVA analysis started by mapping out the architecture and resources of the system (see Figure 1 below), paying attention to the trust and privilege used across the system and identifying its high-value assets. From there we performed a detailed code inspection of the parts of the code that have access to those high-value assets.
Overall, Singularity is well engineered with careful attention to detail. In our engagement final report, we discuss the parts of Singularity that were inspected and in which no issues were found; these parts included the majority of the functionality involved in executing a Singularity container. Though it is impossible to certify that code is free of vulnerabilities, we have substantially increased our confidence in the security of those parts of the code. We also commented on design complexities where we see no current problems in the code but where special care is needed to prevent future vulnerabilities from being introduced as the software is updated. We made a couple of suggestions to enhance the security of Singularity, and we worked with the Singularity team to help improve their documentation related to security features.
Trusted CI, in agreement with Sylabs, published the engagement final report at the following URL: http://hdl.handle.net/2142/104612.
Figure 1. Architectural diagram for Singularity run/exec/shell.
Tuesday, April 14, 2020
Transition to Practice success story, part two: How CILogon powers science gateways
Different authentication scenarios must all work together for science gateways
Marlon Pierce, Ph.D., is director of the Cyberinfrastructure Integration Research Center at Indiana University (formerly the Science Gateways Research Center). Pierce leads distributed systems research into scalable cyberinfrastructure to support computational and data-driven science.
Trusted CI spoke with Pierce about how science gateways use CILogon. CILogon enables researchers to log on to cyberinfrastructure (CI). CILogon provides an integrated open source identity and access management platform for research collaborations, combining federated identity management (Shibboleth, InCommon) with collaborative organization management (COmanage). (Read the interview with Jim Basney who leads the CILogon project >>)
Pierce and his team have worked with Jim Basney and the CILogon team for quite a while, especially on two projects. One is an NSF-funded project called Science Gateway Platform as a Service (SciGaP), which offers their Apache Airavata software as a service.
A single installation of the platform's code base can support many different gateway tenants, and each of those gateway tenants can support many different users.
“We might have a gateway that could be out of anywhere,” says Pierce. “They could work with communities all over the country or all over the world that are not tied to Indiana University, for example, where we are. We work with PIs from all over the country who want to offer their gateways.”
The essence of a gateway is that it supports communities of users who need to be authenticated; a gateway is not an anonymous science service, and that is an important characteristic. Users need to be able to log in and carry out a sequence of actions that are recorded, so that the gateway can keep track of the work they do.
“You could think of those as creating digital objects,” says Pierce, “so the ability to do federated authentication is a cornerstone of all these projects which we outsource to the CILogon team. That is extremely valuable because it’s already solved for us.”
Pierce says they can now automate this through some new services that CILogon provides. "Now every time we create a new gateway tenant, it also becomes a new tenant inside the CILogon system. That gateway could decide what authentication providers it wants to use. It could turn on the spigot and say, 'come with whatever you have.' For example, 'I only want this for my university.' CILogon provides many different capabilities."
Pierce has another NSF-funded project, Custos (NSF Award 1840003), which is about halfway finished and also incorporates CILogon.
“It’s a cyberinfrastructure program that Jim Basney is co-PI on,” says Pierce, “that takes on some of the things we learned from SciGaP. Many gateways want some of our services but not all of them. Let’s say a gateway has solved for their own purposes this problem of running a job with a supercomputer but they'd like to outsource some of the other things that we built. For example, the security pieces. CILogon is a key part of the Custos project for us to provide a targeted set of capabilities that are specifically for gateway use cases, with authentication being the cornerstone.”
Currently, Pierce estimates that between 2,000 and 3,000 science gateway users are directly impacted by CILogon.
Pierce and his team first started using CILogon several years ago with a project called SeaGrid that was part of the SciGaP effort. At the time, their other projects were using in-house authentication methods, but while designing SeaGrid's security infrastructure they realized early on that CILogon was the way to go.
“We’d worked with Jim on an earlier project in 2010 or 2012,” says Pierce. “We realized there was no other service that offered this type of reliability and the type of support we get from them.”
“They've done all the hard work with the ‘plumbing’ of authentication systems, so we don't have to do it. There are things out there like OpenID Connect, which they support, but we needed more than that. Since gateways typically work with academic partners, we need solutions where any number of different authentication scenarios all work together in a way that is appropriate for a gateway.”
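For readers wondering what outsourcing login to CILogon looks like in practice, below is a minimal sketch of the standard OpenID Connect authorization-code flow a gateway might use. The client ID, client secret, and redirect URI are placeholders that a real gateway would obtain by registering with CILogon; the only CILogon-specific assumption here is its published OIDC discovery document, from which the endpoint URLs are read.

    # Sketch of an OpenID Connect authorization-code login against CILogon.
    # Placeholder credentials; a real gateway registers its own client first.
    import secrets
    import urllib.parse

    import requests

    DISCOVERY_URL = "https://cilogon.org/.well-known/openid-configuration"
    CLIENT_ID = "example-client-id"          # placeholder
    CLIENT_SECRET = "example-client-secret"  # placeholder
    REDIRECT_URI = "https://gateway.example.org/oidc/callback"  # placeholder

    config = requests.get(DISCOVERY_URL, timeout=10).json()

    # Step 1: send the user's browser to CILogon, where they choose their
    # home institution (federated identity) and authenticate there.
    state = secrets.token_urlsafe(16)
    auth_url = config["authorization_endpoint"] + "?" + urllib.parse.urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",
        "state": state,
    })
    print("Redirect the user to:", auth_url)

    # Step 2: after CILogon redirects back with ?code=..., exchange the code
    # for tokens that identify the user to the gateway.
    def exchange_code(code):
        resp = requests.post(config["token_endpoint"], data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()  # contains an id_token describing the user

In a real gateway this flow is normally handled by an off-the-shelf OIDC client library rather than hand-written requests; the sketch just shows where the federated-identity work is pushed onto CILogon.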
SciGaP is funded by the National Science Foundation's Software Infrastructure for Sustained Innovation (SI2) program through award numbers 1339774, 1339856, and 1339649.
Monday, April 13, 2020
Trusted CI Webinar April 27th at 11am ET: Trustworthy Decision Making and Artificial Intelligence with Arjan Durresi
Indiana University – Purdue University Indianapolis's Arjan Durresi is presenting the talk, "Trustworthy Decision Making and Artificial Intelligence" on April 27th at 11am (Eastern).
Please register here. Be sure to check your spam/junk folder for the registration confirmation email.
Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."
Algorithms and computers have long been used to support decision making in many fields of human endeavor. Examples include optimization techniques in engineering, statistics in experiment design, modeling of natural phenomena, and so on. In all such uses of algorithms and computers, an essential question has been how much we can trust them: what are the potential errors of such models, and what is the range of their applicability? Over time, the algorithms and computers we use have become more powerful and more complex, and today we call them Artificial Intelligence, which includes various machine learning and other algorithmic techniques. But as the power, complexity, and use of algorithms and computers grow, the question of how much we should trust them becomes more crucial. Their complexity might hide more potential errors, especially interdependencies, and their solutions might be difficult to explain. To deal with these problems, we have developed an evidence- and measurement-based trust management system that can be used to measure trust in human-to-human, human-to-machine, and machine-to-machine interactions. In this talk, we will introduce our trust system and its validation on real stock market data. Furthermore, we will discuss the use of our trust system to build more secure computer systems, filter fake news on social networks, and develop better collective decision-making support systems for managing natural resources, as well as other potential uses.
Speaker Bio: Arjan Durresi is a Professor of Computer Science at Indiana University Purdue University in Indianapolis, Indiana. In the past, he held positions at LSU and The Ohio State University. His research interests include trustworthy decision making and AI, networking, and security. He has published about 100 articles in journals, over 200 articles in conference proceedings, and seven book chapters. He has also authored over thirty contributions to standardization organizations such as IETF, ATM Forum, ITU, ANSI, and TIA.
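To give a flavor of what "evidence and measurement-based" trust can mean, here is a small, generic sketch of one common approach from the trust-management literature, the beta reputation model; it is our illustrative example, not a description of Durresi's system.

    # Generic illustration (not the speaker's system): a beta-reputation
    # style trust score computed from counts of positive and negative
    # evidence about an agent, human or machine.
    from dataclasses import dataclass

    @dataclass
    class Evidence:
        positive: int = 0  # observations supporting trust
        negative: int = 0  # observations against trust

    def trust_score(e):
        """Expected trust under a Beta(positive + 1, negative + 1) belief."""
        return (e.positive + 1) / (e.positive + e.negative + 2)

    def confidence(e, baseline=10):
        """Rough confidence in the score, growing with the amount of evidence."""
        n = e.positive + e.negative
        return n / (n + baseline)

    # Example: a model whose recommendations were followed 40 times,
    # with 36 good outcomes and 4 bad ones.
    model = Evidence(positive=36, negative=4)
    print("trust = %.2f, confidence = %.2f" % (trust_score(model), confidence(model)))

The point of such a measure is that trust is computed from recorded evidence rather than asserted, which is the general idea the webinar explores in much greater depth.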
Wednesday, April 8, 2020
The extra Zoom setting you may not know about to control access for phone-in attendees
What if I told you that your Zoom meeting password does not apply to users calling in by phone?
Over the past several weeks the rest of the world has found out about the Zoom video conferencing system. In this time of crisis, it has become essential for work, school, and even play. However, people have also been finding out about the security and privacy issues related to Zoom. I'm now going to share one more with you.
Trusted CI staff have discovered that, by default, meetings that have been protected with a meeting password do not require that password for users calling in by phone. There is an extra setting to control by-phone access, and we think this extra setting may not be known to many Zoom users. Users who call in using one of the Zoom gateway phone numbers will not normally be prompted for a password, which potentially leaves sensitive meetings vulnerable to eavesdropping. Although this issue isn't a vulnerability in Zoom itself, it allows the users setting up meetings to create a vulnerability in their own meetings; it is a user interface and security awareness issue.
In order to enable password protection for by-phone users, you must locate the setting "Require password for participants joining by phone" as shown below, which in some interfaces may be located in the advanced settings.
Screenshot of the extra "by phone" setting to consider to protect a meeting
A second, closely related issue is that enabling this "Require password for participants joining by phone" setting does not immediately change the configuration of meetings that have already been set up. The owner of those meetings must open each one, edit it, and save it without making any changes. According to our observations, this regenerates the meeting and applies a phone password to it. The phone password is generated automatically and becomes part of the meeting invitation, so you would then share the new password and meeting invite with the participants who need it.
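For meeting owners with many existing meetings, the sketch below shows one way to enumerate upcoming meetings through Zoom's REST API so you know which ones still need to be edited and re-saved. The endpoint path comes from Zoom's v2 API, but the token handling is an assumption on our part, and the script cannot tell whether a given meeting already enforces a phone password; it only lists candidates for review.

    # Hedged sketch: list a user's upcoming Zoom meetings so the owner knows
    # which ones to open, edit, and re-save after enabling "Require password
    # for participants joining by phone". Assumes a valid bearer token in the
    # ZOOM_TOKEN environment variable (an assumption, not Zoom guidance).
    import os

    import requests

    API_BASE = "https://api.zoom.us/v2"
    TOKEN = os.environ["ZOOM_TOKEN"]

    def list_upcoming_meetings(user_id="me"):
        resp = requests.get(
            API_BASE + "/users/" + user_id + "/meetings",
            headers={"Authorization": "Bearer " + TOKEN},
            params={"type": "upcoming", "page_size": 100},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("meetings", [])

    for meeting in list_upcoming_meetings():
        # Each of these may still need a manual edit-and-save in the Zoom web
        # interface so the new phone-password policy is applied to it.
        print(meeting.get("id"), meeting.get("start_time"), meeting.get("topic"))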
A third issue to be aware of is that phone number caller ID information can be faked. Although this is not new by any means, there has been little to no warning about it in relation to using Zoom. This vulnerability isn't Zoom's fault, as the flaw exists in the design of the phone system.
Trusted CI's test of faking a number
However, because of this, you should not use a phone number in the participants list to authenticate a participant. A malicious user could change their number to that of an authorized user to avoid detection.
During our research into these issues, we found that most of the existing documentation outside of the Zoom website itself does not mention the extra "Require password for participants joining by phone" setting that must be applied. Similarly, it is not obvious when creating a meeting and setting a password that this must be done, as the interface gives no feedback that an extra step is needed or that your meeting will not otherwise be fully protected.
The Zoom meeting password interface, showing no indicators of an extra by-phone setting.
Several of our security colleagues were also unaware of this extra "Require a password for by-phone users" setting, suggesting that the setting is unknown to most Zoom users.
Our recommendations for Zoom, the company, are to add some type of indication near the meeting password setting informing users that there is an additional setting for controlling access by phone, and to inform their existing install base about these issues. Alternatively, this option should be enabled by default.
How Trusted CI discovered the issues
On February 26th, 2020, Mark Krenz set up a meeting with a colleague on the COSMIC2 science gateway project and set a meeting password to try to protect the meeting. When the colleague called in by phone, Mark asked whether they had needed a password to get in; to his surprise, they had not. Mark then performed further testing of the issue with the help of Trusted CI members Andrew Adams, Shane Filus, Ishan Abhinit, and Scott Russell. It was quickly found that changing the "require password by phone" setting did not apply it to existing meetings, and that those meetings needed to be edited and re-saved.
The team wrote up a security report and submitted it to Zoom on March 6th through the hackerone.com website, which acts as a gateway for submitting such reports to companies. This meant there was a 30-day embargo on releasing the information to the public. During this time, the COVID-19 crisis began to unfold in Western countries and people started using Zoom heavily, which almost immediately led to many reports of unwanted incidents within Zoom meetings (so-called "Zoombombing") and of other vulnerabilities being discovered and announced. During this period we discussed the issue internally, met with Zoom, and provided our recommendations for a way forward. We also monitored the media for any signs that this issue was being exploited, but found no direct evidence that it was, and we looked for these recommendations in the news reports surfacing over the past month and found none that directly mentioned this issue.
Related links:
- https://zoom.us/security
- https://blog.zoom.us/wordpress/2020/04/01/a-message-to-our-users/
- https://blog.zoom.us/wordpress/2020/03/27/best-practices-for-securing-your-virtual-classroom/
- https://www.nbcboston.com/news/local/fbi-warns-of-zoom-bombing-after-2-mass-schools-have-web-conferences-hijacked/2099692/
- https://www.forbes.com/sites/anthonykarcz/2020/03/29/do-these-4-things-to-keep-hackers-out-of-your-zoom-call/#149f3040486e
- https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto-a-quick-look-at-the-confidentiality-of-zoom-meetings/