Thursday, July 2, 2020

Survey Report: Scientific Data Security Concerns and Practices


The Trustworthy Data Working Group has published a report at https://doi.org/10.5281/zenodo.3906865 that summarizes the results from our survey of scientific data security concerns and practices. 111 participants completed the survey from a wide range of positions and roles within their organizations and projects. We invite the community’s feedback on this report and input to the ongoing work of the working group via the working group mailing list. You may also send input directly to Jim Basney at jbasney@illinois.edu.

Next, the working group will be developing guidance on trustworthy data for science projects and cyberinfrastructure developers, based on the survey results and on resources from NIST, RDA, ESIP and others. Related work includes NIST 1800-25, the TRUST Principles for Digital Repositories, and Risk Assessment for Scientific Data. The working group will also be providing input into the next revision of the Open Science Cyber Risk Profile (OSCRP).

Working group membership is open to all who are interested. Please visit https://www.trustedci.org/2020-trustworthy-data for details.

Wednesday, July 1, 2020

2020 NSF Cybersecurity Summit CFP extended to July 13

The 2020 NSF Cybersecurity Summit Call for Participation (CFP) has been extended; the new deadline is close of business (COB) on Monday, July 13th.


Call for Participation (CFP)

Program content for the summit is driven by our community. We invite proposals for presentations, breakout and training sessions, as well as nominations for student scholarships. The deadline for CFP submissions is July 13th. To learn more about the CFP, please visit: https://trustedci.org/cfp-2020

Tuesday, June 23, 2020

Fantastic Bits and Why They Flip

In 2019, Trusted CI examined the causes of random bit flips in scientific computing and the common measures used to mitigate their effects. (In a separate effort, we will also be issuing a similar report on data confidentiality needs in science.) Its report, “An Examination and Survey of Random Bit Flips and Scientific Computing,” was issued a few days before the winter holidays in December 2019. As news of the report was buried amidst the winter holidays and New Year, we are pleased to highlight the report in a bit more detail now. This post is longer than most of Trusted CI’s blog posts to give you a feel for the report and hopefully entice you to read it.

For those reading this who are not computer scientists, some background: What in the world is a “bit,” how can one “flip,” and what makes one occur randomly? Binary notation is the base-2 representation of numbers as combinations of the digits 0 and 1, in contrast to the decimal notation most of us use in our daily lives, which represents numbers as combinations of the digits 0 through 9. In binary notation, a “bit” is the atomic element of the representation: a single 1 or 0. Bits --- 0s or 1s --- can be combined to represent numbers larger than 0 or 1, in the same way that decimal digits can be put together to represent numbers larger than 9.

Binary notation has been in use for many hundreds of years. The manipulation of binary numbers advanced significantly in the mid-19th century through the efforts of George Boole, who introduced what was later referred to as Boolean algebra or Boolean logic. This advance in mathematics, combined with advances in electronic switching circuits and logic gates by Claude Shannon (and others) in the 1930s, led to binary storage and logic as the basis of computing. As such, binary notation, with numbers represented as bits, is the basis of how most computers have stored and processed information since the inception of electronic computers.

However, while we see the binary digits 0 and 1 as discrete, opposite, and rigid representations, in the same way that North and South represent directions, the components of a computer that underlie these 0 and 1 representations are analog, and they reveal that 0 and 1 are in fact closer to shades of grey. In practice, 0 and 1 are typically stored magnetically and transmitted as electrical charges, and both magnetism and electrical charge can degrade or be altered by external forces, including cosmic rays and other forms of radiation or magnetism. To a computer, a “bit flip” is the change of the representation of a number from a 0 to a 1 or vice versa. Underlying that “flip” could be a sudden burst of radiation that instantly altered magnetic storage or electrical transmission, or the slow degradation of a magnetically stored bit from something close to 1, a “full” magnetic charge, to something less than 0.5, at which point it would be recognized and interpreted as a 0.
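To make the idea concrete, here is a minimal Python sketch (ours, not the report’s) showing how a single flipped bit silently changes a stored value; the number 42 and the flipped bit position are arbitrary choices for illustration:

# Illustration only: flip one bit of a stored integer with an XOR mask.
value = 42                              # binary: 00101010
flipped = value ^ (1 << 3)              # flipping bit 3 yields 00100010
print(f"{value:08b} -> {flipped:08b}")  # 00101010 -> 00100010
print(value, "->", flipped)             # 42 -> 34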

The use of error correction in computing and communication was pioneered in the 1940s and 1950s by Richard Hamming, whose codes use redundancy to help identify and mask the effects of bit flips. Despite the creation of these techniques 70–80 years ago, error correction is still not universally used. And even when it is, there are limits to the number of errors a particular blob of data (a number, a file, a database) can incur before those errors can no longer be corrected, or even detected at all.
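As a minimal illustration of both the power and the limits of redundancy (a toy sketch, not code from the report), a single parity bit detects one flipped bit but is silently defeated by a second flip:

# A parity bit detects any odd number of bit flips, but it cannot correct
# them, and an even number of flips goes completely unnoticed.
def parity(bits):
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0, 0]
stored = data + [parity(data)]                 # append the parity bit

received = stored.copy()
received[2] ^= 1                               # one bit flips in storage
print(parity(received[:-1]) != received[-1])   # True: error detected

received[5] ^= 1                               # a second bit flips
print(parity(received[:-1]) != received[-1])   # False: corruption missed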

The report that Trusted CI published last year describes the ways in which bit flips occur. These include isolated single errors due to some kind of interference; bursty faults affecting a number of sequential bits, due to some kind of mechanical failure or electrical interference; and malicious tampering. The document then narrows its focus to isolated errors. Malicious tampering is left to future reports, as are data errors or loss due to improper scientific design, mis-calibrated sensors, and outright bugs, including unaccounted-for non-determinism in computational workflows, improper roundoff and truncation errors, hardware failures, and “natural” faults.

The report then describes why single-bit faults occur — such as via cosmic rays, ionizing radiation, and corrosion in metal — the odds of faults occurring for a variety of different computing components, and potential mitigation mechanisms. The goal is to help scientists understand the risk that bit faults can either lead to scientific data that is in some way incorrect or prevent scientific results from being reproduced in the future, reproducibility being, of course, a cornerstone of the scientific process.

As part of the process of documenting mitigation mechanisms, the authors of the report surveyed an array of scientists with scientific computing workflows, as well as operators of data repositories and of computing systems ranging from small clusters to large-scale DOE and NSF high-performance computing systems. The report also discusses the impact of bit flips on science. For example, in some cases, including certain types of metadata, corruption might be catastrophic. In other cases, such as images, or situations where multiple data streams are already being collected that cross-validate each other, the flip of a single bit or even a small handful of bits is largely or entirely lost in the noise. Finally, the report collects these mechanisms into a set of practices, divided by the components involved in scientific computing, that scientists may wish to consider implementing in order to protect their data and computation — for example, using strong hashing before storing or transmitting data, file systems with automated integrity repair built in, disks with redundancy built in, and even leveraging fault-tolerant algorithms where possible.
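As one sketch of the “strong hashing” practice mentioned above (our example, not the report’s; the file names are hypothetical), a SHA-256 digest recorded before storage or transfer can be compared against one computed afterward to detect corruption:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected = sha256_of("observation_run_042.dat")           # before transfer
# ... data is copied to an archive or a remote system ...
actual = sha256_of("/archive/observation_run_042.dat")    # after transfer
if actual != expected:
    raise RuntimeError("Integrity check failed: data corrupted in transit")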

For the time being, this report is intended as a standalone first draft for use by the scientific computing community. Later in 2020, this report will be combined with insights from the Trusted CI “annual challenge” on trustworthy data to more broadly offer guidance on integrity issues beyond bit flips. Finally, in late 2020, we expect to integrate issues pertaining to bit flips into a future version of the Open Science Cyber Risk Profile (OSCRP). The OSCRP is a document that was first created in 2016 to develop a “risk profile” for scientists to help understand risks to their projects via threats posed through scientific computing. While the first version included issues in data integrity, a revised version will include bit flips more directly and in greater detail.

As with many Trusted CI reports, both the bit flip report and the OSCRP are intended to be living documents that will be updated over time to serve community needs. As such, comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.
Going forward, the community can expect additional reports from us on the topics mentioned above, as well as a variety of other topics. Please watch this space for future blog posts on these studies.


Transition to practice success story: Using machine learning to aid in the fight against cyberattacks

Artificial intelligence and machine learning becoming key technologies in cybersecurity operations

S. Jay Yang, professor at the Rochester Institute of Technology, is a 2019 Trusted CI Fellow and the first 2020 Trusted CI Transition to Practice (TTP) Fellow. His research group has developed several pioneering machine learning, attack modeling, and simulation systems to provide predictive analytics and anticipatory cyber defense. His earlier works included FuSIA, VTAC, ViSAw, F-VLMM, and attack obfuscation modeling.

In 2019, the Center for Applied Cybersecurity Research (CACR) and OmniSOC, the security operations center for higher education, began working with Dr. Yang and his team at Rochester Institute of Technology to implement Dr. Yang’s ASSERT research prototype with the OmniSOC. ASSERT is a machine learning system that automatically categorizes attacker behaviors derived from alerts and other information into descriptive models to help a SOC operator more effectively identify related attacker behavior.

“SOC analysts are overwhelmed by intrusion alerts,” said Yang. “By providing a characteristic summary of different groups of alerts, ASSERT can bring SOC analysts’ attention to critical attacks quicker and help them make informed decisions.”
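To illustrate the general idea of summarizing groups of alerts (a toy sketch only; this is not ASSERT’s actual algorithm, and the alert fields are hypothetical), grouping alerts by shared attributes lets an analyst review a handful of behavior summaries instead of every individual alert:

from collections import defaultdict

# Hypothetical alerts; real SOC alerts carry many more fields.
alerts = [
    {"src": "203.0.113.7",  "signature": "SSH brute force",       "port": 22},
    {"src": "203.0.113.7",  "signature": "SSH brute force",       "port": 22},
    {"src": "198.51.100.4", "signature": "SQL injection attempt", "port": 443},
    {"src": "203.0.113.7",  "signature": "SSH brute force",       "port": 22},
]

# Group alerts that share a source and signature, then print a summary.
groups = defaultdict(list)
for alert in alerts:
    groups[(alert["src"], alert["signature"])].append(alert)

for (src, signature), members in groups.items():
    print(f"{len(members):3d} alerts  {src:15s}  {signature}")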

CACR staff are working with OmniSOC engineers and Yang’s team at Rochester Institute of Technology to validate the methodology and test the research prototype’s applicability to OmniSOC’s workflows, using data OmniSOC aggregates from IU as the first of these explorations of machine learning approaches.

The team is using a subset of an anonymized parallel feed of (only) IU’s OmniSOC data. This data is pipelined to a prototype deployed on IU’s virtualization infrastructure. The results will be provided to OmniSOC engineers and analysts to determine if the method has utility for OmniSOC’s workflows. This project aims to catalyze further applied AI research for cybersecurity by taking advantage of the size of the security data set aggregated by OmniSOC, the expertise of CACR staff, and the relationships both organizations have within higher-ed security and research communities.

Ryan Kiser is a senior security analyst at the Indiana University Center for Applied Cybersecurity Research and one of the researchers involved in the project. We spoke with Kiser to catch up on how the project got started and where the project stands now.

Trusted CI: How did you learn about Dr. Jay Yang’s work?

Jay was a member of the Trusted CI cybersecurity cohort. The intent of the cohort was to get a group of security researchers together so that we could help make connections with the community that Trusted CI serves -- that is, the higher-ed and research communities and the facilities that are funded by NSF.

Some of Jay’s work is related to machine learning. Jay visited IU in Bloomington, which was a good opportunity for us to talk about his research. It seemed like the ability to generate models of attacks was potentially applicable to OmniSOC. One of his grad students was working on a series of visualizations and a way for people to interact with the results from ASSERT, and he was able to demonstrate it for us in person.

Trusted CI: Where does the project stand now?

The project happened in phases. We planned it that way from the start because we weren't sure this would be something that could provide real value, since it's still a research prototype.

We interacted with the researchers early on to find out what they needed. We then tried to figure out how we could reduce this data down to lower the risk of using operational data while still providing the functionality needed for the research. We determined a way to anonymize the data and got approval from the security and policy offices to use the data in the way we proposed. Once we had that approval, we could start.
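The interview does not describe how the data was anonymized; as a sketch of one common approach, keyed pseudonymization (HMAC) replaces an identifier such as a source IP with a consistent opaque token, so attack patterns are preserved without exposing the original value:

import hashlib
import hmac

# Assumption: a secret key held only by the data owner; not a real key.
SECRET_KEY = b"replace-with-a-key-held-by-the-data-owner"

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

alert = {"src_ip": "192.0.2.10", "signature": "SSH brute force"}
alert["src_ip"] = pseudonymize(alert["src_ip"])   # same IP -> same token
print(alert)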

The first phase was to just get a testbed set up and get the prototype deployed into the testbed, then start to get the right data from OmniSOC into the prototype. That concluded in early January.

We were starting to get results, so we started the second round to see if we could make use of them. Part of that was to develop a set of use cases for OmniSOC.

Another part of the project is that we had an undergraduate student here at IU develop visualizations as part of his capstone project and we set up some additional software to enable us to do that on the testbed. That's the phase of the project that is concluding now.

Suricata is a network monitoring and alerting tool used at IU. We wanted to take a subset of the data that Suricata is generating at IU and use that as the basis for an initial analysis, an exploration. The hope is that ultimately this can be applied more broadly, to something like full network sensor data.

Another tool called Zeek captures a lot more data than Suricata about what is flowing over the network. Our hope is that once the groundwork is laid using the small dataset with Suricata, OmniSOC can start using the much larger volume of data that Zeek captures, hopefully getting much more valuable results out of it.
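As a rough illustration of what working with a subset of Suricata data can look like (the project’s actual pipeline is not described here; the file path and field selection are assumptions), Suricata’s EVE output is one JSON object per line, from which alert events can be filtered:

import json

def iter_alerts(path="eve.json"):
    """Yield a reduced view of alert events from a Suricata EVE JSON file."""
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("event_type") == "alert":
                yield {
                    "timestamp": event.get("timestamp"),
                    "src_ip": event.get("src_ip"),
                    "dest_ip": event.get("dest_ip"),
                    "signature": event.get("alert", {}).get("signature"),
                }

for alert in iter_alerts():
    print(alert)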

We have learned a lot throughout this process. One of the biggest takeaways I have from this is the way in which it is limited. You cannot take a dataset, throw it at a neural network, and expect a usable model for analyzing other data. You have to tailor these things to the use case in order to solve a particular problem.

Our goal now is to work with OmniSOC and Jay to come up with a roadmap for realizing this potential. We're going to write up what we found by the end of July and plot a path forward for Jay’s group and OmniSOC to try to bring it into a real production environment.

Wednesday, June 17, 2020

Analysis of the Cybersecurity Maturity Model Certification (CMMC) and Implications for Contractors

The Cybersecurity Maturity Model Certification (CMMC) is currently being developed by the US Department of Defense (DoD) as the next generation cybersecurity requirement for contractors.  Section 1 below summarizes the publicly available information regarding CMMC, highlighting key facts, key dates, and key unknowns. Section 2 provides an analysis of how to interpret CMMC and what it may mean for future contracting efforts with the DoD. Section 3 provides the sources used. 

Key Takeaways: 

  • CMMC may be very important for future interactions with the DoD, as it establishes cybersecurity compliance requirements for *all* entities contracting (or subcontracting) with the DoD. 
  • CMMC requirements are currently planned to be included in all DoD contracts by JAN 2026.
  • CMMC is an evolution in the DoD’s treatment of CUI, adding a “verification component” to what had previously been a regime “based on trust.”
  • CMMC establishes five levels of cybersecurity requirements, ranging from “Basic Cyber Hygiene” to “Advanced/Progressive.”
  • Presently, there are still uncertainties regarding how the program will be implemented and what assessments will look like.
  • The substantive challenges, frequent changes, and emergence of COVID-19 all cast doubt on whether CMMC will actually be implemented as currently envisioned.
  • Organizations that anticipate needing CMMC certification should continue to monitor developments in this space.  Organizations with current DoD contracts should work with their contracting officer and review the CMMC document to self-evaluate compliance.

1. What We Know So Far


Overview:

The Cybersecurity Maturity Model Certification (CMMC) is a cybersecurity compliance framework being developed by the Department of Defense (DoD).  CMMC is an evolution of the DoD's current requirements for the protection of Controlled Unclassified Information (CUI), outlined in DFARS 252.204-7012.  CMMC expressly acknowledges that the CUI DFARS is "based on trust," and CMMC is intended to add "a verification component."  However, CMMC goes beyond the protection of CUI and intends to establish cybersecurity requirements for every entity that contracts with the DoD (often collectively referred to as the "Defense Industrial Base," or DIB).[1]

The Office of the Undersecretary of Defense for Acquisition and Sustainment is generating CMMC in collaboration with “DoD stakeholders, University Affiliated Research Centers (UARCs), Federally Funded Research and Development Centers (FFRDCs), and industry.”[2] CMMC will combine a number of existing cybersecurity standards, including “NIST 800-171, NIST 800-53, AIA NAS9933, and others.”[3] The current proposed requirements are available in the CMMC Version 1.02 document.[4] CMMC materials also note that it will go beyond assessing the “maturity of . . . controls,” and assess “the company’s maturity/institutionalization of cybersecurity practices and processes.”

The Requirements:

The core of CMMC is a five-level “maturity model” [5] specifying required “practices” and “processes” for compliance. Every DoD contract will eventually have a CMMC level requirement that must be satisfied by defense contractors wishing to bid on that contract. Contractors must have their specified cybersecurity level evaluated and certified by an accredited “CMMC 3rd Party Assessment Organization” (C3PAO) or individual assessor.[6] The different CMMC levels are intended to protect against different adversaries or attacks.[7]

  • Level 1: “Basic Cyber Hygiene” establishes the minimal set of requirements for CMMC, which are largely a restatement of the federal contract requirements in FAR 52.204-21.[8] Every DoD contractor will be required to satisfy at least Level 1. 
  • Level 2: “Intermediate Cyber Hygiene” is an intermediate step for organizations targeting Level 3. This level is not currently planned to be included in any DoD contracts, but may be used as a competitive advantage when bidding on Level 1 contracts.
  • Level 3: “Good Cyber Hygiene” is the required level for any contract handling CUI.[9] The requirements for Level 3 are largely a restatement of NIST SP 800-171/DFARS 252.204-7012, along with an additional 20 controls. 
  • Level 4: “Proactive” focuses on the protection of CUI from Advanced Persistent Threats (APTs), drawing on controls in NIST SP 800-171B.
  • Level 5: “Advanced/Progressive” is the highest level, reserved for the most critical non-classified contracts. Level 5 also focuses on the protection of CUI from APTs, but requires even “greater depth and sophistication of cybersecurity capabilities.”
The DoD has provided some clarifications and examples on interpretation of CMMC’s required practices and processes.[10]

Timeline:

The most recent statement from DoD is that CMMC will be incorporated into contracts gradually over a six-year period. During the first year, CMMC is planned to be included only in a small number of contracts with major prime contractors (est. 10-15 contracts).[11] However, since these requirements will flow down to any subcontractors, the total number of impacted organizations may still be substantial. CMMC requirements will then be gradually included in more contracts until JAN 2026, when CMMC requirements are planned to be included in every DoD contract.

Cost:

Finally, the CMMC website states that “[t]he goal is for CMMC to be cost-effective and affordable for small businesses to implement at the lower CMMC levels.”[12] For instance, Katie Arrington has stated her desire for a Level 1 certification for a small-to-medium sized business to cost less than $3,000. Presently, we have found no evidence of how this goal will be achieved. The FAQs also state that “[t]he cost of certification will be considered an allowable, reimbursable cost and will not be prohibitive” (emphasis added).

Governance:

The CMMC program will be governed by a recently constituted “Accreditation Body” (AB).[13] The CMMC AB is a non-profit, independent organization whose Board of Directors is composed of representatives of the DIB. The CMMC AB is operating under a Memorandum of Understanding (MOU) with the DoD and is tasked with creating and operating the CMMC certification program, including training and accreditation of C3PAOs and individual assessors. The AB is also developing tools to help contractors achieve CMMC compliance. To date, a number of governance elements surrounding the CMMC program are unclear, including whether there will be an appeals process, how litigation will play out, and how the AB will accredit organizations to conduct assessments.

Key Facts: 

    • CMMC will eventually apply to *all* DoD contracts, including those without CUI requirements. This includes all DoD subcontractors. 
      • Only companies that supply commercial off-the-shelf (COTS) products will be excluded.
      • The estimated number of impacted organizations is ~350,000.
    • CMMC will be gradually rolled out, with requirements included in ~10-15 contracts during 2020, and complete incorporation planned by January 2026.
    • Third party certification assessment is required for all CMMC levels, even those without CUI requirements.
    • The contractor determines the scope of CMMC certifications (organization-wide or partial).[14]
    • The initial set of C3PAOs will consist of 250 companies, with additional assessors being added monthly.
    • There is no self certification.[15]
    • Certification will last 3 years.
    • Plans of Action and Milestones (POAMs) are not allowed.
    • Data breaches / incidents *may* prompt a requirement to get recertified. (Details not specified.) 
    • CMMC applies only to DoD contracts (i.e., does not carry over to other government contracts).
    • CMMC levels will be required in RFP sections L and M, and used as a “go/no go” decision.[16]
    • CMMC levels will be evaluated equally across all contractor sizes. However, lower levels are designed to be achievable by small, non-technical contractors.

    Key Dates: 

        • MAR 2019: CMMC first announced.
        • JUL - OCT 2019: CMMC “listening tour.”
        • JAN 2020: Version 1.0 of the CMMC framework released.
        • MAR 2020: CMMC Accreditation Body signed MOU with DoD.
        • MAR 2020: Version 1.02 released.
        • APRIL 2020: CMMC AB issues RFP for continuous monitoring.[17]
        • MID 2020[18]: Planned DFARS update from 800-171 to CMMC.
        • JUN 2020: Planned release of training from the AB.
        • JUN 2020: Planned date for incorporation into Requests for Information (RFIs) for selected prime contractors.
        • JAN 2026: Planned date for incorporation into all Request for Proposals (RFPs).

        Key Unknowns: 

            • It is not clear whether the proposed development timeline will be realized. The short history of CMMC development has shown a pattern of aggressive timeline estimates that aren’t realized. 
            • It is not clear how contracts will be assigned specific CMMC levels.
            • It is not clear how recertification will be managed.
            • It is not clear how C3PAOs will be chosen, what form the assessments will take, and how much they will cost.[19]
            • It is not clear what role DoD contracting officers (or other stakeholders) will play in evaluating cybersecurity requirements (outside of verifying the CMMC certification level).
            • It is not clear whether CMMC will apply to other vehicles; e.g. grants, cooperative agreements (CAs), or other transactional authorities (OTAs).

            2. Analysis 


            CMMC could be a major evolution in the way the DoD approaches cybersecurity for defense contractors. Drawing upon the CUI DFARS clause, the DoD appears to be looking for ways to better verify that the requirements it sets are actually being satisfactorily implemented. For instance, the CMMC website states that the DFARS clause is “based on trust,” whereas CMMC will add “a verification component.” Furthermore, the emphasis placed on third party assessors, the application to all DoD contracts, and the full spectrum of levels (from “Basic Cyber Hygiene” through “Advanced/Progressive”) all suggest that the DoD is looking for ways to comprehensively evaluate the cybersecurity of the DIB at scale.

            Notwithstanding these stated intentions, the core of CMMC appears to be a restatement of existing cybersecurity compliance control sets, drawing from NIST SP 800-171, NIST SP 800-53, and other well known control sets. Although CMMC might use these control sets in a way that avoids the problems of most cybersecurity frameworks, most notably the “checkbox mentality,”[20] early evidence does not support this conclusion. CMMC appears to be placing a heavy emphasis on third party assessors and clearly defined “levels,” implying that CMMC compliance is likely to be evaluated in a mechanical, checkbox manner consistent with most contemporary cybersecurity compliance regimes.

            Despite being built from existing control sets, the underlying structure of CMMC is new, making it difficult to evaluate what compliance will look like. Most strikingly, the core distinction between “practices” and “processes” has the potential for considerable overlap. For example, the Basic Cyber Hygiene level currently has no process requirements whatsoever, yet it includes process language in its practices (e.g., “. . . in an ad hoc manner.”). Higher levels of practices also employ process language, in some cases actually using the word “process” as a practice requirement (e.g., “The organization has a process . . .”). Moreover, despite using the words “processes,” “policy,” “practices,” and “plan” each as distinct requirements, none of these terms are defined.

            On a positive note, the establishment of clear ‘levels’ could simplify the DoD contracting environment for cybersecurity, as this will reduce uncertainty regarding what is required for CUI compliance, and the certification process should remove redundancies when negotiating multiple contracts with multiple different contracting officers. Additionally, the highest levels (4 and 5) are expected to apply only to a select few large defense contractors, while Level 1 is designed to encompass even small, non-technical organizations. (One third party referred to it as being for “the lawn mowing company.”) This broad scope, coupled with the currently published requirements, suggests that although CMMC will apply to every defense contractor, the requirements will potentially not be too burdensome.

            However, even if the individual CMMC level requirements are reasonable, CMMC could also run into problems from overly aggressive application of “flow down” requirements. “Flow down” essentially requires contractors to include the same requirements in their subcontracts. If CMMC certification is required for *every* subcontractor, this could be prohibitive for large organizations (with a large number of subcontractors) wishing to pursue relatively small DoD contracts, such as research universities. Data-specific compliance regimes limit this problem by only flowing down with the relevant data. Generalized compliance regimes may not have a clear limiting principle in this respect.

            Moreover, there appears to be a conflict with regard to scope. CMMC has consistently pushed toward establishing “enterprise-wide” security certifications, in contrast to the data-specific regimes typically employed by cybersecurity standards (e.g., NIST SP 800-171 applies only to CUI). And yet, it allows the contractor to pick a specific segment of the network where the information to be protected is located. This creates an additional problem for non-IT contractors, since it is unclear which information is in scope.

            The concept of enterprise-wide cybersecurity compliance audits is a daunting one, as organizations typically do not apply security controls universally, the amount of documentation likely required for a larger organization could be prohibitive, and the process of certifying enterprise-wide compliance of even basic security controls is likely to be extremely expensive.

            Finally, since CMMC does not state an intent to apply to grants, CAs, or OTAs, these vehicles may not be impacted, and organizations may wish to prioritize these vehicles when multiple funding vehicles are possible options. Note, however, that some level of CMMC compliance may still be required if the work performed under these other vehicles generates or requires access to CUI.

            Caveats: 

                  The primary caveat to evaluating the impact of CMMC is that its history of overly aggressive timelines and frequent changes, together with the COVID-19 pandemic and the upcoming election, make it questionable whether it will roll out as planned. Standing up a cybersecurity requirements and assessment program for the entire DIB is a gigantic task, and the proposed system has a number of flaws that have led to a significant amount of public criticism. Most notably, there has been pushback from industry trade groups,[21] Educause,[22] and former DoD Under Secretary of Defense for Acquisition, Technology and Logistics Frank Kendall,[23] all questioning the wisdom of CMMC and calling for either significant changes or its outright abandonment.

                  3. Sources 

                  References 


                    [1] Defense Industrial Base is defined as “the Department of Defense, government, and private sector worldwide industrial complex with capabilities to perform research and development and design, produce, and maintain military weapon systems, subsystems, components, or parts to meet military requirements.” https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/dictionary.pdf.

                    [2] https://www.acq.osd.mil/cmmc/index.html.

                    [3] Other control sets and frameworks currently referenced include the CIS Critical Security Controls, DIB SCC TF WG Top 10, and the CERT Resilience Management Model.

                    [4] https://www.acq.osd.mil/cmmc/docs/CMMC_ModelMain_V1.02_20200318.pdf.

                    [5] Note, despite the name, CMMC does not operate as a typical maturity model.

                    [6] Contractors without a current CMMC certification will be allowed to submit proposals but must complete certification prior to funding.

                    [7] “For a given CMMC level, the associated controls and processes . . . will reduce risk against a specific set of cyber threats.” https://www.acq.osd.mil/cmmc/index.html. Currently this “threat protection” appears to be manifested only by a statement for each level specifying “resistance against data exfiltration” and “resilience against malicious actions.”

                    [8] https://www.law.cornell.edu/cfr/text/48/52.204-21

                    [9] DFARS 252.204-7012 will continue to apply to CUI until it is superseded by CMMC.

                    [10] https://www.acq.osd.mil/cmmc/docs/CMMC_Appendices_V1.02_20200318.pdf.

                    [11] https://fcw.com/articles/2020/01/09/cmmc-chair-cyber-cert.aspx.

                    [12] https://www.acq.osd.mil/cmmc/index.html.

                    [13] https://www.cmmcab.org.

                    [14] According to the CMMC v1.02 document, “...A DIB contractor can achieve a specific CMMC level for its entire enterprise network or for particular segment(s) or enclave(s), depending on where the information to be protected is handled and stored.”

                    [15] FAQ #12, https://www.acq.osd.mil/cmmc/faq.html.

                    [16] FAQ #4, https://www.acq.osd.mil/cmmc/faq.html.

                    [17] The CMMC AB issued an RFP on April 22, 2020 for vendors to provide “continuous monitoring” in the form of “non-intrusive” review of the company’s internet traffic, a secure portal for displaying monitoring data, and security of AB/DOD intellectual property.

                    [18] This date is unlikely to be met since COVID-19 has delayed public hearing for the DFARS rule change https://fcw.com/articles/2020/05/11/cmmc-covid-dfar-rule-change-delay.aspx.

                    [19] However, the FAQ does state that the cost of certification will be reimbursable. FAQ#19 https://www.acq.osd.mil/cmmc/faq.html.

                    [20] The term “checkbox mentality” or “checkbox security” refers to a common problem in security where organizations are more concerned with their compliance with legal requirements than the actual security of their mission.

                    [21] https://www.itic.org/policy/CMMCmultiassoc_3.26_Final.pdf.

                    [22] https://er.educause.edu/articles/2020/1/us-federal-policy-perspectives-on-the-educause-2020-top-10-it-issues.

                    [23] “Cybersecurity Maturity Model Certification: An Idea Whose Time Has Not Come and Never May” by Frank Kendall, former Undersecretary of Defense for Acquisition Technology https://www.forbes.com/sites/frankkendall/2020/04/29/cyber-security-maturity-model-certificationan-idea-whose-time-has-not-come-and-never-may/?utm_source=Sailthru&utm_medium=email&utm_campaign=EBB%2004.30.20&utm_term=Editorial%20-%20Early%20Bird%20Brief#32829a773bf2.

                    Friday, June 12, 2020

                    Removing language with racial biases

                    Effective immediately, the Indiana University Center for Applied Cybersecurity Research, Trusted CI, and the ResearchSOC are joining other organizations* in ceasing the use of terms such as “whitelist,” “blacklist,” and similar cybersecurity terms that imply negative and positive attributes and use colors also used to identify people. This is in alignment with the principles of Indiana University and the principle that there simply is no place today for biased language with racial implications. No new materials we produce will use such language, and current materials will be edited to remove its use. Our code of conduct has been updated to bar its use in presentations at our events. We recognize there is some terminology we cannot unilaterally change without breaking inter-organizational communications (e.g., “TLP White”), and we are asking the broader cybersecurity community to reconsider such language.
                    We believe the cybersecurity community needs these measures to improve its own inclusivity and as a small but important statement in support of people of color and more inclusive terminology across society. We continue to educate ourselves on these issues and will take further steps as our understanding grows.
                    Von Welch, for the IU CACR, Trusted CI, and ResearchSOC teams

                    * For example:

                    Thursday, June 11, 2020

                    Trusted CI Webinar June 22nd at 11am ET: "How worried should I be?": The worst question we keep asking about research cybersecurity

                    Indiana University's Susan Sons is presenting the talk, "How worried should I be?": The worst question we keep asking about research cybersecurity, on June 22nd at 11am (Eastern). 

                    Please register here. Be sure to check spam/junk folder for registration confirmation email.
                    Susan Sons, Deputy Director of the Research Security Operations Center (ResearchSOC) will provide a mini threat briefing (a brief brief) and a broader discussion on how the research cybersecurity community approaches threat intelligence when we're at our best, and when we're at our worst.  Please join us and bring your questions!
                    Speaker Bio:
                    Susan serves as Deputy Director of ResearchSOC and Chief Security Analyst of IU's Center for Applied Cybersecurity Research.  She has a slight obsession with improving software engineering practices and the security of ICS/SCADA assets.  Known vulnerabilities: Can be bribed with dark chocolate.

                    Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."

                    2020 NSF Cybersecurity Summit Call For Participation - NOW OPEN - Deadline is Monday, June 29th

                    It is our pleasure to announce that the 2020 NSF Cybersecurity Summit is scheduled to take place Tuesday, September 22 through Thursday the 24th. Due to the impact of the global pandemic, we will hold this year’s summit on-line instead of in-person as originally planned.

                    The final program is still evolving, but we will maintain the mission to provide a format designed to increase the NSF community’s understanding of cybersecurity strategies that strengthen trustworthy science: what data, processes, and systems are crucial to the scientific mission, what risks they face, and how to protect them.

                     

                    Call for Participation (CFP)

                    Program content for the summit is driven by our community. We invite proposals for presentations, breakout and training sessions, as well as nominations for student scholarships. The deadline for CFP submissions is June 29th. To learn more about the CFP, please visit: https://trustedci.org/cfp-2020

                     

                    Student Program

                    Due to the Summit moving to a virtual format, we are offering access to all active students who apply (or until we reach our headcount limit). More information and application can be found at: trustedci.org/2020-student-program 

                    On behalf of the 2020 NSF Cybersecurity Summit organizers and program committee, we welcome your participation and hope to see you in September.

                    More information can be found at https://trustedci.org/2020-nsf-summit

                    Wednesday, June 3, 2020

                    Trusted CI resources for Cyberinfrastructure Center of Excellence proposers (NSF 20-082)

                    Dear colleagues who are planning on submitting a proposal in response to NSF 20-082 on Cyberinfrastructure Centers of Excellence, Trusted CI has two resources that may benefit your proposal:
                    • Our proposal and annual reports are available online at https://trustedci.org/reports.
                    • Our paper describing our experiences and lessons learned as a center of excellence: Trusted CI Experiences in Cybersecurity and Service to Open Science. PEARC'19: Practice and Experience in Advanced Research Computing, 2019. https://doi.org/10.1145/3332186.3340601
                    We also welcome discussions regarding collaboration, though we ask you to contact us very soon given the June 15th deadline is rapidly approaching.

                    Wednesday, May 27, 2020

                    2020 NSF Cybersecurity Summit will be Online September 22-24, 2020

                    Dear Trusted CI community,

                    This year’s NSF Cybersecurity Summit will be online, with no in-person meeting as originally planned. Please continue to hold September 22-24, 2020 for this event.

                    This decision was based on the feedback you gave us to our survey, discussions with our program committee, our assessment of conditions with an emphasis on your safety, and reports we are hearing from many of you that travel funding will be challenging for your institutions. We regret not being able to interact in person but look forward to an interactive event and seeing you again in 2021.

                    Please watch the Trusted CI Blog and Announcement email list for more updates, including a Call for Participation and subsequently a program. We are working with our program committee, who deserve extra thanks for their efforts in these new circumstances, to develop a program that takes advantage of the online nature to deliver a quality event we hope will make up for some of what we will miss from being together in-person.

                    Best,

                    Von Welch
                    Director, Trusted CI

                    Tuesday, May 19, 2020

                    Transition to practice success story: Securing payment card readers with Skim Reaper

                    Skimmers want the data on your payment cards

                    Transition to practice is really a passion of mine. It is wonderful to write papers and have great ideas. But it is even cooler to get a million people using it. – Professor Patrick Traynor.

                    Patrick Traynor, Ph.D., is the John and Mary Lou Dasburg Preeminent Chair in Engineering and a professor in the Department of Computer and Information Science and Engineering (CISE) at the University of Florida. His research focuses on the security of mobile systems, with a concentration on telecommunications infrastructure and mobile devices. He is also a co-founder of Pindrop Security, CryptoDrop, and Skim Reaper. (Read his full bio at the end of this article.)

                    Trusted CI spoke with Professor Traynor about his experience transitioning Skim Reaper from a lab experiment into a real-world product.

                    Trusted CI: How did the Skim Reaper project get started?

                    We were doing work on how mobile payments are done in the developing world. Imagine that you don't have a credit card, you don't have access to a traditional bank, but you have a cell phone. People were texting each other and trading top-up minutes as currency. Safaricom in Kenya started allowing people to exchange cash instead of minutes.

                    The first digital payment system for much of the developing world is called M-Pesa. There would be tremendous advantages to bringing such systems here to the US. But in the process of doing that work, we were looking at how traditional payment systems work.

                    Skim Reaper was an offshoot of an NSF-funded project on trying to secure modern payments (NSF grant 1526718). It's not like credit cards are going to disappear anytime soon. We're going to have more types of payments, so we're going to have to secure these legacy things.

                    I had my credit card stolen six times in three years. When I talk to academics about credit card fraud, everyone treats it as a solved problem. When I went through the process with a debit card, the money was out of my account for a long period. I started thinking about how people who are financially vulnerable might go long periods without cash. I thought we needed to do something—to look at how we can push back against credit card skimming.

                    Trusted CI: How does Skim Reaper work?

                    The Skim Reaper is a card that's swiped or dipped into the payment terminal, just like a credit card. It's a device about the size and shape of a credit card. It determines how many times it's being read. That's a very simplistic version of what it's doing. But with the kind of credit card skimming that we're going after, the adversary adds a second read head to the card reader. They'll do that by overlaying it. Or they'll put one deep inside, called deep insert.

                    The card reader itself is going to get a normal read, but so too will the attacker. By developing a device that counts the number of times it's being read and then compares that to the number of times it should be being read, we know whether you have additional read heads in place and therefore whether there's a skimmer.
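                    The logic Traynor describes can be sketched in a few lines (a toy illustration of the idea, not the actual Skim Reaper firmware): count how many read heads touch the card during one swipe or dip and compare that to the single read a legitimate terminal should perform.

EXPECTED_READS = 1   # a legitimate terminal reads the card exactly once

def check_for_skimmer(observed_reads: int) -> str:
    """Flag a possible skimmer if extra read heads were detected."""
    if observed_reads > EXPECTED_READS:
        return "ALERT: extra read head detected -- possible skimmer"
    return "OK: only the expected read head detected"

print(check_for_skimmer(1))   # OK
print(check_for_skimmer(2))   # ALERT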

                    If a skimmer is in place, the device will turn on a red LCD. If the blue LCD lights up, everything is fine. Something like 10% of the population is red-green colorblind. So, we chose a blue light instead of green. We tried to be as inclusive as possible in the design.

                    Trusted CI: Did you have any NSF funding for Skim Reaper?

                    We had no explicit NSF funding for Skim Reaper other than the grant to study securing modern payments that preceded it. I have not applied for TTP-explicit funding before, but I am in the process of applying for some now. I have also applied for SBIR funding in the past as part of my work on Pindrop.

                    Trusted CI: Tell us how things got started.

                    When we started on this path, we didn't have access to credit card skimmers. We started by looking online and trying to reach out to various law enforcement agencies, many of whom, of course, said, “who are you and why are you asking for credit card skimmers?” But we got quite lucky. We were in the process of prototyping our devices. We'd seen enough of the things online and had access to a few small units.

                    Then, we happened to meet the NYPD Financial Crimes Task Force, who were attending a conference about traditional theft in retail at the University of Florida in 2017. When we met these detectives, we ran back to our lab, grabbed our prototypes, and showed them. They said they could use something like that. We flew up to New York in January 2018, at our own expense, with our devices so they could teach us everything they knew about skimmers; we then used our devices on skimmers they had previously recovered. We were in New York City for three days and the NYPD was fantastic. I mean, they were amazing. The care and the skill. They took us through the world of skimming, how it works, where it happens, and the motivations. We worked with the detectives during the day, and we'd go back at night and rewrite user interfaces.

                    Initially, our card had a box with a little LCD screen that would give instructions. They were great instructions for lab guys like me. But that's not what the detectives wanted. They said, “nope, it's pretty much got to give us a thumbs up, thumbs down. The tiny print is not going to help us when we're out in the field; you just have to give us a clear signal.” We'd run back to the hotel, rewrite user interfaces, bring them back, and test them again. Then on the second day, we saw how they were using them. The original devices we had were literally held together with electrical tape and Gorilla Glue. We had to find a Home Depot in Manhattan on the second night because we had to essentially tape them back together.

                    We learned a lot about how users wanted to use the device, how durable it would have to be, and what the procedures around its use might be. That experience was invaluable. We kept in close contact and left five prototypes with the NYPD. About a month and a half later, they came back to us and said that they had used the device on an ATM in Queens. They had a positive hit. They did a stakeout, and ultimately were able to make an arrest and secure a conviction based on the use of our device.

                    Trusted CI: How did the project then transition to a product?

                    From there, things grew quickly. We started getting media coverage, and all of a sudden this project that had happened really out of my own shame at having my credit card stolen so many times resulted in probably 2,000 phone calls to my office and thousands and thousands of emails. We realized this was widespread. We were prototyping as fast as we possibly could. It probably took us fifteen hours to make a single device. But now we had requests for thousands. We had to try to do this professionally because, as we saw, we couldn't send out something that lasted only a couple of days. We needed to transition this into a real product. And that's what we spent the next year doing.

                    Trusted CI: Talk about the scope of your potential customers.

                    We started off working with law enforcement because they had the most examples of credit card skimmers and they're the ones who are generally called in to deal with the problem when it exists. But ultimately what we're doing is trying to make this available to companies, vendors, and retailers, because they're the ones that have the point-of-sale units. They're the ones who are being attacked. It’s the same reason that every retailer needs to have locks on their doors. We think every retailer that takes credit cards, debit cards, or gift cards needs to have a Skim Reaper. They need to know that their customers are going to be secure when they make those payments. And in fact, we've heard anecdotally, and I know for myself, that when consumers feel like yours is the store where their card was skimmed, they stop going there. We think it's on retailers to deploy these devices.

                    Trusted CI: What about banks or ATM manufacturers?

                    We are working with multiple companies in the financial industry. There are multiple banks of varying sizes that we currently have as customers.

                    One of the most important things for a transition that I've found is it's not just about having a good pitch. It's not just about having a good product; it's about getting in front of the right people. The media coverage has really helped. (How the 'Skim Reaper' is trying to kill credit card skimming devices) (How the 'Skim Reaper' protects you from credit card skimmers)

                    Many industries don't want to talk about security problems, at least publicly. And that's a natural thing. You don't want your consumers to think that you are more vulnerable than the competition. But by working with law enforcement, and by doing media outreach with them, other businesses are able to admit that this is a problem for them, and they often reach out directly to us.

                    Trusted CI: Without disclosing any customers, how big have you grown?

                    We started selling in August of 2019, and we're now deployed in 20 states and internationally.

                    Trusted CI: Would you like to make any acknowledgments?

                    I really want to thank the NYPD Financial Crimes Task Force. If they hadn't taken a chance on us early on, we probably wouldn't be having this conversation. But I'm also grateful to the local police department here in Gainesville, Florida. They've been tremendous. Beyond that, the Department of Agriculture and Consumer Services in the State of Florida is responsible for ensuring that gas pumps dispense the correct amount that you pay for. But because they're on the ground and out inspecting pumps, they're often the ones that come across skimmers. And for the last two years they've been a tremendous resource, and we very much enjoy working with them. All these folks continue to help us by giving us access to the newest skimmers that are out there so that we can make sure that, number one, our devices continue to work. And number two, we have new things in the pipeline which will come out soon.

                    Again, I can't speak highly enough about our law enforcement partners. These folks work hard and need the resources to do their job as effectively as possible. And all throughout this transition process, it just wouldn't have been possible without willing law enforcement partners.

                    Trusted CI: Tell us about your support structure.

                    We provide videos and we often Skype with customers to make sure that they know how to use it correctly. So far, we've had minimal requests for support. But again, the experience with the NYPD showed us how to simplify the interface. A tool that's likely to give retailers any kind of help in this space has got to be easy enough that it can be learned in two minutes.

                    Trusted CI: How widespread is skimming?

                    This is one of the interesting questions we're trying to answer. The best example comes from colleagues at the Department of Agriculture. They often pull out skimmers from gas pumps and they're wrapped in tape and on occasion they'll have numbers on them. I was told a story where somebody in one day pulled out a number 17, a number 32, and he said, “that's great, I have two but where's one through 16, 18 through 31? And what's the stopping number?” Their guess, based on how many they were pulling, was that they were getting about 5% of what's out there.

                    Prior to the Skim Reaper, there really weren't any tools to know the numbers because these things are often undetected. Sometimes they are recovered and taken away, sometimes the bad guys come back and take them and move them to other spots. Knowing the scale of the problem is quite difficult. But I think anecdotally, we all know someone who's had their credit card stolen. And if it's not you, you're lucky.

                    Trusted CI: Talk about some of the other things you're working on.

                    I'm fortunate to have a wonderful group of incredibly talented and diverse students here at the University of Florida. We're working on a huge range of problems, everything from security and microfinance to detecting deep fake voices and disinformation. We're also looking at strengthening two-factor authentication for common users. Our work really runs the gamut. And that's only possible because of NSF funding. Most of my students are indeed funded by the NSF, and we're quite fortunate.

                    Skim Reaper is my third startup. I want to try and help incentivize junior scientists and help make that path a lot easier because it's tough, but it's been worth it.

                    Trusted CI: Why is transitioning to practice important?

                    In a keynote I gave, I had a slightly darker take on this. The NSF is funding us for a long time and we're quite fortunate and we're doing great work. But at some point, they might say, “We're just not winning the battle. The return on investment isn't high enough.” We may need to do this for our own survival. And quite frankly, the world needs us, and the world needs our innovation. I like that more positive spin on it.

                    Trusted CI: Any last thoughts?

                    One last thing I do want to plug. We made a conscious decision that our devices are manufactured in the US. They're manufactured in Houston. This is important to us because the ideas were generated in the US and we're now helping to create high-tech jobs in Houston. We think this is a great example of the reasons to invest in science. We're creating jobs from the ideation to the manufacturing phase. And they're all happening here in the US.

                    Bio

                    Patrick Traynor is the John and Mary Lou Dasburg Preeminent Chair in Engineering and a Professor in the Department of Computer and Information Science and Engineering (CISE) at the University of Florida. His research focuses on the security of mobile systems, with a concentration on telecommunications infrastructure and mobile devices. His research has uncovered critical vulnerabilities in cellular networks, developed techniques to find credit card skimmers that have been adopted by law enforcement and created robust approaches to detecting and combating Caller-ID scams.

                    He received a CAREER Award from the National Science Foundation in 2010, was named a Sloan Fellow in 2014, a Fellow of the Center for Financial Inclusion at Accion in 2016 and a Kavli Fellow in 2017. Professor Traynor earned his Ph.D and M.S. in Computer Science and Engineering from the Pennsylvania State University in 2008 and 2004, respectively, and his B.S. in Computer Science from the University of Richmond in 2002. He is also a co-founder of Pindrop Security, CryptoDrop, and Skim Reaper.

                    Monday, May 18, 2020

                    Trusted CI policies for managing information that you share with us

                    Trusted CI greatly values the trust the community has in us. That trust enables you to share your experiences with us, knowing we'll treat what is shared with appropriate respect and confidence. We also recognize that you look to us to synthesize and share experiences and lessons broadly, serving the community as a knowledge hub and allowing all of us to build on each other's knowledge. Hence, Trusted CI seeks to balance two principles:

                    1. Trusted CI controls the management and distribution of confidential data such that community members are comfortable sharing such data with Trusted CI during the course of collaborations, engagements, etc.
                    2. Trusted CI seeks to share information broadly with the community to facilitate learning from common experiences.
                    As Trusted CI has grown and matured, we have recognized the need to mature our processes to ensure that we live up to these principles and that those processes are well understood by both Trusted CI team members and our collaborators. To that end, we have published two new policies that we adhere to:

                    We are making these policies public along with the rest of our cybersecurity program to promote your trust and provide examples for others to use. As always, we recognize the expertise in the community and welcome your feedback and suggestions.

                    Results of survey on 2020 Cybersecurity Summit

                    The NSF Cybersecurity Summit Organizers would like to thank the community for providing comments on planning for the 2020 Summit in light of the pandemic. We are considering options in consultation with the program committee, taking this community input into account, and will announce our plans for the Summit as soon as they are finalized. We felt the survey responses might be of interest to others who face similar event-planning uncertainties. Here are some key takeaways from the survey:
                    • The community greatly appreciated the opportunity to voice its opinion on the 2020 Summit.
                    • Face-to-face events provide many benefits for social networking that virtual meetings can't yet replicate.
                    • However, the majority of respondents prefer having some type of virtual summit.
                    • There were multiple comments that full-day programs are not desirable ("Zoom fatigue").

                    The aggregated results are as follows:

                    [Charts: aggregated survey responses, including the complete legend for the Question #2 responses.]

                    Friday, May 8, 2020

                    Open Storage Network (OSN) and Trusted CI Complete CyberCheckup


                    The Open Storage Network (OSN) is an NSF-funded pilot project (OAC 1747483, 1747490, 1747493, 1747507, and 1747552). The OSN pilot project's goal is to design and test a cooperative multi-institution, research-oriented storage and transfer service, including a governance model to manage both the technical system and user allocations. The outcome of the pilot project will direct the design of a national-scale infrastructure that can serve as a storage substrate alongside NSF's other national investments (e.g., XSEDE) and the network implementations supported by NSF's CC* program.

                    OSN is a distributed storage infrastructure accessible via national research and education networks (NRENs). To evaluate the current state of this infrastructure, OSN performed a Trusted CI CyberCheckup, an engagee-driven self-evaluation of a project's cybersecurity readiness. Trusted CI staff provided templates for the CyberCheckup as well as assistance in filling them out.

                    OSN staff first used Trusted CI's "Securing Commodity IT in Scientific CI Projects" spreadsheet to evaluate five facilities: NCSA, SDSC, RENCI, MGHPCC, and JHU. These results were then used to evaluate the OSN system as a whole. OSN staff next completed Trusted CI's "Information Security Program Evaluation" questionnaire, which was used to capture the current state of the OSN information security program and to find potential security policy gaps in the pilot program. OSN will use the output from these CyberCheckup documents to better secure future phases of the project.

                    Monday, May 4, 2020

                    Windows 7 end-of-life security mitigation

                    On January 14, 2020, Windows 7 entered its end-of-life phase. This means Microsoft no longer offers patches or security updates for Windows 7. As a result, Windows 7 systems will remain vulnerable to attacks that currently supported Windows operating systems will have patched in future updates. While our guidance would ideally be to upgrade any Windows 7 system to a supported operating system, we realize that some legacy software and hardware used across the medical and scientific communities may not be compatible.

                    Alternative approaches were raised in discussions on the Trusted CI discuss email list, in an article from the University of Michigan, in an article from CSO Online about isolating the device, and in an article from Electronic Specifier focused on medical devices. Drawing on these resources, we offer the following guidelines to reduce the risk the system poses to your cyberinfrastructure environment, depending on the needs of the host.

                    Universal controls that apply to all scenarios:
                    • Reduce the functionality of the device to only the legacy software needed by doing the following:
                      • Uninstall all unnecessary software
                      • Turn off all unneeded network services (a short script to help identify them is sketched after this list)
                      • Don't use the system for web browsing or other non-essential network client activities
                    • Do not open any new Office documents on the system
                    • Monitor traffic between the host and the network at its boundaries
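
                    To help with the "unneeded network services" item above, the short Python sketch below lists every listening TCP socket on the host together with the process that owns it, so you can decide what to disable. It is only an illustration, assuming a Python interpreter and the third-party psutil package can be installed on the host; it makes no judgment about which services the legacy application actually needs.

                    # list_listeners.py -- enumerate listening TCP services on the host.
                    # Sketch only: assumes Python with the psutil package is available.
                    import psutil

                    def listening_services():
                        """Yield (ip, port, process name) for every listening TCP socket."""
                        for conn in psutil.net_connections(kind="inet"):
                            if conn.status != psutil.CONN_LISTEN:
                                continue  # skip established connections and UDP sockets
                            try:
                                owner = psutil.Process(conn.pid).name() if conn.pid else "<unknown>"
                            except (psutil.NoSuchProcess, psutil.AccessDenied):
                                owner = "<unavailable>"
                            yield conn.laddr.ip, conn.laddr.port, owner

                    if __name__ == "__main__":
                        # Review this list and turn off anything the legacy software does not require.
                        for ip, port, owner in sorted(set(listening_services())):
                            print("%s:%s\t%s" % (ip, port, owner))

                    Running the script from an elevated (administrator) prompt gives the most complete view of which processes own which ports.
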
                    Scenario 1: The host is a control system and has no need for network access
                    • Remove the host from the network, preventing access
                    • Prevent staff from accidentally reconnecting the host by covering its Ethernet and USB ports with warning stickers

                    Scenario 2: The host is a control system that the user physically accesses; it needs network access to receive data from sensors and to upload data to a server

                    • Segment the host from the network via a restricted VLAN, allowing access only to chosen devices
                    • Use local firewall rules to allow only outbound traffic from the host for data uploads and inbound traffic from the specific sensor IPs (see the sketch after this list)
                    • Disable outside access with a GPO (Group Policy Object) or your local policy
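
                    As a concrete illustration of the firewall rules item above, the Python sketch below uses Windows' built-in netsh advfirewall interface (present on Windows 7) to set a default-deny policy and then add two narrow allow rules. This is a sketch, not a definitive configuration: the sensor addresses, upload server, and port are hypothetical placeholders, the script assumes a Python interpreter on the host, and it must be run from an elevated (administrator) prompt. The same netsh commands can also be typed by hand at a command prompt.

                    # win7_scenario2_firewall.py -- default-deny policy with narrow allow rules.
                    # Sketch only: all addresses and ports below are hypothetical placeholders.
                    import subprocess

                    SENSOR_IPS = ["192.0.2.10", "192.0.2.11"]   # placeholder sensor addresses
                    UPLOAD_SERVER = "198.51.100.20"             # placeholder data-upload server
                    UPLOAD_PORT = "443"                         # placeholder upload port

                    def netsh(*args):
                        """Run a netsh advfirewall command and raise if it fails."""
                        subprocess.run(["netsh", "advfirewall"] + list(args), check=True)

                    # Block all inbound and outbound traffic by default on every profile.
                    netsh("set", "allprofiles", "firewallpolicy", "blockinbound,blockoutbound")

                    # Allow inbound traffic only from the specific sensor IPs.
                    netsh("firewall", "add", "rule", "name=allow_sensor_inbound",
                          "dir=in", "action=allow", "remoteip=" + ",".join(SENSOR_IPS))

                    # Allow outbound traffic only to the data-upload server.
                    netsh("firewall", "add", "rule", "name=allow_data_upload",
                          "dir=out", "action=allow", "protocol=TCP",
                          "remoteip=" + UPLOAD_SERVER, "remoteport=" + UPLOAD_PORT)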

                    Scenario 3: The host is a control system that needs to allow remote-control access and to serve data
                    [Image: Trusted CI worked with the Gemini Observatory in the past on a cyberinfrastructure engagement.]

                    • Insert a secure bastion host between the Windows 7 host and the network, requiring users to go through the bastion host before accessing the Windows 7 host
                    • Ensure the bastion host follows security best practices for that role, including multi-factor authentication (MFA)
                    • Use local firewall rules to limit access to the Windows 7 host (see the sketch below)
                    • Disable outside access with a GPO (Group Policy Object)
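
                    For the firewall item above, the sketch below shows one way to scope remote-control access to the bastion host alone, again driving netsh advfirewall from Python. It assumes the remote-control service is Remote Desktop on TCP port 3389 and uses a hypothetical placeholder for the bastion host's address; if the host serves data or is controlled over other ports, add matching rules scoped to the narrowest possible set of remote addresses.

                    # win7_scenario3_firewall.py -- limit remote-control access to the bastion host.
                    # Sketch only: BASTION_IP is a hypothetical placeholder, and RDP is assumed.
                    import subprocess

                    BASTION_IP = "203.0.113.5"   # placeholder address of the bastion host
                    RDP_PORT = "3389"            # assumes Remote Desktop; adjust for other tools

                    def netsh(*args):
                        """Run a netsh advfirewall command and raise if it fails."""
                        subprocess.run(["netsh", "advfirewall"] + list(args), check=True)

                    # Block all unsolicited inbound traffic by default.
                    netsh("set", "allprofiles", "firewallpolicy", "blockinbound,allowoutbound")

                    # Allow inbound remote-control traffic only when it comes from the bastion host.
                    netsh("firewall", "add", "rule", "name=allow_rdp_from_bastion",
                          "dir=in", "action=allow", "protocol=TCP",
                          "localport=" + RDP_PORT, "remoteip=" + BASTION_IP)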

                    These steps reduce both the potential risk and the impact of a security event should the system become compromised. In addition to these steps, ensure project leadership is informed of the additional risk this system introduces. This list is also applicable to other unsupported systems that are vulnerable. Users of Windows 7 systems can also pay Microsoft for Extended Security Updates for up to three years; the cost varies by Windows 7 edition and doubles in price each year.