Thursday, July 2, 2020

Survey Report: Scientific Data Security Concerns and Practices


The Trustworthy Data Working Group has published a report at https://doi.org/10.5281/zenodo.3906865 that summarizes the results of our survey of scientific data security concerns and practices. 111 participants, holding a wide range of positions and roles within their organizations and projects, completed the survey. We invite the community’s feedback on this report and input to the ongoing work of the working group via the working group mailing list. You may also send input directly to Jim Basney at jbasney@illinois.edu.

Next, the working group will be developing guidance on trustworthy data for science projects and cyberinfrastructure developers, based on the survey results and on resources from NIST, RDA, ESIP and others. Related work includes NIST 1800-25, the TRUST Principles for Digital Repositories, and Risk Assessment for Scientific Data. The working group will also be providing input into the next revision of the Open Science Cyber Risk Profile (OSCRP).

Working group membership is open to all who are interested. Please visit https://www.trustedci.org/2020-trustworthy-data for details.

Wednesday, July 1, 2020

2020 NSF Cybersecurity Summit CFP extended to July 13

The 2020 NSF Cybersecurity Summit Call for Participation (CFP) has been extended; the new deadline is COB on Monday, July 13th.


Call for Participation (CFP)

Program content for the summit is driven by our community. We invite proposals for presentations, breakout and training sessions, as well as nominations for student scholarships. The deadline for CFP submissions is July 13th. To learn more about the CFP, please visit: https://trustedci.org/cfp-2020

Tuesday, June 23, 2020

Fantastic Bits and Why They Flip

In 2019, Trusted CI examined the causes of random bit flips in scientific computing and the common measures used to mitigate their effects. (In a separate effort, we will also be issuing a similar report on data confidentiality needs in science.) Its report, “An Examination and Survey of Random Bit Flips and Scientific Computing,” was issued in December 2019, a few days before the winter holidays. As news of the report was buried amidst the holidays and New Year, we are pleased to highlight it in a bit more detail now. This post is longer than most of Trusted CI’s blog posts to give you a feel for the report and hopefully entice you to read it.

For those reading this who are not computer scientists, some background: what in the world is a “bit,” how can one “flip,” and what makes one occur randomly? Binary notation is the base-2 representation of numbers as combinations of the digits 0 and 1, in contrast to the decimal notation most of us use in our daily lives, which represents numbers as combinations of the digits 0 through 9. In binary notation, a “bit” is the atomic element of the representation: a single 1 or 0. Bits --- 0s or 1s --- can be combined to represent numbers larger than 0 or 1, in the same way that decimal digits can be put together to represent numbers larger than 9.
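To make this concrete, here is a short Python sketch (ours, not from the report) of the same idea: a number is just a combination of bits, each contributing a power of two.

```python
# The decimal number 13, written in binary, is 1101:
# one 8, one 4, no 2, and one 1.
n = 13
assert bin(n) == "0b1101"

# Recombining the bits recovers the original number.
assert int("1101", 2) == 8 + 4 + 0 + 1 == 13
```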

Binary notation has been in use for many hundreds of years. The manipulation of binary numbers made significant advances in the mid-19th century through the efforts of George Boole, who introduced what was later referred to as Boolean algebra or Boolean logic. This advance in mathematics, combined with electronic advances in switching circuits and logic gates by Claude Shannon (and others) in the 1930s, led to binary storage and logic as the basis of computing. As such, binary notation, with numbers represented as bits, is the basis of how most computers have stored and processed information since the inception of electronic computers.

However, while we see the binary digits 0 and 1 as discrete, opposite, and rigid representations, in the same way that North and South represent directions, the components of a computer that underlie these 0 and 1 representations are analog, and they reveal that 0 and 1 are in fact closer to shades of grey. Bits are typically stored magnetically and transmitted as electrical charges, and both magnetism and electrical charge can degrade or otherwise be altered by external forces, including cosmic rays and other forms of radiation and magnetism. To a computer, a “bit flip” is the change of the representation of a number from a 0 to a 1 or vice versa. Underlying that “flip” could be a sudden burst of radiation that instantly altered magnetic storage or electrical transmission, or the slow degradation of the magnetism of a magnetically stored bit from something close to 1, a “full” magnetic charge, to something less than 0.5, at which point it would be recognized and interpreted as a 0.
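In code, a bit flip is easy to simulate: XOR-ing a value with a mask containing a single 1 toggles exactly that bit. A minimal Python sketch:

```python
value = 0b1101               # decimal 13
flipped = value ^ (1 << 2)   # flip bit 2, the "4" bit
assert flipped == 0b1001     # decimal 9: one flipped bit changed the value by 4

# The higher the flipped bit, the larger the numerical error:
assert (value ^ (1 << 3)) == 0b0101   # decimal 5: flipping bit 3 changed it by 8
```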

The use of error correction in computing and communication was pioneered in the 1940s and 1950s by Richard Hamming, whose techniques use some form of redundancy to help identify and mask the effects of bit flips. Despite the creation of these techniques 70–80 years ago, error correction is still not universally used. And even when it is, there are limits to the number of errors a particular blob of data (a number, a file, a database) can incur before the errors fail to be correctable, or even to be detected at all.
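The simplest form of such redundancy is a single parity bit, which detects any odd number of flips but is blind to an even number, illustrating the limits just mentioned. A Python sketch of our own (not the specific codes Hamming devised, which can also correct errors):

```python
from functools import reduce
from operator import xor

def parity(bits):
    """Even-parity check bit: the XOR of all data bits."""
    return reduce(xor, bits, 0)

data = [1, 0, 1, 1]
check = parity(data)          # stored or transmitted alongside the data

data[2] ^= 1                  # one bit flips in storage or transit...
assert parity(data) != check  # ...and the mismatch is detected

data[0] ^= 1                  # but a second flip...
assert parity(data) == check  # ...restores the parity: the error goes unseen
```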

The report that Trusted CI published last year describes the mechanisms by which bit flips occur. These include isolated single errors due to some kind of interference; bursty faults across a number of sequential bits, due to some kind of mechanical failure or electrical interference; and malicious tampering. The document then narrows its focus to isolated errors. Malicious tampering is the focus of future reports, as are data errors or loss due to improper scientific design, mis-calibrated sensors, and outright bugs, including unaccounted-for non-determinism in computational workflows, improper roundoff and truncation errors, hardware failures, and “natural” faults.

The report then describes why single-bit faults occur — such as via cosmic rays, ionizing radiation, and corrosion in metal — the odds of faults occurring for a variety of different computing components, and potential mitigation mechanisms. The goal is to help scientists understand the risk that bit flips can either lead to scientific data that is in some way incorrect or cause an inability to reproduce scientific results in the future, which is of course a cornerstone of the scientific process.

As part of the process of documenting mitigation mechanisms, the authors of the report surveyed an array of scientists with scientific computing workflows, as well as operators of data repositories and of computing systems ranging from small clusters to large-scale DOE and NSF high-performance computing systems. The report also discusses the impact of bit flips on science. For example, in some cases, including certain types of metadata, corrupt data might be catastrophic. In other cases, such as images, or situations where multiple data streams that cross-validate each other are already being collected, the flip of a single bit or even a small handful of bits is largely or entirely lost in the noise. Finally, the report collects these mechanisms into a set of practices, divided by the components involved in scientific computing, that scientists may wish to consider implementing in order to protect their data and computation — for example, using strong hashing before storing or transmitting data, file systems with automated integrity repair built in, disks with redundancy built in, and even leveraging fault-tolerant algorithms where possible.
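As an illustration of the first of those practices, a strong hash can be recorded before data is stored or transmitted and recompared afterward; any single flipped bit changes the digest. A minimal Python sketch (the helper and its names are ours, not from the report):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

# Record the digest before a transfer; if the digest computed at the
# destination differs, at least one bit changed in transit.
```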

For the time being, this report is intended as a standalone first draft for use by the scientific computing community. Later in 2020, this report will be combined with insights from the Trusted CI “annual challenge” on trustworthy data to more broadly offer guidance on integrity issues beyond bit flips. Finally, in late 2020, we expect to integrate issues pertaining to bit flips into a future version of the Open Science Cyber Risk Profile (OSCRP). The OSCRP is a document that was first created in 2016 to develop a “risk profile” for scientists to help understand risks to their projects via threats posed through scientific computing. While the first version included issues in data integrity, a revised version will include bit flips more directly and in greater detail.

As with many Trusted CI reports, both the bit flip report and the OSCRP are intended to be living documents that will be updated over time to serve community needs. As such, comments, questions, and suggestions about this post and both documents are always welcome at info@trustedci.org.

Going forward, the community can expect additional reports from us on the topics mentioned above, as well as a variety of other topics. Please watch this space for future blog posts on these studies.


Transition to practice success story: Using machine learning to aid in the fight against cyberattacks

Artificial intelligence and machine learning are becoming key technologies in cybersecurity operations

S. Jay Yang, professor at the Rochester Institute of Technology, is a 2019 Trusted CI Fellow and the first 2020 Trusted CI Transition to Practice (TTP) Fellow. His research group has developed several pioneering machine learning, attack modeling, and simulation systems to provide predictive analytics and anticipatory cyber defense. His earlier works included FuSIA, VTAC, ViSAw, F-VLMM, and attack obfuscation modeling.

In 2019, the Center for Applied Cybersecurity Research (CACR) and OmniSOC, the security operations center for higher education, began working with Dr. Yang and his team at Rochester Institute of Technology to implement Dr. Yang’s ASSERT research prototype with the OmniSOC. ASSERT is a machine learning system that automatically categorizes attacker behaviors derived from alerts and other information into descriptive models to help a SOC operator more effectively identify related attacker behavior.

“SOC analysts are overwhelmed by intrusion alerts,” said Yang. “By providing a characteristic summary of different groups of alerts, ASSERT can bring SOC analysts’ attention to critical attacks quicker and help them make informed decisions.”

CACR staff are working with OmniSOC engineers and Yang’s team from Rochester Institute of Technology to validate the methodology and test the research prototype’s applicability to SOC workflows, using data OmniSOC aggregates from IU as the first of these explorations of machine learning approaches.

The team is using a subset of an anonymized parallel feed of (only) IU’s OmniSOC data. This data is pipelined to a prototype deployed on IU’s virtualization infrastructure. The results will be provided to OmniSOC engineers and analysts to determine if the method has utility for OmniSOC’s workflows. This project aims to catalyze further applied AI research for cybersecurity by taking advantage of the size of the security data set aggregated by OmniSOC, the expertise of CACR staff, and the relationships both organizations have within higher-ed security and research communities.

Ryan Kiser is a senior security analyst at the Indiana University Center for Applied Cybersecurity Research and one of the researchers involved in the project. We spoke with Kiser to catch up on how the project got started and where the project stands now.

Trusted CI: How did you learn about Dr. Jay Yang’s work?

Jay was a member of the Trusted CI cybersecurity cohort. The intent of the cohort was to get a group of security researchers together so that we could help make connections with the community that Trusted CI serves -- that is, the higher-ed and research communities and the facilities that are funded by NSF.

Some of Jay’s work is related to machine learning. Jay came to Bloomington to visit IU, and it was a good opportunity for us to talk about his research. It seemed like the ability to generate models of attack was potentially applicable to OmniSOC. One of his grad students was working on a series of visualizations and a way for people to interact with the results from ASSERT, and he was able to demonstrate it for us in person.

Trusted CI: Where does the project stand now?

The project happened in phases. We planned it that way from the start because we weren't sure that this would be something that could provide real value, since it's still a research prototype.

We interacted with the researchers early on to find out what they needed. We then tried to figure out how we could reduce this data down, to lower the risk of using operational data while still providing the functionality needed for the research. We determined a way to anonymize the data and got approval from the security and policy offices to use the data in the way we proposed. Once we had that approval we could start.

The first phase was to just get a testbed set up and get the prototype deployed into the testbed, then start to get the right data from OmniSOC into the prototype. That concluded in early January.

We were starting to get results, so we started the second round to see if we could make use of them. Part of that was to develop a set of use cases for OmniSOC.

Another part of the project is that we had an undergraduate student here at IU develop visualizations as part of his capstone project and we set up some additional software to enable us to do that on the testbed. That's the phase of the project that is concluding now.

Suricata is a network monitoring and alerting tool used at IU. We wanted to take a subset of the data that Suricata is generating at IU and use that as the basis for an initial analysis, an exploration. The hope is that ultimately this can be applied more broadly, that we could do something like full network sensor data.

Another tool called Zeek captures a lot more data than Suricata about what is flowing over the network. Our hope is that once the groundwork is laid using the small dataset with Suricata, OmniSOC can start using the much larger volume of data that Zeek captures, hopefully getting much more valuable results out of it.

We have learned a lot throughout this process. One of the biggest takeaways I have from this is the way in which machine learning is limited. You cannot take a dataset, throw it at a neural network, and then have a usable model that you can use to analyze other data. You have to tailor these things to the use case in order to solve a particular problem.

Our goal now is to work with OmniSOC and Jay to come up with a roadmap to realize this potential. We're going to write up what we found by the end of July and plot a path forward for Jay’s group and OmniSOC to try to bring the prototype into a real production environment.

Wednesday, June 17, 2020

Analysis of the Cybersecurity Maturity Model Certification (CMMC) and Implications for Contractors

The Cybersecurity Maturity Model Certification (CMMC) is currently being developed by the US Department of Defense (DoD) as the next generation cybersecurity requirement for contractors.  Section 1 below summarizes the publicly available information regarding CMMC, highlighting key facts, key dates, and key unknowns. Section 2 provides an analysis of how to interpret CMMC and what it may mean for future contracting efforts with the DoD. Section 3 provides the sources used. 

Key Takeaways: 

  • CMMC may be very important for future interactions with the DoD, as it establishes cybersecurity compliance requirements for *all* entities contracting (or subcontracting) with the DoD. 
  • CMMC requirements are currently planned to be included in all DoD contracts by JAN 2026.
  • CMMC is an evolution in the DoD’s treatment of CUI, adding a “verification component” to what had previously been a regime “based on trust.”
  • CMMC establishes five levels of cybersecurity requirements, ranging from “Basic Cyber Hygiene” to “Advanced/Progressive.”
  • Presently, there are still uncertainties regarding how the program will be implemented and what assessments will look like.
  • The substantive challenges, frequent changes, and emergence of COVID-19 all cast doubts on whether CMMC will actually be implemented as currently envisioned.
  • Organizations that anticipate needing CMMC certification should continue to monitor the developments in this space.  Organizations with current DoD contracts should work with their contract officer and review the CMMC document to self evaluate compliance.

1. What We Know So Far


Overview:

The Cybersecurity Maturity Model Certification (CMMC) is a cybersecurity compliance framework being developed by the Department of Defense (DoD).  CMMC is an evolution of the DoD's current requirements for the protection of Controlled Unclassified Information (CUI), outlined in DFARS 252.204-7012.  CMMC expressly acknowledges that the CUI DFARS is "based on trust", and CMMC is intended to add "a verification component".  However, CMMC goes beyond the protection of CUI and intends to establish cybersecurity requirements for every entity that contracts with the DoD (often collectively referred to as the "Defense Industrial Base", or DIB).[1]

The Office of the Undersecretary of Defense for Acquisition and Sustainment is generating CMMC in collaboration with “DoD stakeholders, University Affiliated Research Centers (UARCs), Federally Funded Research and Development Centers (FFRDCs), and industry.”[2] CMMC will combine a number of existing cybersecurity standards, including “NIST 800-171, NIST 800-53, AIA NAS9933, and others.”[3] The current proposed requirements are available in the CMMC Version 1.02 document.[4] CMMC materials also note that it will go beyond assessing the “maturity of . . . controls,” and assess “the company’s maturity/institutionalization of cybersecurity practices and processes.”

The Requirements:

The core of CMMC is a five-level “maturity model” [5] specifying required “practices” and “processes” for compliance. Every DoD contract will eventually have a CMMC level requirement that must be satisfied by defense contractors wishing to bid on that contract. Contractors must have their specified cybersecurity level evaluated and certified by an accredited “CMMC 3rd Party Assessment Organization” (C3PAO) or individual assessor.[6] The different CMMC levels are intended to protect against different adversaries or attacks.[7]

  • Level 1: “Basic Cyber Hygiene” establishes the minimal set of requirements for CMMC, which are largely a restatement of the federal contract requirements in FAR 52.204-21.[8] Every DoD contractor will be required to satisfy at least Level 1. 
  • Level 2: “Intermediate Cyber Hygiene” is an intermediate step for organizations targeting Level 3. This level is not currently planned to be included in any DoD contracts, but may be used as a competitive advantage when bidding on Level 1 contracts.
  • Level 3: “Good Cyber Hygiene” is the required level for any contract handling CUI.[9] The requirements for Level 3 are largely a restatement of NIST SP 800-171/DFARS 252.204-7012, along with an additional 20 controls. 
  • Level 4: “Proactive” focuses on the protection of CUI from Advanced Persistent Threats (APTs), drawing on controls in NIST SP 800-171B.
  • Level 5: “Advanced/Progressive” is the highest level, reserved for the most critical non-classified contracts. Level 5 also focuses on the protection of CUI from APTs, but requires even “greater depth and sophistication of cybersecurity capabilities.”
The DoD has provided some clarifications and examples on the interpretation of CMMC’s required practices and processes.[10]

Timeline:

The most recent statement from the DoD is that CMMC will be incorporated into contracts gradually over a six-year period. During the first year, CMMC is planned to be included in only a small number of contracts with major prime contractors (est. 10-15 contracts).[11] However, since these requirements will flow down to any subcontractors, the total number of impacted organizations may still be substantial. CMMC requirements will then be gradually included in more contracts until JAN 2026, when CMMC requirements are planned to be included in every DoD contract.

Cost:

Finally, the CMMC website states that “[t]he goal is for CMMC to be cost-effective and affordable for small businesses to implement at the lower CMMC levels.”[12] For instance, Katie Arrington has stated her desire for a Level 1 certification for a small-to-medium sized business to cost less than $3,000. We have been unable to find any public information on how this goal will be achieved. The FAQs also state that “[t]he cost of certification will be considered an allowable, reimbursable cost and will not be prohibitive” (emphasis added).

Governance:

The CMMC program will be governed by a recently constituted “Accreditation Body” (AB).[13] The CMMC AB is a non-profit, independent organization whose Board of Directors is composed of representatives of the DIB. The CMMC AB is operating under a Memorandum of Understanding (MOU) with the DoD, and is tasked with creating and operating the CMMC certification program, including training and accreditation of C3PAOs and individual assessors. The AB is also developing tools to help contractors achieve CMMC compliance. To date, a number of governance elements surrounding the CMMC program are unclear, including whether there will be an appeals process, how litigation will play out, and how the AB will accredit organizations to conduct assessments.

Key Facts: 

    • CMMC will eventually apply to *all* DoD contracts, including those without CUI requirements. This includes all DoD subcontractors. 
      • Only companies that supply COTS products will be excluded.
      • The estimated number of impacted organizations is ~350,000.
    • CMMC will be gradually rolled out, with requirements included in ~10-15 contracts during 2020, and complete incorporation planned by January 2026.
    • Third party certification assessment is required for all CMMC levels, even those without CUI requirements.
    • The contractor determines the scope of CMMC certifications (organization-wide or partial).[14]
    • The initial set of C3PAOs will consist of 250 companies, with additional assessors being added monthly.
    • There is no self certification.[15]
    • Certification will last 3 years.
    • Plans of Action and Milestones (POAMs) are not allowed.
    • Data breaches / incidents *may* prompt a requirement to get recertified. (Details not specified.) 
    • CMMC applies only to DoD contracts (i.e., does not carry over to other government contracts).
    • CMMC levels will be required in RFP sections L and M, and used as a “go/no-go” decision.[16]
    • CMMC levels will be evaluated equally across all contractor sizes. However, lower levels are designed to be achievable by small, non-technical contractors.

Key Dates: 

    • MAR 2019: CMMC first announced.
    • JUL - OCT 2019: CMMC “listening tour.”
    • JAN 2020: Version 1.0 of the CMMC framework released.
    • MAR 2020: CMMC Accreditation Body signed MOU with DoD.
    • MAR 2020: Version 1.02 released.
    • APR 2020: CMMC AB issues RFP for continuous monitoring.[17]
    • MID 2020[18]: Planned DFARS update from 800-171 to CMMC.
    • JUN 2020: Planned release of training from the AB.
    • JUN 2020: Planned date for incorporation into Requests for Information (RFIs) for selected prime contractors.
    • JAN 2026: Planned date for incorporation into all Requests for Proposals (RFPs).

Key Unknowns: 

    • It is not clear whether the proposed development timeline will be realized. The short history of CMMC development has shown a pattern of aggressive timeline estimates that aren’t realized. 
    • It is not clear how contracts will be assigned specific CMMC levels.
    • It is not clear how recertification will be managed.
    • It is not clear how C3PAOs will be chosen, what form the assessments will take, or how much they will cost.[19]
    • It is not clear what role DoD contracting officers (or other stakeholders) will play in evaluating cybersecurity requirements (outside of verifying the CMMC certification level).
    • It is not clear whether CMMC will apply to other vehicles, e.g., grants, cooperative agreements (CAs), or other transactional authorities (OTAs).

2. Analysis 


CMMC could be a major evolution in the way the DoD approaches cybersecurity for defense contractors. Drawing upon the CUI DFARS clause, the DoD appears to be looking for ways to better verify that the requirements it sets are actually being satisfactorily implemented. For instance, the CMMC website states that the DFARS clause is “based on trust,” whereas CMMC will add “a verification component.” Furthermore, the emphasis placed on third party assessors, the application to all DoD contracts, and the full spectrum of levels (from “Basic Cyber Hygiene” through “Advanced/Progressive”) all suggest that the DoD is looking for ways to comprehensively evaluate the cybersecurity of the DIB at scale.

Notwithstanding these stated intentions, the core of CMMC appears to be a restatement of existing cybersecurity compliance control sets, drawing from NIST SP 800-171, NIST SP 800-53, and other well known control sets. Although CMMC might use these control sets in a way that avoids the problems of most cybersecurity frameworks, most notably the “checkbox mentality,”[20] early evidence does not support this conclusion. CMMC appears to be placing a heavy emphasis on third party assessors and clearly defined “levels,” implying that CMMC compliance is likely to be evaluated in a mechanical, checkbox manner consistent with most contemporary cybersecurity compliance regimes.

Despite being built from existing control sets, the underlying structure of CMMC is new, making it difficult to evaluate what compliance will look like. Most strikingly, the core distinction between “practices” and “processes” has the potential for considerable overlap. For example, the Basic Cyber Hygiene level currently has no process requirements whatsoever, yet it includes process language in its practices (e.g., “. . . in an ad hoc manner”). Higher-level practices also employ process language, in some cases actually using the word “process” as a practice requirement (e.g., “The organization has a process . . .”). Moreover, despite using the words “processes,” “policy,” “practices,” and “plan” each as distinct requirements, none of these terms is defined.

On a positive note, the establishment of clear ‘levels’ could simplify the DoD contracting environment for cybersecurity, as this will reduce uncertainty regarding what is required for CUI compliance, and the certification process should remove redundancies when negotiating multiple contracts with multiple different contracting officers. Additionally, the highest levels (4 and 5) are expected to only apply to a select few large defense contractors, while Level 1 is designed to encompass even small, non-technical organizations. (One third party referred to it as being for “the lawn mowing company.”) This broad scope, coupled with the currently published requirements, suggests that although CMMC will apply to every defense contractor, the requirements will potentially not be too burdensome.

However, even if the individual CMMC level requirements are reasonable, CMMC could also run into problems from overly aggressive application of “flow down” requirements. “Flow down” essentially requires contractors to include the same requirements in their subcontracts. If CMMC certification is required for *every* subcontractor, this could be prohibitive for large organizations (with a large number of subcontractors) wishing to pursue relatively small DoD contracts, such as research universities. Data-specific compliance regimes limit this problem by only flowing down with the relevant data. Generalized compliance regimes may not have a clear limiting principle in this respect.

Moreover, there appears to be a conflict with regard to scope. CMMC has consistently pushed toward establishing “enterprise-wide” security certifications, in contrast to the data-specific regimes typically employed by cybersecurity standards (e.g., NIST SP 800-171 applies only to CUI). And yet, it allows the contractor to pick a specific segment of the network where the information to be protected is located. This creates an additional problem for non-IT contractors, since it is unclear which information is in scope.

The concept of enterprise-wide cybersecurity compliance audits is a daunting one, as organizations typically do not apply security controls universally, the amount of documentation likely required for a larger organization could be prohibitive, and the process of certifying enterprise-wide compliance of even basic security controls is likely to be extremely expensive.

Finally, since CMMC does not state an intent to apply to grants, CAs, or OTAs, these vehicles may not be impacted, and organizations may wish to prioritize these vehicles when multiple funding vehicles are possible options. Note, however, that some level of CMMC compliance may still be required if the work performed under these other vehicles generates or requires access to CUI.

Caveats: 

The primary caveat to evaluating the impact of CMMC is that its history of overly aggressive timelines, its frequent changes, the COVID-19 pandemic, and the upcoming election together make it questionable whether CMMC will roll out as planned. Standing up a cybersecurity requirements and assessment program for the entire DIB is a gigantic task, and the proposed system has a number of flaws that have led to a significant amount of public criticism. Most notably, there has been pushback from industry trade groups,[21] Educause,[22] and former DoD Under Secretary of Defense for Acquisition, Technology and Logistics Frank Kendall,[23] all questioning the wisdom of CMMC and calling for either significant changes or its outright abandonment.

                  3. Sources 

                  References 


                    [1] Defense Industrial Base is defined as “the Department of Defense, government, and private sector worldwide industrial complex with capabilities to perform research and development and design, produce, and maintain military weapon systems, subsystems, components, or parts to meet military requirements.” https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/dictionary.pdf.

                    [2] https://www.acq.osd.mil/cmmc/index.html.

                    [3] Other control sets and frameworks currently referenced include the CIS Critical Security Controls, DIB SCC TF WG Top 10, and the CERT Resilience Management Model.

                    [4] https://www.acq.osd.mil/cmmc/docs/CMMC_ModelMain_V1.02_20200318.pdf.

                    [5] Note, despite the name, CMMC does not operate as a typical maturity model.

                    [6] Contractors without a current CMMC certification will be allowed to submit proposals but must complete certification prior to funding.

                    [7] “For a given CMMC level, the associated controls and processes . . . will reduce risk against a specific set of cyber threats.” https://www.acq.osd.mil/cmmc/index.html. Currently this “threat protection” appears to be manifested only by a statement for each level specifying “resistance against data exfiltration” and “resilience against malicious actions.”

                    [8] https://www.law.cornell.edu/cfr/text/48/52.204-21

                    [9] DFARS 252.204-7012 will continue to apply to CUI until it is superseded by CMMC.

                    [10] https://www.acq.osd.mil/cmmc/docs/CMMC_Appendices_V1.02_20200318.pdf.

                    [11] https://fcw.com/articles/2020/01/09/cmmc-chair-cyber-cert.aspx.

                    [12] https://www.acq.osd.mil/cmmc/index.html.

                    [13] https://www.cmmcab.org.

                    [14] According to the CMMC v1.02 document, “...A DIB contractor can achieve a specific CMMC level for its entire enterprise network or for particular segment(s) or enclave(s), depending on where the information to be protected is handled and stored.”

                    [15] FAQ #12, https://www.acq.osd.mil/cmmc/faq.html.

                    [16] FAQ #4, https://www.acq.osd.mil/cmmc/faq.html.

                    [17] The CMMC AB issued an RFP on April 22, 2020 for vendors to provide “continuous monitoring” in the form of “non-intrusive” review of the company’s internet traffic, a secure portal for displaying monitoring data, and security of AB/DOD intellectual property.

                    [18] This date is unlikely to be met, since COVID-19 has delayed the public hearing for the DFARS rule change: https://fcw.com/articles/2020/05/11/cmmc-covid-dfar-rule-change-delay.aspx.

                    [19] However, the FAQ does state that the cost of certification will be reimbursable. FAQ#19 https://www.acq.osd.mil/cmmc/faq.html.

                    [20] The term “checkbox mentality” or “checkbox security” refers to a common problem in security where organizations are more concerned with their compliance with legal requirements than the actual security of their mission.

                    [21] https://www.itic.org/policy/CMMCmultiassoc_3.26_Final.pdf.

                    [22] https://er.educause.edu/articles/2020/1/us-federal-policy-perspectives-on-the-educause-2020-top-10-it-issues.

                    [23] “Cybersecurity Maturity Model Certification: An Idea Whose Time Has Not Come and Never May” by Frank Kendall, former Undersecretary of Defense for Acquisition Technology. https://www.forbes.com/sites/frankkendall/2020/04/29/cyber-security-maturity-model-certificationan-idea-whose-time-has-not-come-and-never-may/.

                    Friday, June 12, 2020

                    Removing language with racial biases

                    Effective immediately, the Indiana University Center for Applied Cybersecurity Research, Trusted CI, and the ResearchSOC are joining other organizations* in ceasing the use of “whitelist,” “blacklist,” and similar cybersecurity terms that attach negative and positive attributes to colors that are also used to identify people. This is in alignment with the principles of Indiana University and the principle that there is simply no place today for biased language with racial implications. No new materials we produce will use such language, and current materials will be edited to remove their use. Our code of conduct has been updated to bar its use in presentations at our events. We recognize there is some terminology we cannot unilaterally change without breaking inter-organizational communications (e.g., “TLP White”), and we are asking the broader cybersecurity community to reconsider such language.
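Editing existing materials can be partly automated. A minimal sketch follows, assuming GNU sed and a plain-text source tree; “allowlist” and “denylist” are common substitutes chosen here for illustration (not terms mandated above), and the `demo` directory and file are hypothetical. Review every change before committing, since some occurrences (e.g., protocol names like “TLP White”) must be left intact.

```shell
# Create a hypothetical file containing the terms to be replaced.
mkdir -p demo
printf 'Add the host to the whitelist, not the blacklist.\n' > demo/notes.txt

# Find text files containing either term, then rewrite them in place.
# GNU sed's -i flag is assumed (BSD/macOS sed needs: sed -i '').
for f in $(grep -rIl -e whitelist -e blacklist demo); do
  sed -i 's/whitelist/allowlist/g; s/blacklist/denylist/g' "$f"
done

cat demo/notes.txt   # → Add the host to the allowlist, not the denylist.
```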
                    We believe the cybersecurity community needs these measures to improve its own inclusivity, and as a small but important step in support of people of color and of more inclusive language across society. We continue to educate ourselves on these issues and will take further steps as our understanding grows.
                    Von Welch, for the IU CACR, Trusted CI, and ResearchSOC teams

                    * For example:

                    Thursday, June 11, 2020

                    Trusted CI Webinar June 22nd at 11am ET: "How worried should I be?": The worst question we keep asking about research cybersecurity

                    Indiana University's Susan Sons is presenting the talk, "How worried should I be?": The worst question we keep asking about research cybersecurity, on June 22nd at 11am (Eastern). 

                    Please register here. Be sure to check spam/junk folder for registration confirmation email.
                    Susan Sons, Deputy Director of the Research Security Operations Center (ResearchSOC) will provide a mini threat briefing (a brief brief) and a broader discussion on how the research cybersecurity community approaches threat intelligence when we're at our best, and when we're at our worst.  Please join us and bring your questions!
                    Speaker Bio:
                    Susan serves as Deputy Director of ResearchSOC and Chief Security Analyst of IU's Center for Applied Cybersecurity Research. She has a slight obsession with improving software engineering practices and the security of ICS/SCADA assets. Known vulnerabilities: Can be bribed with dark chocolate.

                    Join Trusted CI's announcements mailing list for information about upcoming events. To submit topics or requests to present, see our call for presentations. Archived presentations are available on our site under "Past Events."