Pre-Draft Call for Comments | Information Security Handbook: A Guide for Managers

NIST plans to update Special Publication (SP) 800-100, Information Security Handbook: A Guide for Managers, and is issuing a Pre-Draft Call for Comments to solicit feedback from users. The public comment period is open through February 23, 2024.

Since SP 800-100 was published in October 2006, NIST has developed new frameworks for cybersecurity and risk management and has released major updates to critical resources and references. This revision would focus the document’s scope for its intended audience and ensure alignment with other NIST guidance. Before revising, NIST invites users and stakeholders to suggest changes that would improve the document’s effectiveness, relevance, and general usefulness with regard to cybersecurity governance and the intersections between various organizational roles and information security.

NIST welcomes feedback and input on any aspect of SP 800-100 and additionally proposes a non-exhaustive list of questions and topics for consideration:

  • What role do you fill in your organization?
  • How have you used or referenced SP 800-100?
  • What specific topics in SP 800-100 are most useful to you?
  • What challenges have you faced in applying the guidance in SP 800-100?
  • Is the document’s current level of specificity appropriate, too detailed, or too general? If the level of specificity is not appropriate, why?
  • How can NIST improve the alignment between SP 800-100 and other frameworks and publications?
  • What new cybersecurity capabilities, challenges, or topics should be addressed?
  • What current topics or sections in the document are out of scope, no longer relevant, or better addressed elsewhere?
  • Are there other substantive suggestions that would improve the document?
  • Specific topics to consider for revision or improvement:
    • Cybersecurity governance
    • Role of information security in the software development life cycle (e.g., agile development)
    • Contingency planning and the intersection of roles across organizations
    • Risk management
      • Enterprise risk management
      • Supply chain risk management and acquisitions
      • Metrics development and cybersecurity scorecard
    • System authorizations
    • Relationship between privacy and information security programs

The comment period is open through February 23, 2024. See the publication details for information on how to submit comments, such as using the comment template.

Read More

NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

Publication lays out “adversarial machine learning” threats, describing mitigation strategies and their limitations.

January 04, 2024

  • AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
  • New guidance documents the types of these attacks, along with mitigation approaches.
  • No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise.
An AI system can malfunction if an adversary finds a way to confuse its decision making. In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic. This “evasion” attack is one of numerous adversarial tactics described in a new NIST publication intended to help outline the types of attacks we might expect along with approaches to mitigate them. Credit: N. Hanacek/NIST

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”   

AI systems have permeated modern society, working in capacities ranging from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To learn to perform these tasks, they are trained on vast quantities of data: An autonomous vehicle might be shown images of highways and streets with road signs, for example, while a chatbot based on a large language model (LLM) might be exposed to records of online conversations. This data helps the AI predict how to respond in a given situation. 

One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviors by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts. 

“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”

In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the new report offers an overview of the sorts of attacks its AI products might suffer and corresponding approaches to reduce the damage. 

The report considers the four major types of attacks: evasion, poisoning, privacy and abuse attacks. It also classifies them according to multiple criteria such as the attacker’s goals and objectives, capabilities, and knowledge.

Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road. 
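
As a hedged illustration of the mechanics behind evasion attacks (not an example from the NIST report), the sketch below perturbs an input in the direction of the model’s loss gradient, a fast-gradient-sign-style attack. The model and the image/label pair are hypothetical placeholders.

```python
# Minimal sketch of a gradient-based evasion attack on a deployed classifier.
# Assumes a differentiable PyTorch model and a single (image, label) example;
# both are hypothetical placeholders for illustration only.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step within a small perturbation budget in the direction that most
    # increases the loss, then keep pixel values in a valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```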

Poisoning attacks occur in the training phase by introducing corrupted data. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions. 
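
A toy sketch of the same idea, using a synthetic dataset and a simple scikit-learn classifier (none of which comes from the report): flipping the labels of a small fraction of training samples before fitting can measurably change the resulting model.

```python
# Toy sketch of a label-flipping poisoning attack on training data.
# The dataset and model are synthetic placeholders for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0).astype(int)                              # clean labels

poison_idx = rng.choice(len(y), size=30, replace=False)    # ~3% of the data
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]        # flip those labels

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)
print("clean model accuracy:   ", clean_model.score(X, y))
print("poisoned model accuracy:", poisoned_model.score(X, y))
```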

Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources. Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
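
One way to picture this kind of attack is model extraction: an adversary sends many seemingly legitimate queries and fits a local surrogate to the answers, which can then be probed offline. The sketch below is purely illustrative; `query_victim` is a hypothetical stand-in for a remote prediction API.

```python
# Illustrative sketch of a model-extraction style privacy attack.
# `query_victim` is a hypothetical stand-in for a deployed model's API.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def query_victim(inputs):
    # Placeholder for the victim model's remote prediction endpoint.
    return (inputs[:, 0] + inputs[:, 1] > 0).astype(int)

rng = np.random.default_rng(1)
probe_inputs = rng.normal(size=(5000, 2))    # many "legitimate" queries
probe_labels = query_victim(probe_inputs)    # the victim's answers

# Fit a local surrogate that approximates the victim's behavior; the attacker
# can now search the surrogate for weak spots without touching the real system.
surrogate = DecisionTreeClassifier(max_depth=5).fit(probe_inputs, probe_labels)
print("surrogate agreement with victim:", surrogate.score(probe_inputs, probe_labels))
```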

Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use. 

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.” 

The authors — who also included Robust Intelligence Inc. researchers Alie Fordyce and Hyrum Anderson — break down each of these classes of attacks into subcategories and add approaches for mitigating them, though the publication acknowledges that the defenses AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said. 

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.” 

Related Links

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)

Released January 4, 2024

Cloud Misconfigurations Pose Risk for Data Exfiltration and Cyberattacks

Cloud environments are rich with sensitive data and have become a prime target for threat actors. They can be large, and the multiple applications, connections, and configurations can be difficult to understand and monitor. Cloud security failures often occur because manual controls, including settings and access privileges, are not set correctly, and organizations have mistakenly exposed applications, network segments, storage, and APIs to the public. This complexity creates a risk of breach, and victims often do not know that their cloud environments have been breached. According to IBM’s Cost of a Data Breach Report 2023, data breaches resulting from misconfigured cloud infrastructure cost an average of $4 million to resolve. Threat actors typically access and exfiltrate data through exploitable misconfigured systems; these incidents often involve the loss, theft, or compromise of personally identifiable information (PII), which can be used to conduct subsequent cyberattacks.
Recent incidents of misconfigurations highlight cloud security risks and the need for organizations to secure their cloud environments to help prevent data from being mistakenly exposed. For example, researchers discovered a dual privilege escalation chain impacting Google Kubernetes Engine (GKE) stemming from specific misconfigurations in GKE’s FluentBit logging agent and Anthos Service Mesh (ASM). Vulnerabilities were identified in the default configuration of FluentBit, which automatically runs on all clusters, and in the default privileges within ASM. When combined, these vulnerabilities allow threat actors with existing Kubernetes cluster access to escalate privileges, enabling data theft, deployment of malicious pods, and disruption of cluster operations.
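
While the specific GKE fixes are out of scope here, a routine defensive check against this class of issue is auditing which subjects hold powerful cluster roles. Below is a minimal sketch using the official Kubernetes Python client, assuming a kubeconfig with read access to RBAC objects; it is illustrative, not a remediation for the GKE findings described above.

```python
# Minimal sketch: list ClusterRoleBindings that grant cluster-admin, so overly
# broad default privileges stand out. Assumes a kubeconfig with RBAC read access.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name == "cluster-admin":
        for subject in (binding.subjects or []):
            print(f"{binding.metadata.name}: {subject.kind} {subject.name}")
```
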
Additionally, Ateam, a Japanese game developer with multiple games on Google Play, had left a Google Drive cloud storage instance configured to “Anyone on the internet with the link can view” since March 2017. The misconfigured instance contained 1,369 files with personal information, including full names, email addresses, phone numbers, customer management numbers, and terminal (device) identification numbers. Search engines could index this information, making it more accessible to threat actors. Furthermore, the TuneFab converter, used to convert copyrighted music from popular streaming platforms such as Spotify and Apple Music, exposed more than 151 million parsed records of users’ private data, such as IP addresses, user IDs, emails, and device information. The exposure was caused by a MongoDB misconfiguration that left the data passwordless, publicly accessible, and indexed by public IoT search engines.
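
As a hedged illustration of how this class of exposure can be checked for (not how the TuneFab incident was discovered), the sketch below uses pymongo to test whether a MongoDB endpoint accepts unauthenticated clients; the host name is a placeholder.

```python
# Illustrative check: does a MongoDB endpoint accept unauthenticated clients?
# "mongo.example.internal" is a placeholder host, not a real system.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def allows_anonymous_access(host, port=27017):
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # Listing database names requires a server round trip; if no credentials
        # are required, this succeeds and the instance is effectively public.
        client.list_database_names()
        return True
    except (OperationFailure, ServerSelectionTimeoutError):
        return False

print(allows_anonymous_access("mongo.example.internal"))
```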

Increase in Cryptocurrency Scams | NJCCIC

The NJCCIC has observed increased reports of cryptocurrency scams over the past few weeks, consistent with open-source reporting. The scams begin with a sophisticated phishing attack, often initiated via social media direct messages or posts, and use a crypto wallet-draining technique to target a wide range of blockchain networks. These cryptocurrency stealers are malicious programs or scripts designed to transfer cryptocurrency from victims’ wallets without their consent. Attribution is frequently obfuscated because many of these campaigns are perpetrated by phishing groups that offer wallet-draining scripts in scam-as-a-service operations.
The cybercriminal begins the scam by creating fake AirDrop or phishing campaigns, often promoted on social media or via email, that offer free tokens to lure users. The target is directed to a fraudulent website that mimics a genuine token distribution platform and requests a connection to their crypto wallet in order to claim the tokens. The target is then enticed to engage with a malicious smart contract, inadvertently granting the cybercriminal access to their funds and enabling token theft without further user interaction. Cybercriminals may use methods like mixers or multiple transfers to obscure their tracks and liquidate the stolen assets. Social engineering tactics in recent campaigns include fake job interviews via LinkedIn, romance scams, and other quick cryptocurrency return promotions offered through various social media platforms.
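
Because many wallet-draining scams hinge on the victim approving a malicious token allowance, one practical self-check is reviewing which addresses still hold spending allowances for your tokens. The sketch below uses web3.py against a generic ERC-20 contract; the RPC URL, token address, owner, and spender are all placeholders, not real entities.

```python
# Hedged sketch: query an ERC-20 token's allowance() to see how much a given
# "spender" address is still permitted to move from a wallet.
# The RPC URL, token address, owner, and spender below are all placeholders.
from web3 import Web3

ERC20_ALLOWANCE_ABI = [{
    "constant": True,
    "inputs": [{"name": "owner", "type": "address"},
               {"name": "spender", "type": "address"}],
    "name": "allowance",
    "outputs": [{"name": "", "type": "uint256"}],
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider("https://rpc.example.invalid"))   # placeholder endpoint
token = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000001"),
    abi=ERC20_ALLOWANCE_ABI,
)
owner = Web3.to_checksum_address("0x0000000000000000000000000000000000000002")
spender = Web3.to_checksum_address("0x0000000000000000000000000000000000000003")

# A large, unexpected allowance granted to an unknown spender is a red flag
# worth revoking through the token contract or the wallet's approval tools.
print("remaining allowance:", token.functions.allowance(owner, spender).call())
```
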
Image source: ESET H2 Threat Report
According to ESET’s H2 Threat Report, the number of observed cryptocurrency threats decreased by 21 percent in the latter half of 2023; however, a sudden increase in cryptostealer activity was primarily caused by the rise of Lumma Stealer (78.9 percent), a malware-as-a-service (MaaS) infostealer capable of stealing passwords, multi-factor authentication (MFA) data, configuration data, browser cookies, cryptocurrency wallet data, and more. This infostealer was observed spreading via the Discord chat platform and through a recent fake browser update campaign. In this campaign, a compromised website displays a fake notice that a browser update is necessary to access the site. If the update button is clicked, the malicious payload is downloaded, delivering malware such as RedLine, Amadey, or Lumma Stealer to the victim’s machine.
The NJCCIC recommends that users exercise caution when interacting with social media posts, direct messages, texts, or emails that may contain misinformation and refrain from responding to or clicking links delivered in communications from unknown or unverified senders. Additionally, users are strongly encouraged to enable MFA where available, choosing an authentication app such as Google Authenticator or Microsoft Authenticator. In the case of credential exposure or theft, MFA will greatly reduce the risk of account compromise. If theft of funds has occurred, victims are urged to report the activity to the FBI’s IC3 immediately, their local FBI field office, and local law enforcement. These scams can also be reported to the NJCCIC and the FTC. Further information and recommendations can be found in the FTC article, the Cryptonews article, and the LinkedIn article.

IN CASE YOU MISSED IT
NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence | Dec. 19, 2023
The information that NIST receives from industry experts, researchers and the public will help it fulfill its responsibilities under the recent White House Executive Order on AI.

Read More

Last chance to register for the Global Nonprofit Leaders Summit

Don’t miss your chance to attend the inaugural Global Nonprofit Leaders Summit in Bellevue, Washington, January 31 – February 1.

This event is your opportunity to set a course for AI innovation in 2024. You’ll connect with over 1,000 nonprofit professionals from across the globe. Together, you’ll learn how to spark nonprofit transformation in the era of AI with skilling to get started, vital discussions on AI including use cases and challenges, and lessons from other nonprofit leaders who are leveraging AI in big and small ways.

Learn from experts in digital transformation, digital inclusion, and AI, including:

  • Brad Smith, Microsoft Vice Chair and President
  • Trevor Noah, Comedian, TV Host, Author
  • Dr. Fei-Fei Li, Co-Director of Stanford’s Human-Centered AI Institute
  • Afua Bruce, Author of “The Tech that Comes Next”
  • Ryan Roslansky, CEO, LinkedIn
  • Beth Kanter, Trainer, Facilitator, Author at BethKanter.org
  • Vilas Dhar, CEO, Patrick J. McGovern Foundation
  • Neil Heslop OBE, CEO, Charities Aid Foundation
  • Lauren Woodman, CEO, Datakind
  • Hadi Partovi, Founder and CEO, Code.org
  • Ben Sand, Founder and CEO, The Contingent

This is your last chance to register for this free event before seats are filled. Book your hotel room with our discounted rates before rooms are released on January 8.

We look forward to seeing you there!

Microsoft Philanthropies

LEARN MORE

Microsoft Security Virtual Training Day: Security, Compliance, and Identity Fundamentals

Grow your skills at Security Virtual Training Day: Security, Compliance, and Identity Fundamentals from Microsoft Learn. At this free, introductory event, you’ll gain the security skills and training you need to create impact and take advantage of opportunities to move your career forward. You’ll explore the basics of security, compliance, and identity, including best practices to help protect people and data against cyberthreats for greater peace of mind. You’ll also learn more about identity and access management while exploring compliance management fundamentals. You will have the opportunity to:

  • Learn the fundamentals of security, compliance, and identity.
  • Understand the concepts and capabilities of Microsoft identity and access management solutions, as well as compliance management capabilities.
  • Gain the skills and knowledge to jumpstart your preparation for the certification exam.

Join us at an upcoming two-part event:
January 8, 2024 | 12:00 PM – 3:45 PM | (GMT-05:00) Eastern Time (US & Canada)
January 9, 2024 | 12:00 PM – 2:15 PM | (GMT-05:00) Eastern Time (US & Canada)

Delivery Language: English
Closed Captioning Language(s): English
 
REGISTER TODAY >

Microsoft nonprofit events and offers in the United States and Canada

#Empower2024: AI, Your Copilot
January 31, 10:30 a.m. – 4:30 p.m. ET
Designed for nonprofits, this one-day virtual conference with ProServe IT will tell you what you need to know to harness the power of artificial intelligence. Learn how AI and Microsoft Copilot can boost your productivity and drive results for your mission. Join any of the 30-minute sessions throughout the day, depending on your interest, and learn what you can do to get ready for this innovative technology in a new era of work.
Register now

Reminder! Request for Comments on the Initial Public Draft (SP 800-226) of the Guidelines for Evaluating Differential Privacy Guarantees – closes on January 25, 2024

Dear Colleagues, 

Initial Public Draft Comment Period – still open for comments! 

We still need your feedback on the Initial Public Draft of the NIST Special Publication (SP) 800-226, Guidelines for Evaluating Differential Privacy Guarantees. We are halfway through the public comment period, which closes at 11:59 p.m. ET on January 25, 2024.

For more information about this publication and to provide comments, visit the website page. If you have questions, email us at [email protected].   
 
Best,  
NIST Privacy Engineering Program | NIST Cybersecurity and Privacy Program
Questions/Comments about this notice: [email protected].
CSRC Website questions: [email protected]

NIST Tool Will Make Math-Heavy Research Papers Easier to View Online

Illustration: a screen of type being converted into two other formats, one in PDF and another in web-style HTML, for arXiv.org.

The complex formulas in physics, math and engineering papers might be intimidatingly difficult reading matter for some, but there are many people who have trouble merely seeing them in the first place. The National Institute of Standards and Technology (NIST) has created a tool that makes these papers easier on the eyes for those with visual disabilities, and it’s about to be adopted in a major way. The tool, which converts one commonly used format for displaying math formulas into another, could help make the latest and greatest research papers accessible to all. Most new research papers are distributed as PDF files, a format that many people with visual disabilities have difficulty reading.
Read More