This update to SP 800-55 comprises two volumes. Volume 1, Identifying and Selecting Measures, provides a flexible approach to the development, selection, and prioritization of information security measures. It explores both quantitative and qualitative assessments and offers basic guidance on data analysis techniques, as well as impact and likelihood modeling. Volume 2, Developing an Information Security Measurement Program, provides a flexible methodology for developing and implementing a structure for an information security measurement program.
To facilitate continued collaboration, the Cybersecurity Risk Analytics and Measurement Team proposes the establishment of a Community of Interest (CoI) in which practitioners and other enthusiasts can work together to identify cybersecurity measurement needs, action items, solutions to problems, and opportunities for improvement. Individuals and organizations who work or are planning to work with SP 800-55 and are interested in joining the Cybersecurity Measurement and Metrics CoI can contact the Cybersecurity Risk Analytics and Measurement Team at [email protected].
Submit Your Comments
The public comment period for both drafts is open through March 18, 2024. See the publication details for volumes 1 and 2 to download the documents and comment templates. We strongly encourage you to comment on all or parts of both volumes and use the comment templates provided.
Imagine you’re the new head of cybersecurity at your company. Your team has made a solid start at building defenses to ward off hackers and ransomware attacks. As cybersecurity threats continue to mount, you need to show improvements over time to your CEO and customers. How do you measure your progress and present it in meaningful, numerical terms?
You might want a road map for creating a practical information security measurement program, and you’ll find it in newly revised draft guidance from the National Institute of Standards and Technology (NIST). The two-volume document, whose overall title is NIST Special Publication (SP) 800-55 Revision 2: Measurement Guide for Information Security, offers guidance on developing an effective program, and a flexible approach for developing information security measures to meet your organization’s performance goals. NIST is calling for public comments on this initial public draft by March 18, 2024.
A vulnerability has been discovered in Apache OFBiz that could allow for remote code execution. Apache OFBiz is an open-source product for the automation of enterprise processes. It includes framework components and business applications for ERP, CRM, e-business/e-commerce, supply chain management, and manufacturing resource planning. Successful exploitation could allow for remote code execution in the context of the server. Depending on the privileges associated with the logged-on user, an attacker could then install programs or view, change, or delete data. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Threat Intelligence
The Hacker News is reporting that this vulnerability has been exploited in the wild, and proof-of-concept (PoC) code for remote code execution is available on GitHub.
Systems Affected
Apache OFBiz versions 18.12.10 and below
Risk
Government:
Large and medium government entities: High
Small government entities: Medium
Businesses:
Large and medium business entities: High
Small business entities: Medium
Home Users: Low
Technical Summary
A vulnerability has been discovered in Apache OFBiz, which could allow for remote code execution.
Recommendations
Apply appropriate updates provided by Apache to vulnerable systems immediately after appropriate testing (a hypothetical version-check sketch follows this list).
Apply the Principle of Least Privilege to all systems and services. Run all software as a non-privileged user (one without administrative privileges) to diminish the effects of a successful attack.
Use vulnerability scanning to find and remediate potentially exploitable software vulnerabilities.
Architect sections of the network to isolate critical systems, functions, or resources. Use physical and logical segmentation to prevent access to potentially sensitive systems and information. Use a DMZ to contain any internet-facing services that should not be exposed from the internal network. Configure separate virtual private cloud (VPC) instances to isolate critical cloud systems.
Use capabilities to detect and block conditions that may lead to or be indicative of a software exploit occurring.
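For illustration only, the following is a minimal Python sketch of how an administrator might flag reported Apache OFBiz version strings that fall in the affected range ("18.12.10 and below") while updates are being scheduled. The function names and sample versions are hypothetical and not part of any official advisory tooling.

```python
# Hypothetical helper: flag Apache OFBiz versions in the affected range
# ("18.12.10 and below"). Not official advisory tooling.

FIXED_VERSION = (18, 12, 11)  # first release outside the affected range


def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string such as '18.12.10' into a comparable tuple.

    Suffixes such as '-rc1' are not handled in this sketch.
    """
    return tuple(int(part) for part in version.split("."))


def is_affected(version: str) -> bool:
    """Return True for OFBiz versions 18.12.10 and below."""
    return parse_version(version) < FIXED_VERSION


if __name__ == "__main__":
    for reported in ("17.12.4", "18.12.10", "18.12.11"):  # sample values only
        status = "affected" if is_affected(reported) else "not affected"
        print(f"OFBiz {reported}: {status}")
```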
Date/Time: Wednesday, January 17th, 2024 / 9:00 AM – 1:00 PM EST
We look forward to welcoming you to NIST’s Virtual Workshop on Secure Development Practices for AI Models on January 17. This workshop is being held in support of Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. EO 14110 tasked NIST with “developing a companion resource to the Secure Software Development Framework [SSDF] to incorporate secure development practices for generative AI and for dual-use foundation models.”
What You Will Learn
This workshop will bring together industry, academia, and government to discuss secure development practices for AI models. Feedback from these communities will inform NIST’s creation of SSDF companion resources to support both AI model producers and the organizations that adopt those AI models within their own software and services. Attendees will also gain insights into major cybersecurity challenges in developing and using AI models, as well as recommended practices for addressing those challenges.
We Want to Hear from You
Participants are encouraged to share their input during the workshop. Your feedback will inform the SSDF companion resources that NIST will be developing in support of EO 14110.
Visit the NIST workshop page to learn more. If you have any questions, feel free to reach out to our team at [email protected].
*Registration for this event is required so the webinar connection details can be shared with you.
Pre-Draft Call for Comments | Information Security Handbook: A Guide for Managers
NIST plans to update Special Publication (SP) 800-100, Information Security Handbook: A Guide for Managers, and is issuing a Pre-Draft Call for Comments to solicit feedback from users. The public comment period is open through February 23, 2024.
Since SP 800-100 was published in October of 2006, NIST has developed new frameworks for cybersecurity and risk management and released major updates to critical resources and references. This revision would focus the document’s scope for the intended audience and ensure alignment with other NIST guidance. Before revising, NIST would like to invite users and stakeholders to suggest changes that would improve the document’s effectiveness, relevance, and general use with regard to cybersecurity governance and the intersections between various organizational roles and information security.
NIST welcomes feedback and input on any aspect of SP 800-100 and additionally proposes a list of non-exhaustive questions and topics for consideration:
What role do you fill in your organization?
How have you used or referenced SP 800-100?
What specific topics in SP 800-100 are most useful to you?
What challenges have you faced in applying the guidance in SP 800-100?
Is the document’s current level of specificity appropriate, too detailed, or too general? If the level of specificity is not appropriate, why?
How can NIST improve the alignment between SP 800-100 and other frameworks and publications?
What new cybersecurity capabilities, challenges, or topics should be addressed?
What current topics or sections in the document are out of scope, no longer relevant, or better addressed elsewhere?
Are there other substantive suggestions that would improve the document?
Specific topics to consider for revision or improvement:
Cybersecurity governance
Role of information security in the software development life cycle (e.g., agile development)
Contingency planning and the intersection of roles across organizations
Risk management
Enterprise risk management
Supply chain risk management and acquisitions
Metrics development and cybersecurity scorecard
System authorizations
Relationship between privacy and information security programs
The comment period is open through February 23, 2024. See the publication details for information on how to submit comments, such as using the comment template.
AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
New guidance documents the types of these attacks, along with mitigation approaches.
No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise.
Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.
“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
AI systems have permeated modern society, working in capacities ranging from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To learn to perform these tasks, they are trained on vast quantities of data: An autonomous vehicle might be shown images of highways and streets with road signs, for example, while a chatbot based on a large language model (LLM) might be exposed to records of online conversations. This data helps the AI predict how to respond in a given situation.
One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviors by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts.
“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”
In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the new report offers an overview of the sorts of attacks its AI products might suffer and corresponding approaches to reduce the damage.
The report considers the four major types of attacks: evasion, poisoning, privacy and abuse attacks. It also classifies them according to multiple criteria such as the attacker’s goals and objectives, capabilities, and knowledge.
Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road.
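As a rough illustration of the idea (not drawn from the report itself), the sketch below perturbs an input to a toy linear classifier until its prediction flips. The dataset, model, and step size are hypothetical stand-ins for a real perception system.

```python
# Toy evasion sketch: nudge one input across a linear model's decision boundary.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Two well-separated clusters stand in for two classes of inputs.
X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()                          # one input; we will flip its prediction
original = model.predict([x])[0]

# For a linear model, stepping against the sign of the weight vector pushes the
# point toward (and eventually across) the decision boundary.
w = model.coef_[0]
direction = -np.sign(w) if original == 1 else np.sign(w)

epsilon = 0.1
adversarial = x.copy()
while model.predict([adversarial])[0] == original:
    adversarial += epsilon * direction   # small, repeated perturbation

print("original prediction:   ", original)
print("adversarial prediction:", model.predict([adversarial])[0])
print("total perturbation:    ", adversarial - x)
```

The point is not the specific model but the pattern: a small, targeted change to the input changes the deployed system's answer.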
Poisoning attacks occur in the training phase by introducing corrupted data. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
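The mechanism can be shown at toy scale. The sketch below uses synthetic data and a simple classifier, and it illustrates only naive label corruption rather than the more efficient targeted poisoning techniques the report catalogs: it flips a fraction of training labels and reports how accuracy on clean test data degrades.

```python
# Toy label-flipping poisoning sketch on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)


def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on a copy of the training set with a fraction of labels flipped."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]            # corrupt the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)                   # evaluate on clean test data


for frac in (0.0, 0.05, 0.20, 0.40):
    acc = accuracy_with_poisoning(frac)
    print(f"{frac:4.0%} of training labels flipped -> test accuracy {acc:.3f}")
```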
Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources. Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
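To make the query-based idea concrete, the following is a hedged sketch of a model-extraction-style attack: treat a "victim" model as a black box, record its answers to many queries, and train a surrogate that imitates it. The models and data are synthetic illustrations, not the report's methodology.

```python
# Toy model-extraction sketch: copy a black-box model from its query answers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)   # the deployed model

# The attacker never sees X or y; they only send queries and record the answers.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
answers = victim.predict(queries)

surrogate = DecisionTreeClassifier(random_state=1).fit(queries, answers)

# Agreement on fresh inputs approximates how much of the victim's behavior the
# attacker has reconstructed.
fresh = rng.normal(size=(2000, 10))
agreement = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
print(f"surrogate agrees with the victim on {agreement:.1%} of fresh queries")
```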
Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
The authors — who also included Robust Intelligence Inc. researchers Alie Fordyce and Hyrum Anderson — break down each of these classes of attacks into subcategories and add approaches for mitigating them, though the publication acknowledges that the defenses AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said.
“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”
Cloud environments are rich with sensitive data and have become a prime target for threat actors. They can be large, and their many applications, connections, and configurations can be difficult to understand and monitor. Cloud security failures occur when manual controls, including settings and access privileges, are not set correctly, and organizations have mistakenly exposed applications, network segments, storage, and APIs to the public. This complexity creates a risk of breach, and victims often do not know that their cloud environments have been breached. According to IBM’s Cost of a Data Breach Report 2023, data breaches resulting from misconfigured cloud infrastructure cost an average of $4 million to resolve. Threat actors typically access and exfiltrate data via exploitable misconfigured systems; these incidents often involve the loss, theft, or compromise of personally identifiable information (PII), which can be used to conduct subsequent cyberattacks.
Recent incidents of misconfigurations highlight cloud security risks and the need for organizations to secure their cloud environments to help prevent data from being mistakenly exposed. For example, researchers discovered a dual privilege escalation chain impacting Google Kubernetes Engine (GKE) stemming from specific misconfigurations in GKE’s FluentBit logging agent and Anthos Service Mesh (ASM). The researchers identified vulnerabilities in the default configuration of FluentBit, which automatically runs on all clusters, and in the default privileges within ASM. When combined, these flaws allow threat actors with existing Kubernetes cluster access to escalate privileges, enabling data theft, deployment of malicious pods, and disruption of cluster operations.
Additionally, Ateam, a Japanese game developer with multiple games on Google Play, had insecurely configured a Google Drive cloud storage instance to “Anyone on the internet with the link can view” since March 2017. The misconfigured instance contained 1,369 files with personal information, including full names, email addresses, phone numbers, customer management numbers, and terminal (device) identification numbers. Search engines could index this information, making it more accessible to threat actors. Furthermore, the TuneFab converter, used to convert copyrighted music from popular streaming platforms such as Spotify and Apple Music, exposed more than 151 million parsed records of users’ private data, such as IP addresses, user IDs, emails, and device info. The exposure was caused by a MongoDB misconfiguration that left the data passwordless, publicly accessible, and indexed by public IoT search engines.
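For context on the MongoDB case, the following is a minimal sketch, assuming the pymongo client library, of how a defender might verify whether one of their own MongoDB instances answers without credentials, the kind of misconfiguration described above. The hostname is a placeholder, and such checks should be run only against systems you are authorized to test.

```python
# Sketch: does a MongoDB instance list its databases without credentials?
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError


def allows_anonymous_reads(host: str, port: int = 27017) -> bool:
    """Return True if the server lists its databases with no credentials.

    Returns False both when authentication is enforced and when the host is
    unreachable within the timeout.
    """
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        client.list_database_names()   # raises OperationFailure if auth is required
        return True
    except (OperationFailure, ServerSelectionTimeoutError):
        return False
    finally:
        client.close()


if __name__ == "__main__":
    host = "mongodb.example.internal"  # placeholder host you are authorized to test
    print(f"{host} allows anonymous reads: {allows_anonymous_reads(host)}")
```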
The NJCCIC has observed increased reports of cryptocurrency scams over the past few weeks, consistent with open-source reporting. The scams begin with a sophisticated phishing attack, often initiated via social media direct messages or posts, and use a crypto wallet-draining technique to target a wide range of blockchain networks. These cryptocurrency stealers are malicious programs or scripts designed to transfer cryptocurrency from victims’ wallets without their consent. Attribution is frequently obfuscated, as many of these campaigns are perpetrated by phishing groups that offer wallet-draining scripts in scam-as-a-service operations.
The cybercriminal begins the scam by creating fake airdrop or phishing campaigns, often promoted on social media or via email, offering free tokens to lure users. The target is directed to a fraudulent website that mimics a genuine token distribution platform and requests a connection to their crypto wallet in order to claim the tokens. The target is then enticed to engage with a malicious smart contract, inadvertently granting the cybercriminal access to their funds, which enables token theft without further user interaction. Cybercriminals may use methods like mixers or multiple transfers to obscure their tracks and liquidate the stolen assets. Social engineering tactics in recent campaigns include fake job interviews via LinkedIn, romance scams, and other quick cryptocurrency return promotions offered through various social media platforms.
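To make the malicious-approval step concrete, here is a hedged sketch, assuming web3.py (v6 or later) and access to an Ethereum JSON-RPC endpoint, of how a user or analyst might review an ERC-20 allowance, the on-chain permission that wallet drainers abuse. The token address, wallet addresses, and RPC URL are placeholders.

```python
# Sketch: read an ERC-20 allowance (the approval a wallet drainer abuses).
from web3 import Web3

# Minimal ABI fragment containing only the ERC-20 allowance() view function.
ALLOWANCE_ABI = [{
    "inputs": [{"name": "owner", "type": "address"},
               {"name": "spender", "type": "address"}],
    "name": "allowance",
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]


def check_allowance(rpc_url: str, token: str, owner: str, spender: str) -> int:
    """Return how many token units `spender` may move out of `owner`'s wallet."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    contract = w3.eth.contract(address=Web3.to_checksum_address(token),
                               abi=ALLOWANCE_ABI)
    return contract.functions.allowance(Web3.to_checksum_address(owner),
                                        Web3.to_checksum_address(spender)).call()


# Usage with placeholder values: a nonzero result means the spender contract can
# transfer the owner's tokens without any further signature, which is why
# revoking unneeded approvals is a common mitigation.
# check_allowance("https://rpc.example.org", token="0x...", owner="0x...", spender="0x...")
```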
Image source: ESET H2 Threat Report
According to ESET’s H2 Threat Report, the number of observed cryptocurrency threats decreased by 21 percent in the latter half of 2023; however, a sudden increase in cryptostealer activity was primarily caused by the rise of Lumma Stealer (78.9 percent), a malware-as-a-service (MaaS) infostealer capable of stealing passwords, multi-factor authentication (MFA) data, configuration data, browser cookies, cryptocurrency wallet data, and more. This infostealer was observed spreading via the Discord chat platform and through a recent fake browser update campaign. In this campaign, a compromised website displays a fake notice that a browser update is necessary to access the site. If the update button is clicked, the malicious payload is downloaded, delivering malware such as RedLine, Amadey, or Lumma Stealer to the victim’s machine.
The NJCCIC recommends that users exercise caution when interacting with social media posts, direct messages, texts, or emails that may contain misinformation and refrain from responding to or clicking links delivered in communications from unknown or unverified senders. Additionally, users are strongly encouraged to enable MFA where available, choosing an authentication app such as Google Authenticator or Microsoft Authenticator. In the case of credential exposure or theft, MFA will greatly reduce the risk of account compromise. If theft of funds has occurred, victims are urged to report the activity to the FBI’s IC3 immediately, their local FBI field office, and local law enforcement. These scams can also be reported to the NJCCIC and the FTC. Further information and recommendations can be found in the FTC article, the Cryptonews article, and the LinkedIn article.
NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems. Read More
Don’t miss your chance to attend the inaugural Global Nonprofit Leaders Summit in Bellevue, Washington, January 31 – February 1.
This event is your opportunity to set a course for AI innovation in 2024. You’ll connect with over 1,000 nonprofit professionals from across the globe. Together, you’ll learn how to spark nonprofit transformation in the era of AI with skilling to get started, vital discussions on AI including use cases and challenges, and lessons from other nonprofit leaders who are leveraging AI in big and small ways.
Learn from experts in digital transformation, digital inclusion, and AI, including:
Brad Smith, Microsoft Vice Chair and President
Trevor Noah, Comedian, TV Host, Author
Dr. Fei-Fei Li, Co-Director of Stanford’s Human-Centered AI Institute
Afua Bruce, Author of “The Tech that Comes Next”
Ryan Roslansky, CEO, LinkedIn
Beth Kanter, Trainer, Facilitator, Author at BethKanter.org