Multiple Vulnerabilities in VMware Products

Multiple vulnerabilities have been discovered in VMware vCenter Server and Cloud Foundation, the most severe of which could allow for remote code execution. VMware vCenter Server is the centralized management utility for VMware. VMware Cloud Foundation is a multi-cloud platform that provides a full-stack hyperconverged infrastructure (HCI) designed for modernizing data centers and deploying modern container-based applications. Successful exploitation of these vulnerabilities could allow for remote code execution in the context of the administrator account. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.
Threat Intelligence
VMware is aware of confirmed reports that CVE-2023-34048 has been exploited in the wild.
Systems Affected
VMware vCenter Server versions prior to 8.0U2
VMware vCenter Server versions prior to 8.0U1d
VMware vCenter Server versions prior to 7.0U3o
VMware Cloud Foundation (VMware vCenter Server) versions prior to KB88287
Risk
Government:
– Large and medium government entities: High
– Small government entities: Medium
Businesses:
– Large and medium business entities: High
– Small business entities: Medium
Home Users: Low
Technical Summary
Multiple vulnerabilities have been discovered in VMware vCenter Server and Cloud Foundation, the most severe of which could allow for remote code execution.
Recommendations
Apply appropriate updates provided by VMware to vulnerable systems immediately after appropriate testing.
Apply the Principle of Least Privilege to all systems and services. Run all software as a non-privileged user (one without administrative privileges) to diminish the effects of a successful attack.
Prevent access to file shares, remote access to systems, and unnecessary services. Mechanisms to limit access may include use of network concentrators, RDP gateways, etc.
Use intrusion detection signatures to block traffic at network boundaries.
Use capabilities to detect and block conditions that may lead to or be indicative of a software exploit occurring.
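As a quick starting point for the first recommendation, the sketch below queries a vCenter appliance for its running version via the vSphere Automation REST API so it can be compared against the fixed releases (8.0U2, 8.0U1d, 7.0U3o). This is a minimal sketch, not an official audit tool: the hostname and credentials are placeholders, and the endpoint paths should be verified against your vCenter's API documentation.

```python
# Minimal sketch: read a vCenter appliance's version for patch triage.
# Hostname and credentials are placeholders; verify endpoint paths
# against your vCenter's API documentation before relying on this.
import requests

VCENTER = "vcenter.example.com"                         # placeholder
USER, PASSWORD = "readonly@vsphere.local", "change-me"  # placeholders

session = requests.Session()
session.verify = True  # keep TLS verification enabled in production

# Create an API session; the response body is the session token string.
resp = session.post(f"https://{VCENTER}/api/session", auth=(USER, PASSWORD))
resp.raise_for_status()
session.headers["vmware-api-session-id"] = resp.json()

# Read the appliance version and build number for comparison.
info = session.get(f"https://{VCENTER}/api/appliance/system/version").json()
print(info.get("version"), info.get("build"))
```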
References
VMware:
https://www.vmware.com/security/advisories/VMSA-2023-0023.html
SecurityWeek:
https://www.securityweek.com/vmware-vcenter-server-vulnerability-exploited-in-wild/
Mandiant:
https://www.mandiant.com/resources/blog/chinese-vmware-exploitation-since-2021
CVE:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-34048
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-34056

Security issue with Ivanti Connect Secure and Ivanti Policy Secure solutions

The Cybersecurity and Infrastructure Security Agency (CISA) issued Emergency Directive (ED) 24-01, which requires Federal Civilian Executive Branch (FCEB) agencies to immediately implement vendor-published mitigation guidance for Ivanti Connect Secure and Ivanti Policy Secure solutions to prevent future exploitation, and to run the vendor’s Integrity Checker Tool to identify any active or past compromise.
Last week, Ivanti released information regarding two vulnerabilities, CVE-2023-46805 and CVE-2024-21887, which allow an attacker to move laterally across a target network, perform data exfiltration, and establish persistent system access. CISA has determined an Emergency Directive is necessary based on the widespread exploitation of these vulnerabilities by multiple threat actors, the prevalence of the affected products in the federal enterprise, the high potential for compromise of agency information systems, and the potential impact of a successful compromise.
While this Directive only applies to FCEB agencies, the threat extends to every sector using these products, and we urge all organizations to adopt this guidance.
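For organizations that want a rough first pass while they run the vendor tooling, the sketch below is a hedged triage heuristic, not a substitute for Ivanti's Integrity Checker Tool: it scans an exported access log for path-traversal sequences inside API request paths, a pattern that public reporting has associated with exploitation of CVE-2023-46805 and CVE-2024-21887. The log location and format are assumptions; adapt them to your appliance's export.

```python
# Hedged triage heuristic (not Ivanti's Integrity Checker Tool): flag
# access-log lines containing '..' traversal inside an /api/ path, a
# pattern public reporting tied to exploitation of these CVEs.
import re
import sys

TRAVERSAL = re.compile(r"/api/\S*\.\./")  # '..' inside an /api/ request path

def suspicious_lines(log_path: str):
    """Yield (line_number, line) pairs that match the traversal pattern."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if TRAVERSAL.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, line in suspicious_lines(sys.argv[1]):
        print(f"{lineno}: {line}")
```

Any hits warrant a full investigation with the vendor's tooling; an empty result does not prove the absence of compromise.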

NORTH AMERICA MCT SUMMIT 8-9 MARCH 2024

Join other MCTs, industry experts, and guest speakers to learn a lot and have some fun.

This is a premier opportunity to update your skills and network.

Learn what you need to know to be successful.

The event will be held 8-9 March 2024 at the Aloft Seattle Redmond | Element by Westin hotel in Redmond, WA.

The hotel is within walking distance of the Microsoft campus.

To register or learn more, go here.

Measurement Guide for Information Security: Draft of NIST SP 800-55 Available for Comment

The initial public drafts (ipd) of NIST Special Publication (SP) 800-55, Measurement Guide for Information Security, Volume 1 — Identifying and Selecting Measures, and Volume 2 — Developing an Information Security Measurement Program, are now available for public review and comment through March 18, 2024. 

This update to SP 800-55 comprises two volumes. Volume 1 — Identifying and Selecting Measures provides a flexible approach to the development, selection, and prioritization of information security measures. This volume explores both quantitative and qualitative assessments and provides basic guidance on data analysis techniques, as well as impact and likelihood modeling. Volume 2 — Developing an Information Security Measurement Program provides a flexible methodology for developing and implementing a structure for an information security measurement program.
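To make the idea of a quantitative measure concrete, here is a small illustrative sketch; it is not taken from SP 800-55 itself. It computes a patch-SLA compliance percentage of the kind Volume 1 discusses, plus a toy likelihood-times-impact score; the inventory fields, the 14-day SLA, and the 1-5 scoring scales are all assumptions.

```python
# Illustrative sketch (not from SP 800-55): one quantitative security
# measure plus a toy likelihood x impact score. Fields, the 14-day SLA,
# and the 1-5 scales are assumptions for the example.
from dataclasses import dataclass

@dataclass
class System:
    name: str
    days_to_patch: int  # days from patch release to installation

SLA_DAYS = 14
inventory = [
    System("web-01", 5), System("web-02", 30),
    System("db-01", 10), System("vpn-01", 45),
]

# Measure: percentage of systems patched within the SLA window.
compliant = sum(s.days_to_patch <= SLA_DAYS for s in inventory)
print(f"Patch-SLA compliance: {100 * compliant / len(inventory):.0f}%")

# Toy qualitative-to-quantitative model: likelihood and impact each 1-5.
likelihood, impact = 4, 5
print(f"Risk score: {likelihood * impact} of 25")
```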

To facilitate continued collaboration, the Cybersecurity Risk Analytics and Measurement Team proposes the establishment of a Community of Interest (CoI) in which practitioners and other enthusiasts can work together to identify cybersecurity measurement needs, action items, solutions to problems, and opportunities for improvement. Individuals and organizations who work or are planning to work with SP 800-55 and are interested in joining the Cybersecurity Measurement and Metrics CoI can contact the Cybersecurity Risk Analytics and Measurement Team at cyber-measures@list.nist.gov.

Submit Your Comments

The public comment period for both drafts is open through March 18, 2024. See the publication details for volumes 1 and 2 to download the documents and comment templates. We strongly encourage you to comment on all or parts of both volumes and use the comment templates provided.

Please direct questions and submit comments to cyber-measures@list.nist.gov.
NIST Cybersecurity and Privacy Program
Questions/Comments about this notice: cyber-measures@list.nist.gov
CSRC Website questions: csrc-inquiry@nist.gov

NIST Offers Guidance on Measuring and Improving Your Company’s Cybersecurity Program

Imagine you’re the new head of cybersecurity at your company. Your team has made a solid start at mounting defenses to ward off hackers and ransomware attacks. As cybersecurity threats continue to grow, you need to show improvements over time to your CEO and customers. How do you measure your progress and present it using meaningful, numerical details?

You might want a road map for creating a practical information security measurement program, and you’ll find it in newly revised draft guidance from the National Institute of Standards and Technology (NIST). The two-volume document, whose overall title is NIST Special Publication (SP) 800-55 Revision 2: Measurement Guide for Information Security, offers guidance on developing an effective program, and a flexible approach for developing information security measures to meet your organization’s performance goals. NIST is calling for public comments on this initial public draft by March 18, 2024.

Read More

Vulnerability in Apache OFBiz

A vulnerability has been discovered in Apache OFBiz, which could allow for remote code execution. Apache OFBiz is an open source product for the automation of enterprise processes. It includes framework components and business applications for ERP, CRM, E-Business/E-Commerce, Supply Chain Management, and Manufacturing Resource Planning. Successful exploitation could allow for remote code execution in the context of the server. Depending on the privileges associated with the logged-on user, an attacker could then install programs or view, change, or delete data. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Threat Intelligence
The Hacker News is reporting that this vulnerability has been exploited in the wild, and PoC (Proof of Concept) code for remote code execution is available on GitHub.
Systems Affected
Apache OFBiz versions 18.12.10 and below
Risk
Government:
– Large and medium government entities: High
– Small government entities: Medium
Businesses:
– Large and medium business entities: High
– Small business entities: Medium
Home Users: Low
Technical Summary
A vulnerability has been discovered in Apache OFBiz, which could allow for remote code execution.
Recommendations
Apply appropriate updates provided by Apache to vulnerable systems immediately after appropriate testing.
Apply the Principle of Least Privilege to all systems and services. Run all software as a non-privileged user (one without administrative privileges) to diminish the effects of a successful attack.
Use vulnerability scanning to find potentially exploitable software vulnerabilities and remediate them.
Architect sections of the network to isolate critical systems, functions, or resources. Use physical and logical segmentation to prevent access to potentially sensitive systems and information. Use a DMZ to contain any internet-facing services that should not be exposed from the internal network. Configure separate virtual private cloud (VPC) instances to isolate critical cloud systems.
Use capabilities to detect and block conditions that may lead to or be indicative of a software exploit occurring.
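To verify that remediation took effect, public write-ups of CVE-2023-51467 describe an unauthenticated request that reaches the /webtools ping endpoint when the authentication bypass is present. The sketch below reproduces that check; the parameter names come from third-party analyses rather than the Apache advisory, so treat them as assumptions, and run it only against systems you are authorized to test.

```python
# Hedged post-patch check based on public write-ups of CVE-2023-51467.
# Parameter names are from third-party analyses (an assumption); run
# only against systems you are authorized to test.
import requests

HOST = "https://ofbiz.example.com"  # placeholder target you control

params = {"USERNAME": "", "PASSWORD": "", "requirePasswordChange": "Y"}
resp = requests.get(f"{HOST}/webtools/control/ping",
                    params=params, timeout=10)

# A fixed install should reject this unauthenticated call; a "PONG"
# body suggests the bypass is still reachable.
print(resp.status_code, "bypass reachable:", "PONG" in resp.text)
```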
References
Apache:
https://lists.apache.org/thread/9tmf9qyyhgh6m052rhz7lg9vxn390bdv
SonicWall:
https://blog.sonicwall.com/en-us/2023/12/sonicwall-discovers-critical-apache-ofbiz-zero-day-authbiz/
The Hacker News:
https://thehackernews.com/2024/01/new-poc-exploit-for-apache-ofbiz.html
CVE:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-51467

Register for the NIST Workshop on Secure Development for AI Models (January 17th 9:00am EST)

Date/Time: Wednesday, January 17th, 2024 / 9:00 AM – 1:00 PM EST

We look forward to welcoming you to NIST’s Virtual Workshop on Secure Development Practices for AI Models on January 17. This workshop is being held in support of Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. EO 14110 tasked NIST with “developing a companion resource to the Secure Software Development Framework [SSDF] to incorporate secure development practices for generative AI and for dual-use foundation models.”

What You Will Learn

This workshop will bring together industry, academia, and government to discuss secure development practices for AI models. Feedback from these communities will inform NIST’s creation of SSDF companion resources to support both AI model producers and the organizations that adopt those AI models within their own software and services. Attendees will also gain insights into major cybersecurity challenges in developing and using AI models, as well as recommended practices for addressing those challenges.

We Want to Hear from You

Participants are encouraged to share their input during the workshop. Your feedback will inform the SSDF companion resources that NIST will be developing in support of EO 14110.

Visit the NIST workshop page to learn more. If you have any questions, feel free to reach out to our team at ssdf@nist.gov.

*Registration for this event is required so the webinar connection details can be shared with you.

Register Now

Pre-Draft Call for Comments | Information Security Handbook: A Guide for Managers

NIST plans to update Special Publication (SP) 800-100, Information Security Handbook: A Guide for Managers, and is issuing a Pre-Draft Call for Comments to solicit feedback from users. The public comment period is open through February 23, 2024.

Since SP 800-100 was published in October 2006, NIST has developed new frameworks for cybersecurity and risk management and released major updates to critical resources and references. This revision would focus the document’s scope for the intended audience and ensure alignment with other NIST guidance. Before revising, NIST would like to invite users and stakeholders to suggest changes that would improve the document’s effectiveness, relevance, and general use with regard to cybersecurity governance and the intersections between various organizational roles and information security.

NIST welcomes feedback and input on any aspect of SP 800-100 and additionally proposes a list of non-exhaustive questions and topics for consideration:

  • What role do you fill in your organization?
  • How have you used or referenced SP 800-100?
  • What specific topics in SP 800-100 are most useful to you?
  • What challenges have you faced in applying the guidance in SP 800-100?
  • Is the document’s current level of specificity appropriate, too detailed, or too general? If the level of specificity is not appropriate, why?
  • How can NIST improve the alignment between SP 800-100 and other frameworks and publications?
  • What new cybersecurity capabilities, challenges, or topics should be addressed?
  • What current topics or sections in the document are out of scope, no longer relevant, or better addressed elsewhere?
  • Are there other substantive suggestions that would improve the document?
  • Specific topics to consider for revision or improvement:
    • Cybersecurity governance
    • Role of information security in the software development life cycle (e.g., agile development)
    • Contingency planning and the intersection of roles across organizations
    • Risk management
      • Enterprise risk management
      • Supply chain risk management and acquisitions
      • Metrics development and cybersecurity scorecard
    • System authorizations
    • Relationship between privacy and information security programs

The comment period is open through February 23, 2024. See the publication details for information on how to submit comments, such as using the comment template.

Read More

NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

Publication lays out “adversarial machine learning” threats, describing mitigation strategies and their limitations.

January 04, 2024

  • AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
  • New guidance documents the types of these attacks, along with mitigation approaches.
  • No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise.
An AI system can malfunction if an adversary finds a way to confuse its decision making. In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic. This “evasion” attack is one of numerous adversarial tactics described in a new NIST publication intended to help outline the types of attacks we might expect along with approaches to mitigate them. Credit: N. Hanacek/NIST

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”   

AI systems have permeated modern society, working in capacities ranging from driving vehicles to helping doctors diagnose illnesses to interacting with customers as online chatbots. To learn to perform these tasks, they are trained on vast quantities of data: An autonomous vehicle might be shown images of highways and streets with road signs, for example, while a chatbot based on a large language model (LLM) might be exposed to records of online conversations. This data helps the AI predict how to respond in a given situation. 

One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviors by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts. 

“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”

In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the new report offers an overview of the sorts of attacks its AI products might suffer and corresponding approaches to reduce the damage. 

The report considers four major types of attacks: evasion, poisoning, privacy, and abuse. It also classifies them according to multiple criteria, such as the attacker’s goals and objectives, capabilities, and knowledge.

Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it. Examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road. 
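As a toy illustration of this class, the sketch below applies a fast-gradient-sign-style (FGSM) perturbation to the input of a tiny logistic regression model. FGSM is one well-known evasion technique, not a method the report prescribes, and the model and data here are synthetic.

```python
# Toy FGSM-style evasion attack on a synthetic logistic regression model
# (illustrative only; not a method prescribed by the NIST report).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical trained weights
b = 0.1
x = rng.normal(size=8)   # a benign input the attacker perturbs
y_true = 1.0             # its correct label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y_true) * w

# FGSM: take a small step that increases the loss, bounded by eps.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```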

Poisoning attacks occur in the training phase by introducing corrupted data. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions. 
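The toy sketch below shows one simple instance of this class: injecting mislabeled points into a training set drags a nearest-centroid classifier's decision boundary and degrades its accuracy. The attack and model are illustrative assumptions, not examples taken from the publication.

```python
# Toy data-poisoning sketch: injected mislabeled points drag the learned
# decision boundary (illustrative; not from the NIST publication).
import numpy as np

rng = np.random.default_rng(7)
n = 200
X0 = rng.normal(size=(n, 2))        # class 0 cluster near (0, 0)
X1 = rng.normal(size=(n, 2)) + 3.0  # class 1 cluster near (3, 3)

def centroid_accuracy(train0, train1):
    # Fit a nearest-centroid classifier on (possibly poisoned) training
    # data, then score it on the clean clusters.
    c0, c1 = train0.mean(axis=0), train1.mean(axis=0)
    ok0 = (np.linalg.norm(X0 - c0, axis=1) < np.linalg.norm(X0 - c1, axis=1)).mean()
    ok1 = (np.linalg.norm(X1 - c1, axis=1) < np.linalg.norm(X1 - c0, axis=1)).mean()
    return (ok0 + ok1) / 2

# The attacker slips 50 far-away, mislabeled points into class 1's data.
poison = rng.normal(size=(50, 2)) - 10.0
print("clean accuracy:   ", centroid_accuracy(X0, X1))
print("poisoned accuracy:", centroid_accuracy(X0, np.vstack([X1, poison])))
```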

Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, and then use the answers to reverse engineer the model so as to find its weak spots — or guess at its sources. Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
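One concrete instance of this class is membership inference: deciding whether a specific record was in the training data. The toy sketch below, an illustration rather than a method from the publication, shows the mechanism that makes it possible: an overfit model reproduces its training records almost exactly, so per-record error separates members from non-members.

```python
# Toy membership-inference sketch (illustrative; not from the NIST
# publication). An overfit model memorizes training records, so
# per-record error reveals who was in the training set.
import numpy as np

rng = np.random.default_rng(3)
d = 8                                    # features == samples: overfit
w_true = rng.normal(size=d)
X_train = rng.normal(size=(d, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=d)

# Interpolating least-squares fit (no regularization): zero training error.
w_hat = np.linalg.solve(X_train, y_train)

X_out = rng.normal(size=(d, d))          # records never seen in training
y_out = X_out @ w_true + 0.5 * rng.normal(size=d)

err_in = np.abs(X_train @ w_hat - y_train).mean()   # ~0: memorized
err_out = np.abs(X_out @ w_hat - y_out).mean()      # noticeably larger
print(f"member error {err_in:.3f} vs non-member error {err_out:.3f}")
```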

Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use. 

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.” 

The authors — who also included Robust Intelligence Inc. researchers Alie Fordyce and Hyrum Anderson — break down each of these classes of attacks into subcategories and add approaches for mitigating them, though the publication acknowledges that the defenses AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said. 

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.” 

Related Links

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)

Released January 4, 2024

Cloud Misconfigurations Pose Risk for Data Exfiltration and Cyberattacks

Cloud environments are rich with sensitive data and have become a prime target for threat actors. They can be large, and their multiple applications, connections, and configurations can be difficult to understand and monitor. Cloud security failures occur when manual controls, including settings and access privileges, are set incorrectly, and organizations have mistakenly exposed applications, network segments, storage, and APIs to the public. This complexity creates a risk of breach, and victims often do not know that their cloud environments have been breached. According to IBM’s Cost of a Data Breach Report 2023, data breaches resulting from misconfigured cloud infrastructure cost an average of $4 million to resolve. Threat actors typically access and exfiltrate data via exploitable misconfigured systems; these breaches often involve the loss, theft, or compromise of personally identifiable information (PII), which can be used to conduct subsequent cyberattacks.
Recent incidents highlight cloud security risks and the need for organizations to secure their cloud environments to help prevent data from being mistakenly exposed. For example, researchers discovered a dual privilege escalation chain impacting Google Kubernetes Engine (GKE) stemming from specific misconfigurations in GKE’s FluentBit logging agent and Anthos Service Mesh (ASM). Researchers identified vulnerabilities in the default configuration of FluentBit, which runs automatically on all clusters, and in the default privileges within ASM. When the two are combined, threat actors with existing Kubernetes cluster access can escalate privileges, enabling data theft, deployment of malicious pods, and disruption of cluster operations.
Additionally, Ateam, a Japanese game developer with multiple games on Google Play, had insecurely configured a Google Drive cloud storage instance to “Anyone on the internet with the link can view” since March 2017. The misconfigured instance contained 1,369 files with personal information, including full names, email addresses, phone numbers, customer management numbers, and terminal (device) identification numbers. Search engines could index this information, making it more accessible to threat actors. Furthermore, the TuneFab converter, used to convert copyrighted music from popular streaming platforms such as Spotify and Apple Music, exposed more than 151 million parsed records of users’ private data, such as IP addresses, user IDs, emails, and device info. The exposure was caused by a MongoDB misconfiguration that left the data passwordless, publicly accessible, and indexed by public IoT search engines.
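Inspired by the MongoDB incident above, the sketch below is one hedged check an operator might run against their own deployment: it tests whether the server accepts unauthenticated commands. The host and port are placeholders; a properly secured deployment should refuse the request with an authorization error.

```python
# Hedged sketch: test whether a MongoDB instance you operate accepts
# unauthenticated commands (the TuneFab exposure was a passwordless,
# publicly reachable MongoDB). Host and port are placeholders.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def mongo_is_open(host: str, port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # Listing database names requires auth on a secured deployment.
        client.list_database_names()
        return True   # no credentials needed: misconfigured
    except OperationFailure:
        return False  # server demanded authentication: good
    except ServerSelectionTimeoutError:
        return False  # unreachable; not exposed to this client

if __name__ == "__main__":
    print("open to the internet?", mongo_is_open("db.example.com"))
```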