2024 Q1 Top Ransomware Trends

The NJCCIC continues to receive reports of ransomware incidents impacting New Jersey private organizations and the public sector. Threat actors primarily targeted critical infrastructure and educational institutions, likely because these sectors face budgetary constraints, limited resources, and reliance on third-party vendors. These incidents resulted in financial losses, operational disruptions, and the loss of confidentiality, integrity, and availability of data and information systems. For the first quarter of 2024, we review the top ransomware variants reported to the NJCCIC, highlight ransomware trends, and provide recommendations to help users and organizations reduce the likelihood of victimization.
For the first quarter of 2024, ransomware incidents reported to the NJCCIC involved the Akira, LockBit, and Play variants. There was a sharp increase in Akira ransomware attacks, particularly after the takedown of the LockBit ransomware group. Akira ransomware operators are known for their sophisticated attacks, especially against US healthcare organizations. After the takedown, however, LockBit quickly relaunched operations and continued targeting government agencies and critical infrastructure organizations, including healthcare. Additionally, cyberattacks targeting ConnectWise ScreenConnect vulnerabilities were linked to both LockBit and Play ransomware. While existing ransomware groups continue their efforts, new ransomware gangs have also initiated operations in 2024.
The top attack vectors for ransomware are phishing, compromised valid accounts, and external remote services. Threat actors are increasingly using artificial intelligence to generate targeted, sophisticated phishing campaigns and launch successful, profitable ransomware attacks. They also exploited vulnerabilities to infiltrate systems and networks, consistent with the predicted mass exploitation of technologies supporting hybrid and remote work and enterprise third-party file transfer solutions, such as virtual private networks (VPNs), cloud-based storage, and multi-factor authentication (MFA) tools.
An example of an initial attack vector in ransomware incidents reported to the NJCCIC was unauthorized remote login access via a VPN service. One of the tactics used was MFA prompt bombing, in which threat actors obtained account credentials and attempted to log in multiple times. They sent an overwhelming number of MFA authentication requests, hoping the target would be distracted and unintentionally approve one, or eventually give in to notification fatigue and approve a request simply to silence the prompts. This observed tactic has recently evolved: threat actors now call the target from a spoofed support number to convince them to initiate a password reset and divulge the one-time password reset code.
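To illustrate how defenders might surface this pattern, below is a minimal sketch that flags bursts of MFA push requests in authentication logs. The (user, timestamp) record format, look-back window, and threshold are illustrative assumptions, not any particular identity provider’s schema.

```python
# Minimal sketch: flag possible MFA prompt bombing in authentication logs.
# The (user, timestamp) record format and both thresholds are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # look-back window (assumption)
THRESHOLD = 10                  # pushes per window that warrant review (assumption)

def flag_prompt_bombing(events):
    """events: iterable of (username, datetime) MFA push records, sorted by time."""
    recent = defaultdict(list)
    flagged = set()
    for user, ts in events:
        recent[user].append(ts)
        # keep only pushes inside the look-back window
        recent[user] = [t for t in recent[user] if ts - t <= WINDOW]
        if len(recent[user]) >= THRESHOLD:
            flagged.add(user)
    return flagged

# Example: twelve pushes to one account within two minutes gets flagged
base = datetime(2024, 3, 1, 9, 0)
events = [("jdoe", base + timedelta(seconds=10 * i)) for i in range(12)]
print(flag_prompt_bombing(events))  # {'jdoe'}
```

Alerting on this pattern, combined with number-matching MFA instead of simple push approval, reduces the chance that a fatigued user grants access.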
Once threat actors gained unauthorized access, they infiltrated the target organization’s internal systems and moved laterally to other critical systems. After exfiltrating data, they encrypted systems and servers, cutting off access to critical services and to files containing personally identifiable information (PII) and financial information. These incidents also affected onsite backups, forcing victim organizations to resort to offsite backups, if available and viable for restoration.
Ransomware remains a prevalent threat as extortion tactics continue to evolve to pressure victim organizations into paying the ransom. Threat actors used extortion tactics such as withholding decryption of encrypted files, stealing data, and threatening to post stolen data on public ransomware leak sites or release it to regulators, clients, or patients. The additional tactic of swatting, used to pressure victim organizations into paying and to attract media coverage, raises public safety concerns.

Register now: Embracing AI-powered tools to maximize productivity

AI is changing the modern workplace at an unprecedented pace. Adopting AI-powered tools across your organization can supercharge productivity and creativity. Secure and responsible generative AI solutions, such as Copilot for Microsoft 365, elevate your AI investment with real gains in efficiency and innovation. This essential AI companion works across your data estate to deliver undeniable value: 70% of early adopters said they were more productive, and 68% reported that it improved the quality of their work.1 Join Microsoft leaders and executives as they discuss how AI can advance your journey to a high-powered organization.

Explore how you can:
- Jump-start the AI transformation with data security and compliance
- Enhance communication and collaboration with AI-powered tools
- Adopt and measure your AI transformation

Register now to learn how secure and responsible AI can transform your organization.

1. “Work Trend Index Special Report: What Can Copilot’s Earliest Users Teach Us About Generative AI at Work?,” Microsoft, November 2023.
 
The AI Advantage: Maximizing Productivity in the Modern Workplace
 
Register now >

Google’s New Generative AI Search Results Lead to Scam Websites

Search Generative Experience (SGE) is Google’s upcoming generative artificial intelligence (AI) search feature. Google first allowed users to opt into the Google SGE results in May 2023. Google recently began rolling out this feature to a small sample of random users who have not yet opted in. Selected users will see a brief AI-created overview above the Google search engine results. The “Ask a follow up” box allows users to add more details or ask follow-up questions.
However, users are urged to exercise caution with AI-generated responses, as researchers identified that Google’s SGE results may lead to scam sites. The listed websites promoted by SGE used the .online top-level domain (TLD), identical HTML templates, and the same sites to perform redirects, indicating that they are likely part of the same search engine optimization (SEO) poisoning campaign. Upon clicking one of the listed websites in the search results, users may undergo a series of redirects until they reach a scam site. These scam sites often host fake CAPTCHAs or fraudulent YouTube pages that push a request to subscribe to browser notifications. Scammers use browser notifications to send unwanted advertisements directly to the operating system’s desktop, even after the website in the browser has been closed. Once a user subscribes, these spam advertisements redirect them to fake giveaways, unwanted browser extensions, spam subscriptions, and tech support scams.
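For analysts who want to vet a suspicious search result before trusting it, one simple check is to inspect the redirect chain the link produces. Below is a minimal sketch using Python’s requests library; the URL shown is a placeholder, and checks like this belong in an isolated analysis environment rather than an everyday workstation.

```python
# Minimal sketch: print the redirect chain a URL produces so the final
# landing domain can be reviewed. The URL below is a placeholder; run
# checks like this only from an isolated analysis environment.
from urllib.parse import urlparse

import requests  # third-party: pip install requests

def redirect_chain(url, timeout=10):
    """Return every URL visited, ending at the final landing page."""
    resp = requests.get(url, timeout=timeout, allow_redirects=True)
    return [r.url for r in resp.history] + [resp.url]

if __name__ == "__main__":
    for hop in redirect_chain("https://example.com/"):
        print(urlparse(hop).netloc, "->", hop)
```

A chain that hops through several unrelated .online domains before landing on a page demanding notification permission is a strong signal to close the tab.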

SMS Text Phishing

Threat actors continue to use SMS text messages in phishing campaigns to steal users’ personal data, account information, and funds. SMS-based phishing (SMiShing) may be more effective than email phishing as these messages are viewed on a mobile device, making it more difficult for users to identify potentially malicious communications. This threat is compounded by businesses and organizations’ legitimate use of text messages for notification and outreach purposes. Users may also be fatigued by the number of text messages they receive and act on a message by clicking a link or responding impulsively.
SMiShing messages typically claim to come from a well-known business or organization – such as Amazon, FedEx, UPS, Netflix, or the IRS – and request that the recipient click on a link, often to access a promotion, obtain information about a package delivery, or address a problem with their account. These messages may include links that, if clicked, lead to fraudulent websites that capture user credentials, steal funds, or deliver malware (image 1). They may also request sensitive information from the user that could facilitate identity theft or account compromise.
Image 1
There has been a recent increase in other SMiShing campaigns in which a user receives a text message from an unrecognized number containing verbiage similar to “Hey! How have you been?” The threat actors behind these campaigns seek to garner a response from the recipient. Responding may lead to a conversation in which the user is lured into a scam, such as a gift card scam (image 2), or the threat actor may simply be attempting to confirm that the phone number is active. Attempts to garner a response are also used in bank impersonation campaigns that urge the user to reply to stop supposed fraudulent activity on their account, without requesting information or prompting them to click on a link (image 3).

What’s New in the NIST Cybersecurity Framework 2.0: Zoom Meeting

Are you curious about what’s new in the NIST Cybersecurity Framework 2.0? This Thursday, learn from ISC2 New Jersey Chapter’s Master Cybersecurity Trainer, Jay Ferron, about the improvements that have been made to the framework. I’ll also have the opportunity to interview Jay about his fascinating career and how he played a major part in the 9/11 recovery effort.

Click the link below to register. Hope to see you there!

As usual, newcomers can register for free.

Register Here

Microsoft.Source Newsletter | Issue 57

See the latest ideas and projects from the global developer community. If someone forwarded you this newsletter and you want to receive future issues, sign up. This month’s Microsoft.Source delves into the synergy between open-source software (OSS) and Microsoft technologies.  
Resources: Learn new skills with step-by-step guidance, learning paths, and modules.
 
Featured Documentation: Explore Azure OpenAI Service Updates in Detail >
Create advanced Copilot experiences using the Assistants API preview. Discover new models for GPT-4 Turbo (preview), GPT-3.5 Turbo, fine-tuning, and text-to-speech.  
What’s New

Blog: Microsoft and Open-Source Software >
Discover Microsoft technologies that are open source, check out repos on GitHub, and learn about tools you can use for your own open-source projects. (in English)

Documentation: How to deploy Mistral models with Azure AI Studio >
Microsoft is partnering with Mistral AI to bring its Large Language Models to Azure. Mistral AI offers two categories of models in Azure Machine Learning studio.

Blog: Python in Visual Studio Code – March 2024 Release >
See what’s included in the March 2024 release of the Python and Jupyter extensions for Visual Studio Code. (in English)
Events: See local events >

Microsoft Build / May 21 – 23 / Seattle >
Learn from experts, get hands-on with AI, and make connections with peers, Microsoft engineers, and industry leaders.

Open Source Summit / April 16 – 18 / Seattle >
Open Source Summit is the premier event for open source developers, technologists, and community leaders to collaborate, share information, and solve problems.
On demand: Getting Started with the Fluent UI Blazor Library >
This Open at Microsoft episode provides an overview of the Fluent UI Blazor library and how to leverage its open-source set of Blazor components.

On demand: Build Your Own Copilot with Azure AI Studio >
Learn how to use Azure AI Studio to create, manage, and deploy AI solutions with Azure OpenAI Service.

On demand: Architecting IoT Applications with .NET and Meadow >
Get your next IoT project started with flexible hardware design and platform support, including Meadow Feather, Raspberry Pi, and desktop.
Learning
Code Sample: Simple Chat Application using Azure OpenAI >
Build a Python Quart microframework app that streams responses from ChatGPT to an HTML/JS frontend using JSON Lines over a ReadableStream interface (a minimal sketch of this streaming pattern appears after this list).
Microsoft Copilot: Transform your work with Microsoft Copilot >
Learn about Microsoft Copilot and find out how to extend it or build your own Copilot experiences with this content on Microsoft Learn.  
Cloud Skills Challenge: Microsoft Learn AI Skills Challenge >
This immersive challenge will help you gain the skills, confidence, and Microsoft Credentials needed to excel in the era of AI.  
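As a companion to the code sample item above, here is a minimal sketch of that streaming pattern: a Quart route that relays Azure OpenAI chat completions to the browser as JSON Lines. The deployment name, environment variables, and route are illustrative assumptions, not the published sample’s actual code.

```python
# Minimal sketch: stream Azure OpenAI chat completions to a client as
# JSON Lines from a Quart route. Deployment name, env vars, and the
# /chat route are assumptions, not the published sample's code.
import json
import os

from openai import AsyncAzureOpenAI  # pip install openai quart
from quart import Quart, request

app = Quart(__name__)
client = AsyncAzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

@app.post("/chat")
async def chat():
    body = await request.get_json()

    async def stream():
        # stream=True yields incremental deltas instead of one full reply
        response = await client.chat.completions.create(
            model="gpt-35-turbo",  # your Azure deployment name (assumption)
            messages=body["messages"],
            stream=True,
        )
        async for chunk in response:
            if chunk.choices and chunk.choices[0].delta.content:
                # one JSON object per line: the "JSON Lines" framing the
                # frontend reads via a ReadableStream
                yield json.dumps({"content": chunk.choices[0].delta.content}) + "\n"

    return stream(), 200, {"Content-Type": "application/jsonlines"}
```

On the frontend, a fetch call can read this with response.body.getReader() and render each line as it arrives.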

Microsoft 365 Virtual Training Day: Prepare Your Organization for Microsoft Copilot for Microsoft 365

Build the skills you need to create new opportunities and accelerate your understanding of Microsoft Cloud technologies at a free Microsoft 365 Virtual Training Day from Microsoft Learn. Join us at Prepare Your Organization for Microsoft Copilot for Microsoft 365 to learn how to implement AI to help ignite creativity, enhance productivity, and strengthen computing and collaboration skills. You’ll learn about the capabilities of Copilot, including how it works, how to configure it, and how to set it up for more powerful searches. You’ll also explore how Copilot works with Microsoft Graph and your existing Microsoft 365 apps to provide intelligent, real-time assistance.

You will have the opportunity to:
- Understand the key components of Copilot for Microsoft 365 and how it works.
- Learn how to extend Copilot with plugins.
- Get guidance on completing the necessary Copilot technical and business requirements to prepare for implementation.
- Learn how to assign Copilot licenses, prepare your organization’s Microsoft 365 data for Copilot searches, and create a Copilot Center of Excellence.

Join us at an upcoming Prepare Your Organization for Microsoft Copilot for Microsoft 365 event:
April 22, 2024 | 12:00 PM – 2:00 PM | (GMT-05:00) Eastern Time (US & Canada)


Delivery Language: English
Closed Captioning Language(s): English

Register Here

Beware of AI Tax Scams

With Tax Day quickly approaching, many taxpayers may feel stressed as they work to file their tax returns promptly and accurately. During this time, cybercriminals may exploit this human vulnerability and leverage the rapid advancement of artificial intelligence (AI) and deepfakes. They continue to explore ways to steal and use your information, including personally identifiable information (PII), financial information such as W-2s and banking details, login account credentials, and other sensitive information. Once information is captured or stolen, threat actors can use it to impersonate their victims, file fraudulent tax returns on their behalf, and steal their tax refunds. They can also use the information for other identity theft and fraud schemes.
Threat actors use social engineering tactics, AI-generated deepfakes, and voice cloning technologies to impersonate legitimate and trusted tax authorities, including the Internal Revenue Service (IRS) and tax preparation services, by stealing and using their branding, logos, and interfaces. They target vulnerable people through email, phone, text messaging, and social media platforms to trick them into disclosing their information and initiating fraudulent transactions. For example, threat actors may claim a tax refund is due or send information to track the status of tax refunds via phishing emails or text messages with links that, if clicked, direct targets to spoofed IRS websites. Additionally, threat actors may claim via phone that their target did not pay taxes or filed them incorrectly and now owes the IRS for back taxes. They may also threaten arrest or legal action if the fictitious debt is not paid immediately via wire transfer, gift cards, or pre-paid debit cards.
Threat actors also create highly sophisticated phishing emails with AI-generated content to convince their targets to divulge sensitive information or visit malicious links to spoofed websites of popular online tax preparation software. Additionally, they develop AI-powered fraudulent tax software that masquerades as legitimate software to lure targets into downloading malicious applications that capture and steal their information. Threat actors also trick their targets by falsely advertising and promoting themselves as legitimate tax preparation services. These scammers, or “ghost tax preparers,” are not certified, but they still prepare and file false and fraudulent tax returns and defraud their clients. Such operations may be established quickly and promise fast or outsized tax refunds to entice potential victims. The NJCCIC observed emails containing a link directing targets to a tax preparer’s website. The website advertised services such as streamlining the tax filing process and displayed IRS credentials to create a sense of legitimacy. However, upon further inspection and analysis, the link to this website was determined to be phishing and malicious.

AZORult Malware Distributed via HTML Smuggling

Threat actors are using HTML smuggling and fraudulent Google Sites pages to disseminate AZORult in a new malware campaign. AZORult, also known as PuffStealer and Ruzalto, was first detected in 2016. It searches the desktop for sensitive documents using keywords for extensions and file names, and collects browser data and cryptocurrency wallet information. AZORult’s payload has been distributed in phishing, malspam, and malvertising campaigns and is currently targeting the healthcare industry. This campaign is laden with obfuscation and evasion techniques to minimize the chance of detection by anti-malware software.
Researchers found that the HTML smuggling technique employed embeds AZORult’s malicious JavaScript in a separate JSON file hosted on an external website. Once the Google Site is visited, a CAPTCHA test is initiated to add a sense of legitimacy for users and to shield the malware from URL scanners, such as VirusTotal. After the CAPTCHA test is passed, the payload is reconstructed and downloaded to the victim’s machine. The downloaded file is disguised as a PDF, often appearing as a bank statement, to trick users into opening it. Once launched, it executes a series of PowerShell scripts. The payload includes AMSI bypass techniques and reflective code loading to evade host- and disk-based detection and minimize artifacts.
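For defenders, one coarse way to triage HTML attachments or downloads for smuggling behavior is to look for a large embedded base64 blob co-occurring with the JavaScript APIs typically used to reconstruct and drop files client-side. The sketch below is a generic heuristic under stated assumptions, not a signature for this specific campaign; the indicator list and size threshold are illustrative.

```python
# Minimal heuristic sketch: triage an HTML file for signs of HTML smuggling.
# Indicator strings and the base64 length threshold are assumptions; this is
# a coarse triage aid, not a detection rule for any specific campaign.
import re
import sys

# JavaScript APIs commonly used to rebuild and drop a file in the browser
INDICATORS = [b"atob(", b"new Blob", b"createObjectURL", b"msSaveOrOpenBlob"]
LONG_BASE64 = re.compile(rb"[A-Za-z0-9+/]{2000,}={0,2}")  # long base64 run

def triage(path):
    data = open(path, "rb").read()
    hits = [i.decode() for i in INDICATORS if i in data]
    has_blob = LONG_BASE64.search(data) is not None
    # flag when a large encoded blob co-occurs with reconstruction APIs
    return has_blob and len(hits) >= 2, hits, has_blob

if __name__ == "__main__":
    suspicious, hits, blob = triage(sys.argv[1])
    print(f"suspicious={suspicious} indicators={hits} large_base64_blob={blob}")
```

Heuristics like this generate false positives on legitimate single-file web apps, so they are best used to prioritize files for sandbox detonation rather than to block outright.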

Protecting Model Updates in Privacy-Preserving Federated Learning

In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process.

In horizontally partitioned federated learning, a common approach is to…
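The excerpt ends here, but for context: the common approach referenced is typically federated averaging (FedAvg), in which each client trains on its own data and a central server averages the clients’ model updates, weighted by local dataset size. Below is a minimal sketch of that aggregation step with hypothetical client updates; in a PPFL system, input privacy means the server learns only this aggregate, not any individual update.

```python
# Minimal sketch of the FedAvg aggregation step in horizontally
# partitioned federated learning: average client model updates,
# weighted by each client's local dataset size. Values are hypothetical.
import numpy as np

def fedavg(updates, num_examples):
    """updates: per-client weight vectors; num_examples: local dataset sizes."""
    weights = np.array(num_examples, dtype=float)
    weights /= weights.sum()  # normalize so weights sum to 1
    return sum(w * u for w, u in zip(weights, updates))

# Three clients holding different amounts of local data
updates = [np.array([0.1, 0.2]), np.array([0.3, 0.0]), np.array([0.2, 0.4])]
sizes = [100, 300, 600]
print(fedavg(updates, sizes))  # clients with more data count more
```

Techniques such as secure aggregation let the server compute exactly this weighted sum over masked updates without seeing any one client’s contribution, which is the input-privacy problem the post goes on to discuss.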

Read the Blog