This training program includes 16 modules. For each module, the post includes a presentation, preferably recorded (where it is not yet, we are working on the recording), and supporting information: relevant product documentation, blog posts, and other resources.
The modules listed below are split into five groups following the life cycle of a SOC:
– Module 0: Other learning and support options
– Module 1: Get started with Azure Sentinel
– Module 2: How is Azure Sentinel used?
– Module 3: Workspace and tenant architecture
– Module 4: Data collection
– Module 5: Log Management
– Module 6: Enrichment: TI, Watchlists, and more
– Module X: Migration
– Module Z: ASIM and Normalization
– Module 7: The Kusto Query Language (KQL)
– Module 8: Analytics
– Module 9: SOAR
– Module 10: Workbooks, reporting, and visualization
– Module Y: Notebooks
– Module 11: Use cases and solutions
– Module 12: A day in a SOC analyst’s life, incident management, and investigation
– Module 13: Hunting
– Module 14: User and Entity Behavior Analytics (UEBA)
– Module 15: Monitoring Azure Sentinel’s health
– Module 16: Extending and Integrating using Azure Sentinel APIs
– Module 17: Bring your own ML
The Ninja training is a level 400 training. If you don’t want to go as deep or have a specific issue, other resources might be more suitable:
Think you're a true Sentinel Ninja? Take the knowledge check and find out. If you pass the
knowledge check with a score of over 80% you can request a certificate to prove your ninja
skills!
1. Take the knowledge check here.
2. If you score 80% or more in the knowledge check, request your participation certificate here. If you scored less than 80%, please review the questions you got wrong, study more, and take the assessment again.
Short on time? Watch the Fall Ignite presentation
Already know? The Spring Ignite session focuses on what's new and includes a how-to-use demo
Get deeper? Watch the Webinar: MP4, YouTube, Presentation
Microsoft Azure Sentinel is a scalable, cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response (read more).
If you want to get an initial overview of Azure Sentinel’s technical capabilities, the latest Ignite presentation is a good starting point. You might also find the Quick Start Guide to Azure Sentinel useful (requires registration). A more detailed overview, however somewhat dated, can be found in this webinar: MP4, YouTube, Presentation.
Lastly, want to try it yourself? The Azure Sentinel All-In-One Accelerator (blog, YouTube, MP4, deck) presents an easy way to get you started. To onboard on your own, review the onboarding documentation, or watch Insight's Sentinel setup and configuration video.
Thousands of organizations and service providers are using Azure Sentinel. As is usual with security products, most do not go public about it, but some customer stories are available.
Microsoft was named a Visionary in the 2021 Gartner Magic Quadrant for SIEM for Azure Sentinel.
Short on time? Read this presentation.
Many users use Azure Sentinel as their primary SIEM. Most of the modules in this course cover this use case. In this module, we present a few additional ways to use Azure Sentinel.
Use Azure Sentinel, Azure Defender, and Microsoft 365 Defender in tandem to protect your Microsoft workloads, including Windows, Azure, and Office:
The cloud is (still) new and often not monitored as extensively as on-prem workloads. Read this presentation to learn how Azure Sentinel can help you close the cloud monitoring gap across your clouds.
Whether for a transition period or for the longer term, if you are using Azure Sentinel for your cloud workloads, you may be running it alongside your existing SIEM. You might also be using both alongside a ticketing system such as ServiceNow.
For more information on migrating from another SIEM to Azure Sentinel, watch the migration webinar: MP4, YouTube, Deck.
There are three common scenarios for side-by-side deployment:
You can also send the alerts from Azure Sentinel to your 3rd party SIEM or ticketing system using the Graph Security API, which is simpler but would not enable sending additional data.
Since it eliminates the setup cost and is location agnostic, Azure Sentinel is a popular choice for providing SIEM as a service. You can find a list of MISA (Microsoft Intelligent Security Association) member MSSPs using Azure Sentinel. Many other MSSPs, especially regional and smaller ones, use Azure Sentinel but are not MISA members.
To start your journey as an MSSP, you should read the Azure Sentinel Technical Playbook for MSSPs. More information about MSSP support is included in the next module, cloud architecture and multi-tenant support.
While the previous section offers options to start using Azure Sentinel in a matter of minutes, before you start a production deployment, you need to plan. This section walks you through the areas that you need to consider when architecting your solution, as well as provides guidelines on how to implement your design:
Short on time? Watch Nic DiCola's Ignite presentation (first 11 minutes)
Get Deeper? Watch the Webinar: MP4, YouTube, Presentation
An Azure Sentinel instance is called a workspace. The workspace is the same as a Log Analytics workspace and supports any Log Analytics capability. You can think of Sentinel as a solution that adds SIEM features on top of a Log Analytics workspace.
Multiple workspaces are often necessary and can act together as a single Azure Sentinel system. A special use case is providing service using Azure Sentinel, for example, by an MSSP (Managed Security Service Provider) or by a Global SOC in a large organization.
To learn more about why to use multiple workspaces and how to use them as one Azure Sentinel system, read Extend Azure Sentinel across workspaces and tenants or, if you prefer, watch the Webinar version: MP4, YouTube, Presentation.
There are a few specific areas that require your consideration when using multiple workspaces:
The Azure Sentinel Technical Playbook for MSSPs provides detailed guidelines for many of these topics and is useful for large organizations as well, not just MSSPs.
Sept 2021 update: our latest webinar on data collection scenarios by Edi Lahav and Yaniv Shasha. YouTube, MP4, Deck
Short on time? Watch Nic DiCola's Ignite presentation (middle 11 minutes)
Get Deeper? Watch the Webinar: YouTube, MP4, Deck.
The foundation of a SIEM is collecting telemetry: events, alerts, and contextual enrichment information such as Threat Intelligence, vulnerability data, and asset information. You can find a list of sources you can connect here:
How you connect each source falls into several categories, or source types. Each source type requires a distinct setup effort, but once deployed, it serves all sources of that type. The Grand List specifies the type of each source. To learn more about these categories, watch the Webinar (includes Module 3): YouTube, MP4, Deck.
The types are:
If your source is not available, you can create a custom connector. Custom connectors use the ingestion API and therefore are similar to direct sources. Custom connectors are most often implemented using Logic Apps, offering a codeless option, or Azure Functions.
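Custom connectors ultimately post JSON to the ingestion (HTTP Data Collector) API, which authenticates each request with an HMAC-SHA256 signature over a canonical string. As a minimal sketch of that signature scheme (the workspace ID and key below are placeholders, not real credentials), the Authorization header can be built like this:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def build_signature(workspace_id: str, shared_key: str,
                    content_length: int, date_rfc1123: str) -> str:
    """Build the SharedKey Authorization header used by the
    Log Analytics HTTP Data Collector API."""
    # Canonical string the service expects to be signed.
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Placeholder credentials, for illustration only.
ws_id = "00000000-0000-0000-0000-000000000000"
key = base64.b64encode(b"fake-shared-key-for-illustration").decode()
date = datetime(2021, 9, 1, tzinfo=timezone.utc).strftime(
    "%a, %d %b %Y %H:%M:%S GMT")
auth = build_signature(ws_id, key, 123, date)
print(auth)
```

The same header, together with the `x-ms-date` and `Log-Type` headers, then accompanies the POST to the workspace's `ods.opinsights.azure.com/api/logs` endpoint; see the ingestion API documentation for the full request shape.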
While how many and which workspaces to use is the first architecture question to ask, there are additional log management architectural decisions:
One of the important functions of a SIEM is to apply contextual information to the event stream, enabling detection, alert prioritization, and incident investigation. Contextual information includes, for example, threat intelligence, IP intelligence, host and user information, and watchlists.
Azure Sentinel provides comprehensive tools to import, manage, and use threat intelligence. For other types of contextual information, Azure Sentinel provides Watchlists, as well as alternative solutions.
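To make the watchlist idea concrete, here is a minimal, hypothetical Python sketch of the kind of lookup a KQL query performs against a watchlist: events are joined to a small high-value-asset list to pick up a priority field. The watchlist contents and field names are invented for illustration:

```python
# Hypothetical high-value-asset watchlist, keyed by hostname.
watchlist = {
    "srv-dc01": {"owner": "identity-team", "priority": "critical"},
    "srv-web03": {"owner": "app-team", "priority": "medium"},
}

events = [
    {"host": "srv-dc01", "alert": "Suspicious logon"},
    {"host": "laptop-77", "alert": "Malware detected"},
]

def enrich(event: dict, watchlist: dict) -> dict:
    """Attach watchlist context to an event, mirroring a KQL join
    against a watchlist of high-value assets."""
    context = watchlist.get(event["host"])
    enriched = dict(event)
    enriched["priority"] = context["priority"] if context else "unknown"
    return enriched

enriched = [enrich(e, watchlist) for e in events]
```

In Azure Sentinel itself, the equivalent lookup is done in KQL with the `_GetWatchlist()` function inside an analytics rule or hunting query.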
Sept 2021 update: Sign up for the Explore the Power of Threat Intelligence in Azure Sentinel webinar on Oct 25 here.
Short on time? watch the Ignite session (28 Minutes)
Get Deeper? Watch the Webinar: YouTube, MP4, Presentation
Threat Intelligence is an important building block of a SIEM.
In Azure Sentinel, you can integrate threat intelligence (TI) using the built-in connectors from TAXII servers or through the Microsoft Graph Security API. Read more on how to in the documentation. Refer to the data collection modules for more information about importing Threat Intelligence.
Once imported, Threat Intelligence is used extensively throughout Azure Sentinel and is weaved into the different modules. The following features focus on using Threat Intelligence:
Watch the Webinar: YouTube, MP4, Presentation
In many (if not most) cases, you already have a SIEM and need to migrate to Azure Sentinel. While it may be a good time to start over and rethink your SIEM implementation, it makes sense to utilize some of the assets you already built in your current implementation. To start, watch our webinar describing best practices for converting detection rules from Splunk, QRadar, and ArcSight to Azure Sentinel rules: YouTube, MP4, Presentation, blog.
You might also be interested in some of the resources presented in the blog:
Watch the Understanding Normalization in Azure Sentinel webinar: YouTube, Presentation
Watch the Deep Dive into Azure Sentinel Normalizing Parsers and Normalized Content webinar: YouTube, MP4, Presentation
Sign up for the Turbocharging ASIM: Making Sure Normalization Helps Performance Rather Than Impacting It webinar on Oct 6 here.
Working with various data types and tables together presents a challenge. You must become familiar with many different data types and schemas, and write and maintain a unique set of analytics rules, workbooks, and hunting queries for each, even for sources that share commonalities (for example, DNS servers). Correlation between the different data types, necessary for investigation and hunting, is also tricky.
The Azure Sentinel Information Model (ASIM) provides a seamless experience for handling various sources in uniform, normalized views. ASIM aligns with the Open Source Security Events Metadata (OSSEM) common information model, promoting vendor-agnostic, industry-wide normalization.
The current implementation is based on query-time normalization using KQL functions and includes the following:
Using ASIM provides the following benefits:
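As an illustration of query-time normalization, the sketch below maps two hypothetical vendor event shapes onto a common schema so that a single query can serve both. The normalized field names are modeled on ASIM's DNS schema, but the sources and parsers here are invented:

```python
# Hypothetical vendor-specific DNS events with different field names.
vendor_a = {"query_name": "contoso.com", "client": "10.0.0.4", "rcode": 0}
vendor_b = {"dns_question": "fabrikam.com", "src_ip": "10.0.0.9", "reply_code": 3}

# Per-source field maps playing the role of ASIM query-time parsers.
FIELD_MAPS = {
    "vendor_a": {"query_name": "DnsQuery", "client": "SrcIpAddr",
                 "rcode": "DnsResponseCode"},
    "vendor_b": {"dns_question": "DnsQuery", "src_ip": "SrcIpAddr",
                 "reply_code": "DnsResponseCode"},
}

def normalize(event: dict, source: str) -> dict:
    """Rename a vendor event's fields into the normalized schema."""
    return {FIELD_MAPS[source][k]: v for k, v in event.items()}

normalized = [normalize(vendor_a, "vendor_a"), normalize(vendor_b, "vendor_b")]

# One "query" now runs over both sources, e.g. find NXDOMAIN responses
# (DNS response code 3) regardless of the original vendor format.
nxdomain = [e for e in normalized if e["DnsResponseCode"] == 3]
```

In Azure Sentinel, the equivalent of `normalize` is a KQL parser function, and content written against the normalized view automatically covers every source with a parser.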
What is Azure Sentinel’s content?
Azure Sentinel's security value is a combination of its built-in capabilities, such as UEBA, machine learning, and out-of-the-box analytics rules, and your ability to create custom capabilities and customize built-in ones. Customized SIEM capabilities are often referred to as "content" and include analytics rules, hunting queries, workbooks, playbooks, and more.
In this section, we grouped the modules that help you learn how to create such content or modify built-in content to your needs. We start with KQL, the lingua franca of Azure Sentinel. The following modules each discuss one of the content building blocks: rules, playbooks, and workbooks. We wrap up by discussing use cases, which combine elements of different types to address specific security goals such as threat detection, hunting, or governance.
Short on time? Start at the beginning and go as far as time allows.
Most Azure Sentinel capabilities use KQL or Kusto Query Language. When you search in your logs, write rules, create hunting queries, or design workbooks, you use KQL. Note that the next section on writing rules explains how to use KQL in the specific context of SIEM rules.
We suggest you follow this Sentinel KQL journey:
You might also find the following reference information useful as you learn KQL:
Short on time? watch the Webinar: MP4, YouTube, Presentation
Azure Sentinel enables you to use built-in rule templates, customize the templates for your environment, or create custom rules. The core of the rules is a KQL query; however, there is much more than that to configure in a rule.
To learn the procedure for creating rules, read the documentation. To learn how to write rules, i.e., what should go into a rule, focusing on KQL for rules, watch the webinar: MP4, YouTube, Presentation.
SIEM rules have specific patterns. Learn how to implement rules and write KQL for those patterns:
The blog post "Blob and File Storage Investigations" provides a step-by-step example of writing a useful analytics rule.
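A common rule pattern is a simple threshold over grouped events; in Sentinel this would be a KQL `summarize`-then-filter query inside a scheduled analytics rule. The Python sketch below models the same logic over simulated sign-in failures (the events and threshold are invented for illustration):

```python
from collections import Counter

# Simulated sign-in events; in Sentinel this data would come from a
# query over a table such as SigninLogs within the rule's time window.
events = [
    {"user": "alice", "result": "failure"},
    {"user": "alice", "result": "failure"},
    {"user": "alice", "result": "failure"},
    {"user": "bob", "result": "failure"},
    {"user": "alice", "result": "success"},
]

THRESHOLD = 3  # alert when a user fails this many times in the window

# Group failures by user, then keep only users at or over the threshold.
failures = Counter(e["user"] for e in events if e["result"] == "failure")
alerts = [user for user, count in failures.items() if count >= THRESHOLD]
```

The same shape (filter, group, count, compare) underlies most threshold and correlation rules; the KQL version just expresses it as `where`, `summarize count() by`, and a final `where`.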
Short on time? watch the Machine Learning Webinar: MP4, YouTube, Presentation
Before embarking on your own rule writing, you should take advantage of the built-in analytics capabilities. Those do not require much from you, but it is worthwhile learning about them:
Sept 2021 update: sign up for the What’s New in Azure Sentinel Automation webinar on Oct 28 here.
Short on time? watch the Webinar: YouTube, MP4, Deck
In modern SIEMs such as Azure Sentinel, SOAR (Security Orchestration, Automation, and Response) comprises the entire process from the moment an incident is triggered and until it is resolved. This process starts with an incident investigation and continues with an automated response. The blog post “How to use Azure Sentinel for Incident Response, Orchestration and Automation” provides an overview of common use cases for SOAR.
Automation rules are the starting point for Azure Sentinel automation. They provide a lightweight method for central automated handling of incidents, including suppression, false-positive handling, and automatic assignment.
To provide robust workflow-based automation capabilities, automation rules use Logic Apps playbooks:
You can find dozens of useful Playbooks in the Playbooks folder on the Azure Sentinel GitHub, or read “A playbook using a watchlist to Inform a subscription owner about an alert” for a Playbook walkthrough.
While Azure Sentinel is a cloud-native SIEM, its automation capabilities do extend to on-prem environments, either using the Logic Apps on-prem gateway or using Azure Automation, as described in “Automatically disable On-prem AD User using a Playbook triggered in Azure”.
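Conceptually, automation rules are an ordered list of condition/action pairs evaluated against each new incident, covering scenarios like suppression and auto-assignment. The sketch below is a toy Python model of that evaluation order, not Sentinel's actual engine; the conditions and assignees are invented:

```python
# Hypothetical automation rules, evaluated in order; the first rule
# whose condition matches the incident determines the action taken.
rules = [
    # Suppress known-benign test incidents.
    {"match": lambda i: i["title"].startswith("Test"),
     "action": ("close", "Benign")},
    # Route high-severity incidents straight to tier 2.
    {"match": lambda i: i["severity"] == "High",
     "action": ("assign", "tier2@contoso.com")},
    # Everything else goes to tier 1 triage.
    {"match": lambda i: True,
     "action": ("assign", "tier1@contoso.com")},
]

def apply_rules(incident: dict):
    """Return the action of the first matching rule."""
    for rule in rules:
        if rule["match"](incident):
            return rule["action"]

result = apply_rules({"title": "Brute force detected", "severity": "High"})
```

Real automation rules add order numbers, expiration dates, and the ability to trigger playbooks, but the first-match, ordered evaluation shown here is the core idea.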
Short on time? Watch the Webinar: YouTube, MP4, Deck
As the nerve center of your SOC, you need Azure Sentinel to visualize the information it collects and produces. Use workbooks to visualize data in Azure Sentinel.
Workbooks can be interactive and enable much more than just charting. With workbooks, you can create apps or extension modules for Azure Sentinel to complement its built-in functionality. We also use workbooks to extend Azure Sentinel's features. A few examples of such apps you can both use and learn from are:
You can find dozens of workbooks in the Workbooks folder in the Azure Sentinel GitHub. Some of those are available in the Azure Sentinel workbooks gallery and some are not.
Workbooks can also serve for reporting. For more advanced reporting capabilities, such as report scheduling and distribution or pivot tables, you might want to use:
Short on time? Watch the short introduction video
Get Deeper? Watch the Webinar: YouTube, MP4, Presentation
Jupyter notebooks are fully integrated with Azure Sentinel. While usually considered an important tool in the hunter’s tool chest and discussed in the webinars in the hunting section below, their value is much broader. Notebooks can serve for advanced visualization, as an investigation guide, and for sophisticated automation.
To understand them better, watch the Introduction to notebooks video. Get started using the Notebooks webinar (YouTube, MP4, Presentation) or by reading the documentation.
An important part of the integration is implemented by MSTICPy, a Python library developed by our research team for use with Jupyter notebooks. It adds Azure Sentinel interfaces and sophisticated security capabilities to your notebooks.
Sept 21 update: sign up for the Create Your Own Azure Sentinel Solutions webinar on Nov 16 here.
Short on time? watch the "Tackling Identity" Webinar: YouTube, MP4, Deck
Using connectors, rules, playbooks, and workbooks enables you to implement use cases: the SIEM term for a content pack intended to detect and respond to a threat. You can deploy Sentinel's built-in use cases by activating the suggested rules when connecting each connector. A solution is a group of use cases addressing a specific threat domain.
The Webinar “Tackling Identity” (YouTube, MP4, Presentation) explains what a use case is, how to approach its design, and presents several use cases that collectively address identity threats.
Another very relevant solution area is protecting remote work. Watch our Ignite session on protecting remote work, and read more on the specific use cases:
And lastly, focusing on recent attacks, learn how to monitor the software supply chain with Azure Sentinel.
Azure Sentinel solutions provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Azure Sentinel. Read more about them here, and sign up for the upcoming webinar on Nov 16 on how to create solutions here.
Sept 21 update: sign up for the Decrease Your SOC’s MTTR (Mean Time to Respond) by Integrating Azure Sentinel with Microsoft Teams webinar on Nov 10 here.
Short on time? Watch the "day in a life" Webinar: YouTube, MP4, Deck
After building your SOC, you need to start using it. The “A day in a SOC analyst’s life” webinar (YouTube, MP4, Presentation) walks you through using Azure Sentinel in the SOC to triage, investigate, and respond to incidents.
Integrating with Microsoft Teams directly from Azure Sentinel enables your teams to collaborate seamlessly across the organization, and with external stakeholders. Sign up for the Decrease Your SOC’s MTTR (Mean Time to Respond) by Integrating Azure Sentinel with Microsoft Teams webinar on Nov 10 here.
You might also want to read the documentation article on incident investigation. As part of the investigation, you will also use the entity pages to get more information about entities related to your incident or identified as part of your investigation.
Incident investigation in Azure Sentinel extends beyond the core incident investigation functionality. We can build additional investigation tools using Workbooks and Notebooks (the latter are discussed later, under hunting). You can also build additional investigation tools or modify ours to your specific needs. Examples include:
Short on time? watch the Webinar: YouTube, MP4, Deck (note that the Webinar starts with an update on new features; to learn about hunting, start at slide 12. The YouTube link is already set to start there)
While most of the discussion so far focused on detection and incident management, hunting is another important use case for Azure Sentinel. Hunting is a proactive search for threats rather than a reactive response to alerts.
The hunting dashboard was refreshed in July 2021 and shows all the queries written by Microsoft’s team of security analysts, as well as any extra queries that you have created or modified. Each query provides a description of what it hunts for and what kind of data it runs on. The templates are grouped by tactic; the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. Read more about it here.
To understand more about what hunting is and how Azure Sentinel supports it, Watch the hunting intro Webinar (YouTube, MP4, Deck). Note that the Webinar starts with an update on new features. To learn about hunting, start at slide 12. The YouTube link is already set to start there.
While the intro webinar focuses on tools, hunting is all about security. Our security research team's webinar on hunting (MP4, YouTube, Presentation) focuses on how to actually hunt. The follow-up AWS Threat Hunting using Sentinel Webinar (MP4, YouTube, Presentation) drives the point home by showing an end-to-end hunting scenario on a high-value target environment. Lastly, you can learn how to do SolarWinds Post-Compromise Hunting with Azure Sentinel and WebShell hunting, motivated by the recent vulnerabilities in on-premises Microsoft Exchange servers.
Short on time? Watch the Webinar: MP4, YouTube, Deck
Azure Sentinel's newly introduced User and Entity Behavior Analytics (UEBA) module enables you to identify and investigate threats inside your organization and their potential impact, whether from a compromised entity or a malicious insider.
Learn more about UEBA in the UEBA Webinar (MP4, YouTube, Deck) and read about using UEBA for investigations in your SOC.
Short on time? watch the videos on monitoring connectors, security operations health, or workspace audit.
Part of operating a SIEM is making sure it works smoothly; this is an evolving area in Azure Sentinel. Use the following to monitor Azure Sentinel’s health:
Short on time? watch the video (5 minutes)
Get deeper? Watch the Webinar: MP4, YouTube, Presentation
As a cloud-native SIEM, Azure Sentinel is an API-first system. Every feature can be configured and used through an API, enabling easy integration with other systems and letting you extend Sentinel with your own code. If APIs sound intimidating, don’t worry: whatever is available using the API is also available using PowerShell.
To learn more about Azure Sentinel APIs, watch the short introductory video and blog post. To get the details, watch the deep dive Webinar (MP4, YouTube, Presentation) and read the blog post Extending Azure Sentinel: APIs, Integration, and management automation.
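Every Sentinel resource lives under the Microsoft.SecurityInsights resource provider on top of a Log Analytics workspace, so API calls are ordinary Azure Resource Manager REST requests. The sketch below only builds the incidents-list URL; the api-version value is an assumption, and authentication (a bearer token on the request) is omitted:

```python
def incidents_url(subscription: str, resource_group: str, workspace: str,
                  api_version: str = "2021-04-01") -> str:
    """Build the ARM URL for listing Azure Sentinel incidents.

    The provider path nests Microsoft.SecurityInsights under the
    Log Analytics workspace; the default api_version here is an
    assumption and should be checked against the REST API reference.
    """
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}"
        "/providers/Microsoft.SecurityInsights/incidents"
        f"?api-version={api_version}"
    )

# Placeholder names, for illustration only.
url = incidents_url("sub-id", "soc-rg", "sentinel-ws")
print(url)
```

A GET to that URL with an `Authorization: Bearer <token>` header returns the incident list; the same pattern (swap the trailing resource type) applies to alert rules, watchlists, and other Sentinel resources.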
Short on time? watch the video
Azure Sentinel provides a great platform for implementing your own machine learning algorithms. We call this Bring Your Own ML, or BYOML for short. This capability is intended for advanced users. If you are looking for built-in behavioral analytics, use our ML analytics rules or the UEBA module, or write your own KQL-based behavioral analytics rules.
To start with bringing your own ML to Azure Sentinel, watch the video, and read the blog post. You might also want to refer to the BYOML documentation.
In an ongoing effort to provide practical and actionable guidance to help organizations manage growing cybersecurity risks, NIST has released a draft ransomware risk management profile. The Cybersecurity Framework Profile for Ransomware Risk Management, Draft NISTIR 8374, is now open for comment through October 8, 2021.
The draft profile, prepared by the National Cybersecurity Center of Excellence (NCCoE), identifies security objectives from the NIST Cybersecurity Framework that can help prevent, respond to, and recover from ransomware events. It can be used as a guide to managing risk, including helping gauge an organization’s readiness to mitigate ransomware threats and react to potential impacts. The profile addresses issues that were raised in public comments on a preliminary draft released in June.
Registration is OPEN for the 8th annual New York Metro Joint Cyber Security Conference & Workshop (Oct. 14/15). Find out more at https://infosecurity.nyc
How to Gain More from your Connection to an OT Network
One of the most productive and non-intrusive tools in the cyber security engineer’s bag is passive Network Traffic Analysis (NTA). It provides network maps, inventory, and firmware information, among other benefits: insights that are generally not available any other way. Manual inventory collection methods are error-prone and expose this information to interception over corporate email networks, shared file folders, and the like. But how do we implement this kind of system without causing any bumps in the road for real-time processes? What are the risks? Which methods are best? The best sensor does no good unconnected, and is of little value connected in the wrong part of the network.
To discuss this, I will use a diagram that was developed for my last blog post, Designing a Robust Defense for Operational Technology Using Azure Defender for IoT (microsoft.com). This diagram (below) shows an example OT network monitored by Azure Defender for IoT.
Defender for IoT is an agentless passive Network Traffic Analysis tool with strong roots in Operational Technology, now expanding to IoT. Defender for IoT discovers OT/IoT devices, identifies vulnerabilities, and provides continuous OT/IoT-aware monitoring of network traffic. The recommended locations for Azure Defender for IoT (AD4IoT) sensors are shown in red. Why were these locations chosen? To explain, we will break this network into pieces and address these issues for each type of traffic.
Starting with the lower portion of this sketch, let’s look at traffic flows around the PLCs.
1. The first arrow shows traffic between a PLC and its Ethernet-connected Input/Output (I/O) modules. This traffic uses simple protocols and is very structured and periodic. It can be leveraged as a threat vector against the overall OT system, and is more vulnerable when the I/O is remote from the PLCs in unsecured areas. Malicious applications could perform inappropriate control actions and/or falsify data. Firmware problems in I/O modules often go unpatched unless some form of undesirable behavior is experienced. For certain families of PLCs or controllers, Defender for IoT can provide data on firmware levels and types of I/O modules if this data is requested by an HMI or historian.
The mechanism to monitor this traffic is to span the switches used in the I/O subsystem, as shown here. If they are unmanaged switches, taps may be located at the connection to the PLC or controller.
2. The second arrow identifies traffic from Variable Frequency Drives or similar equipment often interfaced with the PLCs or controllers. This communication may be Modbus, Rockwell protocols, or CIP. Equipment could be damaged or destroyed by inappropriate commands sent to such devices. Good engineering practice would put bounds of reasonability around all potential setpoints, but this may not be the case. These protocols are well understood and in the public domain. A man-in-the-middle attack could affect this type of equipment. Monitoring these communications can identify inappropriate function calls, program or firmware changes, and parameter updates. As above, switch SPAN or taps are the mechanisms to monitor this traffic.
3. Custom-engineered systems may utilize well-known, open OT protocols such as Modbus, OPC, or others. This traffic should be monitored even if it is not fully understood, as the behavior patterns should be very predictable. It is common for these systems to utilize unusual functions and atypical ranges for data, the result of a developer reading a protocol spec with no actual field experience with the protocol. Custom alerts can be configured and tuned based on the nature of the data. Since such systems are engineered to order for a specific purpose, damage could have long-term implications for plant production.
4. Traffic crossing OT access-level switches should always be monitored. This is the primary point at which PLCs or controllers communicate with HMIs, engineering stations, and sometimes historians. The problem here is that these switches carry the actual OT control traffic: any action that could compromise this traffic affects the reliability of the OT system. Many switches at the I/O and access layers may be unmanaged devices; by unmanaged, I mean that they are not configurable and therefore cannot support a SPAN (or mirror) session.
Unmanaged switches are not an insurmountable hurdle. Two possible paths may be followed from this point. The least intrusive is to install network taps. The security engineer should consult with the OT engineer on the most valuable locations for taps. Since a stand-alone tap monitors only one data stream, the most valuable assets (compromise targets) should be monitored. These would normally be at least the engineering station, the historian and/or alarms server (if appropriate), and HMIs, particularly those with engineering tools installed. If it is necessary to monitor all traffic, a tap aggregator may be used.
Another approach would be to replace the unmanaged switches with managed switches. This may sound daunting but usually is not. Most managed switches are configured to “wake up” in a basic mode that approximates an unmanaged switch, so replacement, while requiring a system shutdown, can be accomplished rather quickly, with the system up and functioning again. Once this is done, configuration can be added to provide basic security and copy traffic to a SPAN or mirror port. Make sure these configurations are saved, as most switches make changes to operating memory, which is not retained on power reset. It is generally recommended to discuss this change with your OT support personnel and/or OEM service engineers. They probably have some standard switch configurations that they apply when a customer requests managed switches. Additionally, they should be able to provide you with the approximate bus speeds needed to support OT traffic with mirroring.
What are the risks? In the case of switch SPAN (Switched Port ANalyzer) or mirror sessions, the only concern of serious significance is the current traffic level on the switch. If a SPAN session is added to a heavily loaded switch, the SPAN may drop packets, because the SPAN session is a lower priority than actual switching traffic. This could mean that some packets slip through unmonitored; however, it does not affect the normal functioning of the switch for ICS traffic. Some switches, if greatly overloaded, can revert to “flood mode”, in which they act as a network hub. This situation is extremely rare. If switch SPANning is chosen as a method, it is wise to monitor network traffic on the switch prior to adding the session. Assume that a full switch SPAN will double the switch backbone traffic.
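That rule of thumb lends itself to a quick capacity check before enabling a SPAN session. The sketch below simply applies the "doubling" assumption to estimate remaining backbone headroom; the figures are illustrative, not measurements:

```python
def span_headroom(backbone_gbps: float, current_load_gbps: float) -> float:
    """Estimate remaining backbone capacity if a full SPAN session
    roughly doubles the traffic the switch must carry."""
    projected_load = current_load_gbps * 2  # original + mirrored copy
    return backbone_gbps - projected_load

# Example: a 10 Gbps backbone currently carrying 3 Gbps of ICS traffic
# would carry about 6 Gbps with a full SPAN, leaving 4 Gbps of headroom.
headroom = span_headroom(10.0, 3.0)
print(headroom)
```

A negative result would suggest the switch is a poor SPAN candidate and that taps, or spanning only selected ports, are the safer choice.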
If network taps are installed, the risks are insignificant. Passive taps should of course be chosen. Passive means that the tap continues to pass control traffic even if it loses power. Passive taps are simply inserted in-line with the existing traffic; see the sketch below. Installation needs to be coordinated with OT engineers to limit the impact on operating processes.
Next, we will discuss special equipment, including analysis devices and robotics. This portion of the overall diagram is shown below.
Network traffic to analyzers typically looks like normal PC traffic using common IT protocols. Most analyzers have some form of controller designed for a specific function. Sometimes the PC is the controller, utilizing specialized I/O boards included in the machine. Some analyzers or groups of analyzers may be managed by mini computers. In any case, from a network security perspective, these devices appear on the network as computers, not analyzers per se. Patching of these customized machines often lags behind the upgrade strategies used for standard IT equipment. Upgrades to analysis systems must be approved by, and often implemented by, the OEMs, which may be expensive and involve downtime. Because of infrequent patching and/or OS upgrades, this equipment can become a security liability on a lab network. Ideally, lab equipment should be separated, either physically onto separate networks or via VLANs, but such changes may require extensive planning and testing and can still be disruptive to ongoing lab processes.
Most major medical laboratories utilize either a LIMS (Laboratory Information Management System) or a middleware server to collect analytics data from these devices and forward that data to a patient information database managed either locally or in the cloud (see sketch below). Hence, the traffic to/from the analyzer will be most easily recognized by its ultimate destination at the middleware or LIMS. Since these potentially vulnerable machines may process interactions with users on the lab network for input data or maintenance functions, they should be monitored more closely than fully patched IT machines. This presents a challenge to lab IT managers, who may want to get a handle on this type of OT equipment in their network but may not have good inventory information.
Since medical testing facilities utilize normal switched networks,
monitoring should be installed at an appropriate location to ‘see’ all the
traffic from analyzers to the middleware or LIMS server. This could be
either core or distribution level switches depending on the network
design. Standard SPAN or mirror traffic can be used.
Dual-homed machines present special security challenges since they could be
converted to active routers by malware. It is common for expensive lab or
analysis equipment to be leased. OEM terms and conditions specify how
this equipment may be used and what service it requires to achieve contracted
performance. This is often monitored via a ‘secure’ datalink to the
manufacturer’s support site. These links may or may not be
bi-directional. They are generally firewalled, either by the OEM,
by the customer or by both. Bi-directional links are inherently a threat.
Remote access to a computer on the lab network can put much more than that
computer in jeopardy.
In robotic applications, the primary issue is the speed of response.
The control systems are complex, utilizing high-level programming
toolsets. The low-level communication may not utilize standard ethernet
framing. Robot protocols vary widely and include Ethernet/IP, DeviceNet,
Profibus-DP, Profinet, CC-Link, and EtherCAT. Physical media
may be Cat5/6, but RG-6 coaxial, twisted pair, RS-485, and fiber are also
used. Monitoring the low-level communication between controllers and
robots requires careful coordination with the equipment designer and should not
be attempted casually. Network monitoring should utilize taps; switch
SPAN or mirroring is not recommended.
As described above, most industrial robots are programmed using a computer
workstation. Downloading and selection of programs may be manual or
automated using standard network protocols. So, monitoring should focus on the
programming workstations and the source of robot program selections.
Robot program file downloads may be transferred from a central server.
These could occur over SFTP, FTP, SMB, or other methods.
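The monitoring focus described above can be sketched in a few lines. This is an illustrative sketch only: the controller subnet address and the port list are assumptions for demonstration, not a standard.

```python
# Illustrative sketch: flagging file-transfer flows toward a
# robot-controller subnet. The subnet and ports are assumed values.
import ipaddress

ROBOT_SUBNET = ipaddress.ip_network("10.20.30.0/24")  # assumed controller subnet
TRANSFER_PORTS = {21, 22, 445}  # FTP, SFTP/SSH, SMB

def is_robot_transfer(flow):
    """True if the flow is a file transfer headed for a robot controller."""
    return (ipaddress.ip_address(flow["dst"]) in ROBOT_SUBNET
            and flow["port"] in TRANSFER_PORTS)

flows = [
    {"src": "10.1.1.9", "dst": "10.20.30.15", "port": 445},  # SMB to a controller
    {"src": "10.1.1.9", "dst": "10.1.1.5", "port": 443},     # ordinary web traffic
]
print([f for f in flows if is_robot_transfer(f)])
```

In practice the flow records would come from the SPAN/tap monitoring already discussed; the point is simply that transfers toward program-hosting controllers are a narrow, watchable slice of traffic.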
Finally, we would like to address the OT interface to the business
(Enterprise) network. This can be a gateway for potential threats to OT
systems. Attacks that are unsuccessful in the IT network space may still
cause severe problems in the OT space because the machines may not be
patched; out-of-date and unsupported operating systems may be in
use. As a result, traffic that enters from the Enterprise network and
ultimately reaches the OT network should be monitored.
Generally, good practice prevents any direct traversal of the DMZ. For
instance, remote desktop sessions should be hosted by a RAS server in the DMZ
which is then used to open a remote desktop session into an OT machine with
different credentials. Elaborate credential systems with short password
lifetimes raise the bar for attackers attempting to gain
control. Well-designed implementations keep all machines in the DMZ
patched up to date, which should limit the effect of known
vulnerabilities.
Zero-day vulnerabilities will always be a threat prior to discovery.
So, monitoring sessions entering the DMZ from the Enterprise network and those
leaving the DMZ for the OT network is an important part of a security design.
Similarly, monitoring traffic from the OT network to a Historian server and
Enterprise connections to that same server could uncover issues. Since
these sessions are often encrypted, efforts should focus on the legitimacy of
the Enterprise hosts, times of access, data rates, and other indicators to
validate these externally generated sessions.
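Because payload inspection is impossible for these encrypted sessions, checks like time-of-access are among the few usable signals. A minimal sketch follows; the business-hours window is an assumed site schedule, not a recommendation.

```python
# Illustrative sketch: flagging off-hours Enterprise-to-Historian
# sessions. The business-hours window is an assumed site schedule.
from datetime import datetime

BUSINESS_HOURS = range(7, 19)  # 07:00-18:59, assumed schedule

def off_hours(session_start: datetime) -> bool:
    """True if the session began outside the assumed business hours."""
    return session_start.hour not in BUSINESS_HOURS

assert off_hours(datetime(2021, 8, 18, 2, 30))      # 02:30 -> flag for review
assert not off_hours(datetime(2021, 8, 18, 10, 0))  # 10:00 -> normal
```

A real deployment would combine this with host allowlists and data-rate baselines, as the text suggests, rather than relying on any single indicator.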
The DMZ is also used as a connection point for a variety of other facility
systems such as IP phones; perimeter security systems; weather stations;
contracted supply systems like water purification, compressed air supply and
the like; wireless devices; etc. In most cases, these various systems are
assigned separate VLANs and subnets. By monitoring all the VLANs in this
zone, suspicious traffic can be identified and managed. Traffic
originating from any of these devices to the ICS network should not normally
exist.
Subnet-to-subnet traffic could be cause for concern. This is another
area where Defender for IoT can help. By mapping the assets, assigning
them to VLANs, subnets, and user-assigned subsystems, communication between the
various device groups can be easily seen, greatly aiding efforts to perform or
monitor network segregation.
The visual network map produced by Defender for IoT in conjunction with the
filtering capabilities on the map make it easy to identify interconnections
between various plant control systems. Having a powerful visual of
group-to-group communication makes the effort of segmentation much
easier; doing the same with ARP tables on switches is a long and tedious
process. Also, if such an effort is underway, the map will show areas that
may have been overlooked.
Conclusions:
Well-engineered connections to ICS networks can yield valuable results,
including accurate inventories, network maps, and improved security with no
risk to the reliability of the underlying OT systems. This information
can be combined, in Azure Sentinel or other
SIEM/SOAR solutions, with agent-based Defender for Endpoint data to produce a
complete picture of OT networks. Custom-designed playbooks can assist
your analysts in responding to OT or IoT issues.
Teamwork between OT engineers and IT security personnel can yield benefits
for both groups while presenting a more challenging landscape to potential
intruders.
As we know, each organization is unique and has different use cases and
scenarios in mind when it comes to security operations. Nevertheless, we’ve
identified several use cases that are common across many SOC teams.
Azure Sentinel now provides built-in watchlist templates, which you can
customize for your environment and use during investigations.
After those watchlists are populated with data, you can correlate that data
with analytics rules, view it in the entity pages and investigation graphs as
insights, create custom uses such as to track VIP or sensitive users, and more.
Watchlist templates currently include:
Watchlist template insights in entity pages
We’ve designed the watchlist template schemas to be simple and extensible, so that
you can populate them with the relevant data. More information about using the
watchlist templates can be found here.
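The correlation flow described above can be sketched in Python. This is an illustrative simulation only: the field names (`UserPrincipalName`, `ResultType`) and the watchlist contents are assumptions, not the actual template schema, and a real Azure Sentinel implementation would express this as a KQL analytics rule.

```python
# Illustrative sketch: correlating sign-in events against a VIP-user
# watchlist. Field names and values are assumed for demonstration; in
# Azure Sentinel this correlation would be a KQL analytics rule.

vip_watchlist = {"ceo@contoso.com", "cfo@contoso.com"}  # assumed VIP Users entries

signin_events = [
    {"UserPrincipalName": "ceo@contoso.com", "ResultType": "50126"},  # failed sign-in
    {"UserPrincipalName": "analyst@contoso.com", "ResultType": "0"},  # success
]

# Surface failed sign-ins by watchlisted (VIP) accounts for priority triage.
flagged = [
    e for e in signin_events
    if e["UserPrincipalName"] in vip_watchlist and e["ResultType"] != "0"
]
print(flagged)
```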
What’s next?
Besides surfacing the watchlist template data inside the entity pages,
we’re working on embedding this information in the UEBA anomalies and in the
entity risk score, which is planned next. Understanding whether a user is a
VIP or terminated, or whether an asset is a high-value asset (HVA), is important
for providing both context and security value to the analyst during an investigation.
In collaboration with the Microsoft Threat Intelligence Center (MSTIC), we
are excited to announce Fusion
detection for ransomware is now publicly available!
These Fusion detections correlate alerts that are potentially associated
with ransomware activities that are observed at defense evasion and execution
stages during a specific timeframe. Once such ransomware activities are
detected and correlated by the Fusion machine learning model, a high
severity incident titled “Multiple alerts possibly related to Ransomware
activity detected” will be triggered in your Azure Sentinel workspace.
To help your analysts quickly understand a possible attack, Fusion
provides a complete picture of the suspicious activities that happened on
the same device/host by correlating signals from Microsoft products with
signals from the network and cloud. Supported data connectors include:
The screenshot below shows a Fusion incident with 22 alerts. It correlates
low severity signals that were detected around the same timeframe from the
network and the host to show a possible ransomware attack and the different
techniques used by attackers.
For more information, see Multiple alerts possibly related to Ransomware activity
detected.
A ransomware attack is a type of attack that uses
malicious software to make a network or system inaccessible for the purpose
of extortion – the ‘ransom’. There is no doubt that ransomware attacks have
become a top-priority threat for many organizations. A recent report released
by PurpleSec estimated the cost of ransomware attacks at $20 billion in 2020,
with downtime increasing by over 200% and costs 23x higher than in 2019.
Preventing such attacks in the first place would be the ideal solution, but
with the new trend of ‘ransomware as a service’ and human-operated ransomware,
the scope and sophistication of attacks are increasing – attackers are
using slow and stealthy techniques to compromise networks, which makes them
harder to detect in the first place.
Fusion detection for ransomware captures malicious activities at the defense
evasion and execution stages of an attack, giving security analysts an
opportunity to quickly understand the suspicious activities that happened
around the same timeframe on common entities, connect the dots, and take
immediate action to disrupt the attack. In ransomware attacks, time more than
anything else is the most important factor in preventing more machines, or the
entire network, from being compromised. The sooner such alerts are raised to
security analysts with the details on various attacker activities, the faster
the ransomware attacks can be contained and remediated. A detection like this
helps analysts by compiling attacker activity around the execution stage,
reducing MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond).
In the Incident 1
example, Fusion correlates alerts triggered within a short timeframe on the
same device, indicating a possible chain of attacks from how the attackers got
in through possible RDP brute-force attack, followed by the use of a ‘Cryptor’
malware and potential phishing activities using a malicious document associated
with the EUROPIUM activity group, to the detection of Petya and WannaCrypt
ransomware in the network.
Incident 1
Incident 2
below is another example of the Fusion ransomware detection that was confirmed
as true positive. This incident correlates alerts showing ransomware activities
at defense evasion and execution stages on the same host, along with additional
suspicious activities detected during the same timeframe to show you possible
techniques used by attackers to compromise the host.
Incident 2
In these Fusion incidents, the alerts related to ransomware/malware
detection might indicate that the ransomware/malware was stopped from
delivering its payload but it is prudent to check the machine for signs of
infection. Attackers may continue malicious activities after ransomware was
prevented – it is also important that you investigate the entire network to
understand the intrusion and identify other machines that might be impacted by
this attack.
After receiving Fusion detections for possible ransomware activities, we
recommend that you check with the machine owner whether this is intended
behavior. If the activity is unexpected, treat the machine as potentially
compromised and take immediate action to analyze the different techniques used
by attackers to compromise the host and to evade detection in this potential
ransomware attack.
Here are the recommended steps:
As you investigate and close the Fusion incidents, we encourage you to provide feedback
on whether this incident was a True Positive, Benign Positive, or a False
Positive, along with details in the comments. Your feedback is
critical to help Microsoft deliver the highest quality detections!
Recent reports show the extent to which information workers are
utilizing cloud apps in their everyday tasks. In an average
enterprise, more than 1,500 different cloud services are used, and fewer
than 12% of them are sanctioned or managed by IT teams. Considering that
more than 78 GB of data is uploaded monthly to risky apps, we can conclude that
most organizations are exposed to potential data loss or risks coming out of
these cloud applications.
Shadow IT usage of risky apps is usually mitigated by a strict approach of
blocking any usage of cloud apps that do not meet certain risk criteria.
This approach is already enabled today by Microsoft’s Cloud App Security Shadow IT Discovery capabilities,
with its native integration with Microsoft Defender for Endpoint and with
other 3rd-party network appliances.
But what about apps that are widely used by
employees and enable their productivity (especially in the work-from-home/COVID-19 era),
yet whose risk is not conclusive enough for a strict block?
To enable the delicate balance between employee productivity and the need
for risk and compliance awareness, organizations need to take a gradual
approach:
We are pleased to announce the public
preview of a new endpoint-based capability to manage and control Monitored
cloud applications, applying a soft-block experience for end users when they
access these apps. Users will have an option to bypass the block.
IT admins will be able to add a dedicated custom redirect link so users can get
more context on why they were blocked in the first place and what valid
alternatives they have for such apps in the organization.
Besides enabling the soft block experience, admins will be able to
continuously monitor these apps and understand how many of the users adhered to
the block and chose other alternatives, or, decided to bypass the block and
continue using the app – this will serve as a strong indication, org-wide,
of whether this app is necessary and should be considered for deeper management by
IT.
By adopting a more gradual and less strict approach to blocking cloud
applications, IT organizations can reduce their overhead of handling exception
requests while, in parallel, driving employee awareness.
In Cloud App Security, tag the targeted
app as Monitored.
The corresponding URL/Domains indicators will appear in the Microsoft
Defender for Endpoints security portal as a new URL/Domain indicator with
action type Warn.
When a user attempts to access a Monitored
app, they will be blocked by Windows Defender network protection, but will be
allowed to bypass the block or to get more details on why they were blocked via
redirection to a dedicated custom web page managed by the organization.
Over time, an IT admin can monitor the usage pattern of the app in Cloud App
Security’s discovered app page and monitor how many users have bypassed the
warning message.
After you have verified that you have all the integration prerequisites listed
in this article, follow the steps below to start warning on access
to Monitored apps with Cloud App Security and Microsoft Defender for Endpoint.
In Microsoft 365 Defender, go to Settings >
Endpoints > Advanced features and enable Microsoft Cloud App Security
integration and Custom network indicators.
In the Microsoft Cloud App Security portal, go
to Settings > Microsoft Defender for Endpoint:
With the massive volume of emails sent each day, coupled with the many methods that attackers use to blend in, identifying the unusual and malicious is more challenging than ever. An obscure Unicode character in a few emails is innocuous enough, but when a pattern of emails containing this obscure character accompanied by other HTML quirks, strange links, and phishing pages or malware is observed, it becomes an emerging attacker trend to investigate. We closely monitor these kinds of trends to gain insight into how best to protect customers.
This blog shines a light on techniques that are prominently used in many recent email-based attacks. We’ve chosen to highlight these techniques based on their observed impact to organizations, their relevance to active email campaigns, and because they are intentionally designed to be difficult to detect. They hide text from users, masquerade as the logos of trusted companies, and evade detection by using common web practices that are usually benign:
We’ve observed attackers employ these tricks to gain initial access to networks. Although the examples we present were primarily seen in credential theft attacks, any of these techniques can be easily adapted to deliver malware.
By spotting trends in the threat landscape, we can swiftly respond to potentially malicious behavior. We use the knowledge we gain from our investigations to improve customer security and build comprehensive protections. Through security solutions such as Microsoft Defender for Office 365 and the broader Microsoft 365 Defender, we deliver durable and comprehensive protection against the latest attacker trends.
We have observed attackers using HTML tables to imitate the logos and branding of trusted organizations. In one recent case, an attacker created a graphic resembling the Microsoft logo by using a 2×2 HTML table and CSS styling to closely match the official branding.
Spoofed logos created with HTML tables allow attackers to bypass brand impersonation protections. Malicious content arrives in users’ inboxes, appearing to recipients as if it were a legitimate message from the company. While Microsoft Defender for Office 365 data shows a decline in the usage of this technique over the last few months, we continue to monitor for new ways that attackers will use procedurally-generated graphics in attacks.
Figure 1. Tracking data for small 2×2 HTML tables
A graphic resembling a trusted organization’s official logo is procedurally generated from HTML and CSS markup. It’s a fileless way of impersonating a logo, because there are no image files for security solutions to detect. Instead, the graphic is constructed out of a specially styled HTML table that is embedded directly in the email.
Of course, inserting an HTML table into an email is not malicious on its own. The malicious pattern emerges when we view this technique in context with the attacker’s goals.
Two campaigns that we have been tracking since April 2021 sent targets emails that recreated the Microsoft logo. They impersonated messages from Office 365 and SharePoint. We observed the following email subjects:
Figure 2. Sample emails that use HTML code to embed a table designed to mimic the Microsoft logo
Upon extracting the HTML used in these emails, Microsoft analysts determined that the operators used the HTML table tag to create a 2×2 table resembling the Microsoft logo. The background color of each of the four cells corresponded to the colors of the quadrants of the official logo.
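A defender-side heuristic for this pattern can be sketched in a few lines. This is a hedged illustration, not the production detection: the regexes handle only simple inline styles, and the four hex values in the sample are the commonly cited Microsoft brand colors, used here purely as test data.

```python
# Illustrative heuristic: flag a 2x2 HTML table whose four cells each set
# a solid background color -- the "fileless logo" pattern. Real detection
# would use a proper HTML parser and far more robust logic.
import re

def looks_like_table_logo(html):
    """True for a two-row table with exactly four background-colored cells."""
    cells = re.findall(r'<td[^>]*background(?:-color)?:\s*[^;"\']+', html, re.I)
    rows = re.findall(r'<tr', html, re.I)
    return len(rows) == 2 and len(cells) == 4

sample = (
    '<table><tr><td style="background:#f25022"></td>'
    '<td style="background:#7fba00"></td></tr>'
    '<tr><td style="background:#00a4ef"></td>'
    '<td style="background:#ffb900"></td></tr></table>'
)
print(looks_like_table_logo(sample))
```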
Figure 3. Page source of the isolated HTML mimicking the Microsoft logo
HTML and CSS allow for colors to be referenced in several different ways. Many colors can be referenced in code via English language color names, such as “red” or “green”. Colors can also be represented using six-digit hexadecimal values (e.g., #ffffff for white and #000000 for black), or by sets of three numbers, with each number signifying the amount of red, green, or blue (RGB) to combine. These methods allow for greater precision and variance, as the designer can tweak the numbers or values to customize the color’s appearance.
Figure 4. Color values used to replicate the Microsoft logo
As seen in the above screenshot, attackers often obscure the color references to the Microsoft brand by using color names, hexadecimal, and RGB to color in the table. By switching up the method they use to reference the color, or slightly changing the color values, the attacker can further evade detection by increasing variance between emails.
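One defensive counter to this variance is normalizing all three spellings to a single canonical form before matching, as in this minimal sketch (the named-color table is deliberately tiny; a real one would cover the full CSS name list):

```python
# Illustrative sketch: normalize the three ways HTML/CSS can express a
# color (name, hex, rgb()) so variants of the same color match one rule.
import re

NAMED = {"red": (255, 0, 0), "green": (0, 128, 0), "white": (255, 255, 255)}

def to_rgb(value):
    """Return an (r, g, b) tuple for a name, #rrggbb, or rgb(r, g, b) string."""
    value = value.strip().lower()
    if value in NAMED:
        return NAMED[value]
    if value.startswith("#") and len(value) == 7:
        return tuple(int(value[i:i + 2], 16) for i in (1, 3, 5))
    m = re.match(r"rgb\((\d+),\s*(\d+),\s*(\d+)\)", value)
    if m:
        return tuple(int(g) for g in m.groups())
    raise ValueError(f"unrecognized color: {value}")

# All three spellings collapse to the same tuple:
assert to_rgb("#ff0000") == to_rgb("rgb(255, 0, 0)") == to_rgb("red")
```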
In several observed campaigns, attackers inserted invisible Unicode characters to break up keywords in an email body or subject line in an attempt to bypass detection and automated security analysis. Certain characters in Unicode indicate extremely narrow areas of whitespace, or are not glyphs at all and are not intended to render on screen.
Some invisible Unicode characters that we have observed being used maliciously include:
Both of these are control characters that affect how other characters are formatted. They are not glyphs and would not even be visible to readers, in most cases. As seen in the following graph, the use of the soft hyphen and word joiner characters has seen a steady increase over time. These invisible characters are not inherently malicious, but seeing an otherwise unexplained rise of their use in emails indicates a potential shift in attacker techniques.
Figure 5. Tracking data for the invisible character obfuscation technique
When a recipient views a malicious email containing invisible Unicode characters, the text content may appear indistinguishable from any other email. Although not visible to readers, the extra characters are still included in the body of the email and are “visible” to filters or other security mechanisms. If attackers insert extra, invisible characters into a word they don’t want security products to “see,” the word might be treated as qualitatively different from the same word without the extra characters. This allows the keyword to evade detection even if filters are set to catch the visible part of the text.
Invisible characters do have legitimate uses. They are, for the most part, intended for formatting purposes: for instance, to indicate where to split a word when the whole word can’t fit on a single line. However, an unintended consequence of these characters not displaying like ordinary text is that malicious email campaign operators can insert the characters to evade security.
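The standard defensive counter is to strip these characters before keyword matching, as in this small sketch (the two code points match those discussed above; a real filter would cover a broader set of invisible characters):

```python
# Illustrative sketch: strip invisible formatting characters before
# keyword matching, so an obfuscated "Password" still matches filters.
INVISIBLE = {"\u00ad", "\u2060"}  # soft hyphen, word joiner

def normalize(text):
    """Remove invisible formatting characters from the text."""
    return "".join(ch for ch in text if ch not in INVISIBLE)

obfuscated = "Keep current P\u00ada\u2060ss\u00adword"
assert "Password" not in obfuscated          # a naive filter misses it
assert "Password" in normalize(obfuscated)   # the normalized filter catches it
```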
The animated GIF below shows how the soft hyphen characters are typically used in a malicious email. The soft hyphen is placed between each letter in the red heading to break up several key words. It’s worth noting that the soft hyphens are completely invisible to the reader until the text window is narrowed and the heading is forced to break across multiple lines.
Figure 6. Animation showing the use of the invisible soft hyphen characters
In the following example, a phishing email has had invisible characters inserted into the email body: specifically, in the “Keep current Password” text that links the victim to a phishing page.
Figure 7. Microsoft Office 365 phishing email using invisible characters to obfuscate the URL text.
The email appears “normal” to the recipient; however, attackers have slyly added invisible characters within the text “Keep current Password.” Clicking the URL directs the user to a phishing page impersonating the Microsoft single sign-on (SSO) page.
In some campaigns, we have seen the invisible characters applied to every word, especially any word referencing Microsoft or Microsoft products and services.
This technique involves inserting hidden words with a font size of zero into the body of an email. It is intended to throw off machine learning detections by adding irrelevant sections of text to the HTML source making up the email body. Attackers can successfully obfuscate keywords and evade detection because recipients can’t see the inserted text.
Microsoft Defender for Office 365 has been blocking malicious emails with zero-point font obfuscation for many years now. However, we continue to observe its usage regularly.
Figure 8. Tracking data for emails containing zero-point fonts experienced surges in June and July 2021
Similar to how there are many ways to represent colors in HTML and CSS, there are also many ways to indicate font size. We have observed attackers using the following styling to insert hidden text via this technique:
Being able to add zero-width text to a page is a quirk of HTML and CSS. It is sometimes used legitimately for adding meta data to an email or to adjust whitespace on a page. Attackers repurpose this quirk to break up words and phrases a defender might want to track, whether to raise an alert or block the content entirely. As with the invisible Unicode character technique, certain kinds of security solutions might treat text containing these extra characters as distinct from the same text without the zero-width characters. This allows the visible keyword text to slip past security.
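A simple defensive counter is to discard zero-font spans before scanning the visible text, as in this hedged sketch (the regex handles only the simplest inline-style case; a robust filter would parse the HTML properly):

```python
# Illustrative sketch: remove spans styled with font-size 0 before content
# filtering. Regex-on-HTML is fragile; this handles only inline styles.
import re

ZERO_FONT = re.compile(
    r'<span[^>]*font-size:\s*0(?:px|pt|em)?[^>]*>.*?</span>',
    re.IGNORECASE | re.DOTALL,
)

def strip_zero_font(html):
    """Drop spans whose inline style sets a zero font size."""
    return ZERO_FONT.sub("", html)

body = 'Pass<span style="font-size:0px">xyz</span>word reset required'
assert "Password" not in body                    # hidden letters break the keyword
assert "Password" in strip_zero_font(body)       # stripping restores it
```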
In a July 2021 phishing campaign blocked by Microsoft Defender for Office 365, the attacker used a voicemail lure to entice recipients into opening an email attachment. Hidden, zero-width letters were added to break up keywords that might otherwise have been caught by a content filter. The following screenshot shows how the email appeared to targeted users.
Figure 9. Sample email that uses the zero-point font technique
Those with sharp eyes might be able to spot the awkward spaces where the attacker inserted letters that are fully visible only within the HTML source code. In this campaign, the obfuscation technique was also used in the malicious email attachment, to evade file-hash based detections.
Figure 10. The HTML code of the email body, exposing the use of the zero-point font technique
Victim-specific URI is a way of transmitting information about the target and creating dynamic content based upon it. In this technique, a custom URI crafted by the attacker passes information about the target to an attacker-controlled website. This aids spear-phishing by personalizing the content seen by the intended victim. Attackers often use it to create legitimate-seeming pages that impersonate the single sign-on (SSO) experience.
The following graph shows cyclic surges in email content, specifically links that have an email address included as part of the URI. Since custom URIs are such a common web design practice, their usage always returns to a steady baseline in between peaks. The surges appear to be related to malicious activity, since attackers will often send out large numbers of spam emails over the course of a campaign.
Figure 11. Tracking data for emails containing URLs with email address in the PHP parameter
In a campaign Microsoft analysts observed in early May 2021, operators generated tens of thousands of subdomains from Google’s Appspot, creating unique phishing sites and victim identifiable URIs for each recipient. The technique allowed the operators to host seemingly legitimate Microsoft-themed phishing sites on third-party infrastructure.
The attacker sends the target an email, and within the body of the email is a link that includes special parameters as part of the web address, or URI. The custom URI parameters contain information about the target. These parameters often utilize PHP, as PHP is a programming language frequently used to build websites with dynamic content.
Details such as the target’s email address, alias, or domain are sent via the URI to an attacker-controlled web page when the user visits the link. The attacker’s web page pulls the details from the parameters and uses them to present the target with personalized content. This can help the attacker make malicious websites more convincing, especially if they are trying to mimic a user logon page, as the target will be greeted by their own account name.
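The parameter-passing mechanism itself is plain query-string parsing, as this sketch shows; the domain and parameter name are illustrative, not from an observed campaign:

```python
# Illustrative sketch: how a recipient identifier travels in a URI's
# query string. The hostname and "email" parameter are assumed examples.
from urllib.parse import urlparse, parse_qs

url = "https://phish.example.appspot.com/login.php?email=victim@contoso.com"
params = parse_qs(urlparse(url).query)

# The attacker-controlled page would read this value to render a
# personalized, SSO-looking logon prompt for the victim.
print(params["email"])
```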
Custom URIs containing user-specific parameters are not always, or even often, malicious. They are commonly used by all kinds of web developers to transmit pertinent information about a request. A query to a typical search engine will contain numerous parameters concerning the nature of the search as well as information about the user, so that the search engine can provide users with tailored results.
However, in the victim identifiable URI technique, attackers repurpose a common web design practice to malicious ends. The tailored results seen by the target are intended to trick them into handing over sensitive information to an attacker.
In the Compact phishing campaign described by WMC Global and tracked by Microsoft, this technique allowed the operators to host Microsoft-themed phishing sites on any cloud infrastructure, including third-party platforms such as Google’s Appspot. Microsoft’s own research into the campaign in May noted that not only tens of thousands of individual sites were created, but that URIs were crafted for each recipient, and the recipient’s email address was included as a parameter in the URI.
Newer variants of the May campaign started to include links in the email, which routed users through a compromised website, to ultimately redirect them to the Appspot-hosted phishing page. Each hyperlink in the email template used in this version of the campaign was structured to be unique to the recipient.
The recipient-specific information passed along in the URI was used to render their email account name on a custom phishing page, attempting to mimic the Microsoft Single Sign On (SSO) experience. Once on the phishing page, the user was prompted to enter their Microsoft account credentials. Entering that information would send it to the attacker.
As the phishing techniques we discussed in this blog show, attackers use common or standard aspects of emails to hide in plain sight and make attacks very difficult to detect or block. With our trend tracking in place, we can make sense of suspicious patterns, and notice repeated combinations of techniques that are highly likely to indicate an attack. This enables us to ensure we protect customers from the latest evasive email campaigns through Microsoft Defender for Office 365. We train machine learning models to keep an eye on activity from potentially malicious domains or IP addresses. Knowing what to look out for, we can rule out false positives and focus on the bad actors.
This has already paid off. Microsoft Defender for Office 365 detected and protected customers from sophisticated phishing campaigns, including the Compact campaign. We also employed our knowledge of prevalent trends to hunt for a ransomware campaign that might have otherwise escaped notice. We swiftly opened an investigation to protect customers from what seemed at first like a set of innocuous emails.
Trend tracking helps us to expand our understanding about prevalent attacker tactics and to improve existing protections. We’ve already set up rules to detect the techniques described in this blog. Our understanding of the threat landscape has led to better response times to critical threats. Meanwhile, deep within Microsoft Defender for Office 365, rules for raising alerts are weighted so that detecting a preponderance of suspicious techniques triggers a response, while legitimate emails are allowed to travel to their intended inboxes.
Threat intelligence also drives what new features are developed, and which rules are added. In this way, generalized trend tracking leads to concrete results. Microsoft is committed to using our knowledge of the threat landscape to continue to track trends, build better protections for our products, and share intelligence with the greater online community.
CISA Provides Recommendations for Protecting Information from Ransomware-Caused Data Breaches
08/18/2021 12:30 AM EDT
Original release date: August 18, 2021
CISA has released the fact sheet Protecting
Sensitive and Personal Information from Ransomware-Caused Data Breaches to
address the increase in malicious cyber actors using ransomware to exfiltrate
data and then threatening to sell or leak the exfiltrated data if the victim
does not pay the ransom. These data breaches, often involving sensitive or
personal information, can cause financial loss to the victim organization and
erode customer trust.
The fact sheet provides information for organizations to use in preventing
and responding to ransomware-caused data breaches. CISA encourages
organizations to adopt a heightened state of awareness and implement the
recommendations listed in this fact sheet to reduce their risk to ransomware
and protect sensitive and personal information. Review StopRansomware.gov for
additional ransomware resources.