Section 4
of the President’s Executive Order (EO) on “Improving the Nation’s Cybersecurity” (EO 14028), issued
on May 12, 2021, charges NIST, in coordination with the Federal
Trade Commission (FTC) and other agencies, to initiate pilot programs for
cybersecurity labeling. These labeling programs are intended to educate the
public on the security capabilities of software development practices.
To inform this effort, Sec. 4 (u)
of the EO directs NIST to “…identify secure software development practices or
criteria for a consumer software labeling program.” Furthermore, the identified
criteria “…shall reflect a baseline level of security practices, and if
practicable, shall reflect increasingly comprehensive levels of testing and
assessment that a product may have undergone.” Sec. 4 (u)
also states that “…NIST shall examine all relevant information, labeling, and
incentive programs, employ best practices, and identify, modify, or develop a
recommended label or, if practicable, a tiered software security rating system.
This review shall focus on ease of use for consumers and a determination of
what measures can be taken to maximize participation.”
Today, NIST has released for public comment a document that
advances these tasks: Draft Baseline Criteria for Consumer Software Cybersecurity Labeling.
This draft document addresses the need to develop appropriate cybersecurity
criteria for consumer software—and it informs the development and use of a
label for consumer software which will improve consumers’ awareness,
information, and ability to make purchasing decisions (while taking
cybersecurity considerations into account). This document was developed after
much input from a recent NIST workshop, position papers submitted to NIST,
additional extensive research, and many discussions with experts and
organizations from the public and private sectors.
We are seeking comments on all aspects of the criteria contained
in the draft document (more
details can be found in the ‘note to reviewers’ section of the draft document). In accordance with the EO, NIST plans to produce a final version of
these criteria by February 6, 2022.
Microsoft recently mitigated an information disclosure issue, CVE-2021-42306, to prevent private key data from being stored by some Azure services in the keyCredentials property of an Azure Active Directory (Azure AD) Application and/or Service Principal, and to prevent reading of private key data previously stored in the keyCredentials property.
The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata. The property is designed to accept a certificate with public key data for use in authentication, but certificates with private key data could have also been incorrectly stored in the property. Access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal. Some Microsoft services incorrectly stored private key data in the keyCredentials property while creating applications on behalf of their customers. We have conducted an investigation and have found no evidence of malicious access to this data.
Microsoft Azure services affected by this issue have mitigated it by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated it by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs. As a result, clear text private key material in the keyCredentials property is inaccessible, mitigating the risks associated with storage of this material in the property.
As a precautionary measure, Microsoft is recommending customers using these services take action as described in “Affected products/services,” below. We are also recommending that customers who suspect private key data may have been added to credentials for additional Azure AD applications or Service Principals in their environments follow this guidance.
Affected products/services
Microsoft has identified the following platforms/services that stored their private keys in the public property. We have notified customers with impacted Azure AD applications created by these services via Azure Service Health Notifications to provide remediation guidance specific to the services they use.
Azure Automation deployed an update to the service to prevent private key data in clear text from being uploaded to Azure AD applications. Run-As accounts created or renewed after 10/15/2021 are not impacted and do not require further action.
Automation Run-As accounts created with an Azure Automation self-signed certificate between 10/15/2020 and 10/15/2021 that have not been renewed are impacted. Separately, customers who bring their own certificates could be affected, regardless of the renewal date of the certificate. To identify and remediate impacted Azure AD applications associated with impacted Automation Run-As accounts, please navigate to this GitHub repo. In addition, Azure Automation supports Managed Identities (GA announced in October 2021). Migrating from Run-As accounts to Managed Identities will mitigate this issue. Please follow the guidance here to migrate.
Azure Migrate service creates Azure AD applications to enable Azure Migrate appliances to communicate with the service’s endpoints.
Azure Migrate deployed an update to prevent private key data in clear text from being uploaded to Azure AD applications. Azure Migrate appliances that were registered after 11/02/2021 with Appliance configuration manager version 6.1.220.1 or above are not impacted and do not require further action.
Azure Migrate appliances registered prior to 11/02/2021 and/or appliances registered after 11/02/2021 where auto-update was disabled could be affected by this issue. To identify and remediate any impacted Azure AD applications associated with Azure Migrate appliances, please navigate to this link.
Azure Site Recovery (ASR) creates Azure AD applications to communicate with the ASR service endpoints.
Azure Site Recovery deployed an update to prevent private key data from being uploaded to Azure AD applications. Customers using Azure Site Recovery’s preview experience “VMware to Azure Disaster Recovery” after 11/01/2021 are not impacted and do not require further action.
Customers who have deployed and registered the preview version of VMware to Azure DR experience with ASR before 11/01/2021 could be affected. To identify and remediate the impacted AAD Apps associated with Azure Site Recovery appliances, please navigate to this link.
Azure AD applications and Service Principals [1]
Microsoft has blocked reading private key data as of 10/30/2021.
Follow the guidance available at aad-app-credential-remediation-guide to assess if your application key credentials need to be rotated. The guidance walks through the assessment steps to identify if private key information was stored in keyCredentials and provides remediation options for credential rotation.
[1] This issue only affects Azure AD Applications and Service Principals where private key material in clear text was added to a keyCredential. Microsoft recommends taking precautionary steps to identify any additional instances of this issue in applications where you manage credentials and take remediation steps if impact is found.
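The linked remediation guide performs the real assessment; purely as an illustration of the kind of check involved, the idea boils down to base64-decoding each keyCredentials entry and looking for private-key material where only a public certificate should appear. The helper below is a hypothetical sketch (not Microsoft's tooling) over the keyCredentials shape returned by the Microsoft Graph applications API:

```python
import base64

# PEM markers that indicate private key material rather than a bare
# public certificate (illustrative list, not exhaustive).
PRIVATE_MARKERS = (b"PRIVATE KEY", b"RSA PRIVATE KEY")

def flag_private_key_credentials(key_credentials):
    """Return keyIds of credentials whose decoded value contains a PEM
    private-key marker. `key_credentials` mirrors the Graph API shape:
    a list of dicts with 'keyId' and a base64-encoded 'key' field."""
    suspicious = []
    for cred in key_credentials:
        blob = cred.get("key")
        if not blob:
            continue
        try:
            decoded = base64.b64decode(blob)
        except (ValueError, TypeError):
            continue  # not valid base64; skip rather than fail
        if any(marker in decoded for marker in PRIVATE_MARKERS):
            suspicious.append(cred["keyId"])
    return suspicious
```

Any keyId flagged this way would then be a candidate for the credential rotation steps described in the official guidance.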
What else can I do to audit and investigate applications for unexpected use?
As a best practice, we recommend auditing and investigating applications for unexpected use:
Audit the permissions that have been granted to the impacted entities (e.g., subscription access, roles, OAuth permissions, etc.) to assess impact in case the credentials were exposed. Refer to the Application permission section in the security operations guide.
If you rotated the credential for your application/service principal, we suggest investigating for unexpected use of the impacted entity especially if it has high privilege permissions to sensitive resources. Additionally, review the security guidance on least privilege access for apps to ensure your applications are configured with least privilege access.
Customers who have Microsoft Sentinel deployed in their environment can leverage notebook/playbook/hunting queries to look for potentially malicious activities. Look for more guidance here.
Part of any robust security posture is working with researchers to help find vulnerabilities, so we can fix any findings before they are misused. We want to thank Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe.
Patching is a critical component of preventive maintenance for
computing technologies—a cost of doing business, and a necessary part of what
organizations need to do in order to achieve their missions. However, keeping
software up-to-date with patches remains a problem for most organizations.
Draft SP 800-40 Revision 4 makes recommendations for creating an
enterprise strategy to simplify and operationalize patching while also
improving risk reduction. Draft SP 800-40 Revision 4 will replace SP 800-40
Revision 3, Guide to
Enterprise Patch Management Technologies, which was released in
2013.
Draft SP 1800-31 describes an example solution that demonstrates
how tools can be used to implement the inventory and patching capabilities
organizations need for routine and emergency patching situations, as well as
implementing workarounds and other alternatives to patching.
We Want to Hear from You!
Review the draft publications and submit comments online on or
before January 10, 2022. You can also contact us at cyberhygiene@nist.gov. We value and welcome
your input and look forward to your comments.
Find
out how to secure your apps and data in your Azure Virtual Desktop deployment.
Read Azure Virtual
Desktop Handbook: Security Fundamentals for technical, hands-on
guidance to help you protect your virtual desktops with built-in Azure security
features and other Microsoft security tools.
Download
this handbook to:
Familiarize yourself with Azure
Virtual Desktop architecture.
Understand which Microsoft
tools and Azure security services are automatically configured and which
you’ll need to configure yourself.
Implement the appropriate
security measures for your organization’s data, apps, user identities,
session hosts, and network access.
Learn best practices for using Azure Security
Center and improving your Azure Secure Score.
The use of small-scale distributed energy resources (DERs)
is growing rapidly and transforming the power grid. In fact, a
distribution utility may need to remotely communicate with thousands of
DERs and other grid-edge devices—many of which are not owned by them.
Any attack that can deny, disrupt, or tamper with DER
communications could prevent a utility from performing necessary
control actions and could diminish grid resiliency.
In this draft cybersecurity practice guide, the NCCoE
applies standards, best practices, and commercially available
technology to protect the digital communication, data, and control of
cyber-physical grid-edge devices. The guide demonstrates an example
solution for monitoring and detecting unusual behavior of connected
industrial internet of things (IIoT) devices and building a
comprehensive audit trail of trusted IIoT data flows.
The public comment period is open through October 20,
2021. See the publication
details for a copy of the document and instructions for
submitting comments.
Pre-Draft Call for Comments | Incorporating Privacy in
Awareness & Training
To
help organizations incorporate privacy into their security awareness and
training regimes, NIST plans to revise SP 800-50, Building
an Information Technology Security Awareness and Training Program.
In the nearly two decades since SP 800-50 was published in 2003, cybersecurity
awareness and training resources, methodologies, and requirements have evolved
considerably—and new guidance to inform this work has come from Congress and
the Office of Management and Budget.
Prior
to drafting the update, NIST is seeking public
comment on several topics, including the potential consolidation of
companion document SP 800-16, Information
Technology Security Training Requirements: A Role- and Performance-Based
Model, into the revised SP 800-50. The proposed title for
SP 800-50 Revision 1 is Building a Cybersecurity and Privacy Awareness
and Training Program. Comments are due by November 5, 2021.
Your
public comments will be used to influence future drafts, including an Initial
Public Draft of the update which is scheduled to be released in early 2022 as
SP 800-50 Revision 1.
Hi, my name is Raymond
Roethof, and I am a Microsoft Security enthusiast with over fifteen years
of experience within the Microsoft landscape. My focus has been Microsoft
Security, with three of the last six years spent as a Red Teamer. In this blog
post, I will go through the challenges an attacker or Red Teamer faces when
Microsoft Defender for Identity is in place.
Many organizations are going through a digital transformation driven by the
increasing use of cloud services. Understanding the current state of those
cloud services is essential, as maintaining that state is a shared
responsibility between the company and its cloud provider.
Many Red Teamers and attackers use the on-premises environment as a stepping
stone to the cloud. So, a company must understand the comprehensive set of
security controls and capabilities available in Microsoft Azure, Microsoft 365,
and on-premises. Active Directory can be a source for lateral movement and an
excellent initial attack vector due to the high-value information it holds.
Microsoft Defender for Identity is a cloud-based security
solution that leverages your on-premises Active Directory signals to identify,
detect, and investigate advanced threats, compromised identities, and malicious
insider actions directed at your organization. Defender for Identity
also protects Active Directory Federation Services (AD FS) in your
environment by detecting advanced threats and providing visibility into
authentication events generated by AD FS.
The default Active Directory authentication protocol is Kerberos, an
authentication protocol based on tickets, and is known for being the target
method of many attacks. Kerberos is an authentication protocol developed by MIT
and adopted by Microsoft since Windows 2000. Kerberos can also be complicated
and as a result, hard to secure.
This blog post will go through attacking Active Directory
as a Red Teamer and having Defender for Identity in place to protect this
high-value information. What do I have to consider before I make my next move?
Let’s find out how Defender for Identity makes my job so difficult.
Attack Kill Chain
As a Red Teamer or an attacker, you want to reach your goal as quickly as
possible, preferably without noticing. The purpose and time it takes to perform
the attack differs in every scenario. Attackers are mainly financially driven,
whereas Red Teamers have a specific pre-defined objective to reach.
Most of the attacks require multiple steps to reach their goal. Red Teamers
or attackers use some form of an attack kill chain as a process.
(Figure 1 – an
example of an attack kill chain process)
Note: With the digital transformation to the cloud
and the complexity of most attacks, a one-size-fits-all kill chain is not
feasible anymore, but the Cyber Attack Kill Chain is a good indication of how a Red
Teamer or attacker performs an attack. The graphic shown above is more focused
on compromising an endpoint, for example.
Reconnaissance
of Active Directory
Reconnaissance is a critical and consistent step in any kill chain. Most
information found is likely used during an attack at a later stage. Information
like server names, IP addresses, operating systems, forest architecture,
trusts, service principal names (SPNs), groups and memberships, access control
lists, and well-known security misconfigurations is probably part of every
reconnaissance phase within Active Directory.
The challenge as a Red Teamer (or an attacker – assume I’m
referring to both throughout this blog) starts with Defender for
Identity being enabled at the reconnaissance phase.
A Red Teamer needs to have a valid set of credentials, a
hash, or any form of authentication to communicate with Active Directory.
Attacks like phishing e-mails can contain a malicious payload that runs under
the user context. This way, a Red Teamer or attacker can perform an attack as
an authenticated user. Without any authentication, a Red Teamer uses attacks
like AS-REP roasting and password sprays. If you are a Red
Teamer or an attacker, Defender for Identity detects these kinds of attacks
and alerts in near real time.
Lateral Movement in Active Directory
The ultimate objective for a Red Teamer is data. For most organizations,
data is one of the most valuable assets. Getting access to all data at the
initial entry is rare for a Red Teamer or attacker, so it is common to see
lateral movement during an attack.
Let us say a Red Teamer gets a foothold, either remotely or on the network,
within the environment without being noticed as an authenticated user. The next
step would be to seek identities with higher privileges or an identity to
access high-value assets, like data.
Attacks like Kerberoasting are also common since service accounts often
have high privileges to services that contain high-value assets.
Kerberoasting is another attack that Defender for Identity detects, and
since version 2.158.14362 it also detects newer attacks like PetitPotam.
Extended
Detection and Response
With Extended Detection
and Response (XDR), Microsoft delivers a new approach to provide intelligent,
automated, and integrated security across domains to help defenders connect
seemingly disparate alerts and get ahead of attackers. Due to signal sharing
between Microsoft Defender for Endpoint and Defender for Identity, an
indicator shows if the endpoint contains an alert within Defender for Endpoint.
An analyst can isolate the endpoint within seconds, and as a Red Teamer, you
will need to find another entry point to continue your journey. The analyst is
also probably more alert and now monitoring the environment even more closely
as a result.
(Figure
2 – an illustration of an attacker navigating a minefield)
Every step we take next as a Red Teamer or
an attacker is like walking in a minefield.
From
on-premises to the cloud
Although many
organizations go through digital transformation by increasing their use of
cloud services, attackers can use the on-premises environment as a stepping
stone to the cloud. One of my blog posts describes creating a forged security token to
authenticate to Azure AD using a private key from the AD FS server.
Unfortunately for me, Defender for Identity now detects this method of
attack as well:
(Figure 3 – A
screenshot showing the alert status of an AD FS DKM key read with supporting
evidence)
Machine Learning
for Access Control Policy Verification: NISTIR 8360 Published
Access control policy verification ensures that there are
no faults within the policy that leak or block access privileges. As a
software test, access control policy verification relies on methods
such as model proof, data structure, system simulation, and test oracle
to verify that the policy logic functions as expected. However, these
methods have capability and performance issues related to inaccuracy
and complexity limited by applied technologies. For instance, model
proof, test oracle, and data structure methods initially assume that
the policy under verification is faultless unless the policy model
cannot hold for test cases. Thus, the challenge of the method is to
compose test cases that can comprehensively discover all faults.
Alternatively, a system simulation method requires translating the
policy to a simulated system. The translation between systems may be
difficult or impractical to implement if the policy logic is
complicated or the number of policy rules is large.
NISTIR 8360, Machine Learning for Access Control Policy Verification,
proposes an efficient and straightforward method for access control
policy verification by applying a classification algorithm of machine
learning, which does not require comprehensive test cases, oracle, or
system translation but rather checks the logic of policy rules
directly, making it more efficient and feasible compared to traditional
methods. Ultimately, three general applications are provided:
enhancement of existing verification methods, verification of access
control policies with numerical attributes, and policy enforcement that
can be supported by the proposed machine learning policy verification
method.
This training program includes 16 modules. The post includes a presentation for each module, recorded where available (recordings for the rest are in progress), plus supporting information: relevant product documentation, blog posts, and other resources.
The modules listed below are split into five groups following the life cycle of a SOC:
Part 1: Overview
– Module 0: Other learning and support options
– Module 1: Get started with Azure Sentinel
– Module 2: How is Azure Sentinel used?
Part 2: Architecting & Deploying
– Module 3: Workspace and tenant architecture
– Module 4: Data collection
– Module 5: Log Management
– Module 6: Enrichment: TI, Watchlists, and more
– Module X: Migration
– Module Z: ASIM and Normalization
Part 3: Creating Content
– Module 7: The Kusto Query Language (KQL)
– Module 8: Analytics
– Module 9: SOAR
– Module 10: Workbooks, reporting, and visualization
– Module Y: Notebooks
– Module 11: Use cases and solutions
Part 4: Operating
– Module 12: A day in a SOC analyst’s life, incident management, and investigation
– Module 13: Hunting
– Module 14: User and Entity Behavior Analytics (UEBA)
– Module 15: Monitoring Azure Sentinel’s health
Part 5: Advanced Topics
– Module 16: Extending and Integrating using Azure Sentinel APIs
– Module 17: Bring your own ML
Part 1: Overview
Module 0: Other learning and support options
The Ninja training is a level 400 training. If you don’t want to go as deep or have a specific issue, other resources might be more suitable:
Already did the Ninja Training? Check what’s new in the Ninja training.
While extensive, the Ninja training has to follow a script and cannot expand on every topic. The FAQ companion to the Ninja Training tries to close this gap.
You can now certify with the new SC-200 certification (Microsoft Security Operations Analyst), which covers Azure Sentinel. The SC-200 is not a Ninja Training certification, but since the exam is largely based on Ninja Training materials, the training is a good learning path for the certification. You may also want to consider the SC-900 certification (Microsoft Security, Compliance, and Identity Fundamentals) for a broader, higher-level view of the Microsoft Security suite.
Premier customer? You might want the on-site (or remote these days) four-day Azure Sentinel Fundamentals Workshop. Contact your Customer Success Account Manager to arrange one.
Already a Ninja? Just keep track of what’s new, or join our Private Preview program for an even earlier glimpse.
Think you're a true Sentinel Ninja? Take the knowledge check and find out. If you pass the knowledge check with a score of over 80% you can request a certificate to prove your ninja skills!
1. Take the knowledge check here. 2. If you score 80% or more in the knowledge check, request your participation certificate here. If you achieved less than 80%, please review the questions you got wrong, study more, and take the assessment again.
Microsoft Azure Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response (read more).
Thousands of organizations and service providers are using Azure Sentinel. As is usual with security products, most do not go public about it, but some do.
Many users use Azure Sentinel as their primary SIEM. Most of the modules in this course cover this use case. In this module, we present a few additional ways to use Azure Sentinel.
As part of the Microsoft Security stack
Use Sentinel, Azure Defender, Microsoft 365 Defender in tandem to protect your Microsoft workloads, including Windows, Azure, and Office:
Read and watch how such a setup helps detect and respond to a WebShell attack: Blog Post, Video demo.
To monitor your multi-cloud workloads
The cloud is (still) new and often not monitored as extensively as on-prem workloads. Read this presentation to learn how Azure Sentinel can help you close the cloud monitoring gap across your clouds.
Side by side with your existing SIEM
Either for a transition period or a longer term, if you are using Azure Sentinel for your cloud workloads, you may be using Azure Sentinel alongside your existing SIEM. You might also be using both with a ticketing system such as Service Now.
For more information on migrating from another SIEM to Azure Sentinel, watch the migration webinar: MP4, YouTube, Deck.
There are three common scenarios for side by side deployment:
Over time, as Azure Sentinel covers more workloads, it is typical to reverse that and send alerts from your on-prem SIEM to Azure Sentinel. To do that:
You can also send the alerts from Azure Sentinel to your 3rd party SIEM or ticketing system using the Graph Security API, which is simpler but would not enable sending additional data.
To start your journey as an MSSP, you should read the Azure Sentinel Technical Playbooks for MSSPs. More information about MSSP support is included in the next module, cloud architecture and multi-tenant support.
Part 2: Architecting & Deploying
While the previous section offers options to start using Azure Sentinel in a matter of minutes, before you start a production deployment, you need to plan. This section walks you through the areas that you need to consider when architecting your solution, as well as provides guidelines on how to implement your design:
An Azure Sentinel instance is called a workspace. The workspace is the same as a Log Analytics workspace and supports any Log Analytics capability. You can think of Sentinel as a solution that adds SIEM features on top of a Log Analytics workspace.
Multiple workspaces are often necessary and can act together as a single Azure Sentinel system. A special use case is providing service using Azure Sentinel, for example, by an MSSP (Managed Security Service Provider) or by a Global SOC in a large organization.
To deploy Azure Sentinel and manage content efficiently across multiple workspaces, you will want to manage Sentinel as code using CI/CD technology. This is, in general, a recommended best practice for Azure Sentinel:
The foundation of a SIEM is collecting telemetry: events, alerts, and contextual enrichment information such as Threat Intelligence, vulnerability data, and asset information. You can find a list of sources you can connect here:
Documentation of the connectors which are part of the connectors gallery.
The Grand List of sources you can connect to Azure Sentinel, whether part of the gallery or not (note: this list is no longer being updated).
How you connect each source falls into several categories or source types. Each source type has a distinct setup effort but once deployed, it serves all sources of that type. The Grand List specifies for each source what its type is. To learn more about those categories, watch the Webinar (includes Module 3): YouTube, MP4, Deck.
The types are:
Built-in service-to-service connectors allow Azure Sentinel to connect directly to cloud services such as Office 365 or AWS CloudTrail. Some of the service-to-service connectors, such as AAD, utilize Azure diagnostics behind the scenes.
Direct refers to sources that natively know how to send data to Azure Sentinel or Log Analytics. These include Azure services or other Microsoft solutions that support sending telemetry (often referred to as “diagnostics”) to Log Analytics, and 3rd party sources that use the ingestion API to write to Log Analytics or Azure Sentinel directly. The Microsoft direct sources are listed in addition to the Grand List and in the blog post “Collecting logs from Microsoft Services and Applications.”
The Log Forwarder is a VM that enables collecting Syslog and CEF events from remote systems. If a source is listed in the Grand List as CEF or Syslog, you will use the Log Forwarder to collect from it. Learn more about the Log Forwarder in this webinar (plus a bonus: learn how to use it to filter events): YouTube, MP4, Deck.
The Log Analytics agent collects information from Windows or Linux hosts. In addition to OS events such as Windows Events, the agent can collect events stored in files. Learn more about the Log Analytics agent in this blog: collecting telemetry from on-prem and IaaS servers using the Log Analytics agent. The Azure Monitor Agent is a new-generation agent, currently in preview, that offers advantages such as Windows event filtering. The Log Analytics agent will be deprecated on 31 August 2024, so if you have not yet deployed it, consider whether you can start with the Azure Monitor Agent instead (see next bullet point).
The Azure Monitor Agent (AMA) is the replacement for the Log Analytics agent. The Azure Monitor Agent introduces several new capabilities not available in the Log Analytics agent, such as filtering, scoping, and multi-homing. At the time of writing this update, AMA isn’t yet at parity with the Log Analytics agent, although this will change over time. Consider whether the features you need for your Azure Sentinel deployment are supported in AMA, or whether to continue to use the Log Analytics agent for now and migrate at a later date. You can sign up for the Everything You Ever Wanted to Know About Using the New Azure Monitor Agent (AMA) with Azure Sentinel webinar on Nov 22 here.
Integrate Threat Intelligence (TI) sources using the built-in connectors from TAXII servers or the Microsoft Graph Security API. Read more on how to do so in the documentation. TI can also be imported as a custom log using a custom connector, or used as a lookup table. You can read more about how TI is used and managed in Azure Sentinel in the TI modules later.
If your source is not available, you can create a custom connector. Custom connectors use the ingestion API and therefore are similar to direct sources. Custom connectors are most often implemented using Logic Apps, offering a codeless option, or Azure Functions.
Module 5: Log Management
While how many and which workspaces to use is the first architecture question to ask, there are additional log management architectural decisions:
Where and how long to retain data.
How to best manage access to data and secure it.
Retention
If you want to retain data for more than two years or reduce the retention cost, you can consider using Azure Data Explorer for long-term retention of Azure Sentinel logs (Webinar Slides, Webinar Recording, Blog).
One of the important functions of a SIEM is to apply contextual information to the event steam, enabling detection, alert prioritization, and incident investigation. Contextual information includes, for example, threat intelligence, IP intelligence, host and user information, and watchlists.
Azure Sentinel provides comprehensive tools to import, manage, and use threat intelligence. For other types of contextual information, Azure Sentinel provides Watchlists, as well as alternative solutions.
Threat Intelligence
Sept 2021 update: Sign up for the Explore the Power of Threat Intelligence in Azure Sentinel webinar on Oct 25 here.
Threat Intelligence is an important building block of a SIEM.
In Azure Sentinel, you can integrate threat intelligence (TI) using the built-in connectors from TAXII servers or through the Microsoft Graph Security API. Read more on how to do so in the documentation. Refer to the data collection modules for more information about importing Threat Intelligence.
Once imported, Threat Intelligence is used extensively throughout Azure Sentinel and is woven into the different modules. The following features focus on using Threat Intelligence:
View and manage the imported threat intelligence in Logs and in the new Threat Intelligence area of Azure Sentinel.
Visualize key information about your threat intelligence in Azure Sentinel with the Threat Intelligence workbook.
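Once indicators are imported, they land in the ThreatIntelligenceIndicator table and can be joined against event data in KQL. The sketch below is a minimal, illustrative example assuming CommonSecurityLog is one of your connected sources; adjust the tables and join fields to your environment.

```kusto
// Match active imported TI network indicators against firewall/CEF logs.
// ThreatIntelligenceIndicator is the table Azure Sentinel populates with
// imported indicators; the lookback windows here are illustrative.
let indicators = ThreatIntelligenceIndicator
    | where TimeGenerated > ago(14d)
    | where Active == true
    | where isnotempty(NetworkIP);
CommonSecurityLog
| where TimeGenerated > ago(1d)
| join kind=inner (indicators) on $left.DestinationIP == $right.NetworkIP
| project TimeGenerated, DeviceVendor, SourceIP, DestinationIP, ThreatType, Description
```

The built-in TI-match analytics rule templates implement this same pattern for several common data types out of the box.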
Watchlists and other lookup mechanisms
To import and manage any type of contextual information, Azure Sentinel provides Watchlists, which enable you to upload data tables in CSV format and use them in your KQL queries. Read more about Watchlists in the documentation.
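An uploaded watchlist is queried with the built-in `_GetWatchlist()` function. In this sketch, 'HighValueAssets' is a hypothetical watchlist alias with a 'HostName' column; substitute your own alias and columns.

```kusto
// Filter events to hosts listed in a watchlist.
// 'HighValueAssets' and its 'HostName' column are illustrative names.
let vips = _GetWatchlist('HighValueAssets') | project HostName;
SecurityEvent
| where TimeGenerated > ago(1d)
| where Computer in (vips)
| summarize Events = count() by Computer, EventID
```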
In addition to Watchlists, you can also use the KQL externaldata operator, custom logs, and KQL functions to manage and query context information. Each of the four methods has its pros and cons, and you can read more about the comparison between those options in the blog post “Implementing Lookups in Azure Sentinel.” While each method is different, using the resulting information in your queries is similar, enabling easy switching between them.
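As an example of the externaldata option, a query can read a lookup table straight from a hosted CSV file at query time. The URL and column names below are illustrative.

```kusto
// Read an externally hosted CSV blocklist at query time.
// The URL, columns, and source table are illustrative placeholders.
let blocklist = externaldata(IPAddress: string, Reason: string)
    [@"https://example.com/blocklist.csv"]
    with (format="csv", ignoreFirstRecord=true);
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where SourceIP in ((blocklist | project IPAddress))
```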
In many (if not most) cases, you already have a SIEM and need to migrate to Azure Sentinel. While it may be a good time to start over and rethink your SIEM implementation, it makes sense to utilize some of the assets you already built in your current implementation. To start, watch our webinar describing best practices for converting detection rules from Splunk, QRadar, and ArcSight to Azure Sentinel rules: YouTube, MP4, Presentation, blog.
You might also be interested in some of the resources presented in the blog:
Watch the Understanding Normalization in Azure Sentinel webinar: YouTube, Presentation. Watch the Deep Dive into Azure Sentinel Normalizing Parsers and Normalized Content webinar: YouTube, MP3, Presentation. Sign up for the Turbocharging ASIM: Making Sure Normalization Helps Performance Rather Than Impacting It webinar on Oct 6 here.
Working with various data types and tables together presents a challenge. You must become familiar with many different data types and schemas, and write and use a unique set of analytics rules, workbooks, and hunting queries for each, even for those that share commonalities (for example, DNS servers). Correlation between the different data types, which is necessary for investigation and hunting, is also tricky.
The Azure Sentinel Information Model (ASIM) provides a seamless experience for handling various sources in uniform, normalized views. ASIM aligns with the Open-Source Security Events Metadata (OSSEM) common information model, promoting vendor agnostic, industry-wide normalization.
The current implementation is based on query-time normalization using KQL functions and includes the following:
Normalized schemas cover standard sets of predictable event types that are easy to work with and build unified capabilities. The schema defines which fields should represent an event, a normalized column naming convention, and a standard format for the field values.
Parsers map existing data to the normalized schemas. Parsers are implemented using KQL functions.
Content for each normalized schema includes analytics rules, workbooks, hunting queries, and additional content. This content works on any normalized data without the need to create source-specific content.
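A query-time parser of the kind described above is just a KQL function that maps a source table's fields onto normalized column names. The sketch below illustrates the pattern against Windows process-creation events; the normalized column names are illustrative of the approach rather than an exact ASIM schema reference.

```kusto
// A minimal query-time parser sketch: rename source-specific fields to
// normalized names so source-agnostic content can query them uniformly.
// Normalized column names here are illustrative, not authoritative ASIM.
SecurityEvent
| where EventID == 4688   // Windows process creation
| project-rename
    DvcHostname = Computer,
    TargetProcessName = NewProcessName,
    ActorUsername = SubjectUserName
| extend EventType = 'ProcessCreated', EventProduct = 'Security Events'
```

Saved as a KQL function, such a parser lets analytics rules and hunting queries target the normalized view instead of each source table.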
Using ASIM provides the following benefits:
Cross-source detection: normalized analytic rules work across sources, on-premises and cloud, detecting attacks such as brute force or impossible travel across systems including Okta, AWS, and Azure.
Source-agnostic content: the coverage of built-in as well as custom content using ASIM automatically expands to any source that supports ASIM, even if the source was added after the content was created. For example, process event analytics support any source that a customer may use to bring in the data, including Defender for Endpoint, Windows Events, and Sysmon. We are ready to add Sysmon for Linux and WEF once released!
Support for your custom sources in built-in analytics
Ease of use: once an analyst learns ASIM, writing queries is much simpler as the field names are always the same.
Azure Sentinel security value is a combination of its built-in capabilities such as UEBA, Machine Learning, or out-of-the-box analytics rules and your capability to create custom capabilities and customize built-in ones. Customized SIEM capabilities are often referred to as “content” and include analytic rules, hunting queries, workbooks, playbooks, and more.
In this section, we grouped the modules that help you learn how to create such content or modify built-in content to your needs. We start with KQL, the lingua franca of Azure Sentinel. The following modules each discuss one of the content building blocks, such as rules, playbooks, and workbooks. We wrap up by discussing use cases, which encompass elements of different types to address specific security goals such as threat detection, hunting, or governance.
Module 7: KQL
Short on time? Start at the beginning and go as far as time allows.
Most Azure Sentinel capabilities use KQL or Kusto Query Language. When you search in your logs, write rules, create hunting queries, or design workbooks, you use KQL. Note that the next section on writing rules explains how to use KQL in the specific context of SIEM rules.
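To make the shape of KQL concrete, here is a minimal first query over the SecurityEvent table (present when Windows security events are connected); the table and time window are illustrative.

```kusto
// A first KQL query: count events per computer over the last day,
// then show the ten noisiest hosts.
SecurityEvent
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by Computer
| sort by EventCount desc
| take 10
```

The same pipe-based pattern of filter, summarize, and sort underlies most rules, hunting queries, and workbook charts.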
Azure Sentinel enables you to use built-in rule templates, customize the templates for your environment, or create custom rules. The core of the rules is a KQL query; however, there is much more than that to configure in a rule.
To learn the procedure for creating rules, read the documentation. To learn how to write rules, i.e., what should go into a rule, focusing on KQL for rules, watch the webinar: MP4, YouTube, Presentation.
SIEM rules have specific patterns. Learn how to implement rules and write KQL for those patterns:
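One of the most common rule patterns is a threshold over a time window. The sketch below flags possible brute-force activity from Windows failed-logon events; the threshold and windows are illustrative starting points, not recommended values.

```kusto
// Scheduled-rule pattern: count events per entity in time bins and
// alert when a threshold is crossed. EventID 4625 = failed logon.
// The threshold of 20 is illustrative; tune it for your environment.
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4625
| summarize FailedLogons = count() by Account, bin(TimeGenerated, 5m)
| where FailedLogons > 20
```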
Before embarking on your own rule writing, you should take advantage of the built-in analytics capabilities. Those do not require much from you, but it is worthwhile learning about them:
Use the built-in scheduled rule templates. You can tune those templates by modifying them the same way you edit any scheduled rule. Make sure to deploy the templates for the data connectors you connect, as listed in the data connector “next steps” tab.
Learn more about Azure Sentinel’s Machine learning capabilities: MP4, YouTube, Presentation
Watch the Fusion ML Detections with Scheduled Analytics Rules webinar: YouTube, MP4, Presentation.
Learn more about Azure Sentinel’s built-in SOC-ML anomalies here. Sign up for a webinar on customized SOC-ML anomalies and how to use them here on Sept 14.
Module 9: Implementing SOAR
Sept 2021 update: sign up for the What’s New in Azure Sentinel Automation webinar on Oct 28 here.
Automation rules are the starting point for Azure Sentinel automation. They provide a lightweight method for central automated handling of incidents, including suppression, false-positive handling, and automatic assignment.
To provide robust workflow-based automation capabilities, automation rules use Logic Apps playbooks:
Watch the Logic Apps Sentinel playbooks Webinar: YouTube, MP4, Deck
Read about Logic Apps, which is the core technology driving Azure Sentinel playbooks.
As the nerve center of your SOC, you need Azure Sentinel to visualize the information it collects and produces. Use workbooks to visualize data in Azure Sentinel.
Those resources are not Azure Sentinel specific and apply to Azure Workbooks in general. To learn more about Workbooks in Azure Sentinel, watch the Webinar: YouTube, MP4, Deck, and read the documentation.
Workbooks can be interactive and enable much more than just charting. With Workbooks, you can create apps or extension modules for Azure Sentinel to complement its built-in functionality. We also use workbooks to extend the features of Azure Sentinel. A few examples of such apps that you can both use and learn from are:
You can find dozens of workbooks in the Workbooks folder in the Azure Sentinel GitHub. Some of those are available in the Azure Sentinel workbooks gallery and some are not.
Reporting and other visualization options
Workbooks can serve for reporting. For more advanced reporting capabilities such as reports scheduling and distribution or pivot tables, you might want to use:
Jupyter notebooks are fully integrated with Azure Sentinel. While usually considered an important tool in the hunter’s tool chest and discussed in the webinars in the hunting section below, their value is much broader. Notebooks can serve for advanced visualization, as an investigation guide, and for sophisticated automation.
An important part of the integration is implemented by MSTICPY, a Python library developed by our research team for use with Jupyter notebooks that adds Azure Sentinel interfaces and sophisticated security capabilities to your notebooks.
Module 11: Use cases and solutions
Sept 21 update: sign up for the Create Your Own Azure Sentinel Solutions webinar on Nov 16 here.
Short on time? watch the "Tackling Identity" Webinar: YouTube, MP4, Deck
Using connectors, rules, playbooks, and workbooks enables you to implement use cases: the SIEM term for a content pack intended to detect and respond to a threat. You can deploy Azure Sentinel’s built-in use cases by activating the suggested rules when connecting each connector. A solution is a group of use cases addressing a specific threat domain.
The Webinar “Tackling Identity” (YouTube, MP4, Presentation) explains what a use case is, how to approach its design, and presents several use cases that collectively address identity threats.
Azure Sentinel solutions provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Azure Sentinel. Read more about them here, and sign up for the upcoming webinar on Nov 16 on how to create solutions here.
Part 4: Operating
Module 12: Handling incidents
Sept 21 update: sign up for the Decrease Your SOC’s MTTR (Mean Time to Respond) by Integrating Azure Sentinel with Microsoft Teams webinar on Nov 10 here.
Short on time? Watch the "day in a life" Webinar: YouTube, MP4, Deck
After building your SOC, you need to start using it. The “day in a SOC analyst life” webinar (YouTube, MP4, Presentation) walks you through using Azure Sentinel in the SOC to triage, investigate, and respond to incidents.
Integrating with Microsoft Teams directly from Azure Sentinel enables your teams to collaborate seamlessly across the organization, and with external stakeholders. Sign up for the Decrease Your SOC’s MTTR (Mean Time to Respond) by Integrating Azure Sentinel with Microsoft Teams webinar on Nov 10 here.
You might also want to read the documentation article on incident investigation. As part of the investigation, you will also use the entity pages to get more information about entities related to your incident or identified as part of your investigation.
Incident investigation in Azure Sentinel extends beyond the core incident investigation functionality. We can build additional investigation tools using Workbooks and Notebooks (the latter are discussed later, under hunting). You can also build additional investigation tools or modify ours to your specific needs. Examples include:
Short on time? Watch the Webinar: YouTube, MP4, Deck. (Note that the Webinar starts with an update on new features; to learn about hunting, start at slide 12. The YouTube link is already set to start there.)
While most of the discussion so far focused on detection and incident management, hunting is another important use case for Azure Sentinel. Hunting is a proactive search for threats rather than a reactive response to alerts.
The hunting dashboard was refreshed in July 2021 and shows all the queries written by Microsoft’s team of security analysts, as well as any extra queries that you have created or modified. Each query provides a description of what it hunts for and what kind of data it runs on. The templates are grouped by tactic; the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. Read more about it here.
To understand more about what hunting is and how Azure Sentinel supports it, Watch the hunting intro Webinar (YouTube, MP4, Deck). Note that the Webinar starts with an update on new features. To learn about hunting, start at slide 12. The YouTube link is already set to start there.
While the intro webinar focuses on tools, hunting is all about security. Our security research team’s webinar on hunting (MP4, YouTube, Presentation) focuses on how to actually hunt. The follow-up AWS Threat Hunting using Sentinel Webinar (MP4, YouTube, Presentation) really drives the point home by showing an end-to-end hunting scenario on a high-value target environment. Lastly, you can learn how to do SolarWinds Post-Compromise Hunting with Azure Sentinel and WebShell hunting motivated by recent vulnerabilities in on-premises Microsoft Exchange servers.
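A common hunting pattern is to look for rarity rather than a fixed signature. The sketch below surfaces processes seen on only a handful of hosts; the lookback window and rarity threshold are illustrative starting points for your own tuning.

```kusto
// Hunting pattern: surface rarely seen processes across hosts.
// EventID 4688 = Windows process creation; thresholds are illustrative.
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688
| summarize HostCount = dcount(Computer), FirstSeen = min(TimeGenerated)
    by NewProcessName
| where HostCount <= 2
| sort by FirstSeen desc
```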
Module 14: User and Entity Behavior Analytics (UEBA)
Azure Sentinel’s newly introduced User and Entity Behavior Analytics (UEBA) module enables you to identify and investigate threats inside your organization and their potential impact, whether from a compromised entity or a malicious insider.
Part of operating a SIEM is making sure it works smoothly; this is an evolving area in Azure Sentinel. Use the following to monitor Azure Sentinel’s health:
Monitor your Log Analytics workspace (YouTube, MP4, Presentation), including query execution and ingestion health
Cost management is also an important operational procedure in the SOC. Use the Ingestion Cost Alert Playbook to ensure you are aware in time of any cost increase.
Part 5: Advanced Topics
Module 16: Extending and Integrating using Azure Sentinel APIs
As a cloud-native SIEM, Azure Sentinel is an API-first system. Every feature can be configured and used through an API, enabling easy integration with other systems and extending Azure Sentinel with your own code. If the API sounds intimidating to you, don’t worry; whatever is available through the API is also available using PowerShell.
Azure Sentinel provides a great platform for implementing your own machine learning algorithms. We call it Bring Your Own ML, or BYOML for short. This is intended for advanced users. If you are looking for built-in behavioral analytics, use our ML analytics rules or the UEBA module, or write your own KQL-based behavioral analytics rules.
To start with bringing your own ML to Azure Sentinel, watch the video, and read the blog post. You might also want to refer to the BYOML documentation.