PrintNightmare, Critical Windows Print Spooler Vulnerability

Cybersecurity and Infrastructure Security Agency (CISA) - Defend Today, Secure Tomorrow


Original release date: June 30, 2021

The CERT Coordination Center (CERT/CC) has released a VulNote for a critical remote
code execution vulnerability in the Windows Print Spooler service, noting: "while
Microsoft has released an update for CVE-2021-1675, it is important to realize that
this update does not address the public exploits that also identify as
CVE-2021-1675." An attacker can exploit this vulnerability—nicknamed
PrintNightmare—to take control of an affected system.

CISA encourages administrators to disable the Windows Print Spooler service on
domain controllers and on systems that do not print. Additionally, administrators
should follow the best practice from Microsoft's how-to guidance, published
January 11, 2021: "Due to the possibility for exposure, domain controllers and
Active Directory admin systems need to have the Print spooler service disabled.
The recommended way to do this is using a Group Policy Object."
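On a single machine, you can spot-check the mitigation: the service's start type is stored as the Start DWORD under HKLM\SYSTEM\CurrentControlSet\Services\Spooler (4 = Disabled). Below is a minimal Python sketch of that interpretation; the registry read is shown only in a comment since it is Windows-only, and the helper names are illustrative, not part of any CISA tooling:

```python
# Interpret a Windows service "Start" value, e.g. as read (on Windows) with:
#   import winreg
#   key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
#                        r"SYSTEM\CurrentControlSet\Services\Spooler")
#   start, _ = winreg.QueryValueEx(key, "Start")
START_TYPES = {0: "Boot", 1: "System", 2: "Automatic", 3: "Manual", 4: "Disabled"}

def spooler_state(start_value: int) -> str:
    """Map the Services\\Spooler 'Start' DWORD to a readable start type."""
    return START_TYPES.get(start_value, "Unknown")

def is_mitigated(start_value: int) -> bool:
    """The CISA guidance is satisfied when the start type is Disabled (4)."""
    return start_value == 4
```

Per the guidance above, the authoritative way to apply this across a domain remains a Group Policy Object, not a per-host script.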


CISA’s CSET Tool Sets Sights on Ransomware Threat



Original release date: June 30, 2021

CISA has released a new module in its
Cyber Security Evaluation Tool (CSET): the Ransomware Readiness Assessment
(RRA). CSET is a desktop software tool that guides network defenders through
a step-by-step process to evaluate their cybersecurity practices on their
networks. CSET—applicable to both information technology (IT) and industrial
control system (ICS) networks—enables users to perform a comprehensive
evaluation of their cybersecurity posture using many recognized government
and industry standards and recommendations.

The RRA is a self-assessment based on a tiered set of practices to help
organizations better assess how well they are equipped to defend against and
recover from a ransomware incident. CISA has tailored the RRA to varying levels
of ransomware threat readiness to make it useful to all organizations regardless
of their current cybersecurity maturity. The RRA:
  • Helps
    organizations evaluate their cybersecurity posture, with respect to
    ransomware, against recognized standards and best practice
    recommendations in a systematic, disciplined, and repeatable manner.
  • Guides
    asset owners and operators through a systematic process to evaluate
    their operational technology (OT) and information technology (IT)
    network security practices against the ransomware threat.
  • Provides
    an analysis dashboard with graphs and tables that present the assessment
    results in both summary and detailed form.

CISA strongly encourages all organizations to take the CSET Ransomware
Readiness Assessment.

New Microsoft Defender for Endpoint blog: Vulnerability management for Linux now generally available

News from Microsoft

Vulnerability management for Linux now generally available


In May we announced the support for Linux across our threat and
vulnerability management capabilities in Microsoft Defender for Endpoint.
Today, we are excited to announce that threat and vulnerability management for
Linux is now generally available across Red
Hat, Ubuntu, CentOS, SUSE, and Oracle
, with support for Debian coming
soon. In addition to Linux, the threat and vulnerability management
capabilities already support macOS and Windows, with support for Android and
iOS coming later this summer to further expand our support of third-party
platforms.

Vulnerability Management plays a crucial role in monitoring an
organization’s overall security posture. That’s why we continue to expand our
cross-platform support to equip security teams with real-time insights into
risk with continuous vulnerability discovery, intelligent prioritization, and
the ability to seamlessly remediate vulnerabilities for all their platforms.
With the general availability of support for Linux, organizations can now
review vulnerabilities within installed apps across the Linux OS and issue
remediation tasks for affected devices.

 Image 1: Software inventory page in the vulnerability management console, showing various Linux platforms



Image 2: Software inventory page in the vulnerability management portal, showing glibc across various Linux systems



Support for the various Linux platforms in threat and vulnerability management
closely follows what is available across our Endpoint Detection and Response
(EDR) capabilities. This alignment ensures a consistent experience for
Microsoft Defender for Endpoint customers, as we continue to expand our
cross-platform support.


More information

The threat and vulnerability management capabilities are part of Microsoft Defender for Endpoint and enable
organizations to effectively identify, assess, and remediate endpoint
weaknesses to reduce organizational risk.


Check out our documentation for a complete overview of supported
operating systems and platforms.



Testing the New Version of the Windows Security Events Connector with Azure Sentinel To-Go!

A new Microsoft Security blog for your consideration.




Last week, on Monday June 14th, 2021, a new version of the
Windows Security Events data connector reached public preview. This is
the first data connector created leveraging the new, generally available
Azure Monitor Agent (AMA) and Data Collection Rules (DCR) features from the Azure
Monitor ecosystem. As with any other new feature in Azure Sentinel, I
wanted to expedite the testing process and empower others in the InfoSec
community through a lab environment to learn more about it.

In this post, I will talk about the new features of the new data connector and
how to automate the deployment of an Azure Sentinel instance with the connector
enabled, the creation and association of DCRs, and the installation of the AMA
on a Windows workstation. This is an extension of a blog post I wrote last year
(2020), where I covered the collection of Windows security events via the
Log Analytics Agent (legacy).


I highly recommend reading the following blog posts
to learn more about the announcement of the new Azure Monitor
features and the Windows Security Events data connector:

Azure Sentinel To-Go!? 

Azure Sentinel2Go is an open-source project maintained
and developed by the 
Open Threat Research community to automate the deployment of an Azure
Sentinel research lab and a data ingestion pipeline to consume
pre-recorded datasets. Every environment I release through this
initiative is an environment I use and test while performing research as part
of my role in the MSTIC R&D team. Therefore, I am constantly trying to
improve the deployment templates as I cover more scenarios. Feedback
is greatly appreciated.

New Version of the Windows Security Events Connector?

 According to Microsoft docs, the Windows Security Events connector lets you stream
security events from any Windows server (physical or virtual, on-premises or in
any cloud) connected to your Azure Sentinel workspace. After last week,
there are now two versions of this connector: 

  • Security Events (legacy version): based on the Log Analytics Agent
    (usually known as the Microsoft Monitoring Agent (MMA) or Operations
    Management Suite (OMS) agent).
  • Windows Security Events (new version): based on the new
    Azure Monitor Agent (AMA).

In your Azure Sentinel data connector’s view, you
can now see both connectors:



New Version? What is New?

Connector Deployment

 Besides using the Log Analytics Agent to
collect and ship events, the 
old connector uses the Data Sources resource from the Log Analytics Workspace resource to set the collection tier
of Windows security events.




The new connector, on the other hand, uses a combination of Data Collection
Rules (DCR) and Data Collection Rule Associations (DCRA). DCRs define what data
to collect and where it should be sent. Here is where we can set it to send
data to the Log Analytics workspace backing our Azure Sentinel instance.



In order to apply a DCR to a virtual machine, one
needs to create an association between the machine and the rule. A
virtual machine may have an association with multiple DCRs, and a DCR
may have multiple virtual machines associated with it.



For more detailed information about
setting up the Windows Security Events connector with both Log Analytics Agent
and Azure Monitor Agents manually,
take a look at  
this document.

Collection Filtering Capabilities

 The old connector is not flexible enough to
choose what specific events to collect. For example, these are the
only options to collect data from Windows machines with the old connector:

  • All events –
    All Windows security and AppLocker events.
  • Common –
    A standard set of events for auditing purposes. The Common event set
    may contain some types of events that aren't so common. This is because
    the main point of the Common set is to reduce the volume of events to a
    more manageable level, while still maintaining full audit trail capability.
  • Minimal –
    A small set of events that might indicate potential threats. This set does
    not contain a full audit trail. It covers only events that might indicate
    a successful breach, and other important events that have very low rates
    of occurrence.
  • None –
    No security or AppLocker events. (This setting is used to disable the
    connector.)
According to Microsoft docs, these are
the pre-defined security event collection groups
depending on the tier set:


On the other hand, the new connector allows custom data collection via XPath
queries. These XPath queries are defined during the creation of the data
collection rule and are written in the form of LogName!XPathQuery. Here
are a few examples:

  • Collect only Security events with Event ID = 4624:
    Security!*[System[(EventID=4624)]]
  • Collect only Security events with Event ID = 4624 or
    Security events with Event ID = 4688:
    Security!*[System[(EventID=4624 or EventID=4688)]]
  • Collect only Security events with Event ID = 4688 and
    with a process name of consent.exe:
    Security!*[System[(EventID=4688)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]
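Since each entry couples an event channel and a query with `!`, tooling that generates DCRs can split the two parts programmatically. A minimal sketch, with an illustrative function name:

```python
def split_dcr_query(entry: str) -> tuple:
    """Split a DCR-style 'LogName!XPathQuery' string into (channel, xpath).

    Only the first '!' separates the channel; the XPath query itself
    does not contain one in the connector's examples.
    """
    channel, _, xpath = entry.partition("!")
    return channel, xpath

channel, xpath = split_dcr_query("Security!*[System[(EventID=4624 or EventID=4688)]]")
```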


You can select the Custom option to choose which events to stream:




 Based on the new connector docs, make sure to query only Windows Security and AppLocker
logs. Events from other Windows logs, or from security logs from other
environments, may not adhere to the Windows Security Events schema and won’t be
parsed properly, in which case they won’t be ingested to your workspace.

Also, the Azure Monitor Agent supports XPath queries for XPath version 1.0
only. I recommend reading the XPath 1.0 limitations documentation before
writing XPath queries.


XPath stands for XML (Extensible Markup Language) Path Language, and it is
used to explore and model XML documents as a tree of nodes. Nodes can be
represented as elements, attributes, and text values.
 In the image below, we can see a few node examples
in the XML representation of a Windows security event:


XPath Queries? 

XPath queries are used to search for patterns in XML documents and leverage
path expressions and predicates to find a node or filter specific nodes that
contain a specific value. Wildcards such as '*' and '@' are used to select
nodes, and predicates are always embedded in square brackets.

Matching any element node with ‘*’

Using our previous Windows Security event XML example, we can process Windows
Security events using the wildcard '*' at the 'Element' node level.

The example below walks through two 'Element' nodes to get to the 'Text'
node of value '4688'.
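The same walk can be sketched with Python's stdlib xml.etree against a simplified copy of the event (namespaces omitted for brevity; real Windows events carry an xmlns declaration):

```python
import xml.etree.ElementTree as ET

# Simplified Windows Security event (the real XML declares a namespace).
EVENT_XML = """
<Event>
  <System>
    <EventID>4688</EventID>
    <Channel>Security</Channel>
  </System>
  <EventData>
    <Data Name="ParentProcessName">C:\\Windows\\System32\\cmd.exe</Data>
  </EventData>
</Event>
"""

root = ET.fromstring(EVENT_XML)
# Walk Event -> System -> EventID and read the text node -- the same path
# the XPath query '*[System[EventID=4688]]' matches against.
event_id = root.find("System/EventID").text
# Attribute nodes are reached with '@': here, Data[@Name='ParentProcessName'].
parent = root.find("EventData/Data[@Name='ParentProcessName']").text
```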



You can test this basic XPath query via PowerShell:

  • Open a PowerShell console as 'Administrator'.
  • Use the Get-WinEvent command to pass the XPath query.
  • Use the 'LogName' parameter to define what event channel to run the
    query against.
  • Use the 'FilterXPath' parameter to set the XPath query.

Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4688]]'



Matching any attribute node with ‘@’ 

As shown before, 'Element' nodes can contain 'Attributes', and we can use the
wildcard '@' to search for 'Text' nodes at the 'Attribute' node level.
The example below extends the previous one and adds a filter to search for a
specific 'Attribute' node that contains the following text:
'C:\Windows\System32\cmd.exe'.




Once again, you can test the XPath query via
PowerShell as Administrator.

$XPathQuery = "*[System[EventID=4688]] and *[EventData[Data[@Name='ParentProcessName']='C:\Windows\System32\cmd.exe']]"

Get-WinEvent -LogName Security -FilterXPath $XPathQuery



Can I Use XPath Queries in Event Viewer? 

 Every time you add a filter through the Event
Viewer UI, you can also get to the XPath query representation of the
filter. The XPath query is part of a QueryList node which
allows you to define and run multiple queries at once.


We can take our previous example where we searched for a specific attribute
and run it through the Event Viewer filter (XML tab):

<QueryList>
    <Query Id="0" Path="Security">
        <Select Path="Security">*[System[(EventID=4688)]] and *[EventData[Data[@Name='ParentProcessName']='C:\Windows\System32\cmd.exe']]</Select>
    </Query>
</QueryList>




Now that we have covered some of the main changes
and features of the new version of the Windows Security Events data connector,
it is time to show you how to create a lab environment for you to test your own
XPath queries for research purposes and before pushing
them to production.

Deploy Lab Environment

  • Identify the right Azure resources
    to deploy.
  • Create deployment template. 
  • Run deployment template. 

Identify the Right Azure Resources to Deploy 

 As mentioned earlier in this post, the old connector uses the Data Sources resource from the Log Analytics Workspace resource to set the collection tier of Windows
security events.


This is the Azure Resource Manager (ARM) template I use in Azure-Sentinel2Go
to set it up:

Azure-Sentinel2Go at master · OTRF/Azure-Sentinel2Go (GitHub)

Data Sources Azure Resource


{
  "type": "Microsoft.OperationalInsights/workspaces/dataSources",
  "apiVersion": "2020-03-01-preview",
  "location": "eastus",
  "name": "WORKSPACE/SecurityInsightsSecurityEventCollectionConfiguration",
  "kind": "SecurityInsightsSecurityEventCollectionConfiguration",
  "properties": {
    "tier": "All",
    "tierSetMethod": "Custom"
  }
}

However, the new connector uses a combination of Data Collection Rules (DCR) and Data Collection Rule Associations (DCRA).
This is the ARM template I use to
create data collection rules:


Azure-Sentinel2Go/creation-azureresource.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)

Data Collection Rules Azure Resource


{
  "type": "microsoft.insights/dataCollectionRules",
  "apiVersion": "2019-11-01-preview",
  "name": "WindowsDCR",
  "location": "eastus",
  "tags": {
    "createdBy": "Sentinel"
  },
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "eventLogsDataSource",
          "scheduledTransferPeriod": "PT5M",
          "streams": [ ... ],
          "xPathQueries": [ ... ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "SecurityEvent",
          "workspaceId": "AZURE-SENTINEL-WORKSPACEID",
          "workspaceResourceId": "AZURE-SENTINEL-WORKSPACERESOURCEID"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ ... ],
        "destinations": [ ... ]
      }
    ]
  }
}

One additional step in the setup of the new
connector is the association of the DCR with Virtual Machines.


This is the ARM template I use
to create DCRAs:


Azure-Sentinel2Go/association.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)


Data Collection Rule Associations Azure Resource


{
  "name": "WORKSTATION5/microsoft.insights/WindowsDCR",
  "type": "Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations",
  "apiVersion": "2019-11-01-preview",
  "location": "eastus",
  "properties": {
    "description": "Association of data collection rule. Deleting this association will break the data collection for this virtual machine.",
    "dataCollectionRuleId": "DATACOLLECTIONRULEID"
  }
}



What About the XPath Queries?


As shown in the previous section, the XPath query is part of the 'dataSources'
section of the data collection rule resource. It is defined under the
'windowsEventLogs' data source type.


"dataSources": {
  "windowsEventLogs": [
    {
      "name": "eventLogsDataSource",
      "scheduledTransferPeriod": "PT5M",
      "streams": [ ... ],
      "xPathQueries": [ ... ]
    }
  ]
}



Create Deployment Template 


We can easily add all those ARM templates to an 'Azure Sentinel & Win10
Workstation' basic template. We just need to make sure we install the Azure
Monitor Agent instead of the Log Analytics Agent, and enable the
system-assigned managed identity in the Azure VM.


Resource List to Deploy:

  • Azure Sentinel instance
  • Windows virtual machine
    • Azure Monitor Agent installed
    • System-assigned managed identity enabled
  • Data Collection Rule
    • Log Analytics workspace ID
    • Log Analytics workspace resource ID
  • Data Collection Rule Association
    • Data Collection Rule ID
    • Windows virtual machine resource name


The following ARM template can be used for our first basic scenario:


Azure-Sentinel2Go/Win10-DCR-AzureResource.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)


Run Deployment Template 


You can deploy the ARM template via a “Deploy to Azure” button or via Azure CLI. 


“Deploy to Azure” Button 

  1. Browse to the Azure Sentinel2Go repository.
  2. Go to grocery-list/Win10/demos.
  3. Click on the “Deploy to Azure” button next to
     “Azure Sentinel + Win10 + DCR (DCR Resource)”.


  4. Fill out the required parameters:
     • adminUsername: admin user to create in the Windows workstation.
     • adminPassword: password for the admin user.
     • allowedIPAddresses: public IP address to restrict access to the
       lab environment.
  5. Wait 5-10 mins and your environment should be ready.


Azure CLI 

  1. Download the demo template.
  2. Open a terminal where you can run Azure CLI (e.g., PowerShell).
  3. Log in to your Azure tenant:

az login

  4. Create a resource group:

az group create -n AzSentinelDemo -l eastus

  5. Deploy the ARM template:

az deployment group create -f ./Win10-DCR-AzureResource.json -g MYRESOURCEGROUP --parameters adminUsername=MYUSER adminPassword=MYUSERPASSWORD allowedIPAddresses=x.x.x.x

  6. Wait 5-10 mins and your environment should be ready.


Whether you use the UI or the CLI, you can monitor
your deployment by going to Resource Group > Deployments:







Lab Resources


Once your environment is deployed successfully, I
recommend verifying every resource that was deployed.


Sentinel New Data Connector


You will see the Windows Security Events (Preview) data connector enabled
with a custom Data Collection Rule (DCR).




If you edit the custom DCR, you will see the XPath
query and the resource that it got associated with. The image below shows the
association of the DCR with a machine named workstation5.




You can also see that the data collection is set to custom and, for this
example, we only set the event stream to collect events with Event ID 4624.




Windows Workstation 


I recommend RDPing to the Windows workstation using its public IP address.
Go to your resource group and select the Azure VM; you should see the public
IP address on the right of the screen. Logging in generates authentication
events, which will be captured by the custom DCR associated with the endpoint.




Check Azure Sentinel Logs 


Go back to your Azure Sentinel, and you should
start seeing some events on the Overview page:




Go to Logs and run the following KQL query:

SecurityEvent
| summarize count() by EventID


As you can see in the image below, only events with
Event ID 4624 were collected by the Azure Monitor Agent.



You might be asking yourself, “Who would only want to collect events with
Event ID 4624 from a Windows endpoint?” Believe it or not, there are network
environments where, due to bandwidth constraints, only certain events can be
collected. Therefore, this custom filtering capability is very useful: it
covers more use cases and can even save storage!



Any Good XPath Query Repositories in the InfoSec Community?


Now that we know the internals of the new connector and how to deploy a simple
lab environment, we can test multiple XPath queries depending on our
organization's research use cases and bandwidth constraints. There are a few
projects we can use.


Palantir WEF Subscriptions


One of many repositories out there that contain XPath queries is the
'windows-event-forwarding' project from Palantir. The XPath queries are inside
the Windows Event Forwarding (WEF) subscriptions. We could take all the
subscriptions and parse them programmatically to extract all the XPath
queries, saving them in a format that can be used as part of the automated
deployment.
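As a sketch of that extraction step, assuming the subscription bodies contain standard QueryList XML, the Select elements can be harvested with Python's stdlib xml.etree and rewritten into the connector's LogName!XPathQuery form (the sample subscription content below is illustrative):

```python
import xml.etree.ElementTree as ET

# Illustrative QueryList, as found inside a WEF subscription body.
QUERY_LIST = """
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[System[(EventID=4624)]]</Select>
    <Select Path="Security">*[System[(EventID=4688)]]</Select>
  </Query>
</QueryList>
"""

def extract_dcr_queries(query_list_xml: str) -> list:
    """Return DCR-style 'Channel!XPath' strings from a QueryList document."""
    root = ET.fromstring(query_list_xml)
    queries = []
    for select in root.iter("Select"):
        # Each Select carries its channel in the Path attribute.
        queries.append(f"{select.get('Path')}!{select.text.strip()}")
    return queries

dcr_queries = extract_dcr_queries(QUERY_LIST)
```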


You can follow the steps in this document, available in Azure Sentinel2Go, to
extract XPath queries from the Palantir project:

Azure-Sentinel2Go/ at master · OTRF/Azure-Sentinel2Go (GitHub)

OSSEM Detection Model + ATT&CK Data Sources 


From a community perspective, another great resource you can use to extract
XPath queries from is the Open Source Security Event Metadata (OSSEM)
Detection Model (DM) project: a community-driven effort to help researchers
model attack behaviors from a data perspective and share relationships
identified in security events across several operating systems.

One of the use cases from this initiative is to map all security events in the
project to the new 'Data Sources' objects provided by MITRE ATT&CK. In the
image below, we can see how the OSSEM DM project provides an interactive
document (.CSV) for researchers to explore the mappings (research output):


One of the advantages of this project over others is that all its data
relationships are in YAML format, which makes it easy to translate to other
formats, for example XML. We can use the Event IDs defined in each data
relationship documented in OSSEM DM and create XML files with XPath queries
in them.

OSSEM DM Relationships (YAML Files)

Let’s say we want to use relationships related to scheduled jobs in Windows.




YAML files to XML Query Lists

We can process all the YAML files and export the data to XML files. One thing
that I like about this OSSEM DM use case is that we can group the XML files by
ATT&CK data source. This can help organizations organize their data collection
in a way that maps to detections or other ATT&CK-based frameworks internally.
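A minimal sketch of that YAML-to-XML step, with the parsed OSSEM DM relationships stubbed as a plain dict of event IDs per ATT&CK data source (a real pipeline would parse the YAML files first):

```python
import xml.etree.ElementTree as ET

# Event IDs per ATT&CK data source, as they might be read out of OSSEM DM
# relationship files (stubbed here instead of parsing YAML). 4698/4699/4702
# are the Security-log scheduled task created/deleted/updated events.
RELATIONSHIPS = {"Scheduled Job": [4698, 4699, 4702]}

def build_query_list(event_ids) -> str:
    """Emit a QueryList XML document with one Select per event ID."""
    query_list = ET.Element("QueryList")
    query = ET.SubElement(query_list, "Query", Id="0", Path="Security")
    for event_id in event_ids:
        select = ET.SubElement(query, "Select", Path="Security")
        select.text = f"*[System[(EventID={event_id})]]"
    return ET.tostring(query_list, encoding="unicode")

xml_doc = build_query_list(RELATIONSHIPS["Scheduled Job"])
```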


We can use the QueryList format to document all 'scheduled job' relationship
XPath queries in one XML file.



I like to document my XPath queries first in this format because it expedites
the validation process of the XPath queries locally on a Windows endpoint. You
can use that XML file in a PowerShell command to query Windows Security events
and make sure there are no syntax issues:

[xml]$scheduledjobs = Get-Content .\scheduled-job.xml
Get-WinEvent -FilterXml $scheduledjobs




XML Query Lists to DCR Data Source:

Finally, once the XPath queries have been validated, we could simply extract
them from the XML files and put them in a format that could be used in ARM
templates to create DCRs. Do you remember the dataSources property of the DCR
Azure resource we talked about earlier? What if we could get the values of the
windowsEventLogs data source directly from a file instead of hardcoding them
in an ARM template? The example below is how it was previously being
hardcoded.

"dataSources": {
  "windowsEventLogs": [
    {
      "name": "eventLogsDataSource",
      "scheduledTransferPeriod": "PT5M",
      "streams": [ ... ],
      "xPathQueries": [ ... ]
    }
  ]
}

We could use the XML files created after processing OSSEM DM relationships
mapped to ATT&CK data sources to create the following document. We can pass
the URL of the document as a parameter in an ARM template to deploy our lab
environment:

Azure-Sentinel2Go/ossem-attack.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)
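As a sketch of that conversion, assuming a flat list of LogName!XPathQuery strings has already been extracted, the queries can be grouped into windowsEventLogs entries. Batching keeps each list at or under the 10-item validation limit mentioned later in the post; the stream name and entry names here are assumptions modeled on the connector's templates:

```python
import json

def to_windows_event_logs(queries, group_size=10):
    """Group DCR-style 'LogName!XPath' strings into windowsEventLogs entries.

    Batching is one way to stay within the service's "10 items or less"
    validation on the windowsEventLogs list.
    """
    entries = []
    for i in range(0, len(queries), group_size):
        entries.append({
            # Entry/stream names are assumptions based on the DCR templates.
            "name": f"eventLogsDataSource-{i // group_size}",
            "scheduledTransferPeriod": "PT5M",
            "streams": ["Microsoft-SecurityEvent"],
            "xPathQueries": queries[i:i + group_size],
        })
    return {"windowsEventLogs": entries}

# Serialize for use as a dataSources file consumed by an ARM template.
payload = json.dumps(
    to_windows_event_logs(["Security!*[System[(EventID=4698)]]"]), indent=2)
```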


How Do You Create the Document?


The OSSEM team is contributing to and maintaining the JSON file from the
previous section in the Azure Sentinel2Go repository. However, if you want to
go through the whole process on your own, Jose Rodriguez (@Cyb3rpandah) was
kind enough to write up every single step to get to that output file in the
following blog post:


OSSEM Detection Model: Leveraging Data Relationships to
Generate Windows Event XPath Queries


OK, But How Do I Pass the JSON File to Our Initial ARM Template?


In our initial ARM template, we had the XPath query as an
ARM template variable as shown in the image below.




We could also have it as a template parameter. However, that is not flexible
enough to define multiple DCRs or even update the whole DCR data source object
(think about future coverage beyond Windows events).


Data Collection Rules – CREATE API 


For more complex use cases, I would use the DCR Create API. This can be
executed via a PowerShell script, which can also be used inside of an ARM
template via deployment scripts. Keep in mind that the deployment script
resource requires an identity to execute the script. This managed identity, of
type user-assigned, can be created at deployment time and used to create the
DCRs programmatically.


PowerShell Script 


If you have an Azure Sentinel instance without the data connector enabled, you
can use the following PowerShell script to create DCRs in it. This is good for
testing, and it also works in ARM templates.

Keep in mind that you need a JSON file where you define the structure of the
windowsEventLogs data source object used in the creation of DCRs. We created
that in the previous section, remember? Here is where we can use the OSSEM
Detection Model XPath queries file:


Azure-Sentinel2Go/ossem-attack.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)




"windowsEventLogs": [
  {
    "Name": "eventLogsDataSource",
    "scheduledTransferPeriod": "PT1M",
    "streams": [ ... ],
    "xPathQueries": [
      "Security!*[System[(EventID=5136 or EventID=5139)]]",
      "Security!*[System[(EventID=4656 or EventID=4661)]]",
      ...
    ]
  }
]



Once you have a JSON file similar to the one in the previous section, you
can run the script from a PowerShell console:

.\Create-DataCollectionRules.ps1 -WorkspaceId xxxx -WorkspaceResourceId xxxx -ResourceGroup MYGROUP -Kind Windows -DataCollectionRuleName WinDCR -DataSourcesFile FileExample.json -Location eastus -Verbose

One thing to remember is that you can only have 10 Data Collection Rules.
That is different from the number of XPath queries inside of one DCR. If you
attempt to create more than 10 DCRs, you will get the following error message:



VERBOSE: @{Headers=System.Object[]; Version=1.1; StatusCode=400; Method=PUT;  
Content={"error":{"code":"InvalidPayload","message":"Data collection rule is invalid","details":[{"code":"InvalidProperty","message":"'Data Sources. Windows Event Logs' item count should be 10 or less. Specified list has 11 items.","target":"Properties.DataSources.WindowsEventLogs"}]}}} 

Also, if you have duplicate XPath queries in one DCR,
you would get the following message:


VERBOSE: @{Headers=System.Object[]; Version=1.1; StatusCode=400; Method=PUT;  
Content={"error":{"code":"InvalidPayload","message":"Data collection rule is invalid","details":[{"code":"InvalidDataSource","message":"'X Path Queries' items must be unique (case-insensitively).  
Duplicate names: 


ARM Template: DeploymentScript Resource


Now that you know how to use a PowerShell script to
create DCRs directly to your Azure Sentinel instance, we can use it inside of
an ARM template and make it point to the JSON file that contains all the XPath
queries in the right format contributed by the OSSEM DM project.


This is the template I use to put it all together:


Azure-Sentinel2Go/Win10-DCR-DeploymentScript.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)


What about the DCR Associations? 

You still need to associate the DCR with a virtual machine. However, we can
keep doing that within the template by leveraging the DCRA Azure resource in a
linked template inside of the main template. Just in case you were wondering,
this is how I call the linked template from the main template:


Azure-Sentinel2Go/Win10-DCR-DeploymentScript.json at master ·
OTRF/Azure-Sentinel2Go (GitHub)




How Do I Deploy the New Template?

The same way we deployed the initial one. If you want the easy button, simply
browse to the URL below and click on the blue button highlighted in the image
below.

Link: Azure-Sentinel2Go/grocery-list/Win10/demos at master ·
OTRF/Azure-Sentinel2Go (GitHub)




Wait 5-10 mins!




Enjoy it!






That's it! You now know two ways to deploy and test the new data connector and
Data Collection Rules features with XPath query capabilities. I hope this was
useful. Those were all my notes while testing and developing templates to
create a lab environment so that you could also expedite the testing process!


Feedback is greatly appreciated! Thank you to the OSSEM team and the Open
Threat Research (OTR) community for helping us operationalize the research
they share with the community! Thank you, Jose Rodriguez.


Demo Links



Hackers target Cisco ASA devices

Researchers at Positive Technologies have published a proof-of-concept exploit for CVE-2020-3580. There are reports of researchers pursuing bug bounties using this exploit.

On October 21, 2020, Cisco released a security advisory and patches to address multiple cross-site scripting (XSS) vulnerabilities in its Adaptive Security Appliance (ASA) and Firepower Threat Defense (FTD) software web services. In April, Cisco updated the advisory to account for an incomplete fix of CVE-2020-3581.

Shortly after, Mikhail Klyuchnikov, a researcher at Positive Technologies, also tweeted that other researchers are chasing bug bounties for this vulnerability. Tenable has also received a report that attackers are exploiting CVE-2020-3580 in the wild.


All four vulnerabilities exist because Cisco ASA and FTD software web services do not sufficiently validate user-supplied inputs. To exploit any of these vulnerabilities, an attacker would need to convince “a user of the interface” to click on a specially crafted link. Successful exploitation would allow the attacker to execute arbitrary code within the interface and access sensitive, browser-based information.

These vulnerabilities affect only specific AnyConnect and WebVPN configurations.

Proof of concept

As mentioned earlier, there is a public PoC published by Positive Technologies on Twitter, which has gained significant attention.

Vendor response

Cisco has not issued any additional information or updates since the PoC was published.

What's New in Information Protection? A Microsoft Blog

 Throughout the last several months there have been
many new features, updates, and happenings in the world of Information
Protection at Microsoft. As we continue to build out more of this story, we
wanted to use this opportunity to connect with customers, partners, and more on
some of these updates to keep you informed and provide a single pane of glass
on everything we have been working on for the last several months. In addition,
we hope to give you some insight into the next big things being built within
MIP overall. 

Information Protection:

General Availability: Mandatory Labeling 

General Availability: Improvements for Exchange Online service-side auto-labeling 

Public Preview: Co-authoring 

  • Co-authoring and AutoSave on Microsoft Information
    Protection-encrypted documents 
  • Client-based automatic and recommended labeling on Mac 
  • Mandatory labeling requiring users to apply a label to
    their email and documents 
  • Availability of audit label activities in Activity Explorer 
  • Native support for variables and per-app content marking 
  • You can leverage co-authoring using: 
    • A production or test tenant 
    • Microsoft 365 apps with the following versions: 
      • Windows – Current Channel 16.0.14026.20270+ (2105) 
      • Mac: 16.50.21061301+ 
  • If the AIP Unified Labeling client is in use, verify that, in
    addition to the updated Microsoft 365 app, you are running the
    required version of the Unified Labeling client 
  • PLEASE NOTE: Co-authoring for native/built-in labeling will be
    added to the upcoming Current Channel release within 2 weeks 

Read more about the feature at Enable co-authoring for documents
encrypted by sensitivity labels in Microsoft 365 – Microsoft 365 Compliance |
Microsoft Docs
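When validating the minimum builds listed above, note that dotted build strings must be compared numerically, part by part; a plain string comparison would mis-order them. A minimal sketch, assuming the usual dotted-integer build format:

```python
def meets_minimum(build: str, minimum: str) -> bool:
    """Compare dotted build numbers numerically, component by component."""
    # Strip a trailing "+" (used in docs to mean "this build or later").
    parse = lambda s: tuple(int(p) for p in s.rstrip("+").split("."))
    return parse(build) >= parse(minimum)

# Windows Current Channel minimum for co-authoring (from the list above)
print(meets_minimum("16.0.14026.20270", "16.0.14026.20270"))  # True
print(meets_minimum("16.0.13929.20372", "16.0.14026.20270"))  # False
```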



Public Preview: AIP Audit Logs in Activity Explorer 


General Availability: Dynamic Markings with Variables within native labeling across all platforms 


General Availability: DLP Alerts 

Microsoft announces the General Availability of the
Microsoft Data Loss Prevention Alerts Dashboard. This latest
addition to Microsoft’s data loss prevention solution
provides customers with the ability to holistically investigate DLP policy
violations across:

  • Exchange 
  • SharePoint Online 
  • OneDrive 
  • Teams 
  • Devices 
  • Cloud apps 
  • On-premises file shares 

Learn more about the feature at: Learn about the data loss prevention
Alerts dashboard – Microsoft 365 Compliance | Microsoft Docs


Information Protection:

General Availability: Track and Revoke 

  • Document tracking gives administrators information about
    when a protected document was accessed. 
  • If necessary, both admins and users can revoke access to
    protected, tracked documents. 
  • This feature is available in later versions of the AIP UL client 


Public Preview: DLP On-Prem 

  • The DLP on-premises scanner crawls on-premises data at rest
    in file shares and SharePoint document libraries and folders for sensitive
    items that, if leaked, would pose a risk to your organization or a
    risk of a compliance policy violation. 
  • This gives you the visibility and control you need to
    ensure that sensitive items are used and protected properly, and to help
    prevent risky behavior that might compromise them. 
  • You need to leverage the scanner binaries from the corresponding
    AIP UL client version 




New Nobelium activity

 The Microsoft Threat Intelligence Center is tracking new activity from the NOBELIUM threat actor. Our investigation into the methods and tactics being used continues, but we have seen password spray and brute-force attacks and want to share some details to help our customers and communities protect themselves.  

This recent activity was mostly unsuccessful, and the majority of targets were not successfully compromised – we are aware of three compromised entities to date. All customers that were compromised or targeted are being contacted through our nation-state notification process.

This type of activity is not new, and we continue to recommend everyone take security precautions such as enabling multi-factor authentication to protect their environments from this and similar attacks. This activity was targeted at specific customers, primarily IT companies (57%), followed by government (20%), and smaller percentages for non-governmental organizations and think tanks, as well as financial services.  The activity was largely focused on US interests, about 45%, followed by 10% in the UK, and smaller numbers from Germany and Canada.  In all, 36 countries were targeted.

As part of our investigation into this ongoing activity, we also detected information-stealing malware on a machine belonging to one of our customer support agents with access to basic account information for a small number of our customers. The actor used this information in some cases to launch highly-targeted attacks as part of their broader campaign. We responded quickly, removed the access and secured the device. The investigation is ongoing, but we can confirm that our support agents are configured with the minimal set of permissions required as part of our Zero Trust “least privileged access” approach to customer information. We are notifying all impacted customers and are supporting them to ensure their accounts remain secure. 

This activity reinforces the importance, for everyone, of best-practice security precautions such as Zero Trust architecture and multi-factor authentication. Additional information on best-practice security priorities is listed below:  

The National Institute of Standards and Technology (NIST) has put together a draft Cybersecurity Framework Profile for Ransomware Risk Management.

Cybersecurity Framework Profile for Ransomware Risk Management (Preliminary Draft) 



Ransomware is a type of malicious attack where attackers encrypt an organization’s data and demand payment to restore access. In some instances, attackers may also steal an organization’s information and demand additional payment in return for not disclosing the information to authorities, competitors, or the public. Ransomware can disrupt or halt organizations’ operations. This report defines a Ransomware Profile, which identifies security objectives from the NIST Cybersecurity Framework that support preventing, responding to, and recovering from ransomware events. The profile can be used as a guide to managing the risk of ransomware events. That includes helping to gauge an organization’s level of readiness to mitigate ransomware threats and to react to the potential impact of events.

NOTE: NIST is adopting an agile and iterative methodology to publish this content, making it available as soon as possible, rather than delaying its release until all the elements are completed. NISTIR 8374 will have at least one additional public comment period before final publication.

Azure Sentinel blog: Moving Azure Activity Connector to an improved method




The Activity log is a platform log in Azure that provides insight into
subscription-level events. This includes such information as when a resource is
modified or when a virtual machine is started. You can view the Activity log in
the Azure portal or retrieve entries with PowerShell and the CLI. For additional
functionality, you should create a diagnostic setting to send the Activity log
to your Azure Sentinel workspace.
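For illustration, that diagnostic setting is just an ARM resource. The sketch below builds the REST URL and JSON body for a subscription-level diagnostic setting that forwards Activity log categories to a Log Analytics workspace. The subscription ID, workspace resource ID, setting name, and api-version here are placeholders/assumptions; in practice you would PUT this body with an authenticated client (Azure CLI, PowerShell, or the SDK):

```python
import json

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
workspace_id = (  # placeholder Log Analytics workspace resource ID
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg"
    "/providers/Microsoft.OperationalInsights/workspaces/sentinel-ws"
)

# Activity log categories to forward; the diagnostic settings pipeline
# supports all of them, including ServiceHealth (the legacy method did not).
categories = ["Administrative", "Security", "ServiceHealth", "Alert",
              "Recommendation", "Policy", "Autoscale", "ResourceHealth"]

url = (f"https://management.azure.com/subscriptions/{subscription_id}"
       "/providers/Microsoft.Insights/diagnosticSettings/send-to-sentinel"
       "?api-version=2021-05-01-preview")

body = {
    "properties": {
        "workspaceId": workspace_id,
        "logs": [{"category": c, "enabled": True} for c in categories],
    }
}

print(url)
print(json.dumps(body, indent=2))
```

Applying the same setting across many subscriptions is exactly what the Azure Policy approach described below automates.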



What changed?

The Azure Activity connector used a legacy method for collecting Activity
log events, prior to its adoption of the diagnostic settings pipeline. If
you’re using this legacy method, you are strongly encouraged to upgrade to the
new pipeline, which provides better functionality and consistency with resource logs.

Diagnostic settings send the same data as the legacy method used to send the
Activity log with some changes to the structure of the AzureActivity table.

Several columns have been deprecated in the updated
schema. They still exist in AzureActivity, but
they will contain no data. The replacement columns are not new; they
contain the same data as the deprecated columns, but in a different format.
If you have any private or internal content (such as hunting
queries, analytics rules, or workbooks) based on the deprecated columns, you
may need to modify it to make sure it points to the right columns.





Here are some of the key improvements resulting from the move to the
diagnostic settings pipeline:

  • Improved ingestion latency (event ingestion within 2-3
    minutes of occurrence instead of 15-20 minutes).
  • Improved reliability.
  • Improved performance.
  • Support for all categories of events logged by the
    Activity log service (the legacy mechanism supports only a subset – for
    example, no support for Service Health events).
  • Management at scale with Azure policy.
  • Support for MG-level activity logs (coming in preview now).


Set up the (new) Azure Activity connector

Setting up the new Azure Activity connector involves two main steps: first,
disconnect the existing subscriptions from the legacy method; then, connect
all the relevant subscriptions to the new diagnostic settings pipeline via
Azure Policy.








Please go to Connect Azure Activity log data to Azure Sentinel to learn
more about the new connector experience.