New from Microsoft
Microsoft is pleased to announce the publication of the Security Stack
Mappings for Azure project in partnership with the Center for Threat-Informed
Defense.
News from Microsoft
Title:
Vulnerability management for Linux now generally available
URL: https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/vulnerability-management-for-linux-now-generally-available/ba-p/2451145
Overview:
In May we announced the support for Linux across our threat and
vulnerability management capabilities in Microsoft Defender for Endpoint.
Today, we are excited to announce that threat and vulnerability management for
Linux is now generally available across Red
Hat, Ubuntu, CentOS, SUSE, and Oracle, with support for Debian coming
soon. In addition to Linux, the threat and vulnerability management
capabilities already support macOS and Windows, with support for Android and
iOS coming later this summer to further expand our support of third-party
platforms.
Vulnerability Management plays a crucial role in monitoring an
organization’s overall security posture. That’s why we continue to expand our
cross-platform support to equip security teams with real-time insights into
risk with continuous vulnerability discovery, intelligent prioritization, and
the ability to seamlessly remediate vulnerabilities for all their platforms.
With the general availability of support for Linux, organizations can now
review vulnerabilities within installed apps across the Linux OS and issue
remediation tasks for affected devices.
Image 1:
Software inventory page in the vulnerability management console, showing
various Linux platforms
Image 2:
Software inventory page in the vulnerability management portal, showing glibc
across various Linux systems
Support for the various Linux platforms in threat and vulnerability management
closely follows what is available across our Endpoint Detection and Response
(EDR) capabilities. This alignment ensures a consistent experience for
Microsoft Defender for Endpoint customers, as we continue to expand our
cross-platform support.
More information and feedback
The threat and vulnerability management capabilities are part of Microsoft Defender for Endpoint and enable
organizations to effectively identify, assess, and remediate endpoint
weaknesses to reduce organizational risk.
Check out our documentation for a complete overview of supported
operating systems and platforms.
A new Microsoft Security blog for your consideration.
Overview:
Last week, on Monday June 14th, 2021, a new
version of the Windows Security Events data connector reached public preview. This is
the first data connector created leveraging the new
generally available Azure Monitor Agent (AMA) and Data Collection Rules (DCR) features from the Azure
Monitor ecosystem. As with any other new feature in Azure Sentinel, I
wanted to expedite the testing process and empower others in the InfoSec
community through a lab environment to learn more about it.
In this post, I will talk about the
new features of the new data connector and how to automate the
deployment of an Azure Sentinel instance with the
connector enabled, the creation and association of DCRs
and installation of the AMA on a Windows workstation. This is an
extension of a blog post I wrote last year (2020), where I covered
the collection of Windows security events via the Log Analytics Agent (Legacy).
I highly recommend reading the following blog posts
to learn more about the announcement of the new Azure Monitor
features and the Windows Security Events data connector:
Azure Sentinel2Go is an open-source project maintained
and developed by the Open Threat Research community to automate the deployment of an Azure
Sentinel research lab and a data ingestion pipeline to consume
pre-recorded datasets. Every environment I release through this
initiative is an environment I use and test while performing research as part
of my role in the MSTIC R&D team. Therefore, I am constantly trying to
improve the deployment templates as I cover more scenarios. Feedback
is greatly appreciated.
According to Microsoft docs, the Windows Security Events connector lets you stream
security events from any Windows server (physical or virtual, on-premises or in
any cloud) connected to your Azure Sentinel workspace. After last week,
there are now two versions of this connector:
In your Azure Sentinel data connector’s view, you
can now see both connectors:
Besides using the Log Analytics Agent to
collect and ship events, the old connector uses the Data Sources resource from the Log Analytics Workspace resource to set the collection tier
of Windows security events.
The new connector, on the other hand, uses a combination of Data Collection Rules (DCR) and Data Collection
Rule Associations (DCRA).
DCRs define what data to collect and where it should be
sent. Here is where we can set it to send data to the Log Analytics
workspace backing our Azure Sentinel instance.
In order to apply a DCR to a virtual machine, one
needs to create an association between the machine and the rule. A
virtual machine may have an association with multiple DCRs, and a DCR
may have multiple virtual machines associated with it.
For more detailed information about
setting up the Windows Security Events connector with both Log Analytics Agent
and Azure Monitor Agents manually,
take a look at this document.
The old connector is not flexible enough to let you
choose which specific events to collect. For example, these are the
only options to collect data from Windows machines with the old connector:
All events –
All Windows security and AppLocker events.
According to Microsoft docs, these are
the pre-defined security event collection groups
depending on the tier set:
On the other hand, the new connector allows custom data collection via XPath queries. These
XPath queries
are defined during the creation of the data collection rule and
are written in the form of LogName!XPathQuery. Here
are a few examples:
Security!*[System[(EventID=4624)]]
Security!*[System[(EventID=4624 or EventID=4688)]]
Security!*[System[(EventID=4688)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]
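The `LogName!XPathQuery` convention can be illustrated with a minimal Python sketch. The `!` separator is the documented DCR format; the helper function itself is ours, not part of any Microsoft SDK:

```python
# Minimal sketch: split a DCR-style "LogName!XPathQuery" entry into its
# two parts. The '!' separator is the documented DCR convention; the
# helper function is illustrative, not part of any Microsoft tooling.
def split_dcr_query(entry: str) -> tuple:
    log_name, sep, xpath = entry.partition("!")
    if not sep or not xpath:
        raise ValueError(f"expected 'LogName!XPathQuery', got: {entry!r}")
    return log_name, xpath

log, query = split_dcr_query("Security!*[System[(EventID=4624)]]")
print(log)    # Security
print(query)  # *[System[(EventID=4624)]]
```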
You can select the custom
option to select which events to stream:
Based on the new connector docs, make sure to query only Windows Security and AppLocker
logs. Events from other Windows logs, or from security logs from other
environments, may not adhere to the Windows Security Events schema and won’t be
parsed properly, in which case they won’t be ingested to your workspace.
Also, the Azure Monitor agent supports XPath
queries for XPath version 1.0 only. I recommend reading the XPath 1.0 limitations documentation before writing XPath queries.
XPath stands for XML (Extensible
Markup Language) Path language, and it is used to
explore and model XML documents as a tree of nodes. Nodes can be
represented as elements,
attributes, and
text.
In the image below, we can see a few node examples
in the XML representation of a Windows security event:
XPath queries are used to search for patterns in XML
documents and leverage path expressions and predicates
to find a node or filter specific nodes that contain a specific
value. Wildcards such as ‘*’
and ‘@’
are used to select nodes and predicates are always embedded in square brackets “[]”.
Using our previous Windows Security event
XML example, we can process Windows Security events using the
wildcard ‘*’ at
the `Element` node level.
The example below walks
through two ‘Element’ nodes
to get to the ‘Text’
node of value ‘4688’.
You can test this basic ‘XPath’ query via
PowerShell.
Get-WinEvent -LogName Security -FilterXPath '*[System[EventID=4688]]'
As shown before, ‘Element’ nodes can contain ‘Attributes’ and we can
use the wildcard ‘@’
to search for ‘Text’
nodes at the ‘Attribute’
node level. The example below extends the
previous one and adds a filter to search for a specific ‘Attribute’ node that
contains the following text: 'C:\Windows\System32\cmd.exe'.
Once again, you can test the XPath query via
PowerShell as Administrator.
$XPathQuery = "*[System[EventID=4688]] and *[EventData[Data[@Name='ParentProcessName']='C:\Windows\System32\cmd.exe']]"
Get-WinEvent -LogName Security -FilterXPath $XPathQuery
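If you want to sanity-check the same predicate logic without a Windows box, Python's standard-library ElementTree implements a small subset of XPath 1.0. This is only a sketch against a trimmed stand-in for the event XML, not the full Windows Security event schema:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for Windows Security event XML (illustrative only).
sample = """
<Events>
  <Event>
    <System><EventID>4688</EventID></System>
    <EventData><Data Name="ParentProcessName">C:\\Windows\\System32\\cmd.exe</Data></EventData>
  </Event>
  <Event>
    <System><EventID>4624</EventID></System>
  </Event>
</Events>
"""
root = ET.fromstring(sample)

# 'Text' node filter at the Element level: System nodes with EventID 4688
by_id = root.findall("./Event/System[EventID='4688']")
# 'Text' node filter at the Attribute level: Data nodes named ParentProcessName
by_attr = root.findall("./Event/EventData/Data[@Name='ParentProcessName']")
print(len(by_id), by_attr[0].text)
```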
Every time you add a filter through the Event
Viewer UI, you can also get to the XPath query representation of the
filter. The XPath query is part of a QueryList node which
allows you to define and run multiple queries at once.
We can take our previous example where we searched
for a specific attribute and run it through the Event Viewer Filter
XML UI.
<QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">*[System[(EventID=4688)]] and *[EventData[Data
[@Name='ParentProcessName']='C:\Windows\System32\cmd.exe']]</Select>
</Query>
</QueryList>
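Since the QueryList wrapper is plain XML, generating one from a list of XPath queries is straightforward. Here is an illustrative Python helper (the Query Id and Security log path follow the example above; the function itself is an assumption, not an official tool):

```python
import xml.etree.ElementTree as ET

# Illustrative helper: wrap XPath queries in the Event Viewer
# <QueryList> format shown above. Query Id "0" and the Security log
# path mirror the example.
def build_query_list(queries, path="Security"):
    root = ET.Element("QueryList")
    query = ET.SubElement(root, "Query", Id="0", Path=path)
    for xpath in queries:
        ET.SubElement(query, "Select", Path=path).text = xpath
    return ET.tostring(root, encoding="unicode")

doc = build_query_list(["*[System[(EventID=4688)]]"])
print(doc)
```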
Now that we have covered some of the main changes
and features of the new version of the Windows Security Events data connector,
it is time to show you how to create a lab environment for you to test your own
XPath queries for research purposes and before pushing
them to production.
As mentioned earlier in this post, the old connector uses the Data Sources resource from the Log Analytics Workspace resource to set the collection tier of Windows
security events.
This is the Azure Resource Manager (ARM) template I
use in Azure-Sentinel2Go to set it up:
Azure-Sentinel2Go/securityEvents.json
at master · OTRF/Azure-Sentinel2Go (github.com)
Data
Sources Azure Resource
{
"type": "Microsoft.OperationalInsights/workspaces/dataSources",
"apiVersion": "2020-03-01-preview",
"location": "eastus",
"name": "WORKSPACE/SecurityInsightsSecurityEventCollectionConfiguration",
"kind": "SecurityInsightsSecurityEventCollectionConfiguration",
"properties": {
"tier": "All",
"tierSetMethod": "Custom"
}
}
However, the new connector uses a combination of Data Collection Rules (DCR) and Data Collection Rule Associations (DCRA).
This is the ARM template I use to
create data collection rules:
Azure-Sentinel2Go/creation-azureresource.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
Data
Collection Rules Azure Resource
{
"type": "microsoft.insights/dataCollectionRules",
"apiVersion": "2019-11-01-preview",
"name": "WindowsDCR",
"location": "eastus",
"tags": {
"createdBy": "Sentinel"
},
"properties": {
"dataSources": {
"windowsEventLogs": [
{
"name": "eventLogsDataSource",
"scheduledTransferPeriod": "PT5M",
"streams": [
"Microsoft-SecurityEvent"
],
"xPathQueries": [
"Security!*[System[(EventID=4624)]]"
]
}
]
},
"destinations": {
"logAnalytics": [
{
"name": "SecurityEvent",
"workspaceId": "AZURE-SENTINEL-WORKSPACEID",
"workspaceResourceId": "AZURE-SENTINEL-WORKSPACERESOURCEID"
}
]
},
"dataFlows": [
{
"streams": [
"Microsoft-SecurityEvent"
],
"destinations": [
"SecurityEvent"
]
}
]
}
}
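Before deploying a DCR template like the one above, it can help to cross-check that every destination referenced in `dataFlows` is actually declared under `destinations`. A minimal Python sketch, assuming the field names from the ARM template above (the checker itself is ours):

```python
# Sketch: cross-check a DCR payload before deployment. Destination names
# referenced in dataFlows must match names declared under
# destinations.logAnalytics (field names follow the ARM template above).
def check_dcr(dcr: dict) -> list:
    declared = {d["name"] for d in dcr["properties"]["destinations"]["logAnalytics"]}
    errors = []
    for flow in dcr["properties"]["dataFlows"]:
        for dest in flow["destinations"]:
            if dest not in declared:
                errors.append(f"dataFlow references unknown destination: {dest}")
    return errors

example = {
    "properties": {
        "destinations": {"logAnalytics": [{"name": "SecurityEvent"}]},
        "dataFlows": [{"destinations": ["SecurityEvent"]}],
    }
}
print(check_dcr(example))  # []
```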
One additional step in the setup of the new
connector is the association of the DCR with Virtual Machines.
This is the ARM template I use
to create DCRAs:
Azure-Sentinel2Go/association.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
Data
Collection Rule Associations Azure Resource
{
"name": "WORKSTATION5/microsoft.insights/WindowsDCR",
"type": "Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations",
"apiVersion": "2019-11-01-preview",
"location": "eastus",
"properties": {
"description": "Association of data collection rule. Deleting this association will break the data collection for this virtual machine.",
"dataCollectionRuleId": "DATACOLLECTIONRULEID"
}
}
What
about the XPath Queries?
As shown in the previous section, the XPath query
is part of the “dataSources”
section of the data collection rule resource. It is defined under the ‘windowsEventLogs’ data
source type.
"dataSources": {
"windowsEventLogs": [
{
"name": "eventLogsDataSource",
"scheduledTransferPeriod": "PT5M",
"streams": [
"Microsoft-SecurityEvent"
],
"xPathQueries": [
"Security!*[System[(EventID=4624)]]"
]
}
]
}
We can easily add all those
ARM templates to an ‘Azure Sentinel & Win10 Workstation’ basic template. We just need to make
sure we install the Azure
Monitor Agent instead of the Log
Analytics one, and enable the system-assigned managed identity in the Azure VM.
Template
Resource List to Deploy:
The following ARM template can be used for our
first basic scenario:
Azure-Sentinel2Go/Win10-DCR-AzureResource.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
You can deploy the ARM template via a “Deploy to Azure” button or via Azure CLI.
az login
az group create -n AzSentinelDemo -l eastus
az deployment group create -f ./Win10-DCR-AzureResource.json -g MYRESOURCEGROUP --parameters adminUsername=MYUSER adminPassword=MYUSERPASSWORD allowedIPAddresses=x.x.x.x
Whether you use the UI or the CLI, you can monitor
your deployment by going to Resource Group > Deployments:
Once your environment is deployed successfully, I
recommend verifying every resource that was deployed.
You will see the Windows Security Events (Preview) data
connector enabled with a custom Data
Collection Rules (DCR):
If you edit the custom DCR, you will see the XPath
query and the resource that it got associated with. The image below shows the
association of the DCR with a machine named workstation5.
You can also see that the data collection is set to custom and, for this
example, we only set the event stream to collect events with Event ID 4624.
I recommend connecting to the Windows workstation over RDP using its
public IP address. Go to your resource group and select the Azure VM;
you should see the public IP address on the right side of the screen.
Logging in will generate authentication events, which will be captured by
the custom DCR associated with the endpoint.
Go back to your Azure Sentinel, and you should
start seeing some events on the Overview page:
Go to Logs
and run the following KQL query:
SecurityEvent
| summarize count() by EventID
As you can see in the image below, only events with
Event ID 4624 were collected by the Azure Monitor Agent.
You might be asking yourself, “Who would only want
to collect events with Event ID 4624 from a Windows endpoint?”.
Believe it or not, there are network environments that, due to bandwidth
constraints, can only collect certain events. Therefore, this custom
filtering capability is amazing and very useful to cover more use cases and
even save storage!
Now that we know the internals of the new connector
and how to deploy a simple lab environment, we can test multiple XPath queries
depending on your organization and research use cases and bandwidth
constraints. There are a few projects that you can use.
One of many repositories out there that
contain XPath queries is the 'windows-event-forwarding' project from Palantir. The XPath queries live inside the Windows
Event Forwarding (WEF) subscriptions. We can take all the subscriptions
and parse them programmatically to extract the XPath
queries, saving them in a format that can be used as part of the
automated deployment.
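That parsing step can be sketched in a few lines of Python. This assumes WEF subscriptions embed the standard QueryList format; the sample XML below is a toy stand-in, not an actual Palantir subscription:

```python
import xml.etree.ElementTree as ET

# Sketch: pull XPath queries out of a WEF subscription's <QueryList>
# and emit them in the DCR "LogName!XPathQuery" form. The sample is a
# toy stand-in for a real subscription file.
subscription_query = """
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[System[(EventID=4688)]]</Select>
    <Select Path="Security">*[System[(EventID=4624)]]</Select>
  </Query>
</QueryList>
"""

def extract_dcr_queries(xml_text: str) -> list:
    root = ET.fromstring(xml_text)
    return [f"{sel.get('Path')}!{sel.text}" for sel in root.iter("Select")]

print(extract_dcr_queries(subscription_query))
```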
You can run the following steps in this document
available in Azure Sentinel To-go and extract XPath queries from the Palantir
project.
Azure-Sentinel2Go/README.md at master · OTRF/Azure-Sentinel2Go
(github.com)
From a community perspective, another great
resource you can use to extract XPath queries from is the Open
Source Security Event Metadata (OSSEM) Detection Model (DM) project, a community-driven effort to help
researchers model attack behaviors from a data perspective and
share relationships identified in security events across several operating
systems.
One of the use cases from this initiative
is to map all security events in the project to the new ‘Data Sources’ objects provided by the MITRE ATT&CK
framework. In the image below, we can
see how the OSSEM DM project provides an interactive document (.CSV) for researchers to explore
the mappings (Research output):
One of the advantages of this project over others is
that all its data relationships are in YAML format, which makes
them easy to translate to other formats such as XML. We can use
the Event IDs defined in each data
relationship documented in OSSEM DM and create XML files with XPath queries
in them.
Exploring OSSEM DM Relationships (YAML Files)
Let’s say we want to use relationships related to scheduled jobs in Windows.
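As a sketch of that translation, a relationship's event IDs can be turned into the XPath form used throughout this post. This is illustrative only: OSSEM DM's real YAML schema carries more fields than event IDs, and 4698/4699 (scheduled task created/deleted) are used here simply as plausible scheduled-job examples:

```python
# Illustrative sketch: turn a list of event IDs from an OSSEM DM
# relationship into a single XPath query of the form used in this post.
def event_ids_to_xpath(event_ids) -> str:
    predicate = " or ".join(f"EventID={eid}" for eid in event_ids)
    return f"*[System[({predicate})]]"

# 4698/4699 (scheduled task created/deleted) as example inputs
print(event_ids_to_xpath([4698, 4699]))  # *[System[(EventID=4698 or EventID=4699)]]
```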
Translate YAML Files to XML Query Lists
We can process all the YAML files and export the
data as XML files. One thing that I like about this
OSSEM DM use case is that we can group the XML files by ATT&CK data sources. This can help organizations
organize their data collection in a way that can be mapped to detections or
other ATT&CK based frameworks internally.
We can use the QueryList format to document all 'scheduled jobs' relationship
XPath queries in one XML file.
I like to document my XPath
queries first in this format because it expedites the validation
process of the XPath queries locally on a Windows endpoint. You can use that
XML file in a PowerShell command to query Windows Security events and make
sure there are no syntax issues:
[xml]$scheduledjobs = Get-Content .\scheduled-job.xml
Get-WinEvent -FilterXml $scheduledjobs
Translate XML Query Lists to DCR Data Source:
Finally, once the XPath queries have been
validated, we could simply extract them from the XML files and put them in a
format that could be used in ARM templates to create DCRs. Do you
remember the dataSources property
of the DCR Azure resource we talked about earlier? What if we could get
the values of the windowsEventLogs data
source directly from a file instead of hardcoding them in an ARM template? The
example below is how it was previously being hardcoded.
"dataSources": {
"windowsEventLogs": [
{
"name": "eventLogsDataSource",
"scheduledTransferPeriod": "PT5M",
"streams": [
"Microsoft-SecurityEvent"
],
"xPathQueries": [
"Security!*[System[(EventID=4624)]]"
]
}
]
}
We can use the XML files created after processing the
OSSEM DM relationships mapped to ATT&CK data sources to create
the following document, and pass the URL of the document as a parameter in
an ARM template to deploy our lab environment:
Azure-Sentinel2Go/ossem-attack.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
The OSSEM team is contributing and
maintaining the JSON file from the previous section in
the Azure Sentinel2Go repository. However, if you want to
go through the whole process on your own, Jose Rodriguez (@Cyb3rpandah) was kind enough to write every single
step to get to that output file in the following blog post:
In our initial ARM template, we had the XPath query as an
ARM template variable as shown in the image below.
We could also have it as a template parameter.
However, it is not
flexible enough to define multiple DCRs or even update the whole DCR Data
Source object (Think about future coverage beyond Windows
logs).
For more complex use cases, I would use the DCR Create API. This can be executed via a PowerShell script
which can also be used inside of an ARM template via deployment scripts. Keep in mind that the deployment script resource requires
an identity to execute the script. This managed identity of type user-assigned can be created at
deployment time and used to create the DCRs programmatically.
If you have an Azure Sentinel instance without the
data connector enabled, you can use the following PowerShell script to
create DCRs in it. This is good for testing and it also works in ARM templates.
Keep in mind that you need a
file defining the structure of the windowsEventLogs data source object
used in the creation of DCRs. We created that in the previous section, remember?
Here is where we can use the OSSEM Detection
Model XPath Queries File 😉
Azure-Sentinel2Go/ossem-attack.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
FileExample.json
{
"windowsEventLogs": [
{
"Name": "eventLogsDataSource",
"scheduledTransferPeriod": "PT1M",
"streams": [
"Microsoft-SecurityEvent"
],
"xPathQueries": [
"Security!*[System[(EventID=5141)]]",
"Security!*[System[(EventID=5137)]]",
"Security!*[System[(EventID=5136 or EventID=5139)]]",
"Security!*[System[(EventID=4688)]]",
"Security!*[System[(EventID=4660)]]",
"Security!*[System[(EventID=4656 or EventID=4661)]]",
"Security!*[System[(EventID=4670)]]"
]
}
]
}
Run Script
Once you have a JSON file similar to the one in the previous section, you
can run the script from a PowerShell console:
.\Create-DataCollectionRules.ps1 -WorkspaceId xxxx -WorkspaceResourceId xxxx -ResourceGroup MYGROUP -Kind Windows -DataCollectionRuleName WinDCR -DataSourcesFile FileExample.json -Location eastus -Verbose
One thing to remember is that you can only have 10
Data Collection Rules. That limit is
separate from the number of XPath queries inside of one DCR. If you attempt to create more than 10, you
will get the following error message:
ERROR
VERBOSE: @{Headers=System.Object[]; Version=1.1; StatusCode=400; Method=PUT;
Content={"error":{"code":"InvalidPayload","message":"Data collection rule is invalid","details":[{"code":"InvalidProperty","message":"'Data Sources. Windows Event Logs' item count should be 10 or less. Specified list has 11 items.","target":"Properties.DataSources.WindowsEventLogs"}]}}}
Also, if you have duplicate XPath queries in one DCR,
you would get the following message:
ERROR
VERBOSE: @{Headers=System.Object[]; Version=1.1; StatusCode=400; Method=PUT;
Content={"error":{"code":"InvalidPayload","message":"Data collection rule is invalid","details":[{"code":"InvalidDataSource","message":"'X Path Queries' items must be unique (case-insensitively).
Duplicate names:
Security!*[System[(EventID=4688)]],Security!*[System[(EventID=4656)]].","target":"Properties.DataSources.WindowsEventLogs[0].XPathQueries"}]}}}
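Both of these service-side errors can be caught locally before calling the API. Here is a minimal Python sketch that checks a data-sources file shaped like FileExample.json; the field names are taken from that example and the checks mirror the two error messages above:

```python
# Minimal local validator mirroring the two API errors above: more than
# 10 windowsEventLogs items, and case-insensitive duplicate XPath
# queries inside an item. Field names follow FileExample.json.
def validate_data_sources(data):
    problems = []
    sources = data["windowsEventLogs"]
    if len(sources) > 10:
        problems.append(f"{len(sources)} windowsEventLogs items (max 10)")
    for source in sources:
        seen = set()
        for q in source["xPathQueries"]:
            if q.lower() in seen:
                problems.append(f"duplicate query: {q}")
            seen.add(q.lower())
    return problems

print(validate_data_sources(
    {"windowsEventLogs": [{"xPathQueries": [
        "Security!*[System[(EventID=4688)]]",
        "security!*[System[(EventID=4688)]]"]}]}
))
```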
Now that you know how to use a PowerShell script to
create DCRs directly to your Azure Sentinel instance, we can use it inside of
an ARM template and make it point to the JSON file that contains all the XPath
queries in the right format contributed by the OSSEM DM project.
This is the template I use to put it all together:
Azure-Sentinel2Go/Win10-DCR-DeploymentScript.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
You still need to associate the DCR with a virtual
machine. However, we can keep doing that within the template by leveraging the DCRA Azure resource as a linked template inside of the main
template. In case you were wondering how I call the linked template from
the main template, I do it this way:
Azure-Sentinel2Go/Win10-DCR-DeploymentScript.json at master ·
OTRF/Azure-Sentinel2Go (github.com)
Deploy it the same way we deployed the initial one. If you want the easy button, simply
browse to the URL below and click on the blue button highlighted in the image
below:
Link: Azure-Sentinel2Go/grocery-list/Win10/demos at master ·
OTRF/Azure-Sentinel2Go (github.com)
Wait 5-10 mins!
Enjoy it!
That’s it! You now know two ways to deploy and
test the new data connector and Data
Collection Rules features with XPath queries capabilities. I hope this
was useful. Those were all my notes while testing and developing templates
to create a lab environment so that you could also expedite the testing process!
Feedback is greatly appreciated! Thank you to the OSSEM team and the Open Threat Research
(OTR) community for helping us operationalize the research they share with
the community! Thank you, Jose Rodriguez.
Researchers at Positive Technologies have published a proof-of-concept exploit for CVE-2020-3580. There are reports of researchers pursuing bug bounties using this exploit.
On October 21, 2020, Cisco released a security advisory and patches to address multiple cross-site scripting (XSS) vulnerabilities in its Adaptive Security Appliance (ASA) and Firepower Threat Defense (FTD) software web services. In April, Cisco updated the advisory to account for an incomplete fix of CVE-2020-3581.
Shortly after, Mikhail Klyuchnikov, a researcher at Positive Technologies, also tweeted that other researchers are chasing bug bounties for this vulnerability. Tenable has also received a report that attackers are exploiting CVE-2020-3580 in the wild.
All four vulnerabilities exist because Cisco ASA and FTD software web services do not sufficiently validate user-supplied inputs. To exploit any of these vulnerabilities, an attacker would need to convince “a user of the interface” to click on a specially crafted link. Successful exploitation would allow the attacker to execute arbitrary code within the interface and access sensitive, browser-based information.
These vulnerabilities affect only specific AnyConnect and WebVPN configurations:
As mentioned earlier, there is a public PoC published by Positive Technologies on Twitter, which has gained significant attention.
Cisco has not issued any additional information or updates since the PoC was published.
Throughout the last several months there have been
many new features, updates, and happenings in the world of Information
Protection at Microsoft. As we continue to build out more of this story, we
wanted to use this opportunity to connect with customers, partners, and more on
some of these updates to keep you informed and provide a single pane of glass
on everything we have been working on for the last several months. In addition,
we hope to give you some insight into the next big things being built within
MIP overall.
Microsoft Information Protection:
General Availability: Mandatory Labeling
General Availability: Improvements
for Exchange Online service side auto-labeling
Public Preview: Co-authoring
Read more about the feature at Enable co-authoring for documents
encrypted by sensitivity labels in Microsoft 365 – Microsoft 365 Compliance |
Microsoft Docs
Public Preview: AIP Audit Logs in Activity Explorer
General Availability: Dynamic Markings with Variables within native labeling across all platforms
GA: DLP Alerts
Microsoft announces the General Availability of the
Microsoft Data Loss Prevention Alerts Dashboard. This latest
addition to Microsoft's data loss prevention solution
provides customers with the ability to holistically investigate DLP policy
violations across:
Learn more about the feature at: Learn about the data loss prevention
Alerts dashboard – Microsoft 365 Compliance | Microsoft Docs
Azure Information Protection:
GA: Track and Revoke
Public Preview: DLP On-Prem
The Microsoft Threat Intelligence Center is tracking new activity from the NOBELIUM threat actor. Our investigation into the methods and tactics being used continues, but we have seen password spray and brute-force attacks and want to share some details to help our customers and communities protect themselves.
This recent activity was mostly unsuccessful, and the majority of targets were not successfully compromised – we are aware of three compromised entities to date. All customers that were compromised or targeted are being contacted through our nation-state notification process.
This type of activity is not new, and we continue to recommend everyone take security precautions such as enabling multi-factor authentication to protect their environments from this and similar attacks. This activity was targeted at specific customers, primarily IT companies (57%), followed by government (20%), and smaller percentages for non-governmental organizations and think tanks, as well as financial services. The activity was largely focused on US interests, about 45%, followed by 10% in the UK, and smaller numbers from Germany and Canada. In all, 36 countries were targeted.
As part of our investigation into this ongoing activity, we also detected information-stealing malware on a machine belonging to one of our customer support agents with access to basic account information for a small number of our customers. The actor used this information in some cases to launch highly-targeted attacks as part of their broader campaign. We responded quickly, removed the access and secured the device. The investigation is ongoing, but we can confirm that our support agents are configured with the minimal set of permissions required as part of our Zero Trust “least privileged access” approach to customer information. We are notifying all impacted customers and are supporting them to ensure their accounts remain secure.
This activity reinforces the importance of best practice security precautions such as Zero-trust architecture and multi-factor authentication and their importance for everyone. Additional information on best practice security priorities is listed below:
Announcement
Ransomware is a type of malicious attack where attackers encrypt an organization’s data and demand payment to restore access. In some instances, attackers may also steal an organization’s information and demand additional payment in return for not disclosing the information to authorities, competitors, or the public. Ransomware can disrupt or halt organizations’ operations. This report defines a Ransomware Profile, which identifies security objectives from the NIST Cybersecurity Framework that support preventing, responding to, and recovering from ransomware events. The profile can be used as a guide to managing the risk of ransomware events. That includes helping to gauge an organization’s level of readiness to mitigate ransomware threats and to react to the potential impact of events.
NOTE: NIST is adopting an agile and iterative methodology to publish this content, making it available as soon as possible, rather than delaying its release until all the elements are completed. NISTIR 8374 will have at least one additional public comment period before final publication.
Title: Moving Azure Activity Connector to an improved method
Overview:
The Activity log is a platform log in Azure that provides insight into
subscription-level events. This includes such information as when a resource is
modified or when a virtual machine is started. You can view the Activity log in
the Azure portal or retrieve entries with PowerShell and CLI. For additional
functionality, you should create a diagnostic setting to send the Activity log
to your Azure Sentinel.
The Azure Activity connector used a legacy method for collecting Activity
log events, prior to its adoption of the diagnostic settings pipeline. If
you’re using this legacy method, you are strongly encouraged to upgrade to the
new pipeline, which provides better functionality and consistency with resource
logs.
Diagnostic settings send the same data as the legacy method used to send the
Activity log with some changes to the structure of the AzureActivity table.
The columns in the following table have been deprecated in the updated
schema. They still exist in AzureActivity, but they will have no data.
The replacement columns are not new; they contain the same data as the
deprecated columns but in a different format. If you have any private or
internal content (such as hunting queries, analytics rules, workbooks,
etc.) based on the deprecated columns, you may need to modify it to point
to the new columns.
Here are some of the key improvements resulting from the move to the
diagnostic settings pipeline:
Moving to the new Azure Activity connector involves two main steps: disconnect
the existing subscriptions from the legacy method, and then connect all the
relevant subscriptions to the new diagnostic settings pipeline via Azure Policy.
Please go to Connect Azure Activity log data to Azure Sentinel to learn
more about the new connector experience.
URL: https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/new-threat-amp-vulnerability-management-apis-create-reports/ba-p/2445813
Published On: 2021-06-14
Overview:
We are excited to announce the general availability of a new set of APIs for
Microsoft threat and vulnerability management that allow security
administrators to drive efficiencies and customize their vulnerability
management program. While previous versions were dependable and feature-rich,
we built the new APIs with enterprises in mind that are looking for economies
of scale within their vulnerability management program and need to handle large
datasets and device inventories daily. These new APIs let you design and export
customized reports and dashboards, automate tasks, and build new integrations
or leverage existing ones with third-party tools.
Security teams will get detailed information as part of a full data snapshot
or they can limit the dataset to only include changes since the last data
download for a more focused view. Information from the following threat and
vulnerability management areas is included:
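The full-snapshot versus delta pattern can be sketched as follows. Note that the endpoint paths and parameter names here are illustrative approximations, not the authoritative API surface; consult the Microsoft Defender for Endpoint API reference for the exact routes.

```python
# Sketch: request either a full vulnerability snapshot or only the changes
# since the last download. Endpoint paths and parameter names are
# illustrative; check the Defender for Endpoint API docs for the real ones.
from typing import Optional
from urllib.parse import urlencode

API_ROOT = "https://api.securitycenter.microsoft.com/api"

def export_url(full_snapshot: bool, since_iso: Optional[str] = None,
               page_size: int = 10_000) -> str:
    if full_snapshot:
        # Full snapshot: page through the complete dataset.
        return (f"{API_ROOT}/machines/SoftwareVulnerabilitiesExport?"
                + urlencode({"pageSize": page_size}))
    # Delta: only records that changed since the given timestamp.
    return (f"{API_ROOT}/machines/SoftwareVulnerabilityChangesByMachine?"
            + urlencode({"sinceTime": since_iso, "pageSize": page_size}))

snapshot = export_url(True)
delta = export_url(False, since_iso="2021-06-01T00:00:00Z")
```

A daily job might take one full snapshot per week and deltas in between, keeping transfer volumes manageable for large device inventories.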
Now let’s look at how you can use these new APIs to boost and customize your
vulnerability management program.
Customized reports and dashboards enable you to pool the most meaningful
data and insights about your organization’s security posture into a more
focused view based on what your organization or specific teams and stakeholders
need to know and care about most. Custom reports can increase the actionability
of information and improve efficiency across teams, because they reduce the
workload of busy security teams and allow them to focus on the most critical
vulnerabilities.
Before building custom views using tools such as PowerBI and Excel, you can
enrich the native datasets provided by Microsoft’s threat and vulnerability
management solution with additional data from Microsoft Defender for Endpoint
or a third-party tool of your choice.
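Such enrichment is essentially a join on a device identifier before the data reaches Power BI or Excel. A minimal sketch, with field names invented for illustration:

```python
# Sketch: enrich exported vulnerability records with device metadata from
# another source (e.g. a CMDB) before building custom views.
# All field names here are illustrative, not the actual API schema.

def enrich(vulns, device_info):
    """Join vulnerability records with per-device metadata on deviceId."""
    by_id = {d["deviceId"]: d for d in device_info}
    return [
        {**v, "owner": by_id.get(v["deviceId"], {}).get("owner", "unknown")}
        for v in vulns
    ]

vulns = [{"deviceId": "dev-1", "cveId": "CVE-2021-0001", "severity": "Critical"}]
cmdb = [{"deviceId": "dev-1", "owner": "infra-team"}]
enriched = enrich(vulns, cmdb)   # each record now carries its owner
```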
In addition, these reports/dashboards give you an easy way to report key
information and trends to top management to track business KPIs and provide
meaningful insights on the overall status of the vulnerability management
program in your organization.
With a custom interface you can show the information that your teams need
and nothing more, creating a simpler task view or list of day-to-day work
items. It provides flexibility in using any of the solution’s components, such
as vulnerability report, missing security updates, installed software,
end-of-support products, and operating systems, and combining them with
advanced filtering capabilities. This can help optimize and streamline the end
user experience according to your organization’s needs.
Vulnerabilities report
This report gives you a snapshot of the security posture of your
organization and allows you to identify the most critical and exploitable
vulnerabilities, see the most exposed devices distributed by OS, or drill down
into specific CVEs. You can use filters to show when a CVE was detected for
the first time, or use advanced properties such as Device tags, Device groups,
Device health (active/inactive), and more.
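The filtering the vulnerabilities report applies can be expressed directly over an exported record set. This is a sketch with invented field names, not the report's actual implementation:

```python
# Sketch: filter exported vulnerability records the way the report does,
# by first-seen date, device health, and device group. Field names are
# illustrative only.
from datetime import date

records = [
    {"cveId": "CVE-2021-0001", "firstSeen": date(2021, 6, 1),
     "deviceHealth": "active", "deviceGroup": "servers"},
    {"cveId": "CVE-2020-9999", "firstSeen": date(2020, 1, 15),
     "deviceHealth": "inactive", "deviceGroup": "workstations"},
]

def filter_report(records, since=None, health=None, group=None):
    """Keep records matching every filter that is set; unset filters pass."""
    return [
        r for r in records
        if (since is None or r["firstSeen"] >= since)
        and (health is None or r["deviceHealth"] == health)
        and (group is None or r["deviceGroup"] == group)
    ]

recent_active = filter_report(records, since=date(2021, 1, 1), health="active")
```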
Image 1: Vulnerabilities report
Image 2:
Vulnerabilities report – severity and vulnerable devices by OS
Missing Windows security updates
This report gives you a complete picture of all missing Windows security
updates across your organization. You can see what the most exposed operating
systems are, or search for a particular security update to show all affected
devices.
You can filter the report by the associated CVE criticality, by the age of each
security update, or by advanced properties such as device tags, device
groups, device health (active/inactive), and more.
Image 3:
Missing Windows security updates
Software inventory
This report gives an overview of your software inventory. In addition to the
org-level view, you can explore recent installations and on which devices,
when, and in what version they were installed.
You can filter the report by the number of weaknesses associated with each
software, by software name/vendor, or by advanced properties such as
Device tags, Device groups, Device health (active/inactive), and more.
Image 4: Software inventory report
You can create your own reports, use any of the templates we have shown
above, or check out more report templates in our GitHub library:
Have you created your own report or used these published templates? We would
love to see how you’re using these new capabilities!
Other resources:
Build OData queries with Microsoft Defender for Endpoint
Create custom reports using Microsoft Defender ATP APIs and Power BI
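Queries like those covered in the OData resource above boil down to composing `$filter` and `$top` system options into a URL. A small sketch, assuming standard OData syntax; the property names in the example are illustrative:

```python
# Sketch: build an OData query URL from filter clauses. $filter and $top are
# standard OData system query options; the property names used in the example
# call are illustrative, not a guaranteed API schema.
from urllib.parse import urlencode

def odata_query(base_url, filters, top=None):
    """Join filter clauses with 'and' into $filter, optionally adding $top."""
    params = {"$filter": " and ".join(filters)}
    if top is not None:
        params["$top"] = str(top)
    return base_url + "?" + urlencode(params)

url = odata_query(
    "https://api.securitycenter.microsoft.com/api/vulnerabilities",
    ["severity eq 'Critical'", "publicExploit eq true"],
    top=100,
)
```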
Automation and integrations
A big part of a successful vulnerability management (VM) program is the
ability to automate tasks and reduce the manual workload of security and IT
teams, as well as integrating the VM solution with existing tools that are part
of an established workflow process in your organization.
Our new threat and vulnerability management APIs enable you to build a data
exchange between natively provided data and your existing tools. At the same
time, we are working with partners to continuously expand the portfolio of
out-of-the-box integrations with third-party solutions. You can already
leverage our Skybox integration today, and we are in the process of releasing
additional integrations for ServiceNow VR and Kenna Security in the coming
weeks.
The Kenna Security partnership will strengthen the overall prioritization
capabilities, combining threat and vulnerability management data with real-world
threat and exploit intelligence and advanced data science to determine which
vulnerabilities pose the highest risk to your organization. To learn more about
the upcoming integration, join our webinar on 6/24.
By integrating with ServiceNow Vulnerability Response you will be able to
easily automate and track workflows. We will share more information soon!
While we will have more news on integrations and automation in the coming
months, if there are specific integrations you would like to see on our
roadmap, go to the Partner Application page in the Microsoft Defender
Security Center, and click Recommend
other partners.
The threat and vulnerability management capabilities are part of Microsoft Defender for Endpoint and enable organizations to
effectively identify, assess, and remediate endpoint weaknesses to reduce
organizational risk.
Check out our documentation for a complete overview of how you can
consume these new APIs.
NIST’s
National Cybersecurity Center of Excellence (NCCoE) has released a new draft
report, NIST Interagency or Internal Report (NISTIR) 8335, Identity as a
Service for Public Safety Organizations.
Identity
as a service (IDaaS) is when a company offers identity, credential, and access
management (ICAM) services to customers through a software-as-a-service (SaaS)
cloud-service model. Public safety organizations (PSOs) could potentially
reduce costs and adopt new standards and authenticators more easily by using
IDaaS to provide authentication services for their own applications. This would
allow PSOs to offload some or most of their authentication responsibilities to
the IDaaS provider.
This
report informs PSOs about IDaaS and how they can benefit from it. It also lists
questions that PSOs can ask IDaaS providers when evaluating their services to
ensure the PSOs’ authentication needs are met and the risk associated with
authentication is mitigated properly. PSOs considering IDaaS usage are
encouraged to use this NISTIR. This report was developed in joint partnership
between the NCCoE and the Public Safety Communications Research Division (PSCR)
at NIST.
The public comment period for this draft is open through August 2,
2021. See the publication
details for a copy of the draft and instructions for submitting
comments. You can also contact us at psfr-nccoe@nist.gov.