Register now: Embracing AI-powered tools to maximize productivity

AI is changing the modern workplace at an unprecedented pace. Adopting AI-powered tools across your organization can supercharge productivity and creativity. Secure and responsible generative AI solutions, such as Copilot for Microsoft 365, elevate your AI investment with real gains in efficiency and innovation. This essential AI companion works across your data estate to deliver undeniable value: 70% of early adopters said they were more productive, and 68% reported that it improved the quality of their work.1 Join Microsoft leaders and executives as they discuss how AI can advance your journey to a high-powered organization.

Explore how you can:
- Jump-start the AI transformation with data security and compliance
- Enhance communication and collaboration with AI-powered tools
- Adopt and measure your AI transformation

Register now to learn how secure and responsible AI can transform your organization.

1 “Work Trend Index Special Report: What Can Copilot’s Earliest Users Teach Us About Generative AI at Work?,” Microsoft, November 2023.
 
The AI Advantage: Maximizing Productivity in the Modern Workplace
 
Register now >

Google’s New Generative AI Search Results Lead to Scam Websites

Search Generative Experience (SGE) is Google’s upcoming generative artificial intelligence (AI) search feature. Google first allowed users to opt into the Google SGE results in May 2023. Google recently began rolling out this feature to a small sample of random users who have not yet opted in. Selected users will see a brief AI-created overview above the Google search engine results. The “Ask a follow up” box allows users to add more details or ask follow-up questions.
However, users are urged to exercise caution with AI-generated responses, as researchers identified that Google’s SGE results may lead to scam sites. The websites promoted by SGE used the .online Top-Level Domain (TLD), identical HTML templates, and the same sites to perform redirects, indicating that they are likely part of the same search engine optimization (SEO) poisoning campaign. Upon clicking one of the listed websites in the search results, users may undergo a series of redirects until they reach a scam site. These scam sites often host fake CAPTCHAs or fraudulent YouTube pages that prompt users to subscribe to browser notifications. Scammers use browser notifications to send unwanted advertisements directly to the operating system’s desktop, even after the website has been closed in the browser. Once subscribed, users are redirected by these spam advertisements to fake giveaways, unwanted browser extensions, spam subscriptions, and tech support scams.
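As an illustration only (not part of the original research), the shared-TLD indicator described above could be expressed as a simple filter. This hypothetical Python sketch flags URLs whose hostname ends in a watched TLD, such as the .online TLD observed in the reported campaign:

```python
from urllib.parse import urlparse

# TLD observed in the reported SGE campaign; illustrative, not exhaustive
SUSPICIOUS_TLDS = {"online"}

def flag_suspicious(urls, tlds=SUSPICIOUS_TLDS):
    """Return the URLs whose hostname ends in a watched TLD."""
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host.rsplit(".", 1)[-1].lower() in tlds:
            flagged.append(url)
    return flagged
```

A real triage pipeline would combine this with the other indicators mentioned (shared HTML templates and redirect infrastructure); a TLD match alone proves nothing.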

SMS Text Phishing

Threat actors continue to use SMS text messages in phishing campaigns to steal users’ personal data, account information, and funds. SMS-based phishing (SMiShing) may be more effective than email phishing as these messages are viewed on a mobile device, making it more difficult for users to identify potentially malicious communications. This threat is compounded by businesses and organizations’ legitimate use of text messages for notification and outreach purposes. Users may also be fatigued by the number of text messages they receive and act on a message by clicking a link or responding impulsively.
SMiShing messages typically claim to come from a well-known business or organization – such as Amazon, FedEx, UPS, Netflix, or the IRS – and request that the recipient click a link, often to access a promotion, obtain information about a package delivery, or address a problem with their account. These links, if clicked, lead to fraudulent websites that capture user credentials, steal funds, or deliver malware (image 1). These messages may also request sensitive information from the user that could facilitate identity theft or account compromise.
Image 1
There has been a recent increase in other SMiShing campaigns in which a user receives a text message from an unrecognized number that contains verbiage similar to “Hey! How have you been?” The threat actors behind these campaigns seek to garner a response from the recipient. Responding may lead to a conversation in which the user is lured into a scam, such as a gift card scam (image 2), or the threat actor may simply be attempting to confirm that the phone number is active. Attempts to garner a response from the user are also used in bank impersonation campaigns, coercing the user to reply to avoid fraudulent activity on their account without requesting information or prompting them to click on a link (image 3).
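To make the patterns above concrete, here is a hypothetical Python sketch (not a production filter) that scores an SMS message against the three indicators described: an embedded link, an impersonated brand name, and a "wrong number" conversation opener. The keyword lists are illustrative assumptions drawn only from the examples in this advisory:

```python
import re

# Illustrative indicators drawn from the campaigns described above
SENDER_LURES = ("amazon", "fedex", "ups", "netflix", "irs")
OPENERS = ("hey! how have you been",)

def smishing_score(message: str) -> int:
    """Crude heuristic: +1 for each SMiShing indicator present."""
    text = message.lower()
    score = 0
    if re.search(r"https?://\S+", text):               # embedded link
        score += 1
    if any(brand in text for brand in SENDER_LURES):   # impersonated brand
        score += 1
    if any(text.startswith(o) for o in OPENERS):       # wrong-number opener
        score += 1
    return score
```

A higher score suggests the message deserves scrutiny; real SMiShing detection relies on far richer signals (sender reputation, URL analysis) than keyword matching.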

What’s new about the NIST 2.0 Cybersecurity Framework Zoom Meeting

Are you curious about what’s new in the NIST 2.0 Cybersecurity Framework? This Thursday, learn from the ISC2 New Jersey Chapter’s Master Cybersecurity Trainer, Jay Ferron, about the improvements that have been made to the framework. I’ll also have the opportunity to interview Jay about his fascinating career and how he played a major part in the 9/11 recovery effort.

Click the link below to register. We hope to see you there!

As usual, newcomers can register for free.

Register Here

Microsoft.Source Newsletter | Issue 57

See the latest ideas and projects from the global developer community. If someone forwarded you this newsletter and you want to receive future issues, sign up. This month’s Microsoft.Source delves into the synergy between open-source software (OSS) and Microsoft technologies.  
Resources
Learn new skills with step-by-step guidance, learning paths, and modules.
 
Featured Documentation Explore Azure OpenAI Service Updates in Detail >
Create advanced Copilot experiences using the Assistants API preview. Discover new models for GPT-4 Turbo (preview), GPT-3.5 Turbo, fine-tuning, and text-to-speech.  
What’s New

Blog Microsoft and Open-Source Software >
Discover Microsoft technologies that are open source, check out repos on GitHub, and learn about tools you can use for your own open-source projects. (in English)

Documentation How to deploy Mistral models with Azure AI Studio >
Microsoft is partnering with Mistral AI to bring its Large Language Models to Azure. Mistral AI offers two categories of models in Azure Machine Learning studio.  
Blog Python in Visual Studio Code – March 2024 Release >
See what’s included in the March 2024 release of the Python and Jupyter extensions for Visual Studio Code. (in English)
Events See local events >

Microsoft Build Microsoft Build / May 21 – 23 / Seattle >
Learn from experts, get hands-on with AI, and make connections with peers, Microsoft engineers, and industry leaders.  
Open Source Summit Open Source Summit / April 16 – 18 / Seattle >
Open Source Summit is the premier event for open source developers, technologists, and community leaders to collaborate, share information, and solve problems.
On demand Getting Started with the Fluent UI Blazor Library / On demand >
This Open at Microsoft episode provides an overview of the Fluent UI Blazor library and how to leverage its open-source set of Blazor components.  
On demand Build Your Own Copilot with Azure AI Studio / On demand >
Learn how to use Azure AI studio to create, manage, and deploy AI solutions with Azure OpenAI Service.  
On demand Architecting IoT applications with .NET and Meadow / On demand >
Get your next IoT project started to enable flexible hardware design and platform support, including Meadow Feather, Raspberry Pi and desktop.  
Learning
Code Sample Code Sample: Simple Chat Application using Azure OpenAI >
Build a Python Quart microframework app that streams responses from ChatGPT to an HTML/JS frontend using JSON Lines over a ReadableStream interface.  
Microsoft Copilot Transform your work with Microsoft Copilot >
Learn about Microsoft Copilot and find out how to extend it or build your own Copilot experiences with this content on Microsoft Learn.  
Cloud Skills Challenge Microsoft Learn AI Skills Challenge >
This immersive challenge will help you gain the skills, confidence, and Microsoft Credentials needed to excel in the era of AI.  

Microsoft 365 Virtual Training Day: Prepare Your Organization for Microsoft Copilot for Microsoft 365

Build the skills you need to create new opportunities and accelerate your understanding of Microsoft Cloud technologies at a free Microsoft 365 Virtual Training Day from Microsoft Learn. Join us at Prepare Your Organization for Microsoft Copilot for Microsoft 365 to learn how to implement AI to help ignite creativity, enhance productivity, and strengthen computing and collaboration skills. You’ll learn about the capabilities of Copilot, including how it works, how to configure it, and how to set it up for more powerful searches. You’ll also explore how Copilot works with Microsoft Graph—and your existing Microsoft 365 apps—to provide intelligent, real-time assistance.

You will have the opportunity to:
- Understand the key components of Copilot for Microsoft 365 and how it works.
- Learn how to extend Copilot with plugins.
- Get guidance on completing the necessary Copilot technical and business requirements to prepare for implementation.
- Learn how to assign Copilot licenses, prepare your organization’s Microsoft 365 data for Copilot searches, and create a Copilot Center of Excellence.

Join us at an upcoming Prepare Your Organization for Microsoft Copilot for Microsoft 365 event:
April 22, 2024 | 12:00 PM – 2:00 PM | (GMT-05:00) Eastern Time (US & Canada)


Delivery Language: English
Closed Captioning Language(s): English

Register Here

Beware of AI Tax Scams

With Tax Day quickly approaching, many taxpayers may feel stressed as they work to file their tax returns promptly and accurately. During this time, cybercriminals may exploit this human vulnerability and leverage rapid advances in artificial intelligence (AI) and deepfakes. They continue to explore ways to steal and use your information, including personally identifiable information (PII), financial information such as W-2s and banking details, login credentials, and other sensitive information. Once this information is captured or stolen, threat actors can use it to impersonate their victims, file fraudulent tax returns on their behalf, and steal their tax refunds. They can also use the information in other identity theft and fraud schemes.
Threat actors use social engineering tactics, AI-generated deepfakes, and voice cloning technologies to impersonate legitimate and trusted tax authorities, including the Internal Revenue Service (IRS) and tax preparation services, by stealing and using their branding, logos, and interfaces. They target vulnerable people through email, phone, text messaging, and social media platforms to trick them into disclosing their information and initiating fraudulent transactions. For example, threat actors may claim a tax refund is due or send information to track the status of tax refunds via phishing emails or text messages with links that, if clicked, direct targets to spoofed IRS websites. Additionally, threat actors may claim via phone that their target did not pay taxes or filed them incorrectly and now owes the IRS for back taxes. They may also threaten arrest or legal action if the fictitious debt is not paid immediately via wire transfer, gift cards, or pre-paid debit cards.
Threat actors also create highly sophisticated phishing emails with AI-generated content to convince their targets to divulge sensitive information or visit malicious links to spoofed websites of popular online tax preparation software. Additionally, they develop AI-powered fraudulent tax software that appears legitimate to lure targets into downloading malicious applications that capture and steal their information. Threat actors also trick their targets by falsely advertising and promoting themselves as legitimate tax preparation services. These scammers, or “ghost tax preparers,” are not certified, but they still prepare and file false and fraudulent tax returns and defraud their clients. These operations can be established quickly and promise fast or significant tax refunds to entice potential victims. The NJCCIC observed emails containing a link that directed targets to a tax preparer’s website. If clicked, the website displayed its services, including streamlining the tax filing process, and provided IRS credentials to create a sense of legitimacy. However, upon further inspection and analysis, the link to this website was determined to be malicious and used for phishing.

AZORult Malware Distributed via HTML Smuggling

Threat actors are using HTML smuggling and fraudulent Google Sites pages to disseminate AZORult in a new malware campaign. AZORult, also known as PuffStealer and Ruzalto, was first detected in 2016. It searches the desktop for sensitive documents using keywords in file extensions and file names, and it collects browser data and cryptocurrency wallet information. AZORult’s payload has been distributed in phishing, malspam, and malvertising campaigns and is currently targeting the healthcare industry. This campaign is laden with obfuscation and evasion techniques to minimize the chance of detection by anti-malware software.
Researchers found that the HTML smuggling technique employed embeds AZORult’s malicious JavaScript in a separate JSON file hosted on an external website. Once the Google Site is visited, a CAPTCHA test is initiated to add a sense of legitimacy for users and to shield the malware from URL scanners, such as VirusTotal. After the CAPTCHA test is passed, the payload is reconstructed and downloaded to the victim’s machine. The downloaded file is disguised as a PDF, often appearing as a bank statement, to trick users into opening it. Once launched, it executes a series of PowerShell scripts. The payload includes AMSI bypass techniques and reflective code loading to evade host- and disk-based detection and minimize artifacts.
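HTML smuggling works by reassembling a payload client-side from data embedded in (or fetched by) a page's JavaScript, so defenders often look for the small set of browser primitives that make this possible. As a rough illustration (the indicator list below is an assumption for demonstration, not the detection logic used by the researchers), a scanner could grep page source for those primitives:

```python
import re

# Illustrative JavaScript primitives often seen in HTML smuggling pages;
# their presence alone does not prove malice.
SMUGGLING_HINTS = [
    r"atob\s*\(",             # base64-decoding an embedded payload
    r"new\s+Blob\s*\(",       # reassembling the bytes client-side
    r"createObjectURL\s*\(",  # turning the Blob into a downloadable URL
    r"\.click\s*\(\s*\)",     # auto-triggering the download link
]

def smuggling_indicators(html: str):
    """Return the hint patterns that match the given HTML/JS source."""
    return [p for p in SMUGGLING_HINTS if re.search(p, html)]
```

Matching several of these patterns in one page is a reason for deeper inspection, not a verdict; plenty of legitimate web apps use the same APIs for downloads.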

Protecting Model Updates in Privacy-Preserving Federated Learning

In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process.

In horizontally-partitioned federated learning, a common approach is to…
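The excerpt stops mid-sentence, and the full training process is in the linked blog. As a sketch only, under the assumption that the "common approach" refers to FedAvg-style aggregation (each client trains locally, then the server averages client parameters weighted by local dataset size), the aggregation step might look like:

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style sketch).

    client_weights: one flat list of parameters per client
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Note that in plain (non-private) federated learning the server sees each client's update directly; the input-privacy techniques this post series discusses are precisely about computing such an aggregate without exposing the individual updates.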

Read the Blog

NY Metro ISSA Chapter News

As Vice President of the chapter, here is the latest chapter news.

Upcoming NY Chapter Webinars

April 2nd, The Cyber Ranch Podcast, Allan Alford

Register @ https://nymissa-webinar-20240402.eventbrite.com

May 7th, Implementing Zero Trust in an Enterprise, Vinicius Da Costa 

Follow our NY Metro ISSA LinkedIn group for registration details on upcoming events. [We are open to suggestions for topics and/or speakers.]

Ongoing Chapter Activities

SECRT – Security Leaders Round Table 

SECRT is an invitation-only breakfast roundtable series with chapters across the US and 1,400+ opt-in members. To become part of SECRT, visit www.secrt.us or contact Mike Melore directly.

Military Transition Bridge into Cybersecurity Career Pathways and Jobs

The Cybersecurity Workforce Alliance and iQ4 are operating under a federal grant to develop workplace cyber/risk skills for the Veterans, National Guard, Police, Coast Guard, and Correction Services communities. Refer to the IQ4/CWA Vets Training Program or contact David Solano for more information.