The NCCoE Buzz: Mobile Application Vetting 101

The NCCoE Buzz: Mobile Security Edition is a recurring email on timely topics in mobile device cybersecurity and privacy from the National Cybersecurity Center of Excellence’s (NCCoE’s) Mobile Device Security project team.

What is it?

Imagine you’ve found “THE” mobile application to enhance your organization’s productivity. How do you know if the benefits outweigh the potential risks of installing the mobile app?

Enterprises use mobile application vetting (MAV) services to scan applications for potentially unwanted behavior and to verify that applications meet the organization’s security and privacy requirements.

How does it work?

MAV services use a variety of static, dynamic, and behavioral analysis techniques to determine if an application demonstrates any behaviors that pose a security or privacy risk. Once analysis is complete, the MAV tool generates a comprehensive report of the application’s security and privacy characteristics.
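To make that idea concrete, the sketch below shows one kind of static check a vetting pipeline might perform: scanning a decoded AndroidManifest.xml for sensitive permissions and cleartext traffic settings. It is a minimal illustration only; the permission list, file path, and rule logic are assumptions made for this example, not the behavior of any particular MAV product.

```python
# Minimal illustration of one static-analysis check a MAV service might run:
# flagging risky Android permissions and cleartext traffic in a decoded
# AndroidManifest.xml. The permission list and file path are assumptions
# for illustration only, not any specific vendor's rule set.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Hypothetical "review required" permissions an organization might care about.
RISKY_PERMISSIONS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_SMS",
}

def scan_manifest(path: str) -> list[str]:
    """Return human-readable findings for a decoded AndroidManifest.xml."""
    findings = []
    root = ET.parse(path).getroot()

    # Check requested permissions against the risky list.
    for perm in root.iter("uses-permission"):
        name = perm.get(f"{ANDROID_NS}name", "")
        if name in RISKY_PERMISSIONS:
            findings.append(f"Requests sensitive permission: {name}")

    # Check whether the app explicitly allows cleartext (non-HTTPS) traffic.
    app = root.find("application")
    if app is not None and app.get(f"{ANDROID_NS}usesCleartextTraffic") == "true":
        findings.append("Allows cleartext network traffic")

    return findings

if __name__ == "__main__":
    for finding in scan_manifest("AndroidManifest.xml"):
        print(finding)
```

In practice, MAV services combine many such static checks with dynamic and behavioral analysis before generating their report.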

How does it address security and privacy concerns?

MAV services give organizations the information needed to make risk-based decisions when selecting or developing mobile applications. The report from the application vetting service contains various findings, such as the use of in-app purchases, insecure network communications, or exposure of sensitive personal or device information. Based on these findings, enterprises can make informed decisions about whether to block problematic applications from being installed on company devices, as the sketch below illustrates.
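As a hedged illustration of that decision step, the following sketch applies a made-up organizational policy to a list of report findings and returns an allow-or-block result. The finding categories, severity scale, and threshold are assumptions for this example; actual vetting report schemas and enterprise policies will differ.

```python
# Hypothetical sketch of turning a MAV report's findings into an allow/block
# decision. The categories, severities, and threshold are illustrative
# assumptions, not a standard report format.
from dataclasses import dataclass

@dataclass
class Finding:
    category: str   # e.g., "insecure-network", "in-app-purchase", "pii-exposure"
    severity: int   # 1 (informational) through 5 (critical)

# Example organizational policy: block on serious findings in sensitive
# categories, or when combined severity exceeds a chosen threshold.
BLOCK_CATEGORIES = {"insecure-network", "pii-exposure"}
SEVERITY_THRESHOLD = 8

def decide(findings: list[Finding]) -> str:
    if any(f.category in BLOCK_CATEGORIES and f.severity >= 4 for f in findings):
        return "block"
    if sum(f.severity for f in findings) > SEVERITY_THRESHOLD:
        return "block"
    return "allow"

report = [
    Finding("in-app-purchase", 2),
    Finding("insecure-network", 4),  # e.g., login traffic sent over HTTP
]
print(decide(report))  # -> "block"
```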

What can you do?

Download our NIST SP 1800-21 and 1800-22 guides to learn more about application vetting and other mobile device security and privacy capabilities, including how these solutions can strengthen the security of your enterprise environment.

The NCCoE Mobile Device Security Team

NIST Launches New Trustworthy and Responsible AI Resource Center

NIST Launches New Trustworthy and Responsible AI Resource Center: Includes First Version of AI Risk Management Framework Playbook

The National Institute of Standards and Technology (NIST) announces the launch of the NIST Trustworthy and Responsible AI Resource Center (AIRC), a one-stop shop for foundational content, technical documents, and toolkits to enable responsible use of Artificial Intelligence (AI). The AIRC offers industry, government, and academic stakeholders knowledge of AI standards, measurement methods and metrics, datasets, and other resources.

The launch of the AI Resource Center was announced during the White House Summit for Democracy held this week. The AIRC is part of NIST’s continued effort to promote a shared understanding and improve communication among those seeking to operationalize trustworthy and responsible AI. 

The Resource Center will facilitate implementation of trustworthy and responsible approaches such as those described in NIST’s AI Risk Management Framework (AI RMF). That voluntary Framework articulates and offers guidance for addressing the key building blocks of trustworthy AI in order to better manage risks to individuals, organizations, and society associated with AI.

The initial version of the AIRC, which will be expanded over time based on contributions from NIST and others, includes the AI RMF 1.0 and the first complete version of the companion playbook. Content in the AI RMF Playbook can now be filtered by AI RMF function, topic, and AI actor role so that users can quickly isolate the information most useful to them.

The AIRC includes access to a tracker of AI standards around the globe, along with a metrics hub to assist in the test, evaluation, verification, and validation of AI.

A trustworthy and responsible AI Glossary in the AIRC is being released in beta format as a spreadsheet while approaches to visualizing the relationships among its terms continue to advance. A final glossary will be produced at a later date based on input from the community.

In addition, the new resource center will be a repository for NIST technical and policy documents related to the AI RMF and the NIST AI publication series, as well as for NIST-funded external resources in the area of trustworthy and responsible AI.

The AIRC Engagements and Events page will include updates on how to engage with NIST on the topic of trustworthy and responsible AI.

Sign up to receive email notifications about NIST’s AI activities here.

NCCoE Seeks Collaborators for New Healthcare Sector Project

Become a Collaborator on the Mitigating Cybersecurity Risk in Telehealth Smart Home Integration Project

The National Cybersecurity Center of Excellence (NCCoE) has issued a Federal Register Notice (FRN) inviting industry participants and other interested collaborators to participate in the Mitigating Cybersecurity Risk in Telehealth Smart Home Integration Project.

The NCCoE Healthcare project team will build an environment that will model patients’ use of smart speakers in a telehealth ecosystem. The goal of this project is to identify and mitigate cybersecurity and privacy risks associated with these ecosystems. This project will result in a publicly available NIST Cybersecurity Practice Guide.

There are two ways to join the NCCoE for this project:

Become an NCCoE Collaborator – Collaborators are members of the project team who work alongside the NCCoE staff to build the demonstration by contributing products, services, and technical expertise.

Get Started Today – If you are interested in becoming an NCCoE collaborator for the Mitigating Cybersecurity Risk in Telehealth Smart Home Integration project, first review the requirements identified in the Federal Register Notice. To become a collaborator, visit the project page to see the final project description and request a Letter of Interest (LOI) template; you will then receive a link to download the template.

Go to the project page here

Complete the LOI template and send it to the NCCoE Healthcare team at hit_nccoe@nist.gov.

Join our Community of Interest – By joining the NCCoE Healthcare Community of Interest (COI), you will receive project updates and the opportunity to share your expertise to help guide this project. Request to join our Healthcare COI by visiting our project page.

If you have any questions, please contact our project team at hit_nccoe@nist.gov.