Android 11 Developer Preview: Features and APIs Overview

Published by inkskull

Data access auditing

To provide more transparency into how your app and its dependencies access private data from users, Android 11 introduces data access auditing. By using this feature, you can better identify and rectify potentially unexpected data access.

To learn more about this feature, read the data access auditing section on the page that discusses privacy changes related to permissions.

Performant graphics debug layer injection

Applications can now load external graphics layers (GLES, Vulkan) into native application code to expose the same functionality as a debuggable app, but without incurring the performance overhead. This feature is especially important when profiling your application with tools like GAPID. To profile your app, include the following meta-data element in your app manifest file instead of making the application debuggable:

<application ... >
    <meta-data android:name="com.android.graphics.injectLayers.enable"
               android:value="true" />
</application>

Batch operations for media files

For consistency across devices and added user convenience, Android 11 adds several methods to the MediaStore API. To learn more about these methods, see the perform batch operations section on the Android 11 privacy page related to storage.

Rich media in quick replies

Beginning in Android 11, users can insert images and other rich media content into quick replies. To support this feature, apps need to add information to RemoteInput notifications specifying which MIME types they can handle. Do this by calling RemoteInput.Builder.setAllowDataType(). The app must also check any RemoteInput broadcasts that it receives to see if the broadcast contains content in any of those types; use RemoteInput.getDataResultsFromIntent() to do this.
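As an illustrative sketch (the result key "key_reply" and the PNG MIME type are placeholders chosen for this example), declaring an accepted data type and reading the result back might look like this:

```kotlin
import android.app.RemoteInput
import android.content.Intent
import android.net.Uri

// Declare that this reply input also accepts PNG images.
// "key_reply" is a placeholder result key for this example.
val remoteInput: RemoteInput = RemoteInput.Builder("key_reply")
    .setLabel("Reply")
    .setAllowDataType("image/png", true)
    .build()

// In the receiver handling the reply, check for rich-media results.
fun extractImage(intent: Intent): Uri? {
    // Returns a map of MIME type -> content Uri, or null if there is none
    val results: Map<String, Uri>? =
        RemoteInput.getDataResultsFromIntent(intent, "key_reply")
    return results?.get("image/png")
}
```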

Access to media files using raw file paths

Starting in Android 11, apps that have the READ_EXTERNAL_STORAGE permission can read a device’s media files using direct file paths and native libraries. To learn more about this capability, see the access files using raw paths section on the Android 11 privacy page related to storage.

Secure sharing of large datasets

In some situations, such as those that involve machine learning or media playback, your app might want to use the same large dataset as another app. In previous versions of Android, your app and another app would each need to download a separate copy of the same dataset.

To help reduce data redundancy, both over the network and on disk, Android 11 allows these large datasets to be cached on the device using shared data blobs. To learn more about sharing datasets, see the in-depth guide on sharing large datasets.
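A minimal sketch of reading a shared blob, assuming you already know the dataset's SHA-256 digest; the label and tag strings are placeholders for this example:

```kotlin
import android.app.blob.BlobHandle
import android.app.blob.BlobStoreManager
import android.content.Context

// Open a dataset that another app may already have contributed to the
// shared blob store. The label and tag here are placeholders.
fun readSharedDataset(context: Context, sha256: ByteArray) {
    val blobStore = context.getSystemService(BlobStoreManager::class.java)
    val handle = BlobHandle.createWithSha256(
        sha256,              // SHA-256 digest of the dataset contents
        "Sample dataset",    // user-visible label
        0,                   // expiry timestamp (0 = none)
        "sample-dataset"     // app-defined tag
    )
    blobStore.openBlob(handle).use { pfd ->
        // Read the dataset from pfd.fileDescriptor
    }
}
```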

App process exit reasons


Android 11 introduces the getHistoricalProcessExitReasons() method, which reports the reasons for any recent process terminations. Apps can use this method to gather crash diagnostic information, such as whether a process termination is due to ANRs, memory issues, or other reasons.

The getHistoricalProcessExitReasons() method returns a list of ApplicationExitInfo records, each containing information related to an app process’s death. By calling getReason() on a record, you can determine why your app’s process was killed. For example, a return value of REASON_CRASH indicates that an unhandled exception occurred in your app.
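A sketch of querying exit records for your own process (the record count of 5 is an arbitrary choice for this example):

```kotlin
import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context
import android.util.Log

// Inspect the most recent exit records for our own process.
fun logExitReasons(context: Context) {
    val am = context.getSystemService(ActivityManager::class.java)
    // pid = 0 and maxNum = 5: up to five recent records for this package
    val records: List<ApplicationExitInfo> =
        am.getHistoricalProcessExitReasons(context.packageName, 0, 5)
    for (info in records) {
        when (info.reason) {
            ApplicationExitInfo.REASON_CRASH ->
                Log.d("ExitInfo", "Unhandled exception: ${info.description}")
            ApplicationExitInfo.REASON_ANR ->
                Log.d("ExitInfo", "ANR: ${info.description}")
            else ->
                Log.d("ExitInfo", "Exit reason: ${info.reason}")
        }
    }
}
```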

Requesting and checking for low latency support

Certain displays can perform graphics post-processing, such as some external displays and TVs. This post-processing improves the graphics but can increase latency. Newer displays which support HDMI 2.1 have an auto low latency mode (ALLM, also known as game mode), which minimizes latency by switching off this post-processing. For more details on ALLM, refer to the HDMI 2.1 specification.

A window can request that auto low latency mode be used, if it is available. ALLM is particularly useful for applications like games and videoconferencing, where low latency is more important than having the best possible graphics.

To toggle minimal post-processing on or off, call Window.setPreferMinimalPostProcessing(), or set the window’s preferMinimalPostProcessing attribute to true. Not all displays support minimal post-processing; to find out whether a particular display supports it, call the new method Display.isMinimalPostProcessingSupported().

Note: If the user disables minimal post-processing, or if the display does not support low latency mode, calling Window.setPreferMinimalPostProcessing() has no effect.
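Combining the check and the request can be sketched as:

```kotlin
import android.app.Activity

// Request low-latency mode only when the current display supports it.
fun requestLowLatency(activity: Activity) {
    if (activity.display?.isMinimalPostProcessingSupported == true) {
        activity.window.setPreferMinimalPostProcessing(true)
    }
}
```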

Low-latency decoding in MediaCodec

Android 11 enhances MediaCodec to support low-latency decoding for games and other real-time apps. You can check whether a codec supports low-latency decoding by passing FEATURE_LowLatency to MediaCodecInfo.CodecCapabilities.isFeatureSupported().

To turn low-latency decoding on or off, do either of the following:

  • Set the KEY_LOW_LATENCY key in MediaFormat when configuring the codec.
  • Toggle low latency at runtime by passing PARAMETER_KEY_LOW_LATENCY to setParameters().

Note: Supporting low-latency decoding can require additional resources, such as higher power consumption. Use low-latency decoding only when necessary.
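As a sketch (the AVC MIME type and 1280x720 resolution are example choices), checking for the feature and configuring the format might look like this:

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Configure an AVC decoder with low latency when the codec supports it.
fun createLowLatencyDecoder(): MediaCodec {
    val mime = MediaFormat.MIMETYPE_VIDEO_AVC
    val codec = MediaCodec.createDecoderByType(mime)
    val caps = codec.codecInfo.getCapabilitiesForType(mime)
    val format = MediaFormat.createVideoFormat(mime, 1280, 720)
    if (caps.isFeatureSupported(MediaCodecInfo.CodecCapabilities.FEATURE_LowLatency)) {
        format.setInteger(MediaFormat.KEY_LOW_LATENCY, 1)
    }
    codec.configure(format, null, null, 0)
    return codec
}
```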

NDK image decoder

The NDK ImageDecoder API provides a standard API for Android C/C++ apps to decode images directly. App developers no longer need to use the framework APIs (via JNI) or bundle third-party image-decoding libraries. For more information, see the Image decoder developer guide.

Resource loaders


Android 11 introduces a new API that allows apps to dynamically extend how resources are searched and loaded. The new API classes ResourcesLoader and ResourcesProvider are primarily responsible for providing the new functionality. Together, they provide the ability to supply additional resources and assets, or modify the values of existing resources and assets.

ResourcesLoader objects are containers that supply ResourcesProvider objects to an app’s Resources instance. In turn, ResourcesProvider objects provide methods to load resource data from APKs and resource tables.

One primary use case for this API is custom asset loading. You can pair an instance of the new API class DirectoryAssetsProvider with a ResourcesProvider to redirect the resolution of file-based resources and assets, searching a specific directory rather than the application APK. You can access these assets through the open() family of methods from the AssetManager API class, just like with assets bundled in the APK.
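A hedged sketch of directory-based loading using ResourcesProvider.loadFromDirectory(); the directory path is a placeholder for this example:

```kotlin
import android.content.Context
import android.content.res.loader.ResourcesLoader
import android.content.res.loader.ResourcesProvider

// Resolve file-based resources and assets from a directory instead of
// the APK. The path below is a placeholder.
fun attachDirectoryLoader(context: Context) {
    val provider = ResourcesProvider.loadFromDirectory(
        "/data/local/tmp/overlay", /* assetsProvider = */ null)
    val loader = ResourcesLoader().apply { addProvider(provider) }
    context.resources.addLoaders(loader)
}
```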

Updates to the ICU libraries

Android 11 updates the android.icu package to use version 66 of the ICU library, compared to version 63 in Android 10. The new library version includes updated CLDR locale data and a number of enhancements to internationalization support in Android.

Notable changes in the new library versions include the following:

  • Many formatting APIs now support a new return object type that extends FormattedValue.
  • The LocaleMatcher API is enhanced with a builder class, support for the java.util.Locale type, and a result class featuring additional data about a match.
  • Unicode 13 is now supported.

Neural Networks API 1.3

Android 11 expands and improves the Neural Networks API (NNAPI).

New operations

NNAPI 1.3 introduces a new operand type, TENSOR_QUANT8_ASYMM_SIGNED, to support TensorFlow Lite’s new quantization scheme.

Additionally, NNAPI 1.3 introduces the following new operations:

  • IF
  • ELU
  • FILL
  • RANK

New ML controls

NNAPI 1.3 introduces new controls to help machine learning run smoothly.

Biometric authentication updates

To help you control the level of security for your app’s data, Android 11 provides several improvements to biometric authentication.

Authentication strength

Android 11 introduces the BiometricManager.Authenticators interface, which defines the following levels of authentication strength:

  • BIOMETRIC_STRONG: Authentication using a hardware element that satisfies the Strong strength level as defined on the Compatibility Definition page.
  • BIOMETRIC_WEAK: Authentication using a hardware element that satisfies the Weak strength level as defined on the Compatibility Definition page.
  • DEVICE_CREDENTIAL: Authentication using a screen lock credential – the user’s PIN, pattern, or password.

To define the strength levels that your app allows for biometric authentication, pass a bitwise combination of the strength levels into the setAllowedAuthenticators() method. For example, if your app requires either a “strong” hardware element or a screen lock credential, pass in BIOMETRIC_STRONG | DEVICE_CREDENTIAL.

To check whether the necessary authentication elements are available, pass the same bitwise combination of strength levels into the canAuthenticate() method. If necessary, invoke the ACTION_BIOMETRIC_ENROLL intent action, which prompts the user to enroll an authenticator that has the required authentication strength for your app. In the intent extra, provide the authenticators that your app accepts. The system chooses one of these authenticators and asks the user to register credentials for that authenticator.

After the user authenticates, you can check whether the user authenticated using a device credential or a biometric credential by calling getAuthenticationType().
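The flow above can be sketched as follows (the prompt title is a placeholder, and the authenticate() call itself is elided):

```kotlin
import android.content.Context
import android.content.Intent
import android.hardware.biometrics.BiometricManager
import android.hardware.biometrics.BiometricManager.Authenticators
import android.hardware.biometrics.BiometricPrompt
import android.provider.Settings

// Require either strong biometrics or the screen lock; if nothing of
// sufficient strength is enrolled, send the user to enrollment.
fun checkAuthenticators(context: Context) {
    val allowed = Authenticators.BIOMETRIC_STRONG or Authenticators.DEVICE_CREDENTIAL
    val manager = context.getSystemService(BiometricManager::class.java)
    when (manager.canAuthenticate(allowed)) {
        BiometricManager.BIOMETRIC_SUCCESS -> {
            val prompt = BiometricPrompt.Builder(context)
                .setTitle("Confirm it's you")
                .setAllowedAuthenticators(allowed)
                .build()
            // prompt.authenticate(...) with a CancellationSignal, an
            // executor, and an AuthenticationCallback
        }
        BiometricManager.BIOMETRIC_ERROR_NONE_ENROLLED -> {
            val enroll = Intent(Settings.ACTION_BIOMETRIC_ENROLL).putExtra(
                Settings.EXTRA_BIOMETRIC_AUTHENTICATORS_ALLOWED, allowed)
            context.startActivity(enroll)
        }
    }
}
```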

Additional support for auth-per-use keys

Android 11 provides more support for auth-per-use keys within the BiometricPrompt class. Such a key requires the user to present a biometric credential or a device credential each time your app needs to access data that’s guarded by that key. Your app specifies which credential types the user may provide in the second argument to setUserAuthenticationParameters(). Auth-per-use keys are useful for high-value transactions, such as making a large payment or updating a person’s health records.

To associate a BiometricPrompt object with an auth-per-use key, pass in 0 as the first argument for setUserAuthenticationParameters().
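As a sketch, generating such a key might look like the following; the key alias and cipher parameters are example choices, not requirements:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generate an AES key that demands fresh authentication on every use.
// "my_auth_key" is a placeholder alias.
fun generateAuthPerUseKey(): SecretKey {
    val keyGen = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGen.init(
        KeyGenParameterSpec.Builder("my_auth_key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setUserAuthenticationRequired(true)
            // Timeout 0 = auth-per-use; second argument = accepted credentials
            .setUserAuthenticationParameters(0,
                KeyProperties.AUTH_BIOMETRIC_STRONG or KeyProperties.AUTH_DEVICE_CREDENTIAL)
            .build())
    return keyGen.generateKey()
}
```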

Deprecated methods

Android 11 deprecates the following methods:

  • The setDeviceCredentialAllowed() method.
  • The setUserAuthenticationValidityDurationSeconds() method.
  • The overloaded version of canAuthenticate() that takes no arguments.

Telephony display information updates

On Android 11 (API level 30) and higher, apps with the android.Manifest.permission.READ_PHONE_STATE permission can request telephony display information updates through PhoneStateListener.onDisplayInfoChanged(). This includes radio access technology information used for marketing and branding purposes.

This new API supports the various 5G icon display approaches used by different carriers. The supported technologies include LTE, LTE with carrier aggregation (LTE+), LTE Advanced Pro (5Ge), NR (5G), and NR on millimeter-wave cellular bands (5G+).

Expanded camera support in Android emulator

Android 11 introduces improved Android Emulator camera capabilities. The added features include the following:

  • RAW capture
  • YUV reprocessing
  • Level 3 devices
  • Logical camera support

Mute notification sounds and vibrations during active capture

Beginning with Android 11, when actively using the camera, your app can mute only vibrations, both sounds and vibrations, or neither using setCameraAudioRestriction().
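A minimal sketch, assuming an already-open CameraDevice:

```kotlin
import android.hardware.camera2.CameraDevice

// Mute vibrations while this camera is open. Other options are
// AUDIO_RESTRICTION_VIBRATION_SOUND (mute both) and AUDIO_RESTRICTION_NONE.
fun muteVibrations(camera: CameraDevice) {
    camera.cameraAudioRestriction = CameraDevice.AUDIO_RESTRICTION_VIBRATION
}
```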

Wi-Fi Passpoint enhancements

Passpoint enables apps to automatically and silently perform authentication and connect to secure Wi-Fi hotspots. Apps that target API level 30 and higher can use the following additional capabilities of Passpoint:

  • Expiration date enforcement and notification: Enforcing expiration dates on profiles allows the framework to avoid auto-connecting to access points with expired credentials, which are destined to fail. This avoids wasted airtime, and saves battery and backend bandwidth. A notification is displayed to the user when a matching profile is in range but has expired.
  • FQDN matching: Allows configuration of a named AAA domain separately from the Access Network Query Protocol (ANQP) fully qualified domain name (FQDN), using an Extension/Android node in the PerProviderSubscription (PPS) Management Object (MO).
  • Self-signed private CAs: For Passpoint R1 profiles, Android accepts private self-signed CAs for connection authentication.

Wi-Fi Suggestion API is expanded

Android 11 expands the Wi-Fi Suggestion API to increase your app’s network management capabilities, including the following:

  • Connectivity management apps can manage their own networks by allowing disconnection requests.
  • Passpoint networks are integrated into the Suggestion API and can be suggested to the user.
  • Analytics APIs enable you to get information about the quality of your networks.

GNSS antenna support

Android 11 introduces the GnssAntennaInfo class, which makes it possible for your app to make more use of centimeter-accuracy positioning that the Global Navigation Satellite System (GNSS) can provide. After the user grants your app the ACCESS_FINE_LOCATION permission, your app can access the following details related to the GNSS antenna:

  • Phase center offset (PCO) coordinates
  • Phase center variation (PCV) corrections
  • Signal gain corrections

To determine whether a device can provide GNSS antenna information to your app, call hasGnssAntennaInfo().

Privacy considerations

  • The GNSS antenna can identify only the device model, not an individual device.
  • Use of the GnssAntennaInfo class requires the ACCESS_FINE_LOCATION permission.

Chat Bubbles

Bubbles are now available to developers to help surface conversations across the system. Bubbles was an experimental feature in Android 10 that was enabled through a developer option; in Android 11, this is no longer necessary.

Note: Later in the preview cycle, apps will need to request permission before sending bubbles. However, no permission is needed in Developer Preview 2.

There are a number of improvements to bubble performance, and users now have more flexibility in enabling and disabling bubbles for each app. For developers who implemented experimental support, there are a few changes to the bubble APIs in Android 11.

Updates for accessibility service developers

If you create a custom accessibility service, you can use the following features in Android 11:

  • The user-facing explanation of an accessibility service now allows for HTML and images in addition to plain text. This flexibility makes it easier to explain to end-users what your service does and how it can help them.
  • To work with a description of a UI element’s state that’s more semantically meaningful than contentDescription, call the getStateDescription() method.
  • To request that touch events bypass the system’s touch explorer, call setTouchExplorationPassthroughRegion(). Similarly, to request that gestures bypass the system’s gesture detector, call setGestureDetectionPassthroughRegion().
  • You can request IME actions, such as “enter” and “next”, as well as screenshots of windows that don’t enable the FLAG_SECURE flag.

Incremental APK installation

Installing large (2GB+) APKs on a device can take a long time, even if only a small change is made to an app. Incremental APK installation accelerates this process by installing enough of the APK to launch the app while streaming the remaining data in the background.

Use the following command to use the feature. If the device does not support incremental installation, the command fails and prints a verbose explanation.

adb install --incremental

The v4 signature file must be placed next to the APK for this feature to work.

APK signature scheme v4

Android 11 adds support for APK Signature Scheme v4. This scheme produces a new kind of signature in a separate file (apk-name.apk.idsig) but is otherwise similar to v2 and v3. No changes are made to the APK. This scheme supports incremental APK installation, which speeds up APK install.


ANGLE for OpenGL ES

You can run non-core apps using ANGLE to evaluate performance and decide whether a particular app should use ANGLE rather than the native OpenGL ES drivers. For instructions, see Using ANGLE for OpenGL ES.

Dynamic intent filters

In order to receive intents, an app must declare at compile time which types of data it is able to receive by defining an intent filter in the app manifest. In Android 10 and lower, apps have no way of changing their intent filters at runtime. This is a problem for virtualization apps (such as virtual machines and remote desktops) because they have no way of knowing exactly what software the user will install inside them.

Android 11 introduces MIME groups, a new manifest element which allows an app to declare a dynamic set of MIME types in an intent filter and modify it programmatically at runtime. To use a MIME group, include a data element in your app manifest with the new android:mimeGroup attribute:

  <intent-filter>
      <action android:name="android.intent.action.SEND"/>
      <category android:name="android.intent.category.DEFAULT"/>
      <data android:mimeGroup="myMimeGroup"/>
  </intent-filter>

The value of the android:mimeGroup attribute is an arbitrary string ID that identifies the MIME group at runtime. You can access and update the contents of a MIME group by passing its ID to the following new methods in the PackageManager API class:

  • setMimeGroup()
  • getMimeGroup()

When you add a MIME type to a MIME group programmatically, it functions exactly the same as a static MIME type explicitly declared in the manifest.

Note: mimeGroup strings are defined on a per-package basis. Within the same package, you can use the same mimeGroup string in multiple intent filters or components to declare a MIME group that is shared between them. Different packages cannot share a MIME group, but they can use the same mimeGroup string without interfering with each other.
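Updating the group from code can be sketched as (the two image types are example values):

```kotlin
import android.content.Context

// Replace the contents of the MIME group declared in the manifest.
fun updateMimeGroup(context: Context) {
    val pm = context.packageManager
    pm.setMimeGroup("myMimeGroup", setOf("image/png", "image/jpeg"))
    // Read back the current contents of the group
    val current: Set<String>? = pm.getMimeGroup("myMimeGroup")
}
```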

Better support for HEIF images with multiple frames

Beginning with Android 11, if you call ImageDecoder.decodeDrawable() and pass an HEIF image containing a sequence of frames (such as an animation or a burst photo), the method returns an AnimatedImageDrawable containing the entire image sequence. On earlier versions of Android, the method returned a BitmapDrawable of just a single frame.

If the HEIF graphic contains multiple frames that are not in a sequence, you can retrieve an individual frame by calling MediaMetadataRetriever.getImageAtIndex().
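Both paths can be sketched as follows; the Uri and the frame index are placeholders for this example:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import android.graphics.drawable.AnimatedImageDrawable
import android.media.MediaMetadataRetriever
import android.net.Uri

// Decode a multi-frame HEIF: a frame sequence becomes an animatable
// drawable; non-sequential frames can be fetched individually.
fun decodeHeif(context: Context, uri: Uri) {
    val source = ImageDecoder.createSource(context.contentResolver, uri)
    val drawable = ImageDecoder.decodeDrawable(source)
    if (drawable is AnimatedImageDrawable) {
        drawable.start()  // play the whole sequence
    }

    // Pull out a single frame (here, the third one) from a burst
    val retriever = MediaMetadataRetriever()
    retriever.setDataSource(context, uri)
    val thirdFrame: Bitmap? = retriever.getImageAtIndex(2)
    retriever.release()
}
```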

Better support for waterfall displays

Android 11 provides several APIs to support waterfall displays: displays that wrap around the edge of the device. These displays are treated as a variant of displays with display cutouts. The existing DisplayCutout.getSafeInset…() methods now return safe insets that avoid waterfall areas as well as cutouts. To render your app content in the waterfall area, do the following:

  • Call DisplayCutout.getWaterfallInsets() to get exact dimensions of the waterfall inset.
  • Set the window layout attribute layoutInDisplayCutoutMode to LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS to allow the window to extend into the cutout and waterfall areas on all edges of the screen. You must make sure that no essential content is in the cutout or waterfall areas.

Note: If you do not set the window to LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS, Android displays the window in letterboxed mode, avoiding the notch and waterfall areas.
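The two steps above can be sketched as:

```kotlin
import android.app.Activity
import android.graphics.Insets
import android.view.WindowManager

// Extend into the waterfall area, then read its exact insets so that
// essential content can be kept out of it.
fun layoutIntoWaterfall(activity: Activity) {
    val attrs = activity.window.attributes
    attrs.layoutInDisplayCutoutMode =
        WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS
    activity.window.attributes = attrs

    val cutout = activity.window.decorView.rootWindowInsets?.displayCutout
    val waterfall: Insets? = cutout?.waterfallInsets
    // Pad essential UI by the waterfall insets
}
```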

Frame rate API

Android 11 provides an API to enable applications to tell the system their intended frame rate. Traditionally, most devices have supported only a single display refresh rate, typically 60Hz, but this has been changing. Many devices now support additional refresh rates, such as 90Hz or 120Hz. The primary purpose of the API is to enable applications to better take advantage of all the supported display refresh rates.

Android exposes several ways to access and control surfaces, so there are several versions of this API:

  • Surface.setFrameRate()
  • SurfaceControl.Transaction.setFrameRate()
  • The NDK functions ANativeWindow_setFrameRate() and ASurfaceTransaction_setFrameRate()
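A minimal sketch of the Surface variant, using 30fps video as an example:

```kotlin
import android.view.Surface

// Hint that this surface shows fixed-rate 30fps video content, letting
// the system pick a matching display refresh rate.
fun hintVideoFrameRate(surface: Surface) {
    surface.setFrameRate(30.0f, Surface.FRAME_RATE_COMPATIBILITY_FIXED_SOURCE)
}
```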


OpenSL ES is deprecated

Starting with NDK r21b beta 2, the OpenSL ES API is deprecated. You should use Oboe instead.

The platform still supports OpenSL ES for existing apps. However, a build warning appears when using OpenSL ES with a minSdkVersion of 30 or higher.

New AAudio function AAudioStream_release()

The function AAudioStream_close() releases and closes an audio stream at the same time. This can be dangerous. If another process tries to access the stream after it’s been closed, the process will crash.

The new function AAudioStream_release() releases the stream but does not close it. This frees its resources and leaves the stream in a known state. The object persists until you call AAudioStream_close().

Restricting audio access

Android 11 includes two new features to restrict the ability of apps to record audio.

Audio capture from a USB device

When an application without RECORD_AUDIO permission requests access to a USB audio device with audio capture capability (such as a USB headset), a new warning message appears asking the user to confirm permission to use the device. The system ignores any “always use” option, so the user must acknowledge the warning and grant permission every time an app requests access.

To avoid this behavior, your app should request the RECORD_AUDIO permission.

Concurrent mic access

Starting with Android 10, any app that holds the role RoleManager.ROLE_ASSISTANT can capture audio concurrently with any other app. Concurrent capture is not allowed if the other app is using a “privacy-sensitive” audio source, either CAMCORDER or VOICE_COMMUNICATION. For more information, see Sharing Audio Input.

Android 11 adds new methods to the AudioRecord, MediaRecorder, and AAudioStream APIs. These methods enable and disable the ability to capture concurrently regardless of the selected use case.

The new methods are:

  • setPrivacySensitive()
  • isPrivacySensitive()

When setPrivacySensitive() is true, the capture use case is private and even a privileged Assistant cannot capture concurrently. This setting overrides the default behavior that depends on the audio source. For instance, VOICE_COMMUNICATION is private by default but UNPROCESSED is not.
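The UNPROCESSED example above can be sketched with the AudioRecord builder:

```kotlin
import android.media.AudioRecord
import android.media.MediaRecorder

// Mark a capture as privacy-sensitive even though UNPROCESSED is not
// private by default; no other app can then capture concurrently.
fun buildPrivateRecorder(): AudioRecord =
    AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.UNPROCESSED)
        .setPrivacySensitive(true)
        .build()
```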


