Information about the Android SDK
The Android SDK includes a core native library (closed-source) and a reference UI implementation provided in source code format. This setup allows you to fully customize the UI according to your application’s requirements. While this page provides an overview, you can read more detailed documents at Android SDK Documentation.
The Android SDK comes in two variants:
Online: This variant is small in size, but it needs to connect with the server for SensePrint eID verification/decryption.
Offline: This variant is larger in size, but it can be used to verify/decrypt SensePrint eIDs with no network connectivity and completely on a mobile device.
Both variants utilize the same reference UI implementation, meaning the same Kotlin source code is used for both. The provided Android project includes a gradle task for switching between the variants.
This gradle task updates the project's dependencies, app name, application id, and app icons, and changes the Constants for the respective variant. After performing a gradle sync, the app can be built in the selected variant.
The Constants class contains app-wide configuration values that customize the flow of the app. The following constants are notable for customization; a sketch of a configured Constants object follows the list.
GENERATION_CAPTURE_MODE - the face capture method used while generating a SensePrint. Can be one of FaceCaptureMode.ActiveCaptureFrontCamera, FaceCaptureMode.PassiveCaptureFrontCamera, or FaceCaptureMode.PassiveCaptureBackCamera.
VERIFICATION_CAPTURE_MODE - the face capture method used while verifying a SensePrint. Can be one of FaceCaptureMode.ActiveCaptureFrontCamera, FaceCaptureMode.PassiveCaptureFrontCamera, or FaceCaptureMode.PassiveCaptureBackCamera.
SENSEPRINT_VERIFIER_AUTH_KEY - It is possible to generate SensePrint eID QR codes targeted at a specific verifier. See the section on Generating your first SensePrint eID QR. When the verifiers_auth_key attribute is specified while generating the QR, that QR code can only be read by a specific verifier. Set the value of SENSEPRINT_VERIFIER_AUTH_KEY to the value you used for verifiers_auth_key while generating the QR code.
API_SERVER_URL - this value should point to the server that you deployed using Docker. If you exposed the server using ngrok, use the ngrok URL value.
MOBILE_AUTH_HEADER - this value should be set to the mobile_api_key value you used in the secrets.json file while starting the Docker container for the server. If you are using a JWT Token for mobile authorization, this should be set to Bearer your_jwt_token_from_server.
ISSUERS_PUBLIC_KEY - this value is present only in the offline SDK. If you started the server in Certificate Authority mode by setting the issuers_private_key, the server will issue signed SensePrints. In order to verify them, you must set the corresponding ISSUERS_PUBLIC_KEY in the Constants.
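For illustration, a configured Constants object might look like the sketch below. The constant names are the ones described above; all values shown are placeholders (an assumed ngrok URL, example keys) that you would replace with your own deployment's settings.

```kotlin
// Sketch of a configured Constants object for the offline variant.
// The constant names match the descriptions above; all values are placeholders.
object Constants {
    // Face capture method used while generating a SensePrint.
    val GENERATION_CAPTURE_MODE = FaceCaptureMode.ActiveCaptureFrontCamera

    // Face capture method used while verifying a SensePrint.
    val VERIFICATION_CAPTURE_MODE = FaceCaptureMode.PassiveCaptureFrontCamera

    // Must match the verifiers_auth_key used while generating the QR (if any).
    const val SENSEPRINT_VERIFIER_AUTH_KEY = "shared-verifier-secret"

    // Your deployed server, e.g. the ngrok URL if the server is exposed via ngrok.
    const val API_SERVER_URL = "https://example.ngrok.io"

    // Either the mobile_api_key from secrets.json or "Bearer <your_jwt_token_from_server>".
    const val MOBILE_AUTH_HEADER = "your_mobile_api_key"

    // Offline SDK only: Base64 public key corresponding to the server's issuers_private_key.
    const val ISSUERS_PUBLIC_KEY = "BASE64_SECP256K1_PUBLIC_KEY"
}
```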
SplashActivity Responsible for initializing the SDK and loading the necessary Machine Learning models into memory. The SplashActivity culminates by loading the MainActivity.
MainActivity A simple interface displaying entry points for SensePrint eID QR generation, verification and reading. Users can interact with buttons to proceed with either operation.
QRScanActivity
This activity can be invoked with or without intent extras.
When invoked without any extras, the activity will interpret the launch as an attempt to capture QR data that will subsequently be decrypted via a face scan.
Based on the capture type configured in Constants.kt, after scanning a QR code (when invoked without intent data), the app will proceed to the PreScanningActiveCaptureActivity for Active Face Capture or the PreScanningPassiveCaptureActivity for Passive Face Capture.
When the QRScanActivity is invoked with an extra, the activity will attempt to read the non-face-encrypted regions of a SensePrint eID QR code and show that data by passing it in an intent and launching the ReaderDetailActivity.
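For example, both launch modes could be triggered as in the sketch below. The extra key name EXTRA_READ_ONLY is a hypothetical placeholder; check the reference UI source for the actual key the activity expects.

```kotlin
import android.app.Activity
import android.content.Intent

// Launch QRScanActivity either to capture a SensePrint QR for face-based
// decryption (no extras) or to read only the clear-text regions (with an extra).
// "EXTRA_READ_ONLY" is a hypothetical extra name used purely for illustration.
fun startQrScan(activity: Activity, readOnly: Boolean) {
    val intent = Intent(activity, QRScanActivity::class.java)
    if (readOnly) {
        intent.putExtra("EXTRA_READ_ONLY", true) // ends in ReaderDetailActivity
    }
    activity.startActivity(intent)
}
```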
ActiveFaceCaptureActivity
This activity may be invoked with or without intent extras.
When invoked with an extra containing the SensePrint bytes (from the QRScanActivity), the activity will attempt to decrypt the SensePrint bytes and pass the metadata along to the PersonDetailActivity.
When invoked without any extras, the activity will pass the captured image via an intent by launching the GenerateQRActivity.
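A minimal sketch of the two invocation paths is shown below; the extra key EXTRA_SENSEPRINT_BYTES is an assumed name used only for illustration.

```kotlin
import android.app.Activity
import android.content.Intent

// Verification path: hand the scanned SensePrint bytes to the capture screen,
// which decrypts them and forwards the metadata to PersonDetailActivity.
// "EXTRA_SENSEPRINT_BYTES" is a hypothetical extra name.
fun startVerificationCapture(activity: Activity, sensePrintBytes: ByteArray) {
    val intent = Intent(activity, ActiveFaceCaptureActivity::class.java)
    intent.putExtra("EXTRA_SENSEPRINT_BYTES", sensePrintBytes)
    activity.startActivity(intent)
}

// Generation path: no extras, so the captured image is forwarded to GenerateQRActivity.
fun startGenerationCapture(activity: Activity) {
    activity.startActivity(Intent(activity, ActiveFaceCaptureActivity::class.java))
}
```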
PassiveFaceCaptureActivity
This activity works in the same way as the ActiveFaceCaptureActivity; the only difference is that it captures a single good image from the session.
ReaderDetailActivity This activity gets clear text data (non-face encrypted) in an intent and displays it on the screen for viewing.
PersonDetailActivity This activity shows the eID attributes that are passed to it via an intent extra.
GenerateQRActivity
When invoked from the ActiveFaceCaptureActivity or PassiveFaceCaptureActivity, this activity prompts for input of eID attributes such as name, ID, etc. Upon accepting the input, the activity invokes the ShowQRActivity to show the generated QR code.
ShowQRActivity This activity accepts QR code bytes as an intent extra, and shows a generated QR code containing those bytes on the screen.
The SDK is designed for modularity, allowing full customization of the user interface and integration flows while maintaining core functionality within a closed-source native library.
This document serves as a guide to understand and integrate the Android SDK into your application. Be sure to configure the constants accurately to align with your deployment setup and security requirements.
Biometric Verification and decryption on mobile
The SenseCrypt Mobile SDKs allow a mobile app to read and decrypt SensePrint eID QR codes.
The SDKs have two flavours: online and offline.
The online SDK is small in size, but it needs to connect with the server for SensePrint eID verification/decryption.
The offline SDK is larger in size, but it can be used to verify/decrypt SensePrint eIDs with no network connectivity and completely on a mobile device.
The choice between online and offline versions will depend on your use-case and deployment scenario.
Both online and offline SDKs are offered for Android and iOS.
Information about licensing and authorization for the mobile SDKs
The licensing for the Mobile SDKs is governed by a license file called mobile.lic.
The Android SDK is located in the SenseCrypt-Android-SDK folder. The following screenshot shows the location of the mobile.lic file and the closed-source core SDK library in the reference UI application:
The iOS SDK is located in the SenseCrypt-iOS-SDK folder. The following screenshot shows the location of the mobile.lic file and the closed-source core SDK library in the reference UI application:
Like the server.lic files, mobile.lic determines the usage quotas for mobile devices.
Information about the iOS SDK
The iOS SDK includes a core native library (closed-source) and a reference UI implementation provided in source code format. This setup allows you to fully customize the UI according to your application’s requirements. While this page provides an overview, you can read more detailed documents at the iOS SDK Documentation.
The iOS SDK comes in two variants:
Online: This variant is small in size, but it needs to connect with the server for SensePrint eID verification/decryption.
Offline: This variant is larger in size, but it can be used to verify/decrypt SensePrint eIDs with no network connectivity and completely on a mobile device.
Both variants utilize the same reference UI implementation. This means the same Swift source code is used for both variants. In the provided Xcode workspace there are two Xcode projects that point to the same source files. Based on the selected scheme you can build & run either the online or the offline app variant.
The Constants files specify configuration parameters that are set before compiling. The screenshot below shows the location of the Constants files for the Online and Offline variants:
API_SERVER_URL - This value must point to your deployed server (e.g., via Docker). If using a temporary tunneling service like ngrok, ensure this constant holds the correct ngrok URL.
SplashScreen Responsible for initializing the SDK and loading the necessary Machine Learning models into memory. This is the first view triggered when the application launches. The SplashScreen culminates by loading the MainView.
MainView A simple interface displaying entry points for SensePrint eID QR generation, verification and reading. Users can interact with buttons to proceed with either operation.
GenerateQRView Triggered from either the ActiveFaceCaptureView or the PassiveFaceCaptureView, this view allows the user to input eID attributes (e.g., name, record ID) needed for QR code generation.
QRScannerView Scans the generated QR code containing the user’s face and other details. If the QR code is password-protected, the user will be prompted to enter the password before personal details can be decrypted.
PassiveFaceCaptureView This view captures a single, good image of a user using the camera. The captured image can then be used to generate/verify a SensePrint QR.
ActiveFaceCaptureView This view uses the camera to capture facial images but prompts the user to position their head in 3 random positions before their face is captured. This presents added security against injection attacks.
PersonDetailView This View contains all the details the user will see after successfully scanning and decrypting a SensePrint QR code.
The SDK is designed for modularity, allowing full customization of the user interface and integration flows while maintaining core functionality within a closed-source native library.
This document serves as a guide to understand and integrate the iOS SDK into your application. Be sure to configure the constants accurately to align with your deployment setup and security requirements.
How online mobile SDKs connect with the server
The online SDKs function by communicating with the SenseCrypt server using an end-to-end encrypted protocol.
There are two methods available for authorization:
Mobile API Key - Configured in the app source code constants, this is the easiest method during development. Correspondingly, the mobile key is configured on the server as we saw in the server's section.
JWT Token - A JWT token mechanism allows you to use your own authorization mechanism. To use a device-specific JWT, follow these steps:
Get the device ID by calling getDeviceId() from the mobile SDK.
Pass the device ID along with authentication parameters (such as username/password) to your own application server.
If authentication succeeds, make a server-to-server call from your application server to your SenseCrypt server's /gen-jwt end-point, passing in the device ID as the instance_id. See the section for more details. The server-to-server call can be authorized using the api_key that you defined in the section.
Return the generated JWT token to the mobile device.
Store the JWT token on the device for future use.
When initializing the mobile SDK, pass the server URL along with Bearer your_jwt_token as the authentication parameter.
By default, the source code uses the Mobile API Key configured in the code constants. Since a JWT flow involves your own authentication, implementing such a flow is left to you. However, the mobile SDKs do support such a flow out of the box should you choose to implement it.
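The Kotlin sketch below illustrates the client side of the JWT flow. Only getDeviceId() is named above; authenticateAndFetchJwt(), saveToken() and initSdk() are hypothetical helpers that stand in for your own application-server API, your token storage, and the SDK's initialization entry point.

```kotlin
// Illustrative-only sketch of the device-specific JWT flow. Only getDeviceId()
// is named by this documentation; the other functions are hypothetical.
suspend fun authorizeWithJwt(username: String, password: String) {
    // 1. Ask the SDK for this device's ID.
    val deviceId = getDeviceId()

    // 2-4. Your application server authenticates the user, then makes a
    // server-to-server call to the SenseCrypt /gen-jwt end-point (authorized
    // with its api_key, passing deviceId as instance_id) and returns the JWT.
    val jwt = authenticateAndFetchJwt(username, password, deviceId)

    // 5. Store the JWT on the device for future use.
    saveToken(jwt)

    // 6. Initialize the SDK with the server URL and "Bearer <jwt>" as the
    //    authentication parameter instead of the mobile API key.
    initSdk(serverUrl = Constants.API_SERVER_URL, authHeader = "Bearer $jwt")
}
```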
AUTH_HEADER - This value should be set to the mobile_api_key value you used in the secrets.json file while starting the Docker container for the server. If you are using a JWT Token for mobile authorization, this should be set to Bearer your_jwt_token_from_server.
SENSEPRINT_VERIFIER_AUTH_KEY (optional) - A secret that is shared with the generator of the eID SensePrint. If it is specified, the eID issuer must specify the same value while generating a SensePrint for the app to accept it.
ISSUERS_PUBLIC_KEY - A Secp256k1 curve public key in Base64 format. When the SenseCrypt server is operating in Certificate Authority mode (it has an issuers_private_key set in its configuration), the corresponding public key must be specified in the app, as all SensePrints generated by the server are signed when it is operating in CA mode. The app must verify signed SensePrints using the public key.
The Face Capture methods available in the mobile SDKs
A Liveness check refers to the act of determining if an image captured by a camera is of a real person rather than their image printed on paper, shown on a screen, or a capture of someone wearing a 3D Mask.
Before a Liveness check, one or more good images of a face must be captured.
There are two methods for capturing faces: Active Face Capture and Passive Face Capture.
Active Face Capture asks the user to move their head in 3 random directions while Passive Face Capture works with just a single good image. The aim of Active Face Capture is to avoid injection attacks where a malicious actor injects a single image such that the mobile app believes that the image comes from a legitimate camera. Since it is harder to capture a video of a user in which their head position is in all requested directions, Active Face Capture mitigates against injection attacks.
Even when Active Face Capture is selected as the face capture method, if liveness is enabled for SensePrint verification the capture is always followed by a Liveness check to mitigate against screen, print, and 3D mask attacks.
For customers who do not need protection against advanced threat vectors such as 3D masks, Active Face Capture in itself can act as a Liveness check. However, we recommend that customers with advanced protection requirements should always use Liveness checks on top of Active Face Capture.
Active Face Capture can only be performed using a front camera. To verify a face against a SensePrint using Active Face Capture, follow these steps:
Create an Active Face Capture Session: This must be done before the camera preview begins.
Camera Preview: The camera preview runs on the screen, capturing frames. Afterward, the frames undergo preprocessing to prepare them for the SDK. First, the images are rotated according to the SDK's requirements, then compressed into JPEG/PNG format and converted into byte arrays. These byte arrays are then passed to the SDK to determine the face position.
Active Face Capture Result: The SDK returns an Active Face Capture Result for each frame, which developers can use to update the UI. The UI needs to handle four components based on the result: the expected facial direction, the user's current head position, animation status, and direction strength indicators. Developers can update the expected direction name in the text, show or hide animations, and display a direction strength indicator to guide the user to look in specific directions (e.g., bottom, left, up). These values should be continuously updated until the face scan is completed.
Three different UI options assist the user in completing the face scan. The UI includes:
Directional Animation: Brief animations guide the user to look in the required directions.
Direction Strength: An arc indicates the direction the user should look and how far the user has turned in that direction, marking a tick when completed.
Instruction Text: A text label shows the facial direction expected by the SDK.
Progress Ticks: As each direction is completed, developers need to update the UI to reflect the completed ticks. The following image shows some of these concepts as implemented in the reference UI app:
Errors: The SDK throws an error when issues occur, such as when the face is not live, multiple faces are detected, or the license has expired. Developers need to handle all these errors appropriately.
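The Kotlin sketch below ties the four steps together. The session type, the processFrame() call, the result fields, and the UI helper functions are assumed names used for illustration; the real API is shown in the reference UI application.

```kotlin
// Illustrative-only sketch of the Active Face Capture loop; type and method
// names are hypothetical stand-ins for the real SDK API.
fun runActiveCapture(frames: Sequence<ByteArray>) {
    // Step 1: create the session before the camera preview starts.
    val session = ActiveFaceCaptureSession()

    // Step 2: each preview frame is rotated, compressed to JPEG/PNG and
    // converted to a byte array before being handed to the SDK.
    for (jpegBytes in frames) {
        val result = try {
            session.processFrame(jpegBytes)
        } catch (e: Exception) {
            // Step 4: handle SDK errors (face not live, multiple faces,
            // expired license, ...).
            showError(e)
            return
        }

        // Step 3: drive the UI from the per-frame result.
        updateInstructionText(result.expectedDirection)    // expected facial direction
        updateDirectionStrength(result.directionStrength)  // arc indicator
        updateProgressTicks(result.completedDirections)    // ticks per completed direction
        if (result.isCompleted) break                      // face scan finished
    }
}
```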
Passive Face Capture can be performed using both the front and back camera. To verify a face against a SensePrint using Passive Face Capture, follow these steps:
Create a Passive Face Capture Session: Similar to the Active Face Capture sessions, this must be done before the camera preview begins.
Camera Preview: Similar to Active Face Capture, refer to the steps outlined in Active Face Capture Step 2 for guidance.
Passive Face Capture Processing Result: Based on the result, the developer should update the UI accordingly. There are two key variables: currentHeadPose and isCompleted. The expected head pose should be displayed on the UI to guide the user in completing the face scan.
Errors: Similar to Active Face Capture; refer to the steps outlined in Active Face Capture, Step 4.
Once the face scan is completed, the session is passed to the next screen and it can later be used to generate or verify a SensePrint.
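A corresponding sketch for Passive Face Capture follows. The two result fields, currentHeadPose and isCompleted, are the variables named above; the session type, the processFrame() call, and the UI helper are hypothetical.

```kotlin
// Illustrative-only sketch of the Passive Face Capture loop. currentHeadPose
// and isCompleted are the result fields described above; everything else is
// a hypothetical stand-in for the real SDK API.
fun runPassiveCapture(frames: Sequence<ByteArray>): PassiveFaceCaptureSession {
    // Create the session before the camera preview starts.
    val session = PassiveFaceCaptureSession()

    for (jpegBytes in frames) {
        val result = session.processFrame(jpegBytes)
        showHeadPoseHint(result.currentHeadPose) // guide the user on screen
        if (result.isCompleted) break            // a single good image was captured
    }

    // The completed session is passed to the next screen, where it is used
    // to generate or verify a SensePrint.
    return session
}
```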