Using SDK Sessions

An overview of how to use ActiveFaceCaptureSession, PassiveFaceCaptureSession, and ImageSession.

The mobile SDK relies on the concept of sessions for all core actions such as SensePrint Generation, SensePrint Verification, and FaceSigning. The capture sessions ActiveFaceCaptureSession and PassiveFaceCaptureSession facilitate capturing a frame from the camera, while ImageSession performs the same actions on a single input image.
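Regardless of the session type, the general flow is the same: create a session, feed it image data until it completes, then emit one of the actions. A minimal sketch of that flow, using the ImageSession calls shown in the examples below:

import sensecrypt

# 1. Create a session (ImageSession shown; the capture sessions follow
#    the same pattern, but are fed camera frames instead)
session = sensecrypt.ImageSession()

# 2. Provide image data to the session
with open("face_crop.jpeg", "rb") as f:
    session.set_image(f.read(), True)

# 3. Emit an action on the completed session, for example
#    session.create_qr_code(...), session.verify_senseprint(...) or
#    session.senseprint_face_sign(...), as shown under "Emitting Actions"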

Note: The provided source code is in Python, but the same interface is available in the Swift and Kotlin mobile SDKs.

Providing Image Data

Image Session

import sensecrypt

session = sensecrypt.ImageSession()

# Read the image bytes from disk
with open("face_crop.jpeg", "rb") as f:
    image = f.read()

# Provide the image to the session
session.set_image(image, True)

Passive Session

import sensecrypt
import cv2

# In this example we use OpenCV camera capture
cap = cv2.VideoCapture(0)

session = sensecrypt.PassiveFaceCaptureSession()

is_completed = False
while not is_completed:
    ret, frame = cap.read()
    if not ret:
        break

    # Mirror the frame
    frame = cv2.flip(frame, 1)

    # Take a square crop of the frame
    h, w = frame.shape[:2]
    min_dim = min(h, w)
    frame = frame[(h-min_dim)//2:(h+min_dim)//2, (w-min_dim)//2:(w+min_dim)//2]

    # Convert the frame to JPEG bytes
    _, jpeg = cv2.imencode('.jpg', frame)

    # Process the frame
    result = session.process(jpeg.tobytes())
    is_completed = result.is_completed

cap.release()

Active Session

import sensecrypt
import cv2

# In this example we use OpenCV camera capture
cap = cv2.VideoCapture(0)

session = sensecrypt.ActiveFaceCaptureSession()

current_state = None
while current_state != sensecrypt.ActiveFaceCaptureStateName.ACTIVE_FACE_CAPTURE_COMPLETE:
    ret, frame = cap.read()
    if not ret:
        break

    # Mirror the frame
    frame = cv2.flip(frame, 1)

    # Take a square crop of the frame
    h, w = frame.shape[:2]
    min_dim = min(h, w)
    frame = frame[(h-min_dim)//2:(h+min_dim)//2, (w-min_dim)//2:(w+min_dim)//2]

    processing_status = session.get_processing_status()
    if processing_status.intermediate_result is not None:
        current_state = processing_status.intermediate_result.expected_user_action
    else:
        current_state = None

    if processing_status.is_processing:
        # Session is busy processing a previous frame; skip this one and
        # visualize the intermediate directional scores instead
        # (plot_scores is a visualization helper defined by the application)
        plot_scores(frame, processing_status.intermediate_result.directional_scores)
    else:
        # Convert the frame to JPEG bytes
        _, jpeg = cv2.imencode('.jpg', frame)

        # Process the frame
        result = session.process(jpeg.tobytes())
        print(result)

cap.release()

Emitting Actions

Once a session has successfully completed, we can perform different SenseCrypt actions using the face image held inside the session.

Generate SensePrint QR

request = SensePrintQrMobileRequest(
    check_live_face_before_creation=False,
    cleartext_data=None,
    metadata={"ID": "1234"},
    password=None,
    qr_format=QrFormatSchema.PNG,
    record_id="1234",
    ref_face=None,
    require_live_face=False,
    tolerance=SensePrintToleranceSchema.REGULAR,
    verifiers_auth_key=None,
    liveness_tolerance=LivenessToleranceSchema.REGULAR)

response = session.create_qr_code(request)
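
The response contains the generated SensePrint QR code. As a minimal sketch, assuming the encoded PNG bytes are exposed on the response under an attribute such as qr_code (a hypothetical name; check the SDK reference for the actual field), they could be written to disk like this:

# Sketch only: "qr_code" is a hypothetical attribute name for the
# returned PNG bytes; consult the SDK reference for the actual field.
with open("senseprint_qr.png", "wb") as out:
    out.write(response.qr_code)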

Verify SensePrint
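
The verification request takes the raw SensePrint bytes. As a minimal sketch, assuming the SensePrint was previously saved to a local file (the file name is illustrative), it could be loaded like this:

# Sketch only: the file name is illustrative; obtain the raw SensePrint
# bytes from wherever the SensePrint was stored or scanned.
with open("senseprint.bin", "rb") as f:
    senseprint_bytes = f.read()

With the bytes in hand, construct and submit the verification request: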

verification_request = SensePrintRawVerificationMobileRequest(
    senseprint=senseprint_bytes,
    password=None,
    verifiers_auth_key=None,
    liveness_tolerance=None)

verification_response = session.verify_senseprint(request=verification_request)

FaceSign

face_sign_request = SensePrintRawFaceSignMobileRequest(
    password=None,
    senseprint=senseprint_bytes,
    verifiers_auth_key=None,
    liveness_tolerance=None,
    purpose_id="test_face_sign",
    data_sha256=SHA256_HASH)

signature_base64 = session.senseprint_face_sign(request=face_sign_request)
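
Here data_sha256 carries the SHA-256 digest of the data being signed. A minimal sketch of computing it with Python's standard hashlib module (the payload is illustrative; whether the SDK expects the digest as raw bytes or a hex string should be confirmed against the SDK reference):

import hashlib

# Sketch only: compute the SHA-256 digest of the payload to be face-signed.
# The payload is illustrative; confirm the expected digest encoding
# (raw bytes vs. hex string) in the SDK reference.
data = b"document contents to sign"
SHA256_HASH = hashlib.sha256(data).hexdigest()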

Handling Errors
