This tutorial shows you how to perform face liveness detection using a combination of server-side code and a client-side mobile application.
Tip
For general information about face liveness detection, see the conceptual guide.
This tutorial demonstrates how to operate a frontend application and an app server to perform liveness detection, including the optional step of face verification, across various platforms and languages.
Important
The Face client SDKs for liveness are a gated feature. You must request access to the liveness feature by filling out the Face Recognition intake form. When your Azure subscription is granted access, you can download the Face liveness SDK.
Prerequisites
- An Azure subscription - Create one for free
- Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
- You need the key and endpoint from the resource you create to connect your application to the Face service.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- Access to the Azure AI Vision Face Client SDK for mobile (iOS and Android) and web. To get started, you need to apply for the Face Recognition Limited Access features to get access to the SDK. For more information, see the Face Limited Access page.
- Familiarity with the face liveness detection feature. See the conceptual guide.
Prepare the SDK
We provide SDKs in different languages to simplify development on frontend applications and app servers:
Download the SDK for the frontend application
Follow the instructions in the azure-ai-vision-sdk GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications, and JavaScript for web applications:
Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user to adjust their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload.
You can monitor the Releases section of the SDK repository for new SDK version updates.
Download the Azure AI Face client library for the app server
The app server/orchestrator is responsible for controlling the lifecycle of a liveness session. The app server has to create a session before performing liveness detection, and then it can query the result and delete the session when the liveness check is finished. We offer a library in various languages to make it easy to implement your app server. Follow these steps to install the package you want:
Important
To create environment variables for your Azure Face service key and endpoint, see the quickstart.
The high-level steps involved in liveness orchestration are as follows:
The frontend application starts the liveness check and notifies the app server, as sketched below.
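For illustration only, here's a minimal sketch of that first handoff from a web frontend. The /api/getLivenessSession route and its response shape are assumptions for this sketch, defined by your own app server rather than by the Face API:
// Hypothetical frontend helper: ask your own app server to create a liveness session.
async function requestLivenessSession() {
  const response = await fetch('/api/getLivenessSession', { method: 'POST' });
  if (!response.ok) {
    throw new Error(`App server failed to create a session: ${response.status}`);
  }
  // The app server should forward only the authToken, never the Face API key.
  const { authToken } = await response.json();
  return authToken;
}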
The app server creates a new liveness session with the Azure AI Face service. The service creates a liveness session and responds back with a session authorization token. More information regarding each request parameter involved in creating a liveness session is referenced in Liveness Create Session Operation.
var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
var createContent = new CreateLivenessSessionContent(LivenessOperationMode.Passive)
{
DeviceCorrelationId = "723d6d03-ef33-40a8-9682-23a1feb7bccd",
EnableSessionImage = true,
};
var createResponse = await sessionClient.CreateLivenessSessionAsync(createContent);
var sessionId = createResponse.Value.SessionId;
Console.WriteLine($"Session created.");
Console.WriteLine($"Session id: {sessionId}");
Console.WriteLine($"Auth token: {createResponse.Value.AuthToken}");
String endpoint = System.getenv("FACE_ENDPOINT");
String accountKey = System.getenv("FACE_APIKEY");
FaceSessionClient sessionClient = new FaceSessionClientBuilder()
.endpoint(endpoint)
.credential(new AzureKeyCredential(accountKey))
.buildClient();
CreateLivenessSessionContent parameters = new CreateLivenessSessionContent(LivenessOperationMode.PASSIVE)
.setDeviceCorrelationId("723d6d03-ef33-40a8-9682-23a1feb7bccd")
.setEnableSessionImage(true);
CreateLivenessSessionResult creationResult = sessionClient.createLivenessSession(parameters);
System.out.println("Session created.");
System.out.println("Session id: " + creationResult.getSessionId());
System.out.println("Auth token: " + creationResult.getAuthToken());
endpoint = os.environ["FACE_ENDPOINT"]
key = os.environ["FACE_APIKEY"]
face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key))
created_session = await face_session_client.create_liveness_session(
CreateLivenessSessionContent(
liveness_operation_mode=LivenessOperationMode.PASSIVE,
device_correlation_id="723d6d03-ef33-40a8-9682-23a1feb7bccd",
enable_session_image=True,
)
)
print("Session created.")
print(f"Session id: {created_session.session_id}")
print(f"Auth token: {created_session.auth_token}")
const endpoint = process.env['FACE_ENDPOINT'];
const apikey = process.env['FACE_APIKEY'];
const credential = new AzureKeyCredential(apikey);
const client = createFaceClient(endpoint, credential);
const createLivenessSessionResponse = await client.path('/detectLiveness-sessions').post({
body: {
livenessOperationMode: 'Passive',
deviceCorrelationId: '723d6d03-ef33-40a8-9682-23a1feb7bccd',
enableSessionImage: true,
},
});
if (isUnexpected(createLivenessSessionResponse)) {
throw new Error(createLivenessSessionResponse.body.error.message);
}
console.log('Session created.');
console.log(`Session ID: ${createLivenessSessionResponse.body.sessionId}`);
console.log(`Auth token: ${createLivenessSessionResponse.body.authToken}`);
curl --request POST --___location "%FACE_ENDPOINT%/face/v1.2/detectLiveness-sessions" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^
--header "Content-Type: application/json" ^
--data ^
"{ ^
""livenessOperationMode"": ""passive"", ^
""deviceCorrelationId"": ""723d6d03-ef33-40a8-9682-23a1feb7bccd"", ^
""enableSessionImage"": ""true"" ^
}"
curl --request POST --___location "${FACE_ENDPOINT}/face/v1.2/detectLivenesswithVerify-sessions" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
--header "Content-Type: application/json" \
--data \
'{
"livenessOperationMode": "passive",
"deviceCorrelationId": "723d6d03-ef33-40a8-9682-23a1feb7bccd",
"enableSessionImage": "true"
}'
An example of the response body:
{
"sessionId": "a6e7193e-b638-42e9-903f-eaf60d2b40a5",
"authToken": "<session-authorization-token>",
"status": "NotStarted",
"modelVersion": "2024-11-15",
"results": {
"attempts": []
}
}
The app server returns the session-authorization-token back to the frontend application; a hypothetical server route for this handoff is sketched below.
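As a non-authoritative sketch of how session creation and the token handoff can pair up, here's a minimal route built on Express (an assumption; any web framework works). It reuses the client and isUnexpected setup from the JavaScript sample above:
// Hypothetical Express route pairing session creation with the token handoff.
const express = require('express');
const app = express();

app.post('/api/getLivenessSession', async (req, res) => {
  const createResponse = await client.path('/detectLiveness-sessions').post({
    body: {
      livenessOperationMode: 'Passive',
      deviceCorrelationId: '723d6d03-ef33-40a8-9682-23a1feb7bccd',
      enableSessionImage: true,
    },
  });
  if (isUnexpected(createResponse)) {
    return res.status(500).json({ error: createResponse.body.error.message });
  }
  // Keep sessionId server-side for the later result query; expose only the token.
  res.json({ authToken: createResponse.body.authToken });
});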
The frontend application uses the session-authorization-token to start the face liveness detector, which kicks off the liveness flow.
FaceLivenessDetector(
sessionAuthorizationToken = FaceSessionToken.sessionToken,
verifyImageFileContent = FaceSessionToken.sessionSetInClientVerifyImage,
deviceCorrelationId = "null",
onSuccess = viewModel::onSuccess,
onError = viewModel::onError
)
struct HostView: View {
@State var livenessDetectionResult: LivenessDetectionResult? = nil
var token: String
var body: some View {
if livenessDetectionResult == nil {
FaceLivenessDetectorView(result: $livenessDetectionResult,
sessionAuthorizationToken: token)
} else if let result = livenessDetectionResult {
VStack {
switch result {
case .success(let success):
/// <#show success#>
case .failure(let error):
/// <#show failure#>
}
}
}
}
}
faceLivenessDetector = document.createElement("azure-ai-vision-face-ui");
document.getElementById("container").appendChild(faceLivenessDetector);
faceLivenessDetector.start(session.authToken)
The SDK then starts the camera, guides the user to position correctly, and then prepares the payload to call the liveness detection service endpoint.
The SDK calls the Azure AI Vision Face service to perform the liveness detection. Once the service responds, the SDK notifies the frontend application that the liveness check has been completed.
The frontend application relays the liveness check completion to the app server, as sketched below.
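A minimal sketch of that relay, building on the web snippet above; the /api/livenessCompleted route is a hypothetical app-server endpoint, and this assumes the web SDK's start method returns a promise that settles when the flow finishes:
// Hypothetical relay: tell the app server the flow finished so it can query the result.
faceLivenessDetector.start(session.authToken).then(async () => {
  await fetch('/api/livenessCompleted', { method: 'POST' });
});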
The app server can now query for the liveness detection result from the Azure AI Vision Face service.
var getResultResponse = await sessionClient.GetLivenessSessionResultAsync(sessionId);
var sessionResult = getResultResponse.Value;
Console.WriteLine($"Session id: {sessionResult.Id}");
Console.WriteLine($"Session status: {sessionResult.Status}");
Console.WriteLine($"Liveness detection decision: {sessionResult.Result?.Response.Body.LivenessDecision}");
LivenessSession sessionResult = sessionClient.getLivenessSessionResult(creationResult.getSessionId());
System.out.println("Session id: " + sessionResult.getId());
System.out.println("Session status: " + sessionResult.getStatus());
System.out.println("Liveness detection decision: " + sessionResult.getResult().getResponse().getBody().getLivenessDecision());
liveness_result = await face_session_client.get_liveness_session_result(
created_session.session_id
)
print(f"Session id: {liveness_result.id}")
print(f"Session status: {liveness_result.status}")
print(f"Liveness detection decision: {liveness_result.result.response.body.liveness_decision}")
const getLivenessSessionResultResponse = await client.path('/detectLiveness-sessions/{sessionId}', createLivenessSessionResponse.body.sessionId).get();
if (isUnexpected(getLivenessSessionResultResponse)) {
throw new Error(getLivenessSessionResultResponse.body.error.message);
}
console.log(`Session id: ${getLivenessSessionResultResponse.body.id}`);
console.log(`Session status: ${getLivenessSessionResultResponse.body.status}`);
console.log(`Liveness detection request id: ${getLivenessSessionResultResponse.body.result?.requestId}`);
console.log(`Liveness detection received datetime: ${getLivenessSessionResultResponse.body.result?.receivedDateTime}`);
console.log(`Liveness detection decision: ${getLivenessSessionResultResponse.body.result?.response.body.livenessDecision}`);
console.log(`Session created datetime: ${getLivenessSessionResultResponse.body.createdDateTime}`);
console.log(`Auth token TTL (seconds): ${getLivenessSessionResultResponse.body.authTokenTimeToLiveInSeconds}`);
console.log(`Session expired: ${getLivenessSessionResultResponse.body.sessionExpired}`);
console.log(`Device correlation id: ${getLivenessSessionResultResponse.body.deviceCorrelationId}`);
curl --request GET --___location "%FACE_ENDPOINT%/face/v1.2/detectLiveness-sessions/<session-id>" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
curl --request GET --___location "${FACE_ENDPOINT}/face/v1.2/detectLiveness-sessions/<session-id>" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
An example of the response body:
{
"sessionId": "0acf6dbf-ce43-42a7-937e-705938881d62",
"authToken": "",
"status": "Succeeded",
"modelVersion": "2024-11-15",
"results": {
"attempts": [
{
"attemptId": 1,
"attemptStatus": "Succeeded",
"result": {
"livenessDecision": "realface",
"targets": {
"color": {
"faceRectangle": {
"top": 763,
"left": 320,
"width": 739,
"height": 938
}
}
},
"digest": "517A0E700859E42107FA47E957DD12F54211C1A021A969CD391AC38BB88295A2",
"sessionImageId": "Ab9tzwpDzqdCk35wWTiIHWJzzPr9fBCNSqBcXnJmDjbI"
}
}
]
}
}
The app server can delete the session once all the session results have been queried.
await sessionClient.DeleteLivenessSessionAsync(sessionId);
Console.WriteLine($"The session {sessionId} is deleted.");
sessionClient.deleteLivenessSession(creationResult.getSessionId());
System.out.println("The session " + creationResult.getSessionId() + " is deleted.");
await face_session_client.delete_liveness_session(
created_session.session_id
)
print(f"The session {created_session.session_id} is deleted.")
await face_session_client.close()
const deleteLivenessSessionResponse = await client.path('/detectLiveness-sessions/{sessionId}', createLivenessSessionResponse.body.sessionId).delete();
if (isUnexpected(deleteLivenessSessionResponse)) {
throw new Error(deleteLivenessSessionResponse.body.error.message);
}
console.log(`The session ${createLivenessSessionResponse.body.sessionId} is deleted.`);
curl --request DELETE --___location "%FACE_ENDPOINT%/face/v1.2/detectLiveness-sessions/<session-id>" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
curl --request DELETE --___location "${FACE_ENDPOINT}/face/v1.2/detectLiveness-sessions/<session-id>" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
Combining face verification with liveness detection enables biometric verification of a particular person of interest with an added guarantee that the person is physically present in the system.
There are two parts to integrating liveness with verification:
Step 1 - Select a reference image
Follow the tips listed in the composition requirements for ID verification scenarios to ensure that your input images give the most accurate recognition results.
Step 2 - Set up the orchestration of liveness with verification.
The high-level steps involved in liveness-with-verification orchestration are as follows:
Provide the verification reference image by either of the following two methods:
The app server provides the reference image when creating the liveness session. More information regarding each request parameter involved in creating a liveness session with verification is referenced in Liveness With Verify Create Session Operation.
var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
var createContent = new CreateLivenessWithVerifySessionContent(LivenessOperationMode.Passive)
{
DeviceCorrelationId = "723d6d03-ef33-40a8-9682-23a1feb7bccd",
EnableSessionImage = true,
};
using var fileStream = new FileStream("test.png", FileMode.Open, FileAccess.Read);
var createResponse = await sessionClient.CreateLivenessWithVerifySessionAsync(createContent, fileStream);
var sessionId = createResponse.Value.SessionId;
Console.WriteLine("Session created.");
Console.WriteLine($"Session id: {sessionId}");
Console.WriteLine($"Auth token: {createResponse.Value.AuthToken}");
Console.WriteLine("The reference image:");
Console.WriteLine($" Face rectangle: {createResponse.Value.VerifyImage.FaceRectangle.Top}, {createResponse.Value.VerifyImage.FaceRectangle.Left}, {createResponse.Value.VerifyImage.FaceRectangle.Width}, {createResponse.Value.VerifyImage.FaceRectangle.Height}");
Console.WriteLine($" The quality for recognition: {createResponse.Value.VerifyImage.QualityForRecognition}");
String endpoint = System.getenv("FACE_ENDPOINT");
String accountKey = System.getenv("FACE_APIKEY");
FaceSessionClient sessionClient = new FaceSessionClientBuilder()
.endpoint(endpoint)
.credential(new AzureKeyCredential(accountKey))
.buildClient();
CreateLivenessWithVerifySessionContent parameters = new CreateLivenessWithVerifySessionContent(LivenessOperationMode.PASSIVE)
.setDeviceCorrelationId("723d6d03-ef33-40a8-9682-23a1feb7bccd")
.setEnableSessionImage(true);
Path path = Paths.get("test.png");
BinaryData data = BinaryData.fromFile(path);
CreateLivenessWithVerifySessionResult creationResult = sessionClient.createLivenessWithVerifySession(parameters, data);
System.out.println("Session created.");
System.out.println("Session id: " + creationResult.getSessionId());
System.out.println("Auth token: " + creationResult.getAuthToken());
System.out.println("The reference image:");
System.out.println(" Face rectangle: " + creationResult.getVerifyImage().getFaceRectangle().getTop() + " " + creationResult.getVerifyImage().getFaceRectangle().getLeft() + " " + creationResult.getVerifyImage().getFaceRectangle().getWidth() + " " + creationResult.getVerifyImage().getFaceRectangle().getHeight());
System.out.println(" The quality for recognition: " + creationResult.getVerifyImage().getQualityForRecognition());
endpoint = os.environ["FACE_ENDPOINT"]
key = os.environ["FACE_APIKEY"]
face_session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key))
reference_image_path = "test.png"
with open(reference_image_path, "rb") as fd:
reference_image_content = fd.read()
created_session = await face_session_client.create_liveness_with_verify_session(
CreateLivenessWithVerifySessionContent(
liveness_operation_mode=LivenessOperationMode.PASSIVE,
device_correlation_id="723d6d03-ef33-40a8-9682-23a1feb7bccd",
enable_session_image=True,
),
verify_image=reference_image_content,
)
print("Session created.")
print(f"Session id: {created_session.session_id}")
print(f"Auth token: {created_session.auth_token}")
print("The reference image:")
print(f" Face rectangle: {created_session.verify_image.face_rectangle}")
print(f" The quality for recognition: {created_session.verify_image.quality_for_recognition}")
const endpoint = process.env['FACE_ENDPOINT'];
const apikey = process.env['FACE_APIKEY'];
const credential = new AzureKeyCredential(apikey);
const client = createFaceClient(endpoint, credential);
const createLivenessSessionResponse = await client.path('/detectLivenesswithVerify-sessions').post({
contentType: 'multipart/form-data',
body: [
{
name: 'VerifyImage',
// Note that this utilizes Node.js API.
// In browser environment, please use file input or drag and drop to read files.
body: readFileSync('test.png'),
},
{
name: 'Parameters',
body: {
livenessOperationMode: 'Passive',
deviceCorrelationId: '723d6d03-ef33-40a8-9682-23a1feb7bccd',
enableSessionImage: true,
},
},
],
});
if (isUnexpected(createLivenessSessionResponse)) {
throw new Error(createLivenessSessionResponse.body.error.message);
}
console.log('Session created:');
console.log(`Session ID: ${createLivenessSessionResponse.body.sessionId}`);
console.log(`Auth token: ${createLivenessSessionResponse.body.authToken}`);
console.log('The reference image:');
console.log(` Face rectangle: ${createLivenessSessionResponse.body.verifyImage.faceRectangle}`);
console.log(` The quality for recognition: ${createLivenessSessionResponse.body.verifyImage.qualityForRecognition}`)
curl --request POST --___location "%FACE_ENDPOINT%/face/v1.2/detectLivenesswithVerify-sessions" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%" ^
--form "Parameters=""{\\\""livenessOperationMode\\\"": \\\""passive\\\"", \\\""deviceCorrelationId\\\"": \\\""723d6d03-ef33-40a8-9682-23a1feb7bccd\\\"", ""enableSessionImage"": ""true""}""" ^
--form "VerifyImage=@""test.png"""
curl --request POST --___location "${FACE_ENDPOINT}/face/v1.2/detectLivenesswithVerify-sessions" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}" \
--form 'Parameters="{
\"livenessOperationMode\": \"passive\",
\"deviceCorrelationId\": \"723d6d03-ef33-40a8-9682-23a1feb7bccd\"
}"' \
--form 'VerifyImage=@"test.png"'
An example of the response body:
{
"sessionId": "3847ffd3-4657-4e6c-870c-8e20de52f567",
"authToken": "<session-authorization-token>",
"status": "NotStarted",
"modelVersion": "2024-11-15",
"results": {
"attempts": [],
"verifyReferences": [
{
"referenceType": "image",
"faceRectangle": {
"top": 98,
"left": 131,
"width": 233,
"height": 300
},
"qualityForRecognition": "high"
}
]
}
}
The frontend application provides the reference image when initializing the SDK. This scenario is not supported in the web solution.
FaceLivenessDetector(
sessionAuthorizationToken = FaceSessionToken.sessionToken,
verifyImageFileContent = FaceSessionToken.sessionSetInClientVerifyImage,
deviceCorrelationId = "null",
onSuccess = viewModel::onSuccess,
onError = viewModel::onError
)
struct HostView: View {
@State var livenessDetectionResult: LivenessDetectionResult? = nil
var token: String
var body: some View {
if livenessDetectionResult == nil {
FaceLivenessDetectorView(result: $livenessDetectionResult,
sessionAuthorizationToken: token)
} else if let result = livenessDetectionResult {
VStack {
switch result {
case .success(let success):
/// <#show success#>
case .failure(let error):
/// <#show failure#>
}
}
}
}
}
The app server can now query for the verification result in addition to the liveness result.
var getResultResponse = await sessionClient.GetLivenessWithVerifySessionResultAsync(sessionId);
var sessionResult = getResultResponse.Value;
Console.WriteLine($"Session id: {sessionResult.Id}");
Console.WriteLine($"Session status: {sessionResult.Status}");
Console.WriteLine($"Liveness detection decision: {sessionResult.Result?.Response.Body.LivenessDecision}");
Console.WriteLine($"Verification result: {sessionResult.Result?.Response.Body.VerifyResult.IsIdentical}");
Console.WriteLine($"Verification confidence: {sessionResult.Result?.Response.Body.VerifyResult.MatchConfidence}");
LivenessWithVerifySession sessionResult = sessionClient.getLivenessWithVerifySessionResult(creationResult.getSessionId());
System.out.println("Session id: " + sessionResult.getId());
System.out.println("Session status: " + sessionResult.getStatus());
System.out.println("Liveness detection decision: " + sessionResult.getResult().getResponse().getBody().getLivenessDecision());
System.out.println("Verification result: " + sessionResult.getResult().getResponse().getBody().getVerifyResult().isIdentical());
System.out.println("Verification confidence: " + sessionResult.getResult().getResponse().getBody().getVerifyResult().getMatchConfidence());
liveness_result = await face_session_client.get_liveness_with_verify_session_result(
created_session.session_id
)
print(f"Session id: {liveness_result.id}")
print(f"Session status: {liveness_result.status}")
print(f"Liveness detection decision: {liveness_result.result.response.body.liveness_decision}")
print(f"Verification result: {liveness_result.result.response.body.verify_result.is_identical}")
print(f"Verification confidence: {liveness_result.result.response.body.verify_result.match_confidence}")
const getLivenessSessionResultResponse = await client.path('/detectLivenesswithVerify-sessions/{sessionId}', createLivenessSessionResponse.body.sessionId).get();
if (isUnexpected(getLivenessSessionResultResponse)) {
throw new Error(getLivenessSessionResultResponse.body.error.message);
}
console.log(`Session id: ${getLivenessSessionResultResponse.body.id}`);
console.log(`Session status: ${getLivenessSessionResultResponse.body.status}`);
console.log(`Liveness detection request id: ${getLivenessSessionResultResponse.body.result?.requestId}`);
console.log(`Verification result: ${getLivenessSessionResultResponse.body.result?.response.body.verifyResult.isIdentical}`);
console.log(`Verification confidence: ${getLivenessSessionResultResponse.body.result?.response.body.verifyResult.matchConfidence}`);
curl --request GET --___location "%FACE_ENDPOINT%/face/v1.2/detectLivenesswithVerify-sessions/<session-id>" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
curl --request GET --___location "${FACE_ENDPOINT}/face/v1.2/detectLivenesswithVerify-sessions/<session-id>" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
An example of the response body:
{
"sessionId": "93fd6f13-4161-41df-8a22-80a38ef53836",
"authToken": "",
"status": "Succeeded",
"modelVersion": "2024-11-15",
"results": {
"attempts": [
{
"attemptId": 1,
"attemptStatus": "Succeeded",
"result": {
"livenessDecision": "realface",
"targets": {
"color": {
"faceRectangle": {
"top": 669,
"left": 203,
"width": 646,
"height": 724
}
}
},
"digest": "EE664438FDF0535C6344A468181E4DDD4A34AC89582D4FD6E9E8954B843C7AA7",
"verifyResult": {
"matchConfidence": 0.08172279,
"isIdentical": false
}
}
}
],
"verifyReferences": [
{
"faceRectangle": {
"top": 98,
"left": 131,
"width": 233,
"height": 300
},
"qualityForRecognition": "high"
}
]
}
}
The app server can delete the session if you don't query its result anymore.
await sessionClient.DeleteLivenessWithVerifySessionAsync(sessionId);
Console.WriteLine($"The session {sessionId} is deleted.");
sessionClient.deleteLivenessWithVerifySession(creationResult.getSessionId());
System.out.println("The session " + creationResult.getSessionId() + " is deleted.");
await face_session_client.delete_liveness_with_verify_session(
created_session.session_id
)
print(f"The session {created_session.session_id} is deleted.")
await face_session_client.close()
const deleteLivenessSessionResponse = await client.path('/detectLivenesswithVerify-sessions/{sessionId}', createLivenessSessionResponse.body.sessionId).delete();
if (isUnexpected(deleteLivenessSessionResponse)) {
throw new Error(deleteLivenessSessionResponse.body.error.message);
}
console.log(`The session ${createLivenessSessionResponse.body.sessionId} is deleted.`);
curl --request DELETE --___location "%FACE_ENDPOINT%/face/v1.2/detectLivenesswithVerify-sessions/<session-id>" ^
--header "Ocp-Apim-Subscription-Key: %FACE_APIKEY%"
curl --request DELETE --___location "${FACE_ENDPOINT}/face/v1.2/detectLivenesswithVerify-sessions/<session-id>" \
--header "Ocp-Apim-Subscription-Key: ${FACE_APIKEY}"
Optionally, you can perform further face operations after the liveness check, such as face analysis (to get face attributes, for example) and/or face identity operations.
- To enable this, you need to set the "enableSessionImage" parameter to "true" during the Session-Creation step.
- After the session completes, you can extract the "sessionImageId" from the Session-Get-Result step.
- You can now either download the session image (referenced in the Liveness Get Session Image Operation API), or provide the "sessionImageId" in the Detect From Session Image Id API operation to continue to perform other face analysis or face identity operations. A rough sketch of both calls follows this list.
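The sketch below reuses the JavaScript client from the earlier samples. The paths and parameters shown here are assumptions based on the two operations named above; confirm them against the linked API references before relying on them:
// Sketch only: verify the exact paths and parameters in the API references above.
// Download the session image captured during the liveness check.
const imageResponse = await client
  .path('/sessionImages/{sessionImageId}', sessionImageId)
  .get();

// Or run face detection against the stored image by ID instead of re-uploading it.
const detectResponse = await client.path('/detect').post({
  queryParameters: {
    detectionModel: 'detection_03',
    recognitionModel: 'recognition_04',
    returnFaceId: false,
  },
  body: { sessionImageId },
});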
For more information on these operations, see the Face detection concepts and the Face recognition concepts guides.
Support options
In addition to using the main Azure AI services support options, you can also post your questions in the Issues section of the SDK repository.
Related content
To learn how to integrate the liveness solution into your existing application, see the Azure AI Vision SDK reference.
To learn more about the features available to orchestrate the liveness solution, see the Session REST API reference.