Part 3. Cloud API: Web Services APIs
CV/AI Endpoints
POST /face/detect
- Purpose: Detects and analyzes a human face in the submitted image.
- Notes
- This endpoint is used for face detection and the analysis of facial attributes (e.g., eyes, emotion, etc.).
- The response returns each type of analysis value for the target image.
- Request Method: POST
- Changelog
- Nov 10, 2017: First version
- Dec 21, 2017: Second version
- Permission
- Must verify that the app_id and app_key in the request header belong to the App before taking further actions.
- Request Parameters
- Required: choose one of the three parameters below.
- image_url: String. The URL of the image.
- image_file: String. The image file, uploaded using multipart/form-data.
- image_base64: String. The Base64-encoded image file.
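As an illustration only, the sketch below shows how a client might call this endpoint with the image_url parameter. The base URL is a placeholder, and passing app_id and app_key as request headers follows the Permission note above; adjust both to the actual deployment.

```python
import requests

# Placeholder base URL -- replace with the actual Cloud API host.
BASE_URL = "https://api.example.com"

def detect_face(image_url, app_id, app_key):
    """Call POST /face/detect with an image URL.

    Exactly one of image_url, image_file, or image_base64 must be supplied.
    """
    resp = requests.post(
        f"{BASE_URL}/face/detect",
        headers={"app_id": app_id, "app_key": app_key},  # per the Permission section
        data={"image_url": image_url},
    )
    # For image_file, send files={"image_file": open(path, "rb")} instead,
    # so the body is encoded as multipart/form-data.
    resp.raise_for_status()
    return resp.json()

# Example usage (placeholder credentials):
# attributes = detect_face("https://example.com/face.jpg", "my_app_id", "my_app_key")
```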
- Return Values
- time_used: Integer. The time the request took, in milliseconds.
- face_attributes: Object. Values generated by face analysis, consisting of the fields below.
- gender: Gender analysis result; values are Male or Female.
- age: Age analysis result; the value is an estimated age as a whole number of years.
- smile: Smile score, a value between 0 and 100; a higher score indicates a more intense detected smile.
- headpose: Head pose angle, a value between -180 and 180 degrees; -180 and 180 degrees represent the left and right profile views, respectively.
- eyestatus: Analysis of eye openness, with values between 0 and 100; a higher value indicates greater confidence that the eye is open.
- left_eye_status: Open status of the left eye, between 0 and 100.
- right_eye_status: Open status of the right eye, between 0 and 100.
- no_glass: Glasses-wearing condition; 0 = Negative, 1 = Positive.
- occlusion: The confidence that the eye is blocked, a value between 0 and 100; a higher value indicates greater confidence that the eye is occluded.
- emotion: Emotion analysis, with a value between 0 and 100 for each field below; a higher value indicates greater confidence in the emotion that field represents.
- anger: 0 ~ 100.
- disgust: 0 ~ 100.
- fear: 0 ~ 100.
- happiness: 0 ~ 100.
- neutral: 0 ~ 100.
- sadness: 0 ~ 100.
- disdain: 0 ~ 100.
- ethnicity: Result of ethnicity analytics. Values are Asian, White, Black.
- mouthstatus: Mouth status, including the fields below, each with a value between 0 and 100; a higher value indicates greater confidence in the status that field represents.
- surgical_mask_or_respirator: the confidence that the mouth is covered with a mask or respirator.
- other_occlusion: the confidence that the mouth is blocked by other things.
- close: the confidence that the mouth is closed, not blocked.
- open: the confidence that the mouth is open, not blocked.
- eyegaze: Eye-center location analysis, including the fields below.
- left_position_x_coordinate: The X coordinate of the left eye's center.
- left_position_y_coordinate: The Y coordinate of the left eye's center.
- right_position_x_coordinate: The X coordinate of the right eye's center.
- right_position_y_coordinate: The Y coordinate of the right eye's center.
- skinstatus: Skin status, with a value between 0 and 100; a higher value indicates healthier skin.
- Sample Response
- Sample response when a request has succeeded:
{
  "time_used": 123,
  "face_attributes": {
    "gender": "male",
    "age": 20,
    "smile": 80,
    "headpose": 100,
    "eyestatus": {
      "left_eye_status": 50,
      "right_eye_status": 50
    },
    "no_glass": 0,
    "occlusion": 70,
    "emotion": {
      "anger": 30,
      "disgust": 30,
      "fear": 40,
      "happiness": 50,
      "neutral": 50,
      "sadness": 60,
      "disdain": 40
    },
    "ethnicity": "asian",
    "mouthstatus": {
      "surgical_mask_or_respirator": 40,
      "other_occlusion": 50,
      "close": 60,
      "open": 60
    },
    "eyegaze": {
      "left_position_x_coordinate": 40,
      "left_position_y_coordinate": 10,
      "right_position_x_coordinate": 50,
      "right_position_y_coordinate": 20
    },
    "skinstatus": 80
  }
}
- Unique Error Messages of this API
- 400 INVALID_IMAGE_URL: The URL of the image does not exist.
- 400 INVALID_IMAGE_SIZE: The image file size exceeds 2 MB.
- 412 IMAGE_DOWNLOAD_TIMEOUT: The image download timed out.
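For illustration, the sketch below maps the documented error statuses to client-side exceptions. The structure of the error body is not specified in this section, so only the HTTP status code and raw response text are inspected; the function and parameter names are placeholders.

```python
import requests

def call_face_detect(session: requests.Session, url: str, payload: dict, headers: dict):
    """Call the endpoint and translate the documented error statuses."""
    resp = session.post(url, data=payload, headers=headers)
    if resp.status_code == 400:
        # INVALID_IMAGE_URL or INVALID_IMAGE_SIZE per the table above.
        raise ValueError(f"Bad image input: {resp.text}")
    if resp.status_code == 412:
        # IMAGE_DOWNLOAD_TIMEOUT: the image could not be fetched in time.
        raise TimeoutError(f"Image download timed out: {resp.text}")
    resp.raise_for_status()
    return resp.json()
```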
Analytics flows
- The CV/AI analytics methods can be applied at the edge or in the cloud. Here, the example cloud analytics services employ external analytics services (e.g., Face++, Google Vision, etc.) to process and analyze the incoming data.
- The DeviceMark generation pattern differs accordingly, as does the way the data is analyzed. For example (see the sketch after this list):
- in traffic-monitoring mode, the analytics depend on geometry and speed (average traffic flow…);
- in baby-monitoring mode, the analytics depend on motion detection, face landmarks, face recognition, and so on.
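This is a minimal sketch of the mode-dependent analysis described above, assuming the service selects an analytics pipeline per monitoring mode; the mode names and handler functions are illustrative placeholders, not part of the documented API.

```python
from typing import Callable, Dict

def traffic_analytics(frame):
    """Geometry- and speed-based analytics (e.g., average traffic flow)."""
    ...

def baby_analytics(frame):
    """Motion detection, face landmarks, face recognition, and so on."""
    ...

# Map each monitoring mode to its analytics pipeline.
ANALYTICS_BY_MODE: Dict[str, Callable] = {
    "traffic-monitoring": traffic_analytics,
    "baby-monitoring": baby_analytics,
}

def analyze(frame, mode: str):
    """Dispatch a frame to the analytics pipeline for the active mode."""
    try:
        return ANALYTICS_BY_MODE[mode](frame)
    except KeyError:
        raise ValueError(f"Unsupported monitoring mode: {mode}")
```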
- The following figure explains the supported CV/AI Analytics API flows.