API

The API functions are located under com.tenginekit.

1. Init Context

	val sdkConfig = SdkConfig()
	TengineKitSdk.getInstance().initSdk(path, sdkConfig, context)

SdkConfig

  • backend: inference backend; currently defaults to CPU
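
A minimal initialization sketch, run once (e.g. in an Application subclass) before any other call; the model path below is an assumption and should point at wherever your model files are stored:

	import android.app.Application

	class App : Application() {
		override fun onCreate() {
			super.onCreate()
			// Assumption: models live in the app's files directory; adjust to your setup
			val modelPath = filesDir.absolutePath
			val sdkConfig = SdkConfig()   // backend defaults to CPU
			TengineKitSdk.getInstance().initSdk(modelPath, sdkConfig, applicationContext)
		}
	}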

2. FaceDetect

init

	TengineKitSdk.getInstance().initFaceDetect()

predict

All face-detection functions are merged into a single call:

	val byte = ImageUtils.bitmap2RGB(bitmap)
	val faceConfig = FaceConfig().apply {
			detect = true
			landmark2d = true
			video = false
	}
	val imageConfig = ImageConfig().apply {
			data = byte
			degree = 0
			mirror = false
			height = bitmap.height
			width = bitmap.width
			format = ImageConfig.FaceImageFormat.RGB
	}
	val faces = TengineKitSdk.getInstance().detectFace(faceConfig, imageConfig)
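
The returned rect coordinates are normalized to [0, 1] (see the Face struct under Info below), so they need to be scaled back to pixel coordinates before drawing. A minimal sketch, assuming the result element type is the Face struct described below:

	import android.graphics.RectF

	// Scale a normalized face rect back to bitmap pixel coordinates
	fun toPixelRect(face: Face, width: Int, height: Int): RectF = RectF(
			face.x1 * width,
			face.y1 * height,
			face.x2 * width,
			face.y2 * height
	)

	faces?.forEach { face ->
		val rect = toPixelRect(face, bitmap.width, bitmap.height)
		// draw rect on an overlay / Canvas here
	}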

release

	TengineKitSdk.getInstance().releaseFaceDetect()

3. InsightFace

init

	TengineKitSdk.getInstance().initInsightFace()

predict

All InsightFace functions are merged into a single call:

	val byte = ImageUtils.bitmap2RGB(bitmap)
	val config = InsightFaceConfig().apply {
			scrfd = true
			recognition = true
			registered = false
			video = false
	}
	val imageConfig = ImageConfig().apply {
			data = byte
			degree = 0
			mirror = false
			height = bitmap.height
			width = bitmap.width
			format = ImageConfig.FaceImageFormat.RGB
	}
	val faces = TengineKitSdk.getInstance().detectInsightFace(imageConfig, config)
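
With recognition enabled, each result carries a 512-dimensional feature. How features are compared is not specified here; a common approach is cosine similarity. A sketch, assuming feature is exposed as a FloatArray and registeredFeature is a hypothetical, previously stored feature:

	import kotlin.math.sqrt

	fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
		var dot = 0f
		var normA = 0f
		var normB = 0f
		for (i in a.indices) {
			dot += a[i] * b[i]
			normA += a[i] * a[i]
			normB += b[i] * b[i]
		}
		return dot / (sqrt(normA) * sqrt(normB))
	}

	// registeredFeature: hypothetical FloatArray stored at registration time
	val feature = faces?.firstOrNull()?.feature
	if (feature != null) {
		// The 0.6 threshold is illustrative, not a value prescribed by the SDK
		val isSamePerson = cosineSimilarity(registeredFeature, feature) > 0.6f
	}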

release

	TengineKitSdk.getInstance().releaseInsightFace()

4. SegBody

init

	TengineKitSdk.getInstance().initSegBody()

predict

Directly returns a mask. The mask is an Android Bitmap with a width of 398 and a height of 224, in ARGB_8888 format.

	val byte = ImageUtils.bitmap2RGB(bitmap)
	val config = SegConfig()
	val imageConfig = ImageConfig().apply {
			data = byte
			degree = 0
			mirror = false
			height = bitmap.height
			width = bitmap.width
			format = ImageConfig.FaceImageFormat.RGB
	}
	val bitmapMask = TengineKitSdk.getInstance().segHuman(imageConfig, config)
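
Since the mask comes back at a fixed 398x224, it usually has to be scaled to the source image size before compositing. A minimal sketch:

	import android.graphics.Bitmap

	// Resize the 398x224 ARGB_8888 mask to the original bitmap size
	val scaledMask = Bitmap.createScaledBitmap(bitmapMask, bitmap.width, bitmap.height, true)
	// scaledMask can now be blended with the source bitmap as a segmentation/alpha mask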

release

	TengineKitSdk.getInstance().releaseSegBody()

5. BodyDetect

init

	TengineKitSdk.getInstance().initBodyDetect()

predict

All body-detection functions are merged into a single call:

	val data = ImageUtils.bitmap2RGB(bitmap)
	val imageConfig = ImageConfig().apply {
			this.data = data
			this.format = ImageConfig.FaceImageFormat.RGB
			this.height = bitmap.height
			this.width = bitmap.width
			this.mirror = false
			this.degree = 0
	}
	val bodyConfig = BodyConfig()
	val bodies = TengineKitSdk.getInstance().bodyDetect(imageConfig, bodyConfig)
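
Body rect values are normalized like face rects; a short consumption sketch (enable BodyConfig.landmark, described below, if the 16 key points are needed as well):

	bodies?.forEach { body ->
			// rect values are normalized to [0, 1]; scale them to bitmap pixels
			val left = body.x1 * bitmap.width
			val top = body.y1 * bitmap.height
			val right = body.x2 * bitmap.width
			val bottom = body.y2 * bitmap.height
			// draw left/top/right/bottom on an overlay; body.landmark (if not null) holds 16 key points
	}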

release

	TengineKitSdk.getInstance().releaseBodyDetect()

6. Release Context

	TengineKitSdk.getInstance().release()

DataStruct

Config

ImageConfig

  • data(byte[]): raw image data as a byte array

  • degree(int): rotation degree; required in camera mode so the frame is rotated to the correct orientation before detection

  • width(int): set bitmap width or preview width

  • height(int): set bitmap height or preview height

  • format(enum ImageConfig.FaceImageFormat): image format; currently RGB and NV21 are supported (see the camera sketch below)
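
For camera preview frames the same ImageConfig is used with NV21 data plus a rotation degree. A sketch, assuming a preview callback delivering NV21 frames that need a 90-degree rotation; the variable names and the exact NV21 enum constant are assumptions to verify against FaceImageFormat:

	// nv21Bytes / previewWidth / previewHeight come from the camera preview callback (illustrative names)
	val previewConfig = ImageConfig().apply {
			data = nv21Bytes
			degree = 90          // rotate so faces are upright; device and camera dependent
			mirror = true        // typically true for the front camera
			width = previewWidth
			height = previewHeight
			format = ImageConfig.FaceImageFormat.NV21   // constant name assumed; check the enum
	}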

FaceConfig

  • detect(boolean): set true to detect face rects

  • landmark2d(boolean): set true to also return 2D landmarks in addition to face rects

  • video(boolean): set true when processing a camera/video stream

InsightFaceConfig

  • scrfd(boolean): set true to run SCRFD face detection

  • recognition(boolean): set true to run ArcFace face recognition

  • registered(boolean): set true if the face has already been registered

SegConfig

  • default portrait segmentation config; no options at the moment

BodyConfig

  • landmark(boolean): set true to return body landmarks

Info

Face

All detected values are normalized to the range 0 to 1; a short usage sketch follows the field list below.

  • x1: face rect left

  • y1: face rect top

  • x2: face rect right

  • y2: face rect bottom

  • landmark: if not null, contains 212 face key points

  • headX: head pose pitch angle

  • headY: head pose yaw angle

  • headZ: head pose roll angle

  • leftEyeClose: Left eye closure confidence 0~1

  • rightEyeClose: Right eye closure confidence 0~1

  • mouthClose: Mouth closure confidence 0~1

  • mouthBigOpen: wide-open mouth confidence 0~1
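
A sketch interpreting the attribute fields above; the 0.5 thresholds are illustrative only:

	faces?.forEach { face ->
			// head pose angles (pitch / yaw / roll)
			val pitch = face.headX
			val yaw = face.headY
			val roll = face.headZ
			// eye and mouth states; tune the thresholds for your use case
			val eyesClosed = face.leftEyeClose > 0.5f && face.rightEyeClose > 0.5f
			val mouthWideOpen = face.mouthBigOpen > 0.5f
	}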

InsightFace

  • x1: face rect left

  • y1: face rect top

  • x2: face rect right

  • y2: face rect bottom

  • landmark: if not null, contains 5 face key points

  • confidence: face detection confidence

  • feature: if not null, contains a 512-dimensional face feature vector

Body

  • x1: body rect left

  • y1: body rect top

  • x2: body rect right

  • y2: body rect bottom

  • landmark: if not null, contains 16 body key points