Welcome to TengineKit’s documentation!

Summary

TengineKit logo


TengineKit, developed by OPEN AI LAB, is an easy-to-integrate AI algorithm SDK. It currently runs on a wide range of mobile phones with very low latency. We will continue to update this project for better results and better performance!

Have a try

  • Download and install the APK directly on your phone to see the effect, or

  • scan the QR code to download the APK

https://www.pgyer.com/app/qrcode/A0uD?sign=&auSign=&code=Apk

Goals

  • Provide the best performance on mobile clients

  • Provide the simplest API on mobile clients

  • Provide the smallest package on mobile clients

Features

  • Face detection

  • Face landmarks

  • Face 3D landmarks

  • Face attributes, for example: age, gender, smile, glasses

  • Eye iris & landmarks

  • Body detection

  • Hand detection (real-time, not yet on mobile)

  • Hand landmarks (real-time, not yet on mobile)

  • Body detection, Google (real-time, not yet on mobile)

  • Body landmarks (real-time, not yet on mobile)

  • YOLOv5

Update (2021/03/25)

  • Fixed Linux sample code error

  • Updated Android sample code, improving FPS

  • Updated Linux .so file

  • Updated Linux yolov5s

  • Fixed memory issue (Core v0.0.6)

Performance (Face Detect & Face Landmark)

CPU                  Time per frame   Frame rate
Kirin 980            4ms              250fps
Qualcomm 855         5ms              200fps
Kirin 970            7ms              142fps
Qualcomm 835         8ms              125fps
Kirin 710F           9ms              111fps
Qualcomm 439         16ms             62fps
MediaTek Helio P60   17ms             59fps
Qualcomm 450B        18ms             56fps

Demo

  • Face Detection & Face 2D Landmark

  • Face 3D Landmark & Iris

  • Upper Body Detection & Upper Body Landmark

  • Hand Detection & Hand Landmark

Gif

dance of host

Video (YouTube | BiliBili)

Introduction

This Tengine Kit app demonstrates how to use and integrate various vision based Tengine Kit features into your Android app.

Gradle Configure

First, download tengine-kit-sdk1.0.0.aar.

Then add the AAR dependency to build.gradle in the main module:

    dependencies {
        ...
        implementation files('path/tengine-kit-sdk1.0.0.aar')
        ...
    }
  • Old version API: use TengineKitCore-v0.0.4 or an older version of the API to complete the functions you need.

System

Android

  • Min Sdk Version 21

Permission

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_PHONE_STATE"/>

Access Guide

The setRotation method of the TengineKit API takes two parameters, ori and is_screen_rotate: the rotation angle and whether to follow the screen rotation. Whether it follows the screen can be controlled by the android:screenOrientation parameter in the manifest; if this parameter is not set, the SDK follows the screen rotation.

Process

1. Device preview

This part gets frames from the camera as the SDK's input. Android/source provides camera1 and camera2 examples.

2. Image format

The SDK supports RGB and YUV (NV21) input. Camera1 delivers NV21 directly, but with Camera2 you first need to use the SDK to convert android.media.Image into an NV21 byte array.
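As a quick sanity check before passing a frame to the SDK, you can verify that a buffer has the exact NV21 size: a full-resolution Y plane followed by a half-resolution interleaved VU plane. This helper is an illustrative sketch, not part of the TengineKit API:

```kotlin
// NV21 layout: width*height luma (Y) bytes, then width*height/2
// interleaved chroma (VU) bytes => total = width * height * 3 / 2.
fun nv21Size(width: Int, height: Int): Int = width * height * 3 / 2

fun isValidNv21(frame: ByteArray, width: Int, height: Int): Boolean =
    frame.size == nv21Size(width, height)
```

For a 1280x720 preview frame this expects exactly 1,382,400 bytes; a size mismatch usually means the width/height passed to ImageConfig do not match the actual preview size.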

3. Angle

We treat the vertical (portrait) screen as 0 degrees. Frames collected by the camera are rotated by some angle relative to this; if they are not preprocessed to the correct orientation, faces cannot be detected.
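The rotation to apply can be derived from the camera's sensor orientation and the current display rotation. The sketch below follows the standard Android camera orientation math; it is a way to compute the degree value you feed to the SDK, not a TengineKit API:

```kotlin
// Computes how many degrees a camera frame must be rotated so that it is
// upright (0 degrees, portrait) before detection. sensorOrientation comes
// from CameraCharacteristics.SENSOR_ORIENTATION (e.g. 90 for most back
// cameras); displayRotation is the screen rotation in degrees (0/90/180/270).
fun frameRotationDegrees(
    sensorOrientation: Int,
    displayRotation: Int,
    frontFacing: Boolean
): Int = if (frontFacing) {
    // Front cameras are mirrored, so the display rotation adds instead.
    (sensorOrientation + displayRotation) % 360
} else {
    (sensorOrientation - displayRotation + 360) % 360
}
```

For a typical back camera with sensorOrientation = 90 on an upright phone (displayRotation = 0), this returns 90, matching the common portrait-preview case.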

4. Rendering

Rendering is done at 0°, the normal orientation people expect to see. The Android part provides both Canvas and OpenGL rendering; using OpenGL can make your app perform better.

Usage

You can use Tengine Kit to detect faces in images and video.

If you find this SDK useful, please give the repository a star; your stars are the driving force behind our efforts.

Before you begin

First, download tengine-kit-sdk1.0.0.aar.

Then add the AAR dependency to build.gradle in the main module:

    dependencies {
        ...
        implementation files('path/tengine-kit-sdk1.0.0.aar')
        ...
    }

Face Detector

1. Configure the face detector

Before you apply face detection to an image, you can change any of the face detector’s default settings with a FaceConfig object. You can change the following settings:

Settings
detect      Whether or not to detect faces.
landmark2d  Whether to attempt to identify facial "landmarks": eyes, ears, nose, mouth, and so on.
video       Whether or not to use camera mode; image processing is specially optimized in camera mode.
Images
    val config = FaceConfig().apply {
        detect = true
        landmark2d = true
        video = false
    }
Video
    val config = FaceConfig().apply {
        detect = true
        landmark2d = true
        video = true
    }

2. Prepare the input image

You can change the input image settings with an ImageConfig object. You can change the following settings:

Settings
data    Image data: a byte array of raw image data.
degree  Rotation degree; needed in camera mode, where frames must be rotated to the correct angle before faces can be detected.
height  Bitmap height or preview height.
width   Bitmap width or preview width.
format  Image format; RGB and NV21 are currently supported.
Images
    val byte = ImageUtils.bitmap2RGB(bitmap)
    val imageConfig = ImageConfig().apply {
        data = byte
        degree = 0
        mirror = false
        height = bitmapHeight
        width = bitmapWidth
        format = ImageConfig.FaceImageFormat.RGB
    }
Video
    val imageConfig = ImageConfig().apply {
        data = mNV21Bytes
        degree = rotateDegree
        mirror = true
        height = previewHeight
        width = previewWidth
        format = ImageConfig.FaceImageFormat.YUV
    }

3. Use TengineKitSdk to predict

    val faces = TengineKitSdk.getInstance().detectFace(imageConfig, config)

4. Get information about detected faces

Each Face object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:

    if (faces.isNotEmpty()) {
        val faceRects = arrayOfNulls<Rect>(faces.size)
        val faceLandmarks: MutableList<List<TenginekitPoint>> = ArrayList()
        for ((i, face) in faces.withIndex()) {
            val faceLandmarkList = mutableListOf<TenginekitPoint>()
            for (j in 0..211) {
                faceLandmarkList.add(
                    j,
                    TenginekitPoint(
                        face.landmark[j * 2] * width,
                        face.landmark[j * 2 + 1] * height
                    )
                )
            }
            val rect = Rect(
                (face.x1 * width).toInt(),
                (face.y1 * height).toInt(),
                (face.x2 * width).toInt(),
                (face.y2 * height).toInt()
            )
            faceLandmarks.add(i, faceLandmarkList)
            faceRects[i] = rect
        }
    }

Human Seg

Human segmentation currently supports only portrait segmentation.

    val byte = ImageUtils.bitmap2RGB(bitmap)
    val config = SegConfig()
    val imageConfig = ImageConfig().apply {
        data = byte
        degree = 0
        mirror = false
        height = it.height
        width = it.width
        format = ImageConfig.FaceImageFormat.RGB
    }
    val bitmapMask = TengineKitSdk.getInstance().segHuman(imageConfig, config)
    imageSegMask.setImageBitmap(bitmapMask)

API

The functions described below are located under com.tenginekit.

1. Init Context

    val sdkConfig = SdkConfig()
    TengineKitSdk.getInstance().initSdk(path, sdkConfig, context)

SdkConfig

  • backend: inference backend; currently defaults to CPU

2. FaceDetect

init

	TengineKitSdk.getInstance().initFaceDetect()

predict

We merge all the functions into one interface

    val byte = ImageUtils.bitmap2RGB(bitmap)
    val faceConfig = FaceConfig().apply {
        detect = true
        landmark2d = true
        video = false
    }
    val imageConfig = ImageConfig().apply {
        data = byte
        degree = 0
        mirror = false
        height = bitmapHeight
        width = bitmapWidth
        format = ImageConfig.FaceImageFormat.RGB
    }
    val faces = TengineKitSdk.getInstance().detectFace(imageConfig, faceConfig)

release

	TengineKitSdk.getInstance().releaseFaceDetect()

3. InsightFace

init

	TengineKitSdk.getInstance().initInsightFace()

predict

We merge all the functions into one interface

    val byte = ImageUtils.bitmap2RGB(bitmap)
    val config = InsightFaceConfig().apply {
        scrfd = true
        recognition = true
        registered = false
        video = false
    }
    val imageConfig = ImageConfig().apply {
        data = byte
        degree = 0
        mirror = false
        height = it.height
        width = it.width
        format = ImageConfig.FaceImageFormat.RGB
    }
    val faces = TengineKitSdk.getInstance().detectInsightFace(imageConfig, config)

release

	TengineKitSdk.getInstance().releaseInsightFace()

4. SegBody

init

	TengineKitSdk.getInstance().initSegBody()

predict

Returns a mask directly. The mask is an Android bitmap with width 398 and height 224, in ARGB_8888 format.

    val byte = ImageUtils.bitmap2RGB(bitmap)
    val config = SegConfig()
    val imageConfig = ImageConfig().apply {
        data = byte
        degree = 0
        mirror = false
        height = it.height
        width = it.width
        format = ImageConfig.FaceImageFormat.RGB
    }
    val bitmapMask = TengineKitSdk.getInstance().segHuman(imageConfig, config)
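Since the mask comes back at a fixed 398×224 while the preview can be any size, you generally scale it before overlaying. A minimal sketch of the coordinate mapping in plain Kotlin, with no Android dependencies (the helper names are illustrative, not SDK API):

```kotlin
// The segHuman mask has a fixed size of 398x224. To overlay it on a
// preview frame, map mask coordinates to preview coordinates with
// simple per-axis (stretch-to-fit) scale factors.
fun maskToPreviewScale(previewWidth: Int, previewHeight: Int): Pair<Float, Float> =
    previewWidth / 398f to previewHeight / 224f

fun mapMaskPoint(mx: Int, my: Int, scale: Pair<Float, Float>): Pair<Float, Float> =
    mx * scale.first to my * scale.second
```

On Android you would typically achieve the same effect by drawing the mask bitmap into the preview-sized destination rect.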

release

	TengineKitSdk.getInstance().releaseSegBody()

5. BodyDetect

init

	TengineKitSdk.getInstance().initBodyDetect()

predict

We merge all the functions into one interface

    val data = ImageUtils.bitmap2RGB(bitmap)
    val imageConfig = ImageConfig().apply {
        this.data = data
        this.format = ImageConfig.FaceImageFormat.RGB
        this.height = it.height
        this.width = it.width
        this.mirror = false
        this.degree = 0
    }
    val bodyConfig = BodyConfig()
    val bodyS = TengineKitSdk.getInstance().bodyDetect(imageConfig, bodyConfig)

release

	 TengineKitSdk.getInstance().releaseBodyDetect()

6. Release Context

	TengineKitSdk.getInstance().release()

DataStruct

Config

ImageConfig
  • data(byte[]): set image data byte array of image raw data

  • degree(int): rotation degree; needed in camera mode, where frames must be rotated to the correct angle before faces can be detected

  • width(int): set bitmap width or preview width

  • height(int): set bitmap height or preview height

  • format(enum ImageConfig.FaceImageFormat): set image format; RGB and NV21 are currently supported

FaceConfig
  • detect(boolean): set true to detect face rects

  • landmark2d(boolean): set true to also get 2D landmarks in addition to face rects

  • video(boolean): set true if in camera mode

InsightFaceConfig
  • scrfd(boolean): set true to run SCRFD face detection

  • recognition(boolean): set true to run ArcFace recognition

  • registered(boolean): set true if the face has already been registered

SegConfig
  • currently only the default portrait segmentation config is available

BodyConfig
  • landmark(boolean): set true to get body landmarks

Info

Face

All detected values are normalized to the range 0 to 1.

  • x1: face rect left

  • y1: face rect top

  • x2: face rect right

  • y2: face rect bottom

  • landmark: if not null, contains 212 face key points

  • headX: face pitch angle

  • headY: face yaw angle

  • headZ: face roll angle

  • leftEyeClose: Left eye closure confidence 0~1

  • rightEyeClose: Right eye closure confidence 0~1

  • mouthClose: Mouth closure confidence 0~1

  • mouthBigOpen: wide-open mouth confidence 0~1
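Because every value is normalized, converting back to pixel coordinates is a plain multiplication by the frame size. The helper below is an illustrative sketch (not an SDK function) that expands a flattened landmark array of [x0, y0, x1, y1, ...] into pixel points:

```kotlin
// A face landmark array flattens its points as [x0, y0, x1, y1, ...],
// each coordinate normalized to 0..1. Multiply x by the image width
// and y by the image height to recover pixel positions.
fun denormalizeLandmarks(
    landmark: FloatArray,
    width: Int,
    height: Int
): List<Pair<Float, Float>> =
    (0 until landmark.size / 2).map { j ->
        landmark[j * 2] * width to landmark[j * 2 + 1] * height
    }
```

For the 212-point face landmark this expects a 424-element array; the same scaling applies to the x1/y1/x2/y2 values of the face rect.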

InsightFace
  • x1: detected face rect left

  • y1: detected face rect top

  • x2: detected face rect right

  • y2: detected face rect bottom

  • landmark: if not null, contains 5 face key points

  • confidence: face detection confidence

  • feature: if not null, contains a 512-dimensional face feature vector
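The 512-dimensional feature vector is typically compared with cosine similarity to decide whether two faces belong to the same person. This is a generic sketch, not a TengineKit API, and the threshold is an assumption you should tune on your own data:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two face feature vectors: 1.0 means the same
// direction, 0.0 means unrelated. Vectors must have the same length.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "feature vectors must match in length" }
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
    }
    return dot / (sqrt(na) * sqrt(nb))
}

// Hypothetical decision rule: similarity above a tuned threshold counts
// as "same person". 0.5 is a placeholder, not a recommended value.
fun isSamePerson(a: FloatArray, b: FloatArray, threshold: Float = 0.5f): Boolean =
    cosineSimilarity(a, b) > threshold
```

In a registration flow you would store the feature vector of each enrolled face and compare new detections against the stored vectors.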

Body
  • x1: detected body rect left

  • y1: detected body rect top

  • x2: detected body rect right

  • y2: detected body rect bottom

  • landmark: if not null, contains 16 body key points

Contributing

Reporting Issues

  • Github issues

  • Email: Support@openailab.com

  • QQGroup: 630836519 (TengineKit)

Submitting Your Code

  • When submitting a PR, please explain what you changed.

    • Fork it!

    • Create your branch: git checkout -b my-new-feature

    • Commit your changes: git add . && git commit -m 'Add some feature'

    • Push the branch: git push origin my-new-feature

    • Submit a pull request

Thanks a lot!!

ChangeLog

FAQ