Dlib is an open source suite of applications and libraries written in C++ under a permissive Boost license. It offers a wide range of functionality across machine learning, including classification and regression, numerical algorithms such as quadratic program solvers, an array of image processing tools, and diverse networking components, among much else.
Dlib also features robust tools for object pose estimation, object tracking, face detection (classifying a perceived object as a face) and face recognition (identifying a perceived face).
Though Dlib is a cross-platform resource, many custom workflows involving facial capture and analysis (whether recognition or detection) pair it with the OpenCV library, operating in a Python environment.
There are a number of novel APIs and interfaces for Dlib, many of which provide additional functionality not directly anticipated by the original creators, such as real-time recognition of multiple faces.
Programmer Adam Geitgey offers a FOSS face recognition API that leverages Dlib.
Geitgey's CUDA-capable library has demonstrated 99.38% accuracy on the University of Massachusetts' Labeled Faces in the Wild benchmark dataset. The project encodes Dlib's face captures into 128 data points per face, producing measurements that remain consistent for the same face across a variety of different photos. A Support Vector Machine (SVM) can then be trained on the derived encodings via scikit-learn, resulting in an agile FR model that can run with minimal latency in the right conditions.
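The matching step can be sketched with synthetic stand-ins for the 128-point encodings (in the real library these come from `face_recognition.face_encodings()`; the 0.6 distance threshold below is the library's documented default):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 128-d encodings standing in for real face encodings.
known = rng.normal(size=128)
same_face = known + rng.normal(scale=0.01, size=128)  # small perturbation
other_face = rng.normal(size=128)

def is_match(enc_a, enc_b, tolerance=0.6):
    """Match if the Euclidean distance between two encodings falls under
    the tolerance (0.6 is the face_recognition library's default)."""
    return np.linalg.norm(enc_a - enc_b) < tolerance

print(is_match(known, same_face))   # True
print(is_match(known, other_face))  # False
```

This nearest-neighbour comparison is the simplest use of the encodings; training an SVM over many labeled encodings, as described above, generalizes the same idea to multi-person classification.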
One popular Chinese GitHub repository uses Dlib to power facial reconstruction, and the eos and 4dface libraries to compute geometry and capture textures, for the creation of photorealistic mesh heads.
Many projects that utilize Dlib are themselves intended as tool-chain resources, such as Drishti, a real-time eye-tracking framework written in C++11, and intended for iOS and Android devices, as well as embedded ARM and other lightweight computing environments.
Drishti can make use of CMake for iOS deployments, with native Xcode generation smoothing the development process on Apple's platform. Since CMake integration in Android Studio still has a few rough edges, the developers offer some workarounds for implementing Drishti on Android.
Get Me Through is a Python-based FOSS solution for recognizing and admitting invitees to an event. Besides Dlib (which requires a C++11 compiler), the project uses MongoDB and Node.js v8.1.4+, and supports macOS and Linux, with untested support for Windows.
A number of repositories use Dlib as the facial recognition engine for attendance monitoring frameworks. One such project from India offers an automated pipeline, including a webcam recognition framework and automated warning emails to students who were not registered by the system during an attendance period.
Another India-based C++ GitHub project powered by Dlib uses facial captures to generate stylized anime avatars.
Dlib is increasingly being used in image synthesis applications that involve the reconstruction of faces, style transfer, or deepfake images. One legitimate use for the latter is the anonymizing of faces of 'at risk' subjects, as was accomplished for the 2020 release Welcome To Chechnya, where a modified version of DeepFaceLab was employed to superimpose 'alternative' faces on interview subjects.
DeepFaceLab offers Dlib as a face extraction tool, together with the Python library MTCNN, which has its own strengths but is prone to return more false positives than Dlib. Other popular face detection libraries include Single Shot Scale-invariant Face Detector (S3FD), which can operate well on a mobile GPU but may not get access to one, depending on resource allocation, and which runs poorly on CPU compared to its stablemates.
To aid early development of a facial recognition/detection framework in your particular operating environment, you can compare Dlib's performance to its peers with Awesome face detection, which allows you to pit six competing libraries against each other: OpenCV Haar cascade, Dlib HoG (see below), Dlib CNN (see below), MTCNN, S3FD and InsightFace.
There are many more directly commercial use cases for Dlib's face detection capabilities, where the objective is to individuate a face from images or a video stream.
Naturally, face detection also operates as a precursor or initial phase for facial recognition, where the system will attempt to maintain consistent focus on an identified face and run its characteristics through a database that's likely to turn up an ID match.
General or 'abstract' use of this application is (for obvious reasons) limited to periodic state-led technology initiatives by police and similar authorities, though these often attract controversy.
In private corporate environments, facial recognition can be used to monitor attendance and facilitate security access in diverse ways, from gaining building access to unlocking timed-out workstations, among a myriad of other possibilities.
Dlib is incredibly fast and very lightweight. It can comfortably operate at 30 fps in standard environments, and can potentially detect facial landmarks in a single millisecond, though only under the most ideal conditions. It can also run on hardware as basic as a Raspberry Pi.
Additionally, it's possible to train Dlib to identify specific shape traits in a face, for general research or for medical applications.
Dlib offers two different methods for face detection:
HoG + Linear SVM
The Histogram of Oriented Gradients (HoG) + Linear Support Vector Machine (SVM) algorithm in Dlib offers very fast recognition of front-on faces, but has limited capabilities in terms of recognizing face poses at acute angles (such as CCTV footage, or casual surveillance environments where the subject is not actively participating in the ID process).
It can also handle passport-style profile faces, though with very little margin for error (faces tilted up or down, for instance). HoG + SVM is suitable for constrained situations where the sensor can expect a direct and unobstructed view of the participant's face, such as ATM and mobile framework ID systems, as well as mobile traffic surveillance recognition systems, where cameras are able to obtain a straight profile shot of drivers.
Max-Margin (MMOD) CNN face detector
MMOD is a robust, reliable, GPU-accelerated face detector that leverages a convolutional neural network (CNN), and is far more capable of capturing faces at oblique angles and in challenging conditions, suiting it for casual surveillance and urban analysis.
MMOD is not a distinct alternative to HoG + Linear SVM, but rather can be applied to HoG itself, or to any bag-of-visual-word model, which treats discovered pixel groupings as explorable entities for potential labeling — including the identification of faces.
In such cases, these explorable entities are discovered via a three-step process:
1: Feature Extraction
Where key points in the image are detected and encoded as Scale-Invariant Feature Transform (SIFT) features.
2: Codebook/'Vocabulary' Construction (normally k-means)
At this point, it's necessary to classify the discovered features and to segment them from background information. Unsupervised k-means clustering can accomplish this well by iterating over the unlabeled data until it has minimized the sum of squared distances between each captured point and the center of its assigned cluster. Once the centers converge, each forms the centroid of a grouping, which can be fed into the next stage.
3: Vector Quantization
Vector Quantization (VQ) hails from early signal processing research, and has been a central plank of compression technologies, since it deals with extracting minimal units from a 'noisy' environment. In this workflow, VQ maps each descriptor to its nearest cluster center from stage #2 (see above), and the frequency of each recurring pattern provides a feature representation layer, converging the estimated groupings into usefully distinct entities.
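The three-step pipeline above can be sketched in NumPy, with synthetic two-dimensional descriptors standing in for real SIFT features (which are 128-dimensional):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1 stand-in: two synthetic clusters of local feature descriptors.
descriptors = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.3, size=(30, 2)),
])

# Step 2: codebook construction with a minimal k-means loop.
k = 2
centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
for _ in range(10):
    # Assign each descriptor to its nearest center...
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    # ...then recompute each center as the mean of its cluster.
    centers = np.array([descriptors[labels == i].mean(axis=0)
                        for i in range(k)])

# Step 3: vector quantization -- each descriptor is replaced by its nearest
# codeword, and the image is represented as a histogram of codeword counts.
histogram = np.bincount(labels, minlength=k)
print(histogram.sum())  # equals the number of descriptors
```

The resulting histogram is the fixed-length feature vector that a downstream classifier (an SVM, for instance) would consume.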
The appeal of HoG + Linear SVM under Dlib is its low use of resources; its efficacy when operating on CPU; the fact that it has at least some latitude for non-frontal faces; its low-impact model requirements; and a relatively capable occlusion detection routine.
Negatively, a default deployment requires a minimum face-size of 80x80 pixels. If you need to detect faces below this threshold, you'll need to train your own implementation. Additionally, this approach gives poor results on acute face angles; generates bounding boxes that may over-crop facial features; and struggles with challenging occlusion cases.
The advantage of MMOD (CNN) under Dlib is (perhaps above all) its ability to recognize difficult face orientations (which may be the deciding factor, depending on your target environment); its impressive speed when allowed access to even a moderately-specced GPU; its lightweight training architecture; and its superior occlusion handling.
Negatively, it can produce bounding boxes even more restricted than HoG + Linear SVM in a default deployment; performs notably more slowly on a CPU than HoG/LSVM; and shares HoG/LSVM's native inability to detect faces smaller than 80 pixels square — again, necessitating a custom build for certain scenarios, such as acute street surveillance viewpoints that extend into the distance.
The creator of MMOD, Davis King, has provided a number of useful open source trained models for Dlib, many (but not all) of which center on facial recognition. These include:
A Dlib Face Recognition Network model with 29 convolutional layers, an optimized version of the widely used ResNet-34 network. This model was trained on three million faces across various datasets, including FaceScrub, Oxford's VGG set, and the author's own web-scraped data.
A human face detector dataset forked from a standard Dlib set, with images annotated using Dlib's native imglab tool.
A shape predictor facial landmarking model trained on the 7198 faces in Dlib's 5-point face landmark dataset, usable with either HoG or the CNN detector.
An Imperial College London dataset designed for HoG, which is excluded from commercial use.
Dlib is a versatile and widely adopted facial recognition library, with perhaps an ideal balance of resource usage, accuracy and latency, suited for real-time face recognition in mobile app development. It's becoming a common and possibly even essential library in the facial recognition landscape, and, even in the face of more recent contenders, is a strong candidate for your computer vision and facial recognition or detection framework.