In this article we will go over a powerful use case for the TensorFlow deep learning framework: enabling a store to automatically detect products through image classification. We will show how TensorFlow can be used to build an end-to-end pipeline that drastically cuts the time needed for stocktaking and shelf monitoring in a supermarket, using images of products along with deep learning. We will walk through setting up such a system from conception to production, explaining the technical and design decisions along the way, as well as common challenges faced when building such a system.
Image classification is a set of problems in computer vision that deals with assigning images an accurate label, called a class, usually based on features found in the image. While this can be done without deep learning, deep neural networks perform this task very accurately given a sufficiently large set of training data. An artificial neural network takes an image as input and, through training, learns the distinguishing features it needs to accurately map images to the classes provided in the training data. Once the network has been trained, it can be used to classify new images. The classes we choose during training determine the level of granularity at which images are classified (for instance: cat vs. a specific breed of cat, Coke vs. Coke Zero 12oz). Typically, the output of the classifier is a probability distribution over the possible classes, which can be interpreted as the likelihood that an image belongs to each class. The closer a given image is to the examples the network was trained on, the more likely it is to be labeled correctly. While this is easy in theory, practical application is often challenging due to the variability of real-life image data.
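As a quick illustration of what that probability distribution looks like in practice, here is a minimal sketch that runs a single image through an off-the-shelf ImageNet classifier and prints the top class probabilities. The model choice (MobileNetV2) and the image path are assumptions made purely for this example, not part of the pipeline described later.

```python
# A minimal sketch: run one image through a pretrained ImageNet classifier
# and read the output probability distribution. MobileNetV2 and the image
# path are illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Load and preprocess an image to the 224x224 input the model expects.
img = tf.keras.utils.load_img("example_product.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...]
)

# The output is a probability distribution over the 1,000 ImageNet classes.
preds = model.predict(x)
for _, name, p in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{name}: {p:.2%}")
```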
Any variation in the image, whether from the camera, lighting, angle, size or blur, can impact model performance. That makes it important to choose a robust model, a representative training set and a well-thought-out training procedure so the model generalizes better to unseen examples. This includes so-called few-shot and single-shot learning, where a model is given only a few examples or just one example of a target class, respectively, and then needs to recognize it in unseen images.
While image classification is the most common use for neural networks in computer vision, the method has some important limitations. The model outputs one label per image and can struggle with images containing multiple subjects of different classes. It also does not output the location of the object within the image. A typical pipeline therefore chains several models together to handle the separate tasks of finding an object in an image and classifying it.
TensorFlow is an end-to-end machine learning platform for training, deploying and serving deep learning models. Since its release in 2015, it has established itself as the most well-known framework for production deep learning applications and is used by countless world-class companies. As the first widely used large open-source deep learning platform, it has developed into a mature and well-supported framework with a large community and many components that make setting up, training and deploying models easier. These include, but are not limited to, the Keras high-level API for building and training models, TensorFlow Hub with its library of pre-trained models, and TensorFlow Lite for running models on mobile and edge devices.
For the chosen use case of detecting products in a retail setting, stability and flexibility are very important, as real-life applications in the field have to deal with device failure, poor network coverage or server outages. TensorFlow Lite allows us to run the model directly on the mobile device used to capture the data, eliminating the need for expensive and cumbersome server infrastructure and a constant internet connection, and letting us opt for a more lightweight implementation. While PyTorch also has a framework for running models on mobile devices, its implementation is newer and less stable than the mature TensorFlow Lite framework.
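As a rough illustration of how a trained Keras model is prepared for on-device use, here is a minimal conversion sketch; the model path and the quantization option are assumptions for the example rather than details of our deployment.

```python
# A minimal sketch of converting a trained Keras model to TensorFlow Lite
# for on-device use. The model path and the quantization flag are assumptions.
import tensorflow as tf

model = tf.keras.models.load_model("product_classifier.keras")  # hypothetical saved model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization to shrink the model
tflite_model = converter.convert()

with open("product_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```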
TensorFlow provides, through TensorFlow Hub, a variety of models for all common applications of deep learning, with architectures for classification, segmentation, generation and more. Most state-of-the-art architectures are available for TensorFlow.
For image classification, popular architectures like Inception V3, MobileNetV2, ResNet50 and others are all included. These vary in their size, pre-training datasets and focus. Each commonly used network architecture has its own design that makes it more suitable for some applications than others.
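To show how little code is needed to pull one of these architectures into a project, here is a minimal sketch that loads a MobileNetV2 feature extractor from TensorFlow Hub and attaches a classification head. The hub URL, input size and number of classes are illustrative assumptions rather than part of our final pipeline.

```python
# A minimal sketch: load a pretrained MobileNetV2 feature extractor from
# TensorFlow Hub and attach a classification head for our own classes. The
# hub URL, input size and NUM_CLASSES are illustrative assumptions.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 500  # e.g. the number of SKUs in the catalog (assumed)

feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5",
    trainable=False,  # keep the pretrained weights frozen
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    feature_extractor,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```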
As a general guide, larger networks can deal with more complex problems, while smaller networks suit smaller datasets and easier tasks. The choice of architecture therefore depends on the size of the problem as well as its planned deployment environment. Some models work well with CPU-only inference, while others require a GPU to evaluate data quickly, which sets requirements for the hardware the model will run on. In a typical project, multiple architectures are tested and evaluated to see which one suits the task best: we might try a large ResNet50 and an EfficientNet architecture, see them perform equally well, and choose EfficientNet because it requires less overhead.
Product recognition can be seen as a subtask of image classification, where the images are limited to pictures of products and the labels correspond to SKUs or product groups. The images supplied to the model can come from supplier or manufacturer catalogs, so it is generally possible to leverage data that is already available. The output of the models can then be used to quantify and track product stock and movement. The system may also leverage text present in the input image, matching OCR output against the product database to increase classification accuracy.
An AI project like our product classification pipeline usually goes through a typical cycle of stages in its development, which will be covered in detail below. The stages can be summarized as: defining the use case and gathering training data, preprocessing and augmenting the data, selecting and training the model, deploying it to its target environment, and monitoring and improving it after deployment.
We will now go into the details of building a product recognition pipeline with a custom image classifier for automated inventory management and SKU recognition. To automate the inventory-taking process, we want to be able to take a photo of a product shelf and see which products are on it. We'll concentrate on a two-model architecture that detects local features in full-view images and classifies specific product SKUs.
From the client brief, we know that the most easily available training data is the set of images in their e-commerce database, which includes a single image of every product isolated on a white background. We will use this data for prototyping, as more training data would be expensive to gather.
While these images are readily available, they are missing information that might be crucial for identifying products correctly if the display conditions are varied enough. A product may appear at an angle, with visible glare, or with occlusion, meaning part of the object is covered by something in the foreground. We will try to account for these factors in the training process via preprocessing, but knowing how uniform the target data is helps gauge how good performance can be.
Both the problem complexity and the data availability influence our choice of model architecture. Since we aim to use a single image for each SKU, we opt for an architecture optimized for single-shot learning: a ResNet feature extractor pre-trained on ImageNet combined with a TransformerNet that ensures scale invariance when detecting the feature representations of the products we encode. The neural networks perform separate tasks and feed into each other to learn to recognize products from a single image in many contexts. We can construct the networks using the Keras Sequential API, which allows us to create arbitrarily large neural networks with a few lines of code.
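As a minimal sketch of what the feature-extractor half of this setup might look like in Keras, the snippet below wraps an ImageNet-pretrained ResNet50 backbone in a Sequential model that outputs a fixed-size embedding per product image. The embedding size and pooling layer are assumptions, and the scale-invariance component described above is omitted for brevity.

```python
# A minimal sketch of the feature-extractor half of the pipeline: an
# ImageNet-pretrained ResNet50 backbone in a Keras Sequential model that
# outputs a fixed-size embedding per product image. The embedding size and
# pooling choice are assumptions; the scale-invariance component is omitted.
import tensorflow as tf

EMBEDDING_DIM = 256  # assumed size of the product feature vector

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
backbone.trainable = False  # start with the pretrained weights frozen

feature_extractor = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(EMBEDDING_DIM, activation="relu"),
])

# A catalog image and a shelf crop can then be compared by embedding distance.
```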
Our use case has two input data streams: the product database with the products we are querying for, and the target images of shelves with products that need to be labeled against the product database.
While the product part of the training data is available, the shelf images need to be collected, either by using publicly available datasets or by constructing a training dataset from images similar to those the model will be used on. Labeled datasets are also often available commercially through training data vendors, which provide labeled images for most use cases. Domain expertise is necessary to evaluate their applicability to a given task based on the expected variability of the target data. Stores can differ widely in style and presentation, making it hard to cover every possible use case, so we focus our work on standard shelving to cover the most common one.
Having defined the target use case, we can construct a target dataset that covers a wide range of data points, including both simple and difficult samples, i.e. those where the deviation from the training product images is low or high, respectively. Adversarial examples, such as images in which no products are present at all, can also be included to test for false positives.
Datapoint complexity: some samples are harder for the model to detect than others depending on the level of variation in the image. The second example has some core features obstructed and is shot at a steep angle.
Preprocessing the images helps make the model robust to overfitting and teaches it to generalize to a wider variety of product environments and backgrounds; it also lets us resize the images to the required input size with the tf.image module. We also include some more complex transformations through the Albumentations framework to simulate partially covered, angled or distorted images, since those occur in our use case.
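A minimal sketch of what such a preprocessing step might look like with tf.image is shown below; the target size and augmentation strengths are assumptions to be tuned against the target data, and the heavier transforms (occlusion, perspective) would come from Albumentations wrapped for tf.data with tf.numpy_function.

```python
# A minimal sketch of a preprocessing step with tf.image: resizing plus a few
# simple augmentations that mimic lighting and orientation variation. Target
# size and augmentation strengths are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

def preprocess(image, label, training=True):
    image = tf.image.resize(image, IMG_SIZE)
    image = tf.cast(image, tf.float32) / 255.0
    if training:
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image, max_delta=0.2)
        image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
        image = tf.clip_by_value(image, 0.0, 1.0)  # keep pixel values valid
    return image, label

# train_ds = train_ds.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
```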
Finding the correct balance of data augmentation is possible by analyzing the variation inside the training data and identifying underrepresented outlier cases, but too much augmentation can worsen performance, causing underfitting and lowering accuracy. Robust model evaluation helps find the right balance: try different levels of data augmentation and see which one performs best.
Image classification is a common task in computer vision, and there are many pre-trained network weights that perform very well on generic image classification tasks and, crucially, can be transferred to unseen classes by “fine-tuning” on the specific classes. Our feature extractor network will use an ImageNet-pre-trained ResNet architecture.
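A common pattern for this kind of fine-tuning is to first train a new head with the pretrained backbone frozen, then unfreeze the top of the backbone and continue training at a much lower learning rate. The sketch below illustrates that pattern under assumptions: the number of classes, the cut-off layer and the learning rates are all illustrative.

```python
# A two-phase fine-tuning sketch: train a new head with the backbone frozen,
# then unfreeze the top of the backbone at a low learning rate. NUM_CLASSES,
# the cut-off layer and the learning rates are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 500  # assumed number of SKUs

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
backbone.trainable = False  # phase 1: only the new head is trained

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Phase 2: unfreeze the last layers of the backbone and fine-tune gently.
backbone.trainable = True
for layer in backbone.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```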
Since our model both classifies and localizes the product in the image, we require two loss functions: a localization loss that measures the location error and a prediction loss that measures the classification error.
We use tf.keras.losses.Hinge and tf.keras.losses.Huber from TensorFlow's built-in loss functions for classification and localization, respectively. The final output is a set of bounding boxes containing the products, each with a class label attached that corresponds to an SKU from our training set. We also use the ReLU activation function from the Keras library.
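Conceptually, the two losses can be combined into a single training objective, as in the hedged sketch below; the equal weighting of the two terms is an assumption that would be tuned in practice.

```python
# A minimal sketch of combining the two objectives: a hinge loss on the class
# scores (labels in {-1, 1}) plus a Huber loss on the bounding-box
# coordinates. The equal weighting is an assumption to be tuned.
import tensorflow as tf

classification_loss = tf.keras.losses.Hinge()
localization_loss = tf.keras.losses.Huber()

def detection_loss(y_true_cls, y_pred_cls, y_true_box, y_pred_box, box_weight=1.0):
    cls = classification_loss(y_true_cls, y_pred_cls)
    loc = localization_loss(y_true_box, y_pred_box)
    return cls + box_weight * loc
```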
After training, we see the model already performs well at recognizing the input products despite variation in size. Some products that are far in the background or covered from view are missed, suggesting that training with more aggressive occlusion augmentation could help. The model outputs the class and location of products, allowing us to count the number of SKUs present in the image. The output of the final layer is an image mask with each pixel assigned a class probability.
We have trained a model that produces satisfying results; now it needs to be made accessible to the final application from which it will be queried, whether on a server or on device.
Since image data tends to be large, the design of the end-use application needs to account for that. Model inference time increases significantly with data volume, and connectivity limits, file upload limits and database memory issues can all arise.
Luckily, TensorFlow gives us the option to bypass this by processing the images directly on the mobile device used to capture them, allowing us to handle only the much smaller output and removing the need for a large server infrastructure. Whether the model can be made small enough depends on factors like task complexity and device computing power.
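On the device itself, the converted model is evaluated with the TensorFlow Lite interpreter. The sketch below shows the Python API for illustration, with the model path and input shape as assumptions; a production Android or iOS app would use the corresponding TensorFlow Lite mobile runtime.

```python
# A minimal sketch of on-device inference with the converted TFLite model.
# The model path and the 224x224 input shape are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="product_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One preprocessed shelf image as a float32 batch of one (placeholder values).
image = np.zeros((1, 224, 224, 3), dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
```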
We track the model after deployment with rigorous testing and automated reporting on its performance. A report can show us not only how the model is performing but also which inputs it is struggling with, by flagging ambiguous predictions, such as when the model has trouble deciding which of two SKUs a product belongs to and gives each a probability close to 0.5. We work closely with the stakeholders to find possible errors and add them to the dataset so we can retrain and redeploy the model, improving it over its lifecycle. This can involve a lot of friction, which is why it is important to design a pipeline that works smoothly from beginning to end and makes this process seamless, with the ability to tune the model on difficult samples multiple times and to store all collected data for reuse in training when a mistake is identified by the user. Dataset and model versioning are core components of any well-designed AI pipeline.
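As a small illustration of the ambiguity check described above, the sketch below flags predictions whose top two class probabilities are close together so the image can be reviewed and fed back into training; the margin threshold is an assumption.

```python
# Flag predictions where the two highest class probabilities are close, so
# the image can be sent for review. The 0.15 margin is an assumed threshold.
import numpy as np

def is_ambiguous(probs: np.ndarray, margin: float = 0.15) -> bool:
    top_two = np.sort(probs)[-2:]  # the two highest class probabilities
    return (top_two[1] - top_two[0]) < margin

print(is_ambiguous(np.array([0.48, 0.46, 0.06])))  # True -> send for review
```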
Down the line, we may want to expand this to more use cases, like reading barcodes, computing statistics or performing other operations with the data extracted from the model.
We often see projects grow in scope as they are used and plan the infrastructure accordingly to leave room for growth down the line.
Width.ai builds custom computer vision solutions (just like this image classification model!) for businesses to leverage internally or as a part of their product. Schedule a call today and let’s talk about how we can help you deploy custom computer vision models or other machine learning models. Let’s Talk