Implement Product Recognition in Your Retail Store - Our Five-Step Workflow
We walk through the steps of implementing retail product recognition and explain the pipelines, techniques, and models we use at each stage.
By implementing product recognition in your store, you can unlock many improvements in store operations and customer experience. Annual counts and cycle counts can be completed faster. Automated shelf monitoring can alert your staff to low stock. And the omnichannel experience of your website and app customers improves when they can check product availability in real time.
All this is possible in five steps. In implementing these steps, you'll see the many uses of product recognition in your retail environment and the crucial role of computer vision and deep learning in enabling it.
A visual search engine is at the heart of this entire system because every subsequent step builds on it. So the first step is to implement a visual search engine for products, composed of a deep learning image processing pipeline, a vector database for image recognition, and, if needed, an optical character recognition (OCR) and information extraction pipeline for fine-grained image recognition.
Photo courtesy of Talles Alves on Unsplash
The deep learning pipeline can implement either object detection or instance segmentation to isolate a product from its surroundings in an image. Both are based on convolutional neural networks (CNNs) for visual pattern recognition, but each brings inherent benefits and drawbacks that you should evaluate against the conditions in your retail environment.
Object detection isolates products in the scene as rectangular bounding boxes. Detection models like YOLOv4 are easily fine-tuned for retail product detection and run fast even on modestly powered smartphones. However, because the detector treats every pixel inside the bounding box as part of the product, extraneous pixels can slip in and cause problems downstream; on a densely arranged shelf, for example, text from a product in the background may end up inside the box.
Instance segmentation, in contrast, isolates products the way people do, by recognizing their irregular boundary contours. Instance segmentation models like Mask R-CNN can be fine-tuned on retail datasets for product segmentation. Compared to object detectors, they handle occluded products better, a common problem on store shelves, and avoid downstream errors like spurious text characters because the segmented pixels belong to only one product. However, segmentation is a much more demanding task, so it requires more powerful hardware and devices, which pushes up your initial expenditure.
Projects like Detectron2 and MMDetection provide ready-to-use software and pre-trained models that make product detection and segmentation a breeze to fine-tune and use. Alternatively, you can use a cloud ML service like Amazon SageMaker which comes with built-in support for object detection and segmentation.
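To give a feel for these frameworks, here is a minimal inference sketch using Detectron2 with a COCO-pretrained Mask R-CNN from its model zoo; the image path is a placeholder, and a production system would first fine-tune the model on your own labeled product photos.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# COCO-pretrained Mask R-CNN from the Detectron2 model zoo; fine-tune it
# on your own labeled product photos before using it in production.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # drop low-confidence detections

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("shelf_photo.jpg"))  # placeholder image path

# Each detected product carries a bounding box, a score, and a pixel mask.
instances = outputs["instances"].to("cpu")
print(instances.pred_boxes, instances.scores, instances.pred_masks.shape)
```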
Regardless of the model used, the pipeline computes a numerical vector called an image embedding for each isolated product. An image embedding is what you get when a neural network extracts the local and global visual features that characterize a product and encodes them as a fixed-length vector of numbers. Each SKU ends up with a distinctive embedding because each SKU has its own combination of shape, textures, colors, text, barcode, and other features. These embeddings are essential to our visual search, as we'll soon see.
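To make the idea concrete, here is a minimal sketch of one common way to compute an embedding: run a product crop through an ImageNet-pretrained CNN backbone (ResNet-50 here) with its classification head removed, then L2-normalize the 2048-dimensional output. The file name is a placeholder, and in production you would fine-tune the backbone on product images so embeddings of the same SKU cluster tightly.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained ResNet-50 with the classifier removed: the pooled
# 2048-d feature vector serves as the product's image embedding.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    x = preprocess(Image.open("product_crop.jpg").convert("RGB")).unsqueeze(0)
    embedding = model(x).squeeze(0)
    # Unit length, so the inner product doubles as cosine similarity.
    embedding = embedding / embedding.norm()
```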
A vector database enables us to query images and find matching image embeddings for object recognition. You need one so that when a product is shown to the system, it can search this database to check if the same product or a visually indistinguishable product has previously been registered with it and retrieve its product details.
There are many open-source, commercial, self-hosted, and managed vector databases. Milvus is an open-source self-hosted vector database that you can deploy in your cloud or on-prem infra. Pinecone is a commercial fully managed vector database-as-a-service.
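As an illustration, this is roughly what creating, indexing, and querying a product collection looks like with pymilvus, Milvus's Python client. The host, collection name, field names, and the 2048-dimension setting (matching the ResNet-50 sketch above) are all assumptions you would adapt to your deployment.

```python
from pymilvus import (Collection, CollectionSchema, DataType,
                      FieldSchema, connections)

connections.connect(host="localhost", port="19530")  # assumed deployment

# One row per registered photo: an auto-generated id, the SKU, its embedding.
schema = CollectionSchema([
    FieldSchema("photo_id", DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema("sku", DataType.VARCHAR, max_length=64),
    FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=2048),
])
products = Collection("products", schema)
products.create_index("embedding", {
    "index_type": "HNSW",
    "metric_type": "IP",  # inner product == cosine for unit-length vectors
    "params": {"M": 16, "efConstruction": 200},
})
products.load()

# Retrieve the five registered photos closest to a query embedding
# (query_embedding: the unit-length vector from the previous sketch).
hits = products.search(data=[query_embedding.tolist()], anns_field="embedding",
                       param={"metric_type": "IP"}, limit=5,
                       output_fields=["sku"])
```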
Customers often have to read product labels before deciding on a purchase as many products resemble one another visually but differ in little details like flavors or ingredients. A product recognition system built around off-the-shelf cameras and smartphones is likely to run into the same problem at some point.
That’s why we suggest setting up a product text recognition model and information extraction pipeline too, though they're not necessary to get a basic visual search system up and running. The text labels enable more accurate search by requiring products matched through visual search to also match the text exactly.
Libraries like Tesseract, spaCy, and Hugging Face Transformers provide high-quality text recognition and information extraction models based on deep neural network architectures like LSTMs and transformers.
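A minimal sketch of this step might read the label text with Tesseract (via the pytesseract wrapper) and extract entities with spaCy; the crop path is a placeholder, and a real pipeline would match the extracted strings against your catalog's label text.

```python
import pytesseract
import spacy
from PIL import Image

# Read all visible text off a cropped product label (crop from Step 1).
label_text = pytesseract.image_to_string(Image.open("label_crop.jpg"))

# Extract structured entities (quantities, brand-like names, etc.).
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp(label_text)
entities = [(ent.text, ent.label_) for ent in doc.ents]
print(label_text.strip())
print(entities)
```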
With the visual search engine ready, you need to start populating its database with your product images.
The best time to do this is when your staff is adding new products to your inventory management system (IMS). You probably already have some workflow where your staff unboxes the pallets from the warehouse, records product details and SKUs in your IMS, prints out barcode labels, and transfers different products to your stockroom shelves.
You can add an additional step where your staff uses a custom smartphone app to capture photos of the products, perform basic preprocessing to adjust brightness and contrast, link the photos to your SKUs, and upload them to your visual search engine. The same product segmentation model can run on the smartphones to help your staff crop the photos. Capture multiple photos from different angles to make the visual search pose-invariant. Once uploaded, the search engine computes an embedding for each photo and stores it in the vector database against the product's SKU, as sketched below.
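Putting the earlier sketches together, the registration step might look like the following; embed_image is a hypothetical helper wrapping the embedding pipeline, and products is the Milvus collection from the vector database sketch.

```python
def register_product_photo(image_path: str, sku: str) -> None:
    """Embed one staff-captured photo and store it against its SKU."""
    embedding = embed_image(image_path)  # hypothetical helper from Step 1
    # Columns follow the schema from the Milvus sketch: sku, then embedding.
    products.insert([[sku], [embedding.tolist()]])

# Registering several angles of one SKU makes the search pose-invariant.
for path in ["front.jpg", "side.jpg", "top.jpg"]:  # placeholder paths
    register_product_photo(path, "SKU-12345")      # placeholder SKU
```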
You can fold this capture step into many of your other inventory workflows too.
Eventually, your complete inventory will be in your vector database. Keep testing your visual search engine's metrics (accuracy, precision, recall, AUC) frequently to ensure it recognizes most products correctly. A staff member should be able to point their smartphone camera at a product in any orientation and receive its details accurately. If recognition errors creep in as the catalog grows, you can expand the embedding space by switching to a model that produces longer embeddings and recomputing them for all products.
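A simple way to run these checks is a periodic top-1 accuracy measurement over a held-out set of labeled product photos, sketched here; search_top_sku is a hypothetical wrapper that returns the best-matching SKU from the visual search engine.

```python
def top1_accuracy(labeled_photos: list[tuple[str, str]]) -> float:
    """Fraction of (image_path, true_sku) pairs whose best hit is correct."""
    correct = 0
    for image_path, true_sku in labeled_photos:
        predicted_sku = search_top_sku(image_path)  # hypothetical wrapper
        correct += int(predicted_sku == true_sku)
    return correct / len(labeled_photos)

# Example: flag the system for retraining if accuracy drifts too low.
if top1_accuracy(holdout_photos) < 0.95:  # holdout_photos: your labeled set
    print("Visual search accuracy below target; investigate.")
```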
Photo courtesy of JuniperPhoton on Unsplash
Fixing cameras in your shelf areas enables features that improve operational efficiency, such as automated shelf monitoring, low-stock alerts, and real-time availability data.
For implementing these features, you can use off-the-shelf cameras or cameras specialized for shelf monitoring or even smartphones and tablets. The devices are set up opposite the shelves they monitor and typically have some pan-tilt-zoom capabilities.
The advantage of using smartphones or tablets is that they can run the same product recognition neural networks on-device if necessary. They can also do shelf monitoring by running deep neural network semantic or panoptic segmentation models on-device, measuring the areas of the shelves that are empty, and alerting the staff.
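For instance, once a segmentation model labels every pixel of a camera frame, the low-stock check reduces to counting pixels. In this sketch the class id, alert threshold, and alerting hook are all assumptions.

```python
import numpy as np

EMPTY_SHELF_CLASS = 1   # assumed class id for "empty shelf space"
ALERT_THRESHOLD = 0.30  # assumed: alert when 30% of the monitored area is bare

def empty_shelf_fraction(mask: np.ndarray) -> float:
    """mask: (H, W) array of per-pixel class ids from the segmentation model."""
    return float((mask == EMPTY_SHELF_CLASS).mean())

# `mask` would come from the on-device segmentation model for one camera frame.
if empty_shelf_fraction(mask) > ALERT_THRESHOLD:
    notify_staff("Shelf running low")  # hypothetical alerting hook
```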
In addition to the devices fixed to your shelves, it’s a good idea to incorporate your product recognition system into smartphone apps to help your staff with their work. You've already seen earlier how your staff can use these apps to add products to the vector database. Other workflows such as annual counting and cycle counting can finish faster if your staff can simply point their phones at the shelves to have the items counted automatically.
Smartphones nowadays come with hardware acceleration support for neural networks. Mobile apps can use libraries like MediaPipe and TensorFlow Lite to run product detection, segmentation, text recognition, or information extraction neural networks — based on mobile-optimized backbone networks like YOLOv4-Tiny — directly on the device. The convenience of smartphones drastically improves the acceptance and use of the product recognition system by your staff in a way that using computers or uploading images manually just can't.
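On-device inference with a TensorFlow Lite model follows a standard interpreter pattern, sketched below in Python for brevity; the model file and input shape are assumptions, and an Android or iOS app would use the equivalent TFLite or MediaPipe bindings.

```python
import numpy as np
import tensorflow as tf

# Load a mobile-optimized detector converted to the TFLite format.
interpreter = tf.lite.Interpreter(model_path="product_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# Assumed input: one 320x320 RGB frame normalized to [0, 1];
# a real app would feed camera frames instead of random data.
frame = np.random.rand(1, 320, 320, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(outs[0]["index"])
```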
Improvements in your business operations are good reasons to adopt such a system, but they pale in comparison with the increased revenue that comes from happy customers. A product recognition system can meaningfully improve the omnichannel user experience of your customers.
Real-time availability of products lets your website and app customers check if the product they want is actually available on your shelf so that they can quickly visit your store to get it. In areas where shopping and parking can be a hassle, this kind of assured availability makes for happy customers.
Another feature you can provide is visual search for products. An omnichannel customer may spot an interesting-looking product on their way to work or while traveling. All they have to do is snap a photo and upload it to your website or app to check whether your store stocks the same product. Your website or app sends the photo to your visual search engine, which checks whether a visually similar product exists in your vector database. If it does, the app can confirm availability to your customer. And if it doesn't, that's a signal to your assortment planners that you may be missing a product that's in demand.
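The glue between your website or app and the search engine can be a thin endpoint. Here is a hedged FastAPI sketch; embed_image_bytes is a hypothetical helper, products is the Milvus collection from earlier, and the similarity threshold is an assumption you would tune.

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/visual-search")
async def visual_search(photo: UploadFile):
    """Match a customer's photo against the registered product catalog."""
    embedding = embed_image_bytes(await photo.read())  # hypothetical helper
    hits = products.search(data=[embedding.tolist()], anns_field="embedding",
                           param={"metric_type": "IP"}, limit=1,
                           output_fields=["sku"])
    best = hits[0][0]  # best match for the single query image
    return {"in_stock": best.distance > 0.8,  # assumed similarity threshold
            "sku": best.entity.get("sku")}
```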
Rapid advances in deep learning, machine learning, and artificial intelligence have enabled innovative applications like product recognition and inventory management that optimize backend processes and improve customer experience. Improvement is so fast that techniques and even algorithms from just two or three years ago are already obsolete, outperformed by newer approaches. In such an ever-changing environment, it's understandable if you aren't aware of what is possible. That's why we're here: to show you the possibilities and unlock the benefits they can bring to your retail business. Contact us!