Large-Hole Image Inpainting With the Mask-Aware Transformer (MAT)
We dive deep into the challenges of large-hole image inpainting and explain the pipelines, techniques, and models we use to solve them.
Images and videos are indispensable to expanding your brand reach, but finding the most suitable image or video remains a hassle for businesses. Thankfully, since around 2020, image generation models like DALL-E 2, Stable Diffusion, and Midjourney have been producing strikingly realistic and creative images, sometimes rivaling human photography and art. Better still, several of them are available to you as open-source code and models.
In this article, you'll get to know one such exciting and extremely useful generative AI use case in depth — the mask-aware transformer for large-hole image inpainting.
Image inpainting refers to generating a portion of an existing image in a way that is visually realistic and semantically consistent with the rest of the image. You may need it to seamlessly remove something from an image or add something to it. Or you may use it to repair a damaged old photo.
What's inpainting useful for? Apart from image editing in general, here are some of the possibilities it opens up.
You may be an AI retail software company looking for new ways to generate retail shelf product data. Or you may be a retail business experimenting with product placement or planogram compliance. Either way, you can use guided inpainting to experiment with aspects like the positioning, colors, shapes, or sizes of your products.
For example, in the image above, text-guided inpainting is used to imagine how novel packaging or a new product might appear on a shelf.
Inpainting lets you generate a full background around a product photo so that the product sits naturally in the scene. We can build simple pipelines that take a product photo, do the required preprocessing, and then generate a background that places the product convincingly in a new setting. Through high-level prompting, we can make the angles, surrounding objects, and surfaces map to the product.
The product blends into the background naturally, and the generated scene adjusts its angles and viewpoint to align with the product.
You can use inpainting for background modification or for super-resolution to enhance low-resolution images.
If you aren't adept with image editors like Photoshop, you can use inpainting to modify an existing stock image or product image to suit your content or website design. You can also automate the generation of these stock images for specific blogs, and even leverage a key topic extraction model to auto-generate the prompts from your blog content.
Many businesses wrongly assume that stock photos with permissive licenses are risk-free. In reality, if a photo shows a person's face in an identifiable way, those permissive usage rights don't automatically extend to the use of their face. They can sue you if you don't get their explicit permission, especially if you use the photo alongside content that is controversial or socially damaging to the person.
Inpainting offers a solution by being able to generate non-existent human faces. The mask-aware transformer explained in this article even provides a pre-trained model to demonstrate this (but only for non-commercial purposes).
Old, worn-out, or damaged photos are common in organizations like ancestry services, libraries, law enforcement agencies, and government archives. Inpainting can be used to restore such photos.
Due to clouds, communication problems, birds, or other obstacles, missing areas are a common problem in any industry that uses drone, aerial, or satellite imagery. Inpainting is a solution that's commonly used and quite effective since there are likely to be other images of the same areas that can be used to fill in the gaps.
For small holes or minor damage, traditional image processing and inpainting techniques work fine. They just spread colors, textures, or contours from the neighborhood and that's often enough.
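To make that concrete, here's a minimal sketch of traditional inpainting using OpenCV's classical algorithms (the file paths are placeholders):

```python
# Traditional (non-deep-learning) inpainting with OpenCV. The Telea and
# Navier-Stokes algorithms simply propagate nearby colors and textures into
# the masked region -- good enough for small holes, poor for large ones.
import cv2

image = cv2.imread("damaged_photo.jpg")                   # placeholder path
mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)  # white = pixels to fill

# The radius (3) controls how far the algorithm looks for neighboring pixels.
repaired = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("repaired.jpg", repaired)
```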
But large-hole inpainting is an entirely different beast despite the similar name. It requires semantic understanding of images and realistic image generation, not just the flow of colors and textures. The image above demonstrates the failure of large-hole inpainting using a traditional algorithm. That's why most of the recent advances have used deep learning techniques for inpainting. The next section introduces you to some of these approaches.
Most deep learning approaches for inpainting fall into one of these architectural categories: convolutional encoder-decoder networks (including U-nets), generative adversarial networks (GANs), transformer-based models, and diffusion models.
In the next section, you'll learn about the mask-aware transformer, a recent approach that performs well at inpainting.
The mask-aware transformer (MAT) is a GAN with some innovative ideas to overcome the problems of large-hole inpainting. As of January 2023, it's the best-performing model on several inpainting benchmarks like Places2 and CelebA-HQ.
We'll dive deep into how it works.
A region on the inner border of a hole is strongly influenced by its local neighborhood outside the hole. The neighbors' colors, textures, contours, and other visual aspects must flow smoothly into the region. Each such border region, in turn, influences its neighbors inside the hole.
This hierarchy of influence is analogous to text generation where the sequence of previous words influences the next word, the sequence of previous sentences influences the next sentence, and so on.
Additionally, long-range and global context must also influence the objects and textures painted in a hole. If the hole is in a human face, its filling must match the parts of a typical human face at that position.
Paying attention to both local and long-range context for visual and semantic consistency is something transformer models excel at. So MAT uses a transformer stack in its generator and makes it mask-aware.
Like any GAN, MAT has a generator and a discriminator network. The discriminator is only active during training, where it pushes the generator toward ever more realistic output.
MAT's generator network consists of a convolutional head that converts the masked image into tokens, a transformer stack followed by a small convolutional decoder, a convolutional U-net for refinement, and a style manipulation module that enables multiple plausible outputs.
The discriminator network is a standard one similar to StyleGAN2's: a convolutional network with skip-connected blocks that scores both real images and those synthesized by the generator. Its adversarial feedback is what pushes the generator toward realism during training.
Since the generator network is the more interesting part of MAT, we'll study it in detail next.
The first stage of the generator generates the inputs for the transformer stack. These inputs are called "tokens," a term from language processing where transformers are popular. The tokens here are just visual feature embeddings generated on the input images by a stack of six convolutional layers. For a 512x512 image, they generate 4096 tokens of 180 dimensions.
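Here's a rough sketch of what this tokenization stage amounts to. The layer count and output size match the description above, but the strides, kernel sizes, and intermediate channel widths are assumptions for illustration, not MAT's actual configuration:

```python
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    """Turns a 512x512 image (plus its mask) into 64x64 = 4096 tokens of 180 dims."""
    def __init__(self, in_channels=4, dim=180):  # 3 color channels + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 45, 3, stride=1, padding=1), nn.GELU(),
            nn.Conv2d(45, 90, 3, stride=2, padding=1), nn.GELU(),    # 512 -> 256
            nn.Conv2d(90, 90, 3, stride=1, padding=1), nn.GELU(),
            nn.Conv2d(90, 180, 3, stride=2, padding=1), nn.GELU(),   # 256 -> 128
            nn.Conv2d(180, 180, 3, stride=2, padding=1), nn.GELU(),  # 128 -> 64
            nn.Conv2d(180, dim, 3, stride=1, padding=1),
        )

    def forward(self, x):
        feat = self.net(x)                      # (B, 180, 64, 64) feature map
        return feat.flatten(2).transpose(1, 2)  # (B, 4096, 180) token sequence

tokens = ConvTokenizer()(torch.randn(1, 4, 512, 512))
print(tokens.shape)  # torch.Size([1, 4096, 180])
```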
MAT's transformer stack is designed to allow both local and long-range context to influence the inpainting and do so efficiently.
A standard transformer block has a multi-head self-attention layer and a feed-forward layer with layer normalization at their inputs. The self-attention layer decides which parts of the input to focus on. All this works fine on a normal image.
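For reference, a generic pre-norm transformer block looks roughly like this (a sketch for comparison, not MAT's code):

```python
import torch.nn as nn

# A generic pre-norm transformer block. Note the layer norms and the residual
# ("skip") connections -- the very pieces the next paragraphs identify as
# problematic when many tokens are invalid.
class StandardTransformerBlock(nn.Module):
    def __init__(self, dim=180, heads=6):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                   # x: (B, N, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]       # self-attention builds an N x N map,
        return x + self.ffn(self.norm2(x))  # hence the quadratic cost in N
```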
But on an image with large holes (due to a mask or damage), many tokens may be from blank patches with no useful information. The influence of these "invalid tokens" gets amplified by the standard self-attention mechanism, the use of skip connections, and layer normalizations. So the training becomes unstable and the model's generative capability goes down.
Plus, since attention's complexity is quadratic (i.e., given N inputs, it must do NxN dot products), some optimizations are needed anyway for high-resolution inpainting.
To solve all these problems, MAT replaces standard self-attention with multi-head contextual attention (MCA) and standard transformer blocks with adjusted transformer blocks.
The MCA is a special multi-head attention layer. Using the shifted windows idea from the Swin transformer, it can extract useful information even when the number of usable tokens is low due to large holes.
It also maintains a dynamic mask to evaluate masked areas and retain only the useful tokens. It works like this: The input mask is the initial mask. After each attention round, it shifts the attention window by a few tokens and starts the next round. If a window has at least one usable token, the entire window is treated as usable and included in the dynamic mask. But if it has only invalid tokens, the window is discarded from the dynamic mask. In the end, you're left with a dynamically determined mask that only contains useful information.
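The window-level update rule can be sketched in a few lines. This is an illustration of the idea on a 64x64 token grid, not MAT's implementation, and the window size is an assumption:

```python
import torch
import torch.nn.functional as F

# mask is 1 for valid tokens and 0 for hole tokens, laid out on the token grid.
def update_dynamic_mask(mask: torch.Tensor, window_size: int = 8) -> torch.Tensor:
    # Max-pool over each non-overlapping window: if a window contains at least
    # one valid token, the whole window is marked valid (1); otherwise it stays
    # invalid (0) and is ignored by the next attention round.
    pooled = F.max_pool2d(mask.float(), kernel_size=window_size)
    return pooled.repeat_interleave(window_size, dim=-2).repeat_interleave(window_size, dim=-1)

mask = torch.zeros(1, 1, 64, 64)
mask[..., :32, :] = 1                    # top half of the token grid is valid
print(update_dynamic_mask(mask).mean())  # fraction of tokens now treated as valid
```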
These MCA layers are used by the so-called adjusted transformer blocks that are optimized for large-hole images. Unlike a standard block whose use of skip connections and layer norms amplifies the invalid (useless) token problem, an adjusted block is optimized for sparse tokens. It doesn't use skip connections or layer norms. Instead, it concatenates the inputs and outputs of each attention layer and sends them to a fully connected layer.
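Here's a sketch of that adjusted block. It's an illustration of the concatenate-and-fuse idea, not MAT's code, and it uses standard multi-head attention in place of the mask-aware MCA:

```python
import torch
import torch.nn as nn

# No layer norm and no residual addition: the attention input and output are
# concatenated and fused by a fully connected layer instead.
class AdjustedTransformerBlock(nn.Module):
    def __init__(self, dim=180, heads=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)  # fully connected fusion layer
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                                    # x: (B, N, dim)
        attn_out, _ = self.attn(x, x, x)
        x = self.fuse(torch.cat([x, attn_out], dim=-1))      # concat input + output
        return self.ffn(x)
```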
MAT's transformer stack consists of five such adjusted transformer blocks that progressively downsample the number of tokens and then upsample them back to 4096 tokens, enriching each token with information about which parts of the image to attend to.
The stack is followed by one convolutional decoder layer with a single global skip connection to the stack's input. Its job is to decode the transformer's latent representations back into visual feature embeddings, now enriched with attention context. These are the final outputs of the transformer stack and the inputs to the U-net stage that follows.
A U-net is a convolutional neural network (CNN) with a special capability: Though it applies convolutions and downsampling like any other CNN, it finally produces a full image with the same resolution as the original. It does this by progressively upsampling feature maps from a previous layer using information from the corresponding downsampling layer.
The U-net in MAT's generator takes the embeddings from the transformer stack and combines them with styling inputs. Five convolutional layers then progressively downsample the feature maps, halving their resolution at each layer. Five more layers then progressively upsample them back to a full-resolution color image.
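To make the U-net structure concrete, here's a minimal generic sketch with strided-convolution downsampling, transposed-convolution upsampling, and skip connections between matching levels (an illustration, not MAT's refinement network):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, 1, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, 2, 1), nn.ReLU())      # /2
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, 2, 1), nn.ReLU())  # /4
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, 2)                       # x2
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4, base * 2, 3, 1, 1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, 2)                           # x2
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, 1, 1), nn.ReLU())
        self.out = nn.Conv2d(base, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(e1)                                     # 1/2 resolution
        e3 = self.enc3(e2)                                     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection from e1
        return self.out(d1)                                    # same resolution as input

print(TinyUNet()(torch.randn(1, 3, 512, 512)).shape)  # torch.Size([1, 3, 512, 512])
```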
For some use cases, you may want multiple inpainted versions showing a variety of plausible objects, all of them visually and semantically consistent with the rest of the image. For example, for a generated garden photo, you may want multiple photos that show flowers of different colors to choose from. Or if your photo has people's faces, you may want to examine many faces and select the one best suited to your brand or content.
MAT does not give you tight control over what gets inpainted. Instead, it learns what kind of random noise will result in specific visual attributes in the final image. During training, supply the noise values alongside the images with the specific visual features you want. The noise-to-style mapping is learned in the transformer stack and then used by the U-net during generation.
When you want it to generate multiple variants, just supply the appropriate noise values. The noise values are abstract vectors but you can always map them to specific image features in your user interface. For example, if you're inpainting faces, you can map specific noise vectors to facial aspects like hair color, skin color, and so on.
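The noise-to-style idea can be sketched as a small mapping network whose output modulates convolutional features. This is an illustration of the mechanism, not MAT's code; the dimensions and the sigmoid-based modulation are assumptions:

```python
import torch
import torch.nn as nn

# A small MLP maps a random noise vector to a style vector, which then scales
# the feature channels of a conv layer. Different noise -> different styles ->
# different, equally plausible inpainted variants.
class StyledConv(nn.Module):
    def __init__(self, channels=64, noise_dim=512):
        super().__init__()
        self.to_style = nn.Sequential(nn.Linear(noise_dim, channels), nn.Sigmoid())
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, features, noise):
        style = self.to_style(noise)                     # (B, channels)
        modulated = features * style[:, :, None, None]   # scale each feature channel
        return self.conv(modulated)

layer = StyledConv()
features = torch.randn(1, 64, 64, 64)
out_a = layer(features, torch.randn(1, 512))  # one plausible style
out_b = layer(features, torch.randn(1, 512))  # a different style from new noise
```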
MAT training is self-supervised and adversarial. You're free to use unlabeled image datasets, but if labels are available, you can optionally tell MAT to use them in its loss calculations.
When should you train a new model? Pretrained models are only available for faces and places. The places dataset covers a wide variety of indoor and outdoor settings, and MAT will work fine on any photo with similar settings. But if you want it to generate specific types of vehicles, store products, apparel, or the like, you'll have to train a custom model on your own images.
Evaluating inpainting is complicated because many plausible objects can be painted and seem realistic. So evaluation is a mix of perceptual metrics and qualitative assessments.
MAT uses three perceptual metrics: Fréchet inception distance (FID), paired inception discriminative score (P-IDS), and unpaired inception discriminative score (U-IDS).
MAT scores well on all three metrics compared to other inpainting GAN and CNN models.
Compared to some of the other GAN models for inpainting, MAT comes with some great benefits.
1. Visually realistic generation: MAT's inpainting produces more realistic images compared to other recent models like CoModGAN, LaMa, or MADF.
2. High-resolution, high-quality inpainting: Most inpainting approaches compromise either the resolution or the quality. But thanks to its unique innovations, MAT achieves both high resolution and high quality. It defaults to 512x512 pixel images but can scale up to any higher resolution as long as it's a multiple of 512.
3. Fast execution: MAT's generator is fast and capable of inpainting high-resolution images in near real time. And because MAT can run on your own infrastructure rather than only as a managed service, you can scale its throughput with cloud computing.
4. Large-hole inpainting: Of course, the key benefit of this architecture is its ability to inpaint convincingly over a large mask area.
DALL-E 2 is OpenAI's latest text-driven image generation model. It's one of the most commonly used options for inpainting-related tasks because it's a simple-to-use managed service that lets you get going quickly. Let's see how it does on inpainting.
The demo below shows DALL-E 2 inpainting on the building, guided by the text prompt "fill with surrounding windows." It generates five variants ranging from normal to rather creative. Testing with simpler text prompts helps us evaluate how the model handles the wide variance of prompts real users could provide. Products that use inpainting or outpainting often favor simpler text prompts for the same reason.
DALL-E 2 isn't a GAN like MAT. Instead, it builds on two prior OpenAI models.
The first is contrastive language-image pretraining (CLIP). It's a transformer-based language-image model that learns to associate text semantics and visual concepts in images through joint contrastive training.
DALL-E 2 has access to those latent representations for text and images that CLIP learns. All it needs now is a way to reuse them to generate new images. Generating images from latent embeddings is nothing new — MAT and other GANs do it too. But DALL-E 2's novelty is that it doesn't use a GAN at all! Instead, it uses an even better approach called diffusion.
Diffusion models synthesize images from noise by progressively removing the noise. DALL-E 2's diffusion decoder is additionally conditioned on CLIP image embeddings and text embeddings; OpenAI calls the overall text-to-image system unCLIP.
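Stripped of all the details, the reverse-diffusion loop looks something like this (a deliberately crude sketch of the general idea, not DALL-E 2's actual sampler):

```python
import torch

# Start from pure noise and repeatedly subtract the noise predicted by a
# trained network, optionally conditioned on an embedding (text, CLIP, etc.).
@torch.no_grad()
def sample(denoiser, condition, steps=50, shape=(1, 3, 64, 64)):
    x = torch.randn(shape)                  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        predicted_noise = denoiser(x, t, condition)
        x = x - predicted_noise / steps     # crude update; real samplers (DDPM/DDIM)
                                            # use a learned variance schedule
    return x

# Call with a dummy denoiser just to show the plumbing:
dummy_denoiser = lambda x, t, condition: 0.1 * x
image_like = sample(dummy_denoiser, condition=None)
```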
Let's see some differences between MAT and DALL-E 2, both functional and technical.
MAT's inpainting automatically leans towards realism if you've trained it only on real-world images. It usually doesn't generate anything too creative or unexpected, and that's a good thing for most business use cases.
In contrast, DALL-E 2 often feels a bit overenthusiastic in its creativity. Even plain, simple prompts like "fill the holes with the table" can result in unexpected new objects being added. You'll need multiple cycles of prompting and erasing to make it do what you want. This can be very difficult to account for in SaaS product workflows that want to leverage this functionality: each time a user has to think up a new prompt to get the output they expect, they become less likely to keep trying.
MAT is a pure vision model. Its only inputs are images. In contrast, DALL-E 2 is a text-guided image generator and text is a key part of its inputs.
As the DALL-E 2 paper points out, GANs have limited variability because adversarial learning can't model complex distributions easily. They're also prone to training instability and have more parameters.
In contrast, diffusion models provide wider distribution coverage and easy scalability. They are also comparatively lightweight.
Stable diffusion (SD) is an open-source implementation of image generation using a diffusion model. It enables you to generate images in the same way as commercial cloud services like DALL-E 2 or Midjourney. SD is also capable of inpainting on existing images. It's maintained by a consortium of academic and industry groups like CompVis at the Ludwig Maximilian University of Munich, Stability AI, and Runway among others.
Essentially, stable diffusion consists of an open-source framework, pre-trained models, self-managed application programming interfaces (APIs), cloud-managed paid APIs, and end-user graphics applications like DreamStudio.
Here's a quick demonstration of inpainting using pre-trained stable diffusion models. We used the DreamStudio application but you can also automate all this using SD's APIs. You'll notice that this is not considered large-hole inpainting as the area is not very large.
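If you'd rather run it programmatically, the open-source diffusers library exposes a stable diffusion inpainting pipeline. The sketch below assumes diffusers is installed and a GPU is available; the model name, file paths, and prompt are placeholders for whichever inpainting checkpoint and images you use:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a stable diffusion inpainting checkpoint (placeholder model name).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("product_photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = area to repaint

result = pipe(
    prompt="a wooden table surface with soft studio lighting",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```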
Stable diffusion is also a diffusion-based image synthesis model. It's similar to DALL-E 2 in approach but its architectural choices are different. The biggest difference is that stable diffusion prefers to do its diffusion-based denoising in a more compact latent space rather than the computationally intensive pixel space, which makes it a latent diffusion model (LDM).
SD's LDM pairs an autoencoder, which maps images to and from a compact latent space, with a denoising model that operates directly on those latent embeddings. During training, SD adds noise to clean embeddings in small increments and learns the reverse process: given a noised embedding and its noising step, predict the noise that was added. Through repeated step-by-step denoising, it separates all the noise from the underlying image embedding and produces a "clean" embedding.
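The core training step described above can be sketched like this (a simplified, generic diffusion objective, not SD's actual training code; the noise schedule is a toy one):

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, clean_latents, alphas_cumprod):
    batch = clean_latents.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (batch,))        # random noising step
    a = alphas_cumprod[t].view(batch, 1, 1, 1)
    noise = torch.randn_like(clean_latents)
    noisy = a.sqrt() * clean_latents + (1 - a).sqrt() * noise  # forward noising
    predicted_noise = denoiser(noisy, t)                       # reverse: predict the noise
    return F.mse_loss(predicted_noise, noise)                  # simple regression loss

# Toy schedule and dummy denoiser, just to show the plumbing:
alphas_cumprod = torch.linspace(0.9999, 0.98, 1000).cumprod(dim=0)
loss = diffusion_training_step(lambda x, t: torch.zeros_like(x),
                               torch.randn(2, 4, 64, 64), alphas_cumprod)
```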
SD implements this denoising using a U-net convolutional network that also contains cross-attention layers. These attention blocks enable the denoising process to be conditioned on additional modalities, such as text embeddings.
How can you use text prompts to guide this denoising? The LDM described above is an unconditional diffusion model because it can denoise only the latent embeddings from images. However, the U-net's cross-attention mechanism provides a hook to inject other modalities and turn it into a conditional diffusion model.
For text-guided image generation, it uses a text-to-image conditional LDM. A language-image transformer encoder, similar to OpenAI's CLIP, is first trained contrastively on the LAION-400M image-text pairs dataset. The encoder's text embeddings are then injected into the denoising U-net via its cross-attention blocks. The denoising is now conditioned on the text and can be guided through text prompts. This is what's shown in the demo above.
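The conditioning mechanism is plain cross-attention: queries come from the image latents inside the U-net, while keys and values come from the text encoder's embeddings. Here's a minimal sketch (the dimensions are assumptions, not SD's exact configuration):

```python
import torch
import torch.nn as nn

# Cross-attention: the text embeddings steer which latent features get
# emphasized at each denoising step.
class CrossAttentionBlock(nn.Module):
    def __init__(self, latent_dim=320, text_dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            latent_dim, heads, kdim=text_dim, vdim=text_dim, batch_first=True
        )

    def forward(self, latents, text_embeddings):
        # latents: (B, num_latent_tokens, latent_dim)
        # text_embeddings: (B, num_text_tokens, text_dim)
        conditioned, _ = self.attn(latents, text_embeddings, text_embeddings)
        return latents + conditioned

block = CrossAttentionBlock()
out = block(torch.randn(1, 4096, 320), torch.randn(1, 77, 768))
```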
Many of the points made about MAT versus DALL-E 2 apply to stable diffusion as well. But there are also some differences and similarities unique to SD.
A major similarity, and one that may matter for your data security and budget, is that stable diffusion is open-source, just like MAT, and can optionally be deployed on your own servers for data privacy and cost reasons.
But you also have the option of using Stability AI's commercial cloud APIs instead.
Denoising in the compact latent space is orders of magnitude more computationally efficient than regular diffusion in pixel space. You need fewer GPUs to run SD. In fact, you can even run it on your personal Mac or iPhone!
The use of cross-attention in the denoising U-net enables you to use almost any modality to influence the image generation. Text is the most obvious one, but nothing stops you from using other modalities like semantic segmentation maps, sketches or edge maps, spatial layouts, or reference images.
The generative AI revolution has produced an incredible list of automated capabilities that your business can use by leveraging GPT-3, Stable Diffusion, MAT, and many others. Automated creation of customized photos and videos, thought to be too complex and creative for machines just three years ago, is now available at your fingertips. Contact us to talk about how we can build generative AI products like these for you!