We’re going to walk through building a production-level Twitter sentiment analysis classifier using GPT-3 and the popular tweet dataset Sentiment140.
Social media sentiment analysis has a ton of powerful business use cases across many different industries. Understanding the sentiment behind messages aimed at your brand is one of the quickest ways to gauge how people feel about your business, your products, competing products, or your marketing strategy. Case studies such as this one show how you can analyze your interactions and use dashboards to build deep marketing analytics around product launches. We’ve laid out more use cases for sentiment analysis in this article.
Gone are the days of relying on outdated sentiment analysis approaches: naive Bayes classifiers, NLTK, and the bag-of-words models that paved the way in the early days. GPT-3 is a powerful NLP model capable of a wide variety of natural language processing tasks through few-shot learning. If you’re unfamiliar with the capabilities of GPT-3, I suggest you read our business applications with GPT-3 post.
The first thing we’ll need to do is pick a dataset of labeled tweets for sentiment analysis. There are a ton of these datasets out there, and you can use any of them as long as they include some form of sentiment label. Most commonly the label has three options: “Positive”, “Negative”, and “Neutral”. You’ll notice when scanning the datasets that they’re usually built around a specific company's tweets, such as tweets aimed at Apple, Best Buy, or others. These projects are normally done with a specific company's use case in mind, and you can certainly build your own dataset based on your own tweets; just keep in mind that you’ll have to label the tweets yourself. Since we’re going to take advantage of GPT-3’s few-shot learning abilities, we’ll only use a few tweets in the prompt for now, but we’ll try something trickier later down the line.
For this example I’ll move between a few datasets, but we’ll start with Sentiment140.
This dataset has a ton of tweets and contains the same three labels shown above, marked as “4” = positive, “2” = neutral, and “0” = negative. The only downside to this dataset is that it is pretty outdated, with most of the tweets coming from 2009. The language gap between tweets from 2009 and 2021 is probably significant, but we’ll start with this for now. For real-time use of the machine learning model we’ll want to use the Twitter API to grab inputs for our text analysis.
Oh look at that, GPT-3 already has a tweet classifier built in! Sorry to burst your bubble, but this model just does not work well on real tweets for sentiment analysis. If you look through the examples used in the prompt, you’ll notice the language used is very basic and does not match real tweets. The given example tweets all use language that is easy for the model to figure out and are never long enough to provide real variance. Words like “hate”, “loved”, and “amazing” are used throughout, always attached to clear and concise thoughts. Real-life examples rarely make sentiment analysis that easy.
We’re going to look at a number of ways to build this machine learning software, starting with a simple prompt-based model. Using a few examples of positive, negative, and neutral tweets, we can put together a quick model.
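Here’s a minimal sketch of what that prompt-only model can look like, using the openai Python library’s Completion endpoint. The example tweets in the prompt are illustrative placeholders, not pulled from Sentiment140:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# A handful of labeled tweets shown to the model in the prompt itself.
# These example tweets are made up for illustration.
PROMPT_TEMPLATE = """Tweet: I loved the new update, everything runs so much faster!
Sentiment: Positive
###
Tweet: My order still hasn't shipped and support won't answer.
Sentiment: Negative
###
Tweet: Just downloaded the app, will try it out this weekend.
Sentiment: Neutral
###
Tweet: {tweet}
Sentiment:"""

def classify_tweet(tweet: str) -> str:
    response = openai.Completion.create(
        engine="davinci",   # few-shot learning works best on the largest engine
        prompt=PROMPT_TEMPLATE.format(tweet=tweet),
        max_tokens=3,       # we only need the label
        temperature=0,      # deterministic output for classification
        stop="\n",
    )
    return response.choices[0].text.strip()

print(classify_tweet("This is the worst customer service I've ever dealt with."))
```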
After running a batch of tweet examples through, the accuracy came out to around 65% with 6 total examples in the prompt. Considering how small the example set is for each class, this is pretty good. Let’s look at a few important points we found when analyzing the results.
Even as we dive into more powerful machine learning models down the line, these same points will remain points of emphasis.
We added an extra 30 tweets to this beginner GPT-3 model’s prompt and were able to increase the accuracy by around 8 points instantly. Now let’s look at real production-level approaches to sentiment analysis.
At the time of writing, the GPT-3 Classifications API endpoint is still in beta. Given the huge success of the other task-specific API endpoints, and the success we’ve had with this specific endpoint, adding it to any guide is a no-brainer.
The Classifications endpoint lets you leverage a far greater number of labeled examples for your GPT-3 model than you could fit in the token-limited prompt. You can also avoid the need to fine-tune your model through the Fine-tuning endpoint, which requires hyper-parameter tuning. Let’s look at the process of reaching a high-accuracy classifier.
Before running the model you have to upload your labeled examples to the GPT-3 file server. It’s a very simple process: you upload a JSONL file where each line is a single training example with a “text” field, which should be your tweet, and a “label” field containing one of our three labels as a string. A line can also contain a “metadata” field, which does not affect the output. One interesting thing to note is that you can use this same API endpoint for many different classification tasks depending on what your labels are.
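As a sketch, writing a few labeled tweets into that JSONL format might look like this (the tweets themselves are made-up placeholders):

```python
import json

examples = [
    {"text": "Just got the new phone and the battery life is incredible!", "label": "Positive"},
    {"text": "Been on hold with support for two hours, this is ridiculous.", "label": "Negative"},
    {"text": "Heading downtown later to check out the new store.", "label": "Neutral", "metadata": {"source": "sentiment140"}},
]

# One JSON object per line, as the endpoint expects.
with open("tweet_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```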
You’ll have to upload this file via a query, which can be found here. To run the model on a new tweet via the API, you can create a query in Python that looks like this:
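A sketch of that upload-and-query flow, based on the beta Classifications endpoint; the file name and query tweet are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# One-time upload of the labeled examples; the purpose must be "classifications".
upload = openai.File.create(
    file=open("tweet_examples.jsonl"),
    purpose="classifications",
)

result = openai.Classification.create(
    file=upload["id"],        # the file ID returned by the upload
    query="My flight got delayed again, this airline is a mess.",
    labels=["Positive", "Negative", "Neutral"],
    search_model="davinci",   # engine used for the semantic search step
    model="davinci",          # engine used for the final classification
    max_examples=200,
)

print(result["label"])
```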
The “file” field is the ID of the file you uploaded; the API provides it as output when you complete the upload. “query” will be your tweet, and “model” will be the GPT-3 engine you use to classify. We’ll get to “max_examples” and “search_model” and how to choose their values shortly. The result you get back will look something like this:
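An abridged, illustrative response; the values are made up, and the exact fields may shift while the endpoint is in beta:

```json
{
  "label": "Negative",
  "model": "davinci:2020-05-03",
  "object": "classification",
  "search_model": "davinci",
  "selected_examples": [
    {
      "document": 1,
      "label": "Negative",
      "score": 212.35,
      "text": "Been on hold with support for two hours, this is ridiculous."
    },
    {
      "document": 4,
      "label": "Negative",
      "score": 178.02,
      "text": "My order still hasn't shipped and nobody will answer me."
    }
  ]
}
```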
The key benefit of this API endpoint is that you can use an incredibly large number of labeled examples covering a much wider range of variance and language domains. You can store up to 1 GB of files at once, which should be more than enough.
The process of narrowing down the labeled tweet examples has two steps, starting with a conventional keyword search across the examples that shrinks the pool to the “max_examples” value we saw above. This is an integer that defaults to 200 and can be adjusted. The general rule with this GPT-3 feature is that a higher value leads to improved accuracy on a generalized test dataset, at the cost of higher latency and price. I’m a huge fan of going with a much larger “max_examples” value for Twitter sentiment analysis: tweets vary so widely in language and topic that a bigger candidate pool gives the semantic ranking step more relevant examples to choose from.
Just like any other machine learning model with hyper-parameters, you can use tools such as Bayesian optimization to find the “max_examples” value that produces the highest accuracy on a test dataset.
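As a sketch, a simple sweep over candidate values might look like the following; `test_set` (a list of (tweet, label) pairs) and `file_id` are assumed to exist from the steps above, and the same objective function could be handed to a Bayesian optimizer such as scikit-optimize:

```python
import openai  # assumes openai.api_key is already set

def accuracy_for_max_examples(test_set, file_id, max_examples):
    """Score the classifier on a held-out test set for one max_examples value."""
    correct = 0
    for tweet, true_label in test_set:
        result = openai.Classification.create(
            file=file_id,
            query=tweet,
            labels=["Positive", "Negative", "Neutral"],
            search_model="davinci",
            model="davinci",
            max_examples=max_examples,
        )
        correct += result["label"] == true_label
    return correct / len(test_set)

# Coarse sweep; higher values trade latency and cost for accuracy.
for candidate in [100, 200, 400, 800]:
    print(candidate, accuracy_for_max_examples(test_set, file_id, candidate))
```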
Once we have narrowed down the examples, the endpoint ranks the labeled tweets by semantic search score. The “search_model” parameter decides which GPT-3 model is used for this step, and we recommend “davinci”. We’ve found that for tasks across the board, especially when the model is not fine-tuned, davinci dramatically outperforms the other options. If you want to learn more about how search works, you can read the API guide on the Search endpoint. Ranking labeled tweets by semantic relevance is a huge advantage of this endpoint, as it greatly improves accuracy.
You’ll notice a list in the output results showing which tweet examples from the uploaded file were used. The “score” field is a similarity score between that example and the input tweet; higher scores mean greater similarity, with most values falling between 0 and 300. Since there is no fixed range for the score, different search queries produce different score distributions. Given the large dataset we want to use for tweet sentiment analysis, I think it’s important to understand the mean and standard deviation of the scores over a randomly sampled set of labeled tweets.
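A quick way to get that picture, sketched under the same assumptions as the sweep above (`test_set` and `file_id` already exist):

```python
import random
import statistics
import openai  # assumes openai.api_key is already set

sample = random.sample(test_set, 50)  # requires at least 50 test tweets
scores = []
for tweet, _ in sample:
    result = openai.Classification.create(
        file=file_id,
        query=tweet,
        labels=["Positive", "Negative", "Neutral"],
        search_model="davinci",
        model="davinci",
        max_examples=200,
    )
    # Collect the similarity scores of every selected example.
    scores.extend(ex["score"] for ex in result["selected_examples"])

print("mean:", statistics.mean(scores), "stdev:", statistics.stdev(scores))
```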
The final step simply returns what label the model believes best matches the tweet.
This is an extremely powerful way to perform Twitter sentiment analysis regardless of what other tools you’re considering. The data can be put to work quickly without having to train a giant model like BERT, and you get to use a huge number of labeled tweets. You’d normally have to manage the trade-off between a model that requires training and the few-shot learning of GPT-3, but being able to use a huge amount of data while the API handles ranking and semantic relevance for you is a huge advantage.
We’ve used this API endpoint plus a custom prompt optimization pipeline that we built to get extremely high results on both the Sentiment140 dataset and custom client data. There are a few things to think about with Twitter sentiment analysis, which the custom approach below is designed to address.
We’re going to take a lot of what we’ve learned from the two implementations above to form a new prompt-based model like the first Twitter sentiment analysis model shown, but with custom components similar to the second API endpoint tool.
The goal of this component is to reach a generalized accuracy as high as the second API endpoint tool, while giving us a level of customization that lets difficult sentiment analysis use cases optimize their accuracy. A popular example of a use case that can struggle is tweets where much of the writer’s meaning is expected to be inferred by the reader. We’ve seen this come up before in case studies such as topic summarization for asset management, where the writer is interacting directly with someone, which can lead to points being left implied. Natural language processing models generally struggle with this.
Given the normal GPT-3 prompt and the number of examples we can fit into its 2048-token limit, you can imagine there’s no way to include enough examples to cover a wide range of possible input tweets. When using the Classifications API above, we didn’t have to worry about running out of tokens or cramming enough generalization into our prompt, since the uploaded examples aren’t bound by a token limit.
We’re going to build our own version of the processes in the Classifications API to give us more control over the functionality and the results. Just as before, we have our dataset of tweets labeled for sentiment analysis. Instead of uploading them to GPT-3 or hand-picking a few to store in a GPT-3 prompt, we’re going to put them into a data structure of our choice.
This will act as our pool of candidate tweets to use as examples for a given prompt and input tweet. When it’s time to add tweets to our prompt and run our input tweet, we’ll format these example tweets and their labels the same way we saw in our first example.
We’ll want a simple script that takes any given tweet and label from our dataset and puts them in this format: the text goes on a line starting with “Tweet:” and the label goes on a line starting with “Sentiment:”. Don’t forget the “###” between examples.
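A minimal version of that formatting script might look like this:

```python
def format_example(tweet: str, label: str) -> str:
    """Format one labeled tweet the way the prompt expects."""
    return f"Tweet: {tweet}\nSentiment: {label}\n###"

def build_prompt(examples, input_tweet: str) -> str:
    """Stack the chosen labeled examples, then end with the unlabeled input tweet."""
    body = "\n".join(format_example(tweet, label) for tweet, label in examples)
    return f"{body}\nTweet: {input_tweet}\nSentiment:"
```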
Now we need a way to choose which labeled tweets from our dataset are the best fit as prompt examples within our 1500-token prompt budget. As we saw in the first example, we can’t account for a ton of language variance and topics in the example tweets, so we have to choose wisely which labeled examples give us the best chance of producing a correct sentiment prediction for a given input tweet.
SBERT allows us to compute semantic similarity between tweets in the same way the Classifications API did above. It’s no secret that semantically similar examples produce better GPT-3 results for any given input tweet. SBERT creates text embeddings that can be compared with measures such as cosine similarity to determine which tweets are semantically similar to one another. We can compute our own text embeddings and build a prompt example generator optimized for our use case. Why would you want your own semantic similarity pipeline over the one offered with the Classifications API? For the same reason as above: more control over the functionality and the results.
I highly recommend computing the text embeddings for your example tweet dataset ahead of time, as it’s a slow process that you won’t want to run at inference time. No matter how many steps your pipeline has, precompute anything that doesn’t depend on the input tweet. Here’s a simple example of running the Completion API with a prompt optimization function that works on the fly and builds the prompt in the format shown in the playground model:
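A sketch of what that might look like, assuming `labeled_tweets` is our list of (tweet, label) pairs; the SBERT model name is just an example:

```python
import openai
from sentence_transformers import SentenceTransformer, util

openai.api_key = "YOUR_API_KEY"
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example SBERT model

# labeled_tweets: list of (tweet, label) pairs from the dataset.
# Precompute these embeddings once, ahead of runtime.
example_embeddings = embedder.encode(
    [tweet for tweet, _ in labeled_tweets], convert_to_tensor=True
)

def classify(input_tweet: str, top_k: int = 10) -> str:
    # Embed the input tweet and rank the labeled pool by cosine similarity.
    query_embedding = embedder.encode(input_tweet, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, example_embeddings, top_k=top_k)[0]

    # Build the prompt from the most semantically similar labeled tweets.
    chosen = [labeled_tweets[hit["corpus_id"]] for hit in hits]
    prompt = "\n".join(f"Tweet: {t}\nSentiment: {l}\n###" for t, l in chosen)
    prompt += f"\nTweet: {input_tweet}\nSentiment:"

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=3,
        temperature=0,
        stop="\n",
    )
    return response.choices[0].text.strip()
```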
This is roughly what our main function should look like: the prompt is filled in dynamically by the pipeline that chooses examples.
You can use this same code to generate the embedding vector for our input tweet as well. Unless you change the embedding model or the words in a tweet, its embedding vector does not change, so these can be cached.
This code example, adapted from the SBERT documentation, shows how we can take our list of text embeddings from labeled tweets, compare them to a newly encoded input sentence (the tweet), rank the results, and grab the top 5 most semantically similar tweets. This is how we can choose prompt examples in real time that are optimized for our input tweet.
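A version of that snippet, following SBERT’s semantic search example; the model name and query tweet are placeholders, and `labeled_tweets` is assumed from above:

```python
import torch
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [tweet for tweet, _ in labeled_tweets]  # labeled tweet texts
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

query = "so frustrated with my cable company right now"
query_embedding = embedder.encode(query, convert_to_tensor=True)

# Cosine similarity between the input tweet and every labeled tweet.
cos_scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
top_results = torch.topk(cos_scores, k=min(5, len(corpus)))

for score, idx in zip(top_results.values, top_results.indices):
    print(f"{corpus[idx]}  (score: {score:.4f})")
```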
If we were to visualize our tweets in the embedding space, tweets with similar semantic topics and information would cluster together, as their vectors sit closer to one another.
Using the Twitter API and this easy-to-follow guide, you can quickly build a deployed application that grabs new tweets for sentiment analysis or for creating new training data.
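As a sketch, pulling fresh tweets through the Twitter API v2 with the tweepy client might look like this; the bearer token and search query are placeholders:

```python
import tweepy

# Assumes you have a Twitter API v2 bearer token from the developer portal.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

response = client.search_recent_tweets(
    query="YourBrand -is:retweet lang:en",  # hypothetical brand query
    max_results=100,
)

# response.data is None when the query matches nothing.
new_tweets = [tweet.text for tweet in (response.data or [])]

# Each tweet can now be run through the classifier, or labeled as training data.
```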
Twitter sentiment analysis using GPT-3 and other machine learning algorithms is a powerful application that is easy to build and can be customized to many different levels of sophistication. GPT-3 is an incredible tool that we’ve blended into all of our text analysis products, and it is clearly more than capable of producing high-quality results for Twitter sentiment.
We build custom GPT-3 and NLP products for a huge range of industries and use cases. We have GPT-3 case studies in asset management, SaaS, ecommerce, and many more. Let’s talk today about GPT-3 in your industry.