Q&A: GPT-3 Webinar

GPT-3, created by OpenAI, is last year's biggest advancement in AI language processing. On the 4th of March, Bit hosted a Deep Tech Dive webinar on GPT-3, where our colleague Mirko explained how GPT-3 and follow-up research will impact the Finance, Insurance, and Legal industries.




During the webinar, our guests raised many interesting questions, and in this blog we want to share them with you. In case you missed the webinar, you can find the link to the recording at the end of this post.

Q&A GPT-3 webinar:

Q: Comparing GPT-2 to GPT-3, how much do you think the new language model has improved? I have used GPT-2 before and it was already pretty good at what it did.

A: The architecture of the model stayed the same; it just got scaled up by a lot! This means the model captures more knowledge from the training set, so it can generate text that makes more sense, where GPT-2 would often drift into a kind of mumbling. However, GPT-3 still suffers from some of the issues GPT-2 had: in longer generated texts, it can lose coherence or contradict itself.

Q: I assume filtering and curating the dataset and training GPT-3 is done by humans. Doesn't that create a bias from the human input?

A: They used several datasets that were created by different organizations. The filtering and curating done by OpenAI focused on removing duplicates and ensuring diversity. However, since all the data comes from the internet, it was once written by humans. This inevitably creates a bias that mirrors the general biases in the world. Everything in our lives has biases. It is important to be very aware of this, but it is impossible to eliminate it completely.

Q: The samples are in English; how is the support for other languages like Dutch?

A: Dutch and other low-resource languages are more difficult. I gave it the first two Dutch questions with answers and then asked it the final one: “What shows the time?” It answered “The clock”. So it answered correctly, but I expect it to perform worse in general and would advise translating Dutch to English before putting it through GPT-3.
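The Dutch demo above follows the usual few-shot pattern: worked question/answer pairs first, then the real question with the answer left blank. A minimal sketch of how such a prompt can be built (the example pairs here are illustrative, not the exact prompt from the webinar):

```python
def build_few_shot_prompt(examples, question):
    """Concatenate worked Q&A examples, then leave the last answer
    blank so the model completes it in the same pattern."""
    lines = []
    for q, a in examples:
        lines.append(f"V: {q}")  # "V" for "vraag" (Dutch: question)
        lines.append(f"A: {a}")
    lines.append(f"V: {question}")
    lines.append("A:")  # the model fills in this answer
    return "\n".join(lines)

# Illustrative Dutch example pairs (not the webinar's actual prompt)
examples = [
    ("Wat is de hoofdstad van Nederland?", "Amsterdam"),
    ("Welke kleur heeft gras?", "Groen"),
]
prompt = build_few_shot_prompt(examples, "Wat geeft de tijd aan?")
print(prompt)
```

The resulting string is what you would send to the model as the prompt; the completion it returns is read as the answer to the final question.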


Q: Legalese and the way law works is different per country, would GPT-3 be able to distinguish/recognize this?

A: GPT-3 is a statistical language model. It does not understand the way law works at all. The only thing it is really good at is parsing the given text and statistically estimating the best answer to give you. If the required information for your question is not in the text you feed to GPT-3, it will have a low chance of answering your question correctly.

Q: Can GPT-3 help detect fake news?

A: GPT-3 might be able to learn this if we give it enough examples of fake and real news. However, as GPT-3 does not understand everything in our world and fake news can sound really realistic, I’m not confident it would work reliably enough.

Q: At what length of text (e.g. 1 page up to 100 pages) will the GPT-3 results still remain robust, i.e. provide for the correct answer?

A: Bluntly said, the amount of context GPT-3 can handle at once is 2048 tokens, which comes down to roughly 1,500 words. This is the “memory” capacity of the model: anything that falls outside of that limit is not taken into account when generating the answer. (A more in-depth answer would require more expert AI knowledge to get into.)
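In practice this means long documents have to be truncated or split before being sent to the model. A crude sketch of a token budget, assuming whitespace-split words as a stand-in for GPT-3's actual byte-pair-encoded tokens (the real token count is usually somewhat higher than the word count):

```python
def truncate_to_budget(text, max_tokens=2048, reserved_for_answer=256):
    """Keep only as much of the text as fits in the model's context,
    leaving room for the generated answer.

    Whitespace splitting is a rough proxy for real BPE tokenization,
    used here purely for illustration."""
    budget = max_tokens - reserved_for_answer
    words = text.split()
    return " ".join(words[:budget])
```

Anything beyond the budget is simply dropped; a more careful approach would split the document into overlapping chunks and query each one.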

Q: Does GPT-3 support ambiguous questions already? Imagine two different Japanese people being mentioned in the article.

A: I tested it for you!



Q: What would it answer if you ask if the insurance covers something that is not mentioned in the article?

A: It would make up an answer based on its knowledge and a general understanding of the world. However, you can ask GPT-3 to indicate how certain it is about an answer, and in this case it would be very uncertain. So you have an indication of the validity of the answer. As can be seen below, I asked GPT-3 whether the insurance covers a broken TV - which is not mentioned in the insurance clause. In the probabilities, you can see it is almost 50/50 yes and no. This means it did not have a confident answer either way.




Q: Will it know how to answer questions about particular animals that are pets?

A: I tested it for you!



Q: Do you know when we can expect GPT-3 to be integrated in Microsoft products and in which ones?

A: When it will be integrated, we don’t know yet. I see a lot of possibilities within Microsoft's existing services such as Word, Excel, Teams, or OneNote. It can also be a big strategic move for Microsoft, as they have been focusing on making AI services easily implementable for a broad audience. GPT-3 is pretty plug-and-play, so this falls within that strategy.

Q: What if you use a synonym of a word instead of the exact word in the text?

A: I tried it out for you in the article Q&A demo. Instead of asking what percentage of the population is covered by vaccinations, I used “people” instead of “population” and explicitly didn’t mention anything with “covering” or “covered”. It was still able to answer the question correctly.



Q: Would GPT-3 be able to explain why it provided its answer?

A: It gives its answer probabilities, as shown above. Other than that, I tried it out for you. But it was pretty straightforward in its response.



Q: Can it summarise long texts into several bullet points?

A: I think you can teach it to do so by giving it a few examples. I haven’t tried it yet!

Q: Will GPT-3 be able to answer when we can get invited to their public beta program?



Q: Can I experiment using your beta access, or how can I get access myself?

A: If you want to try out your ideas, please contact us!

Q: As the main model is unsupervised, I assume bias is going to be a challenge... if the model reads 10K documents written by the flat earthers I suspect it might start talking flat earth topics?

A: This is true. That is why they are very careful: they are not opening the entire model to the world, and are fine-tuning only with a select group of trusted companies.

Q: Is it (already) possible to analyze voice instructions, or easy to implement for users who cannot read or write (for accessibility purposes)?

A: GPT-3 is not able to transcribe speech to text. Thankfully there are very good tools out there that can! What you can do is feed GPT-3 the transcription and get the answer back by synthesizing it with a generated voice.
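The pipeline described above can be sketched in three steps: speech-to-text in front of GPT-3, text-to-speech behind it. The three functions below are stand-ins for real services (a cloud speech-to-text API, the GPT-3 API, a text-to-speech engine), stubbed out here so the flow itself is clear:

```python
def transcribe(audio):
    # Stand-in for a speech-to-text service; here the "audio" is
    # just a dict already carrying its spoken text.
    return audio["spoken_text"]

def ask_gpt3(question):
    # Stand-in for a GPT-3 completion call.
    return f"(GPT-3's answer to: {question})"

def synthesize(text):
    # Stand-in for a text-to-speech engine.
    return {"spoken_text": text}

def voice_assistant(audio):
    """Chain the three services: hear, answer, speak."""
    question = transcribe(audio)
    answer = ask_gpt3(question)
    return synthesize(answer)
```

Swapping in real transcription and synthesis services, the user never has to read or type anything; GPT-3 only ever sees and produces text.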

Q: When is GPT-4 expected ;-)

A: I don’t think it is very accurate, but GPT-3 itself says 2023. Current speculation on the internet, however, is somewhere in 2021.


Q: Critical for wide acceptance is clarity on ownership and responsibility. If you embed GPT-3 into your application, who is responsible for the consequences of the answers? (see the “kill yourself” incident)

A: I’d always hold the developer responsible. The developer should know what he/she is implementing and develop applications in a way that will not harm the public. Having said that, OpenAI implemented a lot of features to detect and tag unsafe responses!



If you missed the webinar or you just want to watch it again or share it with your colleagues, you can find the full video on our YouTube channel HERE.


