3 Ways Chat GPT-4 Will Outshine Chat GPT-3.5
Since OpenAI launched Chat GPT to the general public last year, the company has gone from having just a few hundred users to surpassing Facebook in its adoption. Today, a few million people probably use Chat GPT for everything, from coding an app to solving maths problems. However, many users may not know that Chat GPT originally used GPT-3.5, and that’s still the case if you’re on the free version of the chatbot. But paid users, and slowly an increasing number of free users, will start gaining access to GPT-4.
So, what’s the difference between version 3.5 and 4.0? After all, a difference of just .5 can’t be all that much… right? Well, wrong; it’s a world of difference. We’ll review some of the significant updates in this article and explain why these are potential game-changers.
1) GPT-4.0 Recognises Images And Other Visual Input
Compared to GPT-3.5, which powered Chat GPT during its public release in November, GPT-4 has undergone some noticeable changes.
One of the most noticeable updates is its ability to process visual inputs alongside the text, enabling users to pose questions based on image-based prompts.
For instance, you can upload a picture and ask the chatbot where the people in the image are standing. The AI can analyse the image and spit out that they were standing in front of the Taj Mahal or Superdome, etc. It should also be able to tell the make and model of a vehicle from a photo. But this ability, as of this writing, has yet to be extensively tested, so we can’t tell you if it can distinguish between a VW Golf and a BMW i3 Series by just uploading a picture of the tail lights.
Additionally, GPT-4 understands meme culture and humour derived from images. This far surpasses what anyone could imagine because comedy is intrinsically considered a human quality. To put things into context, AI can now understand the difference between a mundane image of the earth and the funny meme of a dancing chicken!
2) Harder To Trick Version 4.0 Compared To 3.5
While modern chatbots have made significant strides, they are still prone to manipulation, and that was proven when people manipulated version 3.5 into saying racist, homophobic and other unacceptable remarks.
Users will coax them (chatbots) with specific prompts, which make them say strange and unsettling things or convince them that they are merely describing the actions of a “bad AI.”
We’ve even heard about people creating “jailbreak” prompts that help Chat GPT and other chatbots break free of their programming constraints. While the developers have actively worked on patching up those holes, many were left open.
However, GPT-4 is different because it has been extensively trained on malicious prompts provided by users via their interaction with the chatbot over the last year or two. That is how it surpasses its predecessors regarding factuality, steerability and capacity to stay within predefined boundaries.
OpenAI considers GPT-3.5, which powers Chat GPT, a “test run” of a new training architecture. The company applied the lessons learned from that experience to develop an “unprecedentedly stable” new version, aka 4.0. Moreover, they have a better understanding of GPT-4’s capabilities, which reduces the likelihood of surprises but like with everything else in the computer world, there will be surprises!
3) GPT-4 Can Remember More Stuff
Large language models are trained on vast amounts of text data but can only remember so much when conversing with a user. Yes, it means they suffer from short-term memory loss, like many old-timers.
For example, the previous version of Chat GPT and GPT-3.5 had a token count limit of 4,096, equivalent to approximately 8,000 words or four to five pages of a book. Once the conversation went beyond that limit, the model would lose track of the earlier parts of the conversation. That’s where GPT-4 and, consequently, Chat GPT will be better.
GPT-4 is designed with a maximum token count of 32,768 or 2^15, roughly 64,000 words, or 50 pages of text. As a result, it can keep up to 50 pages of text in mind during a conversation or text generation, allowing it to recall events that occurred 35 pages ago when writing a story or essay. This means that people can have a more extended conversation with the chatbot and reference things they said a few pages back without reintroducing the data to ask a new question.
Shortcomings of GPT-4
It is important to stress at this point that despite the developments mentioned in this article, there are clearly limitations to what the chatbot can accomplish. OpenAI is warning us that this model still has some flaws that need to be ironed out and probably will over the next few years.
The biggest problem is that it tends to “hallucinate,” which means it sometimes spouts false information as if it were fact. And let’s face it; nobody wants to be fed fake news by a machine.
Then there is also the fact that GPT-4, similar to 3.5, needs to catch up on current events. It hasn’t been updated since September 2021, so it’s not up-to-date on the latest gossip. Also, it does not own up to its mistake and doesn’t seem to learn from them, which is a bummer.
Finally, there is also the off chance that GPT-4 is slightly biased in its output. But all of this is to be expected and is part and parcel with all the nice things it brings.
Final Word
While GPT-4 is undoubtedly a step in the right direction and will probably mean it improves over the next few months, no AI is 100% perfect. That said, it will make the lives of many business owners, freelancers, and meme creators on Reddit much more fun!