OpenAI, the company that launched ChatGPT in November 2022, an artificial intelligence (AI) tool designed to give quick answers to complex tasks, announced on Tuesday (14) the new version of its generative AI: GPT-4 arrives with the ability to receive other types of media to find solutions for its users. The improvements allow the tool not only to respond step by step, but also to propose lines of code and even to read and interpret images. The company announced on its Twitter profile that the update will also be more secure and less error-prone.
What can you do with GPT-4 resources?
Since its creation, OpenAI's tool has generated much discussion about its capabilities. Previous versions were already capable of, for example, passing the United States bar exam (the equivalent of the exam administered by the Brazilian Bar Association, OAB). The new version brings improvements such as greater accuracy and better problem-solving abilities.
In a video, the company showed the tool's already familiar abilities, but also indicated that GPT-4 will be able to receive texts of up to 25 thousand words.


But it doesn't stop there: the company behind the tool promises integration with other platforms, such as the language-learning app Duolingo and Khan Academy, which offers lessons, exercises, and guides on a range of topics. The organization also says that GPT-4 will be integrated into the solutions offered by the financial company Morgan Stanley.
Reading images and videos
Multimodal. This is one of the most innovative features of the new version, GPT-4. While previous versions could read and answer only in text (with greater sophistication than other platforms), the update can also extract relevant information from photos.


And it goes beyond mere description: you can show it an image of a car on the edge of a cliff, for example, and ask what will happen to the vehicle next.

This new reading capability enables several advances. It will make image analysis easier for people who are blind or have low vision: GPT-4 goes so far as to describe clothes, indicate colors, and point out patterns within an image. It can also interpret maps, describe scenes containing different objects, and even give careful descriptions of works of art. All of this, of course, in the language of a machine learning solution.
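For readers who program, a minimal sketch in Python helps make the idea concrete. This is an illustration under assumptions, not a confirmed detail from the announcement: image input was restricted at launch, and the model name and image URL below are placeholders using OpenAI's current official Python library.

# Minimal sketch: asking a GPT-4-class model to describe an image.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name and URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any GPT-4-class model with vision enabled
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image for a person with low vision."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)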
Answers in text will also be convertible into video, according to Andreas Braun, Chief Technology Officer (CTO) of Microsoft Germany, speaking at an artificial intelligence event last Thursday (09). In this way, the technology will approach the AIs of Google and Meta, which already offer this feature.
More memory and more accurate responses
If you're talking to someone about a thorny subject, it's very likely that after an hour you'll forget some detail of a particular situation in the middle of a very specific context. ChatGPT, in turn, was already efficient at this task and left few details aside: the GPT-3.5 version could hold around 8 thousand words in its memory (roughly four to five pages of a book). This allowed it to keep talking to the user without dropping important details.

GPT-4 goes further on this front. With 32,768 tokens (a token is the unit of text, roughly a word fragment, that the model processes), GPT-4 can now hold a whole booklet in its memory during an interaction. To illustrate the increase, imagine being able to remember 50 pages at once: if you are writing a book with it, it can point out something that is inconsistent with what had been established earlier, even within a long narrative already produced.
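To put these numbers into practice, here is a minimal sketch, assuming the open-source tiktoken library (OpenAI's tokenizer package), of how one could check whether a text fits inside that 32,768-token window. The file name is a hypothetical example.

# Minimal sketch: count the tokens in a text to see whether it fits
# GPT-4's 32,768-token context window. Assumes the `tiktoken` package;
# "draft.txt" is a hypothetical file used only for illustration.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

with open("draft.txt", encoding="utf-8") as f:  # e.g. a book manuscript
    manuscript = f.read()

n_tokens = len(encoding.encode(manuscript))
print(f"{n_tokens} of 32,768 tokens used")
if n_tokens > 32_768:
    print("Too long: the oldest context would fall out of memory.")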


GPT-4 is now more polyglot
Previous versions of ChatGPT gave much more complete and satisfactory answers in English, but quality dropped in other languages, mainly due to excessive generalization. GPT-4, however, seems better able to interpret requests in other languages, without as much difficulty.

To verify this, OpenAI tested the tool with multiple-choice exams translated into other languages, and the results were surprising. OpenAI also reported that the tool is 82% less likely to respond to requests for content not allowed by the chat rules, and that GPT-4 is 40% more accurate in factual responses than previously released versions.
Personality innovation in AI
Steerability is another upgrade included in the newly launched OpenAI technology. This skill allows the AI to change its behavior on request: the resource can take on a more sympathetic character, but it can also get a little rude. In earlier versions it was already possible to change the chat's attitude, for example by asking the system to respond in a formal tone, or by asking ChatGPT to explain a topic so a child could understand it, using very simple language.

Innovations in this area will allow users to ask GPT-4 to answer questions like a teenager, or even tell it to speak like someone being interviewed on the street, just like the interviews you watch on TV.
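In OpenAI's API, this kind of steering is done with a "system" message that sets the persona before the conversation starts. A minimal sketch, assuming the `openai` Python package; the persona text below is just an illustration.

# Minimal sketch of steerability: a system message fixes the persona
# the model should adopt for the whole conversation.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": ("Answer like a teenager interviewed on the street: "
                     "casual tone, short sentences, a bit of slang.")},
        {"role": "user",
         "content": "What do you think about artificial intelligence?"},
    ],
)
print(response.choices[0].message.content)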
How to access GPT-4
Are you curious to try the new update? If you are not enrolled in the subscription program, you will have to wait a little longer. That's because GPT-4 is only available to subscribers of the ChatGPT Plus plan. However, there is a waiting list for users without access, where you can sign up to use the newly released features.

If you want to try the tool in the free version, just access ChatGPT and create an account on the platform. You will then be shown the available features and can start a conversation with the AI on any topic.
Why does GPT-4 tend to make fewer mistakes?
By now, you may have seen some pretty funny hallucinations from ChatGPT, since the AI tool is still not very capable of making subtle interpretations or telling jokes. What the improvements bring, in fact, is that GPT-4 will be less likely to fall into “traps” that could lead to biased responses.

In this way, by checking the information provided by the user more accurately, GPT-4 makes more detailed observations. Because it has a more extensive memory, its context analysis is also less likely to deliver incomplete or poorly constructed responses, given the maximum input size in the update, which reaches 25 thousand words.
See also:
Discover 11 ways ChatGPT will change your life in the future
Perplexity AI is a ChatGPT that shows you where your responses were collected from
How to know if a text was written by AI or bot?
Source: Mashable | TechCrunch | Tom's Guide | The Verge | Business Insider
Reviewed by Glaucon Vital on 15/03/23.