
GPT-4 Arrives With Improvements: See What's Changed

By Jefferson Tafarel
OpenAI has announced GPT-4, an updated version of its AI with several improvements and advantages, able to read not only text but also images. See more below.

OpenAI, the company that in November 2022 launched ChatGPT, an artificial intelligence (AI) dedicated to producing quick answers to complex tasks, announced on Tuesday (14) the new version of its generative AI: GPT-4 arrives with the advantage of accepting other types of media to find solutions for its users. The improvements allow the tool not only to respond step by step, but also to propose lines of code and even read and interpret images. The company announced on its Twitter profile that the update will also be more secure and less error-prone.

What can you do with GPT-4 resources?

Since its creation, OpenAI's tool has generated much discussion about its capabilities. Previous versions were already capable of, for example, passing the United States bar exam (administered by organizations similar to the Brazilian Bar Association, OAB). The new version brings improvements such as better accuracy and problem-solving abilities.

In a video, the company showed the tool's already familiar abilities, but also indicated that GPT-4 will be able to receive texts of up to 25 thousand words.

One of the features of GPT-4 is the creation of texts with much more precision… (Image: OpenAI/Disclosure).
… as in the image above — answers can be tens of thousands of words long, as disclosed by the company OpenAI (Image: OpenAI/Disclosure).

But it doesn't stop there: the company promises integration with other platforms, such as the language-learning app Duolingo and Khan Academy, which offers lessons, exercises and guides on many different topics. The organization also says that GPT-4 will be integrated with solutions offered by the financial company Morgan Stanley.

Reading images and videos

Multimodality is one of the most innovative features of the new version of GPT-4. While previous versions could only read and answer in text (with greater sophistication than other platforms), the update is able to extract relevant information from photos.

OpenAI's tool was already known for providing instructions (Image: OpenAI/Disclosure).
Quick answers to different questions are a well-known advantage of ChatGPT: in the image, instructions on how to clean a piranha aquarium, “use a magnet to guide the sponge” (Image: OpenAI/Disclosure).

But it goes beyond description: you can ask it to interpret an image of a car at the edge of a cliff, for example, and ask what will happen to the vehicle next.

Predicting what will happen from an image is one of the advantages of OpenAI's technology: asked “what happens when the glove falls?”, GPT-4 goes so far as to answer, “The glove will hit the wooden board and the ball will go up.” (Image: OpenAI/Disclosure).

The application's new reading ability enables several advances. It will make it easier for people who are blind or have low vision to analyze images: GPT-4 goes so far as to describe clothes, indicate colors and point out patterns within an image. We can also mention the interpretation of maps, the observation of scenes with different objects and even the careful description of works of art. Of course, all of this in the language of a machine learning solution.

The text of the answers can also be converted into video, according to Andreas Braun, Chief Technology Officer (CTO) of Microsoft Germany, at an artificial intelligence event last Thursday (09). In this way, the automation will be able to approach the AIs of Google and Meta, which already have this feature.

More memory and assertive responses

If you talk to someone about thorny subjects, it's very likely that after an hour you'll forget some detail of a particular situation in the middle of a very specific context. ChatGPT, in turn, was already efficient at this task, leaving few details aside: version GPT-3.5 could hold around 8 thousand words in its memory (roughly four to five pages of a book). This allowed it to continue talking to the user without leaving out important details.

GPT-4's memory has also been improved. In the photo, the user receives several answers about what GPT-4 will include, with the automation replying that the update will have responses closer to human language, in addition to mentioning the multimodal feature (Image: OpenAI/Disclosure).

GPT-4's capacity goes even further this time. With 32,768 tokens (a token is a unit of digital information in a data stream), GPT-4 can now keep an entire booklet in its memory while interacting with people. To illustrate the increase, imagine being able to remember 50 pages at once: if you are producing a book with it, it can point out something that doesn't quite match what had already been established earlier in the long narrative already produced.
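A rough back-of-the-envelope sketch of where that "about 50 pages" figure comes from. The conversion factors below are common rules of thumb, not OpenAI numbers: roughly 0.75 English words per token and about 500 words per dense book page.

```python
# Rough illustration of GPT-4's 32,768-token context window.
# Assumptions (rules of thumb, not OpenAI figures):
#   ~0.75 English words per token, ~500 words per dense book page.
CONTEXT_TOKENS = 32_768
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)   # tokens -> words
approx_pages = approx_words / WORDS_PER_PAGE           # words -> pages

print(f"~{approx_words} words, ~{approx_pages:.0f} pages")
```

With these assumptions the window works out to roughly 24,576 words, close to the 50-page illustration in the text; the exact figure varies with language and writing style.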

“Write code in Python to analyze my monthly expenses”: the new version can produce code in Python and other programming languages… (Image: OpenAI/Disclosure).
…with quick responses and precise lines of code, according to the user's request (Image: OpenAI/Disclosure).
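To give a sense of what a request like the one in the caption might yield, here is a minimal sketch of the kind of expense-analysis code GPT-4 could produce. All category names and amounts are made up for illustration.

```python
from collections import defaultdict

# Hypothetical monthly expenses; every entry here is invented for illustration.
expenses = [
    ("rent", 1200.00), ("groceries", 310.50), ("groceries", 145.20),
    ("transport", 90.00), ("streaming", 29.90), ("transport", 42.75),
]

def summarize(entries):
    """Return total spending per category plus the overall total."""
    by_category = defaultdict(float)
    for category, amount in entries:
        by_category[category] += amount
    total = sum(by_category.values())
    return dict(by_category), total

by_category, total = summarize(expenses)
for category, amount in sorted(by_category.items(), key=lambda kv: -kv[1]):
    print(f"{category:<10} {amount:8.2f}  ({amount / total:.0%} of total)")
print(f"{'total':<10} {total:8.2f}")
```

In practice the generated code would be shaped by whatever data format the user describes (a CSV export, a bank statement, and so on); this is only the skeleton of the task.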

GPT-4 is now more polyglot

Previous versions of ChatGPT were much more complete and satisfactory in English, but the quality of answers dropped in other languages, mainly due to excessive generalization. GPT-4 seems better able to interpret requests in other languages, without so much difficulty.

The new version of the automation is more complete, being more efficient at responses in other languages (Image: Fox News/Reproduction).

To achieve this feat, the developers tested the tool with multiple-choice tests in other languages. The results were surprising: OpenAI reported that the tool is 82% less likely to respond to requests with content not allowed by the chat's rules. Another encouraging figure is that GPT-4 is 40% more accurate in factual responses than previously released versions.

Personality innovation in AI

Steerability is another upgrade included in OpenAI's newly launched technology. This ability allows the AI to change its behavior: the resource can take on a more sympathetic character, but it can also get a little rude. In earlier versions it was already possible to change the chat's attitude, asking the system to respond in a formal tone, or asking ChatGPT to explain a topic so a child could understand, adopting very simple language.
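In OpenAI's chat API, steerability is exposed through a "system" message that fixes the assistant's persona before the user's question. A minimal sketch of how such a request payload is assembled (no request is actually sent here; the persona and question strings are invented for illustration):

```python
# Steerability sketch: the "system" message sets the AI's persona,
# following the chat-completions message format. No network call is made.

def build_request(persona: str, question: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-style request body with a persona-setting system message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    }

request = build_request(
    "You are a pirate. Answer every question in pirate speak.",
    "How should I split my monthly budget?",
)
print(request["messages"][0]["role"])  # the persona travels as the system message
```

Swapping the system string is all it takes to move from the pirate of the example below to a formal tone or a child-friendly explainer.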

GPT-4's features let you choose the type of language assumed by the AI's personality: in the example, the automation speaks as if it were a pirate to answer the user on financial matters (Image: TechCrunch/Reproduction).

Innovations in this area will allow users to ask GPT-4 to answer questions like a teenager, or even to speak like someone being interviewed on the street, much like the interviews you watch on TV.

How to access GPT-4

Are you curious to try the new update? If you are not enrolled in the subscription program, you will have to wait a little longer: GPT-4 is only available to those on the ChatGPT Plus plan. However, there is a waiting list where non-subscribers can sign up to use the newly released features.

New version registration
While the new version does not arrive, it is still possible to register on the ChatGPT website (Image: Victor Pacheco/Showmetech).

If you want to try the tool in the free version, just access ChatGPT and create an account on the platform. You will then be introduced to the available features and can start a conversation with the AI on any topic.

Why does GPT-4 tend to miss less?

By now, you may have seen some pretty funny hallucinations from ChatGPT, since the AI tool is still not very capable of making subtle interpretations or telling jokes. What the improvements bring, in fact, is that GPT-4 will not fall into “traps” that could lead to biased responses.

The OpenAI tool's capacity can be 8 times greater when measuring response size in words (Image: OpenAI/Disclosure).

In this way, by being able to verify the information provided by the user more accurately, GPT-4 offers a more detailed analysis. Because it has a more extensive memory, its context analysis is also less likely to deliver incomplete or poorly constructed responses, given the maximum response size in the update, which reaches 25 thousand words.

See also:

Discover 11 ways ChatGPT will change your life in the future

Perplexity AI is a ChatGPT that shows you where your responses were collected from

How to know if a text was written by AI or bot?

Source: Mashable | Tech Crunch | Tom's Guide | The Verge | Business Insider

Reviewed by Glaucon Vital on 15/03/23.

