Here I take a close look at OpenAI's new AI models and the technology behind GPT-3.
Let's begin:
What technologies can analyze and reproduce any human language, be it natural or formal?
One answer is the following: GPT-3.
In the world of Artificial Intelligence, the GPT-3 language model (Generative Pre-trained Transformer 3) from the company OpenAI is rapidly capturing the curiosity of the general public. We are looking at a model that uses deep learning to compose, for example, poems, stories, computer code, and songs in short order, so convincing that they seem like "products" written by a human being. All it needs is a human prompt to complete the work. But GPT-3's eloquence is much more than a parlor trick. Researchers believe the techniques used to create GPT-3 could hold the secret to a more advanced Artificial Intelligence. All good, then? In reality, the news that only Microsoft will have access to GPT-3's underlying code has raised more than a few eyebrows. Let's see why.
GPT-3 uses its algorithms to generate text. These algorithms were trained beforehand on an enormous database of text.
It evaluates and processes the data it receives in order to fill in the informational gaps.
It can generate text from a single phrase, completing the rest of the writing with a model of over 175 billion parameters. This figure is significant: the previous version, GPT-2, presented in 2019, had only 1.5 billion parameters, more than a hundredfold fewer. The progress in a single year has been remarkable.
It can translate texts into other languages and adapt them to different writing styles, such as journalism, novels, etc. It can also write poems or give us the best answer to the question we ask it.
In short, GPT-3 can handle anything that is structured like a language: it can answer questions, write essays, summarize long texts, translate, take notes, and even write computer code.
Yes, you read that correctly: GPT-3 can also program. To general surprise, it has been shown to work inside a plug-in for Figma, a software tool commonly used to design applications and websites. This capability could have far-reaching implications for how code and software are developed in the future; the sketch below illustrates how such generation works through simple prompting.
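To make the idea of prompting concrete, here is a minimal sketch of how text and code generation worked through the original GPT-3 API (the openai Python client in its early, pre-1.0 form). The engine name, prompts, and parameter values are illustrative assumptions; the article itself shows no code.

```python
# A minimal sketch of GPT-3 prompt completion via the early openai
# Python client (pre-1.0 interface). Engine name, prompts, and
# parameter values are illustrative assumptions, not from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a prompt to GPT-3 and return its generated continuation."""
    response = openai.Completion.create(
        engine="davinci",       # the base GPT-3 model
        prompt=prompt,
        max_tokens=max_tokens,  # maximum length of the continuation
        temperature=0.7,        # sampling randomness
    )
    return response["choices"][0]["text"]

# One mechanism, many tasks: only the prompt changes.
print(complete("Summarize in one sentence: GPT-3 is a language model "
               "with 175 billion parameters trained on a huge text corpus."))
print(complete("# A Python function that returns the n-th Fibonacci number\n"
               "def fib(n):"))
```

The same call handles translation, question answering, or essay writing simply by rephrasing the prompt, which is what makes the model feel so general-purpose.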
The range of things it can do may seem incredible, but its potential capabilities are even more astounding.
What is OpenAI's GPT-3, the new artificial intelligence model
GPT-3 has been trained on a huge amount of textual data. But what if such a language model were trained not only on texts but also on images? New research by the Allen Institute for Artificial Intelligence (AI2) brings this question to life. The researchers have developed a new "combined" model of text and images, known as a visual language model, capable of generating images from a caption. The images appear as detailed as the hyper-realistic deepfakes generated by GANs (Generative Adversarial Networks), and could point to a promising new direction for building better Artificial Intelligence, with interesting implications for robotics as well.
The GPT-3 model in detail
GPT-3 belongs to a family of language models known as "transformers", which first became widely known with Google's BERT (Bidirectional Encoder Representations from Transformers). Before BERT, language models were rather weak: they had enough predictive power to be useful for applications such as autocomplete, but not enough to generate a long sentence that followed grammatical rules and made logical sense.
BERT changed this by introducing a new technique called "masking": different words in a sentence are hidden, and the model is asked to fill in the blank. The idea is that if the language model is forced to do these exercises, often millions of times, it begins to discover how words are assembled into sentences, and how sentences are assembled into paragraphs. As a result, the generated text comes closer to making complete sense. (The short sketch at the end of this section shows the masking exercise in practice.)
After masking proved very effective, researchers tried to apply it to visual language models by hiding words in image captions. This time the model could look at both the surrounding words and the content of the image to fill in the blank. Through millions of repetitions, it discovers not only the patterns between words but also the relationships between words and the elements of each image. The result is models that can relate textual descriptions to visual references, much as children form connections between the words they learn and the things they see. These models can look at a photo and write a caption on their own, in a completely autonomous way. Or they can answer questions like "what is the color of the ball?" by linking the word "ball" to the spherical object in the image.
But the AI2 researchers wanted to know whether these models had actually developed a conceptual understanding of the visual world. A child who has learned
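To close, the masking exercise described above is easy to see in action. A minimal sketch, assuming the Hugging Face transformers library and a public BERT checkpoint (the article names neither):

```python
# A minimal sketch of "masking": hide a word and ask BERT to fill the
# blank. The library choice and example sentence are assumptions.
from transformers import pipeline

# BERT was pre-trained on millions of fill-in-the-blank exercises like this.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The ball is [MASK]."):
    # Each candidate word comes with the model's confidence score.
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```

Running the same exercise with an image alongside the caption is exactly the step that turns a language model into a visual language model.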