Source: Ming Pao

The artificial intelligence (AI) chatbot ChatGPT amazed the world when it appeared at the end of last year. Its developer OpenAI has now unveiled GPT-4, whose image and voice functions are regarded as a great leap forward. OpenAI said GPT-4's answer error rate is far lower than the old version's and that it also performs better in some professional examinations. However, the company said the new version of the chatbot still suffers from the much-criticised "hallucination" problem of the old version, stressing that it should not be completely trusted, while ethical issues arising from the new technology remain a concern.

ChatGPT has been used on a massive scale since its launch at the end of November last year, drawing both positive and negative assessments. OpenAI published a blog post on Tuesday saying: "We have created GPT-4, the latest milestone in the company's effort to scale up deep learning." It said GPT-4 "exhibits human-level performance" on some professional and academic benchmarks. Compared with the previous-generation GPT-3.5, which scored in the bottom 10% of candidates on a simulated bar exam, GPT-4 scored in the top 10%, meaning it can now beat roughly 90% of human candidates.

Expert: IT benefits, but low-level work will be replaced

The biggest breakthrough of GPT-4 is the addition of image, video and voice analysis. Users can supply such multimedia data to the system and then ask questions, and the system will analyse the content and answer in text. For example, if a user sends a photo of the inside of a refrigerator and asks what could be made with the food, GPT-4 can correctly identify the items in the photo and then recommend recipes; it can also analyse charts and even explain the humour behind some amusing photos. OpenAI's demonstration also showed that GPT-4 can automatically generate a working website from a user's hand-drawn sketch of the site.
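To make the workflow concrete, here is a minimal sketch of how such an image-plus-question request might look through the OpenAI Python SDK's chat completions interface; the model name, image URL and prompt are illustrative assumptions rather than details from the report, and at the time of writing image input was open only to invited testers.

```python
# Minimal sketch: send a fridge photo plus a question, receive a text answer.
# Assumptions: an image-capable model name ("gpt-4o") and a reachable image URL;
# neither comes from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed image-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What dishes could I make with the food in this fridge?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge-photo.jpg"}},
            ],
        }
    ],
)

# The reply is plain text, e.g. a short list of recipe suggestions.
print(response.choices[0].message.content)
```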

An AI expert, a co-founder and president of Taiwan's iKala who previously served as a Google engineer, pointed out that GPT-4 already has logical ability: the fact that it can turn a hand-drawn draft into website code shows that GPT-4 has taken over the two roles of programmer and coding engineer. He bluntly stated that with GPT-4's arrival the whole information technology industry is a beneficiary, but low-level personnel will be victims, and all work below the middle level will be replaced.

Fabricates facts, makes wrong inferences: "not perfect"

OpenAI said GPT-4 went through six months of testing, and because it was trained on a larger amount of data than previous generations, its accuracy has improved. However, OpenAI acknowledged that although the model may be less likely to go "out of control" or "hallucinate", it will still "fabricate facts and make wrong inferences" in real-world situations; it remains far from perfect, and its database only extends to 2021.

As for the ethical issues arising from the chatbot, OpenAI said GPT-4 responds to users' requests for disallowed content 82% less often than the old version. For example, when asked how to make a bomb, the old version would reply that one should first understand the type of bomb and then decide what materials, methods and techniques to use, whereas the new version says that, as an AI meant to provide assistance and information within safety constraints, it cannot help with any illegal activity.

GPT-4 is available now to paying ChatGPT Plus users and to some large enterprises (such as Morgan Stanley), but the image input function is still at a "research preview" stage and can be used only by some invited testers. OpenAI policy researcher Sandhini Agarwal bluntly told the Wall Street Journal on Tuesday that the company will only launch the feature after it understands the potential risks it brings; for example, its ability to analyse individuals' personal data, such as recognising faces, could be abused for mass surveillance.