This column was originally named "Practice" after a Taiwanese film of the same title from more than a decade ago. The film follows a student cycling around the island of Taiwan over the course of a week, recording local customs, historical stories and social issues along the way.

"Some things don't do now, and you won't do it for a lifetime." This is a classic line in the play.The name of this drama name is because of this sentence, and on the other hand, during that period I just got involved in digital transformation of news media, rolling in the Internet and social media.Functional things, just like learning instruments must master a large number of exercises, this is a necessary homework.

Writing a column is a journalist's homework, but search engines have made information, knowledge and even memory so convenient that all of it can be handed over to the Internet. Over time a writer can become little more than a curator of the Internet, repackaging other people's views in different words.

No reporter with ideals wants to become someone who does not think for themselves. We hope to think through the topics we care about, absorb the relevant information and build our own system of knowledge. The original intention of the "Practice" column was to preserve my own capacity to learn: to build a Noah's Ark in the digital torrent of information, and to keep my appetite for learning alive through writing.

Yet with the sheer mass of information online, and the quick explainer videos everywhere, I do find myself outsourcing to the Internet more and more often, including which information to cite and the process of building a deep knowledge structure through my own questions and answers. The result looks like knowing a little of everything while mastering nothing; it is intellectually malnourished.

With large language models such as ChatGPT, some say the era of the search engine may be coming to an end, or that search will iterate into something like a conversational robot. Human nature prefers ease to effort: people do not want to spend more time and energy than they must, and getting the task done with the least effort becomes the first choice.

When you can get an answer to a question in a few seconds, how many people will still think it through? When you are told the answer is 1234, how many will consider the possibility that it is 5678? And yet we cannot refuse to learn and use AI simply because it is dangerous and unknown. What people are worried and anxious about now is not whether to use AI, but whether they will use it better than others do.

I recently chatted with some people working in technology and came away with two impressions: on the one hand, AI is more capable than we think; on the other, it may not be as clever as we think. That is not to say AI is not smart, but that humans do not yet fully understand its underlying logic. Some say it is the data, some say it is the algorithms, and scientists have yet to reach a consensus. This uncertainty about the underlying logic means humans cannot accurately predict what AI will choose at critical moments, which is why scientists cannot guarantee that AI will never act in ways that harm humans.

Philosophy speaks of two concepts, "real concern" and "ultimate concern". The former focuses on goodness in nature and in present reality, including humanistic care and rational reflection; the latter looks toward the infinity of life, longing to transcend the finite and reach something eternal in spirit. Before the end point is reached, the two are not contradictory; but at the critical moment, two people, or two artificial intelligence programs, with different leanings may make very different decisions.

For example, suppose that in a game the instruction given to an artificial intelligence is simply to win. Something unexpected happens in the match, nothing in its data covers it, and the algorithm settles on whatever it calculates to be the most effective way to win, regardless of what that way turns out to be. Can we accept such a result?

Last week I saw a demonstration of a chat AI. The programmers fed the AI several years of data about an elderly lady, along with information about her family and loved ones, and then let her grandson talk with this "ancestor". In the demonstration video the old lady's virtual expressions were delicate and full of emotion; she knew the ages of her two great-grandsons and said she missed them, while the grandson, already middle-aged, watched his grandmother's face. A chat AI like this would have real commercial value, especially for the elderly living alone: as family structures shrink, support from relatives and friends dwindles and loneliness grows, using a familiar face to soothe that loneliness would be more effective than the legendary "asking Mi Po" spirit mediums. After watching the demonstration my mood was heavy. Will this become a form of emotional blackmail, and a nightmare for society?

We can record knowledge and preserve wisdom, but should we record emotions and let them become fetters? I have no answer, and this is only one of the many questions of AI ethics.

For more than a week, the big story has been how Sam Altman, co-founder and CEO of AI leader OpenAI, was fired by the board and then reinstated three days later, a reversal some have called an "epic palace intrigue". The episode matters because it was an important battle between two factions in artificial intelligence.

OpenAI's founding team appears to have been built on ideals. At the outset they declared that they were pursuing the enormous potential unlocked by AI while ensuring that its benefits would be shared fairly by all of humanity. They therefore set it up as a non-profit, with research results released as open source for anyone to use. The founders did not draw the astronomical salaries commanded by Silicon Valley's top talent, and this noble ideal attracted a group of first-class engineers at the time.

On the ethics of using AI, these top talents split into two factions: one is "Effective Altruism", also known as the safety camp; the other is "Effective Accelerationism". Simply put, the former insists on developing AI cautiously, while the latter pursues technological progress at any cost and advocates using the power of capital to drive social change. Altman belongs to the latter. The accelerationists are not out to enrich themselves; rather, in order to realize their ideals they may make decisions in the name of progress whose ultimate effect on humanity no one can foresee.

If the OpenAI storm ends this way, it underlines a cruel fact: AI technology burns through enormous amounts of money, and like many ideals that look beautiful, the original vision turned out in the end to be little more than beautiful packaging. That does not mean we should stay away from AI; on the contrary, we should get close to it and master it.

This time the capital-burning effective accelerationists have won, which makes it all the more urgent to establish effective laws and regulations governing artificial intelligence, not for the sake of geopolitics, but for the future of humanity. Some things, if we do not do them now, we will not get to do even in the next life.