Will artificial intelligence get out of control?

  For a while, the whole world tried to put the brakes on ChatGPT.
  On March 31, Italy's data protection authority (the Garante per la Protezione dei Dati Personali) announced the suspension of ChatGPT's operation in the country. This put other EU regulators on alert: authorities in countries such as Germany, France, and Ireland approached Italy for more information on the ban. Although none of them has announced a concrete plan to block the service, the move poured cold water on ChatGPT at the height of its momentum.
  The privacy issue only scratches the surface; ChatGPT faces bigger trouble. At the end of March, an open letter published on the website of the Future of Life Institute explicitly called for a pause on training AI systems more powerful than GPT-4.
  As of April 10, 18,980 scientists, researchers, and practitioners had signed, including some of the field's most prominent figures: Turing Award winner Yoshua Bengio, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, among others. Musk, naturally, could not be absent; he ranks third on the list of signatories.
  The open letter states that AI systems with human-competitive intelligence may pose profound risks to society and humanity, and that today's AI advances have outpaced even their creators' ability to understand, predict, or reliably control them. Before things get out of hand, therefore, humans should work out how to govern and audit AI.

  In the imitation of human beings, artificial intelligence moves towards its own future.

  In just a few months, artificial intelligence has become ubiquitous. "Explosive" no longer suffices to describe ChatGPT; "frenzied" would not be an exaggeration. Some see the arrival of a true digital civilization; some look forward to a new human era. Some say we have entered the age of AI spam; others think AI has ended the academic essay and shaken white-collar work. And there is the claim on everyone's lips: AI has entered an era of self-emergence, and a new intelligent life is about to be born.
  OpenAI CEO Sam Altman has admitted frankly that even his own team does not fully understand how ChatGPT's capabilities evolved, and has gone so far as to make the alarming remark that "AI may indeed kill humans."
  Fear, excitement, uncertainty. Yet we overlook a simple fact: at this stage, ChatGPT is not as reliable as many expect. It readily fabricates false information, and its logic and its facts frequently go wrong.
  We also overlook a more basic fact: the essence of products like ChatGPT is language prediction. What does that mean? At bottom, the model makes a statistical prediction of which word humans tend to place after another. The raw material for this "statistics" comes from every corner of the Internet: vast, trillion-word troves of written text are the nutrients for its predictions, including the aforementioned news reports on Italy's ban of ChatGPT, the Future of Life Institute's open letter, and Musk's peddling of AI panic on social media.
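The idea of "statistically predicting the next word" can be sketched in deliberately toy form as a bigram counter. Everything here is invented for illustration (the corpus, the function names); real systems like ChatGPT use large neural networks rather than raw counts, but the underlying intuition is the same: the word that most often followed in the training text is the word that gets predicted.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny invented "training set"
corpus = [
    "artificial intelligence predicts the next word",
    "artificial intelligence imitates human text",
    "artificial intelligence predicts the next token",
]
model = train_bigram(corpus)
print(predict_next(model, "artificial"))  # most common follower: "intelligence"
print(predict_next(model, "the"))         # most common follower: "next"
```

Note how the prediction simply mirrors the frequencies in the corpus: the model has no understanding, only tallies of what humans tended to write.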
  This is how statistics shapes what artificial intelligence makes salient: the more we tend to talk about something, the more readily the AI exhibits that tendency. After all, this content is the nourishment on which it evolves.
  In other words, it does not evolve on its own. It is in imitating human beings that artificial intelligence moves towards its own future.
  If an intelligent life is to be born, it will come from nothing but our imagination, our delirium.
