Will OpenAI make artificial intelligence safer?


The debate about the dangers of artificial intelligence has always existed, but no one can stop the development of AI. Elon Musk, CEO of Tesla and SpaceX, is one of those who has repeatedly expressed concerns about the misuse of artificial intelligence. Yet in 2015, Musk funded a non-profit artificial intelligence research lab called OpenAI. Without any doubt, Musk's intention is good: he wants to reduce the potential danger to a minimum. But will that work?

OpenAI says that its goal is to ‘advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return’. It also encourages its researchers to publish their work so that advances in artificial intelligence can be shared with the whole world. These efforts will, without any doubt, benefit the world and help artificial intelligence develop. But will they reduce the potential danger of artificial intelligence?

Every coin has two sides. Nuclear energy, for example, can be used to make nuclear bombs that threaten thousands of people’s lives, or to build nuclear power plants that generate electricity. The difference lies entirely in who is using it. Artificial intelligence is the same story: it can be used to improve humanity, or to destroy it. Once research on artificial intelligence is made public, anyone can use it, and no one can be sure what other people will do with technology that was designed to benefit them. Moreover, because many technical details are exposed to the public, criminals who could not otherwise access the technology can easily find what they want through OpenAI.

I believe that OpenAI will greatly help artificial intelligence develop, and its effort to make artificial intelligence open to the whole world is admirable. But if its goal is to prevent the malicious use of artificial intelligence, it needs to think of a better idea.