I understand how recent advances and the associated hype can be scary for people, especially since doomsday scenarios related to AI have been part of our popular culture for many decades. I also understand, to address one of Ben Y. Zhao's concerns, that my opinion might come across as that of one of those "dismissive insiders". However, I think there are at least three good reasons not to regulate AI.
AI is a fundamental technology:
Artificial Intelligence is a field of research and development. You can compare it to quantum mechanics, nanotechnology, biochemistry, nuclear energy, or even math, just to cite a few examples. Fundamental research fields or technologies should not be regulated. All of them could have scary or evil applications, but regulating them at the fundamental level would inevitably hinder advances, some of which could have a much more positive impact than we can envision now.
To put this into perspective, perhaps the only fundamental research field that is highly regulated today is medicine. Of course, medicine has been developing for centuries and has the intrinsic issue that it must be tested directly on humans, or at least on living creatures. Even taking that into account, it is widely accepted that strong regulation in medicine makes innovation extremely hard and costly. Because of this, the medical field as a whole is clearly years behind in its adoption of basic technological advances.
Therefore, AI as such should not be regulated. What should be heavily regulated is its use in dangerous applications, such as weapons.
It is way too early:
If you ask any expert today what should be regulated in AI, the answer would inevitably have to be "we don't know." If you take a look at the research being carried out on the topic at the Musk-funded Future of Life Institute, you will realize that all the projects are exploratory in nature (e.g. see this one about how to better estimate probabilities for self-driving cars) and most of them are in their infancy (see the description of this project on teaching Deep Learning about moral concepts).
Honestly, if we had to regulate AI any time soon, we would not know how to do it. What's even worse, we could let people with absolutely no understanding of the technology do it. If we connect this to the previous idea of AI being a fundamental technology, we have a recipe for disaster. It would be worse than letting governments regulate the Internet in the 80s would have been.
Regulate at What Level?
Ok, let's pretend the two previous reasons haven't convinced you and you still insist on regulating. My question would be: at what level would you do this? Would you want the US government to regulate AI research and deployment in general while other countries (including, perhaps, North Korea) freely continue to innovate and deploy their latest advances? Clearly not. I am guessing that people like Musk who propose regulation have not even thought about this. Or are they thinking of regulation at the UN level?
As far as I know, health, again, is the only example of regulation at such an international level. The World Health Organization managed to convince most countries in the world to sign the International Health Regulations (IHR) after over 40 years of work, and many countries, including the US, signed with reservations.
So, to summarize, AI should not be regulated because it is a fundamental technology, and at this point we would not know what to regulate or how to get enough international support to make regulation happen. To be fair to Musk and others, though, given that it is likely to take 50 years at best to get anything done, it might be ok to have a few loud voices pushing for it now. I just hope that those voices don't get heard too soon, leaving us in a situation where people with no clue about or understanding of the technology compromise innovation in such a key area of human development and deprive us of a better future.