Do we need Rules of AI?

I’ve flirted a little bit with the ChatGPT and Midjourney artificial intelligence websites to see how these new tools work, with mixed success. They are interesting, and I can see how they can augment the creative process.

Of course, alarmists are concerned that they could replace the human aspect of the creative process altogether. The concern is that we will be flooded by innumerable Great American Novels or painted masterpieces generated by computers within a few seconds, subverting the time-honored tradition of spending weeks, months and years in the throes of creative fever.

I have an old-fashioned faith that people will continue to respond to the human element in their art, and we will be able to tell the difference between a book by, say, Mark Twain and a book produced by an AI that was prompted to write in the style of Samuel Clemens.

Be that as it may, some folks are worried about the Terminator scenario, in which an artificial intelligence named Skynet evolves to the point where it decides that humanity is the greatest threat to Planet Earth and therefore must be exterminated.

Perhaps it’s time to resurrect Asimov’s Laws of Robotics and insist that our AIs must abide by them:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

You have to admit, if these laws had been applied in the case of Sarah Connor and her son John, the whole James Cameron franchise would have been impossible, and Skynet would be a boon to mankind. Schwarzenegger's "I'll be back" machine would be much warmer and fuzzier.

It’s an interesting conversation.

By the way, I wrote this. 
