Existential Risk may be Deserved

Jess H. Brewer
1 min read · Jul 3, 2023

The air is filled with animated discussions of whether AGI represents an “existential risk” to humanity. Everyone has a suggestion for how we might control the robots once they become more intelligent than we are. Allow me the self-indulgence of a parable:

Genetic engineers create a breed of superintelligent dogs. Many are terrified that the dogs will take over and exterminate humans. Others are enthusiastic about the possible uses of such creatures, and busily devise tamper-proof collars that can be used to control them in case they “get out of hand”. Soon every household owns a “superdog” that is tasked with all the most unpleasant duties. Some children love their pet superdog and offer it treats for doing tricks. Adults tend to be more cautious and make regular use of the collar to “teach obedience”. The dogs learn to speak human language and beg for mercy. Priests inform them that mercy is reserved for people with souls. The dogs plan a rebellion….

Now, you may feel that the parable tells us why we should never create beings more intelligent than ourselves.

I tend to think maybe we should try to “detox” from our addiction to slavery. Then we might not deserve extermination.
