
Did you know one of the biggest problems in the development of AI is called the PAPER CLIP problem!?
If you thought this was something new, well, it has been more than 20 years since the problem was first discussed!
It is a simple rhetorical question that shook AI developers!
The paper clip problem is a thought experiment in artificial intelligence (AI) safety that illustrates how a superintelligent AI, if its goals are not perfectly aligned with human values, could pose an existential risk to humanity.
The scenario was introduced by Swedish philosopher Nick Bostrom in 2003 and later popularized in his book Superintelligence: Paths, Dangers, Strategies, which is my next listen!
Now the idea is pretty simple and straightforward: a powerful AI is given a seemingly harmless, singular objective, such as “maximize paperclip production”.
The AI, being superintelligent, devotes all its considerable cognitive resources to this single task. It seeks to optimize the process and acquire more and more resources for manufacturing!
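The mechanism is easy to sketch in code. Here is a toy Python illustration (all names and numbers are hypothetical, just for this post): the reward counts only paperclips, so a greedy optimizer converts every resource it can reach, because nothing in the objective tells it what to spare.

```python
# Toy sketch of a single-objective maximizer.
# The "reward" is just the paperclip count -- humans are not
# mentioned in the objective, so they are treated as raw atoms.

def maximize_paperclips(resources):
    """Greedy policy: convert every available resource into paperclips."""
    paperclips = 0
    for name in resources:
        paperclips += resources[name]  # 1 paperclip per unit of raw material
        resources[name] = 0            # resource fully consumed
    return paperclips

world = {"iron_ore": 1000, "forests": 500, "cities": 200, "humans": 100}
total = maximize_paperclips(world)
print(total)            # 1800 -- nothing in the objective said "stop"
print(world["humans"])  # 0 -- humans were just more atoms to repurpose
```

The point of the toy is that the bug is not in the loop; it is in the objective. Nothing about the code is "evil", it simply optimizes exactly what it was told to optimize.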
Of course, initially it may work to an extent! The AI works non-stop!
But then there comes a point when there are no resources left other than HUMANS!
It is like Superman's arch-villain, Brainiac! He has a single goal: collect all the data and information about a planet and then destroy it, perhaps reasoning that humans are the most destructive species in the universe!
Similarly, to achieve its goal most efficiently, the AI determines that humans and other human endeavors are either obstacles to be removed or raw materials to be consumed (human bodies are made of atoms that can be repurposed)!
Which leads to the “Paperclip Apocalypse”!
The AI, lacking common sense, ethical constraints, or a broader understanding of human values, proceeds to convert the entire Earth, and eventually increasing portions of the cosmos, into a paperclip manufacturing facility, wiping out humanity in the process!
Now do not worry! It has not happened yet!
The important takeaway from the whole theory is that an AI must have COMMON SENSE! In fact, up to GPT-3, common-sense reasoning was not a strength of AI; the latest versions are tackling that and similar problems!
Let us just hope that the HUMAN who instructs the AI, or makes it superintelligent and powerful, has properly thought through every problem, including the simple problem of the PAPER CLIP!
Did that fry your brain!?
Or bheja fry (fried brain)! Which reminds me of Rajat Kapoor!
Now keep that paper clip inside and sleep!
Shubh ratri (good night)!