No Frankenstein’s Monster, Please — Open Letter To AI Empires

Petition signed by Elon Musk, Yuval Noah Harari, etc. calls on Big Tech to immediately halt AI experiments
An open letter signed by more than 22,000 influential people has called upon technology companies to immediately halt AI-related experiments and review their potential negative impact on society (Credit: Pixabay)

A Special Report

April 13, 2023: In the early years of the 20th century, when technological advancement was picking up pace, English author Aldous Huxley depicted a dystopian society in his landmark 1932 book Brave New World. The novel delivered a prophetic warning to a world growing increasingly charmed by technological progress.

Huxley’s cryptic message was disturbing: by nurturing genetic technology, the species Homo sapiens might someday end up creating a monstrous superhuman.

Since the book hit the stands, 91 years have elapsed. From genetic technology to AI (artificial intelligence), it has been a long and complicated journey for science and technology. But the warnings given by the writer-philosopher now seem to be resonating louder than ever before, with a large and influential section of the global science, tech, and academic community calling for an immediate end to modern-day Big Tech empires’ imperialist overreach.


A STERN OPEN LETTER

With AI technology advancing at breakneck speed, an open letter has been signed and sent out by top names across tech and cultural circles, calling for an immediate suspension of all major AI experiments for at least six months.

The public letter came against the backdrop of the development of GPT-4, the latest version of the AI model that powers ChatGPT, an AI-based chatbot. ChatGPT is a computer programme that allows humans to interact with digital devices as if they were communicating with a real person.

Launched on November 30, 2022 by San Francisco startup OpenAI, in which tech giant Microsoft has made huge investments, the chatbot reached 100 million monthly active users within two months. The revolutionary feature of ChatGPT is that it is trained to learn what humans essentially mean when they ask a question.

SUPERHUMAN POWERS

Less than four and a half months into its existence, ChatGPT has become a global talking point, with media analysts, technology experts, and next-door neighbours calling it a game-changer for its bewildering capabilities. The awestruck world’s fixation with what has been described as the latest technological marvel hit a further high with the launch of its successor model, GPT-4, on March 14.

In a research paper on an early version of GPT-4, posted on the Cornell University-hosted preprint server arXiv, Microsoft researchers noted that the model was trained using an unprecedented scale of compute and data. They said that “beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting.

“Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT,” the researchers noted.

PROBLEMS WITH THE NEW TECHNOLOGY

On the flip side, ChatGPT has come under growing criticism for its potential to take over many human tasks, which may have disastrous consequences for the working class. For example, educationists have claimed that, with ChatGPT now at students’ disposal, schools will have to think twice before setting homework. There are fears that ChatGPT could take away millions of white-collar jobs in at least 10 sectors.

However, as Empire Diaries recently pointed out in a special report, it’s technically wrong to blame ChatGPT itself for sparking sackings. To put it in a reductionist way, ChatGPT is just a lifeless technology that by itself can’t sack anybody. Such decisions are, and will continue to be, taken by ruthless corporate tsars, who use tech tools such as ChatGPT as a convenient excuse to fire employees.

GPT-4 has also been drawing flak across the world for peddling misinformation and toxic content, which OpenAI admitted in its own technical report. The Silicon Valley company revealed that an early version of GPT-4, when prompted by user queries, could advise users on how to kill people, generate antisemitic content, and even issue gang-rape threats. OpenAI said such behaviour was corrected in the version launched for mass use.

The market for generative AI, a technology that can create content from scratch, is rapidly becoming crowded amid intense competition between tech moguls. To take on OpenAI’s ChatGPT, Alphabet’s Google has developed a rival chatbot called Bard, powered by its Language Model for Dialogue Applications (LaMDA); Microsoft has drastically upgraded its search engine, Bing, with AI-powered chat; DeepMind, another Alphabet subsidiary, has designed a chatbot called Sparrow; and Open Pretrained Transformer, or OPT, is Meta’s answer to GPT.

WHY THE LETTER IS SO RELEVANT

It is in this scenario that the online letter, organised by the nonprofit Future of Life Institute, has called on all artificial intelligence labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.
The signatories to the open letter argue that “systems with human-competitive intelligence can pose profound risks to society and humanity”.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the signatories wrote. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

They raised a number of questions, underscoring the potentially harmful impact of fast-improving AI-based applications. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?” the signatories asked.

Recommending that powerful AI systems be developed only once there is confidence that their effects will be positive and their risks manageable, the signatories said the proposed pause in the training of AI systems should be public, verifiable, and comprehensive, and must be used to jointly develop and implement “a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems, the signatories advised in the open letter.

YUVAL NOAH HARARI, ELON MUSK, ETC.

More than 22,000 people have already signed the letter, and the number is expected to go up. The signatories include Elon Musk, CEO of SpaceX, Tesla, and Twitter; Apple co-founder Steve Wozniak; noted author Yuval Noah Harari; and Yoshua Bengio, Turing Award winner and founder and scientific director of the Montreal Institute for Learning Algorithms.

Also among those who have signed the letter are Emad Mostaque, CEO of Stability AI; bestselling author Andrew Yang; John Hopfield, the inventor of associative neural networks; Connor Leahy, CEO of Conjecture; award-winning children’s book author Kate Jerome; Skype co-founder Jaan Tallinn; co-founder of Pinterest Evan Sharp; and Craig Peters, Getty Images CEO.

Many other well-placed names at the University of California, Berkeley, Oxford, New York University, Harvard, Cambridge, and other institutions have also lent their support to the initiative.

Interestingly, days after the letter began circulating and sparked a buzz, Italy became the first western nation to ban ChatGPT, forcing OpenAI to take it offline in the country.

ITALY CRACKS DOWN

Italy’s privacy regulator, the Garante, launched an inquiry into the AI-powered chatbot’s alleged breach of privacy rules, accusing OpenAI of failing to put in place a proper system for verifying that users are 13 years or older.

The government agency came down on the AI application’s owner following a nine-hour cyber security breach in March that exposed excerpts of other users’ ChatGPT conversations and even some of their financial information.

ChatGPT has an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot, Garante said.

Apart from Italy, the chatbot is also officially blocked in Russia, mainland China, Hong Kong, Cuba, Iran, Syria, and North Korea.

Amid the worldwide excitement, suspense, fear, and apprehension over futuristic AI applications, it’s worth recalling the caution sounded by celebrated English physicist Stephen Hawking in the last years before his demise in 2018.

The author of A Brief History of Time feared that the development of artificial intelligence could spell the end of the human race. “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, can’t compete and would be superseded,” he warned.

