Odisha News, Odisha Latest news, Odisha Daily - OrissaPOST

ChatGPT, other language AIs nothing without humans; here’s why

PTI
Updated: August 19th, 2023, 19:13 IST
in Feature, Sci-Tech
AI or Artificial Intelligence (Pic: IANS)

Washington: The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity.

All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

ChatGPT can’t learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.

How ChatGPT works

Large language models like ChatGPT work, broadly, by predicting what characters, words and sentences should follow one another in sequence based on training data sets. In the case of ChatGPT, the training data set contains immense quantities of public text scraped from the internet.

Imagine I trained a language model on the following set of sentences:

Bears are large, furry animals. Bears have claws. Bears are secretly robots. Bears have noses. Bears are secretly robots. Bears sometimes eat fish. Bears are secretly robots.

The model would be more inclined to tell me that bears are secretly robots than anything else, because that sequence of words appears most frequently in its training data set. This is obviously a problem for models trained on fallible and inconsistent data sets – which is all of them, even academic literature.
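The frequency effect described above can be shown with a toy sketch. This is not ChatGPT's actual architecture (real models predict token by token with a neural network); it is a minimal frequency counter over the bear sentences, with a hypothetical helper name, just to show why the most common continuation in the training data wins:

```python
from collections import Counter

# The toy "training data set" from the example above.
training_text = (
    "Bears are large, furry animals. Bears have claws. "
    "Bears are secretly robots. Bears have noses. "
    "Bears are secretly robots. Bears sometimes eat fish. "
    "Bears are secretly robots."
)

def most_likely_continuation(prompt: str, text: str) -> str:
    """Return the sentence starting with `prompt` that appears most often."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    matches = Counter(s for s in sentences if s.startswith(prompt))
    return matches.most_common(1)[0][0]

# "Bears are secretly robots" appears three times, so it dominates.
print(most_likely_continuation("Bears are", training_text))
```

A model trained this way has no notion of truth, only of frequency, which is why inconsistent data sets are a problem.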

People write lots of different things about quantum physics, Joe Biden, healthy eating or the Jan. 6 insurrection, some more valid than others. How is the model supposed to know what to say about something, when people say lots of different things?

The need for feedback

This is where feedback comes in. If you use ChatGPT, you’ll notice that you have the option to rate responses as good or bad. If you rate them as bad, you’ll be asked to provide an example of what a good answer would contain. ChatGPT and other large language models learn what answers, what predicted sequences of text, are good and bad through feedback from users, the development team and contractors hired to label the output.
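One way to picture how such feedback steers a model: candidate answers carry human ratings, and the answer with the best rating history is preferred, even if it was more frequent in the raw training text. This is a hypothetical sketch with made-up names; production systems train a separate reward model on such labels rather than averaging them directly:

```python
# Hypothetical log of human ratings: +1 for "good", -1 for "bad".
feedback_log = {
    "Bears are secretly robots.": [-1, -1, -1],   # labelers flag it as bad
    "Bears are large, furry animals.": [+1, +1],  # labelers approve
}

def preferred_answer(candidates, log):
    """Pick the candidate with the highest average human rating."""
    def score(answer):
        ratings = log.get(answer, [0])  # unrated answers score neutral
        return sum(ratings) / len(ratings)
    return max(candidates, key=score)

# The human-approved answer wins despite being rarer in the training text.
print(preferred_answer(list(feedback_log), feedback_log))
```

The point of the sketch is that the ratings, and hence the human labor behind them, are what separate a "good" answer from a merely frequent one.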

ChatGPT cannot compare, analyse or evaluate arguments or information on its own. It can only generate sequences of text similar to those that other people have used when comparing, analysing or evaluating, preferring ones similar to those it has been told are good answers in the past.

Thus, when the model gives you a good answer, it’s drawing on a large amount of human labour that’s already gone into telling it what is and isn’t a good answer. There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage.

A recent investigation published by journalists in Time magazine revealed that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing writing, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to copy such content.

They were paid no more than USD 2 an hour, and many understandably reported experiencing psychological distress due to this work.

What ChatGPT can’t do

The importance of feedback can be seen directly in ChatGPT’s tendency to “hallucinate”; that is, confidently provide inaccurate answers. ChatGPT can’t give good answers on a topic without training, even if good information about that topic is widely available on the internet.

You can try this out yourself by asking ChatGPT about more and less obscure things. I’ve found it particularly effective to ask ChatGPT to summarise the plots of different fictional works because, it seems, the model has been more rigorously trained on nonfiction than fiction.

In my own testing, ChatGPT summarised the plot of J.R.R. Tolkien’s “The Lord of the Rings,” a very famous novel, with only a few mistakes. But its summaries of Gilbert and Sullivan’s “The Pirates of Penzance” and of Ursula K. Le Guin’s “The Left Hand of Darkness” – both slightly more niche but far from obscure – came close to playing Mad Libs with the character and place names. It doesn’t matter how good these works’ respective Wikipedia pages are. The model needs feedback, not just content.

Because large language models don’t actually understand or evaluate information, they depend on humans to do it for them. They are parasitic on human knowledge and labor. When new sources are added into their training data sets, they need new training on whether and how to build sentences based on those sources.

They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and only make statements consistent with it, or accurately summarize the plot of a movie. They rely on human beings to do all these things for them.

Then they paraphrase and remix what humans have said, and rely on yet more human beings to tell them whether they’ve paraphrased and remixed well. If the common wisdom on some topic changes – for example, whether salt is bad for your heart or whether early breast cancer screenings are useful – they will need to be extensively retrained to incorporate the new consensus.

Many people behind the curtain

In short, far from being the harbingers of totally independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers but on their users. So if ChatGPT gives you a good or useful answer about something, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what were good and bad answers.

Far from being an autonomous superintelligence, ChatGPT is, like all technologies, nothing without us.

By John P. Nelson, Postdoctoral Research Fellow in Ethics and Societal Implications of Artificial Intelligence, Georgia Institute of Technology

The Conversation

Tags: AI, ChatGPT, human



© 2024 All rights Reserved by OrissaPOST
