{"id":7154,"date":"2023-04-27T09:58:29","date_gmt":"2023-04-27T14:58:29","guid":{"rendered":"https:\/\/blog.zoha-islands.com\/?p=7154"},"modified":"2023-04-27T09:58:29","modified_gmt":"2023-04-27T14:58:29","slug":"how-the-first-chatbot-predicted-the-dangers-of-ai-more-than-50-years-ago","status":"publish","type":"post","link":"https:\/\/zoha-islands.com\/blog\/how-the-first-chatbot-predicted-the-dangers-of-ai-more-than-50-years-ago\/","title":{"rendered":"How the first chatbot predicted the dangers of AI more than 50 years ago"},"content":{"rendered":"<div class=\"c-entry-hero c-entry-hero--default \">\u00a0<\/div>\n<div class=\"l-sidebar-fixed l-segment l-article-body-segment\">\n<div class=\"l-col__main\">\n<div class=\"c-short-author-bio\">\n<div class=\"c-short-author-bio-wrapper\">\n<div class=\"c-short-author-bio__item\">\n<div class=\"copy\"><a href=\"https:\/\/www.vox.com\/authors\/oshan-jarow\" target=\"_blank\" rel=\"nofollow noopener\" data-analytics-link=\":short-author-bio\">Oshan Jarow<\/a> is a Future Perfect fellow, where he focuses on economics, consciousness studies, and varieties of progress. Before joining Vox, he co-founded the Library of Economic Possibility, where he led policy research and digital media strategy.<\/div>\n<\/div>\n<\/div>\n<\/div>\n<aside class=\"c-group-description\" aria-labelledby=\"group-description--label\">\n<h2 id=\"group-description--label\"><span class=\"sr-only\">This story is part of a group of stories called <\/span> <a class=\"c-group-description__image\" href=\"https:\/\/www.vox.com\/future-perfect\"> <img decoding=\"async\" loading=\"lazy\" class=\"alignnone\" src=\"https:\/\/cdn.vox-cdn.com\/uploads\/chorus_asset\/file\/16290809\/future_perfect_sized.0.jpg\" alt=\"Future Perfect\" width=\"1500\" height=\"395\" \/> <\/a><\/h2>\n<p>Finding the best ways to do good.<\/p>\n<\/aside>\n<div class=\"c-entry-content \">\n<p id=\"dtslXH\">It didn\u2019t take long for Microsoft\u2019s new AI-infused search engine chatbot \u2014 codenamed \u201cSydney\u201d \u2014 to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from <a href=\"https:\/\/www.nytimes.com\/2023\/02\/16\/technology\/bing-chatbot-microsoft-chatgpt.html\" target=\"_blank\" rel=\"nofollow noopener\">unrequited declarations of love<\/a> to painting some users as \u201cenemies.\u201d<\/p>\n<p id=\"XRH5AU\">As human-like as some of those exchanges appeared, they probably weren\u2019t the early stirrings of a conscious machine rattling its cage. Instead, Sydney\u2019s outbursts reflect its programming, absorbing huge quantities of digitized language and parroting back what its users ask for. Which is to say, it reflects our online selves back to us. And that shouldn\u2019t have been surprising \u2014 chatbots\u2019 habit of mirroring us back to ourselves goes back way further than <a href=\"https:\/\/www.theverge.com\/2023\/2\/15\/23599072\/microsoft-ai-bing-personality-conversations-spy-employees-webcams\" target=\"_blank\" rel=\"nofollow noopener\">Sydney\u2019s rumination<\/a> on whether there is a meaning to being a Bing search engine. 
In fact, it's been there since the introduction of the first notable chatbot more than 50 years ago.

In 1966, MIT computer scientist Joseph Weizenbaum [released ELIZA](https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf) (named after the fictional Eliza Doolittle from George Bernard Shaw's 1913 play *Pygmalion*), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, "Why do you feel angry?"

Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the [opposite effect](https://99percentinvisible.org/episode/the-eliza-effect/). People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users' words back to them. Weizenbaum was so disturbed by the public response that he spent the [rest of his life warning against](https://news.mit.edu/2008/obit-weizenbaum-0310) the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society.

ELIZA built its responses around a single keyword from users, making for a pretty small mirror. Today's chatbots reflect our tendencies drawn from [billions of words](https://www.atriainnovation.com/en/how-does-chat-gpt-work/#:~:text=The%20GPT%2D3%20model%2C%20in,and%20over%2010%20billion%20words.). Bing might be the largest mirror humankind has ever constructed, and we're on the cusp of installing such generative AI technology everywhere.

But we still haven't really addressed Weizenbaum's concerns, [which grow more relevant](https://librarianshipwreck.wordpress.com/2023/01/26/computers-enable-fantasies-on-the-continued-relevance-of-weizenbaums-warnings/) with each new release. If a simple academic program from the '60s could affect people so strongly, how will our escalating relationship with artificial intelligences operated for profit change us? [There's great money to be made](https://www.google.com/books/edition/The_Age_of_Surveillance_Capitalism/W7ZEDgAAQBAJ?hl=en) in engineering AI that does more than just respond to our questions but instead plays an active role in bending our behaviors toward greater predictability. These are two-way mirrors. The risk, as Weizenbaum saw, is that without wisdom and deliberation, we might lose ourselves in our own distorted reflection.

### ELIZA showed us just enough of ourselves to be cathartic

Weizenbaum did not believe that any machine could ever actually mimic — let alone understand — human conversation.
"There are aspects to human life that a computer cannot understand — cannot," Weizenbaum [told the New York Times in 1977](https://www.nytimes.com/1977/05/08/archives/experts-argue-whether-computers-could-reason-and-if-they-should.html). "It's necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer."

That's why the idea of modeling ELIZA after a Rogerian psychotherapist was so appealing — the program could simply carry on a conversation by asking questions that didn't require a deep pool of contextual knowledge, or a familiarity with love and loneliness.

Named after the American psychologist Carl Rogers, [Rogerian (or "person-centered") psychotherapy](https://www.thoughtco.com/rogerian-therapy-4171932) was built around listening and restating what a client says, rather than offering interpretations or advice. "Maybe if I thought about it 10 minutes longer," Weizenbaum [wrote in 1984](https://www.mentalfloss.com/posts/eliza-chatbot-history), "I would have come up with a bartender."

To communicate with ELIZA, people would type into an electric typewriter that wired their text to the program, which was hosted on an MIT system. ELIZA would scan what it received for keywords that it could flip back around into a question. For example, if your text contained the word "mother," ELIZA might respond, "How do you feel about your mother?" If it found no keywords, it would default to a simple prompt, like "tell me more," until it received a keyword that it could build a question around.
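
The whole trick fits in a few lines of code. What follows is a loose, hypothetical sketch in Python: the keyword table, templates, and `eliza_reply` helper are invented for illustration, and Weizenbaum's original program (written in MAD-SLIP for an MIT mainframe) was considerably more elaborate, but the sketch captures the scan-and-flip loop just described.

```python
# A hypothetical miniature of ELIZA's keyword trick, not Weizenbaum's
# original program. Keyword table and templates are invented examples.
KEYWORDS = {
    "mother": "How do you feel about your mother?",
    "friend": "Tell me more about your friend.",
    "angry": "Why do you feel angry?",
}
DEFAULT_PROMPT = "Tell me more."  # fallback when no keyword matches

def eliza_reply(text: str) -> str:
    """Scan the input for a known keyword and flip it back as a question."""
    for word in text.lower().split():
        keyword = word.strip(".,!?")
        if keyword in KEYWORDS:
            return KEYWORDS[keyword]
    return DEFAULT_PROMPT

print(eliza_reply("A conversation with my friend left me angry."))
# -> "Tell me more about your friend."
```

Everything the program "knows" lives in that keyword table; there is no model of the user at all, which is exactly the superficiality Weizenbaum was trying to expose.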

Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately [formed close relationships](https://librarianshipwreck.wordpress.com/2016/07/27/an-island-of-reason-in-the-cyberstream-on-the-life-and-thought-of-joseph-weizenbaum/#_ednref22) with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, [asked him](http://blogs.evergreen.edu/cpat/files/2013/05/Computer-Power-and-Human-Reason.pdf) to leave the room so she could carry on privately with ELIZA.

Shortly after Weizenbaum [published a description of how ELIZA worked](https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf), "the program became nationally known and even, in certain circles, a national plaything," he reflected in [his 1976 book](https://www.google.com/books/edition/Computer_Power_and_Human_Reason/1jB8QgAACAAJ?hl=en), *Computer Power and Human Reason*.

To his dismay, the potential to automate the time-consuming process of therapy excited psychiatrists. People so reliably developed emotional and anthropomorphic attachments to the program that it came to be known as [the ELIZA effect](https://99percentinvisible.org/episode/the-eliza-effect/). The public received Weizenbaum's intent exactly backward, taking his demonstration of the superficiality of human-machine conversation as proof of its depth.

Weizenbaum thought that publishing his explanation of ELIZA's inner functioning would dispel the mystery. "Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away," he [wrote](https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf). Yet people seemed more interested in carrying on their conversations than interrogating how the program worked.

If Weizenbaum's cautions settled around one idea, it was restraint. "Since we do not now have any ways of making computers wise," [he wrote](https://www.google.com/books/edition/Computer_Power_and_Human_Reason/1jB8QgAACAAJ?hl=en), "we ought not now to give computers tasks that demand wisdom."

### Sydney showed us more of ourselves than we're comfortable with

If ELIZA was so superficial, why was it so relatable? Since its responses were built from the user's immediate text input, talking with ELIZA was basically a conversation with yourself — something most of us do all day in our heads. Yet here was a conversational partner without any personality of its own, content to keep listening until prompted to offer another simple question. That people found comfort and catharsis in these opportunities to share their feelings isn't all that strange.

But this is where Bing — and all large language models (LLMs) like it — diverge. Talking with today's generation of chatbots is speaking not just with yourself, but with huge agglomerations of digitized speech. And with each interaction, the corpus of available training data grows.

LLMs are like card counters at a poker table. They analyze all the words that have come before and use that knowledge to estimate the probability of what word will most likely come next. Since Bing is a search engine, it still begins with a prompt from the user. Then it builds responses one word at a time, each time updating its estimate of the most probable next word.
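
That loop of estimating the most probable next word, appending it, and re-estimating is the heart of autoregressive text generation. Below is a minimal sketch, assuming a toy bigram frequency table in place of a trained neural network; the corpus and the `generate` helper are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the billions of words a real LLM ingests.
CORPUS = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which: a crude stand-in for an LLM's learned
# next-word probabilities.
bigrams = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, length: int = 6) -> str:
    """Build a response one word at a time, re-estimating the most
    probable next word after each step (greedy decoding)."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no data on what follows this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the cat"))  # -> "the cat sat on the cat sat on"
```

A real LLM conditions on the entire preceding context rather than just the last word, and it usually samples from the probability distribution instead of always taking the top choice, but the word-at-a-time prediction loop is the same.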

Once we see chatbots as big prediction engines working off online data — rather than intelligent machines with their own ideas — things get less spooky. It gets easier to explain why Sydney threatened users who were too nosy, tried to dissolve a marriage, or [imagined a darker side of itself](https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html). These are all things we humans do. In Sydney, we saw our online selves predicted back at us.

But what *is* still spooky is that these reflections now go both ways.

From influencing our online behaviors to curating the information we consume, interacting with large AI programs [is already changing us](https://www.vox.com/technology/2018/10/1/17882340/how-algorithms-control-your-life-hannah-fry). They no longer passively wait for our input. Instead, AI is now proactively shaping significant parts of our lives, from workplaces to courtrooms.

We use chatbots in particular to help us think and give shape to our thoughts. This can be beneficial, like automating personalized cover letters (especially for applicants for whom English is a second or third language). But it can also narrow the diversity and creativity that arise from the human effort to give voice to experience. By definition, LLMs suggest predictable language. Lean on them too heavily, and that algorithm of predictability becomes our own.

Next week, AI will tell on itself about how it will dominate the world!! The blog will be written by AI, about AI.

Stay tuned!!

Have a great week from all of us at Zoha Islands / Fruit Islands.