This article focuses on the period when machines actually began to interact with humans, around 1950. Beyond references specific to fiction, it explores in detail the notion of the robot and traces its progress up to the present day, a time in which, like it or not, bots have become essential to certain daily tasks. A bot is an automated technology programmed to perform certain tasks without human intervention. In other words, a human being may initiate an action, but the bot is able to carry it out alone. Bots are a form of artificial intelligence, a technology capable of reproducing faculties once thought unique to human beings, including speaking, seeing, learning, socializing and reasoning. Thus, many bots that are now part of our daily lives, such as Siri and the
Google Assistant, are equipped with artificial intelligence that allows them to form responses much as a human interlocutor would. A large part of the population regularly uses these virtual assistants: one study, for example, shows that 55% of people who own a cell phone use a voice assistant every day or every week. They have become ordinary solutions to various needs, notably in the area of customer service.

A brief overview of the origin of bots

Bots seem like a recent phenomenon, but in reality they are built on more than 70 years of work. In 1950, the computer scientist and mathematician Alan Turing devised an eponymous test, also known as the imitation game. In its most basic format, it required three participants (A, B and C). Participants A and B were a machine and a human, respectively.
Participant C, also human, acted as an interrogator and typed questions into a computer. Participant C received responses from participants A and B; the objective was for participant C to determine which of the two interlocutors was human.

[Image by Bilby, own work, public domain]

The method had a problem, however. At the time, databases were extremely limited and could store only a limited number of expressions. As a result, the computer ended up running out of answers to give participant C, which hampered the exercise and cut the test short.

A test still relevant, but a subject of controversy

In 2014, the University of Reading organized a Turing test in which a panel of 30 judges played the role of participant C.
If the machine managed to persuade more than 30% of the judges that its answers came from a human being, the test would be considered a success. That is exactly what happened: an artificial intelligence program dubbed Eugene Goostman, posing as a 13-year-old boy from Ukraine, convinced 33% of the judges. According to the university, this was the first time an AI had passed the Turing test. These results garnered both praise and criticism and made headlines in many media outlets. Many were skeptical of Eugene Goostman's abilities, questioning its supposed superiority over more rudimentary forms of AI. In any case, Alan Turing rightly remains a pioneer of artificial intelligence, since his work triggered the succession of events that led to the
AI that we know today.

A few years later, in 1956, the Dartmouth AI conference was held at the initiative of John McCarthy, the mathematics professor who coined the term artificial intelligence. Following this conference, the university made AI one of its research disciplines. In 1958, while teaching at MIT, John McCarthy developed the LISP programming language, which became a benchmark in AI and which some practitioners still use today. Many industry figures, including the computer scientist Alan Kay, describe LISP as the best programming language ever invented. Another milestone was ELIZA, a chatbot designed in 1966 by Joseph Weizenbaum, a professor at MIT. The technology was limited, to say the least, as was its vocabulary. Weizenbaum nonetheless knew that progress was still possible, even comparing ELIZA to a person "with limited language skills, but very attentive".
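To give a sense of how an ELIZA-style chatbot works, here is a minimal sketch in Python. This is illustrative only, not Weizenbaum's original implementation: the bot matches the user's input against a small list of keyword patterns (the patterns and templates below are invented for the example) and reflects fragments of the input back as a question, falling back to a stock phrase when nothing matches.

```python
import re

# Illustrative ELIZA-style rules: each pattern captures a fragment of the
# user's input, which is echoed back inside a templated question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    """Return the response of the first matching rule, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I need a holiday"))  # Why do you need a holiday?
print(respond("Hello there"))       # Please, go on.
```

The trick, as in the original ELIZA, is that the bot has no understanding of the conversation at all: it creates the illusion of attentiveness purely by turning the user's own words back into questions.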
These early inventions made it clear that human beings desire to interact with technology the same way they communicate with each other, but at the time, technological knowledge was not sufficient to achieve this. To this constraint was added the 1966 ALPAC report, which expressed great skepticism towards machine translation and recommended withdrawing public funding from that line of research. Many, in view of the meager progress made during the rest of the decade, criticize the publication of this report in the United States for having held back any major progress in the field until the 1970s. Note all the same, among the few developments of the late 1960s, one invention from the Stanford Research Institute: