Chapter 1: AI Basics
Johns Hopkins University Press
PART I Thinking with AI

Chapter 1: AI Basics

AI is one of the most important things humanity is working on. It is more profound than electricity or fire.
—Sundar Pichai, CEO of Google and Alphabet

If you were busy on November 30, 2022, you might have missed the early demo that OpenAI released of its new chatbot. But after five days, more than a million people had tried it. It reached 100 million daily active users in two months. It took TikTok nine months and Instagram two and a half years to reach that milestone (Hu, 2023).

But bigger than fire? Sundar Pichai has made this comparison repeatedly, but few of us were listening when he said it publicly in January 2016 (when he also admitted he did not really know how AI worked). Fire, like other human technological achievements, has been a double-edged sword: a source of destruction and change as well as an accelerant to advancements. AI is already on a similar trajectory.

Most of us have heard of AI; some may even remember when a computer beat the chess world champion, but that was a different sort of AI. As happens in many fields (think mRNA vaccines), research over decades takes a turn or finds a new application, and a technology that has been evolving over years suddenly bursts into public awareness. For centuries, humans looked for easy ways to rekindle fire in the middle of the night. Early chemical versions, from Robert Boyle (in the 1680s) to Jean Chancel (1805), were expensive, dangerous, or both, and never made it to mass production. Then chemist John Walker accidentally discovered (in 1826) that friction could make the process safe and cheap.
Like matches, seventy years of scholarly work in AI helped create the recent explosion of awareness, but in a GPT flicker, the world has changed.

Expert Systems vs. Machine Learning

The term artificial intelligence (AI) was coined in 1956 at a conference sponsored by the Defense Advanced Research Projects Agency (DARPA) to imagine, study, and create machines capable of performing tasks that typically require human cognition and intelligence. (We've highlighted key terms when they are first defined and summarized them in the sidebar glossaries.)

Early AI research focused on logic or expert systems that used rules designed to anticipate a wide range of possible scenarios. These systems don't improve with more iterations. This is how robots and AI are still often portrayed in stories. Even in Star Trek, the Emergency Medical Hologram is constantly limited by his programming.

IBM pioneer Arthur Samuel (1959) coined the term machine learning to describe statistical algorithms that could generalize and "learn to play a better game of checkers than can be played by the person who wrote the program." For a simple game like checkers, it was possible to develop an expert system that could search a database but also make inferences beyond existing solutions. Samuel's checkers program, however, was "given only the rules of the game, a sense of direction, and a redundant and incomplete list of parameters which are thought to have something to do with the game, but whose correct signs and relative weights are unknown and unspecified" (Samuel, 1959).

Expert systems (and their logical reasoning) initially dominated research, but machine learning (with its probabilistic reasoning) was more useful in recognizing patterns; it became a more central part of AI research in the 1990s (Langley, 2011).
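The contrast above can be made concrete with a minimal sketch. This toy example is purely illustrative (the function names and the tiny "diagnosis" scenario are invented here, not drawn from the chapter): the first function is an expert system whose behavior is fixed by hand-written rules and never improves, while the second tunes a single parameter from examples, in the spirit of Samuel's program adjusting weights that were "unknown and unspecified" at the start.

```python
# Expert system: behavior is frozen in hand-written rules.
# No amount of new data will ever change what it does.
def expert_system_diagnose(temperature_f, has_cough):
    if temperature_f > 100.4 and has_cough:
        return "flu"
    if temperature_f > 100.4:
        return "fever"
    return "healthy"

# Machine learning: a parameter starts as a guess and is nudged
# toward whatever value best fits the examples it sees.
def train_threshold(examples, lr=0.05, epochs=200):
    """Learn a temperature threshold separating 'sick' from 'healthy'."""
    threshold = 98.0  # arbitrary starting guess
    for _ in range(epochs):
        for temp, is_sick in examples:
            predicted_sick = temp > threshold
            if predicted_sick and not is_sick:
                threshold += lr   # too eager: raise the bar
            elif not predicted_sick and is_sick:
                threshold -= lr   # too cautious: lower the bar
    return threshold

# Invented training data: (temperature, was actually sick).
data = [(98.2, False), (98.9, False), (101.3, True), (102.0, True), (100.8, True)]
learned = train_threshold(data)
```

Run on different data, the second function would settle on a different threshold; the first would behave identically forever. That, in miniature, is the difference between the two paradigms.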
With more memory and larger datasets, statistical algorithms were able to deduce medical diagnoses (WebMD, for example) and eventually led to IBM's Deep Blue chess program beating chess champion Garry Kasparov in 1997.

Machine Learning + Neural Networks = Foundational Models

Neural networks are a specific type of machine learning model: computing systems in which nodes (individual computational units) are organized into layers, mirroring our understanding of the neural connections in the human brain. In the 1960s and '70s, networks were logical and...
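The layered structure just described can be sketched in a few lines. This is a hypothetical illustration, not code from any production framework: each node in a layer weighs every input, adds a bias, and passes the total through a nonlinear "activation," loosely as a neuron fires once its inputs cross a threshold.

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), a common activation.
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of nodes: each node combines all inputs with its own weights."""
    return [
        sigmoid(sum(w * x for w, x in zip(node_weights, inputs)) + b)
        for node_weights, b in zip(weights, biases)
    ]

# Two input values feeding a layer of three nodes (weights chosen arbitrarily).
# Stacking such layers, and learning the weights from data, gives the
# multi-layer networks the chapter goes on to describe.
hidden = layer([0.5, -1.0],
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.5, 0.5]],
               biases=[0.0, 0.1, -0.2])
```

In a real network the weights and biases are not hand-picked as they are here; they are the parameters that machine learning adjusts, which is what makes the combination of the two ideas so powerful.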