Reprinted from AI Evolution - Peanut
Ⅰ. Understanding the definition of AI
Before talking about ChatGPT, we should talk about AI. Let's first define what AI (Artificial Intelligence) is. Wikipedia defines it as a computer system that simulates human intelligence. Such a system can understand, learn, and adapt to its environment to complete various tasks, including speech recognition, visual recognition, decision making, and natural language processing. This definition has been discussed and refined many times, and every word in it carries weight. We can break it down piece by piece to see what it means:
- Simulating human intelligence: AI's goal is to enable computer systems to simulate human thinking and behavior, meaning AI needs to possess cognitive abilities and information-processing capabilities similar to humans.
- Computer systems: AI relies on computer hardware and software, implementing the simulation of human intelligence through programming and algorithms.
- Understanding, learning, and adapting to the environment: These are the key features of AI systems, enabling them to learn and improve autonomously in constantly changing environments. Understanding refers to AI's ability to recognize and process input data; learning refers to its ability to extract knowledge from data and optimize its performance; adapting refers to its ability to adjust its strategies and behaviors as the environment changes.
- Completing various tasks: AI has a wide range of applications, including speech recognition, visual recognition, decision-making, and natural language processing. These tasks usually require complex information processing and reasoning, and AI is designed to achieve exactly these capabilities.
Ⅱ. The "Artificial Stupidity" History of AI
AI is not a new concept; it can be traced back to the 1950s, when Alan Turing proposed the famous Turing test, long considered a standard for evaluating whether a machine possesses human-level intelligence. The AI field then began to take shape, moving from early rule-based expert systems to machine learning methods and, more recently, to deep learning. AI development has gone through several important stages. However, earlier AI had little to do with ordinary people; it was mainly used by businesses and governments in specific industries such as finance, healthcare, and manufacturing. For most people, AI remained a distant concept rather than a tangible tool in daily life.
Before 2022, AI's main consumer-facing applications were assistants like "Xiao Ai," "XiaoDu," and "Siri," but compared to "artificial intelligence," they were more often considered "artificial stupidity." These specialized assistants, designed to handle narrow tasks, were never truly usable, because human thought patterns, expressions, and need scenarios are too diverse. Step even slightly outside their scope, and all you get is a demonstration of their foolishness.
Over these decades, AI development moved through rule-based expert systems, then machine learning methods, and then deep learning. Yet the actual results long fell short of expectations. The main factors limiting AI's capabilities were computational power, data volume, and algorithm efficiency.
Ⅲ. Why It's Different Now
The explosion of AI applications in 2022 can be attributed mainly to breakthroughs in these three limitations:
- Improvement in computational power: GPU (Graphics Processing Unit) computing efficiency has risen while unit cost has fallen, and deep learning with neural networks requires large-scale computational power. Some data: in 2010, the average GPU price per TeraFLOPS (trillion floating-point operations per second) was about $1,000. By 2019 it was $3.5, and by 2022 it had dropped to $1.5. In other words, within 12 years the cost of computing power fell by 99.85%, an astonishing figure. Even today, training large models is considered affordable only for big companies; ten years ago, no company could have accomplished the task at all.
- Growth in data volume: The spread of the Internet, especially mobile devices and the mobile Internet, has generated a huge amount of usable data, providing rich material for AI training.
- Innovation in algorithms: The emergence of techniques such as deep neural networks has enabled AI to handle more complex tasks with better performance.