Tuesday, August 5, 2025

Book Report - AI Snake Oil

If you’ve ever wondered whether AI can really tell if someone is a good job candidate, spot criminals, or diagnose your depression, AI Snake Oil by Arvind Narayanan and Sayash Kapoor is a must-read. The authors argue that many of these uses of AI are built on shaky ground. While AI has made impressive strides in areas like creating images or translating speech, it often fails when it comes to predicting human behavior, something companies and governments heavily rely on. The book breaks AI down into three categories: perception, automating judgment, and social prediction, with a strong warning about the last category.

The book also exposes how many so-called “intelligent” systems are sold with little proof that they work, and how this blind trust can cause more harm than good. From biased hiring algorithms to flawed educational software that claims to detect cheating, the authors show how AI is often misused or misunderstood. What makes the book especially useful is that it’s written in plain language but doesn’t shy away from complex ideas. If you’re skeptical about how AI is being used in society, or if you haven’t questioned it yet, this will open your eyes.


My favorite part is the book’s deep dive into predictive AI: how models are sold and used to forecast social outcomes, from assessing employee performance to detecting when students cheat. The central metaphor of AI as “snake oil” is also compelling; it highlights how many AI tools are marketed as flawless solutions but often fail to deliver, leaving us with overhyped, underperforming, and poorly validated systems.


The book ties into the course through its critique of how these tools are often accepted without much thought, reflecting our tendency toward fast thinking, where we rely on intuition or trust in technology without analyzing evidence. The title AI Snake Oil captures how many of these tools are sold as cure-alls yet fail in practice. AI can spot patterns, but that doesn’t mean it understands the problem, and a model often absorbs the biases baked into the patterns it learns from; for example, an AI that predicts who gets hired by simply following past hiring patterns will reproduce past discrimination. This is where critical thinking becomes essential. Rather than blindly accepting AI systems, the book urges us to question their validity and evaluate their real-world performance. By shifting from fast, intuitive thinking to a more deliberate, evidence-based mindset, we can better understand the limitations of AI and make more responsible decisions about how it’s used in society.


TED Talk, “Self-Awareness and Critical Thinking in the Age of AI”: https://www.youtube.com/watch?v=pvWzQ1MmSns
