Talk at Code for America: "How not to think like a machine"
Online talk on philosophy and AI ethics given to Code for America
10/30/2025 · 1 min read
I gave an online talk at Code for America (https://codeforamerica.org/) on philosophical and practical issues related to the use of AI. Code for America is a nonprofit civic tech organization helping to bring digital tools and services to government and communities.
One important lesson involved the perils of letting AI do certain kinds of thinking that are better left to humans.
Topics covered included:
The prediction error coding framework in cognitive science and the relation between biological and artificial neural networks. AI may fail to think like humans for a variety of reasons, but "merely" predicting the next token is not one of them: humans are also, plausibly, prediction machines.
The "algorithmic leviathan", as proposed by the philosopher Kathleen Creel: https://kathleenacreel.com/research.html#leviathan
The case of COMPAS, the controversial recidivism prediction model. Model accuracy and fairness may sometimes be at odds; a toy illustration follows this list. Relevant ProPublica article: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
The distinction between normative and descriptive claims, and Hume's is-ought gap. Even if AI can, at best, produce accurate descriptions of the world, we should be more cautious about the normative claims it produces.
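To make the accuracy-fairness tension concrete, here is a minimal Python sketch using small, made-up labels (not the actual COMPAS data or ProPublica's analysis): two groups can face the same overall accuracy while one of them absorbs a much higher false positive rate.

# Hypothetical illustration: equal overall accuracy, unequal false positive rates.
def accuracy(y_true, y_pred):
    # Share of cases the model classifies correctly.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred):
    # Share of truly negative cases (no reoffense) flagged as high risk.
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives)

# Made-up labels: 1 = reoffended / flagged high risk, 0 = otherwise.
group_a_true = [1, 1, 1, 0, 0, 0, 0, 0]
group_a_pred = [1, 1, 0, 1, 1, 0, 0, 0]   # errs toward false positives
group_b_true = [1, 1, 1, 0, 0, 0, 0, 0]
group_b_pred = [1, 0, 0, 0, 0, 0, 0, 1]   # errs toward false negatives

print(accuracy(group_a_true, group_a_pred),             # 0.625
      accuracy(group_b_true, group_b_pred))             # 0.625
print(false_positive_rate(group_a_true, group_a_pred),  # 0.4
      false_positive_rate(group_b_true, group_b_pred))  # 0.2

Both groups see the same overall accuracy (0.625), yet group A's false positive rate is twice group B's, roughly the shape of the disparity ProPublica reported in its COMPAS analysis.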