AI Misses the Gorilla: LLMs Struggle with Exploratory Data Analysis

2025-02-08

A study showed that students given specific hypotheses to test were less likely to notice obvious anomalies in their data than students left to explore freely. The author then ran a similar test on large language models (LLMs), ChatGPT 4 and Claude 3.5, asking them to perform exploratory data analysis. Both models initially failed to spot clear patterns in the data they had plotted; only when the rendered visualizations were supplied as images did they detect the anomalies. This points to a limitation in LLMs' exploratory data analysis: a bias toward quantitative analysis over visual pattern recognition, which is both a strength (it sidesteps some human cognitive biases) and a weakness (it can miss crucial insights).
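
The two conditions described above (raw numbers as text vs. the rendered plot as an image) are easy to reproduce. The sketch below is illustrative only and assumes the OpenAI Python SDK, the model name "gpt-4o", a synthetic ring-shaped dataset, and made-up prompts; the article's actual data, models, and prompts are not given here.

```python
# Minimal sketch, assuming the openai Python SDK (>= 1.x), model "gpt-4o",
# and a synthetic dataset standing in for the study's "gorilla" pattern.
import base64
import io

import numpy as np
import matplotlib.pyplot as plt
from openai import OpenAI

rng = np.random.default_rng(0)

# Points arranged in a noisy ring: trivial to see in a scatter plot,
# easy to miss in summary statistics alone.
theta = rng.uniform(0, 2 * np.pi, 500)
x = np.cos(theta) + rng.normal(0, 0.05, 500)
y = np.sin(theta) + rng.normal(0, 0.05, 500)

client = OpenAI()

# Condition 1: hand the model only the raw numbers as text.
raw_prompt = (
    "Here are 500 (x, y) pairs. What do you notice about this dataset?\n"
    + "\n".join(f"{a:.3f},{b:.3f}" for a, b in zip(x, y))
)
text_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": raw_prompt}],
)

# Condition 2: render the scatter plot and hand the model the image.
fig, ax = plt.subplots()
ax.scatter(x, y, s=5)
buf = io.BytesIO()
fig.savefig(buf, format="png")
b64 = base64.b64encode(buf.getvalue()).decode()

image_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you notice in this scatter plot?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)

print(text_reply.choices[0].message.content)
print(image_reply.choices[0].message.content)
```

Comparing the two replies shows whether the model describes the visual structure only when it actually sees the picture, which is the gap the article highlights.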
