Co-adapting Human Interfaces and Large Language Models
The rise of Large Language Models (LLMs) is changing how we access information. This article explores how the digital world is adapting to LLMs, blurring the line between 'agent' and 'environment'. The author uses code autocomplete as an example of humans adapting their behavior to work better with LLMs, for instance by adopting 'docstring-first programming'; this in turn leaves codebases more heavily commented, an instance of the environment adapting to the tool. To make LLMs more effective, the article argues for 'agent-computer interfaces' that translate human-oriented interfaces into formats LLMs handle better. The future, the author suggests, lies in designing interfaces specifically for LLMs rather than focusing solely on model improvements, a shift that will ultimately reshape human-computer interaction and give rise to new applications and content.
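
As a minimal sketch of the 'docstring-first programming' pattern mentioned above (the function name, signature, and task here are hypothetical illustrations, not taken from the article): the developer writes the signature, type hints, and a detailed docstring before any code, so an LLM autocomplete has enough context to propose the body, and the codebase ends up better documented as a side effect.

```python
from collections import Counter
import re


def top_keywords(text: str, n: int = 5) -> list[tuple[str, int]]:
    """Return the n most common words in `text`, ignoring case and punctuation.

    Docstring-first pattern: this description and the type hints are written
    before the body, giving an LLM autocomplete the context it needs to fill
    in the implementation below.

    Args:
        text: Arbitrary input text.
        n: Number of (word, count) pairs to return, most frequent first.
    """
    # A body like this is what the autocomplete model would typically suggest.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)


if __name__ == "__main__":
    print(top_keywords("LLMs read docs; docs guide LLMs. Docs win.", n=2))
```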