5 Comments
Founders Radar:

The dream of controlling the world with words is a long-standing one, and we’ve come a long way from ancient mythologies to modern-day AI. But as you mention, even with all the advances in AI agents, the limitations of natural language are still significant.

Stan Rosenschein:

The article asks the right questions, but I’d encourage the author to delve more deeply into the research literature, especially technical literature, about perception, action, and language in complex environments. No serious AI researcher would base conclusions on the limitations of traditional programming languages, loose intuitions about human abilities, or current commercial products. Instead they would propose mathematical models of agency, explore their formal properties, and conduct carefully designed empirical studies.

Jamie Freestone:

Thanks for reading. Yes, well, the proponents of agents are generally tech founders and employees. The serious AI researchers I work with, and those whom I contacted, are also highly sceptical of the hype around language-prompted general-purpose agents.

As for the technical literature, it depends on what you're referring to, of course, but in my research there is a big difference between (a) those who see language or other information sources as "containing" content, and who therefore believe we can build systems that extract the semantics from environmental data, and (b) those who see language and other information sources as closer to Shannon's view, on which the content of a signal depends entirely on the receiver's prior knowledge and calibration.

In this latter view, which I entirely endorse, much of the "semantics" that agents or robots are trying to detect or extract is found not in the present environment but in past interactions between (human) agents. Most people, from any discipline, tend to project this kind of historical information onto their current perceptions. That is because of how human perception works: it is prediction-heavy, and this fact is opaque to introspection. The attempt to create AIs that perceive their environment in the way that we think we do is therefore futile, because we mis-perceive how we perceive. I know all this is a philosopher's complaint, but I've been encouraged by engineers and computer scientists (who aren't on the payroll of tech companies) who agree with it in principle.

Kaiser Basileus:

AI agents are only useful to the extent that you can trust them with your secrets, and since they will always be controlled by extractive big business, they will never be trustworthy.

Jamie Freestone:

100%
