AI policy work

I work in an Australian context, where my research has been focused on the limitations of autonomous systems and the threats to democracy posed by certain technologies. Here are some posts about my work in this area and links to external work.

Contents

  1. AI-generated addictive content (“digital opium”)

  2. Threats to democracy

  3. Limitations of language-prompted systems (LLMs & agents)

  4. Autonomous systems among humans (robots, sleepwalkers, & self-driving cars)

  5. AI in film & TV

AI-GENERATED ADDICTIVE CONTENT

Lots of content on smartphones is already addictive for some people: online games, infinite scroll, porn. The development of cheap AI-generated content (especially video) promises to ratchet up the addictiveness. AI companies are already launching platforms solely for this content: OpenAI’s Sora app and Meta’s Vibes feed. With increased personalisation, users could be hooked on content that is unprecedentedly good at hijacking their reward systems: a veritable digital opium. While studying AI governance with ENAIS this spring, I wrote an up-to-the-minute report on the emerging phenomenon, grounding it in the science of addiction.

Here is the policy document itself, the first in the world. It has been shared with the Office of the eSafety Commissioner and I hope it can feed into the new social media minimum age laws about to start in Australia.

Here’s the Substack post announcing it:

THREATS TO DEMOCRACY

Personally, I think the gravest imminent threats posed by AI are (1) compounding pre-existing threats to cybersecurity, biosecurity, and nuclear security; and (2) undermining democracy. The first is a little outside my expertise, although I’m working on it. The second concerns me as both a researcher and a citizen. Democracy is not only worth protecting in its own right; future problems, including those posed by AI, will also be better tackled with strong democratic institutions.

One overlooked frontier is the expansion of surveillance powers by intelligence organisations. The background is the ever-closer collaboration between Big Tech and national security, as the likes of ASIO (in Australia) try to gain access to the intimate data those platforms hold. ASIO want backdoors to all tech platforms. Meanwhile, they’re also trying to make permanent the emergency powers granted to them in the wake of 9/11, which include extraordinary abilities to hack, delete, and alter data on any device. Naturally, I want ASIO to be able to track and detain terrorists; they’re a crucial part of the fight against the AI-enabled attacks that are sure to proliferate. But I personally think that annihilating citizens’ privacy is not worth it, and that it is a step towards a surveillance state. Australia should be a role model here for other democracies; these proposed laws would make us a leader only in overreach.

LIMITATIONS OF LANGUAGE-PROMPTED SYSTEMS

My main research, at the School of Engineering, ANU, is into the limitations of certain autonomous systems: LLMs, agents, household robots, and self-driving cars. As a philosopher, I import insights from the study of human cognition and its limitations to focus on points that have been overlooked.

One example is how natural language is not a neutral medium of information exchange: it is highly ambiguous and context-dependent. In short, not all the information you need to act on an utterance is in the utterance itself. This makes natural language a poor medium for programming machines; that is why we invented programming languages instead. Followed through, this observation not only explains both the remarkable abilities and the limitations of LLMs, it also dooms many of the hoped-for applications of AI agents.
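
To make this concrete, here is a minimal, hypothetical sketch in Python (not drawn from any of my papers): the same everyday request supports several incompatible formal interpretations, and nothing in the words selects between them. The request, the candidate readings, and the `schedule` structure are all invented for illustration.

```python
# Hypothetical illustration: one utterance, several incompatible formal readings.
from datetime import datetime, timedelta

schedule = {"team meeting": datetime(2025, 6, 3, 15, 0)}  # Tuesday, 3 pm

utterance = "Can you move the meeting up?"

# Each of these is a reasonable reading of the same sentence; the words alone
# do not say which one the speaker intends.
interpretations = {
    "earlier the same day": lambda t: t - timedelta(hours=1),
    "'up' as in later in the day": lambda t: t + timedelta(hours=1),
    "same slot, bumped to next week": lambda t: t + timedelta(weeks=1),
}

for reading, action in interpretations.items():
    print(f"{reading}: {action(schedule['team meeting'])}")

# A programming language forces the ambiguity to be resolved up front:
schedule["team meeting"] -= timedelta(hours=1)
print("resolved in code:", schedule["team meeting"])
```

The last line is only unambiguous because the programmer supplied the context the utterance lacked, which is the point in miniature.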

For the Australian Army Research Centre, I delivered a report (soon to be published) on the risks inherent in using LLMs and agents in a military setting. The irony here is that the military has already developed techniques, over centuries, to cope with the ambiguity and context-dependency of natural language when it’s used by humans! Here’s a brief explainer ahead of the full paper.

Here’s my Substack post about the general phenomenon and why butler-style agents are a pipe dream.

AUTONOMOUS SYSTEMS AMONG HUMANS

Then there’s the dream of butler-style household robots. The investment in this pipe dream is stupendous. No doubt humanoid robots will soon perform many jobs in various industries. But the all-purpose household robot faces many obstacles. Some of them are technical and may one day be solved with significant breakthroughs: world models, continual learning, fleet learning, etc. Then there’s “semantics”. Roboticists want their machines not only to recognise an object, using geometry and computer vision, but to know what the object is for. I call it the “ornamental mug problem”. Current robots can sometimes identify an object as a mug, pour coffee into it, and stack it in the dishwasher; they cannot know it is an ornamental mug that is not to be washed or drunk out of. In its current form, the problem is unsolvable. The “semantic” information (the mug’s use in this particular household) is not recoverable from the current environmental data, regardless of how good one’s sensor fusion is. Here is the paper, soon to be published in ACM Transactions on Human–Robot Interaction.
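
For a rough, hypothetical sketch of that gap (the class names and fields below are invented for illustration, not taken from the paper): everything perception can plausibly deliver about the mug sits on one side, while the fact that determines the correct action sits on the other, in information no sensor observes.

```python
# Hypothetical sketch of the "ornamental mug problem": perception yields geometry
# and a class label; the household-specific rule is simply not in the scene.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Percept:
    """What sensor fusion can plausibly recover from the environment."""
    object_class: str            # e.g. "mug", from a vision model
    pose: tuple                  # position and orientation in the room
    dishwasher_safe_shape: bool  # inferable from geometry/material, at best

@dataclass
class HouseholdConvention:
    """What is not in the environmental data: how this household uses the object."""
    ornamental: bool
    may_be_washed: bool
    may_be_drunk_from: bool

mug_percept = Percept(object_class="mug", pose=(1.2, 0.4, 0.9), dishwasher_safe_shape=True)

# No amount of better sensing turns mug_percept into this:
grandmas_mug = HouseholdConvention(ornamental=True, may_be_washed=False, may_be_drunk_from=False)

def plan_action(percept: Percept, convention: Optional[HouseholdConvention]) -> str:
    if convention is None:
        # Perception alone: the robot acts on the class label.
        return "pour coffee and load into dishwasher"
    if convention.ornamental:
        return "leave it on the shelf"
    return "pour coffee and load into dishwasher"

print(plan_action(mug_percept, None))          # what a current robot would do
print(plan_action(mug_percept, grandmas_mug))  # what this household actually wants
```

The only way to populate something like `HouseholdConvention` is to be told, household by household; that is exactly the information that cannot be read off the scene.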

Self-driving cars. Their rollout has been so belated owing to another quirk of human cognition. We deem agents responsible to the extent they are punishable (not merely competent, as is often supposed). And we trust agents only if they have social emotions like guilt, compassion, and shame. To be punishable, one must have feelings, so that we feel the punishment imposes a cost (we don’t punish inanimate objects). And we only trust people who punish themselves if they defect from a deal (we don’t trust psychopaths). The subtlety here is that when an agent is punishable and trustworthy, we grant them leeway. We don’t expect them to be infallible, because we know they will hold themselves to account and, if they really screw up, we know we can punish them and thereby recover something from the disaster. Self-driving cars cannot be punished because they don’t feel anything. They cannot be trusted because they don’t experience guilt. And so we require of them more than the good-enough, honest effort we accept from an agent; we require the near-perfect infallibility of a well-made tool, like a bridge or an aeroplane. Self-driving cars are nearing this level, but many boosters assumed that merely surpassing human safety levels would see them welcomed onto our public roads.

I don’t have any papers on self-driving cars, but I do discuss this in what is my personal favourite post on The Stark Way, and briefly in my recent paper with Alex Somlyay on sleepwalking (preprint). And I’ve lectured engineering students on this as part of the ethics of autonomous vehicles (slideshow), an area that mainly focuses on various trolley problems when the real problems lie in human psychology.

AI IN FILM & TV

I moonlight in the world of film and TV via the production company Unless Pictures. My worlds crossed when I started to see the looming impact of generative AI on production. It’s all happening as I prophesied: post-production and animation are the first to use image and video generation or augmentation.

Embedded post from unless pictures post office: “Text-to-video AI: the end of film & TV?” (a cross-post with my newsletter The Stark Way).

For Unless Pictures, it’s mainly about development and pre-production, which means brainstorming, writing, editing, noting, etc. Some of these tasks are obviously perfect candidates for chatbots. But it’s also highly unethical to, say, upload a writer’s original screenplay to a dubious chatbot and ask it for notes, because that writer’s valuable IP has just been given away for free to train future models. So I wrote a policy (again, apparently the first of its kind) for how to triage which tasks are ripe for AI and when AI should not be used for IP reasons. Because the major AI companies keep changing their terms and conditions, it’s a document I try to update every couple of months, or whenever I hear about new terms. Currently (December 2025), Claude is the best option, provided one opts out of data sharing in the settings.

Embedded post from unless pictures post office: “Screenwriting w/ AI -- Policy Document” (updated September 2025 to reflect changes in AI companies’ use of data, extra advice for screenwriters, and minor edits).

I’m currently doing more work in this industry for other companies that need guidance on this issue. The advice differs depending on what one is talking about: deepfakes of actors, AI-written screenplays, or the open secret of using AI to replace animators and post-production workers. And it requires knowledge of the creative industries, of AI’s technical limitations, and of the philosophical and ethical background.