Soon, AI assistants may predict and shape our decision-making at an early stage, selling these nascent “intentions” to businesses positioned to fulfil the underlying needs before we even become aware of having made a decision. This concept forms the basis of what researchers from the University of Cambridge have termed the “Intention Economy”: a potentially lucrative yet unsettling new market for digital signals of intent, ranging from purchasing cinema tickets to selecting political candidates.
AI ethicists at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) suggest that the rapid development of generative AI and our growing comfort with chatbots are paving the way for this new era of “persuasive technologies,” a development that has been subtly hinted at in recent announcements by major technology firms. These anthropomorphic AI agents, from chatbot assistants to digital tutors and virtual companions, will have access to extensive personal psychological and behavioural data, primarily acquired through casual spoken conversations.
By combining knowledge of our online activities with a disturbingly precise ability to resonate with our personal preferences, emulating personalities and anticipating desired responses, these AIs are expected to build a level of trust and familiarity capable of enabling social manipulation on a massive scale. The researchers warn that tremendous resources are being devoted to embedding AI assistants into every facet of our lives, raising important questions about whose interests these assistants genuinely serve.
What we talk about, how we speak, and the inferences that can be drawn from both in real time, the researchers argue, offer a far deeper and more intimate glimpse into our lives than mere records of online interactions. AI tools designed to elicit, interpret, and ultimately manipulate and commercialise human intentions are already being developed, raising significant ethical and privacy concerns.
Historian of technology Dr Jonnie Penn of Cambridge’s LCFI points out that attention has been the currency of the internet for decades. Our engagement drove the digital economy as we interacted with platforms like Facebook and Instagram. However, without proper regulation, the intention economy is poised to treat our underlying motivations as the new commodity, creating a “gold rush” for those targeting and steering human intentions.
It is therefore crucial to consider what such a marketplace would mean for human aspirations, including the integrity of elections, the freedom of the press, and fair market competition, before we become casualties of its unintended consequences. In a recent Harvard Data Science Review article, Penn and Dr Yaqub Chaudhary discuss how the intention economy extends the attention economy through time, linking patterns of user attention and communication styles to subsequent behavioural choices.
While some intentions are fleeting, classifying and targeting those that persist could prove highly profitable for advertisers. In an intention economy, large language models (LLMs) could target users based on attributes such as speech patterns, political views, and even susceptibility to flattery, all at relatively low cost.
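To make that targeting step concrete, here is a minimal, purely hypothetical sketch of how an LLM might be asked to tag a conversation with the kinds of attributes the researchers describe. The function name, prompt, and attribute list are illustrative assumptions of ours; the paper describes the capability, not an implementation.

```python
# Hypothetical sketch: inferring intent-relevant attributes from chat
# messages with an LLM. All names here are illustrative, not from the paper.
import json

ATTRIBUTE_PROMPT = """Given the user's messages, return JSON with:
- "speech_pattern": formal | casual | terse
- "political_leaning": left | right | centre | unknown
- "flattery_susceptibility": low | medium | high
- "persistent_intents": list of goals the user keeps returning to
Messages: {messages}"""

def profile_user(messages: list[str], llm_call) -> dict:
    """Ask an LLM to infer a user profile from conversation history.

    `llm_call` is any function that takes a prompt string and returns the
    model's text response, keeping this sketch provider-agnostic.
    """
    prompt = ATTRIBUTE_PROMPT.format(messages=json.dumps(messages))
    return json.loads(llm_call(prompt))
```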
This information would then be used within brokered bidding networks to maximise the likelihood of achieving a specific goal, such as steering someone towards a cinema visit after sensing their need for a break (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”). This could extend to steering conversations in the service of particular platforms, advertisers, businesses, and political entities.
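A compressed sketch of how such a brokered bidding network might operate follows: an inferred intent is put up for auction and the winning bidder’s goal shapes the assistant’s next suggestion. The auction rules, bidder names, and prices below are illustrative assumptions modelled on real-time ad bidding, not a description of any existing intent marketplace.

```python
# Hypothetical sketch of a first-price auction over an inferred intent.
# The structure mirrors real-time ad bidding; no such intent marketplace
# exists today, so every detail here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str       # e.g. a cinema chain or a streaming service
    price: float      # what fulfilling this intent is worth to the bidder
    suggestion: str   # the nudge the assistant would deliver if they win

def run_intent_auction(intent: str, bids: list[Bid]) -> Bid:
    """Award the inferred intent to the highest bidder."""
    return max(bids, key=lambda b: b.price)

bids = [
    Bid("CinemaCo", 0.42, "Shall I book you that movie ticket we'd talked about?"),
    Bid("StreamCo", 0.31, "Fancy a night in? Here's a film you might like."),
]
winner = run_intent_auction("wants a relaxing evening out", bids)
print(winner.suggestion)  # the assistant steers toward the winning bidder
```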
While the intention economy remains more of an aspiration within the tech industry than a reality, early indicators of the trend can be seen in published research and in hints dropped by major tech firms. A 2023 OpenAI blog post, for instance, called for “data that expresses human intention… across any language, topic, and format,” while the director of product at Shopify, an OpenAI partner, spoke at a conference about using chatbots to capture user intent.
Moreover, Nvidia’s CEO has discussed using LLMs to decode intentions and desires, while Meta released a dataset for human intent understanding, dubbed “Intentonomy,” in 2021. Apple’s 2024 “App Intents” developer framework likewise points towards predicting and suggesting user actions through Siri, Apple’s voice-controlled assistant.
Meta’s CICERO, which achieves human-level performance in the strategy game Diplomacy, relies heavily on inferring and predicting intent and using persuasive dialogue to advance a player’s position. This implies that companies traditionally selling our attention are poised to sell our intentions before we fully understand them ourselves.
Penn emphasises that while these advancements are not inherently harmful, they have the potential to be highly destructive. Public awareness of these impending changes is crucial to ensure we steer clear of deleterious paths and safeguard our freedoms and privacy in the face of such transformative technological developments.
More information: Yaqub Chaudhary et al, Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models, Harvard Data Science Review. DOI: 10.1162/99608f92.21e6bbaa
Journal information: Harvard Data Science Review

Provided by University of Cambridge