An AI-powered global survey run by Anthropic reveals that a major motivation for AI use is productivity, but not for productivity’s sake. Users want to reclaim time and headspace which they can reallocate to personal relationships and pursuits. The survey identifies a range of hopes for AI tied to efficiency, being better people and living better lives, alongside concerns such as inaccuracy, misuse and overuse. In many cases a “light and shade” scenario exists where the capabilities that create benefits also produce harms.
Last week Anthropic announced the Anthropic Institute, a move to consolidate its ethics and research efforts. This week it released details of a significant research project. The research sought to understand what ‘AI going well’ means for Claude users, grounded in their aspirations and concerns as users of AI. To that end, Anthropic used an AI interview chatbot to ask users about their hopes and concerns for AI, and how those connect with their experiences of the technology.
Over a week in December last year, 80,508 respondents from 159 countries, speaking 70 languages, shared the role they want AI to play in their lives, whether AI is already playing that role, and what they are concerned could go wrong. Anthropic believes it is the largest and most multilingual study of its kind ever conducted. This briefing summarises the findings from the research.
A significant but unsurprising finding was the use of AI to improve productivity. For users seeking productivity gains, the motivation seemed to be more than just professional: they wanted to free up time and headspace so they could focus more on personal relationships and pursuits. Many users recognised that the capabilities that create efficiency and speed are also capable of causing workforce disruption and job displacement. This dynamic is what Anthropic calls a “light and shade” scenario, where hopes and risks are “entangled” because “the same capabilities that lead to benefits also produce harms.”
The survey identifies a range of other hopes for AI tied to efficiency, being better people and living better lives. It maps how well AI is living up to those hopes based on people’s experiences using AI. And it lists concerns users raised, including inaccuracy, misuse and overuse. In many cases these hopes and risks are also “light and shade” scenarios, with users linking one to the other.
About the research process
Anthropic Interviewer asked interviewees a set of questions, then adapted follow-up questions based on each interviewee’s responses. This approach, they say, ‘bridges the typical tradeoff in qualitative research between depth and volume’, collecting ‘rich, open-ended interviews at a very large scale’. To process the volume of data returned, Anthropic used Claude to categorise each conversation across a range of dimensions: ‘what people want from AI, whether they’re getting what they want, what they fear, what they do for a living (if mentioned), and their sentiment about AI overall’. Respondents’ desires were classified into a single primary category, while multiple concerns were identified because ‘respondents tended to articulate several distinct worries rather than one’.

