Happy Thursday, gang! After a couple of weeks off, we’ve got a bumper edition for you… but first, a bit of personal news. My new book, Data Culture, is due out on Thursday 6 June and is now available for pre-order from Waterstones (and other notable retailers)!
I spent the best part of a year conducting a huge research project into digital transformation and how companies intend to use AI, and this book is the result. Capturing the views of over 300 business leaders on the common causes of digital transformation failure, it sets out an actionable framework to help organisations of all sizes build successful data-driven cultures.
I’m sure I’ll mention it a few more times leading up to the publication date, but as I haven’t talked about it before, I thought I ought to point it out. Suffice it to say, if you enjoy Playbook, then Data Culture will be right up your street! PSA over, so let’s get into it!
The layout and premise of the newsletter are simple: a once-a-week sheep-dip of tech, culture, policy and research stories, which I hope you enjoy. If you think friends or colleagues would benefit, please share it with them so they can subscribe on Substack or LinkedIn.
Best wishes, Alex.
1. Tech innovation
I’ve been thinking a lot about AI agents over the last couple of weeks: I keep posing the same set of questions to people much smarter than me and so far none of them has been able to provide answers. And as AI agents keep popping up in my newsfeed, I thought I’d crowdsource my queries.
Manus, a new AI agent from Chinese firm Butterfly Effect, launched last week. Unlike more basic chatbots, agents are designed to navigate multiple applications to execute complex tasks, like scheduling meetings, in response to simple user commands. In the launch video – where a tech bro with an American accent ominously seems to pronounce Manus like ‘menace’ – it is described as “potentially, a glimpse into AGI.” The head of product at Hugging Face called it “the most impressive AI tool I’ve ever tried.”
Further, The Verge reports that, “After Google and OpenAI offered up AI news on Tuesday, Microsoft has followed with announcements of its own, including details of two ‘deep reasoning’ agents for Microsoft 365 Copilot that it claims are the first of their kind, dubbed Researcher and Analyst, as well as new capabilities for custom AI agents.” The piece goes on to say that, “Researcher relies on OpenAI’s deep research AI model to pull off ‘complex, multi-step research,’ along with access to third-party data via connectors to sources like Salesforce or ServiceNow so that business customers can derive insights from across their tools.”
Let’s set aside for now any jittery concerns about ‘deep reasoning’ being unleashed on the populace without being stress-tested at scale (after all, reasoning models have already started cheating to win at chess: researchers have found that goal-oriented AI models trained to optimise for success tend to bend rules and navigate around ethics in unpredictable ways, with no simple fixes in place). Instead, let’s focus on the incessant BigTech race to push agentic AI into the world (again, without sufficient testing).
I’ve seen lots of hype in recent months about how we’re all soon going to be using AI agents to take care of our mundane tasks. The typical value-add use case goes something like this: ask an AI agent to find a comedy show, music gig or other event based on the shared interests of me and my friends; negotiate a date across my friends’ calendars; book the best tickets available; let everyone know; and get them to pay me back. To be clear, this sounds like a compelling use of technology to solve first-world problems. Very tempting.
But, presuming the agent is on my Apple or Android phone, this means it would need access to my browser and browsing history, my calendar, my contacts, my friends’ calendars, our social media feeds, my messaging app, our messaging history and my credit card or bank information. It would then need something that passes for root permissions, so it can drive those apps and override various authentication and end-to-end encryption layers by (possibly fraudulently?) operating them on my behalf.
While one day there may be mobile phones with enough power (battery life TBC) to coordinate those actions inside the handset, for this to happen now, the agent would have to transmit multiple pieces of sensitive, personal data to a cloud server in Arizona / outer space with enough compute capability to process the activity, before sending it back.
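To make the scale of that access concrete, here is a rough, purely illustrative sketch in Python of the permission ‘manifest’ such an agent might need. None of these scope names come from any real platform or API – they are invented for this example – but listing them side by side shows how wide the blast radius would be if any single grant were misused or compromised.

```python
# Purely illustrative: a hypothetical permission manifest for the
# ticket-booking agent described above. These scope names are invented
# for the sake of the example and do not correspond to any real platform.

REQUIRED_SCOPES = {
    "browser":   ["history.read", "tabs.control"],       # find events, drive checkout pages
    "calendar":  ["read", "write", "others.free_busy"],  # negotiate a date across friends' calendars
    "contacts":  ["read"],                                # work out who 'my friends' are
    "social":    ["feeds.read"],                          # infer shared interests
    "messaging": ["history.read", "send"],                # read context, notify the group
    "payments":  ["charge", "request_transfer"],          # buy tickets, collect repayments
}

def summarise(scopes: dict[str, list[str]]) -> None:
    """Print each service and the grants the agent would hold over it."""
    for service, grants in scopes.items():
        print(f"{service}: {', '.join(grants)}")

if __name__ == "__main__":
    summarise(REQUIRED_SCOPES)
```

Each of those grants is sensitive on its own; bundled together behind one agent (and, for now, shipped off to the cloud for processing), they amount to the root-level access described above – which is exactly why the questions below matter.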
My questions are: How will that data be encrypted and kept safe? How is that managed safely? Who governs it? Where is the recourse if/when it goes wrong?
Answers on a postcard please…
2. Culture
“Only workers in the Philippines are more reluctant to go into the office than those in Britain”, declared the front page of The Times this week, referring to new research conducted by global property firm JLL. Bizarrely, having released the results of a survey of 12,000 workers in 44 countries to the media, JLL has chosen not to include the report in its extensive website research hub, so I can’t link to it. However, the highlights as reported are as follows:
UK employees are spending two days a week on average in the office, half as many as pre-pandemic;
They’re not happy about it – on average they would like to do only 1.5 days in the office;
Filipino workers have the lowest office attendance, averaging 1.4 days a week;
Chinese workers average 4.1 days in the office, one day more than they would prefer;
The only country where companies and employees are aligned is Greece, where the preferred average of 3.5 days matches actual attendance.
3. Policy & research
OpenAI has published two new pieces of research in the last few days into how using chatbots like ChatGPT impacts users’ emotional wellbeing. The first, How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use, investigated how AI chatbot interaction modes and conversation types influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI and problematic AI usage. It revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence.
The second, Investigating Affective Use and Emotional Wellbeing on ChatGPT, examined emotional changes in nearly 1,000 people over 28 days as they interacted with ChatGPT under different experimental settings. It found that a small number of users are responsible for a disproportionate share of the most affective cues.
Key findings include that female study participants were less likely to socialise with other people than their male counterparts, while people who used voice mode set to a gender other than their own reported higher levels of loneliness at the end of the experiment.
MIT Technology Review reports that, “Legally, it’s largely still a Wild West landscape. Some have instructed users to harm themselves, and others have offered sexually charged conversations as underage characters represented by deepfakes. More research into how people, especially children, are using these AI models is essential.”
4. Reading List
So I devoured the book everyone’s talking about – Careless People – in two days and recommended it to a friend, who did the same. If you’ve ever wanted to know how social media really makes money or why Zuck feels politically untouchable, read this book!
Sarah Wynn-Williams, a young diplomat from New Zealand, pitched for her dream job. She saw Facebook’s potential and knew it could change the world for the better. But when she got there and rose to its top ranks, things turned out a little differently.
From wild schemes cooked up on private jets to risking prison abroad, Careless People exposes both the personal and political fallout when boundless power and a rotten culture take hold. In a gripping and often absurd narrative, Wynn-Williams rubs shoulders with Mark Zuckerberg, Sheryl Sandberg and world leaders, revealing what really goes on among the global elite – and the consequences this has for all of us.
Candid and entertaining, this is an intimate memoir set amid powerful forces. As all our lives are upended by technology and those who control it, Careless People will give you disturbing context for why it all feels so unsettling.
5. Playbook picks & worthy clicks
How the immigration crackdown is shooting the U.S. tech industry in the foot (The Conversation)
The AI development gap between China and the US has narrowed to 3 months (Reuters)
Silicon Valley staffers are deleting their dating apps (Wired)
Published: the library of pirated books that Meta secretly trained its AI on (The Atlantic)
Everything you say to Echo will be sent to Amazon (or Alexa will stop working if you opt out) (Ars Technica)
Why safety experts are arguing that AI ‘superintelligence’ could be the next nuclear-level extinction threat (The Debrief)
DOE > DEI: JP Morgan rebrands ‘equity’ as ‘opportunity’ (Fox)
Now Google has instructed workers to remove DEI terms from their work (The Information)
Tips on how to develop your Executive Presence (Harvard Business Review Podcast)
Now, you can follow Digital Culture Playbook on LinkedIn (please do!)
Wishing you all a fab weekend when you get there!