Part of what I’m hoping to do with this newsletter is introduce readers to the people who have shaped my thinking on AI, automation, and other topics, and I’m starting with Jack Clark.
I first got to know Jack when he was a reporter at Bloomberg, covering AI and enterprise tech. His newsletter, Import AI, was several notches above anything I saw in the mainstream press in terms of sophistication and fluency with the subject matter, so it wasn’t a total shock when he decided to leave journalism and go to OpenAI, where he worked as the policy director until late last year. (He’s now working on a new project, which, annoyingly, he still won’t tell me about.)
When I was writing my book, Jack – and the projects he contributes to, including the annual AI Index report (an essential read that tracks the progress of AI in various domains) – became an incredibly helpful resource for me. Despite working in the AI industry, he’s not a shill, and he sees the potential downsides of this technology as clearly as the potential benefits. He’s also an artist and musician, which gives him a humanistic view of AI’s cultural effects that I really value.
We Zoomed last week, and our conversation was so interesting that I thought I’d print an (edited) snippet of it.
Jack, you are one of the only people I know who has worked as a journalist and then gone into AI research and policy. What do you think are the biggest differences between how the media portrays what’s happening in AI and what is actually happening?
I think the extent to which AI is going to need to change our institutions for how things like products and regulation work is generally undercounted. Because you focus on the interesting capabilities, or the new systems that can generate images or write text, and that’s really interesting.
But all of the institutional regulatory stuff is very, very, very slow. And so when I read a lot of stories about AI, I think one of the things that’s running through them is: these AI developers are running amok, what are we going to do?
And what I don’t see covered is the institutions that are meant to constrain these entities are just not ready, and don’t have enough money and don’t have enough people and enough knowledge to do it. And I think that’s one of the more dangerous mismatches in society.
Is there something in AI that the public is really worried about that they shouldn’t be worried about? Or, vice versa, is there something that’s not even on people’s radars, but that we should all be freaking out about?
The stuff that should be on people’s radar is the effects of the media we consume and the culture that we’re a part of being changed by AI in the background. So you’ve already lived through this period of recommendation algorithms changing the cultural world we exist in, and thereby having subtle but long-range effects on certain people, people getting radicalized or whatever. That’s all continuing to get more and more and more advanced.
Now, we’re starting to see systems emerge that will just proactively generate media for people. And they’ll interact with all of our social media platforms and the economics of publishing and media in such a way that the world is going to build a load of hybrid human/machine media. And I think that that’s something that we should be more aware of and concerned about than we are, because it’s almost impossible to measure. It’s like climate change. How do you measure the aggregate changes in the entire media ecosystem?
Are you talking about synthetic media that’s generated by AI? Or are you just talking about recommendations that get more accurate over time and sort of pigeonhole people?
It’s really both. The way it’s really going to happen is that you’ll have an AI assistant in the newsroom that just suggests different headlines, and a human will just pick one. So it’s not going to be some dramatic “now there’s an AI journalist.” It’s going to be much more like, “oh, humans are now teaming up with AI to generate cultural outputs.”
And the humans who are curating this stuff may not even realize in the act of curation that they’re being constrained by the suggestions of the AI. But there will be something that happens here where you’re going to start to have society be educated by stuff which partially is machine-written, even if it’s human-curated.
Isn’t that already happening to an extent? I mean, I’m thinking about all the “what time is the Super Bowl” blog posts, which are just algorithm bait.
But that’s algorithm bait. Eventually, it’s going to get good, and then it just becomes human bait. And you’ll develop different slants of stories for different readers.
When I was at Bloomberg, we used to write loads of variants on different earnings report stories ahead of the ticker coming out, and then we’d pick which one matched it. But we’d also try and do other pieces for different financial consumers covering different slices of the earnings story.
That’s all going to start happening at once. And then eventually it will be like: What type of reader are you? What slant of this do you want to consume?
That’s interesting. I’m imagining, like, a Taylor Swift album drops, and there’s 15 different versions of the review calibrated to your Spotify algorithm.
It’ll be that. But then, in a couple of years, Taylor Swift will also have albums that will have a jazzy variant and a rock variant, and one of those might be algorithmically curated.
In the 2021 AI Index report, which came out earlier this month, there was this statistic that really stuck out to me about global investment in AI going up 40% in 2020. Is that the pandemic?
I think a lot of the graphs in the AI Index this year go up and to the right in quite a profound way. It’s hard to disentangle the pandemic from the larger business environment and the fact that AI is starting to work in a major commercial way.
One thing you can note, though, is that AI is so software-based that it’s one of these industries that has been less disrupted by the effects of the pandemic than many others. And so I think that you can make an argument that the pandemic might have just not changed the trajectory of the AI industry that much.
I also think capital is kind of this lagging indicator of just how good the research has got and how applicable it’s become. And I wouldn’t be surprised if next year we see an even stranger statistic for investment.
Another headline from the report is that surveillance technology is really becoming popularized. Obviously, we’ve seen things like Clearview AI. Are there things happening in surveillance that we don’t know about?
Well, there’s a technique that is under-covered. Actually, I am writing a report about this technique right now. And I tried to do it for the Index, but it was too hard. It’s called pedestrian re-identification. Have you heard of this?
No, it sounds terrifying.
It’s wild. And it’s a long-standing thing, but it’s started to get real good. So re-ID, as it’s called, is: I have Kevin Roose on CCTV coming out of a train station. Now, you’re going to walk into a shopping mall, and I have a different security camera looking at you from a different angle with a different resolution and quality. Can I re-identify Kevin in an unsupervised way? And can I hand off from different cameras with different perspectives on you, as you walk around the city?
It’s really, really difficult. But it’s actually starting to work. I’m looking at a lot of metrics right now, and it’s a bit behind things like image recognition, because it’s harder. But it’s getting there.
And here’s the best part. I went into this thinking that I would only find research papers from universities and also potentially security or military-linked places. But guess what I’ve actually found? It’s retail startups who work for shopping malls, who are writing lots of papers about re-ID.
It’s the malls?
You’d think it would be the NSA. But it’s like, no, man, we just wanted to know why Kevin didn’t go in the sneaker shop.
So we’re building a surveillance dystopia because The Gap wants to know if I went to Old Navy last week.
It makes total sense, right? But it’s totally confusing. We sit and tell stories about the big bad villain, like the state or the military. And then I realized that most of the large effects happen just through standard commercial innovation.
I have a personal question. You are a creative person. You have a piano sitting behind you in your background, and a guitar. And recently, I have felt myself becoming less creative and original. And I worry that part of that has to do with the kind of homogenizing effect of AI and recommendation algorithms, but I also feel like that can’t be an excuse. What do you do in your own life to make sure you’re remaining creative and interesting?
A thing that scared the shit out of me the other day was, I have an art studio. I’m very privileged, obviously, I pay like $300 a month for a room. And the other day I went to the studio, it was a Sunday. I sat there for 4 hours and then I came home. And my wife was like, how was the studio? And I was like: it was fucking awful! I watched Facebook videos for three and a half hours.
And I was thinking, like, what am I doing? My life is running through my hands, and why did that happen? It’s because we’re all lazy. And these things are targeted at the lizard brain part of us, which is super rewarding.
So little things I do are: I charge my phone in a different room from the one I sleep in. And I’ve actually built, with my studio mate, a wooden stop-motion rig that I now put my smartphone on. So I record myself making mind-maps, partly because I like the process of it. But partly because it means my phone’s literally on the fucking ceiling and I can’t touch it.
Tell me more about this. What are you filming yourself doing?
Well, I make these giant mind maps. Basically, I try to think really hard about stuff, and I try and write all that down. Because I think the act of physically writing is less distracting and more creative than when you use computers.
This one was called “the next five years of AI.”