27 Comments
Ampersand888

I’m glad my subscription helped you create this survey. I hope to see more firsthand research in the future.

kyla scanlon

Thank you! Your subscription is VERY helpful, thank you so much for contributing.

Neil Pegram

Thanks for yet another insightful piece! Maybe I overlooked it in your research, but it seems like the core issue missed in a lot of the discussions around the impacts of AI is the timeframe.

For instance, regarding the impacts of AI on employment, it makes a big difference whether we are discussing job loss over the next 2 years, 10 years, 25 years, or 50 years (young people choosing a university degree now are facing a 50-year question). Imagine if, when the first production automobile hit the market, we had polled all the people who drove carriages, cared for and fed horses, manufactured carriages and equipment for horses, etc., and asked them if they thought cars would impact or eliminate their jobs. The most important part of the question is the timeline. Two years after the invention of the car, it barely had an impact on those people's employment, and some of them started working for the new car industry. But if you'd asked them how the job market related to horse carriages was going to look in 100 years, they probably would have rightfully assumed a 90% reduction.

But what's really unique about AI, compared to past technological and industrial changes, is that in 100 years human carriage builders are going to seem about as intelligent relative to AI as the horses seemed to us.

So what we're doing now is like watching horses in 1925 survey other horses about whether these new automobiles are going to eliminate their jobs in 100 years. The horses might have been scared of automobiles; maybe they could even communicate to each other that they'd seen lots of automobiles, and communicate to their foals that automobiles are scary. But they couldn't conduct a complex survey of the long-term impacts of automobiles on the global industry, and there's absolutely no way they could have comprehended self-driving electric cars in 100 years.

So, while I find it fascinating that we are having lots of discussions about who's going to lose their job in the next 2 years, we should really be discussing which industries are going to see a 95% elimination of human workers in 25 years (or less), and then backcast from that.
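
To make that backcasting concrete, here's a minimal sketch (my own illustration, not from the survey, and assuming a constant annual rate of decline) of the attrition rate implied by a "95% gone in 25 years" projection, in Python:

def implied_annual_decline(fraction_remaining: float, years: int) -> float:
    """Constant annual decline rate implied by an endpoint projection."""
    return 1 - fraction_remaining ** (1 / years)

# "95% elimination in 25 years" means only 5% of today's jobs remain.
rate = implied_annual_decline(0.05, 25)
print(f"implied loss of ~{rate:.1%} of remaining jobs per year")  # ~11.3%

Under that assumption, a 95%-in-25-years scenario only requires losing about 11% of the remaining jobs each year, which is exactly why a quiet first 2 years tells you almost nothing about the 25-year question.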

For instance, with the advent of AI apps like Showrunner, we can debate how many jobs in CGI and animation are going to be eliminated in the next 2 years. But it seems pretty clear that in 25 years, 95% of the current jobs related to the production of animated film will be eliminated, just like those of the thousands of people who used to hand-draw individual images of Mickey Mouse.

These questions of timelines are very important to actually planning and investing in the structures of companies, education systems, and employment in our societies, because those are 25-year, or ideally 50-year, questions, not 2-year questions.

Thank you for all of your work. (P.S. I also love bikes.)

Keith Wilkinson

The results make sense intuitively based on what I experience. I do think, though, that we need to move past "AI" as a nebulous concept. I guarantee everyone in the survey has benefited from "AI" for years; we just don't call it AI. It's predictive maintenance, smart sensors, route planning, etc.

If we pose the question as "Can smart document workflows help your office?", it's probably a totally different response.

Also, a lot of imagination is spent on white-collar jobs, and indeed that's a big surface area for AI impact, but we should think about other sectors too.

I delve into AI for public utilities here:

https://open.substack.com/pub/title22/p/the-dawn-of-the-blue-collar-knowledge?r=2o3x1e&utm_campaign=post&utm_medium=email

Imranullah dawoodbasha

Great insight. Essentially, the issue with training and trust is something we face in every organization. Tech companies working on AI should develop strategies to make AI accessible and understandable for the general public. Believe it or not, Pandora's box has been opened, and there is no stopping AI advancements, whether we understand them or not.

David Pollard

I have to admit that I'm rather skeptical that lowering the interest rate would help the unemployment rate by encouraging firms to hire more graduates instead of just buying more AI. The genie is out of the bottle, and more money isn't going to make firms forget about it.

Peter Coy

This is interesting, but there's no reason to think that the people who saw and responded to your survey are representative of the general public.

ScottB

Peter: As Kyla noted in her introduction, this was merely intended as a preliminary exploration. You are correct that the accuracy of this data could only be tested through a much larger, more representative sample. Until government is willing and/or universities are able to fund more research on this issue, it is still interesting to see what her readers are thinking about it at the moment.

PERRY KINKAIDE

You've zeroed in on a critical issue: whether AI evolves to serve public or personal interests, or better yet, to disintermediate and thus displace the need for regulatory/government intervention. The CORE will be "trust," and whether, over the next few years, evidence accumulates that errors and mistakes decline to the point of disappearing, as has been the case with every disruptive, potentially transformative technology. Right now, it's best that governments STAY AWAY. We do not need a false positive.

Curious808

Any meatier stuff, I imagine, would be in the open-ended answers. As-is, and not to be facetious, the report could have been generated by a commercial LLM given decent prompts: "What would the likely response be to a survey like this?" It reads like the kind of tentatively balanced response of people who haven't yet arrived at a well-internalized position. (Which is maybe a reasonable description of what current LLM synthesis does.)

Adam Brinegar

Nice work. Much of the popular discourse on AI seems to be driven by the needs of market participants, rather than what's actually happening. Ongoing empirical research helps. And we need more empirical research not paid for specifically by AI companies.

Muhammad Hemani

Great read!

Tumithak of the Corridors

Good work as always, Kyla. I like how you actually brought in worker voices instead of just doing the market/CEO thing. That “functional vs existential trust” distinction is a sharp one.

One thing I keep thinking about: workers have been told for five years straight that AI is coming for their jobs. Given that, it makes sense they'd want adoption to be clunky. Workers have a vested interest in slowing things down; it's self-defense. If it all slides in too smoothly, management gets to write the story on their terms.

Small note on the research side: that MIT “95% of GenAI pilots fail” stat has been bouncing around without a live source. The original report went up, then got pulled, and now it mostly survives as a citation of a citation. I went down the rabbit hole and wrote it up here if you’re curious: https://substack.com/@tumithak/note/c-147305192

Appreciate you putting this out. The messy middle is exactly where the real story is.

Jeffery Keilholtz

Wow -- insightful and meaningful work, Kyla. Thank you! Keep going.

IRAW

For the last fifteen years of my thirty+ years in tech (retired in 2021) I included "technology integrator" in my job duties, acting as a liaison between overconfident vendors, internal clients, and the hardware intended to host the software and produce the valued objective. It was challenging and creative and required a blend of people skills, project management, and robust understanding of networked computer operating systems and mass storage. I find it difficult to imagine that something has fundamentally changed to eliminate the need for such roles.

One of the important aspects of implementing complex systems is "managing by walking around", giving stakeholders a chance to be heard outside of formal meetings, to express anxiety and dissent. If that aspect is omitted, there is a higher chance of push-back or "lying flat" by key personnel, which can doom a project that otherwise had every chance of success. How can AI replace this feature of the human landscape?

I feel like I'm taking a graduate-level course in ... something! What is the title of this course!? The instructor is tremendous, the content compelling. Thank you for your ongoing work to mash the daily torrent into digestible form.

David Salzillo

A great bird's eye view of this issue, Kyla! Thanks!

Mikhail Krylov

I use it for writing, as an editor.
