What sparked this reflection was a Hard Fork exchange; the excerpt starts around 36:10 and runs about five minutes.

UC Berkeley psychology professor Alison Gopnik frames AI as a cultural technology: essentially a medium of expression, like print or the internet, rather than something with human-like intelligence. The hosts, Kevin Roose and Casey Newton, consider what happens when models acquire goals and tools: far from being static outputs, they can produce outcomes humans wouldn't expect. Is AI a mere collection of linguistic artifacts, what academics call a stochastic parrot? Or does it show actual intelligence in narrow areas? My short answer: both.

As a cultural technology, AI facilitates accessing, remixing, and connecting humanity's knowledge. It can lower the learning curve for gaining working knowledge of a topic, for instance by rephrasing material in terms the creator understands. It can also ease some tedious parts of the creative process, such as debugging computer errors. So these tools have the potential to lower the cost of producing cultural artifacts, increasing people's power to create things, or maybe even to create value.

But does it also decrease incentives to make cultural products? 

We've already watched creative incentives centralize for decades. A new cultural technology isn't automatically bad: if gatekeepers become less essential for valuable ideas to reach an audience, a more robust creative culture could emerge. But it could go the other way: a handful of unscrupulous players could accelerate lowest-common-denominator content, flattening the cultural landscape and reducing the average value of each cultural experience.

(When incentives prioritize attention, we end up with garbage regurgitated children’s books. Imagine if the tool were used instead by someone with a commitment to high-quality children’s books. What could they do?)

Because this future is uncertain, we need the types of regulations that Gopnik mentions to preserve the value of belonging to a culture and protect the diversity that a complex society relies on.

At the same time, Hard Fork's hosts point out that Gopnik's perspective focuses on how these systems mirror us. They argue that when systems take actions, they look like a new kind of technology.

The heart of the "new kind of technology" claim is agency in software. AI doesn't just output text; it can act within a space of digital tools you give it. Browsers, calendars, forms, APIs, or login rights allow it to make changes in the digital world. Reliability is still limited, but even at today's limits, tool-using assistants prove surprisingly capable of handling many repetitive, unchallenging, yet mentally exhausting workflows.

Until recently, you needed to be a data scientist to get value from this type of technology. Now you need to be knowledgeable about its limitations, imaginative, and willing to experiment.

This has the effect of democratizing what can be done on computers. If you've asked ChatGPT what a software error means and it helped you resolve it quickly, imagine what it can help you do when it acts as an interface to that software and can look up, in the manual, how to do the things you ask. Connecting a model to tools reduces the friction for human action and creates space for more creative approaches from individuals working towards a mission.
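For the technically curious, here is a minimal sketch of that model-plus-tools loop in Python. Everything in it is hypothetical: ask_model stands in for whichever AI model you use, and look_up_manual stands in for the software's documentation. The point is only the shape of the interaction, not any particular product's API: the model asks for a tool, the tool runs, and the model turns the result into an answer.

    from typing import Optional

    def look_up_manual(topic: str) -> str:
        """Hypothetical tool: return the relevant section of a software manual."""
        manual = {"export": "File > Export > CSV, then choose a destination folder."}
        return manual.get(topic, "No entry found.")

    def ask_model(question: str, tool_result: Optional[str] = None) -> dict:
        """Stand-in for a real AI model call; a real assistant would decide here
        whether to answer directly or to request a tool."""
        if tool_result is None:
            return {"action": "use_tool", "tool": "look_up_manual", "argument": "export"}
        return {"action": "answer", "text": f"Based on the manual: {tool_result}"}

    def assistant(question: str) -> str:
        """One round of the model-plus-tools loop: ask, act, answer."""
        response = ask_model(question)
        if response["action"] == "use_tool":
            result = look_up_manual(response["argument"])       # the model "acts" through a tool
            response = ask_model(question, tool_result=result)  # then turns the result into an answer
        return response["text"]

    print(assistant("How do I export my report as a spreadsheet?"))

In real assistants the same pattern simply repeats across many tools and many rounds, which is what lets them work through the repetitive workflows described above.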

Do I want both ways to read AI to be true? Yes.

I want culture to feel more democratic again: more community, more closeness, more representation of our best. Can we have the broad reach the Internet has enabled, along with local cultural wealth?

I also want AI to be a new kind of technology because then it may produce many benefits that transcend what we can currently imagine.

I know I carry biases; my life isn’t representative of the human experience. That’s why I pair optimism with a scientific process of testing my hypotheses with experiments, and openness to revising my perspective.

If any of this resonates, and you’re working to improve wisdom, compassion, and livability in your corner of the world, your next step is to experiment safely and see if the latest AI tools help your mission.

Responsible AI Roadmap — 30-Minute Webinar
Flyer for AlignIQ's free Safety Webinar featuring Dan Garmat

Top 9 risks and their top 9 mitigations, in plain language. Built for purpose-driven teams: leaders and operators at nonprofits, B Corps, education, and health.
Register: https://www.eventbrite.com/e/responsible-ai-roadmap-30-minute-webinar-tickets-1552891784129

In this post, I used AI for polish, not purpose. If you spot something I should refine or nuance, I'll update the post.

