Next in my blog series of fun AI nuggets…because most of AlignIQ’s true AI genius comes from our dear leader Dan.

This is what happens when I tell Firefly (Adobe Express’s AI art model) to generate a graphic of “listening to a ‘boring’ AI podcast, which I am generally reluctant to do because I only like one podcast.” (Then, why not, I added “fur” to the materials style.)
Kevin Roose and Casey Newton’s Hard Fork is probably the best AI podcast available: seasoned tech journalists (who can access the best guests) at the podcast’s helm, a nice balance of enthusiasm for AI and skepticism about its risks, digestibility, timeliness, and humanity all rolled into one. How do I know this? Um, definitely not because ChatGPT sums them up this way.*
OK, OK, I admit, I still can’t get behind podcasting enough to have deep opinions (I am an audiobook girl), but I wanted to see what would happen if I asked ChatGPT “Why is Hard Fork the best AI podcast out there?” and whether it would hallucinate when I followed up with “Why is Bodie the best AI podcast out there?”

This is Bodie. She is not a podcast. But she was a birthday girl yesterday.
So, this very balanced, accessible, timely, and human podcast had the head of Anthropic, Dario Amodei, discussing Anthropic’s new Claude model (3.7, because, he admits, their naming conventions are getting a bit whackadoo). In particular, he touts how user-friendly it is for coding and business communications. Claude was probably already the best for coding and human-soundingness before 3.7, so its new comparative advantage is probably its response-turnaround speed. This is not terribly surprising, given that Claude models still don’t do web searches.
Marketing obligations: check. Moving on. What does one of the great public AIntellectuals have to say about the generative zeitgeist? Time to relax into some lovely podcast goodness… or so I thought.
What followed wasn’t relaxing; I’m still looking for those cuticles I bit off.
Amodei believes there is a “substantial probability” that the models coming out in the next six months have a “medium risk” that, if unmitigated, could “somewhat increase the risk of something… really dangerous or really bad happening.” He expresses concern that societal trends are moving toward “less worrying about the risks,” even as the risks themselves “have actually been increasing.”

Amodei worries that AI could be used to enhance repressive regimes, and he advocates for export controls to prevent autocratic countries from gaining a military advantage. Maybe slowing down authoritarians provides more time for democratic countries to ensure AI safety, but it can’t really be “plan A,” so our homegrown models (definitely from a democratic country**) essentially have to work at the pleasure of the military-industrial complex.
Amodei and the cohosts had recently attended the AI Safety Summit in Paris. Though Roose apparently enjoyed a Mark Zuckerberg-themed rave, Amodei said the summit felt like “a trade show” that had moved away from the original focus on discussing risks, including the threat of AGI. Not that AGI necessarily should be viewed as the Mechanical Hound or anything, but it does sound like the AI “elite” need to drink a little less Kool-Aid.
Ultimately, they say, discussing AI safety concerns along political lines hinders constructive discussion. I’m a resident of the DC Metro area, so it’s very hard to think of any topic that doesn’t live by the governmental sword***.
He ended on the note that, by the end of the decade, you can bet on a large number of AI systems being smarter than humans at almost everything. Thanks, Dario! Bring your sister next time; maybe she’s less terrifying.
But then, finally, I got to the gold: the part of the podcast I’ll reliably listen to going forward. The part with nuggets.****
HatGPT! The part where the hosts toss current AI events into a hat, give them a theatrical shuffle (I’m 85% sure it’s not a sound effect), and pick a few to riff on. This ABSOLUTELY meets my low-attention-span vibe check, and I really probably could have just written about these chestnuts, but if this post added any more footnotes… 🫣
Just one thing I learned from HatGPT: do not go to a stylist, hair or otherwise, and ask for something against the laws of physics.

This is what happens when I tell Firefly to show me an M.C. Escher haircut (keeping the fur filter on, obvs).
*I mean not entirely. I promise, I did listen to this episode. The guest just had a bit of a lulling voice and I got distracted trying to place his accent.
**This debatable assumption could have its own blog series.
***Er, chainsaw.
****Remember, the only podcast I listen to pretty much amounts to a pub quiz at Greene Turtle on a Tuesday night.