There’s a temptation to turn everything into KPIs and metrics, but what we can measure is never exactly what we care about.
What does harmony look like when humans mediate it through machines?
The desire to learn about the world and to make it better comes from our experience, but the work to do those things doesn’t always need a direct connection to that primary motivation, as we know all too well. Some of that work can be delegated to those who don’t share the vision, and this is where AI…
When working with AI, it’s helpful to think of it as an intern in its first week. For scriptwriting, AI would be an intern in their first week with an uncanny unawareness of how their work resonates with the human experience.
The real opportunity lies in balancing automation with human judgment, so that technology not only accelerates work, but also deepens impact.
Without knowledge of how to optimize answers for ‘truth,’ they’re modeling what humans do: tell stories, hedge, prevaricate, lie, do bad math, and sometimes, eventually, suss out the truth.
…These tools have the potential to lower the cost of producing cultural artifacts, thereby increasing humans’ power to create things or maybe even create value.
“While cognitive offloading with AI reduces people’s higher thinking abilities, thoughtful integration of ‘extraheric AI’—which nudges, questions, and challenges users—can substantially improve critical thinking.”
There are hard parts in navigating a new culture, language, and cuisine. They feel like situations where the learning curve looks steeper than I can handle. To help get me over some of these more challenging moments where I’m not an expert (yet), I’ve occasionally tested what AI assistants can do.
OpenAI has taken serious flak in recent weeks over its “synthetic courtesy,” to the point that Sam Altman has acknowledged how “annoying” GPT-4o has become and said the upcoming update will address GPT’s excessive pandering.