4 Comments
José G. Hernández

Powerful and unsettling, Matt. What if the real “alignment problem” isn’t with AI, but with our own incentives and our inability to pause before the next shiny thing? Thoughtful as always.

Matt Haggman

Thank you, José. And I entirely agree: your point gets right to the heart of the challenge. Our own incentives are what make it so hard, yet the stakes are so high.

Howard

Matt, thanks for a succinct overview of the safety issue. I have been rereading Marshall McLuhan these days. As I read your post, I wondered what he would make of our current dilemma.

Matt Haggman

Thank you, Howard. Interesting. I'd love to hear more about your McLuhan reading, and the ways it may apply here.
