Should we have AI Laws? | Possible Ep 101
Reid Hoffman
@reidhoffman
Hi—I’m Reid Hoffman, co-founder of LinkedIn, partner at Greylock, co-host of the Possible Podcast (https://www.possible.fm/), and co-founder of Inflection AI. I’m interested in technology and ideas that enable human progress. I use YouTube to share some of the conversations I’ve taken part in, along with my thoughts on AI, innovation, starting and scaling businesses, and more. My AI avatar, Reid AI, makes the occasional appearance here, too. I serve on nonprofit boards such as Endeavor, New America, Opportunity@Work, the Stanford Institute for Human-Centered AI, and the MacArthur Foundation’s Lever for Change. If you like what you see, you can subscribe for more.
Video Description
In this episode of Possible, Reid Hoffman and Aria Finger explore how AI is colliding with real-world regulation, responsibility, and even civility. As states like Utah, California, and Illinois roll out new AI laws governing everything from chatbot disclosure to bans on AI-driven therapy, Reid breaks down the logic driving these policies. The conversation dives into OpenAI’s self-imposed limits on medical, legal, and financial advice, the challenge of providing access while managing liability, and why safe harbor laws could unlock life-saving potential for AI. From there, the discussion zooms out to the global stage, where China is pushing for an international AI governance body and the U.S. risks losing moral and technical leadership. Finally, Aria and Reid end on a human-behavior note: a study showing that AI performs better when users are rude. What does that say about how we train these models? And more importantly, what does it reveal about us? From transparency to civility, what kind of intelligence do we really want to build?