Bruce Schneier and Nathan Sanders discuss how media outlets are being inundated with AI-generated text, swamping ...
Faculty Associate George Chalhoub is quoted in Fortune, offering a reflection on Moltbook that underscores how large-scale ...
Affiliate Ram Shankar Siva Kumar and coauthors "present a practical scanner for identifying sleeper agent-style backdoors in ...
At a recent panel convened by the Weatherhead Center for International Affairs, BKC Affiliate Bruce Schneier spoke about the threats and opportunities posed by governments worldwide adopting AI tools ...
Professor Gabriel Weil will discuss the role that tort law can play in compelling AI companies to internalize the risks ...
How can large language models (LLMs) transform the way lawyers, researchers, and the public interact with the law? Join us for a hands-on conversation about the potential of LLMs to make sense of ...
Research by Faculty Associate James Riley suggests that resistance to AI automating jobs arises not from ethical objections to devaluing human labor, but from concerns about the feasibility of this ...
Faculty Associate Virgilio Almeida and coauthors explore governance structures and forms of institutional oversight to maintain human control over agentic AI.
Trebor Scholz and Mark Esposito provide guidance for building community-owned alternatives to extractive AI systems.
In an interview with The Harvard Gazette, Faculty Co-Director Rebecca Tushnet explains the legal difficulties that AI ...
As people engage in a digital platform ecosystem where the boundaries blur between leisure, play, and work, what key challenges and opportunities do they face? Who can develop skills through their ...
Researchers from Harvard’s Insight and Interaction Lab built an interpretability dashboard that shows a chatbot’s internal assumptions about a user — such as age, gender, class, and race — making ...