Enjoyable throughout with many questions I’ve never been asked before. A few highlights:
Why the timing of the DeepSeek freakout was irrational:
The December paper was the more impressive one. There are other open-source attempts to replicate R1, including a very interesting one at UC Berkeley. And [R1] was less surprising to many people than the December paper.
The December paper is where the $5.6 million cost figure comes from. It's where the big cost savings all come from. And it was just sitting there, and it didn't really get press. I don't think it really affected the market until January. It affected the academic world, I think. A lot of academics were taking it seriously, and I would assume some of the companies as well. But it didn't really move things until the second step was released.
I explain why the second Trump AI EO targeting two OMB memos for rescission is so important:
The second thing was they announced their new executive order on AI, which rolled back more of the red tape, or aims to roll back more of the red tape, and took particular issue with two memos from the Office of Management and Budget. [OMB has a] controversial position in the media now because it's one of the institutions DOGE is trying to use to cut costs. But it also has jurisdiction over what's called procurement. And these are rules for government purchases of AI tools, AI software and hardware, which can warp what the government and government contractors end up doing.
So lots of big tech companies—Meta, Microsoft, Google—they're all government contractors. And so this was one of the ways that they tried to meet these equity goals, through the government contractors. And Donald Trump also said we're going to review these OMB memoranda and essentially replace them.
On the origins of the modern counter-bioterrorism movement away from targeting state actors and towards targeting American citizens:
After 9/11, the approach to policing and counterterrorism really changed. It went from innocent until proven guilty to guilty until proven innocent. This is the source of the ongoing controversy over various programs in the post-9/11 surveillance apparatus that were used to target terrorists.
There are some claims that it's now being used against American citizens as well. And of course there were also claims at the time that it was being used against American citizens, the latter of which I believe is true. There are now court cases that were ruled in the defendant's favor where that was the case.
And this led to an approach where people became very uncomfortable with their fellow man. They thought, what if my neighbor is a terrorist? What if my neighbor is a domestic terrorist? What if my neighbor is going to create a bioweapon?
I think that's, number one, factually not true. We have Google once again. And number two, I think that's a really dangerous philosophy. I think that philosophy has led to a lot of the overreach we've seen in past years, including the overreach on AI. I should be clear: it explicitly has led to the overreach on AI. This is the motivation that some people cite.
And it's also led to a kind of decline of trust, and not just in terms of political disagreement. Some people say there's a decline of trust because people don't like governments from the opposing side. But no, I think this is much worse. It's much worse to believe that your neighbor is going to be a terrorist than it is to believe that the political party you don't like is bad. The latter we've had a lot of precedent for. The former we haven't really had a precedent for in America. And the global precedents are things like the Troubles and other periods of immense political violence, which I don't think ended well.