AI vs. the Pentagon | Jasmine Sun
This has nothing to do with national security or anti-wokeness or anything like that. It is about striking fear into the hearts of any person or company—no matter how wealthy—who dares cross the admin. It is rule by fear and deterrence and chilling effect.
There is a world where AI labs figure out interpretability and steerability (woo!), but still introduce tremendous risk because they are subject to other incentives—the market, a nation-state, a conniving CEO—that aren't aligned with our own.
Do what we say, or else we will kill you. Even Dean W. Ball, who authored Trump's AI Action Plan, called Hegseth's move “attempted corporate murder.”
Better technology alone does not automatically lead to human flourishing—even great capabilities need a friendly environment in which to diffuse. A vaccine is only as good as the number of people who get it; task automation is more fun when you sit above the API.
Many AI researchers are overly focused on risks from model misalignment and will be in for a rough surprise when havoc arises from other layers of the stack.