
As a student in an emergency management program (I’m cramming my two-year master’s degree into like 14 years), I’m finding discussions of artificial intelligence safety to be pretty interesting. Some people have great ideas, while other people don’t. But, one thing I didn’t expect to see was a group trying to apply anti-gun logic to AI.
After looking over their position and thinking about the wider situation, I found something that readers here would probably be interested to see. In short, anti-gun logic fails when applied to other situations too, and for pretty much the same reasons.
Before I can discuss why their logic predictably fails, I need to get you up to speed on the AI safety debate in as few words as possible. The fear is that a future (perhaps a very near future) AI program will be more intelligent than all people at every task. Such a superintelligence could cause humanity serious problems if its goals aren't aligned with ours. For example, if we told that kind of AI to maximize the number of paper clips made, it could go out of control and try to make paper clips out of the iron in our bloodstreams once it runs out of other sources.
In other words, AI doesn’t have to be evil to be a problem. Poorly thought out goals that our existence runs afoul of could be enough to get us all killed. At least theoretically.
Some people who are afraid of AI going full Terminator on us are asking for government regulation that imposes AI safety standards on companies that are developing these programs. But one group I came across recently is going all-in on government power and is calling for a complete ban on artificial superintelligence.
Specifically, Control/AI (a more technical and cool way of saying “AI control”?) is calling for the following policies:
- Banning the deliberate development of superintelligence
- Prohibiting dangerous AI capabilities and superintelligence precursors, such as automated AI research and hacking
- Requiring companies to demonstrate that an AI will not use forbidden capabilities before they run it
- Creating a licensing system for advanced AI development
As I’ve explained elsewhere, government regulation of AI companies isn’t a cure-all for this kind of risk. Such an approach might slow or temporarily halt AI development, because the massive data centers needed to create and run AI programs are physically enormous and use a lot of electricity. Running a rogue AI operation would be about as difficult as running a large illegal gun factory or a big marijuana growhouse. It’s just hard to hide something that big and power-hungry for very long.
But, just as 3D-printed firearms have already proven, going after the big players can’t keep a technology under wraps forever. The cost of computing power and the amount of electricity it requires have dropped precipitously over the past eight decades. The first electronic computers filled entire rooms, while today an app on your phone is vastly more powerful, takes up virtually no space, and uses almost no electricity.
Banning or heavily regulating advanced AI development won’t be possible once it becomes feasible to do that kind of research in mom’s basement. The technology will be developed by somebody. It’s just a matter of how much time we have to prepare.
Just as with guns, the fact that bans and heavy-handed regulation are ineffective doesn’t mean we have no other options for protecting the public from irresponsible actors. There are plenty of ways to limit the risks of rogue AI. Options include:
- Prohibiting the use of AI programs to control critical infrastructure (dams, power plants, refineries, pipelines, etc.)
- Limiting the amount of resources an AI program can have access to
- Prohibiting AI from controlling dangerous chemicals, explosives, and weapons in most situations
- Increasing preparedness for the misuse of AI (hackers, DIY builders of killer robots, etc.)
One thing is for sure. Failed anti-gun logic, with its outright bans, rigged permitting and licensing schemes, and other ludicrous control strategies that rely on government power, doesn’t work any better when it’s applied to things other than guns. Flawed authoritarian approaches will be flawed no matter where they’re applied.


Californians In Legal Jeopardy After (California) DOJ Issues Advisory In Rhode v. Bonta.
https://www.youtube.com/watch?v=uWo0bOsSp9Q
Comparing the infringement of a God-given right spelled out in the U.S. Constitution…to regulation of computer software whose primary purpose, up to this point, is to invade the privacy of ALL who come in contact with it.
I’m not surprised Sensiba would write this kind of drivel, but the fact SNW decided it was worth posting is disappointing.
She’s comparing the control, i.e. gun control and AI control: if that control is left unchecked, both can have the same outcome of ‘loss’ (loss of the 2A to the tyranny of people; loss of human control to the tyranny of AI), and in that respect they’re similar.
She is correct in her assessment, regardless of your opinion of the analogy she used.
BREAKING: (4th circuit) DECISION – COURTS CAN TAKE AWAY YOUR 2A RIGHTS (temporarily)…
The US Court of Appeals for the 4th Circuit has found that Americans who have been involuntarily committed to mental institutions can temporarily lose their 2nd Amendment rights. Mark Smith Four Boxes Diner discusses.
https://www.youtube.com/watch?v=4UdBSrqB3PI
Yes, data centers are big and take a lot of electricity. But all the public cloud providers (AWS, Google, Microsoft) run customers’ AI intermixed with customers’ other workloads.
Cloud providers rent out their capacity (CPUs, memory, storage, GPUs) to anyone with a credit card, and bill by the second for its use.
Developers don’t need to be physically present with the machinery, as they do with a gun factory or a marijuana grow.
Developers already commonly work anywhere there’s an Internet connection: in Mom’s basement connecting over the cable TV feed, at a table in Starbucks, in a van parked in the Kalahari connecting over Starlink, anywhere.
So yes, the technology is already democratized, even more so than industrial-age manufacturing of physical goods like guns.
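How low is that barrier in practice? Here’s a minimal sketch using AWS’s boto3 Python library, assuming valid account credentials; the AMI ID below is a placeholder, and g4dn.xlarge is just one example of a rentable GPU instance type:

```python
# Minimal sketch: renting a GPU-backed server on AWS, billed per second.
# Assumes AWS credentials are configured; the AMI ID is a placeholder
# (real IDs vary by region and operating system).
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="g4dn.xlarge",       # one example of an NVIDIA GPU instance
    MinCount=1,
    MaxCount=1,
)

print(f"Launched {instances[0].id}; metered billing starts now.")
```

No factory floor and no physical presence required; the same few lines work from a basement over cable internet or from a van on Starlink.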
Among lots of things we’ve lost control of: Rising Popularity Of Rent-A-Womb Industry Emboldens Pedophiles To Buy Proximity To Kids.
https://thefederalist.com/2025/07/29/rising-popularity-of-rent-a-womb-industry-emboldens-pedophiles-to-buy-proximity-to-kids/
You cannot stop the signal.
But private billion-dollar technology companies will work with Communist China and other authoritarian governments to suppress American civil rights.
They don’t need government regulations. They do it for free. To make more $$$.
“Prohibiting AI from controlling dangerous chemicals, explosives, and weapons in most situations” Tell that to the military. They are now adapting AI to fly “loyal wingman” drones, to allow regular drones to hunt out and exterminate the enemy while evading their own destruction, to run complex air defense operations, and so forth and so on. There is no way that governments will abandon such programs.