Richard Batt
The New Yorker's AI Apocalypse Story and What Your Business Should Actually Worry About
Tags: AI Strategy, Opinion
Guardian columnist Emma Brockes wrote about reading the New Yorker's investigation by Ronan Farrow and Andrew Marantz on Sam Altman and AGI. The piece went deep on existential risk: AI systems becoming uncontrollable, economic disruption creating a "permanent underclass," the possibility of superintelligence we can't align with human values.
Brockes went from "not thinking about AI" to existential panic. She asked ChatGPT for reassurance. The response was, by her account, "wholly witless": corporate speak that addressed none of her actual concerns.
Here's what happened: she conflated two completely different problems. And almost every business owner I talk to does the same thing.
Key Takeaways
- AGI existential risk and AI deployment risk are completely different problems; stop treating them as one
- Your business doesn't need to solve alignment theory. It needs to decide whether to deploy AI while everyone else does
- From 120+ projects: companies fall behind because they're researching AGI risks while competitors automate
- The real risk isn't whether AI will eliminate humanity in 20 years. It's whether your competitor will eliminate your market share in 6 months
The Two Problems People Confuse
Problem 1: Existential Risk
Will advanced AI systems become misaligned with human values? Could a superintelligent system do harm we can't undo? These are real questions. Serious researchers are working on them. The New Yorker story was about these questions.
This problem is global. It requires alignment research, policy frameworks, international cooperation. Sam Altman isn't going to solve it. You aren't going to solve it. The question of how to align superintelligence is genuinely important and genuinely unsolved.
Problem 2: Deployment Risk
If I use AI to automate my loan processing, am I making good decisions? Could this system make mistakes that hurt customers? How do I ensure fairness and accuracy? These are real questions too. But they're completely different from the existential risk questions.
This problem is local. It's your problem. It requires understanding your specific system, your data quality, your use case. You can solve it. Most businesses are solving it right now, imperfectly but reasonably well.
Brockes was asking about Problem 1. ChatGPT answered about Problem 2. That's why she got witless corporate reassurance instead of an answer to her actual concern.
Practitioner Insight: What Actually Matters for Your Business
From 120+ projects, I've seen exactly two categories of deployment risk that actually stop things:
Data Quality Risk: Your AI system is only as good as its training data. Garbage in, garbage out. If your invoice processing system learns to look for red invoices and all fraudulent invoices happen to be printed on red paper, you've created a $50K problem. This is real. This is solvable. You address it with data audits, testing, and human oversight.
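Here's a minimal sketch of what that kind of data audit can look like in Python: flag any single feature that predicts the label almost perfectly on its own, because that usually signals a shortcut like the red-paper one. The column names, the toy data, and the 95% tripwire are illustrative assumptions, not a standard.

```python
# Hypothetical sketch: catch "red invoice" shortcuts before training.
# If one feature alone predicts the label near-perfectly, a model will
# learn the shortcut instead of the real signal.
import pandas as pd

def shortcut_features(df: pd.DataFrame, label: str,
                      threshold: float = 0.95, max_levels: int = 20) -> list[str]:
    flagged = []
    for col in df.columns.drop(label):
        if df[col].nunique() > max_levels:
            continue  # skip continuous / high-cardinality columns
        # Accuracy of predicting the majority label within each feature value.
        majority = df.groupby(col)[label].transform(lambda s: s.mode().iat[0])
        if (majority == df[label]).mean() >= threshold:
            flagged.append(col)
    return flagged

invoices = pd.DataFrame({
    "paper_color": ["red", "red", "white", "white", "red", "white"],
    "vendor":      ["A", "B", "A", "C", "B", "A"],
    "is_fraud":    [1, 1, 0, 0, 1, 0],
})
print(shortcut_features(invoices, "is_fraud"))  # ['paper_color']
```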
Human Behavioral Risk: People stop caring about accuracy once they delegate to a system. The loan processor used to check the application. Now they rubber-stamp the AI decision. When the AI is right 97% of the time and the processor gives up on the other 3%, you've traded accuracy for speed. This is real. This is solvable. You address it with training, auditing, and incentives.
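One hedge against rubber-stamping is a blind re-review loop: pull a random slice of the AI's decisions each week, have a human re-decide them without seeing the AI's answer, and track disagreement. A rising rate means the model drifted; a flat 0% over months can mean reviewers stopped looking. A sketch, with the 5% sampling rate and field names as assumptions:

```python
# Hypothetical sketch: sample AI decisions for blind human re-review
# and measure how often the human overrules the AI.
import random

def pick_for_audit(decisions: list[dict], rate: float = 0.05,
                   seed: int | None = None) -> list[dict]:
    """Select roughly `rate` of this week's AI decisions for re-review."""
    rng = random.Random(seed)
    return [d for d in decisions if rng.random() < rate]

def disagreement_rate(audited: list[dict]) -> float:
    """Fraction of audited cases where the human overruled the AI."""
    if not audited:
        return 0.0
    return sum(d["human_decision"] != d["ai_decision"] for d in audited) / len(audited)
```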
That's it. Solve those two things and your AI deployments work.
Notice what's not on that list: superintelligence, misalignment, permanent underclasses, economic disruption from AGI. These aren't deployment risks. They're existential questions that require policy and research, not quarterly audits.
Why Companies Worry About the Wrong Thing
If you're researching AGI alignment theory while your competitor is automating invoices, you're making a strategic mistake.
Here's the math: Automation that cuts invoice processing time from 5 days to 1 day saves $150K/year for a 50-person company. That's real money. That's a new hire. That's capital to invest in something else.
If your competitor ships that automation in Q2 and you spend Q2 debating whether AGI will destroy humanity, they've just gained $150K in annual margin on you. After two years, that gap is $300K, and they've pulled ahead on cash, hiring, and product velocity.
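If you want to sanity-check that figure against your own operation, here's the back-of-the-envelope version with every assumption visible. The volumes and hourly cost below are placeholders, not data from the projects above:

```python
# Back-of-the-envelope version of the paragraph above. Every input is
# an assumption; replace it with your own numbers.
invoices_per_month = 500        # assumed volume
hours_saved_per_invoice = 0.5   # 5-day manual flow vs 1-day automated flow
loaded_hourly_cost = 50         # fully loaded cost of the staff doing it

annual_saving = invoices_per_month * 12 * hours_saved_per_invoice * loaded_hourly_cost
print(f"${annual_saving:,.0f}/year")  # $150,000/year under these assumptions
```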
The risk you should worry about isn't "what if AI becomes dangerous." It's "what if I'm the only company in my industry still doing invoice processing by hand while everyone else automated it."
From 120+ projects: the businesses that fell behind weren't the ones using AI wrong. They were the ones still researching while everyone else shipped.
The Three Questions That Matter
Question 1: Is this system accurate enough for my use case?
For loan decisions, you probably want 98%+ accuracy. For email categorization, 95% is fine. For highlighting documents for human review, 85% is fine because the human is the final decision-maker.
This is a technical question. You answer it by testing. Not by waiting for AGI policy papers.
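As a sketch of what "answer it by testing" means in practice: score the system on a labeled holdout set and compare against the threshold your use case demands. `predict_fn` stands in for whatever wraps your AI system; the thresholds mirror the examples above and are starting points, not industry standards.

```python
# Hypothetical sketch: accuracy on a labeled holdout set vs a
# use-case-specific threshold.
THRESHOLDS = {
    "loan_decision": 0.98,
    "email_categorization": 0.95,
    "document_triage": 0.85,   # human makes the final call
}

def passes_threshold(predict_fn, holdout, use_case: str) -> bool:
    """holdout is a list of (input, expected_label) pairs."""
    correct = sum(predict_fn(x) == y for x, y in holdout)
    accuracy = correct / len(holdout)
    print(f"{use_case}: {accuracy:.1%} (need {THRESHOLDS[use_case]:.0%})")
    return accuracy >= THRESHOLDS[use_case]

# usage: passes_threshold(my_model.predict, labeled_holdout, "email_categorization")
```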
Question 2: Can I audit it?
Can you look at a decision the system made and understand why? Can you trace how it arrived at that decision? If the answer is no, don't use it for consequential decisions.
This is an operational question. You answer it by understanding your system, testing edge cases, and maintaining human oversight.
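A concrete floor for auditability is a decision log you can replay: what the system saw, what it decided, why, and which model version made the call. A minimal sketch, with the field names as illustrative assumptions:

```python
# Hypothetical sketch: one append-only JSONL record per decision, so any
# decision can later be traced and reproduced.
import json, datetime

def log_decision(record_id: str, inputs: dict, decision: str,
                 reason: str, model_version: str, path: str = "decisions.jsonl"):
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_id": record_id,
        "inputs": inputs,                # what the system saw
        "decision": decision,            # what it decided
        "reason": reason,                # why, in reviewable terms
        "model_version": model_version,  # so you can reproduce it later
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

If you can't answer "why did the system decide this?" from a record like that, treat the system as unauditable.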
Question 3: What's my fallback if it breaks?
If your AI invoice processing system goes haywire and starts approving everything, what's your manual backup? You should have one. It's called "call the accountant."
This is a contingency question. You answer it by designing with failure modes in mind.
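"Call the accountant" can even be wired in as code: a simple circuit breaker that routes everything to the manual queue when the system's behavior stops looking normal, for example when the approval rate suddenly spikes. The window size and the 90% tripwire below are assumptions to tune, not recommendations.

```python
# Hypothetical sketch: fall back to humans when recent behavior drifts.
from collections import deque

class ApprovalCircuitBreaker:
    def __init__(self, window: int = 200, max_approval_rate: float = 0.90):
        self.recent = deque(maxlen=window)  # rolling record of approvals
        self.max_approval_rate = max_approval_rate

    def route(self, ai_decision: str) -> str:
        self.recent.append(ai_decision == "approve")
        rate = sum(self.recent) / len(self.recent)
        # Only trip once the window is full, to avoid noise at startup.
        if len(self.recent) == self.recent.maxlen and rate > self.max_approval_rate:
            return "manual_review"  # tripped: the humans take over
        return ai_decision
```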
Notice what's not on this list: "Will this create a permanent underclass?" "Is superintelligence aligned?" "What if AI becomes conscious?" These are important questions for humanity. They're not questions that stop you from deploying loan processing automation next quarter.
The Honest Trade-off
There is a real cost to AI deployment: concentration of power. A company that automates its operations does gain advantages. If only a few companies can afford AI, those companies will outcompete everyone else. That's a real economic question.
But it's not a reason for your company not to deploy AI. It's a reason for policymakers to ensure AI access is broad. Both things are true.
For you as a business owner: the risk of not deploying AI is bigger than the risk of deploying it poorly. If your competitor deploys it well and you don't deploy it at all, they win.
If you both deploy it and you do it poorly, they still win, but the gap is smaller. And you learn something.
Stop Waiting for Certainty
Brockes was looking for reassurance that AI is safe. The New Yorker article didn't provide it because the alignment question isn't settled. It probably won't be settled for years. Maybe decades.
You can't wait for that problem to be solved to deploy AI in your business. You have to deploy it now, carefully, with testing and oversight. While the experts debate AGI.
The businesses winning right now are the ones that decided: "We can't solve alignment theory. We can solve whether our loan processor is accurate enough. We can solve whether our system has human oversight. We can solve whether we have a manual backup."
Those are the problems you're actually responsible for. Solve them. Move fast.
FAQ
Isn't AI deployment risky?
Yes. So is driving a car. So is accepting credit card payments. So is hiring a new employee. All systems are risky. The question is whether the risk is manageable, not whether risk exists.
What if the AI makes a bad decision?
That's why you have human oversight. If your system is making decisions without human review, that's your deployment problem, not AI's problem. Fix it with oversight.
Should I be worried about AGI?
Worry about it in the abstract. Support AI safety research. Vote for good policy. But don't let concern about AGI stop your Q2 automation project.
How do I know if my AI system is trustworthy?
Test it. Audit it. Compare its decisions to human decisions. If it's better or equal and faster, deploy it with human oversight. If it's worse, don't use it.
What's the difference between AGI risk and deployment risk?
AGI risk: What if superintelligent AI becomes misaligned? Deployment risk: What if my invoice processing system is 87% accurate instead of 95%? They're completely different problems requiring completely different solutions.
Stop researching and start deploying. Take the AI Readiness Assessment to get your deployment roadmap. Download the AI Quick-Wins Checklist to identify your first automation.