AI Is Rewriting the Rules of Business—And Wall Street Is Forcing Everyone to Play
From search engines demanding website overhauls to billionaires gatekeeping IPO advice, artificial intelligence has stopped being a future concern. It's a present power tool. Here's what actually matters.
The scramble has begun. Not the polite, measured kind you see in boardrooms with quarterly reports. The scramble—the one where companies are rewriting their entire digital presence because Google’s search results now have a new master to please, and it’s an AI that doesn’t care about your carefully optimized keywords.
Businesses are changing how they present information on websites to get noticed by AI search. This isn’t some distant future thing. It’s happening right now, in March 2024, and it’s creating the first visible crack in how the internet actually works.
Here’s the thing nobody wants to say plainly: we’re watching a controlled demolition of the old internet, live.
When Your Business Model Becomes a Subscription Line Item
Let’s start with the most naked power move of the year. Elon Musk is requiring Wall Street firms to purchase subscriptions to Grok—his AI chatbot—if they want to advise on SpaceX’s IPO. One of the largest IPOs in history, and the price of entry is: pay to use my AI.
This isn’t a negotiation. This is a gate.
For a decade, we’ve heard tech CEOs talk about democratizing this and empowering that. What we’re actually watching is consolidation dressed up as innovation. Musk didn’t invent a new principle here; he just made the extraction mechanism visible. Every platform that tanks your search visibility unless you optimize for its algorithm is doing the exact same thing. He’s just demanding payment upfront instead of hiding it behind opaque ranking systems.
The banks’ll pay, obviously. They’ll grumble, their CFOs will note it as a line item, and then they’ll pass the cost to clients. But the precedent matters. It establishes that access to major financial opportunities now requires tribute to the AI overlords.
Meanwhile, OpenAI bought a streaming show called “TBPN.” Not to make money from it, probably—to manufacture consent. The company framed the purchase as creating “a space for a real, constructive conversation about the changes A.I. creates.” Translation: we’re going to control the narrative about us. If Congress is going to regulate us, we want them seeing the story we tell first.
This is smart PR disguised as art. It’s also propaganda, but let’s not pretend that’s unique to OpenAI. The difference is the stakes. We’re not talking about a phone company buying a newspaper. We’re talking about an AI company buying a content platform specifically to shape how people think about AI companies.
The AI Moment We’re Not Talking About
There’s a story in here that nobody’s focusing on, and I think it matters more than the Musk-sized drama.
Two brothers built a $1.8 billion company with mostly AI doing the corporate work. How many employees? Not many. This isn’t a fluke—it’s a proof of concept. It’s efficient. It’s also, as the reporting noted, “a little bit lonely.”
That offhand observation is doing a lot of work. What it’s actually saying is: the future is productive, and the future is isolating.
The efficiency gains are real. The labor displacement is real. And we don’t have policy for it yet. The two brothers and their AI aren’t breaking any laws. They’re just… winning? Scaling? It’s hard to even articulate what’s happening because our language for employment and value creation assumes humans.
Here’s my genuine uncertainty: I don’t know if this scales. Can two humans plus AI run a $10 billion company? A $50 billion one? Or does complexity eventually require actual people making judgment calls that AI still can’t be trusted with? I’m not going to pretend I know.
But I’ll bet you this—by Q4 2024, we’ll see the first IPO filing that prominently features a minimal human headcount as a feature, not a bug. “Look how efficient we are! Look how few people we pay!” Some investment firm will use it as a selling point. And then another will copy it. And then it becomes normal.
The Weird Cultural Reckoning
Meanwhile, in Beijing, an AI assistant started a “raising lobsters” frenzy. Users trained the tool to suit their needs. It went viral. This is China’s version of ChatGPT adoption, but it’s revealing something different—not just that AI is useful, but that people want to teach it, shape it, anthropomorphize it as a companion rather than a tool.
In the UK, fewer adults are posting on social media, according to Ofcom. Experts think it’s tied to the shift toward short video: TikTok’s format is winning, the old feed is dying. But here’s what’s not being said: maybe people are also just tired. Maybe the algorithmic squeeze on engagement has hit a fatigue ceiling.
Simultaneously, millions are playing games about mundane jobs. PowerWash Simulator 2 is nominated for a BAFTA Games Award. People are paying money to simulate doing work that’s tedious in real life.
Why? I think it’s because these games offer the only thing modern life doesn’t: completion. You wash the wall, the wall is clean, the task is done. The dopamine hits are real and immediate. There’s no algorithmic feed suggesting you didn’t wash it hard enough. There’s no notification saying someone else washed their wall faster. Just you, your power washer, and a visible outcome.
That’s not entertainment. That’s therapy.
The cultural inversion is wild: we’re building AI to do our work for us while playing games that simulate simple work, and we’re spending less time on social platforms designed to maximize our attention. The systems are working. Just not in the way anyone predicted.
The Cybersecurity Time Bomb
One more thing that should scare you more than it does: AI is coming for cybersecurity, and the defense is also AI.
New AI systems let attackers move at machine speed. The good news is that companies like Anthropic and OpenAI are building AI-powered defenses. The bad news is that we’re officially in an arms race where both sides keep getting faster, and humans can no longer audit either one.
This is the part where I’m supposed to have a calm take, but I genuinely don’t. We’ve built systems that can attack faster than humans can understand the attacks, and we’re racing to build defense systems that work even faster. The humans in the middle—your security team, Congress, regulators—are already lost.
Someone, somewhere, is going to get hacked in a way that takes three weeks to even detect because the AI attack was so novel nobody recognized the pattern. This will happen in 2024 or 2025, and it’ll be the moment people realize we’ve crossed a threshold where the system is no longer comprehensible to the people responsible for it.
What I’m Watching
- Website SEO rewrites hitting critical mass (Q2 2024): Track how many major sites announce an “AI search optimization” strategy. When it becomes industry standard, we’ll know the old internet is officially dead.
- First minimal-headcount IPO filing (Q3-Q4 2024): Watch for a company whose S-1 makes employee efficiency a marquee selling point. That’ll be the cultural inflection point, when Wall Street decides the future is lean and human-free.
- Regulatory response to Musk’s Grok-IPO requirement: Will the SEC say anything? If not by June 2024, we’ll know they’re asleep at the wheel.
- First major AI-enabled cyberattack that goes undetected for weeks (Q2-Q4 2024): This is inevitable. Watch for the CISO conference talks that acknowledge they can’t keep up.
The future isn’t coming. It’s here, fractured across a dozen different battlegrounds, and nobody’s winning yet because the rules haven’t been written. The question isn’t whether AI changes everything. It’s whether we stay awake long enough to write the rules before the lobsters finish raising themselves.