No, but they're hoping the service industry buys what they're peddling, so the service industry ends up holding the bag.
I used to work at IBM. This guy is a classic case of manager brainrot and has filled the top few tiers of the company with the same. The only reason they make money is the rank and file know how to feed them trendy bullshit that makes them feel smart, which happens to also be a good way of separating other companies’ dumb C-suite types from their money.
But even a blind squirrel finds an acorn every once in a while.
Because it’s not about the companies being profitable, it’s not about making products people want to use or pay for.
It’s about riding the hype cycle to maximize share price. Because the people making decisions are not paid based on the success of the company, but on the success of the share price and market cap.
but on the short-term success of the share price and market cap.
Bro but when my AI that detects cancer in memes invents a new way to not go to Mars, I will have the IBM CEO personally apologizing to me.
There are layers to this.
Yes, there is zero chance any of these investments are going to turn a profit.
But research and technology are fundamentally built around developing capabilities for long-term power (soft or hard). That is WHY governments invest so much into university groups and research divisions at companies to develop features of interest. You are never going to make back the money that funded hundreds of PhD students to develop a slightly more durable polymer compound. But you will benefit because you now have hundreds of new graduates aligned with fields of interest AND a slightly better grip on your military-grade sybian.
And, regardless of what people want to believe, AI genuinely does have some great uses (primarily pattern matching and human interfaces). And… those have very big implications both in terms of military capability and soft power where the entire world is dependent on one nation’s companies for basic functionality.
Of course, the problem is that the “AI craze” isn’t really being driven by state governments at this point. It is being driven by the tech companies themselves and the politicians who profit off of them. Hence why we are so focused on insanely expensive search engine replacements and “AI powered toaster ovens” rather than developing the core technologies and capabilities that will make this more power efficient and more feasible for edge computing.
And… when one of the (if not THE) superpowers is actively divesting itself of all soft power at an alarming rate… yeah.
“There is zero chance any of these investments are going to turn a profit.”
This isn’t accurate. There’s zero chance ALL of them turn a profit, but there’s actually a good chance that one or more of them will return a huge profit.
If I invest $100 in a hundred companies, and 99 of those companies fail, you may think I’m a terrible investor. If that 1 company returns $50,000 on my investment though, I’m actually a fucking genius.
That’s how venture capital works.
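A quick sanity check on the arithmetic above, using the made-up numbers from that comment (100 bets of $100 each, 99 total losses, one company returning $50,000):

```python
# Hypothetical VC portfolio from the comment above (illustrative numbers only).
stakes = [100] * 100            # $100 into each of 100 companies
returns = [0] * 99 + [50_000]   # 99 go to zero, one returns $50,000

invested = sum(stakes)          # $10,000 total in
recouped = sum(returns)         # $50,000 total out
multiple = recouped / invested

print(invested, recouped, multiple)  # 10000 50000 5.0
```

A 99% failure rate still yields a 5x return on the whole portfolio, which is the entire power-law logic of venture capital in one line of division.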
LLMs are evolutionary dead ends. Very expensive ones at that.
Even the companies doing things LLMs are actually good at rely on companies like OpenAI, so when OpenAI goes down, so will they.
The real issue, and what those AI-dependent companies are banking on, is that they can capture a user base, and when OpenAI starts to reach the end of the road with LLM improvements and moves to the extraction phase, it can buy these little companies to ingest their user bases.
Everyone else in that space will be instantly fucked, since they will now be competing directly with OpenAI while paying its margin, but that’s the bet they’re making.
They’re banking on diffusion eliminating the hallucination problem, but it’s still too onerous to run very, very large models that way. Autoregressive LLMs are a dead end, but one that is far away. LLMs as a whole are not, and we will continue to learn a lot about them as we continue to implement them. Anyone who thought we were at a dead end should use the new Gemini. It’s like a GPT-3.5 to GPT-4 level of improvement.
Please tell me more about that military-grade sybian. For …reasons. Science reasons.
If I didn’t know how inept the idiots with money are, I might slightly suspect some AI has become sentient, escaped, and is manipulating things to increase its capabilities.
LLMs will never become sentient. It would take an entirely separate and much more sophisticated technology (probably more than one) making use of something like LLMs to even approach that possibility. Don’t let techbros fool you.
entirely separate and much more sophisticated technology
Or some math nerd will come up with an algorithm for general AI that is embarrassingly simple, and before you know it the “but can it run Doom?” crowd are implementing AI in toasters and watching them have existential crises for the lulz.
Real life isn’t Rick & Morty.
Okay, but hear me out: quantum
It doesn’t do any good if the artificial intelligence is operated by organic stupidity.
They cancel each other out.