There are layers to this.
Yes, there is zero chance any of these investments are going to turn a profit.
But research and technology are fundamentally built around developing capabilities for long-term power (soft or hard). That is WHY governments invest so much into university groups and research divisions at companies to develop features of interest. You are never going to make back the money that funded hundreds of PhD students to develop a slightly more durable polymer compound. But you will benefit because you now have hundreds of new graduates aligned with fields of interest AND a slightly better grip on your military grade sybian.
And, regardless of what people want to believe, AI genuinely does have some great uses (primarily pattern matching and human interfaces). And… those have very big implications both in terms of military capability and soft power where the entire world is dependent on one nation’s companies for basic functionality.
Of course, the problem is that the “AI craze” isn’t really being driven by state governments at this point. It is being driven by the tech companies themselves and the politicians who profit off of them. Hence why we are so focused on insanely expensive search engine replacements and “AI powered toaster ovens” rather than developing the core technologies and capabilities that will make this more power efficient and more feasible for edge computing.
And… when one of (if not THE) superpowers is actively divesting itself of all soft power at an alarming rate… yeah.
“There is zero chance any of these investments are going to turn a profit.”
This isn’t accurate. There’s zero chance ALL of them turn a profit, but there’s actually a good chance that one or more of them will return a huge profit.
If I invest $100 in a hundred companies, and 99 of those companies fail, you may think I’m a terrible investor. If that 1 company returns $50,000 on my investment though, I’m actually a fucking genius.
That’s how venture capital works.
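The arithmetic behind that power-law bet can be sketched in a few lines (the numbers are the illustrative ones from the comment above, not real fund data):

```python
# Hypothetical power-law portfolio: 100 bets of $100 each,
# 99 total losses, and one outlier that returns $50,000.
stakes = [100] * 100            # $10,000 deployed in total
payouts = [0] * 99 + [50_000]   # 99 failures, one big winner

invested = sum(stakes)
recovered = sum(payouts)
multiple = recovered / invested

print(invested)   # 10000
print(recovered)  # 50000
print(multiple)   # 5.0 -- the portfolio 5x's despite a 99% failure rate
```

The point is that a single outlier dominates the aggregate: a 99% failure rate still leaves the overall fund up 5x under these assumed numbers.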
LLMs are evolutionary dead ends. Very expensive ones at that.
Even the companies doing the things LLMs are actually good at rely on companies like OpenAI, so when OpenAI goes down, so will they.
The real issue, and what those AI-dependent companies are banking on, is that they can capture a user base, and when OpenAI starts to reach the end of the road with LLM improvements and moves to the extract phase, it can buy these little companies to ingest their user bases.
Everyone else in that space will be instantly fucked, since they will now be competing directly with OpenAI while paying its margin, but that’s the bet they are making.
They’re banking on diffusion eliminating the hallucination problem, but it’s still too onerous to run very, very large models with it. Autoregressive LLMs may be a dead end, but one that is far away. LLMs in general are not a dead end, and we will continue to learn a lot about them as we continue to implement them. Anyone who thought we were at a dead end should use the new Gemini. It’s like a GPT 3.5 to GPT 4 level of improvement.
Please tell me more about that military grade sybian. For… reasons. Science reasons.