AI is the next big thing

Honestly, I’d expect the failure rate to be much higher with only 6-12 survivors.

Many vendors are contributing to the hype by engaging in “agent washing” – the rebranding of existing products, such as AI assistants, robotic process automation (RPA) and chatbots, without substantial agentic capabilities. Gartner estimates only about 130 of the thousands of agentic AI vendors are real.

If that figure of 130 real agentic AI vendors is accurate, I’d expect more weeding out.
OTOH, if the consortiums are country-driven or partially funded by governments,
that may be why there are 130 of them today. Nobody wants to get left behind.

AI isn’t real. But you can marry one.

AI next big thing: Then there’s this.

Texas family sues Character.AI after chatbot allegedly encouraged autistic son to harm parents and himself


What can you say about it all :man_shrugging:

The apparent hard-coded emphasis on praising Musk in all things is laughable.
And per the article, this is like the 3rd or 4th time it’s gone bonkers.

We have a long way to go to get to real AGI.

Yea, I find it pretty funny really.

I doubt we ever reach AGI.

And given the time it’s going to take to reach AGI (if it’s ever reached), there won’t even be capital available to get there in the first place.

AI investments in general are going to collapse, UNLESS, the president decides to use public money to keep it alive (which he’s doing). Also, it’s not just capital investment into AI firms, it’s also the massive over-deployment of capital right now to build Data Centers to power the damn AI.

If AI eventually collapses financially, then Data Centers will collapse with it. But hey… who typically cleans up major capital messes? It’s the taxpayer.

Americans have no idea what’s going on, and most people don’t even know what AI or AGI is.

Is your job being taken over by AI? Is that why you hate it?

My job is too highly regulated to be “taken over” by AI.

I don’t hate AI. I hate the capitalist agenda that surrounds AI because it’s causing more problems than it’s attempting to solve.

AI should not be subsidized by taxpayers, and we should not be building AI at the expense of significantly harming the environment / worsening climate change (which it’s doing).

And as I said, AI will collapse. It’s not generating returns for investors (hence why the government plans to get taxpayer money involved), and said investors will eventually pull their money out, which will cause a domino effect.

LLMs are not profitable. Companies like OpenAI will not generate a profit until they sell the very product that gets them private capital in the first place → AGI.

Google, META, Microsoft, etc. have the balance sheets to hedge their AI arms, but if AI collapses, they will likely shed those arms due to lack of profits from them.

I wouldn’t go that far, but it’s probably a 2040 or 2060 type thing. The very
way we are attempting to train it today to “learn” may be a dead-end trail.

If you play with the free versions of Grok or ChatGPT, you can see how they use some resources (facts) incorrectly to make partially true general statements.

I’m also coming to think it may be beyond the capacity of companies to really
develop AGI. It may require the resources of a country to accomplish. The holy
grail of fusion may be the model that will be required. And fusion research, as far as I know, is only done at the country (or consortium-of-countries) level.

ChatGPT’s brilliance is in its ability to predict text extremely fast. The brilliance is NOT the actual substance of the responses themselves.
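To make the “it just predicts text” point concrete, here is a deliberately tiny sketch: a bigram counter that predicts the most likely next word from a toy corpus. This is obviously not how ChatGPT works internally (real LLMs learn billions of parameters over subword tokens), but the task is the same next-token prediction, just done here with raw counts. The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus, purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

The model has no notion of truth or meaning; it only knows what tended to come next in its training data, which is exactly why fluent output and correct substance are two different things.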

But the technology behind ChatGPT is not what sells it. It’s the grift of calling ChatGPT “AI” or making the assumption that it is some sort of bridge to AGI. This is what’s attracting capital.

ChatGPT also relies HEAVILY on major energy consumption, which is both environmentally and financially unsustainable.

But as I said, the distance it takes to reach AGI is far too long to sustain investment (just like Nuclear Energy, which is trying to leverage AI/Data Centers to get investment as well).

Entire thing is a grift.


Sure you guys are way way smarter than all those MIT and Harvard grads that can’t see the true light.

lol which grads are you referring to?

The professors getting shunned for going to Eppie Island?

Deflect deflect deflect. I guess you lost.

I’m okay with calling ChatGPT, and its competitors, AI; just certainly not AGI.
These primitive versions may be the necessary bridges needed to get to AGI,
but yes, it’s a very expensive path to get to AGI. Commercial companies can probably productize today’s AI offerings (and already are!) to build revenue
streams from. There is definitely some hype out there in how companies are labeling
things, but there are real AI products out there today too.

Can we say the same for the MIT and Harvard grads from the 1950s and 1960s who thought we would only need a few computers for the entire country, or thought fission-generated electricity would be too cheap to meter, or thought fusion was just 50 years away? Point being, really bright people sometimes get it totally wrong.


You just compared me to Ivy League grads

What do you want me to say instead?

Orange man is an ivy grad. Is he smarter than you?

Maybe he is. I don’t get butt hurt admitting I’m not the smartest person in the world.

I never said I was the smartest person in the world

I’m giving you my argument along with literal evidence.