It’s already getting Alzheimer’s?
For kicks, not too long ago I asked the Microsoft Bing AI a technical question that I already knew the answer to but couldn’t remember where it was cited in the TX Administrative Code. It provided me with an incorrect answer based on a chat forum response from 2010. I told it that it was wrong and had cited a dubious internet opinion, and it replied, “I’m sorry, I don’t want to have this conversation anymore.”
Machine learning models, the most common form of AI, are notoriously bad at math. In lay terms, this is because they approximate an answer rather than compute it. The ability to “learn” and approximate is what makes ML so powerful, but its Achilles’ heel is that it’s really, really bad at math. Your average second grader is better.
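The approximate-vs-compute distinction can be shown with a toy example (my own sketch, not from the thread): a model fitted to arithmetic examples only approximates the answer, while ordinary code computes it exactly.

```python
def fit_line(pairs):
    """Least-squares fit y ≈ a*x + b from (x, y) training examples."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Train" on squaring: the best straight line through y = x^2 for x in 0..10.
a, b = fit_line([(x, x * x) for x in range(11)])

exact = 7 * 7        # computed: always 49
approx = a * 7 + b   # learned: plausible-looking, but wrong (55.0 here)

print(exact, approx)
```

Real neural networks are far more flexible than a straight line, but the failure mode is the same in kind: the output is a best fit to training examples, not the result of carrying out the arithmetic.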
With that as context, ChatGPT is a large language model, not a traditional ML model. I suspect that it suffers from the same problem as ML but will have to look into it further when I get some time.
Food for thought: both Stephen Hawking and Elon Musk have speculated that AI will eventually lead to the extinction of humans.
Edit: I’m not banging on LLMs like ChatGPT at all. In fact, we just developed one for a state agency that’s very intuitive and powerful. Given that they’re relatively new, and given the hype and unrealistic expectations that seem to accompany anything AI-related, understanding their capabilities and weaknesses is important.
I use Bard a lot now, and ChatGPT when I want a second opinion.
I use it to write automation scripts OFTEN… Still working.
ChatGPT is not AI
What is it then?
It’s a large language model (LLM) that is widely accepted as an AI technique. This is the first time I’ve encountered a different viewpoint.
To be precise, ChatGPT is the conversational interface in front of the actual LLM, but that’s splitting hairs.
It is a language model that sources responses to prompts using text from the internet, without providing the source itself or being able to identify whether it’s correct or incorrect.
It’s very impressive technology with extreme repercussions, but it’s not the AI in the way that many people think it is. ChatGPT is obviously not sentient, and it doesn’t actually know what it’s doing. It’s simply following an algorithm that builds off of previous responses. It doesn’t know when it’s incorrect and will confidently provide incorrect answers in a manner that seems correct to average people.
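The “simply following an algorithm that builds off of previous responses” point can be sketched in miniature (the corpus and function names here are my own invention): generate text by repeatedly predicting the most likely next word given the previous one, with no notion of truth anywhere in the loop.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a tiny bigram "language model").
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, n=4):
    out = [word]
    for _ in range(n):
        if word not in following:
            break
        # Greedy decoding: always pick the most frequent continuation.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # → "the cat sat on the"
```

Real LLMs condition on far longer contexts with billions of parameters, but the generation step is the same shape: pick a statistically likely continuation. Fluent output and factual correctness are therefore separate questions, which is why confident wrong answers come out looking just like right ones.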
ChatGPT being able to pass exams such as the GMAT or the bar exam is about as impressive as a human with no foundational knowledge of either exam passing them while having the internet right next to them.
Can’t disagree with 3rd ward. We’re both right
And it’s very useful for sprucing up my poor marketing and proposal language.
I’m not saying it’s a useless technology. I do think LLMs pose a risk to many jobs, especially among white-collar workers.
AI: “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Looks like it meets the definition.
ChatGPT still relies on human input
The sources of its responses are texts from the internet that were originated by humans.