The first two, I’d agree, are highly regulated. I’m not sure I would consider law to be “highly” regulated, just regulated. But there are lots of corporate jobs that aren’t in those or other highly regulated categories.
That sounds more like robotic process automation to me and not AI. Now of course if you’re talking about an unsophisticated investor or just any person off the street, sure I could see they would confuse the two. But everything I have seen in my industry has been pretty clear on the difference.
In other words, it’s just a more sophisticated Google search. I don’t believe a Google search leads to any increase in lawsuits or increased vulnerability to them. I would agree that it could lead to an increased chance of plagiarism, unverified output, etc., since AI can put things in a natural-language format very easily. Someone who is lazy, lacks the requisite knowledge, or otherwise doesn’t check the output could certainly commit malpractice, but I don’t know that the same person wouldn’t have the same problems even without AI; they’d just have to put in a bit more time.
The real help these days with AI via ChatGPT and such is the natural language processing. Being able to ask a question, have a conversational interaction with my research, and get a more natural-language response just makes things a lot easier. I absolutely check what I’m getting from AI; I still take the time to look at the sources and form my own understanding, conclusions, and determinations on the factual basis. I’ve also seen from experience that AI can give you incorrect, outdated, or misinterpreted answers; it has some reliability issues. I guess the difference is, I’m not relying on AI to make decisions. I’m also not making decisions on anything where I used AI for research without client input, or without discussing it with other professionals on the team.
Certainly if you’re using AI to write a research paper, legal brief, or tax audit defense letter, and using the output verbatim, then you’re at risk of a lawsuit or other negative action because you did not actually put in the professional due diligence. The thing is, you can have that kind of failure without AI.
@3rdWardCoog2 what is it that you do professionally?
Corporate finance, but focused on the energy sector
That’s as specific as I will get lol
That answers my question perfectly, didn’t need anything more specific.
So AI is just predictive text?
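For readers wondering what “predictive text” actually means here, below is a toy sketch (my own illustration, not anything from this thread): a bigram model that suggests the word most often seen after the previous one in a tiny sample corpus. Large language models are loosely analogous in that they predict the next token, but from vastly more context and data; this is only a minimal caricature of the idea.

```python
from collections import Counter, defaultdict

# Tiny sample text to "train" on (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" twice; "mat" and "grass" once each
```

The point of the sketch is just that "prediction" comes from counted patterns in prior text, which is why output can sound fluent while still being wrong.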
Yikes, even Microsoft isn’t immune.
Minimum wage increases and its impact over the years.
The first example has been there for everybody to see at the McD’s counter.
But ask yourself this question: without robots/AI, would Amazon even be in business today?
AI has been with us for much longer than anyone thinks. The auto industry is another prime example.
I do use ChatGPT for my emails and partly for my presentations. What I am most concerned about is AI’s role in school. Teaching is for students to learn and reason, not to copy-paste. That is my biggest AI fear.
Without basic teaching/learning our entire society is f…d.
No, I remember when there were several competing websites in the early days. Amazon innovated, embraced and took advantage of technology to distance itself. You gotta admire that. Heck any big company isn’t run without software. FedEx? UPS? Those companies couldn’t stay in business without software and AI.
I think AI naturally lends itself to a societal existential crisis when all is said and done
Great, as if rental cars aren’t enough of a potential rip off.
Not replacing the EEs that design, debug, validate and manufacture these solutions…
It will, and already has, reduced some entry level SW development roles… but has also created new roles for AI SW dev…
That’s true, but isn’t Hertz getting rid of the employees who used to do this manually?
Yikes.
I was thinking that AI is being used very badly. I’m sure for some big things, like more efficient fuel systems and medical systems, it is used well, but individuals and small and mid-size companies should be using it to solve problems, make improvements, and innovate; some are, but not many.
I’d say it sounds like they aren’t using AI, or are just now learning how to use it, rather than using it badly.
The more time passes, the more I think society is incompatible with AI. Jobs aside, as this stuff gets better, it will get so, so much easier to use it to commit fraud and spread disinformation. It’s already hilariously easy to impersonate someone over the phone, and live video isn’t that far off. It’s trivial to spread countless photos and videos and soundbites purportedly of news events, at a time when traditional news viewership and trust in news organizations as a source of truth are at an all-time low. We’re rapidly accelerating (more than we already were!) into a world where everyone has a personal reality custom-tailored to engage them on a hormonal level.
I don’t think there’s a technology that’s ever been more concerning.
I’m not as worried about the paradigm shift in employment as about the effect on people’s mental health, and also the energy consumption required to run these AI datacenters. Between AI and crypto, I don’t see how we avoid moving to more nuclear power to sustain these industries.
Agree with all of the concerns above. Two additional ones:
- AI outputs are subject to the programming decisions of their creators. We’ve already seen this with Grok, including this week when xAI began allowing it to offer “politically incorrect” answers and it immediately began calling itself MechaHitler.
Grok’s antisemitic outbursts reflect a problem with AI chatbots | CNN Business
The Grok chatbot spewed racist and antisemitic content : NPR
- As people become more and more entrenched in using AI, the impacts on human intelligence and critical thinking will be staggering. This is an early, non-peer-reviewed study, but it’s still very concerning.
ChatGPT’s Impact On Our Brains According to an MIT Study | TIME