Yes, AI can exhibit racial bias, but it’s important to understand that it’s not a result of AI systems independently developing prejudice or hatred like humans might. Instead, AI bias often reflects and perpetuates existing societal inequalities and biases present in the data used to train the algorithms.
It’s worth noting that none of those myriad businesses (and I do mean none) are turning a profit right now; they’re all deeply in the red, some by orders of magnitude. Right now the monetary cost of AI is being footed by VC speculation and R&D funds, and it’s an open question whether it will be financially viable in the immediate future, to put things charitably. The case that we’re in a bubble is more credible than people realize, and it has little to nothing to do with whether the technology is actually good.
I’d encourage a large grain of salt when you see articles like these. It’s extraordinarily commonplace for companies to describe fairly basic and ages-old statistical methods as “AI” to generate press. The same thing happened a decade ago with “Machine Learning” and “Data Science.” I strongly suspect that’s what Delta is doing in that article, specifically.
That was an interesting clip I hadn’t seen before. There are more reports out there of this happening. Seems like we have achieved one goal of making AI in our image.
“Models didn’t stumble into misaligned behavior accidentally; they calculated it as the optimal path,” they wrote.
Yeah, but even there, these kingdoms are running out of money to send everyone monthly checks to do nothing. That’s why they started putting their resources into actually building an economy. The other negative side effect of people getting money without having to work is that it creates incentives for extremism. When people, especially young men, have too much free time on their hands, they fall into gangs, drugs, and religious fanaticism. It’s not a surprise. So I don’t think sending checks to people is a good long-term solution.
Google X’s former chief business officer Mo Gawdat says the notion AI will create jobs is “100% crap,” and even warns that “incompetent CEOs” are on the chopping block. The tech guru predicts that AGI will be better at everything than most humans—echoing the likes of Google DeepMind CEO Demis Hassabis and OpenAI chief Sam Altman. Only the best workers in their fields will keep their jobs “for a while,” and even “evil” government leaders might be replaced by the robots.