> As the AI improves over time, it progressively becomes much better at these things than their employees.
> People looooooooooooove speculating about things that are nowhere near happening.
I agree with that, but I think the real risk isn't that AI improves and gets better than an employee. I think the real risk is that it replaces employees, regardless of whether or not it's better, because it's cheaper. It'll be like the first tier of tech support, but without an option for escalation.
My personal opinion is that it's dumb, oversold garbage, and people are falling for it because it's good at grammar and spelling. I base that on asking it about topics where I know there are common misunderstandings despite authoritative clarification. The dumb, loud people confidently repeat incorrect claims in large volumes while the authoritative sources look on, silently bewildered at the stupidity. From what I've seen, AI is trained on data produced, at least in part, by the "dumb loud masses," and its "knowledge" is based on quantity over quality.
Ask it about the ZFS "scrub of death," which is a mass mania of idiocy, and it'll gladly tell you all about it. Or ask it what the validation methods are for TLS certificates and it'll happily regurgitate the common knowledge of DV, OV, and EV, even though the official docs [1] definitively state in section 7.1.2.7.1 that they are DV, IV, OV, and EV.
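For what it's worth, those four validation types correspond to certificate-policy OIDs that the CA/Browser Forum reserves, and they show up in the certificate itself, so you can check what a CA actually claimed rather than trusting folk knowledge. A rough sketch (the OID-to-type mapping is my reading of the Baseline Requirements, so verify it against the doc):

```python
# Reserved CA/Browser Forum certificate-policy OIDs, per my reading of
# the Baseline Requirements -- double-check against the official text.
CABF_POLICY_OIDS = {
    "2.23.140.1.2.1": "DV",  # domain-validated
    "2.23.140.1.2.2": "OV",  # organization-validated
    "2.23.140.1.2.3": "IV",  # individual-validated: the one everyone forgets
    "2.23.140.1.1":   "EV",  # extended validation
}

def validation_types(policy_oids):
    """Map the policy OIDs found in a cert to CA/B Forum validation types."""
    return [CABF_POLICY_OIDS[oid] for oid in policy_oids if oid in CABF_POLICY_OIDS]

# A cert asserting 2.23.140.1.2.3 is individual-validated:
print(validation_types(["2.23.140.1.2.3"]))  # ['IV']
```

In practice you'd pull the OIDs out of the certificatePolicies extension with a real X.509 parser; the point is just that IV has its own reserved identifier alongside DV, OV, and EV.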
It's unreliable and perpetuates misinformation because, AFAIK, it treats most of its input information equally, and that's not how knowledge works. I don't remember who, but I remember a famous marketer from the 70s or 80s (?) saying that most of his success came from realizing that "visibility is credibility." That's true, and unfortunate, because we're ignoring a lot of intelligent people who aren't willing to engage in a shouting match to be heard, while the dumbest, loudest half of the population has its viewpoints used to train the LLMs that people are going to rely on for information discovery.
The really scary part is that, based on what I've seen, people who do contract work for the government seem very eager to replace their workforce (a cost) with AI (not a cost). Just wait until you need to deal with the government for something and the whole process is like having an argument with a super Redditor.
> I think the real risk isn't that AI improves and gets better than an employee.
As long as they are less intelligent than us, they probably won't spiral out of control. The real risk is that they get substantially smarter than us. Gorillas can't control us, because we are smarter. We don't even actively dislike them, but they are nonetheless threatened with extinction. We do not want to be in the role of the gorillas.
1. https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-...