
I have safety concerns about stepladders but I'm not a stepladder doomerist.

Which of these experts are doomers?



One of the guys who invented backpropagation, Geoffrey Hinton, for one. Of the three people who won the Turing Award for AI research, two are sounding the alarm. Hinton said he's most concerned about the existential risk.


>Of the three people who won the Turing Award for AI research, two are sounding the alarm.

Two of whom won it for stealing ideas Schmidhuber came up with over a decade earlier, without giving credit.


I'm not entirely sure what we're supposed to take from this. Are you saying that they are not actually AI experts, and we're returning to the claim that only people who do not understand the tech are concerned about these risks? That's a stretch.

Or is it a new move:

- People with safety concerns just don't understand the tech.

- “here’s a list of people with safety concerns who demonstrably do understand the tech very well”

- Those particular experts are bad people so we shouldn't listen to them.

I'm really not committed to a side of this debate, and if I check my own bias, it's probably on the Steven Pinker "progress is good" side of things. I just keep hearing bad arguments from the AI-optimist side, and your comment is just another example.


>I'm not entirely sure what we're supposed to take from this. Are you saying that they are not actually AI experts, and we're returning to the claim that only people who do not understand the tech are concerned about these risks? That's a stretch.

The point could be that they are legitimate AI experts, but narrow-minded: their thinking utilitarian, their reasoning motivated. Unlike the original inventor, whose thinking was abstract enough to originate a novel method, yet who was not pragmatic enough to secure even the credit he is owed. It's not an entirely unfamiliar scenario in tech, is it?

An analogy for those people would be an engineer who designs machine guns, or fighter planes. They may be really great at figuring out how to make the gun shoot farther, or the plane fly faster. At the same time, they may not have a very good grip on the idea that, in the end, someone's gonna get killed by their invention.

Such an engineer will probably have a pretty decent rationalization that would enable them to keep getting paid for doing what they love: inventing these fun machines for throwing bits of lead across the air and/or flying around. Whoosh!


Fighter plane engineers are fully aware of how their inventions will be used and generally agree with the mission.

"Because I make killing machines. And they were both damn fine killing machines."

https://www.youtube.com/live/_MUK241uZHM?feature=share


Of course they're gonna interview the guy who's gonna say that - it makes for better viewing! Others might hold more conflicted outlooks; we just never hear from them in popular media. (A quote attributed to Mikhail Kalashnikov does come to mind.)

But that's kinda beside the point. I'm more concerned that you missed the primary logical error I committed in my analogy: the AI experts in question aren't doing the rationalizing but the scaremongering.



