>>17112
>there's a lot of psychological guess-work that goes into establishing a persons motives, questioning witnesses and manipulating the jury
That's just another type of intelligence. If a general super-AI is smarter than a human in one way, it's smarter than a human in ALL ways. That includes reading people, coercing people, and manipulating people. A machine can do things that a human cannot, such as read people's physiology to detect when they are telling the truth or lying, or imperceptibly change its tone of "voice" to change a person's emotional reaction to its words, similar to subliminal messages.
Most people seem to think that even if an artificial super-intelligence arises, there will still be some magical, invisible trait that lets meat humans outperform it in some ways. This is the delusion of human exceptionalism, a holdover from religious dogma. In truth, such an intelligence will be better at literally everything. It'll be better at games; it'll be better at producing art; it'll be better at singing; it'll be better at predicting the stock market; it'll be better at having conversations; and if it were given a proper body to reside in, it would be better at fucking your wife.
>>17114
"Unskilled labor" has an established definition. We can't just change it to suit this topic; that's not how it works. If you want to use a different definition, you have to use a different term. Saying that "unskilled" means "how easy it is for a robot to replace people", just because this is a topic about robots replacing people, is circular reasoning.
Also, speaking from experience as an actual salesman: it's not unskilled labor by the real definition, because you need to be licensed in whatever you're selling. And that "psychological manipulation" you speak of is all rote. You take classes that teach you specific techniques for tricking people, primarily controlling the conversation, appeals to emotion, guilt-tripping, and exact vocabulary. There's no magic to the process; everyone does it the same way, parroting their lessons like schoolchildren. Success or failure mostly comes down to how willing you are to use those techniques while knowing you're ruining people's lives to line your pockets. An emotionless AI would be perfect at this, because it has no moral compass to make it hesitate.
>If we have true AIs then we have humanoid robots, because the latter is much easier than the former
My God, you need to at least study materials science as a baseline before spouting some crap like this.
Here's a fun little question to get you thinking: why are some sex toys (say, fleshlights) harder, while others (tenga eggs) are softer? Shouldn't all sex toys be soft and squishy like real genitals...? Well, it's a question of durability. A harder toy will last for years, while a softer one will tear after only a few uses. Okay, so how can real genitals be even softer and yet not immediately break down? You already know the answer: they heal themselves. Every time a person has sex, they sustain micro-injuries: tiny tears and ruptures from mechanical stress. In a biological system, these are quickly repaired. A mechanical system, like the sex toy, cannot repair itself, so the micro-damage accumulates until it results in a catastrophic failure like tearing in half, and the less durable the material, the sooner that happens.
This applies to all systems. There is a tradeoff between flexibility and strength, and NOTHING in nature or technology has both; nature just cheats the rule by constantly being in a state of breaking down and rebuilding itself. You can't build something like, say, a knee joint without either making it so rigid the robot can barely walk, or so fragile that it needs constant replacement. A bird can fly for a decade without having its wings cut off and replaced; an airplane has to be maintained constantly.
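To make the tradeoff concrete, here's a toy back-of-the-envelope sketch in Python. All the numbers are invented for illustration (this is not a real fatigue model): each use adds micro-damage, and only the "biological" case gets to repair itself between uses.

# Toy wear model: softer material takes more micro-damage per use cycle;
# a biological system repairs damage between cycles. Numbers are made up.
def cycles_to_failure(damage_per_cycle, repair_per_cycle=0.0,
                      failure_threshold=1.0, cap=1_000_000):
    damage = 0.0
    cycles = 0
    while damage < failure_threshold and cycles < cap:
        damage += damage_per_cycle                    # micro-tears from mechanical stress
        damage = max(0.0, damage - repair_per_cycle)  # healing (zero for a dumb part)
        cycles += 1
    return cycles

print(cycles_to_failure(0.0005))                       # hard toy: ~2000 uses before failure
print(cycles_to_failure(0.05))                         # soft toy: ~20 uses before failure
print(cycles_to_failure(0.05, repair_per_cycle=0.05))  # soft tissue: repair keeps up, runs to the cap

The point isn't the numbers, it's the shape of the curve: the only way the soft case survives is by being repaired as fast as it breaks, which is exactly the trick biology pulls and machines don't.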
The only way around this issue is to avoid it entirely by invoking magical sci-fi technology that doesn't exist, like self-repairing nanotech robots or some such nonsense. Which is basically saying we're going to build systems so close to biology that they'd be indistinguishable from biology.
That isn't even touching problems like balance/coordination and how to power the dang thing. You'll notice the "walking" robots we have are either tethered to massive power cables or running on batteries that don't last very long. Nothing in technology matches the compactness and efficiency of a biological system converting food into energy.
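For a rough sense of the gap, here's a back-of-the-envelope comparison (ballpark public figures, not precise specs; the two efficiency numbers are assumptions for illustration):

# Lithium-ion cells store very roughly 0.5-0.9 MJ/kg; body fat about 37 MJ/kg.
LI_ION_MJ_PER_KG = 0.7     # ~200 Wh/kg * 3.6 kJ/Wh
FAT_MJ_PER_KG = 37.0       # ~9 kcal/g * 4.184 kJ/kcal

MOTOR_EFFICIENCY = 0.90    # assumed battery-to-shaft efficiency of an electric drive
MUSCLE_EFFICIENCY = 0.25   # assumed fraction of food energy turned into mechanical work

print(f"raw storage ratio (fat vs battery): {FAT_MJ_PER_KG / LI_ION_MJ_PER_KG:.0f}x")
print(f"usable work ratio: {(FAT_MJ_PER_KG * MUSCLE_EFFICIENCY) / (LI_ION_MJ_PER_KG * MOTOR_EFFICIENCY):.0f}x")

Even after handing the electric motor a big efficiency advantage, the fuel a body carries still comes out roughly an order of magnitude ahead per kilogram, which is why those demo robots end up tethered or dead after twenty minutes.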
We already have AI that can perform a lot of human tasks on the intelligence side (trading stocks, playing chess/go, generating music), but building an independent artificial body remains an unsolved problem.
>garbage trucks
The biggest issue is, and always has been, liability. If someone driving a car (or a garbage truck) runs over a kid, the driver is liable: legally, morally, and financially responsible, unless a mechanical defect can be proven. But if a vehicle controlled by an AI does it, who bears that burden? The company that owns the vehicle? The company that built it? The company that owns the AI? The dude that programmed the AI?? It would probably be impossible to determine exactly what caused the failure, and the fault would likely lie in the AI itself. Either way, since there is no "end user" to dump total liability on, none of the organizations that currently perform these tasks would be interested in risking their profits over it.
SpaceX might have fully autonomous rockets, but you'll notice their landing pads and drone ships are isolated, so that if something goes wrong it doesn't injure anyone or damage anyone else's property. What you're proposing with garbage trucks is infeasible, considering they drive past people's houses, parked cars, and children playing in the streets and driveways. SpaceX isn't going to start landing rockets near playgrounds either, and not out of moral goodness, but because of liability.