Let's say you need surgery and you are given a choice: it's free, but it will be performed by an autonomous robot without human supervision. Would you take the risk and accept all liability if something goes wrong?
We already place a lot of trust in machines, often without realizing it. For instance, we trust that an important email will arrive without its content being altered in transit.
There are automated systems that trade on stock markets, moving millions of dollars with each order (although this sometimes ends in disaster).
The military is another example, with autonomous weapons becoming increasingly popular: they are both cheap and efficient, as long as they don't turn against those who sent them into the field.
So what about sensitive domains where decades of expertise are required, and yet even the experts are not error-proof? Medical errors are rare, but not nonexistent.
Let's say the autonomous surgery robot has been tested on 10,000 humans across the top 10 most common surgeries. First, you might question how such testing was done, and perhaps even its ethics. To that I would answer that it is no different from testing new drugs: they are not put on the market based on statistics on paper, but after passing through several layers of human test subjects, some of whom could be handicapped for life in the process.
And again, you have the choice: robot and free, or human and expensive.
My prediction is that people would first demand that someone (or some company) be held liable if something goes wrong.
Fine, let's imagine that an entrepreneur convinces an insurance company, using a process which is, again, very similar to testing a new drug. The insurer agrees to cover the risk if something goes wrong; since that risk is "statistically" very small, the insurance company can eventually profit from covering it.
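To make the insurer's reasoning concrete, here is a minimal back-of-envelope sketch: the policy is profitable when the premiums collected exceed the expected payouts. All the numbers below are hypothetical, purely for illustration.

```python
# Back-of-envelope check: is insuring the robot profitable?
# All figures are hypothetical assumptions, not real data.
complication_rate = 0.001    # assumed: 1 in 1,000 surgeries goes wrong
payout_per_claim = 500_000   # assumed compensation per incident (USD)
premium_per_surgery = 800    # assumed premium charged per operation (USD)
surgeries_per_year = 50_000  # assumed annual volume covered

expected_payouts = complication_rate * payout_per_claim * surgeries_per_year
total_premiums = premium_per_surgery * surgeries_per_year

print(f"Expected payouts: ${expected_payouts:,.0f}")              # $25,000,000
print(f"Total premiums:   ${total_premiums:,.0f}")                # $40,000,000
print(f"Expected margin:  ${total_premiums - expected_payouts:,.0f}")
```

With these made-up numbers the insurer comes out ahead in expectation, which is the whole bet: the rarer the complications proven in testing, the cheaper the coverage, and the cheaper the robot surgery remains.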
So you have your autonomous surgery robot, tested on 10,000 subjects, with an insurance company that will cover the risk if something goes wrong.
I predict more and more people will say yes; the risk is "worth" taking. Others will simply refuse to be touched by a robot, which I can understand, since robots cannot bond with us the way humans can. They will take what feels like the safest route... or treat human surgery as some kind of luxury!
Let's now stretch this thought experiment even further. If we become a multi-planetary civilization, far less "concentrated," the choice won't even exist: we will need machines skilled and smart enough to heal our bodies whenever we get badly hurt. In that situation, such "robots" will be the only option.
Put another way, AI surgery robots will not only become a reality; they may well be labeled "Drugs 2.0."