Robots in 2021 are really terrible at Human Things.
Really, really terrible.
Partly, this is because humans are outrageously complicated, and human social interactions happen incredibly fast.
You might be thinking now, "But computers are really really fast! The computer can solve math problems that take me an hour in seconds. Or less!"
True. Computers are great at math.
But computers are much slower at processing information that isn't purely digital, and much, much slower at generating content for humans.
And we don't even really have the mathematics to properly describe human behavior. We are just starting to develop mathematics to model the behavior of crowds (where individual behavior isn't as important as the general sense of what's going on), to use social robots to influence cooperation between members of a group (https://www.pnas.org/content/117/12/6370), and to identify facial expressions. But all of these tasks are so computationally expensive that they cannot fit on mobile platforms. Even speech recognition, which is relatively well understood and widely deployed (that's Alexa, Siri, and Hey Google, among others), requires a connection to the internet so a supercomputer can analyze your voice and guess at what you said. (Although on-device recognition is an area of active research, with some success!)
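To make that concrete, here's a minimal sketch of the cloud-versus-local tradeoff, using the third-party Python speech_recognition package (the cloud path ships your audio to Google's web API; the local path runs CMU Sphinx on-device). Alexa and Siri use their own proprietary stacks, so this only illustrates the architecture, not how those products actually work.

```python
# Cloud vs. on-device speech recognition, sketched with the Python
# "speech_recognition" package (pip install SpeechRecognition pocketsphinx).
import time
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)  # capture raw audio locally

# Option 1: ship the audio to a cloud service. Needs an internet
# connection, and the network round-trip adds latency.
start = time.perf_counter()
try:
    text = recognizer.recognize_google(audio)
    print(f"cloud: {text!r} in {time.perf_counter() - start:.2f}s")
except (sr.UnknownValueError, sr.RequestError):
    print("cloud: no transcript (no connection, or audio not understood)")

# Option 2: run CMU Sphinx entirely on-device. No network needed,
# but a much smaller model, so accuracy suffers.
start = time.perf_counter()
try:
    text = recognizer.recognize_sphinx(audio)
    print(f"local: {text!r} in {time.perf_counter() - start:.2f}s")
except (sr.UnknownValueError, sr.RequestError):
    print("local: no transcript (Sphinx missing, or audio not understood)")
```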
So we have a lot of work to do before social robots are ready to "replace" humans.
What's on that list?
voice recognition and speech analysis operating locally in real time (and fast! Try pausing in a conversation with your friends and counting how many seconds it takes before the pause feels incredibly awkward -- for me it's less than 5 seconds)
facial recognition and *expression* recognition (again, totally something that machine learning is going to be good at, but the training dataset will need to be enormous to get something as capable as humans are at identifying similar expressions on different faces -- see the sketch after this list)
voice and facial expression *generation*. This one has come a long way with high-quality screens, and most of the current crop of social robots don't even try to have faces with moving parts. Robots like Moxie use adorable iconography to supplement facial expressions, showing bubbles with recognizable icons to tell the user what Moxie is "thinking" about (even when Moxie is sleeping).
Mobility. Boston Dynamics is way out front on this, as you can see in this incredible video of Atlas doing parkour, but a generalist social robot will need to be fast and agile and gentle and cautious. That's asking a lot!
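On the expression-recognition point above, the basic pipeline is: find a face in the camera frame, crop it, and hand the crop to a trained classifier. Here's a hedged sketch of that idea: OpenCV's Haar-cascade face detector is real, but `expression_model` is a hypothetical stand-in for a CNN you'd have to train on that enormous labeled dataset.

```python
# Sketch of an expression-recognition pipeline: detect faces, crop them,
# and classify each crop. The face detector ships with OpenCV;
# "expression_model" is a hypothetical trained classifier, and
# EXPRESSIONS is just an example label set.
import cv2

EXPRESSIONS = ["angry", "happy", "neutral", "sad", "surprised"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_expressions(frame_bgr, expression_model):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # 48x48 grayscale crops are a common input size for expression CNNs.
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        label = expression_model.predict(crop)  # hypothetical model call
        results.append(((x, y, w, h), label))
    return results
```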
And, finally, social robots are going to need trust. When a human being walks into the room, I have a lot of experience with what capabilities they might have and the range of things a human can possibly do. But especially as machine learning is used more in motion planning and other areas of robotics, a robot may move its arm from point A to point B along a path nothing like the one a human would take.
To be widely accepted, humans will need to be able to predict robot movement, which either means more careful crafting of movement algorithms to match biologically-inspired expectations, or adjusting people's expectations to the ways that robots more typically move.
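For the "biologically-inspired expectations" option, one classic candidate is the minimum-jerk trajectory (Flash & Hogan, 1985), which human point-to-point reaching movements approximately follow: the motion eases in and eases out instead of jumping between speeds. A minimal sketch of the idea, not any particular robot's planner:

```python
# Minimum-jerk trajectory: a smooth polynomial with zero velocity and
# acceleration at both endpoints, so the motion starts and stops the way
# a human arm does. Human reaching movements approximately follow this.

def minimum_jerk(start, goal, duration, t):
    """Position along a 1-D minimum-jerk path at time t in [0, duration]."""
    tau = min(max(t / duration, 0.0), 1.0)  # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return start + (goal - start) * s

# Example: move a (hypothetical) arm joint from 0.0 to 1.0 rad over 2 s.
waypoints = [minimum_jerk(0.0, 1.0, 2.0, t / 10) for t in range(21)]
```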
Hey Kat! Wow, you bring up a lot of good points about social robots. There is a lot that goes into creating a social robot. Humans are so much more complex than robots, and for robots to even come close to interacting like a human, a lot of different human aspects need to be built into these robots. The four points you made on what robots will need to replace humans seem very true. It's going to take a lot of work, research, and time before we get a robot that can do all these things. To be honest, I think it'll be a little scary if a robot can imitate a human to such precision, but I am wondering…