In AI and machine learning, robots apply reinforcement learning, using reward signals to refine their actions and reduce errors. In some industries any mistake is intolerable and potentially fatal, such as a robot performing medical surgery. However, when it comes to robots in a social environment, some argue that vulnerability is a necessity, crucial to successful implementations. An article published in Scientific American addressed the vulnerable side of robots and their influence on people (Kramer, 2020). Its central point was that the imperfection of robots encourages people to communicate back and promotes better interaction.
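To make the reward idea concrete, here is a minimal tabular Q-learning sketch. The scenario, names, and reward values are illustrative assumptions (not from the cited article): a hypothetical robot chooses among three motions, only one of which succeeds, and the reward signal gradually steers it away from the erroneous ones.

```python
import random

ACTIONS = [0, 1, 2]
CORRECT = 2  # assumed: only motion 2 reaches the target

def reward(action):
    # +1 for the correct motion, -1 (an "error") otherwise
    return 1.0 if action == CORRECT else -1.0

def train(episodes=500, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # action-value estimates
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # nudge the estimate toward the observed reward
        q[a] += alpha * (reward(a) - q[a])
    return q

q = train()
best_action = max(q, key=q.get)  # the learned policy prefers the rewarded motion
```

After training, the value estimate for the correct motion is positive while the error-producing motions are rated negatively; this is the sense in which rewards "reduce errors" over time.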
Again, what about a robot in an operating room or in a self-driving car? In those settings, a deliberate fault margin is clearly impractical and should stay out of the discussion, at least in this post. The scope here is social interaction with human subjects to serve particular objectives. Surprisingly, Salem et al. (2015) concluded that interacting with a faulty robot did not harm the communication dialogue, and participants' willingness to interact remained acceptable. It is still critical to measure faulty incidents and address any threats to participants' safety, especially when dealing with the elderly or children. Defectiveness might even add a personalized, distinctive touch to the robot if it occurred consistently at particular moments, promoting a unique character, attitude, and behavior.
The individualism of a human being raises the question of whether robots should be identified through a unique and personalized experience to establish attachment and emotional involvement.
Building a robot should aim near perfection, yet maintaining a personality demands flaws. The algorithms, then, would have to execute perfectly in order to produce imperfect results within boundaries that ensure safety and control. Such glitches could be funny and humorous, helping the robot engage closely with people. It could present emotion through voice (stuttering, warmth, etc.) or facial expressions such as blinking eyes or an open mouth. Morgan (2017) addressed the things robots cannot do better than humans; one of the criteria was empathy, and how a robot might share a similar experience with humans to talk about. Hewlett (2015) suggested, with hesitation, that humans could play the role of an inspirational guide offering a learning experience, yet worried about the outcome. Would it be a good idea for robots to come with uploaded fictitious memories? Would that be a lie, a white lie, or something else? Use your imagination. Undoubtedly, many ethical questions could be raised, and countless debatable topics need to be addressed; however, just talking about it is a start.
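The "perfectly executed imperfection" idea above can be sketched in code. Everything here is a hypothetical illustration, not a method from the cited sources: a robot usually picks its best response, but a per-robot seed injects a harmless quirk at consistent moments, giving it a repeatable character; quirks come only from a safe whitelist and are never allowed in safety-critical contexts.

```python
import random

# Assumed whitelist of harmless, humorous glitches
SAFE_QUIRKS = ["hums a tune", "blinks twice", "pauses and stutters"]

class QuirkyRobot:
    def __init__(self, name, quirk_rate=0.15):
        # Seeding by name makes the quirk pattern consistent for this robot,
        # so the "flaw" reads as personality rather than random noise.
        self.rng = random.Random(name)
        self.quirk_rate = quirk_rate

    def act(self, best_action, safety_critical=False):
        # Hard boundary: no glitches when safety matters.
        if safety_critical:
            return best_action
        if self.rng.random() < self.quirk_rate:
            return self.rng.choice(SAFE_QUIRKS)
        return best_action

robot = QuirkyRobot("R2")
# The same seed yields the same quirk pattern run after run: a stable character.
actions = [robot.act("greets the visitor") for _ in range(20)]
```

The design choice worth noting is the safety flag: the imperfection is itself an algorithmic output, bounded and switchable, which is exactly what "executing perfectly to produce imperfect results" would require.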
Hewlett, M. (2015, November). Do Robots Have Feelings Too? Atos. https://atos.net/en/blog/do-robots-have-feelings-too
Kramer, J. (2020). Empathy Machine: Humans Communicate Better after Robots Show Their Vulnerable Side. Scientific American. https://www.scientificamerican.com/article/empathy-machine-humans-communicate-better-after-robots-show-their-vulnerable-side/
Morgan, B. (2017). 10 Things Robots Can’t Do Better Than Humans. Forbes. https://www.forbes.com/sites/blakemorgan/2017/08/16/10-things-robots-cant-do-better-than-humans/?sh=69012caac83d
Salem, M., Lakatos, G., Amirabdollahian, F., & Dautenhahn, K. (2015, March). Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 1-8). IEEE.