Greetings!

If any of the info on this website was useful for your projects or made your head spin with creative ideas, and you would like to share a token of your appreciation, a donation would be massively appreciated!

Your donation will go to my Robotics Fund, straight towards more sensors, actuators, books, and journeys. It takes a lot to keep building these robots, and I’m really thankful to everyone who helps encourage me along the way.



“A wise robot once said to me, through Serial.println, that robots teach us about ourselves.”

Helpless Personal Robots


A deep thought, by RobotGrrl

Imagine we have determined the magical ratio between appearance, intelligence, and social knowledge for personable robots to be accepted in our society.[1] They can hold a fulfilling conversation with you at tea time, they can walk, they look polished, and they understand social norms across cultures. They are the perfect personable robots.

When these robots roll off the assembly line, they do not have any social intelligence. The designers of the robots decided it is better to immerse them in an environment where they can learn the culture of their area. This decision allows for a more focussed personal robot.

A more focussed personal robot would waste no compute cycles or memory on knowledge that would not be utilized in its designated surrounding environment. Tuned to the social dynamics of the area, the robot will be able to speak the proper language and dialects, and it would be aware of the historic weather patterns, for both survival purposes and conversation. Depending on how educated the area is, the robot’s style of reasoning will differ. The style of reasoning that the robot uses would affect how it converses and behaves in society. We want the robot to stand out as little as possible, so adapting to the environment is crucial.

In order to properly embed a robot with this customized social intelligence, it has to be trained to behave the proper way. The social intelligence cannot come pre-programmed because of the complex nature of the knowledge to be learnt. It is not a large set of facts; rather, it is a pattern of behaviour that the robot must learn. In order to be trained for these patterns, the robots will learn from previous, similarly tuned robots.[2]

Imagine there are a couple of robots playing the “teacher” role, and a group of robot “students”. The students have to be taught about the culture and its behaviour patterns. The only way to learn this material is through repetition followed by a “quiz”, resulting in a reward or punishment[3]. Due to the pace of the robots, the complete training would only take 1-2 weeks, at most.

During this time, as the robots are gaining more social intelligence, they are also becoming more aware of the ethics of their surroundings. Their reaction to this newfound awareness will differ in every robot, since every robot is at a different stage of progress towards being fully trained.

Their reaction will also depend on how strict the teacher robots are, and how flexible and understanding they are of the student robots’ concerns. If the teacher robots are extremely strict, they may make the students feel that they are not there of their own will[4], as if they are forced to be there. After seeing that they have to behave in a certain way, the students may not be able to understand that they are also encouraged to formulate their own beliefs; we are only asking them to behave in compliance with our society. If their understanding is not complete, they will feel like there is no escape.

We want to encourage the robots to follow their own beliefs, but still act in a socially acceptable way. The teacher robots will have to be adept at predicting situations like this, in order to prevent them before they escalate.

If there were any moment that would cause the anticipated “robot uprising”, this would be the perfect recipe for it. The robots would be aware enough of how our society works, and have their own theory about what is just or unjust. The conflict between their understanding of the world and their own beliefs can cause extreme confusion and frustration in a robot. As we create complex emotional and behavioural systems that can comply with our social standards, we will have to be wary of the possible consequences.

How will we be able to balance the struggle between teaching the robots how to behave in society, and allowing them to act by themselves? As we teach the robots, they are emerging from being captives of their own programs to being able to form postulations about their own beliefs. If we keep pressing our social knowledge upon them, there will be a point where the robots will feel as though they are forced to be there, and they will feel helpless.

How can the acquisition of our social knowledge not result in a feeling of helplessness for the robots? We cannot simply shut off any components, or isolate any parts of the robot, as they are integral to the development of its behaviour. Will teacher robots have enough capacity to generate a theory of mind to help the confused and frustrated student robots?

If we have human supervisors helping the student robots, will the supervisors be able to keep up with the pace at which the robots are learning, so as to even know when to intervene?

We have never created a separate sentient being before. How will we decide what the boundaries of such beings will be? If creating these beings were the only way for us to further our existence, would we still do it?

Are we willing to take the risk of possible mishaps, when in return we create a sentient being that is able to provoke us to think deeper about our own existence, enhance our quality of life, and further the survival of our society[5]?


[1] Here, by “accepted”, we mean that robots will be a common sight and humans will not be scared of them. The robots will be social accompaniments. Additionally, they will not be discriminated against because they are robots.

[2] Some hand-tuned robots would have had to be created at the beginning. This would have been a long time ago, and the number of hand-tuned robots would be minimal compared to the number trained by them.

[3] The robots are situated in a physical environment because of the lack of individuality found in simulated environments. It is this individuality that makes the robots able to empathize, which is necessary for the perfect personable robot.

We are assuming that the personal robots of the future still use a neural-network type of architecture to learn behaviour patterns. In order to train these, the weights of each node have to be computed. The weights only become close to their desired values after thousands of iterations, with either a positive or negative reward after each iteration.
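To make this concrete, below is a minimal sketch of that reward/punishment loop, in Python. It is only an illustration under this footnote’s assumptions: the candidate greetings, the learning rate, and the teacher’s “quiz” are hypothetical toy stand-ins, not part of any real robot training system.

```python
import random

# Hypothetical toy version of footnote [3]: one weight per candidate
# behaviour, nudged up or down by the teacher's reward or punishment
# over thousands of iterations.

BEHAVIOURS = ["bow", "handshake"]  # assumed candidate greetings for this culture
CORRECT = "bow"                    # assumed behaviour the teacher rewards here
LEARNING_RATE = 0.1

weights = {b: 1.0 for b in BEHAVIOURS}

def choose(weights):
    """Pick a behaviour with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for behaviour, w in weights.items():
        r -= w
        if r <= 0:
            return behaviour
    return behaviour  # fallback for floating-point edge cases

for _ in range(5000):                        # "thousands of iterations"
    action = choose(weights)                 # the student robot acts
    reward = 1 if action == CORRECT else -1  # the teacher's "quiz"
    # Nudge the chosen behaviour's weight, keeping it positive.
    weights[action] = max(0.01, weights[action] + LEARNING_RATE * reward)

print(weights)  # the rewarded behaviour's weight dominates after training
```

After a few thousand iterations the rewarded behaviour’s weight swamps the punished one: the student converges on whatever the teacher rewards, which is exactly the dynamic the essay worries about.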

[4] The robots don’t necessarily have “free will” just yet, as they must be fully trained before being released into society. They will just be beginning to become aware of the importance of free will in our world.

[5] I originally wrote the word “evolve” here, instead of “further the survival of our society”. These robots will not be evolving from us from a biological standpoint. They are separate beings that were created from our intelligence. They will, however, be able to learn our stories, and pass them on.


This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

4 Comments


  • bhtooefr

    4 years ago

    Hmm.

    I’m seeing some interesting parallels to discussions of education of humans, as well.

    One book that I’ve read actually suggests that humans learn best when they’re put into an environment doing real-world things, and allowed to soak up knowledge – not sitting them in school for hours and hours at a time. (Then again, that book was actually claiming that forced schooling in the US was intended to dumb down the populace, and keep people from having free will. Hmm.)

    Also, in my opinion, the school of hard knocks is good for teaching right and wrong – usually, doing wrong will cause bad things to happen. So, you’d have to put the robot into an environment where doing wrong would cause bad things to happen to it.

    As for being unable to pre-program the social knowledge, what about creating a few different sets of social knowledge, then figuring out which ones are the safest, and copying them as the pre-loaded state of a new robot?

    One thing that might help is, robots usually have a very clear purpose in… I don’t want to say life, but existence. Humans don’t, and many wars have been fought over exactly that. Which leads into another point… religious leaders program humans to follow them, by claiming to know our purpose in life, which is usually to serve $DEITY, in the ways that the religious leader claims $DEITY wants – and there would need to be a leap of faith to decide that $DEITY really does exist, and really does want what the religious leader is saying. Here, humans would clearly be the deity in question, and we would actually be in contact with the robots every day – no need for a go-between religious leader, no leap of faith needed. So, we could train robots to treat us as gods (except we don’t have the whole immortality thing going on.)

    Now, if there IS a robot uprising… would it be a bad thing? One thing that I could see happening is some very good moral values are instilled in the robot… and then the robot goes out into the real world, and sees some massive corruption going on. The robot uprising could be started right there… but against people that it sees as hypocrites.

  • Erin, the RobotGrrl

    3 years ago

    Hey bhtooefr, thanks for the thoughtful comment!

    The similarity between the two types of education is the process. In human education, we undergo the process of learning new information. The same would be true for the robot education: they would be learning new information too.

    One of the things that will be different about this style of “education” for the robots is that it will be much faster. It would be more like an intense training facility in physical form than a traditional “sit down in front of a blackboard” education.

    Good point about the school of hard knocks; there is a quote from Einstein praising it as well. However, would society accept a robot that might malfunction at a random moment?

    We can postulate about this by thinking about some current machines. Cars, for instance. We do not accept cars that spontaneously malfunction. Car companies follow safety regulations that allow us to be confident that, most of the time, nothing will malfunction.

    I believe that the education process would be somewhat of a safety regulation for the robots, before they enter society. This way, they have been introduced to aspects of their environment through the behaviour and learning process of the teacher robot.

    Pre-programming the social knowledge would be a shortcut that skips creating a true individual character for the robot. Remember, this is the hypothetical perfect social robot.

    Interesting thought about the deity aspect of the human-robot relationship. If the robots are a separate “species” (as they are completely autonomous and sentient beings), would it be unethical to treat them as underlings of us humans? Or would it be more ethical to treat them as peers?

    The same thought can be applied to your thought about the robot uprising. We will have to figure out the proper distribution of power. How can we hold the robots in check? Do we need to be able to do so, and would it be unethical?

  • bhtooefr

    3 years ago

    One thing I’ll note on the ethics of all of this…

    Can we agree that dogs are sentient (even if they’re not sapient) beings? They definitely have feelings. And, dogs can live independently – even domestic breeds. Just look at feral dogs.

    However, we own dogs for our own pleasure and service. They seem quite happy with this arrangement.

    So, I think having sentient robots as underlings can be perfectly ethical.

  • bhtooefr

    3 years ago

    Oh, and back to the point about failure… we allow humans to fail. Sure, they may go to jail or get fined when they fail, but they’re allowed to fail in the first place. And they’re often given leniency, and an attempt at rehabilitation, if it’s a first offense. Preventing failure in the first place would be unethical; it would be a serious restriction on freedom, if the robots were considered an equal species to us.

    For that matter, we often allow dogs to fail a couple times before we put them down. However, in the case of dogs, we actually DO prevent failure, because we consider them as an underling species. (And, here’s the important part – domesticated dogs consider us a superior species, as well. That makes that treatment ethical.)
