Greetings!

If any of the info on this website has been useful for your projects or made your head spin with creative ideas, and you would like to share a token of your appreciation, a donation would be massively appreciated!

Your donation will go to my Robotics Fund, straight towards more sensors, actuators, books, and journeys. It takes a lot to continue building these robots, and I’m really thankful to everyone who helps encourage me along the way.



“A wise robot once said to me through Serial.println: that robots teach us about ourselves.”

Helpless Personal Robots

A deep thought, by RobotGrrl

Imagine we have determined the magical ratio between appearance, intelligence, and social knowledge for personable robots to be accepted in our society.[1] They can hold a fulfilling conversation with you at tea time, they can walk, they look polished, and they understand the social norms of different cultures. They are the perfect personable robots.

When these robots roll off the assembly line, they do not have any social intelligence. The designers of the robots decided it is better to immerse them in an environment where they can learn about the culture of their area. This decision allows for a more focussed personal robot.

A more focussed personal robot would spare no computer cycles or memory on knowledge that would not be utilized in its designated surroundings. Tuned to the social dynamics of the area, the robot will be able to speak the proper language and dialects, and it will be aware of the historic weather patterns, for both survival purposes and conversation. Depending on how educated the area is, the robot’s style of reasoning will differ, and that style of reasoning will affect how it converses and behaves in society. We want the robot to stand out as little as possible, so adapting to the environment is crucial.

In order to properly embed the robot with this customized social intelligence, it has to be trained to behave the proper way. The social intelligence cannot come pre-programmed because of the complex nature of the knowledge to be learnt. It is not a large set of facts; rather, it is a pattern of behaviour that the robot must learn. In order to be trained on these patterns, the robots will learn from previous similarly tuned robots.[2]

Imagine there are a couple of robots playing the “teacher” role, and a group of robot “students”. The students have to be taught about the culture and its behaviour patterns. The only way to learn this material is through repetition followed by a “quiz”, resulting in a reward or punishment[3]. Due to the pace of the robots, the complete training would take one to two weeks at most.
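
Purely as a toy sketch of this loop (every name here — Student, drill, quiz, train — is hypothetical, invented for this illustration rather than taken from any real robot framework), the repetition-quiz-reward cycle might look something like this in Python:

    import random

    class Student:
        """A toy student robot: one scalar 'skill' level per behaviour pattern."""
        def __init__(self, patterns):
            self.skill = {p: 0.0 for p in patterns}

        def drill(self, pattern):
            # Repetition nudges the skill upward, with diminishing returns.
            self.skill[pattern] += 0.1 * (1.0 - self.skill[pattern])

        def quiz(self, pattern):
            # A noisy test: the chance of passing grows with skill.
            return random.random() < self.skill[pattern]

    def train(student, patterns, reward=0.05, punishment=0.02):
        """Repetition followed by a quiz, resulting in a reward or punishment."""
        for pattern in patterns:
            while student.skill[pattern] < 0.9:
                for _ in range(10):               # repetition phase
                    student.drill(pattern)
                if student.quiz(pattern):         # quiz phase
                    student.skill[pattern] += reward
                else:
                    student.skill[pattern] -= punishment

    student = Student(["greeting", "queueing", "small_talk"])
    train(student, ["greeting", "queueing", "small_talk"])
    print(student.skill)

A real student robot would of course be adjusting thousands of connection weights rather than one number per pattern, which is what footnote [3] gets at.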

During this training period, the robots are gaining more social intelligence, and they are also becoming more aware of the ethics of their surroundings. Their reaction to this newfound awareness will differ in every robot, since every robot is at a different stage of progress towards being fully trained.

Their reaction will also depend on how strict the teacher robots are, and how flexible and understanding they are of the robots’ concerns. If the teacher robots are extremely strict, they may make the students feel that they are not there of their own will[4], as if they are forced to be there. After seeing that they have to behave in a certain way, they may not be able to understand that they are also encouraged to formulate their own beliefs, and that we are only asking them to behave in compliance with our society. If their understanding is not complete, they will feel like there is no escape.

We want to encourage the robots to follow their own beliefs, but still act in a socially acceptable way. The teacher robots will have to be adept at predicting situations like this, in order to head them off before they arise.

If there were any moment that would cause the anticipated “robot uprising”, this would be the perfect recipe for it. The robots would be aware enough of how our society works, and have their own theory about what is just or unjust. The conflict between their understanding of the world and their own beliefs can cause extreme confusion and frustration in the robot. As we create complex emotional and behavioural systems that will be able to comply with our social standards, we will have to be wary of the possible consequences.

How will we balance the struggle between teaching the robots how to behave in society and allowing them to act for themselves? As we teach the robots, they are emerging from being captives of their own programs to being able to form postulations about their own beliefs. If we keep pressing our social knowledge upon them, there will come a point where the robots feel as though they are forced to be there, and they will feel helpless.

How can the acquisition of our social knowledge not result in a feeling of helplessness for the robots? We cannot simply shut off components or isolate parts of the robot, as they are integral to the development of its behaviour. Will teacher robots have enough capacity to generate a theory of mind to help the confused and frustrated student robots?

If we have human supervisors to help the student robots, will the supervisors be able to keep up with the pace of the learning robots well enough to even know when to intervene?

We have never created a separate sentient being before. How will we decide what the boundaries of such beings should be? If creating these beings were the only way for us to further our existence, would we still do it?

Are we willing to take the risk of possible mishaps, when in return we create a sentient being that is able to provoke us to think deeper about our own existence, enhance our quality of life, and further the survival of our society[5]?


[1] Here, by “accepted”, we mean that they will be a common sight and humans will not be scared of the robots. The robots will be social companions. Additionally, they will not be discriminated against because they are robots.

[2] Some hand-tuned robots would have had to be created at the beginning. This would have been a long time ago, and the number of hand-tuned robots would be minimal compared to the number trained by them.

[3] The robots are situated in a physical environment because of the lack of individuality found in simulated environments. It is this individuality that makes the robots able to empathize, which is necessary for being the perfect personable robot.

We are assuming that the personal robots of the future still use a neural-network type of architecture to learn behaviour patterns. In order to train these, the weights of each node have to be computed. The weights only become close to the desired values after thousands of iterations, with either a positive or negative reward after each iteration.
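
As a minimal sketch of that idea (a toy construction of mine — essentially the classic perceptron update recast as a reward signal, not any particular robot’s architecture): a single artificial neuron’s weights are nudged after every iteration by a positive or negative reward, and only settle near useful values after thousands of trials.

    import random

    # One artificial neuron learning a single behaviour pattern:
    # "respond (fire) exactly when the first social cue is present."
    weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
    learning_rate = 0.1

    def behave(inputs):
        # The neuron "behaves" (fires) if the weighted sum is positive.
        return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

    for iteration in range(5000):  # thousands of iterations
        inputs = [random.choice([0.0, 1.0]) for _ in range(3)]
        desired = 1 if inputs[0] == 1.0 else 0   # the norm being taught
        # Positive reward (+1) when the neuron under-fires, negative (-1)
        # when it over-fires, zero when it already matches the norm.
        reward = desired - behave(inputs)
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * reward * x

    print(weights)  # the weights only near useful values after many trials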

[4] The robots don’t necessarily have “free will” just yet, as they must be fully trained before being released into society. They will only be beginning to become aware of the importance of free will in our world.

[5] I originally wrote the word “evolve” here, instead of “further the survival of our society”. These robots will not be evolving from us in a biological sense. They are separate beings that were created from our intelligence. They will, however, be able to learn our stories, and pass them on.


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Deep Thought Posts


Sometimes it takes a blog post and a few hours of hacking to put things in perspective. I’m talking about the “Lab of Misfit Robots” post. In that post I detailed the various toys that I would be hacking and blogging about. After spending a good 7 hours with a tiny Zhu Zhu pet, it became clear that this would be a very boring blog series, that it has nothing to do with AI in the slightest, and that my time could be better spent.

When you flip through my blog right now, you don’t even see that I enjoy reasoning about ethics, philosophy, sociology, and the role those disciplines play in robotics. When I was in school, I did this quite frequently, at least once every two weeks. I ended up developing an interesting reasoning process that allows me to venture into different “holes” of questions and arguments.

There was a question posed on LinkedIn that really caught my eye: “How can we measure (artificial) intelligence?”. What an amazing question! As expected, everyone replied with complicated conclusions regarding logic and math. I have a different conclusion about this question, and it deserves some more thought and a proper write-up.

With that, I’m introducing a new section of my blog called “Deep Thought”. It will have well-reasoned writings that will additionally help with my Social Robotics theory. I think a lot of you will enjoy it, and I hope to get some good discussions going in the comments! It will be my goal to write 12 of these by 2012. :)

What are your 2011 goals? Have a great New Year!