
All posts tagged AI

Timely Factbot – Python IRC Chatbot


The telling of facts in a group setting is quite interesting. In an ideal scenario, each member of the group would share facts equally. Usually this is not the case, as one person may know more about a subject and be very good at rattling off facts. Conversing with someone like that can be difficult, as their role becomes more of a fact-rambler, and they eventually become distant from the group itself. What if we could leave the fact-rambling to a robot, and let the humans do what they do best?

This might sound contradictory to everything that we have learnt so far. Are we not supposed to be fact machines, after all we are “taught” in school to memorize equations and whatnot? That’s what books are for; you really don’t need to know any of those things, just how to apply them. Robots are good at “knowing” things, but maybe not so much at applying them.

Picture this setting in an IRC chat-room (Fat Man and Circuit Girl):

electricguy:
it’s impossible to stand like a foot from a helicopter rotor without getting sucked in!!:P

slinky:
the angle of the chopper was the first dead giveaway.

electricguy:
hmmm, wonder if mythbusters wanna try that too!:P

ithon_no_audio:
wondering if you shine a mini projector on cloth then have a camera on the same side of the projector see through the cloth but with a polorized filter rotated to cancel out the polarised light from the projector?

ithon_no_audio:
that way a robot can look at you with human like eyes while really look at you

RobotGrrl:
so you have a projector and a camera and a cloth… how do you get the pupils?

electricguy:
elly dees!

RobotGrrl:
why do you need to cut polarization?

electricguy:
hmmm, do you think it’s a good idea to overclock a computer that already have approx 50 degrees C in CPU temp?

ithon_no_audio:
so the camera don’t see the projector and rather the light behind the cloth

bhtooefr:
is that 50 at idle or full load?

electricguy:
like half load:P

Conversations like this happen regularly during the times that I’m actually “in” there. There are a few topics going on at once: crazy helicopters, polarization of projectors, and CPU load. What would really be nice is if ithon and I somehow magically learned something about polarization while we were discussing it. A timely factbot would be great!

Since September 1 I’ve been playing around with Python libraries for IRC chatbot connections, and with the Natural Language Toolkit (NLTK). A few months ago, when everything was peachy, I was thinking about projects that would be cool to fit in with an AI and HCI class. I made conversation robot software a few years ago, but haven’t given it much time since the science fair ended. This seems like a pretty good opportunity to work on something similar, using the ideas from that old software!

The plan of attack so far is to first be able to tokenize the messages that are sent. From there, design a Markov approach to construct a “memory” of what is being discussed. Some magical AI will happen, then some text will be fetched from Wikipedia and sent into the chatroom.
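To make that concrete, here’s a minimal Python sketch of the tokenize-then-Markov idea. It assumes NLTK is installed (with the punkt tokenizer data); MarkovMemory, observe, and hot_topics are placeholder names for illustration, not actual factbot code:

```python
# Minimal sketch: tokenize chat messages, build a first-order Markov "memory".
from collections import defaultdict

from nltk.tokenize import word_tokenize  # requires nltk + the 'punkt' data


class MarkovMemory:
    """A first-order Markov chain over tokens from chat messages."""

    def __init__(self):
        self.transitions = defaultdict(list)

    def observe(self, message):
        # Record each token -> next-token transition in the message.
        tokens = word_tokenize(message.lower())
        for current, following in zip(tokens, tokens[1:]):
            self.transitions[current].append(following)

    def hot_topics(self, n=3):
        # Tokens with the most outgoing transitions are (crudely) what the
        # room keeps coming back to: candidates for a Wikipedia lookup.
        return sorted(self.transitions,
                      key=lambda t: len(self.transitions[t]),
                      reverse=True)[:n]


memory = MarkovMemory()
memory.observe("why do you need to cut polarization?")
memory.observe("so the camera can cancel out the polarized light from the projector")
print(memory.hot_topics())
```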

I’m super excited to design the Markov approach. I don’t even know very much about it yet!

The best part about this factbot, in my opinion, is that it will be interacted with indirectly. No crazy commands will need to be used. It will be listening and waiting for a good time to start rambling off some facts. It would probably be beneficial to add support for messages that tell the factbot to elaborate more. I’ll keep blogging about this as I go. It’s sort of going to be a side-hobby; I might not be able to dedicate as much time to it as I would have thought, but it will be fun anyway :D

Forgot to mention that I’ll probably be playing around with this factbot in #robot at comm.cslabs. Once it’s working reliably, I’ll have more faith in bringing it out into the “real world”. :D

FNR – BubbleBoy Behaviour AI

This week at Friday Night Robotics I was working on a behaviour AI for the newly refurbished BubbleBoy! It is much easier to design an AI when you know it will be able to interact with the world! Without the headbobbing capability of BubbleBoy, this effort would not be worth it.

BubbleBoy’s behaviour is primarily focused on food and water. BubbleBoy lives its anthropomorphized life just to be fed/watered! This means that BubbleBoy will want to know when to expect to be fed, so that it can headbob at the most optimal times. Being fed/watered means having a button pressed on BubbleBoy’s green stage area (the part with the blue and white LCD).

BubbleBoy will have three sensors, and a rudimentary measure of time. The three sensors are the LDR, lid switch, and Xbee. All of these are onboard the robot (except for the Xbee, which isn’t implemented yet).

Here’s a broad flowchart of the behaviour, which I will explain in detail below.

BubbleBoy Behaviour AI Flowchart

  1. Creating the Expectation

    This is the observatory phase. BubbleBoy initially does not have any expectation of when to be fed, so it waits around. While it is waiting, it’s collecting data from all of the sensors and storing it in an array per sensor. Each array has to contain 10 numbers before BubbleBoy can proceed to the next stage, pattern finding. Once this is fulfilled, and if BubbleBoy is fed/watered, then it goes on to find a pattern.

  2. Pattern Finding

    BubbleBoy is a simple robot, so its pattern finding is relatively simple as well. The main idea is to check each sensor’s array and see if there is a pattern within the residuals. That is, when looking for a sequential pattern, the array is iterated through (starting at i=0, stopping after i=8), and the element at i+1 is subtracted from the element at i. The average of the residuals is calculated at the same time. The array is then iterated through again, this time to count how many residuals are within +- 10% of the residual average. If the count is above 7, then it is said that there is a sequential pattern in there.

    The same process is done for a secondary pattern and a “thirdary” pattern; that is, every 2nd number and every 3rd number is checked to see if there is a pattern. It also checks with offsets, just in case the pattern is “even/odd/whatever”.

    If there are no patterns, a random primary sensor is chosen for BubbleBoy to work with.

    A problem exists in determining which pattern for which sensor to trust the most. A thirdary pattern for a photosensor is less dependable than a sequential pattern for a lid switch. This is handled in the next step. (The residual check itself is sketched in Python below.)
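Here’s that residual check as a rough Python sketch (the real thing will run on the robot; the function name, defaults, and the required count for non-sequential patterns are my own assumptions):

```python
# Rough sketch of the residual pattern check described above.

def has_pattern(readings, step=1, offset=0, tolerance=0.10, needed=8):
    """Check a 10-element array for a pattern every `step` readings.

    step=1 is the sequential pattern, step=2 secondary, step=3 "thirdary";
    offset handles the even/odd/whatever cases.
    """
    values = readings[offset::step]
    # Residuals: the element at i+1 subtracted from the element at i.
    residuals = [a - b for a, b in zip(values, values[1:])]
    if not residuals:
        return False
    average = sum(residuals) / len(residuals)
    if average == 0:
        return False  # flat signal; nothing to be within +-10% of
    within = sum(1 for r in residuals
                 if abs(r - average) <= abs(average) * tolerance)
    return within >= needed

# An LDR that brightens by ~50 between feedings shows a sequential pattern:
print(has_pattern([100, 150, 201, 249, 300, 351, 399, 450, 500, 551]))  # True
```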

  3. Determining the Sensor to Use

    The sensor (and which of its patterns to follow) is chosen through a Bayes Filter. This allows for a simple specification of confidence levels for a given sense resulting in a particular state, through a table where all rows add up to 1.0. The sense columns are the sensor and pattern type; that is, there are three pattern types for every sensor, giving 9 senses in total. The states are confidence intervals. Anything >= 85 would be considered the most confident.

    Bayes Filter AI Lookup Tables

    The numbers in bold are the ones that are more probable to be the result. One sense in particular to look at is the Xbee with a thirdary pattern. Its numbers are dispersed in such a way that it is almost a bimodal distribution. To understand why, imagine the scenario in real life:

    Xbee modems can communicate for up to a few meters. If BubbleBoy only receives a message every 3rd instance, it could either be a clever pattern, or it could be a miscommunication.

    For this reason, the largest probability is given to the least confident state, and the second largest probability is given to the most confident state. It depends on the Bayes Filter and random roulette for what will actually be chosen.

    This step just determines which sensor is the most confident, i.e., which one will provide the best result. The chosen sensor may end up being followed, but it also may not. This will be explained in the next step. (A toy version of the Bayes Filter with random roulette is sketched below.)
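Here’s a toy version of that selection step in Python. The table numbers are made up purely to show the shape (rows are senses, columns are confidence-interval states, each row sums to 1.0); note the almost-bimodal Xbee thirdary row:

```python
# Toy Bayes Filter lookup with random roulette over the confidence states.
import random

STATES = [">=85", "(85,70]", "(70,55]", "(55,40]", "<40"]

LOOKUP = {  # made-up illustrative probabilities; each row sums to 1.0
    "ldr_sequential": [0.50, 0.25, 0.15, 0.05, 0.05],
    "lid_sequential": [0.60, 0.20, 0.10, 0.05, 0.05],
    "xbee_thirdary":  [0.30, 0.05, 0.05, 0.10, 0.50],  # almost bimodal
}

def roulette(probabilities):
    """Pick an index at random, weighted by the given probabilities."""
    spin = random.random()
    cumulative = 0.0
    for index, p in enumerate(probabilities):
        cumulative += p
        if spin <= cumulative:
            return index
    return len(probabilities) - 1

sense = "xbee_thirdary"
state = roulette(LOOKUP[sense])
print(sense, "->", STATES[state])  # usually "<40", sometimes ">=85"
```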

  4. Calculating the Cost Adjustment

    The adjusted cost is determined by taking the result from the previous step and multiplying the probability by a weight based on which confidence column it landed in. In simpler terms, if the result was found in the best confidence interval state, it would be multiplied by 5; if it was in ( 85, 70 ], it would be multiplied by 4; and so on.

    This is then used as the sense for the next Bayes Filter. The state is how much of a cost to reduce the sense by.

    Bayes Filter AI Lookup Tables

    Once the cost adjustment amount is determined, it is subtracted from the cost of the particular sense’s cell. All of the senses are in a grid, with the most “primary” ones closest to BubbleBoy. An A* search is then used to choose the closest, least-cost sensor. Once this is done, BubbleBoy can get on to entertaining its audience while waiting for food/water! (The multiplication is sketched below.)
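The multiplication part is simple enough to show directly (a sketch assuming 5 confidence columns with the best column first; the A* search over the sense grid is omitted here):

```python
# Cost-adjustment arithmetic: the best confidence column weighs the most.

def adjusted_cost(probability, column_index, num_columns=5):
    weight = num_columns - column_index  # column 0 (>= 85) -> x5, column 1 -> x4, ...
    return probability * weight

print(adjusted_cost(0.6, 0))  # 3.0: a confident result gets a big adjustment
```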

  5. Following the Expectation

    This is the main loop of the program. It’s where BubbleBoy is thinking in the present time about whether it will be fed or not! :)

    The key idea here is that BubbleBoy is thinking in the now. Meaning, it tracks the patterns differently than it does when it is reflecting back on them (in a “past” thought).

    Sensor data is retrieved and placed into an array of size 10. The data at i is checked for whether it is within +- 10% of the pattern’s average. The threshold percentage differs by pattern type: secondary is +- 20%, and thirdary is +- 30%. If the data fits the check, then a “yes” counter is incremented.

    Once the “yes” counter is >= 6, the food level will begin to decrease by (i/2)^2. At the same time, BubbleBoy will begin to show signs that he is excited to be fed/watered soon, by spinning its hat and bobbing its head.

    If the “no” counter is >= 6, then it means that the expectation isn’t really working in the present thought. A flag is set to redo the expectation once BubbleBoy receives food and water.

    i is then either incremented or reset to 0, depending on whether it hit 9 or not. (That’s a sort of obvious step.) The whole present-thought check is sketched below.
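Here’s the present-thought check as a Python sketch. The tolerances and counter thresholds come straight from the description above; everything else (names, how the counters are stored) is placeholder scaffolding:

```python
# Sketch of the present-thought tracking loop.
TOLERANCE = {"sequential": 0.10, "secondary": 0.20, "thirdary": 0.30}

def track(reading, i, pattern_average, pattern_type, counters, food_level):
    """Process one sensor reading at index i of the size-10 window."""
    tolerance = TOLERANCE[pattern_type]
    if abs(reading - pattern_average) <= abs(pattern_average) * tolerance:
        counters["yes"] += 1
    else:
        counters["no"] += 1

    if counters["yes"] >= 6:
        food_level -= (i / 2) ** 2  # food level starts dropping
        # ...and BubbleBoy spins its hat and bobs: excited to be fed soon!
    if counters["no"] >= 6:
        counters["redo"] = True  # flag: reformulate the expectation after feeding

    i = 0 if i >= 9 else i + 1  # the (sort of obvious) increment-or-reset step
    return i, food_level
```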

  6. Real-World Behaviours

    When the time elapsed without receiving food exceeds 150% of the observed elapsed time, BubbleBoy goes into a “wallow” mode. When BubbleBoy is wallowing, it spins its hat slowly and bobs rarely.

    If the time elapsed is ( 150%, 100% ], BubbleBoy is “angry” because it did not receive its food before the expected time elapsed. The hat will not spin, and BubbleBoy will bob side to side, plus once (quickly) in the opposite axis to simulate a sort of “twitching” from all this anger!

    If the time elapsed is ( 100%, 85% ], BubbleBoy is eager to please. Hat tricks will be common, as will delightful bobbing. Depending on how much food/water BubbleBoy has, it may also hula hoop! (The three moods are sketched in code below.)
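And the three moods, mapped from the ratio of actual waiting time to the observed/expected time (boundaries from the description above; the “content” default is my own assumption, since there isn’t a named below-85% state):

```python
# Elapsed-time ratio -> behaviour mode.

def behaviour_mode(ratio):
    if ratio > 1.50:
        return "wallow"  # hat spins slowly, rare bobbing
    if ratio > 1.00:
        return "angry"   # no hat spin, side-to-side bobs plus one quick twitch
    if ratio > 0.85:
        return "eager"   # hat tricks, delightful bobbing, maybe hula hooping
    return "content"     # hypothetical: fed well before it even got eager

print(behaviour_mode(1.2))  # angry
```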

  7. Last Part of Following the Expectation

    Depending on when BubbleBoy was fed: if it was in ( whatever, 100 ], the expectation will be redone. Essentially, BubbleBoy is a positive/eager thinker that believes it should always be fed before the elapsed observed time. What an attitude!

    If it is in [ 85, 100 ), then the expectation will be kept.

    To reformulate the expectation, the previous steps are executed on the collected data. Then, everything repeats!

What will be super interesting to see, in my opinion, is when discrepancies occur between past thought and present thought. It will be interesting to see which sensors fare better through that transition.

It will also be interesting to see if this actually can work on an Arduino, and not in simulation. I created a simulated version of BubbleBoy in Processing earlier in the week.

Screenshot: the BubbleBoy simulation in Processing

I’ll do the initial coding and testing through this simulation, mainly because I already coded the Bayes Filter algorithm (with random roulette) in Processing during 2009 Honors Summer Research. :D Plus, in Processing it is very simple to communicate with an Arduino through Firmata. I can read in the data from BubbleBoy through there.

Hopefully next Friday I will have devised a test sequence to test the soon-to-be-coded AI on. This will of course be Open Source, under the Attribution-NonCommercial-ShareAlike 3.0 Unported License (BY-NC-SA).

Let me know what you think of this AI in the comment section below! (And yes I know it is very linear, but BubbleBoy doesn’t have enough DOF in the real world to spend the effort making the AI more nonlinear, since the observed result will essentially be the same!)

January Happenings

What has happened in January? Tons of stuff!


For Matlab this semester, it’s an independent project. I’m working with a friend to implement an adaptive online SLAM algorithm for an iRobot Create with a CMUcam and an ultrasonic sensor. We want it to be able to reach a goal location even if objects are placed in front of it. I’ll be blogging more about this later, though. ;)



The Social Robotics software that I worked on over the summer is now released under the GPLv3 license. I encourage everyone to check out the Social Robotics page if you want to learn more about the project! I am still in the process of creating the documentation and comments for the code. As soon as that is complete, I will make a blog post. =)

Luckily for me, I took time to make detailed daily and weekly summaries. This will help a lot, plus it’s always neat to look back and see what the difficult parts were.


Did you hear/watch this year’s FIRST game animation? The game is about soccer! Team 229 has many useful links on their webpage that can fill you in.

This year I’m helping out with the website, maybe I will get to help out with some AI coding for the autonomous mode later on. It all depends on what the high school students think up!


I ended up adding a class two hours before the first lecture: Applied Statistics I. I don’t enjoy statistics very much, since I have horrible memories of it from Math 536. But once I gained access to view the class on the gradebook software, I immediately noticed two words:

SecondLife ……………… Project

Is this for real!?!?! It turned out that it is, and it is awesome! A friend and I are working on trying to figure out if there is a correlation between the virtual economy and the real economy. We’re going to focus mainly on North & South America, Europe and Australia.

Here’s a screenshot of my professor in SecondLife!



I’m taking a class on Computer Graphics. It’s really neat: I’m learning OpenGL!

OpenGL is something that I’ve wanted to learn for a while now. It’s actually quite simple when you’re given a template to work with!

Screenshot: the first OpenGL homework assignment

Above is the first homework assignment! We were given a lot of time with it, which allowed me to play around with the code. I have to make the colours more plain before I hand it in, though. :(

I have no idea what I want to make with OpenGL at the moment. Maybe a moving robot? I definitely want to make some sort of game, though. (That way I can sell it on the iPhone App Store!)


That’s all for now. I’ll be blogging more about the Matlab project, since I think it’s going to be a hit!