Computer Science 110
Foundations of Computing through Digital Media

Denison

CS 110 Lab Project 11

Building a Robot Brain

Due: By Thanksgiving Departure

Last lab, we learned some of the basic commands for maneuvering the Scribbler robot. In this lab, we will learn to use some of the sensors that are part of the Scribbler robot (the ones on the red robot body) and other sensors that are part of the Fluke dongle. The end goal is to build smarter robot "brains" that sense their environment through these sensors, instead of relying on dead reckoning and precise positioning, where small errors can "add up". In other words, we want to enable our robot behaviors to adapt to the environment.

Part 1: Experimenting with the Sensors

Start by pairing up with your teammate and obtaining a Scribbler robot and a Fluke dongle appropriate to the laptop or desktop computer you will be working with. Next, bring up the Terminal window, change the Terminal's "current directory" to the Desktop, and start the Python interpreter with the py2 command in the Terminal.

Access the library and initialize the interface between the robot and the computer by issuing the two commands:

>>> from myro import *
>>> init('/dev/tty.scribbler')

One way to get a sense (pun intended) of the sensors is to query the values of the sensors while experimenting with the robot. Issue the commands:

>>> manualCamera()
>>> senses()

A caution: do not type anything else in the Terminal window, or the graphics window will stop updating, and you will have to begin again. If the Senses window is updating, you will see sensor values for two line sensors, a stall sensor, three brightness sensors, three obstacle sensors, two IR sensors, three light sensors, and a battery sensor.

The bright sensors and the obstacle sensors are associated with the Fluke dongle. The bright sensors are really the result of processing images from the Fluke's camera, measuring relative light/pixel intensity in the left, center, and right of the image. The obstacle sensors are also on the Fluke's circuit board; they emit IR (like your TV remote) and measure how much comes back to an IR receiver. If a lot of IR is reflected, that is an indication of an obstacle close at hand; if little IR is reflected, then presumably no obstacle is present. The IR emitters on the dongle are also positioned left, center, and right.

Leaving the sensor graphics window running, experiment by manipulating the robot and recording the effect on the sensors:

  1. What are the steady-state ambient light readings from the bright sensors? Record minimum and maximum values with the robot just sitting on the table. Is there a difference between left, center, and right?
  2. Now point the robot toward the ceiling lights and hold it steady. Record the new minimum and maximum values.
  3. Without actually putting your finger(s) on it, cover the camera and see how low you can get the bright sensor values to go. Record your result.
  4. If you have a "flashlight" app, shine a light directly at the camera on the dongle and see how high you can get the bright sensor values to go. Again, record your result.
  5. Moving on to the Fluke obstacle sensors, design a similar set of experiments. Start by focusing on the center obstacle sensor and place your robot on the table with nothing in front of the dongle. Then use a notebook or other solid object and vary the distance between the notebook and the ir emitter/receiver. Measure and record your results as you vary the distance. How consistent is your data? What can you do to help make the data more consistent?
  6. Now experiment with the left and right obstacle sensors. The IR emitters can be easily seen on the edges of the Fluke dongle, near the top of the circuitry. Use your notebook and try to isolate the effect on each of the left and right obstacle sensors.

In the Senses window, the 'ir' fields (as opposed to the 'obstacle' fields) refer to a set of IR emitter/receivers on the Scribbler robot itself. These are on the opposite side from the Fluke dongle, in positions that look like left and right car headlights. The Scribbler 'light' sensors are the three ports (left, center, and right) on the forward hood of the car.

  1. What values do you get from the left and right ir sensors? Experiment with these like you did the Fluke's obstacle sensors. Start with a clear area out from the sensors and then move a notebook or other obstacle back and forth and observe when the values change. Is there a particular distance at which you see a change?
  2. Can you get a change on one ir sensor independently from the other?
  3. Experiment with the light sensors on the Scribbler. I should note that, personally, I have seen very little effect when working with these sensors, so we will probably avoid using them for building robot behaviors. A recent change in the Fluke and/or Scribbler firmware could account for this.
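Several of the experiments above ask you to record minimum and maximum sensor values. If you would rather collect them programmatically than read them off the Senses window, a small helper along the following lines can do it. Note that recordMinMax and fakeReader are names of our own invention, and the fake reader is just a stand-in so the sketch runs without the robot; on the robot you could pass, e.g., lambda: getObstacle(1) as the reader.

```python
# Record the spread of repeated sensor readings.  'reader' is any
# zero-argument function that returns one reading.
def recordMinMax(reader, count):
    readings = []
    for i in range(count):
        readings.append(reader())
    return min(readings), max(readings)

# A fake reader standing in for a real sensor, for illustration only.
fakeData = [132, 140, 128, 131, 150]
def fakeReader():
    return fakeData.pop(0)

lo, hi = recordMinMax(fakeReader, 5)
print('min=%d max=%d' % (lo, hi))   # min=128 max=150
```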

Part 2: Normalizing and Averaging

Our next task is to overcome the awkward units and some of the variability in the data we obtain from the sensors. In an ideal world, sensors would give us values in the range 0.0 to 1.0 and would not be "noisy" with effects from outside influences. We try to approximate that behavior by taking multiple sensor readings, averaging the result, and then mapping the average, using the experimental highs and lows obtained from Part 1, to a value range between 0.0 and 1.0. Consider the following code that defines an initialization function, a high/low normalization mapping to [0.0,1.0], a function for getting a normalized form of the center Fluke obstacle sensor, and then a main function and its invocation:
from myro import *
import time

#---------------------------------------------------------------------------
# The sinit function takes care of all the one-time initialization of the
# scribbler robot.  The parameter, forw, sets the orientation of which part
# of the scribbler robot is 'forward'.  A value of 0 means the normal front
# of the scribbler (opposite the dongle) is forward, and a 1 means the side
# with the dongle is considered forward.
#---------------------------------------------------------------------------
def sinit(forw):
  init('/dev/tty.scribbler')
  setForwardness(forw)
  manualCamera()
  setIRPower(120)

#---------------------------------------------------------------------------
# The normalize function takes a sensor value along with experimentally
# obtained low and high values for that sensor and maps that into the range
# 0.0 to 1.0.  Note that values below the low first get mapped to the low
# and values above the high get mapped to the high.
#---------------------------------------------------------------------------
def normalize(value, low, high):
    value = max(value, low)   # handle low outliers
    value = min(value, high)  # handle high outliers
    delta = float(high - low) # range of sensor values to be mapped
    value = value - low       # realign sensor value to 0 relative
    return value/delta        # division gives the 0.0 to 1.0 ratio


#---------------------------------------------------------------------------
# normCenterObstacle takes multiple sensor readings on a particular sensor,
# averages them, and then uses the normalization function above.
#---------------------------------------------------------------------------
def normCenterObstacle():
  count = 3
  total = 0                 # 'total' avoids shadowing the built-in sum()
  for i in range(count):
    total += getObstacle(1)
  avg = total / float(count)
  return normalize(avg, 0, 1000)   # assumes the lowest values are 0 and highest are 1000

#---------------------------------------------------------------------------
# main uses a timed 'behavior loop' to interact with the robot
#---------------------------------------------------------------------------
def main():
  sinit(1)
  while timeRemaining(10):
    print '%6.4f' % normCenterObstacle()
    time.sleep(.5)

main()
The sinit() function gathers the initialization together in one place and is called by main(). In it, we perform the basic initialization, turn off the camera's autoexposure, and reduce the IR power. The setIRPower() function takes an argument in the range 0 to 255 and can be used to help calibrate the Fluke obstacle sensors: if, even after averaging, you are not getting 'clean' values that allow you to detect an obstacle in front of the robot, experiment with this power setting to try to adjust.

The main function introduces two new functions, one from the robot library and the other from the time library. The timeRemaining() function takes an argument giving the number of seconds to repeat a robot behavior loop; if the motors are still running, it will also stop the robot when the time expires. The sleep function suspends Python execution for the given number of seconds. So the purpose of this program is simply to run the robot for 10 seconds and, at half-second intervals, report our normalized center obstacle sensor reading.

Make sure you understand how the normCenterObstacle() function provides the normalization and averaging that we desire. Then experiment by running this program, multiple times if necessary, to try to determine a "threshold" at which the robot senses it is pretty near an obstacle. You are welcome to adjust the loop time and the sleep time, the low and high values within normCenterObstacle(), or even the IR power (via setIRPower()) until you get consistent results. You may want to work in the hallway, where the ambient IR level is lower and more consistent.
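To convince yourself of what normalize() computes, you can trace it on a few concrete values in a plain Python session with no robot attached. The function below is simply a copy of the one above, and the sample values are arbitrary:

```python
# The normalize function from the listing above, reproduced so it can
# be tried without the robot.
def normalize(value, low, high):
    value = max(value, low)   # handle low outliers
    value = min(value, high)  # handle high outliers
    delta = float(high - low) # range of sensor values to be mapped
    value = value - low       # realign sensor value to 0 relative
    return value / delta      # division gives the 0.0 to 1.0 ratio

# With low=0 and high=1000, as in normCenterObstacle():
print(normalize(500, 0, 1000))    # 0.5  (midway between low and high)
print(normalize(-40, 0, 1000))    # 0.0  (low outlier clamped to the low)
print(normalize(1200, 0, 1000))   # 1.0  (high outlier clamped to the high)
```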

Implement a function called normBright that performs similar functionality to normCenterObstacle, but for all three of the brightness sensors; each call to getBright() returns all three (left, center, right) brightness values. Note that you can pull out the three return values in one assignment statement as follows:

L, C, R = getBright()
where L, C, and R are variable names of your own choosing. Your function should return three values, which it can do as follows, assuming normL, normC, and normR are variables holding your calculated values:
  return normL, normC, normR
A caller would do the same kind of 3-variable assignment shown above when calling your function.
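The return/unpack mechanics, separated from the robot entirely, look like this; readThree is a made-up name and the numbers are stand-ins, so this is not a normBright solution:

```python
# A toy function returning three values at once, as getBright() does.
def readThree():
    left, center, right = 10, 20, 30   # stand-ins for sensor readings
    return left, center, right

L, C, R = readThree()   # three-variable assignment, as with getBright()
print('%d %d %d' % (L, C, R))   # 10 20 30
```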

You should change the above program to use your new normalized brightness function, and experiment with and report the results.

Part 3: Towards a full-fledged Robot Control Program

Say that we want to build a robot behavior that makes the robot react as if it is "scared" of light. Like many robot behaviors, this can be accomplished with a loop in which the robot first senses its environment, then makes decisions and issues commands to react to what was sensed, and then repeats. We can use our timeRemaining() function in a while loop to provide the outer-level control for this behavior.

In the above case, we would sense our 3 brightness sensors. Given the brightness in the left, center, and right of the robot's "field of vision", we want to write Python code that distinguishes between three cases: The light is "greatest" on the left; the light is "greatest" in the center; or the light is "greatest" on the right. If we are scared of the light, and the light is greatest on the left, then we want to turn to the right (and perhaps back up some). Similarly, if the light is greatest on the right, we want to turn to the left (and back up). If the light is greatest in the center, we simply want to back up away from the light.
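One way to keep this three-way decision testable without the robot is to put it in a pure function of the three normalized brightness readings. To be clear, classifyLight and its 'left'/'center'/'right'/'none' return values are our own invention, not part of the myro library:

```python
# Decide which side the light is "greatest" on.  L, C, and R are the
# normalized left, center, and right brightness readings; a side only
# wins if it exceeds both others by more than the threshold, otherwise
# we report 'none' and the robot need not react.
def classifyLight(L, C, R, threshold):
    if L > C + threshold and L > R + threshold:
        return 'left'
    if R > L + threshold and R > C + threshold:
        return 'right'
    if C > L + threshold and C > R + threshold:
        return 'center'
    return 'none'

print(classifyLight(0.9, 0.4, 0.3, 0.2))   # left
```

Your behavior loop could then call something like this on the values from your normBright() function and back up or turn accordingly.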

Write a program called scared.py in which the robot exhibits this behavior. It may help to use a threshold and require the difference between the "greatest" reading and the others to exceed that threshold. Also note that when we use forward/backward/turnLeft/turnRight, we do not have to specify a time: with just a single (speed) argument, the robot keeps running, which makes its reactions smoother. Just make sure that when your program's behavior loop ends, you call the stop() function.