Tuesday, October 4, 2011

Image Processing (Acquisition)

In this post we will talk about image acquisition and processing using MATLAB.

A digital image is composed of 'pixels', which can be thought of as small dots on the screen. A typical size of an image is 512-by-512 pixels. Later on in the course you will see that it is convenient to let the dimensions of the image be a power of 2. For example, 2^9 = 512. In the general case we say that an image is of size m-by-n if it is composed of m pixels in the vertical direction and n pixels in the horizontal direction.
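As a quick check in MATLAB, the size function returns these dimensions; cameraman.tif is a standard test image that ships with the Image Processing Toolbox:

```matlab
% Read a built-in test image and check its dimensions
I = imread('cameraman.tif');
[m, n] = size(I);   % m = pixels in the vertical direction, n = horizontal
disp([m n])         % cameraman.tif is 256-by-256, and 2^8 = 256
```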

Matlab has an Image Acquisition Tool Box which can be used for acquiring 
images.

To open the toolbox, click on Start --> Toolboxes --> Image Acquisition --> Image Acquisition Tool (imaqtool)

Once the Image Acquisition Toolbox is open, you can start a preview, view the type of camera you are using, and so on.

You could also do all this using MATLAB commands.

Plug a camera into your laptop.
In the Command Window, type:
imaqhwinfo
and hit Enter.

You will get an output something like this:


>> imaqhwinfo

ans = 

    InstalledAdaptors: {'coreco'  'winvideo'}
        MATLABVersion: '7.6 (R2008a)'
          ToolboxName: 'Image Acquisition Toolbox'
       ToolboxVersion: '3.1 (R2008a)'

Thus, by using this command we can find the installed adaptors.

Now, we want to capture the video, right?
There is a command in MATLAB called 'videoinput' that creates video input objects.
Syntax:


OBJ = VIDEOINPUT(ADAPTORNAME)
OBJ = VIDEOINPUT(ADAPTORNAME,DEVICEID)
OBJ = VIDEOINPUT(ADAPTORNAME,DEVICEID,FORMAT)
OBJ = VIDEOINPUT(ADAPTORNAME,DEVICEID,FORMAT,P1,V1,...)
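The last form also lets you set object properties at creation time. A sketch using the adaptor, device ID, and format from this post (yours may differ; check imaqhwinfo first):

```matlab
% Create the video input object and set a property in the same call
obj = videoinput('winvideo', 1, 'YUY2_352x288', 'FramesPerTrigger', 1);
get(obj)   % list the object's properties to verify the settings
```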

We already got the adaptor name by using the imaqhwinfo command. 'winvideo' and 'coreco' are the names of the installed adaptors; either one can be used.
For the format, open the Image Acquisition Toolbox; it will be listed on the left side of the tool. The camera I was using supported 'YUY2_352x288'.
In the Command Window, type imaqhwinfo('adaptorname') (with the name of your adaptor) and hit Enter; this will give you the device ID.


>> imaqhwinfo('winvideo')

ans = 

       AdaptorDllName: 'C:\Program Files\MATLAB\R2008a\toolbox\imaq\imaqadaptors\win32\mwwinvideoimaq.dll'
    AdaptorDllVersion: '3.1 (R2008a)'
          AdaptorName: 'winvideo'
            DeviceIDs: {1x0 cell}
           DeviceInfo: [1x0 struct]

The DeviceIDs field lists the IDs of the detected cameras. (In the output shown above the list is empty because no camera was detected at that moment; with my camera plugged in, the device ID was 1.)

So, my videoinput statement becomes something like this:

obj=videoinput('winvideo',1,'YUY2_352x288');

Next --> triggerconfig
This command configures the trigger settings of the video input object, i.e., it defines how you want the camera to trigger: manually or automatically.
We set it to manual because we want acquisition to start only when we ask for it.
Remember that 'obj' is the video input object we defined earlier.

Thus, my command statement looks like this:
triggerconfig(obj,'manual');

Now, we need to set the number of frames we want the camera to capture every time it is triggered. For my use I set it to 1. Also, since we want triggering to keep happening indefinitely, TriggerRepeat is set to infinity (inf).

set(obj,'FramesPerTrigger',1);
set(obj,'TriggerRepeat',inf);


After that, start(obj) starts 'obj', the video input object we defined, so that it is ready for triggering.

If you want to preview the video during run-time, give the command preview(obj) after start(obj).
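The preview can also be controlled from code; stoppreview and closepreview are part of the Image Acquisition Toolbox:

```matlab
preview(obj);        % open the live preview window
pause(5);            % watch the feed for a few seconds
stoppreview(obj);    % freeze the preview (the window stays open)
closepreview(obj);   % close the preview window entirely
```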

Thus our code in MATLAB till now is:


obj=videoinput('winvideo',1,'YUY2_352x288');
triggerconfig(obj,'manual');
set(obj,'FramesPerTrigger',1);
set(obj,'TriggerRepeat',inf);
start(obj);
preview(obj);

To trigger the camera, the command trigger(obj) is used.
Use the getdata command to retrieve the captured frame and store it in a variable:
im=getdata(obj,1);

Thus, now we have one image stored in im.
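It is worth inspecting the captured frame at this point. Note that for a YUY2 format the returned data may be in YCbCr rather than RGB (this depends on the object's ReturnedColorSpace property), so a conversion may be needed before display:

```matlab
whos im                  % show the size and class of the captured frame
rgb = ycbcr2rgb(im);     % convert YCbCr data to RGB (skip if already RGB)
figure, imshow(rgb);     % display the frame
```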

I recently took part in a competition in which I had to follow a red line by using a web-cam to see the line.
In such a case, the image that was obtained contained a red line on a white background. The preview window (which you get by using the preview(obj) command) looks like this :



This image first needs to be converted into a grayscale image. A grayscale image represents an image as a matrix where every element has a value corresponding to how bright/dark the pixel at the corresponding position should be colored.
The command rgb2gray is used to convert the image to grayscale.
The captured image is stored in im, thus:
b=rgb2gray(im);
Now, b stores the grayscale image.
Below are shown two grayscale images with the intensity values for the RED portion and the WHITE area.



Thus we see that there is a difference between the intensity values of WHITE and RED.
The function roicolor() can be used to cleanly separate red from white by specifying a threshold range of intensity values. All pixels whose values fall within that range show up as white in the output mask, and values outside that range show up as black.

a=roicolor(b,100,115);
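For a low/high range, roicolor is equivalent to an elementwise comparison, which makes it easy to see exactly what the mask contains:

```matlab
% Equivalent to roicolor with the range 100..115:
% 1 (white) where the intensity is inside the range, 0 (black) outside
a = (b >= 100) & (b <= 115);
```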


The black portion in the middle of the white strip (where the red tape was) is due to a high-intensity reflection of the room light (see the preview image above).

Now we invert the image, i.e. make the black region white and the white region black, so that the region of interest shows up more clearly.



Now that we have this image, we calculate the centre of gravity of the black portion of the image. If the centre of gravity lies near the centre of the image, we send an instruction to our robot to keep moving forward; if it lies towards the right side, the bot should move right, and similarly for the left side. The ranges have to be decided by us. For my use, I specified that if the x coordinate of the centre of gravity was between 140 and 180 the robot should move straight; if it was above that, the robot should move right, else left.
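One way to compute the centre of gravity (the centroid) of the black pixels is to average their row and column indices. This is only my own sketch, not necessarily the exact center() function used in the final code:

```matlab
function c = center(a)
% Centre of gravity of the black (0) pixels of a binary image.
% Returns c = [row col], so c(2) is the x coordinate discussed above.
[rows, cols] = find(~a);         % indices of all black pixels
c = [mean(rows), mean(cols)];    % mean position = centroid
end
```

Save this as center.m somewhere on the MATLAB path.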

The instructions were sent to the microcontroller using UART communication.
The serial port can be configured in MATLAB using these commands:

ser= serial('COM42','BaudRate',9600,'DataBits',8); 
fopen(ser); 

The command fwrite(ser,'character') can be used to transmit data using UART.

At the end of the code, the serial port has to be closed and the video input object stopped and deleted. My final code looked like this:


clc;
clear all;
close all;

obj=videoinput('winvideo',1,'YUY2_352x288');
triggerconfig(obj,'manual');
set(obj,'FramesPerTrigger',1);
set(obj,'TriggerRepeat',inf);
start(obj);

ser=serial('COM42','BaudRate',9600,'DataBits',8);
fopen(ser);

while(1)                      % infinite loop; stop with Ctrl+C
    trigger(obj);
    im=getdata(obj,1);        % get the image

    trigger(obj);
    im1=getdata(obj,1);       % the first time triggering occurs the image is
                              % just noise, thus the 2nd image is used
    b=rgb2gray(im1);          % convert to grayscale
    a=roicolor(b,100,118);    % define a region of interest
    a=~a;                     % invert the mask

    c=center(a);              % centre of gravity (not an inbuilt function)
                              % use disp(c) to see the values while testing
    if c(2)>190
        fwrite(ser,'r');      % send move right
    elseif c(2)<170 && c(2)>10
        fwrite(ser,'l');      % send move left
    end                       % no command in between: bot keeps moving straight
end

fclose(ser);
stop(obj), delete(obj), clear obj;
-------

The function center(a) that I have used is not an inbuilt function. I would suggest you attempt to write it on your own.

This is the video of the robot following the red line.


