# Artificial Intelligence

### Normalizing Constraints For A Multi Objective Fitness Function

Most optimization problems involving genetic algorithms contain multiple constraints. A useful way of handling these constraints is to multiply each one by a factor that scales it to a predefined range; it is generally a good idea to have all constraints contribute equally to the final fitness value.

Example:

```
// calculate fitness
m_dFitness = distFit + 400*rotFit + 4*fitAirTime;
```

The factors used in the above example scale each constraint to a maximum of 400: distFit already has a maximum of 400; rotFit, which has a maximum of 1, is therefore multiplied by 400; and fitAirTime, which has a maximum of 100, is multiplied by 4.

If you wanted to normalize each constraint to a different range, you could use the formula y = r1 + (x-A)*(r2-r1)/(B-A), where [A, B] is the original range and [r1, r2] is the range to which the value x should be normalized.

Example:

```
// calculate fitness, normalizing each constraint to the range [1, 10]
var distFit    = 1 + (x-0)*(10-1)/(50-0);
var rotFit     = 1 + (x-0)*(10-1)/(90-0);
var fitAirTime = 1 + (x-0)*(10-1)/(30-0);
m_dFitness = distFit + rotFit + fitAirTime;
```

This option is more flexible since it does not rely on any single constraint's maximum value to derive a multiplication factor.
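As an illustration, here is a minimal Python sketch of the same range-normalization formula; the constraint names, ranges and sample values are hypothetical, chosen only to mirror the examples above:

```python
def normalize(x, a, b, r1, r2):
    """Map x from the original range [a, b] to the target range [r1, r2]."""
    return r1 + (x - a) * (r2 - r1) / (b - a)

# Each constraint normalized to [1, 10], using made-up sample measurements
dist_fit = normalize(25, 0, 50, 1, 10)   # distance, original range [0, 50]
rot_fit  = normalize(45, 0, 90, 1, 10)   # rotation, original range [0, 90]
air_fit  = normalize(30, 0, 30, 1, 10)   # air time, original range [0, 30]
fitness = dist_fit + rot_fit + air_fit
print(fitness)  # each term now contributes on the same scale
```

Because every constraint lands in the same range, no term can dominate the sum regardless of its original units.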

### Designing A Custom Neural Network In MATLAB

The MATLAB Neural Network toolbox ships with numerous predefined and canonical neural nets; however, sometimes you may need to create a custom net with just the right connections, biases and hidden layers to suit your particular problem domain. To achieve this goal we can use the MATLAB *network* object. The *network* object allows granular design of neural networks by exposing all properties of the net that we are designing. The following code demonstrates how to build a simple neural network to learn the truth table for Logical AND.

First lets look at the Logical AND truth table:

| p | q | p ∧ q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Open a new edit window in MATLAB and enter the following code:

This creates an empty network object and assigns it to the *net* variable, sets the number of inputs, and uses cell array syntax to index into its properties.

```matlab
%% Design Custom Neural Network
net = network;                            % create empty network object
net.numInputs = 2;                        % set number of inputs
net.inputs{1}.size = 1;                   % each input holds a single element
net.inputs{2}.size = 1;
net.numLayers = 1;                        % add 1 layer to network
net.layers{1}.size = 1;                   % set number of neurons in layer
net.inputConnect(1) = 1;                  % connect input 1 to layer 1
net.inputConnect(2) = 1;                  % connect input 2 to layer 1
net.biasConnect(1) = 1;                   % connect bias to layer 1
net.biases{1}.learnFcn = 'learnp';        % set bias learning function
net.biases{1}.initFcn = 'initzero';       % set bias init function
net.outputConnect(1) = 1;                 % expose layer 1 as the output
net.layers{1}.transferFcn = 'hardlim';    % set layer transfer function [hard limit]
net.inputWeights{1}.initFcn = 'initzero'; % set input weights init function
net.inputWeights{1}.learnFcn = 'learnp';  % set input weight learning function
net.inputWeights{2}.learnFcn = 'learnp';
net.inputWeights{2}.initFcn = 'initzero';
net.initFcn = 'initlay';                  % set network init function
net.trainFcn = 'trainc';                  % set network training function
net.performFcn = 'mae';                   % set network perf evaluation function
view(net)
net = train(net, [0 0 1 1; 0 1 0 1], [0 0 0 1]); % train network on the AND truth table
```

- Custom Network Diagram:
- Test Network

In the command window type `net([1;1])`

This should output a 1 to the command window, indicating 1 AND 1 = 1.
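For readers without MATLAB, the behaviour of the trained net can be sketched in plain Python. The weights and bias here (w1 = w2 = 1, b = -1.5) are hand-picked illustrative values that happen to implement AND, not the values MATLAB's `learnp` rule would converge to:

```python
def hardlim(n):
    # Mimics MATLAB's hard-limit transfer function: 1 if n >= 0, else 0
    return 1 if n >= 0 else 0

def and_net(p, q, w1=1.0, w2=1.0, b=-1.5):
    # Weighted sum of inputs plus bias, passed through the hard limit --
    # the same structure as the single-layer custom net above
    return hardlim(w1 * p + w2 * q + b)

for p in (0, 1):
    for q in (0, 1):
        print(p, q, and_net(p, q))
```

Only the (1, 1) input drives the weighted sum past the bias, so only that row fires.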

### Image Classification Using MATLAB SOM/LVQ

I like to think of myself as a hacker :-), not in today’s sense of the word [a person who breaks into secured computer systems] but as a hacker in the sense of the first two definitions found here. I like to experiment with things, especially computers, and Artificial Intelligence in particular. MATLAB happens to be a virtual toolbox for the trade (pun intended :-)). Using some of its toolboxes, we will see how we can solve a ubiquitous problem facing most people with a nice camera and a voracious appetite for taking pictures.

Now, I don’t have an expensive Nikon, but I do have loads of pictures; and one day I was trying to find one particular picture when it occurred to me that if I could not remember its name or the date when I took it, I would have to search blindly through every picture I had in order to find that special one. What if I had some way of finding the picture based on what I could remember of it, i.e. the environment in which it was taken, the colours and objects, along with some other visual characteristics? Wouldn’t that be cool?

So I went for the proverbial toolbox, *MATLAB*. Which tools will I need?

- Neural Network -> selforgmap, lvqnet, vec2ind, ind2vec
- Image Processing -> imhist, imresize, rgb2gray, imread

Other:

- mspaint

**Note**: For this demonstration I will be using:

- MATLAB R2011b on Windows 7
- Pictures found at C:\Users\Public\Pictures\Sample Pictures

Ok, let's do it. Start up MATLAB, then copy and paste all the pics from the above directory to MATLAB’s current directory: {Chrysanthemum, Desert, Hydrangeas, Jellyfish, Koala, Lighthouse, Penguins, Tulips}.

- In a new edit window copy and paste the code given below. Save the file as scan.m

```matlab
function scan(img)
    files = dir('*.jpg');
    hist = [];
    for n = 1 : length(files)
        filename = files(n).name;
        file = imread(filename);
        hist = [hist, imhist(rgb2gray(imresize(file, [50 50])))]; %#ok
    end
    som = selforgmap([10 10]);
    som = train(som, hist);
    t = som(hist);              % extract class data
    net = lvqnet(10);
    net = train(net, hist, t);
    like(img, hist, files, net)
end
```

Links to the functions used were provided above, therefore I will not go into the details of how they work; however, here is a narrative of how the code operates:

The code starts by searching the current MATLAB directory for all files with a .jpg extension. On each iteration of the loop an image is loaded and resized to 50 × 50; it is then converted to greyscale and a histogram of its pixels is taken [the feature vector]. The results are appended to a 256 × n matrix, with n being the number of images scanned.

A self-organizing map network is then used to identify the classes into which the images fall. The feature matrix and class data are used to train a Learning Vector Quantization (LVQ) neural network, which will be used for classification of images presented to it.
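The self-organizing map step can be illustrated outside MATLAB with a toy sketch. This is a minimal 1-D SOM in plain Python, with three-bin "histograms" standing in for the 256-bin `imhist` vectors; the grid size, learning rate and sample vectors are all illustrative choices:

```python
import math
import random

random.seed(0)

def train_som(features, grid=10, epochs=200, lr=0.5, sigma=3.0):
    """Minimal 1-D self-organizing map; features is a list of feature vectors."""
    dim = len(features[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(grid)]
    for epoch in range(epochs):
        x = random.choice(features)
        # best matching unit: the map node closest to this sample
        bmu = min(range(grid),
                  key=lambda u: sum((weights[u][d] - x[d]) ** 2 for d in range(dim)))
        frac = 1 - epoch / epochs          # decay learning rate and neighbourhood
        width = max(sigma * frac, 1e-6)
        for u in range(grid):
            influence = math.exp(-((u - bmu) ** 2) / (2 * width ** 2))
            for d in range(dim):
                weights[u][d] += lr * frac * influence * (x[d] - weights[u][d])
    return weights

def classify(weights, x):
    # An image's class is the index of its best matching unit on the map
    return min(range(len(weights)),
               key=lambda u: sum((weights[u][d] - x[d]) ** 2 for d in range(len(x))))

# Toy "histograms" standing in for imhist output: two distinct feature vectors
dark, light = [0.9, 0.1, 0.0], [0.0, 0.1, 0.9]
som = train_som([dark, dark, light, light], grid=4, epochs=300)
print(classify(som, dark), classify(som, light))
```

The class indices the SOM assigns are what the MATLAB code feeds to `lvqnet` as training targets.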

- Next we will create a function to display all matching images for an image we submit to the LVQ network.

```matlab
function like(im, hist, files, net)
    hs = imhist(rgb2gray(imresize(im, [50 50])));
    cls = vec2ind(net(hs));
    [~, n] = size(hist);
    for i = 1 : n
        if (cls == vec2ind(net(hist(:, i))))
            figure('name', files(i).name);
            imshow(imresize(imread(files(i).name), [100 100]))
        end
    end
end
```

- Download a picture of a koala and save it outside your MATLAB path as *koalatest.jpg*
- At the MATLAB command prompt type `scan(imread('[replace with path to koalatest]\koalatest.jpg'))`
- After a minute or two the networks should have been trained, and a figure displaying the matching koala.jpg image is shown to you.

**NOTE:** As explained above, this is hacking, not production code; I wrote it up in about 20 minutes as a demonstration of image classification. With imagination this could be extended to classify things like sound, for example using a feature map created from humming a tune to find a song with a similar melody.

### Predicting The Lottery With MATLAB® Neural Network

DISCLAIMER: This post does not in any way prove or disprove the validity of using neural networks to predict the lottery. It is purely for the purpose of demonstrating certain capabilities available in MATLAB®. The results and conclusions are my opinion and may or may not constitute applicable techniques for predicting the popular Jamaican Lottery Cash Pot game.

**Background:**

Supreme Ventures Jamaica Limited has a lottery game called Cash Pot (CP). The game is based on 36 balls being loaded into a chamber and one ball being selected at random from the grouping. The game is run four (4) times each day, seven (7) days per week.

**Anecdotal Heuristics:**

While doing a little tongue-in-cheek research at my favorite barbershop, I stumbled upon some heuristics employed by most patrons who play the CP game. One involved writing down the day, time, and winning number for each day’s lottery. After building up a sufficient dataset, they could then query a particular day and time, and with some simple arithmetic tally the most likely number to be played at that day and time. I was informed that this proved to be a very efficient way of telling which number was to be played next. Another popular heuristic involved pre-assigned symbols associated with each of the thirty-six (36) numbers. Then, based on dreams aka “rakes”, numbers would be chosen that matched the symbols seen in the “rake”. These two methods were the favorites amongst the players of Cash Pot.
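The first heuristic is essentially a frequency tally. A tiny Python sketch of it, using made-up draw records rather than real Cash Pot data:

```python
from collections import Counter

# Hypothetical draw log: (weekday, draw time, winning number)
draws = [
    ("Tue", "MORNING", 7), ("Tue", "MORNING", 12),
    ("Tue", "MORNING", 7), ("Wed", "MIDDAY", 3),
]

def most_likely(day, time):
    # Tally the winning numbers seen in this day/time slot and pick the mode
    tally = Counter(n for d, t, n in draws if d == day and t == time)
    return tally.most_common(1)[0][0] if tally else None

print(most_likely("Tue", "MORNING"))  # 7 appears twice in this toy log
```

Of course, for a genuinely random draw this tally has no predictive power; it only reports what has already happened.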

**Procedure for predicting Cash Pot with MATLAB ANN:**

- Get the dataset from Supreme Ventures Jamaica website. [contains all winning numbers with date and time]
- We will need to do some twiddling with the file in order to get it into a format that MATLAB can use. To do that we need to remove all headings/sub-headings and labels.
- Next remove the DRAW# and MARK columns since we will not be using those in our analysis.
- In column D use the =WEEKDAY() formula to get the day number from the corresponding date: repeat for all rows.
- Use find and replace to replace MORNING with 1, MIDDAY with 2, DRIVETIME with 3 and EVENING with 4. [Save the file]
- Using MATLAB folder explorer, navigate to the file then double click on it to run the import tool.
- Select columns B and D then hit the import button; this should import only columns B and D, rename the imported matrix to cpInputs .
- Select column C and hit the import button; this should import column C only, rename the imported matrix to cpTargets.
- Because MATLAB sees Neural Network (NN) features as rows, transpose the two matrices using the commands shown after this list.
- In the MATLAB command window type nntool.
- Import cpInputs and cpTargets into the NN data manager.
- Hit the new button on the Neural Network Data Manager and change the default name to cpNN.
- Set Input data to cpInputs, Target data to cpTargets.
- Hit the create button to create the NN.
- Let’s go ahead and train this network. On the train tab of the Network: cpNN dialog, select cpInputs for Inputs and cpTargets for Targets; then press the Train Network button to start the network training.
- Results of training.
- After training the network to the desired tolerance’s go back to the Neural Network/Data Manager dialog box and hit the export button, select cpNN from the list then hit the export button.
- Go back to the MATLAB command window and type the prediction command shown below.
- The resulting value will be the NN’s best guess of the winning entry for Cash Pot on a Tuesday at DRIVETIME.

```matlab
cpInputs = cpInputs';
cpTargets = cpTargets';
```


**Note:**

The newly created NN has two inputs: the first being the day of the week on which the CP draw is scheduled to be played, and the second being the time of day at which it is scheduled to be played. It also has a hidden layer of 10 neurons with an associated bias, and an output layer with 1 neuron and its associated bias. The output is a scalar double which represents the predicted winning number.

```matlab
cpNN([2;3]) % [day;time]
```
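For readers without the toolbox, the topology described in the note can be sketched in plain Python. This shows only the untrained forward pass of a hypothetical 2-10-1 network (random weights, tanh hidden layer, linear output); it is not the network nntool builds, just an illustration of its shape:

```python
import math
import random

random.seed(1)

def make_net(n_in=2, n_hidden=10):
    """Random weights for a 2-input, 10-hidden-neuron, 1-output network."""
    hidden = [{"w": [random.uniform(-1, 1) for _ in range(n_in)],
               "b": random.uniform(-1, 1)} for _ in range(n_hidden)]
    out = {"w": [random.uniform(-1, 1) for _ in range(n_hidden)],
           "b": random.uniform(-1, 1)}
    return hidden, out

def tansig(n):
    # Tanh sigmoid, a common hidden-layer transfer function
    return math.tanh(n)

def predict(net, day, time):
    hidden, out = net
    # Each hidden neuron: weighted inputs plus its bias, through the transfer function
    h = [tansig(sum(w * x for w, x in zip(unit["w"], (day, time))) + unit["b"])
         for unit in hidden]
    # Output neuron is linear: weighted hidden activations plus its bias
    return sum(w * a for w, a in zip(out["w"], h)) + out["b"]

net = make_net()
print(predict(net, 2, 3))  # untrained, so this is just a random scalar
```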

**Conclusions:**

My initial analysis of the results of the NN was not conclusive; perhaps the parameters of the NN could be adjusted and the results compared to actual winning numbers. However, even after doing so, one may find that the outputs are still random and contain no discernible patterns, which should be the case for a supposedly random game of chance.

### Particle Swarm Optimization (PSO)

Nature is a great teacher as it relates to optimization and evolutionary computing. Among the many lessons taught by nature is the concept of the swarm; whether it's a swarm of Locusts, a school of Piranhas or a flock of birds, the group and its collective consciousness serve a very important and dominating purpose. In the case of the Locusts and Piranhas that purpose is embodied in their very optimized feeding technique, which has evolved over millennia into the efficiency that we see today. Particle Swarm Optimization (PSO) is just one of the many and varied optimization techniques borrowed from nature. Many see Particle Swarm Optimization as a hybrid Genetic Algorithm (GA) implementation which allows a more deterministic search of the problem's solution space.

Particle Swarm Optimization takes advantage of a mathematical formula that tells each candidate solution (CS) how far it is from optimal and what it needs to do to get closer to the swarm's optimal position. Depending on the calculated position of the CS and the speed of convergence, the CS is guided to a new, more optimal position each epoch.

We can visualize this by looking at a school of Mackerel trying to escape the jaws of a hungry great white shark. Each Mackerel is aware of its own position in the school, the position of the shark, and also the position and wellness of its neighbours. This swarm feature allows the school to function as a single cohesive organism. It is quite obvious to a fish that if its neighbour turns left and is still not eaten, then it is a pretty safe bet that when it turns left it too may be swimming off into the sunset and not into the hungry jaws of the shark.

The PSO algorithm is a much simplified expression of the previous narrative; if you are already familiar with the workings of a Genetic Algorithm you should be quite at home with this incarnation. A GA can be converted into a PSO with some simple modifications: its operators are adjusted so that each CS is aware of its immediate neighbours and their fitness. Another advantage of PSOs is the reduced number of parameters which must be configured for the technique to work; many different problems can be solved without much modification to the algorithm.
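To make the narrative concrete, here is a minimal PSO sketch in Python minimizing a simple test function (the sphere function). The inertia and attraction coefficients (w, c1, c2) are common textbook values, and the search bounds are arbitrary; this is a sketch, not a definitive implementation:

```python
import random

random.seed(0)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimizing f over [-5, 5]^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = pbest[pbest_val.index(min(pbest_val))][:]  # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + pull toward own best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(g):
                    g = pos[i][:]
    return g

best = pso(lambda p: sum(x * x for x in p), dim=2)
print(best)  # should land close to the origin, the sphere function's minimum
```

The two attraction terms are the "neighbour awareness" from the narrative: each particle is pulled both toward its own best experience and toward the swarm's.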

### Using A Perceptron ANN With Genetic Algorithm Trainer To Solve Logical AND

Neural nets (NN) had always been a “black box” fascination for me until recently, when I had my eureka moment. Neural Networks are mainly used for classification and prediction; applications include stock market trends, feature recognition and some types of optimization techniques. A NN can be trained in two ways: supervised and unsupervised.

**Supervised**

The NN is exposed to a dataset with the training values and the correct outputs for those values. A training algorithm such as back-propagation is used to minimize the error between the resulting outputs and the correct outputs.

**Unsupervised**

The NN is allowed to form its own conclusions based on the dataset provided. These types of networks are usually used for classification problems; a very good example of this is the SOM (Self Organizing Map) network.

That said, if you have not already read my post on Genetic Algorithms (GA) please do so now, since we will be using GA’s as the means of optimizing the required weights for each input of the Perceptron.

Wow, you are back already? Ok, let's look at a simple NN to see how it works. The NN we will be reviewing is called the Perceptron; this particular configuration has only two input neurons, since we will be dealing with a binary operation, and one output neuron.

i = inputs

w = weights

o = output

b = bias

A bias is also associated with the network; the bias is simply a neuron with a constant value multiplied by a weight, added to the activation function so that the decision boundary need not pass through the origin. Each input is multiplied by a corresponding weight; each weight is a small value which is adjusted until the output neuron is activated. Activation occurs when the activation function Σ(i1*w1 + i2*w2) + b is greater than a certain threshold. The output of the activation function is then sent to a transfer function, which interprets that output as 1 or 0 depending on the inputs supplied to the NN.
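Putting the pieces together, here is a sketch in Python of a perceptron whose weights and bias are evolved by a simple GA to solve Logical AND. The population size, elitism split and mutation scheme are illustrative choices, not a definitive trainer:

```python
import random

random.seed(0)

AND_TABLE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def activate(genome, inputs):
    # genome = [w1, w2, b]; hard-limit transfer on the weighted sum plus bias
    w1, w2, b = genome
    return 1 if inputs[0] * w1 + inputs[1] * w2 + b >= 0 else 0

def fitness(genome):
    # Number of truth-table rows the perceptron gets right (max 4)
    return sum(1 for inputs, target in AND_TABLE if activate(genome, inputs) == target)

def evolve(pop_size=20, generations=100, mutation_rate=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 4:                 # perfect perceptron found
            break
        parents = pop[: pop_size // 2]           # elitism: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, 2)           # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # mutation: nudge one gene
                child[random.randint(0, 2)] += random.uniform(-0.5, 0.5)
            children.append(child)
        pop = parents + children
    return pop[0]

weights = evolve()
print(weights, fitness(weights))
```

Instead of adjusting the weights by an error rule, the GA simply keeps whichever [w1, w2, b] genomes classify the most truth-table rows correctly and breeds them.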

### Simple Genetic Algorithm To Evolve A String of Integers

Genetic Algorithms are a means of optimization copied from the natural world. According to the theories of evolution, nature has a way of selecting the best (fittest) individuals to mate (crossover) and reproduce, thus carrying on the best features of each selected individual. Even though traits from both parents are carried over into children, there is still an element of randomness involved (mutation) that gives offspring the ability to explore their fitness landscape and adapt to their environment. Genetic Algorithms try to mimic this behavior with three common operators: (1) selection, (2) crossover, (3) mutation.

**Selection:**

The selection operator determines how the individuals of a population are selected to mate; the most popular selection method is called elitism, and this is the method we will use in our genetic algorithm implementation.

**Crossover:**

The crossover operator determines how the parents are recombined to form offspring. We will be using single point crossover in this implementation.

**Mutation:**

Mutation inserts randomness into the genotype of each offspring giving it the ability to diversify from the features of its parents.

**Note**

Implementing genetic algorithms can be seen as somewhat of an art, because almost all of the code is boilerplate except for the chromosome representation used and how the fitness of each individual is calculated. These two factors usually have the most impact on the accuracy and speed of the genetic algorithm and are the most difficult to get right. These factors, along with the mutation rate, crossover rate and selection method, have to be tinkered with until a viable configuration is reached.

This implementation will evolve the number sequence 123456789 in that specific order.

**Interesting Functions**

```csharp
private static void CalculateFitness(sequence seq)
{
    var ordered = new sequence
    {
        buffer = new List<string> { "1", "2", "3", "4", "5", "6", "7", "8", "9" }
    };
    int result = 0;
    for (int i = 0; i < 9; i++)
    {
        // score one point for every gene already in its target position
        if (ordered.buffer[i] == seq.buffer[i])
            result++;
    }
    seq.fitness = result;
}
```

```csharp
private void Epoch(List<sequence> population)
{
    // Elitism: carry the fittest individuals straight into the next generation
    survivors.AddRange(population.Where(i => i.fitness >= survivorThreshold));
    Mutate(population.Where(i => i.fitness >= mutantThreshold));
    CrossOver(population.Where(i => i.fitness >= crossoverThreshold) as IEnumerable);
    population.Clear();
    population.AddRange(survivors);
    // Top up the population with fresh random sequences
    for (int i = 0; i < populationSize - survivors.Count; i++)
    {
        var temp = GenerateSequence();
        CalculateFitness(temp);
        population.Add(temp);
    }
    survivors.Clear();
}
```

The full code listing can be found on GitHub.