Ants Solving the Traveling Salesman Problem

I have been working on an implementation of the Ant System, the stochastic optimization algorithm inspired by ant foraging described in Marco Dorigo's paper, "The Ant System: Optimization by a colony of cooperating agents."  Code for running it yourself with Python 3 can be found in my Ant System GitHub repo.  I am using it to find solutions to the Traveling Salesman Problem on the 30-city "Oliver 30" data set; initial results are below.  The general question is: given a map of cities, what is the shortest route that visits every city exactly once?  This has obvious applications to logistics and to routing traffic in optimal ways.

Algorithm Broad Strokes

The general idea of the Ant System is to build something that uses both local and global information to guide decisions.  Local information could be something like: move to the closest city I haven't visited yet.  Another option: make the probability of moving to a city proportional to how close it is — if it's really close, a large probability; if it's really far away, a small one.

Global information can be utilized in a variety of ways.  In the case of real ants, they leave pheromone trails, and the simple dynamics of the laying-plus-evaporation chemical system result in short paths being reinforced more heavily than longer ones.  The algorithm follows a similar approach: wait until the ant completes an entire tour through the 30 cities and then lay a trail with a strength inversely proportional to its tour length.  This way ants that took short paths leave more "pheromone" than those with long paths.

The two types of information are combined into a single measure by multiplying the two values.  The strength of each is weighted by raising it to a power, and these exponents are tunable parameters of the model.  Details are given below.

Algorithm Details

As mentioned, each "ant" k calculates the amount of pheromone laid at the end of a complete round by dividing a constant by its path length:  \Delta\tau^k_{ij} = Q/L_k .  This value equals zero for all edges {i,j} that the ant did not travel.  Following Dorigo's method, I used the same number of ants as cities, so the total pheromone level is calculated by summing the contributions of the 30 ants:  \Delta\tau_{ij} = \sum_{k=1}^{N} \Delta\tau_{ij}^k .  From one generation to the next, the global trail information is updated by multiplying the previous value by an evaporation constant (\rho<1) and adding in the new contribution calculated above:  \tau_{ij}(t+n) = \rho\tau_{ij}(t) + \Delta\tau_{ij} .  The time is incremented by n since each ant has traveled to n cities since the previous update.  Call this global information measure G.
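The update above can be sketched in a few lines of NumPy.  This is my own minimal illustration, not the repo's actual code; the function and argument names are assumptions:

```python
import numpy as np

def update_pheromone(tau, tours, tour_lengths, Q=100.0, rho=0.5):
    """Evaporate the existing trail, then add each ant's contribution.

    tau          -- (N, N) pheromone matrix from the previous generation
    tours        -- list of city sequences, one per ant
    tour_lengths -- list of total path lengths L_k, one per ant
    Q, rho       -- constants from the text (default values are assumptions)
    """
    delta = np.zeros_like(tau)
    for tour, length in zip(tours, tour_lengths):
        deposit = Q / length                       # Delta tau^k_ij = Q / L_k
        # walk the closed tour edge by edge, including the return leg
        for i, j in zip(tour, tour[1:] + tour[:1]):
            delta[i, j] += deposit                 # symmetric problem:
            delta[j, i] += deposit                 # lay trail both ways
    return rho * tau + delta                       # tau(t+n) = rho*tau(t) + Delta tau
```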

Local information was quantified by calculating a "visibility" matrix whose {i,j} element is simply 1/d_{ij}, where d_{ij} is the Euclidean distance between cities i and j.  Let's denote this local information measure as L.
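A quick sketch of building that visibility matrix from city coordinates (the function name is mine; the diagonal is zeroed so an ant never "sees" its own city):

```python
import numpy as np

def visibility(coords):
    """Visibility matrix: eta[i, j] = 1 / d_ij (Euclidean distance).

    coords -- (N, 2) array of city coordinates.
    """
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise displacement
    d = np.sqrt((diff ** 2).sum(axis=-1))            # pairwise distances
    with np.errstate(divide="ignore"):
        eta = np.where(d > 0, 1.0 / d, 0.0)          # zero on the diagonal
    return eta
```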

For the next generation we calculate the probability of going to a city as a product of G and L raised to parameter exponents:  p_{ij} \propto G_{ij}^\alpha L_{ij}^\beta .  The relative influence of local and global information can then be adjusted by changing the \alpha and \beta exponents.  Of course, the probability is non-zero only for cities that haven't been visited yet in the current tour.
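The transition rule might be sketched like this — a minimal illustration assuming the pheromone matrix tau and visibility matrix eta are already in hand (the default exponents echo Dorigo's reported values, and the function name is my own):

```python
import numpy as np

def next_city(current, visited, tau, eta, alpha=1.0, beta=5.0, rng=None):
    """Pick the next city with probability proportional to tau^alpha * eta^beta.

    current -- index of the ant's current city
    visited -- set of city indices already in the ant's tour
    """
    rng = rng or np.random.default_rng()
    scores = (tau[current] ** alpha) * (eta[current] ** beta)
    scores[list(visited)] = 0.0          # zero probability for visited cities
    p = scores / scores.sum()
    return rng.choice(len(p), p=p)
```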


There are basically three types of behavior.  1)  The influence of the pheromones is very low, and the ants are guided only by local information probabilistically weighting their choices.  This means that if the true optimum path requires traveling to a city that looks like a bad choice locally, you're not likely to find it quickly.  2)  The influence of the pheromones is very high, and the ants quickly converge to a solution regardless of quality.  If ants do not choose a particular path early, even a good one, it is quickly eliminated as an option: each generation becomes increasingly likely to follow previous generations, and the \tau_{ij} values of the neglected paths fall to almost zero.  Dorigo refers to this as "stagnation" behavior.  Or 3) a good balance is struck such that bad paths are eliminated from consideration and good paths are rewarded, even if they look like bad choices locally.

Comparison Results

[Update:  The problems identified in this section were resolved by randomly assigning the starting city of each ant instead of placing them all on the same city initially.  Randomly assigning cities is a better approach because initial decisions strongly influence the decisions open to the ants at later iterations since cities cannot be revisited.  Once this adjustment was made, Dorigo’s results were validated.]
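A sketch of that fix — assigning each ant a random starting city.  With one ant per city, shuffling a permutation also guarantees every city hosts exactly one ant (a plain random draw would allow repeats); the function name is my own:

```python
import random

def initial_cities(n_ants, n_cities, rng=None):
    """Return a random starting city for each ant (assumes n_ants <= n_cities)."""
    rng = rng or random.Random()
    starts = list(range(n_cities))
    rng.shuffle(starts)          # random permutation: one ant per city
    return starts[:n_ants]
```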

Dorigo reports the best parameter set to be \rho = 0.5, \alpha = 1, \beta = 5 .  However, I found these settings to exhibit stagnation behavior very quickly, suggesting the \alpha parameter was too large.  To get results comparable to those reported I had to reduce it to \alpha = 0.4 .  I was never able to get a path as short as his reported average.  Interestingly, the branching behavior was also different.

The branching for a city is defined as the count of non-trivial options.  Due to the symmetry of the problem (i.e. the trail laid between cities 1 and 2 is the same as the trail between cities 2 and 1), the minimum branching is two; if the branching for every city equals two, only a single path is being used and no others are being explored.  Whereas Dorigo's optimal results occurred when branching was in the 5-10 range, my best results came when branching quickly dropped and then held around 12.  My best path so far is 425.99 whereas his is 423.74.  A chart below shows typical output.
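Measuring branching might look like the sketch below.  The cutoff defining a "non-trivial" option is an assumption on my part (taken here relative to each row's strongest trail), since the paper does not pin down a single value:

```python
import numpy as np

def mean_branching(tau, cutoff=1e-3):
    """Average, over cities, of the number of edges with non-trivial pheromone.

    An edge counts as non-trivial when its trail exceeds `cutoff` times the
    strongest trail leaving that city.  A fully converged (stagnated) system
    gives exactly two per city, since trails are symmetric.
    """
    strongest = tau.max(axis=1, keepdims=True)
    counts = (tau > cutoff * strongest).sum(axis=1)
    return counts.mean()
```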



Next Steps

Due to the large differences in behavior across parameter ranges, I want to quadruple-check all the code to verify it's working as intended.  Once that is done, the next stage will be a mechanism allowing the \alpha and \beta parameters to differ throughout the population.  A simple inheritance-plus-evolution model can then be incorporated so that various division-of-labor strategies compete dynamically.
