Using Python/PyGame to build a platform game

I have been playing around with the book Learning Python with Raspberry Pi.  Chapter 5 steps the reader through creating a Mario Brothers-type 2D platform game.  Using that as a starting point, I modified it to include my dog Bruce doing his favorite thing in the world:  chasing a ball.  A short video is below.  You can get the Python code to modify/play on GitHub.


Using DAX to calculate a moving average in PowerPivot

I have been playing with the PowerPivot functionality in Excel. (Note: only a few versions of Excel support this feature. I had to buy/install three different ones before I finally got what I needed, ending up with the stand-alone version of Excel.) If you’re not familiar with the tool, definitely check it out. Briefly, it supports data models, which means you can link various tables together and then run analyses on your data without having to do a ton of VLOOKUPs. This is best demonstrated in pivot tables and charts, where you can bring in fields from different tables and see their relationships interactively.

EXAMPLE:

Let’s say you have a pivot table of sales data by date and you want to know the cumulative amount of sales along the table rows. Written correctly, your DAX formula will be just as capable if you’re sorting by week number as it would be if you used months instead. Maybe you only want to look at sales of red automobiles; the formula still works since it inherits the filters.

As a first example I wanted to make a simple 30-day moving average. I used the freely available AdventureWorks database, which has sales data from a fictional bike shop. The sales data contain multiple sales in a given day. Since we do not want each and every sale entry to have a calculated value, we do not want a “calculated column”. Instead we want a “calculated field” (called a “measure” in older versions). So make a pivot table with dates and sales amount, click inside it, then under the PowerPivot tab choose to add a Calculated Field or Measure. The table you attach it to merely controls which table the field appears under in the PivotTable field list. Here’s the code, and then I’ll explain how it works.
Read more

Learning D3.js

I’ve been learning how to program in R via Coursera and became interested in visualization.  I know very little about HTML/CSS but stumbled upon some d3.js tutorials at Scott Murray’s site that are very enjoyable and practical.  After a few hours I generated some random data and was able to make this dynamic visualization.  Notice that each time you refresh the page the data are different (after you run the animation).  This is because your browser is running the JavaScript, and hence the random-number generation for the data set.  This also works on mobile devices.  Now that I have some of the machinery, hopefully it can be put to use to make less trivial examples.  A current project is exploring structure in an e-mail system based on To/From fields and time stamps.  Visualizing the dynamics of an e-mail chain through a network could be entertaining if nothing else!

Being More Effective in a Technical Job

Due to some mentoring from a friendly colleague I’ve taken away a simple but valuable lesson about managing a resource-constrained technical problem within an organization.  For those of you who have become frustrated trying to solve a problem while also “managing your managers”, this may add perspective.

Background:  I’m currently tasked with delivering a high-profile, but uncertain R&D solution where, like everything else, it was needed yesterday.  It has the attention of several levels of managers with varying degrees of technical understanding and there are many indirect paths of communication within the organization.  In these scenarios, keeping communication coherent is a project itself.

As such, the frustration came in having to spend as much or more time in meetings answering questions, talking about plans, reassuring interested parties, etc. than actually working to solve the problem.  I’m sure I’m not the only person who’s wanted to say, “I would have this answer for you, but I’ve been too busy talking in meetings!”  The conversation with my colleague helped me understand that the perception of how the project is going is separate from the reality.  I thought of a sort of truth table to illustrate the point, shown below.

Perception graphic

Obviously we all want to be in Square 1, where the project is going well and managers perceive it as going well, and want to avoid Square 4, where it’s all going poorly.  So which of the other two are preferable?  This question is relevant because you may be confronted with a choice on whether you prioritize spending time on aligning communication or on solving the technical problem.

Consider the dynamics of being in each of these two squares.  Put another way, if you’re in Square 2, are you more likely to end up in Good/Good or Bad/Bad?  If the perception is bad but the project is going well, you’re likely to get intervention from managers, since they see the choice as either stepping in to fix the problem or doing nothing while the project fails.  A self-interested manager will do the only thing they can, which is step in to help.  This would be a good move if the project were actually going poorly, but if the project is doing well in reality, intervention is likely to hurt the chances of success, since presumably you are the expert on solving the problem at hand and were given the task for a good reason.  It means you’ll be spending your time justifying your plans or explaining why alternative plans are not as good instead of doing real, productive work.

Alternatively, in Square 3 you have real work to do on the problem but the managers trust your vision and do not intervene disruptively.  This essentially buys you time to redirect into Square 1.  In the previous case, you’re spending more and more time convincing people of your vision and fielding technical nuances, which takes away your time to actually solve the problem.  In this way the communication is very important for keeping the project on track because it has practical consequences on resource allocation.

This understanding of communication has shifted my perspective on priority.  Before, as a technically minded person, I would have said Square 2 was better because once you get traction on the problem you can sort out the communication.  I now better understand the adage “Perception is reality.”

Hardware to Enable A.I.

In my previous post I mentioned that I thought a major limiting factor in developing “real” A.I. will be hardware that supports massively parallel information processing.  In short, my criticism was that existing A.I. solutions typically rely on brute-force calculations and very powerful machines, which is very different from what the brain does.  I used the example of computing how a school of fish moves: traditional modeling techniques rely on a master book-keeper tracking the movements of every fish and telling each one where to go next, whereas in the real system each fish figures out where to go by itself.

I’ve been reading about IBM’s efforts to build hardware that is a step toward the type of information processing I had in mind.  They call them Neurosynaptic Chips.  I’m very much interested in learning how they write programs that utilize such functionality.  I imagine this is a bit like trying to fit a square peg into a round hole.  The brain’s “program” is not software, but instead is hard-coded by the structure of the connections between neurons.  Or put another way, the “program” is in the hardware architecture itself.  Perhaps a hardware/software model is more flexible than what the brain uses since one could conceivably use software to re-route information pathways instead of needing to alter the physical paths themselves.  It’s far from obvious that nature’s solution to the intelligence question is the best one in principle, but we’ll have to wait until we understand how the brain does it before we can start parsing out the elements that are critical from the ones that are merely consequences of the constraints of evolutionary dynamics.  It’s fun to think about!

Speculation/Opinion: A Call for Calm Amid A.I. Fears

It seems there’s been a recent surge of famous, smart people expressing their fears about the future of artificial intelligence and the ramifications of passing the so-called singularity, the point where machine intelligence surpasses humans’.  Voices like Stephen Hawking’s and Elon Musk’s seem to be the loudest.  A recent book by philosopher Nick Bostrom warns of so-called Super Intelligence, a speculative notion of intelligence orders of magnitude beyond human capability.

First, let’s assess where we are currently:  nowhere remotely close.  Some great computer scientists have gotten computers to do some really cool things, but very little of it could be mistaken for an intelligence of anything except the programmer’s.  Let’s consider a couple of examples.


Obligatory Brain Picture

The classic:  playing chess.  Quite simply, the machine is good because it can simulate the outcomes of billions of possibilities and choose the move that maximizes the probability of winning at each stage.  Humans are incapable of this amount of computation yet can still compete at a high level by a mechanism I won’t pretend to know how to describe.  In this case, the technology that enables the computer to compete has little to do with creating intelligent software and everything to do with the hardware that lets the computer carry out its relatively simple calculations very quickly.
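To make the brute-force flavor concrete, here is a minimal Python sketch of exhaustive game-tree search on a trivial game (single-pile Nim, an example of my own choosing; real chess engines add pruning, heuristics, and opening/endgame databases on top of this basic idea):

```python
# Exhaustive game-tree search on single-pile Nim: players alternately
# take 1-3 stones, and whoever takes the last stone wins.  The "skill"
# here is nothing but simulating every possible continuation.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win with `stones` left."""
    # Try every legal move; if any move leaves the opponent in a losing
    # position, this position is a win.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move by brute force over the whole game tree."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; take one stone and hope

if __name__ == "__main__":
    print(best_move(10))  # prints 2: taking 2 leaves the opponent a losing pile of 8
```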

Another illumination of this idea is presented by Jeff Hawkins in his book On Intelligence and a similarly themed TED Talk.  I will paraphrase.  Imagine coming home from work, walking up to your house, and going through the door like you do every day.  However, on this particular day someone has stuck a small piece of sandpaper on the bottom of the door knob.  The moment your hand touches the knob, it would likely get your attention as being abnormal.

The current way to mimic this with software is similar to how a machine plays chess:  brute force.  Your robot would climb the stairs (something incredibly difficult to get a robot to do, although toddlers learn it with ease while chewing gum and having a conversation) and then need to run down a list of all the things to check.  If the programmer didn’t have the foresight to check the texture under the knob as it was grabbed, then the machine won’t detect the abnormality.  Even if the programmer were clever enough to hard-code this routine, there’s a huge space of other possibilities that could occur, and the programmer would always be playing catch-up to account for new curve balls.  Whatever it is humans are doing, it seems it doesn’t include ruling out all conceivable possibilities.

These are fundamentally different approaches.  Computer scientists have found great ways of finding patterns in data and getting machines to find (some) cats in videos using the best engineers and most powerful systems, but the approaches remain in the category of “huge numbers of calculations”, and the results do not translate easily to new types of problems.

Getting a robot to demonstrate the flexibility of the motor skills of a 3-year-old is beyond the state of the art by a large margin.  But a computer can multiply numbers far better and faster than the most trained mathematician.  Finding clever ways to use mathematics and billions of fast computations to get a computer to approximate something humans do with ease is not anything like machine intelligence.  The fact is we know very little about building intelligent systems, and to suggest we’re on the order of a decade or two away from building a technology that surpasses human ability is as fanciful as thinking we would be driving flying cars in the year 2000.  Barring a radical new approach that enables more progress in a decade than has been accomplished in 50+ years, it just isn’t going to happen.  Arguments that quote Moore’s Law fail to see the distinction between powerful computation (which we’ve already figured out) and intelligence, where we’re clueless.

Hardware differences

A major difference that I speculate accounts for a portion of the discrepancy is in information-processing architectures.  Although exceptions certainly exist, the modus operandi of a computer is to take a task, compute the outcome, and then go to the next task.  This is easy for us to overlook because computers complete their simple tasks very, very quickly.  Let’s say you want to model the movement of a school of fish.  The typical approach is to stop the clock, cycle through each fish and calculate what it will do next given some simple rules and the current configuration that’s frozen in time, let the clock tick once, stop it again, move the fish to their new positions, then recalculate for each fish and repeat this loop many times.  In reality, each fish is worrying about itself and there’s no master book-keeper tracking each and every movement.
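A toy Python sketch of that “freeze the clock, sweep over every fish, then advance” loop might look like the following (the steering rule is made up purely to show the serial, book-keeping structure):

```python
import numpy as np

rng = np.random.default_rng(0)
num_fish = 200
positions = rng.uniform(0, 100, size=(num_fish, 2))
velocities = rng.normal(0, 1, size=(num_fish, 2))

def step(positions, velocities, dt=0.1):
    """One clock tick: freeze the configuration, update every fish, then move them all."""
    frozen = positions.copy()                # the configuration "frozen in time"
    school_center = frozen.mean(axis=0)      # the master book-keeper's global summary
    new_velocities = np.empty_like(velocities)
    for i in range(len(frozen)):             # serial sweep over each fish
        # Made-up rule: steer slightly toward the center of the school.
        new_velocities[i] = velocities[i] + 0.05 * (school_center - frozen[i])
    new_positions = frozen + new_velocities * dt
    return new_positions, new_velocities

for _ in range(1000):                        # many ticks of the clock
    positions, velocities = step(positions, velocities)
```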

Similarly, even if we “train an artificial neural network” so that it can detect faces in an image, the master book-keeping computer still has to sequentially compute how each pixel affects each artificial neuron.  In the retina/brain, the magic is largely happening in parallel.  Like the fish, each neuron reacts to its particular environment and doesn’t need to wait in line to carry out its action.  Understanding what each fish is doing individually is only part of the story; the collective behavior of many simply interacting fish is the emergent behavior of interest.

There are of course technologies that take a computer program and break its tasks into smaller pieces for parallel computation, if the problem at hand lends itself to such parallelization, but this is far from the scale of what is happening in the brain.  Having a cluster of 10 powerful computers breaking down billions of computations and re-assembling the results is qualitatively different from a billion simple units making simple calculations simultaneously, with ambiguous “inputs” and “outputs” and many signals being sent among units on a complex network structure.  Statements suggesting that the problem is merely that we lack the computational power to simulate billions of neurons also, I think, miss the mark:  they implicitly assume the approach should be the status quo of serial computation (or parallel computation to speed up a largely serial task).  I would guess advances in AI research will be as much limited by hardware development as by software, and that doesn’t mean more clock cycles per second.  However, developing the tools to control and measure such a massively parallel system is a challenge in itself.  I think the real meat of the problem is in the emergent phenomena of complex systems, but of course, being a person studying in this field, I would think such a grand thing about it.

Simulating the dynamics by stopping the clock and making incremental changes may help us understand mechanisms and be a useful exploratory tool, but there’s no reason to expect the best way to build a “school-of-fish-like” machine is with serial processing.  The fact that a lot of the AI community, and the institutions that fund it, are focused on traditional software-on-computers solutions suggests not only are we far from realizing this “real” Artificial Intelligence, we’re not even in the right paradigm.

The Fear

So where does the fear come from?  I won’t pretend to have an answer to such a question, but it seems peculiar that people largely regarded for their intelligence fear an intelligence beyond their own.  Perhaps they’re fearful that their strength will be marginalized?  Regardless of their personal motivations, it’s all too common to see the public influenced by sensational reporting.  The fact that you likely understand the HAL 9000 and Skynet references not only shows this fear of advanced AI isn’t new, but also that such “distant future” predictions (2001 and 2029, respectively) tend to massively underestimate just how far away radical technological developments are from their present.  Although there’s little harm in needlessly speculating on how to prevent a Super Intelligence from destroying the human race (it feels absurd even typing it), there could be harm in taking some of the recommended actions, like creating institutions and legislation to fix the problems before they’re problems.  If we’re completely in the dark on how to build intelligent machines because we have a poor understanding of intelligence, it seems reckless to speculate on the potential powers and “intentions” of such technologies.  Suggesting ways of dealing with these speculative powers and intentions appears, to me, quite absurd.  To imagine that we’ll stumble upon the technology to create a Super Intelligence and then make a mistake that allows it to super-manipulate us into super-destroying our entire race vastly underestimates the challenges we’re facing.  Thinking we should start crafting ideas for controlling it seems to me like drafting traffic laws for flying cars in the 1970s.  Before we start building the Faraday cages to suppress the beast, let’s first see if we can get that voice recognizer, built with the power of Google, to do a little better than this on my voicemail:

“Ohh. Hi. This is Tony for maim client, she. I’m calling about steps 32 in the C R bye hey you know I’mtalking about once a step 30 to Okay bye”

Using Fourier Transforms to Characterize Implantation Damage

As an undergraduate at the University of Missouri – St. Louis I had a joint project with the NASA Missouri Space Grant Consortium and my current company, SunEdison Semiconductors.  The aim was to better understand helium and hydrogen ion implantation into silicon before a thermal treatment.  This was an active area of research to help optimize the SmartCut technology for transferring thin films to create Silicon on Insulator (SOI) materials.

Data

Our data were gathered by preparing cross-sections of implanted silicon for analysis via Transmission Electron Microscopy (TEM).  Interestingly, there were highly regular dark bands parallel to the surface and at the depths expected for the implantation species and damage.  The bands remained dark regardless of imaging conditions, bright field or dark field.  This suggested the contrast came from the disruption of electron channeling by displaced atoms in the crystal.  In other words, the darker the band, the more displaced silicon atoms and the more damage.  See the image and intensity profile plot below.  The take-away was:  TEM is a reasonable way to at least see the damage.

SOI damage

Goal and Method

Aside from measuring standard things under different conditions, we also wanted to be able to say something quantitative about the structure in the damage layer, and that’s what we’ll focus on here.  For sufficiently thin samples we could see a sort of mottled contrast.  In the image below, the thinner part of the sample, highlighted as Region 1, shows significantly more structure than the thicker Region 2.

How can we be quantitative about this structure?  One way is to look at the makeup of the spatial frequencies needed to construct the two image regions.  This map can be obtained by taking the magnitude of the Fourier transform of the image, called the Power Spectrum (PS).  The PS can be arranged so that its center is the DC peak, the zero-frequency component, which gives a measure of the average image intensity.  Moving away from this center point corresponds to increasing frequency.  So low-frequency (read:  long-range) contrast correlations sit near the center, while quickly changing intensities at short range require high frequencies.  If you take the average intensity of a band at a given radius in the PS, it tells you something about the amount of those spatial frequencies required to construct the image.
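As a rough sketch of that computation in Python/NumPy (not the tools or code actually used for this analysis), the power spectrum and its radially averaged profile can be obtained like this:

```python
import numpy as np

def power_spectrum(image):
    """Squared magnitude of the 2D Fourier transform, DC peak shifted to the center."""
    ft = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(ft) ** 2

def radial_average(ps):
    """Average PS intensity in rings of increasing radius about the DC peak."""
    ny, nx = ps.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=ps.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)      # intensity vs. radius (spatial frequency)

# Example on a random stand-in for an image region:
image = np.random.default_rng(0).normal(size=(256, 256))
profile = radial_average(power_spectrum(image))
```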

Power Spectrum/Fourier Transform Intuition

To build intuition, let’s think more about this relationship.  Imagine you have an image of a vertical zebra-stripe pattern with 10,000 lines counted from the left to the right of the image.  If you were to make a profile plot of intensity similar to Fig. 1 above, you would see a very fast-changing signal that would look like a high-frequency square wave.  If the number of lines were reduced to 10, you would see a similar plot but with a much lower frequency.  Likewise, the PS of the first case would show a bright peak far away from the center (indicating high frequency) while the second image would have a peak much nearer to it.  The angle of this point gives us information on the orientation of the corresponding spacings in the image.  More specifically, a line drawn in the PS from the center to the bright dot of interest is perpendicular to the corresponding lines in the image.  So vertical lines would produce points to the left and right of the center, at angles of 0 and 180 degrees.

How do we apply this to our problem?  We simply take a PS in the damaged region and another PS nearby as a baseline and compare the spatial frequencies.  If some particular band of frequencies is strong (indicated by a radial intensity average in a band of the PS) in the damaged area, that tells us something about the size scale of the texture.

Results

To show we were actually getting spatial frequencies as a result of the damage texture, we compared PS from the regions marked above, since the average intensity profiles are similar but in Region 2 the texture information is washed out by the sample thickness.  (Note:  ion milling was used to thin the sample, and the presence of the oxide near Region 2 indicates it is less thinned than Region 1, where the oxide has been milled away.)  We normalized the PS intensities to match at the DC peak.

PS intensity bands

Observe that in the thinner region there’s a significant bump in intensity in the 40–80 pixel range.  This says something about our characteristic damage size.  These 2g measures in pixels can be converted to spatial frequencies if you know the camera constant, magnification, etc. of your image.  In this case the 40–80 pixel 2g range corresponds to 90–180 Angstrom spacings.

Digital Darkfield

To complete the analysis of this damage layer we wanted to confirm that the regions of the image that were using these frequencies were indeed the regions we were interpreting as implantation damage.  This was achieved by applying a filter to the Fourier transform to select only the frequency range of interest, and then inverse-transforming to see a filtered image containing only the spatial frequencies we let pass.

Specifically, we made a doughnut shape where inside the doughnut we multiplied the FT of the image by 1 and outside (and in the middle) of the doughnut we multiplied by zero.  The width of the doughnut was defined by our 2g measures in the image above, the 40–80 pixel distance from the center of the PS.

After this multiplication, only the frequencies within the doughnut remained and we then took the inverse transform.  There are so-called “ringing” effects due to this abrupt cut off in frequency, but the image below shows that the damaged region in the thin part of the sample lights up much more brightly than other regions.  It also shows that this texture is washed out as the thickness increases, as our intuition might suggest.
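A minimal Python/NumPy sketch of that doughnut (annular) filter, assuming the 40–80 pixel band discussed above:

```python
import numpy as np

def annular_bandpass(image, r_inner=40, r_outer=80):
    """Keep only spatial frequencies inside an annulus of the FT, then inverse-transform.

    Bright areas of the returned image are the parts of the original built
    from the selected frequency band (here, the damage-texture band).
    """
    ft = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)
    mask = (r >= r_inner) & (r <= r_outer)   # 1 inside the doughnut, 0 elsewhere
    filtered = np.fft.ifft2(np.fft.ifftshift(ft * mask))
    return np.abs(filtered)                  # the abrupt cutoff causes some "ringing"
```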

inverse FT

It’s important to note that this result is completely independent of the damaged area being dark compared to the rest of the sample.  This shows that the frequencies required to create the “damaged region” in the original image lie in a particular range and therefore give us information on its characteristic size scale.  The fact that this region lights up when we filter out all other spatial frequencies confirms this.

Acknowledgments

Phil Fraundorf at UMSL for teaching me how to do all of this stuff as well as providing the ideas/tools.

Jeff Libbert, Lu Fei, and MEMC/SunEdison Semiconductors for providing guidance, funding, and materials.

The NASA Space Grant Consortium and UMSL for providing funding and the opportunity to do undergraduate research!


Machine Learning Practice on Breast Cancer Data

I found an open data set on the Machine Learning Repository concerning breast cancer and thought I would use it as a way to practice machine learning approaches.  Briefly, the training data indicate whether each instance was benign or malignant along with the values of a variety of features, e.g. Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, etc.

Please go to my GitHub Breast Cancer Repository for the data set and code.

Approach Outline

The data set had 9 features, two classifications and about 700 data points.  I split it so that the training of my model happened on the first 350 data points and the remainder were used for checking the accuracy.

Our prediction equation is a sigmoid and is interpreted as the probability that the tumor is malignant based on the input features.  The input to the sigmoid is a sum of each feature multiplied by a parameter value, i.e. \vec{\theta}\cdot \vec{X} where \vec{\theta} is the parameter vector [\theta_0, \theta_1, \theta_2, ..., \theta_9] and \vec{X} is the feature vector of a given data point i:  [1, x_{1i}, x_{2i}, x_{3i}, ..., x_{9i}].  Explicitly, the input for a set of features \vec{X} is:  \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_9x_9 and the output is the probability of being malignant with the chosen model parameters (\vec{\theta}).  My program says if the output is greater than or equal to 0.5 (meaning the probability is greater than 50%), then the prediction is that the tumor is malignant.

The cost function is such that if the sigmoid predicts a 1 (meaning the probability the tumor is malignant is 100%) and the real classification is benign, then there is an infinite cost.  The same is true if it predicts a 0 (meaning the probability of being malignant is 0%) and the real classification is malignant.  The cost equation ends up being J = -\frac{1}{m}\sum_{i=1}^m \left[ y_i\log(h_i) + (1-y_i)\log(1-h_i) \right], where m is the number of data points, h_i is the output of the prediction sigmoid for data point i, and y_i is its classification of 0 or 1 (benign/malignant).  The idea is that if you’re 100% confident and end up being wrong, you have a very large cost.
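The original code was written in Octave, but a minimal Python/NumPy sketch of the hypothesis and cost just described looks roughly like this (X is assumed to carry a leading column of ones for the \theta_0 term):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, X):
    """Probability of malignancy; classify as malignant when >= 0.5."""
    return sigmoid(X @ theta)

def cost(theta, X, y):
    """Logistic (cross-entropy) cost: confident wrong answers are penalized heavily."""
    h = predict(theta, X)
    eps = 1e-12                               # keep log() from blowing up at exactly 0 or 1
    return -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```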

The job is then to find the vector \vec{\theta} that minimizes this cost function.  The gradient descent method is one choice, but for the first iteration I tried Octave’s built-in minimization function ‘fminunc’ with 400 iterations.

Initial Results

The ‘fminunc’ routine returned a vector of parameters \vec{\theta} that minimized the cost function specified above.  I then used the sigmoid with these values of \vec{\theta} to make predictions on the remaining ~350 data points.  For each data point, the feature vector \vec{X} was dotted with my parameter vector and fed as the input to the sigmoid function.  If this number was greater than or equal to 0.5, the model predicted the tumor was malignant.

The prediction accuracy ends up being 98%!

Gradient Descent

Next I used a Gradient Descent (GD) minimization algorithm instead of the built-in ‘fminunc’ feature to compare results.  The results when using GD were very sensitive to the learning rate parameter \alpha .  If I used \alpha = 0.1 things remained pretty stable and gave reasonable accuracy, but it took more iterations to get there.  The accuracy ended up exceeding that of ‘fminunc’ very slightly, with the best value being 98.6% compared to fminunc’s 98.2%.  Essentially it classified one more of the data points correctly.

For \alpha = 0.5 the transient behavior became unstable, with the log functions returning infinities.  The algorithm ignored these results and continued to minimize, and the output ended up being similar to cases where this wasn’t an issue, but the descent was not a smooth march towards the minimum.  See the figures below for a comparison between the methods and an example of an unstable cost output as a function of the number of iterations.
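For reference, a sketch of the batch gradient descent update (again in Python rather than the Octave I actually used, reusing the sigmoid and cost helpers from the sketch above):

```python
def gradient_descent(X, y, alpha=0.1, iterations=400):
    """Batch gradient descent for logistic regression.

    alpha is the learning rate: 0.1 descends smoothly here, while larger
    values (e.g. 0.5) can overshoot early on and send the logs to infinity.
    """
    m, n = X.shape
    theta = np.zeros(n)
    cost_history = []
    for _ in range(iterations):
        h = sigmoid(X @ theta)
        gradient = X.T @ (h - y) / m            # gradient of the cost J
        theta -= alpha * gradient
        cost_history.append(cost(theta, X, y))  # track cost vs. iteration
    return theta, cost_history
```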

What the Model Tells Us

Once we have parameters that give us high accuracy in prediction, what can we learn from it?  If you look at the sigmoid function it’s clear that when the input argument is positive, the answer tends towards 1 and when it’s negative it tends towards 0.  The more positive/negative the input is, the closer to 1/0 the output is.  So instead of writing out the sigmoid function, let’s just look at the input and see the conditions that make it positive or negative to garner some intuition.

The input is just \vec{\theta}\cdot \vec{X} = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_9x_9.  The machine learning algorithm outputs the following for the parameter vector:  \vec{\theta} = [ -8.2, 0.52, -0.10, 0.42, 0.27, 0.11, 0.41, 0.12, 0.09, 0.31 ].  Recall that \vec{X} is our vector of features (with x_0 = 1 so that we can have a \theta_0 = const term).  The description of each feature is given below:

#  Attribute:  Domain

   0. Constant:  1 for all
   1. Clump Thickness:  1 – 10
   2. Uniformity of Cell Size:  1 – 10
   3. Uniformity of Cell Shape:  1 – 10
   4. Marginal Adhesion:  1 – 10
   5. Single Epithelial Cell Size:  1 – 10
   6. Bare Nuclei:  1 – 10
   7. Bland Chromatin:  1 – 10
   8. Normal Nucleoli:  1 – 10
   9. Mitoses:   1 – 10
Now, the first parameter, which merely multiplies the constant 1, is equal to -8.2.  This means that in the absence of any other feature information, it’s most likely that the tumor is benign, not malignant.  Good news!  The next highest parameter value is 0.52, which multiplies the Clump Thickness; this feature seems to be the most strongly correlated with being malignant.  The third parameter value is less than zero, indicating that the higher the Uniformity of Cell Size, the more negative the argument becomes and therefore the more likely the tumor is to be benign.
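To make the interpretation concrete, here is a small, purely hypothetical example of pushing a made-up feature vector through the fitted parameters quoted above:

```python
import numpy as np

theta = np.array([-8.2, 0.52, -0.10, 0.42, 0.27, 0.11, 0.41, 0.12, 0.09, 0.31])

# Hypothetical tumor: every feature scored 5 out of 10 (plus the leading 1).
x = np.array([1] + [5] * 9)

argument = theta @ x                        # the input to the sigmoid
probability = 1.0 / (1.0 + np.exp(-argument))
print(argument, probability)                # ~2.55 -> ~0.93, so this case leans malignant
```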

Next Steps

Next we’ll see how the prediction accuracy changes when using a more complicated feature set (e.g. including combinations of features like x_1x_2x_4^3).

Accuracy vs. iterations

Cost vs. iterations


Acknowledgments

Data came from the University of Wisconsin Hospitals, Madison, courtesy of Dr. William H. Wolberg.

Citations

1. O. L. Mangasarian and W. H. Wolberg: “Cancer diagnosis via linear programming”, SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18.

2. William H. Wolberg and O. L. Mangasarian: “Multisurface method of pattern separation for medical diagnosis applied to breast cytology”, Proceedings of the National Academy of Sciences, U.S.A., Volume 87, December 1990, pp 9193-9196.

3. O. L. Mangasarian, R. Setiono, and W. H. Wolberg: “Pattern recognition via linear programming: Theory and application to medical diagnosis”, in: “Large-scale numerical optimization”, Thomas F. Coleman and Yuying Li, editors, SIAM Publications, Philadelphia 1990, pp 22-30.

4. K. P. Bennett & O. L. Mangasarian: “Robust linear programming discrimination of two linearly inseparable sets”, Optimization Methods and Software 1, 1992, 23-34 (Gordon & Breach Science Publishers).

Synthesis and Characterization of Nanoporous Carbon

As part of a materials research group we were creating highly-ordered nanoporous carbon.  Not only is it a beautiful material to put in the Transmission Electron Microscope (TEM), it can have a very interesting effect on materials when they’re confined in the pores.

The Motivation

Our research group was interested in hydrogen storage materials.  Lithium borohydride (LiBH4) was a promising material in the literature:  it can be heated to release its hydrogen and cooled in a hydrogen bath to absorb it again.  Unfortunately, the temperatures and pressures required for this cycling make it an impractical choice for something like a fuel storage system in a hydrogen-powered car.  However, it was found that by confining this material in the nano-pores of a carbon framework, the temperature of hydrogen release from LiBH4 could be greatly reduced.  Characterizing this behavior was our aim.  But first, we needed a way to create consistent nano-pores in a carbon framework!

Creating the Nanoporous Carbon

We ultimately wanted well-controlled, hollow tubes that were very long and had a diameter of a few nanometers.  The technique/recipe was based on a paper by Meng et al. and can be described as “evaporation-induced self-assembly”.  Essentially, polymers were chosen that would self-assemble into different structures as a consequence of concentration gradients, and we chose a range that would result in a hexagonal arrangement of tubes.  You then bake the structure to cross-link it and lock everything in place, bake at a higher temperature (600C) to remove the hexagonally ordered polymer, leaving the tubes and the matrix, and finally bake at around 900C to remove all but the carbon atoms.  Voila!  Nanoporous carbon.

Characterization via TEM

This was an extremely tedious task, but one that came with great reward.  The goal was to find an image that was on-edge, showing the hollow tubes propagating through the material, and also a view looking down the tubes showing them arranged hexagonally.  I had a single-tilt stage and was imaging under bright field conditions.  Finding the lines was the much easier task, but it was still tricky to get a nice image because the only way to get good contrast was to have several tubes stacked on top of one another.  Kind of like looking at a hand tilted sideways so that the fingers all line up, I needed the tubes to line up so there was a portion of the sample that was much less dense than its surroundings and would therefore allow more electron penetration and hence a bright spot on the image.  See the left image below for the result.

Finding the “down the pipe” view was much, much harder.  Imagine you have tubes that are 2 inches wide and 20 feet long:  if you’re looking down them even slightly off-angle you won’t be able to see through to the other side.  I needed to find a comparable piece that happened to land that way on the sample stage by chance, but was also thin enough that the structure could be imaged by transmitted electrons.  Further, to find new chunks to look at you must zoom out, re-focus, find something, zoom in, and set focus again.  Far from trivial, especially when you’re turning all those knobs in the dark!  After many hours, I found the “Goldilocks” piece and it was rather stunning.  Maybe I’m biased.

Results

After confirming we had the proper material, several experiments were conducted to understand the kinetics of the reactions.  Most notably, the temperature for hydrogen release from LiBH4 was reduced from 460C to 220C and the emission of diborane gas (toxic) as a reaction byproduct was eliminated.  This work resulted in two papers:  Journal of Physical Chemistry C and Chemistry of Materials.


C10 lines / C10 pores

Ellipse Detection

I enjoy learning new algorithms and am interested in computer vision.  It turned out that my company had a use for recognizing small circles in images and defect maps, so I thought I’d use it as an opportunity to learn something.  You can find all the code and example data files you need at my GitHub repository on ellipse detection.

Background:

The standard way of finding the signature defect pattern was by human observation.  It’s obviously advantageous to automate this task, both for rigorous statistics and for the sake of freeing up people’s time.  Further, more robustly understanding the timing and frequency of this defect might allow us to learn something about the root cause.

Method:

I first wanted to prototype a solution in the easiest case, which is dealing with a defect map given as a list of (x,y) coordinates.  I found a paper by Xie et al. that reduces the task of finding an ellipse to fitting a single parameter by making a few assumptions about the points and using a voting scheme on a one-dimensional accumulator array.  This is probably overkill since my defects are largely circular, but I wanted to understand how they reduced the complexity of the problem and get practice incorporating algorithms from the literature.

The trick to reduce the problem from finding 5 parameters \{x_0, y_0, \alpha, a, b\} (center point, orientation,  length of major/minor axes) down to one is to simply assume the two chosen points are the two endpoints of the major axis.  If this is the case then \{x_0, y_0, a, \alpha \} can all be calculated and only b needs to be voted upon.

So you pick two points and calculate the 4 geometric terms.  Next, iterate over all other points, calculate a minor axis for each, and increment the vote for that choice of minor axis.  Once you’ve iterated over all other points for that pair, find the minor axis that received the most votes.  If the number of votes is above your threshold, you’ve found an ellipse!  This act of voting in a parameter space is called a Hough transform.

But as they say, the devil’s in the details.  It’s up to you to put in rules about minimum distances to consider and the appropriate value for the vote threshold.
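For illustration, here is a condensed Python sketch of that voting scheme in the spirit of the Xie et al. approach; the thresholds, bin width, and function name are placeholders of my own rather than values taken from the paper or from my repository code:

```python
import numpy as np
from itertools import combinations

def detect_ellipses(points, min_major=10.0, min_votes=20, bin_width=1.0):
    """1D Hough voting for ellipses: each point pair is assumed to span the major axis."""
    points = np.asarray(points, dtype=float)
    found = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a = 0.5 * np.hypot(x2 - x1, y2 - y1)          # half the assumed major axis
        if 2 * a < min_major:
            continue
        x0, y0 = 0.5 * (x1 + x2), 0.5 * (y1 + y2)     # center
        alpha = np.arctan2(y2 - y1, x2 - x1)          # orientation
        votes = {}
        for x, y in points:                           # every other point votes for a minor axis
            d = np.hypot(x - x0, y - y0)
            if d == 0 or d >= a:
                continue
            f = np.hypot(x - x2, y - y2)
            cos_tau = np.clip((a**2 + d**2 - f**2) / (2 * a * d), -1.0, 1.0)
            denom = a**2 - (d * cos_tau)**2
            if denom <= 0:
                continue
            b = np.sqrt(a**2 * d**2 * (1 - cos_tau**2) / denom)
            key = int(round(b / bin_width))
            votes[key] = votes.get(key, 0) + 1
        if votes:
            best_bin, count = max(votes.items(), key=lambda kv: kv[1])
            if count >= min_votes:                    # enough votes -> we found an ellipse
                found.append((x0, y0, a, best_bin * bin_width, alpha, count))
    return found
```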

Results:

I found I had far more points than I needed to get a reasonable fit, so I did some filtering.  The image below shows the original points (blue), the remaining points after filtering (red), and the final calculated ellipse (solid blue line).  Better accuracy can be obtained by changing the resolution of the bins used for grouping the axes, but I was more interested in stability than an accurate fit.

ellipse1


Method criticisms:

Although I was able to get it to work, I had to deviate somewhat from the paper.  For instance, I could not get good results by deleting points as soon as any type of ellipse was found.  A constant value for the minimum vote would sometimes be too high and miss ellipses, or too low and allow many nearby ellipses fitting the same points, and removing points after a bad ellipse was “found” would sometimes exclude a possible good ellipse.  I ended up keeping all points in the defect list even if they led to a “found” ellipse, because it’s possible that another ellipse using those same points is a better fit.  It worked better to leave the points in, find all the suggested ellipses, and then choose the best ones by ruling out nearby alternatives with lower vote counts.

Conclusion:

The prototype is successful in finding ellipses with the small file set I tested, but there is a lot of room for improvement in tuning the various parameters and rules to strike a balance between speed and accuracy.

