Weasel Program - Python + C Version

This is a continuation of recent posts on the Weasel Program. There are various claims about what the program demonstrates about the processes that drive evolution. And I am trying to sort it out for myself.

Computer models are great for studying the relative importance of the factors affecting the thing being modeled. One aspect of this is sensitivity analysis. To do a sensitivity study of the parameters affecting the Weasel Program, it would be convenient to use Python, which would let me use tools such as matplotlib and numpy. However, Python is relatively slow; C is much faster. In this post I show how to use SWIG (Simplified Wrapper and Interface Generator) to get a combination that is just about the best of both languages. I've used SWIG since the '90s and it remains popular today. It has been much improved over the years, and it works not only with C but with C++ and about 20 other languages as well. So it's a tool made for polyglots.
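To make the goal concrete, here is a minimal pure-Python sketch of the kind of sensitivity sweep I have in mind. It is a toy stand-in, not the real program: the function and its parameters are my own simplification of the model described below (target gene value fixed at 0, single surviving parent per generation).

```python
import random

def generations_to_converge(genome_len, num_genes, generation_size,
                            mutation_probability, max_generations=10000):
    """Toy weasel: generations until a random parent evolves to the all-zero target."""
    parent = [random.randrange(num_genes) for _ in range(genome_len)]
    for generation in range(1, max_generations + 1):
        fittest, max_fitness = parent, -1
        for _ in range(generation_size):
            # Each gene either mutates to a random gene or is copied from the parent.
            child = [random.randrange(num_genes)
                     if random.random() < mutation_probability else g
                     for g in parent]
            fitness = sum(g == 0 for g in child)  # target gene value is 0
            if fitness > max_fitness:
                fittest, max_fitness = child, fitness
        parent = fittest  # only the fittest child survives
        if max_fitness == genome_len:
            return generation
    return max_generations

# Sweep one parameter while holding the others fixed:
random.seed(1)
for p in (0.01, 0.02, 0.05):
    print(p, generations_to_converge(20, 10, 100, p))
```

The sweep pattern is the point: once the model is a callable function, matplotlib and numpy can be pointed at the results.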

The first thing I did was put all the source code in the same folder (weasel.c and weasel-driver.py). Then I refactored the C program I originally wrote. Now that I understand the problem better, I improved its design a bit. (All software designs evolve, no pun intended.) Most importantly, I decided which of the model's defining parameters and functions need to be externally available. (Following the Go convention, each name I thought needed to be publicly available begins with a capital letter.) Actual implementation details remain hidden. Here is the refactored weasel.c file.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/time.h>

/* Weasel Program -- evolutionary simulation demonstration

A simple computer simulation demonstrating
that the process that drives natural selection
(based on random variation combined with non-random selection)
is qualitatively different from results based on pure random selection.

See Wikipedia for more info on Weasel Program.

Program has been made more general by allowing a gene to be any integer.

Program allows selection process to be modified.
That is, does not have to be number of matching genes.

Basic algorithm:
    Generate children from single parent keeping most fit as next parent.
    Single parent = no crossover

    Target genome does not drift (change, shrink, enlarge).
    Target genome is array of integers.
    So a gene is just an integer value (from 0 to Num_genes - 1).

    Stop when child genome fitness >= target fitness

    Note: to save space, child is discarded immediately if not most fit
    in generation so far.

Note, used following command lines for speed:
    gcc -O3 weasel.c -o weasel
or debugging:
    gcc -Wall -g weasel.c -o weasel.dbg
    gdb weasel.dbg
*/

// Avoids any malloc/free issues:
#define MAX_GENOME_LEN 10000
#define MAX_NUM_GENES 10000

// Genome storage (implementation detail, not exported):
static int PARENT[MAX_GENOME_LEN];
static int CHILD[MAX_GENOME_LEN];
static int FITTEST[MAX_GENOME_LEN];

// Define these to control simulation:
// (Give some defaults to get started with a sensitivity analysis.)

// Represents simulation parameters.
int Genome_len = 100;  // <= MAX_GENOME_LEN
int Num_genes = 30;  // <= MAX_NUM_GENES
int Generation_size = 100;
double Mutation_probability = 0.02;
double Fitness_err = 0.01;
double Target_fitness = 30;

// Control simulation.
int Max_generations = 10000;

// Represents evolutionary model.
double calc_child_fitness() {
    // For this particular model:
    // Fitness model is completely 'internalized'.
    // Our strategy is centered only on the genome itself.
    // No external world or creature to simulate.
    // No effect on child creation mechanism.
    // Actual fixed target gene value does not matter (so set = 0).
    // Only need match particular value at particular position
    // on the target genome.
    const int target[MAX_GENOME_LEN] = {0};
    int i;
    double fitness = 0.;
    for (i = 0; i < Genome_len; i++) {
        fitness += (double)(CHILD[i] == target[i]);
    }
    return fitness;
}

// Represents reproduction and gene semantics.
void create_child() {
    int i;
    for (i = 0; i < Genome_len; i++) {
        // drand48() = 48 bit uniform 0.0 to 1.0
        if (drand48() <= Mutation_probability) {
            CHILD[i] = rand() % Num_genes;
        } else {
            CHILD[i] = PARENT[i];
        }
    }
}

void create_random_parent() {
    // Setup first generation.
    int i;
    for (i = 0; i < Genome_len; i++) {
        PARENT[i] = rand() % Num_genes;
    }
}

/* Starting with a parent,
generate children that have chance of mutated genes.
Most fit (only) child survives to become next generation's parent.
(A parent never survives from one generation to the next.) */
int Evolve() {  // generations needed

    int i, j;
    double fitness, max_fitness;

    // Assume first generation parent fitness is low.
    int generation = 0;
    max_fitness = 0.0;

    // Create the first-generation random parent.
    create_random_parent();

    // Iterate until a child reaches target fitness or too many generations.
    for (generation = 1; generation < Max_generations; generation++) {
        if (max_fitness >= Target_fitness - Fitness_err) break;
        max_fitness = 0;  // yes, a generation can go downhill!
        for (i = 0; i < Generation_size; i++) {
            create_child();
            fitness = calc_child_fitness();
            if (fitness >= max_fitness) {
                max_fitness = fitness;
                for (j = 0; j < Genome_len; j++) {
                    FITTEST[j] = CHILD[j];
                }
            }
        }
        for (j = 0; j < Genome_len; j++) {
            PARENT[j] = FITTEST[j];
        }
    }
    return generation;
}

int main() {

    struct timeval start_time, end_time;

    // Seed the random number generators (rand and drand48).
    srand(time(NULL));
    srand48(time(NULL));

    Genome_len = 100;
    Num_genes = 30;
    Generation_size = 100;
    Mutation_probability = 0.02;
    Fitness_err = .01;
    Target_fitness = Genome_len;

    Max_generations = 10000;

    printf("LEN: %4d, NUM: %4d, SIZE: %4d, MUTATION: %7.3f\n",
           Genome_len, Num_genes, Generation_size, Mutation_probability);

    int i;
    int num_trials = 10;
    double total_time = 0.0;
    int total_generations = 0;
    for (i = 0; i < num_trials; i++) {
        gettimeofday(&start_time, NULL);

        total_generations += Evolve();

        gettimeofday(&end_time, NULL);
        long microseconds =
            (end_time.tv_sec * 1000000 + end_time.tv_usec) -
            (start_time.tv_sec * 1000000 + start_time.tv_usec);
        total_time += microseconds / 1000000.;
    }

    double time_per_trial = total_time / num_trials;
    int generations_per_trial = total_generations / num_trials;

    printf("TRIALS: %4d, AVE. GEN: %4d/trial, AVE. TIME: %7.3f sec/trial\n",
           num_trials, generations_per_trial, time_per_trial);

    return 0;
}

Some things to notice. I went from characters to integers so I could use any number of genes I wanted. I increased the size of the static arrays so I wouldn't need to worry about accidental overflow. I defined some defaults for the model parameters. And I made corresponding changes to the main function so I could test all the other changes. (One of the issues with SWIG is that sometimes it makes testing harder. And it is a general best practice anyway to do as much testing before integration as practical.) There was no need to remove main before making it a Python-callable module using SWIG.

Here is the corresponding SWIG interface file.

/* SWIG Interface File: weasel.i */
%module weasel
%{
extern int Genome_len;
extern int Num_genes;
extern int Generation_size;
extern double Mutation_probability;
extern double Fitness_err;
extern double Target_fitness;
extern int Max_generations;

extern int Evolve();
%}

extern int Genome_len;
extern int Num_genes;
extern int Generation_size;
extern double Mutation_probability;
extern double Fitness_err;
extern double Target_fitness;
extern int Max_generations;

extern int Evolve();

The thing to note here is how this duplication makes it more difficult to add or change things. It's like defining a class, something I usually try to avoid when trying to understand a new idea or prototype an unfamiliar algorithm. But this is evidence for the argument that I should have used C++ from the start rather than C.

And these are the commands I used to create the Python callable weasel module. (I'm using Ubuntu 12.04, and installed SWIG at ~/apps/swig. So your commands might need to be a little different.)

#!/usr/bin/env bash
~/apps/swig/bin/swig -python weasel.i
gcc -c -fpic weasel.c weasel_wrap.c -I/usr/include/python2.7
gcc -shared weasel.o weasel_wrap.o -o _weasel.so

Finally, here is how I refactored the Python version I originally posted.

'''Weasel Program -- evolutionary simulation demonstration

A simple computer simulation demonstrating
that the process that drives natural selection
(based on random variation combined with non-random selection)
is qualitatively different from results based on pure random selection.

See Wikipedia for more info on Weasel Program.

This is a driver accessing via SWIG a C module that does the evolutionary stuff.
This was done for performance. See the weasel.i interface file for what's accessible.
'''

import random
import time
import weasel as w  # this is the C module accessible via SWIG


# Adjust the following to see how they affect number of generations
# needed to reach fitness goal and time to convergence.

# Set simulation parameters:
wv = w.cvar
wv.Genome_len = 30
wv.Num_genes = 30
wv.Generation_size = 100
wv.Mutation_probability = 0.02
wv.Fitness_err = .01
wv.Target_fitness = wv.Genome_len

# Control the simulation:
wv.Max_generations = 10000

fmt_str = "LEN: %4d, NUM: %4d, SIZE: %4d, MUTATION: %7.3f\n"
fmt_str_vars = (wv.Genome_len, wv.Num_genes,
    wv.Generation_size, wv.Mutation_probability)
print fmt_str % fmt_str_vars

# To keep track of performance.

num_trials = 1
total_time = 0.
total_generations = 0
for i in range(num_trials):

    start_time = time.time()
    generations = w.Evolve()
    dt = time.time() - start_time

    total_generations += generations
    total_time += dt

    print "TRIAL: %3d" % (i+1), "GEN: %4d" % generations, "TIME: %7.2f sec" % dt

fmt_str = "   AVERAGE GEN: %4d TIME: %7.2f sec/trial"
print fmt_str % (total_generations / num_trials, total_time / num_trials)

I had to change its name to weasel-driver.py because SWIG generates its own weasel.py file. The result is a Python program that is only slightly slower than the pure-C version.

Weasel Program - Go Version

One more implementation of the Weasel Program. (See previous two posts.) Then I'll get into the simulation issues.

I really like the ideas behind Go. Based on the arguments for creating it, which are still largely theoretical (not much data yet), I hope it becomes popular for the type of programming problems it's designed for. It's mostly very C-like, which I think is good. But unfortunately, this little simulation doesn't let it show off any of the advantages it has over C.

I don't have very much experience in Go and so am taking every opportunity I can to see what it can do.

My bug rate per line of code has always been pretty consistent regardless of language. Go seems no different. Performance-wise, for this particular problem, Go split the difference in execution time between the Python program and the C program.

/* Weasel Program -- evolutionary simulation demonstration

A simple computer simulation demonstrating 
that the process that drives natural selection
(based on random variation combined with non-random selection)
is qualitatively different from results based on pure random selection.

See Wikipedia for more info on Weasel Program.

    Genome as string representing list of genes modelled as chars.
    Target genome is a fixed string of chars.
    Fixed mutation probability.
    Fixed number of children per generation.

Basic algorithm:
    Generate children from single parent keeping most fit as new parent
    for next generation.

    Stop when child genome = target genome

    Note: to save space, child is discarded immediately if not most fit
    in generation so far.
*/

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// Adjust the following to see how they affect number of generations and
// time to convergence. (Convenient to define here, visible and easy to
// change.)
// Gene alphabet (must cover every character in the target):
var genes = []byte("ABCDEFGHIJKLMNOPQRSTUVWXYZ !")
var targetGenome = []byte("METHINKS IT IS LIKE A WEASEL!")
var mutationProbability float32 = 0.05
var numChildrenPerGeneration int = 100

// Useful to define globally
var genomeLen, perfectFitnessScore, numGenes int
var parentGenome, childGenome, survivingChildGenome []byte

func init() {

    genomeLen = len(targetGenome)
    perfectFitnessScore = len(targetGenome)
    numGenes = len(genes)

    // First generation parent is just a random set of genes.
    parentGenome = make([]byte, genomeLen)
    for i := range parentGenome {
        parentGenome[i] = genes[rand.Intn(numGenes)]
    }

    childGenome = make([]byte, genomeLen)
    survivingChildGenome = make([]byte, genomeLen)
}

func calcFitness(genome []byte) (fitness int) {
    for i, targetGene := range targetGenome {
        if genome[i] == targetGene {
            fitness++
        }
    }
    return
}

func createChild() {
    for i, parentGene := range parentGenome {
        if rand.Float32() < mutationProbability {
            childGenome[i] = genes[rand.Intn(numGenes)]
        } else {
            childGenome[i] = parentGene
        }
    }
}

func main() {

    var i, generation, fitness, maxFitness int

    startTime := time.Now()
    // Let user know how we started.
    maxFitness = calcFitness(parentGenome)
    fmt.Printf("%3d %3d %s\n", generation, maxFitness, string(parentGenome))

    // Evolve.  Loop each generation.
    for maxFitness < perfectFitnessScore {
        generation++
        maxFitness = 0
        for i = 0; i < numChildrenPerGeneration; i++ {
            createChild()
            fitness = calcFitness(childGenome)
            if fitness >= maxFitness {
                maxFitness = fitness
                copy(survivingChildGenome, childGenome)
            }
        }
        copy(parentGenome, survivingChildGenome)
    }
    fmt.Printf("%3d %3d %s\n", generation, maxFitness, string(parentGenome))
    elapsedTime := time.Since(startTime)
    fmt.Printf("Elapsed time: %v\n", elapsedTime)
}

Weasel Program - C Version

I've just started playing around with my Python version of the Weasel Program (see my last post). I can already see that if I want to automate scenarios (for a range of parameters: target genome length, mutation probability, and generation size), I will need to address performance. So I think I'll just bite the bullet now.

Here is a version I just wrote in C. I have a vague notion I can refactor to call C from Python if my investigation of the Weasel gets that far. And I am under no illusions that even C is up to the task of handling simulated genomes anywhere near the size found in real life, but the principle that faster is better still holds.

Just in passing, I've been using C for over 30 years, and during that time I've also used a lot of other languages. But if I had to choose one computer language to program in exclusively, C would be my choice. (First and foremost, I think of C++ as a better C. I'm not using C++ here because the OO design style, verbosity, and memory overhead are not needed for such a small program.) But I digress.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/time.h>

/* Weasel Program -- evolutionary simulation demonstration

A simple computer simulation demonstrating 
that the process that drives natural selection
(based on random variation combined with non-random selection)
is qualitatively different from results based on pure random selection.

See Wikipedia for more info on Weasel Program.

    Genome as string representing list of genes modelled as chars.
    Target genome is a fixed string of chars.
    Fixed mutation probability.
    Fixed number of children per generation.

Basic algorithm:
    Generate children from single parent keeping most fit as new parent
    for next generation.

    Stop when child genome = target genome

    Note: to save space, child is discarded immediately if not most fit
    in generation so far.


Note, used following command lines for speed:
    gcc -O3 weasel.c -o weasel
or debugging:
    gcc -Wall -g weasel.c -o weasel.dbg
    gdb weasel.dbg
*/

// Adjust the following to see how they affect number of generations
// and time to convergence:
const char TARGET_GENOME[] = "METHINKS IT IS LIKE A WEASEL";
const float MUTATION_PROBABILITY = 0.05;
const int NUM_CHILDREN_PER_GENERATION = 100;

// Also convenient to define globally.
const char GENES[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

// minus null terminator
const size_t GENOME_LEN = sizeof(TARGET_GENOME) - 1;
const size_t NUM_GENES = sizeof(GENES) - 1;
const int PERFECT_FITNESS_SCORE = sizeof(TARGET_GENOME) - 1;

/* Mechanism of evolution. */

int calc_fitness(const char * child_genome) {
    // How many positions match the DNA of the perfect child?
    int i, sum = 0;
    for (i = 0; i < GENOME_LEN; i++) {
        sum += (child_genome[i] == TARGET_GENOME[i]);
    }
    return sum;
}

/* Usually, a child is different from its parent.
That is, allow chance of mutation. */
void create_child(char * parent_genome, char * child_genome) {
    int i;
    char parent_gene;
    float random_num;
    char random_gene;
    for (i = 0; i < GENOME_LEN; i++) {
        parent_gene = parent_genome[i];
        random_num = (float)rand() / (float)RAND_MAX;  // 0.0 to 1.0
        random_gene = GENES[rand() % NUM_GENES];
        child_genome[i] =
            random_num <= MUTATION_PROBABILITY ?
            random_gene : parent_gene;
    }
    child_genome[i] = 0;  // null terminator, if printed as string

/* Starting with a parent,
generate children that have chance of mutated genes.
Most fit (only) child survives to become next generation parent.
(A parent never survives from one generation to the next.) */
int main() {

    struct timeval start_time, end_time;
    gettimeofday(&start_time, NULL);

    char parent_genome[GENOME_LEN+1];   // string terminators
    char child_genome[GENOME_LEN+1];
    char surviving_child_genome[GENOME_LEN+1];

    int i, child_fitness, max_fitness = 0;

    // Not nowadays considered a great random number generator,
    // but good enough for our purposes.
    srand(time(NULL));

    // Initially, just a random string genome of character genes.
    // Fitness will no doubt be quite low.
    int generation = 0;
    for (i = 0; i < GENOME_LEN; i++) {
        parent_genome[i] = GENES[rand() % NUM_GENES];
    }
    parent_genome[i] = 0;  // null terminator
    max_fitness = calc_fitness(parent_genome);
    printf("%3d %3d %s\n", generation, max_fitness, parent_genome);

    // Iterate until a child reaches perfection.
    while (max_fitness < PERFECT_FITNESS_SCORE) {
        generation++;
        max_fitness = 0;  // yes, a generation can go downhill!
        for (i = 0; i < NUM_CHILDREN_PER_GENERATION; i++) {
            create_child(parent_genome, child_genome);
            child_fitness = calc_fitness(child_genome);
            if (child_fitness >= max_fitness) {
                max_fitness = child_fitness;
                strcpy(surviving_child_genome, child_genome);
            }
        }
        strcpy(parent_genome, surviving_child_genome);
    }
    printf("%3d %3d %s\n", generation, max_fitness, parent_genome);

    gettimeofday(&end_time, NULL);
    long microseconds = 
        (end_time.tv_sec * 1000000 + end_time.tv_usec) - 
        (start_time.tv_sec * 1000000 + start_time.tv_usec);
    printf("Elapsed time: %7.2fms\n", microseconds / 1000.);

    return 0;
}

The C version has about twice as many lines of code as the Python version. IMHO, the C code is more complicated (even though I managed to avoid dynamic memory allocation). I did not keep count, but the defect rate per line of code (all minor defects, for such a simple simulation) seemed about the same for both programs.

The performance reward? Execution is about seven times faster. Of course, increase the size of the simulation parameters and this number would grow.

Weasel Program -- Python Version

I recently stumbled across a type of thought experiment that is both very simple to simulate on a computer and yet very interesting to study and think about. That's too good to pass up!

It's called The Weasel Program. Its "aim is to demonstrate that the process that drives evolutionary systems -- random variation combined with non-random cumulative selection -- is different from pure chance." It's a very basic example of a genetic algorithm.

(I've had a lot of fun in the past with another class of computer optimization techniques based on simulated annealing, but I have to save that story for another day.)

It also turns out that Dawkins's Weasel has created a bit of a controversy between certain teleological argument believers, particularly Intelligent Design types, and certain outspoken scientists. See The War of the Weasels. I see the Weasel controversy as a very simplified (and so perhaps instructive) version of the difficulties encountered in the Global Climate Model verification and validation debate.

But before getting into that and possibly going off into the deep end of the pool, I know I need to start off carefully and come up to speed by actually implementing a version (or two or three) of The Weasel Program. And then playing around until I get a feel for the numbers.

Of course, there are many examples of Weasel programs in various computer languages on the web. I could just download something. But to really understand code, I have to write code.

I've been coding a lot lately in Java and PHP. Practical languages -- but personally not very exciting or pleasing to write in. So as a chance to keep fresh in other languages I know and like to code in a lot more, I decided to implement a Weasel in Python. Here it is:

'''Weasel Program -- evolutionary simulation demonstration

A simple computer simulation demonstrating that the process that drives natural selection
(based on random variation combined with non-random selection)
is qualitatively different from results based on pure random selection.

See Wikipedia for more info on Weasel Program.

    Genome modelled as string representing list of genes modelled as chars.
    Target genome is a fixed string of chars.
    Fixed mutation probability.
    Fixed number of children per generation.
Basic algorithm:
    Generate children from single parent keeping most fit as new parent
    for next generation.
    Stop when child genome = target genome
    Note: to save space, child is discarded immediately if not most fit
    in generation so far.
'''

import random
import time

# Also implementing this in other languages, so keep track of performance.
start_time = time.time()

# Adjust the following to see how they affect
# number of generations and time to convergence:
TARGET_GENOME = "METHINKS IT IS LIKE A WEASEL"
MUTATION_PROBABILITY = 0.05
NUM_CHILDREN_PER_GENERATION = 100

# Convenient to define here:
GENES = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
GENOME_LENGTH = len(TARGET_GENOME)  # how many genes can match?
PERFECT_FITNESS_SCORE = GENOME_LENGTH  # all genes match target

# The mechanism of evolutionary selection.

def calc_fitness(child):
    '''How many positions match the target genome?'''
    fitness = 0
    for i in range(GENOME_LENGTH):
        if child[i] == TARGET_GENOME[i]:
            fitness += 1
    return fitness

def create_child(parent):
    '''Allow chance of mutation.'''
    return [(random.choice(GENES) if random.random() < MUTATION_PROBABILITY 
               else gene) for gene in parent]

# Initialization (the first generation is just a random string of genes)
generation = 0
surviving_child = [random.choice(GENES) for _ in range(GENOME_LENGTH)]
max_fitness = calc_fitness(surviving_child)
print "%3d" % generation, "%3d" % max_fitness, "".join(surviving_child)

# Then evolution proceeds as series of successive generations,
# using random mutations and survival of the fittest.
while max_fitness < PERFECT_FITNESS_SCORE:
    parent = surviving_child[:]
    generation += 1
    max_fitness = 0
    for _ in range(NUM_CHILDREN_PER_GENERATION):
        child = create_child(parent)
        child_fitness = calc_fitness(child)
        if child_fitness > max_fitness:
            max_fitness = child_fitness
            surviving_child = child

print "%3d" % generation, "%3d" % max_fitness, "".join(surviving_child)

print "Elapsed time: %7.2fms" % ((time.time() - start_time) * 1000.)

The program runs in about 100 ms (give or take 50%) on my desktop machine, depending on how lucky the mutations are for that particular run.

One of the things I like about Python is how easy it is to read, even for someone who is not that familiar with programming in general or Python in particular. But there are a couple of places in the above program where the syntax might seem confusing. One is where list comprehensions are used, for example:

surviving_child = [random.choice(GENES) for _ in range(GENOME_LENGTH)]

surviving_child is a list of random genes of length GENOME_LENGTH. The underscore (_) is just a common way of indicating that the for loop's index is not being used. And the comprehension's syntax can seem strange, since it puts the body of the loop before the for keyword.
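For example, here is a tiny self-contained illustration of that comprehension (the short GENOME_LENGTH is just for the demo):

```python
import random

GENES = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
GENOME_LENGTH = 5

# Build a list of GENOME_LENGTH random genes; the loop index is unused.
surviving_child = [random.choice(GENES) for _ in range(GENOME_LENGTH)]
print(len(surviving_child))                       # 5
print(all(g in GENES for g in surviving_child))   # True
```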

Another place where Python may be confusing is its form of what C-like languages call the ternary operator.

random.choice(GENES) if random.random() < MUTATION_PROBABILITY else gene

In a C-like language, this expression would be:

random.random() < MUTATION_PROBABILITY ? random.choice(GENES) : gene

Everyone Should Learn About Software

Can we trust climate models? This was the title of a recent blog post by Tamsin Edwards. She says:
Ever since I started scientific research, I’ve been fascinated by uncertainty. By the limits of our knowledge. It’s not something we’re exposed to much at school, where Science is often equated with Facts and not, as it should be, with Doubt and Questioning. My job as a climate scientist is to judge these limits, our confidence, in predictions of climate change and its impacts, and to communicate them clearly.
I liked this article. However, I do not think the key public issue with climate models is a lack of appreciation of the uncertainty inherent in science. Rather, it is ignorance about the difficulties and limitations inherent in computer modelling of natural phenomena, especially utterly complex systems like the climate. BTW, I think this ignorance about computer coding extends to some scientists as well.

This is an aspect of a more general problem: the need for everyone in modern society to understand what software is all about. And the fact that schools are not teaching this.

For example, there was an article in the Wall Street Journal over the weekend critical of the US education system titled Sorry, College Grads, I Probably Won't Hire You. The author, Kirk McDonald, argues that "States should provide additional resources to train and employ teachers of science, technology, engineering and math, as well as increase access to the latest hardware and software for elementary and high-school students." His advice: "If you want to survive in this economy, you'd be well-advised to learn how to speak computer code."

Many programmers don't think this is an issue. Typical Programmer says:
This is another article beating the "everyone must learn to code" drum. Employers can’t find enough people with programming skills, schools aren’t turning out enough engineers, jobs at these cool companies are left unfilled because American students are too lazy or short-sighted to spend a summer learning "basic computer language." If only it was that simple. I have some complaints about this "everyone must code" movement, and Mr. McDonald’s article gives me a starting point because he touched on so many of them.
I recommend reading the entire post. But, in the end, I am not convinced by Typical Programmer Greg Jorgensen's points. Simply put, the reason to learn a computer language is not to be able to program, but to have a foundation from which to understand the nature of software. As an engineer, I have taken a lot of courses in science and math as well as engineering. But I am not a scientist or mathematician. I am an engineer. Still, it is necessary for me to understand science and math in order to be a competent engineer. By analogy, many people now think it takes an understanding of software to be a productive member of modern society. I think so too. And the best way to learn is by doing.

Older and Wiser... Up to a Point

Having had a lot of fun, and having gained a lot of software knowledge and wisdom over the years, I do not look forward to the day when, like an old pro athlete, I have to retire from being a programmer because I physically can't do it anymore.

So, I was pleased to stumble across an IEEE Spectrum Tech Talk article that suggested I don't have much to worry about along that line for quite a while. Here is an excerpt:
Prejudice against older programmers is wrong, but new research suggests it's also inaccurate. A dandy natural experiment to test the technical chops of the old against the young has been conducted—or discovered—by two computer scientists at North Carolina State University, in Raleigh. Professor Emerson Murphy-Hill and Ph.D. student Patrick Morrison went to Stack Overflow, a Web site where programmers answer questions and get rated by the audience. It turned out that ratings rose with contributors' age, at least into the 40s (beyond that the data were sparse). The range of topics handled also rose with age (though, strangely, after dipping in the period from 15 to 30). Finally, the old were at least as well versed as the young in the newer technologies.

Of course, such a natural experiment can't account for all possible confounding factors.
This is in line with what I already knew about chess. I played a lot of chess when I was a kid. I was told then that a person's chess rating didn't drop all that much during the player's lifetime.

The article also addresses chess, but a bit ambiguously. Here is a figure the article presents:

It was hard for me to interpret the vertical axis. I am not sure if SD Units represent standard deviation or not. I remember that for I.Q., one standard deviation is about 15 points, but I don't know if there is an appropriate analogy that can be drawn here.

So I looked up Aging in Sports and Chess, which presents some experimental results documented in a 2007 paper by Ray C. Fair. It says that if you are at 100% of your maximum rating at age 35, then you are at 99% at age 40, 97% at age 50, 95% at age 60, 93% at age 70, 90% at age 80, and 80% at age 90. So it doesn't look like more than a slow, steady decline until people get into their 80s.

As with the Stack Overflow data, it is hard to judge the relationship of data about chess skill to programming skill. But all this data suggests I don't have too much to worry about for a long time.

What is Wrong with Risk Matrices? -- Part 2

This is a continuation of my last post.

Risk is probability times consequence for a given scenario. Wikipedia gives an example of a risk matrix as:

Risk has been granularized to reflect uncertainty, lack of data, and subjectivity. The vertical axis is probability. The horizontal axis is consequence. As can be seen from the figures in my previous post, this matrix is logically inconsistent with respect to iso-risk curves (lines of constant risk), whether the axis scales are linear or log-log. For example, if the scales are linear, it is mathematically impossible for any cell in the first row or first column to hold any value other than "Low"; if the axes are log-log, no diagonal can hold different values.
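The linear-scale claim can be checked numerically. Below is a small sketch on a risk matrix normalized to the unit square (the 5x5 grid and the normalization are my own, for illustration): the entire quantitative risk range of every first-row (or, by symmetry, first-column) cell lies below even the minimum risk of the top-right corner cell, so no ranking consistent with quantitative risk can rate those cells above the lowest category.

```python
# Normalized 5x5 linear grid: probability and consequence each split
# into five equal bins on [0, 1]; risk = probability * consequence.
bins = [(i / 5.0, (i + 1) / 5.0) for i in range(5)]

def risk_range(row, col):
    # Smallest and largest quantitative risk attainable inside a cell.
    p_lo, p_hi = bins[row]
    c_lo, c_hi = bins[col]
    return p_lo * c_lo, p_hi * c_hi

# Every cell in the first row tops out at risk 0.2 ...
first_row_max = max(risk_range(0, col)[1] for col in range(5))
# ... while the top-right corner cell never drops below 0.64.
corner_min = risk_range(4, 4)[0]
print(first_row_max < corner_min)  # True
```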

However, IMHO, the above example does not illustrate what is actually wrong with risk matrices. It is that these table values are being interpreted as risk values in the first place. Instead, risk matrix table values should be interpreted as risk categories. They measure our level of concern about a particular category of risk, not its actual value. The axes measure the "worst case scenario" values of probability and consequence for a given risk.

A risk value for any expected scenario at, say, a nuclear power plant could never be anything other than "Low". The very idea of an acceptable "Extreme" risk of "Certain" probability and "Catastrophic" consequences is ridiculous. No stakeholder would sign off on that. To be of practical utility, the table values cannot be interpreted as risk values.

A risk category is defined as the management process used to manage risks that are at a certain level of concern. There should be stakeholder consensus on risk categories. Risk categories can be ordered by the amount of process grading and tailoring they allow for ordered levels of concern.

For example, for the most "Extreme" risk category, it may be required that each and every risk, regardless of its probability or consequence in a worst case scenario, be formally documented, analyzed to an extreme level of detail and rigor, prioritized, mitigated to an extremely low level, tracked throughout its lifetime, and signed-off on periodically by all stakeholders. Risk categories of lower concern (High, Moderate, and Low) will allow grading and tailoring of these Extreme requirements to lower levels.

What's Wrong with Risk Matrices?

Risk is equal to probability times consequence. The Wikipedia entry for risk matrix has a reference to an article by Tony Cox titled What's Wrong with Risk Matrices? Unfortunately, the article is behind a paywall. So as far as I am concerned, it doesn't exist. But I was able to google a blog posting about the article.

Interesting was how Cox constructed a risk matrix and then drew lines of constant risk on it. Something like this:
The lines are hyperbolas with the axes as asymptotes.

According to the blog posting referenced above: "Cox shows that the qualitative risk ranking provided by a risk matrix will agree with the quantitative risk ranking only if the matrix is constructed according to certain general principles."

Before examining these general principles (and let me be clear: I fundamentally disagree with them), first things first.

I have always viewed risk matrices as having log-log scales. (And I'm not the only one looking at risk matrices this way.) Something like this:

Notice the constant values of risk are all straight lines with a slope of minus one. And note the risk contours in this example are separated by an order of magnitude, not just a doubling of risk value as in the previous figure. This means it is easier to represent a wider range of probabilities and consequence scenarios (measurable in dollar amounts) using a log-log scale.

But the most important reason why I think it is better to use a log-log scale is because risk categorization is subjective. And I believe that, where possible, subjective judgments like risk category should be measured in decibels. As I have written about in a previous post: The decibel is a log scale that simplifies overall power gain/loss calculations and is convenient for ratios of numbers that differ by orders of magnitude. Additionally, log scales have been useful in describing the potential of certain physical phenomena (e.g., earthquakes) and human perceptions (e.g., hearing). Thus, log scales can be useful when performing risk assessments and other related quality assurance activities.
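As an illustration of the decibel idea (the helper function and the dollar consequence values are hypothetical, not from the post), scenarios of equal risk land at the same dB value no matter how probability and consequence trade off:

```python
import math

def risk_db(probability, consequence_dollars):
    """Express risk = probability * consequence on a decibel-like
    log scale (10 * log10). Hypothetical helper for illustration."""
    return 10.0 * math.log10(probability * consequence_dollars)

# Two scenarios an order of magnitude apart in probability but equal
# in risk sit on the same iso-risk line (same dB value):
a = risk_db(0.1, 1e6)    # risk = 1e5
b = risk_db(0.01, 1e7)   # risk = 1e5
```

On a log-log plot these iso-risk lines are the straight, slope-minus-one diagonals described above.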

Next time, a rather loud (pun intended) criticism of Cox's general risk matrix principles. :-)

Failures During Runtime

Researchers recently published online a PDF entitled: A Characteristic Study on Failures of Production Distributed Data-Parallel Programs. The data they used was provided by Microsoft. The programs were all MapReduce-like in structure, composed of "declarative SQL-like queries and imperative C# user-defined functions."

Interesting to me was that the authors collected some actual statistics about the failures encountered during runtime. I love real-world numbers. However, I found their results a bit hard to follow. Here is what I got out of the paper.

They ignored operating system and hardware failures. Of the run-time errors considered: 15% were "logic" errors, 23% were "table" errors and 62% were "row" errors.

They gave examples of logic errors such as: cannot find DLLs or scripts for execution, accessing array elements with an out-of-range index, accessing dictionary items with a non-existent key, or other data-unrelated failures.

Table errors such as: accessing a column with an incorrect name or index, a mismatch between data schema and table schema, or other table-level failures.

And row errors such as: corrupt rows with exceptional data, using illegal arguments, dereferencing null column values, exceptions in user-defined functions, allowing out-of-memory errors, or other row-level failures.

Also interesting was that, according to the authors, 8% of runtime errors could not have been caught by programmers during testing using their current debugging toolset.

It seems possible to create check-in and checkout documentation processes for a development organization's SCM system that could automatically generate statistics similar to the above. I think this would have a positive effect on software quality. For example, the researchers suggest that many failures have a common fix pattern that could be automated. Whether the cost would be worth the effort--I don't know. But it does seem obvious that SCM should be a prime source of quality-related data.
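For instance, the kind of tally an SCM hook might produce can be sketched in a few lines (the record format is hypothetical; the counts are chosen to reproduce the paper's percentages):

```python
from collections import Counter

# Hypothetical failure records, tagged with the paper's three
# categories (logic / table / row):
failures = (["logic"] * 15) + (["table"] * 23) + (["row"] * 62)

counts = Counter(failures)
total = sum(counts.values())
percentages = {cat: 100.0 * n / total for cat, n in counts.items()}
# With these counts: logic 15%, table 23%, row 62%
```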

    Programmers Must Consider Risk

    There is a thoughtful programmer-oriented blog called The Codist written by Andrew Wulf. In a recent posting he starts off:
    I need to step outside my usual persona of writing about programming to comment on the happenings of the past few days. 
    In Boston two brothers decided to blow up the Marathon, and an hour from my house half the city of West, Texas was blown to pieces in a massive explosion.
    However, he goes on to discuss these events in a way that I don't think is actually outside the realm of programming. Why? He talks about risk.

    As I have written about before, there is always the risk that software may not perform correctly. The risks, and their associated benefits and consequences, will be different for different stakeholders. (I wrote that software should be independently verified and validated.) The software development effort must understand, communicate, and help deal with the risks associated with the software for all its stakeholders.

    In this post I simply note that software is an integral part of modern society. Thus, risk is an integral part of software engineering. In fact, risk is a fundamental concept to engineering in general.

    Software may have played an important part in both of the events Andrew mentions. For the Boston Marathon, I am pretty sure that data mining software such as face recognition algorithms were used in identifying the suspects. No details about the Texas event are publicly available yet, but with SCADA systems being common in plants nowadays, I can easily imagine software being important there too.

    Why are PC Sales Declining?

    On good data, PC sales are rapidly declining. Shocking! Everyone has an opinion about why this is happening. Mine? I think a lot of it has to do with a self-fulfilling belief currently held by many influential people in the PC industry. For example:
    “In a sense, these devices [smartphone, tablet, PC] are kind of blurring together,” Andreessen says. “A lot of the killer apps these days – and I would say this is true of Facebook, Twitter, Pinterest, and Gmail – you can use them on whatever device you want, or use them on all the devices at the same time.

    “I use the laptop at work, I use the phone when I am walking around – it’s the marrying of the smart device and the user interface back to the cloud that makes these things magical.”
    This has become a meme. A type of meme where believing in it makes it come true. And the PC business believes it. Look at the interface for Windows 8, formerly known as Metro. Look at the Unity interface for Ubuntu. Both obviously terrible interfaces for doing things only a PC is powerful enough to do. Yet the idea of one interface tied back to the cloud for all devices seems to be a truism among the PC leadership. So that's what it's going to be.

    The PC will be the equivalent of a big-screen TV.

    On the other hand, PCs can do so much more than smartphones and tablets can do. Is there anything else PCs should be doing?

    From a recent Alan Kay interview in Time Magazine, the interviewer (David Greelish) asked:
    "What do you think about the trend that these devices are becoming purely communication and social tools? What do you see as good or bad about that? Is current technology improving or harming the social skills of children and especially teens? How about adults?"
    To which Alan Kay replied:
    "Social thinking requires very exacting thresholds to be powerful. For example, we’ve had social thinking for 200,000 years and hardly anything happened that could be considered progress over most of that time. This is because what is most pervasive about social thinking is “how to get along and mutually cope.” Modern science was only invented 400 years ago, and it is a good example of what social thinking can do with a high threshold. Science requires a society because even people who are trying to be good thinkers love their own thoughts and theories — much of the debugging has to be done by others. But the whole system has to rise above our genetic approaches to being social to much more principled methods in order to make social thinking work.

    "By contrast, it is not a huge exaggeration to point out that electronic media over the last 100+ years have actually removed some of day to day needs for reading and writing, and have allowed much of the civilized world to lapse back into oral societal forms (and this is not a good thing at all for systems that require most of the citizenry to think in modern forms).

    "For most people, what is going on is quite harmful."
    Kay thinks PCs could be doing more. So why don't they? Do we lack the knowledge, wisdom, and skill to make it so?

    I've been working on making the PC a "personal and family web assistant". Software that does something to help us to, as Kay put it, "make social thinking work." A "device" that acts as our agent and family protector, working to optimize the relationship between our private lives and the WWW. The main component of the software can only run on a PC.

    The Solution is Not the Problem

    A great thing about agile software development is that it encourages problem decomposition. The decomposition may be functional, structural, or object oriented. Decomposition is my workhorse technique for handling software complexity. And complexity is my number one development problem.

    Since an agile goal is working code each iteration, decomposition is usually intended to produce real and useful (if incomplete) solutions. This means testing and data gathering are an integral part of the agile design process.

    Thus, agile development has some ideas that I think are appealing to every type of engineer. However, the agile programming methodology also has its hard-to-do parts.

    One weakness is that agile development tends to focus on the solution and not the problem. For example, people needing custom software will often submit their "requirements" in the form of a description of what they want their new GUI to look like. (This has happened to me many times.) In other words, they implicitly define the requirements by explicitly defining what they think the solution should be. Unfortunately, these kinds of customers tend to fit in well on an agile team. My experience is that such solutions tend to be mediocre at best. (I call these "Every Program Looks Like a Facebook Web Page" solutions.)

    Better is to define the requirements independent of what the eventual implementation might look like. As the British poet Edward Hodnett once said: “If you do not ask the right questions, you do not get the right answers.” The solution is not the problem. So keep them separate.

    But obviously, this is hard to do in an iterative development environment.

    Also, the software requirements are just information. And there is a big difference between information, knowledge, and wisdom. Time is required to become knowledgeable about the requirements (and domain expertise and experience) and even more time is required to determine the wisest solution.

    Interesting aside. I just googled:
    • "software information" (> 9 million hits)
    • "software knowledge" (> 1 million hits)
    • "software wisdom" (< 5000 hits).
    I guess "software wisdom" basically doesn't exist.

    Agile methods can also sometimes be an impediment in using my other workhorse technique for attacking complex problems: paradigm shifts. I talked about paradigms in my previous post.

    And the reason why is innovation. Coming up with an innovative solution to a hard software problem depends on coming up with a new analogy or a new paradigm shift. Agile methods commit to paradigms way too early and at too low a level for much chance of real innovation happening. So using agile methods for such problems is difficult.

    Software Meta Development Note - Paradigms

    Implementing complex software solutions to user requirements is what makes programming so difficult. That is, programming is a lot harder than just implementing algorithms. There are two common ways of handling this complexity--decomposition into multiple interfaces and "paradigm shifts" at interfaces. (These interfaces include functional, class, and data-structural APIs. They include user interfaces.)

    First let me clear up what I mean by a paradigm shift at an interface. It's where a more complex internal problem solution procedure (the paradigm) is presented (at the interface) as something simpler and easier to understand to the user of the procedure. For example, Newton's Method can be used to find the square root of a number. But it is more practical to implement a special function called sqrt rather than expose the complexity of a function called newton_raphson to the programmer, and let her figure out how to get a square root from it. There is a paradigm shift from Newton's Method to square root at the sqrt interface.
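To make the example concrete, here is a minimal sketch (function names and tolerances are mine) of that paradigm shift: a general newton_raphson root finder hidden behind a simple sqrt interface:

```python
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Generic Newton's Method root finder -- the complex paradigm."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def sqrt(a):
    """The simple paradigm presented at the interface: just a square
    root, hiding the Newton iteration on f(x) = x**2 - a."""
    return newton_raphson(lambda x: x * x - a, lambda x: 2.0 * x,
                          x0=max(a, 1.0))
```

The caller never sees the iteration; the paradigm shift happens entirely behind the sqrt name.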

    The subject of this post is that there are three different kinds of paradigms that are useful to think about: natural, artificial, and what I call synthetic.

    A natural paradigm is encountered a lot in object oriented programming. We can have a logical car object, checkbook object, or screen object. Using such real-life analogies makes our complex problems easier to understand and work with. Even without further decomposition, we already understand these natural, complex things. I was able to implement a robot/facility message handling system at an automated factory once by simulating the US postal system. In the digital simulacrum, electronic messages became letters, junk mail, priority mail, etc. Every robot had its logical mail box complete with flag. So did the cranes and other facility equipment. Some computers turned into post offices. Servers became mail centers. Every message had its sender and return zip codes. Etc. We knew the design would work -- the mail does get delivered in real life! And I could even explain what we were doing to the project's managers. :-)

    Natural paradigms have carried over to skeuomorphic graphical user interfaces (GUIs) that emulate objects in the physical world. Steve Jobs and Apple were famous for this. The concept of a file folder is a classic natural paradigm for users.

    An artificial paradigm is where the problem is expressed, at least in part, in terms of abstract, domain specific entities. A C.S. degree requires learning a lot of artificial computer science paradigms. Mathematicians, scientists, and engineers have their own artificial paradigms. An example of an artificial paradigm shift is mapping data from a hierarchical file structure to a relational database. In this case from an artificial paradigm to a different artificial paradigm.

    Perhaps the archetypal example of an artificial GUI paradigm is the QWERTY keyboard. Before the typewriter, no one would have a clue what a keyboard was for. After the typewriter--see next.

    A synthetic paradigm is where you take a common natural paradigm or concept and mix it with an artificial paradigm. A good example is a spreadsheet. Naturally, it represents a sheet of paper with rows and columns where you can write numbers. The concept is fundamental to all accounting. But artificially, we add the ability to put live mathematical formulas and scripts in the cells. Something we can't do in nature, but a snap in the abstract computer world. It changes everything about what spreadsheets can do. The way accountants did their jobs changed in a fundamental way with the invention of spreadsheets.

    The most popular GUI oriented synthetic paradigm is the mouse. The mouse has artificial components such as multi-clickable buttons and a clickable, rotatable wheel. But it is also a natural extension of hand pointing.

    What's the takeaway from all this? If you want to write software that changes the way things are accomplished in a fundamental way, invent a new synthetic paradigm for a user domain and present it in an API or GUI.

    NASA Science

    Big science is making big announcements (for example, see here) about the Alpha Magnetic Spectrometer (AMS) device attached to the International Space Station.

    Another type of dark matter signature has been found, in addition to gravity signatures. There is now good data suggesting that dark matter particles collide with each other and produce "ordinary" collision decay particles. So even though dark matter does not interact with light, these decay particles do, and that is what we are seeing in the AMS.

    OK. So to continue being skeptical of dark matter, I must now explain data involving a whole other type of phenomenon in addition to coming up with a better explanation of dark matter gravity observations.

    Dark matter just became more likely to actually exist, I think. This makes the AMS device a great bit of science.

    But do I think the $2 billion spent on the device was worth that kind of information? Off the top of my head, that's about $5 for every person in the USA.

    Truthfully, I can't say. I think it doubtful that a consensus argument can be made that society as a whole (each person, on average) benefits from such an experiment to the tune of $2 billion ($5 each). How can the numbers possibly work out? How can we come to a consensus on a dollar amount to assign to the results of this experiment?

    And that applies to any dollar amount. What if the next dark matter experiment will cost $20 billion, $200 billion, ...? Is there any way we can avoid being arbitrary, capricious and whimsical in our spending on science? To have an engineering discipline in our spending on science?

    This may be a more difficult question than the one on dark matter.

    The Guided Cantilever Method - Part 4

    The Guided Cantilever Method for Quickly Checking Piping Flexibility -- Part 4

    This is a continuation of my last post. I couldn't help myself, I just had to do some coding! :-) Here in Part 4, the final part in this series, I describe a simple web application that allows you to perform a quick flexibility check using the guided cantilever method without having to memorize anything.

    The web page is here. And here is a screenshot:

    The expansion of the long vertical leg causes the short horizontal leg to deflect, which is modeled as a guided cantilever. The application calculates the minimum length required to guarantee the system is adequately flexible. There are a number of assumptions required, so be sure to read my previous posts here, here, and here.

    To use the program, enter into the listboxes:
    • Nominal pipe size (NPS) in inches 
    • Maximum design temperature (Tmax) in degrees F
    • Length of the longer leg (the vertical leg) in feet.
    And indicate the pipe material using the radio buttons, either carbon steel or stainless steel.

    The minimum length of the shorter leg will be automatically calculated and displayed as shown above in red.

    As covered in previous posts, these are the equations used:
    L = 7.77 * sqrt(y * NPS)
    // Where:
    //    L = minimum length required for shorter, horizontal leg (ft)
    //    NPS = nominal pipe size (in)
    //    y = (Tmax - 100) * LL / 10000
    //           y = thermal expansion to be absorbed (in)
    //           Tmax = maximum design temperature (F)
    //           LL = length of longer, vertical leg (ft)
    Everything is done on the client in Javascript. So feel free to download the web page and use it however you like.
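For readers who prefer Python to the page's Javascript, the same calculation can be sketched as follows (the function name and signature are mine; the formulas are the ones above):

```python
from math import sqrt

def min_short_leg(nps, t_max, long_leg, stainless=False):
    """Minimum length (ft) of the shorter leg per the guided
    cantilever check. nps in inches, t_max in deg F, long_leg in ft."""
    y_per_100ft = (t_max - 100.0) / 100.0   # carbon steel rule of thumb
    if stainless:
        y_per_100ft *= 1.33                 # stainless expands ~1/3 more
    y = y_per_100ft * (long_leg / 100.0)    # expansion to absorb (in)
    return 7.77 * sqrt(y * nps)

# The Part 2 example: NPS 6, 500F, 25 ft longer leg -> about 19 ft
required = min_short_leg(6, 500.0, 25.0)
```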

    The Guided Cantilever Method - Part 3

    The Guided Cantilever Method for Quickly Checking Piping Flexibility -- Part 3

    This is a continuation of my last post. Here in Part 3, I show how to manually check more complicated piping systems for adequate thermal flexibility. There are two tricks for doing this: 1) fictitious anchors and 2) the-tail-does-not-wag-the-dog.

    Fictitious Anchors

    The idea here is that any point or points on a complicated piping network can be fixed against rotation and/or displacement (or a fixed displacement or rotation can be imposed at any point) and the flexibility of the resulting subsystems analyzed independently. (The fixed points isolate one part of the system from the others.) The importance of this is that if each subsystem has adequate flexibility with fixed points in place, then the overall system will have adequate flexibility with the fixed points removed. (Equal sized pipe only. For unequal pipe sizes, see next trick.)

    An example should make this clear:
    Notice that for analysis purposes, a fictitious anchor has been added to the system illustrated in Figure 2. This allows us to use the guided cantilever method, since the system has been broken up into two simple subsystems similar to what we solved for last time in Part 2: a 20'X20' subsystem and a 10'X10' subsystem.

    Note that the required length of a guided cantilever goes up with the square root of the displacement to be absorbed. This means that if the 10'X10' subsystem is adequately flexible then the 20'X20' subsystem will be even more adequately flexible. And so if the 10'X10' subsystem is OK, then the entire system will be OK when the imaginary anchor is "removed".

    So is the 10'X10' subsystem adequately flexible?
    from math import sqrt
    # The thermal displacement to be absorbed (y) is:
    #     Material = stainless steel
    #     Length = 10 (ft)
    #     Temperature = 350 (F)
    #     Formula from last post:
    #         y_per_100ft = 1.33 * (T - 100.) / 100.
    y_per_100ft = 1.33 * (350. - 100.) / 100.
    y = y_per_100ft * (10. / 100.)
    # y = .3325 (in)
    # So that the required leg length to absorb this expansion is:
    NPS = 4
    L = 7.77 * sqrt(y * NPS)
    # L = 9.0 (ft)
    Thus, the 10'X10' subsystem is adequately flexible. Thus, the entire system is adequately flexible.

    Although it takes some practice to use fictitious anchors effectively, once you've gotten the hang of it, I have found they are a very effective tool.

    The Tail Does Not Wag the Dog

    Recall that resistance to bending moment is proportional to section modulus. And that the section modulus of pipe is proportional to its radius to the third power. Thus, all other things being equal, larger diameter piping will dominate the thermal displacement reactions of complex piping systems versus smaller diameter pipe. For example, the section modulus of a 6" NPS sch. 40 pipe is 8.50 (in³) while the section modulus of a 3" NPS sch. 40 pipe is 1.72 (in³). A strength ratio of almost 5 to 1, even though the size ratio is less than 2 to 1. Thus, a 3" NPS "tail" will not wag a 6" NPS "dog".
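The section modulus numbers are easy to check (the schedule-40 dimensions used here are standard: 6" NPS is 6.625" OD with 0.280" wall, 3" NPS is 3.500" OD with 0.216" wall):

```python
from math import pi

def section_modulus(od, wall):
    """Section modulus of pipe (in^3): Z = pi * (OD^4 - ID^4) / (32 * OD)."""
    inner = od - 2.0 * wall
    return pi * (od**4 - inner**4) / (32.0 * od)

z6 = section_modulus(6.625, 0.280)   # 6" NPS sch. 40 -> about 8.50 in^3
z3 = section_modulus(3.500, 0.216)   # 3" NPS sch. 40 -> about 1.72 in^3
ratio = z6 / z3                      # almost 5 to 1
```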

    This means that in complicated piping systems with varying pipe sizes, fictitious anchors should be placed so that they impose the movements that would otherwise be the unrestrained displacements of the larger pipe sizes.

    For example, in Figure 2 above, if the 20'X20' subsystem were 8" NPS pipe instead of 4", and there were an 8X4 reducer at the fictitious anchor, it would be appropriate to assume the fictitious anchor would impose 20' worth of thermal expansion on the 10'X10' subsystem instead of being fixed in place. Clearly, the 10'X10' subsystem would then fail a manual check and the entire system would have to be computer analyzed.


    In practice, it is easily possible to manually verify the flexibility of a majority of piping systems in less time than it would take to perform computer analyses on them. All piping engineers and designers should be familiar with the guided cantilever method.

    The Guided Cantilever Method - Part 2

    The Guided Cantilever Method for Quickly Checking Piping Flexibility -- Part 2

    This is a continuation of my last post. Here in Part 2, let's go through a specific example of how to do a quick manual check that a piping system has adequate flexibility to absorb required thermal expansion displacements.
    The system consists of carbon steel pipe with a shorter leg of 20', an elbow, and a longer leg of 25'. The pipe's outside diameter is 6.625" (NPS 6"). And the fluid's operating temperature is 500F. As the pipe warms, the thermal expansion of both legs will generate enormous forces unless the system contains adequate flexibility to relieve the imposed displacements. Assuming the end anchors are rigid and as strong as the pipe (definitely not the case if attached to rotating equipment), the controlling case will be the expansion of the longer leg being absorbed by bending in the shorter leg. If the bending stresses in the shorter leg are below code allowables, then the system will be adequately flexible.

    The first step is to conservatively estimate the free expansion of the longer leg.

    The thermal expansion of pipe (y) in inches per 100 feet (in/100ft), between 70 degrees F and some temperature (T) can be approximated by:
    y_per_100ft = (T - 100.) / 100.         # for carbon steel
    y_per_100ft = 1.33 * (T - 100.) / 100.  # for stainless steel
    Or in words: For carbon steel, take the design temperature and subtract 100 then divide by 100. That's the expansion in inches per hundred feet. For stainless steel, add a third more.

    The formulas break down for temperatures around 200F or less. But for the range of temperatures commonly encountered, the equations overestimate by less than 10 percent. And the formulas are easy to remember and work out in our heads. (This is important if we want to be able to do quick manual checks.)

    The next step is to conservatively calculate the length of the shorter leg needed to adequately absorb the expansion of the longer leg. For this we use a convenient version of the guided cantilever formula:
    from math import sqrt
    L = sqrt(E * y * D / S / 48)
    # Where:
    #    L = length of cantilever beam (ft)
    #    E = Young's modulus (psi)
    #    y = displacement of the guided end (in)
    #    D = beam (pipe) outside diameter (in)
    #    S = max. nominal bending stress (psi)
    Modeling the shorter pipe leg as a cantilever whose guided end is being displaced by the free thermal expansion of the longer leg is a very conservative model for the maximum bending stresses in Figure 1. For use with a calculator (remember those?) or an IPython session, this formula can be made even simpler. Let:
    • E=29E6. This is a typical maximum value for carbon steel. Stainless is less. So a conservative value to use for all materials.
    • D=NPS. This is not a conservative assumption, but who can remember the outside diameters for all different nominal pipe sizes? I never could. This introduces an error of less than about 5%, which is more than made up for in the following assumption.
    • S=10E3. Why such a conservatively low number? (In the Peng link I gave in Part 1, the author used 20,000 psi for his guided cantilever method.) My justification for using 10,000 psi is:
      • If the system lacks adequate flexibility, fatigue cracking will occur in the elbow, not the pipe. That's because the elbow has a "stress intensification factor" (SIF) associated with it. SIFs are multiplied by the nominal stress in the adjacent pipe to estimate the actual stress in the elbow.
      • It is true that SIFs are mitigated by an elbow's "flexibility factor" (FF). (An elbow goes out-of-round under stress and so will strain more than simple beam theory would predict.) However, the stress reduction of FFs does not completely offset the stress intensification caused by SIFs. So, conservatively, elbow FFs are ignored.
      • All pipe fittings, not just elbows, have SIFs associated with them. Piping fatigue almost always occurs in the system's fittings.
      • SIFs are usually between 2.0 and 3.0.
      • Finally, since thermal cycling causes fatigue failure, allowable fatigue stresses are about 1.5 times yield stress, which is typically between 20,000 and 30,000 psi. So using 10,000 psi, assuming a 2.0 to 3.0 SIF, and ignoring FFs will keep me 50% below a 1.5 times yield stress number.
      • Therefore, using S=10000 for both carbon and stainless pretty much guarantees a conservative result. I can't recall ever encountering a pathological case where using this assumption gave an unconservative result.
    The resulting formula is:
    L = 7.77 * sqrt(y * NPS)  # 7.77 is easy to remember!
    # Where:
    #    y = thermal expansion to be absorbed (in)
    #    NPS = nominal pipe size (in)
    Using this formula for the system in Figure 1 we have:
    # Given:
    # Expansion of longer leg which is 25 ft long:
    y = ((500. - 100.) / 100.) * (25. / 100.)
    # y = 1. (in)
    # And nominal pipe size:
    NPS = 6
    # Required length of shorter leg:
    L = 7.77 * sqrt(1. * 6.)
    # L = 19. (ft)
    Since the shorter leg is 20 ft in length, the piping system is adequate.

    The Guided Cantilever Method - Part 1

    The Guided Cantilever Method for Quickly Checking Piping Flexibility - Part 1

    When I started out in engineering, as a twenty-one year old back in the 1970's, I was a pipe stress analyst. My mentor was a very senior, very experienced engineer named Norman Blair. Computerized analyses of piping systems were common in those days, but it was time consuming to prepare the input (yes, generating punched cards) and computers were expensive to use. (We only had big mainframes back then!) So we still did a lot of manual piping stress analyses.

    There were two types of pipe stress that we had to analyze: primary stresses due to sustained loadings such as weight, and secondary stresses due to the pipe undergoing temperature changes that caused it to cyclically expand and contract (usually because of the hot/cold fluids running through it). Keeping secondary stresses within code allowables required ensuring the piping had adequate flexibility to absorb the expansions/contractions without over-stressing.

    Supporting a pipe's weight was easy. For thermal analyses, Norm gave me a copy of Spielvogel's Piping Stress Calculations Simplified to read and to help me develop a good understanding of what piping flexibility analysis was all about. It turns out I didn't actually employ very many of the manual techniques in the book. That would have been more time consuming than preparing computer analyses of them. But Norman did introduce me to a quick, conservative way to manually eliminate piping systems from having to undergo computer analysis at all.

    Most piping designers were pretty good at guessing how much flexibility might be needed. I became adept at quickly verifying the acceptable flexibility of perhaps over 80% of the systems I encountered in typical oil refineries or chemical process plants without resorting to using a computer at all. This translated into significant savings of time and material.

    The method I used for most of these manual verifications is called the Guided Cantilever Method. The best basic description of the method that I could google is Quick Check on Piping Flexibility by L. C. Peng. However, Peng doesn't mention the crucial trick that makes the method extremely practical -- fictitious anchors. In fact, I could not google a relevant reference to "fictitious anchors" at all. So allow me to document the trick here.

    The main reason I want to document the Guided Cantilever Method is because, IMHO, there is a tendency among pipe stress engineers nowadays to go ahead and run a computerized analysis on almost every piping system they encounter. However, I think that even in this age of desktop computers with great GUI interfaces, manual analyses still have their place. Perhaps I can encourage some pipe stress engineers to do more manual analyses where cost effective.

    I will go through the method in some detail, in upcoming Part 2 of this post, because if engineers actually want to use the method, they will have to understand what makes the method conservative so they will be able to identify when the assumptions being made might not be applicable in some unusual case. And it is the unusual cases that cause the biggest failures in analyses.

    Simple PHP Data Storage

    The subject of today's post is a database schema I often use for websites. To make things a bit more practical, I present some PHP code snippets detailing how I often store and retrieve form data, which is a very common task when developing custom websites.

    But first, let's talk databases. There are exceptions to every rule, but my default data storage/retrieval library is SQLite. I've used it for desktop applications, embedded smartphone applications, as well as for websites. I've accessed it from a number of languages. In case you are not familiar with this RDBMS:
    SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.
    From SQLite's When To Use page:
    The basic rule of thumb for when it is appropriate to use SQLite is this: Use SQLite in situations where simplicity of administration, implementation, and maintenance are more important than the countless complex features that enterprise database engines provide. As it turns out, situations where simplicity is the better choice are more common than many people realize.
    Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().
    And later from the same link:
    SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
    Second, let's talk frameworks. There are a myriad of web application frameworks we can use to help us develop our websites. In the link above, the list just for PHP-based ones has 17 entries. (The funny thing is, I consider PHP itself to be a "web application framework". A very general one. It is certainly not so much a general-purpose programming language.)

    IMHO, the reason there are so many frameworks to choose from is because each is only appropriate for its own set of requirements which, apparently, don't overlap as much as one might assume. There may be more sinister reasons. So we need to be very careful and skeptical when choosing a framework.

    No one person can be familiar with more than a few of these frameworks, but I'm sure most of the frameworks provide some help in mapping form data to the database and back. But what if we decide not to use a framework?

    Finally, the interface between forms and their data storage. The most obvious way is to map a form to a database table. Each column is a field in the form. (Multiple select listboxes are a complication.) Each row in the table is a different user. The only drawback to this, but it's a big one, is that every time one of a form's fields changes, the database schema has to change.

    So I've gotten into the habit of modeling the form data as associative arrays. It's almost as simple as a one-to-one mapping, yet a much more robust design. All the forms and all the users can go into one table if needed. The schema?
    CREATE TABLE ArrayData (  -- all form data can be stored here
        ownerId TEXT,     -- form user
        arrayId TEXT,     -- form Id
        arrayKey TEXT,    -- form field name
        arrayValue TEXT,  -- form field value
        uTime NUMBER      -- date record last modified, PHP microtime(TRUE)
    );

    Notice the flexibility. The table's schema never changes no matter how any form changes. And thanks to the timestamp, old data doesn't have to be deleted from the table unless you want it to be.
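    The key/value idea isn't PHP-specific. Here is a minimal round-trip sketch of the same schema in Python, using the standard sqlite3 module; the function names mirror mine but are my own sketch, and the ownerId/uTime columns are omitted for brevity.

    ```python
    import sqlite3

    def save_form_data(db, form_id, form_data):
        """Insert/replace a form's data, one row per (form, field, value)."""
        db.execute("CREATE TABLE IF NOT EXISTS ArrayData ("
                   "arrayId TEXT, arrayKey TEXT, arrayValue TEXT)")
        for key, value in form_data.items():
            # Replace any previous values for this field.
            db.execute("DELETE FROM ArrayData WHERE arrayId = ? AND arrayKey = ?",
                       (form_id, key))
            values = value if isinstance(value, list) else [value]
            for val in values:
                db.execute("INSERT INTO ArrayData VALUES (?, ?, ?)",
                           (form_id, key, val))

    def fetch_form_data(db, form_id, multi_values=()):
        """Rebuild the form's associative array; multi-valued keys become lists."""
        form_data = {}
        rows = db.execute("SELECT arrayKey, arrayValue FROM ArrayData "
                          "WHERE arrayId = ?", (form_id,))
        for key, val in rows:
            if key in multi_values:
                form_data.setdefault(key, []).append(val)
            else:
                form_data[key] = val
        return form_data

    db = sqlite3.connect(":memory:")  # use a filename for real persistence
    save_form_data(db, "test_form", {"one": "first", "four": ["fourth", "4th"]})
    print(fetch_form_data(db, "test_form", multi_values=("four",)))
    ```

    No matter how the form changes, only the rows change; the schema stays put.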

    Here is a PHP code snippet I used for interfacing to the database for the last website I built. (A simple, small, internal website.) I cleaned it up a little bit. Notice its schema does not have columns for ownerId or uTime as the more general example above did. They weren't needed for this particular web application. However, there is code for handling multi-valued form items.
    /*
     * Simple SQLite DB access functions.
     * Our web pages will need some sort of db, for example, storing form 
     * data. In this case, nothing complex, so set up something easy, 
     * convenient, but not absurdly inefficient or with obvious injection 
     * issues.
     * Example usage:
     *     $form_id = 'test_form';
     *     $form_data = array(
     *         'one' => 'first',
     *         'two' => 'second',
     *         'three' => 'third',
     *         'four' => array(
     *             'fourth',
     *             '4th',
     *             'one less than fifth'
     *         )
     *     );
     *     $multi_valued_items = array('four');
     *     save_form_data($form_id, $form_data);
     *     $new_form_data = fetch_form_data($form_id, $multi_valued_items);
     *     // Note: fetch_form_data($form_id) would return only the last value 
     *     // in the 'four' array.
     * NOTE: If the highest level functions (save_form_data and 
     * fetch_form_data) are not appropriate, we can often use the lower level 
     * functions to help access any SQLite 3 database.
     * See open_db, exec_db, fetch_db, and fetchall_db below.
     */
    // A form's data will be persisted to/from an SQLite database modeled as an 
    // associative array.
    // Insert/replace the form's data in the database.
    function save_form_data($form_id, $form_data, $db_filename='info.sqlite') {
        $db = open_db($db_filename);
        exec_db($db, "CREATE TABLE IF NOT EXISTS ArrayData (" .
            "arrayId TEXT, arrayKey TEXT, arrayValue TEXT)");
        foreach ($form_data as $key => $value) {
            exec_db($db, "DELETE FROM ArrayData " .
                "WHERE arrayId = ? AND arrayKey = ?", array($form_id, $key));
            if (is_array($value)) {
                foreach ($value as $val) {
                    exec_db($db, "INSERT INTO ArrayData " .
                        "VALUES (?, ?, ?)", array($form_id, $key, $val));
                }
            } else {
                exec_db($db, "INSERT INTO ArrayData " .
                    "VALUES (?, ?, ?)", array($form_id, $key, $value));
            }
        }
    }
    // If some keys can have multiple values (like a list box) then they must 
    // be identified in the $multi_values array.
    function fetch_form_data($form_id, $multi_values=NULL, 
        $db_filename='info.sqlite') {
        $form_data = array();
        $db = open_db($db_filename);
        $results = fetchall_db($db, "SELECT * FROM ArrayData");
        foreach ($results as $value) {
            if ($value['arrayId'] != $form_id) continue;
            $key = $value['arrayKey'];
            if (is_array($multi_values) AND in_array($key, $multi_values)) {
                $form_data[$key][] = $value['arrayValue'];
            } else {
                $form_data[$key] = $value['arrayValue'];
            }
        }
        return $form_data;
    }
    function open_db($db_filename = 'info.sqlite') {
        try {
            $db = new PDO('sqlite:' . $db_filename);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (Exception $e) {
            // Hard to see how we can continue in a way meaningful to the user.
            // Log and let user know the bad news.
            $e_msg = $e->getMessage();
            $msg = "$db_filename: $e_msg";
            //log_error($msg);
            die($msg);
        }
        return $db;
    }
    function exec_db($db, $sql, $parameters=false) {
        try {
            if (!$parameters) {
                $result = $db->exec($sql);
            } else {
                $query = $db->prepare($sql);
                $result = $query->execute($parameters);
            }
        } catch (Exception $e) {
            $msg = $e->getMessage();
            //log_error("EXEC EXCEPTION: $msg"); // try to keep plugging along
            return false;
        }
        return $result;
    }
    // Return result (one row) of $sql query.
    // Returns false rather than an assoc array on failure.
    // Since an exception is likely just bad SQL, also returns false on exception.
    function fetch_db($db, $sql, $parameters=false) {
        try {
            if ($parameters) {
                $query = $db->prepare($sql);
                if ($query) {
                    $query->execute($parameters);
                }
            } else {
                $query = $db->query($sql);
            }
            if ($query) {
                return $query->fetch(PDO::FETCH_ASSOC);
            } else {
                return false;
            }
        } catch (Exception $e) {
            //log_error("SQL: $sql");
            if ($parameters) {
                foreach ($parameters as $param) {
                    //log_error("PARAMETER: $param");
                }
            }
            $msg = $e->getMessage();
            //log_error("FETCH EXCEPTION: $msg");
            return false;
        }
    }
    // Return results (all rows) of $sql query.
    function fetchall_db($db, $sql, $parameters=false) {
        try {
            if (!$parameters) {
                $query_results = $db->query($sql);
                if ($query_results) {
                    return $query_results->fetchall(PDO::FETCH_ASSOC);
                }
                return false;
            } else {
                $query = $db->prepare($sql);
                $query->execute($parameters);
                return $query->fetchall(PDO::FETCH_ASSOC);
            }
        } catch (Exception $e) {
            $msg = $e->getMessage();
            //log_error("FETCHALL EXCEPTION: $msg");
            return false;
        }
    }

    Printf Debugging in PHP

    I've been debugging programs since the 1970's. I have fixed a few bugs over the years! Although nothing beats a symbolic debugger that lets me set breakpoints and single-step through code, I don't think I'm just being an old dinosaur programmer by saying that "printf" debugging and assertions still have practical uses. That's because most of the time I don't need anything more sophisticated to understand what my code is doing wrong. (I'm more apt to need an IDE-based debugger trying to figure out someone else's code.)

    Below is the code snippet that I use to help me do printf debugging in PHP. If you want to use it, place it in a utility file that gets included at the top of your other PHP files. Of course, using these printing functions will certainly mess up any fancy formatting you may be trying to accomplish on your web page. But inserting an assert() or checkpoint() call in your code is only intended as a quick and easy way to probe buggy behavior anyway.

    I would be remiss not to mention that during development (only!), make sure that the Apache web server's php.ini configuration file has the display_errors parameter set to on. I usually catch more than half my PHP scripting bugs this way. Very in-your-face, quick and easy debugging. An alternative to editing the PHP configuration file is to open up a (Ubuntu) terminal window and enter:
        tail -f /var/log/apache2/error.log

    This displays the last ten lines of the error log and then continuously prints new PHP errors as they are generated.

    Here is the code:
    // ***************************************************************************
    // Development Debugging Utilities.
    // (For those of us who like to debug with printing to output and/or log 
    // files.)
    // Note debugging flag below so we don't have to worry too much about leaving
    // embedded debugging/assertion dev/test code in source.
    // Typical Usages:
    //   assert("$sanity_state == 'SANE' /* This comment will be visible too! */");
    //   checkpoint(); // Will print 'GOT HERE!' and checkpoint's location
    //   checkpoint($var); // Will print $var object and checkpoint's location
    //   checkpoint($var, $tag); // Will print $tag string and $var object and 
    //                           // checkpoint's location
    // ***************************************************************************
    $DEBUGGING_FLAG = true; // set to false to turn these utilities off!
    // Setup for assertions:
    assert_options(ASSERT_ACTIVE, $DEBUGGING_FLAG);
    if (assert_options(ASSERT_ACTIVE)) {
        error_reporting(E_ALL | E_NOTICE);
    }
    assert_options(ASSERT_WARNING, true);
    assert_options(ASSERT_QUIET_EVAL, true);
    function assert_handler($file, $line, $code) {
        echo "<hr><p>
        <b>Assert Triggered</b><br>
        Trigger Code: $code<br>
           &nbsp;File: $file<br>
           &nbsp;Line: $line<br>
        </p><hr>";
    }
    assert_options(ASSERT_CALLBACK, 'assert_handler');
    // Setup for embedded (in the output) checkpoints:
    function checkpoint($var='GOT HERE!', $id='CHECKPOINT') {
        global $DEBUGGING_FLAG;
        if ($DEBUGGING_FLAG) {
            $btr = debug_backtrace();
            $line = $btr[0]['line'];
            $file = $btr[0]['file'];
            echo "<hr><p>";
            echo "<b>Checkpoint:</b> $id<br>";
            echo "File: $file<br>";
            echo "Line: $line<br>";
            if (is_array($var)) {
                echo "Value:\n<pre>";
                print_r($var);
                echo "</pre>";
            } elseif (is_object($var)) {
                echo "Value:\n<pre>";
                print_r($var);
                echo "</pre>";
            } elseif (isset($var)) {
                echo "Value: $var";
            }
            echo "</p><hr>";
        }
    }
    // Setup for logging errors to the server's log file:
    function log_error($msg) {
        global $DEBUGGING_FLAG;
        if ($DEBUGGING_FLAG) {
            $dbg_btr = debug_backtrace();
            $line = $dbg_btr[0]['line'];
            $file = $dbg_btr[0]['file'];
            error_log("function log_error::\"$msg\"::$file::$line");
        }
    }
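    The checkpoint idea ports easily to other languages. Here is a rough Python equivalent, my own sketch rather than anything from the PHP snippet above, using the standard inspect module to recover the caller's file and line the way debug_backtrace() does in PHP.

    ```python
    import inspect
    import pprint

    DEBUGGING_FLAG = True  # set to False to turn the utility off

    def checkpoint(var='GOT HERE!', tag='CHECKPOINT'):
        """Print a marker plus the caller's file/line, like the PHP version.

        Returns the formatted text as well, which is handy for testing."""
        if not DEBUGGING_FLAG:
            return None
        caller = inspect.stack()[1]  # frame of whoever called checkpoint()
        text = (f"Checkpoint: {tag}\n"
                f"File: {caller.filename}\n"
                f"Line: {caller.lineno}\n"
                f"Value: {pprint.pformat(var)}")
        print(text)
        return text

    checkpoint({'state': 'SANE'}, 'sanity check')
    ```

    As with the PHP version, the single flag lets you leave the calls embedded in the source and silence them all at once.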

    Fixing Bugs in Complex Software

    In my previous post I gave a list of Akin's Laws. I noted things I felt might be similarly true for both software design and spacecraft design. One of these was:
    • When in doubt, estimate. In an emergency, guess. But be sure to go back and clean up the mess when the real numbers come along. 
    In what way do I think this applies to software engineering? When testing complex software.

    The blog posting A Fork in the Road by Matt Youell contains a relevant quote:
    Modern software systems contain so much abstraction and layering that it is really hard to judge the level of effort that will be involved in addressing any one problem.
    Youell goes on to describe two very different ways of trying to find a bug in a "quite tangled system".

    A really tough bug is often occurring at a level of abstraction or layer well below the application level. This is what can make it hard to find. This is code we most likely did not write. There may be a misunderstanding about how a layer behaves. The bug may actually be in the application's operating system, or in one of the libraries the application is using. It may even be showing up due to the way libraries interact.

    The blog posting points out that for such complex bugs, it may be better to just refactor a portion of the code rather than try to track down the exact location of the bug. Refactoring often means that different abstractions and layers are used and used in different ways. (And, obviously, it should get rid of a possibly flawed application layer algorithm.)

    But the point of the law given above is that we must be aware that this leaves a real mess as far as testing the refactored code is concerned. How do you test that a bug has been fixed if you did not understand the nature of the bug in the first place? This is over and above having to revalidate and reverify the refactored code from scratch.

    What Else Do We Know?

    In a previous post, I reproduced a list of some of the general things we know about software development practices. But what about good engineering practices in general? Specifically, what are practices that software engineering shares with the other engineering disciplines?

    An interesting list I recently stumbled across is Akin's Laws of Spacecraft Design. As both a software developer and a (former) registered mechanical engineer, some of the items on the list struck me as having some meaning to software engineers as well as aeronautical engineers. Here is what I culled:
    • Engineering is done with numbers. Analysis without numbers is only an opinion.
    • Design is an iterative process. The necessary number of iterations is one more than the number you have currently done. This is true at any point in time.
    • Your best design efforts will inevitably wind up being useless in the final design. Learn to live with the disappointment.
    • At the start of any design effort, the person who most wants to be team leader is least likely to be capable of it.
    • In nature, the optimum is almost always in the middle somewhere. Distrust assertions that the optimum is at an extreme point.
    • Not having all the information you need is never a satisfactory excuse for not starting the analysis.
    • When in doubt, estimate. In an emergency, guess. But be sure to go back and clean up the mess when the real numbers come along.
    • Sometimes, the fastest way to get to the end is to throw everything out and start over.
    • There is never a single right solution. There are always multiple wrong ones, though.
    • Design is based on requirements. There's no justification for designing something one bit "better" than the requirements dictate.
    • (Edison's Law) "Better" is the enemy of "good".
    • (Shea's Law) The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.
    • The previous people who did a similar analysis did not have a direct pipeline to the wisdom of the ages. There is therefore no reason to believe their analysis over yours. There is especially no reason to present their analysis as yours.
    • The fact that an analysis appears in print has no relationship to the likelihood of its being correct.
    • Past experience is excellent for providing a reality check. Too much reality can doom an otherwise worthwhile design, though.
    • The odds are greatly against you being immensely smarter than everyone else in the field. If your analysis says your terminal velocity is twice the speed of light, you may have invented warp drive, but the chances are a lot better that you've screwed up.
    • A bad design with a good presentation is doomed eventually. A good design with a bad presentation is doomed immediately.
    • (Larrabee's Law) Half of everything you hear in a classroom is crap. Education is figuring out which half is which.
    • When in doubt, document. (Documentation requirements will reach a maximum shortly after the termination of a program.)
    • The schedule you develop will seem like a complete work of fiction up until the time your customer fires you for not meeting it.
    • It's called a "Work Breakdown Structure" because the Work remaining will grow until you have a Breakdown, unless you enforce some Structure on it.
    • (Montemerlo's Law) Don't do nuthin' dumb.
    • (Varsi's Law) Schedules only move in one direction.
    • (Ranger's Law) There ain't no such thing as a free launch.
    • (von Tiesenhausen's Law of Program Management) To get an accurate estimate of final program requirements, multiply the initial time estimates by pi, and slide the decimal point on the cost estimates one place to the right.
    • (von Tiesenhausen's Law of Engineering Design) If you want to have a maximum effect on the design of a new engineering system, learn to draw. Engineers always wind up designing the vehicle to look like the initial artist's concept.
    • (Mo's Law of Evolutionary Development) You can't get to the moon by climbing successively taller trees.
    • (Atkin's Law of Demonstrations) When the hardware is working perfectly, the really important visitors don't show up.
    • (Patton's Law of Program Planning) A good plan violently executed now is better than a perfect plan next week.
    • (Roosevelt's Law of Task Planning) Do what you can, where you are, with what you have.
    • (de Saint-Exupery's Law of Design) A designer knows that he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.
    • Any run-of-the-mill engineer can design something which is elegant. A good engineer designs systems to be efficient. A great engineer designs them to be effective.
    • (Henshaw's Law) One key to success in a mission is establishing clear lines of blame.
    • Capabilities drive requirements, regardless of what the systems engineering textbooks say.
    • The three keys to keeping a new manned space program affordable and on schedule:
      • No new launch vehicles.
      • No new launch vehicles.
      • Whatever you do, don't decide to develop any new launch vehicles.