
The Specter of Undefined Behavior

If you’ve ever spoken to a programmer, and really got them on a roll, they may have said the words “undefined behavior” to you. Since you speak English, you probably know what each of those words means, and can imagine a reasonable meaning for them in that order. But then your programmer friend goes on about “null-pointer dereferencing” and “invariant violations” and you start thinking about cats or football or whatever because you are not a programmer.

I often find myself being asked what it is that I do. Since I’ve spent the last few years working on my Computer Science degree, and have spent much of that time involved in programming language research, I often end up trying to explain this concept. Unfortunately, when put on the spot, I’m usually only able to come up with the usual sort of explanation that programmers use among themselves: “If you invoke undefined behavior, anything can happen! Try to dereference a null pointer? Bam! Lions could emerge from your monitor and eat your family!” Strictly speaking, while I’m sure some compiler writer would implement this behavior if they could, it’s not a good explanation for a person who doesn’t already kind of understand the issues at play.

Today, I’d like to give an explanation of undefined behavior for a lay person. Using examples, I’ll give an intuitive understanding of what it is, and also why we tolerate it. Then I’ll talk about how we go about mitigating it.

Division By Zero

Here is one that most of us know. Tell me, what is 8 / 0? The answer of course is “division by zero is undefined.” In mathematics, there are two sorts of functions: total and partial. A total function is defined for all inputs. If you say a + b, this can be evaluated to some result no matter what you substitute for a and b. Addition is total. The same cannot be said for division. If you say a / b, this can be evaluated to some result no matter what you substitute for a and b unless you substitute b with 0. Division is not total.

If you go to the Wikipedia article for division by zero you’ll find some rationale for why division by zero is undefined. The short version is that if it were defined, then it could be mathematically proven that one equals two. This would of course imply that cats and dogs live in peace together and that pigs fly, and we can’t have that!


However, there is a way we can define division to be total that doesn’t have this issue. Instead of defining division to return a number, we could define division to return a set of numbers. You can think of a set as a collection of things. We write this as a list in curly braces: {this, is, a, set, of, words}. I have two cats named Gatito and Moogle. I can have a set of cats by writing {Gatito, Moogle}. Sets can be empty; we call the empty set the null set and can write it as {} or using the symbol ∅. I’ll stick with empty braces because one of the things I hate about mathematics is everybody’s insistence on writing in Greek.

So here is our new total division function:

totalDivide(a, b)
    if (b does not equal 0) output {a / b}
    otherwise output {}

If you use totalDivide to do your division, then you will never have to worry about the undefined behavior of division! So why didn’t Aristotle (or Archimedes or Yoda or whoever invented division) define division like this in the first place? Because it’s super annoying to deal with these sets. None of the other arithmetic functions are defined to take sets, so we’d have to constantly test that the division result did not produce the empty set, and extract the result from the set. In other words: while our division is now total, we still need to treat division by zero as a special case. Let us try to evaluate 2/2 + 2/2 and totalDivide(2,2) + totalDivide(2,2):

1: 2/2 + 2/2
2: 1 + 1
3: 2

Even showing all my work, that took only 3 lines.

1: let {1} = totalDivide(2,2)
2: let {1} = totalDivide(2,2)
3: 1 + 1
4: 2

Since you can’t add two sets, I had to evaluate totalDivide out of line, and extract the values and add them separately. Even this required my human ability to look at the denominator and see that it wasn’t zero for both cases. In other words, making division total made it much more complicated to work with, and it didn’t actually buy us anything. It’s slower. It’s easier to mess up. It has no real value. As humans, it’s fairly easy for us to look at the denominator, see that it’s zero, and just say “undefined.”
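For the programmers in the audience, here is roughly what totalDivide looks like in Haskell. This is just a sketch of the set idea, using Maybe to play the part of the zero-or-one-element set:

totalDivide :: Double -> Double -> Maybe Double
totalDivide _ 0 = Nothing        -- the "empty set" case
totalDivide a b = Just (a / b)   -- the "set with one element" case

twoHalvesPlusTwoHalves :: Maybe Double
twoHalvesPlusTwoHalves =
    case (totalDivide 2 2, totalDivide 2 2) of
        (Just x, Just y) -> Just (x + y)
        _                -> Nothing  -- the undefined case still has to be handled

The case expression is exactly the “constantly test for the empty set” busywork described above.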

Cartons of Eggs

I’m sure many of you have a carton of eggs in your fridge. Go get me the 17th egg from your carton of eggs. Some of you will be able to do this, and some of you will not. Maybe you only have a 12 egg carton. Maybe you only have 4 eggs in your 18 egg carton, and the 17th egg is one of the ones that are missing. Maybe you’re vegan.

A basic sort of construct in programming is called an “array.” Basically, this is a collection of the same sort of things packed together in a region of memory on your computer. You can think of a carton of eggs as an array of eggs. The carton only contains one sort of thing: an egg. The eggs are all packed together right next to each other with nothing in between. There is some finite number of eggs.


If I told you “for each egg in the carton, take it out and crack it, and dump it in a bowl starting with the first egg”, you would be able to do this. If I told you “take the 7th egg and throw it at your neighbor’s house” you would be able to do this. In the first example, you would notice when you cracked the last egg. In the second example you would make sure that there was a 7th egg, and if there wasn’t you would probably pick some other egg because your neighbor is probably a jerk who deserves to have his house egged. You did this unconsciously because you are a human who can react to dynamic situations. The computer can’t do this.

If you have some array that looks like this (array locations are separated by | bars | and * stars * are outside the array) ***|1|2|3|*** and you told the computer “for each location in the array, add 1 to the number, starting at the first location” it would set the first location to be 2, the second location to be 3, the third location to be 4. Then it would interpret the bits in the location of memory directly to the right of the third location as a number, and it would add 1 to this “number”, thereby destroying the data in that location. It would do this forever because this is what you told the machine to do. Suppose that part of memory was involved in controlling the brakes in your 2010-era Toyota vehicle. This is obviously incredibly bad, so how do we prevent this?

The answer is that the programmer (hopefully) knows how big the array is and actually says “starting at location one, for the next 3 locations, add one to the number in the location”. But suppose the programmer messes up, and accidentally says “for the next 4 locations” and costs a multinational company billions of dollars? We could prevent this. There are programming languages that give us ways to prevent these situations. “High level” programming languages such as Java have built-in ways to tell how long an array is. They are also designed to prevent the programmer from telling the machine to write past the end of the array. In Java, the program will successfully write |2|3|4| and then it will crash, rather than corrupting the data outside of the array. This crash will be noticed in testing, and Toyota will save face. We also have “low level” programming languages such as C, which don’t do this. Why do we use low level programming languages? Let’s step through what these languages actually have the machine do for “starting at location one, for the next 3 locations, add one to the number in the location”: First the C program:

NOTE: location[some value] is shorthand for “the location identified by some value.” egg_carton[3] is the third egg in the carton. Additionally, you should read these as sequential instructions: “first do this, then do that.” Finally, these examples are greatly simplified for the purposes of this article.

1: counter = 1
2: location[counter] = 1 + 1
3: if (counter equals 3) terminate
4: counter = 2
5: location[counter] = 2 + 1
6: if (counter equals 3) terminate
7: counter = 3
8: location[counter] = 3 + 1
9: if (counter equals 3) terminate

Very roughly speaking, this is what the computer does. The programmer will use a counter to keep track of their location in the array. After updating each location, they will test the counter to see if they should stop. If they keep going they will repeat this process until the stop condition is satisfied. The Java programmer would write mostly the same program, but the program that translates the Java code into machine code (called a compiler) will add some stuff:

1: counter = 1
2: if (counter greater than array length) crash
3: location[counter] = 1 + 1
4: if (counter equals 3) terminate
5: counter = 2
6: if (counter greater than array length) crash
7: location[counter] = 2 + 1
8: if (counter equals 3) terminate
9: counter = 3
10: if (counter greater than array length) crash
11: location[counter] = 3 + 1
12: if (counter equals 3) terminate

As you can see, 3 extra lines were added. If you know for a fact that the array you are working with has a length that is greater than or equal to three, then this code is redundant.

For such a small array, this might not be a huge deal, but suppose the array was a billion elements. Suddenly an extra billion instructions were added. Your phone’s processor likely runs at 1-3 gigahertz, which means that it has an internal clock that ticks 1-3 billion times per second. The smallest amount of time that an instruction can take is one clock cycle, which means that in the best case scenario, the Java program takes about one entire second longer to complete. The fact of the matter is that “if (counter greater than array length) crash” definitely takes longer than one clock cycle to complete. For a game on your phone, this extra second may be acceptable. For the onboard computer in your car, it is definitely not. Imagine if your brakes took an extra second to engage after you pushed the pedal? Congressmen would get involved!


In Java, reading off the end of an array is defined. The language defines that if you attempt to do this, the program will crash (it actually does something similar but not the same, but this is outside the scope of this article). In order to enforce this definition, it inserts these extra instructions into the program that implement the functionality. In C, reading off the end of an array is undefined. Since C doesn’t care what happens when you read off the end of an array, it doesn’t add any code to your program. C assumes you know what you’re doing, and that you have taken the necessary steps to ensure your program is correct. The result is that the C program is much faster than the Java program.
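This trade-off is not unique to Java and C, either. Haskell’s vector library, for example, offers both a bounds-checked index operator and an unchecked one; a small sketch, assuming the vector package is installed:

import qualified Data.Vector as V

main :: IO ()
main = do
    let xs = V.fromList [1, 2, 3 :: Int]
    print (xs V.! 2)              -- bounds-checked: prints 3
    print (V.unsafeIndex xs 2)    -- unchecked: also prints 3, but no test was performed
    -- print (xs V.! 5)           -- bounds-checked: crashes with a clear out-of-bounds error
    -- print (V.unsafeIndex xs 5) -- unchecked: no error at all, it just reads past the end

The unchecked version skips the test, which is faster, and which leaves you exactly as exposed as the C program if the index is wrong.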

There are many such undefined behaviors in programming. For instance, your computer’s division function is partial just like the mathematical version. Java will test that the denominator isn’t zero, and crash if it is. C happily tells the machine to evaluate 8 / 0. Most processors will actually go into a failure state if you attempt to divide by zero, and most operating systems (such as Windows or Mac OS X) will crash your program to recover from the fault. However, there is no law that says this must happen. I could very well create a processor that sends lions to your house to punish you for trying to divide by zero. I could define x / 0 = 17. The C language committee would be perfectly fine with either solution; they just don’t care. This is why people often call languages such as C “unsafe.” This doesn’t necessarily mean that they are bad, just that their use requires caution. A chainsaw is unsafe, but it is a very powerful tool when used correctly. When used incorrectly, it will slice your face off.

What To Do

So, if defining every behavior is slow, but leaving it undefined is dangerous, what should we do? Well, the fact of the matter is that in most cases, the overhead of adding these extra instructions is acceptable. In these cases, “safe” languages such as Java are preferred because they ensure program correctness. Some people will still write these sorts of programs in unsafe languages such as C (for instance, my own DMP Photo Booth is implemented in C), but strictly speaking there are better options. This is part of the explanation for the phenomenon that “computers get faster every year, but [insert program] is just as slow as ever!” Since the performance of [insert program] was deemed to be “good enough”, this extra processing power is instead being devoted to program correctness. If you’ve ever used older versions of Windows, then you know that your programs not constantly crashing is a Good Thing.


This is fine and good for those programs, but what about the ones that cannot afford this luxury? These other programs fall into a few general categories, two of which we’ll call “real-time” and “big data.” These are buzzwords that you’ve likely heard before; “big data” programs are the programs that actually process one billion element arrays. An example of this sort of software would be software that is run by a financial company. Financial companies have billions of transactions per day, and these transactions need to post as quickly as possible. (Suppose you deposit a check; you want those funds to be available as quickly as possible.) These companies need all the speed they can get, and all those extra instructions dedicated to totality are holding up the show.

Meanwhile, “real-time” applications have operations that absolutely must complete in a set amount of time. Suppose I’m flying a jet, and I push the button to raise a wing flap. That button triggers an operation in the program running on the flight computer, and if that operation doesn’t complete immediately (where “immediately” is some fixed, non-zero-but-really-small amount of time) then that program is not correct. In these cases, the programmer needs to have very precise control over what instructions are produced, and they need to make every instruction count. Here, redundant totality checks are a luxury that is not in the budget.

Real-time and big data programs need to be fast, so they are often implemented in unsafe languages, but that does not mean that invoking undefined behavior is OK. If a financial company sets your account balance to be check value / 0, you are not going to have a good day. If your car reads the braking strength from a location off to the right of the braking strength array, you are going to die. So, what do these sorts of programs do?

One very common method, often used in safety-critical software such as a car’s onboard computer, is to employ strict coding standards. MISRA C is a set of guidelines for programming in C to help ensure program correctness. Such guidelines instruct the developer on how to program to avoid unsafe behavior. Enforcement of the guidelines is ensured by peer review, software testing, and static program analysis.

Static program analysis (or just static analysis) is the process of running a program on a codebase to check it for defects. For MISRA C, there exists tooling to ensure compliance with its guidelines. Static analysis can also be more general. Over the last year or so, I’ve been assisting with a research project at UCSD called Liquid Haskell. Simply put, Liquid Haskell provides the programmer with ways to specify requirements about the inputs and outputs of a piece of code. Liquid Haskell could ensure the correct usage of division by specifying a “precondition” that “the denominator must not equal zero.” (I believe that this actually comes for free if you use Liquid Haskell, as part of its basic built-in checks.) After specifying the precondition, the tool will check your codebase, find all uses of division, and verify that you have ensured that zero will never be used as the denominator.

It does this by determining where the denominator value came from. If the denominator is some literal (i.e. the number 7, and not some variable a that can take on multiple values), it will examine the literal and ensure it meets the precondition of division. If the number is an input to the current routine, it will ensure the routine has a precondition on that value that it not be zero. If the number is the output from some other routine, it verifies that the routine that produced the value has, as a “postcondition”, that its result will never be zero. If the check passes for all usages of division, your use of division will be declared safe. If the check fails, it will tell you what usages were unsafe, and you will be able to fix it before your program goes live. The Haskell programming language is very safe to begin with, but a Haskell program verified by Liquid Haskell is practically Fort Knox!
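To give a flavor of what such a specification might look like, here is a sketch of a division wrapper with that precondition. The name safeDiv and the exact spelling of the refinement are my own illustration; the built-in check may look a little different:

{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

Any code that calls safeDiv now has to convince the tool that the second argument cannot be zero, or the check fails.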

The Human Factor

Humans are imperfect; we make mistakes. However, we make up for it with our ability to respond to dynamic situations. A human would never fail to grab the 259th egg from a 12 egg carton and crack it into a bowl; the human wouldn’t even try. The human can see that there are only 12 eggs without having to be told to do so, and will respond accordingly. Machines do not make mistakes; they do exactly what you tell them to, exactly how you told them to do it. If you tell the machine to grab the 259th egg and crack it into a bowl, it will reach its hand down, grab whatever is in the space 258 egg lengths to the right of the first egg, and smash it on the edge of a mixing bowl. You can only hope that nothing valuable was in that spot.

Most people don’t necessarily have a strong intuition for what “undefined behavior” is, but mathematicians and programmers everywhere fight this battle every day.

Benchmarking Haskell with Criterion

Lately, I’ve been working on parallelizing some Haskell. I’ve been furiously coding away, but at some point you have to stop “optimizing”, and see if you’ve actually accomplished anything.

Unfortunately, Haskell is lazy, which makes benchmarking difficult. You could write some massive huge function, run it in main, and time it, but unless you actually force the result, your time will likely be just above 0 seconds.
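For example, a naive timing harness like the one below will happily report next to nothing, because the result is never forced. This is just a sketch using the time package; the work being “measured” is arbitrary:

import Data.Time.Clock (diffUTCTime, getCurrentTime)

main :: IO ()
main = do
    start <- getCurrentTime
    let result = sum (map (* 2) [1 .. 10000000 :: Int])  -- just a thunk; nothing is computed yet
    end <- getCurrentTime
    print (diffUTCTime end start)  -- reports (nearly) zero, no matter how expensive result looks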

After some searching, it turns out that the answer is on Hackage. The Criterion package is a benchmarking library that can be used to effectively analyze performance. Using it could probably be simpler, but not by much!

First a function:

Here is the function we’ll be optimizing:

fibs :: Int -> Int
fibs 0 = 0
fibs 1 = 1
fibs n = fibs (n - 1) + fibs (n - 2)

Good old Fibonacci Sequence. As you likely know, this implementation isn’t very good. It starts to get slow at around 20, and at around 40, its runtime begins to be measurable in minutes. Let’s try to optimize it:

goodFibs :: Int -> Int
goodFibs 0 = 0
goodFibs 1 = 1
goodFibs n = goodFibs' (n, 2, 1, 0)
    where goodFibs' (to, count, n, nmo)
              | to == count = n + nmo
              | otherwise   = goodFibs' (to, (count + 1), (n + nmo), n)

You may have seen an implementation like this before. The gimmick here is that instead of using the traditional recursive definition, we just count up to the nth position of the sequence. But is it actually faster? Let’s find out.

We can write a simple test bench for this. First, we need to cabal install criterion, then we are ready to go. Now, let’s write our test bench:

import Criterion.Main

main :: IO ()
main = defaultMain [ bgroup "Good Fibs" [ bench "4"  $ nf goodFibs 4
                                        , bench "15" $ nf goodFibs 15
                                        , bench "30" $ nf goodFibs 30 ]
                   , bgroup "Bad Fibs"  [ bench "4"  $ nf fibs 4
                                        , bench "15" $ nf fibs 15
                                        , bench "30" $ nf fibs 30 ] ]

There’s a fair amount going on here. Let’s walk through this function-by-function:

defaultMain :: [Benchmark] -> IO ()

This is a function that takes a list of benchmarks and runs them. It also lets you pass arguments to Criterion to control its operation; a list of these options can be obtained by passing --help to the program.

nf :: NFData b => (a -> b) -> a -> Benchmarkable

This function takes a function from a -> b, and an a, and evaluates the function’s result to normal form. This solves the laziness problem, as a value that is evaluated to normal form is completely evaluated. You could also have used whnf :: (a -> b) -> a -> Benchmarkable to evaluate to weak head normal form. There are also IO versions of these functions.
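The IO variants, nfIO and whnfIO, take an IO action directly. A one-liner (the file name here is just a placeholder) might look like:

bench "read config" $ nfIO (readFile "config.txt")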

bench :: String -> Benchmarkable -> Benchmark

This function takes a string, and a Benchmarkable, and returns a Benchmark. Nothing magical going on here; this is how you make a benchmark. The string is the name of your benchmark.

bgroup :: String -> [Benchmark] -> Benchmark

This function takes a string and a list of benchmarks, and returns a Benchmark. This allows you to group benchmarks. As you can see, I’ve grouped 3 separate calls to fibs and goodFibs using bgroup. As above, the string is the name of your benchmark group.

Now, the payoff

So, how did I do? Let’s try it out. Build your program, and execute it. You should see something like this:

benchmarking Good Fibs/4
time                 16.87 ns   (16.87 ns .. 16.88 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 16.88 ns   (16.87 ns .. 16.88 ns)
std dev              7.316 ps   (5.814 ps .. 8.984 ps)

benchmarking Good Fibs/15
time                 29.59 ns   (29.57 ns .. 29.61 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 29.62 ns   (29.61 ns .. 29.63 ns)
std dev              28.03 ps   (23.33 ps .. 33.59 ps)

benchmarking Good Fibs/30
time                 47.32 ns   (47.21 ns .. 47.43 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 47.27 ns   (47.20 ns .. 47.35 ns)
std dev              231.7 ps   (210.4 ps .. 255.3 ps)

benchmarking Bad Fibs/4
time                 33.35 ns   (33.34 ns .. 33.37 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 33.37 ns   (33.35 ns .. 33.42 ns)
std dev              99.38 ps   (25.73 ps .. 205.0 ps)

benchmarking Bad Fibs/15
time                 9.446 μs   (9.444 μs .. 9.448 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 9.435 μs   (9.416 μs .. 9.444 μs)
std dev              41.64 ns   (3.462 ns .. 69.91 ns)

benchmarking Bad Fibs/30
time                 12.88 ms   (12.86 ms .. 12.89 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 12.86 ms   (12.85 ms .. 12.87 ms)
std dev              23.34 μs   (5.597 μs .. 49.02 μs)

As you can see, my goodFibs does well. It grows mostly linearly with increasingly higher arguments, and every invocation completes in a matter of nanoseconds.

My fibs doesn’t do so well. fibs 4 takes nanoseconds, fibs 15 takes microseconds, and fibs 30 takes milliseconds. Clearly goodFibs was worth the time put into it, and I’m not one of Knuth’s monsters. Yay me!

Checking Our Heads With Liquid Haskell

Consider this function:

headInTail :: [a] -> [a]
headInTail l = (tail l) ++ [head l]

Pretty straightforward, right? It takes a list, extracts the head and sticks it in the tail. Surely you’ve written something like this before. It should be fine, right?

*Main> headInTail [1,2,3]
[2,3,1]

…checks out. Let’s try a few more:

*Main> headInTail "hello"
"elloh"
*Main> headInTail ["cat"]
["cat"]

…good. And the moment we’ve all been waiting for:

*Main> headInTail []
*** Exception: Prelude.tail: empty list

Oh yeah, head and tail don’t work for empty lists… Normally, we have some choices on how to proceed here. We could wrap the function in a Maybe:

maybeHeadInTail :: [a] -> Maybe [a]
maybeHeadInTail [] = Nothing
maybeHeadInTail l  = Just $ headInTail l

…which introduces an annoying Maybe to deal with just to stick our heads in our tails. Or, we could just do something with the empty list:

headInTail :: [a] -> [a]
headInTail [] = []
headInTail l  = (tail l) ++ [head l]

…but what if returning the empty list isn’t the correct thing to do?

Another choice is to document this behavior (as head and tail do), and just never call headInTail []. But how can we guarantee that we never attempt to call this function on an empty list? Shouldn’t this be the type system’s problem?

Unfortunately, not all is roses and puppies. Sometimes the type system cannot help us. Sometimes somebody thought it’d be a good idea to use Haskell’s Evil exception system. Whatever the case, there are tools to help us.

Liquid Haskell

Liquid Haskell is a static code analysis tool that is used to catch just these sorts of problems. Liquid Haskell allows us to define invariants which will be enforced by the tool. Liquid Haskell is a research project that is still in development. As such, it has some rough spots; however, it’s still very much capable of helping us with our problem here. But before we begin, we need to get the tool installed.

To install the tool, execute:

cabal install liquidhaskell

…simple right? Unfortunately, we’re not quite done. We need to install an SMT solver. This tool is used by Liquid Haskell. Currently, the tool defaults to Z3. I’m not sure how to use a different solver (and Z3 works just fine), so I suggest you use Z3. You’ll have to build Z3, and place the binary somewhere on the PATH. After this is done, and assuming your .cabal/bin directory is also on the PATH, testing your source file is as simple as:

liquid [FILE].hs

Let’s Have A Look

Create a Haskell source file that contains the following:

headInTail :: [a] -> [a]
headInTail l = (tail l) ++ [head l]

After that’s done, let Liquid Haskell analyze your file:

liquid [YOUR_FILE].hs

A bunch of stuff should scroll by, then in a second you’ll see something similar to the following:

**** UNSAFE *********************************************

Doop.hs:5:22: Error: Liquid Type Mismatch
   Inferred type
     VV : [a] | VV == l && len VV >= 0

   not a subtype of Required type
     VV : [a] | len VV > 0

   In Context
     VV : [a] | VV == l && len VV >= 0
     l  : [a] | len l >= 0

If you go to the line and column indicated, you’ll find the argument to tail. Conveniently, it seems that Liquid Haskell comes pre-loaded with definitions for some library functions. Normally, you’ll have to define those yourself. In fact, let’s do just that.

The next logical step here is to write a specification for our function. This specification is a statement about what sorts of values the function can take. Add the following to your source file, on the line above the signature for headInTail:

{-@ headInTail :: {l:[a] | len l > 0} -> [a] @-}

If you re-run liquid on your source file, you’ll see that the warning went away, and the program now indicates that your source is “SAFE”. So, what does this refinement mean?

Basically, these refinements are machine-checked comments. They have no impact on the program; they exist for Liquid Haskell. Think of it as being like an enhanced type signature. Like a normal type signature, we start with the name of the function, then two colons. This part, however:

{l:[a] | len l > 0}

…should be new. Basically, this part says that the list should not be empty. You should read it as “l is a [a] such that len l is greater than zero.” A lot of the notation used by Liquid Haskell comes from formal logic. Let’s break this down some more:

l:[a]

Here we bind a symbol, l, to the first list argument. At any point to the right of this symbol until the end of the scope defined by {}, we can reference this symbol.

{... | ...}

The pipe symbol indicates that we are going to make some statement about the type on the left hand side.

len l > 0

Here we state that the length of l must be greater than 0. It looks like we are calling a function, and we sort of are; len is a measure, which is a special function that is used in specifications. However, the subject of measures is a post for another day.

You may now be thinking: “Well this is all well and good, but what’s to stop me from calling this function on an empty list?” To answer that, let’s implement main:

main = do
    i <- getLine
    putStrLn $ headInTail i

Add this to your source file, and then run liquid [YOUR_FILE].hs and you’ll notice that Liquid Haskell has a problem with your attempt to call headInTail:

**** UNSAFE *********************************************

Doop.hs:3:29: Error: Liquid Type Mismatch
   Inferred type
     VV : [Char] | VV == i && len VV >= 0

   not a subtype of Required type
     VV : [Char] | len VV > 0

   In Context
     VV : [Char] | VV == i && len VV >= 0
     i  : [Char] | len i >= 0

Liquid Haskell is telling you that it can’t prove that the length of i is greater than 0. If you execute your main function, you should see that it works as expected. Type in a string, and it’ll do the right thing. Push enter right away and you’ll get an exception.

*Main> main
hello
elloh
*Main> main

*** Exception: Prelude.tail: empty list

…ick… Let’s fix this:

main = do
    i <- getLine
    case i of
        [] -> putStrLn "Get your head checked."
        _  -> putStrLn $ headInTail i

Now if you analyze your file, Liquid Haskell should be happy. Honestly, you should be happy too: the tool caught this problem for you. It never went to testing broken, and the bug certainly didn’t escape testing unfound. Your program is now objectively better:

*Main> main
isn't that neat?
sn't that neat?i
*Main> main

Get your head checked.

DMP Photo Booth: Release Candidate 1

It’s been a long time coming, but the day is almost here: the day DMP Photo Booth is officially released.

Following last week’s successful stress test, I’ve been doing some last-minute touch-up work. I’ve been preparing my laptop for the big day, and finding and squashing some bugs.

DMP Photo Booth RC1 using my fancy new dark GTK theme

Now things are coming along, and I feel the time has come to proceed to RC1. This means that barring any major new show stoppers, DMP Photo Booth will proceed to version 1.0 on June 22.

You can find the latest release here. On that page you’ll find the latest source for DMP Photo Booth and the reference modules, as well as a pre-compiled version for Debian/AMD64. Just extract the tarball into a folder and double-click the executable and you’re off! It comes pre-configured with sane defaults.

Moving forward, I plan to work on a “GTK Trigger Module”. This will just show a window with a button you can click to trigger the photo session. I understand that not everybody feels like constructing an Arduino thingamabober, and that this is surely the only thing preventing DMP Photo Booth from going viral on a global scale. Hopefully this is done by 1.0, but if not it will likely make it into a version 1.0.1, to be released shortly after 1.0.

DMP Photo Booth: Crunch Time

Over the last few months, I’ve become distracted. As anybody who’s ever committed to one project can probably tell you: it stops being exciting. What was once your pride and joy becomes the daily grind. Things get stale. As was the case with me, I suspect that this happens for most people when development of new features ends and the focus shifts to bug fixes.

I became distracted. My mind began to wander to the next thing, which in my case ended up being Haskell. I began learning Haskell, and it has definitely been educational. I learned a lot, and I plan to stick with it so that when I list it on my resume, I don’t get destroyed on a whiteboard test. Then came The Great Matrix Affair of 2014; I got overwhelmed at school. I spent so much time studying and doing homework that I couldn’t muster up the motivation to program. Things fell by the wayside, as you can see in my blog post output for February. Luckily for me, that is done, and I have the next two months free to program!

What Remains To Be Done?

It’s been a good 6-8 weeks since I’ve really focused on DMP Photo Booth, so the first order of business is to figure out what needs to get done. After doing some brainstorming I’ve settled on a list:

Finish The Trigger

I’ve mostly completed the trigger, but it doesn’t work. The button is soldered wrong, and while it was magically working for a while, it has since stopped. I need to fix the wiring issue, and then drill a hole in the box to put it through. After that and maybe a quick coat of paint it will be complete.

This particular task is due by Friday. I have a series of VIP demos coming up, the first of which is Saturday morning. A few of my cousins are coming in from out of town on Saturday to do wedding stuff, and I want to show it off then. While my cousin Allen is an engineer, and can appreciate a breadboard mockup of what will Totally Become A Real Thing, it certainly won’t be impressive. My cousin Laraine will likely be less amused, but I’m sure I’ll get a pat on the head for my “hard work”. Due to this, it’s important that the trigger be done before then.

Facebook Printer Module

The reference suite of modules was planned to be: a Trigger Module using my Arduino implementation, a Printer Module using CUPS, a Camera Module using LibGPhoto2, and a Lua module for each so that modules can be created using Lua. Of these, only the Lua Printer Module remains to be done. Since creating a Lua module is a trivial task (and not terribly important to be honest), this is a very low priority item.

However, the current Printer Module requires a physical printer. This might not always be ideal, since printers are big and heavy. What if you just want to bring a laptop and a camera and have a photo booth? My fiancée is having just this sort of situation; she wants to use the photo booth at her bachelorette party, but who wants to lug a huge printer to a hotel room? To solve this, I’ve promised her a Facebook Printer Module.

The idea is that when dmp_pm_print() is called, the photo strip will be uploaded to Facebook. While I know this sort of thing can be done, I haven’t really researched it much. If it turns out that you have to pay Facebook money to get this sort of access, I will find a hosting service that is free. Maybe Photobucket, or Dropbox, or whatever. The important thing is that the photo strips will end up in some central location on the internet so that anybody at the party can download the strip later. Ideally, this central location would be a Facebook gallery so people can tag themselves and be all Web 2.0.

My fiancée’s bachelorette party is in May, so this project isn’t a burning priority. However, this represents the most substantial addition of new functionality remaining to be done, so I plan to work it sooner rather than later. Code will be posted on my Github as it evolves, and like DMP Photo Booth will be available under the GPLv3.

Mac Support

To this point, all my development has been done in Linux. First using Ubuntu, and now using Debian. However, most people don’t use Linux. While Linux is the main target platform for DMP Photo Booth, I have been coding as portably as possible, so it should take little effort to port the application to Mac. Over the next few months, I’ll be making sure it compiles and runs correctly on my old Macbook. My Macbook is vintage 2010; as such, it is only running Snow Leopard. Therefore if anybody in the audience is a Mac user, and has a current version of OSX, feel free to give it a shot as well and let me know how it goes.

Ideally, my fiancée will be bringing the Macbook to her bachelorette party and not my development laptop; therefore, this project is due at the same time as the Facebook Printer Module, in May.

Bugs!

Finally, bugs. I need to identify them. I need to squash them. And I need unit tests.

After making a big show about being a good person and doing unit tests, I promptly lost the faith. Soon after implementing those first tests, I made a major change to how I handled errors in my code. Suddenly, all my tests were broken, and I was faced with the choice: rewrite them all, or delete them. After some thought I decided that my tests weren’t that great and that I’d probably change something again and break them all again. So I deleted them.

I’ve got to say, I miss those tests. There have been more than a few times where I’d refactored something and wasn’t sure if it still worked. Sure, it seems to work, but does it really? If I had unit tests in place, I could say with a greater degree of certainty that it does. Plus, “it sounds like a lot of work” is not a good reason not to do something, so it won’t be one for me.

On top of that, I’ll be identifying and squashing bugs the old fashioned way: by trying stuff. I’ve already found a few doozies, and I’m sure I’ll find more. As I find them I’m going to post them on the bug tracker for DMP Photo Booth on Github. I do this for a few reasons: 1) it will provide a public centralized repository of open issues so that I don’t lose or forget about them. 2) I would like others to post bugs there. If I post them and show that I am fixing them, I feel it will establish confidence that the tracker is looked at and acted upon.

This is due when DMP Photo Booth goes to version 1.0. That is planned to be on June 21, the day of my wedding. DMP Photo Booth will get a good solid night of work as the 80 or so people attending put it through its paces. Assuming all goes to plan with minimal issues, DMP Photo Booth will be declared to be out of Beta. That said, to be truly version 1.0, unit tests must be in place and “all bugs must be fixed”.

Packaging

Currently, DMP Photo Booth is available in source form only. No end-user ever had to compile Microsoft Office; I don’t feel they should have to compile DMP Photo Booth.

To that end, binary distributions will be available for Linux and Mac when DMP Photo Booth goes to version 1.0.

Got My Work Cut Out For Me

It’s a long list to be sure, but I have 4 months to focus on it. However, I’ve decided that I should spend some time focusing on other things too, so that I don’t burn out. To that end, I plan to spend 1 day per week focusing on learning new technologies, and 1 day per week keeping my old skills sharp.

For new technologies, this means things like learning more Haskell, and other languages or frameworks. Whatever strikes my fancy. For old skills this means practicing with Lua and Java, and maybe even C++ if I’m feeling particularly masochistic that day. This will likely take the form of blog posts on topics relating to these, so stay tuned!

Profiling With Gprof

As promised, today I’ll be talking about GProf.

What Is GProf?

Gprof is a profiling tool. There are two versions: the original written for BSD UNIX, and a re-implementation for Linux. There are some minor differences between the two that you can read about here. We are going to concern ourselves with the Linux version.

Gprof is a pretty bare-bones utility. In the spirit of UNIX it does just one thing well. It doesn’t have any fancy menus or exotic extras. You just run it and it goes.

Preparing Your Program

First things first, we need to compile and link our program with profiling enabled. Using gcc, this is easy; just add -pg to the compile and linking options. If you’re using NetBeans, this is simple as well.

In NetBeans, right click your project and click Properties

Click Build > C Compiler. In the Additional Options field, add -pg to the end.

Click Build > Linker. In the Additional Options field, add -pg to the end.

Click OK. You are now good to go. Clean and build your project and you are ready for the next step.
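If you’re building from a plain command line rather than NetBeans, the gcc invocation mentioned above would look something like this (file names are placeholders):

gcc -pg -o my_program my_program.c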

Run Some Test Cases

Here is where things get confusing. You’ve compiled your program, surely you have to run your application within gprof similar to how you ran Valgrind, right? Let’s give that a shot…

# gprof [your_program]
gmon.out: No such file or directory

Hmm… I’ll save you some time: Google will tell you that gmon.out should be there; clearly you messed something up and you’re a bad person.

Here’s the secret: you aren’t supposed to run your program through gprof at first. You need to run your program by itself first and it will create gmon.out in the current directory upon termination. Repeated runs will replace the existing gmon.out file. Run your application and perform some test cases. Follow good test practices and really stress your program. Now, you are ready to generate your profile data. Make sure gmon.out is in the current working directory and then…

gprof [your_program] > [output_file]

GProf should return without error and there should be a new file. Open it up in your favorite text editor. As you can see, it contains a flat profile, and a call graph of your program. Below each is a description of the columns. You should be able to take it from here, but if you want additional information check out the GNU gprof webpage.

Oh yeah, and don’t forget to remove the -pg from your compiler and linker lines when you’re done; they’ll tank your performance.

At The Gates Of Valhalla

In my post the other day, I talked about using Valgrind and GProf to debug DMP Photo Booth. As tends to be the way of most articles on the Internet, I didn’t actually spend any time talking about how to use said programs. You are, however, on notice that they are Good.

Well, I promised I’d tell you how to use them. I may be many things, but a liar I am not. Today, I’ll start with Valgrind.

Valgrind

Installation

First things first, we need to install. Once again, Ubuntu comes through for us. It’s just a simple matter of …

sudo apt-get install valgrind

… and we’re off to the races. Unfortunately for those of you in the audience running Windows, Valgrind is not available for Windows.

Valgrind Vs. GLib

You may remember in my last post a lot of talk about difficulties with GTK. Per the Gnome Wiki, the recommended way to launch Valgrind with a GLib/GTK application is:

G_DEBUG=resident-modules valgrind \
    --tool=memcheck \
    --leak-check=full \
    --leak-resolution=high \
    --num-callers=20 \
    --log-file=vgdump \
    [your-program]

Let’s talk about these options.

  • G_DEBUG=resident-modules
    • This is not actually an argument to Valgrind, but an environment variable. This is used by GLib. You need to use this if your application makes use of GModule. For instance, if you’re implementing a module-based Photo Booth application. If you don’t use this, your modules may get unloaded prematurely. If this doesn’t apply to your application, feel free to omit this.
  • valgrind
    • Calls valgrind
  • --tool=memcheck
    • Valgrind is actually a suite of tools. Memcheck is the tool that checks for memory leaks
  • --leak-check=full
    • Enables searching for leaks on exit
  • --leak-resolution=high
    • Sets the detail level of leak stack traces
  • --num-callers=20
    • This sets the depth of the stack traces. For instance, setting this to 20 will show: foo() called by bar() … and so on up to 20 times
  • --log-file=vgdump
    • The file to output to. In its current configuration, a file named vgdump will be placed in the current directory
  • [your-program]
    • Finally, your program. Put here whatever command you’d use to start your program.

…so you do all that and execute your program. Valgrind will monitor your program while you interact with it. Go ahead and run some test cases. It’ll be a little slow due to the monitoring, but that’s ok. Finally, when you’re done, exit your program and open the log file. The first thing you should notice is that it is long. Really long.

This is due to GTK. GTK doesn’t clean up after itself on exit, instead relying on the OS to clean up on process termination. While this behavior is considered to be fine by most, it makes this step difficult. On the Gnome Wiki, you’ll find a Suppression file that is used to mitigate some of this. My experience with this is that it doesn’t do much.

The best way I’ve found is to just search for “definitely lost”. These are most likely to be caused by your program. You can go line by line and check each possibly lost section, but I’ve found that this isn’t practical as 99/100 of these originate from gtk_main().
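A quick way to jump straight to those entries is to grep the log file, for example:

grep -n "definitely lost" vgdump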
