Photo of Mira
On April third, my daughter, Mira Crenshaw Hendrix, was born. She was born a healthy 9 lb 10 oz, and has been a healthy eater ever since. My wife Tanya continues to amaze me. She was in labor continuously for three days, pushing through contractions while sleeping only a few minutes in between. Despite the long labor, she still achieved her goal of a natural childbirth.

I was exhausted as well, but we also had a trio of amazing midwives from Alma Midwifery to help us through everything. They came to the house regularly to check on the health of Tanya and the baby, and were really good at their jobs while being highly supportive of all of our needs.

Posted in Personal


Software developers make assumptions all the time about the format of the data accepted by their programs. Understanding the user is critical to making useful software; trying to cover every corner case can distract from serving the mainstream user.

Unfortunately, computer criminals are far from the typical user, and will often try to exploit bugs in application code that come from putting too much trust in data received from the user. To gain access to a web service or system, attackers may mix SQL commands in with user data, or carefully craft malformed files that cause a buffer overflow or segmentation fault. If the attack succeeds, it can lead to data loss or the attacker gaining control of the system.
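To make the SQL injection case concrete, here is a minimal sketch using an in-memory sqlite3 database (the table and values are made up for illustration). Concatenating user data into the query string lets the attacker rewrite the command itself, while a parameterized query treats the same input as plain data:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Unsafe: the input becomes part of the SQL command, so the OR clause
# matches every row and leaks all secrets.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
leaked = db.execute(unsafe).fetchall()

# Safe: the placeholder keeps the input as data; no user is literally
# named "nobody' OR '1'='1", so nothing is returned.
safe = db.execute("SELECT secret FROM users WHERE name = ?",
                  (user_input,)).fetchall()
```

Parameterized queries are the standard defense here precisely because they remove the parsing ambiguity the attacker relies on.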

To help automate finding attacks, security researchers have developed tools called fuzzers, which throw random and semi-random data at an application to try to make it crash. Like any tool, fuzzers can be used for good or evil. Both attackers and security professionals use fuzzing to find vulnerabilities in application code.

I’ve recently spent some time looking at available fuzzers, and it’s been quite interesting. A couple of tools that worked out of the box for me are:

  • zzuf is an open source fuzzer that intercepts file and network calls and randomly flips some of the input bits.  The architecture is pretty simple, and it worked out of the box without much extra effort.  zzuf does not require the source code of the application under test.  Since zzuf randomly permutes inputs, it is unlikely to reach deep bugs in the application's internal logic, but it can catch quite a few parsing errors and other shallow problems.
  • Peach is a smart fuzzer that uses file format information to generate test cases targeting edge cases in the code. Peach requires a bit more effort to set up, but the extra format information potentially lets Peach reach deeper into the application than a purely random tester. Peach supposedly supports both Linux and Windows, but I could not get it to properly monitor the application I was testing under Linux.
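As a rough sketch of what a random mutation fuzzer like zzuf does internally (the function, ratio, and sample input below are my own illustration, not zzuf's actual code), one can flip a small fraction of an input's bits and feed the result to the target program:

```python
import random

def mutate(data: bytes, ratio: float = 0.004, seed: int = 0) -> bytes:
    """Flip a small fraction of the input's bits at random positions,
    similar in spirit to zzuf's default mutation strategy."""
    rng = random.Random(seed)
    out = bytearray(data)
    n_bits = len(out) * 8
    n_flips = max(1, int(n_bits * ratio))
    # Sample distinct bit positions so flips never cancel each other.
    for pos in rng.sample(range(n_bits), n_flips):
        out[pos // 8] ^= 1 << (pos % 8)
    return bytes(out)

# A fuzzing loop would feed each mutated file to the target program
# and watch for crashes; here we just mutate a fake GIF header.
sample = b"GIF89a" + b"\x00" * 64
fuzzed = mutate(sample)
```

Seeding the generator makes each crash reproducible, which matters once a mutated input actually triggers a bug.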

If you are just getting started with fuzzing, I think zzuf and Peach are great tools to take a look at right now. For the future, one of the exciting research developments is an approach called automated whitebox fuzzing. Fuzzers in this category include open source tools such as KLEE, as well as SAGE and Pex from Microsoft Research. I’ve tried out KLEE and ran into some problems getting its special version of libc to compile, so I can’t recommend it yet. However, I think these tools are the future of fuzzing once some of the technical difficulties are worked out.

Whitebox fuzzers analyze the actual compiled code of the application under test, executing it both concretely with actual inputs and symbolically with symbolic inputs that denote arbitrary values. The symbolic execution records the branch conditions along the path traversed by the concrete execution. These conditions are sent to a constraint solver such as Z3 or STP, which generates new inputs that traverse different branches in the program. This helps the fuzzer systematically cover additional execution paths in the code, and leads to greater overall code coverage than random mutation.
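That loop can be sketched in miniature. In this toy example, `check` is a made-up target program that records each branch outcome it takes, and a brute-force search over small integers stands in for a real constraint solver like Z3 or STP:

```python
def check(x, trace):
    """Made-up target: records each branch condition's outcome."""
    b1 = x > 10
    trace.append(b1)
    if b1:
        b2 = x % 7 == 3
        trace.append(b2)
        if b2:
            return "deep"
    return "shallow"

def solve(prefix):
    """Find an input whose branch outcomes start with `prefix`.
    A real whitebox fuzzer would hand the negated path condition
    to a constraint solver instead of enumerating candidates."""
    for cand in range(200):
        t = []
        check(cand, t)
        if t[:len(prefix)] == prefix:
            return cand
    return None

trace = []
check(0, trace)            # concrete run: trace == [False] (shallow)
x1 = solve([True])         # negate the branch: find an input with x > 10
trace = []
check(x1, trace)           # new run: trace == [True, False]
x2 = solve([True, True])   # negate the new last branch: reach "deep"
```

Each iteration flips the last branch of the previous path, which is how these tools systematically enumerate paths rather than stumbling on them by luck.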

Because of the mix of concrete and symbolic execution, whitebox fuzzers are also referred to as concolic fuzzers. It seems the terminology one uses depends on where one first heard about the technique.

Posted in Uncategorized

Elliptic Curve Cryptography

Lately, I’ve been learning quite a bit about Elliptic Curve Cryptography (ECC), an approach to public key cryptography that provides the same security as RSA with much smaller keys. For example, NIST recommends an RSA key of at least 2048 bits if you want to secure information through 2030. With ECC, the same level of security can be obtained with a 224-bit key.

As a public key method, ECC can be used to communicate securely between two parties that do not share a secret key. Instead, each party has a public key that can be freely shared, and a private key known only to that party. Once keys are generated and public keys are exchanged, ECC supports two basic tasks:

Secret agreement. Two parties can use the ECDH (Elliptic Curve Diffie-Hellman) algorithm to agree on a shared secret (such as an AES key) without an eavesdropper learning or affecting the generated key.  This shared secret can then be used to communicate efficiently with symmetric encryption.
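The exchange can be shown end to end on a tiny textbook curve, y² = x³ + 2x + 2 over GF(17) with generator G = (5, 1) of order 19. The curve and the hard-coded private scalars are purely illustrative; real systems use standardized curves such as NIST P-256 and random private keys:

```python
P, A = 17, 2   # field prime and curve coefficient a
G = (5, 1)     # generator point of order 19

def add(p1, p2):
    """Add two curve points (None represents the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Scalar multiplication k*pt by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

alice_priv, bob_priv = 3, 7        # private keys (kept secret)
alice_pub = mul(alice_priv, G)     # public keys (freely shared)
bob_pub = mul(bob_priv, G)

# Each side combines its own private key with the other's public key
# and arrives at the same shared point.
shared = mul(alice_priv, bob_pub)
assert shared == mul(bob_priv, alice_pub)
```

Both sides compute the same point because scalar multiplication commutes: a·(b·G) = b·(a·G), while an eavesdropper who sees only the public points faces the elliptic curve discrete logarithm problem.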

Digital signatures. One party can sign a message with the ECDSA algorithm so that anyone who receives the message can verify that it came from the author and has not been altered.
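Signing can be sketched on the same textbook curve (y² = x³ + 2x + 2 over GF(17), generator G = (5, 1) of prime order 19). The "hash" value h and nonce k below are tiny made-up numbers; real ECDSA hashes the message with a cryptographic hash function and uses a fresh unpredictable nonce for every signature:

```python
P, A, N = 17, 2, 19  # field prime, curve coefficient a, group order
G = (5, 1)

def add(p1, p2):
    """Curve point addition (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Scalar multiplication by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def sign(d, h, k):
    """ECDSA: r is the x-coordinate of k*G, s binds h and the key d."""
    r = mul(k, G)[0] % N
    s = pow(k, -1, N) * (h + r * d) % N
    return r, s

def verify(Q, h, r, s):
    """Recompute k*G from public data; its x-coordinate must equal r."""
    w = pow(s, -1, N)
    pt = add(mul(h * w % N, G), mul(r * w % N, Q))
    return pt is not None and pt[0] % N == r

d = 5               # signer's private key
Q = mul(d, G)       # signer's public key
r, s = sign(d, h=4, k=3)
assert verify(Q, 4, r, s)       # the signed message verifies
assert not verify(Q, 5, r, s)   # an altered message does not
```

The check works because the verifier's point (h·s⁻¹)·G + (r·s⁻¹)·Q algebraically collapses back to k·G, without ever revealing d or k.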

ECC support has recently been added to OpenSSL, and the approach builds on some very interesting mathematics. If you have an interest in public key cryptography, I definitely recommend learning about it. A good place to start reading is the Handbook of Elliptic and Hyperelliptic Curve Cryptography, but there are also many references available online.

Posted in Cryptography


Welcome to my homepage.  I am a software developer working on tools for secure and trustworthy system development.

Posted in Meta