Best Practices in Codebase Rescues

A codebase rescue - the process of cleaning up, refactoring, and modernizing legacy open-source code - has some unique challenges. This essay is about what to do and what not to do. It's based on extensive experience with codebase rescues including GPSD, NTPsec, VMS Empire, OpenAdventure, and many others.

A typical codebase rescue project (the kind I see most often, anyway) is some kind of Unix network service code written in C dating back to the early 1990s or even 1980s. It's been in use forever, it's been hacked over by many hands, and serious compatibility issues would be likely to arise if you tried to reimplement its functionality from scratch. Often the program's behavioral repertoire is not fully documented, and there may well be no one person who fully understands either the repertoire or the code. There may be an RFC that the codebase does not quite track.

Legacy games like OpenAdventure or Super Star Trek make an interesting contrast. They're not as complex as network service daemons, but the idiom of the code can be even older and weirder. I've done one rescue from FORTRAN that dated back to 1973! Nevertheless, they teach some of the same methodological lessons as more serious rescues, the things you do to keep the Internet running.

So, you're faced with 200KLOC of archaic C - for reasons I don't understand, these ancient projects seem to cluster around that size. Your job is to bring it up to date so that maintaining it will no longer be a nightmare, and it can be extended/improved.

Define your victory condition

Know what you want to accomplish, and state it clearly when you start your rescue. This will help you attract developers and funding. Also, you'll know when you're done - not a trivial matter on a large project.

Probably the single most frequent motivation for these rescues is that some piece of infrastructure software has developed security issues due to neglect and bit-rot. It may have CVEs, or its crypto might be out of date.

Keep track of what you fix so you can brag about it (in the best understated just-the-facts-ma'am style, of course). This also attracts developers and funding.

Get thee to modern version control

If the rescue codebase has a history under Subversion, you should probably move. If it's under CVS or anything proprietary you should certainly move.

The reason is simple: you want to reduce the friction costs of development, and in particular the entry barrier for new developers.

Practically speaking, this means moving to git. Procedure for that is beyond the scope of this essay, but you can read the http://www.catb.org/~esr/reposurgeon/dvcs-migration-guide.html[DVCS migration HOWTO].

First, write regression tests

The first thing to do on the code itself, if at all possible, is write regression tests. Programs as old as the typical rescue project rarely if ever have them already.

Very occasionally this isn't possible; NTPsec was like this, for reasons too painful and technical to go into here. But usually your program can either be made to run in a filter mode that deterministically transforms inputs to outputs, or it can be made to capture transaction logs that, with more effort, it can be taught to replay.

(The reproducibility is important enough that you shouldn't be afraid to do things like spoof the C library's clock-time calls if you have to. Do what it takes.)
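
As an illustration of the clock-spoofing idea, here is a minimal sketch; the shim name and the TEST_FAKE_TIME variable are invented for this example, not taken from any particular project. The point is that every time query goes through one place the test harness can pin to a fixed value.

----
/* Hypothetical sketch: route all wall-clock queries through one shim so
 * the test harness can force a deterministic clock. */
#include <stdlib.h>
#include <time.h>

time_t now_wallclock(void)
{
    const char *fake = getenv("TEST_FAKE_TIME");   /* set only by the test harness */
    if (fake != NULL)
        return (time_t)atoll(fake);                /* deterministic replay */
    return time(NULL);                             /* normal operation */
}
----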

Good tests will save you immense amounts of pain and work, because you'll know quickly when you broke something while changing the code. A practice that goes with this is: every bug report should become another regression test.

If your program's inputs and outputs are binary (say, TCP/IP packets with non-text content) put in the effort to write lossless conversion to and from some textualized format, and then keep your test loads and check files in that form. This will make it easier to hand-compose tests that probe obscure regions of the program's behavioral space.
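
A minimal sketch of what that conversion layer can look like (the helper names are mine, not from any existing project): dump raw bytes as hex text for storage in the test suite, and parse the text back when replaying.

----
/* Illustrative helpers for keeping binary test loads in a text form. */
#include <stdio.h>
#include <stddef.h>

void bytes_to_hex(const unsigned char *in, size_t len, FILE *out)
{
    for (size_t i = 0; i < len; i++)
        fprintf(out, "%02x%c", in[i], ((i + 1) % 16 == 0) ? '\n' : ' ');
}

int hex_to_byte(const char *token)   /* one two-digit hex token -> byte, or -1 */
{
    unsigned int value;
    return (sscanf(token, "%2x", &value) == 1) ? (int)value : -1;
}
----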

This whole section is good practice for any code you write or maintain. The reasons I'm emphasizing it for rescues are (a) the hacked-over-by-many-hands history has often left them lots of odd little behavioral corner cases which users later came to depend on and you must thus preserve, and (b) writing synthetic test loads is generally the only way you have to verify the odd behaviors you think you see in the code.

If you find writing tests irritating and boring, remind yourself of a truth: each hour of test construction tends to save you many hours of boring and frustrating debugging work down the road. And that's not even counting the moments of outright panic and terror you avoid if you do this right.

The combination of good tests with fast version control is especially liberating. It frees you to move fast, try chancy things, and have fun without worrying that your experiments will break the project.

More truth: sufficiently good tests eventually drive your rate of visible errors low enough to make you look like a coding god with precognition and wizard fingers. This is not a sham or an accident.

Second, clean up all compiler warnings

With regression tests in place, the next thing to do to the code itself is clean up all compiler warnings. The reason is dumb and simple: if you don't, they form a visual thicket in which actual error messages can hide.

We applied this policy to NTPsec after a false start during which we noticed what happened when we didn't.

Another advantage of doing this step relatively early is that cleaning up warnings is a good way to motivate shallow little dives into the code, which are preparation for the deeper ones you'll need to do at later steps.

Motivation like that is important - reading code as an activity in itself is hard and most people can't actually make themselves do it at the volume required to wrap their brains around one of these large, crufty old projects.

Accordingly a loop you will repeat many times during any rescue is: find some small thing you want to modify, read just enough code to do it, make the change, check correctness...then look for another thing to modify. If you repeat often enough, the knowledge you acquire from each point change will accumulate and you will start being able to make larger changes.

Continue this at larger and larger scales until you understand everything you need to. Fixing compiler warnings is the first step on that road.

Third, apply code checkers

Static code-auditing programs like Coverity and cppcheck are your friend. Add productions to your project's build recipe to make applying them easy. Treat their warnings like the compiler warnings they resemble.

You'll do one big fix pass with these at the beginning of your cleanup; this, too, is part of your iterative code-learning process. You should also apply them regularly in later stages for the exact same reason you never want to leave a compiler warning hanging out.

Yes, coming up with the right set of suppressions for bogus warnings is fiddly and boring. Do it in the knowledge that every hour of work you put in will pay off in a lot more fiddly and boring work avoided down the road.
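
As an illustration of the mechanism (whether your checker flags this exact pattern will vary), cppcheck accepts inline suppressions: a comment naming the check id from the diagnostic you have decided is bogus, placed just above the offending line. Coverity has analogous annotations.

----
#include <stdlib.h>

static void *registry[16];
static int nregistered;

/* The allocation is deliberately kept alive in a global table; some
 * checkers read this as a leak.  The comment below silences only the
 * named check ("memleak") at only this site. */
void *register_buffer(size_t n)
{
    // cppcheck-suppress memleak
    void *p = malloc(n);
    if (p != NULL && nregistered < 16)
        registry[nregistered++] = p;
    return p;
}
----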

Fourth, enforce C99 compliance

I've been assuming that the codebase you're rescuing is written in C under Unix. This is a safe bet because almost no other language and operating-system API has been roughly stable for long enough to make codebase rescues both practical and necessary.

(One possible exception is very old FORTRAN; this is really annoying but basically tractable, as there are FORTRAN to C source-code translators that work.)

The first non-trivial challenge you will face is bringing the C up to date.

If you are very unlucky you might encounter pre-ANSI C without function prototypes, though such survivals are rare as I write in 2018 and might be extinct by the time you read this. If you trip over one, fix that first; this is a change that is easy to verify.
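
For the record, here is what that fix looks like on a made-up function: the pre-ANSI form declares parameter types after the parameter list, while the prototyped form lets the compiler check every call site.

----
#if 0   /* pre-ANSI (K&R) style */
int checksum(buf, len)
char *buf;
int len;
{
    /* ... */
}
#endif

/* ANSI/C99 style, with a prototype the compiler can enforce */
int checksum(const char *buf, int len)
{
    int sum = 0;
    while (len-- > 0)
        sum += (unsigned char)*buf++;
    return sum;
}
----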

Over time, C - and the generally accepted style of C programming - has moved towards stricter typing and more enforcement of type safety. In old code you may find uses of type punning, casts, and unions that would be unacceptable today.

There's a reason these old-fashioned practices are now frowned upon; they are defect attractors. You should eliminate them. More type safety means your compiler and auditing tools can do more to warn you of problems, allowing you to lower your defect rate.

In particular, try hard to eliminate casts. These are a sign that the code is violating type safety, and that is always a potential trouble spot. There are two contexts where you may be unable to: casts on pointers returned from malloc/calloc (harmless, and strictly required only if the code must also compile as C++), and dealing with machine-word length variations in printf(3) and friends. Any casts outside these contexts should be viewed with deep, deep suspicion and probably terminated with extreme prejudice.
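
A small illustration of the distinction, with invented variable names: the first two casts are the tolerated contexts, the third is the kind that should set off alarms.

----
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

void cast_examples(void)
{
    /* Tolerated: cast on malloc's return (harmless in C, needed for C++). */
    int *slots = (int *)malloc(16 * sizeof(int));

    /* Tolerated: coping with word-length variation in printf, here by
       casting to a known-width type and using the matching PRI macro. */
    size_t n = 16;
    printf("%" PRIu64 " slots\n", (uint64_t)n);

    /* Suspicious: punning an object through an unrelated pointer type;
       undefined behavior under strict-aliasing rules. */
    float f = 1.0f;
    int bits = *(int *)&f;
    (void)bits;

    free(slots);
}
----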

Apply the C89 void type. Functions that don't return anything should get it as a return type (in older code it was generally int). Untyped pointers such as those returned by malloc(3) should be (void *), not (char *) as in older code.

If the codebase does not already use a 'bool' type, a step that is more important than it looks at first sight is identifying semantically boolean variables and lifting them to use 'bool' and the 'true' and 'false' keywords. Old-school practice was to type these as 'int' and use FALSE and TRUE macros. Get rid of these and use <stdbool.h>.

Yes, you want the slight improvement in type checking to lower downstream defect rates. The less obvious but possibly more important reason is that this booleanizing step is likely to teach you important things about the code even while you only half notice that it's doing so. This is because semantic bools tend to be concentrated in important parts of the program's control flow.
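
Here is a tiny before-and-after sketch of the booleanization step (the variable is invented for illustration):

----
#include <stdbool.h>

/* Old school:
 *     #define FALSE 0
 *     #define TRUE  1
 *     int peer_reachable = FALSE;
 */

/* After lifting to bool: the intent is visible to reader and compiler alike. */
static bool peer_reachable = false;

void mark_reachable(bool up)
{
    peer_reachable = up;
}
----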

Old code sometimes embeds a tacit assumption that C ints are 32 bits.
This is difficult to audit for systematically, but be aware of it when you're hunting casts and other defect attractors.
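
Where you do catch one of these assumptions, the fixed-width types from <stdint.h> let you make it explicit rather than silent; a sketch with invented names:

----
#include <stdint.h>

/* was: unsigned int status_mask = 0xFFFFFFFF;  -- silently assumed 32 bits */
uint32_t status_mask = 0xFFFFFFFFu;

/* was: int wire_value;  -- read directly off a 32-bit wire field */
int32_t wire_value;
----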

Try builds with the compiler set to C99 mode (-std=c99 with GCC or Clang). This may produce warnings that will be useful.

Why C99, rather than C89 or C11? The good-practice advice used to be to assume only C89, but I learned by field observation in the late 2000s on the GPSD project that C99 support had become universal and solid in every C compiler. It may have happened sooner without me noticing; like most C programmers who lived through the pre-standardization years I had a strong habit of conservatism in my assumptions in order to avoid nasty surprises.

On the other hand, post-C99 features are seldom relevant to rescues of old code. There are two minor exceptions:

The more important is anonymous unions; these are useful and safe, but only became officially standard in C11 (they have long been standard in C++ and a common extension in C99 compilers). When I write coding guidelines for my C projects I allow these and have never had a port problem as a result.
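
Here is what the feature looks like in use, on a made-up struct: the union has no member name, so its fields are addressed directly through the enclosing struct.

----
#include <stdint.h>

struct sample_value {
    int kind;               /* discriminator maintained by the programmer */
    union {                 /* anonymous: no name for the union member */
        int32_t as_int;
        double  as_real;
    };
};

/* usage:
 *     struct sample_value v;
 *     v.kind = 0;
 *     v.as_int = 42;        -- no intermediate member name needed
 */
----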

The other might be the atomic-operation and memory-barrier primitives in C11. If you need them you're really going to need them. Better to assume C11 conformance and accept that you can only use a compiler/hardware combination that supports it than do something ad-hoc; these primitives are notoriously hard to get right and the cost of screwing up can be nasty.
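
A minimal sketch of those primitives, purely for flavor: an atomic counter plus a release/acquire pair guarding a flag.

----
#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint packets_seen;
static atomic_bool data_ready;

void producer_publish(void)
{
    atomic_fetch_add(&packets_seen, 1);
    atomic_store_explicit(&data_ready, true, memory_order_release);
}

bool consumer_may_read(void)
{
    return atomic_load_explicit(&data_ready, memory_order_acquire);
}
----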

The C cleanup step may overlap some with the next one. That's OK.

Fifth, enforce POSIX compliance

Codebases that need rescue tend to be full of preprocessor conditionals to support Unixes not seen since dinosaurs roamed the Earth. This is often true even of code most of which postdates successful Unix standardization; Unix tradition long discouraged throwing out any portability shims at all, because you never knew what weird platform your code might be deployed to.

But it's the 21st century and time to throw that crap out. The standards people won; take advantage. Write to a conformance baseline of POSIX.1-2008 (until recently I might have said POSIX.1-2001, but toolchains have continued to improve).

You collect several gains from this. The most obvious is readability/maintainability. Those thickets of ifdefs and variant library calls that time forgot are hard on the eyes and brain. They proliferate test paths.

It is an empirical fact (related to proliferating test paths) that those grotty variant sections tend to be really powerful defect attractors. As elsewhere in software engineering, ugly correlates with bad. Removing them can reduce your downstream bug load significantly.
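
To make the gain concrete, here is the flavor of cleanup this step enables, on an invented example: a shim that once papered over the BSD/System V split collapses into one call guaranteed by the POSIX baseline.

----
/* Before:
 *
 *   #if defined(SYSV) || defined(SVR4)
 *   #include <string.h>
 *   #define index(s, c) strchr((s), (c))
 *   #else
 *   #include <strings.h>
 *   #endif
 *       ... index(buf, ':') ...
 *
 * After, written to POSIX.1-2008 only: */
#include <string.h>

char *find_colon(char *buf)
{
    return strchr(buf, ':');    /* strchr is guaranteed by C89 and POSIX */
}
----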

Finally, getting rid of port shims enables your next step.

Sixth, shoot autoconf through the head

This step is a bit controversial - autoconf still has its advocates. In unusual cases it might be best to leave autoconf in place; by the end of this section I hope you will know how to recognize those.

This is a complicated topic. A bit of history helps in understanding the mess we're in.

Original 1970s Unix practice for software build recipes was to use plain Makefiles. This still works well for small projects shipping highly portable, self-contained code. But Makefiles scale quite badly, and don't at all address the problem of discovering local environment features like variant system calls or which service libraries are installed. Makefiles are particularly weak in coping with divergent Unix dialects, a problem which began to become acute in the early 1980s with the AT&T/BSD split.

Autoconf originated in the early 1990s as an attack on the problems Makefiles didn't solve. It includes feature discovery for the local environment, which in autoconf-speak is confusingly called "configuration".

When autoconf was written it solved a real problem. But the design included some choices that turned into complexity sinkholes. No modern scripting language was yet widely enough deployed to be considered portable, so autoconf was written in a mix of shell and m4 macros - both weak languages for its purposes, resulting in large autoconf files being extremely difficult to understand and modify.

A particularly unfortunate call was making autoconf a "two-phase" builder that generates Makefiles to be run, rather than a one-phase system that processes a build recipe directly. As more discovery features were added, the Makefile generation process became an increasingly crufty pile of kludgy auxiliary programs and mysteriously interlinked processing passes.

Eventually, advancing Unix and C standardization reduced the complexity of the feature-discovery phase enough that autoconf started to become overkill. In theory, at that point, many projects could have dropped back to makefiles. A few actually did. But autoconf recipes have a lot of inertia; they're so difficult to understand as wholes that the expedient tweak, solving today's problem at the cost of adding yet another glob of complexity, is the rule.

The result is you probably have an autoconf recipe in your codebase that has accreted complexity right along with the hacked-by-many-hands accretions to your C. It is more than likely a Lovecraftian horror by now - vast and incomprehensible, yielding dog-slow builds with weird sporadic failure modes that are nigh-impossible to diagnose.

And, due to standardization, much of that complexity may now be unnecessary! Every port-shim ifdef you removed in the previous step probably made part of your autoconf recipe obsolete.

You may be very lucky. Your autoconf recipe may be small enough to comprehend, and it will get smaller once you've ripped out the OS- and compiler-feature discovery. If you wind up with a recipe in the narrow complexity band that is low enough to wrap your brain around but a bit too high for a plain makefile, you're done.

But it is unlikely this will occur. Usually you should shoot autoconf through the head and replace it with a better build engine.

Exactly what to replace it with is beyond the scope of this essay. Drop back to a Makefile if you can; otherwise there are several good build engines out there, most notably SCons and waf. There is also cmake, which I dislike because it repeats the autoconf two-phase mistake, but some people swear by it.

After this point the rescue steps stop being serial; you can do parts of the rest in parallel with each other. I'll express this by no longer numbering them.

Refactor

I can't say a lot about this extremely important step, because it's semantically driven and what you need to do varies so much by project. But there are a few generalities.

Your primary goal should be to increase readability and maintainability. If you reduce line count, you are probably heading in the right direction.

Remember, you're not actually refactoring unless you can verify that you're not changing the externally-visible behavior of the code. If you can't do that verification, what you're actually doing is "flailing around in a way likely to break stuff".

So...if you didn't already have a strong test suite before going into refactoring, shame on you: build one now. Throughout the refactoring phase, improve your test coverage. Use tools like coverage analyzers to prove that your tests actually reach all of the code.

Make it pretty. This doesn't guarantee that it will be right, but it improves your odds. Once you have that test suite, it greatly improves your odds.

Old-school code tended to be sloppier about information-hiding than we are nowadays after decades of learning about OO languages. Try bundling globals into context structures. See if you can narrow the visibility of those structures.
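
A sketch of that move, with invented names: three file-scope globals become fields of a context structure that is passed explicitly and whose definition can live behind a single header.

----
#include <stdio.h>

struct app_context {
    int         debug_level;   /* was: global int debuglevel   */
    FILE       *logfp;         /* was: global FILE *logfp      */
    const char *confpath;      /* was: global char conffile[]  */
};

void log_debug(const struct app_context *ctx, const char *msg)
{
    if (ctx->debug_level > 0)
        fprintf(ctx->logfp, "debug: %s\n", msg);
}
----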

Despite traditional tools like YACC and LEX, generation of code from declarative descriptions is still a sadly underutilized technique. Any time you can reduce the amount of hand-written C in your codebase, you win.

Move code out of C when possible

C was designed for circumstances in which clock cycles were scarce and resource constraints were so tight that memory management had to be optimized by hand. In the early 1970s, that was everywhere. Now almost nothing but operating-system kernels and micro-controller firmware is so constrained.

What C traded for its efficiency was vulnerability to a large class of insidious errors in storage management. Those still bedevil us even after the original rationale for most uses of C is obsolete. C itself, now, is one of the most serious defect attractors.

We have alternatives. Modern scripting languages, with their automatic memory management, can take over for C in most circumstances where software only has to operate at human speed, as opposed to machine speed. They also bring many advantages in maintainability and ease of program modification. Python is especially strong in this regard.

Accordingly, it's good practice to move any code you can out of C and into Python - or any other language with automatic memory management; Python has the additional advantage of being ubiquitous nowadays, often already present for always-installed system scripts on modern Unixes.

If your code is a 200KLOC network service daemon, translating it to Python by hand might not be practical even if the result would handle machine-to-machine transaction speeds well (which, alas, it might not). But if your daemon has smaller client programs flying with it, those can often be moved to Python relatively easily.

In NTPsec, for example, we moved ntpq - the most-used monitoring tool - from C to Python. You can't tell this by watching it run; the default appearance is identical (which relates to a point I'll get to momentarily). But we now absolutely know that ntpq will never have a buffer-overrun crash. And it is far easier to debug or extend than the C code was.

(Another thing I can say is that as of early 2018 the time for worrying about Python 2 compatibility is basically past. All the major distros seem to be planning to go to 'python' equalling Python 3 within the next six months.)

Moving large, old C programs to a language with automatic memory management is just beginning to look like it might become practical. The enabling technology is Go, the language designed at Google to be about as close as possible to C and still have garbage collection. C to Go might be practical at large scale where C to Python is not.

I say "might" because as I write in early 2018 nobody has actually done this yet, at least not to well-known open-source code in public. I will be rather surprised if this remains true eighteen months from now.

Here, however, is a major caveat about these code moves: do not try to refactor or redesign while you are translating; that way lies doom and madness. I say this because the complexity of translation and the complexity of redesign on the fly are mutually multiplicative and are likely to rapidly take you to a place where you will be utterly lost.

To avoid this, do the stupidest possible 1:1 code transliteration that the impedance mismatch between your languages will let you get away with first; then go nuts with all the cool features of your target language.

(The main reason I expect C to Go to work better than C to Python at large scales is that the impedance mismatch at that junction is much lower. Go appears to have been designed with this as a goal.)

Apprentices are important

A final point about code rescues: it is good to have some apprentices around, and participating in the work, when you do one of these.

It's good for the apprentices. A codebase rescue is near-ideal circumstances to learn the kinds of good practices I've been describing, precisely because nobody is preoccupied by large questions of original design.

It's good for the future of the codebase, too. The usual reason that large old codebases bit-rot enough to need a rescue is that they've been under-maintained for years. Apprentices who learn good chops on a rescue are natural candidates to be future maintainers.


Net-SNMP

SNMP is a widely used protocol for monitoring and managing routers, switches, computers, printers, and other devices such as UPSes. It is used not only to monitor and manage devices within an organization's network, but all the way up to Tier 1 ISPs. Net-SNMP (www.net-snmp.org) is the reference implementation of SNMP and the most widely used one; it runs on most Unix-like operating systems and a variety of processor architectures. Unfortunately, the project's development activity has slowed significantly; the latest release, 5.7.3, came out in December 2014. Coupled with issues in the existing RFCs, this has led to non-compliant SNMP implementations or, worse, proprietary protocols that effectively replace SNMP. This fragmentation of a standard monitoring and management protocol could lead to major issues down the road.

The main goal of the "Net-Snmp Rescue" project is to assure the longevity of the SNMP protocol and its reference implementation. By providing a robust and well-maintained implementation, we hope that relevant organizations will use an SNMP protocol implementation that adheres to the standard, allowing for better maintainability of the networks that have now become an integral part of our world. Our goals are to make the current code base more readable; reduce the number of open bugs; improve performance; increase maintainability; and produce documentation for future users and developers.

Technology and Standards

Programming languages: C, Perl, Python
Protocol: SNMP protocol as defined by various RFC standards

Immediate openings

Testers: This is the highest priority at the moment! We need people testing 5.8pre2 (https://sourceforge.net/p/net-snmp/news/2018/03/net-snmp-58pre2-available/) on as many systems, networking environments, and SNMP implementations as possible. The Net-SNMP core dev team plans to release 5.8 in two months. Let's help 5.8.0 become the first release in Net-SNMP history that never needs a patch release.

Expectations are simple: if you can compile and run Net-SNMP 5.8pre2, we want you. If you understand the buzzwords in the 5.8pre2 announcement above, we want you. Bonus points if you write test scripts for your testing, although at this point step-by-step instructions will also be accepted. Your work now will most likely be needed again in the future. Contact the project manager (see below) if you're interested in helping out.

Other openings

Document writers: The SNMP protocol is defined by some 20 RFCs. We need someone to document how all these RFCs fit together, among other things, for future SNMP implementers.

Developers: The core dev team wants to get rid of the "#ifdef hell" in the codebase. If you know C and want to learn how to untangle a pot of spaghetti code, now is your chance.

State of things

  • All parties agree that things need to move a whole lot faster
  • A future release is in the works
  • The core team plans to move out of SourceForge

More information

List of RFCs related to the SNMP protocol: http://www.snmp.com/protocol/snmp_rfcs.shtml
Repo: https://sourceforge.net/p/net-snmp/code/ci/master/tree/


Simplifying AgentX Part 1: Packet Structure

The Simple Network Management Protocol (SNMP) is a system for monitoring and controlling other software and hardware systems. It can be used to monitor anything from the state of a running Network Time Protocol daemon, to the level of fuel in a storage tank. It exposes the information in a large hierarchical tree structure, with numerous "Management Information Base" files defining what data is expected in different parts of the tree. Because of the large number of different entities monitored by SNMP it is useful to split the work between a "master agent" that handles external communication and security, and one or more "sub-agents" which focus on their monitoring task and only communicate with the master agent.

AgentX is the protocol used for master-to-sub-agent communication, designed around the turn of the century as an open standard to replace the myriad protocols that had sprung up over the years. Sadly, both SNMP and AgentX suffer from the Curse of Systems Programming: they are decidedly uncool and of niche interest. The result is that documentation outside of the RFCs is hard to find at best. Often the only way to figure out for certain what the protocol wants is to poke a pre-existing agent with a stick and see what happens. Hence this series; all of the information here can be gleaned from the AgentX RFC (RFC 2741), but RFCs are not particularly newbie-friendly, and their sheer terseness means it can be difficult to discover what the standard defines if you are not already familiar with the subject.

AgentX packets consist of a 20-byte header, an optional context string, and an optional payload. Nearly all packet types have a payload, and many can have the context string. Padding bytes are inserted to align values to 4-byte word boundaries.

PACKET HEADER

The packet header starts with 4 bytes:

  • 1 byte giving the AgentX version (currently always 1)
  • 1 byte specifying the packet type
  • 1 byte for option flags
  • 1 reserved byte (padding)

Most flags are specific to certain packet types, but the Network Byte Order bit (bit 4) is relevant to all packets. If bit 4 is true, all data is given in big endian format. Any implementation must be able to handle either endianness.

The rest of the header is 16 bytes divided into four 4-byte fields (the full header is sketched as a C struct after this list):

  • Session ID: Uniquely identifies a single session between a master and sub-agent. A sub-agent may have more than one session open with a master at a given time.

  • Transaction ID: Identifies a single logical command across multiple packets.

  • Packet ID: Identifies a specific packet.

  • Payload Length: Length of the data after the header.
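
Put together, the header maps onto a C struct roughly like the sketch below. The struct and field names are my own, not taken from any implementation, and a real decoder must still honor the byte-order flag when reading the 4-byte fields.

----
#include <stdint.h>

struct agentx_header {
    uint8_t  version;          /* always 1 for AgentX */
    uint8_t  type;             /* packet (PDU) type */
    uint8_t  flags;            /* bit 3: non-default context, bit 4: network byte order */
    uint8_t  reserved;         /* padding */
    uint32_t session_id;       /* one sub-agent may hold several sessions */
    uint32_t transaction_id;   /* groups the packets of one logical command */
    uint32_t packet_id;        /* identifies this specific packet */
    uint32_t payload_length;   /* bytes following the header */
};
----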

CONTEXT STRING

For some packet types the user may provide a non-default context for indexing into extra configuration information. If a packet type supports this option, the Non-default Context bit in the flags field (bit 3) enables it; when set, the context string immediately follows the packet header. The context string is encoded as an Octet String.

Packet types with context string option: Register, Unregister, Get, GetNext, GetBulk, TestSet, Ping, Notify, IndexAllocate, IndexDeallocate, AddAgentCapabilities, RemoveAgentCapabilities

Packet types without context string option: Open, Close, CommitSet, UndoSet, CleanupSet, Response

PAYLOAD

PDU                                              Arguments
CommitSet, UndoSet, CleanupSet, Ping             Nothing
Open                                             Timeout, OID, Description
Close                                            Reason
Register                                         Timeout, OID, [optional OID]
Unregister                                       Padding byte, Priority, OID, [optional OID]
Get and GetNext                                  Search Range (all ending OIDs are null in Get packets)
GetBulk                                          Non-repeating count, Search Range
TestSet, IndexAllocate, IndexDeallocate, Notify  VarBind List
AddAgentCapabilities                             OID, Description
RemoveAgentCapabilities                          OID
Response                                         System Uptime, Error code, [optional VarBind List]

COMPOUND TYPES

Octet String

An Octet String consists of a 4-byte length followed by a blob of bytes. If the number of bytes is not a multiple of 4, null pad bytes are added at the end but not counted in the length. The Octet String is the wire form of many SNMP/AgentX types, including Context Strings, Descriptions, the BITS type, and the OPAQUE type.
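
A minimal sketch of writing one (the helper is my own and assumes big-endian output for the length field):

----
#include <stddef.h>
#include <stdint.h>
#include <string.h>

size_t put_octet_string(uint8_t *out, const uint8_t *data, uint32_t len)
{
    out[0] = (uint8_t)(len >> 24);              /* 4-byte length, big endian */
    out[1] = (uint8_t)(len >> 16);
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)(len);
    memcpy(out + 4, data, len);                 /* the blob itself */
    size_t padded = (len + 3u) & ~(size_t)3u;   /* round up to a 4-byte boundary */
    memset(out + 4 + len, 0, padded - len);     /* pad bytes, not counted in len */
    return 4 + padded;                          /* total bytes written */
}
----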

Object Identifier

An Object Identifier (OID) consists of a 4-byte header followed by zero or more 4-byte unsigned integers (subids). The header is 1 byte for the number of subids (maximum 128), 1 byte for the prefix, 1 byte used as an include flag, and 1 alignment byte. If the prefix field is non-zero, the standard internet prefix plus the prefix value (1.3.6.1.<prefix>) is prepended to the rest of the subids. The include flag is only used by Search Ranges.
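
In struct form the wire layout looks roughly like this (names are mine; on the wire the subids simply follow the 4-byte header):

----
#include <stdint.h>

#define AGENTX_MAX_SUBIDS 128

struct agentx_oid {
    uint8_t  n_subid;                    /* how many 4-byte subids follow (max 128) */
    uint8_t  prefix;                     /* non-zero: OID starts with 1.3.6.1.<prefix> */
    uint8_t  include;                    /* meaningful only inside a Search Range */
    uint8_t  reserved;                   /* alignment byte */
    uint32_t subids[AGENTX_MAX_SUBIDS];  /* the remaining subidentifiers */
};
----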

Varbind

A VarBind consists of a 4-byte header, an OID, and an optional data blob. The header is 2 bytes specifying the value type, plus 2 alignment bytes. For types that have associated data, the blob carries it.
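
The fixed part of a VarBind is small enough to show the same way (again, the names are mine):

----
#include <stdint.h>

struct agentx_varbind_header {
    uint16_t type;       /* value type code */
    uint16_t reserved;   /* two alignment bytes */
    /* on the wire: the name OID follows, then type-dependent data, if any */
};
----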

Search Range

A Search Range is simply two OIDs back to back. The first OID specifies the beginning of the range, inclusive or exclusive depending on its include flag. The second OID gives the non-inclusive end of the range (its include flag is always false), or an unbounded range if it is empty.

In future posts I plan on describing other parts of the AgentX and SNMP standards that have gotchas or suffer from unclear documentation. Definite topics include the agent-to-master connection sequence, and how data types work.