Articles
- Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study by Ishtiaq Rouf et al.
- SPY Car Act of 2015
- Planning for the Future by Mike Wood
- Introduction to Test Driven Development (TDD) by Scott Ambler
- Wikipedia article on The Magical Number Seven, Plus or Minus Two
Reflections
All of these articles, except the Wikipedia article on The Magical Number Seven, Plus or Minus Two, revolve around building safe, secure, and reliable software. In fact, the first recommendation for creating secure tire pressure monitoring systems in Security and Privacy Vulnerabilities of In-Car Wireless Networks was to follow basic reliable software design practices. The authors noted that the studied tire pressure monitoring systems allowed impossible values to be displayed. Had reliable software design practices been followed, some of the more obviously incorrect behaviors, like displaying those impossible values, could have been avoided. One reliable software design practice that could be used to design future tire pressure monitoring systems is test-driven development (TDD). The idea behind TDD is that tests are written before the code, development centers on making those tests pass, and the tests must pass before development continues. Ideally, the case where a valid tire pressure is displayed alongside a tire pressure warning flag would have been identified before developing the system, and a test would have been written for that case in TDD.
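As a rough sketch of how TDD might have caught these issues, here is a minimal Python example. The `TpmsDisplay` class, its pressure limits, and the specific tests are hypothetical illustrations of the idea, not taken from the paper or from any real TPMS implementation.

```python
import unittest

# Hypothetical pressure limits for a passenger-car tire, in PSI.
MIN_VALID_PSI = 0
MAX_VALID_PSI = 100
LOW_PRESSURE_PSI = 25  # threshold below which the warning lamp should turn on


class TpmsDisplay:
    """Hypothetical display logic for a tire pressure monitoring system."""

    def __init__(self):
        self.shown_psi = None
        self.warning_on = False

    def update(self, reported_psi):
        # Reject impossible values instead of displaying them.
        if not (MIN_VALID_PSI <= reported_psi <= MAX_VALID_PSI):
            raise ValueError(f"impossible pressure reading: {reported_psi}")
        self.shown_psi = reported_psi
        # Keep the warning lamp consistent with the displayed value.
        self.warning_on = reported_psi < LOW_PRESSURE_PSI


class TpmsDisplayTests(unittest.TestCase):
    # In TDD these tests are written first and fail until the display
    # logic above is implemented to satisfy them.

    def test_impossible_value_is_rejected(self):
        with self.assertRaises(ValueError):
            TpmsDisplay().update(-40)

    def test_valid_pressure_does_not_raise_warning(self):
        display = TpmsDisplay()
        display.update(32)  # a normal pressure
        self.assertEqual(display.shown_psi, 32)
        self.assertFalse(display.warning_on)

    def test_low_pressure_raises_warning(self):
        display = TpmsDisplay()
        display.update(15)
        self.assertTrue(display.warning_on)


if __name__ == "__main__":
    unittest.main()
```

The second test captures exactly the inconsistency described above: a valid displayed pressure should never be accompanied by a warning flag.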
Five years after Security and Privacy Vulnerabilities of In-Car Wireless Networks was published, the SPY Car Act of 2015 was introduced in Congress. The SPY Car Act is essentially proposed federal legislation to ensure sensors and other computer systems in vehicles are secure. It even outlines a specific design principle to ensure vehicles remain safe in the event of an attack: “ISOLATION MEASURES.—The measures referred to in subparagraph (A) shall incorporate isolation measures to separate critical software systems from noncritical software systems.” This would ensure that if something like the tire pressure monitoring sensors were spoofed, the safety-critical systems in the car would retain their integrity. Ensuring a system is safe and secure also ties in with the notion of reliable systems. If a system is unsafe, or can easily be attacked, it has the potential to not function as specified, making it unreliable. Reliable systems must also fail gracefully, so that when the system does go down, it causes as little trouble for its users as possible. Mike Wood outlines multiple ways to handle failure in software systems in Planning for the Future, and stresses that failure is not a matter of “if” but “when.” After reading through each of the techniques, it became clear to me that no matter which technique a system implements to prevent or handle failure, it is useless if the failure mode it guards against was never foreseen. This is where, I believe, TDD is a useful software development practice for creating reliable software.
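To make the graceful-failure idea concrete, here is a small sketch of how a noncritical reading might degrade without dragging anything else down with it. The sensor interface, the `SensorUnavailable` error, and the fallback behavior are all my own hypothetical examples, not drawn from Wood's article or from the Act.

```python
class SensorUnavailable(Exception):
    """Hypothetical error raised when a TPMS wheel sensor cannot be read."""


def read_tire_pressure(sensor):
    """Return a pressure reading, or None if the sensor has failed.

    The noncritical display degrades to a "sensor fault" indicator
    instead of crashing or showing a stale, impossible value.
    """
    try:
        return sensor.read_psi()
    except SensorUnavailable:
        # Fail gracefully: report the fault rather than propagate it.
        return None


class FakeBrokenSensor:
    """Stand-in sensor used to demonstrate the fallback path."""

    def read_psi(self):
        raise SensorUnavailable("no response from wheel unit")


if __name__ == "__main__":
    reading = read_tire_pressure(FakeBrokenSensor())
    if reading is None:
        print("TPMS sensor fault: check tire pressures manually")
    else:
        print(f"Tire pressure: {reading} PSI")
```

Even a simple fallback like this only exists, of course, if someone foresaw the failing-sensor case, which is the point made above about unforeseen failure modes.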
I also believe TDD fits well with the idea that humans have a hard time keeping more than seven, plus or minus two, objects in active memory at once. In TDD, you think of a small set of test cases, develop code to pass those test cases, and repeat. This lets a developer focus on the test cases alone before worrying about the implementation that will pass them. If we develop in a way that mirrors how much we can hold in mind at one time, then the resulting system should be more reliable, safe, and secure.