In the previous two posts (Part I and Part II), I talked about the lives lost on the roads, about how improved communications and IT can help save those lives, and about the path to deployment for that technology. In this post I'm going to address security.

Security Innovation (along with NTRU Cryptosystems, which SI acquired in July 2009) has been active in this area since 2003; since then, I've been the editor of IEEE 1609.2, the standard that specifies the security processing for these communications. We've done a lot of work analyzing exactly what the security requirements are. In the next few posts in this series I'm going to go through some of the design decisions we made. This should be useful in two ways:

  1. When you come across news stories about the system (which will become more and more common over the next eighteen months), you'll have some independent background to help you evaluate claims about the system's security, a topic that high-level news stories frequently get wrong.
  2. It will provide some insight into how we as crypto and security experts think about these topics and how security design decisions get made.

As in any project, before defining security mechanisms, you need to be clear about what the threats are. For secure vehicle communications, there are two main risks:

  1. Someone will introduce fake messages into the system.
  2. The system will compromise user privacy to an unacceptable level.

If the system is flooded with fake messages, what happens? To an extent this depends on whether and how the fake messages affect the driver. Different carmakers will implement different alert systems within their cars, and all carmakers are keenly aware of the importance of reducing driver distractions, so they will work as hard as they can to reduce false alerts. Nevertheless, if fake messages can be accepted, they'll lead to false alerts, and the easier it is to send fake messages, the more false alerts will be raised. This might cause one of three problems:

  1. Drivers react to the false alert in a way that causes an accident.
  2. Incident responders (police, ambulance, etc.) are drawn to an accident that isn't happening, leaving fewer of them available to respond to a real incident or to a crime planned elsewhere.
  3. Over time, drivers pay less and less attention to the alerts, and the system gradually becomes less effective at saving lives as a result.

As we thought about these problems, it seemed that (3) was the real threat.

We didn't think (1), tricking drivers into causing accidents, was a significant threat, because the system will not be used for automated driving; any driving decisions will be made by the drivers themselves. It's hard to come up with a scenario where a false message has a significant chance of causing an accident, unless the false message distracts the driver when they're already in a dangerous situation. It will certainly be hard for an attacker to deliberately cause an accident that was unlikely to happen otherwise.

As for (2), hoaxing the police is a trick that has been around for centuries, and the police know how to deal with it. The important thing is to move carefully when putting in place automated systems that might overreact to these messages: it's the reaction that needs to be managed, not the message itself.

But (3) is a significant concern. If everyone turns off their alert system, no-one gets the alerts, and the lives that could be saved go unsaved. So we have to protect the messages in a way that guarantees that incoming messages to your car genuinely reflect what's happening around you right now. If we don't, the system will gradually grind to a halt.
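To make "what's happening around you right now" concrete, here's a minimal sketch of the kind of recency and locality check a receiver might apply. The field names and thresholds are illustrative assumptions for this sketch, not values from IEEE 1609.2, and on their own these checks are worthless against forgery: an attacker can lie about time and position, which is exactly why the cryptographic protections discussed next are needed.

```python
import math
import time

# Illustrative thresholds -- assumptions for this sketch, not IEEE 1609.2 values.
MAX_MESSAGE_AGE_S = 1.0         # a safety message is only useful if it's fresh
MAX_PLAUSIBLE_RANGE_M = 1000.0  # roughly the reach of a short-range vehicle radio

def distance_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6_371_000 * 2 * math.asin(math.sqrt(a))

def plausibly_current(msg_time, msg_position, my_position, now=None):
    """Reject messages that are stale or claim to come from too far away.

    This is only a plausibility filter: timestamps and positions can be
    forged, so a message must also carry a valid signature from an
    authorized sender before it is trusted.
    """
    now = time.time() if now is None else now
    if now - msg_time > MAX_MESSAGE_AGE_S:
        return False  # too old to describe "right now"
    if distance_m(msg_position, my_position) > MAX_PLAUSIBLE_RANGE_M:
        return False  # too far away to describe "around you"
    return True

# Example: a message sent half a second ago from about 200 m away passes.
print(plausibly_current(time.time() - 0.5, (42.3601, -71.0589), (42.3619, -71.0589)))
```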

That leads to needing the usual security services: authorization, authentication, and integrity checking. (The other common security service, confidentiality, isn't needed, since these messages are meant to be broadcast.)
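As a rough sketch of what authentication and integrity checking look like in practice, here's a broadcast message signed and verified with ECDSA over the NIST P-256 curve, one of the signature algorithms 1609.2 supports. This uses the general-purpose Python `cryptography` library rather than any 1609.2 implementation, and the message contents and standalone key pair are illustrative: in the real system, authorization comes from a certificate that binds the signing key to the sender's permissions, and that chain-of-trust check is omitted here.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Sender side: sign the outgoing message. The message itself stays in the
# clear -- it's meant to be broadcast -- so there is no encryption step.
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"basic safety message: position, speed, heading, time"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Receiver side: verify before trusting. In the real system the public key
# arrives in a certificate that also establishes the sender's authorization;
# that certificate handling is omitted from this sketch.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("valid: message is intact and was signed by the key holder")
except InvalidSignature:
    print("invalid: discard the message")
```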

So we've decided what the basic security services need to be. But there are still some outstanding issues:

  • Implementation – how do we implement them: with public-key cryptography, symmetric-key cryptography, or something else?
  • Privacy – how do we ensure you can’t be tracked?
  • Scalability – how do you manage 300 million units with potentially hostile owners?
  • Management – what’s the best way to keep all cars up to date, so they can trust messages they should and not trust messages they shouldn’t?

I’ll go through these one by one in the next few posts in this series.