I have been re-reading Taleb’s “Antifragile,” and was delighted to find a discussion on speed and safety within his systems thinking.
For those who are unfamiliar with Taleb’s work, he has written extensively on the topic of fragility. In a nutshell, his argument is that society spends far too many resources trying to ‘optimize’ systems, such as by trying to predict future events. Even with highly sophisticated models, however, we are still not very good at prediction (the recent presidential election is a perfect example). Given such limitations, Taleb argues that we should instead focus our energy on building robust systems that can withstand predictions being wildly off.
Although he generally writes from a limited-government perspective, he makes some suggestions about when governments should intervene in systems. From chapter 7:
What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks. These actions may be devoid of iatrogenics–but it is hard to get governments to limit the size of government. For instance, it has been argued since the 1970s that limiting speed on the highway (and enforcing it) leads to an extremely effective increase in safety. This can be plausible because risks of accidents increase disproportionately (that is, nonlinearly) with speed, and humans are not ancestrally equipped with such intuition. Someone recklessly driving a huge vehicle on the highway is endangering your safety and needs to be stopped before he hits your convertible Mini…
This should sound pretty familiar to those steeped in Vision Zero thinking. Rather than trying to prevent collisions, we are trying to reduce speeds so that when mistakes happen (because they will, even with AV technology), the result is not severe injury or death.
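To make the “nonlinear” point in the quote concrete: a vehicle’s kinetic energy grows with the square of its speed, so seemingly modest increases in speed put far more destructive energy into a crash. Here is a minimal Python sketch of that scaling; the vehicle mass and the particular speeds are illustrative assumptions on my part, not figures from Taleb or from Vision Zero guidance.

```python
# Minimal sketch: kinetic energy = 1/2 * m * v^2, so crash energy grows with
# the square of speed. Mass and speeds below are illustrative assumptions.

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """Kinetic energy of a vehicle, with speed converted from mph to m/s."""
    speed_ms = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * speed_ms ** 2

mass_kg = 1360  # roughly a 3,000 lb passenger car (assumed for illustration)
baseline = kinetic_energy_joules(mass_kg, 15)
for mph in (15, 25, 40, 55):
    ke = kinetic_energy_joules(mass_kg, mph)
    print(f"{mph:>2} mph: {ke:>9,.0f} J  ({ke / baseline:.1f}x the energy at 15 mph)")
```

Doubling speed roughly quadruples the energy that has to go somewhere in a collision, which is why speed reduction does so much of the safety work even when mistakes still happen.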
Interestingly, he also makes the case that we may be over-regulating in some areas, such as the placement of signs. He cites the case of Drachten, in the Netherlands, where removing the traffic signs led to an improvement in safety. The theory behind this safety improvement is that an overly standardized and predictable roadway leads drivers to pay less attention (especially to people walking and bicycling).
I don’t think these are necessarily in conflict. Instead, I would argue that it is acceptable to create that type of less predictable roadway (by removing signs, for example) only after we have reduced speeds to safe levels (determined by the types of collisions that could occur on the roadway). For example, when driving in proximity to vulnerable users at speeds below 15 mph, it is quite possible to make the continuous mental calculations necessary to drive safely. In fact, when you are already at low speeds, this type of environment likely encourages you to slow down even more when necessary, because other drivers are similarly driving slowly. However, when you are driving 55 mph (which could be perfectly safe if there are no vulnerable users around), it would be really difficult to navigate highways without advance warning signs (and slowing down to make the mental calculations would probably make the roadway less safe). So ultimately, I would argue that the Drachten case really only applies when speeds are already low enough, which is what we are working on for Vision Zero.
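One way to see why the 55 mph case is different: by the time a driver perceives something unexpected and reacts, the car has already traveled a long way, and the braking distance on top of that also grows with the square of speed. The short sketch below illustrates this; the 1.5-second reaction time and 7 m/s² deceleration are rough illustrative assumptions (something like an alert driver on dry pavement), not official traffic-engineering values.

```python
# Minimal sketch: total stopping distance = reaction distance + braking distance.
# Reaction time and deceleration are illustrative assumptions.

def stopping_distance_m(speed_mph: float, reaction_s: float = 1.5, decel_ms2: float = 7.0) -> float:
    v = speed_mph * 0.44704             # convert mph to m/s
    reaction = v * reaction_s           # distance covered before braking even begins
    braking = v ** 2 / (2 * decel_ms2)  # braking distance grows with the square of speed
    return reaction + braking

for mph in (15, 25, 55):
    print(f"{mph:>2} mph: ~{stopping_distance_m(mph):.0f} m to come to a stop")
```

At 15 mph the whole maneuver fits within a dozen or so meters, which is why continuous mental negotiation with other road users is workable; at 55 mph the same maneuver takes several times that distance, which is exactly the gap advance warning signs are there to cover.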
So there you have it - Vision Zero is robust!