From the abstract of my upcoming presentation at AAG 2016 in San Francisco:
Mapping dead-ends: how and where to consider noise reduction
Algorithmic detection of dead-ends and indirect streets could help cartographers create more legible transportation maps. In this presentation, we briefly cover algorithmic techniques for dead-end detection, borrowing many ideas from graph theory. We then apply these techniques to a sample of regions, using OpenStreetMap data, to discover the spatial variability in potential for geometry simplification, particularly as modes vary between car, foot, and bike. In what sort of regions and for which modes should we consider reducing the visual noise introduced by dead-ends and highly indirect streets?
This is something I’ve been toying with since the Cincinnati bike map last year, where I implemented Tarjan’s algorithm to detect and diminish simple dead-ends.
The effect on clarity, at least for me, was remarkable, and it seemed totally novel until Minh pointed out that Google Maps has been doing something similar for at least a couple of years (fuck!). Well, in any case, they haven’t done it all that well, and I still haven’t seen it on any other maps. The point of this paper/presentation will be to explore other possibilities for dead-end detection, going a bit beyond what Tarjan’s very useful algorithm can do, and to think about some ways that such dead-ends could be visualized, and what difference any of this makes anyway, particularly as we consider which mode a map (explicitly or implicitly) is designed for.
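For the curious, the core idea is simple to sketch. This is not the actual map code, just a minimal toy in Python: Tarjan’s bridge-finding pass over a made-up five-node street graph, where a simple dead-end (a cul-de-sac spur) shows up as a bridge, an edge whose removal disconnects the network.

```python
from collections import defaultdict

def find_bridges(adj):
    """Tarjan's bridge-finding: return edges whose removal disconnects the graph.

    A spur dangling off the street network (a simple dead-end) shows up as a
    bridge, so the bridge list is raw material for deciding what to diminish.
    """
    disc, low, bridges, timer = {}, {}, [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                 # back-edge to an already-visited node
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:      # nothing below v climbs back above u
                    bridges.append((u, v))

    for node in list(adj):
        if node not in disc:
            dfs(node, None)
    return bridges

# Toy network: one city block (0-1-2-3-0) plus a cul-de-sac spur (3-4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

bridges = find_bridges(adj)  # only the spur 3-4 is a bridge
```

On a real OpenStreetMap extract you’d want an iterative DFS (recursion depth) and some care with parallel edges, but the detection itself really is this small.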
So, I know no one reads this, but if anyone finds themselves in SF this March/April, I hope you’ll drop by and make my world feel slightly smaller!