Mind the Orwellian Surveillance Apparatus[^1]
The biggest surprise to me reading this piece about Transport for London’s experiment at Willesden Green tube station is that all this extra technology could piggyback on the existing, slightly outdated CCTV cameras.[^2]
> [This] was not just about spotting fare evaders. The trial wasn’t a couple of special cameras monitoring the ticket gate-line in the station. It was AI being applied to every camera in the building. And it was about using the cameras to spot dozens of different things that might happen inside the station.
>
> For example, if a passenger falls over on the platform, the AI will spot them on the ground. This will then trigger a notification on the iPads used by station staff, so that they can run over and help them back up. Or if the AI spots someone standing close to the platform edge, looking like they are planning to jump, it will alert staff to intervene before it is too late.
>
> In total, the system could apparently identify up to 77 different ‘use cases’ — though only eleven were used during the trial. These range from significant incidents, like fare evasion, crime and anti-social behaviour, all the way down to more trivial matters, like spilled drinks or even discarded newspapers.
So, the system can identify 77 different use cases, but TfL only decided to use 11 of them during the trial. The graphic in the article would look a lot scarier if its left pane listed all 77 potential use cases.
Given how much the system’s alerts rely on station staff reacting to them to fix the issues identified, it’d be nice to imagine that the sheer quantity of incidents revealed might argue for increasing staffing levels.
Why do I have an uneasy feeling that it might not go that way?
[Via LinkMachineGo]
[^1]: Title shamelessly borrowed from the subtitle of the source post.
[^2]: If this technology can work with older CCTV, it brings that *Person of Interest* moment that little bit closer to reality. We’d better hope that the Machine wins out over Samaritan.