In November, the San Francisco SPCA deployed a 5-foot-tall, 400-pound robot to patrol its campus. Not for muscle, mind you, but for surveillance. The SPCA, a large complex nestled in the northeast corner of the city's Mission neighborhood, has long dealt with vandalism, break-ins, and discarded needles in its surrounding parking lots. Fearing for the safety of its staff, the SPCA figured the robot could work as a deterrent, a sort of deputy for its human security team.
The robot came from a Silicon Valley startup called Knightscope, whose growing family of security machines works like slower, more disciplinarian versions of self-driving cars. The SPCA used the K5, the model built for outdoor patrols. Its scaled-down cousin, the K3, is meant for the indoors, while the K1 is a stationary pillar that will soon monitor things like building entrances. And the K7, a four-wheeled robot meant to patrol the perimeters of airports and the like, goes beta next year. The company is on a mission to take a bite out of crime by augmenting human security guards with machines. The path there, though, is fraught with ethical pitfalls.
The K5, along with almost 50 other Knightscope robots across 13 states, sees its world by coating it with lasers, autonomously patrolling its domain while taking 360-degree video. In an on-site control room, a human security guard monitors this feed for anomalies. Knightscope says K5 can read 1,200 license plates a minute to, say, pick out cars that have been parked for an inordinate amount of time. If you get in the robot’s way, it says excuse me. In the event of an emergency, the security guard can speak through the robot to alert nearby humans. The SPCA's robot patrolled both its campus and the surrounding sidewalks while emitting a futuristic whine, working as a mobile camera to theoretically deter crime.
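Knightscope hasn’t said how that parked-car flagging works, but the underlying bookkeeping is simple enough to sketch. Below is a minimal, hypothetical Python example of dwell-time flagging; the PlateRead record, the flag_long_parkers function, and the two-hour threshold are all assumptions made for illustration, not anything taken from Knightscope’s software.

```python
# A hypothetical sketch of dwell-time flagging from license-plate reads.
# Nothing here is Knightscope's actual software; every name and the
# two-hour threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PlateRead:
    plate: str          # normalized license-plate string from the camera
    seen_at: datetime   # timestamp of the read

def flag_long_parkers(reads, max_dwell=timedelta(hours=2)):
    """Return the set of plates whose sightings span more than max_dwell."""
    first_seen = {}
    last_seen = {}
    for r in reads:
        # Track the earliest and latest sighting per plate, in any read order.
        if r.plate not in first_seen or r.seen_at < first_seen[r.plate]:
            first_seen[r.plate] = r.seen_at
        if r.plate not in last_seen or r.seen_at > last_seen[r.plate]:
            last_seen[r.plate] = r.seen_at
    return {p for p in first_seen if last_seen[p] - first_seen[p] > max_dwell}

# A car read at 9 am and again at noon gets flagged; a one-off read does not.
reads = [
    PlateRead("7ABC123", datetime(2017, 12, 1, 9, 0)),
    PlateRead("7ABC123", datetime(2017, 12, 1, 12, 0)),
    PlateRead("8XYZ999", datetime(2017, 12, 1, 9, 30)),
]
print(flag_long_parkers(reads))  # {'7ABC123'}
```

The point of the sketch is that nothing about the robot’s judgment need be exotic: a rolling camera, a timestamped log, and a threshold are enough to single a car out.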
None of these machines are equipped with tasers or flamethrowers or anything like that. “This is not for enforcement,” says William Santana Li, chairman and CEO of Knightscope. “It's for monitoring and giving an understanding of the situation for those humans to do their jobs much more effectively.” Again, the SPCA’s robot wasn’t meant to replace humans but to supplement them.
“Very simply,” Li adds, “if I put a marked law enforcement vehicle in front of your home or your office, criminal behavior changes.”
So does other behavior, it turns out. After the SPCA’s Knightscope set out on its route, some of the neighborhood’s homeless residents took exception. A group of people setting up camp allegedly threw a tarp over the robot, knocked it over, and smeared barbecue sauce on its sensors.
Now, by this point you probably don’t recoil when you see a security camera, much less throw rocks at it; for better or worse, we’re all under surveillance in public. But the K5 just feels different, and it elicits different reactions. In a shopping mall, the robot seems unassuming, even vaguely endearing. Kids run up and hug it. Outdoors, though, it’s a roaming embodiment of surveillance, recording video of everything around it. That’s particularly unsettling to people who make the outdoors their home.
“Keep in mind, this concept of privacy in a public area is a little bit odd,” says Li. “You have no expectation of privacy in a public area where all these machines are operating.”
Still, a camera on a wall is one thing. A giant camera that roams the streets of San Francisco is another. “When you’re living outdoors, the lack of privacy is really dehumanizing after a while, where the public’s eyes are always on you,” says Jennifer Friedenbach, executive director of San Francisco’s Coalition on Homelessness. “It’s really kind of a relief when nighttime comes, when you can just be without a lot of people around. And then there’s this robot cruising around recording you.”
After the San Francisco Business Times published a piece on the SPCA’s foray into security robotics, a public outcry erupted over accusations that the organization was using the robot to discourage homeless people from settling on the sidewalks around its facility. The SF SPCA denies any anti-homeless intent. “The SF SPCA was exploring the use of a robot to prevent additional burglaries at our facility and to deter other crimes that frequently occur on our campus—like car break-ins, harassment, vandalism, and graffiti—not to disrupt homeless people,” said the group’s president, Jennifer Scarlett, in a statement.
Nevertheless, the group discontinued its pilot program with Knightscope last week. Deploying robots in a mall is fairly innocuous, but in a more sensitive setting like this one, the ethical conundrums of human-robot interaction clearly got out of hand quickly.
If you think the ethics of security robots are murky now, just you wait. Knightscope wants to keep humans in the loop with its robots, but it’s not hard to imagine a day when someone gets the bright idea to give security machines far more autonomy: have AI-powered robots recognize faces and look for patterns in crimes, then patrol this area preferentially at this time of day, for instance, because a suspicious group of people tends to come around.
Algorithms are already forming biases. In 2016, an investigation by ProPublica revealed that software used to assess criminal risk was biased against black defendants. Now imagine a security robot loaded with algorithms that profile people. It’s especially troubling considering that the engineers developing artificial intelligence don’t necessarily know how their algorithms are learning. “There should be not only a human at the end of the loop, but a human at the beginning, when you’re learning the data,” says computer scientist Michael Anderson of the Machine Ethics program.
Really, what robot makers will need are ethicists working alongside engineers as they develop these kinds of systems. “Engineers aren’t necessarily able to see the ramifications of what they’re doing,” says ethicist Susan Anderson, also of the Machine Ethics program. “They’re so focused on how it can do this, it can do that.”
Could a robot at some point help an organization like the SPCA? Yeah, maybe. These are the early days of human-robot interaction, after all, and humans have as much to learn from the robots as the robots have to learn from us. Maybe there are ways to go about it without rolling over somebody’s toes.