Apps and Ethics

A trusted friend* just alerted me to the existence of an app - Radar - designed to alert you if social media accounts start showing signs that your friends are in distress. The app is intended to help friends help friends in need. It was launched by Samaritans, a suicide crisis line in the UK.

But it has set off a (rightful) alarm about surveillance, privacy, and algorithmic alerts. To work, the app needs to constantly monitor all your accounts, infer emotions from content, and alert you if someone you follow is determined to be "in need." Problems abound - let's look at a few:

  1. Not everyone who might follow you is necessarily your "friend." Many are probably bots. Worse, some may be stalkers.
  2. Algorithmic determination of emotional states? No question there - the risk of false positives and false negatives seems rather high. The app notes on its own website that it's in beta - "and won't get it right every time." Suicidal ideation, on social media platforms full of trolls and troublemakers, hardly seems like the place to take that chance.
  3. Constant monitoring of all the accounts you follow - meaning that consent is never asked of those whose accounts it's reading. And the app is storing data - does it need to?
I'm sure the app is well-intentioned. But practices around privacy and consent are precisely the issues that civil society organizations need to get right. This one seems to get them wrong.

*Thanks, Ben!
