The time has come for me to write my Samaritans Radar blog post. Others have already said much about it, and I don't think I need to repeat that here. The moment I heard about this new offering from suicide prevention charity Samaritans, I knew it was a terrible idea. Samaritans Radar is an app that Twitter users can sign up for, which lets them become "good Samaritans" by monitoring the tweets in their timeline from the accounts they follow, scanning for key words and phrases that suggest the tweeter may be suicidal, and then emailing the subscriber alerts about flagged tweets.
This is done without the knowledge or consent of any of those tweeters. In fact, Samaritans make it clear that they protect the privacy of the app's subscribers and will not reveal to the people being monitored that a follower is watching them.
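To make the mechanics concrete, here is a minimal sketch of how keyword-based flagging of a timeline might work. The trigger phrases and the `notify` callback are my own illustrative assumptions; Samaritans have not published their matching logic.

```python
# A minimal sketch of keyword-based tweet flagging. The phrase list and
# the alert mechanism are illustrative assumptions, not the actual
# Samaritans Radar implementation, which has not been published.

TRIGGER_PHRASES = [
    "want to die",
    "hate myself",
    "no reason to live",
]

def flag_tweet(text: str) -> bool:
    """Return True if the tweet text contains any trigger phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

def scan_timeline(tweets: list, notify) -> None:
    """Scan tweets from followed accounts; alert the subscriber on a hit."""
    for tweet in tweets:
        if flag_tweet(tweet["text"]):
            notify(tweet)  # e.g. email the subscriber -- not the tweeter

# Naive substring matching is exactly how false positives arise:
print(flag_tweet("I want to die my hair purple"))  # True, but harmless
```

Note the last line: any matching of this kind over public tweets will generate false positives, a point that matters later when we come to the "vital interest" defence.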
The decision to deploy an app like this, which focuses on the rights and choices of the bystander rather than those of the person potentially in need, is a baffling one. The Samaritans service typically involves a distressed person phoning their hotline to speak confidentially to a trained volunteer, who tries to talk them down from the ledge, so to speak. The Samaritans have built up a deep reserve of trust as experts in suicide prevention, and it is well deserved.
But with their latest offering, they have sadly lost sight of their mission and jeopardized their hard-earned status. The problem is that this amounts to surveillance of, among others, the mentally ill. Mental health sufferers and privacy advocates have been up in arms since the announcement of the service last week. The response from Samaritans to the criticism has been the most disturbing aspect of this tale.
At first the Samaritans Radar spokespeople dismissed privacy concerns as coming from users who do not understand how Twitter works, noting that anything their service flags up is publicly available on Twitter anyway, and that any follower would have been able to see those public tweets. They just might not have been paying attention at the time.
When users tried to explain that privacy is more complicated than that, Samaritans stubbornly stuck to their guns and downplayed the negative feedback as the mutterings of a handful of privacy extremists. They did make one concession: an opt-out for users who send them a message requesting it. Frankly, an opt-out is the bare minimum that should have been offered from the very start, and it failed to quell the criticism.
One week on, Samaritans have taken legal advice and assert that the Samaritans Radar service operates within United Kingdom law, particularly with respect to the issues raised under the Data Protection Act 1998. The position the charity has communicated is astounding. Samaritans believe they are not bound by the Data Protection Act because they are neither a data controller nor a data processor (who, then, is?). And if they were deemed to be a data controller, they believe they would satisfy the "vital interest" exemption, allowing them to process sensitive personal information without consent. Information rights practitioner Jon Baines demolishes that position on his Information Rights and Wrongs blog.
Perhaps even more astounding is that Samaritans are taking legal advice and continuing to defend their controversial new service in the face of very sharp criticism from some of the very people they claim to be trying to help. Many mental health sufferers who have turned to Samaritans in times of need have stated that they no longer trust the charity with the information they might provide. Others no longer feel safe speaking candidly on Twitter about their conditions. Some have already left, abandoning what might have been a vital outlet for them.
Technology journalist Adrian Short started a petition to have the app shut down. At the time of writing it has nearly 1,200 signatures after four days. Short has indicated that he is considering a lawsuit against the charity if they continue to refuse to budge. I cannot imagine how Samaritans will justify to their patrons and donors the risk and expense of going to court over this untested and unproven offering.
I had a lengthy dialogue today with a Samaritans volunteer who had tweeted that he couldn't understand the bad reaction, and that the app would be worth it if it saved even one life. I think I managed to persuade him that there are many problems with the app that would be best solved by pulling it until it can be reworked. The feeling on the street is that the app is causing real harm by its mere existence. I laid out the following points:
- Monitoring people with mental health issues for warning signs tends to make them feel more exposed, and therefore worse.
- Many forms of mental illness are accompanied by paranoia. The mere confirmed existence of surveillance causes anxiety.
- Not all interventions are helpful. A sufferer should be able to choose an intervener they trust; this app does not respect that choice.
- There are people on Twitter who enjoy tormenting mental health sufferers. This app is a gift to them.
- It feels like the app is making judgments about people. Suicidal people are not helped by being judged, and that's not what the Samaritans do.
That's a start. There are many other issues, but I don't want this to run too long. We agreed that there appears to be some inexperience with the medium among the decision-makers at Samaritans, and that they may have rushed into this solution without sufficient understanding. A similar offering deployed into the Facebook ecosystem seems to have been much more successful.
Perhaps the people behind the Twitter offering failed to appreciate that Facebook's trust model is very different. There is a fairly clear concept of friendship in Facebook. That is the foundation of its relationships. On Twitter, the relationship between an account and its followers is not at all clear. If you were going to try to emulate friendship, the best approach would be to add a constraint of mutual following, but even this falls short.
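A mutual-follow check is at least easy to express. The `follows` mapping below is a hypothetical stand-in for data the Twitter API could provide, not a real API call:

```python
# A sketch of the mutual-following constraint as a proxy for friendship.
# The `follows` mapping is a hypothetical stand-in for follower data the
# Twitter API could supply; no real API calls are made here.

def is_mutual(a: str, b: str, follows: dict) -> bool:
    """True only when each account follows the other."""
    return b in follows.get(a, set()) and a in follows.get(b, set())

follows = {
    "alice": {"bob", "carol"},  # alice follows bob and carol
    "bob":   {"alice"},         # bob follows alice back
    "carol": set(),             # carol follows no one back
}

print(is_mutual("alice", "bob", follows))    # True: closer to "friendship"
print(is_mutual("alice", "carol", follows))  # False: a mere follower
```

Even this filter falls short, as noted above: mutual follows on Twitter routinely exist between complete strangers.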
By simply allowing any follower to act as a potential intervener, Samaritans have violated the trust of users, who have no real control over who might receive alerts, short of locking their accounts and dismissing all of their followers. That would be rather pointless.
There is also the question of whether this is even legal. Information law experts tend to agree that it runs afoul of the Data Protection Act. Reading paragraphs 56 and 69 of the ICO guidance on the processing of sensitive personal data together, it is clear that Samaritans cannot claim to have obtained consent from users (and indeed they have not claimed this), and it is hard to conclude that they would not need that consent (which is exactly what they have claimed).
It is therefore unsurprising that they have put forward the legal position they have. Their best bet for getting off the hook is to claim not to be bound by the Act at all (as they have done), but they have not published the advice they were given, and the view is frankly unsupportable. And if they are not the data controller, then who is? The subscribers, perhaps? They might like to know they may be breaking the law.
If that argument fails, then Samaritans suggest that they (and, I suppose, their subscribers) are exempt by reason of vital interest. It's really all they've got, but it's extremely weak. As Jon Baines points out, the app is delivering a large number of false positives, and there is simply no basis to conclude that untrained people in receipt of these alerts would have any net positive effect on a person's health prospects. You could say this is a matter of life or death. All the more reason to proceed carefully, considering that a random stranger, even one with the purest intentions, could make the situation a whole lot worse.
Conclusion: if the Samaritans wind up in court they will surely lose, and for what? They seem willing to sacrifice their hard-earned reputation for the theoretical possibility of saving one person's life. It is naive and foolish. The charity's attitude this past week has been arrogant and condescending. They are refusing to listen to the concerns of their potential clients, many of whom have turned their backs on what was once a trusted ally. I do hope the Samaritans come to their senses before irreparable harm is done, either to themselves or, worse, to someone who is truly vulnerable.
EDIT 2014-11-07 10:20:
There's something I forgot to say in my original post. An interesting issue has been highlighted around the muting of Twitter accounts. Twitter offers a Mute feature similar to its Block feature, but with some key differences:
- An account that is muted is not made aware of this in any way. It is still able to follow the account that performed the mute and to see that account's public tweets in its timeline.
- A muted account can be followed back by the account that performed the mute, in which case its mentions will still be seen. Otherwise, no interaction from the muted account is visible to the account that performed the mute.
Because muting is meant to be invisible to the muted account, the API does not expose this fact. A muted account would therefore receive alerts like any other follower. I can't imagine anyone being comfortable with this. Furthermore, a muted account that is not followed back has no way to intervene directly. Mute was introduced by Twitter as a compromise after it unilaterally changed the way blocking worked so that it would be silent to the blocked account, an effort to calm the blow-ups and pile-ons that tend to happen when people react to being blocked.
I couldn't understand the negative reaction at the time, but now it makes perfect sense. People who had been stalked in the past felt aggrieved that they would no longer be able to stop people from following their public accounts or appearing to interact with them. Twitter got that horribly wrong, and fortunately they relented. Had that not happened, though, Twitter users would now have no way to block other users from scanning their tweets with Samaritans Radar, and the original offering had no opt-out. This just goes to show how little thought was put into the risks of the chosen model.
Someone on Twitter who is currently unable to comment had some questions for me regarding the points I made about muted accounts. We seem to agree, but she felt that I could have made it clearer. I will try.
Mute can be used as a sort of soft block when you don't want someone to get riled up about the fact that you are now free to ignore them. For this to work, the muted account must have no idea that it has happened (though they might suspect it). Nothing visible to the muted account anywhere on Twitter will tell them they have definitely been muted, and a Twitter API session authorised with the muted account's credentials will not reveal the muted state either. So the Samaritans app will certainly send alerts if that account is both a follower and a subscriber and a tweet from the account that muted them is caught by the app.
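Here is a sketch of that structural dead end. The alert loop is hypothetical, but the constraint it illustrates is real: the author's mute list is visible only to the author, never to the app or to its subscribers, so there is nothing to filter on.

```python
# A sketch of why mutes offer no protection here. The dispatch loop is a
# hypothetical stand-in for the app's alerting, but the constraint is
# real: an author's mute list is private to the author, so neither the
# app nor a subscriber can query it.

def dispatch_alerts(flagged_tweet: dict, subscribed_followers: list,
                    send_email) -> None:
    """Email every subscribed follower of the flagged tweet's author."""
    for follower in subscribed_followers:
        # No API call made with the follower's (or the app's) credentials
        # can answer "has the author muted this follower?", so a mute
        # check simply cannot be written at this point in the loop.
        send_email(follower, flagged_tweet)
```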
The problem with this should be fairly obvious. If I'm the sort of person who doesn't like to block (I am, in fact), then I will tend to use Mute as a way to make people appear to go away. This gives a false sense of security. I know they can still follow me and, even if not following, can view my public tweets from within their logged-in context. Generally I don't care. I certainly do care knowing that such a person could now receive email alerts about my tweets that have been identified as potentially suicidal. Muting someone I do not follow is a clear indicator that I want nothing to do with them. I do not want them in a position of trying to intervene in my mental health; quite the opposite. They would not be able to intervene directly anyway, so it's pointless. They could, however, be disruptive in other ways, like sharing the alert around. This is not a friend in any sense, and no one will be happy at the thought of making it easier for such a person to be a menace, or to collect signs of vulnerability like trophies. I hope this clarifies anything that might not have been clear already.