
Trolling & catfishing: The social media fairy tale with no happy ending

The ease with which channels like Instagram, Facebook and Twitter bring people together is extraordinary, and should be celebrated. However, we cannot ignore the darkness that lurks beneath the surface of the online world, particularly with regard to social networks. Social media isn’t always a fairy tale, and the veil of anonymity that such channels provide can easily be used for evil, rather than good.

Lady Gaga was recently quoted as saying that social media is “the toilet of the internet.” Of course, this scathing review disregards the many benefits of social networks, but it does adequately sum up how many users feel about what can, and does, happen within the confines of these sites.

The potential for misuse of online platforms like social networks is huge. It is therefore essential that organizations operating within this space have broad regulatory cover in an evolving legal landscape. Affirmative cover for user-generated content is also key, as are policies covering emotional distress or bodily injury.

The digital world is evolving all the time, and with new changes come new pitfalls. Trolling and catfishing are just two examples of the ways in which social media anonymity is being exploited, causing serious harm to those who fall victim and significant damage to the online platforms used to facilitate the abuse.

Companies, of course, have a moral duty to stamp out such behavior, but their responsibilities extend further than that. Inaction over online harassment may equate to negligence in the eyes of the law, and this is something that tech companies need to be aware of.

Policies regarding digital responsibilities are now integral components in business insurance coverage, particularly in the tech industry. With issues such as trolling and catfishing becoming increasingly prevalent, comprehensive protection must be tailored to the exposures of social media organizations.

What is trolling?

A troll is a person who exploits the anonymity that the internet provides to share inflammatory, abusive remarks about specific people or groups. It’s a type of online bullying that has become increasingly prevalent with the advent of social networks.

Several high-profile cases have brought the devastating consequences of online bullying to the attention of the global press. In 2014, model and reality television star Charlotte Dawson took her own life after a lengthy and well-publicized battle with online trolls. One night of particularly bad abuse led to her being hospitalized before her suicide. She said, “It just triggered that feeling of helplessness when the trolls got to me. They got the better of me and they won.”

In 2017, 14-year-old schoolgirl Molly Russell also took her own life. In the following days and weeks, her family discovered distressing material about depression and suicide on her Instagram account. Jackie Doyle-Price, the Parliamentary Under-Secretary of State for Mental Health, Inequalities and Suicide Prevention, later said that harmful suicide and self-harm content online “has the effect of grooming people to take their own lives.”

What is catfishing?

A type of deception made possible by anonymity on the internet, catfishing is a targeted campaign of duplicity. To ‘catfish’ another person, a perpetrator creates a fake social networking presence and designs an entire faux identity to be used online. Often, it is used to target vulnerable people for financial gain. It can also be used as a form of online trolling.

A New York man who fell victim to catfishing harassment at the hands of his ex-boyfriend is now suing tech company Grindr for its part in the abuse. Matthew Herrick endured months of harassment, with fake profiles appearing on the network impersonating him and strange men being sent to his home and workplace.

Herrick filed 50 complaints with Grindr and 14 police reports, and even obtained a temporary restraining order, but it did not end the harassment. Herrick is now arguing that Grindr has violated product liability law. His lawsuit states, “This is a case about a company abdicating responsibility for a dangerous product it released into the stream of commerce. Grindr’s inaction enables the weaponization of its products and services.”

GDPR and the change in digital responsibilities

GDPR is a prime example of the way in which online platforms are coming under closer scrutiny than ever before. This legislation, which sets out concrete data protection and privacy requirements, has been seen as a conscious attack on large tech companies.

In light of the changes brought about by GDPR, social platforms must focus on protecting users’ privacy, and it’s possible that further laws may be introduced to protect users. All it takes is one claim against a platform for its reputation to be irrevocably damaged.

Large data collectors, such as social media sites, are likely to be most affected by GDPR. These sites may now face increased regulation, with governments taking a greater role in dictating what they must do. Take user-generated content, for example: it is now down to individual websites to monitor user-generated content on their platforms and remove it if necessary. It’s therefore essential that sites like these have the proper coverage to handle claims of negligence.

How are social networks combating online harassment?

Social networks like Facebook, Twitter and Instagram, and online dating sites like Grindr, have worked to reduce the impact of catfishing and trolling on their platforms. However, they have admitted that not enough has been done to protect users. The problem is, of course, a difficult one to overcome.

Twitter chief executive Jack Dorsey admitted the social media giant had not done enough to banish trolls from the site. Dorsey said, “We’ve made progress, but it has been scattered and not felt enough. Changing the experience hasn’t been meaningful enough. And we’ve put most of the burden on the victims of abuse (that’s a huge fail).”

Users publishing abusive content can easily be banned and blocked, but this doesn’t stop perpetrators from simply creating another fake profile to continue their actions. On dating sites, profiles often need to be linked to an active social media account; however, this too can easily be circumvented by those intent on using the platforms in a negative way.

Some platforms are making changes to tackle abuse and remove harmful content. Instagram, for example, banned images of self-harm after the site received widespread condemnation following the death of Molly Russell. These are welcome changes; however, such reforms by social networks tend to come about only as a result of public lobbying in the wake of a high-profile case. Much more needs to be done.

The categories of negligence are never closed. Tech companies need to be one step ahead in terms of protecting users against trolling, catfishing and other types of online harassment, whilst also ensuring that they themselves have adequate cover should an incident arise.

Source: www.cfcunderwriting.com

 
