Recently there have been a number of incidents that have highlighted a moral, ethical and, in some cases, legal debate about social media and who is responsible for the content on these platforms.
Briefly, Instagram is under fire for being a platform where it is trivial to find information and "guidance" on various forms of self-harm, eating disorders and even suicide. Several high-profile events have shown that young people especially are finding what they see as support groups of like-minded people, which then normalises their feelings and actions rather than steering them towards the help they need.
Facebook has once again come under criticism for the woeful monitoring and moderation of the content on its network (and don't forget they own Instagram, so they are equally responsible for the criticisms levelled there). In the wake of the recent terrorist attack in New Zealand, it quickly became apparent that one of the attacker's main motivations was to spread footage of their actions as widely as possible across various outlets, with Facebook as the originating platform.
These actions have raised several questions which on the face of it are trivial to answer, but which on closer inspection raise all sorts of issues about who ultimately holds the responsibility to moderate content online. Should governments legislate? Should ISPs block? Should the platforms police themselves? Or does the responsibility fall to us, the users?
Australia and New Zealand were in a difficult position - they clearly had terrorist attacks happening in, or linked to, their countries, and the attacker had posted their "manifesto" on social media before live-streaming the attack on Facebook. Both governments called for "more to be done" to prevent this kind of thing happening - which anyone would conclude is entirely reasonable. However, it's also obvious that they're fighting a battle they can't win.
Some ISPs, in Australia especially, have taken the decision to block certain websites that hosted copies of the attack video and failed to remove it. This is likely to head off government action or bills being passed that would require further, more sweeping changes. Interestingly, some of the blocked sites have protested their innocence and their annoyance that ISPs have blocked them. Clicks = cash, so less traffic is hurting their business.
They're also a little miffed because Facebook has faced no action from governments (as yet), because it "took swift and serious action." The cynic in you might point out that the reason ISPs haven't blocked Facebook in the same way they blocked other sites is that Facebook forms a huge part of their traffic: if they blocked it, users would literally cancel their subscriptions and move to an ISP that didn't. Arguably, if you're going to put blocks in place, then Facebook has to be blocked - this is where the attack started, was publicly planned and then carried out.
Sadly, then, it seems the frustration most governments feel is with the fact that some companies, such as Facebook (and Instagram, which it owns), Google and so forth, are simply "too big" to be tackled. Sajid Javid, the current UK Home Secretary, repeatedly makes public statements that social media companies must be more responsible or take more preventative measures or "face action." He knows full well, or will quickly find out, that no matter how much he would like to make changes, he is largely powerless - the internet is not under any one country's control.
So then we move our attention to those social media companies. It is fairly clear that governments can only introduce rudimentary measures, and only within their own country or jurisdiction, and even these can be easily bypassed by most VPN or proxy services. Also, the people we most want to protect - the young - are by far the most switched on to technology, and they are all capable of bypassing measures either themselves or by following methods passed on by word of mouth.
Therefore, it is not unreasonable to suggest that companies like Facebook should take the lead when it comes to finding, removing or blocking this kind of content altogether. Let's establish some facts: they are the central point, they are the only organisation with access to and control over all this content, and most important of all, they are absolutely capable of implementing measures that would either eliminate or vastly reduce this kind of content.
If they want to.
And it's a big "if", isn't it? These companies don't charge users to use the service, they see an unimaginable amount of content uploaded to their platforms on an hourly basis, and they make all their profit from advertising. Advertisers do not care about morals; they care about reaching their target audience as intrusively and repeatedly as possible. Social media has given them these opportunities in a way never possible before in history - this is why social media platforms like Facebook are worth hundreds of billions.
The sad fact is that Facebook is a business. A business exists for one single core purpose: to make profit. To think that it has a moral obligation is to pull the wool over our own eyes - it absolutely does not. Given the choice between an awkward press release with some soothing words to various parties and actually making changes for moral reasons, they'll choose the press release every time. Any changes Facebook made that would make an impact in the real world would result in fewer users on the platform. People don't like the idea of "censorship" and are used to an always-on, instant-upload culture; any delay in content appearing on the platform would give a "second-rate experience." To a company like Facebook this is unacceptable. User count is everything, and fewer users = less profit. They want all of your data and they want more of it. The more they have, the more they're worth.
So yes, if they wanted, Facebook could easily mount a multi-pronged attack that would have a drastic effect. They could easily afford to employ hundreds of moderators in different countries to review flagged content. They could apply their machine learning more aggressively to analyse content and highlight more of it for review. They could tighten their guidelines and refuse to allow hate speech on their platform, or refuse groups advocating certain right-wing viewpoints. They could follow through on the terms and conditions all users agree to and remove more accounts, or even bring action against particularly offensive users. The list is almost endless - but all of it comes at a cost.
Did they do anything? On paper, their response looks good. They did immediately remove the video of the attack - but only once it was reported. Remember, before the attack took place, the manifesto - a document of hate speech - was already hosted by them without raising any alarm, and automated text and language analysis is mature enough that they certainly have the technology to flag it. Then came a free-for-all: users uploaded copies of the content, which Facebook had to chase down and remove, or rely on user reports to find. Users took measures to obfuscate the video so that it no longer matched a hash created from the original stream. From the outside, it looks like they did a lot to prevent the spread. In reality, they did what they had to do to minimise the poor publicity of what had happened on their platform.
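The obfuscation point is worth unpacking. The simplest way to block re-uploads is to keep a blocklist of exact (cryptographic) hashes of known files, but even a one-byte change produces a completely different hash - and re-encoding, cropping or watermarking a video changes far more than one byte. A minimal Python sketch (using stand-in bytes rather than real video data, purely for illustration) shows why trivially modified copies slip past such a blocklist:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the original video stream (illustrative bytes, not real video).
original = b"original video stream bytes" * 1000

# A trivially modified copy: a single extra byte appended.
# A real re-encode or crop would change far more of the file.
modified = original + b"\x00"

blocklist = {sha256_hex(original)}

# The untouched copy is caught; the modified copy sails through.
print(sha256_hex(original) in blocklist)   # True
print(sha256_hex(modified) in blocklist)   # False
```

Catching modified copies instead requires perceptual hashing or fingerprinting techniques that tolerate small changes to the content - exactly the kind of heavier machinery a platform of Facebook's resources could deploy if it chose to.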
Will anything change in future? The answer is no, because the only thing that could push Facebook to change is users closing their accounts. If users started to leave en masse, then they would begin to listen - they made positive noises when users left over the Cambridge Analytica scandal. The unfortunate truth is that millions of users simply don't care; they want to log in, aimlessly read terrible memes and look at pictures of pets and dinners before moving on.
It is, then, perhaps we the users who hold the ultimate responsibility here. We are responsible for not posting offensive content, and we also have the absolute power to force change if we want it. If you genuinely think that a platform hosting a live-streamed terrorist attack and then taking only token action afterwards is socially unacceptable, then maybe there is a moral obligation on us to no longer associate ourselves with that platform.
Morals and ethics are an endless conversation.