Learn IT with Davo

Whose responsibility is it, anyway?

25/3/2019

Recently there have been a number of incidents in the world that have highlighted an interesting moral, ethical and in some cases legal debate about social media and who is responsible for the content on these platforms.

Briefly, Instagram is under fire for being a platform where it is trivial to find information and "guidance" on various forms of self-harm, eating disorders and even suicide. Several high-profile events have shown that young people especially are finding what they see as support groups of like-minded people, which then normalises their feelings and actions rather than supporting them in the ways they need to get help.

Facebook has once again come under criticism for the woeful monitoring and moderation of the content on its network (and don't forget it owns Instagram, so the criticisms levelled there apply to Facebook too). In the wake of the recent terrorist attack in New Zealand, it quickly became apparent that one of the attacker's main motivations was to spread their actions as far as possible using various outlets, Facebook being the originating platform.

These events have raised several questions which on the face of it are trivial to answer, but which on closer inspection expose a much murkier debate about who ultimately holds the responsibility to moderate content online. Questions such as:
  • If content posted online is offensive or inappropriate - who is guilty of any crime? Is it the person posting the content, is it the platform for publishing the content or both?
  • Where is the line between the human rights of freedom of speech and freedom of expression and the need to create a moderate, inoffensive community through various forms of moderation? When is it OK to remove user posts?
  • Who is responsible for moderation? Is it the social media platforms or the users themselves?
  • How do you stop content from being spread, copied and re-uploaded when you're trying to remove it from social media?
The internet is notoriously difficult to moderate and in some ways totally impossible. This is by design - the network was intended to be self-healing, re-routing itself around broken connections with multiple levels of redundancy. The world wide web is the same - mirroring services ensure that content is replicated and never lost. Once something is "out there" the ship has already sailed; you're fighting a losing battle to try and remove it. Remember that the network is also global - there is nothing that says the laws of your country apply in another, and usually they don't.

Australia and New Zealand were in a difficult position - they clearly had terrorist attacks happening in, or linked to, their countries, and the attacker had posted their "manifesto" on social media before live-streaming the attacks on Facebook. Both governments called for "more to be done" to prevent this kind of thing happening - which anyone would conclude is entirely reasonable. However, it's also obvious that they're fighting a battle they can't win.

Some ISPs in Australia especially have taken the decision to block certain websites that hosted copies of the attack video and failed to remove it. This is likely an attempt to head off government action or bills being passed that would require further and more sweeping changes. Interestingly, some of the blocked sites have protested their innocence and their annoyance at being blocked. Clicks = cash, so less traffic is hurting their business.

They're also a little miffed because Facebook has faced no action from governments (as yet) because it "took swift and serious actions." The cynic in you might point out that the reason ISPs haven't blocked Facebook in the same way they blocked other sites is that Facebook forms a huge part of their traffic, and if they blocked it, users would literally cancel their subscriptions and move to an ISP that didn't. Arguably, if you're going to put blocks in place, then Facebook has to be blocked - this is where the attack started, was publicly planned and then carried out.

Sadly, then, it seems the frustration most governments feel is that some companies such as Facebook (and Instagram, which it owns), Google and so forth are simply "too big" to be tackled. Sajid Javid, the current UK Home Secretary, repeatedly makes public statements that social media companies must be more responsible, take more preventative measures, or "face action." He knows full well, or will quickly find out, that no matter how much he would like to make changes, he is largely powerless - the internet is not under any one country's control.

So then we move our attention to those social media companies. It is fairly clear that governments can only introduce rudimentary measures, only within their own country or jurisdiction, and even then these can easily be bypassed by most VPN/proxy services. Also, the people we are trying hardest to protect - the young - are by far the most switched on to technology, and they are all capable of bypassing measures either themselves or by following methods passed on by word of mouth.

Therefore, it is not unreasonable to suggest that companies like Facebook should take the lead when it comes to finding, removing or blocking this kind of content altogether. Let's establish some facts: they are the central point, they are the only organisation with access to, and control over, all this content, and most important of all, they are absolutely capable of implementing measures that would either eliminate or vastly reduce this kind of content.

If they want to.

And it's a big "if", isn't it? These companies don't charge users to use the service, they see an unimaginable amount of content uploaded to their platforms every hour, and they make all their profit from advertising. Advertisers do not care about morals; they care about reaching their target audience as intrusively and repeatedly as possible. Social media has given them these opportunities in a way never possible before in history - this is why social media platforms like Facebook are worth hundreds of billions.

The sad fact is that Facebook is a business. Businesses exist for one single core purpose - to make profit. To think that they have a moral obligation is to pull the wool over our own eyes - they absolutely do not. Given the choice between an awkward press release with some soothing words to various parties, and actually making changes for moral reasons, they'll choose the press release every time. Any change Facebook made that would have an impact in the real world would result in fewer users on the platform. People don't like the idea of "censorship" and are used to an always-on, instant-upload culture; any delay in content appearing on the platform would give a "second rate experience." To a company like Facebook this is unacceptable. User count is everything, and fewer users = less profit. They want all of your data and they want more of it. The more they have, the more they're worth.

So yes, if they wanted, Facebook could easily bring in a multi-pronged attack that would have a drastic effect. They could easily afford to employ hundreds of moderators in different countries to review flagged content. They could use their machine learning to analyse content more aggressively and highlight more of it for review. They could tighten their guidelines and refuse to allow hate speech on their platform, or refuse groups advocating certain right-wing viewpoints. They could follow through on the terms and conditions all users agree to and remove more accounts, or even bring action against particularly offensive users. The list is almost endless - but all of it comes at a cost.

Did they do anything? On paper, their response looks really good. They did immediately remove the video of the attack - but only once it was reported. Remember, before the attack took place, the manifesto, pages of hate speech, was already hosted by them without raising any alarm, and computers are genuinely excellent at text and language analysis; they certainly have the technology to flag this. Then followed a free-for-all: users re-uploading the content, which Facebook then had to chase down and remove, or rely on user reports to find. Users took measures to obfuscate the video so that it didn't match a hash created of the original stream. From the outside, it looks like they did a lot to prevent the spread. In reality, they did what they had to do to minimise the poor publicity of what had happened on their platform.
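That hash-matching weakness is easy to demonstrate. Here's a minimal Python sketch (illustrative only - Facebook's real systems are not public, and the byte strings below are stand-ins for video data): with an exact cryptographic hash such as SHA-256, changing even a single byte of a file produces a completely different digest, so a blocklist of known hashes misses every re-encoded, cropped or watermarked re-upload.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for the original upload and a slightly altered copy.
original = b"...bytes of the original stream..."
modified = original + b"\x00"  # one byte appended; a real re-encode changes far more

# The two digests bear no resemblance to each other, so comparing
# against a blocklist of exact hashes fails on any modified copy.
print(sha256_digest(original) == sha256_digest(modified))  # False
```

This is exactly why platforms are pushed towards perceptual hashing (fingerprints designed to survive small edits) rather than exact matching - though that, too, can be defeated with enough deliberate obfuscation.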

Will anything change in future? The answer is no, because the only thing that could push Facebook to change is its users closing their accounts. If users started to leave en masse, then it would begin to listen - it made positive noises when users left over the Cambridge Analytica scandal. The unfortunate truth is that millions of users simply don't care; they want to log in, aimlessly read terrible memes and look at pictures of pets and dinners before moving on.

It is, then, perhaps us as users who hold the ultimate responsibility here. We are responsible for not posting offensive content, and we also have the absolute power to force change if we want it. If you genuinely think that a platform hosting a live terrorist attack and then taking zero action afterwards is socially unacceptable, then maybe there is a moral obligation on us to no longer associate ourselves with that platform.

Morals and ethics are an endless conversation.
