Learn IT with Davo

Hacking your brain

15/3/2016

A thought occurred to me after I wrote yesterday's post about neural networks and machine learning, and it is this:

If you recreate a brain using computer hardware and software, you have several problems:
  • You can very easily "genetically engineer" yourself - it would literally be a change of a few lines of code
  • Your brain is vulnerable to security issues/hacking
  • You could be turned off...!

This raises a whole world of legal, moral and ethical issues to which I don't believe there are any perfect answers. Just a few initial thoughts from that list above make things even more complicated...

You would (will) be living in a society where technology and AI have finally evolved to the point where machines are capable of making decisions. Indeed, as most things will be automated, they will be machine-controlled. The advantages of having AI in control of machinery are numerous, so it's likely that even menial machines will have some form of intelligence. As an aside, if you haven't watched Red Dwarf, you really need to see the episode where Lister meets Talkie Toaster:
[Image: Toast? What about a bagel? Ahhh... you're a waffle man!]
Imagine if those awful self-service checkouts actually had some intelligence and you could talk to them. This kind of technology is coming: the beginnings are already available today, and history tells us that these things only get better and better with refinement over time. Things like Siri voice control will be as normal and ubiquitous as handles on doors.

So, if our machines can think, they can also very easily turn you off, and if you've been turned off, you're no longer in control.

I am fascinated by the nuances of our personalities, how our minds work and what makes us... us! Because we don't truly understand how we become who we are, we cannot predict whether a machine with the same neural capability will develop skills such as empathy, understanding, kindness and so on. Maybe machines will possess all the intelligence but no feelings whatsoever and make cold, calculated decisions.

I'm just off to see a man about a dog at Skynet...

The thought also crosses my mind that you would clearly have a situation where both humans and machine-humans exist side by side. Who has precedence? Do you have equality? What if the humans simply... switch you off! Conversely and presumably, a machine-human would be able to work 24/7 without the need for sleep. Does this mean that humans are out of a job because they have needs such as sleep?

What about reproduction? You can store as many human machines as you like. Who or what decides to reproduce a mechanical person? Would human machines have the same living requirements as us? Would they even need a body in the traditional sense of the word? Probably not - if you wanted to go somewhere you could just zip down a network.

Which makes me think again - if this is an exact copy of a human mind, then it will crave the ability to touch, taste, feel, love... What would happen if the machine gets depressed? A machine can surely do more damage than an individual ever could.

Really, the list of questions is never-ending.

Finally, the issue of security. At present we can safely assume it's pretty much impossible to hack someone's brain - we're fairly secure and, amazing advances in medicine and bio-engineering aside, I think we will be for quite some time to come. However, as soon as you make something digital it becomes vulnerable to attack. Can you imagine a botnet of people? Now that would be a problem...

Makes you think, doesn't it?

"I need your clothes, your boots and your motorcycle..."

14/3/2016

[Image: How to be Arnie, repeat after me: "Get out... Get in... Get off... It's not safe, get out."]
Ah, Arnie. Unlikely cult figure, star of so many top-quality movies. Renowned the world over for his gritty, show-stopping one-liners and Oscar-winning performances of a lifetime.

Perhaps not, but he is brilliant enough that someone made an entire programming language dedicated to him. Indeed, if you're any kind of self-respecting geek, you should be able to complete your coursework in this language - click here to get started. I'll worry about explaining the syntax to the exam board later, after I've got over your sheer brilliance/audacity at following through on such an absurd idea.

Why am I rambling on about Arnie? Well, suddenly we find ourselves living in a time when the ideas portrayed in the Terminator movies are not so crazy after all. The idea that you could create a robot with superhuman intelligence that could take over the world and start World War 3 seemed rather far-fetched. Now you have people like Stephen Hawking genuinely having a bit of a panic, both here and here, because they know full well that recent developments point towards this being not just probable, but "OMG this is actually going to happen, what are we going to do?"

[Image: Just your average guy really, no need to worry. Oh, wait, he knows more about the universe than God? Best pack up and head to Mars, people!]
You may have seen in the news recently that Google built a really smart bit of software (AlphaGo) that can play the game Go. If you're not a board game aficionado, the basic gist is that players take it in turns to place black or white stones, stones that get completely surrounded are captured, and there are an astronomical number of possible moves and board positions. That makes Go a computationally difficult task: it isn't possible to analyse every possible move for its strengths and weaknesses in a reasonable time, so you have to create a program which uses a heuristic, or "Friday night, that'll do, lads", approach.
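
To give you a feel for what a heuristic approach actually looks like, here's a minimal sketch of one common technique, Monte Carlo playouts: instead of searching every line of play, you estimate how good a move is by playing lots of random games from the position it leads to and counting how often you win. This is my own toy example (not AlphaGo's code, which is vastly more sophisticated), and because Go is far too big to fit in a blog post it uses the much simpler game of Nim: players take turns removing one, two or three counters, and whoever takes the last counter wins.

import random

def random_playout(counters, my_turn):
    """Finish the game with random moves for both players; return True if 'I' win."""
    while True:
        counters -= random.randint(1, min(3, counters))
        if counters == 0:
            return my_turn            # whoever just moved took the last counter and wins
        my_turn = not my_turn

def choose_move(counters, playouts=5000):
    """Estimate each legal move with random playouts and pick the best scorer."""
    scores = {}
    for move in range(1, min(3, counters) + 1):
        remaining = counters - move
        if remaining == 0:
            return move               # taking the last counter wins outright
        wins = sum(random_playout(remaining, my_turn=False) for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

# From 10 counters the optimal move is to take 2 (leaving a multiple of four),
# and that move should usually come out with the highest estimated win rate.
print(choose_move(10))

The point isn't the answer, it's the approach: the program never "understands" Nim, it just notices which choices tend to end well - a "that'll do" answer rather than a perfect one.
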
[Image: They don't like it, sir! They do not like it!]
Computers really don't like anything other than concrete certainties, mainly down to the fact that they are binary devices! There are no grey areas when you live in a world of either 0 or 1 and nothing in between. That's why in programming you have to work on absolute truths. Even in an artificial intelligence system, the rules are currently still based on immutable truths, so you've got issues already and they only get worse when we consider the human condition...
[Image: "Would you like this gold bar I just found, dear?" "CHRIST MOM, I HATE YOU, WHY DO YOU HAVE TO RUIN MY LIFE!! CAN'T YOU SEE I'M BEING AN INDIVIDUAL HERE?! YOU JUST. DON'T. GET. IT!"]
You see, us humans are weird. Take this example:
"What do you fancy for dinner?"
"Er.. Toothpaste."
"Why?"
"Dunno, just fancy it."

If you had to think (and I mean really think) about the reasons behind your decisions, you'd have a hard time, wouldn't you? I mean, why do you fancy toothpaste for dinner? You might just like the taste, you might have nothing else in the house and be too lazy to shop, you might be pregnant, not realise it and have a craving for toothpaste. Or... you just want it.

None of these reasons work in computer land. They don't get grey areas like this. How do you program a computer to understand "just because", desire or whimsical actions? 

Back in the 70s, people thought computers would rapidly become more intelligent than humans. Wild ideas were formed in people's minds and fantastic prophecies about future computers were made. Then someone spoiled the party and actually sat down to try it out. They realised it was incredibly, incredibly hard to make a computer even remotely "human", gave up and went for a cup of tea instead.

To this day, there is still a prize on offer (the Loebner Prize, based on Turing's famous test) for the first truly human-behaving computer program.

But even so, there are some awesome developments, and they all come from the field of neural engineering and neural networks.

What's that then?

Basically, it's the idea of making a program which behaves like the neurons in your brain. Sounds good, right? Crash course in brain science:
  • Your brain contains billions of neurons (around 86 billion of them)
  • They can be connected together to form "pathways" down which electrical signals are fired
  • The more often a pathway is used, the stronger the link becomes (this is how long-term memories are formed, and the reason why you should ALL be reading your notes after lessons and revising frequently, not all at once)
  • If a pathway is never used, it can disperse and the neurons can make new connections
  • These connections form all of our memories/thoughts/consciousness etc.

So now you know. 

The program then spends a significant time "learning". If it does something wrong, it goes back and tries another way. Each time it's successful, the program strengthens that path and "learns" a skill or ability. This continues until it has learned enough to be useful. This kind of computer learning is relatively new - when I was at university, the lecturers were all excited about it, but it was so new they didn't actually teach us about it. Which was nice.
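
To make that loop concrete, here's a minimal sketch of the "strengthen the pathways that work" idea. It's my own toy example (not the software from the video below), and everything in it - the buttons, the reward, the numbers - is made up purely for illustration: the program has to learn, by trial and error, which of three buttons gets an imaginary character over a gap.

import random

BUTTONS = ["left", "right", "jump"]          # "jump" is secretly the correct answer
strength = {b: 1.0 for b in BUTTONS}         # every "pathway" starts equally strong

def pick_button():
    """Choose a button at random, biased towards the stronger pathways."""
    return random.choices(BUTTONS, weights=[strength[b] for b in BUTTONS])[0]

for attempt in range(200):                   # the "significant time spent learning"
    button = pick_button()
    succeeded = (button == "jump")           # the game only rewards the right move
    if succeeded:
        strength[button] += 0.5              # pathway worked: reinforce it
    else:
        strength[button] = max(0.1, strength[button] - 0.1)   # failed pathway fades

print(strength)   # the "jump" pathway should now be by far the strongest

It's crude, but it's the same shape as the real thing: try, fail, weaken; try, succeed, strengthen; repeat until the behaviour that works dominates.
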

Here's a fantastic example of machine learning in action: this guy made a program which learns to play Super Mario, and it really is impressive. Not only that, it will give you great insight into how the process works and the limitations of current work:
While you're at it, you should watch it learn to play Mario Kart too. Then you should all consider doing something similar for your coursework....
How good is that?!

Anyway, on to more important things... Who wants to die?

Seriously, who?

Well, not me, and it seems that a certain Russian guy and I have a lot in common, because if I had a spare hundred million pounds or so knocking about under the fridge, I too would pay lots of beardy scientists to come up with the technology to make me live forever. Until science works out the meaning of life, at least.

The BBC reports that Dmitry Itskov would rather like technology to come to the rescue and make him (and the rest of us, he's a generous sort) immortal.

The way he plans to do this is basically to create a copy of every neuron in his brain. This is nowhere near as nuts as it sounds, and if you look into the pace of development in technology, you'll realise that his 30-year timescale is also not so bonkers. We will almost certainly have the storage capacity and processing power necessary to emulate a human mind in 30 years (I've put some rough numbers after the list below). This still leaves a lot of questions to be answered, and some of them are a bit of a bugger...
  • Are you really living forever if your mind is in a machine?
  • Are you technically a clone, meaning the real you is still dead?
  • Is a machine that has all of the capabilities of a human mind "alive"? If so, will it have the same rights as you? If not, why not?
  • Will it be "conscious"? We still don't understand how our thoughts/mind/personality actually work. This will be a crazy one to find out.
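
As promised, some very rough numbers on the storage side of the claim above. These are commonly quoted ballpark figures plus my own assumptions (not anything from the BBC article): the brain has roughly 86 billion neurons and on the order of 100 trillion synapses, and the simplest possible "copy" would store one connection strength per synapse.

synapses = 1e14               # ≈ 100 trillion synapses (estimates range up to ~1e15)
bytes_per_synapse = 4         # assume one 4-byte number per connection strength

total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e12:.0f} TB")    # ≈ 400 TB

A few hundred terabytes (a few petabytes on the more generous estimates) is a lot, but large data centres already hold far more than that today - which is why the 30-year timescale isn't obviously crazy.
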

And this raises one final point, which is more relevant to you than to me (although please do bring me back from the dead when the technology works), and that is the fact that the young people of today will grow up in a society that has to answer these kinds of questions and more. You will be expected to make decisions that are harder than any that have gone before, decisions which literally impact the very definition of what being alive actually means.

Bloody hell.
