A thought occurred to me after I wrote yesterday's post about neural networks and machine learning, and it is this:
If you recreate a brain using computer hardware and software, you have several problems:
This raises a whole world of legal, moral and ethical issues to which I don't believe there are any perfect answers. Just a few initial thoughts from that list above make things even more complicated...
You would (will) be living in a society where technology and AI have finally evolved to a point where machines are capable of making decisions. Indeed, as most things will be automated, they will be machine controlled. The advantages of having AI in control of machinery are numerous, so it's likely that even menial machines will have some form of intelligence. As an aside, if you haven't watched Red Dwarf, you really need to see the episode where Lister meets Talkie Toaster:
Imagine if those awful self-service checkouts actually had some intelligence and you could talk to them. This kind of technology is coming: the beginnings are already available today, and history tells us that these things only get better and better with refinement over time. Things like Siri voice control will be as normal and ubiquitous as handles on doors.
So, if our machines can think, they can also very easily turn you off and if you've been turned off you're no longer in control.
I am fascinated by the nuances of our personalities, how our minds work and what makes us... us! Because we don't truly understand how we become who we are, we cannot predict whether a machine with the same neural capability will develop skills such as empathy, understanding, kindness and so on. Maybe, machines will possess all the intelligence but no feelings whatsoever and make cold, calculated decisions.
I'm just off to see a man about a dog at Skynet...
The thought also crosses my mind that you would clearly have a situation where humans and machine-humans exist side by side. Who has precedence? Is there equality? What if the humans simply... switch you off! Conversely, and presumably, a machine-human would be able to work 24/7 without the need for sleep. Does this mean that humans are out of a job because they have needs such as sleep?
What about reproduction? You could store as many machine-humans as you like. Who or what decides to reproduce a mechanical person? Would machine-humans have the same living requirements as us? Would they even need a body in the traditional sense of the word? Probably not - if you wanted to go somewhere you could just zip down a network.
Which makes me think again - if this is an exact copy of a human mind, then it will crave the ability to touch, taste, feel, love... What would happen if the machine got depressed? A machine could surely do more damage than an individual ever could.
Really the list of questions is never ending.
Finally, the issue of security. At present we can safely assume it's pretty much impossible to hack someone's brain - we're fairly secure, and amazing advances in medicine and bio-engineering aside, I think we will be for quite some time to come. However, as soon as you make something digital it becomes vulnerable to attack. Can you imagine a botnet of people? Now that would be a problem...
Makes you think, doesn't it?
Ah, Arnie. Unlikely cult figure, star of so many top-quality movies. Renowned the world over for his gritty, show-stopping one-liners and Oscar-winning performances of a lifetime.
Perhaps not, but he is brilliant enough that someone made an entire programming language dedicated to him. Indeed, if you're any kind of self-respecting geek, you should be able to complete your coursework in this language - click here to get started. I'll worry about explaining the syntax to the exam board later, after I've got over your sheer brilliance/audacity at following through on such an absurd idea.
Why am I rambling on about Arnie? Well, suddenly we find ourselves living in a time when the ideas portrayed in the Terminator movies are not so crazy after all. The idea that you could create a robot with super-human intelligence that could take over the world and start World War 3 once seemed rather far-fetched. Now you have people like Stephen Hawking genuinely having a bit of a panic, both here and here, because they know full well that recent developments point towards this being not just probable, but "OMG this is actually going to happen, what are we going to do?"
You may have seen in the news recently that Google built a really smart bit of software that can play the game Go. If you're not a board game aficionado, the basic gist is that players take it in turns to place black or white stones on a grid, capturing enemy stones by completely surrounding them. There are vastly more possible board positions than there are atoms in the observable universe, which makes it a computationally difficult task: it isn't possible to analyse every possible move for its strengths and weaknesses in a reasonable time, so you have to create a program which uses a heuristic, or "Friday night, that'll do, lads" approach.
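As a toy illustration of that "that'll do" approach, here's a sketch of Monte Carlo move selection - the same family of trick that AlphaGo builds on - applied to noughts and crosses rather than Go, so it fits in a few lines. All the names and numbers here are my own, purely for illustration:

```python
import random

# The eight winning lines on a 3x3 board (indices 0-8).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    """Play random moves to the end; return the winner (None for a draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w or '' not in board:
            return w
        move = random.choice([i for i, s in enumerate(board) if not s])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def monte_carlo_move(board, player, rollouts=200):
    """Heuristic move choice: instead of searching every line of play,
    estimate each legal move's win rate from random playouts and pick
    the best - 'good enough' rather than provably perfect."""
    best_move, best_score = None, -1.0
    for move in [i for i, s in enumerate(board) if not s]:
        trial = board[:]
        trial[move] = player
        wins = sum(rollout(trial, 'O' if player == 'X' else 'X') == player
                   for _ in range(rollouts))
        if wins / rollouts > best_score:
            best_move, best_score = move, wins / rollouts
    return best_move

# X has two in a row at squares 0 and 1: the playouts find the winning square.
board = ['X', 'X', '', 'O', 'O', '', '', '', '']
print(monte_carlo_move(board, 'X'))  # -> 2
```

The real thing uses vastly cleverer playouts and a neural network to guide them, but the core gamble is the same: sample some futures instead of examining all of them.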
Computers really don't like anything other than concrete certainties, mainly down to the fact that they are binary devices! There are no grey areas when you live in a world of either 0 or 1 and nothing in between. That's why in programming you have to work with absolute truths. Even in an artificial intelligence system, the rules are currently still based on immutable truths, so you've got issues already - and they only get worse when we consider the human condition...
You see, us humans are weird. Take this example:
"What do you fancy for dinner?"
"Dunno, just fancy it."
If you had to think (and I mean really think) about the reasons behind your decisions, you'd have a hard time, wouldn't you? I mean, why do you fancy toothpaste for dinner? You might just like the taste; you might have nothing else in the house and be too lazy to shop; you might be pregnant, not realise it, and have a craving for toothpaste. Or... you just want it.
None of these reasons work in computer land. They don't get grey areas like this. How do you program a computer to understand "just because", desire or whimsical actions?
Back in the '70s, people thought computers would rapidly become more intelligent than humans. Wild ideas formed in people's minds and fantastic prophecies were made about future computers. Only someone then spoiled the party and actually sat down to try this out. They realised it was incredibly, incredibly hard to make a computer even remotely "human", gave up, and went for a cup of tea instead.
To this day, there is still a prize - the Loebner Prize, based on the Turing test - available for the first truly human-behaving computer program.
But... even so, there are some awesome developments, and they all come from the field of neural engineering and neural networks.
What's that then?
Basically, it's the idea of making a program which behaves (loosely) like the neurons in your brain. Sounds good, right? Crash course in brain science:
So now you know.
The program then spends a significant amount of time "learning". If it does something wrong, it goes back and tries another way. Each time it's successful, the program strengthens that path and "learns" a skill or ability. This continues until it has learned enough to be useful. This kind of computer learning is relatively new - when I was at university the lecturers were all excited about it, but it was so new they didn't actually teach us about it. Which was nice.
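That strengthen-the-successful-path loop can be sketched with a single artificial neuron. The example below teaches one neuron the logical AND rule by nudging its connection weights every time it answers wrongly - the numbers and names are my own illustration, not any particular library:

```python
# A single artificial "neuron": weighted inputs, a threshold, and a
# weight-nudging rule that strengthens connections after each mistake.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, epochs=20, rate=0.1):
    """Perceptron learning: loop over the examples, and whenever the
    neuron gets one wrong, nudge the weights towards the right answer."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Teach it logical AND: it should only fire when both inputs are on.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
print([predict(weights, bias, x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

Real neural networks stack thousands (or billions) of these units and use subtler update rules, but "answer, check, nudge the weights, repeat" is genuinely the heart of it.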
Here's a fantastic example of machine learning in action: this guy made a program which learns to play Super Mario, and it really is impressive. Not only that, it will give you great insight into how the process works and the limitations of current work:
While you're at it, you should watch it learn to play Mario Kart too. Then you should all consider doing something similar for your coursework....
How good is that?!
Anyway, on to more important things... Who wants to die?
Well, not me, and it seems that a certain Russian guy and I have a lot in common, because if I had a spare hundred million pounds or so knocking about under the fridge, I too would pay lots of beardy scientists to come up with the technology to make me live forever. Until science works out the meaning of life, at least.
The BBC reports that Dmitry Itskov would rather like technology to come to the rescue and make him (and the rest of us, he's a generous sort) immortal.
The way he plans to do this is basically to create a copy of every neuron in his brain. This is nowhere near as nuts as it sounds, and if you look into the pace of development in technology, you'll realise that his 30-year timescale is also not so bonkers. It's entirely plausible that we will have the storage capacity and processing power necessary to emulate a human mind in 30 years. That still leaves a lot of questions to be answered, and some of them are a bit of a bugger...
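As a rough sanity check on the storage side of that claim, here's a back-of-the-envelope sum. The neuron and synapse counts are the commonly quoted ballpark figures, and the bytes-per-synapse is a pure assumption of mine:

```python
# Back-of-the-envelope storage estimate for a digital copy of a brain.
neurons = 86e9          # ~86 billion neurons (commonly quoted figure)
synapses = 1e14         # ~100 trillion connections (ditto)
bytes_per_synapse = 4   # assume one 32-bit weight per connection

total_bytes = synapses * bytes_per_synapse
print(f"~{total_bytes / 1e12:.0f} TB just for the connection weights")
```

A few hundred terabytes is big, but it's not science fiction - data centres already work at that scale today. Storing the weights is, of course, the easy part compared with actually running them.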
And this raises one final point, which is more relevant to you than me (although please do bring me back from the dead when the technology works): the young people of today will grow up in a society that has to answer these kinds of questions and more. You will be expected to make decisions that are harder than any that have gone before - decisions which literally impact the very definition of what being alive actually means.