Ah, Arnie. Unlikely cult figure, star of so many top-quality movies. Renowned the world over for his gritty, show-stopping one-liners and Oscar-winning performances of a lifetime.
Perhaps not, but he is brilliant enough that someone made an entire programming language dedicated to him. Indeed, if you're any kind of self-respecting geek, you should be able to complete your coursework in this language - click here to get started. I'll worry about explaining the syntax to the exam board later, after I've got over your sheer brilliance/audacity at following through on such an absurd idea.
Why am I rambling on about Arnie? Well, suddenly we find ourselves living in a time when the ideas portrayed in the Terminator movies are not so crazy after all. The idea that you could create a robot with superhuman intelligence that could take over the world and start World War 3 used to seem rather far-fetched. Now you have people like Stephen Hawking genuinely having a bit of a panic, both here and here, because they know full well that recent developments point towards this being not just probable, but "OMG this is actually going to happen, what are we going to do?"
You may have seen in the news recently that Google built a really smart bit of software that can play the game Go. If you're not a board game aficionado, the basic gist is that players take it in turns to place black or white stones, trying to surround territory and capture each other's stones. The number of possible moves and board positions is astronomical - far more than "millions" - which makes it a computationally difficult task: it isn't possible to analyse every possible move for its strengths/weaknesses in a reasonable time, so you have to create a program which uses a heuristic or "friday night, that'll do, lads" approach.
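To get a feel for why brute force is hopeless here, try a quick back-of-envelope sum. The figures below (roughly 250 legal moves per turn in Go, games of roughly 150 moves) are approximate ballpark numbers, not exact rules:

```python
# Rough illustration of why exhaustive game-tree search fails for Go.
# Figures are approximate: ~250 legal moves per turn, ~150-move games.
branching_factor = 250
game_length = 150

positions = branching_factor ** game_length
print(f"Naive Go game tree: ~10^{len(str(positions)) - 1} positions")

# Chess, for comparison (~35 moves per turn, ~80-move games):
chess = 35 ** 80
print(f"Chess game tree:    ~10^{len(str(chess)) - 1} positions")
```

Even checking a billion positions per second wouldn't make a dent in a number like that, which is why Go programs have to guess well rather than check everything.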
Computers really don't like anything other than concrete certainties, mainly because they are binary devices! There are no grey areas when you live in a world of 0s and 1s and nothing in between. That's why in programming you have to work with absolute truths. Even in an artificial intelligence system, the rules are currently still based on immutable truths, so you've got issues already, and they only get worse when we consider the human condition...
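Here's a toy illustration of the problem. The function name, the "craving score" and the 0.5 cutoff are all entirely made up - the point is that however you dress it up, the computer has to collapse a grey area into a hard True or False somewhere:

```python
# A computer's view of preference: everything must collapse to True/False.
# 'craving_score' and the 0.5 cutoff are invented for illustration only.
def fancies_toothpaste(craving_score: float) -> bool:
    # The grey area gets forced through a hard, arbitrary threshold.
    return craving_score >= 0.5

print(fancies_toothpaste(0.49))  # False - computer says no
print(fancies_toothpaste(0.51))  # True  - suddenly a definite yes
```

A 0.49 and a 0.51 feel almost identical to a human, but the program treats them as opposites.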
You see, us humans are weird. Take this example:
"What do you fancy for dinner?"
"Dunno, just fancy it."
If you had to think (and I mean really think) about the reasons behind your decisions, you'd have a hard time wouldn't you? I mean, why do you fancy toothpaste for dinner? You might just like the taste, you might have nothing else in the house and be too lazy to shop, you might be pregnant, not realise it and have a craving for toothpaste. Or... You just want it.
None of these reasons work in computer land. They don't get grey areas like this. How do you program a computer to understand "just because", desire or whimsical actions?
Back in the 70s, people thought computers would rapidly become more intelligent than humans. Wild ideas formed in people's minds and fantastic prophecies were made about future computers. Only someone then spoiled the party and actually sat down to try it out. They realised it was incredibly, incredibly hard to make a computer even remotely "human", gave up and went for a cup of tea instead.
To this day, there is still a prize available - the Loebner Prize, based on Alan Turing's famous test - for the first truly human-behaving computer program.
But... even so, there are some awesome developments, and they all come from the field of Neural Engineering/Neural Networks.
What's that then?
Basically, the idea of making a program which behaves exactly like neurons in your brain. Sounds good, right? Crash course in brain science:
So now you know.
The program then spends a significant amount of time "learning." If it does something wrong, it goes back and tries another way. Each time it's successful, the program strengthens that path and "learns" a skill or ability. This continues until it has learned enough to be useful. This kind of computer learning is relatively new - when I was at university the lecturers were all excited about it, but it was so new they didn't actually teach us about it. Which was nice.
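That "strengthen the paths that work" idea can be sketched with a single artificial neuron (a perceptron) being taught the logical AND function. All the numbers here (weights, learning rate, number of passes) are arbitrary illustrative choices, and real neural networks are vastly bigger, but the learning loop is the same in spirit:

```python
# Minimal sketch of learning by strengthening what works: one artificial
# neuron (a perceptron) taught the logical AND function.
def step(x):
    return 1 if x >= 0 else 0

# Training data: inputs and the answer we want the neuron to learn.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection strengths ("synapse" weights)
bias = 0.0
rate = 0.1       # how much to adjust after each mistake

for epoch in range(20):             # repeat the lesson several times
    for (x1, x2), target in examples:
        guess = step(w[0] * x1 + w[1] * x2 + bias)
        error = target - guess
        # Wrong answer? Nudge the weights. Right answer? Leave them alone.
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + bias))
```

After a few passes over the examples, the connection strengths settle on values that give the right answer for all four inputs - nobody ever told the program the rule for AND, it just kept nudging itself until it stopped being wrong.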
Here's a fantastic example of machine learning in action: this guy made a program which learns to play Super Mario, and it really is impressive. Not only that, it will give you great insight into how the process works and the limitations of current work:
While you're at it, you should watch it learn to play Mario Kart too. Then you should all consider doing something similar for your coursework....
How good is that?!
Anyway, on to more important things... Who wants to die?
Well, not me, and it seems that a certain Russian guy and I have a lot in common, because if I had a spare hundred million pounds or so knocking about under the fridge, I too would pay lots of beardy scientists to come up with the technology to make me live forever. Until science works out the meaning of life, at least.
The BBC reports that Dmitry Itskov would rather like technology to come to the rescue and make him (and the rest of us, he's a generous sort) immortal.
The way he plans to do this is to basically create a copy of every neuron in his brain. This is nowhere near as nuts as it sounds, and if you look into the pace of development in technology, you'll realise that his 30-year time scale is also not so bonkers. We will almost certainly have the storage capacity and processing power necessary to emulate a human mind in 30 years. This still leaves a lot of questions to be answered, and some of them are a bit of a bugger...
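For a flavour of the storage side of that claim, here's a back-of-envelope estimate. All the figures below are rough, commonly quoted ballpark numbers for the human brain - not details of Itskov's actual plan:

```python
# Back-of-envelope storage estimate for "a copy of every neuron".
# All figures are rough ballpark numbers, stated as assumptions.
neurons = 86e9            # ~86 billion neurons in a human brain
synapses_per_neuron = 7e3 # ~7,000 connections each (very rough)
bytes_per_synapse = 8     # assume one 8-byte value per connection

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
petabytes = total_bytes / 1e15
print(f"~{petabytes:.0f} PB just to store the connection strengths")
```

A few petabytes sounds enormous, but big data centres already handle that today - which is exactly why the 30-year timescale for the raw capacity isn't as bonkers as it first appears.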
And this raises one final point, which is more relevant to you than to me (although please do bring me back from the dead when the technology works): the young people of today will grow up in a society that has to answer these kinds of questions and more. You will be expected to make decisions that are harder than any that have gone before, decisions which affect the very definition of what being alive actually means.