Learn IT with Davo

1.1 - Systems Architecture

In this section (click to jump):
  • What is a computer and how do they work?
  • The Definition of a Computer
  • The CPU
    • The Fetch Decode Execute Cycle
    • Performance Factors (Clock, Cache, Cores)
    • CPU Instructions
    • Data Buses
    • Components of a CPU
    • Registers (the Accumulator)
    • The Components of a CPU (Control Unit, ALU)
    • A Detailed FDE Cycle
  • Embedded Systems

YouTube Revision Playlist for this section:


Introduction

[Image: Lovely plumage....]
People get very confused by computers and, in some ways, it's not really surprising considering they can come in hundreds of different forms, in millions of possible configurations to fit all kinds of needs. However, when you break it right down, computers are very, very simple. Many people with beards are now turning purple and frothing at the mouth with anger that I've just dared to simplify things.

Let's get some basic facts out of the way. You may not believe me (I wouldn't believe me either) but a computer is literally nothing more than a calculating machine.

A calculator? How are all those super bare dank memes on Instagram the product of a big old calculator? How can it possibly be that your latest snapchat of your dog sat with a grape on its head is just the result of basically playing the "how many times can you press equals until it says 'E' on the display" game?

But it is.

Let's go on a journey. Everything is better with a story, so get your Ovaltine/Horlicks ready and settle down. 

 
[Image: The advert for this used to be mental, just people yelling HOOOORRRLLLIIICCCKSS! I looked everywhere on YouTube for it but just ended up getting side-tracked, watching endless 1990s TV adverts...]
Anyway, back to the story...

What do you need to make your phone, laptop, tablet or robotic butler work? The answer, of course, is power.

Electricity isn't witchcraft; it is simply a flow of charged electrons and, in our case, it's the flow of those through a circuit. A circuit is nothing more than a fancy collection of wires and switches.


What on earth has this got to do with numbers, calculators and above all else - computers?

Well, the answer is that electricity isn't the easiest thing to accurately control. Many people have scratched their beards and just stared in puzzlement at electricity, others just got on the wrong side of it and looked bemused whilst their hair stood on end and their feet smouldered. But... some clever people realised that the easiest way to understand electricity was this:

If it's flowing - it's on.
If it's not flowing - it's off.

Now, if you don't understand that idea, stand up, walk over to the light switch in the room you're in and, whilst poking your tongue out of one corner of your mouth, flick the switch. If the light comes on in the room, then clearly, electricity is flowing. If the light goes off, then clearly there is no more electricity flowing. Got it? You can go for a lie down now if you like.

As an aside, you can get dimmer switches, so obviously it is possible to finely control the amount of current flowing in a circuit, but not accurately and, here's the deal breaker, could you accurately, time after time, set that dimmer switch to exactly the same point? The answer is no, not reliably.

So, let's recap:
  • We have electricity. It flows.
  • With 100% certainty we can tell if electricity is ON or if it is OFF.

If you don't like Maths, look away now.

Some genuinely astounding and ingenious people realised that it is possible to represent ON as the number 1 and OFF as the number 0. 
[Image: I have a cunning plan...]
In history there have been some incredibly important people, but few more so than:

George Boole

and

John Von Neumann (I love him, he was absolutely bonkers)

Obviously there were others involved, but these two could easily be credited with inventing computing. If they'd known where it would lead, I'm fairly sure they just wouldn't have bothered and would probably have both just turned into honest to goodness alcoholics instead. It's better than a future filled with endless posts about people's dinner on social media.

George Boole invented logic (this is why it's called Boolean Logic and you have the Boolean data type), which is an incredible statement. Can you imagine that? 

"What do you do?"
"Oh you know, office job, nothing important. How about you, George?"
"Oh I invented Logic yesterday, it's going to change the entire course of history. Then I mowed the dog and took the lawn for a walk."

Without melting your intelligence circuits and sending you into a trance, Boolean logic is extremely clever Maths that allows you to manipulate numbers (do calculations) with 100% accuracy and predictable outcomes. This is extremely important in computing, because who wants a machine that is only right most of the time? You'd love that down the car showroom:

"Buy this new car! The auto park feature is computer controlled, and that means it will only crash you into another vehicle the odd time or two, but other than that it's flawless!! Go on, live a little, take a gamble!"

This is all during the 1800s. A time when electricity was literally the work of magicians and wizardry and you could still find people who would like to argue about whether the world was flat or round. Mental.

Fast forward to the early 1940s and very beardy people began to realise that:
  • Boolean Logic is represented using the numbers 0 and 1
  • You can represent any number using just 0s and 1s (the binary number system)
  • Electricity can be turned on and off to represent the numbers 1 and 0
  • Therefore, we can make circuits that perform Boolean logic using electricity.

Can you see where this is going?

Computers were invented because we needed to be able to do Maths quickly, reliably and constantly. The 1940s were... wartime. War means people die and anything that gives you the upper hand means fewer people die - so machines were built to do the maths required to crack German codes like Enigma. This, quite literally, saved thousands of lives.

And now we use the technology to insult people in the comments section of YouTube! Ah, all those massive leaps forwards were so worth it.

So, we arrive where we started - we had literally made a giant (it filled a huge room) calculator. But it doesn't explain how we got to where we are today.

The final part of the story is that we had already realised that to solve a problem all we needed to do was:
  • Turn the data into numbers
  • Stuff those numbers into a computer
  • Do clever maths using Boolean logic
  • Get the answer we want

Actually, this is exactly where we are still at today. All computing still works in the same way. Whether you can believe it or not, all we do is turn everything into numbers and then crunch them in clever ways to do whatever we want. We turn pictures into numbers and pump them out at insane rates to create games, we turn button presses into numbers so you can use a keyboard and we turn sound into numbers so you can listen to god awful rappers go on about cars, drugs and guns because that's just what it's like down Burntwood McDonalds these days.

What's more crazy is that the only numbers we turn all this into are 0 and 1 and, if we break it down further, don't forget that is literally because we are turning the power on and off to represent those numbers.

What device do we know about that turns the power on and off? The humble switch.

I'll leave you with the thought that you quite genuinely could, if you were totally and utterly unhinged, create a computer composed solely of light switches and light bulbs and, logically, it would be identical to your smartphone.

Don't, though. You'll die before you're finished and your parents will be left with the most absurd debt in human history when they owe Screwfix £30 billion for all the light switches you purchased on your futile journey to idiotic infamy. Plus you'd upset all the electricians.

 

The Definition of a Computer

So, to define, a computer is:
  • A machine...
  • Which takes numbers (binary numbers) as input
  • Performs calculations on them
  • Provides output (the answers)
  • and does this repeatedly until you tell it to stop or turn it off.
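That definition can be sketched in a few lines of Python - a toy "computer" whose calculation is entirely made up, purely to show the input, process, output, repeat shape:

```python
# A toy "computer": takes numbers as input, performs a calculation,
# provides output, and repeats until the input runs out.
# The calculation itself (double and add one) is invented purely
# to illustrate the idea - it isn't anything special.

def toy_computer(inputs):
    outputs = []
    for value in inputs:        # input: a number goes in
        result = value * 2 + 1  # process: perform a calculation on it
        outputs.append(result)  # output: the answer comes out
    return outputs              # ...and repeat for the next input

print(toy_computer([3, 5, 7]))  # → [7, 11, 15]
```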

Now you know what a computer is, let's learn something useful for the exam.
 

The CPU

[Image: Looks dull, is in fact a totally mental piece of engineering.]
The CPU is arguably the most important component in any computer system. Understanding what it really does and how it does it can seem confusing at first.

Before we dive any deeper let's make sure we understand the following things:
  • A computer is a binary device, meaning it can only understand sets of 0s and 1s
  • A computer works by following lists of instructions called programs (or software, or applications)
  • The instructions we can use in a program are fixed and are called the "instruction set"

Let's look at instruction sets in more detail.

Imagine a TV remote control. There are a set number of buttons on there; each one corresponds to a function of your TV. Unless you change your TV for a new one, these buttons and functions will never change. Think about it: your TV doesn't suddenly gain new abilities from one day to the next.
[Image: That's your lot, mate...]
You can think of your TV remote as the "instruction set" of your TV. Everything it can do is on a button somewhere on that controller. By pressing these buttons, or in some cases combinations of these buttons, you can get the TV to do what you want.

The CPU is no different. It has a set, fixed number of instructions. These instructions are things it can carry out or do for you and they are all very simple. By using them in the correct order we can get the computer to do something useful for us. Here are some generic examples of CPU instructions and their meaning:

LDA   -   Load a piece of data into the accumulator
STA    -   Store the value of the accumulator in memory
CMP  -  Compare the value in the accumulator with another value
JMP   -  Jump to a new location in memory and continue from there


These may make no sense to you at the moment, but you should be able to see that they are not complex in any way. In fact, most CPU instructions simply involve moving a piece of data (a number) from one place to another, making a decision based on a value or moving to another place in memory. Because of the simplicity of each instruction it takes many, many instructions to do anything remotely useful. Moving your mouse across the screen would result in hundreds of these instructions being carried out.
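To make that concrete, here's a hypothetical sketch of executing a couple of those generic instructions in Python - the mnemonics and the single accumulator follow the simplified examples above, not any real CPU's instruction set:

```python
# Toy versions of the generic instructions above: each one just moves
# a number around or makes a simple decision. Names are illustrative.

memory = {"5F": 0}   # a pretend memory location, addressed by name
acc = 0              # the accumulator

def execute(op, operand=None):
    global acc
    if op == "LDA":            # load a value into the accumulator
        acc = operand
    elif op == "STA":          # store the accumulator in memory
        memory[operand] = acc
    elif op == "CMP":          # compare the accumulator with a value
        return acc == operand
    # JMP would change which instruction runs next - see the
    # Program Counter later on.

execute("LDA", 60)     # put 60 in the accumulator
execute("STA", "5F")   # copy it into memory location 5F
print(memory["5F"])    # → 60
```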

This explains many things:
  • Programs are really, really long - because we need lots of simple instructions to do something complex
  • We have invented "high level" languages (ones that look like English rather than gibberish) because programming in CPU instructions (called assembly language) can be very complex.
  • CPUs must be very, very fast at carrying out these instructions simply because it takes many hundreds or even thousands of instructions to do anything that we would consider useful.

So... what is a CPU then? A CPU is:
  • The Central Processing Unit
  • The place where all program instructions are sent to be carried out
  • Capable of doing a set number of things (the instruction set)
  • Programs made up of these simple instructions are carried out by the CPU - it "executes" the instructions (does as it's told!)
  • The results of processing can be seen as outputs - things happening.

So it's the piece of hardware in a computer system that makes things happen! Without it, you couldn't run any programs. Indeed, you couldn't even turn the machine on.
 

The Fetch Decode Execute Cycle

Not one for words and reading? Worried you'll run the batteries out for your eyes? Don't worry, there's a video for this bit!
[Image: Dogs are nutters.]
By this point we know what the CPU is and what it does:
  • Gets instructions from a program
  • Carries them out!
​
But the next step in our understanding is to work out exactly how it does that.

Processing happens in a cycle. From the moment you turn your computer on until you turn it off, the CPU is literally doing nothing but going round and round processing instructions. Even when you leave your computer alone and it appears to be doing absolutely nothing, the CPU is still processing instructions - it doesn't know any other way. It's like a workaholic that never takes a break, never stops working and never has a breakdown, turning itself into a gibbering wreck.

You've been doing cycles since you were in reception class - life cycles, the water cycle, the art of balancing on a unicycle whilst juggling flaming badgers. All standard stuff. So here's another cycle for you and it's called the Fetch, Decode, Execute Cycle:
[Image: Here's a picture I stole from Google Images. First rule of Computer Science: don't do something again if it's already been done.]
It's fairly obvious that the CPU is, then, doing 4 things repeatedly, over and over again, to make the programs we give it work.

Let's break it down.

Fetch:

  1. Programs (instructions and data) are stored in memory (RAM - if you don't know what that is, click here and get reading)
  2. The CPU FETCHES an instruction from memory. This is really important - in an exam people often make the mistake of saying that instructions are sent to the CPU. They're not - the CPU requests them from RAM when it needs them.

Decode:

  1. The CPU then has to work out what the instruction means. It literally asks "what have you asked me to do?" If it makes life better, imagine the little computer people sat there with a big book, like a dictionary, and every time an instruction comes along they look it up in the book and read out the meaning so the CPU understands.
    Believe it or not, the CPU has to work out what an instruction means every single time. You could ask it to do the same thing 1000 times in a row and it would look it up 1000 times. Remember a CPU is not intelligent and cannot "think" for itself.
  2. Once the CPU has decoded the instruction it can pass it through the correct circuit that will actually carry the instruction out. Usually these instructions are carried out in something called the Arithmetic and Logic Unit (ALU) because, you know, most instructions are either maths or logic. Innit.

Execute:

  1. This should be really obvious - the CPU now does what you've asked it to! Add a number, jump to another program instruction or compare two pieces of information for example.
  2. This processing is usually carried out in something called a "register." More on those later (keep reading, it's got more cliffhangers than Hollyoaks, this... (but far fewer murders and deaths than is apparently normal for a small Cheshire village))

Store:

It'd be a bit weird if after doing an instruction the answer didn't go anywhere, or we just binned it. So the final part of the cycle is to put the answer somewhere. This can either be:
  • In a register (a piece of memory in the CPU) ready for further use by another instruction
  • Back in main memory (RAM) so it can be used later.
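The whole cycle can be sketched as a loop. This is a deliberately simplified simulation - real CPUs work on binary machine code, and the instruction names here are just the generic ones from earlier:

```python
# A stripped-down Fetch, Decode, Execute (and Store) loop.
ram = ["LDA 5", "ADD 3", "STA 4", "HLT", None]  # program + one data slot
pc = 0    # Program Counter: address of the next instruction
acc = 0   # Accumulator

while True:
    instruction = ram[pc]               # FETCH the instruction from RAM
    pc += 1                             # move the PC on to the next one
    op, *operand = instruction.split()  # DECODE: what are we being asked?
    if op == "LDA":                     # EXECUTE the instruction...
        acc = int(operand[0])
    elif op == "ADD":
        acc += int(operand[0])
    elif op == "STA":                   # ...and STORE the result in memory
        ram[int(operand[0])] = acc
    elif op == "HLT":                   # stop the cycle
        break

print(ram[4])   # → 8 (5 + 3, stored back into memory location 4)
```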

The speed of the deed - Megahertz

We're really getting somewhere now. We know what the CPU is and how it manages to carry out instructions, but before we finally move on, we actually need to understand something really rather important - how fast this cycle actually happens.

In computing we measure each cycle in Hertz. A single Hertz (1 Hz) would mean 1 single cycle was carried out each second.

CPU speeds (cycle speeds) have increased at absurd rates over time. This was neatly encapsulated in something called "Moore's Law" after a beardy bloke called Gordon Moore, who worked at Intel and made the prediction that the number of transistors (switches) inside a CPU would double every two years. This usually meant that performance also doubled at that rate, and for many, many years his law was correct.

In 1980 you'd have measured your CPU performance in terms of megahertz (MHz), which sounds awesome and, to be fair, it was at the time. 1 MHz = 1 million cycles per second, which is quite something, right?
[Image: This beauty had a 1 MHz processor and it was all you needed, I tell you! I loved this thing. Plus the adverts had an amazing jingle: "Are you keeping up with the Commodore, 'cos the Commodore is keeping up with you."]
You'd be fairly depressed today if even your watch ran at 1 MHz.

Around the end of 1999 CPU speed approached the magical 1 gigahertz (GHz) mark. 1 GHz = 1000 MHz, which is equal to 1 billion cycles per second. Now we're talking. That was a magical moment. I remember reading about the first 1 GHz computer and feeling like it was some kind of amazing milestone. In reality it used up so much power you'd get a phone call from the National Grid every time you turned it on and it ran hot enough to mean you no longer needed central heating. Great days.

We are now up to about 4-5 GHz in a desktop CPU, which is utterly mental, especially when you realise that modern processors aren't just 1 CPU at all - they're 4, 6 or even 8 CPUs rolled into one.

In reality performance is a little more complex than saying 4 GHz = 4 billion instructions per second, but what it does mean is that we are capable of doing billions of things per second in modern computers.

What does this mean? Well, as we increase the "clock speed" in a computer (increase the GHz) we increase the rate at which we can process instructions for a program. What does that mean? Well, it means that we can make things happen faster and get more done in the same amount of time as a slower processor - and that's only a good thing, right?

Summary:
  • Speed is measured in Hertz - Megahertz back in the day, Gigahertz now

  • 1 Hz = 1 cycle per second
  • 1 MHz = 1 million cycles per second
  • 1 GHz = 1 billion cycles per second

  • The more cycles per second, the more instructions we execute per second, the faster our computer appears to run.
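Those units are just powers of ten, so the time one cycle takes is easy to work out. Here's the arithmetic as a quick sketch:

```python
# One cycle takes 1 / clock-speed seconds.
HZ = 1
MHZ = 1_000_000        # a million cycles per second
GHZ = 1_000_000_000    # a billion cycles per second

# At 1 GHz a single cycle lasts one nanosecond:
print(1 / GHZ)         # → 1e-09

# A 4 GHz CPU gets through this many cycles every second:
print(4 * GHZ)         # → 4000000000
```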
 

Performance Factors

[Image: Like a machine, Monkman has optimised his brain to work in ways mere mortals can only dream of.]
In computing we are on a never-ending quest to get faster, more efficient and to use less energy whilst doing so. That's incredible really. If we treated the development of cars in the same way, we'd expect next year's model to go twice as fast and use half the fuel of this year's model - and we'd expect the same improvements year after year until the point where you can travel at the speed of light for 10p of petrol.

Seems reasonable.

The only downside is in computing one way of making things better is to make them smaller and smaller, so if we applied the same logic, our light speed car would also be about 1 atom big. Bummer.

So how do we increase the speed and efficiency of a CPU? Well, there are many ways but only 3 that we need to focus on for the exam. These are:
  • Clock Speed
  • Number of Cores
  • Amount of Cache

Clock Speed
​

Clock speed we have covered already in the previous section on the Fetch, Decode, Execute cycle but, if you're lazy or simply like repetition, here's a summary:
  • Clock speed is the number of cycles per second a CPU can carry out
  • This is directly related to how many instructions per second a CPU can execute (although 1 cycle does not equal 1 instruction execution)
  • Increasing clock speed = more MHz/GHz = more cycles per second = more instructions processed per second = faster running programs

There are only one or two downsides here:
  • More MHz/GHz also means you generate more heat from the CPU, or you need to make the circuitry even smaller to gain higher clock speeds using less power. This is difficult and obviously finite - you can only go so small before you are literally pushing single atoms around very thin bits of wire.
  • When a program is running it is rare that it never pauses for some reason. For example, waiting for the user to type something on the keyboard. During times like these, it doesn't matter if you have a 100 GHz processor - it's going to sit there doing nothing while it waits for your input! So you don't always get a massive increase in performance from higher clock speeds, because the CPU has to waste time whilst it waits for other things to happen.

[Image: "1.21 Gigawatts! Are you mad?!" Clock speed - clock tower - time machine - Back to the Future. Another tenuous link. I'm here all week, ladies and gentlemen.]

Number of Cores
Dual core, Quad core, Octo-Core are terms you may well have heard before and are often used in adverts for computers, but what does it actually mean?

When processor manufacturers (Intel and AMD) began to realise that just mashing out faster and faster processors wasn't actually giving the performance gains they wanted, they had to turn to something else. Their idea was a very good one - two processors are better than one. Oh and four processors are even better than two.

So a core is "a single processing unit inside a CPU".

This means a dual core processor is literally 2 CPUs in one. It has two processing units inside it. A quad core has 4, and so on.

There are some awesome advantages here:
  • 1 CPU can only carry out one task at a time. Therefore, more cores mean you can carry out more tasks in parallel (at the same time)
  • The more cores you have, the more you can split tasks amongst those cores to make them run faster (not all tasks can be split in this way)
  • The more cores in a machine, the better it will be able to "multitask"

However this brought with it some complications:
  • 2 cores does NOT mean twice as fast! 4 cores does not mean four times as fast! Do not put this in an exam, it's not true!
  • Programs and operating systems have had to be adapted so they can share tasks amongst cores and make use of the extra processing cores available to them. This is not a simple task.
  • Not all tasks are able to be split in this way and can only ever run on one core at a time.

Why can't all tasks be split? Let's have an analogy moment.

Imagine I'm painting my room a lovely fetching shade of bile green. My room is simple and boring and has 4 walls of equal size and shape. Let's pretend each wall takes me 1 hour to paint.

If I paint the room myself it will clearly take 4x1 hours = 4 hours.

But does it matter what order I paint the walls in? Of course not. Painting one wall has nothing to do with painting any of the others. So I can much more quickly paint the room if I invite some friends round and bribe them with beer and food to paint my house for me.

If I increase the number of people painting to 4 then clearly... 4 people working for 1 hour will get the job done. I've reduced the time taken from 4 hours down to 1. Winner. And I lied about the beer. My friends hate me.

This is the same in computing. If a task can be split like this, then we can get huge performance gains by splitting it over many cores (this is what graphics processors do really well - they have literally thousands of cores).
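The painting analogy translates directly into code. This sketch uses Python's thread pool purely to show the walls being shared out - with genuinely CPU-bound work you'd use processes rather than threads to get a real speed-up in Python:

```python
# Four independent "walls": no wall depends on any other, so the work
# can be shared out between workers (cores) in any order.
from concurrent.futures import ThreadPoolExecutor

def paint(wall):
    return f"wall {wall} painted"

# One painter: the walls are done one after another.
sequential = [paint(w) for w in range(4)]

# Four painters: the same walls, shared out in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(paint, range(4)))

# Same finished room either way - the task was splittable.
print(parallel == sequential)   # → True
```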
[Image: My lovely, cared-for car.]
But let's take another example - washing my car.

I invite the same 3 friends round again, they've forgiven me so it's ok. 

This time I want them to help me wash my car (not wise, it's so rusty it's probably soluble).

I give each of us a job - one has the hose to rinse it down, one has a bucket and sponge, one has a bottle of car wax and the last has a nice chamois (what an incredible word) leather to dry it off and make it shiny.

The jobs have to be carried out in order:
  1. Rinse
  2. Wash with sponge
  3. Wax
  4. Dry and Shine.

There is no way my friends can actually help reduce the time this job takes. You cannot start waxing the car before it's been washed, and you can't dry it before it's been cleaned. Therefore my 3 friends just stand around bored, drinking beer (I had to this time - they'd never forgive me if I lied a second time).

This is a perfect example of a job in computing that could not be improved by having more and more cores - some jobs simply cannot be split up to be made to work faster. In these scenarios, you simply want the highest clock speed you can get.


Amount of Cache

Let's get this definition nailed, it's a classic exam question:

Cache is: a small amount of memory inside the CPU, used to hold frequently accessed instructions and data. It runs at virtually the same speed as the CPU.

​Many years ago, in the greasy mists of computing history, it became apparent that CPU speeds were far outpacing the speed that RAM could go. This is a problem. Imagine a CPU is a really, really hungry and incredibly angry dictator. The CPU wants feeding, it wants instructions fed to it constantly and quickly. This enviable job belongs to RAM. If RAM can't supply instructions fast enough then the hungry CPU becomes idle and when it gets idle it gets angry and no one likes an angry, idle CPU.
[Image: ...and finally, Sir, a wafer thin mint.]
Being serious for a moment, this is a real problem. If RAM cannot send instructions quickly enough to the CPU then the CPU will literally sit there idle, doing nothing. 

If RAM only works at half the speed of the CPU, then it is possible that for 50% of the time the CPU will be doing nothing. Why is that a problem? Well, it suddenly makes your 1 GHz processor effectively a 500 MHz processor - half the speed.

Cache solves this problem:
  • Cache runs at CPU speed so can feed the processor instructions as quickly as it needs
  • Often programs execute the same instructions repeatedly (in a loop for example) so if we keep them in Cache we don't have to go to RAM and waste time.

Imagine Cache as a funnel, we can fill it up with lots of instructions and data and it'll feed them into the CPU when it needs them, all we need to do is keep it topped up so it doesn't run out.

Advantages of Cache:
  • Runs at CPU speed
  • Keeps frequently accessed instructions and data so the CPU can work on them quickly
  • Reduces the amount of time a CPU is waiting for instructions to be loaded from memory
  • Reduces the frequency (number of times) the CPU has to access RAM

Disadvantages of Cache:
  • Cache costs a LOT of money to make compared to RAM (this is why RAM isn't as fast as your CPU!)
  • Cache is limited, usually to around 8 MB (it takes up a lot of physical space inside the CPU - again, a cost issue)
  • If the instruction we want is NOT in cache then we have to wait to access RAM again

So, the bottom line is, cache improves performance because we are not waiting for instructions to be read from memory. The more cache we have, the less often we will need to access RAM.
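The funnel idea can be sketched with a dictionary standing in for cache: check the small, fast cache first, and only go to (pretend) slow RAM on a miss. The sizes and data here are made up for illustration:

```python
ram = {addr: addr * 10 for addr in range(100)}  # "slow" main memory
cache = {}                                      # small, "fast" cache
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:          # cache hit: no trip to RAM needed
        hits += 1
    else:                      # cache miss: fetch from RAM, keep a copy
        misses += 1
        cache[addr] = ram[addr]
    return cache[addr]

# A loop re-reading the same few addresses - exactly the repeated
# access pattern that cache is good at.
for _ in range(10):
    for addr in (1, 2, 3):
        read(addr)

print(hits, misses)   # → 27 3 (only the first pass had to touch RAM)
```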

 

CPU Instructions - Operator and Operand

So far you've seen some simple instructions that the CPU can execute such as:
LDA

However, this is only half of the story. I lied. I'm sorry...

Most instructions actually need to be in two parts - the instruction itself and some form of data.


Therefore an instruction is more likely to look like:

STA #5F

What does that mean? Well, we can split the instruction in to two parts - the Operator and the Operand. It's definition time:

Operator - in our example this would be STA. This is the actual instruction, it tells the CPU what we want it to actually do. In this case Store the Accumulator contents somewhere.

Operand -
in our example this would be #5F. This part of the instruction is either a number or a memory address. In other words it is the part of the instruction that says where or how the instruction should be carried out.

So what could our instruction STA #5F actually mean then? Well, if 5F is a memory address then it simply means "put the contents of the accumulator into memory location 5F" (5F is a hexadecimal number - if you don't understand hex, it's in Unit 2).

What about another example? LDA #60

Well, LDA means "Load the Accumulator", so a number will be put into the accumulator. Which number? Well, the second part of the instruction tells us that - 60.

Luckily at GCSE we don't need to worry about whether the number is a memory address or just literally a number.

In summary, then, what do you need to know for the exam:
  • The operator is the first part of the instruction and tells the CPU what to do.
  • The operand is the memory address or data that is needed to tell the CPU how to use that instruction.
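As a quick sketch, here's the example instruction from above split into its two parts. The "#" prefix and hexadecimal operand follow the example in the text; real assemblers have their own conventions:

```python
def split_instruction(instruction):
    # The operator comes first, the operand second.
    operator, operand = instruction.split()
    return operator, operand

op, arg = split_instruction("STA #5F")
print(op)                         # → STA
print(int(arg.lstrip("#"), 16))   # → 95 (hex 5F as an ordinary number)
```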

 

Components of a CPU

I keep talking about these things called "registers" and the "accumulator" like you actually know what they are. It's time we solved the mystery and clarified what is actually inside a CPU and what makes it tick (quite literally tick).

Here's a fascinating diagram of the inside of a CPU:
[Image: diagram of the inside of a CPU]
Everything inside the large white box is inside the CPU. We've introduced quite a few ideas and concepts in this diagram so the next few sections will go through these in detail. For now, here are a few things we should understand from looking at the diagram:

System Clock - the CPU is controlled by a system clock which is contained on the motherboard. Without a system clock, the CPU can't organise itself and doesn't know when to do things. Like an exceptionally clever machine, everything in a computer is synchronised to happen on the tick of a clock. When the system clock "ticks" the CPU knows to perform one part of the fetch, decode, execute cycle.

The Program Counter - the Program Counter or PC is essential to the running of any program. It is a register that simply contains a memory address. This memory address tells the CPU where the next instruction to be executed is in memory. Without this register, the CPU would not know what to do next. To run a program, you simply put the address of the first instruction in the program counter and it will start to execute your program.

So, before we dive into the rest of this diagram, the basics we need to understand are:
  • There is a system clock which controls when things happen in the CPU
  • The Program Counter keeps track of what the CPU needs to do next and can be changed to change the program we are currently running.​
 

Registers

[Image: This is what Google Images came up with for "register", so now you know...]
When the CPU is working on an instruction it needs somewhere to put that data. The instructions and data inside a CPU are stored in Registers.

A register is nothing more than a very small piece of memory inside the CPU which holds a single piece of data or an instruction. It is nothing to do with Cache, before you get confused.

A register may have a special purpose, such as holding status information or working with specific parts of the CPU, or it may be general purpose.

The size of a register depends on its purpose and the type of CPU you are using. Short status registers may only be 8 or 16 bits wide. However general purpose and specialist registers may be 32, 64 or even 128 bits wide.

In your exam you will be expected to know the name and function of the:
  • Memory Address Register - the address of the location in memory we want to read or write to
  • Memory Data Register - the data we read from memory or want to write to memory
  • Current Instruction Register - the instruction we are working on right now
  • Program Counter - the address of the next instruction to be executed.

It is much easier to understand these in the context of the fetch, decode, execute cycle, so skip to the last section for more information.
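If it helps, the four exam registers can be sketched as nothing more than named variables (a toy model with made-up values, nothing like a real CPU's internals):

```python
# Toy model of the four registers you need to know for the exam.
# The values here are invented for illustration.
class Registers:
    def __init__(self):
        self.pc = 0    # Program Counter: address of the NEXT instruction
        self.mar = 0   # Memory Address Register: address to read/write
        self.mdr = 0   # Memory Data Register: data read from / written to memory
        self.cir = 0   # Current Instruction Register: instruction being worked on

regs = Registers()
regs.pc = 100          # pretend the program starts at address 100
regs.mar = regs.pc     # copy the address into the MAR before a fetch
print(regs.mar)        # -> 100
```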

 

The Accumulator

The accumulator is, simply, just another register. However it is probably the most commonly used register and is directly associated with the ALU (arithmetic and logic unit). It is designed to be used with any instruction which makes use of the functions of the ALU (which is the majority of instructions in an average program).

So instructions such as load, store, add, compare and so on are generally performed on data held in the Accumulator.
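As a rough sketch (the memory addresses and values are invented), load, add and store all act on that one register:

```python
# Toy accumulator: LOAD, ADD and STORE all work through a single register.
memory = {0: 5, 1: 7, 2: 0}   # made-up memory addresses and contents

acc = memory[0]        # LOAD 0  -> accumulator now holds 5
acc = acc + memory[1]  # ADD 1   -> the ALU adds, result stays in the accumulator
memory[2] = acc        # STORE 2 -> result written back to memory
print(memory[2])       # -> 12
```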

That wasn't so bad, was it?

The Control Unit

As a child I always found the Fat Controller quite disturbing. I still do.
The control unit is properly important. Based on the clock pulse the CPU receives, the control unit coordinates what the CPU is doing - for example fetching data and instructions, decoding or executing them. Simply put, it controls the fetch, decode, execute cycle. However, that's not all it does...

The control unit will:
  • Send signals down the control bus (it's so important it has its own bus service. Magic.) to the RAM to tell it whether the CPU needs to read data (get it) or write data (send it).
  • Send timing signals to the components of the CPU
  • Send signals to the ALU or Input/Output devices


ALU

Always recycle your CPU's ALU after use.
The Arithmetic and Logic Unit is responsible for carrying out most of the instructions in the CPU instruction set.

At GCSE you need to know very little about it other than:
  • It performs logical and mathematical operations
  • Examples are: addition, subtraction, logical operations (AND, OR, NOT), shift operations (literally moving bits left and right)
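Those operations look like this (using Python's operators as stand-ins for what the ALU does in hardware):

```python
# The kinds of operation a toy ALU performs, shown on two 4-bit values.
a, b = 0b1100, 0b1010   # 12 and 10

print(a + b)    # addition              -> 22
print(a - b)    # subtraction           -> 2
print(a & b)    # logical AND (0b1000)  -> 8
print(a | b)    # logical OR  (0b1110)  -> 14
print(a << 1)   # shift left one place (doubles the value)  -> 24
print(a >> 1)   # shift right one place (halves the value)  -> 6
```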

 

Data Buses

Before anyone says it again, no, this isn't Hitler.
No. It isn't the bus you're thinking of.

A bus in computing is literally a set of wires, usually one wire per bit of data that you need to transfer at any one time.  For example, if you wanted to transfer 1 byte of data (that's 8 bits) then you might have 8 wires. This way you can send all 8 bits at once and they arrive together, quickly. Obviously, the more you want to send, the more wires you need - 32 bits = 32 wires.
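One wire per bit means the bits of a byte travel side by side, one per wire. A quick sketch of splitting a byte across 8 "wires" (the value is just an example):

```python
# Sending the byte 0b01100001 (97, the letter 'a') down an 8-wire bus:
# one bit goes on each wire at the same time.
value = 0b01100001

# Pick out each bit, most significant first - one entry per wire.
wires = [(value >> i) & 1 for i in range(7, -1, -1)]
print(wires)  # -> [0, 1, 1, 0, 0, 0, 0, 1]
```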

A bus is used to connect one part of a computer to the other. In this context we are looking at how we connect the CPU to its best mate RAM. 

If you look at a motherboard you can see these wires:

If you look around the CPU socket (the big white square) you can see hundreds of little blue lines coming to and from it. These are data or bus wires.
In our exam we are concerned about three buses that connect the CPU to RAM. These are:
  • The Control Bus
  • The Data Bus
  • The Address Bus

Before we go any further it's important to understand why there are three of them, after all, this means more wires and more things to control. To answer this we need to understand what kinds of things are coming in and out of the CPU all the time (this is not an exhaustive list):
  • Instructions to read
  • Data to read
  • Data to write back to memory
  • Control signals

If everything had to go down one bus then a lot of time would be wasted. Imagine the CPU wants to send a piece of data back to memory and then read the next instruction. With only one bus we would have to wait for the data to be sent before we could get the next instruction. In CPU terms this would take forever and, as we learned before, making the CPU wait is not a good idea at all.

So, to make things happen more quickly, we make use of several buses. Not only does this speed things up but it means we can:
  • Fetch one instruction whilst executing another
  • Fetch data whilst requesting another instruction from memory
  • Send control signals whilst sending and receiving data.

The point to understand here is we are doing things efficiently. Many things are happening simultaneously (or in parallel) and as you should know by now, this is only a good thing.

The Control Bus

The control bus is used to send signals to the RAM controller to tell it whether we want to read or write data.


The Data Bus

The data bus is used to send instructions and data TO the CPU (a READ operation) and also to send data FROM the CPU back to RAM (a WRITE operation).

The Address Bus

The address bus is used to send the address of the instruction or data that we want to read from or write to memory.
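Putting the three buses together, a single memory read might be sketched like this (the function, the address and the instruction are all made up for illustration):

```python
# Toy memory read showing the job of each bus.
memory = {40: "LOAD 12"}   # pretend address 40 holds an instruction

def read_from_memory(address):
    control = "READ"        # Control Bus: tells RAM this is a read, not a write
    mar = address           # Address Bus: carries this address to RAM
    data = memory[mar]      # Data Bus: carries the instruction/data back to the CPU
    return data

print(read_from_memory(40))   # -> LOAD 12
```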
 

A Detailed Fetch, Decode, Execute Cycle

Do all the revision!
If you've made it this far, well done, you're an educational warrior. This is the final step in truly understanding how ALL of this ties together. The 8 steps below show you what each register, bus and component of the CPU is doing at each stage of the Fetch, Decode, Execute cycle.

Although this seems complicated at first, it is really just a description of the diagram we looked at earlier and how data flows through the various buses and registers.

During the Fetch stage of the cycle:

1. The contents of the Program Counter (the address of the next instruction) are copied into the Memory Address Register (MAR)

2. The Program Counter is incremented so that it points to the following instruction

3. The address is transferred along the Address Bus to Main Memory (so memory knows where to get the next instruction from)

4. The instruction that has been fetched is transferred back to the processor along the Data Bus

5. This is held in the Memory Data Register (MDR)

6. The instruction is then transferred to the Current Instruction Register (CIR), ready for decoding

So it should be fairly obvious by now that the majority of the work in each cycle happens during the fetch phase.

During the decode/execute phase the following two things happen:

7. The instruction is split into an Op-Code and an Operand

8. The instruction is carried out by the ALU (Arithmetic Logic Unit)
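The whole cycle can be sketched as a toy simulation. Everything here (the instruction set, the addresses, the data) is invented purely to show the registers and phases in action:

```python
# Minimal fetch-decode-execute simulation. Addresses 0-2 hold invented
# instructions; addresses 10-11 hold data.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", 0),
          10: 5, 11: 7}

pc, acc = 0, 0
while True:
    mar = pc                  # contents of PC copied into the MAR
    pc += 1                   # PC incremented, ready for the next fetch
    mdr = memory[mar]         # address out on the address bus; instruction
                              # comes back on the data bus into the MDR
    cir = mdr                 # instruction moved into the CIR for decoding
    opcode, operand = cir     # decode: split into op-code and operand
    if opcode == "HALT":      # execute the instruction
        break
    elif opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]

print(acc)  # -> 12
```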

 

Embedded Systems

An embedded system is simply a computer which:

  • Combines hardware and software together in one device (sometimes even one chip)

  • Is custom built for a specific purpose

  • Is designed to perform a single, well defined task

Embedded systems are usually extremely reliable and undergo extensive testing because they are responsible for some pretty important things like driving your car for you so you can fall asleep at the wheel and not die.

Advantages of embedded systems are:
• Usually small which means cheap to manufacture

• Very reliable (when did your dishwasher last crash?)

• Will be perfectly suited/customised for the particular purpose it’s made for


Disadvantages are:
• Very expensive to create and develop

• Can be used in safety critical environments so must be heavily tested

• Can be very complex systems

• (usually) cannot be easily updated

