
Running head: Assignment 1 for Shahla Alyasiry

CS200 Fundamentals of Information Technology
Module 1, Assignment 1
Shahla Alyasiry
Boston University

Abstract

1. How does an operating system ensure that hardware is used efficiently?
2. Describe in detail how a CPU executes an instruction.
3. The lecture describes two types of events which change the normal flow of control in the processor. Briefly describe each type of event and explain why each is needed.
4. Over the past decade, processor manufacturers, like Intel, have used various technologies to increase system performance. Describe in detail three of these technologies as discussed in the online content.
5. The speed of a computer system isn't based solely on the speed of the processor. Discuss how cache memory and newer buses have contributed to faster computer systems over the years.

Without an operating system, a computer would not be effective to use, because it is the program, or set of programs, making up the operating system that makes the computer efficient by performing a number of important tasks. One of those tasks, mentioned in this question, is managing the hardware: the operating system runs the hardware, controls its work, and makes sure it is used efficiently. For example, I/O devices are connected to device controllers through buses, and each device controller is responsible for a certain set of instructions. Some of those instructions are executed by the CPU, while the device-specific ones reside in the operating system. Since application software does not normally deal with those device specifics, the operating system supplies them on the software's behalf.

The basic process by which a CPU executes an instruction is a repetitive, sequential cycle of fetching and executing instructions. This simple cycle can be understood better if we break it down in the following way. First, we need to know that every instruction has an operation code (opcode) that tells the processor what operation will be performed.
An instruction also has a certain number of operand specifiers (zero or more) that identify the data the operation will use. Constant operands can be stored in the instruction itself, while temporary values may be stored in a register. A register is itself a temporary storage location and an important subsystem of the processor; four types of register are used to control the execution of an instruction. The cycle starts with the program counter: by the time one instruction's execution ends, the program counter already contains the memory address of the upcoming instruction. Using that address, the processor fetches the next instruction from main memory. The fetched instruction is then decoded to find out what it will do. Next, as mentioned above, the processor determines the addresses of the operands (zero or more), fetches them, and executes the instruction. Finally, it determines where to store the result and moves on to the next instruction.

Typically, instructions execute sequentially, but sometimes this flow is changed, either by an exception or by an interrupt. An exception can be caused by some unexpected situation arising within the program. It can also occur when a program passes control to the operating system to complete some operation on its behalf; this is called a "system call". In both cases the processor executes a special routine, the exception handler. If the exception handler cannot recover from the situation, it simply stops the program, or makes an exit system call and terminates its execution. Exceptions are useful for catching errors in a program.

Unlike exceptions, interrupts are asynchronous and may or may not be related to the executing program. They usually happen because of an external event. For instance, when we move the mouse (which can happen at any time), a hardware interrupt occurs and the processor receives a signal about it.
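The control flow described here, the normal fetch-decode-execute cycle with a per-cycle check for pending interrupts, can be sketched as a toy simulator. The two-instruction machine, opcode names, accumulator register, and "pending interrupt" flag below are all hypothetical illustrations, not the lecture's actual machine model:

```python
# Toy simulator: fetch-decode-execute with an interrupt check per cycle.
memory = [
    ("LOAD", 7),    # put the constant 7 in the accumulator
    ("ADD", 3),     # add the constant 3 to it
    ("HALT", None), # stop the machine
]

pc = 0                     # program counter: address of the next instruction
accumulator = 0            # register holding temporary results
pending_interrupt = False  # set asynchronously by hardware in a real CPU
log = []

def interrupt_handler():
    # Runs between instructions; afterwards control returns to the
    # interrupted program, as described in the text.
    log.append("interrupt handled")

while True:
    if pending_interrupt:            # asynchronous event, checked each cycle
        interrupt_handler()
        pending_interrupt = False
    opcode, operand = memory[pc]     # fetch the instruction...
    pc += 1                          # ...and advance the program counter
    if opcode == "LOAD":             # decode, then execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 10
```

Note how the program counter is incremented right after the fetch, so it already points at the upcoming instruction while the current one executes, just as described above.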
When the processor detects an interrupt, it executes a special routine for it, the interrupt handler. Once the interrupt has been handled, the program returns to the instruction that was interrupted. Interrupts help the processor run efficiently by allowing it to complete multiple tasks in a very short time.

Increasing the performance of computing systems is a natural demand that keeps competition alive. The basic approaches are either to increase the number of operations performed in a given time or to decrease the time spent on each operation. Let us look at three of these technologies used in the last ten years.

The first is increasing the clock rate of the processor. We know that instruction execution is a sequential, repetitive cycle, and the speed of the processor is determined by how many cycles it takes to execute an instruction, or how many instructions can be executed per clock cycle. Where clock rates were once measured in hertz, and later in megahertz, they are now measured in gigahertz. For example, my laptop runs on a 7th-generation 2.2 GHz processor, which is considered average nowadays.

Next, the concept of pipelining was invented. Executing an instruction involves a simple cycle of fetching and executing, done sequentially. With pipelining, the processor does not have to wait for one cycle to finish before starting the next one; instead, the cycles can overlap. For example, once an instruction has been fetched and its execution has begun, the processor can already start fetching the next instruction while continuing to execute the first. This way we get higher performance from the processor.

A third technique was based on increasing the number of processors in a computer, which was not cost-effective. Intel instead came up with a technology called Hyper-Threading, in which one physical processor chip can act as two, executing two independent instruction streams at the same time.
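The effect of having several logical processors can be observed from software. This sketch, using Python's standard library, simply reports the logical processor count (which Hyper-Threading may double relative to the physical core count) and divides two independent pieces of CPU-bound work across two worker processes; the function name and workload are hypothetical:

```python
import os
from multiprocessing import Pool

def busy_work(n):
    # A CPU-bound loop standing in for an independent instruction stream.
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    # With Hyper-Threading, os.cpu_count() reports logical processors,
    # which may be double the number of physical cores.
    print("logical processors:", os.cpu_count())
    with Pool(2) as pool:  # two independent streams as two processes
        results = pool.map(busy_work, [1000, 1000])
    print(results)  # [499500, 499500]
```

On a chip with enough logical processors, the two calls to busy_work can genuinely run at the same time rather than taking turns.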
Again using my laptop as an example: it has one physical processor chip that works with four cores, meaning it can perform four independent instruction executions.

We know that the amount of main memory plays an important role in a computer's performance. We also know that processors work much faster than main memory. But thanks to cache memory, located between the processor and main memory, we do not really feel that speed difference. Cache memory is more expensive (and also faster) and smaller than main memory. When the processor requests information, the cache supplies it, after moving it from main memory, at a much higher speed than main memory can, which helps the processor do more work in much less time.

Buses, as one of the main components of the computer, have also been improved over time. This applies especially to two of their characteristics: bus speed and bus width. Bus speed determines how much information per second can be transferred over the bus (also measured in hertz). Bus width determines how many bits can be transferred at the same time; more bits means better performance.

References

Printable Lectures from MET CS 200 01 Fundamentals of Information Technology (2018 Spring).