# Synopsis on virtualizing the CPU

Turning a single CPU (or a small set of them) into a seemingly infinite number of CPUs, and thus allowing many programs to seemingly run at once, is what we call virtualizing the CPU.

As we might notice, the ability to run multiple programs at once raises all sorts of new questions. For example, if two programs want to run at a particular time, which one should run?

# The Abstraction: The Process

The OS creates this illusion by virtualizing the CPU. By running one process, then stopping it and running another, and so forth, the OS can promote the illusion that many virtual CPUs exist when in fact there is only one physical CPU (or a few). This basic technique is known as time sharing of the CPU.

To implement virtualization of the CPU, and to implement it well, the OS will need both some low-level machinery (mechanisms) as well as some high-level policies.

• Low-level mechanisms

Mechanisms are low-level methods or protocols that implement a needed piece of functionality. For example, a context switch, which stops a running program and starts running another on a given CPU.

• High-level policies

Policies are algorithms for making some kind of decision within the OS. For example, which program should gain the CPU? A scheduling policy in the OS will make this decision, likely using historical information, workload knowledge, and performance metrics.

(This mechanism/policy distinction is quite common in this book; it comes up again later when virtualization of memory is introduced.)

## The Abstraction: A Process

A process is simply a running program.

To understand what constitutes a process, we thus have to understand its machine state: what a program can read or update when it is running.

• Memory (or, as we call it, the address space)
• Registers
• Program counter (PC), sometimes called instruction pointer or IP, which tells us which instruction of the program is currently being executed.
• Stack pointer and frame pointer
• The ability to access persistent storage devices.

## Process Creation: A Little More Detail

One mystery that we should unmask a bit is how programs are transformed into processes.

• The first thing that the OS must do to run a program is to load its code and any static data into memory, into the address space of the process. Programs initially reside on disk (or, in some modern systems, flash-based SSDs) in some kind of executable format; thus, the process of loading a program and static data into memory requires the OS to read those bytes from disk and place them in memory somewhere. To truly understand how a modern OS performs this, you'll have to understand more about the machinery of paging and swapping, topics we'll cover in the future.
• Once the code and static data are loaded into memory, there are a few other things the OS needs to do before running the process, such as allocate memory for the program's run-time stack.
• The OS may also allocate some memory for the program's heap.
• The OS will also do some other initialization tasks, particularly as related to I/O.

## Data Structures

The OS is a program, and like any program, it has some key data structures that track various relevant pieces of information. To track the state of each process, for example, the OS likely will keep some kind of process list for all processes.

The xv6 Proc Structure:

```c
// the registers xv6 will save and restore
// to stop and subsequently restart a process
struct context {
  int eip;
  int esp;
  int ebx;
  int ecx;
  int edx;
  int esi;
  int edi;
  int ebp;
};

// the information xv6 tracks about each process
// including its register context and state
struct proc {
  char *mem;                  // Start of process memory
  uint sz;                    // Size of process memory
  char *kstack;               // Bottom of kernel stack
                              // for this process
  enum proc_state state;      // Process state
  int pid;                    // Process ID
  struct proc *parent;        // Parent process
  void *chan;                 // If non-zero, sleeping on chan
  int killed;                 // If non-zero, have been killed
  struct file *ofile[NOFILE]; // Open files
  struct inode *cwd;          // Current directory
  struct context context;     // Switch here to run process
  struct trapframe *tf;       // Trap frame for the
                              // current interrupt
};
```

# Mechanism: Limited Direct Execution

How can the OS efficiently virtualize the CPU while retaining control?

The OS must virtualize the CPU in an efficient manner while retaining control over the system. To do so, support from both the hardware and the operating system will be required.

## Basic Technique: Limited Direct Execution

The "direct execution" part of the idea is simple: just run the program directly on the CPU.

When the OS wishes to start a program running, it creates a process entry for it in the process list (and does some other work), then starts running the user's code.

But this approach gives rise to a few problems:

1. How can the OS make sure the program doesn't do anything that we don't want it to do?
2. When running a process, how does the OS stop it from running and switch to another process, thus implementing the time sharing of the CPU?

## Problem 1: Restricted Operations

The hardware assists the OS by providing different modes of execution.

• In user mode, applications do not have full access to hardware resources.
• In Kernel mode, the OS has access to full resources of the machine.

Special instructions to trap into the kernel and return-from-trap back to user-mode programs are also provided, as well as instructions that allow the OS to tell the hardware where the trap table resides in memory.

## Problem 2: Switching Between Processes

How can the OS regain control of the CPU so that it can switch between processes?

• A Cooperative Approach: Wait For System Calls

In a cooperative scheduling system, the OS regains control of the CPU by waiting for a system call, or for an illegal operation of some kind, to take place.

Disadvantage: this passive approach is less than ideal. What if a process ends up in an infinite loop and never makes a system call?

• A Non-Cooperative Approach: The OS Takes Control

A timer device can be programmed to raise an interrupt every so many milliseconds; when the interrupt is raised, the currently running process is halted, and a pre-configured interrupt handler in the OS runs. At this point, the OS has regained control of the CPU, and thus can do what it pleases: stop the current process, and start a different one.

## Saving and Restoring Context

Now that the OS has regained control, whether cooperatively via a system call, or more forcefully via a timer interrupt, a decision has to be made: whether to continue running the currently-running process, or switch to a different one.

This decision is made by a part of the operating system known as the scheduler.
If the decision is made to switch, the OS then executes a low-level piece of code which we refer to as a context switch.

# Summary

We thus have the basic mechanisms for virtualizing the CPU in place. But a major question is left unanswered: which process should we run at a given time?

That is the job of the scheduler.