
A process is a unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources

Processes of the operating system

Process

A process is a unit of processing managed by the operating system (it is a program in execution)

The operating system implements a collection of abstract machines

Each machine is a simulation of the underlying von Neumann hardware

The process manager creates the environment in which multiple processes coexist. Each one runs in its own abstract machine, providing multitasking

A resource is any element of the abstract machine that can be requested by a process

A process requests a resource by making a system call, which can cause the process to block if the resource is not available

Allocating the resource makes it part of the configuration of the process's abstract machine

Examples of resources: processor, main memory, I/O devices, files

Tables of the operating system

The operating system maintains tables that describe the processes and resources of the system:

  • Process tables: tables of PCBs (process control blocks)
  • Memory tables: information on the use of memory
  • I/O tables: information associated with the peripherals and the I/O operations
  • File tables: information about open files

Operating system tables

Memory image of the process

A process's address space is the set of logical addresses that the process can reference

On 32-bit computers the maximum is 4 GB

On 64-bit computers the theoretical maximum is far larger (2^64 bytes); at present, however, no boards support physical memories anywhere near that size

The memory image of the process is the block of physical memory assigned to the process

The address space of the process is limited to the corresponding block of physical addresses

The address space is important for the protection of resources against unauthorized access

Processor state

While a process runs, its processor state resides in the processor registers:

  • General-purpose registers
  • Program counter (PC)
  • Stack pointer (SP)
  • Status registers
  • Special registers

When the process is not running, its processor state resides in its PCB

Context switch

When an interrupt occurs:

  • The processor state is saved in the corresponding PCB
  • The operating system's interrupt-handling routine starts executing

Scheduler:

  • Module of the operating system that selects the next process to run

Dispatcher:

  • Module of the operating system that puts the selected process into execution

Process Control Block (PCB)

Process identification information

  • Process and parent process ID (pid, ppid)
  • Real and effective user ID (uid, euid)
  • Real and effective group ID (gid, egid)

Processor state information

Process control information

  • Scheduling and state information
  • Description of the address space
  • Assigned resources (open files, ports, ...)
  • Inter-process communication
  • Pointers for structuring the processes into lists or queues

Process Control

Process control introduces threads of execution: sequences of chained tasks small enough to be individually scheduled by the operating system

Two-state process model

A process can be in one of two basic states:

  • Running
  • Not running

Two-state process

Non-running processes

With a single queue of non-running processes, the scheduler would have to traverse it to find a process that is not blocked, so a plain FIFO (first-in, first-out) strategy cannot be used

Two types of non-running processes:

  • Ready to run
  • Blocked: waiting for an I/O operation to complete

Five-state model

  • Running
  • Ready
  • Blocked
  • New
  • Terminated

Five-state process

Suspended processes

The operating system may decide to "evict" a process from main memory and transfer it to disk

Two new states:

  • Blocked/Suspended: The process is in secondary memory waiting for an event
  • Ready/Suspended: The process is in secondary memory, available for execution

Process model with two suspension states

Signals and exceptions

Signals and exceptions are the mechanisms an operating system uses to notify a process that a particular event has occurred

Signals are used in POSIX and exceptions are used in Windows

Signals

A signal is a software interrupt delivered to a process

Receiving a signal:

  • The process stops its execution
  • It branches to execute a signal-handling routine (whose code must be part of the process itself)
  • Once the handling routine finishes, the process resumes its execution

Sending a signal:

  • The source can be the operating system or another process
  • In POSIX it is done with the kill service

Exceptions

An exception is an event that occurs during the execution of a program and requires running a piece of code outside the normal flow of execution

It can be generated by the hardware or by the software

Exception handling requires support from the programming language, such as the try/catch blocks of Java

Threads

A modern process manager separates the dynamic execution of a process from the static aspects of its computational environment:

  • Modern process: unit of resource ownership
  • Thread (or lightweight process): unit of execution

A process can contain a single thread of execution or several

States of a thread

States:

  • Running
  • Ready
  • Blocked

The suspended state belongs to the concept of process, not of thread

Basic operations related to thread state changes:

  • Creation/destruction of a thread
    • When a process is created, a thread is also created
    • A thread of a process can create other threads within the same process
  • Block/unblock
    • A thread blocks while waiting for a resource it has requested

Processes in a multi-threaded environment

Information private to each thread:

  • Processor registers: program counter, stack, status, etc.
  • Thread state: Running, Ready or Blocked

Information shared by all threads in the same process:

  • Address space: code and data
  • Global variables
  • Open files
  • Other shared resources

Benefits of threads

They facilitate modularity by allowing each task to be encapsulated in an independent thread

They increase job execution speed:

  • It takes less time to create/destroy a thread than a process
  • It takes less time to switch context between two threads of the same process
  • Because threads in the same process share memory and files, they can communicate with each other without invoking the kernel

They allow concurrent programming

Server process

A server is a process that waits to receive work orders from other processes (its clients)

Once an order is received, the server executes it and responds to the client with the result

Communication between the client and server processes is done through ports

Daemons

Daemons are started when the system boots. They are always active and do not die

They run in the background and are not associated with a terminal or login process

They typically wait for an event or perform a task periodically

They do not do the work directly: they launch other processes (or threads) to do it

Examples: FTP server, web server, ...

POSIX process management


Identification of processes

POSIX identifies each process by a unique integer called the process id of type pid_t

The function to get the id of the calling process is pid_t getpid(void)

The function to get the id of the parent process is pid_t getppid(void)

The function to get the real user id is uid_t getuid(void)

The function to get the real group id is gid_t getgid(void)

Environment of a process

The environment of a process consists of the list of variables that are passed to the process when it starts running

They are accessible through the external variable char **environ, which points to a NULL-terminated list of environment strings:

Some environment variables:

  • HOME: the user's initial working directory
  • LOGNAME: name of the user associated with the process

The function to get the value of an environment variable is char *getenv(const char *name)

Creation of processes

The function to create a process is pid_t fork(void)

It returns the id of the child process to the parent and 0 to the child; it returns -1 in case of failure

It creates a child process that runs the same program as the parent and inherits its open files (the descriptors are copied)

The functions for running a different program (code) are:

As arguments path, executable file file and arg are used as executable arguments

Returns -1 in case of error. If successful it will not return any value

Changes the memory image of the process. The same process runs another program but keeps the files open

Termination of processes

The function to terminate a process is void exit(int status)

Its status argument is the return code passed to the parent process

It ends the execution of the process

It closes all open file descriptors

It releases all the resources of the process

Waiting for the termination of a process

The functions to wait for the completion of a child process are pid_t wait(int *status) and pid_t waitpid(pid_t pid, int *status, int options)

The arguments are status, which receives the termination code of the child process; pid, the identifier of the process to wait for; and options, flags that modify the behavior of the call

Returns the identifier of the child process or -1 in case of error

Allows a parent process to wait until the end of the execution of a child process

Example program

Scheduling

Scheduling determines which programs are admitted into the system and which process runs at each moment

Types of scheduling

  • Long-term scheduling: decides which processes are added to the set of processes to be executed
  • Medium-term scheduling: decides which processes are added to the set of processes that are partially or fully in main memory
  • Short-term scheduling: decides which available process/thread will execute on the processor

Long-term scheduling

Determines which programs are admitted into the system. It must make two decisions:

  • When a new process can be created
  • Which process will be admitted next. It should mix CPU-bound processes with I/O-bound processes

It controls the degree of multiprogramming: the more processes are created, the smaller the percentage of time each process can run

Medium-term scheduling

It is part of the swapping function

It is based on the need to control the degree of multiprogramming

If virtual memory is not used, it must consider the memory needs of each process

Short-term scheduling

CPU scheduling: manages the sharing of the CPU among the processes/threads that are ready to run

The goal of the scheduler is to divide processor time among the processes/threads that can run

It should consider factors such as:

  • Fairness: spread CPU usage among processes
  • Efficiency: avoid CPU idle time
  • Throughput: maximize the number of completed requests

Scheduling mechanisms

The scheduler consists of three logical components:

  • Enqueuer: when a process changes to the Ready state, the enqueuer places it in a queue data structure
  • Context switcher: when a process is to be evicted from the CPU, the context switcher saves the contents of the CPU registers in the process's PCB
  • Dispatcher: the dispatcher selects one of the processes in the Ready queue and assigns it the CPU

Scheduling queue diagram

Scheduling criteria

Criteria for comparing the performance of the various scheduling algorithms:

  • Service time (T_s): estimated execution time
  • Turnaround time (T_r): time elapsed from when the process arrives in the system until it finishes
  • Waiting time (T_w): sum of the periods during which the process is not running
  • Normalized turnaround time (\frac{T_r}{T_s}): turnaround time divided by service time

Use of priorities

The scheduler always selects a higher-priority process before a lower-priority one

It uses multiple Ready queues, one for each priority level

Lower-priority processes can starve (never be chosen)

One solution to starvation is to let a process change its priority based on its age or execution history

Scheduling policies

They rely on a selection function to determine which process, from among the Ready ones, is chosen to execute

The function can be based on priorities, resource needs, or the execution characteristics of the processes

There are two kinds of algorithms depending on the decision mode (the instant at which the selection function is applied):

  • Non-preemptive: once the process enters the Running state, it continues to run until it terminates or blocks waiting for an I/O operation (burst)
  • Preemptive: the process that is currently running can be interrupted by the operating system and moved to the Ready state. Preemptive algorithms give better service, since they prevent a process from monopolizing the processor for a long time

Scheduling algorithms

Non-preemptive algorithms:

  • FIFO (First-come, First-served): first to arrive, first to be served
    • Selects for execution the oldest process in the Ready queue
    • Easy to implement
    • Very high average waiting time
    • Penalizes short processes against long ones
    • Favors CPU-bound processes over I/O-bound processes
    • It is often used combined with priorities
  • SPN (Shortest Process Next): shortest process first
    • Selects the process with the shortest service time
    • The expected execution time of each process is difficult to estimate
    • Minimizes the average waiting time
    • Penalizes long processes against short ones
    • Long processes may starve

Preemptive algorithms:

  • Round Robin (RR)
    • Uses a time quantum, the slice of time each process is allowed to use the CPU
    • Evicts the process that has consumed its quantum, which moves to the back of the Ready queue
    • Choosing the quantum size is difficult (a common guideline is that around 80% of CPU bursts should be shorter than the quantum)
    • Favors CPU-bound processes over I/O-bound processes
    • It has a large waiting time, but guarantees a distribution of the CPU with good response times
  • FB (Feedback): multilevel feedback
    • Divides the Ready queue into a hierarchy of queues RQ_0, \cdots, RQ_n, each with a priority level
    • Uses a time quantum and a dynamic priority mechanism
    • Processes enter at RQ_0 and on each execution burst descend to the next queue
    • FIFO is used within each queue, except the lowest-priority one, which is handled round-robin
    • Favors short processes over older, longer ones
      • Short processes finish quickly, without descending too far in the queue hierarchy
      • Long processes gradually descend
    • There are many variations of this scheme
    • To avoid starvation of long processes, the quantum can be varied as a function of each queue

Scheduling in POSIX

Each scheduling policy has an associated range of at least 32 priority levels

The scheduler selects the process with the highest priority

When a process is preempted by a higher-priority process, it becomes the first in the queue associated with its priority

Three scheduling policies coexist in the scheduler:

  • SCHED_FIFO: a preemptive scheduling policy based on static priorities, in which processes of the same priority are served in first-come, first-served order (FIFO queue). This policy has at least 32 priority levels
  • SCHED_RR: very similar to SCHED_FIFO, but uses a round-robin method to schedule processes of the same priority. It also has at least 32 priority levels
  • SCHED_OTHER: a scheduling policy defined by the implementation

POSIX services for process scheduling

The three scheduling policies are defined in the header file <sched.h>

Change the scheduling parameters of a process: int sched_setscheduler(pid_t pid, int policy, const struct sched_param *param)

Obtain the scheduling parameters of a process: int sched_getparam(pid_t pid, struct sched_param *param)

Concurrency

Concurrency is a property of systems in which several processes of a computation execute simultaneously and can interact with each other

Computing models on which concurrent processes can run:

  • Multiprogramming with a single processor
  • Multiprocessor
  • Distributed processing

It affects a large number of operating system design issues:

  • Communication between processes
  • Sharing and competition for resources
  • Synchronization of the execution of several processes
  • Allocation of processor time to processes

Interaction between the processes

Types of processes:

  • Independent
  • Cooperating

Interaction between processes:

  • Processes share or compete for access to physical or logical resources (including independent processes)
  • Processes communicate and synchronize among themselves to achieve a common goal

Competition among processes for resources

The main control problem is the need for mutual exclusion: while a process is using a shared resource, other processes must not be allowed to access it

Enforcing mutual exclusion creates two additional problems:

  • Deadlock
  • Starvation

Classic problems of communication and synchronization

  • The problem of the critical section
  • The problem of the producer-consumer
  • The problem of readers-writers
  • Client-server communication

The problem of the critical section

We have n concurrent processes, which can be independent or cooperating

The critical section: each process has a fragment of code from which it accesses some shared resource

While one process is executing inside its critical section, no other process may execute inside its own

Example of critical section

Two processes P_1 and P_2 share variables a and b. The variables satisfy the invariant a = b

Consider the following concurrent execution:

At the end of the execution the condition a = b no longer holds

The solution is to use mutual exclusion when entering the critical sections

It is necessary to use some synchronization mechanism

Requirements that any solution must offer:

  • Mutual exclusion
  • Absence of deadlock
  • Bounded waiting: avoids starvation

The problem of the producer-consumer

One or more producers generate data and put them in a buffer

A single consumer takes items from the buffer one-by-one

Only one producer or consumer may access the buffer at a given instant

Example of producer-consumer

Producer-consumer problem

The problem of readers-writers

Any number of readers can read the file simultaneously

Only one writer may write to the file at any given moment

If a writer is accessing the file, no reader can read it

Example of readers-writers

Reader-Writer Problem

Client-server communication

Server processes offer a number of services to other processes, their clients

The server process can reside on the same machine as the client or on a different one

Example of client-server

Client-server communication

Communication mechanisms

  • Files
  • Variables in shared memory
  • Pipes
    • Unnamed: pipes
    • Named: FIFOs
  • Message passing

Synchronization mechanisms

  • Signals
  • Pipes
    • Unnamed: pipes
    • Named: FIFOs
  • Semaphores
  • Monitors and condition variables
  • Message passing

Pipes

Mechanism for communication and synchronization

A pipeline or pipe is a data structure implemented in the kernel of the operating system for communication between address spaces

It can only be used between processes that inherit it through the fork call

Unnamed pipes: pipes

They use a FIFO buffer with a one-way data flow. They have one read endpoint and one write endpoint treated using file descriptors:

  • Writing: put data in the pipe
  • Reading: extract data from the pipe

POSIX pipe services

Create an unnamed pipe: int pipe(int fd[2])

fd[0] is the file descriptor for reading from the pipe

fd[1] is the file descriptor for writing to the pipe

Close one end of a pipe: int close(int fd)

fd is the file descriptor of the end to close

Read from a pipe: ssize_t read(int fd, void *buffer, size_t nb)

The arguments are fd, the pipe's read descriptor; buffer, the variable where the data read is stored; and nb, the maximum number of bytes to read

If the pipe is empty, the reading process is blocked until some process enters data

If the pipe is not empty, the call returns the number of bytes read and removes the requested data from the pipe

Write to a pipe: ssize_t write(int fd, const void *buffer, size_t nb)

The arguments are fd, the pipe's write descriptor; buffer, the variable holding the data to write; and nb, the number of bytes to write

If the pipe fills up, the writing process is blocked until the write can be completed

The read and write operations are performed atomically

Create a named pipe: int mkfifo(const char *name, mode_t mode)

There is no need to inherit it via fork

Open a named pipe: int open(const char *name, int flags)

Delete a named pipe: int unlink(const char *name)

FIFOs are read, written, and closed with the same read, write, and close services as unnamed pipes