Process Scheduling
Chapter 5: Process Scheduling
Basic Concepts
• Process Scheduling is
- the basis of multiprogrammed operating systems
• Terminology
- CPU scheduling, Process scheduling, Kernel Thread scheduling
- these terms are used interchangeably; here we use process scheduling
• In a single-processor system (=>single-threaded process)
- Only one process can run at a time
- Any others must wait until the CPU is free and can be rescheduled.
- When the running process goes to the waiting state,
- the OS may select another process and assign it the CPU to improve CPU utilization.
- Every time one process has to wait, another process can take over use of the CPU
• Process scheduling is
- a fundamental function of the operating system
- to select a process from the ready queue and assign it the CPU
• Maximum CPU utilization obtained with multiprogramming
- by process scheduling
Diagram of Process State from ch.3
• It is important to understand that only one process can be running on any processor at any instant.
• Many processes may be in the ready and waiting states.
CPU - I/O burst cycle
• Process execution consists of
- a cycle of CPU execution (CPU burst) and I/O wait (I/O burst)
• Process alternate between these two states
- Process execution begins with a CPU burst, which is followed by an I/O burst, and so on.
- Eventually, the final CPU burst ends with a system call to terminate execution.
• CPU burst distribution of a process
- varies greatly from process to process and from computer to computer
Histogram of CPU-burst Times
• CPU burst distribution is generally characterized
- as exponential or hyper-exponential
- with a large number of short CPU bursts and a small number of long CPU bursts
• An I/O-bound process has many short CPU bursts
• A CPU-bound process might have a few long CPU bursts.
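The exponential burst-length distribution described above can be sketched with a quick simulation. The mean burst length of 10 ms is an assumption chosen for illustration, not a value from the notes:

```python
import random

random.seed(0)  # deterministic for reproducibility

# Sample CPU-burst lengths from an exponential distribution with mean 10 ms,
# the common textbook model for burst-length distributions.
bursts = [random.expovariate(1 / 10) for _ in range(10_000)]

short = sum(1 for b in bursts if b < 10)    # bursts shorter than the mean
long_ = sum(1 for b in bursts if b >= 10)   # bursts at or above the mean

print(f"short bursts: {short}, long bursts: {long_}")
```

For an exponential distribution, roughly 63% of samples fall below the mean, so short bursts dominate, matching the histogram's shape.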
Process Scheduler
• is one of OS modules.
• selects one of the processes in memory that are ready to execute, and allocates the CPU to the selected process.
• CPU scheduling decisions may take place when a process:
1. switches from running to waiting state: I/O request, or invocation of wait() for the termination of a child process
2. switches from running to ready state: when interrupt occurs
3. switches from waiting to ready: at completion of I/O
4. terminates: running state → terminated state
5. starts: new state → ready state
• Scheduling under 1 and 4 is non-preemptive
• Scheduling under 2, 3 and 5 is preemptive
Non-preemptive vs. Preemptive
• Non-preemptive Scheduling
• Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU
• either by terminating or by switching to the waiting state.
• used by Windows 3.x
• Preemptive Scheduling
- The currently running process can be switched out for another at any time
○ because an interrupt can occur at any time
- Most modern OSes provide this scheme (Windows, macOS, UNIX)
- incurs a cost associated with access to shared data among processes
- affects the design of the OS kernel
○ Certain OSes (e.g., UNIX) wait either for a system call to complete or for an I/O block to take place before doing a context switch.
○ protects critical kernel code by disabling and enabling interrupts at the entry and exit of the code.
Dispatcher
• Dispatcher module is a part of a Process Scheduler
• gives control of the CPU to the process selected by the short-term scheduler; this involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
• Based on scheduling criteria, the performance of various scheduling algorithms can be evaluated.
- Different scheduling algorithms have different properties.
• CPU utilization – the percentage of time the CPU is kept busy per time unit.
• Throughput – # of processes that complete their execution per time unit.
• Turnaround time – the interval from the time of submission of a process to the time of completion.
- sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O
• Waiting time – Amount of time a process has been waiting in the ready queue, which is affected by scheduling algorithm
• Response time – in an interactive (time-sharing) system, the amount of time from when a request is submitted until the first response is produced
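The turnaround- and waiting-time definitions above can be checked on a tiny hand-made workload. The process names, arrival times, and burst lengths below are invented for illustration, and FCFS order is assumed:

```python
# Hypothetical workload: name -> (arrival time, CPU burst), in FCFS order.
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}

time, metrics = 0, {}
for name, (arrival, burst) in procs.items():   # dicts preserve insertion order
    start = max(time, arrival)                 # CPU may be busy when we arrive
    finish = start + burst
    turnaround = finish - arrival              # completion - submission
    waiting = turnaround - burst               # time spent in the ready queue
    metrics[name] = (turnaround, waiting)
    time = finish

for name, (t, w) in metrics.items():
    print(f"{name}: turnaround={t}, waiting={w}")
```

Note how waiting time is just turnaround time minus the burst length, which is why scheduling algorithms are usually compared on waiting time alone.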
Optimization Criteria
• It is desirable to maximize:
- The CPU utilization
- The throughput
• It is desirable to minimize:
- The turnaround time
- The waiting time
- The response time
• However, in some circumstances it is desirable to optimize the minimum or maximum values rather than the average.
- In interactive systems, it is more important to minimize the variance in response time than to minimize the average response time.
Process Scheduling Algorithms
• First-Come, First-Served Scheduling (FCFS) – serve the process that arrived first
• Shortest-Job-First Scheduling (SJF) – serve the process that will finish soonest
• Priority Scheduling
• Round-Robin Scheduling
• Our measure of comparison is the average waiting time.
- (other criteria: CPU utilization, throughput, turnaround time)
FCFS Scheduling
• FCFS scheduling algorithm is non-preemptive
- Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
- It is particularly troublesome for time-sharing systems (interactive computing environments).
• Convoy effect (a problem of FCFS) occurs.
- occurs when one CPU-bound process with a long CPU burst runs alongside many I/O-bound processes with short CPU bursts.
- All the I/O-bound processes wait for the CPU-bound process to get off the CPU while the I/O devices sit idle.
- Then all the processes execute their I/O operations while the CPU sits idle.
- This results in low CPU and device utilization.
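The cost of the convoy effect can be shown numerically. This is a minimal sketch assuming three jobs all arriving at time 0: one long CPU-bound burst (24 ms) and two short ones (3 ms each):

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS for jobs that all arrive at t=0."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed      # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

# Long CPU-bound job arrives first: the short jobs convoy behind it.
print(fcfs_avg_wait([24, 3, 3]))  # → 17.0

# Same jobs, short ones first: average waiting time drops sharply.
print(fcfs_avg_wait([3, 3, 24]))  # → 3.0
```

The same set of bursts yields an average wait of 17 ms or 3 ms purely depending on arrival order, which is exactly why FCFS is sensitive to the convoy effect.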
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst.
• Use these lengths to schedule the process with the shortest next CPU burst.
• Two schemes:
- non-preemptive – once the CPU has been given to the process, it cannot be preempted until it completes its CPU burst
- preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
• SJF is optimal – it gives the minimum average waiting time for a given set of processes
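A small sketch makes the optimality claim concrete. The burst lengths below are a typical textbook-style example, and all jobs are assumed to arrive at t = 0 (so non-preemptive SJF is simply "run in sorted order"):

```python
def avg_wait(order):
    """Average waiting time when bursts run back-to-back from t=0."""
    wait = elapsed = 0
    for b in order:
        wait += elapsed
        elapsed += b
    return wait / len(order)

bursts = [6, 8, 7, 3]              # arrival order; all arrive at t=0

fcfs = avg_wait(bursts)            # FCFS: run in arrival order
sjf = avg_wait(sorted(bursts))     # SJF: run shortest burst first

print(fcfs)  # → 10.25
print(sjf)   # → 7.0
```

Moving short bursts to the front reduces the waiting time of every process queued behind them, which is the intuition behind SJF's optimality.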
Round Robin (RR) Scheduling
• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
• After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at a time.
• No process waits more than (n-1) x q time units.
• Performance depends on the size of the time quantum.
- q large => RR behaves the same as FCFS
- q small => provides high concurrency (processor sharing): each of n processes appears to have its own processor running at 1/n the speed of the real processor
• The time quantum q must be large with respect to the context-switch time; otherwise the overhead is too high.
• If the context switching time is 10% of the time quantum, then about 10% of the CPU time will be spent in context switching.
• Most modern OS have time quanta ranging from 10 to 100 milliseconds,
• The time required for a context switch is typically less than 10 microseconds; thus the context-switch time is a small fraction of the time quantum.
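A minimal RR simulation makes the quantum's effect concrete. It assumes all processes arrive at t = 0 and ignores context-switch cost; the burst lengths and quantum values are illustrative:

```python
from collections import deque

def round_robin(bursts, q):
    """Per-process waiting times under RR with quantum q (all arrive at t=0)."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))   # FIFO ready queue of process indices
    t = 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])      # run for one quantum or until done
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # preempted: back to the tail
        else:
            finish[i] = t
    # waiting time = completion time - burst (arrival is 0 for all)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(round_robin([24, 3, 3], q=4))    # → [6, 4, 7]
print(round_robin([24, 3, 3], q=100))  # quantum > every burst → behaves as FCFS
```

With q = 4 the short jobs finish early, while a huge quantum reproduces the FCFS waiting times, illustrating the "q large => RR is the same as FCFS" point above.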
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
- Preemptive
- Non-preemptive
• SJF is a priority scheduling where priority is the predicted next CPU burst time
• Problem => Starvation – low priority processes may never execute
• Solution =>Aging – as time progresses, increase the priority of the process
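Aging can be sketched as follows. Everything here is an assumption for illustration: lower number means higher priority (as above), AGE_RATE is a made-up boost per scheduling round, and a fresh mid-priority job arrives every round to keep the low-priority process from running on its own:

```python
AGE_RATE = 1  # hypothetical priority boost per round spent waiting

def run_one(ready):
    """Run the highest-priority process once, aging the processes left waiting."""
    ready.sort(key=lambda p: p[1])                        # (name, priority); lower wins
    name, _ = ready.pop(0)
    ready[:] = [(n, max(0, pr - AGE_RATE)) for n, pr in ready]  # aging
    return name

ready = [("starved", 9)]                  # low-priority process, waiting from the start
executed = []
for i in range(12):
    ready.append((f"job{i}", 5))          # a fresh mid-priority job every round
    executed.append(run_one(ready))

print("starved" in executed)              # without aging this would stay False
```

Without the aging line, the priority-9 process would lose to every new priority-5 arrival forever (starvation); with it, its effective priority climbs each round until it is selected.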
Scheduling Algorithms with Multiple Queues
• Multi-level Queue Scheduling
• Multi-level Feedback Queue Scheduling
Multilevel Queue
• The ready queue is partitioned into separate queues, e.g.:
- foreground (interactive)
- background (batch)
• Processes are permanently assigned to one queue, generally based on some property or process type.
• Each queue has its own scheduling algorithm
- foreground – RR
- background – FCFS
• Scheduling must be done between the queues
- Fixed-priority scheduling – serve all from foreground, then from background; possibility of starvation.
- Time slice scheduling – each queue gets a certain amount of CPU time which it can schedule amongst its processes; i.e., 80% to foreground in RR, 20% to background in FCFS
• Under fixed priority, no process in the batch queue can run unless the queues with higher priority are all empty.
• If an interactive editing process enters the ready queue while a batch process is running, the batch process is preempted.
Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented in this way
• Multilevel-feedback-queue scheduler defined by the following parameters:
- number of queues
- scheduling algorithms for each queue
- scheduling algorithms between the queues
- method used to determine when to promote a process
- method used to determine when to demote a process
- method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
- Q0 – RR with time quantum 8 milliseconds
- Q1 – RR with time quantum 16 milliseconds
- Q2 – FCFS
• Scheduling
- A new job enters queue Q0, which is served RR; when it gains the CPU, the job receives 8 milliseconds.
- If it does not finish in 8 milliseconds, job is moved to queue Q1.
- At Q1 job is again served RR and receives 16 additional msec.
If it still does not complete, it is preempted and moved to queue Q2.
- The job is served based on FCFS in queue Q2
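The three-queue example above can be simulated directly. This sketch assumes all jobs arrive at t = 0, so each queue can be drained level by level before the next one runs (no new arrivals ever refill a higher queue); the burst lengths 5, 20, and 40 ms are made up:

```python
from collections import deque

def mlfq(bursts):
    """Three-level MLFQ from the example: Q0 RR q=8, Q1 RR q=16, Q2 FCFS.
    All jobs arrive at t=0; returns completion times, indexed like bursts."""
    quanta = [8, 16, None]              # None = run to completion (FCFS)
    queues = [deque(), deque(), deque()]
    remaining = list(bursts)
    finish = [0] * len(bursts)
    for i in range(len(bursts)):
        queues[0].append(i)             # every new job starts in Q0
    t = 0
    for level, q in enumerate(quanta):
        while queues[level]:
            i = queues[level].popleft()
            run = remaining[i] if q is None else min(q, remaining[i])
            t += run
            remaining[i] -= run
            if remaining[i] > 0:
                queues[level + 1].append(i)   # quantum expired: demote
            else:
                finish[i] = t
    return finish

print(mlfq([5, 20, 40]))  # → [5, 33, 65]
```

The 5 ms job completes within its first Q0 quantum; the 20 ms job uses 8 ms in Q0 and finishes its remaining 12 ms in Q1; the 40 ms job is demoted twice and finishes under FCFS in Q2, exactly as the bullet list describes.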
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available
• Processors within a system may be homogeneous or heterogeneous
• AMP vs. SMP
- Symmetric Multiprocessing (SMP) – each processor makes its own scheduling decisions.
- Asymmetric Multiprocessing (AMP) – only one processor accesses the system data structures, alleviating the need for data sharing.
• Load sharing on SMP system
- keeps the workload evenly distributed across all processors in SMP.
- Push migration vs. Pull migration
Real-Time Scheduling
• Hard real-time systems – required to complete a critical task within a guaranteed amount of time
• Soft real-time computing – requires that critical processes receive priority over less fortunate ones
Author and Source
On this topic (Process Scheduling), more material can be found at the link https://velog.io/@stefanob11/프로세스-스케줄링 . Author attribution: the original author's information is contained in the original URL, and the copyright belongs to the original author.
Dedicated to discovering excellent developer content (Collection and Share based on the CC Protocol.)