Please leave a comment if a link is not working for you. I appreciate your valuable comments and suggestions. For more books, please visit our site. Windows Development: a good source of information on Windows internals; note especially the section on Windows Base Services. Download the William Stallings Operating Systems 6th edition PDF with a Stuvera membership plan, together with many other computer science books, for less than the price of one.
Operating Systems by William Stallings (6th edition, PDF) is a perfect IT book for students and IT practitioners. By using several innovative tools, Stallings makes it possible to understand critical core concepts that can be fundamentally challenging.
This new edition includes web-based animations to aid visual learners. At key points in the book, students are directed to view an animation and are then given assignments to alter the animation input and analyze the results. The concepts are then reinforced and supported by end-of-chapter case studies of UNIX, Linux, and Windows Vista.
Simulation Projects: The IRC provides support for assigning projects based on a set of seven simulations that cover key areas of OS design.
Allocation should be transparent to the programmer. Main memory is volatile: when the computer is shut down, the contents of memory are lost.
The system's operation time t0 is then the time required for the boundary to cross the hole. The compaction operation requires two memory references—a fetch and a store—plus overhead for each of the (1 − f)m words to be moved. Virtual memory paging: not all pages of a process need be in main memory frames for the process to run. In general, the principle of locality allows the algorithm to predict which resident pages are least likely to be referenced in the near future and are therefore good candidates for being swapped out.
Its purpose is to avoid, most of the time, having to go to main memory to retrieve a page table entry. With prepaging, pages other than the one demanded by a page fault are brought in. The page replacement policy deals with the following issue: among the set of pages considered, which particular page should be selected for replacement?
The working set of a process is the set of pages of that process that have been referenced recently. A precleaning policy writes modified pages before their page frames are needed, so that pages can be written out in batches. Split the binary address into a virtual page number and an offset; use the VPN as an index into the page table; extract the page frame number; concatenate the offset to get the physical memory address. b.
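To make the split-and-concatenate procedure concrete, here is a minimal Python sketch of single-level translation under assumed parameters (a 16-bit virtual address, a 1 KB page size, and a small made-up page table); it is an illustration only, not the parameters of the book's exercise.

# Hypothetical parameters: 1 KB pages give a 10-bit offset.
PAGE_SIZE = 1 << 10
OFFSET_BITS = 10

# Hypothetical page table: virtual page number -> page frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    vpn = virtual_addr >> OFFSET_BITS          # split off the virtual page number
    offset = virtual_addr & (PAGE_SIZE - 1)    # low-order bits are the offset
    pfn = page_table[vpn]                      # index the page table (KeyError = page fault)
    return (pfn << OFFSET_BITS) | offset       # concatenate frame number and offset

# Virtual address 0x0424 is page 1, offset 0x024; page 1 maps to frame 2,
# so the physical address is 0x0824.
print(hex(translate(0x0424)))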
Thus, each page table can handle 8 of the required 22 bits. Therefore, 3 levels of page tables are needed. Tables at two of the levels have 2^8 entries; tables at one level have 2^6 entries. Less space is consumed if the top level has 2^6 entries. a. PFN 3, since it was loaded longest ago, at time 20. b. PFN 1, since it was referenced longest ago. c.
These two policies are equally effective for this particular page trace. This occurs for two reasons: (1) a user page table can be paged into memory only when it is needed. Of course, there is a disadvantage: address translation requires extra work. Source: [MAEK87]. The P bit in each segment table entry provides protection for the entire segment. The address space, however, is 2^64 bytes. Adding a second layer of page tables, the top page table would point to 2^10 page tables, addressing a total of 2^32 bytes. But only 2 bits of the 6th level are required, not the entire 10 bits.
So instead of requiring that your virtual addresses be 72 bits long, you could mask out and ignore all but the 2 lowest-order bits of the 6th level. Your top-level page table then would have only 4 entries. Yet another option is to revise the criterion that the top-level page table fit into a single physical page and instead make it fit into 4 pages.
This would save a physical page, which is not much. In the first case (a TLB hit), we pay the 20 ns TLB overhead on top of the memory access time. In the second case, when the TLB does not contain the item, we pay an additional memory access time to get the required entry into the TLB. Snow falling on the track is analogous to page hits on the circular clock buffer. Note that the density of replaceable pages is highest immediately in front of the clock pointer, just as the density of snow is highest immediately in front of the plow.
In fact, it can be shown that the depth of the snow in front of the plow is twice the average depth on the track as a whole. By this analogy, the number of pages replaced by the CLOCK policy on a single circuit should be twice the number that are replaceable at a random time.
The analogy is imperfect because the CLOCK pointer does not move at a constant rate, but the intuitive idea remains. The operating system can maintain a number of queues of page-frame tables. A page-frame table entry moves from one queue to another according to how long its reference bit has stayed at zero. When pages must be replaced, the pages to be replaced are chosen from the queue of the longest-life nonreferenced frames.
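As a rough illustration of the CLOCK behavior discussed above, the following Python sketch keeps a circular buffer of frames with use bits and a sweeping pointer; the frame count and reference string are invented for the example.

class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page held in each frame
        self.use = [0] * nframes         # use (reference) bits
        self.hand = 0                    # the clock pointer

    def access(self, page):
        """Reference a page; return True if the reference causes a page fault."""
        if page in self.frames:          # hit: set the use bit
            self.use[self.frames.index(page)] = 1
            return False
        # Fault: sweep forward, clearing use bits, until a frame with
        # use bit 0 is found; replace the page in that frame.
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return True

clock = Clock(3)
faults = sum(clock.access(p) for p in [1, 2, 3, 2, 4, 1, 5, 2])
print("page faults:", faults)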
Use a mechanism that adjusts the value of Q at each window time as a function of the actual page fault rate experienced during the window. The page fault rate is computed and compared with a system-wide value for the "desirable" page fault rate for a job. The value of Q is adjusted upward (downward) whenever the actual page fault rate of a job is higher (lower) than the desirable value. Experimentation with this adjustment mechanism showed that execution of the test jobs with dynamic adjustment of Q consistently produced a lower number of page faults per execution and a smaller average resident set size than execution with a constant value of Q over a very broad range.
The memory-time product (MT) versus Q using the adjustment mechanism also showed a consistent and considerable improvement over the previous test results using a constant value of Q. If the total number of entries stays at 32 and the page size does not change, then each entry becomes 8 bits wide. By convention, the contents of memory beyond the current top of the stack are undefined.
On almost all architectures, the current top of stack pointer is kept in a well-defined register. Therefore, the kernel can read its contents and deallocate any unused pages as needed.
The reason that this is not done is that little is gained by the effort. If the user program will repeatedly call subroutines that need additional space for local variables (a very likely case), then much time will be wasted deallocating stack space between calls and then reallocating it later.
In either case, the extra logic needed to recognize the case where a stack could be shrunk is unwarranted. Source: [SCHI94]. Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory.
Short-term scheduling: The decision as to which available process will be executed by the processor. Response time is the elapsed time between the submission of a request and the time the response begins to appear as output. Some systems, such as Windows, use the opposite convention: a higher number means a higher priority. Preemptive: The currently running process may be interrupted and moved to the Ready state by the operating system. The decision to preempt may be made when a new process arrives, when an interrupt occurs that places a blocked process in the Ready state, or periodically based on a clock interrupt.
When the currently running process ceases to execute, the process that has been in the ready queue the longest is selected for running. When the interrupt occurs, the currently running process is placed in the ready queue, and the next ready job is selected on a FCFS basis. In this case, the scheduler always chooses the process that has the shortest expected remaining processing time.
When a new process joins the ready queue, it may in fact have a shorter remaining time than the currently running process. Accordingly, the scheduler may preempt whenever a new process becomes ready.
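A minimal Python sketch of this preemptive behavior is shown below; it re-evaluates the ready set every time unit, so a newly arrived job with a shorter remaining time preempts the running one. The arrival and service times are illustrative assumptions, not a specific exercise from the book.

def srt(jobs):
    """jobs: name -> (arrival_time, service_time). Returns finish times."""
    remaining = {n: s for n, (a, s) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:                          # no job has arrived yet: idle
            t = min(jobs[n][0] for n in remaining)
            continue
        n = min(ready, key=lambda j: remaining[j])   # shortest remaining time
        remaining[n] -= 1                      # run the chosen job for one unit
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    return finish

print(srt({"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}))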
When a process first enters the system, it is placed in RQ0 see Figure 9. After its first execution, when it returns to the Ready state, it is placed in RQ1. Each subsequent time that it is preempted, it is demoted to the next lower-priority queue. A shorter process will complete quickly, without migrating very far down the hierarchy of ready queues. A longer process will gradually drift downward. Thus, newer, shorter processes are favored over older, longer processes.
Within each queue, except the lowest-priority queue, a simple FCFS mechanism is used. Once in the lowest-priority queue, a process cannot go lower, but it is returned to this queue repeatedly until it completes execution. The proof can be extended to cover later arrivals. A sophisticated analysis of this type of estimation procedure is contained in Applied Optimal Estimation, edited by Gelb, MIT Press.
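The feedback behavior described above (a new process enters RQ0, drops one queue each time it is preempted, and the lowest queue is revisited until completion) can be sketched briefly in Python. The quantum, number of queues, and job mix below are assumptions chosen only for illustration, and all jobs are treated as arriving at time 0 for simplicity.

from collections import deque

def feedback(jobs, nqueues=3, quantum=1):
    """jobs: name -> service_time. Returns the order of completion."""
    queues = [deque() for _ in range(nqueues)]
    remaining = dict(jobs)
    for name in jobs:                     # every job starts in RQ0
        queues[0].append(name)
    done = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name = queues[level].popleft()
        run = min(quantum, remaining[name])
        remaining[name] -= run
        if remaining[name] == 0:
            done.append(name)
        else:                             # preempted: demote unless already at the bottom
            queues[min(level + 1, nqueues - 1)].append(name)
    return done

print(feedback({"A": 3, "B": 6, "C": 4, "D": 5, "E": 2}))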
If you do, then it is entitled to 2 additional time units before it can be preempted. Here the response ratio of job 1 is the smaller, and consequently job 2 is selected for service at time t.
This algorithm is repeated each time a job is completed to take new arrivals into account. Note that this algorithm is not quite the same as highest response ratio next. The latter would schedule job 1 at time t. Intuitively, it is clear that the present algorithm attempts to minimize the maximum response ratio by consistently postponing jobs that will suffer the least increase of their response ratios.
A proof, due to Mondrup, is reported in [BRIN73]. Consider the queue at time t immediately after a departure and ignore further arrivals. The waiting jobs are numbered 1 to n in the order in which they will be scheduled.
Notice that this proof is valid in general for priorities that are nondecreasing functions of time. For example, in a FIFO system, priorities increase linearly with waiting time at the same rate for all jobs. Therefore, the present proof shows that the FIFO algorithm minimizes the maximum waiting time for a given batch of jobs. Assume that an item with service time Ts has been in service for a time h.
That is, no matter how long an item has been in service, the expected remaining service time is just the average service time for the item. This result, though counter to intuition, is correct, as we now show. Therefore the expected value of the remaining service time is the same as the original expected value of service time.
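One way to make the step explicit is the memoryless property of the exponential distribution; the short derivation below assumes service times are exponentially distributed with rate \mu (mean 1/\mu), the setting in which the stated result holds exactly.

\begin{align*}
\Pr[T_s > h + t \mid T_s > h] &= \frac{\Pr[T_s > h + t]}{\Pr[T_s > h]}
  = \frac{e^{-\mu(h+t)}}{e^{-\mu h}} = e^{-\mu t} = \Pr[T_s > t],\\
E[T_s - h \mid T_s > h] &= \int_0^{\infty} \Pr[T_s - h > t \mid T_s > h]\,dt
  = \int_0^{\infty} e^{-\mu t}\,dt = \frac{1}{\mu}.
\end{align*}

So the expected remaining service time equals the overall mean service time 1/\mu, regardless of the elapsed time h.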
With this result, we can now proceed to the original problem. When an item arrives for service, the total response time for that item will consist of its own service time plus the service time of all items ahead of it in the queue. The total expected response time has three components. Now, consider a newly arrived process, which is placed at the end of the ready queue for service. It must wait until all q processes waiting in line ahead of it have been serviced. An argument in favor of a small quantum: when the ready queue has many processes that are interactive, responsiveness is very important.
An argument in favor of a large quantum: Using a large quantum will enhance the throughput, and the CPU utilization measured with respect to real work, because there is less context switching and therefore less overhead. A system for which both might be appropriate: There are some systems for which both small and large quanta are reasonable.
Although this type of job can be considered a batch job in some sense, it still has to interact with the user. Therefore, during the times when there is no user interaction, the quantum might be increased to optimize throughput, and during interactive times, the quantum might be lowered to provide better responsiveness.
Two adjacent arrivals to the second box (the "service" box) will arrive at a slightly slower rate, since the second item is delayed in its chase of the first item. Since priorities are initially based only on elapsed waiting times, W is clearly independent of the service time x. We have already developed the formula for R. For V, observe that the arrival rate to the service box is λ', and therefore the utilization is ρ'.
When the quantum is decreased to satisfy more users rapidly, two things happen: (1) processor utilization decreases, and (2) at a certain point, the quantum becomes too small to satisfy most trivial requests. Users will then experience a sudden increase in response times because their requests must pass through the round-robin queue several times.
Medium: Parallel processing or multitasking within a single application. Coarse: Multiprocessing of concurrent processes in a multiprogramming environment. Very Coarse: Distributed processing across network nodes to form a single computing environment.
Independent: Multiple unrelated processes. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue. The term load sharing is used to distinguish this strategy from load-balancing schemes in which work is allocated on a more permanent basis. Gang scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Dedicated processor assignment: Each program is allocated a number of processors equal to the number of threads in the program, for the duration of the program execution.
When the program terminates, the processors return to the general pool for possible allocation to another program. Dynamic scheduling: The number of threads in a program can be altered during the course of execution. When a processor becomes idle, it picks the next ready thread, which it executes until completion or blocking.
Smallest Number of Threads First: The shared ready queue is organized as a priority queue, with highest priority given to threads from jobs with the smallest number of unscheduled threads. Jobs of equal priority are ordered according to which job arrives first. As with FCFS, a scheduled thread is run to completion or blocking. Preemptive Smallest Number of Threads First: Highest priority is given to jobs with the smallest number of unscheduled threads.
An arriving job with a smaller number of threads than an executing job will preempt threads belonging to the scheduled job. A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline. In the case of a periodic task, the requirement may be stated as "once per period T" or "exactly T units apart." Responsiveness: Responsiveness is concerned with how long, after acknowledgment, it takes an operating system to service the interrupt.
User control: The user should be able to distinguish between hard and soft tasks and to specify relative priorities within each class. A real-time system may also allow the user to specify such characteristics as the use of paging or process swapping, what processes must always be resident in main memory, what disk transfer algorithms are to be used, what rights the processes in various priority bands have, and so on.
Reliability: Reliability must be provided in such a way as to continue to meet real- time deadlines. Fail-soft operation: Fail-soft operation is a characteristic that refers to the ability of a system to fail in such a way as to preserve as much capability and data as possible.
The result of the analysis is a schedule that determines, at run time, when a task must begin execution. Static priority-driven preemptive approaches: Again, a static analysis is performed, but no schedule is drawn up. Rather, the analysis is used to assign priorities to tasks, so that a traditional priority-driven preemptive scheduler can be used.
Dynamic planning-based approaches: Feasibility is determined at run time dynamically rather than offline prior to the start of execution statically. An arriving task is accepted for execution only if it is feasible to meet its time constraints. One of the results of the feasibility analysis is a schedule or plan that is used to decide when to dispatch this task.
Dynamic best effort approaches: No feasibility analysis is performed. The system tries to meet all deadlines and aborts any started process whose deadline is missed. Ready time: Time at which a task becomes ready for execution. In the case of a repetitive or periodic task, this is actually a sequence of times that is known in advance.
In the case of an aperiodic task, this time may be known in advance, or the operating system may only be aware when the task is actually ready. Starting deadline: Time by which a task must begin. Completion deadline: Time by which task must be completed. The typical real-time application will either have starting deadlines or completion deadlines, but not both.
Processing time: Time required to execute the task to completion. In some cases, this is supplied. In others, the operating system measures an exponential average. For still other scheduling systems, this information is not used. Resource requirements: Set of resources other than the processor required by the task while it is executing.
Priority: Measures relative importance of the task. Hard real-time tasks may have an "absolute" priority, with the system failing if a deadline is missed.
If the system is to continue to run no matter what, then both hard and soft real-time tasks may be assigned relative priorities as a guide to the scheduler. Subtask structure: A task may be decomposed into a mandatory subtask and an optional subtask.
Only the mandatory subtask possesses a hard deadline. Each square represents five time units; the letter in the square refers to the currently running process.
The first row is fixed-priority scheduling; the second row is earliest-deadline scheduling using completion deadlines. A task with a laxity of t may be delayed up to an interval of t and still meet its deadline. A laxity of 0 means that the task must be executed now or it will fail to meet its deadline. A task with negative laxity cannot meet its deadline. The resulting schedules differ in the number of potentially costly context switches: 13, 11, and 13, respectively. Any timeline repeats itself every 24 time units.
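A compact sketch of earliest-deadline dispatching with completion deadlines may help; the task set below (arrival time, execution time, deadline) is a made-up example, not one of the periodic task sets analyzed in the text.

def edf(tasks):
    """tasks: name -> (arrival, exec_time, deadline). Returns (schedule, missed)."""
    remaining = {n: e for n, (a, e, d) in tasks.items()}
    schedule, missed, t = [], [], 0
    while remaining:
        ready = [n for n in remaining if tasks[n][0] <= t]
        if not ready:
            t = min(tasks[n][0] for n in remaining)      # idle until the next arrival
            continue
        n = min(ready, key=lambda j: tasks[j][2])        # earliest completion deadline
        schedule.append((t, n))
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            if t > tasks[n][2]:
                missed.append(n)
            del remaining[n]
    return schedule, missed

sched, missed = edf({"A": (0, 2, 5), "B": (1, 3, 8), "C": (3, 1, 4)})
print(missed or "all deadlines met")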
None of the methods can handle the load. The tasks that fail to complete vary among the methods. Note that EDF has fewer context switches (11) than the other methods, all of which have more. The total utilization of P1 and P2 is 0. Therefore, these two tasks are schedulable. The utilization of all the tasks is 0. Observe that P1 and P2 must execute at least once before P3 can begin executing. However, P1 is initiated one additional time in that interval; this is within the deadline for P3.
By continuing this reasoning, we can see that all deadlines of all three tasks can be met. When T3 leaves its critical section, it is preempted by T1. Buffering techniques may be used to improve utilization. Generally, it is possible to reference data by its block number. Disks and tapes are examples of block-oriented devices.
Stream-oriented devices transfer data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, mouse and other pointing devices, and most other devices that are not secondary storage are stream oriented.
Specifically, a process can transfer data to or from one buffer while the operating system empties or fills the other. SCAN: The disk arm moves in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction or until there are no more requests in that direction.
The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order. C-SCAN: When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again. Source: [KNUT97]. Recognize that each of the N tracks is equally likely to be requested.
This follows directly from the last equation. If it is in M2 but not in M1, then a block of data is transferred from M2 to M1 and then read. Without a middle section, this scheme reduces to the strategy of the simpler figure. The old section consists of one block, and we have the LRU replacement policy.
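For reference, the average access time of such a two-level arrangement follows the standard relation, with H the hit ratio in M1, T1 the access time of M1, and T2 the access time of M2:

T_s = H \cdot T_1 + (1 - H)\,(T_1 + T_2)

The closer H is to 1, the closer the average access time is to that of M1 alone.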
Since each byte generates an interrupt, there are as many interrupts as bytes transferred. A record is a collection of related fields that can be treated as a unit by some application program. A database is a collection of related data. The essential aspects of a database are that the relationships that exist among elements of data are explicit and that the database is designed for use by a number of different applications.
Each record consists of one burst of data. Sequential file: A fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order.
Because the length and position of each field are known, only the values of the fields need to be stored; the field name and length for each field are attributes of the file structure. Indexed sequential file: The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record.
The overflow file is similar to the log file used with a sequential file, but is integrated so that records in the overflow file are located by following a pointer from their predecessor record.
Indexed file: Records are accessed only through their indexes. The result is that there is now no restriction on the placement of records as long as a pointer in at least one index refers to that record. Furthermore, variable-length records can be employed. Direct, or hashed, file: The direct file makes use of hashing on the key value.
The indexed sequential file provides a structure that allows a less exhaustive search to be performed. The working directory is the directory within the tree structure in which the user is currently working. There may be unused space at the end of each block; this is referred to as internal fragmentation. Variable-length spanned blocking: Variable-length records are used and are packed into blocks with no unused space.
Thus, some records must span two blocks, with the continuation indicated by a pointer to the successor block. Variable-length unspanned blocking: Variable-length records are used, but spanning is not employed.
There is wasted space in most blocks because of the inability to use the remainder of a block if the next record is larger than the remaining unused space. Chained allocation: Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. Indexed allocation: The file allocation table contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.
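A tiny Python sketch contrasts the two approaches; the "disk" here is just a dictionary mapping block numbers to (data, next-pointer) pairs, an assumption made purely for illustration.

def read_chained(disk, start_block):
    """Follow next-block pointers from the file's first block."""
    data, block = [], start_block
    while block is not None:
        payload, next_block = disk[block]     # each block stores (data, pointer)
        data.append(payload)
        block = next_block
    return data

def read_indexed(disk, index):
    """Use a per-file index that lists every block allocated to the file."""
    return [disk[b][0] for b in index]

disk = {7: ("abc", 3), 3: ("def", 9), 9: ("ghi", None)}
print(read_chained(disk, 7))           # ['abc', 'def', 'ghi']
print(read_indexed(disk, [7, 3, 9]))   # the same data reached via the index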
When spanned records bridge block boundaries, some reference to the successor block is also needed. One possibility is a length indicator preceding each record. Another possibility is a special separator marker between records. In any case, we can assume that each record requires a marker, and we assume that the size of a marker is about equal to the size of a block pointer [WEID87].
For spanned blocking, a block pointer of size P to its successor block may be included in each block, so that the pieces of a spanned record can easily be retrieved.
Less than half the allocated file space is unused at any time. a. Indexed. b. Indexed sequential. c. This improves security and integrity (organizing backups) and avoids the problem of name clashes.
Treating a directory as an ordinary file with certain assigned access restrictions provides a more uniform set of objects to be managed by the operating system and may make it easier to create and manage user-owned directories.
If the operating system structures the file system so that subdirectories are allowed underneath a master directory, there is little or no additional logic required to allow arbitrary depth of subdirectories. Limiting the depth of the subdirectory tree places an unnecessary limitation on the user's organization of file space. First we would establish a data structure representing every block on a disk supporting a file system.
A bit map would be appropriate here. When finished, we would create a free list from the blocks remaining unused. This is essentially what the UNIX utility fsck does. Keep a "backup" of the free-space list pointer at one or more places on the disk. Whenever the beginning of the list changes, the "backup" pointers are also updated. This ensures that you can always find a valid pointer value even if there is a memory or disk block failure.
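A minimal Python sketch of the bitmap approach rebuilds one bit per block by walking every file's allocated block list, roughly what a consistency checker such as fsck does; the disk size and file contents are invented for the example.

N_BLOCKS = 16

def rebuild_free_map(files):
    """files: iterable of block-number lists. Returns a 0/1 in-use bit per block."""
    in_use = [0] * N_BLOCKS
    for blocks in files:
        for b in blocks:
            in_use[b] = 1                 # mark every block referenced by some file
    return in_use

def free_blocks(bitmap):
    return [i for i, bit in enumerate(bitmap) if bit == 0]

bitmap = rebuild_free_map([[0, 1, 5], [2, 3], [7]])
print(free_blocks(bitmap))                # blocks referenced by no file are free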
Using the information from part (a), we see that the direct blocks only cover the first 96 KB, while the first indirect block covers the next 16 MB.
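These figures are consistent with, for example, an inode holding 12 direct block pointers, an 8 KB block size, and 4-byte block pointers; the arithmetic below is a sketch under those assumed parameters, which may differ from the values given in the actual problem.

BLOCK = 8 * 1024          # assumed block size: 8 KB
PTR = 4                   # assumed block-pointer size: 4 bytes
DIRECT = 12               # assumed number of direct pointers in the inode

direct_coverage = DIRECT * BLOCK               # 12 * 8 KB = 96 KB
indirect_coverage = (BLOCK // PTR) * BLOCK     # 2048 pointers * 8 KB = 16 MB
print(direct_coverage // 1024, "KB direct,",
      indirect_coverage // (1024 * 1024), "MB via the first indirect block")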
There will thus be two disk accesses: one for the first indirect block, and one for the block containing the required data. In many cases, embedded systems are part of a larger system or product, as in the case of an antilock braking system in a car.
If these events do not occur periodically or at predictable intervals, the embedded software may need to take into account worst-case conditions and set priorities for execution of routines.
Thus, an embedded OS intended for use on a variety of embedded systems must lend itself to flexible configuration so that only the functionality needed for a specific application and hardware suite is provided. Untested programs are rarely added to the software. After the software has been configured and tested, it can be assumed to be reliable.
Similarly, memory protection mechanisms can be minimized. The disadvantage of using a general-purpose OS is that it is not optimized for real-time and embedded applications. Thus, considerable modification may be required to achieve adequate performance. This site complies with DMCA digital copyright. We do not store files we do not own or that are shared without the owner's permission.
We also do not link to sites that host DMCA copyright infringement. If you feel that this book belongs to you and you want it unpublished, please contact us.