1. What are the major differences between deadlock, starvation, and race?
The major difference lies in when and why a job stops making progress. A deadlock occurs while jobs are being processed: two or more jobs each hold a resource that another needs and wait for each other indefinitely, so none of them can proceed. Starvation is an allocation problem: the resource-allocation policy keeps granting resources to other jobs, so one job is prevented from ever executing even though the system as a whole keeps running. A race occurs when two processes reach a shared resource at nearly the same time, before any coordination has been established, and the final outcome depends on which one gets there first.
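To see what "the outcome depends on timing" means in code, here is a minimal Java sketch (my own illustration, not from the textbook): two threads increment a shared counter without synchronization, and lost updates change the final total from run to run.

public class RaceDemo {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++)
                counter++; // read-modify-write: not atomic
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Expected 200000, but updates are lost when the threads interleave.
        System.out.println("counter = " + counter);
    }
}

Running it several times typically prints different totals below 200000; that variability is exactly the race.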
2. Give some real-life examples of deadlock, starvation, and race.
Deadlock: Two shoppers each pick up one of the two items that both of them need, and each waits for the other to put his item down; neither ever does.
Starvation: One person borrows a pen from his classmate, but the classmate keeps taking it back, so the borrower never gets to finish writing.
Race: Two guys court the same girlfriend, and which one wins depends entirely on who gets there first.
3. The four necessary conditions for the deadlock in exercise #2 (a sketch in code follows the list):
Mutual exclusion: each item can be held by only one shopper at a time.
Hold and wait: each shopper keeps the item already in hand while waiting for the other item.
No preemption: neither shopper can be forced to give up the item he is holding.
Circular wait: each shopper is waiting for exactly the item the other one holds, closing the circle.
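A minimal Java sketch of those four conditions (illustrative only; the two "items" are plain lock objects): each thread holds one lock while waiting for the other, and if the timing lines up the program hangs, which is the deadlock.

public class DeadlockDemo {
    static final Object itemA = new Object();
    static final Object itemB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (itemA) {          // hold itemA...
                pause(100);
                synchronized (itemB) {      // ...and wait for itemB
                    System.out.println("thread 1 got both");
                }
            }
        }).start();
        new Thread(() -> {
            synchronized (itemB) {          // hold itemB...
                pause(100);
                synchronized (itemA) {      // ...and wait for itemA: circular wait
                    System.out.println("thread 2 got both");
                }
            }
        }).start();
    }

    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}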
4. Design an algorithm for acquiring the resources so that both deadlock and starvation are impossible.
public boolean tryAcquire( int n0, int n1, ... ) {
    if ( for all i: ni ≤ availi ) {   // successful acquisition
        availi -= ni for all i;
        return true;                  // indicate success
    } else
        return false;                 // indicate failure
}
init) Semaphore s = new Semaphore(1,1);
Thread A Thread B
-------- --------
s.acquire(1,0); s.acquire(0,1);
s.acquire(0,1); s.acquire(1,0);
A deadlock- and starvation-free version has Thread B release what it holds and retry whenever its second acquisition fails:
Thread B
--------
while(true) {
s.acquire(0,1);
if ( s.tryAcquire(1,0) ) // if second acquisition succeeds
break; // leave the loop
else {
s.release(0,1);        // release what is held
sleep( SOME_AMOUNT );  // pause a bit before trying again
}
}
s.value  run  action
-------  ---  ------
(1,1)    A    s.acquire(1,0)
(0,1)    B    s.acquire(0,1)
(0,0)    A    s.acquire(0,1)      A blocks on second
(0,0)    B    s.tryAcquire(1,0)   => false
(0,0)    B    s.release(0,1)
(0,1)    A    s.acquire(0,1)      A succeeds on second
(0,0)
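The two-resource semaphore above is not part of the standard Java library, so the sketch below (my own illustration; the names r0 and r1 and the 10 ms back-off are invented) models the two resources as two java.util.concurrent.Semaphore objects. Thread A blocks in a fixed order, while Thread B uses the try-and-back-off loop from the answer above.

import java.util.concurrent.Semaphore;

public class BackoffDemo {
    static final Semaphore r0 = new Semaphore(1);
    static final Semaphore r1 = new Semaphore(1);

    // Thread B's loop: hold r1, try r0, and back off on failure.
    static void acquireBothWithBackoff() throws InterruptedException {
        while (true) {
            r1.acquire();
            if (r0.tryAcquire())   // if second acquisition succeeds
                return;            // hold both; caller releases later
            r1.release();          // release what is held
            Thread.sleep(10);      // pause a bit before trying again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            try {
                r0.acquire();      // Thread A blocks in a fixed order
                Thread.sleep(5);
                r1.acquire();
                System.out.println("A holds both");
                r1.release(); r0.release();
            } catch (InterruptedException ignored) {}
        });
        Thread b = new Thread(() -> {
            try {
                acquireBothWithBackoff();
                System.out.println("B holds both");
                r0.release(); r1.release();
            } catch (InterruptedException ignored) {}
        });
        a.start(); b.start();
        a.join(); b.join();
    }
}

Because B never blocks on r0 while holding r1, the circular-wait condition is broken; the sleep between retries makes prolonged starvation unlikely, though not strictly impossible.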
5. Figure 5.16 shows a tunnel going through a mountain, with two streets parallel to each other, one at each entrance/exit of the tunnel. Traffic lights are located at each end of the tunnel to control the cross-flow of traffic through each intersection.
a. Deadlock will not normally happen because the two traffic lights control entry into the tunnel. A deadlock can occur, however, if some motorists do not follow the lights: since there is only one lane through the tunnel, cars entering from both ends at the same time meet head-on, and neither line can move forward or back out.
b. Deadlock can be detected when traffic stands bumper-to-bumper at both ends of the tunnel and no car inside can move in either direction.
c. The solution to prevent deadlock is for the traffic lights to be timed correctly and for motorists to obey them, so that cars enter the single-lane tunnel from only one direction at a time.
6. Based on figure 5.17, answer the following questions.
a. This is not a deadlocked state.
b. There are no blocked processes.
c. P2 can freely request R1 and R2.
d. P1 can freely request R1 and R2.
e. Both P1 and P2 have requested R2.
1. P1 will wait if R2 is granted to P2 first.
2. P2 will wait if R2 is granted to P1 first.
Thursday, December 13, 2007
Assignment #3
Page 56
EXERCISES:1,2,3
A. Explain the following: Multiprogramming. Why is it used?
Multiprogramming is a technique used to maximize CPU time by keeping several programs in memory and running them at what appears to be the same time. Execution begins with the first program and continues until an instruction that must wait for a peripheral is reached; the context of that program is stored, and the second program in memory is given a chance to run. The process continues until all programs finish running. Multiprogramming gives no guarantee that a program will run in a timely manner.
B. Internal Fragmentation. How does it occur?
Internal fragmentation occurs when a fixed partition is only partially used by a program: the remaining space within the partition is unavailable to any other job, so it sits wasted for as long as the partition is allocated. It is called internal because the wasted space lies inside an allocated partition rather than between partitions.
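As a small arithmetic illustration of the idea (the partition and job sizes below are invented), the space lost to internal fragmentation under fixed partitions can be computed directly:

public class InternalFragmentation {
    public static void main(String[] args) {
        int[] partitionSizes = {100, 250, 500};  // fixed partitions, in KB
        int[] jobSizes       = { 90, 180, 390};  // jobs loaded into them, in KB
        int wasted = 0;
        for (int i = 0; i < partitionSizes.length; i++) {
            int leftover = partitionSizes[i] - jobSizes[i]; // unusable while the job runs
            System.out.println("partition " + i + ": " + leftover + " KB fragmented");
            wasted += leftover;
        }
        System.out.println("total internal fragmentation: " + wasted + " KB"); // 190 KB
    }
}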
C. Compaction: Why is it needed?
Compaction is needed because it collects the fragments of available memory space into one contiguous block by moving programs and data in the computer's memory, a process also known as garbage collection. Without it, a job could be turned away even though the total free memory would hold it, simply because no single free block is large enough.
E. Relocation: How often should it be performed?
It depends on how the program's address references are resolved and on how often memory is compacted: every time jobs are moved to new locations, their address references must be adjusted.
#2. Describe the major disadvantages for each of the four memory allocation schemes presented in the chapter.
Single-user contiguous allocation runs only one job at a time and wastes whatever memory that job does not use. Fixed partitions suffer from internal fragmentation inside each partition. Dynamic partitions suffer from external fragmentation between partitions as jobs come and go. Relocatable dynamic partitions remove external fragmentation only at the cost of compaction, which is an overhead process: while compaction is being done, everything else must wait.
#3. Describe the major advantages for each of the memory allocation schemes presented in the chapter.
Single-user contiguous allocation is simple to implement. Fixed partitions allow several jobs to reside in memory at once. Dynamic partitions allocate exactly the space a job requests, reducing waste inside partitions. Relocatable dynamic partitions reclaim scattered free space through compaction. Beyond these, programs can be divided into pages of equal size or segments of variable size, and each page or segment can be stored wherever there is an empty block large enough to hold it.
http://www.google.com.ph/search?hl=en&q=multiprogramming.Why+is+it+used&meta=
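To make the paging advantage in #3 concrete, here is a small sketch of how a logical address splits into a page number and an offset (the page size and page-table contents are invented for illustration):

public class PagingDemo {
    public static void main(String[] args) {
        final int PAGE_SIZE = 4096;       // bytes per page (typical, but arbitrary here)
        int[] pageTable = {7, 3, 12, 5};  // page -> frame mapping, made up

        int logicalAddress = 9500;
        int pageNumber = logicalAddress / PAGE_SIZE;  // 9500 / 4096 = 2
        int offset     = logicalAddress % PAGE_SIZE;  // 9500 - 2*4096 = 1308
        int frame      = pageTable[pageNumber];       // frame 12
        int physicalAddress = frame * PAGE_SIZE + offset;

        System.out.println("page " + pageNumber + ", offset " + offset
                + " -> physical " + physicalAddress);  // prints 50460
    }
}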
Page 104
EXERCISE:#4
A:What is the cause of thrashing?
Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page fault continuously.
B:How does the operating system detect thrashing?
The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming.
C: Once thrashing is detected, what can the operating system do to eliminate it?
It can be eliminated by reducing the level of multiprogramming.
http://www.go4expert.com/forums/showthread.php?t=834
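A sketch of the detection-and-recovery rule from #4 (the thresholds and rates below are invented; real systems tune these empirically): when CPU utilization is low while paging is heavy, the system reduces the multiprogramming level.

public class ThrashingMonitor {
    // Returns a suggested new multiprogramming level.
    static int adjust(double cpuUtilization, double pageFaultRate, int degree) {
        if (cpuUtilization < 0.3 && pageFaultRate > 100.0)
            return degree - 1;   // likely thrashing: suspend a process
        if (cpuUtilization > 0.8 && pageFaultRate < 10.0)
            return degree + 1;   // headroom: admit another process
        return degree;           // leave the level unchanged
    }

    public static void main(String[] args) {
        System.out.println(adjust(0.2, 250.0, 8)); // thrashing -> 7
        System.out.println(adjust(0.9, 2.0, 4));   // underloaded -> 5
    }
}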
EXERCISE:#5
What purpose does the referenced bit serve in a demand paging system?
Demand paging loads only part of a program into memory to begin with, and further pages are loaded only as they are needed. The referenced bit records whether a page has been accessed since the bit was last cleared. The page-replacement algorithm uses it to approximate which pages have been used recently, keeping those in memory and treating pages with a clear bit as candidates for replacement.
http://www.daniweb.com/forums/thread8504.html
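One well-known use of the referenced bit is the second-chance (clock) page-replacement algorithm. The sketch below is my own illustration, not from the textbook: the clock hand skips frames whose referenced bit is set, clearing the bit as it passes, and evicts the first frame whose bit is clear.

public class ClockReplacement {
    int[] frames;          // page number held by each frame (-1 = empty)
    boolean[] referenced;  // the referenced bit for each frame
    int hand = 0;          // clock hand

    ClockReplacement(int numFrames) {
        frames = new int[numFrames];
        referenced = new boolean[numFrames];
        java.util.Arrays.fill(frames, -1);
    }

    // Choose a victim frame: skip (and clear) frames whose referenced bit is set.
    int chooseVictim() {
        while (true) {
            if (frames[hand] == -1 || !referenced[hand]) {
                int victim = hand;
                hand = (hand + 1) % frames.length;
                return victim;
            }
            referenced[hand] = false;          // give the page a second chance
            hand = (hand + 1) % frames.length;
        }
    }

    void access(int page) {
        for (int i = 0; i < frames.length; i++)
            if (frames[i] == page) { referenced[i] = true; return; } // hit: set the bit
        int v = chooseVictim();                // page fault: evict and load
        frames[v] = page;
        referenced[v] = true;
    }

    public static void main(String[] args) {
        ClockReplacement clock = new ClockReplacement(3);
        for (int page : new int[]{1, 2, 3, 1, 4, 1, 5})
            clock.access(page);
        System.out.println(java.util.Arrays.toString(clock.frames)); // [4, 1, 5]
    }
}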
Tuesday, November 20, 2007
Assignment #1
THE TI-99 HOME COMPUTER TIMELINE
Bill Gaskill
1993 marks the 10th anniversary of the decision by Texas Instruments to abandon the Home Computer. I have compiled the information in this timeline not in celebration of TI's decision to orphan the 99/4A, but rather to honor the community that remains ten years after TI's decision. I hope you enjoy the reading.
THE BIRTH OF THE MICROCOMPUTER INDUSTRY
1947: Bell Labs engineers John Bardeen, Walter Brattain and William Shockley invent the transistor, which paves the way for the creation of smaller computers.
1955: IBM becomes the first computer manufacturer to offer plug-in peripherals for their computers. Although the computers are of the mainframe type, the concept will catch on and become an integral part of microcomputer technology.
1959: Texas Instruments releases the first integrated circuit after its engineers figure out how to put more than one transistor on the same material and connect them without wires.
1964: John G. Kemeny and Thomas E. Kurtz develop the BASIC programming language at Dartmouth College. BASIC will become a mainstay in the microcomputer world.
1969: Intel, then a one-year old company, releases a 1K-bit RAM chip, which is the largest amount of RAM ever put on an integrated circuit up to that time.
1972: Intel introduces the 8008 chip in April 1972. It becomes the first 8-bit microprocessor to hit the market.
- Nolan Bushnell founds Atari and ships the Pong game.
1973: The first "mini" floppy disk is introduced.
1974: Intel introduces the 8080 chip in April 1974. The 8080 is the first microprocessor capable of addressing 64K bytes of memory.
-Texas Instruments releases the TMS 1000 4-bit chip. It becomes an immediate success as over 100 million are sold for use in video games, microwave ovens, calculators and other electronics products.
- In an article appearing in the July 1974 issue of Radio Electronics, author Jonathan Titus tells readers how to build the Mark 8 "personal minicomputer."
- Motorola begins work on the M6800 chip, designed by Chuck Peddle. Peddle would later leave Motorola to join MOS Technology, the creators of the 6502 chip. Peddle ultimately became Commodore's Systems Division Director, responsible for the release of the PET 2001 in October 1977, after Commodore acquired MOS Technology in order to have its own chip source.
- Naval Postgraduate School instructor Gary Kildall creates a new operating system for Intel's 8080 microprocessor called CP/M, an acronym for Control Program for Microcomputers. It sells for $70.
- Creative Computing magazine is founded by David H. Ahl in Morristown, New Jersey.
1978: The PLATO computer-aided instruction system is developed at the University of Illinois. Control Data Corporation would license these applications to Texas Instruments late in 1983, but by then, the fate of the Home Computer was already sealed.
- Machine and operating system independent UCSD Pascal is released by the Regents of the University of California at San Diego for $200.
- In March, Texas Instruments begins trying to recruit personal computer specialists by running full page ads entitled "Your Experience with personal computers is going to open an unlimited career at TI." in trade publications. The ads seek qualified applicants for Personal Computer Product Marketing Managers, Systems Programmers, Digital Design Engineers, Product Design Engineers, Application Software Specialists and Marketing Support Engineers. The recruitment efforts are largely unsuccessful when potential applicants discover the job is in Lubbock, Texas rather than close to the center of the microcomputer industry, which is northern California's Silicon Valley, situated only an hour's drive from San Francisco.
- In April, Texas Instruments releases a recreational Solid State Software Leisure Library module for the TI58 and 59 programmable calculators, coining and trademarking the term Solid State Software.
- Intel introduces the 8086 microprocessor.
- In August MICROpro releases Seymour Rubenstein's Word-Master word processor, which is the predecessor to WordStar.
- Illinois residents Ward Christensen and Randy Suess create the first microcomputer bulletin board system, conceived, designed, built, programmed, tested and installed in the 30 day period between January 16th and February 16th 1978.
- The $895 Exidy Sorcerer is released in October by Exidy Computers of Sunnyvale, California. The machine sports 8K RAM, a 64 column by 30 row screen and the ability to use plug-in modules which are the size of 8-track tapes. The Sorcerer appears to be the first "Home Computer" to support ROM cartridge use.
- In December Axiom Corporation introduces the EX-801 printer and EX-820 printer/plotter for $495 and $795 respectively. Both have available interfaces for the Apple II, TRS-80, PET and Exidy personal computers.
- Epson introduces the MX-80 dot matrix printer, shocking the industry with its low price and high performance.
- Over 14 million microprocessors are manufactured by year's end, with the 8-bit 6502 chip and TI's 4-bit TMS 1000 chip leading the pack.
JAN 1979: Double sided disk drives are announced but few are available as manufacturers run into difficulty gearing up for production.
FEB 1979: Rumors begin to fly about TI's new personal computer, despite the fact that it has not been formally announced. The rumors say the computer will have 40K of ROM, it will generate 20 lines of 40 characters on a standard television, have provisions for accommodating video disk players and video tape recorders, and it will have support for sophisticated sound production.
2. A regional bank might decide to buy six servers instead of one supercomputer for two reasons:
1. A supercomputer is suited to the massive computations of a major research company, not to a bank's workload; the regional bank needs many machines so its employees can serve customers and get through their work quickly.
2. Six servers can be used by many employees at once, whereas a supercomputer is a single machine; spreading the work across six servers also means that if one fails, the others keep the bank running.
Sunday, November 18, 2007
Assignment #2
Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact its memory is physically fragmented and may even overflow onto disk storage. Systems which use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.
Virtual memory in Windows: Operating systems based on Microsoft Windows NT technologies have always provided applications with a flat 32-bit virtual address space that describes 4 gigabytes (GB) of virtual memory. The address space is usually split so that 2 GB of address space is directly accessible to the application and the other 2 GB is only accessible to the Windows executive software. The virtual address space of processes and applications is still limited to 2 GB unless the /3GB switch is used in the Boot.ini file. When the physical RAM in the system exceeds 16 GB and the /3GB switch is used, the operating system will ignore the additional RAM until the /3GB switch is removed. This is because of the increased size of the kernel required to support more Page Table Entries. The assumption is that the administrator would rather not lose the /3GB functionality silently and automatically, so the administrator must change this setting explicitly.

Virtual memory (VM) in the UNIX operating system is just what it says -- "virtual" -- it does not really exist as one physical object, and the VM size is not consuming any disk space. Unless a user's system is performing swapping, there is no need to worry about the swap file's size or location. Swapping activity can be observed in the "0(0) pageouts" field in the last header line of the Terminal top command. Another useful Terminal command is vm_stat(1) (see man vm_stat), which also displays the number of pageouts. The pageout value indicates that physical memory is being paged (swapped) to the swap file. This I/O is done in page chunks of 4096 bytes. Physical memory is paged to the swap file when it is over-subscribed: the OS seeks out inactive memory pages and copies them to the swap file to make room for active pages, which may in turn have to be copied from the swap file back into physical memory. The best way to avoid frequent over-subscription of physical memory is to run fewer applications at the same time or to install more physical memory.

A page is a fixed-length block of memory that is used as a unit of transfer between physical memory and external storage such as a disk. A page fault is an interrupt (or exception) raised by the hardware when a program accesses a page that is mapped in its address space but not loaded in physical memory. The hardware that detects this situation is the memory management unit in the processor. The exception-handling software that handles the page fault is generally part of the operating system: it tries to make the required page accessible at a location in physical memory, or kills the program if the access is illegal.

The working set of information W(t, τ) of a process at time t is the collection of information referenced by the process during the process time interval (t − τ, t). Typically the units of information in question are memory pages. The working set approximates the set of pages that the process will access in the near future (say, during the next τ time units), and more specifically indicates which pages ought to be kept in main memory to allow the most progress to be made in the execution of that process.
The effect of the choice of which pages to keep in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (the most recently used pages) fit in RAM. The model is all or nothing: if the set of pages the process needs grows and there is no room in RAM, the process is removed from memory entirely, freeing the memory for other processes to use.
In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. Thus it optimizes CPU utilization and throughput.
The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window: at each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced in the working set window.
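A sketch of that moving-window bookkeeping (the window size and reference string below are invented): keep the last τ references in a queue, and the working set is the distinct pages currently inside it.

import java.util.ArrayDeque;
import java.util.HashSet;

public class WorkingSetDemo {
    public static void main(String[] args) {
        int tau = 4;                            // working set window: the last 4 references
        int[] refs = {1, 2, 1, 3, 4, 4, 5, 1};  // page reference string (made up)

        ArrayDeque<Integer> window = new ArrayDeque<>();
        for (int page : refs) {
            window.addLast(page);               // new reference enters one end
            if (window.size() > tau)
                window.removeFirst();           // oldest reference drops off the other end
            // The working set is the distinct pages currently in the window.
            System.out.println("after ref " + page + ": W = " + new HashSet<>(window));
        }
    }
}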
Page size is usually determined by the processor architecture. Traditionally, pages in a system had a uniform size, for example 4096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes, and several factors go into choosing the best page size. Intel x86 supports 4 MB pages (2 MB pages if using PAE) in addition to its standard 4 kB pages, and other architectures often have a similar feature. IA-64 supports as many as eight different page sizes, from 4 kB up to 256 MB. This support for huge pages (or, in Microsoft Windows terminology, large pages) allows "the best of both worlds": it reduces the pressure on the TLB cache (sometimes increasing speed by as much as 15%, depending on the application and the allocation size) for large allocations while still keeping memory usage at a reasonable level for small allocations.

Huge pages, despite being implemented in most contemporary personal computers, are not in common use except in large servers and computational clusters. Commonly, their use requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages), or manual administrator configuration; operating systems commonly, sometimes by design, cannot page them out to disk.

Linux has supported huge pages on several architectures since the 2.6 series. Windows Server 2003 (SP1 and newer) and Windows Server 2008 support huge pages under the name of large pages, but Windows XP and Windows Vista do not.

The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously; then, the only time you "feel" the slowness of virtual memory is when there is a slight pause as you change tasks. When that is the case, virtual memory is perfect.
When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.