tag:blogger.com,1999:blog-18822028901166893232024-03-13T04:49:39.544-07:00CS-223 Operating SystemGlennhttp://www.blogger.com/profile/12050898129795620974noreply@blogger.comBlogger4125tag:blogger.com,1999:blog-1882202890116689323.post-8369044544314100012008-01-15T23:47:00.000-08:002008-01-19T19:50:41.990-08:00Research Topic<span style="font-family:arial;">1. What are the major differences between deadlock, starvation, and race?</span><br /><span style="font-family:arial;">In a deadlock, the problem occurs while the jobs are being processed: two or more jobs block one another, each holding a resource the other needs, so none of them can finish. Starvation is an allocation problem: the way resources are granted repeatedly favors other jobs and prevents one job from ever being executed. A race occurs before the processes have properly started, when the outcome depends on which process reaches a shared resource first.<br />2. Give some real-life examples of deadlock, starvation, and race.</span><br />Deadlock: when two persons are about to buy the same product at the same time, and each takes hold of it.<br />Starvation: when one person borrows a pen from his classmate, but the classmate keeps taking the pen back before he can use it.<br /><span style="font-family:arial;">Race: when two guys court the same girlfriend.<br />3. Four necessary conditions for the deadlock in exercise #2:<br />the product is the only one of its kind available;<br />the two persons both need that one product urgently;<br />there is no alternative product available;<br />the two persons are brand-conscious, and the product happens to be the brand they both like.<br />4. Design an algorithm for using it so that both deadlock and starvation are not possible.<br /></span><br /><span style="font-family:arial;">public boolean tryAcquire(int n0, int n1, ...) {<br />&nbsp;&nbsp;if (for all i: ni ≤ availi) {&nbsp;&nbsp;// successful acquisition<br />&nbsp;&nbsp;&nbsp;&nbsp;availi -= ni for all i;<br />&nbsp;&nbsp;&nbsp;&nbsp;return true;&nbsp;&nbsp;// indicate success<br />&nbsp;&nbsp;} else<br />&nbsp;&nbsp;&nbsp;&nbsp;return false;&nbsp;&nbsp;// indicate failure<br />}</span><br /><br /><span style="font-family:arial;">init) Semaphore s = new Semaphore(1,1);</span><br /><span style="font-family:arial;">Thread A: s.acquire(1,0); then s.acquire(0,1);</span><br /><span style="font-family:arial;">Thread B: s.acquire(0,1); then s.acquire(1,0);</span><br /><span style="font-family:arial;">Acquiring the two units in opposite orders can deadlock, so Thread B is rewritten to back off and retry:</span><br /><span style="font-family:arial;">Thread B:<br />while (true) {<br />&nbsp;&nbsp;s.acquire(0,1);<br />&nbsp;&nbsp;if (s.tryAcquire(1,0))&nbsp;&nbsp;// if the second acquisition succeeds<br />&nbsp;&nbsp;&nbsp;&nbsp;break;&nbsp;&nbsp;// leave the loop<br />&nbsp;&nbsp;else {<br />&nbsp;&nbsp;&nbsp;&nbsp;s.release(0,1);&nbsp;&nbsp;// release what is held<br />&nbsp;&nbsp;&nbsp;&nbsp;sleep(SOME_AMOUNT);&nbsp;&nbsp;// pause a bit before trying again<br />&nbsp;&nbsp;}<br />}</span><br /><span style="font-family:arial;">run / action / s.value:<br />initially (1,1)<br />A s.acquire(1,0) -&gt; (0,1)<br />B s.acquire(0,1) -&gt; (0,0)<br />A s.acquire(0,1) -&gt; A blocks on the second acquire<br />B s.tryAcquire(1,0) =&gt; false<br />B s.release(0,1) -&gt; (0,1)<br />A s.acquire(0,1) -&gt; (0,0), A succeeds on the second acquire</span><br /><span style="font-family:arial;"><br />5. Figure 5.16 shows a tunnel going through a mountain and two streets parallel to each other, one at each entrance/exit of the tunnel. Traffic lights are located at each end of the tunnel to control the crossflow of traffic through each intersection.<br />a. Deadlock will not normally happen because the two traffic lights control the traffic. Deadlock may occur when some motorists do not follow the traffic lights, because there is only one tunnel to drive through.<br />b. Deadlock can be detected when traffic backs up bumper to bumper at the tunnel and vehicles meeting head-on inside it can no longer move.<br />c. The solution to prevent deadlock is for the traffic lights to be timed accurately and for motorists to obey them, so that traffic flows smoothly through the tunnel.</span><br /><span style="font-family:Arial;">6. Based on figure 5.17, answer the following questions.</span><br />a. This is not a deadlocked state.<br />b. There are no blocked processes.<br />c. P2 can freely request R1 and R2.<br />d. P1 can freely request R1 and R2.<br />e. Both P1 and P2 have requested R2.<br /><br />1. P1 will wait after the request of P2.<br />2. P2 will wait after the request of P1.Glennhttp://www.blogger.com/profile/12050898129795620974noreply@blogger.com0tag:blogger.com,1999:blog-1882202890116689323.post-91418716307868957332007-12-13T00:28:00.000-08:002007-12-13T01:23:29.670-08:00Assignment #3<strong>Page 56</strong><br /><strong>EXERCISES: 1, 2, 3</strong><br /><strong>A</strong>. Explain the following:<br />Multiprogramming. Why is it used?<br /><strong>Multiprogramming</strong> is a technique used to get maximum use of CPU time by running multiple programs simultaneously. Execution begins with the first program and continues until an instruction that waits for a peripheral is reached; the context of this program is stored, and the second program in memory is given a chance to run. 
The process continues until all programs have finished running. Multiprogramming gives no guarantee that a program will run in a timely manner.<br />B. <strong>Internal Fragmentation</strong>. How does it occur?<br />Internal fragmentation occurs when a fixed partition is only partially used by a program: the remaining space within the partition is unavailable to any other job, so it is wasted.<br />C. <strong>Compaction</strong>: Why is it needed?<br />Compaction is needed because it is the process of collecting the fragments of available memory space into one contiguous block by moving programs and data around in the computer's memory or disks; it is also known as garbage collection.<br />E. <strong>Relocation</strong>: How often should it be performed? It depends on how often the address references in the program must be adjusted<strong>.</strong><br /><strong>#2.</strong> Describe the major disadvantages for each of the four memory allocation schemes presented in the chapter. The main disadvantage of these memory allocation schemes is overhead: while compaction is being done, for example, everything else must wait<strong>.</strong><br /><strong>#3</strong>. Describe the major advantages for each of the memory allocation schemes presented in the chapter. Memory could be divided into segments of variable size or into pages of equal size. 
Each page, or segment, could be stored wherever there was an empty block big enough to hold it.<br /><br /><a href="http://www.google.com.ph/search?hl=en&q=multiprogramming.Why+is+it+used&meta">http://www.google.com.ph/search?hl=en&q=multiprogramming.Why+is+it+used&meta</a>=<br /><br /><br /><br /><strong>Page 104</strong><br /><strong>EXERCISE: #4</strong><br /><strong>A:</strong> What is the cause of thrashing?<br /><strong>Thrashing</strong> is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously.<br /><strong>B:</strong> How does the operating system detect thrashing?<br />The system can <strong>detect thrashing</strong> by evaluating the level of CPU utilization as compared to the level of multiprogramming.<br /><strong>C:</strong> Once thrashing is detected, what can the operating system do to eliminate it?<br />It can be <strong>eliminated</strong> by reducing the level of multiprogramming.<br /><br /><a href="http://www.go4expert.com/forums/showthread.php?t=834">http://www.go4expert.com/forums/showthread.php?t=834</a><br /><br /><strong>EXERCISE: #5</strong><br /><br />What purpose does the referenced bit serve in a demand paging system?<br />= The referenced bit records whether a page has been accessed, which helps the replacement algorithm decide which pages to keep in memory. Demand paging loads only part of a program into memory to start with; once execution begins, the remaining pages are loaded into memory as they are needed.<br /><br /><a href="http://www.daniweb.com/forums/thread8504.html">http://www.daniweb.com/forums/thread8504.html</a>Glennhttp://www.blogger.com/profile/12050898129795620974noreply@blogger.com0tag:blogger.com,1999:blog-1882202890116689323.post-1601062539761940902007-11-20T22:42:00.000-08:002007-11-20T23:21:14.459-08:00Assignment # 1<div align="center">THE TI-99 HOME COMPUTER TIMELINE<br /> Bill Gaskill </div><div align="center"> </div><div align="justify"> 1993 marks the 10th anniversary of the decision by Texas Instruments to abandon the Home Computer. I have compiled the information in this timeline not in celebration of TI's decision to orphan the 99/4A, but rather to honor the community that remains ten years after TI's decision. I hope you enjoy the reading.<br /> THE BIRTH OF THE MICROCOMPUTER INDUSTRY<br /> 1947: Bell Labs engineers John Bardeen, Walter Brattain and William Shockley invent the transistor, which paves the way for the creation of smaller computers.<br /> 1955: IBM becomes the first computer manufacturer to offer plug-in peripherals for their computers. Although the computers are of the mainframe type, the concept will catch on and become an integral part of microcomputer technology.<br /> 1959: Texas Instruments releases the first integrated circuit after its engineers figure out how to put more than one transistor on the same material and connect them without wires.<br /> 1964: John G. Kemeny and Thomas E. Kurtz develop the BASIC programming language at Dartmouth College. BASIC will become a mainstay in the microcomputer world.<br /> 1969: Intel, then a one-year-old company, releases a 1K-bit RAM chip, which is the largest amount of RAM ever put on an integrated circuit up to that time.<br /> 1972: Intel introduces the 8008 chip in April 1972. 
It becomes the first 8-bit microprocessor to hit the market.<br /> - Nolan Bushnell founds Atari and ships the Pong game.<br /> 1973: The first "mini" floppy disk is introduced.<br /> 1974: Intel introduces the 8080 chip in April 1974. The 8080 is the first microprocessor capable of addressing 64K bytes of memory.<br /> -Texas Instruments releases the TMS 1000 4-bit chip. It becomes an immediate success as over 100 million are sold for use in video games, microwave ovens, calculators and other electronics products.<br /> - In an article appearing in the July 1974 issue of Radio Electronics, author Jonathan Titus tells readers how to build the Mark 8 "personal minicomputer."<br /> - Motorola begins work on the M6800 chip, designed by Chuck Peddle. Peddle would later leave Motorola to join MOS Technology, the creators of the 6502 chip. Peddle ultimately became Commodore's Systems Division Director, responsible for the release of the PET 2001 in October 1977, after Commodore acquired MOS Technology in order to have its own chip source.<br /> - Naval Post-graduate School instructor Gary Kildall creates a new operating system for Intel's 8080 microprocessor called CP/M, an acronym for Control Program for Microcomputers. It sells for $70.<br /> - Creative Computing magazine is founded by David H. Ahl in Morristown New Jersey. </div><div align="justify"> 1978: The Plato computer aided instruction system is developed at the University of Illinois. Control Data Corporation would license these applications to Texas Instruments late in 1983, but by then, the fate of the Home Computer was already sealed.<br /> - Machine and operating system independent UCSD Pascal is released by the Regents of the University of California at San Diego for $200.<br /> - In March, Texas Instruments begins trying to recruit personal computer specialists by running full page ads entitled "Your Experience with personal computers is going to open an unlimited career at TI." in trade publications. 
The ads seek qualified applicants for Personal Computer Product Marketing Managers, Systems Programmers, Digital Design Engineers, Product Design Engineers, Application Software Specialists and Marketing Support Engineers. The recruitment efforts are largely unsuccessful when potential applicants discover the job is in Lubbock, Texas rather than close to the center of the microcomputer industry, which is northern California's Silicon Valley, situated only an hour's drive from San Francisco.<br /> - In April, Texas Instruments releases a recreational Solid State Software Leisure Library module for the TI58 and 59 programmable calculators, coining and trademarking the term Solid State Software.<br /> - Intel introduces the 8086 microprocessor.<br /> - In August MICROpro releases Seymour Rubenstein's Word-Master word processor, which is the predecessor to WordStar.<br /> - Illinois residents Ward Christensen and Randy Suess create the first microcomputer bulletin board system, conceived, designed, built, programmed, tested and installed in the 30 day period between January 16th and February 16th 1978.<br /> - The $895 Exidy Sorcerer is released in October by Exidy Computers of Sunnyvale, California. The machine sports 8K RAM, a 64 column by 30 row screen and the ability to use plug-in modules which are the size of 8-track tapes. The Sorcerer appears to be the first "Home Computer" to support ROM cartridge use.<br /> - In December Axiom Corporation introduces the EX-801 printer and EX-820 printer/plotter for $495 and $795 respectively. 
Both have available interfaces for the Apple II, TRS-80, PET and Exidy personal computers.<br /> - Epson introduces the MX-80 dot matrix printer, shocking the industry with its low price and high performance.<br /> - Over 14 million microprocessors are manufactured by year's end, with the 8-bit 6502 chip and TI's 4-bit TMS 1000 chip leading the pack.<br /> JAN 1979: Double sided disk drives are announced but few are available as manufacturers run into difficulty gearing up for production.<br /> FEB 1979: Rumors begin to fly about TI's new personal computer, despite the fact that it has not been formally announced. The rumors say the computer will have 40K of ROM, it will generate 20 lines of 40 characters on a standard television, have provisions for accommodating video disk players and video tape recorders, and it will have support for sophisticated sound production. </div><div align="justify"><a href="http://groups.google.com.ph/group/comp.sys.ti/browse_thread/thread/f982f043f6ab7181/7e8d92d196d2ed0f?hl=en&lnk=st&q=article+about+operating+system+appearing+in+a+computing+magazine+#7e8d92d196d2ed0f">http://groups.google.com.ph/group/comp.sys.ti/browse_thread/thread/f982f043f6ab7181/7e8d92d196d2ed0f?hl=en&lnk=st&q=article+about+operating+system+appearing+in+a+computing+magazine+#7e8d92d196d2ed0f</a></div><div align="justify"> </div><div align="justify"> </div><div align="justify">2. A regional bank might decide to buy six servers instead of one supercomputer for two reasons:</div><div align="center">1. A supercomputer is generally used only by major companies, while a regional bank needs many computers for its employees so they can serve the people who need its services and shorten their work.</div><div align="center">2. A regional bank needs six server computers rather than one supercomputer because a supercomputer is a single machine, whereas six servers can be used by many employees of the bank at the same time.</div>Glennhttp://www.blogger.com/profile/12050898129795620974noreply@blogger.com0tag:blogger.com,1999:blog-1882202890116689323.post-47496511801821986802007-11-18T01:37:00.001-08:002007-12-13T00:27:48.750-08:00Assignment #2<div align="justify"><span style="color:#000000;"><strong>Virtual memory is</strong> a </span><a title="Computer" href="http://en.wikipedia.org/wiki/Computer"><span style="color:#000000;">computer</span></a><span style="color:#000000;"> system technique which gives an application program the impression that it has contiguous working memory, while in fact it is physically fragmented and may even overflow on to disk storage. Systems which use this technique make programming of large applications easier and use real physical memory (e.g. </span><a title="RAM" href="http://en.wikipedia.org/wiki/RAM"><span style="color:#000000;">RAM</span></a><span style="color:#000000;">) more efficiently than those without virtual memory.</span></div><div align="justify"><span style="color:#000000;"><strong>Virtual memory in Microsoft Windows.</strong> Operating systems based on Microsoft Windows NT technologies have always provided applications with a flat 32-bit virtual address space that describes 4 gigabytes (GB) of virtual memory. The address space is usually split so that 2 GB of address space is directly accessible to the application and the other 2 GB is only accessible to the Windows executive software<strong>.</strong> The virtual address space of processes and applications is still limited to 2 GB unless the /3GB switch is used in the Boot.ini file. 
When the physical RAM in the system exceeds 16 GB and the /3GB switch is used, the operating system will ignore the additional RAM until the /3GB switch is removed. This is because of the increased size of the kernel required to support more Page Table Entries. The assumption is made that the administrator would rather not lose the /3GB functionality silently and automatically; therefore, the administrator must explicitly change this setting.<strong> Virtual memory (VM) in the UNIX operating system</strong> is just what it says -- "virtual" -- it really doesn't exist. The VM size is NOT consuming any disk space. Unless a user's system is performing swapping, there's absolutely no need to worry about the swap file size or its location. Swapping activity can be checked by observing the "0(0) pageouts" figure in the last header line of the Terminal top command. Another useful Terminal command is vm_stat(1) (see man vm_stat). This command also displays the number of pageouts. The pageout value is an indication that physical memory is being paged (swapped) to the swap file. This I/O is done in page chunks; a page chunk is 4096 bytes in size. When physical memory is paged (swapped) to the swap file, it happens because physical memory is being over-subscribed. The best solution for avoiding frequent over-subscription of physical memory is to have fewer apps running at the same time or to install more physical memory. When physical memory becomes over-subscribed, the OS will seek out inactive memory pages and copy them to the swap file in order to make room for the active memory pages -- which may have to be copied from the swap file back into physical memory. 
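The arithmetic behind those 4096-byte page chunks is simple rounding up. As a quick sketch (class and method names here are illustrative, not any real API), the number of whole pages an allocation occupies is:

```java
public class PageMath {
    static final int PAGE_SIZE = 4096; // the page chunk size cited above

    // Number of whole pages needed to hold `bytes` of data, rounding up
    // to the next page boundary.
    static long pagesNeeded(long bytes) {
        return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(pagesNeeded(1));      // 1
        System.out.println(pagesNeeded(4096));   // 1
        System.out.println(pagesNeeded(4097));   // 2
        System.out.println(pagesNeeded(10_000)); // 3
    }
}
```

So a 10,000-byte buffer occupies three page chunks, and the unused tail of the third page is exactly the internal fragmentation discussed in the earlier assignment.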
A </span><a title="Page (computing)" href="http://en.wikipedia.org/wiki/Page_(computing)"><strong><span style="color:#000000;">page</span></strong></a><span style="color:#000000;"> is a fixed length block of memory that is used as a unit of transfer between </span><a title="Physical memory" href="http://en.wikipedia.org/wiki/Physical_memory"><span style="color:#000000;">physical memory</span></a><span style="color:#000000;"> and external storage like a </span><a title="Hard disk" href="http://en.wikipedia.org/wiki/Hard_disk"><span style="color:#000000;">disk</span></a><span style="color:#000000;">, and a page fault is an </span><a title="Interrupt" href="http://en.wikipedia.org/wiki/Interrupt"><span style="color:#000000;">interrupt</span></a><span style="color:#000000;"> (or </span><a title="Exception handling" href="http://en.wikipedia.org/wiki/Exception_handling"><span style="color:#000000;">exception</span></a><span style="color:#000000;">) to the software raised by the hardware, when a program accesses a page that is mapped in the address space but not loaded in physical memory. The hardware that detects this situation is the </span><a title="Memory management unit" href="http://en.wikipedia.org/wiki/Memory_management_unit"><span style="color:#000000;">memory management unit</span></a><span style="color:#000000;"> in a processor. The </span><a title="Exception handling" href="http://en.wikipedia.org/wiki/Exception_handling"><span style="color:#000000;">exception handling</span></a><span style="color:#000000;"> software that handles the page fault is generally part of an </span><a title="Operating system" href="http://en.wikipedia.org/wiki/Operating_system"><span style="color:#000000;">operating system</span></a><span style="color:#000000;">. The operating system tries to handle the page fault by making the required page accessible at a location in physical memory, or kills the program in case it is an illegal access. 
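The fault-then-load sequence described above can be sketched in miniature. This is a toy model under stated assumptions: all names are mine, a missing page-table entry stands in for "mapped but not resident", and a real MMU does the lookup in hardware:

```java
import java.util.HashMap;
import java.util.Map;

public class PageFaultDemo {
    // Page table: virtual page number -> physical frame number.
    private final Map<Integer, Integer> pageTable = new HashMap<>();
    private int nextFreeFrame = 0;
    int faults = 0;

    // Translate a virtual page number; on a miss, "handle the fault"
    // by loading the page into a free frame and updating the table.
    int access(int virtualPage) {
        Integer frame = pageTable.get(virtualPage);
        if (frame == null) {             // not resident -> page fault
            faults++;
            frame = nextFreeFrame++;     // handler brings the page in
            pageTable.put(virtualPage, frame);
        }
        return frame;                    // translation now succeeds
    }

    public static void main(String[] args) {
        PageFaultDemo mmu = new PageFaultDemo();
        mmu.access(5); // fault: first touch
        mmu.access(5); // hit: page is now resident
        mmu.access(9); // fault
        System.out.println("faults = " + mmu.faults); // prints "faults = 2"
    }
}
```

Repeated accesses to a resident page cost nothing extra; only first touches (or accesses after eviction, not modeled here) raise a fault.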
Peter Denning defined the <strong>working set</strong> of information W(t,τ) of a </span><a title="Process (computing)" href="http://en.wikipedia.org/wiki/Process_(computing)"><span style="color:#000000;">process</span></a><span style="color:#000000;"> at time t “to be the collection of information referenced by the process during the process time interval (t − τ, t)”. Typically the units of information in question are considered to be </span><a title="Page (computing)" href="http://en.wikipedia.org/wiki/Page_(computing)"><span style="color:#000000;">memory pages</span></a><span style="color:#000000;">. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process.<br />The choice of which pages are kept in main memory (as distinct from being paged out to auxiliary storage) matters: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.<br />The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (the most recently used pages) can be in RAM. 
The model is an all-or-nothing model: if the number of pages the process needs to use grows and there is no room in RAM, the process is evicted from memory to free the memory for other processes to use.<br />In other words, the working set strategy prevents </span><a title="Thrash (computer science)" href="http://en.wikipedia.org/wiki/Thrash_(computer_science)"><span style="color:#000000;">thrashing</span></a><span style="color:#000000;"> while keeping the degree of multiprogramming as high as possible. Thus it optimizes CPU utilization and throughput.<br />The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window: at each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced within the working set window.<br /><strong>Page size</strong> is usually determined by the processor architecture. Traditionally, pages in a system had a uniform size, for example 4096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes because of their respective benefits and penalties. There are several points that can factor into choosing the best page size.</span><a title="Intel x86" href="http://en.wikipedia.org/wiki/Intel_x86"><span style="color:#000000;">Intel x86</span></a><span style="color:#000000;"> supports 4MB pages (2MB pages if using </span><a title="Physical Address Extension" href="http://en.wikipedia.org/wiki/Physical_Address_Extension"><span style="color:#000000;">PAE</span></a><span style="color:#000000;">) in addition to its standard 4kB pages, and other architectures often have a similar feature. </span><a title="IA-64" href="http://en.wikipedia.org/wiki/IA-64"><span style="color:#000000;">IA-64</span></a><span style="color:#000000;"> supports as many as eight different page sizes, from 4kB up to 256MB. 
This support for huge pages (or, in </span><a title="Microsoft Windows" href="http://en.wikipedia.org/wiki/Microsoft_Windows"><span style="color:#000000;">Microsoft Windows</span></a><span style="color:#000000;"> terminology, large pages) allows "the best of both worlds", reducing the pressure on the TLB cache (sometimes increasing speed by as much as 15%, depending on the application and the allocation size) for large allocations and still keeping memory usage at a reasonable level for small allocations.<br />Huge pages, despite being implemented in most contemporary </span><a title="Personal computer" href="http://en.wikipedia.org/wiki/Personal_computer"><span style="color:#000000;">personal computers</span></a><span style="color:#000000;">, are not in common use except in large servers and </span><a title="High-performance computing" href="http://en.wikipedia.org/wiki/High-performance_computing"><span style="color:#000000;">computational clusters</span></a><span style="color:#000000;">. Commonly, their use requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages) or manual administrator configuration; operating systems commonly, sometimes by design, cannot </span><a title="Paging" href="http://en.wikipedia.org/wiki/Paging"><span style="color:#000000;">page them out</span></a><span style="color:#000000;"> to disk.<br /></span><a title="Linux" href="http://en.wikipedia.org/wiki/Linux"><span style="color:#000000;">Linux</span></a><span style="color:#000000;"> has supported huge pages on several architectures since the 2.6 series. 
</span><a title="Windows Server 2003" href="http://en.wikipedia.org/wiki/Windows_Server_2003"><span style="color:#000000;">Windows Server 2003</span></a><span style="color:#000000;"> (SP1 and newer) and </span><a title="Windows Server 2008" href="http://en.wikipedia.org/wiki/Windows_Server_2008"><span style="color:#000000;">Windows Server 2008</span></a><span style="color:#000000;"> support huge pages under the name of large pages, but </span><a title="Windows XP" href="http://en.wikipedia.org/wiki/Windows_XP"><span style="color:#000000;">Windows XP</span></a><span style="color:#000000;"> and </span><a title="Windows Vista" href="http://en.wikipedia.org/wiki/Windows_Vista"><span style="color:#000000;">Windows Vista</span></a><span style="color:#000000;"> do not. The read/write speed of a hard drive is much slower than that of RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously -- then, the only time you "feel" the slowness of virtual memory is when there's a slight pause when you're changing tasks. When that's the case, virtual memory is perfect.<br />When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called <strong>thrashing</strong>, and it can make your computer feel incredibly slow. </span></div><div align="justify"></div><div align="justify"><a href="http://computer.howstuffworks.com/virtual-memory.htm">http://computer.howstuffworks.com/virtual-memory.htm</a></div><div align="justify"><a href="http://en.wikipedia.org/wiki/Virtual_memory">http://en.wikipedia.org/wiki/Virtual_memory</a></div><div align="justify"></div>Glennhttp://www.blogger.com/profile/12050898129795620974noreply@blogger.com0
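Denning's working-set window described earlier (the moving window W(t,τ) where the oldest reference drops off one end) lends itself to a small sketch: keep the last τ page references, and the working set is the distinct pages among them. Class and method names here are illustrative, not a real OS interface:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class WorkingSet {
    private final int tau;                         // window size, in references
    private final Deque<Integer> window = new ArrayDeque<>();

    WorkingSet(int tau) { this.tau = tau; }

    // Record a page reference; once the window is full, the oldest
    // reference drops off the other end.
    void reference(int page) {
        window.addLast(page);
        if (window.size() > tau) window.removeFirst();
    }

    // The working set: the distinct pages referenced within the window.
    Set<Integer> pages() { return new HashSet<>(window); }

    public static void main(String[] args) {
        WorkingSet ws = new WorkingSet(4);
        for (int p : new int[]{1, 2, 1, 3, 5, 5}) ws.reference(p);
        // The last 4 references are 1, 3, 5, 5, so the working set is {1, 3, 5}.
        System.out.println(ws.pages());
    }
}
```

If every process's working set computed this way fits in RAM at once, the reference pattern stays within resident pages and the constant RAM-to-disk swapping described above (thrashing) is avoided.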