Tuesday, November 20, 2007

Assignment #1

THE TI-99 HOME COMPUTER TIMELINE
Bill Gaskill
1993 marks the 10th anniversary of Texas Instruments' decision to abandon the Home Computer. I have compiled the information in this timeline not in celebration of TI's decision to orphan the 99/4A, but rather to honor the community that remains ten years later. I hope you enjoy reading it.
THE BIRTH OF THE MICROCOMPUTER INDUSTRY
1947: Bell Labs engineers John Bardeen, Walter Brattain and William Shockley invent the transistor, which paves the way for the creation of smaller computers.
1955: IBM becomes the first computer manufacturer to offer plug-in peripherals for their computers. Although the computers are of the mainframe type, the concept will catch on and become an integral part of microcomputer technology.
1959: Texas Instruments releases the first integrated circuit after its engineers figure out how to put more than one transistor on the same material and connect them without wires.
1964: John G. Kemeny and Thomas E. Kurtz develop the BASIC programming language at Dartmouth College. BASIC will become a mainstay in the microcomputer world.
1969: Intel, then a one-year-old company, releases a 1K-bit RAM chip, the largest amount of RAM ever put on an integrated circuit up to that time.
1972: Intel introduces the 8008 chip in April 1972. It becomes the first 8-bit microprocessor to hit the market.
- Nolan Bushnell founds Atari and ships the Pong game.
1973: The first "mini" floppy disk is introduced.
1974: Intel introduces the 8080 chip in April 1974. The 8080 is the first microprocessor capable of addressing 64K bytes of memory.
- Texas Instruments releases the TMS 1000 4-bit chip. It becomes an immediate success as over 100 million are sold for use in video games, microwave ovens, calculators and other electronics products.
- In an article appearing in the July 1974 issue of Radio Electronics, author Jonathan Titus tells readers how to build the Mark 8 "personal minicomputer."
- Motorola begins work on the M6800 chip, designed by Chuck Peddle. Peddle would later leave Motorola to join MOS Technology, the creators of the 6502 chip. Peddle ultimately became Commodore's Systems Division Director, responsible for the release of the PET 2001 in October 1977, after Commodore acquired MOS Technology in order to have its own chip source.
- Naval Postgraduate School instructor Gary Kildall creates CP/M (an acronym for Control Program for Microcomputers), a new operating system for Intel's 8080 microprocessor. It sells for $70.
- Creative Computing magazine is founded by David H. Ahl in Morristown New Jersey.
1978: The PLATO computer-aided instruction system is developed at the University of Illinois. Control Data Corporation would license these applications to Texas Instruments late in 1983, but by then the fate of the Home Computer was already sealed.
- Machine and operating system independent UCSD Pascal is released by the Regents of the University of California at San Diego for $200.
- In March, Texas Instruments begins trying to recruit personal computer specialists by running full-page ads entitled "Your Experience with personal computers is going to open an unlimited career at TI" in trade publications. The ads seek qualified applicants for Personal Computer Product Marketing Managers, Systems Programmers, Digital Design Engineers, Product Design Engineers, Application Software Specialists and Marketing Support Engineers. The recruitment efforts are largely unsuccessful when potential applicants discover the job is in Lubbock, Texas, rather than close to the center of the microcomputer industry, northern California's Silicon Valley, situated only an hour's drive from San Francisco.
- In April, Texas Instruments releases a recreational Solid State Software Leisure Library module for the TI58 and 59 programmable calculators, coining and trademarking the term Solid State Software.
- Intel introduces the 8086 microprocessor.
- In August, MicroPro releases Seymour Rubinstein's Word-Master word processor, the predecessor to WordStar.
- Illinois residents Ward Christensen and Randy Suess create the first microcomputer bulletin board system, conceived, designed, built, programmed, tested and installed in the 30-day period between January 16th and February 16th, 1978.
- The $895 Exidy Sorcerer is released in October by Exidy Computers of Sunnyvale, California. The machine sports 8K RAM, a 64 column by 30 row screen and the ability to use plug-in modules which are the size of 8-track tapes. The Sorcerer appears to be the first "Home Computer" to support ROM cartridge use.
- In December Axiom Corporation introduces the EX-801 printer and EX-820 printer/plotter for $495 and $795 respectively. Both have available interfaces for the Apple II, TRS-80, PET and Exidy personal computers.
- Epson introduces the MX-80 dot matrix printer, shocking the industry with its low price and high performance.
- Over 14 million microprocessors are manufactured by year's end, with the 8-bit 6502 chip and TI's 4-bit TMS 1000 chip leading the pack.
JAN 1979: Double sided disk drives are announced but few are available as manufacturers run into difficulty gearing up for production.
FEB 1979: Rumors begin to fly about TI's new personal computer, despite the fact that it has not been formally announced. The rumors say the computer will have 40K of ROM, generate 20 lines of 40 characters on a standard television, include provisions for accommodating video disk players and video tape recorders, and support sophisticated sound production.
2. A regional bank might decide to buy six servers instead of one supercomputer for two reasons:
1. A supercomputer is generally used only by very large organizations, while a regional bank needs many computers so that its employees can serve the customers who need services and get their own work done faster.
2. A regional bank needs six server computers rather than a supercomputer because a supercomputer is a single machine, while six servers can be used by many of the bank's employees at the same time.

Sunday, November 18, 2007

Assignment #2

Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact that memory may be physically fragmented and may even overflow onto disk storage. Systems which use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.
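As a toy illustration of that fragmentation (every number and the little page table below are invented for the example, not taken from any real system), here is a short Python sketch in which four contiguous virtual pages map to scattered physical frames, with one page sitting out on disk:

# Toy model of virtual-to-physical address translation.
# Page size, frame numbers and the tiny page table are illustrative only.
PAGE_SIZE = 4096  # bytes per page

# Contiguous virtual pages 0..3 map to scattered physical frames;
# virtual page 2 currently lives on disk (no frame assigned).
page_table = {
    0: {"frame": 7,    "on_disk": False},
    1: {"frame": 2,    "on_disk": False},
    2: {"frame": None, "on_disk": True},   # touching this page would fault
    3: {"frame": 11,   "on_disk": False},
}

def translate(virtual_address):
    """Return the physical address, or None if the page is not resident."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    entry = page_table[page_number]
    if entry["on_disk"]:
        return None   # the OS would have to service a page fault here
    return entry["frame"] * PAGE_SIZE + offset

print(translate(4100))   # virtual page 1, offset 4 -> 2 * 4096 + 4 = 8196
print(translate(8200))   # virtual page 2 -> None, the page is on disk

The program sees one contiguous range of addresses even though the frames behind it are scattered and one page is not in RAM at all.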
Operating systems based on Microsoft Windows NT technologies have always provided applications with a flat 32-bit virtual address space that describes 4 gigabytes (GB) of virtual memory. The address space is usually split so that 2 GB of address space is directly accessible to the application and the other 2 GB is only accessible to the Windows executive software. The virtual address space of processes and applications is still limited to 2 GB unless the /3GB switch is used in the Boot.ini file. When the physical RAM in the system exceeds 16 GB and the /3GB switch is used, the operating system will ignore the additional RAM until the /3GB switch is removed. This is because of the increased size of the kernel required to support more Page Table Entries. The assumption is that the administrator would rather not lose the /3GB functionality silently and automatically, so the administrator is required to change this setting explicitly.
Virtual memory (VM) in a UNIX operating system is just what the name says: virtual. The VM size reported for a process does not consume any disk space. Unless the system is actually swapping, there is no need to worry about the swap file's size or its location. Swapping activity can be observed in the "0(0) pageouts" figure in the last header line of the top command in Terminal. Another useful Terminal command is vm_stat(1) (see man vm_stat), which also displays the number of pageouts. A non-zero pageout value indicates that physical memory is being paged (swapped) out to the swap file. This I/O is done in page-sized chunks; a page is 4096 bytes. Physical memory is paged (swapped) to the swap file when it is being over-subscribed. The best way to avoid frequent over-subscription of physical memory is to run fewer applications at the same time or to install more physical memory. When physical memory becomes over-subscribed, the OS seeks out inactive memory pages and copies them to the swap file in order to make room for the active memory pages, which may themselves have to be copied from the swap file back into physical memory.
A page is a fixed-length block of memory that is used as the unit of transfer between physical memory and external storage such as a disk, and a page fault is an interrupt (or exception) raised by the hardware when a program accesses a page that is mapped in its address space but not loaded in physical memory. The hardware that detects this situation is the memory management unit in the processor. The exception-handling software that handles the page fault is generally part of the operating system, which tries to handle the fault by making the required page accessible at a location in physical memory, or kills the program if the access is illegal.
The working set of information W(t, τ) of a process at time t is defined as the collection of information referenced by the process during the process time interval (t − τ, t). Typically the units of information in question are memory pages. The working set is suggested to be an approximation of the set of pages that the process will access in the future (say, during the next τ time units), and more specifically an indication of which pages ought to be kept in main memory to allow the most progress to be made in the execution of that process.
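To make the page-fault mechanism above concrete, here is a minimal demand-paging simulation in Python. The reference string, the frame count and the FIFO eviction policy are all assumptions made just for this sketch (real kernels use more sophisticated replacement policies); running it with different frame counts shows how strongly the fault rate depends on how many of a process's pages are kept resident, which is exactly the trade-off discussed next.

from collections import deque

# Count page faults over a page-reference string with a fixed number of
# physical frames. FIFO eviction keeps the sketch short; it is not meant
# to model any particular operating system.
def count_page_faults(reference_string, num_frames):
    resident = set()   # pages currently in physical memory
    fifo = deque()     # eviction order, oldest resident page first
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1                        # page fault: page not resident
            if len(resident) == num_frames:    # memory full, evict the oldest
                resident.discard(fifo.popleft())
            resident.add(page)
            fifo.append(page)
    return faults

refs = [0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]    # made-up reference string
for frames in (2, 3, 4):
    print(frames, "frames ->", count_page_faults(refs, frames), "faults")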
The effect of the choice of which pages to keep in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then the page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (the most recently used pages) can be held in RAM. The model is an all-or-nothing model: if the set of pages the process needs grows and there is no room in RAM, the process is evicted from memory to free the memory for other processes to use.
In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. Thus it optimizes CPU utilization and throughput.
The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window: at each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced within the working set window.
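Here is a minimal Python sketch of that bookkeeping; the reference string and the window length τ are invented for the example. After each reference, only the last τ references are kept, and the working set is simply the set of distinct pages among them:

from collections import deque

TAU = 4  # working set window length, in memory references (assumed value)

# Track the working set with a moving window: the deque holds the last TAU
# references, so the oldest one drops off automatically as new ones arrive.
def working_sets(reference_string, tau=TAU):
    window = deque(maxlen=tau)
    for page in reference_string:
        window.append(page)
        yield page, set(window)   # current working set W(t, tau)

refs = [1, 2, 1, 3, 4, 4, 4, 2, 5, 5]
for page, ws in working_sets(refs):
    print("referenced", page, "-> working set", sorted(ws))

Counting how many distinct pages the window holds at each step gives the number of frames the process would need in order to keep its working set resident.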
Page size is usually determined by the processor architecture. Traditionally, pages in a system had a uniform size, for example 4096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes because of the associated benefits and penalties, and several points can factor into choosing the best page size.
Intel x86 supports 4 MB pages (2 MB pages if using PAE) in addition to its standard 4 kB pages, and other architectures often have a similar feature. IA-64 supports as many as eight different page sizes, from 4 kB up to 256 MB. This support for huge pages (or, in Microsoft Windows terminology, large pages) allows "the best of both worlds": it reduces the pressure on the TLB cache (sometimes increasing speed by as much as 15%, depending on the application and the allocation size) for large allocations while still keeping memory usage at a reasonable level for small allocations.
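The arithmetic behind that TLB argument is easy to see. The sketch below assumes a 64-entry TLB (an invented figure, not one from the text or any particular processor) and compares how much memory a fully populated TLB can cover, its "reach", at 4 kB and 4 MB page sizes:

# TLB reach = number of TLB entries * page size. The 64-entry TLB is an
# assumed example figure, not a specification of any real CPU.
TLB_ENTRIES = 64

def tlb_reach(page_size_bytes, entries=TLB_ENTRIES):
    return entries * page_size_bytes

for label, size in [("4 kB pages", 4 * 1024), ("4 MB pages", 4 * 1024 * 1024)]:
    print(label, "->", tlb_reach(size) // 1024, "KiB of memory covered")

With 4 kB pages the example TLB covers only 256 KiB before misses start costing page-table walks; with 4 MB pages the same 64 entries cover 256 MiB, which is where the speedup for large allocations comes from.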
Huge pages, despite being implemented in most contemporary personal computers, are not in common use except in large servers and computational clusters. Commonly, their use requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages), or manual administrator configuration; operating systems commonly, sometimes by design, cannot page them out to disk.
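On Linux, for example, that administrator configuration is usually visible in /proc/meminfo. The sketch below just reads the huge-page counters from that file; it assumes a Linux kernel that exposes the HugePages_* and Hugepagesize fields and simply reports nothing elsewhere:

# Rough check of huge-page configuration on Linux via /proc/meminfo.
# On systems without that file (or without hugetlbfs) it reports nothing.
def hugepage_info(path="/proc/meminfo"):
    info = {}
    try:
        with open(path) as f:
            for line in f:
                key, _, rest = line.partition(":")
                if key.startswith("HugePages") or key == "Hugepagesize":
                    info[key] = rest.strip()
    except FileNotFoundError:
        pass   # not Linux, or /proc is unavailable
    return info

print(hugepage_info() or "no huge-page counters found")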
Linux has supported huge pages on several architectures since the 2.6 series. Windows Server 2003 (SP1 and newer) and Windows Server 2008 support huge pages under the name of large pages, but Windows XP and Windows Vista do not.
The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously; then, the only time you "feel" the slowness of virtual memory is the slight pause when you are changing tasks. When that is the case, virtual memory is perfect.
When that is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.