Philip Grin
Notes for Wednesday, 4/16/03
****************************

Purpose of this lecture:
Each thread has its own stack, but all threads share the code and static data. Code is read-only, while the static data can change in content but is a fixed size.

Purpose of last time's lecture:
-A compiler is a big program that takes instructions written in a HLL and creates an assembly file written in assembly language.
-Schedule execution of contexts in any order.

Process - a pile of code currently issuing a sequence of instructions in some predetermined order, given a set of inputs.
Computer - a pile of hardware executing a pile of code currently issuing a sequence of instructions in some predetermined order, given a set of inputs.

A program can execute in different ways, either like a process or like a task.
Process: execution in a predetermined order.
Task: execution in any order.

If code can execute in any order, the CPU is better utilized; executing code in a predetermined order forces the CPU to sit idle whenever the program does I/O. Executing code in any order minimizes the CPU's idle time. Since CPUs cost more than RAM, it is cheaper to buy more RAM and execute multiple programs on one CPU than to give each program its own CPU.

Operating systems of the past and the present:

MS-DOS: 1 address space (the 8086 architecture couldn't handle more than one), 1 thread. Only worked with a small number of programs.

Old UNIX: multiple address spaces, 1 thread per address space; had a privilege bit and ASIDs. Cost more than MS-DOS but got cheaper over time. Why did it cost more? It took more people to build, and fewer people bought it.

Mach and current UNIX: multiple address spaces, more than one thread per address space. Winners: everyone.

Java: 1 address space and more than one thread per address space. The execution environment controls the threads.
Set of rules: type safety, protection integrated with the language.

Address space:
Code and static data are fixed size. Static data is allocated at program load time. The stack and heap grow in opposite directions to maximize the use of memory. The stack, heap, and static data change, but the code doesn't. Change takes work and time, which makes the heap and stack harder to work with than code and static data.

TASKS:
Instead of having one stack that grows up and down, each thread is given a stack with a fixed size of 64KB. Each stack is kept track of with a current size and a maximum size. Since each stack has a fixed size, there can only be a fixed number of stacks and therefore a fixed number of threads. All threads share the code, and only one copy of the code is needed since it is read-only. Static data is also shared, but each thread is able to write to it, which complicates things: one thread might expect some data to be in static data that another thread has changed. The threads also share the heap. The heap and static data are hard to keep track of, especially the heap, since each thread can change the heap's contents and the heap's size varies.