Resources about shell/user-level stuff
Lab 1 skeleton was locked out
Labs 2-4: not quite ready yet
Abstractions let us build large-scale systems
Choice of algorithm, implementation language, compiler flags, & low-level implementation details can make dramatic differences in performance
Most of the time, programmer time is more expensive than computer time
Some cases where performance matters (not exhaustive):
Real-time systems: hard (i.e. must-be-met) deadlines on computation
Zavie story: in the 1980s, research weather models were producing decent 10-day forecasts, but 3 days late
Payroll systems have hard deadlines
History: how we got to where we are (from single-process to batch to multi-processing)
Operating systems: no standard definition
Key concerns:
Layered model TODO: diagram
Processor modes
Processes make system calls/requests/traps to ask kernel for something
TODO: diagram
Processor (CPU) operates in 2 (or more) modes
User mode
Kernel mode
On system startup (bootstrap): processor is in supervisor mode
Kernel executes a special instruction (IRET) to switch to user mode
User --> supervisor mode:
Interrupt/trap transfers control back to kernel & switches processor back to supervisor mode
Trap/interrupt: program counter jumps to address contained in interrupt vector
ISR may disable (mask) interrupts while serving current interrupt
Non-maskable/priority interrupt: cannot be masked
Kernel --> user mode: IRET
Core program
Leaky abstraction (muddying the model):
Userland processes interact with kernel via system calls (traps)
Want kernel to be small & fast
I/O: core functions implemented with 4 calls: open, close, read n bytes & write n bytes
Higher-level I/O (printf, <<, etc.) implemented in userland library functions
Kernel is responsible for:
Compiler support: non-standard language features
Process: essentially a running program
Kernel starts up initial process, process #1: init
init reads config file(s), makes system calls to ask kernel to create other processes (e.g. getty)
init --> getty --> login --> bash (shell)
Process is (essentially) running program: how to create & tear down processes?
Program startup:
Command-line arguments: -f, --flag, --key=value, filename...
Pre-opened file descriptors: 0 (stdin, cin, ...), 1 (stdout, cout, ...), 2 (stderr, cerr, ...)
Exit status:
EXIT_SUCCESS = 0
EXIT_FAILURE = 1
grep: found/not-found
test: boolean expression
Plus other stuff not important right now
Unix design decision: separate process creation from program invocation
fork(): clone current process (the one that called fork)
Processes form a tree; initial process (init) is root
When child process terminates, kernel holds child's exit status value until parent process collects it
Parent collects the status via the wait/waitpid system calls
wait doesn't necessarily wait: it returns immediately if a child has already terminated
Until parent calls wait, kernel maintains child's entry in process table (a zombie)
Orphaned children are adopted by process #1 (init)
init's other function is to reap orphan zombies
Design principles:
In C (API layer), system calls look like ordinary function calls
Examples:
Processes: fork, exec, wait, signal, kill
Files: rename, unlink, chmod, chown
File I/O: open, close, read, write, lseek
Devices: open, close, read, write, ioctl
Networking: bind, listen, connect, send, read, write
Interprocess communication: signal, kill, pipe, shmget, mmap
Miscellaneous: umask, getpid, getppid, getcwd, chdir
Arguments to/results from system calls typically stored in registers
Return values from system calls are integers; by convention, -1 indicates an error
File descriptor: small integer representing open file
Convention:
0: standard input (stdin, cin, ...)
1: standard output (stdout, cout, ...)
2: standard error (stderr, cerr, ...)
Most programs expect files 0, 1, 2 to be pre-opened
To run a program: fork, then exec
wait to get child exit status
Shell exposes the most recent exit status as $?
C library:
fopen returns FILE* object (handle into userland library object)
Contrast with the open system call, which returns a file descriptor
fileno returns file descriptor for given FILE* object
fdopen takes open file descriptor, wraps new FILE object around it & returns FILE*
Common paradigm (pattern): producer-consumer
Output of one program is input to another
Problems:
Solution: pipe: data "flows" between processes
pipe system call returns pair of open file descriptors for producer (writer) & consumer (reader)
Combine with fork & exec to connect one program's output to another's input