DISCLAIMER: The content of this page is still being updated to 2026. In particular, the assignment links are not set yet; they will be added progressively starting Jan 27, 2026, when we discuss the first assignment. We also provide all lectures from Spring’24 for reference, as well as links to the Spring’26 slides (usually identical) for the same lectures, which are typically made available shortly before each lecture starts. Last update: February 10, 2026

Key to Notation
Readings will be from Operating System Concepts by Silberschatz, Galvin, and Gagne, 10th Edition. John Wiley & Sons, Inc. ISBN-13: 978-1119800361. [SGG]

Additional Useful References
(1) Andrew S. Tanenbaum and Herbert Bos. Modern Operating Systems, 4th Edition. Prentice Hall, 2014. ISBN: 013359162X / 978-0133591620. [AT]
(2) Thomas Anderson and Michael Dahlin. Operating Systems: Principles and Practice, 2nd Edition. Recursive Books. ISBN: 0985673524 / 978-0985673529. [AD]
(3) Kay Robbins and Steve Robbins. Unix Systems Programming, 2nd Edition. Prentice Hall. ISBN-13: 978-0-13-042411-2. [RR]
(4) Brian W. Kernighan and Dennis M. Ritchie. The C Programming Language, 2nd Edition. Prentice Hall. ISBN: 0131103628 / 978-0131103627.
(5) Doug Lea. Concurrent Programming in Java(TM): Design Principles and Patterns, 2nd Edition. Prentice Hall. ISBN: 0201310090 / 978-0201310092.

IMPORTANT NOTE: Below is the schedule for CS370 in Spring’24. For your reference, we provide the slides for each lecture from CS370 SP24, the material by Prof. Pallickara that Prof. Pouchet uses to teach CS370. The exact slide content will receive minor adjustments during the semester and will typically be posted the day of the lecture. Students can safely peek at future lectures from prior editions of CS370; only minor adjustments are expected.
Note that the dates listed for assignments are the dates they become officially available; due dates are typically 3+ weeks after the assignment date.

Introduction
This module provides an overview of the course, grading criteria, and a brief introduction to high-level operating systems concepts. We will explore the differences between kernel mode and user mode and why they exist. Ch {1, 2} [SGG]
Ch {1} [RR]
Ch {1} [AT]

Ch {1} [AD]
HW1 01/27/26

Term-Project (TP) 01/29/26
Objectives:

  1. Summarize basic operating systems concepts
  2. Highlight key developments in the history of operating systems
01/20/26
01/22/26
Lecture 1 (Spring’26)
Lecture 2 (Spring’26)
Lecture 1 (Spring’24, for reference)
Lecture 2 (Spring’24, for reference)
Processes
Processes are a foundational construct for organizing the computation performed by a program. This module will contrast the differences between programs and processes. A key idea covered in this module is the notion of multiprogramming, which can be used to give the illusion that multiple processes are executing concurrently. We will explore the layout of processes in memory and the various metadata elements regarding a process that are organized within a Process Control Block (PCB). The PCB plays a foundational role in how the OS context-switches between different processes. Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [RR]
Ch {2, 3} [AD]

HW2 01/29/26
Objectives:

  1. Contrast programs and processes
  2. Explain the memory layout of processes
  3. Describe Process Control Blocks
  4. Explain the notion of Interrupts and Context Switches
  5. Describe process groups
01/27/26
01/29/26

Lecture 3 (Spring’26)


Lecture 4 (Spring’26)


Lecture 3 (Spring’24, for reference)

Lecture 4 (Spring’24, for reference)
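
To make the program-vs-process distinction concrete, here is a minimal sketch, assuming a POSIX system (fork(), getpid(), and waitpid() from <unistd.h> and <sys/wait.h>). One program yields two processes; after fork() each has its own PID and its own copy of the variable x, since their address spaces are separate.

    /* Minimal sketch (POSIX assumed): one program, two processes after fork(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int x = 42;                 /* each process gets its own copy after fork() */
        pid_t pid = fork();         /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {             /* child */
            x = 7;                  /* changes only the child's copy */
            printf("child  pid=%d x=%d\n", getpid(), x);
        } else {                    /* parent */
            waitpid(pid, NULL, 0);  /* wait for the child to terminate */
            printf("parent pid=%d x=%d\n", getpid(), x);
        }
        return 0;
    }
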
Inter-Process Communications
A key role of the OS is to ensure that processes execute in isolation and have no ability to influence each other’s execution. In this module, we will explore how the OS nevertheless allows processes to communicate with each other. We will look at the three different mechanisms used to accomplish this. Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [AD]

HW3 02/05/26
Objectives:

  1. Explain inter-process communications based on shared memory
  2. Explain inter-process communications based on pipes
  3. Explain inter-process communications based on message passing
  4. Contrast inter-process communications based on shared memory, pipes, and message passing
  5. Design programs that implement inter-process communications
02/03/26
02/05/26

Lecture 5 (Spring’26)

Lecture 6 (Spring’26)

Lecture 5 (Spring’24, for reference)

Lecture 6 (Spring’24, for reference)
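
As a minimal sketch of one of the IPC mechanisms discussed above (assuming a POSIX system), the parent below creates a pipe, forks, and writes a message that the child reads from the other end. Shared memory and message passing follow the same idea of an OS-mediated channel, just with different system calls.

    /* Minimal sketch (POSIX assumed): parent sends one message to its child over a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                        /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                   /* child: reads from the pipe */
            close(fd[1]);
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
        } else {                          /* parent: writes into the pipe */
            close(fd[0]);
            const char *msg = "hello from the parent";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            waitpid(pid, NULL, 0);
        }
        return 0;
    }
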
Threads
A thread can be thought of as a lightweight process. Threads exist within the confines of a single process. Why would we want a kind of process within a process? The key reasons are simplified data sharing and fast context switches. The sharing occurs at a scale and simplicity that would be very difficult to accomplish across processes. Ch {4} [SGG]
Ch {2} [AT]
Ch {12} [RR]
Ch {4} [AD]
Objectives:

  1. Explain differences between processes and threads
  2. Compare multithreading models
  3. Contrast differences between user and kernel threads
  4. Relate dominant threading libraries: POSIX, Win32, and Java
  5. Design threaded programs that can synchronize their actions
02/10/26
02/12/26
Lecture 7 (Spring’26)

Lecture 7 (Spring’24, for reference)
Lecture 8 (Spring’24, for reference)
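
The following minimal sketch (assuming POSIX threads; compile with gcc -pthread) illustrates the data-sharing point made above: both threads increment the same counter because they live in the same address space, and a mutex keeps the updates from interleaving incorrectly. Achieving the same sharing across separate processes would require explicitly setting up shared memory and cross-process synchronization.

    /* Minimal sketch (POSIX threads assumed): two threads share one counter, guarded by a mutex. */
    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;                           /* shared by all threads in the process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);                 /* serialize access to the shared counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);            /* 200000 with the mutex in place */
        return 0;
    }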

Process Synchronization
When multiple processes cooperate with each other concurrently, they must synchronize their actions. A key consideration is the correctness and safety of the mechanisms we use: incorrect solutions can fail in subtle ways. We will look at several classical problems in synchronization to help you understand the core issues that arise during process synchronization. Ch {5} [SGG]
Ch {4} [AT]
HW4 02/19/26
Objectives:

  1. Formulate the critical section problem.
  2. Dissect a software solution to the critical section problem (case study: Peterson’s solution)
  3. Explain Synchronization hardware and Instruction Set Architecture support for concurrency primitives.
  4. Assess classic problems in synchronization: bounded buffers, readers-writers, dining philosophers.
02/17/26

02/19/26

02/24/26

Lecture 9 (Spring’24, for reference)

Lecture 10 (Spring’24, for reference)

Lecture 11 (Spring’24, for reference)
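
As a minimal sketch of one classic problem from this module, the bounded-buffer (producer/consumer) solution below uses two counting semaphores plus a mutex, assuming POSIX threads and unnamed semaphores (sem_init, available on Linux); error handling is omitted for brevity.

    /* Minimal sketch: bounded buffer with one producer and one consumer.
     * empty_slots counts free slots, full_slots counts filled slots,
     * and the mutex protects the buffer indices themselves. */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 8                                 /* buffer capacity */

    static int buf[N];
    static int in = 0, out = 0;                 /* next slot to fill / to empty */
    static sem_t empty_slots, full_slots;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 1; i <= 32; i++) {
            sem_wait(&empty_slots);             /* block until a slot is free */
            pthread_mutex_lock(&lock);
            buf[in] = i;
            in = (in + 1) % N;
            pthread_mutex_unlock(&lock);
            sem_post(&full_slots);              /* announce one more filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 32; i++) {
            sem_wait(&full_slots);              /* block until a slot is filled */
            pthread_mutex_lock(&lock);
            int item = buf[out];
            out = (out + 1) % N;
            pthread_mutex_unlock(&lock);
            sem_post(&empty_slots);             /* announce one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N);           /* N free slots initially */
        sem_init(&full_slots, 0, 0);            /* no filled slots initially */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }
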

Atomic Transactions
This module will cover issues relating to preserving the atomicity of transactions. We will explore issues that arise when multiple transactions need to execute concurrently while preserving safety properties. Ch {5} [SGG]
Objectives:

  1. Explain serializability of transactions
  2. Assess locking protocols
  3. Explain checkpointing and rollback recovery in transactional systems
02/26/26 Lecture 12 (Spring’24, for reference)
Midterm I (03/10/26): covers all topics up to and including Lecture 13.
CPU Scheduling Algorithms
The CPU scheduler is responsible for ensuring that multiple processes can make forward progress. The scheduling algorithm must balance several competing objectives: latency, throughput, priority, and fairness. We will look at a range of scheduling algorithms that address these objectives. Ch {6} [SGG]
Ch {7} [AD]
Ch {2} [AT]

HW5 03/05/26
Objectives:

  1. Assess scheduling criteria including fairness and time quanta.
  2. Explain and contrast different approaches to scheduling: preemptive and non-preemptive
  3. Explain and assess scheduling algorithms: FCFS, shortest jobs, priority, round-robin, multilevel feedback queues, and the Linux completely fair scheduler.
  4. Understand how CPU scheduling algorithms function on multiprocessors.
03/03/26
03/05/26
03/12/26

Lecture 13 (Spring’24, for reference)

Lecture 14 (Spring’24, for reference)
Lecture 16 (Spring’24, for reference)
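
To see why the choice of algorithm matters, here is a minimal sketch with hypothetical burst times (24, 3, and 3 time units, all jobs arriving at t=0): FCFS serves the jobs in arrival order, while SJF runs the shortest bursts first, cutting the average waiting time sharply.

    /* Minimal sketch (hypothetical burst times): average waiting time under
     * FCFS (arrival order) versus SJF (shortest burst first), all arrivals at t=0. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    static double avg_wait(const int *burst, int n) {
        double total_wait = 0;
        int t = 0;                  /* each job waits until all earlier jobs finish */
        for (int i = 0; i < n; i++) {
            total_wait += t;
            t += burst[i];
        }
        return total_wait / n;
    }

    int main(void) {
        int fcfs[] = {24, 3, 3};    /* served in arrival order */
        int sjf[]  = {24, 3, 3};
        int n = 3;
        qsort(sjf, n, sizeof(int), cmp);                          /* SJF: shortest first */
        printf("FCFS average wait: %.1f\n", avg_wait(fcfs, n));   /* (0+24+27)/3 = 17.0 */
        printf("SJF  average wait: %.1f\n", avg_wait(sjf, n));    /* (0+3+6)/3   =  3.0 */
        return 0;
    }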
Deadlocks
A large number of processes compete for limited resources on the machine. Incorrect synchronization between these competing processes can lead to deadlocks. In this module, we will look at how to characterize deadlocks and the various mechanisms we can use to prevent them by negating the structural requirements necessary for deadlocks to occur. Ch {7} [SGG]
Ch {6} [AT]
Ch {4} [AD]
HW-ExtraCredit 03/24/26
Objectives:

  1. Explain deadlock characterization
  2. Contrast and explain schemes for deadlock prevention
  3. Evaluate approaches to deadlock avoidance
  4. Understand recovery from deadlocks
03/24/26
03/26/26
Lecture 17 (Spring’24, for reference)
Lecture 18 (Spring’24, for reference)
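
One of the prevention schemes this module covers is negating the circular-wait condition by imposing a global lock-acquisition order. The minimal sketch below (assuming POSIX threads) shows the safe pattern: both threads take lock A before lock B, so neither can end up holding one lock while waiting forever for the other.

    /* Minimal sketch (POSIX threads assumed): deadlock prevention via lock ordering. */
    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    /* Both threads acquire A, then B. If one thread instead took B first while the
     * other held A, each could block forever on the lock the other holds. */
    static void *task(void *name) {
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        printf("%s holds both locks\n", (const char *)name);
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, (void *)"thread 1");
        pthread_create(&t2, NULL, task, (void *)"thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }
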
Memory Management
Memory is a shared resource that must be effectively managed across the different processes that are executing concurrently. Given that Instruction Set Architectures (ISAs) operate on data in registers and memory, how memory is managed and shared across competing processes has implications for performance, including completion times and throughput. Ch {8} [SGG]
Ch {3} [AT]
Ch {9} [AD]
Objectives:

  1. Understand address binding and address spaces
  2. Explain contiguous memory allocation, including its advantages and disadvantages.
  3. Analyze the key constructs underpinning paging systems, including hardware support, shared pages, and the structure of page tables.
  4. Explain memory protection in paging environments
  5. Explain segmentation based approaches to memory management alongside settings in which they are particularly applicable.
03/31/26
04/02/26

04/07/26

04/09/26

Lecture 19 (Spring’24, for reference)
Lecture 20 (Spring’24, for reference)
Lecture 21 (Spring’24, for reference)
Lecture 22 (Spring’24, for reference)
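
Paging hinges on splitting a virtual address into a page number and an offset, then swapping the page number for a frame number. The minimal sketch below uses hypothetical values (32-bit addresses, 4 KiB pages, a four-entry toy page table) purely to illustrate the bit manipulation involved.

    /* Minimal sketch (hypothetical 4 KiB pages, toy page table): virtual-to-physical translation. */
    #include <stdio.h>

    #define PAGE_SIZE   4096u                 /* 2^12 bytes per page (assumed) */
    #define OFFSET_BITS 12u

    int main(void) {
        /* toy page table: virtual page i is stored in frame frame_of[i] */
        unsigned frame_of[] = {5, 9, 7, 2};

        unsigned vaddr  = 0x00002ABCu;                     /* example virtual address   */
        unsigned page   = vaddr >> OFFSET_BITS;            /* high bits select the page */
        unsigned offset = vaddr & (PAGE_SIZE - 1);         /* low 12 bits pass through  */
        unsigned paddr  = (frame_of[page] << OFFSET_BITS) | offset;

        printf("vaddr=0x%08X -> page=%u, offset=0x%03X -> paddr=0x%08X\n",
               vaddr, page, offset, paddr);                /* page 2 maps to frame 7    */
        return 0;
    }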
Virtual Memory
Pure paging-based memory allocation schemes require processes to be entirely memory-resident. This is often infeasible and wasteful. In this module we will explore algorithms that facilitate effective allocation of memory while minimizing wasteful allocations. We consider aspects of program behavior (such as the working set model) that reduce the total number of pages that need to be allocated to a process. Ch {9} [SGG]
Ch {3} [AT]
Objectives:

  1. Explain demand paging and page faults
  2. Contrast page replacement algorithms and explain Belady’s anomaly
  3. Justify the rationale for stack algorithms
  4. Explain frame allocations
  5. Synthesize the concepts of thrashing and working sets
04/14/26
04/16/26
Lecture 23 (Spring’24, for reference)
Lecture 24 (Spring’24, for reference)
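
Belady’s anomaly, one of the objectives above, can be reproduced with a few lines of simulation. The minimal sketch below counts FIFO page faults for the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5: with 3 frames it incurs 9 faults, and with 4 frames it incurs 10.

    /* Minimal sketch: FIFO page-replacement fault counts on a reference string
     * that exhibits Belady's anomaly (adding a frame increases the fault count). */
    #include <stdio.h>

    static int fifo_faults(const int *ref, int n, int frames) {
        int mem[16];                             /* resident pages (frames <= 16 assumed) */
        int next = 0, used = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (mem[j] == ref[i]) { hit = 1; break; }
            if (!hit) {                          /* page fault */
                if (used < frames) mem[used++] = ref[i];                  /* free frame available */
                else { mem[next] = ref[i]; next = (next + 1) % frames; }  /* evict oldest page */
                faults++;
            }
        }
        return faults;
    }

    int main(void) {
        int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof ref / sizeof ref[0];
        printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));    /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));    /* 10 */
        return 0;
    }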
Virtualization
Virtualization creates the illusion of multiple (virtual) machines on the same physical hardware. It allows a single computer to host multiple virtual machines, each potentially running a different OS. As part of this module we will look at Type-1 and Type-2 hypervisors and techniques for effective virtualization. Ch {7} [AT]
Ch {16} [SGG]
Ch {10} [AD]
Objectives:

  1. Explain Virtual Machine Monitors (VMMs)
  2. Justify the Popek and Goldberg requirements for virtualization
  3. Explain how Virtualization works in the x86 architecture
  4. Compare Type-1 and Type-2 Hypervisors
04/21/26
04/23/26
Lecture 25 (Spring’24, for reference)
Lecture 26 (Spring’24, for reference)
File Systems
Data managed on a hard disk must be amenable to updates, discovery, and retrieval. The underlying storage system only deals with disk blocks. In this module we explore a foundational construct in file systems: the file control block. We will explore how the design of the file control block informs the efficiency of content retrieval. We will round out our discussion of file systems with a look at the Unix file system, the File Allocation Table (FAT), and the NT file system (NTFS). Ch {5} [AT]
Ch {4} [RR]
Ch {10, 11} [SGG]
Objectives:

  1. Summarize file system structure
  2. Contrast contiguous allocation vs indexed allocations
  3. Explain the Unix File System
  4. Explain and contrast Windows file systems: the File Allocation Table (FAT) and NTFS
04/28/26
04/30/26
Lecture 27 (Spring’24, for reference)
Lecture 28 (Spring’24, for reference)
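
On Unix-like systems, the per-file metadata this module describes as the file control block is exposed through the inode. The minimal sketch below (assuming a POSIX system) uses stat() to print a few inode fields for any path passed on the command line.

    /* Minimal sketch (POSIX assumed): inspect inode metadata with stat(). */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        struct stat st;
        if (stat(argv[1], &st) == -1) {
            perror("stat");
            return 1;
        }
        printf("inode number: %llu\n", (unsigned long long)st.st_ino);
        printf("size:         %lld bytes\n", (long long)st.st_size);
        printf("hard links:   %ld\n", (long)st.st_nlink);
        printf("512B blocks:  %lld\n", (long long)st.st_blocks);
        return 0;
    }
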
Summary lecture for CS370
Objectives:

  1. Review concepts evaluated in the comprehensive final exam
05/05/26 Lecture 29 (Spring’24, for reference)
Comprehensive Final Exam: online via Canvas+Respondus, like the midterm.