Dissertation Defense

Uniparallel Execution and its Uses

Kaushik Veeraraghavan

Many properties are difficult or slow to achieve on a multiprocessor but easy to achieve on a uniprocessor. In this thesis, we introduce uniparallelism: a new style of execution that allows applications to benefit from the simplicity of uniprocessor execution while scaling performance with increasing processors.
A uniparallel execution consists of a thread-parallel execution where each thread runs on its own processor, and an epoch-parallel execution where multiple time intervals (epochs) of the program run concurrently. The epoch-parallel execution runs all threads of a given epoch on a single processor at a time; this enables the use of techniques that run only on a uniprocessor. Unlike a traditional thread-parallel execution that scales with the number of cores by running different threads on different cores, an epoch-parallel execution achieves scalability in a different way, namely by concurrently running different epochs (time slices) of the execution on multiple cores.
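The structure described above can be sketched in a few lines. This is a toy illustration, not the thesis's implementation: the workload, the `make_incrementer` helper, and the precomputed checkpoints are all hypothetical. Within each epoch, "threads" run one at a time (uniprocessor-style); across epochs, execution is concurrent, with checkpoints (in the real system, supplied by the thread-parallel run) letting a later epoch start before an earlier one finishes.

```python
from concurrent.futures import ThreadPoolExecutor

def run_epoch(epoch_threads, checkpoint):
    """Run all threads of one epoch on a single worker, one at a time.

    Serializing threads within an epoch is what makes uniprocessor-only
    techniques applicable to each epoch.
    """
    state = dict(checkpoint)          # start from a checkpoint of prior state
    for thread_fn in epoch_threads:   # uniprocessor-style: no true concurrency
        thread_fn(state)
    return state

# Hypothetical two-epoch workload: each epoch has two "threads" that
# increment a shared counter.
def make_incrementer(n):
    def fn(state):
        state["counter"] = state.get("counter", 0) + n
    return fn

epochs = [
    [make_incrementer(1), make_incrementer(2)],    # epoch 0
    [make_incrementer(10), make_incrementer(20)],  # epoch 1
]
# Checkpoints (assumed supplied by a thread-parallel run) let epoch 1
# begin before epoch 0 finishes -- the source of scalability.
checkpoints = [{"counter": 0}, {"counter": 3}]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_epoch, epochs, checkpoints))

print(results)  # -> [{'counter': 3}, {'counter': 33}]
```

Note that epoch 1's checkpoint must match the state epoch 0 produces; in the real system a mismatch would force epoch 1 to be re-executed.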
We address two challenging problems using uniparallelism: guaranteed deterministic multiprocessor replay and data race detection and survival for production systems. Deterministic replay systems log non-deterministic events during a recording phase and reproduce them during replay. While uniprocessor replay is easy, multiprocessor replay remains impractical due to the overhead of logging and reproducing frequent non-deterministic shared-memory accesses. DoublePlay is a new replay system that uses uniparallelism to run two executions of the program: the epoch-parallel execution constrains all threads in an epoch to a single processor so only infrequent thread context-switches need be logged to recreate the order of shared-memory accesses; the thread-parallel execution provides checkpoints of future program state so multiple epochs can execute in parallel, thus scaling performance with increasing processors.
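The key logging insight can be shown in a small sketch, assuming a hypothetical workload: because all threads of an epoch share one processor, the only nondeterminism worth recording is the context-switch schedule, not individual memory accesses. The `record`/`replay` helpers and the per-thread step lists below are illustrative, not DoublePlay's actual interface.

```python
def record(thread_steps, schedule):
    """Execute per-thread step lists under a given context-switch schedule,
    logging only (thread_id, n_steps) pairs -- not shared-memory accesses."""
    state = {"x": 0}
    cursors = {t: 0 for t in thread_steps}
    log = []
    for tid, n in schedule:              # scheduler decisions (nondeterministic)
        for _ in range(n):
            thread_steps[tid][cursors[tid]](state)
            cursors[tid] += 1
        log.append((tid, n))             # the entire replay log for this epoch
    return state, log

def replay(thread_steps, log):
    """Reproduce the execution exactly by re-running the logged schedule."""
    state, _ = record(thread_steps, log)
    return state

steps = {
    "A": [lambda s: s.update(x=s["x"] + 1)] * 2,   # thread A: two increments
    "B": [lambda s: s.update(x=s["x"] * 10)],      # thread B: one multiply
}
original, log = record(steps, [("A", 1), ("B", 1), ("A", 1)])
assert replay(steps, log) == original  # same schedule, same state
print(original, log)  # -> {'x': 11} [('A', 1), ('B', 1), ('A', 1)]
```

A different schedule (say, running both of A's steps before B's) would yield a different final state, which is exactly why the switch log must be recorded.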
Frost is a new system that protects a program from data race errors at runtime by combining uniparallelism and complementary schedules. Frost uses uniparallelism to timeslice the program into epochs. Unlike DoublePlay, which runs a single epoch-parallel execution of the program, Frost runs multiple epoch-parallel replicas with the goal that at least one of the replicas avoids the order of racing instructions that leads to incorrect program execution for most harmful data races. Frost achieves this goal by enforcing complementary schedules, a set of thread schedules crafted to ensure that replicas diverge only if a data race occurs and to make it very likely that harmful data races cause divergences. Frost detects divergences by comparing the output and memory state of replicas at the end of each epoch. Upon detecting a divergence, Frost helps programs survive bugs in production by diagnosing the data race bug and selecting an appropriate recovery strategy.
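A toy example makes the detection principle concrete. The workload below (the `racy_write` and `racy_read` functions and the replica runner) is hypothetical: two epoch-parallel replicas serialize the same threads in opposite orders, so if the epoch is race-free both reach identical state, and any divergence at the epoch boundary flags a data race.

```python
def run_replica(threads, order, checkpoint):
    """Run one epoch-parallel replica: all threads serialized on one
    processor, in the schedule given by `order`."""
    state = dict(checkpoint)
    for tid in order:
        threads[tid](state)
    return state

def racy_write(state):   # thread 0: unsynchronized write to a shared flag
    state["flag"] = 1

def racy_read(state):    # thread 1: races with the write above
    state["seen"] = state["flag"]

threads = [racy_write, racy_read]
checkpoint = {"flag": 0, "seen": None}

r1 = run_replica(threads, [0, 1], checkpoint)  # schedule: write, then read
r2 = run_replica(threads, [1, 0], checkpoint)  # complement: read, then write

# Frost compares replica output and memory state at the epoch boundary;
# here the racing read observes 1 in one replica and 0 in the other.
diverged = (r1 != r2)
print("divergence detected:", diverged, r1, r2)
```

Without the race (e.g., if the read always followed the write by synchronization), both replicas would finish the epoch in identical states and no divergence would be reported.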

Sponsored by

Jason N. Flinn