Concurrency¶

Goal: No raw synchronization

Definitions¶

Concurrency is when tasks start, run, and complete in overlapping time periods.

Parallelism is when two or more tasks execute simultaneously.

  • Threads enable concurrency
  • Processor cores enable parallelism
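The distinction can be seen directly in code. The sketch below (assuming a C++11 or later toolchain) queries the hardware parallelism available and starts a software thread; note that we can always create more threads (concurrency) than there are cores (parallelism):

```cpp
#include <thread>

// Cores give parallelism: hardware_concurrency() reports how many
// threads the hardware can actually run at the same time.
unsigned core_count() {
    return std::thread::hardware_concurrency();  // may be 0 if unknown
}

// Threads give concurrency: a second thread of execution overlaps
// in time with the calling thread, whether or not a spare core exists.
bool run_concurrent_task() {
    bool ran = false;
    std::thread t([&] { ran = true; });
    t.join();  // join() synchronizes, so the write to 'ran' is visible
    return ran;
}
```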

Motivation¶

  • Better performance can be achieved through parallelism
  • Better interactivity, and reduced latency, can be achieved through concurrency
GFlops in the machine

Challenge¶

Amdahl's Law gives the theoretical speedup of a task as a function of how much of the task must run serially.

$$ S_{latency}(s) = { 1 \over{(1 - p) + {p \over{s}}}} $$

Where:

  • $S_{latency}$ is the theoretical speedup of the execution of the whole task;
  • $s$ is the speedup of the part of the task that benefits from improved system resources;
  • $p$ is the proportion of execution time that the part benefiting from improved resources originally occupied.

Amdahl's Law tells us that as $s \to \infty$, the speedup of any application is bounded above by $1 \over{1 - p}$.
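As a concrete check on the formula, the sketch below (the function name is ours) evaluates $S_{latency}$ for a task that is 90% parallelizable: even with an 8x speedup of the parallel part the whole task speeds up less than 5x, and no value of $s$ can push it past $1/(1-p) = 10$:

```cpp
// Amdahl's Law: speedup of the whole task when a fraction p of the
// execution time benefits from a speedup factor s.
double amdahl_speedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

// amdahl_speedup(0.9, 8.0)  ->  1 / (0.1 + 0.1125) ~= 4.71
// amdahl_speedup(0.9, 1e9)  ->  just under 10, the 1/(1-p) bound
```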

Amdahl's Law
— By Daniels220 at English Wikipedia, CC BY-SA 3.0, Link

Amdahl's Law, Linear Scale
Each line represents 10% more synchronization

Synchronization Primitives¶

  • Atomics
  • Mutex
  • Lock
  • Condition Variable
  • Semaphore
  • Memory Fence
All of these are explicit synchronization points, and all carry the risk that misuse causes data races or deadlocks.
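A minimal sketch of such an explicit synchronization point, using a mutex (the class and function names here are ours, not from the source): every access path must take the lock, and forgetting it on any one path silently reintroduces a data race, which is exactly the misuse risk these primitives share.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// A counter guarded by a mutex: each lock_guard is an explicit
// synchronization point ("raw synchronization").
class Counter {
    std::mutex m_;
    long value_ = 0;
public:
    void increment() {
        std::lock_guard<std::mutex> lock(m_);  // omit this and ++value_ races
        ++value_;
    }
    long value() {
        std::lock_guard<std::mutex> lock(m_);
        return value_;
    }
};

// Run several threads that all increment the shared counter.
long count_to(int threads, int per_thread) {
    Counter c;
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i)
        pool.emplace_back([&] {
            for (int j = 0; j < per_thread; ++j) c.increment();
        });
    for (auto& t : pool) t.join();
    return c.value();  // correct only because every access locked m_
}
```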