Race Conditions and Deadlock

Table of contents

  1. Data race
  2. Race condition
  3. Concurrent programming guidelines
    1. Avoid data races
    2. Use consistent locking
    3. Use fine-grained locking only if contention limits speedup
    4. Design for atomic operations
    5. Use libraries whenever available

Mutual exclusion locks ensure synchronized access to shared memory because only one thread can acquire a lock at a time. Locks help prevent two types of synchronization issues: data races and race conditions.

Data race

A data race occurs when two (or more) concurrent memory accesses in a program satisfy all of the following conditions:

  1. access the same location in memory
  2. at least one access is writing/modifying memory
  3. at least one access is unsynchronized

The original (unsynchronized) version of the BankAccount class allowed multiple threads to read/write to the balance at the same time. When a data race occurs, we don’t know what will happen. The Java compiler and the processor architecture perform numerous optimizations that make it extremely difficult to predict what happens in the case of a data race.
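As a minimal sketch of the problem (the class and method names here are assumptions, since the original BankAccount code is not shown), the unsynchronized version loses updates when two threads deposit at the same time:

```java
// Sketch of an unsynchronized BankAccount (names assumed, not the
// course's original code). Two threads depositing concurrently can
// interleave the read and the write of balance: a data race.
public class BankAccount {
    private int balance = 0;

    // Unsynchronized: another thread can write balance between
    // this thread's read and write, losing one of the deposits.
    public void deposit(int amount) {
        int b = balance;      // read
        balance = b + amount; // write
    }

    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) throws InterruptedException {
        BankAccount account = new BankAccount();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                account.deposit(1);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Often less than 200000 because updates are lost; the exact
        // value is unpredictable, which is the point.
        System.out.println(account.getBalance());
    }
}
```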

Race condition

A race condition occurs when the result of a computation depends on scheduling: how threads are interleaved. Consider the following array-based stack implementation. (Constructor omitted. Assume the array is very large.)

public class Stack<E> {
    private E[] data;
    private int size;

    public synchronized void push(E element) {
        data[size] = element;
        size += 1;
    }

    public synchronized E pop() {
        if (size == 0) {
            return null;
        }
        E element = data[size - 1];
        size -= 1;
        return element;
    }
}

Does the Stack class have any data races?

There are no data races because all reads and writes to size and data are synchronized on the entire Stack instance, this.

Suppose we want to implement a peek method using only the public methods. (Assume there is at least one item in the stack.)

class IntStackClient {
    static int peek(Stack<Integer> stack) {
        // Race condition
        int result = stack.pop();
        stack.push(result);
        return result;
    }
}
There are still no data races in this implementation because pop and push are synchronized methods: other threads cannot read or write to the stack while the peek thread is executing pop or push. But it turns out that this code has a race condition even though every public method of Stack is synchronized.

Since each Stack method is synchronized, where else can a thread scheduling issue occur?

There are only three lines of code in the body of peek. Since the synchronized pop and push methods block other threads from reading or writing to the stack, the only remaining problem is the sliver of time between those lines of code.

Suppose we have two threads, thread 1 calling IntStackClient.peek and thread 2 calling push(1), push(2), and then pop(). Ideally, the call to pop by thread 2 should return 2 but this bad interleaving shows how thread 2 can get the value 1 instead.

Thread 1 (calls peek)          Thread 2
                               push(1)
                               push(2)
int result = stack.pop();
                               pop() returns 1, not 2
stack.push(result);
return result;
Give a bad interleaving if thread 1 calls peek and thread 2 calls peek.

Items can be pushed back onto the stack in the wrong order.

Thread 1                       Thread 2
int result = stack.pop();
                               int result = stack.pop();
stack.push(result);
                               stack.push(result);
return result;
                               return result;

One way to address this problem is to synchronize the peek method on the stack instance. This prevents other threads from reading or writing to the stack while a thread is executing peek.

class IntStackClient {
    static int peek(Stack<Integer> stack) {
        // Synchronized on the stack instance, not this IntStackClient
        synchronized (stack) {
            int result = stack.pop();
            stack.push(result);
            return result;
        }
    }
}
We define data races as a separate idea from race conditions. While both are synchronization issues, not all data races are race conditions. (This differs from Dan Grossman’s text.)

Concurrent programming guidelines

How should we design parallel algorithms keeping concurrency issues in mind? It’s helpful to begin by first organizing each memory location (object) created during the lifecycle of an algorithm into one of three categories.

  1. Thread-local. Only ever accessed by one thread.
  2. Immutable. No thread ever writes to this memory location.
  3. Shared and mutable. Synchronization is needed, which means we’ll need to consider concurrency issues.

Most of our earlier examples of parallel algorithms were specifically designed to use either thread-local or immutable memory only. Programs that use more shared and mutable memory raise potential concurrency issues.

Avoid data races

Use locks to ensure that two threads never simultaneously read/write or write/write to the same field.
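A minimal sketch of this guideline (class and method names are assumptions): marking every method that touches the shared field as synchronized makes simultaneous read/write and write/write impossible, because both accesses must hold the same lock.

```java
// Sketch of a data-race-free account (names assumed). Every access to
// balance happens while holding the intrinsic lock on this instance.
public class SynchronizedAccount {
    private int balance = 0;

    // Guarded: no two threads can read and write balance at once.
    public synchronized void deposit(int amount) {
        balance += amount;
    }

    public synchronized int getBalance() {
        return balance;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedAccount account = new SynchronizedAccount();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                account.deposit(1);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(account.getBalance()); // always 200000
    }
}
```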

Use consistent locking

For each location needing synchronization, have a lock that is always held when reading or writing to the location.

The lock guards the location. The same lock can guard multiple locations: in Java, a simple way to do this is to guard the entire instance with its intrinsic lock by marking every public method synchronized.

Does guarding every entry point of an instance prevent data races, race conditions, both, or neither?

This only prevents data races. Race conditions can still occur even when all of the public methods are guarded: see the peek method above.

Use fine-grained locking only if contention limits speedup

Guarding the entire instance can result in unnecessary blocking. Contention describes the situation where threads are blocked waiting for each other. Consider the following approaches for guarding a hash table.

  1. Fewer locks, such as one lock for the entire hash table. Easier to implement, but results in more contention.
  2. More locks, such as one lock per hash table array index. Reduces contention, but makes some operations, such as resizing, trickier to implement.
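A common middle ground is lock striping: a fixed number of locks, each guarding a subset of the keys. The sketch below illustrates the idea (all names are assumptions; this is not a production hash table, and operations like resizing are omitted because they would need to hold every stripe's lock).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lock striping (names assumed). Keys are partitioned across
// STRIPES locks, so threads touching different stripes never block
// each other, while the number of locks stays bounded.
public class StripedMap<K, V> {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        stripes = (Map<K, V>[]) new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            stripes[i] = new HashMap<>();
        }
    }

    // floorMod handles negative hash codes.
    private int stripeFor(Object key) {
        return Math.floorMod(key.hashCode(), STRIPES);
    }

    public V put(K key, V value) {
        int i = stripeFor(key);
        synchronized (locks[i]) { // guard only this key's stripe
            return stripes[i].put(key, value);
        }
    }

    public V get(K key) {
        int i = stripeFor(key);
        synchronized (locks[i]) {
            return stripes[i].get(key);
        }
    }
}
```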

Design for atomic operations

An operation is atomic (appears indivisible) if no other thread can see its partial execution. For example, the stack implementation above provides atomic push and pop methods. Amdahl’s law is an important consideration because longer critical sections (larger synchronized blocks) can increase contention and limit the speedup of a parallel algorithm.

How do we know if a critical section is too short?

This question boils down to identifying race conditions, which requires some analysis: critical sections that are too short can allow unanticipated race conditions, while critical sections that are too long limit the speedup of the algorithm.
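To make the tradeoff concrete, here is a hedged sketch (class and method names are assumptions, not from the course) of a compound check-then-act operation. Splitting the check and the update into two short critical sections reintroduces a race condition; one critical section covering both makes the operation atomic:

```java
// Sketch of critical-section sizing (names assumed).
public class CheckingAccount {
    private int balance = 100;

    // Atomic: the check and the update share one critical section,
    // so no thread can withdraw between them.
    public synchronized boolean withdraw(int amount) {
        if (balance >= amount) {
            balance -= amount;
            return true;
        }
        return false;
    }

    // Too short on its own: even though this method is synchronized,
    // calling hasFunds and then updating the balance in separate steps
    // lets another thread withdraw in between -- a race condition
    // without any data race.
    public synchronized boolean hasFunds(int amount) {
        return balance >= amount;
    }

    public synchronized int getBalance() {
        return balance;
    }
}
```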

Use libraries whenever available

For example, Java provides ConcurrentHashMap for a thread-safe hash table map implementation. These implementations prevent data races but we still need to design our algorithms keeping race conditions in mind.
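For instance, ConcurrentHashMap is free of data races on individual operations, but a get-then-put sequence is still a race condition; the library's compound atomic methods, such as merge, avoid it. A small sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

// ConcurrentHashMap prevents data races, but check-then-act sequences
// of separate calls remain race conditions. Library-provided compound
// operations like merge perform the whole update atomically.
public class WordCount {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Race condition if run concurrently: another thread could
        // update "cat" between the get and the put.
        // Integer old = counts.get("cat");
        // counts.put("cat", old == null ? 1 : old + 1);

        // Atomic compound operation provided by the library.
        counts.merge("cat", 1, Integer::sum);
        counts.merge("cat", 1, Integer::sum);
        System.out.println(counts.get("cat")); // prints 2
    }
}
```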

  1. Dan Grossman. 2016. A Sophomoric Introduction to Shared-Memory Parallelism and Concurrency.