Optimistic Locking vs Pessimistic Locking
March 22, 2026
In real systems, multiple transactions often run at the same time:
- Multiple users reading the same row
- Multiple users updating the same row
- Reads happening while writes are in progress
Depending on how their operations interleave, transactions can interact in four ways:
- Read-Read (harmless, since neither transaction changes the data)
- Read-Write
- Write-Read
- Write-Write
The last three interactions can cause anomalies like dirty reads, non-repeatable reads, phantom reads, and lost updates.
To control these anomalies, databases provide isolation levels like Read Committed, Repeatable Read, and Serializable.
You can read about concurrency anomalies and isolation levels in depth in one of my previous articles:
Relational Database & ACID Transactions : Part 2
But an important question is:
How are these isolation levels actually implemented under the hood?
The answer is: using different concurrency control mechanisms.
There are three major approaches:
- Pessimistic Concurrency Control
- Optimistic Concurrency Control
- MVCC (Multi-Version Concurrency Control, which we will cover in the next article)
Optimistic and pessimistic locking are approaches to concurrency control; they are not specific keywords in any database query language.
When dealing with conflicts, we have two options:
- We can try to prevent the conflict up front, and that is what pessimistic locking does.
- Or, we can allow transactions to proceed concurrently and detect conflicts at commit time, and that is what optimistic locking does.
Let's understand the two basic lock types before we get to the concurrency control techniques.

Shared Lock (S-Lock)
A shared lock is used when a transaction needs to read a resource (such as a database row or table) without modifying it. Multiple transactions can acquire a shared lock on the same resource at the same time. However, as long as one or more shared locks exist on a resource, no transaction can modify it.

Credits : https://parottasalna.com/2025/01/06/learning-notes-41-shared-lock-and-exclusive-locks-postgres/
Exclusive Lock (X-Lock)
An exclusive lock is used when a transaction wants to change data. Only one transaction can hold this lock at a time, and while it is held, no other transaction can read or write that data.
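These two rules can be sketched as a tiny readers-writer lock in Python. This is an illustration of the S-lock/X-lock semantics only, not how a real database implements locking:

```python
import threading

class RWLock:
    """Minimal readers-writer lock: many S-lock holders OR one X-lock holder."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0      # number of shared-lock holders
        self._writer = False   # True while an exclusive lock is held

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # an S-lock must wait out any X-lock
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def try_acquire_exclusive(self):
        """Non-blocking: an X-lock is granted only if no S- or X-lock is held."""
        with self._cond:
            if self._readers == 0 and not self._writer:
                self._writer = True
                return True
            return False

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = RWLock()
lock.acquire_shared()                           # reader 1
lock.acquire_shared()                           # reader 2: S-locks coexist
writer_ok = lock.try_acquire_exclusive()        # False: readers block the X-lock
lock.release_shared()
lock.release_shared()
writer_ok_after = lock.try_acquire_exclusive()  # True: granted once readers leave
```

Note how the writer is refused while any reader holds the lock, but any number of readers can proceed together.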

Pessimistic Concurrency Control
The literal meaning of pessimistic is "believing that bad things will happen".
Pessimistic concurrency control assumes that conflicts between transactions are likely. So, before a transaction performs operations on data, it acquires locks to prevent other transactions from accessing the same data in a conflicting way.
In simple terms, it follows this idea:
since conflicts are likely, prevent them upfront instead of detecting them later.
If a transaction wants to read data, it may acquire a shared lock.
If a transaction wants to modify data, it acquires an exclusive lock.
This means:
- Multiple transactions can still read the same data at the same time using shared locks
- But once a transaction acquires an exclusive lock on a resource, other transactions must wait before they can read or write that same resource
So pessimistic concurrency control avoids concurrency anomalies by making conflicting transactions wait until the current lock is released.
Example

Implementation of Pessimistic control
Limitations and Challenges
- Reduced Concurrency - Exclusive locks prevent other transactions from accessing the locked resource, which can lead to bottlenecks in highly concurrent systems.
- Blocking Writes - Shared locks can delay write operations, potentially impacting performance in write-heavy systems.
- Deadlocks - Example:
- T1 locks row A, wants row B
- T2 locks row B, wants row A
- Now both wait forever unless database detects and breaks the deadlock.
- Higher Latency - Transactions spend time waiting for locks to be released, so response times can increase.
- Lock Management Overhead - Tracking, maintaining, and resolving locks adds internal overhead to the database.
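The deadlock scenario above can be reproduced deterministically with two Python threads. Each request for a second lock uses a timeout, which is one simple way to break the cycle; real databases more commonly detect deadlocks with a waits-for graph and abort one victim transaction. The names are illustrative:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
holding_first = threading.Barrier(2)   # both threads hold their first lock
gave_up = threading.Barrier(2)         # neither releases until both have failed
results = {}

def transaction(name, first, second):
    with first:
        holding_first.wait()           # at this point T1 holds A and T2 holds B
        # Each now wants the other's lock: a classic deadlock cycle.
        results[name] = second.acquire(timeout=0.2)
        if results[name]:
            second.release()
        gave_up.wait()                 # hold the first lock until both time out

t1 = threading.Thread(target=transaction, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=transaction, args=("T2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
# results == {"T1": False, "T2": False}: both requests time out, breaking the cycle.
```

In a real database, the aborted transaction would then be rolled back and retried, typically after acquiring locks in a consistent order to avoid repeating the deadlock.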
Optimistic Concurrency Control
The literal meaning of Optimistic is "hoping good things will happen".
Optimistic concurrency control assumes that conflicts between transactions are rare. So instead of locking data before performing operations, it allows transactions to proceed freely and checks for conflicts only later, usually at commit time.
In this approach, transactions typically read the currently visible committed data without taking long-held locks upfront, perform their work, and then verify at commit time whether the data has changed in the meantime.
Optimistic locking is typically not implemented by the database's internal locking mechanism. Instead, it is usually enforced at the application level.
For example, a transaction may:
- read a row along with its version
- perform updates in memory
- attempt to update the row only if the version has not changed
If the version has changed, it means another transaction has already modified the data, and the current transaction fails and must be retried.
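The read-check-update steps above can be sketched with a version column, a common application-level pattern. The table, column names, and the simulated concurrent writer are all illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER, version INTEGER)")
db.execute("INSERT INTO products VALUES (1, 10, 1)")

def reserve_one(conn, product_id):
    """Decrement stock only if the row is unchanged since we read it."""
    stock, version = conn.execute(
        "SELECT stock, version FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    # ... business logic happens in memory here ...
    cur = conn.execute(
        "UPDATE products SET stock = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (stock - 1, product_id, version),
    )
    return cur.rowcount == 1       # 0 rows touched => someone got there first

ok = reserve_one(db, 1)            # succeeds: the version still matches

# Simulate a concurrent writer sneaking in between our read and our write:
stock, version = db.execute("SELECT stock, version FROM products WHERE id = 1").fetchone()
db.execute("UPDATE products SET version = version + 1 WHERE id = 1")  # conflicting write
cur = db.execute(
    "UPDATE products SET stock = ?, version = version + 1 WHERE id = 1 AND version = ?",
    (stock - 1, version),
)
conflict = cur.rowcount == 0       # stale version: our update touches no rows
```

The key detail is that the check and the write happen in a single `UPDATE ... WHERE version = ?`, so the database applies them atomically; the application only has to inspect the affected row count and retry on failure.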
Example

Implementation of Optimistic control
Limitations and Challenges
- Late Conflict Detection - Conflicts are detected only at commit time, after the transaction has already done its work.
- Wasted Work - A conflicting transaction may complete its entire unit of work only to fail at commit time, throwing that work away.
- Poor performance under high contention - frequent conflicts can lead to repeated failures and retries.
- Additional Metadata Requirement - Requires versioning or timestamps to detect conflicts.
- Retries Required - Failed transactions must be retried, increasing application complexity.
Conclusion
Each of these strategies has its own use cases.
Pessimistic concurrency control is preferred when conflicts are likely, when retrying failed work would be expensive, and when strong correctness and consistency are critical. It is a good fit for systems like banking, ticket booking, inventory management, and other high-contention workloads where it is better to make transactions wait rather than risk conflicting updates.
Optimistic concurrency control is preferred when conflicts are rare, contention is low, and occasional retries are acceptable. It works well for read-heavy systems, low-conflict business workflows, and application-level updates where blocking transactions upfront would unnecessarily reduce concurrency.
The right choice depends on the workload, the likelihood of conflicts, and whether waiting or retrying is more acceptable for the system.
In the next article, we will explore MVCC and see how modern databases avoid the reader-writer blocking seen in traditional pessimistic locking by maintaining multiple versions of data.