Relational Databases & ACID Transactions
December 02, 2025
I’m sure everyone already knows what ACID means and roughly how it works. We’ve all memorized Atomicity, Consistency, Isolation, Durability at some point, probably for an exam or an interview.
In this post, we’re going to step back and look at the bigger picture: what relational databases are, why they exist, where they came from, and how ACID actually fits into that story. We’ll go into the history, the “why” behind the design, and then break down how ACID really works in practice.
By the end, you won’t just remember the acronym; you’ll genuinely understand what it’s protecting you from and how to reason about it when you design systems.
So let’s start.
Relational Databases
In most systems, the database is the most brittle component: it typically contributes the most latency to response times, and when something breaks in production, it’s usually because of the DB.
Relational databases store data in tables made of rows and columns. Each table represents a type of entity (users, orders, products), each row is a single record, and each column is a field. The “relational” part comes from the fact that tables can reference each other via keys, which is what SQL is built to query efficiently.

But do you know why relational databases need these properties in the first place?
What was broken before them, such that RDBMS and ACID had to be invented?
History of Databases
Early on, companies stored data in files (CSVs, etc.). For accounting, this sucked because:
- Hard to keep numbers correct when multiple people updated the files.
- No guarantee that updates are fully applied (what if power goes off mid-write?).
- No clear rules (constraints) to prevent invalid data.
- Data was scattered across many files/programs.
So relational databases (RDBMS) were built to fix exactly these problems.
The key properties that RDBMS introduced were the ACID guarantees:

And to help RDBMS achieve all these properties, transactions were invented.

What is a Database Transaction?
A transaction, in the context of databases, is a sequence of one or more operations (such as inserting, updating, or deleting records) that the database treats as one single unit of work. It either fully succeeds or fully fails, with no in-between states.
Consider a bank account example where you want to transfer $100 from Account A to Account B. These are the steps you would follow:

These steps form one transaction. If either step fails, both are canceled.
Without transactions, databases could end up in inconsistent states: money deducted from Account A but never credited to Account B (a partial update), or two people booking the last movie ticket at the same time (a conflict).
Transactions solve these problems by enforcing rules like ACID properties (Atomicity, Consistency, Isolation, Durability).
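The transfer above can be sketched with Python’s built-in sqlite3 module. The table and account names here are made up for illustration; the point is that both updates commit together or roll back together:

```python
import sqlite3

# In-memory database with two hypothetical accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 500), ("B", 200)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst as one unit of work."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # A crash between these two statements would leave money "missing";
        # the transaction guarantees either both updates survive or neither.
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()    # both updates become visible together
    except Exception:
        conn.rollback()  # neither update survives
        raise

transfer(conn, "A", "B", 100)
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('A', 400), ('B', 300)]
```

If an exception is raised between the two UPDATEs, the rollback undoes the deduction, so the database never shows the half-finished transfer.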
1. Atomicity
The word atomic comes from the Greek “atomos”, meaning “indivisible, cannot be cut”. Here in the context of transactions, Atomicity means:
All statements within a transaction either take effect together or not at all. If even one query in the transaction fails, the entire transaction is rolled back.
Say you hadn’t wrapped these steps in a transaction and tried the same $100 transfer. Midway, after deducting $100 from Account A but before crediting Account B, the machine faces some problem and shuts down.


Now Account A has $100 deducted, but Account B never received it. That is an inconsistent state, which is why running such queries inside a transaction matters. Hopefully this gives you a fair understanding of what Atomicity means.
Curious how databases implement Atomicity?
How Databases Implement Atomicity
The key mechanisms to guarantee atomicity:
Write-Ahead Log (WAL)
- This mechanism records all database modifications before they are applied to the actual data pages.
- If a transaction fails (crashes), the log provides the necessary information to undo any partial changes made, ensuring that the entire transaction is treated as if it never happened.


Write-ahead logs are written to persistent storage (a disk), which allows them to survive a node failure. In a distributed database, once a follower node restarts after a failure, it reads its WAL file from the start and executes the logged updates sequentially. This allows the node to return to its state before the failure. Think of it like a bank statement listing credit and debit transactions: if those transactions are replayed one by one on the initial balance, the final balance of the account can be calculated.
Once the follower node has reconstructed the data state, it synchronizes its WAL with the leader node’s WAL by requesting all the client updates that happened while the node was in the failed state. After the leader node shares all the client updates, the follower node updates its WAL and executes all updates on the data to bring the data state consistent with the leader node and other follower nodes.

There can be a scenario when there are millions of entries in the WAL which can impact the restart time of a node. This is addressed by taking snapshots of the database at regular intervals. On restart after a failure, only the WAL entries that were appended after the last snapshot are executed. This greatly reduces the number of WAL entries that need to be executed to reconstruct the prior state of the database and hence improves performance.
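The replay-plus-snapshot idea can be sketched in a few lines of Python. This is a toy model for intuition only, not how a real engine structures its WAL (real logs record physical page changes, LSNs, checksums, and so on):

```python
# Toy WAL: an append-only list of (key, new_value) records, plus a snapshot
# of the state taken at some point, remembering how many entries it covers.
wal = []
snapshot = ({}, 0)  # (state at snapshot time, number of WAL entries covered)

def write(key, value, state):
    wal.append((key, value))  # log first (write-ahead) ...
    state[key] = value        # ... then apply to the actual data

def take_snapshot(state):
    global snapshot
    snapshot = (dict(state), len(wal))

def recover():
    """Rebuild state after a crash: start from the last snapshot,
    then replay only the WAL entries appended after it."""
    state, covered = dict(snapshot[0]), snapshot[1]
    for key, value in wal[covered:]:
        state[key] = value
    return state

state = {}
write("A", 500, state)
write("B", 200, state)
take_snapshot(state)      # entries before this point need not be replayed
write("A", 400, state)
write("B", 300, state)

del state                 # simulate losing in-memory state in a crash
print(recover())          # {'A': 400, 'B': 300}
```

Recovery only replays the two entries after the snapshot, which is exactly why periodic snapshots keep restart times bounded even when the full log has millions of entries.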
2. Consistency
Consistency in the context of ACID transactions ensures that any transaction will bring the database from one valid state to another valid state, never leaving it in a broken or invalid state.
By consistency, we refer to both types:
- Consistency in reads
- Consistency in data
Lack of atomicity and lack of isolation lead to inconsistent data.
Consistency means that all data integrity rules are true before and after a transaction. Those rules are things like:
- PRIMARY KEY – no duplicate IDs
- FOREIGN KEY – child rows must point to existing parent rows
- CHECK – values must satisfy conditions (e.g., age >= 0)
If a transaction tries to violate any of these constraints, the database simply refuses to commit it and rolls back to the previous valid state.
Say you have a users table:
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  age INT NOT NULL,
  CONSTRAINT age_non_negative CHECK (age >= 0)
);
Transaction flow:
BEGIN TRANSACTION;
INSERT INTO users (name, age) VALUES ('Alice', 25);
-- Later in the same transaction, a bug accidentally sets a negative age
UPDATE users
SET age = -5 WHERE name = 'Alice';
COMMIT;
When the UPDATE runs, the database checks the constraint (for ordinary, non-deferrable constraints this happens per statement, not at COMMIT):
- age_non_negative is violated (-5 < 0).
- The statement fails and the transaction is aborted; it cannot leave the DB in this invalid state.
- The whole transaction is rolled back – you won’t see a user with age = -5 in the table.
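Here is a runnable sketch of this behavior using Python’s built-in sqlite3 module, which (like PostgreSQL) enforces CHECK constraints per statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age INTEGER NOT NULL CHECK (age >= 0)
    )
""")
conn.commit()

try:
    conn.execute("INSERT INTO users (name, age) VALUES ('Alice', 25)")
    # A bug sets a negative age -- the CHECK constraint rejects this statement.
    conn.execute("UPDATE users SET age = -5 WHERE name = 'Alice'")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # undo the whole transaction, including the INSERT

print(conn.execute("SELECT count(*) FROM users").fetchone())  # (0,)
```

Because the rollback undoes the entire transaction, even the valid INSERT of Alice disappears: the database returns to the last valid state, exactly as the consistency guarantee promises.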
How to Implement Consistency?
Atomicity is implemented by the database itself, so there is no user involvement there, but consistency is user-defined.
- Database schema constraints: NOT NULL, UNIQUE, PRIMARY KEY, FOREIGN KEY, CHECK constraints, and other schema definitions ensure no invalid entries are allowed.
- Triggers and stored procedures (you need not understand the code below; it is just for a higher-level understanding)
Trigger example – prevent negative stock on update
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  stock_quantity INT NOT NULL
);

CREATE OR REPLACE FUNCTION prevent_negative_stock()
RETURNS TRIGGER AS $$
BEGIN
  IF NEW.stock_quantity < 0 THEN
    RAISE EXCEPTION 'Stock cannot be negative';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_stock_before_update
BEFORE UPDATE ON products
FOR EACH ROW
EXECUTE FUNCTION prevent_negative_stock();
Any UPDATE on products that would make stock_quantity < 0 is automatically rejected.
Stored procedure example
CREATE OR REPLACE PROCEDURE place_order(p_user_id INT, p_product_id INT, p_qty INT)
LANGUAGE plpgsql
AS $$
BEGIN
  -- validate quantity
  IF p_qty <= 0 THEN
    RAISE EXCEPTION 'Quantity must be positive';
  END IF;

  -- insert order + update stock in one transaction
  INSERT INTO orders (user_id, product_id, quantity)
  VALUES (p_user_id, p_product_id, p_qty);

  UPDATE products
  SET stock_quantity = stock_quantity - p_qty
  WHERE id = p_product_id;
END;
$$;
Call:
CALL place_order(1, 42, 3);
The procedure centralizes validation + updates, which means fewer chances for apps to misuse the DB.
- Application-level safeguards: the application should still protect the database from obviously bad input, by providing robust validation in the backend code before any operation on the DB.
In the next article we will go in depth, hands-on, to understand Isolation and Durability; covering all four here would have made this article too long. Sorry about that.
I hope you liked the article, if you did, please Subscribe to my weekly newsletter!
Take care.
See you in the next one.