Network Flow Algorithms: Theory, Implementation, and Applications

Network flow algorithms form a cornerstone of graph theory and are widely applied in fields such as logistics, telecommunications, computer vision, and data mining. This tutorial walks you through the fundamental concepts, classic algorithms, practical implementation tips, and real‑world use cases, all presented in a clear, professional style.

Fundamental Concepts

A flow network is a directed graph G = (V, E) where each edge (u, v) has a non‑negative capacity c(u, v). Two special vertices are designated: a source s and a sink t. A feasible flow f must satisfy:

  • Capacity constraint: 0 ≤ f(u, v) ≤ c(u, v) for every edge (u, v).
  • Flow conservation: for every vertex v ≠ s, t, flow in equals flow out (∑ f(u, v) = ∑ f(v, w)).

(Some texts instead use the skew‑symmetric convention f(u, v) = −f(v, u) together with the single constraint f(u, v) ≤ c(u, v); the two formulations are equivalent, but skew symmetry is incompatible with the lower bound 0 ≤ f, so only one convention should be used at a time.)

The objective of a maximum‑flow problem is to find a feasible flow that maximizes the total amount leaving the source (equivalently, entering the sink).
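These constraints are easy to check mechanically. The sketch below (a hypothetical helper, not part of any library) validates a candidate flow against a capacity matrix:

```python
def is_feasible(capacity, flow, s, t):
    """Check the capacity and conservation constraints for a candidate flow.

    Both arguments are n x n adjacency matrices; s and t are the
    source and sink indices.
    """
    n = len(capacity)
    # Capacity constraint: 0 <= f(u, v) <= c(u, v) on every edge.
    for u in range(n):
        for v in range(n):
            if not (0 <= flow[u][v] <= capacity[u][v]):
                return False
    # Flow conservation: inflow equals outflow at every internal vertex.
    for v in range(n):
        if v in (s, t):
            continue
        inflow = sum(flow[u][v] for u in range(n))
        outflow = sum(flow[v][w] for w in range(n))
        if inflow != outflow:
            return False
    return True
```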

Classic Maximum‑Flow Algorithms

1. Ford‑Fulkerson Method

The Ford‑Fulkerson method repeatedly augments the current flow along augmenting paths—paths from s to t in the residual graph with positive capacity. The algorithm terminates when no such path exists.

⚠ Warning: If capacities are irrational, Ford‑Fulkerson may fail to terminate, and the flow value can even converge to less than the maximum. Integer or rational capacities guarantee termination.

# Python implementation of Ford‑Fulkerson using BFS (i.e., Edmonds‑Karp)

from collections import deque

def bfs(residual, s, t, parent):
    """Find an augmenting s-t path in the residual graph; record it in parent[]."""
    visited = [False] * len(residual)
    queue = deque([s])
    visited[s] = True
    while queue:
        u = queue.popleft()  # O(1); list.pop(0) would be O(n)
        for v, cap in enumerate(residual[u]):
            if not visited[v] and cap > 0:
                queue.append(v)
                visited[v] = True
                parent[v] = u
                if v == t:
                    return True
    return False

def ford_fulkerson(graph, source, sink):
    residual = [row[:] for row in graph]
    parent = [-1] * len(graph)
    max_flow = 0
    while bfs(residual, source, sink, parent):
        # Find bottleneck capacity
        path_flow = float('inf')
        v = sink
        while v != source:
            u = parent[v]
            path_flow = min(path_flow, residual[u][v])
            v = u
        # Update residual capacities
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        max_flow += path_flow
    return max_flow

# Example usage
graph = [
    [0, 16, 13, 0, 0, 0],
    [0, 0, 10, 12, 0, 0],
    [0, 4, 0, 0, 14, 0],
    [0, 0, 9, 0, 0, 20],
    [0, 0, 0, 7, 0, 4],
    [0, 0, 0, 0, 0, 0]
]
print(f"Maximum flow: {ford_fulkerson(graph, 0, 5)}")

2. Edmonds‑Karp Algorithm

Edmonds‑Karp is a specialization of Ford‑Fulkerson that uses Breadth‑First Search (BFS) to find the shortest (in terms of edge count) augmenting path. This guarantees a polynomial time bound of O(V·E²).

📝 Note: The implementation shown above already uses BFS, thus it is effectively Edmonds‑Karp.

3. Dinic’s Algorithm

Dinic’s algorithm introduces the concepts of a level graph and a blocking flow: each phase builds a level graph with BFS and then saturates it with a blocking flow found by DFS, repeating until the sink becomes unreachable. It runs in O(V²·E) on general graphs and O(E·√V) on unit‑capacity graphs.

// C++ implementation of Dinic's algorithm (simplified)
#include <bits/stdc++.h>
using namespace std;

struct Edge {
    int to, rev;
    long long cap;
};

class Dinic {
public:
    int N;                     // number of vertices
    vector<vector<Edge>> G;    // adjacency list
    vector<int> level, it;
    Dinic(int n) : N(n), G(n), level(n), it(n) {}

    void addEdge(int fr, int to, long long cap) {
        Edge a{to, (int)G[to].size(), cap};
        Edge b{fr, (int)G[fr].size(), 0};   // reverse edge
        G[fr].push_back(a);
        G[to].push_back(b);
    }

    bool bfs(int s, int t) {
        fill(level.begin(), level.end(), -1);
        queue<int> q; q.push(s); level[s] = 0;
        while (!q.empty()) {
            int v = q.front(); q.pop();
            for (auto &e : G[v]) if (e.cap > 0 && level[e.to] < 0) {
                level[e.to] = level[v] + 1;
                q.push(e.to);
            }
        }
        return level[t] >= 0;
    }

    long long dfs(int v, int t, long long f) {
        if (v == t) return f;
        for (int &i = it[v]; i < (int)G[v].size(); ++i) {
            Edge &e = G[v][i];
            if (e.cap > 0 && level[v] < level[e.to]) {
                long long d = dfs(e.to, t, min(f, e.cap));
                if (d > 0) {
                    e.cap -= d;
                    G[e.to][e.rev].cap += d;
                    return d;
                }
            }
        }
        return 0;
    }

    long long maxFlow(int s, int t) {
        long long flow = 0, INF = 1e18;
        while (bfs(s, t)) {
            fill(it.begin(), it.end(), 0);
            long long f;
            while ((f = dfs(s, t, INF)) > 0) flow += f;
        }
        return flow;
    }
};

int main(){
    int N = 6; // vertices 0..5
    Dinic dinic(N);
    dinic.addEdge(0,1,16);
    dinic.addEdge(0,2,13);
    dinic.addEdge(1,2,10);
    dinic.addEdge(1,3,12);
    dinic.addEdge(2,1,4);
    dinic.addEdge(2,4,14);
    dinic.addEdge(3,2,9);
    dinic.addEdge(3,5,20);
    dinic.addEdge(4,3,7);
    dinic.addEdge(4,5,4);
    cout << "Maximum flow: " << dinic.maxFlow(0,5) << endl;
    return 0;
}

4. Push‑Relabel (Goldberg‑Tarjan) Algorithm

Push‑Relabel works with a preflow that may temporarily violate flow conservation. It maintains a height label for each vertex and repeatedly performs push and relabel operations until no vertex other than s and t holds excess flow. The generic algorithm runs in O(V³) worst case; with the highest‑label selection rule the bound improves to O(V²·√E).

💡 Tip: For dense graphs (E ≈ V²), the highest‑label push‑relabel variant is often the fastest.
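As an illustration of the push and relabel operations, here is a minimal FIFO push‑relabel sketch on an adjacency matrix (the function name and structure are ours, not a library API; production code would add the gap heuristic and bucketed height lists for the highest‑label rule):

```python
def push_relabel(capacity, s, t):
    """FIFO push-relabel max flow on an n x n capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n  # the source starts at height n

    # Initial preflow: saturate every edge leaving the source.
    for v in range(n):
        if capacity[s][v] > 0:
            flow[s][v] = capacity[s][v]
            flow[v][s] = -capacity[s][v]  # skew-symmetric bookkeeping
            excess[v] = capacity[s][v]

    def residual(u, v):
        return capacity[u][v] - flow[u][v]

    active = [v for v in range(n) if v != s and v != t and excess[v] > 0]
    while active:
        u = active.pop(0)          # FIFO selection of an active vertex
        while excess[u] > 0:       # discharge u completely
            pushed = False
            for v in range(n):
                # Push is admissible only downhill by exactly one level.
                if residual(u, v) > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], residual(u, v))
                    flow[u][v] += d
                    flow[v][u] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v != s and v != t and excess[v] == d:
                        active.append(v)  # v just became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                # Relabel: lift u just above its lowest residual neighbour.
                height[u] = 1 + min(height[v] for v in range(n)
                                    if residual(u, v) > 0)
    return sum(flow[s][v] for v in range(n))
```

On the six‑vertex example used earlier this returns the same maximum flow as Ford‑Fulkerson, since all correct max‑flow algorithms agree on the flow value.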

Algorithm Comparison

| Algorithm | Time Complexity | Typical Use‑Case | Implementation Difficulty |
|---|---|---|---|
| Ford‑Fulkerson (DFS) | O(F·E), F = value of the max flow | Educational, small integer capacities | Easy |
| Edmonds‑Karp (BFS) | O(V·E²) | General graphs where simplicity matters | Moderate |
| Dinic | O(V²·E); O(E·√V) on unit capacities | Large sparse networks | Moderate |
| Push‑Relabel | O(V³); O(V²·√E) with highest‑label | Very dense or real‑time systems | Hard |

Practical Implementation Tips

  • Use an adjacency list with edge objects storing capacity and reverse‑edge index for O(1) residual updates.
  • Store capacities in 64‑bit integers (long long) to avoid overflow on large networks.
  • When implementing Dinic, reuse the level graph across DFS calls and reset the iterator array to avoid redundant scans.
  • For push‑relabel, maintain a list (or bucket) of active vertices sorted by height to achieve the highest‑label rule efficiently.
📝 Note: Always test your implementation on known benchmark graphs (e.g., the Stanford Network Analysis Platform (SNAP) datasets) to validate correctness and performance.
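The first tip above, an adjacency list whose edges know the index of their reverse edge, can be sketched as follows (class and method names are illustrative, not from any library):

```python
class FlowGraph:
    """Adjacency-list flow network; each edge stores the index of its
    reverse edge, so a residual update touches both edges in O(1)."""

    def __init__(self, n):
        # adj[u] holds mutable edges: [to, remaining_capacity, reverse_index]
        self.adj = [[] for _ in range(n)]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])  # residual edge

    def push(self, u, i, amount):
        """Send `amount` of flow along the i-th edge out of u."""
        v, _, rev = self.adj[u][i]
        self.adj[u][i][1] -= amount    # forward residual shrinks
        self.adj[v][rev][1] += amount  # reverse residual grows
```

Storing the reverse index at insertion time avoids any lookup when augmenting, which is exactly what the C++ Dinic implementation above does with its `rev` field.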

Real‑World Applications

  1. Network bandwidth allocation – finding the maximum data throughput between two routers.
  2. Image segmentation – the min‑cut / max‑flow formulation partitions pixels into foreground/background.
  3. Project selection – modeling profits and prerequisites as a flow network to choose optimal projects.
  4. Sports tournament scheduling – determining feasible match assignments under venue constraints.
The max‑flow/min‑cut theorem states that the value of a maximum flow equals the capacity of a minimum s‑t cut. This duality underpins many optimization problems beyond pure networking.
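The theorem also gives a practical recipe for extracting a minimum cut: after a max‑flow run, the vertices still reachable from s in the residual graph form the source side of the cut. A sketch, using the adjacency‑matrix convention of the earlier Python example (`residual` is the matrix left over after the algorithm finishes):

```python
def min_cut_edges(graph, residual, s):
    """Return the original edges crossing a minimum s-side cut,
    given the residual matrix left by a completed max-flow run."""
    n = len(graph)
    reachable = [False] * n
    stack = [s]
    reachable[s] = True
    while stack:  # DFS over edges with positive residual capacity
        u = stack.pop()
        for v in range(n):
            if residual[u][v] > 0 and not reachable[v]:
                reachable[v] = True
                stack.append(v)
    # Cut edges go from the reachable side to the unreachable side.
    return [(u, v) for u in range(n) for v in range(n)
            if reachable[u] and not reachable[v] and graph[u][v] > 0]
```

By the theorem, the capacities of the returned edges sum to exactly the maximum flow value.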

Summary

📘 Summary: Network flow algorithms provide powerful tools for solving a wide variety of optimization problems. Starting from the simple Ford‑Fulkerson method, more sophisticated techniques like Edmonds‑Karp, Dinic, and Push‑Relabel deliver better performance on large or dense graphs. Understanding residual networks, augmenting paths, level graphs, and preflows is essential for both theoretical insight and practical implementation.

Frequently Asked Questions

Q: Can maximum‑flow algorithms handle multiple sources and sinks?
A: Yes. Introduce a super‑source connected to all original sources with infinite capacity, and a super‑sink connected from all original sinks. The resulting single‑source‑single‑sink problem yields the combined maximum flow.
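That construction is mechanical. A sketch for the adjacency‑matrix representation used earlier (the helper name is ours; instead of true infinity it uses a finite bound larger than any feasible flow, which keeps the arithmetic in integers):

```python
def add_super_terminals(capacity, sources, sinks):
    """Return (new_capacity, super_source, super_sink) for a
    multi-source/multi-sink network given as an adjacency matrix."""
    n = len(capacity)
    big = sum(map(sum, capacity)) + 1  # exceeds any feasible flow value
    s, t = n, n + 1                    # two new vertices appended at the end
    new_cap = ([row[:] + [0, 0] for row in capacity]
               + [[0] * (n + 2) for _ in range(2)])
    for src in sources:
        new_cap[s][src] = big          # super-source feeds every source
    for snk in sinks:
        new_cap[snk][t] = big          # every sink drains to the super-sink
    return new_cap, s, t
```

Running any single‑source max‑flow algorithm on `new_cap` from s to t then yields the combined maximum flow of the original multi‑terminal network.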


Q: What is the difference between a min‑cut and a maximum flow?
A: A min‑cut is a partition of vertices separating source and sink with minimum total capacity of crossing edges. The max‑flow/min‑cut theorem guarantees that the value of any maximum flow equals the capacity of a minimum cut.


Q: When should I prefer Dinic over Push‑Relabel?
A: Dinic excels on sparse graphs and when you need a relatively simple implementation. Push‑Relabel shines on dense graphs or when you need the absolute fastest runtime in practice, at the cost of a more complex codebase.


Quiz

Q. Which of the following statements about the Edmonds‑Karp algorithm is TRUE?
  • It always finds the path with the largest capacity first.
  • Its time complexity is O(V·E²).
  • It uses depth‑first search to locate augmenting paths.
  • It cannot handle graphs with parallel edges.

Answer: Its time complexity is O(V·E²).
Edmonds‑Karp uses BFS to find the shortest (fewest edges) augmenting path, giving a guaranteed O(V·E²) bound. The other statements are false.

Q. In a residual graph, an edge (u, v) with capacity 0 can still be traversed if:
  • The original graph had a reverse edge (v, u) with positive capacity.
  • The current flow on (u, v) is less than its original capacity.
  • The algorithm is using a heuristic search.
  • None of the above.

Answer: The original graph had a reverse edge (v, u) with positive capacity.
Even if the forward residual capacity is 0, a reverse edge may have positive residual capacity, allowing flow to be pushed back.
