When you hear “Elixir” for the first time, you might wonder why a language that runs on the Erlang virtual machine is worth learning. The answer lies in the problems you’ll face when you build modern, Internet‑scale services: handling thousands of concurrent users, surviving hardware failures, and updating code without dropping connections. Erlang was created at Ericsson in the 1980s to solve exactly those challenges for telecom systems, and Elixir simply makes the experience of writing code for that runtime a lot more pleasant.
What We’ll Cover
- The traits that make a system “highly available”
- How Erlang’s process model gives you fault‑tolerance, scalability, distribution and responsiveness
- The OTP framework and the tools that ship with the Erlang runtime
- How Elixir reduces boilerplate, introduces macros, and adds a pipeline operator
- Practical, domain‑agnostic examples that illustrate each concept
Imagine a multiplayer online board game where players from around the globe can join a match at any moment. If the server crashes while a game is in progress, every player loses their state and the experience is ruined. A good system must therefore:
- Keep running even when parts of it fail (fault‑tolerance)
- Grow to accommodate more players without a complete rewrite (scalability)
- Run on several machines so a hardware failure doesn’t bring everything down (distribution)
- Respond quickly, even under heavy load (responsiveness)
- Allow you to deploy a new version without kicking players off (live upgrade)
Erlang was built with exactly these goals in mind, and the language’s runtime (the BEAM VM) provides the primitives that make them possible.
Core Explanation
1. The Erlang Process – Light‑Weight Concurrency
Unlike operating‑system processes or POSIX threads, an Erlang process is a tiny, isolated unit of execution managed entirely by the BEAM VM. A single BEAM instance can host millions of such processes, each with its own heap and stack. Because they share no memory, a crash in one process cannot corrupt another – the classic “one bad apple spoils the bunch” problem disappears.
Processes talk to each other via asynchronous message passing. The message queue is a first‑in‑first‑out (FIFO) mailbox attached to each process. No locks, no mutexes, no shared state – just pure message passing.
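Before reaching for OTP, it helps to see those primitives on their own. The sketch below uses nothing but spawn/1, send/2, and receive: one process waits for a single message and echoes it back to the sender.
# A bare process that waits for one message and echoes it back.
# Minimal sketch of spawn/send/receive – no OTP involved yet.
pid = spawn(fn ->
  receive do
    {:hello, from, text} -> send(from, {:echo, text})
  end
end)

send(pid, {:hello, self(), "ping"})

receive do
  {:echo, text} -> IO.puts("Got back: #{text}")
end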
Example: A Tiny “Scoreboard” in Elixir
# (Note: we write the Erlang code in Elixir syntax just for readability,
# the concepts are the same in pure Erlang.)
defmodule Scoreboard do
  use GenServer

  # Starts a new server that holds a map of %{player => score}
  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  # Public API: add points to a player
  def add_points(player, pts) do
    GenServer.cast(__MODULE__, {:add, player, pts})
  end

  # Public API: fetch a player’s score
  def get_score(player) do
    GenServer.call(__MODULE__, {:get, player})
  end

  ## Callbacks ---------------------------------------------------------

  def init(state), do: {:ok, state}

  def handle_cast({:add, player, pts}, state) do
    new_state = Map.update(state, player, pts, &(&1 + pts))
    {:noreply, new_state}
  end

  def handle_call({:get, player}, _from, state) do
    {:reply, Map.get(state, player, 0), state}
  end
end
Each call to add_points/2 sends an asynchronous message (a cast) that the server handles in its own time, while get_score/1 waits for a reply (a call). If the Scoreboard process crashes, other parts of the system keep running; a supervisor can simply restart it (more on that later).
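For illustration, here is how the scoreboard might be driven from an IEx session, assuming the module is compiled and loaded:
{:ok, _pid} = Scoreboard.start_link([])

Scoreboard.add_points("alice", 10)   # asynchronous – returns :ok immediately
Scoreboard.add_points("alice", 5)

Scoreboard.get_score("alice")        # synchronous – returns 15
Scoreboard.get_score("bob")          # unknown player defaults to 0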
2. OTP – A Library and Set of Conventions
OTP (Open Telecom Platform) is the “batteries‑included” framework that sits on top of the BEAM. It defines a handful of behaviours – GenServer, Supervisor, Application, among others – that encode common patterns:
- GenServer – Generic server processes that manage state and handle messages.
- Supervisor – A process that monitors child processes and restarts them according to a strategy (one‑for‑one, one‑for‑all, etc.).
- Application – A description of a self‑contained component that can be started, stopped, and upgraded as a unit.
Example: A Supervision Tree for a Chat Room Service
defmodule ChatRoom.Server do
  use GenServer

  # Public API – client joins the room
  def join(room, user) do
    GenServer.call(via_name(room), {:join, user})
  end

  # Public API – send a message to a room
  def broadcast(room, msg) do
    GenServer.cast(via_name(room), {:broadcast, msg})
  end

  # -------------------------------------------------------------------

  def start_link(room) do
    GenServer.start_link(__MODULE__, %{room: room, users: %{}}, name: via_name(room))
  end

  def init(state), do: {:ok, state}

  # `from` is a `{caller_pid, tag}` tuple – we store the caller’s pid
  # (self() here would be the server’s own pid, not the user’s).
  def handle_call({:join, user}, {caller_pid, _tag}, state) do
    new_users = Map.put(state.users, user, caller_pid)
    {:reply, :ok, %{state | users: new_users}}
  end

  def handle_cast({:broadcast, msg}, state) do
    # Forward the message to each user process
    Enum.each(state.users, fn {_user, pid} -> send(pid, {:msg, state.room, msg}) end)
    {:noreply, state}
  end

  # Helper – builds a unique name for each room process
  defp via_name(room), do: {:via, Registry, {ChatRoom.Registry, room}}
end
defmodule ChatRoom.Supervisor do
  use Supervisor

  def start_link(_opts) do
    Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def init(:ok) do
    children = [
      {Registry, keys: :unique, name: ChatRoom.Registry}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end

  # Dynamically start a room when needed
  def start_room(room_name) do
    Supervisor.start_child(__MODULE__, %{
      id: room_name,
      start: {ChatRoom.Server, :start_link, [room_name]},
      restart: :permanent,
      shutdown: 5000,
      type: :worker
    })
  end
end
Here we have a ChatRoom.Supervisor that can spin up a new ChatRoom.Server for each chat room on demand. If a room process crashes, its supervisor restarts it, preserving the overall availability of the chat service.
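A quick usage sketch from an IEx session (the room name "lobby" and user "alice" are made up for illustration):
{:ok, _sup} = ChatRoom.Supervisor.start_link([])
{:ok, _room} = ChatRoom.Supervisor.start_room("lobby")

:ok = ChatRoom.Server.join("lobby", "alice")
ChatRoom.Server.broadcast("lobby", "welcome!")

# The calling process joined as "alice", so the broadcast lands in its mailbox:
receive do
  {:msg, room, msg} -> IO.puts("[#{room}] #{msg}")
end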
3. Distribution – Same Code, Multiple Nodes
Erlang’s message‑passing works whether the sender and receiver live in the same BEAM instance or across a network. All you have to do is start multiple nodes (think of them as separate OS processes) and let them know each other’s names. The runtime takes care of routing messages.
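For instance, if two nodes are started with short names (say iex --sname a and iex --sname b on the same machine; the hostname below is hypothetical), connecting them takes a single call:
# Run inside the node started as `iex --sname a`
Node.self()                 #=> :"a@myhost"
Node.connect(:"b@myhost")   #=> true
Node.list()                 #=> [:"b@myhost"]

# A message can now be sent to a process registered on the other node:
send({:some_registered_name, :"b@myhost"}, {:hello, Node.self()})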
Example: A Distributed Counter
defmodule Distributed.Counter do
  use GenServer

  # Starts a counter on the local node
  def start_link(name) do
    GenServer.start_link(__MODULE__, 0, name: {:global, name})
  end

  # Increment the counter from any connected node
  def inc(name) do
    GenServer.cast({:global, name}, :inc)
  end

  # Fetch the current value from any node
  def value(name) do
    GenServer.call({:global, name}, :value)
  end

  # -------------------------------------------------------------------

  def init(initial), do: {:ok, initial}

  def handle_cast(:inc, count), do: {:noreply, count + 1}

  def handle_call(:value, _from, count), do: {:reply, count, count}
end
Because we register the process under the :global registry, any connected node that knows the registered name (the atom :my_counter, say) can invoke Distributed.Counter.inc/1 or value/1. Add a new node to the cluster, and the counter continues to work without any code changes.
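A usage sketch, assuming the two hypothetical nodes from the previous example are already connected:
# On node :"a@myhost"
{:ok, _pid} = Distributed.Counter.start_link(:my_counter)
Distributed.Counter.inc(:my_counter)

# On node :"b@myhost" – same API, no extra code
Distributed.Counter.inc(:my_counter)
Distributed.Counter.value(:my_counter)   #=> 2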
4. The Elixir Advantage: Less Boilerplate, More Expressiveness
Erlang’s power comes with a certain amount of “noise”: you must declare the gen_server behaviour, export and implement every callback, and hand‑write the tuple‑based plumbing. Elixir’s syntax and macro system smooth out those rough edges.
4.1. From Verbose to Concise
Consider the same scoreboard server written in pure Erlang:
%% Erlang (illustrative) – many lines just to start a server
-module(scoreboard).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2]).
...
Now the Elixir version (already shown above) folds the behaviour declaration and export list into a single use GenServer line, so you only write the callbacks you actually need. The intent is instantly clear: “this is a server, here are its public functions, here’s the state transformation.”
4.2. Macros – Writing Language Extensions
Elixir lets you write code that runs at compile time. A macro receives the abstract syntax tree (AST) of the code you wrote, rewrites it, and returns a new AST. This lets you create domain‑specific mini‑languages.
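You can inspect the raw material a macro receives by quoting an expression in IEx (metadata elided below for brevity): every expression is represented as a {form, metadata, arguments} tuple.
quote do
  1 + 2 * 3
end
#=> {:+, _meta, [1, {:*, _meta, [2, 3]}]}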
Let’s build a tiny DSL for defining “ping‑pong” actors, useful for teaching or testing:
defmodule PingPong do
  # The macros provide a clean syntax like:
  #
  #   ping_pong do
  #     ping "hello"
  #     pong fn msg -> IO.puts("Received: #{msg}") end
  #   end

  defmacro ping_pong(do: block) do
    quote do
      # The block can contain `ping/1` and `pong/1` calls.
      # We evaluate it in a fresh process.
      spawn(fn ->
        unquote(block)
      end)
    end
  end

  defmacro ping(message) do
    quote do
      send(self(), {:ping, unquote(message)})
    end
  end

  defmacro pong(fun) do
    quote do
      receive do
        {:ping, msg} -> unquote(fun).(msg)
      end
    end
  end
end
# Using the DSL – import brings ping/1, pong/1, and ping_pong/1 into scope
import PingPong

ping_pong do
  ping "hey there"
  pong fn msg -> IO.puts("Pong received: #{msg}") end
end
The macro hides all the low‑level spawn/1 and receive boilerplate, allowing the user to focus on the intent: “ping something, then handle it with a pong function.”
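For comparison, the code those macros generate is roughly equivalent to writing this by hand:
spawn(fn ->
  send(self(), {:ping, "hey there"})

  receive do
    {:ping, msg} -> IO.puts("Pong received: #{msg}")
  end
end)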
4.3. The Pipeline Operator – Readable Function Composition
In a functional style you often see a chain of transformations: data → transform1 → transform2 → … → final_result. Erlang forces you to nest calls or introduce temporary variables, both of which hurt readability.
Elixir’s |> (pipeline) operator automatically threads a value as the first argument of the next function. It reads like a series of steps in natural language.
defmodule OrderProcessor do
  def receive_order(raw) do
    raw
    |> parse_json()
    |> validate()
    |> calculate_total()
    |> store()
  end

  # Illustrative stubs – a real system would do proper validation and pricing.
  defp parse_json(json), do: Jason.decode!(json)
  defp validate(map), do: Map.put_new(map, :valid?, true)
  defp calculate_total(map), do: Map.put(map, :total, 42.0)
  defp store(map), do: :mnesia.transaction(fn -> :mnesia.write({:order, map}) end)
end
Each step consumes the result of the previous one, so the overall flow is linear and easy to follow. Under the hood the compiler rewrites this into nested calls, but you never have to write that ugliness yourself.
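Without the pipe, the same flow has to be read inside‑out:
# Equivalent nested form of receive_order/1
def receive_order(raw) do
  store(calculate_total(validate(parse_json(raw))))
end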
Common Patterns in Production Code
- Supervision Trees: Group related workers under a supervisor so crashes are isolated and automatically recovered.
- GenServer + Registry: Use a Registry to give each logical entity (e.g., a room, a user, a game) a unique name and look it up without storing PIDs manually.
- Dynamic Workers: Spawn processes on demand (chat rooms, shopping carts, game sessions) and let the supervisor clean them up when they finish.
- Hot Code Upgrade: Define code_change/3 callbacks to transform state when a new version of a module is loaded, enabling zero‑downtime deployments (a minimal sketch follows this list).
- Pipeline‑Driven Data Flow: Build pipelines for data validation, enrichment, and persistence, especially in API gateways or ETL pipelines.
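As a minimal, hypothetical sketch of such a callback: suppose an older version of a counter server stored a bare integer and the new version wants a map.
# Hypothetical migration – old state was a bare integer, new state is a map.
def code_change(_old_vsn, count, _extra) when is_integer(count) do
  {:ok, %{count: count, history: []}}
end

def code_change(_old_vsn, state, _extra), do: {:ok, state}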
Pitfalls to Watch Out For
- Over‑Spawning Processes – While you can safely create many processes, each one still consumes memory. Create long‑lived processes only when it makes sense; short‑lived “worker” processes are fine, but an unbounded number of idle processes can bloat the VM.
- Blocking Calls in GenServer – Avoid long‑running, blocking operations (e.g., HTTP requests, heavy CPU loops) inside a handle_call or handle_cast. Offload them to separate workers or use Task.async/await to keep the server responsive (see the sketch after this list).
- Misusing Global Registries – :global is convenient, but registering and looking up names requires coordination across every connected node. Use it sparingly; prefer Registry with a local scope unless you truly need a cluster‑wide name.
- Hot Upgrade Complexity – Implementing code_change/3 correctly requires thinking about state migrations. If you don’t need zero‑downtime deploys, a classic rolling restart is simpler.
- Macro Overuse – Macros can make code look like magic. Keep them small and well documented, and avoid hiding essential logic that future readers need to understand.
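To illustrate the blocking‑calls pitfall, here is one common way to keep a GenServer responsive: hand the slow work to a Task and reply once the result arrives. This is only a sketch; SlowLookup and expensive_fetch/1 are made‑up names, and the sleep stands in for any slow operation.
defmodule SlowLookup do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def init(state), do: {:ok, state}

  def lookup(key), do: GenServer.call(__MODULE__, {:lookup, key}, 15_000)

  # Don't block the server: start a task, remember who asked, reply later.
  def handle_call({:lookup, key}, from, state) do
    task = Task.async(fn -> expensive_fetch(key) end)
    {:noreply, Map.put(state, task.ref, from)}
  end

  # The Task sends its result back as a message tagged with its monitor ref.
  def handle_info({ref, result}, state) when is_map_key(state, ref) do
    Process.demonitor(ref, [:flush])
    {from, state} = Map.pop(state, ref)
    GenServer.reply(from, result)
    {:noreply, state}
  end

  def handle_info(_other, state), do: {:noreply, state}

  # Stand-in for a real HTTP call or heavy computation (hypothetical).
  defp expensive_fetch(key) do
    Process.sleep(1_000)
    {:ok, key}
  end
end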
Summary
- Erlang provides a runtime tuned for high availability: fault tolerance, scalability, distribution, and responsive scheduling are baked into the BEAM VM.
- OTP supplies proven patterns (GenServer, Supervisor, Application) that let you compose robust systems with minimal effort.
- Elixir builds on top of Erlang, delivering a modern syntax, powerful macros, and the pipeline operator, all of which dramatically reduce boilerplate.
- When you combine these tools, you can build services like chat rooms, game servers, or real‑time analytics pipelines that stay alive, scale gracefully, and can be upgraded without dropping connections.
Armed with this understanding, you’re ready to dive into the deeper parts of Elixir – from building OTP‑based applications to creating your own macro‑driven DSLs. The next chapters will show you how to turn these concepts into production‑grade code that runs reliably, no matter how many users knock on your door.