One of the biggest reasons Elixir can power large‑scale, fault‑tolerant services is its underlying runtime – the BEAM virtual machine. The BEAM gives you a lightweight concurrency model that looks very different from threads or OS processes you might be used to. In this article we’ll explore the essential building blocks that make BEAM concurrency tick, learn how to create and coordinate processes, and see practical patterns you’ll use every day when writing real‑world applications.
Why Concurrency Matters
When a web service receives thousands of requests per second, it must keep those requests from stepping on each other’s toes. If a single request blocks the whole system, latency spikes, queues grow, and users notice the slowdown. Moreover, a bug in one request handler should never bring down the whole server.
BEAM tackles these challenges with three core guarantees:
- Isolation – each process runs in its own memory space. A crash in one process cannot corrupt another.
- Scalability – the VM can spawn millions of tiny processes, each consuming only a few kilobytes of memory.
- Fault‑tolerance – supervisors can monitor processes and automatically restart them when they fail.
The rest of this article focuses on the first two guarantees: spawning processes and making them talk to each other.
Creating Lightweight Processes
In the BEAM world the word “process” refers to an Erlang‑style lightweight entity, not an OS process. Think of a process as a sealed box that holds data and a piece of code that runs until it finishes or crashes.
Spawning a Process
The easiest way to start a new process is with spawn/1. It expects a zero‑arity function (a function that takes no arguments) and runs that function in a brand‑new process.
# Simulate a long‑running operation that converts a temperature reading
convert = fn raw_value ->
  :timer.sleep(1500)   # pretend we talk to hardware
  raw_value * 1.8 + 32 # Celsius → Fahrenheit
end
# Fire off the conversion in its own process
pid = spawn(fn -> IO.puts("Result: #{convert.(25)}") end)
IO.inspect(pid) # => #PID<0.123.0>
When spawn/1 returns, the caller is free to keep working while the new process does its job in the background. The identifier returned (the pid) can later be used to send messages to the process.
Passing Data Into a Process
Data is passed to a newly created process via closure capture. The value is copied (deep‑copied) into the child’s memory; there is no shared state.
# Imagine we have a list of image file names we need to thumbnail.
files = ["photo1.jpg", "photo2.png", "avatar.gif"]
thumb = fn file_name ->
  spawn(fn ->
    # pretend we run an external thumbnail generator
    :timer.sleep(2000)
    IO.puts("Created thumbnail for #{file_name}")
  end)
end
Enum.each(files, &thumb.(&1))
# All three thumbnails start concurrently.
Communicating With Messages
Since processes do not share memory, they exchange information by sending messages. A message can be any Elixir term – a tuple, a map, a list, or even another pid.
Sending a Message
Use send/2 where the first argument is the recipient’s pid and the second argument is the payload.
# A simple logger process that prints anything it receives.
logger = spawn(fn ->
  loop = fn loop_fun ->
    receive do
      msg ->
        IO.puts("[LOG] #{inspect(msg)}")
        loop_fun.(loop_fun) # tail-recursive loop
    end
  end

  loop.(loop)
end)
# Send a few log lines
send(logger, {:info, "System started"})
send(logger, {:error, "Failed to connect to DB"})
The logger will print the two messages as soon as they arrive, but the caller continues without waiting.
Receiving a Message
Every process has a mailbox – a first-in, first-out queue that stores incoming messages. The receive construct scans the mailbox for the oldest message that matches one of its patterns, removes it, and runs the matching clause; messages that match no clause simply stay in the mailbox.
# A worker that asks a “calculator” process for the sum of two numbers.
calculator = spawn(fn ->
  loop = fn loop_fun ->
    receive do
      {:add, a, b, reply_to} ->
        send(reply_to, {:answer, a + b})
        loop_fun.(loop_fun)
    end
  end

  loop.(loop)
end)
caller = self()
send(calculator, {:add, 12, 30, caller})
receive do
  {:answer, value} -> IO.puts("Got result: #{value}")
after
  3000 -> IO.puts("Timed out waiting for answer")
end
The after clause puts an upper bound on how long receive will block, which is handy when you don’t want a process to wait forever for a reply.
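A timeout of zero is a useful variation: it lets you check the mailbox without blocking at all. A minimal sketch of that trick:

# Peek at the mailbox: with "after 0" the receive returns immediately
# when no matching message is waiting.
check_mail = fn ->
  receive do
    msg -> {:ok, msg}
  after
    0 -> :empty
  end
end

IO.inspect(check_mail.()) # => :empty (nothing sent yet)
send(self(), :ping)
IO.inspect(check_mail.()) # => {:ok, :ping}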
Message Ordering Guarantees
When several messages are sent from the same sender to the same receiver, they are guaranteed to appear in the receiver’s mailbox in the order they were sent. This does not apply to messages coming from different senders – they can be interleaved arbitrarily.
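A quick way to convince yourself of the per-sender guarantee (the atoms here are just placeholders):

me = self()

# One sender, three messages.
spawn(fn ->
  send(me, :first)
  send(me, :second)
  send(me, :third)
end)

# Receiving three times always yields :first, :second, :third, in that order.
for _ <- 1..3 do
  receive do
    msg -> IO.puts("received #{msg}")
  end
end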
Common Communication Patterns
Fire‑and‑Forget
When you simply need to start a job and don’t care about its result, spawn the job and let it send any diagnostic messages you like (or none at all). This pattern is great for background cleanup, logging, or metric collection.
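As a sketch, a request handler might record a metric in the background without ever waiting for a reply (record_metric is a hypothetical stand-in for whatever your metrics layer provides):

# Hypothetical metric call – substitute your real metrics client.
record_metric = fn name, value ->
  :timer.sleep(100) # pretend we talk to a metrics backend
  IO.puts("metric #{name}=#{value}")
end

# Fire and forget: the caller moves on immediately and expects no reply.
spawn(fn -> record_metric.("orders.created", 1) end)
IO.puts("handler keeps going")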
Request‑Response (Synchronous‑style Messaging)
Even though all messages are asynchronous, you can emulate a synchronous call by embedding the caller’s pid in the request and then performing a receive for the reply.
defmodule Counter do
  def start_link(initial \\ 0) do
    spawn(fn -> loop(initial) end)
  end

  defp loop(value) do
    receive do
      {:inc, by, caller} ->
        new = value + by
        send(caller, {:new_value, new})
        loop(new)

      {:get, caller} ->
        send(caller, {:value, value})
        loop(value)
    end
  end
end
counter = Counter.start_link(10)
# Increment by 5 and wait for the new counter value.
send(counter, {:inc, 5, self()})
receive do
  {:new_value, v} -> IO.puts("Counter is now #{v}")
end
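In a real code base you would normally hide this send/receive choreography behind client functions, so callers see a synchronous-looking API. A minimal sketch of that idea for the Counter above (the module name and the 5-second timeout are arbitrary choices):

defmodule CounterClient do
  # Synchronous-looking wrappers around the asynchronous message protocol.
  def inc(counter, by) do
    send(counter, {:inc, by, self()})

    receive do
      {:new_value, v} -> {:ok, v}
    after
      5_000 -> {:error, :timeout}
    end
  end

  def get(counter) do
    send(counter, {:get, self()})

    receive do
      {:value, v} -> {:ok, v}
    after
      5_000 -> {:error, :timeout}
    end
  end
end

counter = Counter.start_link(10) # a fresh counter for this example
{:ok, 15} = CounterClient.inc(counter, 5)
{:ok, 15} = CounterClient.get(counter)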
One‑Way Broadcast
Sometimes a process needs to notify many listeners about an event (e.g., a new chat message). The easiest approach is to keep a list of subscriber pids and send the same payload to each.
defmodule PubSub do
  def start_link do
    spawn(fn -> loop([]) end)
  end

  defp loop(subscribers) do
    receive do
      {:subscribe, pid} ->
        loop([pid | subscribers])

      {:publish, msg} ->
        Enum.each(subscribers, fn sub -> send(sub, {:event, msg}) end)
        loop(subscribers)
    end
  end
end
pub = PubSub.start_link()
send(pub, {:subscribe, self()})
send(pub, {:publish, "New announcement!"})
receive do
  {:event, text} -> IO.puts("Received: #{text}")
after
  1000 -> :ok
end
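Because subscribers are just pids, any process can register itself. Here is a small sketch with two spawned listeners; the short sleep is only there so both subscriptions arrive before the publish, since the ordering guarantee does not hold across different senders:

pub2 = PubSub.start_link()

# Two independent listeners, each subscribing itself.
for name <- ["listener A", "listener B"] do
  spawn(fn ->
    send(pub2, {:subscribe, self()})

    receive do
      {:event, text} -> IO.puts("#{name} got: #{text}")
    end
  end)
end

:timer.sleep(100) # let both subscriptions arrive first
send(pub2, {:publish, "New announcement!"})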
Practical Tips & Common Pitfalls
- Never rely on self() inside a spawned function without capturing it first. self() always returns the pid of the process that is executing the code at that moment, so if you call it inside the child’s closure without first saving the caller’s pid in a variable, the child ends up sending messages to itself instead of to the caller.
- Beware of unbounded mailboxes. A process that handles messages more slowly than they arrive will accumulate them in memory, potentially exhausting the VM. Use back-pressure or limit the rate at which you send; see the sketch after this list for how to inspect a backlog.
- Watch for message pattern collisions. If two different parts of your system use the same tuple shape, a receive might pick up a message intended for another component. Namespace your messages (e.g., {:sensor, :temp, value} vs {:sensor, :pressure, value}).
- Do not block indefinitely unless you really intend to. A reasonable after clause prevents a process from hanging forever if a peer crashes or a network partition occurs.
- Keep processes short-lived when possible. Short-lived workers are cheap to spawn and easier to supervise. Long-running processes should be part of a supervision tree so they can be restarted on failure.
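If you suspect a process is falling behind, the standard library can tell you how deep its mailbox is via Process.info/2. A minimal sketch (the numbers are arbitrary):

# A deliberately slow consumer: it takes one message, then sleeps forever.
slow = spawn(fn ->
  receive do
    _ -> Process.sleep(:infinity)
  end
end)

# A fast producer floods it with work.
for i <- 1..10_000, do: send(slow, {:work, i})

# Inspect the backlog sitting in the consumer's mailbox.
{:message_queue_len, len} = Process.info(slow, :message_queue_len)
IO.puts("mailbox backlog: #{len} messages")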
Putting It All Together – A Mini “Order Fulfillment” System
Below is a compact example that demonstrates spawning workers, the request‑response pattern, and a coordinator process that ties the steps together. The scenario: a client process places an order, a “warehouse” process picks items, a “payment” process charges the credit card, and finally a “notifier” process tells the user how the order went.
defmodule Warehouse do
  def start_link do
    spawn(fn -> loop() end)
  end

  defp loop do
    receive do
      {:pick, order_id, items, caller} ->
        :timer.sleep(1000) # simulate packing
        send(caller, {:picked, order_id, items})
        loop()
    end
  end
end

defmodule Payment do
  def start_link do
    spawn(fn -> loop() end)
  end

  defp loop do
    receive do
      {:charge, order_id, amount, caller} ->
        :timer.sleep(500) # simulate external gateway
        send(caller, {:charged, order_id, :ok})
        loop()
    end
  end
end

defmodule Notifier do
  def start_link do
    spawn(fn -> loop() end)
  end

  defp loop do
    receive do
      {:notify, order_id, status} ->
        IO.puts("Order #{order_id} #{status}")
        loop()
    end
  end
end

defmodule OrderCoordinator do
  def start_link(warehouse, payment, notifier) do
    spawn(fn -> loop(warehouse, payment, notifier) end)
  end

  defp loop(warehouse, payment, notifier) do
    receive do
      {:order, order_id, items, amount, client} ->
        # 1. Ask the warehouse to pick items
        send(warehouse, {:pick, order_id, items, self()})

        # 2. Wait for the pick result
        receive do
          {:picked, ^order_id, _picked_items} ->
            # 3. Charge the payment
            send(payment, {:charge, order_id, amount, self()})
        end

        # 4. Wait for payment confirmation
        receive do
          {:charged, ^order_id, :ok} ->
            # 5. Notify the user and the original client
            send(notifier, {:notify, order_id, "completed"})
            send(client, {:order_done, order_id})
        after
          5000 ->
            send(notifier, {:notify, order_id, "failed (timeout)"})
            send(client, {:order_failed, order_id})
        end

        loop(warehouse, payment, notifier)
    end
  end
end
# ----- Start the system -----
warehouse = Warehouse.start_link()
payment = Payment.start_link()
notifier = Notifier.start_link()
coordinator = OrderCoordinator.start_link(warehouse, payment, notifier)
# Simulate a client placing an order
client = self()
send(coordinator, {:order, 42, [:widget, :gadget], 1299, client})
receive do
  {:order_done, id} -> IO.puts("Client: order #{id} succeeded")
  {:order_failed, id} -> IO.puts("Client: order #{id} failed")
after
  6000 -> IO.puts("Client: no reply, something went wrong")
end
Notice how each component is a tiny process that only knows about the messages it cares about. The coordinator stitches the workflow together by chaining receive blocks, inserting a timeout to avoid hanging forever if, for example, the payment gateway never replies.
Summary
- The BEAM’s lightweight processes let you split work into many independent units without the overhead of OS threads.
- Processes are isolated – they do not share memory, and they communicate solely through asynchronous message passing.
- Use spawn/1 (or spawn/3, Task, etc.) to run code concurrently; keep the caller’s pid handy if you need a response.
- Messages are just Elixir terms. Send them with send/2 and retrieve them with receive, optionally guarded by an after timeout.
- Pattern matching inside receive allows you to filter and route messages cleanly.
- Common patterns include fire-and-forget, request-response, and broadcast.
- Watch out for unbounded mailboxes, accidental pid mix‑ups, and indefinite blocking.
Armed with these primitives, you can start building robust, highly concurrent applications that scale across cores and machines, taking full advantage of what the BEAM was designed for.