- processes, this post
- actors and subjects
- supervised actors (not yet published)
Processes
A process on the BEAM is an independently running piece of software. It:
- is lightweight
- has its own memory (and garbage collector)
- is scheduled by the BEAM
- communicates with other processes via messages
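"Lightweight" here is literal: spawning many thousands of processes is routine. A minimal sketch, assuming the gleam_erlang process API and the standard library's gleam/list:

```gleam
import gleam/erlang/process
import gleam/list

pub fn main() {
  // each spawned process gets its own heap and is scheduled by the BEAM
  list.range(1, 10_000)
  |> list.each(fn(_) { process.spawn(fn() { process.sleep(10) }) })
  echo "spawned 10k processes"
}
```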
A Gleam actor is a process that processes incoming typed messages. A supervisor is also a process, whose only job is to keep an eye on other processes. Registries are processes that can look things up for you, possibly other processes.
When to use 'plain' processes?
You should use processes for what they are good at: actors to act repeatedly, supervisors to supervise, and so on. So what would you use 'plain' processes for? When it is OK that the process crashes.
Whenever you run a process (of any type) you are interested in:
- a side effect (e.g. an e-mail sent, or a message sent to another BEAM process)
- the result of the computation (again, a message sent to another process, probably the originating process)
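For the second case, the spawned process can send its result back to the originating process through a subject. A hypothetical minimal sketch, assuming the gleam_erlang process API:

```gleam
import gleam/erlang/process

pub fn main() {
  // the subject is created by (and owned by) the main process
  let results = process.new_subject()
  // the spawned process sends the result of its computation back
  process.spawn(fn() { process.send(results, 21 * 2) })
  // wait up to 100 ms for the message to arrive
  let assert Ok(n) = process.receive(results, 100)
  echo n
}
```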
Time to have a look at what happens when we run 'plain' processes and let them crash.
Be Good
When the process does not crash, all is fine. Note: the side effect of this example is its output.
import gleam/erlang/process
pub fn main() {
process.spawn(fn() { echo "Hello, world!" })
}
$ gleam run -m spawn_002
Compiled in 0.02s
Running spawn_002.main
src/spawn_002.gleam:4
"Hello, world!"
Be Bad
What happens when our plain process crashes?
import gleam/erlang/process
pub fn main() {
process.spawn(fn() { panic as "Hello, world!" })
process.sleep(123)
}
$ gleam run -m spawn_panic_004
Compiled in 0.05s
Running spawn_panic_004.main
=CRASH REPORT==== 25-Feb-2026::17:57:27.407061 ===
crasher:
initial call: spawn_panic_004:'-main/0-anonymous-0-'/0
pid: <0.84.0>
registered_name: []
exception error: #{function => <<"main">>,line => 4,
message => <<"Hello, world!">>,
module => <<"spawn_panic_004">>,
file => <<"src/spawn_panic_004.gleam">>,
gleam_error => panic}
in function spawn_panic_004:'-main/0-anonymous-0-'/0 (src/spawn_panic_004.gleam:6)
ancestors: [<0.83.0>]
message_queue_len: 0
messages: []
links: [<0.83.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 233
stack_size: 29
reductions: 19
neighbours:
neighbour:
pid: <0.83.0>
registered_name: []
initial_call: {erlang,apply,2}
current_function: {erlang,prepare_loading_1,2}
ancestors: []
message_queue_len: 0
links: [<0.10.0>,<0.84.0>]
trap_exit: false
status: running
heap_size: 233
stack_size: 13
reductions: 414
current_stacktrace: [{erlang,prepare_loading_1,2,[]},
{code,ensure_loaded,1,[{file,"code.erl"},{line,582}]},
{error_handler,undefined_function,3,
[{file,"error_handler.erl"},{line,86}]},
{processes@@main,run_module,1,
[{file,
"/home/kero/CodeChange/new-web-content/gleam-blog/20260225-2-processes/build/dev/erlang/processes/_gleam_artefacts/processes@@main.erl"},
{line,27}]}]
runtime error: panic
Hello, world!
stacktrace:
spawn_panic_004.-main/0-anonymous-0- src/spawn_panic_004.gleam:4
proc_lib.init_p proc_lib.erl:317
Whoops! Our process terminated abnormally, and worse, it took out our fancy application.
In Gleam, processes start linked, meaning that when a process terminates abnormally, all linked processes are sent a special exit message. Without any precautions, that means the linked processes also terminate. The list of links contains one link, [<0.83.0>], which is our main function (it is also the ancestor). There is also the list of neighbours, which gives more information about what our main function was doing, FWIW.
! Such a link goes in both directions. If we were to panic in our main function, the spawned process would go down with it.
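To see the other direction, here is a sketch (with a hypothetical delay of 50 ms) where the panic in main takes the spawned process down before it can print:

```gleam
import gleam/erlang/process

pub fn main() {
  process.spawn(fn() {
    process.sleep(50)
    // never reached: the link to main tears this process down first
    echo "still alive"
  })
  // main panics while the spawned process is still sleeping
  panic as "Hello, world!"
}
```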
! Do note the process.sleep. We need our main process to be alive when our spawned function dies. Without sleeping, it might have terminated already, or it might not - that is what concurrency is.
! We did not need to sleep in our previous example. I/O is done by a special process started by the BEAM. Even though our main function had terminated, current Gleam (1.14) waits for one second after it exits. That is enough time for the I/O process to do its printing.
Spawn Unlinked
import gleam/erlang/process
pub fn main() {
process.spawn_unlinked(fn() { panic as "Hello, world!" })
process.sleep(123)
}
Compiling processes
Compiled in 0.60s
Running spawn_unlinked_007.main
=CRASH REPORT==== 25-Feb-2026::17:56:51.541772 ===
crasher:
initial call: spawn_unlinked_007:'-main/0-anonymous-0-'/0
pid: <0.84.0>
registered_name: []
exception error: #{function => <<"main">>,line => 4,
message => <<"Hello, world!">>,
module => <<"spawn_unlinked_007">>,
file => <<"src/spawn_unlinked_007.gleam">>,
gleam_error => panic}
in function spawn_unlinked_007:'-main/0-anonymous-0-'/0 (src/spawn_unlinked_007.gleam:6)
ancestors: [<0.83.0>]
message_queue_len: 0
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 233
stack_size: 29
reductions: 19
neighbours:
OK, that is better. Some logger still dumped the crash report on stdout, and as you can see the list of neighbours is now empty. Our main function was allowed to finish in peace.
Monitoring a process
You may monitor the process, so you are notified when it crashes. Yet as soon as you retry the work of that process, you are doing supervision (use supervisors). Doing something else is OK, say, keeping a log. For instance, a testing framework is interested in the crash (assert False), but will not restart the test.
import gleam/erlang/process
pub fn main() {
let #(_pid, mon) = spawn_monitored(fn() { panic as "Hello, world!" })
process.new_selector()
|> process.select_specific_monitor(mon, Wrap)
|> process.selector_receive(123)
|> echo
}
pub type Msg {
Wrap(process.Down)
}
@external(erlang, "erlang", "spawn_monitor")
fn spawn_monitored(f: fn() -> Nil) -> #(process.Pid, process.Monitor)
$ gleam run -m spawn_monitored_010
Compiled in 0.02s
Running spawn_monitored_010.main
src/spawn_monitored_010.gleam:8
Ok(Wrap(ProcessDown(//erl(#Ref<0.3063925226.1700003845.102100>), //erl(<0.84.0>), Abnormal(#(dict.from_list([#(Function, "main"), #(Line, 4), #(Message, "Hello, world!"), #(Module, "spawn_monitored_010"), #(File, "src/spawn_monitored_010.gleam"), #(GleamError, Panic)]), [SpawnMonitored10(atom.create("-main/0-anonymous-0-"), 0, [File(charlist.from_string("src/spawn_monitored_010.gleam")), Line(7)])])))))
=ERROR REPORT==== 25-Feb-2026::18:34:52.022236 ===
Error in process <0.84.0> with exit value:
{#{function => <<"main">>,line => 4,message => <<"Hello, world!">>,
module => <<"spawn_monitored_010">>,
file => <<"src/spawn_monitored_010.gleam">>,gleam_error => panic},
[{spawn_monitored_010,'-main/0-anonymous-0-',0,
[{file,"src/spawn_monitored_010.gleam"},{line,7}]}]}
Quite the message we receive.
! We have an error report now, not a crash report.
! This Erlang function is not in the Gleam library (yet?).
Trapping exits
You can even trap exits of linked processes, as I looked at in my original blog post. It seems no better than monitoring in any way that I can think of, therefore I am not giving you a code example again.
Moral of the Story
Plain processes are OK for short-lived programs, where you see the result immediately.
In permanently running services, you probably want supervised actors (posts will be published soon). Only when the side effect is optional should you use plain processes.