(NOTE: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under storm-client/src rather than src/.)

This page explains in detail the lifecycle of a topology: from running the “storm jar” command, to uploading the topology to Nimbus, to the supervisors starting and stopping workers, to the workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shut down when they are killed.

First, a couple of important notes about topologies:

  1. The actual topology that runs is different from the topology the user specifies. The actual topology has implicit streams and an implicit “acker” bolt added to manage the acking framework (used to guarantee data processing). The implicit topology is created via the `system-topology!` function.
  2. `system-topology!` is used in two places:
     - when Nimbus is creating tasks for the topology
     - in the worker, so it knows where it needs to route messages to
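To make the idea concrete, here is a minimal Python sketch (not Storm's actual Clojure implementation; all names and the data layout are invented for illustration) of how a "system topology" can be derived from the user's topology by adding an implicit acker bolt that subscribes to ack streams from every component:

```python
# Toy sketch: derive a "system topology" from a user topology by adding
# an implicit acker bolt. The dict-based representation and the stream
# names are hypothetical, chosen only to illustrate the concept.

def system_topology(user_topology):
    """Return a copy of the user topology with implicit system components."""
    topo = {
        "spouts": dict(user_topology["spouts"]),
        "bolts": dict(user_topology["bolts"]),
    }
    # Every spout and bolt gets an implicit stream to the acker so that
    # tuple trees can be tracked and acked or failed.
    ackable = list(topo["spouts"]) + list(topo["bolts"])
    topo["bolts"]["__acker"] = {
        "inputs": [(component, "__ack") for component in ackable],
    }
    return topo

# Usage: the user topology is left untouched; only the derived copy
# contains the implicit acker.
user = {
    "spouts": {"words": {}},
    "bolts": {"count": {"inputs": [("words", "default")]}},
}
sys_topo = system_topology(user)
print(sorted(sys_topo["bolts"]))  # → ['__acker', 'count']
```

Because both Nimbus and the workers call the same derivation, they agree on the full set of components and streams, including the implicit ones, which is why the real `system-topology!` is invoked in both places.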

Starting a topology

Topology monitoring

Killing a topology