WindowManager and WindowsStore for storing tuples and trigger-related information.
msgId.
JmsMessageID.
System.currentTimeMillis() as the tracking ts.
IColumn interface.
ICounter interface.
WindowManager.
CombinerAggregator.
DNSToSwitchMapping interface. It alternates between RACK1 and RACK2 for the hosts.
Pair of the values as the result.
Assembly to a given Stream.
Assembly to this Stream.
HdfsSpout.setArchiveDir(String)
Artifact object to String for printing.
ConcurrentMap view of the current entries in the cache.
Assembly interface provides a means to encapsulate logic applied to a Stream.
AsyncExecutor instance.
AsyncExecutor per storm executor.
ObjectResourcesItem.availableResources, this is the average ratio of resource to the total available in the group.
HdfsSpout.setBadFilesDir(String)
KeyValueState whose encoded key and value types are both binary.
CassandraWriterBolt instance.
CassandraWriterBolt instance.
ITuple to a CQL Statement.
ExecutionResultHandler which fails the incoming tuple when a DriverException is thrown.
Operation interface.
KeyValueState.
BatchAsyncResultHandler instance.
CassandraWriterBolt instance.
CassandraWriterBolt instance.
BatchCQLStatementTupleMapper instance.
BlacklistScheduler will assume the supervisor is bad based on bad slots or not.
BoundCQLStatementMapperBuilder instance.
BoundCQLStatementMapperBuilder instance.
BoundCQLStatementTupleMapper instance.
Stream.branch(Predicate[]) to split a stream into multiple branches based on predicates.
CQLStatementTupleMapper instance.
CQLStatementTupleMapper instance.
CQLStatementTupleMapper instance.
StormTopology for the computation expressed via the stream api.
org.apache.storm.Config instance.
SolrFieldsMapper builder class.
SolrFieldsMapper builder class.
SolrJsonMapper builder class.
SolrJsonMapper builder class.
TridentMinMaxOfDevicesTopology.Vehicle and TridentMinMaxOfDevicesTopology.Driver respectively.
TridentMinMaxOfVehiclesTopology.Vehicle and TridentMinMaxOfVehiclesTopology.Driver respectively.
CassandraConf instance.
CassandraConf instance.
CassandraContext instance.
CassandraConf from a Storm topology configuration.
CassandraContext.ClientFactory from a Storm topology configuration.
CassandraWriterBolt instance.
ProcessorContext.
IStatefulComponent across the topology.
CheckpointSpout.
IRichBolt and forwards checkpoint tuples in a stateful topology.
BaseResourceAwareStrategy.schedule(Cluster, TopologyDetails).
HdfsSpout.setClocksInSync(boolean)
Cluster instance.
Cluster instance.
JmsSpout.session and JmsSpout.connection.
WindowState.
HdfsSpout.setCommitFrequencyCount(int)
HdfsSpout.setCommitFrequencySec(int)
Aggregator for comparing two values in a stream.
Testing.completeTopology.
validatorClass(). For every annotation there must be a validator class to do the validation. To add another annotation for config validation, add another @interface class.
ConnectionFactory.
RestClient using the given EsConfig.
ContextQuery.BoundQueryContext implementation to retrieve a bound query identified by the provided key.
ContextQuery.BoundQueryNamedByFieldContext implementation to retrieve a bound query named by the value of a specified tuple field.
ContextQuery interface.
SupervisorWorkerHeartbeat to nimbus to locally report executor heartbeats.
BatchStatement.Type.COUNTER batch statement for the specified CQL statement builders.
TriggerHandler.onTrigger() when the count threshold is hit.
CqlMapper to map all tuple values to columns.
ITuple to a CQL Statement.
Exclusion object.
user(String), deviceId(String), location(String), temperature(float), humidity(float).
BaseWindowedBolt.Duration corresponding to the given value in days.
Debug filter with a string identifier.
source, index, type and id.
JmsSpout.tupleProducer to determine which fields are about to be emitted.
Cluster instance.
DefaultClient instance.
CqlMapper.DefaultCqlMapper instance.
EsLookupResultOutput.
DNSToSwitchMapping interface. It returns the DEFAULT_RACK for every host.
SequenceFormat implementation that uses LongWritable for keys and Text for values.
SequenceFormat implementation that uses LongWritable for keys and Text for values.
ShellLogHandler.
DefaultStateSerializer instance with the given list of classes registered in Kryo.
byteBuffer instance.
Destination (topic or queue) from which the JmsSpout will receive messages.
EvictionPolicy.evict(Event) is invoked.
Math.pow(2,i-1) secs for retry i where i = 2,3,...
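The exponential backoff schedule described above (Math.pow(2,i-1) seconds for retry i, where i = 2,3,...) can be sketched in plain Java. This is an illustration only; the class and method names here are invented for the sketch and are not part of the Storm KafkaSpoutRetryService API.

```java
// Sketch of the exponential backoff schedule described above:
// retry i is delayed by 2^(i-1) seconds (i = 2, 3, ...).
public class BackoffSketch {
    // Illustrative helper; not an actual Storm API method.
    public static long delaySeconds(int retry) {
        return (long) Math.pow(2, retry - 1);
    }

    public static void main(String[] args) {
        // Print the first few retry delays of the schedule.
        for (int i = 2; i <= 5; i++) {
            System.out.println("retry " + i + " -> " + delaySeconds(i) + "s");
        }
    }
}
```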
SqlParserImplFactory implementation for creating a parser.
msgId as failed.
JmsMessageID.
ExecutionResultCollector.FailedCollector instance.
ExecutionResultCollector.FailedCollector instance.
Fields to be declared for output.
FieldSelector instance.
Filter implementation that filters out any tuples that have fields with a value of null.
Assembly implementation.
Iterable of elements as its result.
FlatMapFunction function to the value of each key-value pair in this stream.
ShellBolt implementation that allows you to specify output fields and even streams without having to subclass ShellBolt to do so.
ShellSpout implementation that allows you to specify output fields and even streams without having to subclass ShellSpout to do so.
Tuple.
KafkaTridentSpoutBatchMetadata.toMap().
org.apache.storm.topology.IRichBolt implementation for testing/debugging the Storm JMS Spout and example topologies.
GenericBolt instance.
Assignment for a storm.
Supervisor to fetch assignments at startup.
Supervisor to fetch assignments at startup.
Utils.getAvailablePort(int) with 0 as the preferred port.
EvictionPolicy instance which evicts elements after a count of the given window length.
EvictionPolicy instance which evicts elements after the window duration is reached.
EvictionPolicy instance which evicts elements after a count of the given window length.
EvictionPolicy instance which evicts elements after the given window duration.
EvictionPolicy instance for this strategy with the given configuration.
AsyncExecutor per storm executor.
try (LocalCluster cluster = new LocalCluster.Builder()....build()) {
...
}
KafkaSpoutMessageId for the record on the given topic partition and offset.
OpenTsdbMetricDatapoint for a given tuple.
Config.TOPOLOGY_STATE_PROVIDER or an InMemoryKeyValueState if no provider is configured.
IWorkerHeartbeatsRecoveryStrategy with conf.
nodeId.
host.
TimestampExtractor for extracting timestamps from a tuple for event time based processing, or null for processing time.
TridentTuples from the given tupleEvents.
TriggerPolicy which triggers for every count of the given sliding window.
TriggerPolicy which triggers for every configured sliding window duration.
TriggerPolicy which triggers for every count of the given sliding window.
TriggerPolicy which triggers for every given sliding duration.
TriggerPolicy created with the triggerHandler and evictionPolicy with the given configuration.
PairStream.groupByKeyAndWindow(Window) and PairStream.reduceByKeyAndWindow(Reducer, Window).
GroupingBatchBuilder instance.
ExecutionResultHandler.onQuerySuccess(org.apache.storm.task.OutputCollector, org.apache.storm.tuple.Tuple) before acknowledging a single input tuple.
HBaseKeyValueState.
HBaseKeyValueState.
org.apache.storm.tuple.Tuple object to a row in an HBase table.
HBaseWindowsStore instances.
HdfsSpout.setHdfsUri(String)
org.apache.storm.tuple.Tuple object to a row in a Hive table.
BaseWindowedBolt.Duration corresponding to the given value in hours.
Function that returns the input argument itself as the result.
HdfsSpout.setIgnoreSuffix(String)
KeyValueState.
State.
InMemoryKeyValueState.
ITridentWindowManager instance stores all the tuples and trigger-related information in memory.
WindowsStore which can be backed by a persistent store.
InMemoryWindowsStore which will be used for storing tuples and triggers of the window.
IRichBolt and IRichSpout are the main interfaces to use to implement components of the topology.
IRichBolt and IRichSpout are the main interfaces to use to implement components of the topology.
IStormClusterState.syncRemoteAssignments(Map).
period since the timer was initiated or reset.
OffsetAndMetadata was committed by a KafkaSpout instance in this topology.
KafkaSpoutMessageId is ready to be retried, i.e. is scheduled and has a retry time that is less than the current time.
KafkaSpoutMessageId is scheduled to be retried.
ITuple with OpenTsdbMetricDatapoint.
org.apache.storm.tuple.Tuple objects from a Storm topology and publishes JMS Messages to a destination (topic or queue).
org.apache.storm.tuple.Values instance into a javax.jms.Message object.
JmsProvider object encapsulates the ConnectionFactory and Destination JMS objects the JmsSpout needs to manage a topic/queue connection over the course of its lifecycle.
Spout implementation that listens to a JMS topic or queue and outputs tuples based on the messages it receives.
Values objects from a javax.jms.Message object.
PairStream.join(PairStream) to join multiple streams.
JPmmlModelRunner writing to the default stream.
PmmlModelRunner.
JmsTupleProducer that expects to receive JMS TextMessage objects with a body in JSON format.
ConsumerRecord for an offset is marked as processed, i.e.
KafkaSpoutRetryService using the exponential backoff formula.
KafkaSpoutTopologyMainNamedTopics, but demonstrates subscribing to Kafka topics with a regex.
ISpoutPartition that wraps TopicPartition information.
ProcessBuilder with a given callback.
HdfsSpout.setLockDir(String)
HdfsSpout.setLockTimeoutSec(int)
EventLoggerBolt receives a tuple from the spouts or bolts that has event logging enabled.
BatchStatement.Type.LOGGED batch statement for the specified CQL statement builders.
Stream.map(MapFunction) and Stream.flatMap(FlatMapFunction) functions.
PairFunction on each value of this stream.
Function to the value of each key-value pair in this stream.
comparator with TridentTuples.
HdfsSpout.setMaxOutstanding(int)
inputFieldName and it is assumed that its value is an instance of Comparable.
inputFieldName in a stream by using the given comparator.
comparator with TridentTuples.
inputFieldName and it is assumed that its value is an instance of Comparable.
inputFieldName in a stream by using the given comparator.
ObjectResourcesItem.availableResources, this is the minimum ratio of resource to the total available in the group.
BaseWindowedBolt.Duration corresponding to the given value in minutes.
Testing.withSimulatedTimeCluster, Testing.withTrackedCluster and Testing.withLocalCluster.
TrackedTopology directly.
PMMLPredictorBolt.
Utils.readStormConfig().
ModelRunnerFromBlobStore.
CustomStreamGrouping that uses the Murmur3 algorithm to choose the target task of a tuple.
Murmur3StreamGrouping instance.
Murmur3StreamGrouping instance.
ISchedulingState.needsScheduling(TopologyDetails) but does not take into account the number of workers requested.
Filter implementation that inverts another delegate Filter.
Evaluator object representing the PMML model defined in the PMML argument.
Evaluator object representing the PMML model defined in the XML File specified as argument.
Evaluator object representing the PMML model defined in the XML File specified as argument.
Evaluator object representing the PMML model uploaded to the Blobstore using the blob key specified as argument.
Evaluator object representing the PMML model uploaded to the Blobstore using the blob key specified as argument.
JPmmlModelRunner writing to the default stream.
PMML object representing the PMML model defined in the XML File specified as argument.
PMML object representing the PMML model defined in the InputStream specified as argument.
PMML object representing the PMML model uploaded to the Blobstore with the key specified as argument.
PMML object representing the PMML model uploaded to the Blobstore with the key specified as argument.
Stream of tuples from the given IRichSpout.
Stream of tuples from the given IRichSpout with the given parallelism.
Stream of values from the given IRichSpout by extracting field(s) from tuples via the supplied TupleValueMapper.
Stream of values from the given IRichSpout by extracting field(s) from tuples via the supplied TupleValueMapper with the given parallelism.
PairStream of key-value pairs from the given IRichSpout by extracting key and value from tuples via the supplied PairValueMapper.
PairStream of key-value pairs from the given IRichSpout by extracting key and value from tuples via the supplied PairValueMapper and with the given value of parallelism.
@see org.apache.storm.nimbus.ITopologyActionNotifierPlugin for details.
getConfiguredClient instead of calling this directly.
getConfiguredClientAs instead of calling this directly.
getConfiguredClient instead of calling this directly.
NormalizedResources wrappers that handle memory.
com.datastax.driver.mapping.annotations.Table to CQL statements.
com.datastax.driver.mapping.annotations.Table to CQL statements.
CombinerAggregator based on initial value, accumulator and combiner.
StateUpdater based on an initial value of the state and a state update function.
BaseWindowedBolt.Count of the given value.
BaseWindowedBolt.Duration corresponding to the given value in milliseconds.
TriggerPolicy.
TriggerPolicy.
BaseExecutionResultHandler is not overridden.
EvictionPolicy.
QueryValidationException is thrown.
QueryValidationException is thrown.
ReadTimeoutException is thrown.
ReadTimeoutException is thrown.
TriggerPolicy condition is satisfied.
UnavailableException is thrown.
UnavailableException is thrown.
WriteTimeoutException is thrown.
WriteTimeoutException is thrown.
collector and initializes pending.
ISpout implementation.
State implementation for OpenTSDB.
StateFactory implementation for OpenTSDB.
StateUpdater implementation for OpenTSDB.
Filters and Functions.
PairBatchStatementTuples instance.
PairStatementTuple instance.
ValueJoiner that joins two values to produce a Pair of the two values as the result.
PairValueMapper that constructs a pair from a tuple based on the key and value index.
Dependency object.
RemoteRepository object.
Pattern.
permit() method is invoked for each incoming Thrift request.
IStatefulWindowedBolt and handles the execution.
IStatefulWindowedBolt with window persistence.
PMMLPredictorBolt that executes, for every tuple, the runner constructed with the ModelRunnerFactory specified in the parameter. The PMMLPredictorBolt instantiated with this constructor declares the output fields as specified by the ModelOutputs parameter.
IBolt.prepare(Map, TopologyContext, OutputCollector) except that while emitting, the tuples are automatically anchored to the tuples in the inputWindow.
Operation is first initialized.
org.apache.storm.trident.planner.TridentProcessor's prepare method.
WindowState for commit.
BaseResourceAwareStrategy.schedule(Cluster, TopologyDetails).
Processor.
ProcessSimulator keeps track of Shutdownable objects in place of actual processes (in cluster mode).
keepFields.
maxNumber value as maximum with the given fields.
HdfsSpout.setReaderType(String)
ConsumerRecord to a tuple.
RedisKeyValueState.
RedisKeyValueState.
Reducer performs an operation on two values of the same type producing a result of the same type.
WindowPartitionCache.RemovalListener to be invoked when entries are evicted.
isReady.
Schema object from the schema returned by the SchemaRequest.
TopicPartition belongs to the specified Collection<TopicPartition>.
"{:a 1 :b 1 :c 2} -> {1 [:a :b] 2 :c}".
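The "{:a 1 :b 1 :c 2} -> {1 [:a :b] 2 :c}" entry above describes reversing a map by grouping its keys by value. A self-contained sketch of that transformation; the class and helper name here are illustrative and not the actual Storm utility implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReverseMapSketch {
    // Groups the keys of the input map by value, e.g.
    // {a=1, b=1, c=2} -> {1=[a, b], 2=[c]}.
    public static <K, V> Map<V, List<K>> reverseMap(Map<K, V> map) {
        Map<V, List<K>> result = new HashMap<>();
        for (Map.Entry<K, V> e : map.entrySet()) {
            result.computeIfAbsent(e.getValue(), v -> new ArrayList<>()).add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the grouped keys come out in order.
        Map<String, Integer> in = new LinkedHashMap<>();
        in.put("a", 1);
        in.put("b", 1);
        in.put("c", 2);
        System.out.println(reverseMap(in));
    }
}
```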
RexNode) to Java source code String.
WindowState.
RoutingKeyGenerator instance.
KafkaSpoutMessageId if not yet scheduled, or updates the retry time if it has already been scheduled.
BaseWindowedBolt.Duration corresponding to the given value in seconds.
CqlMapper.DefaultCqlMapper instance.
Tuple objects to HDFS sequence file key-value pairs.
TridentTuple objects to HDFS sequence file key-value pairs.
Utils to delegate meta serialization.
ByteBuffer backed by the byte array returned by Kryo serialization.
IStormClusterState.isAssignmentsBackendSynchronized().
bolts:
  - className: org.apache.storm.flux.wrappers.bolts.FluxShellBolt
    id: my_bolt
    constructorArgs:
      - [python, my_bolt.py]
    configMethods:
      - name: setDefaultStream
        args:
          - [word, count]

spouts:
  - className: org.apache.storm.flux.wrappers.spouts.FluxShellSpout
    id: my_spout
    constructorArgs:
      - [python, my_spout.py]
    configMethods:
      - name: setDefaultStream
        args:
          - [word, count]
JmsSpout.jmsProvider.
JmsTupleProducer implementation that will convert javax.jms.Message objects to org.apache.storm.tuple.Values objects to be emitted.
SolrRequest requests in JSON format.
bolts:
  - className: org.apache.storm.flux.wrappers.bolts.FluxShellBolt
    id: my_bolt
    constructorArgs:
      - [python, my_bolt.py]
    configMethods:
      - name: setNamedStream
        args:
          - first
          - [word, count]

spouts:
  - className: org.apache.storm.flux.wrappers.spouts.FluxShellSpout
    id: my_spout
    constructorArgs:
      - [python, my_spout.py]
    configMethods:
      - name: setNamedStream
        args:
          - first
          - [word, count]
KafkaConsumer property.
KafkaConsumer properties.
KafkaConsumer properties.
KafkaSpoutConfig.ProcessingGuarantee other than KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE.
ShellProcess so it can output the process info string later.
ShellLogHandler.log(org.apache.storm.multilang.ShellMsg) for each spout and bolt.
ShellUtils implementation.
AutoCloseable.close() instead.
AutoCloseable.close() instead.
org.apache.storm.trident.planner.TridentProcessor's cleanup method.
SimpleCQLStatementMapper instance.
SimpleCQLStatementMapper instance.
SimpleCQLStatementMapper instance.
SimpleCQLStatementMapperBuilder instance.
SimpleCQLStatementMapperBuilder instance.
SingleAsyncResultHandler instance.
slidingCountWindow configuration.
slidingCountWindow configuration.
windowCount of tuples and slides the window after slideCount.
slidingInterval and completes a window at windowDuration.
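The sliding count window semantics described above (a window of windowCount tuples that slides after every slideCount tuples) can be sketched in plain Java. This is a standalone illustration of the semantics, not Storm's actual windowing implementation; the class and method names are invented for the sketch.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SlidingCountWindowSketch {
    // Emits a window of the last `windowCount` elements every `slideCount` additions,
    // mirroring the sliding count window semantics described above.
    private final int windowCount;
    private final int slideCount;
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private int sinceLastSlide = 0;

    public SlidingCountWindowSketch(int windowCount, int slideCount) {
        this.windowCount = windowCount;
        this.slideCount = slideCount;
    }

    // Returns the current window when it slides, or null otherwise.
    public List<Integer> add(int value) {
        buffer.addLast(value);
        while (buffer.size() > windowCount) {
            buffer.removeFirst(); // evict the oldest element beyond the window length
        }
        if (++sinceLastSlide == slideCount) {
            sinceLastSlide = 0;
            return new ArrayList<>(buffer);
        }
        return null;
    }

    public static void main(String[] args) {
        SlidingCountWindowSketch w = new SlidingCountWindowSketch(3, 2);
        for (int i = 1; i <= 6; i++) {
            List<Integer> window = w.add(i);
            if (window != null) {
                System.out.println(window); // [1, 2] then [2, 3, 4] then [4, 5, 6]
            }
        }
    }
}
```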
IWindowedBolt to calculate sliding window sum.
NodeSorter.sortAllNodes() which eventually calls this method, whose behavior can be altered by setting NodeSorter.nodeSortType.
NodeSorterHostProximity.sortAllNodes() which eventually calls this method, whose behavior can be altered by setting NodeSorterHostProximity.nodeSortType.
HdfsSpout.setSourceDir(String)
IRichSpout.
JmsProvider that uses the Spring framework to obtain JMS ConnectionFactory and Destination objects.
SpringJmsProvider object given the name of a classpath resource (the Spring application context file), and the bean names of a JMS connection factory and destination.
broker and starts it.
connection.
IStatefulBolt) or managed by the individual components themselves.
State instances.
IStatefulBolt and manages the state of the bolt.
IStatefulBolt to manage state.
IStatefulWindowedBolt and handles the execution.
IStatefulWindowedBolt to save the state of the windowing operation to avoid re-computation in case of failures.
StatefulWindowManager.
PairStream.updateStateByKey(StateUpdater) to save the counts in a key-value state.
StateFactory to create new state instances.
Stream.stateQuery(StreamState) to query the state.
ContextQuery.StaticContextQuery instance.
WindowsStore for storing tuples and other trigger-related information.
Config.TOPOLOGY_METRICS_REPORTERS instead.
ISubmitterHook. @see ISubmitterHook for details.
StormTopology via the storm streams api (DSL).
StreamBuilder.
PMMLPredictorBolt.
Stream.stateQuery(StreamState).
ISubmitterHook could not be initialized or invoked.
ExecutionResultCollector instance.
ExecutionResultCollector instance.
LeaderListenerCallback.
Tuple for use with testing.
Tuple for use with testing.
TriggerHandler.onTrigger() after the duration.
ModelOutputs that declares the predicted and output fields specified in the PMML model specified as argument into the default stream.
org.apache.storm.tuple.Tuple object to a javax.jms.Message object.
Config.TOPOLOGY_ERROR_THROTTLE_INTERVAL_SECS.
ShellLogHandler to handle output from non-JVM processes, e.g.
IStatefulBolt bolts are involved.
StateProvider implementation.
StateProvider implementation.
TopologyContext is given to bolts and spouts in their prepare() and open() methods, respectively.
StormTopology objects.
ModelOutputs that declares the predicted and output fields specified in the PMML model specified as argument into the list of streams specified.
Rankings.
Values.
Values.
MqttMessage to a set of Values that can be emitted as a Storm Tuple.
EvictionPolicy.evict(Event) should evict it or not.
org.apache.storm.trident.tuple.TridentTuple object to a row in an HBase table.
HBaseWindowsStoreFactory's store for storing tuples in a window.
TridentKafkaClientTopologyWildcardTopics, but demonstrates subscribing to Kafka topics with a regex.
Stream.map(MapFunction) and Stream.flatMap(FlatMapFunction) functions.
Stream.minBy(String) and Stream.maxBy(String) operations on a trident Stream.
Stream.minBy(String, Comparator), Stream.min(Comparator), Stream.maxBy(String, Comparator) and Stream.max(Comparator) operations on a trident Stream.
TriggerPolicy when the trigger condition is satisfied.
slidingCountWindow configuration.
slidingCountWindow configuration.
windowCount of tuples.
windowDuration.
TimestampExtractor that extracts the timestamp from a specific field in the tuple.
ITuple to OpenTsdbMetricDatapoint.
Tuple to typed values.
Tuple based on indices.
TupleValueMappers.
BatchStatement.Type.UNLOGGED batch statement for the specified CQL statement builders.
ValueMapper that extracts the value at index 'i' from a tuple.
ValuesMapper that extracts values from a Tuple at specified indices.
InterruptedException.
TriggerHandler.onTrigger() for each window interval that has events to be processed up to the watermark ts.
Stream.
IWindowedBolt wrapper that does the windowing of tuples.
WindowManager.
WindowLifecycleListener callbacks on expiry of events or activation of the window due to TriggerPolicy.
WindowManager.
WindowState.WindowPartition.
WindowPartitionCache.
State implementation for windowing operation.
StateFactory instance for creating WindowsState instances.
StateUpdater<WindowState> instance which removes successfully emitted triggers from the store.
putAll method.
WindowsStore.
TridentProcessor implementation for windowing operations on a trident stream.
config for creating a cassandra client.
JmsProvider implementation that this Spout will use to connect to a JMS javax.jms.Destination.
try (LocalCluster cluster = new LocalCluster()) {
...
}
try (LocalCluster cluster = new LocalCluster.Builder()....build()) {
...
}
fields as output fields with stream id as Utils.DEFAULT_STREAM_ID.
BaseStatefulWindowedBolt.withMaxEventsInMemory(long).
try (Time.SimulatedTime time = new Time.SimulatedTime()) {
...
}
try (LocalCluster cluster = new LocalCluster.Builder().withSimulatedTime().build()) {
...
}
try (LocalCluster cluster = new LocalCluster.Builder().withSimulatedTime()....build()) {
...
}
fields as output fields for the given stream.
try (LocalCluster cluster = new LocalCluster.Builder().withTracked().build()) {
...
}
try (LocalCluster cluster = new LocalCluster.Builder().withTracked()....build()) {
...
}
JmsTupleProducer implementation that will convert javax.jms.Message objects to backtype.storm.tuple.Values objects to be emitted.
Config.TOPOLOGY_WORKER_TIMEOUT_SECS.
WorkerCtx instance.

Copyright © 2022 The Apache Software Foundation. All rights reserved.