class KStream[K, V] extends AnyRef

Wraps the Java class KStream and delegates method calls to the underlying Java object.

K

Type of keys

V

Type of values

See also

org.apache.kafka.streams.kstream.KStream

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new KStream(inner: kstream.KStream[K, V])

    inner

    The underlying Java abstraction for KStream

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. def filter(predicate: (K, V) => Boolean, named: Named): KStream[K, V]

    Create a new KStream that consists of all records of this stream which satisfy the given predicate.

    predicate

    a filter that is applied to each record

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains only those records that satisfy the given predicate

    See also

    org.apache.kafka.streams.kstream.KStream#filter

  9. def filter(predicate: (K, V) => Boolean): KStream[K, V]

    Create a new KStream that consists of all records of this stream which satisfy the given predicate.

    predicate

    a filter that is applied to each record

    returns

    a KStream that contains only those records that satisfy the given predicate

    See also

    org.apache.kafka.streams.kstream.KStream#filter
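
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // keep only records whose value is non-empty
    val nonEmpty: KStream[String, String] = textLines.filter((_, v) => v.nonEmpty)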

  10. def filterNot(predicate: (K, V) => Boolean, named: Named): KStream[K, V]

    Create a new KStream that consists of all records of this stream which do not satisfy the given predicate.

    predicate

    a filter that is applied to each record

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains only those records that do not satisfy the given predicate

    See also

    org.apache.kafka.streams.kstream.KStream#filterNot

  11. def filterNot(predicate: (K, V) => Boolean): KStream[K, V]

    Create a new KStream that consists of all records of this stream which do not satisfy the given predicate.

    predicate

    a filter that is applied to each record

    returns

    a KStream that contains only those records that do not satisfy the given predicate

    See also

    org.apache.kafka.streams.kstream.KStream#filterNot
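
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // drop records whose value starts with a comment marker
    val withoutComments: KStream[String, String] = textLines.filterNot((_, v) => v.startsWith("#"))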

  12. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  13. def flatMap[KR, VR](mapper: (K, V) => Iterable[(KR, VR)], named: Named): KStream[KR, VR]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily).

    The provided mapper, a function (K, V) => Iterable[(KR, VR)], is applied to each input record and computes zero or more output records.

    mapper

    function (K, V) => Iterable[(KR, VR)] that computes the new output records

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#flatMap

  14. def flatMap[KR, VR](mapper: (K, V) => Iterable[(KR, VR)]): KStream[KR, VR]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily).

    The provided mapper, a function (K, V) => Iterable[(KR, VR)], is applied to each input record and computes zero or more output records.

    mapper

    function (K, V) => Iterable[(KR, VR)] that computes the new output records

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#flatMap
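
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // split each line into words, re-keying each output record by the word itself
    val words: KStream[String, String] =
      textLines.flatMap((_, line) => line.split("\\s+").toList.map(w => (w, w)))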

  15. def flatMapValues[VR](mapper: (K, V) => Iterable[VR], named: Named): KStream[K, VR]

    Create a new KStream by transforming the value of each record in this stream into zero or more values with the same key in the new stream.

    Transform the value of each input record into zero or more records with the same (unmodified) key in the output stream (value type can be altered arbitrarily). The provided mapper, a function (K, V) => Iterable[VR], is applied to each input record and computes zero or more output values.

    mapper

    a function (K, V) => Iterable[VR] that computes the new output values

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains more or less records with unmodified keys and new values of different type

    See also

    org.apache.kafka.streams.kstream.KStream#flatMapValues

  16. def flatMapValues[VR](mapper: (K, V) => Iterable[VR]): KStream[K, VR]

    Create a new KStream by transforming the value of each record in this stream into zero or more values with the same key in the new stream.

    Transform the value of each input record into zero or more records with the same (unmodified) key in the output stream (value type can be altered arbitrarily). The provided mapper, a function (K, V) => Iterable[VR], is applied to each input record and computes zero or more output values.

    mapper

    a function (K, V) => Iterable[VR] that computes the new output values

    returns

    a KStream that contains more or less records with unmodified keys and new values of different type

    See also

    org.apache.kafka.streams.kstream.KStream#flatMapValues

  17. def flatMapValues[VR](mapper: (V) => Iterable[VR], named: Named): KStream[K, VR]

    Create a new KStream by transforming the value of each record in this stream into zero or more values with the same key in the new stream.

    Transform the value of each input record into zero or more records with the same (unmodified) key in the output stream (value type can be altered arbitrarily). The provided mapper, a function V => Iterable[VR], is applied to each input record and computes zero or more output values.

    mapper

    a function V => Iterable[VR] that computes the new output values

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains more or less records with unmodified keys and new values of different type

    See also

    org.apache.kafka.streams.kstream.KStream#flatMapValues

  18. def flatMapValues[VR](mapper: (V) => Iterable[VR]): KStream[K, VR]

    Create a new KStream by transforming the value of each record in this stream into zero or more values with the same key in the new stream.

    Transform the value of each input record into zero or more records with the same (unmodified) key in the output stream (value type can be altered arbitrarily). The provided mapper, a function V => Iterable[VR], is applied to each input record and computes zero or more output values.

    mapper

    a function V => Iterable[VR] that computes the new output values

    returns

    a KStream that contains more or less records with unmodified keys and new values of different type

    See also

    org.apache.kafka.streams.kstream.KStream#flatMapValues
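
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // split each value into lowercase words, keeping the original key
    val words: KStream[String, String] =
      textLines.flatMapValues(line => line.toLowerCase.split("\\W+").toList)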

  19. def flatTransform[K1, V1](transformerSupplier: TransformerSupplier[K, V, Iterable[KeyValue[K1, V1]]], named: Named, stateStoreNames: String*): KStream[K1, V1]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily). A Transformer (provided by the given TransformerSupplier) is applied to each input record and computes zero or more output records. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Transformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    transformerSupplier

    the TransformerSupplier that generates a Transformer

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transform

  20. def flatTransform[K1, V1](transformerSupplier: TransformerSupplier[K, V, Iterable[KeyValue[K1, V1]]], stateStoreNames: String*): KStream[K1, V1]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily). A Transformer (provided by the given TransformerSupplier) is applied to each input record and computes zero or more output records. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Transformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    transformerSupplier

    the TransformerSupplier that generates a Transformer

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transform

  21. def flatTransformValues[VR](valueTransformerSupplier: ValueTransformerWithKeySupplier[K, V, Iterable[VR]], named: Named, stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into zero or more records (with possibly a new type) in the output stream. A ValueTransformerWithKey (provided by the given ValueTransformerWithKeySupplier) is applied to each input record value and computes zero or more new values for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformerWithKey. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  22. def flatTransformValues[VR](valueTransformerSupplier: ValueTransformerWithKeySupplier[K, V, Iterable[VR]], stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into zero or more records (with possibly a new type) in the output stream. A ValueTransformerWithKey (provided by the given ValueTransformerWithKeySupplier) is applied to each input record value and computes zero or more new values for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformerWithKey. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  23. def flatTransformValues[VR](valueTransformerSupplier: ValueTransformerSupplier[V, Iterable[VR]], named: Named, stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into zero or more records (with possibly a new type) in the output stream. A ValueTransformer (provided by the given ValueTransformerSupplier) is applied to each input record value and computes zero or more new values for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerSupplier that generates a ValueTransformer

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  24. def flatTransformValues[VR](valueTransformerSupplier: ValueTransformerSupplier[V, Iterable[VR]], stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into zero or more records (with possibly a new type) in the output stream. A ValueTransformer (provided by the given ValueTransformerSupplier) is applied to each input record value and computes zero or more new values for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerSupplier that generates a ValueTransformer

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  25. def foreach(action: (K, V) => Unit, named: Named): Unit

    Perform an action on each record of this KStream.

    action

    an action to perform on each record

    named

    a Named config used to name the processor in the topology

    See also

    org.apache.kafka.streams.kstream.KStream#foreach

  26. def foreach(action: (K, V) => Unit): Unit

    Perform an action on each record of this KStream.

    action

    an action to perform on each record

    See also

    org.apache.kafka.streams.kstream.KStream#foreach
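
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // foreach is a terminal operation: it returns Unit and ends this branch of the topology
    textLines.foreach((k, v) => println(s"$k -> $v"))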

  27. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  28. def groupBy[KR](selector: (K, V) => KR)(implicit grouped: Grouped[KR, V]): KGroupedStream[KR, V]

    Group the records of this KStream on a new key that is selected using the provided key transformation function and the Grouped instance.

    The user can either supply the Grouped instance as an implicit in scope, or provide implicit serdes that will be converted to a Grouped instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    val textLines = streamBuilder.stream[String, String](inputTopic)
    
    val pattern = Pattern.compile("\\W+", Pattern.UNICODE_CHARACTER_CLASS)
    
    val wordCounts: KTable[String, Long] =
      textLines.flatMapValues(v => pattern.split(v.toLowerCase))
    
        // the groupBy gets the Grouped instance through an implicit conversion of the
        // serdes brought into scope through the import Serdes._ above
        .groupBy((k, v) => v)
    
        .count()
    selector

    a function that computes a new key for grouping

    returns

    a KGroupedStream that contains the grouped records of the original KStream

    See also

    org.apache.kafka.streams.kstream.KStream#groupBy

  29. def groupByKey(implicit grouped: Grouped[K, V]): KGroupedStream[K, V]

    Group the records by their current key into a KGroupedStream.

    The user can either supply the Grouped instance as an implicit in scope, or provide implicit serdes that will be converted to a Grouped instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    val clicksPerRegion: KTable[String, Long] =
      userClicksStream
        .leftJoin(userRegionsTable)((clicks: Long, region: String) => (if (region == null) "UNKNOWN" else region, clicks))
        .map((_, regionWithClicks) => regionWithClicks)
    
        // the groupByKey gets the Grouped instance through an implicit conversion of the
        // serdes brought into scope through the import Serdes._ above
        .groupByKey
        .reduce(_ + _)
    
    // Similarly you can create an implicit Grouped and it will be passed implicitly
    // to the groupByKey call
    grouped

    the instance of Grouped that gives the serdes

    returns

    a KGroupedStream that contains the grouped records of the original KStream

    See also

    org.apache.kafka.streams.kstream.KStream#groupByKey

  30. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  31. val inner: kstream.KStream[K, V]
  32. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  33. def join[GK, GV, RV](globalKTable: GlobalKTable[GK, GV], named: Named)(keyValueMapper: (K, V) => GK, joiner: (V, GV) => RV): KStream[K, RV]

    Join records of this stream with GlobalKTable's records using non-windowed inner equi join.

    globalKTable

    the GlobalKTable to be joined with this stream

    named

    a Named config used to name the processor in the topology

    keyValueMapper

    a function used to map from the (key, value) of this stream to the key of the GlobalKTable

    joiner

    a function that computes the join result for a pair of matching records

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one output for each input KStream record

    See also

    org.apache.kafka.streams.kstream.KStream#join

  34. def join[GK, GV, RV](globalKTable: GlobalKTable[GK, GV])(keyValueMapper: (K, V) => GK, joiner: (V, GV) => RV): KStream[K, RV]

    Join records of this stream with GlobalKTable's records using non-windowed inner equi join.

    globalKTable

    the GlobalKTable to be joined with this stream

    keyValueMapper

    a function used to map from the (key, value) of this stream to the key of the GlobalKTable

    joiner

    a function that computes the join result for a pair of matching records

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one output for each input KStream record

    See also

    org.apache.kafka.streams.kstream.KStream#join
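
    Example (a minimal sketch; orders, customers, and the case classes are hypothetical):

    case class Order(customerId: String, amount: Double)
    case class Customer(name: String)
    case class EnrichedOrder(order: Order, customer: Customer)

    val orders: KStream[String, Order] = ???
    val customers: GlobalKTable[String, Customer] = ???

    val enriched: KStream[String, EnrichedOrder] =
      orders.join(customers)(
        (_, order) => order.customerId,                     // derive the GlobalKTable lookup key
        (order, customer) => EnrichedOrder(order, customer) // combine the matching records
      )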

  35. def join[VT, VR](table: KTable[K, VT])(joiner: (V, VT) => VR)(implicit joined: Joined[K, V, VT]): KStream[K, VR]

    Join records of this stream with another KTable's records using inner equi join with serializers and deserializers supplied by the implicit Joined instance.

    table

    the KTable to be joined with this stream

    joiner

    a function that computes the join result for a pair of matching records

    joined

    an implicit Joined instance that defines the serdes to be used to serialize/deserialize the inputs and outputs of the joined streams. Instead of Joined, the user can also supply the key serde, the value serde and the other value serde in implicit scope, and they will be converted to a Joined instance through implicit conversion

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one for each matched record-pair with the same key

    See also

    org.apache.kafka.streams.kstream.KStream#join
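
    Example (a minimal sketch; userClicks and userRegions are hypothetical, and the implicit Joined is derived from the serdes brought in scope by the two imports):

    import org.apache.kafka.streams.scala.ImplicitConversions._
    import Serdes._

    val userClicks: KStream[String, Long] = ???
    val userRegions: KTable[String, String] = ???

    val clicksWithRegion: KStream[String, (Long, String)] =
      userClicks.join(userRegions)((clicks, region) => (clicks, region))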

  36. def join[VO, VR](otherStream: KStream[K, VO])(joiner: (V, VO) => VR, windows: JoinWindows)(implicit streamJoin: StreamJoined[K, V, VO]): KStream[K, VR]

    Join records of this stream with another KStream's records using windowed inner equi join with serializers and deserializers supplied by the implicit StreamJoined instance.

    otherStream

    the KStream to be joined with this stream

    joiner

    a function that computes the join result for a pair of matching records

    windows

    the specification of the JoinWindows

    streamJoin

    an implicit StreamJoined instance that defines the serdes to be used to serialize/deserialize the inputs and outputs of the joined streams. Instead of StreamJoined, the user can also supply the key serde, the value serde and the other value serde in implicit scope, and they will be converted to a StreamJoined instance through implicit conversion. The StreamJoined instance can also name the repartition topic (if required), the state stores for the join, and the join processor node.

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one for each matched record-pair with the same key and within the joining window intervals

    See also

    org.apache.kafka.streams.kstream.KStream#join
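
    Example (a minimal sketch; impressions and clicks are hypothetical KStream[String, String]s, and the implicit StreamJoined is derived from the serdes brought in scope by the two imports):

    import org.apache.kafka.streams.scala.ImplicitConversions._
    import Serdes._
    import java.time.Duration
    import org.apache.kafka.streams.kstream.JoinWindows

    // pair ad impressions with clicks on the same key that arrive within five minutes
    val impressionsWithClicks: KStream[String, (String, String)] =
      impressions.join(clicks)(
        (impression, click) => (impression, click),
        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5))
      )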

  37. def leftJoin[GK, GV, RV](globalKTable: GlobalKTable[GK, GV], named: Named)(keyValueMapper: (K, V) => GK, joiner: (V, GV) => RV): KStream[K, RV]

    Join records of this stream with GlobalKTable's records using non-windowed left equi join.

    globalKTable

    the GlobalKTable to be joined with this stream

    named

    a Named config used to name the processor in the topology

    keyValueMapper

    a function used to map from the (key, value) of this stream to the key of the GlobalKTable

    joiner

    a function that computes the join result for a pair of matching records

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one output for each input KStream record

    See also

    org.apache.kafka.streams.kstream.KStream#leftJoin

  38. def leftJoin[GK, GV, RV](globalKTable: GlobalKTable[GK, GV])(keyValueMapper: (K, V) => GK, joiner: (V, GV) => RV): KStream[K, RV]

    Join records of this stream with GlobalKTable's records using non-windowed left equi join.

    globalKTable

    the GlobalKTable to be joined with this stream

    keyValueMapper

    a function used to map from the (key, value) of this stream to the key of the GlobalKTable

    joiner

    a function that computes the join result for a pair of matching records

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one output for each input KStream record

    See also

    org.apache.kafka.streams.kstream.KStream#leftJoin

  39. def leftJoin[VT, VR](table: KTable[K, VT])(joiner: (V, VT) => VR)(implicit joined: Joined[K, V, VT]): KStream[K, VR]

    Join records of this stream with another KTable's records using left equi join with serializers and deserializers supplied by the implicit Joined instance.

    table

    the KTable to be joined with this stream

    joiner

    a function that computes the join result for a pair of matching records

    joined

    an implicit Joined instance that defines the serdes to be used to serialize/deserialize the inputs and outputs of the joined streams. Instead of Joined, the user can also supply the key serde, the value serde and the other value serde in implicit scope, and they will be converted to a Joined instance through implicit conversion

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one for each matched record-pair with the same key

    See also

    org.apache.kafka.streams.kstream.KStream#leftJoin

  40. def leftJoin[VO, VR](otherStream: KStream[K, VO])(joiner: (V, VO) => VR, windows: JoinWindows)(implicit streamJoin: StreamJoined[K, V, VO]): KStream[K, VR]

    Join records of this stream with another KStream's records using windowed left equi join with serializers and deserializers supplied by the implicit StreamJoined instance.

    otherStream

    the KStream to be joined with this stream

    joiner

    a function that computes the join result for a pair of matching records

    windows

    the specification of the JoinWindows

    streamJoin

    an implicit StreamJoined instance that defines the serdes to be used to serialize/deserialize the inputs and outputs of the joined streams. Instead of StreamJoined, the user can also supply the key serde, the value serde and the other value serde in implicit scope, and they will be converted to a StreamJoined instance through implicit conversion. The StreamJoined instance can also name the repartition topic (if required), the state stores for the join, and the join processor node.

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one for each matched record-pair with the same key and within the joining window intervals

    See also

    org.apache.kafka.streams.kstream.KStream#leftJoin

  41. def map[KR, VR](mapper: (K, V) => (KR, VR), named: Named): KStream[KR, VR]

    Transform each record of the input stream into a new record in the output stream (both key and value type can be altered arbitrarily).

    The provided mapper, a function (K, V) => (KR, VR), is applied to each input record and computes a new output record.

    mapper

    a function (K, V) => (KR, VR) that computes a new output record

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains records with new key and value (possibly both of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#map

  42. def map[KR, VR](mapper: (K, V) => (KR, VR)): KStream[KR, VR]

    Transform each record of the input stream into a new record in the output stream (both key and value type can be altered arbitrarily).

    The provided mapper, a function (K, V) => (KR, VR), is applied to each input record and computes a new output record.

    mapper

    a function (K, V) => (KR, VR) that computes a new output record

    returns

    a KStream that contains records with new key and value (possibly both of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#map
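
    Example (a minimal sketch; namesToIds is a hypothetical KStream[String, Long]):

    // swap key and value, changing both types; downstream grouping will trigger repartitioning
    val idsToNames: KStream[Long, String] = namesToIds.map((name, id) => (id, name))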

  43. def mapValues[VR](mapper: (K, V) => VR, named: Named): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record.

    The provided mapper, a function (K, V) => VR, is applied to each input record value and computes a new value for it.

    mapper

    a function (K, V) => VR that computes a new output value

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#mapValues

  44. def mapValues[VR](mapper: (K, V) => VR): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record.

    The provided mapper, a function (K, V) => VR, is applied to each input record value and computes a new value for it.

    mapper

    a function (K, V) => VR that computes a new output value

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#mapValues

  45. def mapValues[VR](mapper: (V) => VR, named: Named): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record.

    The provided mapper, a function V => VR, is applied to each input record value and computes a new value for it.

    mapper

    a function V => VR that computes a new output value

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#mapValues

  46. def mapValues[VR](mapper: (V) => VR): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record.

    The provided mapper, a function V => VR, is applied to each input record value and computes a new value for it.

    mapper

    a function V => VR that computes a new output value

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#mapValues
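
    Example (a minimal sketch; textLines is a hypothetical KStream[String, String]):

    // replace each value with its length; keys are untouched, so no repartitioning is needed
    val lineLengths: KStream[String, Int] = textLines.mapValues(_.length)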

  47. def merge(stream: KStream[K, V], named: Named): KStream[K, V]

    Merge this stream and the given stream into one larger stream.

    There is no ordering guarantee between records from this KStream and records from the provided KStream in the merged stream. Relative order is preserved within each input stream, though (i.e., records within one input stream are processed in order).

    stream

    a stream which is to be merged into this stream

    named

    a Named config used to name the processor in the topology

    returns

    a merged stream containing all records from this and the provided KStream

    See also

    org.apache.kafka.streams.kstream.KStream#merge

  48. def merge(stream: KStream[K, V]): KStream[K, V]

    Merge this stream and the given stream into one larger stream.

    There is no ordering guarantee between records from this KStream and records from the provided KStream in the merged stream. Relative order is preserved within each input stream, though (i.e., records within one input stream are processed in order).

    stream

    a stream which is to be merged into this stream

    returns

    a merged stream containing all records from this and the provided KStream

    See also

    org.apache.kafka.streams.kstream.KStream#merge
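
    Example (a minimal sketch; clickEvents and viewEvents are hypothetical streams of the same type):

    // combine two independently sourced streams into one
    val allEvents: KStream[String, String] = clickEvents.merge(viewEvents)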

  49. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  50. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  51. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  52. def outerJoin[VO, VR](otherStream: KStream[K, VO])(joiner: (V, VO) => VR, windows: JoinWindows)(implicit streamJoin: StreamJoined[K, V, VO]): KStream[K, VR]

    Join records of this stream with another KStream's records using windowed outer equi join with serializers and deserializers supplied by the implicit StreamJoined instance.

    otherStream

    the KStream to be joined with this stream

    joiner

    a function that computes the join result for a pair of matching records

    windows

    the specification of the JoinWindows

    streamJoin

    an implicit StreamJoined instance that defines the serdes to be used to serialize/deserialize the inputs and outputs of the joined streams. Instead of StreamJoined, the user can also supply the key serde, the value serde and the other value serde in implicit scope, and they will be converted to a StreamJoined instance through implicit conversion. The StreamJoined instance can also name the repartition topic (if required), the state stores for the join, and the join processor node.

    returns

    a KStream that contains join-records for each key and values computed by the given joiner, one for each matched record-pair with the same key and within the joining window intervals

    See also

    org.apache.kafka.streams.kstream.KStream#outerJoin

  53. def peek(action: (K, V) => Unit, named: Named): KStream[K, V]

    Perform an action on each record of this KStream.

    Peek is a non-terminal operation that triggers a side effect (such as logging or statistics collection) and returns an unchanged stream.

    action

    an action to perform on each record

    named

    a Named config used to name the processor in the topology

    See also

    org.apache.kafka.streams.kstream.KStream#peek

  54. def peek(action: (K, V) => Unit): KStream[K, V]

    Perform an action on each record of this KStream.

    Peek is a non-terminal operation that triggers a side effect (such as logging or statistics collection) and returns an unchanged stream.

    action

    an action to perform on each record

    See also

    org.apache.kafka.streams.kstream.KStream#peek
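
    Example (a minimal sketch; events is a hypothetical KStream[String, String]):

    // observe records in flight without modifying the stream
    val observed: KStream[String, String] = events.peek((k, v) => println(s"saw $k -> $v"))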

  55. def print(printed: Printed[K, V]): Unit

    Print the records of this KStream using the options provided by Printed.

    printed

    options for printing

    See also

    org.apache.kafka.streams.kstream.KStream#print

  56. def process(processorSupplier: ProcessorSupplier[K, V, Void, Void], named: Named, stateStoreNames: String*): Unit

    Process all records in this stream, one record at a time, by applying a Processor (provided by the given processorSupplier). In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Processor. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    Note that this overload takes a ProcessorSupplier instead of a Function to avoid post-erasure ambiguity with the older (deprecated) overload.

    processorSupplier

    a supplier for org.apache.kafka.streams.processor.api.Processor

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    See also

    org.apache.kafka.streams.kstream.KStream#process

  57. def process(processorSupplier: ProcessorSupplier[K, V, Void, Void], stateStoreNames: String*): Unit

    Process all records in this stream, one record at a time, by applying a Processor (provided by the given processorSupplier). In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Processor. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    Note that this overload takes a ProcessorSupplier instead of a Function to avoid post-erasure ambiguity with the older (deprecated) overload.

    processorSupplier

    a supplier for org.apache.kafka.streams.processor.api.Processor

    stateStoreNames

    the names of the state stores used by the processor

    See also

    org.apache.kafka.streams.kstream.KStream#process
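
    Example (a minimal sketch; events is a hypothetical KStream[String, String]):

    import org.apache.kafka.streams.processor.api.{Processor, ProcessorSupplier, Record}

    // a side-effect-only processor; the output types are Void because process is terminal
    val printSupplier: ProcessorSupplier[String, String, Void, Void] =
      new ProcessorSupplier[String, String, Void, Void] {
        override def get(): Processor[String, String, Void, Void] =
          new Processor[String, String, Void, Void] {
            override def process(record: Record[String, String]): Unit =
              println(s"${record.key} -> ${record.value}")
          }
      }

    events.process(printSupplier)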

  58. def repartition(implicit repartitioned: Repartitioned[K, V]): KStream[K, V]

    Materialize this stream to a topic and create a new KStream from that topic, using the Repartitioned instance to configure the key serde, the value serde, the StreamPartitioner, the number of partitions, and part of the topic name.

    The created topic is considered an internal topic and is meant to be used only by the current Kafka Streams instance. Similar to auto-repartitioning, the topic will be created with infinite retention time and data will be automatically purged by Kafka Streams. The topic will be named "${applicationId}-<name>-repartition", where "applicationId" is user-specified in StreamsConfig via the parameter APPLICATION_ID_CONFIG, "<name>" is either provided via Repartitioned#as(String) or an internally generated name, and "-repartition" is a fixed suffix.

    The user can either supply the Repartitioned instance as an implicit in scope, or provide implicit key and value serdes that will be converted to a Repartitioned instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    //..
    val clicksPerRegion: KStream[String, Long] = //..
    
    // Implicit serdes in scope will generate an implicit Repartitioned instance, which
    // will be passed automatically to the call of repartition below
    clicksPerRegion.repartition
    
    // Similarly you can create an implicit Repartitioned and it will be passed implicitly
    // to the repartition call
    repartitioned

    the Repartitioned instance used to specify the serdes, the StreamPartitioner that determines how records are distributed among partitions of the topic, part of the topic name, and the number of partitions for the repartition topic.

    returns

    a KStream that contains the exact same repartitioned records as this KStream

    See also

    org.apache.kafka.streams.kstream.KStream#repartition

  59. def selectKey[KR](mapper: (K, V) => KR, named: Named): KStream[KR, V]

    Set a new key (with possibly a new type) for each input record.

    The provided mapper, a function (K, V) => KR, is applied to every record and computes a new key, producing a new KStream in which each record has that key.

    mapper

    a function (K, V) => KR that computes a new key for each record

    named

    a Named config used to name the processor in the topology

    returns

    a KStream that contains records with new key (possibly of different type) and unmodified value

    See also

    org.apache.kafka.streams.kstream.KStream#selectKey

  60. def selectKey[KR](mapper: (K, V) => KR): KStream[KR, V]

    Set a new key (with possibly a new type) for each input record.

    The provided mapper, a function (K, V) => KR, is applied to every record and computes a new key, producing a new KStream in which each record has that key.

    mapper

    a function (K, V) => KR that computes a new key for each record

    returns

    a KStream that contains records with new key (possibly of different type) and unmodified value

    See also

    org.apache.kafka.streams.kstream.KStream#selectKey
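
    Example (a minimal sketch; purchases and the Purchase case class are hypothetical):

    case class Purchase(userId: String, amount: Double)

    // re-key the stream by a field of the value; downstream grouping will trigger repartitioning
    val byUser: KStream[String, Purchase] = purchases.selectKey((_, p) => p.userId)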

  61. def split(named: Named): BranchedKStream[K, V]

    Split this stream. BranchedKStream can be used for routing the records to different branches depending on evaluation against the supplied predicates. Stream branching is a stateless record-by-record operation.

    named

    a Named config used to name the processor in the topology and also to set the name prefix for the resulting branches (see BranchedKStream)

    returns

    BranchedKStream that provides methods for routing the records to different branches.

    See also

    org.apache.kafka.streams.kstream.KStream#split

  62. def split(): BranchedKStream[K, V]

    Split this stream. BranchedKStream can be used for routing the records to different branches depending on evaluation against the supplied predicates. Stream branching is a stateless record-by-record operation.

    returns

    BranchedKStream that provides methods for routing the records to different branches.

    See also

    org.apache.kafka.streams.kstream.KStream#split
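
    Example (a minimal sketch; events is a hypothetical KStream[String, String], and the Scala BranchedKStream is assumed to mirror the Java API):

    import org.apache.kafka.streams.kstream.Named

    // route records into branches; the result maps generated branch names to streams
    val branches = events
      .split(Named.as("kind-"))
      .branch((_, v) => v.startsWith("ERROR"))
      .defaultBranch()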

  63. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  64. def to(extractor: TopicNameExtractor[K, V])(implicit produced: Produced[K, V]): Unit

    Dynamically materialize this stream to topics using the Produced instance to configure the key serde, the value serde, and the StreamPartitioner. The topic each record is sent to is determined dynamically by the given extractor.

    The user can either supply the Produced instance as an implicit in scope, or provide implicit key and value serdes that will be converted to a Produced instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    //..
    val clicksPerRegion: KStream[String, Long] = //..
    
    // Implicit serdes in scope will generate an implicit Produced instance, which
    // will be passed automatically to the call of to below
    clicksPerRegion.to(topicChooser)

    // Similarly you can create an implicit Produced and it will be passed implicitly
    // to the to call
    extractor

    the extractor to determine the name of the Kafka topic to write to for each record

    produced

    the instance of Produced that gives the serdes and StreamPartitioner

    See also

    org.apache.kafka.streams.kstream.KStream#to

  65. def to(topic: String)(implicit produced: Produced[K, V]): Unit

    Materialize this stream to a topic using the Produced instance to configure the key serde, the value serde, and the StreamPartitioner.

    The user can either supply the Produced instance as an implicit in scope, or provide implicit key and value serdes that will be converted to a Produced instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    //..
    val clicksPerRegion: KStream[String, Long] = //..
    
    // Implicit serdes in scope will generate an implicit Produced instance, which
    // will be passed automatically to the call of to below
    clicksPerRegion.to(topic)

    // Similarly you can create an implicit Produced and it will be passed implicitly
    // to the to call
    topic

    the topic name

    produced

    the instance of Produced that gives the serdes and StreamPartitioner

    See also

    org.apache.kafka.streams.kstream.KStream#to

  66. def toString(): String
    Definition Classes
    AnyRef → Any
  67. def toTable(named: Named, materialized: Materialized[K, V, ByteArrayKeyValueStore]): KTable[K, V]

    Convert this stream to a KTable.

    named

    a Named config used to name the processor in the topology

    materialized

    a Materialized that describes how the StateStore for the resulting KTable should be materialized.

    returns

    a KTable that contains the same records as this KStream

    See also

    org.apache.kafka.streams.kstream.KStream#toTable

  68. def toTable(materialized: Materialized[K, V, ByteArrayKeyValueStore]): KTable[K, V]

    Convert this stream to a KTable.

    materialized

    a Materialized that describes how the StateStore for the resulting KTable should be materialized.

    returns

    a KTable that contains the same records as this KStream

    See also

    org.apache.kafka.streams.kstream.KStream#toTable

  69. def toTable(named: Named): KTable[K, V]

    Convert this stream to a KTable.

    named

    a Named config used to name the processor in the topology

    returns

    a KTable that contains the same records as this KStream

    See also

    org.apache.kafka.streams.kstream.KStream#toTable

  70. def toTable: KTable[K, V]

    Convert this stream to a KTable.

    returns

    a KTable that contains the same records as this KStream

    See also

    org.apache.kafka.streams.kstream.KStream#toTable
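
    Example (a minimal sketch; profileUpdates is a hypothetical KStream[String, String]):

    // interpret the stream as a changelog: for each key the latest value wins
    val latestProfile: KTable[String, String] = profileUpdates.toTable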

  71. def transform[K1, V1](transformerSupplier: TransformerSupplier[K, V, KeyValue[K1, V1]], named: Named, stateStoreNames: String*): KStream[K1, V1]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily). A Transformer (provided by the given TransformerSupplier) is applied to each input record and computes zero or more output records. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Transformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    transformerSupplier

    the TransformerSupplier that generates a Transformer

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transform

  72. def transform[K1, V1](transformerSupplier: TransformerSupplier[K, V, KeyValue[K1, V1]], stateStoreNames: String*): KStream[K1, V1]

    Transform each record of the input stream into zero or more records in the output stream (both key and value type can be altered arbitrarily). A Transformer (provided by the given TransformerSupplier) is applied to each input record and computes zero or more output records. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Transformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    transformerSupplier

    the TransformerSupplier that generates a Transformer

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains more or less records with new key and value (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transform

  73. def transformValues[VR](valueTransformerSupplier: ValueTransformerWithKeySupplier[K, V, VR], named: Named, stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record. A ValueTransformerWithKey (provided by the given ValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformerWithKey. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  74. def transformValues[VR](valueTransformerSupplier: ValueTransformerWithKeySupplier[K, V, VR], stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record. A ValueTransformerWithKey (provided by the given ValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformerWithKey. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  75. def transformValues[VR](valueTransformerSupplier: ValueTransformerSupplier[V, VR], named: Named, stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record. A ValueTransformer (provided by the given ValueTransformerSupplier) is applied to each input record value and computes a new value for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerSupplier that generates a ValueTransformer

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues

  76. def transformValues[VR](valueTransformerSupplier: ValueTransformerSupplier[V, VR], stateStoreNames: String*): KStream[K, VR]

    Transform the value of each input record into a new value (with possibly a new type) in the output record. A ValueTransformer (provided by the given ValueTransformerSupplier) is applied to each input record value and computes a new value for it. In order to use state, the state stores must be created and added via addStateStore before they can be connected to the ValueTransformer. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    valueTransformerSupplier

    an instance of ValueTransformerSupplier that generates a ValueTransformer

    stateStoreNames

    the names of the state stores used by the processor

    returns

    a KStream that contains records with unmodified key and new values (possibly of different type)

    See also

    org.apache.kafka.streams.kstream.KStream#transformValues
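
    Example (a minimal sketch; events is a hypothetical KStream[String, String]):

    import org.apache.kafka.streams.kstream.{ValueTransformer, ValueTransformerSupplier}
    import org.apache.kafka.streams.processor.ProcessorContext

    // attach the record timestamp to every value via the ProcessorContext
    val stampSupplier: ValueTransformerSupplier[String, (String, Long)] =
      new ValueTransformerSupplier[String, (String, Long)] {
        override def get(): ValueTransformer[String, (String, Long)] =
          new ValueTransformer[String, (String, Long)] {
            private var context: ProcessorContext = _
            override def init(ctx: ProcessorContext): Unit = context = ctx
            override def transform(value: String): (String, Long) = (value, context.timestamp())
            override def close(): Unit = ()
          }
      }

    val stamped: KStream[String, (String, Long)] = events.transformValues(stampSupplier)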

  77. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  78. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  79. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()

Deprecated Value Members

  1. def branch(predicates: (K, V) => Boolean*): Array[KStream[K, V]]

    Create an array of KStream from this stream by branching the records in the original stream based on the supplied predicates.

    predicates

    the ordered list of functions that return a Boolean

    returns

    multiple distinct substreams of this KStream

    Annotations
    @deprecated
    Deprecated

    (Since version 2.8) use split() instead

    See also

    org.apache.kafka.streams.kstream.KStream#branch

  2. def process(processorSupplier: () => Processor[K, V], named: Named, stateStoreNames: String*): Unit

    Process all records in this stream, one record at a time, by applying a Processor (provided by the given processorSupplier). In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Processor. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    processorSupplier

    a function that generates a org.apache.kafka.streams.processor.Processor

    named

    a Named config used to name the processor in the topology

    stateStoreNames

    the names of the state stores used by the processor

    Annotations
    @deprecated
    Deprecated

    (Since version 3.0) Use process(ProcessorSupplier, String*) instead.

    See also

    org.apache.kafka.streams.kstream.KStream#process

  3. def process(processorSupplier: () => Processor[K, V], stateStoreNames: String*): Unit

    Process all records in this stream, one record at a time, by applying a Processor (provided by the given processorSupplier). In order to use state, the state stores must be created and added via addStateStore before they can be connected to the Processor. It's not required to connect global state stores that are added via addGlobalStore; read-only access to global state stores is available by default.

    processorSupplier

    a function that generates a org.apache.kafka.streams.processor.Processor

    stateStoreNames

    the names of the state stores used by the processor

    Annotations
    @deprecated
    Deprecated

    (Since version 3.0) Use process(ProcessorSupplier, String*) instead.

    See also

    org.apache.kafka.streams.kstream.KStream#process

  4. def through(topic: String)(implicit produced: Produced[K, V]): KStream[K, V]

    Materialize this stream to a topic and create a new KStream from that topic using the Produced instance to configure the key serde, the value serde, and the StreamPartitioner.

    The user can either supply the Produced instance as an implicit in scope, or provide implicit key and value serdes that will be converted to a Produced instance implicitly.

    Example:
    
    // brings implicit serdes in scope
    import Serdes._
    
    //..
    val clicksPerRegion: KStream[String, Long] = //..
    
    // Implicit serdes in scope will generate an implicit Produced instance, which
    // will be passed automatically to the call of through below
    clicksPerRegion.through(topic)
    
    // Similarly you can create an implicit Produced and it will be passed implicitly
    // to the through call
    topic

    the topic name

    produced

    the instance of Produced that gives the serdes and StreamPartitioner

    returns

    a KStream that contains the exact same (and potentially repartitioned) records as this KStream

    Annotations
    @deprecated
    Deprecated

    (Since version 2.6.0) use repartition() instead

    See also

    org.apache.kafka.streams.kstream.KStream#through
