Class PythonTransforms

java.lang.Object
com.hazelcast.jet.python.PythonTransforms

public final class PythonTransforms extends Object
Transforms which allow the user to call Python user-defined functions from inside a Jet pipeline.
Since:
Jet 4.0
  • Method Details

    • mapUsingPython

      @Nonnull public static FunctionEx<StreamStage<String>,StreamStage<String>> mapUsingPython(@Nonnull PythonServiceConfig cfg)
      A stage-transforming method that adds a "map using Python" pipeline stage. Use it like this: stage.apply(PythonTransforms.mapUsingPython(pyConfig)). See PythonServiceConfig for more details.
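      A minimal sketch of how this transform could be wired into a streaming pipeline. The test source, the "python" base directory and the "echo" handler module (exposing the default transform_list function) are illustrative assumptions, not part of this API.

      import com.hazelcast.jet.Jet;
      import com.hazelcast.jet.JetInstance;
      import com.hazelcast.jet.pipeline.Pipeline;
      import com.hazelcast.jet.pipeline.Sinks;
      import com.hazelcast.jet.pipeline.test.TestSources;
      import com.hazelcast.jet.python.PythonServiceConfig;
      import com.hazelcast.jet.python.PythonTransforms;

      public class StreamingPythonExample {
          public static void main(String[] args) {
              Pipeline p = Pipeline.create();
              p.readFrom(TestSources.itemStream(10))               // test source emitting SimpleEvent items
               .withoutTimestamps()
               .map(event -> String.valueOf(event.sequence()))     // the Python stage consumes StreamStage<String>
               .apply(PythonTransforms.mapUsingPython(             // "map using Python" stage transform
                       new PythonServiceConfig()
                               .setBaseDir("python")               // hypothetical dir holding the handler module
                               .setHandlerModule("echo")))         // hypothetical module with transform_list(items)
               .writeTo(Sinks.logger());

              JetInstance jet = Jet.bootstrappedInstance();
              try {
                  jet.newJob(p).join();                            // streaming job: runs until cancelled
              } finally {
                  jet.shutdown();
              }
          }
      }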
    • mapUsingPython

      @Deprecated @Nonnull public static <K> FunctionEx<StreamStage<String>,StreamStage<String>> mapUsingPython(@Nonnull FunctionEx<? super String,? extends K> keyFn, @Nonnull PythonServiceConfig cfg)
      Deprecated.
      Jet now has first-class support for data rebalancing; see GeneralStage.rebalance() and GeneralStage.rebalance(FunctionEx).
      A stage-transforming method that adds a partitioned "map using Python" pipeline stage. It applies partitioning using the supplied keyFn. You need partitioning in order to distribute the Python work across the whole cluster when your input stream comes from a non-distributed data source (all the data arrives on a single cluster member).

      Use it like this: stage.apply(PythonTransforms.mapUsingPython(keyFn, pyConfig)). See PythonServiceConfig for more details.
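      A sketch that puts the deprecated partitioned form next to the rebalance-based replacement the deprecation note points to. The "python" base directory, the "echo" handler module and the identity key function are illustrative assumptions only; both branches are shown side by side purely for comparison.

      import com.hazelcast.jet.pipeline.Pipeline;
      import com.hazelcast.jet.pipeline.Sinks;
      import com.hazelcast.jet.pipeline.StreamStage;
      import com.hazelcast.jet.pipeline.test.TestSources;
      import com.hazelcast.jet.python.PythonServiceConfig;
      import com.hazelcast.jet.python.PythonTransforms;

      public class PartitionedStreamingPythonExample {
          public static void main(String[] args) {
              PythonServiceConfig cfg = new PythonServiceConfig()
                      .setBaseDir("python")        // hypothetical handler directory
                      .setHandlerModule("echo");   // hypothetical handler module

              Pipeline p = Pipeline.create();
              StreamStage<String> lines = p
                      .readFrom(TestSources.itemStream(10))        // non-distributed test source
                      .withoutTimestamps()
                      .map(event -> String.valueOf(event.sequence()));

              // Deprecated form: partition by a key so the Python work spreads over the cluster.
              lines.apply(PythonTransforms.mapUsingPython(line -> line, cfg))
                   .writeTo(Sinks.logger());

              // Recommended replacement: rebalance the stream, then apply the plain transform.
              lines.rebalance()
                   .apply(PythonTransforms.mapUsingPython(cfg))
                   .writeTo(Sinks.logger());
          }
      }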

    • mapUsingPythonBatch

      @Nonnull public static FunctionEx<BatchStage<String>,BatchStage<String>> mapUsingPythonBatch(@Nonnull PythonServiceConfig cfg)
      A stage-transforming method that adds a "map using Python" pipeline stage. Use it like this: stage.apply(PythonTransforms.mapUsingPythonBatch(pyConfig)). See PythonServiceConfig for more details.
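      A minimal batch-pipeline sketch. The IList names ("input-list", "output-list") and the "python"/"echo" handler layout are illustrative assumptions only.

      import com.hazelcast.jet.Jet;
      import com.hazelcast.jet.JetInstance;
      import com.hazelcast.jet.pipeline.Pipeline;
      import com.hazelcast.jet.pipeline.Sinks;
      import com.hazelcast.jet.pipeline.Sources;
      import com.hazelcast.jet.python.PythonServiceConfig;
      import com.hazelcast.jet.python.PythonTransforms;

      public class BatchPythonExample {
          public static void main(String[] args) {
              Pipeline p = Pipeline.create();
              p.readFrom(Sources.<String>list("input-list"))       // hypothetical IList with the input strings
               .apply(PythonTransforms.mapUsingPythonBatch(        // batch variant of "map using Python"
                       new PythonServiceConfig()
                               .setBaseDir("python")               // hypothetical handler directory
                               .setHandlerModule("echo")))         // hypothetical module with transform_list(items)
               .writeTo(Sinks.list("output-list"));                // hypothetical IList for the results

              JetInstance jet = Jet.bootstrappedInstance();
              try {
                  jet.newJob(p).join();                            // batch job: completes when the list is drained
              } finally {
                  jet.shutdown();
              }
          }
      }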
    • mapUsingPythonBatch

      @Nonnull public static <K> FunctionEx<BatchStage<String>,BatchStage<String>> mapUsingPythonBatch(@Nonnull FunctionEx<? super String,? extends K> keyFn, @Nonnull PythonServiceConfig cfg)
      A stage-transforming method that adds a partitioned "map using Python" pipeline stage. It applies partitioning using the supplied keyFn. You need partitioning in order to distribute the Python work across the whole cluster when your input comes from a non-distributed data source (all the data arrives on a single cluster member).

      Use it like this: stage.apply(PythonTransforms.mapUsingPythonBatch(keyFn, pyConfig)). See PythonServiceConfig for more details.
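      A sketch of the partitioned batch form under the same illustrative assumptions (hypothetical IList names and handler layout); the identity key function is just an example choice, and job submission is omitted for brevity.

      import com.hazelcast.jet.pipeline.BatchStage;
      import com.hazelcast.jet.pipeline.Pipeline;
      import com.hazelcast.jet.pipeline.Sinks;
      import com.hazelcast.jet.pipeline.Sources;
      import com.hazelcast.jet.python.PythonServiceConfig;
      import com.hazelcast.jet.python.PythonTransforms;

      public class PartitionedBatchPythonExample {
          public static void main(String[] args) {
              PythonServiceConfig cfg = new PythonServiceConfig()
                      .setBaseDir("python")        // hypothetical handler directory
                      .setHandlerModule("echo");   // hypothetical handler module

              Pipeline p = Pipeline.create();
              // The IList source is non-distributed: all items arrive on one member.
              BatchStage<String> lines = p.readFrom(Sources.<String>list("input-list"));
              // Partitioning by the item itself spreads the Python work across all members.
              lines.apply(PythonTransforms.mapUsingPythonBatch(line -> line, cfg))
                   .writeTo(Sinks.list("output-list"));
          }
      }

      Any key function works here; keying on the item itself gives a roughly even spread as long as the items are reasonably diverse.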