public class JobConfig extends Object implements IdentifiedDataSerializable
| Constructor and Description |
|---|
| JobConfig() |
| Modifier and Type | Method and Description |
|---|---|
| JobConfig | addClass(Class... classes): Adds the given classes and recursively all their nested (inner & anonymous) classes to the Jet job's classpath. |
| JobConfig | addClasspathResource(File file): Adds a file that will be available as a resource on the Jet job's classpath. |
| JobConfig | addClasspathResource(File file, String id): Adds a file that will be available as a resource on the Jet job's classpath. |
| JobConfig | addClasspathResource(String path): Adds a file that will be available as a resource on the Jet job's classpath. |
| JobConfig | addClasspathResource(String path, String id): Adds a file that will be available as a resource on the Jet job's classpath. |
| JobConfig | addClasspathResource(URL url): Adds a resource that will be available on the Jet job's classpath. |
| JobConfig | addClasspathResource(URL url, String id): Adds a resource that will be available on the Jet job's classpath. |
| JobConfig | addCustomClasspath(String name, String path): Adds a custom classpath element to a stage with the given name. |
| JobConfig | addCustomClasspaths(String name, List<String> paths): Adds custom classpath elements to a stage with the given name. |
| JobConfig | addJar(File file): Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addJar(String path): Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addJar(URL url): Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addJarsInZip(File file): Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addJarsInZip(String path): Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addJarsInZip(URL url): Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. |
| JobConfig | addPackage(String... packages): Recursively adds all the classes and resources in the given packages to the Jet job's classpath. |
| JobConfig | attachAll(Map<String,File> idToFile): Attaches all the files/directories in the supplied map, as if by calling attachDirectory(dir, id) for every entry that resolves to a directory and attachFile(file, id) for every entry that resolves to a regular file. |
| JobConfig | attachDirectory(File file): Adds the supplied directory to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachDirectory(File file, String id): Adds the supplied directory to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachDirectory(String path): Adds the directory identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachDirectory(String path, String id): Adds the directory identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachDirectory(URL url): Adds the directory identified by the supplied URL to the list of directories that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachDirectory(URL url, String id): Adds the directory identified by the supplied URL to the list of directories that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(File file): Adds the supplied file to the list of resources that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(File file, String id): Adds the supplied file to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(String path): Adds the file identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(String path, String id): Adds the file identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(URL url): Adds the file identified by the supplied URL as a resource that will be available to the job while it's executing in the Jet cluster. |
| JobConfig | attachFile(URL url, String id): Adds the file identified by the supplied URL to the list of resources that will be available to the job while it's executing in the Jet cluster. |
| boolean | equals(Object o) |
| <T> T | getArgument(String key): Returns the value to which the specified key is mapped, or null if there is no mapping for the key. |
| int | getClassId(): Returns the type identifier for this class. |
| JobClassLoaderFactory | getClassLoaderFactory(): Returns the configured JobClassLoaderFactory. |
| Map<String,List<String>> | getCustomClassPaths(): Returns the configured custom classpath elements; see addCustomClasspath(String, String) and addCustomClasspaths(String, List). |
| int | getFactoryId(): Returns the DataSerializableFactory factory ID for this class. |
| String | getInitialSnapshotName(): Returns the configured initial snapshot name, or null if no initial snapshot is configured. |
| long | getMaxProcessorAccumulatedRecords(): Returns the maximum number of records that can be accumulated by any single Processor instance in the context of the job. |
| String | getName(): Returns the name of the job, or null if no name was given. |
| ProcessingGuarantee | getProcessingGuarantee(): Returns the configured processing guarantee. |
| Map<String,ResourceConfig> | getResourceConfigs(): Returns all the registered resource configurations. |
| Map<String,String> | getSerializerConfigs(): Returns all the registered serializer configurations. |
| long | getSnapshotIntervalMillis(): Returns the configured snapshot interval. |
| long | getTimeoutMillis(): Returns the maximum execution time for the job in milliseconds. |
| int | hashCode() |
| boolean | isAutoScaling(): Returns whether auto scaling is enabled; see setAutoScaling(boolean). |
| boolean | isMetricsEnabled(): Returns whether metrics collection is enabled for the job. |
| boolean | isSplitBrainProtectionEnabled(): Tells whether split brain protection is enabled. |
| boolean | isStoreMetricsAfterJobCompletion(): Returns whether metrics should be stored in the cluster after the job completes. |
| boolean | isSuspendOnFailure(): Returns whether the job will be suspended on failure; see setSuspendOnFailure(boolean). |
| void | lock(): Prevents further mutation of the config after it has been submitted for job execution. |
| void | readData(ObjectDataInput in): Reads fields from the input stream. |
| <T,S extends StreamSerializer<T>> JobConfig | registerSerializer(Class<T> clazz, Class<S> serializerClass): Registers the given serializer for the given class for the scope of the job. |
| JobConfig | setArgument(String key, Object value): Associates the specified value with the specified key. |
| JobConfig | setAutoScaling(boolean enabled): Sets whether Jet will scale the job up or down when a member is added to or removed from the cluster. |
| JobConfig | setClassLoaderFactory(JobClassLoaderFactory classLoaderFactory): Sets a custom JobClassLoaderFactory that will be used to load job classes and resources on Jet members. |
| JobConfig | setInitialSnapshotName(String initialSnapshotName): Sets the exported state snapshot name to restore the initial job state from. |
| JobConfig | setMaxProcessorAccumulatedRecords(long maxProcessorAccumulatedRecords): Sets the maximum number of records that can be accumulated by any single Processor instance in the context of the job. |
| JobConfig | setMetricsEnabled(boolean enabled): Sets whether metrics collection should be enabled for the job. |
| JobConfig | setName(String name): Sets the name of the job. |
| JobConfig | setProcessingGuarantee(ProcessingGuarantee processingGuarantee): Sets the processing guarantee for the job. |
| JobConfig | setSnapshotIntervalMillis(long snapshotInterval): Sets the snapshot interval in milliseconds: the interval between the completion of the previous snapshot and the start of a new one. |
| JobConfig | setSplitBrainProtection(boolean isEnabled): Configures the split brain protection feature. |
| JobConfig | setStoreMetricsAfterJobCompletion(boolean storeMetricsAfterJobCompletion): Sets whether metrics should be stored in the cluster after the job completes. |
| JobConfig | setSuspendOnFailure(boolean suspendOnFailure): Sets what happens if the job execution fails: if enabled, the job will be suspended. |
| JobConfig | setTimeoutMillis(long timeoutMillis): Sets the maximum execution time for the job in milliseconds. |
| String | toString() |
| void | writeData(ObjectDataOutput out): Writes object fields to the output stream. |
@Nonnull public JobConfig setName(@Nullable String name)

Sets the name of the job. See JetService.newJobIfAbsent(com.hazelcast.jet.core.DAG, com.hazelcast.jet.config.JobConfig). An active job is a job that is running, suspended or waiting to be run. The job name is printed in logs and is visible in Management Center.

The default value is null. Must be set to null for light jobs.

Returns: this instance for fluent API
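A minimal submission sketch, assuming a Hazelcast 5.x member with the Jet engine enabled; the job name, test source and sink are illustrative only:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.JetService;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class NamedJobExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.bootstrappedInstance();
        JetService jet = hz.getJet();

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(TestSources.items(1, 2, 3))
                .writeTo(Sinks.logger());

        // The name shows up in logs and in Management Center.
        JobConfig config = new JobConfig().setName("numbers-logger");

        // newJobIfAbsent returns the already-running job with the same name,
        // if any, instead of failing because of the duplicate name.
        Job job = jet.newJobIfAbsent(pipeline, config);
        job.join();
        hz.shutdown();
    }
}
```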
public boolean isSplitBrainProtectionEnabled()
Tells whether split brain protection is enabled. See setSplitBrainProtection(boolean).

@Nonnull public JobConfig setSplitBrainProtection(boolean isEnabled)
Configures the split brain protection feature. When enabled, the quorum value is calculated as cluster size at job submission time / 2 + 1.

The job can be restarted only if the size of the cluster after restart is at least the quorum value. Only one of the clusters formed due to a split-brain condition can satisfy the quorum. For example, if at the time of job submission the cluster size was 5 and a network partition causes two clusters with sizes 3 and 2 to form, the job will restart only on the cluster with size 3.

Adding new nodes to the cluster after starting the job may defeat this mechanism. For instance, if there are 5 members at submission time (i.e., the quorum value is 3) and later a new node joins, a split into two clusters of size 3 will allow the job to be restarted on both sides.

Split-brain protection is disabled by default.

If auto scaling is disabled and you manually Job.resume() the job, the job won't start executing until the quorum is met, but will remain in the resumed state.

Ignored for light jobs.

Returns: this instance for fluent API
public JobConfig setAutoScaling(boolean enabled)
Sets whether Jet will scale the job up or down when a member is added or removed from the cluster. The table below summarizes the behavior:

+--------------------------+-----------------------+----------------+
| Auto scaling             | Member added          | Member removed |
+--------------------------+-----------------------+----------------+
| Enabled                  | restart (after delay) | restart        |
| Disabled - snapshots on  | no action             | suspend        |
| Disabled - snapshots off | no action             | fail           |
+--------------------------+-----------------------+----------------+

Returns: this instance for fluent API

See also: Configuring the scale-up delay, Enabling/disabling snapshots
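A configuration sketch that pairs auto scaling with snapshots so a topology change restarts the job rather than failing it; the job name and interval are illustrative:

```java
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.config.ProcessingGuarantee;

public class AutoScalingConfigExample {
    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                .setName("clickstream-aggregator")
                // With auto scaling on, the job restarts (after a delay) when a
                // member joins and restarts when a member leaves.
                .setAutoScaling(true)
                // Snapshots let the restarted job continue from saved state.
                .setProcessingGuarantee(ProcessingGuarantee.AT_LEAST_ONCE)
                .setSnapshotIntervalMillis(5_000);

        System.out.println("auto scaling enabled: " + config.isAutoScaling());
    }
}
```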
public boolean isAutoScaling()
Returns whether auto scaling is enabled; see setAutoScaling(boolean).

public JobConfig setSuspendOnFailure(boolean suspendOnFailure)
Sets what happens if the job execution fails: if enabled, the job will be suspended. By default it's disabled. Ignored for light jobs.

Returns: this instance for fluent API
public boolean isSuspendOnFailure()
Returns whether the job will be suspended on failure; see setSuspendOnFailure(boolean).

@Nonnull public ProcessingGuarantee getProcessingGuarantee()
Returns the configured processing guarantee.

@Nonnull public JobConfig setProcessingGuarantee(@Nonnull ProcessingGuarantee processingGuarantee)
Sets the processing guarantee for the job. When the processing guarantee is set to at-least-once or exactly-once, the snapshot interval can be configured via setSnapshotIntervalMillis(long), otherwise it will default to 10 seconds.

The default value is ProcessingGuarantee.NONE. Must be set to NONE for light jobs.

Returns: this instance for fluent API
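A short sketch of an exactly-once configuration with a custom snapshot interval (the values are illustrative):

```java
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.config.ProcessingGuarantee;

public class ExactlyOnceConfigExample {
    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                // Exactly-once requires periodic state snapshots.
                .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE)
                // Snapshot every 30 seconds instead of the 10-second default.
                .setSnapshotIntervalMillis(30_000);

        System.out.println(config.getProcessingGuarantee()
                + " / " + config.getSnapshotIntervalMillis() + " ms");
    }
}
```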
public long getSnapshotIntervalMillis()
Returns the configured snapshot interval.

@Nonnull public JobConfig setSnapshotIntervalMillis(long snapshotInterval)
Sets the snapshot interval in milliseconds: the interval between the completion of the previous snapshot and the start of a new one. It is relevant only when the processing guarantee is at-least-once or exactly-once. Default value is set to 10 seconds.

Returns: this instance for fluent API
@Nonnull public JobConfig addClass(@Nonnull Class... classes)
Adds the given classes and recursively all their nested (inner & anonymous) classes to the Jet job's classpath. They will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

See also addJar(java.net.URL) and addClasspathResource(java.net.URL).

Cannot be used for light jobs.

Returns: this instance for fluent API
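A sketch of shipping job code with the config; the TradeMapper class and the temp JAR are stand-ins for real job classes and dependencies:

```java
import java.io.File;
import java.nio.file.Files;

import com.hazelcast.jet.config.JobConfig;

public class CodeDeploymentExample {

    // Stand-in for a user-defined class referenced by the pipeline.
    public static class TradeMapper { }

    public static void main(String[] args) throws Exception {
        // Stand-in for a real dependency JAR.
        File serdeJar = Files.createTempFile("trades-serde", ".jar").toFile();

        JobConfig config = new JobConfig()
                // Ship the class (and any nested classes) with the job.
                .addClass(TradeMapper.class)
                // Ship a whole JAR; its filename becomes the resource ID.
                .addJar(serdeJar);

        System.out.println("resources configured: " + config);
    }
}
```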
@Nonnull public JobConfig addPackage(@Nonnull String... packages)
Adds recursively all the classes and resources in the given packages to the Jet job's classpath. They will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

See also addJar(java.net.URL) and addClasspathResource(java.net.URL).

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJar(@Nonnull URL url)
Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the JAR with a URL, which must contain at least one path segment. The last path segment ("filename") will be used as the resource ID, so two JARs with the same filename will be in conflict.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJar(@Nonnull File file)
Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the JAR with a File. The filename part of the path will be used as the resource ID, so two JARs with the same filename will be in conflict.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJar(@Nonnull String path)
Adds a JAR whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the JAR with a path string. The filename part will be used as the resource ID, so two JARs with the same filename will be in conflict.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJarsInZip(@Nonnull URL url)
Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the ZIP file with a URL, which must contain at least one path segment. The last path segment ("filename") will be used as the resource ID, so two ZIPs with the same filename will be in conflict.

The ZIP file should contain only JARs. Any other files will be ignored.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJarsInZip(@Nonnull File file)
Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the ZIP file with a File. The filename part will be used as the resource ID, so two ZIPs with the same filename will be in conflict.

The ZIP file should contain only JARs. Any other files will be ignored.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addJarsInZip(@Nonnull String path)
Adds a ZIP file with JARs whose contents will be accessible to all the code attached to the underlying pipeline or DAG, but not to any other code. (One important example is the IMap data source, which can instantiate only the classes from the Jet instance's classpath.)

This variant identifies the ZIP file with a path string. The filename part will be used as the resource ID, so two ZIPs with the same filename will be in conflict.

The ZIP file should contain only JARs. Any other files will be ignored.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addClasspathResource(@Nonnull URL url)
Adds a resource that will be available on the Jet job's classpath.

This variant identifies the resource with a URL, which must contain at least one path segment. The last path segment ("filename") will be used as the resource ID, so two resources with the same filename will be in conflict.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addClasspathResource(@Nonnull URL url, @Nonnull String id)
Adds a resource that will be available on the Jet job's classpath. The supplied id becomes the path under which the resource is available from the class loader.

Cannot be used for light jobs.

Returns: this instance for fluent API
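A sketch of attaching a classpath resource under an explicit ID; the temp file stands in for a real config file, and the loadRules helper shows one possible way job code could read the resource back through its class loader:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import com.hazelcast.jet.config.JobConfig;

public class ClasspathResourceExample {

    public static void main(String[] args) throws IOException {
        // Temp file stands in for a real configuration file.
        Path rules = Files.createTempFile("rules", ".json");
        Files.writeString(rules, "{\"threshold\": 10}");

        JobConfig config = new JobConfig()
                // Job code sees the resource as "config/rules.json" on the
                // job's class loader, regardless of the local file name.
                .addClasspathResource(rules.toFile(), "config/rules.json");

        System.out.println("resource registered: " + config);
    }

    // One way to read the resource back from inside job code: the job class
    // loader is typically the context class loader while the job runs.
    static String loadRules() throws IOException {
        try (InputStream in = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("config/rules.json")) {
            return new String(in.readAllBytes());
        }
    }
}
```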
@Nonnull public JobConfig addClasspathResource(@Nonnull File file)
Adds a file that will be available as a resource on the Jet job's classpath. Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addClasspathResource(@Nonnull File file, @Nonnull String id)
Adds a file that will be available as a resource on the Jet job's classpath. The supplied id becomes the path under which the resource is available from the class loader.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addClasspathResource(@Nonnull String path)
Adds a file that will be available as a resource on the Jet job's classpath. Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig addClasspathResource(@Nonnull String path, @Nonnull String id)
Adds a file that will be available as a resource on the Jet job's classpath. The supplied id becomes the path under which the resource is available from the class loader.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull @Beta public JobConfig addCustomClasspath(@Nonnull String name, @Nonnull String path)
Adds a custom classpath element to a stage with the given name.

    BatchSource<String> source = ...
    JobConfig config = new JobConfig();
    config.addCustomClasspath(source.name(), "hazelcast-client-3.12.13.jar");

Parameters:
name - name of the stage, must be unique for the whole pipeline (the stage name can be set via Stage.setName(String))
path - path to the JAR relative to the `ext` directory

Returns: this instance for fluent API
@Nonnull @Beta public JobConfig addCustomClasspaths(@Nonnull String name, @Nonnull List<String> paths)
Adds custom classpath elements to a stage with the given name.

    BatchSource<String> source = ...
    JobConfig config = new JobConfig();
    config.addCustomClasspaths(source.name(), jarList);

Parameters:
name - name of the stage, must be unique for the whole pipeline (the stage name can be set via Stage.setName(String))
paths - paths to the JARs relative to the `ext` directory

Returns: this instance for fluent API
@Nonnull public JobConfig attachFile(@Nonnull URL url)
Adds the file identified by the supplied URL as a resource that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachFile(@Nonnull URL url, @Nonnull String id)
Adds the file identified by the supplied URL to the list of resources that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachFile(@Nonnull File file)
Adds the supplied file to the list of resources that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachFile(@Nonnull File file, @Nonnull String id)
Adds the supplied file to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
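A sketch of attaching a lookup file and reading it inside the pipeline through a shared ServiceFactory; the file, its ID and the test pipeline are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.ServiceFactory;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class AttachedFileExample {
    public static void main(String[] args) throws Exception {
        // Temp file stands in for a real lookup file.
        Path blocklist = Files.createTempFile("blocklist", ".txt");
        Files.writeString(blocklist, "bad-user\n");

        JobConfig config = new JobConfig()
                .attachFile(blocklist.toFile(), "blocklist");

        // The shared service loads the attached file once per member.
        ServiceFactory<?, List<String>> blockedUsers =
                ServiceFactories.sharedService(ctx ->
                        Files.readAllLines(ctx.attachedFile("blocklist").toPath()));

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("good-user", "bad-user"))
         .mapUsingService(blockedUsers,
                 (blocked, user) -> blocked.contains(user) ? null : user) // null drops the item
         .writeTo(Sinks.logger());

        // Submit with e.g. hz.getJet().newJob(p, config);
    }
}
```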
@Nonnull public JobConfig attachFile(@Nonnull String path)
Adds the file identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachFile(@Nonnull String path, @Nonnull String id)
Adds the file identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the file from within the Jet job, call ctx.attachedFile(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). The file will have the same name as the one supplied here, but it will be in a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull URL url)
Adds the directory identified by the supplied URL to the list of directories that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull URL url, @Nonnull String id)
Adds the directory identified by the supplied URL to the list of directories that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull String path)
Adds the directory identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull String path, @Nonnull String id)
Adds the directory identified by the supplied pathname to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull File file)
Adds the supplied directory to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachDirectory(@Nonnull File file, @Nonnull String id)
Adds the supplied directory to the list of files that will be available to the job while it's executing in the Jet cluster.

To retrieve the directory from within the Jet job, call ctx.attachedDirectory(id), where ctx is the ProcessorSupplier context available, for example, to ServiceFactory.createContextFn(). It will be a temporary directory on the Jet server.

Cannot be used for light jobs.

Returns: this instance for fluent API
@Nonnull public JobConfig attachAll(@Nonnull Map<String,File> idToFile)
Attaches all the files/directories in the supplied map, as if by calling attachDirectory(dir, id) for every entry that resolves to a directory and attachFile(file, id) for every entry that resolves to a regular file.

Cannot be used for light jobs.

Returns: this instance for fluent API
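A minimal attachAll sketch; the temp file and directory stand in for real inputs:

```java
import java.io.File;
import java.nio.file.Files;
import java.util.Map;

import com.hazelcast.jet.config.JobConfig;

public class AttachAllExample {
    public static void main(String[] args) throws Exception {
        // Temp files stand in for a real model file and geo-data directory.
        File model = Files.createTempFile("model", ".bin").toFile();
        File geoDir = Files.createTempDirectory("geo").toFile();

        // Equivalent to attachFile(model, "model") plus attachDirectory(geoDir, "geo").
        JobConfig config = new JobConfig()
                .attachAll(Map.of("model", model, "geo", geoDir));

        System.out.println("attached 2 resources: " + config);
    }
}
```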
@Nonnull @PrivateApi public Map<String,ResourceConfig> getResourceConfigs()

Returns all the registered resource configurations.
public Map<String,List<String>> getCustomClassPaths()
Returns the configured custom classpath elements; see addCustomClasspath(String, String) and addCustomClasspaths(String, List).
@Nonnull @EvolvingApi public <T,S extends StreamSerializer<T>> JobConfig registerSerializer(@Nonnull Class<T> clazz, @Nonnull Class<S> serializerClass)
Registers the given serializer for the given class for the scope of the job.

Parameters:
clazz - class to register the serializer for
serializerClass - class of the serializer to be registered

Returns: this instance for fluent API
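A sketch of a job-scoped serializer; the Money class and MoneySerializer are hypothetical examples, not part of the Hazelcast API:

```java
import java.io.IOException;

import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.StreamSerializer;

public class JobSerializerExample {

    // A value type used by the job but unknown to the cluster-wide
    // serialization config.
    public static final class Money {
        final String currency;
        final long cents;
        Money(String currency, long cents) {
            this.currency = currency;
            this.cents = cents;
        }
    }

    // Serializer shipped with the job; needs a no-arg constructor.
    public static final class MoneySerializer implements StreamSerializer<Money> {
        @Override
        public int getTypeId() {
            return 1; // application-chosen type ID
        }

        @Override
        public void write(ObjectDataOutput out, Money m) throws IOException {
            out.writeString(m.currency);
            out.writeLong(m.cents);
        }

        @Override
        public Money read(ObjectDataInput in) throws IOException {
            return new Money(in.readString(), in.readLong());
        }
    }

    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                .registerSerializer(Money.class, MoneySerializer.class)
                // Also ship the classes so members can load them.
                .addClass(Money.class, MoneySerializer.class);

        System.out.println("job-scoped serializer registered: " + config);
    }
}
```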
@Nonnull @PrivateApi public Map<String,String> getSerializerConfigs()

Returns all the registered serializer configurations.
@Nonnull public JobConfig setArgument(String key, Object value)
Associates the specified value with the specified key.

Parameters:
key - key with which the specified value is to be associated
value - value to be associated with the specified key

Returns: this instance for fluent API
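A small sketch of storing and reading back a job argument; the key and value are arbitrary:

```java
import com.hazelcast.jet.config.JobConfig;

public class JobArgumentExample {
    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                .setArgument("run.date", "2022-06-01");

        // getArgument returns null if the key has no mapping.
        String runDate = config.getArgument("run.date");
        System.out.println("run.date = " + runDate);
    }
}
```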
@Nullable public <T> T getArgument(String key)

Returns the value to which the specified key is mapped, or null if there is no mapping for the key.
Parameters:
key - the key whose associated value is to be returned

@Nonnull public JobConfig setClassLoaderFactory(@Nullable JobClassLoaderFactory classLoaderFactory)
Sets a custom JobClassLoaderFactory that will be used to load job classes and resources on Jet members. Not supported for light jobs.

Returns: this instance for fluent API
@Nullable public JobClassLoaderFactory getClassLoaderFactory()
Returns the configured JobClassLoaderFactory.

@Nullable public String getInitialSnapshotName()
Returns the configured initial snapshot name or null if no initial snapshot is configured.

@Nonnull public JobConfig setInitialSnapshotName(@Nullable String initialSnapshotName)
Sets the exported state snapshot name to restore the initial job state from. The job will use the state even if the processing guarantee is set to ProcessingGuarantee.NONE.

Cannot be used for light jobs.

Parameters:
initialSnapshotName - the snapshot name given to Job.exportSnapshot(String)

Returns: this instance for fluent API
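A sketch of an upgrade flow that exports a running job's state and starts a new job from it; the job and snapshot names are placeholders, and a live cluster, old job and new pipeline are assumed:

```java
import com.hazelcast.jet.JetService;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;

public class SnapshotUpgradeExample {

    // Cancels the old job while exporting its state, then starts the new
    // pipeline from that snapshot.
    static Job upgrade(JetService jet, Job oldJob, Pipeline newPipeline) {
        oldJob.cancelAndExportSnapshot("orders-v1-final");

        JobConfig config = new JobConfig()
                .setName("orders-v2")
                .setInitialSnapshotName("orders-v1-final");
        return jet.newJob(newPipeline, config);
    }
}
```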
@Nonnull public JobConfig setMetricsEnabled(boolean enabled)
Sets whether metrics collection should be enabled for the job. Requires BaseMetricsConfig.isEnabled() to be on in order to function.

Metrics for running jobs can be queried using Job.getMetrics().

It's enabled by default. Ignored for light jobs.
public boolean isMetricsEnabled()

Returns whether metrics collection is enabled for the job; see setMetricsEnabled(boolean).
public boolean isStoreMetricsAfterJobCompletion()
Returns whether metrics should be stored in the cluster after the job completes. Requires both BaseMetricsConfig.isEnabled() and isMetricsEnabled() to be on in order to function.

If enabled, metrics can be retrieved by calling Job.getMetrics().

It's disabled by default.
public JobConfig setStoreMetricsAfterJobCompletion(boolean storeMetricsAfterJobCompletion)
Sets whether metrics should be stored in the cluster after the job completes. If enabled, metrics can be retrieved by calling Job.getMetrics().

If disabled, once the configured job stops running Job.getMetrics() will always return empty metrics for it, regardless of the settings for global metrics collection or per-job metrics collection.

It's disabled by default. Ignored for light jobs.
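A metrics-oriented configuration sketch; retrieving the metrics afterwards assumes a Job handle obtained at submission:

```java
import com.hazelcast.jet.config.JobConfig;

public class JobMetricsConfigExample {
    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                .setName("nightly-batch")
                // Job metrics are on by default; shown here for clarity.
                .setMetricsEnabled(true)
                // Keep the final metrics around after the batch job completes.
                .setStoreMetricsAfterJobCompletion(true);

        // After submission and completion, e.g.:
        //   Job job = hz.getJet().newJob(pipeline, config);
        //   job.join();
        //   System.out.println(job.getMetrics());
        System.out.println(config.isStoreMetricsAfterJobCompletion());
    }
}
```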
public long getMaxProcessorAccumulatedRecords()
Returns the maximum number of records that can be accumulated by any single Processor instance in the context of the job.

public JobConfig setMaxProcessorAccumulatedRecords(long maxProcessorAccumulatedRecords)
Sets the maximum number of records that can be accumulated by any single Processor instance in the context of the job.

For more info see InstanceConfig.setMaxProcessorAccumulatedRecords(long). If set, it takes precedence over the InstanceConfig setting.

The default value is -1; in that case the InstanceConfig value is used.
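A sketch that caps per-processor accumulation for a job; the limit value is arbitrary, and the comment reflects the stated purpose of the limit (guarding accumulating operations such as grouping or joining):

```java
import com.hazelcast.jet.config.JobConfig;

public class AccumulationLimitExample {
    public static void main(String[] args) {
        JobConfig config = new JobConfig()
                // Cap how many records a single accumulating processor
                // (e.g. grouping, sorting, joining) may hold for this job;
                // -1 (the default) defers to the InstanceConfig-level setting.
                .setMaxProcessorAccumulatedRecords(5_000_000L);

        System.out.println("limit = " + config.getMaxProcessorAccumulatedRecords());
    }
}
```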
public long getTimeoutMillis()

Returns the maximum execution time for the job in milliseconds.
public JobConfig setTimeoutMillis(long timeoutMillis)
Sets the maximum execution time for the job in milliseconds. The default value is 0, which denotes no time limit on the execution of the job.
public int getFactoryId()
Returns the DataSerializableFactory factory ID for this class.

Specified by: getFactoryId in interface IdentifiedDataSerializable
public int getClassId()
Returns the type identifier for this class.

Specified by: getClassId in interface IdentifiedDataSerializable
public void writeData(ObjectDataOutput out) throws IOException
Writes object fields to the output stream.

Specified by: writeData in interface DataSerializable

Parameters:
out - output

Throws:
IOException - if an I/O error occurs. In particular, an IOException may be thrown if the output stream has been closed.

public void readData(ObjectDataInput in) throws IOException
Reads fields from the input stream.

Specified by: readData in interface DataSerializable

Parameters:
in - input

Throws:
IOException - if an I/O error occurs. In particular, an IOException may be thrown if the input stream has been closed.

@PrivateApi public void lock()
Used to prevent further mutations of the config after it has been submitted for job execution. It's not a public API and can be removed in the future.
Copyright © 2022 Hazelcast, Inc. All rights reserved.