
DELTAGEN STELLAR DISTRIBUTED RENDERING - CLUSTER SETUP

Administration Guide


CONTENTS

Prerequisites
Overview
Setup
    Set up Artifacts
    Cluster Setup
Cluster Startup
    Start and Shutdown Order
    Remote Node
    Gateway Service
Client Setup
Reference
    Remote Node (Synopsis, Options)
    Gateway Service (Synopsis, Options)
Batch Renderer
    Executing the Batch Renderer
    Parallel distributed Batch Rendering
    Reference Batch Renderer (Synopsis, Options)
Distributed STELLAR in a multi cell cluster environment
    Definition
    Setting up a multi cell cluster environment for Distributed STELLAR
    Setting up a set of connected Remote Nodes in each of the cell networks
    Setting up a Bridge Broker Node for one of the cell networks
    Setting up other Broker Nodes for all other cell networks
    Connect the Distributed Stellar application to the bridge Broker Node
    Broker Node reference (Synopsis, Options)


PREREQUISITES

Distributed Stellar binaries can be executed on the following systems:

Microsoft Windows®
    Windows® 7, 64-bit version: Visual Studio 2015 Redistributable installed; patch KB2533623 installed
    Windows® 10, 64-bit version: Visual Studio 2015 Redistributable installed

Linux®
    CentOS/Red Hat Enterprise Linux 6.5, 64-bit version: package libgomp-4.4.7-4.1 or later
    SUSE Linux Enterprise Server 11 SP3, 64-bit version: package libgomp1-5.2.1+r226025-5.3 or later

OVERVIEW

Gateway Service allows client applications like DELTAGEN to connect to a remote cluster where the Worker Nodes are invisible from outside networks. Connections are established via a head node that is visible from the outside. The service forwards rendering requests to Remote Node services. It may act as a Bridge Node for establishing a cluster of Remote Node services. As a DELTAGEN user you can configure interactive rendering such that your view is also shown in a web browser on a separate computer and screen. The gateway can therefore send streams to stream clients; it acts as a streamer to the browser only if configured to do so.

Remote Node is a service for distributed computation on Cluster Nodes. It turns a cluster node into a distributed computer that processes tasks that operate on distributed data as specified by the DStellar client.

Remote Node services set up a P2P network in order to form a cluster. A Remote Node must establish an initial connection to one other Remote Node or Gateway Service to join. We call this initial entry the Bridge Node for this service. Its end-point needs to be specified when starting a Remote Node. The startup order of the services is, however, arbitrary, as connections to the Bridge Node are retried.


SETUP

Set up Artifacts

The following binaries necessary for the setup can be found in the folder DELTAGEN_Distributed_Stellar, located in the installer source folder.

CloudGate <win64/linux>         GatewayService binaries and dependencies
RemoteNodes <win64/linux>       RemoteNode binaries and dependencies
RenderExecutables               Plug-ins for the distribution system
BatchRenderer <win64/linux>     BatchRenderer binaries and dependencies

Cluster Setup

1. EXTRACT THE “REMOTENODES” ARCHIVE TO A FILE SYSTEM ACCESSIBLE FROM THE CLUSTER MACHINES

2. EXTRACT THE “BATCHRENDERER” ARCHIVE TO A FILE SYSTEM ACCESSIBLE FROM A CLUSTER HEAD NODE

3. EXTRACT THE “CLOUDGATE” ARCHIVE TO A FILE SYSTEM ACCESSIBLE FROM A CLUSTER HEAD NODE

4. EXTRACT THE “RENDER_EXECUTABLES” ARCHIVE TO A FILE SYSTEM ACCESSIBLE FROM A CLUSTER HEAD NODE

The RemoteNodes and CloudGate folders contain the executables (remote_node and cloudgate_server) and their dependencies for 64-bit Linux or Windows® operating systems. The executables can be started from a file system without further installation.

The render_executables folder contains zip archives and .md5 hashes for plug-ins. They do not have to be extracted; they are automatically deployed to the Worker Nodes as needed by the distribution system.

Both Stellar and DStellar require a temporary folder where their dependencies are stored.

1. CREATE A TEMPORARY DIRECTORY

2. SET THE ENVIRONMENT VARIABLE TMPDIR TO POINT TO IT

Example:

export TMPDIR=~/temp

Note that each of the nodes (Head Node and Worker Nodes) must have its own temp folder and its own TMPDIR variable set to it.

It is strongly recommended to place the temp folder in the local disk space of each node rather than on a shared file system, since the latter will degrade performance.
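On Windows® the variable can be set analogously in the shell used to launch the services; a minimal sketch, assuming the services honor TMPDIR on Windows as well and using a hypothetical local folder C:\dstellar_temp:

Example:

rem create a local temp folder and point TMPDIR to it (hypothetical path)
mkdir C:\dstellar_temp
set TMPDIR=C:\dstellar_temp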

CLUSTER STARTUP

Start and shutdown order

The starting process follows this order:

• The distributed rendering system is started on a cluster by first launching one Remote Node as the Bridge Node.

• Remote Node is started on all remaining Worker Nodes, which connect to the Bridge Node.

• The Cloudgate Server is started and connects to the Bridge Node.

For a clean shutdown of the rendering service, all started Remote Node processes and the Cloudgate Server process have to be terminated by sending them a SIGTERM signal.
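On Linux® this can be scripted with standard tools; a minimal sketch, assuming the services run under their default executable names:

Example:

# send SIGTERM to all service processes running on this machine
pkill -TERM cloudgate_server
pkill -TERM remote_node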

Remote node

1. START A FIRST REMOTE_NODE AS BRIDGE NODE

Example:

remote_node -l 9000

2. START REMOTE_NODE ON ALL REMAINING WORKER NODES AND SPECIFY THE BRIDGE NODE AS ENTRY FOR JOINING THE CLUSTER

Example:

remote_node -l 9000 -h <bridge_node hostname or IP address> -p 9000


Options

-l 9000
    Configures the port remote_node listens on for connections from other remote_node services and the cloudgate_server.

-p 9000
    Configures the port for the initial connection to the cluster; use the Bridge Node port.

-h <Bridge Node host name or IP address>
    Configures the host for the initial connection to the cluster; use the Bridge Node host name.
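Starting the service on many Worker Nodes can be scripted; a minimal sketch using ssh, assuming hypothetical host names node01 (the Bridge Node) to node04 and the RemoteNodes folder on a shared file system mounted at /shared:

Example:

# start remote_node on each remaining Worker Node, joining via the bridge on node01
# (TMPDIR is assumed to be set locally on each node as described in the Setup section)
for host in node02 node03 node04; do
    ssh "$host" "/shared/RemoteNodes/remote_node -l 9000 -h node01 -p 9000" &
done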

Gateway service

START THE GATEWAY SERVICE ON A HEAD NODE AND CONNECT IT TO THE BRIDGE NODE

Example:

cloudgate_server -s 5000 -l 0 -h <Bridge Node host name or IP address> -p 9000 -v 0 -e <task executable path> -I <host name of gateway node external interface>

Options

-s 5000
    Specifies the service port for connections from DStellar client applications and streaming clients.

-l 0
    Configures the port cloudgate_server listens on for connections from remote_node services. If set to 0, a free port is chosen automatically.

-v 0/1
    Configures the service to use IPv4 for the service port if set to 0. If set to 1, IPv6 is activated.

-e <task executable path>
    Configures the location of plug-ins for the distribution system. Set it to the task_executables folder containing render_executable-linux64-release.zip/.md5.

-h <Bridge Node host name>
    Configures the host for the initial connection to the cluster; use the Bridge Node host name.

-p 9000
    Configures the port for the initial connection to the cluster; use the Bridge Node port.

-I <host name of gateway node external interface>
    Configures the interface where the cloudgate can be reached from the outside.

Client setup

See the 3DEXCITE DELTAGEN Installation Guide.

REFERENCE

Remote node

Remote Node is a service turning a machine into a distributed computer, used by Stellar clients for distributed rendering.

START REMOTE_NODE ON WORKER NODES IN THE CLUSTER

Synopsis

remote_node [OPTION]...

Options

Program options

--help
    Produces a help message.

-v [ --version ]
    Displays the version information.

-l [ --localport ]
    The local listening port.

-h [ --host ]
    The host name of the Bridge Node used to connect to the cluster.

-p [ --port ]
    The port of the Bridge Node used to connect to the cluster. Default is 0 (no connection to a Bridge Node).


Advanced options

-i [ --interface ]
    The local network interface to listen on.

-w [ --worker ]
    Flag telling whether the node will execute work packages (true by default).

-m [ --monitoringport ]
    The monitoring port (limited feature). Default is 0.

-L [ --logging_config_path ]
    The path of the config file for the logging. Default is logging.conf.

-d [ --dist_config_path ]
    The path of the config file for the distribution service. Default is node_config.json.

Gateway service

This is a service allowing Stellar clients to connect to a remote cluster via one host and a well-known service port.

START CLOUDGATE_SERVER ON A HEAD NODE OF THE CLUSTER

This can be set up, for example, on a node that is visible from the outside and able to resolve the remote_node services running on Worker Nodes.

Synopsis

cloudgate_server [OPTION]...

Options

Program options

--help
    Produces a help message.

-v [ --version ]
    Displays the version information.

-l [ --localport ]
    The cluster-internal listening port.

-s [ --websocketport ]
    The service (web-socket server) listening port.

-I [ --external_interface ]
    Defines the interface where the cloudgate can be reached from the outside; usually set to the machine host name. Default is the host name.

-e [ --execpath ]
    The directory the executables are located in. Default is ./

-i [ --interface ]
    The cluster-internal network interface to listen on for connections from remote_node services.

-h [ --host ]
    The host name of the Bridge Node used to connect to the cluster.

-p [ --port ]
    The port of the Bridge Node used to connect to the cluster. Default is 0.

Advanced options

-L [ --logging_config_path ]
    The path of the config file for the logging. Default is logging.conf.

-a [ --activemonitoringproxy ]
    If set to true, monitor agents can use the cloudgate server as a proxy to connect to the cluster. Default is false.

-m [ --monitoringport ]
    The monitoring port. Default is 0.

-c [ --dist_config_path ]
    The path of the config file for the distribution service.

-n [ --websocketpingtimeout ]
    The time in ms after which a ping is timed out. Default is 60000.

-o [ --websocketconnectiontimeout ]
    The time in ms after which a connection is considered dead. Default is 60000.

-w [ --websocketuseipv6 ]
    Enables use of the IPv6 protocol. Default is 1.

-x [ --websocketmaxmessagesize ]
    The maximum sendable message size in bytes. Default is 10000000000.

-d [ --docroot ]
    The hosted documents root folder; contains a stream client web application. Default is ./docroot.

NOTE: The options -L, -m, -c, -n, -o, -x, and -d are optional settings and currently do not need to be specified.

NOTE: The system uses the default network interface. Only if multiple network interfaces are present do you need to specify the correct one; otherwise there is no need to set the interface options.


BATCH RENDERER

DELTAGEN allows exporting batch jobs and ships a batch renderer command-line program for executing such jobs.

This batch renderer can easily be run under the control of a job scheduler or a render manager. It can use a cluster of Worker Nodes to speed up a job by employing parallel distributed rendering. Using the batch renderer as a parallel program accelerates both the rendering of single images and of image sequences. This requires connecting the batch renderer to a set of Worker Nodes running the Remote Node service.

Alternatively, the batch renderer can run alone without additional workers. This mode may be advantageous if cluster utilization is high and not enough nodes are available at the same time for parallel distributed rendering. By splitting a job at frame boundaries and scheduling each frame sub-job with the batch renderer in non-distributed mode, the rendering of image sequences can still be accelerated, as sketched below.
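As a sketch of this splitting approach, the frame range options -s/--start_frame and -f/--end_frame (see the batch renderer reference below) can be used to schedule fixed-size sub-jobs. In practice each command line would be submitted to the job scheduler rather than run in a local loop; paths are placeholders:

Example:

# render frames 0..99 as ten 10-frame sub-jobs in non-distributed mode
for start in $(seq 0 10 90); do
    batch_renderer -b <batch job file> -e <task executable path> \
                   -r <output directory> -s $start -f $((start + 9))
done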

Executing the batch renderer

The batch renderer can be executed as a stand-alone console application on a single machine in the following way:

batch_renderer -b <batch job file> -e <task executable path> -r <output directory>

Options

-b <batch job file>
    Specifies the path of the batch job file that was produced by DELTAGEN Render Export.

-e <task executable path>
    Configures the location of plug-ins for the distribution system. Set it to the task_executables folder containing render_executable-linux64-release.zip/.md5.

-r <output directory>
    Specifies the path where the output images will be stored.

Once started, the batch renderer reads the content of the batch job file, renders the jobs it contains, and stores the resulting images in the specified directory.

Parallel distributed batch rendering

The batch renderer can also connect to an existing cluster to accelerate the rendering. For this, a cluster of remote_node services must be started first (see the previous sections for how to configure and start a cluster). The batch renderer can then connect to the cluster in the following way:

batch_renderer -b <batch job file> -h <bridge_node hostname or IP address> -p 9000 -e <task executable path> -r <output directory>

Options

-b <batch job file>
    Specifies the path of the batch job file that was produced by DELTAGEN Render Export.

-e <task executable path>
    Configures the location of plug-ins for the distribution system. Set it to the task_executables folder containing render_executable-linux64-release.zip/.md5.

-r <output directory>
    Specifies the path where the output images will be stored.

-p 9000
    Configures the port for the initial connection to the cluster; use the Bridge Node port.

-h <Bridge Node host name or IP address>
    Configures the host for the initial connection to the cluster; use the Bridge Node host name.

NOTE: The options -L, -i, -l, -h, -p, -c, -w, -s, -f, -x, -y, -t, -m, -o and -r are optional settings and can be omitted.


Reference batch renderer

Synopsis

batch_renderer [OPTION]...

Options

Program options

--help
    Produces a help message.

-v [ --version ]
    Displays the version information.

-b [ --batchjobpath ]
    The path of the batch job file.

-e [ --execpath ]
    The directory the task executable plug-ins are located in. Default is the current directory ".".

-h [ --host ]
    The host name of the Bridge Node to connect to when using a cluster.

-p [ --port ]
    The port of the Bridge Node to connect to when using a cluster. Default is 0.

-w [ --worker ]
    The worker flag. If set to 1, this node participates in rendering; otherwise rendering is done only on connected Worker Nodes. Default is 0.

-M [ --meta_info_output ]
    The path (including file name) of the job meta information file. If specified, status for every frame of the job is written to the file.

-o [ --time_limit ]
    The time limit (in minutes) for executing the batch job. The job is aborted after the specified time elapses. Default is 1.

Advanced options

-s [ --start_frame ]
    Sets the start frame of the batch rendering. Default is 0.

-f [ --end_frame ]
    Sets the end frame of the batch rendering. Default is 4294967295.

-x [ --override_image_resolution_x ]
    Sets the override image resolution (x) of the batch rendering. Default is 1.

-y [ --override_image_resolution_y ]
    Sets the override image resolution (y) of the batch rendering. Default is 1.

-t [ --override_iterations ]
    Sets the override iterations (target quality) of the batch rendering. Default is 1.

-r [ --rendered_job_file_path ]
    The output folder where the rendered images will be stored. Default is "./".

-L [ --logging_config_path ]
    The path of the config file for the logging. Default is logging.conf.

-i [ --interface ]
    The local network interface name or IP address to listen on for connections from Worker Nodes in a cluster.

-l [ --localport ]
    The local listening port for connections from Worker Nodes in a cluster. A free port is chosen automatically if set to 0. Default is 0.

-c [ --dist_config_path ]
    The path of the distribution service configuration file.

-m [ --monitoringport ]
    The monitoring port. Default is 0.

-I [ --initial_interval_duration_ms ]
    Sets the initial intermediate image update interval duration in ms. Default is 1000.

-F [ --final_interval_duration_ms ]
    Sets the final intermediate image update interval duration in ms. Default is 5000.

-T [ --nb_render_threads ]
    The number of threads used by the renderer. When set to 0, Stellar uses all resources (the default). When set to a negative value, Stellar uses all threads minus that number; for example, -T -2 leaves two threads free. When set to a positive value, it uses exactly that many threads. This parameter does not influence Worker Nodes; see the remote_node command line help to configure this on Worker Nodes.


DISTRIBUTED STELLAR IN A MULTI CELL CLUSTER ENVIRONMENT

Definition

A Broker Node is a Bridge Node forwarding tasks and data from an Originator Network to a Substitute Network, and task results from the Substitute Network back to the Originator Network.

An Originator Network is a network connecting one or more Broker Nodes and the application.

A Substitute Network is a network connecting Worker Nodes and a Broker Node, running on cluster cells. The Broker Node runs on the head node of the cell, connecting the Substitute Network to the Originator Network.

Broker Nodes, similarly to Gateway Services, allow Distributed Stellar applications to connect to a remote cluster where the Worker Nodes are invisible from outside networks. While a Gateway Service is used in situations where the network between the application and the Gateway Service may have low bandwidth and high latency, Broker Nodes require a fast, dedicated network to connect to the application.

In turn, the Distributed Stellar application is able to connect to multiple Broker Nodes at once. This allows the application to use the computational power of multiple cluster networks (cells, Substitute Networks) that are not interconnected themselves but are all connected by one Originator Network.

Setting up a multi cell cluster environment for Distributed STELLAR

This section describes how to set up Distributed STELLAR on the following example network:

In this simple setup there are two small cells, each with a Head Node. The Application Node cannot directly access any of the Cluster Nodes. Example IP addresses are given next to each node. The Head Nodes have two IP addresses: one identifying the network adapter connected to the cell, the other identifying the network adapter reachable from the Application Node.
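A schematic of the example network, reconstructed from the IP addresses used in the commands below (the addresses of the Application Node and of Cluster Nodes 2 and 4 are not given):

                        Application Node
                               |
                Originator Network (192.168.0.x)
                /                              \
    Cell Head Node 1                  Cell Head Node 2
    192.168.0.11 (originator)         192.168.0.12 (originator)
    10.0.0.10    (substitute)         10.0.0.10    (substitute)
           |                                  |
      Cell 1 network                    Cell 2 network
    Cluster Node 1 (10.0.0.1,         Cluster Node 3 (10.0.0.1,
      bridge Remote Node)               bridge Remote Node)
    Cluster Node 2                    Cluster Node 4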


Setting up a set of connected Remote Nodes in each of the cell networks

1. START BRIDGE REMOTE NODES ON EACH CELL CLUSTER; ON CLUSTER NODE 1 AND CLUSTER NODE 3 EXECUTE:

remote_node -l 9000

2. LAUNCH REMOTE NODES ON THE OTHER CELL CLUSTER NODES, CONNECTING TO THE BRIDGE REMOTE NODE; ON CLUSTER NODE 2 AND CLUSTER NODE 4:

remote_node -l 0 -h 10.0.0.1 -p 9000

Options

-l 9000
    Configures the port the Remote Node listens on for connections from other Remote Node services, Broker Node services, or the application.

-h 10.0.0.1
    Configures the host for the initial connection to the cluster network; in this case we connect to the Bridge Remote Node.

-p 9000
    Configures the port for the initial connection to the cluster network; the Bridge Remote Node's listening port is specified here.

-l 0
    No incoming connections are expected here, so the Remote Node uses an automatically chosen port.

Setting up a Bridge Broker Node for one of the cell networks

ON CELL HEAD NODE 1 EXECUTE:

broker_node -l 9001 -i 192.168.0.11 -H 10.0.0.1 -P 9000 -I 10.0.0.10 -L 0

Options

-l 9001
    Configures the port the Broker Node listens on in the Originator Network for connections from other Broker Node services and the application.

-i 192.168.0.11
    The IP of the network adapter used to connect the Broker Node to the Originator Network.

-P 9000
    Configures the port for the initial connection to the cluster on the Substitute Network side; the Bridge Node port is used.

-H 10.0.0.1
    Configures the host for the initial connection to the Substitute Network's cluster; Cluster Node 1's IP is used.

-I 10.0.0.10
    The IP of the network adapter used to connect the Broker Node to the Substitute Network.

-L 0
    Configures the port the Broker Node listens on in the Substitute Network for connections from remote_node services. 0 means automatic.

Setting up other Broker Nodes for all other cell networks

ON CELL HEAD NODE 2 EXECUTE:

broker_node -l 0 -i 192.168.0.12 -h 192.168.0.11 -p 9001 -H 10.0.0.1 -P 9000 -I 10.0.0.10 -L 0

Options

-h 192.168.0.11
    Configures the host for the initial connection to the Originator Network's cluster; we use the Bridge Broker Node's IP address.

-p 9001
    Configures the port for the initial connection to the cluster on the Originator Network side; the Bridge Broker Node's Originator-side port 9001 is used.

-l 0
    We do not expect any incoming connections from the Originator side here, so the Broker Node uses an automatically chosen port.

-i 192.168.0.12
    The IP of the network adapter used to connect the Broker Node to the Originator Network.

-P 9000
    Configures the port for the initial connection to the cluster on the Substitute Network side; the Bridge Node port is used.

-H 10.0.0.1
    Configures the host for the initial connection to the Substitute Network's cluster; the IP of this cell's bridge Remote Node (Cluster Node 3, which uses the same address within its cell) is used.


-I 10.0.0.10
    The IP of the network adapter used to connect the Broker Node to the Substitute Network.

-L 0
    Configures the port the Broker Node listens on in the Substitute Network for connections from remote_node services. 0 means automatic.

Connect the Distributed Stellar application to the bridge Broker Node

At this point the cluster is ready to receive connections and execute tasks from any Distributed STELLAR application. The application has to connect to the Bridge Broker Node the same way it would connect to a Remote Node, using 192.168.0.11 as the IP and 9001 as the port.

See the 3DEXCITE DELTAGEN Installation Guide for details on how to do that with DELTAGEN.
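For example, the batch renderer described above could likewise be pointed at the Bridge Broker Node; a minimal sketch (paths are placeholders):

Example:

batch_renderer -b <batch job file> -h 192.168.0.11 -p 9001 -e <task executable path> -r <output directory>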

Broker Node reference

Synopsis

broker_node [OPTION]...

Options

Program options

--help
    Produces a help message.

-v [ --version ]
    Displays the version information.

Originator interface options

-l [ --originator_port ]
    The port the Originator listens on.

-i [ --originator_interface ]
    The network interface the Originator listens on.

-h [ --originator_bridge_host ]
    The host name of the bridge node the Originator connects to.

-p [ --originator_bridge_port ]
    The port of the bridge node the Originator connects to. Default is 0 (no connection to a bridge node).

Substitute interface options

-L [ --substitute_port ]
    The port the Substitute listens on.

-I [ --substitute_interface ]
    The network interface the Substitute listens on.

-H [ --substitute_bridge_host ]
    The host name of the bridge node the Substitute connects to.

-P [ --substitute_bridge_port ]
    The port of the bridge node the Substitute connects to. Default is 0 (no connection to a bridge node).

Advanced options

-w [ --worker ]
    Flag telling whether the Broker Node will execute work packages (false by default).

-m [ --originator_monitoringport ]
    The monitoring port for the Originator. Default is 0.

-M [ --substitute_monitoringport ]
    The monitoring port for the Substitute. Default is 0.

-o [ --logging_config_path ]
    The path of the config file for the logging. Default is logging.conf.

-c [ --originator_config_path ]
    The path of the config file for the Originator distribution service. Default is originator_interface_config.json.

-C [ --substitute_config_path ]
    The path of the config file for the Substitute distribution service. Default is substitute_interface_config.json.

NOTE: -w is not a recommended option, especially for low-performance workstations, as it could influence the performance of DELTAGEN in general.