
Eumetcast data exchange system

Martin Raspaud edited this page Jun 12, 2019

Using Trollmoves, it is possible to build a real-time, low-latency file exchange system, to use for example as a backup system for EUMETCast reception between different reception stations.

Trollmoves provides three scripts for setting up such a system: move_it_server, move_it_client, and move_it_mirror.

Setting up data exchange locally

The first step towards redundancy is to set up the data exchange system on the local network. Let's call the server running TelliCast TelliServ and the destination server ProcServ. move_it_server needs to run on TelliServ and move_it_client on ProcServ. Here are example configurations for both scripts and how to start them.

Server side

An example configuration for sharing HRIT 0 degree data from TelliServ:

# this is telliserv_server.ini
[eumetcast-hrit-0deg]
origin=/local_disk/eumetcast/received/unpacked/H-000-{series:_<6s}-{platform_name:_<5s}_______-{channel:_<9s}-{segment:_<9s}-{nominal_time:%Y%m%d%H%M}-{compressed:_<2s}
working_directory=/local_disk/eumetcast/received/unpacked/
compression=xrit
prog=/local_disk/eumetcast/opt/move_it/bin/xRITDecompress
delete=True
request_port=9093
info=sensor=seviri;variant=0DEG
topic=/1b/hrit-segment/0deg
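The origin option is a filename pattern using the same format specs as Python's str.format: for example `_<6s` means left-align and pad to 6 characters with underscores. A quick illustration with plain Python formatting (the metadata values here are made up):

```python
from datetime import datetime

# Made-up metadata for one HRIT segment, just to show the padding
meta = {
    "series": "MSG",
    "platform_name": "MSG4",
    "channel": "VIS006",
    "segment": "000001",
    "nominal_time": datetime(2019, 6, 12, 12, 0),
    "compressed": "C_",
}
pattern = ("H-000-{series:_<6s}-{platform_name:_<5s}_______-"
           "{channel:_<9s}-{segment:_<9s}-"
           "{nominal_time:%Y%m%d%H%M}-{compressed:_<2s}")
# Each field is left-aligned and padded with underscores to a fixed width
print(pattern.format(**meta))
# H-000-MSG___-MSG4________-VIS006___-000001___-201906121200-C_
```

move_it_server uses the pattern in the opposite direction, matching incoming filenames and extracting these fields as metadata for the announcement message.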

To start the process, run:

move_it_server.py telliserv_server.ini

The usage of move_it_server.py shows:

usage: move_it_server.py [-h] [-l LOG] [-p PORT] [-v] [--disable-backlog]
                         config_file

positional arguments:
  config_file           The configuration file to run on.

optional arguments:
  -h, --help            show this help message and exit
  -l LOG, --log LOG     The file to log to. stdout otherwise.
  -p PORT, --port PORT  The port to publish on. 9010 is the default
  -v, --verbose         Toggle verbose logging
  --disable-backlog     Disable glob and handling of backlog of files at
                        start/restart

The port option is useful to have TelliServ provide data to several independent systems (e.g. production vs. test environments).
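For instance, to serve an independent test environment from the same machine, a second server instance could publish on a different port (the port number and the test config file name below are just examples):

```shell
# Production instance, publishing on the default port 9010
move_it_server.py telliserv_server.ini

# Hypothetical test instance, publishing on port 9011 instead
move_it_server.py -p 9011 telliserv_server_test.ini
```

Note that the request_port values in the test configuration would also need to differ from the production ones, since each section needs its own request port.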

Both ports 9010 and 9093 will need to be opened for TCP traffic in the TelliServ firewall.

The available options for the server config file are:

  • origin - the place where the data to exchange is stored.
  • working_directory - the place where to unpack the data if unpacking is needed
  • compression - The type of compression. Can be omitted if the data isn't compressed. Currently supported: xrit and bzip2
  • prog - the xrit decompression program. Can be omitted if the data isn't xrit-compressed
  • delete - whether to delete the file after transfer. If True, the file will be deleted 30 seconds after the transfer
  • request_port - The port the client will perform requests on. This port needs to be different for each section in the config file
  • info - additional metadata items to add to the announcement message (see Principle of operation)
  • topic - the topic of the announcement message (see Principle of operation)
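To show which options are needed in the simplest case, here is a sketch of a server section for data that arrives uncompressed, so that compression, prog and working_directory can be left out (the section name, path, port and topic below are made up):

```ini
# Minimal server section for uncompressed files (illustrative values)
[my-uncompressed-data]
origin=/data/incoming/{platform_name}_{start_time:%Y%m%d%H%M%S}.nc
delete=False
request_port=9094
info=sensor=avhrr
topic=/my/topic
```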

Client side

# This is procserv_client.ini
[eumetcast_hrit_0deg]
providers=TelliServ:9010
destination=ftp:///geo_in/0deg/
login=myusername:mysecretpassword
topic=/1b/hrit-segment/0deg
publish_port=0
platform_name=MSG1:Meteosat-8|MSG2:Meteosat-9|MSG3:Meteosat-10|MSG4:Meteosat-11

To start the process, run:

move_it_client.py procserv_client.ini

usage: move_it_client.py [-h] [-l LOG] [-s STATS] [-v] config_file

positional arguments:
  config_file           The configuration file to run on.

optional arguments:
  -h, --help            show this help message and exit
  -l LOG, --log LOG     The file to log to. stdout otherwise.
  -s STATS, --stats STATS
                        Save stats to this file
  -v, --verbose         Toggle verbose logging

The available options for the client config file are:

  • providers - a whitespace-separated list of addresses (servers) the client should listen to for announcements
  • destination - a destination address for the files. Supported protocols are ftp, ssh, scp and file (regular local copy, default if the protocol is omitted)
  • login - login credentials for the protocol. Can be omitted for local copy
  • topic - topic to filter the announcement messages with
  • publish_port - the port to relay announcement messages on.
  • <other keywords> - other keywords will be matched to the metadata of the announcement messages and the contents will be used to replace the matching metadata values
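The metadata substitution (like the platform_name line in the example above) can be pictured as a simple alias lookup. This is an illustrative sketch in plain Python, not the actual Trollmoves code:

```python
def parse_aliases(option_value):
    """Turn "MSG1:Meteosat-8|MSG2:Meteosat-9" into a lookup dict."""
    return dict(pair.split(":", 1) for pair in option_value.split("|"))

def apply_aliases(metadata, key, aliases):
    """Return a copy of metadata with metadata[key] replaced by its alias,
    if one is defined; unknown values pass through unchanged."""
    new = dict(metadata)
    if key in new:
        new[key] = aliases.get(new[key], new[key])
    return new

aliases = parse_aliases("MSG1:Meteosat-8|MSG2:Meteosat-9|"
                        "MSG3:Meteosat-10|MSG4:Meteosat-11")
message_metadata = {"platform_name": "MSG4", "sensor": "seviri"}
print(apply_aliases(message_metadata, "platform_name", aliases))
# {'platform_name': 'Meteosat-11', 'sensor': 'seviri'}
```

This is how the short platform names used in the HRIT filenames end up as the full satellite names in the relayed announcement messages.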

Principle of operation

The data exchange system implemented in Trollmoves can be summarised in the following steps:

  • A new file arrives
  • The server announces the file
  • The client checks if the file needs to be downloaded
    • if not, the server is notified
    • otherwise, a push request is sent from the client to the server
      • the server pushes the file
      • the server sends a notification to the client
      • the client announces the received file (for further processing for example)
  • The server removes the file after 30 seconds if delete is set to True.

If multiple announcement messages for the same file arrive at the client, the first one to arrive is used. Given network latencies, this ensures that local files are preferred over remote ones.
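The first-announcement-wins behaviour can be sketched as follows (a simplified illustration, not the actual Trollmoves implementation):

```python
class FirstWins:
    """Track announced files: only the first announcement for a given
    file identifier triggers a download; later ones are ignored."""

    def __init__(self):
        self._seen = set()

    def should_download(self, file_uid):
        if file_uid in self._seen:
            return False
        self._seen.add(file_uid)
        return True

dedup = FirstWins()
# The local server's announcement arrives first and wins
print(dedup.should_download("segment-000001-201906121200"))  # True
# The same file announced later by a remote provider is skipped
print(dedup.should_download("segment-000001-201906121200"))  # False
```

Because the announcement from the local provider normally arrives before the one from a remote provider, the client ends up fetching from the closest source.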

Using Trollmoves across different networks

Usually, internal networks are shielded from the outside, so move_it_server and move_it_client cannot communicate directly. For these situations, the move_it_mirror script is available to act as a gateway on a machine that is exposed to the outside. The move_it_mirror script is essentially a back-to-back client and server, relaying announcement messages and forwarding files.

Let's suppose our data is received at location A, and location B wants to receive it.

If A has a gateway server called GateA that it can use to send data to B, we would have the following config file:

# This is gate_a_out.ini
[eumetcast-hrit-0deg]
# server part
origin=/tmp/H-000-{series:_<6s}-{platform_name:_<5s}_______-{channel:_<9s}-{segment:_<9s}-{nominal_time:%Y%m%d%H%M}-{compressed:_<2s}
scheme=ftp
working_directory=/tmp/
#compression=xrit
#prog=/home/a001673/usr/contrib/PublicDecompWT/Image/Linux_32bits/xRITDecompress
delete=True
request_port=9338
# IP address of GateA
request_address=192.168.1.11
info=sensor=seviri
topic=/1b/hrit-segment/0deg

# client part
providers=TelliServOnA:9010
login=myusername:mypassword
destination=ftp:///tmp/0deg/
client_topic=/1b/hrit-segment/0deg
publish_port=0

The configuration file is just a concatenation of a server config and a client config, so the options are the same.

To run the mirror, call:

move_it_mirror.py gate_a_out.ini

Similarly, on the B side, you might need a gateway called GateB to receive data from A, so we have this mirror configuration:

# This is gate_b_in.ini
[eumetcast-hrit-0deg]
# server part
origin=/tmp/H-000-{series:_<6s}-{platform_name:_<5s}_______-{channel:_<9s}-{segment:_<9s}-{nominal_time:%Y%m%d%H%M}-{compressed:_<2s}
scheme=ftp
working_directory=/tmp/
#compression=xrit
#prog=/home/a001673/usr/contrib/PublicDecompWT/Image/Linux_32bits/xRITDecompress
delete=True
request_port=9098
info=sensor=seviri
topic=/1b/hrit-segment/0deg

# client part
# IP address of GateA as B knows it
providers=10.0.0.123:9010
login=username_A_can_use:password_A_can_use
destination=ftp://gateb.b.com/tmp/0deg/
client_topic=/1b/hrit-segment/0deg
publish_port=0
platform_name=MSG1_:Meteosat-8|MSG2_:Meteosat-9|MSG3_:Meteosat-10|MSG4_:Meteosat-11

Finally, B can combine this new source of data with its local reception by adding a provider in its client configuration on ProcServ:

# This is procserv_client.ini
[eumetcast_hrit_0deg]
providers=TelliServOnB:9010 GateB:9010 
destination=ftp:///geo_in/0deg/
login=myusername:mysecretpassword
topic=/1b/hrit-segment/0deg
publish_port=0
platform_name=MSG1:Meteosat-8|MSG2:Meteosat-9|MSG3:Meteosat-10|MSG4:Meteosat-11