Commit febe13f7 authored by Daniel Lee

Merge branch 'next' into 'main'

Next

Closes #868, #869, #854, and #844

See merge request data-tailor/data-tailor!550
parents ae678aa7 6a9bf2e5
......@@ -47,6 +47,7 @@
variables: &variables
EPCT_TEST_DATA_DIR: /data/data-tailor
NETCDFGEN_RUNTIME_DIR: /netcdfgen-runtime
BUILD_OPTS: "--output-folder conda-channel"
LC_ALL: "en_US.utf8"
LANG: "en_US.utf8"
......@@ -144,6 +145,7 @@ build linux plugins:
- git clone -b development https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.eumetsat.int/data-tailor/umarf-plugins/pfd-plugins-master.git epct_plugin_umarf/pfd-plugins-master
- git clone -b development https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.eumetsat.int/data-tailor/umarf-plugins/netcdf.git epct_plugin_umarf/netcdf
- git clone -b development https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.eumetsat.int/data-tailor/umarf-plugins/msgclmk_grib.git epct_plugin_umarf/msgclmk_grib
- if [ -d "$NETCDFGEN_RUNTIME_DIR" ]; then cp -r $NETCDFGEN_RUNTIME_DIR epct_plugin_netcdf_generator/; fi
- conda build $CHANNEL_OPTS $BUILD_OPTS ./epct_plugin_umarf ./epct_plugin_netcdf_generator ./epct_plugin_fist_iasil1c
build linux gdal:
......@@ -183,7 +185,7 @@ build win webui:
build win plugin-gis:
<<: *build_common_win
script:
- conda build %CHANNEL_OPTS_WIN% %BUILD_OPTS% ./epct_plugin_gis ./epct_plugin_netcdf_generator ./epct_plugin_fist_iasil1c
- conda build %CHANNEL_OPTS_WIN% %BUILD_OPTS% ./epct_plugin_gis ./epct_plugin_fist_iasil1c
- IF %errorlevel% NEQ 0 exit /b %errorlevel%
build win gdal:
......@@ -201,6 +203,7 @@ installer linux:
- build linux webui
- build linux plugin-gis
- build linux gdal
- build linux plugins
before_script:
- pip download --no-deps falcon-multipart
- conda index $CI_PROJECT_DIR/conda-channel
......@@ -561,6 +564,8 @@ tests win validation:
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-ERR.xml -m "not longrunning" -k test_ERR validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-INST.xml -m "not longrunning" -k test_INST validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-MSG.xml -m "not longrunning" -k test_MSG validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-SAF.xml -m "not longrunning" -k test_SAF validation_tests\
......@@ -573,6 +578,7 @@ tests win validation:
- $CI_PROJECT_DIR\win-epct-validation-tests-CLI.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-INST.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-SAF.xml
paths:
......@@ -580,6 +586,7 @@ tests win validation:
- $CI_PROJECT_DIR\win-epct-validation-tests-CLI.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-INST.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-SAF.xml
expire_in: 4 days
......@@ -709,7 +716,6 @@ code_quality:
stage: quality
needs: []
dependencies: []
when: manual
tags:
- linux
script:
......@@ -765,9 +771,12 @@ deploy linux:
script:
- conda index $CI_PROJECT_DIR/conda-channel
- ls -R conda-channel
- mkdir -p dtws-assets
- cp -rf $CI_PROJECT_DIR/assets/dtws/* dtws-assets/
artifacts:
paths:
- conda-channel
- dtws-assets
expire_in: 4 days
when: manual
......@@ -780,6 +789,7 @@ deploy to conda:
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_restapi*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_webui*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_plugin_gis*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_plugin_umarf*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/msg-gdal-driver*.tar.bz2
artifacts:
paths:
......
......@@ -4,14 +4,53 @@ All notable changes to this project are documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Unreleased
## [2.10.0]
### Added
- Add UMARF plugins to installer constructor and pipelines (#823)
- Provided descriptions for OpenAPI specs definitions (#808)
- Deploy native to HRIT conda package to EUMETSAT conda repository (#803)
- Added capability to register Custom Plugins with pluggable configuration, products and formats (#802)
- Added support for multiple products and formats, for integration EUMETSAT Prototype Satellite Data Cube plugin (#796)
- Introduced mechanism signaling any ongoing or planned maintenance issues to GUI users (#771)
- HRV to netCDF conversion: Providing calibration data (#442)
### Changed
- Adapt DTWS dockerfile to execute commands as user (#853)
- Temporarily remove support for MTG IRS L1 as format is being updated (#840)
- Also remove the processing-dir (if any) when a customisation is deleted by a user or administrator (#837)
- Improved propagation of Data Store API URL within code (#781)
- Optimised loading of status pane in GUI (#751)
- Switch DTWS logout button to logout link redirection (#668)
- Projection and ROI enhancements from GUI (#597)
- Show feedback for uploading shapefile from GUI (#596)
- Input products downloaded from the Data Store are now automatically removed as the customisation ends (#775)
### Removed
- Removed check_olda_cache option as Data Store input products are now downloaded in temporary processing_dir (#775)
### Fixed
- Properly parse the content disposition header, as products downloaded via the GUI on Chrome contained leading and trailing underscores (#869)
- Ensure netcdf-satellite output files can correctly be downloaded from GUI via DTWS (#865)
- Fix 50-x service pages configuration (#854)
- Conversion of full-orbit AVHRRL1 to EPS-native fixed in DTWS as ROI tab now enabled with roi_by_time (#832)
- Sanitise inputs casting ROI boundary values to float before saving them in processing_info dictionary (#798)
- Native HRV to HRIT now works correctly (#794)
- Native to netCDF did not preserve metadata correctly (#793)
- Ensure the previously active fair-queuing user cap mechanism is no longer applied once turned off (#782)
- HRSEVIRI 1.5 data - ToolsUI, Panoply, & IDV could not find coordinates after converting to netCDF (#759)
- Sanitised configuration reading so that temporary configuration files are not copied to user configuration (#748)
- Fix handling of empty uuids set within Delete customisation requests (#727)
- Avoid truncated information in GUI Log Pane (#709)
- Bug fix: ASCATL1SZ0 and ASCAL1SZR longitude coordinates converted to +/-180 degrees to be read by GDAL (#587)
- ROI by sensing time not available if feature does not contain roi_by_time (#560)
## [2.9.1]
### Changed
- Input products downloaded from the Data Store are now automatically removed as the customisation ends (#775)
## [2.9.0]
### Added
......
This diff is collapsed.
......@@ -112,7 +112,6 @@ MSG products
* - MSGAMVE (EO:EUM:DAT:MSG:AMV, EO:EUM:DAT:MSG:AMV-IODC)
MSGCLAP (EO:EUM:DAT:MSG:CLA, EO:EUM:DAT:MSG:CLA-IODC)
MTGIRSL1
- on Windows environments, output products in BUFR format are not generated because the eccodes tool
(https://confluence.ecmwf.int/display/ECC) is not supported
......@@ -128,3 +127,11 @@ SAF products
OR1SWW025_BUFR (EO:EUM:DAT:QUIKSCAT:REPSW25)
- on Windows environments, input products in BUFR format are not supported because the eccodes tool
(https://confluence.ecmwf.int/display/ECC) is not supported
Inventory Notice
----------------
Licenses and copyright information for software dependencies up to version 2.8.0-rc1
are documented in the file ``NOTICE.txt``, as well as within the ``inventory`` folder.
EUMETSAT Data Tailor Web Service - Operational procedures
*********************************************************
.. contents:: **Table of contents**
This document describes how to maintain the Data Tailor Web Service (DTWS) on
the EUMETSAT ICSI infrastructure.
......@@ -31,6 +34,12 @@ are created (assuming that, as described in the deployment instructions, the sta
- `dtws_dtws-webapp`: the GUI service, on the swarm manager
- `dtws_dtws-worker`: the `worker` services that receive customisation requests, on each worker node.
Note that worker nodes have a fixed number of worker services that share
the resources available on that node while executing work.
It is therefore possible to scale worker services
(as detailed in the `Control - Add a worker node` section below)
independently of the nodes to which they are deployed.
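For example, a minimal sketch of scaling the worker service (the replica count of 8 is arbitrary and only for illustration):

.. code-block::

   docker service scale dtws_dtws-worker=8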
Monitoring
==========
The DTWS provides monitoring capabilities at several levels. They are described below.
......@@ -97,8 +106,8 @@ To monitor a specific Docker service, the following commands may be useful:
.. code-block::
sudo docker service ps $SERVICE_NAME # lists service tasks, can be useful e.g. to detect its latest restart
sudo docker service inspect $SERVICE_NAME # displays information on the service, including start-up parameters
docker service ps $SERVICE_NAME # lists service tasks, can be useful e.g. to detect its latest restart
docker service inspect $SERVICE_NAME # displays information on the service, including start-up parameters
DTWS service logs
......@@ -137,7 +146,7 @@ Remove a worker node
--------------------
To remove a worker node, follow the steps below.
On the master node, check which nodes are part of the swarm:
On the master node, check which nodes are part of the swarm (`M = N-1`):
.. code-block::
......@@ -147,7 +156,8 @@ Then, scale down the worker service:
.. code-block::
sudo docker service scale dtws_dtws-worker=$N
docker service scale dtws_dtws-worker=$M
Now, identify which nodes the worker service is running on:
......@@ -161,13 +171,15 @@ On the worker node to be removed, execute:
.. code-block::
sudo docker swarm leave
docker swarm leave
Then on the manager node, remove the target node:
.. code-block::
sudo docker node rm --force $NODE_ID
docker node rm --force $NODE_ID
Restart service
---------------
......@@ -176,7 +188,8 @@ To restart any docker service running on the swarm, execute from the master node
.. code-block::
sudo docker service update --force $SERVICE_NAME
docker service update --force $SERVICE_NAME
This can be useful e.g. to make the REST API (`dtws_dtws-restapi`) re-read static parts of the configuration,
or to re-deploy the worker service to nodes.
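For example (service names as used elsewhere in this document):

.. code-block::

   docker service update --force dtws_dtws-restapi   # re-read static parts of the configuration
   docker service update --force dtws_dtws-worker    # re-deploy the worker service to nodes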
......@@ -191,17 +204,19 @@ If restarting the entire DTWS, it is important to restart the scheduler service
Temporary service shutdown
--------------------------
In order to momentarily shutdown a service (e.g. for availability testing purposes), first scale it down:
In order to momentarily shutdown a service (e.g. for availability testing purposes), first scale it down
(`M=N-1`):
.. code-block::
sudo docker service scale $SERVICE_NAME=$N
docker service scale $SERVICE_NAME=$M
Then to resume the service, scale it back up:
.. code-block::
sudo docker service scale $SERVICE_NAME=$N
docker service scale $SERVICE_NAME=$N
In order to confirm services are correctly resumed, it is always good practice to check again their status with:
......@@ -222,7 +237,8 @@ To permanently shutdown any docker service running on the swarm, execute from th
.. code-block::
sudo docker service rm $SERVICE_NAME
docker service rm $SERVICE_NAME
Purge user space - console
---------------------------
......@@ -457,6 +473,33 @@ As an example, the following would set the timeout to 10 seconds:
customisation_timeout: 10
Configure a planned maintenance message
---------------------------------------
It is possible to configure a general message that will be displayed as a modal to users connecting to the Data Tailor.
Dismissing the message makes it disappear for the *current session* only (that is, closing the browser window
or opening another one will show the message again, if it is still active).
The message is read from a **markdown-formatted file** named ``maintenance_message.md`` inside the ``$DTWS_CONFIG/epct/`` directory.
If that file is not found, no message is displayed.
So, on the master node, create ``maintenance_message.md`` in the ``$DTWS_CONFIG/epct/`` directory.
To identify the ``etc_dir``, run ``epct info``.
The file can use *markdown metadata* too, for example:
.. code-block:: md
Title: A title (optional)
Summary: A brief description of the problem (optional)
Markdown **text** here.
Simple [markdown syntax](https://daringfireball.net/projects/markdown/syntax) allowed.
[...]
To disable the message, just remove the file.
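As a sketch, assuming ``$DTWS_CONFIG`` points at the active configuration folder, the message file can be created (and later removed) as follows; the title, summary and text are only placeholders:

.. code-block::

   printf '%s\n' \
       'Title: Planned maintenance' \
       'Summary: Short service interruption expected' \
       '' \
       'The Data Tailor Web Service will be **unavailable** on Saturday between 08:00 and 12:00 UTC.' \
       > "$DTWS_CONFIG/epct/maintenance_message.md"

   # to disable the message again, remove the file
   rm "$DTWS_CONFIG/epct/maintenance_message.md"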
Troubleshooting
================
......
......@@ -168,6 +168,7 @@ Make the directory accessible only to `root`:
Copy the certificates in `/etc/ssl/certs/`.
Install and configure `nginx`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
......@@ -205,6 +206,14 @@ Start `nginx` and ensure it will restart after a reboot:
with the sample one provided in `assets/dtws/nginx/nginx-sample.conf
<assets/dtws/nginx/nginx-sample.conf>`_
Custom error page (HTTP 50x)
.............................
We provide a static HTTP error page with a visual theme similar to that of the official EUMETSAT website.
Files to be copied inside the ``/usr/share/nginx/html`` directory are available in the `nginx folder`__.
__ ./nginx
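A sketch of the copy step, assuming the current directory is the folder containing this document (so that the relative `./nginx` link above resolves); the folder may also contain files that are not needed in the web root:

.. code::

   sudo cp -r ./nginx/. /usr/share/nginx/html/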
Master node
-----------
......@@ -212,8 +221,8 @@ Master node
Firewall configuration
^^^^^^^^^^^^^^^^^^^^^^^
The command in this paragraph use the `firewalld` firewall; if it is not installed, install and enable it
as follows:
The commands in this paragraph use the `firewalld` firewall; if it is not installed, install and enable it
as the administrator, as follows:
.. code::
......@@ -235,7 +244,7 @@ Add the required ports:
Docker installation
^^^^^^^^^^^^^^^^^^^
Install `docker`, **using the latest stable version** (not
As the administrator, install `docker`, **using the latest stable version** (not
the one in the standard repository), following these
`instructions <https://docs.docker.com/engine/install/centos/>`_.
......@@ -266,6 +275,17 @@ Then restart Docker:
Also install the `docker-compose` executable as described in
`<https://docs.docker.com/compose/install/#linux>`_
For the rest of the installation using Docker and for
day-to-day docker administration, add one or more users to the `docker` group on the master node
(replace `<username>` with the actual username):
.. code::
usermod -aG docker <username>
Exit and log in again on the master node with the user above to have the group membership updated.
The Docker commands in the following paragraphs are issued by such a user.
Docker swarm setup
^^^^^^^^^^^^^^^^^^
......@@ -316,14 +336,14 @@ Add the required ports:
firewall-cmd --reload
Install and configure Docker as for the master node.
Install and configure Docker and add a standard user to the `docker` group as for the master node.
`docker-compose` is not needed on workers.
Joining the Swarm
^^^^^^^^^^^^^^^^^
To join the swarm, on each node execute:
To join the swarm, on each node execute as the user in the `docker` group:
.. code::
......@@ -337,15 +357,7 @@ where `$SWARM_TOKEN` is the worker token retrieved on the master.
Closing steps
-------------
For day-to-day docker administration, add one or more users to the `docker` group on the master node
(replace `<username>` with the actual username):
.. code::
usermod -aG docker <username>
Exit and login again on the master nodeas the user above to have the group membership updated,
and check the nodes with:
Check the nodes with:
.. code::
......@@ -387,6 +399,7 @@ Ensure that the needed directories exist:
mkdir -p $DTWS_BUILD $DTWS_CONFIG $DTWS_WORKSPACES
Set-up
------
......@@ -406,7 +419,7 @@ Execute the following on the master node (you may need to install `unzip` first)
- Copy and unzip the `artifacts.zip` file to the `$DTWS_BUILD` folder
Rename the unpacked `conda-channel` folder to `conda-channel-latest`
- Also copy the `Dockerfile` and the `docker-compose.yml` files available
from the `assets/dtws <assets/dtws>`_ folder in the repository to `$DTWS_BUILD` .
in the `dtws-assets` folder in the artifacts to `$DTWS_BUILD`.
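A sketch of these steps, assuming `artifacts.zip` was copied to the master node's home directory and unpacks into `conda-channel` and `dtws-assets` folders:

.. code::

   cp ~/artifacts.zip "$DTWS_BUILD"/
   cd "$DTWS_BUILD"
   unzip artifacts.zip
   mv conda-channel conda-channel-latest
   cp dtws-assets/Dockerfile dtws-assets/docker-compose.yml .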
Build the release image
......@@ -480,7 +493,7 @@ The first time the DTWS runs (e.g. in the test above), its configuration is crea
We need to modify the stock configuration to suit the cluster we are going to deploy.
In the `epct.yaml` configuration file:
In the `epct.yaml` configuration file, modify:
``WORKSPACE_DIR``:
Set to ``/var/dtws``
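A sketch of the corresponding entry in `epct.yaml` (the exact key casing and nesting are assumptions based on the description above):

.. code::

   WORKSPACE_DIR: /var/dtws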
......@@ -528,6 +541,12 @@ Inside the `epct-webui.yaml` configuration file, the following data are required
``client_key``
set to the client key of the EPCS app in WSO2
``logout_service_url``
Set to: ``https://api.eumetsat.int/oidc/logout``
``logout_page_url``
Set to: ``https://eoportal.eumetsat.int/cas/logout``
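A sketch of the resulting `epct-webui.yaml` fragment (keys as listed above; the flat layout and the placeholder for the client key are assumptions):

.. code::

   client_key: <client key of the EPCS app in WSO2>
   logout_service_url: https://api.eumetsat.int/oidc/logout
   logout_page_url: https://eoportal.eumetsat.int/cas/logout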
Update the service configuration
================================
......@@ -537,7 +556,7 @@ Once the configuration has been updated, restart the services with:
.. code::
docker service update --force dtws_dtws-restapi
docker service update --force dtws_dtws-webui
docker service update --force dtws_dtws-webapp
Some tests
......@@ -596,7 +615,10 @@ Instructions are provided in the `assets/dtws/GEMS.README.rst
DTWS Update
===========
We have two strategies:
Before updating the DTWS, it is advisable to make a copy of the configuration folder
`$DTWS_CONFIG`.
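A sketch of such a backup (the destination name is only an example):

.. code::

   cp -r "$DTWS_CONFIG" "${DTWS_CONFIG}-backup-$(date +%Y%m%d)"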
To update the service, we have two strategies:
- install a new stack on a new cluster, then update the IP in `nginx` `epcs.conf`,
......@@ -678,3 +700,15 @@ following line:
Save and mount with `mount -a`.
Appendix: Adding a user for log monitoring
==========================================
To add a `dtws-diagnostics` user dedicated to log monitoring, execute the following on the master node
as an administrator:
.. code::
sudo useradd dtws-diagnostics
Also ensure that the `$DTWS_WORKSPACES` directory and its subdirectories can be read by this user.
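One possible sketch of granting that access uses POSIX ACLs (this assumes the `acl` tools are installed and ACLs are acceptable on the target filesystem; content created afterwards may need the command re-applied):

.. code::

   sudo setfacl -R -m u:dtws-diagnostics:rX "$DTWS_WORKSPACES"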
FROM conda/miniconda3-centos7
ARG USER_ID=1000
ARG GROUP_ID=1001
ENV LC_ALL="en_US.utf8"
ENV LANG="en_US.utf8"
ENV CHANNEL_OPTS="--override-channels -c anaconda -c conda-forge -c /mnt/conda-channel"
......@@ -10,6 +14,11 @@ RUN conda update -n base -c defaults conda && \
ARG CONDA_CHANNEL_DIR=conda-channel-latest
ADD $CONDA_CHANNEL_DIR /mnt/conda-channel
RUN conda index /mnt/conda-channel
RUN groupadd --gid $GROUP_ID user
RUN adduser -c '' --uid $USER_ID --gid $GROUP_ID user
USER user
RUN conda create --name dtws python=3.6
RUN ls /mnt/conda-channel
RUN conda init bash && source ~/.bashrc && conda activate dtws && \
......@@ -19,6 +28,7 @@ RUN conda init bash && source ~/.bashrc && conda activate dtws && \
epct_webui \
epct_plugin_gis \
epct_plugin_umarf \
epct_plugin_netcdf_generator \
msg-gdal-driver && \
conda clean --all --yes && \
pip install --no-deps --ignore-installed falcon_multipart
......@@ -49,9 +49,18 @@ Ensure the content of ``/etc/logstash/pipelines.yml`` looks as follows:
Copy the DTWS definition for logstash at `<./dtws-logstash.conf>`__ to ``/etc/logstash/conf.d``.
Change the path values in the `input->file` section adding the paths where `epct` and `epct_restapi` logfiles are. Use wildcard `*` to point to the specific files.
Change the path values in the `input->file->path` section, replacing `/mnt/dtws-shared/dtws-workspaces` with the value of
`$DTWS_WORKSPACES` used during the DTWS deployment.
Also change the path in the `output->file` section adding the path where all the user logs will be aggregated in a single file. This is needed by GEMS System in order to correctly collect all users' logs.
Create the following directory (mentioned in the `output->file->path` section at the
end of ``/etc/logstash/conf.d/dtws-logstash.conf``):
.. code::
mkdir -p /opt/facilities/GEMS/users_logs/
The user logs will be aggregated in a single file in this directory. This is needed by the GEMS system
in order to correctly collect all users' logs.
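As a sketch, the path substitution described above can be applied with `sed` (assuming `$DTWS_WORKSPACES` is set in the current shell and contains no `|` characters):

.. code::

   sudo sed -i "s|/mnt/dtws-shared/dtws-workspaces|$DTWS_WORKSPACES|g" /etc/logstash/conf.d/dtws-logstash.conf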
Start the services
------------------
......@@ -72,19 +81,17 @@ Access and configure Kibana
---------------------------
On the service host, Kibana GUI can be accessed with a browser at ``http://localhost:5601/``.
First step is to create an index:
To load the predefined reports:
- from the terminal run:
- click on the menu on the top left and select "Stack Management", then "Index Patterns"
- search for the `log-epct-*` index, then copy the exact name of the index and click on "New"
- select `date` as the time field.
.. code::
Next we want to load the predefined reports:
curl -X POST localhost:5601/api/kibana/dashboards/import -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d @kibana_dashboard.json
- click on the menu on the top left and select "Stack Management", then "Saved Object"
- click on "Import", and import the report `<./kibana_loglevel_module_monitor.ndjson>`__ , then click on "Done"
- click again on "Import", and import the report `<./kibana_total_user_customization_time.ndjson>`__ . Click on "Done".
where `<./kibana_dashboard.json>`__ is the dashboard json file in the `assets` folder.
Reports are now accessible by clicking on the link to the imported objects.
Reports should now be accessible by clicking on the imported `DataTailor` dashboard in the dashboards section.
To set a single specific dashboard as the default landing page when connecting to the Kibana service:
......
......@@ -6,24 +6,29 @@ its two basic processes.
Prerequisites
-------------
JDK 8
JDK 8 (OpenJDK may be used)
Installation
------------
Create the installation folder (e.g. `/opt/facilities/GEMS`) and inside it unzip the GEMS release::
Create the installation folder (e.g. `/opt/facilities/GEMS`) and inside it unzip the GEMS release:
mkdir /opt/facilities/GEMS;
.. code::
mkdir -p /opt/facilities/GEMS;
cd /opt/facilities/GEMS
tar -xzf GEMS-X.Y.Z-client.tar.gz
Then create some sub directories::
Then create some sub directories:
.. code::
mkdir ftp-out
mkdir log
mkdir mod
mkdir -p users_logs  # the directory where logstash saves the aggregated user logs
`ftp-out` is usually the place where the GEMS Client writes events to files and then reads them back to send them to the GEMS Server.
`log` is where the GEMS Client will write its logs.
......@@ -45,7 +50,7 @@ The main information which the LogFileAgent will read in the specific configurat
- Path where to save the generated events in order for the GEMS sender to read them
- Rules to generate events from the analysed logs
- Format of the events to create
- Facilities to be used (MME_DS_OPE_DATATAILOR)
- Facility to be used (MME_DS_OPE_DATATAILOR)
The main information which the Sender will read in the specific configuration XML file are:
......@@ -63,11 +68,6 @@ Before running both processes the relative java classes need to be available in
export CLASSPATH=/opt/facilities/GEMS/lib/*
Then in two different bash tabs run::
java -Xmx250m GEMS_Sender /opt/facilities/GEMS/conf/sender-legacy.xml
java -Xmx250m GEMS_LogFileAgent /opt/facilities/GEMS/conf/log-file-agent.xml
GEMS Client: Custom Configuration and Run with the DTWS
-------------------------------------------------------
......@@ -76,25 +76,30 @@ In order to integrate the GEMS Client to the DTWS, the custom configuration file
available at `assets/dtws/GEMS_log-file-agent.xml <assets/dtws/GEMS_log-file-agent.xml>`_ can be used,
modifying the following lines:
- l.36: insert the path where the Data Tailor REST API logs reside. Use regular expressions to eventually point to more than one file.
- l.58: insert the path where the Data Tailor users logs reside (this is generated by ELK Logstash, find the path in the `output->file` section of its `assets/dtws/dtws-logstash.conf <assets/dtws/dtws-logstash.conf>`_ configuration file.
- replace `/mnt/dtws-shared/dtws-workspaces/` in the `path` attribute of the `logFileAgent->list->file` `LOG-RESTAPI` element
with the value used for `$DTWS_WORKSPACES` during the DTWS installation. This is the path where the Data Tailor
REST API logs reside; regular expressions are used to point to multiple files.
- ensure the folder in the `path` attribute of the `logFileAgent->list->file` `LOG-USERS` element
is the same as the one in the `output->file` section of the `assets/dtws/dtws-logstash.conf <assets/dtws/dtws-logstash.conf>`_
logstash configuration file.
The custom configuration file for the Sender available at `assets/dtws/GEMS_sender.xml <assets/dtws/GEMS_sender.xml>`_
then also needs to be modified:
- l.11: insert the path to the sender's log to populate at run time
- l.21: insert the receiving GEMS server IP
- l.22: insert a valid GEMS user
- l.23: insert the password for the user specified in l.24
- l.11: verify that the path of the sender's log file (populated at run time) is the same as the one configured in the
`GEMS_log-file-agent.xml` file
- `sender->sink->ftp` element, `ipAddress` attribute: insert the receiving GEMS server IP
- `sender->sink->ftp` element, `user` attribute: insert a valid GEMS user
- `sender->sink->ftp` element, `password` attribute: insert the password for the GEMS user.
Before running both processes, the relevant java classes need to be available in the `CLASSPATH` environment variable::
export CLASSPATH=/opt/facilities/GEMS/lib/*
Then in two different bash tabs, with the main path where the Data Tailor is installed as $DATA_TAILOR, run::
Then in two different bash tabs, where `CLASSPATH` has been set as above, run::
java -Xmx250m GEMS_Sender $DATA_TAILOR/assets/dtws/GEMS_sender.xml
java -Xmx250m GEMS_LogFileAgent $DATA_TAILOR/assets/dtws/GEMS_log-file-agent.xml
java -Xmx250m GEMS_Sender <path to GEMS_sender.xml>
java -Xmx250m GEMS_LogFileAgent <path to GEMS_log-file-agent.xml>
......@@ -33,7 +33,7 @@
<file name="LOG-RESTAPI"
charset="UTF-8"
strategyBean="regexStrategy"
path="/opt/facilities/GEMS/sandbox/epct_restapi_.+\.log"
path="/mnt/dtws-shared/dtws-workspaces/epct_restapi_.+\.log"
checkBehaviour="Contain"
transferBean="localEventTransfer"
text=" RESTAPI LINE: "
......@@ -55,7 +55,7 @@
<file name="LOG-USERS"
charset="UTF-8"
strategyBean="regexStrategy"
path="/opt/facilities/GEMS/sandbox/aggregate_users_processing_.+\.log"
path="/opt/facilities/GEMS/users_logs/aggregate_users_processing_.+\.log"
checkBehaviour="Contain"
transferBean="localEventTransfer"