Commit 191acd40 authored by Daniel Lee

Merge branch '2.9' into 'master'

2.9

Closes #747, #740, #741, #695, #712, #558, and #561

See merge request data-tailor/data-tailor!484
parents aeb21f59 2217bffe
......@@ -11,6 +11,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# As a default, for commits on a branch which modify code in the
# epct*, co* and docker folders,
# the pipeline runs all Linux build and test jobs,
# with the exception of epct-webui and short and long validation tests.
#
# If commits modify code elsewhere, the default behaviour is to only run
# quality check jobs code_quality and code_quality_radon on Linux machines.
#
#
# For Merge Requests, all jobs are launched with the exception of long validation tests.
#
#
# On Tags, all jobs are launched, including long validation tests.
#
#
# The behaviour can be modified as desired to launch custom pipelines,
# by setting the following input variable keys to an integer value at
# https://gitlab.eumetsat.int/data-tailor/data-tailor/pipelines/new :
#
# BUILD_LINUX: Launches builds of main Data Tailor packages on Linux machines. Only includes main test jobs.
# START_WIN: Launches builds of main Data Tailor packages both on Linux and Windows machines. Includes main test jobs.
# CI_MERGE_REQUEST_ID: Simulates pipeline launched at merge request time. Excludes long validation tests on all OS.
# CI_COMMIT_TAG: Simulates pipeline launched at tag time. Includes all pipeline jobs.
#
#
# In order to optionally build and install the UMARF plugins,
# the CI_JOB_TOKEN variable must be set to a valid token
# which allows downloading the UMARF plugins code from the repository.
#
# The EUM_CONDA_TOKEN variable must instead be set in order to optionally activate the
# upload of deploy artifacts to EUMETSAT conda channels
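#
# As an illustrative sketch only (not part of this configuration), a custom
# pipeline with one of the variable keys above could also be launched through
# the GitLab pipeline trigger API; the trigger token and project id below are
# placeholders:
#
#   curl -X POST \
#        -F "token=<trigger-token>" \
#        -F "ref=master" \
#        -F "variables[BUILD_LINUX]=1" \
#        "https://gitlab.eumetsat.int/api/v4/projects/<project-id>/trigger/pipeline"
#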
variables: &variables
EPCT_TEST_DATA_DIR: /data/data-tailor
......@@ -33,6 +66,7 @@ stages:
- installer
- test
- test_installer
- test_proc
- quality
- deploy
......@@ -60,7 +94,7 @@ pdf docs:
- |
for f in */; do cd $f;
if [ -f Makefile ]; then
make latexpdf LATEXOPTS="-interaction nonstopmode" || :; make latexpdf LATEXOPTS="-interaction nonstopmode" || :; cp _build/latex/*.pdf ../../pdf-docs/; cd -;
make latexpdf LATEXOPTS="-interaction nonstopmode" && make latexpdf LATEXOPTS="-interaction nonstopmode" && cp _build/latex/*.pdf ../../pdf-docs/; cd -;
else
cd -;
fi;done
......@@ -79,7 +113,6 @@ pdf docs:
rules:
- changes:
- epct*/**/*
- assets/**/*
- co*/**/*
- docker/**/*
- if: $BUILD_LINUX
......@@ -106,8 +139,6 @@ build linux plugin-gis:
build linux plugins:
<<: *build_common_linux
rules:
- if: $BUILD_PLUGINS
script:
- conda install git
- git clone -b development https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.eumetsat.int/data-tailor/umarf-plugins/pfd-plugins-master.git epct_plugin_umarf/pfd-plugins-master
......@@ -192,7 +223,6 @@ installer linux:
rules:
- changes:
- epct*/**/*
- assets/**/*
- co*/**/*
- docker/**/*
- if: $CI_MERGE_REQUEST_ID
......@@ -313,16 +343,21 @@ tests linux epct-webui:
tests linux validation:
<<: *test_common_linux
needs:
- build linux core
- build linux webui
- build linux plugin-gis
- build linux gdal
- build linux plugins
rules:
- if: $BUILD_LINUX
- if: $CI_MERGE_REQUEST_ID
- if: $CI_COMMIT_TAG
script:
- conda install -c eumetsat perl libiconv libjpeg-turbo-cos6-x86_64 libpng eugene && ln -s $CONDA_PREFIX/x86_64-conda_cos6-linux-gnu/sysroot/usr/lib64/libjpeg.so.62 $CONDA_PREFIX/lib/libjpeg.so.62
- conda install -y $CHANNEL_OPTS epct epct_plugin_gis msg-gdal-driver epct_restapi
- conda install -y $CHANNEL_OPTS epct epct_plugin_gis msg-gdal-driver epct_restapi epct_plugin_umarf
- pip install --no-deps --ignore-installed falcon_multipart
- epct info
- pytest --durations=0 --junitxml=$CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml -k test_PROC -m "not longrunning" validation_tests/
- pytest --durations=0 --junitxml=$CI_PROJECT_DIR/linux-epct-validation-tests-API.xml -k test_API -m "not longrunning" validation_tests/
- pytest --durations=0 --junitxml=$CI_PROJECT_DIR/linux-epct-validation-tests-CLI.xml -k test_CLI -m "not longrunning" validation_tests/
- pytest --durations=0 --junitxml=$CI_PROJECT_DIR/linux-epct-validation-tests-EPS.xml -k test_EPS -m "not longrunning" validation_tests/
......@@ -337,7 +372,6 @@ tests linux validation:
- $CI_PROJECT_DIR/linux-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-SAF.xml
paths:
- $CI_PROJECT_DIR/linux-epct-validation-tests-API.xml
......@@ -345,7 +379,6 @@ tests linux validation:
- $CI_PROJECT_DIR/linux-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml
- $CI_PROJECT_DIR/linux-epct-validation-tests-SAF.xml
expire_in: 4 days
when: always
......@@ -458,6 +491,37 @@ tests linux installer:
- epct-webui/cypress/videos/**/*.mp4
- epct-webui/cypress/screenshots/**/*.png
tests linux proc validation:
<<: *common_linux
stage: test_proc
before_script:
- conda index $CI_PROJECT_DIR/conda-channel
- conda create --name epct-tests python=3.6 pytest pytest-cov
- conda init bash && source ~/.bashrc && conda activate epct-tests
needs:
- build linux core
- build linux webui
- build linux plugin-gis
- build linux gdal
- tests linux validation
rules:
- if: $BUILD_LINUX
- if: $CI_MERGE_REQUEST_ID
- if: $CI_COMMIT_TAG
script:
- conda install -c eumetsat perl libiconv libjpeg-turbo-cos6-x86_64 libpng eugene && ln -s $CONDA_PREFIX/x86_64-conda_cos6-linux-gnu/sysroot/usr/lib64/libjpeg.so.62 $CONDA_PREFIX/lib/libjpeg.so.62
- conda install -y $CHANNEL_OPTS epct epct_plugin_gis msg-gdal-driver epct_restapi
- pip install --no-deps --ignore-installed falcon_multipart
- pytest --durations=0 --junitxml=$CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml -k test_PROC -m "not longrunning" validation_tests/
artifacts:
reports:
junit:
- $CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml
paths:
- $CI_PROJECT_DIR/linux-epct-validation-tests-PROC.xml
expire_in: 4 days
when: always
# WINDOWS test section
.tests_common_win: &test_common_win
......@@ -489,8 +553,6 @@ tests win validation:
<<: *test_common_win
script:
- conda install -y %CHANNEL_OPTS_WIN% epct epct_restapi epct_plugin_gis msg-gdal-driver && epct info && pip install --no-deps --ignore-installed falcon_multipart
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-PROC.xml -m "not longrunning" -k test_PROC validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-API.xml -m "not longrunning" -k test_API validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-CLI.xml -m "not longrunning" -k test_CLI validation_tests\
......@@ -512,7 +574,6 @@ tests win validation:
- $CI_PROJECT_DIR\win-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-PROC.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-SAF.xml
paths:
- $CI_PROJECT_DIR\win-epct-validation-tests-API.xml
......@@ -520,7 +581,6 @@ tests win validation:
- $CI_PROJECT_DIR\win-epct-validation-tests-EPS.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-ERR.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-MSG.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-PROC.xml
- $CI_PROJECT_DIR\win-epct-validation-tests-SAF.xml
expire_in: 4 days
when: always
......@@ -602,12 +662,54 @@ tests win epct-restapi:
expire_in: 4 days
when: always
tests win proc validation:
variables:
<<: *variables
EPCT_TEST_DATA_DIR: C:\data\data-tailor
stage: test_proc
tags:
- windows
before_script:
- mkdir -p %CI_PROJECT_DIR%\conda-channel
- conda index %CI_PROJECT_DIR%\conda-channel
- conda create --name epct-tests python=3.6 pytest pytest-cov
- conda activate epct-tests
retry:
max: 2
when: runner_system_failure
needs:
- build win core
- build win webui
- build win plugin-gis
- build win gdal
- tests win validation
script:
- conda install -y %CHANNEL_OPTS_WIN% epct epct_restapi epct_plugin_gis msg-gdal-driver && epct info && pip install --no-deps --ignore-installed falcon_multipart
- pytest --durations=0 --junitxml=%CI_PROJECT_DIR%\win-epct-validation-tests-PROC.xml -m "not longrunning" -k test_PROC validation_tests\
- IF %errorlevel% NEQ 0 setx ERROR_FOUND 1
- IF DEFINED ERROR_FOUND exit /b 1
artifacts:
reports:
junit:
- $CI_PROJECT_DIR\win-epct-validation-tests-PROC.xml
paths:
- $CI_PROJECT_DIR\win-epct-validation-tests-PROC.xml
expire_in: 4 days
when: always
rules:
- if: $CI_COMMIT_TAG
- if: $CI_MERGE_REQUEST_ID
- if: $START_WIN
# quality section
include:
- template: Code-Quality.gitlab-ci.yml
code_quality:
stage: quality
needs: []
dependencies: []
when: manual
tags:
- linux
script:
......@@ -645,6 +747,9 @@ code_quality:
code_quality_radon:
<<: *common_linux
stage: quality
needs: [ ]
dependencies: [ ]
when: manual
script:
- conda install -y radon
- radon raw -s $PWD > radon.txt
......@@ -652,8 +757,8 @@ code_quality_radon:
paths:
- radon.txt
# deploy section
# FIXME: "needs" here should be different
deploy linux:
<<: *common_linux
stage: deploy
......@@ -666,6 +771,21 @@ deploy linux:
expire_in: 4 days
when: manual
deploy to conda:
<<: *common_linux
stage: deploy
script:
- conda install anaconda-client
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct-*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_restapi*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_webui*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/epct_plugin_gis*.tar.bz2
- anaconda -t $EUM_CONDA_TOKEN upload conda-channel/*/msg-gdal-driver*.tar.bz2
artifacts:
paths:
- conda-channel
when: manual
deploy win:
stage: deploy
tags:
......
......@@ -4,13 +4,48 @@ All notable changes to this project are documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.9.0]
### Added
- Add instructions to launch selected pipeline jobs (#739)
- Add HRSEVIRI archive product type (#730)
- Requirement, configuration and test procedure for GEMS monitoring (#680, #690, #718)
- Test and document customisation timeout functionality (#666, #674)
- Add functional tests pipeline job (#660)
- Add support for logs produced for ELK (#640)
- Allow manual deployment to EUMETSAT conda channel and use it in installs (#558)
### Changed
- Reformat creation of dask client (#732)
- Improve api.ensure_config to avoid rereading the configuration when an API function internally calls another function (#729)
- For the operational phase, Docker now runs as a non-privileged user (#712)
- Updated the launching of the scheduler service with the Data Tailor plugin (#677)
- Moved support functions from api.py to a new dedicated python module (#664)
- Automated error test EPCT.ERR.TP.01.07 (#595)
- Long-queued customisations now fail when the token expires (#549)
- Split epct_restapi/__init__.py in separate python files to improve code readability and efficiency (#467)
### Fixed
- Timeout scheduler leaking memory (#731)
- Xrit DT plugin missing output files (#726)
- Add test data for the rect2lpToOpenMTP plugin (#723)
- UMARF plugin missing environment variable (#722)
- UMARF plugins tests (#696)
- Capturing exit status in functional tests Windows pipeline job (#682)
- Fixed and improved tests for manually killing a process (#675)
- Clean scheduler plugin swap variables (#672)
- report_quota path fix (#636)
- ELK stack fixes and testing (#635)
- Fix bug where some IASISND02 products generated an error during processing (#616)
- Improved validity check, fixing an error arising when a basket config address is used as the input product path (#456)
## [2.8.1]
### Added
- Added configurable parameter about EUMETSAT Data Store URL netloc (#700)
### Fixed
- Preventing re-projecting geostationary products to geostationary projections (#688)
- Preventing re-projecting geostationary products to geostationary projections (#688, #684)
- Update to outdated deployment instructions for DTWS (#683)
- Logging of quota overflows, even though logs contribute to the user quota (#642)
- Report quota path fix (#636)
......@@ -36,7 +71,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
- Optimized execution of long tests in CI pipelines (#665)
- Making the dask dashboard of the DTWS accessible and informative (#659)
- Reformatted code to standard (#654)
- Applied to Python code the [Black](https://pypi.org/project/black/) coding style and f-string syntax (#654)
- Restructured DTWS fair queuing logic and handling of exceptions (#632, #633)
- Rename "output_dir" variable "root_path", remove ref to "test" deployment in epct-restapi/__init__.py (#607, #621)
- Webapp GUI changes for admin user to manage customisations (#600)
......
......@@ -10,10 +10,17 @@ The following Operating Systems are supported:
- Red Hat Enterprise Linux 7 64bit
- Windows 10 Pro 64bit.
Installing the EUMETSAT Data Tailor on a host which has no internet connection is also possible and is described
in the `Installing EUMETSAT Data Tailor without an internet connection`_ paragraph. This procedure is currently available on
Linux machines only.
There are three ways to install the EUMETSAT Data Tailor:
* from the conda packages in the EUMETSAT Anaconda repository (requires an Internet connection),
as described below in `Installation from the Anaconda repository`_
* from a stand-alone installer on a host with no or unreliable Internet connection, as described below in
the `Installation without an internet connection`_ section (Linux only).
* from conda packages downloaded as artifacts of CI pipelines on the target machine (mostly for testing purposes),
as described in `Installation from an 'artifacts' file`_.
Section `Test and use the Data Tailor installation`_ describes a few basic tests to check that the installation
is working.
Hardware pre-requisites
~~~~~~~~~~~~~~~~~~~~~~~~
......@@ -25,26 +32,95 @@ The Installation of the EUMETSAT Data Tailor needs at least:
- 4 GB of free memory.
Software pre-requisites
~~~~~~~~~~~~~~~~~~~~~~~~
Installation from the Anaconda repository
-----------------------------------------
Installation requires:
Pre-requisites
~~~~~~~~~~~~~~
- the Data Tailor `conda` packages. The packages for a given release
are currently available as a single `zip` file.
Installation requires:
- `conda`, installed as described
`here <https://conda.io/projects/conda/en/latest/user-guide/install/index.html>`_.
`here <https://conda.io/projects/conda/en/latest/user-guide/install/index.html>`_
- a connection to the internet
Installation
~~~~~~~~~~~~~
Start by creating a new `conda` environment. Let's call it `epct-2.5`, but
any valid name would do (change the following instructions accordingly)::
conda create -n epct-2.5 python=3.6
Activate the environment::
conda activate epct-2.5
- On Windows, execute::
conda install -y -c defaults -c conda-forge -c eumetsat epct epct_restapi epct_webui epct_plugin_gis msg-gdal-driver
- On Linux, execute::
conda install -y --override-channels -c anaconda -c conda-forge -c eumetsat epct \
epct_restapi epct_webui epct_plugin_gis msg-gdal-driver
Then install `falcon-multipart`::
Additional software pre-requisites for the core components (Windows 10 Pro)
---------------------------------------------------------------------------
pip install --no-deps falcon-multipart
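
As a quick check (a suggestion, not part of the formal procedure), the installation
can be verified with the `epct info` command, which the CI pipelines also run after
installing the packages::

    epct info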
Installation without an internet connection
--------------------------------------------
It is possible to install the EUMETSAT Data Tailor on a Linux machine
without a connection to the internet.
Pre-requisites
~~~~~~~~~~~~~~
- Visual Studio 2015 with C++ installed
The installation requires the following installer files, that can be obtained
from EUMETSAT:
* the installer proper; this is a bash executable named `data-tailor-<version-identifier>.sh`
* a Python wheel package for the `falcon_multipart` package.
Installing EUMETSAT Data Tailor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Installation
~~~~~~~~~~~~
Create a `/tmp/conda-channel` folder and copy the `falcon-multipart` Python wheel package in it
(replace `</path/to/package_folder>` with the path to the folder where the installer files are)::
mkdir /tmp/conda-channel
cp </path/to/package_folder>/falcon_multipart-*.whl /tmp/conda-channel
Then run the installer and follow the instructions::
bash </path/to/package_folder>/data-tailor-*.sh
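
If the installer is built with `conda constructor` (an assumption to be confirmed
with EUMETSAT), it should also accept the standard unattended-installation flags of
such installers, e.g.::

    bash </path/to/package_folder>/data-tailor-*.sh -b -p $HOME/data-tailor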
Installation from an 'artifacts' file
--------------------------------------
Note: these instructions are mostly useful in testing environments or in legacy installations (pre 2.9).
Pre-requisites
~~~~~~~~~~~~~~
Installation requires:
- the Data Tailor `conda` packages, downloaded as a single `zip` file
from the project CI pipelines
- `conda`, installed as described
`here <https://conda.io/projects/conda/en/latest/user-guide/install/index.html>`_.
Installation
~~~~~~~~~~~~~
Start by creating a new `conda` environment. Let's call it `epct-2.5`, but
any valid name would do (change the following instructions accordingly)::
......@@ -72,10 +148,11 @@ Then install `falcon-multipart`::
pip install --no-deps falcon-multipart
Using the Data Tailor
~~~~~~~~~~~~~~~~~~~~~~~~
Activate the environment created above first, e.g.::
Test and use the Data Tailor installation
-----------------------------------------
Activate the environment created in one of the installation methods above, e.g.::
conda activate epct-2.5
......@@ -96,13 +173,14 @@ The GUI can be started by running::
epct_webui
:Note: an alternative way to launch the Data Tailor commands is to pass them to conda as follows::
:Note: an alternative way to launch the Data Tailor commands is to
pass them to conda as follows::
conda run -n epct-2.5 epct_webui
conda run -n epct-2.5 epct_webui
Installing a customised EUMETSAT Data Tailor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Installing a customised EUMETSAT Data Tailor
----------------------------------------------
To install a customised version of the EUMETSAT Data Tailor,
change the `conda install` line in the `Installing EUMETSAT Data Tailor`
......@@ -111,26 +189,5 @@ and at least one customisation plugin (typically `epct_plugin_gis`) are
required.
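
For instance, a minimal sketch of such a customised install line on Linux, reusing
the channel options from the standard instructions above, could be::

    conda install -y --override-channels -c anaconda -c conda-forge -c eumetsat epct epct_plugin_gis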
Installing EUMETSAT Data Tailor without an internet connection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to install the EUMETSAT Data Tailor on a Linux machine
without a connection to the internet.
The following installer files need to be available; they can be obtained
from EUMETSAT:
* the installer proper; this is a bash executable named `data-tailor-<version-identifier>.sh`
* a Python wheel package for the `falcon_multipart` package.
Create a `/tmp/conda-channel` folder and copy the falcon-multipart Python wheel file in it::
mkdir /tmp/conda-channel
cp </path/to/unzipped_folder>/falcon_multipart-*.whl /tmp/conda-channel
Then run the installer and follow the instructions::
bash </path/to/folder>/data-tailor-*.sh
......@@ -73,6 +73,8 @@ General limitations
* - projection and ROI extraction
- Extraction of Region of Interest (ROI) can be performed only if the customisation required a
new output projection
* - refresh of an access token
- the access token is not refreshed after it expires while a submitted customisation is being processed
Limitations about input products
......@@ -92,6 +94,16 @@ EPS Native products
in the EPS-native format, the NetCDF4 output file includes only datasets about CO data,
according to the corresponding product disseminated via Eumetsat DataCentre.
MFG products
''''''''''''
.. list-table::
:header-rows: 0
:widths: 80 80
* - MVIRI (EO:EUM:DAT:0080, EO:EUM:DAT:0081, EO:EUM:DAT:0082)
- MVIRI Level 1.5 Climate Data Record - MFG (0/57/63 degree) products are not supported even
though they are available on the Data Store
MSG products
''''''''''''
.. list-table::
......@@ -110,7 +122,6 @@ SAF products
:header-rows: 0
:widths: 80 80
* - OAS025_BUFR (EO:EUM:DAT:METOP:OAS025)
OASWC12_BUFR (EO:EUM:DAT:METOP:OSI-104)
OR1ASWC12_BUFR (EO:EUM:DAT:METOP:OSI-150-B)
......
......@@ -31,19 +31,18 @@ are created (assuming that, as described in the deployment instructions, the sta
- `dtws_dtws-webapp`: the GUI service, on the swarm manager
- `dtws_dtws-worker`: the `worker` services that receive customisation requests, on each worker node.
Monitoring
==========
The DTWS provides monitoring capabilities at several levels. They are described below.
DTWS performance (ELK stack)
----------------------------
DTWS performance can be monitored by opening `<service node internal ip>:5607` with a browser,
DTWS performance can be monitored by opening `$SERVICE_NODE_INTERNAL_IP:5607` with a browser,
then accessing the Dashboard.
Scheduler and worker health
----------------------------
The health of scheduler and workers can be monitored by opening `<service node internal ip>:8787` with a browser.
The health of scheduler and workers can be monitored by opening `$SERVICE_NODE_INTERNAL_IP:8787` with a browser.
This is mainly useful to:
- check if the worker processes are up and running
......@@ -54,11 +53,11 @@ Note that the number of worker processes should be the number of worker nodes ti
Docker Swarm infrastructure
---------------------------
The composition and health of the Docker Swarm infrastructure can be monitored
by accessing the manager node as a superuser (user with sudo privileges), with:
by accessing the manager node as a user in the `docker` group, with:
.. code-block::
sudo docker node ls
docker node ls
For a swarm in nominal conditions, the output will show all nodes `Ready` and `Active`,
with one node with Manager Status "Leader".
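
For instance, a healthy two-node swarm might look like this (IDs and hostnames are hypothetical):

.. code-block::

   ID            HOSTNAME    STATUS  AVAILABILITY  MANAGER STATUS
   abc123def456  dtws-mgr-1  Ready   Active        Leader
   xyz789ghi012  dtws-wrk-1  Ready   Active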
......@@ -67,11 +66,11 @@ with one node with Manager Status "Leader".
DTWS Docker services
--------------------
The status of Docker services can be monitored
by accessing the manager node as a superuser (user with sudo privileges), with:
by accessing the manager node as a user in the `docker` group, with:
.. code-block::
sudo docker service ls
docker service ls
In nominal conditions, the output will show:
......@@ -98,8 +97,13 @@ To monitor a specific Docker service, the following commands may be useful:
.. code-block::
sudo docker service ps <service name> # lists service tasks, can be useful e.g. to detect its latest restart
sudo docker service inspect <service name> # displays information on the service, including start-up parameters
docker service ps $SERVICE_NAME       # lists service tasks, can be useful e.g. to detect its latest restart
docker service inspect $SERVICE_NAME  # displays information on the service, including start-up parameters
DTWS service logs
......@@ -107,31 +111,32 @@ DTWS service logs
The DTWS generates usage logs in the following files (paths assume that the defaults in the deployment guide have been
used):
- `/mnt/dtws-shared/dtws-workspace/epct_restapi_<YYYYMMDD>.log`: logs of the REST API
- `/mnt/dtws-shared/dtws-workspace/<username>/logs/*.log`: logs of user's customisations; the naming convention is:
`<username>_<timestamp>_<plugin>_<product_type>_<applied customisations>_<customisation_id>.log`
- `/mnt/dtws-shared/dtws-workspace/epct_restapi_$YYYYMMDD.log`: logs of the REST API
- `/mnt/dtws-shared/dtws-workspace/$DTWS_USER/logs/*.log`: logs of user's customisations; the naming convention is:
`"$DTWS_USER"_"$TIMESTAMP"_"$PLUGIN"_"$PRODUCT_TYPE"_"$APPLIED_CUSTOMISATIONS"_"$CUSTOMISATION_ID".log`
Control
=======
Add a worker node
------------------
Adding a worker node requires you to:
Scaling the number of workers
-----------------------------
- set-up the worker node and join it to the swarm as described in the deployment guide `DTWS nodes setup - Worker nodes`
- scale the `dtws_dtws-worker` service up, executing from the master node (where `N` is the previous number of nodes):
If necessary, add a worker node to the swarm.
This is described in the section "DTWS nodes setup - Worker nodes" of the `deployment guide <https://gitlab.eumetsat.int/data-tailor/data-tailor/-/blob/master/DTWS.README.rst>`_.
Then scale the `dtws_dtws-worker` service up, executing from the master node (with `$N` as the desired number of workers to be distributed among the worker nodes):
.. code-block::
sudo docker service scale dtws_dtws-worker=<N+1>
docker service scale dtws_dtws-worker=$N
Verify that the `dtws_dtws-worker` service is now running the desired number of replicas with:
.. code-block::
sudo docker service ls
docker service ls
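
In nominal conditions the `REPLICAS` column reports, for the worker service, the requested count (values below are hypothetical):

.. code-block::

   ID            NAME              MODE        REPLICAS  IMAGE
   abc123def456  dtws_dtws-worker  replicated  3/3       dtws-worker:latest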