Waste Management System Backend
Ares(2019)3573668 - 03/06/2019
ENG
Other
Public
Reviewers: VIRTUALWARE, SGI, UPATRAS
Table of contents
Executive summary ............................................................................................................... 7
Abbreviations ......................................................................................................................... 8
1. Introduction ..................................................................................................................... 9
5. Security implementation................................................................................................ 50
7. References ................................................................................................................... 88
1
Deliverable D2.3
List of figures:
Figure 1 - Back-End conceptual architecture ......................................................................... 9
Figure 2. Back-End implementation ..................................................................................... 10
Figure 3. FIWARE Lab Waste4Think Organization. ............................................................. 11
Figure 4 - Access to the FIWARE Lab resources ................................................................. 12
Figure 5 – FIWARE Lab Cloud Portal authentication ........................................................... 12
Figure 6 – FIWARE Lab Cloud Portal .................................................................................. 13
Figure 7 – W4T VM instances .............................................................................................. 14
Figure 8 - VMs physical network .......................................................................................... 14
Figure 9 - W4T Security Groups .......................................................................................... 15
Figure 10 - W4T Security Group rules .................................................................................. 15
Figure 11 - W4T Volumes .................................................................................................... 16
Figure 12 – W4T VM images ............................................................................................... 16
Figure 13 - Back-End Services ............................................................................................ 19
Figure 14 - Waste4Think organization in Docker HUB ......................................................... 20
Figure 15 - Orion DB size for all pilots .................................................................................. 25
Figure 16 - Number of entities for all pilots ........................................................................... 25
Figure 17 - Orion DB size for a single pilot ........................................................................... 26
Figure 18 - Number of entities for a single pilot .................................................................... 26
Figure 19 - Cygnus agent .................................................................................................... 29
Figure 20 – Waste4Think Open Data Platform ..................................................................... 39
Figure 21 - W4T CRUD user interface ................................................................................. 46
Figure 22 - FIWARE Security GEs ....................................................................................... 50
Figure 23 - W4T Back-End security layer ............................................................................. 51
Figure 24 - IdM resource permission with dynamic values ................................................... 52
Figure 25 - Security level 1 – Authentication ........................................................................ 53
Figure 26 - Security level 2 - Basic authorization................................................... 53
Figure 27 - W4T IDM instance ............................................................................................. 56
Figure 28 - IDM Applications ................................................................................................ 59
Figure 29 - IDM authorized user .......................................................................................... 60
Figure 30 - Connection of Waste4Think Back-End with the Front-End applications ............. 63
Figure 31 - Sensors pushing information to the Waste4Think Back-End .............................. 65
Figure 32 - APIs swagger description .................................................................................. 68
Figure 33 - swagger API list ................................................................................................. 68
Figure 34 - swagger data model info .................................................................................... 69
Figure 35 - NGSIConnectorWEB Login page ....................................................................... 71
Figure 36 - NGSIConnectorWEB Home page ...................................................................... 72
Figure 37 - NGSIConnectorWEB Entities page .................................................................... 72
Figure 38 - NGSIConnectorWEB Details page ..................................................................... 73
Figure 39 - NGSIConnectorWEB Type page ........................................................................ 73
Figure 40 - NGSIConnectorWEB Create page ..................................................................... 74
Figure 41 - NGSIConnectorWEB Update page .................................................................... 74
Figure 42 - NGSIConnectorWEB Rules page ...................................................................... 75
Figure 43 - NGSIConnectorWEB Access token form ........................................................... 75
Figure 44 – Scenario: Near Real-time process..................................................................... 78
Figure 45 – Scenario: batch process for massive data upload ............................................. 79
Figure 46 – Uploading data using NGSIConnector ............................................. 80
Figure 47 – Waste collection in Zamudio ............................................................................. 81
Figure 48 – Waste collection in Halandri .............................................................................. 82
Figure 49 – Waste collection in Seveso ............................................................................... 82
Figure 50 – Waste collection in Cascais............................................................................... 83
Figure 51 - GET /entities .................................................................................................... 100
Figure 52 - POST /entities ................................................................................................. 101
Figure 53 - GET/entities/{entityID}...................................................................................... 102
Figure 54 - GET /entities/type/{typeID} ............................................................................... 103
Figure 55 - POST /entities/update ...................................................................................... 104
Figure 56 - GET /rules ....................................................................................................... 105
Figure 57 - GET /rules/{ruleID} ........................................................................................... 105
List of tables:
Table 1 – NGINX docker compose file ................................................................................. 17
Table 2 - NGINX Configuration file ....................................................................................... 18
Table 3 – Orion docker compose file.................................................................................... 23
Table 4 – MongoDB docker image ....................................................................................... 23
Table 5 – MongoDB docker volumes (data persistent) ......................................................... 23
Table 6 – MongoDB docker volumes (logs) ......................................................................... 23
Table 7 – MongoDB docker commands ............................................................................... 23
Table 8 – Orion docker image .............................................................................................. 23
Table 9 – Orion docker volumes (logs)................................................................................ 24
Table 10 – Orion docker volumes (certificates) ................................................................... 24
Table 11 – Orion docker links ............................................................................................. 24
Table 12 – Orion docker ports............................................................................................. 24
Table 13 – Orion docker commands ................................................................................... 24
Table 14 – Cygnus docker compose file .............................................................................. 28
Table 15 - Agent configuration file NGSI configuration section. ............................................ 29
Table 16 - Agent configuration file CKAN configuration section. .......................................... 30
Table 17 - Cygnus instance configuration file ....................................................................... 31
Table 18 – Name mappings configuration file ..................................................................... 32
Table 19 – Docker compose file of the CEP-Sandwich module............................................ 34
Table 20 – Nginx configuration file for the CEP-Sandwich module ....................................... 35
Table 21 – Docker compose file for the History module ....................................................... 37
Table 22 – Nginx configuration file for the History module ................................................... 37
Table 23 - CKAN docker compose file ................................................................................. 41
Table 24 - CKAN docker volumes ........................................................................................ 41
Table 25 - CKAN docker dependencies ............................................................................... 41
Table 26 - CKAN docker port ............................................................................................... 41
Table 27 – Deployment properties for the Admin backend module (development) ............... 43
Table 28 – Application properties for the Admin backend module (development) .... 43
Table 29 – Docker compose file for the Admin backend module (development) .................. 44
Table 30 – Deployment properties for the Admin backend module (production)................... 44
Table 31 – Application properties for the Admin backend module (production) .................... 44
Table 32 – Docker compose file for the Admin backend module (production) ...................... 45
Table 33 – Nginx file for the Admin backend module (development and production) ........... 45
Table 34 – Deployment properties for the CRUD module (development) ............................. 47
Table 35 – Application properties for the CRUD module (development) .............................. 47
Table 36 – Docker compose file for the CRUD module (development) ................................ 48
Table 37 – Deployment properties for the CRUD module (production) ................................. 48
Table 38 – Application properties for the CRUD module (production) .................................. 48
Table 39 – Docker compose file for the CRUD module (production) .................................... 49
Table 40 – Nginx file for the CRUD module (development and production).......................... 50
Table 41 – PEP Proxy Docker compose file......................................................................... 54
Table 42 – PEP Proxy config file .......................................................................................... 55
Table 43 – IDM docker compose file .................................................................................... 57
Table 44 – AuthZForce docker compose file ........................................................................ 58
Table 45 - IDM Applications ................................................................................................. 59
Table 46 - IDM Users and Roles .......................................................................................... 61
Table 47 - IDM Roles and Permissions ................................................................................ 62
Table 48 – Example configuration of the PyFiware connector to access the Context Broker 64
Table 49 – Example configuration of the PyFiware connector to access the History module 64
Table 50 – NGSIConnectorAPI docker compose file ....................................... 66
Table 51 - NGSIConnectorAPI config file ............................................................................. 67
Table 52 – NGSIConnectorWEB docker-compose file ......................................................... 70
Table 53 – NGSIConnectorWEB configuration file. .............................................................. 71
Table 54 – Waste4Think CSV file ........................................................................................ 76
Table 55 – Waste4Think CSV metadata .............................................................................. 76
Table 56 – Waste entity with metadata ................................................................................ 77
Table 57 – Waste4Think JSON file data .............................................................................. 77
Table 58 – NgsiConnectorApi Waste Transaction rule example........................................... 78
Table 59 – NgsiConnectorApi upload data in near real-time ................................................ 79
Table 60 – NgsiConnectorApi massive upload of data from a CSV file ................................ 80
Table 61 - Example CSV file with the information of the status in R20 ................................. 84
Table 62 - Example CSV file with the information of the status in R19 ................................. 84
Table 63 - Example CSV file with the information of processed materials in R20 ................. 84
Table 64 - Example CSV file with the information of processed materials in R19 ................. 85
Table 65 - Example JSON with the information of a user session in the Citizen App ........... 85
Table 66 - Example JSON with the information of a user transaction in the Food App ......... 86
Table 67 - Example JSON with the information of a user session in the Serious Games ..... 87
Table 68 - Example CSV with the information of the Learning Materials .............................. 87
Table 69 - Example FIWARE ORION subscriptions to InfluxDB ........................................... 97
Table 70 - Example FIWARE ORION subscription to CEP................................................... 98
Table 71 - Example FIWARE ORION subscriptions to FIWARE Cygnus ............................. 99
Table 72 - W4T Cygnus Name Mappings file ..................................................................... 107
Executive summary
This deliverable presents the implementation of the Waste4Think Back-End platform as the middleware layer of the waste operation and management system architecture defined in Deliverable D2.1 [3]. The Back-End platform is mainly based on the FIWARE [1] Generic Enabler components and is responsible for handling context information coming from the sensors deployed in the Waste4Think pilots, the control instrumentation of the treatment plants, the apps, the learning material and the serious games, and for dispatching it to the components of the Business / Service layer in order to support the results R1 "Operation and Management Module", R2 "Collection Module" and R3 "Planning Module". The specific details of these components and the rationale behind their implementation are given in Deliverable D2.1, Technical Documentation of R1: Operation and Management Module [3].
The conceptual architecture of the Back-End Platform, instantiated in the FIWARE Lab Cloud Infrastructure, is presented in this deliverable together with an overview of its main features.
This deliverable also presents all the activities needed to deploy and configure the Back-End Platform on the Spain 2 node of the FIWARE Lab Cloud through the Waste4Think Organization, namely:
• providing a single entry point for the management of the Back-End platform;
• creating and managing the dedicated Security Groups that define access rules to the virtual machines;
• creating and managing the Volumes that provide additional data storage attached to the virtual machines;
• creating and managing the Virtual Machine Images for the Waste4Think components.
Each of the Back-End services, as well as the Security components, is also presented from the implementation point of view, including the specific configuration and the deployment mechanisms put in place.
The status of the development of the backend services is detailed in Deliverables D1.4-D1.6
[29, 32].
Abbreviations
W4T Waste4Think
IDM Identity Management System
GE Generic Enabler
ICT Information and Communication Technologies
SSO Single Sign On
PEP Policy Enforcement Point
XACML eXtensible Access Control Markup Language
PDP Policy Decision Point
VM Virtual Machine
CEP Complex Event Processing
TCP Transmission Control Protocol
1. Introduction
This deliverable focuses on the implementation of the Waste4Think Back-End platform (see Figure 1) for the results R1 "Operation and Management Module", R2 "Collection Module", R3 "Planning Module" and R4 "Circular Economy Model". The Back-End platform is mainly based on FIWARE technologies [1].
Section 3 covers the activities carried out in the FIWARE Lab Cloud Infrastructure to set up the virtualised infrastructure underlying the Back-End Platform.
Section 5 contains the installation and set-up of the FIWARE Generic Enablers to secure the
Back-End Platform.
Finally, Section 6 covers the Back-End interfaces to sensors deployed to the pilots, treatment
plants, serious game & apps, and learning material.
At the end of this deliverable, a series of Annexes about technical configurations and
development activities carried out are included.
To achieve this objective and to support the results R1 "Operation and Management Module", R2 "Collection Module" and R3 "Planning Module", a Waste Management Back-End system has been deployed:
• to manage context information, through the FIWARE Orion Context Broker Generic
Enabler (see section 4.3), coming from:
o the sensors deployed in each of the Waste4Think pilots;
o the control instrumentation of the Halandri treatment plants;
o the serious game and other apps developed in the Waste4Think project;
o learning materials.
• to process and analyse real-time events and trigger instantaneous predefined actions (such as alerts and/or anomaly notifications) through the CEP-Sandwich module (see section 4.5);
• to store context information as historical data on the History module (see section 4.6);
• to publish context information as open data, through the FIWARE Cygnus (see
section 4.4) and CKAN Generic Enabler (see section 4.7);
• to provide a set of public and/or private APIs to retrieve both context information and
historical data;
• to provide a web interface to operate over the data available both in the ORION
Context broker and Historical module, through the CRUD module (see section 4.9).
• to provide authentication and authorization systems through a set of FIWARE security
components (see section 5), specifically Identity Management KeyRock, PEP Proxy
WILMA, and Authorization PDP AuthZForce;
• to provide a unique user entry point to the different applications in Waste4Think (Social Actions, Zero Waste Ecosystems, Planning Tool, Green Procurement, etc.), through the Admin Back-End module (see section 4.8).
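To illustrate how a Business / Service-layer client could retrieve context information through these APIs, the following Python sketch builds (but does not send) an NGSI v2 query request. The host name, entity type and token are placeholder assumptions; the X-Auth-Token header is the one validated by the PEP Proxy (Wilma) in front of the Context Broker.

```python
from urllib import parse, request

# Placeholder host for the secured Waste4Think Back-End (assumption).
ORION_BASE = "https://w4t-backend.example.org:1026"

def build_entities_request(entity_type: str, token: str) -> request.Request:
    """Build (but do not send) a GET /v2/entities query request."""
    query = parse.urlencode({"type": entity_type, "limit": 100})
    req = request.Request(f"{ORION_BASE}/v2/entities?{query}")
    # The PEP Proxy (Wilma) checks this OAuth2 token against the IdM.
    req.add_header("X-Auth-Token", token)
    return req

# "WasteContainer" is an illustrative entity type, not necessarily part
# of the Waste4Think data model defined in D2.1.
req = build_entities_request("WasteContainer", "<oauth2-token>")
print(req.get_full_url())
```

Once a valid token has been obtained from the Identity Manager, sending the request is a single `urllib.request.urlopen(req)` call.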
All Waste4Think Back-End components have been provisioned and managed using the
FIWARE Cloud capabilities of the FIWARE LAB node “Spain2”.
A Waste4Think Organization (see Figure 4) has been created to let more than one user
access the shared cloud resources.
3.2. Access to FIWARE Lab
Two kinds of actors have access to the FIWARE LAB resources, as shown in Figure 5:
3.2.1. Admin
The Admin actor is able to manage VMs, Security Groups, Floating IPs and Keypairs. It accesses the FIWARE Cloud resources through the global instance of the FIWARE Identity Management GE [4] by using an authorized FIWARE account, and then through the Cloud Portal GE [5] (see Figure 6 and Figure 7).
Access to both the Back-End services and Development Environment ports is granted with
security rules defined by the Admin actor on the Cloud Portal GE. More details about the Waste4Think Security Group and rules can be found in section 3.4 "Security groups" of this deliverable.
The two public IPs provided have been assigned to the Development Environment VM “w4t-
dev” with IP 130.206.120.215 and to the VM “w4t-backend-prod” with IP 130.206.117.164.
Two third-level domains have been created to register both public IPs:
3.5. Volumes
As shown in Figure 12, three Volumes have been created to provide the Waste4Think VMs with additional data storage space, in particular:
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/r/_/nginx/.
3.7.3. Installation/Configuration
In the Waste4Think project, NGINX Reverse Proxy has been deployed by using Docker
technologies.
Two different ways can be used to deploy the NGINX Reverse Proxy by using Docker.
Docker-compose method
Table 1 shows the docker-compose.yml file used to deploy the NGINX Reverse Proxy in the Waste4Think Back-End environment. It creates an NGINX docker container and maps the configuration file and log files to external volumes.
version: "3"
services:
  nginx:
    image: nginx:latest
    volumes:
      - /data/docker_nginx/log/access.log:/var/log/nginx/access.log
      - /data/docker_nginx/log/error.log:/var/log/nginx/error.log
      - /data/docker_nginx/conf/w4t.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
      - "8080:8080"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 1 – NGINX docker compose file
Image: The latest version of the official NGINX image on Docker Hub.
Volumes: Docker volumes have been created to map the following files to external folders:
Ports: The NGINX service has been configured to run on ports 80 and 8080.
NGINX Configuration file

server {
    listen 80;
    server_name localhost;
    client_max_body_size 100M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://192.168.229.62:82/;
        proxy_redirect off;
    }

    location /connector {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://192.168.229.62:3001/v1;
        proxy_redirect off;
    }
}

upstream ckan-webui {
    server 192.168.216.171:8080;
}

server {
    client_max_body_size 40M;
    listen 8080;
    server_name localhost;

    location / {
        proxy_pass http://ckan-webui;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Table 2 - NGINX Configuration file
4. Back-End Services
4.1. Implementation of the Back-End services
The following FIWARE Generic Enablers and other third-party and custom components (see Figure 14) have been combined to implement the main services of the Back-End system, namely Context Management, Open Data, Historical Data and Processing:
• FIWARE ORION Context Broker (see section 4.3) as Publish/Subscribe system which
is able to handle context information using the FIWARE NGSI standard;
• FIWARE Cygnus (see section 4.4) as connector system which persists FIWARE
Orion context information in the CKAN Open Data system;
• CKAN (see section 4.7) as Open Data system to publish, share, find, use and visualize datasets of interest for the Waste4Think project, such as socio-economic data, geographic information or sensor data;
• PostgreSQL with time series extension (see section 4.6) as Historical DB;
• CEP Sandwich (see section 4.5) as Complex Event Processing system;
• CRUD (see section 4.9) as custom tool which provides a WEB interface to create,
update and delete data available both in the FIWARE Orion Context Broker and the
Historical DB.
• Admin Back-End (see section 4.8) as custom tool for authenticating and authorizing
users to access each of the different applications developed in the Waste4Think
project (Social Actions, Zero Waste Ecosystems, Planning Tool, Green Procurement,
etc.).
The status of the development of the backend services is detailed in Deliverables D1.4-D1.6
[29, 32].
Docker is a software containerisation platform guaranteeing that software will always run the
same, regardless of its environment. Docker offers many benefits over traditional application
deployment, including:
• Simplicity – Once an application is Dockerized, it can be fully controlled (started, stopped, restarted, etc.) with a few commands. As these are generic Docker commands, it is easy for anyone unfamiliar with the specifics of an application to get started.
• It’s already Dockerized – Docker Hub is the central marketplace for Docker images to
be shared with other Docker users. Often Docker images for an application already
exist. A specific Waste4Think organization has been created in the official Docker Hub to store and distribute the container images created for the Waste4Think project: https://hub.docker.com/u/waste4think (see Figure 15).
• Blueprint of application configuration – A Docker file provides the blueprint or
instructions to build an application. This can be stored in the source version control
system and refined over time to improve the build. It also removes any ambiguity of
build/configuration differences between various deployments.
The Back-End services have been deployed using Docker Compose, a tool for defining and running multi-container Docker applications. With Docker Compose, a set of YAML configuration files has been created to configure the Waste4Think services.
More details about Docker can be found in the Annex A - Docker manual of this deliverable.
As part of the Waste4Think Backend, the Orion Context Broker has been implemented to:
• manage all NGSI context information compliant to the Data Model defined in the
Waste4Think project (section 7 of the D2.1 [3]), coming from Pilots and other
Backend services;
• notify context information changes to other Backend services as CEP system (see
section 4.5 of this deliverable), Open Data system (see section 4.7 of this deliverable)
or Historical DB (see section 4.6 of this deliverable) by configuring specific
subscriptions;
• retrieve, upload and update context information directly in the Orion Context Broker by using the NGSI v2 APIs [24].
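As a hedged illustration of the payloads involved, the sketch below builds a minimal NGSI v2 entity and a subscription of the kind used to notify other Back-End services of context changes. The entity type, attribute names and notification URL are illustrative assumptions, not the exact Waste4Think data model (which is defined in D2.1 [3]).

```python
import json

# Minimal NGSI v2 entity; "WasteContainer", its attributes and the id are
# placeholder assumptions. The entity would be created via POST /v2/entities.
entity = {
    "id": "WasteContainer:zamudio:001",
    "type": "WasteContainer",
    "fillingLevel": {"type": "Number", "value": 0.65},
}

# Subscription asking Orion to notify a consumer (e.g. Cygnus or the CEP)
# whenever fillingLevel changes; created via POST /v2/subscriptions.
subscription = {
    "description": "Notify on fillingLevel changes",
    "subject": {
        "entities": [{"idPattern": ".*", "type": "WasteContainer"}],
        "condition": {"attrs": ["fillingLevel"]},
    },
    "notification": {
        "http": {"url": "http://cygnus:5050/notify"},  # placeholder endpoint
        "attrs": ["fillingLevel"],
    },
}

print(json.dumps(entity))
```

Both payloads would be sent to Orion with Content-Type: application/json on port 1026.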
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/r/fiware/orion/.
The official MongoDB source code repository is hosted on GitHub at https://github.com/mongodb/mongo.
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/_/mongo/.
4.3.3. Installation/Configuration
In the Waste4Think project, FIWARE Orion Context broker has been deployed by using
Docker technologies.
Note that there are additional ways to deploy the FIWARE Orion service; more information on the different deployment options can be found in the official FIWARE Orion Context Broker Installation & Administration Manual [13].
Two different ways can be used to deploy the Orion Context Broker by using Docker.
Docker run method
MongoDB installation:
docker run --name mongodb -d mongo:3.4
Docker-compose method
Table 3 shows the docker-compose.yml file used to deploy the FIWARE Orion instance in the Waste4Think Back-End environment. It creates two docker containers, FIWARE Orion Context Broker and MongoDB, and defines volumes to persist data. With this approach the application can be moved between hosts without losing its core functionality.
version: "2"
services:
  mongo:
    image: mongo:3.4
    volumes:
      - /data/docker-mongo/db:/data/db
      - /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
    command: --nojournal
  orion:
    image: fiware/orion
    volumes:
      - /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
      - /data/docker-mongo/config/localhost.key:/localhost.key
      - /data/docker-mongo/config/localhost.pem:/localhost.pem
    links:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo -https -key /localhost.key -cert /localhost.pem -logLevel DEBUG
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 3 – Orion docker compose file
Two services have been configured in the docker compose file, MongoDB and FIWARE
Orion.
MongoDB
Image: The MongoDB docker image version 3.4 has been used as recommended in the
official FIWARE Orion documentation (see Table 4).
image: mongo:3.4
Table 4 – MongoDB docker image
Volumes: Data is non-persistent in the dockerized Orion instance: if the MongoDB container is removed, all data is lost. To make the MongoDB data persistent, a docker volume has been created in the docker-compose file. This volume maps the MongoDB data to an external folder located on the host system (see Table 5).
volumes:
- /data/docker-mongo/db:/data/db
Table 5 – MongoDB docker volumes (data persistent)
A docker volume has also been used to map the MongoDB log file to an external folder (see Table 6).
- /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
Table 6 – MongoDB docker volumes (logs)
Command: The "--nojournal" option has been used to disable the journaling that MongoDB enables by default (see Table 7).
command: --nojournal
Table 7 – MongoDB docker commands
FIWARE Orion
Image: The latest version of the official FIWARE Orion image on Docker Hub (see Table 8).
image: fiware/orion
Table 8 – Orion docker image
Volumes: A docker volume has been created to map the Orion Context Broker log to an
external folder (see Table 9).
- /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
Table 9 – Orion docker volumes (logs)
Docker volumes have also been used to map the Orion certificates (.key and .pem) needed to run Orion in HTTPS mode (see Table 10).
- /data/docker-mongo/config/localhost.key:/localhost.key
- /data/docker-mongo/config/localhost.pem:/localhost.pem
Table 10 – Orion docker volumes (certificates)
Links: The Orion container is linked to the MongoDB container (see Table 11).
links:
- mongo
Table 11 – Orion docker links
Ports: This sets up the ports on which the service will run. The default Orion port 1026 has been used in the Waste4Think backend (see Table 12).
ports:
- "1026:1026"
Table 12 – Orion docker ports
Command: Command-line options used when the container is created (see Table 13). The -dbhost mongo option points Orion at the MongoDB service. The -https option configures FIWARE Orion to run in HTTPS mode, which additionally requires the -key and -cert options to specify the files containing the private key and the certificate for the server.
The FIWARE Orion container can be accessed with: docker exec -it <fiware-orion-container> bash.
4.3.5. Estimating FIWARE Orion Context Broker data size
This section gives an estimate of the expected data growth of the Orion Context Broker over the next five years, taking into account factors such as the number of citizens involved, the number of collection points handled, the average size of single data units and the data frequency. Two estimates are presented:
• expected data growth, in terms of data size and number of entities handled, for all pilots involved in the Waste4Think project;
• expected data growth, in terms of data size and number of entities handled, for a single sample pilot.
All pilots
For this estimation, we have considered the four pilots involved in the Waste4Think project, in detail about 100,000 citizens involved, 6,000 collection points handled and 135,000 waste transactions.
The size of the Context Broker data expected for the next five years is shown in Figure 16, while the number of entities handled is shown in Figure 17.
Figure 16 – Orion DB size (MB): 692 (2019), 1,038 (2020), 1,384 (2021), 1,770 (2022), 2,156 (2023)
Figure 17 – N° of entities: 152,904 (2019), 215,865 (2020), 323,797 (2021), 431,731 (2022), 539,664 (2023)
Single pilot
For this estimation, we have considered a sample pilot with 25,000 citizens involved, 1,500 collection points handled and 30,000 waste transactions.
The size of the Context Broker data expected for the next five years is shown in Figure 18, while the number of entities handled is shown in Figure 19.
Figure 18 – Orion DB size (MB): 173 (2019), 259 (2020), 346 (2021), 423 (2022), 519 (2023)
Figure 19 – N° of entities: 38,226 (2019), 57,339 (2020), 76,452 (2021), 95,565 (2022), 114,678 (2023)
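The growth estimates above are roughly linear in the number of entities. The sketch below projects the DB size from an assumed average entity size and an assumed yearly entity increment; both constants are back-of-the-envelope assumptions derived from the single-pilot figures, not official project parameters.

```python
# Hedged sketch: linear projection of Orion DB size, assuming growth is
# roughly proportional to the cumulative number of entities. The constants
# are illustrative (derived from the single-pilot estimate), not an
# official formula from this deliverable.
AVG_ENTITY_SIZE_KB = 4.5      # assumed average size of one entity (KB)
ENTITIES_PER_YEAR = 19_113    # assumed yearly entity growth (single pilot)

def project_db_size_mb(years: int, initial_entities: int = 38_226) -> list:
    """Return the estimated DB size (MB) for each of the next `years` years."""
    sizes = []
    entities = initial_entities
    for _ in range(years):
        sizes.append(round(entities * AVG_ENTITY_SIZE_KB / 1024, 1))
        entities += ENTITIES_PER_YEAR
    return sizes

print(project_db_size_mb(5))  # five-year projection for a single pilot
```

The results land close to, but not exactly on, the reported figures, since the real estimate also factors in data frequency and unit sizes per entity type.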
• FIWARE Cygnus, to publish context information into the Open Data Platform CKAN;
• History module, to upload context information as Historical Data.
The interaction between FIWARE Orion and the other Back-End components takes place through FIWARE Orion subscriptions, configured to notify context information to the other services. More details about the FIWARE Orion subscriptions implemented to notify entities to FIWARE Cygnus, History and CEP can be found in Annex B – FIWARE Orion subscriptions.
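The shape of such a subscription can be sketched as a minimal NGSIv2 payload (POST /v2/subscriptions). The entity type, notification URL and throttling value below are illustrative assumptions; the project's actual subscriptions are the ones listed in Annex B.

```python
import json

def build_subscription(entity_type: str, notify_url: str) -> dict:
    """Build a minimal NGSIv2 subscription that notifies on every change
    of entities of the given type (hypothetical values for illustration)."""
    return {
        "description": f"Notify {notify_url} on changes of {entity_type}",
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": []},  # empty list = trigger on any attribute
        },
        "notification": {
            "http": {"url": notify_url},
            "attrsFormat": "normalized",
        },
        "throttling": 1,
    }

# Assumed entity type and Cygnus notification endpoint:
sub = build_subscription("DepositPoint", "http://cygnus:5050/notify")
print(json.dumps(sub, indent=2))
```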
Internally, Cygnus is based on Apache Flume [16], a technology addressing the design and
execution of data collection and persistence agents. An agent is basically composed of a
listener or source in charge of receiving the data, a channel where the source puts the data
once it has been transformed into a Flume event, and a sink, which takes Flume events from
the channel to persist the data within its body into a third-party storage.
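The source → channel → sink pattern described above can be sketched in a few lines. The event layout and the in-memory "storage" list are simplifications for illustration, standing in for a real Flume event and a third-party store such as CKAN.

```python
from queue import Queue

# Minimal sketch of the Flume agent pattern: a source receives data and puts
# it on a channel as an event; a sink drains the channel and persists the
# event bodies into storage.
channel = Queue()
storage = []

def source(raw_notification):
    """Transform a raw notification into a Flume-like event and enqueue it."""
    event = {
        "headers": {"fiware-service": raw_notification.get("service", "")},
        "body": raw_notification["data"],
    }
    channel.put(event)

def sink():
    """Take events from the channel and persist their bodies."""
    while not channel.empty():
        storage.append(channel.get()["body"])

source({"service": "waste4think", "data": {"id": "dp-1", "fillLevel": 0.7}})
sink()
print(storage)
```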
Cygnus, and more specifically the Cygnus-NGSI agent, plays the role of a connector between the Orion Context Broker, which is an NGSI source of data, and many FIWARE and third-party storage systems such as CKAN [14], Cosmos Big Data (Hadoop) [17] and STH Comet [18].
As part of the Waste4Think Back-End, Cygnus has been implemented to accept NGSI
context information coming from Orion Context Broker and to write it to the Waste4Think
CKAN Open Data Platform (see section 4.7).
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/r/fiware/cygnus-ngsi/.
4.4.3. Installation/Configuration
In the Waste4Think context, FIWARE Cygnus has been deployed using the docker-compose
method.
There are also other different ways to deploy FIWARE Cygnus. More information on that can
be found in the official FIWARE Cygnus Installation & Administration Manual [6].
Docker-compose method
Table 14 presents the docker-compose file used to deploy the FIWARE Cygnus instance in the Waste4Think Back-End environment.
version: '3'
services:
cygnus:
image: telefonicaiot/fiware-cygnus
volumes:
- /data/docker_cygnus/conf/agent_w4t.conf:/opt/apache-flume/conf/agent.conf
- /data/docker_cygnus/conf/name_mappings.conf:/usr/cygnus/conf/name_mappings.conf
- /data/docker_cygnus/conf/cygnus_instance_w4t.conf:/opt/fiware-cygnus/cygnus-common/conf/cygnus_instance.conf
- /data/docker_cygnus/log/cygnus.log:/var/log/cygnus/cygnus.log
ports:
- "5050:5050"
- "8081:8081"
networks:
default:
driver: bridge
driver_opts:
com.docker.network.driver.mtu: 1400
Table 14 – Cygnus docker compose file
The cygnus service of the docker-compose file uses the official Cygnus image.
Mapped volumes configure the behaviour of Cygnus, focusing on three key areas: name mappings, agent configuration, and the Cygnus instance.
Name mappings determine how entities from incoming notifications are handled, specifying parameters such as the destination CKAN data store and the format in which entities are persisted. A counterpart mechanism, grouping rules, also exists, but since Cygnus version 1.6 that feature is deprecated in favour of name mappings.
The agent configuration file sets channel-, source- and sink-specific settings. Only the CKAN sink has been configured in the Waste4Think project.
The files mentioned above make up the Cygnus configuration. They cover key Cygnus aspects such as the sinks to which data received from notifications will be sent, the behaviour of those sinks as regulated by channel options, and the Cygnus instance settings (port, default service, service path and notification endpoint).
The Agent configuration file is used to address the Flume parameters [16], configuring the source (NGSI), the channel and the sink (CKAN) that compose the Flume agent behind the Cygnus instance, which is responsible for consuming context events notified by the FIWARE Orion Context Broker and persisting them on the CKAN instance (see Figure 20).
The Agent configuration file can be split into two main parts:
• NGSI source configuration;
• CKAN sink configuration.
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = ckan-sink
cygnus-ngsi.channels = ckan-channel
cygnus-ngsi.sources.http-source.channels = ckan-channel
cygnus-ngsi.sources.http-source.type = http
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service =
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.handler.events_ttl = 10
cygnus-ngsi.sources.http-source.interceptors = ts nmi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.nmi.type = com.telefonica.iot.cygnus.interceptors.NGSINameMappingsInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.nmi.name_mappings_conf_file = /usr/cygnus/conf/name_mappings.conf
Table 15 - Agent configuration file NGSI configuration section.
cygnus-ngsi.sources.http-source.port = 5050
Listening port that FIWARE Cygnus source is using for receiving incoming notifications.
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service =
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.interceptors.nmi.name_mappings_conf_file = /opt/apache-flume/conf/name_mappings.conf
cygnus-ngsi.sinks.ckan-sink.type = com.telefonica.iot.cygnus.sinks.NGSICKANSink
cygnus-ngsi.sinks.ckan-sink.channel = ckan-channel
cygnus-ngsi.sinks.ckan-sink.enable_encoding = false
cygnus-ngsi.sinks.ckan-sink.enable_grouping = false
cygnus-ngsi.sinks.ckan-sink.enable_name_mappings = true
cygnus-ngsi.sinks.ckan-sink.data_model = dm-by-entity
cygnus-ngsi.sinks.ckan-sink.attr_persistence = column
cygnus-ngsi.sinks.ckan-sink.ckan_host = 192.168.216.171
cygnus-ngsi.sinks.ckan-sink.ckan_port = 8080
cygnus-ngsi.sinks.ckan-sink.ckan_viewer = recline_grid_view
cygnus-ngsi.sinks.ckan-sink.ssl = false
cygnus-ngsi.sinks.ckan-sink.api_key = 2d63e69c-9829-4b86-a921-60de39750253
cygnus-ngsi.sinks.ckan-sink.orion_url = http://localhost:1026
cygnus-ngsi.sinks.ckan-sink.batch_size = 100
cygnus-ngsi.sinks.ckan-sink.batch_timeout = 30
cygnus-ngsi.sinks.ckan-sink.batch_ttl = 10
cygnus-ngsi.sinks.ckan-sink.batch_retry_intervals = 5000
cygnus-ngsi.sinks.ckan-sink.backend.max_conns = 500
cygnus-ngsi.sinks.ckan-sink.backend.max_conns_per_route = 100
#cygnus-ngsi.sinks.ckan-sink.persistence_policy.max_records = 5
cygnus-ngsi.sinks.ckan-sink.persistence_policy.expiration_time = -1
cygnus-ngsi.sinks.ckan-sink.persistence_policy.checking_time = 600
Table 16 - Agent configuration file CKAN configuration section.
cygnus-ngsi.sinks.ckan-sink.enable_name_mappings = true
Enables name mapping rules for the CKAN sink; these rules are then used to customize how data is persisted into CKAN.
cygnus-ngsi.sinks.ckan-sink.api_key = 2d63e69c-9829-4b86-a921-60de39750253
CKAN API key, used by Cygnus to authenticate itself when making a request to CKAN.
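The way such a key is attached to requests can be sketched as follows: CKAN reads the API key from the Authorization header. The host and key below are placeholders, not the project's real credentials.

```python
import urllib.request

# Hedged sketch of how a client such as Cygnus authenticates against the
# CKAN API. Host and key are placeholder values for illustration only.
CKAN_HOST = "http://ckan.example.org:8080"                  # placeholder
CKAN_API_KEY = "00000000-0000-0000-0000-000000000000"       # placeholder

def ckan_request(path: str) -> urllib.request.Request:
    """Build a CKAN API request carrying the API key in the Authorization header."""
    return urllib.request.Request(
        f"{CKAN_HOST}{path}",
        headers={"Authorization": CKAN_API_KEY},
    )

req = ckan_request("/api/3/action/package_list")
print(req.get_header("Authorization"))
```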
cygnus-ngsi.sinks.ckan-sink.orion_url = http://localhost:1026
FIWARE Orion URL used to compose the resource URL with the convenience operation URL
to query it.
cygnus-ngsi.sinks.ckan-sink.ckan_port = 8080
Table 17 presents the Cygnus instance configuration file, which addresses all non-Flume parameters, such as the Flume agent name, the specific log file for this instance and the administration port.
CYGNUS_USER=cygnus
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_w4t.conf
AGENT_NAME=cygnus-ngsi
LOGFILE_NAME=cygnus.log
ADMIN_PORT=8081
POLLING_INTERVAL=30
Table 17 - Cygnus instance configuration file
AGENT_NAME=cygnus-ngsi
Name of the agent. The agent name is important since it is the base for the Flume parameter naming conventions; it determines which agent configuration will be used to configure the Flume environment.
Table 18 shows part of the “name_mappings.conf” file, used to customize the way data is written to the CKAN instance. The whole configuration file used in the Waste4Think project can be found in Annex D – Cygnus Name Mappings of this document.
Name mappings are an advanced global feature of Cygnus. It is global because it is available for all NGSI sinks. They allow changing the notified FIWARE service path and the concatenation of the entity ID and entity type.
{
"serviceMappings": [
{
"originalService": "waste4think",
"servicePathMappings": [
{
"originalServicePath": "/deusto/w4t/cascais/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointIsle",
"newEntityId": "",
"newEntityType": "depositpointisle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
}
]
},
...
Table 18 – Name mappings configuration file
originalService: FIWARE service from the incoming notification, used to map the incoming request to the set of rules for that specific service.
originalServicePath: FIWARE service path from the incoming notification, used to map to the set of rules specific to that service path, allowing entities to be distinguished by their service path rather than by their attributes (id, type, etc.).
originalEntityType: Type of the incoming entity, unique within the FIWARE service and service path.
newEntityId: Specifies a new value for the entity ID. The CKAN resource name is built by concatenating <newEntityId_newEntityType>; that value identifies the CKAN data store holding the data for the given resource.
newEntityType: Specifies a new value for the entity type, used in the same <newEntityId_newEntityType> concatenation.
4.5. CEP
4.5.1. Service overall description
FIWARE already provides a Generic Enabler for Complex Event Processing, named Proton. However, initial testing proved that it did not fulfil the technical requirements. The main problem is the impossibility of easily keeping state between invocations (needed to perform forecasting and the online calculation of KPIs and the tariff) and of making invocations “at will” (to consolidate the tariff and KPIs). These problems have been overcome by developing a custom CEP, named Sandwich. Sandwich is an application that triggers logic at the occurrence of certain events. Its purpose is to build small programs (called sandwiches) interactively, by dragging and dropping predefined functions (called ingredients). These functions are linked to determine which outputs need to be sent to which inputs, creating the logical order in which information must flow. By combining the ingredients, a user can build (in a visual way) logical and mathematical operations to perform complex calculations (such as computing the mean of an attribute over a group of entities, executing a statistical model, etc.).
By using the Context Broker’s subscription mechanism, the sandwiches can be invoked whenever an entity changes. The CEP module asynchronously launches the sandwich, executing all its ingredients in sequence and then sending the result back to the Context Broker or to another module. In the background, the CEP module is built using a NodeJS web server [20] coupled with a PostgreSQL [21] database. NodeJS exposes the API routes and dispatches the received queries to the database.
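The ingredient-chaining idea can be sketched as follows; the ingredient names and payload are invented for illustration, not actual Sandwich ingredients.

```python
# Minimal sketch of the "sandwich" idea: predefined functions (ingredients)
# are chained so that each output feeds the next input, reproducing the
# logical order in which information must flow.
def mean(values):
    """Ingredient 1 (illustrative): average of a group of attribute values."""
    return sum(values) / len(values)

def threshold(x, limit=0.8):
    """Ingredient 2 (illustrative): flag values above a limit."""
    return x > limit

def run_sandwich(ingredients, payload):
    """Execute the ingredients in sequence, piping each result forward."""
    result = payload
    for ingredient in ingredients:
        result = ingredient(result)
    return result

fill_levels = [0.9, 0.85, 0.95]  # e.g. notified container fill levels
print(run_sandwich([mean, threshold], fill_levels))
```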
4.5.3. Installation/Configuration
In the Waste4Think context, the CEP-Sandwich module has been deployed by using Docker
technologies, specifically by using the docker-compose method.
Table 19 and Table 20 show the docker-compose configuration file and the NGINX configuration of the CEP-Sandwich module.
version: "3"
services:
sandwich-db:
image: postgres:9.6
ports:
- "9041:5432"
volumes:
- postgresql_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
POSTGRES_DB: "sandwich"
logging:
driver: "json-file"
options:
max-file: 5
max-size: 10m
sandwich-node:
image: "node:8"
ports:
- "8030:8030"
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
POSTGRES_HOST : "sandwich-db"
POSTGRES_PORT : 5432
POSTGRES_DB : "sandwich"
AUTH_CLIENT_ID : "VQ3Ik0elr1xJ8u5MZxZTcFsn4r5u3f9sqWPIfXj0"
AUTH_CLIENT_SECRET : "uVKOGkJiuNRcDJRIaun9dqrYDEihcFrGk56hToIml5FeKA4fRoY8xpRAO8z2ZVhJqM2kFBZYuT6kExeeXdfZdLvuiAtU3EyCLWWQGtSMzuOezzvDzYsvIVh01szNvWNX"
RESET_MODELS_ : 1
RESET_SESSIONS_ : 1
user: "node"
working_dir: /home/docker/sandwich
volumes:
- /home/docker/sandwich:/home/docker/sandwich
command: /bin/bash -c "npm install && npm start"
logging:
driver: "json-file"
options:
max-file: 5
max-size: 10m
dns:
- 130.206.100.1
- 130.206.100.2
volumes:
postgresql_data:
Table 19 – Docker compose file of the CEP-Sandwich module
server {
server_name sandwich.waste4think.eu;
listen 80;
listen 443 ssl;
include conf.d/geoworldsim_ssl;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8030;
}
}
Table 20 – Nginx configuration file for the CEP-Sandwich module
• Low update profile. This profile covers all the entities that describe the information of the pilot and whose modifications take place separated in time. In the Waste4Think project, these entities refer to the description of the municipality (population, population density), the configuration of the waste management system (deposit points, sorting type, waste categories), etc. Considering the data already uploaded in the Back-End, we estimate around 200-300 MB of data per pilot per year.
• High update profile. This profile covers all the entities that are created or modified almost constantly and are commonly related to actions performed autonomously by the sensors (proactiveness) or by the execution of an action over a sensor (reactiveness). In the context of Waste4Think, these entities refer to events such as the throwing of garbage into a waste container, the collection of waste containers by a truck, the measurements from the nappies and composting plants, the number of users, food app transactions, etc. Considering the data already uploaded in the Back-End, we estimate around 900 MB-1 GB of data per pilot per year.
Using the stored data, the module exposes an API for querying the past states that entities had in the Context Broker, given a timestamp or time range. In the background, the History module is built using a NodeJS [20] web server coupled with a PostgreSQL [23] time-series database. NodeJS exposes the API routes and dispatches the received queries to the database.
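The time-range query idea can be sketched as follows; the storage layout and entity values are invented for illustration (the real module persists states in a PostgreSQL time-series database).

```python
from datetime import datetime

# Hedged sketch of the History query idea: stored entity states are filtered
# by entity id and a timestamp range. The in-memory list stands in for the
# time-series database.
history = [
    {"id": "dp-1", "fillLevel": 0.2, "ts": datetime(2019, 1, 1, 8)},
    {"id": "dp-1", "fillLevel": 0.6, "ts": datetime(2019, 1, 1, 14)},
    {"id": "dp-1", "fillLevel": 0.9, "ts": datetime(2019, 1, 2, 9)},
]

def query_states(entity_id, start, end):
    """Return the past states of an entity within [start, end]."""
    return [s for s in history
            if s["id"] == entity_id and start <= s["ts"] <= end]

day_one = query_states("dp-1", datetime(2019, 1, 1), datetime(2019, 1, 1, 23, 59))
print([s["fillLevel"] for s in day_one])
```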
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/u/waste4think/.
4.6.3. Installation/Configuration
In the Waste4Think context, the History module has been deployed by using Docker
technologies, specifically by using the docker-compose method.
Table 21 shows the Docker compose configuration file of the History module. Table 22
shows the Nginx configuration file for the History module.
version: "3"
services:
history-db:
image: timescale/timescaledb
ports:
- "8041:5432"
volumes:
- postgresql_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
POSTGRES_DB: "history"
logging:
driver: "json-file"
options:
max-file: 5
max-size: 10m
history-node:
image: "node:8"
ports:
- "8040:8040"
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
POSTGRES_HOST : "history-db"
POSTGRES_PORT : 5432
POSTGRES_DB : "history"
AUTH_CLIENT_ID : "Zq2COahUdSIOaAOEz5g2RW8YEf7Ysnrd0hWBGTWA"
AUTH_CLIENT_SECRET :"hEMpUb4xto8nuGNbSegYjRIwkN8TFP1X4KP1x"
RESET_MODELS_ : 1
RESET_SESSIONS_ : 1
user: "node"
working_dir: /home/docker/history
volumes:
- /home/docker/history:/home/docker/history
command: /bin/bash -c "npm install && npm start"
logging:
driver: "json-file"
options:
max-file: 5
max-size: 10m
dns:
- 130.206.100.1
- 130.206.100.2
volumes:
postgresql_data:
Table 21 – Docker compose file of the History module
server {
server_name history.waste4think.eu;
listen 443 ssl;
listen 80;
include conf.d/geoworldsim_ssl;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8040/;
}
}
Table 22 – Nginx configuration file for the History module
Once data is published, users can use its faceted search features to browse and find the data they need, and preview it using maps, graphs and tables, whether they are developers, journalists, researchers, NGOs, citizens, etc.
As part of the Waste4Think Back-End, the open data platform CKAN is used to publish, share, find, use and visualize datasets of interest for the Waste4Think project, such as socio-economic data, geographic information or sensor data.
A Waste4Think CKAN instance (see Figure 21) has been deployed and configured. The Waste4Think Open Data platform is available at http://backend.waste4think.eu:8080/.
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/r/ckan/ckan/.
4.7.3. Installation/Configuration
In the Waste4Think project, the CKAN instance has been deployed by using Docker
technologies.
There are different methods to install CKAN as the installation from an operating system
package manager or from source. More information about it can be found in the official
CKAN documentation [14].
Docker-compose method
Table 23 presents the docker-compose.yml file used to deploy the CKAN instance in the Waste4Think Backend environment.
volumes:
ckan_config:
ckan_home:
ckan_storage:
pg_data:
services:
ckan:
container_name: ckan
build:
context: ../../
args:
- CKAN_SITE_URL=${CKAN_SITE_URL}
links:
- db
- solr
- redis
ports:
- "0.0.0.0:${CKAN_PORT}:5000"
environment:
# Defaults work with linked containers, change to use own Postgres, SolR, Redis or Datapusher
- CKAN_SQLALCHEMY_URL=postgresql://ckan:${POSTGRES_PASSWORD}@db/ckan
- CKAN_DATASTORE_WRITE_URL=postgresql://ckan:${POSTGRES_PASSWORD}@db/datastore
- CKAN_DATASTORE_READ_URL=postgresql://datastore_ro:${DATASTORE_READONLY_PASSWORD}@db/datastore
- CKAN_SOLR_URL=http://solr:8983/solr/ckan
- CKAN_REDIS_URL=redis://redis:6379/1
- CKAN_DATAPUSHER_URL=http://datapusher:8800
- CKAN_SITE_URL=${CKAN_SITE_URL}
- CKAN_MAX_UPLOAD_SIZE_MB=${CKAN_MAX_UPLOAD_SIZE_MB}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
volumes:
- ckan_config:/etc/ckan
- ckan_home:/usr/lib/ckan
- ckan_storage:/var/lib/ckan
datapusher:
container_name: datapusher
image: clementmouchet/datapusher
ports:
- "8800:8800"
db:
container_name: db
build:
context: ../../
dockerfile: contrib/docker/postgresql/Dockerfile
args:
- DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
environment:
- DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- pg_data:/var/lib/postgresql/data
solr:
container_name: solr
build:
context: ../../
dockerfile: contrib/docker/solr/Dockerfile
redis:
container_name: redis
image: redis:latest
Table 23 - CKAN docker compose file
volumes:
- ckan_config:/etc/ckan
- ckan_home:/usr/lib/ckan
- ckan_storage:/var/lib/ckan
Table 24 - CKAN docker volumes
Docker volumes have been used to map the CKAN configuration files and to persist data in external folders. This ensures data is persisted and limits the risk of data loss due to issues that can occur when working with containers (see Table 24).
solr:
redis:
db:
datapusher:
Table 25 - CKAN docker dependencies
On the registry, each user is given a role and a set of permissions that detail which applications, and which actions within each application, the user is permitted to interact with. On login, the Admin backend verifies the user's credentials and returns a cross-navigation bar with all the necessary links and actions of the allowed applications. This cross-navigation bar is available in all the applications during the user session and verifies all the operations performed by the user.
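The role/permission model described above can be sketched as follows; the role, application and action names are invented for illustration and do not reflect the actual Waste4Think role registry.

```python
# Hedged sketch of a role-based permission check: each role maps
# applications to the set of actions allowed within them.
PERMISSIONS = {
    "data_admin": {"crud": {"create", "update", "delete"}, "ckan": {"read"}},
    "viewer":     {"ckan": {"read"}},
}

def is_allowed(role, app, action):
    """Check whether a role may perform an action within an application."""
    return action in PERMISSIONS.get(role, {}).get(app, set())

print(is_allowed("data_admin", "crud", "delete"))  # data admins may delete
print(is_allowed("viewer", "crud", "delete"))      # viewers may not
```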
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/u/waste4think/.
4.8.3. Installation/Configuration
In the Waste4Think context, the Admin backend module has been deployed by using Docker
technologies, specifically by using the docker-compose method through the docker-remote
scripts that manage all the configuration of the docker containers.
• Deployment properties file: Configures parameters such as the host/port where the
Admin backend docker container will be deployed, the docker registry to which the
Admin backend image will be uploaded to, and the names of the stack, image and tag
related to the container.
• Docker compose file: Configures the parameters for building the docker container.
Table 27, Table 28, and Table 29 show the configuration files to deploy the Admin Backend
in the development environment. Table 30, Table 31, and Table 32 show the configuration
files to deploy the Admin Backend in the production environment. Table 33 shows the Nginx
configuration file for the Admin Backend.
HOST= 10.32.8.203
PORT=22
REGISTRY=10.32.8.203:5000
STACK=w4t_admin_dev
IMAGE=w4t_admin
TAG=dev
DJANGO_SECRET_KEY= "13+cs97='w(ixk2c*!8sskfksnaf*(__c!44)$lb(v0ejci"
DJANGO_ALLOWED_HOSTS=auth-dev.waste4think.eu
DJANGO_HOST_PORT=8011
DJANGO_DEBUG=True
DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#
DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media
DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#POSTGRES_PASS#
POSTGRES_HOST_PORT=8013
POSTGRES_DATA=/var/postgres/${STACK}
Table 28 – Application properties for the Admin backend module (development)
version: '3.1'
services:
django:
image: '${REGISTRY}/${IMAGE}:${TAG}'
ports:
- ${DJANGO_HOST_PORT}:8000
env_file:
- ${STACK}.env
depends_on:
- postgres
volumes:
- ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
- ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
- ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn
postgres:
image: postgres:10
ports:
- ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
env_file:
- ${STACK}.env
volumes:
- ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 29 – Docker compose file for the Admin backend module (development)
HOST= 192.168.235.170
PORT=22
REGISTRY= https://hub.docker.com/u/waste4think/
STACK=w4t_admin
IMAGE=w4t_admin
TAG=latest
DJANGO_SECRET_KEY= "13+cs97='w(kvnvjdj764jsad*(__c!44)$lb(v0efsdf"
DJANGO_ALLOWED_HOSTS=admin.waste4think.eu
DJANGO_HOST_PORT=8010
DJANGO_DEBUG=False
DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#
DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media
DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#PROD_POSTGRES_PASS#
POSTGRES_DATA=/var/postgres/${STACK}
Table 31 – Application properties for the Admin backend module (production)
version: '3.1'
services:
django:
image: '${REGISTRY}/${IMAGE}:${TAG}'
ports:
- ${DJANGO_HOST_PORT}:8000
env_file:
- ${STACK}.env
depends_on:
- postgres
volumes:
- ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
- ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
- ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn
postgres:
image: postgres:10
ports:
- ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
env_file:
- ${STACK}.env
volumes:
- ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 32 – Docker compose file for the Admin backend module (production)
server {
server_name auth.waste4think.eu;
listen 443 ssl;
location /static/ {
alias /var/www/w4t_authservice/static/;
}
location /media/ {
alias /var/www/w4t_authservice/media/;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8010/;
}
}
server {
server_name auth-dev.waste4think.eu;
listen 443 ssl;
location /static/ {
alias /var/www/w4t_authservice/static/;
}
location /media/ {
alias /var/www/w4t_authservice/media/;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8011/;
}
}
Table 33 – Nginx file for the Admin backend module (development and production)
4.9. CRUD
4.9.1. Service overall description
The CRUD module provides a web interface for operating over the data available in the data model. The operations available are create, update and delete. This module will be available only to data administrators, providing them with a failsafe in case a sensor misreads or the data becomes corrupted.
CRUD provides an interface to work with both the API of the FIWARE Orion Context Broker (Section 4.3) and the History module (Section 4.6). Figure 22 shows one of the CRUD’s user interfaces.
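What a CRUD "update" against the Orion NGSIv2 API looks like can be sketched as follows; the attribute name and value are placeholders for illustration, not the module's actual request code.

```python
import json

# Hedged sketch: an update in NGSIv2 is a PATCH to
# /v2/entities/{entityId}/attrs carrying the corrected attribute(s),
# e.g. to fix a sensor misreading.
def build_attr_patch(attr, value, value_type="Number"):
    """Body for PATCH /v2/entities/{entityId}/attrs."""
    return json.dumps({attr: {"value": value, "type": value_type}})

body = build_attr_patch("fillLevel", 0.35)  # placeholder attribute/value
print(body)
```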
The Docker image used in the Waste4Think project is hosted on Docker Hub at https://hub.docker.com/u/waste4think/.
4.9.3. Installation/Configuration
In the Waste4Think context, the CRUD module has been deployed by using Docker
technologies, specifically by using the docker-compose method through the docker-remote
scripts that manage all the configuration of the docker containers.
• To stop the module:
docker stop /${STACK}
• Deployment properties file: Configures parameters such as the host/port where the CRUD docker container will be deployed, the docker registry to which the CRUD image will be uploaded, and the names of the stack, image and tag related to the container.
• Docker compose file: Configures the parameters for building the docker container.
Table 34, Table 35 and Table 36 show the configuration files to deploy the CRUD in the development environment. Table 37, Table 38 and Table 39 show the configuration files to deploy the CRUD in the production environment. Table 40 shows the Nginx configuration file for the CRUD.
HOST= 10.32.8.203
PORT=22
REGISTRY=10.32.8.203:5000
STACK=w4t_crud_dev
IMAGE=w4t_crud
TAG=dev
DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#
DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media
DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#POSTGRES_PASS#
POSTGRES_HOST_PORT=9083
POSTGRES_DATA=/var/postgres/${STACK}
Table 35 – Application properties for the CRUD module (development)
version: '3.1'
services:
django:
image: '${REGISTRY}/${IMAGE}:${TAG}'
ports:
- ${DJANGO_HOST_PORT}:8000
env_file:
- ${STACK}.env
depends_on:
- postgres
volumes:
- ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
- ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
- ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn
postgres:
image: postgres:10
ports:
- ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
env_file:
- ${STACK}.env
volumes:
- ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 36 – Docker compose file for the CRUD module (development)
HOST= 192.168.235.170
PORT=22
REGISTRY= https://hub.docker.com/u/waste4think/
STACK=w4t_crud
IMAGE=w4t_crud
TAG=latest
DJANGO_SECRET_KEY= "uqn&q$yr6(fyq*!z$g8f5zc&wxna_48)x!l+orhnjpbd7^3skd"
DJANGO_ALLOWED_HOSTS=crud.waste4think.eu
DJANGO_HOST_PORT= 9080
DJANGO_DEBUG=False
DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#
DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media
DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#PROD_POSTGRES_PASS#
POSTGRES_DATA=/var/postgres/${STACK}
Table 38 – Application properties for the CRUD module (production)
version: '3.1'
services:
  django:
    image: '${REGISTRY}/${IMAGE}:${TAG}'
    ports:
      - ${DJANGO_HOST_PORT}:8000
    env_file:
      - ${STACK}.env
    depends_on:
      - postgres
    volumes:
      - ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
      - ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
      - ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn
  postgres:
    image: postgres:10
    ports:
      - ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
    env_file:
      - ${STACK}.env
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 39 – Docker compose file for the CRUD module (production)
server {
    server_name crud.waste4think.eu;
    listen 443 ssl;

    location /static/ {
        alias /var/www/w4t_crud/static/;
    }
    location /media/ {
        alias /var/www/w4t_crud/media/;
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 120s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        proxy_pass http://192.168.235.170:9080/;
    }
}
server {
    server_name crud-dev.waste4think.eu;
    listen 443 ssl;

    location /static/ {
        alias /var/www/w4t_crud/static/;
    }
    location /media/ {
        alias /var/www/w4t_crud/media/;
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 120s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        proxy_pass http://192.168.235.170:9081/;
    }
}
Table 40 – Nginx file for the CRUD module (development and production)
5. Security implementation
5.1. FIWARE security
The Security module implemented in the Waste4Think project manages user account
information, authenticates users through their FIWARE accounts, and checks authorizations
to access resources.
In the Waste4Think context, the following FIWARE Generic Enablers (see Figure 23) have
been used and combined to implement secure access to the Back-End:
• The Keyrock Identity Management Generic Enabler[11] brings support to secure and
private OAuth2-based[9] authentication of users and devices, user profile
management, privacy-preserving disposition of personal data, Single Sign-On (SSO)
and Identity Federation across multiple administration domains.
• The Wilma PEP Proxy Generic Enabler[12] brings the support of proxy functions
within OAuth2-based authentication schemas. It also implements PEP functions
within an XACML-based access control schema.
• The AuthZForce PDP/PAP Generic Enabler[10] brings support to PDP/PAP functions
within an access control schema based on the XACML standard[19].
The Identity Manager (IdM) stores the FIWARE user’s account and allows Single Sign On
(SSO) authentication by using the OAuth2 protocol.
Upon logging in, the authenticated user receives an authentication token, which the
AuthZForce component uses to check the role of the user and the associated permissions.
The PEP Proxy acts as a proxy server, forwarding the allowed requests and blocking the
unauthorized requests (see Figure 24).
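The interaction just described can be sketched with a minimal Python client. This is an illustrative sketch, not part of the deliverable's code base: the client id and secret are placeholder values (a real application must first be registered in the IDM), and the token endpoint shown is the standard OAuth2 resource-owner-password grant exposed by Keyrock.

```python
import base64
import json
import urllib.parse
import urllib.request

# Hypothetical credentials -- replace with those of an application
# registered in the Waste4Think IDM.
IDM_HOST = "http://backend.waste4think.eu:8000"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"


def build_token_request(username, password):
    """Build the OAuth2 password-grant request sent to the Keyrock IDM."""
    credentials = base64.b64encode(
        f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    body = urllib.parse.urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
    }).encode()
    return urllib.request.Request(
        f"{IDM_HOST}/oauth2/token",
        data=body,
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )


def build_backend_request(token, path):
    """Build a Back-End request routed through the PEP Proxy (port 82),
    carrying the token in the X-Auth-Token header as in the curl examples
    later in this deliverable."""
    return urllib.request.Request(
        f"http://backend.waste4think.eu:82{path}",
        headers={"X-Auth-Token": token},
    )


if __name__ == "__main__":
    # Actually sending the requests needs a reachable IDM/PEP instance:
    #   with urllib.request.urlopen(build_token_request(user, pwd)) as resp:
    #       token = json.loads(resp.read())["access_token"]
    req = build_backend_request("some-token", "/v2/entities")
    print(req.full_url)
```

If the token is valid, the PEP Proxy forwards the request to the Back-End; otherwise it is blocked.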
During the integration and testing of the FIWARE security components in the
Waste4Think Back-End, it emerged that the IDM component does not allow a user
to create authorization permissions for HTTP resources containing a dynamic value
in the URI. For instance, it is not possible to create an IDM authorization permission for the
HTTP resource http://backend.waste4think.eu/v2/entities/<entity_id>, where <entity_id> is
a dynamic value.
To solve this issue, it was necessary to modify the source code of the PEP
Proxy GE, which carries out the authorization checks and authorizes or denies access to the
Back-End resources, and to define a method to create these permissions in the IDM GE.
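To illustrate the problem, the following Python sketch shows a verb + path permission check that tolerates dynamic URI segments, in the spirit of the modification applied to the PEP Proxy. The `<entity_id>` placeholder syntax and the helper itself are illustrative assumptions; the actual implementation in the modified GE may differ.

```python
import re


def permission_matches(permission_verb, permission_path, verb, path):
    """Check a request (verb + path) against a permission whose path may
    contain <...> placeholders standing for dynamic URI segments."""
    if permission_verb != verb:
        return False
    # Turn each "<...>" placeholder into a regex matching one path segment.
    pattern = re.sub(r"<[^>]+>", r"[^/]+", permission_path)
    return re.fullmatch(pattern, path) is not None


# A permission on /v2/entities/<entity_id> now covers any concrete id:
print(permission_matches("GET", "/v2/entities/<entity_id>",
                         "GET", "/v2/entities/waste:1"))  # True
```

Without such wildcard handling, a separate permission would be needed for every entity id, which is exactly the limitation found in the IDM.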
The PEP Proxy is deployed in front of the Back-End service, turning the service
endpoint into the PEP Proxy endpoint and redirecting the filtered requests to the service. The
Back-End application must be registered in the IDM with the OAuth2 mechanism, which
enables users to log in with a FIWARE account. The generated OAuth2 token must be
included in the HTTP headers of each request sent to the Back-End service. If the token
included in the request is valid, the PEP Proxy redirects the request to the Back-End.
The WILMA PEP Proxy GE provides three levels of security, as described in section
4.6.4 of D2.1 [3]:
• Level 1 (Authentication): PEP proxy checks if the token included in the request
corresponds to an authenticated user in FIWARE IDM.
• Level 2 (Basic Authorization): the PEP Proxy checks user authentication as defined in
Level 1, and it also checks the roles and permissions configured for that user,
allowing access to the resource specified in the request (based on the HTTP verb
+ path);
• Level 3 (Advanced Authorization): the PEP Proxy provides the same security level as
Level 2, so it checks user authentication, roles, and permissions, with the only
difference that Level 3 uses the XACML standard language to define the
permissions.
In the Waste4Think project, only Level 1 and Level 2 have been considered to secure the
Back-End (see Figure 26 and Figure 27).
The Docker image used in the Waste4Think project is a custom version of the official
version 6.2, and it is hosted on Docker Hub at the following link:
https://hub.docker.com/r/waste4think/pep_proxy/.
5.2.3. Installation/Configuration
In the Waste4Think project, FIWARE PEP Proxy has been deployed by using Docker
technologies.
There are different ways to deploy the FIWARE PEP Proxy GE. More information on this can
be found in the official FIWARE PEP Proxy GE Installation & Administration Manual [12].
Two different ways can be used to deploy the PEP Proxy using Docker.
• Installation by using docker-compose.
The docker-compose method has been used to deploy the PEP Proxy component in the
Waste4Think context.
Table 41 presents the docker-compose.yml file used to deploy the FIWARE PEP Proxy
instance in the Waste4Think Back-End environment.
version: "3"
services:
  pep-proxy:
    image: waste4think/pep_proxy:release-6.2
    volumes:
      - /data/docker_pepproxy/conf/config.js:/opt/fiware-pep-proxy/config.js
    ports:
      - "82:82"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 41 – PEP Proxy Docker compose file
Image: The PEP Proxy Docker image used is a custom version (more details can be found in
section 5.1 of this deliverable) hosted in the Waste4Think organization on Docker Hub.
Volumes: A Docker volume has been created to map the PEP Proxy configuration file
"config.js" to an external folder.
Ports: The PEP Proxy service has been configured to run on port 82.
Table 42 presents the configuration file "config.js" used to configure the FIWARE PEP Proxy
instance in the Waste4Think Back-End environment.
config = {};

// Used only if https is disabled
config.pep_port = 82;

// Set this var to undefined if you don't want the server to listen on HTTPS
config.https = {
    enabled: false,
    cert_file: 'cert/cert.crt',
    key_file: 'cert/key.key',
    port: 443
};

config.account_host = 'http://192.168.229.62:8000';

config.keystone_host = '192.168.229.62';
config.keystone_port = 5000;

config.app_host = '192.168.229.62';
config.app_port = '1026';
// Use true if the app server listens in https
config.app_ssl = true;

// Credentials obtained when registering PEP Proxy in Account Portal
config.username = 'pep_proxy_1d2c28cb9dc140629efd669a36023242';
config.password = 'f5c42df5e4304961a810417ceb5ac8b6';

// in seconds
config.cache_time = 300;

// if enabled PEP checks permissions with AuthZForce GE.
// only compatible with oauth2 tokens engine
//
// you can use custom policy checks by including programatic scripts
// in policies folder. An script template is included there
config.azf = {
    enabled: false,
    protocol: 'http',
    host: '192.168.229.62',
    port: 83,
    custom_policy: undefined // use undefined for default policy checks (HTTP verb + path).
};

// list of paths that will not check authentication/authorization
// example: ['/public/*', '/static/css/']
config.public_paths = [];

// options: oauth2/keystone
config.tokens_engine = 'oauth2';
config.magic_key = undefined;

module.exports = config;
Table 42 – PEP Proxy config file
Access to the container of the FIWARE PEP Proxy: docker exec -it <pep-proxy-container> bash
In the Waste4Think context, a private instance of FIWARE IDM (see Figure 28) has been
deployed and configured. It is running at the following link:
http://backend.waste4think.eu:8000/.
The Docker image used in the Waste4Think project is the version 6.2, and it is hosted
on Docker Hub at the following link: https://hub.docker.com/r/waste4think/fiware-idm/.
Installation/Configuration
In the Waste4Think project, FIWARE IDM has been deployed by using Docker technologies.
There are different ways to deploy the FIWARE IDM service. More information on this can be
found in the official FIWARE IDM Installation & Administration Manual [11].
The running containers can be checked with the command: docker ps
Table 43 presents the docker-compose.yml file used to deploy the FIWARE IDM version 6.2
instance in the Waste4Think Back-End environment.
version: "3"
services:
  idm:
    image: waste4think/fiware-idm:latest
    hostname: keyrock
    container_name: keyrock
    ports:
      - "8000:8000"
      - "5000:5000"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 43 – IDM docker compose file
Image: The IDM Docker image used is the version 6.2, stored in the Waste4Think
organization on Docker Hub.
Ports: Port 8000 has been configured for the IDM HORIZON component (Front-End),
and port 5000 for the IDM KEYSTONE component (Back-End).
Access to the container of the FIWARE IDM: docker exec -it <idm-container> bash
The Docker image used in the Waste4Think project is the version 8.0.1, and it is
hosted in the Waste4Think organization on Docker Hub at the following link:
https://hub.docker.com/r/waste4think/authzforce/.
5.4.3. Installation/Configuration
In the Waste4Think project, FIWARE AuthZForce has been deployed by using Docker
technologies.
There are different ways to deploy the FIWARE AuthZForce service. More information on
that can be found in the official FIWARE AuthZForce Installation & Administration Manual
[10].
Table 44 presents the docker-compose.yml file used to deploy the FIWARE AuthZForce
component in the Waste4Think Back-End environment.
version: "3"
services:
  authz:
    image: waste4think/authzforce:release-8.0.1
    ports:
      - "83:8080"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 44 – AuthZForce docker compose file
Image: The AuthZForce Docker image used is the version 8.0.1, stored in the Waste4Think
organization on Docker Hub.
Ports: The AuthZForce service has been configured to run on port 83 (mapped to the
internal port 8080).
Access to the container of FIWARE AuthZForce: docker exec -it <authz-container> bash
In the Waste4Think IDM instance it was necessary to configure the following objects:
• User: any signed-up user, able to identify themselves with an email and password.
Users can be assigned rights individually or as a group.
• Role: a role is a descriptive bucket for a set of permissions. A role can be assigned to
either a single user or an organization. A signed-in user gains all the permissions
from all of their own roles plus all of the roles associated with their organization.
5.5.1. Applications
This section shows the Back-End applications registered on IDM (see Figure 29).
Application Description
Table 46 shows the list of authorized users and their roles associated with the
IDM "waste4think" application.
Organization          User            Role
SEVESO (seveso)       Seveso User     PILOT (Pilot)
ZAMUDIO (zamudio)     Zamudio User    PILOT (Pilot)
CASCAIS (cascais)     Cascais User    PILOT (Pilot)
HALANDRI (halandri)   Halandri        PILOT (Pilot)
specific to IDM.
5.5.3. Permissions
Table 47 lists the IDM permissions for each role.
Role                              Permission                                      Description
PILOT (pilot)                     POST v2/op/update                               Permission to create/update entities in batch operations.
ADMIN (Admin)                     Version                                         Permission to get the FIWARE Orion version.
ADMIN (Admin)                     GET v2/entities//attrs                          Permission to retrieve entity attributes.
ADMIN (Admin)                     POST v2/op/update                               Permission to create/update entities in batch operations.
ADMIN (Admin)                     POST v2/entities//attrs                         Permission to update or append entity attributes.
PROVIDER/OWNER (Provider/Owner)   Get and assign all public application roles     Permission to retrieve and assign roles.
PROVIDER/OWNER (Provider/Owner)   Manage Authorizations                           Permission to authorize users in the application and add them to it.
PROVIDER/OWNER (Provider/Owner)   Get and assign all internal application roles   Permission to assign internal application roles, as opposed to the public ones.
Dashboards
The PyFiware Connector is a Python library that provides a unique endpoint to access the
information stored in the Waste4Think Back-End (the Orion Context Broker and the History
Module). PyFiware makes the communication with the FIWARE Context Broker and the use
of the OAuth protocol transparent to the application consumer.
Source code
Installation/Configuration
• To configure PyFiware to read from the Orion Context Broker, the user needs to
define two sets of parameters: a) those for the specific instance of the Orion Context
Broker (host, service, service path, and the OAuth connector), and b) the parameters
relevant to the OAuth specification (id, secret, user and password). Table 48 shows
an example configuration of the PyFiware Connector to access the Context Broker.
from pyfiware import OrionConnector
from pyfiware.oauth import OAuthManager

oauth = OAuthManager(
    oauth_server_url='http://backend.waste4think.eu:8000/oauth2',
    client_id="3bb5a3ee06854161a05bfdcdeab7c1cf",
    client_secret="82e2f867b9db441ea0dd3659e05cbdcc",
    user="josu.bermudez@deusto.es",
    password="82e2f867b9db441ea0dd3659e05cbdcc"
)
oc = OrionConnector(
    host="http://backend.waste4think.eu/",
    service="waste4think",
    service_path="/#",
    oauth_connector=oauth
)
oc.search("SortingType")[3]['color']['value']
Table 48 – Example configuration of the PyFiware connector to access the Context Broker
• To configure PyFiware to read from the History module, the user needs to define two
sets of parameters: a) those for the specific instance of the History module (host, service,
service path, and the OAuth connector), and b) the parameters relevant to the OAuth
specification (id, secret, user and password). Table 49 shows an example
configuration of the PyFiware Connector to access the History Module.
from pyfiware import HistoryConnector
from pyfiware.oauth import OAuthManager

oauth = OAuthManager(
    oauth_server_url='http://backend.waste4think.eu:8000/oauth2',
    client_id="3bb5a3ee06854161a05bfdcdeab7c1cf",
    client_secret="82e2f867b9db441ea0dd3659e05cbdcc",
    user="josu.bermudez@deusto.es",
    password="82e2f867b9db441ea0dd3659e05cbdcc"
)
hc = HistoryConnector(
    host="http://history.waste4think.eu/",
    oauth_connector=oauth
)
scenario = "/"
6.2. Sensors
The information provided by the different Waste4Think pilots can originate either
from data coming from on-site sensors (lock systems, weighing systems, GPS systems,
SCADAs in treatment plants, interaction with the apps and games, etc.) and/or from data
coming from legacy systems that are already deployed in the pilots (MAWIS, Ge.R.A., ILink).
As described in section 9 of Deliverable D2.1 [3], Waste4Think pilots can opt for different
methods to upload their data (sensors, legacy systems, etc.) to the Back-End (see Figure 32).
These systems provide pilots with different approaches to handle data uploading, with the aim
of covering several scenarios such as near real-time processes, batch processes and upload
processes that require user interaction. The Waste4Think Back-End provides three methods
to upload information coming from the sensors or the legacy systems: the NGSI
Connector API (Section 6.2.1), the NGSI Connector WEB (Section 6.2.2) and the NGSI v2
APIs (Section 6.2.3).
When connecting a sensor or legacy system to the Back-End, the first decision is which
connector to use. This decision depends on the frequency of data availability and
on the technical possibility of connecting the different software systems and/or sensors
directly to the Waste4Think Back-End. Nevertheless, regardless of the connector being used, prior to
sending the information, the sensor layer or software system must format the data gathered
into the NGSI compliant Waste4Think data model, which is specified in Section 7 of
Deliverable D2.1 [3]. Once formatted, the information is sent through one of the connectors
previously mentioned to the Waste4Think Back-End, stored in the Orion Context Broker and
subsequently in the History module. This information is then available for all the modules in
the Front-End (Section 6.1).
6.2.1. NGSIConnectorAPI
Service overall description
The NGSIConnectorAPI is a RESTful web service that allows authorised Waste4Think users
to create, update and retrieve Waste4Think context entities in the Back-End platform.
NGSIConnectorAPI endpoint
Source code
The source code of the NGSIConnectorAPI is hosted on GitLab at the following link:
http://dev.waste4think.eu/waste4think/NgsiConnector.
The Docker image used in the Waste4Think project is hosted in the Waste4Think
organization on Docker Hub and is available at the following link:
https://hub.docker.com/r/waste4think/ngsi-connector/.
Installation/Configuration
Table 50 presents the docker-compose file used to configure and deploy the
NGSIConnectorAPI in the Waste4Think Back-End.
version: "3"
services:
  connector:
    image: waste4think/ngsi-connector:3.0.0
    volumes:
      - /data/docker_connector/conf/config.js:/opt/ocbconnector/config.js
    ports:
      - "3000:3000"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 50 – NGSIConnectorAPI docker compose file
Image: The NGSIConnectorAPI Docker image used is the version 3.0.0, stored in the
Waste4Think organization on Docker Hub.
Ports: The NGSIConnectorAPI service has been configured to run on port 3000.
Volumes: A Docker volume has been created to map the configuration file "config.js"
(/data/docker_connector/conf/config.js) into the container (/opt/ocbconnector/config.js).
Configuration file
Table 51 presents the configuration file used by the NGSIConnectorAPI service.
config.orion_url:
The FIWARE Orion URL. In the Waste4Think context the FIWARE Orion Context Broker is
protected by the PEP Proxy, so the value 'http://192.168.229.62:82/' represents the URL
of the PEP Proxy service.
config.api_port:
NGSIConnectorAPI port.
config.ext:
Supported file extensions to upload/update context information in the FIWARE Orion Context
Broker. At the time of writing this deliverable, the NGSIConnectorAPI supports the csv and
json extensions.
config.https:
Runs the NGSIConnectorAPI in HTTPS mode. Admitted values are "true" and "false".
config.returnEntities:
The number of entities to get from the Context Broker. The FIWARE Orion Context Broker
returns 20 entities by default when using GET all-entities operations. With this option, the
number can be increased up to 1000.
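As an illustration of this behaviour, the Orion Context Broker accepts `limit` and `offset` query parameters to raise the page size and walk through larger result sets. The helper below only builds the query strings; the host and port are this deployment's PEP Proxy values.

```python
import urllib.parse


def entities_url(base, limit=1000, offset=0):
    """Build an Orion NGSI v2 entities URL with explicit paging parameters.
    Orion returns 20 entities per request by default; limit can be raised
    up to 1000, and larger result sets are paged with offset."""
    query = urllib.parse.urlencode({"limit": limit, "offset": offset})
    return f"{base}/v2/entities?{query}"


# First two pages of a large result set:
print(entities_url("http://192.168.229.62:82"))
print(entities_url("http://192.168.229.62:82", offset=1000))
```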
• General description of the project, including developers, license, etc. (see
Figure 33)
• Information on data models defined in the Waste4Think project (see Figure 35).
Annex C contains an exploded view of each API call definition, which includes i)
implementation notes, ii) query parameters and iii) the response data model.
6.2.2. NGSIConnectorWEB
Service overall description
The NGSIConnectorWEB is a web application which exposes the NGSIConnectorAPI
functionalities and provides Waste4Think users with a graphical interface to work with them.
On top of the functionalities provided by the NGSIConnectorAPI, the NGSIConnectorWEB is
also responsible for access token management, specifically for automating the process of
token retrieval.
NGSIConnectorWEB endpoint
http://backend.waste4think.eu:3001
Source code
The source code of the NGSIConnectorWEB is hosted at the following link:
http://dev.waste4think.eu/waste4think/NgsiConnectorWeb
The Docker image is hosted in the Waste4Think organization on Docker Hub at the
following link:
https://hub.docker.com/r/waste4think/ngsi-connector-web/
Installation/Configuration
Table 52 presents the docker-compose file used to configure and deploy the
NGSIConnectorWEB in the Waste4Think Back-End.
version: "3"
services:
  connector:
    image: waste4think/ngsi-connector-web:2.0.1
    volumes:
      - /data/docker_connector_web/conf/config.js:/opt/NgsiConnectorWeb/config.js
    ports:
      - "3001:3001"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 52 – NGSIConnectorWEB docker-compose file
Image: The NGSIConnectorWEB Docker image used is the version 2.0.1, stored in the
Waste4Think organization on Docker Hub.
Ports: The NGSIConnectorWEB service has been configured to run on port 3001.
Volumes: A Docker volume has been created to map the configuration file "config.js"
(/data/docker_connector_web/conf/config.js) into the container
(/opt/NgsiConnectorWeb/config.js).
Configuration file
"url": "https://192.168.229.62:3000/v1",
"port": 3001,
"clientID": "3bb5a3ee06854161a05bfdcdeab7c1cf",
"clientSecret": "82e2f867b9db441ea0dd3659e05cbdcc"
Table 53 – NGSIConnectorWEB configuration file.
url:
NGSIConnectorAPI endpoint URL.
port:
NGSIConnectorWEB port.
clientID:
clientSecret:
ClientID and ClientSecret of the “Waste4think” application defined in the FIWARE IDM
instance (see section 5.3 of this deliverable) to get access token in NGSIConnectorWEB
interface.
NGSIConnectorWEB interface
Login page
The NGSIConnectorWEB login page is used to authenticate Waste4Think users (see Figure
36).
Home page
The home page provides quick access to the most used functionalities, as well as a
navigation menu to all the available pages (see Figure 37).
Entities page
The Entities page provides the functionality to get all entities by specifying the Fiware-Service
and Fiware-ServicePath (see Figure 38).
Details page
The Details page provides the functionality to get a single entity by specifying the Fiware-
Service and Fiware-ServicePath (see Figure 39).
Type page
The Type page provides the functionality to get entities by specifying the Entity Type,
Fiware-Service and Fiware-ServicePath (see Figure 40).
Create page
The Create page provides the functionality to create entities from a user-supplied file,
specifying the Fiware-Service and Fiware-ServicePath (see Figure 41).
Update page
The Update page provides the functionality to update entities from a user-supplied file,
specifying the Fiware-Service and Fiware-ServicePath (see Figure 42).
Rules page
The Rules page provides the functionality to show information about the available rules
defined in the Waste4Think data model (see Figure 43).
Access token
The NGSIConnectorWEB access token form (see Figure 44) provides a function to retrieve
the access token using FIWARE IDM credentials (see section 5.3 of this deliverable).
Data sources
Data sent to the NGSIConnector tools can come from different sources; each
source represents a certain use case of the Waste4Think project.
Sensors
All the sensors/devices that act as input to the waste management system.
Third-Party Systems
All the Third-Party waste management systems (e.g. MOBA Systems, Wintarif, GeRa) that
provide data to the Waste4Think Back-End.
These sources comprise a series of different input mechanisms such as APPs, mobile
games, surveys, learning materials and other monitoring actions that feed information to the
system.
Data formats
Data coming from the different sources (sensors, third-party systems and other sources)
can be sent to the Back-End, through the NGSIConnector tools, in various formats
depending on its source.
CSV:
CSV files (see Table 54) used to store and send data to the Back-End must fulfil the
conditions set by the project design; these conditions are the main building blocks by which
data is modelled and sent to the Back-End services.
id,family,type,name,description,refCategory,wastecode,definitionSource
waste:1,Resource,Waste,Biowaste,,wastecategory:1,200108,easetech
waste:2,Resource,Waste,Meat and bone meal,,wastecategory:1,200108,easetech
waste:3,Resource,Waste,Rape meal,,wastecategory:1,200108,easetech
Table 54 – Waste4Think CSV file
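The mapping from such a CSV row into an NGSI entity can be sketched as follows. This is an illustrative sketch, not the actual connector code: every column other than id and type is treated here as a String attribute, whereas the real tool also applies the rules described later in this section.

```python
import csv
import io

# First data row of Table 54, inlined for the example.
CSV_DATA = """id,family,type,name,description,refCategory,wastecode,definitionSource
waste:1,Resource,Waste,Biowaste,,wastecategory:1,200108,easetech
"""


def row_to_entity(row):
    """Map one CSV row to an NGSI v2 entity: 'id' and 'type' stay top-level,
    every other column becomes a String attribute."""
    entity = {"id": row.pop("id"), "type": row.pop("type")}
    for name, value in row.items():
        entity[name] = {"type": "String", "value": value, "metadata": {}}
    return entity


entities = [row_to_entity(r) for r in csv.DictReader(io.StringIO(CSV_DATA))]
print(entities[0]["name"]["value"])  # Biowaste
```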
id,family,type,%%"metadata":{"unit":{"value":"C62","type": "String"}}%%name
waste:1,Resource,Waste,Biowaste
waste:2,Resource,Waste,Meat and bone meal
waste:3,Resource,Waste,Rape meal
Table 55 – Waste4Think CSV metadata
[
  {
    "id": "waste:1",
    "type": "Waste",
    "family": {
      "type": "String",
      "value": "Resource",
      "metadata": {}
    },
    "name": {
      "type": "String",
      "value": "Biowaste",
      "metadata": {
        "unit": {
          "value": "C62",
          "type": "String"
        }
      }
    }
  }
]
Table 56 – Waste entity with metadata
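The `%%…%%` marker used in Table 55 embeds per-column metadata in the CSV header. A sketch of how such a header cell can be split into metadata and column name is shown below; the exact parsing logic of the NGSIConnector tools may differ.

```python
import json
import re


def split_header_cell(cell):
    """Split a CSV header cell into (metadata, column name). A cell of the
    form %%<json-fragment>%%<name> carries metadata for that column; the
    fragment is a JSON object body without the surrounding braces."""
    match = re.fullmatch(r'%%(.*)%%(.*)', cell)
    if not match:
        return {}, cell
    fragment, name = match.groups()
    return json.loads("{" + fragment + "}"), name


meta, name = split_header_cell(
    '%%"metadata":{"unit":{"value":"C62","type": "String"}}%%name')
print(name)                               # name
print(meta["metadata"]["unit"]["value"])  # C62
```

The extracted metadata object is what ends up attached to the attribute, as shown in Table 56.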
JSON:
JSON files (see Table 57) used to send data to the Back-End through the NGSIConnector
tools must be structured following the FIWARE NGSI v2 standard.
[
  {
    "id": "route:1",
    "type": "Route",
    "arrivalPoint": {
      "metadata": {},
      "type": "StructuredValue",
      "value": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": [
            "9.15241",
            "45.6388"
          ]
        }
      }
    },
    "departurePoint": {
      "metadata": {},
      "type": "StructuredValue",
      "value": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": [
            "9.15238",
            "45.63904"
          ]
        }
      }
    },
    "description": {
      "metadata": {},
      "type": "String",
      "value": ""
    }
  }
]
Table 57 – Waste4Think JSON file data
Request body:
The data structure is the same as for JSON files, but the data is not stored in a file; it is
included directly in the API request body. Since the structure is the same as for JSON files,
each entity must also contain "id" and "type".
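A minimal pre-flight check of this constraint can be sketched in Python; the helper name is illustrative:

```python
def check_entities(payload):
    """Return the positions of entities missing the mandatory
    'id' or 'type' keys required by the NGSI v2 structure."""
    bad = []
    for position, entity in enumerate(payload):
        if "id" not in entity or "type" not in entity:
            bad.append(position)
    return bad


# The second entity lacks "type", so it is reported:
print(check_entities([{"id": "route:1", "type": "Route"}, {"id": "x"}]))  # [1]
```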
Rules
The NGSIConnector tools contain a "rules" module to check both the structure and the
integrity of the data that is being sent to the FIWARE Back-End.
Rules (see Table 58) structure the data being sent to the FIWARE Back-End and are
created by the W4T system admin using the value of the entity's type attribute as naming
convention (e.g. WasteTransaction.js).
const WasteTransaction = {
    id: rules.idCheck,
    type: rules.typeCheck,
    family: rules.stringCheck,
    refEmitter: rules.mandatoryCheck,
    refReceiver: rules.mandatoryCheck,
    refCapturer: rules.stringCheck,
    date: rules.mandatoryCheck,
    emittedResources: rules.structuredListMandatory,
    receivedResources: rules.structuredListMandatory,
    incidence: rules.stringCheck,
    incidenceReason: rules.stringCheck
};

module.exports = WasteTransaction;
Table 58 – NgsiConnectorApi Waste Transaction rule example
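The way such a rule table is applied can be sketched in Python as follows. The real NgsiConnectorApi checks are implemented in JavaScript; the check functions below are simplified stand-ins for rules.mandatoryCheck and rules.stringCheck, and the rule subset shown is illustrative.

```python
def mandatory_check(entity, attr):
    """Stand-in for rules.mandatoryCheck: the attribute must be present."""
    return attr in entity


def string_check(entity, attr):
    """Stand-in for rules.stringCheck: if present, the value must be a string."""
    return attr not in entity or isinstance(entity[attr]["value"], str)


# A reduced rule table in the style of WasteTransaction.js above.
WASTE_TRANSACTION_RULES = {
    "refEmitter": mandatory_check,
    "refReceiver": mandatory_check,
    "incidence": string_check,
}


def validate(entity, rules):
    """Return the attribute names whose rule checks fail."""
    return [attr for attr, check in rules.items() if not check(entity, attr)]


entity = {"id": "wt:1", "type": "WasteTransaction",
          "refEmitter": {"value": "citizen:1"}}
print(validate(entity, WASTE_TRANSACTION_RULES))  # ['refReceiver']
```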
Data upload/update
Waste4Think data coming from the different sources (e.g. sensors, third-party systems,
APPs, forms) does not have the data structure required by the FIWARE Back-End.
NgsiConnectorApi and NgsiConnectorWeb are the connector tools used to upload or update
data in the FIWARE Back-End.
These connectors provide pilots with different approaches to handle data uploading, with the
aim of covering different scenarios such as near real-time processes, batch processes and
upload processes that require user interaction.
In this case the sensors can send context data in an automated way to the Back-End by
using the NGSIConnectorAPI directly with the request-body option, since the data typically
contains the values of a single entity.
curl -X POST \
  http://backend.waste4think.eu/connector/entities \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: waste4think' \
  -H 'Fiware-ServicePath: /deusto/w4t/seveso/real' \
  -H 'X-Auth-Token: 1u374V8CFfc822kzfEcCNE0aKDLsyK' \
  -d '[
    {
      "id": "route",
      "type": "Route",
      "arrivalPoint": {
        "metadata": {},
        "type": "StructuredValue",
        "value": {
          "type": "Feature",
          "geometry": {
            "type": "Point",
            "coordinates": [
              "9.15251",
              "45.63864"
            ]
          }
        }
      }
    }
  ]'
Table 59 – NgsiConnectorApi upload data in near real-time
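The same call can be issued from Python with the standard library only. This is an illustrative sketch: the token is the example value from Table 59, and the entity payload is abbreviated.

```python
import json
import urllib.request


def build_upload_request(entities, token):
    """Build the POST request of Table 59: an NGSI entity list in the body,
    with the Fiware-Service/ServicePath and X-Auth-Token headers."""
    return urllib.request.Request(
        "http://backend.waste4think.eu/connector/entities",
        data=json.dumps(entities).encode(),
        headers={
            "Content-Type": "application/json",
            "Fiware-Service": "waste4think",
            "Fiware-ServicePath": "/deusto/w4t/seveso/real",
            "X-Auth-Token": token,
        },
        method="POST",
    )


req = build_upload_request([{"id": "route", "type": "Route"}], "my-token")
# urllib.request.urlopen(req) would send it to the Back-End.
print(req.get_method())  # POST
```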
The third-party system collects and handles data in its own data management system. A
batch process retrieves the data and stores it in a CSV or JSON file compliant with the data
formats described in section 6.2.3 "Data formats", and finally sends the file with the entities
to the NGSIConnectorAPI using the userFile option.
curl -X POST \
  http://backend.waste4think.eu/connector/entities \
  -H 'Content-Type: multipart/form-data' \
  -H 'Fiware-Service: waste4think' \
  -H 'Fiware-ServicePath: /deusto/w4t/seveso/real' \
  -H 'X-Auth-Token: 1u374V8CFfc822kzfEcCNE0aKDLsyK' \
  -F 'userFile=@C:\Users\waste.csv'
Table 60 – NgsiConnectorApi massive upload of data from a CSV file
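The equivalent multipart upload can also be built with the Python standard library. In practice an HTTP client library would handle the multipart encoding, so the sketch below is only illustrative; the file content and token are example values.

```python
import io
import urllib.request
import uuid


def build_file_upload(csv_bytes, filename, token):
    """Build the multipart POST of Table 60 carrying a CSV file in the
    'userFile' form field."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n"
               f'Content-Disposition: form-data; name="userFile"; '
               f'filename="{filename}"\r\n'
               f"Content-Type: text/csv\r\n\r\n".encode())
    body.write(csv_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return urllib.request.Request(
        "http://backend.waste4think.eu/connector/entities",
        data=body.getvalue(),
        headers={
            "Content-Type": f"multipart/form-data; boundary={boundary}",
            "Fiware-Service": "waste4think",
            "Fiware-ServicePath": "/deusto/w4t/seveso/real",
            "X-Auth-Token": token,
        },
        method="POST",
    )


req = build_file_upload(b"id,type\nwaste:1,Waste\n", "waste.csv", "my-token")
print(req.get_method())  # POST
```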
Details about the FIWARE NGSI v2 APIs have already been described in Deliverable D2.1 [3].
a) Zamudio
The waste containers in Zamudio (Figure 40) are provided with an RFID tag that uniquely
identifies each container and an e-lock which can only be opened by using one of the citizen
cards specific to the municipality. On collection, the garbage truck reads the tag and weight
of the container, as well as the information stored in the e-lock. This information is sent to
the TruckDataApp (Annex G of Deliverable D2.1 [3]) along with the GPS information of the
route. When the garbage truck finishes the collection route and returns back to the garage,
the TruckDataApp formats the gathered data according to the specifications of the
Waste4Think data model and sends it over to the Waste4Think Back-End using the NGSI
connectors previously detailed.
b) Halandri
The waste containers in Halandri (Figure 41) are provided with an RFID tag that uniquely
identifies each container. On collection, the garbage truck reads the tag and weight of the
container. This information is registered in the PowerFleet platform along with the GPS
information of the route. A specific connector gathers the data from the PowerFleet platform
according to the specifications of the Waste4Think data model and sends it over to the
Waste4Think Back-End using the NGSI connectors previously detailed.
c) Seveso
Seveso already had a system that covered several aspects of waste management. In this
case, the bags for residual waste in Seveso (Figure 42) are provided with an RFID tag that
makes it possible to identify which user has used which bag. On collection, the garbage truck reads the
tag of the bag and sends this data to WinTarif (Deliverable D5.1 [28]) which has
implemented a special routine that makes use of the NGSI connectors previously detailed to
send information to the Waste4Think Back-End on a daily basis. Ideally, Gelsia could also
implement a specific routine to send the data to the Back-End. However, due to them not
being part of the project consortium, it has been difficult to engage them and convince them
to share the information. To overcome this issue, several predefined routes have been
identified based on real data collected by Gelsia and uploaded to the Back-End.
d) Cascais
Cascais already had a system that covered several aspects of waste management. The
underground containers in Cascais (Figure 43) feature a filling sensor and an electronic key-
lock which identifies those volunteers that make use of them. The electronic key-lock sends
continuous information to the EMZ platform (EMZ is the company responsible for the key-
locks) via GPRS which is then collected by MAWIS (Deliverable D2.5 [27]) through an
interface implemented for that particular purpose. On collection, the garbage truck reads the
tag of the container along the information from the GPS through the onboard CPU, known as
MiniOperand. Once the collection route is finished, the MiniOperand sends the data over to
MAWIS. Whenever MAWIS receives new information, either about the use of a key-lock, a
new measure of the filling sensor on a specific container, o a finished route, it sends the
information over to the Waste4Think Back-End using the NGSI connectors previously
detailed.
• Biomass production from food/kitchen waste monitoring (R19), Section 7.3 of D3.1
[25];
• Biofuel and compost production from disposable nappies (R20), Section 8 of
D3.3 [26];
In the case of the status of the operation, the CSV output files produced by the SCADA
systems of the treatment plants will be parsed by the NGSIConnectorAPI and the information
will reach the corresponding attributes of the data model. For the waste processed, on the
other hand, manually prepared CSV files will be uploaded and parsed by the
NGSIConnectorAPI. Both procedures will be performed after every operation batch of the
treatment plant. Further information about the specifications of the SCADA systems can be
found in Deliverable D3.1, Section 7.3 for R19, and Deliverable D3.3, Section 8 for R20.
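As an illustrative sketch of this parsing step, one SCADA status row could be mapped to an NGSI-style entity as follows. The column names, the entity type `TreatmentStatus`, and the id scheme are assumptions loosely based on Table 61, not the NGSIConnectorAPI's actual rules:

```python
import csv
import io
import json

# Hypothetical column names, loosely following Table 61 (R20 status file).
FIELDS = ["time", "ph_methanogenic", "temp_methanogenic",
          "ph_acidogenic", "temp_acidogenic"]

def row_to_entity(row, scenario="deusto:w4t:zamudio:real"):
    """Map one SCADA CSV row to an NGSI-style entity payload."""
    def attr(value):
        # The SCADA output contains "Bad" when a probe fails; keep it as Text.
        try:
            return {"type": "Number", "value": float(value), "metadata": {}}
        except ValueError:
            return {"type": "Text", "value": value, "metadata": {}}
    return {
        "id": f"{scenario}:treatment:{row['time']}",   # hypothetical id scheme
        "type": "TreatmentStatus",                     # hypothetical entity type
        **{k: attr(row[k]) for k in FIELDS[1:]},
    }

sample = "02/10/2015 11:59,9.05696,44.4416,5.80772,15.3927\n"
reader = csv.DictReader(io.StringIO(sample), fieldnames=FIELDS)
entity = row_to_entity(next(reader))
print(json.dumps(entity, indent=2))
```

The resulting JSON document has the same attribute structure (`type`/`value`/`metadata`) as the entity examples shown later in this section.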
Table 61 and Table 62 show an example of the CSV file detailing the status of the treatment
operation for R19 and R20. Table 63 and Table 64 show an example of the CSV file detailing
the amount of waste processed for R19 and R20.
Time | pH Methanogenic Reactor | temp Methanogenic Reactor | pH Acidogenic Reactor | temp Acidogenic Reactor
02/10/2015 10:52 | Bad | Bad | Bad | Bad
02/10/2015 10:55 | Bad | Bad | Bad | Bad
02/10/2015 11:59 | 9.05696 | 44.4416 | 5.80772 | 15.3927
02/10/2015 12:00 | 9.05696 | 44.4387 | 5.80732 | 15.3939
02/10/2015 16:00 | 9.08748 | 42.6506 | 5.80543 | 15.3902
02/10/2015 20:00 | 9.11096 | 40.8522 | 5.80579 | 15.2838
Table 61 - Example CSV file with the information of the status in R20
Day | Temperature (°C) | Moisture Content (sensor) (%) | pH (manually) | Electrical conductivity (EC) (manually) [mS/cm] | Total solids (TS) [g] | TS [%] | Volatile solids (VS) [g] | TOC [%] | N [%] | N (TKN) (mg/g)
1 | - | - | - | | | | | | |
2 | 52.1 | - | | | | | | | |
3 | 57.7 | 39.12 | 6.3 | - | | | | 50.560 | 2.222 | 22.217
4 | 55.4 | 35.99 | - | - | | | | | |
5 | 61.6 | 35.22 | - | - | | | | 49.393 | 2.286 | 22.861
6 | 61.4 | 35.19 | 6.27 | 2.08 | | | | | |
7 | 62 | 39.6 | 6.6 | 2.2 | 1.74 | 62.81 | 1.4021 | 47.185 | 2.437 | 24.366
8 | 60 | 38.58 | 6.91 | 2.28 | 1.55 | 66.41 | 1.3683 | | |
9 | 59.2 | 39.55 | 6.82 | 2.28 | 2.01 | 67.56 | 1.6906 | 48.762 | 2.439 | 24.388
10 | 49.6 | 35.64 | 6.82 | 2.28 | - | - | - | | |
11 | 55.9 | 45.03 | - | - | - | - | - | | 2.657 | 26.570
12 | 61.6 | - | - | - | - | - | - | | |
13 | 57.2 | 41.24 | 6.93 | 2.4 | 1.32 | 60.10 | 1.0981 | | |
14 | 62 | 42.95 | - | - | - | - | - | | |
15 | 56.8 | 45.52 | 7.22 | 2.46 | 1.67 | 54.46 | 1.3959 | | |
16 | 46.7 | 44.57 | 7.24 | 2.4 | 2.05 | 55.91 | 1.7154 | | |
17 | 51.6 | 41.96 | 7.41 | 2.48 | 2.30 | 63.86 | 1.8289 | | |
Table 62 - Example CSV file with the information of the status in R19
Date | 3/1/18 | 4/1/18 | 5/1/18 | 8/1/18 | 9/1/18 | 10/1/18 | 11/1/18 | 12/1/18 | 15/1/18 | 16/1/18
Morning batch
weight of HFW (Kg) | 111 | 115 | 111 | 115 | 115 | 116 | 115 | 116 | 116 | 135
weight of FORBI (Kg) | 22.8 | 17.7 | 27 | 34 | 31.7 | 26 | 26.4 | 17 | 25.2 | 25.9
electricity counter index (start of every cycle) (kW) | 26650 | 26756 | 26843 | 26934 | 27117 | | 27389 | 27560 | 27740 | 27925
electricity counter index (end of every cycle) (kW) | 26756 | 26843 | 26934 | 27026 | 27197 | 27389 | 27475 | 27641 | 27827 | 28036
energy consumption (kW) | 106 | 87 | 91 | 92 | 80 | 27389 | 86 | 81 | 87 | 111
Afternoon batch
{
"id": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"type": "CitizenAppUserMetrics",
"search": {
"type": "Text",
"value": "paper containers",
"metadata": {}
}
}
Table 65 - Example JSON with the information of a user session in the Citizen App
{
"id": "de46fcce-c951-11e8-a8d5-f2801f1b9fd1",
"type": "FoodAppUserTransaction",
"donor": {
"type": "Text",
"value": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"metadata": {}
},
"donee": {
"type": "Text",
"value": "6799a08c-c950-11e8-a8d5-f2801f1b9fd1",
"metadata": {}
},
"product": {
"type": "Text",
"value": "Cooked pasta",
"metadata": {}
},
"weight": {
"type": "Number",
"value": 20,
"metadata": {
"unit": "kg"
}
},
"allergens": {
"type": "Text",
"value": "cheese, nuts"
}
}
Table 66 - Example JSON with the information of a user transaction in the Food App
{
"id": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"type": "SortingGameUserMetrics",
"age": {
"type": "Number",
"value": 27,
"metadata": {}
},
"gender": {
"type": "Text",
"value": "MALE",
"metadata": {}
},
"municipalityName": {
"type": "Text",
"value": "Zamudio",
"metadata": {}
},
"countryName": {
"type": "Text",
"value": "Spain",
"metadata": {}
},
"maxLevel": { // The furthest level the user has reached
"type": "Number",
"value": 2,
"metadata": {}
},
"totalPlayedLevels": { // The total number of level play-throughs, including replayed levels
"type": "Number",
"value": 2,
"metadata": {}
},
"totalPoints": {
"type": "Number",
"value": 150,
"metadata": {}
},
"totalItems_All": {
"type": "Number",
"value": 20,
"metadata": {}
},
"totalIncorrectItems_All": {
"type": "Number",
"value": 4,
"metadata": {}
},
"totalItems_TypeID_1053": { // Total number of items from the sorting type with ID 1053 that have been presented to the user
"type": "Number",
"value": 10,
"metadata": {}
},
"totalIncorrectItems_TypeID_1053": { // Total number of errors in the sorting type with ID 1053 that the user has made
"type": "Number",
"value": 2,
"metadata": {}
},
"totalItems_TypeID_1054": {
"type": "Number",
"value": 5,
"metadata": {}
},
"totalIncorrectItems_TypeID_1054": {
"type": "Number",
"value": 1,
"metadata": {}
},
"totalItems_TypeID_1055": {
"type": "Number",
"value": 5,
"metadata": {}
},
"totalIncorrectItems_TypeID_1055": {
"type": "Number",
"value": 1,
"metadata": {}
}
}
Table 67 - Example JSON with the information of a user session in the Serious Games
The Learning Materials will retrieve information using several online surveys. The results of
these surveys will be processed regularly and introduced into the platform as a UserMetric
entity. To this end, a person will manually download the results of the surveys using the
CSV export tool and will use the NGSIConnectorAPI to upload the information to the
platform. An example of this CSV with information about the Learning Materials is shown in
Table 68.
LM_ID | Age Range | Students involved | Effectiveness | Relevance | Objective | Contents | Motivate | Recipients | Evaluation | … | Data
1 | 5-7 | 12 | 4 | 4 | 2 | 3 | 4 | 5 | 3 | … | 4
2 | 5-7 | 12 | 3 | 5 | 4 | 3 | 5 | 4 | 3 | … | 4
3 | 5-7 | 12 | 4 | 5 | 4 | 3 | 4 | 5 | 4 | … | 5
4 | 5-7 | 12 | 3 | 4 | 5 | 3 | 5 | 6 | 5 | … | 4
6 | 5-7 | 12 | 5 | 5 | 3 | 3 | 3 | 4 | 5 | … | 4
… | … | … | … | … | … | … | … | … | … | … | …
Table 68 - Example CSV with the information of the Learning Materials
7. References
Usage of Docker
• Developers write code locally and share their work with their colleagues using
Docker containers.
• They use Docker to push their applications into a test environment and execute
automated and manual tests.
• When developers find bugs, they can fix them in the development environment and
redeploy them to the test environment for testing and validation.
• When testing is complete, getting the fix to the customer is as simple as pushing
the updated image to the production environment.
Docker’s container-based platform allows for highly portable workloads. Docker containers
can run on a developer’s local laptop, on physical or virtual machines in a data center, on
cloud providers, or in a mixture of environments.
Docker’s portability and lightweight nature also make it easy to dynamically manage
workloads, scaling up or tearing down applications and services as business needs dictate,
in near real time.
Installation of Docker
The first step is to check whether any updates are available for the installed packages
(typically sudo apt-get update). Then, running the following command adds the official
Docker repository, downloads the latest version of Docker, and installs it:
sudo curl -fsSL https://get.docker.com/ | sh
The output of this command should be similar to the following, showing that the service is
active and running.
Installing Docker provides not just the Docker service (daemon) but also the docker
command line utility, or the Docker client.
By default, running the docker command requires root privileges – that is, you have to prefix
the command with sudo. It can also be run by a user in the docker group, which is
automatically created during the installation of Docker. If you attempt to run the docker
command without prefixing it with sudo or without being in the docker group, you will get an
output like this:
To avoid typing sudo to run the docker command, add your username to the docker group:
sudo usermod -aG docker ${USER}
Log out of the Virtual Machine and back in for the setting to take effect. If there is a need to
add some other user that is not logged in, use the following command:
sudo usermod -aG docker username
Docker commands
With Docker installed and working, it is time to become familiar with the command line
utility. Using docker consists of passing it a chain of options and subcommands followed by
arguments. The syntax takes this form:
docker [option] [command] [arguments]
To view system-wide information about Docker, type:
docker info
Docker images
Docker containers are run from Docker images. By default, Docker pulls these images from
Docker Hub, a Docker registry managed by Docker, the company behind the Docker project.
Anybody can build and host their Docker images on Docker Hub, so most applications and
Linux distributions needed to run Docker containers have images hosted on Docker Hub.
To check whether you can access and download images from Docker Hub, type:
docker run hello-world
The output, which should include the following, should indicate that Docker is working
correctly:
You can search for images available on Docker Hub by using the docker command with the
search subcommand. For example, to search for the waste4think image, type:
docker search waste4think
This will crawl Docker Hub and return a listing of all images whose name matches the search
string.
In the OFFICIAL column, OK indicates an image built and supported by the company behind
the project.
To see the images that have been downloaded to the machine, type:
docker images
The output should look like the following (it differs depending on the images that have been
downloaded):
After using Docker for a while, there will be many active (running) and inactive containers on
the machine. To view the active ones, use:
docker ps
To view all containers, active and inactive, pass the -a switch:
docker ps -a
To see the latest container that was created, pass the -l switch:
docker ps -l
To remove a container or an image, use the rm and rmi subcommands respectively:
docker rm container-id
docker rmi image-id
Docker-Compose Service
Compose is a tool for defining and running multi-container Docker applications. Compose
uses a YAML file to configure the application's services. Then, with a single command, you
create and start all the services from your configuration.
version: '3'
services
- The different parts of the application are called services; they are nothing but containers
that will be created with specific commands added to them.
volumes
- The main use of volumes is persisting data on the user machine; once a docker container
is removed, it loses all its saved data. Volumes allow us to map a location on the local
drive to the container's drive or, in some cases, to pass a config file if a service requires
one.
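The notions above can be combined in a minimal docker-compose.yml. The sketch below is illustrative only: the service names, images, ports, and volume paths are assumptions, not the actual Waste4Think configuration.

```yaml
version: '3'
services:
  orion:                            # example service: Orion Context Broker
    image: fiware/orion
    ports:
      - "1026:1026"                 # host port : container port
    command: -dbhost mongo
  mongo:
    image: mongo:3.6
    volumes:
      - ./data/mongo:/data/db      # persist MongoDB data on the host machine
```

With this file in place, running docker-compose up -d in the same directory creates and starts both containers with a single command.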
[{
"id": "5b51b05464493217476ed4ca",
"description": "A subscription to get info about DepositPoint entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "DepositPoint"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:12.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:12.00Z"
},
"throttling": 5
},
{
"id": "5b51b06564493217476ed4cb",
"description": "A subscription to get info about DepositPointType entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "DepositPointType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:29.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:30.00Z"
},
"throttling": 5
},
{
"id": "5b51b07764493217476ed4cc",
"description": "A subscription to get info about Vehicle entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "Vehicle"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:47.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:47.00Z"
},
"throttling": 5
},
{
"id": "5b51b08f64493217476ed4cd",
"description": "A subscription to get info about VehicleType entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "VehicleType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:51:11.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:51:11.00Z"
},
"throttling": 5
},
{
"id": "5b51bed164493217476ed4ce",
"description": "A subscription to get info about SortingType entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "SortingType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T10:52:01.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
}
},
"throttling": 5
}]
Table 69 shows an example of an Orion subscription which notifies entities to the CEP system.
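Subscriptions like those listed above are created through Orion's NGSIv2 API by POSTing the JSON document to /v2/subscriptions. As a sketch, such a payload can be assembled programmatically; the notification URL below is taken from the listing above, while the broker host in the comment is a placeholder:

```python
import json

def make_subscription(entity_type, notify_url, description=None):
    """Build an NGSIv2 subscription payload like the ones shown above."""
    return {
        "description": description or
            f"A subscription to get info about {entity_type} entity",
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": []},   # empty list: trigger on any attribute change
        },
        "notification": {
            "http": {"url": notify_url},
            "attrs": [],                  # empty list: notify all attributes
            "attrsFormat": "normalized",
        },
        "expires": "2040-01-01T14:00:00.00Z",
        "throttling": 5,                  # at most one notification every 5 seconds
    }

payload = make_subscription(
    "DepositPoint",
    "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities",
)
print(json.dumps(payload, indent=2))
# The payload would then be POSTed to http://<orion-host>:1026/v2/subscriptions
# with Content-Type: application/json and the appropriate
# Fiware-Service / Fiware-ServicePath headers.
```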
"areaServed", "dateModified", "dateCreated"
}
},
"notification": {
"http": {
"url": "http://sandwich.geoworldsim.com/api/app/:SANDWICH_ID/execute"
},
"attrs": [
"location", "address", "name", "description", "refDepositPoint",
"areaServed", "dateModified", "dateCreated"
]
},
"throttling": 5
}
EOF
Table 71 shows an example of Orion subscriptions which notify entities to the FIWARE
Cygnus connector.
{
"description": "A subscription to get info about DepositPointIsle entity",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "DepositPointIsle"
}
],
"condition": {
"attrs": [
]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [
]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
{
"description": "A subscription to get info about DepositPoint entity",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "DepositPoint"
}
],
"condition": {
"attrs": [
]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [
]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
{
"description": "A subscription to get info about DepositPointType entity",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "DepositPointType"
}
],
"condition": {
"attrs": [
]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [
]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
GET /entities
Get list of entities specifying Service and ServicePath (see Figure 52).
POST /entities
Upload entities into the back-end from a user-supplied file (CSV, JSON) (see Figure 53).
GET /entities/{entityID}
Get list of entities specifying entityID, Service and ServicePath (see Figure 54).
Figure 54 - GET /entities/{entityID}
GET /entities/type/{typeID}
Get list of entities specifying typeID, Service and ServicePath (see Figure 55).
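The endpoints above take the tenant information (Service and ServicePath) as request headers, following the usual FIWARE conventions. The sketch below shows how a client might prepare a GET /entities/type/{typeID} call; the base URL is a hypothetical placeholder, not the actual NGSIConnectorAPI deployment:

```python
import urllib.request

BASE_URL = "http://example.org/ngsiconnector"   # hypothetical deployment URL

def build_entities_by_type_request(type_id, service, service_path):
    """Prepare a GET /entities/type/{typeID} request with FIWARE tenant headers."""
    req = urllib.request.Request(f"{BASE_URL}/entities/type/{type_id}")
    req.add_header("Fiware-Service", service)
    req.add_header("Fiware-ServicePath", service_path)
    return req

req = build_entities_by_type_request(
    "DepositPoint", "waste4think", "/deusto/w4t/zamudio/real")
print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)  (requires a live server)
```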
GET /entities/update
Update entities from a user-supplied file (CSV and JSON) (see Figure 56).
GET /rules
Show all rules available in the NGSIConnectorAPI defined in the Data Model (see Figure 57).
GET /rules/{ruleID}
{
"serviceMappings": [
{
"originalService": "waste4think",
"servicePathMappings": [
{
"originalServicePath": "/deusto/w4t/cascais/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointIsle",
"newEntityId": "",
"newEntityType": "depositpointisle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
}
]
},
{
"originalServicePath": "/deusto/w4t/seveso/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "VehicleType",
"newEntityId": "",
"newEntityType": "vehicletype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "Vehicle",
"newEntityId": "",
"newEntityType": "vehicle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
}
]
},
{
"originalServicePath": "/deusto/w4t/zamudio/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "VehicleType",
"newEntityId": "",
"newEntityType": "vehicletype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "Vehicle",
"newEntityId": "",
"newEntityType": "vehicle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
}
]
}
]
}
]
}
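The effect of this configuration is to rewrite entity types (here, to lowercase) before Cygnus persists them. As a small sketch of the lookup logic (a simplified reading of Cygnus name mappings, not its actual implementation), assuming the configuration above has been loaded as JSON:

```python
import re

def map_entity_type(config, service, service_path, entity_id, entity_type):
    """Resolve the newEntityType for an entity, mimicking Cygnus name mappings."""
    for svc in config["serviceMappings"]:
        if svc["originalService"] != service:
            continue
        for sp in svc["servicePathMappings"]:
            if sp["originalServicePath"] != service_path:
                continue
            for em in sp["entityMappings"]:
                # originalEntityId is a regular expression (".*" matches any id)
                if (re.fullmatch(em["originalEntityId"], entity_id)
                        and em["originalEntityType"] == entity_type):
                    return em["newEntityType"]
    return entity_type  # no mapping found: keep the original type

# Minimal configuration mirroring one entry of the file above.
config = {"serviceMappings": [{
    "originalService": "waste4think",
    "servicePathMappings": [{
        "originalServicePath": "/deusto/w4t/zamudio/real",
        "entityMappings": [{
            "originalEntityId": ".*", "originalEntityType": "DepositPoint",
            "newEntityId": "", "newEntityType": "depositpoint",
            "attributeMappings": []}]}]}]}

mapped = map_entity_type(config, "waste4think", "/deusto/w4t/zamudio/real",
                         "dp-001", "DepositPoint")
print(mapped)  # -> depositpoint
```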
Jon Arambarri
jarambarri@virtualwaregroup.com
Virtualware labs
Michael Kornaros
kornaros@chemeng.upatras.gr
UPATRAS
Andreas Jalsøe
aj@seriousgames.net
Serious Games Interactive