Waste Management System Backend

This document describes the implementation of the back-end services for the Waste4Think project. It details the cloud infrastructure deployed on FIWARE Lab and the various Docker containers that make up the back-end, including Orion Context Broker, Cygnus, and IDM for authentication. It also covers the interfaces between the back-end and other modules like sensors and front-end applications.


Ref. Ares(2019)3573668 - 03/06/2019

D2.3 - Implementation of the Back-End


for R1, R2 and R3
WP2 Development of Computer Assisted Integral Waste Management Systems

ENG
Other
Public
Reviewers: VIRTUALWARE, SGI, UPATRAS

Version Date Description of main changes Author


1.0 28/09/2018 First version ENG
1.1 18/10/2018 VIRTUALWARE review ENG/VW
1.2 19/10/2018 UPATRAS review ENG/UPATRAS
1.3 24/10/2018 SGI review ENG/SGI
1.4 21/05/2019 Updated according to reviewer comments ENG/DEUSTO
Deliverable D2.3

Table of contents
Executive summary ............................................................................................................... 7

Abbreviations ......................................................................................................................... 8

1. Introduction ..................................................................................................................... 9

2. Waste4Think Back-End architecture ............................................................................. 10

3. Cloud infrastructure implementation.............................................................................. 11

4. Back-End Services ....................................................................................................... 19

5. Security implementation................................................................................................ 50

6. Back-End interfaces to other Waste4Think modules ....................................... 63

7. References ................................................................................................................... 88

Annex A - Docker manual .................................................................................................... 89

Annex B – FIWARE Orion subscriptions .............................................................................. 95

Annex C – NGSIConnectorAPI service APIs ...................................................................... 100

Annex D – Cygnus Name Mappings .................................................................................. 106


List of figures:
Figure 1 - Back-End conceptual architecture ......................................................................... 9
Figure 2. Back-End implementation ..................................................................................... 10
Figure 3. FIWARE Lab Waste4Think Organization. ............................................................. 11
Figure 4 - Access to the FIWARE Lab resources ................................................................. 12
Figure 5 – FIWARE Lab Cloud Portal authentication ........................................................... 12
Figure 6 – FIWARE Lab Cloud Portal .................................................................................. 13
Figure 7 – W4T VM instances .............................................................................................. 14
Figure 8 - VMs physical network .......................................................................................... 14
Figure 9 - W4T Security Groups .......................................................................................... 15
Figure 10 - W4T Security Group rules .................................................................................. 15
Figure 11 - W4T Volumes .................................................................................................... 16
Figure 12 – W4T VM images ............................................................................................... 16
Figure 13 - Back-End Services ............................................................................................ 19
Figure 14 - Waste4Think organization in Docker HUB ......................................................... 20
Figure 15 - Orion DB size for all pilots .................................................................................. 25
Figure 16 - Number of entities for all pilots ........................................................................... 25
Figure 17 - Orion DB size for a single pilot ........................................................................... 26
Figure 18 - Number of entities for a single pilot .................................................................... 26
Figure 19 - Cygnus agent .................................................................................................... 29
Figure 20 – Waste4Think Open Data Platform ..................................................................... 39
Figure 21 - W4T CRUD user interface ................................................................................. 46
Figure 22 - FIWARE Security GEs ....................................................................................... 50
Figure 23 - W4T Back-End security layer ............................................................................. 51
Figure 24 - IdM resource permission with dynamic values ................................................... 52
Figure 25 - Security level 1 – Authentication ........................................................................ 53
Figure 26. Security level 2 - Basic authorization................................................................... 53
Figure 27 - W4T IDM instance ............................................................................................. 56
Figure 28 - IDM Applications ................................................................................................ 59
Figure 29 - IDM authorized user .......................................................................................... 60
Figure 30 - Connection of Waste4Think Back-End with the Front-End applications ............. 63
Figure 31 - Sensors pushing information to the Waste4Think Back-End .............................. 65
Figure 32 - APIs swagger description .................................................................................. 68
Figure 33 - swagger API list ................................................................................................. 68

Figure 34 - swagger data model info .................................................................................... 69
Figure 35 - NGSIConnectorWEB Login page ....................................................................... 71
Figure 36 - NGSIConnectorWEB Home page ...................................................................... 72
Figure 37 - NGSIConnectorWEB Entities page .................................................................... 72
Figure 38 - NGSIConnectorWEB Details page ..................................................................... 73
Figure 39 - NGSIConnectorWEB Type page ........................................................................ 73
Figure 40 - NGSIConnectorWEB Create page ..................................................................... 74
Figure 41 - NGSIConnectorWEB Update page .................................................................... 74
Figure 42 - NGSIConnectorWEB Rules page ...................................................................... 75
Figure 43 - NGSIConnectorWEB Access token form ........................................................... 75
Figure 44 – Scenario: Near Real-time process..................................................................... 78
Figure 45 – Scenario: batch process for massive data upload ............................................. 79
Figure 46 – Uploading data using NGSIConne ............................................... 80
Figure 47 – Waste collection in Zamudio ............................................................................. 81
Figure 48 – Waste collection in Halandri .............................................................................. 82
Figure 49 – Waste collection in Seveso ............................................................................... 82
Figure 50 – Waste collection in Cascais............................................................................... 83
Figure 51 - GET /entities .................................................................................................... 100
Figure 52 - POST /entities ................................................................................................. 101
Figure 53 - GET/entities/{entityID}...................................................................................... 102
Figure 54 - GET /entities/type/{typeID} ............................................................................... 103
Figure 55 - POST /entities/update ...................................................................................... 104
Figure 56 - GET /rules ....................................................................................................... 105
Figure 57 - GET /rules/{ruleID} ........................................................................................... 105


List of tables:
Table 1 – NGINX docker compose file ................................................................................. 17
Table 2 - NGINX Configuration file ....................................................................................... 18
Table 3 – Orion docker compose file.................................................................................... 23
Table 4 – MongoDB docker image ....................................................................................... 23
Table 5 – MongoDB docker volumes (data persistent) ......................................................... 23
Table 6 – MongoDB docker volumes (logs) ......................................................................... 23
Table 7 – MongoDB docker commands ............................................................................... 23
Table 8 – Orion docker image .............................................................................................. 23
Table 9 – Orion docker volumes (logs)................................................................................ 24
Table 10 – Orion docker volumes (certificates) ................................................................... 24
Table 11 – Orion docker links ............................................................................................. 24
Table 12 – Orion docker ports............................................................................................. 24
Table 13 – Orion docker commands ................................................................................... 24
Table 14 – Cygnus docker compose file .............................................................................. 28
Table 15 - Agent configuration file NGSI configuration section. ............................................ 29
Table 16 - Agent configuration file CKAN configuration section. .......................................... 30
Table 17 - Cygnus instance configuration file ....................................................................... 31
Table 18 – Name mappings configuration file ..................................................................... 32
Table 19 – Docker compose file of the CEP-Sandwich module............................................ 34
Table 20 – Nginx configuration file for the CEP-Sandwich module ....................................... 35
Table 21 – Docker compose file for the History module ....................................................... 37
Table 22 – Nginx configuration file for the History module ................................................... 37
Table 23 - CKAN docker compose file ................................................................................. 41
Table 24 - CKAN docker volumes ........................................................................................ 41
Table 25 - CKAN docker dependencies ............................................................................... 41
Table 26 - CKAN docker port ............................................................................................... 41
Table 27 – Deployment properties for the Admin backend module (development) ............... 43
Table 28 – Application properties for the Admin backend module module (development) .... 43
Table 29 – Docker compose file for the Admin backend module (development) .................. 44
Table 30 – Deployment properties for the Admin backend module (production)................... 44
Table 31 – Application properties for the Admin backend module (production) .................... 44
Table 32 – Docker compose file for the Admin backend module (production) ...................... 45
Table 33 – Nginx file for the Admin backend module (development and production) ........... 45
Table 34 – Deployment properties for the CRUD module (development) ............................. 47
Table 35 – Application properties for the CRUD module (development) .............................. 47

Table 36 – Docker compose file for the CRUD module (development) ................................ 48
Table 37 – Deployment properties for the CRUD module (production) ................................. 48
Table 38 – Application properties for the CRUD module (production) .................................. 48
Table 39 – Docker compose file for the CRUD module (production) .................................... 49
Table 40 – Nginx file for the CRUD module (development and production).......................... 50
Table 41 – PEP Proxy Docker compose file......................................................................... 54
Table 42 – PEP Proxy config file .......................................................................................... 55
Table 43 – IDM docker compose file .................................................................................... 57
Table 44 – AuthZForce docker compose file ........................................................................ 58
Table 45 - IDM Applications ................................................................................................. 59
Table 46 - IDM Users and Roles .......................................................................................... 61
Table 47 - IDM Roles and Permissions ................................................................................ 62
Table 48 – Example configuration of the PyFiware connector to access the Context Broker 64
Table 49 – Example configuration of the PyFiware connector to access the History module 64
Table 50 – NGSIConnectorAPI docker compose file ....................................... 66
Table 51 - NGSIConnectorAPI config file ............................................................................. 67
Table 52 – NGSIConnectorWEB docker-compose file ......................................................... 70
Table 53 – NGSIConnectorWEB configuration file. .............................................................. 71
Table 54 – Waste4Think CSV file ........................................................................................ 76
Table 55 – Waste4Think CSV metadata .............................................................................. 76
Table 56 – Waste entity with metadata ................................................................................ 77
Table 57 – Waste4Think JSON file data .............................................................................. 77
Table 58 – NgsiConnectorApi Waste Transaction rule example........................................... 78
Table 59 – NgsiConnectorApi upload data in near real-time ................................................ 79
Table 60 – NgsiConnectorApi massive upload of data from a CSV file ................................ 80
Table 61 - Example CSV file with the information of the status in R20 ................................. 84
Table 62 - Example CSV file with the information of the status in R19 ................................. 84
Table 63 - Example CSV file with the information of processed materials in R20 ................. 84
Table 64 - Example CSV file with the information of processed materials in R19 ................. 85
Table 65 - Example JSON with the information of a user session in the Citizen App ........... 85
Table 66 - Example JSON with the information of a user transaction in the Food App ......... 86
Table 67 - Example JSON with the information of a user session in the Serious Games ..... 87
Table 68 - Example CSV with the information of the Learning Materials .............................. 87
Table 69 - Example FIWARE ORION subscriptions to InfluxDB ........................................... 97
Table 70 - Example FIWARE ORION subscription to CEP................................................... 98
Table 71 - Example FIWARE ORION subscriptions to FIWARE Cygnus ............................. 99

Table 72 - W4T Cygnus Name Mappings file ..................................................................... 107


Executive summary
This deliverable presents the implementation of the Waste4Think Back-End platform as the
middleware layer of the waste operation and management system architecture defined in
Deliverable D2.1 [3]. The Back-End platform is mainly based on FIWARE [1] Generic Enabler
components. It is responsible for handling context information coming from the sensors
deployed in the Waste4Think pilots, from the control instrumentation of the treatment plants,
and from the apps, learning materials, and serious games, and for dispatching it to the
components in the Business / Service layer that support the results R1 "Operation and
Management Module", R2 "Collection Module", and R3 "Planning Module". The specific details
of these components and the rationale behind their implementation are given in Deliverable
D2.1, Technical Documentation of R1: Operation and Management Module [3].

This deliverable presents the conceptual architecture of the Back-End Platform, instantiated
in the FIWARE Lab Cloud Infrastructure, and provides an overview of its features, briefly
summarized as:

• management of context information;


• processing and analysis of real-time events and triggering of instantaneous
predefined actions;
• storage of context information as historical data;
• publication of context information as open data;
• availability of a set of public and/or private APIs to retrieve both context information
and historical data;
• authentication and authorization systems both for the pilots and front-end
applications.

This deliverable also presents all the activities needed to deploy and configure the
Back-End Platform on the Spain 2 node of the FIWARE Lab Cloud, through the defined
Waste4Think Organization: providing a single entry point for the management of the
Back-End platform; creating and managing a dedicated Security Group that defines access
rules to the virtual machines; creating and managing Volumes that provide additional data
storage attached to the virtual machines; and creating and managing Virtual Machine Images
for the Waste4Think components.

Each of the Back-End services, as well as the Security components, is also presented from
the implementation point of view, including the specific configuration and deployment
mechanisms put in place.

Finally, this document includes a description of the Back-End interfaces to the other
Waste4Think modules: the sensors deployed at the pilots, the treatment plants, the apps,
the learning materials, and the serious games.

The status of the development of the backend services is detailed in Deliverables D1.4-D1.6
[29, 32].


Abbreviations
W4T Waste4Think
IDM Identity Management System
GE Generic Enabler
ICT Information and Communication Technologies
SSO Single Sign On
PEP Policy Enforcement Point
XACML eXtensible Access Control Markup Language
PDP Policy Decision Point
VM Virtual Machine
CEP Complex Event Processing
TCP Transmission Control Protocol


1. Introduction
This deliverable is focused on the implementation of the Waste4Think Back-End platform
(see Figure 1) for the results R1 "Operation and Management Module", R2 "Collection
Module", R3 "Planning Module" and R4 "Circular Economy Model". The Back-End platform
is mainly based on FIWARE technologies [1].

Figure 1 - Back-End conceptual architecture

The four main subjects of the deliverable are:

• implementation of the Cloud infrastructure underlying the Back-End Platform;

• implementation of the Back-End services;

• implementation of the components to secure the Back-End platform;

• back-end interfaces to other Waste4Think modules.

Section 2 describes the conceptual architecture of the Back-End Platform.

Section 3 covers the activities carried out on the FIWARE Lab Cloud infrastructure to set up
the virtualised infrastructure underlying the Back-End Platform.

Section 4 describes the implementation of the Back-End components (FIWARE Generic
Enablers and others) and, in particular, how each component has been installed, configured
and managed.

Section 5 contains the installation and set-up of the FIWARE Generic Enablers to secure the
Back-End Platform.

Finally, Section 6 covers the Back-End interfaces to the sensors deployed at the pilots, the
treatment plants, the serious games & apps, and the learning materials.

At the end of this deliverable, a series of annexes on the technical configurations and
development activities carried out is included.


2. Waste4Think Back-End architecture


The objective of WP2 "Development of computer assisted integral waste management
system" is the definition and development of the ICT tools for improving short- and long-term
waste management at local and territorial level, as well as the planning and monitoring
solutions for the different Zero Waste Environments of the Waste4Think project.

To achieve this objective and to support the results R1 "Operation and Management Module",
R2 "Collection Module", and R3 "Planning Module", a Waste Management Back-End system has
been deployed.

Figure 2. Back-End implementation

Figure 2 represents the implementation of the Waste Management Back-End system,
instantiated in the FIWARE Lab Cloud Infrastructure (see Section 3), which is able:

• to manage context information, through the FIWARE Orion Context Broker Generic
Enabler (see section 4.3), coming from:
o the sensors deployed in each of the Waste4Think pilots;
o the control instrumentation of the Halandri treatment plants;
o the serious game and other apps developed in the Waste4Think project;
o learning materials.
• to process and analyse real-time events and to trigger instantaneous predefined
actions (such as alerts and/or anomaly notifications) through the CEP-Sandwich module
(see section 4.5);
• to store context information as historical data on the History module (see section 4.6);
• to publish context information as open data, through the FIWARE Cygnus (see
section 4.4) and CKAN Generic Enabler (see section 4.7);
• to provide a set of public and/or private APIs to retrieve both context information and
historical data;

• to provide a web interface to operate over the data available both in the ORION
Context broker and Historical module, through the CRUD module (see section 4.9).
• to provide authentication and authorization systems through a set of FIWARE security
components (see section 5), specifically Identity Management KeyRock, PEP Proxy
WILMA, and Authorization PDP AuthZForce;
• to provide a unique user entry point to the different applications in Waste4Think
(Social Actions, Zero Waste Ecosystems, Planning Tool, Green Procurement, etc.),
through the Admin Back-End module (see section 4.8).
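As an illustration of the first capability (managing context information through the Orion Context Broker), the sketch below builds the kind of NGSIv2 payload a pilot sensor could send to create a waste-container entity. The entity type, attribute names, service header values and host name are assumptions for illustration, not the project's actual data model.

```python
import json

# Hypothetical NGSIv2 entity for a smart waste container; the entity type
# and attribute names are illustrative, not the Waste4Think data model.
entity = {
    "id": "WasteContainer:001",
    "type": "WasteContainer",
    "fillingLevel": {"type": "Number", "value": 0.65},
    "location": {
        "type": "geo:json",
        "value": {"type": "Point", "coordinates": [-2.86, 43.28]},
    },
}

# Multi-tenancy headers understood by Orion (service/path values assumed).
headers = {
    "Content-Type": "application/json",
    "Fiware-Service": "waste4think",
    "Fiware-ServicePath": "/pilot1",
}

# The entity would be created with:
#   POST http://<orion-host>:1026/v2/entities
body = json.dumps(entity)
print(body)
```

The request body would be sent to the broker's `/v2/entities` endpoint (Orion listens on port 1026 by default), with the `Fiware-Service`/`Fiware-ServicePath` headers scoping the entity to a tenant and pilot.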

3. Cloud infrastructure implementation


3.1. FIWARE Lab
The Waste4Think Back-End has been deployed on FIWARE Lab infrastructure. FIWARE Lab
is an open working instance of FIWARE available for experimentation based on OpenStack
[2]. More details about FIWARE Lab can be found in Section 3 of the D2.1 [3].

All Waste4Think Back-End components have been provisioned and managed using the
FIWARE Cloud capabilities of the FIWARE LAB node “Spain2”.

A Waste4Think Organization (see Figure 4) has been created to let more than one user
access the shared cloud resources.

Figure 4. FIWARE Lab Waste4Think Organization.

3.2. Access to FIWARE Lab
Two kinds of actors have access to the FIWARE Lab resources, as shown in Figure 5:

• the Admin, who manages the whole set of FIWARE Cloud resources;

• the Service Consumer, who accesses and manages the deployed Back-End services.

Figure 5 - Access to the FIWARE Lab resources

3.2.1. Admin
The Admin actor is able to manage VMs, Security Groups, Floating IPs and Keypairs. The
Admin accesses the FIWARE Cloud resources through the global instance of the FIWARE
Identity Management GE [4], using an authorized FIWARE account, and then through the
Cloud Portal GE [5] (see Figure 6 and Figure 7).

Figure 6 – FIWARE Lab Cloud Portal authentication


Figure 7 – FIWARE Lab Cloud Portal

3.2.2. Service Consumer


The Service Consumer actor can access the Back-End services through an NGINX Reverse
Proxy [8]. All details about the NGINX service implementation and related configuration are
given in section 3.7 "NGINX Reverse Proxy" of this deliverable.

Access to both the Back-End services and the Development Environment ports is granted
through security rules defined by the Admin actor in the Cloud Portal GE. More details about
the Waste4Think Security Group and its rules can be found in section 3.4 "Security groups"
of this deliverable.

3.3. Virtual Machines


The Waste4Think project on the FIWARE Lab (Spain2 node) includes the following
instances, also shown in Figure 8:

● w4t_backend_prod: this VM contains the main Back-End components, i.e. the Orion
Context Broker [section 4.3], the Cygnus connector [section 4.4], the security
components (IDM [section 5.3], AuthZForce [section 5.4] and PEP Proxy [section
5.2]) and the NGINX Reverse Proxy [section 3.7];
● w4t_suite: this VM contains the other Back-End components, such as the History
module, the CEP-Sandwich module, the CRUD module, and those needed to support the
operations of the different front-end modules in the Waste4Think ecosystem (Planning
module, Invoice module, Green Procurement module, etc.);
● w4t-opendata: this VM contains the CKAN instance [section 4.7];
● w4t-dev: this VM contains the development environment, through a GitLab instance
(see section 3.3 of D2.1 [3]).


Figure 8 – W4T VM instances

The two public IPs provided have been assigned to the Development Environment VM
"w4t-dev" (130.206.120.215) and to the VM "w4t-backend-prod" (130.206.117.164).

Two third-level domains have been created to register both public IPs:

• backend.waste4think.eu → VM "w4t-backend-prod" (130.206.117.164);

• dev.waste4think.eu → VM "w4t-dev" (130.206.120.215).

Figure 9 shows the physical network of the VMs deployed.

Figure 9 - VMs physical network

3.4. Security groups


A specific security group named “w4t-security-group” has been created to manage TCP
traffic towards the VMs through specific ports.


Figure 10 - W4T Security Groups

Figure 11 shows the rules created in the Waste4Think context.

Figure 11 - W4T Security Group rules

3.5. Volumes
As shown in Figure 12, three Volumes have been created to provide the Waste4Think VMs
with additional data storage space, in particular:

• w4t_volume_backend is a 100 GB volume attached to the w4t_backend_prod VM and
mounted on the ./data folder; it is used to store the data, logs and configuration files
of the dockerized Back-End services deployed on this VM;

• w4t-suite-volume is a 50 GB volume attached to the w4t-suite VM and mounted on the
./data folder; it is used to store the data, logs and configuration files of the dockerized
Back-End services deployed on this VM;

• git-data-repo is a 50 GB volume attached to w4t-dev; it is used to store the GitLab
data.


Figure 12 - W4T Volumes
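Because these volumes hold the persistent data and logs of the dockerized services, keeping an eye on their free space is advisable. A minimal monitoring sketch using only the Python standard library is shown below; the mount point passed in and the 90% alert threshold are assumptions for illustration, not part of the deployment.

```python
import shutil

def volume_usage(mount_point: str) -> dict:
    """Report total/free space and used percentage for an attached volume."""
    total, used, free = shutil.disk_usage(mount_point)
    return {
        "total_gb": round(total / 1e9, 1),
        "free_gb": round(free / 1e9, 1),
        "used_pct": round(100 * used / total, 1),
    }

# On the W4T VMs this would be the volume mount point (e.g. the ./data
# folder described above); "/" is used here only to keep the sketch runnable.
usage = volume_usage("/")
if usage["used_pct"] > 90:  # example alert threshold
    print("volume almost full:", usage)
else:
    print("volume ok:", usage)
```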

3.6. VMs images


Private VM images have been created in the Waste4Think organization for all the VMs used
in the project (see Figure 13). These images can be reused to deploy the Waste4Think
Back-End platform quickly and easily.

Figure 13 – W4T VM images

3.7. NGINX Reverse Proxy


In the context of the Waste4Think project, the NGINX [8] component has been used
specifically as a Reverse Proxy [7], in order to retrieve resources on behalf of a Waste4Think
Back-End user from one or more servers within the internal network of the Waste4Think
organization of FIWARE Lab.

3.7.1. Service overall description


NGINX [8] is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and
IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server).

3.7.2. Source code repository


The official NGINX source code repository is hosted on GitHub at
https://github.com/nginx.

The Docker image used in the Waste4Think project is hosted on Docker Hub at
https://hub.docker.com/r/_/nginx/ .


3.7.3. Installation/Configuration
In the Waste4Think project, NGINX Reverse Proxy has been deployed by using Docker
technologies.

The NGINX Reverse Proxy can be deployed with Docker in two different ways:

• installation using the docker CLI;

• installation using docker-compose.

In the context of the Waste4Think project, the docker-compose method has been used to
deploy the NGINX Reverse Proxy.

Docker-compose method

Table 1 shows the docker-compose.yml file used to deploy the NGINX Reverse Proxy in the
Waste4Think Back-End environment. It creates an NGINX Docker container and maps the
configuration file and log files to an external volume.

version: "3"

services:
  nginx:
    image: nginx:latest
    volumes:
      - /data/docker_nginx/log/access.log:/var/log/nginx/access.log
      - /data/docker_nginx/log/error.log:/var/log/nginx/error.log
      - /data/docker_nginx/conf/w4t.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
      - "8080:8080"

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 1 – NGINX docker compose file

The installation is configured through the following parameters:

Image: the latest version of the official NGINX image on Docker Hub.

Volumes: Docker volumes have been created to map the following files to an external folder:

• NGINX log files;

• NGINX configuration file.

Ports: the NGINX service has been configured to run on ports 80 and 8080.

NGINX Configuration file

Table 2 presents the NGINX configuration file.

server {
    listen 80;
    server_name localhost;
    client_max_body_size 100M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://192.168.229.62:82/;
        proxy_redirect off;
    }

    location /connector {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://192.168.229.62:3001/v1;
        proxy_redirect off;
    }
}

upstream ckan-webui {
    server 192.168.216.171:8080;
}

server {
    client_max_body_size 40M;
    listen 8080;
    server_name localhost;

    location / {
        proxy_pass http://ckan-webui;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Table 2 - NGINX Configuration file
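The location blocks above implement simple prefix routing: requests to / are proxied to the service on port 82, requests under /connector to the service on port 3001, and the server listening on 8080 forwards to the CKAN web UI. The routing behaviour can be modelled as longest-prefix matching over a routing table; the following sketch is a simplified illustration, not NGINX's actual matching algorithm:

```python
# Simplified model of the prefix routing performed by the NGINX server
# blocks above (the longest matching location prefix wins).
ROUTES_PORT_80 = {
    "/connector": "https://192.168.229.62:3001/v1",
    "/": "http://192.168.229.62:82/",
}

def resolve_upstream(path, routes):
    """Pick the upstream whose location prefix is the longest match for path."""
    matches = [prefix for prefix in routes if path.startswith(prefix)]
    return routes[max(matches, key=len)] if matches else None

print(resolve_upstream("/connector/data", ROUTES_PORT_80))  # 3001 upstream
print(resolve_upstream("/anything/else", ROUTES_PORT_80))   # default upstream
```

In the real configuration, NGINX also rewrites headers (X-Real-IP, X-Forwarded-For, Host) before passing the request upstream, so the back-end services can still see the original client address.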


4. Back-End Services
4.1. Implementation of the Back-End services

Figure 14 - Back-End Services

The following FIWARE Generic Enablers and other third-party and custom components (see
Figure 14) have been used and combined to implement the main services of the Back-End
system, such as Context Management, Open Data, Historical Data, and Processing:

• FIWARE Orion Context Broker (see section 4.3) as a publish/subscribe system able to
handle context information using the FIWARE NGSI standard;
• FIWARE Cygnus (see section 4.4) as a connector system that persists FIWARE
Orion context information in the CKAN Open Data system;
• CKAN (see section 4.7) as an Open Data system to publish, share, find, use and
visualize datasets of interest for the Waste4Think project, such as socio-economic
data, geographic information or sensor data;
• PostgreSQL with a time-series extension (see section 4.6) as the Historical DB;
• CEP Sandwich (see section 4.5) as a Complex Event Processing system;
• CRUD (see section 4.9) as a custom tool providing a web interface to create,
update and delete data available both in the FIWARE Orion Context Broker and in the
Historical DB;
• Admin Back-End (see section 4.8) as a custom tool for authenticating and authorizing
users to access each of the different applications developed in the Waste4Think
project (Social Actions, Zero Waste Ecosystems, Planning Tool, Green Procurement,
etc.).

The status of the development of the backend services is detailed in Deliverables D1.4-D1.6
[29, 32].


4.2. Deployment approach


Waste4Think Back-End components have been deployed by using Docker technology.

Docker is a software containerisation platform guaranteeing that software will always run the
same, regardless of its environment. Docker offers many benefits over traditional application
deployment, including:

• Simplicity – once an application is Dockerized, it can be fully controlled (started,
stopped, restarted, etc.) with a few commands. As these are generic Docker
commands, it is easy for anyone unfamiliar with the specifics of an application to get
started.
• It's already Dockerized – Docker Hub is the central marketplace where Docker images
are shared with other Docker users. Often a Docker image for an application already
exists. A dedicated Waste4Think organization has been created in the official Docker
Hub to store and distribute the container images created for the Waste4Think project:
https://hub.docker.com/u/waste4think (see Figure 15).

Figure 15 - Waste4Think organization in Docker HUB

• Blueprint of application configuration – a Dockerfile provides the blueprint, or
instructions, to build an application. It can be stored in the source version control
system and refined over time to improve the build. It also removes any ambiguity of
build/configuration differences between various deployments.

The Back-End services have been deployed using Docker-Compose, a tool for defining and
running multi-container Docker applications. With Docker-Compose, a set of YAML
configuration files has been created to configure the Waste4Think services.

More details about Docker can be found in the Annex A - Docker manual of this deliverable.

4.3. FIWARE Orion Context Broker


4.3.1. Service overall description
Orion Context Broker is a C++ implementation of the NGSIv2 REST API [24] binding,
developed as part of the FIWARE platform. Orion Context Broker handles the entire
lifecycle of context information, including updates, queries, registrations, and subscriptions. It
is an NGSIv2 server implementation to manage context information and its availability. Orion
Context Broker allows creating context elements and managing them through updates and
queries. In addition, external services can subscribe to context information, so that when an
attribute of a context element changes, the subscribers receive a notification through an
NGSI context update message.

As part of the Waste4Think Back-End, the Orion Context Broker has been implemented to:

• manage all NGSI context information compliant with the Data Model defined in the
Waste4Think project (section 7 of D2.1 [3]), coming from the pilots and other
Back-End services;
• notify context information changes to other Back-End services, such as the CEP
system (see section 4.5 of this deliverable), the Open Data system (see section 4.7)
or the Historical DB (see section 4.6), by configuring specific subscriptions;
• retrieve, upload and update context information directly in the Orion Context Broker
by using the NGSI v2 APIs [24].
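As an illustration of the NGSI v2 interactions mentioned above, the sketch below builds a minimal entity payload for the POST /v2/entities operation. The entity type and attribute names are hypothetical examples, not the exact Waste4Think data model:

```python
import json

def build_ngsi_entity(entity_id, entity_type, attrs):
    """Build a minimal NGSI v2 entity payload for POST /v2/entities.

    `attrs` maps attribute names to (value, NGSI type) pairs.
    """
    entity = {"id": entity_id, "type": entity_type}
    for name, (value, ngsi_type) in attrs.items():
        entity[name] = {"value": value, "type": ngsi_type}
    return entity

# Hypothetical waste-container entity; names are illustrative only.
payload = build_ngsi_entity(
    "DepositPoint:001",
    "DepositPoint",
    {"fillingLevel": (0.65, "Number"),
     "address": ("Plaza Central 1", "Text")},
)

# JSON body that would be POSTed to http://<orion-host>:1026/v2/entities
print(json.dumps(payload, indent=2))
```

Sending this body with Content-Type: application/json to the /v2/entities endpoint would create the entity in the Context Broker.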

4.3.2. Source code repository


The official FIWARE Orion Context Broker source code repository is hosted on GitHub at
https://github.com/telefonicaid/fiware-orion.

The Docker image used in the Waste4Think project is hosted on Docker Hub at
https://hub.docker.com/r/fiware/orion/.

The official MongoDB source code repository is hosted on GitHub at
https://github.com/mongodb/mongo.

The Docker image used in the Waste4Think project is hosted on Docker Hub at
https://hub.docker.com/_/mongo/.

4.3.3. Installation/Configuration
In the Waste4Think project, the FIWARE Orion Context Broker has been deployed using
Docker technologies.

Note that there are additional ways to deploy the FIWARE Orion service. More
information on the different deployment options can be found in the official FIWARE Orion
Context Broker Installation & Administration Manual [13].

The Orion Context Broker can be deployed with Docker in two different ways:

• installation using the docker CLI;

• installation using docker-compose.
Docker cli method

MongoDB installation:

docker run --name mongodb -d mongo:3.4

The MongoDB container will run on the default port 27017.

FIWARE Orion installation:

docker run -d --name <orion name> --link mongodb:mongodb -p 1026:1026 fiware/orion -dbhost mongodb

The FIWARE Orion container will run on port 1026.

Docker-compose method

Table 3 shows the docker-compose.yml file used to deploy the FIWARE Orion instance in the
Waste4Think Back-End environment. It creates two docker containers, FIWARE Orion
Context Broker and MongoDB, and defines volumes to persist data. With this approach we
obtain a portable application, which can be moved between hosts without losing its core
functionality.

version: "2"
services:
  mongo:
    image: mongo:3.4
    volumes:
      - /data/docker-mongo/db:/data/db
      - /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
    command: --nojournal
  orion:
    image: fiware/orion
    volumes:
      - /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
      - /data/docker-mongo/config/localhost.key:/localhost.key
      - /data/docker-mongo/config/localhost.pem:/localhost.pem
    links:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo -https -key /localhost.key -cert /localhost.pem -logLevel DEBUG
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 3 – Orion docker compose file

Two services have been configured in the docker compose file, MongoDB and FIWARE
Orion.

MongoDB

Image: the MongoDB docker image version 3.4 has been used, as recommended in the
official FIWARE Orion documentation (see Table 4).

image: mongo:3.4

Table 4 – MongoDB docker image

Volumes: data is not persistent in the dockerized Orion instance by default, meaning that
when the MongoDB container is removed all data is lost. To make the MongoDB data
persistent, a docker volume has been created in the docker-compose file. This volume maps
the MongoDB data to an external folder located on the host system (see Table 5).

volumes:
- /data/docker-mongo/db:/data/db
Table 5 – MongoDB docker volumes (data persistent)

A docker volume has also been used to map the MongoDB log file to an external folder
(see Table 6).

- /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
Table 6 – MongoDB docker volumes (logs)

Command: the --nojournal option has been used to disable the journaling that
MongoDB enables by default (see Table 7).

command: --nojournal
Table 7 – MongoDB docker commands

Orion Context Broker

Image: the latest version of the official FIWARE Orion image on Docker Hub (see Table 8).

image: fiware/orion
Table 8 – Orion docker image

Volumes: A docker volume has been created to map the Orion Context Broker log to an
external folder (see Table 9).

- /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
Table 9 – Orion docker volumes (logs)

Docker volumes have also been used to map the Orion certificates (.key and .pem) used
to run Orion in HTTPS mode (see Table 10).

- /data/docker-mongo/config/localhost.key:/localhost.key
- /data/docker-mongo/config/localhost.pem:/localhost.pem
Table 10 – Orion docker volumes (certificates)

Links: link to the MongoDB service (see Table 11).

links:
- mongo
Table 11 – Orion docker links

Ports: sets the ports on which the service will run. The default Orion port 1026 has been used
in the Waste4Think Back-End (see Table 12).

ports:
- "1026:1026"
Table 12 – Orion docker ports

Command: command options used when the container is created (see Table 13).

The -dbhost option configures the MongoDB database used by FIWARE Orion.

The -https option configures FIWARE Orion to run in HTTPS mode, which in addition needs
the -key and -cert options to specify the files containing the private key and the certificate for
the server.

The -logLevel option configures the Orion log level.

command: -dbhost mongo -https -key /localhost.key -cert /localhost.pem -logLevel DEBUG
Table 13 – Orion docker commands

4.3.4. Manage FIWARE Orion Context Broker Service


Start the service: docker-compose up.

Stop the service: docker-compose stop.

Monitor the live state of FIWARE Orion: docker stats.

Access the FIWARE Orion container: docker exec -it <fiware-orion-container> bash.

4.3.5. Estimating FIWARE Orion Context Broker data size
This section gives an estimation about the expected data growth of the Orion Context Broker
over next five years taking into account different factors such as number of citizens involved,
number of collection points handled, average size of single data units and data frequency.

Two kind of estimations are provided:

• expected data growth, in term of data size and number of entities handled, for all
pilots involved in the Waste4Think project;
• expected data growth, in term of data size and number of entities handled, for a
single sample pilot.
All pilots

For this estimation, we have considered the four pilots involved in the Waste4Think project,
with about 100,000 citizens involved, 6,000 collection points handled and 135,000 waste
transactions.

The size of the Context Broker data expected for the next five years is shown in Figure 16,
while the number of entities handled is shown in Figure 17.

Orion DB size (MB): 2019: 692; 2020: 1,038; 2021: 1,384; 2022: 1,770; 2023: 2,156.

Figure 16 - Orion DB size for all pilots

N° entities: 2019: 152,904; 2020: 215,865; 2021: 323,797; 2022: 431,731; 2023: 539,664.

Figure 17 - Number of entities for all pilots


Single pilot

For this estimation, we have considered a sample pilot with 25,000 citizens involved, 1,500
collection points handled and 30,000 waste transactions.

The size of the Context Broker data expected for the next five years is shown in Figure 18,
while the number of entities handled is shown in Figure 19.

Orion DB size (MB): 2019: 173; 2020: 259; 2021: 346; 2022: 423; 2023: 519.

Figure 18 - Orion DB size for a single pilot

N° entities: 2019: 38,226; 2020: 57,339; 2021: 76,452; 2022: 95,565; 2023: 114,678.

Figure 19 - Number of entities for a single pilot
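These projections can be cross-checked with a simple back-of-the-envelope model: total DB size ≈ number of entities × average entity footprint. Taking the all-pilots 2023 figures (539,664 entities, 2,156 MB) gives an implied average of roughly 4 KB per entity. Applying that average to the single-pilot entity counts lands in the same order of magnitude as the reported sizes (the residual difference reflects that the average entity size varies per pilot); this is a rough sanity check, not the method used for the official estimates:

```python
# Rough estimate of Orion DB size from entity counts, using the average
# entity footprint implied by the all-pilots 2023 figures (~4 KB/entity).
ALL_PILOTS_2023_ENTITIES = 539_664
ALL_PILOTS_2023_SIZE_MB = 2_156

avg_mb_per_entity = ALL_PILOTS_2023_SIZE_MB / ALL_PILOTS_2023_ENTITIES

single_pilot_entities = {2019: 38_226, 2020: 57_339, 2021: 76_452,
                         2022: 95_565, 2023: 114_678}

for year, n_entities in single_pilot_entities.items():
    estimated_mb = round(n_entities * avg_mb_per_entity)
    print(f"{year}: ~{estimated_mb} MB")
```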

4.3.6. Interaction with other Back-End components

FIWARE Orion interacts with the following Back-End components:

• FIWARE Cygnus, to publish context information into the CKAN Open Data Platform;

• History module, to upload context information as historical data;

• CEP-Sandwich module, to process and analyse real-time events and to trigger
instantaneous predefined actions.

The interaction between FIWARE Orion and the other Back-End components takes place
through the configuration of FIWARE Orion subscriptions that notify context information to
the other services. More details about the FIWARE Orion subscriptions implemented to notify
entities to FIWARE Cygnus, History, and CEP can be found in Annex B – FIWARE Orion
subscriptions.
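The shape of such a subscription can be sketched as follows: a minimal NGSI v2 subscription payload (POST /v2/subscriptions) that notifies the Cygnus /notify endpoint on port 5050 (see section 4.4.3). The entity type and host name are placeholders, and the actual Waste4Think subscriptions in Annex B may carry additional fields:

```python
import json

def build_subscription(entity_type, notify_url, attrs=None):
    """Build a minimal NGSI v2 subscription payload (POST /v2/subscriptions)."""
    return {
        "description": "Notify context changes to a Back-End service",
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            # An empty attrs list means: trigger on any attribute change.
            "condition": {"attrs": attrs or []},
        },
        "notification": {
            "http": {"url": notify_url},
        },
    }

# "cygnus" is a placeholder for the Cygnus container address; 5050/notify is
# the listening port and notification target configured in section 4.4.3.
sub = build_subscription("DepositPoint", "http://cygnus:5050/notify")
print(json.dumps(sub, indent=2))
```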

4.4. FIWARE Cygnus Connector Framework


4.4.1. Service overall description
FIWARE Cygnus is a connector in charge of persisting Orion context data in certain
configured third-party storages, creating a historical view of such data.

Internally, Cygnus is based on Apache Flume [16], a technology addressing the design and
execution of data collection and persistence agents. An agent is basically composed of a
listener or source in charge of receiving the data, a channel where the source puts the data
once it has been transformed into a Flume event, and a sink, which takes Flume events from
the channel to persist the data within its body into a third-party storage.

Cygnus, and more specifically the Cygnus-NGSI agent, plays the role of a connector between
the Orion Context Broker, which is an NGSI source of data, and many FIWARE and third-party
storage systems such as CKAN [14], Cosmos Big Data (Hadoop) [17] and STH Comet [18].

As part of the Waste4Think Back-End, Cygnus has been implemented to accept NGSI
context information coming from Orion Context Broker and to write it to the Waste4Think
CKAN Open Data Platform (see section 4.7).

4.4.2. Source code repository


The official FIWARE Cygnus source code repository is hosted on GitHub at
https://github.com/telefonicaid/fiware-cygnus.

The Docker image used in the Waste4Think project is hosted on Docker Hub at
https://hub.docker.com/r/fiware/cygnus-ngsi/.

4.4.3. Installation/Configuration
In the Waste4Think context, FIWARE Cygnus has been deployed using the docker-compose
method.

There are also other ways to deploy FIWARE Cygnus. More information can be
found in the official FIWARE Cygnus Installation & Administration Manual [6].

Docker-compose method

Table 14 presents the docker-compose file used to deploy the FIWARE Cygnus instance in
the Waste4Think Back-End environment.

version: '3'
services:
  cygnus:
    image: telefonicaiot/fiware-cygnus
    volumes:
      - /data/docker_cygnus/conf/agent_w4t.conf:/opt/apache-flume/conf/agent.conf
      - /data/docker_cygnus/conf/name_mappings.conf:/usr/cygnus/conf/name_mappings.conf
      - /data/docker_cygnus/conf/cygnus_instance_w4t.conf:/opt/fiware-cygnus/cygnus-common/conf/cygnus_instance.conf
      - /data/docker_cygnus/log/cygnus.log:/var/log/cygnus/cygnus.log
    ports:
      - "5050:5050"
      - "8081:8081"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 14 – Cygnus docker compose file

The cygnus service of the docker-compose file uses the official Cygnus image.

Mapped volumes are used to configure the behaviour of Cygnus, focusing on key areas
such as the name mappings, the agent configuration, and the Cygnus instance.

The name mappings determine how entities from incoming notifications are handled,
specifying parameters such as the destination CKAN data store and the format in which
entities are persisted. A counterpart feature, Grouping rules, also exists, but since Cygnus
version 1.6 it is deprecated in favour of name mappings.

The agent configuration file sets channel-, source- and sink-specific settings. Only the
CKAN sink has been configured in the Waste4Think project.

The files mentioned above form the Cygnus configuration. They define key Cygnus
aspects such as the sinks to which data received from notifications is sent, the behaviour of
those sinks as regulated by the channel options, and the Cygnus instance settings (port,
default service and service path, and notification endpoint).

The Cygnus configuration files used in the Waste4Think Back-End environment are the
following:

• agent configuration file <agent_w4t.conf>;
• Cygnus instance configuration file <cygnus_instance_w4t.conf>;
• name mappings configuration file <name_mappings.conf>.


Agent configuration file

Figure 20 - Cygnus agent

The agent configuration file is used to set the Flume [16] parameters, configuring the
source (NGSI), the channel and the sink (CKAN) that compose the Flume agent behind
the Cygnus instance, which is responsible for consuming context events notified by the
FIWARE Orion Context Broker and persisting them on the CKAN instance (see Figure 20).

The agent configuration file can be split into two main parts:
• the NGSI source configuration;
• the CKAN sink configuration.

NGSI source configuration


Table 15 shows the NGSI source configuration of the Flume agent. This part of the agent
configuration file defines the port on which Cygnus will receive the events notified by the
Context Broker, the default service and service path used for data persistence into the
CKAN instance, as well as the name mappings file used to determine the form of the data
that will be written into the CKAN instance.

cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = ckan-sink
cygnus-ngsi.channels = ckan-channel
cygnus-ngsi.sources.http-source.channels = ckan-channel
cygnus-ngsi.sources.http-source.type = http
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service =
cygnus-ngsi.sources.http-source.handler.default_service_path = /
cygnus-ngsi.sources.http-source.handler.events_ttl = 10
cygnus-ngsi.sources.http-source.interceptors = ts nmi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.nmi.type = com.telefonica.iot.cygnus.interceptors.NGSINameMappingsInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.nmi.name_mappings_conf_file = /usr/cygnus/conf/name_mappings.conf
Table 15 - Agent configuration file NGSI configuration section.

cygnus-ngsi.sources.http-source.port = 5050

Listening port that FIWARE Cygnus source is using for receiving incoming notifications.

cygnus-ngsi.sources.http-source.handler.notification_target = /notify

Endpoint on which FIWARE Cygnus is listening for incoming notifications.

cygnus-ngsi.sources.http-source.handler.default_service =

cygnus-ngsi.sources.http-source.handler.default_service_path = /

Fiware-Service and Fiware-ServicePath default configuration.

cygnus-ngsi.sources.http-source.interceptors.nmi.name_mappings_conf_file =
/usr/cygnus/conf/name_mappings.conf

Path of the name_mappings.conf file.

CKAN sink configuration


Table 16 shows the CKAN sink configuration of the Flume agent. This part of the agent
configuration file defines the CKAN parameters used by Cygnus to persist data. These
settings determine how data will be written (row or column mode), specific connection rules
and timings, the CKAN endpoint and port, and the API key (used to authenticate Cygnus
when making requests to the CKAN endpoint).

cygnus-ngsi.sinks.ckan-sink.type = com.telefonica.iot.cygnus.sinks.NGSICKANSink
cygnus-ngsi.sinks.ckan-sink.channel = ckan-channel
cygnus-ngsi.sinks.ckan-sink.enable_encoding = false
cygnus-ngsi.sinks.ckan-sink.enable_grouping = false
cygnus-ngsi.sinks.ckan-sink.enable_name_mappings = true
cygnus-ngsi.sinks.ckan-sink.data_model = dm-by-entity
cygnus-ngsi.sinks.ckan-sink.attr_persistence = column
cygnus-ngsi.sinks.ckan-sink.ckan_host = 192.168.216.171
cygnus-ngsi.sinks.ckan-sink.ckan_port = 8080
cygnus-ngsi.sinks.ckan-sink.ckan_viewer = recline_grid_view
cygnus-ngsi.sinks.ckan-sink.ssl = false
cygnus-ngsi.sinks.ckan-sink.api_key = 2d63e69c-9829-4b86-a921-60de39750253
cygnus-ngsi.sinks.ckan-sink.orion_url = http://localhost:1026
cygnus-ngsi.sinks.ckan-sink.batch_size = 100
cygnus-ngsi.sinks.ckan-sink.batch_timeout = 30
cygnus-ngsi.sinks.ckan-sink.batch_ttl = 10
cygnus-ngsi.sinks.ckan-sink.batch_retry_intervals = 5000
cygnus-ngsi.sinks.ckan-sink.backend.max_conns = 500
cygnus-ngsi.sinks.ckan-sink.backend.max_conns_per_route = 100
#cygnus-ngsi.sinks.ckan-sink.persistence_policy.max_records = 5
cygnus-ngsi.sinks.ckan-sink.persistence_policy.expiration_time = -1
cygnus-ngsi.sinks.ckan-sink.persistence_policy.checking_time = 600
Table 16 - Agent configuration file CKAN configuration section.


cygnus-ngsi.sinks.ckan-sink.enable_name_mappings = true

Enables name mapping rules for the CKAN sink; these rules are then used to customize how
data is persisted into CKAN.

cygnus-ngsi.sinks.ckan-sink.api_key = 2d63e69c-9829-4b86-a921-60de39750253

CKAN API key, used by Cygnus to authenticate itself when making a request to CKAN.

cygnus-ngsi.sinks.ckan-sink.orion_url = http://localhost:1026

FIWARE Orion URL used to compose the resource URL with the convenience operation URL
to query it.
cygnus-ngsi.sinks.ckan-sink.ckan_port = 8080

The port for the CKAN API endpoint.

Cygnus instance configuration file

Table 17 presents the Cygnus instance configuration file, which addresses all non-Flume
parameters, such as the Flume agent name, the specific log file for this instance and the
administration port.

CYGNUS_USER=cygnus
CONFIG_FOLDER=/usr/cygnus/conf
CONFIG_FILE=/usr/cygnus/conf/agent_w4t.conf
AGENT_NAME=cygnus-ngsi
LOGFILE_NAME=cygnus.log
ADMIN_PORT=8081
POLLING_INTERVAL=30
Table 17 - Cygnus instance configuration file

CONFIG_FILE=/usr/cygnus/conf/agent_w4t.conf

Path of the agent configuration file, which determines the agent configuration used to set up
the Flume environment. The agent name (AGENT_NAME) is also important, since it is the
base for the Flume parameter naming convention.

Name mappings configuration file

Table 18 shows part of the name_mappings.conf file, used to customize the way data is
written to the CKAN instance. The whole configuration file used in the Waste4Think project
can be found in Annex D – Cygnus Name Mappings of this document.

Name mappings are an advanced global feature of Cygnus. It is global because it is available
for all NGSI sinks. They allow changing the notified FIWARE service path and the
concatenation of the entity ID and entity type.

{
  "serviceMappings": [
    {
      "originalService": "waste4think",
      "servicePathMappings": [
        {
          "originalServicePath": "/deusto/w4t/cascais/real",
          "entityMappings": [
            {
              "originalEntityId": ".*",
              "originalEntityType": "DepositPointType",
              "newEntityId": "",
              "newEntityType": "depositpointtype",
              "attributeMappings": []
            },
            {
              "originalEntityId": ".*",
              "originalEntityType": "SortingType",
              "newEntityId": "",
              "newEntityType": "sortingtype",
              "attributeMappings": []
            },
            {
              "originalEntityId": ".*",
              "originalEntityType": "DepositPointIsle",
              "newEntityId": "",
              "newEntityType": "depositpointisle",
              "attributeMappings": []
            },
            {
              "originalEntityId": ".*",
              "originalEntityType": "DepositPoint",
              "newEntityId": "",
              "newEntityType": "depositpoint",
              "attributeMappings": []
            }
          ]
        },
        ...
Table 18 – Name mappings configuration file

originalService: the FIWARE Orion service of the incoming notification, used to map
the incoming request to the set of rules for that specific service.

originalServicePath: the FIWARE Orion service path of the incoming notification,
used to map the set of rules specific to that service path, allowing entities to be distinguished
by their service path rather than by their attributes (id, type, …).

originalEntityId: the ID of a single entity, unique within a FIWARE Service and
FIWARE ServicePath.

32
Deliverable D2.3
originalEntityType: the type of the incoming entity, also unique within a FIWARE
Service and FIWARE ServicePath.

newEntityId: specifies a new value for the entity ID. It is used when sending data to the
CKAN sink by concatenating <newEntityId>_<newEntityType>; the resulting value identifies
the destination CKAN data store that holds the data of the given resources.

newEntityType: specifies a new value for the entity type, used in the same
<newEntityId>_<newEntityType> concatenation.
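The effect of such a rule can be sketched with a simplified model: the notified entity ID and type are matched against the rule patterns and replaced by the new values, and the destination name is the concatenation of the two. This is only a conceptual illustration of the behaviour described above (including the assumption that an empty newEntityId leaves just the new type), not the actual Cygnus NGSINameMappingsInterceptor implementation:

```python
import re

# Simplified model of the name-mapping rules shown in Table 18.
RULES = [
    {"originalEntityId": ".*", "originalEntityType": "DepositPoint",
     "newEntityId": "", "newEntityType": "depositpoint"},
    {"originalEntityId": ".*", "originalEntityType": "SortingType",
     "newEntityId": "", "newEntityType": "sortingtype"},
]

def map_entity(entity_id, entity_type):
    """Return the (newEntityId, newEntityType) pair for a notified entity."""
    for rule in RULES:
        if (re.fullmatch(rule["originalEntityId"], entity_id)
                and rule["originalEntityType"] == entity_type):
            return rule["newEntityId"], rule["newEntityType"]
    return entity_id, entity_type  # no rule matched: keep the original names

def ckan_resource_name(entity_id, entity_type):
    """Concatenate <newEntityId>_<newEntityType>, dropping an empty ID part."""
    new_id, new_type = map_entity(entity_id, entity_type)
    return f"{new_id}_{new_type}".lstrip("_")

print(ckan_resource_name("DepositPoint:001", "DepositPoint"))  # depositpoint
```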

4.5. CEP
4.5.1. Service overall description
FIWARE already provides a Generic Enabler, named Proton, for Complex Event Processing.
However, initial testing proved that it did not fulfil the technical requirements. The main
problem is the impossibility to easily keep state between invocations (needed to perform
forecasting and online calculation of KPIs and the tariff) and to make invocations "at will" (to
consolidate the tariff and KPIs). These problems are being overcome by the development of
a custom CEP, named Sandwich. Sandwich is an application to trigger logic at the
occurrence of certain events. Its purpose is to build small programs (called sandwiches) in an
interactive way by dragging and dropping predefined functions (called ingredients). These
functions are linked to determine whose outputs need to be sent to which inputs, creating the
logical order in which information must flow. By combining the ingredients, a user can
visually build logical and mathematical operations to perform complex calculations (such as
the mean of an attribute over a group of entities, the execution of a statistical model, etc.).

By using the Context Broker's subscription mechanism, the sandwiches can be invoked
whenever an entity changes. The CEP module asynchronously launches the sandwich,
executing all its ingredients in sequence and afterwards sending the result back to the
Context Broker or to another module. In the background, the CEP module is built using a
NodeJS web server [20] coupled with a PostgreSQL [21] database. NodeJS exposes the API
routes and dispatches the received queries to the database.

4.5.2. Source code repository

The official CEP-Sandwich repository is available at
http://dev.waste4think.eu/waste4think/w4t-sandwich.

The Docker image used in the Waste4Think project is hosted on Docker Hub at
https://hub.docker.com/u/waste4think/.

4.5.3. Installation/Configuration
In the Waste4Think context, the CEP-Sandwich module has been deployed using Docker
technologies, as a stack defined in a compose file.

• To run the module:

docker stack deploy -c sandwich/docker.yml sandwich


• The logs are available at /var/log/sandwich.

• To stop the module:

docker stack rm sandwich

Table 19 and Table 20 show the docker.yml configuration file and the NGINX configuration of
the CEP-Sandwich module.

version: "3"
services:
  sandwich-db:
    image: postgres:9.6
    ports:
      - "9041:5432"
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
      POSTGRES_DB: "sandwich"
    logging:
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m

  sandwich-node:
    image: "node:8"
    ports:
      - "8030:8030"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
      POSTGRES_HOST: "sandwich-db"
      POSTGRES_PORT: 5432
      POSTGRES_DB: "sandwich"
      AUTH_CLIENT_ID: "VQ3Ik0elr1xJ8u5MZxZTcFsn4r5u3f9sqWPIfXj0"
      AUTH_CLIENT_SECRET: "uVKOGkJiuNRcDJRIaun9dqrYDEihcFrGk56hToIml5FeKA4fRoY8xpRAO8z2ZVhJqM2kFBZYuT6kExeeXdfZdLvuiAtU3EyCLWWQGtSMzuOezzvDzYsvIVh01szNvWNX"
      RESET_MODELS_: 1
      RESET_SESSIONS_: 1
    user: "node"
    working_dir: /home/docker/sandwich
    volumes:
      - /home/docker/sandwich:/home/docker/sandwich
    command: /bin/bash -c "npm install && npm start"
    logging:
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m
    dns:
      - 130.206.100.1
      - 130.206.100.2

volumes:
  postgresql_data:
Table 19 – Docker compose file of the CEP-Sandwich module


server {
    server_name sandwich.waste4think.eu;
    listen 80;
    listen 443 ssl;

    include conf.d/geoworldsim_ssl;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 120s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        proxy_pass http://192.168.235.170:8030;
    }
}
Table 20 – Nginx configuration file for the CEP-Sandwich module

4.6. History module


4.6.1. Service overall description
The DoA explicitly states the use of a History module within the Waste4Think project. The
History technology developed by the FIWARE initiative (STH-Comet) was tested; however, it
did not fulfil the technical requirements. One of the main problems was that the services
provided were built upon v1 of the NGSI API, instead of v2, which is used by the Orion
Context Broker. Also, considering the type of information that is going to be stored (waste
transactions), there is a need for a relational database such as PostgreSQL, with extensions
for time series, constraint checking and trigger customization (instead of a non-relational one
like MongoDB, on which STH-Comet is built). These problems are being overcome by the
development of a custom historical module, named History. The History module contains a
time-series database that stores the entire temporality of an entity. Its purpose is to be
subscribed to every Context Broker entity to monitor and keep track of their changes.
Whenever an entity is modified, the Context Broker emits a POST to the History module,
which creates a new time record storing when the entity was changed and its new value. The
frequency of creation or modification of the data in the Context Broker is directly related to
the nature of the entities, which can be divided into two profiles:

• Low update profile. This profile covers all the entities that describe the information of
the pilot and whose modifications are separated in time. In the Waste4Think project,
these entities refer to the description of the municipality (population, population
density), the configuration of the waste management system (deposit points, sorting
type, waste categories), etc. Considering the data already uploaded in the Back-End,
we estimate around 200-300 MB of data per pilot and year.
• High update profile. This profile covers all the entities that are created or modified
almost constantly and are commonly related to actions performed autonomously by
the sensors (proactiveness) or to the execution of an action over a sensor
(reactiveness). In the context of Waste4Think, these entities refer to events such as
throwing garbage into a waste container, the collection of waste containers by a
truck, the measurements from the nappies and composting plants, the number of
users, food app transactions, etc. Considering the data already uploaded in the
Back-End, we estimate we could store around 900 MB-1 GB of data per pilot and
year.

Using the stored data, the module exposes an API for querying the past states an entity had
in the Context Broker, given a timestamp or a time range. In the background, the History
module is built using a NodeJS[20] web server coupled with a PostgreSQL[23] time-series
database. NodeJS exposes the API routes and dispatches the received queries to the
database.
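For illustration, the subscription that wires the Context Broker to the History module can be sketched as an NGSIv2 subscription payload. This is a minimal sketch: the notification route (`/notify` on the history-node service, port 8040) is an assumption, since the actual path is defined by the w4t-history NodeJS server.

```python
import json

def build_history_subscription(notify_url: str, id_pattern: str = ".*") -> dict:
    """Build an NGSIv2 subscription payload that makes Orion POST a
    notification to the History module whenever any entity changes."""
    return {
        "description": "Forward all entity changes to the History module",
        "subject": {"entities": [{"idPattern": id_pattern}]},
        "notification": {"http": {"url": notify_url}},
    }

if __name__ == "__main__":
    # The notification route is an assumption; the real one is defined by
    # the w4t-history NodeJS server (port 8040, see Table 21).
    payload = build_history_subscription("http://history-node:8040/notify")
    # This payload would be POSTed to http://<orion-host>:1026/v2/subscriptions
    print(json.dumps(payload, indent=2))
```

The subscription only has to be created once per entity pattern; Orion then notifies the History module on every subsequent change.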

4.6.2. Source code repository


The official module source code repository is available at:
http://dev.waste4think.eu/waste4think/w4t-history.

The Docker image used in the Waste4Think project is being hosted on DOCKER Hub at
https://hub.docker.com/u/waste4think/.

4.6.3. Installation/Configuration
In the Waste4Think context, the History module has been deployed by using Docker
technologies, specifically by using the docker-compose method.

• To run the module:


docker stack deploy -c history/docker.yml history

• The logs will be available at /var/log/history

• To stop the module:

docker stack rm history

Table 21 shows the Docker compose configuration file of the History module. Table 22
shows the Nginx configuration file for the History module.
version: "3"
services:
  history-db:
    image: timescale/timescaledb
    ports:
      - "8041:5432"
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
      POSTGRES_DB: "history"
    logging:
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m

  history-node:
    image: "node:8"
    ports:
      - "8040:8040"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "7f650369fa13fd4fe07e0a8e19f16ae7"
      POSTGRES_HOST: "history-db"
      POSTGRES_PORT: 5432
      POSTGRES_DB: "history"
      AUTH_CLIENT_ID: "Zq2COahUdSIOaAOEz5g2RW8YEf7Ysnrd0hWBGTWA"
      AUTH_CLIENT_SECRET: "hEMpUb4xto8nuGNbSegYjRIwkN8TFP1X4KP1x"
      RESET_MODELS_: 1
      RESET_SESSIONS_: 1
    user: "node"
    working_dir: /home/docker/history
    volumes:
      - /home/docker/history:/home/docker/history
    command: /bin/bash -c "npm install && npm start"
    logging:
      driver: "json-file"
      options:
        max-file: 5
        max-size: 10m
    dns:
      - 130.206.100.1
      - 130.206.100.2

volumes:
  postgresql_data:

Table 21 – Docker compose file for the History module

server {
server_name history.waste4think.eu;
listen 443 ssl;
listen 80;

include conf.d/geoworldsim_ssl;

location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8040/;
}
}
Table 22 – Nginx configuration file for the History module

4.7. CKAN Open Data


4.7.1. Service overall description
CKAN is a tool for making open data websites. It helps manage and publish collections of
data, and is used by national and local governments, research institutions, and other
organizations that collect a lot of data.

Once data is published, users can use its faceted search features to browse and find the
data they need, and preview it using maps, graphs and tables, whether they are developers,
journalists, researchers, NGOs, citizens, etc.

As part of the Waste4Think Back-End, the open data platform CKAN is used to publish,
share, find, use and visualize datasets of interest for the Waste4Think project, such as
socio-economic data, geographic information or sensor data.
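Datasets published this way can also be queried programmatically through CKAN's Action API. The sketch below builds a `package_search` request URL against the Waste4Think CKAN instance; the query terms are illustrative.

```python
from urllib.parse import urlencode

# Base URL of the Waste4Think Open Data platform (see section 4.7.1).
CKAN_BASE = "http://backend.waste4think.eu:8080"

def package_search_url(base: str, query: str, rows: int = 10) -> str:
    """Build a CKAN Action API URL for a full-text dataset search."""
    params = urlencode({"q": query, "rows": rows})
    return f"{base}/api/3/action/package_search?{params}"

if __name__ == "__main__":
    # A GET on this URL returns JSON of the form
    # {"success": true, "result": {"count": ..., "results": [...]}}
    print(package_search_url(CKAN_BASE, "waste containers"))
```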

In the Waste4Think context, a CKAN instance (see Figure 21) has been deployed and
configured. The Waste4Think Open Data platform is available at
http://backend.waste4think.eu:8080/.


Figure 21 – Waste4Think Open Data Platform

4.7.2. Source code repository


The official CKAN source code repository is being hosted on GITHUB at the following link
https://github.com/ckan/ckan.

The Docker image used in the Waste4Think project is being hosted on DOCKER Hub at the
following link https://hub.docker.com/r/ckan/ckan/.

4.7.3. Installation/Configuration
In the Waste4Think project, the CKAN instance has been deployed by using Docker
technologies.

There are different methods to install CKAN, such as installation from an operating system
package manager or from source. More information can be found in the official CKAN
documentation [14].

Docker-compose method

Table 23 presents the docker-compose.yml file used to deploy the CKAN instance in the
Waste4Think Back-End environment.

volumes:
  ckan_config:
  ckan_home:
  ckan_storage:
  pg_data:

services:
  ckan:
    container_name: ckan
    build:
      context: ../../
      args:
        - CKAN_SITE_URL=${CKAN_SITE_URL}
    links:
      - db
      - solr
      - redis
    ports:
      - "0.0.0.0:${CKAN_PORT}:5000"
    environment:
      # Defaults work with linked containers, change to use own Postgres, SolR, Redis or Datapusher
      - CKAN_SQLALCHEMY_URL=postgresql://ckan:${POSTGRES_PASSWORD}@db/ckan
      - CKAN_DATASTORE_WRITE_URL=postgresql://ckan:${POSTGRES_PASSWORD}@db/datastore
      - CKAN_DATASTORE_READ_URL=postgresql://datastore_ro:${DATASTORE_READONLY_PASSWORD}@db/datastore
      - CKAN_SOLR_URL=http://solr:8983/solr/ckan
      - CKAN_REDIS_URL=redis://redis:6379/1
      - CKAN_DATAPUSHER_URL=http://datapusher:8800
      - CKAN_SITE_URL=${CKAN_SITE_URL}
      - CKAN_MAX_UPLOAD_SIZE_MB=${CKAN_MAX_UPLOAD_SIZE_MB}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
    volumes:
      - ckan_config:/etc/ckan
      - ckan_home:/usr/lib/ckan
      - ckan_storage:/var/lib/ckan

  datapusher:
    container_name: datapusher
    image: clementmouchet/datapusher
    ports:
      - "8800:8800"

  db:
    container_name: db
    build:
      context: ../../
      dockerfile: contrib/docker/postgresql/Dockerfile
      args:
        - DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
        - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    environment:
      - DS_RO_PASS=${DATASTORE_READONLY_PASSWORD}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - pg_data:/var/lib/postgresql/data

  solr:
    container_name: solr
    build:
      context: ../../
      dockerfile: contrib/docker/solr/Dockerfile

  redis:
    container_name: redis
    image: redis:latest
Table 23 - CKAN docker compose file

volumes:
- ckan_config:/etc/ckan
- ckan_home:/usr/lib/ckan
- ckan_storage:/var/lib/ckan
Table 24 - CKAN docker volumes

Docker volumes have been used to map the CKAN configuration files and to persist data on
external folders. This ensures data is persisted and limits the risk of data loss due to issues
that can occur when working with containers (see Table 24).

solr:
redis:
db:
datapusher:
Table 25 - CKAN docker dependencies

The main task of the CKAN dependencies is data management and persistence; in general,
they are responsible for the CKAN data flow. They handle received data, but also write and
manage internal CKAN data structures (see Table 25).
ports:
- "0.0.0.0:${CKAN_PORT}:5000"
Table 26 - CKAN docker port

The port used to set the endpoint of the CKAN service (see Table 26).


4.8. Admin backend


4.8.1. Service overall description
The Admin backend provides a unique entry point to the Waste4Think ecosystem. Based on
OAuth, this module is responsible for authenticating and authorizing users to access each of
the different applications in Waste4Think: Social Actions, Zero Waste Ecosystems, Planning
Tool, Green Procurement, etc.

At registration, each user is given a role and a set of permissions that detail which
applications, and which actions within each application, the user is permitted to interact with.
On login, the Admin backend verifies the user's credentials and returns a cross-navigation
bar with all the necessary links and actions of the allowed applications. This cross-navigation
bar is available in all the applications during the user session and verifies all the operations
performed by the user.
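The role-to-permission lookup performed at login can be sketched as follows. The role names and permission structure below are illustrative assumptions, not taken from the actual w4t-authservice implementation.

```python
# Illustrative role/permission table; in the real system this lives in the
# Admin backend's database (role names and actions here are assumptions).
ROLE_PERMISSIONS = {
    "pilot_manager": {"Social Actions": {"read", "write"}, "Planning Tool": {"read"}},
    "citizen": {"Social Actions": {"read"}},
}

def allowed_apps(role: str) -> list:
    """Applications a role may access, used to build the cross-navigation bar."""
    return sorted(ROLE_PERMISSIONS.get(role, {}))

def is_authorized(role: str, app: str, action: str) -> bool:
    """Check whether a role may perform an action within an application."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(app, set())
```

Every operation performed through the cross-navigation bar would go through a check equivalent to `is_authorized`.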

4.8.2. Source code repository


The official Admin backend module source code repository is available at:
http://dev.waste4think.eu/waste4think/w4t-authservice.

The Docker image used in the Waste4Think project is being hosted on DOCKER Hub at
https://hub.docker.com/u/waste4think/.

The Admin backend endpoint is available at https://auth.geoworldsim.com (available at
https://auth.waste4think.eu from June 20th 2019).

4.8.3. Installation/Configuration
In the Waste4Think context, the Admin backend module has been deployed by using Docker
technologies, specifically by using the docker-compose method through the docker-remote
scripts that manage all the configuration of the docker containers.

• To run the module depending on the desired environment:


docker-remote release-deploy config/production

• The logs will be available at /var/log/${STACK}

• To stop the module:


docker stop /${STACK}

Regardless of whether it is deployed in the development or the production environment, the
Admin Backend requires three types of files:

• Deployment properties file: Configures parameters such as the host/port where the
Admin backend docker container will be deployed, the docker registry to which the
Admin backend image will be uploaded to, and the names of the stack, image and tag
related to the container.

• Application properties file: Configures parameters relevant to the application, such
as the users and passwords of the application and the database, logs, static and
media folders, etc.

• Docker compose file: Configures the parameters for building the docker container.

Table 27, Table 28, and Table 29 show the configuration files to deploy the Admin Backend
in the development environment. Table 30, Table 31, and Table 32 show the configuration
files to deploy the Admin Backend in the production environment. Table 33 shows the Nginx
configuration file for the Admin Backend.

HOST=10.32.8.203
PORT=22
REGISTRY=10.32.8.203:5000
STACK=w4t_admin_dev
IMAGE=w4t_admin
TAG=dev

Table 27 – Deployment properties for the Admin backend module (development)

DJANGO_SECRET_KEY="13+cs97='w(ixk2c*!8sskfksnaf*(__c!44)$lb(v0ejci"
DJANGO_ALLOWED_HOSTS=auth-dev.waste4think.eu
DJANGO_HOST_PORT=8011
DJANGO_DEBUG=True

DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#

DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media

DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#POSTGRES_PASS#
POSTGRES_HOST_PORT=8013
POSTGRES_DATA=/var/postgres/${STACK}
Table 28 – Application properties for the Admin backend module module (development)

version: '3.1'
services:
  django:
    image: '${REGISTRY}/${IMAGE}:${TAG}'
    ports:
      - ${DJANGO_HOST_PORT}:8000
    env_file:
      - ${STACK}.env
    depends_on:
      - postgres
    volumes:
      - ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
      - ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
      - ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn

  postgres:
    image: postgres:10
    ports:
      - ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
    env_file:
      - ${STACK}.env
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 29 – Docker compose file for the Admin backend module (development)

HOST=192.168.235.170
PORT=22
REGISTRY=https://hub.docker.com/u/waste4think/
STACK=w4t_admin
IMAGE=w4t_admin
TAG=latest

Table 30 – Deployment properties for the Admin backend module (production)

DJANGO_SECRET_KEY="13+cs97='w(kvnvjdj764jsad*(__c!44)$lb(v0efsdf"
DJANGO_ALLOWED_HOSTS=admin.waste4think.eu
DJANGO_HOST_PORT=8010
DJANGO_DEBUG=False

DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#

DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media

DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#PROD_POSTGRES_PASS#
POSTGRES_DATA=/var/postgres/${STACK}
Table 31 – Application properties for the Admin backend module (production)

version: '3.1'
services:
  django:
    image: '${REGISTRY}/${IMAGE}:${TAG}'
    ports:
      - ${DJANGO_HOST_PORT}:8000
    env_file:
      - ${STACK}.env
    depends_on:
      - postgres
    volumes:
      - ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
      - ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
      - ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn

  postgres:
    image: postgres:10
    ports:
      - ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
    env_file:
      - ${STACK}.env
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 32 – Docker compose file for the Admin backend module (production)

server {
server_name auth.waste4think.eu;
listen 443 ssl;

error_log /var/log/nginx/error.log debug;


rewrite_log on;

location /static/ {
alias /var/www/w4t_authservice/static/;
}

location /media/ {
alias /var/www/w4t_authservice/media/;
}

location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8010/;
}
}
server {
server_name auth-dev.waste4think.eu;
listen 443 ssl;

location /static/ {
alias /var/www/w4t_authservice/static/;
}
location /media/ {
alias /var/www/w4t_authservice/media/;
}

location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:8011/;
}
}
Table 33 – Nginx file for the Admin backend module (development and production)

4.9. CRUD
4.9.1. Service overall description

The CRUD module provides a web interface for operating over the data available in the data
model. The operations available are create, update and delete. This module is available only
to the data administrators, providing them with a failsafe in case a sensor produces a
misreading or the data becomes corrupted.

CRUD provides an interface to work with both the API of the FIWARE Orion Context
Broker (Section 4.3) and the History module (Section 4.6). Figure 22 shows one of the
CRUD user interfaces.

Figure 22 - W4T CRUD user interface
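Under the hood, a CRUD edit translates into an NGSIv2 request against the Orion Context Broker. The sketch below builds the path and body of an attribute update; the entity id and attribute name are illustrative.

```python
import json

def build_attrs_update(entity_id: str, attrs: dict):
    """Build the (path, body) of an NGSIv2 PATCH /v2/entities/<id>/attrs
    request that updates the given attribute values."""
    path = f"/v2/entities/{entity_id}/attrs"
    body = json.dumps({name: {"value": value} for name, value in attrs.items()})
    return path, body

if __name__ == "__main__":
    # Example: a data administrator corrects a misread filling level.
    path, body = build_attrs_update("waste-container-42", {"fillingLevel": 0.8})
    print("PATCH", path, body)
```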

4.9.2. Source code repository


The official CRUD module source code repository is available at:
http://dev.waste4think.eu/waste4think/w4t-crud

The Docker image used in the Waste4Think project is being hosted on DOCKER Hub at
https://hub.docker.com/u/waste4think/

The CRUD endpoint is available at https://crud.geoworldsim.com (available at
https://crud.waste4think.eu from June 20th 2019).

4.9.3. Installation/Configuration
In the Waste4Think context, the CRUD module has been deployed by using Docker
technologies, specifically by using the docker-compose method through the docker-remote
scripts that manage all the configuration of the docker containers.

• To run the module depending on the desired environment:


docker-remote release-deploy config/development
docker-remote release-deploy config/production

• The logs will be available at /var/log/${STACK}

• To stop the module:
docker stop /${STACK}

Regardless of whether it is deployed in the development or the production environment, the
CRUD module requires three types of files:

• Deployment properties file: Configures parameters such as the host/port where the
CRUD docker container will be deployed, the docker registry to which the CRUD
image will be uploaded, and the names of the stack, image and tag related to the
container.

• Application properties file: Configures parameters relevant to the application, such
as the users and passwords of the application and the database, logs, static and
media folders, etc.

• Docker compose file: Configures the parameters for building the docker container.

Table 34, Table 35 and Table 36 show the configuration files to deploy the CRUD in the
development environment. Table 37, Table 38 and Table 39 show the configuration files to
deploy the CRUD in the production environment. Table 40 shows the Nginx configuration file
for the CRUD.

HOST=10.32.8.203
PORT=22
REGISTRY=10.32.8.203:5000
STACK=w4t_crud_dev
IMAGE=w4t_crud
TAG=dev

Table 34 – Deployment properties for the CRUD module (development)

DJANGO_SECRET_KEY="uqn&q$yr6(fyq*!z$g8f5zc&wxna_48)x!l+orhnjpbd7^3skd"
DJANGO_ALLOWED_HOSTS=crud-dev.waste4think.eu
DJANGO_HOST_PORT=9081
DJANGO_DEBUG=True

DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#

DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media

DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#POSTGRES_PASS#
POSTGRES_HOST_PORT=9083
POSTGRES_DATA=/var/postgres/${STACK}
Table 35 – Application properties for the CRUD module (development)

version: '3.1'
services:
  django:
    image: '${REGISTRY}/${IMAGE}:${TAG}'
    ports:
      - ${DJANGO_HOST_PORT}:8000
    env_file:
      - ${STACK}.env
    depends_on:
      - postgres
    volumes:
      - ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
      - ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
      - ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn

  postgres:
    image: postgres:10
    ports:
      - ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
    env_file:
      - ${STACK}.env
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 36 – Docker compose file for the CRUD module (development)

HOST=192.168.235.170
PORT=22
REGISTRY=https://hub.docker.com/u/waste4think/
STACK=w4t_crud
IMAGE=w4t_crud
TAG=latest

Table 37 – Deployment properties for the CRUD module (production)

DJANGO_SECRET_KEY="uqn&q$yr6(fyq*!z$g8f5zc&wxna_48)x!l+orhnjpbd7^3skd"
DJANGO_ALLOWED_HOSTS=crud.waste4think.eu
DJANGO_HOST_PORT=9080
DJANGO_DEBUG=False

DJANGO_ADMIN_USER=w4t_user
DJANGO_ADMIN_PASSWORD=#DJANGO_ADMIN_PASS#
DJANGO_ADMIN_EMAIL=#DJANGO_ADMIN_EMAIL#

DJANGO_STATIC_URL=/static/
DJANGO_STATIC_ROOT=/var/www/static
DJANGO_STATIC_HOST_PATH=/var/www/${STACK}/static
DJANGO_MEDIA_URL=/media/
DJANGO_MEDIA_ROOT=/var/www/media
DJANGO_MEDIA_HOST_PATH=/var/www/${STACK}/media

DJANGO_GUNICORN_LOG=/var/log/${STACK}/gunicorn
DJANGO_GUNICORN_NAME=${STACK}

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB_NAME=${STACK}_db
POSTGRES_USER=w4t_user
POSTGRES_PASSWORD=#PROD_POSTGRES_PASS#
POSTGRES_DATA=/var/postgres/${STACK}
Table 38 – Application properties for the CRUD module (production)

version: '3.1'
services:
  django:
    image: '${REGISTRY}/${IMAGE}:${TAG}'
    ports:
      - ${DJANGO_HOST_PORT}:8000
    env_file:
      - ${STACK}.env
    depends_on:
      - postgres
    volumes:
      - ${DJANGO_STATIC_HOST_PATH}:${DJANGO_STATIC_ROOT}
      - ${DJANGO_MEDIA_HOST_PATH}:${DJANGO_MEDIA_ROOT}
      - ${DJANGO_GUNICORN_LOG}:/var/log/gunicorn

  postgres:
    image: postgres:10
    ports:
      - ${POSTGRES_HOST_PORT}:${POSTGRES_PORT}
    env_file:
      - ${STACK}.env
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
Table 39 – Docker compose file for the CRUD module (production)

server {
server_name crud.waste4think.eu;
listen 443 ssl;

error_log /var/log/nginx/error.log debug;


rewrite_log on;

location /static/ {
alias /var/www/w4t_crud/static/;
}

location /media/ {
alias /var/www/w4t_crud/media/;
}

location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:9080/;
}
}
server {
server_name crud-dev.waste4think.eu;
listen 443 ssl;

location /static/ {
alias /var/www/w4t_crud/static/;
}
location /media/ {
alias /var/www/w4t_crud/media/;
}

location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_pass http://192.168.235.170:9081/;
}
}
Table 40 – Nginx file for the CRUD module (development and production)

5. Security implementation
5.1. FIWARE security
The Security module implemented in the Waste4Think project manages user account
information, authenticates users via a FIWARE account, and checks authorizations to
access resources.

In the Waste4Think context, the following FIWARE Generic Enablers (see Figure 23) have
been used and combined to implement secure access to the Back-End:

• The Keyrock Identity Management Generic Enabler[11] brings support to secure and
private OAuth2-based[9] authentication of users and devices, user profile
management, privacy-preserving disposition of personal data, Single Sign-On (SSO)
and Identity Federation across multiple administration domains.
• The Wilma PEP Proxy Generic Enabler[12] brings the support of proxy functions
within OAuth2-based authentication schemas. It also implements PEP functions
within an XACML-based access control schema.
• The AuthZForce PDP/PAP Generic Enabler[10] brings support to PDP/PAP functions
within an access control schema based on the XACML standard[19].

Figure 23 - FIWARE Security GEs


The Identity Manager (IdM) stores the FIWARE user’s account and allows Single Sign On
(SSO) authentication by using the OAuth2 protocol.

Upon login, the authenticated user receives an authentication token, which is used by the
AuthZForce component to check the user's role and associated permissions.

The PEP Proxy acts as a proxy server redirecting the allowed requests or blocking the
unauthorized requests (see Figure 24).

Figure 24 - W4T Back-End security layer

During the integration and test phase of the FIWARE security components in the
Waste4Think Back-End, it was realized that the IDM component does not allow a user to
create authorization permissions for HTTP resources containing a dynamic value in the URI.
For instance, it is not possible to create an IDM authorization permission for the HTTP
resource http://backend.waste4think.eu/v2/entities/<entity_id>, where <entity_id> is a
dynamic value.

To solve this issue, it was necessary to make changes to the source code of the PEP Proxy
GE, which carries out the authorization checks and authorizes or denies access to the Back-
End resources, and moreover to define a method to create these permissions in the IDM
GE.

As an example, in order to grant permission to the resource
/v2/entities/<entity_id>/attrs/<attr_id>, where <entity_id> and <attr_id> are dynamic values,
on the IDM side the resource is defined as /v2/entities//attrs/ (see Figure 25). Therefore, a
new logic has been implemented in the custom version of the PEP Proxy GE used in our
project to handle the permissions for this kind of resource, and in particular to consider the
double slash "//" as a dynamic value.
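The wildcard logic can be sketched as follows. This is an assumption of how such matching could work, not the actual patched PEP Proxy code (which is written in NodeJS).

```python
def permission_matches(permission_path: str, request_path: str) -> bool:
    """Match an IdM permission path in which an empty segment (the "//"
    notation, or a trailing "/") stands for one dynamic value."""
    perm = permission_path.split("/")
    req = request_path.split("/")
    if len(perm) != len(req):
        return False
    # Index 0 is the empty string before the leading slash on both sides;
    # any other empty permission segment matches one concrete value.
    return all(p == r or (p == "" and r != "") for p, r in zip(perm, req))

if __name__ == "__main__":
    # /v2/entities//attrs/ covers /v2/entities/<entity_id>/attrs/<attr_id>
    print(permission_matches("/v2/entities//attrs/", "/v2/entities/e1/attrs/fillingLevel"))
```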


Figure 25 - IdM resource permission with dynamic values

5.2. FIWARE Wilma PEP Proxy


5.2.1. Service overall description
The Wilma PEP Proxy GE[12], playing the role of PEP (Policy Enforcement Point), is a key
component to be used in combination with the Identity Management GE[11] and the
AuthZForce GE[10], bringing authentication and authorization to the Back-End services. Its
main functionality is to check that only authorized FIWARE users, registered in the
Waste4Think Identity Management instance (see section 5.3), are able to access the Back-
End resources. The PEP Proxy works with the OAuth2 and XACML protocols as the
FIWARE standards for authentication and authorization.

The PEP Proxy is deployed in front of the Back-End service, turning the service endpoint
into the PEP Proxy endpoint and forwarding the filtered requests to the service. The Back-
End application must be registered in the IDM with the OAuth2 mechanism, which enables
users to log in with a FIWARE account. The generated OAuth2 token must be included in
the HTTP headers of the requests sent to the Back-End service. If the token included in the
request is valid, the PEP Proxy redirects the request to the Back-End.
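For illustration, a request sent through the PEP Proxy carries the OAuth2 token in the X-Auth-Token header, which Wilma validates against the IdM before forwarding. The sketch below only builds the headers; the token value would come from a prior OAuth2 login and is a placeholder here.

```python
def pep_request_headers(oauth_token: str) -> dict:
    """HTTP headers for a request sent through the Wilma PEP Proxy; the
    X-Auth-Token value is validated against the IdM before forwarding."""
    return {
        "X-Auth-Token": oauth_token,
        "Accept": "application/json",
    }

if __name__ == "__main__":
    # e.g. GET http://backend.waste4think.eu:82/v2/entities with these
    # headers; port 82 is where the PEP Proxy listens (see Table 41).
    print(pep_request_headers("example-oauth2-token"))
```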

The WILMA PEP Proxy GE provides three levels of security, as described in section 4.6.4 of
D2.1 [3].

• Level 1 (Authentication): the PEP Proxy checks whether the token included in the
request corresponds to an authenticated user in the FIWARE IDM.

• Level 2 (Basic Authorization): the PEP Proxy checks the authenticated user as in
Level 1, and it also checks the roles and permissions configured for that user,
allowing access to the resource specified in the request (based on the HTTP verb
+ path);

• Level 3 (Advanced Authorization): the PEP Proxy provides the same security level
as Level 2 (it checks user authentication, roles and permissions), with the only
difference that Level 3 uses the XACML standard language to define the
permissions.

In the Waste4Think project, only Level 1 and Level 2 have been considered to secure the
Back-End (see Figure 26 and Figure 27).

Figure 26 - Security level 1 – Authentication

Figure 27. Security level 2 - Basic authorization

5.2.2. Source code repository


The source code of the custom version of the FIWARE PEP Proxy GE, with the changes
described in section 5.1 of this deliverable, is being hosted on GITHUB at the following link
http://dev.waste4think.eu/waste4think/fiware-pep-proxy-fix.

The Docker image used in the Waste4Think project is a custom build of the official version
6.2, and it is being hosted on DOCKER Hub at the following link
https://hub.docker.com/r/waste4think/pep_proxy/.

5.2.3. Installation/Configuration
In the Waste4Think project, FIWARE PEP Proxy has been deployed by using Docker
technologies.

There are different ways to deploy the FIWARE PEP Proxy GE. More information on this can
be found in the official FIWARE PEP Proxy GE Installation & Administration Manual [12].

Two different methods can be used to deploy the PEP Proxy by using Docker:

• Installation by using the docker cli command;
• Installation by using docker-compose.

The docker-compose method has been used to deploy the PEP Proxy component in the
Waste4Think context.

Table 41 presents the docker-compose.yml file used to deploy the FIWARE PEP Proxy
instance in the Waste4Think Back-End environment.

version: "3"
services:
  pep-proxy:
    image: waste4think/pep_proxy:release-6.2
    volumes:
      - /data/docker_pepproxy/conf/config.js:/opt/fiware-pep-proxy/config.js
    ports:
      - "82:82"

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 41 – PEP Proxy Docker compose file

Image: The PEP Proxy Docker image used is a custom version (more details can be found in
section 5.1 of this deliverable) hosted in the Waste4Think organization on Docker Hub.

Volumes: A Docker volume has been created to map the configuration file of the PEP Proxy
component, config.js, to an external folder.

Ports: The PEP Proxy service has been configured to run on port 82.

Table 42 presents the configuration file config.js used to configure the FIWARE PEP Proxy
instance in the Waste4Think Back-End environment.

config = {};

// Used only if https is disabled
config.pep_port = 82;

// Set this var to undefined if you don't want the server to listen on HTTPS
config.https = {
    enabled: false,
    cert_file: 'cert/cert.crt',
    key_file: 'cert/key.key',
    port: 443
};

config.account_host = 'http://192.168.229.62:8000';
config.keystone_host = '192.168.229.62';
config.keystone_port = 5000;

config.app_host = '192.168.229.62';
config.app_port = '1026';
// Use true if the app server listens in https
config.app_ssl = true;

// Credentials obtained when registering PEP Proxy in Account Portal
config.username = 'pep_proxy_1d2c28cb9dc140629efd669a36023242';
config.password = 'f5c42df5e4304961a810417ceb5ac8b6';

// in seconds
config.cache_time = 300;

// if enabled PEP checks permissions with AuthZForce GE.
// only compatible with oauth2 tokens engine
//
// you can use custom policy checks by including programatic scripts
// in policies folder. An script template is included there
config.azf = {
    enabled: false,
    protocol: 'http',
    host: '192.168.229.62',
    port: 83,
    custom_policy: undefined // use undefined to default policy checks (HTTP verb + path).
};

// list of paths that will not check authentication/authorization
// example: ['/public/*', '/static/css/']
config.public_paths = [];

// options: oauth2/keystone
config.tokens_engine = 'oauth2';
config.magic_key = undefined;

module.exports = config;
Table 42 – PEP Proxy config file

5.2.4. Manage FIWARE PEP Proxy Service


Start service: docker-compose up [-d]

Stop service: docker-compose stop or docker stop <pep-proxy-container>

Service logs: docker logs <pep-proxy-container>

Access the FIWARE PEP Proxy container: docker exec -it <pep-proxy-container> bash

5.3. FIWARE IDM – KeyRock


5.3.1. Service overall description
Identity Management covers several aspects of users' access to networks, services and
applications, including secure and private authentication from users to devices, networks
and services, authorization and trust management, user profile management, privacy-
preserving disposition of personal data, Single Sign-On (SSO) to service domains and
Identity Federation towards applications. The Identity Manager is the central component that
provides a bridge between IDM systems at connectivity level and application level.
Furthermore, Identity Management is used to authorize foreign services to access personal
data stored in a secure environment. Hereby, the owner of the data must usually give
consent to access the data; the consent-giving procedure also implies a certain user
authentication.
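The OAuth2 interaction with Keyrock can be illustrated with a resource-owner password-credentials token request. The endpoint path and Basic client authentication follow the OAuth2 specification as implemented by Keyrock; all credential values below are placeholders.

```python
import base64
from urllib.parse import urlencode

def build_token_request(client_id: str, client_secret: str,
                        username: str, password: str):
    """Build (headers, body) for POST <idm-host>/oauth2/token using the
    OAuth2 password grant; the client authenticates with HTTP Basic."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"grant_type": "password",
                      "username": username, "password": password})
    return headers, body

if __name__ == "__main__":
    headers, body = build_token_request("my-app-id", "my-app-secret",
                                        "user@example.org", "s3cret")
    # The IdM responds with JSON containing access_token, token_type, etc.
    print(headers["Content-Type"], body)
```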

In the Waste4Think context, a private instance of FIWARE IDM (see Figure 28) has been
deployed and configured. It is running at http://backend.waste4think.eu:8000/.

Figure 28 - W4T IDM instance

5.3.2. Source code repository


The official FIWARE IDM source code repository is being hosted on GITHUB
https://github.com/ging/fiware-idm.

The Docker image used in the Waste4Think project is the version 6.2, and it is being hosted
on DOCKER Hub at the following link https://hub.docker.com/r/waste4think/fiware-idm/.

5.3.3. Installation/Configuration

In the Waste4Think project, FIWARE IDM has been deployed by using Docker technologies.

There are different ways to deploy the FIWARE IDM service. More information on this can be
found in the official FIWARE IDM Installation & Administration Manual [11].

Two different methods can be used to deploy the IDM by using Docker:

• Installation by using the docker cli command;
• Installation by using docker-compose.

The docker-compose method has been used to deploy the IDM component in the
Waste4Think context.

Docker cli method

docker run -d --name idm -p 8000:8000 -p 5000:5000 -t waste4think/fiware-idm

Check if all was successful by running:

docker ps

Docker compose method

In Table 43 the docker-compose.yml file used to deploy the FIWARE IDM version 6.2
instance in the Waste4Think Back-End environment is presented.

version: "3"
services:
  idm:
    image: waste4think/fiware-idm:latest
    hostname: keyrock
    container_name: keyrock
    ports:
      - "8000:8000"
      - "5000:5000"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 43 – IDM docker compose file

Image: the IDM Docker image used is version 6.2, stored in the Waste4Think
organization on Docker Hub.

Ports: port 8000 has been configured for the IDM HORIZON component (Front-End),
and port 5000 has been configured for the IDM KEYSTONE component (Back-End).

5.3.3. Manage FIWARE IDM Service


Start service: docker-compose up -d

Stop service: docker-compose stop or docker stop <idm-container>

Service logs: docker logs <idm-container>

Access the FIWARE IDM container: docker exec -it <idm-container> bash

5.4. FIWARE AuthZForce


5.4.1. Service overall description
AUTHZFORCE is the reference implementation of the Authorization PDP Generic Enabler
(formerly called Access Control GE). Indeed, as mandated by the GE specification, this
implementation provides an API to get authorization decisions based on authorization
policies, and authorization requests from PEPs. The API follows the REST architecture style
and complies with XACML v3.0. XACML (eXtensible Access Control Markup Language) is an
OASIS standard for authorization policy format and evaluation logic, as well as for the
authorization decision request/response format. The PDP (Policy Decision Point) and the
PEP (Policy Enforcement Point) terms are defined in the XACML standard. This GEri plays
the role of a PDP.

5.4.2. Source code repository


The official FIWARE AuthZForce source code repository is hosted on GitHub:
https://github.com/authzforce/fiware.

The Docker image used in the Waste4Think project is version 8.0.1; it is hosted
in the Waste4Think organization on Docker Hub at the following link:
https://hub.docker.com/r/waste4think/authzforce/.

5.4.3. Installation/Configuration
In the Waste4Think project, FIWARE AuthZForce has been deployed by using Docker
technologies.

There are different ways to deploy the FIWARE AuthZForce service. More information on
that can be found in the official FIWARE AuthZForce Installation & Administration Manual
[10].

Docker compose method

In Table 44, the docker-compose.yml file used to deploy the FIWARE AuthZForce
component in the Waste4Think Back-End environment is presented.

version: "3"
services:
  authz:
    image: waste4think/authzforce:release-8.0.1
    ports:
      - "83:8080"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 44 – AuthZForce docker compose file

Service and image used (hosted in the waste4think Docker Hub organization):

services: authz:

image: waste4think/authzforce:release-8.0.1

Port on which the AuthZForce Docker container will run:

ports: "83:8080"

5.4.4. Manage FIWARE AuthZForce Service


Start service: docker-compose up -d

Stop service: docker-compose stop or docker stop <authz-container>

Service logs: docker logs <authz-container>

Access the FIWARE AuthZForce container: docker exec -it <authz-container> bash

5.5. Applications, Users, Roles and Permissions

In the Waste4Think IDM instance it was necessary to configure the following objects:

• Application: any securable FIWARE application consisting of a series of
microservices;

• User: any signed-up user able to identify themselves with an email and password.
Users can be assigned rights individually or as a group;

• Role: a descriptive bucket for a set of permissions. A role can be assigned to
either a single user or an organization. A signed-in user gains all the permissions
from all of their own roles plus all of the roles associated with their organization;

• Permission: the ability to do something on a resource within the system.

5.5.1. Applications
This section shows the Back-End applications registered on IDM (see Figure 29).

Figure 29 - IDM Applications

The following Table 45 describes the application registered on IDM.

Application    Description
Waste4think    Waste management application; it contains a REST API interface whose purpose is to create and manage the entities of the pilot sites.

Table 45 - IDM Applications

5.5.2. Users and Roles


A set of users has been created and linked to the application in the FIWARE IDM service
(see section 5.3). Each user represents an authorized user of the "Waste4Think"
application registered on FIWARE IDM (see section 5.5.1). Moreover, a specific
authorization role has been assigned to each user.


Figure 30 - IDM authorized user

The following Table 46 shows the list of authorized users and their roles associated with the
IDM "waste4think" application.

User                                       Role                              Description
SEVESO (seveso)                            PILOT (Pilot)                     Seveso User
ZAMUDIO (zamudio)                          PILOT (Pilot)                     Zamudio User
CASCAIS (cascais)                          PILOT (Pilot)                     Cascais User
HALANDRI (halandri)                        PILOT (Pilot)                     Halandri User
W4TSORTINGANALYTICS (W4TSortingAnalytics)  PILOT (Pilot)                     Pilot role given to an external user for developing an internal application/game for the Waste4Think project
W4TOWNER (W4tOwner)                        PROVIDER/OWNER (Provider/Owner)   Owner/creator of the application, responsible for adding users and for general administration specific to IDM
ENGUSER (engUser)                          ADMIN (Admin)                     Administration role used to handle the IDM waste4think organization

Table 46 - IDM Users and Roles

5.5.3. Permissions
The following Table 47 presents the list of IDM permissions for each role.

Role                              Permission                                     Description
PILOT (pilot)                     GET v2/entities                                Permission to get all entities, a single entity, or entities by type
PILOT (pilot)                     POST v2/op/update                              Permission to create/update entities in batch operations
ADMIN (Admin)                     Version                                        Permission to get the FIWARE Orion version
ADMIN (Admin)                     GET v2/entities                                Permission to get all entities, a single entity, or entities by type
ADMIN (Admin)                     GET v2/subscriptions                           Permission to get all created subscriptions
ADMIN (Admin)                     GET v2/types                                   Permission to get all entity types or a single one
ADMIN (Admin)                     GET v2/entities//attrs                         Permission to retrieve entity attributes
ADMIN (Admin)                     POST v2/op/update                              Permission to create/update entities in batch operations
ADMIN (Admin)                     POST v2/subscriptions                          Permission to create subscriptions
ADMIN (Admin)                     POST v2/entities                               Permission to create a single entity
ADMIN (Admin)                     POST v2/entities//attrs                        Permission to update or append entity attributes
ADMIN (Admin)                     PUT v2/entities                                Permission to update entity attribute data
ADMIN (Admin)                     PUT v2/entities//attrs//value                  Permission to update an entity attribute value
ADMIN (Admin)                     PATCH v2/subscriptions                         Permission to update an entity subscription
ADMIN (Admin)                     PATCH v2/entities//attrs                       Permission to update existing entity attributes
ADMIN (Admin)                     DELETE v2/subscriptions                        Permission to remove a subscription
ADMIN (Admin)                     DELETE v2/entities                             Permission to remove a single entity
ADMIN (Admin)                     DELETE v2/entities//attrs                      Permission to remove an entity attribute
PROVIDER/OWNER (Provider/Owner)   Manage the application                         Permission to update, remove or edit the application
PROVIDER/OWNER (Provider/Owner)   Manage roles                                   Permission to create and remove roles
PROVIDER/OWNER (Provider/Owner)   Get and assign all public application roles    Permission to retrieve and assign roles
PROVIDER/OWNER (Provider/Owner)   Manage Authorizations                          Permission to authorize users in the application and add them to it
PROVIDER/OWNER (Provider/Owner)   Get and assign only public owned roles         Permission to assign public roles to users
PROVIDER/OWNER (Provider/Owner)   Get and assign all internal application roles  Permission to assign internal application roles (as opposed to those intended for public users)

Table 47 - IDM Roles and Permissions
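To illustrate how these permissions are exercised in practice, the sketch below builds (without sending) an HTTP request corresponding to the PILOT "GET v2/entities" permission. The host, header names and token value follow the examples given elsewhere in this deliverable; this is a hedged sketch, not the normative client.

```python
import urllib.parse
import urllib.request

# Hypothetical access token obtained from the FIWARE IDM (see section 5.3);
# the PEP proxy in front of Orion validates it and checks the caller's
# role/permissions before forwarding the request.
TOKEN = "1u374V8CFfc822kzfEcCNE0aKDLsyK"

query = urllib.parse.urlencode({"type": "Waste", "limit": 20})
req = urllib.request.Request(
    "http://backend.waste4think.eu/v2/entities?" + query,
    headers={
        "Fiware-Service": "waste4think",
        "Fiware-ServicePath": "/deusto/w4t/seveso/real",
        "X-Auth-Token": TOKEN,
    },
    method="GET",
)
# A real call would be: urllib.request.urlopen(req)
# (200 = permission granted; 401/403 = the caller's role lacks it)
print(req.get_method(), req.full_url)
```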


6. Back-End interfaces to others Waste4Think modules


6.1. Front-End
The Front-End layer of Waste4Think is composed of all the applications (Dashboards,
Planning Tool, Green Procurement, Learning Analytics, etc..) that will feed from the data
stored in the Back-End, whether in the Orion Context Broker or in the History Module. These
Front-End applications will communicate with the Back-End through the PyFiware Connector
Python library.

Figure 31 - Connection of Waste4Think Back-End with the Front-End applications

6.1.1. PyFiware Connector


Service overall description

The PyFiware Connector is a Python library that provides a unique endpoint to access the
information stored in the Waste4Think Back-End (the Orion Context Broker and the History
Module). PyFiware makes the communication with the FIWARE Context Broker and the use
of the OAuth protocol transparent to the application consumer.

Source code

The source code of the PyFiware Connector can be found on:


https://github.com/josubg/pyfiware

Installation/Configuration

• To install PyFiware, run the following command:


pip install pyfiware

• To configure PyFiware to read from the Orion Context Broker, the user needs to
define two sets of parameters: a) those for the specific instance of the Orion Context
Broker (host, service, service path, and the OAuth connector), and b) the parameters
relevant to the OAuth specification (id, secret, user and password). Table 48 shows
an example configuration of the PyFiware Connector to access the Context Broker.

from pyfiware import OrionConnector
from pyfiware.oauth import OAuthManager

oauth = OAuthManager(
    oauth_server_url='http://backend.waste4think.eu:8000/oauth2',
    client_id="3bb5a3ee06854161a05bfdcdeab7c1cf",
    client_secret="82e2f867b9db441ea0dd3659e05cbdcc",
    user="josu.bermudez@deusto.es",
    password="82e2f867b9db441ea0dd3659e05cbdcc"
)

oc = OrionConnector(
    host="http://backend.waste4think.eu/",
    service="waste4think",
    service_path="/#",
    oauth_connector=oauth
)

oc.search("SortingType")[3]['color']['value']
Table 48 – Example configuration of the PyFiware connector to access the Context Broker

• To configure PyFiware to read from the History module, the user needs to define two
sets of parameters: a) those for the specific instance of the History module (host and
the OAuth connector), and b) the parameters relevant to the OAuth specification (id,
secret, user and password). Table 49 shows an example configuration of the
PyFiware Connector to access the History Module.

from pyfiware import HistoryConnector
from pyfiware.oauth import OAuthManager

oauth = OAuthManager(
    oauth_server_url='http://backend.waste4think.eu:8000/oauth2',
    client_id="3bb5a3ee06854161a05bfdcdeab7c1cf",
    client_secret="82e2f867b9db441ea0dd3659e05cbdcc",
    user="josu.bermudez@deusto.es",
    password="82e2f867b9db441ea0dd3659e05cbdcc"
)

hc = HistoryConnector(
    host="http://history.waste4think.eu/",
    oauth_connector=oauth
)

scenario = "/"

hc.entity_list_by_type(scenario, 'SortingType', attrs=['name', 'color',
                       'wasteCharacterization'])
Table 49 – Example configuration of the PyFiware connector to access the History module

6.2. Sensors
The information provided by the different Waste4Think pilots can originate either from data
coming from on-site sensors (lock system, weighing system, GPS system, SCADAs in
treatment plants, interaction with the apps and games, etc.) and/or from data coming from
legacy systems that are already deployed in the pilots (MAWIS, Ge.R.A., ILink).


As described in section 9 of Deliverable D2.1 [3], Waste4Think pilots can opt for different
methods to upload their data (sensors, legacy systems, etc.) to the Back-End (see Figure
32). These methods provide pilots with different approaches to handle data upload, with the
aim of covering several scenarios such as near real-time processes, batch processes and
upload processes that require user interaction. The Waste4Think Back-End provides three
methods for uploading information coming from the sensors or the legacy systems: the NGSI
Connector API (Section 6.2.1), the NGSI Connector WEB (Section 6.2.2) and the NGSI v2
API (Section 6.2.4).

Figure 32 - Sensors pushing information to the Waste4Think Back-End

When connecting a sensor or legacy system to the Back-End, the first decision is which
connector to use. This decision depends on the frequency of data availability and on the
technical possibility of connecting different software systems and/or sensors directly to
the Waste4Think Back-End. Nevertheless, regardless of the connector being used, prior to
sending the information, the sensor layer or software system must format the gathered data
into the NGSI-compliant Waste4Think data model, which is specified in Section 7 of
Deliverable D2.1 [3]. Once formatted, the information is sent through one of the connectors
previously mentioned to the Waste4Think Back-End, stored in the Orion Context Broker and
subsequently in the History module. This information is then available to all the modules in
the Front-End (Section 6.1).
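As a minimal sketch of this formatting step, a raw sensor reading could be mapped to an NGSI v2 entity as shown below. The entity type "WasteContainer" and the attribute names are illustrative assumptions only; the normative data model is the one specified in Section 7 of Deliverable D2.1 [3].

```python
import json

def to_ngsi_entity(reading):
    """Map a raw (hypothetical) sensor reading to an NGSI v2 entity dict."""
    return {
        "id": reading["container_id"],
        "type": "WasteContainer",  # illustrative type, not the normative model
        "fillingLevel": {
            "type": "Number",
            "value": reading["fill"],
            "metadata": {},
        },
        "dateObserved": {
            "type": "DateTime",
            "value": reading["timestamp"],
            "metadata": {},
        },
    }

reading = {"container_id": "container:1", "fill": 0.63,
           "timestamp": "2018-10-01T10:00:00Z"}
print(json.dumps(to_ngsi_entity(reading), indent=2))
```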

6.2.1. NGSIConnectorAPI
Service overall description

The NGSIConnectorAPI is a RESTful web service that allows authorised Waste4Think users
to create, update and retrieve Waste4Think context entities in the Back-End platform.

NGSIConnectorAPI has been developed using NodeJS [20] technologies, specifically the
ExpressJS framework [22].

NGSIConnectorAPI uses the NGSI v2 API provided by FIWARE Orion. It provides
functionalities to upload large amounts of data stored in CSV or JSON files, automating the
process, checking the data against predefined rules and summarising the operation results.


NGSIConnectorAPI endpoint

The base URL of the NGSIConnectorAPI is http://backend.waste4think.eu/connector. The
list of NGSIConnectorAPI resources is detailed in the Swagger documentation at
http://backend.waste4think.eu:81.

Source code

The source code of the NGSIConnectorAPI is hosted on GitLab at the following link:

http://dev.waste4think.eu/waste4think/NgsiConnector.

The Docker image used in the Waste4Think project is hosted in the Waste4Think
organization on Docker Hub and is available at the following link:

https://hub.docker.com/r/waste4think/ngsi-connector/.

Installation/Configuration

NGSIConnectorAPI has been installed using docker-compose.

Table 50 presents the docker-compose file used to configure and deploy the
NGSIConnectorAPI in the Waste4Think Back-End.

version: "3"
services:
  connector:
    image: waste4think/ngsi-connector:3.0.0
    volumes:
      - /data/docker_connector/conf/config.js:/opt/ocbconnector/config.js
    ports:
      - "3000:3000"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 50 – NGSIConnectorAPI docker compose file

connector:

image: waste4think/ngsi-connector:3.0.0

Service and image used (hosted in the Waste4Think Docker Hub organization).

ports: "3000:3000"

Port of the NGSIConnectorAPI service.

/data/docker_connector/conf/config.js:/opt/ocbconnector/config.js

Volume that maps the NGSIConnectorAPI configuration file.

Configuration file

Table 51 presents the configuration file used by the NGSIConnectorAPI service.

const config = {};


config.orion_url = 'http://192.168.229.62:82/';
config.api_port = '3000';
config.ext = ['.csv'];
config.https = true;
config.returnEntities = 20;
module.exports = config;
Table 51 - NGSIConnectorAPI config file

config.orion_url:

The FIWARE Orion URL. In the Waste4Think context, the FIWARE Orion Context Broker is
protected by a PEP proxy, so the value 'http://192.168.229.62:82/' represents the URL
of the PEP proxy service.

config.api_port:

The NGSIConnectorAPI port.

config.ext:

Supported file extensions for uploading/updating context information in the FIWARE Orion
Context Broker. At the time of writing of this deliverable, the NGSIConnectorAPI supports
the csv and json extensions.

config.https:

Whether to run the NGSIConnectorAPI in HTTPS mode. Admitted values are "true" and "false".

config.returnEntities:

Number of entities to get from the Context Broker. The FIWARE Orion Context Broker
returns 20 entities by default when using GET all entities operations. With this option, the
number can be increased up to 1000.

NGSIConnectorAPI supporting tool

NGSIConnectorAPI has been documented using Swagger [15], a powerful open-source
framework that helps design, build, document and consume RESTful APIs. The REST API
documentation is available within the deployed app through Swagger annotations at the
following link: http://backend.waste4think.eu:81/. The Swagger documentation can be split
into three main sections:

• General description of the project that includes developers, https/http license etc. (see
Figure 33)

Figure 33 - APIs swagger description

• List of APIs provided by the NGSIConnectorAPI service (see Figure 34).

Figure 34 - swagger API list

• Information on data models defined in the Waste4Think project (see Figure 35).


Figure 35 - swagger data model info

Annex C contains an exploded view of each API call definition, which includes i)
implementation notes, ii) query parameters and iii) the response data model.

6.2.2. NGSIConnectorWEB
Service overall description
The NGSIConnectorWEB is a web application which exposes the NGSIConnectorAPI
functionalities and provides Waste4Think users with a graphical interface to work with them.
On top of the functionalities provided by NGSIConnectorAPI, NGSIConnectorWEB is also
responsible for access token management, specifically for automating the process of token
retrieval.

NGSIConnectorWEB is developed with NodeJS [17] and ExpressJS [18] technologies, as
they offer great speed, a huge set of functionalities and a very stable environment.

NGSIConnectorWEB endpoint

NGSIConnectorWEB is running at the following link:

http://backend.waste4think.eu:3001

Source code

The source code of the NGSIConnectorWEB is hosted at the following link:

http://dev.waste4think.eu/waste4think/NgsiConnectorWeb

The Docker image is hosted in the Waste4Think organization on Docker Hub at the
following link:

https://hub.docker.com/r/waste4think/ngsi-connector-web/

Installation/Configuration

NGSIConnectorWEB has been deployed using docker-compose.

In Table 52, the docker-compose file used to configure and deploy the NGSIConnectorWEB
in the Waste4Think Back-End is presented.

version: "3"
services:
  connector:
    image: waste4think/ngsi-connector-web:2.0.1
    volumes:
      - /data/docker_connector_web/conf/config.js:/opt/NgsiConnectorWeb/config.js
    ports:
      - "3001:3001"
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1400
Table 52 – NGSIConnectorWEB docker-compose file

connector:

image: waste4think/ngsi-connector-web:2.0.1

Service and image used (hosted in the Waste4Think Docker Hub organization).

ports: "3001:3001"

Port on which the NGSIConnectorWEB Docker container is running.

/data/docker_connector_web/conf/config.js:/opt/NgsiConnectorWeb/config.js

Volume that maps the NGSIConnectorWEB configuration file to an external folder.

Configuration file

Table 53 presents the configuration file used by the NGSIConnectorWEB service.

"url": "https://192.168.229.62:3000/v1",
"port": 3001,
"clientID": "3bb5a3ee06854161a05bfdcdeab7c1cf",
"clientSecret": "82e2f867b9db441ea0dd3659e05cbdcc"

Table 53 – NGSIConnectorWEB configuration file.

url:

The NGSIConnectorAPI URL, used by NGSIConnectorWEB for API calls.

port:

The NGSIConnectorWEB port.

clientID / clientSecret:

The ClientID and ClientSecret of the "Waste4think" application defined in the FIWARE IDM
instance (see section 5.3 of this deliverable), used to get an access token in the
NGSIConnectorWEB interface.

NGSIConnectorWEB interface

The NGSIConnectorWEB interface is structured in different web pages which provide
different functionalities to the Waste4Think users.

Login page

The NGSIConnectorWEB login page is used to authenticate Waste4Think users (see Figure
36).

Figure 36 - NGSIConnectorWEB Login page

Home page

The home page provides quick access to the most used functionalities, as well as a
navigation menu to all available pages (see Figure 37).

Figure 37 - NGSIConnectorWEB Home page

Entities page

Entities page provides the functionality to get all entities specifying Fiware-Service and
Fiware-ServicePath (see Figure 38).

Figure 38 - NGSIConnectorWEB Entities page

Details page

Details page provides the functionality to get a single entity specifying Fiware-Service and
Fiware-ServicePath (see Figure 39).


Figure 39 - NGSIConnectorWEB Details page

Type page

Type page provides the functionality to get entities specifying Entity Type, Fiware-Service
and Fiware-ServicePath (see Figure 40).

Figure 40 - NGSIConnectorWEB Type page

Create page

Create page provides the functionality to create entities from user-supplied file specifying
Fiware-Service and Fiware-ServicePath (see Figure 41).


Figure 41 - NGSIConnectorWEB Create page

Update page

Update page provides the functionality to update entities from user-supplied file specifying
Fiware-Service and Fiware-ServicePath (see Figure 42).

Figure 42 - NGSIConnectorWEB Update page

Rules page

Rules page provides the functionality to show information about the available rules defined in the
Waste4Think data model (see Figure 43).


Figure 43 - NGSIConnectorWEB Rules page

Access token

The NGSIConnectorWEB access token form (see Figure 44) provides a function to retrieve
the access token using FIWARE IDM credentials (see section 5.3 of this deliverable).

Figure 44 - NGSIConnectorWEB Access token form
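The token retrieval that this form (and NGSIConnectorWEB) automates can be sketched as follows, assuming the standard OAuth2 "password" grant exposed by the FIWARE IDM (Keyrock). The clientID/clientSecret values are those shown earlier in this deliverable; the user credentials are placeholders, and the request is built without being sent.

```python
import base64
import urllib.parse
import urllib.request

def build_token_request(idm_url, client_id, client_secret, user, password):
    """Build (without sending) an OAuth2 password-grant token request."""
    credentials = base64.b64encode(
        f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({
        "grant_type": "password",
        "username": user,
        "password": password,
    }).encode()
    return urllib.request.Request(
        idm_url + "/oauth2/token",
        data=body,
        headers={
            "Authorization": "Basic " + credentials,
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = build_token_request(
    "http://backend.waste4think.eu:8000",
    "3bb5a3ee06854161a05bfdcdeab7c1cf",   # "Waste4think" application clientID
    "82e2f867b9db441ea0dd3659e05cbdcc",   # ...and clientSecret
    "user@example.org", "secret")          # placeholder user credentials
# A real call: urllib.request.urlopen(req) returns JSON with "access_token"
print(req.get_method(), req.full_url)
```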

6.2.3. NGSIConnector Data Management


The objective of this section is to provide information regarding the data management
aspects of the NGSIConnector tools (NGSIConnectorAPI and NGSIConnectorWEB), with a
focus on how data is managed by the various parts of the Waste4Think systems.

Data sources

Data sent to the NGSIConnector tools can come from different sources; each source
represents a certain use case of the Waste4Think project.

Sources responsible for data in the Waste4Think project are:


• Sensors;
• Third-Party Systems;
• Other information sources (APPs, mobile games, learning materials, surveys).

Sensors
All the sensors/devices that act as input to define the waste management system. In
particular:

• Sensors deployed at the bins and clean points;


• Sensors deployed at the trucks;
• Sensors deployed at the treatment plants.

Third-Party Systems

All the Third-Party waste management systems (e.g. MOBA Systems, Wintarif, GeRa) that
provide data to the Waste4Think Back-End.

Other information sources

These sources comprise a series of different input mechanisms such as APPs, mobile
games, surveys, learning materials and other monitoring actions that feed information into
the system.

Data formats

Data coming from different sources (sensors, Third-Party Systems and other information
sources) can be sent to the Back-End, through the NGSIConnector tools, in various formats
depending on its source.

The formats currently managed are:


• CSV file
• JSON file
• API Request body

CSV:

CSV files (see Table 54) used to store and send data to the Back-End must fulfil the
conditions set by the project design; these aspects are the main building blocks by which
data is modelled and sent to the Back-End services.

id,family,type,name,description,refCategory,wastecode,definitionSource
waste:1,Resource,Waste,Biowaste,,wastecategory:1,200108,easetech
waste:2,Resource,Waste,Meat and bone meal,,wastecategory:1,200108,easetech
waste:3,Resource,Waste,Rape meal,,wastecategory:1,200108,easetech
Table 54 – Waste4Think CSV file

This CSV file must include:

• Header, the first line of the CSV file, which contains the information that
determines how data will be modelled, in particular the attribute names of an entity;
• Body, which represents the attribute values; each row contains the attribute
values in the same order as the header definition.

Important fields that each file must contain are:

• id/type – mandatory fields that every CSV file must have; they represent the
entityID and entityType of an entity based on the NGSI standard.
• metadata (optional) – the CSV file can also include the metadata of an entity
attribute. Metadata is added to the header of the CSV file, placed before the
attribute name to which the metadata refers. Metadata must have the following
structure: %%"metadata":{metadata values}%%attribute_name.
Table 55 shows an example of a CSV file with metadata, while Table 56 shows the
NGSI entity created from the CSV file.

id,family,type,%%"metadata":{"unit":{"value":"C62","type": "String"}}%%name
waste:1,Resource,Waste,Biowaste
waste:2,Resource,Waste,Meat and bone meal
waste:3,Resource,Waste,Rape meal
Table 55 – Waste4Think CSV metadata

[
  {
    "id": "waste:1",
    "type": "Waste",
    "family": {
      "type": "String",
      "value": "Resource",
      "metadata": {}
    },
    "name": {
      "type": "String",
      "value": "Biowaste",
      "metadata": {
        "unit": {
          "value": "C62",
          "type": "String"
        }
      }
    }
  }
]
Table 56 – Waste entity with metadata
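The %%metadata%% header convention above can be illustrated with a short parsing sketch. This is a hypothetical helper, not the actual NGSIConnector implementation; it only shows how the header of Table 55 could be turned into the entity of Table 56.

```python
import json
import re

def csv_to_entities(csv_text):
    """Hypothetical sketch of the %%metadata%% header convention
    (not the actual NGSIConnector implementation)."""
    lines = [line for line in csv_text.splitlines() if line.strip()]
    header_line, body_lines = lines[0], lines[1:]
    # Pull the %%...%% metadata segments out before splitting on commas,
    # because the embedded JSON itself contains commas.
    metas = re.findall(r'%%(.*?)%%', header_line)
    columns = re.sub(r'%%.*?%%', '%%', header_line).split(',')
    entities = []
    for line in body_lines:
        entity, meta_iter = {}, iter(metas)
        for column, value in zip(columns, line.split(',')):
            metadata = {}
            if column.startswith('%%'):
                column = column[2:]
                metadata = json.loads('{' + next(meta_iter) + '}')['metadata']
            if column in ('id', 'type'):
                entity[column] = value
            else:
                entity[column] = {'type': 'String', 'value': value,
                                  'metadata': metadata}
        entities.append(entity)
    return entities

csv_text = ('id,family,type,'
            '%%"metadata":{"unit":{"value":"C62","type":"String"}}%%name\n'
            'waste:1,Resource,Waste,Biowaste\n')
print(json.dumps(csv_to_entities(csv_text)[0], indent=2))
```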

JSON:
JSON files (see Table 57) used to send data to the Back-End through the NGSIConnector
tools must be structured following the FIWARE NGSI v2 standard.

[
  {
    "id": "route:1",
    "type": "Route",
    "arrivalPoint": {
      "metadata": {},
      "type": "StructuredValue",
      "value": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": ["9.15241", "45.6388"]
        }
      }
    },
    "departurePoint": {
      "metadata": {},
      "type": "StructuredValue",
      "value": {
        "type": "Feature",
        "geometry": {
          "type": "Point",
          "coordinates": ["9.15238", "45.63904"]
        }
      }
    },
    "description": {
      "metadata": {},
      "type": "String",
      "value": ""
    }
  }
]
Table 57 – Waste4Think JSON file data

Request body:

The data structure is the same as for JSON files, but the data is not stored in a file; it is
included directly in the API request body. Since the structure is the same as for JSON files,
it must also contain "id" and "type".

Rules

The NGSIConnector tools contain a "rules" module that checks both the structure and the
integrity of the data sent to the FIWARE Back-End.
Rules (see Table 58) structure the data sent to the FIWARE Back-End and are created by
the W4T system admin using the value of the entity's attribute type as a naming convention
(e.g. WasteTransaction.js).

const rules = require("../utilities");

const WasteTransaction = {
  id: rules.idCheck,
  type: rules.typeCheck,
  family: rules.stringCheck,
  refEmitter: rules.mandatoryCheck,
  refReceiver: rules.mandatoryCheck,
  refCapturer: rules.stringCheck,
  date: rules.mandatoryCheck,
  emittedResources: rules.structuredListMandatory,
  receivedResources: rules.structuredListMandatory,
  incidence: rules.stringCheck,
  incidenceReason: rules.stringCheck
};

module.exports = WasteTransaction;
Table 58 – NgsiConnectorApi Waste Transaction rule example
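Although the actual rules module is written in JavaScript (Table 58), the underlying idea can be sketched in Python; the check functions, rule set and entity below are invented for illustration only.

```python
# Python sketch of the rule idea only; the real NGSIConnector rules module
# is JavaScript (Table 58). Check names here are invented.
def mandatory_check(entity, attr):
    """The attribute must be present with a non-empty value."""
    return attr in entity and entity[attr].get("value") not in (None, "")

def string_check(entity, attr):
    """If the attribute is present, its value must be a string."""
    return attr not in entity or isinstance(entity[attr].get("value"), str)

WASTE_TRANSACTION_RULES = {
    "refEmitter": mandatory_check,
    "refReceiver": mandatory_check,
    "date": mandatory_check,
    "incidence": string_check,
}

def validate(entity, rules):
    """Return the list of attributes that violate their rule."""
    return [attr for attr, check in rules.items()
            if not check(entity, attr)]

entity = {
    "id": "wastetransaction:1",
    "type": "WasteTransaction",
    "refEmitter": {"type": "String", "value": "producer:1"},
    "date": {"type": "DateTime", "value": "2018-10-01T10:00:00Z"},
}
print(validate(entity, WASTE_TRANSACTION_RULES))  # refReceiver is missing
```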

Data upload/update

Waste4Think data coming from different sources (e.g. sensors, Third-Party Systems, APPs,
forms) does not have the data structure required by the FIWARE Back-End.
NgsiConnectorApi and NgsiConnectorWeb are the connector tools used to upload or update
data in the FIWARE Back-End.
These connectors provide pilots with different approaches to handle data upload, with the
aim of covering different scenarios such as near real-time processes, batch processes and
upload processes that require user interaction.

Scenario 1: Near Real-time Process


This scenario describes how an RFID sensor or a filling sensor deployed in a bin can upload
or update data in the Waste4Think Back-End in near real-time (see Figure 45).

Figure 45 – Scenario: Near Real-time process


In this case the sensors can send context data in an automated way to the Back-End by
using the NGSIConnectorAPI directly with the request body option, since the data typically
contains the values of a single entity.

The request URIs of the NGSIConnectorAPI used in this scenario are:

• uploading data: HTTP POST /connector/entities (request body)
• updating data: HTTP POST /connector/entities/update (request body)

Table 59 shows an example of upload process with request body.

curl -X POST \
  http://backend.waste4think.eu/connector/entities \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: waste4think' \
  -H 'Fiware-ServicePath: /deusto/w4t/seveso/real' \
  -H 'X-Auth-Token: 1u374V8CFfc822kzfEcCNE0aKDLsyK' \
  -d '[
    {
      "id": "route",
      "type": "Route",
      "arrivalPoint": {
        "metadata": {},
        "type": "StructuredValue",
        "value": {
          "type": "Feature",
          "geometry": {
            "type": "Point",
            "coordinates": ["9.15251", "45.63864"]
          }
        }
      }
    }
  ]'
Table 59 – NgsiConnectorApi upload data in near real-time
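The curl call in Table 59 can equivalently be issued from code. The sketch below builds the same request with Python's standard library; the host, headers and token are the placeholder values from the curl example, and the final urlopen call is left commented out.

```python
import json
import urllib.request

entities = [{
    "id": "route",
    "type": "Route",
    "arrivalPoint": {
        "metadata": {},
        "type": "StructuredValue",
        "value": {"type": "Feature",
                  "geometry": {"type": "Point",
                               "coordinates": ["9.15251", "45.63864"]}},
    },
}]

req = urllib.request.Request(
    "http://backend.waste4think.eu/connector/entities",
    data=json.dumps(entities).encode(),
    headers={
        "Content-Type": "application/json",
        "Fiware-Service": "waste4think",
        "Fiware-ServicePath": "/deusto/w4t/seveso/real",
        "X-Auth-Token": "1u374V8CFfc822kzfEcCNE0aKDLsyK",  # placeholder token
    },
    method="POST",
)
# A real call would be: urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
```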

Scenario 2: Batch process for a massive upload/update


This scenario describes how a third-party system (e.g. the Wintarif system) can be
integrated into the Waste4Think Management System, uploading or updating data in the
Waste4Think Back-End from a CSV or JSON file with many entities (Figure 46).

Figure 46 – Scenario: batch process for massive data upload

The third-party system collects and handles data in its own data management system. A
batch process retrieves the data, stores it in a CSV or JSON file compliant with the data
formats described in section 6.2.3 "Data formats", and finally sends the file with the entities
to the NGSIConnectorAPI using the userFile option.


The request URIs of the NGSIConnectorAPI used in this scenario are:

• uploading data: HTTP POST /connector/entities (userFile option)
• updating data: HTTP POST /connector/entities/update (userFile option)

Table 60 shows an example of the upload process from a CSV file.

curl -X POST \
http://backend.waste4think.eu/connector/entities \
-H 'Content-Type: multipart/form-data' \
-H 'Fiware-Service: waste4think' \
-H 'Fiware-ServicePath: /deusto/w4t/seveso/real' \
-H 'X-Auth-Token: 1u374V8CFfc822kzfEcCNE0aKDLsyK' \
-F 'userFile=@C:\Users\waste.csv'
Table 60 – NgsiConnectorApi massive upload of data from a CSV file
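The `-F 'userFile=@...'` upload in Table 60 corresponds to a multipart/form-data request. The sketch below builds such a request with Python's standard library only; the file content, token and paths are placeholder values taken from the examples in this deliverable.

```python
import urllib.request
import uuid

def multipart_body(field, filename, content, boundary):
    """Build a minimal single-file multipart/form-data body by hand."""
    return (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: text/csv\r\n\r\n"
        f"{content}\r\n"
        f"--{boundary}--\r\n"
    ).encode()

csv_content = ("id,family,type,name\n"
               "waste:1,Resource,Waste,Biowaste\n")
boundary = uuid.uuid4().hex
body = multipart_body("userFile", "waste.csv", csv_content, boundary)

req = urllib.request.Request(
    "http://backend.waste4think.eu/connector/entities",
    data=body,
    headers={
        "Content-Type": f"multipart/form-data; boundary={boundary}",
        "Fiware-Service": "waste4think",
        "Fiware-ServicePath": "/deusto/w4t/seveso/real",
        "X-Auth-Token": "1u374V8CFfc822kzfEcCNE0aKDLsyK",  # placeholder token
    },
    method="POST",
)
# A real call would be: urllib.request.urlopen(req)
print(len(body), "bytes,", req.get_method())
```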

Scenario 3: Manual process


This scenario describes how a Waste4Think user/operator can manually upload/update data
in the Waste4Think Back-End using a web user interface.
An authorized Waste4Think user can manually send data using the NgsiConnectorWeb tool,
which exposes the same functionality as the NGSIConnectorAPI but with a web user
interface. The user, after the authentication process, is able to access both the upload and
the update functionalities, as shown in Figure 47. More details about the NGSIConnectorWEB
are described in section 6.2.2.

Figure 47 – Uploading data using the NGSIConnectorWEB

6.2.4. NGSI API


The FIWARE NGSI v2 API [24] is a RESTful API provided by the FIWARE Orion Context Broker
to manage the entire lifecycle of context information, including updates, queries,
registrations, and subscriptions.

Details about the FIWARE NGSI v2 API have already been described in Deliverable D2.1 [3].

6.3. Waste Collection


The connections needed between the sensors/legacy systems and the Waste4Think Back-
end for each of the different pilots during the waste collection phase are explained next.

a) Zamudio

The waste containers in Zamudio (Figure 48) are provided with an RFID tag that uniquely
identifies each container, and an e-lock which can only be opened by using one of the citizen
cards specific to the municipality. On collection, the garbage truck reads the tag and weight
of the container, as well as the information stored in the e-lock. This information is sent to
the TruckDataApp (Annex G of Deliverable D2.1 [3]) along with the GPS information of the
route. When the garbage truck finishes the collection route and returns to the garage,
the TruckDataApp formats the gathered data according to the specifications of the
Waste4Think data model and sends it over to the Waste4Think Back-End using the NGSI
connectors previously detailed.

Figure 48 – Waste collection in Zamudio

b) Halandri

The waste containers in Halandri (Figure 49) are provided with an RFID tag that uniquely
identifies each container. On collection, the garbage truck reads the tag and weight of the
container. This information is registered in the PowerFleet platform along with the GPS
information of the route. A specific connector gathers the data from the PowerFleet platform
according to the specifications of the Waste4Think data model and sends it over to the
Waste4Think Back-End using the NGSI connectors previously detailed.


Figure 49 – Waste collection in Halandri

c) Seveso

Seveso already had a system that covered several aspects of waste management. In this
case, the bags for residual waste in Seveso (Figure 50) are provided with an RFID tag that
makes it possible to identify which user has used which bag. On collection, the garbage truck reads the
tag of the bag and sends this data to WinTarif (Deliverable D5.1 [28]) which has
implemented a special routine that makes use of the NGSI connectors previously detailed to
send information to the Waste4Think Back-End on a daily basis. Ideally, Gelsia could also
implement a specific routine to send the data to the Back-End. However, due to them not
being part of the project consortium, it has been difficult to engage them and convince them
to share the information. To overcome this issue, several predefined routes have been
identified based on real data collected by Gelsia and uploaded to the Back-End.

Figure 50 – Waste collection in Seveso

d) Cascais

Cascais already had a system that covered several aspects of waste management. The
underground containers in Cascais (Figure 51) feature a filling sensor and an electronic key-lock
which identifies those volunteers that make use of them. The electronic key-lock sends
continuous information to the EMZ platform (EMZ is the company responsible for the key-locks)
via GPRS, which is then collected by MAWIS (Deliverable D2.5 [27]) through an

interface implemented for that particular purpose. On collection, the garbage truck reads the
tag of the container along with the information from the GPS through the onboard CPU, known as
MiniOperand. Once the collection route is finished, the MiniOperand sends the data over to
MAWIS. Whenever MAWIS receives new information, either about the use of a key-lock, a
new measure of the filling sensor on a specific container, or a finished route, it sends the
information over to the Waste4Think Back-End using the NGSI connectors previously
detailed.

Figure 51 – Waste collection in Cascais

6.4. Treatment Plants


This procedure will update the status of the operation and the amount of waste processed in
each of the treatment plants featured in the Waste4Think project:

• Biomass production from food/kitchen waste monitoring (R19), Section 7.3 of D3.1
[25];
• Biofuel and Compost production from disposable nappies (R20), Section 8 of
D3.3 [26];

In the case of the status of the operation, the CSV output files produced by the SCADA
systems of the treatment plants will be parsed by the NGSIConnectorAPI and the information
will reach the different attributes of the data model. On the other hand, for the waste
processed, handmade CSV files will be uploaded and parsed by the NGSIConnectorAPI.
Both procedures will be performed after every batch of operation of the treatment plant.
Further information about the specifications of the SCADA systems can be found in
Deliverable D3.1, Section 7.3 for R19, and Deliverable D3.3, Section 8 for R20.
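A sketch of how such a SCADA CSV export could be mapped to NGSI attributes is shown below; the column names follow Table 61 and the mapping logic is illustrative, not the actual NGSIConnectorAPI implementation:

```python
import csv
import io

# Illustrative sample mirroring Table 61; real SCADA exports may differ.
SAMPLE_CSV = """time,ph_methanogenic,temp_methanogenic,ph_acidogenic,temp_acidogenic
02/10/2015 10:52,Bad,Bad,Bad,Bad
02/10/2015 11:59,9.05696,44.4416,5.80772,15.3927
"""

def row_to_ngsi_attrs(row):
    """Map one CSV row to NGSI v2 attributes, skipping invalid readings."""
    attrs = {}
    for name, raw in row.items():
        if name == "time" or raw == "Bad":  # "Bad" marks an invalid reading
            continue
        attrs[name] = {"type": "Number", "value": float(raw)}
    return attrs

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
updates = [row_to_ngsi_attrs(r) for r in rows]
# updates[0] is empty (all readings invalid); updates[1] holds four attributes.
```

Each resulting attribute dictionary can then be uploaded through the NGSI connectors described earlier.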

Table 61 and Table 62 show an example of the CSV file detailing the status of the treatment
operation for R20 and R19, respectively. Table 63 and Table 64 show an example of the CSV file detailing
the amount of waste processed for R20 and R19.

Time             | pH Methanogenic Reactor | temp Methanogenic Reactor | pH Acidogenic Reactor | temp Acidogenic Reactor
02/10/2015 10:52 | Bad                     | Bad                       | Bad                   | Bad
02/10/2015 10:55 | Bad                     | Bad                       | Bad                   | Bad
02/10/2015 11:59 | 9.05696                 | 44.4416                   | 5.80772               | 15.3927
02/10/2015 12:00 | 9.05696                 | 44.4387                   | 5.80732               | 15.3939
02/10/2015 16:00 | 9.08748                 | 42.6506                   | 5.80543               | 15.3902
02/10/2015 20:00 | 9.11096                 | 40.8522                   | 5.80579               | 15.2838

Table 61 - Example CSV file with the information of the status in R20

Day | Temperature (°C) (sensor) | Moisture Content (%) | pH (manually) | Electrical conductivity (EC) [mS/cm] (manually) | Total solids (TS) [g] | TS [%] | Volatile solids (VS) [g] | TOC [%] | N [%] | N (TKN) (mg/g)
1   | -    | -     | -    | -    | -    | -     | -      | -      | -     | -
2   | 52.1 | -     |      |      |      |       |        |        |       |
3   | 57.7 | 39.12 | 6.3  | -    |      |       |        | 50.560 | 2.222 | 22.217
4   | 55.4 | 35.99 | -    | -    |      |       |        |        |       |
5   | 61.6 | 35.22 | -    | -    |      |       |        | 49.393 | 2.286 | 22.861
6   | 61.4 | 35.19 | 6.27 | 2.08 |      |       |        |        |       |
7   | 62   | 39.6  | 6.6  | 2.2  | 1.74 | 62.81 | 1.4021 | 47.185 | 2.437 | 24.366
8   | 60   | 38.58 | 6.91 | 2.28 | 1.55 | 66.41 | 1.3683 |        |       |
9   | 59.2 | 39.55 | 6.82 | 2.28 | 2.01 | 67.56 | 1.6906 | 48.762 | 2.439 | 24.388
10  | 49.6 | 35.64 | 6.82 | 2.28 | -    | -     | -      |        |       |
11  | 55.9 | 45.03 | -    | -    | -    | -     | -      |        | 2.657 | 26.570
12  | 61.6 | -     | -    | -    | -    | -     | -      |        |       |
13  | 57.2 | 41.24 | 6.93 | 2.4  | 1.32 | 60.10 | 1.0981 |        |       |
14  | 62   | 42.95 | -    | -    | -    | -     | -      |        |       |
15  | 56.8 | 45.52 | 7.22 | 2.46 | 1.67 | 54.46 | 1.3959 |        |       |
16  | 46.7 | 44.57 | 7.24 | 2.4  | 2.05 | 55.91 | 1.7154 |        |       |
17  | 51.6 | 41.96 | 7.41 | 2.48 | 2.30 | 63.86 | 1.8289 |        |       |

Table 62 - Example CSV file with the information of the status in R19

Pilot Feed: Fruits and vegetables [kg], Edible starchy products [kg], Edible animal products/byproducts [kg], Used disposable infant nappies [#], Used disposable adult nappies [#]
Pilot Output: SAP [Kg], Plastics [Kg], Compost [Kg], Biogas STP [L], Thermal energy equivalent [MJ]

Week | Fruits and vegetables | Edible starchy products | Edible animal products/byproducts | Infant nappies | Adult nappies | SAP  | Plastics | Compost | Biogas STP | Thermal energy
1    | 100.1                 | 200.3                   | 450.6                             | 320            | 105           | 75.9 | 70.2     | 65.3    | 20.8       | 58.8
2    | 150.2                 | 275.2                   | 500.9                             | 300            | 120           | 80.7 | 52.1     | 46.5    | 27.3       | 60.2
3    | 200.5                 | 450.2                   | 415.9                             | 275            | 136           | 90.4 | 32.6     | 58.8    | 25.9       | 35.2
Table 63 - Example CSV file with the information of processed materials in R20

Morning batch
Date                                            | 3/1/18 | 4/1/18 | 5/1/18 | 8/1/18 | 9/1/18 | 10/1/18 | 11/1/18 | 12/1/18 | 15/1/18 | 16/1/18
weight of HFW (Kg)                              | 111    | 115    | 111    | 115    | 115    | 116     | 115     | 116     | 116     | 135
weight of FORBI (Kg)                            | 22.8   | 17.7   | 27     | 34     | 31.7   | 26      | 26.4    | 17      | 25.2    | 25.9
electricity counter index (start of cycle) (kW) | 26650  | 26756  | 26843  | 26934  | 27117  |         | 27389   | 27560   | 27740   | 27925
electricity counter index (end of cycle) (kW)   | 26756  | 26843  | 26934  | 27026  | 27197  | 27389   | 27475   | 27641   | 27827   | 28036
energy consumption (kW)                         | 106    | 87     | 91     | 92     | 80     | 27389   | 86      | 81      | 87      | 111

Afternoon batch
Date                                            | 3/1/18 | 4/1/18 | 5/1/18 | 8/1/18 | 9/1/18 | 10/1/18 | 11/1/18 | 12/1/18 | 15/1/18 | 16/1/18
weight of HFW (Kg)                              |        |        |        | 115    | 115    |         | 116     | 115     | 130     | 130
weight of FORBI (Kg)                            |        |        |        | 22.4   | 26.7   |         | 19      | 24.3    | 30.8    | 27.1
electricity counter index (start of cycle) (kW) |        |        |        | 27026  | 27197  |         | 27475   | 27641   | 27827   | 28036
electricity counter index (end of cycle) (kW)   |        |        |        | 27117  |        |         | 27560   | 27740   | 27925   | 28132
energy consumption (kW)                         |        |        |        | 91     |        |         | 85      | 99      | 98      | 96
Table 64 - Example CSV file with the information of processed materials in R19

6.5. Serious Games & APPs


Both the Serious Games and the APPs will connect to the FIWARE Orion Context
Broker using the FIWARE-NGSI v2 API [24], either directly or through the
PyFiware library. In both cases, these modules will update the relevant UserMetrics entities
defined in the Waste4Think data model after each use of the platform.
Examples of the data sent from the Serious Games and the APPs to the Context Broker are
shown in Table 65, Table 66 and Table 67.

{
"id": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"type": "CitizenAppUserMetrics",
"search": {
"type": "Text",
"value": "paper containers",
"metadata": {}
}
}

Table 65 - Example JSON with the information of a user session in the Citizen App

{
"id": "de46fcce-c951-11e8-a8d5-f2801f1b9fd1",
"type": "FoodAppUserTransaction",
"donor": {
"type": "Text",
"value": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"metadata": {}
},
"donee": {
"type": "Text",
"value": "6799a08c-c950-11e8-a8d5-f2801f1b9fd1",
"metadata": {}
},

"product": {
"type": "Text",
"value": "Cooked pasta",
"metadata": {}
},
"weight": {
"type": "Number",
"value": 20,
"metadata": {
"unit": "kg"
}
},
"allergens": {
"type": "Text",
"value": "cheese, nuts"
}
}

Table 66 - Example JSON with the information of a user transaction in the Food App

{
"id": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
"type": "SortingGameUserMetrics",
"age": {
"type": "Number",
"value": 27,
"metadata": {}
},
"gender": {
"type": "Text",
"value": "MALE",
"metadata": {}
},
"municipalityName": {
"type": "Text",
"value": "Zamudio",
"metadata": {}
},
"countryName": {
"type": "Text",
"value": "Spain",
"metadata": {}
},
"maxLevel": { //The furthest level the user has reached
"type": "Number",
"value": 2,
"metadata": {}
},
"totalPlayedLevels": { //The total number of level play-throughs,
including replayed levels
"type": "Number",
"value": 2,
"metadata": {}
},
"totalPoints": {
"type": "Number",
"value": 150,
"metadata": {}
},
"totalItems_All": {
"type": "Number",
"value": 20,
"metadata": {}
},
"totalIncorrectItems_All": {
"type": "Number",

"value": 4,
"metadata": {}
},
"totalItems_TypeID_1053": { // Total number of items from the sorting
type with ID 1053 that have been presented to the user
"type": "Number",
"value": 10,
"metadata": {}
},
"totalIncorrectItems_TypeID_1053": { // Total number of errors in the
sorting type with ID 1053 that the user has made
"type": "Number",
"value": 2,
"metadata": {}
},
"totalItems_TypeID_1054": {
"type": "Number",
"value": 5,
"metadata": {}
},
"totalIncorrectItems_TypeID_1054": {
"type": "Number",
"value": 1,
"metadata": {}
},
"totalItems_TypeID_1055": {
"type": "Number",
"value": 5,
"metadata": {}
},
"totalIncorrectItems_TypeID_1055": {
"type": "Number",
"value": 1,
"metadata": {}
}
}

Table 67 - Example JSON with the information of a user session in the Serious Games
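As an illustration of how a game client might maintain an entity like the one in Table 67 between play sessions, the sketch below increments the session counters after one level; the merge logic and helper names are assumptions made for this example, not the shipped game code:

```python
def ngsi_number(value):
    """Wrap a value as an NGSI v2 Number attribute."""
    return {"type": "Number", "value": value, "metadata": {}}

def apply_level_result(metrics, level, points, items, errors):
    """Update a SortingGameUserMetrics-style dict after one played level."""
    def bump(name, delta):
        attr = metrics.setdefault(name, ngsi_number(0))
        attr["value"] += delta
    # maxLevel keeps the furthest level reached, never decreasing.
    current = metrics.setdefault("maxLevel", ngsi_number(0))
    current["value"] = max(current["value"], level)
    bump("totalPlayedLevels", 1)
    bump("totalPoints", points)
    bump("totalItems_All", items)
    bump("totalIncorrectItems_All", errors)
    return metrics

metrics = {"id": "5bed2fcc-ef25-45f1-92fc-3d7992498991",
           "type": "SortingGameUserMetrics"}
apply_level_result(metrics, level=2, points=150, items=20, errors=4)
```

The resulting dictionary can then be sent to the Context Broker as an NGSI v2 entity update.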

6.6. Learning Materials

Learning materials will retrieve information using several online surveys. The results of these
surveys will be processed regularly and introduced into the platform as UserMetrics entities. To
this end, a person will manually download the results of the surveys using the CSV export
tool and will use the NGSIConnectorAPI to upload the information to the platform. An
example of this CSV with information on the Learning Materials is shown in Table 68.

LM_ID | Age Range | Students involved | Effectiveness | Relevance | Objective | Contents | Motivate | Recipients | Evaluation | … | Data
1     | 5-7       | 12                | 4             | 4         | 2         | 3        | 4        | 5          | 3          | … | 4
2     | 5-7       | 12                | 3             | 5         | 4         | 3        | 5        | 4          | 3          | … | 4
3     | 5-7       | 12                | 4             | 5         | 4         | 3        | 4        | 5          | 4          | … | 5
4     | 5-7       | 12                | 3             | 4         | 5         | 3        | 5        | 6          | 5          | … | 4
6     | 5-7       | 12                | 5             | 5         | 3         | 3        | 3        | 4          | 5          | … | 4
…     | …         | …                 | …             | …         | …         | …        | …        | …          | …          | … | …
Table 68 - Example CSV with the information of the Learning Materials


7. References

[1] FIWARE official site - https://www.fiware.org/


[2] OpenStack official site - https://www.openstack.org/
[3] Deliverable 2.1 “Technical Documentation of R1: Operation and Management Module”
[4] FIWARE LAB Global IDM instance - https://account.lab.fiware.org/
[5] FIWARE Cloud Portal GE - https://cloud.lab.fiware.org/auth/login/
[6] FIWARE Cygnus Documentation - https://fiware-cygnus.readthedocs.io/en/stable/
[7] Reverse Proxy definition - https://en.wikipedia.org/wiki/Reverse_proxy
[8] NGINX Official site - https://www.nginx.com/
[9] OAuth2 - https://oauth.net/2/
[10] FIWARE AuthZForce PDP - https://authzforce-ce-fiware.readthedocs.io/en/latest/InstallationAndAdministrationGuide.html
[11] FIWARE IDM - https://fiware-idm.readthedocs.io/en/latest/
[12] FIWARE PEP Proxy - https://fiware-pep-proxy.readthedocs.io/en/latest/admin_guide/
[13] FIWARE Context Broker GE – ORION - https://fiware-orion.readthedocs.io/en/master/
[14] CKAN Official Documentation - https://docs.ckan.org/en/latest/
[15] Swagger Official Site - https://swagger.io/
[16] Apache FLUME - https://flume.apache.org/
[17] FIWARE Cosmos - https://catalogue-server.fiware.org/enablers/bigdata-analysis-cosmos
[18] FIWARE STH Comet - https://fiware-sth-comet.readthedocs.io/en/latest/
[19] XACML - https://en.wikipedia.org/wiki/XACML
[20] NodeJS - https://nodejs.org/en/
[21] PostgreSQL - https://www.postgresql.org/
[22] ExpressJS - http://expressjs.com/
[23] InfluxDB - https://www.influxdata.com/
[24] NGSI V2 API - https://orioncontextbroker.docs.apiary.io
[25] Deliverable 3.1 “Technical Documentation of R19: Biomass Production from Food/Kitchen Waste”
[26] Deliverable 3.3 “Technical Documentation of R20: Biofuels and Compost Production from Disposable Nappies”
[27] Deliverable 2.5 “Technical Documentation of R2: Collection Module”
[28] Deliverable 5.1 “Technical Documentation of the Invoicing Module for R2”
[29] Deliverable 1.4 “Preparatory Actions Report”
[30] Deliverable 1.5 “Deployment and Integration Report”
[31] Deliverable 1.6 “Preliminary Test Report”
[32] Deliverable 1.7 “Full Test Report”


Annex A - Docker manual


Docker is an open platform for developing, shipping, and running applications. Docker
enables you to separate your applications from your infrastructure, so you can deliver
software quickly. With Docker, you can manage your infrastructure in the same ways you
manage your applications. By taking advantage of Docker’s methodologies for shipping,
testing, and deploying code quickly, you can significantly reduce the delay between writing
code and running it in production.

Usage of Docker

Docker streamlines the development lifecycle by allowing developers to work in standardized
environments using local containers which provide your applications and services. Containers
are great for continuous integration and continuous delivery (CI/CD) workflows.

Consider the following example scenario:

• Developers write code locally and share their work with their colleagues using
Docker containers.
• They use Docker to push their applications into a test environment and execute
automated and manual tests.
• When developers find bugs, they can fix them in a development environment and
redeploy them to the test environment for testing and validation.
• When testing is complete, getting the fix to the customer is as simple as pushing
the updated image to the production environment.

Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers
can run on a developer’s local laptop, on physical or virtual machines in a data center, on
cloud providers, or in a mixture of environments.

Docker’s portability and lightweight nature also make it easy to dynamically manage
workloads, scaling up or tearing down applications and services as business needs dictate,
in near real time.

Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based
virtual machines, so you can use more of your compute capacity to achieve your
business goals. Docker is perfect for high-density environments and for small and medium
deployments where you need to do more with fewer resources.

Installation of Docker

The first step is to determine whether any updates are available for the installed packages:

sudo yum check-update

Then, running the following command will add the official Docker repository, download the latest
version of Docker, and install it:

sudo curl -fsSL https://get.docker.com/ | sh

After the installation is complete, start the Docker daemon:

sudo systemctl start docker

Verify that Docker is running:

sudo systemctl status docker

The output of this command should show that the service is active and running.

Lastly, make sure it starts at every server reboot:

sudo systemctl enable docker

Installing Docker now gives you not just the Docker service (daemon) but also the docker
command line utility, or the Docker client.

Executing Docker Command without sudo (Optional)

By default, running the docker command requires root privileges; that is, you have to prefix
the command with sudo. It can also be run by a user in the docker group, which is automatically
created during the installation of Docker. If you attempt to run the docker command without
prefixing it with sudo or without being in the docker group, you will get a permission-denied error.

To avoid typing sudo when running the docker command, add your username to the docker group:

sudo usermod -aG docker $USER

Log out of the virtual machine and back in for the setting to take effect. If there is a need to add
some other user that is not logged in, use the following command:

sudo usermod -aG docker username

Docker commands

With Docker installed and working, it is time to become familiar with the command line utility.
Using docker consists of passing it a chain of options and commands followed by
arguments. The syntax takes this form:

docker [option] [command] [arguments]

To see all available subcommands, type:

docker

Complete list of all commands will be displayed

To view more information about a specific command, type:

docker <subcommand> --help

To see system-wide information, type:

docker info

Docker images

Docker containers are run from Docker images. By default, Docker pulls these images from Docker
Hub, a Docker registry managed by Docker, the company behind the Docker project.
Anybody can build and host their Docker images on Docker Hub, so most applications and
Linux distributions needed to run Docker containers have images hosted on Docker Hub.

To check whether you can access and download images from Docker Hub, type:

docker run hello-world

The output should indicate that Docker is working correctly:

There is an option to search for images on Docker Hub by using the docker command with
the search subcommand. For example, to search for the waste4think image, type:

docker search waste4think

This will crawl Docker Hub and return a listing of all images whose name matches the search
string.

In the OFFICIAL column, OK indicates an image built and supported by the company behind
the project.

Lifecycle commands, from pulling an image to running it:

docker pull [image_name]

docker run [image_name]

To see the images that have been downloaded:

docker images

The output should look like the following (it will differ depending on the images that have been
downloaded):

Listing and showing Docker containers

After using Docker for a while, there will be many active (running) and inactive containers on
the machine.

To see active ones:

docker ps

To see all containers active and inactive, pass the -a switch:

docker ps -a

To see the latest container that was created, pass the -l switch:

docker ps -l

To stop a running or active container, type (the container name/ID can be obtained using the
docker ps command):

docker stop <container-name-or-id>

To remove a container or an image:

docker rm <containerId>

docker rmi <imageId>

Docker-Compose Service

Compose is a tool for defining and running multi-container Docker applications. Compose
uses a YAML file to configure the application's services. Then, with a single command, you create
and start all the services from your configuration.

Compose works in all environments: production, staging, development, and testing, as well as CI
workflows.

Using Compose is a three-step process:

• Define your app's environment with a Dockerfile so it can be reproduced anywhere.
• Define the services that make up the app in docker-compose.yml so they can be
run together in an isolated environment.
• Run docker-compose up and Compose starts and runs the entire app.

An example docker-compose.yml file looks like this:
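A minimal illustrative compose file is shown below; the service names, images, ports, and paths are placeholders chosen for this sketch, not the actual Waste4Think configuration:

```yaml
version: '3'
services:
  orion:                       # FIWARE Orion Context Broker
    image: fiware/orion
    ports:
      - "1026:1026"            # port on which the service is exposed
    command: -dbhost mongo
  mongo:
    image: mongo:3.6
    volumes:
      - ./data/mongo:/data/db  # persist MongoDB data on the host
```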

Key compose parts:

version: '3'

- There are three versions of the docker-compose file format: 1, 2, and 3. The version
determines which set of features or limitations is available to the user; there are
differences between versions.

services

- Different parts of the application are called services; they are nothing but containers
that will be created with specific commands applied to them.

ports

- The ports on which the service will be exposed.


volumes

- The main use of volumes is persisting data on the host machine; once a Docker container is
removed, it loses all its saved data. Volumes allow us to map a location on the local
drive to the container's drive, or in some cases to pass a config file if a service requires
one.


Annex B – FIWARE Orion subscriptions


Table 69 shows an example of the Orion subscriptions which notify entities to the Historical Module
based on InfluxDB.

[{
"id": "5b51b05464493217476ed4ca",
"description": "A subscription to get info about DepositPoint
entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "DepositPoint"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:12.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:12.00Z"
},
"throttling": 5
},
{
"id": "5b51b06564493217476ed4cb",
"description": "A subscription to get info about DepositPointType
entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "DepositPointType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:29.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:30.00Z"
},
"throttling": 5
},
{
"id": "5b51b07764493217476ed4cc",
"description": "A subscription to get info about Vehicle entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "Vehicle"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:50:47.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:50:47.00Z"
},
"throttling": 5
},
{
"id": "5b51b08f64493217476ed4cd",
"description": "A subscription to get info about VehicleType entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "VehicleType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T09:51:11.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
},
"lastSuccess": "2018-07-20T09:51:11.00Z"
},
"throttling": 5
},
{
"id": "5b51bed164493217476ed4ce",
"description": "A subscription to get info about SortingType entity",
"expires": "2040-01-01T14:00:00.00Z",
"status": "active",
"subject": {
"entities": [{
"idPattern": ".*",
"type": "SortingType"
}],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 1,
"lastNotification": "2018-07-20T10:52:01.00Z",
"attrs": [],
"attrsFormat": "normalized",
"http": {
"url": "http://history.deusto.io/api/fiware/scenario/deusto:w4t:zamudio:real/entities"
}
},
"throttling": 5
}]

Table 69 - Example FIWARE ORION subscriptions to InfluxDB
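Subscription documents of this shape can also be generated programmatically; the helper below is an illustrative sketch (an assumption, not project code) that reproduces the structure of Table 69 for a given entity type:

```python
import json

# Illustrative helper that builds an Orion subscription like those in Table 69.
def history_subscription(entity_type,
                         url="http://history.deusto.io/api/fiware/scenario/"
                             "deusto:w4t:zamudio:real/entities"):
    return {
        "description": f"A subscription to get info about {entity_type} entity",
        "expires": "2040-01-01T14:00:00.00Z",
        "status": "active",
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": []},  # empty attrs: notify on any change
        },
        "notification": {
            "attrs": [],
            "attrsFormat": "normalized",
            "http": {"url": url},
        },
        "throttling": 5,
    }

payload = json.dumps(history_subscription("DepositPoint"))
# POST `payload` to {orion}/v2/subscriptions with the usual Fiware-Service,
# Fiware-ServicePath and X-Auth-Token headers to register it.
```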

Table 70 shows an example of an Orion subscription which notifies entities to the CEP system.

curl -v http://backend.waste4think.eu/v2/subscriptions -s -S \
--header 'Content-Type: application/json' \
-d @- <<EOF
{
"description": "A subscription to get information about
DepositPointIsles",
"subject": {
"entities": [
{
"id": "#",
"type": "DepositPointIsle"
}
],
"condition": {
"attrs": [
"location", "address", "name", "description", "refDepositPoint",
"areaServed", "dateModified", "dateCreated"
]
}
},
"notification": {
"http": {
"url": "http://sandwich.geoworldsim.com/api/app/:SANDWICH_ID/execute"
},
"attrs": [
"location", "address", "name", "description", "refDepositPoint",
"areaServed", "dateModified", "dateCreated"
]
},
"throttling": 5
}
EOF

Table 70 - Example FIWARE ORION subscription to CEP

Table 71 shows an example of Orion subscriptions which notify entities to FIWARE Cygnus
connector.

{
"description": "A subscription to get info about DepositPointIsle
entity",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "DepositPointIsle"
}
],
"condition": {
"attrs": [

]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [

]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}

{
"description": "A subscription to get info about DepositPoint entity",
"subject": {

"entities": [
{
"idPattern": ".*",
"type": "DepositPoint"
}
],
"condition": {
"attrs": [

]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [

]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}

{
"description": "A subscription to get info about DepositPointType
entity",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "DepositPointType"
}
],
"condition": {
"attrs": [

]
}
},
"notification": {
"http": {
"url": "http://192.168.229.62:5050/notify"
},
"attrsFormat": "legacy",
"attrs": [

]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}

Table 71 - Example FIWARE ORION subscriptions to FIWARE Cygnus


Annex C – NGSIConnectorAPI service APIs


GET /entities

Get list of entities specifying Service and ServicePath (see Figure 52).

Figure 52 - GET /entities

POST /entities

Upload entities in the back-end from user supplied file (csv, json) (see Figure 53).


Figure 53 - POST /entities

GET /entities/{entityID}

Get list of entities specifying entityID, Service and ServicePath (see Figure 54).


Figure 54 - GET/entities/{entityID}

GET /entities/type/{typeID}

Get list of entities specifying typeID, Service and ServicePath (see Figure 55).


Figure 55 - GET /entities/type/{typeID}

POST /entities/update

Update entities from user supplied file (csv and json) (see Figure 56).


Figure 56 - POST /entities/update

GET /rules

Show all rules available in the NGSIConnectorAPI defined in the Data Model (see Figure
57).


Figure 57 - GET /rules

GET /rules/{ruleID}

Get single rule (see Figure 58).

Figure 58 - GET /rules/{ruleID}
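The curl example in Table 60 translates to the following client sketch for the endpoints above; the host, token, service path and helper names are placeholders taken from the earlier examples, not project code:

```python
# Illustrative client sketch for the NGSIConnectorAPI endpoints listed above.
BASE = "http://backend.waste4think.eu/connector"

def connector_headers(token, service="waste4think",
                      service_path="/deusto/w4t/seveso/real"):
    """Headers expected by every NGSIConnectorAPI request."""
    return {
        "Fiware-Service": service,
        "Fiware-ServicePath": service_path,
        "X-Auth-Token": token,
    }

def entities_by_type_url(type_id):
    """URL for GET /entities/type/{typeID}."""
    return f"{BASE}/entities/type/{type_id}"

# With an HTTP client such as `requests`, a CSV upload would then be:
# with open("waste.csv", "rb") as f:
#     requests.post(f"{BASE}/entities", headers=connector_headers("<token>"),
#                   files={"userFile": f})
```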


Annex D – Cygnus Name Mappings


Table 72 presents the list of rules defined in the “name_mappings” configuration file of
Cygnus.

{
"serviceMappings": [
{
"originalService": "waste4think",
"servicePathMappings": [
{
"originalServicePath": "/deusto/w4t/cascais/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointIsle",
"newEntityId": "",
"newEntityType": "depositpointisle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
}
]
},
{
"originalServicePath": "/deusto/w4t/seveso/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "VehicleType",
"newEntityId": "",
"newEntityType": "vehicletype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "Vehicle",
"newEntityId": "",

"newEntityType": "vehicle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
}
]
},
{
"originalServicePath": "/deusto/w4t/zamudio/real",
"entityMappings": [
{
"originalEntityId": ".*",
"originalEntityType": "SortingType",
"newEntityId": "",
"newEntityType": "sortingtype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "VehicleType",
"newEntityId": "",
"newEntityType": "vehicletype",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "Vehicle",
"newEntityId": "",
"newEntityType": "vehicle",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPoint",
"newEntityId": "",
"newEntityType": "depositpoint",
"attributeMappings": []
},
{
"originalEntityId": ".*",
"originalEntityType": "DepositPointType",
"newEntityId": "",
"newEntityType": "depositpointtype",
"attributeMappings": []
}
]
}
]
}
]
}

Table 72 - W4T Cygnus Name Mappings file


Comments from External Reviewers


External Reviewer #1
18/10/2018
Issue                                                                                                        | Yes | No | Score (1=low to 5=high) | Comments
Is the format of the document correct?                                                                       | X   |    | 4                       |
Does the format of the document meet the objectives of the work done?                                        | X   |    | 4                       |
Does the index of the document collect precisely the tasks and issues that need to be reported?              | X   |    | 4                       |
Is the content of the document clear and well described?                                                     | X   |    | 4                       |
Does the content of each section describe the advance done during the task development?                      | X   |    | 3                       |
Does the content have sufficient technical description to make clear the research and development performed? | X   |    | 3                       |
Are all the figures and tables numerated and described?                                                      | X   |    | 4                       |
Are the indexes correct?                                                                                     | X   |    | 4                       |
Is the written English correct?                                                                              | X   |    | 4                       |
Main technical terms are correctly referenced?                                                               | X   |    | 4                       |
Glossary present in the document?                                                                            | X   |    | 4                       |

Jon Arambarri
jarambarri@virtualwaregroup.com
Virtualware labs


Comments from External Reviewers


External Reviewer #2
19/10/2018
Issue                                                                                                        | Yes | No | Score (1=low to 5=high) | Comments
Is the format of the document correct?                                                                       | X   |    | 5                       |
Does the format of the document meet the objectives of the work done?                                        | X   |    | 5                       |
Does the index of the document collect precisely the tasks and issues that need to be reported?              | X   |    | 5                       |
Is the content of the document clear and well described?                                                     | X   |    | 5                       |
Does the content of each section describe the advance done during the task development?                      | X   |    | 5                       |
Does the content have sufficient technical description to make clear the research and development performed? | X   |    | 5                       |
Are all the figures and tables numerated and described?                                                      | X   |    | 5                       |
Are the indexes correct?                                                                                     | X   |    | 5                       |
Is the written English correct?                                                                              | X   |    | 4                       | Minor corrections
Main technical terms are correctly referenced?                                                               | X   |    | 5                       |
Glossary present in the document?                                                                            | X   |    | 5                       |

Michael Kornaros

Email: kornaros@chemeng.upatras.gr

Partner: UPATRAS


Comments from External Reviewers


External Reviewer #3
24-10-2018
Issue                                                                                                        | Yes | No | Score (1=low to 5=high) | Comments
Is the format of the document correct?                                                                       | X   |    | 5                       |
Does the format of the document meet the objectives of the work done?                                        | X   |    | 5                       |
Does the index of the document collect precisely the tasks and issues that need to be reported?              | X   |    | 5                       |
Is the content of the document clear and well described?                                                     | X   |    | 4                       |
Does the content of each section describe the advance done during the task development?                      | X   |    | 4                       |
Does the content have sufficient technical description to make clear the research and development performed? | X   |    | 5                       |
Are all the figures and tables numerated and described?                                                      | X   |    | 5                       |
Are the indexes correct?                                                                                     | X   |    | 5                       |
Is the written English correct?                                                                              | X   |    | 5                       |
Main technical terms are correctly referenced?                                                               | X   |    | 5                       |
Glossary present in the document?                                                                            | X   |    | 5                       |

Andreas Jalsøe
aj@seriousgames.net
Serious Games Interactive
