| Branch | Status |
|---|---|
| master | |
- Table of contents
- Description
- Install
- External Resources Dependencies
- Docker
- Unit Testing
- Download WMS image legends
- Python Code Styling
- Updating Python Packages
- OTEL
- Varia
- Local Smoke test for Open Telemetry
## Description

Next generation services https://api3.geo.admin.ch for https://map.geo.admin.ch.

In mid-August 2022 the project was migrated to Python 3, Docker and eu-central-1.
## Install

The required environment variables are set in `.env.default`. They can be
adapted, or you can use a copy of `.env.default`, e.g. `.env.mine`, and use that
instead.

Install the python virtual environment (still `virtualenv` at this point):

```bash
make setup
```

Build the Pylons settings files and run the local waitress server:

```bash
summon -p ssm make serve
```

You may want to customize the variables. Copy the file `.env.default` as `.env.mine`,
change the variables you want and use them with:

```bash
summon -p ssm make ENV_FILE=.env.mine serve
```

📖 You need some external resources to run the service, see External Resources Dependencies.
## External Resources Dependencies

To run the service locally you need access to the following external resources:

- PostgreSQL database `pg-geodata-replica.bgdi.ch`
- S3 bucket `service-mf-chsdi3-grid-geojsons-dev-swisstopo` (NOTE: this bucket is only required by some of the identify endpoints)

You can use SSH port forwarding to reach `pg-geodata-replica.bgdi.ch` through the jump host:

```bash
ssh ssh0a.prod.bgdi.ch -L 5432:pg-geodata-replica.bgdi.ch:5432
```

Then set the `DBHOST` environment variable to `localhost` (you can do this in your own environment file, e.g. `.env.mine`, and run the make file as follows: `summon -p ssm make ENV_FILE=.env.mine serve`).

Or, simpler, when you use the ssh_config provided by infra-ansible-bgdi-dev:

```bash
ssh jumphost-pg-geodata-replica
```

To access the S3 bucket, you can set your `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
Alternatively, if you are using zsh you can use the aws plugin (see the oh-my-zsh aws plugin) with the following command:

```bash
acp swisstopo-bgdi-dev
```

This command will automatically use your AWS profile `swisstopo-bgdi-dev` for any AWS service connection.
## Docker

Build and run the Docker image locally:

```bash
make dockerbuild
summon -p ssm make dockerrun
```

📖 You need some external resources to run the service, see External Resources Dependencies.

First log in to the AWS ECR registry with:

```bash
make dockerlogin
```

Afterwards you can push the locally built image to ECR with:

```bash
make dockerpush
```

## Unit Testing

Prerequisites:

- PostgreSQL DB `pg-geodata-replica.bgdi.ch` must be reachable
- Access to AWS services
  - Read access to S3 bucket `service-mf-chsdi3-grid-geojsons-dev-swisstopo` (can be disabled with `S3_TESTS=0`)
  - Read access to S3 bucket
See External Resources Dependencies for more information on those prerequisites.
To run the tests enter:

```bash
summon -p ssm make test
```

Or if you use your own environment file:

```bash
summon -p ssm make ENV_FILE=.env.mine test
```

To skip the tests that require S3 access:

```bash
summon -p ssm make S3_TESTS=0 test
```

## Download WMS image legends

In order to download all images of a layer in the correct format and with the correct dimensions, simply use:

```bash
make legends BODID=ch.layername WMSHOST=wms.geo.admin.ch
```

Alternatively, you can also download a WMS legend for a specific scale:

```bash
make legends BODID=ch.layername WMSHOST=wms.geo.admin.ch WMSSCALELEGEND=1000
```

You will need the optipng tool in order to download the legends; use `sudo apt install optipng` to install it.
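Under the hood a legend download is essentially one WMS `GetLegendGraphic` request per layer. A minimal sketch of such a request URL (the `legend_url` helper and its parameters are hypothetical, not the repo's implementation):

```python
# Hypothetical sketch: a WMS legend download boils down to a standard
# GetLegendGraphic request (this is NOT the repo's actual code).
from urllib.parse import urlencode

def legend_url(wms_host, layer, scale=None):
    """Build a GetLegendGraphic URL for a layer, with an optional scale."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetLegendGraphic",
        "FORMAT": "image/png",
        "LAYER": layer,
    }
    if scale is not None:
        # assumption: scale-dependent legends use the SCALE parameter
        params["SCALE"] = scale
    return f"https://{wms_host}/?{urlencode(params)}"
```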
## Python Code Styling

We are currently using the flake8 convention for Python code. You can find more information about our code styling here:

You can find additional information about autopep8 here:

To check the code styling:

```bash
make lint
```

To autocorrect most linting mistakes:

```bash
make autolint
```

## Updating Python Packages

All packages used in production are pinned to a major version. Automatically updating these packages will use the latest minor (or patch) version available. Packages used for development, on the other hand, are not pinned unless they need to be used with a specific version of a production package (for example, boto3-stubs for boto3).
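A hypothetical `Pipfile` excerpt illustrating that convention (the package names and versions are examples only, not the project's actual pins):

```toml
[packages]
# production: pinned to a major version, so updates stay within 1.x
boto3 = "~=1.34"

[dev-packages]
# development: unpinned...
flake8 = "*"
# ...unless it must track a production package's version
boto3-stubs = "~=1.34"
```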
To update the packages to the latest minor/compatible versions, run:

```bash
pipenv update --dev
```

To see which major/incompatible releases would be available, run:

```bash
pipenv update --dev --outdated
```

To update a package to a new major release, run, e.g.:

```bash
pipenv install logging-utilities~=5.0
```

## OTEL

OpenTelemetry (OTEL) instrumentation can be done in many different ways, from fully automated zero-code instrumentation (otel-operator) to purely manual instrumentation.
We use the so-called OTEL programmatic instrumentation approach, where we import the specific instrumentation libraries and initialize them with the `instrument()` method of each library.
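As a rough sketch of that approach (this mirrors, but is not, the repo's actual wiring; the instrumentor classes come from the opentelemetry-instrumentation-requests and opentelemetry-instrumentation-sqlalchemy packages, and the flags match the env variables in the table below):

```python
# Hypothetical sketch of programmatic OTEL instrumentation: each library is
# imported and initialized via its instrument() method, gated by an env flag.
import os

def instrument_app():
    """Enable each instrumentation library only if its env flag is "true"."""
    if os.getenv("OTEL_ENABLE_REQUESTS", "false") == "true":
        from opentelemetry.instrumentation.requests import RequestsInstrumentor
        RequestsInstrumentor().instrument()
    if os.getenv("OTEL_ENABLE_SQLALCHEMY", "false") == "true":
        from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
        SQLAlchemyInstrumentor().instrument()
```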
The following env variables can be used to configure OTEL:

| Env Variable | Default | Description |
|---|---|---|
| OTEL_SDK_DISABLED | false | If set to "true", OTEL is disabled. See: https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#general-sdk-configuration |
| OTEL_ENABLE_SQLALCHEMY | false | Whether opentelemetry-instrumentation-sqlalchemy should be enabled. |
| OTEL_ENABLE_REQUESTS | false | Whether opentelemetry-instrumentation-requests should be enabled. |
| OTEL_ENABLE_LOGGING | false | Whether opentelemetry-instrumentation-logging should be enabled. |
| OTEL_EXPERIMENTAL_RESOURCE_DETECTORS | | OTEL resource detectors, adding resource attributes to the OTEL output, e.g. `os,process` |
| OTEL_EXPORTER_OTLP_ENDPOINT | http://localhost:4317 | The OTEL exporter endpoint, e.g. `opentelemetry-kube-stack-gateway-collector.opentelemetry-operator-system:4317` |
| OTEL_EXPORTER_OTLP_HEADERS | | A list of key=value headers added to outgoing data. See https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/#header-configuration |
| OTEL_EXPORTER_OTLP_INSECURE | false | If set to "true", certificate checking is disabled for the exporter connection. |
| OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST | | A comma-separated list of request headers added to outgoing data. Regex supported. Use `.*` for all headers. |
| OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE | | A comma-separated list of response headers added to outgoing data. Regex supported. Use `.*` for all headers. |
| OTEL_PYTHON_EXCLUDED_URLS | | A comma-separated list of URLs to exclude, e.g. `checker,static/*` |
| OTEL_RESOURCE_ATTRIBUTES | | A comma-separated list of custom OTEL resource attributes. Must contain at least the service name: `service.name=mf-chsdi3` |
| OTEL_TRACES_SAMPLER | parentbased_always_on | Sampler to be used, see https://opentelemetry-python.readthedocs.io/en/latest/sdk/trace.sampling.html#module-opentelemetry.sdk.trace.sampling |
| OTEL_TRACES_SAMPLER_ARG | | Optional additional arguments for the sampler. |
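For example, a local development configuration could look like this (the values are illustrative, drawn from the defaults and examples in the table above):

```bash
# Illustrative OTEL configuration for local development
export OTEL_SDK_DISABLED=false
export OTEL_ENABLE_REQUESTS=true
export OTEL_ENABLE_SQLALCHEMY=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_RESOURCE_ATTRIBUTES=service.name=mf-chsdi3
export OTEL_TRACES_SAMPLER=parentbased_always_on
export OTEL_PYTHON_EXCLUDED_URLS=checker
```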
## Varia

Pretty-print a vector style JSON file with jsonlint-cli:

```bash
export PATH=$(npm bin):$PATH
jsonlint-cli --pretty temp.json > chsdi/static/vectorStyles/ch.meteoschweiz.messwerte-foehn-10min.json
```

## Local Smoke test for Open Telemetry

- (optional) `aws --profile swisstopo-bgdi-dev sso login`
- `ssh jumphost-pg-geodata-replica` to set up the SSH tunnel to the DB
- `docker compose up` to run a local Jaeger server
- `summon -p ssm make dockerrun`

Open in a browser or curl: