Repository for the FIAP Tech Challenge 4, focused on developing a backend microservice for managing orders in a fast-food restaurant after a customer's payment is approved.
Tech Challenge 4 specifications can be found here. A YouTube video explaining this project can be found here.
Note
Tech Challenge 1, 2 and 3 repositories can be found on the main page of the FIAP-SOAT-G20 organization here
This project is part of a larger system that includes:
- Customer Service - Microservice responsible for authentication and customer management
- Order Service - Microservice responsible for managing new orders, products, and categories
- Payment Service - Microservice responsible for managing order checkout and communicating with the external payment service
- Infrastructure - Terraform - Project responsible for creating the infrastructure that supports all TC4 microservices
- Infrastructure - Deploy - Project responsible for publishing the microservices to Kubernetes
- Customer (actor): Actor responsible for initiating the purchasing process
- Cook (actor): Actor responsible for preparing the customer's order
- Attendant (actor): Actor responsible for interacting with the customer, providing support for the order
- Identification method: Format in which the customer is identified on the platform: via CPF or anonymous.
- Identification: Customer identification on the platform
- Authorization: Grants the customer permission to perform operations on the platform, such as placing an order or changing registration information
- Order: Represents all items selected by the customer in the store
- Order Status: Represents the stage of order preparation after payment is confirmed.
```
.
├── bin
├── cmd
│   ├── server
│   ├── worker
│   └── consumer
├── docs
├── internal
│   ├── adapter
│   │   ├── controller
│   │   ├── gateway
│   │   └── presenter
│   ├── core
│   │   ├── domain
│   │   │   ├── entity
│   │   │   └── value_object
│   │   ├── dto
│   │   ├── port
│   │   │   └── mocks
│   │   └── usecase
│   ├── infrastructure
│   │   ├── api
│   │   │   └── response
│   │   ├── aws
│   │   │   └── sqs
│   │   ├── config
│   │   ├── database
│   │   │   └── migrations
│   │   ├── datasource
│   │   │   └── model
│   │   ├── handler
│   │   │   ├── request
│   │   │   └── response
│   │   ├── httpclient
│   │   ├── logger
│   │   ├── middleware
│   │   ├── route
│   │   ├── server
│   │   └── setup
│   ├── util
│   └── testdata
│       ├── common
│       ├── customer
│       ├── order
│       ├── order_product
│       ├── product
│       └── staff
├── mockserver
├── scripts
└── tests
    └── features
```

Project Structure Explanation
- domain/: Central business entities and rules.
- dto/: Data transfer objects.
- port/: Interfaces that define contracts between layers, ensuring independence.
- usecase/: Application use cases.
- controller/: Coordinates the flow of data between use cases and infrastructure.
- presenter/: Formats data for presentation.
- gateway/: Implements access to data from external sources (databases, APIs, etc.).
- api/: Communication with external API clients.
- aws/: Integration with AWS services, such as Simple Queue Service (SQS).
- config/: Application configuration management.
- database/: Configuration and connection to the database.
- datasource/: Concrete implementations of data sources.
- handler/: Handling of HTTP requests.
- httpclient/: HTTP client for external requests.
- logger/: Structured logger for detailed logs.
- middleware/: HTTP middlewares for handling requests.
- route/: Definition of API routes.
- server/: Initialization of the HTTP server.
- setup/: Single entry point that initializes all core components used by the HTTP server and the SQS worker/consumer.
- Clean Architecture structure: The project was structured using the Clean Architecture pattern, which aims to separate the application into layers, making it easier to maintain and test. The project is divided into three layers: Core, Adapter, and Infrastructure.
- Presenter: The presenter (from Adapter layer) was created to format the data to be returned to the client. This layer is responsible for transforming the data into the desired format, such as JSON, XML, etc. Also, it is responsible for handling errors and returning the appropriate HTTP status code.
- Use Case: The use case (from Core layer) was created to define the business rules of the application. This layer is responsible for orchestrating the flow of data between the entities and the data sources.
- Middleware to handle errors: A middleware was created to handle errors and return the appropriate HTTP status code. This middleware is responsible for catching errors and returning the appropriate response to the client.
- Structured Logger: A structured logger was created to provide detailed logs. This logger is responsible for logging information about the application, such as requests, responses, errors, etc.
- Database Connection: The database connection was created using GORM, a popular ORM library for Go. This library provides an easy way to interact with the database and perform CRUD operations.
- Database Migrations: Database migrations were created to manage the database schema. This allows us to version control the database schema and apply changes to the database in a structured way.
- HTTP Server: The HTTP server was created using the Gin framework, a lightweight web framework for Go. This framework provides a fast and easy way to create web applications in Go.
- Dockerfile: small image with multi-stage docker build, and multi-platform build (Cross-Compilation)
- Makefile: to simplify the build and run commands
- Clean architecture
- PostgreSQL database
- Conventional commits
more
- Unit tests (testify)
- Tests Suite (testify)
- Code coverage report (go tool cover)
- BDD (godog)
- AWS integration
- Swagger documentation
- Postman collection
- Feature branch workflow
- Live reload (air)
- Pagination
- Health Check (liveness, readiness)
- Lint (golangci-lint)
- Vulnerability check (govulncheck)
- Mocks (gomock)
- Environment variables
- Graceful shutdown
- GitHub Actions (CI/CD)
- GitHub Container Registry (GHCR)
- Structured logs (slog)
- Database migrations (golang-migrate)
- API versioning
- Dev Container (VS Code)
- Semantic Versioning
- Golden Files
- Fixtures
Warning
You need to have Go version 1.24 or higher installed on your machine to run the application locally
```sh
git clone https://github.com/FIAP-SOAT-G20/tc4-kitchen-service.git
cd tc4-kitchen-service
```

Set the environment variables and build with Docker Compose:

```sh
cp .env.example .env
make compose-build
```

Important
After running the application, a mock server container will be created to simulate the payment gateway.
When you create a new payment (with POST payments/:order_id/checkout), the order status will be updated from OPEN to PENDING;
the mock server will then call the webhook POST payments/callback,
and the order status will be updated from PENDING to RECEIVED.
You can verify mock server logs by running docker logs mockserver.10soat-g22.dev.
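The status flow described above (OPEN → PENDING → RECEIVED) can be modeled as a small state machine. The status names mirror this README; the service's full set of statuses and rules may be larger:

```go
package main

import "fmt"

// Status represents the order lifecycle stages named in this README.
type Status string

const (
	Open     Status = "OPEN"
	Pending  Status = "PENDING"
	Received Status = "RECEIVED"
)

// transitions lists which status may follow which; anything else is rejected.
var transitions = map[Status]Status{
	Open:    Pending,  // checkout created
	Pending: Received, // payment callback confirmed
}

// Next validates and performs a status transition.
func Next(current Status) (Status, error) {
	next, ok := transitions[current]
	if !ok {
		return current, fmt.Errorf("no transition from %s", current)
	}
	return next, nil
}

func main() {
	s := Open
	s, _ = Next(s) // checkout → PENDING
	s, _ = Next(s) // webhook  → RECEIVED
	fmt.Println(s) // prints RECEIVED
}
```

Encoding the allowed transitions in one table keeps illegal jumps (e.g. OPEN straight to RECEIVED) impossible by construction.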
Important
Order and Product routes are protected by authentication; you need to use the API Gateway connected to the Lambda function to authenticate the user. Lambda repository: here
Tip
We have created a step-by-step guide to test the application, you can find it here.
API Documentation will be available at:
- Swagger:
- Postman collection: here
- Rest Client: here
```sh
make compose-up
```

Tip
To stop the application, run compose-down
To remove the application, run compose-clean
Note
The application will be available at http://localhost:8080 (e.g. http://localhost:8080/api/v1/health)
1. Install Go: https://golang.org/doc/install
2. Clone this repository: git clone https://github.com/FIAP-SOAT-G20/tc4-kitchen-service
3. Change to the project directory: cd tc4-kitchen-service
4. Check out a development branch: make new-branch
5. Set the environment variables: cp .env.example .env
6. Install dependencies by running make install
7. Run the application by running make run-air or make run
8. Access the application at http://localhost:8080
9. Make your changes 🧑‍💻
10. Don't forget to run the tests by running make test
11. Check the coverage report by running make coverage
12. Check the lint by running make lint
13. Update the Swagger documentation by running make swagger
14. Commit your changes following the Conventional Commits standard
15. Push to the branch and open a new PR by running make pull-request
16. GitHub Actions will run the tests, lint and vulnerability check automatically
17. After the PR is approved, merge it to the main branch
18. Generate a new release tag (here) with semantic versioning
Tip
Step 7: make run will run the application locally and will build and run a PostgreSQL container using Docker Compose.
Alternatively, you can run make run-air to run the application with Air (live reload).
Tip
Step 18: When a new release tag is created, GitHub Actions will build and push the image to the
GitHub Container Registry (GHCR) from GitHub Packages;
the image will be available at ghcr.io/fiap-soat-g20/tc4-kitchen-service:latest
About semantic versioning:
if you are fixing bugs, increment the patch version (v0.0.1)
if you are adding new features, increment the minor version (v0.1.0)
if you are changing the API, increment the major version (v1.0.0)
Our project implements a comprehensive testing strategy that ensures code quality and reliability. We use multiple testing approaches to cover different aspects of the application.
The project maintains a high test coverage of 82.9%, demonstrating our commitment to code quality and reliability.
- Framework: Testify
- Coverage: Core business logic, use cases, and adapters
- Command: make test or make coverage
We use the Godog framework to implement BDD tests that ensure our application behaves correctly from a user perspective.
```sh
# Run all tests
make test

# Run tests with coverage
make coverage

# Run BDD tests
make test-bdd
```

```
tests/
├── features/                 # BDD feature files
│   └── order.feature
└── order_step_definition.go
```
```
internal/
├── adapter/
│   ├── controller/
│   │   └── *_test.go
│   ├── gateway/
│   │   └── *_test.go
│   └── presenter/
│       └── *_test.go
├── core/
│   ├── domain/
│   │   ├── entity/
│   │   │   └── *_test.go
│   │   └── value_object/
│   │       └── *_test.go
│   └── usecase/
│       └── *_test.go
└── infrastructure/
    ├── handler/
    │   └── *_test.go
    ├── database/
    │   └── *_test.go
    └── datasource/
        └── *_test.go
```
- Mocking: Using gomock for interface mocking
- Fixtures: Test data fixtures for consistent testing
- Golden Files: Expected output validation
- Test Suites: Organized test execution
- Coverage Reports: Detailed coverage analysis