Before setting up this repo, please read https://github.com/acapela/onboarding to get a better idea about our values and practices related to creating software.
Now, please head to the Getting started guide in order to set up this repo on your machine.
- Node
  - Node version 16+ is required
  - You can use NVM to manage different Node versions on your system
- yarn
  - To install yarn, run `npm install --global yarn`
- Docker
  - Make sure that Docker is properly installed by running `docker compose version` in your console
Before running the project, you need a proper config.
In the root of this repo, copy .env.sample to .env.
After copying, your env is mostly ready for dev work. Ask other team members for missing values, such as Google OAuth secrets.
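That copy step, run from the repo root, is just:

```shell
# create your local env file from the sample
cp .env.sample .env
```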
After those steps, we should be good to go!
First, install all dependencies by running `yarn install`.

To start all the required services in the correct order, you can either run `yarn dev`, or run the commands separately. For the first run, you should run the commands separately to make sure every step works as expected. Here are the most important ones and what they mean:
| Command | Description |
|---|---|
| `yarn docker:up` | Starts a Docker container with PostgreSQL and Hasura running |
| `yarn hasura:update` | Runs migrations and metadata updates on Hasura |
| `yarn generate` | Updates Prisma & GraphQL schemas (needs to be re-run on db schema changes) |
| `yarn backend:dev` | Starts the backend server |
| `yarn frontend:dev` | Starts the Next.js server |
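For a first run, the commands above amount to this sequence (a sketch; the two dev servers are long-running, so run them in separate terminals):

```shell
# start PostgreSQL + Hasura in Docker
yarn docker:up
# apply Hasura migrations and metadata updates
yarn hasura:update
# regenerate Prisma & GraphQL schemas
yarn generate
# start the backend and frontend dev servers (separate terminals)
yarn backend:dev
yarn frontend:dev
```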
After starting all the services, you can go to http://localhost:3000, and you should see Acapela running on your machine.
If you want to see all available commands from the root `package.json`, you can run `yarn start`.
We use Hasura to generate a GraphQL API with permissions for our PostgreSQL database.
To modify the database schema, access rules or other configurations, use the Hasura console (run `yarn hasura:console` to open it).
This automatically creates migrations. You can squash the migrations, using the Hasura CLI, into a single logical migration.
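For example, squashing with the Hasura CLI might look like this (the `--from` version and migration name are placeholders; flags may differ slightly between CLI versions):

```shell
# squash all console-generated migrations since the given version into one named migration
hasura migrate squash --from 1621456789012 --name "add_topics_table"
```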
We use ngrok to expose our backend to the internet. Get an invite into our premium account from one of our teammates, then navigate to https://dashboard.ngrok.com/get-started/your-authtoken
Add the authtoken from the ngrok dashboard into your `.env` as `NGROK_AUTH_TOKEN`, along with a `NGROK_SUBDOMAIN` of your choosing. The domain needs to be unique, and only used on one machine at a time, otherwise the tunnel will crash on startup.
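The resulting `.env` entries would look something like this (the subdomain is just an example; pick your own unique one):

```
NGROK_AUTH_TOKEN=<token-from-the-ngrok-dashboard>
NGROK_SUBDOMAIN=gregapela
```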
Get yourself an invite to our Stripe account from one of our team members. Otherwise, you can also create an account for yourself.

- Copy the secret key from here into your `.env` as `STRIPE_SECRET_KEY`
- Head over here, create a webhook
  - Use your ngrok domain for the endpoint URL, e.g. `https://gregapela.ngrok.io/api/backend/v1/stripe/webhook`
  - For now we only need to listen to the `customer.subscription.created` event
- After adding the endpoint, copy its signing secret into your `.env` as `STRIPE_WEBHOOK_SECRET`
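To demystify what `STRIPE_WEBHOOK_SECRET` is for: Stripe signs each webhook payload with an HMAC, and the backend verifies it. Below is a minimal sketch of that check (illustrative only; the real backend should rely on the Stripe SDK's `webhooks.constructEvent`, and the header parsing here is simplified):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Stripe sends a `Stripe-Signature` header shaped like "t=<timestamp>,v1=<hex hmac>".
// The signature is an HMAC-SHA256 of "<timestamp>.<raw body>" keyed by the webhook secret.
function verifyStripeSignature(payload: string, header: string, secret: string): boolean {
  const parts = new Map(header.split(",").map((kv) => kv.split("=") as [string, string]));
  const timestamp = parts.get("t");
  const signature = parts.get("v1");
  if (!timestamp || !signature) return false;
  const expected = createHmac("sha256", secret).update(`${timestamp}.${payload}`).digest("hex");
  // constant-time comparison to avoid timing attacks
  return (
    expected.length === signature.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
  );
}
```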
Setting Slack up is optional in development, but if you want to work on it you have to set up your own Slack app.

- Run `yarn shared clone-slack-manifest`
  - Make sure to use the `NGROK_SUBDOMAIN` defined above
- Go to https://api.slack.com/apps?new_app=1 and choose the second option to use the manifest to create a new app
  - Make sure to give it a unique name and command name (which also should be unique within our workspace)
- Head to "Basic Information" and find the "App-Level Tokens", where you will want to click "Generate token", give it a name of your choosing and add the `authorizations:read` scope. You will need the token in the next step.
- Fill out `SLACK_CLIENT_ID`, `SLACK_CLIENT_SECRET`, `SLACK_SIGNING_SECRET`, `SLACK_APP_TOKEN` and `SLACK_SLASH_COMMAND` in your `.env`, based on your new app's info. Also add `SLACK_STATE_SECRET`, which can be any random string.
- To start using Slack's actions, open the frontend, navigate to a team's settings page and click the button for linking the team to Slack.
- Need to test working with multiple workspaces? You need to go to your Slack App Settings > Manage Distribution > Make App Public.
- Copy the manifest
  - For testing: Run `yarn shared clone-slack-manifest --testing`
  - For staging: Run `yarn shared clone-slack-manifest --staging`
  - For production: Just copy `shared/slack/manifest.json`
- Paste the manifest into the testing, staging or the production app and save it.
Similar to the Slack integration, setting up Linear is optional in development.

- Go to https://linear.app/acapela/settings/api/applications/new
- Use your own name as part of the app name to make it easily distinguishable
- Set the callback URL to `http://localhost:3000/api/backend/v1/linear/callback`
- Make sure to enable webhooks for `Issues` and `Issue comments`
- Set the webhook URL to `https://<replace-this-with-your-cool-tunnel-name>.ngrok.io/api/backend/v1/linear/webhook`
- Fill out `LINEAR_CLIENT_ID` and `LINEAR_CLIENT_SECRET` in your `.env`, based on your new app's info. Also add `LINEAR_OAUTH_SECRET`, which can be any random string.
- Go to https://developer.atlassian.com/apps/ and create an OAuth app
- Within your newly created Atlassian app
  - Go to Authorization/Configure and set the Callback URL to http://localhost:3000/api/auth/callback/atlassian
  - Go to Settings and get values for your `.env` keys `ATLASSIAN_CLIENT_ID` and `ATLASSIAN_CLIENT_SECRET`
- There's a grueling step in which you need to add a whole lot of permissions. This has to be done manually. The best source of truth is `frontend/pages/api/auth/[...nextauth].ts`. Here the `AtlassianProvider` has many scopes defined, and most of these need to be added as permissions to the app. Here's the breakdown of these scopes:
  - `offline_access` is not needed in the app. This is used to get a refresh token in your initial authorization flow
  - `read:me` can be found in Permissions -> User identity API
  - `*:jira` can be found in Permissions -> Jira API
- Set up a new GitHub app using this link and update your ngrok endpoint.
- Generate a new client secret and configure `GITHUB_CLIENT_SECRET`
- Generate a new private key
  - Encode the private key to base64 using `cat my-cool-acapela.private-key.pem | base64 -w 0`
  - Configure `GITHUB_APP_PRIVATE_KEY`
- Configure a webhook secret and set `GITHUB_WEBHOOK_SECRET`
- Make sure you also configure `GITHUB_CLIENT_ID`, `GITHUB_APP_ID` and `GITHUB_APP_NAME` correctly
- Set up an OAuth app using this link
  - Configure `GITHUB_ONBOARDING_OAUTH_CLIENT_ID` and `GITHUB_ONBOARDING_OAUTH_CLIENT_SECRET`
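At runtime the backend has to turn the base64 value back into the original PEM; a sketch of that decoding (the function name is made up for illustration):

```typescript
// GITHUB_APP_PRIVATE_KEY is assumed to hold the output of `base64 -w 0`
// run over the downloaded .pem file; decoding simply reverses that encoding.
function decodeGithubPrivateKey(base64Key: string): string {
  return Buffer.from(base64Key, "base64").toString("utf8");
}
```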
- Set up a new Asana App here.
- Find the client secret and the client id in the OAuth settings and configure `ASANA_CLIENT_SECRET` and `ASANA_CLIENT_ID`.
- Set the redirect URL to `http://localhost:3000/api/backend/v1/asana/callback`
- Set up a new ClickUp API App here.
- Set the app name and set the redirect URL to `localhost:3000`.
- Configure `CLICKUP_CLIENT_ID` and `CLICKUP_CLIENT_SECRET` with the provided client id and client secret.
Get a Google service account key JSON file, through the console or from your colleagues. Specify its path in `GOOGLE_APPLICATION_CREDENTIALS`.

- Head over to a topic within Google's PubSub (for example https://console.cloud.google.com/cloudpubsub/topic/detail/acapela-gmails-dev?project=meetnomoreapp) and create a new subscription for it (scroll down to see the button for it).
- Set `GMAIL_TOPIC_NAME` and `GMAIL_SUBSCRIPTION_NAME` in your `.env`
- Make sure the newly created subscription has all the necessary permissions, which you can do by comparing it with our dev subscription (click "Show info panel" to see the permissions)
This repository uses semantic-release for automatic releases.
That means it is necessary to stick to the Conventional Commits convention to trigger new releases.
A new release gets triggered automatically after a push to master. The version numbers get incremented automatically depending on the commit message prefixes in the merged branch.
For example:

- `fix(pencil): stop graphite breaking when too much pressure applied` triggers a patch release
- `feat(pencil): add 'graphiteWidth' option` triggers a minor release
- `perf(pencil): remove graphiteWidth option`, with a body of `BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons.`, triggers a major release
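As a rough sketch of how those commit prefixes map to version bumps (illustrative only; the actual logic lives inside semantic-release):

```typescript
type Bump = "major" | "minor" | "patch" | null;

// Map a Conventional Commit message to the release type it triggers.
function bumpForCommit(message: string): Bump {
  if (message.includes("BREAKING CHANGE:")) return "major";
  if (/^feat(\(.+\))?:/.test(message)) return "minor";
  if (/^(fix|perf)(\(.+\))?:/.test(message)) return "patch";
  return null;
}

// Compute the next version from the current one and the merged commit messages.
function nextVersion(current: string, messages: string[]): string {
  const [major, minor, patch] = current.split(".").map(Number);
  const bumps = messages.map(bumpForCommit);
  if (bumps.includes("major")) return `${major + 1}.0.0`;
  if (bumps.includes("minor")) return `${major}.${minor + 1}.0`;
  if (bumps.includes("patch")) return `${major}.${minor}.${patch + 1}`;
  return current;
}
```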
This repository uses commitizen to automate commit message composition.
All you need to do is use the `npx cz` command instead of `git commit` each time you are ready to commit anything.
This repo is based on workspaces (https://classic.yarnpkg.com/en/docs/workspaces/).

- Package - a single, package.json-based part of this repo, e.g. frontend and backend.
- Shared dependency - a dependency which is used in the same version in every package.

It is possible that the same dependency (e.g. lodash) is used across multiple packages in the same version.
In such a case, the dependency is added to the root package.json (using `yarn add package -S` or `-D` at the root folder) and then added as a peer dependency to the packages using it, with version `*`.
For example, let's say we want to use react version 17.0.0 in every package.

Root package.json:

```json
"dependencies": {
  "react": "^17.0.0"
}
```

and then frontend/package.json:

```json
"peerDependencies": {
  "react": "*"
}
```

The version of single packages (such as frontend or backend) should never be changed.
The version of the root package can be freely changed if it makes sense for any use case.
Each package can import the content of another package.
To do this, it needs to define that package as its dependency,
e.g. utils/package.json:

```json
{
  "name": "@aca/utils",
  "version": "0.1.0"
}
```

and then frontend/package.json:

```json
{
  "dependencies": {
    "@aca/utils": "0.1.0"
  }
}
```

In such a setup, after calling `yarn install`, a symlink will be created, meaning ./frontend/node_modules/@aca/utils is a symlink to ./utils (not a clone!).
It also means that each change made inside ./utils would be instantly reflected inside ./frontend/node_modules/@aca/utils.
Note that packages like ./shared contain only TypeScript files and are never built ts > js by themselves. (There is no build script for ./shared alone.)
The monorepo is organized in a way that forces the package that uses shared to build it.
This means there is a build setup only for 'end' packages (currently only frontend and backend).
In the frontend (Next.js), we use a Next.js plugin to tell it to compile monorepo packages (next-transpile-modules; check next.config.js for more details).
This setup means we have a 'hot-reloading' experience when modifying any used monorepo package.
It's possible to manually run any command in the scope of a package, e.g. `yarn frontend add react`. Another example might be `yarn backend test:watch`, etc.
When making mutative (i.e. non-additive) changes to the database schema it is important to do it in a staggered fashion. The recommended order of operations is:

- ClientDb entity fragment
  - By still keeping it in the server we make sure older clients still work, though we are not generally trying to stay backwards compatible for too long
- Hasura metadata
  - This prevents a failed metadata update (e.g. due to a deadlock) from leaving metadata in an inconsistent state. We run migrations before the metadata refresh and this is unfortunately not a single transaction.
- Actual database schema migration

It's recommended to space deploying these changes out over some time. Create Linear issues for yourself with deadlines to remember when to kick off the next phase.
There is a root-level tsconfig.json file which is extended and modified if needed inside <package>/tsconfig.json.
The ESLint config is at the root level and is used as-is in every package.
There is a generic, Node-based .gitignore at the root level, but each package has its own .gitignore. The root one is kind of a backup.
The Prettier config is defined at the root level and is used as-is in every package.
- Staging is automatically updated after every release on master.
- Production releases require manual deployment. This is done by using the custom deploy workflow in GitHub Actions.
After clicking on "Run workflow" you can specify the stage, the application, and the version to deploy.
The application value `all` releases the desktop app and deploys the frontend and backend applications.
If the application value is set to `backend`, the script will only update the backend service without creating a new release of the desktop app.
The version value `latest` will deploy the latest release. To deploy a specific version, you can pass the exact version number here (e.g. 1.2.3).
Fetch production heapdump from the server and save it locally.
# list all backend pods
kubectl -n production get pods -l app=api
# select a pod and set up port forwarding
kubectl -n production port-forward api-7dc75f758c-h7sz4 38080:8080
# save the heapdump locally
curl -H 'Authorization: verysupersecure' http://localhost:38080/api/v1/debug/heapdump > "./backend-$(date +%s).heapsnapshot"