How to Get the Most out of GitHub API Rate Limits
Thinking about using GitHub's REST API within your system, or already doing so? If you have not already encountered it, one important concept to keep in mind while developing is GitHub's rate limiting.
If you are unfamiliar with rate limiting, it is a performance management technique where the rate of requests to a system is controlled and limited. In GitHub's case, it refers to the rate at which a user may make requests to its API endpoints. GitHub does this to prevent excessive load on its systems from too many requests arriving at once.
If you have ever tried using GitHub's API, you have probably already run into GitHub's rate limiting errors. Without authentication, GitHub allows only 60 requests per hour (!!), and after creating and using your own personal access token, you are upgraded to 5,000 requests per hour.
That rate limit can easily be reached in just a few minutes by calling the API to list basic information from a medium-sized repository, and you would then have to wait 50+ minutes for the rate limit to reset before continuing. You can raise the limit to 15,000 requests per hour with an enterprise account, but the problem remains the same:
If your service or program needs to make consistent, sustained requests to GitHub's APIs, you will have to work within and make the most of GitHub's rate limits.
We rely heavily on data obtained from GitHub's APIs at Endor Labs, and this blog lists the techniques we have discovered along the way to make the most of GitHub's rate limits.
GitHub Rate Limits: A Brief Overview
We will quickly go over the specific rate limits for GitHub's REST APIs, along with some useful details to remember.
- Unauthenticated requests: 60 requests/hr
- Authenticated user requests: 5,000 requests/hr
- GitHub App owned by a GitHub Enterprise Cloud org: 15,000 requests/hr
- OAuth App owned or approved by a GitHub Enterprise Cloud org: 15,000 requests/hr
- Rate limit information is available in the HTTP response headers of every API request.
- A dedicated Rate Limit API is also available
- Example usage: see the sketch after this list
- Calling this API does not count against your rate limit
- A Secondary Rate Limit applies regardless of your current rate limit status for certain request behaviors, such as a large number of concurrent requests. This limit refreshes very quickly, so you can simply read the response headers and wait until it resets.
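Here is a minimal sketch of calling the Rate Limit API with Python's `requests` library; the `GITHUB_TOKEN` environment variable is a placeholder for however you store your personal access token:

```python
import os
import requests

# Query GitHub's Rate Limit API; this call does not count against your limit.
resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # placeholder token
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
core = resp.json()["resources"]["core"]
print(f"{core['remaining']}/{core['limit']} requests left; resets at epoch {core['reset']}")
```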
You can also refer to GitHub's Rate Limit documentation for more information.
Simple Techniques to Push Your Rate Limit
This list includes techniques that are readily supported by libraries and/or GitHub, and should be easier to implement within systems already using GitHub's REST API.
Increasing Page Size Limits
This one is sneaky: GitHub defaults most page sizes to 30 when the maximum is usually 100. Setting the page size to the maximum will not only save you a ton of requests, but your program's speed will also benefit from the reduction in network round trips.
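As an illustration, here is a minimal sketch of listing commits with the maximum page size; the repository is just an example, and `GITHUB_TOKEN` is a placeholder:

```python
import os
import requests

HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # placeholder token
    "Accept": "application/vnd.github+json",
}

# per_page defaults to 30 on most list endpoints; raising it to 100 means
# roughly 3x fewer requests to walk the same list.
resp = requests.get(
    "https://api.github.com/repos/octocat/hello-world/commits",  # example repo
    headers=HEADERS,
    params={"per_page": 100, "page": 1},
)
resp.raise_for_status()
print(f"fetched {len(resp.json())} commits in one request")
```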
Conditional Requests, aka ETags
Conditional requests are the officially supported GitHub way to reduce your rate limit usage, and they are well covered in the GitHub documentation.
At Endor Labs, conditional requests have been a big contributor to reducing our rate limit usage. Many GitHub objects, such as PRs and issues, don't change very often, and conditional requests allow us to avoid spending rate limit every time we want to read them.
As a quick walk-through: every REST API response includes an ETag header carrying a "fingerprint" of the response. On subsequent requests, you may provide this ETag value in the `If-None-Match` request header; if the resource has not changed, you will receive a 304 Not Modified response with no impact on your rate limit.
GitHub's documentation walks through the same flow, including an example showing that the rate limit remains unchanged after sending an ETag and receiving a 304 response.
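As an illustration, here is a minimal sketch of the full round trip in Python; the endpoint is just an example and `GITHUB_TOKEN` is a placeholder:

```python
import os
import requests

URL = "https://api.github.com/repos/octocat/hello-world"  # example endpoint
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # placeholder token
    "Accept": "application/vnd.github+json",
}

# First request: pay one unit of rate limit and remember the ETag.
first = requests.get(URL, headers=HEADERS)
etag = first.headers["ETag"]

# Second request: send the ETag back via If-None-Match.
second = requests.get(URL, headers={**HEADERS, "If-None-Match": etag})
if second.status_code == 304:
    # Resource unchanged: this request did not count against the rate limit.
    print("cache hit, rate limit untouched:", second.headers["X-RateLimit-Remaining"])
else:
    print("resource changed, new ETag:", second.headers["ETag"])
```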
In practice, how you integrate ETags into your infrastructure is up to you. In our case, we found that storing the ETag in the database (along with some metadata identifying which ETag was last used for which request and repository) worked best within our architecture.
Multiple GitHub Accounts
Per GitHub's Terms of Service, you may only have one free account, but if you are able to create additional paid accounts, you can generate more tokens to add to your overall rate limit. This will likely require token rotation logic, which is discussed in the next section.
Advanced Techniques to Push Your Rate Limit
These techniques likely require new infrastructure support within your system.
Incremental Ingestion
GitHub's list APIs return the entire collection of items (for example, the entire history of commits). While this might be necessary on the initial API call, you may not want to keep pulling the full list on subsequent runs. In that case, incremental ingestion is a concept you can build into your listing logic.
Different ways to accomplish this include:
- Certain list APIs accept the `since` option, which returns only results after a given time (the last time you called the API is a good choice). List APIs generally have a default sort field and direction, which you can also change with the `sort` and `direction` options.
- Compare the number of requests needed to list the entire collection against the number needed to fetch the missing items individually.
- For example, say a repository has 10,000 existing commits. Listing them all would take 100 calls (100 pages at a page size of 100). If we have already fetched the first 9,950 commits, we only need the remaining 50, which corresponds to 50 individual calls. In this case, it is better to get each missing commit individually rather than re-list the entire history.
- BONUS TIP: Instead of always using GitHub's API to get information, you can use git directly for certain kinds of data. In the example above, you could read the entire list of commits straight from a local clone with git.
Furthermore, you might not even need the entire list. In that case, always use the `since`, `sort`, and `direction` options where available, or continue reading for a better technique.
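Here is a minimal sketch of incremental ingestion with `since`, assuming you persist the timestamp of your last successful run somewhere; the hard-coded timestamp and repository below are placeholders:

```python
import os
import requests

HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # placeholder token
    "Accept": "application/vnd.github+json",
}

def list_new_commits(owner: str, repo: str, last_run_iso: str) -> list:
    """Fetch only the commits created after last_run_iso, 100 per page."""
    commits, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            headers=HEADERS,
            params={"since": last_run_iso, "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        batch = resp.json()
        commits.extend(batch)
        if len(batch) < 100:  # a short page means we have reached the end
            return commits
        page += 1

# In practice this timestamp would come from wherever you record the last run.
new_commits = list_new_commits("octocat", "hello-world", "2023-01-01T00:00:00Z")
print(f"{len(new_commits)} new commits since last run")
```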
GraphQL
GitHub also supports GraphQL queries, which have a rate limit pool separate from the REST API requests. You can find GitHub's GraphQL docs here and the GraphQL rate limit docs here.
GraphQL is great for certain use cases beyond reducing your rate limit usage. It can fetch information that is not available via the regular REST APIs, such as a tag's creation date, and do things the regular API cannot, such as sorting tags. If you have use cases that would be better served by GraphQL, it makes sense to support GraphQL queries within your infrastructure to spread out your rate limit usage.
On a side note, GraphQL queries are more difficult to generalize than REST API requests, and pagination in GraphQL is not as well supported.
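As an example of a query the REST API cannot express, here is a hedged sketch that lists a repository's tags sorted by tag commit date; the repository is a placeholder, and the query assumes GitHub's standard GraphQL schema:

```python
import os
import requests

# The 100 most recent tags, sorted by tag commit date: a sort order
# the REST API does not offer.
QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    refs(refPrefix: "refs/tags/", first: 100,
         orderBy: {field: TAG_COMMIT_DATE, direction: DESC}) {
      nodes { name }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},  # placeholder token
    json={"query": QUERY, "variables": {"owner": "octocat", "name": "hello-world"}},
)
resp.raise_for_status()
for node in resp.json()["data"]["repository"]["refs"]["nodes"]:
    print(node["name"])
```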
Token Rotation
If you are part of an organization or group, it may be possible to gather a pool of tokens to feed to GitHub's APIs. Building token rotation infrastructure is a long-term, scalable solution with a variety of possible designs. I will leave the final design up to you, the reader, but here are some thoughts based on my experience at Endor Labs.
- Security considerations: since we are dealing with authorization tokens, we must be careful not to expose them publicly. Ensure that the methods you use to store, transport, and read the tokens are secure. Tokens must also have an expiration date.
- In terms of how to choose the next token:
- Load Balancing Algorithms
- Two Random Choices (our pick; see the sketch after this list)
- Here is a wonderful research paper describing the power of two random choices in load balancing algorithms: https://www.eecs.harvard.edu/~michaelm/postscripts/mythesis.pdf
- There are plenty of other algorithms (such as Round Robin variants) that may be used as well
- Handling expired or invalid tokens
- Caching tokens for a set amount of time
- Retries: how to handle the case where your algorithm picks a token that is expired, invalid, or has already reached its limit.
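Here is a minimal sketch of the two-random-choices selection step; the `Token` shape and its fields are hypothetical, and in practice `remaining` would be refreshed from the `X-RateLimit-Remaining` response header:

```python
import random
from dataclasses import dataclass

@dataclass
class Token:
    value: str       # the token itself; store and transport securely
    remaining: int   # requests left in the current window

def pick_token(pool: list) -> Token:
    """Sample two tokens at random and use the one with more budget left.
    This spreads load nearly as evenly as always picking the global best,
    without needing a perfectly up-to-date view of every token."""
    a, b = random.sample(pool, 2)  # assumes the pool holds at least two tokens
    return a if a.remaining >= b.remaining else b

# Hypothetical pool of organization tokens.
pool = [Token("ghp_aaa", 4200), Token("ghp_bbb", 310), Token("ghp_ccc", 4999)]
print(pick_token(pool).value)
```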
Using a GitHub App Installation
If your use case can work with GitHub Apps, you can take advantage of installation-generated tokens, which may qualify for the higher Enterprise rate limits. Installations are able to generate their own tokens, separate from your personal access tokens, and those tokens can be used to make API calls on behalf of the installation.
You can find GitHub's App installation docs here.
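Here is a hedged sketch of minting an installation token with Python and the PyJWT library; the App ID, installation ID, and key path are all placeholders:

```python
import time
import jwt        # PyJWT, used to sign the App JWT
import requests

APP_ID = "123456"                           # placeholder GitHub App ID
INSTALLATION_ID = "7890123"                 # placeholder installation ID
PRIVATE_KEY = open("app-key.pem").read()    # placeholder path to the App's private key

# 1. Sign a short-lived JWT as the App itself.
now = int(time.time())
app_jwt = jwt.encode(
    {"iat": now - 60, "exp": now + 9 * 60, "iss": APP_ID},
    PRIVATE_KEY,
    algorithm="RS256",
)

# 2. Exchange the JWT for an installation access token.
resp = requests.post(
    f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
    headers={
        "Authorization": f"Bearer {app_jwt}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
installation_token = resp.json()["token"]  # usable like a PAT, with its own rate limit
```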
Last Resort
Inevitably, you will end up hitting the rate limit even with all the techniques above. When that happens, you can capture the rate limit error (make sure you check for both the primary rate limit AND the secondary rate limit errors) and use the response headers to determine how long to wait until the rate limit refreshes. I will leave the exact implementation up to you, as this waiting can be placed at multiple points.
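As one possible shape, here is a minimal sketch of a wait-and-retry wrapper built on the relevant response headers (`Retry-After` for the secondary limit, `X-RateLimit-Reset` for the primary one):

```python
import time
import requests

def get_with_backoff(url: str, headers: dict, max_retries: int = 3) -> requests.Response:
    """GET that sleeps through both primary and secondary rate limit errors."""
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code not in (403, 429):
            return resp
        if "Retry-After" in resp.headers:
            # Secondary rate limit: GitHub says exactly how long to wait.
            time.sleep(int(resp.headers["Retry-After"]))
        elif resp.headers.get("X-RateLimit-Remaining") == "0":
            # Primary rate limit: sleep until the reset timestamp.
            time.sleep(max(int(resp.headers["X-RateLimit-Reset"]) - time.time(), 0))
        else:
            return resp  # some other 403/429; surface it to the caller
    raise RuntimeError(f"still rate limited after {max_retries} retries: {url}")
```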
Summary
GitHub's rate limits are quite harsh, and it may seem impossible to fetch the required amount of data from their APIs in a timely manner. Here at Endor Labs, we have successfully used the techniques above to speed up data fetching from GitHub. Although we still occasionally wait on a token's rate limit to refresh, we can now fetch data dramatically faster than before.
Hopefully this blog will help you do the same in your projects!
Sebastian Cai is a Member of Technical Staff at Endor Labs. As the second backend engineer, Sebastian built the data ingestion platform and infrastructure. He is currently focused on improving and adding capabilities to data ingestion at Endor Labs. In his free time, he enjoys boxing, chess, music production, and playing with his cat.
Links
GitHub Rate Limit API: https://docs.github.com/en/rest/rate-limit
GitHub Overview on Rate Limiting: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
GitHub Conditional Requests: https://docs.github.com/en/rest/guides/getting-started-with-the-rest-api#conditional-requests
What is an ETag?: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag
GitHub GraphQL API: https://docs.github.com/en/graphql
GitHub GraphQL Overview on Rate Limiting: https://docs.github.com/en/graphql/overview/resource-limitations#rate-limit
The Power of Two Choices in Randomized Load Balancing: https://www.eecs.harvard.edu/~michaelm/postscripts/mythesis.pdf
Using GitHub App Installations: https://docs.github.com/en/developers/apps/managing-github-apps/installing-github-apps