This repo is intended to organize our Chrome DevTools (Recorder) extension in the same GitHub repository as the recordings it produces, which are executed by GitHub Actions workflows. Consolidating these components into a single repository can streamline development, simplify version control, and enhance collaboration. However, it's essential to structure the repository thoughtfully to maintain clarity and manageability as the project scales.
- Benefits of a Single Repository
- Repository Structure
- Integrating the Chrome Extension
- Setting Up GitHub Actions Workflows
- Handling Secrets and Configuration
- Publishing the Chrome Extension
- Best Practices
- Example Repository Structure
- Conclusion
Centralized Management:
- Unified Version Control: Track changes to both the extension and scraper recordings in one place.
- Simplified Collaboration: Team members can access all project components without navigating multiple repositories.
Consistent Configuration:
- Shared Configuration Files: Utilize shared configurations (e.g., ESLint, Prettier) across the extension and scraper scripts.
- Simplified Dependency Management: Manage dependencies for both components within a single package.json.
Streamlined CI/CD:
- Integrated Workflows: Create GitHub Actions workflows that can operate on both the extension and scraper recordings.
- Consistent Environment Setup: Ensure consistent development and build environments across all project parts.
Organizing your repository logically is crucial for maintainability and scalability. Here's a recommended structure:
recorder/
├── .github/
│ └── workflows/
│ ├── extension-build.yml
│ └── scraper-workflow.yml
├── extension/
│ ├── manifest.json
│ ├── devtools.html
│ ├── background.js
│ ├── scripts/
│ │ ├── recorderPlugin.js
│ │ ├── githubSync.js
│ │ └── ui.js
│ ├── popup.html
│ ├── options.html
│ ├── icons/
│ │ ├── icon16.png
│ │ ├── icon48.png
│ │ └── icon128.png
│ └── README.md
├── recordings/
│ ├── recording1.json
│ ├── recording2.json
│ └── ...
├── scripts/
│ ├── run-scraper.js
│ └── helpers.js
├── results/
│ ├── result1.json
│ ├── result2.json
│ └── ...
├── package.json
├── package-lock.json
├── README.md
└── ... (other project files)
Explanation of Key Directories:
- .github/workflows/: Contains GitHub Actions workflow YAML files for CI/CD processes related to both the extension and scraper.
- extension/: Houses all Chrome extension-related files, including scripts, HTML pages, and assets.
- recordings/: Stores scraper recordings generated by the Chrome Recorder extension.
- scripts/: Contains utility scripts, such as the scraper execution script (run-scraper.js) and helper functions.
- results/: Stores output from the scraper, such as JSON data, screenshots, or logs.
Place the Chrome extension within its own directory (extension/) to
encapsulate all related files. This separation ensures that extension-specific
code doesn't clutter other parts of the repository and facilitates easier
maintenance.
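As a point of reference for what lives in extension/, a manifest for a DevTools (Recorder) extension like the one sketched in the structure above might look roughly like this. This is a minimal Manifest V3 sketch inferred from the file names listed earlier; the extension name, description, permissions, and host permissions are assumptions and will depend on what your extension actually does.

```json
{
  "manifest_version": 3,
  "name": "Recorder GitHub Sync",
  "version": "1.0.0",
  "description": "Syncs Chrome DevTools Recorder recordings to GitHub.",
  "devtools_page": "devtools.html",
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "popup.html" },
  "options_page": "options.html",
  "permissions": ["storage"],
  "host_permissions": ["https://api.github.com/*"],
  "icons": {
    "16": "icons/icon16.png",
    "48": "icons/icon48.png",
    "128": "icons/icon128.png"
  }
}
```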
If your extension or scraper scripts require Node.js dependencies (e.g.,
puppeteer), manage them centrally via the root package.json. Alternatively,
you can create separate package.json files within subdirectories for more
granular dependency management.
Option 1: Single package.json
{
"name": "chrome-recorder-sync",
"version": "1.0.0",
"scripts": {
"build:extension": "webpack --config extension/webpack.config.js",
"run:scraper": "node scripts/run-scraper.js"
},
"dependencies": {
"puppeteer": "^19.0.0"
},
"devDependencies": {
"webpack": "^5.0.0",
"webpack-cli": "^4.0.0"
}
}
Option 2: Multiple package.json Files
recorder/
├── extension/
│ ├── package.json
│ └── ...
├── scripts/
│ ├── package.json
│ └── ...
└── package.json (optional)
Recommendation: For simplicity, especially in smaller projects, use a single
package.json at the root. For larger projects with distinct dependencies,
consider multiple package.json files.
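If you go the multiple-package.json route, npm workspaces (npm 7+) can keep installation down to a single npm install at the root. A minimal sketch of the root package.json, assuming extension/ and scripts/ each have their own package.json defining build and start scripts respectively (those script names are assumptions):

```json
{
  "name": "chrome-recorder-sync",
  "private": true,
  "workspaces": ["extension", "scripts"],
  "scripts": {
    "build:extension": "npm run build --workspace=extension",
    "run:scraper": "npm run start --workspace=scripts"
  }
}
```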
1. Building the Extension:
Use build tools like Webpack or Parcel to bundle your extension scripts, especially if you're using modern JavaScript or TypeScript.
Example webpack.config.js for Extension:
// extension/webpack.config.js
const path = require("path");
module.exports = {
  // Resolve entry paths relative to the extension/ directory, since the
  // build script invokes webpack from the repository root.
  context: __dirname,
  entry: {
    background: "./background.js",
    recorderPlugin: "./scripts/recorderPlugin.js",
    githubSync: "./scripts/githubSync.js",
    ui: "./scripts/ui.js",
  },
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].bundle.js",
  },
  mode: "production",
};
2. Running Scripts:
Use npm scripts to handle tasks like running the scraper.
Example package.json Scripts Section:
"scripts": {
"build:extension": "webpack --config extension/webpack.config.js",
"run:scraper": "node scripts/run-scraper.js"
}3. Live Reloading and Development:
Consider using webpack's watch mode or webpack-dev-server for a better development experience with automatic rebuilds.
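For example, a watch script could be added alongside the others (a sketch; the rebuilt extension still has to be reloaded manually from chrome://extensions after each rebuild):

```json
"scripts": {
  "build:extension": "webpack --config extension/webpack.config.js",
  "watch:extension": "webpack --config extension/webpack.config.js --watch",
  "run:scraper": "node scripts/run-scraper.js"
}
```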
With both the Chrome extension and scraper in the same repository, you can define multiple workflows targeting different tasks.
Create a workflow to build, test, and possibly publish your Chrome extension.
Example extension-build.yml:
name: Extension Build and Publish
on:
  push:
    paths:
      - 'extension/**'
  pull_request:
    paths:
      - 'extension/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Install Dependencies
        run: npm install
      - name: Build Extension
        run: npm run build:extension
      - name: Upload Extension Build
        uses: actions/upload-artifact@v3
        with:
          name: extension-build
          path: extension/dist/
Optional: Publishing the Extension
If you intend to automate the publishing process to the Chrome Web Store, you'll need to integrate with Chrome's API using the Chrome Web Store Publish Action or similar.
Create a workflow that triggers on changes to the recordings/ directory to run the scraper.
Example scraper-workflow.yml:
name: Runner
on:
  push:
    paths:
      - 'recordings/**'
  workflow_dispatch:
jobs:
  run-scraper:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install Dependencies
        run: npm install
      - name: Run Scraper
        run: npm run run:scraper
      - name: Upload Scraper Results
        uses: actions/upload-artifact@v3
        with:
          name: scraper-results
          path: results/
Explanation:
- Triggers: Runs on pushes to the recordings/ directory or can be manually triggered.
- Jobs:
- Checkout: Clones the repository.
- Setup: Configures Node.js environment.
- Install: Installs necessary dependencies.
- Run Scraper: Executes the scraper script.
- Upload Results: Saves the scraper output as artifacts for later inspection.
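The npm run run:scraper step expects scripts/run-scraper.js to exist. One way to execute Chrome Recorder recordings in CI is with the @puppeteer/replay library. Below is a minimal, illustrative sketch that replays every JSON recording in recordings/ and writes a summary to results/. The headless launch options, the summary.json output, and the ESM setup ("type": "module" in package.json) are assumptions; adapt it to your actual recordings and consult the @puppeteer/replay documentation.

```js
// scripts/run-scraper.js — illustrative sketch, assumes an ESM project ("type": "module")
import fs from 'node:fs/promises';
import path from 'node:path';
import puppeteer from 'puppeteer';
import { createRunner, parse, PuppeteerRunnerExtension } from '@puppeteer/replay';

const recordingsDir = 'recordings';
const resultsDir = 'results';

const files = (await fs.readdir(recordingsDir)).filter((f) => f.endsWith('.json'));
await fs.mkdir(resultsDir, { recursive: true });

const browser = await puppeteer.launch({ headless: true });
const summary = [];

for (const file of files) {
  const raw = await fs.readFile(path.join(recordingsDir, file), 'utf8');
  const flow = parse(JSON.parse(raw)); // validate the recording JSON into a user flow
  const page = await browser.newPage();
  try {
    const runner = await createRunner(flow, new PuppeteerRunnerExtension(browser, page));
    await runner.run(); // replay the recorded steps
    summary.push({ recording: file, status: 'ok' });
  } catch (err) {
    summary.push({ recording: file, status: 'failed', error: String(err) });
  } finally {
    await page.close();
  }
}

await browser.close();
await fs.writeFile(path.join(resultsDir, 'summary.json'), JSON.stringify(summary, null, 2));
```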
Managing sensitive information securely is paramount, especially when interacting with the GitHub API.
Use GitHub Secrets to store sensitive data like GitHub Personal Access Tokens (PATs), which your extension and workflows will use to authenticate with the GitHub API.
- Add Secrets to GitHub Repository:
  - Navigate to your repository on GitHub.
  - Go to Settings > Secrets and variables > Actions.
  - Click "New repository secret" and add the following (the built-in GITHUB_TOKEN is provided automatically and cannot be redefined as a custom secret):
    - EXTENSION_GITHUB_TOKEN (for extension-specific operations)
    - Any other necessary tokens or keys.
- Access Secrets in Workflows:
  Reference secrets in your workflow YAML files using ${{ secrets.SECRET_NAME }}. Example:
  - name: Run Scraper
    run: node scripts/run-scraper.js
    env:
      GITHUB_TOKEN: ${{ secrets.EXTENSION_GITHUB_TOKEN }}
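Inside the Node script, a token passed this way is read from process.env. A trivial sketch, using the same env key as the step above:

```js
// scripts/run-scraper.js (excerpt)
const token = process.env.GITHUB_TOKEN;
if (!token) {
  throw new Error('GITHUB_TOKEN is not set; configure it via the workflow env block.');
}
```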
To securely pass tokens from the extension to GitHub, consider the following:
- User Input: Prompt users to input their GitHub PAT via the extension's UI and store it using chrome.storage.sync.
- Avoid Hardcoding: Do not embed secrets directly into the extension code. Always retrieve them securely at runtime.
Example in githubSync.js:
// scripts/githubSync.js
class GitHubSync {
  constructor() {
    // chrome.storage.sync.get is asynchronous, so these properties are only
    // populated once the callback fires; account for that (e.g. expose an
    // init() promise) before making API requests.
    chrome.storage.sync.get(
      ["githubToken", "repoOwner", "repoName"],
      (result) => {
        this.githubToken = result.githubToken;
        this.repoOwner = result.repoOwner;
        this.repoName = result.repoName;
      },
    );
  }
  // ... existing methods
}
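The elided methods are not shown here, but for illustration, a sync method could push a recording to the recordings/ directory through GitHub's "create or update file contents" REST endpoint (PUT /repos/{owner}/{repo}/contents/{path}). The sketch below is hypothetical rather than the extension's actual implementation; it assumes the token and repo fields are already populated, and it omits the sha handling required when overwriting an existing file (non-ASCII content would also need a UTF-8-safe base64 encoding instead of plain btoa).

```js
// Hypothetical helper — not part of the original extension code.
async function uploadRecording(githubToken, repoOwner, repoName, fileName, recordingJson) {
  const url = `https://api.github.com/repos/${repoOwner}/${repoName}/contents/recordings/${fileName}`;
  const response = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${githubToken}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({
      message: `Add recording ${fileName}`,
      // The contents API expects the file body as a base64 string.
      content: btoa(JSON.stringify(recordingJson, null, 2)),
    }),
  });
  if (!response.ok) {
    throw new Error(`GitHub API error: ${response.status}`);
  }
  return response.json();
}
```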
User Interface to Set Credentials (popup.html and ui.js):
Ensure users can input and save their credentials securely.
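For example, the popup or options script might persist the values with chrome.storage.sync.set. This is a sketch; the element IDs are assumptions about your markup:

```js
// extension/scripts/ui.js (sketch) — element IDs are hypothetical
document.getElementById("save").addEventListener("click", () => {
  const settings = {
    githubToken: document.getElementById("github-token").value.trim(),
    repoOwner: document.getElementById("repo-owner").value.trim(),
    repoName: document.getElementById("repo-name").value.trim(),
  };
  chrome.storage.sync.set(settings, () => {
    // storage.sync keeps the values available across the user's signed-in Chrome profiles
    console.log("GitHub credentials saved");
  });
});
```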
After development and testing, you may want to publish your Chrome extension to the Chrome Web Store for broader use.
1. Build the Extension:
   Run your build script to generate the final extension files.
   npm run build:extension
2. Compress the Extension:
   Navigate to the extension/dist/ directory and zip the contents.
   cd extension/dist/
   zip -r ../extension.zip .
3. Create a Developer Account:
   - Go to the Chrome Web Store Developer Dashboard and sign up.
   - Pay the one-time registration fee if required.
4. Submit the Extension:
   - Click "Add new item" and upload the extension.zip file.
   - Fill in the required details like description, screenshots, and icons.
   - Set up pricing and distribution options.
5. Automate Publishing (Optional):
   For advanced setups, use GitHub Actions to automate the publishing process using tools like the Chrome Web Store Publish Action.
Example Workflow Step:
- name: Publish to Chrome Web Store
  # Placeholder: substitute a Chrome Web Store publish action of your choice.
  # (ncipollo/release-action creates GitHub releases, not Web Store uploads.)
  uses: <chrome-web-store-publish-action>
  with:
    api-key: ${{ secrets.CHROME_WEBSTORE_API_KEY }}
    client-id: ${{ secrets.CHROME_WEBSTORE_CLIENT_ID }}
    client-secret: ${{ secrets.CHROME_WEBSTORE_CLIENT_SECRET }}
    extension-id: your-extension-id
    zip-file: extension.zip
Note: The action name and input names above are placeholders; refer to the chosen action's documentation for the exact inputs it expects.
- Separation of Concerns: Keep extension logic separate from scraper scripts.
- Reusable Components: Develop utility functions that can be shared across different parts of the project.
- README.md: Provide clear instructions on setting up, configuring, and using the extension and scraper.
- Code Comments: Add meaningful comments to explain complex logic or workflows.
- Contribution Guidelines: If collaborating, outline how others can contribute to the project.
- Branching Strategy: Use branches like main, development, and feature-specific branches.
- Commit Messages: Write descriptive commit messages to track changes effectively.
- Pull Requests: Use PRs for code reviews and to maintain code quality.
- Unit Tests: Implement tests for critical functions, especially those interacting with APIs.
- Integration Tests: Test the end-to-end flow from recording to synchronization and scraper execution.
- Manual Testing: Regularly test the extension in Chrome to ensure functionality.
- Graceful Failures: Handle API errors, network issues, and unexpected inputs gracefully.
- User Feedback: Inform users of successes, failures, and required actions via the extension's UI.
- Logging: Maintain logs for both the extension and scraper scripts to facilitate debugging.
- Secure Storage: Store tokens and sensitive data securely using Chrome's storage APIs.
- Input Validation: Validate and sanitize all inputs to prevent injection attacks.
- Least Privilege: Grant only necessary permissions to the extension and workflows.
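As an example of the input-validation point above, the extension could check user-supplied repository fields before ever building a GitHub API URL. This is a hypothetical sketch; the regexes approximate GitHub's allowed characters for owner and repository names:

```js
// Hypothetical validation helpers for user-supplied GitHub settings.
function isValidOwner(owner) {
  // GitHub usernames/orgs: alphanumerics and hyphens, no leading/trailing hyphen, up to 39 chars.
  return /^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,37}[a-zA-Z0-9])?$/.test(owner);
}

function isValidRepoName(name) {
  // Repository names: alphanumerics, hyphens, underscores, and periods.
  return /^[a-zA-Z0-9._-]{1,100}$/.test(name);
}

function assertValidTarget(owner, repo) {
  if (!isValidOwner(owner) || !isValidRepoName(repo)) {
    throw new Error("Invalid repository owner or name; refusing to call the GitHub API.");
  }
}
```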
Here’s an expanded view of how your repository might look with the Chrome extension integrated alongside scraper recordings and GitHub Actions workflows:
chrome-recorder-sync/
├── .github/
│ └── workflows/
│ ├── extension-build.yml
│ └── scraper-workflow.yml
├── extension/
│ ├── dist/
│ │ ├── background.bundle.js
│ │ ├── recorderPlugin.bundle.js
│ │ ├── githubSync.bundle.js
│ │ ├── ui.bundle.js
│ │ ├── popup.html
│ │ ├── options.html
│ │ └── icons/
│ │ ├── icon16.png
│ │ ├── icon48.png
│ │ └── icon128.png
│ ├── src/
│ │ ├── background.js
│ │ ├── scripts/
│ │ │ ├── recorderPlugin.js
│ │ │ ├── githubSync.js
│ │ │ └── ui.js
│ │ ├── popup.html
│ │ ├── options.html
│ │ └── icons/
│ │ ├── icon16.png
│ │ ├── icon48.png
│ │ └── icon128.png
│ ├── manifest.json
│ ├── webpack.config.js
│ └── README.md
├── recordings/
│ ├── recording1.json
│ ├── recording2.json
│ └── ...
├── scripts/
│ ├── run-scraper.js
│ ├── helpers.js
│ └── package.json (optional)
├── results/
│ ├── result1.json
│ ├── result2.json
│ └── ...
├── package.json
├── package-lock.json
├── README.md
└── ... (other project files)
Highlights:
- extension/src/: Contains source files for the extension, which are bundled into dist/ using build tools.
- recordings/: Centralized location for all scraper recordings.
- scripts/: Houses scraper execution scripts and utilities.
- results/: Stores outputs from the scraper runs.
- package.json: Manages dependencies and scripts for both the extension and scraper.
- .github/workflows/: Contains separate workflows for building the extension and running the scraper based on different triggers.
Integrating your Chrome extension with scraper recordings and GitHub Actions within a single repository is not only feasible but also beneficial for maintaining a cohesive development environment. By following the structured approach outlined above, you can ensure that your project remains organized, scalable, and secure.
Key Takeaways:
- Organize Logically: Separate different project components into distinct directories to maintain clarity.
- Manage Dependencies Effectively: Use centralized or modular dependency management based on project size and complexity.
- Automate with GitHub Actions: Leverage GitHub Actions to build, test, and deploy both your extension and scraper workflows seamlessly.
- Prioritize Security: Handle all sensitive data with care, using secure storage and minimizing exposure.
- Document Thoroughly: Maintain comprehensive documentation to facilitate collaboration and future maintenance.
By consolidating your extension and scraper functionalities within a single repository, you create an efficient workflow that simplifies development, testing, and deployment processes. This setup enhances productivity and ensures that all project components evolve in harmony, leveraging the full potential of GitHub’s ecosystem.