
@fivertran-karunveluru
Collaborator

Jboard Connector

Created: 2025-01-31

Business Owner: Talent Acquisition & Recruitment Operations Team

Technical Owner: Data Engineering Team

Last Updated: 2025-01-31

Business Context

  • Data Source: Jboard API for comprehensive job board, employer, and recruitment data
  • Business Criticality: High - supports talent acquisition, market intelligence, and recruitment analytics
  • Data Consumers: Talent acquisition teams, recruitment specialists, HR analytics teams, executive leadership
  • Business SLAs: Data must be fresh within 6 hours for active job searches, 24 hours for market intelligence reporting
  • Compliance Requirements: GDPR compliance for candidate data, data privacy regulations for alert subscriptions
  • Budget Constraints: Jboard API access based on subscription tier, rate limits based on plan level

Technical Context

  • API Documentation: https://app.jboard.io/api/documentation
  • Authentication Method: Bearer token authentication with API key
  • Rate Limits: Varies by Jboard plan, typically 1000-5000 requests/hour
  • Data Volume:
    • Employers: 1,000-100,000+ employer profiles per integration
    • Categories: 50-500+ job categories with hierarchical structure
    • Alert Subscriptions: 500-50,000+ active user subscriptions
  • Data Velocity: Employer data updated on changes, categories updated weekly/monthly, alert subscriptions updated in real-time
  • Data Quality: Structured JSON with consistent schema, some fields may be null for incomplete records
  • Network Considerations: HTTPS only, RESTful API with standard reliability

Operational Context

  • Deployment Environment: Development (sandbox), staging, and production environments
  • Monitoring Requirements: Alert on >2% error rate, >3 hour sync time, missing employer or category data
  • Maintenance Windows: Weekends for non-critical updates, immediate deployment for recruitment-critical fixes
  • Team Structure: Data Engineering team, Talent Acquisition Operations, Recruitment Analytics, HR teams
  • Escalation Path: Data Engineer → Team Lead → Talent Acquisition Director → CHRO

API-Specific Details

  • Base Endpoint: https://app.jboard.io/api/v1 (production)
  • Authentication: Bearer token in Authorization header (API key)
  • Pagination: page and per_page parameters (max 100 per page)
  • Date Format: ISO 8601 (e.g., 2024-01-15T10:30:00Z)
  • Response Format: JSON with nested objects and arrays
  • Key Endpoints:
    • /employers - Employer profiles, company information, and job posting metadata
    • /categories - Job categories with hierarchical structure and organization
    • /alert_subscriptions - User job alert subscriptions with search criteria and preferences
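A minimal sketch of how the Bearer-token authentication and page/per_page pagination described above fit together; the function names and the fetch-callback shape are illustrative assumptions, not the connector's actual code:

```python
from typing import Callable, Iterator, List

BASE_URL = "https://app.jboard.io/api/v1"  # production base endpoint
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # Bearer token in the Authorization header
MAX_PER_PAGE = 100  # documented per_page ceiling


def paginate(fetch_page: Callable[[int, int], List[dict]],
             per_page: int = MAX_PER_PAGE) -> Iterator[dict]:
    """Yield records one at a time, advancing `page` until a short page signals the end."""
    page = 1
    while True:
        records = fetch_page(page, per_page)
        yield from records
        if len(records) < per_page:  # a short (or empty) page means no more data
            return
        page += 1
```

A real fetch callback would issue `GET {BASE_URL}/employers?page=…&per_page=…` with `HEADERS`; the generator keeps memory flat because records are yielded as they arrive rather than accumulated.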

Data Schema Overview

  • employers: Employer profiles with company details, logos, featured status, and job posting history
  • categories: Job categories with hierarchical parent-child relationships and sorting metadata
  • alert_subscriptions: User alert subscriptions with search queries, location filters, job type preferences, and category filters

Data Replication Expectations

  • Initial Sync: Last 90 days of employer, category, and subscription data for baseline analysis
  • Incremental Sync: Data since last successful sync timestamp using created_at_from filters
  • Sync Frequency:
    • Production: Every 6 hours for employer data, daily for categories and subscriptions
    • Development: Daily for all data types
  • Data Retention: 2 years of historical employer and subscription data for trend analysis
  • Backfill Capability: Full historical data available based on Jboard retention policies
  • Data Consistency: Near real-time with 6-hour maximum lag for active recruitment operations
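The incremental pattern above (a `created_at_from` filter driven by a saved cursor, with a 90-day initial window) can be sketched as follows; the state layout and helper names are assumptions for illustration, not the connector's real implementation:

```python
from datetime import datetime, timedelta, timezone


def build_params(state: dict, initial_sync_days: int = 90) -> dict:
    """Resume from the saved cursor, else start from the 90-day initial-sync window."""
    cursor = state.get("last_sync_time")
    if cursor is None:
        cursor = (datetime.now(timezone.utc) - timedelta(days=initial_sync_days)).isoformat()
    return {"created_at_from": cursor}


def advance_cursor(state: dict, records: list) -> dict:
    """Checkpoint the maximum updated_at seen, so out-of-order records cannot move the cursor backwards."""
    latest = max((r.get("updated_at", "") for r in records), default="")
    if latest > state.get("last_sync_time", ""):
        state["last_sync_time"] = latest
    return state
```

Comparing ISO 8601 UTC strings lexicographically is safe here because all timestamps share the same format.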

Operational Requirements

  • Uptime SLA: 99.5% availability during business hours
  • Performance SLA:
    • Initial sync: <4 hours for 90 days of data
    • Incremental sync: <45 minutes for daily updates
  • Error Handling:
    • Automatic retry with exponential backoff and jitter
    • Dead letter queue for failed employer or subscription records
    • Alert on consecutive sync failures during peak recruitment periods
  • Monitoring:
    • API response times and error rates
    • Employer count trends and anomaly detection
    • Subscription activity and engagement metrics
    • Category hierarchy completeness validation
  • Security:
    • API keys rotated every 90 days
    • Access logs maintained for 2 years
    • User email privacy handling per data regulations

Rate Limiting Strategy

  • Starter Plan: 1,000 requests/hour, 10,000 requests/day
  • Professional Plan: 2,500 requests/hour, 25,000 requests/day
  • Enterprise Plan: 5,000 requests/hour, 50,000 requests/day
  • Recommended: Implement exponential backoff with jitter for 429 responses
  • Error Handling: 429 status code indicates rate limit exceeded, respect Retry-After header
  • Monitoring: Track rate limit utilization and plan for subscription upgrades
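The recommended 429 handling above can be sketched as a delay calculator: exponential backoff with jitter, always preferring the server's Retry-After value when present. Names and defaults here are illustrative; the connector's actual retry helper may differ:

```python
import random
from typing import Optional


def backoff_delay(attempt: int, retry_after: Optional[float] = None,
                  base: float = 1.0, cap: float = 60.0) -> float:
    """Return seconds to sleep before retry number `attempt` (0-based)."""
    if retry_after is not None:
        return retry_after  # always respect the Retry-After header on a 429
    delay = min(cap, base * (2 ** attempt))  # exponential growth, capped
    return delay * (0.5 + random.random() / 2)  # 50-100% jitter to avoid thundering herds
```

The jitter factor spreads concurrent clients apart; the cap keeps worst-case waits bounded regardless of attempt count.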

Data Quality Considerations

  • Required Fields: employer_id, category_id, subscription_id, name, email (for subscriptions)
  • Optional Fields: description, website, logo_url, parent_id (categories), tags, location
  • Data Validation:
    • Employer IDs must be unique
    • Category IDs must be unique within hierarchy
    • Email addresses must be valid format for alert subscriptions
    • Website URLs must be valid format
    • Tags arrays properly serialized to JSON strings
  • Data Completeness:
    • Employers: 95%+ have basic profile information
    • Categories: 100% have name and structure data
    • Alert Subscriptions: 90%+ have search criteria defined
  • Duplicate Handling: Primary key constraints prevent duplicate records
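A hedged sketch of the validation rules listed above (email format, URL format, tags arrays serialized to JSON strings); the regexes and helper name are assumptions for illustration, not the connector's code:

```python
import json
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # minimal email shape check
URL_RE = re.compile(r"^https?://\S+$")  # minimal website URL shape check


def validate_and_serialize(record: dict) -> dict:
    """Enforce required-field formats and serialize the tags array for the destination."""
    if not EMAIL_RE.match(record.get("email", "")):
        raise ValueError(f"invalid email on record {record.get('id')}")
    website = record.get("website")
    if website and not URL_RE.match(website):
        raise ValueError(f"invalid website URL: {website}")
    out = dict(record)
    out["tags"] = json.dumps(record.get("tags", []))  # arrays -> JSON strings
    return out
```

Production-grade email validation is stricter than this regex; the point is that malformed records fail fast instead of landing in the destination.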

Integration Points

  • Fivetran Destinations: Snowflake, BigQuery, Redshift, PostgreSQL
  • Downstream Systems:
    • Applicant tracking systems (ATS)
    • Recruitment analytics platforms
    • Talent intelligence systems
    • HR information systems (HRIS)
    • Market intelligence dashboards
  • Data Dependencies: None - standalone job board data source
  • External Dependencies: Jboard API availability, job posting update frequency

Disaster Recovery

  • Backup Strategy: Daily snapshots of all employer, category, and subscription tables
  • Recovery Time Objective: 6 hours for full data recovery
  • Recovery Point Objective: 4 hours maximum data loss for recruitment-critical data
  • Failover: Automatic failover to backup API credentials
  • Testing: Monthly disaster recovery drills with Talent Acquisition team validation

Compliance & Security

  • Data Classification: User email addresses - sensitive PII, employer data - business sensitive
  • Retention Policy: 2 years for recruitment analytics, 1 year for operational data
  • Access Controls: Strict role-based access with principle of least privilege
  • Audit Trail: All data access logged and monitored for compliance audits
  • Encryption: Data encrypted in transit and at rest with enterprise-grade security
  • Privacy: GDPR compliance for EU users, CCPA compliance for CA users, email privacy protection

Performance Optimization

  • Parallel Processing: Multiple API calls for different data types (employers, categories, subscriptions)
  • Caching: Category hierarchy cached for 24 hours
  • Indexing: Employer ID, category ID, subscription ID, email, and date columns indexed
  • Partitioning: Subscription data partitioned by creation date for efficient querying
  • Compression: Historical subscription data compressed for storage efficiency
  • Streaming: Memory-efficient generator patterns prevent data accumulation
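The 24-hour category-hierarchy cache mentioned above could be as simple as a timestamped dict; this is an illustrative sketch, not the connector's actual caching layer:

```python
import time


class TTLCache:
    """Tiny time-to-live cache keyed by name, e.g. the category hierarchy for 24 hours."""

    def __init__(self, ttl_seconds: float = 24 * 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(key, None)  # expired or missing: drop and miss
        return None

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

`time.monotonic()` is used instead of wall-clock time so expiry is immune to system clock adjustments.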

Troubleshooting Guide

  • Common Issues:
    • Rate limit exceeded: Reduce sync frequency or upgrade Jboard plan
    • API key expired: Verify key validity and permissions
    • Missing employer data: Check API key permissions and employer active status
    • Timeout errors: Increase timeout values or reduce batch size
    • Category hierarchy issues: Validate parent_id relationships and sort_order
    • Subscription email format errors: Verify email validation and privacy handling
  • Debug Mode: Enable detailed logging for employer and subscription data troubleshooting
  • Support Contacts:
    • Technical: Data Engineering team
    • Business: Talent Acquisition Operations team
    • Vendor: Jboard support (for API and account issues)
    • Compliance: Legal/Compliance team (for privacy and regulatory issues)

Checklist

Some tips and links to help validate your PR:

  • Tested the connector with the fivetran debug command.
  • Added/updated the example-specific README.md file; refer here for the template.
  • Followed the Python coding standards; refer here.

@fivertran-karunveluru fivertran-karunveluru added the hackathon For all the PRs related to the internal Fivetran 2025 Connector SDK Hackathon. label Nov 1, 2025
@fivertran-karunveluru fivertran-karunveluru requested review from a team as code owners November 1, 2025 01:24
@github-actions github-actions bot added the size/XL PR size: extra large label Nov 1, 2025
@github-actions

github-actions bot commented Nov 1, 2025

🧹 Python Code Quality Check

✅ No issues found in Python Files.

🔍 See how this check works

This comment is auto-updated with every commit.

Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds a new Jboard connector to sync employer profiles, job categories, and alert subscription data from the Jboard API. The connector implements memory-efficient streaming patterns with pagination, incremental sync support using timestamp-based cursors, and comprehensive error handling with retry logic.

Key changes:

  • Implements three-table sync (employers, categories, alert_subscriptions) with generator-based pagination
  • Provides Bearer token authentication with configurable timeout and retry settings
  • Uses incremental checkpointing for employers data with state management
  • Includes comprehensive retry logic with exponential backoff and jitter for rate limiting

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 24 comments.

  • connectors/jboard/connector.py — Main connector implementation with API integration, data fetching, transformation, and sync orchestration
  • connectors/jboard/configuration.json — Configuration template defining API credentials and connector parameters
  • connectors/jboard/README.md — Documentation describing connector features, configuration, authentication, and data schema
  • README.md — Updated root README to add the Jboard connector to the examples list

Comment on lines +509 to +512
checkpoint_state = {
    "last_sync_time": record.get(
        "updated_at", datetime.now(timezone.utc).isoformat()
    ),

Copilot AI Nov 20, 2025


State progression logic issue. The checkpoint state at line 510-512 uses record.get("updated_at") which represents the timestamp of the current record being processed, not necessarily the latest timestamp in the batch. This can lead to data being skipped if records are not processed in chronological order.

Instead, track the maximum updated_at across all processed records in the batch:

max_updated_at = record.get("updated_at", datetime.now(timezone.utc).isoformat())
if max_updated_at > checkpoint_state.get("last_sync_time", ""):
    checkpoint_state["last_sync_time"] = max_updated_at

This ensures the state advances correctly and no data is missed in subsequent syncs.

Copilot generated this review using guidance from repository custom instructions.

The connector includes several additional files to support functionality, testing, and deployment:

- `requirements.txt` – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing.

Copilot AI Nov 20, 2025


Reference to non-existent requirements.txt file. Line 98 mentions "requirements.txt – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing", but no requirements.txt file exists in the connector directory.

Either:

  1. Remove this reference from the "Additional files" section if no requirements.txt is needed
  2. Add the requirements.txt file if faker or other dependencies are actually needed for testing

Based on the connector code, no additional dependencies are needed, so this reference should be removed.

Suggested change
- `requirements.txt` – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing.

Copilot uses AI. Check for mistakes.
    "per_page": max_records,
    "page": 1,
}


Copilot AI Nov 20, 2025


Incremental sync not implemented for alert_subscriptions. The get_alert_subscriptions function accepts last_sync_time parameter but never uses it to filter data (unlike get_employers which uses it at line 333). This means alert_subscriptions will always perform a full sync even when incremental sync is enabled.

Add the incremental sync filter after line 429:

# Add incremental sync filters if last_sync_time provided
if last_sync_time:
    params["created_at_from"] = last_sync_time
Suggested change
# Add incremental sync filters if last_sync_time provided
if last_sync_time:
    params["created_at_from"] = last_sync_time

Comment on lines +560 to +564
except Exception as e:
    log.severe(f"Sync failed: {str(e)}")
    raise RuntimeError(f"Failed to sync data: {str(e)}")



Copilot AI Nov 20, 2025


Generic exception catching without proper error classification. The except Exception as e: block at line 560 catches all exceptions indiscriminately, including permanent errors that shouldn't be retried. This can lead to unnecessary retry attempts for non-transient failures.

Consider catching specific exceptions and handling them appropriately:

except requests.exceptions.HTTPError as e:
    if e.response.status_code in [401, 403, 404]:
        # Permanent errors - fail immediately
        log.severe(f"Authentication or resource error: {str(e)}")
        raise
    else:
        # Transient errors - allow retry
        log.severe(f"HTTP error during sync: {str(e)}")
        raise RuntimeError(f"Failed to sync data: {str(e)}")
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
    log.severe(f"Network error during sync: {str(e)}")
    raise RuntimeError(f"Failed to sync data: {str(e)}")
except Exception as e:
    log.severe(f"Unexpected error during sync: {str(e)}")
    raise

This provides better visibility into error types and allows for appropriate handling.

Suggested change
except Exception as e:
    log.severe(f"Sync failed: {str(e)}")
    raise RuntimeError(f"Failed to sync data: {str(e)}")
except requests.exceptions.HTTPError as e:
    status_code = e.response.status_code if e.response is not None else None
    if status_code in [401, 403, 404]:
        log.severe(f"Authentication or resource error: {str(e)}")
        raise
    else:
        log.severe(f"HTTP error during sync: {str(e)}")
        raise RuntimeError(f"Failed to sync data: {str(e)}")
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
    log.severe(f"Network error during sync: {str(e)}")
    raise RuntimeError(f"Failed to sync data: {str(e)}")
except Exception as e:
    log.severe(f"Unexpected error during sync: {str(e)}")
    raise

RuntimeError: If all retry attempts fail or unexpected errors occur.
requests.exceptions.RequestException: For unrecoverable HTTP errors.
"""
url = f"{__API_ENDPOINT}{endpoint}"

Copilot AI Nov 20, 2025


Incorrect API endpoint construction. The code defines __API_VERSION = "v1" but doesn't use it when building the URL. This creates a maintainability issue if the API version needs to change.

Change line 181 to:

url = f"{__API_ENDPOINT}/{__API_VERSION}{endpoint}"

This ensures the version constant is actually used and the URL is correctly formed.

Suggested change
url = f"{__API_ENDPOINT}{endpoint}"
url = f"{__API_ENDPOINT}/{__API_VERSION}{endpoint}"

Comment on lines +26 to +38
## Configuration file
```json
{
    "api_key": "<YOUR_JBOARD_API_KEY>",
    "sync_frequency_hours": "<YOUR_JBOARD_API_SYNC_FREQUENCY_HOURS>",
    "initial_sync_days": "<YOUR_JBOARD_API_INITIAL_SYNC_DAYS>",
    "max_records_per_page": "<YOUR_JBOARD_API_MAX_RECORDS_PER_PAGE>",
    "request_timeout_seconds": "<YOUR_JBOARD_API_REQUEST_TIMEOUT_SECONDS>",
    "retry_attempts": "<YOUR_JBOARD_API_RETRY_ATTEMPTS>",
    "enable_incremental_sync": "<YOUR_JBOARD_API_ENABLE_INCREMENTAL_SYNC>",
    "enable_debug_logging": "<YOUR_JBOARD_API_ENABLE_DEBUG_LOGGING>"
}
```

Copilot AI Nov 20, 2025


Missing note about not checking configuration.json into version control. According to the coding guidelines, the Configuration file section must explicitly mention that configuration.json should not be versioned.

Add after the JSON block:

Note: Ensure that the `configuration.json` file is not checked into version control to protect sensitive information.

Comment on lines +94 to +103
## Additional files

The connector includes several additional files to support functionality, testing, and deployment:

- `requirements.txt` – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing.

- `configuration.json` – Configuration template for API credentials and connector parameters (should be excluded from version control).


## Additional considerations

Copilot AI Nov 20, 2025


Incorrect statement about requirements.txt. The README states "This connector does not require any additional packages" (line 51), but line 98 mentions "requirements.txt – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing."

These statements contradict each other. If there's a requirements.txt file with faker, then the connector does have additional dependencies. Clarify which is correct and update accordingly.

Suggested change
## Additional files
The connector includes several additional files to support functionality, testing, and deployment:
- `requirements.txt` – Python dependency specification for Jboard API integration and connector requirements including faker for mock testing.
- `configuration.json` – Configuration template for API credentials and connector parameters (should be excluded from version control).
## Additional considerations
## Requirements file
The connector requires the `faker` package for mock testing and development purposes.

faker


Note: The `fivetran_connector_sdk:latest` and `requests:latest` packages are pre-installed in the Fivetran environment. To avoid dependency conflicts, do not declare them in your `requirements.txt`.

## Additional files

The connector includes several additional files to support functionality, testing, and deployment:

- `requirements.txt` – Python dependency specification for mock testing and development.
- `configuration.json` – Configuration template for API credentials and connector parameters (should be excluded from version control).

Comment on lines +500 to +504
# The 'upsert' operation is used to insert or update data in the destination table.
# The op.upsert method is called with two arguments:
# - The first argument is the name of the table to upsert the data into.
# - The second argument is a dictionary containing the data to be upserted,
op.upsert(table="employers", data=record)

Copilot AI Nov 20, 2025


Incorrect comment format before op.upsert(). According to the coding guidelines, the required comment before EVERY op.upsert() call should be:

# The 'upsert' operation is used to insert or update data in the destination table.
# The first argument is the name of the destination table.
# The second argument is a dictionary containing the record to be upserted.
op.upsert(table="employers", data=record)

The current comment includes extra details about "The op.upsert method is called with two arguments:" which is not part of the standard format. Use the exact format specified in the guidelines.

Comment on lines +31 to +34
""" ADD YOUR SOURCE-SPECIFIC IMPORTS HERE
Example: import pandas, boto3, etc.
Add comment for each import to explain its purpose for users to follow.
"""

Copilot AI Nov 20, 2025


Template comment not removed. The comment block at lines 31-34 is placeholder text from the template and should be removed. This connector doesn't need any additional source-specific imports beyond the standard ones already included.

Suggested change
""" ADD YOUR SOURCE-SPECIFIC IMPORTS HERE
Example: import pandas, boto3, etc.
Add comment for each import to explain its purpose for users to follow.
"""

Comment on lines +474 to +484
"""
Main synchronization function that fetches and processes data from the Jboard API.
This function orchestrates the entire sync process using memory-efficient streaming patterns.

Args:
    configuration: Configuration dictionary containing API credentials and settings.
    state: State dictionary containing sync cursors and checkpoints from previous runs.

Raises:
    RuntimeError: If sync fails due to API errors or configuration issues.
"""

Copilot AI Nov 20, 2025


Incorrect update function docstring. According to the coding guidelines, the update function must use the exact required docstring format:

def update(configuration: dict, state: dict):
    """
    Define the update function which lets you configure how your connector fetches data.
    See the technical reference documentation for more details on the update function:
    https://fivetran.com/docs/connectors/connector-sdk/technical-reference#update
    Args:
        configuration: a dictionary that holds the configuration settings for the connector.
        state: a dictionary that holds the state of the connector.
    """

Replace the current docstring with this exact format.

Collaborator

@varundhall varundhall left a comment


Same as #256 (review)

Contributor


Left a few suggestions. Thanks.

Page-based pagination with automatic page traversal (refer to `get_employers`, `get_categories`, and `get_alert_subscriptions` functions). The connector uses `page` and `per_page` parameters to fetch data in configurable chunks. Generator-based processing prevents memory accumulation for large datasets by yielding individual records. Processes pages sequentially while yielding individual records for immediate processing, with pagination metadata used to determine when all data has been fetched.

## Data handling
Employer, category, and alert subscription data is mapped from Jboard API's format to normalized database columns (refer to the `__map_employer_data`, `__map_category_data`, and `__map_alert_subscription_data` functions). Nested objects like tags arrays are serialized to JSON strings, and all timestamps are converted to UTC format for consistency.


Suggested change
Employer, category, and alert subscription data is mapped from Jboard API's format to normalized database columns (refer to the `__map_employer_data`, `__map_category_data`, and `__map_alert_subscription_data` functions). Nested objects like tags arrays are serialized to JSON strings, and all timestamps are converted to UTC format for consistency.
Employer, category, and alert subscription data is mapped from Jboard API's format to normalized database columns (refer to the `__map_employer_data`, `__map_category_data`, and `__map_alert_subscription_data` functions). Nested objects like `tags` arrays are serialized to JSON strings, and all timestamps are converted to UTC format for consistency.

Comment on lines +87 to +92

**EMPLOYERS**: `id`, `name`, `description`, `website`, `logo_url`, `featured`, `source`, `created_at`, `updated_at`, `have_posted_jobs`, `have_a_logo`, `sync_timestamp`

**CATEGORIES**: `id`, `name`, `description`, `parent_id`, `sort_order`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`

**ALERT_SUBSCRIPTIONS**: `id`, `email`, `query`, `location`, `search_radius`, `remote_work_only`, `category_id`, `job_type`, `tags`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`


Suggested change
**EMPLOYERS**: `id`, `name`, `description`, `website`, `logo_url`, `featured`, `source`, `created_at`, `updated_at`, `have_posted_jobs`, `have_a_logo`, `sync_timestamp`
**CATEGORIES**: `id`, `name`, `description`, `parent_id`, `sort_order`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`
**ALERT_SUBSCRIPTIONS**: `id`, `email`, `query`, `location`, `search_radius`, `remote_work_only`, `category_id`, `job_type`, `tags`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`
EMPLOYERS: `id`, `name`, `description`, `website`, `logo_url`, `featured`, `source`, `created_at`, `updated_at`, `have_posted_jobs`, `have_a_logo`, `sync_timestamp`
CATEGORIES: `id`, `name`, `description`, `parent_id`, `sort_order`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`
ALERT_SUBSCRIPTIONS: `id`, `email`, `query`, `location`, `search_radius`, `remote_work_only`, `category_id`, `job_type`, `tags`, `is_active`, `created_at`, `updated_at`, `sync_timestamp`

- [ibm_informix_using_ibm_db](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/ibm_informix_using_ibm_db) - This example shows how to connect and sync data from IBM Informix using Connector SDK. This example uses the `ibm_db` library to connect to the Informix database and fetch data.
- [influx_db](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/influx_db) - This example shows how to sync data from InfluxDB using Connector SDK. It uses the `influxdb3_python` library to connect to InfluxDB and fetch time-series data from a specified measurement.
- [iterate](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/iterate) - This example shows how to sync NPS survey data from the Iterate REST API and load it into your destination using Connector SDK. The connector fetches NPS surveys and their individual responses, providing complete survey analytics data for downstream analysis.
- [jboard](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/jboard) - This example shows how to sync employers, job categories, and alert subscriptions data from Jboard API to your destination warehouse. You need to provide your Jboard API key for this example to work.


Suggested change
- [jboard](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/jboard) - This example shows how to sync employers, job categories, and alert subscriptions data from Jboard API to your destination warehouse. You need to provide your Jboard API key for this example to work.
- [jboard](https://github.com/fivetran/fivetran_connector_sdk/tree/main/connectors/jboard) - This example shows how to sync employers, job categories, and alert subscriptions data from the Jboard API to your destination warehouse. You need to provide your Jboard API key for this example to work.
