Extract website data to CSV, Excel, JSON, Google Sheets, and webhooks with a no-code Chrome scraper
Updated Apr 16, 2026
Crawlee: a web scraping and browser automation library for Python for building reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Parsel, BeautifulSoup, Playwright, and raw HTTP. Supports both headful and headless modes, with proxy rotation.
A micro-framework for asynchronous deep crawls and web scraping with Python
Declarative, resilient, and typed web scraping. Define what you want — topscrape figures out how to get it, even when the site changes.
Xcrap Parser is a parsing package for the Xcrap framework that handles things like HTML and JSON with declarative models.
🔍 A web scraper for fetching products from `www.fortex.ir`
Responsive real estate landing page (HTML/SCSS/Parcel).
Analyze song lyrics
dude uncomplicated data extraction: A simple framework for writing web scrapers using Python decorators
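The decorator-based style that dude describes can be sketched roughly as below. This is an illustrative pattern only, not dude's actual API; the `select` and `scrape` names, and the regex-based matching, are invented here to show how decorators can register extraction rules:

```python
# Hypothetical sketch of a decorator-driven scraper (NOT dude's real API):
# each decorated function is registered as a handler for a pattern, and
# scrape() runs every handler over the page. A real framework would use an
# HTML parser instead of regular expressions.
import re

_handlers = []

def select(pattern):
    """Register the decorated function to run on every regex match."""
    def decorator(func):
        _handlers.append((re.compile(pattern), func))
        return func
    return decorator

@select(r'href="([^"]+)"')
def link(match):
    # Turn each raw match into a structured record.
    return {"url": match.group(1)}

def scrape(html):
    results = []
    for pattern, func in _handlers:
        results.extend(func(m) for m in pattern.finditer(html))
    return results

data = scrape('<a href="https://example.com">Example</a>')
print(data)
```

The appeal of this pattern is that extraction rules stay declarative: adding a field means adding one decorated function, with no changes to the crawl loop.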
Jupyter notebooks designed for various projects
This repository contains a coding challenge for a Data Engineering role. Here you'll find a solution that crawls articles from a news site, cleans up the responses, stores them in `BigQuery`, and makes them available for searching via an `API`.
Analysis of Taylor Swift's discography and lyrics via web scraping and SQLite (Streamlit app)
The LinkedIn Company Job Scraper uses web scraping to collect job titles, company names, descriptions, qualifications, and locations.
An AWS serverless web scraping project that pulls inventory data from Ariana Grande's web store.
Technologies: HTML, SASS, JS, REST API, AJAX, Parcel. Adaptive Design. Group project.