💡 AI-powered tool for understanding classic car market trends and sentiment analysis across the web.
🎥 Demo video: `vintage_ai_demo.mp4`
Vintage AI scrapes social media, forums and Google Trends to deliver sentiment analysis and topic discovery on classic-car models.
The project is built with a Python backend using FastAPI and a Streamlit-based dashboard for user interaction. Data is managed using DuckDB, and various Python libraries are employed for data scraping, processing, and analysis.
| Layer | Tech |
| --- | --- |
| Backend | FastAPI, Pydantic |
| Frontend | Streamlit |
| Storage | DuckDB |
| Dev tools | uv, pre-commit |
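As a rough illustration of how the backend and schema layers fit together (the route path and model fields below are hypothetical, not the project's actual API, which lives under `src/vintage_ai/api/`):

```python
# Hypothetical sketch of the backend layer: FastAPI + Pydantic.
# Route and fields are illustrative, not the project's real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CarSnapshot(BaseModel):
    model: str               # e.g. "Lancia Delta Integrale"
    sentiment_score: float   # aggregated sentiment in [-1, 1]
    popularity: float        # normalized popularity index

@app.get("/cars/{model}/snapshot", response_model=CarSnapshot)
async def get_snapshot(model: str) -> CarSnapshot:
    # The real service would query DuckDB and external sources here.
    return CarSnapshot(model=model, sentiment_score=0.42, popularity=0.87)
```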
Vintage AI was created for the Motor Valley Fest Accelerator Hackathon 2025, a unique event organized by AssetClassic and Motor Valley Accelerator, sponsored by TikTok.
The challenge invited students and recent graduates passionate about AI, data science, and vintage cars to develop innovative digital solutions. Teams leveraged provided datasets and open-source tools to uncover hidden insights into market sentiment and trends around classic cars, presenting their projects to a jury including representatives from AssetClassic, Motor Valley Accelerator, TikTok, and McKinsey & Company.
🙏 Special thanks to the organizers, sponsors, jury members, and mentors for supporting and facilitating the event.
This solution is designed to meet the hackathon's evaluation criteria:
- 💡 Technical innovation – modular Python stack, NLP pipelines
- 📈 Accuracy & reliability – median aggregation (sketched after this list), strict schema validation
- 🎨 Usability & UX – one-click dashboard, interactive charts
- ⚙️ Scalability & performance – DuckDB analytics, FastAPI async support, caching
- 💼 Business relevance – investor-oriented insights for classic-car valuation
- 📣 Presentation & clarity – clean architecture and documentation
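To illustrate the median-aggregation idea behind the accuracy criterion (a minimal sketch; platform names and scores are made up for illustration), per-platform sentiment can be combined with a median so one noisy source cannot skew the overall score:

```python
# Illustrative only: combine per-platform sentiment with a median,
# so a single outlier platform cannot dominate the aggregate.
from statistics import median

platform_scores = {"tiktok": 0.61, "forums": 0.55, "google_trends": -0.10}

overall = median(platform_scores.values())
print(f"overall sentiment: {overall:.2f}")  # 0.55
```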
- Install `uv` globally.
- Clone this repo:

  ```bash
  git clone https://github.com/e-candeloro/vintage_ai.git
  ```

- Go to the repo directory:

  ```bash
  cd vintage_ai
  ```

- Create the virtual environment and install the dependencies:

  ```bash
  uv sync
  ```

- Install and test the pre-commit hooks:

  ```bash
  pre-commit install
  pre-commit run --all-files
  ```

- Rename `.env.example` to `.env`. This file holds the environment settings the program needs (see the sketch after these steps for how they are typically loaded).
- Run the backend:

  ```bash
  PYTHONPATH=src uvicorn vintage_ai.api.main:app --reload
  ```

- Open a new shell and run the Streamlit front-end:

  ```bash
  streamlit run dashboard/app.py
  ```

With the dependencies installed, this spins up the FastAPI backend at http://127.0.0.1:8000 and the Streamlit dashboard at http://localhost:8501.
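The `.env` values are read by `settings.py`, described below. As a hedged sketch of how that loading typically works with Pydantic settings (the variable names here are hypothetical; the real ones live in `.env.example`):

```python
# Hypothetical sketch of .env loading via Pydantic settings.
# (In Pydantic v2 this lives in the pydantic-settings package;
#  in v1 it was pydantic.BaseSettings.)
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Illustrative fields only -- the real names are in .env.example.
    duckdb_path: str = "data/vintage.duckdb"
    api_host: str = "127.0.0.1"
    api_port: int = 8000

settings = Settings()  # values in .env override the defaults above
```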
- `src/vintage_ai/`: Contains the core backend logic.
  - `api/`: Defines the FastAPI application, including routes (`routes/cars.py`) and core schemas (`core/schemas/v1.py`).
    - `main.py`: Sets up the FastAPI application and middleware.
    - `routes/cars.py`: Handles API requests related to car data, currently providing a snapshot endpoint.
    - `core/schemas/v1.py`: Defines Pydantic models for request and response data structures, ensuring type safety and validation.
  - `services/`: Houses business logic.
    - `car_data_service.py`: Contains functions to fetch, aggregate, and process car-related data from DuckDB and potentially other sources like Google Trends (via `fetch_trends_global`). It aggregates metrics from different platforms and stores/retrieves overall snapshots; a hedged sketch of that aggregation is shown after this list.
    - `trends_services.py`: (Referenced in `car_data_service.py`.) Likely contains logic for fetching trend data from external APIs like Google Trends.
  - `settings.py`: Manages application settings using Pydantic's `BaseSettings`, allowing configuration via environment variables or a `.env` file. This is crucial for managing different environments (dev, prod) and sensitive information.
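A rough sketch of what the snapshot aggregation in `car_data_service.py` might look like (the table name, columns, and helper are assumptions for illustration, not the project's actual code):

```python
# Illustrative only: aggregate per-platform metrics from DuckDB
# into one snapshot. Table and column names are hypothetical.
from statistics import median

import duckdb

def build_snapshot(model: str, db_path: str = "data/vintage.duckdb") -> dict:
    con = duckdb.connect(db_path, read_only=True)
    # Median price per platform, then a median across platforms,
    # so a single outlier platform cannot skew the snapshot.
    rows = con.execute(
        """
        SELECT platform, median(price) AS med_price
        FROM listings
        WHERE car_model = ?
        GROUP BY platform
        """,
        [model],
    ).fetchall()
    con.close()
    per_platform = {platform: med for platform, med in rows}
    return {
        "model": model,
        "median_price": median(per_platform.values()) if per_platform else None,
        "platforms": per_platform,
    }
```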
- `dashboard/app.py`: A Streamlit application that serves as the user interface. It:
  - Takes user input for a car model.
  - Calls the backend API to fetch processed data.
  - Displays various metrics and visualizations, including:
    - Time-series charts for price and popularity.
    - Correlation between price and popularity.
    - Sentiment analysis (top topics and overall score).
    - Summary metrics in tabular and JSON format.
  - Includes error handling for API calls and data processing.
  - Uses caching to improve performance (a hedged sketch of this pattern follows below).
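As a rough sketch of the dashboard's call-and-cache pattern (the endpoint path and JSON shape are assumptions, not the project's actual API):

```python
# Illustrative only: fetch a snapshot from the backend and cache it,
# so repeated lookups of the same model skip the network round-trip.
import requests
import streamlit as st

API_URL = "http://127.0.0.1:8000"  # FastAPI backend from the setup steps

@st.cache_data(ttl=600)  # re-fetch a given model at most every 10 minutes
def fetch_snapshot(model: str) -> dict:
    resp = requests.get(f"{API_URL}/cars/{model}/snapshot", timeout=10)
    resp.raise_for_status()
    return resp.json()

model = st.text_input("Car model", "Lancia Delta Integrale")
if model:
    try:
        snapshot = fetch_snapshot(model)
        st.metric("Overall sentiment", snapshot.get("sentiment_score", "n/a"))
        st.json(snapshot)  # raw payload, mirroring the dashboard's JSON view
    except requests.RequestException as exc:
        st.error(f"API call failed: {exc}")
```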