Deep zoom viewer for gigantic NASA images with AI-powered search and collaborative annotations.
Built for a 36-hour hackathon, Astro-Zoom lets you explore massive astronomical images with smooth deep-zoom navigation powered by OpenSeadragon, find interesting features using CLIP-based semantic search, and annotate regions collaboratively.
- **Image Upload** – Upload your own images via the web UI with automatic tile processing
- **Deep Zoom Navigation** – Explore gigapixel images with smooth pan/zoom using OpenSeadragon
- **AI Search** – Find features with natural-language queries powered by CLIP embeddings
- **Annotations** – Create points, rectangles, and polygons with labels
- **Compare Mode** – Side-by-side view with synchronized zoom/pan
- **Timeline** – View temporal changes in image datasets
- **Modern UI** – Dark theme, responsive design, keyboard shortcuts
- **Production Ready** – Docker, rate limiting, authentication, CI/CD
```
astro-zoom/
├── apps/
│   ├── web/              Next.js 14 + OpenSeadragon viewer
│   ├── api/              FastAPI backend with SQLite
│   └── ai/               CLIP + FAISS semantic search
├── packages/
│   ├── proto/            Shared TypeScript/Python schemas
│   └── ui/               React component library
├── infra/
│   ├── docker-compose.yml
│   ├── tiles/            Sample DZI tile pyramids
│   └── Dockerfile.*
└── .github/workflows/    CI/CD pipelines
```
**Frontend**
- Next.js 14 (App Router), TypeScript
- OpenSeadragon (deep zoom)
- Zustand (state), TanStack Query (data fetching)
- Tailwind CSS
**Backend**
- FastAPI, Uvicorn, SQLModel (SQLite)
- JWT authentication, rate limiting
- DZI tile serving
**AI**
- CLIP (OpenAI ViT-B/32) or stub fallback
- FAISS (CPU) vector search
- Numpy, Pillow
**DevOps**
- Monorepo: pnpm workspaces + Turborepo
- Docker Compose
- GitHub Actions CI
- Node.js 20+ (download)
- pnpm 8+ (installed automatically if missing)
- Python 3.11+ (download)
- Docker (optional, for containerized setup)
```bash
# Clone the repository
git clone <your-repo-url>
cd astro-zoom

# Run setup script (installs deps, generates tiles)
chmod +x infra/setup.sh
./infra/setup.sh

# Start all services
pnpm dev
```

Services will be available at:
- Web: http://localhost:3000
- API: http://localhost:8000 (docs at /docs)
- AI: http://localhost:8001

⚠️ Troubleshooting: If you see a "Failed to fetch datasets" error, the API server may not be running. See START_SERVICES.md for help.
```bash
# Generate sample tiles first
python3 infra/generate_sample_tiles.py

# Start all services
cd infra
docker compose up --build
```

Wait ~30s for services to start, then visit http://localhost:3000.
The easiest way to add your own high-resolution images is through the web interface:
- Start the services: `pnpm dev`
- Navigate to http://localhost:3000
- Click "Upload Image" button
- Select or drag-and-drop your image (JPG, PNG, TIFF up to 500MB)
- Enter name and description
- Wait for processing (10-30 minutes depending on size)
- View your dataset!
The system will automatically generate optimized DZI tiles with progress tracking.
Supported formats: JPG, PNG, TIFF • Max size: 500MB • Min dimensions: 256×256px
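The validation rules above can be sketched as a small pre-check (a hypothetical helper for illustration; the real checks live in the API's upload router):

```python
import os

# Hypothetical sketch of the upload validation rules described above.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}
MAX_SIZE_BYTES = 500 * 1024 * 1024   # 500MB limit
MIN_DIMENSION = 256                  # 256x256px minimum

def validate_upload(filename: str, size_bytes: int, width: int, height: int) -> list[str]:
    """Return a list of validation errors (empty list means the upload is accepted)."""
    errors = []
    if os.path.splitext(filename.lower())[1] not in ALLOWED_EXTENSIONS:
        errors.append("unsupported format (use JPG, PNG, or TIFF)")
    if size_bytes > MAX_SIZE_BYTES:
        errors.append("file exceeds 500MB limit")
    if width < MIN_DIMENSION or height < MIN_DIMENSION:
        errors.append("image smaller than 256x256px")
    return errors

# A 4096x4096 JPG of ~12MB passes; a 100x100 PNG does not.
print(validate_upload("andromeda.jpg", 12_000_000, 4096, 4096))  # []
```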
Each dataset card on the homepage includes a delete button (trash icon):
- Click the delete button on any dataset
- Confirm the deletion
- The system will robustly remove all associated files (tiles, uploads, database entries)
- Deletion works even if the dataset is corrupted or partially missing
Storage Optimization: The system automatically cleans up temporary files:
- Original upload files are deleted after successful tile generation
- Temp directories are cleaned automatically
- Only the optimized tiles and database entries are kept
- This saves 40-60% disk space compared to keeping original uploads
See STORAGE_LOCATIONS.md for detailed storage information.
The project includes mock sample tiles by default. To manually process the real 209MB NASA Andromeda image:

```bash
# Install tiling dependencies
pip install -r infra/requirements_tiling.txt

# Process the real image (takes 10-30 minutes)
python infra/process_real_image.py
```

This downloads the actual NASA Hubble Andromeda mosaic (42208×9870 pixels) and generates an optimized tile pyramid. See infra/TILE_GENERATION.md for details.
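A Deep Zoom pyramid halves the image repeatedly until it reaches 1×1, so the level count follows directly from the largest dimension. A sketch of the arithmetic (illustrative, not the processing script itself):

```python
import math

def dzi_max_level(width: int, height: int) -> int:
    """Highest DZI level; each lower level halves the dimensions until 1x1."""
    return math.ceil(math.log2(max(width, height)))

def level_size(width: int, height: int, level: int) -> tuple[int, int]:
    """Image dimensions at a given pyramid level."""
    scale = 2 ** (dzi_max_level(width, height) - level)
    return math.ceil(width / scale), math.ceil(height / scale)

# The Andromeda mosaic (42208x9870) needs levels 0..16, i.e. 17 levels.
w, h = 42208, 9870
print(dzi_max_level(w, h) + 1)   # 17
print(level_size(w, h, 16))      # (42208, 9870), full resolution at the top level
print(level_size(w, h, 0))       # (1, 1)
```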
- Visit http://localhost:3000
- Click on "Andromeda Galaxy (Sample)" or "Andromeda Galaxy (NASA Hubble 2025)"
- Use the mouse to pan/zoom, or:
  - Scroll to zoom
  - Drag to pan
  - `F` key to fit image
  - `G` key to toggle grid
- `1` – Explore mode
- `2` – Compare mode (side-by-side)
- `3` – Annotate mode
- `F` – Fit to viewport
- `G` – Toggle grid overlay
- Switch to Annotate mode (press `3` or click the toolbar)
- Choose annotation type: Point or Rectangle
- Click on the image:
  - Point: single click
  - Rectangle: click start, then click end
- Annotations save automatically and persist on refresh
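Under the hood, an annotation is just a small record tied to image coordinates. A hedged sketch of what the payloads might look like (field names here are assumptions; the real schema lives in packages/proto and apps/api/app/models.py):

```python
# Hypothetical annotation payloads, for illustration only.
def make_point(dataset_id: str, x: float, y: float, label: str) -> dict:
    """A point annotation: a single normalized image coordinate plus a label."""
    return {"datasetId": dataset_id, "type": "point",
            "geometry": {"x": x, "y": y}, "label": label}

def make_rect(dataset_id: str, x0: float, y0: float,
              x1: float, y1: float, label: str) -> dict:
    """A rectangle annotation from two clicked corners.

    Normalize so the stored rectangle always has x0 <= x1 and y0 <= y1,
    regardless of which corner was clicked first.
    """
    return {"datasetId": dataset_id, "type": "rect",
            "geometry": {"x0": min(x0, x1), "y0": min(y0, y1),
                         "x1": max(x0, x1), "y1": max(y0, y1)},
            "label": label}

rect = make_rect("andromeda", 0.8, 0.2, 0.3, 0.6, "star cluster")
print(rect["geometry"])  # {'x0': 0.3, 'y0': 0.2, 'x1': 0.8, 'y1': 0.6}
```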
- Open the Search Box (left sidebar)
- Enter a natural language query, e.g.:
- "bright star cluster"
- "spiral arm structure"
- "dark dust lane"
- Click results to fly to matching regions
Note: Search uses stub embeddings by default. For real AI search, install `open-clip-torch` and `torch` in `apps/ai`.
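The overall flow (embed region descriptions, embed the query, rank by cosine similarity) can be sketched with a deterministic hash-based stand-in. This is illustrative only: the real service uses CLIP vectors with FAISS, and the shipped stub returns random results.

```python
import hashlib
import math

def stub_embed(text: str, dim: int = 32) -> list[float]:
    """Deterministic pseudo-embedding: hash bytes mapped to [-1, 1].
    A stand-in for CLIP when open-clip-torch is not installed."""
    digest = hashlib.sha256(text.lower().encode()).digest()
    return [digest[i % len(digest)] / 127.5 - 1.0 for i in range(dim)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Index a few region descriptions, then rank them against a query.
regions = {"r1": "bright star cluster", "r2": "dark dust lane", "r3": "spiral arm structure"}
index = {rid: stub_embed(desc) for rid, desc in regions.items()}

def search(query: str) -> list[str]:
    q = stub_embed(query)
    return sorted(index, key=lambda rid: cosine(q, index[rid]), reverse=True)

print(search("bright star cluster")[0])  # r1, identical text scores cosine ~1.0
```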
```
apps/web/
└── src/
    ├── app/              Next.js App Router pages
    ├── components/       React components
    │   ├── DeepZoomViewer.tsx
    │   ├── CompareSwipe.tsx
    │   ├── Annotator.tsx
    │   └── ...
    ├── lib/              API client
    └── store/            Zustand stores
```

```
apps/api/
└── app/
    ├── main.py           FastAPI app
    ├── models.py         SQLModel database models
    ├── routers/          API endpoints
    │   ├── datasets.py
    │   ├── annotations.py
    │   ├── search.py
    │   └── tiles.py
    └── seed.py           Database seeding
```

```
apps/ai/
└── app/
    ├── main.py           AI service
    ├── clip_stub.py      Fallback implementation
    └── indexer.py        FAISS indexing
```
**Web (Next.js)**

```bash
cd apps/web
pnpm dev
```

**API (FastAPI)**

```bash
cd apps/api
make dev
# or: uvicorn app.main:app --reload --port 8000
```

**AI (Python)**

```bash
cd apps/ai
make dev
# or: uvicorn app.main:app --reload --port 8001
```

```bash
# Build all packages
pnpm build

# Run production web server
cd apps/web
pnpm build
pnpm start

# Run production API/AI (use gunicorn or similar)
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
```

```bash
# Lint all code
pnpm lint

# Typecheck TypeScript
pnpm typecheck

# Python linting
cd apps/api
ruff check app/

# Run tests (if implemented)
pnpm test
```

The easiest way! Upload through the web interface at http://localhost:3000:
- Click "Upload Image" button
- Select your image (JPG, PNG, or TIFF up to 500MB)
- Enter dataset name and description
- System automatically:
  - Validates the image
  - Generates multi-resolution DZI tiles
  - Creates a database entry
  - Indexes for search
**API Upload:**

```bash
curl -X POST http://localhost:8000/uploads/upload \
  -F "file=@your-image.jpg" \
  -F "name=My Dataset" \
  -F "description=Optional description"

# Check processing status
curl http://localhost:8000/uploads/status/{dataset-id}
```

For advanced use cases or external tile generation:
1. Create DZI Tiles

Use vips, OpenSlide, or Python:

```bash
# With vips
vips dzsave your_image.tif infra/tiles/my-dataset

# Or use the provided script
python infra/process_real_image.py  # Edit paths in script
```

2. Register Dataset

Add to apps/api/app/seed.py:

```python
dataset = Dataset(
    id="my-dataset",
    name="My Amazing Dataset",
    description="High-resolution image of...",
    tile_type="dzi",
    tile_url="/tiles/my-dataset",
    levels=json.dumps([0, 1, 2, 3, 4]),
    pixel_size=json.dumps([16384, 16384]),
)
session.add(dataset)
```

3. Build Search Index (optional)

```bash
cd apps/ai
python build_index.py my-dataset
```

Demo credentials (for annotation writes):
- Username: `editor`
- Password: `demo123`
Login at /auth/login or via API:

```bash
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"editor","password":"demo123"}'
```

Use the returned JWT token in an `Authorization: Bearer <token>` header.
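A client-side sketch of working with the token (the claim names below are made up for illustration; the API verifies the real signature server-side):

```python
import base64
import json

def auth_header(token: str) -> dict:
    """Attach the JWT returned by /auth/login to subsequent requests."""
    return {"Authorization": f"Bearer {token}"}

def peek_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature (debugging only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a throwaway token just to demonstrate decoding (not a real signed token):
claims = {"sub": "editor", "role": "annotator"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
demo = f"eyJhbGciOiJIUzI1NiJ9.{body}.sig"
print(peek_claims(demo)["sub"])  # editor
```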
Build images:

```bash
cd infra
docker compose build
```

Follow logs:

```bash
docker compose logs -f web
docker compose logs -f api
docker compose logs -f ai
```

Full reset:

```bash
docker compose down -v
docker compose up --build
```

FastAPI automatically generates OpenAPI docs:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
**Datasets**
- `GET /datasets` – List all datasets
- `GET /datasets/{id}` – Get dataset details

**Annotations**
- `GET /annotations?datasetId=X` – List annotations
- `POST /annotations` – Create annotation
- `PUT /annotations/{id}` – Update annotation
- `DELETE /annotations/{id}` – Delete annotation

**Search**
- `GET /search?q=crater&datasetId=X` – AI semantic search

**Tiles**
- `GET /tiles/{dataset}/info.dzi` – DZI descriptor
- `GET /tiles/{dataset}/{level}/{col}_{row}.jpg` – Tile image

**Uploads**
- `POST /uploads/upload` – Upload and process image
- `GET /uploads/status/{id}` – Check processing status
- `DELETE /uploads/{id}` – Delete dataset and tiles
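Tile coordinates in those URLs follow the standard DZI layout. A sketch of the addressing arithmetic, assuming a 256px tile size (check info.dzi for the actual value):

```python
import math

TILE_SIZE = 256  # a common DZI default; the real value is in info.dzi

def tile_grid(width: int, height: int, level: int, max_level: int) -> tuple[int, int]:
    """Columns and rows of tiles at a level, matching {level}/{col}_{row}.jpg."""
    scale = 2 ** (max_level - level)
    level_w = math.ceil(width / scale)
    level_h = math.ceil(height / scale)
    return math.ceil(level_w / TILE_SIZE), math.ceil(level_h / TILE_SIZE)

# Full-resolution Andromeda mosaic (42208x9870, max level 16):
print(tile_grid(42208, 9870, 16, 16))  # (165, 39)
```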
This is a hackathon project, but contributions are welcome!
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing`)
- Commit your changes (`git commit -am 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing`)
- Open a Pull Request
MIT License - see LICENSE file for details.
- OpenSeadragon – Incredible deep zoom library
- OpenAI CLIP – Semantic image search
- NASA/ESA – Inspiring imagery
- FastAPI – Modern Python web framework
- Next.js – React framework
```bash
# Find and kill the process using port 3000
lsof -ti:3000 | xargs kill -9

# Or use a different port
PORT=3001 pnpm dev
```

```bash
# Remove SQLite lock files
rm data/astro.db-shm data/astro.db-wal
```

Make sure NEXT_PUBLIC_API_URL matches your API URL:

```bash
# .env
NEXT_PUBLIC_API_URL=http://localhost:8000
```

Check that tiles exist:

```bash
ls -la infra/tiles/andromeda/
python3 infra/generate_sample_tiles.py
```

The stub implementation generates random results. For real search:

```bash
cd apps/ai
pip install open-clip-torch torch
python build_index.py andromeda
```

Built for a 36-hour hackathon by your team name here.

Happy exploring! ✨