🌌 Astro-Zoom

Deep zoom viewer for gigantic NASA images with AI-powered search and collaborative annotations.

Built for a 36-hour hackathon, Astro-Zoom lets you explore massive astronomical images with smooth deep-zoom navigation powered by OpenSeadragon, find interesting features using CLIP-based semantic search, and annotate regions collaboratively.


✨ Features

  • 📤 Image Upload — Upload your own images via web UI with automatic tile processing
  • 🔍 Deep Zoom Navigation — Explore gigapixel images with smooth pan/zoom using OpenSeadragon
  • 🤖 AI Search — Find features with natural language queries powered by CLIP embeddings
  • ✏️ Annotations — Create points, rectangles, and polygons with labels
  • ⚖️ Compare Mode — Side-by-side view with synchronized zoom/pan
  • ⏱️ Timeline — View temporal changes in image datasets
  • 🎨 Modern UI — Dark theme, responsive design, keyboard shortcuts
  • 🚀 Production Ready — Docker, rate limiting, authentication, CI/CD

πŸ—οΈ Architecture

astro-zoom/
├── apps/
│   ├── web/          Next.js 14 + OpenSeadragon viewer
│   ├── api/          FastAPI backend with SQLite
│   └── ai/           CLIP + FAISS semantic search
├── packages/
│   ├── proto/        Shared TypeScript/Python schemas
│   └── ui/           React component library
├── infra/
│   ├── docker-compose.yml
│   ├── tiles/        Sample DZI tile pyramids
│   └── Dockerfile.*
└── .github/workflows/ CI/CD pipelines

Tech Stack

Frontend

  • Next.js 14 (App Router), TypeScript
  • OpenSeadragon (deep zoom)
  • Zustand (state), TanStack Query (data fetching)
  • Tailwind CSS

Backend

  • FastAPI, Uvicorn, SQLModel (SQLite)
  • JWT authentication, rate limiting
  • DZI tile serving

AI

  • CLIP (OpenAI ViT-B/32) or stub fallback
  • FAISS (CPU) vector search
  • Numpy, Pillow

DevOps

  • Monorepo: pnpm workspaces + Turborepo
  • Docker Compose
  • GitHub Actions CI

🚀 Quick Start

Prerequisites

  • Node.js 20+ (download)
  • pnpm 8+ (installed automatically if missing)
  • Python 3.11+ (download)
  • Docker (optional, for containerized setup)

Option 1: Native Development (Recommended)

# Clone the repository
git clone <your-repo-url>
cd astro-zoom

# Run setup script (installs deps, generates tiles)
chmod +x infra/setup.sh
./infra/setup.sh

# Start all services
pnpm dev

Services will be available at:

  • Web: http://localhost:3000
  • API: http://localhost:8000
  • AI: http://localhost:8001

⚠️ Troubleshooting: If you see a "Failed to fetch datasets" error, the API server may not be running. See START_SERVICES.md for help.

Option 2: Docker Compose

# Generate sample tiles first
python3 infra/generate_sample_tiles.py

# Start all services
cd infra
docker compose up --build

Wait ~30s for services to start, then visit http://localhost:3000.

📖 Usage

Uploading Your Own Images

The easiest way to add your own high-resolution images is through the web interface:

  1. Start the services: pnpm dev
  2. Navigate to http://localhost:3000
  3. Click "Upload Image" button
  4. Select or drag-and-drop your image (JPG, PNG, TIFF up to 500MB)
  5. Enter name and description
  6. Wait for processing (10-30 minutes depending on size)
  7. View your dataset!

The system will automatically generate optimized DZI tiles with progress tracking.

Supported formats: JPG, PNG, TIFF • Max size: 500MB • Min dimensions: 256×256px

Managing Datasets

Each dataset card on the homepage includes a delete button (trash icon):

  • Click the delete button on any dataset
  • Confirm the deletion
  • The system removes all associated files (tiles, uploads, database entries)
  • Deletion works even if the dataset is corrupted or partially missing

Storage Optimization: The system automatically cleans up temporary files:

  • Original upload files are deleted after successful tile generation
  • Temp directories are cleaned automatically
  • Only the optimized tiles and database entries are kept
  • This saves 40-60% disk space compared to keeping original uploads

See STORAGE_LOCATIONS.md for detailed storage information.

Using Real NASA Data (Manual Processing)

The project includes mock sample tiles by default. To manually process the real 209MB NASA Andromeda image:

# Install tiling dependencies
pip install -r infra/requirements_tiling.txt

# Process the real image (takes 10-30 minutes)
python infra/process_real_image.py

This downloads the actual NASA Hubble Andromeda mosaic (42208x9870 pixels) and generates an optimized tile pyramid. See infra/TILE_GENERATION.md for details.
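For intuition about what tile generation produces: a Deep Zoom (DZI) pyramid's top level is the full-resolution image and each lower level halves the dimensions, so the level count follows directly from the image size. A small sketch of that math (assuming the common 256 px tile size; the project's actual tile size is recorded in the generated .dzi descriptor):

```python
import math

def dzi_levels(width: int, height: int) -> int:
    """Number of levels in a Deep Zoom pyramid.

    The top level index is ceil(log2(max dimension)); levels run
    from 0 up to that index, giving top + 1 levels in total.
    """
    return math.ceil(math.log2(max(width, height))) + 1

def tiles_at_level(width: int, height: int, level: int, tile_size: int = 256) -> int:
    """Tile count at a given level, assuming 256 px tiles."""
    top = dzi_levels(width, height) - 1
    scale = 2 ** (top - level)           # each step down halves the image
    w = math.ceil(width / scale)
    h = math.ceil(height / scale)
    return math.ceil(w / tile_size) * math.ceil(h / tile_size)

# The NASA Andromeda mosaic mentioned above:
print(dzi_levels(42208, 9870))          # → 17 (levels 0..16)
print(tiles_at_level(42208, 9870, 16))  # → 6435 tiles at full resolution
```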

Exploring Datasets

  1. Visit http://localhost:3000
  2. Click on "Andromeda Galaxy (Sample)" or "Andromeda Galaxy (NASA Hubble 2025)"
  3. Use mouse to pan/zoom, or:
    • Scroll to zoom
    • Drag to pan
    • F key to fit image
    • G key to toggle grid

Keyboard Shortcuts

  • 1 — Explore mode
  • 2 — Compare mode (side-by-side)
  • 3 — Annotate mode
  • F — Fit to viewport
  • G — Toggle grid overlay

Creating Annotations

  1. Switch to Annotate mode (press 3 or click toolbar)
  2. Choose annotation type: Point or Rectangle
  3. Click on the image:
    • Point: Single click
    • Rectangle: Click start, then click end
  4. Annotations save automatically and persist on refresh
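The two-click rectangle flow above boils down to normalizing two corner points into an (x, y, width, height) box, since the user may click the corners in any order. A minimal sketch (hypothetical helper, not the project's actual code):

```python
def rect_from_clicks(p1, p2):
    """Build an (x, y, w, h) rectangle from two corner clicks in any order."""
    (x1, y1), (x2, y2) = p1, p2
    # Take the top-left-most point as the origin, absolute deltas as size.
    return (min(x1, x2), min(y1, y2), abs(x2 - x1), abs(y2 - y1))

print(rect_from_clicks((120, 80), (40, 200)))  # → (40, 80, 80, 120)
```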

AI Search

  1. Open the Search Box (left sidebar)
  2. Enter a natural language query, e.g.:
    • "bright star cluster"
    • "spiral arm structure"
    • "dark dust lane"
  3. Click results to fly to matching regions

Note: Search uses stub embeddings by default. For real AI search, install open-clip-torch and torch in apps/ai.
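Conceptually, the search service embeds the query and each indexed region into the same vector space and returns the nearest regions. A stripped-down sketch of the ranking step using plain Python instead of FAISS (the toy 4-d vectors below are stand-ins for real CLIP embeddings; region IDs are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, region_vecs, k=3):
    """Rank regions by similarity to the query embedding, best first."""
    scored = [(cosine(query_vec, v), rid) for rid, v in region_vecs.items()]
    return [rid for _, rid in sorted(scored, reverse=True)[:k]]

# Toy embeddings standing in for CLIP vectors of tile regions:
regions = {
    "tile_3_1": [0.9, 0.1, 0.0, 0.1],
    "tile_0_2": [0.1, 0.9, 0.2, 0.0],
    "tile_5_5": [0.8, 0.2, 0.1, 0.0],
}
print(top_k([1.0, 0.0, 0.0, 0.0], regions, k=2))  # → ['tile_3_1', 'tile_5_5']
```

FAISS does the same nearest-neighbor lookup, just with an index structure that scales to millions of vectors.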

🔧 Development

Project Structure

apps/web/
├── src/
│   ├── app/              Next.js App Router pages
│   ├── components/       React components
│   │   ├── DeepZoomViewer.tsx
│   │   ├── CompareSwipe.tsx
│   │   ├── Annotator.tsx
│   │   └── ...
│   ├── lib/              API client
│   └── store/            Zustand stores

apps/api/
├── app/
│   ├── main.py           FastAPI app
│   ├── models.py         SQLModel database models
│   ├── routers/          API endpoints
│   │   ├── datasets.py
│   │   ├── annotations.py
│   │   ├── search.py
│   │   └── tiles.py
│   └── seed.py           Database seeding

apps/ai/
├── app/
│   ├── main.py           AI service
│   ├── clip_stub.py      Fallback implementation
│   └── indexer.py        FAISS indexing

Running Individual Services

Web (Next.js)

cd apps/web
pnpm dev

API (FastAPI)

cd apps/api
make dev
# or: uvicorn app.main:app --reload --port 8000

AI (Python)

cd apps/ai
make dev
# or: uvicorn app.main:app --reload --port 8001

Building for Production

# Build all packages
pnpm build

# Run production web server
cd apps/web
pnpm build
pnpm start

# Run production API/AI (use gunicorn or similar)
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4

🧪 Testing

# Lint all code
pnpm lint

# Typecheck TypeScript
pnpm typecheck

# Python linting
cd apps/api
ruff check app/

# Run tests (if implemented)
pnpm test

📊 Adding Your Own Datasets

Option 1: Web Upload (Recommended)

The easiest way! Upload through the web interface at http://localhost:3000:

  1. Click "Upload Image" button
  2. Select your image (JPG, PNG, or TIFF up to 500MB)
  3. Enter dataset name and description
  4. System automatically:
    • Validates the image
    • Generates multi-resolution DZI tiles
    • Creates database entry
    • Indexes for search

API Upload:

curl -X POST http://localhost:8000/uploads/upload \
  -F "file=@your-image.jpg" \
  -F "name=My Dataset" \
  -F "description=Optional description"

# Check processing status
curl http://localhost:8000/uploads/status/{dataset-id}
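Since tile generation can take 10-30 minutes, API clients typically poll the status endpoint until processing finishes. A hedged sketch of that loop (the `fetch_status` callable and the `"status"` field name are assumptions; check the actual response shape of `GET /uploads/status/{id}`):

```python
import time

def wait_until_ready(fetch_status, poll_seconds=30, timeout_seconds=3600):
    """Poll a status-returning callable until it reports ready or failed."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        # e.g. fetch_status = lambda: requests.get(f"{API}/uploads/status/{ds_id}").json()
        status = fetch_status()
        if status.get("status") in ("ready", "failed"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("tile processing did not finish in time")

# Stubbed example: processing completes on the third poll.
responses = iter([{"status": "processing"},
                  {"status": "processing"},
                  {"status": "ready"}])
print(wait_until_ready(lambda: next(responses), poll_seconds=0))  # → {'status': 'ready'}
```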

Option 2: Manual Tile Generation

For advanced use cases or external tile generation:

1. Create DZI Tiles

Use vips (libvips), OpenSlide, or Python:

# With vips
vips dzsave your_image.tif infra/tiles/my-dataset

# Or use the provided script
python infra/process_real_image.py  # Edit paths in script

2. Register Dataset

Add to apps/api/app/seed.py:

dataset = Dataset(
    id="my-dataset",
    name="My Amazing Dataset",
    description="High-resolution image of...",
    tile_type="dzi",
    tile_url="/tiles/my-dataset",
    levels=json.dumps([0, 1, 2, 3, 4]),
    pixel_size=json.dumps([16384, 16384]),
)
session.add(dataset)

3. Build Search Index (optional)

cd apps/ai
python build_index.py my-dataset

🔐 Authentication

Demo credentials (for annotation writes):

  • Username: editor
  • Password: demo123

Login at /auth/login or via API:

curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"editor","password":"demo123"}'

Use the returned JWT token in an Authorization: Bearer <token> header.
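In a script, the login-then-authenticate flow looks roughly like this (a sketch; `requests` is not bundled with the project, and the `access_token` response field name is an assumption — inspect the actual login response):

```python
def auth_header(token: str) -> dict:
    """Authorization header for authenticated annotation writes."""
    return {"Authorization": f"Bearer {token}"}

# Usage against a running stack (requires the `requests` package):
#   import requests
#   resp = requests.post("http://localhost:8000/auth/login",
#                        json={"username": "editor", "password": "demo123"})
#   token = resp.json()["access_token"]   # field name is an assumption
#   requests.post("http://localhost:8000/annotations",
#                 headers=auth_header(token), json={...})
print(auth_header("abc123"))  # → {'Authorization': 'Bearer abc123'}
```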

🐳 Docker

Build Images

cd infra
docker compose build

View Logs

docker compose logs -f web
docker compose logs -f api
docker compose logs -f ai

Reset Volumes

docker compose down -v
docker compose up --build

🚦 API Documentation

FastAPI automatically generates OpenAPI docs:

  • API: http://localhost:8000/docs
  • AI: http://localhost:8001/docs

Key Endpoints

Datasets

  • GET /datasets — List all datasets
  • GET /datasets/{id} — Get dataset details

Annotations

  • GET /annotations?datasetId=X — List annotations
  • POST /annotations — Create annotation
  • PUT /annotations/{id} — Update annotation
  • DELETE /annotations/{id} — Delete annotation

Search

  • GET /search?q=crater&datasetId=X — AI semantic search

Tiles

  • GET /tiles/{dataset}/info.dzi — DZI descriptor
  • GET /tiles/{dataset}/{level}/{col}_{row}.jpg — Tile image
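Given the tile URL pattern, mapping a pixel coordinate at some pyramid level to the tile that contains it is simple integer division. A sketch (hypothetical helper; 256 px tiles assumed — the real tile size is in the dataset's .dzi descriptor):

```python
def tile_url(dataset: str, level: int, x: int, y: int, tile_size: int = 256) -> str:
    """URL of the tile containing pixel (x, y) at a given pyramid level."""
    col, row = x // tile_size, y // tile_size
    return f"/tiles/{dataset}/{level}/{col}_{row}.jpg"

print(tile_url("andromeda", 12, 1000, 300))  # → /tiles/andromeda/12/3_1.jpg
```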

Uploads

  • POST /uploads/upload — Upload and process image
  • GET /uploads/status/{id} — Check processing status
  • DELETE /uploads/{id} — Delete dataset and tiles

🤝 Contributing

This is a hackathon project, but contributions are welcome!

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing)
  3. Commit your changes (git commit -am 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing)
  5. Open a Pull Request

📝 License

MIT License - see LICENSE file for details.

🙏 Acknowledgments

  • OpenSeadragon — Incredible deep zoom library
  • OpenAI CLIP — Semantic image search
  • NASA/ESA — Inspiring imagery
  • FastAPI — Modern Python web framework
  • Next.js — React framework

πŸ› Troubleshooting

Port already in use

# Find and kill process using port 3000
lsof -ti:3000 | xargs kill -9

# Or use different ports
PORT=3001 pnpm dev

Database locked

# Remove SQLite lock
rm data/astro.db-shm data/astro.db-wal

CORS errors

Make sure NEXT_PUBLIC_API_URL matches your API URL:

# .env
NEXT_PUBLIC_API_URL=http://localhost:8000

Tiles not loading

Check that tiles exist:

ls -la infra/tiles/andromeda/
python3 infra/generate_sample_tiles.py

AI search returns empty

The stub implementation generates random results. For real search:

cd apps/ai
pip install open-clip-torch torch
python build_index.py andromeda

📧 Contact

Built for a 36-hour hackathon by your team name here.


Happy exploring! 🌌✨
