During a machine update and data migration, the original code was unfortunately lost. This repository is a re-implementation based on the original design and experimental results. While we have made every effort to stay consistent with the original code, results may differ slightly due to differences in certain implementation details.
├── configs/               # Configuration files
│   └── base.yaml          # Main training configuration
├── core/                  # Core compression components
│   ├── entropy_coder.py   # Arithmetic coding implementation
│   ├── hyper_prior.py     # Hyperprior network
│   └── avrpm.py           # Adaptive voxel residual prediction module
├── data/
│   ├── ivfb_loader.py     # IVFBDataset implementation
│   └── preprocess.py      # PLY to NPZ conversion
├── models/
│   ├── generator.py       # PCACGenerator (main compression network)
│   └── discriminator.py   # PointCloudDiscriminator
├── losses/                # Loss functions
│   ├── adversarial.py     # GAN implementation
│   └── rate_distortion.py # Rate-distortion loss
├── utils/
│   ├── metrics.py         # PSNR/SSIM/BPP calculations
│   └── transforms.py      # Color space conversions
├── train.py               # Main training script
├── evaluate.py            # Model evaluation script
└── dataset_gen.py         # ShapeNet + COCO dataset generation
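For context, below is a minimal sketch of the kind of PLY-to-NPZ conversion that data/preprocess.py performs. The NPZ key names (points, colors) and the function name ply_to_npz are illustrative assumptions, not the actual interface.

# Hypothetical sketch of a PLY -> NPZ conversion; the stored keys are assumptions.
import numpy as np
import open3d as o3d

def ply_to_npz(ply_path: str, npz_path: str) -> None:
    pcd = o3d.io.read_point_cloud(ply_path)             # load geometry + attributes
    points = np.asarray(pcd.points, dtype=np.float32)   # (N, 3) xyz coordinates
    colors = np.asarray(pcd.colors, dtype=np.float32)   # (N, 3) RGB in [0, 1]
    np.savez_compressed(npz_path, points=points, colors=colors)

if __name__ == "__main__":
    ply_to_npz("sample.ply", "sample.npz")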
# Base environment
conda create -n pcac python=3.8
conda activate pcac
# Install PyTorch
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# Install MinkowskiEngine
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v \
--install-option="--blas=openblas" \
--install-option="--force_cuda"
# Install other dependencies
pip install open3d plyfile pycocotools tqdm
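After installation, a quick sanity check (a sketch; it assumes a working CUDA setup) can confirm that PyTorch and MinkowskiEngine import correctly and can build a sparse tensor:

# Environment sanity check (sketch).
import torch
import MinkowskiEngine as ME

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MinkowskiEngine:", ME.__version__)

# Build a tiny sparse tensor to verify the MinkowskiEngine backend works.
coords = torch.IntTensor([[0, 0, 0, 0], [0, 1, 1, 1]])  # (batch, x, y, z)
feats = torch.rand(2, 3)                                 # per-point RGB features
x = ME.SparseTensor(features=feats, coordinates=coords)
print(x)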
Training
Prepare the dataset in PLY format under /data/yourpath/
Update dataset.root_dir in configs/base.yaml
Start training:
python train.py --config configs/base.yaml \
--batch_size 32 \
--lr_g 0.0001 \
--lr_d 0.0004
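For reference, the sketch below shows one way train.py might load configs/base.yaml and let the command-line flags above override it. Apart from dataset.root_dir, the key names (and the use of PyYAML) are assumptions about the config layout.

# Hypothetical sketch of merging configs/base.yaml with CLI overrides (requires pyyaml).
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config", default="configs/base.yaml")
parser.add_argument("--batch_size", type=int, default=None)
parser.add_argument("--lr_g", type=float, default=None)
parser.add_argument("--lr_d", type=float, default=None)
args = parser.parse_args()

with open(args.config) as f:
    cfg = yaml.safe_load(f)

print("dataset root:", cfg["dataset"]["root_dir"])  # set this key in configs/base.yaml

# CLI flags, when given, take precedence over the YAML values ("train" key is assumed).
if args.batch_size is not None:
    cfg.setdefault("train", {})["batch_size"] = args.batch_size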
Evaluation
Pretrained models: https://pan.baidu.com/s/1-5TRTShyW5pYBSNiDRGNoA (extraction code: 8a97)
python evaluate.py \
--weights path/to/checkpoint \
--output results.json \
--batch_size 8
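To inspect the evaluation output, something like the sketch below can be used. It assumes results.json maps each test sequence to psnr/ssim/bpp values (the metrics computed in utils/metrics.py); the actual file layout may differ.

# Hypothetical sketch for reading evaluate.py output; the JSON layout is an assumption.
import json

with open("results.json") as f:
    results = json.load(f)

for name, m in results.items():
    print(f"{name}: PSNR={m['psnr']:.2f} dB, SSIM={m['ssim']:.4f}, BPP={m['bpp']:.4f}")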