A library and benchmark suite for Approximate Nearest Neighbor Search (ANNS). This project is compatible with LibTorch.
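For context, exact nearest neighbor search — the problem that ANNS methods trade accuracy to speed up — can be sketched in a few lines of plain Python. This is an illustration only, not the CANDY or PyCANDY API:

```python
def squared_distance(a, b):
    # Squared Euclidean distance; skipping the sqrt preserves the ordering.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def exact_nn(query, database):
    """Return the index of the database vector closest to `query` (brute force)."""
    return min(range(len(database)), key=lambda i: squared_distance(query, database[i]))

db = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(exact_nn([0.9, 1.2], db))  # → 1, the index of [1.0, 1.0]
```

Brute force scans every vector, which is exact but O(n) per query; ANNS indexes accept small recall loss to answer far faster on large collections.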
We provide Docker support to simplify the setup process.
- Navigate to the `./docker` directory:

  ```shell
  cd ./docker
  ```

- Build and start the Docker container:

  ```shell
  ./start.sh
  ```

  This script will build the Docker container and start it.
- Inside the Docker container, run the build script to install dependencies and build the project:
  - With CUDA support:

    ```shell
    ./buildWithCuda.sh
    ```

  - Without CUDA (CPU-only version):

    ```shell
    ./buildCPUOnly.sh
    ```
If you prefer to build without Docker, follow these steps.
To build CANDY and PyCANDY with CUDA support:

```shell
./buildWithCuda.sh
```

For a CPU-only version:

```shell
./buildCPUOnly.sh
```

These scripts will install dependencies and build the project.
After building, you can install PyCANDY to your default Python environment:

```shell
python3 setup.py install --user
```

When developing in CLion, you must manually configure:

- Dependencies:

  ```shell
  sudo apt install liblapack-dev libblas-dev libboost-all-dev swig
  ```

- CMake Prefix Path:
  - Run the following command in your terminal to get the CMake prefix path:

    ```shell
    python3 -c 'import torch; print(torch.utils.cmake_prefix_path)'
    ```

  - Copy the output path and set it in CLion's CMake settings as:

    ```
    -DCMAKE_PREFIX_PATH=<output_path>
    ```

- Environment Variable `CUDACXX`:
  - Manually set the environment variable `CUDACXX` to `/usr/local/cuda/bin/nvcc`.
Evaluation scripts are located under benchmark/scripts.
To run an evaluation (e.g., scanning the dimensions):
```shell
cd build/benchmark/scripts/scanIPDimensions
sudo ls  # Required for perf events
python3 drawTogether.py 2
cd ../figures
```

Figures will be generated in the `figures` directory.
- Extra CMake Options
- Manual Build Instructions
- CUDA Installation (Optional)
- Torch Installation
- PAPI Support (Optional)
- Distributed CANDY with Ray (Optional)
- Local Documentation Generation (Optional)
- Known Issues
You can set additional CMake options using `cmake -D<option>=ON/OFF`:

- `ENABLE_PAPI` (OFF by default) - Enables PAPI-based performance tools.
  - Setup:
    - Navigate to the `thirdparty` directory.
    - Run `installPAPI.sh` to enable PAPI support.
    - Alternatively, set `REBUILD_PAPI` to `ON`.
- `ENABLE_HDF5` (OFF by default) - Enables loading data from HDF5 files.
  - The HDF5 source code is included; no extra dependency is required.
- `ENABLE_PYBIND` (OFF by default) - Enables building Python bindings (PyCANDY).
  - Ensure the `pybind11` source code in the `thirdparty` folder is complete.
- Compiler: G++11 or newer.
  - The default `gcc/g++` version on Ubuntu 22.04 (Jammy) is sufficient.
- BLAS and LAPACK:

  ```shell
  sudo apt install liblapack-dev libblas-dev
  ```

- Graphviz (Optional):

  ```shell
  sudo apt-get install graphviz
  pip install torchviz
  ```
- Set the CUDA compiler path (if using CUDA):

  ```shell
  export CUDACXX=/usr/local/cuda/bin/nvcc
  ```

- Create a build directory:

  ```shell
  mkdir build && cd build
  ```

- Configure CMake:

  ```shell
  cmake -DCMAKE_PREFIX_PATH=`python3 -c 'import torch; print(torch.utils.cmake_prefix_path)'` ..
  ```

- Build the project:

  ```shell
  make
  ```

For a debug build:

```shell
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH=`python3 -c 'import torch; print(torch.utils.cmake_prefix_path)'` ..
make
```

When building under CLion:

- Manually retrieve the CMake prefix path:

  ```shell
  python3 -c 'import torch; print(torch.utils.cmake_prefix_path)'
  ```

- Set `-DCMAKE_PREFIX_PATH` in CLion's CMake settings.
- Set the environment variable `CUDACXX` to `/usr/local/cuda/bin/nvcc` in CLion.
Refer to the NVIDIA CUDA Installation Guide for more details.
```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get install cuda
sudo apt-get install nvidia-gds
sudo apt-get install libcudnn8 libcudnn8-dev libcublas-11-7
```

Note: Ensure CUDA is installed before installing CUDA-based Torch. Reboot your system after installation.
- No need to install CUDA if using a pre-built JetPack on Jetson.
- Ensure `libcudnn8` and `libcublas` are installed:

  ```shell
  sudo apt-get install libcudnn8 libcudnn8-dev libcublas-*
  ```
Refer to the PyTorch Get Started Guide for more details.
```shell
sudo apt-get install python3 python3-pip
```

- With CUDA:

  ```shell
  pip3 install torch==2.4.0 torchvision torchaudio
  ```

- Without CUDA:

  ```shell
  pip3 install --ignore-installed torch==2.4.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  ```

Note: A conflict between torch 2.4.0+cpu and torchaudio+cpu may occur with Python versions > 3.10.
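As a quick sanity check for the note above, you can test whether your interpreter falls in the affected range before installing the CPU wheels (the 3.10 cutoff is taken from the note; adjust if your experience differs):

```python
import sys

# Python > 3.10 is the range where the torch 2.4.0+cpu / torchaudio+cpu
# conflict has been reported.
affected = sys.version_info[:2] > (3, 10)
if affected:
    print("Python > 3.10: the +cpu torch/torchaudio conflict may apply.")
else:
    print("Python <= 3.10: the reported conflict should not apply.")
```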
PAPI provides a consistent interface for collecting performance counter information.
- Navigate to the `thirdparty` directory.
- Run `installPAPI.sh`.
- PAPI will be compiled and installed in `thirdparty/papi_build`.

To verify the installation:

- Navigate to `thirdparty/papi_build/bin`.
- Run `sudo ./papi_avail` to check available events.
- Run `./papi_native_avail` to view native events.
- Set `-DENABLE_PAPI=ON` when configuring CMake.
- Add the following to your top-level config file:

  ```
  usePAPI,1,U64
  perfUseExternalList,1,U64
  ```

- To specify custom event lists, set:

  ```
  perfListSrc,<path_to_your_list>,String
  ```

- Edit `perfLists/perfList.csv` in your build directory to include desired events.
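The config entries above follow a simple `key,value,type` CSV convention. A minimal sketch of writing and reading such a file with the standard library (the field names mirror the examples above; this parser is illustrative, not CANDY's own config loader):

```python
import csv
import io

# Entries in the key,value,type form shown in the config examples.
entries = [
    ("usePAPI", "1", "U64"),
    ("perfUseExternalList", "1", "U64"),
]

# Serialize to the comma-separated line format.
buf = io.StringIO()
csv.writer(buf).writerows(entries)

# Parse it back into a {key: (value, type)} mapping.
buf.seek(0)
config = {key: (value, vtype) for key, value, vtype in csv.reader(buf)}
print(config["usePAPI"])  # → ('1', 'U64')
```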
- Install Ray:

  ```shell
  pip install ray==2.8.1 ray-cpp==2.8.1
  ```

- Get the Ray library path:

  ```shell
  ray cpp --show-library-path
  ```

- Set the `RAYPATH` environment variable:

  ```shell
  export RAYPATH=<ray_library_path>
  ```

- Configure CMake:

  ```shell
  cmake -DENABLE_RAY=ON ..
  ```

- Start the head node:

  ```shell
  ray start --head
  ```

- Start worker nodes:

  ```shell
  ray start --address <head_node_ip>:6379 --node-ip-address <worker_node_ip>
  ```

- Run the program:

  ```shell
  export RAY_ADDRESS=<head_node_ip>:6379
  ./<your_program_with_ray_support>
  ```
Notes:
- Ensure the file paths and dependencies are identical across all nodes.
- For different architectures, recompile the source code on each node.
- `torch::Tensor` may not be serializable; consider using `std::vector<float>` instead.
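The serialization caveat above amounts to shipping raw floats plus a shape instead of a tensor object. In Python terms the round trip looks like this (illustrative only; no torch or Ray required, and function names are my own):

```python
def flatten(nested):
    """Flatten a 2-D list of floats into (flat_list, shape) for easy serialization."""
    rows, cols = len(nested), len(nested[0])
    flat = [x for row in nested for x in row]
    return flat, (rows, cols)

def unflatten(flat, shape):
    """Rebuild the 2-D list from the flat buffer and its shape."""
    rows, cols = shape
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

t = [[1.0, 2.0], [3.0, 4.0]]
flat, shape = flatten(t)
assert unflatten(flat, shape) == t  # lossless round trip
```

A flat `std::vector<float>` plus explicit dimensions serializes cleanly with most RPC frameworks, which is why it is the safer wire format here.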
Refer to the Ray Observability Guide to set up a dashboard.
```shell
sudo apt-get install doxygen graphviz
sudo apt-get install texlive-latex-base texlive-fonts-recommended texlive-fonts-extra texlive-latex-extra
./genDoc.SH
```

- HTML Pages: Located in `doc/html/index.html`.
- PDF Manual: Found at `refman.pdf` in the root directory.
- Conflicts may occur with certain versions of PyTorch and Python.