Protein Information System (PIS) is an integrated biological data platform focused on extracting, processing, and managing protein-related information. PIS consolidates data from UniProt, PDB, and GOA, enabling the efficient retrieval and organization of protein sequences, structures, and functional annotations.
The primary goal of PIS is to provide a robust framework for large-scale protein data extraction, facilitating downstream functional analysis and annotation transfer. The system is designed for high-performance computing (HPC) environments, ensuring scalability and efficiency.
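As an illustration of the kind of extraction PIS automates, the sketch below pulls a single UniProtKB record over the public REST API and lists its GO cross-references. The accession is an arbitrary example, and PIS's own extraction layer batches this work and persists it to PostgreSQL rather than printing it.

```python
# Minimal sketch: fetch one UniProtKB entry as JSON and list its GO terms.
# Requires the `requests` package; the accession below is only an example.
import requests

ACCESSION = "P69905"  # hemoglobin subunit alpha, used here for illustration

response = requests.get(
    f"https://rest.uniprot.org/uniprotkb/{ACCESSION}.json", timeout=30
)
response.raise_for_status()
entry = response.json()

# Sequence length and GO identifiers from the entry's cross-references.
print("Sequence length:", entry["sequence"]["length"])
go_terms = [
    ref["id"]
    for ref in entry.get("uniProtKBCrossReferences", [])
    if ref["database"] == "GO"
]
print("GO terms:", go_terms)
```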
FANTASIA was built on top of the Protein Information System (PIS) as an advanced tool for functional protein annotation using embeddings generated by protein language models.
The pipeline supports high-performance computing (HPC) environments and integrates tools such as ProtT5, ESM, and CD-HIT. These models can be extended or replaced with new variants without modifying the core software, simply by registering the new model in PIS. This design enables scalable, modular, and reproducible GO term annotation from FASTA sequence files.
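As a rough illustration of the embedding step, the sketch below computes a per-protein ProtT5 vector for a toy sequence. It assumes transformers, sentencepiece, and torch are installed; the model name and residue preprocessing follow the publicly documented Rostlab usage, and the mean-pooling over residues is just one common choice rather than a description of FANTASIA's internals.

```python
# Minimal sketch: embed one protein sequence with ProtT5 (encoder only).
import re

import torch
from transformers import T5EncoderModel, T5Tokenizer

MODEL_NAME = "Rostlab/prot_t5_xl_half_uniref50-enc"

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME, do_lower_case=False)
model = T5EncoderModel.from_pretrained(MODEL_NAME)
model.eval()

sequence = "MVLSPADKTNVKAAW"  # toy fragment, for illustration only
# ProtT5 expects space-separated residues, with rare amino acids mapped to X.
prepared = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(prepared, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, len + 1, 1024)

# Mean-pool over residue positions (dropping the trailing special token)
# to get a single fixed-length vector for the whole protein.
embedding = hidden[0, : len(sequence)].mean(dim=0)
print(embedding.shape)  # torch.Size([1024])
```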
In addition, a systematic protocol has been developed for the large-scale identification of structural metamorphisms and protein multifunctionality.
Metamorphic and Multifunctionality Search Repository
This protocol leverages the full capabilities of PIS to uncover non-obvious relationships between structure and function. Structural metamorphisms are detected by filtering large-scale structural alignments between proteins with high sequence identity, identifying divergent conformations. Multifunctionality is addressed through a semantic analysis of GO annotations, computing a functional distance metric to determine the two most divergent terms within each GO category per protein.
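The pair-selection part of the multifunctionality analysis can be sketched as follows. Only the selection logic is shown; the semantic distance itself would come from an ontology-based metric over the GO graph, which is stubbed out here with hypothetical precomputed values.

```python
# Minimal sketch: for each GO category of a protein, pick the two annotated
# terms with the largest pairwise distance. The distance values below are
# hypothetical placeholders standing in for a real semantic metric.
from itertools import combinations
from typing import Callable, Dict, List, Tuple

def most_divergent_pair(
    terms: List[str], distance: Callable[[str, str], float]
) -> Tuple[Tuple[str, str], float]:
    """Return the pair of GO terms with maximal pairwise distance."""
    best_pair, best_dist = None, float("-inf")
    for a, b in combinations(terms, 2):
        d = distance(a, b)
        if d > best_dist:
            best_pair, best_dist = (a, b), d
    return best_pair, best_dist

# Toy example: annotations for one protein, grouped by GO category.
annotations: Dict[str, List[str]] = {
    "molecular_function": ["GO:0003824", "GO:0005515", "GO:0016301"],
}

# Hypothetical precomputed distances between term pairs.
toy_distances = {
    frozenset({"GO:0003824", "GO:0005515"}): 0.9,
    frozenset({"GO:0003824", "GO:0016301"}): 0.3,
    frozenset({"GO:0005515", "GO:0016301"}): 0.7,
}

for category, terms in annotations.items():
    pair, dist = most_divergent_pair(
        terms, lambda a, b: toy_distances[frozenset({a, b})]
    )
    print(category, pair, dist)
```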
- Python 3.10
- RabbitMQ
- PostgreSQL with the pgvector extension installed
- psql client 16
Ensure Docker is installed on your system. If it's not, you can download it from the official Docker website.
Ensure PostgreSQL and RabbitMQ services are running. Start a PostgreSQL container with the pgvector extension using the command below:
```bash
docker run -d --name pgvectorsql \
  -e POSTGRES_USER=usuario \
  -e POSTGRES_PASSWORD=clave \
  -e POSTGRES_DB=BioData \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```

You can use pgAdmin 4, a graphical interface for managing and interacting with PostgreSQL databases, or any other SQL client.
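To confirm the database is reachable from Python, a minimal sketch is shown below. It assumes psycopg2 is installed and reuses the credentials from the docker command above; the vector query is only a connectivity check, not part of the PIS schema.

```python
# Minimal sketch: connect to the container started above, make sure the
# pgvector extension is enabled, and run a trivial vector distance query.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="BioData",
    user="usuario",
    password="clave",
)
with conn, conn.cursor() as cur:
    # The image ships pgvector, but the extension must be created once
    # per database before vector columns and operators can be used.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("SELECT '[1,2,3]'::vector <-> '[1,2,4]'::vector;")
    print("L2 distance:", cur.fetchone()[0])
conn.close()
```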
Start a RabbitMQ container using the command below:
```bash
docker run -d --name rabbitmq \
  -p 15672:15672 \
  -p 5672:5672 \
  rabbitmq:management
```

Once RabbitMQ is running, you can access its management interface at http://localhost:15672.
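A quick way to verify the broker from Python is sketched below. It assumes pika is installed and that the container uses RabbitMQ's default guest credentials on localhost; the queue name is an arbitrary placeholder used only for this check.

```python
# Minimal sketch: connect to RabbitMQ, declare a throwaway queue, and
# publish one message to confirm the broker is reachable.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

channel.queue_declare(queue="pis_healthcheck")  # arbitrary placeholder queue
channel.basic_publish(exchange="", routing_key="pis_healthcheck", body=b"ping")
print("Message published; RabbitMQ is reachable.")

connection.close()
```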
To execute the full extraction process, install the dependencies and run the following command from the project root:

```bash
pis
```

This command triggers the complete workflow, starting from the initial data preprocessing stages and continuing through to the final data organization and storage.
You can customize the sequence of tasks executed by modifying main.py or adjusting the relevant parameters in the config.yaml file. This allows you to tailor the extraction process to meet specific research needs or to experiment with different data processing configurations.
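For reference, the sketch below shows one way to load and inspect the configuration before editing it. It assumes PyYAML is installed, and the key used in the example is purely hypothetical: check the config.yaml shipped with the project for the actual parameter names.

```python
# Minimal sketch: load config.yaml, inspect it, and write back a change.
# The key "max_workers" is a hypothetical placeholder, not a documented
# PIS setting; replace it with a real parameter from your config.yaml.
import yaml

with open("config.yaml") as handle:
    config = yaml.safe_load(handle)

# Inspect the current settings before changing anything.
for key, value in config.items():
    print(f"{key}: {value}")

config["max_workers"] = 8  # hypothetical placeholder key
with open("config.yaml", "w") as handle:
    yaml.safe_dump(config, handle, sort_keys=False)
```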