A toolkit for transforming molecular dynamics (MD) trajectories into rich graph representations, sampling
random and self-avoiding walks, learning node embeddings, and visualizing residue interaction networks (RINs). SAWNERGY
keeps the full workflow, from `cpptraj` output to skip-gram embeddings (the DeepWalk approach), inside Python, backed by efficient Zarr-based archives and optional GPU acceleration.
Install via pip: `pip install sawnergy`

Optional: for GPU training, install PyTorch separately (e.g., `pip install torch`).

Note: RIN building requires `cpptraj` (AmberTools). Ensure it is discoverable via `$PATH` or the `CPPTRAJ` environment variable. The easiest solution is to install AmberTools via Conda and activate the environment; SAWNERGY will find the `cpptraj` executable on its own, so just run your code and don't worry about it.
- Dependency refresh. Bumped PureML to v1.2.7.
- Embedding visualizer color API. `sawnergy.embedding.Visualizer` now accepts the same group/color tuples as the RIN visualizer (e.g., `(indices, sawnergy.visual.BLUE)`), so embedding plots and RIN plots share a unified coloring interface.
- SAW dead-end guard. When self-avoidance zeroes out a transition row, the walker now logs a warning and takes an unconstrained RW step instead of raising, so sampling runs finish even on disconnected nodes.
- Added more visual examples to the README.
- Dedicated docs site is live: https://ymishchyriak.com/docs/SAWNERGY-DOCS (mirrors this repo and stays current).
- Safer warm starts across backends.
  - Torch: Skip-Gram (full softmax) now transposes provided "out" warm starts before copying, matching `Linear`'s `(out_features, in_features)` layout.
  - PureML: Both SGNS and SG defensively copy and validate warm-start arrays (correct shapes, immutable after construction); SG also transposes `(D, V)` out weights to `(V, D)` for embedding access.
- Walker parallelism is configurable. `Walker.sample_walks(..., in_parallel=True)` now accepts `max_parallel_workers` so you can lower the worker count below `os.cpu_count()` when sharing machines or reserving cores for other workloads.
- Logging helper respects `force` resets. `configure_logging()` now documents the correct defaults, and an optional `force=True` clears existing handlers before installing fresh ones, which is useful for scripts/notebooks that reconfigure logging multiple times.
- `ArrayStorage` is easier to introspect. Added a readable `__repr__` plus `list_blocks()` so you can quickly inspect the stored datasets when debugging archives or working interactively.
- Visualizer selectors are safer and lighter. `displayed_nodes` (and related selectors) now reject non-integer inputs before converting to 0-based indices, and edge coordinate buffers are only materialized when an edge layer is requested, reducing unnecessary copies when plotting nodes only.
- Walker sampling is more robust. Transition rows are renormalized before RNG sampling (even without avoidance sets), and walk paths are accumulated in preallocated arrays, keeping long walks numerically stable and memory-efficient.
- Training prep and tooling tweaks. Skip-gram runs skip building noise distributions entirely (SGNS still gets them), cutting redundant `np.bincount`/normalization work, and `locate_cpptraj()` now de-duplicates candidate paths before probing to avoid repeated `cpptraj -h` calls.
- `SGNS_Torch` is no longer deprecated. The root cause was weight initialization; it has been fixed.
- `SG_Torch` and `SG_PureML` no longer use biases. Affine/Linear layers no longer translate embeddings away from the origin.
- Warm starts for frame embeddings. Each frame initializes from the preceding frame's representation, which speeds convergence and keeps the basis approximately consistent.
- Alignment function for comparing embeddings from different latent spaces. Based on the Orthogonal Procrustes solution: it finds the best-fit orthogonal map between two embedding sets. Orthogonality preserves angles and relative distances, enabling direct comparison across bases (see the sketch below).
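  The following is a minimal NumPy sketch of the Orthogonal Procrustes idea, for intuition only; SAWNERGY's own alignment helper may differ in name and signature.

  ```python
  # Illustrative Orthogonal Procrustes alignment (not SAWNERGY's exact API).
  import numpy as np

  def procrustes_align(A: np.ndarray, B: np.ndarray) -> np.ndarray:
      """Map A onto B's basis with the best-fit orthogonal matrix R.

      Solves min_R ||A @ R - B||_F subject to R.T @ R = I via the SVD of A.T @ B.
      """
      U, _, Vt = np.linalg.svd(A.T @ B)
      R = U @ Vt  # orthogonal, so angles and relative distances are preserved
      return A @ R

  # Example: two (num_nodes, dim) embedding matrices that differ by a rotation
  rng = np.random.default_rng(0)
  B = rng.normal(size=(50, 8))
  Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal rotation
  A = B @ Q.T                                   # same geometry, different basis
  print(np.allclose(procrustes_align(A, B), B))  # True: rotation recovered
  ```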
- Temporary deprecation of `SGNS_Torch`.
  - `sawnergy.embedding.SGNS_Torch` currently produces noisy embeddings in practice. The issue likely stems from weight initialization, although the root cause has not yet been conclusively determined.
  - Action: The class and its `__init__` docstring now carry a deprecation notice. Constructing the class emits a `DeprecationWarning` and logs a warning.
  - Use instead: Prefer `SG_Torch` (plain Skip-Gram with full softmax) or the PureML backends `SGNS_PureML` / `SG_PureML`.
  - Compatibility: No breaking API changes; imports remain stable. PureML backends are unaffected.
- Embedding visualizer update. You can now L2-normalize your embeddings before display.
- Small improvements in the embedding module. Improved API with sensible defaults in place to ease usage out of the box.
- Small internal model tweaks.
- Added a plain Skip-Gram model. The user can now choose whether to apply the negative-sampling technique (binary classification against noise samples) or train a single classifier over the vocabulary (full softmax). For more detail, see: deepwalk, word2vec, and negative_sampling.
- Stricter default for pruning low interaction energies during RIN construction. We now zero out the lowest 85% of interaction energies, up from the previous 30% default, leading to more meaningful embeddings.
- BUG FIX: Visualizer. Previously, the visualizer would silently draw zero-magnitude edges: they were actually rendered but invisible due to full transparency and zero width, making the displayed image/animation very laggy. This has been fixed, and with the higher pruning default, the displayed interaction networks are clean and smooth under rotation, dragging, etc.
- New Embedding Visualizer (3D). A lightweight viewer for per-frame embeddings that projects them with PCA to a 3D scatter. It supports the same node-coloring semantics, optional node labels, and the same antialiasing/depthshade controls; it works in headless setups using the same backend guard and uses a blocking `show=True` for scripts.
- Bridge simulations and graph ML: Convert raw MD trajectories into residue interaction networks ready for graph algorithms and downstream machine learning tasks.
- Deterministic, shareable artifacts: Every stage produces compressed Zarr archives that contain both data and metadata so runs can be reproduced, shared, or inspected later.
- High-performance data handling: Heavy arrays live in shared memory during walk sampling to allow parallel processing without serialization overhead; archives are written in chunked, compressed form for fast read/write.
- Flexible objectives & backends: Train Skip-Gram with negative sampling (`objective="sgns"`) or plain Skip-Gram (`objective="sg"`), using either PureML (default) or PyTorch; see the sketch after this list.
- Visualization out of the box: Plot and animate residue networks without leaving Python, using the data produced by RINBuilder.
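For intuition, here is a toy NumPy contrast of the two objectives for a single (center, context) pair. It mirrors word2vec-style losses conceptually and is not SAWNERGY's internal code; all names are illustrative.

```python
# Conceptual contrast of "sg" (full softmax) vs. "sgns" (negative sampling)
# for one (center, context) pair, using toy weight matrices.
import numpy as np

rng = np.random.default_rng(0)
V, D = 10, 4                                   # vocab size, embedding dim
W_in = rng.normal(size=(V, D))                 # input ("in") embeddings
W_out = rng.normal(size=(V, D))                # output ("out") embeddings
center, context = 2, 7

# "sg": one multiclass classifier over the whole vocabulary
logits = W_out @ W_in[center]
log_probs = logits - (logits.max() + np.log(np.exp(logits - logits.max()).sum()))
sg_loss = -log_probs[context]

# "sgns": binary classification of the true pair vs. sampled noise pairs
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
negatives = rng.integers(0, V, size=5)         # toy noise words
sgns_loss = (-np.log(sigmoid(W_out[context] @ W_in[center]))
             - np.log(sigmoid(-(W_out[negatives] @ W_in[center]))).sum())
print(f"sg loss: {sg_loss:.3f}, sgns loss: {sgns_loss:.3f}")
```

Full softmax is exact but costs O(V) per pair; negative sampling approximates it with a handful of noise comparisons, which is why it scales better for large vocabularies.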
MD Trajectory + Topology
│
▼
RINBuilder
│ → RIN archive (.zip/.zarr) → Visualizer (display/animate RINs)
▼
Walker
│ → Walks archive (RW/SAW per frame)
▼
Embedder
│ → Embedding archive (frame × vocab × dim)
▼
Downstream ML
Each stage consumes the archive produced by the previous one. Metadata embedded in the archives ensures frame order, node indexing, and RNG seeds stay consistent across the toolchain.
A minimal dataset is included in `example_MD_for_quick_start/` on GitHub to let you run the full SAWNERGY pipeline immediately:
- `p53_DBD.prmtop` (topology), `p53_DBD.pdb` (reference), `p53_DBD.nc` (trajectory)
- 1 µs production trajectory of the p53 DNA-binding domain, 1000 snapshots saved every 1 ns
- Credits: MD simulation produced by Sean Stetson (ORCID: 0009-0007-9759-5977)
- Intended use: quick-start tutorial for building RINs, sampling walks, and training embeddings without setting up your own MD run
See `example_MD_for_quick_start/brief_description.md`.
Residue Interaction Network of the Full-Length p53 Protein (on the right) and its Embedding (on the left)
- Wraps the AmberTools `cpptraj` executable to:
  - compute per-frame electrostatic (EMAP) and van der Waals (VMAP) energy matrices at the atomic level,
  - project atom–atom interactions to residue–residue interactions using compositional masks,
  - prune, symmetrize, remove self-interactions, and L1-normalize the matrices,
  - compute per-residue centers of mass (COM) over the same frames.
- Outputs a compressed Zarr archive with transition matrices, optional pre-normalized energies, COM snapshots, and rich metadata (frame range, pruning quantile, molecule ID, etc.).
- Supports parallel `cpptraj` execution and batch processing, and keeps temporary stores tidy via `ArrayStorage.compress_and_cleanup`.
- Opens RIN archives, resolves dataset names from attributes, and renders nodes plus attractive/repulsive edge bundles in 3D using Matplotlib.
- Allows both static frame visualization and trajectory animation.
- Handles backend selection (`Agg` fallback in headless environments) and offers convenient color palettes via `visualizer_util`.
- Attaches to the RIN archive and loads attractive/repulsive transition matrices into shared memory using `walker_util.SharedNDArray`, so multiple processes can sample without copying.
- Samples random walks (RW) and self-avoiding walks (SAW), optionally time-aware: time-aware walks move through the per-frame transition matrices with transition probabilities proportional to the cosine similarity between the current and next frame. Randomness is controlled by the seed passed to the class constructor.
- Persists walks as `(time, walk_id, length+1)` tensors (1-based node indices) alongside metadata such as `walk_length`, `walks_per_node`, and the RNG scheme; a small sketch of this layout follows this list.
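As a quick orientation to the stored layout, here is a hypothetical toy example; the shapes and the 1-based convention follow the description above, and the dataset names appear in the archive table below.

```python
# Toy walks tensor: T=1 frame, 2 walks, walk length L=2 (so L+1 nodes each).
# Node IDs are 1-based on disk; downstream skip-gram prep shifts to 0-based.
import numpy as np

walks = np.array([[[1, 3, 2],
                   [2, 1, 3]]], dtype=np.int32)  # shape (T, walk_id, L+1)
zero_based = walks - 1                           # 0-based for embedding lookup
print(walks.shape, zero_based.min())             # (1, 2, 3) 0
```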
- Consumes walk archives, generates skip-gram pairs, and normalizes them to 0-based indices.
- Selects skip-gram (SG / SGNS) backends dynamically via `model_base="pureml"|"torch"`, with per-backend overrides supplied through `model_kwargs`.
- Handles deterministic per-frame seeding and returns the requested embedding `kind` (`"in"`, `"out"`, or `"avg"`) from `embed_frame` and `embed_all`; see the sketch after this list.
- Persists per-frame matrices with rich provenance (walk metadata, objective, hyperparameters, RNG seeds) when `embed_all` targets an output archive.
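To make the `kind` options concrete, here is an illustrative NumPy sketch; treating `"avg"` as the mean of the input and output matrices is an assumption about the convention, shown for intuition only.

```python
# Illustrative only: selecting an embedding "kind" from trained skip-gram
# weights. W_in/W_out are toy (V, D) matrices; "avg" as their mean is an
# assumption, not confirmed SAWNERGY internals.
import numpy as np

rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
kind = "avg"
embedding = {"in": W_in, "out": W_out, "avg": (W_in + W_out) / 2}[kind]
print(embedding.shape)  # (5, 3): one D-dimensional vector per node
```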
- `sawnergy.sawnergy_util`
  - `ArrayStorage`: thin wrapper over Zarr v3 with helpers for chunk management, attribute coercion to JSON, and transparent compression to `.zip` archives.
  - Parallel helpers (`elementwise_processor`, `compose_steps`, etc.), temporary file management, logging, and runtime inspection utilities.
- `sawnergy.logging_util.configure_logging`: configure rotating file/console logging consistently across scripts.
| Archive | Key datasets (name → shape, dtype) | Important attributes (root attrs) |
|---|---|---|
| RIN | `ATTRACTIVE_transitions` → (T, N, N), float32 • `REPULSIVE_transitions` → (T, N, N), float32 (optional) • `ATTRACTIVE_energies` → (T, N, N), float32 (optional) • `REPULSIVE_energies` → (T, N, N), float32 (optional) • `COM` → (T, N, 3), float32 | `time_created` (ISO) • `com_name = "COM"` • `molecule_of_interest` (int) • `frame_range = (start, end)` inclusive • `frame_batch_size` (int) • `prune_low_energies_frac` (float in [0,1]) • `attractive_transitions_name` / `repulsive_transitions_name` (dataset names or None) • `attractive_energies_name` / `repulsive_energies_name` (dataset names or None) |
| Walks | `ATTRACTIVE_RWs` → (T, N·num_RWs, L+1), int32 (optional) • `REPULSIVE_RWs` → (T, N·num_RWs, L+1), int32 (optional) • `ATTRACTIVE_SAWs` → (T, N·num_SAWs, L+1), int32 (optional) • `REPULSIVE_SAWs` → (T, N·num_SAWs, L+1), int32 (optional) • Note: node IDs are 1-based. | `time_created` (ISO) • `seed` (int) • `rng_scheme = "SeedSequence.spawn_per_batch_v1"` • `num_workers` (int) • `in_parallel` (bool) • `batch_size_nodes` (int) • `num_RWs` / `num_SAWs` (ints) • `node_count` (N) • `time_stamp_count` (T) • `walk_length` (L) • `walks_per_node` (int) • `attractive_RWs_name` / `repulsive_RWs_name` / `attractive_SAWs_name` / `repulsive_SAWs_name` (dataset names or None) • `walks_layout = "time_leading_3d"` |
| Embeddings | `FRAME_EMBEDDINGS` → (T, N, D), float32 | `created_at` (ISO) • `frame_embeddings_name = "FRAME_EMBEDDINGS"` • `time_stamp_count = T` • `node_count = N` • `embedding_dim = D` • `model_base = "torch"` or `"pureml"` • `embedding_kind = "in" \| "out" \| "avg"` • `alpha` / `num_negative_samples` (SGNS only; see notes) |
Notes
- In RIN, `T` equals the number of frame batches written (i.e., `frame_range` swept in steps of `frame_batch_size`). `ATTRACTIVE/REPULSIVE_energies` are pre-normalized absolute energies (written only when `keep_prenormalized_energies=True`), whereas `ATTRACTIVE/REPULSIVE_transitions` are the row-wise L1-normalized versions used for sampling.
- All archives are Zarr v3 groups. `ArrayStorage` also maintains per-block metadata in root attrs: `array_chunk_size_in_block`, `array_shape_in_block`, and `array_dtype_in_block` (dicts keyed by dataset name). You'll see these in every archive.
- In Embeddings, `alpha` and `num_negative_samples` apply to SGNS only and are ignored for `objective="sg"`.
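Because every archive is a standard Zarr v3 group, you can inspect one without SAWNERGY at all. The following sketch assumes only the dataset/attribute names from the table above; the exact store class depends on your zarr version.

```python
# Peek inside a RIN archive with plain zarr (dataset/attr names per the
# table above; the .zip path is whatever build_rin wrote).
import zarr

store = zarr.storage.ZipStore("RIN_demo.zip", mode="r")
root = zarr.open_group(store, mode="r")
print(dict(root.attrs))                 # frame_range, prune_low_energies_frac, ...
trans = root["ATTRACTIVE_transitions"]  # (T, N, N) row-normalized transitions
print(trans.shape, trans.dtype)
store.close()
```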
```python
from pathlib import Path
from sawnergy.logging_util import configure_logging
from sawnergy.rin import RINBuilder
from sawnergy.walks import Walker
from sawnergy.embedding import Embedder
import logging
configure_logging("./logs", file_level=logging.WARNING, console_level=logging.INFO)
# 1. Build a Residue Interaction Network archive
rin_path = Path("./RIN_demo.zip")
rin_builder = RINBuilder()
rin_builder.build_rin(
topology_file="system.prmtop",
trajectory_file="trajectory.nc",
molecule_of_interest=1,
frame_range=(1, 100),
frame_batch_size=10,
prune_low_energies_frac=0.85,
output_path=rin_path,
include_attractive=True,
include_repulsive=False
)
# 2. Sample walks from the RIN
walker = Walker(rin_path, seed=123)
walks_path = Path("./WALKS_demo.zip")
walker.sample_walks(
walk_length=16,
walks_per_node=100,
saw_frac=0.25,
include_attractive=True,
include_repulsive=False,
time_aware=False,
output_path=walks_path,
in_parallel=False
)
walker.close()
# 3. Train embeddings per frame (PyTorch backend)
import torch
embedder = Embedder(walks_path, seed=999)
embeddings_path = embedder.embed_all(
RIN_type="attr",
using="merged",
num_epochs=10,
negative_sampling=False,
window_size=4,
device="cuda" if torch.cuda.is_available() else "cpu",
model_base="torch",
output_path="./EMBEDDINGS_demo.zip"
)
print("Embeddings written to", embeddings_path)
```

For the PureML backend, set `model_base="pureml"` and pass the optimizer / scheduler classes inside `model_kwargs`.
```python
from sawnergy.visual import Visualizer
v = Visualizer("./RIN_demo.zip")
v.build_frame(1,
node_colors="rainbow",
displayed_nodes="ALL",
displayed_pairwise_attraction_for_nodes="DISPLAYED_NODES",
displayed_pairwise_repulsion_for_nodes="DISPLAYED_NODES",
show_node_labels=True,
show=True
)
```

Visualizer lazily loads datasets and works even in headless environments (it falls back to the `Agg` backend).
```python
from sawnergy.embedding import Visualizer
viz = Visualizer("./EMBEDDINGS_demo.zip", normalize_rows=True)
viz.build_frame(1, show=True)
```

- Time-aware walks: Set `time_aware=True` and provide `stickiness` and `on_no_options` when calling `Walker.sample_walks`.
- Shared memory lifecycle: Call `Walker.close()` (or use a context manager, as sketched below) to release shared-memory segments.
- PureML vs PyTorch: Select the backend at call time with `model_base="pureml"|"torch"` (defaults to `"pureml"`) and pass optimizer / scheduler overrides through `model_kwargs`.
- ArrayStorage utilities: Use `ArrayStorage` directly to peek into archives, append arrays, or manage metadata.
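Here is a hedged sketch of the context-manager pattern mentioned above, assuming `Walker` supports the `with` protocol as the tip implies; argument values mirror the quick start, and omitted parameters are assumed to take their defaults.

```python
# Sketch: scoped shared-memory lifecycle for Walker (assumes Walker
# implements __enter__/__exit__, per the tip above). Paths as in quick start.
from sawnergy.walks import Walker

with Walker("./RIN_demo.zip", seed=123) as walker:
    walker.sample_walks(
        walk_length=16,
        walks_per_node=100,
        saw_frac=0.25,
        output_path="./WALKS_ctx_demo.zip",
    )
# shared-memory segments are released when the block exits
```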
├── sawnergy/
│ ├── rin/ # RINBuilder and cpptraj integration helpers
│ ├── walks/ # Walker class and shared-memory utilities
│ ├── embedding/ # Embedder + SG/SGNS backends (PureML / PyTorch)
│ ├── visual/ # Visualizer and palette utilities
│ │
│ ├── logging_util.py
│ └── sawnergy_util.py
│
└── README.md
Issues, enhancement suggestions, and discussions are always welcome! Also, please tell your friends about the project!
A quick note: Currently, the repository is view-only and updated only through a CI/CD pipeline connected to a private development repository. Unfortunately, this means that if you submit a pull request and it gets merged, you won’t receive contributor credit on GitHub — which I know isn’t ideal.
That said (!), if you contribute via a PR at this stage, you’ll be permanently credited in both CREDITS.md and README.md. I promise that as the project grows and I start relying more on community contributions, I’ll fix this by setting up a proper CI/CD workflow via GitHub Actions, so everyone gets visible and fair credit for their work.
Thank you, and apologies for the inconvenience!
SAWNERGY builds on the AmberTools cpptraj ecosystem, NumPy, Matplotlib, Zarr, and optionally PyTorch for GPU acceleration (PureML is the default backend).
Big thanks to the upstream communities whose work makes this toolkit possible.