Graph generative models #26

@karalets

Description

Graph generative models are important for the tasks we have been describing.

The core idea is to posit a model that defines a distribution over graphs P(G), for instance via a low-dimensional latent variable Z with prior P(Z).

Example: P(G, Z) = P(G|Z) P(Z), as in the class of models commonly described as graph VAEs.
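
To make the factorization concrete, here is a minimal sketch of a graph VAE in the spirit of Kipf & Welling's VGAE: a GCN-style encoder gives q(Z|G), an inner-product decoder gives P(G|Z), and the loss is reconstruction plus KL to the N(0, I) prior P(Z). Only PyTorch is assumed; the class name, layer sizes, and toy data below are illustrative, not a reference implementation.

```python
# Minimal sketch of P(G, Z) = P(G | Z) P(Z) as a variational graph autoencoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphVAE(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int = 32, latent_dim: int = 16):
        super().__init__()
        # Shared GCN-style first layer: H = ReLU(A_hat X W0)
        self.lin0 = nn.Linear(num_features, hidden_dim, bias=False)
        # Separate heads for the mean and log-variance of q(Z | G)
        self.lin_mu = nn.Linear(hidden_dim, latent_dim, bias=False)
        self.lin_logvar = nn.Linear(hidden_dim, latent_dim, bias=False)

    @staticmethod
    def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

    def encode(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin0(x))
        return adj_norm @ self.lin_mu(h), adj_norm @ self.lin_logvar(h)

    def decode(self, z):
        # P(G | Z): edge probabilities from inner products of node latents
        return torch.sigmoid(z @ z.t())

    def forward(self, x, adj):
        adj_norm = self.normalize_adj(adj)
        mu, logvar = self.encode(x, adj_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decode(z), mu, logvar


def vae_loss(adj_recon, adj_target, mu, logvar):
    # Bernoulli reconstruction term over edges + KL divergence to the N(0, I) prior P(Z)
    recon = F.binary_cross_entropy(adj_recon, adj_target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    # Toy example: a random symmetric 10-node graph with 5-dimensional node features.
    n, d = 10, 5
    adj = (torch.rand(n, n) > 0.7).float()
    adj = torch.triu(adj, diagonal=1)
    adj = adj + adj.t()
    x = torch.randn(n, d)

    model = GraphVAE(num_features=d)
    adj_recon, mu, logvar = model(x, adj)
    print("ELBO-style loss:", vae_loss(adj_recon, adj, mu, logvar).item())
```

Sampling new graphs then just means drawing Z from the prior and thresholding (or sampling from) the decoder's edge probabilities; the alternatives below package the generative story differently.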

That covers the probabilistic modeling of graphs; there is also the self-supervised world, which seems to represent P(G) implicitly without as clear a generative story.
There are also purely deep-learning approaches, such as transformer-style and autoregressive graph models.

I am unsure how these approaches compare empirically in terms of performance.

In this issue, I suggest we survey the landscape of these models and their empirical comparisons, and sketch out a strategy for comparing them.

I referenced some related work in a previous issue (#24); I propose we move that discussion here.
