updated gekcs md and added iclr2024 news
loreloc committed Jan 16, 2024
1 parent 64363c6 commit 165234d
Showing 3 changed files with 27 additions and 19 deletions.
7 changes: 7 additions & 0 deletions _news/iclr2024.md
@@ -0,0 +1,7 @@
+---
+title: "One paper accepted at ICLR 2024"
+collection: news
+permalink: /news/iclr-2024
+date: 2024-01-16
+---
+One paper accepted at <b><i>ICLR 2024</i></b> on <a href="https://openreview.net/forum?id=xIHi5nxu9P"><b>how to represent and learn deep mixture models encoding subtractions via squaring</b></a> (as <b><note>spotlight</note></b>!).
18 changes: 9 additions & 9 deletions _publications/loconte2023gekcs.md
@@ -8,18 +8,18 @@ tags: nesy kge circuits constraints
 image: "/images/papers/loconte2023gekcs/gekcs.png"
 spotlight: "/images/papers/loconte2023gekcs/gekcs-spotlight.png"
 authors: "Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari"
-paperurl: "https://arxiv.org/abs/2305.15944"
-pdf: "https://arxiv.org/pdf/2305.15944.pdf"
+paperurl: "https://openreview.net/forum?id=RSGNGiB1q4"
+pdf: "https://openreview.net/pdf?id=RSGNGiB1q4"
 venue: "NeurIPS 2023"
 award: "oral (top 0.6%)"
 code: "https://github.com/april-tools/gekcs"
-excerpt: "KGE models such as CP, RESCAL, TuckER, ComplEx can be re-interpreted as circuits to unlock their generative capabilities, scaling up learning and guaranteeing the satisfaction of logical constraints by design."
+excerpt: "KGE models such as CP, RESCAL, TuckER, ComplEx can be re-interpreted as circuits to unlock their generative capabilities, scaling up inference and learning and guaranteeing the satisfaction of logical constraints by design."
 abstract: "Some of the most successful knowledge graph embedding (KGE) models for link prediction -- CP, RESCAL, TuckER, ComplEx -- can be interpreted as energy-based models. Under this perspective they are not amenable for exact maximum-likelihood estimation (MLE), sampling and struggle to integrate logical constraints. This work re-interprets the score functions of these KGEs as circuits -- constrained computational graphs allowing efficient marginalisation. Then, we design two recipes to obtain efficient generative circuit models by either restricting their activations to be non-negative or squaring their outputs. Our interpretation comes with little or no loss of performance for link prediction, while the circuits framework unlocks exact learning by MLE, efficient sampling of new triples, and guarantee that logical constraints are satisfied by design. Furthermore, our models scale more gracefully than the original KGEs on graphs with millions of entities. "
 supplemental:
-bibtex: "@article{loconte2023gekcs,<br/>
-title={How to Turn Your Knowledge Graph Embeddings into Generative Models via Probabilistic Circuits},<br/>
-author={Loconte, Lorenzo and Di Mauro, Nicola and Peharz, Robert and Vergari, Antonio},<br/>
-journal={arXiv preprint arXiv:2305.15944},<br/>
-year={2023}
-}"
+bibtex: "@inproceedings{loconte2023how,<br/>
+title={How to Turn Your Knowledge Graph Embeddings into Generative Models},<br/>
+author={Lorenzo Loconte and Nicola Di Mauro and Robert Peharz and Antonio Vergari},<br/>
+booktitle={Thirty-seventh Conference on Neural Information Processing Systems},<br/>
+year={2023},<br/>
+url={https://openreview.net/forum?id=RSGNGiB1q4}}"
 ---
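The excerpt above compresses the paper's recipe: read a KGE score function (here, CP-style) as a circuit, then square it to obtain a tractable generative model over triples. A minimal sketch of that idea, not the authors' gekcs code: the toy sizes, shared subject/object embeddings, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, k = 30, 5, 4          # toy sizes (hypothetical)
E = rng.normal(size=(n_ent, k))     # entity embeddings (shared for subjects/objects)
R = rng.normal(size=(n_rel, k))     # relation embeddings

def cp_score(s, r, o):
    # CP-style trilinear score of a triple (subject, relation, object)
    return float(np.sum(E[s] * R[r] * E[o]))

# Squaring makes every triple's mass non-negative, and the circuit view makes
# the partition function tractable: summing the squared score over all
# n_ent * n_rel * n_ent triples factorises through k x k Gram matrices,
#   Z = sum_{k,k'} (E^T E)_{kk'} (R^T R)_{kk'} (E^T E)_{kk'}
G_E, G_R = E.T @ E, R.T @ R
Z = float(np.sum(G_E * G_R * G_E))

def triple_prob(s, r, o):
    # Exact probability of one triple under the squared model
    return cp_score(s, r, o) ** 2 / Z

# Sanity check: brute-force enumeration over all triples recovers total mass 1
total = sum(triple_prob(s, r, o)
            for s in range(n_ent) for r in range(n_rel) for o in range(n_ent))
```

The O(k^2) Gram-matrix contraction replaces enumeration over all triples, which is the kind of tractable marginalisation the circuit interpretation buys.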
21 changes: 11 additions & 10 deletions _publications/loconte2023subtractive.md
@@ -3,22 +3,23 @@ collection: publications
 ref: "loconte2023subtractive"
 permalink: "publications/loconte2023subtractive"
 title: "Subtractive Mixture Models via Squaring: Representation and Learning"
-date: 2023-09-30 00:00
+date: 2024-01-16 00:00
 tags: circuits probml
 image: "/images/papers/loconte2023subtractive/subtractive-ring.png"
 spotlight: "/images/papers/loconte2023subtractive/subtractive-spotlight.png"
 authors: "Lorenzo Loconte, Aleksanteri M. Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, Antonio Vergari"
-paperurl: "https://arxiv.org/abs/2310.00724"
-pdf: "https://arxiv.org/abs/2310.00724"
-venue: "arXiv 2023"
-code:
+paperurl: "https://openreview.net/forum?id=xIHi5nxu9P"
+pdf: "https://openreview.net/pdf?id=xIHi5nxu9P"
+venue: "ICLR 2024"
+award: "spotlight (top 5%)"
+code: "https://github.com/april-tools/squared-npcs"
 excerpt: "We propose to build (deep) subtractive mixture models by squaring circuits. We theoretically prove their expressiveness by deriving an exponential lowerbound on the size of circuits with positive parameters only."
 abstract: "Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of components needed to model complex distributions. However, learning such subtractive mixtures while ensuring they still encode a non-negative function is challenging. We investigate how to learn and perform inference on deep subtractive mixtures by squaring them. We do this in the framework of probabilistic circuits, which enable us to represent tensorized mixtures and generalize several other subtractive models. We theoretically prove that the class of squared circuits allowing subtractions can be exponentially more expressive than traditional additive mixtures; and, we empirically show this increased expressiveness on a series of real-world distribution estimation tasks."
 supplemental:
-bibtex: "@article{loconte2023subtractive,<br/>
+bibtex: "@inproceedings{loconte2024subtractive,<br/>
 title={Subtractive Mixture Models via Squaring: Representation and Learning},<br/>
-author={Lorenzo Loconte and Aleksanteri M. Sladek and Stefan Mengel and Martin Trapp and Arno Solin and Nicolas Gillis and Antonio Vergari},<br/>
-journal={arXiv preprint arXiv:2310.00724},<br/>
-year={2023}
-}"
+author={Loconte, Lorenzo and Aleksanteri, M. Sladek and Mengel, Stefan and Trapp, Martin and Solin, Arno and Gillis, Nicolas and Vergari, Antonio},<br/>
+booktitle={The Twelfth International Conference on Learning Representations},<br/>
+year={2024},<br/>
+url={https://openreview.net/forum?id=xIHi5nxu9P}}"
 ---
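The abstract's core trick, squaring a mixture so that negative weights remain admissible, can be illustrated in one dimension. This is a minimal sketch under stated assumptions, not the paper's squared-npcs implementation: the two-component weights and Gaussian parameters are toy choices, and the normaliser uses the closed-form integral of a product of Gaussians.

```python
import numpy as np

def gauss(x, mu, var):
    # Univariate Gaussian density N(x; mu, var)
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Two components sharing a mean, one with a NEGATIVE weight: the raw mixture
# dips below zero, so it is not a density -- but its square, renormalised, is.
w   = np.array([1.0, -0.6])         # hypothetical toy parameters
mu  = np.array([0.0,  0.0])
var = np.array([1.0,  0.1])

# Z = sum_{i,j} w_i w_j * integral of f_i(x) f_j(x) dx; for Gaussians each
# pairwise integral has the closed form N(mu_i; mu_j, var_i + var_j)
Z = sum(w[i] * w[j] * gauss(mu[i], mu[j], var[i] + var[j])
        for i in range(2) for j in range(2))

def squared_pdf(x):
    raw = w[0] * gauss(x, mu[0], var[0]) + w[1] * gauss(x, mu[1], var[1])
    return raw ** 2 / Z

xs = np.linspace(-10.0, 10.0, 20001)
mass = np.sum(squared_pdf(xs)) * (xs[1] - xs[0])   # numerically close to 1.0
```

The subtracted narrow component carves a dip around zero that a two-component additive mixture cannot express, and the closed-form Z is what makes exact inference on the squared model feasible.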
