Commit ac7072d

adding vankrieken2024indep

arranger1044 committed Apr 22, 2024
1 parent 95eac40
Showing 3 changed files with 29 additions and 0 deletions.
7 changes: 7 additions & 0 deletions _news/indep2024preprint.md
@@ -0,0 +1,7 @@
---
title: "Independence Assumption NeSy"
collection: news
permalink: /news/indep-nesy
date: 2024-04-12
---
New preprint on <a href="https://arxiv.org/abs/2404.08458"><b>what common assumptions in neurosymbolic systems imply in terms of expressiveness and learnability</b></a>.
22 changes: 22 additions & 0 deletions _publications/vankrieken2024indep.md
@@ -0,0 +1,22 @@
---
collection: publications
ref: "vankrieken2024indep"
permalink: "publications/vankrieken2024indep"
title: "On the Independence Assumption in Neurosymbolic Learning"
date: 2024-04-12 00:00
tags: nesy reasoning uq
image: "/images/papers/vankrieken2024indep/indep.png"
authors: "Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari"
paperurl: "https://arxiv.org/abs/2404.08458"
pdf: "https://arxiv.org/pdf/2404.08458.pdf"
venue: "arXiv 2024"
excerpt: "We theoretically analyze the common assumption that many NeSy models -- from the semantic loss to deep Problog -- do: the independence among terms of a logical formula, and highlight how this biases learning and make some solutions impossible to retrieve."
abstract: "State-of-the-art neurosymbolic learning systems use probabilistic reasoning to guide neural networks towards predictions that conform to logical constraints over symbols. Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning. We study and criticise this assumption, highlighting how it can hinder optimisation and prevent uncertainty quantification. We prove that loss functions bias conditionally independent neural networks to become overconfident in their predictions. As a result, they are unable to represent uncertainty over multiple valid options. Furthermore, we prove that these loss functions are difficult to optimise: they are non-convex, and their minima are usually highly disconnected. Our theoretical analysis gives the foundation for replacing the conditional independence assumption and designing more expressive neurosymbolic probabilistic models."
supplemental:
bibtex: "@article{vankrieken2024indep,<br/>
title={On the Independence Assumption in Neurosymbolic Learning},<br/>
author={Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari,<br/>
journal={arXiv preprint arXiv:404.08458},<br/>
year={2024}
}"
---
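
For context on the assumption the abstract refers to, here is a minimal sketch (not the paper's code; the function and constraint are my own toy example) of how a semantic loss is typically computed under the conditional independence assumption, for an "exactly one of two symbols is true" constraint:

```python
import math

# Minimal sketch (hypothetical, not the paper's code): a semantic loss for the
# constraint "exactly one of two symbols is true", computed under the
# conditional independence assumption p(y1, y2 | x) = p(y1 | x) * p(y2 | x).
def semantic_loss_xor(p1: float, p2: float) -> float:
    # Weighted model count under independence: the satisfying worlds
    # are (1, 0) and (0, 1).
    prob_sat = p1 * (1.0 - p2) + (1.0 - p1) * p2
    # Semantic loss = negative log-probability of satisfying the constraint.
    return -math.log(prob_sat + 1e-12)

# At the maximally uncertain point the constraint holds with probability 0.5,
# so the loss is -log(0.5) > 0; it only reaches 0 at the deterministic worlds
# (1, 0) and (0, 1). Under independence no minimum keeps uncertainty over both
# valid options, illustrating the overconfident, disconnected minima the
# abstract describes.
print(semantic_loss_xor(0.5, 0.5))  # ~0.693
print(semantic_loss_xor(1.0, 0.0))  # ~0.0
```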
Binary file added images/papers/vankrieken2024indep/indep.png
