
Commit

Added calanzone2024locolm
arranger1044 committed Apr 21, 2024
1 parent 3afe55c commit 1e89ef9
Showing 3 changed files with 28 additions and 0 deletions.
7 changes: 7 additions & 0 deletions _news/locolm-accepted-r2fm.md
@@ -0,0 +1,7 @@
---
title: "Loco LM @ R2FM"
collection: news
permalink: /news/loco-lm-r2fm
date: 2024-03-06
---
How to make LLMs more logically consistent? Check <a href="https://openreview.net/forum?id=q3SGbfj19d"><b>our work</b></a> at the <a href="https://iclr-r2fm.github.io/"><b>ICLR 2024 Workshop on Reliable and Responsible Foundation Models</b></a>.
21 changes: 21 additions & 0 deletions _publications/calanzone2024locolm.md
@@ -0,0 +1,21 @@
---
collection: publications
ref: "calanzone2024locolm"
permalink: "publications/calanzone2024locolm"
title: "Galerkin meets Laplace: Fast uncertainty estimation in neural PDEs"
date: 2024-03-05 10:00
tags: nesy probml llm
image: "/images/papers/calanzone2024locolm/locolm.png"
authors: "Diego Calanzone, Antonio Vergari, Stefano Teso"
paperurl: "https://openreview.net/forum?id=q3SGbfj19d"
pdf: "https://openreview.net/pdf?id=q3SGbfj19d"
venue: "R2FM Workshop @ ICLR 2024"
excerpt: "We introduce a training objective based on principled probabilistic reasoning that teaches a LLM to be logically consistent with a set of external facts and rules, allowing to extrapolate to unseen but semantically similar factual knowledge."
abstract: "Large language models (LLMs) are a promising venue for natural language understanding and generation tasks. However, current LLMs are far from reliable: they are prone to generate non-factual information and, more crucially, to contradict themselves when prompted to reason about beliefs of the world. These problems are currently addressed with large scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and introduce a training objective based on principled probabilistic reasoning that teaches a LLM to be consistent with external knowledge in the form of a set of facts and rules. Fine-tuning with our loss on a limited set of facts enables our LLMs to be more logically consistent than previous baselines and allows them to extrapolate to unseen but semantically similar factual knowledge more systematically."
bibtex: "@inproceedings{calanzone2024locolm,
title={Towards Logically Consistent Language Models via Probabilistic Reasoning},
author={Diego Calanzone and Antonio Vergari and Stefano Teso},
booktitle={ICLR 2024 Workshop on Reliable and Responsible Foundation Models},
year={2024}
}"
---
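
The excerpt and abstract above describe a training objective that rewards the LLM for assigning high probability to truth assignments that satisfy external facts and rules. As a rough, hypothetical illustration only (not the paper's code), the sketch below shows a semantic-loss-style penalty for a single rule `(f1 AND f2) -> f3`, assuming the model exposes a scalar belief `p_i` that each fact `f_i` is true (e.g. a normalised "True"-token probability) and that the facts are treated as independent; all names here are illustrative.

```python
# Hypothetical sketch (not the authors' implementation): a penalty that encourages
# an LLM's factual beliefs to satisfy the rule (f1 AND f2) -> f3.
# Assumption: p1, p2, p3 are the model's probabilities that facts f1, f2, f3 hold,
# treated as independent for this toy example.
import torch

def prob_rule_satisfied(p1, p2, p3):
    # The implication fails only when f1 and f2 hold but f3 does not,
    # so P(satisfied) = 1 - p1 * p2 * (1 - p3).
    return 1.0 - p1 * p2 * (1.0 - p3)

def consistency_loss(p1, p2, p3, eps=1e-9):
    # Negative log-probability of satisfying the constraint; minimising it pushes
    # the model's beliefs towards truth assignments allowed by the rule.
    return -torch.log(prob_rule_satisfied(p1, p2, p3) + eps)

# Toy check: beliefs that violate the rule incur a large penalty.
p1, p2, p3 = torch.tensor(0.9), torch.tensor(0.8), torch.tensor(0.1)
print(consistency_loss(p1, p2, p3).item())  # ~ -log(0.352), a sizeable loss
```

The actual objective in the paper reasons probabilistically over a whole set of facts and rules; this snippet only illustrates why maximising the probability of constraint satisfaction gives a differentiable way to inject logical knowledge during fine-tuning.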
Binary file added images/papers/calanzone2024locolm/locolm.png
