Vaishak Belle
@vaishakbelle.bsky.social
📤 785
📥 89
📝 334
AI and Science. Faculty at U of Edinburgh. Write and think on:
http://www.vaishakbelle.org
At the Capgemini event last week, they had a scribe to capture details from discussions. Here’s what we got from my talk - scaling is not the answer, AI could offer huge potential to (say) NHS triaging, and the future is neuro-symbolic :-)
about 12 hours ago
0
0
0
Looking forward to this event on the 23rd of September - I’ll be talking about LLMs and (if I can) neuro-symbolic AI
3 days ago
0
0
0
I'll be heading to Glasgow for a keynote at the "Playtime is Over GenAI" event by Capgemini. Title: "Large Language Models & Reasoning -- Bridging Symbolic and Neural Approaches". Thanks to Dave Ord for the invitation, and to Richard Veit.
5 days ago
0
1
0
At HAR in Paris, I have a short position paper on why logic and probability could jointly play an important role for cognition in virtual and physical robots.
7 days ago
0
2
0
Jessica and Chiara are in Santa Cruz for the neurosymbolic AI conference! They are presenting, respectively, our work on morality + verification, and on constraining ML models for counterfactual fairness via tensorised logical representations.
10 days ago
0
1
0
On the trail of a recent position paper on logic and its role for AI,
journals.sagepub.com/doi/10.1177/...
I now have an update at HAR in Paris on what specifically first-order logic can add to the picture.
12 days ago
0
0
0
Hello Göteborg! Neurosymbolic + linguistics + philosophy =
gu-clasp.github.io/LARP/index.h...
13 days ago
0
0
0
Heading to Sweden today for this talk. One of the key arguments is about the modeling of belief / theory of mind with LLMs, including symbolic executors, as well as the limitations of ascribing beliefs & anthropomorphism with these models.
14 days ago
0
1
0
Ambrose and I have a new paper at HAR in Paris: we look at LLMs solving the so-called Wason selection task, and outline the steps a system would need to take to solve it.
17 days ago
0
1
0
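For readers unfamiliar with it, the Wason selection task asks which of four cards must be flipped to test a conditional rule such as "if a card has a vowel, it has an even number". A minimal sketch of the falsification logic (function names are my own illustration, not from the paper):

```python
# Wason selection task: each card shows a letter on one side and a number
# on the other. Rule under test: "if vowel (P), then even number (Q)".
# Only cards that could falsify the rule need turning: P-cards (vowels)
# and not-Q cards (odd numbers).

def is_vowel(face):
    return face.isalpha() and face.lower() in "aeiou"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def cards_to_turn(cards):
    """Return the cards whose hidden side could falsify the rule."""
    return [c for c in cards if is_vowel(c) or is_odd_number(c)]

print(cards_to_turn(["E", "K", "4", "7"]))  # -> ['E', '7']
```

The classic human error is to turn the Q-card ("4") instead of the not-Q card ("7"); the sketch encodes why only "E" and "7" matter.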
Excited to be in Sweden next week to give a keynote talk on large and small models in the neurosymbolic AI context. Event: Language models And RePresentations (LARP) organised by the Centre for Linguistic Theory and Studies in Probability, Uni of Gothenburg
gu-clasp.github.io/LARP/index.h...
19 days ago
0
0
0
Pablo Barcelo and I have a paper at JELIA 2025, in Tbilisi, Georgia, that we are not able to attend. Tommie Meyer will present our talk; slides below -- A Uniform Language for Safety, Robustness and Explainability --
doi.org/10.5281/zeno...
21 days ago
0
0
0
Slides from my keynote talk at PFIA 2025 in Burgundy, France, in July:
pfia2025.u-bourgogne.fr/invites/
-- Neuro-symbolic systems for Responsible AI --
doi.org/10.5281/zeno...
Neuro-symbolic systems for Responsible AI
Keynote talk at PFIA 2025 in Burgundy, France: https://pfia2025.u-bourgogne.fr/invites/  Machine learning (ML) techniques have become pervasive across a range of different applications, and are now…
https://doi.org/10.5281/zenodo.16985597
24 days ago
0
2
1
Chris and I have a new paper at HAR in Paris: We discuss how beliefs are understood w.r.t. LLMs, and why Anthropomorphism puts us off our footing.
26 days ago
0
2
0
Ruta and I have a new paper accepted at HAR in Paris: we try to quantify and qualify theory-of-mind reasoning abilities in LLMs!
28 days ago
0
0
0
Jessica and I have a new paper accepted at HAR in Paris: it's on integrating cultural preferences into reinforcement learning through the so-called Cross-Cultural Moral Reward Machine framework.
about 1 month ago
0
2
0
How can users effectively assess AI model predictions? We have a new study that explores combining uncertainty estimates with explanations in human-AI collaboration; considers interaction between model uncertainty, user confidence, & trust.
doi.org/10.3389/fcom...
Frontiers | Why not both? Complementing explanations with uncertainty, and self-confidence in human-AI collaboration
https://doi.org/10.3389/fcomp.2025.1560448
about 1 month ago
0
0
0
ML courses often overlook explaining a model's decision-making process, critical in high-stakes AI domains. We have a new paper that discusses how to structure an XAI course to equip students with vital interpretability techniques!
doi.org/10.3389/fedu...
about 1 month ago
0
4
0
reposted by
Vaishak Belle
Ted Underwood
about 1 month ago
This opportunity is related to the Doing AI Differently white paper, where I'm also honored to have been one of the co-authors.
1
21
1
reposted by
Vaishak Belle
Andreas
about 1 month ago
Interesting. Looking forward to reading it!
0
3
2
Drop in to explore the intersection of art and responsible AI at the Edinburgh Art Festival's Tipping Point exhibition, by our AHRC BRAID programme: from Louise Ashcroft's "Real Stupidity" to Rachel Maclean's AI work "They've Got Your Eyes!" and the Arctic-inspired "Models of Care".
about 1 month ago
0
5
2
www.eventbrite.co.uk/e/bridging-r...
Happening today: Tipping Point. Chat with us about the artistic pieces here and AI.
about 1 month ago
0
0
0
I'll be joining Bev Hood for the Tipping Point/Authenticity Unmasked tour, where we'll discuss technical insights into the art and its implications for the field of responsible AI. If you are here for the Edinburgh Fringe, pop in!
inspace.ed.ac.uk/bridging-res...
about 1 month ago
0
0
0
Happy to be a co-author of the Doing AI Differently white paper at The Alan Turing Institute. This is a call to action for meaningful change in AI, with collective insights from the arts & humanities. And it includes a mention of neuro-symbolic AI :-).
www.turing.ac.uk/news/publica...
about 1 month ago
0
23
10
media.licdn.com/dms/document...
Have a look at our brochure on the executive education program on generative AI, fairness and regulation
about 2 months ago
0
1
0
I’m excited to be joining as the Director of Research and Innovation at the Bayes Centre. I’ll be 40% at the Bayes. I’m particularly looking to see how interest in generative AI and neurosymbolic AI could impact engagement with external stakeholders!
bayes-centre.ed.ac.uk
about 2 months ago
0
5
0
Directions in AI, Logic, LLMs and Learning: our Lab’s research Overview
https://medium.com/@vaishakbelle/directions-in-ai-logic-llms-and-learning-our-labs-research-overview-869fce8ea4ed
about 2 months ago
0
2
0
Our summary report on the Royal Society UK/Canada “frontiers in AI” event, held in Ottawa last year.
www.facetsjournal.com/doi/10.1139/...
about 2 months ago
0
1
1
Reminder about our ExecEd offering, the AI and Generative AI for Leaders Programme, covering AI and GenAI along with topics like bias and regulation.
medium.com/@vaishakbell...
about 2 months ago
0
0
0
There was an owl hovering around at Dagstuhl. This is the parliament. Neurosymbolic AI & knowledge graphs
about 2 months ago
0
2
0
If you are heading to #ACL in Vienna, do chat with Ruta Tang — we have a paper on LLMs, context-free grammars and evolutionary search.
2 months ago
0
0
0
On the Relevance of Logic for Artificial Intelligence, and the Promise of Neurosymbolic Learning - my position paper on the flavors and depth of logic for general purpose AI
doi.org/10.1177/2949...
2 months ago
0
1
0
Jessica Ciupa has a new paper accepted on "exploring verification frameworks for social choice alignment" at the Neurosymbolic Conference, 2025. This is one of the late-breaking papers.
vaishakbelle.org/papers
Papers
Toward Robots That Reason: Logic, Probability & Causal Laws.
https://vaishakbelle.org/papers
2 months ago
0
1
0
Thrilled that our children's book, The Girl and the Robot, is fully pledged. It explores a range of topics from friendship to projecting feelings onto artificial agents.
Inclusive micro-publisher (@parakeetbooks) • Instagram photo
2 likes, 1 comments - parakeetbooks on July 16, 2025: "Thank you!! Thanks everyone who has pledged to get The Girl and the Robot. There's still time to join in just head to kickstarter.com and search...
https://bit.ly/4nRGrsV
2 months ago
0
1
0
Ute Schmid on Donald Michie’s ultra strong machine learning
#dagstuhl
https://bit.ly/4lw2vrn
2 months ago
0
1
0
Ute Schmid on the origins of the neuro-symbolic argument
https://bit.ly/4loCcTY
2 months ago
0
0
0
Artur Garcez at Dagstuhl on neurosymbolic AI
https://bit.ly/44OJ1r2
2 months ago
0
1
0
Nicolas Papernot and I co-chaired the Royal Society UK+Canada event last year, with invited talks by Yoshua Bengio and others. Read our summary article in the FACETS journal.
https://bit.ly/40gEFr8
2 months ago
0
0
0
The Dagstuhl castle seminar on neuro-symbolic AI and knowledge graphs
bit.ly/4lLgKIG
2 months ago
0
0
0
Pablo Barcelo and I have a new paper at JELIA aiming to unify the definitions of fairness, robustness, and explainability, including abductive explanations as well as semantic loss. “A Uniform Language for Safety, Robustness and Explainability.”
2 months ago
1
0
0
Parakeet Books were on the book club podcast, talking about our new children's book, The Girl and the Robot.
https://bit.ly/46j8VFN
2 months ago
0
0
0
reposted by
Vaishak Belle
The Royal Society of Canada // La Société royale du Canada
2 months ago
A new
@facetsjournal.bsky.social
report, from
@vaishakbelle.bsky.social
and
@nicolaspapernot.bsky.social
(RSC College), briefly summarizes the objectives and outcomes from the 2024 Frontiers of Science: Artificial Intelligence Conference, which was held in Ottawa. 🔗 Find out more:
bit.ly/3Ij2Q1X
0
3
2
reposted by
Vaishak Belle
FACETS Journal
2 months ago
New 📰| UK/Canada Frontiers of Science: Artificial Intelligence Report ✨
buff.ly/Dy3olJS
@vaishakbelle.bsky.social
& @nicolaspapernot.bsky.social This report briefly summarizes the objectives and outcomes from the Frontiers of Science: Artificial Intelligence Conference. 📷 from
@src-rsc.bsky.social
https://www.facetsjournal.com/doi/10.1139/facets-2024-0346
0
2
1
Next week I’ll be in Germany for the Dagstuhl Seminar on Neurosymbolic AI and Knowledge Graphs. See you there if you are coming!
Dagstuhl Seminar 25291: (Actual) Neurosymbolic AI: Combining Deep Learning and Knowledge Graphs
In the past decade, both deep learning (DL) and knowledge graphs (KGs) have seen astonishing growth and groundbreaking milestones – DL due to newly available resources (e.g., accessibility of (modern)...
https://bit.ly/4nEUrpN
2 months ago
0
0
0
Morality & AI: what’s to be said? Our Lab’s perspective—
https://buff.ly/zxg3nqo
2 months ago
0
1
0
Interested in GANs and in logic, and how to bridge them? Nijesh has a new paper at ICLP on how to use tensorized logical formulas: "Logic Tensor Network-Enhanced Generative Adversarial Network".
2 months ago
0
0
0
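For a flavour of what "tensorized logical formulas" means here: in Logic Tensor Networks, truth values live in [0, 1] and logical connectives become differentiable tensor operations. A minimal sketch, with connective and aggregator choices picked for illustration (not taken from the paper):

```python
import numpy as np

# Fuzzy-logic connectives over tensors of truth degrees in [0, 1],
# in the spirit of Logic Tensor Networks. All ops are differentiable,
# so a formula's truth degree can serve as a training loss term.

def t_and(a, b):
    # Product t-norm for conjunction.
    return a * b

def t_implies(a, b):
    # Reichenbach implication: 1 - a + a*b.
    return 1.0 - a + a * b

def forall(truths):
    # Universal quantifier approximated by a mean aggregator.
    return float(np.mean(truths))

# Truth degrees of predicates P(x) and Q(x) over a three-element domain:
p = np.array([0.9, 0.8, 0.2])
q = np.array([0.95, 0.7, 0.1])

# Degree to which "forall x. P(x) -> Q(x)" holds:
score = forall(t_implies(p, q))
```

Because the score is differentiable in the predicate outputs, it can be added to a GAN's objective to softly constrain the generator toward logically consistent samples.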
We have a new result on abduction (guessing explanations for observations) + epistemic modal logic, accepted at ICLP. With Sanderson Molick. "Propositional Abduction via Only-Knowing: A Non-Monotonic Approach."
3 months ago
0
0
0
What we need is not LLM-based reviews, dubbed "perspectives" by the misguided. Rather, we need to stop reviewers from using LLMs to do their work. So build agents to spot that.
3 months ago
1
0
0
Modeling the non-quantifiable harms in responsible AI is challenging: informal language is both a barrier and a bridge.
3 months ago
0
1
0
My slide on where neurosymbolic AI could / should go, involving first-order, dynamic and modal logics, for modeling beliefs, norms and causality.
3 months ago
2
3
1
Was lovely to connect and reconnect with Francois, Lydia, Anaelle, Henri and many others at PFIA:
pfia2025.u-bourgogne.fr
Slides coming up.
3 months ago
0
0
0