AI + FM papers
@ai-fm-papers.bsky.social
A feed of interesting AI / math / formal methods papers. Posts by
@m-dodds.bsky.social
Automated Proof Generation for Rust Code via Self-Evolution “we introduce SAFE, a novel framework that overcomes the lack of human-written proof […] [SAFE achieves] a 70.50% accuracy rate in a benchmark crafted by human experts, [vs] GPT-4o's performance of 24.46%”
arxiv.org/abs/2410.15756
Automated Proof Generation for Rust Code via Self-Evolution
Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pre...
https://arxiv.org/abs/2410.15756
6 months ago
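SAFE's self-evolution idea can be pictured as a generate–verify–retrain loop: sample candidate proofs, keep only the ones the verifier accepts, and fine-tune on the survivors. A minimal toy sketch of that loop, where `draft_proofs` and `verifier_accepts` are invented stand-ins for the LLM and the Verus verifier, not SAFE's actual interface:

```python
def draft_proofs(spec, attempts):
    # Stand-in for LLM sampling: more attempts yield more candidate proofs.
    return [f"{spec}::cand{i}" for i in range(attempts)]

def verifier_accepts(proof):
    # Stand-in for Verus: deterministically accepts every third candidate.
    return int(proof.rsplit("cand", 1)[1]) % 3 == 0

def self_evolve(specs, rounds=3):
    corpus, attempts = [], 2
    for _ in range(rounds):
        for spec in specs:
            # Keep only verifier-approved proofs as new training data.
            corpus += [p for p in draft_proofs(spec, attempts) if verifier_accepts(p)]
        # Fine-tuning on `corpus` would raise the success rate; modeled here
        # as the model affording more candidate samples per spec.
        attempts += 2
    return corpus

corpus = self_evolve(["abs_spec", "max_spec"])
```

The key property the paper exploits is that the verifier is a perfect filter, so the growing corpus contains no unsound proofs even without human labels.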
Can LLMs Enable Verification in Mainstream Programming? “… we explore the ability of LLMs to produce verified code in three verification languages (Dafny, Nagini, and Verus) […] we use manually curated datasets derived from the state-of-the-art Python benchmark, HumanEval”
arxiv.org/abs/2503.14183
Can LLMs Enable Verification in Mainstream Programming?
Although formal methods are capable of producing reliable software, they have seen minimal adoption in everyday programming. Automatic code generation using large language models is becoming increasin...
https://arxiv.org/abs/2503.14183
6 months ago
Formal Verification is Overrated “Zac Hatfield-Dodds [argues] that relying solely on verification methods may not provide real AI safety”
youtu.be/bs5snugP1VA?...
Zac Hatfield-Dodds – Formal Verification is Overrated [Alignment Workshop]
YouTube video by FAR.AI
https://youtu.be/bs5snugP1VA?si=5s6-lIdaL5TjzdOi
8 months ago
Proving the Coding Interview: A Benchmark for Formally Verified Code Generation “We introduce the Formally Verified Automated Programming Progress Standards, or FVAPPS, a benchmark of 4715 samples […] including 1083 curated and quality controlled samples”
arxiv.org/abs/2502.05714
Proving the Coding Interview: A Benchmark for Formally Verified Code Generation
We introduce the Formally Verified Automated Programming Progress Standards, or FVAPPS, a benchmark of 4715 samples for writing programs and proving their correctness, the largest formal verification ...
https://arxiv.org/abs/2502.05714
8 months ago
reposted by
AI + FM papers
Patrick Shafto
8 months ago
Super excited: my new @darpa program on AI for pure mathematics! Exponentiating Mathematics (expMath) aims to accelerate the rate of progress in pure math through the development of an AI collaborator and new professional-level math benchmarks.
sam.gov/opp/4def3c13...
LLM-Assisted Static Analysis for Detecting Security Vulnerabilities "[We combine] LLMs with static analysis to perform whole-repository reasoning for security vulnerability detection. [...] IRIS leverages LLMs to infer taint specifications and perform contextual analysis"
arxiv.org/abs/2405.17238
LLM-Assisted Static Analysis for Detecting Security Vulnerabilities
Software is prone to security vulnerabilities. Program analysis tools to detect them have limited effectiveness in practice due to their reliance on human labeled specifications. Large language models...
https://arxiv.org/abs/2405.17238
8 months ago
AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement "AlphaVerus [is] a self-improving framework that bootstraps formally verified code generation by iteratively translating programs from a higher-resource language […]"
arxiv.org/abs/2412.06176
AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement
Automated code generation with large language models has gained significant traction, but there remains no guarantee on the correctness of generated code. We aim to use formal verification to provide ...
https://arxiv.org/abs/2412.06176
9 months ago
VerifAI: AI Verification in the Wild @ ICLR 2025 "This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification."
verifai-workshop.github.io
The VerifAI Workshop
VerifAI: AI Verification in the Wild @ ICLR 2025
https://verifai-workshop.github.io
9 months ago
Laurel: Generating Dafny Assertions Using Large Language Models "...we propose Laurel, a tool that uses LLMs to automatically generate helper assertions for Dafny [...] Laurel is able to generate over 50% of the required helper assertions given only a few attempts"
arxiv.org/abs/2405.16792
Laurel: Generating Dafny Assertions Using Large Language Models
Dafny is a popular verification language, which automates proofs by outsourcing them to an SMT solver. This automation is not perfect, however, and the solver often requires guidance in the form of he...
https://arxiv.org/abs/2405.16792
10 months ago
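Laurel's loop has a familiar counterexample-guided shape: attempt verification, and on failure ask an LLM for a helper assertion near the failing location, insert it, and retry. A toy sketch of that shape, where `run_dafny` and `suggest_assertion` are hypothetical stand-ins for the Dafny verifier and the LLM, not Laurel's real interface:

```python
def run_dafny(method_body):
    # Stand-in for the Dafny verifier: in this toy, verification succeeds
    # once any helper assertion is present; otherwise report line 1 failing.
    ok = any(line.strip().startswith("assert") for line in method_body)
    return ok, (None if ok else 1)

def suggest_assertion(method_body, failing_line, attempt):
    # Stand-in for the LLM proposing a helper assertion near the failure.
    return f"assert candidate_{attempt};"

def repair(method_body, max_attempts=3):
    for attempt in range(max_attempts):
        ok, loc = run_dafny(method_body)
        if ok:
            return method_body, attempt
        hint = suggest_assertion(method_body, loc, attempt)
        method_body = method_body[:loc] + [hint] + method_body[loc:]
    ok, _ = run_dafny(method_body)
    return method_body, (max_attempts if ok else None)

body = ["method Sum(n: int) returns (s: int)", "  s := n * (n + 1) / 2;"]
fixed, attempts_used = repair(body)
```

The verifier's error location is what lets the tool place candidate assertions where the SMT solver actually needs guidance, rather than searching the whole method.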
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs - “we introduce […] a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems”
arxiv.org/abs/2210.12283
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and only acc...
https://arxiv.org/abs/2210.12283
10 months ago
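The Draft, Sketch, and Prove pipeline turns an informal argument into a formal skeleton whose remaining gaps are handed to an automated prover. A hypothetical Lean-style illustration (theorem and steps invented for this post, not taken from the paper), with `sorry` marking the delegated sub-problems:

```lean
-- Informal draft: "n² + n = n(n+1), a product of consecutive numbers, so it is even."
theorem sq_add_self_even (n : ℕ) : Even (n ^ 2 + n) := by
  -- Sketch step 1: rewrite as a product of consecutive naturals.
  have h : n ^ 2 + n = n * (n + 1) := by
    sorry -- easy algebraic sub-problem, delegated to the automated prover
  rw [h]
  -- Sketch step 2: one of n and n + 1 is even, so the product is even.
  sorry -- delegated sub-problem
```

Each `sorry` is a much smaller search problem than the original goal, which is why directing the prover at the sub-problems works better than attacking the theorem whole.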
“Today, we’re adding Automated Reasoning checks (preview) as a new safeguard in Amazon Bedrock Guardrails to help you mathematically validate the accuracy of responses generated by large language models (LLMs)”
aws.amazon.com/blogs/aws/pr...
Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks (preview) | Amazon Web Services
Enhance conversational AI accuracy with Automated Reasoning checks - first and only gen AI safeguard that helps reduce hallucinations by encoding domain rules into verifiable policies.
https://aws.amazon.com/blogs/aws/prevent-factual-errors-from-llm-hallucinations-with-mathematically-sound-automated-reasoning-checks-preview/
10 months ago
reposted by
AI + FM papers
Swarat Chaudhuri
10 months ago
Since all my Twitter content is now gone, I will start reposting some of it here. Here are the slides for my talk on the coming wave of ML-accelerated formal methods, given at the Isaac Newton Institute last month. May interest some of you.
drive.google.com/file/d/1ybQx...
Big Sleep: Google’s Project Zero team finds a real vulnerability in SQLite using an LLM-based agent
googleprojectzero.blogspot.com/2024/10/from...
From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code
Posted by the Big Sleep team Introduction In our previous post, Project Naptime: Evaluating Offensive Security Capabilities of Large L...
https://googleprojectzero.blogspot.com/2024/10/from-naptime-to-big-sleep.html?m=1
10 months ago
"Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming" - LLM-based proof generation for F*, and a 600K LoC dataset of F* programs and proofs, suitable for ML applications. Impressive results synthesizing real-world proofs about programs!
arxiv.org/abs/2405.01787
Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming
Proof-oriented programs mix computational content with proofs of program correctness. However, the human effort involved in programming and proving is still substantial, despite the use of Satisfiabil...
https://arxiv.org/abs/2405.01787
10 months ago
Grammar-Aligned Decoding - "[We propose] a decoding algorithm that guarantees the output to be grammatical while provably producing outputs that match the conditional probability of the LLM's distribution conditioned on the given grammar constraint"
arxiv.org/abs/2405.21047
@lorisdanto.bsky.social
Grammar-Aligned Decoding
Large Language Models (LLMs) struggle with reliably generating highly structured outputs, such as program code, mathematical formulas, or well-formed markup. Constrained decoding approaches mitigate t...
https://arxiv.org/abs/2405.21047
10 months ago
A new benchmark for LLM capabilities on AI R&D tasks
metr.org/blog/2024-11...
Evaluating frontier AI R&D capabilities of language model agents against human experts
We’re releasing RE-Bench, a new benchmark for measuring the performance of humans and frontier model agents on ML research engineering tasks. We also share data from 71 human expert attempts and resul...
https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/
10 months ago
Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics
arxiv.org/abs/2410.21272
Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics
Do large language models (LLMs) solve reasoning tasks by learning robust generalizable algorithms, or do they memorize training data? To investigate this question, we use arithmetic reasoning as a rep...
https://arxiv.org/abs/2410.21272
10 months ago