Chicago Human+AI Lab
@chicagohai.bsky.social
https://chicagohai.github.io/
https://substack.com/@cichicago
reposted by
Chicago Human+AI Lab
Dang Nguyen
3 months ago
📣 Announcing our poster session at COLM 2025: On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions I will talk about biases in LLMs and how to mitigate them. Come say hi! Poster #43, 4:30 PM
reposted by
Chicago Human+AI Lab
3 months ago
Excited to host the first talk in the AI & Scientific Discovery online seminar on Friday at 12pm ET / 11am CT / 9am PT, by the awesome Lei Li from CMU! 🧪 Generative AI for Functional Protein Design 🤖
#artificialintelligence
#scientificdiscovery
ai-scientific-discovery.github.io
Home-grown at CHAI and
@uchicagoci.bsky.social
!! The first ever AI-driven game from academia 🎮 Give it a go and let us know your rank on the leaderboard!
3 months ago
reposted by
Chicago Human+AI Lab
3 months ago
🚀 We’re thrilled to announce the upcoming AI & Scientific Discovery online seminar! We have an amazing lineup of speakers. This series will dive into how AI is accelerating research, enabling breakthroughs, and shaping the future of research across disciplines.
ai-scientific-discovery.github.io
reposted by
Chicago Human+AI Lab
3 months ago
As AI becomes increasingly capable of conducting analyses and following instructions, my prediction is that the role of scientists will increasingly focus on identifying and selecting important problems to work on ("selector"), and effectively evaluating analyses performed by AI ("evaluator").
reposted by
Chicago Human+AI Lab
4 months ago
We are proposing the second workshop on AI & Scientific Discovery at EACL/ACL. The workshop will explore how AI can advance scientific discovery. Please use this Google form to indicate your interest (corrected link):
forms.gle/MFcdKYnckNno...
More in the 🧵! Please share!
#MLSky
🧠
Program Committee Interest for the Second Workshop on AI & Scientific Discovery
https://forms.gle/MFcdKYnckNnohqap9
reposted by
Chicago Human+AI Lab
Xiaoyan Bai
5 months ago
⚡️Ever asked an LLM-as-Marilyn Monroe about the 2020 election? Our paper calls this concept incongruence, common in both AI and how humans create and reason. 🧠Read my blog to learn what we found, why it matters for AI safety and creativity, and what's next:
cichicago.substack.com/p/concept-in...
reposted by
Chicago Human+AI Lab
6 months ago
Prompting is our most successful tool for exploring LLMs, but the term evokes eye-rolls and grimaces from scientists. Why? Because prompting as scientific inquiry has become conflated with prompt engineering. This is holding us back. 🧵and new paper with
@ari-holtzman.bsky.social.
A first small update: vllm had been blocking installation of the package on Mac. Now you can `pip install hypogenic` on Mac and generate hypotheses with APIs from your laptop.
6 months ago
We are making some exciting updates to hypogenic this summer:
github.com/ChicagoHAI/h...
and will post updates here.
GitHub - ChicagoHAI/hypothesis-generation: This is the official repository for HypoGeniC (Hypothesis Generation in Context) and HypoRefine, which are automated, data-driven tools that leverage large l...
https://github.com/ChicagoHAI/hypothesis-generation
6 months ago
reposted by
Chicago Human+AI Lab
6 months ago
When you walk into the ER, you could get a doc: 1. Fresh from a week of not working 2. Tired from working too many shifts
@oziadias.bsky.social
has been both and thinks that they're different! But can you tell from their notes? Yes we can! Paper
@natcomms.nature.com
www.nature.com/articles/s41...
@chachachen.bsky.social
@haokunliu.bsky.social
@divingwithorcas.bsky.social
present posters on human-AI decision making, hypothesis generation, interpretability and fairness at MMLS 2025!
6 months ago
reposted by
Chicago Human+AI Lab
7 months ago
This is too cute not to share!
reposted by
Chicago Human+AI Lab
Xiaoyan Bai
7 months ago
I am glad that you found our paper entertaining! This is a great point for my follow-up thread on the implications of concept incongruence. Our main goal is to raise awareness and provide clarity around concept incongruence.
reposted by
Chicago Human+AI Lab
Xiaoyan Bai
7 months ago
🚨 New paper alert 🚨 Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️ 1/n 🧵
reposted by
Chicago Human+AI Lab
Mingxuan (Aldous) Li
8 months ago
1/n 🚀🚀🚀 Thrilled to share our latest work🔥: HypoEval - Hypothesis-Guided Evaluation for Natural Language Generation! 🧠💬📊 There’s a lot of excitement around using LLMs for automated evaluation, but many methods fall short on alignment or explainability — let’s dive in! 🌊
reposted by
Chicago Human+AI Lab
Mourad Heddaya
8 months ago
🧑⚖️How well can LLMs summarize complex legal documents? And can we use LLMs to evaluate? Excited to be in Albuquerque presenting our paper this afternoon at @naaclmeeting 2025!
reposted by
Chicago Human+AI Lab
8 months ago
Although I cannot make
#NAACL2025
,
@chicagohai.bsky.social
will be there. Please say hi!
@chachachen.bsky.social
GPT ❌ x-rays (Friday 9-10:30)
@mheddaya.bsky.social
CaseSumm and LLM 🧑⚖️ (Thursday 2-3:30)
@haokunliu.bsky.social
@qiaoyu-rosa.bsky.social
hypothesis generation 🔬 (Saturday at 4pm)
reposted by
Chicago Human+AI Lab
Dang Nguyen
9 months ago
1/n You may know that large language models (LLMs) can be biased in their decision-making, but ever wondered how those biases are encoded internally and whether we can surgically remove them?
reposted by
Chicago Human+AI Lab
11 months ago
Spent a great day at Boulder meeting new students and old colleagues. I used to take this view every day. Here are the slides for my talk titled "Alignment Beyond Human Preferences: Use Human Goals to Guide AI towards Complementary AI":
chenhaot.com/talks/alignm...