Awa Dieng
@adoubleva.bsky.social
📤 1671
📥 167
📝 68
Excited for the next edition of the Algorithmic Fairness workshop, this time at
#ICLR2026
! Consider submitting your work (full or short papers) by Jan 31. See the full call for papers below 👇🏾
14 days ago
0
1
0
Carol’s audacity to say “we save the world”. You mean “we save me” 🙄
#pluribus
27 days ago
1
0
0
reposted by
Awa Dieng
Stephen Pfohl
3 months ago
Excited to share that our paper, “Understanding challenges to the interpretation of evaluations of algorithmic fairness,” has been accepted to NeurIPS 2025! You can read the paper now on arXiv:
arxiv.org/abs/2506.04193
Understanding challenges to the interpretation of disaggregated evaluations of algorithmic fairness
Disaggregated evaluation across subgroups is critical for assessing the fairness of machine learning models, but its uncritical use can mislead practitioners. We show that equal performance across sub...
https://arxiv.org/abs/2506.04193
1
5
2
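The core move the abstract above describes, disaggregated evaluation, is straightforward to sketch: compute the same metric separately for each subgroup and report it with uncertainty, rather than as a single aggregate number. The Python snippet below is a minimal illustration of that idea, not the paper's method; the column names (y_true, y_pred, group), the choice of accuracy as the metric, and the bootstrap settings are all assumptions made for the example.

```python
# Minimal sketch of a disaggregated evaluation: the same metric computed per
# subgroup with bootstrap intervals. Column names, the accuracy metric, and the
# bootstrap settings are illustrative assumptions, not taken from the paper.
import numpy as np
import pandas as pd

def disaggregated_accuracy(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    rows = []
    for name, g in df.groupby("group"):
        correct = (g["y_true"].to_numpy() == g["y_pred"].to_numpy()).astype(float)
        point = correct.mean()
        # Resample individuals within the subgroup to reflect sampling noise,
        # which is often large for small subgroups.
        boots = [rng.choice(correct, size=len(correct), replace=True).mean()
                 for _ in range(n_boot)]
        lo, hi = np.percentile(boots, [2.5, 97.5])
        rows.append({"group": name, "n": len(g), "accuracy": point,
                     "ci_low": lo, "ci_high": hi})
    return pd.DataFrame(rows)

# Toy usage with made-up data: one large and one small subgroup.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["a"] * 200 + ["b"] * 40,
    "y_true": rng.integers(0, 2, 240),
    "y_pred": rng.integers(0, 2, 240),
})
print(disaggregated_accuracy(df))
```

The interval widths make the abstract's warning concrete: a small subgroup can appear to have "equal performance" simply because its estimate is too noisy to distinguish from the others.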
If Weapons gets noms/awards over Sinners, I will be very mad… they are not even in the same league
5 months ago
0
0
0
reposted by
Awa Dieng
Vertaix (AI&Science Lab at Princeton University)
6 months ago
#NewPaper
A major development in the design and discovery of metal-organic frameworks (MOFs)! Thread below 👇👇👇 1/4
1
3
2
While I am sad to not be at
#FAccT2025
in person due to last-minute visa delays, I am happy to follow along virtually. Here are the slides for this morning’s presentation
tinyurl.com/nteasee-facc...
Feel free to reach out with any questions. I am happy to chat about this work!
7 months ago
0
4
1
reposted by
Awa Dieng
Siobhan Mackenzie Hall
7 months ago
We are so proud of our ✨Honourable Mention✨ for the
#FAccT2025
Best Paper Awards! This work was borne out of (com)passion, determination & a commitment to challenging norms in dataset collection & evaluation. Read more ➡️
arxiv.org/abs/2406.09496
Catch the presentation on Thurs, 11:45 local time! 🇬🇷
1
7
1
reposted by
Awa Dieng
ACM FAccT
7 months ago
Looking for posts about
#FAccT2025
? Check out our 🦋 custom feed 🦋 which is already lively and full of papers, events, and attendees for this year's conference in Athens! Click the pin 📌 in the upper right hand corner to keep this feed quickly accessible.
bsky.app/profile/mari...
0
28
12
reposted by
Awa Dieng
Jessica Schrouff
8 months ago
Exciting update: there are multiple positions open on the Responsible AI team at GSK London UK, with a mix of applied research, foundational research and engineering. Apply here:
jobs.gsk.com/en-gb/jobs/4...
AI/ML Engineer, Responsible AI in London, United Kingdom | GSK Careers
GSK Careers is hiring an AI/ML Engineer, Responsible AI in London, United Kingdom. Review all of the job details and apply today!
https://jobs.gsk.com/en-gb/jobs/421785?lang=en-us&previousLocale=en-GB
0
6
3
reposted by
Awa Dieng
Siobhan Mackenzie Hall
9 months ago
Today I would like to share a journey that started 18 months ago, and while it is still ongoing, I'd like to celebrate the team's achievement of successfully having our work accepted to
#FAccT2025
! 🎉 🔗
arxiv.org/pdf/2406.09496
1
8
3
Sinners was perfect omg!! Thank you Ryan Coogler, thank you 🙏🏾 and Ludwig Göransson: you will always be goated in this house
9 months ago
1
1
0
Congrats to my co-leads
@dr-nyamewaa.bsky.social
, Iskandar Haykel and amazing co-authors👇🏾
9 months ago
0
0
0
Excited to share that this work will be presented at
@facct.bsky.social
#facct2025
🎉! Looking forward to attending my first FAccT conference and meeting the community!
9 months ago
0
8
1
reposted by
Awa Dieng
Stanford HAI
10 months ago
AI systems present an opportunity to address society's biases rather than merely reflect them. “However, realizing this potential requires careful attention to both technical and social considerations,” says HAI Faculty Affiliate
@sanmikoyejo.bsky.social
in his latest op-ed via
@theguardian.com
:
www.theguardian.com/commentisfre...
Could AI help us build a more racially just society? | Sanmi Koyejo
We have an opportunity to build systems that don’t just replicate our current inequities. Will we take them?
https://www.theguardian.com/commentisfree/ng-interactive/2025/mar/21/ai-racial-justice-society-technology
0
9
5
tfw you realize how much you’ve grown when something happens and new you handles it way better than old you would have
10 months ago
1
0
0
reposted by
Awa Dieng
Angelina Wang
10 months ago
I've recently put together a "Fairness FAQ":
tinyurl.com/fairness-faq
. If you work in non-fairness ML and you've heard about fairness, perhaps you've wondered things like what the best definitions of fairness are, and whether we can train algorithms that optimize for it.
3
44
20
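For readers coming from non-fairness ML, most of the group-fairness definitions the FAQ above walks through amount to comparing some statistic of a model's predictions across groups. The snippet below is a hedged sketch of two common gaps, the demographic parity difference and a true-positive-rate gap (one component of equalized odds); the function names and data layout are illustrative assumptions, not taken from the FAQ.

```python
# Illustrative computation of two group-fairness gaps. Function names and the
# array layout are assumptions for this sketch, not the FAQ's notation.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def tpr_gap(y_true, y_pred, group):
    """Largest difference in true-positive rate between groups
    (one of the two conditions in equalized odds)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy usage:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group), tpr_gap(y_true, y_pred, group))
```

Computing these gaps is the easy part; the FAQ's point is that deciding which of them, if any, should be driven to zero depends heavily on context.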
reposted by
Awa Dieng
Vertaix (AI&Science Lab at Princeton University)
11 months ago
#NewPaper
We designed an "algorithmic microscope" & called it the Vendiscope. It is a new companion in the discovery process for scientists across fields and a powerful tool for diagnosing datasets and models for AI researchers.
#AlgorithmicMicroscopy
Link to paper:
arxiv.org/abs/2502.10828
0
4
4
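For context on the diversity measures behind Vertaix's tooling (the #VendiScoring tag appears elsewhere in this feed), the Vendi Score is the effective number of distinct items implied by a similarity matrix. The snippet below is a minimal, hedged sketch of that score under a cosine-similarity kernel; it is not the Vendiscope itself, whose sample-level diagnostics are described in the linked paper, and the kernel choice is an assumption for the example.

```python
# Minimal sketch of the Vendi Score: exp(Shannon entropy of the eigenvalues of
# K/n), where K is a similarity matrix with unit diagonal. The cosine kernel
# below is an illustrative choice, not necessarily what the Vendiscope uses.
import numpy as np

def vendi_score(X: np.ndarray) -> float:
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    K = X @ X.T                                       # cosine similarity, K_ii = 1
    lam = np.linalg.eigvalsh(K / len(X))              # eigenvalues sum to 1
    lam = lam[lam > 1e-12]                            # drop numerical zeros
    return float(np.exp(-np.sum(lam * np.log(lam))))  # exponentiated entropy

# Toy usage: near-duplicate rows score close to 1, diverse rows closer to n.
rng = np.random.default_rng(0)
duplicates = np.tile(rng.normal(size=(1, 8)), (5, 1)) + 1e-3 * rng.normal(size=(5, 8))
diverse = rng.normal(size=(5, 8))
print(vendi_score(duplicates), vendi_score(diverse))
```

One way to read the "algorithmic microscope" framing is to ask how much each sample contributes to a collection-level score like this, though the linked paper is the authority on what the Vendiscope actually computes.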
reposted by
Awa Dieng
Vertaix (AI&Science Lab at Princeton University)
12 months ago
Congrats to
@adjiboussodieng.bsky.social
on being named an Early-Career Distinguished Presenter at the MRS Spring Meeting 2025! This honor is meant to highlight exciting new research directions in materials science. 🐯
#ChemScky
#Princeton
#VendiScoring
#MatSci
www.cs.princeton.edu/news/adji-bo...
Adji Bousso Dieng honored by the Materials Research Society | CS
https://www.cs.princeton.edu/news/adji-bousso-dieng-honored-materials-research-society
0
15
5
reposted by
Awa Dieng
Thomas Steinke
about 1 year ago
I'm going to slowly repost my math notes from the other site🐦 here🦋; it's the only thing I posted over there that I think may have some long-term value & worth not deleting. These started out as notes for myself, but people seem to appreciate them. 😅 I'll keep track of all of them in this thread.
6
202
32
reposted by
Awa Dieng
Vertaix (AI&Science Lab at Princeton University)
about 1 year ago
Our paper probing out-of-distribution generalization in machine learning for materials discovery is now published at Communications Materials (@NaturePortfolio)
#MatSci
#AI4Science
#Chemsky
#Vertaix
📓Link:
nature.com/articles/s43...
Probing out-of-distribution generalization in machine learning for materials - Communications Materials
State-of-the-art machine learning models are often tested on their ability to generalize materials deemed ’dissimilar’ to training data, but such definitions frequently rely on heuristics. Here, an an...
https://nature.com/articles/s43246-024-00731-w
0
12
4
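The abstract above notes that "dissimilar" test materials are often defined heuristically. One common heuristic for constructing such out-of-distribution splits is leave-one-cluster-out: cluster the materials in a feature space and hold out each cluster in turn. The snippet below sketches that generic construction using assumed k-means clustering on toy descriptors; it is offered as an illustration of the idea, not the protocol used in the paper.

```python
# Illustrative leave-one-cluster-out split, one common heuristic for building
# "out-of-distribution" test sets. The k-means clustering and random descriptors
# are assumptions for this sketch, not the paper's protocol.
import numpy as np
from sklearn.cluster import KMeans

def leave_one_cluster_out(features: np.ndarray, n_clusters: int = 5, seed: int = 0):
    """Yield (train_idx, test_idx) pairs, holding out one cluster at a time."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(features)
    for held_out in range(n_clusters):
        yield np.where(labels != held_out)[0], np.where(labels == held_out)[0]

# Toy usage with random "material descriptors": each held-out cluster plays the
# role of a group of materials deemed dissimilar to the training data.
X = np.random.default_rng(0).normal(size=(200, 16))
for train_idx, test_idx in leave_one_cluster_out(X):
    print(len(train_idx), len(test_idx))
```

Whether a split like this actually captures chemically meaningful dissimilarity is exactly the kind of question the paper probes.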
reposted by
Awa Dieng
about 1 year ago
📚 Incredible student projects from the 2024 Fall quarter's Machine Learning from Human Preferences course
web.stanford.edu/class/cs329h/
Our students tackled some fascinating challenges at the intersection of AI alignment and human values. Selected project details follow... 1/n
CS329H: Machine Learning from Human Preferences
Machine Learning from Human Preferences
https://web.stanford.edu/class/cs329h/
1
4
1
Elphaba: but I swear someday there will be a celebration throughout Oz that’s all to do with me
Me: 😬 I mean, there will be a celebration alright, but probably not the one you are imagining 👀
about 1 year ago
1
1
0
reposted by
Awa Dieng
Siobhan Mackenzie Hall
about 1 year ago
Ahead of the New Year, I wanted to take a moment to wish for everyone, in 2025, to have space to be themselves, time for family, friends and creativity, motivation to pursue their goals, and courage to stand out and speak up! Not to mention I hope for many laughs, and endless joy for all of us.🎉
0
4
2
Retweeting for myself because why do I set meetings at 9am. Why??
about 1 year ago
0
4
0
reposted by
Awa Dieng
Eugene Vinitsky 🍒
about 1 year ago
Okay, new years resolution. No meetings before 12. Posting this so people can yell at me when I try to schedule a meeting before then
3
23
2
reposted by
Awa Dieng
Hellina Hailu Nigatu
about 1 year ago
Tons of work to do in creating vocabulary to discuss these issues so our ethics work is not just theory but can translate to practice...
1
1
1
reposted by
Awa Dieng
Hellina Hailu Nigatu
about 1 year ago
Finally got around to reading this awesome paper from
@adoubleva.bsky.social
and this finding is really making me think...I see it in my own circles of friends/fam having an overall +ve attitude towards AI and I usually find it hard to communicate about harms/AI in my native language...
1
3
1
Me after NeurIPS 😂
about 1 year ago
2
29
1
That’s a wrap for
#AFME2024
!!! 🎉 Thank you to all the authors, attendees, roundtable leads and speakers for the great presentations and insightful discussions!
about 1 year ago
0
5
0
Our amazing panellists discussed how to define fairness, the challenges of evaluation, ethical considerations, and interdisciplinary collaboration for addressing different dimensions of fairness!
@jessicaschrouff.bsky.social
@sanmikoyejo.bsky.social
@sethlazar.org
Hoda Heidari
about 1 year ago
1
8
1
For the final contributed talk, Ben Laufer discussed the fundamental limits in the search for less discriminatory algorithms!!
about 1 year ago
0
4
0
Another great contributed talk!! Natalie Mackraz discussed her work on evaluating gender bias transfer between pre-trained and prompt-adapted LLMs
about 1 year ago
0
2
0
In the third contributed talk, Prakhar Ganesh presented his paper comparing bias mitigation algorithms in ML!
about 1 year ago
0
2
0
To kick off the afternoon session, we have a great talk from
@angelinawang.bsky.social
on the need for group difference awareness and a new suite of benchmarks for assessing this in LLMs!!
about 1 year ago
0
11
1
Ending the morning session with great discussions at the roundtables 🎊 See you after lunch
about 1 year ago
0
0
0
In the last invited talk of the morning, we have
@sethlazar.org
giving an insightful presentation on evaluating the ethical competence of LLMs!!
about 1 year ago
0
2
0
The second contributed talk at
#AFME2024
. To Eun Kim discussed his paper “Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation”
about 1 year ago
1
3
0
Our first contributed talk is by Alex Tamkin, who presented his work on Evaluating and Mitigating Discrimination in Language Model Decisions!
about 1 year ago
0
0
0
Another great talk of the day by
@krvarshney.bsky.social
discussing his work on building harm detectors and guardian models for large generative models! He also addressed the need to broaden the dimensions of harms in different use cases
about 1 year ago
0
8
2
Great first talk by Hoda, giving an overview of fairness metrics in traditional fairness and generative models. She discussed the desiderata for a Good Measurement and steps towards building contextually aware fairness metrics in LLMs!
about 1 year ago
0
2
0
Off to a great start with the opening remarks by
@nandofioretto.bsky.social
on the motivation for this year’s topic.
about 1 year ago
0
3
0
It's time for our
#NeurIPS2024
Algorithmic Fairness Workshop
#AFME2024
🥳! Join us TODAY for a full day of discussions on fairness metrics & evaluation. Schedule:
afciworkshop.org/schedule
⏰ We start at 9 am with the opening remarks, followed by a keynote on Fairness Measurement by Hoda Heidari
about 1 year ago
0
12
2
Heads up: Will be live tweeting the workshop
about 1 year ago
0
3
0
reposted by
Awa Dieng
Mercy Nyamewaa Asiedu
about 1 year ago
Check out our latest work on Contextual Evaluation of Large Language Models for Tropical and Infectious Diseases (
openreview.net/forum?id=yXe...
), accepted at two NeurIPS workshops: GenAI4health (
genai4health.github.io
) and AIM-FM (aim-fm-24.github.io/NeurIPS/).
#llms
#genai4health
#globalhealth
4
12
4
Not to offend anyone but risotto is really the poor man’s mbakhal. What is this 😭 If you like risotto, please go to a Senegalese restaurant and order mbakhal. It will change your life
about 1 year ago
2
10
0
📆 AFME workshop: Sat, Dec 14 in room 111-112
My favorite part of the workshop 🥳 💬 Join our amazing leads* at the roundtables for insightful discussions on Fairness/Bias Metrics and Evaluation. *
@angelinawang.bsky.social
, Candace Ross (FAIR), Tom Hartvigsen (UofVirginia)
about 1 year ago
0
6
0
Looking forward to attending
#Neurips2024
this week! Catch me at the Black in AI workshop tomorrow! I will be at the Research mentorship session! Come and chat (Why is there no neurips mug this year? 😭)
about 1 year ago
0
6
0