Stéphane Thiell
@sthiell.bsky.social
📤 77
📥 40
📝 19
I do HPC storage at Stanford and always monitor channel 16 ⛵
Thrilled to host Lustre Developer Day at
@stanford-rc.bsky.social
post-LUG 2025! 🌟 With 14+ top organizations represented, including DDN, LANL, LLNL, HPE, CEA, AMD, ORNL, AWS, Google, NVIDIA, Sandia, and Jefferson Lab, we discussed HSM, Trash Can, and upstreaming Lustre into the Linux kernel.
7 months ago
1
5
0
@stanford-rc.bsky.social
was proud to host the Lustre User Group 2025, organized with OpenSFS! Thanks to everyone who participated, and to our sponsors! Slides are already available at
srcc.stanford.edu/lug2025/agenda
🤘Lustre!
#HPC
#AI
8 months ago
0
10
3
Getting things ready for next week's Lustre User Group 2025 at Stanford University!
8 months ago
0
6
1
Join us for the Lustre User Group 2025 hosted by
@stanford-rc.bsky.social
in collaboration with OpenSFS. Check out the exciting agenda! 👉
srcc.stanford.edu/lug2025/agenda
LUG 2025 Agenda
https://srcc.stanford.edu/lug2025/agenda
9 months ago
0
7
5
ClusterShell 1.9.3 is now available in EPEL and Debian. Not using ClusterShell node groups on your
#HPC
cluster yet?! Check out the new bash completion feature! Demo recorded on Sherlock at
@stanford-rc.bsky.social
with ~1,900 compute nodes and many group sources!
asciinema.org/a/699526
clustershell bash completion (v1.9.3)
This short recording demonstrates the bash completion feature available in ClusterShell 1.9.3, showcasing its benefits when using the clush and cluset command-line tools.
https://asciinema.org/a/699526
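(For the curious, a minimal sketch of ClusterShell's Python API, the same library behind the clush and cluset tools shown in the demo. The node names and command below are hypothetical examples, not Sherlock's actual group sources.)

```python
# Minimal sketch using ClusterShell's Python API (the library behind
# clush and cluset). Node names here are hypothetical examples.
from ClusterShell.NodeSet import NodeSet
from ClusterShell.Task import task_self

# Fold an explicit host list into the compact bracketed form cluset prints.
nodes = NodeSet.fromlist(["node1", "node2", "node3"])
print(nodes)        # -> node[1-3]
print(len(nodes))   # -> 3

# Expand a folded set back into individual hostnames.
for node in NodeSet("node[1-3]"):
    print(node)

# Run a command in parallel on the set (what clush does under the hood),
# then print gathered output grouped by identical result.
task = task_self()
task.run("uname -r", nodes=str(nodes))
for output, node_list in task.iter_buffers():
    print(NodeSet.fromlist(node_list), output.decode())
```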
10 months ago
0
9
6
We started it!
blocksandfiles.com/2025/01/28/s...
Check out my LAD'24 presentation:
www.eofs.eu/wp-content/u...
10 months ago
1
7
6
Just another day for Sherlock's home-built scratch Lustre filesystem at Stanford: crushing it with 136+ GB/s of aggregate read bandwidth on a real research workload! 🚀
#Lustre
#HPC
#Stanford
11 months ago
0
24
3
A great show of friendly open source competition and collaboration: the lead developers of Environment Modules and Lmod (Xavier of CEA and Robert of
@taccutexas.bsky.social
) at
#SC24
. They often exchange ideas and push each other to improve their tools!
about 1 year ago
0
5
2
Newly announced at the
#SC24
Lustre BoF! Lustre User Group 2025, organized by OpenSFS, will be hosted at Stanford University on April 1-2, 2025. Save the date!
about 1 year ago
0
10
9
Fun fact: the Georgia Aquarium (a nonprofit), next to the Congress Center, is the largest aquarium in the U.S. and the only one that houses whale sharks. I went on Sunday and it was worth it. Just in case you need a break from SC24 this week… 🦈
about 1 year ago
0
1
0
I always enjoy an update from JD Maloney (NCSA), but even more so when it is about using S3 for archival storage, something we are deploying at scale at Stanford this year (using MinIO server and Lustre/HSM!)
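(To make the idea concrete, here is a minimal, hypothetical sketch of the S3 side using boto3 against an S3-compatible endpoint such as a MinIO server. The endpoint, bucket, credentials, and paths are placeholders, not the actual Stanford deployment.)

```python
# Illustrative only: archive a file to an S3-compatible endpoint (e.g. a
# MinIO server). Endpoint, bucket, credentials and paths are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.edu:9000",  # placeholder MinIO endpoint
    aws_access_key_id="ARCHIVE_KEY",                # placeholder credentials
    aws_secret_access_key="ARCHIVE_SECRET",
)

# Upload (archive) a file, then verify the object landed.
s3.upload_file("/scratch/project/dataset.tar", "archive-bucket", "project/dataset.tar")
head = s3.head_object(Bucket="archive-bucket", Key="project/dataset.tar")
print(head["ContentLength"], "bytes archived")
```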
about 1 year ago
0
6
1
Our branch of robinhood-lustre-3.1.7 on Rocky Linux 9.3, paired with our own Lustre 2.15.4 branch and MariaDB 10.11, can ingest more than 35K Lustre changelogs/sec. Those gauges seem appropriate for Pi Day, no?
github.com/stanford-rc/...
github.com/stanford-rc/...
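(Rough sketch of what a changelog consumer like Robinhood does conceptually: read records from an MDT changelog and tally them. Robinhood itself uses the llapi C interface; this sketch simply shells out to lfs changelog, and the MDT name is a placeholder.)

```python
# Conceptual sketch only: read Lustre changelog records from an MDT and
# count them by type. Robinhood uses the llapi C interface directly; this
# simply shells out to `lfs changelog`. The MDT name is a placeholder.
import subprocess
from collections import Counter

MDT = "fir-MDT0000"  # placeholder MDT device name

out = subprocess.run(
    ["lfs", "changelog", MDT],
    capture_output=True, text=True, check=True,
).stdout

counts = Counter()
last_rec = 0
for line in out.splitlines():
    fields = line.split()
    if len(fields) < 2:
        continue
    last_rec = int(fields[0])   # record index
    counts[fields[1]] += 1      # record type, e.g. 01CREAT, 06UNLNK

print(f"read up to record {last_rec}: {dict(counts)}")
# A registered reader would then acknowledge consumed records with
# `lfs changelog_clear <mdt> <reader-id> <last_rec>`.
```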
over 1 year ago
0
4
1
reposted by
Stéphane Thiell
kilian
almost 2 years ago
Sherlock goes full flash! The scratch file system of Sherlock, Stanford's HPC cluster, has been revamped to provide 10 PB of fast flash storage on Lustre
news.sherlock.stanford.edu/publications...
Sherlock goes full flash - Sherlock
What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock’s scratch file system, has just undergone a major
https://news.sherlock.stanford.edu/publications/sherlock-goes-full-flash
0
3
2
Filesystem upgrade complete! Stanford cares about HPC I/O! The Sherlock cluster now has ~10 PB of full-flash Lustre scratch storage at 400 GB/s to support a wide range of research jobs on large datasets! Fully built in-house!
almost 2 years ago
1
6
1
My header image is a crop from this photo taken at The Last Bookstore in Los Angeles, a really cool place.
about 2 years ago
0
1
0
👋
about 2 years ago
0
4
0