Stanford Research Computing
@stanford-rc.bsky.social
Advancing science, one batch job at a time.
https://srcc.stanford.edu
#Stanford Research Computing is hiring a Principal Storage Architect to lead our HPC Data Platforms team. 100PB+ of research data, full-flash file systems, Lustre at scale. Come architect storage for Nobel-caliber research!
careersearch.stanford.edu/jobs/princip...
#HPC
#Lustre
#storage
#hiring
Principal Storage Architect & Team Lead (Research & HPC Data Platforms) in Business Affairs: University IT (UIT), Stanford, California, United States
We are seeking a world-class technical leader to oversee our primary research storage platforms. These services span the gamut from the 15PB...
https://careersearch.stanford.edu/jobs/principal-storage-architect-team-lead-research-hpc-data-platforms-30461
about 4 hours ago
We expanded Sherlock's scratch file system by 50%, adding 5 PB of flash storage and 3.2 Tb/s of InfiniBand NDR bandwidth, to give our users one of the largest all-flash Lustre file systems in academia:
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Scratch that: Sherlock now has 15 PB of flash - Sherlock changelog
We are excited to announce a major expansion of Sherlock's /scratch file system. We are adding 5 PB of full-flash storage, bringing the total capacity from 10 PB to 15 PB. This investment directly add...
https://news.sherlock.stanford.edu/publications/scratch-that-sherlock-now-has-15-pb-of-flash
26 days ago
Tired of watching your API credits disappear mid-session? New Sherlock docs cover running Zed + Ollama entirely on-cluster: full AI coding assistant, free GPU allocation, zero data leaving Stanford.
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Free and private AI-assisted coding on Sherlock - Sherlock changelog
We're excited to announce a brand-new section in the Sherlock documentation, covering how to bring AI coding tools into your HPC workflows, for free, and without sending your code and data any...
https://news.sherlock.stanford.edu/publications/free-and-private-ai-assisted-coding-on-sherlock
about 1 month ago
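For readers wondering what an on-cluster editor + Ollama setup roughly looks like, here is a minimal sketch. The partition name, model name, and compute-node placeholder are assumptions for illustration, not Sherlock's documented values; Ollama's default port 11434 and the `srun`/`ssh` flags are standard.

```shell
# 1. Get an interactive shell on a GPU node (partition name is an assumption)
srun --partition=gpu --gpus=1 --pty bash

# 2. On the compute node, start the Ollama server and fetch a coding model
#    (model name is illustrative; pick any model Ollama hosts)
ollama serve &
ollama pull qwen2.5-coder

# 3. From your workstation, forward Ollama's default port (11434) over SSH
#    so your editor talks to the cluster and no code leaves Stanford
ssh -L 11434:<compute-node>:11434 login.sherlock.stanford.edu
```

With the tunnel up, an editor configured to use a local Ollama endpoint (`http://localhost:11434`) transparently runs inference on the cluster's GPU.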
H200 GPUs are now available on Sherlock:
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Introducing SH4_G8TF64.1, now with 8x H200 GPUs - Sherlock changelog
We are excited to announce the immediate availability of a powerful new node configuration to accelerate your GPU workloads on Sherlock: SH4_G8TF64.1. Featuring 8x NVIDIA H200 Tensor Core GPUs, this n...
https://news.sherlock.stanford.edu/publications/introducing-sh4_g8tf64-1-now-with-8x-h200-gpus
8 months ago
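Requesting the new GPUs from Slurm might look like the sketch below. The partition name and the exact constraint token are assumptions (check the Sherlock documentation for the real feature names); the `--gpus`/`-C` flags themselves are standard Slurm.

```shell
# Batch job asking for one H200 (partition and constraint names are assumptions)
sbatch --partition=gpu --gpus=1 --constraint="GPU_SKU:H200_SXM5" job.sh

# Or interactively, to verify what you got:
srun -p gpu -G 1 -C "GPU_SKU:H200_SXM5" --pty nvidia-smi
```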
We're back to job #1 again!
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Back to job #1, thrice - Sherlock changelog
Not once, not twice, but three times! For the third time in Sherlock’s history, the Slurm job ID counter was reset over the weekend, and went from job #67,043,327 all the way back to job #1! JobIDRaw...
https://news.sherlock.stanford.edu/publications/back-to-job-1-thrice
10 months ago
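If your bookkeeping cares about a counter reset, Slurm's accounting keeps both identifiers: `JobID` (the user-visible ID, which restarts) and `JobIDRaw` (the underlying ID). A quick way to see them side by side with the standard `sacct` tool (the date is illustrative):

```shell
# -X: one row per allocation; -S: accounting window start date
sacct -X --format=JobID,JobIDRaw,JobName,State -S 2025-01-01
```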
Introducing a new service partition on Sherlock
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Introducing a new service partition on Sherlock - Sherlock changelog
We’re very pleased to introduce a new service partition on Sherlock, specially designed to run non-computational management and administrative tasks. Jobs like data transfer tasks, backups, CI/CD pi...
https://news.sherlock.stanford.edu/publications/introducing-a-new-service-partition-on-sherlock
11 months ago
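A typical use of such a partition is a long-running, low-CPU housekeeping job. The sketch below is illustrative only: the partition name, time limit, and paths are assumptions, not values from the announcement.

```shell
#!/bin/bash
# Long-running data-transfer job on the service partition
#SBATCH --partition=serv        # partition name is an assumption
#SBATCH --time=2-00:00:00       # service jobs can run longer than compute jobs
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

# Archive results off scratch (paths are illustrative)
rsync -a /scratch/users/$USER/results/ /oak/stanford/groups/mylab/archive/
```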
An update about our plans to retire Sherlock 2.0
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
An update about our plans to retire Sherlock 2.0 - Sherlock changelog
We wanted to share an important update about the future of some of Sherlock’s oldest compute nodes, in light of some of the more recent and worsening political and economic conditions. As many of you...
https://news.sherlock.stanford.edu/publications/an-update-about-our-plans-to-retire-sherlock-2-0
12 months ago
reposted by
Stanford Research Computing
Stéphane Thiell
about 1 year ago
@stanford-rc.bsky.social
was proud to host the Lustre User Group 2025 organized with OpenSFS! Thanks to everyone who participated and our sponsors! Slides are already available at
srcc.stanford.edu/lug2025/agenda
🤘Lustre!
#HPC
#AI
reposted by
Stanford Research Computing
Stéphane Thiell
about 1 year ago
Join us for the Lustre User Group 2025 hosted by
@stanford-rc.bsky.social
in collaboration with OpenSFS. Check out the exciting agenda! 👉
srcc.stanford.edu/lug2025/agenda
LUG 2025 Agenda
https://srcc.stanford.edu/lug2025/agenda
reposted by
Stanford Research Computing
Stéphane Thiell
about 1 year ago
Getting things ready for next week's Lustre User Group 2025 at Stanford University!
Doubling the FLOPs, another milestone for Sherlock's performance
news.sherlock.stanford.edu/publications...
#Sherlock
#HPC
#Stanford
Doubling the FLOPs, another milestone for Sherlock's performance - Sherlock changelog
We’re proud to announce that Sherlock has reached another significant performance milestone. Building on past successes, Sherlock continues to evolve and expand, integrating new technologies and enhan...
https://news.sherlock.stanford.edu/publications/doubling-the-flops-another-milestone-for-sherlocks-performance
about 1 year ago
reposted by
Stanford Research Computing
Stéphane Thiell
about 1 year ago
ClusterShell 1.9.3 is now available in EPEL and Debian. Not using clustershell groups on your
#HPC
cluster yet?! Check out the new bash completion feature! Demo recorded on Sherlock at
@stanford-rc.bsky.social
with ~1,900 compute nodes and many group sources!
asciinema.org/a/699526
clustershell bash completion (v1.9.3)
This short recording demonstrates the bash completion feature available in ClusterShell 1.9.3, showcasing its benefits when using the clush and cluset command-line tools.
https://asciinema.org/a/699526
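For anyone who hasn't tried ClusterShell's tools, here is a short taste of `cluset` and `clush` (both are real ClusterShell commands; the node names are illustrative):

```shell
# cluset folds and expands node sets
cluset --fold node1 node2 node3 node4   # -> node[1-4]
cluset --expand node[1-3]               # -> node1 node2 node3

# clush runs a command on many nodes; -b gathers identical outputs together
clush -w node[1-4] -b uname -r
```

With the 1.9.3 bash completion, node and group names tab-complete directly in `clush` and `cluset`, which is where the demo above really shines.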
reposted by
Stanford Research Computing
Stéphane Thiell
about 1 year ago
We started it!
blocksandfiles.com/2025/01/28/s...
Check out my LAD'24 presentation:
www.eofs.eu/wp-content/u...
reposted by
Stanford Research Computing
Stéphane Thiell
over 1 year ago
Newly announced at the
#SC24
Lustre BoF! Lustre User Group 2025, organized by OpenSFS, will be hosted at Stanford University on April 1-2, 2025. Save the date!
reposted by
Stanford Research Computing
Stéphane Thiell
over 1 year ago
Just another day for Sherlock's home-built scratch Lustre filesystem at Stanford: Crushing it with 136+GB/s aggregate read on real research workload! 🚀
#Lustre
#HPC
#Stanford
Hello BlueSky!
about 1 year ago