Stanford CIS
@stanfordcis.bsky.social
📤 93
📥 45
📝 16
Stanford Center for Internet & Society. See also
@vanschewick.bsky.social
“Generative AI models offered by major AI companies are used by tens of millions of people every day, and we should encourage them to make their models as safe as they possibly can,” says
@stanfordhai.bsky.social
Tech Policy Fellow
@riana.bsky.social
via
@techpolicypress.bsky.social
bit.ly/46E9oRn
How Congress Could Stifle The Onslaught of AI-Generated Child Sexual Abuse Material | TechPolicy.Press
Cleaning training data might not be enough to hinder a model from creating CSAM, writes Jasmine Mithani.
https://www.techpolicy.press/how-congress-could-stifle-the-onslaught-of-aigenerated-child-sexual-abuse-material/
about 23 hours ago
0
1
2
@hartzog.bsky.social
and
@markpmckenna.bsky.social
argue for more precision in references to “scale” in technology law. Differentiating between “scale as more” and “scale as different,” Hartzog and McKenna suggest the two have different implications for regulation
papers.ssrn.com/sol3/papers....
Taking Scale Seriously in Technology Law
Issues of scale—the relationship between the amount of an activity and its associated costs and benefits—permeate discussions around law and technologies. In
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5530398
1 day ago
0
2
2
FTC v. Amazon goes to trial this week over dark patterns - design tricks that make you accidentally subscribe and then struggle to cancel. "The question is when design crosses the line where a reasonable consumer doesn't have a fair shot of understanding what's going on," says
@andreamm.bsky.social
bit.ly/4mvJV2f
The 'dark patterns' at the center of FTC's lawsuit against Amazon
This week, the trial starts in a consequential FTC lawsuit against Amazon. The suit alleges that Amazon for years "tricked" people into buying Prime memberships that were purposefully hard to cancel.
https://www.npr.org/transcripts/nx-s1-5543497
2 days ago
0
0
1
CIS Affiliate Alex Feerst argues that learning - by humans or AI - isn't a copyright-relevant act in his latest Foundation for American Innovation paper. "Regulate outputs, not inputs; legalize learning."
www.thefai.org/posts/promot...
Promote the Progress, Legalize Learning | The Foundation for American Innovation
The Foundation for American Innovation.
https://www.thefai.org/posts/promote-the-progress-legalize-learning
2 days ago
0
0
0
@stanfordhai.bsky.social
fellow
@riana.bsky.social
joins the
@techdirt.com
podcast to go even deeper into the legal weeds and explain how the recent FTC settlement with Aylo, the parent company of multiple adult websites, could doom criminal CSAM cases
www.techdirt.com/2025/09/23/t...
Techdirt Podcast Episode 431: The Many Problems With The FTC’s Pornhub Settlement
Last week, we published three separate posts that looked at the FTC’s recent settlement with Aylo, the parent company of multiple adult websites including, …
https://www.techdirt.com/2025/09/23/techdirt-podcast-episode-431-the-many-problems-with-the-ftcs-pornhub-settlement/
3 days ago
0
0
0
xAI workers training Grok report encountering CSAM requests due to the company's approach allowing explicit content, unlike other AI companies that block such requests. "If you don't draw a hard line at anything unpleasant, you have a more complex problem" says
@riana.bsky.social
bit.ly/4gyVbcK
Behind Grok's 'sexy' settings, workers review explicit and disturbing content
Workers say they've faced sexually explicit content while xAI has marketed Grok to be deliberately provocative. Experts say the company should be cautious.
http://bit.ly/4gyVbcK
4 days ago
0
0
0
reposted by
Stanford CIS
Daphne Keller
5 days ago
The title of this post from
@riana.bsky.social
says it all. This is a blazing big deal for the 4th Amendment. Then again, it's a blazing big deal for like ten other reasons. Mostly because it lets the FTC decide whether platforms enforced their speech rules correctly.
www.techdirt.com/2025/09/18/t...
The World’s Most Popular Porn Site Is a Government Agent Now. Does It Matter?
On Monday, I published a two-part blog post about the Federal Trade Commission (FTC) settlement with Aylo, parent company of Pornhub. The FTC’s complaint alleged that Aylo violated federal consumer…
https://www.techdirt.com/2025/09/18/the-worlds-most-popular-porn-site-is-a-government-agent-now-does-it-matter/
3
32
21
A recent article from CIS Affiliate Omer Tene highlights increasing privacy enforcement by the California Privacy Protection Agency (CPPA) and state attorneys general (AGs), and the requirement for businesses to honor consumer opt-out requests such as the Global Privacy Control signal (a rough sketch of that signal follows this post)
www.goodwinlaw.com/en/insights/...
Multistate Privacy Enforcement Sweep Puts Global Privacy Control in the Spotlight | Insights & Resources | Goodwin
State AGs and CPPA crack down on weak opt-out tools and push for stricter data risk assessments in online advertising. Read more in Goodwin's alert.
https://www.goodwinlaw.com/en/insights/publications/2025/09/alerts-technology-dpc-multistate-privacy-enforcement-sweep
5 days ago
0
0
0
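The sweep above centers on the Global Privacy Control (GPC), an opt-out preference signal that browsers and extensions transmit as the "Sec-GPC: 1" request header. As a rough illustration of what detecting that signal looks like on the receiving end, here is a minimal Python sketch; the handler and helper names are hypothetical, not drawn from the Goodwin alert or the regulators' guidance.

```python
# Hypothetical sketch: detecting the Global Privacy Control (GPC) opt-out
# signal, which user agents send as the "Sec-GPC: 1" request header.
# Handler and helper names are illustrative, not from the alert discussed above.
from http.server import BaseHTTPRequestHandler, HTTPServer


def gpc_opt_out(headers) -> bool:
    """Return True if the request carries the GPC opt-out signal."""
    return headers.get("Sec-GPC", "").strip() == "1"


class ExampleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if gpc_opt_out(self.headers):
            # A compliant site would treat this as a request to opt out of the
            # sale/sharing of personal data, e.g. by suppressing third-party
            # ad and tracking integrations for this visitor.
            body = b"GPC signal received: treating visitor as opted out.\n"
        else:
            body = b"No GPC signal on this request.\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ExampleHandler).serve_forever()
```

The enforcement question in the sweep is what happens after a check like this: whether the opt-out is actually propagated through a business's advertising and data-sharing stack, which this sketch does not attempt to show.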
reposted by
Stanford CIS
Daphne Keller
9 days ago
When people first hear about "jawboning" -- meaning government pressure to suppress speech through threats and other extra-legal measures, like what the FCC is doing now -- they always want to talk about "coercion" by the govt. I've always thought this is a red herring. Current events show why. 1/
2
69
32
In her latest
@techdirt.com
post
@stanfordhai.bsky.social
Policy Fellow
@riana.bsky.social
discusses the recent FTC settlement and how it does "little to block" child sex abuse material (CSAM)
www.techdirt.com/2025/09/15/t...
The Trump FTC’s War On Porn Just Ensured That Accused CSAM Offenders Will Walk Free
Well, they finally did it. A federal agency finally shattered the precarious base that upholds the edifice of prosecutions for child sex abuse material (CSAM) in America. That agency is the Federal…
https://www.techdirt.com/2025/09/15/the-trump-ftcs-war-on-porn-just-ensured-that-accused-csam-offenders-will-walk-free/
11 days ago
0
1
0
CIS Affiliate
@rcalo.bsky.social
's new book Law and Technology: A Methodical Approach explores why law finds technology so difficult to regulate, and what to do about it. The book is now available for pre-order here
www.amazon.com/Law-Technolo...
and here
www.riversendbookstore.com/book/9780197...
Law and Technology: A Methodical Approach
Law and Technology: A Methodical Approach [Calo, Ryan] on Amazon.com.
https://www.amazon.com/Law-Technology-Methodical-Ryan-Calo/dp/0197526144
16 days ago
0
4
1
"The best time to start using end-to-end encryption for your calls or for your messages was yesterday and the second best time is today" says
@stanfordhai.bsky.social
Policy Fellow
@riana.bsky.social
in her Terms of Service with
@claresduffy.bsky.social
interview
www.cnn.com/audio/podcas...
How to Keep Your Private Messages Truly Private - Terms of Service with Clare Duffy - Podcast on CNN Podcasts
DMs and text messages can feel like private forms of communication — but it’s not always that simple. There are scenarios where third parties might be able to access your messaging data, whether it's ...
https://www.cnn.com/audio/podcasts/terms-of-service-with-clare-duffy/episodes/563b9a5c-25e7-11f0-a31f-cb00ed6a8850
17 days ago
0
4
2
"...most schools are not yet addressing the risks of AI-generated child sexual abuse materials with their students. When schools do experience an incident, their responses often make it worse for the victims"says
@stanfordhai.bsky.social
fellow
@riana.bsky.social
hai.stanford.edu/news/how-do-...
How Do We Protect Children in the Age of AI? | Stanford HAI
Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.
https://hai.stanford.edu/news/how-do-we-protect-children-in-the-age-of-ai
19 days ago
0
7
2
CIS Affiliate
@hartzog.bsky.social
argues that facial recognition technology is the most dangerous surveillance tool ever invented and that, given the unique threats this tool poses to privacy, civil liberties, and democracy, the only appropriate response is a ban
papers.ssrn.com/sol3/papers....
Normalizing Facial Recognition Technology and The End of Obscurity
This article argues that facial recognition technology is the most dangerous surveillance tool ever invented. Given the unique threats this morally suspect tool
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5450895
19 days ago
0
0
1
Stanford HAI Policy Fellow
@riana.bsky.social
takes Another Look at the STOP CSAM Act in her recent
@techdirt.com
post
www.techdirt.com/2025/08/12/a...
Another Look At The STOP CSAM Act
Amidst the current batch of child safety bills in Congress, a familiar name appears: the STOP CSAM Act. It was previously introduced in 2023, when I wrote about the threat the bill posed to the ava…
https://www.techdirt.com/2025/08/12/another-look-at-the-stop-csam-act/
20 days ago
0
1
0
"Nothing about the EU’s Digital Services Act (DSA) requires platforms to change the speech that American users can see and share online" says
@daphnek.bsky.social
in her latest post, A Primer on Cross-Border Speech Regulation and the EU’s Digital Services Act
cyberlaw.stanford.edu/a-primer-on-...
A Primer on Cross-Border Speech Regulation and the EU’s Digital Services Act
Some U.S. politicians have recently characterized European platform and social media regulation laws as “censorship” of speech in the U.S. If this claim were true, it would be a very big deal. As some...
https://cyberlaw.stanford.edu/a-primer-on-cross-border-speech-regulation-and-the-eus-digital-services-act/
23 days ago
0
2
0
reposted by
Stanford CIS
Ryan Calo
about 1 month ago
If you're in Boston September 8, I'll be speaking about my new book at MIT Media Lab (11:00 AM) and Boston University School of Law (4:00 PM).
global.oup.com/academic/pro...
https://global.oup.com/academic/product/law-and-technology-9780197526132?cc=us&lang=en&#
0
2
1
reposted by
Stanford CIS
Ryan Calo
3 months ago
My new book Law and Technology: A Methodical Approach is available for pre-order ($40) over at Oxford University Press. The book explores what scholars and society can do about emerging technology.
global.oup.com/academic/pro...
(new link)
2
40
10
"To at least preserve the option of using this color in connection with automated driving, safety regulators around the world should be on the lookout for turquoise—in new vehicles, in imported vehicles, and in retrofitted vehicles," says CIS Affiliate BW Smith
cyberlaw.stanford.edu/blog/2025/08...
Turquoise lamps on cars that cannot drive themselves
Back in 2011, as Nevada was developing regulations for automated driving, there was debate about whether vehicles should have a special external signal to indicate that they are in automated driving m...
https://cyberlaw.stanford.edu/blog/2025/08/turquoise-lamps-on-cars-that-cannot-drive-themselves/
25 days ago
0
1
0
reposted by
Stanford CIS
Brett Frischmann
27 days ago
From 5 years ago ...
www.youtube.com/watch?v=SgbC...
Re-Engineering Humanity in the 21st Century | Brett Frischmann | TEDxVillanovaU
YouTube video by TEDx Talks
https://www.youtube.com/watch?v=SgbC3hmhHAU
1
3
1
reposted by
Stanford CIS
Tech Policy Press
about 1 month ago
There are many contexts where “truthiness” won’t do, writes Riana Pfefferkorn. “From human rights activists to camera manufacturers, from academics to public servants, a lot of people are working very hard to keep it possible for society to tell what’s real from what’s fake.” It's an uphill battle.
The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception | TechPolicy.Press
There are many contexts, from private interactions between individuals to the financial markets, where “truthiness” won’t do, writes Riana Pfefferkorn.
https://buff.ly/uWRWuVA
1
32
14
reposted by
Stanford CIS
Barbara van Schewick
7 months ago
Last year, a federal court upheld that law & the Supreme Court denied ISPs' petition to review the case in December. Today's decision denies ISPs' attempt to get the justices to reconsider that decision. New York state has been enforcing the law since January:
arstechnica.com/tech-policy/20
https://arstechnica.com/tech-policy/20
0
0
1
reposted by
Stanford CIS
Barbara van Schewick
7 months ago
Here's how we got here: In 2021, New York state adopted a broadband affordability law that requires ISPs to offer broadband plans to low-income consumers at low cost.
1
0
1
reposted by
Stanford CIS
Barbara van Schewick
7 months ago
They can create their own
#netneutrality
protections, like California and others do, require affordable broadband options like New York, and institute broadband privacy protections like Maine. All of these laws have been upheld in court.
1
0
1
reposted by
Stanford CIS
Barbara van Schewick
7 months ago
Today's decision means that when the FCC is powerless to protect consumers online (as it is after a federal court struck down the FCC's net neutrality protections in January), states can step in to protect their residents.
1
0
1
reposted by
Stanford CIS
Barbara van Schewick
7 months ago
Important
#netneutrality
news: Today, the Supreme Court (again) rejected Internet Service Providers' attempt to undo New York's broadband affordability law. The decision has important implications for states' ability to protect consumers against misbehavior by the companies they pay to get online.
1
2
4
"The FCC can still protect you when you make a regular phone call but not when you use the Internet," said SLS Professor
@vanschewick.bsky.social
for Broadband Breakfast in "Law Prof van Schewick Calls Net Neutrality Ruling 'a Radical Decision.'"
broadbandbreakfast.com/law-prof-van...
Law Prof van Schewick Calls Net Neutrality Ruling ‘a Radical Decision’
What if the FAA lost oversight of Delta and American Airlines? she asks.
https://broadbandbreakfast.com/law-prof-van-schewick-calls-net-neutrality-ruling-a-radical-decision-2/
8 months ago
0
2
0
reposted by
Stanford CIS
Barbara van Schewick
8 months ago
Today, New York starts enforcing its affordable broadband law. Internet providers need to offer affordable broadband plans to low-income people: 25+ Mbps down for $15/month or 200+ Mbps for $20. Jon Brodkin
@arstechnica.com
explains what this means & how we got here:
arstechnica.com/tech-policy/...
New York starts enforcing $15 broadband law that ISPs tried to kill
Fresh off court victory, NY says low-income plans must be available Wednesday.
https://arstechnica.com/tech-policy/2025/01/new-york-starts-enforcing-15-broadband-law-that-isps-tried-to-kill/
0
1
2
reposted by
Stanford CIS
Stanford Law School
9 months ago
SLS's Barbara van Schewick talks to
@npr.org
about "what may be next after a federal court struck down the FCC's net neutrality rules" on a federal court's decision to strike down the Biden administration's net neutrality protections. Listen here:
www.delawarepublic.org/2025-01-06/w...
What may be next after a federal court struck down the FCC's net neutrality rules
NPR's Juana Summers speaks with Stanford Law Professor Barbara van Schewick about a federal court's decision to strike down the Biden administration's net neutrality protections.
https://www.delawarepublic.org/2025-01-06/what-may-be-next-after-a-federal-court-struck-down-the-fccs-net-neutrality-rules
0
3
4
reposted by
Stanford CIS
Barbara van Schewick
9 months ago
Important
#netneutrality
news: Last week a federal court struck down the FCC's 2024
#netneutrality
protections. I spoke with
@npr.org
's
@jsummers.bsky.social
about what this means & what may be next:
npr.org/2025/01/06/n...
(audio & transcript)
1
26
14