FleetingBits
@fleetingbits.bsky.social
some thoughts on gpt-5.3-codex and opus-4.6
about 21 hours ago
some thoughts on agi preparedness
2 days ago
absolute banger from chatgpt while discussing thus spoke zarathustra and asi
6 days ago
the openai and anthropic funding rounds are cool, but the whole ai buildout is being funded by hyperscaler debt; these raises are just a fraction of the total spend
6 days ago
companies often fail to become multi-product because they don't split off a team working on a new product into a separate org; so, you end up with cross-functional meetings rather than anyone committed to building anything
6 days ago
dario was able to lead a revolt of the entire language team at openai under sam altman's nose, and sam is the ultimate operator; this should be thought about when thinking about his capabilities and ambitions
7 days ago
so nvidia wants to normalize novel architectures for which the existing asics don’t work
8 days ago
almost all strange llm behavior that you read about on social media can be explained by reading the prompts; this is about moltbook
9 days ago
anthropic’s decision to focus on b2b will do more for the long term alignment of the company and its models than any constitution, hiring decision or governance structure
9 days ago
every ai paper you read is a little compute that you got for free
9 days ago
welcome to genie 3; it's buggy and somewhat unusable, but when this really works, it will be incredible; it will feel like chatgpt in early 2023
10 days ago
one of the great things about llms is that almost all the documents that your colleagues send you are much better now
10 days ago
models will be trained to call fork() on their running context window and environment if they are not already
11 days ago
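For readers unfamiliar with the analogy, here is a minimal sketch of what ordinary POSIX fork() does, which is what the post above is gesturing at: the process duplicates itself, and each copy continues independently from the same prior state. The agent-context framing in the variable names and comments is purely hypothetical, not anything a lab has announced.

```python
import os

# Hypothetical illustration of the fork() analogy from the post above.
# After fork(), parent and child each hold their own copy of the state
# that existed at the moment of the fork and continue independently.
context = ["read the issue", "draft a fix"]  # stand-in for an agent's context window

pid = os.fork()
if pid == 0:
    # child: explores one branch without affecting the parent's copy
    context.append("try approach A")
    print("child: ", context)
    os._exit(0)
else:
    # parent: continues down a different branch, then waits for the child
    context.append("try approach B")
    os.waitpid(pid, 0)
    print("parent:", context)
```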
so, openai and anthropic both predict that they will have 70%+ gross margins in 2029; this seems very difficult unless they end up sorting into natural monopolies and become less competitive
12 days ago
the main difference between the twitter algo and the bsky algo is that bsky doesn't reward the current thing as much; so, niche posts are more likely to be read on here, while stuff about the big news story is likely to be read on twitter
13 days ago
some thoughts on chatgpt ads: 1) chat assistants are going to end up directing an enormous amount of consumer spend over the next couple of years
13 days ago
many people say that they want to work on safety but just want high-prestige jobs at the foundation labs
13 days ago
some thoughts on dario's essay: 1) there's nothing new here if you're familiar with the ai safety discussions that have been happening on twitter
Dario Amodei — The Adolescence of Technology
Confronting and Overcoming the Risks of Powerful AI
https://www.darioamodei.com/essay/the-adolescence-of-technology
13 days ago
each new iteration of coding models gives me the opportunity to redo my website (fleetingbits.io)
15 days ago
claude's constitution is slow reading, but my take at 27 pages in is that it feels very passive aggressive
16 days ago
eval fever seems to have died down - like 6 months ago everyone was putting out their own cottage eval
16 days ago
it still takes an enormous amount of iteration to get a website to be polished with claude code
17 days ago
some thoughts on agentic qwen shopping from alibaba / what agentic shopping means for amazon: 1) alibaba added agentic shopping to its qwen app, which sets up purchases for the user over other alibaba services; users can then pay in the qwen app with alipay
17 days ago
feels like the Apollo Research review of o1 was a bit adversarial - just getting that vibe from the description in the system card
about 1 year ago
It's interesting how many disasters come from a collection of small failures - often because people are not sufficiently motivated to coordinate.
www.youtube.com/watch?v=zRM2...
The Wild Story of the Taum Sauk Dam Failure
YouTube video by Practical Engineering
https://www.youtube.com/watch?v=zRM2AnwNY20
about 1 year ago
Interesting thread on what social media rewards in academic articles. I think overbroad claims but, you know, you take what you can get.
x.com/0xredJ/statu...
x.com
https://x.com/0xredJ/status/1864463998005211477
about 1 year ago
Another interesting video - I think the idea that providers should have to stop deployment of their models if the models attempt to escape is reasonable. Probably the starting point is actually a set of reporting requirements, but I digress...
Buck Shlegeris - AI Control [Alignment Workshop]
YouTube video by FAR․AI
https://www.youtube.com/watch?v=JZYjz7D_auw&list=PLpvkFqYJXcrdxYK-C4ZRj0cgcco3o0VxF
about 1 year ago
lauren’s views on The Curve conference
x.com/typewriters/...
x.com
https://x.com/typewriters/status/1863993386237116891?s=46
about 1 year ago
claimed - AI misuse risk and AI misalignment risk are the same thing from a policy and technical perspective
Richard Ngo – Reframing AGI Threat Models [Alignment Workshop]
YouTube video by FAR․AI
https://www.youtube.com/watch?app=desktop&si=XRR0ofCG7IEp1n_b&v=4v3uqWeVmco&feature=youtu.be
about 1 year ago
the most frustrating thing about a lot of papers on safety topics is the refusal to give illustrative real-life examples - and often when examples are given, they don't hold up to scrutiny or only weakly make the point that they are supposed to support
about 1 year ago