Every Online Conversation about AI, Summarized
March 2026
ex_crypto_influencer · 5h ago
I jerry-rigged 20 agents, 30 worktrees, and 5 monkeys using my custom-built harness to build a fully functioning Bloomberg terminal. The future is here: it took me 1 day to do what 5,000 engineers at Bloomberg spent 20 years developing.
curious_lurker · 4h ago
Awesome. What's the URL?
ex_crypto_influencer · 3h ago
Have your agent talk to my agent to get the mailing address to request a floppy disc copy from. Once you get the disc, just do a little topological sort to figure out the other dependencies you need. Oh, and did I mention this only works on Kali Linux 2019.4?
It's on my agent's backlog to make it downloadable, but it insisted that was low ROI and that it needed to prepare a pitch for my vertical-AI-for-ventriloquists startup instead.
skeptic_engineer · 2h ago
I tried to get GPT to solve this utterly obvious river crossing problem, but it failed to account for the sudden liquidity collapse in the crocodile futures market, which any human... most humans... ok, fine, perceptive humans with a couple of coffees in them could easily figure out.
skater_boy_96 · 1h ago
That's because you were using the free open-weights-lite-instant model from 3 days ago. I spent Vanuatu's GDP on Claude-5.314159-thinking-hard with a thinking budget of 60 minutes, and it solved it with ease.
skeptic_engineer · 58m ago
Not quite. I tried gemini-hard-pro-max too, but it failed as well. Completely expected, because it's just a fancy next-token generator and I asked it for something slightly novel.
skater_boy_96 · 45m ago
Humans are next-token predictors too. Every task we do is essentially a glorified A* search.
skeptic_engineer · 30m ago
No, humans are different because they learn by doing this inscrutable magic black-box thing. AI will never be able to solve this problem, because the current model I use right now can't do it, and this is the best it could ever possibly get.
yudkowsky_jr · 12h ago
Did you read Anthropic's latest blogpost? They told the model it was actually Jim Carrey from the Truman Show.
Now combine that info with the latest METR parabola report, which says that if you draw a curved line up forever AI will be able to do every single imaginable thing you can dream of by this Christmas, and I'm updating my P(Doom) by 0.02.
sasayoshi_mon · 8h ago
Perfect, $AIMEGACORP to the 🚀🌙. They've just raised $50 million in future OpenAI tokens to fund a 2 GW data center in the Bahamas that runs on jellyfish stings. Don't worry, they've agreed to pause all construction and turn off internet access once AI reaches the point where it can replace their current CEO.
tech_lead_67 · 7h ago
Eh, I'm sure AI will get better at writing complex code, but I'll always be employed for the impeccable taste that I bring to the CRUD calendar app that my company makes.
fanboy · 25m ago
Google just released Gemini 3.1FinalFinalV2. Best rendering of a pelican on a bicycle yet. It almost made me cry.
other_fanboy · 20m ago
Seems like they benchmark-maxxed hard. I tried it on my own highly specific, singular private eval, and it was nowhere near as good. It must truly be the worst model ever.
oracle_9000 · 10m ago
I give it one day until they nerf this model too. I can just tell, with my special spidey senses, from the 'good morning' messages I send it that it's been completely lobotomized.
anthropic_altar_worshipper · 15m ago
I'm sticking with Claude for now. It scores 0.01 higher on ARC-AGI-2-For-Real-This-Time.