With new AI models being released on a weekly basis, it's pretty clear that a single annual snapshot is a woefully inadequate way to keep on top of what is going on with AI. So, I have assembled a cohort of brainfood community members who are advanced users / experimenters, and I'm going to have them report back to us on a quarterly basis. People like Marcel van der Meer, Tony de Graaf, Alla Pavlova, Leandro Gomes da Silva and friends. Must-watch folks - register here

The Brainfood

1. Why I'm Betting Against AI Agents in 2025 (Despite Building Them)
The best posts are the ones which embed important concepts in the discourse. Two ideas from this outstanding post which I think will have this effect - error compounding and token economics. The bottom line for the OP - who is a software engineer implementing AI for a living - is that the more steps you ask AI Agents to complete, the more the error rate compounds, inevitably ending in failure, i.e. requiring human intervention. I've been asking some advanced users in our community to test this, and that's the topic of Brainfood Live this Friday, so make sure you tune in. In the meantime, this post is a must-read, especially if you're looking for the discursive scaffolding to make the case for the human. H/T to brainfooder Richard Bradley for the share in the online community.
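The error-compounding claim can be sketched in a few lines: if each step of an agent's chain succeeds independently with some fixed probability, the chance of the whole chain finishing cleanly falls off exponentially with the number of steps. The 95% per-step figure below is an assumption for illustration, not a number from the post.

```python
# Sketch of the error-compounding argument: per-step reliability is
# assumed to be 95% and steps are assumed independent - both are
# illustrative assumptions, not figures from the source post.

def chain_success_rate(per_step_reliability: float, steps: int) -> float:
    """Probability an agent completes all `steps` with no error,
    assuming each step succeeds independently."""
    return per_step_reliability ** steps

for steps in (1, 5, 10, 20):
    rate = chain_success_rate(0.95, steps)
    print(f"{steps:>2} steps -> {rate:.0%} chance of end-to-end success")
```

Even a seemingly high 95% per-step reliability leaves a 20-step workflow succeeding only about a third of the time - which is the intuition behind the OP's argument for keeping a human in the loop.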