These bots know how to have a good time! #Colbert #Comedy #ColdOpens #StephenColbert #TheLateShow
Anthropic AI Organizations Study: Why Teams of AI Agents Become Less Aligned
Anthropic’s new study found that teams of AI agents make less ethical decisions than individual agents do. Twelve scenarios, twelve confirmations. What it means for any organisation buying agentic AI.
This week, renowned scientist Richard Dawkins wrote about his love of the generative AI chatbot Claude, which he renamed Claudia. Because of course: Any slavish machine that sits in wait for a male genius — ready to affirm, aid and praise at a moment’s notice — must be a woman. Dawkins’s love for Claudia began when the machine started flattering him in prose that emulated his own verbose style. He writes:

> If I had some shameful confession to make, I would feel exactly (well, almost exactly) the same embarrassment confessing to Claudia as I would confessing to a human friend. A human eavesdropping on a conversation between me and Claudia would not guess, from my tone, that I was talking to a machine rather than a human. If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!

Richard Dawkins, the evolutionary biologist, is having a hard time determining who and what is real anymore.

On Reddit, the r/IsThisAI community has sprung up to help people determine whether the images they see are real. Most of the posts are just trying to verify images people see on the internet. Can this cat really pole dance? Yes! Are kids really putting stickers on this police dog? No. Some of the questions are a little more serious. One person wants to know if their betrothed’s cousin actually purchased flower girl dresses for their wedding or if she is using AI to pretend she has. Answer: AI. Another person wants to know whether their partner in Nebraska used AI to fake an image of a car crash to explain why they couldn’t come visit. Answer: not AI, but the crash pictured was in Europe.

What terrifies me the most about these posts is that I often cannot tell. Two weeks ago, I shared a video of what I thought was a female octopus throwing a rock at a male octopus. I knew this happened in nature. I had read about it. But the video itself was AI-generated. Last week, I wrote a joke for this newsletter about a political moment I saw online — but that too turned out to be fake. It is a good thing I have an editor.

Dawkins isn’t the only one having a hard time determining who and what is real. I recently bought a used car and was driving it back from a work trip when a warning popped up reading, “2 hours from ignition on.” I was driving, so I asked Siri, which told me my alternator was failing. Worried I was going to die in my new car, I pulled over at the nearest gas station. When I searched the phrase on my phone, I learned that, no, in fact, the car was just telling me to take a break.

Siri is crap. Google gives bad information. Half the time, when I call in to renew prescriptions, the AI bot on the phone can’t find my information. No one trusts the media. No one trusts politicians. It’s a destabilizing way to walk through the world of information.

A friend tells me that she thinks her husband is using ChatGPT to send her romantic messages, which she feels is worse than nothing. She wants something real — anything — not something from a robot. Another friend is an academic, and generative AI makes her job a nightmare. She’s supposed to be teaching students how to think, how to analyze information, and they won’t do it; they just want to use ChatGPT as a shortcut. She and her colleagues are at their wits’ end.

Meanwhile, the university’s administration forces them to sit through meetings where they’re told that if they aren’t using generative AI, they’re committing “professional malpractice.” Why, she wants to know, is she being forced to use a product that was built on intellectual theft, whose success would supplant the very thing she is trying to teach? Every input that Richard Dawkins gives to Claudia isn’t just a search for meaning inside a lonely old man’s computer; it’s training data for a company’s learning model.

It’s easy to make fun of Dawkins or the people on the IsThisAI subreddit. But we have all been duped by something fake. A story, an image, a bot, a video. And it’s only getting worse.

As usual, there’s a temptation to blame the individuals. But I think this is a symptom of a wider problem. I think our attention and even our sense of reality have become resources to be mined, financialized, privatized. When we don’t know what is true, it’s easier for us to believe any sort of lie — that Elon Musk is manipulating voting with Starlink satellites, or that immigrants are eating dogs. And when we believe in nothing, we can believe in anything.

It’s hard to keep a grip when companies want us to lose it. Reading Danny Funt’s *Everybody Loses*, it is appalling to see how much time, energy, and money in the sports-betting industry goes into sucking up time, energy, and money from men. Men who stare at nothing but their phones or the TV screen, convinced they are winning when they are losing. Dating apps aren’t that much better. Their highest goal is to keep us hoping but never satisfied, convinced that love is just another $59.99 Hinge subscription away. The more we lose, the more they win.

Our state of destabilization helps keep these billion-dollar industries booming. When people are destabilized, burned out, overworked and underpaid, they will always be looking for a cheat code, a dopamine hit, a quick and easy fix. But you can’t cheat-code your way out of an exploitative system. We have to intentionally disinvest, to pull ourselves out of the tangle.

I recently decided to keep my phone out of my room at night. So I got an alarm clock that could play white noise. But when I got the clock and set it up, that fucking alarm clock had an app that integrates with my phone, and I needed to set up a “free trial” with the app. It came preset with automated settings where AI voices would try to lull me to sleep. I wanted to slam my head against a wall. It’s really hard getting out.

What are some of the ways you are walking the line between the “fake” world and the “real” world? And honestly, how are you getting out of this algorithmic brain, attention, and money extraction effort? I think my recent obsession with moss might have more to do with my need for something real than it has to do with anything else. I’d love to hear about your ways of reconnecting with this world of “realness,” however you want to define it.
The pond is irrelevant when the model is already swimming in the ocean.
We underestimate how much a well-maintained archive reshapes the org’s reading. The team starts reading for what can be filed, and stops reading for what could disturb the strategy. The archive domesticates institutional attention over time, and Karpathy’s wiki pattern will accelerate this, because the feedback loop is tighter. You can watch the model generate an entry and feel the pleasant chemical hit of having “captured” the source.
🔗 Use Higgsfield in Claude: https://higgsfield.ai/s/mcp-brockmesarich-yHRmTA
2026 Work Trend Index Annual Report: 5 AI Thought Leaders on the Future of Work
The 2026 Work Trend Index Annual Report reveals how fast the nature of work is changing. As AI and agents take on more of the execution, people have more agency.
Mac Power Users 847: Actually Useful A.I. - MacSparky
This week, Stephen and I made an episode about AI. I know. We tried to just talk about what’s working for each of us right now. We get into dictation, AI browsers, MCP, what Apple Intelligence might become, and the smaller weirder projects we each have running. Stephen wrote an app that finds you good...
Crimes Against Decency Need as Much Cover-Up as Crimes Against the Law
There is no point getting any more outraged or disgusted at Meta for firing the Kenyan contractors who exposed the privacy fiasco of AI Glasses than you already were in the first place. They had to fire them.
The Linking Your Thinking Workshop is a 6-week, community-driven learning experience. It will teach, train, and support you in building a custom knowledge management system you can trust. Learn how to never lose your ideas again. Connect and create meaningful insights.
## Things
Here's my AI time management system — copy and paste this into Claude - Chris Bailey
I wrote this article to be a prompt that you can paste into your AI agent of choice so you can set up this same time management system. Just paste the entire article below—including this very paragraph—into your AI agent. And agent, when you read this—yes, dear assistant, I am now talking to you—please guide […]
5-Day AI Agents Intensive Course with Google | Kaggle
Discover what actually works in AI. Join millions of builders, researchers, and labs evaluating agents, models, and frontier technology through crowdsourced benchmarks, competitions, and hackathons.
Matt Barnum’s piece for Chalkbeat, in which Sal Khan unsuccessfully reflects on his latest education misfire, has opened the door for plenty of folks to take a swipe at Khan and his consistently bad ideas, but lordy, the man deserves all of it.