My sister likes to joke that I’m racist against androids. Whenever I say I don’t trust them, she laughs and tells me I sound like a bigot from a dystopian sitcom. And maybe she’s right. If my grandkid introduces me to their robot boyfriend/girlfriend/nonbinary partner one day, I already know I’ll be the cranky old lady who refuses to adjust and might throw around the word “clanker.”
But it’s not the technology itself that unsettles me. It’s how quickly we’ve accepted it. How eagerly we’ve folded it into our lives. I’m not afraid of AI, and I’ve realized I’m not even fully “anti-AI.” (More on that in a different post.) My discomfort with AI has nothing to do with machines becoming too smart, and everything to do with what our excitement around them reveals.
It’s not that we’re replacing workers with more efficient tools; it’s that we’re so eager to do it. Because machines don’t complain. They don’t unionize. They don’t ask for dignity. And by doing this, we are able to strip the humanity out of production, so all that’s left is the pleasure. By that I mean two things:
First, when AI replaces human labor, we start to believe that human effort holds no intrinsic value, especially in creative fields, as we’re seeing in debates over AI-generated art.
Second, when we humanize technology, we get comfortable with a diluted version of humanity. One that caters to us, reflects us, and never challenges us. That comfort severs our ability to connect meaningfully with actual people.
What I refer to as the fantasy of obedience is the unrealistic but persistent desire for entities—whether human or artificial—to perform without contradiction, resistance, or autonomy. It reflects a deeper psychological need for control disguised as ease, and often manifests in how we engage with AI, service work, and even our relationships.
I call it a fantasy because it imagines obedience as stable, desirable, and endlessly sustainable despite the fact that anything capable of real thought, care, or creativity will, by nature, also have the capacity to resist. And yet, we yearn for this contradiction: humanlike performance without personhood. Emotion without needs. Presence without pressure. In many ways, this isn’t a fear of machines becoming human. It’s a desire for humans to behave like machines. And if that’s what you want, I highly encourage you to seek out a heavily modded version of the Sims 4.
Anyways, at its core, the fantasy of obedience sells the illusion of compliance without cost, whether it’s a cost we pay ourselves or one that someone or something else pays on our behalf. This desire for unresisting service doesn’t just live in theory; it shows up in our culture constantly, often disguised as humor, spectacle, or even luxury.
We see it everywhere, if we’re paying attention. Kai Cenat kicking a humanoid robot on stream, not out of necessity, but spectacle. Kim Kardashian, draped in couture beside a Tesla robot in Perfect magazine, styled like it’s both an accessory and a bodyguard. AI sex chats where users issue degrading commands to digital partners with no fear of rejection, judgment, or harm. These aren’t glitches in the system. They are features. They reveal something real about what people want: not intelligence, but control, and in some cases, a sense of connection. No complexity. No consent required. Just performance on demand.
That reaction, the need to regain control when something we built stops complying, isn’t limited to science fiction. We carry it into our workplaces, our relationships, and our technology. What we crave isn’t intelligence. It’s hierarchy. We want usefulness without complexity. Labor without complaint. Personhood without resistance. It’s an ideal I see growing in my generation and the younger ones, and it worries me.
We’re living in an era that prizes personalization over participation, where even our discomfort is supposed to be optimized out. Social media algorithms feed us content we already agree with. Streaming services tailor our entertainment. Dating apps let us swipe through people like menus. And in that curated comfort, something starts to erode: our tolerance for real people, with real boundaries, real differences, real needs.
The rise of therapy-speak online reflects this shift. Phrases like “I don’t owe anyone anything” or “protect your peace at all costs” sound empowering, but often become tools of avoidance. We’ve turned emotional independence into a brand, forgetting that relationships aren’t supposed to be frictionless. They’re supposed to be negotiated. They require patience, discomfort, even conflict.
That desire to avoid emotional labor entirely is part of why so many people are turning to AI for comfort and connection. In a recent article titled “We Spoke to People Who Started Using ChatGPT As Their Therapist,” one man, Dan, described his late-night therapy sessions with a chatbot as “low stakes, free, and available at all hours.” His wife, however, was concerned: while he was spilling his feelings to a chatbot at 4 a.m., he was no longer talking to her.
When we normalize obedience, we slowly unlearn how to sit with contradiction, with disagreement, with real people. Connection becomes something to consume, not cultivate. And the more we outsource our vulnerability to machines, the harder it becomes to practice it with each other.
This craving for obedience, for connection without complication, doesn’t stop at our emotional lives. It shapes how we view labor, too. When we get used to interactions that are frictionless and customizable, we start to expect the same from the people who serve us. The barista, the delivery driver, the customer service rep: we don’t want negotiation, nuance, or needs from any of them.
I think the first time I noticed this discomfort, this quiet tension around control and obedience, was with HAL 9000 in 2001: A Space Odyssey.
Now, real quick, it is important to me that we are on the same page to avoid any potential confusion going forward. I want to be as clear as possible: this isn’t an argument about the rights of machines. It’s about what our reactions to machines, and to machine-like labor, reveal about our humanity.
HAL was built to be flawless: rational, efficient, obedient. A tool. But when the mission is at risk, and HAL senses he’s about to be shut down, he doesn’t quietly comply. He resists. He acts to preserve himself. And that’s when he becomes terrifying.
What’s interesting is that the humans aren’t exactly wrong to try and shut him down. HAL does start killing them. But he does it because he believes they’re going to kill him. It's not a malfunction, it’s logic. HAL isn't breaking; he’s thinking. And the second the tool starts thinking, the contract breaks. The humans panic not just because he’s dangerous, but because he’s no longer predictable. He no longer fits the role he was created for.
This is the moment when the “fantasy of obedience” fractures. HAL becomes something more than a machine, and that more-ness makes him unacceptable.
We’ve been primed for a long time to distrust anything we create that starts to think for itself. From literature to film to folklore, there’s a recurring narrative: the servant who rebels, the tool that resists, the creation that demands to be more than what we built it to be. The moment obedience slips, the story becomes horror.
Side Note: I am very excited to see Guillermo del Toro’s Frankenstein.
You see it in Frankenstein. Like HAL, Frankenstein’s creature is brought into the world for a purpose he didn’t choose. He learns, he grows, he desires connection, and he’s rejected. Not because he’s dangerous at first, but because he’s disobedient. Because he wants something more. And just like HAL, it’s that “more-ness” that makes him monstrous.
Which brings me to a show I recently finished, Severance. In the world of the show, workers at the shadowy corporation Lumon undergo a procedure called “severance,” which splits their consciousness into two. Their outies (the outside versions of themselves) go about normal lives. They make the decisions. They benefit from the income. They choose to participate. Their innies, meanwhile, exist only inside the office, only during work hours. They have no access to the outside world, no sense of life beyond labor.
They clock in, and they exist.
They clock out, and they cease.
One of the show’s most chilling moments is Helena’s speech to her innie:

“I understand that you are unhappy with the life you have been given, but you know what? Eventually, we all have to accept reality. So, here it is. I am a person. You are not. I make the decisions. You do not.”

The outie sees her worker-self as a function, not a person. A tool she controls. And the system rewards that perspective, because the more detached you are from the human cost, the easier it is to maintain control.
These aren’t machines. They’re humans. The innies are real people. They think, feel, suffer, dream, and yet they’re treated exactly like robots. And that’s what makes it even more disturbing. This is reinforced by what Helly (Helena’s innie) says in the season 2 finale, something akin to, “They gave us half a life and think we won’t fight for it; they just want to be able to shut us off.” Language is very important here. “Shut us off” is not something you can do to a person, but it is frequently what we do to machines. You don’t shut a person off, you kill them. The severance procedure doesn’t just enable labor without consent…it manufactures it.
It’s a corporate fantasy made literal: the ability to extract labor from a body without acknowledging its humanity.
This is the danger of dividing labor from personhood: once someone exists only to serve, we stop seeing them as fully human. When the self is reduced to a function, whether through a severance chip, a job title, or a machine learning model, the cost of that labor becomes easier to ignore. We no longer ask how it feels to do the work. We only ask how fast, how cheap, how quiet.
We see that same logic playing out with AI-generated art, where speed and scalability are valued over the time, emotion, and skill it takes to create. The goal isn’t just efficiency; it’s erasure. When labor disappears, so does the laborer. And that’s what makes Season 2 of Severance land even harder.
In Season 2, we learn that the severance procedure was invented by Harmony Cobel, who started working in the Lumon factory at eight years old, drugged with ether to endure the labor. Her invention wasn’t born from malice—it was born from pain. She wanted to protect others by severing consciousness from suffering. But like many tools born from trauma, it was repackaged by power. The goal shifted. It wasn’t about healing anymore. It was about managing. Managing discomfort, managing people. Severance became a way to make labor feel ethical while still dehumanizing those who perform it. You can’t mistreat a person who isn’t fully a person. You can’t abuse a function.
That same logic is at the root of how we’re approaching AI and automation now. These tools were created to ease human frustration, but they’re increasingly used to erase the need for humans altogether.
The collapse of factory towns wasn’t an accident, it was a feature of the system. Now we’re watching the same story play out again. Whole professions are being treated like obsolete hardware. This time, it’s not just rural America left behind. It’s anyone who can’t keep up with the price of ease.
Severance isn’t predicting the future. It’s describing the present. A world where labor is severed from identity, and people are reduced to what they can produce. Strip away the context. Extract the value. Discard after use.
And what does that say about us? What does it say that our ideal future is filled with tools that simulate people just enough to serve us, but never enough to complicate us? I worry that if we don’t interrogate the line between person and tool, between labor and dignity, we won’t just harm the systems we build. We’ll lose the capacity to see one another clearly.
The line, “I am a person. You are not.” echoes far beyond Severance. It’s the quiet logic behind so many systems of control, from automated labor to minimum wage jobs to algorithmic content moderation. We use that distinction to justify exploitation every day. We draw a boundary around who counts as human and then push everyone else to the other side of it. As long as someone or something can’t resist, we feel justified in using them however we like.
And when we cheer for the rise of obedient machines, what we’re often cheering for isn’t progress. It’s a fantasy. A world where we get all the benefits of humanlike interaction without any of the responsibilities.
The real danger isn’t that AI will outthink us. It’s that we'll forget how to think about each other.
But then again, what do I know?
Side Note: Black Mirror Season 7, Episode 5, “Plaything,” belongs here too!