The first time a New York judge fined lawyers for submitting court filings stuffed with cases that never existed—phantom rulings confidently invented by an AI—I felt more than embarrassment for the profession. I felt alarmed. When a machine can speak with such certainty while being so wrong, trust becomes the first casualty.

That episode was not an outlier; it was a warning label written in legal ink. AI systems are built to predict convincing sequences of words, not to understand truth in the human sense, and that gap matters. I have watched AI produce clean paragraphs, tidy citations, and authoritative tones that crumble the moment you verify them. The danger is not that AI lies like a villain; it misleads like a smooth talker who does not know it is bluffing.

What unsettles me most is the confidence. Errors do not limp into the room; they stride in wearing a barong of certainty, smiling, persuasive, and often unchecked. I have tested claims that sounded airtight only to discover dates shifted, facts blurred, and sources quietly invented. The machine does not blush when caught. It simply moves on, and the burden of correction falls on the human who trusted it.

And yet—this is where my frustration turns complicated—we have tied our daily work to these systems. Hospitals use AI to flag risks, banks lean on it to spot fraud, newsrooms use it to sift data, and classrooms are already rearranging themselves around it. I can’t help but rely on it, despite my misgivings, because refusing to engage feels like trying to write with a candle in a city that has already wired itself for electricity. This dependence is not a future problem; it is a present condition.

The irony is sharp: we demand speed and scale, and AI delivers, but accuracy becomes negotiable along the way. I see how easy it is to let convenience outrun judgment. A few seconds saved here, a shortcut taken there, until the habit forms and skepticism dulls. That is how minor errors begin to stack, quietly reshaping decisions that affect real people with real consequences.

There is also a cultural shift at play, and it’s worrisome. We are starting to treat machine output as a starting truth instead of a draft that needs bruising scrutiny. I bristle when I hear people say, “The AI said so,” as if the sentence ends the discussion. Tools were never meant to replace thinking, yet thinking is precisely what gets outsourced first.

Still, I am not calling for a bonfire of servers. I am calling for discipline. Use AI, by all means—but interrogate it, verify it, and resist the temptation to let polished language stand in for reality. The machine should feel like a junior assistant who needs supervision, not an oracle whose words go unquestioned.

If there is a way forward, it lies in humility—ours, not the machine’s. We must remember that judgment, doubt, and conscience are not bugs in human thinking; they are features. AI can help carry the load, but the steering wheel should remain firmly in human hands, where responsibility still belongs.