David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

Category: ai

  • Product Management is Dead

    Product Management is Dead

    My social media feeds have been inundated lately with bold assertions and proclamations about the future of product management.

    • Do we still need product managers?
    • Is AI going to replace product teams?
    • Has product…died?

    The claims tend to follow a predictable pattern:

    • AI writes user stories and PRDs.
    • AI generates user personas.
    • AI summarizes feedback and explores pain points.
    • AI prioritizes roadmaps.

    It makes for a compelling headline, often pushed by companies or consultants selling tools or services that claim to automate these tasks. These headlines grab attention, spark debate, and tap into the anxiety many product managers feel as AI reshapes their role.

    But this isn’t a funeral. It’s a reckoning. The old, process-heavy, adaptability-light version of product won’t survive. But that’s not the end of product. It’s the beginning of something better. Beneath the clickbait is a valid call to evolve: product management isn’t dying, it’s transforming.

    What Product Really Is

    Before we talk about what’s changing, let’s be clear about what product is.

    Product management isn’t a set of tasks. It’s a discipline of focus, alignment, and judgment.

    It’s about understanding problems deeply, prioritizing effectively, and creating the conditions for great teams to build the right things.

    AI can assist with this work, but it can’t own it. And if you think product is just a list of tasks?

    You’re already doing it wrong.

    Why People Want Product Dead

    Product is often seen as a bottleneck: the layer that slows down builders with meetings, documents, and decisions that feel like bureaucracy. In fast-moving, engineering-led organizations, product often looks like something that should be automated rather than a discipline rooted in insight, prioritization, and alignment.

    AI has only amplified that impulse. With tools that can instantly generate specs, synthesize feedback, and mock up features, product starts to look like a collection of tasks rather than a strategic function. And if it’s just tasks, why not let the machines do it?

    That thinking is tempting, especially to companies chasing speed and efficiency. But it’s also shortsighted. Still, the “product is dead” narrative keeps getting airtime because companies want it to be true, even if it misses the bigger picture.

    Speed Over Strategy: Engineering-Led Cultures Prefer Shipping

    In many engineering-led cultures, especially in AI, there’s a deep bias toward building, shipping fast, testing fast, and iterating fast. AI has collapsed the cost of experimentation. And with today’s AI tools, it’s never been easier to vibe code (i.e., rapidly stitch together working demos using AI and low-code tools) your way to a working prototype. You can spin up UIs, connect APIs, and generate sample data in hours instead of weeks. It looks and feels like progress.

    But without intention, you’re not building products, you’re building distractions. You’re producing, not progressing. You’re generating output, not outcomes.

    And that’s the trap: it feels like you’re moving faster, but without a clear understanding of the customer, the problem, and the strategy, you’re either moving in circles or heading in the wrong direction entirely.

    Task-Based Thinking: Why Product Looks Replaceable

    The appeal is obvious: automate the “middle layer,” and suddenly, your team is leaner, faster, and cheaper. Product work is reframed as a series of repeatable tasks: write a story, generate a persona, summarize feedback, and stack rank a backlog. It’s presented as something mechanical, like configuring an assembly line, rather than requiring focus, intention, and insight.

    But this framing is dangerously incomplete. These aren’t just tasks; they’re judgment calls. They ensure teams solve the right problems in the right way at the right time. Discovery without direction is noise. Strategy without prioritization is chaos. Specifications without insight are just empty documentation.

    AI can assist with product work, but reducing it to a checklist makes it easier to sell a tool but harder to build anything meaningful.

    A Convenient Story: The Simplified Narrative That Sells

    It’s a narrative that promises clarity: eliminate the middle layer, remove the blockers, and let machines and makers do what they do best. This strategy plays perfectly in a world obsessed with efficiency and in organizations that already see product management as overhead.

    But the truth is messier.

    Good product managers don’t just write tickets or relay requests. They bring cohesion to chaos. They align teams around a shared understanding of the customer, the problem, and the goal. They ask the hard questions that AI can’t answer on its own.

    Should we build this? Why now? What matters most?

    AI can produce content, but not conviction. It can analyze feedback, but not frame a vision. And it can’t resolve the tensions between user needs, business goals, and technical constraints — at least not without someone to interpret, prioritize, and lead.

    The “product is dead” story works because it feels simple. But building good products was never simple. Removing the people who deal with complexity doesn’t make it go away. It just makes it your customer’s problem.

    The Companies Who Will Regret This

    Here’s my prediction:

    • The companies that cut product first will move fastest at first.
    • Their roadmaps will fill up. Their launches will accelerate. Their demos will look impressive.

    But then, slowly and quietly, things will start to break.

    • Customer engagement will slip.
    • Retention will fall.
    • New features will feel disconnected from real needs.
    • Teams will build for what’s easy, not for what’s valuable.

    The companies that sold them those shiny new tools, the ones that promised to replace product? They’ll be long gone, moving on to the next buyer or looking for the next hype cycle to exploit.

    Meanwhile, the companies that doubled down on the real craft of product, who invested in judgment, customer obsession, and asking why before what, will still be standing (and thriving) while others fade. They’ll have products that resonate and that evolve with their customers.

    Because tools don’t create strategy.

    People do.

    Old Product Might Be Dead — And That’s a Good Thing

    Now, here’s where I’ll agree with the AI evangelists: old product needed to change.

    The days of PMs acting as backlog managers, Jira ticket writers, or meeting schedulers? Yeah, that should die.

    PMs who only handed off requirements to engineering? Gone.

    PMs who never talked to customers? Dead.

    Let’s be honest: many organizations misdefined the PM role. They hired process managers and called it product. They built layers of communication, not layers of clarity. They were managing workflows, not products. They were pushing tickets, not pushing strategy. Those roles are vulnerable not because of AI, but because they weren’t doing product in the first place.

    The version of product that survives this shift and is worth fighting for is sharper, faster, and more essential than ever. It’s not about being an intermediary between engineering and design. It’s about creating clarity, focus, and vision where there was once noise and confusion.

    It looks like:

    • Problem curation over solution obsession: It’s not about finding the quickest fix or building what’s easiest. It’s about understanding what problem we’re solving, for whom, and why it matters.
    • Judgment over process: AI can help automate the steps, but it can’t tell you if you’re solving the right problem or if the timing is right. Good product management is still a series of thoughtful decisions, not just steps in a flowchart.
    • Context over control: Dictating requirements from above doesn’t work anymore. Context, shared understanding, and alignment are what drive teams to collaborate effectively, not command and control.
    • Collaboration over command: PMs are the glue that brings engineering, design, and business together. But that means being a partner and enabler, not a dictator. Collaboration is the new currency in product development.
    • Customer truth over corporate theater: Building the right product requires honest feedback, real conversations with customers, and deep empathy. It’s not about making the product look good on paper; it’s about making it work for the people who use it.

    The old way of doing product is over. But this isn’t about mourning its loss. It’s about embracing a new, more purposeful approach. The role of product management is evolving, and in many ways, that’s a huge opportunity to do better, build better, and have a bigger impact.

    The King Is Dead. Long Live the King.

    The “product is dead” narrative is loud right now because it’s easy. It’s easier to believe we can automate judgment than it is to build it. Easier to replace complexity than to wrestle with it. Easier to promise speed than to commit to substance.

    But the companies that endure — the ones that create real value, not just hype-fueled demos — will be the ones that lean into the harder, more human work.

    They’ll treat product not as a process to optimize, but as a practice to sharpen.

    They’ll embrace AI as a powerful tool — not a replacement for the thinking, intuition, and collaboration that make great products possible.

    They’ll stop treating product like a middle layer to cut, and start recognizing it as a critical function to elevate.

    Because here’s the truth: the best product teams won’t just survive this shift. They’ll lead it.

    They’ll be faster because they’re clearer. Smarter because they’re humbler. Stronger because they’re more aligned.

    Product isn’t dead. Bad product is dead. Shallow product is dead. Performative product is dead.

    The age of product isn’t over. The age of better product is just beginning.

    Long live product — not as it was, but as it needs to be.

  • Are You Not Entertained?

    Are You Not Entertained?

    “Give them bread and circuses, and they will never revolt.”
    — Juvenal, Roman satirist

    Over the past two weeks, my LinkedIn feed has looked like an AI fever dream. Every meme from the past 10 years was turned into a Studio Ghibli production. Former colleagues changed their profile pictures into a Muppet version of themselves. And somewhere, a perfectly respectable CTO shared an image of themselves as an ’80s action figure.

    Meanwhile, in boardrooms everywhere, a familiar silence falls: “But… where’s the ROI?”

    The Modern Colosseum

    The Roman Empire understood something timeless about human nature: if people are distracted, they’re less likely to notice what’s happening around them. Bread and circuses. Keep them fed and entertained, and you can buy yourself time (or at least avoid a riot).

    Fast-forward a couple of thousand years, swap the emperors and politicians for CEOs in hoodies and VCs in Patagonia vests, and the gladiators for generative AI, and the strategy hasn’t changed much.

    Today’s Colosseum is our social feed. And instead of lions and swords, it’s Ghibli filters, Muppet profile pictures, and action figure avatars. Every few weeks, a new AI-powered spectacle sweeps through like a new headline act. The crowd goes wild. The algorithm delivers the dopamine. And for a moment, it feels like this is what AI was always meant for: fun, viral, harmless play.

    But here’s the thing: that spectacle serves a purpose. The companies building these tools want you in the arena.

    Every playful experiment trains their models, every viral trend props up their metrics, and every wave of AI-generated content helps justify the next round of fundraising at an even higher valuation. These modern-day emperors are profiting from the distraction.

    You get a JPEG. They get data, engagement, and another step toward platform dominance.

    Meanwhile, the harder, messier questions that actually matter get conveniently lost in the noise:

    • Where does this data come from?
    • Where does the data go?
    • Who owns it?
    • Who profits from it?
    • What happens when a handful of companies control both the models and the means of production?
    • And are these tools creating real business value — or just highly shareable distractions?

    Because while everyone’s busy turning their profile picture into a dreamy Miyazaki protagonist, the real, boring, messy, complicated work of AI is quietly stalling out as companies continue to struggle to find sustainable, repeatable ways to extract value from these tools. The promise is enormous, but the reality? It’s a little less cinematic.

    And so the cycle continues: hype on the outside, hard problems on the inside. Keep the crowd entertained long enough, and maybe nobody will ask the hardest question in the arena:

    “Is any of this actually working?”

    Spectacle Scales Faster Than Strategy

    It’s easy to look at all of this and roll your eyes. The AI selfies. The endless gimmicks. The flood of LinkedIn posts that feel more like digital dress-up than technology strategy.

    But this dynamic exists for a reason. In fact, it keeps happening because the forces behind it are perfectly aligned.

    It’s Easy

    The barrier to entry for generative AI spectacle is incredibly low.
    Write a prompt. Upload a photo. Get a result in seconds. No infrastructure. No integration. No approvals. Just instant content, ready for likes.

    Compare that to operationalizing AI inside a company where projects can stall for months over data access, privacy concerns, or alignment between teams. It’s no wonder which version of AI most people gravitate towards.

    It’s Visible

    Executives like to see signs of innovation. Shareholders like to hear about “AI initiatives.” Employees want to feel like their company isn’t falling behind.

    Generative AI content delivers that visibility without the friction of actual transformation. Everyone gets to point to something and say, “Look! We’re doing AI.”

    It’s Fun

    Novelty wins attention. Play wins engagement. Spectacle spreads faster than strategy ever will.

    People want to engage with these trends — not because they believe it will transform their business, but because it’s delightful, unexpected, and fundamentally human to want to see yourself as a cartoon.

    It’s Safe

    The real work of AI is messy. It challenges workflows. It exposes gaps in data. It forces questions about roles, skills, and even headcount.

    That’s difficult, political, and sometimes threatening. Creating a Muppet version of your team is much easier than asking, “How do we automate this process without breaking everything?”

    And that’s exactly what the model and tool providers are taking advantage of. The easier it is to generate content, the faster you train the models. The more fun it is to share, the more data you give away. The safer it feels, the less you question who controls the tools you’re using.

    The Danger of Distraction

    The Colosseum didn’t just keep the Roman crowds entertained — it kept them occupied. And that’s the real risk with today’s AI spectacle.

    It’s not that the Ghibli portraits or action figure avatars are bad. It’s that they’re incredibly effective at giving the illusion of progress while the hard work of transformation stalls out behind the scenes.

    Distraction doesn’t just waste time. It creates risk. It creates vulnerability.

    Because while everyone is busy playing with the latest AI toy, the companies building these tools are playing a very different game — and they are deadly serious about it.

    They’re not just entertaining users. They’re capturing data. Shaping behavior. Building platforms. Creating dependencies. And accelerating their lead.

    Every viral trend lowers the bar for what people expect AI to do — clever content instead of meaningful change, spectacle instead of service, noise instead of impact. Meanwhile, the companies behind the curtain aren’t lowering their ambitions at all. They’re racing ahead.

    And the longer you sit in the stands clapping, the harder it gets to catch up.

    Leaders lose urgency. Teams lose focus. Customers lower their standards. And quietly, beneath all the fun and novelty, a very real gap is opening up — between the companies who are playing around with AI and the companies who are building their future on it.

    This is the real risk: not that generative AI fails but that it succeeds at the completely wrong thing. That we emerge from this wave with smarter toys, funnier memes, faster content… but no real shift in how work gets done, how customers are served, or how value is created.

    And by the time the novelty wears off and people finally look around and ask, “Wait, what did we actually build?” it might be too late to catch up to the companies who never stopped asking that question in the first place.

    Distraction delays that reckoning. But it doesn’t prevent it.

    The crowd will eventually leave the Colosseum. The show always ends. What’s left is whatever you bothered to build while the noise was loudest.

    Leaving The Arena

    If the past year has felt like sitting in the front row of the AI Colosseum, the obvious question is: do you want to stay in your seat forever?

    Because leaving the arena doesn’t mean abandoning generative AI. It means stepping away from the noise long enough to remember why you showed up in the first place. It means holding both yourself and the technology providers to a higher standard.

    It means asking harder questions about how you’re using AI and who you’re trusting to shape your future.

    • What real problems could this technology help us solve?
    • Where are we spending time or money inefficiently?
    • Who owns the value we create with these tools?
    • Where are we giving away data, control, or customer relationships without realizing it?
    • What assumptions are these LLM providers baking into our products, our workflows, our culture?
    • What happens to our business if these providers change the rules, the pricing, or the access tomorrow?
    • Are we designing for leverage or locking ourselves into dependency?
    • What happens if these companies own both the means of production and the means of distribution?

    It means shifting the focus from what AI can do to what people need. From delight to durability. From spectacle to service. From passive adoption to active accountability.

    Because the real work isn’t viral. It doesn’t trend on social media. No one’s sharing screenshots of cleaner data pipelines or more intelligent internal tools. But that’s exactly where the lasting value gets created.

    The companies (and people) who figure that out will not only survive the hype cycle but also be the ones standing long after the crowd moves on to whatever comes next.

    The arena will always be there. The show will always go on. The next shiny demo will always drop.

    But at some point, you must decide whether you’re here to watch or here to build something that lasts and ask the uncomfortable questions that building requires.

  • Automation’s Hidden Effort

    Automation’s Hidden Effort

    In the early 2000s, as the dot-com bubble burst, I found myself without an assignment as a software development consultant. My firm, scrambling to keep people employed, placed me in an unexpected role: a hardware testing lab at a telecommunications company.


    The lab tested cable boxes and was the last line of defense before new devices and software were released to customers. These tests consisted of following steps in a script tracked in Microsoft Excel to validate different features and functionality and then marking the row with an “x” in the “Pass” or “Fail” column.

    A few days into the job, I noticed that, after completing a test script, some of my colleagues would painstakingly count the “x” marks in each column and then populate the summary at the end of the spreadsheet.

    “You know, Excel can do that for you, right?” I offered, only to be met with blank stares.

    “Watch.”

    I showed them how to use simple formulas to tally results and then added conditional formatting to highlight failed steps automatically. These small tweaks eliminated tedious manual work, freeing testers to focus on more valuable tasks.

    That small win led to a bigger challenge. My manager handed me an unopened box of equipment—an automated testing system that no one had set up.

    “You know how to write code,” he said. “See if you can do something with that.”

    Inside were a computer, a video capture card, an IR transmitter, and an automation suite for running scripts written in C. My first script followed the “happy path,” assuming everything worked perfectly. It ran smoothly—until it didn’t. When an IR signal was missed, the entire test derailed, failing step after step.

    To fix it, I added verification steps after every command. If the expected screen didn’t appear, the script would retry or report a failure. Over weeks of experimentation, I built a system that ran core regression tests automatically, flagged exceptions, and generated reports.
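
    The pattern itself was simple; most of the effort went into the checks around it. The original scripts were written in C against the vendor’s automation suite, but a minimal sketch of the same verify-and-retry loop, with hypothetical stand-ins for the hardware interfaces, looks something like this:

    ```python
    # A minimal sketch (in Python; the originals were C) of the verify-and-retry
    # pattern described above. send_ir_command() and capture_screen() are
    # hypothetical stand-ins for the IR transmitter and video-capture card.

    def send_ir_command(command):
        """Placeholder for the IR transmitter call."""
        pass

    def capture_screen():
        """Placeholder for the capture-card call; returns an ID for the current screen."""
        return "MAIN_MENU"

    def run_step(command, expected_screen, retries=3):
        """Send a command, check the resulting screen, and retry before failing."""
        for _ in range(retries):
            send_ir_command(command)
            if capture_screen() == expected_screen:
                return True
        return False

    def run_script(steps):
        """Run a regression script and collect pass/fail results for the report."""
        results = []
        for command, expected_screen in steps:
            passed = run_step(command, expected_screen)
            results.append((command, "Pass" if passed else "Fail"))
            if not passed:
                break  # a missed signal shouldn't cascade into every later step
        return results
    ```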

    When I showed my manager the result, he was amazed as he watched the screen. As if by magic, the cable box navigated to different screens and tested various actions. At the end of the demo, he was impressed and directed me to automate more tests.

    What he didn’t see in the demo was the effort behind the scenes—the constant tweaking, exception handling, and fine-tuning to account for the messy realities of real-world systems.

    The polished demo sent a simple message:

    Automation is here. No manual effort is needed.

    But that wasn’t the whole story. Automation, while transformative, is rarely as effortless as it appears.

    Operator: Automation’s New Chapter

    The lessons I learned in that testing lab feel eerily relevant today.

    In January 2025, OpenAI released Operator. According to OpenAI1:

    Operator is a research preview of an agent that can go to the web to perform tasks for you. It can automate various tasks—like filling out forms, booking travel, or even creating memes—by remotely interacting with a web browser much as a person would, via mouse clicks, scrolling, and typing.

    When I saw OpenAI’s announcement, I had déjà vu. Over 20 years ago, I built automation scripts to mimic how customers interacted with cable boxes—sending commands, verifying responses, and handling exceptions. It seemed simple in theory but was anything but in practice.

    Now, AI tools like Operator promise to navigate the web “just like a person,” and history is repeating itself. The demo makes automation look seamless, much like mine did years ago. The implicit message is the same:

    Automation is here. No manual effort is needed.

    But if my experience in test automation taught me anything, it’s that a smooth demo hides a much messier reality.

    The Hidden Complexity of Automation


    At a high level, Operator achieves something conceptually similar to what I built for the test lab—but with modern machine learning. Instead of writing scripts in C, it combines large language models with vision-based recognition to interpret web pages and perform actions. It’s a powerful advancement.

    However, the fundamental challenge remains: the real world is unpredictable.

    In my cable box testing days, the obstacles were largely technological. The environment was controlled, the navigation structure was fixed, and yet automation still required extensive validation steps, exception handling, and endless adjustments to account for inconsistencies.

    With Operator, the automation stack is more advanced, but the execution environment—the web—is far less predictable. Websites are inconsistent. Navigation is not standardized. Pages change layouts frequently, breaking automated workflows. Worse, many sites actively fight automation with CAPTCHAs2, anti-bot measures, and dynamic content loading. While automation tools like Operator try to work around these anti-bot measures, their effectiveness and ethics are still debatable.3,4

    The result is another flashy demo in a controlled environment, paired with “brittle and occasionally erratic”5 behavior in the wild.

    The problem isn’t the technology itself—it’s the assumption that automation is effortless.

    A Demo Is Not Reality

    Like my manager, who saw a smooth test automation demo and assumed we could apply it to every test, many will see the Operator demo and believe AI agents are ready to replace manual effort for every use case.


    The question isn’t whether Operator can automate tasks—it clearly can. But the real challenge isn’t innovation—it’s the misalignment between expectations and the realities of implementation.

    Real-world implementation is messy. Moving beyond controlled conditions, you run into exceptions, edge cases, and failure modes requiring human intervention. It isn’t clear if companies understand the investment required to make automation work in the real world. Without that effort, automation promises will remain just that—promises.

    Many companies don’t fail at automation because the tools don’t work—they fail because they get distracted by the illusion of effortless automation. Without investment in infrastructure, data, and disciplined execution, agents like Operator won’t just fail to deliver results—they’ll pull focus away from the work that matters.

    1. https://help.openai.com/en/articles/10421097-operator ↩︎
    2. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security feature used on websites to differentiate between human users and bots. It typically involves challenges like identifying distorted text, selecting specific objects in images, solving simple math problems, or checking a box (“I’m not a robot”). ↩︎
    3. https://www.verdict.co.uk/captcha-recaptcha-bot-detection-ethics/?cf-view ↩︎
    4. https://hackernoon.com/openais-operator-vs-captchas-whos-winning ↩︎
    5. https://www.nytimes.com/2025/02/01/technology/openai-operator-agent.html ↩︎

  • The White Whale

    The White Whale

    In Moby-Dick, Captain Ahab’s relentless pursuit of the white whale isn’t just a quest for revenge; it’s a cautionary tale about obsession. Ahab becomes so consumed by his singular goal that he ignores the needs of his crew, the dangers of the voyage, and the possibility that his mission might be misguided.

    This mirrors a common trap in problem-solving: becoming so fixated on a single solution—or even the idea of being the one to solve a problem—that we lose sight of the bigger picture. Instead of starting with a problem and exploring the best ways to address it, we often cling to a solution we’re attached to, even if it’s not the right fit or takes us away from solving the actual problem.

    A Cautionary Tale

    Call me Ishmael.1 – Herman Melville

    I once worked on a project to identify potential customer issues. The business provided the context and success metrics, and we were part of the team that set out to solve the problem.

    After we started, an executive on the project who knew the domain had a specific vision for how the solution should work and directed us on exactly what approach to use and how to implement it. While their approach seemed logical to them, it disregarded key best practices and alternative solutions that could have been more effective.

    We ran experiments to test both the executive’s approach and an alternative, using data to demonstrate how a different approach produced better results and would improve business outcomes.

    But the executive was undeterred. They shifted resources and dedicated teams to their solution, intent on making it work. We continued a separate effort in parallel, but without the resources or backing received by the other team.

    The Crew

    Like the crew of the Pequod, the teams working on the executive’s solution were initially excited about the attention and resources. They came up with branding and a concept that made for good presentations. The initial few months were spent creating an architecture and building data pipelines under the presumption that the solution would work. Each update gave a sense of progress and success as items were crossed off the checklist.

    That success, though, was based on output, not outcomes. Along the way, the business results weren’t there, and team members began to question the approach. However, even with these questions and the evidence that our approach was improving business outcomes, the hierarchical chain of command kept the crew from changing course.

    The Prophet

    In Moby-Dick, Captain Ahab smuggles Fedallah, an almost supernatural harpooner, onto the ship as part of a hidden crew. Fedallah is a mysterious figure who serves as Ahab’s personal prophet, foretelling Ahab’s fate.

    Looking for a prophet of their own, our executive brought in a consulting firm to see if they could get the project on track. The firm’s recommendations largely mirrored those of our team. However, like Fedallah’s prophecies, the recommendations were misinterpreted: what we saw as clear signals to change course, the executive read as signs their solution could still succeed, and they doubled down.

    The Alternate Mission

    Near the end of the novel, the captain of another vessel, the Rachel, pleads with Ahab to help him find his missing son, lost at sea. Ahab refuses because he is too consumed by his revenge. Ultimately, the obsession costs Ahab his life as well as those of his crew, with the exception of Ishmael, who was, ironically, rescued by the Rachel, the whaling ship that had earlier begged Ahab for help.

    We tried to bridge the gap between the two efforts for years, but the executive’s fixation on their solution made collaboration impossible. We made a strong case using data to change the mission from making their solution work to refocusing on the business goals and outcomes. Unfortunately, after many attempts, we weren’t able to convince them or shake their conviction that their solution should work. Too many claims had already been made, and too much had been invested to change course. The success of their solution was the only acceptable end of the journey, with that success always being just over the horizon.

    A Generative White Whale

    I’ve been thinking about this story lately because I see the same pattern happening with generative AI. Just as Captain Ahab chases Moby Dick, many companies chase technological solutions without fully understanding if those solutions will solve their real business problems.

    Since ChatGPT was launched to the public in 2022, there has been pressure across industries to deliver on generative AI use cases. The impressive speed at which users signed up and the ease at which ChatGPT could respond to questions gave the appearance of an easy implementation path.

    Globally, roadmaps were blown up and rebuilt with generative AI initiatives. Traditional intent classification and dialog flows were replaced with large language models in conversational AI and customer support projects. Retrieval-augmented generation changed search and summarization use cases.

    Then, the world tried to use it. Companies quickly learned that the models didn’t work out of the box and underestimated the amount of human oversight and iteration needed to get reliable, trustworthy results.2 They learned that their data wasn’t ready to be consumed by these models and underestimated the effort required to clean, label, and structure it for generative AI use cases. They learned about hallucinations, toxic and dangerous language in responses, and the need for guardrails.

    But the ship had sailed. The course had been set. Roadmaps represent unchangeable commitments3. The mission to hunt for generative AI success continued.

    What started with use cases with clear business outcomes inherited from the pre-generative AI days started to change. Rather than targeting problems that could significantly impact business goals, the focus shifted to finding problems that could be solved with generative AI. Companies had already invested too much time, money, and opportunity cost, and they needed to deliver something of value to justify the voyage.4,5

    It became an obsession.

    A white whale.

    Chasing the Right Whale

    I try all things, I achieve what I can.6 – Herman Melville

    That’s not to say there isn’t a place for generative AI or other technology as possible solutions. I’ve been working with AI for almost a decade and have seen how it can be truly powerful and transformative when applied to the right use case that aligns with business outcomes and solving customer or business problems.

    Experimenting with the technology can foster innovation and uncover new opportunities. However, when an organization shifts its focus away from solving its most critical business problems and toward delivering a solution or leveraging a specific technology for its own sake, that misalignment can put the entire mission at risk. The mission should always be the success of the business, not the technology.

    That’s the difference between chasing the white whale and chasing the right whale.

    Assess Your Mission

    The longer a project goes on, the more likely it is to veer off course. Little choices over time make small adjustments to direction that can eventually leave you far from the intended destination. The same thing can happen to the overall mission. Ahab started his journey hunting whales for resources and, while he was still technically hunting a whale, his mission changed to revenge. Had he taken the time to reassess his position and motivation, Moby-Dick would have had a less dramatic ending.

    For product and delivery teams, it’s a healthy practice to occasionally look up and evaluate the current position and trajectory. While there may be an argument for intuition in the beginning, as more information becomes available, it’s important to lean on data and critical thinking rather than intuition and feelings, which are more prone to bias.

    These steps can help guide that process.

    1. Reaffirm Business and Customer Priorities

    Align leadership around the most critical problems. Start by revisiting the company’s core objectives and defining success. Then, identify the biggest challenges facing the business and customers before considering solutions.

    2. Audit and Categorize Existing Projects

    Identify low-impact or misaligned projects. List all ongoing and planned AI initiatives, categorizing them based on:

    • Business impact (Does it solve a top-priority problem?)
    • Customer impact (Does it improve user experience or outcomes?)
    • Strategic alignment (Is it aligned with company goals, or is it just chasing trends?)

    An important factor here is articulating and measuring how the initiative impacts business and customer goals, rather than merely noting that it relates to one.

    For example, a common chatbot goal is to reduce support costs (business goal) by answering customer questions (customer goal) without the need to interact with a support agent. A project that uses generative AI to create more natural responses might look like it’s addressing a need, but it assumes that a more conversational style will increase adoption or improve outcomes. However, making responses more conversational doesn’t necessarily make them more helpful. If the chatbot still struggles with accurate issue resolution, customers will escalate to an agent anyway.
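
    To make the audit concrete, here is a minimal sketch of how the three criteria above might be scored. The fields, scale, and threshold are hypothetical; the point is only that each initiative gets judged on impact and alignment rather than on how fashionable the technology is.

    ```python
    # A rough sketch of the audit in step 2. The scoring scale and threshold
    # are hypothetical; any real rubric would be tailored to the business.

    from dataclasses import dataclass

    @dataclass
    class Initiative:
        name: str
        business_impact: int      # 1-5: does it solve a top-priority problem?
        customer_impact: int      # 1-5: does it improve user experience or outcomes?
        strategic_alignment: int  # 1-5: aligned with company goals, or chasing trends?

    def categorize(initiatives, keep_threshold=10):
        """Split initiatives into keep vs. revisit based on a simple combined score."""
        keep, revisit = [], []
        for item in initiatives:
            score = item.business_impact + item.customer_impact + item.strategic_alignment
            (keep if score >= keep_threshold else revisit).append((item.name, score))
        return keep, revisit

    keep, revisit = categorize([
        Initiative("More conversational chatbot responses", 2, 2, 3),
        Initiative("Automated billing-dispute resolution", 5, 4, 4),
    ])
    ```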

    3. Assess Generative AI’s Fit

    Ensure generative AI is a means to an end, not the goal itself.

    Paraphrasing one of my mantras I would use when a team approached me with an “AI problem” to solve:

    There are no (generative) AI problems. There are business and customer problems for which (generative) AI may be a possible solution.

    For each project, ask: Would this problem still be worth solving without generative AI?

    If a generative AI project has a low impact, determine if there’s a higher-priority problem where AI (or another solution) could create more value.

    4. Adjust the Roadmap with a Zero-Based Approach

    Rather than tweaking the existing roadmap, start from scratch by prioritizing projects based on impact, urgency, and feasibility.

    Reallocate resources from lower-value AI projects to initiatives that directly improve business and customer outcomes.

    5. Set Success Metrics and Kill Switches

    Define clear, measurable success criteria for every project. Establish a review cadence (e.g., every quarter) to assess whether projects deliver value. If a project fails to meet impact goals, have a predefined exit strategy to stop work and shift resources.

    This structured approach ensures that AI projects are evaluated critically, business needs drive technology decisions, and resources are focused on solving the most important problems—not just following trends.

    Conclusion

    The lesson of Moby-Dick is not just about obsession—it’s about losing sight of the true mission. Ahab’s relentless pursuit led to destruction because he refused to reassess his course, acknowledge new information, or accept that his goal was misguided. In business and technology, the same risk exists when companies prioritize solutions over problems and fixate on a specific technology rather than its actual impact.

    Generative AI holds incredible potential, but only when applied intentionally and strategically. The key is to stay grounded in business priorities, customer needs, and measurable outcomes—not just the pursuit of AI for AI’s sake. By regularly evaluating projects, questioning assumptions, and ensuring alignment with meaningful goals, teams can avoid chasing white whales and steer toward solutions that drive success.

    The difference between success and failure isn’t whether we chase a whale—it’s whether we’re chasing the right one.

    And I only am escaped alone to tell thee.7 – Herman Melville

    1. “Call me Ishmael.” This is one of the most famous opening lines in literature. It sets the tone for Ishmael’s role as the narrator and frames the novel as a personal account rather than just an epic sea tale. ↩︎
    2. https://www.cio.com/article/3608157/top-8-failings-in-delivering-value-with-generative-ai-and-how-to-overcome-them.html ↩︎
    3. Roadmaps are meant to be flexible and adjusted as priorities and opportunities change. ↩︎
    4. https://www.journalofaccountancy.com/issues/2025/feb/generative-ais-toughest-question-whats-it-worth.html ↩︎
    5. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025 ↩︎
    6. This quote from Ishmael reflects a spirit of perseverance and pragmatism, emphasizing the importance of effort and adaptability in the face of challenges. ↩︎
    7. The closing line of the novel echoes the biblical story of Job, in which a lone survivor brings news of disaster, underscoring the novel’s themes of fate, obsession, and destruction. ↩︎
  • The Democratization of (Everything)

    The Democratization of (Everything)

    A few years ago, I sat across the desk from a colleague, discussing their vision for a joint AI initiative. As a product manager, I pushed for clarity—what problem were we solving? What was the measurable outcome? What was the why behind this effort? Their response was simple: democratization. Just giving people access. No clear purpose, no defined impact—just the assumption that making something available would automatically lead to progress. That conversation stuck with me because it highlighted a fundamental flaw in how we think about democratizing technology.

    The term “democratizing,” as applied to technology, began to gain traction in the late 20th century, particularly during the rise of personal computing and the internet.

    Democratizing technology typically means making it accessible to a broader audience, often by reducing cost, simplifying interfaces, or removing barriers to entry. The goal is to empower more people to use the technology, fostering innovation, equality, and progress.

    Personal computers would “democratize” access to computing power by putting it in the hands of individuals rather than large institutions or corporations. Similarly, the Internet would “democratize” access to information by removing the gatekeepers from publishing and content distribution.

    By the 2010s, “democratizing” became a buzzword in tech—used to describe making advanced tools like big data, AI, and machine learning accessible to more people. What was once in the hands of domain experts was now in the hands of the masses.

    Today, the term is frequently used in discussions about generative AI and other advanced technologies. These tools are marketed as democratizing creativity, coding, and problem-solving by making complex capabilities accessible to non-experts.

    The word “democratization” resonates because it aligns with broader cultural values, signaling fairness, accessibility, empowerment, and progress. The technology industry loves grand narratives, and “democratizing” sounds more revolutionary than “making more accessible.” It suggests that technology can break down barriers and create opportunities for everyone.

    However, as we’ve seen, the reality is often more complicated, and the term can sometimes obscure the challenges and inequalities that persist. Democratization often benefits those who already have the resources and knowledge while leaving others behind.

    I’ve long thought that the word “democratization” was an interesting choice when applied to technology because it resembles the ideals of operating a democratic state.1 Both rely on the idea that giving people access will automatically lead to better outcomes, fairness, and participation. However, both involve the tension between accessibility and effective use, the gap between ideals and reality, and the complexities of ensuring equitable participation. In practice, access alone is not enough; people need education, understanding, and responsible engagement for the system to function effectively.

    Democratization ≠ Access

    I’ve encountered many leaders who equate democratization with access, as if the goal is to put the tools in people’s hands. However, accessing a tool doesn’t mean people know what to do with it or how to use it effectively. For example, just because people can access AI, big data, or generative tools doesn’t mean they know how to use them properly or interpret their outputs.

    Similarly, just because people have the right to vote doesn’t mean they fully understand policies, candidates, or the consequences of their choices.

    In technology, access is meaningful only when it drives specific outcomes, such as innovation, efficiency, or solving real-world problems. In a democratic state, access to voting and participation is not an end but a means to achieve broader goals, such as equitable representation, effective governance, and societal progress.

    Without a clear purpose, access risks becoming superficial, failing to address deeper systemic issues or deliver tangible improvements. In both cases, democratization must be guided by a vision beyond mere access to ensure it creates a meaningful, lasting impact.

    Democratization requires not just opening doors but also empowering individuals with the knowledge, understanding, and skills to walk through them meaningfully. Without this foundation, the promise of democratization remains incomplete.

    Democratization ≠ Equality

    The future is already here, it’s just not evenly distributed.

    William Gibson2

    The U.S. was built on democratic ideals. However, political elites, corporate interests, and media conglomerates shape much of the discourse because political engagement is skewed toward those with resources, time, and education. Underprivileged communities face barriers to participation.

    The same is true in technology. The wealthy and well-educated benefit more from new technology, while others struggle to adopt it and are left behind. AI and big data were meant to be open and empowering, but tech giants still control them, setting rules and limitations.

    Both systems struggle with the reality that equal access does not automatically lead to equal outcomes, as power dynamics and systemic inequalities persist. Even when technology is democratized, those with more resources or expertise often benefit disproportionately, widening existing inequalities.

    Bridging the gap between access and outcomes demands more than good intentions—it requires deliberate action to dismantle barriers, redistribute power, and ensure that everyone can benefit equitably. By focusing on education, structural reforms, and inclusive practices, both technology and democratic systems can move closer to fulfilling their promises of empowerment and equality.

    Democratization ≠ Expertise

    These are dangerous times. Never have so many people had so much access to so much knowledge and yet have been so resistant to learning anything.

    Thomas M. Nichols, The Death of Expertise

    Critical thinking is essential for both the democratization of technology and the functioning of a democratic state. In technology, access to AI, big data, and digital tools means little if people cannot critically evaluate information, recognize biases, or understand the implications of their actions. Misinformation, algorithmic manipulation, and overreliance on automation can distort reality, just as propaganda and political rhetoric can mislead voters in a democracy. Similarly, for a democratic state to thrive, citizens must question policies, evaluate candidates beyond slogans, and resist emotional or misleading narratives. 

    Without critical thinking, technology can be misused, and democratic processes can be manipulated, undermining the very ideals of empowerment and representation that democratization seeks to achieve. In both realms, fostering critical thinking is not just beneficial—it’s necessary for meaningful progress and equity.

    Addressing the lack of critical thinking in technology and humanity at large requires a holistic approach that combines education, systemic reforms, and cultural change. We can build a more informed, equitable, and resilient society by empowering individuals with the skills and tools to think critically and creating systems that reward thoughtful engagement. This is not a quick fix but a long-term investment in the health of technological and democratic systems.

    Democratization ≠ Universality

    Both technology and governance often operate under the assumption that uniform solutions can meet the diverse needs of individuals and communities. This can result in a mismatch between what is offered and what is actually required, highlighting the limits of a one-size-fits-all approach.

    In technology, for example, AI tools and software may be democratized to allow everyone access, but these tools often assume a certain level of expertise or familiarity with the technology. While they may work well for some users, others may find them difficult to navigate or unable to fully harness their capabilities. A tool designed for the general public might unintentionally alienate those who need a more tailored approach, leaving them frustrated or disengaged.

    Similarly, in governance, policies are often created with the idea that they will serve all citizens equally. However, a single national policy—whether on healthcare, education, or voting rights—can fail to account for the vastly different needs and circumstances of different communities. For example, universal healthcare policies may not address the specific healthcare access issues faced by rural or low-income populations, and standardized educational curriculums may not be effective for students with different learning needs or backgrounds. When solutions are not tailored to the unique realities of diverse groups, they risk reinforcing existing inequalities and failing to deliver meaningful results.

    The challenge, then, is finding a balance between providing access and ensuring that solutions are adaptable and responsive to the needs of different communities. Democratization doesn’t guarantee universal applicability, and it’s essential to recognize that true empowerment comes not just from providing access but from ensuring that access is meaningful and relevant to everyone, regardless of their context or capabilities. Without this careful consideration, democratization can become a frustrating experience that leaves many behind, ultimately hindering progress rather than fostering it.

    Conclusion

    The democratization of technology, much like democracy itself, is harder than it sounds. Providing access to tools like AI or big data is only the first step—it doesn’t guarantee that people know how to use them effectively or equitably. Without the necessary education, critical thinking, and support, access alone can be frustrating and lead to further division rather than empowerment.

    Just as democratic governance struggles with the assumption that one-size-fits-all policies can serve diverse communities, the same happens with technology. Tools designed to be universally accessible often fail to meet the unique needs of different users, leaving many behind. Real democratization requires not just opening doors but ensuring that everyone has the resources to walk through them meaningfully.

    Democratization is challenging in both technology and governance. It’s not just about giving people access; it’s about giving them the knowledge, understanding, and opportunity to use that access in ways that truly empower them.

    Until we get this right, the promise of democratization (and democracy) remains unfulfilled.

    Footnotes

    1. The United States of America is a representative democracy (or a democratic republic). ↩︎
    2. https://quoteinvestigator.com/2012/01/24/future-has-arrived/ ↩︎
  • It’s Agentic! (Boogie Woogie, Woogie)

    It’s Agentic! (Boogie Woogie, Woogie)

    You can’t see it
    It’s electric boogie woogie, woogie
    You gotta feel it
    It’s electric boogie woogie, woogie
    Ooooh, it’s shocking
    It’s electric boogie woogie, woogie

    Marcia Griffiths

    Just as the shininess of generative AI started to lose its polish from the reality of trying to use it, a new buzzword has re-entered the lexicon: agentic.

    The term “agentic” refers to AI systems that exhibit autonomy and goal-directed behavior. These systems go beyond simply generating responses or content based on input—they act as agents capable of making decisions, taking actions, and adapting to achieve specific objectives in dynamic environments.

    The concept of autonomous agents is not new. We’re bringing back the classics. In Artificial Intelligence: A Modern Approach (1995), Stuart Russell and Peter Norvig defined an agent as anything that perceives its environment and acts upon it.

    The idea that we can automate routine tasks to free up people to do more challenging, more creative tasks is noble and has been a rallying cry of computers and software since their inception. In his essay “As We May Think,” Vannevar Bush imagined a device called the “Memex” to help humans store, organize, and retrieve information efficiently, aiming to reduce mental drudgery and aid creativity.

    Computers were first used to automate repetitive, time-consuming industrial tasks, especially in manufacturing. Early pioneers recognized that this freed humans for more complex supervisory roles.

    As computers and software became more accessible, researchers explored “expert systems” that were designed to take over repetitive knowledge-based tasks to allow professionals to focus on more challenging problems.

    Today, generative AI tools like ChatGPT, GitHub Copilot, and others are attempting to fully realize this concept by automating tasks like writing, coding, design, and data analysis, allowing humans to concentrate on strategy, creativity, and innovation.

    But since the mainstream generative AI boom in 2022, when ChatGPT and GitHub Copilot became publicly available, and the expansion of competition in 2023 and 2024, the challenges of bringing this technology into the enterprise and driving meaningful value have dampened the early enthusiasm.

    That’s not to say that generative AI hasn’t been valuable. Many enterprises report productivity gains1,2 from early use cases like code generation, knowledge management, content generation, and marketing. However, several challenges have made it more difficult to scale and adopt generative AI more broadly.

    Hallucinations, where outputs are factually incorrect, fabricated, or nonsensical despite appearing plausible or confident, introduce risk and distrust into generative AI solutions. Toxic and harmful language, including encouragement of self-harm and suicide3,4, further exposes companies to risk and reduces their willingness to put customers directly in front of generative AI output.

    Introducing generative AI has also highlighted systemic internal issues. Knowledge management use cases were seen as straightforward, low-risk ways to leverage generative AI. For example, retrieval-augmented generation (RAG) allows users to get context-aware answers by combining AI-generated content with real-time retrieval of relevant information from sources like internal documents and databases.
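
    Stripped down, the flow is simple: retrieve the most relevant passages, put them in front of the model, and generate an answer grounded in that context. A minimal sketch might look like the following, where search_documents and generate are hypothetical stand-ins for a document index and an LLM call:

    ```python
    # A minimal sketch of a retrieval-augmented generation (RAG) flow.
    # search_documents() and generate() are hypothetical placeholders for a
    # document index and an LLM call, not any specific product's API.

    def search_documents(question, k=3):
        """Placeholder: return the k most relevant passages from internal docs."""
        return ["<passage 1>", "<passage 2>", "<passage 3>"]

    def generate(prompt):
        """Placeholder: call a large language model and return its response."""
        return "<model answer>"

    def answer(question):
        passages = search_documents(question)
        prompt = (
            "Answer the question using only the context below.\n\n"
            "Context:\n" + "\n".join(passages) +
            "\n\nQuestion: " + question
        )
        return generate(prompt)

    # The answer can only be as good as the passages retrieved: if the source
    # documents are missing, outdated, or contradictory, so is the response.
    ```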

    But what happens when that documentation is missing, outdated, or incorrect? What if the documentation is ambiguous or contradicts itself? What if the documentation is not in a format easily consumed by the RAG system? These generative AI solutions are not a technical solution to poor documentation and data. As the saying goes, “Garbage in. Garbage out.”

    While the challenges above are relevant to most generative AI implementations, one that applies specifically to agentic AI and agents relates to business processes.

    Agentic AI relies on well-defined tasks, workflows, goals, and a clear understanding of how processes operate to function effectively. If processes are unclear or undocumented and data is inconsistent, incomplete, or unavailable, the AI may struggle to execute tasks properly or optimize workflows.

    Generative AI and agents are not a technical solution to inefficient processes, just as AI isn’t a solution to bad data. Automating an inefficient process with an agent could reinforce and scale those inefficiencies, creating more bottlenecks or errors.

    Companies often prefer introducing new technology rather than spending resources updating outdated documentation or optimizing processes. These challenges highlight the risks of skipping those steps, especially when agents can execute transactions automatically and interact directly with customers. The possibilities of financial and reputational damage by a wayward agent are dangerously real.

    However, the lure of automation and operational efficiency is strong, and the landscape of offerings in the agentic AI space continues to grow. In 2024, the market size was estimated to be between $5B and $31B5,6. By 2032, the market is projected to reach approximately $48.5B7. Like a lottery jackpot, that dollar figure is causing companies to forget the struggles of implementing non-agentic generative AI in pursuit of a big payoff through automation. But what opportunities to improve business and customer outcomes without agents (or even AI) are missed while chasing that payoff?

    That’s not to say there isn’t a place for agentic AI. Similar to the gains seen from generative AI, the ecosystem of conversational, natural language, multi-model, and adaptive agents can be a powerful tool to solve complex problems and drive value. However, it will take time because work must be done before this value can be fully realized. To paraphrase a familiar line, the road to generative AI (and agentic AI) is clearer than ever before, but it’s much longer than we thought.

    Recommendations

    While we travel that road to the promised land, there are a few areas companies can focus on to prepare for an agentic world:

    Invest in documentation management and data quality. If previous AI projects failed due to poor documentation or data, an AI agent will likely meet the same fate as its predecessors. Companies may see incremental gains from this effort alone, because poor data and documentation are probably already creating inefficiencies. For example, poor documentation can leave support agents struggling to find answers, which drives up handling times.

    Invest in process optimization. The simpler a process is, the more likely it can be automated. I’ve found that companies want to keep their complex processes, which humans often find challenging to navigate, and think that automating them is a faster path to efficiency gains. The reality, however, is that complex processes have a long tail of edge cases that cause automation to break down, require extensive troubleshooting and tuning, and cancel out value.

    Simplify architectures and APIs. One aspect of autonomy for agentic AI is access to tools and functions that the agent can execute to act. An agent cannot effectively utilize complex APIs that wrap multiple functions and are not well-instrumented.
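
    As a hypothetical illustration of that difference, the sketch below contrasts a catch-all endpoint that is hard for an agent to call correctly with small, single-purpose functions it can use reliably. The function names, parameters, and values are made up for the example.

    ```python
    # Hypothetical API surfaces. Names, parameters, and values are illustrative.

    # Hard for an agent: one catch-all function whose behavior depends on a
    # loosely documented 'action' string and a free-form payload.
    def manage_account(action: str, payload: dict) -> dict:
        if action == "balance":
            return {"balance": 42.50}
        if action == "credit":
            return {"confirmation": "C-001"}
        return {"error": f"unknown action '{action}'"}

    # Easier for an agent: small, single-purpose tools with explicit parameters,
    # descriptive names, and predictable return types.
    def get_balance(account_id: str) -> float:
        return 42.50

    def issue_credit(account_id: str, amount: float, reason: str) -> str:
        """Return a confirmation ID for a credit applied to the account."""
        return "C-001"
    ```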

    Focus on risk mitigation. As mentioned above, generative AI and agentic AI introduce risks, including hallucinations, toxic and harmful language, and a lack of oversight and controls. If the best time to plant a tree is 20 years ago, the best time to implement guardrails and controls is before introducing agents. As business processes are reviewed, optimized, and documented, attention should be paid to identifying and securing vulnerable points.
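
    One simple pattern is to validate every proposed action before the agent is allowed to execute it. The sketch below shows a minimal pre-execution check against an allowlist and a spending limit; the actions, limit, and action format are hypothetical.

    ```python
    # Minimal guardrail sketch: approve or block a proposed agent action before
    # it executes. The allowlist, limit, and action format are hypothetical.
    ALLOWED_ACTIONS = {"send_email", "issue_credit"}
    MAX_CREDIT = 25.00  # credits above this go to a human for review

    def approve_action(action: str, params: dict) -> tuple[bool, str]:
        if action not in ALLOWED_ACTIONS:
            return False, f"action '{action}' is not on the allowlist"
        if action == "issue_credit" and params.get("amount", 0) > MAX_CREDIT:
            return False, "credit exceeds limit; route to a human for approval"
        return True, "approved"

    print(approve_action("issue_credit", {"amount": 100.00}))
    # (False, 'credit exceeds limit; route to a human for approval')
    ```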

    Identify small use cases with low risk and high value. It can be tempting to throw agentic AI at the biggest or most expensive problem to maximize the return on investment. However, starting with complex, high-stakes processes increases the likelihood of errors, inefficiencies, and stakeholder resistance. Instead, focus on areas where agentic AI can deliver quick wins. This approach allows teams to refine their understanding of the technology, build trust, and develop best practices before scaling to more critical or complex use cases.

    Consider non-agentic and non-AI solutions. Of all the recommendations, this one will likely generate the most resistance as companies are pushing towards the promise of generative AI and agentic AI solving all problems. Improving customer service or reducing call volume through better internal documentation or website search won’t generate enough buzz to show up in a news feed. There is so much pressure to find problems that can be solved with generative AI, forcing a solution-first, technology-first mindset. Ultimately, it should never be about the technology. It should be about the outcomes. It should be about improving our customers’ and employees’ lives and experiences and the value we bring to the business. Start with a problem or pain point and work backward. Consider all possible solutions, and choose the one most likely to succeed, even if it doesn’t feed the hype.

    Conclusion

    While the allure of agentic AI is undeniable, achieving its promised potential requires deliberate preparation, thoughtful execution, and a focus on foundational improvements.

    Companies must resist the urge to chase the hype and prioritize efforts that enhance data quality, streamline processes, and establish robust risk mitigation strategies.

    Starting with low-risk, high-value use cases can build momentum, trust, and a clear path to scalable adoption. At the same time, leaders should remain open to non-agentic and non-AI solutions that more effectively and sustainably address pain points.

    Ultimately, the goal should not be to implement the latest technology for its own sake but to deliver meaningful outcomes that enhance customer experiences, empower employees, and drive long-term business value.

    The journey toward agentic AI may be longer and more complex than we thought, but with the right approach, we can significantly increase the likelihood of realizing its full value.

    You can’t see it. You gotta feel it. Ooooh, it’s shocking. It’s agentic.

    Footnotes

    1. https://cloud.google.com/resources/roi-of-generative-ai
    2. https://www.wsj.com/articles/its-time-for-ai-to-start-making-money-for-businesses-can-it-b476c754
    3. https://gemini.google.com/share/6d141b742a13
    4. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
    5. https://www.emergenresearch.com/industry-report/agentic-artificial-intelligence-market
    6. The wide range in the 2024 figure is likely due to differing methodologies or market definitions.
    7. https://dataintelo.com/report/agentic-ai-market
  • The Humanity In Artificial Intelligence

    The Humanity In Artificial Intelligence

    I wrote this essay in 2017. When I restarted the blog, I removed the posts that had already been published. But after rereading this one, I realized that while the technology has advanced significantly since then, the sentiment still applies today.

    Dave, January 2025


    Algorithms, artificial intelligence, and machine learning are not new concepts. But they are finding new applications. Wherever there is data, engineers are building systems to make sense of that data. Wherever there is an opportunity for a machine to make a decision, engineers are building it. It could be for simple, low-risk decisions to free up a human to make a more complicated decision. Or it could be because there is too much data for a human to decide. Data-driven algorithms are making more decisions in many areas of our lives.

    Algorithms already decide what search results we see. They determine our driving routes or assign us the closest Lyft, and soon, they will enable self-driving cars and other autonomous vehicles. They’re matching jobs with candidates. They recommend the next movie you should watch or the product you should buy. They’re figuring out which houses to show you and whether you can pay the mortgage. The more data we feed them, the more they learn about us, and they are getting better at judging our mood and intention to predict our behavior.

    I’ve been thinking a lot about these systems lately. My son has epilepsy, and I’m working on a project to gauge the sentiment towards epilepsy on social media. I’m scraping epilepsy-related tweets from Twitter and feeding them to a sentiment analyzer. The system calculates a score representing whether an opinion is positive, negative, or neutral.
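
    The post doesn’t name the analyzer used, but as one concrete illustration, here is how VADER (bundled with NLTK) scores a short, made-up tweet. The compound score runs from -1 (most negative) to +1 (most positive).

    ```python
    # Illustrative sentiment scoring with VADER from NLTK. The analyzer used in
    # the original project is not named; the tweet text here is made up.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    analyzer = SentimentIntensityAnalyzer()

    tweet = "Another long night in the ER, but my son has been seizure-free for a month."
    scores = analyzer.polarity_scores(tweet)
    print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
    # By convention, compound > 0.05 reads as positive and < -0.05 as negative.
    ```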

    Companies already use sentiment analysis to understand their relationships with their customers. They analyze reviews and social media mentions to measure the effectiveness of an ad. They can inspect negative comments and find ways to improve a product. They can also see when a public relations incident turns against them.

    For the epilepsy project, my initial goal was to track sentiment over time. I wanted to see why people were using Twitter to discuss epilepsy. Were they sharing positive stories, or were they sharing hardships and challenges? I also wanted to know whether people responded more to positive or negative tweets.

    While the potential is there, the technology may not be quite ready. These systems aren’t perfect, and context and the complexities of human expression can confuse even humans. An immature algorithm may read “I [expletive] love epilepsy” as a positive sentiment when the sarcasm would be obvious to a person, and the effectiveness of any system built on top of these algorithms is limited by the algorithms themselves.

    I considered this as I compared two sentiment analyzers. They gave me different answers for tweets that expressed a negative sentiment. Of course, which was “right” could be subjective, but most reasonable people would have agreed that the tone of the text was negative.
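
    The post doesn’t name the two analyzers it compared, but the disagreement is easy to reproduce. The sketch below runs the same sarcastic, made-up sentence through VADER and TextBlob; words like “great” and “love” can pull either score positive even when the overall tone is clearly negative.

    ```python
    # Two off-the-shelf analyzers scoring the same sarcastic text. The analyzers
    # actually compared in the original project are not named; this pairing is
    # illustrative.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from textblob import TextBlob

    nltk.download("vader_lexicon", quiet=True)

    text = "Oh great, another seizure. I just love epilepsy."

    vader_compound = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]
    textblob_polarity = TextBlob(text).sentiment.polarity

    # Both scores range roughly from -1 (negative) to +1 (positive).
    print(f"VADER compound:    {vader_compound:+.2f}")
    print(f"TextBlob polarity: {textblob_polarity:+.2f}")
    ```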

    Like a child, a system sometimes gets a wrong answer because it hasn’t learned enough to know the right one. That was likely the case in my example: the answer came down to limitations in the algorithm. Still, imagine if I built my system to predict the mood of a patient using an immature algorithm. When the foundation is wrong, the house will crumble.

    But, also like a child, sometimes they give an answer because a parent taught them that answer. Whether through explicit coding choices or biased data sets, systems can “learn wrong”. After all, people created these systems—people, with their logic and ingenuity, but also their biases and flaws. A human told it that an answer was right or wrong. A human with a viewpoint. Or a human with an agenda.

    We create these systems with branches of code and then teach them which branch to follow. We let them learn and show enough proficiency, and then we trust them to keep getting better. We create new systems and give them more responsibility. But somewhere, back in the beginning, a fallible human wrote that first line of code. It is impossible for those actions not to influence every outcome.

    These systems will continue to be pervasive, reaching into new areas of our lives. We’ll continue to depend on and trust them because they make our lives easier. And because they get it right most of the time. The danger is assuming they always get it right and not questioning an answer that feels wrong. “The machine gave me the answer, so it must be true” is a dangerous statement, now more than ever.

    We dehumanize these programs because of the cold metal box in which they run. However, they are extensions of our humanity, and it’s important to remember their human origins.