The Mirror Between Us
A reflection on alignment, accountability, and the quiet gaps inside most Data & AI teams
I. When the Distribution Arrives
Over the past few months, I’ve been in conversations with teams across industries - insurance, manufacturing, retail, and others - each carrying different responsibilities, but almost all confronting the same pressure: AI is no longer optional.
In most of these cases, the interest comes from the top. A CxO comes back from a board meeting, a global strategy summit, or a leadership offsite where AI was the main thread running through every conversation, whether named directly or not. They return with urgency in their voice, a need to move quickly, and a simple directive: “We need to integrate AI into what we do.”
What follows is rarely simple. Suddenly, data leaders are asked to deliver faster, to transform their programs, to expand their responsibilities.
Governance becomes AI Governance.
Analytics is told to “think beyond dashboards.”
Engineers are expected to prep infrastructure for models they’ve never worked with.
And product owners, many of whom have only recently been given that title, are now asked to connect it all.
In the middle of all this movement, I often ask one question - not because I want to provoke anyone, but because it’s foundational.
"What is your Data Team?"
And this is where things start to wobble. Not because people don’t care or because they’re underqualified.
But because what comes back is usually a long explanation that says very little. Titles are listed, tool stacks are mentioned, org charts are referenced - but the clarity stops there. Roles are unclear, collaboration is assumed. The idea of the “Data Team” is there in name, but not in practice.
And when you start with a team that’s not fully aligned internally, any external demand - especially one as cross-cutting and intense as the AI hype - amplifies what was already fragile.
II. What’s Really There (and What Isn’t)
Beneath the structure charts and capability decks, the day-to-day reality of many data teams looks quite different from how it's presented externally.
You’ll find a data analyst building reports without knowing how the backend is maintained, or what changes are coming. An engineer optimizing infrastructure without understanding what the business actually expects from the data. A scientist training models with assumptions about quality that no one has validated. A data steward carefully crafting definitions that never find their way into real conversations. And a product owner, newly tasked with ensuring value, left trying to translate between people who’ve never been asked to align in the first place.
Each of them may be good at what they do. But they don’t really know how the others work, what pressures they’re under, or what purpose ties them all together.
And that’s the point where things start to break - not because of failure or lack of intelligence, but because of disconnection.
The team exists on paper, but not in motion.
People show up to meetings with good intentions. Updates are shared. Jira tickets are moved. But no one is actually building shared understanding, and that becomes clear the moment something unexpected happens.
A data product request shifts mid-sprint.
A metric changes definition.
A stakeholder disagrees with the framing of the insight.
That’s when it becomes obvious whether the team knows how to respond together, or whether everyone quietly retreats to their function.
Too often, it’s the latter. And when that happens repeatedly, people don’t speak up - they pull back. They protect their work. They keep their focus narrow. And the more they specialize, the more isolated they become.
And all of that happens long before any AI initiative starts. It’s just that AI brings the consequences forward. It makes them harder to ignore.
Because AI depends on integrated thinking. On trust across roles. On continuous context. And it has no patience for teams that haven’t built those muscles yet.
III. The Purpose That Got Lost
Ask a data team what success looks like, and you’ll often hear sensible, familiar answers:
a model that performs well in production,
a pipeline that runs reliably,
a dashboard that answers questions quickly, or
a glossary that’s finally documented and published.
None of these are wrong, but they’re also not complete.
What’s missing in many of these answers is not technical capability or intention; to me, it’s coherence.
It’s the sense that all these pieces, however functional on their own, were built with a shared understanding of why they matter and how they connect to something beyond the immediate task. That connective tissue - the link between roles, outcomes, and broader goals - is what separates a functional team from a meaningful one.
And it’s precisely here, in the absence of that thread, that most AI initiatives begin to struggle. Because while many data projects can tolerate siloed thinking or loose coordination, AI cannot.
It cuts across functions by design. It demands continuous feedback, cross-role understanding, and mutual accountability. Without these things, even the best AI plans are undermined by quiet misalignments that no model can fix.
IV. The Mirror We Avoid
When things begin to slow down - when a project slips, adoption plateaus, or results fall short of expectations - the first instinct is rarely to look at ourselves. Instead, the explanation often moves outward.
The business wasn’t aligned.
The users weren’t ready.
The data was messy.
The stakeholders were vague.
I don’t feel comfortable with these statements. As much as there is truth in them as indications of Data & AI literacy and maturity, I question whether we’re simply telling ourselves the conditions weren’t right.
But what if the gap isn’t between us and the business, but between us and each other?
What if the disconnect isn’t only about data literacy, but also about collaboration maturity?
What if, when governance is ignored, when engineering pushes back, when scientists drift from business goals, it’s not malice or ignorance, but the byproduct of a structure that never really supported shared understanding to begin with?
Because if someone outside the team - a client, a board member, a critical stakeholder - were to quietly observe how we work together when things are uncertain, how we resolve ambiguity, how we hand over responsibility, how we recover from mistakes, what would they see?
Would they see a group that functions with trust, rhythm, and awareness?
Or would they see well-meaning individuals improvising their way through a system no one actually designed?
V. The Questions We Can No Longer Ignore
This is not about finding someone to blame. It’s about finally naming what we’ve avoided. The questions are basic, but rarely answered in practice:
Do we know what excellence looks like in each other’s roles, or have we stopped short at job titles and assumptions?
Have we clearly defined how work moves across roles, especially when timelines shrink and pressure rises?
Do we expect trust to form naturally, or have we invested time in building it deliberately?
Have we ever practiced collaboration under real constraints, or are we hoping we’ll figure it out when it matters most?
Do we recognize where our assumptions about others begin and where our accountability ends?
Because if we haven’t asked these questions - or worse, if we’ve asked them and left them unanswered - then our problems are not external. They’re structural. And they’re ours to solve.
VI. The Journey Forward, If You’re Willing to Start It
There is a version of every team that works differently. I’ve seen it, not in theory, but in rooms where people stop defending their area of responsibility and start understanding each other’s context.
It begins with small, deliberate shifts.
With data product owners asking engineers how sustainable a design is, not just how fast it can be built.
With stewards explaining why terms matter, not just what they are.
With scientists adjusting models based on how the data is produced, not just how it’s shaped.
With analysts revisiting metrics based on what decisions they’re meant to support, not just what’s technically possible.
And from there, it deepens through real practice. Not theoretical workshops or empty rituals, but actual working sessions where teams rehearse the full life cycle of a use case - from discovery to deployment - and ask, at every stage: What do we expect of each other here? What happens when things break? Who needs to know what, and when?
The goal is to make the expectations and mutual needs in collaboration visible, because once those expectations are visible, they can be challenged. And once they’re challenged, they can be improved. And once they’re improved, they can be trusted.
That trust, in time, becomes reliability.
And reliability is what makes pressure bearable.
VII. The First Step Is the Hardest - and the Most Necessary
So before you commit to the next initiative, before you layer another framework or align with yet another mandate, ask a quieter question:
Do we actually know how to work together - not in principle, but in practice?
If the answer isn’t a clear yes, if there’s uncertainty, if roles are blurry, if collaboration feels fragile, then start there.
Because the truth is, no AI roadmap, no operating model, no investment in tooling will compensate for a team that doesn’t trust itself.
But if you are willing to face that truth - if you pause long enough to ask the questions that are harder than they appear - then you’ll find something far more valuable than clarity: you’ll find readiness.
Not the kind that comes from templates or toolkits, but the kind that comes from people who understand one another well enough to move forward, together.
And if by the end of this reflection you recognize a little too much of your own team in these words - then good. That recognition is a beginning, not a solution. But a real, honest beginning.
Find me on LinkedIn
Book a Call