For decades, culture did not matter.
Or rather, we behaved as if it did not matter. I stupidly did, especially in the early days of Contegix. Culture was the invisible hand shaping organizations while leadership focused on strategy, revenue, and operations. We wrote mission statements and framed core values on lobby walls, then wondered why toxic behaviors festered in conference rooms. Culture defined itself within whatever guardrails we accidentally provided. We often did not like what emerged from that lack of intentionality. I remember waking up one day and questioning if I wanted to work at my own company.
AI usage inside companies is proceeding down a similar path.
History does not repeat itself. Human behavior does.
The Pattern We Keep Missing
I have watched this pattern unfold multiple times across my career. The technology changes. The human response does not.
In the mid-1990s, I did IT consulting through my own company (RIP: NetFactor Technologies LLC). One client had a T1 line and an SDSL connection to their business (serious bandwidth for that era), yet provided internet access to employees via a single computer in a locked room. The technology was present. The intentionality around its use was absent. Fear, distrust, and control substituted for governance.
When I worked at IBM Global Services, I witnessed the rollout of content filtering. Some employees lost their jobs, and their defense amounted to: “No one said I could not surf NSFW material (to use the modern term) on my lunch break.” They were technically correct. No one had said it. No one had defined what appropriate use looked like because no one had thought to ask the question until the answer became obvious and expensive.
The same pattern repeated with email. No one taught employees how to communicate professionally in writing at scale. We learned the hard way about tone, reply-all disasters, and the permanence of digital communication. To this day, I often start emails with a disclaimer: “Email is poor at conveying tone, and I am even worse. Please read this email with a tone of…” Policies followed embarrassment.
Social media arrived next. Employees suddenly became public representatives of their companies whether organizations intended it or not. The line between personal and professional blurred. I remember talking to students at my alma mater 15 years ago and explaining how we looked at potential hires’ social media. One student began crying. Most companies wrote social media policies after someone went viral for the wrong reasons.
BYOD and mobile devices followed the same trajectory. Companies resisted, then capitulated, then scrambled to secure devices that were already in the wild. The horse left the barn before we noticed the door was open.
Remote work fit the pattern perfectly. For years, it was treated as a perk to be rationed rather than a capability to be developed. When COVID forced intentionality overnight, the companies that thrived already had the muscle. The rest improvised or worse…
Each time, the sequence was identical: technology emerges, organizations fail to manage it intentionally, chaos ensues, reactive policies attempt cleanup.
The AI Version Is Already Underway
The data is no longer theoretical.
UpGuard reported that roughly 8 out of 10 employees, including nearly 90% of security professionals, use unauthorized AI tools, and regular use is highest among executives. Separately, Cybsafe reported that 38% of employees admitted sharing sensitive information with AI tools without their employer’s knowledge. Microsoft and LinkedIn reported that 78% of AI users bring their own AI tools to work.
Samsung learned this lesson publicly in 2023 when engineers leaked source code, internal meeting notes, and hardware-related data to ChatGPT in three separate incidents within a single month. The company’s response was to ban generative AI tools entirely, a reaction that addresses symptoms while ignoring the underlying governance vacuum.
The legal profession provides a particularly visible cautionary tale. One researcher has now identified over 850 court cases where generative AI produced hallucinated content: fabricated case citations, invented quotes, or legal arguments unsupported by the sources cited. Lawyers have faced sanctions, fines, and suspensions for submitting briefs containing nonexistent cases. Stanford research found hallucination rates between 58 and 88 percent in state-of-the-art legal AI models. Even tools marketed as “hallucination-free” produced fabricated content in one-fifth to one-third of responses.
The consequences extend beyond embarrassment: data leakage, regulatory violations, reputational damage, workforce departures, reduced innovation, and the emergence of data silos that fragment organizational knowledge. These are the cultural outcomes of ungoverned AI use.
Authentication Is Not Authorization
Companies have become reasonably good at authentication. They can verify who you are. Where they struggle is authorization: defining what you should be permitted to do once your identity is confirmed.
This distinction matters enormously for AI governance. Many organizations can tell you which employees have access to ChatGPT Enterprise or Microsoft Copilot. Far fewer can tell you what those employees should do with those tools, what data should never enter a prompt, or how outputs should be validated before use.
I recently chatted with a consultant who helped a client build a data lake/mesh to feed their AI system. The data lake sources and the lake itself enforced authentication and authorization. The AI system did not. Now the entire company knows the owner’s compensation and how the company pays for her vacation home.
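The failure in that story is easy to sketch. Below is a minimal, hypothetical example (the names Document, User, and retrieve_for_prompt are mine, not any vendor’s API) of the authorization check the AI layer skipped: before retrieved records ever reach a prompt, filter them against the requesting user’s entitlements, just as the data lake already does for direct queries.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an AI retrieval layer that re-checks authorization on
# every request, rather than trusting that authentication alone is enough.
# All names here are illustrative, not any particular product's API.

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: set[str] = field(default_factory=set)  # who may read it

@dataclass
class User:
    user_id: str
    roles: set[str]  # e.g. {"finance", "hr", "engineering"}

def retrieve_for_prompt(user: User, candidates: list[Document]) -> list[Document]:
    """Return only the documents this authenticated user is authorized to see.

    The data lake may already enforce this on direct queries; the point is that
    the AI layer must enforce the same rule, or every indexed record effectively
    becomes readable by every authenticated employee.
    """
    return [doc for doc in candidates if user.roles & doc.allowed_roles]

# Example: payroll detail stays out of a non-finance employee's prompt context.
docs = [
    Document("handbook", "PTO policy...", {"everyone"}),
    Document("owner-comp", "Owner compensation detail...", {"finance"}),
]
engineer = User("jdoe", {"engineering", "everyone"})
print([d.doc_id for d in retrieve_for_prompt(engineer, docs)])  # ['handbook']
```

The check itself is trivial. The governance work is deciding who owns the entitlements, keeping them accurate, and insisting the AI layer honors them, which is exactly the intentionality most organizations skip.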
AI governance failures will follow the same pattern. When an organization’s AI usage contradicts its stated values, the damage compounds beyond the immediate incident.
The Equifax Lesson Was Never Only Technical
This pattern is not limited to AI.
In March 2017, Equifax was alerted to a critical vulnerability in Apache Struts (CVE-2017-5638). Attackers later exploited it, and the company disclosed that unauthorized access occurred from May 13 through July 30, 2017.
Equifax had policies. What they lacked was the operational intentionality to execute those policies reliably. The failure was not merely technical. It contradicted their implied promise to protect consumer data. The reputational damage was existential because the incident revealed a values gap, not just a security gap.
Now add AI, which can leak data faster, at greater scale, with less visibility, and with employees who believe they are being productive.
Policy Sets the Rules. It Does Not Fix Behaviors.
A policy is not governance. It is the starting point for governance.
Policies define the rules of the game, but knowing the rules is not the same as knowing how to play. You cannot learn to ride a bike by reading a book on cycling. Teams need instruction. They need to see leadership model the behavior. They need practice with feedback.
Think of it like driving from New York City to San Francisco. Values and guardrails establish the destination and the boundaries. These rarely change once truly formalized. Policy handles the terrain as it changes: weather, road closures, construction, the unexpected realities of the journey. Proactively defining policy creates the flexibility to adapt without losing direction.
Organizations that wait for AI incidents to force policy creation will find themselves in the same position as those who wrote culture handbooks after toxic behaviors became entrenched. The remediation is always harder than prevention.
Three Questions Every Leader Should Answer About AI
For executives and leaders, the path forward starts with honest assessment:
1. What is our stated position on AI use, and does our actual behavior match it? If employees are using unauthorized tools, the silence is speaking louder than any written policy.
2. Do we know what data is entering AI systems, and who has authorized that data to leave our control? Authentication without authorization is governance theater.
3. How will we know when our AI usage contradicts our mission, vision, or values? The Equifax lesson is that technical failures become existential when they reveal values gaps.
For those bringing this conversation to leadership, the framing is simple: waiting is the riskiest choice. The pattern is clear. The technology is already inside the organization. The only question is whether you will govern it intentionally or clean up the mess later.
Culture taught us this lesson. The question is whether we learned it.