Curiosity: Are People Experimenting Safely?
Curiosity, despite its feline unaliving reputation (thanks to YouTube/TikTok for this algospeak), is a good thing. In fact, curiosity is the whole point at the start of many endeavours. It’s how organisations discover the real use cases: individual employees oiling squeaky wheels, attacking annoying, repetitive work, and tackling the “why hasn’t anyone fixed this?” processes where GenAI can often genuinely help.
But curiosity without boundaries quickly becomes Shadow AI, the deadly enemy of both data security and budgets. By its very nature it’s concealed, and what you can’t see can quietly become a real problem for your data, your security, and your wallet.
Shadow AI isn’t just “a bit of harmless tinkering”, which is what it looks like on the surface. It’s a coping strategy that emerges when smart people feel blocked in their use of AI tooling, so they route around the organisation. They paste customer data into whatever tool is easiest, or most effective. They try five different copilots, three browser extensions, and two “free” AI note-takers to find what they need. They build unofficial workflows that quietly become critical, until something leaks, breaks, or gets audited. And then, of course, the hefty bills for these shadowy implementations start arriving.
A healthy early-stage GenAI environment has a few recognisable signals:
People are trying tools in the open and talking about what they’re learning, rather than whispering about it in DMs. The wins are celebrated, and so are the losses.
There’s a sanctioned, IT-supported sandbox with clear guidance on what you can and cannot do with organisational data (including the boring but important stuff: client information, source code, internal strategy docs, financials, HR data).
You’re capturing genuine use cases and scoring them for value and risk, rather than chasing hype, vendor kool-aid, or whichever demo impressed the board last week.
The organisation is actively learning: prompts and patterns are shared, good examples are celebrated, and “here’s what went wrong” is treated as useful feedback, rather than a career-limiting move. Knowledge and practice are built as a growing body and kept evergreen as technologies advance.
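Use-case scoring doesn’t need heavyweight tooling to get started. A minimal sketch of the value-versus-risk idea (the scales, weights, and example use cases here are purely illustrative assumptions, not from any specific framework) might look like:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: int  # Estimated business value, 1 (low) to 5 (high)
    risk: int   # Data/security risk, 1 (low) to 5 (high)

    @property
    def priority(self) -> float:
        # Simple heuristic: favour high value, penalise high risk.
        return self.value / self.risk

# Hypothetical backlog entries, for illustration only.
backlog = [
    UseCase("Summarise weekly status reports", value=3, risk=1),
    UseCase("Draft replies containing client data", value=4, risk=5),
    UseCase("Generate test data for internal tools", value=4, risk=2),
]

# Highest-priority use cases first.
for uc in sorted(backlog, key=lambda u: u.priority, reverse=True):
    print(f"{uc.priority:.1f}  {uc.name}")
```

Even a crude ratio like this makes the trade-off visible: the high-value-but-high-risk idea sinks to the bottom of the queue until its risk is addressed, rather than being smuggled in through the back door.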
But there are warning signs too, and they’re worth treating as early smoke, not something you wait to become a fire:
People are using GenAI in secret because they expect IT to say “no”, the classic “we’re not allowed to use this, but I think I can get away with it” behaviour.
Sensitive data is drifting into random tools, or simply too many tools for you to manage safely, and you can’t govern what you can’t even see.
The most active users are the least supported users. The power users are making up the rules as they go, and everyone else is either copying them blindly or opting out.
Entire teams are deterred from experimenting because policies are unclear, overly restrictive, or basically amount to “don’t do anything”. This is how you create Shadow AI through the back door, as you don’t prevent AI usage, you just prevent safe AI usage.
This is the line organisations need to tread. You want curiosity, but you need guardrails. Employee curiosity is your organisational energy source. Your job isn’t to crush it. Your job is to channel it. So the test question for leaders isn’t “are we using AI yet?” It’s: “Can people experiment safely, visibly, and supportably, without feeling like they have to go underground?”
If the answer is no, your readiness work starts here, and it should start quickly, because Shadow AI is coming for you. A recent MIT report found that 85% of employees use GenAI tooling regularly at work, despite only around 45% of organisations having official subscriptions. Let that one sink in.