AI Workshops

Explore our range of AI workshops designed to help you move forward with confidence, wherever you're headed next in your AI journey.


Workshops We Offer

If you want AI that’s genuinely useful, safe, and worth the effort, start here. NorthStar workshops are short, practical engagements designed to help teams make sense of a fast-moving, sometimes confusing landscape, and turn “we should do something with AI” into a clear, confident next step.

Each session is built to create real outcomes, not just conversation. We align leadership on intent, values, and boundaries, translate principles into delivery behaviours and guardrails, and surface the risks that organisations most often underestimate. Depending on where you are, we’ll help you identify and prioritise the right use cases, pressure-test vendors and approaches, and put in place the minimum governance and assurance so pilots don’t become uncontrolled experiments.

You’ll leave with tangible artefacts you can use immediately: a prioritised backlog, decision-ready options, a risk and control plan, and a practical path from exploration to implementation, tailored to your organisation’s context.

  • Most organisations don’t have a shortage of AI ideas; they have a shortage of good AI ideas that are safe, feasible, and worth doing. This workshop is a structured way to turn curiosity into a practical shortlist. We bring the right mix of people into the room (business, product, data/engineering, security, risk) and work through real workflows, pain points, and opportunities, not “shiny demos”. You’ll leave with a use-case backlog, clearly described problem statements, early assumptions, data needs, and a first pass at success measures. We then score and prioritise the best candidates through an Impact/Effort/Risk lens, so you can confidently select 1–2 pilots that can be delivered in weeks and measured properly.

    Typical outputs: use-case backlog, prioritisation scorecard, shortlist of pilots, early KPI ideas, and a clear “what next”.

  • LLM systems fail in new and sometimes surprising ways: prompt injection, indirect injection through documents, data exfiltration via retrieval, tool abuse in agentic workflows, poisoned knowledge bases, runaway costs, and supply chain compromises. This workshop applies a repeatable threat modelling method designed specifically for LLM-enabled systems, using our STOIC catalogue (Stolen, Tricked, Obstructed, Infected, Compromised). We start by mapping the actual system boundaries (users, prompts, RAG, tools/actions, identity, logs, vendors) and then identify threats, impacts, and controls at each trust boundary. The aim is not to produce a scary list, but a practical backlog of mitigations and tests: what to build, what to verify every release, what to monitor in production, and who owns each control.

    Typical outputs: LLM threat model, prioritised risk list, mitigation backlog, “before go-live” controls, and recommended monitoring and incident playbooks.

  • AI strategy and governance fail when leadership either treats AI as magic or as “just another IT tool”. This primer is a fast, plain-English session that equips executives and board members to ask better questions, spot weak proposals, and make confident decisions. We cover what modern GenAI can and cannot do, where the real risks sit (data, accountability, safety, regulation, reputation), what “good” looks like in a pilot, and what an organisation needs to scale responsibly. The focus is not hype or doom; it’s decision-making: the minimum governance and assurance you should expect, how to interpret vendor claims, and how to balance innovation with risk in a way that protects trust.

    Typical outputs: a shared vocabulary, board-level question set, recommended principles and red lines, and clear next steps for pilots, governance, and adoption.