
    In last week’s post, I wrote about the jagged edge — the uneven frontier between what AI is exceptionally good at and where it is still surprisingly weak.

    That insight came into focus for me while standing on the patio of the Stahl House in the Hollywood Hills, looking down at the sharp, uneven ridges, set against the flat, orderly grid of Los Angeles below.

    That contrast — smooth here, jagged there — feels like the perfect metaphor for AI right now. Not evenly distributed. Not predictable. Not consistent.

    The takeaway from last week’s post was pretty clear: success with AI isn’t about mastering the technology. It’s about recognising where it shines and where it stumbles.

    And that raises the next logical question.

     

    What Does the Jagged Edge Look Like for L&D?

    This is the part that matters. The practical, day-to-day reality of using AI in real L&D work.

    Because if you work in L&D, you are very likely navigating that jagged edge — whether through generative content requests, experiments with course outlines, or colleagues asking, “Can we use this AI thing to help with…?”

    Understanding where AI is strong (and where it is unreliable) is becoming a genuinely useful professional skill. So, here are some thoughts on what this looks like as we head towards the end of 2025.

     

    Where AI Really Helps L&D (The “Smooth” Side)

    This is where the ground is smooth and firm and flat. Where AI consistently behaves well and adds value without fuss.

    Summarising long, dense content


    Policies, procedures, reports, transcripts — AI is excellent at turning them into:

    • short summaries
    • digestible bullet points
    • suggested learning objectives
    • concise recaps for learners

    This isn’t just speculation. If you’ve used AI for this kind of task already, you’ll know it’s pretty reliable.

     

    Cleaning up messy SME input


    We all love our SMEs, but we also know that a massive, unstructured ‘brain dump’ is often their default mode. However, if you give AI:

    • rambling emails
    • half-formed notes
    • slides covered in dense bullet points

    it will make a good first pass at organising that content more clearly and logically. The output won’t be perfect. It will still need some extra human brain power to get it over the finish line, but it will get you to the end of the process much more quickly.

     

    Drafting outlines, ideas and examples


    AI is consistently strong at:

    • drafting module outlines
    • drafting simple scenarios
    • creating examples and non-examples
    • rewriting content at different reading levels

    It certainly doesn’t replace creativity and original thinking, but it can really help get the creative juices flowing when you are struggling to come up with good ideas.

     

    Re-framing content for clarity


    “Explain this as if…”

    “Give me examples from…”

    “Rewrite this in plain English.”

    This is an area of real strength. In the right circumstances, AI can be just as good at giving you the last 10% (refining and polishing) as it is at getting you 90% of the way there.

     

    Where AI Still Struggles (The Jagged Edge)

    These are the uneven ridges — the places where footing is uncertain and relying on AI is definitely risky.

    Anything requiring organisational context


    AI doesn’t know:

    • your policies
    • your culture
    • the shortcuts your learners actually use
    • the unofficial steps that matter
    • the messy real-world constraints

    Its answers can sound right in theory, but can be very wrong in practical terms.

     

    Emotional nuance, judgement or interpersonal dynamics


    Handling conflict. Managing a difficult conversation. AI can mimic empathy, but it often misses:

    • tone
    • boundaries
    • what not to say

    Use it in these kinds of contexts with extreme care.

     

    Multi-step reasoning


    AI still struggles with:

    • multi-stage logic
    • conditional pathways
    • decisions that depend on context
    • keeping its own reasoning consistent

    It can excel at steps 1–3, then completely misfire on step 4.

     

    Highly specialised expertise


    In specialised domains like medicine, engineering, law and compliance, plausible-sounding nonsense is especially dangerous.

     

    Explaining the “why” behind a rule


    AI is good at restating a rule. Much weaker at explaining the reasoning behind it. Learners need the “why” for authentic understanding — and AI often can’t supply it.

     

    Navigating the Terrain

    As noted in last week’s post, the jagged edge isn’t a barrier — it’s a map. In an L&D context, it really helps us understand where:

    • AI can accelerate design work, and
    • humans must remain firmly in the loop.

    The real work right now is not avoiding the jagged edge but learning to navigate along it with confidence.

    Written by Andrew Jackson

    Hi, I’m Andrew Jackson, co-founder of Pacific Blue Solutions and founder of Pacific Blue AI. I’ve spent almost 20 years helping L&D teams design learning that actually changes what people do at work. Through this Learning Academy blog, I share practical, evidence-based ideas for designing better learning and performance support in a changing, AI-enabled world.