
    Designing at the Jagged Edge: What AI’s Uneven Abilities Mean for L&D

    By Andrew Jackson on Tue, Nov 25, 2025

    In last week’s post, I wrote about the jagged edge — the uneven frontier between what AI is exceptionally good at and where it is still surprisingly weak.

    That insight came into focus for me while standing on the patio of the Stahl House in the Hollywood Hills, looking down at the sharp, uneven ridges, set against the flat, orderly grid of Los Angeles below.

    That contrast — smooth here, jagged there — feels like the perfect metaphor for AI right now. Not evenly distributed. Not predictable. Not consistent.

    The takeaway from last week’s post was pretty clear: success with AI isn’t about mastering the technology. It’s about recognising where it shines and where it stumbles.

    And that raises the next logical question.

     

    What Does the Jagged Edge Look Like for L&D?

    This is the part that matters. The practical, day-to-day reality of using AI in real L&D work.

    Because if you work in L&D, you are very likely navigating that jagged edge — whether through generative content requests, experiments with course outlines, or colleagues asking, “Can we use this AI thing to help with…?”

    Understanding where AI is strong (and where it is unreliable) is becoming a genuinely useful professional skill. So, here are some thoughts on what this looks like as we head towards the end of 2025.

     

    Where AI Really Helps L&D (The “Smooth” Side)

    This is where the ground is smooth and firm and flat. Where AI consistently behaves well and adds value without fuss.

    Summarising long, dense content


    Policies, procedures, reports, transcripts — AI is excellent at turning them into:

    • short summaries
    • digestible bullet points
    • suggested learning objectives
    • concise recaps for learners

    This isn’t just speculation. If you’ve used AI for this kind of task already, you’ll know it’s pretty reliable.

     

    Cleaning up messy SME input


    We all love our SMEs; but we also know that a massive, unstructured ‘brain dump’ is often their default mode. However, if you give AI:

    • rambling emails
    • half-formed notes
    • slides covered in dense bullet points

    it will make a good first pass at organising that content more clearly and logically. The output won’t be perfect. It will still need some extra human brain power to get it over the finish line; but it will get you to the end of the process much more quickly.

     

    Drafting outlines, ideas and examples


    AI is consistently strong at:

    • drafting module outlines
    • drafting simple scenarios
    • creating examples and non-examples
    • rewriting content at different reading levels

    It certainly doesn’t replace creativity and original thinking; but it can really help unblock those creative juices when you are struggling to come up with good ideas.

     

    Re-framing content for clarity


    “Explain this as if…”

    “Give me examples from…”

    “Rewrite this in plain English.”

    This is an area of real strength. In the right circumstances, AI can be just as good at giving you the last 10% (refining and polishing) as it is at getting you 90% of the way there.

     

    Where AI Still Struggles (The Jagged Edge)

    These are the uneven ridges — the places where footing is uncertain and relying on AI is definitely risky.

    Anything requiring organisational context


    AI doesn’t know:

    • your policies
    • your culture
    • the shortcuts your learners actually use
    • the unofficial steps that matter
    • the messy real-world constraints

    Its answers can sound right in theory, but can be very wrong in practical terms.

     

    Emotional nuance, judgement or interpersonal dynamics


    Handling conflict. Managing a difficult conversation. AI can mimic empathy but it often misses:

    • tone
    • boundaries
    • what not to say

    Use it in these kinds of contexts with extreme care.

     

    Multi-step reasoning


    AI still struggles with:

    • multi-stage logic
    • conditional pathways
    • decisions that depend on context
    • keeping its own reasoning consistent

    It can excel at steps 1–3 then completely misfire on step 4.

     

    Highly specialised expertise


    In specialised domains like medicine, engineering, law and compliance, plausible-sounding nonsense is especially dangerous.

     

    Explaining the “why” behind a rule


    AI is good at restating a rule. Much weaker at explaining the reasoning behind it. Learners need the “why” for authentic understanding — and AI often can’t supply it.

     

    Navigating the Terrain

    As noted in last week’s post, the jagged edge isn’t a barrier — it’s a map. In an L&D context, it really helps us understand where:

    • AI can accelerate design work, and
    • humans must remain firmly in the loop.

    The real work right now is not avoiding the jagged edge but learning to navigate along it with confidence.

    Topics: Learning Tech

    The Jagged Edge of AI: What It Means for Learning and Performance

    By Andrew Jackson on Tue, Nov 18, 2025

    A few weeks ago, during a trip to the US, I was lucky enough to find myself standing on the patio of the Stahl House in the Hollywood Hills — a glass-and-steel architectural icon perched high above Los Angeles.

    The view from the patio is breath-taking; and as a lover of mid-century minimalist architecture, I found the house equally stunning.

    From that patio vantage point, the other striking aspect was the contrast between the hills immediately below the patio and the distant cityscape. Way below, the city stretched out in a perfect grid: flat, predictable, orderly. But right in front of me, the landscape fell away in uneven ridges — sharp, fractured, unpredictable.


    I was reminded of the sharp, uneven edges of those hills the other day when I came across the term jagged edge, used to describe what the world of AI looks like at the moment.

    So, let’s explore the meaning of that term and what it might mean for L&D.

    Often, we talk about AI as though it’s spreading evenly across everything; but it isn’t. It’s advancing in fits and starts — powerful in one moment, incredibly clumsy in the next. Researchers have named this unevenness the jagged edge of AI.

    Understanding the Jagged Edge

    You may well have experienced this jagged edge first hand, when AI is strong and accurate in some areas but weak and unreliable in others. 

    It can draft a pretty good course outline in seconds. It can generate accurate code that works, yet still produce nonsense when asked to evaluate a moral dilemma.

    This unevenness exists because AI doesn’t understand; it predicts. It excels where data is abundant and patterns are clear — structured writing, number crunching, summarising, and retrieval. It falls short where context, empathy or tacit knowledge contribute significantly to the outcome.

     

    Evidence from the Research

    This metaphor for uneven progress is backed by evidence. In 2023, researchers from Harvard Business School and Boston Consulting Group ran a field experiment exploring how AI affected knowledge workers. They called their study Navigating the Jagged Technological Frontier.

    They found that workers using AI completed 12% more tasks and worked 25% faster when those tasks fell within AI’s frontier — areas where AI already performed well.

    But when tasks sat just beyond that frontier, the results flipped: performance declined.

    In other words, AI’s progress isn’t smooth or universal. It’s jagged. Full of peaks and troughs that shift depending on the nature of the work.

    The Human–AI Division of Labour

    At first glance, that jaggedness can seem like a flaw. But if you think of it more like a map, or a bunch of helpful road signs, it usefully highlights where humans and machines each add the most value.

    • AI excels at automating structure: generating first drafts, summarising information, categorising, and identifying patterns.
    • We excel at interpreting complexity: spotting anomalies, understanding tone, making ethical calls, and connecting dots in ambiguous situations.

    When we understand where the jagged edge runs, we can design smarter systems and workflows that let each party play to its strengths.

    What This Means for L&D

    For L&D professionals, that insight is particularly useful. It tells us the real opportunity isn’t just using AI generically. It’s about recognising where AI’s edge lies in our own jobs — what can be safely automated, and what still needs human input and decision-making.

    For example:

    • A learning designer can use AI to structure a skills framework (below the edge), but must still apply contextual awareness to fit it to their own organisation (beyond the edge).
    • A subject matter expert might use AI to generate examples or case studies (below the edge), but they’ll need to refine and validate accuracy, richness and complexity of those examples (beyond the edge).


    It’s going to take us all a while to get used to this ‘new frontier’; and we’ll need to be ready for the jagged edge to shift as AI technology evolves. But recognising and navigating the jagged edge confidently, within our own job roles, could massively transform our productivity and effectiveness.


    Reference:
    Dell’Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper 24-013.

    Topics: Learning Tech

    Design Thinking, Not Coding: The Real AI Skill for L&D

    By Andrew Jackson on Tue, Oct 28, 2025

    If you are an end user of AI, it all feels pretty simple. Ask your question or make your request. Get an answer or response. Keep going until you are done.

    But what if you need to start engaging with AI at a slightly more strategic or technical level? “It’s all a bit too complicated for me,” is not an untypical (or unreasonable) response. After all, once you start talking about APIs, prompts, Python or LLMs, you can understand why someone’s eyes might glaze over and their brain might shut down.

    To the uninitiated, it does start to sound like a bunch of tech gobbledygook dreamt up by a fairly unfriendly bunch of Martians!

    (By the way, if you are interested in dipping your toe into the more technical aspects of AI, my goal is to unpack the jargon and concepts in an L&D-friendly manner in my PerformaGo Diary and the associated Making Learning Stick blog.)

    But the good news in all this is that you don’t need to become a part-time programmer to get more than end-user value from AI. Because a lot of what’s going on behind the curtain with AI is as much about design logic as it is about programming. And design logic is something that most people in L&D are already very familiar and comfortable with.

     

    It’s About Design, Not Code

    So, what do I mean by design logic and how is it different from something like coding? Well, coding is about telling a system how to execute something, step-by-step. The detailed ‘how to’ instructions that run under the hood. Whereas design logic is about the why and the what.

    So, viewed from a strategic or purpose-driven perspective, what makes AI really useful is the ability of those deploying it in a work-based environment to clearly define its intent (the why) and the outcomes they want from it (the what).

    Which, of course, is not that different from thinking purposefully about a piece of work-based training.

     

    What This Means for L&D

    So, if we move beyond the anxiety many people feel about coding and tech speak, what we actually discover is that AI design and learning design share the same DNA.

    Both are about:

    • Creating flows and structures so things make sense.
    • Filtering out unnecessary complexity.
    • Anticipating what people will need, when they’ll need it, and how they’ll best engage with it.

    In reality, you are already thinking in systems, flows, and interactions — exactly the same kind of mental models that you need to use when thinking about deploying AI.

    So, working more strategically and purposefully with AI isn’t about shifting from instructional designer to techie expert. It’s about applying your existing learning design skills into a slightly different arena.

    You don’t need to learn a single line of Python code to work effectively with AI. But thinking like a learning designer will definitely help you get better results from it.

     

    AI Doesn’t Replace Design Thinking — It Rewards It

    So, AI isn’t something L&D needs to fear or surrender to. It’s something we can shape using our existing design skills and thinking. Because when you strip away the code, AI’s power depends on the same things that make great learning work: clear purpose, sound structure, and empathy for the learner.

    That’s why the future of L&D doesn’t belong to the coders — it belongs to the designers. People who can take what they already know about how people learn and apply it to how intelligent systems perform.

    Because design thinking isn’t just part of L&D’s DNA. It’s also our bridge to the future.

     

    Curious how this post took shape?
    This post was inspired by a story I shared in my Diary blog — “From Rodeo Drive to AI: The Three Languages Every Custom GPT Needs.”

    Spoiler alert. It is a bit more technical in content. But it’s the tale of a California trip that unexpectedly helped me understand how several bits of tech work together to make AI more effective.

    You can read that story here.

    Topics: Performance Support Learning Tech

    From Order-Taker to Performance Enabler - a Shift L&D Can't Ignore

    By Andrew Jackson on Tue, Oct 7, 2025

    We’ve all been there. “We need a course on…” or “Can you put together a workshop about…” And before you know it, you’re scoping out slides, activities, and maybe even booking rooms.

    It’s the classic “training order-taking” scenario. The problem is, fulfilling training orders doesn’t always lead to better performance. Sometimes it just leads to more training.

    The Limits of the Order-Taker Role

    When we accept training requests at face value, we miss the bigger question: what’s the real problem we’re trying to solve?

    Is performance lagging because people don’t know how to do something? Or is it because the process is broken, the tools are clunky, or expectations are unclear? In many cases, training isn’t the answer at all.

    That’s the trap of being an order-taker. It turns us into a busy L&D function, but it doesn’t necessarily make us impactful.

    The Shift: From Training to Performance

    To increase impact, L&D needs to move from being an order-taker to becoming a performance enabler.

    Sometimes, that means asking harder questions up front:

    • What’s really getting in the way of performance?
    • Is this a skills gap, a motivation gap, or an environmental issue?
    • If training is part of the answer, what else needs to be in place for it to work?

    It also means widening our toolkit. A course might sometimes be the right solution — but often it’s just one piece of a broader picture.

    What Performance Enablement Looks Like

    Performance enablement is about making it easier for people to succeed in their work. That might look like:

    • A job aid that reduces reliance on memory.
    • A checklist that ensures key steps aren’t missed.
    • A guided conversation that helps managers coach effectively.
    • A performance support tool that provides answers in the flow of work.

    These may not look as impressive as a full-blown course, but their impact can be huge. They get used in the moment of need. They reduce errors. They increase confidence. And they show the business that L&D is directly connected to performance outcomes.

    Why This Matters Now

    People don’t always have time to attend courses, however well designed. They need solutions that fit into their flow of work, not outside it.

    That’s why the order-taker mindset feels increasingly out of step. The future belongs to L&D teams who can enable performance: diagnosing real needs, designing for usability, and delivering support that works in the moment.

    How This Shapes PerformaGo

    This shift is also shaping the design of the AI-powered tool I’m currently working on, called PerformaGo. From the ground up, it’s being built around the principle of performance enablement.

    Instead of just asking, “What training can we deliver?”, it encourages us to consider, “What support will make performance easier?”

    The goal isn’t to replace courses. It’s to give those of us in L&D a way to extend learning into performance — so the business gets the impact it needs, and learners get the support they want.

    A Closing Thought

    The order-taker model has kept many in L&D busy for many decades. But if we want to stay relevant, we can’t just take orders. We need to enable performance, too.

    That’s the shift PerformaGo is designed to support. If you would like to stay connected and receive regular updates about what we are doing, you can register your interest.

    If you prefer a more personal, behind-the-scenes take on all this, check out The PerformaGo Diary.

    Topics: Performance Support Learning Tech Learning Impact

    An AI coach for every learner? What would you want it to say or do?

    By Andrew Jackson on Fri, Sep 19, 2025

    Indulge me for a second and let’s try a little thought experiment.

    Imagine your colleagues outside of L&D — managers, frontline staff, sales reps, new hires and so on — each had their own always-available, always-supportive AI performance coach. No set-up time, no searching for materials, just instant, intelligent support in the moment they need it.

    What would you want that coach to do for each of those colleagues?

    • Offer reminders on key tasks?

    • Help them prepare for or reflect on a difficult conversation or meeting?

    • Serve up step-by-step guidance during their first few weeks in a new role?

    This isn’t the stuff of fantasy anymore. AI is becoming sophisticated enough to do all of that — and more. But let's not get carried away.

    Your answers to the 'learner need' question matter more than any tech solution. Because performance support is about learner need first and tech delivery second — not the other way around.

    That's why the PerformaGo AI tool that we are developing starts with performer needs, not content or clever tech. It helps you to think through: What do learners struggle with most, post-training? What’s hard for them to remember? What gets skipped when things get busy? What barriers exist in the workplace that might be holding them back or discouraging them from applying their new skills?

    Then we make it super-easy for you to design lightweight, AI-powered assistants that:

    • Provide authentic and accurate help, advice and reminders

    • Reference the specific skill sets and content that an individual learner needs to complete a specific task 

    • Prompt the learner to plan, complete or reflect, based on real-world application and challenges

    This is about providing real-time learning and support that lives in the workplace — not inside the LMS.

    So, what would you want an AI coach to help your learners do? 

    Interested in finding out more about how PerformaGo could help your learners achieve more successful workplace application? Register your interest at pacificblue.ai to help us shape the tool and get early access.

    Topics: Performance Support Learning Tech Learning Impact

    You don't have to be a techie to make AI work in L&D

    By Andrew Jackson on Tue, Sep 16, 2025

    Many of us in L&D still hesitate about AI. The reasons vary enormously. For example:

    • “It’s not good enough yet.”
    • “It’s going to steal my job.”
    • “I’m not technical enough.”

    Last week’s post explored the flaw in that last reason. We saw that most of us in L&D already hold a hidden set of technical skills — more than we give ourselves credit for.

    This week, I want to dig into that idea a bit more, with a personal story and a reminder of another skill that many of us underestimate.

    You Don’t Need to Know the Detail to Benefit

    I’ve never been particularly strong in or had much of an interest in maths. Yet here I am, building a software tool. Even more surprising, as part of deepening my understanding of how AI works, I became quite fascinated by the elegant mathematics that sits behind GPTs.

    Luckily, I don’t need to do the maths to be in awe of it — or to benefit from what it enables. I can use GPTs without knowing the detail of semantic maps, tokens, or probability models. I simply need to know how to make the best use of what that ‘behind the scenes’ maths can produce.

    And I think that highlights an interesting and important parallel for us in L&D. We don’t have to understand every last underlying detail of the subject matter we turn into courses for our learners.

    Instead, we need to filter out irrelevant complexity and translate what remains into a learning experience that is relevant, authentic, and usable. This is the hidden L&D skill that I'd like to focus on here.

    The Hidden Skill: Turning Complexity Into Clarity

    Plenty of SMEs know their subject in depth. A few are even instinctively good instructional designers. But a key quality that separates a good instructional designer from many an SME is perspective. We design for the learner, not for the content. We can take a step back in order to:

    • see it from the learner’s point of view
    • strip away the content clutter
    • sequence ideas clearly and logically, and
    • build authentic practice out of all that.

    This, I believe, is one of our greatest professional assets.

    From Learning Design to Performance Support

    Not everyone reading this will have designed performance support materials or content before. If you haven’t, the good news is that the skills you use to design an effective course are very similar to the ones you’ll need to design useful performance support in the workplace.

    And here’s the even better news: performance support is the perfect way to combine those existing skills with the power of AI.

    Just as we filter complexity into clarity for learners during the course design process, we can use AI to help us produce usable workplace performance support scaffolding.

    Put those two forces together — L&D’s eye for learner relevance and AI’s knack for simplification — and you have a winning formula for extending learning into the flow of work.

    Harnessing AI without Being a Techie

    You don’t need to be a techie to make AI work in L&D. What you need is a recognition of the skills you have already — and a willingness to apply them in new contexts.

    The maths that underpins GPTs may remain out of reach for me, but the elegance of what it makes possible is not. The same is true for L&D. Our job isn’t to master the inner workings of AI, but to harness its power in ways that help people learn, perform, and succeed at work.

    If you’d like to see the personal story that sparked this reflection, take a look at this week’s post from my PerformaGo diary: In Awe of the Math I’ll Never Do 

    Topics: Instructional Design Learning Tech

    Why We in L&D Know More About Tech Than We Think

    By Andrew Jackson on Tue, Sep 9, 2025

    It’s easy to feel uncomfortable or distracted when yet another new technology comes along.

    Fresh concepts, unfamiliar vocabulary, and endless hype can leave us feeling overwhelmed or even anxious. Neuroscientists call this an amygdala response — the part of our brain that processes threats quickly kicks in, making us want to shut down or avoid the source of discomfort.

    That reaction is natural. But here’s the problem: it just reinforces a sense of inadequacy. A sense that we’re “behind” or “not technical enough” to keep up. And in the new world of AI unfolding before us that feeling can become overwhelming.

    I went to a fairly tech-focused, hands-on event run by Amazon Web Services in Manchester last week. And trust me, I had a couple of my own amygdala-induced ‘shall I just cut and run, now?’ moments. But I stuck it out. And I’m glad that I did, because the more I persevered, the more I realised something really important:

    Most of us in L&D are far more tech-savvy than we give ourselves credit for.

    Take a step back for a minute and think about all the different tech tools in your orbit and the different ways you might use them:

    • Instructional tools → You’ve already learned to use Articulate Storyline, Rise, Captivate, or similar platforms. That’s logic, sequencing, and (if you really get into Storyline in particular) using variables.

    • LMS administration → Uploading SCORM files, managing enrolments, troubleshooting access. That’s systems thinking and workflow design.

    • Digital collaboration tools → Microsoft Teams, SharePoint, Slack, Miro. That’s user experience awareness and adoption strategy.

    • Content authoring → Video editing, podcasting, slide design. That’s multimedia fluency and interface design.

    These aren’t minor skills. They are the very foundations of working with any new software, regardless of its unfamiliar associated concepts or jargon.

    Now granted, not everyone is going to be expert in every one of those areas or those tools. But joining the dots between what you know already and what's emerging is probably not as difficult as you think. The “new” world of AI doesn’t replace what you already know — it connects to it.

    Beyond my experience at last week’s event, I’ve been reminded of this in my wider journey. If you’ve been reading any of my recent posts on this blog or over on my new diary blog, you’ll know that I’m on a journey to build a performance support app, specifically for L&D.

    Moving into software development felt like jumping in at the deep end. But as I began exploring no-code tools and AI, I kept noticing familiar patterns: workflows that echoed the decision-making aspects of Storyline variables and interfaces that weren’t so different from the tools I’ve used for years.

    We’re already carrying a whole bunch of tech-related skills around with us. It’s almost like discovering a hidden skillset that we never realised we had.

     

    If you’d like to find out about how I worked through the excitement (and the fear) of stepping into a more tech-focused space, I share some of the behind-the-scenes story in this week’s Diary post.

    Topics: Learning Tech

    The Hidden Cost of Too Many Clicks

    By Andrew Jackson on Tue, Sep 2, 2025

    One of the things that we cover in the e-learning modules in our impact and instructional design training is the difference between user interface design (UID) and learner interface design. I first came across this distinction courtesy of all-round e-learning genius Michael Allen.

    Without getting into the weeds, it’s the difference between what you might call the basic high-level global interface settings you provide to your learners – navigation being the most obvious and probably the best example.

    As with all things UID, the aim is to make this as simple, consistent and intuitive as possible. However, Allen makes the point that within that global UID it’s likely you’ll have an entirely separate set of design principles that relate specifically to the task-focused practice activities that you are building into your e-learning.

    The interesting point here, is that your learner interface design (LID) might not be consistent with your user interface design. In fact, in extreme cases, it might effectively ‘go against’ your UID.

    In other words, for the purposes of making your practice activity authentic and a reflection of the real-world situation it is aiming to replicate, if that real-world activity is messy and complicated to complete, that may need to be reflected in the design of the activity. Meaning that from a learner perspective, it might not be simple, consistent and intuitive.

    Diving into that in a bit more detail is probably a good topic for another day. Today I really do want to focus on simple, consistent and intuitive UID. Because endless unnecessary clicks can be a quiet killer of learner engagement.

    Every extra click — the extra “Next” button, the redundant confirmation screen, the maze-like menu — acts as a micro-barrier. It slows learners down, interrupts flow, and chips away at motivation. It may not seem like much, but multiplied across an entire course or module, the hidden cost is high: lost learners, missed outcomes, and frustrated managers.

    E-Learning is a particularly problematic delivery medium in this respect; but the hidden cost of too many clicks or actions is not restricted to e-learning.

    Performance support provided in a digital/electronic format is no different. If we want support tools to be genuinely used in the flow of work, they need to feel like they fit perfectly in that flow. That means eliminating friction at every step. Every second counts when someone is trying to solve a problem or apply a skill on the job.

    The principle is simple: the easier the journey to the required solution, the more likely the learner is to stick with it — and the more likely performance will actually improve.

    If you’ve been following my posts recently, you’ll know that I’m very focused on supporting performance in the flow of work at the moment. So performance support tools built around well-defined, contextualised moments of need are very much top of mind.

    That’s why, as I continue building performance support-focused tools like PerformaGo, one of my guiding principles is simplicity. Because every unnecessary click isn’t just wasted effort — it’s a lost opportunity for impact.

    If you are interested in reading a little behind the scenes story of my obsession with UID, you can read the latest entry from my PerformaGo diary here.

    Topics: Learning Tech Learning Impact

    Lost in translation: why tech jargon drives me nuts

    By Andrew Jackson on Wed, Aug 27, 2025

    I’ve long been interested in technology and its potential to make life easier and more efficient. However, I’m definitely not a pointy-headed tech nerd. Tech jargon and user-unfriendly pieces of software drive me insane.

    One of my particular pet peeves is the ability of the tech world to make something simple and everyday sound much more complicated than it actually is.

    One of my favourite toe-curlers is ‘boot up’ or ‘reboot’. For the ordinary folk in the room, ‘start’, ‘switch on’ or ‘restart’ seems like a perfectly reasonable and well-understood alternative.

    And it’s amazing how, just when you thought you had heard all the weird and wonderful terms that the tech world can come up with, you discover a new one.

    I honestly don’t know if this has come into use recently with the arrival of AI or if it has been around for a while and I’d just never come across it before; but a newly discovered tech term for me is ‘ingest’ or ‘ingestion’.

    Now the standard usage, of course, would be in a biological or medical context.

    But in the best traditions of tech jargon, this slightly obscure verb/noun combination, which is definitely not used much in everyday speech, has become the standard way to describe how a file or doc is uploaded to, and chunked by, a GPT’s knowledge base.

    Granted, it is a term that most people will more or less understand; and, ‘yes’ there is some logic behind its use. In a biological/medical context, of course, it describes the full consumption/digestion of something into something else; and this is clearly the intent behind the tech usage.

    But really? Couldn’t we just say, the files have been uploaded and chunked?
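For readers curious what “uploaded and chunked” actually means in practice, here is a minimal sketch in Python. It assumes the simplest common approach, fixed-size chunks with a small overlap, so each piece fits comfortably in a model’s context window; the function name and the size/overlap values are illustrative, not taken from any particular product.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of roughly chunk_size characters.

    The overlap means the end of one chunk is repeated at the start of the
    next, so ideas that straddle a chunk boundary aren't cut in half.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# "Ingesting" a document is little more than this:
document = "Performance support puts answers in the flow of work. " * 40
pieces = chunk_text(document)
print(f"{len(pieces)} chunks produced")
```

In real systems the chunks are then converted into embeddings and stored for retrieval, but the “ingestion” step itself is essentially what you see above: upload, split, store.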

    Of course, the annoying thing about all of this jargon is that once it’s established, if you are working in a tech context in which it is used, at the very least you have to make the effort to understand it; and (shudder) eventually you’ll almost certainly find yourself using it.

    In short, sometimes you have no choice but to learn to speak the tech ‘lingo.’

    If you have read some of my recent posts, you’ll know that I’m in the early stages of building an AI-powered app for L&D called PerformaGo. I’m also keeping a diary-like blog which is documenting the journey ‘behind the scenes.’

    I’ve divided the diary into 4 broad sections, one of which is called, ‘Learning to speak API’. It’s all about understanding, unpacking and explaining the tech ‘lingo’ and concepts so they make sense to me and to ordinary readers like you.

    So, why not follow along if you are interested. And if you’ve got any particular tech-speak pet-peeves, I’d love to hear from you.

    Topics: Learning Tech
    3 min read

    The Gold Beneath The Gloss: Unlocking Software's True Potential

    By Andrew Jackson on Fri, Aug 22, 2025

    This isn’t the first time that I’ve made reference to what I call Storyline’s ‘secret weapon’. A Storyline ‘advanced’ feature that gets buried away in the background because it’s not very ‘sexy’ nor particularly easy to market. Yet learning to use it can transform the e-learning courses you build.

    In case you are wondering: I’m talking about Storyline variables.

    However, the purpose of today’s post is not to write extensively about Storyline variables. (If you are interested, I have written about Storyline variables here).

    What interests me today is the fact that lots of software/technologies often have some kind of ‘secret weapon’. In other words, a hidden something that a majority of users don’t know about, which is incredibly powerful or useful but somehow never gets the attention it deserves.

    Typically, the hidden something is conceptually abstract or technically a bit complex; so, it takes some care and effort to explain it clearly and easily. And it often takes some care and effort by software users to really reap the benefits of it.

    Which usually leaves it friendless. Tucked away in an obscure corner of the interface by UX and UI designers. Shunned by the marketers (just not ‘sexy’ enough). Glossed over by the technical writers and technology evangelists.

    And you can understand why. In the push to get a piece of software or a technology widely adopted as quickly as possible, the most popular and easiest to use features are, inevitably, going to get the most attention.

    And in lots of ways, that is a good, user-centred approach. After all, if the software or technology in question isn’t obviously solving some kind of problem or making it easier or possible to do something that was previously difficult or impossible to do, what’s the point?

    But this also misses an important point. Hiding that more difficult-to-explain feature probably results in inferior user output. Returning to the example I opened with, this is absolutely true in the case of Storyline variables. It’s not that you need to use them in every single course that you create. But never using them at all will absolutely reduce the creative potential and the effectiveness of your instructional design.

    So, why am I obsessing about all this at the moment? Well, you may already know that I’m working on a new software tool for L&D called PerformaGo. And the AI technology that this software is designed to help L&D folk like yourself unleash definitely has its own buried treasures. (Something I’ve been writing about here.) In essence, it’s about how you provide specific knowledge to your custom GPT so that you get reliable, accurate output when it’s being used by a learner.

    Which means that this particular ‘buried treasure’ is going to play a very significant role in making the PerformaGo tool a success. The question of how that specific knowledge is packaged up and accessed can’t be glossed over, in the hope that some users will find it and work out how to use it. It needs to be front and centre.

    And that will be a challenge. Because in some respects this element is a bit abstract. It will definitely require some careful thought around how it is presented and explained to users within the software interface itself and in any related ‘help’ content or software onboarding.

    But once you consider the benefits for L&D folk of getting this ‘buried treasure’ front and centre, the challenge of achieving that goal seems small by comparison.

    If any of this interests you, or you like the idea of becoming an early adopter or pilot user of PerformaGo, or you would just like to find out more about the tool, why not register your interest.

    And you can follow a more ‘behind-the-scenes’ take on the journey to build the PerformaGo software tool in my online diary.

    Topics: Performance Support Learning Tech