What is Computing 2.0?

Rethinking Digital Thinking in an AI Age

A few years ago, I saw a familiar post resurface online: an exhausted teacher despairing at the thought of teaching computing. It struck a chord then, and I wrote a detailed blog post. You can find the original, henceforth monikered 1.0, HERE.

I like to consider myself a reflective practitioner. So, in light of a curriculum review and our new book Learning to Lead Computing, I paused again. I continue to respond to complaints about computing with a passionate defence of the subject. It’s my favourite subject. It’s not about computers. It’s about thinking.

But the world has shifted. Drastically. However, as we say regularly in the book… DON’T PANIC! You can get your copy here: https://bit.ly/L2LCo

We’ve moved from conversations about whether we should use iPads in lessons to considering how generative AI might affect child safety, creativity, and even truth. The landscape of what we call “computing” has expanded. We’re not just preparing children to use technology; we’re preparing them to live in it.

So, what is computing now?

Surprisingly, Still Not About Computers

Let’s say it again, louder for the people at the back: computing is not about computers. It’s about thinking — strategically, logically, creatively. It’s about understanding how systems work, how data moves, how machines follow instructions (or don’t), and what all that means for us as humans.

One issue that has moved to the forefront is online reliability. We are told constantly, from all sides, not to trust a thing we see or read online. This is incredibly damaging. Children, whose minds are far more impressionable than ours, are hit hardest by this narrative. They are regularly told to accept opinion as truth. Where? On TikTok et al.

I am a regular social media user, and Allen and I are a testament to its ability to bring people from polar opposite places together over a shared interest. Yet I continue to see the same prevailing narrative: trust no one and trust nothing. A very popular Instagram account I follow, mitchckofficial, used to share the facts behind videos and pictures sent in to him. Now, almost every other video, picture, or piece of information is AI-generated, with increasing realism. This is the constant diet of the children in our care. It is our job not to tell them to trust nothing, but to teach them how to discern fact from fiction, and where to go when they want to verify information and its accuracy — which, sadly, isn’t another social media influencer.

For what it’s worth, I am not on TikTok; I feel my social media bandwidth ends at Instagram. Even there, though, I see young teachers clearly spending over an hour a day recording themselves walking into their classrooms and switching the lights on. Let’s call it what it is: you walked in, set up the camera, turned the lights off, and walked in again. How many times did you do it? I always imagine myself watching the young colleague setting this nonsense up.

Don’t get me wrong. I share my classroom practice. I share outcomes of work and ideas. However, it is almost always during or after the fact. There is a whole selection of TikTokers sharing their classes singing. If my son’s or daughter’s voice were among them, I would be uncomfortable.

Now, I apologise for the slight rant. However, I use this point to illustrate the new social playground that young people are growing up in — one in which they believe they must produce this façade of the world they live in. Computing is 100 percent the subject that will be responsible for ensuring children think about how they play in this playground.

So What Does Computing 2.0 Look Like?

Whether we’re writing instructions for brushing teeth or building a program to sort recycling, we’re teaching computational thinking. These are the same skills needed to evaluate the truth of an AI-generated video or to write a reliable prompt for ChatGPT.
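The recycling example above can be sketched in text-based code too. This is purely illustrative (Python here, and the materials and bin names are invented): the point is the decomposition — a rule lookup, then applying it item by item.

```python
# A minimal sketch of the "sort recycling" idea: decompose the task
# into a lookup table of rules, then apply it to each item in turn.
# The materials and bin names below are invented for illustration.

BINS = {
    "paper": "blue bin",
    "glass": "green bin",
    "plastic": "yellow bin",
}

def sort_item(material):
    """Return the bin for a material, or flag it for checking."""
    return BINS.get(material, "check with an adult")

def sort_recycling(items):
    """Sort every item, keeping the decision for each."""
    return {item: sort_item(item) for item in items}

print(sort_recycling(["paper", "glass", "banana peel"]))
```

Children can extend the table, argue over edge cases (is a pizza box "paper"?), and see that the program is only as good as its rules — the same lesson that applies to far bigger systems.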

The Core Hasn’t Changed, But the Context Has

The core principles — decomposition, abstraction, pattern recognition, algorithmic thinking — remain unchanged. But the way they’re applied now includes:

  • Prompt design for generative tools
  • Ethical thinking about algorithms and bias
  • Understanding the limits and capabilities of AI
  • Data privacy, ownership, copyright, and responsibility
  • Digital identity and safety in an AI-enhanced world
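Prompt design, for instance, is itself a decomposition exercise. A small sketch (entirely illustrative — no AI service is called, and the role, task, and constraints are made up) of assembling a structured prompt from its parts:

```python
# Prompt design as decomposition: break a request into role, task,
# and constraints, then assemble them. Illustrative only; nothing
# here talks to a real AI system.

def build_prompt(role, task, constraints):
    """Assemble a structured prompt from its component parts."""
    parts = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

prompt = build_prompt(
    "a patient Year 6 teacher",
    "explain what an algorithm is",
    ["use one everyday example", "no jargon"],
)
print(prompt)
```

Pupils comparing the output of a vague prompt against a decomposed one get a concrete feel for why structure matters.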

We’re no longer just guiding children to “be creators, not consumers.” They’re now curators, navigators, evaluators — especially as the content they consume may never have passed through human hands. Take a pause to let that last bit sink in… how much of the content consumed now has not passed through human hands?

Algorithmic Thinking in the Everyday

From choosing a route on a map app to designing a Scratch animation, the importance of clear, sequential thinking endures. But we need to push beyond “jam sandwich” activities and teach children what an algorithm really is: a solution strategy, a way of ordering ideas to solve problems.
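One way to show an algorithm as a solution strategy rather than a recipe: the map-app example above boils down to "compare the options, keep the best one seen so far." A tiny sketch (route names and times are invented) of that same comparison idea:

```python
# An algorithm as a solution strategy: repeatedly compare options
# and keep the best one seen so far. A map app applies the same
# idea at vastly greater scale. Routes and times are invented.

routes = {
    "via the high street": 14,
    "via the park": 11,
    "via the ring road": 18,
}

def quickest(route_times):
    """Return the name of the route with the shortest time."""
    best_name, best_time = None, None
    for name, minutes in route_times.items():
        if best_time is None or minutes < best_time:
            best_name, best_time = name, minutes
    return best_name

print(quickest(routes))  # via the park
```

The strategy — scan, compare, keep the best — transfers to countless problems; that transferability is what makes it an algorithm rather than a one-off set of steps.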

Absolutely, this starts at the core in EYFS and KS1. However, we need to progress beyond it.

In an AI age, this is vital. AI systems rely on algorithms to function. If children can’t explain how an algorithm works, they’ll never question why it works or whether it should. I know computer scientists and programmers, and they are almost exclusively writing code with ChatGPT; their jobs have become far more focused on user experience and on debugging the less finessed lines of code.

PRIMM Still Matters — So Does Purpose

I’m still a huge advocate of PRIMM. But we must now ensure the ‘Predict’ and ‘Investigate’ stages are not just about spotting errors but about spotting bias, limitations, and consequences. I agree this is tricky to do.
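One hypothetical way to take ‘Predict’ and ‘Investigate’ beyond error-spotting (my own illustration, not from PRIMM itself): give children a working but deliberately flawed program, ask them to predict its output, then investigate why one of its answers is wrong. The headlines and rule below are invented for the exercise.

```python
# A deliberately flawed "is this headline trustworthy?" checker for a
# PRIMM-style Predict/Investigate exercise. The bias is intentional:
# the rule treats exclamation marks as the only signal.

def looks_trustworthy(headline):
    """Naive rule: anything with an exclamation mark is untrustworthy."""
    return "!" not in headline

headlines = [
    "Scientists discover new planet",
    "You won't BELIEVE this trick!",
    "Local team wins cup final!",   # true, but flagged anyway
]

for h in headlines:
    print(h, "->", looks_trustworthy(h))
```

There is no bug to fix here — the program runs perfectly. The investigation is into its rule: the excited-but-true headline gets flagged, which opens the conversation about bias and limitation rather than syntax.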

Computing can no longer exist in isolation. It’s cross-curricular by necessity. I don’t say this lightly, as I am not a fan of a cross-curricular approach. I firmly believe that subjects and their disciplines need to be respected — I wrote on this HERE. However, think again about the new playground that our children are immersed in. We need to consider in each subject how this relates back to this incredible subject of computing:

  • In English: Can AI write a good story? What makes human creativity unique?
  • In PSHE: What happens when a deepfake spreads online?
  • In Science: Can AI simulate weather patterns, and should we trust the result?

Reframing Progression

We’re often obsessed with linear progression, but computing rarely works in straight lines. Now more than ever, we need to accept spiral learning — looping back to old concepts with new contexts.

Can children:

  • Revisit a concept in a more nuanced way?
  • Apply a foundational idea to a new tool?
  • Transfer understanding from a computer science task to a real-world ethical dilemma?

These, I believe, are the markers of a deeper computing understanding.

Elegance in an Era of AI

I still believe the best instruction is elegant. But now, elegance includes clarity in the face of complexity. A beautifully constructed algorithm is no longer just a sequence of clean blocks. It’s a model of thinking that can be used to test an AI, debug a chatbot, or design a system that puts people first.

Simple is not basic. It’s powerful.

Tools and Platforms, Now with a Purpose

Scratch Jr, MakeCode, CodeSpark, etc. — they’re still my go-tos. But our framing needs to shift. We’re not just teaching children to use these tools; we’re teaching them why, when, and how to choose tools. Children need to be able to ask what is the right tool for the job, and we need to model this approach by reflecting on our choices. At the end of a particular unit, can we simply ask children if there is a more appropriate tool, and why?

We also need to consider what these tools might be doing in return.

It’s not about the platform; it’s about the purpose.

AI Literacy is Now Information Literacy

We can’t avoid it. Children need to understand how generative AI works — not to build it, but to live alongside it. Some children are already using it regularly, either consciously or passively.

So what do they need to understand? I believe that includes:

  • Knowing what data trains it
  • Recognising its limitations
  • Interpreting AI responses critically
  • Understanding their own role as content creators and digital citizens

In short, AI literacy is not optional. It’s the new digital literacy.

So What Next?

The next frontier for computing education isn’t just technical. It’s ethical, philosophical, and deeply human.

We need to stop asking, “How do we teach children to code?” and start asking, “How do we teach children to think, evaluate, and thrive in a digital world that thinks with them… and sadly, for them?”

This is Computing 2.0.

Let’s teach it that way. Always open to questions and criticisms.

Karl (MRMICT)