
AI in school: from exposure to evidence-based professional judgment

AI-generated image

How can teachers justify the use of AI?

Should teachers be expected to exercise evidence-based digital judgment when deciding how AI is used?

Generative AI has passed the point of being an optional novelty. Many students already interact with AI daily, whether schools plan for it or not. This reality creates a professional dilemma: teachers must make continuous decisions about which tools to allow, which tasks to redesign, and which uses to restrict.

The key risk in this moment is not “AI in the classroom.” The key risk is unjustified AI in the classroom.

What we need in school now is a professional framework that ensures AI decisions are made with research-based argumentation and digital judgment rather than habit, excitement, or external pressure.


This framing is timely because generative AI has become a routine part of students’ everyday lives. At the same time, the pace of technological change outstrips schools’ ability to respond with stable norms, shared practices, and robust policy. In this context, “knowing the tool” is no longer sufficient. Teachers are now required to explain why a tool should be used for a specific learning purpose, and equally why a tool should not be used in particular contexts.

The Utdanningsnytt article contributes a research-informed warning: not all students learn from using AI. In particular, large language models are qualitatively different from earlier educational technologies because they are probabilistic systems that can generate credible-sounding but incorrect output. Therefore, educational AI is not a neutral add-on; it requires careful pedagogical and ethical design.

The article highlights the example of Asker municipality, where teachers and researchers have developed subject-specific chatbots by adding a curated knowledge layer aligned with curricula and course texts. This approach aims to reduce hallucinations and improve instructional relevance. The implicit argument is that learning benefits are conditional on deliberate design, not guaranteed by exposure to the technology.

AI use in school is not primarily a technical issue. It is a professional judgment issue, requiring structured reasoning, ethical analysis, and research-informed justification.
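In practice, a "curated knowledge layer" of this kind usually means that the chatbot answers against teacher-selected texts rather than the open model alone. The sketch below only illustrates that general pattern; the snippets, the toy word-overlap retrieval step, and the prompt wording are all hypothetical and are not the actual Asker implementation.

```python
# Illustrative sketch only: a minimal "curated knowledge layer" placed in front
# of a language model. The curriculum snippets, scoring rule, and prompt wording
# are invented for illustration, not taken from the Asker chatbots.

CURRICULUM_SNIPPETS = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "The Norwegian constitution was adopted at Eidsvoll in 1814.",
    "A linear function has the form y = ax + b, where a is the slope.",
]

def retrieve(question: str, snippets: list[str], top_k: int = 2) -> list[str]:
    """Rank snippets by simple word overlap with the question. This is a
    stand-in for the curated retrieval step that keeps answers close to the
    syllabus instead of the open web."""
    q_words = set(question.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Wrap the student's question in curriculum context, with an explicit
    instruction to admit uncertainty rather than guess."""
    context = "\n".join(retrieve(question, CURRICULUM_SNIPPETS))
    return (
        "Answer using ONLY the curriculum excerpts below. "
        "If they do not cover the question, say so and refer the student to the teacher.\n\n"
        f"Curriculum excerpts:\n{context}\n\n"
        f"Student question: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("What is the slope in a linear function?"))
```

Even a minimal layer like this makes the design choice visible: the teacher, not the model, decides which knowledge the answer is allowed to draw on.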


When AI Looks Like Learning but Isn’t

1. The tool can be designed to either trigger or bypass cognition

Ludvigsen emphasizes that language models must be configured so they stimulate cognitive functions such as understanding, memory, language development, and problem solving. The implication is that if the design does not nudge these functions, the tool can become a shortcut rather than a learning scaffold (Utdanningsnytt).

2. The risk of a “downward spiral” without cognitive effort

This is the article’s most direct formulation. Ludvigsen warns that if the design is slightly off, students may end up solving tasks without cognitive strain, which can lead to a negative long-term trajectory across the school years. The article underscores that small design choices now may have large consequences over time, precisely because they influence whether students are required to do cognitively demanding work (Utdanningsnytt).

This is essentially the research version of the classroom phenomenon many teachers already suspect:

AI can produce work that looks like learning, while the learning process is absent.

3. Struggling students may be especially vulnerable

Ludvigsen draws a contrast with findings from other fields (e.g., consulting), where AI tends to lift everyone somewhat and top performers most. He is skeptical that school will mirror this pattern. Instead, he argues that a student who struggles with language and formulation may actually lose out, because they might not engage in the cognitive efforts that learning depends on (Utdanningsnytt).

This point strongly supports the framing around equity and professional responsibility: AI can widen gaps not only through access differences, but through differences in who gets cognitively carried by the tool versus who uses it to extend thinking.

The traditional equity concern: access

This is the familiar digital divide argument: some students have reliable devices, connectivity, and access to paid AI tools, while others do not.

That is real, but it is only part of the picture.

The newer equity concern: cognitive roles

Generative AI introduces a second, subtler divide: not in who has access, but in what role the tool plays in students’ thinking.

Some students use AI to extend and challenge their own thinking. For them, AI can function like a high-quality tutor or brainstorming partner.

Other students may be cognitively carried by AI. These students may struggle with language, structure, or confidence, and AI can then become a substitute for the cognitive effort their learning depends on rather than a support for it.


4. Students need help distinguishing “learning the tool” from “learning the subject”

The researchers report that an important insight from the Asker trials was the need to clarify for students when they are learning to use the tool and when they are learning the subject itself.

Early attempts had an unclear boundary here, and the team had to adjust teaching designs to make this distinction visible (Utdanningsnytt).

If that boundary remains fuzzy, students may interpret successful interaction with the AI as subject mastery.



Why “students will learn from AI” is not a safe assumption

One of the most critical messages in contemporary research discourse is that AI does not automatically produce learning gains. If students use generative AI to shortcut reasoning, writing, or problem solving, the appearance of competence can grow while real understanding declines.

This is particularly likely when tasks reward a polished product rather than visible thinking, when the design demands no cognitive strain from the student, and when students cannot tell whether they are learning the tool or learning the subject.

In other words, the learning outcome depends less on the model and more on the learning design.


Digital judgment: the ethical and pedagogical spine of AI use

Digital judgment is central to the ethics of professional practice. In an AI context, this competence includes at least four practical commitments:

  1. Privacy-aware choices
    Teachers must understand when a tool’s data practices are incompatible with responsible classroom use.
  2. Bias awareness
    Teachers need to explain that AI systems can reproduce social and knowledge biases and may present questionable claims with undue confidence.
  3. Accuracy discipline
    AI output must be treated as a starting point for verification, not as an authority.
  4. Pedagogical purpose
    The tool must be chosen for a clear learning function, not because it is available.

This shifts the teacher’s role toward a critical filter function: students need explicit instruction in why AI can feel intelligent while still being fallible and biased.


Tasks for students, to be adapted according to age

1. Writing: AI as a revision partner, not as an author

Teacher design:
Students draft first without AI. Then they use AI to get feedback on clarity, structure, and argumentation in their own draft, and to suggest possible revisions.

Required student evidence: a short note documenting which AI suggestions they accepted, which they rejected, and why.

Why is this a strong practice?
The student’s judgment becomes the assessed competence.


2. Social studies or science: AI-supported source critique

Teacher design:
Provide two short texts on the same topic (one robust, one weaker). Students ask AI to summarize both and then complete a verification task: identify which claims need checking, verify them against other sources, and explain which text, and which summary, they consider more trustworthy and why.

Why is this a strong practice?
It turns AI into an object of critical literacy instruction, reinforcing digital judgment.


3. Mathematics: AI for method comparison, not answer generation

Teacher design:
Students solve problems first. AI use is limited to comparing the student’s own method with an alternative solution path and explaining why the approaches differ.

Required student evidence:
A reflection identifying which method they find clearer or more efficient, where their own approach differed from the alternative, and what they would do differently next time.

Why is this a strong practice?
It protects productive struggle while leveraging feedback and differentiation.


4. Teacher workload: AI for differentiation of materials

Teacher use case:
Teachers use AI to adapt reading levels, generate varied examples, and produce differentiated versions of the same material for different groups of students.

Why is this a strong practice?
This is a high-impact, lower-risk entry point where teacher judgment remains central, and student cognition is not outsourced.


Examples of uses to avoid or tightly restrict

AI should be avoided or heavily constrained when the cognitive struggle is itself the learning goal, when a tool’s data practices are incompatible with responsible classroom use, when students cannot yet verify the output, or when there is no clear pedagogical purpose for the tool.

It is crucial that teachers can defend both use and non-use.


This article was written on the basis of the Utdanningsnytt article and input from a presentation by Sigurd Michaelsen used in an exam, and then polished and expanded with ChatGPT.
