How can teachers justify the use of AI?
Should teachers be expected to exercise evidence-based digital judgment when using AI?
Generative AI has passed the point of being an optional novelty. Many students already interact with AI daily, whether schools plan for it or not. This reality creates a professional dilemma: teachers must make continuous decisions about which tools to allow, which tasks to redesign, and which uses to restrict.
The key risk in this moment is not “AI in the classroom.” The key risk is unjustified AI in the classroom.
What schools need now is a professional framework that ensures AI decisions are made with research-based argumentation and digital judgment rather than habit, excitement, or external pressure.
This framing is timely because generative AI has become a routine part of students’ everyday lives. At the same time, the pace of technological change outstrips schools’ ability to respond with stable norms, shared practices, and robust policy. In this context, “knowing the tool” is no longer sufficient. Teachers are now required to explain why a tool should be used for a specific learning purpose, and equally why a tool should not be used in particular contexts.
The Utdanningsnytt article contributes a research-informed warning: not all students learn from using AI. In particular, large language models are qualitatively different from earlier educational technologies because they are probabilistic systems that can generate credible-sounding but incorrect output. Therefore, educational AI is not a neutral add-on; it requires careful pedagogical and ethical design.

The article highlights the example of Asker municipality, where teachers and researchers have developed subject-specific chatbots by adding a curated knowledge layer aligned with curricula and course texts. This approach aims to reduce hallucinations and improve instructional relevance. The implicit argument is that learning benefits are conditional on deliberate design, not guaranteed by exposure to the technology.

AI use in school is not primarily a technical issue. It is a professional judgment issue, requiring structured reasoning, ethical analysis, and research-informed justification.
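The article does not describe the technical details of the Asker chatbots, but the general idea of a curated knowledge layer can be sketched in code. The example below is an illustrative, simplified sketch, not the actual implementation: the excerpts, field names, and keyword-based retrieval are invented for illustration. The point it shows is that the chatbot is only allowed to answer from teacher-approved curriculum excerpts, which is one common way to ground responses and reduce hallucinations.

```python
# Minimal sketch (not the Asker implementation) of grounding a subject-specific
# chatbot in a curated knowledge layer: answers are composed only from
# teacher-approved curriculum excerpts.

from dataclasses import dataclass


@dataclass
class CurriculumExcerpt:
    source: str  # e.g. textbook chapter or competence aim (illustrative)
    text: str


# Hypothetical curated knowledge layer maintained by teachers.
KNOWLEDGE_LAYER = [
    CurriculumExcerpt("Science ch. 3", "Photosynthesis converts light energy into chemical energy stored in glucose."),
    CurriculumExcerpt("Science ch. 3", "Chlorophyll in the chloroplasts absorbs mainly red and blue light."),
    CurriculumExcerpt("Social studies ch. 1", "A source is primary if it was produced at the time of the event it describes."),
]


def retrieve(question: str, k: int = 2) -> list[CurriculumExcerpt]:
    """Very simple keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_LAYER,
        key=lambda ex: len(q_words & set(ex.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(question: str) -> str:
    """Compose a prompt instructing the model to answer only from curated excerpts."""
    excerpts = retrieve(question)
    context = "\n".join(f"- ({ex.source}) {ex.text}" for ex in excerpts)
    return (
        "You are a subject tutor. Answer ONLY from the curriculum excerpts below.\n"
        "If the excerpts do not contain the answer, say so and refer the student to the textbook.\n"
        f"Curriculum excerpts:\n{context}\n\n"
        f"Student question: {question}"
    )


if __name__ == "__main__":
    # The composed prompt would be sent to whichever language model the school uses.
    print(build_grounded_prompt("What does chlorophyll absorb?"))
```

Sent to a language model, a prompt built this way constrains answers to the curriculum material; when the material does not cover the question, the chatbot is instructed to say so rather than guess.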
When AI Looks Like Learning but Isn’t
1. The tool can be designed to either trigger or bypass cognition
Ludvigsen emphasizes that language models must be configured so they stimulate cognitive functions such as understanding, memory, language development, and problem solving. The implication is that if the design does not nudge these functions, the tool can become a shortcut rather than a learning scaffold (Utdanningsnytt).
2. The risk of a “downward spiral” without cognitive effort
This is the article’s most direct formulation. Ludvigsen warns that if the design is slightly off, students may end up solving tasks without cognitive strain, which can lead to a negative long-term trajectory across the school years. The article underscores that small design choices now may have large consequences over time, precisely because they influence whether students are required to do cognitively demanding work (Utdanningsnytt).
This is essentially the research version of the classroom phenomenon many teachers already suspect:
AI can produce work that looks like learning, while the learning process is absent.
3. Struggling students may be especially vulnerable
Ludvigsen draws a contrast with findings from other fields (e.g., consulting), where AI tends to lift everyone somewhat and top performers most. He is skeptical that school will mirror this pattern. Instead, he argues that a student who struggles with language and formulation may actually lose out, because they might not engage in the cognitive efforts that learning depends on (Utdanningsnytt).
This point strongly supports the framing around equity and professional responsibility: AI can widen gaps not only through access differences, but through differences in who gets cognitively carried by the tool versus who uses it to extend thinking.
The traditional equity concern: access
This is the familiar digital divide argument:
- Some students have better devices, better internet, more time, or more guidance at home.
- Therefore they benefit more.
That is real, but it is only part of the picture.
The newer equity concern: cognitive roles
Generative AI introduces a second, subtler divide:
Some students will use AI to extend thinking.
They already have enough subject knowledge and learning strategies to:
- ask precise questions,
- evaluate the answer,
- challenge weak reasoning,
- integrate ideas into their own work,
- revise critically.
For them, AI can function like a high-quality tutor or brainstorming partner.
Other students may be cognitively carried by AI.
These students may struggle with language, structure, or confidence. AI can then become:
- a substitute for the hardest mental work,
- a way to avoid “productive struggle,”
- a tool that supplies the reasoning the student should be practicing.
4. Students need help distinguishing “learning the tool” from “learning the subject”
The researchers report that an important insight from the Asker trials was the need to clarify for students when they are:
- training to understand what the technology does, versus
- using the technology to learn disciplinary content.
Early attempts had an unclear boundary here, and the team had to adjust teaching designs to make this distinction visible (Utdanningsnytt).
If that boundary remains fuzzy, students may interpret successful interaction with the AI as subject mastery.
AI in school: from exposure to evidence-based professional judgment
Why “students will learn from AI” is not a safe assumption
One of the most critical messages in contemporary research discourse is that AI does not automatically produce learning gains. If students use generative AI to shortcut reasoning, writing, or problem solving, the appearance of competence can grow while real understanding declines.
This is particularly likely when:
- tasks reward a polished product rather than visible thinking and reasoning;
- students lack sufficient knowledge to evaluate AI output;
- classroom norms for AI use are vague;
- assessment designs do not capture the process.
In other words, the learning outcome depends less on the model and more on the learning design.
Digital judgment: the ethical and pedagogical spine of AI use
Digital judgment plays a central role in the ethics of professional practice. In an AI context, this competence includes at least four practical commitments:
- Privacy-aware choices: Teachers must understand when a tool’s data practices are incompatible with responsible classroom use.
- Bias awareness: Teachers need to explain that AI systems can reproduce social and knowledge biases and may present questionable claims with undue confidence.
- Accuracy discipline: AI output must be treated as a starting point for verification, not as an authority.
- Pedagogical purpose: The tool must be chosen for a clear learning function, not because it is available.
This shifts the teacher’s role toward a critical filter function: students need explicit instruction in why AI can feel intelligent while still being fallible and biased.
Example tasks for students, to be adapted to age level
1. Writing: AI as a revision partner, not as an author
Teacher design:
Students draft first without AI. Then they use AI to:
- assess thesis clarity,
- diagnose paragraph coherence,
- suggest two alternative structures.
Required student evidence:
- original draft,
- AI feedback transcript,
- a short justification of what they accepted or rejected and why.
Why is this a strong practice?
The student’s judgment becomes the assessed competence.
2. Social studies or science: AI-supported source critique
Teacher design:
Provide two short texts on the same topic (one robust, one weaker). Students ask AI to summarize both and then complete a verification task:
- identify unsupported claims,
- compare AI’s summary to the original wording,
- locate missing nuance or false balance.
Why is this a strong practice?
It turns AI into an object of critical literacy instruction, reinforcing digital judgment.
3. Mathematics: AI for method comparison, not answer generation
Teacher design:
Students solve problems first. AI use is limited to:
- generating a parallel practice set,
- explaining an alternative method for a similar problem,
- highlighting common misconceptions.
Required student evidence:
A reflection identifying:
- one error corrected,
- one strategy improved,
- one AI limitation encountered.
Why is this a strong practice?
It protects productive struggle while leveraging feedback and differentiation.
4. Teacher workload: AI for differentiation of materials
Teacher use case:
Teachers use AI to:
- simplify texts,
- generate vocabulary supports,
- create a series of staged tasks aligned with the same learning goal.
Why is this a strong practice?
This is a high-impact, lower-risk entry point where teacher judgment remains central, and student cognition is not outsourced.
Examples of uses to avoid or tightly restrict
AI should be avoided or heavily constrained when:
- the task is intended to assess independent reasoning or writing;
- students are early novices who cannot evaluate output;
- privacy risks are unclear;
- the tool encourages invisible authorship.
It is crucial that teachers can defend both use and non-use.
This article was written using the Utdanningsnytt article, input from a presentation by Sigurd Michaelsen used in an exam, and was then polished and expanded with ChatGPT.
Good morning! Thank you for a very interesting post about AI. It gave me some useful concepts for my own teaching practice with AI, as I am trying to develop the pupils’ AI-consciousness (how, why, and when to use AI, etc.). I believe that teaching the pupils some metacognition about AI use and the role it plays in their learning will eventually make them better AI users. To achieve this, I have the pupils answer questionnaires about the way they put the AI to use or non-use. Your description of pupils being “cognitively carried” by the AI is fitting for some of my pupils, while some of the cleverer pupils have become “extended thinkers” because of AI.