Artificial Intelligence has been in the news a lot this week, with Prime Minister Sir Keir Starmer announcing that “Advances in AI will define the decade to come. This will bring extraordinary opportunities… I believe that we must fully embrace our role as insurgents in this revolution if our economy is to grow and our people are to prosper.”
This more proactive approach recognises that, over the last couple of years, AI has continued to grow in influence and become more integrated into our everyday lives. The opportunities are real, with increased productivity being one of the main benefits: we can now use AI to produce more, be more creative, and automate tasks that were previously beyond us. We can use AI in business and governance, allowing the economy to grow and people to prosper as processes become more streamlined and efficient.
However, we know that these promises of prosperity come with risks. Sir Keir’s tone is different from that of the previous UK government, which emphasised the risks of implementing AI; Mr Sunak, for instance, argued that monitoring should not be left to the private sector and that AI firms could not be allowed to “mark their own homework.” The risks are very real. Just this week, Apple suspended its AI-generated news summaries after a number of high-profile inaccuracies. AI has already affected the quality of information in the public sphere, and it is hard to gauge how far this influences political decision-making at the highest level, particularly given society’s reliance on social media. The ethical considerations around the use and development of AI are vast, ranging from bias in decision-making to privacy and copyright concerns.
In the education setting, the development and implementation of AI have profound implications. Many people’s first thought is how learning itself might be affected: students no longer need to write their own essays when they can ask ChatGPT or Copilot to do so. So how can teachers be assured that their students are learning? How can learning be accurately assessed and tailored if the work provided is not the student’s own? From a wider perspective, how can exam boards and Ofqual be sure that non-examined assessment is not compromised, especially given that AI detectors are, by their nature, less reliable than traditional plagiarism-detection tools?
While the larger conversations about AI and education are beyond the scope of this article, it is important that, as a school, we recognise that AI will define the decade to come and that we must support our students to grow and prosper as part of this revolution. Students need support in understanding what AI is, how it works, how it can be used, and what the ethical issues are. As a Foundation, we are working hard to be at the forefront of this: incorporating the responsible use of AI into the curriculum, teaching pupils how best to use it for learning, and maximising its potential while mitigating the risks as far as possible.
To this end, it is important to address the question of why: why do we still need schools if AI can do everything for us? This echoes the questions raised about the internet and the power of a well-executed Google search, which puts an immeasurable fountain of knowledge at our fingertips. To all intents and purposes, the development of AI does not advance this question much further. Ultimately, the question remains the same: why is knowing ‘stuff’ important?
Educational psychologists note that knowledge and understanding underpin everything that we do. Whenever we learn, we make connections with prior knowledge, building an interconnected schema: the more we know, the more knowledge we can connect together. From there, we can draw more comparisons between facts and topics and make more effective judgments. Our brains have evolved to recognise problems and apply past solutions to them; the more we know, the better we are able to do this, and the more complex the problems we can solve. Even though the internet makes the sum of human knowledge available to everyone, we still need to have learned that knowledge in order to use it. This does not change with the advent of AI.
It is worth noting that, as generative AI is currently designed, forming judgments is not something it can do. In developing ChatGPT, OpenAI trained computers on vast amounts of text from the internet, recording which words tended to follow which. From this wealth of text, the model learned connections between words based on how frequently they appear together. At that stage it could produce sentences, but they often made little sense. The model was then refined in a further training phase, in which human reviewers corrected its output to ensure that it read sensibly. Since the launch of ChatGPT in November 2022, this process has continued, and the AI is continually learning which words work together, how frequently, and in what context. This is what makes it a powerful tool for many uses, but engaging in critical thinking and forming judgments is not one of them. Apple’s experience with its AI engine summarising the news reminds us that selecting words that frequently appear together does not amount to an accurate judgment; that remains the domain of human beings, and to exercise it, we need to know and understand the relevant facts and information.
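To make the idea of next-word prediction concrete, here is a deliberately simplified sketch in Python. It builds a tiny ‘bigram’ model: it counts which words follow which in a sample text, then generates new text by repeatedly picking a likely next word. The sample corpus and the generate function here are illustrative assumptions for this article only; systems like ChatGPT use large neural networks trained on vastly more text, but the underlying task of predicting the next word is the same.

```python
import random
from collections import Counter, defaultdict

# A toy 'bigram' language model: count which word follows which,
# then generate text by sampling a likely next word at each step.
# (Illustrative only -- real systems like ChatGPT use large neural
# networks, but the core task of next-word prediction is the same.)

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record how often each word is followed by each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Produce text by repeatedly choosing the next word in
    proportion to how often it followed the current word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # no known continuation; stop generating
        word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Even this miniature version illustrates both the power and the limitation described above: the output sounds fluent because it mirrors the statistics of the training text, but nothing in the process checks whether what it says is true.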
This means that AI becomes a more powerful tool for accessing and compiling the information that we need to learn, much like a supercharged search engine that can find what we need to know very quickly. AI can tell us which academics specialise in specific areas, provide a summary of their views, and give us a route into further research. It can provide inspiration for creative projects or help students to revise by tailoring learning to topics that require a greater depth of understanding.
At The Kingsley School, we will continue to develop our use of AI and the curriculum surrounding it, to ensure that we are helping our pupils to be part of the revolution moving forward. In addition to lessons such as PSHE and ICT where pupils will learn to evaluate the accuracy of online information and how to navigate this minefield, Year 8 pupils will soon embark on a series of lessons related to AI that they can build on in the years to come. As part of their Pathfinder lessons, Year 8 will learn about how AI works, how it was developed, how to prompt AI, and how to use it ethically. This is an exciting scheme of work that will add a new dimension to the curriculum at Kingsley and ensure that our students are aware of the positives, the risks, and the opportunities that AI will present to us now, and in the future.
*For this article, AI was used for research purposes only.
Article written by Peter Bucknall – Deputy Head (Academic)