Guidance and support for the use of AI in A Level Computer Science NEA
04 February 2026
This blog was originally published in 2024 and has now been updated with new information.
Ceredig Cattanach-Chell, Computing Subject Advisor

Generative artificial intelligence (AI) tools are developing quickly. These tools bring benefits and challenges to education and assessment.
In this blog I highlight the guidance available for managing AI use in computer science. I also look at how to deal with suspected misuse in assessments.
Using this blog
This blog highlights where AI tools could be used to support the NEA, and how teachers can detect where AI may have been used.
It is important to remember that only independent candidate work should be credited when marking the NEA.
AI is a useful support tool, in the same way that external sources (websites), tutorials (built into software, YouTube, etc.) and teacher support are.
Any direct intervention by teachers, YouTube tutorials, AI tools, etc. must be clearly shown and marks adjusted accordingly. Candidates and teachers must be mindful that work produced through direct intervention from teachers, YouTube or AI cannot be credited. Only the independent work that builds beyond that intervention may attract credit.
Here’s what the JCQ states:
If teachers give any assistance which goes beyond general advice, for example:
- provide detailed specific advice on how to improve drafts to meet the assessment criteria
- give detailed feedback on errors and omissions which limits candidates’ opportunities to show initiative themselves
- intervene personally to improve the presentation or content of work
(JCQ Jan 2026)
Therefore, we can extend that to the use of AI, so that it reads:
If AI tools give any assistance which goes beyond general advice, for example:
- provide detailed specific advice on how to improve drafts to meet the assessment criteria
- give detailed feedback on errors and omissions which limits candidates’ opportunities to show initiative themselves
- direct or show candidates how to improve the presentation or content of work
- give or create worked solutions/edits/updates to code
then this assistance must be recorded and taken into account when marking the work.
For further support on instructions for conducting non-examination assessments, see the JCQ website. Guidance on referencing sources and AI use is found in Section 4.3, Resources.
What are AI tools?
AI tools use user input (prompts and questions) to generate text or images. AI tools are trained on data sets, and the response from an AI tool depends on how it has been trained. For example, ChatGPT is the best-known example of an AI chatbot. It has been trained on all the text available on the internet. There are many other chatbots and other tools available.
The primary focus of AI use in the A Level NEA is likely to be tools that generate program code. This would include common chatbots like ChatGPT, Google Gemini and Microsoft’s Copilot.
However, with the development of chatbot-style AI tools, it is also possible for an AI chatbot to emulate an end user, provide critique on research, and provide ideas and solutions to problems.
AI tools may also be integrated into common desktop applications. This could include Microsoft Word, the Google suite, Microsoft Visual Studio, and so on.
Appropriate use of AI in the NEA
The appropriate use of AI is determined by:
- the specific mark scheme
- the nature of the task.
Use of AI tools to support sections of the NEA is possible. AI tools may provide a great springboard for candidates – especially if a candidate is struggling with debugging or thinking about alternative paths that could be followed.
We want to encourage candidates to feel confident in how to use AI tools effectively, and especially in how to reference use of them in work. The JCQ student guide is a great resource and summarises the use of AI in a student-friendly way.
Effective use should provide prompts and ideas, without specifically giving solutions. For example:
“What algorithms could I use to create ‘pathing’ for a guard in a game to try to detect the player?”
rather than
“Write me 3 algorithms that I can use to create a path for a guard in a game”.
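For illustration, here is a minimal, hypothetical sketch of the kind of approach the first prompt might lead a candidate to research and then develop themselves: a guard that patrols fixed waypoints and detects the player within a set radius. The class and parameter names are invented for this example, not taken from any project or mark scheme.

```python
import math

class Guard:
    """Hypothetical guard that patrols waypoints and detects a nearby player."""

    def __init__(self, waypoints, detection_radius=5.0):
        self.waypoints = waypoints            # list of (x, y) patrol positions
        self.current = 0                      # index of the waypoint being walked to
        self.detection_radius = detection_radius

    def next_waypoint(self):
        """Advance to the next patrol point, looping back to the start."""
        self.current = (self.current + 1) % len(self.waypoints)
        return self.waypoints[self.current]

    def can_detect(self, guard_pos, player_pos):
        """Return True if the player is within the detection radius."""
        dx = player_pos[0] - guard_pos[0]
        dy = player_pos[1] - guard_pos[1]
        return math.hypot(dx, dy) <= self.detection_radius


# Quick check: a guard at (0, 0) spots a player at (3, 4), which is distance 5.0 away
guard = Guard(waypoints=[(0, 0), (10, 0), (10, 10)])
print(guard.can_detect((0, 0), (3, 4)))       # True
```

A candidate using AI well might ask about approaches like this (patrol routes, detection radii, line of sight) and then design, code and test their own version, rather than pasting in a finished answer.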
Use of AI in each section of the NEA
Analysis
Generating ideas for projects can be supported by AI. AI tools can provide great ideas, help develop initial project concepts, and give a project more scope.
For example, a student could use ChatGPT to provide stimulus to the question: “Write me 10 game project ideas that could use OOP paradigms.”
Ideas from ChatGPT or candidates could then be developed further using stimulus: “State 10 ways that I could use power ups in this game.”
AI tools could also be used to identify similar ideas or types of projects. This may speed up the research process. However, it’s also important that this stage is not solely driven by using AI tools.
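If a candidate does take one of these suggestions forward, the expectation is that they develop it into their own design. As a purely hypothetical illustration (the class names and values are invented, not a model answer), a power-up idea prompted by AI might be developed into the candidate’s own class structure along these lines:

```python
class PowerUp:
    """Base class for a collectable power-up (hypothetical example)."""

    def __init__(self, name, duration):
        self.name = name
        self.duration = duration              # seconds the effect lasts

    def apply(self, player):
        raise NotImplementedError("Each power-up defines its own effect")


class SpeedBoost(PowerUp):
    def __init__(self):
        super().__init__("Speed Boost", duration=10)

    def apply(self, player):
        player.speed *= 2                     # value the candidate would justify and test


class Shield(PowerUp):
    def __init__(self):
        super().__init__("Shield", duration=5)

    def apply(self, player):
        player.invulnerable = True
```

The value for assessment lies in the candidate’s own decisions about structure, behaviour and testing, not in the original AI suggestion.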
To detect over-reliance on AI, teachers could:
- keep weekly drop-ins with the student and discuss each idea
- ask them to justify their understanding of sections
- encourage them to discuss with you whether they want to use AI to support their analysis, and get them to describe why they feel it is necessary
- stress good habits of AI use, and remind candidates to evidence what they have researched, reference it accurately, and then clearly show how they have developed from it.
Design
Appropriate use is more of a challenge in this section. Students may try to use AI tools to:
- decompose problems
- create pseudocode for algorithms
- suggest justifications for solutions
- suggest testing strategies and data.
Though it is highly unlikely that AI tools would be able to produce a fully working design (without significant prompt engineering expertise), they can certainly be used to support most of the requirements for this section.
Ideally, candidates should be encouraged to propose a “first attempt” at each of the points in the mark scheme. Using AI tools has the potential to significantly impact a student’s mark in this section.
If candidates struggle with creating ideas for breaking down projects or creating a solution structure, AI is best used to support them after they have made some level of attempt to do this independently.
Candidates who struggle to decompose problems could then ask AI, “How would you break down Problem A into a series of smaller problems for computational solutions?”. They could then reference this and justify which is better – their original ideas, or a different approach. This would allow them to gain the most credit.
However, generally, AI use in this section should be discouraged.
There is no requirement for the design to be ‘perfect’ at this stage. Indeed, overuse of AI to reach a ‘perfect’ solution at this stage hinders iterative development – which is designed to identify flaws, allow re-design, and therefore prompt updates to testing where changes are made.
Therefore, we’d suggest that candidates steer away from using AI as much as possible in this section.
To detect over-reliance on AI you could look for:
- ‘perfect’ designs from students who struggle with decomposing problems
- very code-like pseudocode algorithms (see the contrast sketched after this list)
- rates of progress in this section that you would not expect from a particular student
- designs/work which still have the telltale signs of AI text-based output for things like class diagrams and testing tables
- sudden changes in direction, or wholesale updates of designs from week to week
- lack of evidence to show the thought process behind each step of the design stages.
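To illustrate the ‘very code-like pseudocode’ bullet above, compare the level of detail a candidate might reasonably sketch by hand at the design stage with the kind of finished-looking routine an AI tool tends to hand over. Both halves of the example below are hypothetical; the password check is just a convenient illustration.

```python
# A hand-written, design-level outline might look like this:
#
#   repeat until the password entered is strong enough
#       ask the user for a password
#       check its length and whether it mixes letters and digits
#   store the accepted password
#
# By contrast, "pseudocode" that arrives already reading like finished code,
# complete with validation logic and user feedback, is a warning sign:

def get_strong_password():
    """Fully worked routine of the kind an AI tool may produce ready-made."""
    while True:
        password = input("Choose a password: ")
        long_enough = len(password) >= 8
        has_letter = any(c.isalpha() for c in password)
        has_digit = any(c.isdigit() for c in password)
        if long_enough and has_letter and has_digit:
            return password
        print("Password must be 8+ characters and mix letters and digits.")
```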
Implementation
AI tools may be used to support debugging. AI tools could also be used to suggest ideas and methods to troubleshoot non-functional code.
For example:
- a candidate could ask an AI tool to suggest how a method or object could be written, but the candidate must show clearly how this suggestion has been adapted to suit their project.
- a candidate could copy and paste a method or section of code into an AI tool and ask it to suggest why it may not be compiling, or working as expected.
- a candidate could, if totally stuck, ask AI to provide a potential solution to a short section of code, reference and integrate this, and then show how they have resumed independent work (a sketch of how this might be documented follows this list).
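As a hypothetical illustration of the last point, a candidate who was stuck on a crashing average-score routine might record the problem, reference the AI suggestion, and evidence their own adaptation alongside the corrected code. The function, appendix reference and leaderboard context below are all invented for this example.

```python
def mean_score(scores):
    """Return the mean of a list of scores, or 0 if the list is empty.

    My original version crashed with ZeroDivisionError when a player had no
    scores. I asked an AI tool why it failed (see appendix reference A3 -
    hypothetical); it suggested guarding against an empty list. I chose to
    return 0 rather than raise an error, because my leaderboard treats new
    players as unranked.
    """
    if not scores:                  # adaptation: my project needs 0, not an exception
        return 0
    return sum(scores) / len(scores)


print(mean_score([70, 80, 90]))     # 80.0
print(mean_score([]))               # 0
```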
There is no harm in using AI tools to support the coding process, but only if all AI use is well documented and referenced. Candidates must clearly show where AI tools are used, and how their independent work develops from AI-generated content.
The challenge in using AI tools is that AI-generated code is not always functional or error free. This leads to challenges in testing and evaluation. A lack of understanding about how the code works will likely limit a candidate’s ability to test and evaluate at later stages. Therefore, whilst the implementation may “work”, they may struggle to reach the upper mark bands in the Testing sections.
If a candidate does use AI support, it is important that credit is only awarded to their independent work.
To detect over-reliance on AI you could look for:
- very erratic work rates, such as going from being very stuck to generating significant amounts of code in a short space of time
- changes in coding styles from section to section
- very fast development of code when compared to progress rates in classroom lessons
- sudden changes in direction
- departing from earlier designs with little justification/reasoning, or redesign work
- limited documentation to accompany development of the more ‘significant’ areas of code.
Testing and evaluation
AI use in this area should be very limited.
The testing uses test data from earlier sections. The evaluation is clearly linked back to specific stakeholder requirements. Where students struggle to relate their earlier testing data to the code they have, it may well be a strong indicator of unreferenced or undocumented AI use earlier in the design or implementation sections.
AI tools are unlikely to be able to generate detailed and specific support for this section without well-crafted prompting. Projects are likely to be very hard to upload entirely to an LLM or similar tool.
These sections also provide opportunities to create evidence of live interactions between stakeholders and the candidate. This is very hard to replicate with AI tools.
AI could be used to provide prompts for candidates to respond to. For example: “What impacts might a partially-functioning GUI have on an end user?”. These questions are fine to use as prompts to generate candidate-driven discussion, but must be referenced.
To detect over-reliance on AI you could look for:
- a disconnect between ‘generic’ discussions and the specifics of the project
- changes in writing styles, where candidates are using a lot of generated responses from LLMs to support discussions
- issues with formatting/layouts
- lack of flow to the section.
Inappropriate use of AI in the NEA
AI increases the opportunity for students to claim credit for responses that are not independently created.
Where a student has used AI to complete work, they are not demonstrating their own knowledge, understanding and application of skills. This prevents the student from presenting their own authentic evidence and will limit access to mark bands.
Examples of AI misuse include:
- using or modifying AI responses without acknowledgement
- disguising the use of AI
- using it for substantial sections of work.
It is important that you have a departmental AI policy. This should be based on the school policy. Further guidance for this exists on the JCQ website.
Teach students about appropriate use of AI in computer science before they start their NEA. Demonstrate how to reference AI correctly, including how to evidence the use in an appendix.
Supporting best practice of AI use
Analysis
AI can generate great prompts. However, candidates should steer away from using it to generate final ideas. Analysis requires interactions with stakeholders, and candidates restrict their access to the upper mark bands where this is limited. Overuse of AI may restrict engagement with stakeholders, as candidates become too dependent on the AI responses.
Much of the analysis is open to AI misuse. Care should be taken early on to ensure that candidates do not rely on AI tools too heavily. Plenty of short review points and teacher monitoring at this stage of the NEA are key.
Design
The design does not have to be fully completed in one go, due to the iterative nature of the NEA. For example, a high-level sketch or idea is often refined after a “first attempt” has been made at building a user interface. After testing, these earlier designs may be refined (and documented). The project goes through another iteration, and the results of the tweaked designs are re-evaluated.
The lure of AI may lead candidates to create over-specific designs – which reflect a final piece of code more than an early ‘pseudocode’-style design.
Examiners will not reward reverse-engineered designs. We expect there to be errors and challenges in the initial designs. This is why the projects are iterative in nature. Therefore, a candidate who overuses AI to generate ‘code perfect’ designs is likely to cause themselves issues.
Full and perfect ‘code-like’ designs for algorithms should raise suspicions at teacher level.
Development
Much of the development encourages independent work. However, AI tools may be used to auto-comment code and solve challenges along the way.
The key to ‘good’ use of AI is to encourage research into techniques, rather than solutions. Researching techniques allows a candidate to adapt and modify their findings to suit their project.
For example: researching “shortest path algorithms” would allow a candidate to explore which pathing algorithm they would want to use and why. Asking an AI tool to “Write Dijkstra’s shortest path for my code” limits their ability to show independent development.
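For illustration only, the sketch below shows the kind of adapted implementation a candidate might produce after researching the technique, with comments evidencing their own decisions. The graph representation and the justification in the docstring are hypothetical, not a model answer.

```python
import heapq

def shortest_distances(graph, start):
    """Dijkstra's algorithm over a graph stored as {node: [(neighbour, cost), ...]}.

    Hypothetical candidate note: after researching shortest-path algorithms I
    chose Dijkstra over BFS because my map uses weighted terrain costs, and over
    A* because my maps were small enough that a heuristic gave no measurable
    benefit when I tested it.
    """
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]                      # (distance so far, node)

    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:            # stale queue entry, skip it
            continue
        for neighbour, cost in graph[node]:
            new_dist = dist + cost
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances


# Tiny worked example: A-B costs 1, B-C costs 2, A-C costs 5
graph = {"A": [("B", 1), ("C", 5)], "B": [("A", 1), ("C", 2)], "C": [("A", 5), ("B", 2)]}
print(shortest_distances(graph, "A"))         # {'A': 0, 'B': 1, 'C': 3}
```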
Spotting misuse of AI is down to knowing your candidates, and being able to spot:
- sudden changes in work production
- changes in coding styles
- sudden redevelopment without justification.
This will help you challenge candidates and reassure you about authenticity.
Testing
Use of AI to test a program is of limited help, as the tests and data will have been defined earlier in the project.
One area where AI could be misused is in the remedial action taken to resolve issues. We do not expect that every project will work perfectly. There may be bugs in the system at final completion.
However, simply copying errors/issues and getting AI tools to solve them without referencing is cheating. Therefore we would encourage teachers to discuss the solutions that candidates produce, with a focus being on how they reached the solution, and how well they understand it.
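One way candidates can evidence this independently is to turn the test table from their design stage into simple repeatable checks, so that any remedial fix can be re-run and documented without outside help. The function and test data below are hypothetical, purely to show the shape of such evidence.

```python
def apply_discount(price, loyalty_years):
    """Hypothetical function under test: 5% off per loyalty year, capped at 25%."""
    discount = min(loyalty_years * 0.05, 0.25)
    return round(price * (1 - discount), 2)


# Test data carried forward from the design-stage test table
test_cases = [
    (100.0, 0, 100.0),    # normal: no loyalty, no discount
    (100.0, 3, 85.0),     # normal: 3 years -> 15% off
    (100.0, 10, 75.0),    # boundary: discount capped at 25%
    (0.0, 5, 0.0),        # boundary: zero price
]

for price, years, expected in test_cases:
    actual = apply_discount(price, years)
    outcome = "PASS" if actual == expected else "FAIL"
    print(f"{outcome}: apply_discount({price}, {years}) = {actual}, expected {expected}")
```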
Evaluation
AI tools are very good at writing evaluations when given enough information. A key pointer to AI-generated evaluations will be the generic nature of the evaluation. There is also likely to be a lack of evidence of interaction between stakeholders and the candidate. Changes in formatting, grammar and style are also quite easy to spot in text which has been generated by AI tools.
Using AI to generate the evaluation is poor practice and will mean it is very difficult to access upper mark bands.
Dealing with misuse of AI in the NEA
Teachers must not accept work which is not the student’s own. Ultimately the Head of Centre has the responsibility for ensuring that students’ work is authentic.
If you suspect AI misuse before a candidate has signed the declaration of authenticity, you can resolve the matter internally. You do not need to report this to Cambridge OCR.
If AI misuse is suspected after a candidate has signed the declaration of authenticity, you must report suspected malpractice to Cambridge OCR.
Guidance on reporting malpractice is outlined in the JCQ AI guidance, in the Malpractice section of the JCQ website.
To report malpractice, you must:
Further support
Please refer to the JCQ AI use in assessments: Protecting the integrity of assessment document for further information on managing the use of AI within your assessments.
We also have a range of support resources, including recorded webinars, on our AI support page.
Stay connected
If you have any questions, you can email us at ComputerScience@ocr.org.uk or call us on 01223 553998. You can also sign up to subject updates to keep up to date with the latest news, updates and resources.
About the author
Ceredig joined Cambridge OCR in September 2015, incorporating his breadth of experience from education to support the reform and development of the new GCSE (9-1) Computer Science and Entry Level R354. Keenly aware of the challenges faced within the classroom, Ceredig led on the concept and delivery of teacher delivery packs, which have become one of the flagships for the new GCSE’s success with teachers. Prior to joining Cambridge OCR, Ceredig had eight years of education and teaching experience across a wide range of schools, including primary, secondary, academies and SEN sectors. Ceredig has a degree in Computer Science from Liverpool University and postgraduate qualifications from Liverpool Hope and Cambridge Universities. Outside of work, Ceredig is a keen modeller/painter, gamer and all-around geek. From wildlife to war games, his varied hobbies ensure that he is never just ‘sitting down watching the box’.