Artificial Intelligence (AI) is no longer a peripheral novelty in educational settings - it's a growing reality in how psychological services are delivered, assessments are conducted, and decisions are supported. For educational psychologists, who often navigate high caseloads, administrative burden, and complex pupil needs, AI promises speed, precision, and insight.
But with innovation comes responsibility. The use of AI in psychological practice raises important ethical questions: How do we ensure transparency in decision-making? Who is accountable when automated tools misguide? How do we protect children’s rights in a world of datafication?
AI in educational psychology must be more than just efficient - it must be fair, explainable, and always secondary to professional judgment. The conversation isn't about whether we should use AI, but how we use it in a way that reflects the values of psychological care: trust, safety, autonomy, and equity.
In this blog, we’ll explore the most pressing ethical issues and how educational psychologists can prepare for a future where digital tools are part of everyday practice.
1. Data Privacy and Confidentiality
Educational psychologists are guardians of sensitive, often life-shaping information. Introducing AI tools into this context means managing new forms of data collection, storage, and sharing, and protecting clients from harm.
AI systems require large datasets to function effectively, often including detailed behavioural, emotional, and academic information. If mismanaged, this data can expose children and families to breaches of confidentiality or even profiling.
In the UK, all AI tools must comply with the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, and adhere to the ethical codes set by the British Psychological Society (BPS). This includes principles around data minimisation, secure processing, and explicit consent.
For example, tools often used for transcription and note summarisation should only be used in GDPR-compliant, encrypted environments, preferably with offline or enterprise-level setups.
Psychologists should also consider data sovereignty: where the data is stored and processed. US-based platforms, even if GDPR-compliant, may be subject to different privacy laws that conflict with UK standards.
2. Algorithmic Bias and Fairness
AI models are trained on existing data, and that data can carry historical biases, omissions, or skewed representations.
When AI systems are used to flag risk, analyse behaviour, or inform assessments, any embedded bias in their training data can result in unjust or inaccurate outcomes, particularly for neurodivergent pupils, children from minority backgrounds, or those with limited digital footprints.
A 2024 study titled "FairAIED: Navigating Fairness, Bias, and Ethics in Educational AI Applications" discusses how AI systems can perpetuate biases in educational settings. The study highlights instances where AI algorithms exhibited bias against specific demographic groups.
Bias isn’t just about the algorithm - it's about who defines the problem, what outcomes are prioritised, and what data is excluded. Educational psychologists need to critically interrogate AI outputs and the assumptions behind them.
They also have a responsibility to educate schools and services on the ethical use of such tools, especially when used for school-wide tracking.
3. Transparency and Explainability
Understanding how an AI tool arrived at a conclusion is essential in a field that values reflective practice and collaborative decision-making.
Many AI tools function as “black boxes,” offering results without a clear rationale. This makes it difficult for psychologists to justify findings, particularly in legal, educational, or multi-agency settings.
The Alan Turing Institute’s document on AI explainability highlights the importance of making algorithmic decisions interpretable, especially in public sector applications.
The UK AI Security Institute is also actively developing frameworks around safety, explainability, and long-term risk mitigation in artificial intelligence, crucial for public sector and education applications.
To align with professional standards, psychologists must:
- Use explainable AI (XAI) systems that offer clear logic paths
- Provide written rationales in reports where AI tools are used
- Be transparent with families and schools about the role AI has played
4. Human Oversight and Professional Judgment
No AI tool should ever replace the nuanced understanding and ethical discernment of a trained psychologist.
Tools that can streamline report writing and assessment planning do not (and cannot) replace in-person observation, relationship building, and critical reasoning.
Every AI-supported insight must be:
- Reviewed by a qualified psychologist
- Interpreted within context
- Used as a supplement, not a source of truth
This aligns with the British Psychological Society’s stance that AI should support, not supplant, professional autonomy and judgement in psychological care.
5. Informed Consent and Autonomy
Ethical use of AI must prioritise clients’ rights to autonomy, understanding, and choice.
Using AI in assessments or interventions, even in a supporting role, requires clear, age-appropriate explanations and explicit consent from students or guardians.
The Digital Self-Determination framework suggests that individuals should have the right not only to consent, but to understand and challenge digital decisions that affect them.
In practice, this means:
- Clearly explaining what AI is and how it is used
- Offering alternatives when appropriate
- Documenting consent or refusal in case records
6. Equity and Access
AI can widen or narrow the equity gap, depending on how it's implemented.
Schools and services in lower-income areas may lack access to cutting-edge AI tools, while others may over-rely on them to compensate for underfunding and staff shortages. This can lead to two-tier systems where some pupils benefit from personalised insights while others are overlooked or misjudged.
Additionally, many AI tools are developed in the US, which means terminology, frameworks, and assessment norms may need adapting for UK use. Some tools may offer powerful functionality, but their cultural validity and alignment with UK practice standards often require further research and careful professional evaluation.
Psychologists must remain vigilant about applicability, inclusivity, and localisation when selecting tools for UK use.
Using AI Ethically, Effectively, and With Care
AI is already transforming educational psychology, but whether that transformation is positive depends entirely on how the technology is used.
Responsible integration means:
- Protecting client data and confidentiality
- Avoiding algorithmic bias and ensuring fairness
- Ensuring transparency, consent, and human oversight
- Adapting tools appropriately to UK frameworks
At Leaders in Care, we support psychologists and services navigating this evolving landscape, not just through expert recruitment but by championing innovation grounded in ethics. Through high-quality CPD events, sector insights, and people-first solutions, we’re committed to helping technology and human care move forward together.
Want to Learn More? Join Our Upcoming CPD Event
To continue this important conversation, we’re hosting a free CPD-accredited webinar designed specifically for educational psychologists, “Application of AI in Psychological Practice: Opportunities, Ethics, and Impact”, led by Dr Rachael Skews.
🗓 Date: June 24th, 2025
🕔 Time: 5:00 PM
📍 Location: Online
🎓 Includes: CPD certificate, recording, resources, and slides
To help you get even more from the session, we’re also publishing a dedicated series of blogs exploring the evolving role of AI in psychological services:
🔗 Harnessing AI in Educational Psychology: Balancing Innovation with Human Insight
🔗 AI Tools Educational Psychologists Should Know About and Consider Exploring in Practice