AI Ethical Use Policy
Effective Date: 27 April 2026 · Version 1.1
In brief: what you need to know
- AI is used only to support clinicians with administrative work like transcription and note drafting — never to make clinical decisions about you.
- Your sessions are not recorded as audio. Speech is transcribed into text in short chunks during the session, and only the text is kept.
- A qualified clinician reviews and approves every AI-generated output before it enters your record.
- Your data is never used to train AI models.
- You have the right to know which AI tools your clinician uses and to opt out at any time without affecting your care.
- The practice officially endorses Cogent Clinic. Other tools may be used by individual clinicians, but every tool must meet our minimum standards.
The full policy is set out below. This policy should be read alongside our Privacy Policy.
1. Purpose and Scope
This policy explains how artificial intelligence (AI) is used within Illuminated Thinking, the principles that guide that use, the standards every AI tool must meet, and your rights as a client.
It applies to all AI-assisted tools used by Illuminated Thinking Ltd and by the independent clinicians associated with our practice in their work with our clients. It complements (and does not replace) our Privacy Policy, which sets out how we handle personal data more generally.
2. What We Mean by 'AI'
When we refer to 'AI' or 'AI-assisted tools' in this policy, we mean software that uses artificial intelligence — typically large language models or speech-recognition models — to support clinical work. In current practice this is mainly:
- Transcription — converting spoken words from a clinical session into written text
- Note drafting — producing a structured first draft of a clinical note from the transcript, which the clinician then edits and approves
This policy does not apply to ordinary practice software that does not use AI, such as appointment scheduling, invoicing, or secure email.
3. Our Guiding Principles
We believe AI can responsibly support clinical work when it is used transparently, with consent, and with strong human oversight. The following principles guide every decision we make about AI in the practice:
- Human-led care. AI supports clinicians; it does not replace them. Clinical judgement, the therapeutic relationship, and decisions about your care remain with a qualified clinician.
- Transparency. You will be told if AI is being used in your sessions, what it is being used for, and what tool is being used. You can ask questions about it at any time.
- Explicit consent. AI-assisted tools are only used with your explicit, informed consent. You can refuse or withdraw consent at any time without affecting your access to therapy.
- Human oversight of every output. A qualified clinician reviews, edits, and approves every AI-generated output before it is stored in your clinical record.
- Data minimisation. Only the information necessary to support clinical documentation is processed by AI tools.
- No model training on client data. Your personal or clinical information is never used to train AI models.
- No autonomous clinical decisions. AI is not used to triage risk, diagnose, recommend treatment, or make any decision that affects your care.
- Accountability. Each clinician is accountable for the AI tools they choose to use, and for ensuring those tools meet the standards set out in this policy.
4. Bias, Equity, and Language
AI tools are trained on data that may not represent every voice equally. Speech recognition can be less accurate for some accents, dialects, and bilingual or code-switched conversation, and note-drafting tools can carry cultural assumptions about what is "normal" or relevant. We are aware of these limitations.
Clinician review is our primary safeguard. Responsibility for ensuring your record reflects you accurately — including the language, cultural context, and meaning of what you have shared — sits with the clinician, not with the AI tool. Where a clinician finds that an AI tool does not work well for a particular client or population, they are expected not to rely on it.
5. The Independent Clinician Model
All clinicians at Illuminated Thinking practise as independent professionals rather than employees, and each is an independent data controller in their own right (see Section 2 of our Privacy Policy). This means each clinician selects, contracts with, and is responsible for the AI tools used in their own practice.
Illuminated Thinking Ltd sets the standards every clinician must meet, supports an officially endorsed tool (Cogent Clinic), and acts as the central point of contact for clients with questions or complaints. We do not, however, dictate which specific tool any individual clinician must use, provided the tool meets our minimum standards.
6. The Officially Endorsed Tool: Cogent Clinic
Cogent Clinic is the practice's officially endorsed AI-assisted tool. It is operated by Cogent Clinic Ltd, a separate legal entity, and acts as a data processor for the clinicians who use it. Cogent Clinic is live and in use by the independent clinicians associated with our practice.
Cogent Clinic provides:
- Transcription of clinical sessions to support accurate note-taking
- AI-assisted note drafting to help clinicians produce clinical notes more efficiently
All outputs produced by Cogent Clinic are reviewed and approved by a qualified clinician before being stored in the clinical record (held in Halaxy). Cogent Clinic operates under a Data Protection Impact Assessment completed by each clinician for their own use, and meets the minimum standards set out in Section 8.
7. Other AI Tools Clinicians May Use
Because clinicians act as independent data controllers, some may choose to use AI tools other than Cogent Clinic. A common example is Heidi Health; other tools may also be in use.
Illuminated Thinking Ltd does not officially endorse these alternative tools, but we do require that any tool used in clinical work with our clients meets the minimum standards set out below. Each clinician is responsible for verifying this for the tool(s) they use.
If you would like to know which AI tool (if any) your clinician uses, please ask them directly. You are entitled to an answer before consenting to the use of AI in your sessions.
8. Minimum Standards for Any AI Tool
Every AI tool used by a clinician in their work with our clients must meet, at a minimum, all of the following requirements:
- No audio recording. Audio of sessions is not recorded or stored. Speech is transcribed in short chunks during the session, and only the resulting text is retained.
- UK GDPR and Data Protection Act 2018 compliant. The vendor must demonstrably meet UK data protection law.
- Signed Data Processing Agreement (DPA). A DPA must be in place between the clinician and the vendor before client data is processed.
- No model training on client data. The vendor must contractually commit that client data is not used to train, fine-tune, or otherwise improve AI models.
- Human review of every output. A qualified clinician must review, edit, and approve every AI-generated output before it is stored in the clinical record. AI outputs are never stored unreviewed.
- No autonomous clinical decisions. The tool must not be used to triage, diagnose, assess risk, or recommend treatment without clinician oversight.
- DPIA completed by the clinician. Each clinician must complete their own Data Protection Impact Assessment for their use of the tool, before using it with clients.
- Explicit, documented client consent. The clinician must obtain explicit consent before using AI in a session, and must document that consent.
- Right to opt out. Clients must be able to refuse or withdraw consent at any time, without any impact on their care.
- Appropriate security. The vendor must apply appropriate technical and organisational measures, including encryption in transit and at rest, access controls, and breach notification procedures.
- Lawful international transfers. Where data is processed outside the UK/EEA, the vendor must rely on a UK GDPR-compliant transfer mechanism (such as adequacy regulations or Standard Contractual Clauses).
Tools that do not meet these standards must not be used in clinical work with our clients. If you have any concerns about whether a tool meets these standards, please contact us at dataprotection@illuminatedthinking.co.uk.
9. What We Do Not Use AI For
To be clear about the limits of AI in our practice, we do not use AI to:
- Make any clinical decision about your care, including diagnosis, treatment selection, or risk assessment
- Triage referrals or decide who is offered services
- Replace clinical supervision or professional judgement
- Profile clients or generate predictions about clinical outcomes
- Communicate with clients in place of a clinician (for example, AI chatbots handling clinical contact)
- Generate marketing or website content that purports to be the personal voice or clinical opinion of a clinician without their review
Future use of AI. We do not currently use AI for any client-facing function — for example, AI chatbots, intake assistants, symptom screeners, or AI-generated responses to enquiries. If we ever decide to introduce a tool of this kind, we will update this policy and make the change clearly visible before the tool goes live with clients.
10. Your Rights
As a client of Illuminated Thinking, you have the following rights in relation to AI use:
- The right to be informed. Your clinician will tell you, before AI is used in your sessions, what tool is being used and what it does.
- The right to ask. You can ask your clinician at any time which AI tools they use, how those tools work, and what safeguards apply. You can also contact us at dataprotection@illuminatedthinking.co.uk if you would like help with this.
- The right to consent — or not. AI-assisted tools are only used with your explicit consent. Refusing will not affect your access to therapy.
- The right to withdraw consent. You can withdraw consent at any time by speaking to your clinician or by emailing us. See Section 11 for what this means for your existing record.
- The right to complain. If you have concerns about how AI is being used in your care, please see Section 13.
11. Consent, Withdrawal, and Accuracy
Sessions involving more than one person
In couple, family, or other systemic sessions, AI-assisted tools are only used where every participant in the session consents. If any participant declines, AI tools will not be used in that session. Any participant may also withdraw consent at any time during a course of therapy.
Children and young people
Where a client is under 16, we obtain consent from the person(s) with parental responsibility before any AI-assisted tool is used in their care. A young person under 16 may also consent themselves where their clinician is satisfied that they understand what is involved, in line with the Age of Legal Capacity (Scotland) Act 1991. Young people aged 16 and over are presumed capable of consenting for themselves.
In all cases, the young person's own views are sought and respected, and the right to refuse or withdraw consent at any time — without any impact on their access to therapy — applies equally to children and young people.
Effect of withdrawing consent on your existing record
Withdrawing consent stops any further use of AI-assisted tools in your care. It does not affect the lawfulness of processing that has already taken place with your consent (UK GDPR Article 7(3)). Existing notes in your clinical record will not be re-drafted or deleted because of a withdrawal of consent, as we are professionally and legally required to retain accurate clinical records (see Section 10 of our Privacy Policy).
If you believe a specific entry in your record is inaccurate — whether or not it was AI-assisted — you can request rectification at any time under UK GDPR.
Accuracy and errors
The clinician, not the AI tool, is responsible for the accuracy of what is recorded in your clinical record. AI-generated outputs are drafts only; the clinician reviews, edits, and corrects them before they are stored. If you spot an error in your record, please raise it with your clinician or contact us at dataprotection@illuminatedthinking.co.uk — we will help you exercise your right to rectification.
12. Governance and Review
Illuminated Thinking Ltd reviews this policy at least annually and whenever a significant change in our use of AI occurs (for example, a major change to Cogent Clinic, a change in a vendor's data handling, or a relevant change in law or regulator guidance).
We monitor regulator guidance from the Information Commissioner's Office (ICO), the Health and Care Professions Council (HCPC), and the British Psychological Society (BPS), and update our standards in line with their expectations.
Responsibility for this policy sits with the Clinical Director, Dr Aisha Tariq, who acts as our designated lead for data protection and AI governance.
13. Complaints
If you have concerns about how AI is being used in your care, you can:
- Raise it directly with your clinician, who will explain what is happening and address your concerns where possible
- Contact us at dataprotection@illuminatedthinking.co.uk — we will look into it and respond as soon as we can
- Complain to the Information Commissioner's Office (ICO) if your concern relates to how your personal data is handled. Details are in Section 15 of our Privacy Policy.
14. Changes to This Policy
We may update this policy from time to time. The most current version will always be available on our website with the effective date and version number shown at the top. Significant changes will be communicated where appropriate.
Questions about how we use AI?
If you have questions about AI in your care, please speak to your clinician or get in touch with us directly.