Artificial Intelligence Policy

1. Purpose & Scope

It is undeniable that Artificial Intelligence (AI) has rapidly become woven into our everyday lives, transforming everything from how we search for and consume information to how we plan, write and communicate. It is clear that AI is here to stay.

As a sustainability consultancy, we recognise that when used well, AI has enormous potential to help businesses measure, manage and communicate their impact more effectively. However, we are also keenly aware that new technology brings new risks. We are increasingly mindful that AI has social, environmental and ethical impacts. While it can speed up admin-intensive tasks and free up critical time for deeper thinking, richer collaboration and more ambitious sustainability action, it can also amplify bias, spread misinformation, compromise privacy and increase resource use if it is not handled carefully.

That is why we have created this AI Policy. It exists to guide how we use AI responsibly in this new era, to set clear boundaries on its use and to keep us watchful for unintended negative consequences. It ensures we protect client trust, uphold our B Corp values and remain mindful of AI’s ethical and environmental implications, while embracing its potential for good.

This policy applies to all Cyd Connects team members, contractors, associates and collaborators.

2. Why we use AI

AI has enormous potential to support businesses driving positive change. Used thoughtfully, it can:

  • Strengthen research by accessing multiple data points from reports, journal articles and evidence-based research quickly.

  • Improve clarity and accuracy in written work.

  • Accelerate brainstorming and spark ideation.

  • Free up time otherwise taken up by admin-heavy tasks, which we can redirect to deeper data analysis, stakeholder engagement and creative sustainability strategy.

  • Support internal operations such as calendar management, project management, information and data organisation.

However, AI will never have the final say in our work. Outputs are always reviewed, fact-checked, cross-referenced and rewritten using our decades of expertise, sector knowledge and lived experience.

3. Our Principles

3.1 Thoughtful, purpose-led use

Every use of an AI tool must support our mission to help businesses progress their sustainability goals credibly and ethically. We always ask whether AI is the right tool for the task and whether its use aligns with our values, sustainability commitments and B Corp standards.

3.2 Human-first, human-led

AI assists, but humans must always lead. This means:

  • Our consultants remain fully accountable for the accuracy, quality and integrity of all work.

  • AI must never be used to produce final outputs, make judgements or replace professional expertise.

3.3 Transparency

We are open with clients about how AI is used in research, drafting or internal processes. If a client prefers we do not use AI on a project, we will honour that fully.

3.4 Truth, ethics, and unbiased information

We are mindful of bias, representation and fairness. This means we will never accept outputs that reinforce stereotypes or bias, present unchecked claims or misrepresent science and research.

3.5 Environmental responsibility

As a sustainability consultancy, we recognise that AI tools carry an environmental footprint linked to energy and compute intensity (UNEP, DEFRA 2025). We therefore:

  • Use AI proportionately and intentionally rather than by default.

  • Avoid unnecessary querying or high-impact tasks when a human-led approach is more efficient.

  • Prefer tools demonstrating commitments to efficiency, transparency or renewable energy use where available.

  • Continue monitoring research on AI’s environmental impact as the field evolves.

3.6 Continuous learning

AI is developing rapidly, so while we stay curious we must also remain critical: sharing what works and what doesn't, and recognising that emerging risks will require ongoing attention.

4. Examples of where AI is useful in our work

Appropriate uses include:

  • Drafting early outlines of reports and presentations.

  • Summarising legislation, frameworks and technical documents (always fact checked).

  • Generating early ideas for sustainability campaigns, events and communications which are then reviewed and reshaped by human hands and minds.

  • Data organisation and entry into spreadsheets (we will never share sensitive or private data with our AI tools).

  • Comparing high-level definitions or methodologies before applying expert analysis.

  • Supporting grammar, readability and clarity checks.

  • Researching broad themes or identifying initial data points for deeper manual investigation.

  • Administrative efficiency (notes, scheduling, project management).

5. Where AI must not be used

AI must not be used for:

  • Final versions of sustainability reports, gap analyses, assessments or strategic recommendations.

  • Claims about impact, metrics or compliance without human verification.

  • Legal interpretations, audit readiness decisions or certification guidance without expert judgement.

  • Sensitive stakeholder engagement content, especially involving vulnerable groups.

  • Confidential or client-identifiable data in non-approved AI tools.

  • Automation of client communications without human review.

6. Data, privacy and confidentiality

We never input into AI tools:

  • Non-public client information.

  • Personal data or identifiable stakeholder details.

  • Draft reports, impact assessments or sensitive findings unless using a secure, approved environment.

  • Any data covered by confidentiality agreements.

Only approved tools may be used and storage must remain within our secure systems.

7. Approved AI Tools

In our work we use a number of approved AI tools. They are:

  • ChatGPT Business & Claude – for early-stage ideation, outlining and light research to help organise our thinking before we apply our own expertise.

  • Perplexity – supports deeper, more structured research, especially when scanning legislation, frameworks and emerging trends, which we then verify from primary sources.

  • Grammarly – helps keep our writing clear and accurate.

  • Supernormal – summarises meetings and actions to support project management.

Across all tools, AI supports our process but never replaces our professional judgement, credibility and accuracy.

8. Policy review and accountability

A named team member must review every AI-assisted piece of work before it is shared, and project leads remain accountable for ensuring AI use aligns with this policy. AI is evolving rapidly, as is our understanding of its opportunities, risks and environmental impacts, so this policy will be reviewed regularly and updated as new information, tools and best practice emerge.

We will continue to monitor developments in AI closely, adjust our approach responsibly and ensure our work reflects the highest standards of integrity, accuracy and sustainability. Team members are encouraged to share insights, concerns and suggestions so that we can keep strengthening our approach together.

This policy will be reviewed every year by the directors of the business. Employees who suspect violations of this policy, or who have concerns about the ethical use of generative AI, should report them to their manager or directly to the CEO.