Our Use of Artificial Intelligence
1. Introduction
As a consultancy firm committed to remaining at the forefront of innovation, Insuco aims to optimise the quality and performance of its expertise by responsibly integrating Artificial Intelligence (AI) into its internal processes.
Our approach is centred on social sustainability, and our use of AI tools must contribute to our values and the quality standards of our expertise.
As a member of the International Association for Impact Assessment (IAIA), Insuco fully adheres to its Principles for the Use of Artificial Intelligence in Impact Assessment (SP n°16, Feb. 2026). These principles aim to establish safeguards for the ethical and responsible use of AI in the impact assessment sector.
More generally, Insuco complies with the European Artificial Intelligence Act (AI Act), adopted in 2024, which, among other objectives, holds companies accountable for their use of AI and regulates that use to protect stakeholders and citizens.
2. Transparency on AI Use and Personal Data Protection
Insuco is committed to full transparency regarding the use of artificial intelligence (AI) by its employees and external experts.
All our impact assessment reports clearly state the use of AI (tool name, type of use, control method, date), as well as compliance with current policies, standards or regulations governing AI use.
In this context, all data collected from stakeholders (individuals, communities, organisations) that we process with the help of AI tools are handled in strict compliance with the European General Data Protection Regulation (GDPR), including data subjects' rights to rectification and erasure of their data.
3. Expertise Remains the Cornerstone of Our Studies
Artificial intelligence (AI) is a tool at the service of our employees and experts. It is not a substitute. Stakeholder involvement and fieldwork, which are the pillars of our interventions, remain activities conducted by our experts and employees.
AI can enrich or optimise field studies, impact assessments, the processing of large volumes of data, and the monitoring and evaluation of action plans. However, our professional supervision remains imperative: our employees, experts, and researchers conduct an independent verification of the results generated by the AI tools authorised by Insuco that they use.
Responsibility for the application of AI throughout the impact assessment process, from conception to finalisation, rests entirely with our teams. This includes:
- the decision to integrate AI;
- the supervision and management of its application;
- the validation of content and analyses;
- the verification of sources, data, and the accuracy of information;
- the review of our reports by regulatory authorities;
- the management of public comments and the responses provided;
- the formulation of conclusions and recommendations for action.
4. AI at the Service of Proven Methodologies
Our studies are based on methods, protocols, and tools validated by the community of experts and authorities in the impact assessment sector.
We use AI tools in a controlled manner to complement and improve our processes, without ever replacing or overriding recommended or regulatory methods, protocols, and tools: scientific principles; standardised assessment, calculation, and modelling techniques; and rules for the protection of social and environmental data.
Any impact assessment process must comply with the AI policies and requirements of the country where the assessment is conducted, as well as the policies and standards of any organisation funding or supporting said assessment, according to the principle of the most stringent standard. Accordingly, our methods, protocols, and tools do not integrate AI systems classified as unacceptable or high-risk under the AI Act.
5. Consideration of Bias and Inclusivity
We implement careful management of AI biases and a quality control process for inputs (prompts) and results.
Information and data provided by AI tools may be insufficient or not accurately represent indigenous, vulnerable, conflict-affected, or marginalised groups. This is why AI should not replace direct communication with affected individuals or the judgment of experts.
AI users must exercise human oversight to prevent errors and ensure accurate representation. It is crucial to research and address the limitations of AI tools and systems, such as:
- biases;
- AI “hallucinations” (false or misleading information);
- incomplete data, especially for remote locations and contexts;
- subjects and locations with little non-digitised knowledge;
- lack of attention to cultural values, beliefs, or tacit knowledge;
- ambiguous criteria for judging impact significance;
- lack of knowledge of data source and credibility.
6. Awareness, Risk Management, and E&S Impacts
We are committed to raising awareness among our staff and external experts about the limitations, risks, and impacts of AI, particularly through initial internal training and the dissemination of a Good Practice Note to our external experts.
This approach includes a comparative analysis of different types of AI and their environmental and social implications. We classify the AI systems we use according to the AI Act's risk categories (unacceptable, high, limited, and minimal or no risk), to which we add our own assessment of environmental and social risks.
Aware of the social and environmental impacts generated by the development and use of AI, we therefore limit its use according to the principle of actual need: an AI tool is used only when it provides real added value to our work compared with existing digital or manual tools.
To ensure compliance with the AI Act, our action plan includes the following activities:
- mapping AI system uses: identification of internal and external AI tools and systems, as well as their application areas;
- classification of AI systems used according to their risk level: evaluation and documentation of each system according to the AI Act’s typology;
- AI governance framework: AI policy, AI Use Charter, and review of the IT charter;
- team training: awareness of the AI Act, AI ethics, cybersecurity, and the impacts of AI systems;
- monitoring and auditing: establishment of continuous compliance control and regular internal audits.
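The mapping and classification steps above can be sketched as a simple inventory structure. This is a minimal illustration only: the tool names, use cases, risk assignments, and environmental and social notes below are assumed examples, not Insuco's actual register.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    # Risk categories defined by the EU AI Act
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AITool:
    name: str       # hypothetical tool name (illustrative)
    use_case: str   # area of application
    risk: RiskLevel # classification under the AI Act typology
    es_notes: str   # added environmental and social assessment

# Illustrative inventory entries (assumed, not an actual register)
inventory = [
    AITool("TranscriptionTool", "interview transcription", RiskLevel.LIMITED,
           "moderate energy use; interviewee consent required"),
    AITool("SummariserTool", "literature review support", RiskLevel.MINIMAL,
           "outputs verified against original sources"),
]

# Tools classified as unacceptable or high-risk are excluded from use
allowed = [t for t in inventory
           if t.risk in (RiskLevel.LIMITED, RiskLevel.MINIMAL)]
for tool in allowed:
    print(f"{tool.name}: {tool.use_case} ({tool.risk.value} risk)")
```

In practice such an inventory would also record the control method and date of use, matching the disclosure fields our reports already require.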
Within Insuco, the effective, efficient, and ethical use of AI systems requires prior awareness and training in our AI policy.
7. Responsibility
Insuco assumes full responsibility for the use of AI (decisions, content, accuracy) and commits to rigorously observing the principles proposed by IAIA, the AI Act, and the European GDPR regarding the professional use of AI tools and personal data protection.
For us, this implies mapping and classifying the AI systems we use, being transparent about how we use them, raising awareness and training our teams, and ensuring continuous monitoring.