Challenging Bias in Big Data Used for AI and Machine Learning

Raising awareness about the negative impacts of the lack of a critical and ethical approach to techEd.

Project information

Project website

Funding program

Erasmus+ KA2. Strategic Partnerships for Higher Education.

Project funding


Project duration

30/12/2022 – 29/06/2025


Artificial Intelligence is part of our everyday life. From the algorithms that recognise our faces as we walk down the street, feeding police biometric security services, to the algorithms that choose the advertising we see on social media, AI is everywhere. But although machine learning and AI are built on mathematics, they are not always right, because the data processed to reach any conclusion can be, and often is, biased. Social sciences have studied human bias for many years: it arises from implicit associations that reflect biases we are not conscious of, and it can result in multiple negative outcomes. AI and ML are not designed to make ethical decisions; there is no algorithm for ethics. A model will always make predictions based on how the world works today, thereby reinforcing the bias and discriminatory practices that are systemically rooted in our societies. With the widespread adoption of AI and ML technologies, often owned by big tech companies whose only objective is profit, there is an urgent need to bring a human-centred approach to tech and to use it to solve social problems instead of contributing to them. Higher Education (HE), Adult Education (AE) and Youth require new and innovative curricula that can close this skills gap and equip learners with the knowledge and skills to contribute to a more ethical approach to tech development. The need to make tech education more human is aligned with the Digital Education Action Plan, which includes specific actions to address the ethical implications and challenges of using AI and data in education and training.
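The mechanism described above, a model trained on biased historical data faithfully reproducing that bias, can be illustrated with a minimal sketch. Everything here is hypothetical: the data is invented, and the "model" is just a most-frequent-outcome rule standing in for any learner that minimises error on historical records.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed historical hiring records:
# applicants from group "A" were hired 80% of the time,
# applicants from group "B" only 20% of the time.
historical_data = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 20 + [("B", False)] * 80
)

def train_majority_model(records):
    """'Learn' the most frequent outcome per group -- a stand-in for
    any model that optimises accuracy on the historical data."""
    outcomes = defaultdict(Counter)
    for group, hired in records:
        outcomes[group][hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(historical_data)

# Two otherwise identical candidates, differing only in group
# membership, receive different predictions: the model has
# encoded the historical bias, not any measure of merit.
print(model["A"])  # True
print(model["B"])  # False
```

Nothing in the training step is "unethical" in itself; the model simply mirrors the world as recorded in its data, which is exactly why a critical and ethical approach to the data is needed.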

The CHARLIE project aims to challenge the bias in big data used for AI and machine learning by raising awareness of the negative impacts of the lack of a critical and ethical approach to techEd. CHARLIE's main objectives are to increase the capacity of HE institutions, their teachers and their students to provide learning opportunities that meet society's needs, and to create synergies between HE institutions, AE organisations and Youth organisations.

The proposed activities are:

  • Development activities (Competency Matrices, OERs, toolkits, policy recommendations)
  • Implementation activities (pilots, external stakeholder consultations, webinars)
  • Quality and Evaluation (peer-reviews, quality checks)
  • Communication and promotion (webinars, national and international conferences)

Its results include: Competency Matrices for the learning programmes – EQF6 (HE), EQF4 (AE), EQF2 (Youth); an Algorithmic Bias course (HE); an Algorithmic Bias toolkit for synchronous sessions (HE) and a guideline for boosting the capacity of university administrators/management (HE); webinars to foster peer learning and discuss the role of HE institutions in techEd; an Ethical AI microcredential for adult learners; and a digital serious game for Youth, complemented by policy recommendations for the recognition of microcredentials in HE.