Abstract
Artificial Intelligence (AI) is becoming increasingly integrated into many aspects of daily life, from healthcare and finance to social media and law enforcement. While AI has the potential to enhance efficiency and innovation, concerns about bias within AI systems have emerged. Because public perception of AI bias remains unclear, it is crucial to understand whether the public can trust these technologies. This knowledge gap can impede the effective deployment and acceptance of AI systems, potentially leading to public scepticism and resistance. The study was guided by the following objectives: to explore the public's perception of AI and to evaluate the general public's awareness of AI technologies.

The study employed a qualitative research approach, drawing on data from social media platforms such as WhatsApp, Twitter, Facebook, YouTube, Snapchat, and Instagram.

The study found that awareness of AI technologies varies significantly across demographic groups. Younger individuals with higher levels of education demonstrated greater awareness of AI and its applications, and higher awareness of AI bias correlated with lower trust in AI technologies. A considerable portion of the public is aware of the concept of AI, though the depth of understanding differs, and trust varied with the type of AI application. Media exposure also plays a significant role in shaping public perception: those who consume more news and media content related to AI have a more nuanced understanding of its benefits and risks. Individuals who had interacted directly with AI technologies, such as chatbots, exhibited different levels of trust than those who had not. The public expressed concerns about the transparency and accountability of AI systems, with trust varying according to how transparent and understandable AI processes are perceived to be. Overall, the study found a complex relationship between awareness and trust: increased awareness of AI's potential biases led either to greater scepticism or to greater trust, the latter arising from a better understanding of how these issues are being addressed.

The study recommends expanded public education to improve understanding of AI technologies, including their benefits, risks, and potential biases. It encourages AI developers to adopt transparent practices, such as clearly explaining how AI systems make decisions and what data they use; such transparency can help build trust by demystifying AI processes. Finally, there is a need to create platforms for public engagement and feedback on AI technologies, since involving the public in discussions about AI development and deployment can help address concerns and build trust.
