Public Perception of AI: Awareness and Trust in Artificial Intelligence (AI)
Lizzy Zinyemba, PhD (Bindura University of Science Education)
Chido Joana Ndoro (Bindura University of Science Education)
Munyaradzi Ashley Zinyemba (Chinhoyi University of Technology)
Abstract
Artificial Intelligence (AI) is becoming increasingly integrated into various aspects of daily life, from healthcare and finance to social media and law enforcement. While AI has the potential to enhance efficiency and innovation, concerns about bias within AI systems have emerged. Because public perception of AI bias remains unclear, it is crucial to understand whether and why the public trusts these technologies. This knowledge gap can impede the effective deployment and acceptance of AI systems, potentially leading to public scepticism and resistance. The study was guided by two objectives: to explore the public’s perception of AI and to evaluate the general public’s awareness of AI technologies. It employed a qualitative research approach, using online ethnography on platforms such as WhatsApp, Twitter, Facebook, YouTube, Snapchat, and Instagram. The study found that awareness of AI technologies varies significantly across demographic groups: younger individuals with higher levels of education demonstrated greater awareness of AI and its applications, and higher awareness of AI bias correlated with lower trust in AI technologies. A considerable portion of the public is aware of the concept of AI, though the depth of understanding differs, and trust varied according to the type of AI application. Media exposure plays a significant role in shaping public perception; those who consume more AI-related news and media content hold a more nuanced understanding of its benefits and risks. Individuals who had interacted directly with AI technologies, such as chatbots, exhibited different levels of trust from those who had not. The public expressed concerns over the transparency and accountability of AI systems, with trust varying according to how transparent and understandable AI processes are perceived to be. The study found a complex relationship between awareness and trust: increased awareness of AI’s potential biases led either to greater scepticism or to greater trust, depending on how well respondents understood how these issues are being addressed. The study recommends increased public education to enhance understanding of AI technologies, including their benefits, risks, and potential biases. It encourages AI developers to adopt transparent practices, such as clearly explaining how AI systems make decisions and what data they use; transparency can help build trust by demystifying AI processes. Finally, there is a need to create platforms for public engagement and feedback on AI technologies, since involving the public in discussions about AI development and deployment can help address concerns and build trust.
Keywords: Artificial Intelligence, public perception, public awareness, trust, transparency, accountability of AI systems.
___________________________
Introduction
Artificial Intelligence (AI) is rapidly transforming various aspects of human life, from healthcare and finance to social media, law enforcement, and academia. Despite its growing presence, public perception and awareness of AI technologies vary widely. Understanding these perceptions is crucial for guiding policymaking, ensuring ethical AI deployment, and fostering trust between AI developers and users. This study, guided by the Technology Acceptance Model (TAM), explores the public’s perception of AI and evaluates the general public’s awareness of AI technologies. The research is particularly relevant for countries like Zimbabwe, where AI use is on the rise, and it fills a knowledge gap regarding Zimbabweans’ perceptions and awareness of AI technologies. The study addresses the increasing integration of AI into daily life and, given concerns about AI bias, the importance of public trust. Its objectives are to explore the public’s perception of AI and to evaluate the general public’s awareness of AI technologies.
___________________________
Historical Background of Artificial Intelligence
Artificial Intelligence (AI) has a rich history, rooted in the quest to create machines that can simulate human thought and behaviour. Its evolution can be traced through various milestones spanning centuries of theoretical speculation, scientific exploration, and technological advancement [Hou et al., 2025]. The concept of Artificial Intelligence predates the development of modern computers. Ancient Greek mythology depicts mechanical beings such as Talos, a giant bronze robot forged by Hephaestus, the god of fire and the forge, to protect the island of Crete from invasion [Fleck, 2018]. Bates [2024] asserts that in the 17th century, thinkers such as René Descartes and Gottfried Wilhelm Leibniz speculated about machines capable of mechanical reasoning, laying the groundwork for AI concepts.
The formalisation of logic and computation theory marked the early steps toward AI. Alan Turing, often regarded as the father of computer science, introduced the concept of a “universal machine” in 1936 that could perform computations like a modern computer [Daylight, 2015]. During World War II, Turing’s work on breaking the Enigma code highlighted the potential of machines to process information. The term “Artificial Intelligence” was coined in 1956 during the Dartmouth Summer Research Project on Artificial Intelligence, organised by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon [van Assen, 2022]. This event is widely regarded as the official birth of AI as a field of study. Researchers aimed to create machines that could “think” like humans, solving problems and learning from data. Early AI programmes, such as Logic Theorist (1956) and General Problem Solver (1957), showcased the potential of machine reasoning. Early optimism led to high expectations, but progress was hindered by limitations in computational power, a lack of large datasets, and insufficient funding. The resulting period of reduced interest and investment, beginning in the 1970s, became known as the “AI Winter” [van de Sande et al., 2022]. However, foundational work continued, particularly in machine learning, knowledge representation, and expert systems.
According to Deng [2018], AI experienced a renaissance in the 1990s due to developments in computing power, the growth of the internet, and the availability of larger datasets. Machine learning algorithms, particularly neural networks, began to achieve significant success. Notable milestones include IBM’s Deep Blue defeating chess champion Garry Kasparov in 1997. The explosion of big data and improvements in Graphics Processing Units (GPUs) accelerated the development of AI [Deng, 2018]. Deep learning, a subset of machine learning, enabled breakthroughs in natural language processing, computer vision, and robotics. Innovations like Google’s AlphaGo defeating world Go champion Lee Sedol in 2016, and the rise of virtual assistants like Siri and Alexa, demonstrated AI’s potential in everyday applications. The rapid advancement of AI has sparked discussions on its ethical implications, including privacy concerns, job displacement, and the need for responsible AI governance [Camilleri, 2024]. These discussions have become central to ensuring that AI development aligns with societal values and human well-being. The historical trajectory of AI illustrates a journey from philosophical musings to a transformative technology shaping the modern world [Deng, 2018]. While the field has faced challenges, ongoing innovations and interdisciplinary efforts continue to push the boundaries of what AI can achieve, offering profound possibilities for the future. The public’s perception and acceptance of Artificial Intelligence have evolved over time, shaped by technological advancements, media portrayals, and societal experiences. Camilleri [2024] notes that while some view AI as a transformative force, others approach it with scepticism, often driven by ethical, economic, and existential concerns.
In the initial stages of AI development, mainly in the 1950s and 1960s, there was great excitement about AI’s potential. Researchers and the public projected a future in which machines could solve complex problems and contribute to everyday human life. The Dartmouth Conference of 1956 embodied this spirit of optimism, with scientists believing that human-level intelligence in machines could be achieved within a few decades [McCarthy et al., 1956]. However, these high expectations were tempered by the technical challenges of creating a truly intelligent system. The gap between public aspirations and the practical realities of AI research eventually led to the “AI Winter” of the 1970s and 1980s, during which enthusiasm faded and funding for AI projects declined [Minsky, 1991].
The resurgence of AI in the 1990s, prompted by improvements in computing power and the success of systems like IBM’s Deep Blue, reignited public interest in AI. Ensmenger [2012] notes that in the 1990s, AI was viewed as a tool designed to solve specific problems like playing chess or diagnosing medical conditions. Through this pragmatic focus, AI became more acceptable to the public, as its applications were viewed in a complementary sense rather than as a threat to human capabilities [Brynjolfsson and McAfee, 2017].
___________________________
Contemporary Views: Mixed Perceptions
AI is widely accepted in fields like healthcare, where it improves diagnostics and treatment outcomes. A survey by the Pew Research Center [2018] found that 63% of Americans viewed AI as a tool for societal improvement, particularly in medicine and education. Issues such as privacy, bias, and accountability have made some segments of society sceptical about AI. The Cambridge Analytica scandal in 2018, which involved the misuse of AI-driven data analysis, heightened concerns about the potential misuse of AI technologies [West, 2018]. Trust in AI systems remains a key determinant of public acceptability. Studies show that people are more likely to accept AI when they understand how it works and perceive it as transparent and fair [Shin, 2020]. A lack of transparency often leads to fears of manipulation or misuse, as seen with algorithmic decision-making in hiring and law enforcement.
Younger generations, often described as “digital natives,” tend to have higher levels of trust in and acceptance of AI compared to older individuals, possibly due to greater exposure to technology. Cultural factors also play a role in whether societies accept or mistrust AI; for instance, societies with a strong emphasis on technological innovation, like Japan, view AI more favourably than those with a more cautious approach, such as some European countries [Vinuesa et al., 2020]. The acceptability of AI has been shaped by the benefits and risks perceived at different points in history. Fast and Horvitz [2017] note that while optimism in the early days gave way to scepticism during the AI Winter, contemporary AI applications have led to a more nuanced public view. Ensuring ethical practices, transparency, and equitable benefits is crucial for fostering long-term acceptance of AI technologies. AI has been described as a double-edged sword, offering significant benefits while raising concerns about ethics, privacy, and job displacement [Brynjolfsson and McAfee, 2017]. Although AI has become increasingly integrated into everyday technologies such as virtual assistants and automated customer services, public awareness of its capabilities and limitations remains inconsistent. Moreover, trust in AI systems is critical to their adoption, as users are less likely to engage with technologies they do not trust [Shin, 2020]. Studies have shown that perceptions of AI often hinge on media portrayals, which sometimes exaggerate its capabilities or potential risks [West, 2018]. This can lead to both inflated expectations and unwarranted fears. By exploring public awareness and perceptions, this research contributes to a deeper understanding of how society interacts with AI and identifies opportunities to enhance education and communication around its use.
Artificial Intelligence (AI) has begun to make significant inroads in Africa, transforming healthcare, agriculture, education, and finance. Notwithstanding its potential to address the continent’s developmental challenges, the acceptability, public perception, and trust in AI technologies vary according to regions and demographic groups. AI development in Africa has primarily been driven by technological innovations tailored to local challenges. AI-based diagnostic tools for healthcare, predictive models for agricultural yields, and natural language processing for indigenous languages have demonstrated AI’s potential to improve the quality of life [Cisse et al., 2020]. South Africa, Kenya, Nigeria, and Rwanda are emerging as leaders in AI adoption, supported by investments in innovation hubs and partnerships with international tech firms [World Bank, 2022]. However, the continent of Africa still faces challenges that include inadequate digital infrastructure, limited access to high-quality data, and a lack of skilled AI professionals. These factors have slowed down the pace of AI adoption compared to other regions of the world [Bright and Hruby, 2020].
The acceptability of AI in Africa is influenced by its relevance to local contexts. AI solutions that address pressing socio-economic issues have garnered support in areas of healthcare access and agricultural productivity. For instance, in Rwanda, the use of AI-powered drones for delivering medical supplies has been widely praised for improving access to essential services in remote areas [Zipline, 2021]. AI-driven mobile applications for diagnosing plant diseases have been readily accepted by smallholder farmers in Kenya and Uganda, demonstrating the technology’s utility in agriculture [AGRA, 2020]. However, the lack of public awareness and understanding of AI remains a barrier. Many people are unfamiliar with how AI works and its possible benefits, leading to scepticism in some communities [Adebayo et al., 2021].
Public perceptions of AI in Africa have been shaped by a mix of optimism, apprehension, and curiosity. Many in Africa view AI as a tool for leapfrogging developmental gaps, and AI’s ability to provide cost-effective, scalable solutions for healthcare, education, and agriculture is widely appreciated [Bright and Hruby, 2020]. However, concerns about job displacement, data privacy, and ethical issues have tempered this optimism. For example, the automation of tasks in the financial sector has raised fears of unemployment, particularly among young people. At the same time, growing interest in AI is evident among Africa’s youth, with an increasing number of young people participating in AI-focused hackathons, coding boot camps, and innovation hubs [World Economic Forum, 2021].
Trust in AI technologies is a critical factor for adoption in Africa. There is growing trust in AI solutions that demonstrate tangible benefits, but there are concerns about transparency and accountability, with many users unsure about how AI systems make decisions, especially in critical areas such as loan approvals and medical diagnoses [Adebayo et al., 2021]. Users are also concerned with data sovereignty, questioning the storage and use of African data by international companies, raising issues about data privacy and security [Makulilo, 2019]. Cultural relevance is also a concern because AI systems that fail to account for cultural and linguistic diversity are less trusted, particularly in rural areas where traditional practices still dominate [Cisse et al., 2020].
To enhance trust and acceptability, African governments, organisations, and developers must prioritise public education, raising awareness of AI technologies and their benefits through community engagement and educational campaigns; develop frameworks that ensure transparency, accountability, and fairness in AI systems; and create AI systems tailored to African languages, cultures, and socio-economic contexts to increase relevance and usability. AI has immense potential to drive sustainable development in Africa, but its acceptability, public perception, and trust are contingent on how well the technology aligns with local needs and values. Addressing barriers such as limited public awareness, ethical concerns, and infrastructure gaps will be crucial for maximising AI’s impact on the continent.
There has been a gradual uptake of AI in Zimbabwe, particularly in sectors like healthcare, agriculture, and finance. In healthcare, AI-powered tools are being used to enhance diagnostics and streamline healthcare delivery; for example, mobile health applications such as Plus263Health and Period Tracker leverage AI to provide health information and connect patients to medical professionals [Mutambara et al., 2023]. In agriculture, AI-driven solutions are being introduced to improve farming practices, including predictive analytics for weather forecasting and crop management tools that help farmers optimise yields in the face of climate change [FAO, 2022]. In financial services, fintech companies in Zimbabwe (EcoCash, Sasai, OneMoney, Telecash) are adopting AI for credit scoring, fraud detection, and personalised financial solutions, improving access to financial services for previously underserved populations [Reserve Bank of Zimbabwe, 2022]. Despite these advancements, the lack of robust digital infrastructure, limited AI expertise, and insufficient government policies remain significant barriers to AI development in Zimbabwe [Mawere, 2021].
The acceptability of AI in Zimbabwe is closely tied to its perceived relevance to the country’s challenges: AI solutions addressing healthcare and agriculture have seen relatively high levels of acceptance because of their direct impact on livelihoods. For instance, AI-driven chatbots providing farming advice are widely used by small-scale farmers [FAO, 2022]. In contrast, AI adoption in other areas is slower due to low levels of digital literacy and public awareness of the technology. Many Zimbabweans remain unaware of, or poorly informed about, what AI entails, which limits its acceptance beyond niche applications [Mawere, 2021].
Public perceptions of AI in Zimbabwe are shaped by a mix of optimism and scepticism, with many Zimbabweans viewing AI as a potential tool for solving developmental challenges. Young people, particularly in urban areas, are enthusiastic about the opportunities AI could create in education, entrepreneurship, and the job market [TechZim, 2023]. However, scepticism is widespread where the benefits of AI are not immediately apparent. Concerns include fear of job losses due to automation and a lack of trust in AI systems perceived as opaque or biased [Mutambara et al., 2023]. Zimbabwean society, particularly in rural areas, is deeply rooted in traditional practices, and this cultural orientation sometimes leads to resistance to technologies like AI that are seen as foreign or incompatible with local customs [Mawere, 2021].
Trust in AI systems in Zimbabwe is influenced by transparency and accountability: many users are hesitant to trust AI systems because they have only a limited understanding of how they work, and the lack of clear guidelines on the ethical use of AI exacerbates this [Reserve Bank of Zimbabwe, 2022]. Concerns about data protection and misuse are significant, particularly in the financial and health sectors, where the absence of strong data protection laws undermines public trust in AI applications [Mawere, 2021]. AI systems tailored to local languages and contexts are more trusted; efforts to develop AI tools in indigenous languages, such as Shona and Ndebele, have improved acceptance among users [Mutambara et al., 2023].
To improve public trust and the acceptability of AI in Zimbabwe, stakeholders must enhance public awareness through education campaigns and community engagement initiatives that demystify AI and highlight its benefits; develop ethical AI policies by establishing regulations that promote transparency, fairness, and accountability in AI systems; and foster local innovation by encouraging local developers to create AI solutions tailored to Zimbabwe’s socio-economic and cultural contexts, which will enhance relevance and trust.
As AI technologies continue to gain prominence globally, there remains a limited understanding of how populations in developing contexts, such as Zimbabwe, perceive and engage with these technologies. This study thus becomes significant, as AI development in Zimbabwe holds immense potential to address pressing developmental challenges, but its acceptability and trust depend on how well the technology is integrated into local contexts. Building public awareness, addressing ethical concerns, and creating locally relevant solutions are critical steps toward maximising AI’s impact in Zimbabwe.
___________________________
Technology Acceptance Model (TAM)
The study is guided by the Technology Acceptance Model (TAM), which explains how users come to accept and use technology [Davis, 1989]. TAM posits that perceived usefulness and perceived ease of use are the two main factors influencing users’ attitudes toward technologies such as AI. For this study, public perception of AI aligns with perceived usefulness, while awareness of AI technologies corresponds to perceived ease of use. This framework helps contextualise how awareness and perceptions influence trust and adoption.
The Technology Acceptance Model (TAM), introduced by Davis [1989], is a widely recognised framework for understanding user acceptance of technology. When Fred Davis developed the framework, “the main aim was to predict and explain the attitude and behaviour of individuals towards new and emerging technologies in organisational settings” [Mutelo, 2025: 5731].
TAM posits that two key factors influence an individual’s decision to accept and use technology: Perceived Usefulness (PU)—the degree to which a person believes that using the technology will enhance their performance or provide benefits; and Perceived Ease of Use (PEOU)—the degree to which a person believes that using the technology will be free from effort. These factors influence attitude towards use, which in turn impacts behavioural intention to use and ultimately the actual usage of the technology. TAM is particularly relevant in examining public perception of AI, as it helps explain how awareness and trust shape AI adoption. Mutelo [2025:5732] maintains that:
“From the perspective of user acceptance, TAM can be used to explain the extent to which PU and PEOU influence an individual’s attitude, intention to use, and eventually, actual system use. The framework is often used due to its clarity, predictive power, and ease of application across different technologies and settings. The approach emphasises individual perceptions over technical features. This makes the framework a key model in human-computer interaction and technology adoption research generally.”
The Technology Acceptance Model provides a robust framework for understanding how public awareness and trust influence AI adoption. By addressing factors such as perceived usefulness, ease of use, and trust, stakeholders can enhance public perceptions of AI, fostering greater acceptance and integration of AI technologies into everyday life.
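To make the model’s causal chain concrete, the sketch below expresses TAM’s relationships in Python. It is a minimal illustration only: the weights and the 1-7 Likert-style scores are hypothetical, not estimates from this study or from Davis [1989].

    # Minimal illustrative sketch of the TAM causal chain:
    # PU and PEOU -> attitude towards use -> behavioural intention.
    # All weights and scores are hypothetical, for illustration only.

    def attitude_towards_use(pu: float, peou: float,
                             w_pu: float = 0.6, w_peou: float = 0.4) -> float:
        """Attitude as a weighted blend of Perceived Usefulness (PU)
        and Perceived Ease of Use (PEOU), each scored on a 1-7 scale."""
        return w_pu * pu + w_peou * peou

    def behavioural_intention(attitude: float, pu: float,
                              w_att: float = 0.7, w_pu: float = 0.3) -> float:
        """Intention to use, driven by attitude plus a direct PU effect."""
        return w_att * attitude + w_pu * pu

    # Example: a respondent who finds AI very useful (6/7) but hard to use (3/7).
    pu, peou = 6.0, 3.0
    att = attitude_towards_use(pu, peou)
    print(f"attitude = {att:.2f}, intention = {behavioural_intention(att, pu):.2f}")

On these hypothetical numbers, a system seen as very useful but hard to use still yields a moderately high intention score, mirroring the model’s finding that perceived usefulness tends to outweigh ease of use.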
___________________________
Methodology
Given the rising importance of digital platforms in shaping public opinion, online ethnography offered a rich, unobtrusive method for capturing real-time public discourses on AI. The study made use of a qualitative research approach, employing online ethnography, which creates data through computer-mediated social interaction like X, formerly known as Twitter; Facebook, YouTube, Snapchat, Instagram, and WhatsApp [Ward, 1999]. The data were collected from September 2025 to December 2025. The researchers sought and joined eight relevant groups on computer-mediated social interaction over mobile phones to discuss issues of AI. As for YouTube, Instagram, and Snapchat, the researchers depended on the comments that were posted by subscribers/followers of the researchers’ accounts. Data were analysed thematically using NVivo. The following ethical issues were observed: online safety, digital well-being, cyber protection, voluntary participation, anonymity, confidentiality, and the right to withdraw.
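To illustrate the thematic approach (the study’s actual coding was carried out in NVivo), the following Python sketch shows how anonymised comments could be tagged against themes using simple keyword matching. The theme labels and keywords are hypothetical stand-ins, not the study’s actual coding frame.

    # Hypothetical keyword-based thematic tagging of anonymised comments.
    # The study's coding was done in NVivo; the themes and keywords below
    # are illustrative stand-ins, not the actual coding frame.

    from collections import defaultdict

    THEMES = {
        "awareness": ["chatbot", "virtual assistant", "what is ai"],
        "trust": ["privacy", "data", "security", "transparent"],
        "livelihoods": ["job", "unemployment", "automation", "farming"],
    }

    def tag_comment(comment: str) -> list[str]:
        """Return every theme whose keywords appear in the comment."""
        text = comment.lower()
        return [theme for theme, keywords in THEMES.items()
                if any(kw in text for kw in keywords)]

    comments = [
        "AI chatbots answer my farming questions quickly.",
        "I worry about where my personal data is stored.",
    ]
    theme_counts = defaultdict(int)
    for c in comments:
        for theme in tag_comment(c):
            theme_counts[theme] += 1
    print(dict(theme_counts))  # {'awareness': 1, 'livelihoods': 1, 'trust': 1}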
___________________________
Findings
Awareness of AI
The study revealed that most participants had some familiarity with AI, primarily through applications such as virtual assistants on smartphones and chatbots. However, some participants expressed uncertainty about what AI entails, indicating gaps in basic awareness. The findings highlight a disparity between familiarity with AI applications and understanding of their broader implications. This aligns with previous studies suggesting that public knowledge of AI is often superficial [Shin, 2020]. The depth of understanding also differed among respondents: use of AI on social media (e.g., WhatsApp) and through chatbots was very high compared with more technical applications, such as productivity and finance tools.
Perceptions of AI
Participants expressed mixed feelings about AI. While some viewed AI as beneficial, particularly in healthcare, where it improves diagnostics and patient outcomes, others expressed concerns about the potential for AI to replace human workers, particularly in industries like manufacturing, customer service, and transportation. The concern is that automation will lead to economic hardship for those unable to find new roles that require different skills. Comments across most social media platforms revealed significant worry about privacy and data security. Scholars like Fast and Horvitz [2017] and Brynjolfsson and McAfee [2017] describe AI as offering significant benefits while raising concerns about ethics, privacy, and job displacement. Mutambara et al. [2023] further note that AI systems tailored to local languages and contexts are more trusted and better accepted by users. One commentator from Instagram reported that “AI will soon surpass human control, threatening humanity if not properly managed.”
The study found that perceptions of AI are largely influenced by demographics such as age and geographical location. Across most social media platforms, younger users reported familiarity with AI and cited its personal benefits for their education, in contrast to older generations, who showed a lack of familiarity. The findings point to generational and educational differences in awareness, suggesting that targeted educational initiatives could bridge the knowledge gap. In support, Shin [2020] points out that awareness improves when people understand how AI works and perceive it as transparent. Shin [2020] and West [2018] further point out that media portrayals also need to be addressed, as they can exaggerate AI’s capabilities or potential risks. One respondent had this to say: “The young generation is quick to adopt new AI tools and technological advancements as they continuously integrate AI with their daily lives and work.”
The research also noted that perceptions were influenced by geographical location in Zimbabwe, with most urban residents reporting high levels of familiarity with AI compared to their rural counterparts. Young urban residents perceive AI as beneficial and accessible, reflecting the TAM proposition that higher perceived usefulness and ease of use contribute to more positive attitudes towards technology. Upon further probing, the study also found that rural residents were not familiar with AI due to unstable or unavailable internet connectivity. One commentator from Facebook remarked: “In rural Zimbabwe, there are infrastructure challenges, with most rural areas having no electricity, let alone WiFi.”
Perceptions were also reported to be influenced by the media, which serves as a powerful lens through which individuals view and interpret the world. From traditional news outlets to social media, the content people consume shapes their beliefs, attitudes, values, and even their understanding of reality. The research further noted that in most rural areas people had access mainly to radio, whereas their urban counterparts were exposed to a wider range of multimedia. In agreement, one commentator from Facebook made this remark: “Media has more influence than explicitly what it tells people, but also how it frames information, what it chooses to highlight, and what it omits.”
Trust in AI
Trust levels were moderate, with participants citing transparency, reliability, and ethical alignment as key factors influencing their trust in AI systems. Younger participants showed higher trust levels than older individuals, possibly due to greater exposure to technology. The study further noted that many people are sceptical of AI systems that make decisions affecting their lives, especially when they cannot fully comprehend how the decisions are made. Trust in AI is shaped by perceptions of its reliability and ethical use, consistent with the TAM framework. One respondent from Snapchat indicated: “There is general fear over the storage of personal data, which raises a lot of questions about privacy and security.”
Familiarity with how AI works, and with its potential benefits and risks, was mixed, leading to either exaggerated fears or misplaced trust. The respondents felt there was a knowledge gap between the reality of AI and public understanding. Most people do not understand how AI works; their imaginations often fill in the blanks with narratives drawn from science fiction, sensationalised media reports, or worst-case scenarios. One respondent from X remarked: “For an average person, who may not have a background in computer science, statistics, or machine learning, comprehending the intricacies of algorithms, neural networks, and data processing can be daunting.”
A lack of understanding of how AI processes data can lead to mistrust. Many social media comments reflected the view that interaction with the technology is a malicious attempt to track and control individuals. This creates scepticism and fear about the reliability of AI. One respondent indicated: “...even experts may find it difficult to fully explain how certain outcomes are achieved.”
The respondents expressed concern over errors that AI can make because of biased data. Some respondents viewed this as potentially leading to the belief that AI is inherently unreliable or prone to catastrophic failures, undermining trust even in beneficial applications. Others were concerned about who would be held responsible if AI made a mistake or caused harm; this lack of clarity about accountability raised significant concerns. One respondent had this to say: “People are unsure who should be responsible for AI’s ability to make decisions that could conflict with human values and ethics.”
___________________________
Recommendations
Taking into account the significant differences in AI awareness across population groups, in particular between younger, more educated people and other demographic categories, the research strongly recommends the implementation of mass education programmes to increase general awareness of artificial intelligence. Such efforts must not only explain what AI is and how it works but also discuss its implications for society, including the benefits it may bring, the ethical issues it may raise, and the reality of algorithmic bias. Such educational activities would reduce unjustified distrust and increase constructive engagement with AI technologies.
To overcome the transparency issue raised by the public regarding AI decision-making, the paper suggests that developers and organisations adopt transparent AI development methods. This involves openly accessible explanations of how AI systems function, the data on which they are based, and how outputs are produced, particularly in high-impact areas such as healthcare, finance, and law enforcement. The more users believe that AI systems are comprehensible and that their logic can be explained, the more they feel in control, directly supporting both perceived ease of use and perceived usefulness. Such openness is not only an ethical mandate but also an effective means of developing trust and reducing resistance rooted in doubt or fear of the unknown.
Given that global AI systems often fail to capture local contexts, the study also suggests building local AI technologies that are culturally and linguistically sensitive to African populations. AI tools that consider local languages, norms, values, and user expectations in their design are less likely to be viewed as irrelevant, unusable, or unhelpful. Such localisation can increase perceived usefulness and ease of use, the major factors in the Technology Acceptance Model, since, instead of imposing alien systems on users, these AI applications respond to the realities of users’ daily lives. Such cultural congruence can, in turn, promote greater acceptance and long-term engagement among different populations in Africa.
Lastly, the paper recommends establishing a strong ethical governance framework for AI that is inclusive and context-dependent. This framework must entail multi-stakeholder cooperation, for example among governmental agencies, civil society, technologists, and the general population, to establish guidelines on fairness, accountability, data security, and bias reduction. Notably, this governance framework should be sensitive to local demands and grounded in the actual experiences of local users. With ethical oversight built into the AI lifecycle, societies would be able to anticipate the issues of accountability and transparency that AI introduces at the most fundamental levels, enhancing trust and building an environment in which AI is not merely a technologically advanced tool but a socially accepted and valued one.
___________________________
Conclusion
The study concludes that while public awareness of AI technologies is growing, significant gaps remain in understanding its full scope and potential. Perceptions of AI are shaped by a combination of personal experience, media influence, and societal narratives, with trust being a pivotal factor for its adoption. The findings highlight a disparity between familiarity with AI applications and an understanding of their broader implications. To address these issues, stakeholders must prioritise education, transparency, and ethical considerations in AI development and deployment. The study recommends public education initiatives to enhance public understanding of AI technologies, transparent AI development practices, and the development of AI systems tailored to local languages and cultures to increase relevance and usability. Future studies could explore longitudinal changes in public perception as AI becomes more embedded in Zimbabwe’s economy and public services.
___________________________
References
Adebayo, A.D., Olayemi, A., and Oyinlola, M., 2021. Public attitudes towards Artificial Intelligence in sub-Saharan Africa, African Journal of Technology and Development 8/2: 45–58.
AGRA, 2020. The potential of AI to transform agriculture in Africa, Alliance for a Green Revolution in Africa.
Bates, D.W., 2024. An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age, Chicago: University of Chicago Press.
Bright, J. and Hruby, A., 2020. The promise and peril of artificial intelligence in Africa, Harvard International Review 42/3: 40–44.
Brynjolfsson, E. and McAfee, A., 2017. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W.W. Norton & Company.
Camilleri, M. A., 2024. Artificial intelligence governance: Ethical considerations and implications for social responsibility, Expert Systems 41/7: e13406.
Cisse, M., Sanou, J., and Diop, A., 2020. Addressing Africa’s AI challenges: Building data infrastructure and capacity, Journal of AI Research for Development 15/1: 10–22.
Davis, F. D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13/3: 319–340.
Daylight, E. G., 2015. Towards a historical notion of ‘Turing—the father of Computer Science’, History and Philosophy of Logic 36/3: 205–228.
Deng, L., 2018. Artificial intelligence in the rising wave of deep learning: The historical path and future outlook [perspectives], IEEE Signal Processing Magazine 35/1: 177–180.
Ensmenger, N., 2012. Is chess the Drosophila of artificial intelligence? A social history of an algorithm, Social Studies of Science 42/1: 5–30.
FAO, 2022. The use of Artificial Intelligence in African agriculture: Zimbabwe case study, Food and Agriculture Organisation of the United Nations.
Fast, E. and Horvitz, E., 2017. Long-term trends in the public perception of artificial intelligence, in Proceedings of the AAAI Conference on Artificial Intelligence 31/1.
Fleck, J., 2018. Development and establishment in artificial intelligence, in The Question of Artificial Intelligence, ed. J. Fleck, London: Routledge: 106–164.
Hou, J., Zheng, B., Li, H., and Li, W., 2025. Evolution and impact of the science of science: from theoretical analysis to digital-AI driven research, Humanities and Social Sciences Communications 12/1: 1–9.
Makulilo, A., 2019. Data protection and AI in Africa: Sovereignty in the digital age, Journal of Law, Technology & Society 16/4: 567–585.
Mawere, M., 2021. Challenges and opportunities of AI adoption in Zimbabwe, Journal of African Technological Progress 10/1: 23–35.
McCarthy, J., Minsky, M.L., Rochester, N., and Shannon, C.E., 1956. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
Minsky, M., 1991. Logical versus analogical or symbolic versus connectionist AI, AI Magazine 12/2: 34–51.
Mutambara, E., Moyo, T., and Sibanda, N., 2023. Public perceptions and use of AI in healthcare and agriculture in Zimbabwe, African Journal of Innovation Studies 7/2: 45–58.
Mutelo, I., 2025. Understanding the Generative Artificial Intelligence Revolution in Zambian Higher Education Research: Adoption, Challenges, and Strategies for Responsible Integration, International Journal of Research and Innovation in Social Science IX/IIIS: 5731–5737.
Pew Research Center, 2018. Public attitudes toward artificial intelligence and robotics.
Reserve Bank of Zimbabwe, 2022. AI in financial services: Opportunities and risks in Zimbabwe.
Shin, D., 2020. User perceptions of algorithmic decisions in the personalised AI system: Perceptual evaluation of fairness, accountability, and transparency, Computers in Human Behavior 108: 106203.
TechZim, 2023. The rise of Artificial Intelligence in Zimbabwe: A youth perspective.
Van Assen, M., Muscogiuri, E., Tessarin, G., and De Cecco, C.N., 2022. Artificial intelligence: A century-old story, in Artificial Intelligence in Cardiothoracic Imaging, ed. M. van Assen, Cham: Springer International Publishing: 3–13.
Van de Sande, D., Van Genderen, M.E., Smit, J.M., Huiskens, J., Visser, J.J., Veen, R.E., and Van Bommel, J., 2022. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter, BMJ Health & Care Informatics 29/1: e100495.
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., and Dignum, V., 2020. The role of Artificial Intelligence in achieving the Sustainable Development Goals, Nature Communications 11/1: 233.
West, D.M., 2018. The Future of Work: Robots, AI, and Automation, Washington, D.C.: Brookings Institution Press.
World Bank, 2022. The rise of Artificial Intelligence in Africa: Opportunities and challenges.
World Economic Forum, 2021. Artificial Intelligence and the future of work in Africa.
Zipline, 2021. Transforming healthcare delivery in Africa through drone technology, Zipline Africa Annual Report.