Investigating the Role of Stakeholder Engagement in Artificial Intelligence Governance and Policy Making: A Case Study of Zimbabwe.

Paul Sambo, PhD

Great Zimbabwe University (Masvingo, Zimbabwe)

Abstract

Stakeholder engagement is a crucial aspect of effective governance and policy-making in the field of artificial intelligence (AI). In Zimbabwe, the role of stakeholder engagement in the development and implementation of AI governance and policies has not been extensively studied. This research aims to fill this gap by using Actor-Network Theory (ANT) to explore the network of actors involved in AI governance and policies in Zimbabwe and how their interactions and relationships influence outcomes. A case study approach was used, incorporating qualitative methods, including interviews and a literature review. The research identified key stakeholders, including chief executive officers from industry and the public sector, civil society organisations, ICT experts, and users, and examined their roles and relationships within the network. By applying ANT, the study uncovered the power dynamics and interests of these actors and how they shape the development and implementation of AI governance and policy-making in Zimbabwe. The findings have implications for other countries and regions seeking to develop and implement AI governance and policies, and the study contributes to the growing body of research on stakeholder engagement in the field of AI governance.

Keywords: Stakeholder Engagement, Artificial Intelligence, Governance, Policy-Making, Actor-Network Theory.

___________________________

Introduction

The rapid advancement of Artificial Intelligence (AI) presents both exciting opportunities and significant challenges for societies around the globe [Gordon and Gunkel: 2025, 1897-1903]. As AI technologies become more integrated into various sectors, the need for effective governance and policy frameworks grows increasingly critical [Ghosh, Saini and Barad: 2025, 1-23]. In developing countries like Zimbabwe, this need is especially pressing: the potential benefits of AI must be carefully weighed against ethical considerations, social impacts, and economic realities. Engaging a diverse array of stakeholders, such as government officials, industry leaders, academics, civil society organisations, and the public, is essential for shaping effective AI governance and policy [Cihon, Schuett and Baum: 2021, 275]. This inclusive approach ensures that multiple perspectives are considered, leading to policies that are not only more effective but also equitable. By reflecting the unique needs and values of the society they serve, such policies can help navigate the complexities of AI and foster a future in which technology benefits everyone.

Several international studies have examined the role of stakeholder engagement in artificial intelligence governance and policy-making. For example, de Castex [2021, 13] conducted a study in the Netherlands that emphasised the significance of engaging multiple stakeholders in AI governance. The research highlights that incorporating diverse perspectives can lead to more effective and ethical AI policies. The author argues that stakeholder participation is essential for grasping the societal impacts of AI and ensuring that these systems align with public values. The study suggests the creation of formal mechanisms for involvement, such as public consultations and collaborative workshops, to encourage dialogue among technologists, policymakers, and affected communities. Radu [2021, 188] examined AI governance in Switzerland and identified key stakeholders, including government bodies, industry leaders, academia, and civil society organisations. The findings indicate that effective stakeholder engagement can improve transparency and accountability in AI development. The author recommends establishing inclusive platforms for dialogue to facilitate knowledge sharing and best practices among stakeholders, as well as developing ethical AI guidelines that incorporate stakeholder feedback. Marmolejo-Ramos et al. [2022, 11] explored the role of public engagement in shaping AI policy frameworks in the United Kingdom (UK). They found that involving citizens in AI discussions helps demystify the technology and fosters trust between the public and developers. The authors advocate for educational initiatives to enhance public understanding of AI technologies and suggest incorporating citizen feedback into the policy-making process to ensure that policies meet societal needs. Pallett et al. [2024, 12] focused on the legal aspects of AI governance and underscored the importance of stakeholder engagement in addressing regulatory challenges. Their study reveals that such engagement can lead to more adaptable regulatory frameworks that keep pace with technological advancements. The authors call for the formation of interdisciplinary task forces that include legal experts, technologists, ethicists, and community representatives to collaboratively develop responsive governance strategies for emerging AI technologies.

___________________________

Theoretical Framework

The theoretical framework for this research, which analyses AI governance in Zimbabwe, is grounded in Actor-Network Theory (ANT) [Latour: 1996, 369-381]. ANT emphasises the relationships and interactions among various actors, both human and non-human, within a network. It posits that the agency of each actor contributes to shaping governance outcomes, highlighting the importance of inclusivity in stakeholder engagement. Within AI, this means recognising the roles of diverse stakeholders, including marginalised communities, technical experts, and civil society members. By understanding how these actors interact, policymakers can identify potential gaps in representation and ensure that AI governance frameworks are designed to reflect a wide array of perspectives, thereby addressing ethical concerns more effectively.

Integrating principles of participatory governance within the ANT framework can enhance the development of ethical guidelines for AI deployment. By fostering collaborative platforms where stakeholders can co-create standards, the governance process becomes more dynamic and responsive to the needs of the community. This participatory approach not only empowers individuals to voice their concerns but also facilitates ongoing dialogue that can adapt to the evolving nature of AI technologies. The theoretical framework underscores the necessity of sustained engagement and transparency in AI governance, enabling a more equitable and responsible implementation of AI systems that align with societal values and priorities in Zimbabwe.

In Africa, Bokhari and Myeong [2023, 5-6] explored how stakeholder engagement plays a vital role in digital transformation and AI governance across various countries. Their research highlighted successful examples where inclusive participation led to improved policy outcomes. In South Africa, Hwabamungu, Brown, and Williams [2018, 36-48] examined the implications of stakeholder engagement in developing AI policies. They found that insufficient engagement fosters mistrust among stakeholders and can hinder effective policy implementation. To address this, the study recommended creating a national AI strategy that ensures ongoing stakeholder involvement throughout the policy-making process, emphasising the importance of transparency in decision-making. Hlongwane et al. [2024, 413, 421-423] focused on the state of AI governance in Zimbabwe. They pointed out the absence of comprehensive policies that include stakeholder engagement. Their research identified key stakeholders (government agencies, private sector representatives, academia, and civil society organisations) and highlighted their crucial roles in shaping AI policy. The authors proposed establishing a multi-stakeholder platform to encourage dialogue among all involved in AI governance, along with regular workshops and forums to educate stakeholders about AI technologies and their potential impacts. However, these studies did not evaluate the impact of stakeholder engagement by government agencies, the private sector, academia, civil society, and workshops and forums on governance and policy-making.

___________________________

Literature Review

Perceptions of Various Stakeholders Regarding the Significance and Impact of AI Technologies

Ernst, Merola, and Samaan [2019, 36-37] examined how AI and automation are affecting labour markets and economic productivity. They found that while AI technologies can greatly boost productivity, there is increasing concern about job displacement for workers. Business leaders and policymakers have expressed mixed feelings about the advantages of AI compared to its potential to increase inequality. To address these challenges, the authors recommend investing in education and training programmes to help workers transition into new roles created by AI advancements. They also advocate for policies that ensure equitable access to technology.

Wolff et al. [2020] systematically reviewed studies of the economic impact of AI in health care. Their review revealed that stakeholders recognise both the transformative possibilities of AI and the associated risks, especially concerning privacy and ethical issues. They suggest establishing clear regulatory frameworks that address these ethical concerns while also promoting innovation and investment in responsible AI practices. Wamba-Taguimdje et al. [2020, 1910] explored how businesses view the integration of AI into their operations. They found that many companies acknowledge the significant benefits of adopting AI technologies, such as increased efficiency, but also express considerable concerns about data security and ethical implications. The study recommends developing comprehensive guidelines for data usage in AI applications and fostering collaboration between tech companies and regulatory bodies to ensure the responsible use of technology.

Barriers That Prevent Effective Stakeholder Engagement in AI Governance

Kallina and Singh [2024, 7] explored several barriers to effective stakeholder engagement in AI governance. They pointed out that a lack of understanding of AI technologies among stakeholders, insufficient representation of diverse voices, and the complexity of regulatory frameworks create significant challenges. Many stakeholders feel overwhelmed by the technical jargon surrounding AI, making it difficult for them to engage meaningfully. To address this, the authors recommend developing educational programmes specifically designed to enhance stakeholders’ understanding of AI technologies. They also suggest creating dialogue platforms that include a wide range of stakeholders, ensuring that marginalised voices are heard throughout the governance process.

Kinney et al. [2024, 7] identified trust issues as another major barrier to stakeholder engagement in AI governance. Many stakeholders harbour distrust towards organisations involved in AI development, often due to past failures in transparency and accountability. The study highlights a common disconnect between policymakers and technologists regarding the implications of AI technologies. To bridge this gap, the authors recommend establishing clear communication channels between stakeholders and developers. They advocate for increased transparency in decision-making processes and suggest conducting regular public consultations to rebuild trust and ensure that stakeholder concerns are taken seriously.

Limani et al. [2024, 11] emphasised the challenge of inclusivity in stakeholder engagement. Their study notes that traditional governance structures often exclude non-expert voices, leading to decisions that do not adequately reflect societal values or needs. They highlight how power imbalances can skew engagement towards more privileged groups. To counter this, the authors recommend implementing inclusive practices, such as participatory design workshops, where diverse groups can actively contribute to discussions about AI governance. They also suggest using digital tools to facilitate broader participation from various demographics, ensuring that all voices are heard.

Current Stakeholder Engagement Practices in AI Governance Frameworks

Dreier et al. [2022, 33] underscored the significance of multi-stakeholder engagement in AI governance. They noted that current practices often lack inclusivity, especially from marginalised communities and non-technical stakeholders. The research highlights that effective governance frameworks should incorporate diverse perspectives to address the ethical issues related to AI technologies. The authors recommend establishing formal mechanisms for stakeholder participation, such as public consultations and advisory boards that include representatives from civil society, academia, and industry.

Mensah [2023, 15] examined the role of transparency and accountability in AI systems. The study found that many organisations fail to adequately involve stakeholders during the development of AI technologies, resulting in a disconnect between developers and affected communities. To bridge this gap, the study suggests implementing regular stakeholder engagement sessions throughout the AI life cycle to facilitate ongoing dialogue and feedback. Mensah advocates for clearer communication strategies to inform stakeholders about how their input is being used.

Díaz-Rodríguez et al. [2023, 6] focused on responsible AI and emphasised the importance of involving stakeholders in defining ethical guidelines for AI deployment. They indicated that existing governance frameworks often neglect the voices of end-users and those affected by AI decisions. The study recommends creating collaborative platforms where stakeholders can work together to co-create ethical standards and guidelines for AI use, fostering a sense of ownership regarding the implications of these technologies.

Recommendations for Policymakers Aimed at Enhancing Stakeholder Engagement in AI Governance

The studies reviewed above converge on several recommendations for policymakers. Dreier et al. [2022, 1] call for formal mechanisms for stakeholder participation, such as public consultations and advisory boards that include representatives from civil society, academia, and industry. Mensah [2023, 3, 11-15] recommends regular stakeholder engagement sessions throughout the AI life cycle, supported by clearer communication strategies that show stakeholders how their input is being used. Díaz-Rodríguez et al. [2023, 7-16] propose collaborative platforms where stakeholders can co-create ethical standards and guidelines for AI use, fostering a sense of ownership over the implications of these technologies.

Sharma et al. [2020, 1] highlighted that the AI applications studied by researchers and practitioners often lack robust governance structures that could serve as models for policy and regulatory frameworks. The present research seeks to address this gap by engaging stakeholders who can inform issues of governance and policy formulation.

___________________________

Methodology

This study employed a case study approach to explore the role of stakeholder engagement in Artificial Intelligence (AI) governance and policy-making in Zimbabwe. The qualitative nature of the research aimed to provide a comprehensive understanding of the dynamics at play within this specific context. The case study design was particularly well-suited for this research, as it allowed for an in-depth examination of the complex interplay among various stakeholders involved in AI governance. By focusing on Zimbabwe, the study sought to illuminate how local factors influence stakeholder engagement and how these interactions can shape effective AI policies that align with the needs and values of the community. The research was guided by several key principles. First, it aimed to understand the perceptions, challenges, and opportunities surrounding AI governance as experienced by different stakeholder groups. By recognising that AI technologies are not only technical innovations but also social constructs, the study emphasised the importance of stakeholder perspectives in shaping governance frameworks.

Data collection involved a combination of qualitative methods, including interviews and literature reviews. A total of 60 participants were interviewed, representing a diverse range of stakeholders. This included company executives, Information Communication Technology (ICT) experts, civil society members, and users of AI technologies. The selection of these participants was purposeful, aimed at capturing a wide array of perspectives on AI governance. Company executives provided insights into the business implications of AI, while ICT experts contributed technical knowledge and industry best practices. Civil society members offered a lens into the ethical and social implications of AI deployment, and users shared their experiences and expectations regarding AI technologies.

The interviews were semi-structured, allowing for flexibility in exploring topics while ensuring that key themes were addressed. This format encouraged participants to express their views in their own words, leading to rich and nuanced data. Each interview lasted between 45 minutes and an hour and was conducted in a neutral setting to promote open dialogue. With the consent of the participants, the interviewer recorded the key themes raised and subsequently grouped them into themes and patterns related to stakeholder engagement in AI governance.

To complement the primary data collected from interviews, a comprehensive literature review was conducted. This review focused on existing research related to AI stakeholder engagement, governance frameworks, and policy-making in both developed and developing contexts. By situating the findings within the broader academic discourse, the study aimed to identify gaps in the literature and highlight best practices that could inform AI governance in Zimbabwe. The literature review also served to contextualise the challenges faced by Zimbabwe in implementing effective AI policies, drawing comparisons with experiences from other regions.

Ethical considerations were paramount throughout the research process. Participants were informed about the purpose of the study and their rights, including the right to withdraw at any time. Confidentiality was maintained by removing identifiable information from the transcripts and reports. This commitment to ethical research practices not only protected the participants but also contributed to building trust and rapport, which are essential for obtaining candid responses.

___________________________

Findings

This section discusses the findings of the research on the role of stakeholder engagement in Artificial Intelligence (AI) governance and policy-making in Zimbabwe, framed through the lens of Actor-Network Theory (ANT). The findings are intended to inform policymakers and stakeholders about pathways for enhancing engagement in AI governance, ultimately contributing to the development of effective and inclusive AI policies that resonate with the Zimbabwean community. Thematic analysis was employed to analyse the qualitative data gathered from interviews. This method involved coding the data to identify recurring themes and patterns, which were then organised into categories that reflected the research objectives. The analysis focused on key areas such as stakeholder perceptions of AI technologies, barriers to effective engagement, and current practices in AI governance. By synthesising insights from various stakeholders, the study sought to provide a holistic understanding of the factors influencing AI governance in Zimbabwe.
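
Purely to make the coding step concrete, the short sketch below illustrates, in Python, one way a keyword-assisted pass over interview transcripts could support this kind of thematic coding. It is an illustrative assumption rather than the procedure used in this study; the transcript folder, theme labels, and keyword cues are hypothetical placeholders chosen for the sketch.

# Illustrative sketch only: keyword-assisted tallying of candidate theme
# excerpts across interview transcripts. The folder name, theme labels, and
# keyword cues are hypothetical and not the study's actual coding instrument.
from collections import defaultdict
from pathlib import Path

CODEBOOK = {
    "awareness gaps": ["jargon", "do not understand", "never heard of"],
    "power dynamics": ["sidelined", "dominated", "no seat at the table"],
    "transparency": ["opaque", "accountability", "how decisions are made"],
    "inclusivity": ["marginalised", "rural", "ordinary users"],
}

def code_transcripts(folder):
    """Map each theme to (transcript name, sentence) pairs containing a cue."""
    matches = defaultdict(list)
    for path in sorted(Path(folder).glob("*.txt")):  # e.g. participant_01.txt
        text = path.read_text(encoding="utf-8").lower()
        for sentence in text.split("."):
            for theme, cues in CODEBOOK.items():
                if any(cue in sentence for cue in cues):
                    matches[theme].append((path.name, sentence.strip()))
    return matches

if __name__ == "__main__":
    coded = code_transcripts("transcripts")  # assumed folder of plain-text transcripts
    for theme, excerpts in sorted(coded.items(), key=lambda kv: -len(kv[1])):
        print(f"{theme}: {len(excerpts)} candidate excerpts for manual review")

Even in such a sketch, the matching only surfaces candidate excerpts; grouping them into themes and interpreting them against the research objectives remains the analyst's task.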

Stakeholder Perceptions of AI Technologies

The research revealed diverse perceptions among stakeholders regarding the governance and policy formulation of AI technologies in Zimbabwe. Company executives viewed AI as a transformative force, capable of driving economic growth and enhancing efficiency within their organisations. They highlighted the potential for AI to improve service delivery, especially in sectors like industry and agriculture. However, their optimism was tempered by concerns regarding the lack of a comprehensive governance framework. They expressed that, without appropriate regulations and support, the benefits of AI might not reach the broader population.

In contrast, civil society members voiced apprehensions about the ethical implications of AI. They pointed out that the rapid deployment of AI technologies could exacerbate existing inequalities and lead to job displacement. This perspective aligns with the critical stance within ANT, which posits that technology is not neutral and can have varying impacts across different societal segments. The civil society representatives emphasised the need for inclusive governance that considers the voices of marginalised groups, thereby underscoring the importance of stakeholder engagement.

ICT experts advocated for a comprehensive governance and policy framework that aligns with the country’s developmental and international goals, such as the National Development Strategy 1 (NDS1), National Development Strategy 2 (NDS2), and the United Nations Sustainable Development Goals (UNSDGs). They stressed the importance of ensuring that AI technologies are harnessed to address local challenges in areas like healthcare, agriculture, mining, and education. The experts highlighted the need for strong regulatory frameworks that ensure ethical AI use while protecting citizens’ rights. They also called for policies that promote transparency, accountability, and fairness in AI systems, addressing potential biases and discrimination. Furthermore, ICT experts emphasised the importance of public-private partnerships to leverage resources and expertise, facilitating the sharing of best practices and knowledge. Continuous stakeholder engagement, including input from academia, industry, and civil society, was also deemed essential to ensure that AI policies remain relevant, adaptive, and inclusive.

Users of AI technologies emphasised the need for inclusivity and representation in discussions surrounding AI governance and policy formulation. They advocated for engaging diverse stakeholders, including marginalised communities and non-technical experts, to ensure that AI solutions address a wide array of societal needs. Transparency and accountability were also crucial concerns, as users called for clear communication about how AI systems function and the decision-making processes behind them. Ethical considerations were paramount, with many users advocating for policies that prioritise fairness and prevent biases, ensuring equitable outcomes from AI deployment. Users highlighted the importance of adaptability in policy frameworks to keep pace with the rapidly evolving nature of AI technologies. Ongoing dialogue and reassessment of policies are necessary to tackle emerging challenges effectively. Education and awareness initiatives are also vital in empowering users to engage meaningfully in policy discussions. Additionally, users called for collaboration among stakeholders—government, industry, academia, and civil society—to foster innovation while addressing ethical and societal issues. By incorporating these insights, policymakers can create frameworks that maximise the benefits of AI while minimising its risks.

Barriers to Effective Stakeholder Engagement

One critical finding of the study was the identification of barriers that hinder effective stakeholder engagement in AI governance. Interview participants highlighted several issues contributing to this challenge. Many stakeholders, particularly in rural areas, lacked awareness of AI technologies and their implications. This gap in knowledge significantly impeded meaningful participation in governance discussions. Access to relevant information about AI technologies and governance frameworks was also uneven across rural communities. Other stakeholders expressed concerns regarding the opacity of decision-making processes, which often excluded those without the necessary technical expertise.

Another significant barrier was power dynamics. Actor-Network Theory (ANT) emphasises the role of power in shaping networks, and in the Zimbabwean context these imbalances were evident: company executives and government officials frequently dominated discussions, sidelining the perspectives of civil society members and ordinary users. This concentration of power limits the diversity of voices in the policy-making process. In the absence of a clear policy and regulatory framework, the governance landscape for AI in Zimbabwe is characterised by institutional fragmentation, resulting in unclear roles and responsibilities among stakeholders. This fragmentation complicates efforts to engage effectively, highlighting the need for a more cohesive approach to governance.

Current Stakeholder Engagement Practices

The research assessed existing stakeholder engagement practices within AI governance frameworks in Zimbabwe. Some initiatives were identified, such as public consultations and workshops organised by government agencies, but these often fell short of being genuinely inclusive and participatory. Some stakeholders reported that consultations were superficial, often serving as a formality rather than a platform for genuine dialogue. This aligns with ANT’s assertion that networks are only as strong as the relationships within them. When consultation processes are tokenistic, the resulting policies fail to reflect the needs and values of the broader community.

The study found that engagement practices tended to focus predominantly on technology experts and business leaders, neglecting the voices of end-users, especially marginalised groups and civil society. This exclusion not only limits the diversity of perspectives but also risks creating policies that do not resonate with the lived experiences of those affected by AI technologies.

___________________________

Discussion

Different actors have varying interests in governance and policy-making, as highlighted in the findings.

Multi-Stakeholder Engagement

Dreier et al. [2022, 21] and Hu et al. [2019, 11] stress the importance of inclusive stakeholder engagement in AI governance. ANT posits that every actor, whether a marginalised community member or a technical expert, plays a crucial role in shaping the network’s dynamics. The lack of inclusivity often leads to biased AI systems that do not reflect the needs and values of all segments of society. By recognising the agency of diverse stakeholders, policymakers can create more equitable governance frameworks that genuinely represent varied perspectives.

Bridging the Developer-Community Gap

Mensah [2023, 6] highlights the disconnect between AI developers and affected communities. According to ANT, this gap can be understood as a failure in the network of actors to effectively communicate and collaborate. Regular stakeholder engagement sessions, proposed by Mensah, can be viewed as attempts to reinforce connections within the network. These sessions aim to facilitate dialogue, allowing stakeholders to voice concerns and contribute to the AI development life cycle. By fostering ongoing communication, stakeholders can better understand how their input influences AI technologies, thereby enhancing transparency and accountability.

Collaborative Platforms for Ethical Guidelines

Díaz-Rodríguez et al. [2023, 24] and Dreier et al. [2022, 20-22] advocate for collaborative platforms where stakeholders can co-create ethical guidelines for AI deployment. From an ANT perspective, this approach recognises that ethical standards are not predetermined but are constructed through interactions among diverse actors. By creating spaces for collaboration, stakeholders can negotiate and redefine ethical considerations, ensuring that guidelines reflect the collective values and concerns of all involved. This participatory approach fosters a sense of ownership regarding AI technologies and their implications, empowering stakeholders to influence governance actively.

Implications of AI Governance and Policy Making in Zimbabwe

The implications of AI governance and policy-making in Zimbabwe are profound, emphasising the need for inclusive and participatory frameworks. Ensuring that marginalised and non-technical stakeholders are actively involved in AI policy discussions can lead to more robust and representative governance, ultimately reflecting the diverse values of society. Understanding the dynamics between various actors within the stakeholder network is crucial for identifying barriers to effective engagement and facilitating collaboration. Involving a broad range of stakeholders in defining ethical standards can result in guidelines that are culturally sensitive and relevant to local contexts. Sustained engagement throughout the AI life cycle fosters transparency and builds trust among stakeholders, contributing to more responsible AI practices.

Overall, a comprehensive approach to AI governance in Zimbabwe not only enhances ethical considerations but also promotes social equity, ensuring that AI technologies serve the interests of all citizens. The application of Actor-Network Theory to the study of AI governance in Zimbabwe underscores the critical importance of multi-stakeholder engagement. By recognising the interconnectedness of various actors and fostering inclusive dialogue, policymakers can develop more ethical, transparent, and accountable AI systems that reflect the diverse needs of society. This approach not only enhances the governance landscape but also promotes a collaborative atmosphere essential for addressing the complex challenges posed by AI technologies.

Recommendations for Enhancing Stakeholder Engagement

Based on the findings, several actionable recommendations emerged to enhance stakeholder engagement in AI governance. Firstly, implementing awareness campaigns is crucial for educating stakeholders about AI technologies and their implications. These campaigns should target diverse audiences, including rural communities, to ensure broad participation. Secondly, establishing transparent decision-making processes can help build trust among stakeholders. Clear communication about how their input is considered in policy formulation can mitigate feelings of disenfranchisement. It is also essential to create platforms that amplify the voices of marginalised groups, ensuring their perspectives are integrated into governance discussions. Dedicated forums or advisory committees focusing on the needs of vulnerable populations can facilitate this inclusion.

Thirdly, strengthening institutional frameworks is vital for developing a cohesive governance structure for AI that clarifies roles and responsibilities among various stakeholders. This framework should promote collaboration among public, private, and civil society actors. Lastly, leveraging technology, particularly online platforms, can enhance participation by gathering input from a wider audience and enabling contributions regardless of geographical barriers.

___________________________

Conclusion

This study highlighted the critical role of stakeholder engagement in the governance and policy-making processes surrounding AI in Zimbabwe. Through the lens of Actor-Network Theory, the findings reveal the complex interplay between various stakeholders, their perceptions, and the power dynamics that influence engagement practices. While opportunities exist for enhancing AI governance and policy-making through inclusive participation, significant barriers remain. Addressing these barriers requires concerted efforts to raise awareness, promote transparency, empower marginalised voices, and strengthen institutional frameworks. By fostering a more equitable and participatory governance and policy-making landscape, Zimbabwe can better harness the potential of AI technologies while addressing the ethical, legal, and social concerns that accompany their deployment. The integration of diverse perspectives in AI governance will not only lead to more effective policies but also ensure that the benefits of AI are shared broadly, contributing to the overall well-being of Zimbabwean society.

___________________________

References

Bokhari, S.A.A. and Myeong, S., 2023. The influence of artificial intelligence on e-Governance and cybersecurity in smart cities: A stakeholder’s perspective, IEEE Access 11: 5-6.

Cihon, P., Schuett, J. and Baum, S.D., 2021. Corporate governance of artificial intelligence in the public interest, Information 12(7): 275.

Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., De Prado, M.L., Herrera-Viedma, E. and Herrera, F., 2023. Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation, Information Fusion 99: 1-16.

Dreier, V., Gelissen, T., Oliveira, M., Riezebos, S., Saxena, R., Sibal, P., Yang, S.Y., 2022. Multi-stakeholder AI development: 10 building blocks for inclusive policy design, UNESCO Publishing.

Ernst, E., Merola, R. and Samaan, D., 2019. Economics of artificial intelligence: Implications for the future of work, IZA Journal of Labour Policy 9(1): 1-35.

Ghosh, A., Saini, A. and Barad, H., 2025. Artificial intelligence in governance: Recent trends, risks, challenges, innovative frameworks and future directions, AI & SOCIETY: 1-23.

Gordon, J.S. and Gunkel, D.J., 2025. Artificial intelligence and the future of work, AI & SOCIETY 40(3): 1897-1903.

Hlongwane, J., Shava, G.N., Mangena, A. and Muzari, T., 2024. Towards the integration of artificial intelligence in higher education: Challenges and opportunities in the African context, a case of Zimbabwe, International Journal of Research and Innovation in Social Science 8(3S): 417-435.

Hu, X., Neupane, B., Echaiz, L.F., Sibal, P., Rivera Lam, M., 2019. Steering AI and advanced ICTs for knowledge societies: A rights, openness, access, and multi-stakeholder perspective, UNESCO Publishing.

Hwabamungu, B., Brown, I., Williams, Q., 2018. Stakeholder influence in public sector information systems strategy implementation: The case of public hospitals in South Africa, International Journal of Medical Informatics 109: 39-48.

Kallina, E. and Singh, J., 2024. Stakeholder involvement for responsible AI development: A process framework, in Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 29 October 2024: 1-14.

Kinney, M., Anastasiadou, M., Naranjo-Zolotov, M., Santos, V., 2024. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems, Heliyon 10(7).

Latour, B., 1996. On actor-network theory: A few clarifications, Soziale Welt: 369-381.

Limani, E., Hajdari, L., Limani, B., Krasniqi, J., 2024. Enhancing stakeholder engagement: Using the communication perspective to identify and enhance stakeholder communication in place management, Cogent Business & Management 11(1): 2383322.

Marmolejo-Ramos, F., Workman, T., Walker, C., Lenihan, D., Moulds, S., Correa, J.C., Hanea, A.M., Sonna, B., 2022. AI-powered narrative building for facilitating public participation and engagement, Discover Artificial Intelligence 2(1): 7.

Mensah, G.B., 2023. Artificial intelligence and ethics: A comprehensive review of bias mitigation, transparency, and accountability in AI systems, Preprint, November 10(1): 1.

Pallett, H., Price, C., Chilvers, J., Burall, S., 2024. Just public algorithms: Mapping public engagement with the use of algorithms in UK public services, Big Data & Society 11(1): 12.

Radu, R., 2021. Steering the governance of artificial intelligence: National strategies in perspective, Policy and Society 40(2): 178-193.

Resseguier, A., Brey, P., Dainow, B., Drozdzewska, A., 2021. SIENNA D5.4: Multi-stakeholder strategy and practical tools for ethical AI and robotics, Zenodo, 29 September 2021, https://zenodo.org/records/5536176.

Robles, P., Mallinson, D.J., 2025. Artificial intelligence technology, public trust, and effective governance, Review of Policy Research 42(1): 11-28.

Sharma, G.D., Yadav, A., Chopra, R., 2020. Artificial intelligence and effective governance: A review, critique and research agenda, Sustainable Futures 2: 1.

Wamba-Taguimdje, S.L., Wamba, S.F., Kamdjoug, J.R.K., Wanko, C.E.T., 2020. Influence of artificial intelligence (AI) on firm performance: The business value of AI-based transformation projects, Business Process Management Journal 26(7): 1893-1924.

Wolff, J., Pauling, J., Keck, A., Baumbach, J., 2020. Systematic review of economic impact studies of artificial intelligence in health care, Journal of Medical Internet Research 22(2).