AI-Enhanced IT Governance: Fostering Autonomy, Decision-Making, and Human Accountability
Abstract
The accelerated incorporation of Artificial Intelligence (AI) technologies into the Information Technology (IT) landscape presents both opportunities and challenges for governance frameworks. This mixed-methods study provides a comprehensive examination of the role of AI in IT governance. Rooted in STEM disciplines, the research employs a two-pronged approach focused on autonomous decision-making in AI and human accountability. The quantitative analysis evaluates core algorithms pivotal to AI governance, such as decision trees, neural networks, and reinforcement learning models, quantifying parameters such as efficiency, ethical alignment, and organizational adaptability with robust statistical methods. In addition, a qualitative meta-analysis of the existing literature, conducted using NVivo software, supports a thematic analysis that highlights issues of human accountability and ethical challenges. The study reveals that while AI integration into governance frameworks offers several advantages, it requires a balanced approach involving human oversight and ethical safeguards. This research fills a critical gap in the existing literature by offering empirically backed, actionable insights useful to IT professionals, organizational leaders, and policymakers.
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.