Statement of the Problem: Artificial Intelligence (AI) development is rapidly reshaping human capabilities and societal structures. However, despite AI's remarkable growth, it remains inherently reliant on human oversight. This dependency is shaped by political, ethical, and economic forces. Political constraints, such as international regulations and national policies, aim to limit AI's unchecked power. Ethically, concerns about privacy, bias, and moral dilemmas require human oversight to ensure responsible application. Economically, powerful industries, particularly in sectors such as healthcare, steer AI's direction to maintain profitability. The interplay of these forces constrains autonomous AI advancement, underscoring the technology's dependence on human intervention. This research investigates the limitations imposed on AI and the essential role humans play in governing its trajectory.

Methodology & Theoretical Orientation: Using a mixed-methods approach, this study combines qualitative analysis of case studies with quantitative research on AI advancements within legislative frameworks. Theoretical underpinnings include sociotechnical theory, which emphasizes the interdependence of humans and technology.

Findings: Preliminary results reveal significant interference from political bodies, driven by fears of AI surpassing human control. Ethical concerns continue to generate debate about the moral implications of AI, while economic interests impede technological breakthroughs by prioritizing profit over progress. This multidimensional struggle accentuates AI's reliance on human governance and illustrates the need for a balanced approach.

Conclusion & Significance: AI's growth is inextricably linked to human oversight. Without clear and ethical human intervention, AI could pose risks to society. This research underscores the importance of crafting policies that balance innovation with ethical responsibility. Recommendations include creating global coalitions for AI governance and engaging in cross-disciplinary dialogue to shape AI's future positively.
Hasan Swaidan is a researcher and student from Syria at the American University of Kuwait, dedicated to understanding AI's complex relationship with human governance. With a keen interest in the ethical, political, and economic dimensions of AI development, he investigates the necessity of human oversight in shaping this technology. Drawing on structured research methodologies, Swaidan examines how international regulations, ethical concerns, and economic pressures inhibit AI's autonomous progress. His work is driven by a commitment to promoting responsible innovation and ensuring that AI serves humanity's best interests. Swaidan's research is grounded in cross-disciplinary analysis, and he actively contributes to scholarly conversations about AI's evolving impact. He embodies resilience and believes in the strength that comes from merging knowledge with purpose.