Explainable AI in DevOps: Architectural Patterns for Transparent Autonomous Pipelines
Abstract
This article presents a comprehensive framework for integrating explainable artificial intelligence into DevOps pipelines while maintaining appropriate human oversight and control. It explores architectural patterns that make AI-driven decision flows transparent by applying SHAP and LIME techniques for model interpretability, introduces confidence scoring mechanisms that gate autonomous remediation actions, and establishes bidirectional feedback loops between production environments and training pipelines. It also demonstrates how large language models can enhance infrastructure-as-code workflows while enforcing versioned checkpoints. Through case studies and implementation guidance, the article addresses the growing tension between the benefits of automation and the need for explainability, traceability, and compliance in modern software delivery platforms. The proposed approaches enable SREs and platform teams to build resilient, self-explaining DevOps ecosystems that balance AI automation with necessary human judgment and organizational governance requirements.
Article information
Journal
Journal of Computer Science and Technology Studies
Volume (Issue)
7 (4)
Pages
890-904
Published
Copyright
Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.