Research Article

Explainable AI in DevOps: Architectural Patterns for Transparent Autonomous Pipelines

Authors

  • Venkata Krishna Koganti, The University of Southern Mississippi, USA

Abstract

This article presents a comprehensive framework for integrating explainable artificial intelligence into DevOps pipelines while maintaining appropriate human oversight and control. It explores architectural patterns that make AI-driven decision flows transparent through the application of SHAP and LIME techniques for model interpretability, introduces confidence scoring mechanisms for gating autonomous remediation actions, and establishes bidirectional feedback loops between production environments and training pipelines. It also demonstrates how large language models can be leveraged to enhance infrastructure-as-code workflows while enforcing versioned checkpoints. Through case studies and implementation guidance, the article addresses the growing tension between the benefits of automation and the need for explainability, traceability, and compliance in modern software delivery platforms. The proposed approaches enable SREs and platform teams to build resilient, self-explaining DevOps ecosystems that balance the advantages of AI automation with necessary human judgment and organizational governance requirements.
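The confidence-gated remediation described above can be illustrated with a minimal sketch. This is a hypothetical example only: the thresholds, action names, and `RemediationDecision` structure are illustrative assumptions, not taken from the article itself.

```python
# Hypothetical sketch of confidence-gated autonomous remediation.
# Threshold values and field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class RemediationDecision:
    action: str                      # proposed remediation, e.g. "rollback"
    confidence: float                # model confidence score in [0, 1]
    explanation: dict = field(default_factory=dict)  # e.g. SHAP attributions


def gate(decision: RemediationDecision,
         auto_threshold: float = 0.9,
         review_threshold: float = 0.6) -> str:
    """Route a proposed action by confidence: execute autonomously,
    escalate to a human reviewer, or reject outright."""
    if decision.confidence >= auto_threshold:
        return "execute"
    if decision.confidence >= review_threshold:
        return "human_review"
    return "reject"


d = RemediationDecision("rollback", 0.72,
                        {"error_rate": 0.41, "latency_p99": 0.22})
print(gate(d))  # 0.72 falls between thresholds -> "human_review"
```

In this sketch, the attached explanation (e.g., SHAP feature attributions) travels with the decision so that a human reviewer sees why the action was proposed, which is the transparency property the abstract emphasizes.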

Article information

Journal

Journal of Computer Science and Technology Studies

Volume (Issue)

7 (4)

Pages

890-904

Published

2025-05-22

How to Cite

Venkata Krishna Koganti. (2025). Explainable AI in DevOps: Architectural Patterns for Transparent Autonomous Pipelines. Journal of Computer Science and Technology Studies, 7(4), 890-904. https://doi.org/10.32996/jcsts.2025.7.4.103


Keywords:

Explainable AI, DevOps automation, CI/CD pipelines, ML feedback loops, infrastructure as code