Secure Code Completion Models Tuned for Compliance-Heavy Domains

Authors

  • Arjun Deshraje Urs, USA

DOI:

https://doi.org/10.47363/JAICC/2025(4)476

Keywords:

Compliance-Aware Code Generation, Large Language Models, Secure Software Development, Domain-Specific Fine-Tuning

Abstract

The integration of large language models (LLMs) into software engineering workflows has accelerated code generation and review. However, their deployment in compliance-intensive sectors such as finance, healthcare, and defense introduces stringent challenges around regulatory adherence, security assurance, and legal accountability. This paper presents a principled methodology for fine-tuning LLMs on domain-specific, security-vetted datasets to produce code aligned with rigorous compliance frameworks. The proposed pipeline addresses dataset curation, model adaptation, and multi-layered evaluation, ensuring both syntactic correctness and regulatory fidelity. Empirical results demonstrate that the fine-tuned models significantly outperform general-purpose LLMs in generating secure, regulation-compliant code.
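
To make the pipeline the abstract describes more concrete, the sketch below illustrates its three stages (dataset curation, model adaptation, and multi-layered evaluation) in minimal Python. The security rules, the placeholder fine-tuning step, and the evaluation checks are illustrative assumptions only; they do not reproduce the paper's actual datasets, models, or tooling.

```python
# Hypothetical sketch of the three-stage pipeline outlined in the abstract:
# (1) curate a security-vetted fine-tuning corpus, (2) adapt a base code LLM,
# (3) evaluate generations for syntactic correctness and security compliance.
from dataclasses import dataclass
import re

@dataclass
class Sample:
    prompt: str
    completion: str

# --- Stage 1: dataset curation ---------------------------------------------
# Placeholder vetting rules; a real pipeline would run static analyzers and
# map their findings to the target compliance framework.
FORBIDDEN_PATTERNS = {
    "hardcoded_secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "weak_hash": re.compile(r"\bmd5\b|\bsha1\b", re.I),
}

def passes_vetting(code: str) -> bool:
    """Keep only samples that trip none of the security rules."""
    return not any(p.search(code) for p in FORBIDDEN_PATTERNS.values())

def curate(raw: list[Sample]) -> list[Sample]:
    return [s for s in raw if passes_vetting(s.completion)]

# --- Stage 2: model adaptation ----------------------------------------------
# Stand-in for supervised fine-tuning of a code LLM on the curated corpus;
# in practice this would be a training loop (e.g., parameter-efficient tuning).
def fine_tune(curated: list[Sample]) -> dict:
    return {"n_train_samples": len(curated), "status": "adapted"}

# --- Stage 3: multi-layered evaluation --------------------------------------
# Layer 1: syntactic correctness (here, a Python compile check).
# Layer 2: the same security rules applied to generated code.
def evaluate(generated: str) -> dict:
    try:
        compile(generated, "<generated>", "exec")
        syntactic_ok = True
    except SyntaxError:
        syntactic_ok = False
    return {"syntactic_ok": syntactic_ok, "security_ok": passes_vetting(generated)}

if __name__ == "__main__":
    raw = [
        Sample("store a token", "api_key = 'hunter2'"),                 # rejected by vetting
        Sample("hash a file", "import hashlib\nh = hashlib.sha256()"),  # kept
    ]
    curated = curate(raw)
    print(fine_tune(curated))
    print(evaluate("import hashlib\nh = hashlib.sha256()"))
```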

Author Biography

  • Arjun Deshraje Urs, USA

Published

2025-08-25

How to Cite

Secure Code Completion Models Tuned for Compliance-Heavy Domains. (2025). Journal of Artificial Intelligence & Cloud Computing, 4(4), 1-2. https://doi.org/10.47363/JAICC/2025(4)476
