Secure Code Completion Models Tuned for Compliance-Heavy Domains
DOI: https://doi.org/10.47363/JAICC/2025(4)476

Keywords: Compliance-Aware Code Generation, Large Language Models, Secure Software Development, Domain-Specific Fine-Tuning

Abstract
The integration of large language models (LLMs) into software engineering workflows has accelerated code generation and review processes. However, their deployment in compliance-intensive sectors—such as finance, healthcare, and defense—introduces stringent challenges around regulatory adherence, security assurance, and legal accountability. This paper presents a principled methodology for fine-tuning LLMs on domain-specific, security-vetted datasets to produce code aligned with rigorous compliance frameworks. The proposed pipeline addresses dataset curation, model adaptation, and multi-layered evaluation, ensuring both syntactic correctness and regulatory fidelity. Empirical results demonstrate that the fine-tuned models significantly outperform general-purpose LLMs in generating secure, regulation-compliant code.
License
Copyright (c) 2025 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.