(1) Optimizing LLM Deployments through Inference Backends. JAICC 2024, 3 (4), 1–4. https://doi.org/10.47363/JAICC/2024(3)E128.