1.
Optimizing LLM Deployments through Inference Backends. jaicc. 2024;3(4):1-4. doi:10.47363/JAICC/2024(3)E128