Abstract
Accurate network application classification is essential for quality of service, security, and regulatory compliance. Traditional Deep Packet Inspection (DPI) approaches, however, struggle with encrypted traffic, scale poorly, and are vulnerable to evasion, while existing machine learning approaches lack explainability and require costly retraining. To address these challenges, the techniques presented herein implement AppLLM, a system that combines structured Chain-of-Thought (CoT) training with Low-Rank Adaptation (LoRA)-based fine-tuning to enable protocol-aware, explainable classification. AppLLM embeds domain-specific reasoning into its training data and system prompts, supports dual-mode inference to optimize latency, and integrates directly into the Vector Packet Processing (VPP) data plane. The result is accurate, low-latency classification with rapid incremental learning, making AppLLM a practical and scalable application detection solution for modern network environments.
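The abstract's central efficiency claim rests on LoRA: instead of updating a full weight matrix W during fine-tuning, only a low-rank correction B·A is trained while W stays frozen. A minimal NumPy sketch of that update follows; the dimensions and rank are hypothetical illustrations, not values from the disclosure.

```python
import numpy as np

# Hypothetical dimensions for one projection matrix in the base model.
d, k = 768, 768   # shape of the frozen pretrained weight W
r = 8             # LoRA rank, chosen so that r << min(d, k)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

# LoRA forward pass: the adapted output is W @ x plus the low-rank path
# B @ (A @ x); W itself is never modified, so the base model is preserved.
x = rng.standard_normal(k)
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(y, W @ x)

# Trainable-parameter count: full fine-tuning vs. the LoRA factors.
full_params = d * k            # 589,824 for this matrix
lora_params = r * (d + k)      # 12,288 for this matrix
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.2%} of full fine-tuning)")
```

This parameter reduction is what makes the "rapid incremental learning" described above plausible: retraining for a new application touches only the small A and B factors, not the base model.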
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Subramanian, Rajasekar; Sharma, Divyansh; Voonna, Praveen; and Gowrish, Koushik Padavalli, "PURPOSE FINE-TUNED LARGE LANGUAGE MODEL FOR END USER APPLICATION DETECTION", Technical Disclosure Commons, ()
https://www.tdcommons.org/dpubs_series/9999