Inventor(s)

HP INC

Abstract

In this work, we explore the development of a software application that uses Large Language Models (LLMs) to retrieve data from public GitHub repositories. Reliance on external sources can increase susceptibility to failures due to various factors, which are discussed in the following sections of this paper. Decoding strategies and prompt engineering are addressed as means of improving the coherence of LLM responses. We conducted experiments demonstrating how prompting techniques and inference parameters, such as temperature and top-p, affect model accuracy. The goal is to develop an application that extracts data from GitHub and generates outputs in natural language, enhancing user comprehension. We used the Gemini 1.5 Pro (Google) and Llama 3 70B (Meta) models, achieving accuracy above 90% by the end of the tests, reinforcing the effectiveness of the strategies adopted. The findings contribute to optimizing the use of LLMs to generate more accurate and context-aware responses in practical applications.
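The abstract refers to the temperature and top-p inference parameters. As an illustrative sketch only (this is not the paper's implementation, and the function name is hypothetical), the snippet below shows how temperature scaling and nucleus (top-p) filtering reshape a token probability distribution before sampling: lower temperature sharpens the distribution, and a smaller top-p restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Sample a token index from raw logits using temperature scaling
    and nucleus (top-p) filtering. Illustrative sketch, not tied to any
    specific model or API."""
    rng = rng or random.Random()
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: keep the smallest set of highest-probability
    # tokens whose cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Sample proportionally from the retained nucleus.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With a very small top-p, the nucleus collapses to the single most likely token, making decoding effectively greedy; raising temperature and top-p allows more diverse but less predictable outputs, which is the trade-off the experiments described above explore.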

Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.