Abstract

Smart devices equipped with infrared (IR) sensors offer convenient temperature measurement capabilities. However, the accuracy of temperature measurement using IR sensors depends on estimating the emissivity of the target surface. Currently, the object must be categorized manually so that its material properties can be determined. Requiring such manual input is inconvenient and detracts from the user experience. This disclosure describes techniques that utilize a multimodal large language model (LLM) to automatically determine the material type of an object whose temperature is to be measured. Per the techniques, an image of the object captured by a camera is provided to the multimodal LLM along with a suitable prompt instructing the LLM to determine the material type of the object. The LLM outputs the material type, which is then used to determine the temperature of the object based on infrared sensor readings. Alternatively, the LLM can be prompted to provide an emissivity estimate of the object directly, and that estimate can be used to determine the object temperature. The techniques leverage the capability of a multimodal LLM to analyze images and provide detailed information about an input image without requiring extensive training for specific use cases.
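The pipeline described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: `material_from_image` is a hypothetical placeholder for the multimodal LLM call, the emissivity table contains typical published values for illustration only, and the correction uses the standard Stefan-Boltzmann relation for compensating a blackbody-assumed IR reading.

```python
# Hypothetical emissivity lookup for illustration; values are typical
# published estimates, not figures from this disclosure.
MATERIAL_EMISSIVITY = {
    "human skin": 0.98,
    "water": 0.96,
    "wood": 0.90,
    "polished aluminum": 0.05,
}


def material_from_image(image_bytes: bytes) -> str:
    """Placeholder for the multimodal LLM call described in the
    disclosure: the captured image would be sent to the model with a
    prompt such as 'What material is the object in this image made
    of?', and the model's answer mapped to a material label."""
    raise NotImplementedError("stands in for the multimodal LLM call")


def corrected_temperature_k(t_measured_k: float,
                            emissivity: float,
                            t_reflected_k: float = 293.0) -> float:
    """Correct an IR sensor's apparent (blackbody-assumed) temperature
    for the target's emissivity using the Stefan-Boltzmann relation:

        T_obj = ((T_meas^4 - (1 - eps) * T_refl^4) / eps) ** 0.25

    t_reflected_k is the ambient/reflected temperature in kelvin.
    """
    radiance = t_measured_k ** 4 - (1.0 - emissivity) * t_reflected_k ** 4
    return (radiance / emissivity) ** 0.25
```

In use, the device would obtain `emissivity` either from `MATERIAL_EMISSIVITY[material_from_image(img)]` or directly from the LLM's emissivity estimate, then pass it to `corrected_temperature_k` along with the raw IR reading.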

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
