Abstract

A vehicle head unit may train a surround-view (SV) detection module to rectify distortions in fish-eye camera images of a vehicle's surroundings by comparing the object detection results (e.g., for traffic signs, lane markings, etc.) of the SV detection module with those of an advanced driver assistance system (ADAS) detection module (e.g., while the SV detection module and the ADAS detection module are detecting the same objects in the same scenery). The vehicle head unit may receive the object detection results of the ADAS detection module via one or more communication processes. For example, the vehicle head unit may use the object detection results of the ADAS detection module as ground truth data for training the SV detection module. The vehicle head unit may then update parameters, weights, and/or the like of the SV detection module to decrease the difference between the object detection results of the SV detection module and those of the ADAS detection module. In some examples, the vehicle head unit may send (potentially after anonymizing personally identifiable information) the updated parameters, weights, and/or the like of the SV detection module to a remote computing system (e.g., a cloud server) to train a machine learning model that implements SV detection modules. The machine learning model may be trained using the collective updated parameters, weights, and/or the like of multiple SV detection modules.
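The training loop described above can be sketched in simplified form. This is a minimal illustration, not the disclosed implementation: all function names are hypothetical, the SV module is caricatured as a per-coordinate correction applied to normalized bounding boxes, and the ADAS module's detections are assumed to be already matched one-to-one with the SV module's detections (a real system would need data association, e.g., by IoU).

```python
import numpy as np

def detection_loss(sv_boxes, adas_boxes):
    """Mean squared error between matched SV and ADAS bounding boxes.

    Assumes both modules observed the same scene and each row of
    sv_boxes corresponds to the same object as the same row of
    adas_boxes (association is out of scope for this sketch).
    """
    return float(np.mean((sv_boxes - adas_boxes) ** 2))

def local_update(weights, sv_boxes, adas_boxes, lr=0.5):
    """One on-device training step on the head unit.

    The ADAS detections serve as ground truth; the "SV module" here is
    just a per-coordinate scale correction (corrected = weights * boxes),
    purely for illustration. Applies one exact gradient-descent step on
    the per-coordinate MSE.
    """
    corrected = weights * sv_boxes
    grad = 2.0 * np.mean((corrected - adas_boxes) * sv_boxes, axis=0)
    return weights - lr * grad

def federated_average(weight_sets):
    """Server-side aggregation: average the updated parameters
    collected from multiple head units (after any anonymization)."""
    return np.mean(np.stack(weight_sets), axis=0)

# Example: one head unit whose SV boxes are systematically off-scale
# relative to the ADAS boxes (normalized [0, 1] image coordinates).
sv_boxes = np.array([[0.1, 0.1, 0.5, 0.5],
                     [0.2, 0.3, 0.6, 0.8]])
adas_boxes = 0.8 * sv_boxes          # ADAS results treated as ground truth
weights = np.ones(4)                 # initial (identity) correction

before = detection_loss(weights * sv_boxes, adas_boxes)
for _ in range(50):
    weights = local_update(weights, sv_boxes, adas_boxes)
after = detection_loss(weights * sv_boxes, adas_boxes)
# `after` is smaller than `before`: the local update shrinks the
# SV/ADAS discrepancy. Updated weights from many vehicles could then
# be combined with federated_average on the remote computing system.
```

The key design point mirrored here is that no raw camera images leave the vehicle; only the updated parameters are uploaded, which is what makes the anonymization-then-aggregation step in the disclosure feasible.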

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
