Abstract

The theft or unauthorized use of machine learning models developed by a company can lead to significant financial losses and damage to the company's intellectual property. Existing protections such as encryption or access controls can be circumvented by a skilled attacker. The techniques presented herein instead integrate embedded watermarks into machine learning models. Such a watermark not only uniquely identifies a model but can also encode a unique user identity, making it possible to track usage of the model and to detect unauthorized use. Thus, if a model is leaked, redistributed, or misused, the watermark makes it possible to identify the source of the leak or misuse, allowing for better traceability and accountability.
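The disclosure does not specify an embedding mechanism, but the idea of a per-user watermark can be illustrated with a minimal white-box sketch: hash the user's identity into a bit string and hide it in the low-order parity of quantized model weights, then recover and compare the bits to attribute a leaked copy. The quantization scale, function names, and parity-encoding scheme below are illustrative assumptions, not the disclosed method.

```python
import hashlib

SCALE = 1_000_000  # assumed quantization scale; parity of round(w * SCALE) carries one bit


def user_bits(user_id: str, n_bits: int) -> list[int]:
    """Derive a user-specific bit string from a hash of the user ID (up to 256 bits)."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    return bits[:n_bits]


def embed(weights: list[float], user_id: str) -> list[float]:
    """Nudge each weight so the parity of its quantized value encodes one watermark bit."""
    bits = user_bits(user_id, len(weights))
    out = []
    for w, b in zip(weights, bits):
        q = round(w * SCALE)
        if q % 2 != b:
            q += 1  # flip parity with a perturbation of about 1/SCALE
        out.append(q / SCALE)
    return out


def extract(weights: list[float], n_bits: int) -> list[int]:
    """Read the embedded bit string back from the quantized parities."""
    return [round(w * SCALE) % 2 for w in weights[:n_bits]]


def verify(weights: list[float], user_id: str) -> bool:
    """Check whether a model's weights carry the watermark for this user."""
    n = len(weights)
    return extract(weights, n) == user_bits(user_id, n)
```

In this sketch the perturbation to each weight is at most 1.5/SCALE, so model behavior is essentially unchanged, while a leaked copy can be checked against each licensed user's ID to find the source. A production scheme would also need robustness to fine-tuning, pruning, and quantization, which this toy parity encoding does not provide.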

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
