Deepfake audio is synthetic audio that sounds like the voice of a particular speaker but was not actually spoken by that person. With the advent of deepfakes, verifying that audio was provided by a particular person, e.g., a public figure or other party of interest, at a particular time and location is important for promoting trust within a society. This disclosure describes techniques that enable the listener of a piece of audio to verify that it was indeed created by the person associated with the voice. A recorder certified as being owned and operated by the speaker records the audio, digitally signs it, and publishes it to a blockchain ledger. An inaudible, signed beacon emitted periodically during the recording is captured by other microphones in the room. At playback time, the recording is verified as genuine by checking the signatures of the beacons and by semantically matching the recording against the speech stored on the blockchain.
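The sign-publish-verify workflow described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: HMAC-SHA256 stands in for the recorder's public-key digital signature, a local hash-chained list stands in for the blockchain ledger, and all names (`SECRET_KEY`, `publish_recording`, `verify_recording`) are hypothetical. The semantic (machine-learning) matching step and the acoustic beacon capture are omitted; here verification reduces to signature and ledger-integrity checks.

```python
import hashlib
import hmac
import json

# Hypothetical key held by the certified recorder; a real system would use
# an asymmetric scheme (e.g., Ed25519) so verifiers need only the public key.
SECRET_KEY = b"device-certified-signing-key"

def sign(payload: bytes) -> str:
    # HMAC-SHA256 as a stand-in for a digital signature.
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

class Ledger:
    """Toy append-only hash chain standing in for the blockchain ledger."""
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        block = {
            "prev": prev_hash,
            "record": record,
            "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
        }
        self.blocks.append(block)
        return block

    def verify_chain(self) -> bool:
        # Recompute every block hash; any tampering breaks the chain.
        prev = "0" * 64
        for block in self.blocks:
            body = json.dumps(block["record"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != expected:
                return False
            prev = block["hash"]
        return True

def publish_recording(ledger: Ledger, audio: bytes,
                      transcript: str, timestamp: float) -> dict:
    # The recorder hashes the audio, signs the digest, and publishes
    # the signed record (with transcript and timestamp) to the ledger.
    digest = hashlib.sha256(audio).hexdigest()
    record = {
        "audio_sha256": digest,
        "transcript": transcript,
        "timestamp": timestamp,
        "signature": sign(digest.encode()),
    }
    return ledger.append(record)

def verify_recording(ledger: Ledger, audio: bytes) -> bool:
    # At playback time: find a ledger record matching the audio digest,
    # check its signature, and confirm the ledger itself is intact.
    digest = hashlib.sha256(audio).hexdigest()
    for block in ledger.blocks:
        rec = block["record"]
        if rec["audio_sha256"] == digest and hmac.compare_digest(
                rec["signature"], sign(digest.encode())):
            return ledger.verify_chain()
    return False
```

For example, after `publish_recording(ledger, audio_bytes, "hello", 1700000000.0)`, a call to `verify_recording(ledger, audio_bytes)` succeeds, while any modified audio fails, since its digest matches no signed ledger record.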
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Belmon, Stephane and Rammohan, Roshan, "Robust Anti-Deepfake Measures for Audio Using Blockchain and Machine Learning", Technical Disclosure Commons, (December 07, 2023)