Inventor(s)

Hui Su

Abstract

File compression often involves the use of a transform that converts the information in the file to the frequency domain, where it is easier to compress. The chosen transform can be applied at different sizes of input data blocks. The rate-distortion (RD) cost and, correspondingly, the speed of the compression process depend on the block size chosen for application of the transform. The techniques described in this disclosure use a trained machine learning model to predict the optimal block size for the application of a given transform used to compress a file. The techniques automatically determine whether it is optimal to apply the transform to the entire input block or to split the block into smaller units to which the transform is then applied.
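
The disclosure describes the approach at a high level without an implementation. As a rough sketch of the decision flow, the Python snippet below uses a hypothetical trained classifier (`model`, assumed to expose a scikit-learn-style `predict` method) to decide recursively whether to transform a square block whole or to split it into quadrants first; the 2-D DCT and the hand-crafted block statistics are illustrative stand-ins, not details from the disclosure.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT, standing in for the codec's transform

def predict_split(block, model):
    """Ask the trained model whether splitting this block is predicted
    to yield a lower rate-distortion (RD) cost than transforming it whole.
    The feature set here is an illustrative assumption."""
    features = np.array([
        block.mean(),                           # average intensity
        block.var(),                            # overall activity
        np.abs(np.diff(block, axis=0)).mean(),  # vertical detail
        np.abs(np.diff(block, axis=1)).mean(),  # horizontal detail
    ])
    return model.predict(features.reshape(1, -1))[0] == 1

def transform_block(block, model, min_size=4):
    """Apply the transform at the block size chosen by the model:
    transform the whole block, or recurse into its four quadrants.
    Assumes square blocks with power-of-two sides."""
    n = block.shape[0]
    if n <= min_size or not predict_split(block, model):
        return dctn(block, norm="ortho")  # transform the entire block
    h = n // 2
    out = np.empty_like(block, dtype=float)
    for r in (0, h):                      # recurse into each quadrant
        for c in (0, h):
            out[r:r+h, c:c+h] = transform_block(
                block[r:r+h, c:c+h], model, min_size)
    return out
```

Trained on blocks labeled by an exhaustive RD search, such a classifier would let the encoder make the split/no-split decision directly, without evaluating both options at encode time.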

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
