Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

Fast Coding Unit Depth Identification Using Texture and Multiple Deep Learning Architectures

Abstract

High-Efficiency Video Coding (HEVC), also known as H.265, is a video coding standard that offers substantially better compression efficiency than the previous standard, H.264, while maintaining the same video quality. In HEVC, a quadtree partitioning method divides Coding Tree Units (CTUs) into Coding Units (CUs). This coding unit partitioning is recursive and computationally complex because it depends on rate-distortion optimization (RDO). In this paper, we propose a texture- and deep-learning-based system that initiates the CU partition by computing texture attributes of the CU. Coding units are classified into three categories based on their texture properties: Class 1 represents mostly homogeneous regions, Class 2 mostly non-homogeneous regions, and Class 3 all other regions. Only Class 3 blocks are sent through the deep learning architecture, which lowers the total number of blocks the deep learning architecture must partition. We also propose three distinct deep-learning-based architectures for coding unit partitioning, which eliminate the need for rate-distortion optimization and thereby decrease computational complexity. The input to our proposed system is an image of size 64×64 (a CTU), and the output is a 1×16 vector representing the depths of the coding tree unit. Simulation results demonstrate the effectiveness of the proposed system. Compared to existing models, our proposed CU-CNN, CU-MobileNet, and CU-Resnet reduce the encoding time of CU partitioning by 68.41%, 75.77%, and 88.08%, respectively. In addition, the results show that the proposed system with the CU-MobileNet model is appropriate for mobile or lightweight applications, while the CU-Resnet model works well for time-critical or high-speed applications. © 2004-2012 IEEE.
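The abstract describes a three-way texture pre-classification that decides which 64×64 CTUs must be handed to the deep learning model. The paper's exact texture attribute and thresholds are not given here, so the sketch below is a minimal illustration assuming block variance as the texture measure and hypothetical thresholds `T_LOW` and `T_HIGH`; the 1×16 output vector assigns one quadtree depth per 16×16 region of the CTU.

```python
import numpy as np

# Hypothetical thresholds -- the paper does not publish its texture metric,
# so per-block variance is used here as a stand-in attribute.
T_LOW, T_HIGH = 10.0, 1000.0

def classify_ctu(ctu: np.ndarray) -> int:
    """Classify a 64x64 CTU by texture.

    Class 1: mostly homogeneous  -> keep as one large CU (no split).
    Class 2: mostly non-homogeneous -> split fully into smallest CUs.
    Class 3: mixed -> would be sent to the deep learning model.
    """
    assert ctu.shape == (64, 64)
    # Variance of each of the sixteen 16x16 sub-blocks.
    blocks = ctu.reshape(4, 16, 4, 16).transpose(0, 2, 1, 3).reshape(16, 16, 16)
    v = blocks.var(axis=(1, 2))
    if np.all(v < T_LOW):
        return 1
    if np.all(v > T_HIGH):
        return 2
    return 3

def depth_vector(ctu: np.ndarray) -> np.ndarray:
    """Return a 1x16 depth vector for the CTU (one depth per 16x16 region)."""
    c = classify_ctu(ctu)
    if c == 1:
        return np.zeros(16, dtype=int)    # depth 0: a single 64x64 CU
    if c == 2:
        return np.full(16, 3, dtype=int)  # depth 3: 8x8 CUs everywhere
    # Class 3: placeholder; in the proposed system the CNN predicts the depths.
    return np.full(16, -1, dtype=int)
```

Only the Class 3 branch reaches the network in the proposed system, which is what reduces the number of blocks the deep learning architecture has to process.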
