Durham Research Online

Dynamic Unary Convolution in Transformers

Duan, Haoran and Long, Yang and Wang, Shidong and Zhang, Haofeng and Willcocks, Chris G. and Shao, Ling (2023) 'Dynamic Unary Convolution in Transformers.', IEEE Transactions on Pattern Analysis and Machine Intelligence.

Abstract

It is uncertain whether the power of transformer architectures can complement existing convolutional neural networks. A few recent attempts have combined convolution with transformer design through a range of structures in series, whereas the main contribution of this paper is to explore a parallel design approach. While previous transformer-based approaches need to segment the image into patch-wise tokens, we observe that the multi-head self-attention conducted on convolutional features is mainly sensitive to global correlations and that the performance degrades when these correlations are not exhibited. We propose two parallel modules along with multi-head self-attention to enhance the transformer. For local information, a dynamic local enhancement module leverages convolution to dynamically and explicitly enhance positive local patches and suppress the response to less informative ones. For mid-level structure, a novel unary co-occurrence excitation module utilizes convolution to actively search the local co-occurrence between patches. The parallel-designed Dynamic Unary Convolution in Transformer (DUCT) blocks are aggregated into a deep architecture, which is comprehensively evaluated across essential computer vision tasks in image-based classification, segmentation, retrieval and density estimation. Both qualitative and quantitative results show that our parallel convolutional-transformer approach with dynamic and unary convolution outperforms existing series-designed structures.
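To make the parallel design described in the abstract concrete, the sketch below shows a minimal PyTorch block in which multi-head self-attention runs in parallel with two convolutional branches: a gated depthwise convolution standing in for the dynamic local enhancement module and a 1x1 convolution standing in for the unary co-occurrence excitation module. The module names, layer choices, and the additive fusion are assumptions for illustration only, not the authors' DUCT implementation.

# Hypothetical sketch of a parallel convolution-transformer block in the
# spirit of the abstract; layer choices and the additive fusion are
# illustrative assumptions, not the authors' DUCT design.
import torch
import torch.nn as nn

class ParallelConvTransformerBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Global branch: standard multi-head self-attention over patch tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Local branch (assumed): depthwise convolution with a sigmoid gate
        # that re-weights patches, enhancing informative ones and suppressing
        # less informative ones.
        self.local_enhance = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),
            nn.Sigmoid(),
        )
        # Mid-level branch (assumed): 1x1 "unary" convolution over the patch
        # grid, capturing co-occurrence between neighbouring patch features.
        self.unary = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (batch, h*w, dim) patch tokens.
        tokens = self.norm(x)
        global_out, _ = self.attn(tokens, tokens, tokens)

        # Reshape tokens to a feature map for the convolutional branches.
        fmap = tokens.transpose(1, 2).reshape(x.size(0), -1, h, w)
        local_out = (fmap * self.local_enhance(fmap)).flatten(2).transpose(1, 2)
        unary_out = self.unary(fmap).flatten(2).transpose(1, 2)

        # Parallel fusion by summation (an assumption for this sketch).
        return x + global_out + local_out + unary_out

# Usage: 196 tokens from a 14x14 grid of 256-dimensional patch embeddings.
block = ParallelConvTransformerBlock(dim=256)
out = block(torch.randn(2, 14 * 14, 256), h=14, w=14)
print(out.shape)  # torch.Size([2, 196, 256])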

Item Type: Article
Full text: (AM) Accepted Manuscript, PDF (14,252 KB)
Status: Peer-reviewed
Publisher Web site: https://doi.org/10.1109/TPAMI.2022.3233482
Publisher statement: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Date accepted: No date available
Date deposited: 16 January 2023
Date of first online publication: 02 January 2023
Date first made open access: 16 January 2023
