
Durham Research Online

Style augmentation: data augmentation via style randomization.

Jackson, Philip and Atapour-Abarghouei, Amir and Bonner, Stephen and Breckon, Toby and Obara, Boguslaw (2019) 'Style augmentation: data augmentation via style randomization.', IEEE/CVF Conference on Computer Vision and Pattern Recognition, Deep Vision workshop, Long Beach, CA, USA, 16-20 June 2019.


We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of Convolutional Neural Networks (CNNs) on both classification and regression tasks. During training, style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, sampling target style embeddings from a multivariate normal distribution instead of computing them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain-agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating superior performance over benchmark tasks.
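The core mechanism described above — replacing the style embedding computed from a style image with one sampled from a multivariate normal, optionally blended with the source image's own embedding to control augmentation strength — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding dimension, the `alpha` blending parameter, and the function names are assumptions for the sketch, and the resulting embedding would be passed to a pretrained arbitrary style transfer network (not shown).

```python
import numpy as np

def sample_style_embedding(mean, cov, rng=None):
    # Draw a random style embedding from a multivariate normal
    # distribution, standing in for the embedding a style-predictor
    # network would compute from a concrete style image.
    rng = np.random.default_rng() if rng is None else rng
    return rng.multivariate_normal(mean, cov)

def randomized_style(content_embedding, mean, cov, alpha=0.5, rng=None):
    # Blend the image's own style embedding with a random one;
    # alpha (assumed name) controls augmentation strength:
    # alpha=0 leaves the image's style untouched, alpha=1 is fully random.
    z = sample_style_embedding(mean, cov, rng)
    return alpha * z + (1.0 - alpha) * np.asarray(content_embedding)
```

In use, `mean` and `cov` would be estimated from style embeddings of a large corpus of style images, so that sampled embeddings stay on the manifold of plausible styles rather than being arbitrary noise.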

Item Type: Conference item (Paper)
Full text: (AM) Accepted Manuscript
Date accepted: 12 May 2019
Date deposited: 13 May 2019
Date of first online publication: 2019
Date first made open access: 13 November 2019

