Xing, Baixi and Cao, Hanfei and Shi, Lei and Si, Huahao and Zhao, Lina (2022) 'AI-driven user aesthetics preference prediction for UI layouts via deep convolutional neural networks.', Cognitive Computation and Systems.
Leveraging the power of computational methods, AI can support effective strategies in intelligent design, and researchers continue to push its boundaries by developing computational systems that solve complex problems. The authors investigate the association between user preference for UI designs and deep image features, aiming to predict user preference levels using deep convolutional neural networks (DCNNs) trained on a UI design image dataset. A total of 12,186 UI design images were collected from UI.cn and DOOOOR.com. Users' views and likes indicate an implicit user preference level, which is set as the ground-truth annotation for the dataset. Six DCNNs (VGG-19, InceptionNet-V3, MobileNet, EfficientNet, ResNet-50 and NASNetLarge) were trained to learn user preference for UI images. The experiment achieved its best result, a mean-squared error of 0.000214 and a mean absolute error of 0.0103, with EfficientNet, which indicates that the proposed method can learn patterns of user aesthetic preference for UI design. Based on the prediction model, a mobile application named 'HotUI' was developed for UI design recommendation.
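As an illustration of the evaluation setup described above, the sketch below derives an implicit preference score from views and likes and computes the paper's two reported metrics, MSE and MAE. This is not the authors' code: the paper does not specify how views and likes are combined, so the like-rate formula here is an assumption for demonstration only, and the data values are made up.

```python
# Illustrative sketch (assumed, not the authors' method): annotate each
# UI image with an implicit preference score and score a regression
# model with mean-squared error and mean absolute error.

def preference_score(likes, views):
    """Assumed proxy for implicit preference: likes per view (0 if unseen)."""
    return likes / views if views else 0.0

def mse(y_true, y_pred):
    """Mean-squared error between ground-truth and predicted scores."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error between ground-truth and predicted scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example with invented (likes, views) pairs and model predictions.
y_true = [preference_score(l, v) for l, v in [(30, 1000), (5, 500), (120, 2000)]]
y_pred = [0.028, 0.012, 0.058]
print(mse(y_true, y_pred), mae(y_true, y_pred))
```

In practice the scores would come from the DCNN's regression head over the 12,186-image dataset; the helpers above only show how the two headline numbers (0.000214 MSE, 0.0103 MAE) are defined.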
Full text: Version of Record (VoR)
Available under license: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0
Download: PDF (1634 KB)
Publisher web site: https://doi.org/10.1049/ccs2.12055
Publisher statement: © 2022 The Authors. Cognitive Computation and Systems published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology and Shenzhen University. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
Date accepted: 04 February 2022
Date deposited: 07 April 2022
Date of first online publication: 01 March 2022
Date first made open access: 07 April 2022