
Durham Research Online

Gradient Origin Networks

Bond-Taylor, Sam and Willcocks, Chris G. (2021) 'Gradient Origin Networks.', International Conference on Learning Representations (ICLR), Vienna / Virtual, 3-7 May 2021.


This paper proposes a new type of generative model that quickly learns a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior: a latent vector is initialised with zeros, and the gradient of the data's log-likelihood with respect to this zero vector is used as the new latent point. The approach has similar characteristics to autoencoders but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. It also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. Experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
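The core idea above can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's implementation: it uses a hypothetical linear decoder `f(z) = W @ z` with a squared-error loss so the gradient at the origin can be written out by hand, whereas the paper uses neural decoders and automatic differentiation. The latent point is obtained as the negative gradient of the reconstruction loss with respect to a zero-initialised latent vector.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # a data point
W = rng.normal(size=(4, 2))   # toy linear "decoder": f(z) = W @ z

z0 = np.zeros(2)              # latent initialised at the origin (hence the name)

# Reconstruction loss L(z) = ||x - W z||^2.
# Its gradient at z = 0 is dL/dz = -2 W^T (x - W z0) = -2 W^T x.
grad = -2.0 * W.T @ (x - W @ z0)

# The new latent point is the negative gradient; here that is 2 W^T x.
z = -grad
assert np.allclose(z, 2.0 * W.T @ x)
```

In the full method this latent `z` is then fed back through the decoder, and the decoder parameters are trained to reconstruct `x` from it, so no separate encoder network is needed.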

Item Type: Conference item (Poster)
Full text: (VoR) Version of Record
Date accepted: 12 January 2021
Date deposited: 28 October 2021
Date of first online publication: 2021
Date first made open access: 28 October 2021
