How to accumulate gradients in tensorflow?
10:01 16 Oct 2017

I have a question similar to this one.

Because I have limited resources and I work with a deep model (VGG-16), used to train a triplet network, I want to accumulate gradients over 128 batches of one training example each, and only then propagate the error and update the weights.

It's not clear to me how to do this. I work with TensorFlow, but any implementation/pseudocode is welcome.
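Here is roughly what I have in mind, based on splitting the optimizer into compute_gradients / apply_gradients and keeping the running sums in extra variables. This is only a sketch assuming TF 1.x graph mode; the tiny linear model, the placeholders, and names like `accum_vars` and `zero_ops` are stand-ins for my actual network, not tested code.

```python
import tensorflow as tf

# Stand-in model: x, y and the linear layer replace the VGG-16 triplet network;
# the gradient-accumulation mechanics below are independent of the model.
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.random_normal([4, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

opt = tf.train.GradientDescentOptimizer(0.01)
tvs = tf.trainable_variables()

# One non-trainable accumulator per trainable variable, initialised to zeros.
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False)
              for tv in tvs]

# Reset the accumulators at the start of each accumulation cycle.
zero_ops = [av.assign(tf.zeros_like(av)) for av in accum_vars]

# Compute the gradients of the current single-example batch and add them
# into the accumulators (assumes no gradient is None).
gvs = opt.compute_gradients(loss, tvs)
accum_ops = [accum_vars[i].assign_add(g) for i, (g, v) in enumerate(gvs)]

# Apply the averaged accumulated gradients once per cycle of 128 examples.
n_accum = 128
apply_op = opt.apply_gradients(
    [(accum_vars[i] / n_accum, v) for i, (g, v) in enumerate(gvs)])
```

And then the training loop would look something like this (again, a sketch):

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):            # hypothetical outer loop
        sess.run(zero_ops)                      # clear the accumulators
        for _ in range(n_accum):
            sess.run(accum_ops, feed_dict=...)  # one training example per run
        sess.run(apply_op)                      # single weight update
```

Is this the right way to do it, or is there a built-in mechanism I'm missing?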

python tensorflow conv-neural-network gradient-descent