Neural Networks with weight plus input instead of multiplication?
01:10 26 Jan 2026

Standard neural networks typically compute a linear function of inputs and weights, such as $\sum_{i=1}^d w_i x_i$, followed by a non-linear activation function. We propose a neural network with the additive weighting scheme $\sum_{i=1}^d (w_i + x_i)$. One consequence is efficiency in both the forward and back-propagation phases. We implemented the sum equation in a 2-layer NN and tested it on the UCI wine regression dataset, achieving a mean MSE of 0.649220 under a 5-fold CV procedure. A standard neural network with 4 layers reaches a mean MSE of 0.439025 under the same 5-fold CV procedure. Is there any way to use autograd to implement this scheme, so that more layers can be added to the net without deriving the gradients manually?
For example, in this case a matrix $X$ (the whole batch) of shape $(813 \times 11)$ should be "summed" with a weight matrix $W1$ of shape $(11 \times 50)$, combining them in the same way matrix multiplication does:

$$\forall i \in \{1,\dots,813\},\ \forall j \in \{1,\dots,50\}: \quad O1_{i,j} = \sum_{k=1}^{11} \left(X_{i,k} + W1_{k,j}\right)$$

Then $O1$ (the output of the first layer) should be passed through an activation function such as a Rectified Linear Unit (ReLU), and so on for the next layer.
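As a sketch of how this layer could be computed without manual gradient derivation: the sum-over-$k$ expression can be written with broadcasting, which any autograd framework (e.g. PyTorch) differentiates automatically. The example below uses NumPy just to show the forward computation; the same broadcasting expression on `torch` tensors with `requires_grad=True` would give backpropagation for free. Note also that the sum separates algebraically, which suggests a much cheaper equivalent form (variable names here are illustrative, matching the shapes in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((813, 11))   # batch of inputs, one row per sample
W1 = rng.standard_normal((11, 50))   # additive "weight" matrix of the first layer

# Direct broadcast form: O1[i, j] = sum_k (X[i, k] + W1[k, j])
# X[:, :, None] has shape (813, 11, 1), W1[None, :, :] has shape (1, 11, 50);
# their broadcast sum has shape (813, 11, 50), then we reduce over k (axis=1).
O1 = (X[:, :, None] + W1[None, :, :]).sum(axis=1)   # shape (813, 50)

# Equivalent closed form: the double sum separates into
# sum_k X[i, k]  +  sum_k W1[k, j],
# i.e. row sums of X plus column sums of W1 (broadcast against each other).
O1_fast = X.sum(axis=1, keepdims=True) + W1.sum(axis=0, keepdims=True)

assert np.allclose(O1, O1_fast)

# First-layer activation, as described in the question:
A1 = np.maximum(O1, 0.0)  # ReLU
```

The separable form also shows why this layer may be less expressive than a matrix product: each output $O1_{i,j}$ depends on the inputs only through the single scalar $\sum_k X_{i,k}$, regardless of $j$.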

deep-learning neural-network autograd automatic-differentiation