Reading data more efficiently from a CSV file in Python
14:08 24 Jan 2026

I am programming a neural network to recognise digits, trained on the MNIST dataset. At the moment I am reading each line separately using the following code:

    file = np.loadtxt(dataLocation, delimiter=",", dtype="float128")
    count = 0
    print("Starting training")
    for row in file:
        count += 1
        label = row[0]  # first column is the digit label, so pop it before scaling
        data = []
        for item in row[1:]:
            data.append([item / 255])  # turns [1, 2, 3, 4] into [[1], [2], [3], [4]], which this program needs
        desired = self.Transformer(label)  # look at the Transformer method to understand

where self.Transformer() calls a function that takes the first element from a line, e.g. 2, and outputs it as the vector [[0],[0],[1],[0],[0],[0],[0],[0],[0],[0]]. What I wanted to know is whether there is a more efficient way to read this. With my current program one epoch takes about 30 minutes, which means 30 epochs would take 15 hours. I suspect that this is one of the main inefficiencies in my code. How do I improve this?
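For reference, here is a minimal standalone sketch of what my Transformer method does, so the rest of the code can be reproduced without my class (the function name and the 10-class output size are from my description above, not a library API):

    import numpy as np

    def transformer(label, num_classes=10):
        """Map a digit label to a one-hot column vector, e.g. 2 -> [[0],[0],[1],[0],...,[0]]."""
        vec = np.zeros((num_classes, 1))
        vec[int(label)] = 1.0
        return vec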

python csv mnist