I know there are TensorFlow, PyTorch, Keras, and a whole bunch of other libraries out there, but I need something that works on Termux, and I had no success there (at least not since the Python 3.7 upgrade). After rereading the textbook on how a neural network operates, it didn't seem hard to write my own library.
Here I explain the code:
Symbols and notations
An artificial neural network is a nonlinear regression model in stacked layers. Simple regression in statistics has one input and one output, and finds the equation that fits in between. A multilayer neural network (MLP, multilayer perceptron) extends this structure to multiple layers, so regression on layer $\ell$ gives an output that becomes the input of layer $\ell+1$. The input to the first layer is the model's input, and the output from the last layer is the model's output.
Using a notation similar to Russell and Norvig’s book ^{1}, we can model the NN as follows:
- there are $L$ layers in the NN
- the model input is matrix $X$, in which the convention is to have the features of each data point as rows, and different data instances presented as columns. We will have an $n \times N$ matrix for $N$ instances of data, each having $n$ features
- the model reference output is matrix $Y$, in which, again, the data of each instance are presented as columns. We will have an $m \times N$ matrix for the same $N$ instances of data, each having $m$ output features
- $a^{(\ell)}$ is the input of layer $\ell+1$ and the output of layer $\ell$. It will be a matrix of dimension $n_\ell \times N$ if there are $n_\ell$ perceptrons on layer $\ell$
- by definition, $a^{(0)} = X$, and we define the model output $\hat{Y} = a^{(L)}$

Each perceptron (the building block of a NN) computes $z = w^\top a + b$ for some weight vector $w$ and bias $b$, with $a$ the input to the layer for each instance of data, then outputs $g(z)$ for some activation function $g$. This is the nonlinear function in the regression. In matrix form, for all $N$ instances of data and the whole layer, on layer $\ell$ it is

$$Z^{(\ell)} = W^{(\ell)} a^{(\ell-1)} + b^{(\ell)}, \qquad a^{(\ell)} = g(Z^{(\ell)})$$

where the addition of $b^{(\ell)}$ above is broadcast to each column. Matrix $W^{(\ell)}$ is of dimension $n_\ell \times n_{\ell-1}$, for this layer has $n_\ell$ perceptrons and the previous layer has $n_{\ell-1}$ perceptrons. Matrices $Z^{(\ell)}$ and $b^{(\ell)}$ are of dimensions $n_\ell \times N$ and $n_\ell \times 1$ respectively.
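The layer equation can be sketched in a few lines of NumPy. This is only an illustration of the shapes involved, not pyann's actual internals, and the function name is made up here:

```python
import numpy as np

def forward_layer(W, b, a_prev, g):
    """One layer's forward step: Z = W a + b, then the activation g."""
    Z = W @ a_prev + b   # (n_l, N) = (n_l, n_prev) @ (n_prev, N), b broadcast over columns
    return g(Z), Z

# toy check of the dimensions: 3 features, 5 instances, 4 perceptrons
rng = np.random.default_rng(0)
a0 = rng.standard_normal((3, 5))   # a^(0) = X, shape (n_0, N)
W1 = rng.standard_normal((4, 3))   # W^(1), shape (n_1, n_0)
b1 = np.zeros((4, 1))              # b^(1), one bias per perceptron
a1, Z1 = forward_layer(W1, b1, a0, np.tanh)
print(a1.shape)                    # (4, 5)
```

Because instances are columns, the same bias column vector is added to every column by NumPy broadcasting.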
The activation function $g$ is commonly one of these:

- ReLU: $g(z) = \max(0, z)$
- logistic: $g(z) = \dfrac{1}{1 + e^{-z}}$
- hyperbolic tangent: $g(z) = \tanh(z)$
- leaky ReLU: $g(z) = \alpha z$ for some small $\alpha$ when $z < 0$, otherwise $g(z) = z$
- ELU: $g(z) = \alpha(e^z - 1)$ for some $\alpha$ when $z < 0$, otherwise $g(z) = z$
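These activations, and the derivatives that back propagation needs, take only a few lines of NumPy each. The names below are illustrative, not necessarily pyann's own:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_deriv(z):
    return (z > 0).astype(float)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_deriv(z):
    s = logistic(z)
    return s * (1.0 - s)

def tanh_deriv(z):
    return 1.0 - np.tanh(z) ** 2

def elu(z, alpha=1.0):
    return np.where(z < 0, alpha * (np.exp(z) - 1.0), z)

def elu_deriv(z, alpha=1.0):
    # for z < 0 the derivative is alpha * e^z, which equals elu(z) + alpha
    return np.where(z < 0, elu(z, alpha) + alpha, 1.0)
```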
To train the NN, we feed the data $X$ and $Y$ forward through the network in each epoch and then use back propagation to update the parameters, then repeat for many epochs in the hope that the parameters will converge to a useful value. First we define a loss function $J = L(\hat{Y}, Y)$ to measure the average discrepancy between the NN output $\hat{Y}$ and the reference output $Y$ over the $N$ data instances. Then we minimize $J$, usually by the gradient descent method. On the output layer:

$$dZ^{(L)} = \frac{\partial J}{\partial \hat{Y}} \odot g'(Z^{(L)})$$

Otherwise:

$$dZ^{(\ell)} = \left( W^{(\ell+1)\top} dZ^{(\ell+1)} \right) \odot g'(Z^{(\ell)})$$

with

$$dW^{(\ell)} = \frac{1}{N}\, dZ^{(\ell)} a^{(\ell-1)\top}, \qquad db^{(\ell)} = \frac{1}{N} \sum_i dZ^{(\ell)}_{:,i}$$

in which the sum on $i$ is to sum over all columns of $dZ^{(\ell)}$. Then we update the parameters by

$$W^{(\ell)} \leftarrow W^{(\ell)} - \eta\, dW^{(\ell)}, \qquad b^{(\ell)} \leftarrow b^{(\ell)} - \eta\, db^{(\ell)}$$

for some learning rate $\eta$. Observing the definition of each differential, they are all partial derivatives of $J$ w.r.t. the parameter to update; hence the two equations above serve as the update rule. It is common to use binary cross entropy as the loss function for classification applications (in scalar form):

$$L(\hat{y}, y) = -\left[\, y \log \hat{y} + (1-y) \log(1-\hat{y}) \,\right]$$

from which we have

$$\frac{\partial L}{\partial \hat{y}} = -\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}$$
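The three gradient equations above translate almost directly into NumPy. This is a sketch of the update rule with hypothetical function names, not pyann's API:

```python
import numpy as np

def bce_grad(y_hat, y):
    # dJ/d y_hat for binary cross entropy, elementwise
    return -y / y_hat + (1.0 - y) / (1.0 - y_hat)

def hidden_dZ(W_next, dZ_next, Z, g_deriv):
    # dZ^(l) = (W^(l+1)^T dZ^(l+1)) elementwise-times g'(Z^(l))
    return (W_next.T @ dZ_next) * g_deriv(Z)

def param_grads(dZ, a_prev, N):
    # dW = dZ a^(l-1)^T / N ; db sums dZ over all columns (instances)
    dW = dZ @ a_prev.T / N
    db = dZ.sum(axis=1, keepdims=True) / N
    return dW, db
```

A pleasant consequence of pairing this loss derivative with a logistic output layer is that $dZ^{(L)}$ simplifies to $\hat{Y} - Y$.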
How to use it
Sample code:
from pyann import pyann
# make N instances of data stacked as columns of numpy array
X, y = prepare_data()
X_train, X_test, y_train, y_test = train_test_split(X, y)
# learn it
layers = [2, 50, 50, 50, 1]
activators = ["relu"] * 4 + ["logistic"]
NN = pyann(layers, activators)
NN.fit(X_train, y_train, 10000, 0.001, printfreq=500)
# use it
y_hat = NN.forward(X_test)
What can go wrong
The recent O'Reilly book^{2} has a very well-written Chapter 11. I would say all the problems it describes can happen to this code, so you cannot use it to build a deep neural network out of the box.
First is the issue of vanishing and exploding gradients. The problem is exaggerated when the network has a lot of layers. The code above does not implement Xavier initialization (just a very simple quasi-truncated normal).
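For the record, Glorot (Xavier) initialization would only take a few lines; the sketch below shows the idea, and is not what pyann currently does:

```python
import numpy as np

def xavier_normal(n_out, n_in, rng=None):
    # Glorot/Xavier normal init: zero mean, variance 2 / (n_in + n_out),
    # which keeps signal variance roughly constant across layers
    rng = rng or np.random.default_rng()
    std = np.sqrt(2.0 / (n_in + n_out))
    return rng.normal(0.0, std, size=(n_out, n_in))
```

For ReLU-family layers, He initialization (variance $2/n_{\text{in}}$) is the usual variant.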
Second is the saturation of the ReLU activation function. It is common to use ReLU, but it may saturate in its flattened region (negative $z$), which renders parts of the NN dead. We did not implement leaky ReLU above, but we do have the exponential linear unit (ELU) with parameter $\alpha = 1$ to the rescue. Using it, however, brings a noticeable slowdown.
Third, no regularization and no early stopping are implemented. After all, we have no way to provide a held-out set to the NN model during fitting.
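Early stopping itself is a small pattern. Here is a generic sketch; the two callables are hypothetical, since pyann's fit does not accept a validation set:

```python
def early_stopping(train_step, val_loss, max_epochs=10000, patience=50):
    """Stop when the held-out loss has not improved for `patience` epochs.
    `train_step` runs one training epoch; `val_loss` returns the current
    loss on a held-out set. Returns the best loss and the stopping epoch."""
    best, since_best, epoch = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step()
        loss = val_loss()
        if loss < best - 1e-9:
            best, since_best = loss, 0   # improvement: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                break                    # ran out of patience
    return best, epoch
```

In practice one would also snapshot the parameters at the best epoch and restore them after stopping.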
Lastly, we did not implement dropout. I heard people do not use it any more, in favor of other techniques, but if you want to, we would have to apply masks to the weight matrices $W^{(\ell)}$.
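A mask could look like the sketch below. Note it applies an inverted-dropout mask to the activations, which is the more common formulation, rather than to $W^{(\ell)}$ directly:

```python
import numpy as np

def dropout_mask(shape, keep_prob, rng=None):
    # inverted dropout: zero a unit with probability 1 - keep_prob and
    # rescale survivors by 1/keep_prob so the expected activation is unchanged
    rng = rng or np.random.default_rng()
    return (rng.random(shape) < keep_prob) / keep_prob

# a training forward pass would multiply each hidden activation:
#   a_l = g(Z_l) * dropout_mask(Z_l.shape, keep_prob=0.8)
# and skip the mask entirely at inference time
```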
Other than being simple, usable, and not depending on sophisticated libraries, this is far from a feature-rich NN framework. Try at your own risk.