# Session 6: Perceptrons in Python Practice

This session marks our move to actually engaging with machine learning. While the default approach would be to look at linear regression first, using one of the established machine learning frameworks, we will start to explore the topic a bit differently.

General outline of the session:

* Recap of the programming intro part of this course, which we now leave behind
* A simple perceptron in Python learning NAND

Background reading:

* General background on the [Perceptron](https://en.wikipedia.org/wiki/Perceptron).
* A general explanation of [Boolean algebra](https://en.wikipedia.org/wiki/Boolean_algebra).
* What is an [Artificial Neuron](https://en.wikipedia.org/wiki/Artificial_neuron), and what does it actually do?
* Pasquinelli, M. (2017): “[Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference.](https://www.glass-bead.org/article/machines-that-morph-logic/?lang=enview)” In: Glass Bead Journal 1, pp. 1-17.

## Simple perceptron NAND demo

The following code is based on a snippet from the Wikipedia page [Perceptron, in its version from 24 Dec. 2013](https://en.wikipedia.org/w/index.php?title=Perceptron&oldid=587515588), and extends it with additional debugging output, a limit on the number of iterations, and some alternative training data sets and weights to play around with.

```python
threshold = 0.5
learning_rate = 0.1

# The original training set and weights for a NAND of 2 values. The constant
# leading 1 in each tuple is a helper (bias) input: its weight lets the
# perceptron shift the effective threshold.
training_set = [((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0)]
weights = [0, 0, 0]

# A training set and weights for only the two values to be NANDed, without any
# helper input. Note: without a bias, the all-zero input always sums to 0 and
# can never exceed the threshold, so this set cannot be learned and the run
# stops at the iteration limit instead.
training_set_dual = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
weights_dual = [0, 0]

# Actual NAND of 3 values (again not learnable without a helper input, for
# the same reason as above).
training_set_triple = [
    ((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0),
    ((0, 0, 0), 1), ((0, 0, 1), 1), ((0, 1, 0), 1), ((0, 1, 1), 1),
]

# Again NAND of 3 values, now with an additional 4th helper (bias) input,
# the constant leading 1 in each tuple.
training_set_triple_with_helper = [
    ((1, 1, 0, 0), 1), ((1, 1, 0, 1), 1), ((1, 1, 1, 0), 1), ((1, 1, 1, 1), 0),
    ((1, 0, 0, 0), 1), ((1, 0, 0, 1), 1), ((1, 0, 1, 0), 1), ((1, 0, 1, 1), 1),
]
# We also have to add a 4th weight for the additional helper input.
weights_triple_with_helper = [0, 0, 0, 0]

# Select the training set and weights to use for this run.
training_set = training_set_triple_with_helper
weights = weights_triple_with_helper


def dot_product(values, weights):
    return sum(value * weight for value, weight in zip(values, weights))


iteration = 0
while True:
    print('-' * 60)
    iteration += 1
    print(f'iteration: {iteration}')
    error_count = 0
    for input_vector, desired_output in training_set:
        # Step activation: fire (1) if the weighted sum exceeds the threshold.
        result = int(dot_product(input_vector, weights) > threshold)
        error = desired_output - result
        print(f'Weights: {weights} Input: {input_vector} Result: {result} Error: {error}')
        if error != 0:
            error_count += 1
            # Perceptron learning rule: nudge each weight in proportion to
            # the error and the corresponding input value.
            for index, value in enumerate(input_vector):
                weights[index] += learning_rate * error * value
    # Stop once the whole set is classified correctly, or give up after 100 iterations.
    if error_count == 0 or iteration > 100:
        break
```
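After the loop terminates, it is worth checking whether the learned weights actually reproduce the NAND truth table. The following sketch is not part of the original Wikipedia snippet: it assumes it is appended to the script above, reusing `dot_product`, `weights`, `threshold`, and `training_set` from there, and the helper function `predict` is our own addition.

```python
def predict(input_vector, weights, threshold=0.5):
    """Return the perceptron's 0/1 output for a single input vector."""
    return int(dot_product(input_vector, weights) > threshold)

# Check the trained weights against every example in the training set.
for input_vector, desired_output in training_set:
    prediction = predict(input_vector, weights, threshold)
    status = 'OK' if prediction == desired_output else 'WRONG'
    print(f'{input_vector} -> {prediction} (expected {desired_output}) {status}')
```

With the default `training_set_triple_with_helper`, every row should come back OK after a handful of iterations. With `training_set_dual` or `training_set_triple`, the training loop instead runs into the iteration limit: without the constant helper input, the weighted sum of an all-zero input is always 0, which never exceeds the threshold of 0.5, so the examples mapping all-zero inputs to 1 can never be satisfied.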