Session 6: Perceptrons in Python Practice

This session marks our move into actual machine learning. While the default would be to look at linear regression first, using one of the established machine learning frameworks, we will start exploring the topic a bit differently: with a simple perceptron written in plain Python.

General outline of the session:

  • Recap of the programming intro part of this course, which we now leave behind

  • A simple perceptron in Python learning NAND

Background Reading:

Simple perceptron NAND demo

The following code is based on a snippet from the Wikipedia page Perceptron, in its version of 24 December 2013. It extends that snippet with additional debugging output, a limit on the number of iterations, and some alternative training sets and initial weights to play around with.
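The idea behind the code is simple: the perceptron outputs 1 if the weighted sum of its inputs exceeds a threshold, and 0 otherwise. Whenever the output is wrong, each weight is adjusted by learning_rate * (desired - actual) * input, and this is repeated over the training set until every sample is classified correctly (or we give up).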

# decision threshold for the output and step size of the weight updates
threshold = 0.5
learning_rate = 0.1
# the original training set and weights for a NAND of 2 values, with an
# additional constant helper input (the leading 1, which acts as a bias)
training_set = [((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0)]
weights = [0, 0, 0]

# a training set and weights for just the two values to be NANDed,
# without the constant helper input
training_set_dual = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
weights_dual = [0, 0]
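# note: this dual set can never be learned. For the input (0, 0) the
# weighted sum is always 0, which never exceeds the threshold of 0.5,
# although the desired output is 1; the training loop below then only
# stops because of the iteration limit.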
# actual NAND of 3 values, again without a helper input (the all-zero
# input (0, 0, 0) runs into the same problem as above)
training_set_triple = [
    ((1, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 0),
    ((0, 0, 0), 1), ((0, 0, 1), 1), ((0, 1, 0), 1), ((0, 1, 1), 1),
]
weights_triple = [0, 0, 0]
# again NAND of 3 values, now with the constant helper input added as a
# 4th input (the leading 1 in each tuple)
training_set_triple_with_helper = [
    ((1, 1, 0, 0), 1), ((1, 1, 0, 1), 1), ((1, 1, 1, 0), 1), ((1, 1, 1, 1), 0),
    ((1, 0, 0, 0), 1), ((1, 0, 0, 1), 1), ((1, 0, 1, 0), 1), ((1, 0, 1, 1), 1),
]
# we also have to add a 4th weight for the additional helper input
weights_triple_with_helper = [0, 0, 0, 0]

# select the training set and matching initial weights to experiment with
# (comment out these two lines to train the original 2-value NAND)
training_set = training_set_triple_with_helper
weights = weights_triple_with_helper


def dot_product(values, weights):
    return sum(value * weight for value, weight in zip(values, weights))


iteration = 0
while True:
    print('-' * 60)
    iteration += 1
    print(f'iteration: {iteration}')
    error_count = 0
    for input_vector, desired_output in training_set:
        # the perceptron fires if the weighted sum exceeds the threshold
        # (True counts as 1 and False as 0 in the arithmetic below)
        result = dot_product(input_vector, weights) > threshold
        error = desired_output - result
        print(f'Weights: {weights}    Input: {input_vector} Result: {result} Error: {error}')
        if error != 0:
            error_count += 1
            # perceptron learning rule: w_i += learning_rate * error * x_i
            for index, value in enumerate(input_vector):
                weights[index] += learning_rate * error * value
    # stop once every sample is classified correctly, or give up after
    # 100 iterations (some of the training sets above never converge)
    if error_count == 0 or iteration > 100:
        break
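
After the loop has finished, it is instructive to apply the learned weights once more, this time without training. The following lines are our own addition, not part of the Wikipedia snippet; the perceptron helper simply wraps the decision rule around dot_product from above.

# verification pass: helper added for this demo, not in the original snippet
def perceptron(input_vector, weights, threshold=0.5):
    return int(dot_product(input_vector, weights) > threshold)

print('-' * 60)
print(f'final weights after {iteration} iterations: {weights}')
for input_vector, desired_output in training_set:
    print(f'{input_vector} -> {perceptron(input_vector, weights)} (expected: {desired_output})')

For the two variants with a constant helper input, the data is linearly separable, so the perceptron convergence theorem guarantees that the loop reaches error_count == 0 after finitely many iterations; the two variants without a helper input stop only at the iteration limit.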