Ex 1

The document contains a Python script that implements a McCulloch-Pitts neuron model and various logic functions including AND, OR, NOT, NOR, and XOR. Each logic function utilizes the neuron model to compute outputs based on given inputs and weights. The script concludes by executing all defined logic functions to display their outputs.
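
For reference, the neuron fires (outputs 1) when the weighted sum of its inputs plus the bias reaches the threshold. A minimal check of the AND configuration used in the listing below (weights [1, 1], bias -1.5, threshold 0):

mcculloch_pitts_neuron([1, 1], [1, 1], -1.5, 0)   # 1*1 + 1*1 - 1.5 = 0.5 >= 0  -> 1
mcculloch_pitts_neuron([1, 0], [1, 1], -1.5, 0)   # 1*1 + 0*1 - 1.5 = -0.5 < 0  -> 0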
ex1.py

def mcculloch_pitts_neuron(inputs, weights, bias, threshold):
    """
    Implements a McCulloch-Pitts neuron.

    Parameters:
    inputs (list): Input values (0 or 1).
    weights (list): Weight values corresponding to each input.
    bias (float): Bias value for the neuron.
    threshold (float): Threshold value for activation.

    Returns:
    int: 1 if activation >= threshold, else 0.
    """
    # Calculate activation as a weighted sum of inputs plus bias
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation >= threshold else 0

# Define logic functions

def AND_logic():
    print("AND Logic Function")
    weights = [1, 1]
    bias = -1.5
    threshold = 0

    for x1 in [0, 1]:
        for x2 in [0, 1]:
            output = mcculloch_pitts_neuron([x1, x2], weights, bias, threshold)
            print(f"Inputs: {x1}, {x2} => Output: {output}")
    print()

def OR_logic():
    print("OR Logic Function")
    weights = [1, 1]
    bias = -0.5
    threshold = 0

    for x1 in [0, 1]:
        for x2 in [0, 1]:
            output = mcculloch_pitts_neuron([x1, x2], weights, bias, threshold)
            print(f"Inputs: {x1}, {x2} => Output: {output}")
    print()

def NOT_logic():
    print("NOT Logic Function")
    weights = [-1]
    bias = 0.5
    threshold = 0

    for x1 in [0, 1]:
        output = mcculloch_pitts_neuron([x1], weights, bias, threshold)
        print(f"Input: {x1} => Output: {output}")
    print()

def NOR_logic():
    print("NOR Logic Function")
    weights = [-1, -1]
    bias = 0.5
    threshold = 0

    for x1 in [0, 1]:
        for x2 in [0, 1]:
            output = mcculloch_pitts_neuron([x1, x2], weights, bias, threshold)
            print(f"Inputs: {x1}, {x2} => Output: {output}")
    print()

def XOR_logic():
    print("XOR Logic Function")
    # XOR is implemented using two layers of neurons

    def XOR_gate(x1, x2):
        # Hidden layer parameters:
        #   n1 fires only for (x1, x2) = (1, 0) -> x1 AND NOT x2
        #   n2 fires only for (x1, x2) = (0, 1) -> NOT x1 AND x2
        weights1 = [1, -1]
        weights2 = [-1, 1]
        weights_out = [1, 1]
        bias1 = -0.5
        bias2 = -0.5
        bias_out = -0.5  # output neuron acts as an OR over the hidden neurons

        # Hidden layer neurons
        n1 = mcculloch_pitts_neuron([x1, x2], weights1, bias1, 0)
        n2 = mcculloch_pitts_neuron([x1, x2], weights2, bias2, 0)
        # Output neuron: XOR = (x1 AND NOT x2) OR (NOT x1 AND x2)
        output = mcculloch_pitts_neuron([n1, n2], weights_out, bias_out, 0)
        return output

    for x1 in [0, 1]:
        for x2 in [0, 1]:
            output = XOR_gate(x1, x2)
            print(f"Inputs: {x1}, {x2} => Output: {output}")
    print()

# Run all logic functions
AND_logic()
OR_logic()
NOT_logic()
NOR_logic()
XOR_logic()
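
As an optional self-check (not part of the original script), the sketch below compares each single-neuron gate configuration against Python's bitwise operators on 0/1 inputs. It assumes mcculloch_pitts_neuron from the listing above is in scope; the helper name check_gates is purely illustrative.

def check_gates():
    for x1 in [0, 1]:
        # NOT gate: weights [-1], bias 0.5
        assert mcculloch_pitts_neuron([x1], [-1], 0.5, 0) == 1 - x1
        for x2 in [0, 1]:
            assert mcculloch_pitts_neuron([x1, x2], [1, 1], -1.5, 0) == x1 & x2         # AND
            assert mcculloch_pitts_neuron([x1, x2], [1, 1], -0.5, 0) == x1 | x2         # OR
            assert mcculloch_pitts_neuron([x1, x2], [-1, -1], 0.5, 0) == 1 - (x1 | x2)  # NOR
    print("All gate checks passed.")

check_gates()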
