Module 2 — Python for Machine Learning · Beginner · 22 min

Loops & Functions

Why Loops and Functions?

Machine learning involves doing the same operation thousands — or millions — of times: processing each image, updating model weights for each batch, evaluating every validation sample. Loops automate repetition. Functions package that logic neatly.


for Loops — Iterate Over Sequences

A for loop runs a block of code once for each item in a sequence.

# Loop over a list
fruits = ["apple", "banana", "cherry"]

for fruit in fruits:
    print(fruit)
# apple
# banana
# cherry

range() — Loop a Fixed Number of Times

# range(5) gives 0, 1, 2, 3, 4
for i in range(5):
    print(i)   # 0, 1, 2, 3, 4

# range(start, stop) — stop is excluded
for epoch in range(1, 11):   # 1 to 10
    print(f"Training epoch {epoch}...")

for i in range(0, 100, 10):  # 0, 10, 20, ..., 90
    print(i)
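
A negative step counts down, which is handy for countdowns or reversed indices. A small extra sketch (not part of the examples above):

```python
# range(start, stop, step) with a negative step counts down;
# the stop value is still excluded.
countdown = list(range(5, 0, -1))
print(countdown)  # [5, 4, 3, 2, 1]
```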

Training Loop Pattern

This is the most important loop in deep learning:

num_epochs = 10
losses = []

for epoch in range(1, num_epochs + 1):
    # (Imagine this does actual training)
    loss = 1.0 / epoch  # fake decreasing loss
    losses.append(loss)
    print(f"Epoch {epoch:2d}/{num_epochs} — Loss: {loss:.4f}")

# Epoch  1/10 — Loss: 1.0000
# Epoch  2/10 — Loss: 0.5000
# ...
# Epoch 10/10 — Loss: 0.1000
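
The losses list collected above is useful after training, for example to find the best epoch. A minimal sketch using the same fake decreasing loss:

```python
num_epochs = 10
losses = [1.0 / epoch for epoch in range(1, num_epochs + 1)]

best_loss = min(losses)
best_epoch = losses.index(best_loss) + 1  # +1 because epochs start at 1
print(f"Best loss {best_loss:.4f} at epoch {best_epoch}")
# Best loss 0.1000 at epoch 10
```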

enumerate() — Loop With Index

When you need both the index and the value:

class_names = ["plane", "car", "bird", "cat", "deer", "dog"]

for idx, name in enumerate(class_names):
    print(f"Class {idx}: {name}")
# Class 0: plane
# Class 1: car
# ...
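
enumerate also accepts a start argument, which is often used so batch numbering begins at 1. A quick sketch (the batches list here is just an illustrative stand-in for real data batches):

```python
batches = [[1, 2], [3, 4], [5, 6]]  # pretend these are data batches

for batch_idx, batch in enumerate(batches, start=1):
    print(f"Batch {batch_idx}/{len(batches)}: {batch}")
# Batch 1/3: [1, 2]
# Batch 2/3: [3, 4]
# Batch 3/3: [5, 6]
```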

zip() — Loop Over Two Lists Together

predictions = [0.92, 0.78, 0.95, 0.63]
labels      = [1,    1,    1,    0   ]

for pred, label in zip(predictions, labels):
    correct = (pred > 0.5) == label
    print(f"pred={pred:.2f}, label={label}, correct={correct}")

while Loops — Loop Until a Condition

patience = 3
no_improve_count = 0
best_loss = float('inf')

while no_improve_count < patience:
    current_loss = 0.5  # imagine this is the actual validation loss
    if current_loss < best_loss:
        best_loss = current_loss
        no_improve_count = 0
    else:
        no_improve_count += 1
    print(f"No improvement count: {no_improve_count}")
# Prints 0, 1, 2, 3 — the loss improves only once, so the loop
# exits on its own once patience is exhausted.

Early stopping (stopping training when it stops improving) is commonly implemented as a while loop.
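
A runnable sketch of that early-stopping pattern, with made-up per-epoch losses chosen to show both improvement and stagnation:

```python
losses = [0.9, 0.7, 0.6, 0.6, 0.6, 0.6, 0.5]  # fake per-epoch losses

patience = 3
no_improve_count = 0
best_loss = float('inf')
epoch = 0

while epoch < len(losses) and no_improve_count < patience:
    current_loss = losses[epoch]
    if current_loss < best_loss:
        best_loss = current_loss
        no_improve_count = 0
    else:
        no_improve_count += 1
    epoch += 1

print(f"Stopped after epoch {epoch}, best loss {best_loss}")
# Stopped after epoch 6, best loss 0.6
```

Note that training stops before epoch 7's 0.5 is ever seen — patience ran out first, which is exactly the trade-off early stopping makes.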


Functions — Reusable Code Blocks

A function takes some inputs, does something, and optionally returns a result.

def greet(name):
    print(f"Hello, {name}!")

greet("Alice")   # Hello, Alice!
greet("Bob")     # Hello, Bob!

Functions With Return Values

def add(a, b):
    return a + b

result = add(3, 7)
print(result)   # 10

Default Arguments

def train_model(epochs=10, learning_rate=0.001, verbose=True):
    if verbose:
        print(f"Training for {epochs} epochs with lr={learning_rate}")

train_model()                    # uses defaults
train_model(epochs=50)           # override epochs only
train_model(100, 0.0001, False)  # all custom
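
One common gotcha with default arguments: a mutable default (like a list) is created once, at definition time, and shared across calls. A minimal demonstration with hypothetical helper functions:

```python
def log_loss_bad(loss, history=[]):   # the [] is created ONCE
    history.append(loss)
    return history

print(log_loss_bad(0.9))  # [0.9]
print(log_loss_bad(0.5))  # [0.9, 0.5] — the old value is still there!

def log_loss_good(loss, history=None):
    if history is None:
        history = []              # fresh list on every call
    history.append(loss)
    return history

print(log_loss_good(0.9))  # [0.9]
print(log_loss_good(0.5))  # [0.5]
```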

Multiple Return Values

Python functions can return multiple values as a tuple:

def train_test_split(data, test_ratio=0.2):
    split = int(len(data) * (1 - test_ratio))
    train = data[:split]
    test = data[split:]
    return train, test   # returns two values

all_data = list(range(100))   # [0, 1, 2, ..., 99]
train_set, test_set = train_test_split(all_data)

print(f"Train: {len(train_set)} samples")  # Train: 80 samples
print(f"Test: {len(test_set)} samples")    # Test: 20 samples
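
Under the hood the two return values are a single tuple, and you can capture them without unpacking. A short sketch with a hypothetical min_max helper:

```python
def min_max(values):
    return min(values), max(values)   # really one tuple

result = min_max([3, 1, 4, 1, 5])
print(result)     # (1, 5) — it's a tuple
lo, hi = result   # unpack later if you prefer
print(lo, hi)     # 1 5
```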

Docstrings — Document Your Functions

def accuracy(y_true, y_pred):
    """
    Calculate classification accuracy.
    
    Args:
        y_true: list of true labels (0 or 1)
        y_pred: list of predicted labels (0 or 1)
    
    Returns:
        float: accuracy between 0.0 and 1.0
    """
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(f"Accuracy: {accuracy(y_true, y_pred):.2%}")  # Accuracy: 83.33%
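
Docstrings aren't just comments: Python stores them on the function object, and help() displays them. A quick check, using a trimmed-down stand-in for the accuracy function above:

```python
def accuracy(y_true, y_pred):
    """Calculate classification accuracy."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy.__doc__)   # Calculate classification accuracy.
# help(accuracy) would print the signature plus this docstring
```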

Putting It Together: A Mini Training Simulation

def compute_loss(predictions, labels):
    """Mean squared error loss."""
    n = len(predictions)
    total = sum((p - l) ** 2 for p, l in zip(predictions, labels))
    return total / n

def update_predictions(preds, labels, lr=0.1):
    """Simulate one gradient descent step."""
    return [p - lr * (p - l) for p, l in zip(preds, labels)]

# Initial (bad) predictions
predictions = [0.2, 0.8, 0.1, 0.9]
labels      = [1.0, 0.0, 1.0, 0.0]

print("Starting training...")
for epoch in range(1, 21):
    loss = compute_loss(predictions, labels)
    predictions = update_predictions(predictions, labels)
    if epoch % 5 == 0:
        print(f"Epoch {epoch:2d}: loss = {loss:.4f}")

# Epoch  5: loss = 0.3121
# Epoch 10: loss = 0.1088
# Epoch 15: loss = 0.0379
# Epoch 20: loss = 0.0132

Summary

Concept    | Syntax        | Use
-----------|---------------|--------------------------
for loop   | for x in seq: | Iterate datasets, epochs
range()    | range(n)      | Fixed-count loops
while loop | while cond:   | Early stopping
Function   | def name():   | Reusable logic
Return     | return value  | Output results

Knowledge Check

What does `range(1, 6)` produce?