While logistic regression outputs probabilities (like 0.7 or 0.3), we often need discrete predictions (0 or 1). A common approach uses a threshold of 0.5: predict 1 when f(x) = σ(w·x + b) ≥ 0.5, and predict 0 otherwise. Since the sigmoid σ(z) crosses 0.5 exactly at z = 0, thresholding the probability at 0.5 is equivalent to checking the sign of the linear part.

Therefore: the model predicts 1 whenever w·x + b ≥ 0
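A minimal NumPy sketch of this rule (the helper names `sigmoid` and `predict` are illustrative, not from the text):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(X, w, b):
    """Return hard 0/1 predictions from a logistic model."""
    probs = sigmoid(X @ w + b)         # probabilities in (0, 1)
    return (probs >= 0.5).astype(int)  # sigmoid(z) >= 0.5  <=>  z >= 0
```

Thresholding the probability at 0.5 and checking w·x + b ≥ 0 give identical predictions, which is why the boundary depends only on where z changes sign.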
Consider a classification problem with two features (x₁, x₂), so z = w₁x₁ + w₂x₂ + b. The decision boundary occurs when w·x + b = 0:

w₁x₁ + w₂x₂ + b = 0

This creates a straight line in the (x₁, x₂) plane. For example, with the illustrative values w₁ = 1, w₂ = 1, b = -3, the boundary is the line x₁ + x₂ = 3: the model predicts 1 on the side where x₁ + x₂ ≥ 3 and 0 on the other side.
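A quick check of that boundary (the weights w₁ = 1, w₂ = 1, b = -3 are the illustrative values assumed above, not fitted parameters):

```python
import numpy as np

w = np.array([1.0, 1.0])  # illustrative weights
b = -3.0                  # boundary: x1 + x2 = 3

points = np.array([[1.0, 1.0],    # x1 + x2 = 2 < 3  -> predict 0
                   [2.5, 2.5]])   # x1 + x2 = 5 >= 3 -> predict 1
z = points @ w + b
print((z >= 0).astype(int))       # prints [0 1]
```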
Using polynomial features like x₁², x₂², we can create non-linear decision boundaries.
Example: If z = w₁x₁² + w₂x₂² + b with w₁ = 1, w₂ = 1, b = -1, the decision boundary becomes:

x₁² + x₂² = 1

This creates a circular decision boundary: the unit circle centered at the origin. The model predicts 1 outside the circle (where x₁² + x₂² ≥ 1) and 0 inside it (where x₁² + x₂² < 1).
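The same sign check applies to the quadratic example (a sketch; `predict_circle` is a made-up helper name):

```python
def predict_circle(x1, x2):
    z = x1**2 + x2**2 - 1  # w1 = 1, w2 = 1, b = -1 from the example
    return int(z >= 0)     # 1 outside the unit circle, 0 inside

print(predict_circle(0.5, 0.5))  # 0.25 + 0.25 - 1 < 0 -> 0 (inside)
print(predict_circle(1.0, 1.0))  # 1 + 1 - 1 >= 0      -> 1 (outside)
```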
With polynomial features, logistic regression can:

- Fit curved decision boundaries such as circles and ellipses
- Separate classes that are not linearly separable
- Match boundary complexity to the data by including higher-order terms (x₁³, x₁²x₂, and so on), as the sketch below illustrates
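As an end-to-end sketch, assuming scikit-learn is available (the synthetic circular dataset here is invented for illustration): expanding the inputs with PolynomialFeatures lets a plain LogisticRegression recover a circular boundary.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic data: label 1 outside the unit circle, 0 inside
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = (X[:, 0]**2 + X[:, 1]**2 >= 1).astype(int)

# The degree-2 expansion adds x1^2, x1*x2, x2^2, so the "linear"
# model can express x1^2 + x2^2 - 1 = 0
model = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())
model.fit(X, y)
print(model.score(X, y))  # close to 1.0 on this cleanly separated data
```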
Linear Features Only
Using only x₁, x₂, x₃, etc. produces straight-line decision boundaries
Polynomial Features
Adding x₁², x₁x₂, x₂², etc. enables curved decision boundaries of any complexity
The decision boundary is the line (or curve) where w·x + b = 0. While linear features create straight decision boundaries, polynomial features enable logistic regression to learn complex, curved boundaries that can separate non-linearly separable data. This flexibility makes logistic regression suitable for a wide range of classification problems.