Introducing Azure Load Balancer


AZ-104 notes: Introducing Azure Load Balancer. Covers key concepts for the Azure Administrator Associate exam.

1️⃣ What is Azure Load Balancer?

Azure Load Balancer is a Layer 4 (Transport Layer) load balancing service in Microsoft Azure.

It distributes inbound and outbound traffic based on:

Source IP

Source port

Destination IP

Destination port

Protocol (TCP/UDP)

Because it operates at Layer 4, it does not inspect HTTP headers or URLs (that’s Layer 7 – handled by Application Gateway or Front Door).

2️⃣ Deployment Scope

Azure Load Balancer can be:

🔹 Regional

Balances traffic within one Azure region

Backend resources must be in the same region

🔹 Global (Public only)

Cross-region load balancing

Used for multi-region architectures

3️⃣ Public vs Internal Load Balancer

🌐 Public Load Balancer

  • Has a public IP
  • Exposes services to the internet
  • Example: Web servers

🔐 Internal Load Balancer

  • Uses private IP
  • Only accessible inside VNet or connected networks
  • Example: Web tier → Database tier

4️⃣ Core Components Explained

Azure Load Balancer consists of several key components:

1️⃣ Frontend IP Configuration

Defines:

  • Public or private IP
  • IPv4 / IPv6
  • Entry point for traffic

2️⃣ Backend Pool

Contains:

  • Virtual Machines
  • VM Scale Sets
  • NICs or IP addresses

Traffic is distributed among these backend resources.

3️⃣ Health Probes

Health probes check whether backend instances are healthy.

Types:

  • TCP
  • HTTP
  • HTTPS (Standard SKU)

If a VM fails probe checks:

  • It is removed from rotation
  • Traffic goes only to healthy instances

Why is this important? It prevents traffic from being sent to failed servers.
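As a sketch, a health probe like the one above can be created with the Azure CLI. The resource names here (rg-demo, az104-public-lb, http-probe) are illustrative assumptions, not from the lesson:

```shell
# Sketch: create an HTTP health probe on port 80, path /
# Resource names (rg-demo, az104-public-lb, http-probe) are illustrative.
az network lb probe create \
  --resource-group rg-demo \
  --lb-name az104-public-lb \
  --name http-probe \
  --protocol Http \
  --port 80 \
  --path /
```

An HTTP probe marks an instance healthy only on a 200 response, so it catches a hung web server that a plain TCP probe would still consider up.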

4️⃣ Load Balancing Rules

Defines:

  • Frontend port
  • Backend port
  • Protocol
  • Backend pool
  • Health probe

Example:

  • Frontend: TCP 80
  • Backend: TCP 80
  • Used for HTTP traffic
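The example rule above could be created like this with the Azure CLI (a sketch; rg-demo, fe-ip, web-backend, and http-probe are assumed names):

```shell
# Sketch: load balancing rule for HTTP traffic (names illustrative).
az network lb rule create \
  --resource-group rg-demo \
  --lb-name az104-public-lb \
  --name http-rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name fe-ip \
  --backend-pool-name web-backend \
  --probe-name http-probe
```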

5️⃣ Inbound NAT Rules

Used for:

  • Administrative access (e.g., SSH, RDP)

Example:

  • Public IP: Port 1001 → VM1: Port 22
  • Public IP: Port 1002 → VM2: Port 22

Commonly used with:

  • Virtual Machine Scale Sets
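The SSH example above can be sketched as two NAT rules (rg-demo and az104-public-lb are assumed names); after creation, each rule is associated with the target VM NIC's IP configuration:

```shell
# Sketch: per-VM SSH NAT rules matching the example above (names illustrative).
az network lb inbound-nat-rule create \
  --resource-group rg-demo --lb-name az104-public-lb \
  --name ssh-vm1 --protocol Tcp \
  --frontend-port 1001 --backend-port 22

az network lb inbound-nat-rule create \
  --resource-group rg-demo --lb-name az104-public-lb \
  --name ssh-vm2 --protocol Tcp \
  --frontend-port 1002 --backend-port 22
```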

6️⃣ Outbound Rules

Controls:

  • SNAT (Source Network Address Translation)
  • Outbound internet connectivity
  • Prevents SNAT port exhaustion
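A minimal sketch of an explicit outbound rule, which gives you manual control over how many SNAT ports each backend instance receives (names and the port count are illustrative assumptions):

```shell
# Sketch: explicit outbound rule to control SNAT allocation.
# Names are illustrative; 10000 ports per instance is an example value.
az network lb outbound-rule create \
  --resource-group rg-demo --lb-name az104-public-lb \
  --name outbound-web --protocol All \
  --frontend-ip-configs fe-ip \
  --address-pool web-backend \
  --allocated-outbound-ports 10000
```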

5️⃣ SKUs Explained

Azure Load Balancer has 3 SKUs:

🟡 Basic SKU (Retiring)

  • No SLA
  • No zone redundancy
  • Limited features
  • Not for production

🟢 Standard SKU

Key Features:

  • 99.99% SLA
  • Zone redundancy
  • HA ports
  • HTTPS health probes
  • Azure Monitor integration
  • Multiple frontend IPs
  • Secure by default (NSG rules required to allow traffic)

This is the production-grade option.

🔵 Gateway SKU

  • Azure Gateway Load Balancer

Used for:

  • Chaining traffic to NVAs (Network Virtual Appliances)
  • Firewall insertion
  • Deep packet inspection

Flow: Client → Public LB → Gateway LB → NVA → Public LB → Backend

Used in:

  • Security inspection
  • Enterprise architectures

6️⃣ Demonstration Architecture Summary

From transcript:

  • VNet: vnet-prod
  • Subnet: fe-subnet
  • 2 Ubuntu VMs running NGINX
  • Public Load Balancer (Standard SKU)
  • HTTP load balancing rule (Port 80)
  • TCP health probe (Port 80)
  • SSH NAT rule (Port range 1000+)
  • Result: Accessing public IP → NGINX splash page → Traffic successfully distributed

7️⃣ How Traffic Distribution Works (Under the Hood)

Azure Load Balancer uses:

🔹 Hash-Based Distribution

Based on 5-tuple hash:

  • Source IP
  • Source Port
  • Destination IP
  • Destination Port
  • Protocol

This ensures that traffic from the same session is consistently routed to the same backend instance.

8️⃣ Session Persistence Options

  • None (default)
  • Client IP
  • Client IP + Protocol

Session persistence is used when applications require sticky sessions.
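The hashing idea above can be sketched in Python. This is illustrative only: Azure's actual hash function is internal and not public, but the principle is the same — the same tuple always maps to the same backend:

```python
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, proto,
                 persistence="None"):
    """Map a connection onto a backend via a stable hash.

    persistence="None"     -> full 5-tuple hash (the default mode)
    persistence="ClientIP" -> 2-tuple hash (source IP, destination IP)
    Illustrative only; Azure's real hash function is not public.
    """
    if persistence == "ClientIP":
        key = f"{src_ip}|{dst_ip}"
    else:
        key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}"
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return backends[h % len(backends)]

backends = ["vm1", "vm2"]

# Same 5-tuple always maps to the same backend (per-flow consistency).
a = pick_backend(backends, "203.0.113.5", 50000, "20.1.2.3", 80, "TCP")
b = pick_backend(backends, "203.0.113.5", 50000, "20.1.2.3", 80, "TCP")
assert a == b

# With Client IP persistence, a new source port still hits the same backend,
# which is what "sticky sessions" means here.
c = pick_backend(backends, "203.0.113.5", 50001, "20.1.2.3", 80, "TCP",
                 persistence="ClientIP")
d = pick_backend(backends, "203.0.113.5", 50002, "20.1.2.3", 80, "TCP",
                 persistence="ClientIP")
assert c == d
```

Note the trade-off: Client IP persistence keeps a user on one VM, but skews distribution when many users sit behind one NAT address.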

9️⃣ HA Ports (Standard SKU Only)

Allows:

  • Load balancing across all ports

Useful for:

  • NVAs
  • Complex firewall scenarios

Prevents:

  • Port exhaustion issues
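An HA-ports configuration is simply a load balancing rule on an internal Standard Load Balancer with protocol All and port 0 on both sides. A sketch (all resource names here are illustrative assumptions):

```shell
# Sketch: HA ports rule on an internal Standard LB (names illustrative).
# Protocol All + frontend/backend port 0 = balance every port and protocol.
az network lb rule create \
  --resource-group rg-demo \
  --lb-name app-ilb \
  --name ha-ports-rule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name app-fe \
  --backend-pool-name nva-backend \
  --probe-name tcp-probe
```

This avoids defining one rule per port, which is why it suits NVAs that must see all traffic.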

🔟 When to Use Azure Load Balancer

Use when:

  • You need L4 load balancing
  • You want high performance
  • You need ultra-low latency
  • You don’t need URL routing

11️⃣ Comparison with Other Azure Load Balancing Services

  • Azure Load Balancer – Layer 4 (TCP/UDP), regional (or cross-region with the global tier), high-performance network traffic
  • Application Gateway – Layer 7 (HTTP/HTTPS), regional, URL-based routing and WAF
  • Azure Front Door – Layer 7 (HTTP/HTTPS), global, web application acceleration and failover
  • Traffic Manager – DNS-based, global, routes clients to regional endpoints

12️⃣ Production Best Practices

✅ Always use Standard SKU
✅ Use Availability Zones
✅ Configure health probes correctly
✅ Monitor via Azure Monitor
✅ Define outbound rules to avoid SNAT exhaustion
✅ Use VM Scale Sets for auto scaling
✅ Secure with NSGs

13️⃣ Common Real-World Architectures

Architecture 1 – Web App

Internet → Public LB → Web VM Scale Set

Architecture 2 – 3-Tier

Internet → Public LB → Web Tier
Web Tier → Internal LB → App Tier
App Tier → Internal LB → DB Tier

Architecture 3 – Secure Enterprise

Internet → Public LB → Gateway LB → Firewall NVA → Backend

14️⃣ Reference Documentation

Official Microsoft Docs:

  • Azure Load Balancer overview
  • Load Balancer components
  • Standard vs Basic comparison
  • Health probes
  • Gateway Load Balancer
  • Outbound connections & SNAT

🔎 Deep Conceptual Understanding (Interview Ready)

Q: Why is Azure Load Balancer not suitable for URL-based routing?

Because it operates at Layer 4 and does not inspect HTTP headers.

Q: What happens if a backend VM fails?

Health probe fails → VM removed from rotation → No traffic sent.

Q: Why is Standard SKU secure by default?

It requires:

  • NSG rules explicitly allowing traffic
  • No open default access

Q: What causes SNAT port exhaustion?

Too many concurrent outbound connections competing for a limited pool of SNAT ports. Solutions:

  • Use outbound rules
  • Use NAT Gateway (preferred for large outbound traffic)

🎯 Key Takeaways

  • Azure Load Balancer = High-performance L4 load balancer
  • Standard SKU = Production-ready
  • Health probes = Critical
  • Gateway LB = Security chaining
  • Public vs Internal depends on exposure needs
  • Use VM Scale Sets for scalability


Hands-on: Public Standard Load Balancer for Two VMs

Goal: Publish two web VMs behind a Standard public Load Balancer.

  1. Create two Linux VMs in the same VNet and availability set, or use a VM scale set.
  2. Install nginx on both VMs and make each home page show its hostname.
  3. Create a Standard public IP address.
  4. Create a Standard public Load Balancer named az104-public-lb.
  5. Add a frontend IP configuration using the public IP.
  6. Create backend pool web-backend and add both VM NICs.
  7. Create a health probe:
    • Protocol: HTTP
    • Port: 80
    • Path: /
  8. Create a load balancing rule:
    • Frontend port: 80
    • Backend port: 80
    • Backend pool: web-backend
    • Health probe: HTTP probe
  9. Ensure VM NSGs allow inbound HTTP on port 80 (probes from the AzureLoadBalancer service tag are allowed by default).
  10. Browse to the Load Balancer public IP.
  11. Stop nginx on one VM and confirm the health probe removes it from rotation.
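Steps 3–8 above can be sketched with the Azure CLI. The names az104-public-lb and web-backend come from the lab text; rg-demo, lb-pip, fe-ip, http-probe, and http-rule are assumed names:

```shell
# Sketch of lab steps 3-8 (rg-demo, lb-pip, fe-ip etc. are illustrative).
az network public-ip create \
  --resource-group rg-demo --name lb-pip --sku Standard

az network lb create \
  --resource-group rg-demo --name az104-public-lb --sku Standard \
  --public-ip-address lb-pip \
  --frontend-ip-name fe-ip --backend-pool-name web-backend

az network lb probe create \
  --resource-group rg-demo --lb-name az104-public-lb \
  --name http-probe --protocol Http --port 80 --path /

az network lb rule create \
  --resource-group rg-demo --lb-name az104-public-lb \
  --name http-rule --protocol Tcp \
  --frontend-port 80 --backend-port 80 \
  --frontend-ip-name fe-ip \
  --backend-pool-name web-backend --probe-name http-probe
```

The VM NICs still need to be added to web-backend (step 6) before traffic flows.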

Hands-on: Internal Load Balancer

  1. Create an internal Standard Load Balancer in the app subnet.
  2. Assign a private frontend IP such as 10.50.2.10.
  3. Add backend app VMs to the backend pool.
  4. Create a TCP health probe on the app port.
  5. Create a load balancing rule for the app port.
  6. From a VM in the web subnet, connect to the private frontend IP.
  7. Confirm the internal Load Balancer is not reachable from the internet.
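The steps above can be sketched as follows. The frontend IP 10.50.2.10 comes from the lab; rg-demo, vnet-prod, app-subnet, the app-* names, and port 8080 as the app port are illustrative assumptions:

```shell
# Sketch: internal Standard LB with static frontend 10.50.2.10.
# rg-demo, vnet-prod, app-subnet, app-* names, and port 8080 are illustrative.
az network lb create \
  --resource-group rg-demo --name app-ilb --sku Standard \
  --vnet-name vnet-prod --subnet app-subnet \
  --private-ip-address 10.50.2.10 \
  --frontend-ip-name app-fe --backend-pool-name app-backend

az network lb probe create \
  --resource-group rg-demo --lb-name app-ilb \
  --name app-probe --protocol Tcp --port 8080

az network lb rule create \
  --resource-group rg-demo --lb-name app-ilb \
  --name app-rule --protocol Tcp \
  --frontend-port 8080 --backend-port 8080 \
  --frontend-ip-name app-fe \
  --backend-pool-name app-backend --probe-name app-probe
```

Because no public IP is involved, the frontend is reachable only from inside the VNet or peered/connected networks.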
