Python Interview Questions — Cloud & DevOps Focus

Intermediate · Interview Prep · 60 min · 7 min read · 11 Jan 2025 · Python

Python interview questions for DevOps, cloud engineering, and scripting roles — covering fundamentals, OOP, file handling, APIs, and automation patterns.

What you'll learn

  • Confidently answer Python fundamentals questions in interviews
  • Demonstrate knowledge of file handling, exceptions, and context managers
  • Show understanding of list comprehensions, generators, and decorators
  • Write Python scripts for automation and cloud SDK use


Core Python Fundamentals

Q1: What are Python's mutable and immutable types?

Immutable — cannot be changed after creation:

  • int, float, bool, str, tuple, frozenset, bytes

Mutable — can be changed after creation:

  • list, dict, set, bytearray

# Immutable — rebinding, not mutation
s = "hello"
s = s + " world"  # Creates a new string object

# Mutable — actual in-place mutation
lst = [1, 2, 3]
lst.append(4)     # Modifies the same object

Tip

Immutability matters in two common interview traps: dictionary keys must be hashable (for built-in types, that means immutable), and mutable default arguments are created once and shared across calls.
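The default-argument pitfall is worth demonstrating: a mutable default is evaluated once, at function definition time, not on each call.

```python
def append_bad(item, bucket=[]):   # bucket is created once, at definition
    bucket.append(item)
    return bucket

append_bad(1)   # [1]
append_bad(2)   # [1, 2] — the same list persists between calls!

def append_good(item, bucket=None):  # idiomatic fix: None sentinel
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

append_good(1)  # [1]
append_good(2)  # [2] — a fresh list each call
```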

Q2: What is the difference between is and ==?

  • == — checks value equality (calls __eq__)
  • is — checks identity (same object in memory, same id())

a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True  — same values
print(a is b)  # False — different objects
print(a is c)  # True  — same object

Interview trap: Small integers (-5 to 256) and short strings are cached by CPython, so is can return True unexpectedly. Always use == for value comparison.
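A quick demonstration of the caching trap (CPython-specific behaviour; the exact cache range is an implementation detail). `int(...)` is used instead of literals so the compiler's constant folding doesn't hide the effect:

```python
a = int("100")   # within CPython's small-int cache (-5 to 256)
b = int("100")
print(a is b)    # True — both names point at the cached object

x = int("1000")  # outside the cache
y = int("1000")
print(x is y)    # False — two distinct objects with equal values
print(x == y)    # True — value comparison is what you actually want
```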

Q3: What is a Python decorator?

A decorator is a function that wraps another function to extend its behaviour without modifying it.

import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.3f}s")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(1)

slow_function()  # slow_function took 1.001s

Common use cases: logging, authentication, rate limiting, caching (@functools.lru_cache).
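`functools.lru_cache` is itself a decorator; memoising a recursive function is the classic illustration:

```python
import functools

@functools.lru_cache(maxsize=None)   # cache every distinct argument seen
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))          # instant — the uncached version would take hours
print(fib.cache_info())  # hit/miss/size statistics for the cache
```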

Q4: What is a generator and why use one?

A generator produces values lazily (on demand) using yield, saving memory compared to building a full list.

# List — builds all 1M items in memory
squares = [x**2 for x in range(1_000_000)]

# Generator — yields one at a time
def squares_gen(n):
    for x in range(n):
        yield x**2

gen = squares_gen(1_000_000)
next(gen)   # 0
next(gen)   # 1

# Generator expression
gen = (x**2 for x in range(1_000_000))

Key use cases: streaming files, processing log lines, paginated API results.
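Generators chain naturally into pipelines. This sketch (the log format is hypothetical, and the source list stands in for an open file handle) filters error lines without holding the file in memory:

```python
def read_lines(source):
    """Source stage — in practice `source` would be `open(path)`."""
    for line in source:
        yield line.rstrip("\n")

def errors_only(lines):
    """Filter stage — consumes lazily, yields lazily."""
    yield from (line for line in lines if "ERROR" in line)

log = ["INFO boot\n", "ERROR disk full\n", "INFO ok\n", "ERROR timeout\n"]
pipeline = errors_only(read_lines(log))
print(list(pipeline))  # ['ERROR disk full', 'ERROR timeout']
```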

Q5: Explain list comprehensions vs map/filter

data = [1, 2, 3, 4, 5, 6]

# List comprehension (preferred in Python)
evens = [x for x in data if x % 2 == 0]
squared = [x**2 for x in data]

# Equivalent using map/filter
evens = list(filter(lambda x: x % 2 == 0, data))
squared = list(map(lambda x: x**2, data))

# Dict comprehension
word_lengths = {w: len(w) for w in ["hello", "world"]}

# Set comprehension
unique_first_letters = {w[0] for w in ["apple", "avocado", "banana"]}

Q6: How does Python's with statement work?

The with statement uses context managers to guarantee setup/teardown:

# File handling — automatically closes file even if exception raised
with open("data.txt", "r") as f:
    content = f.read()

# Custom context manager using class
class DatabaseConnection:
    def __enter__(self):
        self.conn = connect_to_db()
        return self.conn

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.conn.close()
        return False  # Don't suppress exceptions

# Custom context manager using contextlib
from contextlib import contextmanager

@contextmanager
def timer():
    start = time.perf_counter()
    yield
    print(f"Elapsed: {time.perf_counter() - start:.3f}s")

with timer():
    do_work()
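The standard library also ships ready-made context managers; `contextlib.suppress` replaces a try/except/pass block:

```python
import contextlib
import os

# Equivalent to: try: os.remove(...) except FileNotFoundError: pass
with contextlib.suppress(FileNotFoundError):
    os.remove("/tmp/example-file-that-does-not-exist.txt")
```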

File Handling and I/O

Q7: How do you read a large file without loading it entirely into memory?

# Read line by line (generator-based)
with open("large_file.log") as f:
    for line in f:
        process(line.strip())

# Read in chunks
def read_chunks(filename, chunk_size=8192):
    with open(filename, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Using pathlib (convenient, but read_text() loads the whole file — small files only)
from pathlib import Path

path = Path("/var/log/app.log")
lines = path.read_text().splitlines()
path.write_text("new content")

Q8: How do you parse JSON and YAML in Python?

import json
import yaml  # pip install pyyaml

# JSON
with open("config.json") as f:
    config = json.load(f)

# Write JSON
with open("output.json", "w") as f:
    json.dump(data, f, indent=2)

# JSON from string
data = json.loads('{"key": "value"}')
json_str = json.dumps(data)

# YAML
with open("config.yaml") as f:
    config = yaml.safe_load(f)

Error Handling

Q9: How do you handle exceptions properly in Python?

# Be specific — don't catch Exception or BaseException broadly
try:
    result = risky_operation()
except FileNotFoundError as e:
    logger.error(f"File not found: {e}")
    raise  # Re-raise if you can't handle it
except (ValueError, TypeError) as e:
    logger.warning(f"Data error: {e}")
    return default_value
else:
    # Runs if no exception
    process(result)
finally:
    # Always runs — cleanup here
    cleanup()

# Custom exceptions
class AzureDeploymentError(Exception):
    """Raised when an Azure deployment fails."""
    def __init__(self, resource: str, message: str):
        self.resource = resource
        super().__init__(f"Failed to deploy {resource}: {message}")
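Raising and catching the custom exception keeps handling specific; the `deploy` function and resource name below are illustrative (the class is repeated here so the snippet stands alone):

```python
class AzureDeploymentError(Exception):  # as defined above
    def __init__(self, resource: str, message: str):
        self.resource = resource
        super().__init__(f"Failed to deploy {resource}: {message}")

def deploy(resource):
    raise AzureDeploymentError(resource, "quota exceeded")  # simulated failure

try:
    deploy("rg-myapp-dev")
except AzureDeploymentError as e:
    print(e)           # Failed to deploy rg-myapp-dev: quota exceeded
    print(e.resource)  # rg-myapp-dev
```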

Scripting and Automation

Q10: How do you run shell commands from Python?

import subprocess

# Simple command (check=True raises on non-zero exit)
result = subprocess.run(
    ["kubectl", "get", "pods", "-n", "default"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
print(result.returncode)  # 0 = success

# Shell command (use with caution — shell injection risk)
# Only use when you control the input
result = subprocess.run("ls -la | grep .py", shell=True, capture_output=True, text=True)

# Stream output in real time
with subprocess.Popen(
    ["ping", "-c", "4", "8.8.8.8"],
    stdout=subprocess.PIPE, text=True
) as proc:
    for line in proc.stdout:
        print(line, end="")

Warning

Never use shell=True with user-supplied input — it allows shell injection attacks. Always use a list of arguments.
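To see why the argument-list form is safe, consider hostile input (a contrived example, assuming a POSIX system with `ls`): as a list element it is passed to the program as one literal string and is never interpreted by a shell.

```python
import subprocess

user_pattern = "*.py; rm -rf /"  # hostile input

# UNSAFE: with shell=True the shell would interpret ';' and run the second command
# subprocess.run(f"ls {user_pattern}", shell=True)

# SAFE: the pattern is a single literal argument — no shell, no injection
result = subprocess.run(
    ["ls", user_pattern],
    capture_output=True, text=True,
)
print(result.returncode)  # non-zero (no such file), but nothing was injected
```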

Q11: How do you work with environment variables in Python?

import os
from dotenv import load_dotenv  # pip install python-dotenv

# Load .env file in development
load_dotenv()

# Read environment variables
db_url = os.environ["DATABASE_URL"]   # Raises if missing
debug = os.environ.get("DEBUG", "false").lower() == "true"
port = int(os.environ.get("PORT", "8080"))

# Set environment variables (current process only)
os.environ["MY_VAR"] = "value"

# Check if variable exists
if "AZURE_CLIENT_ID" not in os.environ:
    raise RuntimeError("AZURE_CLIENT_ID must be set")

Q12: How do you make HTTP requests in Python?

import requests  # pip install requests

# GET request
response = requests.get(
    "https://api.example.com/data",
    headers={"Authorization": f"Bearer {token}"},
    params={"page": 1, "size": 100},
    timeout=10,
)
response.raise_for_status()  # Raises HTTPError for 4xx/5xx
data = response.json()

# POST request
response = requests.post(
    "https://api.example.com/items",
    json={"name": "test", "value": 42},
    headers={"Content-Type": "application/json"},
)

# Session for connection pooling (better for multiple requests)
with requests.Session() as session:
    session.headers.update({"Authorization": f"Bearer {token}"})
    r1 = session.get("https://api.example.com/users")
    r2 = session.get("https://api.example.com/groups")
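Pagination pairs naturally with generators. This is a sketch: `fetch_page` stands in for a real `session.get(url).json()` call, and the `items`/`next` keys are assumptions about the API's response shape:

```python
def paginate(fetch_page, start_url):
    """Yield items across pages; fetch_page(url) -> dict with 'items' and 'next'."""
    url = start_url
    while url:
        page = fetch_page(url)
        yield from page["items"]
        url = page.get("next")  # None on the last page stops the loop

# Fake two-page API for demonstration
pages = {
    "/users?page=1": {"items": ["alice", "bob"], "next": "/users?page=2"},
    "/users?page=2": {"items": ["carol"], "next": None},
}
users = list(paginate(pages.get, "/users?page=1"))
print(users)  # ['alice', 'bob', 'carol']
```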

Azure SDK Example

# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id)

# List resource groups
for rg in client.resource_groups.list():
    print(f"{rg.name}: {rg.location}")

# Create resource group
rg = client.resource_groups.create_or_update(
    "rg-myapp-dev",
    {"location": "uksouth", "tags": {"env": "dev"}}
)

Quick Reference — Common Patterns

# Flatten a nested list
nested = [[1, 2], [3, 4], [5]]
flat = [item for sublist in nested for item in sublist]

# Group items by key
from collections import defaultdict

data = [{"dept": "IT", "name": "Alice"}, {"dept": "HR", "name": "Bob"}, {"dept": "IT", "name": "Carol"}]
by_dept = defaultdict(list)
for item in data:
    by_dept[item["dept"]].append(item["name"])

# Most common element
from collections import Counter
words = ["apple", "banana", "apple", "cherry", "banana", "apple"]
most_common = Counter(words).most_common(1)[0]  # ('apple', 3)

# Merge dicts (Python 3.9+)
merged = dict1 | dict2
dict1 |= dict2  # In-place

# Safe dict access
value = data.get("key", {}).get("nested", "default")

# Retry pattern
import time

def retry(func, retries=3, delay=1):
    """Retry with exponential backoff. In production, catch specific
    transient exceptions rather than bare Exception."""
    for attempt in range(retries):
        try:
            return func()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (2 ** attempt))  # Exponential backoff: 1s, 2s, 4s...
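Exercising the retry pattern with a function that fails twice and then succeeds (the helper is repeated with a short delay so the snippet stands alone):

```python
import time

def retry(func, retries=3, delay=0.01):  # as above, short delay for the demo
    for attempt in range(retries):
        try:
            return func()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (2 ** attempt))

attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))       # ok — succeeded on the third attempt
print(attempts["count"])  # 3
```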