System Design: Cinema Booking System
Design a cinema ticket booking system like BookMyShow or Fandango supporting seat selection, show scheduling, and concurrent bookings. Emphasizes per-seat locking, session management, and preventing double-booking at scale.
Requirements
Functional Requirements:
- Browse movies, cinemas, and available showtimes with seat availability
- Interactive seat map selection showing available, selected, and unavailable seats
- Time-limited seat reservation (8 minutes to complete purchase)
- Payment processing and e-ticket generation with QR code
- Cancellation and refund up to 30 minutes before showtime
- Support multiple cinemas, screens, and concurrent show management by cinema operators
Non-Functional Requirements:
- Prevent double-booking: two users cannot book the same seat for the same show
- Seat availability must be accurate within 30 seconds for concurrent users viewing the same screen
- Handle 50,000 bookings per hour during popular release weekends
- Seat reservation holds must reliably expire after 8 minutes
- 99.9% uptime; cinema chains depend on the system for box office revenue
Scale Estimation
For a national cinema platform: 5,000 cinemas × 10 screens × 200 seats × 5 shows/day = 50M seat-shows/day. Booking volume: assume ~30% of that inventory sells on a weekend day = 15M tickets/day, or ~174/second. Peak during a blockbuster opening: 10× average = ~1,740 bookings/second. Seat availability queries (read-heavy): users browse seat maps 10× more often than they book = ~1,740 reads/second baseline, up to ~17,400 reads/second peak.
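The rate arithmetic above can be sanity-checked in a few lines, starting from the 50M seat-shows/day inventory and the 15M tickets/day weekend volume:

```python
# Reproduces the booking-rate arithmetic from the estimate above.
cinemas, screens, seats_per_screen, shows_per_day = 5_000, 10, 200, 5
seat_shows_per_day = cinemas * screens * seats_per_screen * shows_per_day
assert seat_shows_per_day == 50_000_000

tickets_per_day = 15_000_000                       # weekend booking volume
bookings_per_sec = round(tickets_per_day / 86_400) # seconds in a day
peak_bookings_per_sec = bookings_per_sec * 10      # blockbuster opening, 10x
peak_reads_per_sec = peak_bookings_per_sec * 10    # browse : book ≈ 10 : 1
print(bookings_per_sec, peak_bookings_per_sec, peak_reads_per_sec)
# → 174 1740 17400
```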
High-Level Architecture
The booking flow has three distinct phases: Browse (read-heavy, highly cacheable), Reserve (write, contention-sensitive, time-bounded), and Purchase (payment, idempotent). Separating these phases allows each to be optimized independently.
Browse traffic hits a CDN-backed API for movie and showtime metadata. Seat availability maps are fetched from a read-optimized cache layer (Redis) that reflects current state including temporary holds. The cache is updated on every seat state change (reservation, expiry, purchase). Stale cache data within 30 seconds is acceptable for browsing but not for reservation — the reservation step always reads authoritative state.
Reservation uses per-seat optimistic locking in the database. When a user selects seats and clicks "Reserve", the system performs an UPDATE with a WHERE clause checking the seat status is still available. If the affected row count is less than the seats requested, one or more seats were taken — the client is notified with the current availability for conflict resolution. This single SQL update is atomic without requiring explicit row-level locks.
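The rowcount check is the core of this pattern. A minimal sketch, using SQLite in place of PostgreSQL (the table mirrors the `show_seats` schema described later; seat and user ids are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE show_seats (
    show_id TEXT, seat_id TEXT, status TEXT, held_by TEXT,
    PRIMARY KEY (show_id, seat_id))""")
conn.executemany(
    "INSERT INTO show_seats VALUES (?, ?, 'available', NULL)",
    [("show-1", "A1"), ("show-1", "A2")])

def reserve(user_id, show_id, seat_ids):
    """Atomically hold seats; succeed only if every seat is still available."""
    placeholders = ",".join("?" * len(seat_ids))
    cur = conn.execute(
        f"UPDATE show_seats SET status='held', held_by=? "
        f"WHERE show_id=? AND seat_id IN ({placeholders}) "
        f"AND status='available'",
        [user_id, show_id, *seat_ids])
    if cur.rowcount < len(seat_ids):  # someone got there first
        conn.rollback()               # undo any partial holds
        return False
    conn.commit()
    return True

print(reserve("alice", "show-1", ["A1", "A2"]))  # True
print(reserve("bob",   "show-1", ["A2"]))        # False: A2 already held
```

If the conditional UPDATE touches fewer rows than requested, the whole reservation is rolled back, so a losing client never ends up holding a partial seat set.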
Core Components
Seat Map & Availability Service
Manages the real-time availability state of every seat for every active showtime. State stored in Redis as a Hash per show: show:{show_id}:seats → {seat_id: status}. Status transitions: available → held:{user_id}:{expires_at} → sold. The availability map is the source of truth for the seat-selection UI. A WebSocket channel pushes real-time seat status updates to all users viewing the same seat map — users see seats turn gray when others reserve them, improving perceived accuracy and reducing conflict at purchase.
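A toy model of the status transitions, with a plain dict standing in for the Redis hash (the `held:{user_id}:{expires_at}` encoding follows the format above; everything else here is a simplification):

```python
import time

# In-memory stand-in for the Redis hash  show:{show_id}:seats → {seat_id: status}
seats = {"A1": "available", "A2": "available"}

HOLD_SECONDS = 8 * 60  # the 8-minute reservation window

def hold(seat_id, user_id, now=None):
    """available → held:{user_id}:{expires_at}; fails if not available."""
    now = now if now is not None else time.time()
    if seats.get(seat_id) != "available":
        return False
    seats[seat_id] = f"held:{user_id}:{int(now + HOLD_SECONDS)}"
    return True

def mark_sold(seat_id, user_id):
    """held → sold, but only for the user who holds the seat."""
    if seats.get(seat_id, "").startswith(f"held:{user_id}:"):
        seats[seat_id] = "sold"
        return True
    return False

hold("A1", "u42")
mark_sold("A1", "u42")
print(seats["A1"])  # → sold
```

In the real system these transitions happen inside Redis (e.g. via a Lua script) so that check-and-set is atomic under concurrency; the dict version only illustrates the state machine.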
Booking & Reservation Service
Orchestrates the reservation saga: read seat statuses from Redis → validate all requested seats are available → write the reservation to PostgreSQL with an optimistic concurrency check → update Redis to held state → set a Redis key with an 8-minute TTL for expiry notification (the key name must encode the show and seat ids, since expiry events carry only the key, not its value). When the key expires, a Redis keyspace event notifies an expiry worker, which runs a Lua cleanup script to reset the seat to available in Redis and publishes the expiry on a Pub/Sub channel; a second worker consumes that channel and applies the matching PostgreSQL update.
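The cleanup step can be sketched without Redis by parsing the `held:{user_id}:{expires_at}` encoding and resetting lapsed holds, mirroring what the Lua script would do (in production the trigger is a keyspace expiry event; this scan is a simplified stand-in):

```python
def expire_stale_holds(seats, now):
    """Reset holds whose expiry timestamp has passed; return freed seat ids."""
    freed = []
    for seat_id, status in seats.items():
        if status.startswith("held:"):
            _, _user, expires_at = status.split(":")
            if int(expires_at) <= now:
                seats[seat_id] = "available"
                freed.append(seat_id)  # would also enqueue a PostgreSQL update
    return freed

seats = {"A1": "held:u42:1000", "A2": "sold", "A3": "held:u7:2000"}
print(expire_stale_holds(seats, now=1500))  # → ['A1']
print(seats["A1"])                          # → available
```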
Payment & Ticket Service
Wraps Stripe or a payment gateway with idempotent order semantics. Each reservation has a unique booking_id used as the payment idempotency key. Successful payment triggers: seat status update to sold in both Redis and PostgreSQL, ticket record creation with a unique QR payload (HMAC-signed {booking_id, show_id, seats[], expires_at}), and notification (email, push) to the customer. Cancellation within the refund window reverses the payment via the processor refund API and resets seats to available.
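The HMAC-signed QR payload can be sketched with the standard library; the secret, field values, and `body.sig` token format here are assumptions of the sketch, not prescribed by the design:

```python
import hashlib, hmac, json

SECRET = b"demo-secret"  # in production, a managed signing key, rotated

def sign_ticket(payload):
    """Serialize the payload canonically and append an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_ticket(token):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return json.loads(body) if hmac.compare_digest(sig, expected) else None

token = sign_ticket({"booking_id": "b-1", "show_id": "s-1",
                     "seats": ["A1", "A2"], "expires_at": 1735689600})
assert verify_ticket(token)["seats"] == ["A1", "A2"]
assert verify_ticket(token.replace('"A2"', '"A9"')) is None  # tamper rejected
```

Because the payload is signed rather than stored opaquely, a gate scanner can validate tickets offline and fall back to a booking-id lookup only for revocation checks.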
Database Design
Shows: show_id UUID, cinema_id, screen_id, movie_id, starts_at, price_tier. Seats: seat_id UUID, screen_id, row_label, seat_number, seat_type ENUM(standard, premium, accessible). Show-seat state: show_seats (show_id, seat_id, status ENUM(available, held, sold), held_by, held_until, booking_id). The (show_id, seat_id) pair is the primary key. The optimistic-lock UPDATE: UPDATE show_seats SET status = 'held', held_by = $userId, held_until = now() + interval '8 minutes' WHERE show_id = $showId AND seat_id = $seatId AND status = 'available'.
Bookings: booking_id UUID, user_id, show_id, seat_ids[], status ENUM, idempotency_key UNIQUE, total_price, booked_at, cancelled_at. Tickets: ticket_id, booking_id, seat_id, qr_code_payload, scanned_at. A scheduled job runs every minute scanning for held_until < now() records with status held and resetting them, as a belt-and-suspenders complement to the Redis TTL expiry.
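The sweep itself is one conditional UPDATE. A sketch against SQLite, with `show_seats` columns from the schema above and epoch-second timestamps assumed in place of PostgreSQL's timestamptz:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE show_seats (
    show_id TEXT, seat_id TEXT, status TEXT,
    held_by TEXT, held_until INTEGER,
    PRIMARY KEY (show_id, seat_id))""")
now = int(time.time())
conn.executemany("INSERT INTO show_seats VALUES (?,?,?,?,?)", [
    ("s1", "A1", "held", "u1", now - 60),   # hold lapsed a minute ago
    ("s1", "A2", "held", "u2", now + 300),  # hold still live
    ("s1", "A3", "sold", None, None),
])

def sweep_expired_holds(now):
    """Reset every 'held' row whose hold has lapsed; returns rows reset."""
    cur = conn.execute(
        "UPDATE show_seats SET status='available', held_by=NULL, "
        "held_until=NULL WHERE status='held' AND held_until < ?", (now,))
    conn.commit()
    return cur.rowcount

print(sweep_expired_holds(now))  # → 1  (only A1 is reset)
```

Because the UPDATE re-checks `status='held'`, a sweep that races with a Redis-driven expiry for the same seat is harmless: one of them wins, the other matches zero rows.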
API Design
GET /api/v1/shows/{showId}/seats — returns current seat availability map (from Redis cache; max 30s stale).
POST /api/v1/bookings — body: {show_id, seat_ids[], idempotency_key}; reserves seats; returns {booking_id, held_until} or 409 CONFLICT with taken seats.
POST /api/v1/bookings/{bookingId}/pay — idempotent payment; returns {ticket_ids, qr_codes[]}.
DELETE /api/v1/bookings/{bookingId} — cancels booking and initiates refund if within cancellation window.
Scaling & Bottlenecks
Blockbuster openings create seat map hotspots: thousands of users simultaneously viewing the same popular showtime. CDN caching of seat maps is short-lived (30 seconds) to keep data reasonably fresh. The real-time WebSocket updates for seat status changes are fanned out through a Redis Pub/Sub channel per show — each WebSocket server subscribes to the channels for shows its connected clients are viewing and forwards updates. Connection affinity (consistent hashing by show_id) reduces the number of WebSocket servers subscribing to each channel.
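Connection affinity by show_id can be as simple as a consistent-hash ring over the WebSocket servers. A sketch (server names and virtual-node count are illustrative, not part of the design):

```python
import bisect, hashlib

def build_ring(servers, vnodes=64):
    """Sorted list of (hash, server) points; vnodes smooths the distribution."""
    points = []
    for s in servers:
        for v in range(vnodes):
            h = int(hashlib.md5(f"{s}#{v}".encode()).hexdigest(), 16)
            points.append((h, s))
    return sorted(points)

def server_for(points, show_id):
    """Map a show to the first ring point clockwise of its hash."""
    h = int(hashlib.md5(show_id.encode()).hexdigest(), 16)
    i = bisect.bisect(points, (h,)) % len(points)  # wrap around the ring
    return points[i][1]

ring = build_ring(["ws-1", "ws-2", "ws-3"])
# Every client viewing the same show is routed to the same WebSocket server,
# so at most one server per show needs to subscribe to that show's channel.
assert server_for(ring, "show-123") == server_for(ring, "show-123")
```

Adding or removing a server moves only the shows whose hash falls in the affected arcs, which keeps resubscription churn small during scale-out.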
The PostgreSQL show_seats table for popular shows becomes a hot page under concurrent reservations. A partial index on status = 'available' speeds availability checks. Partitioning show_seats by starts_at date allows past shows to be moved to a cold archive, keeping the active partition small and fast. Database connection pooling via PgBouncer is critical to handle the booking burst without exhausting PostgreSQL connections.
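SQLite also supports partial indexes, so the availability-check index can be demonstrated directly (the index name is an assumption of this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE show_seats (show_id TEXT, seat_id TEXT, status TEXT)")
# Index only the rows a reservation check actually touches.
conn.execute("""CREATE INDEX idx_show_seats_available
                ON show_seats (show_id, seat_id) WHERE status = 'available'""")
conn.executemany("INSERT INTO show_seats VALUES (?,?,?)",
                 [("s1", "A1", "available"), ("s1", "A2", "sold")])
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT seat_id FROM show_seats "
    "WHERE show_id='s1' AND status='available'").fetchall()
print(plan)  # plan detail should mention idx_show_seats_available
```

Because sold seats dominate the table once a show fills up, the partial index stays small and the availability check keeps touching only live rows.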
Key Trade-offs
- Optimistic vs. pessimistic seat locking: Optimistic locking adds no lock-wait overhead in the common low-contention case but surfaces user-visible conflicts on popular shows; pessimistic row-level locks prevent those conflicts but serialize concurrent bookings that touch the same seats and hold locks for the duration of each transaction.
- WebSocket vs. polling for seat map updates: WebSocket provides real-time updates with lower latency and fewer requests, but requires stateful server infrastructure; short-poll (every 10s) is simpler but less responsive.
- 8-minute hold duration: Long enough for most users to complete payment but short enough to return abandoned inventory quickly; a dynamic hold duration (shorter during high-demand periods) optimizes inventory utilization.
- Show-partitioned vs. cinema-partitioned data: Partitioning by show enables targeted expiry and archival of past shows but creates hotspots for popular shows; partitioning by cinema distributes load more evenly.