System Design: Wishlist & Saved Items
System design of a wishlist and saved items service supporting multiple lists, sharing, price tracking, and back-in-stock notifications for e-commerce platforms.
Requirements
Functional Requirements:
- Users create and manage multiple wishlists (e.g., 'Birthday', 'Home Office')
- Add/remove products to/from wishlists, with optional notes and priority
- Share wishlists via public link (gift registries, wedding lists)
- Price drop notifications for wishlisted items
- Back-in-stock notifications for out-of-stock wishlisted items
- Move items from wishlist to cart with one click
Non-Functional Requirements:
- Support 50M users with an average of 3 wishlists and 15 items per list
- Wishlist load under 100ms (p99)
- Notification delivery within 5 minutes of price change or restock
- 99.9% availability; wishlist data must never be lost
- Handle viral shared wishlists (celebrity gift lists with 1M+ views)
Scale Estimation
- Items: 50M users × 3 lists × 15 items = 2.25B wishlist items
- Storage: each item (user_id, list_id, product_id, added_at, notes) is ~200 bytes, so 2.25B × 200 bytes ≈ 450GB total
- Reads: 10M DAU × 2 wishlist views = 20M reads/day ≈ 231 QPS
- Writes: 5M adds/removes per day ≈ 58 writes/sec
- Shared wishlist reads: 1M/day, bursty around holidays
- Notifications: 2.25B items must be checked against price and inventory changes, which requires efficient indexing
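The estimates above can be sanity-checked with a quick back-of-envelope script (all constants are the hypothetical figures from this section):

```python
# Back-of-envelope scale check; constants are the assumed figures above.
USERS = 50_000_000
LISTS_PER_USER = 3
ITEMS_PER_LIST = 15
BYTES_PER_ITEM = 200          # user_id, list_id, product_id, added_at, notes

total_items = USERS * LISTS_PER_USER * ITEMS_PER_LIST   # 2.25B items
total_bytes = total_items * BYTES_PER_ITEM              # ~450 GB

DAU = 10_000_000
read_qps = (DAU * 2) / 86_400        # 20M reads/day -> ~231 QPS
write_qps = 5_000_000 / 86_400       # 5M writes/day -> ~58/sec

print(total_items, total_bytes, round(read_qps), round(write_qps))
```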
High-Level Architecture
The Wishlist Service uses PostgreSQL as the primary data store with Redis caching for hot wishlists. The read path: Wishlist API → Redis cache (per-user wishlist data) → PostgreSQL fallback. The write path: Wishlist API → PostgreSQL (write) → invalidate Redis cache → emit event to Kafka (for notification processing).
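The read and write paths above follow a standard cache-aside pattern. A minimal sketch, using in-memory dicts as stand-ins for Redis, PostgreSQL, and the Kafka producer (all names here are illustrative, not the service's actual code):

```python
import json

# In-memory stand-ins for Redis, PostgreSQL, and the Kafka producer.
redis: dict[str, str] = {}
db: dict[tuple, list] = {}
events: list[dict] = []

def cache_key(user_id: int, list_id: str) -> str:
    return f"wl:{user_id}:{list_id}"

def get_wishlist(user_id: int, list_id: str) -> list:
    """Read path: Redis first, PostgreSQL on a miss, then repopulate."""
    key = cache_key(user_id, list_id)
    cached = redis.get(key)
    if cached is not None:                    # cache hit
        return json.loads(cached)
    items = db.get((user_id, list_id), [])    # fall back to PostgreSQL
    redis[key] = json.dumps(items)            # repopulate (1h TTL in prod)
    return items

def add_item(user_id: int, list_id: str, product_id: int) -> None:
    """Write path: write DB, invalidate cache, emit event for notifications."""
    db.setdefault((user_id, list_id), []).append({"product_id": product_id})
    redis.pop(cache_key(user_id, list_id), None)   # invalidate cached list
    events.append({"type": "item_added",
                   "user_id": user_id, "product_id": product_id})
```

Invalidate-on-write (rather than update-on-write) keeps the cache logic simple at the cost of one extra DB read after each write.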
The notification pipeline monitors two event streams: (1) price changes from the Product Catalog Service (Kafka topic price-changes) and (2) inventory restocks from the Inventory Service (Kafka topic inventory-restocks). A Notification Matcher consumer joins these events against a Redis-cached inverted index: wishlisted:{product_id} → set of user_ids. When a match is found, a notification is enqueued in the Notification Service (SQS → email/push workers).
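The matcher's core join is a pure function of the event and the inverted index. A sketch of that logic, with a plain dict of sets standing in for the Redis sets (the field names are illustrative):

```python
# Core join of the Notification Matcher: one price-change event fanned
# out to every user in the wishlisted:{product_id} inverted index.
def match_price_change(event: dict, index: dict) -> list:
    """Return one notification payload per user who wishlisted the product."""
    user_ids = index.get(event["product_id"], set())   # SMEMBERS wishlisted:X
    return [
        {"user_id": uid,
         "product_id": event["product_id"],
         "old_price": event["old_price"],
         "new_price": event["new_price"]}
        for uid in sorted(user_ids)
    ]
```

In production this function would run inside the Kafka consumer loop, with the resulting payloads enqueued to SQS.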
Shared wishlists use a separate read path: public wishlist URL → CDN (cached HTML for popular lists) → Wishlist Service (renders list with current prices and availability by calling the Product Catalog Service).
Core Components
Wishlist Storage & Caching
Wishlists are stored in PostgreSQL: a wishlists table (list_id UUID, user_id, name, is_public BOOLEAN, share_token VARCHAR, created_at) and a wishlist_items table (item_id, list_id FK, product_id, notes TEXT, priority INT, added_at). Redis cache: Hash wl:{user_id}:{list_id} → serialized list of items with product metadata (denormalized for fast rendering). Cache TTL: 1 hour, invalidated on write. For users with >100 items across all lists, only the most recently accessed list is cached to manage memory.
Notification Matching Engine
Efficiently matching price changes and restocks against 2.25B wishlisted items requires an inverted index. A Redis set wishlisted:{product_id} stores all user_ids who have wishlisted that product. When a price change event arrives for product_id=X, the consumer runs SMEMBERS wishlisted:X and enqueues a notification for each user. The index is built from PostgreSQL on startup and maintained incrementally via wishlist add/remove events. Memory: assuming 50M unique products are wishlisted, with an average of 45 users per product, the raw payload is ~50M sets × 45 members × 8 bytes (user_id) = ~18GB; Redis per-key and per-member overhead pushes the real footprint higher, so budget headroom beyond 18GB — still feasible on a single large Redis instance, ideally with a replica for failover.
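Incremental maintenance of the index from add/remove events mirrors SADD/SREM on the per-product sets. A minimal in-memory sketch (event field names are assumptions, matching the write-path events):

```python
# Inverted index maintenance: wishlist add/remove events -> per-product sets.
index: dict = {}   # product_id -> set of user_ids (stand-in for Redis sets)

def apply_event(event: dict) -> None:
    pid, uid = event["product_id"], event["user_id"]
    if event["type"] == "item_added":
        index.setdefault(pid, set()).add(uid)      # SADD wishlisted:{pid}
    elif event["type"] == "item_removed":
        users = index.get(pid)
        if users is not None:
            users.discard(uid)                     # SREM wishlisted:{pid}
            if not users:
                del index[pid]   # drop empty sets to reclaim memory
```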
Shared Wishlist Renderer
Public wishlists are rendered with live pricing and availability. For viral lists (>10K views/day), the rendered HTML is cached at the CDN with a 5-minute TTL and a stale-while-revalidate window of 60 minutes. The renderer enriches each wishlist item with current price, availability, product image, and a direct add-to-cart link. For gift registry use cases, the renderer also shows which items have already been purchased by other viewers (tracked via an anonymous purchase counter on the wishlist item).
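The CDN policy above maps directly onto a standard Cache-Control header. A small sketch building that header from the TTL and stale-while-revalidate values given in this section:

```python
# CDN caching policy for viral shared wishlists, expressed as the
# standard Cache-Control header: 5-minute freshness, 60-minute
# stale-while-revalidate window (values from the section above).
def cache_control(ttl_s: int = 300, swr_s: int = 3600) -> str:
    return f"public, max-age={ttl_s}, stale-while-revalidate={swr_s}"
```

With stale-while-revalidate, the CDN keeps serving the stale page while it refetches in the background, so a viral list never blocks on the origin.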
Database Design
PostgreSQL schema: wishlists table (list_id UUID PK, user_id BIGINT, name VARCHAR(100), is_public BOOLEAN DEFAULT false, share_token VARCHAR(32) UNIQUE, created_at TIMESTAMP). wishlist_items table (item_id UUID PK, list_id UUID FK, product_id BIGINT, notes TEXT, priority SMALLINT DEFAULT 0, price_at_add DECIMAL, added_at TIMESTAMP). Indexes: (user_id) on wishlists for user's lists; (list_id, added_at DESC) on items for chronological display; (product_id) on items for the notification inverted index rebuild.
Redis data model: Hash wl:{user_id}:{list_id} for cached list data. Set wishlisted:{product_id} for notification matching. String wl_share:{share_token} → list_id for public link resolution.
API Design
- GET /api/v1/wishlists — Fetch all wishlists for the authenticated user, with item counts
- POST /api/v1/wishlists/{list_id}/items — Add an item to a wishlist; body contains product_id, notes, priority; returns the updated list
- GET /api/v1/wishlists/shared/{share_token} — Fetch a shared wishlist with live pricing; public endpoint, no auth required
- POST /api/v1/wishlists/{list_id}/items/{item_id}/move-to-cart — Move an item from wishlist to shopping cart; calls the Cart Service internally
Scaling & Bottlenecks
The notification matching inverted index is the primary memory constraint. With 50M wishlisted products and 18GB of index data, a single Redis instance suffices. However, during major sales events (Black Friday), price changes flood the system: 10M price changes in an hour = 2,778 price events/sec. Each event triggers a Redis SMEMBERS lookup and N notification enqueues. To handle this, the notification consumer runs as a Kafka consumer group with 16 partitions, processing events in parallel. Notification delivery is rate-limited per user (max 5 notifications/hour) to prevent alert fatigue.
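The per-user limit (max 5 notifications/hour) can be implemented as a sliding window of send timestamps. A sketch with in-process state; in production this state would live in Redis (e.g. a sorted set per user), and all names here are illustrative:

```python
from collections import deque
from typing import Optional
import time

WINDOW_S = 3600        # 1-hour sliding window
MAX_PER_WINDOW = 5     # max notifications per user per window
_sent: dict = {}       # user_id -> deque of send timestamps

def allow_notification(user_id: int, now: Optional[float] = None) -> bool:
    """Return True and record the send if the user is under the limit."""
    now = time.time() if now is None else now
    window = _sent.setdefault(user_id, deque())
    while window and window[0] <= now - WINDOW_S:   # evict expired sends
        window.popleft()
    if len(window) >= MAX_PER_WINDOW:
        return False                                # suppress: at the cap
    window.append(now)
    return True
```

Suppressed notifications can simply be dropped, or deferred to a digest, depending on product requirements.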
Viral shared wishlists (celebrity gift registries with 1M+ views) create hot-key problems. The CDN cache absorbs most reads, but cache misses trigger expensive PostgreSQL queries joining wishlist items with live product data. A dedicated read replica handles shared wishlist queries, and the Wishlist Service pre-warms the CDN cache for wishlists exceeding 1,000 views/day.
Key Trade-offs
- Redis inverted index over database JOIN for notifications: O(1) lookup per price change event enables real-time notification matching, but requires 18GB of RAM and careful synchronization with PostgreSQL
- Denormalized cache with product data over lazy loading: Faster wishlist rendering (single cache read vs. N product lookups), but cache must be invalidated when product data changes (name, image, price)
- 5-minute CDN TTL for shared wishlists: Balances freshness (prices may be slightly stale) with performance for viral lists
- Per-user notification rate limiting: Prevents alert fatigue but may delay delivery of genuinely important price drops during high-volume periods such as sales events