
Node.js Interview Questions

Master these 31 carefully curated questions to ace your next Node.js interview.

Quick Answer

Node.js is a JavaScript runtime built on V8 that runs on servers, with access to file system, network, and OS — unlike browser JS.

Detailed Explanation

Node.js uses Chrome's V8 engine but adds server-side APIs: fs (file system), http (web server), crypto, path, os. No DOM, window, or document objects. Uses CommonJS modules (require) alongside ES modules. Single-threaded event loop handles concurrent I/O. npm is the world's largest package registry. Used for APIs, microservices, real-time apps, CLI tools, and build tools.

Quick Answer

The event loop is Node's mechanism for handling non-blocking I/O by offloading operations to the system kernel or thread pool.

Detailed Explanation

Phases: (1) Timers: execute setTimeout/setInterval callbacks. (2) Pending callbacks: system-level callbacks. (3) Idle/Prepare: internal. (4) Poll: retrieve new I/O events, execute I/O callbacks. (5) Check: setImmediate callbacks. (6) Close callbacks: socket.on('close'). process.nextTick() callbacks run between phases, ahead of promise microtasks. Promises run in the microtask queue. libuv provides the thread pool (default 4 threads) for blocking operations like DNS lookup and file I/O.

Quick Answer

Middleware functions have access to the request object, the response object, and the next() function, executing code in a pipeline for each request.

Detailed Explanation

Middleware runs sequentially: app.use((req, res, next) => { ... next(); }). Types: application-level, router-level, error-handling (4 args), built-in (express.json()), third-party (cors, helmet). Can modify req/res, end the request cycle, or call next(). Error middleware: (err, req, res, next). Common uses: authentication, logging, CORS, body parsing, rate limiting, input validation.
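To show what app.use() and next() do under the hood, here is a toy re-implementation of Express-style chaining — a sketch for illustration, not Express itself (Express also matches routes, handles res.end(), and more):

```javascript
// Toy middleware pipeline: each middleware calls next() to pass control on.
// 4-argument middlewares are treated as error handlers, as in Express.
function runPipeline(middlewares, req, res) {
  let i = 0;
  function next(err) {
    if (err) {
      // Error path: skip ahead to the next 4-argument (error) middleware.
      while (i < middlewares.length && middlewares[i].length !== 4) i++;
    } else {
      // Normal path: skip over error handlers.
      while (i < middlewares.length && middlewares[i].length === 4) i++;
    }
    const mw = middlewares[i++];
    if (!mw) return;
    err ? mw(err, req, res, next) : mw(req, res, next);
  }
  next();
}

const log = [];
const req = { user: null };
const res = {};
runPipeline([
  (req, res, next) => { log.push('auth'); req.user = 'ada'; next(); },
  (req, res, next) => { log.push('handler'); res.body = `hello ${req.user}`; next(); },
  (err, req, res, next) => { log.push('error'); }, // skipped: next() never got an error
], req, res);
console.log(log); // [ 'auth', 'handler' ]
```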

Quick Answer

Streams are objects for reading/writing data in chunks rather than loading everything into memory at once.

Detailed Explanation

Types: Readable (fs.createReadStream), Writable (fs.createWriteStream), Duplex (TCP socket), Transform (zlib compression). Use pipe() to connect streams: readStream.pipe(writeStream). Events: data, end, error, finish. Streams handle large files efficiently — process 10GB file with constant memory. Backpressure mechanism prevents overwhelming slow consumers. Pipeline utility handles errors: pipeline(source, transform, dest, callback).

Quick Answer

npm is Node's package manager; package.json defines project metadata, dependencies, scripts, and configuration.

Detailed Explanation

npm install downloads packages to node_modules. Key package.json fields: dependencies (production), devDependencies (development), scripts (custom commands), engines (Node version). package-lock.json locks exact versions for reproducible builds. Semantic versioning: ^1.2.3 (minor updates), ~1.2.3 (patch only), 1.2.3 (exact). npx executes packages without installing. npm workspaces support monorepos.
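A hypothetical package.json ties these fields together (package names and version numbers here are purely illustrative):

```json
{
  "name": "demo-api",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js",
    "test": "node --test"
  },
  "dependencies": {
    "express": "^4.18.2"
  },
  "devDependencies": {
    "nodemon": "~3.0.1"
  },
  "engines": {
    "node": ">=18"
  }
}
```

Here ^4.18.2 accepts any 4.x.y at or above 4.18.2, while ~3.0.1 accepts only 3.0.x patches.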

Quick Answer

Use try-catch for synchronous code, .catch() for promises, error-first callbacks, and process.on('uncaughtException').

Detailed Explanation

Strategies: (1) Synchronous: try-catch blocks. (2) Callbacks: error-first pattern (err, data). (3) Promises: .catch() or try-catch with async/await. (4) Express: error middleware (err, req, res, next). (5) Global handlers: process.on('uncaughtException'), process.on('unhandledRejection'). (6) Custom error classes extending Error for typed errors. Best practice: fail fast, log everything, return appropriate HTTP status codes, never swallow errors silently.
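A short sketch combining strategies (3) and (6): a custom error class carrying an HTTP status, caught with async/await try-catch. The names here (NotFoundError, findUser) are illustrative.

```javascript
// Typed errors: extend Error so callers can branch on the error kind.
class NotFoundError extends Error {
  constructor(message) {
    super(message);
    this.name = 'NotFoundError';
    this.statusCode = 404; // HTTP status this error maps to
  }
}

async function findUser(id) {
  if (id !== 1) throw new NotFoundError(`user ${id} not found`);
  return { id, name: 'Ada' };
}

async function main() {
  try {
    return await findUser(42);
  } catch (err) {
    if (err instanceof NotFoundError) {
      return { status: err.statusCode, body: err.message };
    }
    throw err; // never swallow unknown errors silently
  }
}

main().then((res) => console.log(res)); // { status: 404, body: 'user 42 not found' }
```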

Quick Answer

The cluster module creates child processes (workers) that share the same server port, utilizing multiple CPU cores.

Detailed Explanation

Node.js is single-threaded. cluster.fork() creates worker processes, each running a copy of the server. The master process distributes incoming connections (round-robin on Linux). Workers share the same port. If a worker crashes, the master can restart it. PM2 abstracts this: pm2 start app.js -i max. Worker threads (worker_threads module) are different — they share memory via SharedArrayBuffer for CPU-intensive tasks.

Quick Answer

SQL databases use structured tables with relations (PostgreSQL); NoSQL uses flexible documents (MongoDB) — Node.js works with both.

Detailed Explanation

SQL (PostgreSQL, MySQL): ACID compliance, joins, structured schemas, ORMs like Sequelize/Prisma. Best for: complex queries, transactions, relational data. NoSQL (MongoDB): flexible schemas, horizontal scaling, native JSON, ODM like Mongoose. Best for: rapid prototyping, unstructured data, high write throughput. Node.js ORMs: Prisma (type-safe, both SQL/NoSQL), Sequelize (SQL), Mongoose (MongoDB). Choose based on data relationships, scale needs, and query patterns.

Quick Answer

Generate JWT on login with jsonwebtoken, send in response, verify on protected routes via middleware.

Detailed Explanation

Flow: (1) User sends credentials. (2) Server validates, creates JWT: jwt.sign({ userId }, secret, { expiresIn: '1h' }). (3) Client stores token (httpOnly cookie preferred). (4) Auth middleware: jwt.verify(token, secret). (5) Attach decoded user to req. (6) Refresh tokens: long-lived token to get new access tokens. Security: use strong secrets, short expiration, HTTPS only, httpOnly cookies. Libraries: passport.js for strategy-based auth, bcrypt for password hashing.

Quick Answer

Rate limiting restricts the number of requests from a client within a time window to prevent abuse and DDoS attacks.

Detailed Explanation

Implementation: (1) express-rate-limit middleware: windowMs (time window), max (request limit). (2) Redis-based for distributed systems (rate-limit-redis). (3) Algorithms: Fixed window, Sliding window, Token bucket, Leaky bucket. (4) Apply per IP, per user, or per API key. (5) Return 429 Too Many Requests with Retry-After header. (6) Different limits for different endpoints (stricter for login, lenient for reads). Also consider: API key tiers, cost-based limiting.

Quick Answer

Decompose by business domain, use API gateway, message queues for async communication, and containerize with Docker.

Detailed Explanation

Design principles: (1) Single responsibility per service. (2) API Gateway (Kong, Express gateway) for routing, auth, rate limiting. (3) Sync communication: REST or gRPC between services. (4) Async: message queues (RabbitMQ, Kafka) for event-driven architecture. (5) Service discovery (Consul, Kubernetes). (6) Database per service pattern. (7) Docker + Kubernetes for deployment. (8) Distributed tracing (Jaeger), centralized logging (ELK). (9) Circuit breaker pattern for fault tolerance.

Quick Answer

Worker threads enable CPU-intensive tasks to run in parallel threads sharing memory, unlike child processes.

Detailed Explanation

const { Worker, isMainThread, parentPort } = require('worker_threads'). Workers run in separate V8 instances but share memory via SharedArrayBuffer and Atomics. Use for: image processing, data compression, complex calculations, CSV parsing. Don't use for: I/O operations (event loop handles these efficiently). Workers communicate via message passing (postMessage). The libuv thread pool (used for fs and DNS, separate from worker threads) defaults to 4 threads (UV_THREADPOOL_SIZE). Piscina library provides a worker pool abstraction.

Quick Answer

Use migration tools like Knex.js, Prisma Migrate, or Sequelize migrations to version-control database schema changes.

Detailed Explanation

Migrations are versioned scripts that modify database schema. Flow: (1) Create migration file with up() and down() functions. (2) up() applies changes (CREATE TABLE, ALTER COLUMN). (3) down() reverses them (for rollbacks). (4) Migration table tracks which migrations have been applied. Tools: Prisma Migrate (declarative), Knex.js migrations (imperative), Flyway. Best practices: never edit applied migrations, test down() functions, run in CI/CD before deployment, separate data migrations from schema migrations.
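The core bookkeeping can be sketched in plain JavaScript: versioned up()/down() steps plus a record of which have been applied. Real tools like Knex or Prisma Migrate persist that record in a database table; this in-memory toy just shows the mechanics.

```javascript
// Each migration has an id and reversible up()/down() steps.
const migrations = [
  {
    id: '001_create_users',
    up: (db) => { db.tables.users = []; },
    down: (db) => { delete db.tables.users; },
  },
  {
    id: '002_add_orders',
    up: (db) => { db.tables.orders = []; },
    down: (db) => { delete db.tables.orders; },
  },
];

function migrate(db, applied) {
  for (const m of migrations) {
    if (!applied.includes(m.id)) {
      m.up(db);
      applied.push(m.id); // the "migrations table"
    }
  }
}

function rollback(db, applied) {
  const id = applied.pop(); // undo only the most recent migration
  migrations.find((m) => m.id === id).down(db);
}

const db = { tables: {} };
const applied = [];
migrate(db, applied);
console.log(Object.keys(db.tables)); // [ 'users', 'orders' ]
rollback(db, applied);
console.log(Object.keys(db.tables)); // [ 'users' ]
```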

Quick Answer

Event-driven architecture uses event emitters and listeners for loose coupling, where components communicate by emitting events.

Detailed Explanation

Node.js EventEmitter is the foundation. Pattern: emitter.emit('event', data) → listener runs. Used throughout Node: HTTP server (request event), streams (data event), process (exit event). In architecture: domain events (OrderPlaced, UserRegistered) decouple services. Implementation: (1) In-process EventEmitter for simple cases. (2) Redis Pub/Sub for multi-process. (3) Message broker (RabbitMQ, Kafka) for microservices. Benefits: loose coupling, scalability, easy to add new consumers.

Quick Answer

Profile with clinic.js, check event loop lag, optimize database queries, add caching, and scale horizontally.

Detailed Explanation

Diagnosis: (1) Use clinic.js or 0x for flame graphs to find CPU bottlenecks. (2) Monitor event loop lag (event-loop-lag package). (3) Check for blocking operations on main thread. (4) Profile database queries (slow query log). Fixes: (1) Add Redis caching for frequent queries. (2) Database indexing and query optimization. (3) Connection pooling. (4) Horizontal scaling with cluster/PM2/Kubernetes. (5) Offload CPU work to worker threads. (6) Implement request queuing. (7) CDN for static assets.
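A rough home-grown version of the event-loop-lag check in step (2): schedule a timer and measure how late it fires. A consistently large lag means something is blocking the main thread.

```javascript
// Returns the observed lag (ms) beyond the requested 10ms timer delay.
function measureLag() {
  return new Promise((resolve) => {
    const start = process.hrtime.bigint();
    setTimeout(() => {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      resolve(Math.max(0, elapsedMs - 10));
    }, 10);
  });
}

const pending = measureLag();

// Simulate a blocking operation, then observe the lag it caused.
const end = Date.now() + 50;
while (Date.now() < end) {} // block the event loop for ~50ms

pending.then((lag) => console.log(`event loop lag: ~${lag.toFixed(1)}ms`));
```

Libraries like blocked-at do this continuously and report where the blocking call originated.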

Quick Answer

Use WebSockets (Socket.io) for real-time delivery, with Redis Pub/Sub for multi-server broadcast and database persistence.

Detailed Explanation

Architecture: (1) Socket.io for WebSocket connections with fallback. (2) Redis adapter for multi-server setups (socket.io-redis). (3) Notification service receives events from other services via message queue. (4) Store notifications in database (MongoDB/PostgreSQL). (5) Client connects on login, receives real-time updates. (6) Fallback: long polling or SSE for older clients. (7) Push notifications for mobile (FCM/APNs). (8) Notification preferences per user. (9) Batch notifications to prevent spam.

Quick Answer

Stream the CSV file, process records in batches, use a job queue for heavy processing, and provide progress updates.

Detailed Explanation

Approach: (1) Accept multipart upload. (2) Stream CSV with csv-parser (don't load entire file into memory). (3) Validate each row as it's read. (4) Batch insert into database (100-500 rows per batch). (5) For heavy processing: add to job queue (Bull/BullMQ with Redis). (6) Background worker processes jobs. (7) Send progress via WebSocket or polling endpoint. (8) Return job ID immediately (202 Accepted). (9) Handle errors per-row (don't fail entire import). (10) Set upload size limits and rate limits.

Quick Answer

Netflix uses Node.js for API gateway, A/B testing, and SSR, handling billions of requests with microservices architecture.

Detailed Explanation

Netflix's approach: (1) Node.js powers their API layer and SSR for the website. (2) Microservices communicate via gRPC and Kafka. (3) Custom Node.js platform team maintains internal tools. (4) Extensive A/B testing framework. (5) Chaos engineering (Chaos Monkey) to test resilience. (6) They reduced startup time from 45 minutes (Java) to under a minute with Node.js. (7) Edge services handle routing, auth, and API composition. Key takeaway: Node.js excels at I/O-heavy, data-streaming workloads.

Quick Answer

Use geospatial indexing, WebSockets for real-time location updates, and a matching algorithm with Redis for fast lookups.

Detailed Explanation

Architecture: (1) Drivers send GPS coordinates via WebSocket every 3-5 seconds. (2) Store locations in Redis with geospatial indexing (GEOADD, GEORADIUS). (3) When rider requests: find nearby drivers with GEORADIUS, rank by distance/rating/ETA. (4) Send match to driver (accept/reject). (5) If rejected, try next driver. (6) Once matched, both see real-time location updates. (7) Service scales horizontally with Redis Cluster. (8) Consider surge pricing based on demand/supply ratio. (9) Message queue for ride events (started, completed).
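The matching core of step (3) boils down to a distance-ranked lookup. Redis GEORADIUS does this server-side; the plain-JS sketch below (with hypothetical coordinates) shows the idea using the haversine formula:

```javascript
// Great-circle distance between two { lat, lon } points, in km.
function haversineKm(a, b) {
  const R = 6371; // mean Earth radius, km
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Rank drivers by distance; a real system would also weight rating and ETA.
function nearestDrivers(rider, drivers, limit) {
  return drivers
    .map((d) => ({ ...d, distanceKm: haversineKm(rider, d) }))
    .sort((x, y) => x.distanceKm - y.distanceKm)
    .slice(0, limit);
}

const rider = { lat: 40.7128, lon: -74.0060 };
const drivers = [
  { id: 'd1', lat: 40.7138, lon: -74.0065 },
  { id: 'd2', lat: 40.7306, lon: -73.9352 },
];
console.log(nearestDrivers(rider, drivers, 1)[0].id); // d1
```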

Quick Answer

PayPal found Node.js nearly halved development time, handled more requests per second, and required fewer engineers.

Detailed Explanation

PayPal's results: (1) Built in almost half the time (3 vs 5 months). (2) 33% fewer lines of code. (3) Written by fewer people. (4) Handled double the requests per second vs Java. (5) 35% decrease in average response time. (6) Full-stack JavaScript — same language for frontend and backend. (7) npm ecosystem accelerated development. (8) They open-sourced kraken.js (Express-based framework). Key insight: for I/O-heavy web services, Node's async model outperforms thread-based approaches.

Quick Answer

process.nextTick() executes before I/O events in the current phase; setImmediate() executes in the check phase after I/O.

Detailed Explanation

process.nextTick() callbacks are processed after the current operation completes, before the event loop continues — it essentially 'cuts in line'. setImmediate() schedules callback in the check phase of the next event loop iteration, after I/O polling. Recursive nextTick() can starve I/O (blocks event loop). setImmediate() is generally safer for deferring work. In practice: use nextTick for ensuring callback runs after current sync code; use setImmediate for yielding to event loop.

Quick Answer

Node provides child_process module with spawn, exec, execFile, and fork methods for creating child processes.

Detailed Explanation

spawn(command, args): streams I/O, best for long-running processes. exec(command): buffers output, returns callback with stdout/stderr, good for simple commands. execFile(file): like exec but runs file directly (no shell). fork(modulePath): special spawn for Node scripts, creates IPC channel for parent-child messaging via process.send()/process.on('message'). Use worker_threads for CPU-intensive JS work (shared memory via SharedArrayBuffer). cluster module forks workers for HTTP server load balancing.

Quick Answer

Cluster creates multiple worker processes sharing the same server port, managed by a master process for load balancing.

Detailed Explanation

cluster.fork() creates worker processes (copies of the master). Workers share the same port via round-robin (Linux) or OS-level (Windows) load balancing. Master manages workers: restart on crash, distribute work, graceful shutdown. Each worker has its own event loop and memory. IPC channel enables master-worker communication. PM2 is a production process manager built on cluster. For CPU-bound tasks, worker_threads are better as they share memory. Typical setup: fork workers equal to CPU cores.

Quick Answer

Streams process data piece by piece without loading entire content into memory. Types: Readable, Writable, Duplex, Transform.

Detailed Explanation

Readable: source of data (fs.createReadStream, http.IncomingMessage). Writable: destination (fs.createWriteStream, http.ServerResponse). Duplex: both readable and writable (TCP sockets, zlib). Transform: modifies data passing through (zlib.createGzip, crypto.createCipheriv). Streams operate in flowing or paused mode. pipe() connects readable to writable with automatic backpressure handling. pipeline() (util) handles errors and cleanup. Modern: for await...of iterates async readable streams.

Quick Answer

Use heap snapshots, --inspect flag with Chrome DevTools, track event listeners, and monitor process.memoryUsage().

Detailed Explanation

Detection: process.memoryUsage() tracking over time — heapUsed growing steadily. Tools: node --inspect + Chrome DevTools Memory tab (heap snapshots, allocation timeline). Common causes: (1) Global variables accumulating data. (2) Closures retaining large scopes. (3) Event listeners not removed (emitter.removeListener). (4) Unclosed database connections/streams. (5) Caching without eviction (use LRU cache). (6) timer references (clearInterval/clearTimeout). Fix: WeakRef/WeakMap for caches, proper cleanup in process.on('exit'), connection pooling, stream.destroy().
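Cause (5), caching without eviction, is worth a sketch: a minimal LRU cache built on Map's insertion order keeps memory bounded instead of growing forever.

```javascript
// Minimal LRU cache: Map iterates in insertion order, so the first key
// is always the least recently used.
class LruCache {
  constructor(max) { this.max = max; this.map = new Map(); }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as recently used
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value); // evict the LRU entry
    }
  }
}

const cache = new LruCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');     // touch 'a' so 'b' becomes the eviction candidate
cache.set('c', 3);  // evicts 'b'
console.log([...cache.map.keys()]); // [ 'a', 'c' ]
```

Production code would typically reach for the lru-cache package, which adds TTLs and size-based limits.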

Quick Answer

Use sliding window or token bucket algorithm with Redis for distributed rate limiting, returning 429 status when exceeded.

Detailed Explanation

Algorithms: (1) Fixed Window: count requests per time window (simple but bursty). (2) Sliding Window: tracks requests in rolling window (smoother). (3) Token Bucket: tokens refill at fixed rate, requests consume tokens (allows bursts). Implementation: Redis INCR + EXPIRE for distributed apps. Store key = IP/userId, value = request count. Middleware checks count before processing. Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset. Libraries: express-rate-limit, rate-limiter-flexible. Consider: different limits per endpoint, authenticated vs anonymous users.
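A single-process token bucket is small enough to write out; a distributed version would hold the same state in Redis (typically updated atomically via a Lua script):

```javascript
// Token bucket: a burst of `capacity` requests, refilling continuously.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.last = Date.now();
  }

  allow() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;

    if (this.tokens >= 1) { this.tokens -= 1; return true; }
    return false; // caller should respond 429 with a Retry-After header
  }
}

const bucket = new TokenBucket(3, 1); // burst of 3, refills 1 token/second
const results = [1, 2, 3, 4].map(() => bucket.allow());
console.log(results); // [ true, true, true, false ]
```

One bucket per IP/userId key (e.g. in a Map or Redis hash) turns this into per-client limiting.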

Quick Answer

Profile with clinic.js, check event loop lag, analyze slow database queries, and review middleware chain.

Detailed Explanation

Steps: (1) clinic.js doctor/flame/bubbleprof for profiling. (2) Event loop lag: blocked-at package or process.hrtime(). (3) APM tools: New Relic, Datadog, or 0x for flamegraphs. (4) Slow queries: enable query logging, add indexes, use EXPLAIN. (5) Memory pressure: GC pauses visible in --trace-gc. (6) Middleware audit: measure each middleware's execution time. (7) Connection pooling: ensure database pool isn't exhausted. (8) DNS resolution: cache DNS lookups. (9) Network: check if upstream services are slow. (10) Consider caching hot endpoints with Redis.

Quick Answer

Use WebSockets (Socket.io) for real-time delivery, Redis Pub/Sub for cross-server messaging, and queue for reliability.

Detailed Explanation

Architecture: (1) WebSocket server (Socket.io) for persistent client connections. (2) Redis Pub/Sub for broadcasting across multiple Node instances. (3) Message queue (RabbitMQ/SQS) for guaranteed delivery. (4) REST API for sending notifications. Flow: API receives notification → publishes to Redis → all Socket.io servers receive → deliver to connected clients. Offline users: store in database, deliver on reconnect. Additional: push notifications (FCM/APNs) for mobile, email fallback, rate limiting per user, notification preferences and read status tracking.

Quick Answer

Use saga pattern for distributed transactions, event sourcing, idempotent operations, and eventual consistency with message queues.

Detailed Explanation

Challenges: no distributed ACID transactions across services. Patterns: (1) Saga: sequence of local transactions with compensating actions for rollback. Choreography (event-driven) vs Orchestration (central coordinator). (2) Event Sourcing: store events, not state — replay to rebuild. (3) Outbox pattern: write to DB + outbox table atomically, separate process publishes events. (4) Idempotency keys: prevent duplicate processing. (5) Two-phase commit (2PC): use sparingly, creates bottleneck. (6) CQRS: separate read and write models. Tools: Apache Kafka, RabbitMQ, AWS SQS for reliable messaging.
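Pattern (4) in miniature: an idempotency-key check means a redelivered message replays the stored result instead of re-running the side effect. The names (chargeCard, processed) are illustrative, and a real store would be a database or Redis with a TTL rather than an in-memory Map.

```javascript
const processed = new Map(); // idempotency-key → stored result

async function chargeCard(idempotencyKey, amount, doCharge) {
  if (processed.has(idempotencyKey)) {
    return processed.get(idempotencyKey); // duplicate delivery: replay result
  }
  const result = await doCharge(amount);
  processed.set(idempotencyKey, result);
  return result;
}

let charges = 0;
const charge = async (amount) => { charges++; return { ok: true, amount }; };

(async () => {
  await chargeCard('key-1', 100, charge);
  await chargeCard('key-1', 100, charge); // retried message: no double charge
  console.log('times charged:', charges); // times charged: 1
})();
```

A production version also needs to handle two concurrent requests with the same key (e.g. by inserting the key with a unique constraint before executing).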

Quick Answer

Handle SIGTERM/SIGINT signals to stop accepting connections, finish in-flight requests, close database connections, then exit.

Detailed Explanation

Implementation: (1) process.on('SIGTERM', shutdown). (2) server.close() — stops accepting new connections, waits for existing to finish. (3) Set timeout for forced exit (e.g., 30s). (4) Close database pools, Redis connections, message queue consumers. (5) Flush logs and metrics. (6) process.exit(0). For Kubernetes: readiness probe fails → no new traffic → SIGTERM → graceful shutdown. Health check endpoint returns 503 during shutdown. Handle both SIGTERM (kill) and SIGINT (Ctrl+C). Use stoppable package for HTTP keep-alive connections.

Ready to master Node.js interviews?

Start learning with our comprehensive course and practice these questions.