Bulk Data Import (CSV / SQL dump)
Real-world databases need bulk data loading: importing thousands of rows from CSV files, restoring SQL dump files, or running multi-row INSERT statements. This is how you populate production databases, restore backups, and migrate data between systems.
Bulk Import Methods
- Multi-row INSERT — fastest for small batches (100–1000 rows)
- COPY (PostgreSQL) — fastest for large CSV imports (millions of rows)
- LOAD DATA INFILE (MySQL) — equivalent to COPY for MySQL
- SQL dump files (.sql) — complete database snapshots with schema + data
- psql -f file.sql — restore PostgreSQL dump via CLI
- mysql < file.sql — restore MySQL dump via CLI
- pgAdmin / DBeaver — GUI import for CSV files
Bulk Import Examples
-- Multi-row INSERT (best for < 1000 rows)
INSERT INTO products (name, price, stock) VALUES
('Keyboard', 49.99, 300),
('Mouse', 29.99, 500),
('Monitor', 299.99, 80),
('Webcam', 79.99, 150),
('USB Hub', 19.99, 400);
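For a few thousand rows, it usually helps to batch several multi-row INSERTs inside a single transaction so the database commits once instead of once per statement. A minimal sketch, using the same table as above with illustrative values:

-- Batched inserts in one transaction (illustrative values; one commit at the end)
BEGIN;
INSERT INTO products (name, price, stock) VALUES
('Desk Lamp', 24.99, 200),
('Headset', 59.99, 120);
INSERT INTO products (name, price, stock) VALUES
('Mic Stand', 14.99, 90),
('Mouse Pad', 9.99, 600);
COMMIT;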
-- PostgreSQL COPY from CSV file
-- CSV file: products.csv
-- name,price,stock
-- Laptop Pro,1299.99,30
-- Gaming PC,1999.99,15
COPY products (name, price, stock)
FROM '/tmp/products.csv'
DELIMITER ','
CSV HEADER;
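Note that COPY ... FROM reads the file on the database server itself and needs elevated privileges (superuser, or membership in the pg_read_server_files role). If the CSV lives on your client machine, psql's \copy meta-command streams it over the connection instead. A sketch with the same file and columns:

-- Client-side import via psql (file is read on the client, not the server)
\copy products (name, price, stock) FROM 'products.csv' DELIMITER ',' CSV HEADER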
-- MySQL LOAD DATA INFILE
LOAD DATA INFILE '/tmp/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(name, price, stock);
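Like COPY, plain LOAD DATA INFILE reads the file on the server, and the server's secure_file_priv setting limits which directories it may read. When the CSV is on the client, the LOCAL variant uploads it over the connection, assuming local_infile is enabled on both client and server:

-- Client-side variant (requires local_infile=ON on client and server)
LOAD DATA LOCAL INFILE '/tmp/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(name, price, stock);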
-- Export to CSV (PostgreSQL)
COPY products TO '/tmp/products_backup.csv'
DELIMITER ',' CSV HEADER;
-- Export to CSV (MySQL)
SELECT * FROM products
INTO OUTFILE '/tmp/products_backup.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
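SELECT ... INTO OUTFILE is subject to the same secure_file_priv restriction, and MySQL refuses to overwrite an existing file. You can check where the server may read and write before exporting:

-- Directory MySQL may use for imports/exports (empty = anywhere, NULL = disabled)
SHOW VARIABLES LIKE 'secure_file_priv';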
-- Restore full SQL dump
-- psql -U postgres -d ecommerce_db -f backup.sql
-- mysql -u root -p ecommerce_db < backup.sql
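The dump files restored above are typically created with pg_dump and mysqldump. A minimal sketch, assuming the same database names and users as in the restore commands:

-- Create the dumps (run from the shell, not the SQL prompt)
-- pg_dump -U postgres ecommerce_db > backup.sql
-- mysqldump -u root -p ecommerce_db > backup.sql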
Tip
Practice bulk data import (CSV / SQL dump) in small, isolated examples before integrating it into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Use the smallest data type that fits; it saves storage and speeds up queries.
Common Mistake
A common mistake with bulk data import (CSV / SQL dump) is skipping edge-case testing: empty inputs, NULL values, and unexpected data types. Always validate boundary conditions to write robust, production-ready SQL.
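For example, CSV files often mark missing values with an empty field or a placeholder string such as NA. A minimal sketch, assuming the products.csv file above uses NA for missing values; PostgreSQL's COPY can map that placeholder straight to SQL NULL:

-- Map the placeholder 'NA' in the CSV to SQL NULL during import (PostgreSQL)
COPY products (name, price, stock)
FROM '/tmp/products.csv'
WITH (FORMAT csv, HEADER, NULL 'NA');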
Practice Task
(1) Write a working example of bulk data import (CSV / SQL dump) from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, a NULL value, or an error state). (3) Share your solution in the Priygop community for feedback.
Key Takeaways
- Real-world databases need bulk data loading: importing thousands of rows from CSV files, restoring SQL dumps, or running multi-row INSERT statements.
- Multi-row INSERT — fastest for small batches (100–1000 rows)
- COPY (PostgreSQL) — fastest for large CSV imports (millions of rows)
- LOAD DATA INFILE (MySQL) — equivalent to COPY for MySQL