Beyond Rows: Exploring Storage and Design Strategies for Massive MariaDB Tables
- Partitioning: For very large datasets, MariaDB can split a table into partitions. Each partition is stored and managed much like a separate table, so per-table practical limits effectively apply per partition rather than to the whole dataset.
- Row Size: The size of each record (row) also plays a role. InnoDB limits a row to slightly less than half a database page; pages are 16KB by default and can be configured between 4KB and 64KB. Long variable-length columns (TEXT, BLOB) are stored off-page and can each hold up to 4GB (LONGTEXT/LONGBLOB). Separately, MariaDB caps the combined declared size of all columns in a row at 65,535 bytes, not counting off-page BLOB/TEXT data.
- Storage Engine: MariaDB supports multiple storage engines. InnoDB, the default, limits a tablespace to 64 Terabytes (TB) with the default 16KB page size. At an average row size of about 1KB, that works out to roughly 64 billion rows in one table.
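The partitioning idea above can be sketched as follows. The table name, columns, and year boundaries here are illustrative, not from the original article; note that MariaDB requires the partitioning column to appear in every unique key, which is why the primary key is composite.

```sql
-- Split a hypothetical events table into RANGE partitions by year;
-- each partition behaves like a separate physical table on disk.
CREATE TABLE events (
    id BIGINT NOT NULL AUTO_INCREMENT,
    created_at DATE NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created_at)  -- partitioning column must be part of every unique key
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION pmax  VALUES LESS THAN MAXVALUE  -- catch-all for future rows
);
```

Queries that filter on created_at can then be pruned to a single partition, and old partitions can be dropped almost instantly instead of running a bulk DELETE.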
-- Create a table with some sample data
CREATE TABLE IF NOT EXISTS large_table (
id INT PRIMARY KEY AUTO_INCREMENT,
data VARCHAR(255) NOT NULL
);
-- Simulate adding a large number of records using MariaDB's built-in
-- Sequence engine (adjust the 100000 as needed)
INSERT INTO large_table (data)
SELECT CONCAT('Record-', seq) FROM seq_1_to_100000;
-- Show how many records are currently in the table
SELECT COUNT(*) AS record_count FROM large_table;
This code creates a table named large_table and inserts 100,000 sample records; you can adjust the 100000 for a larger dataset. Finally, it retrieves the number of records using COUNT(*).
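To see how a table measures up against the storage limits discussed above, you can query its approximate on-disk footprint from information_schema. This is a sketch against the large_table created earlier; DATA_LENGTH and INDEX_LENGTH are estimates maintained by the storage engine, not exact byte counts.

```sql
-- Approximate row count and on-disk size of large_table, in megabytes,
-- to compare against InnoDB's 64TB tablespace ceiling.
SELECT TABLE_ROWS,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'large_table';
```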