<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xmlns:tt="http://teletype.in/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Rostislav Dugin</title><generator>teletype.in</generator><description><![CDATA[Rostislav Dugin]]></description><image><url>https://img2.teletype.in/files/1b/c6/1bc676e6-5e92-41bc-bab2-cdff8d46434e.png</url><title>Rostislav Dugin</title><link>https://teletype.in/@rostislav_dugin</link></image><link>https://teletype.in/@rostislav_dugin?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><atom:link rel="self" type="application/rss+xml" href="https://teletype.in/rss/rostislav_dugin?offset=0"></atom:link><atom:link rel="next" type="application/rss+xml" href="https://teletype.in/rss/rostislav_dugin?offset=10"></atom:link><atom:link rel="search" type="application/opensearchdescription+xml" title="Teletype" href="https://teletype.in/opensearch.xml"></atom:link><pubDate>Mon, 20 Apr 2026 01:06:34 GMT</pubDate><lastBuildDate>Mon, 20 Apr 2026 01:06:34 GMT</lastBuildDate><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/postgresql-vs-mongodb</guid><link>https://teletype.in/@rostislav_dugin/postgresql-vs-mongodb?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/postgresql-vs-mongodb?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>MongoDB vs PostgreSQL — 6 factors to consider when choosing your database</title><pubDate>Fri, 13 Feb 2026 17:48:19 GMT</pubDate><media:content medium="image" url="https://img1.teletype.in/files/0b/d0/0bd09b93-8fce-41ec-b534-92a845cf22a8.png"></media:content><description><![CDATA[<img 
src="https://img1.teletype.in/files/cc/1f/cc1f8ab6-c8e4-4062-93b5-8d8ba0bda2d8.png"></img>Choosing between MongoDB and PostgreSQL is one of the most important decisions you'll make for your project. Both databases are mature, reliable and widely used. But they're fundamentally different in how they store, query and scale data. This choice affects your development speed, operational costs and how easily your system can grow.]]></description><content:encoded><![CDATA[
  <p id="S3Mw">Choosing between MongoDB and PostgreSQL is one of the most important decisions you&#x27;ll make for your project. Both databases are mature, reliable and widely used. But they&#x27;re fundamentally different in how they store, query and scale data. This choice affects your development speed, operational costs and how easily your system can grow.</p>
  <p id="WVfL">Many developers pick a database based on what&#x27;s familiar or what&#x27;s trending. That&#x27;s fine for small projects. But if you&#x27;re building something that needs to scale or handle complex data relationships, you need to understand the real differences. This article breaks down six key factors to help you make an informed decision: data model, query complexity, scalability, consistency, performance and backup strategies.</p>
  <figure id="HFmP" class="m_original">
    <img src="https://img1.teletype.in/files/cc/1f/cc1f8ab6-c8e4-4062-93b5-8d8ba0bda2d8.png" width="1536" />
    <figcaption>PostgreSQL vs MongoDB</figcaption>
  </figure>
  <h2 id="1-data-model-and-schema-flexibility">1. Data model and schema flexibility</h2>
  <p id="jbLX">The data model is probably the biggest difference between these two databases. PostgreSQL is a relational database that uses tables with strict schemas. You define columns, types and relationships upfront. MongoDB is a document database that stores JSON-like documents with flexible schemas. Each document can have different fields, and you can change the structure on the fly.</p>
  <p id="GJMe">PostgreSQL&#x27;s structured approach works great when your data has clear relationships and you know the schema in advance. Think user accounts, orders, inventory or financial records. The strict schema catches errors early and ensures data integrity. But changing the schema later requires migrations, which can be painful on large datasets.</p>
  <p id="xlUb">MongoDB&#x27;s flexibility is useful when you&#x27;re building something new and your data model is still evolving. Or when you&#x27;re dealing with semi-structured data like logs, events or user-generated content. You can store different document shapes in the same collection. No migrations needed. But that flexibility comes at a cost: by default you need to handle data validation in your application code instead of relying on the database (MongoDB does offer optional JSON Schema validation at the collection level, but it&#x27;s opt-in rather than enforced by design).</p>
  <p id="3RWM">Here&#x27;s a quick comparison:</p>
  <p id="6yJO"><strong>Schema:</strong></p>
  <ul id="19WP">
    <li id="JIz5">PostgreSQL: Strict, predefined table structure</li>
    <li id="t7iH">MongoDB: Flexible, documents can vary</li>
  </ul>
  <p id="s2Wl"><strong>Data relationships:</strong></p>
  <ul id="Xj13">
    <li id="jgOb">PostgreSQL: Built-in foreign keys and joins</li>
    <li id="k7lP">MongoDB: Manual references or embedding</li>
  </ul>
  <p id="d1Bq"><strong>Schema changes:</strong></p>
  <ul id="cIlC">
    <li id="tneh">PostgreSQL: Requires migrations</li>
    <li id="M0dJ">MongoDB: No migrations needed</li>
  </ul>
  <p id="FW5c"><strong>Data validation:</strong></p>
  <ul id="yZrg">
    <li id="N0Zh">PostgreSQL: Enforced at database level</li>
    <li id="4Piv">MongoDB: Enforced at application level</li>
  </ul>
  <p id="SMhF"><strong>Best for:</strong></p>
  <ul id="m0U0">
    <li id="4Jx7">PostgreSQL: Well-defined, relational data</li>
    <li id="usLA">MongoDB: Evolving, semi-structured data</li>
  </ul>
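As a sketch of the contrast above, here is the same "user with orders" data in each system. All table, collection and field names are illustrative, not from any real schema:

```sql
-- PostgreSQL: schema and relationships declared upfront
CREATE TABLE users  (id SERIAL PRIMARY KEY,
                     email TEXT NOT NULL UNIQUE);
CREATE TABLE orders (id SERIAL PRIMARY KEY,
                     user_id INT NOT NULL REFERENCES users(id),
                     total   NUMERIC(10,2) NOT NULL);

-- MongoDB (mongosh): the same data as one self-contained document,
-- no schema declared, the order embedded directly:
-- db.users.insertOne({
--   email: "a@example.com",
--   orders: [{ total: 49.90 }]
-- })
```

The PostgreSQL version rejects a missing email or a dangling `user_id` at write time; the MongoDB version accepts any document shape, and validation is up to your code.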
  <p id="9hgP">The reality is most applications have structured data that benefits from PostgreSQL&#x27;s relational model. User profiles, orders, products and analytics usually have predictable relationships. MongoDB makes sense when you&#x27;re prototyping quickly or dealing with truly flexible data structures.</p>
  <h2 id="2-query-capabilities-and-complexity">2. Query capabilities and complexity</h2>
  <p id="aDZJ">PostgreSQL has one of the most powerful query engines available. It supports complex joins, subqueries, window functions, common table expressions and full SQL. You can express almost any data relationship or transformation in a single query. Need to join five tables, aggregate by multiple dimensions and filter with complex conditions? PostgreSQL handles it.</p>
  <p id="ssR8">MongoDB&#x27;s query language is simpler and more limited. It&#x27;s based on JSON-like syntax and works well for basic queries on single collections. But once you need to join data across collections or perform complex aggregations, things get awkward. MongoDB added a $lookup operator for joins, but it&#x27;s slower and less flexible than SQL joins. You often end up making multiple queries or denormalizing your data to avoid joins entirely.</p>
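To make the difference concrete, here is the same hypothetical "orders over 100 with customer names" question in both systems (collection and column names are illustrative):

```sql
-- PostgreSQL: one declarative join
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.total > 100;

-- MongoDB (mongosh): the equivalent needs an aggregation pipeline
-- db.orders.aggregate([
--   { $match:  { total: { $gt: 100 } } },
--   { $lookup: { from: "customers", localField: "customer_id",
--                foreignField: "_id", as: "customer" } },
--   { $unwind: "$customer" },
--   { $project: { total: 1, "customer.name": 1 } }
-- ])
```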
  <p id="tjRm">For most business applications, query complexity matters. You&#x27;ll need reports, analytics and ad-hoc queries. PostgreSQL makes this easy. MongoDB makes it painful unless you carefully structure your data to avoid joins.</p>
  <p id="gHNg">Here&#x27;s a comparison of common query patterns:</p>
  <p id="xCue"><strong>Simple filters:</strong></p>
  <ul id="5h5O">
    <li id="Ccvv">PostgreSQL: Fast and straightforward</li>
    <li id="HLpv">MongoDB: Fast and straightforward</li>
  </ul>
  <p id="qvNU"><strong>Multi-table joins:</strong></p>
  <ul id="s9Yy">
    <li id="mLyt">PostgreSQL: Native and efficient</li>
    <li id="ukBx">MongoDB: Limited, slower with $lookup</li>
  </ul>
  <p id="MeMm"><strong>Aggregations:</strong></p>
  <ul id="dpkh">
    <li id="1VsM">PostgreSQL: Full SQL power</li>
    <li id="HszM">MongoDB: Aggregation pipeline (good but limited)</li>
  </ul>
  <p id="aB5e"><strong>Full-text search:</strong></p>
  <ul id="4EPC">
    <li id="OteA">PostgreSQL: Built-in with GIN indexes</li>
    <li id="HC5o">MongoDB: Text indexes available</li>
  </ul>
  <p id="yJAi"><strong>Complex analytics:</strong></p>
  <ul id="CinT">
    <li id="10aK">PostgreSQL: Excellent with window functions</li>
    <li id="rabd">MongoDB: Requires careful data modeling</li>
  </ul>
  <p id="Gcns"><strong>Ad-hoc queries:</strong></p>
  <ul id="YrEf">
    <li id="tWZX">PostgreSQL: Easy with standard SQL</li>
    <li id="KPGk">MongoDB: More difficult without joins</li>
  </ul>
  <p id="PLIz">If your application needs complex queries or you&#x27;re not sure what queries you&#x27;ll need later, PostgreSQL gives you more flexibility. MongoDB works if your access patterns are known upfront and you can structure your data accordingly.</p>
  <h2 id="3-scalability-and-sharding">3. Scalability and sharding</h2>
  <p id="BnAL">MongoDB was built for horizontal scaling from the start. It has native sharding support that distributes data across multiple servers automatically. You can add more machines to handle more data or traffic. You choose a shard key, and MongoDB handles data distribution and routing queries to the right shards. This makes it easier to scale out without changing application code.</p>
  <p id="A5Q5">PostgreSQL&#x27;s strength is vertical scaling. You get better performance by upgrading to a bigger server with more CPU, RAM and faster storage. PostgreSQL can handle massive datasets on a single node if you have enough hardware. Horizontal scaling is possible through manual sharding or tools like Citus, but it&#x27;s more complex and less mature than MongoDB&#x27;s built-in approach.</p>
  <p id="4HMt">For most projects, vertical scaling is simpler and cheaper than horizontal scaling. Modern servers are powerful. A well-tuned single PostgreSQL instance can handle hundreds of millions of rows and tens of thousands of queries per second. You only need horizontal scaling if you&#x27;re dealing with truly massive datasets (hundreds of terabytes) or extreme traffic levels (hundreds of thousands of concurrent users).</p>
  <p id="hSVr">The key questions to ask:</p>
  <ul id="chKk">
    <li id="PsbY">How much data will you have in 1-2 years?</li>
    <li id="CfFh">What&#x27;s your traffic growth projection?</li>
    <li id="ICxj">Can you predict your bottlenecks?</li>
    <li id="Jv3k">Do you have the expertise to manage sharded databases?</li>
  </ul>
  <p id="DJd1">If you&#x27;re building a typical web application or SaaS product, PostgreSQL&#x27;s vertical scaling will probably be enough. MongoDB&#x27;s sharding makes sense if you&#x27;re building something that needs to scale globally from day one or you know you&#x27;ll hit horizontal scaling limits quickly.</p>
  <h2 id="4-consistency-and-acid-guarantees">4. Consistency and ACID guarantees</h2>
  <p id="7DLb">PostgreSQL provides full ACID guarantees for all transactions. Atomicity, Consistency, Isolation and Durability are guaranteed out of the box. Multi-document transactions work correctly. If something fails, everything rolls back. Your data stays consistent even under high load or system failures.</p>
  <p id="2tIm">MongoDB added multi-document ACID transactions in version 4.0 (extended to sharded clusters in 4.2), but they&#x27;re slower and more limited than PostgreSQL&#x27;s transactions. Single-document operations in MongoDB are atomic, but cross-document consistency requires explicit transactions. In practice, many MongoDB users avoid transactions entirely by denormalizing data into single documents.</p>
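A sketch of what these guarantees look like in practice, using a hypothetical money transfer (table and field names are illustrative):

```sql
-- PostgreSQL: both updates commit together or not at all
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- any error before this point rolls back both updates

-- MongoDB (4.0+, mongosh): the same guarantee needs an explicit session
-- const session = db.getMongo().startSession();
-- session.startTransaction();
-- ... perform both updates through the session ...
-- session.commitTransaction();
```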
  <p id="3ixL">For financial applications, e-commerce or anything where data integrity is critical, PostgreSQL&#x27;s consistency guarantees are valuable. You can trust that your data won&#x27;t end up in an inconsistent state. MongoDB works fine for use cases where eventual consistency is acceptable, like logging, caching or analytics pipelines.</p>
  <p id="cG4X">MongoDB does offer tunable consistency levels (write concerns and read concerns), which gives you flexibility. But that flexibility also means you need to think carefully about consistency trade-offs and configure them correctly. PostgreSQL just works consistently by default.</p>
  <h2 id="5-performance-characteristics">5. Performance characteristics</h2>
  <p id="vXAX">Performance depends heavily on your use case and access patterns. MongoDB is generally faster for simple reads and writes on single documents. If you&#x27;re doing lots of inserts or updates on independent records, MongoDB can outperform PostgreSQL. Document databases avoid join overhead by storing related data together.</p>
  <p id="6t4g">PostgreSQL is faster when you need complex queries, joins or aggregations. Its query planner is extremely sophisticated and can optimize complicated queries that would be slow or impossible in MongoDB. PostgreSQL also has better support for indexes, including partial indexes, expression indexes and various index types (B-tree, hash, GIN, GiST).</p>
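For example, PostgreSQL&#x27;s partial and expression indexes let you index exactly the rows and expressions you query. The table and column names here are illustrative:

```sql
-- Partial index: covers only unshipped orders, so it stays small
CREATE INDEX idx_orders_unshipped ON orders (created_at)
WHERE shipped = false;

-- Expression index: makes case-insensitive email lookups fast
CREATE INDEX idx_users_email_lower ON users (lower(email));
```

A query like `WHERE lower(email) = lower($1)` can then use the second index directly instead of scanning the table.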
  <p id="YkPC">Both databases can be fast if you design your schema and indexes properly. But they optimize for different workloads:</p>
  <ul id="yBO0">
    <li id="6IP0">MongoDB: fast single-document operations, high write throughput</li>
    <li id="NslZ">PostgreSQL: fast complex queries, efficient joins, flexible indexing</li>
  </ul>
  <p id="TZ5S">If your application is read-heavy with complex queries, PostgreSQL will likely be faster. If you&#x27;re write-heavy with simple access patterns, MongoDB might have an edge. In practice, most bottlenecks come from poor schema design or missing indexes, not the database choice itself.</p>
  <h2 id="6-backup-strategies-and-operational-complexity">6. Backup strategies and operational complexity</h2>
  <p id="9Ywg">Backups are critical for production databases. PostgreSQL has mature backup tools like pg_dump for logical backups and pg_basebackup for physical backups. Point-in-time recovery is available through WAL archiving. Most managed PostgreSQL services (AWS RDS, Google Cloud SQL, Azure Database) include automated backups with easy restore.</p>
  <p id="bvRP">MongoDB has mongodump for logical backups and filesystem snapshots for physical backups. Backups are straightforward for single-node deployments. But backing up sharded MongoDB clusters requires careful coordination to ensure consistency across shards. You need to stop the balancer and take snapshots at the same time on all shards.</p>
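In command form, the basic tools from both ecosystems look like this. Hostnames, user names and paths are placeholders:

```shell
# PostgreSQL logical backup of one database, compressed custom format
pg_dump -h db.internal -U postgres -Fc myapp > myapp.dump

# PostgreSQL physical base backup (pairs with WAL archiving for
# point-in-time recovery)
pg_basebackup -h db.internal -U replicator -D /backups/base -X stream

# MongoDB logical backup of one database
mongodump --host db.internal --db myapp --out /backups/mongo
```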
  <p id="K8s2"><a href="https://databasus.com/mongodb-backup" target="_blank">Databasus</a> is a backup tool that supports both PostgreSQL and MongoDB. It handles scheduled backups, multiple storage destinations (S3, Google Drive, local storage) and notifications across Slack, Discord and email. Whether you&#x27;re running PostgreSQL or MongoDB, Databasus simplifies backup management with a clean interface and reliable scheduling.</p>
  <p id="6FCz">Operational complexity also differs between these databases:</p>
  <ul id="dPKM">
    <li id="gwB1"><strong>PostgreSQL</strong>: simpler operations, well-understood tooling, extensive documentation</li>
    <li id="ZypI"><strong>MongoDB</strong>: more complex operations with sharding, requires specialized knowledge for production deployments</li>
  </ul>
  <p id="Aiit">PostgreSQL has been around since 1996 and has decades of operational experience built into its tooling and documentation. MongoDB is newer (2009) and still evolving. If you don&#x27;t have dedicated database administrators, PostgreSQL is easier to operate reliably.</p>
  <h2 id="which-one-should-you-choose">Which one should you choose?</h2>
  <p id="c6xt">After comparing these six factors, here&#x27;s a practical decision framework:</p>
  <p id="lJyr"><strong>Choose PostgreSQL if you:</strong></p>
  <ul id="3JrS">
    <li id="qFnQ">Have structured, relational data with clear relationships</li>
    <li id="sxdK">Need complex queries, joins or analytical workloads</li>
    <li id="reW3">Want full ACID guarantees and strong consistency</li>
    <li id="rAkB">Prefer simpler operations and well-established tooling</li>
    <li id="deg9">Can scale vertically and don&#x27;t need global horizontal scaling immediately</li>
    <li id="7QzY">Are building typical web applications, SaaS products or business software</li>
  </ul>
  <p id="RQuG"><strong>Choose MongoDB if you:</strong></p>
  <ul id="1ZtY">
    <li id="mOlp">Have flexible, semi-structured data with evolving schemas</li>
    <li id="LCsZ">Need high write throughput with simple access patterns</li>
    <li id="WRuC">Know your access patterns upfront and can denormalize data</li>
    <li id="M4s4">Need native horizontal sharding for massive scale</li>
    <li id="N9CT">Are building applications like content management, catalogs or logging systems</li>
    <li id="vVOY">Have expertise to manage distributed database operations</li>
  </ul>
  <p id="mfxl">For most developers building standard applications, PostgreSQL is the safer choice. It&#x27;s more flexible for queries, easier to operate and handles most workloads efficiently. MongoDB makes sense for specific use cases where its strengths (flexibility, sharding, document model) align with your requirements.</p>
  <p id="1po0">The good news is both databases are excellent. Either choice can work if you design your schema properly and use the database&#x27;s strengths. But understanding these differences helps you pick the right tool and avoid fighting against your database later.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/mysql-security-mistakes</guid><link>https://teletype.in/@rostislav_dugin/mysql-security-mistakes?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/mysql-security-mistakes?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>8 MySQL security mistakes that expose your database to attackers</title><pubDate>Wed, 11 Feb 2026 20:02:11 GMT</pubDate><media:content medium="image" url="https://img2.teletype.in/files/11/28/1128cf9e-7565-477e-93c6-d576d9c6b393.png"></media:content><description><![CDATA[<img src="https://img2.teletype.in/files/55/ef/55ef9235-beb5-4efa-b248-dd5b0194532f.png"></img>MySQL is one of the most deployed databases in the world, which also makes it one of the most targeted. A lot of MySQL installations in the wild are running with default settings, overly permissive user accounts and no encryption. Some of these are dev setups that accidentally went to production. Others are production systems that nobody ever hardened because &quot;it's behind a firewall.&quot;]]></description><content:encoded><![CDATA[
  <p id="kCRd">MySQL is one of the most deployed databases in the world, which also makes it one of the most targeted. A lot of MySQL installations in the wild are running with default settings, overly permissive user accounts and no encryption. Some of these are dev setups that accidentally went to production. Others are production systems that nobody ever hardened because &quot;it&#x27;s behind a firewall.&quot;</p>
  <p id="PUSg">This article covers eight real security mistakes that leave MySQL databases exposed. Not abstract threat models, but concrete misconfigurations that attackers actually look for and exploit.</p>
  <figure id="D8Aq" class="m_original">
    <img src="https://img2.teletype.in/files/55/ef/55ef9235-beb5-4efa-b248-dd5b0194532f.png" width="1536" />
    <figcaption>MySQL security mistakes</figcaption>
  </figure>
  <h2 id="1-running-with-default-credentials-and-the-root-account">1. Running with default credentials and the root account</h2>
  <p id="WnGA">This sounds obvious, but it still happens constantly. Fresh MySQL installations often ship with a root account that has no password or a well-known default password. Automated scanners specifically look for MySQL instances on port 3306 with empty root passwords. It takes seconds to find and exploit.</p>
  <p id="VMus">The root account in MySQL has unrestricted access to everything: all databases, all tables, all administrative commands. Using it for application connections means your app has full control over the server, including the ability to drop databases, create users and modify grants.</p>
  <p id="1sw7">Fix the root password immediately after installation:</p>
  <pre id="nSzV">ALTER USER &#x27;root&#x27;@&#x27;localhost&#x27; IDENTIFIED BY &#x27;a-strong-random-password-here&#x27;;
FLUSH PRIVILEGES;
</pre>
  <p id="EmRz">Then create separate accounts for each application with only the privileges it needs:</p>
  <pre id="LyiH">CREATE USER &#x27;app_user&#x27;@&#x27;10.0.1.%&#x27; IDENTIFIED BY &#x27;another-strong-password&#x27;;
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO &#x27;app_user&#x27;@&#x27;10.0.1.%&#x27;;
FLUSH PRIVILEGES;
</pre>
  <p id="9Ajx">The <code>10.0.1.%</code> host restriction means this user can only connect from your application subnet. If someone steals the credentials, they can&#x27;t use them from an arbitrary machine.</p>
  <p id="04Ox">Run <code>mysql_secure_installation</code> on every new MySQL instance. It removes anonymous users, disables remote root login and drops the test database. This takes thirty seconds and closes the most common attack vectors.</p>
  <h2 id="2-granting-excessive-privileges-to-application-users">2. Granting excessive privileges to application users</h2>
  <p id="fTCX">Most MySQL applications need SELECT, INSERT, UPDATE and DELETE on specific databases. That&#x27;s it. Yet it&#x27;s common to see application accounts with <code>GRANT ALL PRIVILEGES ON *.*</code> because someone copied a Stack Overflow answer during initial setup and never revisited it.</p>
  <p id="5EMA">The damage from excessive privileges scales with the access level. An application account with <code>FILE</code> privilege can read any file the MySQL process can access on the server filesystem. <code>PROCESS</code> lets it see all running queries, including those from other users. <code>SUPER</code> lets it kill connections and change global variables.</p>
  <p id="UCvW">Common dangerous privileges and their risks:</p>
  <ul id="heRS">
    <li id="EjG8"><strong><code>ALL PRIVILEGES ON *.*</code></strong> — Full administrative access. Risk: Complete server takeover.</li>
    <li id="5RL4"><strong><code>FILE</code></strong> — Read/write server filesystem. Risk: Credential theft and data exfiltration.</li>
    <li id="tcnk"><strong><code>PROCESS</code></strong> — View all running queries. Risk: Exposure of sensitive queries and data.</li>
    <li id="N26L"><strong><code>SUPER</code></strong> — Kill connections and change configs. Risk: Denial of service and configuration tampering.</li>
    <li id="Dr6K"><strong><code>SELECT, INSERT, UPDATE, DELETE ON app.*</code></strong> — Standard CRUD on one database. Risk: Limited to application data only (this is what you want).</li>
  </ul>
  <p id="2td3">Audit your current grants to see what&#x27;s actually assigned:</p>
  <pre id="5DzR">SELECT user, host, Super_priv, File_priv, Process_priv, Grant_priv
FROM mysql.user
WHERE Super_priv = &#x27;Y&#x27; OR File_priv = &#x27;Y&#x27; OR Process_priv = &#x27;Y&#x27;;
</pre>
  <p id="Ei5i">If your application user shows up in this list, something is wrong. Revoke what it doesn&#x27;t need:</p>
  <pre id="lnzd">REVOKE ALL PRIVILEGES ON *.* FROM &#x27;app_user&#x27;@&#x27;10.0.1.%&#x27;;
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO &#x27;app_user&#x27;@&#x27;10.0.1.%&#x27;;
FLUSH PRIVILEGES;
</pre>
  <p id="DdEg">A good rule of thumb: if you can&#x27;t explain why an account needs a specific privilege, it shouldn&#x27;t have it.</p>
  <h2 id="3-exposing-mysql-to-the-internet-without-network-restrictions">3. Exposing MySQL to the internet without network restrictions</h2>
  <p id="t5l1">By default, MySQL listens on all network interfaces. That means if your server has a public IP address, MySQL is reachable from the entire internet. Combined with weak credentials (mistake #1), this is how most MySQL breaches happen.</p>
  <p id="Z5c8">Check your current binding:</p>
  <pre id="Xjvz">SHOW VARIABLES LIKE &#x27;bind_address&#x27;;
</pre>
  <p id="I7AT">If it shows <code>0.0.0.0</code> or <code>*</code>, MySQL is accepting connections from everywhere.</p>
  <p id="S3AJ">Restrict it in <code>my.cnf</code>:</p>
  <pre id="CDl8">[mysqld]
bind-address = 127.0.0.1
</pre>
  <p id="P1O1">This limits MySQL to local connections only. If your application runs on a different server, bind to the private network interface instead:</p>
  <pre id="KWe2">[mysqld]
bind-address = 10.0.1.5
</pre>
  <p id="jlEF">But network binding alone isn&#x27;t enough. Add firewall rules to restrict port 3306:</p>
  <pre id="cvn6"># iptables example: only allow MySQL connections from app server
iptables -A INPUT -p tcp --dport 3306 -s 10.0.1.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
</pre>
  <p id="OULc">Also consider the <code>skip-networking</code> and <code>skip-name-resolve</code> options. Enabling <code>skip-networking</code> disables TCP connections entirely (only socket connections work), which is fine if the application runs on the same host. Enabling <code>skip-name-resolve</code> prevents DNS lookups for connecting hosts, which speeds up connections and removes DNS spoofing as an attack vector; note that with it enabled, account host entries must use IP addresses rather than hostnames.</p>
  <p id="AeeA">If your application must reach MySQL over the internet, use an SSH tunnel or VPN instead of opening port 3306 directly. Never expose MySQL to the public internet, even with strong passwords.</p>
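For the rare case where remote access is unavoidable, an SSH tunnel keeps port 3306 closed to the world. The hostname and user here are placeholders:

```shell
# Forward local port 3306 through SSH to MySQL on the database host;
# the application then connects to 127.0.0.1:3306 on its own machine
ssh -N -L 3306:127.0.0.1:3306 deploy@db.example.com
```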
  <h2 id="4-not-encrypting-connections-with-tls">4. Not encrypting connections with TLS</h2>
  <p id="e9lw">MySQL connections transmit data in plaintext by default. This includes queries, result sets, usernames and passwords. Anyone who can capture network traffic between your application and MySQL can read everything.</p>
  <p id="lynf">This isn&#x27;t just a theoretical concern. On shared hosting, cloud VPCs with misconfigured security groups and corporate networks, packet sniffing is a real threat. Even &quot;private&quot; networks aren&#x27;t always as isolated as you think.</p>
  <p id="vA8V">Check if TLS is currently enabled:</p>
  <pre id="N3SP">SHOW VARIABLES LIKE &#x27;%ssl%&#x27;;
</pre>
  <p id="IgWP">To enable TLS, generate or obtain certificates and configure MySQL:</p>
  <pre id="yhmn">[mysqld]
ssl-ca   = /etc/mysql/ssl/ca-cert.pem
ssl-cert = /etc/mysql/ssl/server-cert.pem
ssl-key  = /etc/mysql/ssl/server-key.pem
require_secure_transport = ON
</pre>
  <p id="R5Ze">The <code>require_secure_transport = ON</code> setting forces all connections to use TLS. Without it, clients can still connect unencrypted.</p>
  <p id="IDnJ">You can also enforce TLS on a per-user basis, which is useful for a gradual rollout:</p>
  <pre id="xpKf">ALTER USER &#x27;app_user&#x27;@&#x27;10.0.1.%&#x27; REQUIRE SSL;
FLUSH PRIVILEGES;
</pre>
  <p id="lcaY">Verify that connections are actually encrypted:</p>
  <pre id="iimE">SELECT ssl_type, ssl_cipher FROM mysql.user WHERE user = &#x27;app_user&#x27;;
</pre>
  <p id="EMtd">And from the client side:</p>
  <pre id="Ub5F">SHOW STATUS LIKE &#x27;Ssl_cipher&#x27;;
</pre>
  <p id="Co4P">If <code>Ssl_cipher</code> returns an empty string, the connection is unencrypted.</p>
  <h2 id="5-leaving-the-binary-log-and-data-directory-unprotected">5. Leaving the binary log and data directory unprotected</h2>
  <p id="f1dv">MySQL&#x27;s binary log contains every data-modifying statement that runs against the database. If an attacker gains access to the filesystem, they can read the binary log and reconstruct your entire data history: every insert, update and delete.</p>
  <p id="ixfA">The data directory itself contains the actual table files. Depending on the storage engine, these might be readable with basic tools. InnoDB files can be parsed with specialized utilities to extract raw data, bypassing MySQL authentication entirely.</p>
  <p id="mcvI">Check your current file permissions:</p>
  <pre id="xdhl">ls -la /var/lib/mysql/
ls -la /var/log/mysql/
</pre>
  <p id="svSZ">The MySQL data directory and log directory should be owned by the <code>mysql</code> user and group, with no world-readable permissions:</p>
  <pre id="T8KS">chown -R mysql:mysql /var/lib/mysql
chmod 750 /var/lib/mysql
chown -R mysql:mysql /var/log/mysql
chmod 750 /var/log/mysql
</pre>
  <p id="G6it">Also protect the MySQL configuration file, which may contain passwords:</p>
  <pre id="6qGF">chmod 600 /etc/mysql/my.cnf
chown root:root /etc/mysql/my.cnf
</pre>
  <p id="20u1">If you&#x27;re running MySQL in Docker, make sure the volume mounts for data and logs aren&#x27;t world-readable on the host filesystem. Default Docker volume permissions can be more permissive than you expect.</p>
  <p id="Sxgo">For the binary log specifically, consider encrypting it. MySQL 8.0+ supports binary log encryption (it relies on a configured keyring component to store the encryption key):</p>
  <pre id="cpua">[mysqld]
binlog_encryption = ON
</pre>
  <p id="oWdx">This encrypts the binary log files at rest. Even if someone copies the files, they can&#x27;t read the contents without the encryption key.</p>
  <h2 id="6-ignoring-sql-injection-in-application-code">6. Ignoring SQL injection in application code</h2>
  <p id="KAxF">SQL injection has been the number one database attack vector for over two decades, and it still works because developers keep building queries by concatenating user input directly into SQL strings. MySQL doesn&#x27;t have a built-in defense against this. The protection has to come from application code.</p>
  <p id="sWA5">An injectable query looks like this:</p>
  <pre id="P3NK"># Vulnerable: user input directly in the query string
query = f&quot;SELECT * FROM users WHERE email = &#x27;{user_input}&#x27;&quot;
cursor.execute(query)
</pre>
  <p id="fYBV">If <code>user_input</code> is <code>&#x27; OR &#x27;1&#x27;=&#x27;1&#x27; --</code>, the query becomes:</p>
  <pre id="rco1">SELECT * FROM users WHERE email = &#x27;&#x27; OR &#x27;1&#x27;=&#x27;1&#x27; --&#x27;
</pre>
  <p id="3ZHH">This returns every row in the users table. More destructive payloads can drop tables, read files from disk (if the MySQL user has <code>FILE</code> privilege) or create new admin accounts.</p>
  <p id="qGUE">The fix is parameterized queries. Every database library supports them:</p>
  <pre id="hRlx"># Safe: parameterized query
cursor.execute(&quot;SELECT * FROM users WHERE email = %s&quot;, (user_input,))
</pre>
  <pre id="Q5rY">// Node.js with mysql2
connection.execute(&quot;SELECT * FROM users WHERE email = ?&quot;, [userInput]);
</pre>
  <pre id="lnyt">// Go with database/sql
db.Query(&quot;SELECT * FROM users WHERE email = ?&quot;, userInput)
</pre>
  <p id="GlMr">Parameterized queries separate the SQL structure from the data. The database engine knows that the parameter is a value, not SQL code, regardless of what it contains.</p>
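You can see this separation in action with a few lines of Python. This sketch uses SQLite&#x27;s stdlib driver as a stand-in for a MySQL client; the principle is identical, MySQL drivers just bind `%s` or `?` placeholders the same way:

```python
import sqlite3

# In-memory stand-in database with two known rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("a@example.com",), ("b@example.com",)])

payload = "' OR '1'='1' --"

# Parameterized: the payload is bound as a plain string value, so it
# matches no email and returns zero rows instead of the whole table
rows = conn.execute("SELECT * FROM users WHERE email = ?",
                    (payload,)).fetchall()
print(len(rows))  # 0
```

Had the payload been concatenated into the SQL string instead, the same query would have returned both rows.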
  <p id="wZlP">On the MySQL side, you can reduce the blast radius by removing the <code>FILE</code> privilege from application accounts (see mistake #2) and by running MySQL with <code>--local-infile=0</code> to disable <code>LOAD DATA LOCAL INFILE</code>, which attackers use for file reading through SQL injection.</p>
  <h2 id="7-not-auditing-or-monitoring-database-access">7. Not auditing or monitoring database access</h2>
  <p id="eA1S">If someone is accessing your MySQL database in ways they shouldn&#x27;t, how quickly would you know? Most MySQL installations have no audit logging enabled. An attacker could be reading sensitive tables for weeks before anyone notices.</p>
  <p id="C9YW">MySQL Enterprise Edition includes an audit plugin, but the community edition requires other approaches. The general query log is one option, though it captures everything and creates enormous log files on busy servers.</p>
  <p id="DuBd">A more practical approach for the community edition is to enable specific logging:</p>
  <pre id="9ls7">[mysqld]
log_error      = /var/log/mysql/error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
log_queries_not_using_indexes = 1
</pre>
  <p id="yrUA">For connection monitoring, regularly check who is connected and what they&#x27;re doing:</p>
  <pre id="rRBR">SELECT user, host, db, command, time, state
FROM information_schema.processlist
WHERE user NOT IN (&#x27;system user&#x27;, &#x27;event_scheduler&#x27;)
ORDER BY time DESC;
</pre>
  <p id="bGIK">Track failed login attempts by checking the error log. Repeated failed logins from the same IP usually mean a brute force attack is underway.</p>
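  <p id="aQ1p">If the performance schema is enabled, the same check can be done in SQL instead of grepping the error log. A sketch against MySQL 8.0&#x27;s <code>performance_schema.host_cache</code> table (it tracks errors per remote host, so purely local failures won&#x27;t appear here):</p>
  <pre id="aQ1x">SELECT ip, count_authentication_errors, count_host_blocked_errors
FROM performance_schema.host_cache
WHERE count_authentication_errors &gt; 0
ORDER BY count_authentication_errors DESC;
</pre>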
  <p id="blnH">Key monitoring areas to watch:</p>
  <ul id="hYdT">
    <li id="bgFW"><strong>Failed logins</strong> — Brute force attempts. Check error log entries with &quot;Access denied&quot;.</li>
    <li id="e8QS"><strong>Unusual connections</strong> — Unknown hosts or users. Check with <code>SHOW PROCESSLIST</code> or the <code>information_schema.processlist</code> table.</li>
    <li id="c6Xd"><strong>Schema changes</strong> — Unauthorized DDL. Use general log or trigger-based auditing.</li>
    <li id="kT0g"><strong>Privilege escalation</strong> — New grants or users. Run periodic diff of <code>mysql.user</code> table.</li>
    <li id="P165"><strong>Large data reads</strong> — Bulk exfiltration. Monitor slow query log and network traffic.</li>
  </ul>
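  <p id="bR2p">For the privilege escalation check, a periodic snapshot you can diff between runs doesn&#x27;t need anything fancy. A minimal sketch using columns from the standard <code>mysql.user</code> grant table:</p>
  <pre id="bR2x">-- Dump accounts and the most dangerous privileges,
-- save the output and diff it against the previous run
SELECT user, host, super_priv, grant_priv, file_priv
FROM mysql.user
ORDER BY user, host;
</pre>
  <p id="bR2q">Any new row or newly flipped <code>Y</code> is worth investigating immediately.</p>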
  <p id="odX1">For production systems, consider deploying a third-party audit plugin such as Percona&#x27;s <code>audit_log</code> or MariaDB&#x27;s <code>server_audit</code> plugin. These provide structured, filterable audit trails without the overhead of the general query log.</p>
  <p id="hGdQ">Set up alerts for critical events: new user creation, privilege changes, connections from unexpected hosts and queries against sensitive tables. The goal is to detect unusual activity before it becomes a full breach.</p>
  <h2 id="8-skipping-backups-or-storing-them-insecurely">8. Skipping backups or storing them insecurely</h2>
  <p id="FoBS">Security isn&#x27;t just about preventing unauthorized access. It&#x27;s also about recovery. Ransomware attacks against MySQL databases are real: attackers gain access, drop all tables and leave a ransom note. Without backups, you&#x27;re negotiating with criminals.</p>
  <p id="Cnlw">But having backups isn&#x27;t enough if they&#x27;re stored insecurely. Unencrypted backup files sitting on the same server as MySQL are useless in a ransomware scenario because the attacker deletes them too. Backups on an S3 bucket with public read access are just a different kind of data breach.</p>
  <p id="3HnQ">A secure backup strategy covers three things:</p>
  <ul id="gfnI">
    <li id="cKVg"><strong>Encryption</strong> — Backup files should be encrypted at rest so they&#x27;re useless if stolen</li>
    <li id="NZqr"><strong>Offsite storage</strong> — At least one copy should be on a separate system or cloud storage that the MySQL server doesn&#x27;t have delete access to</li>
    <li id="bAdF"><strong>Regular testing</strong> — A backup you&#x27;ve never restored is a backup you hope works</li>
  </ul>
  <p id="D2jq">For MySQL, <code>mysqldump</code> is the basic tool:</p>
  <pre id="lPVD">mysqldump --single-transaction --routines --triggers myapp | \
  gzip | openssl enc -aes-256-cbc -salt -pbkdf2 -out /backup/myapp_$(date +%F).sql.gz.enc
</pre>
  <p id="pPjQ">This creates a compressed, encrypted backup. But managing encryption keys, scheduling and offsite storage manually is tedious and error-prone.</p>
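  <p id="cS3p">Testing a restore is the mirror image of the pipeline above. A sketch with a hypothetical backup filename, restoring into a scratch database rather than the live one:</p>
  <pre id="cS3x"># Decrypt, decompress and load into a throwaway database
mysql -e &quot;CREATE DATABASE IF NOT EXISTS myapp_restore_test&quot;
openssl enc -d -aes-256-cbc -pbkdf2 -in /backup/myapp_2026-02-13.sql.gz.enc | \
  gunzip | mysql myapp_restore_test
</pre>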
  <p id="y21W"><a href="https://databasus.com/mysql-backup" target="_blank">MySQL backup</a> tools like Databasus automate the entire process: scheduling, compression, AES-256-GCM encryption and delivery to multiple destinations such as S3, Google Drive and SFTP. They scale from individual developers to enterprise teams, with workspace-based access management and audit logs.</p>
  <p id="Vfmo">Whatever approach you choose, make sure your backups are not accessible from the MySQL server with the same credentials. If the database server is compromised, the attacker shouldn&#x27;t be able to delete your backups.</p>
  <h2 id="the-pattern-behind-these-mistakes">The pattern behind these mistakes</h2>
  <p id="jJWs">Looking at these eight mistakes together, a pattern emerges. Most MySQL security failures come from defaults that were never changed, permissions that were never reviewed and monitoring that was never set up. None of these fixes are complex. They don&#x27;t require expensive tools or deep security expertise.</p>
  <p id="wFsq">Start with the basics: strong credentials, minimal privileges, network restrictions and encrypted connections. Then add monitoring so you know when something unusual happens. And keep tested, encrypted backups so you can recover when prevention fails.</p>
  <p id="wnm2">The best time to secure your MySQL database was when you first set it up. The second best time is now.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/postgresql-query-optimization</guid><link>https://teletype.in/@rostislav_dugin/postgresql-query-optimization?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/postgresql-query-optimization?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>PostgreSQL slow queries — 7 ways to find and fix performance bottlenecks</title><pubDate>Tue, 10 Feb 2026 20:02:24 GMT</pubDate><media:content medium="image" url="https://img1.teletype.in/files/41/3d/413dd109-ef34-4102-8de1-66da19439acd.png"></media:content><description><![CDATA[<img src="https://img1.teletype.in/files/48/20/48206297-0af1-4de6-8f13-0b0961d860bc.png"></img>Every PostgreSQL database eventually develops slow queries. It might start small: a dashboard that takes a bit longer to load, an API endpoint that times out during peak traffic, a report that used to run in seconds and now takes minutes. The tricky part is that slow queries rarely announce themselves. They creep in as data grows, schemas change and new features pile on.]]></description><content:encoded><![CDATA[
  <p id="w5ki">Every PostgreSQL database eventually develops slow queries. It might start small: a dashboard that takes a bit longer to load, an API endpoint that times out during peak traffic, a report that used to run in seconds and now takes minutes. The tricky part is that slow queries rarely announce themselves. They creep in as data grows, schemas change and new features pile on.</p>
  <p id="ud9G">This article covers seven practical ways to find the queries that are hurting your database and fix them. Not theoretical advice, but actual tools and techniques you can apply to a running PostgreSQL instance today.</p>
  <figure id="WFxo" class="m_original">
    <img src="https://img1.teletype.in/files/48/20/48206297-0af1-4de6-8f13-0b0961d860bc.png" width="1536" />
    <figcaption>PostgreSQL query optimization</figcaption>
  </figure>
  <h2 id="1-enable-pg_stat_statements-to-find-your-worst-offenders">1. Enable pg_stat_statements to find your worst offenders</h2>
  <p id="OaIN">The single most useful extension for tracking slow queries in PostgreSQL is <code>pg_stat_statements</code>. It records execution statistics for every query that runs against your database, including how many times it ran, total execution time, rows returned and more.</p>
  <p id="r8mt">Most performance problems come from a handful of queries. pg_stat_statements lets you find them without guessing.</p>
  <p id="Pde7">To enable it, preload the library in your <code>postgresql.conf</code>:</p>
  <pre id="Bznw">shared_preload_libraries = &#x27;pg_stat_statements&#x27;
pg_stat_statements.track = all
</pre>
  <p id="thcy">After restarting PostgreSQL, create the extension:</p>
  <pre id="ApKp">CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
</pre>
  <p id="rH5y">Then query it to find the most time-consuming queries:</p>
  <pre id="o94k">SELECT
    calls,
    round(total_exec_time::numeric, 2) AS total_time_ms,
    round(mean_exec_time::numeric, 2) AS avg_time_ms,
    round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS percent_total,
    query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
</pre>
  <p id="mHWc">This shows you which queries consume the most cumulative time. A query that runs 50,000 times a day at 100ms each is a bigger problem than a query that runs once at 5 seconds. The <code>percent_total</code> column makes this obvious.</p>
  <p id="runE">You can also find queries with the highest average execution time:</p>
  <pre id="Wazn">SELECT
    calls,
    round(mean_exec_time::numeric, 2) AS avg_time_ms,
    round(max_exec_time::numeric, 2) AS max_time_ms,
    rows,
    query
FROM pg_stat_statements
WHERE calls &gt; 10
ORDER BY mean_exec_time DESC
LIMIT 20;
</pre>
  <p id="Uuy8">The <code>WHERE calls &gt; 10</code> filter avoids one-off admin queries that would distort the results.</p>
  <p id="44Pi">Reset statistics periodically to keep the data relevant:</p>
  <pre id="18Xl">SELECT pg_stat_statements_reset();
</pre>
  <p id="HqjP">pg_stat_statements is the starting point. Everything else in this article builds on knowing which queries to focus on.</p>
  <h2 id="2-use-explain-analyze-to-understand-whats-actually-happening">2. Use EXPLAIN ANALYZE to understand what&#x27;s actually happening</h2>
  <p id="qfOK">Once you know which queries are slow, <code>EXPLAIN ANALYZE</code> tells you why. It runs the query and shows the execution plan PostgreSQL actually used, including the time spent at each step.</p>
  <pre id="j6Cj">EXPLAIN ANALYZE
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at &gt; &#x27;2026-01-01&#x27;
  AND o.status = &#x27;completed&#x27;;
</pre>
  <p id="Wpkw">The output looks something like this:</p>
  <pre id="3bLd">Hash Join  (cost=12.50..345.00 rows=150 width=44) (actual time=0.82..15.43 rows=143 loops=1)
  Hash Cond: (o.customer_id = c.id)
  -&gt;  Seq Scan on orders o  (cost=0.00..310.00 rows=150 width=20) (actual time=0.02..14.20 rows=143 loops=1)
        Filter: ((created_at &gt; &#x27;2026-01-01&#x27;) AND (status = &#x27;completed&#x27;))
        Rows Removed by Filter: 99857
  -&gt;  Hash  (cost=10.00..10.00 rows=200 width=24) (actual time=0.45..0.45 rows=200 loops=1)
        -&gt;  Seq Scan on customers c  (cost=0.00..10.00 rows=200 width=24) (actual time=0.01..0.20 rows=200 loops=1)
Planning Time: 0.15 ms
Execution Time: 15.60 ms
</pre>
  <p id="Mbmu">The important things to look for:</p>
  <ul id="pj1i">
    <li id="UVtY"><strong><code>Seq Scan</code> on a large table</strong> — No index is being used, every row is read</li>
    <li id="w3wH"><strong><code>Rows Removed by Filter: 99857</code></strong> — The scan reads far more rows than it returns</li>
    <li id="NKR3"><strong><code>actual rows</code> far from the estimated <code>rows</code></strong> — Statistics are stale, run <code>ANALYZE</code></li>
    <li id="I10d"><strong><code>Nested Loop</code> with high <code>loops</code> count</strong> — The inner side runs thousands of times</li>
    <li id="Ti7H"><strong><code>Sort</code> with <code>external merge</code></strong> — Not enough <code>work_mem</code>, sorting spills to disk</li>
  </ul>
  <p id="5X2K">The <code>Seq Scan</code> on orders above is the bottleneck. It reads 100,000 rows to return 143. An index on <code>(status, created_at)</code> would fix this:</p>
  <pre id="B4Zh">CREATE INDEX idx_orders_status_created ON orders (status, created_at);
</pre>
  <p id="npY4">After creating the index, run <code>EXPLAIN ANALYZE</code> again. You should see an <code>Index Scan</code> or <code>Bitmap Index Scan</code> replacing the sequential scan, and the execution time dropping significantly.</p>
  <p id="FIAT">One thing people miss: <code>EXPLAIN</code> without <code>ANALYZE</code> shows the plan but doesn&#x27;t execute the query. It gives you estimates, not actual numbers. Always use <code>ANALYZE</code> when debugging performance, unless the query modifies data (in that case, wrap it in a transaction and roll back).</p>
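  <p id="dT4p">For a data-modifying statement, the transaction wrapper looks like this (the <code>UPDATE</code> is a hypothetical example; <code>EXPLAIN ANALYZE</code> really executes it, and the rollback discards the changes):</p>
  <pre id="dT4x">BEGIN;
EXPLAIN ANALYZE
UPDATE orders SET status = &#x27;archived&#x27;
WHERE created_at &lt; &#x27;2025-01-01&#x27;;
ROLLBACK;  -- the update ran, but none of its changes are kept
</pre>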
  <h2 id="3-configure-the-slow-query-log">3. Configure the slow query log</h2>
  <p id="Lvbj">pg_stat_statements gives you aggregate data, but sometimes you need to see individual slow queries as they happen. PostgreSQL&#x27;s built-in slow query log captures every query that exceeds a time threshold.</p>
  <p id="o5SF">Add these settings to <code>postgresql.conf</code>:</p>
  <pre id="wl1a">log_min_duration_statement = 500
log_statement = &#x27;none&#x27;
log_duration = off
log_line_prefix = &#x27;%t [%p] %u@%d &#x27;
</pre>
  <p id="uy0T">This logs any query that takes longer than 500 milliseconds. The <code>log_line_prefix</code> adds the timestamp, process ID, username and database name to each log entry, which is essential for debugging.</p>
  <p id="PgDS">Setting <code>log_min_duration_statement = 0</code> logs every query. This is useful for short debugging sessions but generates enormous log files on busy databases. For production, start with 500ms or 1000ms and lower it as you fix the worst offenders.</p>
  <p id="4S6Z">The log entries look like this:</p>
  <pre id="obHd">2026-02-10 14:23:45 UTC [12345] app_user@mydb LOG: duration: 2345.678 ms  statement: 
    SELECT u.*, p.* FROM users u JOIN purchases p ON p.user_id = u.id 
    WHERE u.region = &#x27;eu&#x27; ORDER BY p.created_at DESC;
</pre>
  <p id="tyug">For more structured analysis, tools like pgBadger can parse these logs and generate reports showing the slowest queries, most frequent queries and query patterns over time. But the raw log is often enough to spot problems.</p>
  <p id="U5sK">A practical approach: enable the slow query log in production at 1000ms, review it weekly, fix the top offenders, then lower the threshold to 500ms. Repeat until the log is mostly quiet.</p>
  <h2 id="4-fix-missing-and-misused-indexes">4. Fix missing and misused indexes</h2>
  <p id="xHw1">Missing indexes are the most common cause of slow queries in PostgreSQL. But &quot;add more indexes&quot; isn&#x27;t always the answer. Sometimes existing indexes aren&#x27;t being used, or the wrong type of index was created.</p>
  <p id="B55x"><strong>Finding missing indexes.</strong> Start with the query from pg_stat_statements, then check if the tables involved have appropriate indexes:</p>
  <pre id="Q6X5">SELECT
    schemaname,
    relname AS table_name,
    seq_scan,
    seq_tup_read,
    idx_scan,
    n_live_tup AS row_count
FROM pg_stat_user_tables
WHERE seq_scan &gt; 0
ORDER BY seq_tup_read DESC
LIMIT 15;
</pre>
  <p id="0F0t">Tables with a high <code>seq_tup_read</code> and low <code>idx_scan</code> are being scanned sequentially when they probably shouldn&#x27;t be. A table with 10 million rows and zero index scans is almost certainly missing an index.</p>
  <p id="yDQG"><strong>Finding unused indexes.</strong> Indexes you never use still cost write performance:</p>
  <pre id="FXBl">SELECT
    indexrelname AS index_name,
    relname AS table_name,
    idx_scan AS times_used,
    pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND indexrelid NOT IN (
      SELECT indexrelid FROM pg_index WHERE indisprimary
  )
ORDER BY pg_relation_size(indexrelid) DESC;
</pre>
  <p id="5ZpY">This shows indexes that have never been scanned (excluding primary keys). If an index is 500 MB and has zero scans, it&#x27;s slowing down every write for nothing. Drop it, unless it enforces a unique constraint or the statistics were reset too recently to be meaningful.</p>
  <p id="KwD7"><strong>Common indexing mistakes:</strong></p>
  <ul id="CKEH">
    <li id="ai1j">Indexing a column with very low cardinality (like a boolean <code>is_active</code> column with 99% true values). The planner often prefers a sequential scan because the index doesn&#x27;t filter out enough rows.</li>
    <li id="B7lC">Creating single-column indexes when your queries filter on multiple columns. A composite index on <code>(status, created_at)</code> is much better than separate indexes on <code>status</code> and <code>created_at</code> when your <code>WHERE</code> clause uses both.</li>
    <li id="uIWr">Forgetting partial indexes. If 95% of queries filter for active records, create a partial index:</li>
  </ul>
  <pre id="Xrqs">CREATE INDEX idx_orders_active ON orders (customer_id, created_at)
WHERE status = &#x27;active&#x27;;
</pre>
  <p id="lYtF">This index is smaller, faster to scan and faster to maintain than a full index.</p>
  <h2 id="5-tune-postgresql-memory-and-planner-settings">5. Tune PostgreSQL memory and planner settings</h2>
  <p id="mjKj">Default PostgreSQL configuration is deliberately conservative. It assumes the server has 128 MB of RAM and a single spinning disk. If you&#x27;re running on a modern server with 16 GB of RAM and SSDs, the defaults are leaving performance on the table.</p>
  <p id="JsiK">The key settings that affect query performance:</p>
  <ul id="ALRC">
    <li id="NW5P"><strong><code>shared_buffers</code></strong> — Default: 128 MB. Recommended: 25% of total RAM. Controls PostgreSQL&#x27;s shared memory cache.</li>
    <li id="C1Aa"><strong><code>work_mem</code></strong> — Default: 4 MB. Recommended: 64-256 MB. Controls memory for sorts and hash operations per query.</li>
    <li id="dWQl"><strong><code>effective_cache_size</code></strong> — Default: 4 GB. Recommended: 50-75% of total RAM. Controls planner&#x27;s estimate of available OS cache.</li>
    <li id="yEF8"><strong><code>random_page_cost</code></strong> — Default: 4.0. Recommended: 1.1 for SSD, 2.0 for HDD. Controls cost of random disk reads (affects index usage).</li>
    <li id="KyYU"><strong><code>effective_io_concurrency</code></strong> — Default: 1. Recommended: 200 for SSD. Controls number of concurrent disk I/O operations.</li>
  </ul>
  <p id="g6fd"><strong>shared_buffers</strong> is the most important one. PostgreSQL uses this as its primary data cache. Too low and it constantly re-reads data from disk. Too high and it competes with the OS page cache. 25% of total RAM is a good starting point for most workloads.</p>
  <p id="G5fF"><strong>work_mem</strong> is tricky because it&#x27;s per-operation, not per-query. A complex query with five sort operations and three hash joins could allocate up to 8x <code>work_mem</code>. Setting it to 256 MB sounds reasonable until 50 concurrent connections each allocate multiple chunks. Start with 64 MB and monitor.</p>
  <p id="pOvN"><strong>random_page_cost</strong> is the one that catches most people. The default of 4.0 tells the planner that random disk reads are four times more expensive than sequential reads. That was true for spinning disks. On SSDs, random and sequential reads are nearly identical. Lowering this to 1.1 makes the planner much more willing to use indexes, which is usually what you want on SSD storage.</p>
  <p id="dd5P">You can change these without a restart (except <code>shared_buffers</code>) using:</p>
  <pre id="D1fe">ALTER SYSTEM SET work_mem = &#x27;64MB&#x27;;
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_cache_size = &#x27;12GB&#x27;;
SELECT pg_reload_conf();
</pre>
  <p id="bOAQ">After changing settings, test with <code>EXPLAIN ANALYZE</code> on your slow queries. You should see different plan choices, especially more index scans and in-memory sorts.</p>
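  <p id="eU5p">Because <code>work_mem</code> can also be set per session, you can trial a value against a specific slow query before committing to it globally. A sketch (the query is a placeholder for one of your own):</p>
  <pre id="eU5x">-- Applies only to the current session
SET work_mem = &#x27;256MB&#x27;;
EXPLAIN ANALYZE SELECT o.id, o.total FROM orders o ORDER BY o.total DESC;
-- Look for &quot;Sort Method: quicksort&quot; instead of &quot;external merge&quot; in the plan
</pre>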
  <h2 id="6-rewrite-problematic-query-patterns">6. Rewrite problematic query patterns</h2>
  <p id="lOnW">Sometimes the query itself is the problem. No amount of indexing or tuning will fix a fundamentally inefficient query. Here are patterns that consistently cause performance issues and how to fix them.</p>
  <p id="5F1x"><strong>SELECT * when you only need a few columns.</strong> This forces PostgreSQL to read and transfer every column, including large text or JSONB fields:</p>
  <pre id="XdVY">-- Slow: reads everything, including a 10 KB description column
SELECT * FROM products WHERE category = &#x27;electronics&#x27;;

-- Better: only fetches what you need
SELECT id, name, price FROM products WHERE category = &#x27;electronics&#x27;;
</pre>
  <p id="k5jo">This matters more than people think, especially with TOAST (The Oversized Attribute Storage Technique). Large columns are stored separately, and fetching them requires additional disk reads.</p>
  <p id="NSqx"><strong>Correlated subqueries that run once per row.</strong> The planner sometimes can&#x27;t flatten these:</p>
  <pre id="RGpj">-- Slow: subquery executes for each order row
SELECT o.id, o.total,
    (SELECT name FROM customers c WHERE c.id = o.customer_id)
FROM orders o
WHERE o.created_at &gt; &#x27;2026-01-01&#x27;;

-- Better: explicit JOIN
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at &gt; &#x27;2026-01-01&#x27;;
</pre>
  <p id="t74q"><strong>Using OFFSET for pagination on large datasets.</strong> <code>OFFSET 100000</code> means PostgreSQL fetches and discards 100,000 rows before returning results:</p>
  <pre id="5kgF">-- Slow: scans and discards 100,000 rows
SELECT * FROM events ORDER BY created_at DESC LIMIT 20 OFFSET 100000;

-- Better: keyset pagination using the last seen value
SELECT * FROM events
WHERE created_at &lt; &#x27;2026-01-15T10:30:00Z&#x27;
ORDER BY created_at DESC
LIMIT 20;
</pre>
  <p id="U8eI">Keyset pagination is consistently fast regardless of how deep into the result set you go. It requires an index on the column you&#x27;re paginating by.</p>
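  <p id="fV6p">The supporting index for the keyset example above would be something like:</p>
  <pre id="fV6x">CREATE INDEX idx_events_created_at ON events (created_at DESC);
</pre>
  <p id="fV6q">A plain ascending index works too, since B-trees can be scanned backwards. And if <code>created_at</code> isn&#x27;t unique, paginate on a tiebreaker pair such as <code>(created_at, id)</code> so boundary rows aren&#x27;t skipped or duplicated.</p>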
  <p id="OhWT"><strong>Unnecessary DISTINCT or GROUP BY.</strong> If you&#x27;re adding <code>DISTINCT</code> because a JOIN produces duplicates, the JOIN is probably wrong. Fix the JOIN condition instead of papering over it with <code>DISTINCT</code>.</p>
  <p id="FHNx"><strong>Functions in WHERE clauses that prevent index usage:</strong></p>
  <pre id="5gX4">-- Index on created_at won&#x27;t be used
SELECT * FROM orders WHERE EXTRACT(YEAR FROM created_at) = 2026;

-- Rewrite to use the index
SELECT * FROM orders
WHERE created_at &gt;= &#x27;2026-01-01&#x27; AND created_at &lt; &#x27;2027-01-01&#x27;;
</pre>
  <h2 id="7-keep-statistics-up-to-date-with-analyze-and-vacuum">7. Keep statistics up to date with ANALYZE and VACUUM</h2>
  <p id="EAcn">PostgreSQL&#x27;s query planner relies on table statistics to make decisions. How many rows does a table have? What&#x27;s the distribution of values in each column? How many distinct values are there? If these statistics are wrong, the planner makes bad choices.</p>
  <p id="COKZ"><code>ANALYZE</code> collects fresh statistics about table contents. <code>VACUUM</code> reclaims space from deleted or updated rows (dead tuples) that PostgreSQL can&#x27;t reuse. Both are essential for sustained query performance.</p>
  <p id="1NLv">Autovacuum handles this automatically by default, but it doesn&#x27;t always keep up. Large batch operations, bulk deletes and rapidly growing tables can outpace the default autovacuum settings.</p>
  <p id="cYBS">Check if your statistics are stale:</p>
  <pre id="8vgg">SELECT
    relname AS table_name,
    last_analyze,
    last_autoanalyze,
    n_live_tup,
    n_dead_tup,
    round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 15;
</pre>
  <p id="Xgd9">Tables with a high percentage of dead tuples need vacuuming. Tables that haven&#x27;t been analyzed recently may have stale statistics.</p>
  <p id="wlG1">If a table had 10,000 rows when statistics were collected but now has 10 million, the planner might choose a sequential scan based on the old row count when an index scan would be far more efficient. Running <code>ANALYZE</code> fixes this:</p>
  <pre id="Ffzt">ANALYZE orders;
</pre>
  <p id="SjdR">For autovacuum tuning, the defaults are cautious. On busy databases, consider adjusting:</p>
  <pre id="3Mzk">autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02
autovacuum_vacuum_cost_delay = 2ms
</pre>
  <p id="JT6I">The scale factors control when autovacuum kicks in. The default <code>autovacuum_vacuum_scale_factor</code> of 0.2 means autovacuum runs after 20% of rows have been modified. On a 100 million row table, that&#x27;s 20 million dead tuples before cleanup starts. Lowering it to 0.05 (5%) keeps things cleaner.</p>
  <p id="Vy7l">For large tables with specific requirements, you can set per-table autovacuum settings:</p>
  <pre id="rbRH">ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
</pre>
  <h2 id="keeping-your-data-safe-while-you-optimize">Keeping your data safe while you optimize</h2>
  <p id="8OlA">Tuning queries and tweaking PostgreSQL configuration is relatively safe work. But mistakes happen. A dropped index on a production table during peak hours, a configuration change that causes out-of-memory crashes, an <code>ANALYZE</code> on a massive table that locks things at the wrong moment.</p>
  <p id="uqHY">Having reliable backups means you can optimize with confidence. <a href="https://databasus.com/" target="_blank">PostgreSQL backup</a> tools like Databasus handle automated scheduled backups with compression, encryption and multiple storage destinations, and scale from individual developers to enterprise teams.</p>
  <h2 id="putting-it-all-together">Putting it all together</h2>
  <p id="X3uz">Fixing slow queries in PostgreSQL isn&#x27;t a one-time task. It&#x27;s a cycle: identify the slow queries with pg_stat_statements, understand why they&#x27;re slow with EXPLAIN ANALYZE, fix the root cause (missing index, bad query pattern, stale statistics or wrong configuration) and then monitor to make sure the fix holds.</p>
  <p id="8iFU">Start with pg_stat_statements if you haven&#x27;t already. It takes five minutes to set up and immediately shows you where your database is spending its time. From there, work through the list: check your indexes, review your configuration settings, look for problematic query patterns and make sure autovacuum is keeping up.</p>
  <p id="DpWv">Most PostgreSQL performance problems have straightforward solutions. The hard part is knowing where to look.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/postgresql-indexes-types</guid><link>https://teletype.in/@rostislav_dugin/postgresql-indexes-types?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/postgresql-indexes-types?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>PostgreSQL indexing explained — 5 index types and when to use each</title><pubDate>Mon, 09 Feb 2026 14:02:15 GMT</pubDate><media:content medium="image" url="https://img1.teletype.in/files/4a/cc/4acc5850-3b27-4343-9639-51563996abdf.png"></media:content><description><![CDATA[<img src="https://img2.teletype.in/files/d5/6b/d56b18a6-da59-4304-a717-a193172be400.png"></img>Indexes are one of those things that everybody knows they should use, but few people actually understand beyond the basics. You create an index, the query gets faster, done. Except when it doesn't. Or when the wrong index makes things slower. Or when you're running five indexes on a table and none of them are being used.]]></description><content:encoded><![CDATA[
  <p id="xmvX">Indexes are one of those things that everybody knows they should use, but few people actually understand beyond the basics. You create an index, the query gets faster, done. Except when it doesn&#x27;t. Or when the wrong index makes things slower. Or when you&#x27;re running five indexes on a table and none of them are being used.</p>
  <p id="pigE">PostgreSQL ships with five distinct index types, each designed for different access patterns. Picking the right one is the difference between a query that takes 2 milliseconds and one that takes 20 seconds. This article covers all five, when they actually help and when they&#x27;re a waste of disk space.</p>
  <figure id="wXuB" class="m_original">
    <img src="https://img2.teletype.in/files/d5/6b/d56b18a6-da59-4304-a717-a193172be400.png" width="1536" />
    <figcaption>PostgreSQL indexes</figcaption>
  </figure>
  <h2 id="how-postgresql-indexes-work-under-the-hood">How PostgreSQL indexes work under the hood</h2>
  <p id="XT7W">Before jumping into specific types, it helps to understand what an index actually does. A PostgreSQL index is a separate data structure that maps column values to the physical location of rows on disk. When you run a query with a <code>WHERE</code> clause, the planner checks whether an index exists that can narrow down the search instead of scanning every row.</p>
  <p id="hVFz">Without an index, PostgreSQL performs a sequential scan. It reads the entire table, row by row, checking each one against your filter. For a table with 100 rows, that&#x27;s fine. For a table with 100 million rows, it&#x27;s a problem.</p>
  <pre id="ZDSa">-- Without an index, this scans the entire table
SELECT * FROM orders WHERE customer_id = &#x27;abc-123&#x27;;

-- With an index on customer_id, PostgreSQL jumps directly to matching rows
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
</pre>
  <p id="EH0Q">Indexes aren&#x27;t free though. Every index takes disk space and slows down <code>INSERT</code>, <code>UPDATE</code> and <code>DELETE</code> operations because PostgreSQL has to maintain the index alongside the table data. A table with ten indexes means every write operation updates ten additional data structures.</p>
  <p id="CDUh">The goal is to have the right indexes for your query patterns and nothing more.</p>
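  <p id="gW7p">You can see what your indexes cost in disk terms with the built-in size functions. A sketch against a hypothetical <code>orders</code> table:</p>
  <pre id="gW7x">SELECT pg_size_pretty(pg_table_size(&#x27;orders&#x27;))   AS table_size,
       pg_size_pretty(pg_indexes_size(&#x27;orders&#x27;)) AS total_index_size;
</pre>
  <p id="gW7q">It&#x27;s not unusual for the indexes to outweigh the table itself, which is a good prompt to review whether each one earns its keep.</p>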
  <h2 id="1-b-tree--the-default-workhorse">1. B-tree — the default workhorse</h2>
  <p id="NAbP">B-tree is the default index type in PostgreSQL. If you run <code>CREATE INDEX</code> without specifying a type, you get a B-tree. It handles equality and range queries on sortable data, which covers the vast majority of real-world use cases.</p>
  <p id="mvVP">B-tree indexes store data in a balanced tree structure. Each node contains sorted keys and pointers to child nodes, allowing PostgreSQL to find any value in O(log n) time. They support <code>=</code>, <code>&lt;</code>, <code>&gt;</code>, <code>&lt;=</code>, <code>&gt;=</code>, <code>BETWEEN</code> and <code>IS NULL</code> operators efficiently.</p>
  <pre id="S8GU">-- All of these use B-tree indexes effectively
CREATE INDEX idx_orders_created_at ON orders (created_at);

SELECT * FROM orders WHERE created_at &gt; &#x27;2026-01-01&#x27;;
SELECT * FROM orders WHERE created_at BETWEEN &#x27;2026-01-01&#x27; AND &#x27;2026-02-01&#x27;;
SELECT * FROM orders WHERE created_at = &#x27;2026-02-08&#x27;;

-- Multi-column B-tree indexes
CREATE INDEX idx_orders_customer_date ON orders (customer_id, created_at);

-- This uses the index (leftmost prefix rule)
SELECT * FROM orders WHERE customer_id = &#x27;abc-123&#x27; AND created_at &gt; &#x27;2026-01-01&#x27;;

-- This also uses the index (first column matches)
SELECT * FROM orders WHERE customer_id = &#x27;abc-123&#x27;;

-- This does NOT use the index efficiently (skips the first column)
SELECT * FROM orders WHERE created_at &gt; &#x27;2026-01-01&#x27;;
</pre>
  <p id="WQZQ">The column order in multi-column B-tree indexes matters a lot. PostgreSQL can use the index starting from the leftmost column. If your query only filters on the second column, the index likely won&#x27;t help.</p>
  <p id="R7hd"><strong>When B-tree works:</strong></p>
  <ul id="3Tqm">
    <li id="eCnT">Exact match (<code>WHERE status = &#x27;active&#x27;</code>) — Yes</li>
    <li id="bc8u">Range queries (<code>WHERE price &gt; 100</code>) — Yes</li>
    <li id="wEVK">Sorting (<code>ORDER BY created_at DESC</code>) — Yes</li>
    <li id="ygTn">Pattern matching (<code>WHERE name LIKE &#x27;John%&#x27;</code>) — Yes (prefix only)</li>
    <li id="8jHW">Pattern matching (<code>WHERE name LIKE &#x27;%John%&#x27;</code>) — No</li>
    <li id="RPfM">Array or JSON containment — No</li>
  </ul>
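  <p id="bTp1">One caveat on prefix matching: unless the database uses the C locale, a plain B-tree index can&#x27;t serve <code>LIKE &#x27;John%&#x27;</code> queries. The <code>text_pattern_ops</code> operator class is the usual fix (table, column and index names here are illustrative):</p>
  <pre id="bTp2">-- Under a non-C locale, this index won&#x27;t be used for LIKE prefix searches
CREATE INDEX idx_users_name ON users (name);

-- text_pattern_ops makes a B-tree usable for left-anchored LIKE
CREATE INDEX idx_users_name_pattern ON users (name text_pattern_ops);

SELECT * FROM users WHERE name LIKE &#x27;John%&#x27;;
</pre>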
  <p id="aCvo">B-tree is the right choice for primary keys, foreign keys, timestamp columns used in range filters and any column you frequently sort on. If you&#x27;re unsure which index type to use, B-tree is almost always a safe starting point.</p>
  <h2 id="2-hash--fast-equality-lookups">2. Hash — fast equality lookups</h2>
  <p id="rw1s">Hash indexes build a hash table mapping each value to the row locations that contain it. They only support equality comparisons (<code>=</code>), but they do it with O(1) lookup time instead of O(log n) for B-tree.</p>
  <p id="W0Ud">Before PostgreSQL 10, hash indexes were not crash-safe because they weren&#x27;t WAL-logged. That made them basically unusable in production. Since PostgreSQL 10, they&#x27;re fully crash-safe and a reasonable option for specific workloads.</p>
  <pre id="9qaA">CREATE INDEX idx_sessions_token ON sessions USING hash (token);

-- This uses the hash index
SELECT * FROM sessions WHERE token = &#x27;a1b2c3d4e5f6&#x27;;

-- This does NOT use the hash index (not an equality check)
SELECT * FROM sessions WHERE token &gt; &#x27;a1b2c3d4e5f6&#x27;;
</pre>
  <p id="36cr">Hash indexes are smaller than B-tree indexes for the same data, which can matter for large tables with high-cardinality columns. If you have a table with 50 million rows and you only ever look up by an exact session token or API key, a hash index uses less memory and disk.</p>
  <p id="qQek">In practice, the difference is often marginal. B-tree handles equality just fine, and it also supports range queries as a bonus. Most PostgreSQL users never create a hash index. But if you&#x27;re optimizing a high-throughput lookup table where every byte of index size matters, it&#x27;s worth benchmarking.</p>
  <p id="LK0D">When to use hash over B-tree:</p>
  <ul id="96ny">
    <li id="l7sh">Exact match queries only, no range scans</li>
    <li id="PAO1">Very high cardinality columns (UUIDs, tokens, hashes)</li>
    <li id="i8dC">You want the smallest possible index size</li>
    <li id="erM5">You&#x27;ve benchmarked and confirmed it outperforms B-tree for your workload</li>
  </ul>
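  <p id="hSz1">If you want to check whether the size difference matters for your table, you can build both and compare directly (index names here are illustrative):</p>
  <pre id="hSz2">CREATE INDEX idx_sessions_token_btree ON sessions (token);
CREATE INDEX idx_sessions_token_hash ON sessions USING hash (token);

-- Compare the on-disk size of the two indexes
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname IN (&#x27;idx_sessions_token_btree&#x27;, &#x27;idx_sessions_token_hash&#x27;);
</pre>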
  <h2 id="3-gin--for-full-text-search-arrays-and-jsonb">3. GIN — for full-text search, arrays and JSONB</h2>
  <p id="lmGB">GIN stands for Generalized Inverted Index. It&#x27;s designed for values that contain multiple elements, like arrays, JSONB documents and full-text search vectors. Where a B-tree maps one value to one row, a GIN index maps each element inside a composite value to the rows that contain it.</p>
  <p id="2AWh">Think of it like a book index at the back of a textbook. You look up a word and it tells you all the pages where that word appears. GIN does the same thing for array elements, JSON keys and text lexemes.</p>
  <pre id="YWvI">-- Full-text search
CREATE INDEX idx_articles_search ON articles USING gin (to_tsvector(&#x27;english&#x27;, body));

SELECT * FROM articles
WHERE to_tsvector(&#x27;english&#x27;, body) @@ to_tsquery(&#x27;postgresql &amp; indexing&#x27;);

-- JSONB containment
CREATE INDEX idx_events_data ON events USING gin (metadata);

SELECT * FROM events
WHERE metadata @&gt; &#x27;{&quot;source&quot;: &quot;api&quot;, &quot;version&quot;: 2}&#x27;;

-- Array containment
CREATE INDEX idx_products_tags ON products USING gin (tags);

SELECT * FROM products
WHERE tags @&gt; ARRAY[&#x27;electronics&#x27;, &#x27;wireless&#x27;];
</pre>
  <p id="vtVL">GIN indexes are slower to build and update than B-tree indexes. Every insert potentially needs to update many entries in the inverted index. For write-heavy tables, this can be a noticeable overhead. PostgreSQL mitigates this with &quot;fastupdate&quot;, which collects new entries in a pending list instead of updating the index immediately. Queries still see those entries because they scan the pending list too, but reads can slow down during heavy writes until the list is merged into the main index.</p>
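  <p id="gFu1">The batching behavior is tunable per index. A sketch, reusing the <code>idx_events_data</code> index from the example above:</p>
  <pre id="gFu2">-- Disable batching entirely: writes get slower, reads stay predictable
CREATE INDEX idx_events_data ON events USING gin (metadata) WITH (fastupdate = off);

-- Or keep batching but cap the pending list size (value in kilobytes)
ALTER INDEX idx_events_data SET (gin_pending_list_limit = 512);
</pre>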
  <p id="Gy2D"><strong>B-tree vs GIN comparison:</strong></p>
  <p id="LLqu"><strong>B-tree:</strong></p>
  <ul id="FPcL">
    <li id="6IjT">Equality and range queries — Yes</li>
    <li id="QSD1">Full-text search (<code>@@</code>) — No</li>
    <li id="Yrce">Array containment (<code>@&gt;</code>) — No</li>
    <li id="kdiK">JSONB containment (<code>@&gt;</code>, <code>?</code>, <code>?&amp;</code>) — No</li>
    <li id="c0sc">Index build speed — Fast</li>
    <li id="sBXz">Write overhead — Low</li>
    <li id="Oybu">Index size — Moderate</li>
  </ul>
  <p id="Zo3P"><strong>GIN:</strong></p>
  <ul id="dyFh">
    <li id="WcMh">Equality and range queries — No</li>
    <li id="SCfJ">Full-text search (<code>@@</code>) — Yes</li>
    <li id="js9G">Array containment (<code>@&gt;</code>) — Yes</li>
    <li id="xRQT">JSONB containment (<code>@&gt;</code>, <code>?</code>, <code>?&amp;</code>) — Yes</li>
    <li id="Hjac">Index build speed — Slow</li>
    <li id="7Fhh">Write overhead — Medium to high</li>
    <li id="JeBO">Index size — Large</li>
  </ul>
  <p id="MZQW">GIN is the correct choice whenever you need to search within composite values. If you&#x27;re running <code>WHERE tags @&gt; ...</code>, <code>WHERE metadata @&gt; ...</code> or <code>WHERE tsvector @@ tsquery</code>, a GIN index is what you want. Just be aware that it comes with higher write costs and larger disk usage compared to B-tree.</p>
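  <p id="jPo1">For JSONB specifically, there&#x27;s a leaner variant worth knowing: the <code>jsonb_path_ops</code> operator class builds a smaller, faster GIN index at the cost of supporting fewer operators:</p>
  <pre id="jPo2">-- Smaller GIN index that supports containment (@&gt;) but not the
-- key-existence operators (?, ?| and ?&amp;)
CREATE INDEX idx_events_data_path ON events USING gin (metadata jsonb_path_ops);

SELECT * FROM events WHERE metadata @&gt; &#x27;{&quot;source&quot;: &quot;api&quot;}&#x27;;
</pre>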
  <h2 id="4-gist--for-geometric-range-and-proximity-queries">4. GiST — for geometric, range and proximity queries</h2>
  <p id="cwZ3">GiST stands for Generalized Search Tree. It&#x27;s a framework for building custom index types, but in practice it&#x27;s mostly used for geometric data (points, polygons, circles), range types (date ranges, integer ranges) and full-text search (as an alternative to GIN).</p>
  <p id="NSZ2">GiST indexes work by recursively partitioning the search space. For geometric data, imagine dividing a map into progressively smaller regions. To find all restaurants within 500 meters, the index eliminates entire regions that are too far away without checking individual rows.</p>
  <pre id="xR3S">-- PostGIS spatial queries
CREATE INDEX idx_locations_geo ON locations USING gist (coordinates);

SELECT * FROM locations
WHERE ST_DWithin(coordinates, ST_MakePoint(-73.985, 40.748)::geography, 500);

-- Range overlap queries
CREATE INDEX idx_reservations_period ON reservations USING gist (during);

SELECT * FROM reservations
WHERE during &amp;&amp; daterange(&#x27;2026-02-01&#x27;, &#x27;2026-02-15&#x27;);

-- Nearest-neighbor search
SELECT name, ST_Distance(coordinates, ST_MakePoint(-73.985, 40.748)::geography) AS distance
FROM locations
ORDER BY coordinates &lt;-&gt; ST_MakePoint(-73.985, 40.748)::geography
LIMIT 10;
</pre>
  <p id="Oywp">GiST also supports full-text search, but with different trade-offs compared to GIN. GiST full-text indexes are faster to build and smaller on disk, but slower for queries, especially when a search term appears in many documents. GIN is generally preferred for full-text search unless you&#x27;re combining it with other GiST-supported operations.</p>
  <p id="txrY">When to use GiST:</p>
  <ul id="wZTq">
    <li id="Ffwd">PostGIS and geographic data (finding nearby points, intersecting polygons)</li>
    <li id="r0cj">Range type operations (overlapping date ranges, integer ranges)</li>
    <li id="ponJ">Nearest-neighbor queries (<code>ORDER BY ... &lt;-&gt;</code>)</li>
    <li id="HFxq">Exclusion constraints (preventing overlapping ranges in a table)</li>
  </ul>
  <pre id="idCm">-- Exclusion constraint using GiST
-- Prevents overlapping room reservations
-- btree_gist is needed so GiST can handle the = operator on integers
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE room_bookings (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    room_id INTEGER NOT NULL,
    during TSTZRANGE NOT NULL,
    EXCLUDE USING gist (room_id WITH =, during WITH &amp;&amp;)
);
</pre>
  <p id="Qo8O">The exclusion constraint example is particularly useful. It guarantees at the database level that no two bookings for the same room can overlap. This is something you can&#x27;t do with B-tree indexes.</p>
  <h2 id="5-brin--for-large-naturally-ordered-tables">5. BRIN — for large, naturally ordered tables</h2>
  <p id="7LaM">BRIN stands for Block Range Index. It&#x27;s the most space-efficient index type PostgreSQL offers, but it only works well under a specific condition: the physical order of rows on disk must correlate with the column values.</p>
  <p id="YKqY">Instead of indexing every row, BRIN indexes store summary information (min and max values) for each block range, which is a group of consecutive physical pages. When PostgreSQL scans for rows, it checks the block summaries and skips entire ranges that can&#x27;t contain matching data.</p>
  <pre id="oCI5">-- Perfect for a time-series table where rows are inserted in chronological order
CREATE INDEX idx_logs_created_at ON access_logs USING brin (created_at);

-- This can skip huge portions of the table
SELECT * FROM access_logs
WHERE created_at BETWEEN &#x27;2026-02-01&#x27; AND &#x27;2026-02-02&#x27;;
</pre>
  <p id="et5W">The size difference is dramatic. A B-tree index on a 100 GB table might be 2 GB. A BRIN index on the same table could be 100 KB. That&#x27;s not a typo. BRIN indexes are orders of magnitude smaller because they store one summary per block range instead of one entry per row.</p>
  <p id="T2qs">But this efficiency has a hard prerequisite. If the data isn&#x27;t physically ordered on disk by the indexed column, BRIN is useless. If you insert rows with random timestamps, the min/max summaries for each block range will span the entire value space, and PostgreSQL won&#x27;t be able to skip anything.</p>
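  <p id="bRn1">The granularity of the summaries is tunable through the <code>pages_per_range</code> storage parameter (default 128 pages per range). Smaller values produce a larger index but allow more precise block skipping. A sketch:</p>
  <pre id="bRn2">-- Finer-grained summaries: bigger index, more precise skipping
CREATE INDEX idx_logs_created_at_brin ON access_logs USING brin (created_at)
WITH (pages_per_range = 32);
</pre>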
  <p id="lkmJ">Good candidates for BRIN:</p>
  <ul id="cYSw">
    <li id="OOpu">Append-only tables with timestamp columns (logs, events, audit trails)</li>
    <li id="A6ef">Tables where rows are inserted in natural order of some column</li>
    <li id="sL6E">Very large tables (millions or billions of rows) where B-tree index size is a concern</li>
  </ul>
  <p id="WAS4">Bad candidates for BRIN:</p>
  <ul id="p2xA">
    <li id="n75k">Tables with frequent updates that change the indexed column</li>
    <li id="UHvj">Tables where rows are inserted in random order</li>
    <li id="2YiS">Small tables (B-tree is more efficient for small datasets)</li>
  </ul>
  <p id="gkWm">BRIN is a specialized tool. When it fits, it&#x27;s incredible. When it doesn&#x27;t, it won&#x27;t help at all. Check the correlation between physical row order and column values using <code>pg_stats</code> before deciding:</p>
  <pre id="zKnv">SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = &#x27;access_logs&#x27; AND attname = &#x27;created_at&#x27;;
</pre>
  <p id="OW2Q">A correlation value close to 1 or -1 means BRIN will work well. Values near 0 mean the data is randomly distributed and BRIN won&#x27;t help.</p>
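  <p id="cLu1">If correlation has degraded, you can physically reorder the table with <code>CLUSTER</code>, which rewrites it in the order of an existing B-tree index. Note that <code>CLUSTER</code> holds an exclusive lock for the duration, so treat it as an offline operation (index name here is illustrative):</p>
  <pre id="cLu2">-- Requires a B-tree index on the column; locks the table while rewriting
CREATE INDEX idx_logs_btree ON access_logs (created_at);
CLUSTER access_logs USING idx_logs_btree;
ANALYZE access_logs;
</pre>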
  <h2 id="practical-indexing-tips">Practical indexing tips</h2>
  <p id="XvRU">Knowing which index types exist is half the story. The other half is using them effectively.</p>
  <p id="3HGq"><strong>Check if your indexes are actually being used.</strong> PostgreSQL tracks index usage statistics. If an index hasn&#x27;t been scanned in months, it&#x27;s costing you write performance for no benefit.</p>
  <pre id="qArR">SELECT
    indexrelname AS index_name,
    idx_scan AS times_used,
    pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE schemaname = &#x27;public&#x27;
ORDER BY idx_scan ASC;
</pre>
  <p id="Ifa5"><strong>Use <code>EXPLAIN ANALYZE</code> before and after creating indexes.</strong> Don&#x27;t assume an index will help. Verify it. Sometimes the planner chooses a sequential scan because the table is small enough that the index adds no value.</p>
  <pre id="c4wF">EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = &#x27;abc-123&#x27;;
</pre>
  <p id="1y1q"><strong>Consider partial indexes for filtered queries.</strong> If you only ever query active orders, index only the active rows:</p>
  <pre id="9MIQ">CREATE INDEX idx_orders_active ON orders (customer_id)
WHERE status = &#x27;active&#x27;;
</pre>
  <p id="Yg8c">This index is smaller and faster than indexing all orders because it only covers rows matching the condition.</p>
  <p id="mBBP"><strong>Don&#x27;t forget about covering indexes.</strong> If a query only needs columns that are all in the index, PostgreSQL can answer it entirely from the index without touching the table. This is called an index-only scan.</p>
  <pre id="WEW5">CREATE INDEX idx_orders_covering ON orders (customer_id) INCLUDE (total, created_at);

-- This can be served entirely from the index
SELECT total, created_at FROM orders WHERE customer_id = &#x27;abc-123&#x27;;
</pre>
  <h2 id="keeping-your-data-safe-while-you-optimize">Keeping your data safe while you optimize</h2>
  <p id="1XrW">Experimenting with indexes is relatively low-risk since you can always drop an index and try again. But schema changes, large data migrations and production experiments can go wrong in ways that are harder to undo.</p>
  <p id="umyJ">Having a reliable <a href="https://databasus.com/" target="_blank">PostgreSQL backup</a> strategy means you can experiment with confidence. Databasus is an industry standard for PostgreSQL backup tools. It handles automated scheduled backups with compression, encryption and multiple storage destinations, suitable for individual developers and enterprise teams alike.</p>
  <h2 id="choosing-the-right-index-for-your-workload">Choosing the right index for your workload</h2>
  <p id="Ngs0">There&#x27;s no universal &quot;best&quot; index type. The right choice depends entirely on your data and your queries. B-tree covers most common scenarios. GIN handles composite and full-text data. GiST solves geometric and range problems. Hash optimizes pure equality lookups. BRIN saves massive disk space on naturally ordered data.</p>
  <p id="TZux">Start with <code>EXPLAIN ANALYZE</code> on your slowest queries, identify what kind of operations they perform and match those operations to the appropriate index type. One well-chosen index beats five poorly chosen ones every time.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/mongodb-schema-design</guid><link>https://teletype.in/@rostislav_dugin/mongodb-schema-design?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/mongodb-schema-design?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>MongoDB schema design — 6 patterns every developer should master</title><pubDate>Sun, 08 Feb 2026 19:57:46 GMT</pubDate><media:content medium="image" url="https://img3.teletype.in/files/25/7d/257dc036-9a1b-4a82-96f5-bd36a2259063.png"></media:content><description><![CDATA[<img src="https://img3.teletype.in/files/e0/95/e095c62c-a602-42a1-8aaf-577d37e1b68f.png"></img>MongoDB gives you flexibility that relational databases don't. No rigid tables, no mandatory schemas, no upfront column definitions. You just throw documents into a collection and go. That freedom is exactly what makes schema design in MongoDB so important and so easy to get wrong.]]></description><content:encoded><![CDATA[
  <p id="qAGL">MongoDB gives you flexibility that relational databases don&#x27;t. No rigid tables, no mandatory schemas, no upfront column definitions. You just throw documents into a collection and go. That freedom is exactly what makes schema design in MongoDB so important and so easy to get wrong.</p>
  <p id="cSzg">The problem is that &quot;schemaless&quot; doesn&#x27;t mean &quot;no design needed.&quot; Without a good schema strategy, you end up with slow queries, bloated documents and data that&#x27;s hard to work with as your application grows. These six patterns solve the most common problems developers hit when designing MongoDB schemas.</p>
  <figure id="hXT4" class="m_original">
    <img src="https://img3.teletype.in/files/e0/95/e095c62c-a602-42a1-8aaf-577d37e1b68f.png" width="1536" />
    <figcaption>MongoDB schema</figcaption>
  </figure>
  <h2 id="1-embedding-vs-referencing">1. Embedding vs referencing</h2>
  <p id="obOI">This is the first decision you&#x27;ll make for every relationship in your data model. Should related data live inside the same document or in a separate collection with a reference? The answer depends on how you read and write the data.</p>
  <p id="MZKt">Embedding means nesting related data directly within a document. If you have a blog post with comments, embedding puts the comments array inside the post document. One read gets everything. No joins needed.</p>
  <pre id="UeEK">// Embedded comments inside a blog post
{
  _id: ObjectId(&quot;...&quot;),
  title: &quot;MongoDB schema tips&quot;,
  author: &quot;Jane&quot;,
  comments: [
    { user: &quot;Bob&quot;, text: &quot;Great article!&quot;, date: ISODate(&quot;2026-02-01&quot;) },
    { user: &quot;Alice&quot;, text: &quot;Very helpful&quot;, date: ISODate(&quot;2026-02-02&quot;) }
  ]
}
</pre>
  <p id="xblu">Referencing stores related data in a separate collection and links them with an ObjectId. You fetch the post first, then the comments in a second query (or use $lookup for a server-side join).</p>
  <pre id="Z4W3">// Post document
{ _id: ObjectId(&quot;post1&quot;), title: &quot;MongoDB schema tips&quot;, author: &quot;Jane&quot; }

// Separate comment documents
{ _id: ObjectId(&quot;c1&quot;), postId: ObjectId(&quot;post1&quot;), user: &quot;Bob&quot;, text: &quot;Great article!&quot; }
{ _id: ObjectId(&quot;c2&quot;), postId: ObjectId(&quot;post1&quot;), user: &quot;Alice&quot;, text: &quot;Very helpful&quot; }
</pre>
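  <p id="lKp1">The <code>$lookup</code> join mentioned above can be sketched like this (collection names assumed from the example):</p>
  <pre id="lKp2">// Fetch a post together with its comments in one round trip
db.posts.aggregate([
  { $match: { _id: ObjectId(&quot;post1&quot;) } },
  {
    $lookup: {
      from: &quot;comments&quot;,
      localField: &quot;_id&quot;,
      foreignField: &quot;postId&quot;,
      as: &quot;comments&quot;
    }
  }
])
</pre>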
  <p id="VtIs">When to embed vs reference:</p>
  <p id="Umiu"><strong>Embed when:</strong></p>
  <ul id="suwM">
    <li id="uuCn">Data is always read together</li>
    <li id="wYZC">Array growth is bounded (won&#x27;t grow indefinitely)</li>
    <li id="da3Z">Document size stays well under 16 MB limit</li>
    <li id="cBHM">Nested data rarely changes</li>
    <li id="sXCC">Data is used only in this context</li>
  </ul>
  <p id="rPpQ"><strong>Reference when:</strong></p>
  <ul id="dziU">
    <li id="WrZv">Data is read independently</li>
    <li id="bHa9">Array growth is unbounded (could grow to thousands)</li>
    <li id="lDDI">Document size would approach size limits</li>
    <li id="Mnn9">Nested data changes frequently</li>
    <li id="5tzt">Data is shared across multiple documents</li>
  </ul>
  <p id="93gm">Embedding works well for one-to-few relationships where the nested data is tightly coupled to the parent. Think user profiles with addresses, products with a small list of variants or orders with line items. Referencing is better when the related data grows without bound, gets accessed independently or is shared across multiple parent documents.</p>
  <h2 id="2-the-subset-pattern">2. The subset pattern</h2>
  <p id="R1WL">Documents in MongoDB have a 16 MB size limit, but you&#x27;ll hit performance problems long before that. Loading a 2 MB document when you only need a few fields from it wastes network bandwidth and memory. The subset pattern solves this by keeping the most-accessed data in the main document and moving the rest to a secondary collection.</p>
  <p id="Bg1n">A common example is an e-commerce product page. The product listing shows the name, price, main image and the three most recent reviews. But the product might have 500 reviews total. Loading all 500 reviews every time someone views the product page is wasteful.</p>
  <pre id="0oHM">// Main product document (fast reads for product listings)
{
  _id: ObjectId(&quot;prod1&quot;),
  name: &quot;Wireless Headphones&quot;,
  price: 79.99,
  image: &quot;headphones-main.jpg&quot;,
  recentReviews: [
    { user: &quot;Alex&quot;, rating: 5, text: &quot;Sound quality is excellent&quot;, date: ISODate(&quot;2026-02-05&quot;) },
    { user: &quot;Sam&quot;, rating: 4, text: &quot;Comfortable for long use&quot;, date: ISODate(&quot;2026-02-03&quot;) },
    { user: &quot;Jordan&quot;, rating: 5, text: &quot;Best in this price range&quot;, date: ISODate(&quot;2026-01-28&quot;) }
  ],
  reviewCount: 487,
  averageRating: 4.3
}

// Full reviews in a separate collection (loaded only on &quot;See all reviews&quot;)
{
  _id: ObjectId(&quot;rev1&quot;),
  productId: ObjectId(&quot;prod1&quot;),
  user: &quot;Alex&quot;,
  rating: 5,
  text: &quot;Sound quality is excellent&quot;,
  date: ISODate(&quot;2026-02-05&quot;)
}
</pre>
  <p id="XSyR">The trade-off is data duplication. The three recent reviews exist in both the product document and the reviews collection. You need to keep them in sync when reviews are added. But the read performance gain is significant because 95% of your traffic only needs the subset.</p>
  <p id="5JGX">This pattern applies anywhere you have a one-to-many relationship where most reads only need a small portion of the &quot;many&quot; side. User activity feeds, article comments and notification lists all benefit from it.</p>
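  <p id="sUb1">Keeping the embedded subset in sync can be done in a single update using <code>$push</code> with the <code>$sort</code> and <code>$slice</code> modifiers (a sketch, assuming a <code>newReview</code> document built by the application):</p>
  <pre id="sUb2">// Insert the full review, then refresh the embedded subset
db.reviews.insertOne(newReview);

db.products.updateOne(
  { _id: ObjectId(&quot;prod1&quot;) },
  {
    $push: {
      recentReviews: {
        $each: [{ user: newReview.user, rating: newReview.rating,
                  text: newReview.text, date: newReview.date }],
        $sort: { date: -1 },
        $slice: 3   // keep only the three most recent
      }
    },
    $inc: { reviewCount: 1 }
  }
)
</pre>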
  <h2 id="3-the-bucket-pattern">3. The bucket pattern</h2>
  <p id="RprJ">Time-series and event data can generate enormous numbers of documents. If your IoT sensors send readings every second, that&#x27;s 86,400 documents per sensor per day. Storing each reading as an individual document creates index bloat and makes range queries slower than they need to be.</p>
  <p id="NYV1">The bucket pattern groups multiple data points into a single document based on a time range. Instead of one document per reading, you store one document per hour (or per minute, depending on your granularity).</p>
  <pre id="d371">// Without bucket pattern: one document per reading
{ sensorId: &quot;temp-01&quot;, value: 22.5, timestamp: ISODate(&quot;2026-02-08T10:00:00Z&quot;) }
{ sensorId: &quot;temp-01&quot;, value: 22.6, timestamp: ISODate(&quot;2026-02-08T10:00:01Z&quot;) }
{ sensorId: &quot;temp-01&quot;, value: 22.4, timestamp: ISODate(&quot;2026-02-08T10:00:02Z&quot;) }
// ... 86,397 more documents for this sensor today

// With bucket pattern: one document per hour
{
  sensorId: &quot;temp-01&quot;,
  startDate: ISODate(&quot;2026-02-08T10:00:00Z&quot;),
  endDate: ISODate(&quot;2026-02-08T10:59:59Z&quot;),
  count: 3600,
  readings: [
    { value: 22.5, timestamp: ISODate(&quot;2026-02-08T10:00:00Z&quot;) },
    { value: 22.6, timestamp: ISODate(&quot;2026-02-08T10:00:01Z&quot;) },
    { value: 22.4, timestamp: ISODate(&quot;2026-02-08T10:00:02Z&quot;) }
    // ... 3597 more readings
  ],
  summary: {
    avg: 22.5,
    min: 21.8,
    max: 23.1
  }
}
</pre>
  <p id="ZS4o">Benefits of the bucket pattern:</p>
  <ul id="eqWp">
    <li id="K1xE">Fewer documents means smaller indexes and faster queries</li>
    <li id="PV73">Pre-computed summaries (avg, min, max) avoid full scans for common aggregations</li>
    <li id="I4Lz">Range queries only touch a handful of bucket documents instead of thousands of individual ones</li>
    <li id="IgJq">Deleting old data is simpler since you drop entire bucket documents</li>
  </ul>
  <p id="FT8m">The bucket size depends on your access pattern. If most queries ask for hourly summaries, use hourly buckets. If users typically look at daily dashboards, daily buckets work better. The key is to match bucket granularity to how the data gets consumed.</p>
  <p id="hZ4Z">Note that MongoDB 5.0+ introduced native time series collections which handle some of this automatically. But the bucket pattern is still useful for custom aggregations and when you need pre-computed summaries stored alongside the raw data.</p>
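  <p id="bKt1">Writing into buckets is a single upsert per reading; the bucket document is created automatically on the first write of each hour (field values here are illustrative):</p>
  <pre id="bKt2">// Append a reading to the current hour&#x27;s bucket, creating it if missing
db.readings.updateOne(
  { sensorId: &quot;temp-01&quot;, startDate: ISODate(&quot;2026-02-08T10:00:00Z&quot;) },
  {
    $push: { readings: { value: 22.7, timestamp: new Date() } },
    $inc: { count: 1 },
    $setOnInsert: { endDate: ISODate(&quot;2026-02-08T10:59:59Z&quot;) }
  },
  { upsert: true }
)
</pre>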
  <h2 id="4-the-polymorphic-pattern">4. The polymorphic pattern</h2>
  <p id="p4a6">Not every document in a collection needs to look the same. The polymorphic pattern handles entities that share some common fields but differ in their details. Instead of creating separate collections for each variation, you store them all in one collection with a type field.</p>
  <p id="DzPs">A content management system is a good example. You might have articles, videos, podcasts and image galleries. They all have a title, author, publish date and tags. But an article has a body field, a video has a duration and URL, a podcast has an audio file and episode number.</p>
  <pre id="cgst">// Article
{
  _id: ObjectId(&quot;...&quot;),
  type: &quot;article&quot;,
  title: &quot;Getting started with MongoDB&quot;,
  author: &quot;Jane&quot;,
  publishDate: ISODate(&quot;2026-02-01&quot;),
  tags: [&quot;mongodb&quot;, &quot;tutorial&quot;],
  body: &quot;MongoDB is a document database...&quot;,
  wordCount: 1500
}

// Video
{
  _id: ObjectId(&quot;...&quot;),
  type: &quot;video&quot;,
  title: &quot;MongoDB schema design workshop&quot;,
  author: &quot;Jane&quot;,
  publishDate: ISODate(&quot;2026-02-05&quot;),
  tags: [&quot;mongodb&quot;, &quot;schema&quot;],
  videoUrl: &quot;https://example.com/videos/mongo-schema&quot;,
  duration: 2400,
  resolution: &quot;1080p&quot;
}

// Podcast
{
  _id: ObjectId(&quot;...&quot;),
  type: &quot;podcast&quot;,
  title: &quot;Database trends in 2026&quot;,
  author: &quot;Bob&quot;,
  publishDate: ISODate(&quot;2026-02-07&quot;),
  tags: [&quot;databases&quot;, &quot;trends&quot;],
  audioUrl: &quot;https://example.com/podcasts/db-trends&quot;,
  episodeNumber: 42,
  duration: 1800
}
</pre>
  <p id="MDCz">The advantage is that queries across all content types are simple. Want all content by Jane sorted by date? One query on one collection. Want only videos? Add a filter on the type field. The shared fields make indexing straightforward, and you can create partial indexes for type-specific fields.</p>
  <pre id="P6fQ">// Index for type-specific queries
db.content.createIndex({ type: 1, publishDate: -1 })

// Partial index only for videos
db.content.createIndex(
  { duration: 1 },
  { partialFilterExpression: { type: &quot;video&quot; } }
)
</pre>
  <p id="SaGq">This pattern works when the entities share enough common fields to justify a single collection and when you frequently query across types. If different types are always queried separately and share almost nothing, separate collections might be cleaner.</p>
  <h2 id="5-the-extended-reference-pattern">5. The extended reference pattern</h2>
  <p id="XgUV">When you reference data in another collection, sometimes you need a few fields from that referenced document on almost every read. The extended reference pattern copies those frequently-needed fields into the referencing document to avoid a second lookup.</p>
  <p id="lT0O">Consider an order system. Every order references a customer. When you display the order list, you need the customer name and email. Without the extended reference, every order list page requires a $lookup or a second query to the customers collection.</p>
  <pre id="RvPt">// Instead of just storing customerId
{
  _id: ObjectId(&quot;order1&quot;),
  customerId: ObjectId(&quot;cust1&quot;),
  items: [
    { product: &quot;Widget&quot;, quantity: 3, price: 9.99 }
  ],
  total: 29.97,
  orderDate: ISODate(&quot;2026-02-08&quot;)
}

// Store frequently-needed customer fields directly in the order
{
  _id: ObjectId(&quot;order1&quot;),
  customer: {
    _id: ObjectId(&quot;cust1&quot;),
    name: &quot;Alice Johnson&quot;,
    email: &quot;alice@example.com&quot;
  },
  items: [
    { product: &quot;Widget&quot;, quantity: 3, price: 9.99 }
  ],
  total: 29.97,
  orderDate: ISODate(&quot;2026-02-08&quot;)
}
</pre>
  <p id="gw4E">The trade-off is data staleness. If Alice changes her email, the orders still show the old one until you update them. For many use cases this is acceptable. An order should probably reflect the customer information at the time it was placed anyway.</p>
  <p id="IgpT">When to use the extended reference pattern:</p>
  <ul id="bV2A">
    <li id="2Wn5">The referenced fields are read frequently but updated rarely</li>
    <li id="3G0p">Join operations ($lookup) are causing performance issues</li>
    <li id="Ffrm">The copied fields are small relative to the document size</li>
    <li id="vKvO">Slight staleness in the copied data is acceptable</li>
  </ul>
  <p id="A7PE">This pattern is different from full embedding. You&#x27;re not copying the entire customer document into every order. You&#x27;re selectively copying only the fields that the most common queries need. The full customer record still lives in its own collection for detailed views and updates.</p>
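  <p id="xRf1">If you do decide the copied fields should track the source, the sync is a single <code>updateMany</code> whenever the customer record changes (a sketch; whether to run it at all is a product decision):</p>
  <pre id="xRf2">// Propagate an email change to every order that embeds this customer
db.orders.updateMany(
  { &quot;customer._id&quot;: ObjectId(&quot;cust1&quot;) },
  { $set: { &quot;customer.email&quot;: &quot;alice.new@example.com&quot; } }
)
</pre>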
  <h2 id="6-the-computed-pattern">6. The computed pattern</h2>
  <p id="sicO">Some values are expensive to calculate on the fly. If you&#x27;re counting the number of views on a video, computing the average rating from thousands of reviews or aggregating daily sales totals, doing that calculation on every read is wasteful.</p>
  <p id="2StV">The computed pattern pre-calculates these values and stores them in the document. You update them when the underlying data changes, not when someone reads the result.</p>
  <pre id="bH4v">// Product with pre-computed statistics
{
  _id: ObjectId(&quot;prod1&quot;),
  name: &quot;Wireless Headphones&quot;,
  price: 79.99,
  stats: {
    totalReviews: 487,
    averageRating: 4.3,
    ratingDistribution: {
      &quot;5&quot;: 203,
      &quot;4&quot;: 156,
      &quot;3&quot;: 78,
      &quot;2&quot;: 34,
      &quot;1&quot;: 16
    },
    totalSold: 2341,
    lastPurchaseDate: ISODate(&quot;2026-02-08T14:30:00Z&quot;)
  }
}
</pre>
  <p id="dm3z">When a new review comes in, you update the stats using atomic operations:</p>
  <pre id="rzlF">db.products.updateOne(
  { _id: ObjectId(&quot;prod1&quot;) },
  {
    $inc: {
      &quot;stats.totalReviews&quot;: 1,
      &quot;stats.ratingDistribution.4&quot;: 1
    },
    $set: {
      // new average computed by the application before the update
      &quot;stats.averageRating&quot;: 4.28
    }
  }
)
</pre>
  <p id="i3NC"><strong>Calculate on read:</strong></p>
  <ul id="QMiV">
    <li id="k3sz">Read cost: High (aggregation every time)</li>
    <li id="dEo8">Write cost: None</li>
    <li id="ntUi">Accuracy: Always current</li>
  </ul>
  <p id="fvkO"><strong>Computed pattern:</strong></p>
  <ul id="Anca">
    <li id="VwW0">Read cost: Low (pre-stored value)</li>
    <li id="92aX">Write cost: Low (incremental update)</li>
    <li id="KC3h">Accuracy: Eventually consistent</li>
  </ul>
  <p id="3d8n"><strong>Background job:</strong></p>
  <ul id="xew2">
    <li id="XsLm">Read cost: Low (pre-stored value)</li>
    <li id="mVRX">Write cost: Batch update on schedule</li>
    <li id="2MR5">Accuracy: Delayed</li>
  </ul>
  <p id="9f2i">The computed pattern is the right choice when reads vastly outnumber writes and the computation is non-trivial. Product ratings, follower counts, dashboard metrics and leaderboards are all good candidates.</p>
  <p id="owhg">For background computation jobs, you need reliable scheduling. If the computation updates stall because a cron job dies silently, your users see stale data indefinitely. Monitoring and alerting on these jobs matters.</p>
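  <p id="cPt1">A background job typically recomputes the stats from the source collection to correct any drift from incremental updates (collection and field names assumed from the earlier examples):</p>
  <pre id="cPt2">// Recompute review stats from scratch, then overwrite the stored values
const [stats] = db.reviews.aggregate([
  { $match: { productId: ObjectId(&quot;prod1&quot;) } },
  { $group: { _id: &quot;$productId&quot;, avg: { $avg: &quot;$rating&quot; }, total: { $sum: 1 } } }
]).toArray();

db.products.updateOne(
  { _id: stats._id },
  { $set: { &quot;stats.averageRating&quot;: stats.avg, &quot;stats.totalReviews&quot;: stats.total } }
)
</pre>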
  <h2 id="combining-patterns-in-practice">Combining patterns in practice</h2>
  <p id="32K6">Real applications rarely use a single pattern in isolation. A product catalog might use the subset pattern for reviews, the computed pattern for aggregate statistics, embedding for product variants and the extended reference pattern for category information. The patterns compose well.</p>
  <p id="Gjc0">The key principle behind all of them is the same: design your schema around your queries, not around your entities. In relational databases, you normalize first and optimize later. In MongoDB, you start by listing your most frequent queries and design the schema to serve those queries efficiently.</p>
  <p id="wV3H">Here are a few practical guidelines for combining patterns:</p>
  <ul id="6CYR">
    <li id="pE4Y"><strong>Start simple.</strong> Embed first. Only introduce references and patterns when you hit a specific problem like document size, update complexity or query performance.</li>
    <li id="GFk7"><strong>Know your read-to-write ratio.</strong> High-read workloads benefit from denormalization (embedding, computed, extended reference). High-write workloads favor normalization (referencing) to avoid updating data in multiple places.</li>
    <li id="nHwF"><strong>Monitor document growth.</strong> If a document&#x27;s embedded array keeps growing, apply the subset or bucket pattern before it becomes a problem.</li>
  </ul>
  <p id="Tdrh">As your MongoDB deployment grows, having reliable <a href="https://databasus.com/mongodb-backup" target="_blank">MongoDB backup</a> becomes critical. Schema changes and data migrations can go wrong, and recovering from a bad migration without a backup means data loss. Databasus, an industry-standard MongoDB backup tool, offers automated scheduled backups with compression, encryption and multiple storage destinations for both solo developers and enterprise teams.</p>
  <h2 id="choosing-the-right-pattern">Choosing the right pattern</h2>
  <p id="boGH">There&#x27;s no single correct schema for any application. The right choice depends on your query patterns, data volume, update frequency and consistency requirements. These six patterns cover the scenarios that come up most often in practice.</p>
  <p id="KC3o">Start with the simplest design that works. Add complexity only when you have evidence that the simple approach isn&#x27;t performing. Profile your queries, watch your document sizes and pay attention to how your data grows over time. The best schema is the one that makes your most common operations fast and your least common operations possible.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/mysql-vs-mariadb</guid><link>https://teletype.in/@rostislav_dugin/mysql-vs-mariadb?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/mysql-vs-mariadb?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>MariaDB vs MySQL — 8 reasons developers are switching in 2026</title><pubDate>Fri, 06 Feb 2026 21:08:36 GMT</pubDate><media:content medium="image" url="https://img3.teletype.in/files/aa/f9/aaf9a5d1-84c3-4da3-a42f-69095200be43.png"></media:content><description><![CDATA[<img src="https://img2.teletype.in/files/d7/bf/d7bf0df5-8d2b-42b9-8699-6d32966c152c.png"></img>MariaDB started as a fork of MySQL back in 2009 when Oracle acquired Sun Microsystems. At the time, people weren't sure if the fork would survive long-term or just become another abandoned open source project. Fast forward to 2026 and MariaDB has become a serious alternative that many developers now prefer over the original. This article looks at why.]]></description><content:encoded><![CDATA[
  <p id="7lsi">MariaDB started as a fork of MySQL back in 2009 when Oracle acquired Sun Microsystems. At the time, people weren&#x27;t sure if the fork would survive long-term or just become another abandoned open source project. Fast forward to 2026 and MariaDB has become a serious alternative that many developers now prefer over the original. This article looks at why.</p>
  <p id="xCCC">The split wasn&#x27;t just a copy. MariaDB took a different path on storage engines, performance optimization, licensing and community governance. Some of those decisions are paying off now, especially for teams that care about open source principles and technical independence.</p>
  <figure id="9fQe" class="m_original">
    <img src="https://img2.teletype.in/files/d7/bf/d7bf0df5-8d2b-42b9-8699-6d32966c152c.png" width="1536" />
    <figcaption>MySQL vs MariaDB</figcaption>
  </figure>
  <h2 id="1-truly-open-source-no-asterisks">1. Truly open source, no asterisks</h2>
  <p id="98Hg">The biggest reason developers switch to MariaDB is licensing clarity. MySQL uses a dual licensing model under Oracle. The Community Edition is GPL, but Oracle reserves certain features for MySQL Enterprise Edition, which requires a commercial license. Thread pool, audit plugins, advanced security features and some backup tools are locked behind that paywall.</p>
  <p id="bsXa">MariaDB is fully open source under GPL. Every feature ships in a single edition. There&#x27;s no &quot;enterprise only&quot; tier hiding the good stuff. What you download is what everyone gets.</p>
  <p id="gDqU">Key differences:</p>
  <ul id="lu7E">
    <li id="GtYM"><strong>License</strong>: MariaDB uses GPL v2 (fully open), while MySQL has GPL + Commercial dual license</li>
    <li id="4GN1"><strong>Enterprise features</strong>: MariaDB includes all features in one edition, MySQL locks some behind Enterprise</li>
    <li id="7fzK"><strong>Corporate owner</strong>: MariaDB Foundation is non-profit, MySQL is owned by Oracle Corporation</li>
    <li id="lNVG"><strong>Feature restrictions</strong>: MariaDB has none, MySQL restricts thread pool, audit log and more to Enterprise</li>
  </ul>
  <p id="HOaU">For companies doing compliance reviews or avoiding vendor lock-in, this difference alone can drive the decision. You don&#x27;t need to worry about Oracle changing terms or restricting features in a future release.</p>
  <h2 id="2-better-storage-engine-options">2. Better storage engine options</h2>
  <p id="VleD">MariaDB ships with storage engines that MySQL either doesn&#x27;t have or charges extra for. The most notable one is Aria, a crash-safe alternative to MyISAM. But the real story is about ColumnStore and the overall engine diversity.</p>
  <p id="fyJr">MariaDB ColumnStore provides columnar storage for analytical workloads. If you need to run reports or aggregations over large datasets alongside your transactional workload, ColumnStore handles that without requiring a separate analytical database. MySQL doesn&#x27;t have a built-in columnar engine.</p>
  <p id="f105">The default storage engine for both is InnoDB (or MariaDB&#x27;s fork of it), so basic compatibility isn&#x27;t an issue. But MariaDB&#x27;s InnoDB fork includes optimizations that aren&#x27;t in upstream MySQL InnoDB, particularly around buffer pool management and compression.</p>
  <p id="45PE">MariaDB also includes the S3 storage engine, which lets you archive old tables directly to S3-compatible object storage. That&#x27;s useful for keeping historical data accessible without eating local disk space. Try doing that natively with MySQL.</p>
  <p id="XO7t">For teams running mixed workloads or managing large datasets, MariaDB&#x27;s engine diversity is a practical advantage that saves you from bolting on third-party tools.</p>
  <h2 id="3-thread-pool-that-doesnt-cost-extra">3. Thread pool that doesn&#x27;t cost extra</h2>
  <p id="lQ7m">MySQL&#x27;s built-in thread pool is an Enterprise-only feature. The Community Edition uses a one-thread-per-connection model. Under heavy concurrency this causes performance degradation because the operating system spends more time context-switching between threads than doing actual work.</p>
  <p id="3lo5">MariaDB includes thread pooling in its open source edition. It handles thousands of concurrent connections efficiently by grouping them into a pool and processing them in batches. The performance difference shows up clearly when you have hundreds or thousands of simultaneous connections.</p>
  <p id="dBNk">This matters in practice. Web applications behind load balancers, microservice architectures with many small services connecting to the same database and serverless environments that create connections rapidly all benefit from thread pooling. With MySQL Community, you either accept the performance hit or pay for Enterprise.</p>
  <ul id="2zz1">
    <li id="IvuJ">MariaDB: Thread pool included, configurable, production-ready</li>
    <li id="IK0K">MySQL Community: One-thread-per-connection, no built-in pool</li>
    <li id="auts">MySQL Enterprise: Thread pool available, requires commercial license</li>
  </ul>
  <p id="w22s">For high-concurrency environments, this is not a minor difference. It directly affects response times and database stability under load.</p>
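  <p>Enabling the pool is a small config change. The values below are illustrative starting points, not tuned recommendations:</p>
  <pre># my.cnf — switch MariaDB to the pooled connection model
[mysqld]
thread_handling = pool-of-threads
thread_pool_size = 8            # roughly the number of CPU cores
thread_pool_max_threads = 500   # upper bound on threads in the pool
</pre>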
  <h2 id="4-oracle-free-governance">4. Oracle-free governance</h2>
  <p id="X7ph">MySQL development happens primarily inside Oracle. The roadmap is set internally, feature priorities are decided behind closed doors and external contributors have limited influence on the project&#x27;s direction. You can submit patches, but whether they get reviewed or merged depends on Oracle&#x27;s priorities.</p>
  <p id="52HE">MariaDB is governed by the MariaDB Foundation, a non-profit organization. Development happens in the open with public discussions, accessible roadmaps and meaningful community input. Multiple companies contribute to MariaDB, and no single entity controls its future.</p>
  <p id="bG7F">This isn&#x27;t just philosophical. Oracle has a track record of deprioritizing open source projects after acquisition. OpenSolaris, Hudson (now Jenkins after the fork) and Java&#x27;s open source trajectory all changed after Oracle got involved. MySQL hasn&#x27;t been abandoned, but features increasingly land in Enterprise Edition rather than Community.</p>
  <p id="LSV4">Developers who&#x27;ve been burned by corporate stewardship issues tend to prefer MariaDB&#x27;s governance model. It&#x27;s the same reason many prefer PostgreSQL over MySQL in general: community-driven projects are more predictable long-term.</p>
  <h2 id="5-faster-query-optimizer">5. Faster query optimizer</h2>
  <p id="YQqG">MariaDB&#x27;s query optimizer has diverged significantly from MySQL&#x27;s. Several optimizations that MariaDB implements are either absent from MySQL or arrived years later.</p>
  <p id="HH5z">Key optimizer improvements in MariaDB include:</p>
  <ul id="8cTf">
    <li id="QgNs"><strong>Subquery optimizations</strong>: MariaDB converts subqueries to joins more aggressively, which often dramatically improves query performance</li>
    <li id="B6Ka"><strong>Table elimination</strong>: If a joined table doesn&#x27;t contribute to the result, MariaDB removes it from the execution plan automatically</li>
    <li id="fPMz"><strong>Hash joins</strong>: MariaDB supported hash joins before MySQL added them</li>
    <li id="F25o"><strong>Condition pushdown</strong>: Pushes WHERE conditions closer to the data access layer for earlier filtering</li>
  </ul>
  <p id="6OR0">These aren&#x27;t benchmarketing tricks. They affect real queries that developers write every day. A complex reporting query with subqueries and multiple joins can run significantly faster on MariaDB without any query rewriting.</p>
  <p id="dYq8">That said, MySQL has been closing the gap. MySQL 8.0+ added hash joins and improved its optimizer. But MariaDB still tends to handle complex query patterns more efficiently, particularly when subqueries are involved.</p>
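  <p>Table elimination is easy to see with a hypothetical view (table names are illustrative): when a LEFT JOIN on a unique key contributes no selected columns, MariaDB can drop it from the plan entirely.</p>
  <pre>-- regions.id is the primary key, so each customer matches at most one region
CREATE VIEW customer_info AS
  SELECT c.id, c.name, r.region_name
  FROM customers c
  LEFT JOIN regions r ON r.id = c.region_id;

-- No column from regions is referenced, so the join can be eliminated.
-- Compare the EXPLAIN output on MariaDB vs MySQL.
EXPLAIN SELECT id, name FROM customer_info;
</pre>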
  <h2 id="6-smoother-replication-features">6. Smoother replication features</h2>
  <p id="cdpH">Both databases support replication, but MariaDB has added features that make replication management easier in production environments.</p>
  <p id="Ezmq">MariaDB uses Global Transaction IDs (GTIDs) that are simpler to work with than MySQL&#x27;s implementation. Switching a replica to follow a different primary is straightforward with MariaDB GTIDs. MySQL&#x27;s GTID implementation works but has quirks around purged transactions that can cause headaches during failover.</p>
  <p id="sebE">MariaDB also supports parallel replication with more granular control. You can configure how transactions are parallelized on replicas, which helps replicas keep up with high-write primaries. MySQL has parallel replication too, but MariaDB&#x27;s implementation gives operators more knobs to tune.</p>
  <p id="74KX">Replication feature comparison:</p>
  <ul id="3gEq">
    <li id="mKh4"><strong>GTID format</strong>: MariaDB uses domain-based GTIDs with simpler failover, MySQL uses UUID-based with purge complications</li>
    <li id="rNVl"><strong>Parallel replication</strong>: MariaDB has group commit based (configurable), MySQL uses logical clock based</li>
    <li id="Spqc"><strong>Multi-source replication</strong>: MariaDB supported since 10.0, MySQL added in 5.7</li>
    <li id="VWbI"><strong>Delayed replication</strong>: Both support it</li>
    <li id="qX4O"><strong>Replication filters</strong>: MariaDB offers more flexible options, MySQL is more limited</li>
  </ul>
  <p id="VtmL">For teams managing replicated setups across multiple datacenters or running read replicas at scale, MariaDB&#x27;s replication features reduce operational friction. The difference is most noticeable during failovers and topology changes.</p>
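  <p>Repointing a replica with MariaDB GTIDs is a short sequence (the host name is a placeholder, and the existing replication credentials are assumed to carry over):</p>
  <pre>-- On the replica: follow a new primary from the recorded GTID position
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = &#x27;new-primary.example.com&#x27;,
  MASTER_USE_GTID = slave_pos;
START SLAVE;
</pre>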
  <p id="Ab6j">Reliable backups are essential when running replicated databases. If a replication chain breaks or data gets corrupted, your last good backup is what saves you. Automated <a href="https://databasus.com/mysql-backup" target="_blank">MariaDB backup</a> tools like Databasus provide scheduled backups with encryption and multiple storage destinations; this kind of automated setup is the industry standard for MariaDB backup management.</p>
  <h2 id="7-temporal-tables-built-in">7. Temporal tables built in</h2>
  <p id="DA8K">MariaDB supports system-versioned temporal tables natively. This means the database automatically tracks the history of every row: when it was inserted, updated and deleted. You can query the state of any table at any point in time without writing audit triggers or maintaining history tables yourself.</p>
  <pre id="8XGT">-- Create a system-versioned table in MariaDB
CREATE TABLE products (
    id INT PRIMARY KEY,
    name VARCHAR(255),
    price DECIMAL(10,2)
) WITH SYSTEM VERSIONING;

-- Query historical state
SELECT * FROM products FOR SYSTEM_TIME AS OF &#x27;2026-01-15 10:00:00&#x27;;
</pre>
  <p id="jqWt">MySQL doesn&#x27;t have this feature. If you need historical data tracking in MySQL, you build it yourself with triggers, shadow tables and application logic. It works, but it&#x27;s tedious and error-prone.</p>
  <p id="DSws">Temporal tables are useful for audit requirements, regulatory compliance and debugging production issues. Being able to ask &quot;what did this row look like yesterday at 3 PM?&quot; without any application changes is genuinely powerful. Financial applications, healthcare systems and any application subject to regulatory audits benefit from this.</p>
  <h2 id="8-backward-compatibility-with-mysql">8. Backward compatibility with MySQL</h2>
  <p id="duHW">Here&#x27;s the practical part that makes switching feasible. MariaDB maintains wire protocol compatibility with MySQL. Most MySQL client libraries, ORMs and tools work with MariaDB without changes. Your application code, connection strings (with minor adjustments) and database drivers typically work as-is.</p>
  <p id="a7Fd">MariaDB can read MySQL data files for migration. The SQL syntax is almost entirely compatible. Stored procedures, views, triggers and most SQL features work identically. The differences are mostly in newer features that MariaDB added and MySQL either doesn&#x27;t have or implements differently.</p>
  <p id="Qn3h">Migration is not zero-effort, but it&#x27;s close for most applications. The typical process is:</p>
  <ul id="45WM">
    <li id="XTiD">Dump your MySQL database with mysqldump</li>
    <li id="37ZZ">Import into MariaDB</li>
    <li id="v1lI">Test your application against the new database</li>
    <li id="JnR0">Adjust any MySQL-specific syntax that doesn&#x27;t have a MariaDB equivalent (rare)</li>
  </ul>
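  <p>In shell terms, the first two steps look roughly like this (host names and credentials are placeholders):</p>
  <pre># Dump from MySQL, including stored routines and triggers
mysqldump -h old-mysql-host -u root -p --routines --triggers --all-databases &gt; dump.sql

# Load into MariaDB
mariadb -h new-mariadb-host -u root -p &lt; dump.sql
</pre>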
  <p id="Kqtk">The compatibility means you&#x27;re not starting over. You&#x27;re swapping the engine while the car is running, which is exactly how a database fork should work.</p>
  <h2 id="when-to-stay-with-mysql">When to stay with MySQL</h2>
  <p id="SgvE">MariaDB isn&#x27;t universally better. There are valid reasons to stick with MySQL.</p>
  <p id="Tk99">If your team already has deep MySQL expertise and established operational procedures, the switching cost might not be worth it. If you&#x27;re using MySQL-specific features like MySQL Shell, MySQL Router or Group Replication heavily, the MariaDB equivalents may not be drop-in replacements. Some cloud providers offer better managed MySQL support than MariaDB support, particularly AWS RDS where MySQL gets more attention.</p>
  <p id="rRi8">And if you&#x27;re running a simple application that works fine on MySQL Community, switching databases for theoretical benefits doesn&#x27;t make much sense. Solve real problems, not hypothetical ones.</p>
  <h2 id="making-the-switch">Making the switch</h2>
  <p id="vjVr">The trend is clear: MariaDB keeps adding features and maintaining openness while MySQL&#x27;s open source edition gets more constrained relative to Enterprise. For new projects, MariaDB is worth serious consideration. For existing MySQL deployments, switching makes sense when you&#x27;re hitting limitations that MariaDB addresses, whether that&#x27;s thread pooling, temporal tables, optimizer performance or licensing concerns.</p>
  <p id="KnMu">Both databases will continue to work for most applications. The question is which trajectory you&#x27;d rather be on. MariaDB is betting on open source and community-driven development. MySQL&#x27;s direction depends on Oracle&#x27;s priorities. For many developers, that distinction is enough.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/mysql-vs-postgresql-in-2026</guid><link>https://teletype.in/@rostislav_dugin/mysql-vs-postgresql-in-2026?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/mysql-vs-postgresql-in-2026?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>MySQL vs PostgreSQL in 2026 — 7 key differences you should know before choosing</title><pubDate>Thu, 05 Feb 2026 15:19:12 GMT</pubDate><media:content medium="image" url="https://img1.teletype.in/files/0c/87/0c87f856-08ba-4ded-8789-03bb61ca7dab.png"></media:content><description><![CDATA[<img src="https://img4.teletype.in/files/f7/87/f787b810-4aa8-4b1c-b1cc-4007ddf9ce9e.jpeg"></img>Choosing between MySQL and PostgreSQL isn't straightforward. Both are mature, production-ready databases used by companies of all sizes. But they solve problems differently and each has strengths that matter depending on your use case. This article breaks down the actual differences that affect day-to-day development and operations in 2026.]]></description><content:encoded><![CDATA[
  <p id="zjOG">Choosing between MySQL and PostgreSQL isn&#x27;t straightforward. Both are mature, production-ready databases used by companies of all sizes. But they solve problems differently and each has strengths that matter depending on your use case. This article breaks down the actual differences that affect day-to-day development and operations in 2026.</p>
  <p id="M9rb">The comparison covers:</p>
  <ul id="Ny1V">
    <li id="FOvL">SQL standards compliance and data integrity</li>
    <li id="Fh4c">JSON and document handling</li>
    <li id="ifUg">Replication approaches</li>
    <li id="KiX7">Licensing and ownership</li>
    <li id="274C">Performance characteristics</li>
    <li id="pD4o">Extension ecosystems</li>
    <li id="XaPO">Community and tooling</li>
  </ul>
  <figure id="cHMq" class="m_original">
    <img src="https://img4.teletype.in/files/f7/87/f787b810-4aa8-4b1c-b1cc-4007ddf9ce9e.jpeg" width="1024" />
    <figcaption>MySQL vs PostgreSQL</figcaption>
  </figure>
  <h2 id="1-sql-standards-compliance-and-data-integrity">1. SQL standards compliance and data integrity</h2>
  <p id="0qoS">PostgreSQL follows the SQL standard more strictly than MySQL. This matters more than it might seem at first. When PostgreSQL says a transaction is ACID compliant, it means it. MySQL has improved significantly over the years, but some default behaviors still surprise developers coming from other databases.</p>
  <p id="1IU2">PostgreSQL enforces data types strictly. If you try to insert a string into an integer column, it fails. MySQL historically performed silent type conversions, which could lead to data corruption. MySQL 8.0+ has strict mode enabled by default, but many existing installations still run with older, more permissive settings.</p>
  <p id="DcpL"><strong>Default SQL mode:</strong></p>
  <ul id="d6Sm">
    <li id="KCMP">PostgreSQL: Strict, standards-compliant</li>
    <li id="1jEN">MySQL: Strict in 8.0+, permissive in older versions</li>
  </ul>
  <p id="GFMv"><strong>Silent type conversions:</strong></p>
  <ul id="BR6t">
    <li id="J3Hh">PostgreSQL: Never</li>
    <li id="HMYe">MySQL: Depends on SQL mode</li>
  </ul>
  <p id="76rC"><strong>CHECK constraints:</strong></p>
  <ul id="Eq6Y">
    <li id="gfqX">PostgreSQL: Fully enforced</li>
    <li id="86Ko">MySQL: Enforced since MySQL 8.0.16</li>
  </ul>
  <p id="hpfG"><strong>Foreign key enforcement:</strong></p>
  <ul id="wAbY">
    <li id="NyyD">PostgreSQL: Always enforced</li>
    <li id="RA68">MySQL: Only with InnoDB (MyISAM parses but ignores them)</li>
  </ul>
  <p id="Vk9v"><strong>TRUNCATE in transactions:</strong></p>
  <ul id="MFRi">
    <li id="fS1F">PostgreSQL: Transactional</li>
    <li id="VJqN">MySQL: Not transactional</li>
  </ul>
  <p id="wnsT">For applications where data integrity is critical, PostgreSQL provides stronger guarantees out of the box. MySQL can be configured to behave similarly, but you need to verify your settings and understand which features require InnoDB specifically.</p>
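  <p>A quick sketch shows the difference, assuming a hypothetical table with an integer column:</p>
  <pre>CREATE TABLE users (id INT PRIMARY KEY, age INT);

-- PostgreSQL: fails immediately
INSERT INTO users (id, age) VALUES (1, &#x27;not a number&#x27;);
-- ERROR:  invalid input syntax for type integer

-- MySQL with a permissive sql_mode: inserts 0 and raises only a warning.
-- Opt into strict behavior explicitly:
SET SESSION sql_mode = &#x27;STRICT_ALL_TABLES&#x27;;
</pre>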
  <h2 id="2-json-and-document-handling">2. JSON and document handling</h2>
  <p id="4Ilj">Both databases support JSON, but they approach it differently. PostgreSQL has native JSONB type that stores JSON in a binary format with full indexing support. MySQL added JSON support in version 5.7 and has improved it since, but the implementation has limitations.</p>
  <p id="4Qy0">PostgreSQL&#x27;s JSONB allows you to create indexes on specific JSON paths, query nested structures efficiently and use JSON in complex queries alongside relational data. You can also use operators like <code>@&gt;</code> (contains) and <code>?</code> (key exists) that make JSON queries concise and readable.</p>
  <pre id="oQyL">-- PostgreSQL: Create index on JSON field
CREATE INDEX idx_users_metadata_country ON users USING GIN ((metadata-&gt;&#x27;country&#x27;));

-- PostgreSQL: Query with containment operator
SELECT * FROM users WHERE metadata @&gt; &#x27;{&quot;role&quot;: &quot;admin&quot;}&#x27;;
</pre>
  <p id="ZHyj">MySQL&#x27;s JSON functions work but feel more verbose. Indexing JSON in MySQL requires generated columns, which adds complexity:</p>
  <pre id="QhBx">-- MySQL: Requires generated column for indexing
ALTER TABLE users ADD COLUMN country VARCHAR(255) 
  GENERATED ALWAYS AS (JSON_UNQUOTE(metadata-&gt;&#x27;$.country&#x27;)) STORED;
CREATE INDEX idx_country ON users(country);
</pre>
  <p id="qIph">If your application heavily uses semi-structured data or you&#x27;re building something that mixes relational and document patterns, PostgreSQL handles this more elegantly. MySQL works fine for basic JSON storage and retrieval, but advanced querying gets awkward.</p>
  <h2 id="3-replication-and-high-availability">3. Replication and high availability</h2>
  <p id="4u01">Both databases support replication, but they use fundamentally different approaches. Understanding these differences matters when planning for high availability and read scaling.</p>
  <p id="RGH2">MySQL uses binary log replication. The primary server writes changes to a binary log, and replicas read from it. This approach is well-understood and has been battle-tested for decades. MySQL also supports Group Replication for multi-primary setups, though it comes with trade-offs around consistency.</p>
  <p id="t6Jy">PostgreSQL uses Write-Ahead Log (WAL) streaming replication. It&#x27;s conceptually similar but ships low-level WAL records rather than logged statements or row events. PostgreSQL&#x27;s logical replication (added in version 10) allows selective table replication and cross-version replication.</p>
  <ul id="oPcp">
    <li id="Jofk">MySQL binary replication is simpler to set up initially</li>
    <li id="IlcX">PostgreSQL logical replication offers more flexibility for complex topologies</li>
    <li id="Mxxy">MySQL Group Replication provides multi-primary but with consistency caveats</li>
    <li id="YSqM">PostgreSQL synchronous replication guarantees zero data loss at the cost of latency</li>
  </ul>
  <p id="kjWp">For most applications, both approaches work well. MySQL&#x27;s tooling ecosystem for replication is more mature, with tools like Orchestrator and ProxySQL being widely used. PostgreSQL&#x27;s tooling has caught up significantly with Patroni, pgBouncer and others.</p>
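  <p>Setting up selective replication in PostgreSQL takes a few statements. Table names and the connection string below are placeholders:</p>
  <pre>-- On the publisher: replicate only these tables
CREATE PUBLICATION orders_pub FOR TABLE orders, order_items;

-- On the subscriber:
CREATE SUBSCRIPTION orders_sub
  CONNECTION &#x27;host=primary.example.com dbname=shop user=repl&#x27;
  PUBLICATION orders_pub;
</pre>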
  <p id="z0Pv">Backup strategies differ too. Both support logical and physical backups, but the tools and workflows vary. For automated database backups with scheduling, encryption and multiple storage destinations, <a href="https://databasus.com/mysql-backup" target="_blank">MySQL backup</a> tools like Databasus handle both MySQL and PostgreSQL, providing a unified approach regardless of which database you choose.</p>
  <h2 id="4-licensing-and-corporate-ownership">4. Licensing and corporate ownership</h2>
  <p id="K9dS">This is often overlooked but increasingly important. MySQL is owned by Oracle. PostgreSQL is a community project with no single corporate owner.</p>
  <p id="oe99">MySQL uses a dual licensing model. The Community Edition is GPL-licensed, which means if you modify MySQL and distribute it, you must release your changes. Oracle also sells commercial licenses for those who want to avoid GPL obligations. Some MySQL forks exist (MariaDB, Percona Server) partly because of licensing concerns.</p>
  <p id="ciZK">PostgreSQL uses the PostgreSQL License, which is similar to BSD/MIT. You can do essentially anything with it, including building proprietary products without releasing source code. There&#x27;s no commercial entity that could change the terms or create uncertainty about the project&#x27;s direction.</p>
  <p id="jIW8"><strong>License:</strong></p>
  <ul id="Tc1y">
    <li id="oDJz">PostgreSQL: PostgreSQL License (BSD-like)</li>
    <li id="SfHJ">MySQL: GPL + Commercial</li>
  </ul>
  <p id="t5o4"><strong>Owner:</strong></p>
  <ul id="5cFW">
    <li id="1PDr">PostgreSQL: Community project</li>
    <li id="z3Tr">MySQL: Oracle Corporation</li>
  </ul>
  <p id="hWWz"><strong>Major forks:</strong></p>
  <ul id="xmn9">
    <li id="57qK">PostgreSQL: None needed</li>
    <li id="lx9Q">MySQL: MariaDB, Percona Server</li>
  </ul>
  <p id="N2zk"><strong>Feature restrictions:</strong></p>
  <ul id="WZKL">
    <li id="Uflq">PostgreSQL: None</li>
    <li id="v18F">MySQL: Some features in Enterprise only</li>
  </ul>
  <p id="v0Vx">For companies evaluating long-term risk, PostgreSQL&#x27;s licensing and governance model provides more predictability. Oracle has added features to MySQL Enterprise that aren&#x27;t in the Community Edition, and this trend could continue.</p>
  <h2 id="5-performance-characteristics">5. Performance characteristics</h2>
  <p id="y5wK">Performance comparisons are tricky because results depend heavily on workload type, hardware configuration and tuning. Both databases can handle millions of transactions per day when properly configured. But their performance profiles differ in important ways.</p>
  <p id="1FDn">PostgreSQL historically performed better on complex queries with many joins, subqueries and analytical operations. Its query planner is sophisticated and handles complicated query patterns well. The cost-based optimizer has decades of refinement.</p>
  <p id="Uopp">MySQL traditionally excelled at simple read-heavy workloads with straightforward queries. If your application does mostly primary key lookups and simple filters, MySQL can be extremely fast. The InnoDB storage engine is highly optimized for these patterns.</p>
  <p id="alVY">In 2026, both databases have narrowed these gaps. PostgreSQL 17 and 18 have improved simple query performance. MySQL 8.x has better handling of complex queries than earlier versions. The differences are less dramatic than they were five years ago.</p>
  <ul id="dzZx">
    <li id="XSZl">Read-heavy OLTP workloads: Both perform well, slight edge to MySQL</li>
    <li id="EWmq">Complex analytical queries: PostgreSQL generally faster</li>
    <li id="CZjp">Write-heavy workloads: Depends on transaction patterns and indexing</li>
    <li id="Xi68">Mixed workloads: PostgreSQL handles variety better</li>
  </ul>
  <p id="Ump9">The real performance factors are usually configuration, indexing and query design rather than the database engine choice. Both require tuning for production workloads. Neither works optimally with default settings.</p>
  <h2 id="6-extension-ecosystem">6. Extension ecosystem</h2>
  <p id="nTsa">PostgreSQL&#x27;s extension system is one of its biggest advantages. Extensions can add new data types, index types, functions and even modify core behavior. The ecosystem is rich and actively maintained.</p>
  <p id="57gU">Popular PostgreSQL extensions include:</p>
  <ul id="dFay">
    <li id="TdNG"><strong>PostGIS</strong> — Spatial and geographic data support</li>
    <li id="LSDi"><strong>pg_stat_statements</strong> — Query performance monitoring</li>
    <li id="GBra"><strong>TimescaleDB</strong> — Time-series data optimization</li>
    <li id="ziaf"><strong>Citus</strong> — Distributed PostgreSQL for horizontal scaling</li>
    <li id="L3IU"><strong>pgvector</strong> — Vector similarity search for AI applications</li>
  </ul>
  <p id="SA85">MySQL doesn&#x27;t have an equivalent extension system. Feature additions require either MySQL updates or forking the source code. Some functionality exists through plugins (like authentication plugins) but the scope is limited compared to PostgreSQL.</p>
  <p id="QRHG">This extensibility matters more than it might seem. If you need geographic queries, PostgreSQL with PostGIS is significantly better than trying to work around MySQL&#x27;s limited spatial support. If you&#x27;re building AI features that need vector search, pgvector is a mature solution while MySQL has no comparable option.</p>
  <p id="Wcxr">The extension ecosystem also means PostgreSQL can adapt to new use cases without waiting for core team priorities. When vector databases became important for AI applications, the community built pgvector. MySQL users had to wait for Oracle&#x27;s roadmap or use a separate database.</p>
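  <p>As a sketch of how little glue an extension needs (the table and values are illustrative, and pgvector must be installed on the server):</p>
  <pre>CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (id SERIAL PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES (&#x27;[1,2,3]&#x27;), (&#x27;[4,5,6]&#x27;);

-- Nearest neighbor by Euclidean distance
SELECT id FROM items ORDER BY embedding &lt;-&gt; &#x27;[3,1,2]&#x27; LIMIT 1;
</pre>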
  <h2 id="7-community-and-tooling">7. Community and tooling</h2>
  <p id="fA83">Both databases have active communities, but they feel different. MySQL&#x27;s community is larger in raw numbers but fragmented across MySQL, MariaDB and Percona variants. PostgreSQL&#x27;s community is more unified around a single codebase.</p>
  <p id="unq0">PostgreSQL&#x27;s development is transparent. Mailing lists are public, design discussions happen in the open and anyone can propose patches. The code review process is rigorous and the community has high standards for what gets merged. Release cycles are predictable: one major version per year.</p>
  <p id="890H">MySQL development is primarily done inside Oracle. While the code is open source, the roadmap and priorities are set internally. External contributions exist but aren&#x27;t as central to the project&#x27;s direction.</p>
  <p id="gnrm"><strong>Tool availability:</strong></p>
  <ul id="Z21B">
    <li id="Qm1a">GUI clients: Both have excellent options (pgAdmin, DBeaver, TablePlus for PostgreSQL; MySQL Workbench, DBeaver for MySQL)</li>
    <li id="yXnS">ORMs and drivers: Comprehensive support for both in all major languages</li>
    <li id="xR34">Cloud offerings: Both available as managed services (RDS, Cloud SQL, Azure Database)</li>
    <li id="cEmt">Monitoring tools: Mature options for both</li>
  </ul>
  <p id="P9k0">For backup and disaster recovery, the tooling landscape varies. Both support native dump tools (pg_dump, mysqldump), but their capabilities differ. For automated backup management, tools such as Databasus support both PostgreSQL and MySQL with unified scheduling, encryption and storage options.</p>
  <h2 id="making-the-choice">Making the choice</h2>
  <p id="wauY">There&#x27;s no universally correct answer. Both databases power successful applications at every scale. But certain patterns emerge when you look at typical use cases.</p>
  <p id="fFtS"><strong>Choose PostgreSQL when:</strong></p>
  <ul id="3kw0">
    <li id="ZCsy">You need advanced data types (JSONB, arrays, custom types)</li>
    <li id="enjW">Your queries are complex with many joins and subqueries</li>
    <li id="zsr7">Data integrity is non-negotiable</li>
    <li id="scBe">You want extensibility (PostGIS, pgvector, TimescaleDB)</li>
    <li id="pNFC">Licensing simplicity matters to your organization</li>
  </ul>
  <p id="kb3W"><strong>Choose MySQL when:</strong></p>
  <ul id="letj">
    <li id="19ED">Your workload is primarily simple CRUD operations</li>
    <li id="JrCS">You need maximum compatibility with existing tools and hosting</li>
    <li id="nsLr">Your team already has MySQL expertise</li>
    <li id="11z6">You&#x27;re building something that will run on shared hosting</li>
  </ul>
  <p id="HLB1">Neither choice is wrong for most applications. The differences matter most at the extremes: very complex analytical workloads, very high write volumes or specialized data types. For a typical web application, both will work fine with proper setup.</p>
  <p id="IioC">What actually matters is understanding whichever database you choose deeply enough to configure it properly, design your schema correctly and troubleshoot problems when they occur. A well-tuned MySQL installation will outperform a misconfigured PostgreSQL one, and vice versa.</p>
  <p id="Sob1">Start with whichever one you or your team knows better. Switch if you hit real limitations, not theoretical ones. Both databases have been solving real problems for decades and both will continue improving.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/postgresql-performance-tuning-tips</guid><link>https://teletype.in/@rostislav_dugin/postgresql-performance-tuning-tips?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/postgresql-performance-tuning-tips?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>10 PostgreSQL performance tuning tips that actually work in production</title><pubDate>Wed, 04 Feb 2026 20:03:50 GMT</pubDate><media:content medium="image" url="https://img3.teletype.in/files/20/71/20717b63-d50a-48a9-9937-93b6b8709e3d.png"></media:content><description><![CDATA[<img src="https://img3.teletype.in/files/2d/31/2d313353-324e-4fe5-b0d5-6f44895ff6be.jpeg"></img>Performance tuning isn't about following a checklist. It's about understanding what's actually slowing down your database and fixing those specific problems. These are techniques that consistently deliver real improvements in production environments. Some of them are obvious but frequently misconfigured. Others are less known but surprisingly effective.]]></description><content:encoded><![CDATA[
  <p id="vbfA">Performance tuning isn&#x27;t about following a checklist. It&#x27;s about understanding what&#x27;s actually slowing down your database and fixing those specific problems. These are techniques that consistently deliver real improvements in production environments. Some of them are obvious but frequently misconfigured. Others are less known but surprisingly effective.</p>
  <p id="Qk37">The tips in this article cover:</p>
  <ul id="bOzt">
    <li id="T5Qj">Memory configuration (shared_buffers, work_mem)</li>
    <li id="ZSWT">Index strategy and maintenance</li>
    <li id="SvgA">Connection management</li>
    <li id="bdL3">Vacuum and maintenance tuning</li>
    <li id="BtvY">Query optimization techniques</li>
  </ul>
  <figure id="qWSX" class="m_original">
    <img src="https://img3.teletype.in/files/2d/31/2d313353-324e-4fe5-b0d5-6f44895ff6be.jpeg" width="1024" />
    <figcaption>PostgreSQL performance tuning</figcaption>
  </figure>
  <h2 id="1-configure-shared_buffers-properly">1. Configure shared_buffers properly</h2>
  <p id="6Uss">PostgreSQL uses shared_buffers to cache frequently accessed data in memory. The default setting is usually way too low for production workloads. Setting this value correctly can dramatically reduce disk I/O and improve query performance.</p>
  <p id="SG0X">The general recommendation is to set shared_buffers to about 25% of your total system RAM. If you have 16 GB of RAM, start with 4 GB. If you&#x27;re on a dedicated database server with lots of memory, you can go higher, but there are diminishing returns above 8-10 GB because PostgreSQL also relies on the operating system&#x27;s file cache.</p>
  <pre id="pPox">-- In postgresql.conf
shared_buffers = 4GB
</pre>
  <p id="FYiv">After changing this setting, you need to restart PostgreSQL. Monitor your cache hit ratio to see if the change helped. A cache hit ratio above 99% is good. You can check it with:</p>
  <pre id="mCGc">SELECT
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit) as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;
</pre>
  <h2 id="2-tune-work_mem-for-complex-queries">2. Tune work_mem for complex queries</h2>
  <p id="xGgH">The work_mem setting controls how much memory PostgreSQL can use for internal sort operations and hash tables before it has to write to disk. If you&#x27;re running complex queries with sorts, joins or aggregations, increasing work_mem can prevent expensive disk operations.</p>
  <p id="M6wQ">Be careful though. work_mem is allocated per operation, not per query. A complex query with multiple sorts can use work_mem several times over. If you set it too high and have many concurrent queries, you can run out of memory.</p>
  <p id="dWvP">Start conservative. The default is usually 4 MB. Try 16-64 MB for analytical workloads. For specific heavy queries, you can increase it temporarily in the session:</p>
  <pre id="p4MC">SET work_mem = &#x27;256MB&#x27;;
SELECT * FROM large_table ORDER BY some_column;
RESET work_mem;
</pre>
  <p id="NX7s">Monitor with pg_stat_statements to see which queries are spilling to disk (in EXPLAIN ANALYZE output they show &quot;Sort Method: external merge&quot; with disk usage). Those are candidates for work_mem tuning.</p>
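<p>A sketch of that check, assuming the pg_stat_statements extension is enabled: queries that wrote temp blocks had to spill sorts or hashes to disk.</p>

```sql
-- Queries that spilled to disk are work_mem tuning candidates
SELECT query,
       calls,
       temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 10;
```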
  <h2 id="3-add-the-right-indexes">3. Add the right indexes</h2>
  <p id="E9rP">Indexes speed up reads but slow down writes. The trick is finding the right balance. Start by identifying slow queries using pg_stat_statements or your query logs. Look at queries with high execution time or high call counts.</p>
  <p id="PfDu">For most cases, B-tree indexes work well. Create indexes on columns used in WHERE clauses, JOIN conditions and ORDER BY statements. But don&#x27;t go overboard. Every index adds overhead during INSERTs and UPDATEs.</p>
  <p id="2XPj"><strong>Index types and their use cases:</strong></p>
  <ul id="6xCM">
    <li id="0OFi"><strong>B-tree</strong> — General purpose, equality and range queries. Most common scenarios, default choice.</li>
    <li id="AkF6"><strong>GIN</strong> — Full-text search, JSONB, arrays. Searching within complex data types.</li>
    <li id="LDKT"><strong>GiST</strong> — Geometric data, full-text search. Spatial queries, complex searches.</li>
    <li id="VdCp"><strong>BRIN</strong> — Very large tables with natural ordering. Time-series data, append-only tables.</li>
  </ul>
  <p id="9P9U">Use EXPLAIN ANALYZE to verify your indexes are actually being used:</p>
  <pre id="9d3J">EXPLAIN ANALYZE SELECT * FROM users WHERE email = &#x27;test@example.com&#x27;;
</pre>
  <p id="EjMi">If you see a Seq Scan when you expected an Index Scan, something&#x27;s wrong. Maybe the index doesn&#x27;t exist, or PostgreSQL thinks it&#x27;s not worth using (which happens on small tables or when selecting most of the table).</p>
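<p>For reference, creating the index types above takes one statement each (table and column names here are illustrative, not from a real schema):</p>

```sql
-- B-tree (the default) on a lookup column
CREATE INDEX idx_users_email ON users (email);

-- GIN for containment and key-existence queries on JSONB
CREATE INDEX idx_events_data ON events USING GIN (data);

-- BRIN for a large append-only table with natural time ordering
CREATE INDEX idx_logs_created ON logs USING BRIN (created_at);
```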
  <h2 id="4-use-connection-pooling">4. Use connection pooling</h2>
  <p id="1ya2">Every PostgreSQL connection has overhead. Opening and closing connections repeatedly wastes resources. If your application creates a new database connection for each request, you&#x27;re probably experiencing unnecessary latency and resource consumption.</p>
  <p id="lb4r">Connection poolers like PgBouncer sit between your application and PostgreSQL. They maintain a pool of connections and reuse them across multiple client requests. This reduces connection overhead significantly.</p>
  <p id="4jaO">PgBouncer supports three pooling modes:</p>
  <ul id="2e8R">
    <li id="UR7Z">Session pooling keeps a connection for the entire client session</li>
    <li id="rF4U">Transaction pooling releases connections after each transaction (more efficient for web apps)</li>
    <li id="opvD">Statement pooling releases after each statement (use with caution, limited functionality)</li>
  </ul>
  <p id="CAsM">For most web applications, transaction pooling works well. Install PgBouncer, point your application to it instead of directly to PostgreSQL and configure the pool size based on your workload. A good starting point is 2-3 connections per CPU core on your database server.</p>
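<p>A minimal pgbouncer.ini sketch for transaction pooling might look like this; the host, database name and pool sizes are placeholders to adapt to your setup:</p>

```ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp_production

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```

<p>Your application then connects to port 6432 instead of 5432; PgBouncer handles the rest.</p>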
  <h2 id="5-analyze-and-vacuum-regularly">5. Analyze and vacuum regularly</h2>
  <p id="QMRG">PostgreSQL uses MVCC (Multi-Version Concurrency Control) which creates row versions when you update or delete data. Over time, dead rows accumulate. VACUUM removes these dead rows and frees up space. ANALYZE updates statistics that the query planner uses to make decisions.</p>
  <p id="lEui">Modern PostgreSQL versions have autovacuum enabled by default, but it might not be aggressive enough for high-write workloads. If you&#x27;re seeing table bloat or degraded query performance over time, you probably need to tune autovacuum settings.</p>
  <pre id="xii5">-- In postgresql.conf
autovacuum_vacuum_scale_factor = 0.1  -- Vacuum when 10% of table is dead rows
autovacuum_analyze_scale_factor = 0.05  -- Analyze when 5% has changed
autovacuum_naptime = 30s  -- Check for work every 30 seconds
</pre>
  <p id="fDsd">For very active tables, you can also set table-specific settings:</p>
  <pre id="eo6W">ALTER TABLE your_busy_table SET (autovacuum_vacuum_scale_factor = 0.05);
</pre>
  <p id="A5LL">Check for bloat using queries from pg_stat_user_tables. If you see tables with high n_dead_tup, autovacuum isn&#x27;t keeping up.</p>
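<p>A quick version of that check as a sketch against pg_stat_user_tables, showing dead rows as a share of each table:</p>

```sql
-- Tables with the most dead rows; a high dead_pct suggests bloat
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(n_dead_tup * 100.0 / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```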
  <h2 id="6-optimize-your-queries">6. Optimize your queries</h2>
  <p id="5hb3">Sometimes the database configuration is fine, but the queries themselves are inefficient. Use EXPLAIN ANALYZE to understand query execution plans. Look for sequential scans on large tables, nested loops with high costs or sorts that spill to disk.</p>
  <p id="hjhP">Common query optimizations include:</p>
  <ul id="k6dN">
    <li id="W1OP">Adding WHERE clauses to filter data early</li>
    <li id="yxo3">Using JOIN instead of subqueries when appropriate</li>
    <li id="hgcP">Avoiding SELECT * and only fetching columns you need</li>
    <li id="LMbM">Using LIMIT when you don&#x27;t need all results</li>
    <li id="IaJr">Avoiding functions on indexed columns in WHERE clauses</li>
  </ul>
  <p id="DFLb">Here&#x27;s an example of a problematic query pattern:</p>
  <pre id="x8FF">-- Bad: Function on indexed column prevents index usage
SELECT * FROM orders WHERE EXTRACT(YEAR FROM created_at) = 2026;

-- Good: Range comparison allows index usage
SELECT * FROM orders WHERE created_at &gt;= &#x27;2026-01-01&#x27; AND created_at &lt; &#x27;2027-01-01&#x27;;
</pre>
  <p id="lo0Z">Also consider using prepared statements. They&#x27;re parsed and planned once, then executed multiple times with different parameters. This reduces overhead for frequently executed queries.</p>
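<p>At the SQL level, prepared statements look like this (most drivers do the equivalent for you automatically; the query itself is illustrative):</p>

```sql
-- Parse and plan once
PREPARE get_user (text) AS
    SELECT id, name FROM users WHERE email = $1;

-- Execute many times with different parameters
EXECUTE get_user('alice@example.com');
EXECUTE get_user('bob@example.com');

-- Drop it when done (also released automatically at session end)
DEALLOCATE get_user;
```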
  <h2 id="7-partition-large-tables">7. Partition large tables</h2>
  <p id="8uAY">If you have tables with millions or billions of rows, partitioning can improve performance and manageability. PostgreSQL&#x27;s declarative partitioning splits a large table into smaller physical pieces based on ranges, lists or hash values.</p>
  <p id="JYbn">Time-based partitioning is common for logs or event data. You create partitions by month or year, and older partitions can be archived or dropped easily. Queries that filter by the partition key only scan relevant partitions, not the entire table.</p>
  <pre id="XzYI">CREATE TABLE events (
    id BIGSERIAL,
    event_type TEXT,
    created_at TIMESTAMPTZ,
    data JSONB
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2026_01 PARTITION OF events
    FOR VALUES FROM (&#x27;2026-01-01&#x27;) TO (&#x27;2026-02-01&#x27;);

CREATE TABLE events_2026_02 PARTITION OF events
    FOR VALUES FROM (&#x27;2026-02-01&#x27;) TO (&#x27;2026-03-01&#x27;);
</pre>
  <p id="kzsv">Partitioning also makes backups more manageable. Instead of backing up one massive table, you can backup or restore individual partitions. Tools like <a href="https://databasus.com/" target="_blank">PostgreSQL backup</a> handle partitioned tables automatically, treating each partition appropriately during backup and restore operations.</p>
  <h2 id="8-enable-query-logging-for-slow-queries">8. Enable query logging for slow queries</h2>
  <p id="xpty">You can&#x27;t optimize what you can&#x27;t measure. PostgreSQL&#x27;s slow query log captures queries that exceed a specified duration. This helps you identify problematic queries in production without impacting performance significantly.</p>
  <pre id="mSnt">-- In postgresql.conf
log_min_duration_statement = 1000  -- Log queries taking more than 1 second
log_line_prefix = &#x27;%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h &#x27;
</pre>
  <p id="evHY">The log will show you the full query text, execution time and context. Combine this with pg_stat_statements for aggregated statistics across all queries. You&#x27;ll quickly see which queries are consuming the most resources.</p>
  <p id="LCsg">For production systems, start with a higher threshold (1-5 seconds) to avoid excessive logging. Once you&#x27;ve optimized the obvious slow queries, you can lower it to catch smaller issues.</p>
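<p>For the aggregated view, a sketch of a pg_stat_statements query ranking queries by cumulative time (column names follow PostgreSQL 13+; older versions use total_time instead of total_exec_time):</p>

```sql
-- Top queries by cumulative execution time
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```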
  <h2 id="9-use-read-replicas-for-reporting-workloads">9. Use read replicas for reporting workloads</h2>
  <p id="BReR">If you&#x27;re running heavy analytical queries or reports on your primary database, they can interfere with transactional workloads. Read replicas solve this by offloading read-only queries to separate servers.</p>
  <p id="WngE">PostgreSQL&#x27;s streaming replication creates one or more standby servers that continuously apply changes from the primary. Your application can send SELECT queries to these replicas, distributing the load.</p>
  <p id="w0Cr">Setting up replication requires some configuration but it&#x27;s straightforward:</p>
  <p id="Ain2"><strong>Primary server configuration:</strong></p>
  <ul id="TalD">
    <li id="zKw3"><code>wal_level</code> — Set to replica or logical</li>
    <li id="FzDC"><code>max_wal_senders</code> — Set to number of replicas + 1</li>
  </ul>
  <p id="TapM"><strong>Replica server configuration:</strong></p>
  <ul id="484y">
    <li id="G4Cn"><code>hot_standby</code> — Set to on</li>
  </ul>
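<p>Put together, a minimal sketch of those settings; the values are starting points, not tuned recommendations:</p>

```ini
# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 3

# postgresql.conf on the replica (on by default in modern versions)
hot_standby = on
```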
  <p id="rSTc">The replica will lag slightly behind the primary (typically milliseconds to seconds). If your application can tolerate this, replicas are a cheap way to scale read capacity.</p>
  <p id="irWl">You can also use replicas for backup purposes. Taking backups from a replica instead of the primary reduces load on your production database.</p>
  <h2 id="10-monitor-and-adjust-autovacuum-costs">10. Monitor and adjust autovacuum costs</h2>
  <p id="dccH">Autovacuum runs in the background to clean up dead rows, but it can consume I/O and CPU resources. If autovacuum runs too aggressively, it can slow down your application queries. If it doesn&#x27;t run enough, tables bloat and performance degrades.</p>
  <p id="Qlgw">The cost-based vacuum delay system controls how aggressively autovacuum uses resources. By default, it&#x27;s fairly conservative. On modern hardware with SSDs, you can usually make it more aggressive:</p>
  <pre id="1KS3">-- In postgresql.conf
autovacuum_vacuum_cost_delay = 2ms  -- Lower = faster vacuum
autovacuum_vacuum_cost_limit = 2000  -- Higher = more work per cycle
</pre>
  <p id="W1Ia">For specific high-write tables, you might need custom settings:</p>
  <pre id="na9g">ALTER TABLE busy_table SET (autovacuum_vacuum_cost_delay = 0);
</pre>
  <p id="FhVm">Setting cost_delay to 0 removes throttling entirely for that table. Use this carefully and monitor I/O.</p>
  <p id="tnaz">Watch the pg_stat_all_tables view for tables where autovacuum is falling behind (last_autovacuum is old and n_dead_tup is high). Those tables need tuning.</p>
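<p>A sketch of that check; the 10,000-row threshold is arbitrary, so adjust it to your table sizes:</p>

```sql
-- Tables where autovacuum may be falling behind
SELECT relname,
       n_dead_tup,
       last_autovacuum,
       last_vacuum
FROM pg_stat_all_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;
```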
  <h2 id="putting-it-all-together">Putting it all together</h2>
  <p id="7HoT">Performance tuning is iterative. Start by measuring your current state with pg_stat_statements and query logs. Identify the biggest bottlenecks first. A few slow queries might account for 80% of your database load.</p>
  <p id="wuHA">Apply one change at a time and measure the results. What works for one workload might not work for another. OLTP systems (lots of small transactions) need different tuning than OLAP systems (complex analytical queries).</p>
  <p id="UoQB">Before making any changes, establish a baseline:</p>
  <ul id="heuD">
    <li id="5ftX">Current query response times (p50, p95, p99)</li>
    <li id="0pEw">Cache hit ratio and buffer usage</li>
    <li id="lAdo">Connection counts and wait times</li>
    <li id="Cdxe">Disk I/O and CPU utilization</li>
  </ul>
  <p id="OjcN">Keep your PostgreSQL version updated. Each release includes performance improvements and better defaults. PostgreSQL 17 and 18 have significantly better query planning and execution than older versions.</p>
  <p id="XZ20"><strong>Tuning priority guide:</strong></p>
  <ul id="nYKI">
    <li id="ZwTL"><strong>Indexes</strong> — High impact, low difficulty. Do it early, based on query patterns.</li>
    <li id="p6i6"><strong>shared_buffers</strong> — High impact, low difficulty. Configure during initial setup.</li>
    <li id="PDZJ"><strong>Connection pooling</strong> — High impact, medium difficulty. Implement when connections become bottleneck.</li>
    <li id="Hp2M"><strong>Partitioning</strong> — Medium impact, high difficulty. Consider when tables exceed 50-100 million rows.</li>
    <li id="lHuQ"><strong>Autovacuum tuning</strong> — Medium impact, medium difficulty. Adjust when seeing table bloat.</li>
    <li id="EpJQ"><strong>Read replicas</strong> — High impact, high difficulty. Deploy when reads exceed write capacity.</li>
  </ul>
  <p id="RmYQ">And remember: backups don&#x27;t fix performance problems, but they let you experiment safely. Before making major changes, ensure you have reliable backups. Tools like Databasus automate PostgreSQL backups with flexible scheduling and multiple storage options, for both small projects and large enterprises.</p>
  <p id="ep9b">These tuning techniques work because they address real bottlenecks: memory usage, disk I/O, connection overhead and query efficiency. Apply them based on your specific bottlenecks, not just because they&#x27;re on a list.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/postgresql-scheduled-backups</guid><link>https://teletype.in/@rostislav_dugin/postgresql-scheduled-backups?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/postgresql-scheduled-backups?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>PostgreSQL automated backups — How to set up automated PostgreSQL backup schedules</title><pubDate>Tue, 03 Feb 2026 18:46:58 GMT</pubDate><media:content medium="image" url="https://img4.teletype.in/files/b7/b8/b7b80c67-f711-4a03-a43b-0f89343e0f87.png"></media:content><description><![CDATA[<img src="https://img4.teletype.in/files/3e/3e/3e3e34fe-0173-4d99-85d5-5b9c89b4e73d.jpeg"></img>Losing data hurts. Whether it's a corrupted disk, accidental deletion, or a bad deployment that wipes your production database, recovery without backups means starting from scratch. Automated PostgreSQL backups remove the human factor from the equation. You set them up once, and they run reliably while you focus on other things.]]></description><content:encoded><![CDATA[
  <p id="PMb4">Losing data hurts. Whether it&#x27;s a corrupted disk, accidental deletion, or a bad deployment that wipes your production database, recovery without backups means starting from scratch. Automated PostgreSQL backups remove the human factor from the equation. You set them up once, and they run reliably while you focus on other things.</p>
  <p id="X45j">This guide covers practical approaches to scheduling PostgreSQL backups, from simple cron jobs to dedicated backup tools. We&#x27;ll look at what actually matters for different scenarios and how to avoid common mistakes that make backups useless when you need them most.</p>
  <figure id="OUCo" class="m_original">
    <img src="https://img4.teletype.in/files/3e/3e/3e3e34fe-0173-4d99-85d5-5b9c89b4e73d.jpeg" width="1024" />
    <figcaption>PostgreSQL scheduled backups</figcaption>
  </figure>
  <h2 id="why-automate-postgresql-backups">Why automate PostgreSQL backups</h2>
  <p id="dsqm">Manual backups work until they don&#x27;t. Someone forgets, someone&#x27;s on vacation, someone assumes the other person did it. Automation eliminates these failure modes.</p>
  <h3 id="the-cost-of-manual-backup-processes">The cost of manual backup processes</h3>
  <p id="qDHw">Manual processes introduce variability. One day you run the backup at 2 AM, the next week at 6 PM. Sometimes you compress the output, sometimes you don&#x27;t. The backup script lives on someone&#x27;s laptop instead of version control. When disaster strikes, you discover the last backup was three weeks ago and nobody noticed.</p>
  <p id="uOe8">Automated backups run consistently. Same time, same configuration, same destination. They either succeed or they alert you immediately. There&#x27;s no ambiguity about whether yesterday&#x27;s backup happened.</p>
  <h3 id="what-good-backup-automation-looks-like">What good backup automation looks like</h3>
  <p id="WC20">Reliable backup automation has a few key characteristics. It runs without intervention once configured. It stores backups in locations separate from the source database. It notifies you of failures immediately. And it maintains enough historical backups to recover from problems you discover days or weeks later.</p>
  <p id="LERF"><strong>Comparison: Manual vs Automated Processes</strong></p>
  <ul id="hfv4">
    <li id="RZlL"><strong>Consistency:</strong> Manual process varies by person, automated process is the same every time</li>
    <li id="rjRh"><strong>Coverage:</strong> Manual process often has gaps, automated process provides continuous coverage</li>
    <li id="Fvay"><strong>Failure detection:</strong> Manual process often has delayed detection, automated process provides immediate alerts</li>
    <li id="W5eq"><strong>Documentation:</strong> Manual process usually has missing documentation, automated process has documentation built into config</li>
  </ul>
  <p id="GFZ5">Good automation also handles retention. You don&#x27;t want unlimited backups consuming storage forever, but you do want enough history to recover from slow-developing problems like data corruption that goes unnoticed for a week.</p>
  <h2 id="using-pg_dump-with-cron">Using pg_dump with cron</h2>
  <p id="e1XX">The simplest automation approach combines PostgreSQL&#x27;s native <code>pg_dump</code> utility with cron scheduling. This works for small to medium databases where backup windows aren&#x27;t tight.</p>
  <h3 id="basic-pg_dump-script">Basic pg_dump script</h3>
  <p id="dqx4">Create a backup script that handles the actual dump process:</p>
  <pre id="9f6R">#!/bin/bash

# Without pipefail, $? below would only reflect gzip&#x27;s exit code,
# so a failed pg_dump could go unnoticed
set -o pipefail

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=&quot;/var/backups/postgresql&quot;
DATABASE=&quot;myapp_production&quot;
BACKUP_FILE=&quot;${BACKUP_DIR}/${DATABASE}_${TIMESTAMP}.sql.gz&quot;

# Create backup directory if it doesn&#x27;t exist
mkdir -p &quot;$BACKUP_DIR&quot;

# Run pg_dump with compression
pg_dump -h localhost -U postgres -d &quot;$DATABASE&quot; | gzip &gt; &quot;$BACKUP_FILE&quot;

# Check if backup succeeded
if [ $? -eq 0 ]; then
    echo &quot;Backup completed: $BACKUP_FILE&quot;
else
    echo &quot;Backup failed!&quot; &gt;&amp;2
    exit 1
fi

# Remove backups older than 7 days
find &quot;$BACKUP_DIR&quot; -name &quot;*.sql.gz&quot; -mtime +7 -delete
</pre>
  <p id="fGpx">Save this as <code>/usr/local/bin/pg-backup.sh</code> and make it executable:</p>
  <pre id="shEB">chmod +x /usr/local/bin/pg-backup.sh
</pre>
  <p id="qVab">The script creates timestamped, compressed backups and removes old ones automatically. <code>gzip</code> compression typically reduces dump size by 80-90%.</p>
  <h3 id="setting-up-cron-schedules">Setting up cron schedules</h3>
  <p id="oDng">Add a cron entry to run the backup at your preferred time. Edit the crontab:</p>
  <pre id="IgsE">crontab -e
</pre>
  <p id="jA5H">Add a line for daily backups at 3 AM:</p>
  <pre id="iGrE">0 3 * * * /usr/local/bin/pg-backup.sh &gt;&gt; /var/log/pg-backup.log 2&gt;&amp;1
</pre>
  <p id="EZL0">For hourly backups during business hours:</p>
  <pre id="HfWX">0 9-18 * * 1-5 /usr/local/bin/pg-backup.sh &gt;&gt; /var/log/pg-backup.log 2&gt;&amp;1
</pre>
  <p id="ie52">The log redirect captures both stdout and stderr, so you can troubleshoot failures.</p>
  <h3 id="handling-authentication">Handling authentication</h3>
  <p id="1eka">Avoid putting passwords in scripts. Use a <code>.pgpass</code> file instead:</p>
  <pre id="QpYy">echo &quot;localhost:5432:myapp_production:postgres:yourpassword&quot; &gt;&gt; ~/.pgpass
chmod 600 ~/.pgpass
</pre>
  <p id="wSr9">PostgreSQL reads credentials from this file automatically when the connection parameters match. The strict permissions (600) are required; PostgreSQL ignores the file if others can read it.</p>
  <p id="0pap">Cron jobs run with a minimal environment rather than your full shell setup, so use absolute paths and explicit credentials in the script. This basic approach works, but you&#x27;ll want monitoring to know when backups fail.</p>
  <h2 id="adding-monitoring-and-alerts">Adding monitoring and alerts</h2>
  <p id="Cy01">A backup that fails silently is worse than no backup at all. You think you&#x27;re protected, but you&#x27;re not. Add monitoring to catch problems early.</p>
  <h3 id="email-notifications">Email notifications</h3>
  <p id="XZe5">Modify the backup script to send email on failure:</p>
  <pre id="VzWb">#!/bin/bash

# Without pipefail, $? below would only reflect gzip&#x27;s exit code
set -o pipefail

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=&quot;/var/backups/postgresql&quot;
DATABASE=&quot;myapp_production&quot;
BACKUP_FILE=&quot;${BACKUP_DIR}/${DATABASE}_${TIMESTAMP}.sql.gz&quot;
ADMIN_EMAIL=&quot;admin@example.com&quot;

mkdir -p &quot;$BACKUP_DIR&quot;

pg_dump -h localhost -U postgres -d &quot;$DATABASE&quot; | gzip &gt; &quot;$BACKUP_FILE&quot;

if [ $? -eq 0 ]; then
    echo &quot;Backup completed: $BACKUP_FILE&quot;
else
    echo &quot;PostgreSQL backup failed at $(date)&quot; | mail -s &quot;ALERT: Database backup failed&quot; &quot;$ADMIN_EMAIL&quot;
    exit 1
fi

find &quot;$BACKUP_DIR&quot; -name &quot;*.sql.gz&quot; -mtime +7 -delete
</pre>
  <p id="XtwD">This sends an email when <code>pg_dump</code> returns a non-zero exit code. You might also want success notifications for critical databases, just to confirm everything&#x27;s working.</p>
  <h3 id="webhook-integration">Webhook integration</h3>
  <p id="7dzk">For team chat notifications, curl to a webhook:</p>
  <pre id="K8J8">send_notification() {
    local message=&quot;$1&quot;
    local webhook_url=&quot;https://hooks.slack.com/services/YOUR/WEBHOOK/URL&quot;

    curl -s -X POST -H &#x27;Content-type: application/json&#x27; \
        --data &quot;{\&quot;text\&quot;:\&quot;$message\&quot;}&quot; \
        &quot;$webhook_url&quot;
}

# Place this right after the pg_dump pipeline, while $? still holds its result
if [ $? -eq 0 ]; then
    send_notification &quot;PostgreSQL backup completed: $DATABASE&quot;
else
    send_notification &quot;ALERT: PostgreSQL backup failed for $DATABASE&quot;
    exit 1
fi
</pre>
  <p id="qCWf">Replace the webhook URL with your Slack, Discord, or other service endpoint. Most chat platforms accept this basic JSON format.</p>
  <h3 id="verifying-backup-integrity">Verifying backup integrity</h3>
  <p id="y6Hw">A backup file existing doesn&#x27;t mean it&#x27;s usable. Add verification steps:</p>
  <pre id="AnUE"># Check file size (should be at least some minimum)
MIN_SIZE=1000
FILE_SIZE=$(stat -f%z &quot;$BACKUP_FILE&quot; 2&gt;/dev/null || stat -c%s &quot;$BACKUP_FILE&quot;)

if [ &quot;$FILE_SIZE&quot; -lt &quot;$MIN_SIZE&quot; ]; then
    send_notification &quot;WARNING: Backup file suspiciously small ($FILE_SIZE bytes)&quot;
fi

# Verify gzip integrity
if ! gzip -t &quot;$BACKUP_FILE&quot; 2&gt;/dev/null; then
    send_notification &quot;ALERT: Backup file appears corrupted&quot;
    exit 1
fi
</pre>
  <p id="hjoh">The size check catches cases where the database connection failed but the script didn&#x27;t error properly. The gzip test verifies the compression is intact.</p>
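<p>You can exercise the same two checks end to end without a database. This standalone sketch builds a small compressed file and verifies it the way the backup script would; the temp path and minimum size are illustrative:</p>

```shell
# Create a sample compressed "backup" to verify against
BACKUP_FILE=$(mktemp /tmp/pg_backup_test.XXXXXX)
printf 'SELECT 1;\n' | gzip > "$BACKUP_FILE"

# Integrity check: gzip -t exits non-zero on corruption
if gzip -t "$BACKUP_FILE" 2>/dev/null; then
    echo "integrity ok"
else
    echo "integrity FAILED"
fi

# Size check: flag suspiciously small files (threshold is illustrative)
MIN_SIZE=10
FILE_SIZE=$(stat -c%s "$BACKUP_FILE" 2>/dev/null || stat -f%z "$BACKUP_FILE")
if [ "$FILE_SIZE" -ge "$MIN_SIZE" ]; then
    echo "size ok ($FILE_SIZE bytes)"
fi

rm -f "$BACKUP_FILE"
```

<p>Corrupt the file (for example, truncate it) and <code>gzip -t</code> will fail, which is exactly the condition the alert should fire on.</p>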
  <h2 id="remote-storage-for-backups">Remote storage for backups</h2>
  <p id="6ksu">Backups stored on the same server as the database don&#x27;t protect against disk failures, server compromises, or datacenter issues. Store copies remotely.</p>
  <h3 id="s3-and-compatible-storage">S3 and compatible storage</h3>
  <p id="o7Iy">Add S3 upload to your backup script:</p>
  <pre id="9JT9">BUCKET=&quot;s3://my-backup-bucket/postgresql&quot;

# Upload to S3
aws s3 cp &quot;$BACKUP_FILE&quot; &quot;$BUCKET/&quot; --storage-class STANDARD_IA

if [ $? -ne 0 ]; then
    send_notification &quot;ALERT: S3 upload failed for $DATABASE backup&quot;
    exit 1
fi

# Optionally remove local file after successful upload
# rm &quot;$BACKUP_FILE&quot;
</pre>
  <p id="zkVm">The <code>STANDARD_IA</code> storage class costs less for infrequently accessed files like backups. Configure the AWS CLI with <code>aws configure</code> before running the script.</p>
  <p id="xxGX">For S3-compatible services like Cloudflare R2 or MinIO, add the endpoint:</p>
  <pre id="XVch">aws s3 cp &quot;$BACKUP_FILE&quot; &quot;$BUCKET/&quot; --endpoint-url https://your-endpoint.com
</pre>
  <h3 id="retention-policies">Retention policies</h3>
  <p id="AMSo">Remote storage should have its own retention rules. S3 lifecycle policies can automatically expire old backups:</p>
  <pre id="2W51">{
  &quot;Rules&quot;: [
    {
      &quot;ID&quot;: &quot;ExpireOldBackups&quot;,
      &quot;Status&quot;: &quot;Enabled&quot;,
      &quot;Filter&quot;: { &quot;Prefix&quot;: &quot;postgresql/&quot; },
      &quot;Expiration&quot;: { &quot;Days&quot;: 30 }
    }
  ]
}
</pre>
  <p id="7c06">Apply with:</p>
  <pre id="JIcN">aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration file://lifecycle.json
</pre>
  <p id="TNwK">This keeps 30 days of backups automatically. Adjust based on your recovery requirements.</p>
  <h2 id="automated-backups-with-databasus">Automated backups with Databasus</h2>
  <p id="Hmgz">Writing and maintaining backup scripts takes time. Monitoring, remote storage integration, retention management, and team notifications all add complexity. Databasus, a dedicated tool for <a href="https://databasus.com/" target="_blank">PostgreSQL backup</a>, handles this out of the box with a web interface.</p>
  <h3 id="installation">Installation</h3>
  <p id="2r6H">Run Databasus using Docker:</p>
  <pre id="w1QF">docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest
</pre>
  <p id="lqkV">Or with Docker Compose:</p>
  <pre id="r6jC">services:
  databasus:
    image: databasus/databasus:latest
    container_name: databasus
    ports:
      - &quot;4005:4005&quot;
    volumes:
      - databasus-data:/databasus-data
    restart: unless-stopped

volumes:
  databasus-data:
</pre>
  <p id="jIyn">Start the service:</p>
  <pre id="OQe0">docker compose up -d
</pre>
  <h3 id="configuration-steps">Configuration steps</h3>
  <p id="gvNb">Access the web interface at <code>http://your-server:4005</code>, then:</p>
  <ol id="IEPj">
    <li id="L0xv"><strong>Add your database</strong> — Click &quot;New Database&quot;, select PostgreSQL, and enter your connection details (host, port, database name, credentials)</li>
    <li id="Pzad"><strong>Select storage</strong> — Choose where backups should go: local storage, S3, Google Drive, SFTP, or other supported destinations</li>
    <li id="JT58"><strong>Select schedule</strong> — Pick a backup frequency: hourly, daily, weekly, monthly, or define a custom cron expression</li>
    <li id="a3UV"><strong>Click &quot;Create backup&quot;</strong> — Databasus validates the configuration and starts the backup schedule</li>
  </ol>
  <p id="grCj">Databasus handles compression automatically, supports multiple notification channels (Slack, Discord, Telegram, email), and provides a dashboard showing backup history and status. It works for both self-hosted PostgreSQL and cloud-managed databases like AWS RDS and Google Cloud SQL.</p>
  <h2 id="choosing-backup-frequency">Choosing backup frequency</h2>
  <p id="ranT">How often you back up depends on how much data you can afford to lose. This is your Recovery Point Objective (RPO).</p>
  <h3 id="matching-frequency-to-requirements">Matching frequency to requirements</h3>
  <p id="yaGU"><strong>Backup Frequency Recommendations</strong></p>
  <ul id="r3yd">
    <li id="DtlF"><strong>Development database:</strong> Acceptable data loss: Days → Recommended frequency: Weekly</li>
    <li id="uRkc"><strong>Internal tools:</strong> Acceptable data loss: Hours → Recommended frequency: Daily</li>
    <li id="l2Zl"><strong>Customer-facing app:</strong> Acceptable data loss: Minutes to an hour → Recommended frequency: Hourly</li>
    <li id="77Ks"><strong>Financial/compliance:</strong> Acceptable data loss: Near zero → Recommended frequency: Continuous (WAL archiving)</li>
  </ul>
  <p id="JDoL">For most applications, daily backups at off-peak hours work well. Hourly backups suit applications with frequent writes where losing an hour of data would be painful.</p>
  <h3 id="timing-considerations">Timing considerations</h3>
  <p id="jQrC">Schedule backups during low-traffic periods. <code>pg_dump</code> reads from a consistent snapshot, so it won&#x27;t block writes, but it still generates I/O and CPU load. A large dump during peak hours can slow down your application.</p>
  <p id="z9yB">Consider time zones. If your users are mostly in one region, schedule backups when they&#x27;re sleeping. For global applications, find the least-busy period in your analytics.</p>
  <p id="veXT">Database size matters too. A 100 GB database might take 30 minutes to dump. If you want hourly backups, you need that process to complete well within the hour.</p>
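  <p>To know whether your schedule is realistic, measure how long the dump actually takes. A minimal sketch (the threshold and the placeholder <code>sleep</code> are illustrative; substitute your real <code>pg_dump</code> pipeline):</p>

```shell
#!/bin/bash
# Illustrative duration check: MAX_SECONDS and the placeholder command are assumptions
MAX_SECONDS=3300   # warn if the dump takes longer than 55 minutes

START=$(date +%s)
sleep 1            # replace with: pg_dump ... | gzip > "$BACKUP_FILE"
END=$(date +%s)

DURATION=$((END - START))
echo "Backup took ${DURATION}s"

if [ "$DURATION" -gt "$MAX_SECONDS" ]; then
    echo "WARNING: backup exceeded ${MAX_SECONDS}s" >&2
fi
```

  <p>Feed the measured duration into your monitoring so schedule drift is caught before backups start overlapping.</p>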
  <h2 id="testing-your-recovery-process">Testing your recovery process</h2>
  <p id="5BQm">Backups you&#x27;ve never tested are assumptions, not guarantees. Regular restore tests catch problems before they matter.</p>
  <h3 id="restore-verification-steps">Restore verification steps</h3>
  <p id="P5bX">Create a test environment and restore periodically:</p>
  <pre id="4gRe"># Create a test database
createdb -h localhost -U postgres myapp_restore_test

# Restore the backup
gunzip -c /var/backups/postgresql/myapp_production_20240115_030000.sql.gz | \
    psql -h localhost -U postgres -d myapp_restore_test

# Run basic validation
psql -h localhost -U postgres -d myapp_restore_test -c &quot;SELECT count(*) FROM users;&quot;

# Clean up
dropdb -h localhost -U postgres myapp_restore_test
</pre>
  <p id="nXUk">Automate this as a weekly job and alert on failures. A backup that can&#x27;t be restored is worthless.</p>
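  <p>A crontab entry for the weekly test might look like this (the script path and alert address are placeholders):</p>

```
# Every Sunday at 04:00; email if the restore test script exits non-zero
0 4 * * 0 /usr/local/bin/restore-test.sh || echo "Restore test failed" | mail -s "ALERT: restore test failed" ops@example.com
```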
  <h3 id="documenting-recovery-procedures">Documenting recovery procedures</h3>
  <p id="q0qz">Write down the exact steps to recover. Include:</p>
  <ul id="DJ02">
    <li id="9n1G">Where backups are stored (all locations)</li>
    <li id="zety">How to access storage credentials</li>
    <li id="iaqt">Commands to restore</li>
    <li id="jAkq">Expected recovery time</li>
    <li id="lGJY">Who to contact if issues arise</li>
  </ul>
  <p id="May5">Test the documentation by having someone unfamiliar with the system follow it. Gaps become obvious quickly.</p>
  <h2 id="common-automation-mistakes">Common automation mistakes</h2>
  <p id="BhG9">Even well-intentioned backup automation fails in predictable ways.</p>
  <h3 id="storage-on-the-same-disk">Storage on the same disk</h3>
  <p id="qgF9">Backing up to the same physical disk as the database protects against accidental deletion but not hardware failure. Always include remote storage.</p>
  <h3 id="no-retention-limits">No retention limits</h3>
  <p id="5Q5R">Unlimited backup retention eventually fills your storage. Set explicit retention policies and monitor disk usage.</p>
  <h3 id="ignoring-backup-duration">Ignoring backup duration</h3>
  <p id="5qzU">A backup that takes 4 hours can&#x27;t run hourly. Monitor how long your backups take and adjust schedules accordingly. Alert when duration exceeds thresholds.</p>
  <h3 id="hardcoded-credentials">Hardcoded credentials</h3>
  <p id="ff3k">Passwords in scripts end up in version control, logs, and process listings. Use <code>.pgpass</code> files, environment variables, or secrets management.</p>
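  <p>For <code>pg_dump</code> and <code>psql</code>, a <code>.pgpass</code> file keeps the password out of the script entirely (the credentials here are placeholders):</p>

```shell
# Format: hostname:port:database:username:password
cat > ~/.pgpass <<'EOF'
localhost:5432:myapp_production:postgres:secretpassword
EOF
chmod 600 ~/.pgpass   # PostgreSQL ignores .pgpass unless it is readable only by you
```

  <p>With the file in place, <code>pg_dump -h localhost -U postgres myapp_production</code> authenticates without a password prompt or a <code>PGPASSWORD</code> variable.</p>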
  <h3 id="missing-failure-notifications">Missing failure notifications</h3>
  <p id="ru7i">The default cron behavior sends email only when there&#x27;s output. Failures that exit silently go unnoticed. Always add explicit failure handling and notifications.</p>
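  <p>One pattern is a thin wrapper that cron calls, so any non-zero exit triggers an alert. A sketch (<code>BACKUP_CMD</code> and <code>notify_failure</code> are placeholders for your real backup script and notification channel):</p>

```shell
#!/bin/bash
BACKUP_CMD="${BACKUP_CMD:-false}"   # substitute your real backup script

notify_failure() {
    # Replace with mail, a Slack webhook, etc.
    echo "ALERT: $1" >&2
}

if ! $BACKUP_CMD; then
    notify_failure "PostgreSQL backup failed"
fi
```

  <p>Point the crontab at this wrapper instead of the backup script directly, and silent failures become impossible.</p>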
  <h2 id="conclusion">Conclusion</h2>
  <p id="WuQa">Automated PostgreSQL backups prevent the kind of data loss that damages businesses and ruins weekends. Start with cron and <code>pg_dump</code> for simple setups, add monitoring and remote storage as your requirements grow, or use a dedicated tool like Databasus to handle the complexity. Whatever approach you choose, test your restores regularly. A backup strategy is only as good as your ability to recover from it.</p>

]]></content:encoded></item><item><guid isPermaLink="true">https://teletype.in/@rostislav_dugin/mongodb-in-docker</guid><link>https://teletype.in/@rostislav_dugin/mongodb-in-docker?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin</link><comments>https://teletype.in/@rostislav_dugin/mongodb-in-docker?utm_source=teletype&amp;utm_medium=feed_rss&amp;utm_campaign=rostislav_dugin#comments</comments><dc:creator>rostislav_dugin</dc:creator><title>MongoDB Docker setup — Running MongoDB in Docker containers complete guide</title><pubDate>Mon, 02 Feb 2026 12:25:28 GMT</pubDate><media:content medium="image" url="https://img2.teletype.in/files/92/53/925364c7-695d-41c7-86fc-5fe2d54ceb84.png"></media:content><description><![CDATA[<img src="https://img4.teletype.in/files/b7/50/b7504252-fd10-40c2-8bbd-37e0de35e21d.png"></img>Running MongoDB in Docker simplifies deployment and makes environments reproducible across development, testing and production. You can spin up a database in seconds without dealing with complex installation procedures. This guide covers everything from basic container setup to production configurations with replica sets, persistence, custom settings and proper backup strategies.]]></description><content:encoded><![CDATA[
  <p id="i946">Running MongoDB in Docker simplifies deployment and makes environments reproducible across development, testing and production. You can spin up a database in seconds without dealing with complex installation procedures. This guide covers everything from basic container setup to production configurations with replica sets, persistence, custom settings and proper backup strategies.</p>
  <figure id="u11a" class="m_original">
    <img src="https://img4.teletype.in/files/b7/50/b7504252-fd10-40c2-8bbd-37e0de35e21d.png" width="1536" />
    <figcaption>MongoDB in Docker</figcaption>
  </figure>
  <h2 id="why-run-mongodb-in-docker">Why run MongoDB in Docker</h2>
  <p id="5gEa">Traditional MongoDB installation requires adding repositories, managing versions, and cleaning up when things break. Docker containers provide isolation and consistency that native installations struggle to match.</p>
  <h3 id="benefits-of-containerized-mongodb">Benefits of containerized MongoDB</h3>
  <p id="ha2g">Docker containers bundle MongoDB with all dependencies into a single package. You get identical behavior on your laptop, CI pipeline, and production servers. The classic &quot;works on my machine&quot; problem disappears.</p>
  <p id="T9sG">Containers start fast. Launching a fresh MongoDB instance takes about 5-10 seconds versus several minutes for traditional installation. This matters for integration tests and rapid development cycles.</p>
  <p id="FTfC">Cleanup is simple. Delete the container and it&#x27;s gone completely. No leftover config files, no orphaned data directories cluttering your system.</p>
  <h3 id="when-docker-makes-sense-for-mongodb">When Docker makes sense for MongoDB</h3>
  <p id="uvD0">Docker works well for development environments where quick setup and teardown matters. It&#x27;s also solid for microservices architectures where each service might need its own database instance. CI/CD pipelines benefit significantly from reproducible database containers.</p>
  <p id="DpOu">For production use, Docker adds a bit of complexity but provides consistency across environments. The performance overhead is typically 1-3% for database workloads, which most applications can easily absorb.</p>
  <h2 id="quick-start-with-docker-run">Quick start with Docker run</h2>
  <p id="BSjS">The fastest way to get MongoDB running is a single Docker command. This approach works for testing and development scenarios.</p>
  <h3 id="basic-container-setup">Basic container setup</h3>
  <p id="3okB">Start MongoDB with minimal configuration:</p>
  <pre id="O8jD">docker run -d \
  --name mongodb \
  mongo:8
</pre>
  <p id="Nz3f">This starts MongoDB 8 in detached mode. The container runs until you stop it explicitly.</p>
  <p id="KWms">Check if it&#x27;s running:</p>
  <pre id="Eorn">docker ps
</pre>
  <p id="AGHS">Connect to the database:</p>
  <pre id="qbXC">docker exec -it mongodb mongosh
</pre>
  <h3 id="environment-variables-for-initial-setup">Environment variables for initial setup</h3>
  <p id="kr7s">MongoDB&#x27;s Docker image supports environment variables for first-run configuration:</p>
  <table id="d1Kh">
    <thead>
      <tr>
        <th>Variable</th>
        <th>Description</th>
        <th>Required</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><code>MONGO_INITDB_ROOT_USERNAME</code></td>
        <td>Admin username</td>
        <td>Optional</td>
      </tr>
      <tr>
        <td><code>MONGO_INITDB_ROOT_PASSWORD</code></td>
        <td>Admin password</td>
        <td>Optional</td>
      </tr>
      <tr>
        <td><code>MONGO_INITDB_DATABASE</code></td>
        <td>Initial database name</td>
        <td>Optional</td>
      </tr>
    </tbody>
  </table>
  <p id="Uf2q">Create an admin user on startup:</p>
  <pre id="NXUO">docker run -d \
  --name mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8
</pre>
  <p id="J07C">Connect with authentication:</p>
  <pre id="3duU">docker exec -it mongodb mongosh -u admin -p secretpassword --authenticationDatabase admin
</pre>
  <h3 id="exposing-ports">Exposing ports</h3>
  <p id="VLhe">MongoDB runs on port 27017 inside the container by default. Map it to your host:</p>
  <pre id="shIQ">docker run -d \
  --name mongodb \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8
</pre>
  <p id="lUeW">Now you can connect from your host machine:</p>
  <pre id="BTgk">mongosh &quot;mongodb://admin:secretpassword@127.0.0.1:27017/admin&quot;
</pre>
  <p id="DTRU">Use a different host port if 27017 is already in use:</p>
  <pre id="dOcE">docker run -d \
  --name mongodb \
  -p 27018:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8
</pre>
  <h2 id="data-persistence-with-volumes">Data persistence with volumes</h2>
  <p id="rf90">Without volumes, your data vanishes when the container stops. That&#x27;s acceptable for throwaway test databases, but anything beyond that needs persistence.</p>
  <h3 id="named-volumes">Named volumes</h3>
  <p id="7v6y">Docker named volumes are the simplest approach:</p>
  <pre id="NtFs">docker run -d \
  --name mongodb \
  -v mongodb-data:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8
</pre>
  <p id="K5kE">The volume <code>mongodb-data</code> persists even after you delete the container. List your volumes:</p>
  <pre id="3PfS">docker volume ls
</pre>
  <p id="ZfbK">Inspect volume details:</p>
  <pre id="rsM8">docker volume inspect mongodb-data
</pre>
  <h3 id="bind-mounts">Bind mounts</h3>
  <p id="mUlH">Bind mounts map a host directory directly into the container. This is useful when you need direct access to data files:</p>
  <pre id="OyOd">docker run -d \
  --name mongodb \
  -v /path/to/data:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8
</pre>
  <p id="Sfko">Make sure the directory exists and has proper permissions. On Linux, the MongoDB user inside the container needs write access:</p>
  <pre id="pQp1">mkdir -p /path/to/data
chown -R 999:999 /path/to/data
</pre>
  <p id="xWpE">The UID 999 corresponds to the MongoDB user inside the container.</p>
  <h3 id="volume-backup">Volume backup</h3>
  <p id="Cvh8">Back up a named volume by running a temporary container:</p>
  <pre id="DaY2">docker run --rm \
  -v mongodb-data:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/mongodb-backup.tar.gz -C /source .
</pre>
  <p id="TUYu">This creates a compressed archive of the data directory. For proper database backups, use <code>mongodump</code> instead, which we&#x27;ll cover later.</p>
  <h2 id="docker-compose-for-mongodb">Docker Compose for MongoDB</h2>
  <p id="PIzV">Docker Compose makes multi-container setups manageable and keeps configurations under version control.</p>
  <h3 id="basic-compose-file">Basic compose file</h3>
  <p id="FJIu">Create a <code>docker-compose.yml</code>:</p>
  <pre id="Yjy0">services:
  mongodb:
    image: mongo:8
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
      MONGO_INITDB_DATABASE: myapp
    ports:
      - &quot;27017:27017&quot;
    volumes:
      - mongodb-data:/data/db
    restart: unless-stopped

volumes:
  mongodb-data:
</pre>
  <p id="EZJu">Start the service:</p>
  <pre id="0T0z">docker compose up -d
</pre>
  <p id="elld">Stop and remove:</p>
  <pre id="7EHK">docker compose down
</pre>
  <p id="JcQB">Remove including volumes:</p>
  <pre id="eBpQ">docker compose down -v
</pre>
  <h3 id="application-with-mongodb">Application with MongoDB</h3>
  <p id="9Zt0">A typical setup includes your application and MongoDB together:</p>
  <pre id="xhmq">services:
  app:
    build: .
    environment:
      MONGODB_URI: mongodb://appuser:apppassword@mongodb:27017/myapp?authSource=admin
    depends_on:
      mongodb:
        condition: service_healthy
    ports:
      - &quot;8080:8080&quot;

  mongodb:
    image: mongo:8
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
    volumes:
      - mongodb-data:/data/db
    healthcheck:
      test: [&quot;CMD&quot;, &quot;mongosh&quot;, &quot;--eval&quot;, &quot;db.adminCommand(&#x27;ping&#x27;)&quot;]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    restart: unless-stopped

volumes:
  mongodb-data:
</pre>
  <p id="CNTE">The <code>depends_on</code> with <code>condition: service_healthy</code> ensures your application waits for MongoDB to be ready before starting.</p>
  <h2 id="custom-configuration">Custom configuration</h2>
  <p id="2lqb">Default settings work for development but production workloads often need tuning.</p>
  <h3 id="configuration-file-mount">Configuration file mount</h3>
  <p id="YMAE">Create a custom configuration file <code>mongod.conf</code>:</p>
  <pre id="G9SB">storage:
  dbPath: /data/db
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2

systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true

net:
  port: 27017
  bindIp: 0.0.0.0

security:
  authorization: enabled

operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
</pre>
  <p id="DQ4K">Mount it into the container:</p>
  <pre id="vhXC">docker run -d \
  --name mongodb \
  -v ./mongod.conf:/etc/mongod.conf:ro \
  -v mongodb-data:/data/db \
  -v mongodb-logs:/var/log/mongodb \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secretpassword \
  mongo:8 --config /etc/mongod.conf
</pre>
  <h3 id="docker-compose-with-custom-config">Docker Compose with custom config</h3>
  <pre id="aVmv">services:
  mongodb:
    image: mongo:8
    container_name: mongodb
    command: [&quot;--config&quot;, &quot;/etc/mongod.conf&quot;]
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
    volumes:
      - mongodb-data:/data/db
      - mongodb-logs:/var/log/mongodb
      - ./mongod.conf:/etc/mongod.conf:ro
    ports:
      - &quot;27017:27017&quot;
    restart: unless-stopped

volumes:
  mongodb-data:
  mongodb-logs:
</pre>
  <h3 id="common-configuration-options">Common configuration options</h3>
  <p id="oUsi">Key settings to consider for production:</p>
  <table id="NTLU">
    <thead>
      <tr>
        <th>Setting</th>
        <th>Default</th>
        <th>Production recommendation</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td><code>storage.wiredTiger.engineConfig.cacheSizeGB</code></td>
        <td>50% of (RAM - 1 GB), minimum 256 MB</td>
        <td>Set explicitly based on available memory</td>
      </tr>
      <tr>
        <td><code>operationProfiling.slowOpThresholdMs</code></td>
        <td>100</td>
        <td>Tune based on your performance requirements</td>
      </tr>
      <tr>
        <td><code>net.maxIncomingConnections</code></td>
        <td>65536</td>
        <td>Set based on expected concurrent connections</td>
      </tr>
      <tr>
        <td><code>security.authorization</code></td>
        <td>disabled</td>
        <td>Always enable in production</td>
      </tr>
    </tbody>
  </table>
  <p id="upSl">Verify your configuration is applied:</p>
  <pre id="TQSq">docker exec mongodb mongosh -u admin -p secretpassword --authenticationDatabase admin --eval &quot;db.adminCommand({getParameter: &#x27;*&#x27;})&quot;
</pre>
  <h2 id="initialization-scripts">Initialization scripts</h2>
  <p id="S7fd">The MongoDB Docker image can run scripts on first startup. This is useful for creating users, collections, and seed data.</p>
  <h3 id="javascript-initialization">JavaScript initialization</h3>
  <p id="OYdO">Place <code>.js</code> or <code>.sh</code> files in <code>/docker-entrypoint-initdb.d/</code>:</p>
  <p id="MMXS">Create <code>init/01-create-users.js</code>:</p>
  <pre id="anhB">db = db.getSiblingDB(&#x27;myapp&#x27;);

db.createUser({
  user: &#x27;appuser&#x27;,
  pwd: &#x27;apppassword&#x27;,
  roles: [
    { role: &#x27;readWrite&#x27;, db: &#x27;myapp&#x27; }
  ]
});

db.createUser({
  user: &#x27;readonly&#x27;,
  pwd: &#x27;readonlypassword&#x27;,
  roles: [
    { role: &#x27;read&#x27;, db: &#x27;myapp&#x27; }
  ]
});
</pre>
  <p id="vVMP">Create <code>init/02-create-collections.js</code>:</p>
  <pre id="YoC8">db = db.getSiblingDB(&#x27;myapp&#x27;);

db.createCollection(&#x27;users&#x27;, {
  validator: {
    $jsonSchema: {
      bsonType: &#x27;object&#x27;,
      required: [&#x27;email&#x27;, &#x27;createdAt&#x27;],
      properties: {
        email: {
          bsonType: &#x27;string&#x27;,
          description: &#x27;must be a string and is required&#x27;
        },
        createdAt: {
          bsonType: &#x27;date&#x27;,
          description: &#x27;must be a date and is required&#x27;
        }
      }
    }
  }
});

db.users.createIndex({ email: 1 }, { unique: true });
</pre>
  <p id="u4EJ">Mount the init directory:</p>
  <pre id="wH7z">services:
  mongodb:
    image: mongo:8
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
    volumes:
      - mongodb-data:/data/db
      - ./init:/docker-entrypoint-initdb.d:ro
    restart: unless-stopped

volumes:
  mongodb-data:
</pre>
  <p id="FlHe">Scripts run in alphabetical order, only on first container start when the data directory is empty.</p>
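  <p>During development you can force the scripts to run again by recreating the data volume:</p>

```
docker compose down -v   # removes the mongodb-data volume (and all data in it)
docker compose up -d     # init scripts run again against the fresh, empty volume
```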
  <h3 id="shell-script-initialization">Shell script initialization</h3>
  <p id="oNum">For more complex setup, use shell scripts:</p>
  <p id="LZFJ">Create <code>init/00-setup.sh</code>:</p>
  <pre id="vwEA">#!/bin/bash
set -e

mongosh &lt;&lt;EOF
use admin
db.auth(&#x27;$MONGO_INITDB_ROOT_USERNAME&#x27;, &#x27;$MONGO_INITDB_ROOT_PASSWORD&#x27;)

use myapp
db.createCollection(&#x27;config&#x27;)
db.config.insertOne({
  key: &#x27;version&#x27;,
  value: &#x27;1.0.0&#x27;,
  createdAt: new Date()
})
EOF
</pre>
  <p id="iMZV">Make it executable:</p>
  <pre id="0AJR">chmod +x init/00-setup.sh
</pre>
  <h2 id="networking">Networking</h2>
  <p id="IZHc">Docker networking controls how containers communicate with each other and the outside world.</p>
  <h3 id="default-bridge-network">Default bridge network</h3>
  <p id="8r0n">Containers on the default bridge network can communicate via IP address but not hostname. For basic development this works fine:</p>
  <pre id="6h5E">docker run -d --name mongodb -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=pw mongo:8
docker run -it --rm mongo:8 mongosh &quot;mongodb://admin:pw@$(docker inspect -f &#x27;{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}&#x27; mongodb):27017/admin&quot;
</pre>
  <h3 id="custom-networks">Custom networks</h3>
  <p id="5x6B">Custom networks allow hostname-based communication:</p>
  <pre id="N2xS">docker network create myapp-network

docker run -d \
  --name mongodb \
  --network myapp-network \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=pw \
  mongo:8

docker run -it --rm \
  --network myapp-network \
  mongo:8 \
  mongosh &quot;mongodb://admin:pw@mongodb:27017/admin&quot;
</pre>
  <p id="LONz">The second container can reach MongoDB using hostname <code>mongodb</code>.</p>
  <h3 id="compose-networking">Compose networking</h3>
  <p id="Huq8">Docker Compose creates a network automatically. Services communicate by service name:</p>
  <pre id="7gTJ">services:
  app:
    image: myapp
    environment:
      MONGODB_URI: mongodb://admin:pw@mongodb:27017/myapp?authSource=admin

  mongodb:
    image: mongo:8
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: pw
</pre>
  <h2 id="health-checks-and-monitoring">Health checks and monitoring</h2>
  <p id="97EN">Proper health checks ensure containers are actually ready to serve traffic, not just running.</p>
  <h3 id="basic-health-check">Basic health check</h3>
  <pre id="KoCw">services:
  mongodb:
    image: mongo:8
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
    healthcheck:
      test: [&quot;CMD&quot;, &quot;mongosh&quot;, &quot;--eval&quot;, &quot;db.adminCommand(&#x27;ping&#x27;)&quot;]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
</pre>
  <p id="dLpe">Check health status:</p>
  <pre id="sBaB">docker inspect --format=&#x27;{{.State.Health.Status}}&#x27; mongodb
</pre>
  <h3 id="health-check-with-authentication">Health check with authentication</h3>
  <p id="JsQT">When authentication is enabled, include credentials in the health check:</p>
  <pre id="kFzH">healthcheck:
  test: [&quot;CMD&quot;, &quot;mongosh&quot;, &quot;-u&quot;, &quot;admin&quot;, &quot;-p&quot;, &quot;secretpassword&quot;, &quot;--authenticationDatabase&quot;, &quot;admin&quot;, &quot;--eval&quot;, &quot;db.adminCommand(&#x27;ping&#x27;)&quot;]
  interval: 10s
  timeout: 5s
  retries: 5
  start_period: 30s
</pre>
  <h3 id="monitoring-with-logs">Monitoring with logs</h3>
  <p id="ZM3H">View container logs:</p>
  <pre id="Wnvw">docker logs mongodb
</pre>
  <p id="aq54">Follow logs in real-time:</p>
  <pre id="bllR">docker logs -f mongodb
</pre>
  <p id="RobN">Limit output to recent entries:</p>
  <pre id="WFsu">docker logs --tail 100 mongodb
</pre>
  <p id="Syea">Enable profiling in your configuration to catch slow operations:</p>
  <pre id="5cdI">operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp
</pre>
  <p id="xZPb">Mount a volume for logs:</p>
  <pre id="n9D8">volumes:
  - mongodb-data:/data/db
  - mongodb-logs:/var/log/mongodb
</pre>
  <h2 id="backup-strategies-for-docker-mongodb">Backup strategies for Docker MongoDB</h2>
  <p id="3BOK">Data in containers needs the same backup discipline as traditional installations. Docker adds some considerations but the fundamentals remain.</p>
  <h3 id="using-mongodump-in-docker">Using mongodump in Docker</h3>
  <p id="OQ2B">Run <code>mongodump</code> inside the container:</p>
  <pre id="ozbq">docker exec mongodb mongodump -u admin -p secretpassword --authenticationDatabase admin --out /dump
docker cp mongodb:/dump ./backup
</pre>
  <p id="id3b">For a specific database:</p>
  <pre id="Q9SI">docker exec mongodb mongodump -u admin -p secretpassword --authenticationDatabase admin --db myapp --out /dump
</pre>
  <p id="1exW">Compressed backup directly to host:</p>
  <pre id="5JoM">docker exec mongodb mongodump -u admin -p secretpassword --authenticationDatabase admin --archive --gzip &gt; backup.gz
</pre>
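  <p>The matching restore reads the archive from stdin with the same flags (credentials and file name as above):</p>

```
docker exec -i mongodb mongorestore -u admin -p secretpassword --authenticationDatabase admin --archive --gzip < backup.gz
```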
  <h3 id="scheduled-backups-with-cron">Scheduled backups with cron</h3>
  <p id="xdSp">Create a backup script:</p>
  <pre id="R8SE">#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=&quot;/backups&quot;
CONTAINER=&quot;mongodb&quot;

docker exec $CONTAINER mongodump \
  -u admin \
  -p &quot;$MONGO_ROOT_PASSWORD&quot; \
  --authenticationDatabase admin \
  --archive \
  --gzip &gt; &quot;$BACKUP_DIR/mongodb_${TIMESTAMP}.gz&quot;

# Keep only last 7 days
find $BACKUP_DIR -name &quot;mongodb_*.gz&quot; -mtime +7 -delete
</pre>
  <p id="4wv5">Add to crontab for daily 3 AM backups:</p>
  <pre id="andn">0 3 * * * /usr/local/bin/mongodb-backup.sh
</pre>
  <h3 id="using-databasus-for-automated-backups">Using Databasus for automated backups</h3>
  <p id="pqjV">Manual backup scripts work but require ongoing maintenance and lack built-in monitoring. Databasus, a dedicated tool for <a href="https://databasus.com/mongodb-backup" target="_blank">MongoDB backup</a>, provides automated backups with a web interface, scheduling and notifications.</p>
  <p id="aUyO">Install Databasus on a separate server using Docker:</p>
  <pre id="Snrf">docker run -d \
  --name databasus \
  -p 4005:4005 \
  -v ./databasus-data:/databasus-data \
  --restart unless-stopped \
  databasus/databasus:latest
</pre>
  <p id="QN5M">Or with Docker Compose:</p>
  <pre id="cPP8">services:
  databasus:
    image: databasus/databasus:latest
    container_name: databasus
    ports:
      - &quot;4005:4005&quot;
    volumes:
      - databasus-data:/databasus-data
    restart: unless-stopped

volumes:
  databasus-data:
</pre>
  <p id="AkU1">Access the web interface at <code>http://your-databasus-server:4005</code>, then:</p>
  <ol id="kbLS">
    <li id="gldR"><strong>Add your database</strong> — Click &quot;New Database&quot;, select MongoDB, enter your MongoDB server&#x27;s connection details (host, port, credentials)</li>
    <li id="lLYK"><strong>Select storage</strong> — Choose local storage, S3, Google Cloud Storage, or other supported destinations</li>
    <li id="fHZJ"><strong>Select schedule</strong> — Set backup frequency: hourly, daily, weekly, or custom cron expression</li>
    <li id="Nw71"><strong>Click &quot;Create backup&quot;</strong> — Databasus handles backup execution, compression, retention and notifications</li>
  </ol>
  <p id="nhyQ">Databasus supports multiple notification channels including Slack, Discord, Telegram and email, so you know immediately when backups succeed or fail.</p>
  <h2 id="replica-sets-in-docker">Replica sets in Docker</h2>
  <p id="2pX6">For production environments, running MongoDB as a replica set provides high availability and data redundancy.</p>
  <h3 id="single-node-replica-set">Single-node replica set</h3>
  <p id="ftq0">Even a single-node replica set is useful because it enables change streams and transactions:</p>
  <pre id="S3pF">services:
  mongodb:
    image: mongo:8
    container_name: mongodb
    command: [&quot;--replSet&quot;, &quot;rs0&quot;, &quot;--bind_ip_all&quot;]
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
    ports:
      - &quot;27017:27017&quot;
    volumes:
      - mongodb-data:/data/db
    restart: unless-stopped

volumes:
  mongodb-data:
</pre>
  <p id="eVQ1">Initialize the replica set after starting:</p>
  <pre id="kI7a">docker exec -it mongodb mongosh -u admin -p secretpassword --authenticationDatabase admin --eval &quot;rs.initiate()&quot;
</pre>
  <h3 id="three-node-replica-set">Three-node replica set</h3>
  <p id="cpAS">For actual high availability, run three nodes:</p>
  <pre id="HH15">services:
  mongodb-primary:
    image: mongo:8
    container_name: mongodb-primary
    command: [&quot;--replSet&quot;, &quot;rs0&quot;, &quot;--bind_ip_all&quot;, &quot;--keyFile&quot;, &quot;/etc/mongodb/keyfile&quot;]
    volumes:
      - mongodb-primary-data:/data/db
      - ./keyfile:/etc/mongodb/keyfile:ro
    networks:
      - mongodb-network
    restart: unless-stopped

  mongodb-secondary1:
    image: mongo:8
    container_name: mongodb-secondary1
    command: [&quot;--replSet&quot;, &quot;rs0&quot;, &quot;--bind_ip_all&quot;, &quot;--keyFile&quot;, &quot;/etc/mongodb/keyfile&quot;]
    volumes:
      - mongodb-secondary1-data:/data/db
      - ./keyfile:/etc/mongodb/keyfile:ro
    networks:
      - mongodb-network
    depends_on:
      - mongodb-primary
    restart: unless-stopped

  mongodb-secondary2:
    image: mongo:8
    container_name: mongodb-secondary2
    command: [&quot;--replSet&quot;, &quot;rs0&quot;, &quot;--bind_ip_all&quot;, &quot;--keyFile&quot;, &quot;/etc/mongodb/keyfile&quot;]
    volumes:
      - mongodb-secondary2-data:/data/db
      - ./keyfile:/etc/mongodb/keyfile:ro
    networks:
      - mongodb-network
    depends_on:
      - mongodb-primary
    restart: unless-stopped

networks:
  mongodb-network:

volumes:
  mongodb-primary-data:
  mongodb-secondary1-data:
  mongodb-secondary2-data:
</pre>
  <p id="GQL9">Generate the keyfile for internal authentication:</p>
  <pre id="L3bl">openssl rand -base64 756 &gt; keyfile
chmod 400 keyfile
</pre>
  <p id="hauV-intro">Initialize the replica set (no users exist yet, so the localhost exception allows this command to run unauthenticated):</p>
  <pre id="hauV">docker exec -it mongodb-primary mongosh --eval &quot;
rs.initiate({
  _id: &#x27;rs0&#x27;,
  members: [
    { _id: 0, host: &#x27;mongodb-primary:27017&#x27;, priority: 2 },
    { _id: 1, host: &#x27;mongodb-secondary1:27017&#x27;, priority: 1 },
    { _id: 2, host: &#x27;mongodb-secondary2:27017&#x27;, priority: 1 }
  ]
})
&quot;
</pre>
  <h2 id="security-considerations">Security considerations</h2>
  <p id="g1Ou">Running databases in containers doesn&#x27;t reduce security requirements. If anything, you need more attention to configuration details.</p>
  <h3 id="enable-authentication">Enable authentication</h3>
  <p id="mhmI">Never run MongoDB without authentication in any environment beyond local development:</p>
  <pre id="kOLC">services:
  mongodb:
    image: mongo:8
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secretpassword
</pre>
  <h3 id="secure-passwords-with-secrets">Secure passwords with secrets</h3>
  <p id="1TYD">Instead of hardcoding credentials, point the <code>_FILE</code> variants of the environment variables at Docker secrets:</p>
  <pre id="ArPU">services:
  mongodb:
    image: mongo:8
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongo_username
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongo_password
    secrets:
      - mongo_username
      - mongo_password

secrets:
  mongo_username:
    file: ./secrets/mongo_username.txt
  mongo_password:
    file: ./secrets/mongo_password.txt
</pre>
  <h3 id="network-isolation">Network isolation</h3>
  <p id="m6HG">Don&#x27;t expose database ports to the public internet. Use internal Docker networks:</p>
  <pre id="CQXv">services:
  app:
    networks:
      - frontend
      - backend

  mongodb:
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true
</pre>
  <h3 id="resource-limits">Resource limits</h3>
  <p id="HFD8">Prevent runaway containers from consuming all system resources:</p>
  <pre id="BMiI">services:
  mongodb:
    image: mongo:8
    deploy:
      resources:
        limits:
          cpus: &quot;2&quot;
          memory: 4G
        reservations:
          cpus: &quot;1&quot;
          memory: 2G
</pre>
  <h2 id="production-checklist">Production checklist</h2>
  <p id="BJqW">Before running MongoDB Docker containers in production, verify these items:</p>
  <ul id="iQMu">
    <li id="1n6t">Data persistence configured with volumes</li>
    <li id="OGyi">Authentication enabled with strong passwords</li>
    <li id="Xpdj">Custom configuration tuned for workload</li>
    <li id="q3mu">Health checks enabled</li>
    <li id="B6W5">Automated backup strategy in place</li>
    <li id="ASvc">Secrets managed securely (not hardcoded)</li>
    <li id="TYMR">Network properly isolated</li>
    <li id="eaCX">Resource limits set</li>
    <li id="r8W8">Monitoring and alerting configured</li>
    <li id="9XKW">Restart policy set to <code>unless-stopped</code> or <code>always</code></li>
    <li id="7iHk">Container image version pinned (not using <code>latest</code>)</li>
  </ul>
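  <p id="hcLd">For the health-check item, a Compose snippet along these lines is a reasonable starting point (the intervals are illustrative, so tune them for your workload; <code>ping</code> does not require authentication):</p>
  <pre id="hcPr">services:
  mongodb:
    image: mongo:8
    healthcheck:
      test: [&quot;CMD&quot;, &quot;mongosh&quot;, &quot;--quiet&quot;, &quot;--eval&quot;, &quot;db.adminCommand(&#x27;ping&#x27;)&quot;]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
</pre>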
  <h2 id="troubleshooting-common-issues">Troubleshooting common issues</h2>
  <h3 id="container-exits-immediately">Container exits immediately</h3>
  <p id="gVtQ">Check logs for errors:</p>
  <pre id="QOwl">docker logs mongodb
</pre>
  <p id="q4KI">Common causes: permission issues on mounted volumes, corrupt data directory, or invalid configuration file syntax.</p>
  <h3 id="permission-denied-on-bind-mount">Permission denied on bind mount</h3>
  <p id="dWwL">Ensure the host directory has correct ownership:</p>
  <pre id="V8BF">sudo chown -R 999:999 /path/to/data
</pre>
  <p id="Z3Im">Or run MongoDB with your user ID:</p>
  <pre id="vYwk">docker run -d --user $(id -u):$(id -g) ...
</pre>
  <h3 id="cant-connect-from-host">Can&#x27;t connect from host</h3>
  <p id="0RRP">Verify port mapping:</p>
  <pre id="TPpu">docker port mongodb
</pre>
  <p id="XOHZ">Check if MongoDB is listening on all interfaces. The default bind address should be <code>0.0.0.0</code> in Docker, but verify with:</p>
  <pre id="02tP">docker exec mongodb mongosh --eval &quot;db.adminCommand({getCmdLineOpts: 1})&quot;
</pre>
  <h3 id="replica-set-wont-initialize">Replica set won&#x27;t initialize</h3>
  <p id="yRxD">Ensure all nodes can resolve each other&#x27;s hostnames. When using Docker Compose, services communicate by service name. If using custom hostnames, add them to <code>/etc/hosts</code> or use Docker&#x27;s <code>--add-host</code> option.</p>
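  <p id="ehLd">For example, a client service outside the replica set&#x27;s Compose project can map the member hostnames explicitly with <code>extra_hosts</code> (the IPs below are placeholders; substitute the actual addresses):</p>
  <pre id="ehPr">services:
  app:
    extra_hosts:
      - &quot;mongodb-primary:10.0.0.5&quot;
      - &quot;mongodb-secondary1:10.0.0.6&quot;
      - &quot;mongodb-secondary2:10.0.0.7&quot;
</pre>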
  <h3 id="slow-performance">Slow performance</h3>
  <p id="9JN2">Check if WiredTiger cache is sized correctly:</p>
  <pre id="1UmA">docker exec mongodb mongosh --eval &quot;db.serverStatus().wiredTiger.cache&quot;
</pre>
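  <p id="wtLd">If the cache size does not match the memory you actually gave the container, you can cap it explicitly. The value below is illustrative; a common rule of thumb is roughly half the container&#x27;s memory limit:</p>
  <pre id="wtPr">services:
  mongodb:
    image: mongo:8
    # cap the WiredTiger cache so it fits inside the container memory limit
    command: [&quot;--wiredTigerCacheSizeGB&quot;, &quot;1.5&quot;]
    deploy:
      resources:
        limits:
          memory: 4G
</pre>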
  <p id="SqOR">For Docker Desktop on macOS and Windows, file system performance through volumes can be slow. Use named volumes instead of bind mounts for better performance.</p>
  <h2 id="conclusion">Conclusion</h2>
  <p id="j8ys">Running MongoDB in Docker provides consistent environments across development, testing and production. Start with simple <code>docker run</code> commands for quick setups, then move to Docker Compose for more complex configurations. Always configure data persistence with volumes, set up proper health checks, and implement automated backups. The overhead of containerization is minimal compared to the operational benefits of reproducibility and isolation. Pin your image versions, tune your configuration for your workload, and treat container security with the same rigor as traditional deployments.</p>

]]></content:encoded></item></channel></rss>