EnterpriseDB Blog http://suntzusound.com/feed/blog en EnterpriseDB Blog http://suntzusound.com/blog/thales-vormetric-transparent-encryption-edb-postgres <p><img alt="" src="/sites/default/files/photo-security-shield.jpg" style="width: 100%;" /></p><p> </p><p>Data privacy has senior management visibility as organizations are required to minimize the risk of sensitive data, such as customer payment information or health records, being exposed through fraud or data breaches. Complying with the growing body of data privacy standards and regulations, including CCPA, PCI DSS, GDPR, and HIPAA, is an ever-changing challenge that requires consistent policies and tools that work across the enterprise. </p><p>Securing data at the file system level, or in PCI terms ‘data-at-rest’, is possible through encryption by the storage hardware, the operating system, the database server, or the application. Encryption performed by the database server, as seen in Oracle, SQL Server, and DB2, is known as transparent data encryption (TDE). Postgres today does not have native TDE capability.</p><p>Vormetric Transparent Encryption (VTE) from leading enterprise data security provider Thales secures data-at-rest without requiring changes to the database or associated applications. The solution also includes the Vormetric Data Security Manager (DSM), which provides a unified, centralized platform for managing encryption keys and policies across an enterprise’s storage, databases, and applications. </p><p>EnterpriseDB has partnered with Thales to bring this security solution to EDB Postgres Advanced Server.</p><h2>Validated Support</h2><p>Before announcing the joint solution to our customers, EnterpriseDB and Thales put it through a validation process. The goal was to verify that VTE’s granular, least-privileged user access policies worked as expected with EDB Postgres Advanced Server, and to see auditing and encryption key management in operation. 
</p><p>My colleagues Tushar Ahuja and Rajkumar Raghuwanshi will be blogging details of the validation effort, along with the performance impact of the enabled solution on our sample application. As the saying goes, performance will vary with your specific workload. Overall, we were pleased with the results.</p><h2>Getting Started</h2><p>Implementing the Vormetric solution requires the following components:</p><p>1. EDB Postgres Advanced Server installed and in operation.</p><p>2. Vormetric Data Security Manager (DSM) installed and operational.</p><p>3. A VTE agent on the Postgres host registered to the DSM.</p><p>A good resource from Thales is the <a href="http://suntzusound.com/sites/default/files/vte-implementation-postgresql-guide.pdf">Vormetric Guide: VTE Implementation for Postgres.</a></p><h2>Peace of Mind</h2><p>If you are following best practices with <a href="http://suntzusound.com/blog/creating-multi-layered-security-architecture-your-databases">layers of protection</a> for securing data from attack, adding VTE enables you to address data-at-rest security concerns. If you are already using Thales to manage data security policies in your enterprise, this validated solution enables you to extend your implementation to include EDB Postgres Advanced Server. 
The Thales and EnterpriseDB partnership gives you the peace of mind that your Postgres data is secure and supported.</p><p> </p><h3>Additional Resources</h3><p><a href="http://safenet.gemalto.com/resources/security-compliance-postgres-database-solution-brief/">VTE and EDB Postgres Advanced Server Solution Brief</a></p><p><a href="http://suntzusound.com/sites/default/files/vte-implementation-postgresql-guide.pdf">Vormetric Guide: VTE Implementation for Postgres</a></p><p><a href="http://www.thalesesecurity.com/products/data-encryption/vormetric-transparent-encryption">Product webpage: Vormetric Transparent Encryption</a></p><p><a href="http://www.thalesesecurity.com/resources/research-reports-and-white-papers/vormetric-data-security-complying-pci-dss-30-encryption">Vormetric Data Security: Complying with PCI DSS 3.0 Encryption Rules</a></p><p><a href="http://suntzusound.com/resources/creating-multi-layered-security-architecture-your-databases">Creating a multi-layered security architecture for your database</a></p><p><a href="http://suntzusound.com/blog/postgres-and-transparent-data-encryption-tde">Postgres and Transparent Data Encryption (TDE)</a></p><p><a href="http://www.postgresql.org/docs/12/encryption-options.html">Postgres encryption options</a></p><p> </p> Thu, 16 Jan 2020 11:40:10 -0500 John Dalton http://suntzusound.com/blog/thales-vormetric-transparent-encryption-edb-postgres EnterpriseDB Blog http://suntzusound.com/blog/january-and-running-postgres-events-near-and-far <p><img alt="" src="/sites/default/files/photo-rear-audience-view.jpg" style="width: 100%;" /></p><p> </p><p>We kick off January with our experts speaking at four events that are near our headquarters, then the U.S. 
west coast, and closing out the month in Germany.</p><h2><b>Non-Relational Postgres Storage </b></h2><p>Next Tuesday, January 14, <a href="http://suntzusound.com/bruce-momjian">Bruce Momjian</a>, co-founder and core team member of the PostgreSQL Global Development Group and EDB senior database architect, will speak at the<a href="http://www.meetup.com/Boston-PostgreSQL-Users-Group-BPUG/events/267597649/"> Boston Postgres User Group</a>. The topic is Non-Relational Postgres Storage, with Bruce outlining the different types of non-relational storage options, like JSON, as well as the advantages of non-relational storage and Postgres support. It’s a free event open to anybody.</p><h2><b>Deep Dive Into EXPLAIN Plans</b></h2><p>The following week, on January 21, EDB is among the sponsors of<a href="http://2020.pgdaysf.org/"> PgDay in San Francisco</a>, where Richard Yen, principal support engineer at EnterpriseDB, will present a session titled<a href="http://postgresql.us/events/pgdaysf2020/schedule/session/778-explaining-explain-a-deep-dive-into-explain-plans/"> Explaining EXPLAIN: A Deep Dive into EXPLAIN Plans</a>. The session helps developers and DBAs understand what can be done to improve query performance. The event is open to anybody with an interest in the world's most awesome open source database (their words, not ours, though we completely agree).</p><h2><b>Making PostgreSQL Central in Your Data Center</b></h2><p>At the end of the month, on January 29, Bruce Momjian will be at a meetup in Hamburg, Germany, talking about<a href="http://www.meetup.com/hamburg-pug/events/267160268/"> Making PostgreSQL Central in Your Data Center</a>, including lessons learned from migration projects to PostgreSQL. Bruce will cover the extensibility of the Postgres database, including access to foreign data sources and the ability to handle NoSQL-like and data warehousing workloads.</p><p> </p><p>We hope to see you as our team travels the world to share the story of Postgres Everywhere. 
Keep up on where we’ll be next by visiting our<a href="http://suntzusound.com/events"> Events calendar</a>.</p><p> </p><p> </p> Thu, 09 Jan 2020 11:53:31 -0500 http://suntzusound.com/blog/january-and-running-postgres-events-near-and-far EnterpriseDB Blog http://suntzusound.com/blog/top-10-edb-postgres-blogs-posts-2019 <p><img alt="" src="/sites/default/files/depositphotos_76532449_xl-2015.jpg" style="width: 100%;" /></p><p> </p><p>Wow, it’s 2020, and here we are at the start of a new year and new decade!  We’ve clinked our champagne glasses, sung Auld Lang Syne, and made resolutions we may or may not keep (although simply making them is half the fun). And while 2019 may already be fading into the past, EDB’s top blog posts of last year are still packed with great information that can help you in the year ahead. </p><h2><a href="http://suntzusound.com/blog/documentdb-really-postgresql"><span style="color:#e74c3c;">Is DocumentDB Really PostgreSQL?</span></a></h2><p>Is Amazon’s fancy new MongoDB-compatible DBMS really PostgreSQL under the covers? This blog post makes a pretty compelling argument that it is.</p><h2><a href="http://suntzusound.com/blog/tips-postgres-postgres-insider"><span style="color:#e74c3c;">Tips for Postgres from a Postgres Insider</span></a></h2><p>This collection of Postgres tips highlights some of the more obscure features users might not be familiar with, but that can greatly enhance the use of Postgres.</p><h2><a href="http://suntzusound.com/blog/comparison-mongodb-vs-postgresql"><span style="color:#e74c3c;">Comparison of MongoDB vs. PostgreSQL</span></a></h2><p>Despite the different data models that MongoDB and PostgreSQL expose, many organizations face the challenge of choosing between the two technologies. 
Performance is often the main deciding factor.</p><h2><a href="http://suntzusound.com/blog/8-major-improvements-postgresql-12"><span style="color:#e74c3c;">8 Major Improvements in PostgreSQL 12</span></a></h2><p>PostgreSQL 12 is focused on performance and optimization. This release wasn’t built with brand-new, shiny features in mind; instead, it’s a finely tuned, well-polished implementation of existing PostgreSQL capabilities.</p><h2><a href="http://suntzusound.com/blog/those-who-forget-past-are-doomed-repeat-it"><span style="color:#e74c3c;">Those Who Forget the Past Are Doomed to Repeat It</span></a></h2><p>Over the past decade, a number of DBMSs have been introduced (typically labeled as NoSQL) that utilize a network or hierarchical data model. Some such systems support networks through the concepts of “links,” and some support hierarchical data using a nested data model, often utilizing JSON. But it doesn’t appear that these systems have internalized lessons from history.</p><h2><a href="http://suntzusound.com/blog/postgres-table-partitioning"><span style="color:#e74c3c;">Postgres Table Partitioning</span></a></h2><p>This blog post describes different Postgres partitioning options, how to configure partitioning, and the use cases where partitioning can be used to increase performance. It also describes other benefits of partitioning, as well as things to consider when deciding whether to use partitioning.</p><h2><a href="http://suntzusound.com/blog/pgpool-vs-pgbouncer"><span style="color:#e74c3c;">Pgpool vs PGBouncer</span></a></h2><p>Two of the most well-known database connection poolers are Pgpool-II and PgBouncer. 
Each tool adds functionality on top of the database’s native connection facilities, and each has its own particular strengths and weaknesses.</p><h2><a href="http://suntzusound.com/blog/how-run-postgresql-docker"><span style="color:#e74c3c;">How to Run PostgreSQL on Docker</span></a></h2><p>Docker is an open source platform where you can create, deploy, and run applications by using containers. See how to get started on <a href="http://suntzusound.com/postgres-tutorials/how-install-postgres-docker">deploying a PostgreSQL image on Docker</a>. </p><h2><a href="http://suntzusound.com/blog/mongodb%E2%80%99s-new-license-model-won%E2%80%99t-save-it-competition-aws"><span style="color:#e74c3c;">MongoDB’s New License Model Won’t Save It From Competition With AWS</span></a></h2><p>Last year, AWS launched Amazon DocumentDB. Look out, MongoDB! Your popular DBaaS offering, Atlas, will now have to compete with Amazon on its home turf.</p><h2><a href="http://suntzusound.com/blog/comparing-oracle-edb-postgres"><span style="color:#e74c3c;">Comparing Oracle with EDB Postgres</span></a></h2><p>It’s not often you combine open source and closed proprietary source databases in the same sentence, especially when you think about the giant in the database realm known as Oracle. But the contender, PostgreSQL, has a lot of similarities to the champion. </p><p> </p><p> </p> Tue, 07 Jan 2020 10:59:09 -0500 http://suntzusound.com/blog/top-10-edb-postgres-blogs-posts-2019 EnterpriseDB Blog http://suntzusound.com/blog/enterprisedb-named-challenger-2019-gartner-magic-quadrant-operational-database-management <p><img alt="" src="/sites/default/files/copy_of_social_gartner_2019.png" style="width: 100%;" /></p><p> </p><p>We’re delighted to share that Gartner has, for the seventh year in a row, recognized EDB in their most recent Magic Quadrant (MQ) for Operational Database Management Systems. 
EDB has been named in the MQ every year, and in 2019 is listed in the “Challengers Quadrant,” with the report citing the completeness of our vision and ability to execute as reasons for our positioning.</p><p>This report is a great validation for us at what is an exciting moment for our company. In the last few years, we’ve placed huge emphasis on building out our <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-platform">product portfolio</a> and <a href="http://suntzusound.com/services-support">services</a> to respond to growing demand from enterprises turning to EDB Postgres as a core database on-premises and in the cloud. It’s exciting to see that our approach has been so well received. </p><p>We remain one of the Postgres community’s most meaningful contributors. And we’ll continue to work with customers to identify and build out the enterprise functionality and services they need to complement the work of the community. </p><p>Seen as <a href="http://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-databases">one of the most loved databases</a> by developers in Stack Overflow’s 2019 survey and voted “<a href="http://db-engines.com/en/blog_post/79">Database of the Year</a>” in 2018 for the second year running, we at EDB are very upbeat about the future of Postgres and about its challenge to traditional database vendors.</p><p>Download the <a href="http://info.suntzusound.com/2019Gartner-MQOperationalDatabaseManagementSystems.html">Gartner report: Magic Quadrant for Operational Database Management Systems 2019</a>.</p><p> </p><h2><b>Gartner Disclaimer</b></h2><p>Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. 
Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. </p><p> </p> Mon, 30 Dec 2019 10:16:42 -0500 Frank Days http://suntzusound.com/blog/enterprisedb-named-challenger-2019-gartner-magic-quadrant-operational-database-management EnterpriseDB Blog http://suntzusound.com/blog/how-workaround-oracle-listagg-function-postgresql <style type="text/css"><!--/*--><![CDATA[/* ><!--*/.summary{background:#f3f7f9; padding:20px;}/*--><!]]>*/</style><p><img alt="" src="/sites/default/files/photo-source-code-1.jpg" style="width: 100%;" /></p><p> </p><div class="summary"><p>This blog discusses the LISTAGG function in Oracle and how to achieve similar functionality in PostgreSQL 12 and EDB Postgres Advanced Server. This is one of the most common challenges enterprises face when migrating from Oracle to PostgreSQL.</p><p>1. <a href="#olf">Oracle LISTAGG Function</a></p><p>         a. <a href="#ls">LISTAGG Syntax</a></p><p>         b. <a href="#lft">LISTAGG Function Types</a></p><p>2. <a href="#compatibility">LISTAGG Compatibility Oracle Vs. PostgreSQL Vs. EDB Postgres Advanced Server</a></p><p>         a. <a href="#postgresql">LISTAGG function support in PostgreSQL</a></p><p>         b. <a href="#EDBPostgres">LISTAGG function support in EDB Postgres Advanced Server</a></p><p>3. <a href="#exampleorder">Example of LISTAGG(...) WITHIN GROUP (ORDER BY ...)</a></p><p>4. <a href="#examplepartition">Example of LISTAGG(...) WITHIN GROUP (ORDER BY ...) OVER (PARTITION BY ...)</a></p></div><h2> </h2><h2 id="olf">The LISTAGG function in Oracle</h2><p>The Oracle LISTAGG() function is an aggregation function that transforms data from multiple rows into a single list of values separated by a specified delimiter. 
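</p><p>Before looking at the syntax, the behavior is easy to picture outside the database. This short Python sketch is purely illustrative - the rows and the helper name are made up - but it performs the same group-and-join that LISTAGG does in SQL:</p>

```python
from collections import defaultdict

# Toy (deptno, ename) rows standing in for the familiar emp sample table.
rows = [(10, "KING"), (10, "CLARK"), (10, "MILLER"),
        (20, "FORD"), (20, "ADAMS")]

def listagg(rows, delimiter=","):
    """Collapse many rows per group into one ordered, delimited string."""
    groups = defaultdict(list)
    for dept, name in rows:
        groups[dept].append(name)
    # The ORDER BY inside WITHIN GROUP corresponds to this sort.
    return {dept: delimiter.join(sorted(names)) for dept, names in groups.items()}

print(listagg(rows))  # {10: 'CLARK,KING,MILLER', 20: 'ADAMS,FORD'}
```

<p>Each group (the GROUP BY key) yields exactly one output row whose value is the ordered, delimiter-joined list.</p><p>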
The Oracle LISTAGG function is typically used to denormalize values from multiple rows into a single value, which can be a list of comma-separated values (CSV) or another human-readable format for reporting purposes.</p><p> </p><h3 id="ls">Oracle LISTAGG Syntax</h3><pre><code class="language-sql">LISTAGG(measure_expr [, 'delimiter'])
WITHIN GROUP (order_by_clause) [OVER query_partition_clause]</code></pre><p><a href="http://docs.oracle.com/database/121/SQLRF/functions101.htm#SQLRF30030">Reference Link</a></p><p>For the Oracle LISTAGG function, the return data type is RAW if the measure column is RAW; otherwise the return value is VARCHAR2. So, if a string aggregation exceeds the maximum size of that data type, it may error out with a <i>“result of string concatenation is too long”</i> error.</p><p> </p><h3 id="lft">Oracle LISTAGG function types</h3><table border="1" cellpadding="1" cellspacing="1" style="width:100%;"><tbody><tr style="width:50%;"><td style="width:50%;"><p>Types</p></td><td style="width:50%;"><p>Description</p></td></tr><tr style="width:50%;"><td><p>LISTAGG(...) </p><p><b>WITHIN GROUP (ORDER BY ...)</b></p></td><td><p>The LISTAGG column is aggregated into a delimiter-separated list. Using WITHIN GROUP returns one row per group.  </p></td></tr><tr style="width:50%;"><td style="width:50%;"><p>LISTAGG(... <b>ON OVERFLOW ERROR</b>) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>If the concatenated string is longer than the return data type allows, there will be an "ORA-01489" error. This is the default behavior; the ON OVERFLOW ERROR clause makes it explicit.</p></td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW TRUNCATE</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>The ON OVERFLOW TRUNCATE clause not only handles overflow errors gracefully, but also adds a literal ellipsis ('...') and a count of the omitted values.</p></td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW TRUNCATE '..'</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>We can specify our own truncate literal ('..')  
if we don't want to use the default ellipsis.</p></td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW TRUNCATE '..' WITHOUT COUNT</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>We can also omit the count by adding WITHOUT COUNT. The default is the equivalent of explicitly using WITH COUNT.</p></td></tr><tr><td><p>LISTAGG(...) WITHIN GROUP (ORDER BY ...) </p><p><b>OVER (PARTITION BY ...)</b></p></td><td><p>The OVER (PARTITION BY) clause returns all the rows and duplicates the LISTAGG result for each row in the partition. </p></td></tr></tbody></table><p> </p><p>Now, let’s see what kind of similar support we have in PostgreSQL 12 and EDB Postgres Advanced Server 12.</p><h2> </h2><h2 id="compatibility">LISTAGG Compatibility Oracle Vs. PostgreSQL Vs. EDB Postgres Advanced Server</h2><p>This compatibility chart provides a high-level picture of what is supported in PostgreSQL compared to <a href="http://suntzusound.com/compare-postgres-databases">EDB Postgres Advanced Server</a>.</p><table border="1" cellpadding="1" cellspacing="1" style="width:100%;"><tbody><tr><td><p>Oracle 12c LISTAGG function models</p></td><td><p>Support in PostgreSQL 12</p></td><td><p>Support in EDB Postgres Advanced Server 12</p></td></tr><tr><td><p>LISTAGG(...) </p><p><b>WITHIN GROUP (ORDER BY ...)</b></p></td><td><p>STRING_AGG(... ORDER BY ...)  OR</p><p>ARRAY_AGG(... ORDER BY ..)</p></td><td><p>Same as Oracle</p></td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW ERROR</b>) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>Not available</p></td><td>Not available</td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW TRUNCATE</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>Not available</p></td><td>Not available</td></tr><tr><td><p>LISTAGG(... <b>ON OVERFLOW TRUNCATE '..'</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td><p>Not available</p></td><td>Not available</td></tr><tr><td><p>LISTAGG(... 
<b>ON OVERFLOW TRUNCATE '..' WITHOUT COUNT</b> ) </p><p>WITHIN GROUP (ORDER BY ...)</p></td><td>Not available</td><td>Not available</td></tr><tr><td><p>LISTAGG(...) WITHIN GROUP (ORDER BY ...) </p><p><b>OVER (PARTITION BY ...)</b></p></td><td><p>STRING_AGG(...) OVER (PARTITION BY ... ORDER BY ...)</p></td><td><p>Same as Oracle</p></td></tr></tbody></table><p> </p><h3> </h3><h3 id="postgresql">LISTAGG function support in PostgreSQL</h3><p>Like other databases, PostgreSQL also has many <a href="http://www.postgresql.org/docs/current/sql-expressions.html">string aggregation expressions and functions</a> with an <i>order_by_clause</i> and <i>filter_clause</i>. For string aggregation like Oracle’s LISTAGG, there are two functions, <a href="http://www.postgresql.org/docs/12/functions-aggregate.html">STRING_AGG and ARRAY_AGG</a>. These don’t cover all the functionality of Oracle’s LISTAGG, but they do offer the fundamental functionality of string aggregation.</p><p>The PostgreSQL ARRAY_AGG function can be used for string aggregation, but we need to make another call on top of it to convert the aggregation result set from ARRAY to STRING with the ARRAY_TO_STRING function. There is a slight performance overhead for a big aggregation result set, so use of the STRING_AGG function is recommended instead. </p><p>The PostgreSQL STRING_AGG function has more limited functionality: it does not cover the handling of a <i>“result of string concatenation is too long”</i> error like the Oracle clauses ON OVERFLOW ERROR, ON OVERFLOW TRUNCATE, or ON OVERFLOW TRUNCATE ... WITHOUT COUNT. 
However, STRING_AGG returns the BYTEA/TEXT data type, which is big enough to handle a large string aggregation result set.</p><p> </p><h3>PostgreSQL Syntax</h3><pre><code class="language-sql">postgres=# \df string_agg
                            List of functions
   Schema   |    Name    | Result data type | Argument data types | Type
------------+------------+------------------+---------------------+------
 pg_catalog | string_agg | bytea            | bytea, bytea        | agg
 pg_catalog | string_agg | text             | text, text          | agg
(2 rows)</code></pre><h3> </h3><h3 id="EDBPostgres">LISTAGG function in EDB Postgres Advanced Server (Oracle Compatibility database)</h3><p>The latest EDB Postgres Advanced Server (Oracle compatibility) version 12 supports two aggregate functions: LISTAGG and MEDIAN.</p><p><a href="http://suntzusound.com/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/12/EPAS_Release_Notes.1.4.html#">EDB Postgres Release Notes</a></p><p>EDB Postgres Advanced Server is an enhanced version of open source PostgreSQL with various added functionalities along with Oracle database compatibility. The Oracle database compatibility layer in EDB Postgres Advanced Server directly supports the LISTAGG function, which means we can execute the same Oracle function calls in EDB Postgres Advanced Server without any change.</p><p> </p><h4>EDB Postgres Advanced Server Syntax</h4><pre><code class="language-sql">LISTAGG( measure_expr [, delimiter ])
WITHIN GROUP( order_by_clause ) [ OVER query_partition_by_clause ]</code></pre><p> </p><p>Let’s run some of the supported functionalities in PostgreSQL and EDB Postgres Advanced Server for comparison.</p><p> </p><h2 id="exampleorder">Example of LISTAGG(...) 
WITHIN GROUP (ORDER BY ...)</h2><h3> </h3><h3>In Oracle</h3><pre><code class="language-sql">SQL&gt; SELECT deptno, LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS employees
FROM   emp
GROUP BY deptno
ORDER BY deptno;

    DEPTNO EMPLOYEES
---------------------------------------------------------------------------
        10 CLARK,KING,MILLER
        20 ADAMS,FORD,JONES,SCOTT,SMITH
        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD</code></pre><p> </p><h3>In PostgreSQL 12 (Using STRING_AGG &amp; ARRAY_AGG)</h3><pre><code class="language-sql">postgres=# SELECT deptno, STRING_AGG(ename, ',' ORDER BY ename) AS employees
FROM   emp
GROUP BY deptno;

 deptno |              employees
--------+--------------------------------------
     10 | CLARK,KING,MILLER
     20 | ADAMS,FORD,JONES,SCOTT,SMITH
     30 | ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
(3 rows)

-- ARRAY_AGG model
postgres=# select deptno, array_agg(ename order by ename) from emp group by deptno;

 deptno |               array_agg
--------+----------------------------------------
     10 | {CLARK,KING,MILLER}
     20 | {ADAMS,FORD,JONES,SCOTT,SMITH}
     30 | {ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD}
(3 rows)

-- Converting the ARRAY_AGG result to a string using ARRAY_TO_STRING
postgres=# select deptno, array_to_string(array_agg(ename order by ename), ',') from emp group by deptno;

 deptno |           array_to_string
--------+--------------------------------------
     10 | CLARK,KING,MILLER
     20 | ADAMS,FORD,JONES,SCOTT,SMITH
     30 | ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
(3 rows)</code></pre><h3> </h3><h3>In EDB Postgres Advanced Server</h3><pre><code class="language-sql">edb=# SELECT deptno, LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS employees
edb-# FROM   emp
edb-# GROUP BY deptno
edb-# ORDER BY deptno;

 deptno |              employees
--------+--------------------------------------
     10 | CLARK,KING,MILLER
     20 | ADAMS,FORD,JONES,SCOTT,SMITH
     30 | ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
(3 rows)</code></pre><h3> </h3><h2 id="examplepartition">Example of 
LISTAGG(...) WITHIN GROUP (ORDER BY ...) OVER (PARTITION BY ...) </h2><h3> </h3><h3>In Oracle</h3><pre><code class="language-sql">SQL&gt; SELECT distinct DEPTNO,
            LISTAGG( ENAME, ',' ) WITHIN GROUP (ORDER BY ENAME) OVER(PARTITION BY DEPTNO) as employees
     FROM EMP
     order by deptno;

    DEPTNO EMPLOYEES
---------- ---------------------------------------------------------------------
        10 CLARK,KING,MILLER
        20 ADAMS,FORD,JONES,SCOTT,SMITH
        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD</code></pre><h3> </h3><h3>In PostgreSQL 12</h3><pre><code class="language-sql">postgres=# SELECT distinct DEPTNO, STRING_AGG( ENAME, ',') OVER(PARTITION BY DEPTNO) as employees
           FROM EMP
           group by deptno, ename
           order by deptno;

 deptno |              employees
--------+--------------------------------------
     10 | KING,MILLER,CLARK
     20 | SCOTT,ADAMS,FORD,JONES,SMITH
     30 | WARD,BLAKE,ALLEN,MARTIN,TURNER,JAMES
(3 rows)</code></pre><h3> </h3><h3>In EDB Postgres</h3><pre><code class="language-sql">edb=# SELECT distinct DEPTNO,
edb-#        LISTAGG( ENAME, ',' ) WITHIN GROUP (ORDER BY ENAME) OVER(PARTITION BY DEPTNO) as employees
edb-# FROM EMP
edb-# order by deptno;

 deptno |              employees
--------+--------------------------------------
     10 | CLARK,KING,MILLER
     20 | ADAMS,FORD,JONES,SCOTT,SMITH
     30 | ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD
(3 rows)</code></pre><p> </p><p>While moving from Oracle to Postgres isn't always straightforward, EDB Postgres Advanced Server makes it easier to migrate. If you need assistance handling more complex migration issues, connect with <a href="http://suntzusound.com/services-support">EDB support services</a>. We can provide you with a migration assessment and help you migrate to PostgreSQL or EDB Postgres Advanced Server. 
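</p><p>One gap worth calling out: neither PostgreSQL nor EDB Postgres Advanced Server offers Oracle's ON OVERFLOW TRUNCATE clauses, but the behavior can be approximated in application code after aggregating. The sketch below is illustrative Python under our own assumptions - the helper name and the exact ellipsis-plus-count format are ours, not an Oracle or Postgres API:</p>

```python
def truncating_join(values, delimiter=",", max_len=4000, literal="..."):
    """Join values up to max_len characters; on overflow, stop at a whole
    item and append the truncation literal plus a count of the omitted
    items, roughly mimicking ON OVERFLOW TRUNCATE ... WITH COUNT."""
    kept, used = [], 0
    for i, v in enumerate(values):
        add = len(v) + (len(delimiter) if kept else 0)
        if used + add > max_len:
            omitted = len(values) - i
            return delimiter.join(kept) + delimiter + literal + f"({omitted})"
        kept.append(v)
        used += add
    return delimiter.join(kept)

names = ["ALLEN", "BLAKE", "JAMES", "MARTIN"]
print(truncating_join(names, max_len=12))  # ALLEN,BLAKE,...(2)
```

<p>A WITHOUT COUNT variant would simply drop the trailing count. Doing the truncation in the application keeps the SQL itself portable between PostgreSQL and EPAS.</p><p>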
</p><h3> </h3> Thu, 19 Dec 2019 14:17:35 -0500 Raghavendra Rao http://suntzusound.com/blog/how-workaround-oracle-listagg-function-postgresql EnterpriseDB Blog http://suntzusound.com/blog/happy-holidays <p><img alt="" src="/sites/default/files/happy_holidays_social.png" style="width: 100%;" /></p><p> </p><p>All of us at EDB would like to express our gratitude to our customers, and let you know that, in 2020, we’re going to stay laser-focused on helping you succeed with Postgres. </p><p>We’d also like to thank our friends, colleagues, and partners in the Postgres community for their support. Postgres is the strongest it’s ever been in terms of end user adoption and technology momentum--together, we made that happen. </p><p>To everyone, we wish you a happy, peaceful holiday season!</p><p> </p> Thu, 19 Dec 2019 08:29:13 -0500 http://suntzusound.com/blog/can-pgbouncer-handle-failover-new-machine <p><img alt="" src="/sites/default/files/photo-data-transfer_0.jpg" style="width: 100%;" /></p><h2><b>Part Three: PgBouncer and persistent connections during downtime</b></h2><p><i>(This is part three of my technical response to a series of questions about the use of PgBouncer and what you need to look out for. Part two can be found </i><a href="http://suntzusound.com/blog/pgbouncer-connection-pooling-what-do-when-persistent-connectivity-lost"><i>here</i></a><i>)</i></p><p>So, now we have successfully renegotiated a one-minute network timeout and a server restart, all within a single PgBouncer session, and have done so silently. To a user in an application, there may at worst be an error message; however, the transaction can be completed at the next try <i>in the same session</i>, and, under most circumstances, there will only be a delay in the transaction reporting back as complete to the application. How the application deals with that delay is up to the application, not PgBouncer.</p><p>But what about failover? 
What if an entire machine goes down and another is brought up to replace it? Surely PgBouncer can’t cope with that. It is, after all, a completely different machine.</p><p>Well, actually, it can. If you have two or more databases in streaming replication, you can use an EDB tool called EDB Failover Manager (EFM) with Virtual IP (VIP) capability. This tool allows for transparent and automatic failover, or manual switchover - commonly used during maintenance of databases in a cluster - between the databases in replication. Connections to the VIP are later re-established on the new master.</p><p>You can connect PgBouncer to the VIP, rather than the IP address of the box itself. Following a change of master, PgBouncer will reconnect to the new master and to what <i>looks like</i> the same database, as if there had simply been a network outage.</p><p>To prove it, we’ll first need to set up a more complicated architecture.</p><p>We need to set up (at least) two more servers. One server will be a standby to our current database, and the database we have been using will be our initial master database. We will set up streaming replication between these databases and then set up EFM nodes to control them. Then we will use EFM and its VIP capability to fail over between them, and, by connecting PgBouncer to the VIP rather than the box itself, we will mimic a network outage rather than a complete change of machine.</p><p>The other server will be the EFM witness node. The witness node’s role is to sit in an EFM cluster and maintain quorum between the other node, or nodes, in the cluster. In this way EFM avoids what is called “split brain,” where two or more nodes in the cluster think they are the master database at the same time. 
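</p><p>The quorum idea itself is simple majority voting; this tiny, illustrative Python check (our own sketch, not EFM’s actual implementation) shows why a third, witness node matters in a two-database cluster:</p>

```python
def has_quorum(agreeing_nodes, cluster_size):
    """An action is allowed only when a strict majority of the EFM
    cluster agrees on it; the witness exists to break 1-vs-1 ties."""
    return agreeing_nodes > cluster_size // 2

# Master + standby + witness = 3 nodes.
print(has_quorum(2, 3))  # True:  two surviving nodes can agree to promote
print(has_quorum(1, 3))  # False: an isolated node cannot declare itself master
```

<p>With only two nodes, a network split leaves each side with exactly half the votes, so neither side could ever safely act - hence the lightweight witness.</p><p>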
It is worth noting that having no master node in a cluster is considered preferable to having two - or more. Each extra master is an extra endpoint for applications to connect and write to, which makes consolidating the data when rebuilding the new master after a failure orders of magnitude more difficult.</p><p>Think of the EFM witness node as a watchdog that makes sure everything goes smoothly and that at least two EFM nodes agree on what action to take between them. The EFM witness node itself isn't associated with a database. Also, because it's relatively lightweight, it's often bundled with other EDB products outside of the main databases and their own EFM nodes. </p><p>To set up two EPAS databases in streaming replication, follow the instructions here:</p><p><a href="http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/configuring_streaming_replication.html">http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/configuring_streaming_replication.html</a></p><p>To set up a master, standby, and witness EFM cluster, follow the instructions here:</p><p><a href="http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/tutorial.html">http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/tutorial.html</a></p><p>Take note to open ports 7800:7810, and set up your VIP as shown here:</p><p><a href="http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/using_vip_addresses.html">http://suntzusound.com/edb-docs/d/edb-postgres-failover-manager/user-guides/user-guide/3.6/using_vip_addresses.html</a></p><p>So, having set up our EFM cluster with VIP support, let’s check that everything is ready to go:</p><pre><code class="language-sql"># ./efm cluster-status efm
Cluster Status: efm

Agent Type  Address              Agent  DB       
VIP-----------------------------------------------------------------------Master      192.168.15.6         UP     UP       192.168.15.200*Standby     192.168.15.8         UP     UP       192.168.15.200Witness     192.168.15.9         UP     N/A      192.168.15.200Allowed node host list:192.168.15.9 192.168.15.6 192.168.15.8Membership coordinator: 192.168.15.9Standby priority host list:192.168.15.8Promote Status:DB Type     Address              WAL Received LSN   WAL Replayed LSN   Info---------------------------------------------------------------------------Master      192.168.15.6                            0/50022F0          Standby     192.168.15.8         0/50022F0          0/50022F0          Standby database(s) in sync with master. It is safe to promote.</code></pre><p> </p><p>The databases are in sync and ready to fail over, and the VIP is visible across all three nodes in the EFM cluster. Although, as we can see from the asterisk (*) on the Master row, only the master database is using the VIP right now.</p><p>So, connecting PgBouncer to the VIP:</p><pre><code class="language-sql">$ grep pgb /etc/pgbouncer/pgbouncer.inipgb = dbname=pgb host=192.168.15.200 port=5444</code></pre><p> </p><p>Restart PgBouncer, and attempt to connect to the master database (currently using the 192.168.15.200 VIP address in my example).</p><pre><code class="language-sql">$ psql -U pgb -h 127.0.0.1 -p 6543 pgbpgb=# select * from test; x --- 1 2 3 4(4 rows)</code></pre><p> </p><p>Connecting to the VIP works!</p><p>Now we will switch over to the standby (which will become the new master) using EFM, and the master will become a standby. We do this by using the EFM promote command, which is useful during maintenance, for example.</p><pre><code class="language-sql"># ./efm promote efm -switchoverPromote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. 
Run the 'cluster-status' command for information about the new cluster state.</code></pre><p> </p><p>Great. This was accepted by EFM, and the roles of the databases have swapped. We can now check that the databases are still in sync.</p><pre><code class="language-sql"># ./efm cluster-status efm
Cluster Status: efm

Agent Type  Address              Agent  DB       VIP
-----------------------------------------------------------------------
Standby     192.168.15.6         UP     UP       192.168.15.200
Master      192.168.15.8         UP     UP       192.168.15.200*
Witness     192.168.15.9         UP     N/A      192.168.15.200

Allowed node host list:
  192.168.15.9 192.168.15.6 192.168.15.8

Membership coordinator: 192.168.15.6

Standby priority host list:
  192.168.15.6

Promote Status:

DB Type     Address              WAL Received LSN   WAL Replayed LSN   Info
---------------------------------------------------------------------------
Master      192.168.15.8                            0/60001A8
Standby     192.168.15.6         0/60001A8          0/60001A8

Standby database(s) in sync with master. It is safe to promote.</code></pre><p> </p><p>And they are. The VIP is now held by the new master database, and the write-ahead logs are in sync. 
But did PgBouncer survive?</p><p>From the same PgBouncer session as before:</p><pre><code class="language-sql">pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)</code></pre><p> </p><p>So, we have negotiated a controlled switchover using EFM and the VIP capability between two completely different machines, and as far as PgBouncer is concerned, the session is still active as if nothing had happened.</p><p>In the next blog, I will show how to simulate a complete database outage and what this means for a PgBouncer session.</p><p> </p> Tue, 17 Dec 2019 11:13:19 -0500 Philip Hankinson http://suntzusound.com/blog/can-pgbouncer-handle-failover-new-machine EnterpriseDB Blog http://suntzusound.com/blog/postgresql-vs-mysql-360-degree-comparison <style type="text/css"><!--/*--><![CDATA[/* ><!--*/.summary{background:#f3f7f9; padding:20px;}/*--><!]]>*/</style><p><img alt="" src="/sites/default/files/depositphotos_84130150_xl-2015_0.jpg" style="width: 100%;" /></p><p> </p><div class="summary"><p>ABSTRACT: This article compares and contrasts various features of MySQL and PostgreSQL in detail and discusses which is the better choice for applications today.</p><p>1. <a href="#postgresql">Why use PostgreSQL?</a></p><p>2. <a href="#mysql">Why use MySQL?</a></p><p>3. <a href="#use">Ease of Use</a></p><p>4. <a href="#syntax">Syntax</a></p><p>5. <a href="#data">Data types</a></p><p>6. <a href="#replication">Replication &amp; Clustering</a></p><p>7. <a href="#views">Views</a></p><p>8. <a href="#triggers">Triggers</a></p><p>9. <a href="#procedures">Stored Procedures</a></p><p>10. <a href="#query">Query</a></p><p>11. <a href="#partitioning">Partitioning</a></p><p>12. <a href="#scalability">Table Scalability (Storage capacity per column, row, table, etc.)</a></p><p>13. <a href="#nosql">NoSQL capabilities</a></p><p>14. <a href="#security">Security</a></p><p>15. <a href="#analytical">Analytical Functions</a></p><p>16. 
<a href="#administration">Administration (GUI tools)</a></p><p>17. <a href="#performance">Performance</a></p><p>18. <a href="#adoption">Adoption</a></p><p>19. <a href="#environments">Best Environments / Stacks</a></p></div><p> </p><h2>PostgreSQL vs. MySQL: Which to Choose? </h2><p>PostgreSQL and MySQL are both immensely popular open-source databases, and a variety of real-time applications today utilize both. MySQL is known as the world’s most popular database, whereas PostgreSQL is known as the world’s most advanced RDBMS. MySQL is not fully SQL compliant and lacks many of the features PostgreSQL offers, which is one reason PostgreSQL has become a great choice for developers, with popularity that continues to grow.</p><p>MySQL was taken over by Oracle a few years ago, resulting in two versions: enterprise and open source. This arrangement has attracted a lot of criticism from MySQL users because Oracle now controls MySQL development.</p><p>PostgreSQL is one of the most preferred databases in the world due to its exhaustive list of enterprise-grade features and capabilities. PostgreSQL is developed by a global community of developers who have been working for decades to ensure that PostgreSQL remains feature-rich and highly competitive with other open-source and commercial databases. The community also receives significant contributions from various companies around the world.</p><p> </p><h2 id="postgresql">Why PostgreSQL?</h2><p>PostgreSQL is an open-source, feature-rich ORDBMS that competes with top-ranked databases such as Oracle. Developers also choose PostgreSQL as their NoSQL database. PostgreSQL makes setting up and using databases simple, both on premises and in the cloud. In an environment with a large number of databases on a private or public cloud, automating the building of PostgreSQL instances can save a lot of time. 
It is also being widely adopted across platforms, including Docker containers.</p><h3>What Kind of Applications? </h3><p>PostgreSQL is an open-source database that is fully ACID compliant, enterprise grade, and developer- and DBA-friendly. PostgreSQL is a strong choice for high-transaction, complex applications in any domain and can support a variety of web- and mobile-based application services. PostgreSQL can also serve as a very good data warehouse for running complex reporting queries and procedures on large volumes of data.</p><p> </p><h2 id="mysql">Why MySQL?</h2><p>MySQL is available in both open-source and commercial versions; the commercial version is managed by Oracle. It is an RDBMS that is simple to set up and use but may not be suitable for applications requiring full SQL compliance. MySQL has significant gaps in SQL-standard support, making it better suited to simple web applications dealing with smaller volumes of data. MySQL’s integration capabilities are also limited, which makes it difficult to fit into heterogeneous database environments.</p><p> </p><h3>What Kind of Applications? </h3><p>MySQL is a partially SQL-compliant database suitable for simple web applications or any application that requires a simple schema design and performs data operations using simple SQL queries. MySQL is not a great choice for complex applications dealing with large volumes of data.</p><p> </p><h2 id="use">Ease of Use</h2><p>PostgreSQL is an easy-to-use database with a full stack of RDBMS features that can handle structured and unstructured data. Installation on Linux is straightforward using yum or from source code downloaded from the PostgreSQL website; installing from source gives you much more fine-grained control over the installation.</p><p>MySQL is also known for its ease of use. 
Installation and setup of MySQL environments is simple across operating systems. However, its limitations in SQL and database features compared with other databases can pose challenges when building efficient RDBMS applications.</p><p> </p><h2 id="syntax">Syntax</h2><p>SQL syntax is very similar across the two databases; the only caveat is that MySQL does not support all of it. All supported syntax is similar across both databases. We will discuss this further in the “Query” section below.</p><p>PostgreSQL query:</p><pre><code class="language-sql">SELECT * FROM employees;</code></pre><p> </p><p>MySQL query:</p><pre><code class="language-sql">SELECT * FROM employees;</code></pre><h2> </h2><h2 id="data">Data Types</h2><p>Both MySQL and PostgreSQL support many data types, ranging from traditional ones (e.g., Integer, Date, Timestamp) to complex ones (e.g., JSON, XML, TEXT). However, the two products differ in their ability to serve complex, real-time data search requirements. Let’s take a look at both.</p><p><b>PostgreSQL </b>not only supports traditional SQL data types (e.g., Numeric, String, Date, Decimal) but also unstructured data types (e.g., JSON, XML, and HSTORE) as well as network data types, bit strings, and more. What makes PostgreSQL stand out is its support for a wider range of data types, such as ARRAYs, network types, and geometric data types (including advanced spatial data functions) to store and process spatial data. Supported data types can be found <a href="http://www.postgresql.org/docs/11/datatype.html">here</a>. The support for spatial data types and functions comes from an external module called PostGIS, which is an open-source extension. 
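</p><p>As a small sketch of these richer types (the table and column names here are invented for the example): an ARRAY column holds multiple values in a single field, and a JSONB column can be indexed with GIN for fast containment searches.</p><pre><code class="language-sql">-- Illustrative only: a table mixing an ARRAY column and a JSONB column
CREATE TABLE contacts (
    id      serial PRIMARY KEY,
    name    text NOT NULL,
    phones  text[],          -- ARRAY of phone numbers
    profile jsonb            -- semi-structured attributes
);

-- GIN index to speed up JSONB containment searches
CREATE INDEX idx_contacts_profile ON contacts USING GIN (profile);

INSERT INTO contacts (name, phones, profile)
VALUES ('Alice', ARRAY['555-0100','555-0101'], '{"city": "Boston"}');

-- ANY() searches inside the array; @&gt; tests JSONB containment
SELECT name FROM contacts WHERE '555-0100' = ANY (phones);
SELECT name FROM contacts WHERE profile @&gt; '{"city": "Boston"}';</code></pre><p>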
</p><p><b>MySQL</b> supports <a href="http://dev.mysql.com/doc/refman/8.0/en/data-types.html">various data types</a> that help a variety of applications store and process data in different formats, including traditional types for Integers, Characters or Strings, Dates with Timestamps and Time Zones, Boolean, Float, Decimal, Large Text, and BLOB to store binary data (such as images). MySQL’s support for geometric data, however, is far more limited than what PostgreSQL offers with PostGIS.</p><p> </p><h2>JSON: PostgreSQL vs. MySQL </h2><p><b>PostgreSQL</b> started supporting JSON data types in version 9.2. JSON support in PostgreSQL is much more advanced than in MySQL: there are several JSON-specific operators and functions that make searches in JSON documents very efficient. The JSONB type, introduced in PostgreSQL 9.4, stores JSON in a binary format and supports GIN indexing, which makes searching within JSON documents much faster.</p><p><b>MySQL</b> only started supporting JSON data types recently (in Version 5.7), years later than PostgreSQL. JSON columns can be queried using SQL, and JSON attributes can be indexed as well. However, the support for JSON-specific functions is limited compared to PostgreSQL, and there is no support for full-text indexing on JSON columns. Given these limitations, MySQL may not be a great choice for storing and processing JSON data.</p><p> </p><h2 id="replication">Replication &amp; Clustering </h2><p>Both MySQL and PostgreSQL have replication and clustering capabilities and can distribute data operations horizontally. Let’s take a more detailed look.</p><p><b>MySQL</b> supports master-slave and master-to-multiple-slaves replication, in which data changes are replicated from a master to a slave database by replaying SQL statements. 
This is why replication can only be asynchronous, which can be disadvantageous in terms of performance and scalability.</p><p>The advantage of MySQL replication is that the slaves are not read-only: if an application fails over to a slave when the master database crashes, the slave can serve both reads and writes, so applications keep working seamlessly. DBAs then need to make sure the slave is taken out of slave mode and all changes are replicated back to the original master. However, replication can be slow when there are long-running SQL statements.</p><p>MySQL also supports NDB Cluster, a multi-master replication mechanism. This type of replication can benefit high-transaction environments demanding horizontal scalability, but there can be serious performance and latency issues if it is not implemented with care.</p><p><b>PostgreSQL</b> replication is very popular and reliable. Unlike MySQL, PostgreSQL bases its replication on WAL files, which makes it more reliable, faster, and easier to manage. Postgres supports master-slave and master-to-multiple-slaves configurations, including cascading replication. Replication in Postgres is termed streaming (or physical) replication and can be synchronous or asynchronous.</p><p>By default, replication is asynchronous, and slaves can serve read requests. If the application (such as a web application) demands that the data snapshot on slaves be identical to the master, synchronous replication helps. The caveat is that the master will hang if transactions are not committed on the slave.</p><p>Replication at the table level can be achieved using external open-source tools such as Slony, Bucardo, Londiste, and RubyRep, all of which perform trigger-based replication. PostgreSQL also supports logical replication, which performs table-level replication using WAL records and removes the complexity brought in by trigger-based replication. 
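</p><p>A minimal sketch of what built-in logical replication looks like in PostgreSQL 10 and later (the host, database, and object names here are purely illustrative):</p><pre><code class="language-sql">-- On the publishing (source) database:
CREATE PUBLICATION emp_pub FOR TABLE employees;

-- On the subscribing (target) database, which must already
-- contain a matching employees table:
CREATE SUBSCRIPTION emp_sub
    CONNECTION 'host=192.168.15.6 dbname=hr user=repuser'
    PUBLICATION emp_pub;</code></pre><p>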
Initially, logical replication was supported by an extension called <b>pglogical</b>; it has been part of the <b>PostgreSQL</b> core since version 10. </p><p> </p><h2 id="views">Views</h2><p><b>MySQL</b> supports views; the number of tables that can be referenced by the SQL underneath a view is limited to 61. Views are like virtual tables that do not store data physically, and materialized views are not supported in MySQL. Views created with simple SQL can be updated, while views created with complex SQL cannot.</p><p><b>PostgreSQL </b>support for views is partially similar to MySQL’s: views created with simple SQL can be updated, while views created with complex SQL cannot. However, there is a workaround for updating complex views using RULES. Also, if the data is to be stored physically, PostgreSQL supports materialized views, which can be REFRESHed and indexed as well.</p><p> </p><h2 id="triggers">Triggers</h2><p><b>MySQL </b>supports triggers for AFTER and BEFORE events on INSERT, UPDATE, and DELETE statements. Triggers cannot execute dynamic SQL statements or stored procedures.</p><p><b>PostgreSQL</b> triggers are more advanced: the supported triggering events are AFTER, BEFORE, and INSTEAD OF for INSERT, UPDATE, and DELETE. If you need to execute complex SQL when a trigger fires, you can do so through functions; triggers in PostgreSQL can execute functions dynamically. </p><pre><code class="language-sql">CREATE TRIGGER audit
AFTER INSERT OR UPDATE OR DELETE ON employee
    FOR EACH ROW EXECUTE FUNCTION employee_audit_func();</code></pre><h2> </h2><h2 id="procedures">Stored Procedures</h2><p>Stored procedures are an important part of databases and cater to complex data-extraction requirements. Developers often use stored procedures as part of their database development process. 
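</p><p>In PostgreSQL, a stored routine is typically written as a function; a minimal PL/pgSQL sketch (the table and names are invented for illustration):</p><pre><code class="language-sql">-- Illustrative only: a simple PL/pgSQL function
CREATE OR REPLACE FUNCTION log_hire(emp_name text)
RETURNS void AS $$
BEGIN
    INSERT INTO hire_log (name, hired_at)
    VALUES (emp_name, now());
END;
$$ LANGUAGE plpgsql;

SELECT log_hire('Alice');   -- invoked like any other function</code></pre><p>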
Both MySQL and PostgreSQL support stored procedures, but <a href="http://dev.mysql.com/doc/refman/8.0/en/stored-program-restrictions.html">MySQL</a> only supports standard SQL syntax, while PostgreSQL supports very advanced procedures.</p><p>Stored procedures are implemented in PostgreSQL in the form of functions with a RETURNS VOID clause. The part of this feature that developers love is its support for languages that MySQL does not support, such as Ruby, Perl (PL/Perl), Python (PL/Python), Tcl, PL/pgSQL, SQL, and JavaScript.</p><p> </p><h2 id="query">Query </h2><p>As mentioned earlier, <b>MySQL</b> is not a fully SQL-compliant database and does not support all SQL features, making it a tough choice for developers and a poor fit for data warehousing applications, where advanced and complex SQL is needed. </p><p>Here are some limitations to consider when choosing MySQL:</p><ul><li>Certain UPDATE results can be surprising and not per the SQL standard, as seen below:</li></ul><pre><code class="language-sql">mysql&gt; select * from test;
+------+------+
| c    | c1   |
+------+------+
|   10 |  100 |
+------+------+
1 row in set (0.01 sec)

mysql&gt; update test set c=c+1, c1=c;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql&gt; select * from test;
+------+------+
| c    | c1   |
+------+------+
|   11 |   11 |
+------+------+
1 row in set (0.00 sec)</code></pre><p> </p><p>Per the SQL standard, the expected result would be:</p><pre><code class="language-sql">mysql&gt; select * from test;
+------+------+
| c    | c1   |
+------+------+
|   11 |   10 |
+------+------+</code></pre><p><i> </i></p><ul><li>UPDATE and DELETE statements cannot SELECT from the table being modified in a subquery. 
Below is an example of a DELETE:</li></ul><pre><code class="language-sql">mysql&gt; delete from test where c in (select t1.c from test t1, test t2 where t1.c=t2.c);
ERROR 1093 (HY000): </code></pre><p> </p><ul><li>A LIMIT clause is not allowed in subqueries:</li></ul><pre><code class="language-sql">mysql&gt; select * from test where c in (select c from test2 where c&lt;3 limit 1);
ERROR 1235 (42000): </code></pre><p> </p><p>MySQL doesn't yet support “LIMIT &amp; IN/ALL/ANY/SOME subquery.”</p><p>MySQL also lacks standard SQL clauses such as FULL OUTER JOIN, INTERSECT, and EXCEPT, which are commonly used. Index types such as partial indexes, bitmap indexes, and expression indexes are also not supported, and these are important for speeding up queries.</p><p><b>PostgreSQL,</b> on the other hand, is a fully SQL-compliant database and supports the SQL standard’s features. Applications of pretty much any nature from any domain can use PostgreSQL as their database, which makes it a popular choice for OLTP, OLAP, and DWH environments. PostgreSQL is the best choice for developers who have to write complex SQL.</p><p> </p><h2 id="partitioning">Partitioning </h2><p>MySQL and PostgreSQL both support table partitioning to improve the performance of queries that hit large tables. However, there are certain limitations in both databases. </p><p><b>MySQL</b> supports declarative table partitioning; supported partition types are RANGE, LIST, HASH, KEY, and COLUMNS (RANGE and LIST), and SUBPARTITIONing is also supported. However, this feature may not be very flexible for DBAs because of certain limitations. Let’s take a look at them:</p><ul><li>From MySQL version 8.0, table partitioning is only possible with the InnoDB and NDB storage engines, not with other storage engines such as MyISAM.</li><li>If the partition key column is not part of all of the table’s PRIMARY and UNIQUE KEY constraints, the table cannot be partitioned at all. 
The only other option is to partition a table with no PRIMARY or UNIQUE KEYs, which is a rare occurrence in the RDBMS world.</li><li>Support for placing table partitions in tablespaces was removed in MySQL 5.7.24. This means DBAs cannot use table partitioning to balance disk I/O.</li></ul><pre><code class="language-sql">mysql&gt; create table emp (id int not null, fname varchar(30), lname varchar(30), store_id int not null)
    partition by range (store_id) (
        partition p0 values less than (6) tablespace tbs,
        partition p1 values less than (20) tablespace tbs1,
        partition p2 values less than (40) tablespace tbs2);
ERROR 1478 (HY000): InnoDB : A partitioned table is not allowed in a shared tablespace.
mysql&gt;</code></pre><p><b> </b></p><p><b>PostgreSQL</b> supports two types of table partitioning: partitioning by inheritance and declarative partitioning. Declarative partitioning, introduced in version 10, is similar to MySQL’s, whereas partitioning by inheritance is performed using triggers or rules. Performance benefits are significantly higher when an accurate partitioning strategy is implemented on tables with large volumes of data. The partitioning types supported are RANGE, LIST, and HASH. With the introduction of declarative partitioning, the complexity and performance challenges caused by triggers and rules are eradicated. 
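</p><p>A quick sketch of declarative range partitioning in PostgreSQL 10 and later (the table and partition names are invented for the example):</p><pre><code class="language-sql">-- Illustrative only: a range-partitioned table
CREATE TABLE measurements (
    city_id  int  NOT NULL,
    logdate  date NOT NULL
) PARTITION BY RANGE (logdate);

-- Each partition covers one year; rows are routed automatically
CREATE TABLE measurements_2019
    PARTITION OF measurements
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

CREATE TABLE measurements_2020
    PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');</code></pre><p>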
</p><p>Both types of partitioning in PostgreSQL<b> </b>have their benefits and limitations:</p><ul><li>Similar to MySQL, declarative partitions in PostgreSQL can only be implemented if the partition key column is part of all of the table’s PRIMARY and UNIQUE KEY constraints.</li><li>When partitioning a table by inheritance, child tables cannot inherit PRIMARY KEY or UNIQUE KEY constraints from the master table.</li><li>With inheritance partitioning, INSERTs and UPDATEs on the master table are not automatically redirected to child tables; triggers or rules must be implemented for that, and likewise for automatically creating new partitions.</li></ul><p> </p><h2 id="scalability">Table Scalability </h2><p>As tables grow, they can pose performance problems: queries against them take more resources and time to complete. Efficient table design is therefore important for good performance, and MySQL and PostgreSQL offer different options here.</p><p><b>MySQL</b> supports B-Tree indexing and partitioning to improve query performance on larger tables. However, the lack of support for bitmap, partial, and functional indexes leaves DBAs with limited tuning options. Partitioning larger tables can improve performance, but partitioned tables in MySQL cannot be placed in general tablespaces, which is a serious showstopper for I/O balancing.</p><p><b>PostgreSQL</b> has several indexing options and two partitioning options to keep large tables performing well. Expression indexes, partial indexes, and full-text indexes, along with bitmap index scans, can significantly improve query performance on larger tables. 
In PostgreSQL, table partitions and indexes can be placed in separate tablespaces on different disk file systems, which can greatly improve table scalability as well.</p><p>To achieve horizontal, multi-node table scalability with Postgres (i.e., sharding), commercially developed products based on Postgres, such as CitusDB, Greenplum, and IBM Netezza, are the way to go. Open-source Postgres itself does not support sharding; Postgres-XC is an option but is not popular due to its performance and maintenance overhead.</p><p> </p><h2>Storage</h2><p>Data storage is a critical capability of any database, and PostgreSQL and MySQL provide various options for it. Storing data means storing the physical database objects, such as tables and indexes, on disk. This section focuses on two types of storage options: common storage and pluggable storage. </p><p><b>PostgreSQL </b>has a common storage feature called tablespaces, which can accommodate physical objects such as tables, indexes, and materialized views. With tablespaces, objects can be grouped and stored across multiple physical locations, which helps distribute I/O efficiently. PostgreSQL does not support pluggable storage engines, although this is expected in future releases. </p><p><b>MySQL</b>, similar to PostgreSQL, has a tablespaces feature (part of the InnoDB engine) that helps DBAs group and store physical objects (tables, indexes, etc.) and distribute I/O. MySQL also supports pluggable storage engines, which can help the database meet specific storage requirements for various types of applications, such as OLTP and data warehousing. This is one of the biggest advantages of using MySQL as a database. 
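</p><p>As a short sketch of how this looks in practice (the table names are invented, and engine availability depends on the build and installed plugins), the storage engine is chosen per table at creation time:</p><pre><code class="language-sql">-- Illustrative only: picking a storage engine per table in MySQL
CREATE TABLE session_cache (
    k VARCHAR(64) PRIMARY KEY,
    v VARCHAR(255)
) ENGINE = MEMORY;          -- volatile, in-memory storage

CREATE TABLE orders (
    id    INT PRIMARY KEY,
    total DECIMAL(10,2)
) ENGINE = InnoDB;          -- transactional storage engine</code></pre><p>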
The pluggable storage feature is enabled by installing plugins, and although configuring pluggable storage involves some complexity, applications are completely oblivious to it.</p><p> </p><h2 id="nosql">Data Models Supported</h2><p>NoSQL capabilities in an RDBMS help deal with unstructured data such as JSON, XML, and other TEXT data types. </p><p><b>MySQL</b> has limited NoSQL capabilities. The JSON data type was introduced in MySQL in Version 5.7 and has a long way to go to become mature.</p><p><b>PostgreSQL</b> has become a very popular NoSQL choice for developers in the last three years and has extensive JSON capabilities. With the JSON and JSONB data types, JSON-based data operations can be significantly faster and more efficient, and JSON data can be indexed with B-Tree and GIN indexes for improved searches. The XML and HSTORE data types can handle XML and other complex text-format data as well. With its support for spatial data types, PostgreSQL is without doubt a complete multi-model database.</p><p> </p><h2 id="security">Security</h2><p>Database security plays a vital role in protecting data from unauthorized access. Secured access is implemented at various levels in the database, including the object level and the connection level.</p><p><b>MySQL</b> grants access to the database, objects, and connections via ROLES and PRIVILEGES. 
Every user must be granted a connection privilege, using SQL, for every IP address from which the user connects, or for a range of addresses at once using a wildcard.</p><p>The command below grants all privileges on all objects in the database “testdb” to user “testuser” connecting from the IP “192.168.1.1”:</p><pre><code class="language-sql">GRANT ALL PRIVILEGES ON testdb.* TO 'testuser'@'192.168.1.1' IDENTIFIED BY 'newpassword';</code></pre><p> </p><p>If the user connects from any IP in the 192.168.1 network, this command works (MySQL uses % as the host wildcard):</p><pre><code class="language-sql">GRANT ALL PRIVILEGES ON testdb.* TO 'testuser'@'192.168.1.%' IDENTIFIED BY 'newpassword';</code></pre><p> </p><p>Every time the privilege is granted, a password must be specified; otherwise, the user cannot connect.</p><p><b>MySQL</b> also supports SSL-based connections over the network and provides security based on SE-Linux modules. Integration with external authentication systems, such as LDAP and PAM, is part of the MySQL enterprise edition.</p><p><b>PostgreSQL</b> provides access to database objects and data via ROLES and PRIVILEGES using GRANT commands. Connection authentication is handled more simply, via the <b>pg_hba.conf </b>authentication file, which lists IP addresses along with usernames and types of access. This is simpler and quite reliable. Below is a sample entry from a pg_hba.conf file:</p><pre><code class="language-sql">host   database  user  address  auth-method  [md5 or trust or reject]</code></pre><p> </p><p>The open-source version of Postgres supports SSL-based connections and can also be integrated with external authentication systems, including LDAP, Kerberos, and PAM, which are efficient and reliable. </p><h2 id="analytical">Analytical Functions</h2><p>Analytical functions perform aggregation on a set of rows. There are two types: window functions and aggregate functions. 
Aggregate functions return a single aggregate value for a set of rows (e.g., sum, avg, min, max), whereas window functions return a value for each row, computed over that row’s window. Both MySQL and PostgreSQL support various analytical functions. MySQL gained window functions only in Version 8.0, whereas PostgreSQL has supported them for a long time.</p><p> </p><p><b>PostgreSQL</b> supports the window functions below: </p><table border="1" cellpadding="0" cellspacing="0"><tbody><tr><td valign="top"><p><b>Name of the function</b></p></td><td valign="top"><p><b>Description</b></p></td></tr><tr><td valign="top"><p>CUME_DIST</p></td><td valign="top"><p>Return the cumulative distribution of the current row.</p></td></tr><tr><td valign="top"><p>DENSE_RANK</p></td><td valign="top"><p>Rank the current row within its partition, without gaps.</p></td></tr><tr><td valign="top"><p>FIRST_VALUE</p></td><td valign="top"><p>Return a value evaluated against the first row within its partition.</p></td></tr><tr><td valign="top"><p>LAG</p></td><td valign="top"><p>Return a value evaluated at the row that is a specified physical offset before the current row within the partition.</p></td></tr><tr><td valign="top"><p>LAST_VALUE</p></td><td valign="top"><p>Return a value evaluated against the last row within its partition.</p></td></tr><tr><td valign="top"><p>LEAD</p></td><td valign="top"><p>Return a value evaluated at the row that is a specified physical offset after the current row within the partition.</p></td></tr><tr><td valign="top"><p>NTILE</p></td><td valign="top"><p>Divide the rows in a partition as equally as possible and assign each row an integer from 1 to the argument value.</p></td></tr><tr><td valign="top"><p>NTH_VALUE</p></td><td valign="top"><p>Return a value evaluated against the nth row in an ordered partition.</p></td></tr><tr><td valign="top"><p>PERCENT_RANK</p></td><td valign="top"><p>Return the relative rank of the current row: (rank - 1) / (total rows - 1).</p></td></tr><tr><td valign="top"><p>RANK</p></td><td valign="top"><p>Rank the current row within its partition, with gaps.</p></td></tr><tr><td valign="top"><p>ROW_NUMBER</p></td><td valign="top"><p>Number the current row within its partition, starting from 1.</p></td></tr></tbody></table><p> </p><p><b>MySQL</b> supports pretty much all the window functions that PostgreSQL supports, with the following limitations:</p><ul><li>Window functions cannot be used as part of UPDATE or DELETE statements.</li><li>DISTINCT is not supported with window functions.</li><li>Nested window functions are not supported.</li></ul><p> </p><h2 id="administration">Administration (GUI Tools)</h2><p><b>MySQL</b> databases can be remotely accessed and managed using Oracle’s SQL Developer, MySQL Workbench, DBeaver, OmniDB, and so on. For monitoring database performance and health, tools such as Nagios, Cacti, and Zabbix are popular.</p><p><b>PostgreSQL</b> can be managed through a GUI using Oracle’s SQL Developer, pgAdmin, OmniDB, and DBeaver. Popular tools for monitoring PostgreSQL health and performance are Nagios, Zabbix, and Cacti.</p><p> </p><h2 id="performance">Performance</h2><p><b>MySQL </b>offers limited performance-optimization options, and many index types are not supported. Without full SQL compliance, writing efficient, well-performing SQL queries can be a challenge. MySQL is also not the best choice for large volumes of data: using tablespaces to spread data across multiple disks is difficult, since tablespaces are only supported in InnoDB and cannot accommodate table partitions. Simple queries against tables can be made faster by creating B-Tree indexes.</p><p><b>PostgreSQL</b> is highly suitable for any kind of workload - OLTP, OLAP, data warehousing, and more. It also fully complies with SQL standards, so it is possible to write efficient queries and PL/pgSQL programs. 
With its support for a wide range of index types, including B-tree, Hash, GiST, GIN, and BRIN, as well as partial and expression indexes, performance can be greatly improved. Re-indexing and re-organizing tables to remove bloat can also be performed online. Postgres has several configuration options for allocating memory to the database and to queries, and partitioned tables can be placed across multiple tablespaces to balance disk I/O.</p><p> </p><h2 id="adoption">Adoption</h2><p><b>PostgreSQL</b> is the world's most advanced open source database. Businesses around the world use PostgreSQL for mission-critical workloads. The PostgreSQL community, together with companies such as EnterpriseDB and 2ndQuadrant, is making sure that PostgreSQL adoption continues to expand globally.</p><p><b>MySQL</b> is not the best choice for demanding RDBMS or ORDBMS applications. Since Oracle’s acquisition of MySQL, its adoption has declined significantly, and development progress in the open-source space has slowed as well, inviting criticism from MySQL users. </p><p> </p><h2 id="environments">Best Environments / Stacks</h2><p>A stack is a packaged collection of applications, operating systems, and database technologies that helps you easily build web applications.</p><p>Both PostgreSQL and MySQL are part of stacks used by various organizations and service providers. While MySQL is popular in the LAMP stack, PostgreSQL is popular in the LAPP stack. The LAPP stack stands for <b>L</b>inux, <b>A</b>pache, <b>P</b>ostgres, and <b>P</b>HP/Python, and is gaining in popularity.</p><p>The LAMP stack stands for <b>L</b>inux, <b>A</b>pache, <b>M</b>ySQL/<b>M</b>ongoDB, and <b>P</b>HP/<b>P</b>ython.</p><p>The LAPP stack is the natural option if the requirement is to use PostgreSQL, and developers can leverage both the NoSQL and RDBMS capabilities of PostgreSQL. Large platform service providers, such as Amazon and VMware, also provide services with readily installed LAPP stack modules. 
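As a concrete illustration of the partial indexes mentioned under Performance above, the sketch below uses Python’s built-in sqlite3 module as a stand-in (both SQLite and PostgreSQL accept CREATE INDEX ... WHERE; the orders table is hypothetical). A partial index covers only the rows the hot queries care about, so it stays small while still letting the planner avoid a full scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
con.executemany("INSERT INTO orders (status, total) VALUES (?, ?)",
                [("shipped", 10.0)] * 5 + [("pending", 25.0)] * 2)

# Index only the rows that are still being worked on.
con.execute("CREATE INDEX idx_pending ON orders (status) "
            "WHERE status = 'pending'")

# A query whose WHERE clause matches the index predicate can use the index.
plan = " ".join(row[-1] for row in con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"))
print(plan)
```

The reported plan should mention idx_pending. The same CREATE INDEX statement works in PostgreSQL, where EXPLAIN would likewise show an index scan for matching queries.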
</p><p> </p><h2>Conclusion</h2><p>Undoubtedly, <b>PostgreSQL</b> stands as the best-choice database due to its rich feature set and the aggressive development efforts of the PostgreSQL community. Organizations in almost every domain are adopting PostgreSQL for new applications, as well as looking to migrate their legacy applications to it. If you are migrating away from a legacy Oracle database and want to accomplish this task in days instead of months, see <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-advanced-server">EDB Postgres Advanced Server</a>. EDB Postgres Advanced Server is an enhanced Postgres database with Oracle compatibility and enterprise security features. </p><p> </p> Wed, 11 Dec 2019 15:25:17 -0500 Pankil Shah http://suntzusound.com/blog/postgresql-vs-mysql-360-degree-comparison EnterpriseDB Blog http://suntzusound.com/blog/edb-postgres-platform-12-available-now <style type="text/css"><!--/*--><![CDATA[/* ><!--*/h2 {text-align: left !important;}/*--><!]]>*/</style><p><img alt="" src="/sites/default/files/email_banner-epas_12_release_1.png" style="width: 100%;" /></p><p> </p><p>Today, we are announcing the general availability of EDB Postgres Platform 12. This release includes all of the updates in open source PostgreSQL 12 AND new customer-requested features added to EDB Postgres. </p><p>Every year, the PostgreSQL community publishes one major release. We endeavor to upgrade our platform to support new open source features within a six- to eight-week window after the community release. This is a major engineering effort, and we’re proud to have achieved our goal of updating the EDB Postgres Platform, yet again. 
This gives our customers access to the latest PostgreSQL features.</p><h2><b>What’s in EDB Postgres Platform 12?</b></h2><p>Improvements to the EDB Postgres Platform include:</p><p><a href="http://suntzusound.com/enterprise-postgres/edb-postgres-advanced-server"><b>EDB Postgres Advanced Server</b></a> is our flagship product, an enhanced version of PostgreSQL that adds enterprise security, performance diagnostics, Oracle database compatibility and productivity features for developers and DBAs. New features include automatic interval partitioning, compound triggers, statistical functions, and user views. Partitions can automatically be created based on a time interval or a numeric sequence. This is a great way to quickly access historical data such as monthly records. Compound triggers reduce manual work and increase productivity by executing DML actions before/after the statement and before/after each table row. New functions (MEDIAN, LISTAGG) calculate medians and concatenate values; system views help DBAs identify user privileges and dependencies.</p><p><a href="http://suntzusound.com/enterprise-postgres/edb-postgres-enterprise-manager-pem"><b>EDB Postgres Enterprise Manager</b> (PEM)</a> is a GUI-based management tool that combines monitoring, alerting and tuning of multiple open source PostgreSQL and/or EDB Postgres Advanced Server clusters. The new job scheduler allows DBAs to automate repetitive/routine tasks with support for exception/blackout timeframes along with alert emails when a job completes. The new “Explain” tab in the PEM Query Tool allows DBAs and developers to get insight into slow-performing queries by visually identifying the problem.</p><p><a href="http://suntzusound.com/enterprise-postgres/edb-postgres-backup-and-recovery-tool"><b>EDB Backup and Recovery Tool</b></a> is our disaster recovery solution. Database backups can now be performed with the click of a button using the EDB Postgres Enterprise Manager GUI. 
Integration with PEM also means that sysadmins and DBAs have visibility into all backups across their Postgres estate. Administrators no longer need to manage backups by logging into each EDB Backup and Recovery Tool server individually, but can manage a catalog of backups from one central console (see screenshot below).</p><p><img alt="" src="/sites/default/files/edb_postgres_ga.png" style="width: 100%;" /></p><p> </p><p>More and more customers want to run Postgres on a hosted Kubernetes platform or on popular Kubernetes distributions like Red Hat OpenShift. Our solution for this deployment option is <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-kubernetes"><b>EDB Postgres on Kubernetes</b></a>, a cloud-native Postgres solution with all the functionality needed for high availability and disaster recovery packaged into Docker containers. We recently introduced the <a href="http://suntzusound.com/blog/edb-postgres-kubernetes-operator-now-available">EDB Postgres Kubernetes Operator</a> to simplify deploying, monitoring, scaling up/down, recovering and upgrading Postgres. Now, EDB Postgres on Kubernetes supports all the functionality in EDB Postgres Platform 12.</p><p>These are just a handful of the enhancements added to EDB Postgres 12. Please refer to the <a href="http://suntzusound.com/edb-docs">technical documentation</a> for the respective products for details. If that is not enough for you to consider upgrading, open source PostgreSQL 12 also offers significant improvements to query performance through better partitioning, and provides application developers with new capabilities such as SQL/JSON path expression support, optimizations for how common table expression (WITH) queries are executed, and generated columns. </p><p>If you are a customer, we encourage you to <a href="http://suntzusound.com/software-downloads-postgres">download and upgrade now</a>. 
If you are not a customer, we encourage you to try our software by following the same link.</p><p> </p> Tue, 10 Dec 2019 11:05:59 -0500 Bhavani Rao http://suntzusound.com/blog/edb-postgres-platform-12-available-now EnterpriseDB Blog http://suntzusound.com/blog/registration-now-open-postgres-vision-2020 <h2><img alt="" src="/sites/default/files/postgres_vision_2020_social_tiles-01.jpg" style="width: 100%;" /></h2><h2> </h2><h2><b>Leading Postgres event for enterprise users brings together respected experts, customers, and developers</b></h2><p>We’re delighted to announce that <a href="http://postgresvision.com/">Postgres Vision 2020</a> is coming to Boston again. The event will take place at the <a href="http://www.marriott.com/hotels/travel/bosbo-sheraton-boston-hotel/?scid=bb1a189a-fec3-4d19-a255-54ba596febe2">Boston Sheraton Hotel</a> from June 22-24, and will feature a schedule jam-packed with keynote speakers, panel sessions with Postgres experts and customers, and hands-on workshops for a deeper understanding of key Postgres functionalities. </p><p> </p><h3><b>Network with Postgres Experts and Peers</b></h3><p>The conference presents a great opportunity to learn from some of the most respected Postgres experts as they address a wide range of topics, from deploying databases in the cloud to understanding your database migration strategy and adopting Postgres databases in containers. Attendees can look forward to networking with speakers, as well as with Postgres customers and peers.</p><p> </p><h3><b>Call for Papers Now Open</b></h3><p>Interested in presenting at Postgres Vision? We’re looking for submissions from enterprise Postgres users, Postgres community members and experts from companies that provide database tools, services, and support. Submit your paper proposal <a href="http://postgresvision.com/call-for-papers/">here</a>. Deadline for submissions is February 29th, 2020. 
</p><p> </p><h3><b>Register Today</b></h3><p>To kick things off, we’re opening <a href="http://www.eventbrite.com/e/postgres-vision-2020-tickets-81991098583">early bird registration for a limited time</a>, offering a 2-day conference pass for <a href="http://www.eventbrite.com/e/postgres-vision-2020-tickets-81991098583">just $199</a>.</p><p> </p> Thu, 05 Dec 2019 14:01:46 -0500 http://suntzusound.com/blog/registration-now-open-postgres-vision-2020 EnterpriseDB Blog http://suntzusound.com/blog/seven-things-i%25E2%2580%2599m-watching-aws-reinvent-2019 <style type="text/css"><!--/*--><![CDATA[/* ><!--*/h2 {text-align: left !important;}/*--><!]]>*/</style><p><img alt="" src="/sites/default/files/img_1230_-_for_reinvent_blog.jpg" style="width: 100%;" /></p><p> </p><p>Hello from AWS re:Invent! This year, I’ve set my sights on gaining a better understanding of how the discussion around the cloud has evolved this year among enterprise users, especially those looking at their database options. Certainly, AWS re:Invent is one of the biggest tech events of the year, so what’s discussed here will have a significant impact in the months ahead.</p><p>While here, I’m interested in exploring seven key questions (outlined below), and I’d like to hear your views on them. If you’re attending re:Invent, stop by our booth, 1135. It would be great to discuss your thoughts in person.</p><p><img alt="" src="/sites/default/files/img_1225_-_for_reivent_blog.jpg" style="width: 100%;" /></p><p> </p><h2><b>Seven questions for AWS re:Invent</b></h2><h3><b>1.</b>   <b>How many more enterprises are beyond the “we’re still testing” phase of cloud adoption?</b></h3><p>Clearly, cloud adoption is growing dramatically, but a significant milestone will be when enterprises start putting a majority of their mission-critical workloads into the cloud. 
The trends and projections from analysts are compelling, but last year, many of my conversations started with cloud and then veered back to solving today’s on-premise problems. </p><h3><b>2.</b>   <b>What will AWS contribute to the Postgres community?</b></h3><p>It’s great to see the increasing involvement of AWS in the Postgres community through event sponsorships. If you’re interested in learning more about what the community is doing, I recommend checking out <a href="http://www.meetup.com/Las-Vegas-PostgreSQL-Users-Group/events/266880334/">the Postgres Community Q&amp;A Meetup and Cocktail Hour</a> (sponsored by AWS) at the Venetian Palazzo on Thursday, December 5<sup>th</sup>, 2019, from 3pm to 6pm. My colleague Bruce Momjian, who co-founded the Postgres community group, will be among the speakers.</p><h3><b>3.</b>   <b>What will the big container news be?</b></h3><p>Already a huge topic in the enterprise IT market, it will be interesting to hear what announcements AWS and its partners will make in relation to containers. Obviously, containers enable users to spin up instances quickly, and to easily port applications between cloud service providers. So, what approach will AWS take at the conference? We’ll find out. </p><h3><b>4.</b>   <b>Will Andy Jassy needle Oracle in his keynote?</b></h3><p>I can’t believe Andy won’t mention that AWS has now successfully completed its migration from the Oracle Database, something Larry Ellison said they could never do! However, there’s much more going on here than entertainment value. The AWS migration reflects what we’re seeing around the world as more and more businesses are looking to move away from Oracle to Postgres. 
If you want to find out how you can switch, check out <a href="http://suntzusound.com/edb-postgres-migration-portal">our Oracle migration page</a>.</p><h3><b>5.</b>   <b>Where is RDS on-premises via the partnership with VMware heading?</b></h3><p>AWS has a long-established relationship with VMware, but having announced Amazon Relational Database Service (RDS) on VMware last year, the company <a href="http://searchaws.techtarget.com/blog/AWS-Cloud-Cover/AWS-month-in-review-Amazon-rolls-out-RDS-on-VMware-loses-JEDI-deal">confirmed general availability at the end of October 2019</a>. This moves AWS into the on-premise space and backs up its focus on the hybrid cloud with VMware. It will be good to hear more about the functionality this service offers and how it will be priced. </p><h3><b>6.</b>   <b>Will this be the year we have that conversation about the hybrid cloud enterprise at re:Invent?</b></h3><p>Depending on who you talk to in the enterprise technology world, you will always get a different answer when it comes to how users should deploy their database applications moving forward. Some suggest staying put, on-premise. This makes sense if you have a highly sensitive database that you must protect at all costs. Others see value in a hybrid model, splitting applications across on-premise, private and public cloud environments.  What will be interesting to see this year is how often hybrid cloud is mentioned, as many customers are telling us they want the freedom to run the same Postgres everywhere.</p><h3><b>7.</b>   <b>Who will have the best swag?  
</b></h3><p>Clearly I’m biased, but I’d definitely recommend dropping by our booth, 1135, to see what swag we have to offer and to chat with the EDB team.</p><p> </p><p><img alt="" src="/sites/default/files/img_1221_-_for_reinvent_blog.jpg" style="width: 100%;" /></p><p> </p> Tue, 03 Dec 2019 13:56:42 -0500 Frank Days http://suntzusound.com/blog/seven-things-i%25E2%2580%2599m-watching-aws-reinvent-2019 EnterpriseDB Blog http://suntzusound.com/blog/new-enterprisedb-professional-services-postgres <p><img alt="Man holding tablet" src="/sites/default/files/graphic-services_0.jpg" style="width: 100%;" /></p><p>The demands are growing ever louder for IT to become more agile and responsive to the business, but ensuring database implementations are correctly planned, implemented, managed and optimized is challenging. While organizations want to move faster, the reliability and security of relational databases remain of paramount importance. The added challenge is that because the database architecture is so important, nearly all of the key decision makers interact with these systems, yet each relationship is slightly different and carries its own requirements. </p><p>At EnterpriseDB (EDB) we understand these interconnected engagements with your database deployments and the challenges that come from the need for greater agility. That is why we have created a series of new <a href="http://suntzusound.com/services-support">Professional Services offerings</a>. We have also re-organised our services to reflect the different priorities of those engaged with Postgres.  We have segmented them as follows:</p><ul><li><a href="http://suntzusound.com/services/strategy-services"><strong>Strategy Services:</strong></a> It is vital that senior decision makers, including the CIO, are able to plan their adoption of Postgres. 
Our Strategy Services help organizations build the business case for Postgres and qualify the technology requirements, cost objectives and business key performance indicators (KPIs). </li><li><strong><a href="http://suntzusound.com/services/operations-and-implementation-services">Operations and Implementation Services:</a></strong> For those directly using Postgres, including IT Operations, Database Administrators and Systems Administrators, EDB provides services to ensure the database is run in line with industry best practices for reliability, high availability and security. We identify the best-fit solution for monitoring and alerting the IT team to important events and make recommendations about potential back-up solutions, weighing up the relative benefits and challenges of the different options. Our experts also provide recommendations on architectural designs to meet our customers’ needs for high availability, security and compliance.</li><li><a href="http://suntzusound.com/services/optimization-and-deployment-services"><strong>Optimization and Deployment Services:</strong></a> Developers are increasingly important advocates for Postgres, and at EDB we offer a range of services to ensure Postgres can be quickly deployed and optimized for specific applications. Our Professional Services team helps plan a deployment, benchmarking and tuning the Postgres-related infrastructure to ensure it performs at the highest levels. EDB can also provide valuable information as developers consider the size of database they require for new applications, what capacity is required and what impact significant infrastructure changes may have on performance.  </li><li><strong><a href="http://suntzusound.com/services/migration-services">Migration Services:</a></strong> Our Migration Services give customers a clear evaluation of their existing database environment and can plot the simplest path to Postgres. 
Our experts can also identify which databases can be moved with the greatest efficiency and will maximize cost savings.</li><li><strong><a href="http://suntzusound.com/services/custom-services">Custom Services: </a></strong>We also understand that databases are often tailored for specific industry and organizational requirements. For such implementations, we provide access to a PostgreSQL subject matter expert, who can be onsite to help with the configuration, development and deployment of Postgres. Our project team will also manage the implementation process to ensure it meets measurable objectives, while a Technical Account Manager will work closely with you to maximize the benefits gained from adopting EDB Postgres.  </li></ul><p>At a time when Postgres is being adopted everywhere, on-premise and in the cloud, we are excited to see how our Professional Services team can help you drive agility, flexibility and scalability in your adoption of Postgres. I would highly recommend going to our new <a href="http://suntzusound.com/services-support">Professional Services page</a> to find out more information about the services we offer, and I would also encourage you to <a href="http://suntzusound.com/get-pricing">reach out to our team</a>, as they will be best placed to understand which services are most appropriate for you.  </p><p> </p> Mon, 25 Nov 2019 21:19:06 -0500 http://suntzusound.com/blog/new-enterprisedb-professional-services-postgres EnterpriseDB Blog http://suntzusound.com/blog/gartners-state-open-source-dbms-market-2019 <p><img alt="" src="/sites/default/files/photo-market-report_0.jpg" style="width: 100%;" /></p><p> </p><p>In the last ten years, attitudes towards open source in enterprise IT environments have changed dramatically, as users have shifted from skepticism about its ability to cope with mission-critical workloads to deploying it widely across their infrastructures.  
It now dominates the operating system layer in IT infrastructures and is rapidly becoming integral to most strategic IT initiatives.  The database is no different.  According to Gartner: "By 2022, more than 70% of new in-house applications will be developed on an Open Source Database Management System (OSDBMS) or OSDBMS-based Database Platform-as-a-Service (dbPaaS), and 50% of existing proprietary Relational Database Management Systems (RDBMS) instances will have been converted or be in process of converting." The research report also points to the Gartner Research Circle survey, which said, "89% of organizations reported using free or open source DBMS software; a third reported using it in more than 50% of their organization."</p><p>This lines up with our belief that open source, and in our case <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-platform">Postgres</a>, is being seen as the only alternative to the traditional commercial databases for most application requirements.  We have seen the functionality argument used against open source fade away as the features of OSDBMS systems have gone on to credibly compete with traditional proprietary databases.  In our case, working with the community to innovate rapidly to deliver new functionality makes Postgres ever more attractive for our customers, who are seeking greater agility and responsiveness from their database solutions.</p><p>However, we believe Gartner is not suggesting enterprise users should switch their entire database infrastructures to open source without undertaking a careful evaluation of some critical issues.  This is something we agree with, and we offer a range of tools, such as <a href="http://suntzusound.com/enterprise-postgres/free-oracle-postgres-migration-tool">the EDB Postgres Migration Portal</a>, to help customers consider the implications of a move to EDB Postgres. 
</p><p>While the technology has matured significantly to achieve functional equivalency in many areas, we always say that adopting open source databases is no different from any other database implementation in a mission-critical environment.  Given the low barrier to adoption, there is a temptation for open source databases to be adopted by anyone within an organization.  In principle, this is possible, but as companies seek to modernize their IT infrastructures it is important to avoid creating pools of Shadow IT that add complexity rather than simplifying IT infrastructures.  </p><p>Consequently, the move to OSDBMS must be carefully planned, a thorough evaluation conducted and the right <a href="http://suntzusound.com/services-support">support</a> put in place to ensure organizations maintain <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-failover-manager">high availability</a>, reliability, and <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-replication-server">scalability</a>.  In particular, the report makes some good points for enterprise users considering a move to the cloud.  Migration to the cloud is not just about the cost of implementation, but understanding the configuration requirements and how to support the application to ensure minimal disruption.  Indeed, Gartner makes it very clear that users should “always purchase commercial support for production implementations.” We would agree with this approach, not because we believe there are flaws in open source software, but because we know our customers operate mission-critical applications that cannot afford downtime.</p><p>Ultimately, we are convinced that Postgres - and other open source software - is going to become ever more important in enterprise IT environments. Postgres, given its relational heritage, can minimize some of the cultural and organizational challenges of adopting a new technology, because of its comparable functionality and approach.  
More importantly, though, Postgres offers greater flexibility and agility to innovate and respond to business demands by creating a database platform that can operate on-premise and in the cloud simultaneously, without the expense and constraints of licensing traditional proprietary databases.</p><p>I would encourage Gartner subscribers to read the report, as we believe it offers some great insights on the OSDBMS market, as well as considerations to guide your approach to adopting the technology.</p><p> </p><p><i>Gartner, State of the Open-Source DBMS Market, 2019, 17 October 2019, Merv Adrian, Donald Feinberg.</i></p><p> </p> Fri, 22 Nov 2019 13:55:39 -0500 Frank Days http://suntzusound.com/blog/gartners-state-open-source-dbms-market-2019 EnterpriseDB Blog http://suntzusound.com/blog/postgres-enterprise-manager-architecture <style type="text/css"><!--/*--><![CDATA[/* ><!--*/h2 {text-align: left !important;}/*--><!]]>*/</style><p><img alt="" src="/sites/default/files/photo-information-sharing_0.jpg" style="width: 100%;" /></p><p>Postgres Enterprise Manager (PEM) is a comprehensive monitoring and management solution for PostgreSQL and EDB Postgres Advanced Server (EPAS) from EnterpriseDB. It is a distributed application with a number of components that can be installed in many different configurations. The following sections explain what those components are, what their purpose is, and how they work.</p><p>Note: In PEM parlance, a <i>managed</i> server refers to a database server that is being monitored and managed by PEM through a PEM Agent. An unmanaged server is one that the PEM user interface may be used to connect to and manage, but that is not registered with a PEM Agent.</p><p> </p><h2>Agent</h2><p>The PEM Agent is typically installed on each <i>managed</i> database server host in the environment. 
It is used to collect metrics about the database server and the underlying host that it is monitoring, and to execute tasks when required (for example, to update the database server configuration and restart it). Probes that collect metrics and all other tasks that an agent might execute on a host or database server are scheduled; essentially, the PEM Agent is an execution engine for scheduled jobs.</p><p>In the ideal setup, there will be an agent running on every host (physical or virtual) that is running a database server; however, it is possible for agents to monitor database servers remotely, albeit without the ability to monitor the underlying host or to execute tasks that require access to the underlying host operating system.</p><p>PEM Agents run under a superuser account by default. This is required to allow monitoring of certain operating system metrics or to execute tasks that require such privileges. It is possible to run agents under a more restricted account, but this will prevent the use of some functionality.</p><p>Agents communicate with the PEM Server by connecting to the Postgres database component (described in detail below), typically on port 5432 or 5444. When agents are first installed, a suitable user account is used to connect to the PEM Server to perform the registration process. A role in the PEM Server’s database is created for the agent, along with a certificate that will be used for authentication. When the agent service is started, it will use this role and certificate to authenticate. Because agents initiate connections to the PEM Server, they can run on hosts behind a NAT firewall.</p><p>Agents have a configuration file that is created during the installation process. This file includes options for a number of purposes, including:</p><ul><li>Identification of itself, e.g. 
what role to use when connecting to the PEM server.</li><li>Configuration of various tuning parameters, such as connection reuse, polling wait times and more.</li><li>Enabling/disabling of certain functionality that may conflict with security policies, such as the ability to run custom batch/shell tasks.</li><li>Enabling/disabling of functionality used only in the <i>master</i> agent that runs as part of the PEM Server.</li></ul><p>The PEM Agent is designed to be extremely crash resistant; the service started by the operating system is actually a <i>watchdog</i> process that spawns a worker process that is responsible for doing the work of the agent. Should the worker process crash or exit, the watchdog will immediately restart it. The watchdog process is designed to be as simple as possible to minimise the risk that it will crash.</p><p> </p><h2>Server</h2><p>The PEM Server is a collection of components that work together to host a PEM installation:</p><ul><li>A PostgreSQL or EPAS database server with the SSLUtils extension</li><li>A PEM Agent instance</li><li>An Apache HTTPD web server</li></ul><p>Technically, all three components can be hosted on different servers; however, a typical installation will have all three running on the same server, or the PEM Agent and Apache HTTPD server on one host and the database on another.</p><p> </p><h3><b>Database Server</b></h3><p>The database server is used to store all the monitoring data, much of the system configuration, and the scheduling information used to execute probes and other tasks. It is the central <i>hub</i> of PEM.</p><p>As noted above, agents connect to PEM using roles defined in the database server. This is also the case for end users, who are also roles in the database. 
This allows most authentication mechanisms supported by Postgres to be used to authenticate end users, and it also allows monitoring data from different database servers to be automatically secured using Postgres’ Row Level Security functionality. Role Based Access Controls (RBAC) are also tied into the Postgres role system; PEM will check for membership in appropriate roles before allowing users to perform certain operations or use particular tools.</p><p>Data is stored in a number of different schemas within the PEM server database. The <i>pem</i> schema is used to store system configuration data such as agent and database server registrations, probe and alert definitions and configuration, and system parameters. The <i>pemdata</i> schema is used to store the most recently collected metrics from each probe in the system, with data stored in one table per probe. This data is automatically copied into the <i>pemhistory</i> schema, which contains similar tables for each probe. Keeping a copy of the most recently recorded metric values in the <i>pemdata</i> schema allows for much faster access to the data when only the most recent values are required to render a chart or graph. Some probes may also be configured such that they only store the most recent data and no history at all.</p><p>The database server, through the use of the SSLUtils extension, also manages its own certificates used for authentication of agents. A Certificate Authority (CA) certificate is generated uniquely for each PEM Server installation. This is then used to generate a certificate for the PEM Server’s database server, as well as for all agents that are registered with the server.</p><p> </p><h3><b>Agent</b><b> </b></h3><p>The PEM Agent that is part of the PEM Server performs all the same functions that it does on any managed host (i.e. running probes and executing tasks), providing self-monitoring for PEM. 
In addition, it typically performs a number of other functions which are enabled automatically in the agent’s configuration file:</p><ul><li>Alert processing; checking the collected metric data to see if it matches any of the defined alert conditions and, if so, taking appropriate action.</li><li>Spoolers; managing queues of notifications, typically of alert conditions. These may be SMTP email, SNMP traps, or Nagios passive service checks.</li><li>Garbage collection; cleanup and removal of old data that is no longer required from the PEM database.</li></ul><p>The PEM Agent running as part of the PEM Server will always have an ID of <i>1</i>.</p><p> </p><h3><b>Apache HTTPD Server</b></h3><p>The Apache HTTPD Server (usually known simply as the web server) is used to host the user interface and REST-like APIs (or web application) for PEM. Both are implemented in a Python application written using the <i>Flask</i> framework, which is hosted in Apache using <i>mod_wsgi</i>. The web application communicates with other servers and processes using different methods:</p><ul><li>The <i>psycopg2</i> driver is used to communicate with the PEM database server and managed database servers utilising the PostgreSQL wire protocol.</li><li>REST-like APIs are used to communicate with EDB Ark.</li></ul><p>The user interface is implemented in HTML 5 and Javascript and is served from the web application to the client browser. Ajax requests are made from the user interface to the backend server utilising the REST-like APIs implemented in Python.</p><p> </p><h3>SQL Profiler Plugin</h3><p>The SQL Profiler Plugin is an optional component of PEM that can be installed on any managed or unmanaged database server that is registered with a PEM Server. 
The plugin is used by the SQL Profiler tool in PEM to monitor and log every operation that the database server performs for analysis.</p><p>The SQL Profiler plugin is designed to incur as little overhead as possible on the database server being monitored, to minimise the chances of the monitoring itself skewing the results collected. It does this by writing data to files in the data directory of the monitored server, utilising one file per server backend to eliminate the need for file locking. Data is written in CSV format to minimise the need for any processing whilst the server is being profiled.</p><p> </p><h2>Conclusion</h2><p>Postgres Enterprise Manager is a complex application offering a vast array of monitoring and management features for PostgreSQL and EDB Postgres Advanced Server database servers. It has a flexible architecture, designed to maximise deployment options, efficiency and extensibility.</p><p>NOTE: This article was written when PEM 7.11 was the current version, and applies to all PEM 7 releases to date. Architectural changes may be made in future releases of PEM.</p><p> </p><p> </p> Tue, 19 Nov 2019 13:31:13 -0500 Dave Page http://suntzusound.com/blog/postgres-enterprise-manager-architecture EnterpriseDB Blog http://suntzusound.com/blog/kubecon-cloudnativecon-north-america-2019 <p><img alt="" src="/sites/default/files/photo-conference.jpg" style="width: 100%;" /></p><p> </p><p>Kubernetes has fast become the dominant platform for container orchestration, which is understandable given that it can help to ensure containers are automatically reconfigured, scaled, upgraded, updated, and migrated without disrupting applications and services. 
</p><p>At EnterpriseDB, we have been developing tools to support containers for a number of years and working to <a href="http://suntzusound.com/enterprise-postgres/edb-postgres-kubernetes">integrate Kubernetes into our platform</a>, but this year feels like an important moment as the use of containers for database instances grows.  In this year’s <a href="http://diamanti.com/wp-content/uploads/2019/06/Diamanti_2019_Container_Survey.pdf">Diamanti Container Adoption Benchmark Survey</a>, it is clearly stated that databases have emerged as the top use case for containers in 2019.</p><p><img alt="" src="/sites/default/files/image_6.png" style="width: 100%;" /></p><p>This agrees with the feedback we are getting from customers.  While investment is still in the early stages and the technology is maturing, the level of interest in containers for databases has significantly increased.  This is great news for enterprise customers because it means they have more choice when deciding their database strategy and greater flexibility on where to deploy new applications.  However, as an emerging technology, it also means, particularly for enterprise customers, that there are some key considerations to address.  And this is one of the reasons why we are attending KubeCon to be part of the discussions around the role of Kubernetes in database environments.</p><p>The most important question I expect we will be discussing is “How much downtime can your business cope with?”</p><p>While Kubernetes offers some high availability and continuity in failover or disaster scenarios, it fundamentally treats all applications as ‘Cattle’, meaning that all pods running the same service are treated identically.  If you are running mission-critical relational database applications, this approach is not ideal, because Cattle are disposable servers built with automated tools, so if they fail or are deleted, they can be replaced with a clone immediately.  
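How much downtime each level of "nines" actually allows is simple arithmetic; the back-of-the-envelope sketch below (plain Python, illustrative only, assuming an average month of 365.25/12 days) makes the budgets concrete:

```python
# Downtime budget implied by an availability target.
# Illustrative only; uses an average month of 365.25/12 days,
# so figures shift slightly if you assume a flat 30-day month.
SECONDS_PER_MONTH = 365.25 * 24 * 3600 / 12  # 2,629,800 s

def downtime_per_month(availability_pct):
    """Allowed downtime in seconds per month for a given availability %."""
    return SECONDS_PER_MONTH * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    secs = downtime_per_month(pct)
    print(f"{pct}%: {secs / 60:6.2f} min ({secs:7.2f} s) per month")
```

With a flat 30-day month the 99.99% figure comes out at roughly 4.3 minutes rather than 4.4, which is why quoted numbers vary slightly between sources.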
Historically, relational databases have been treated as ‘Pets’: unique, pampered systems that cannot afford to go down.  For our enterprise customers, used to four or five nines of availability, this is very important.  99.99% availability equates to roughly 4.3 minutes of downtime a month, while 99.999% availability is just 26.3 seconds a month.  Therefore, if you are considering using Kubernetes to orchestrate your database environment, it is important to plan your approach to high availability, master/standby relationships, replication lag when repairing a cluster, and how to manage and monitor your cluster.</p><p>Having worked with the technology for some time now, we understand the different requirements for its use in development, production, and mission-critical environments.  We would be delighted to spend some time with you at KubeCon to demonstrate how our tools can help you employ Kubernetes in mission-critical database environments and achieve the levels of high availability and reliability that you require.</p><p>We look forward to seeing you at our booth S82 located right outside the breakout session doors.  </p><p> </p> Fri, 15 Nov 2019 08:52:31 -0500 Bhavani Rao http://suntzusound.com/blog/kubecon-cloudnativecon-north-america-2019 EnterpriseDB Blog http://suntzusound.com/blog/pgbouncer-connection-pooling-what-do-when-persistent-connectivity-lost <p><img alt="" src="/sites/default/files/photo-downtime.jpg" style="width: 100%;" /></p><p> </p><p><i>(This is part two of my technical response to a series of questions about the use of pgbouncer and what you need to look out for.  Part one can be found </i><a href="http://suntzusound.com/blog/pgbouncer-tutorial-installing-configuring-and-testing-persistent-postgresql-connection-pooling"><i>here</i></a><i>)</i></p><p>So, in Part One of this blog we completed the installation and configuration of PgBouncer, and tested a brief network timeout using PgBouncer with no problem at all. 
But what happens with a minute’s downtime, and what if we’re in the middle of a transaction while it happens? Does PgBouncer maintain connectivity, without error, even when the connection to the database is catastrophically lost?</p><p>On our master database, we now start a session and issue a command.</p><pre>pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)</pre><p> </p><p>On the target database box, we take the NIC (network interface) down for a minute, and issue another command while connectivity is lost:</p><pre># ifdown eth0 &amp;&amp; sleep 60 &amp;&amp; ifup eth0</pre><p> </p><p>Meanwhile, back in PgBouncer:</p><pre>pgb=# select * from test;</pre><p> </p><p>...which now hangs. However, when the database NIC is back up, psql returns...</p><pre> x 
---
 1
 2
 3
 4
(4 rows)</pre><p> </p><p>Looks good! The SELECT is held in flight during the minute-long network outage, and simply reconnects and completes when it can.</p><p>Now for something a little more advanced, let’s restart the database on the remote server, and see what happens.</p><pre>pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)

pgb=# select * from test;
FATAL: terminating connection due to administrator command
ERROR: server conn crashed?
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)</pre><p> </p><p>A partial success. Our psql session connecting to PgBouncer was persistent, but we get an error message telling us that the server connection has been reset. 
However, immediately re-issuing the statement, we get the result we want without having to reconnect to PgBouncer.</p><p>So, how do we avoid getting the error message about the server connection reset, and having to re-issue our statement?</p><p>Well, the simple answer is we change 'pool_mode' to 'transaction'.</p><p>This is taken from the official pgbouncer documentation (slightly amended):</p><p> </p><p><b><i>server_reset_query_always</i></b></p><p><i>Whether server_reset_query should be run in all pooling modes. When this setting is off (default), the server_reset_query will be run only in pools that are in session-pooling mode. Connections in transaction-pooling mode should not have any need for reset query.</i></p><p><i>It is a workaround for broken setups that run apps that use session features over transaction-pooled PgBouncer. It changes non-deterministic breakage to deterministic breakage: clients always lose their state after each transaction.</i></p><p><i>Default: 0</i></p><p>So, this parameter would be a workaround for our problem while in session mode. But, do we want to hack a fix or do the right thing? Of course, we want to do the right thing, so we make our pool_mode (at a minimum) transaction level, which in turn should make our sessions persistent.</p><p>Therefore, rather than ‘fix’ our broken PgBouncer, we will do the sensible thing and move to transaction-level pooling:</p><pre>$ cat /etc/pgbouncer/pgbouncer.ini | grep pool_mode
pool_mode = transaction</pre><p> </p><p>...And restart PgBouncer.</p><p>And now, on the PgBouncer side, we reconnect and…</p><pre>$ psql -U pgb -h 127.0.0.1 -p 6543 pgb
Password for user pgb: 
Type "help" for help.

pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)</pre><p> </p><p>...And, on the database server:</p><pre># systemctl restart edb-as-11.service </pre><p> </p><p>...And then, back on the PgBouncer session again:</p><pre>pgb=# select * from test;
 x 
---
 1
 2
 3
 4
(4 rows)</pre><p> </p><p>Success, once more.  
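One trade-off is worth keeping in mind: the documentation's caveat that clients lose their session state after each transaction applies once you are in transaction pooling, because consecutive transactions from one client may be served by different server connections. The toy model below (plain Python, no PgBouncer involved; the round-robin hand-out is just a stand-in for whatever free connection the pooler happens to pick) shows the effect:

```python
# Toy model: a transaction-pooling pooler hands each transaction
# whichever server connection is free, so per-connection session
# state set in one transaction may be gone in the next.
class ServerConn:
    def __init__(self, name):
        self.name = name
        self.session_state = {}  # stands in for effects of SET, etc.

class TransactionPool:
    def __init__(self, conns):
        self.conns = list(conns)
        self.i = 0

    def run_transaction(self, fn):
        conn = self.conns[self.i % len(self.conns)]  # round-robin stand-in
        self.i += 1
        return fn(conn)

pool = TransactionPool([ServerConn("s1"), ServerConn("s2")])

# Transaction 1: behaves like running "SET search_path = app".
pool.run_transaction(lambda c: c.session_state.update(search_path="app"))

# Transaction 2 lands on a different server connection: the setting is gone.
seen = pool.run_transaction(lambda c: c.session_state.get("search_path"))
print(seen)  # -> None
```

In practice this means avoiding session features such as SET, temporary tables, and session-level prepared statements over a transaction-pooled connection, or scoping them to a single transaction.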
Next, we will go on to look at what to do if you have a complete failover.</p><p> </p><p> </p> Wed, 13 Nov 2019 13:05:12 -0500 Philip Hankinson http://suntzusound.com/blog/pgbouncer-connection-pooling-what-do-when-persistent-connectivity-lost