Connecting to PostgreSQL from a Go project

Introduction

The Go programming language, sometimes referred to as Golang, is making strong gains in popularity. Chances are, if you are a Go developer, you will have to interact with SQL at some point in your project. This blog post will show how to connect to a PostgreSQL database from Go using the database/sql package.

What is PostgreSQL?

PostgreSQL is a free and open-source relational database management system that uses and extends the SQL language, combined with many features that safely store and scale the most complicated data workloads. PostgreSQL allows user-defined functions to be written in other languages, including C and Go: you can define your own data types, build custom functions, and even write code in different programming languages without recompiling your database.

If you’re new to PostgreSQL, I recommend referring to the official documentation.

What is Go lang?

Go is an open-source, statically typed, compiled programming language designed at Google. It is syntactically similar to C, but overcomes many of C’s limitations. It is often referred to as “Golang” because of its domain name, golang.org, but the proper name is Go. Go shines in many areas: it is statically and strongly typed with a great way to handle errors, compiles down to a single binary, compiles very fast, is open source, and above all is simple.

To learn more about Go, please refer to the Go docs.

Prerequisites

To achieve our objective, i.e. connecting to PostgreSQL from Go, there are some prerequisites. Sticking to the core subject of this blog, I won’t cover them here; instead, here are some references:

Install PostgreSQL and setup an instance – https://www.postgresqltutorial.com/install-postgresql/

Install Go and configure workspace – https://www.callicoder.com/golang-installation-setup-gopath-workspace/

For the purpose of this tutorial, I’m using Postgres 11 and Go 1.13.x.

Objective – Connecting to PostgreSQL from Go

In the next few steps, I’ll demonstrate how to build a simple command-line tool (CLI) in Go that can connect to a PostgreSQL instance. So let’s Go (pun intended).

Step 1. Gather PostgreSQL Instance details

const (
  host     = "localhost"
  port     = 5432
  user     = "postgres"
  password = "secure-password"
  dbname   = "connect-db"
)

Step 2. Install the github.com/lib/pq package

go get -u github.com/lib/pq

Go’s standard library was not built to include any specific database drivers, so we need to install a third-party package, lib/pq.

While Go provides the database/sql package that we will use to interact with our database, the standard library does not include drivers for every SQL database variant. Instead, this is left up to the community, and in my experience the lib/pq package is the best driver for Postgres.
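
This driver model is easy to see in a minimal, standalone sketch (separate from the tutorial program): the blank import of lib/pq registers the driver under the name "postgres", which is exactly the name we pass to sql.Open() later.

package main

import (
  "database/sql"
  "fmt"

  // The blank import runs lib/pq's init(), which registers the driver
  // under the name "postgres" via sql.Register.
  _ "github.com/lib/pq"
)

func main() {
  // sql.Drivers() returns the names of all registered drivers;
  // with lib/pq imported, this prints: [postgres]
  fmt.Println(sql.Drivers())
}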

Step 3. Putting together our package pg-connect.go

package main

import (
  "database/sql"
  "fmt"

  _ "github.com/lib/pq"
)

Step 4. Configuring database connection string in our code

Inside the main() function, we’re going to create a connection string containing all the information required to connect to our Postgres database.

func main() {
  connStr := "user=postgres dbname=connect-db password=secure-password host=localhost sslmode=disable"
}

Depending on your specific setup, you may want to change sslmode. lib/pq defaults to sslmode=require, so for a local instance without SSL you will typically set it to disable.
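
If you’d rather build connStr from the constants gathered in Step 1 than hard-code it, a minimal sketch using fmt.Sprintf could look like the fragment below (it assumes sslmode=disable because the local instance has no SSL):

// Build the connection string from the Step 1 constants.
// sslmode=disable is assumed here because the local instance has no SSL.
connStr := fmt.Sprintf(
  "host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
  host, port, user, password, dbname,
)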

Step 5. Creating a connection to our database

At this point our code is ready to attempt a handshake with the database using the connection string connStr. To make this work, we will use the sql.Open() function, which takes two arguments: a driver name and a connection string. Note that sql.Open() does not actually open a connection; it only validates its arguments. At this step, we’ll also add error handling so we panic if anything goes wrong.

db, err := sql.Open("postgres", connStr)
if err != nil {
  panic(err)
}
defer db.Close()

Next, we call the Ping() method on the sql.DB object to test our connection. Because sql.Open() only validates its arguments, db.Ping() is what actually forces a database connection to be opened and confirms whether we can successfully reach the database.

err = db.Ping()
if err != nil {
  panic(err)
}

Step 6. Putting it all together

package main

// connecting to a PostgreSQL database with Go's database/sql package
import (
  "database/sql"
  "fmt"

  _ "github.com/lib/pq"
)

func main() {

  /*
     variables required for connection string: connStr

     user     = (using default user for postgres database)
     dbname   = (using default database that comes with postgres)
     password = (password used during initial setup)
     host     = (hostname or IP address of server)
     sslmode  = (must be set to disable unless using SSL)
  */

  connStr := "user=postgres dbname=connect-db password=secure-password host=localhost sslmode=disable"

  db, err := sql.Open("postgres", connStr)
  if err != nil {
    panic(err)
  }
  defer db.Close()

  err = db.Ping()
  if err != nil {
    panic(err)
  }
  fmt.Printf("\nSuccessfully connected to database!\n")
}
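
As a small preview of the next post, here is a minimal sketch of reading data back with db.Query() once the connection is up. The users table and its columns are hypothetical, used purely for illustration; the fragment would sit inside main() after the db.Ping() check.

// Hypothetical example: a "users" table is assumed to exist, for illustration only.
rows, err := db.Query("SELECT id, name FROM users")
if err != nil {
  panic(err)
}
defer rows.Close()

for rows.Next() {
  var id int
  var name string
  if err := rows.Scan(&id, &name); err != nil {
    panic(err)
  }
  fmt.Println(id, name)
}
if err := rows.Err(); err != nil {
  panic(err)
}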

Next steps…

In the next post, we’ll see how to interact with the data within the database using Go lang.

My journey from DBA to DevOps

I’ve been working as a database engineer for over a decade, engineering enterprise data platforms. At the beginning of my career in the early 2000s, I chose the ‘safe’ path of being a DBA, believing that relational systems, as the universal containers for storing critical data, would never change. I started learning relational technologies like Oracle and MS-SQL, and eventually also learned open-source systems including PostgreSQL & MySQL.

However, contrary to my belief, enterprise technology did change a lot. Some things remained constant, though, like developers swinging by my cube to request a new database, a DB refresh, or tuning of a slow-running query (my favorite). Sometimes these teams were frustrated with me when their app went down or when they couldn’t access their database. At times, my Dev and QA friends would come by my desk to learn what I was doing, and even after my attempts to explain how I was solving their production issue, they would walk away puzzled. I grew up in this role realizing that when things go wrong, the first one to be blamed is the DBA (i.e. Default Blame Acceptor).

This was my life for a long time and, as you might have guessed, it wasn’t very satisfying professionally 😉

It was a vicious cycle of configuring environments, routine data refreshes, resolving outages, and late-night upgrades. The work that once gave meaning to my career lately made me ask myself, “Is this it for a DBA?”, “Is it time to change?”

The journey ahead…

I then started having discussions with my leaders, determined to find an answer to “What’s the future of my role?”.

During these discussions, I heard terms like Agile, Scrum, Continuous Delivery and DevOps. Since the last one was repeated more often than the others, I started researching the “skill set for DevOps”.

Initially I was excited to learn a new skill set, maybe a new tool; however, after multiple days of googling, I couldn’t find any specific skill-set definition. During a follow-up discussion with my mentor, I learned something interesting:

“Think of DevOps as not a specific skill set, instead it’s a way of doing something.”

Wait, so all it means is Dev and Ops working together? That sounds weird! So now I have to learn all the languages used by developers…C++, Java, Python…ugh? While I’ve been coding in PowerShell & SQL for infra automation, I never considered myself a developer. Hmm…maybe that’s what needed to change.

Fast forward – Learning Go

In the past couple of years, a new programming language has been on the rise: Go (or Golang). Most developers will have to interact with a database at some point in their project, and often that means working with PostgreSQL. I got particularly interested in learning Go’s integration with PostgreSQL. There are multiple Go libraries that make Go and PostgreSQL integration productive and fun. I’m excited about my journey and want to share what I’ve learned recently…so in the next few posts I’ll blog about PostgreSQL integration with Go.

Disclaimer: By NO means do I claim to be a Go expert, and I am not going to teach Go. There are a lot of great online courses and materials for that. I am only going to share my learnings on Postgres integration with Go.

The Phoenix Project – Book Review

A Novel about IT, DevOps, and Helping Your Business Win

Some people are lucky to find books that change their life. While I’m yet to find my life-changing novel, I do come across books that make me think, question my beliefs and push me to learn. The Phoenix Project is that book for me.

Written by Gene Kim, George Spafford, and Kevin Behr, the book is about a large company’s transformation into a DevOps culture, a transformation driven not just to look cool, but as a necessity for the survival of the company.

The synopsis is simple. Bill, the protagonist, is the Director of Midrange Operations at Parts Unlimited, a US-based $4-billion-per-year manufacturing and retail company. Bill is swiftly pulled into the spotlight by the CEO and persuaded (hoodwinked, really) into taking up the post of VP of IT Operations. It soon becomes clear that, among standard responsibilities, Bill and his team are responsible for making the launch of the risky (read: doomed) Project Phoenix a success. Project Phoenix not only seems hugely over-scoped for its ambitious – and imminent – timelines, but it also faces enormous pressure elsewhere.

The characters and situations in the book are stereotypical, but that’s not a criticism. The intent is clearly for us to identify with the characters and events, to relate our own workplaces to the story. So here are my key learnings, spoiler-free:

DevOps is a collaborative working relationship between Development and IT Operations 🤝

The outcome of this collaboration is a fast flow of planned work, along with increased reliability and stability of the production environment.

The Three Ways principle 📜

The First Way focuses on maximizing the flow of work from left to right: from the business to development to IT operations to the end user.

The Second Way focuses on strengthening the feedback loop from right to left. The focus is not only on getting feedback, but also on how fast we can get it, in order to make the necessary corrections and improvements quickly.

The Third Way is all about developing and fostering a culture of continuous experimentation and learning.

Speed to Deliver is the key 🚀

Technology is the lifeblood of every business today. It’s imperative that businesses strive to bring their applications to market more quickly so they don’t miss opportunities and can adjust to market demands easily. To achieve this, organizations must adopt the right DevOps practices in their software development processes to reduce time to market.

Overall, the book does a great job of explaining these ideas with examples and linking them together. It’s a super fun and easy read, and I would definitely recommend it.

Connect to PostgreSQL in VS Code

VS Code has a rich extension API that lets you add languages, debuggers, and tools to your installation to support easy development. The PostgreSQL extension allows the following:

  • Connect to PostgreSQL instances
  • View object DDL with ‘Go to Definition’ and ‘Peek Definition’
  • Write queries with IntelliSense
  • Run queries and save results as JSON, csv, or Excel

Download link: https://marketplace.visualstudio.com/items?itemName=ms-ossdata.vscode-postgresql

Using VS Code PostgreSQL extension

  1. Open the Command Palette with Ctrl + Shift + P (on macOS, use ⌘ + Shift + P)

  2. Search for and select PostgreSQL: New Query

  3. In the Command Palette, select Create Connection Profile. Follow the prompts to enter your Postgres instance hostname, database, username, and password.

You are now connected to your Postgres database. You can confirm this via the Status Bar (the ribbon at the bottom of the VS Code window). It will show your connected hostname, database, and user.

Now, let’s try to query the database.

  4. Type a query, e.g. SELECT * FROM pg_stat_activity;

  5. Right-click and select Execute Query (or use the keyboard shortcut ⌘M ⌘R); the results will show in a new window.

  6. You can also save the query results as JSON, CSV, or Excel.

So now you can seamlessly code for PostgreSQL from Microsoft VS Code without switching screens, leverage powerful IntelliSense, and execute queries.

Enjoy Coding!

IMPORTANT NOTE: Query result windows won’t show up again after being closed. This is a bug in the current version and is being worked on by the dev team. The workaround is either to keep the result window open or to close and re-open the VS Code window.

Site Reliability Engineering: How Google Runs Production Systems – Book Review

Essential Read for anyone managing highly available distributed systems at scale

First off, it’s worth letting you know that Google lets you read this “entire” book online for free on their website. Yes, you read that right: you don’t need to buy the book, just click on the link below – https://landing.google.com/sre/sre-book/toc/index.html – and start reading!

The book starts with a story about a time Margaret Hamilton brought her young daughter with her to NASA, back in the days of the Apollo program. During a simulation mission, her daughter caused the mission to crash by accidentally pressing some keys. Hamilton noticed this defect and proactively submitted a change to add error-checking code to prevent it from happening again; however, the change was rejected because program leadership believed that error should never happen. On the next mission, Apollo 8, that exact error condition occurred, and a potentially fatal problem that could have been prevented with a trivial check took NASA’s engineers 9 hours to resolve. Hence an early learning from the book:

“Embrace the idea that systems failures are inevitable, and therefore teams should work to optimize to recover quickly through using SRE principles.”

The book is divided into four parts, each comprising several sections. Each section is authored by a Google engineer.

In Part I, Introduction, the authors introduce Google’s Site Reliability Engineering (SRE) approach to managing global-scale IT services running in datacenters spread across the entire world. (Google’s approach is truly extraordinary.) After a discussion about how SRE is different from DevOps (another hot term of the day), this part introduces the core elements and requirements of SRE, which include the traditional Service Level Objectives (SLOs) and Service Level Agreements (SLAs), management of changing services and requirements, demand forecasting and capacity, provisioning and allocation, etc. Through a sample service, Shakespeare, the authors introduce the core concepts of running a workflow, which is essentially a collection of IT tasks that have inter-dependencies, in the datacenter.

In Part II, Principles, the book focuses on operational and reliability risks, SLO and SLA management, the notion of toil (mundane work that scales linearly and can be automated) and the need to eliminate it (through automation), how to monitor the complex system that is a datacenter, a process for automation as seen at Google, the notion of engineering releases, and, last, an essay on the need for simplicity. This rather disparate collection of notions is very useful, explained for the layman but still with enough technical content to be interesting even for the expert (practitioner or academic).

In Parts III and IV, Practices and Management, respectively, the book discusses a variety of topics, from time-series analysis for anomaly detection, to the practice and management of people on-call, to various ways to prevent and address incidents occurring in the datacenter, to postmortems and root-cause analysis that could help prevent future disasters, to testing for reliability (a notoriously difficult issue), to software engineering the SRE team, to load-balancing and overload management (resource management and scheduling 101), communication between SRE engineers, etc. etc. etc., until the predictable call for everyone to use SRE as early as possible and as often as possible. This is where I started getting a much better sense of practical SRE (aha!).

Overall it’s a great read, however it isn’t perfect. The two big downsides for me are 1) this is one of those books that is a collection of chapters by different people, so there’s a fair amount of redundancy, and 2) the book takes a one-sided approach to the “Build vs Buy” dilemma of engineering. At Google scale, it will always be better to build; however, that is rarely true in the real world. But even including the downsides, I’d say this is the most valuable technical book I’ve read this year. If you really like these notes, you’ll probably want to read the full book.

Azure Data Studio – Switching from Management Studio (SSMS) to Azure Data Studio (ADS)

Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. Azure Data Studio’s (ADS) initial release was only compatible with SQL Server; however, Microsoft recently released a PostgreSQL extension for ADS, so now you can also manage your PostgreSQL instances using ADS. For more details on the Azure Data Studio PostgreSQL extension, refer to my earlier posts.

Initially, I was apprehensive about switching to ADS as I did not want to leave the comfort and ease of SSMS – after all, I had been using it for more than a decade. My thoughts changed once I started using it.

Also, did you know ADS is built on Visual Studio Code, which has multiple options to ‘Customize’? In this post I’ll take that feature and share how you can ‘Customize’ and ‘Personalize’ Azure Data Studio.

I. Change the Color theme

  1. Open Settings by clicking the gear icon at the bottom left and selecting Settings
  2. Click on Color Theme
  3. Choose from the number of options (Choose between Light, Dark and High Contrast themes)

II. Change Keyboard Shortcuts

  1. Open Settings by clicking the gear icon at the bottom left and selecting Settings
  2. Click on Keyboard Shortcuts

Change Run Query from Default to F5 And/Or Ctrl+E

// Place your key bindings in this file to overwrite the defaults
[
 {
  "key": "ctrl+e",
  "command": "runQueryKeyboardAction"
 },
 {
  "key": "f5",
  "command": "-runQueryKeyboardAction"
 }
]

III. Add Extensions

  1. Click the Extensions icon on the left
  2. Select the Install button on the extension you want from the list
  3. Click the Reload button to activate the installed extension

Refer to my previous post for  detailed step-by-step instructions for installing PostgreSQL extension.

IV. Get ‘Actual’ execution plan

  1. Highlight the query and press [Ctrl] + [M], or click Explain

You can modify the keyboard shortcut to your preference in User Settings.

V. Open an Integrated Terminal

  1. Use the Ctrl+` keyboard shortcut (` is the backtick character)
  2. By default, the terminal on my Windows 10 machine uses PowerShell, while Linux and macOS use $SHELL.
  3. You can customize the terminal by specifying the correct path to the shell executable and updating the settings. Below is a list of common shell executables and their default locations.
// Command Prompt
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"
// PowerShell
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
// Git Bash
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe"

Isn’t customizing ADS easy? Let me know what you think.

I’ll share more about Azure Data Studio as I continue the journey – so stay tuned!

Azure Data Studio PostgreSQL Extension – Custom insight dashboard


Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. Staying true to their promise of offering a unified data management experience for developers, Microsoft recently released a PostgreSQL extension for Azure Data Studio.

So now developers can use the same great GUI features for PostgreSQL databases as they do for SQL Server, like IntelliSense, multiple query windows and custom insight dashboards. In this blog post I’ll explain how to create a custom insight and add it to the PostgreSQL dashboard as a widget.

For this exercise I’ll use a simple query to display ‘active connections’ grouped by connection state.

Step 1. Prepare your custom insight

1. Open a new Query Editor (Ctrl+N)

2. Copy/paste the query below

-- ADS custom dashboard - Get Active vs Inactive connections
SELECT state, count(pid)
FROM pg_stat_activity
GROUP BY state, datname
HAVING datname = '<app data>' -- replace with your db name
ORDER BY count(pid) DESC;

3. Save the query as a “connection_stats.sql” file and execute the query (F5)

4. Once you get the result-set, click on View as Chart

5. Customize the chart and click on Create Insight

Step 2. Add the insight to database dashboard

  1. Copy the insight configuration (the JSON data).
  2. Press Ctrl+Comma to open User Settings
  3. Type dashboard in Search Settings
  4. Click Edit for dashboard.database.widgets

  5. Paste the insight configuration JSON into dashboard.database.widgets. A formatted dashboard setting should look something like this
  6. Save the User Settings file
  7. In Server Explorer, right-click on your database server name and click Manage

Similar to the above, you can also add more such “insights” to the default server dashboard. I have created one for checking the top 5 tables by size (using the query below):

-- Get details of TOP `n` tables in database
SELECT cl.relname AS objectname,
       pg_total_relation_size(cl.oid)/1024/1024/1024 AS size_in_GB
FROM pg_class cl
LEFT JOIN pg_namespace n ON (n.oid = cl.relnamespace)
LEFT JOIN pg_stat_user_tables s ON (s.relid = cl.oid)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND cl.relkind <> 'i'
  AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(cl.oid) DESC
LIMIT 5;

Azure Data Studio PostgreSQL Extension – Free data management tool to manage your PostgreSQL databases

Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. Azure Data Studio was initially released only for managing SQL Server; however, with today’s announcement from Microsoft, it is now possible to connect to and manage PostgreSQL databases with the Azure Data Studio PostgreSQL extension (sweet deal!).

In this (and probably the next few) blog posts I’ll be exploring this shiny new Azure Data Studio PostgreSQL extension and sharing my experience. So let’s get started with installation and configuration.

This post is written assuming you already have Azure Data Studio installed on your machine (PC or Mac). If you haven’t installed it already, the steps are simple and available here.

How to Add Azure Data Studio PostgreSQL Extension

  1. Select the extensions icon from the sidebar in Azure Data Studio
  2. Type ‘PostgreSQL‘ into the search bar. Select the PostgreSQL extension
  3. Click “Reload Now”
  4. Extension Install Notification
    Downloading https://pgtoolservice.file.core.windows.net/download/pgsqltoolsservice-win-x64.zip?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-06-28T01:59:13Z&st=2019-02-15T18:59:13Z&spr=https&sig=LPGNO%2BGA%2FjWCavlMuLFKevGK%2FkbulFXARRbXwORgkp4%3D
    
    (21434 KB)....................Done!
    
    Installing pgSQLToolsService service to C:\XXXX\.azuredatastudio\extensions\microsoft.azuredatastudio-pgsql-0.1.0\out\pgsqltoolsservice\Windows\1.2.0-alpha.22
    
    Installed

    Note: If you do not see the above notification, consider restarting the Azure Data Studio window

  5. Click the “New Connection” icon on the SERVERS page
  6. You’ll now be able to select “PostgreSQL” in the Connection type list
  7. Specify all the connection details as below and click Connect:
    - Server Name: PostgreSQL host name
    - User name: User name for the PostgreSQL
    - Password: Password for the PostgreSQL user
    - Database Name: Database name in PostgreSQL
    - Server Group: <Optional - if you want to create a server group>
    - Name: <Optional - name this connection>

In the next few posts, I’ll share my experience using the Azure Data Studio PostgreSQL extension.

PostgreSQL Table Partitioning Part III – Partition Elimination

Understanding Partition Elimination in PostgreSQL 11

This is Part III of my series on Postgres table partitioning. I encourage you to also read Parts I and II:

PostgreSQL Table Partitioning Part I – Implementation Using Inheritance

PostgreSQL Table Partitioning Part II – Declarative Partitioning

In this post, let’s compare READ performance between a partitioned and an un-partitioned table. Before that, let’s first review our table schema and data distribution.

Un-Partitioned Data Set
-- Data Distribution un-partitioned tbl
SELECT logdate, COUNT (logdate)
FROM measurement_np
GROUP BY logdate;

Partitioned Data Set
-- Data Distribution partitioned tbl
SELECT logdate, COUNT (logdate)
FROM measurement
GROUP BY logdate;

Comparing Query Plan (EXPLAIN ANALYZE) and Partition Elimination

1st Execution – on cold cache
-- Un-Partitioned tbl
Index Scan using measurement_np_indx_logdate on measurement_np (cost=0.44..416.13 rows=10672 width=16) (actual time=0.031..40.625 rows=10000 loops=1)
Index Cond: (logdate = '2006-04-11'::date)

-- Partitioned tbl
Append (cost=0.43..4.46 rows=1 width=16) (actual time=0.028..2.316 rows=10000 loops=1)
-> Index Scan using measurement_y2006m04_logdate_idx on measurement_y2006m04 (cost=0.43..4.45 rows=1 width=16) (actual time=0.028..1.813 rows=10000 loops=1)
Index Cond: (logdate = '2006-04-11'::date)

2nd Execution – on hot cache
-- Un-Partitioned tbl
Index Scan using measurement_np_indx_logdate on measurement_np (cost=0.44..337.14 rows=8671 width=16) (actual time=0.040..1.750 rows=10000 loops=1)
Index Cond: (logdate = '2006-04-11'::date)

-- Partitioned tbl
Append (cost=0.43..405.61 rows=9519 width=16) (actual time=0.022..1.942 rows=10000 loops=1)
-> Index Scan using measurement_y2006m04_logdate_idx on measurement_y2006m04 (cost=0.43..358.01 rows=9519 width=16) (actual time=0.021..1.426 rows=10000 loops=1)
Index Cond: (logdate = '2006-04-11'::date)

Conclusion

In both attempts (cold and hot cache), data retrieval from the partitioned table was faster than from the un-partitioned table.

PostgreSQL Table Partitioning Part II – Declarative Partitioning

Starting with Postgres 10, it is possible to create declarative partitions.

In my previous post ‘postgresql-table-partitioning-part-i-implementation-using-inheritance‘, I discussed implementing partitioning in PostgreSQL using ‘Inheritance’. Up until PostgreSQL 9.x, this was the only way to partition tables in PostgreSQL. It was simple to implement, but it had some limitations:

  • Row INSERTs do not automatically propagate data to the child tables (i.e. partitions); instead, an explicit ‘BEFORE INSERT’ trigger is required, which makes inserts slower
  • Indexes and constraints have to be created separately on each child table
  • Significant manual work is required to create and maintain child table ranges

‘Declarative’ partitioning, released with Postgres 10, does not have these limitations and requires much less manual work to manage partitions.

Let’s see the implementation of ‘Declarative’ partitioning with an example:

-- Step 1.
-- Create a partitioned table using the PARTITION BY clause, which includes
-- the partitioning method (RANGE in this example) and the list of column(s)
-- to use as the partition key

CREATE TABLE measurement (
 	city_id int not null,
 	logdate date not null,
 	peaktemp int,
 	unitsales int
) PARTITION BY RANGE (logdate);

-- Step 2.
-- Create an index on the parent table
-- Note: there is no need to create separate indexes on each partition;
-- the index on the parent is created on the child tables automatically

CREATE INDEX measurement_indx_logdate ON measurement (logdate);

-- Step 3.
-- Create a default partition (catch-all for out-of-range values)

CREATE TABLE measurement_default PARTITION OF measurement DEFAULT;

-- Create partitions, each covering an exclusive range

CREATE TABLE measurement_y2006m02 PARTITION OF measurement 
             FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');
CREATE TABLE measurement_y2006m03 PARTITION OF measurement 
             FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');
CREATE TABLE measurement_y2006m04 PARTITION OF measurement 
             FOR VALUES FROM ('2006-04-01') TO ('2006-05-01');
CREATE TABLE measurement_y2007m11 PARTITION OF measurement 
             FOR VALUES FROM ('2007-11-01') TO ('2007-12-01');

Let’s review our schema now

-- Step 4.
-- Insert sample rows

DO $$
BEGIN
  FOR i IN 1..10 LOOP
    INSERT INTO measurement VALUES (1, '2006-02-07', 1, 1);
  END LOOP;
END $$;

DO $$
BEGIN
  FOR i IN 1..1000000 LOOP
    INSERT INTO measurement VALUES (1, '2006-03-06', 1, 1);
  END LOOP;
END $$;

DO $$
BEGIN
  FOR i IN 1..1000000 LOOP
    INSERT INTO measurement VALUES (1, '2006-04-09', 1, 1);
  END LOOP;
END $$;

DO $$
BEGIN
  FOR i IN 1..1000000 LOOP
    INSERT INTO measurement VALUES (1, '2007-11-11', 1, 1);
  END LOOP;
END $$;

--Optional Step
ANALYZE measurement;

-- Step 5.
-- Test partition elimination

SELECT * FROM measurement
WHERE logdate = '2007-11-11';

Let’s look at the Query Plan

As you can see, ‘Declarative’ partitioning is much more intuitive and requires fewer manual steps to declare partitions compared to inheritance.

Thanks for reading!