Azure Data Studio – Switching from Management Studio (SSMS) to Azure Data Studio (ADS)

Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. The initial release of Azure Data Studio (ADS) was compatible only with SQL Server; however, Microsoft recently released a PostgreSQL extension for ADS, so now you can also manage your PostgreSQL instances using ADS. For more details on the Azure Data Studio PostgreSQL extension, refer to my earlier posts.

Initially, I was apprehensive about switching to ADS as I did not want to leave the comfort and ease of SSMS; after all, I had been using it for more than a decade. My thoughts changed once I started using it.

Also, did you know ADS is built on Visual Studio Code, which offers multiple options to ‘customize’? In this post I’ll take that feature and share how you can ‘customize’ and ‘personalize’ Azure Data Studio.

I. Change the Color theme

  1. Click the gear icon at the bottom left and select Settings
  2. Click on Color Theme
  3. Choose from the available options (Light, Dark, and High Contrast themes)
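Since ADS is built on VS Code, the chosen theme is just a user setting under the hood. Picking a theme writes an entry like the sketch below into your settings file (the theme name here is only an illustration):

```json
{
    "workbench.colorTheme": "Solarized Dark"
}
```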

II. Change Keyboard Shortcuts

  1. Click the gear icon at the bottom left and select Settings
  2. Click on Keyboard Shortcuts

For example, to remap Run Query to Ctrl+E (and remove the default F5 binding):

// Place your key bindings in this file to overwrite the defaults
[
 {
  "key": "ctrl+e",
  "command": "runQueryKeyboardAction"
 },
 {
  "key": "f5",
  "command": "-runQueryKeyboardAction"
 }
]

III. Add Extensions

  1. Click the Extensions icon on the left
  2. Select the Install button on the extension you want from the list
  3. Click the Reload button to activate the installed extension

Refer to my previous post for detailed step-by-step instructions on installing the PostgreSQL extension.

IV. Get ‘Actual’ execution plan

  1. Highlight the query and press [Ctrl]+[M], or click Explain

You can modify the keyboard shortcut to your preference in user settings.
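As a sketch, an entry like the one below in the keybindings file would remap the actual-plan action to Shift+F5. The command id shown here is hypothetical, so look up the exact id by searching the Keyboard Shortcuts list:

```json
// Command id is hypothetical - confirm the exact id in the Keyboard Shortcuts list
[
 {
  "key": "shift+f5",
  "command": "runQueryExplainActualKeyboardAction"
 }
]
```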

V. Open an Integrated Terminal

  1. Use the Ctrl+` keyboard shortcut (Ctrl plus the backtick character)
  2. By default, the terminal on my Windows 10 machine uses PowerShell, while Linux and macOS use $SHELL.
  3. You can change the terminal by specifying the path to your preferred shell executable and updating the settings. Below is a list of common shell executables and their default locations.
// Command Prompt
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"
// PowerShell
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
// Git Bash
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe"
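For example, to switch my terminal to Git Bash, the relevant user settings entry would look like this sketch (adjust the path to wherever Git is installed on your machine):

```json
{
    "terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe"
}
```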

Isn’t customizing on ADS easy? Let me know what you think.

I’ll share more about Azure Data Studio as I continue the journey – so stay tuned!

Azure Data Studio PostgreSQL Extension – Custom insight dashboard


Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. Staying true to its promise of offering a unified data management experience for developers, Microsoft recently released a PostgreSQL extension for Azure Data Studio.

So now developers can use the same great GUI features with PostgreSQL databases as they do with SQL Server: IntelliSense, multiple query windows, and custom insight dashboards. In this blog post I’ll explain how to create a custom insight and add it to the PostgreSQL dashboard as a widget.

For this exercise I’ll use a simple query to display ‘active connections’ grouped by connection state.

Step 1. Prepare your custom insight

1. Open a new Query Editor (Ctrl+N)

2. Copy/paste the query below

-- ADS custom dashboard - Get active vs. inactive connections
SELECT state, count(pid)
  FROM pg_stat_activity
 WHERE datname = '<app data>' -- replace with your db name
 GROUP BY state
 ORDER BY count(pid) DESC;

3. Save the query as a “connection_stats.sql” file and execute it (F5)

4. Once you get the result set, click on View as Chart

5. Customize the chart and click on Create Insight

Step 2. Add the insight to database dashboard

  1. Copy the insight configuration (the JSON data).
  2. Press Ctrl+Comma to open User Settings
  3. Type dashboard in Search Settings
  4. Click Edit for dashboard.database.widgets

  5. Paste the insight configuration JSON into dashboard.database.widgets
  6. Save the User Settings file
  7. In Server Explorer, right click on your database server name and click Manage
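For reference, a widget entry inside dashboard.database.widgets has roughly the shape sketched below. The inner insight configuration is whatever the Create Insight action copied for you in Step 1; the chart type and queryFile path shown here are made-up examples:

```json
{
    "name": "Active connections",
    "gridItemConfig": { "sizex": 2, "sizey": 1 },
    "widget": {
        "insights-widget": {
            "type": { "bar": {} },
            "queryFile": "C:\\Users\\me\\connection_stats.sql"
        }
    }
}
```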
    Similar to the above, you can also add more such “insights” to the default server dashboard. I have created one to check the top 5 tables by size, using the query below:

    -- Get details of TOP `n` tables in database
    SELECT cl.relname AS objectname
         , pg_total_relation_size(cl.oid)/1024/1024/1024 AS size_in_GB
      FROM pg_class cl
      LEFT JOIN pg_namespace n ON (n.oid = cl.relnamespace)
      LEFT JOIN pg_stat_user_tables s ON (s.relid = cl.oid)
     WHERE nspname NOT IN ('pg_catalog', 'information_schema')
       AND cl.relkind <> 'i'
       AND nspname !~ '^pg_toast'
     ORDER BY pg_total_relation_size(cl.oid) DESC
     LIMIT 5;

Azure Data Studio PostgreSQL Extension – Free data management tool to manage your PostgreSQL databases

Azure Data Studio (formerly SQL Operations Studio) is a free cross-platform DB management tool for Windows, macOS and Linux. Azure Data Studio was initially released only for managing SQL Server; however, with today’s announcement from Microsoft, it is now possible to connect to and manage PostgreSQL databases with the Azure Data Studio PostgreSQL extension (sweet deal!)

In this (and probably the next few) blog posts I’ll be exploring this shiny new Azure Data Studio PostgreSQL extension and sharing my experience. So let’s get started with install and configuration.

This post is written assuming you already have Azure Data Studio installed on your machine (PC or Mac). If you haven’t installed it yet, the steps are simple and available here.

How to Add Azure Data Studio PostgreSQL Extension

  1. Select the extensions icon from the sidebar in Azure Data Studio
  2. Type ‘PostgreSQL’ into the search bar and select the PostgreSQL extension
  3. Click “Reload Now”
  4. Extension Install Notification
    Downloading https://pgtoolservice.file.core.windows.net/download/pgsqltoolsservice-win-x64.zip?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-06-28T01:59:13Z&st=2019-02-15T18:59:13Z&spr=https&sig=LPGNO%2BGA%2FjWCavlMuLFKevGK%2FkbulFXARRbXwORgkp4%3D
    
    (21434 KB)....................Done!
    
    Installing pgSQLToolsService service to C:\XXXX\.azuredatastudio\extensions\microsoft.azuredatastudio-pgsql-0.1.0\out\pgsqltoolsservice\Windows\1.2.0-alpha.22
    
    Installed

    Note: If you do not see the above notification, consider restarting the Azure Data Studio window.

  5. Click “New Connection” icon in the SERVERS page
  6. Here, you’ll now be able to select “PostgreSQL” in the Connection type list
  7. Specify all the connection details as below and click Connect:
    - Server Name: PostgreSQL host name
    - User name: User name for the PostgreSQL
    - Password: Password for the PostgreSQL user
    - Database Name: Database name in PostgreSQL
    - Server Group: <Optional - if you want to create a server group>
    - Name: <Optional - name this connection>

In the next few posts, I’ll share my experience using the Azure Data Studio PostgreSQL extension.

SQLSaturday #116 – Question of the day: When does a GRANT override a DENY in SQL Server?


SQLSaturday #116 – Bangalore

WoW!! What a great day it was to learn and connect with SQL gurus and like-minded folks who share the same passion for SQL Server as I do. It’s quite an honor to meet and listen to @blakhani, @pinaldave, @banerjeeamit, @vinodk_sql, @kashyapa, Rick @Joes2Pros… You guys are truly inspirational!

Fellow attendees: for those of you on Twitter, follow #sqlpass and make sure to check the #sqlsat116 and #sqlsaturday hashtags to stay up to date.

As tweeted by @vinodk_sql: “Race to the first 10 blog posts on #SQLSat116. If you attended the event, we want to hear from you. Drop us a nudge 🙂 …”

Here’s the first one to start with –

Fellow attendees – During the session on Security Pitfalls, Vinod gave us this interesting question to reply/tweet later.

Question: When GRANT overrides DENY in #sql server? Was asked during a session today in #sqlsaturday #sqlsat116

Answer: An explicit GRANT on a TABLE::COLUMN takes precedence on DENY TABLE #sqlsat116 #sqlsaturday

And here’s a simple test I did to get my answer.

We need 3 users/logins (one being sysadmin):

--Here:
--UserOne is sysadmin (SA) on the SQL instance
--UserTwo has READ/WRITE privileges on the [test] database
--UserThree is the database owner (dbo) of the [test] database

--Step 1. Create a sample table, say 'iden_test', with 2 columns (id, name)
USE [test]
GO

CREATE TABLE [dbo].[iden_test](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [name] [varchar](10) NULL
)
GO

--Step 2. Run the T-SQL below to DENY READ (SELECT) to UserTwo
DENY SELECT ON OBJECT::dbo.iden_test TO UserTwo
GO

--Step 3. As UserTwo, try running the query below; it should fail with a permissions error
SELECT id FROM dbo.iden_test
GO

Error: Msg 229, Level 14, State 5, Line 1
The SELECT permission was denied on the object ‘iden_test’, database ‘test’, schema ‘dbo’.

--Step 4. Log in as UserThree (the database owner) and run the T-SQL below to GRANT SELECT on the [id] column
GRANT SELECT ON OBJECT::dbo.iden_test(id) TO UserTwo
GO

--Step 5. Log in again as UserTwo and run the SELECT (per Step 3) again
SELECT id FROM dbo.iden_test
GO

I can get the results for the specific column now.

Conclusion: a GRANT on a table column overrides a DENY on the same table.


Identify queries that consume a large amount of log space in SQL Server

One of the regular issues DBAs face is T-log (transaction log) growth: situations wherein one “bad” or poorly designed query can eat up the entire T-log space, bring the free space to zero, and then bring your application down. The cause and remedy of most of these issues is discussed in KB 317375 (I’m a big fan of Microsoft KBs).

While the KB discusses the causes of and approaches to dealing with high T-log growth, it also hints at how we can ‘proactively’ find the queries consuming T-log space at any given moment using DMVs. Taking a cue from this, I have written the T-SQL code below:

Identify queries consuming large T-log space:

USE <your_database_name>
GO
-- Description: get queries consuming maximum T-log space
-- Source: based on KB 317375
-- Author: varun.dhawan@gmail.com

SELECT dtst.session_id AS 'spid'
     , CAST(DB_NAME(dtdt.database_id) AS VARCHAR(20)) AS 'database'
     , der.command
     , SUBSTRING(st.text, (der.statement_start_offset / 2) + 1,
           ((CASE der.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE der.statement_end_offset
             END - der.statement_start_offset) / 2) + 1) AS statement_text
     , COALESCE(QUOTENAME(DB_NAME(st.dbid)) + N'.'
           + QUOTENAME(OBJECT_SCHEMA_NAME(st.objectid, st.dbid)) + N'.'
           + QUOTENAME(OBJECT_NAME(st.objectid, st.dbid)), '') AS command_text
     , der.wait_type
     , der.wait_time
     , dtdt.database_transaction_log_bytes_used / 1024.0 / 1024.0 AS 'mb used'
     , dtdt.database_transaction_log_bytes_used_system / 1024.0 / 1024.0 AS 'mb used system'
     , dtdt.database_transaction_log_bytes_reserved / 1024.0 / 1024.0 AS 'mb reserved'
     , dtdt.database_transaction_log_bytes_reserved_system / 1024.0 / 1024.0 AS 'mb reserved system'
     , dtdt.database_transaction_log_record_count AS 'record count'
  FROM sys.dm_tran_database_transactions dtdt
  JOIN sys.dm_tran_session_transactions dtst
    ON dtdt.transaction_id = dtst.transaction_id
  JOIN sys.dm_exec_requests der
    ON dtst.session_id = der.session_id
 CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS st
 ORDER BY dtdt.database_transaction_log_bytes_used DESC;
Hope this helps you too!

Disclaimer: Everything here, is my personal opinion and is not read or approved by my employer before it is posted. No warranties or other guarantees will be offered as to the quality of the opinions or anything else offered here.

What is a columnstore index?

WARNING: This blog post is based on pre-release software, so things could change. For more details on the CTP, please refer to the SQL Server Code-Named “Denali” CTP1 Release Notes.

The upcoming SQL Server release introduces a new data warehouse query acceleration feature based on a new type of index called the columnstore. Before we move any further in exploring this new feature, I want to take time to explain the basics behind a columnstore index and how different it is from a traditional (rowstore) index.

What is a columnstore? And what is a rowstore?

To understand this, let’s look at the simple illustration below. Here I have a table with 4 columns (First Name, Email, Phone, Street Address). Below is a representation of how the index data is stored, with the associated pros and cons.

(Illustration: rowstore vs. columnstore page layout)

As opposed to a rowstore, a columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. So in the above example, the columns (First Name, Email, Phone, Street Address) are stored in different groups of pages in a columnstore index.

So what’s BAD with the rowstore design?

Say we have to run a query like ‘SELECT first_name, phone FROM emp’. In a rowstore design, the DBMS will transfer the ENTIRE row from disk to the memory buffer even though the query requires just 2 attributes. For large read-intensive queries, that is a lot of unnecessary disk I/O, wasting precious disk bandwidth.

And what’s good with the columnstore design?

1. Better performance for SELECTs: only the attributes needed to answer a query are fetched from disk, thereby saving on disk I/O.
2. Better compression ratio: it’s easier to compress the data due to the redundancy of values within a column.

Are they really that good?

Wait, “there’s no free lunch.” Due to the change in index storage design, any tuple (row) writes are very expensive on a columnstore index. As such, in Denali, tables with columnstore indexes can’t be updated directly using INSERT, UPDATE, DELETE and MERGE statements, or bulk load operations. Hence, to perform DML on the table, we may need to disable/drop the index temporarily and then re-create it after the DML activity.
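Based on the pre-release material, creating a columnstore index and working around the read-only limitation would look something like the sketch below. The syntax may change before release, and the emp table here is the hypothetical one from the illustration above:

```sql
-- Hypothetical emp table from the illustration above; Denali pre-release syntax, subject to change
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_emp
    ON dbo.emp (first_name, email, phone, street_address);

-- DML against the table now fails, so disable the index first...
ALTER INDEX ncci_emp ON dbo.emp DISABLE;

INSERT INTO dbo.emp (first_name, email, phone, street_address)
VALUES ('John', 'john@example.com', '555-0100', '1 Main St');

-- ...and rebuild it after the data change
ALTER INDEX ncci_emp ON dbo.emp REBUILD;
```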

Hope this provides you with some initial understanding of ROWSTORE vs. COLUMNSTORE. This feature is expected to be available in the next CTP build of Denali, so once we have the build I will share a demo.

Thanks for reading!
