Upending Preconceived Notions

This month's T-SQL Tuesday (January 2022) invite is brought to you by Andy Yun – @sqlbek.

Andy’s asked us to write about preconceived notions.

I've not written a T-SQL Tuesday post for quite some time – so, why not resolve to get 12 T-SQL Tuesday posts in for '22 (12 in '22 🙂 ), and while I'm at it, revisit my blogging journey? I'm not sure why I stopped blogging regularly – maybe I have a preconceived notion that nobody would read my stuff!

Anyway, before I put pen to paper (so to speak) – or more accurately, fingers to keyboard – I decided to have a look around the web at definitions of "Preconceived Notions". I like this one,

“a personal belief or judgment that is not founded on proof or certainty”

And my problem, for at least the early part of my career, was that I was too willing to take the word of other people at face value, even when I suspected they weren't correct, rather than debating with them. I guess I had the preconceived notion that other people's preconceived notions (which they were often quite happy to share with anybody who cared to listen) were correct.

Here's an example: somebody once told me 'All cursors are bad, never write one' – and then their preconceived notion became mine (unfortunately).

Now, I soon found that cursors did have their uses – maybe not as a standard approach in production, but certainly to help me 'write code that writes code'.

I used them for things like generating comma-separated lists for SELECT statements from column names that I'd pulled from INFORMATION_SCHEMA views.

I used them to create multiple DDL statements and cycle through a list, executing individual statements programmatically – usually just on my local dev machine. (There's a sketch of the idea below.)
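To make that concrete, here's a minimal sketch of the 'write code that writes code' idea – a cursor walks INFORMATION_SCHEMA.COLUMNS and stitches together a SELECT statement. Note that dbo.MyTable is just a hypothetical table name.

DECLARE @ColumnList NVARCHAR(MAX) = N'';
DECLARE @ColumnName SYSNAME;

DECLARE ColumnCursor CURSOR LOCAL FAST_FORWARD FOR
	SELECT COLUMN_NAME
	FROM INFORMATION_SCHEMA.COLUMNS
	WHERE TABLE_SCHEMA = N'dbo'
	AND   TABLE_NAME   = N'MyTable'
	ORDER BY ORDINAL_POSITION;

OPEN ColumnCursor;
FETCH NEXT FROM ColumnCursor INTO @ColumnName;

WHILE @@FETCH_STATUS = 0
BEGIN
	-- Build up the comma separated list, one column at a time
	SET @ColumnList = @ColumnList + N',' + QUOTENAME(@ColumnName);
	FETCH NEXT FROM ColumnCursor INTO @ColumnName;
END;

CLOSE ColumnCursor;
DEALLOCATE ColumnCursor;

-- Strip the leading comma and emit the generated statement
SELECT N'SELECT ' + STUFF(@ColumnList , 1 , 1 , N'') + N' FROM dbo.MyTable;' AS GeneratedCode;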

Most of this type of thing I can now do with other constructs, such as XML for generating CSV lists, or window functions for retrieving values from rows other than the current one.

While my use of cursors has probably subsided over the years, I still use them – but mainly when I'm developing, to 'write code that writes code'.

The moral of the story – I think – is to avoid letting other people's preconceived notions become your preconceived notions; that's how myths start and are perpetuated.

I soon learned my lesson: question everything. The world revolves around people – or is it data? But that's just an expression, or is it yet another preconceived notion 🙂

Have a great day.

Cheers

Martin.

SQL Server STUFF

The STUFF function in SQL Server is one of those little gems that is very under-used, but when needed it can be a really handy utility – at least that's how it appears to me.

The STUFF function allows you to replace specific characters in a string with the contents of another string – we identify those characters by position, signifying where to start and how many characters to replace.

At this stage, you might be wondering why the function is not called REPLACE – great question. It turns out that T-SQL already has a REPLACE function, so (obviously 🙂 ) STUFF became the name for this one.

The function basically takes a string, the position of the character to begin the removal from, the number of characters to remove, and another string containing the characters that should be STUFFed into the original string.

Here's an example.

Let's turn the string "Your Welcome" into the string "You're Welcome" using T-SQL and the STUFF function.

DECLARE @STRING NVARCHAR(20) = 'Your Welcome';
	
SELECT STUFF(@STRING , 4,1,'''re') AS Correction;

And we get 'You're Welcome'.

The strings can even be the result of SELECT statements themselves

SELECT STUFF((SELECT 'Your Welcome') , 4,1,(SELECT '''re')) AS [Correction(Again)];

The replacement string can even be an empty string – in which case we are essentially removing characters. I've found this really useful when constructing comma-separated strings and then removing either the first or the last comma.

Here's an example (removing the leading comma):

SELECT STUFF((SELECT ',Martin,Julie,John,Bert,Stephen,Jean,Claire') , 1,1,'') AS [First Comma removed];

And we get 'Martin,Julie,John,Bert,Stephen,Jean,Claire'.
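And here's a quick sketch of the other direction – removing a trailing comma – using LEN to locate the last character:

DECLARE @CSV NVARCHAR(100) = N'Martin,Julie,John,Bert,Stephen,Jean,Claire,';

-- LEN gives the position of the final character, which we replace with nothing
SELECT STUFF(@CSV , LEN(@CSV) , 1 , '') AS [Last Comma removed];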

Hope somebody finds this useful

Have a great day

Cheers

Martin

SQL Server Sounds

One of my children was exploring sounds at school and asked me to give them a hand.

One of the tasks that was set for them was writing down a list of word pairs that are spelt differently but pronounced identically.

While I managed to come up with a few, it was a bit more challenging than I originally thought.

Anyway, being the techie that I am (or at least think I am 🙂 ) – I decided to see if technology could help me. Specifically, could SQL Server help me out?

I’d worked with SOUNDEX in Oracle many, many years ago, but I’d never really had reason to explore it in SQL Server – well, now was my chance.

So, not only does SQL Server have a SOUNDEX function, it also has a DIFFERENCE function that indicates how similar SQL Server thinks two words sound.

As you might expect with functions based around strings, collation is a factor.

SOUNDEX returns a four-character code that is based on how the string sounds when spoken in English. You can read more about how those codes are put together in the docs.

DIFFERENCE returns an integer value between 0 and 4, with 0 meaning no similarity and 4 representing a very close match.
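As a quick sketch of the two functions working together, here's the classic Smith/Smyth pair from the docs:

SELECT
	  SOUNDEX(N'Smith')               AS [SOUNDEX(Smith)]   -- S530
	, SOUNDEX(N'Smyth')               AS [SOUNDEX(Smyth)]   -- S530
	, DIFFERENCE(N'Smith' , N'Smyth') AS [Difference];      -- 4 : the codes match exactly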

I experimented with a few different combinations, ranging from the identical to the clearly different – I even used some numbers.

Here’s my test data set

USE tempdb;
GO

SELECT 
	  Word 
	, [SOUNDEX(word)]    = SOUNDEX(Word)
	, Compare
	, [SOUNDEX(Compare)] = SOUNDEX(Compare)
	, [Difference]       = DIFFERENCE(Word , Compare)
FROM
(
	VALUES
	  ('Talk'       , 'Torque')
	, ('witch'      , 'which')
	, ('green'      , 'greene')
	, ('bee'        , 'be')
	, ('piece'      , 'peace')
	, ('right'      , 'write')
	, ('absence'    , 'absents')
	, ('awe'        , 'oar')
	, ('ball'       , 'bawl')
	, ('band'       , 'banned')
	, ('beach'      , 'beech')
	, ('seed'       , 'cede')
	, ('heard'      , 'herd')
	, ('licker'     , 'liquor')
	, ('hertz'      , 'hurts')
	, ('centre'     , 'center')
	, ('carat'      , 'carrot')
	, ('heroin'     , 'heroine')
	, ('metal'      , 'mettle')
	, ('lightening' , 'lightning')
	, ('Martin'     , 'Luke')
	, ('you''ll'	, 'yule')
	, ('123'	    , '123')
	, ('9183646262' , '9183646262')
	, ('a'          , 'A')
	, ('boy'        , 'girl')
) AS Sounds(Word , Compare)
ORDER BY
	[Difference] DESC;

Here's a summary of what the results showed.

We can see that it found most homophones to be a 4 – indicating a very strong similarity.

Interestingly enough, it compared numbers OK, and it got the difference between the strings 'Martin' and 'Luke' right – not similar at all. However, it thought the strings 'boy' and 'girl' had at least some similarity, which seemed odd. It didn't get 'right' and 'write' correct, likely because they begin with different letters – the first character of a SOUNDEX code is always the first letter of the string.

On the whole very encouraging – but not perfect.

An interesting venture into the world of sounds in SQL Server – and into what my kids are up to at school.

Have a great day.

Cheers

Marty

More Times Like These with SQL Server

I blogged the other day about having some issues translating Oracle date formats to SQL Server date data types.

I find the way SQL Server handles dates, with a defined data type, to be much more intuitive than the string manipulation techniques that Oracle seems to favour.

My newfound friend is the SQL Server FORMAT() function.

Here are some examples of using FORMAT() with dates. While you can likely use CONVERT to achieve similar results, I find the option of using FORMAT() very useful.

It is also possible to format much more than dates with the FORMAT() function – see the numeric sketch after the date examples below.



SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'ddd dd MMM yyyy')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
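And, as promised, a quick hedged sketch of FORMAT() on numbers – the format strings follow .NET conventions, and the culture argument is optional (defaulting to the session language if omitted):

SELECT FORMAT(1234567.891 , N'N2' , 'en-US') AS [Number];   -- 1,234,567.89
SELECT FORMAT(1234567.891 , N'C'  , 'en-US') AS [Currency]; -- $1,234,567.89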

Have a great day

Cheers

Martin

Times Like These with SQL Server

I was doing some conversion of Oracle code (PL/SQL) to SQL Server code (T-SQL) – which had some quirks.

I just thought that I’d share a quick tip that has helped me out quite a bit.

Much of the Oracle code had time manipulation functions – but they were done in a way that I didn't find intuitive. There was a lot of what appeared to be casting (or similar) between text strings containing dates and times.

One of the nice things about T-SQL – in my opinion – is that there are a lot of different data types, some of which deal with times.

On top of that, there are functions that allow a specific part of a date/time to be captured – for example, we might want the month number, or the day number of the month, or the hour – any part of a date/time really.

So, here’s a list of some of the ways to get that information directly from SQL Server.

SELECT DATEPART(DAY         , SYSDATETIME());
SELECT DATEPART(YEAR        , SYSDATETIME());
SELECT DATEPART(QUARTER     , SYSDATETIME());
SELECT DATEPART(MONTH       , SYSDATETIME());
SELECT DATEPART(DAYOFYEAR   , SYSDATETIME());
SELECT DATEPART(WEEK        , SYSDATETIME());
SELECT DATEPART(WEEKDAY     , SYSDATETIME());
SELECT DATEPART(HOUR        , SYSDATETIME());
SELECT DATEPART(MINUTE      , SYSDATETIME());
SELECT DATEPART(SECOND      , SYSDATETIME());
SELECT DATEPART(MILLISECOND , SYSDATETIME());
SELECT DATEPART(MICROSECOND , SYSDATETIME());
SELECT DATEPART(NANOSECOND  , SYSDATETIME());
SELECT DATEPART(TZOFFSET    , SYSDATETIMEOFFSET()); -- TZOFFSET needs a time-zone-aware value such as DATETIMEOFFSET
SELECT DATEPART(ISO_WEEK    , SYSDATETIME());
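
DATEPART also has a sibling, DATENAME, which returns the part as a character string rather than an integer – handy for things like month and weekday names:

SELECT DATENAME(MONTH   , SYSDATETIME()); -- e.g. 'January'
SELECT DATENAME(WEEKDAY , SYSDATETIME()); -- e.g. 'Tuesday'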

You can read more about both in the Microsoft documentation.

Have a great day

Cheers

Martin.

Identifying Columns with Collations in SQL Server

Recently I was asked if I could identify columns in a single SQL Server database that had a collation. I didn't need to change anything (well, not initially, but we'll look at doing that anyway) – simply identify columns that were candidates to have their collation changed.

Now, there are a few other considerations here, notably the database collation and the server collation, but more on that later.
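(As a quick aside, here's one way to check those two collations:)

SELECT SERVERPROPERTY('Collation')                 AS [Server Collation];
SELECT DATABASEPROPERTYEX(DB_NAME() , 'Collation') AS [Database Collation];

For now, back to the problem at hand.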

My first stop was to identify all data types that could possibly have a collation.

In order to do that I’m going to head to my old friends – the system tables.

First off, a quick trip to sys.types to see which data types may have collations:

SELECT T.[name] FROM sys.types AS T WHERE T.collation_name IS NOT NULL;

This yields the following results:

  • text
  • ntext
  • varchar
  • char
  • nvarchar
  • nchar
  • sysname

SYSNAME is an interesting one – it's basically equivalent to NVARCHAR(128); there's a discussion about that here.
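You can see that relationship in sys.types itself – sysname reports a max_length of 256 bytes, i.e. 128 Unicode characters:

SELECT [name] , max_length FROM sys.types WHERE [name] IN (N'sysname' , N'nvarchar');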

But in order to find the columns in our database that have collations, we can use a query such as:

	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						-- max_length is in bytes : nchar / nvarchar use 2 bytes per character
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0;

From here, it's just a matter of extending the query to produce some T-SQL that will update the collations of the columns – I chose to put this in a Common Table Expression (CTE), but go with whatever suits you.

WITH CTE
AS
(
	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0
)
SELECT N'ALTER TABLE ' + [Table Name] + N' ALTER COLUMN ' + [Column Name] + N' ' + [Typelength] + N' COLLATE Latin1_General_CI_AS' + N';' AS [DDL] FROM CTE;

This will produce statements that change the collation of columns on an individual basis. Notice that in the final SELECT statement from the CTE, I've specified COLLATE Latin1_General_CI_AS. This will generate a lot of statements such as

ALTER TABLE CollationChangeTable ALTER COLUMN ColumnOne NCHAR(8) COLLATE Latin1_General_CI_AS;

I prefer this approach to a cursor as it gives me an opportunity to eyeball the T-SQL and column names.

A cursor may be more appropriate in a large database – your choice; there are plenty of options around.

Proceed with caution, as getting column COLLATIONs out of sync with the database and instance can bite you hard down the track – such as conflicts with tempdb joins. (There's a sketch of that below.)
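Here's a hedged sketch of the kind of tempdb conflict I mean – dbo.People is a hypothetical table whose PersonName column has a collation different from tempdb's:

CREATE TABLE #Names (PersonName NVARCHAR(50)); -- takes tempdb's default collation

-- If dbo.People.PersonName uses a different collation, this join can fail with
-- Msg 468 : 'Cannot resolve the collation conflict ...'
-- Applying COLLATE DATABASE_DEFAULT to one side resolves it.
SELECT P.PersonName
FROM dbo.People AS P
JOIN #Names AS N
  ON P.PersonName = N.PersonName COLLATE DATABASE_DEFAULT;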

Have a great day.

Cheers

Marty.

OPENJSON and CROSS APPLY

OPENJSON is pretty central to manipulating JSON documents in T-SQL.

As we’ve seen, we can use a default schema that will return metadata about the JSON document or we can use an explicit schema where we supply a WITH clause and a PATH to the data that we want.

So far, in our examples we have either used the default schema or an explicit schema – but we have not mixed the two.

Indeed, if you try this in a simple query you'll find it is not possible without the use of the APPLY operator.

OPENJSON is actually a table-valued function that parses JSON – either into an explicit schema (that we provide) or the default schema.

We can mix both the default and the explicit schema by using CROSS APPLY, like so:

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
	 DS.[key]
	,DS.[value]
	,DS.[type]
	,ES.[Configuration name]
	,ES.[Value]
	,ES.[minimum]
	,ES.[maximum]
	,ES.[value_in_use]
	,ES.[description]
	,ES.[is_dynamic]
	,ES.[is_advanced] 
FROM  
     OPENJSON(@json)  AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [Configuration name]  NVARCHAR(35)
	,[Value]               NVARCHAR(35)
	,[minimum]             NVARCHAR(35)
	,[maximum]             NVARCHAR(35)
	,[value_in_use]        NVARCHAR(35)
	,[description]         NVARCHAR(35)
	,[is_dynamic]          NVARCHAR(35)
	,[is_advanced]         NVARCHAR(35)
) AS ES
GO

This technique can come in extremely handy when attempting to drill down into sub-arrays and objects contained in a JSON document – there's a sketch of that after the next paragraph.

Notice also that aliasing the default and explicit schemas differently has allowed us to request data from both. This is what lets us get to the element named value – that name appears in both the default and the explicit schema, and so an alias is required to disambiguate.
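And here's that sub-array sketch – the document shape (a server with a Databases array) is invented for illustration:

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Server": "MyServer",
    "Databases": [
        { "name": "db1", "size_mb": 100 },
        { "name": "db2", "size_mb": 250 }
    ]
}
';

SELECT
	 Doc.[Server]
	,DBs.[name]
	,DBs.[size_mb]
FROM
	OPENJSON(@json)
WITH
(
	 [Server]    NVARCHAR(128)
	,[Databases] NVARCHAR(MAX) AS JSON  -- keep the sub-array as JSON text
) AS Doc
CROSS APPLY
	OPENJSON(Doc.[Databases])           -- then shred the array, one row per element
WITH
(
	 [name]    NVARCHAR(128)
	,[size_mb] INT
) AS DBs;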

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON and an Alias

Using Aliases in T-SQL is very common.

We can alias both Tables (FROM clause) and Columns (SELECT clause) and some other things too. It’s all pretty fundamental to writing good T-SQL queries.

Here’s an example

SELECT
	 dbs.collation_name
	,dbs.database_id AS [DatabaseID]
	,[IsReadOnly] = dbs.is_read_only
FROM 
	sys.databases AS dbs
JOIN
	sys.master_files MF
ON 
	dbs.database_id = MF.database_id
WHERE
	dbs.[name] = 'master';
GO

Note that we can use different styles; however, whichever you choose, I recommend you standardise on it.

Anyway, the point of this post is to outline how aliasing is done when using the default and explicit JSON schemas in our T-SQL queries.

Here is an example of aliasing with the default schema

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
      DS.[key]
	 ,DS.[value]
	 ,DS.[type]
FROM  
     OPENJSON(@json, '$."Configuration Property"')  AS DS;
GO

There’s nothing particularly unusual here.

The slightly unusual part may appear when aliasing with an explicit schema.

Here’s an example

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
     ES.[Configuration name] 
    ,ES.[Value]
    ,ES.[minimum]  
    ,ES.[maximum]
    ,ES.[value_in_use]
    ,ES.[description]
    ,ES.[is_dynamic]
    ,ES.[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
) AS ES;

Note that the alias for an explicit schema comes after the WITH clause.

If we try to put the alias directly after the line

OPENJSON(@json, '$."Configuration Property"') 

Then we will receive an error.

While this may seem trivial, it will become very useful when mixing the default schema and an explicit schema.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART II

We've looked at pulling data from a JSON document into relational table format using an explicit schema defined in the WITH clause of the OPENJSON table-valued function.

However, in that example, we used a PATH expression that was taken from the root of the JSON document.

A question that I recently had was 'What if you want to take the PATH from a certain point in the JSON document?' – and that's what we will look at here.

We'll use this simple document, which contains one object:

{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}

Here, we want to start retrieving data from the document starting at the path

$."Configuration Property"

All we need to do is define the base PATH (the starting point) in our OPENJSON query.

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';

SELECT 
	 [Configuration name] 
	,[Value]
	,[minimum]  
	,[maximum]
	,[value_in_use]
	,[description]
	,[is_dynamic]
	,[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
);

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART I

Recently we reviewed FOR JSON PATH. That was used for shaping tabular data (data that comes directly from a SQL table) into a JSON document.

The PATH we are talking about here is used with OPENJSON to get to the data that is contained in the JSON document.

We’ve seen an example of OPENJSON and reading data with an explicit schema.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

Turns out that in this example, the data is right at the top level of the document.

 {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }

and so we didn’t need to explicitly state the PATH – however we still could have. And here it is.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
, '$')
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The query below has the PATH defined explicitly in the WITH clause.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
')
WITH
(
     [configuration_id]     INT               '$.configuration_id'
    ,[Configuration name]   NVARCHAR(35)      '$."Configuration name"'
    ,[Value]                NVARCHAR(200)     '$.Value'
    ,[minimum]              NVARCHAR(200)     '$.minimum'
    ,[maximum]              NVARCHAR(200)     '$.maximum'
    ,[value_in_use]         NVARCHAR(200)     '$.value_in_use'
    ,[description]          NVARCHAR(200)     '$.description'
    ,[is_dynamic]           BIT               '$.is_dynamic'
    ,[is_advanced]          BIT               '$.is_advanced'
);
GO

Notice also that Configuration name has a space in it. So, in the PATH we simply enclose it in double quotes “..” – in a similar way to how square brackets would be used in a T-SQL expression.

Let's look at a slightly differently shaped JSON document and see how we would change the PATH in the WITH clause to locate all of the desired values.

SELECT 
	 [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON(
'
{
  "configuration_id": 101,
  "Configuration_Property": {
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
}
')
WITH
(
     [configuration_id]     INT            '$."configuration_id"'
    ,[Configuration name]   NVARCHAR(35)   '$."Configuration_Property"."Configuration name"'
    ,[Value]                NVARCHAR(200)  '$."Configuration_Property"."Value"'
    ,[minimum]              NVARCHAR(200)  '$."Configuration_Property"."minimum"'
    ,[maximum]              NVARCHAR(200)  '$."Configuration_Property"."maximum"'
    ,[value_in_use]         NVARCHAR(200)  '$."Configuration_Property"."value_in_use"'
    ,[description]          NVARCHAR(200)  '$."Configuration_Property"."description"'
    ,[is_dynamic]           BIT            '$."Configuration_Property"."is_dynamic"'
    ,[is_advanced]          BIT            '$."Configuration_Property"."is_advanced"'	
);

Notice that no 'default path' is specified on the OPENJSON clause (after the JSON document argument).

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty
