SQL Server Sounds

One of my children was exploring sounds at school and asked me to give them a hand.

One of the tasks that was set for them was writing down a list of word pairs that are spelt differently but pronounced identically.

While I managed to come up with a few, it was a bit more of a challenge than I originally thought.

Anyway, being the techie that I am (or at least think I am 🙂 ) – I decided to see if technology could help me. Specifically, could SQL Server help me out?

I’d worked with SOUNDEX in Oracle many, many years ago, but I’d never really had reason to explore it in SQL Server – well, now was my chance.

So, not only does SQL Server have a SOUNDEX function, it also has a DIFFERENCE function that indicates how similar SQL Server thinks two strings sound.

As you might expect with functions that operate on strings, collation is a factor.

SOUNDEX returns a four-character code that is based on how the string sounds when spoken in English. You can read more about how those codes are put together in the docs.

DIFFERENCE returns an integer value between 0 and 4, with 0 meaning no similarity and 4 representing a very close match.
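For instance, here's a quick sanity check (a minimal sketch – the expected results, noted in the comments, are based on my reading of the docs):

SELECT SOUNDEX('Smith');              -- S530
SELECT SOUNDEX('Smyth');              -- S530 (same code, different spelling)
SELECT DIFFERENCE('Smith' , 'Smyth'); -- 4 (a very close match)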

I experimented with a few different combinations, ranging from the identical to the clearly different – I even used some numbers.

Here’s my test data set:

USE tempdb;
GO

SELECT 
	  Word 
	, [SOUNDEX(word)]    = SOUNDEX(Word)
	, Compare
	, [SOUNDEX(Compare)] = SOUNDEX(Compare)
	, [Difference]       = DIFFERENCE(Word , Compare)
FROM
(
	VALUES
	  ('Talk'       , 'Torque')
	, ('witch'      , 'which')
	, ('green'      , 'greene')
	, ('bee'        , 'be')
	, ('piece'      , 'peace')
	, ('right'      , 'write')
	, ('absence'    , 'absents')
	, ('awe'        , 'oar')
	, ('ball'       , 'bawl')
	, ('band'       , 'banned')
	, ('beach'      , 'beech')
	, ('seed'       , 'cede')
	, ('heard'      , 'herd')
	, ('licker'     , 'liquor')
	, ('hertz'      , 'hurts')
	, ('centre'     , 'center')
	, ('carat'      , 'carrot')
	, ('heroin'     , 'heroine')
	, ('metal'      , 'mettle')
	, ('lightening' , 'lightning')
	, ('Martin'     , 'Luke')
	, ('you''ll'	, 'yule')
	, ('123'	    , '123')
	, ('9183646262' , '9183646262')
	, ('a'          , 'A')
	, ('boy'        , 'girl')
) AS Sounds(Word , Compare)
ORDER BY
	[Difference] DESC;

And here are the results

We can see that it found most homophones to be a 4 – indicating a very strong similarity.

Interestingly enough, it compared numbers OK, and it got the difference between the strings ‘martin’ and ‘luke’ right – not similar at all. However, it thought the strings ‘boy’ and ‘girl’ had at least some similarity, which seemed odd. It didn’t get ‘right’ and ‘write’ correct, most likely because they begin with different letters – SOUNDEX keeps the first letter of the string as-is.

On the whole very encouraging – but not perfect.

An interesting venture into the world of sounds in SQL Server – and into what my kids are up to at school.

Have a great day.

Cheers

Marty

SQL Grouping with Rollup – Simple use case

G’Day,

I was recently looking at getting some sales data from SQL Server, aggregating some totals and then displaying a grand total.

This is the sort of thing that very often gets done on the client side – but as it turns out, this can be accomplished using standard T-SQL, and, before you ask, no, there are no cursors in sight.

So I put together some sample data – here it is:

USE tempdb;
GO
/*
Create a simple table.
We'll simply group by ClientName
*/
IF OBJECT_ID('dbo.Sales') IS NOT NULL
	BEGIN
		DROP TABLE dbo.Sales;
	END;
GO
CREATE TABLE dbo.Sales
(
	 SaleID   INT NOT NULL IDENTITY(1,1)
	,ClientName NVARCHAR(10) NULL
	,Amount  INT NOT NULL
);
GO
/*
Insert some data
*/
INSERT INTO dbo.Sales(ClientName , Amount)
SELECT
	  ClientName
	, Amount
FROM
(
	VALUES
	('Marty' , 10),
	('Marty' , 1),
	('Marty' , 3),
	('Callum' , 6),
	('Callum' , 3),
	('Callum' , 23),
	('Jake' , 45),
	('Jake' , 3),
	('Jake' , 67),
	('Bert' , 5),
	('Bert' , 1),
	('Bert' , 0),
	('Bert' , 6),
	('Zeb' , 35),
	('Zeb' , 23),
	('Zeb' , 12)
) AS TempSales(ClientName,Amount);
-----------------------------------------------------------------

The output that we’d really like – using only T-SQL – is shown below.

Note that the last line contains a grand total.

While there are a few ways we might be able to get this result set, let’s concentrate on one in particular: using ROLLUP and GROUPING.

The following query comes pretty close

SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Grouping]  = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ROLLUP(ClientName)
ORDER BY
	[Grouping] ASC , [Full Name];

and gives us

Which is close – but note that we have an extra column called “Grouping”, and while the Total row looks fine, its name is registering as NULL!

So, how did the “Grouping” column get there?

Well, first off – I decided to call it “Grouping”, but I could literally have called it anything I wanted.

The column actually came from the use of the T-SQL GROUPING function. This function basically indicates which rows are totals that ROLLUP has added.

The totalled column (in this case there is only one) has a NULL value – and the 1 in the “Grouping” column indicates that this NULL row is giving us a grand total, which differentiates it from any NULL values in the same column that are not grand totals.

To demonstrate, let’s change the value ‘Bert’ in our original table to NULL

UPDATE dbo.Sales SET ClientName = NULL WHERE ClientName = 'Bert';

and when we run the query again

SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Grouping]  = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ROLLUP(ClientName)
ORDER BY
	[Grouping] ASC , [Full Name];

we get the result

Note that we now have two NULL values – one of which actually represents a Grand Total and the other just represents NULL values in the table.

Also note that SQL has grouped the NULL values from the table together – so watch out for how NULLs are handled in various T-SQL expressions.
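As a quick aside, this little sketch shows the quirk: GROUP BY treats all NULLs as a single group, even though a direct NULL = NULL comparison does not evaluate to true.

/*
NULLs group together under GROUP BY - one row covers every NULL ClientName
*/
SELECT ClientName , [Rows] = COUNT(*)
FROM dbo.Sales
GROUP BY ClientName;
/*
...yet NULL = NULL is UNKNOWN, so the ELSE branch fires
*/
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;  -- 'not equal'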

So the only way we know which NULL represents a Grand Total is by looking at the “Grouping” column – the one that has a Grand Total value of 1, while the aggregated NULL values from the table are represented by a 0 in the “Grouping” column.

With this knowledge, let’s try and tidy up this output to get the output that we originally wanted – I chose to use a CTE

WITH ClientAmountsCTE
AS
(
	SELECT 
		  [Client Name] = ClientName
		, [Units]       = SUM(Amount)
		, [Total]       = GROUPING(ClientName)
	FROM 
		dbo.Sales
	GROUP BY ROLLUP(ClientName)
)
SELECT 
	  [Client Name] =
	  CASE
		WHEN [Total] = 1 THEN 'Total'
		WHEN [Client Name] IS NULL THEN 'UNKNOWN'
		ELSE [Client Name]
	  END
	, [Units]
	--, [Total]
FROM 
	ClientAmountsCTE
ORDER BY
	[Total] ASC;
GO

which gives us the result

Notice that a CASE expression has been used to differentiate the NULL rows based on the value of the “Grouping” column (which is itself not used in the display output).

ROLLUP can do a little more than this when multiple columns are involved, so check out the docs.
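To give a flavour of the multi-column case, here is a hedged sketch – it assumes a hypothetical Region column, which our dbo.Sales table above does not actually have:

/*
A sketch only - assumes a hypothetical Region column on dbo.Sales.
ROLLUP(Region, ClientName) produces per-(Region, ClientName) rows,
per-Region subtotals and a grand total; the two GROUPING columns
tell us which kind of row we are looking at.
*/
SELECT
	  [Region]
	, [Full Name]    = ClientName
	, [Units]        = SUM(Amount)
	, [Region Total] = GROUPING(Region)
	, [Client Total] = GROUPING(ClientName)
FROM
	dbo.Sales
GROUP BY ROLLUP(Region , ClientName);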

Note also that there are two ways to use ROLLUP – one is non-ISO-compliant (uses WITH) and the other is ISO-compliant (does not use WITH).

Examples

-----------------------------------------------------------------
/*
Select the data, group it, and use the ROLLUP syntax.
Non-ISO-compliant syntax (uses WITH)
*/
SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Total]     = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ClientName WITH ROLLUP
ORDER BY
	[Full Name]  DESC;
-----------------------------------------------------------------
/*
Select the data, group it, and use the ROLLUP syntax.
Fully ISO compliant syntax (does not use WITH)
*/
SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Total]     = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY  ROLLUP(ClientName)
ORDER BY
	[Full Name]  DESC;

Hope this helps somebody.

Have a great day.

Cheers

Marty

More Times Like These with SQL Server

I blogged the other day about having some issues translating Oracle date formats to SQL Server date data types.

I find the way SQL Server handles dates with a defined data type to be much more intuitive than the string manipulation techniques that Oracle seems to favour.

My new found friend is the SQL Server FORMAT() function.

Here are some examples of using FORMAT() with dates, and while you can likely use CONVERT to achieve similar results, I find the option of using FORMAT() to be very useful.

It is also possible to format much more than dates with the FORMAT() function.
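For example (a small sketch – the commented results assume the 'en-US' culture): CONVERT with style 106 produces a similar date shape, and FORMAT happily handles numbers too.

SELECT CONVERT(VARCHAR(11) , SYSDATETIME() , 106);  -- e.g. 20 Mar 2021
SELECT FORMAT(SYSDATETIME() , 'dd MMM yyyy');       -- e.g. 20 Mar 2021

SELECT FORMAT(1234567.891 , 'N2' , 'en-US');        -- 1,234,567.89
SELECT FORMAT(0.759       , 'P1' , 'en-US');        -- 75.9%
SELECT FORMAT(1234567.891 , 'C'  , 'en-US');        -- $1,234,567.89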



SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'ddd dd MMM yyyy')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd dd MMM yyyy')));

Have a great day

Cheers

Martin

Times Like These with SQL Server

I was doing some conversion of Oracle code (PL/SQL) to SQL Server code (T-SQL) – which had some quirks.

I just thought that I’d share a quick tip that has helped me out quite a bit.

Much of the Oracle code had time manipulation functions – but they were done in a way that I didn’t find intuitive. There was a lot of what appeared to be casting (or similar) between text strings containing dates and times.

One of the nice things about T-SQL – in my opinion – is that there are a lot of different data types, some of which deal with times.

On top of that, there are functions that allow a specific part of a date/time to be captured – for example, we might want the month number, or the day number of the month, or the hour – any part of a date/time really.

So, here’s a list of some of the ways to get that information directly from SQL Server.

SELECT DATEPART(DAY         , SYSDATETIME());
SELECT DATEPART(YEAR        , SYSDATETIME());
SELECT DATEPART(QUARTER     , SYSDATETIME());
SELECT DATEPART(MONTH       , SYSDATETIME());
SELECT DATEPART(DAYOFYEAR   , SYSDATETIME());
SELECT DATEPART(WEEK        , SYSDATETIME());
SELECT DATEPART(WEEKDAY     , SYSDATETIME());
SELECT DATEPART(HOUR        , SYSDATETIME());
SELECT DATEPART(MINUTE      , SYSDATETIME());
SELECT DATEPART(SECOND      , SYSDATETIME());
SELECT DATEPART(MILLISECOND , SYSDATETIME());
SELECT DATEPART(MICROSECOND , SYSDATETIME());
SELECT DATEPART(NANOSECOND  , SYSDATETIME());
SELECT DATEPART(TZOFFSET    , SYSDATETIME());
SELECT DATEPART(ISO_WEEK    , SYSDATETIME());
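Related to these (a small sketch): where DATEPART returns numbers, DATENAME returns the name of the date part as a string – handy for weekdays and months.

SELECT DATENAME(WEEKDAY , SYSDATETIME());  -- e.g. Saturday
SELECT DATENAME(MONTH   , SYSDATETIME());  -- e.g. March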

You can read more at the following link

Have a great day

Cheers

Martin.

OPENJSON and CROSS APPLY

OPENJSON is pretty central to manipulating JSON documents in T-SQL.

As we’ve seen, we can use a default schema that will return metadata about the JSON document or we can use an explicit schema where we supply a WITH clause and a PATH to the data that we want.

So far, in our examples we have either used the default schema or an explicit schema – but we have not mixed the two.

Indeed, if you try this in a simple query, you’ll find it is not possible without the use of the APPLY operator.

OPENJSON is actually a table-valued function that parses JSON – either into an explicit schema (that we provide) or the default schema.

We can mix both the default and explicit schema by using CROSS APPLY

Like so

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
	 DS.[key]
	,DS.[value]
	,DS.[type]
	,ES.[Configuration name]
	,ES.[Value]
	,ES.[minimum]
	,ES.[maximum]
	,ES.[value_in_use]
	,ES.[description]
	,ES.[is_dynamic]
	,ES.[is_advanced] 
FROM  
     OPENJSON(@json)  AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [Configuration name]  NVARCHAR(35)
	,[Value]               NVARCHAR(35)
	,[minimum]             NVARCHAR(35)
	,[maximum]             NVARCHAR(35)
	,[value_in_use]        NVARCHAR(35)
	,[description]         NVARCHAR(35)
	,[is_dynamic]          NVARCHAR(35)
	,[is_advanced]         NVARCHAR(35)
) AS ES
GO

This technique can come in extremely handy when attempting to drill down into sub arrays and objects contained in a JSON document.

Notice also that aliasing each OPENJSON call (the default schema as DS, the explicit schema as ES) has allowed us to request data from both in the same SELECT. In particular, it lets us get to the element named value – that name appears in both the default and the explicit schema, so an alias is needed to disambiguate.
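To illustrate the drill-down point, here is a minimal sketch – the document shape and names below are invented for illustration. The AS JSON option in the outer WITH clause keeps the sub-array as a JSON fragment, which CROSS APPLY then feeds to a second OPENJSON call:

DECLARE @doc NVARCHAR(MAX) =
N'
{
    "server": "SQL01",
    "databases": [
        { "name": "db1" , "size_mb": 100 },
        { "name": "db2" , "size_mb": 250 }
    ]
}
';

SELECT
	 Srv.[server]
	,Db.[name]
	,Db.[size_mb]
FROM
	OPENJSON(@doc)
WITH
(
	 [server]    NVARCHAR(50)
	,[databases] NVARCHAR(MAX) AS JSON   -- keep the array as raw JSON
) AS Srv
CROSS APPLY
	OPENJSON(Srv.[databases])            -- one row per array element
WITH
(
	 [name]    NVARCHAR(50)
	,[size_mb] INT
) AS Db;
GO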

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART I

Recently we reviewed FOR JSON PATH. That was used for shaping tabular data (data that comes directly from a SQL table) into a JSON document.

The PATH we are talking about here is used with OPENJSON to get to the data that is contained in the JSON document.

We’ve seen an example of OPENJSON and reading data with an explicit schema.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

Turns out that in this example, the data is right at the top level of the document.

 {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }

and so we didn’t need to explicitly state the PATH – however we still could have. And here it is.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
, '$')
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The query below has the PATH defined explicitly in the WITH clause.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
')
WITH
(
     [configuration_id]     INT               '$.configuration_id'
    ,[Configuration name]   NVARCHAR(35)      '$."Configuration name"'
    ,[Value]                NVARCHAR(200)     '$.Value'
    ,[minimum]              NVARCHAR(200)     '$.minimum'
    ,[maximum]              NVARCHAR(200)     '$.maximum'
    ,[value_in_use]         NVARCHAR(200)     '$.value_in_use'
    ,[description]          NVARCHAR(200)     '$.description'
    ,[is_dynamic]           BIT               '$.is_dynamic'
    ,[is_advanced]          BIT               '$.is_advanced'
);
GO

Notice also that Configuration name has a space in it. So, in the PATH we simply enclose it in double quotes “..” – in a similar way to how square brackets would be used in a T-SQL expression.

Let’s look at a slightly differently shaped JSON document and see how we would change the PATH in the WITH clause to locate all of the desired values.

SELECT 
	 [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON(
'
{
  "configuration_id": 101,
  "Configuration_Property": {
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
}
')
WITH
(
     [configuration_id]     INT            '$."configuration_id"'
    ,[Configuration name]   NVARCHAR(35)   '$."Configuration_Property"."Configuration name"'
    ,[Value]                NVARCHAR(200)  '$."Configuration_Property"."Value"'
    ,[minimum]              NVARCHAR(200)  '$."Configuration_Property"."minimum"'
    ,[maximum]              NVARCHAR(200)  '$."Configuration_Property"."maximum"'
    ,[value_in_use]         NVARCHAR(200)  '$."Configuration_Property"."value_in_use"'
    ,[description]          NVARCHAR(200)  '$."Configuration_Property"."description"'
    ,[is_dynamic]           BIT            '$."Configuration_Property"."is_dynamic"'
    ,[is_advanced]          BIT            '$."Configuration_Property"."is_advanced"'	
);

Notice that no ‘default path’ is specified as the second argument to OPENJSON here (after the JSON document itself).
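An equivalent approach (a sketch, trimmed to two columns) is to point that optional second argument at the nested object – the paths in the WITH clause then become relative to it:

SELECT
     [Configuration name]
    ,[Value]
FROM
    OPENJSON
(
'
{
  "configuration_id": 101,
  "Configuration_Property": {
    "Configuration name": "recovery interval (min)",
    "Value": 0
  }
}
'
, '$.Configuration_Property')   -- the default path for the WITH clause
WITH
(
     [Configuration name]  NVARCHAR(35)   '$."Configuration name"'
    ,[Value]               NVARCHAR(200)  '$.Value'
);
GO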

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty


OPENJSON, explicit schema and data type

Back in our last instalment, we looked at OPENJSON and how we can get data into a tabular format from a JSON document.

Readers may have noticed that we used data from the sys.configurations table in order to generate the JSON document.

However, when we read the data from the document into tabular format, we specified that the columns

  • Value
  • minimum
  • maximum
  • value_in_use
  • description

– which are of type SQL_VARIANT in the table – should come back as type NVARCHAR(200) in the result set read from the JSON document.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]
    ,[minimum] 
    ,[maximum]
    ,[value_in_use]
    ,[description]
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

So, why was that? And can we fix it?

Well, JSON has a handful of data types and SQL Server has lots, so there’s not a one-to-one match.

The best thing to do would be to bring them back as one of the simple types (likely NVARCHAR()) and then CAST them in the SELECT list to the actual desired data type.

Like so

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The result set looks the same; however, the values returned are now of the correct data type. So if you wanted to do anything with the result set – like joining back to the original table – you would not see any implicit casting.
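As a quick sketch of that last point: with configuration_id read back as an INT, a join back to sys.configurations needs no implicit conversion on the join column.

SELECT
	 C.[name]
	,[JsonValue]  = J.[Value]
	,[TableValue] = C.[value]
FROM
	OPENJSON(N'{ "configuration_id": 101 , "Value": "0" }')
WITH
(
	 [configuration_id] INT             -- matches the INT in sys.configurations
	,[Value]            NVARCHAR(200)
) AS J
INNER JOIN
	sys.configurations AS C
		ON C.[configuration_id] = J.[configuration_id];
GO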

So, that’s just a small tip of how to handle data type differences between SQL Server and JSON.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty


OPENJSON – using an explicit schema. The WITH clause.

G’day,

Previously, we have looked at using OPENJSON to gain knowledge about the JSON document that we have presented to the function.

A bit like this

SELECT
	*
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
Columns returned using OPENJSON with the default schema

Notice that we didn’t request any columns in the SELECT statement, but we got three columns back

  • key
  • value
  • type

That’s great metadata information – but what if we wanted the actual values from the JSON?

Well, the statement above used OPENJSON with the default schema – which is basically no column list defined. If we want to define a list then we need to use a WITH clause that defines an EXPLICIT schema – like so

SELECT
	 *
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO
Values returned when supplying an explicit schema to OPENJSON

You might also notice that the names in the WITH clause match those in the JSON document – we can also add these as a column list to the SELECT statement, rather than using SELECT *.

Notice also that if we ask for a value in the WITH clause that does not appear in the JSON document (maybe because of a typo) then we simply get a NULL returned in the SELECT list.

Notice here that the name of the first column is incorrect, so we get a NULL in the result set.

SELECT
	*
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_if]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO
A value in the explicit OPENJSON list that is not actually in the JSON

This is perhaps one reason that we should include an explicit column list in the SELECT statement

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value] 
    ,[minimum]
    ,[maximum] 
    ,[value_in_use]
    ,[description]
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

And we pretty much see exactly what we had before.

Both explicit OPENJSON schema fully defined along with SELECT column list

Next up are some more useful tips about OPENJSON.

Have a great day

Cheers

Marty


T-SQL Sequences and the OVER() clause

Using a SEQUENCE object – in unusual ways!

If you’ve delved into Window Functions at all then you probably have read the documentation about the OVER() clause.

While reading this recently, I noticed that a sequence can have an OVER() clause attached to it.

I was curious about this so I experimented.

Consider the following.

USE tempdb;
GO
IF EXISTS(SELECT * FROM sys.sequences AS S WHERE S.[name] = 'TestSeq' AND OBJECT_SCHEMA_NAME(S.[object_id]) = 'dbo')
    BEGIN
        DROP SEQUENCE dbo.TestSeq;
    END;
GO
CREATE SEQUENCE dbo.TestSeq
AS  
    BIGINT
    START WITH 1
    INCREMENT BY 1
    NO CYCLE
    NO CACHE;
GO

So now we have a SEQUENCE object that we can play with.

And, as expected the following worked fine.

SELECT NEXT VALUE FOR dbo.TestSeq;
/*
Lets put the value into a variable
*/
DECLARE @value BIGINT = 0;
SELECT @value = NEXT VALUE FOR dbo.TestSeq;

But, did you also know that you can do this (whether or not you would want to is a different matter)

SELECT NEXT VALUE FOR dbo.TestSeq AS [TestValue];
SELECT NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY NEWID()) AS [TestValue];
SELECT NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY (SELECT NULL));
SELECT NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY (SELECT 1));
SELECT NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY RAND());

That’s pretty interesting, but not a whole load of use.

We can also do this

USE tempdb;
GO
/*
There's three of these objects in tempdb
*/
SELECT 
	RowNumber = NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY O.[type] ASC)
	,* 
FROM 
	sys.objects AS O
WHERE
	O.[type] IN ('SQ')
ORDER BY
	RowNumber DESC;
/*
And ordered the opposite way around
*/
SELECT 
	RowNumber = NEXT VALUE FOR dbo.TestSeq OVER(ORDER BY O.[type] DESC)
	,* 
FROM 
	sys.objects AS O
WHERE
	O.[type] IN ('SQ')
ORDER BY
	RowNumber ASC;

Did you spot that a column from sys.objects was used in the OVER() clause, and that the ORDER BY in the OVER() clause and in the main query are the opposite way around?

We are unable to use PARTITION BY within the OVER() clause of a SEQUENCE, which makes sense, as a sequence hands out a single ordered stream of values rather than one stream per partition.
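If you'd like to see that for yourself, the sketch below is expected to fail (I haven't reproduced the exact error message, as it can vary by version):

/*
Expected to error - PARTITION BY is not supported in the
OVER() clause of NEXT VALUE FOR
*/
SELECT
	NEXT VALUE FOR dbo.TestSeq OVER(PARTITION BY O.[type] ORDER BY O.[name] ASC)
	,*
FROM
	sys.objects AS O;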

This could come in handy and save us a join.

I hope that helps somebody out someday.

Have a great day

Cheers

Marty.


Concatenating Strings, NULLs and EXEC

G’day,

SQL Server’s great – it’s been teaching me a lot for many years. But every so often you get a surprise. Sometimes it’s a nice one, sometimes it’s more of the ‘really, are you sure?’ kind – and then you realise: yes, SQL Server, you’re correct, I really should have known that.

This one involved

  • A cursor
  • String concatenation
  • Executing a T-SQL Statement

The first two of these didn’t surprise me – after I’d thought about it.

The last one did

I’ll attempt to give an example of what occurred.

I have a table

USE tempdb;
GO

IF OBJECT_ID('dbo.Issue') IS NOT NULL
	BEGIN
		DROP TABLE dbo.Issue
	END;


CREATE TABLE dbo.Issue
(
	SomeName NVARCHAR(50) NULL
);

and, we’re inserting into that table using a concatenated T-SQL string, like so

DECLARE @InsertValue NVARCHAR(50) = N'TestValue';
DECLARE @SQL NVARCHAR(500) = 'INSERT INTO dbo.Issue(SomeName) VALUES(''' + @InsertValue + ''');';
PRINT @SQL;
EXEC(@SQL);

As you’d expect one row goes in

However, the issue that I saw arose when the variable being INSERTed was NULL, like so

DECLARE @InsertValue NVARCHAR(50) = NULL;
DECLARE @SQL NVARCHAR(500) = 'INSERT INTO dbo.Issue(SomeName) VALUES(''' + @InsertValue + ''');';
PRINT @SQL;
EXEC(@SQL);

You’ll notice (if you run the code) that nothing gets printed – which is fair enough, as a NULL in a string concatenation will ultimately yield a NULL: anything + an unknown (NULL) is an unknown (NULL).

That wasn’t the surprising part – although it did get me to think a bit at first.

The surprising part (to me at least) was that the string – which is now NULL – was run inside the EXEC(..) call and had not errored, had not failed, but had not inserted anything either.

Try it

DECLARE @InsertValue NVARCHAR(50) = NULL;
DECLARE @SQL NVARCHAR(500) = 'INSERT INTO dbo.Issue(SomeName) VALUES(''' + @InsertValue + ''');';
PRINT @SQL;
EXEC(@SQL);
SELECT * FROM dbo.Issue;

You can try this more simply by executing a NULL string

DECLARE @Test NVARCHAR(20) = NULL;
EXEC(@Test);

Note however, that you can’t directly execute a NULL (although why you’d want to is a different matter 🙂 )

EXEC(NULL)
Msg 156, Level 15, State 1, Line 44
Incorrect syntax near the keyword 'NULL'.

So, now you know.

If you’ve never seen this before (or even if you have, but a long time ago) then this could easily make you wonder what is going on.
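If you want to guard against the silent NULL in the first place, CONCAT() is one option – unlike the + operator, it treats NULL arguments as empty strings. A minimal sketch:

DECLARE @InsertValue NVARCHAR(50) = NULL;
SELECT [PlusResult]   = N'VALUES(''' + @InsertValue + N''');';           -- NULL
SELECT [ConcatResult] = CONCAT(N'VALUES(''' , @InsertValue , N''');');   -- VALUES('');

Whether an empty string is actually what you want to insert is, of course, a separate question.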

I hope this helps somebody.

Have a great day.

Cheers

Martin.