Identifying Columns with Collations in SQL Server

Recently I was asked if I could identify columns in a single SQL Server database that had a collation. I didn’t need to change anything (well, not initially, but we’ll look at doing that anyway), simply identify columns that were candidates to have their collation changed.

Now, there are a few other considerations here, notably the database collation and server collation, but more on that later. For now, back to the problem at hand.
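If you want to check the server and database collations at any point, here’s a quick sketch using two built-in property functions:

SELECT
	 [ServerCollation]   = SERVERPROPERTY('Collation')
	,[DatabaseCollation] = DATABASEPROPERTYEX(DB_NAME(), 'Collation');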

My first stop was to identify all data types that could possibly have a collation.

In order to do that I’m going to head to my old friends – the system tables.

First off, a quick trip to sys.types to see which data types may have collations.

SELECT T.[name] FROM sys.types AS T WHERE T.collation_name IS NOT NULL;

This yields the following results:

  • text
  • ntext
  • varchar
  • char
  • nvarchar
  • nchar
  • sysname

SYSNAME is an interesting one – it’s basically equivalent to NVARCHAR(128); there’s a discussion about that here.
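You can see that equivalence in sys.types itself – sysname reports a max_length of 256 bytes, which is 128 Unicode characters:

SELECT T.[name], T.[max_length]
FROM sys.types AS T
WHERE T.[name] = 'sysname';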

But in order to find the columns in our database that have collations we can use a query such as

	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						-- nchar/nvarchar store two bytes per character, so halve max_length
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0;

From here, it’s just a matter of extending the query to produce some T-SQL that will change the collations of the columns – I chose to put this in a Common Table Expression (CTE), but go with whatever suits you.

WITH CTE
AS
(
	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						-- nchar/nvarchar store two bytes per character, so halve max_length
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0
)
SELECT N'ALTER TABLE ' + [Table Name] + N' ALTER COLUMN ' + [Column Name] + N' ' + [Typelength] + N' COLLATE Latin1_General_CI_AS' + N';' FROM CTE;

This will produce statements that change the collation of the columns on an individual basis. Notice that in the final SELECT statement from the CTE I’ve stated COLLATE Latin1_General_CI_AS – substitute whichever collation you’re moving to. This will generate a lot of statements such as

ALTER TABLE CollationChangeTable ALTER COLUMN ColumnOne NCHAR(8) COLLATE Latin1_General_CI_AS;
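If you’d like to try one of these statements out safely first, here’s a minimal sketch (CollationChangeTable is a hypothetical throwaway table, not one of the columns identified above):

-- Hypothetical test table, deliberately created with a different collation
CREATE TABLE CollationChangeTable
(
	ColumnOne NCHAR(8) COLLATE SQL_Latin1_General_CP1_CI_AS
);

-- One of the generated statements, run against the throwaway table
ALTER TABLE CollationChangeTable ALTER COLUMN ColumnOne NCHAR(8) COLLATE Latin1_General_CI_AS;

DROP TABLE CollationChangeTable;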

I prefer this approach to a cursor as it gives me an opportunity to eyeball the T-SQL and column names.

A cursor may be more appropriate in a large database – your choice, there are plenty of options around.

Proceed with caution, as getting column collations out of sync with the database and instance can bite you hard down the track – such as collation conflicts when joining to temporary tables in tempdb.
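Here’s a minimal sketch of that trap, assuming the instance and database collations differ. A #temp table created without an explicit collation inherits the tempdb (instance) collation, so joining it to a differently-collated column raises a collation-conflict error unless you add a COLLATE clause:

CREATE TABLE #Names
(
	SomeName NVARCHAR(128) -- no COLLATE clause, so this inherits the tempdb (instance) collation
);

-- Without COLLATE DATABASE_DEFAULT this join errors out when the collations differ
SELECT T.[name]
FROM sys.tables AS T
JOIN #Names AS N ON T.[name] = N.SomeName COLLATE DATABASE_DEFAULT;

DROP TABLE #Names;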

Have a great day.

Cheers

Marty.

Azure Monitoring Diagrams

Recently, I’ve been digging into Monitoring in Azure.

I was really pleased to find that a visual representation of an Azure VNET can be produced from inside the Azure Portal.

Here’s an example

An Azure VNET diagram from the Azure Portal

It’s an SVG image, so you can import it straight into your documentation.

If you select a VNET in the Azure Portal and then locate the “Monitoring” section, you’ll see a menu item called “Diagram”.

Just select it and you’ll end up with a downloadable image like the one above.

Have a great day

cheers

Marty

Azure dependent resources

G’day

They say you learn by your mistakes.

And today (amongst other days) – I did just that.

But in a good way.

I wanted to create a simple Network Interface Card (NIC) for one of my Azure VMs.

Unfortunately (at least partly), I picked the wrong subscription.

The subscription just happened to be absolutely empty – not a single resource (even a resource group) in the subscription.

Ordinarily, I’d have just gone ahead and created the resources (in this case a NIC) and then realised my mistake at another point down the road – but something caught my eye.

So, exactly like the message says – I had no VNET in my subscription.

I’m sort of pleased the Azure Portal flagged this to me and it left me wondering if it’d flag other resources that were dependent on things that didn’t exist.

Now that’s another thing I have on my (ever-growing) TODO list – to identify resources that absolutely rely on other resources.

Have a great day.

Cheers

Marty.

What exactly is a modern data warehouse?

G’day,

I was asked recently what a modern data warehouse is – and that’s a very thought-provoking question.

First, there’s the tag “modern” – I think what’s modern now may not be quite so modern in a year or two’s time – maybe sooner, judging by the speed of some technology changes at present.

My initial response – jokingly – was ‘it depends who you ask’, which is a pretty useless – but true – answer.

Firstly, let me put some context around my answer. I’ve built my career around Microsoft technologies – so my answer is going to centre around their latest technologies – which are all cloud-based, and essentially incorporate Platform-as-a-Service (PaaS) functionality. Your experience may be different, you might be in a different cloud, you might have different ideas – but for me it’s PaaS services in the Azure cloud. I’ll expand on this shortly.

But let’s (quickly) look back on how I’ve seen data warehouses develop – in my Microsoft world – over the years. They used to be built simply on SQL Server, incorporating fact and dimension tables. There’d be debate about star and snowflake schemas, about Kimball or Inmon. More recently, people talk about Data Vault. Essentially, people pulled data from their operational (transactional) systems into their data warehouse (decision support system, enterprise data warehouse, or whatever they chose to call it).

They did this using some Extract, Transform and Load (ETL) tool (commonly called a data integration tool) like SQL Server Integration Services.

They might have had a single data source (one transactional system) or they may have had more data sources – so firstly staging the data made sense, then cleaning it and then putting it into the main data warehouse.

We’d have to deal with things like slowly changing dimensions and very often we may not have modelled data exactly as it is in the real world.

We might put analytical models over the data warehouse; initially these were multidimensional (queried with MDX), later Tabular. Then we’d report either straight off the data warehouse or off the analytical model – using tools like SQL Server Reporting Services or – lately – Power BI.

For me, things have moved to the cloud and our options have expanded (or should I say exploded).

We can take data from literally any data source and land it in our Azure data lake. This area can be segregated, as it likely contains our raw data. We can now keep this – untouched – as space is no longer an impediment (OK, it still costs money, but it’s cheap, and we can choose different tiers of storage – hot / cold / archive – trading off access times against cost). We also don’t have to worry about upgrading and migrating data at any time – this is all part of the service provided by a modern cloud.

We can then start segregating the data into different areas (of the data lake), say a staging area, a clean area, and maybe an area full of curated data sets.

We can pull in different types of data in different formats – CSV, JSON, AVRO, Parquet, and more.

Once we have the data in our desired area we can then pull it into a traditional data warehouse – say Azure SQL Database, for instance (although there may be better cloud solutions). We can use PolyBase to virtualise the data.

We could also use Synapse, Cosmos DB, or maybe Databricks.

We can build Machine Learning Models and enable Artificial Intelligence services.

Basically, I think the ‘modern’ data warehouse is about cloud based data services and simply adds to the traditional models of data warehousing.

Whereas the traditional data warehouse is about historical reporting – diagnostic and descriptive analytics – the new tools enable predictive and prescriptive analytics along with cognitive capabilities.

So, basically the answer to ‘what is the modern data warehouse?’ could be a lot of things, pieced together in many different ways. And we’re only going to get more options as time marches on.

So maybe we should start talking about the changing data warehouse, or the evolving data warehouse or the cloud data warehouse – but not the modern data warehouse, as what’s modern today is sure to change tomorrow.

Anyway, these are just my thoughts. Some will agree, some won’t, and there’ll be plenty of diverse ideas – that’s great for innovation. Um, maybe the innovative data warehouse is a better name 🙂

Anyway, have a great day

Cheers

Martin.

96 GB lost (or so I thought)

The other day I was looking to store some files, so I went and found myself a 128 GB USB 3.1 flash drive.

Upon inserting this into my computer (initially the wrong way up, and then correcting myself, as is the accepted method) – I was amazed to see it only registered as 32 GB.

Wow, I thought this was 128 GB.

Whatever, I thought. I’ll just reformat this to the correct size.

Only 32 GB of space yet the drive had 128 GB firmly stamped on the back.

But no matter what combination of file systems I chose, I could not get it to recognize more than 32 GB.


I soon realised the FAT32 format was the likely culprit – Windows won’t format a FAT32 volume larger than 32 GB. But how to reclaim the apparently lost space?

Over to Disk Management, where I saw the ‘lost’ space sitting beyond the 32 GB volume.

I quickly deleted the 32 GB volume to reveal the full quota of unallocated space, then allocated that as a new simple volume and re-formatted the drive as NTFS.

I found a few things out on that journey.

I could create multiple volumes on the same flash drive, and each volume registers as a separate drive in Windows Explorer.

But if you want to use the flash drive for something like re-imaging your Surface, then it’ll have to be formatted as FAT32 – and you’ll be back to square one, wondering where your space disappeared to.

I hope this saves somebody some time at some point.

Have a great day.

Cheers

Marty

OPENJSON and CROSS APPLY

OPENJSON is pretty central to manipulating JSON documents in T-SQL.

As we’ve seen, we can use a default schema that will return metadata about the JSON document or we can use an explicit schema where we supply a WITH clause and a PATH to the data that we want.

So far, in our examples we have either used the default schema or an explicit schema – but we have not mixed the two.

Indeed, if you try this in a simple query you’ll find it is not possible without the use of the APPLY operator.

OPENJSON is actually a table-valued function that parses JSON – either into an explicit schema (that we provide) or the default schema.

We can mix both the default and explicit schema by using CROSS APPLY

Like so

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
	 DS.[key]
	,DS.[value]
	,DS.[type]
	,ES.[Configuration name]
	,ES.[Value]
	,ES.[minimum]
	,ES.[maximum]
	,ES.[value_in_use]
	,ES.[description]
	,ES.[is_dynamic]
	,ES.[is_advanced] 
FROM  
     OPENJSON(@json)  AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [Configuration name]  NVARCHAR(35)
	,[Value]               NVARCHAR(35)
	,[minimum]             NVARCHAR(35)
	,[maximum]             NVARCHAR(35)
	,[value_in_use]        NVARCHAR(35)
	,[description]         NVARCHAR(35)
	,[is_dynamic]          NVARCHAR(35)
	,[is_advanced]         NVARCHAR(35)
) AS ES
GO

This technique can come in extremely handy when attempting to drill down into sub-arrays and objects contained in a JSON document.
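For example, here’s a minimal sketch against a hypothetical document (the property names below are made up) – the default schema enumerates the elements of the sub-array, and the explicit schema shreds each element into columns:

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Servers": [
        { "ServerName": "SQL01", "Cores": 8 },
        { "ServerName": "SQL02", "Cores": 16 }
    ]
}
';

SELECT
	 DS.[key]   -- the array index: 0, 1, ...
	,ES.[ServerName]
	,ES.[Cores]
FROM
     OPENJSON(@json, '$.Servers') AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [ServerName] NVARCHAR(35)
	,[Cores]      INT
) AS ES;
GO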

Notice also – back in the first example – that aliasing the default and explicit schemas differently has allowed us to request data from both in the same SELECT. In particular, it lets us get to the element named value – that name appears in both the default and explicit schemas, and so needs an alias to disambiguate it.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON and an Alias

Using Aliases in T-SQL is very common.

We can alias both Tables (FROM clause) and Columns (SELECT clause) and some other things too. It’s all pretty fundamental to writing good T-SQL queries.

Here’s an example

SELECT
	 dbs.collation_name
	,dbs.database_id AS [DatabaseID]
	,[IsReadOnly] = dbs.is_read_only
FROM 
	sys.databases AS dbs
JOIN
	sys.master_files MF
ON 
	dbs.database_id = MF.database_id
WHERE
	dbs.[name] = 'master';
GO

Note that we can use different styles; however, I recommend you standardise on whichever one suits you.

Anyway, the point of this post is to outline how aliasing is done when using the default and explicit JSON schemas in our T-SQL queries.

Here is an example of aliasing with the default schema

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
      DS.[key]
	 ,DS.[value]
	 ,DS.[type]
FROM  
     OPENJSON(@json, '$."Configuration Property"')  AS DS;
GO

There’s nothing particularly unusual here.

The slightly unusual part may appear when aliasing with an explicit schema.

Here’s an example

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
     ES.[Configuration name] 
    ,ES.[Value]
    ,ES.[minimum]  
    ,ES.[maximum]
    ,ES.[value_in_use]
    ,ES.[description]
    ,ES.[is_dynamic]
    ,ES.[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
) AS ES;

Note that the alias for an explicit schema comes after the WITH clause.

If we try to put the alias directly after the line

OPENJSON(@json, '$."Configuration Property"') 

Then we will receive an error.
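Here’s a minimal sketch of both placements (the failing one is left commented out):

DECLARE @json NVARCHAR(MAX) = N'{"Value": 0}';

-- Fails with a syntax error: the alias cannot come before the WITH clause
-- SELECT ES.[Value]
-- FROM OPENJSON(@json) AS ES
-- WITH ([Value] NVARCHAR(35));

-- Works: the alias follows the WITH clause
SELECT ES.[Value]
FROM OPENJSON(@json)
WITH ([Value] NVARCHAR(35)) AS ES;
GO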

While this may seem trivial it will become very useful when mixing the default schema and an explicit schema.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART II

We’ve looked at pulling data from a JSON document into relational table format using an explicit schema defined in the WITH clause of the OPENJSON table-valued function.

However, in that example, we used a PATH expression that was taken from the root of the JSON document.

A question that I recently had was ‘What if you want to take the PATH from a certain point in the JSON document?’ – and that’s what we will look at here.

We’ll use this simple document, which contains one object

{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}

Here, we want to retrieve data from the document starting at the path

$."Configuration Property"

All we need to do is define the base PATH (the starting point) in our OPENJSON query.

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';

SELECT 
	 [Configuration name] 
	,[Value]
	,[minimum]  
	,[maximum]
	,[value_in_use]
	,[description]
	,[is_dynamic]
	,[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
);

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART I

Recently we reviewed FOR JSON PATH. That was used for shaping tabular data (data that comes directly from a SQL table) into a JSON document.

The PATH we are talking about here is used with OPENJSON to get to the data that is contained in the JSON document.

We’ve seen an example of OPENJSON and reading data with an explicit schema.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

It turns out that in this example, the data is right at the top level of the document.

 {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }

and so we didn’t need to explicitly state the PATH – however we still could have. And here it is.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
, '$')
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The query below has the PATH defined explicitly in the WITH clause.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
')
WITH
(
     [configuration_id]     INT               '$.configuration_id'
    ,[Configuration name]   NVARCHAR(35)      '$."Configuration name"'
    ,[Value]                NVARCHAR(200)     '$.Value'
    ,[minimum]              NVARCHAR(200)     '$.minimum'
    ,[maximum]              NVARCHAR(200)     '$.maximum'
    ,[value_in_use]         NVARCHAR(200)     '$.value_in_use'
    ,[description]          NVARCHAR(200)     '$.description'
    ,[is_dynamic]           BIT               '$.is_dynamic'
    ,[is_advanced]          BIT               '$.is_advanced'
);
GO

Notice also that Configuration name has a space in it. So, in the PATH we simply enclose it in double quotes “..” – in a similar way to how square brackets would be used in a T-SQL expression.

Let’s look at a slightly differently shaped JSON document and see how we would change the PATH in the WITH clause to locate all of the desired values.

SELECT 
	 [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON(
'
{
  "configuration_id": 101,
  "Configuration_Property": {
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
}
')
WITH
(
     [configuration_id]     INT            '$."configuration_id"'
    ,[Configuration name]   NVARCHAR(35)   '$."Configuration_Property"."Configuration name"'
    ,[Value]                NVARCHAR(200)  '$."Configuration_Property"."Value"'
    ,[minimum]              NVARCHAR(200)  '$."Configuration_Property"."minimum"'
    ,[maximum]              NVARCHAR(200)  '$."Configuration_Property"."maximum"'
    ,[value_in_use]         NVARCHAR(200)  '$."Configuration_Property"."value_in_use"'
    ,[description]          NVARCHAR(200)  '$."Configuration_Property"."description"'
    ,[is_dynamic]           BIT            '$."Configuration_Property"."is_dynamic"'
    ,[is_advanced]          BIT            '$."Configuration_Property"."is_advanced"'	
);

Notice that no ‘default path’ is specified in the OPENJSON clause this time (as a second argument after the document) – the paths in the WITH clause do all of the work.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty


OPENJSON, explicit schema and data type

Back in our last instalment, we looked at OPENJSON and how we can get data into a tabular format from a JSON document.

Readers may have noticed that we used data from the sys.configurations table in order to generate the JSON document.

However, when we read the data from the document into tabular format, we specified that the following columns

  • Value
  • minimum
  • maximum
  • value_in_use
  • description

– which are of type SQL_VARIANT in the source table – were of type NVARCHAR(200) in the resultset that was brought back from the JSON document.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]
    ,[minimum] 
    ,[maximum]
    ,[value_in_use]
    ,[description]
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

So, why was that? And can we fix it?

Well, JSON has a handful of data types and SQL Server has lots, so there’s not a one-to-one match.

The best thing to do is to bring them back as one of the simpler types (likely NVARCHAR()) and then convert them in the SELECT to the actual desired data type.

Like so

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The result set looks the same; however, the values returned are now of the correct data type. So if you wanted to do anything with the resultset – like joining back to the original table – you would not see any implicit casting.
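As a quick sketch of that – reusing a cut-down version of the document – the converted value lines up against the SQL_VARIANT column in sys.configurations, and the join predicate itself (two INT columns) involves no implicit conversion:

DECLARE @json NVARCHAR(MAX) = N'{"configuration_id": 101, "Value": 0}';

SELECT
	 C.[name]
	,[TableValue] = C.[value]                            -- SQL_VARIANT from the table
	,[JsonValue]  = TRY_CONVERT(sql_variant, J.[Value])  -- SQL_VARIANT from the document
FROM
     sys.configurations AS C
JOIN
     OPENJSON(@json)
WITH
(
	 [configuration_id] INT
	,[Value]            NVARCHAR(200)
) AS J ON C.[configuration_id] = J.[configuration_id];
GO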

So, that’s just a small tip on how to handle data type differences between SQL Server and JSON.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty
