Data Thoughts – Microsoft Exams and Simulations

G’day,

I’ve based my career around Microsoft products on the Data Platform.

The first product that I seriously learnt was Microsoft SQL Server back around 2000.

I was lucky enough to be employed as a database developer at the time and I decided to go for the MCDBA certification.

I remember that I felt ok with the development stuff, but there was an exam on administration – a topic that I was a little shaky on. I’d also opted to take an exam on Windows 2000 as part of the certification set, and that was the one I was most uncomfortable with.

So, I relied on advice from colleagues (I was in a team of network engineers at the time) – but that would only get me part of the way.

Reading books and Microsoft articles (Books Online was not as good as the equivalent Microsoft docs of today) is fine, but I personally find that the information doesn’t stick. I began working through scenarios from the docs, but I found these quite basic.

I discovered two Microsoft-endorsed companies who produced practice tests – Self-Test Software (who unfortunately are no more) and MeasureUp, who still provide a great deal of practice tests. These gave me more than a few problems to work through, and I found the knowledge stuck a lot better than it did when I simply read.

When it was time to take the test I wondered how quickly I would pick up how to navigate the exam – not the content and questions, but the user interface and question types – and whether I’d waste time on the question types rather than the questions themselves.

As it turned out, this was quite straightforward.

If I were in the same position today, I could use the new Microsoft Exam Simulator that can be found here. And I encourage anybody who has never taken a Microsoft exam – but intends to – to have a look and get familiar with the question types.

Even as a regular test taker, there are a few parts of the exam engine that I’ve never explored – such as changing the contrast and similar settings.

Apart from getting to know the engine, there are a few strategies that I normally follow myself.

I always fully answer every question the first time around; even if I don’t know the answer, I attempt to work it out and give it my best shot – an educated guess in the extreme worst case. However, I’ll always mark that question for review. Occasionally I’ll also mark it for feedback. Note that there is a section at the end of the exam (after you’ve completed it) where you can separately go back to questions and give feedback – this helps Microsoft make questions better for others.

After you complete the last question you’ll see a dashboard letting you know such things as which questions you have not (fully) answered (it occasionally happens) and which questions you have marked for review – I always concentrate on those categories.

The main point of this post is to have an exam-taking strategy, which might include:

  • Visit the exam simulator
  • Answer every question first time around – never miss one.
  • Mark a question for review if you are not confident with the answer you gave.
  • Give feedback on questions at the end of the exam if appropriate.
  • Don’t rush – plan your time carefully and use the remaining time to check answers.
  • Relax – have fun and pass.

And after you’ve successfully passed the exam, remember to claim your badge and let people know about your achievement.

Have a great day

Cheers

Marty

Azure Data Explorer – Setting up a Cluster

I’ve worked with Log Analytics quite a bit over the years, and that’s taken me down the road of the Kusto Query Language – or KQL.

So, using Azure Data Explorer Clusters seemed the next logical step on that journey.

Log Analytics is great for storing and querying diagnostic data from Azure (or any other infrastructure) – but I decided that I might want to investigate my own data.

So, with that in mind – I decided to set up my own Azure Data Explorer Cluster and start looking around.

It’s easy enough to find in the Azure Portal – I chose to favorite it as I expect to be using it a fair bit over the next little while.

Find “Azure Data Explorer Clusters” via the Azure Portal search bar

You can also get to it via a shortcut

Select “Azure Data Explorer Clusters” – optionally favorite

Next, I selected “Create” and then filled in the minimum amount of information needed to get up and running.

Information for our ADX cluster

Just a few things to focus on here. Firstly, the combination of the cluster name and the region forms part of a URI – so this has to be unique, and there are certain formats that the name has to meet. You’ll soon know if you haven’t met them, as the screen will call them out instantly.

Additionally, I’m just creating a Dev cluster here – note that there is no SLA.

Also, the region that I chose has Availability Zones – yours might not, or you might choose not to take advantage of them. I chose to include all three.

Availability Zones

You’ll then notice that you can step through all of the remaining setup.

Additional setup (can be skipped – I accepted the defaults)

I simply accepted the defaults, but it’s great to know you can define additional settings at this stage.

It’ll take a little time to set the cluster up – but you’ll soon be up and running.

And once we are up and running, we can see that the URI is made up of the cluster name and the region.

ADX Cluster name

In order to create a database on the cluster, head to the resource, click on “Databases” in the left-hand menu (scrolling may be needed), click “Add database” near the top of the screen and then enter a name for the database in the fly-out pane.

Create a new database on our ADX cluster

That’s it – we now have our first Azure Data Explorer Cluster created, with a shiny new database.

Hope this helps somebody.

Cheers

Marty.

SQL Server STUFF

The STUFF function in SQL Server is one of those little gems that is very under-used, but when needed it can be a really handy utility – at least that’s how it appears to me.

The STUFF function allows you to replace specific characters in a string with the contents of another string – we identify those characters using two numbers: where to start and how many characters to replace.

At this stage, you might be wondering why the function is not called REPLACE – great question. It turns out that T-SQL already has a REPLACE function, so (obviously 🙂 ) STUFF became the name for this one.

The function takes a string, the number of the character to begin the removal from, the length of the string to remove, and another string containing the characters that should be STUFFed into the original string.

Here’s an example.

Let’s turn the string “Your Welcome” into the string “You’re Welcome” using T-SQL and the STUFF function.

DECLARE @STRING NVARCHAR(20) = 'Your Welcome';
	
SELECT STUFF(@STRING , 4,1,'''re') AS Correction;

And we get

The strings can even be the result of SELECT statements themselves

SELECT STUFF((SELECT 'Your Welcome') , 4,1,(SELECT '''re')) AS [Correction(Again)];

The replacement string can even be an empty string – in which case we are essentially removing characters. I’ve found this really useful when working with comma-separated strings and needing to remove either the first or the last comma.

Here’s an example (removing the leading comma)

SELECT STUFF((SELECT ',Martin,Julie,John,Bert,Stephen,Jean,Claire') , 1,1,'') AS [First Comma removed];

And we get
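And, going the other way, here’s a quick sketch of my own (not from the original example) that strips a trailing comma by combining STUFF with LEN, so that the last character is the one replaced.

DECLARE @CSV NVARCHAR(100) = 'Martin,Julie,John,Bert,Stephen,Jean,Claire,';

-- LEN(@CSV) points at the final character (the trailing comma),
-- and STUFF replaces that single character with an empty string
SELECT STUFF(@CSV , LEN(@CSV), 1, '') AS [Last Comma removed];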

Hope somebody finds this useful

Have a great day

Cheers

Martin

More Times Like These with SQL Server

I blogged the other day about having some issues translating Oracle date formats to SQL Server date data types.

I find the way SQL Server handles dates, with defined data types, to be much more intuitive than the string manipulation techniques that Oracle seems to favour.

My new found friend is the SQL Server FORMAT() function.

Here are some examples of using FORMAT() with dates. While you can likely use CONVERT to achieve similar results, I find the option of using FORMAT() very useful.
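As a rough comparison – just a sketch of my own, not from the original post – here’s a similar ‘day month year’ result produced with CONVERT style 106 alongside FORMAT():

-- CONVERT with style 106 produces 'dd Mon yyyy'; FORMAT uses a .NET-style format string
SELECT CONVERT(VARCHAR(11), SYSDATETIME(), 106) AS [ConvertStyle106];
SELECT FORMAT(SYSDATETIME(), 'dd MMM yyyy')     AS [FormatEquivalent];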

It is also possible to format much more than dates with the FORMAT() function.
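For example – again, a quick sketch of my own – FORMAT() also accepts numeric values together with standard .NET format strings:

-- Numeric formatting with FORMAT(): thousands separators, currency and percentages
SELECT FORMAT(1234567.891, 'N2') AS [Number];      -- e.g. 1,234,567.89
SELECT FORMAT(1234567.891, 'C')  AS [Currency];    -- formatted per the session's default culture
SELECT FORMAT(0.256, 'P1')       AS [Percentage];  -- e.g. 25.6%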



-- Full day name for each of the next seven days
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd')));

-- Abbreviated day name, day of month, abbreviated month and year
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'ddd dd MMM yyyy')));

-- Full day name, day of month, abbreviated month and year
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd dd MMM yyyy')));

Have a great day

Cheers

Martin

Times Like These with SQL Server

I was doing some conversion of Oracle code (PL/SQL) to SQL Server code (T-SQL) – which had some quirks.

I just thought that I’d share a quick tip that has helped me out quite a bit.

Much of the Oracle code had time manipulation functions – but they were done in a way that I didn’t find intuitive. There was a lot of what appeared to be casting (or similar) between text strings containing dates and times.

One of the nice things about T-SQL – in my opinion – is that there are a lot of different data types, some of which deal with times.
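To give a feel for that – a small sketch of my own, not from the original post – here are a few of those date and time data types side by side:

-- A few of the T-SQL date and time data types
DECLARE @d   DATE           = SYSDATETIME();
DECLARE @t   TIME(7)        = SYSDATETIME();
DECLARE @dt2 DATETIME2(7)   = SYSDATETIME();
DECLARE @dto DATETIMEOFFSET = SYSDATETIMEOFFSET();

SELECT @d AS [Date], @t AS [Time], @dt2 AS [DateTime2], @dto AS [DateTimeOffset];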

On top of that, there are functions that allow a specific part of a date/time to be captured – for example we might want the month number, or the day number of the month, or the hour – any part of a date/time really.

So, here’s a list of some of the ways to get that information directly from SQL Server.

SELECT DATEPART(DAY         , SYSDATETIME());
SELECT DATEPART(YEAR        , SYSDATETIME());
SELECT DATEPART(QUARTER     , SYSDATETIME());
SELECT DATEPART(MONTH       , SYSDATETIME());
SELECT DATEPART(DAYOFYEAR   , SYSDATETIME());
SELECT DATEPART(WEEK        , SYSDATETIME());
SELECT DATEPART(WEEKDAY     , SYSDATETIME());
SELECT DATEPART(HOUR        , SYSDATETIME());
SELECT DATEPART(MINUTE      , SYSDATETIME());
SELECT DATEPART(SECOND      , SYSDATETIME());
SELECT DATEPART(MILLISECOND , SYSDATETIME());
SELECT DATEPART(MICROSECOND , SYSDATETIME());
SELECT DATEPART(NANOSECOND  , SYSDATETIME());
SELECT DATEPART(TZOFFSET    , SYSDATETIME());
SELECT DATEPART(ISO_WEEK    , SYSDATETIME());
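
A closely related function – shown here as a quick sketch of my own – is DATENAME, which returns the name of the date part rather than its number:

-- DATENAME returns the name (e.g. 'October', 'Tuesday') rather than the number
SELECT DATENAME(MONTH   , SYSDATETIME());
SELECT DATENAME(WEEKDAY , SYSDATETIME());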

You can read more at the following link

Have a great day

Cheers

Martin.

What exactly is a modern data warehouse?

G’day,

I was asked recently what a modern data warehouse is – and that’s a very thought-provoking question.

First, there’s the tag “Modern” – I think what’s modern now may not be quite so modern in a year or two’s time – maybe sooner, judging by the speed of some technology changes at present.

My initial response – jokingly – was ‘it depends who you ask’, which is a pretty useless – but true – answer.

Firstly let me put some context around my answer. I’ve built my career around Microsoft technologies – so my answer is going to centre around their latest technologies – which are going to be all cloud based, and essentially incorporate Platform-as-a-Service (PaaS) functionality. Your experience may be different, you might be in a different cloud, you might have different ideas – but for me it’s PaaS Services in the Azure cloud. I’ll expand on this shortly.

But let’s (quickly) look back on how I’ve seen data warehouses develop – in my Microsoft world – over the years. They used to be built simply on SQL Server, incorporating fact and dimension tables. There’d be debate about star and snowflake schemas, about Kimball or Inmon. More recently people talk about Data Vault. Essentially, people pulled data from their operational (transactional) system into their data warehouse (decision support system, enterprise data warehouse, or whatever they chose to call it).

They did this using some Extract, Transform and Load (ETL) tool (commonly called a data integration tool) like SQL Server Integration Services.

They might have had a single data source (one transactional system) or they may have had more – so it made sense to first stage the data, then clean it, and then load it into the main data warehouse.

We’d have to deal with things like slowly changing dimensions and very often we may not have modelled data exactly as it is in the real world.

We might put analytical models over the data warehouse; initially these were multidimensional (queried with MDX) but later Tabular. Then we’d report either straight off the data warehouse or off the analytical model – using tools like SQL Server Reporting Services or – lately – Power BI.

For me, things have moved to the cloud and our options have expanded (or should I say exploded).

We can take data from literally any data source and land it in our Azure data lake. This area can be segregated, as it likely contains our raw data. We can now keep this – untouched – as space is no longer an impediment (ok, it still costs money, but it’s cheap, and we have different tiers of storage (hot / cold / archive) where we trade off access times against cost). We also don’t have to worry about upgrading and migrating data at any time – this is all part of the service provided by a modern cloud.

We can then start segregating the data into different areas (of the data lake), say a staging area, a clean area, and maybe an area full of curated data sets.

We can pull in different types of data in different formats – CSV, JSON, AVRO, Parquet, and more.

Once we have the data in our desired area we can then pull it into a traditional data warehouse – say in Azure SQL Database for instance (although there may be better cloud solutions). We can use PolyBase to virtualise the data.

We could also use Synapse, Cosmos DB or maybe Databricks.

We can build Machine Learning Models and enable Artificial Intelligence services.

Basically, I think the ‘modern’ data warehouse is about cloud based data services and simply adds to the traditional models of data warehousing.

Whereas the traditional data warehouse is about historical reporting, such as diagnostic and descriptive analytics – the new tools enable predictive and prescriptive analytics along with cognitive capabilities.

So, basically the answer to ‘what is the modern data warehouse?’ could be a lot of things, pieced together in many different ways – and we’re only going to get more options as time marches on.

So maybe we should start talking about the changing data warehouse, or the evolving data warehouse or the cloud data warehouse – but not the modern data warehouse, as what’s modern today is sure to change tomorrow.

Anyway, these are just my thoughts. Some will agree, some won’t, and there’ll be plenty of diverse ideas – that’s great for innovation. Um, maybe the innovative data warehouse is a better name 🙂

Anyway, have a great day

Cheers

Martin.

OPENJSON and CROSS APPLY

OPENJSON is pretty central to manipulating JSON documents in T-SQL.

As we’ve seen, we can use a default schema that will return metadata about the JSON document or we can use an explicit schema where we supply a WITH clause and a PATH to the data that we want.

So far, in our examples we have either used the default schema or an explicit schema – but we have not mixed the two.

Indeed, if you try this in a simple query you’ll find it is not possible without the use of the APPLY operator.

OPENJSON is actually a table-valued function that parses JSON – either into an explicit schema (that we provide) or the default schema.

We can mix both the default and explicit schema by using CROSS APPLY

Like so

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
	 DS.[key]
	,DS.[value]
	,DS.[type]
	,ES.[Configuration name]
	,ES.[Value]
	,ES.[minimum]
	,ES.[maximum]
	,ES.[value_in_use]
	,ES.[description]
	,ES.[is_dynamic]
	,ES.[is_advanced] 
FROM  
     OPENJSON(@json)  AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [Configuration name]  NVARCHAR(35)
	,[Value]               NVARCHAR(35)
	,[minimum]             NVARCHAR(35)
	,[maximum]             NVARCHAR(35)
	,[value_in_use]        NVARCHAR(35)
	,[description]         NVARCHAR(35)
	,[is_dynamic]          NVARCHAR(35)
	,[is_advanced]         NVARCHAR(35)
) AS ES
GO

This technique can come in extremely handy when attempting to drill down into sub arrays and objects contained in a JSON document.
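As a quick illustration of that – a sketch of my own using a made-up document, not one from the original post – the AS JSON option in a WITH clause keeps a sub-array as JSON text, which can then be handed to a second OPENJSON via CROSS APPLY:

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Server": "SQL01",
    "Databases": [
        { "name": "master", "database_id": 1 },
        { "name": "tempdb", "database_id": 2 }
    ]
}
';

SELECT
     Srv.[Server]
    ,Db.[name]
    ,Db.[database_id]
FROM
    OPENJSON(@json)
WITH
(
     [Server]    NVARCHAR(128)
    ,[Databases] NVARCHAR(MAX) AS JSON   -- keep the sub-array as JSON text
) AS Srv
CROSS APPLY
    OPENJSON(Srv.[Databases])            -- shred each element of the sub-array
WITH
(
     [name]        NVARCHAR(128)
    ,[database_id] INT
) AS Db;
GO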

Notice also that aliasing the default schema and the explicit schema differently has allowed us to request data from both in the same SELECT list. In particular, it enables us to get to the element named value – as that name appears in both the default and the explicit schema and so needs to be qualified with an alias.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON and an Alias

Using Aliases in T-SQL is very common.

We can alias both Tables (FROM clause) and Columns (SELECT clause) and some other things too. It’s all pretty fundamental to writing good T-SQL queries.

Here’s an example

SELECT
	 dbs.collation_name
	,dbs.database_id AS [DatabaseID]
	,[IsReadOnly] = dbs.is_read_only
FROM 
	sys.databases AS dbs
JOIN
	sys.master_files MF
ON 
	dbs.database_id = MF.database_id
WHERE
	dbs.[name] = 'master';
GO

Note that we can use different aliasing styles; however, I recommend you pick one and standardise on it.

Anyway, the point of this post is to outline how aliasing is done when using the default and explicit JSON schemas in our T-SQL queries.

Here is an example of aliasing with the default schema

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
      DS.[key]
	 ,DS.[value]
	 ,DS.[type]
FROM  
     OPENJSON(@json, '$."Configuration Property"')  AS DS;
GO

There’s nothing particularly unusual here.

The slightly unusual part may appear when aliasing with an explicit schema.

Here’s an example

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
     ES.[Configuration name] 
    ,ES.[Value]
    ,ES.[minimum]  
    ,ES.[maximum]
    ,ES.[value_in_use]
    ,ES.[description]
    ,ES.[is_dynamic]
    ,ES.[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
) AS ES;

Note that the alias for an explicit schema comes after the WITH clause.

If we try to put the alias directly after the line

OPENJSON(@json, '$."Configuration Property"') 

Then we will receive an error.

While this may seem trivial, it will become very useful when mixing the default schema and an explicit schema.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART II

We’ve looked at pulling data from a JSON document into relational table format using an explicit schema that was defined in the WITH clause of the OPENJSON table-valued function.

However, in that example, we used a PATH expression that was taken from the root of the JSON document.

A question that I recently had was ‘What if you want to take the PATH from a certain point in the JSON document?’ – and that’s what we will look at here.

We’ll use this simple document, which contains one object.

{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}

Here, we want to start retrieving data from the document starting at the path

$." "Configuration Property"

All we need to do is define the base PATH (the starting point) in our OPENJSON query.

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';

SELECT 
	 [Configuration name] 
	,[Value]
	,[minimum]  
	,[maximum]
	,[value_in_use]
	,[description]
	,[is_dynamic]
	,[is_advanced] 
FROM  
     OPENJSON(@json, '$."Configuration Property"')  
WITH 
( 
     [Configuration name] NVARCHAR(35) 
    ,[Value]              NVARCHAR(100)   
    ,[minimum]            NVARCHAR(100)  
    ,[maximum]            NVARCHAR(100)  
    ,[value_in_use]       NVARCHAR(100) 
    ,[description]        NVARCHAR(100)
    ,[is_dynamic]         BIT
    ,[is_advanced]        BIT
);

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty

OPENJSON : Getting to the data, and the PATH – PART I

Recently we reviewed FOR JSON PATH. That was used for shaping tabular data (data that comes directly from a SQL table) into a JSON document.

The PATH we are talking about here is used with OPENJSON to get to the data that is contained in the JSON document.

We’ve seen an example of OPENJSON and reading data with an explicit schema.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
)
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

Turns out that in this example, the data is right at the top level of the document.

 {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }

and so we didn’t need to explicitly state the PATH – however we still could have. And here it is.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
'
, '$')
WITH
(
     [configuration_id]     INT 
    ,[Configuration name]   NVARCHAR(35)
    ,[Value]                NVARCHAR(200)
    ,[minimum]              NVARCHAR(200)
    ,[maximum]              NVARCHAR(200)
    ,[value_in_use]         NVARCHAR(200)
    ,[description]          NVARCHAR(200)
    ,[is_dynamic]           BIT
    ,[is_advanced]          BIT
);
GO

The query below has the PATH defined explicitly in the WITH clause.

SELECT
     [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
    OPENJSON
(
'
  {
    "configuration_id": 101,
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
')
WITH
(
     [configuration_id]     INT               '$.configuration_id'
    ,[Configuration name]   NVARCHAR(35)      '$."Configuration name"'
    ,[Value]                NVARCHAR(200)     '$.Value'
    ,[minimum]              NVARCHAR(200)     '$.minimum'
    ,[maximum]              NVARCHAR(200)     '$.maximum'
    ,[value_in_use]         NVARCHAR(200)     '$.value_in_use'
    ,[description]          NVARCHAR(200)     '$.description'
    ,[is_dynamic]           BIT               '$.is_dynamic'
    ,[is_advanced]          BIT               '$.is_advanced'
);
GO

Notice also that Configuration name has a space in it. So, in the PATH we simply enclose it in double quotes “..” – in a similar way to how square brackets would be used in a T-SQL expression.

Let’s look at a slightly differently shaped JSON document and see how we would change the PATH in the WITH clause to locate all of the desired values.

SELECT 
	 [configuration_id]
    ,[Configuration name] 
    ,[Value]          = TRY_CONVERT(sql_variant  , [Value]) 
    ,[minimum]        = TRY_CONVERT(sql_variant  , [minimum]) 
    ,[maximum]        = TRY_CONVERT(sql_variant  , [maximum]) 
    ,[value_in_use]   = TRY_CONVERT(sql_variant  , [value_in_use]) 
    ,[description]    = TRY_CONVERT(sql_variant  , [description]) 
    ,[is_dynamic]
    ,[is_advanced]
FROM
	OPENJSON(
'
{
  "configuration_id": 101,
  "Configuration_Property": {
    "Configuration name": "recovery interval (min)",
    "Value": 0,
    "minimum": 0,
    "maximum": 32767,
    "value_in_use": 0,
    "description": "Maximum recovery interval in minutes",
    "is_dynamic": true,
    "is_advanced": true
  }
}
')
WITH
(
     [configuration_id]     INT            '$."configuration_id"'
    ,[Configuration name]   NVARCHAR(35)   '$."Configuration_Property"."Configuration name"'
    ,[Value]                NVARCHAR(200)  '$."Configuration_Property"."Value"'
    ,[minimum]              NVARCHAR(200)  '$."Configuration_Property"."minimum"'
    ,[maximum]              NVARCHAR(200)  '$."Configuration_Property"."maximum"'
    ,[value_in_use]         NVARCHAR(200)  '$."Configuration_Property"."value_in_use"'
    ,[description]          NVARCHAR(200)  '$."Configuration_Property"."description"'
    ,[is_dynamic]           BIT            '$."Configuration_Property"."is_dynamic"'
    ,[is_advanced]          BIT            '$."Configuration_Property"."is_advanced"'	
);

Notice that no ‘default path’ is specified on the OPENJSON clause (as the second argument, after the document) – the paths in the WITH clause do all the work here.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty
