Data Thoughts – Microsoft Exams and Simulations

G’day,

I’ve based my career around Microsoft products on the Data Platform.

The first product that I seriously learnt was Microsoft SQL Server back around 2000.

I was lucky enough to be employed as a database developer at the time and I decided to go for the MCDBA certification.

I remember feeling OK about the development material, but there was an exam on administration – a topic I was a little shaky on. I’d also opted to take an exam on Windows 2000 as part of the certification set, and that was the one I was most uncomfortable with.

So, I relied on advice from colleagues (I was in a team of network engineers at the time) – but that would only get me part of the way.

Reading books and Microsoft articles (Books Online was not as good as the equivalent Microsoft docs of today) is OK, but I personally find that the information doesn’t stick. I began working through scenarios from the docs, but I found these basic.

I discovered two Microsoft-endorsed companies who produced practice tests – Self-Test Software (who unfortunately are no more) and MeasureUp, who still provide a great number of practice tests. These gave me more than a few problems to work through, and I found the knowledge stuck far better than it did when I simply read.

When it was time to take the test I wondered how quickly I would pick up how to navigate the exam – not the content and questions, but the user interface and question types – and whether I’d waste time on the question types rather than the questions themselves.

As it turned out, this was quite straightforward.

If I were in the same position today, I could have used the new Microsoft exam simulator that can be found here. I encourage anybody who has never taken a Microsoft exam – but intends to – to have a look and get familiar with the question types.

Even as a regular test taker, there are a few parts of the exam engine that I’ve never explored – such as changing the contrast.

Apart from getting to know the engine, there are a few strategies that I normally follow myself.

I always fully answer every question the first time around. Even if I don’t know the answer, I attempt to work it out and give it my best shot – an educated guess in the extreme worst case. However, I’ll always mark that question for review. Occasionally I’ll also mark it for feedback. Note that there is a section at the end of the exam (after you’ve completed it) where you can separately go back to questions and give feedback – this helps Microsoft make the questions better for others.

After you complete the last question you’ll see a dashboard showing things such as questions you have not (fully) answered (it occasionally happens) and questions that you have marked for review – I always concentrate on those categories.

The main point of this post is to have an exam-taking strategy. This might include:

  • Visit the exam simulator.
  • Answer every question the first time around – never skip one.
  • Mark a question for review if you are not confident in the answer you gave.
  • Give feedback on questions at the end of the exam if appropriate.
  • Don’t rush – plan your time carefully and use the remaining time to check answers.
  • Relax – have fun and pass.

And after you’ve successfully passed the exam, remember to claim your badge and let people know about your achievement.

Have a great day

Cheers

Marty

Azure Data Explorer – Using KQL Magic inside Azure Data Studio

One of my previous blogs looked at using the free resources that Microsoft provides to link to an Azure Data Explorer Cluster and start using Kusto Query Language.

In this post, I want to look at doing a similar thing, but this time using Azure Data Studio (ADS). The cluster that we will link to will be the same one that we previously used when we linked to an ADX cluster via the Azure Portal.

Open Azure Data Studio (ADS) and install the KQL extension.

KQL Extension in Azure Data Studio

Now select “New Notebook” – then select the Python kernel.

If you don’t have a Python installation then don’t worry – as soon as we execute our first command, ADS will prompt you to install one.

I’m going to summarize the instructions in this post and add my take to them – if you want to switch to that post (or use the two combined), feel free.

The first two code cells just set up KQL for us.

Here’s the actual code

!pip install Kqlmagic --no-cache-dir --upgrade

%reload_ext Kqlmagic

The next part requires authentication – but all you need is a Microsoft account (Outlook, Hotmail, etc.)

Here’s the code cell

%kql AzureDataExplorer://tenant="Microsoft.com";code;cluster='help';database='Samples'

When you run the cell, you’ll be given a code and a link to a Microsoft site.

Simply navigate to the link, authenticate using a Microsoft account and type in the code.

You’ll get a message stating everything is fine, and then you’ll be able to head back to ADS and carry on with the final statements, which will query the database.

Next, let’s run some code

KQL Statement

And the results are

Results from the KQL Statement
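The statement itself lives in the screenshot, but any query against the Samples database works here – for instance (assuming the StormEvents table that ships with the help cluster), a cell along these lines, with the %%kql magic on the first line:

```kusto
%%kql
StormEvents
| take 10
```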

Here’s another KQL statement

KQL Statement
Results from the KQL Statement

The %%kql is a “Magic” command that signals to the kernel that what we are about to run is not Python – but Kusto Query Language!

Hence KQL Magic!

Hope this helps somebody

Cheers

Marty

Azure Data Explorer – A great way to learn Kusto Query Language

Log Analytics is a great resource for storing our monitoring data from Azure.

But in order to get the best from your nicely acquired monitoring data, you’ll need a way to investigate it. You can do that in the Azure Portal by clicking around in the UI.

Kusto Query Language is the way to investigate data that is stored in Log Analytics.

We can either experiment in our own log analytics workspace – or we can use the workspace that Microsoft provide for such things.

The Log Analytics Playground

You can find that here

You’ll need to authenticate with a Microsoft account (e.g. Outlook, Hotmail)

Personally, I think a great resource for learning Kusto Query Language (KQL) is the Azure Data Explorer (ADX) cluster that Microsoft provides.

You can find that here

There’s a bit more real estate – but you choose 🙂

The ADX cluster is clean and easy to find your way around

Azure Data Explorer (ADX)

There’s enough data in there to run some KQL queries – and get to grips with the language.
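As a first step, a query along these lines (assuming the StormEvents table in the Samples database that the help cluster provides) gives a feel for the language:

```kusto
// Count storm events per US state and keep the five busiest
StormEvents
| summarize Events = count() by State
| top 5 by Events
```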

If you’re looking to learn Kusto Query language then I’d suggest heading along to this resource.

Hope this helps somebody.

Cheers

Marty

Azure Data Explorer – Setting up a Cluster

I’ve worked with Log Analytics quite a bit over the years, and that’s taken me down the road of the Kusto Query Language – or KQL.

So, using Azure Data Explorer Clusters seemed the next logical step on that journey.

Log Analytics is great for storing and querying diagnostic data from Azure (or any other infrastructure) – but I decided that I might want to investigate my own data.

So, with that in mind – I decided to set up my own Azure Data Explorer Cluster and start looking around.

It’s easy enough to find in the Azure Portal – I chose to favorite it as I expect to be using it a fair bit over the next little while

Find “Azure Data Explorer Clusters” via the Azure Portal search bar

You can also get to it via a shortcut

Select “Azure Data Explorer Clusters” – optionally favorite

Next, I selected “Create” and then filled in the minimum amount of information needed to get up and running.

Information for our ADX cluster

Just a few things to focus on here. Firstly, the combination of the cluster name and the region is going to form part of a URI – so this has to be unique. There are also certain formats that the name has to meet, but you’ll soon know if you haven’t met them, as the screen will call them out instantly.

Additionally, I’m just creating a Dev cluster here – note that there is no SLA.

Also, the region that I chose has Availability Zones – yours might not, or you might choose not to take advantage of them. I chose to include all three.

Availability Zones

You’ll then notice that you can step through all of the remaining setup.

Additional setup (can be skipped – I accepted the defaults)

I simply accepted the defaults, but it’s great to know you can define additional settings at this stage.

It’ll take a little time to set the cluster up – but you’ll soon be up and running.

And once we are up and running we can see that the URI is made up of the cluster name and region

ADX Cluster name

In order to create a database on the cluster head to the resource, click on “Databases” in the left hand side menu (scrolling may be needed), click “Add database” near the top of the screen and then enter a name for the database in the fly-out pane.

Create a new database on our ADX cluster

That’s it – we now have our first Azure Data Explorer Cluster created with a new shiny database.

Hope this helps somebody.

Cheers

Marty.

SQL Server STUFF

The STUFF function in SQL Server is one of those little gems that is very under-used but, when needed, can be a really handy utility – at least that’s how it appears to me.

The STUFF function allows you to replace specific characters in a string with the contents of another string – we identify those characters by number, signifying where to start and how many characters to replace.

At this stage, you might be wondering why the function is not called REPLACE – great question. It turns out that T-SQL already has a REPLACE function, so (obviously 🙂 ) STUFF became the name for this one.

The function basically takes a string, the number of the character to begin the removal from, the length of the string to remove, and another string representing the characters that should be STUFFed into the original string.

Here’s an example.

Let’s turn the string “Your welcome” into the string “You’re welcome” using T-SQL and the STUFF function.

DECLARE @STRING NVARCHAR(20) = 'Your Welcome';
	
SELECT STUFF(@STRING , 4,1,'''re') AS Correction;

And we get

The strings can even be the result of SELECT statements themselves

SELECT STUFF((SELECT 'Your Welcome') , 4,1,(SELECT '''re')) AS [Correction(Again)];

The replacement string can even be an empty string – in which case we are essentially removing characters. I’ve found this really useful when working with comma-separated strings and removing either the first or last comma.

Here’s an example (removing the leading comma)

SELECT STUFF((SELECT ',Martin,Julie,John,Bert,Stephen,Jean,Claire') , 1,1,'') AS [First Comma removed];

And we get
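Removing the trailing comma works the same way – point STUFF at the last character instead. A small sketch:

```sql
DECLARE @List NVARCHAR(100) = N'Martin,Julie,John,Bert,Stephen,Jean,Claire,';

-- Start at the final character, remove one character, insert nothing
SELECT STUFF(@List , LEN(@List) , 1 , '') AS [Last Comma removed];
```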

Hope somebody finds this useful

Have a great day

Cheers

Martin

SQL Server Sounds

One of my children was exploring sounds at school and asked me to give them a hand.

One of the tasks that was set for them was writing down a list of word pairs that are spelt differently but pronounced identically.

While I managed to come up with a few, it was a bit more of a challenge than I originally thought.

Anyway, being the techie that I am (or at least think I am 🙂 ) – I decided to see if technology could help me. Specifically, could SQL Server help me out.

I’d worked with SOUNDEX in Oracle many, many years ago, but I’d never really had reason to explore it in SQL Server – well, now was my chance.

So, not only does SQL Server have a SOUNDEX function, it also has a DIFFERENCE function that indicates how different SQL Server thinks two words may be from one another.

As you might expect with functions based around strings – collation is a factor.

SOUNDEX returns a four-character code that is based on how the string sounds when spoken in English. You can read more about how those codes are put together in the docs.

DIFFERENCE returns an integer value between 0 and 4, with 0 meaning no similarity and 4 representing a very close match.
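A quick side-by-side of the two functions, before the larger test below:

```sql
-- 'heard' and 'herd' reduce to the same code, so DIFFERENCE scores them 4
SELECT SOUNDEX('heard')             AS [SOUNDEX(heard)]  -- H630
     , SOUNDEX('herd')              AS [SOUNDEX(herd)]   -- H630
     , DIFFERENCE('heard' , 'herd') AS [Difference];     -- 4
```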

I experimented with a few different combinations, ranging from the identical to the clearly different – I even used some numbers.

Here’s my test data set

USE tempdb;
GO

SELECT 
	  Word 
	, [SOUNDEX(word)]    = SOUNDEX(Word)
	, Compare
	, [SOUNDEX(Compare)] = SOUNDEX(Compare)
	, [Difference]       = DIFFERENCE(Word , Compare)
FROM
(
	VALUES
	  ('Talk'       , 'Torque')
	, ('witch'      , 'which')
	, ('green'      , 'greene')
	, ('bee'        , 'be')
	, ('piece'      , 'peace')
	, ('right'      , 'write')
	, ('absence'    , 'absents')
	, ('awe'        , 'oar')
	, ('ball'       , 'bawl')
	, ('band'       , 'banned')
	, ('beach'      , 'beech')
	, ('seed'       , 'cede')
	, ('heard'      , 'herd')
	, ('licker'     , 'liquor')
	, ('hertz'      , 'hurts')
	, ('centre'     , 'center')
	, ('carat'      , 'carrot')
	, ('heroin'     , 'heroine')
	, ('metal'      , 'mettle')
	, ('lightening' , 'lightning')
	, ('Martin'     , 'Luke')
	, ('you''ll'	, 'yule')
	, ('123'	    , '123')
	, ('9183646262' , '9183646262')
	, ('a'          , 'A')
	, ('boy'        , 'girl')
) AS Sounds(Word , Compare)
ORDER BY
	[Difference] DESC;

And here are the results

We can see that it found most homophones to be a 4 – indicating a very strong similarity.

Interestingly enough, it compared numbers OK, and it got the difference between the strings ‘martin’ and ‘luke’ right – not similar at all. However, it thought the strings ‘boy’ and ‘girl’ had at least some similarity, which seemed odd. It didn’t get ‘right’ and ‘write’ correct, likely because they begin with different letters.

On the whole very encouraging – but not perfect.

An interesting venture into the worlds of sounds in SQL Server and what my kids are up to at school.

Have a great day.

Cheers

Marty

SQL Grouping with Rollup – Simple use case

G’Day,

I was recently looking at getting some sales data from SQL Server, aggregating some totals and then displaying a grand total.

This is the sort of thing that very often gets done on the client side – but as it turns out, this can be accomplished using standard T-SQL, and, before you ask, no, there are no cursors in sight.

So I put together some sample data, here it is

USE tempdb;
GO
/*
Create a simple table.
We'll simply group by ClientName
*/
IF OBJECT_ID('dbo.Sales') IS NOT NULL
	BEGIN
		DROP TABLE dbo.Sales;
	END;
GO
CREATE TABLE dbo.Sales
(
	 SaleID   INT NOT NULL IDENTITY(1,1)
	,ClientName NVARCHAR(10) NULL
	,Amount  INT NOT NULL
);
GO
/*
Insert some data
*/
INSERT INTO dbo.Sales(ClientName , Amount)
SELECT
	  ClientName
	, Amount
FROM
(
	VALUES
	('Marty' , 10),
	('Marty' , 1),
	('Marty' , 3),
	('Callum' , 6),
	('Callum' , 3),
	('Callum' , 23),
	('Jake' , 45),
	('Jake' , 3),
	('Jake' , 67),
	('Bert' , 5),
	('Bert' , 1),
	('Bert' , 0),
	('Bert' , 6),
	('Zeb' , 35),
	('Zeb' , 23),
	('Zeb' , 12)
) AS TempSales(ClientName,Amount);
-----------------------------------------------------------------

The output that we’d really like – using only T-SQL – is like below.

Note that the last line contains a grand total.

While there are a few ways we might be able to get this result set, let’s concentrate on one in particular: using ROLLUP and GROUPING.

The following query comes pretty close

SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Grouping]  = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ROLLUP(ClientName)
ORDER BY
	[Grouping] ASC , [Full Name];

and gives us

This is close, but note that we have an extra column called “Grouping” – and while the total row looks fine, its name is registering as NULL!

So, how did the “Grouping” column get there?

Well, first off – I decided to call it “Grouping”, but I could have literally called it anything that I wanted.

The column actually came from the use of the T-SQL GROUPING function. This function is basically used for identifying totaled groups.

The totaled row has a NULL value in the grouped column (in this case there is only one) – and the 1 in the “Grouping” column indicates that this NULL is giving us a grand total, which differentiates it from any NULL values in the same column that are not grand totals.

To demonstrate, let’s change the value ‘Bert’ in our original table to NULL

UPDATE dbo.Sales SET ClientName = NULL WHERE ClientName = 'Bert';

and when we run the query again

SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Grouping]  = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ROLLUP(ClientName)
ORDER BY
	[Grouping] ASC , [Full Name];

we get the result

Note that we now have two NULL values – one of which actually represents a Grand Total and the other just represents NULL values in the table.

Also note that SQL has grouped the NULL values from the table together – so watch out for how NULLs are handled in various T-SQL expressions.

So the only way we know which NULL represents a grand total is by looking at the “Grouping” column – the grand total row has a value of 1, while the aggregated NULL values from the table are represented by a 0.

With this knowledge, let’s try and tidy up this output to get the output that we originally wanted – I chose to use a Common Table Expression (CTE).

WITH ClientAmountsCTE
AS
(
	SELECT 
		  [Client Name] = ClientName
		, [Units]       = SUM(Amount)
		, [Total]       = GROUPING(ClientName)
	FROM 
		dbo.Sales
	GROUP BY ROLLUP(ClientName)
)
SELECT 
	  [Client Name] =
	  CASE
		WHEN [Total] = 1 THEN 'Total'
		WHEN [Client Name] IS NULL THEN 'UNKNOWN'
		ELSE [Client Name]
	  END
	, [Units]
	--, [Total]
FROM 
	ClientAmountsCTE
ORDER BY
	[Total] ASC;
GO

which gives us the result

Notice that a CASE expression has been used to differentiate the NULL rows based on the value of the “Total” column produced by GROUPING (which is itself not used in the display output).

ROLLUP can do a little more than this when multiple columns are involved, so check out the docs.
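As a quick sketch of that multi-column behaviour (with a made-up Region column, purely for illustration), ROLLUP produces a subtotal row at each level:

```sql
SELECT
	  Region
	, ClientName
	, [Units] = SUM(Amount)
FROM
(
	VALUES
	  ('North' , 'Marty'  , 14)
	, ('North' , 'Callum' , 32)
	, ('South' , 'Jake'   , 115)
	, ('South' , 'Zeb'    , 70)
) AS RegionalSales(Region , ClientName , Amount)
GROUP BY ROLLUP(Region , ClientName);
-- Produces per-client rows, a subtotal per Region, and a grand total
```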

Note also that there are two ways to write ROLLUP – one is non-ISO compliant (uses WITH) and the other is ISO compliant (does not use WITH).

Examples

-----------------------------------------------------------------
/*
Select the data, group, and use the ROLLUP syntax
Non-ISO compliant syntax (uses WITH)
*/
SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Total]     = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY ClientName WITH ROLLUP
ORDER BY
	[Full Name]  DESC;
-----------------------------------------------------------------
/*
Select the data, group, and use the ROLLUP syntax
Fully ISO compliant syntax (does not use WITH)
*/
SELECT 
	  [Full Name] = ClientName
	, [Units]     = SUM(Amount)
	, [Total]     = GROUPING(ClientName)
FROM 
	dbo.Sales
GROUP BY  ROLLUP(ClientName)
ORDER BY
	[Full Name]  DESC;

Hope this helps somebody.

Have a great day.

Cheers

Marty

More Times Like These with SQL Server

I blogged the other day about having some issues translating Oracle date formats to SQL Server date data types.

I find the way SQL Server handles dates with a defined data type to be much more intuitive than the string-manipulation techniques that Oracle seems to favour.

My new found friend is the SQL Server FORMAT() function.

Here are some examples of using FORMAT() with dates – and while you can likely use CONVERT to achieve similar results, I find the option of using FORMAT() very useful.

It is also possible to format much more than dates with the FORMAT() function.
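For instance – the second argument is a .NET format string, so numbers and currencies work too. A quick sketch:

```sql
SELECT FORMAT(1234567.891 , 'N2')           AS [Number];        -- thousands separators, 2 decimal places
SELECT FORMAT(0.125       , 'P1')           AS [Percentage];    -- as a percentage
SELECT FORMAT(1234567.891 , 'C2' , 'en-US') AS [US Currency];   -- culture-aware currency
```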



SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'ddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'ddd dd MMM yyyy')));

SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 1 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 2 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 3 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 4 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 5 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 6 , SYSDATETIME()) , 'dddd dd MMM yyyy')));
SELECT QUOTENAME(UPPER(FORMAT(DATEADD(DAY , 7 , SYSDATETIME()) , 'dddd dd MMM yyyy')));

Have a great day

Cheers

Martin

Identifying Columns with Collations in SQL Server

Recently I was asked if I could identify columns in a single SQL Server database that had a collation. I didn’t need to change anything (well, not initially – but we’ll look at doing that anyway), simply identify columns that were candidates to have their collation changed.

Now, there are a few other considerations here, notably the database collation and server collation, but more on that later. For now, back to the problem at hand.
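As an aside, those server- and database-level collations are easy to check up front:

```sql
-- Instance and current-database collations
SELECT SERVERPROPERTY('Collation')                  AS [Server Collation]
     , DATABASEPROPERTYEX(DB_NAME() , 'Collation') AS [Database Collation];
```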

My first stop was to identify all data types that could possibly have a collation.

In order to do that I’m going to head to my old friends – the system tables.

First off, a quick trip to sys.types to see which data types may have collations

SELECT T.[name] FROM sys.types AS T WHERE T.collation_name IS NOT NULL;

This yields the following results

  • text
  • ntext
  • varchar
  • char
  • nvarchar
  • nchar
  • sysname

SYSNAME is an interesting one – it’s basically equivalent to NVARCHAR(128); there’s a discussion about that here.

But in order to find the columns in our database that have collations we can use a query such as

	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						-- max_length is stored in bytes, so halve it for the Unicode types
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0;

From here, it’s just a matter of extending the query to produce some T-SQL that will update the collations of the columns – I chose to put this in a Common Table Expression (CTE), but go with whatever suits you.

WITH CTE
AS
(
	SELECT 
		 [Table Name]  = T.[name]
		,[Column Name] = C.[name]
		,C.collation_name
		,TY.[name]
		,[Typelength] = CASE
						WHEN TY.[name] IN ('text' , 'ntext') THEN UPPER(TY.[name])
						WHEN C.[max_length] = -1 THEN UPPER(TY.[name]) + N'(MAX)'
						-- max_length is stored in bytes, so halve it for the Unicode types
						WHEN TY.[name] IN ('nchar' , 'nvarchar') THEN UPPER(TY.[name]) + N'(' + CAST(C.[max_length] / 2 AS NVARCHAR(31)) + N')'
						ELSE UPPER(TY.[name]) + N'(' + CAST(C.[max_length] AS NVARCHAR(31)) + N')'
					END
	FROM sys.columns AS C
	JOIN sys.tables AS T ON C.[object_id] = T.[object_id]
	JOIN sys.types AS TY ON TY.[user_type_id] = C.[user_type_id]
	WHERE C.collation_name IS NOT NULL
	AND T.[is_ms_shipped] = 0
)
SELECT N'ALTER TABLE ' + [Table Name] + N' ALTER COLUMN ' + [Column Name] + N' ' + [Typelength] + N' COLLATE Latin1_General_CI_AS' + N';' FROM CTE;

This will produce statements that change the collation of columns on an individual basis. Notice that in the final SELECT statement from the CTE, I’ve stated COLLATE Latin1_General_CI_AS. This will generate a lot of statements such as

ALTER TABLE CollationChangeTable ALTER COLUMN ColumnOne NCHAR(8) COLLATE Latin1_General_CI_AS;

I prefer this approach to a cursor as it gives me an opportunity to eyeball the T-SQL and column names.

A cursor may be more appropriate in a large database – your choice, there are lots of options around.

Proceed with caution, as getting column collations out of sync with the database and instance can bite you hard down the track – such as conflicts with tempdb joins.

Have a great day.

Cheers

Marty.

OPENJSON and CROSS APPLY

OPENJSON is pretty central to manipulating JSON documents in T-SQL.

As we’ve seen, we can use the default schema, which returns metadata about the JSON document, or an explicit schema, where we supply a WITH clause and a PATH to the data that we want.
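As a reminder, the default schema on its own returns one row per key, with its value and type:

```sql
DECLARE @json NVARCHAR(MAX) = N'{"name":"recovery interval (min)","value_in_use":0}';

-- Default schema: [key], [value] and a [type] code for each top-level member
SELECT [key] , [value] , [type] FROM OPENJSON(@json);
-- Two rows: one for "name" (a string), one for "value_in_use" (a number)
```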

So far, in our examples we have either used the default schema or an explicit schema – but we have not mixed the two.

Indeed, if you try this in a simple query, you’ll find it is not possible without the use of the APPLY operator.

OPENJSON is actually a table-valued function that parses JSON – either into an explicit schema (that we provide) or the default schema.

We can mix both the default and explicit schema by using CROSS APPLY

Like so

DECLARE @json NVARCHAR(MAX) =
N'
{
    "Configuration Property": {
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      "minimum": 0,
      "maximum": 32767,
      "value_in_use": 0,
      "description": "Maximum recovery interval in minutes",
      "is_dynamic": true,
      "is_advanced": true
    }
}
';
 
SELECT 
	 DS.[key]
	,DS.[value]
	,DS.[type]
	,ES.[Configuration name]
	,ES.[Value]
	,ES.[minimum]
	,ES.[maximum]
	,ES.[value_in_use]
	,ES.[description]
	,ES.[is_dynamic]
	,ES.[is_advanced] 
FROM  
     OPENJSON(@json)  AS DS
CROSS APPLY
     OPENJSON(DS.[value])
WITH
(
	 [Configuration name]  NVARCHAR(35)
	,[Value]               NVARCHAR(35)
	,[minimum]             NVARCHAR(35)
	,[maximum]             NVARCHAR(35)
	,[value_in_use]        NVARCHAR(35)
	,[description]         NVARCHAR(35)
	,[is_dynamic]          NVARCHAR(35)
	,[is_advanced]         NVARCHAR(35)
) AS ES
GO

This technique can come in extremely handy when attempting to drill down into sub arrays and objects contained in a JSON document.

Notice also that aliasing the default and explicit schemas differently has allowed us to request data from both at once. This is what lets us get to the element named value – that name appears in both the default and explicit schemas and so requires an alias.

I hope this has helped on your SQL Server JSON journey. We’ll see more real soon.

Have a great day

Cheers

Marty