Sunday, November 12, 2017

Azure: DTUs and eDTUs




DTUs are expanded as Database Throughput Units in some Microsoft articles and as Database Transaction Units in others; Database Transaction Units is now the most widely accepted name. Azure SQL Database has different service tiers available, from the Basic tier up to the Premium RS tier, and each tier comes with a different range of resources. DTUs represent the overall resources of an Azure database, so the tier we select determines the resources allotted to that database. For example, the Basic tier provides 5 DTUs.

In simple terms, the service tier you select when provisioning the Azure database determines the number of DTUs assigned to it, and the performance of the database depends on that number of DTUs.

A DTU is a blended measure of CPU, memory and I/O resources. To determine how many DTUs to select for a database, or how many DTUs the current database is using, you can use the Microsoft DTU calculator. For more in-depth details about resource consumption, use Query Performance Insight in the Azure portal.
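Another way to check how close a database is running to its DTU limit is to query the sys.dm_db_resource_stats view inside the database. Below is a minimal PowerShell sketch using Invoke-Sqlcmd; the server name, database name and login are placeholders, so replace them with your own values.

# Placeholder connection details - replace with your own server, database and login.
$server   = "myazureserver.database.windows.net"
$database = "MyAzureDb"

# sys.dm_db_resource_stats records recent resource usage as a percentage
# of the DTU limit of the current service tier.
$query = @"
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM   sys.dm_db_resource_stats
ORDER BY end_time DESC;
"@

Invoke-Sqlcmd -ServerInstance $server -Database $database `
    -Username "sqladmin" -Password "<YourPassword>" -Query $query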

Once you have selected a tier and the database starts utilizing all of its available DTUs, performance begins to slow down; if the workload keeps growing beyond the available DTUs, the severe slowness eventually turns into timeouts. When we start seeing severe slowness or timeouts, we can either do performance tuning or upgrade to a higher tier, which can be done at any time.
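Upgrading the tier can also be scripted instead of doing it in the portal. A rough sketch using the AzureRM.Sql module is below; the resource group, server and database names and the target objective "S2" are just example values.

# Example values only - replace the resource group, server, database and objective.
Set-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -ServerName "myazureserver" `
    -DatabaseName "MyAzureDb" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S2"    # scale to the S2 performance level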

eDTU stands for elastic DTU. Instead of allotting resources to a single database, eDTUs are allotted to a pool of databases. An elastic pool holds many databases that share the pool's resources, so at any given time no single database should hit a resource crunch. Another advantage of eDTUs is that we can add or remove resources in the pool, if required, without any downtime. If we have multiple databases with unpredictable workloads, the best option is to put them all in a pool: maintaining those databases becomes easier and it also saves cost. Rather than provisioning high resources for a single database with a highly volatile workload, putting such databases in a pool costs much less.
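As a rough idea of how that looks in PowerShell (again using the AzureRM.Sql module, with placeholder names and eDTU sizes), an elastic pool can be created and an existing database moved into it like this:

# Placeholder names and eDTU sizes - adjust to your environment.
$poolParams = @{
    ResourceGroupName = "MyResourceGroup"
    ServerName        = "myazureserver"
    ElasticPoolName   = "MyPool"
    Edition           = "Standard"
    Dtu               = 100    # eDTUs shared by the whole pool
    DatabaseDtuMin    = 0
    DatabaseDtuMax    = 50
}
New-AzureRmSqlElasticPool @poolParams

# Move an existing database into the pool.
Set-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -ServerName "myazureserver" `
    -DatabaseName "MyAzureDb" `
    -ElasticPoolName "MyPool"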

Thanks VV!!

Monday, July 3, 2017

PowerShell interview questions.

1.    What are cmdlets in PowerShell?

Cmdlets are lightweight commands used in PowerShell to perform an action, like Get-Date to get the current date, Get-Help to display information about PowerShell commands, Get-Process to display the list of processes running on the computer, and so on.
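For example, these can be run as-is in any PowerShell session:

Get-Date                               # current date and time
Get-Help Get-Process                   # help for the Get-Process cmdlet
Get-Process | Select-Object -First 5   # first five running processes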

2.    What are the comparison operators in PowerShell?

There are several comparison operators in PowerShell to compare values: '-eq' is equals, '-gt' is greater than, '-lt' is less than and '-ne' is not equal. Examples: 2 -gt 1 returns True, 3 -ne 2 returns True, 3 -eq 2+1 returns True, and so on.
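A few of these evaluated at the prompt:

2 -gt 1            # True
3 -ne 2            # True
3 -eq (2 + 1)      # True
"SQL" -eq "sql"    # True (-eq is case-insensitive; use -ceq for a case-sensitive match)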

3.    Can you name any other operators of PowerShell?

Yes. There are other operators such as '-like', '-notlike', '-replace' and so on.
Eg: In the example below, we replace 'hi' with 'hello'. Check the output before and after the replace.
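A minimal sketch (the strings used here are just sample values):

$greeting = "hi there"               # sample value
$greeting                            # output: hi there
$greeting -replace "hi", "hello"     # output: hello there

"DB01" -like "DB*"                   # True
"DB01" -notlike "SQL*"               # True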


4.    How will you declare variables in PowerShell?

We can declare like below:

$x = 1. This declares a variable named $x and assigns the value 1 to it.
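A couple more declarations, purely for illustration:

$name  = "SQLServer01"                  # string variable (example value)
$count = 5                              # integer variable
"Checking $count services on $name"     # variables expand inside double quotes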

5.    Are there loops and conditional objects in PowerShell?

Yes. We can use all of these in PowerShell: 'for', 'while', 'do..while', 'if', 'elseif', 'switch' and so on.
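For example, a simple loop with a condition inside it:

for ($i = 1; $i -le 5; $i++) {
    if ($i % 2 -eq 0) {
        "$i is even"
    }
    else {
        "$i is odd"
    }
}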

6.    What is the use of the pipeline in PowerShell?

The pipeline is used to join two or more commands so that the output of the first command is sent as input to the second command, and so on.
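For example, piping processes through a sort and a filter:

# Get-Process output is piped to Sort-Object, and the sorted output
# is piped to Select-Object to keep only the top five by CPU usage.
Get-Process | Sort-Object CPU -Descending | Select-Object -Property Name, CPU -First 5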


7.    How to display the list of cmdlets in PowerShell?

Get-Command displays the list of cmdlets available.
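It can also be filtered, for example:

Get-Command                  # everything available in the session
Get-Command -Verb Get        # only commands whose verb is Get
Get-Command -Noun Service    # only commands that work with services
Get-Command *sql*            # anything with 'sql' in the name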

8.    Do we have arrays in PowerShell?

Yes. Arrays are used to assign multiple values to a single variable. Arrays can be created using the '@()' syntax.

Eg:
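The values below are just sample data:

$servers = @("SQL01", "SQL02", "SQL03")   # create an array of three values
$servers[0]                               # first element: SQL01
$servers.Count                            # number of elements: 3
$servers += "SQL04"                       # append a new element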


9.    How to list all SQL-related service details in PowerShell?

Get-Service sql*
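To see a little more detail:

# List SQL-related services with their status and display name.
Get-Service sql* | Select-Object Name, Status, DisplayName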

10. How to run SQL queries in PowerShell?

We can use 'Invoke-Sqlcmd' to perform this, like Invoke-Sqlcmd -Query "select * from employee".
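A slightly fuller sketch pointing at a specific instance and database (the instance, database and table names here are placeholders):

# Requires the SQLPS / SqlServer module that provides Invoke-Sqlcmd.
Invoke-Sqlcmd -ServerInstance "SQL01\INST1" `
    -Database "HRDB" `
    -Query "SELECT TOP (10) * FROM dbo.Employee"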

11. How to restart services in PowerShell?

Stop-Service is for stopping a service.
Start-Service is for starting a service.
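There is also a Restart-Service cmdlet that does both in one step. For example, for the default SQL Server instance service:

Stop-Service  -Name "MSSQLSERVER"      # stop the default SQL Server service
Start-Service -Name "MSSQLSERVER"      # start it again

Restart-Service -Name "MSSQLSERVER"    # or restart it in a single command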

12. What is an SMO and use of it?

SMO stands for SQL Server Management Objects. These assemblies are used for managing SQL Server without using SSMS. We first need to load these libraries before using them. They can help in automating many database activities.
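A minimal sketch of loading SMO and reading some server information; the instance name "localhost" is just a placeholder, and newer setups can load SMO through the SqlServer module instead of LoadWithPartialName.

# Load the SMO assembly (older-style load; kept simple for illustration).
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

# Connect to an instance - "localhost" is a placeholder.
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"

$server.Version        # SQL Server version of the instance
$server.Databases |
    Select-Object Name, RecoveryModel, Status    # databases with recovery model and status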





Thanks VV!!

Thursday, March 23, 2017

Log file full issue on Always On Availability Group database.

We are on SQL Server 2014 using an Always On Availability Group, and all I got was 'Disk is FULL and one log file has consumed the entire disk space'. Wherever you go, log file full issues follow :).

The resolution steps below helped me fix the log file full issue in my environment. They may not help in every situation or every environment, so please do proper testing on your test servers before using the same method in production. Shrinking the log has a direct effect on index performance and point-in-time recoveries, so please DO NOT follow the steps below without knowing their complete impact.


Basic points to remember while using Always On Availability Groups:

1.   We cannot change the recovery model of a database involved in an AAG, as AAGs work with the FULL recovery model only.

2.   We need to shrink the log file of the database on the AAG primary node only.

Shrinking the log file is not recommended at all, because the issue keeps recurring until the root cause has been found and fixed. In my case too, the log file was shrunk only as an immediate resolution.

First, identify what is preventing the log file from being re-used. In my case it was 'LOG_BACKUP'.

SELECT log_reuse_wait_desc, * FROM sys.databases

You can use DBCC LOGINFO as well; if you see virtual log files with a status of '2', those VLFs cannot be re-used, which means the log cannot be shrunk.

I took a log backup of the database on the secondary node, where the backup jobs are usually configured. If your backups run on the primary, take the log backup on the primary node.

BACKUP LOG <DatabaseName> TO DISK = 'D:\BACKUPS\LOG\<DatabaseName>_log.trn'

Once the log backup completes, shrink the log file:

DBCC SHRINKFILE (<LogicalLogFileName>, <TargetSizeInMB>)

The log file is now shrunk.

Make sure regular backups are happening for the database, especially when it is involved in an AAG. In my scenario the log backups were running only once a day, and obviously that was the issue.


Please share how you have fixed the log file full issue while using Always On Availability Group in the comments section below.


Thanks VV!!
#Log file full, #Always On Availability log file full, Always On Availability log file