Thu, 29 Aug 2013

Find the Size of Azure Databases using PowerShell

There is a T-SQL way of finding the size of a database, and there are several public-domain scripts that do the same. I also created one, shown below, with a slightly clearer and more structured output.

 

Code Snippet

 

CREATE TABLE #DatabaseInfo (
    database_id      varchar(30),
    name             nvarchar(500),
    AlltablesizeinKB int
)

CREATE TABLE #DatabaseInfo1 (
    dbname     varchar(400),
    dbsizeinKB int
)

DECLARE @dbname   nvarchar(100)
DECLARE @command  nvarchar(4000)
DECLARE @command1 nvarchar(4000)
DECLARE @testflag smallint  -- 0 = execute, 1 = test (just print the command)

SET @testflag = 0

DECLARE dbcursor1 CURSOR FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE name NOT IN ('master')

OPEN dbcursor1
FETCH NEXT FROM dbcursor1 INTO @dbname;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Size in KB per object in the current database
    SET @command =
        'select database_id, sys.objects.name, sum(reserved_page_count) * 8192 / 1024
         from sys.dm_db_partition_stats, sys.objects, sys.databases
         where is_ms_shipped = 0 and database_id > 1
           and sys.dm_db_partition_stats.object_id = sys.objects.object_id
         group by database_id, sys.objects.name'

    -- Total size in KB per database
    SET @command1 =
        'select [name], sum(reserved_page_count) * 8192 / 1024 as dbsizeinKB
         from sys.dm_db_partition_stats, sys.databases
         where database_id > 1
         group by database_id, sys.databases.name'

    -- select * from sys.objects

    IF @testflag = 0
    BEGIN
        INSERT INTO #DatabaseInfo
        EXEC sp_executesql @command

        INSERT INTO #DatabaseInfo1
        EXEC sp_executesql @command1
    END
    ELSE
        SELECT @command

    FETCH NEXT FROM dbcursor1 INTO @dbname;
END

SELECT * FROM #DatabaseInfo
SELECT * FROM #DatabaseInfo1

DROP TABLE #DatabaseInfo
DROP TABLE #DatabaseInfo1

CLOSE dbcursor1
DEALLOCATE dbcursor1


Now the Problem

 

Now, if you try to get the size of all the databases using the above query, it doesn't work right. Not sad, actually; surprised is the better word!

 

Firstly, you cannot use "USE <database>" in Azure, so switching context from one user database to another inside the query doesn't work. Secondly, I tried to run the query in one go for all databases and retrieve the results, but then I realised that sys.dm_db_partition_stats in Azure is scoped to the user database, so we cannot run the query from the context of master either. Alright!
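In other words, with plain T-SQL the only option left is a separate connection per database. Roughly, that looks like the sketch below. This is just an illustration of the constraint, assuming the SQLPS module (for Invoke-Sqlcmd) and SQL authentication; the server name and credentials are placeholders.

#Sketch only: one connection per database, because sys.dm_db_partition_stats
#only reports on the database you are currently connected to.
Import-Module SQLPS -DisableNameChecking

$server = "yourserver.database.windows.net"   #placeholder Azure SQL Database server
$user   = "yourlogin"
$pass   = "yourpassword"

#Enumerate the user databases from master
$names = Invoke-Sqlcmd -ServerInstance $server -Database "master" -Username $user -Password $pass `
            -Query "SELECT name FROM sys.databases WHERE name <> 'master'"

#Run the size query while connected to each database in turn
foreach ($n in $names)
{
    Invoke-Sqlcmd -ServerInstance $server -Database $n.name -Username $user -Password $pass `
        -Query "SELECT DB_NAME() AS dbname, SUM(reserved_page_count) * 8.0 / 128 AS SizeMB FROM sys.dm_db_partition_stats"
}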

 

I have figured out a better way of doing this and have automated the output generation in Excel. Here is what needs to be done. Thanks to my friend Matt Lavery, a PFE at Microsoft Australia, who suggested the idea and logic behind getting this result using PowerShell; I changed it further and enhanced the output a little.

 

This is how it works; try it and let me know how you find it. Follow the steps as-is.

 

1. Download the Azure PowerShell module directly from http://go.microsoft.com/?linkid=9811175&clcid=0x409 (save the exe; this installs Azure PowerShell on your machine) and use the cmdlets it provides. The only catch is that you have to provide the credentials for the server, either via a prompt or hard-coded if you like (my example below uses a prompt).

 

2. Here is what I have put together for checking the database capacity. I have also added MaxSize, which helps you identify the maximum size each database can grow to in your subscription. Change the Azure SQL Database server name accordingly in this script! Use PowerShell ISE for better visibility. Let me know if you have issues running the script or have any questions!

 

(Note: You can copy and paste the code below into Notepad, name it anything you like, and give it a .ps1 extension, e.g. DBSize.ps1.) I am sure you all know how to execute PowerShell, but me being me: open Windows Azure PowerShell (as Administrator), browse to the folder where you saved the script, and run .\DBSize.ps1.
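For completeness, running it looks roughly like this at the prompt (the folder path is just an example, and the execution-policy line is only needed once if script execution is blocked on your machine):

#Allow locally saved scripts to run, if they are currently blocked
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

#Browse to wherever you saved the script and run it
cd C:\Scripts
.\DBSize.ps1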

 

Code 

#Make sure you have imported your PublishSettings file as per http://msdn.microsoft.com/en-us/library/jj554332%28v=azure.100%29.aspx#BKMK_Configure
#Import the Azure module
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

#Create a credential to use
#NOTE: You will be prompted for authentication to the server
$cred = Get-Credential

#Create a connection context for your server (replace the placeholder with your own Azure SQL Database server name, e.g. "ttgochqr7t")
$ctx = New-AzureSqlDatabaseServerContext -ServerName "<YourServerName>" -Credential $cred

#Get all the databases on the server
$dbs = Get-AzureSqlDatabase $ctx

#Open Excel and prepare a worksheet
$excel = New-Object -ComObject excel.application
$excel.Visible = $true
$chartType = "microsoft.office.interop.excel.xlChartType" -as [type]
$workbook = $excel.Workbooks.Add()
$workbook.WorkSheets.Item(1).Name = "dbs"
$sheet = $workbook.WorkSheets.Item("dbs")

#Header row
$sheet.Cells.Item(1,1) = "DB Name"
$sheet.Cells.Item(1,2) = "Current Size (MB)"
$sheet.Cells.Item(1,3) = "Max Size (GB)"

#One row per database
$x = 2
foreach ($db in $dbs)
{
    $sheet.Cells.Item($x,1) = $db.Name
    $sheet.Cells.Item($x,2) = $db.SizeMB
    $sheet.Cells.Item($x,3) = $db.MaxSizeGB
    $x++
}

#Tidy up the columns and add a chart
$range = $sheet.UsedRange
$range.EntireColumn.AutoFit()
$workbook.Charts.Add()

#Uncomment to save and close the workbook
#$vFullPath = 'C:\DataProtector\Data\AzureDb.xls'
#$Excel.SaveAs($vFullPath)
#$Excel.Close()

 

There are plenty of properties available for each database. If you want to see how to hard-code the credentials, check out http://blogs.msdn.com/b/windowsazure/archive/2013/02/07/windows-azure-sql-database-management-with-powershell.aspx, though I wouldn't suggest that for security reasons.
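If you want to explore what else is on those database objects, a quick sketch (run it after the script above, so $dbs is populated) is:

#List every property exposed on the objects returned by Get-AzureSqlDatabase
$dbs | Get-Member -MemberType Properties

#Pick a few into a table; Name, SizeMB and MaxSizeGB are the ones used above,
#the other property name here is an example and may differ between module versions
$dbs | Select-Object Name, SizeMB, MaxSizeGB, Edition | Format-Table -AutoSize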

 

Have fun with Cloud and Powershell!!


Thu, 15 Aug 2013

[Connection manager "FTP Connection Manager"] Error: An error occurred in the requested FTP operation. Detailed error description: The filename, directory name, or volume label syntax is incorrect.

 

Problem:

The user is trying to execute an SSIS package but it fails on the FTP task with the following error:

 

Error

[Connection manager "FTP Connection Manager"] Error: An error occurred in the requested FTP operation. Detailed error description: The filename, directory name, or volume label syntax is incorrect. .

 

Here is the detailed package flow and logic

 

The SSIS package had the following logic flow:

1. First, data is copied into a Source.csv file.

2. The file content is then loaded into the source OLE DB data connection (SQL Server).

3. The data is then copied from the source to the destination.

4. At the destination, a rename operation is performed.

5. The renamed file is then sent to FTP.

 

Data Flow Diagram

1. Process data and dump it into the csv file.

2. File operation: copy the template file from the source to the destination.

3. File operation: rename the destination file to the required format.

4. FTP: send the file via FTP, then delete the destination file.

 

Details

FTP Task Editor => IsLocalPathVariable: True

Local Variable: User::Final

Operation: Send Files

IsTransferASCII: False

Remote Parameters

IsRemotePathVariable: False

Remote Path: \Location

OverwriteFileAtDest: Yes

Final: D:\test20130724_230456_662000000.xls, which has the following expression:

"D:\\test"+Replace(Replace(Replace(Replace((DT_WSTR,50)(getdate()),"-","")," ","_"),":",""),".","_")+".xls"

 

 

Resolution:

Fix the Final and Rename variables with correct values; include the \ after the drive letter to fix the problem. As you know, if you write C:ProgramFiles you won't get to Program Files, so we need to fix the expression and the evaluated expression for User::Final. Once that is fixed, the package will run.
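To see why the missing backslash matters, here is a quick illustration you can run from any PowerShell prompt (not part of the package; the resolved path on your machine will differ). A drive letter without a backslash gives a drive-relative path, which resolves against the current directory on that drive rather than the root.

#"C:ProgramFiles" is drive-relative: it resolves under the current directory on C:
[System.IO.Path]::GetFullPath("C:ProgramFiles")     # e.g. C:\Users\<you>\ProgramFiles
[System.IO.Path]::GetFullPath("C:\Program Files")   # C:\Program Files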

 

See the user variable User::Destination, which has the correct value C:\Destination. The Final and Rename variables do not, so you will have to fix them both; that should be really easy.

 

For the detailed screenshots, please see my blog:

http://blogs.msdn.com/b/sql-bi-sap-cloud-crm_all_in_one_place/archive/2013/08/15/connection-manager-quot-ftp-connection-manager-quot-error-an-error-occurred-in-the-requested-ftp-operation-detailed-error-description-the-filename-directory-name-or-volume-label-syntax-is-incorr.aspx

 


Fri, 26 Jul 2013

SQL Server on SSDs

We hear a lot of questions about SQL Server hosted on SSDs. I thought it would be worth discussing some of the basic questions, pros and cons included, that come up while planning a design for your SQL Server infrastructure layout on SSDs.

 

What are SSDs?

Solid State Disks (SSDs) are devices that take advantage of the high speed of flash memory chips to provide fast persistent data storage. SSDs come in a variety of forms that are suitable for different purposes. The most common forms of SSD are:

  • 2.5” disk replacements. These devices are SSDs with the interface electronics of a standard 2.5-inch moving-head disk. You commonly find them in laptops; they are also installed in some desktops and even some servers. No special-purpose drivers are needed for this type of SSD because it looks to the computer just like a fast disk.
  • Rack-mounted SSD devices. These devices have their own chassis and connect with the same Host Bus Adapters (HBAs) that might be used to connect to a SAN. A typical manufacturer is Texas Memory Systems. These systems tend to be expensive but fast replacements for SANs.
  • PCI-e boards. These devices use the same chips as other disk-replacement products but put the chips on a PCI Express board. This allows the device to take advantage of the speed of the PCI Express bus to provide a higher number of I/O operations per second and more throughput than is possible when moving data through a disk interface. These devices are manufactured by companies such as Fusion-io, OCZ, and Dolphin.
  • SSD enhancements to a standard SAN. Each SAN manufacturer implements SSDs in a different way, although some use 2.5” disk replacements. These give you a boost over the speed of rotating disks in the same SAN, but the data still has to navigate the narrow data path between the CPU and the SAN.

 

Question: Are SSDs a viable option to house databases and log files? My understanding is that if we use SLC SSDs we shouldn't have any worries when it comes to burning out the disks. The user databases and tempdb will be hit with a lot of reads and writes.

 

Answer: Hosting the database data and log files on SSDs is a great decision. The reason to use SSDs with SQL Server is the high number of random I/O operations they can perform with low latency and improved throughput. You will find numerous examples on the net suggesting the same. I wouldn't mind putting all the database files on the SSD; here are the reasons why:

 

  • SQL Server actually writes changes to the data files relatively infrequently. Changes are written to the transaction log immediately and then written out to the data file by the lazy writer (or at a checkpoint) at some point in the future, when the I/O subsystem isn't busy. So it is usually not trying to write to both the transaction log and the data files at the same time. This is by design.
  • Does tempdb live in RAM? Not by default, although it can be kept in RAM (http://support.microsoft.com/kb/115050). Since tempdb also has a physical disk file backing it, it becomes very important for that disk to sustain heavy writes.
  • Log write speed is critical for commits and inserts, so regardless of the data files, put the log files on an SSD for sure. The tempdb data file is largely irrelevant unless the server overflows to it; it should have very little I/O. (See the sketch after this list for a quick way to measure the read/write mix per file.)
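If you want to check the actual read/write mix per database file on your own instance before deciding what goes on the SSD, a minimal sketch along these lines can help. It assumes the SQLPS module for Invoke-Sqlcmd; the server name is a placeholder, and the numbers are cumulative since the last instance restart.

Import-Module SQLPS -DisableNameChecking

#Cumulative reads and writes per database file; files with heavy writes
#(typically the log files and tempdb) benefit most from the SSD.
$query = @"
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file,
       mf.type_desc             AS file_type,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_written / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_writes DESC;
"@

Invoke-Sqlcmd -ServerInstance "YOURSERVER\INSTANCE" -Query $query | Format-Table -AutoSize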

 

The classical situation where you'll gain performance by putting the transaction log on a separate disk is when you have a fairly even write/read mix and you don't have enough RAM for SQL Server to serve those reads from the pages cached in RAM, forcing it to read those pages from disk. Then you get disk contention if both the data file and the transaction logs live on the same physical disk. The one exception might be if you have a database too big to fit into your workstation's RAM and you're doing some kind of big, complex data import that involves lots of reads in addition to the writes.

 

Most of the examples, and my own customer experiences, show that it is possible to reduce the run time of procedures that make use of tempdb by using SSDs. Of course, since in this setup only tempdb lives on the Fusion-io board (an SLC SSD), there will only be an improvement in performance when tempdb is actually used. SQL Server uses tempdb for temp tables that user code creates with names that begin with # or ##. It also uses tempdb to store table variables, table-valued parameters, and XML variables. When needed, tempdb is used for work tables for sorting, hashing, GROUP BY, DISTINCT, snapshot isolation (the version store), and index rebuild operations.
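If you want to see how much of tempdb a workload is actually consuming, and whether it is user objects, internal objects (sorts, hashes, spools) or the version store, a quick sketch is below (again assuming the SQLPS module for Invoke-Sqlcmd; the server name is a placeholder):

Import-Module SQLPS -DisableNameChecking

#Breakdown of current tempdb space usage; pages are 8 KB, so * 8 / 1024 converts to MB
$query = @"
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM sys.dm_db_file_space_usage;
"@

Invoke-Sqlcmd -ServerInstance "YOURSERVER\INSTANCE" -Database "tempdb" -Query $query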

 

Although still new, SSDs are a hardware option that promises significant speed improvements in SQL Server run times. There is also potential for cost savings, because SSDs can deliver a number of I/O operations per second and a level of throughput that would otherwise require a very large number of hard disks.

 

Question: Is there still a requirement to separate user databases, system databases, tempdb, and logs onto separate disks, given that SSDs have a higher contention threshold than other disks?

 

Answer:

Not really; you don't have to host the data and log files on separate disks as long as there is only shallow read/write activity on the databases and tempdb. On the other hand, with heavier tempdb usage and read/write activity, the workload will start burning and churning the SSDs quickly (if best practices aren't followed), and in those scenarios keeping them on separate SSDs is ideal. That said, this remains theoretical until it is actually implemented and benchmarked in your environment.

 

The reality is that SSDs slow down with wear. I have seen a few customers whose 1 TB of SSD storage was so worn after 6 months that the SSDs were removed and they went back to 15K SAS drives in production. Hence it becomes very important to follow certain best practices to stop the SSDs from wearing out within a few months. You will find more details on how to reduce SSD wear over time in one of the videos I have used in the past, which should help here:

 

How to Optimally Use SQL Server with SSD’s Without Burning Them Out

http://technet.microsoft.com/en-us/video/Video/hh771099


Question: The SQL database server will be hosted on a SQL Server 2012 cluster. How do SSDs work here? Are they supported?

 

Answer: SSDs inside SANs and external rack-mounted SSD products can be used in clusters. However, the Fusion-io board is internal to a single computer. Standard clustered configurations require that all databases, including tempdb, be on storage that can be shared between members of the cluster, so the Fusion-io board doesn't qualify. Starting with SQL Server 2012, tempdb can be hosted on a local hard drive or SSD in a clustered environment; in other words, tempdb no longer has to be on shared storage, so a Fusion-io board could be used for it.
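As a rough sketch of how relocating tempdb to a local SSD looks (the drive letter and folder are placeholders; the same local path must exist on every node, and the change only takes effect after the SQL Server service restarts):

Import-Module SQLPS -DisableNameChecking

#Point tempdb at a local SSD volume (supported for failover cluster instances from SQL Server 2012)
$query = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\TempDB\templog.ldf');
"@

Invoke-Sqlcmd -ServerInstance "YOURSQLFCI" -Query $query
#Restart the SQL Server service for the new tempdb location to take effect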

 

NOTE: I will edit this post further and add more details based on my findings and experience with SSDs and SQL Server.

 

Like this blog? Leave a comment!!

 
