Saturday, May 12, 2012

SQL Server Memory Settings

It's hard to get a fix on what the recommended memory settings for SQL Server are. It seems to be one of those questions where if you ask ten DBAs what settings they should use you will get ten different answers. My take on things is pretty much in line with the suggestions from Brent Ozar and Glenn Berry - I've been setting up SQL Servers for years using similar settings to their suggestions without any real memory issues. The only caveat I have is that as good as their suggestions are, each case is unique and the memory settings need to be monitored and adjusted accordingly. If you've got no idea what the memory settings should be then this is a good place to start, followed up by some highly recommended reading at the links below.

Before going any further it's good to keep in mind what the SQL Server memory settings actually do. They control the target memory usage of the SQL Server Memory Manager. That's important because these aren't hard and fast values - SQL Server will only grab the minimum memory level if it has cause to, and if it doesn't it may never use the minimum amount of RAM allocated to it. The same goes for the maximum level - in fact, if there is memory pressure from the operating system, SQL Server may not be able to grab all of the memory you specify. Remember - these are target values, not hard and fast memory allocations.

The second point to remember is that this memory is for the SQL Server Memory Manager. The SQL Server Memory Manager consists of pools of memory allocated to different things, like caching data or caching query plans. You can't control what happens internally, only the total target memory of this pool. Other processes, such as SQL CLR, may run outside of this memory pool. In short, it's best to know what's happening on your system and what type of workloads are running on the server, and to keep an eye on the system resources with any monitoring tools you may have. For more detail on the SQL Server Memory Manager check out the MSDN reference linked at the bottom of this post.
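To get a feel for how that pool is divided up you can query the memory clerk DMV. This is just a sketch - the exact column names vary between SQL Server versions (newer versions use pages_kb; 2008 R2 and earlier split it into single_pages_kb and multi_pages_kb):

```
-- Top memory consumers inside the SQL Server Memory Manager
-- (column names assume a newer version; adjust for 2008 R2 and earlier)
SELECT TOP 10
    type,
    SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(pages_kb) DESC;
```

You'll typically see the buffer pool (data cache) and the plan cache near the top of the list.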

The Minimum Server Memory Setting
As the name implies, the minimum server memory setting is used to control the minimum amount of memory SQL Server will try to reserve for its data cache. SQL Server isn't guaranteed to use all of this; it only uses this amount of memory if it has cause to. In practical terms this means that when you start SQL Server, don't expect this memory to be immediately grabbed - SQL Server will fill this memory with data cache as it accesses pages in the databases.

The default setting for this is zero, and that's a bad number. If the server comes under memory pressure SQL Server will try to release memory back to the operating system. If the minimum memory setting is zero then SQL Server is free to release all of its data cache back to the operating system. At this point the SQL Server will respond very slowly - it will need to fetch all data for incoming queries from disk and read it back into memory, and reading from disk is thousands of times slower than reading from memory, so this will slow your server down massively.

The golden rule here is to pick a number that is the minimum amount of memory you're comfortable with SQL Server running with. One of the rules of thumb from Brent Ozar's site is to set the minimum memory to half of the physical memory installed on the server. It's a reasonable rule to follow, however I'd strongly suggest that on any critical system you try this out in a non-production environment first to see how your workload reacts.
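As a sketch, applying that rule of thumb on a 64GB server would look something like this (the value is in MB, and 'show advanced options' needs to be on for the setting to be visible):

```
-- Example only: set minimum server memory to 32GB (half of a 64GB server)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 32768;
RECONFIGURE;
```

The change takes effect without a restart, but remember it's a target - SQL Server still only fills the memory as it reads pages from disk.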

The Maximum Server Memory Setting
The value of the maximum server memory setting is one of those annoying 'it depends' answers. The basic principle is to give SQL Server as much memory as you can whilst still leaving enough memory for the operating system to do what it has to do. The operating system will need memory for its basic functionality, to cache files that are being copied, for other applications running on the server (eg. antivirus), and for SQL CLR code. That's why this setting is an 'it depends' answer - the more you run outside of SQL Server, even large file copies, the more memory the operating system will require to do what it has to do.

Glenn Berry has an excellent article at the link below which gives some guidelines on suggested maximum memory settings. 

In any case, whatever setting you start with, you should carefully monitor the free memory on your system. It's a balancing act, but you'll want to leave some memory free. If you find that file copies or the operating system run slowly, you may want to look at decreasing the maximum server memory on SQL Server.
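As a starting point (to be monitored and adjusted, as above), setting the maximum on a 64GB server might look like this - 59392 MB caps SQL Server at 58GB, leaving roughly 10% free for the operating system:

```
-- Example only: cap SQL Server at 58GB on a 64GB server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 59392;
RECONFIGURE;
```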

Lock Pages In Memory
Locking pages in memory is a setting which affects the behaviour of the server when it comes under memory pressure. When the setting is enabled and the server comes under memory pressure SQL Server will not release the memory it is using for caching back to the operating system.

This may sound like a good idea, however there's mixed advice out there on whether to use the Lock Pages in Memory setting or not. The arguments against using this setting centre around the basic idea that Windows knows best how to manage the memory - by locking the pages in memory, the operating system cannot respond to memory pressure, which in turn causes other problems on the server. The counter argument revolves around locking the memory assigned to SQL Server for SQL Server, and configuring the min and max memory settings in such a way that there's enough memory left for the operating system to do what it has to do.

Personally I've been locking pages in memory and it's worked well for me. I started doing this long before the anti lock pages in memory crowd gathered momentum. In fact, I remember doing it because one of the first SQL 2005 boxes I deployed was dumping its buffer cache when the server came under pressure. Luckily it was a development box, but it still caused enough problems that the developers were practically unable to work. Due to the changing nature of infrastructure and the wide use of virtual machines I'll be reviewing this setting on my servers in the near future.

MSDN Memory Settings Reference

Friday, May 11, 2012

Database Ownership in SQL Server

Database ownership is one of the most important and overlooked configuration items when creating or restoring databases. The account that owns the database is mapped to the dbo user in that database giving the user full permissions to the database - this includes permission to drop the database! 

The reason it gets overlooked is that once a database is set up we often don't review the database settings. When we provide access to the database we do it via the logins or security pages. Also, the creator of the database is set as the owner by default, and the same applies when restoring databases. Other problems can occur if the original creator of the database leaves the organisation - they may still have full permissions to the database long after they've left.

It's best to create a standard for your databases, including setting a standard database owner. Typically I set the database owner to the 'sa' account. To change the owner of a database run the TSQL below (changing 'username' to the desired owner of the database):

EXEC sp_changedbowner 'username'

As an additional best practice it's a good idea to make sure the owner of your model database is set to 'sa'. This will ensure that all databases created (but not restored) have their owner set to 'sa'.
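For what it's worth, sp_changedbowner is flagged for deprecation in newer versions, and the same change can be made with ALTER AUTHORIZATION. A quick sketch for auditing and resetting owners (the database name is just an example):

```
-- List the current owner of every database
SELECT name, SUSER_SNAME(owner_sid) AS owner
FROM sys.databases;

-- Set the owner of a user database (and the model database) to sa
ALTER AUTHORIZATION ON DATABASE::MyDatabase TO sa;
ALTER AUTHORIZATION ON DATABASE::model TO sa;
```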

Monday, May 7, 2012

SQL Server Memory Configuration Guidelines

Note: These guidelines are general in nature and are designed for a single instance of SQL Server.

The minimum SQL Server memory should be changed from the default setting of zero. The reason for this is that when the operating system comes under memory pressure it will tell the applications running on the server to return memory to the operating system. SQL Server's response is to release memory down to the minimum memory setting. The recommendation is to set the minimum memory to 50% of the server's physical memory so that SQL Server can co-operate with the operating system but not sacrifice all of its memory.

The maximum SQL Server memory should be changed from the incredibly high number provided to a smaller number. A good value for the maximum memory setting is to either leave 2GB or 10% of the server memory free for the operating system and other applications, whichever is greater.
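The 2GB-or-10% rule is easy to script. Here's a small shell sketch (the function name is mine) that takes physical memory in MB and prints a starting max server memory value:

```shell
# suggest_max_mb: given physical RAM in MB, reserve the greater of
# 2048 MB or 10% for the OS and print the remainder for SQL Server
suggest_max_mb() {
    phys_mb=$1
    reserve_mb=$(( phys_mb / 10 ))
    if [ "$reserve_mb" -lt 2048 ]; then
        reserve_mb=2048
    fi
    echo $(( phys_mb - reserve_mb ))
}

suggest_max_mb 65536    # 64 GB server - prints 58983
suggest_max_mb 8192     # 8 GB server  - prints 6144
```

On the smaller server the 10% figure (819 MB) is under the 2GB floor, so the full 2048 MB is reserved instead.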

It is also recommended not to use the "Lock Pages in Memory" feature as this "may" have a negative impact on server performance. The rationale behind this is that by locking pages in memory we are really just locking in buffer pages (data cache). This can cause memory pressure if other types of SQL cache, or processes that run outside of SQL Server, need to allocate more memory yet can't grab it because it's reserved for SQL Server.

Configure SQL Server Reporting Services 2008 R2 To Use SSL

A very brief outline of how to set up SSL access in SQL Server Reporting Services:

First install the SSL Certificate in IIS or Server Certificates:
  • Open MMC
  • Go to File --> Add/Remove Snap-in
  • Select Certificates
  • Import Certificate
Use Reporting Services Configuration Manager to set up SSL on port 443 in SSRS.

Next edit RSReportServer.config and add the new report server URL 
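I don't have the original snippet handy, but the element involved is the UrlRoot value in RSReportServer.config - something along these lines, with the hostname being whatever your SSL certificate was issued to (the address below is just a placeholder):

```
<UrlRoot>https://reports.example.com/ReportServer</UrlRoot>
```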



Restart the reporting service for the settings to take effect.

Note: The report server has to be accessed using the path configured above. This must be a valid resolvable path and must match the address the SSL certificate was issued for. To test that the procedure has worked, edit the hosts file and try connecting with the URL above.

Ubuntu - Setup Group Share on Samba

The purpose of this procedure is to set up access to an FTP site using a SAMBA share. All users will access the same share.
This procedure has been tested against Ubuntu 10.04 LTS
There are a few steps that need to have been done before running this procedure. We've already configured the FTP server (see the SFTP server procedure below), which created the following:
  • A folder called /srv
  • A folder called /srv/ftp
  • A folder called /srv/ftp/shared
  • A linux group called ftpshared
  • The folder /srv/ftp/shared has been given root:ftpshared ownership

First we start by installing the samba packages:

sudo apt-get install samba
sudo service smbd stop

Before we start modifying the Samba configuration we will backup the existing samba configs:

sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.bak

Set up a new blank smb.conf file:

sudo nano /etc/samba/smb.conf

Now paste the following into the smb.conf file. Note the [shared] stanza at the bottom, which publishes the /srv/ftp/shared folder created earlier (a create mode of 0660 is used so that files created by one user are readable and writable by the rest of the ftpshared group):
    [global]
    ; General server settings
    netbios name = server_name
    server string =
    workgroup = WORKGROUP
    announce version = 5.0

    passdb backend = tdbsam
    security = user
    null passwords = true
    username map = /etc/samba/smbusers
    name resolve order = hosts wins bcast

    wins support = yes

    printing = CUPS
    printcap name = CUPS

    syslog = 1
    syslog only = yes

    [shared]
    path = /srv/ftp/shared
    valid users = @ftpshared
    create mode = 0660
    directory mode = 0775
    read only = no
    force group = ftpshared

Save the file using Ctrl-O, Ctrl-X. Now restart the samba service:

sudo service smbd start

Assuming the linux users already exist on the system, they only need to be added to the ftpshared group and given a Samba password to access the fileshare:

sudo adduser [user_name] ftpshared
sudo smbpasswd -L -a [user_name]

Ubuntu - Setup a SFTP Server

This procedure will set up an SFTP server - that's file transfer over SSH - on Ubuntu Server. In this instance the FTP server will be for a group of people accessing the same FTP share.
This procedure has been tested against Ubuntu 10.04 LTS
First we need to install the SSH-Server packages on the Ubuntu server assuming they haven't been installed already. On the console type:

sudo apt-get install openssh-server

Next we will create a group which will have shared access to the FTP Server mount point:

sudo groupadd ftpshared

Next we create the FTP folder on the filesystem. As this folder will contain data that may potentially grow to fill the disk, it is recommended to set it up on a separate partition. Another important point to note is that SFTP's chroot relies on the top level FTP mount being owned by the root user and root group. If this is not the case you will get errors when connecting to the FTP server - in my case, authentication and connection errors.

sudo mkdir /srv
sudo mkdir /srv/ftp
sudo mkdir /srv/ftp/shared

Next change the ownership on the shared folder so that members of ftpshared can access it.
sudo chown root:ftpshared /srv/ftp/shared
sudo chmod 775 /srv/ftp/shared

Now that the filesystem is setup we will update the SSH daemon configuration, but first, backup the existing configuration.

sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

sudo nano /etc/ssh/sshd_config

Now comment out the line "Subsystem sftp /usr/lib/openssh/sftp-server". Immediately below that line insert "Subsystem sftp internal-sftp".
Now add the following lines to the bottom of /etc/ssh/sshd_config:

Match group ftpshared
ChrootDirectory /srv/ftp/shared
X11Forwarding no
AllowTCPForwarding no
ForceCommand internal-sftp

To make the config take effect we'll need to restart the SSH daemon:

sudo service ssh restart

Finally we create the user accounts on the Ubuntu server and give them access to the FTP server:

sudo useradd [user_name] -d /srv/ftp/shared -s /bin/false
sudo adduser [user_name] ftpshared
sudo passwd [user_name]

As a final step, open up your FTP client of choice and test the FTP logon process. Also confirm that the user cannot navigate outside of the /srv/ftp/shared area.

Ports Used By Active Directory

Windows Active Directory Domain Controllers use the following ports:

Description                                                           Port Numbers  Protocol
Kerberos                                                              88            TCP + UDP
TCP Dynamic - User and Computer Authentication, Group Policy          49152-65535   TCP
Windows Time                                                          123           UDP
UDP Dynamic - Group Policy (DCOM, RPC, EPM)                           49152-65535   UDP
Group Policy, Netlogon, NetBIOS Datagram Service                      138           UDP
User and Computer Authentication - NetLogon, NetBIOS Name Resolution  137           UDP
User and Computer Authentication - NetBIOS Session Service, NetLogon  139           TCP
User and Computer Authentication, Kerberos change/set password        464           TCP + UDP

Ubuntu - Add Another Disk To The Server

The following procedure assumes that the additional disk has been physically added to the server and needs to be added to the operating system.

This procedure has been tested against Ubuntu 10.04 LTS

First we confirm the details of the second disk by running the lshw command. If all goes well we should see both the original disk and the new disk appear here. Take note of the logical name of the disk - in this case it will be /dev/sdb.

sudo lshw -C disk

Now open the fdisk partitioning tool to create partitions on this disk.

sudo fdisk /dev/sdb

The fdisk tool will display a menu. Run through the following steps to create a new partition that takes up the entire disk:
  1. Type "n" and "enter" to create a new partition
  2. Type "p" to create a primary partition
  3. Type "1" to make this partition number 1
  4. Press enter to accept the default start position of the partition
  5. Press enter to accept the default end position of the partition
  6. Type "w" to write the partition table and exit
Now that we have created a partition on the disk, we need to create the filesystem. To do this run the following command to create an ext4 filesystem:

sudo mkfs -t ext4 /dev/sdb1

Next we'll create a mount point. This is the point the new partition will be attached to on the host filesystem.

sudo mkdir /srv

Next we need to tell the operating system to mount the partition automatically on system startup. To do this we need to edit the /etc/fstab file which tells the operating system which filesystems to load on startup.

sudo nano /etc/fstab

Add the following line to /etc/fstab

/dev/sdb1 /srv ext4 defaults 0 2
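For reference, the six fields in that line are the device, the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass order:

```
# <device>   <mount>  <type>  <options>  <dump>  <pass>
/dev/sdb1    /srv     ext4    defaults   0       2
```

A pass value of 2 means the filesystem is checked at boot after the root filesystem (which uses 1).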

Save the file and exit nano (using Ctrl-X). Now tell Ubuntu to mount the filesystems in /etc/fstab

sudo mount -a

The partition should exist on the mount point now. As a final check, reboot the server (if possible) and check that the second partition appears at the mount point.