Traditionally, SQL Server installation and configuration is a somewhat lengthy process. While not particularly difficult, it is time-consuming. Even once installation is complete, a DBA may want to install some stored procedures (along with the databases they require), and developers may need to create databases, schemas, and even data. Fortunately, setting up a SQL Server container is very easy: Microsoft has given us a great base image to work with, and customizing it is also quite simple. With a few extra steps, you'll have a standardized image that can be deployed in seconds. This blog post provides a quick walk-through for setting up Microsoft's base container image, along with easy steps for customizing your own.
Microsoft’s Base SQL Server Image
With one simple command, your SQL Server container will be up and running in minutes, most of which is spent pulling the image. If the image is already local, your container will be up and running in seconds.
docker run --name sql -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStr0ngP@ssw0rd" -p 1433:1433 -d mcr.microsoft.com/mssql/server:latest
You can connect to this instance as you would any other, through SSMS or sqlcmd, and proceed with any additional configuration you need.
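For example, a quick connection check from the host with sqlcmd might look like the following sketch. The server address and sa password are placeholders; use whatever you passed to docker run.

```shell
#!/bin/sh
# Placeholder connection details; match whatever you passed to `docker run`.
SQL_HOST="localhost,1433"
SQL_USER="sa"
SQL_PASSWORD="YourStr0ngP@ssw0rd"

# Only attempt the connection if sqlcmd is installed on the host.
if command -v sqlcmd >/dev/null 2>&1; then
    sqlcmd -S "$SQL_HOST" -U "$SQL_USER" -P "$SQL_PASSWORD" \
        -Q "SELECT @@VERSION" || echo "connection failed"
else
    echo "sqlcmd is not installed on this host"
fi
```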
Taking it further with a custom image
But what if you need to deploy many instances in a standard fashion? Traditionally, you could build out a scripted installation and set up your groups, settings, and helper stored procedures. But what if other developers or admins want to deploy their own SQL containers without your assistance? You can standardize these deployments by creating your own customized image. Thankfully, this is a really simple process: a Dockerfile lets you specify your base image, bring in any required scripts, and execute commands on container creation.
Create your own custom container images by building a Dockerfile. I’ve put together a very simple Dockerfile here that will customize our SQL Server deployment. Some concepts and nuances around building Dockerfiles—like layering and caching—are beyond the scope of this blog. But if you find yourself building Dockerfiles, you should absolutely look into them.
I’m using Visual Studio Code for this, but you don’t need anything more than a folder to contain your files, a terminal window to execute commands, and a text editor. We don’t need anything complex here.
I’ve created a folder and placed a file named ‘Dockerfile’ in that folder. It doesn’t even need a file extension. The contents of my Dockerfile are:
FROM mcr.microsoft.com/mssql/server:latest
RUN mkdir /dba
COPY . /dba
CMD /bin/bash /dba/start.sh
The Dockerfile performs the following tasks:
- The FROM statement specifies our base image: what we’re going to build on.
- The RUN statement executes a command in our container image. Here, we're simply creating our /dba directory. Because each RUN instruction adds a layer to the image, you can, and should, chain multiple commands in a single RUN.
- The COPY statement copies the contents of our build directory to our newly-created /dba directory in our container image.
- The CMD statement specifies a startup command for new containers built off this image.
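As a sketch of that RUN guidance, extending the Dockerfile above to set up a couple of directories in a single layer might look like this. The extra directory and permissions are purely illustrative, and on newer image versions that run as a non-root user you may also need a USER root instruction before the RUN.

```dockerfile
FROM mcr.microsoft.com/mssql/server:latest

# One RUN instruction, several chained commands, one image layer.
RUN mkdir /dba \
    && mkdir /dba/logs \
    && chmod 755 /dba /dba/logs

COPY . /dba
CMD /bin/bash /dba/start.sh
```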
Our CMD statement: scripts calling scripts calling scripts
As specified in our Dockerfile's CMD statement, we need a start.sh shell script to act as the start command for our containers; this file is kicked off whenever the container starts. Taking a look at Microsoft's GitHub repository for their Docker images, we can see that their CMD statement starts the SQL Server process:
# Run SQL Server process.
CMD [ "/opt/mssql/bin/sqlservr" ]
Since we are specifying our own CMD, we need to include this as well as anything else we want to do. Our start.sh script accomplishes two things: it starts the SQL Server process and kicks off any scripts we need executed. Here are the contents of start.sh:
/dba/setup.sh &
/opt/mssql/bin/sqlservr
Another script?! Yes, we want to keep these as simple as possible. This command runs every time the container starts, so separating the tasks keeps things clean, modular, and easy to troubleshoot. Our start.sh script runs the /dba/setup.sh script in the background, then starts the sqlservr process. CRITICAL: /opt/mssql/bin/sqlservr must be executed as the last step, because it runs indefinitely and the container stops as soon as its startup command ends. Try switching the two statements: the container will stop as soon as /dba/setup.sh completes.
Let’s look at the setup.sh script:
sleep 40s
for filename in /dba/*.sql; do
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'YourStr0ngP@ssw0rd' -i "$filename"
done
This script sleeps for 40 seconds, then loops through all of the SQL files in the /dba directory and executes them via sqlcmd. Because we're executing this script in the background, SQL Server can start while the script sleeps.
We need the sleep command because, if you've ever started a SQL Server instance while users are trying to connect, you know timeouts will occur. SQL Server accepts connections when it's good and ready! The sleep gives it time to get to that state.
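A fixed sleep works, but if you'd rather not guess at startup time, a polling loop is a common alternative. This is only a sketch: the default probe command assumes the sqlcmd path and placeholder sa password used elsewhere in this post, and you'd adjust both to your image.

```shell
#!/bin/sh
# Poll until SQL Server accepts a connection, instead of sleeping a
# fixed 40 seconds. Takes a retry count and an optional probe command.
wait_for_sql() {
    retries=${1:-30}
    probe=${2:-'/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P YourStr0ngP@ssw0rd -Q "SELECT 1"'}
    i=0
    while [ "$i" -lt "$retries" ]; do
        if eval "$probe" >/dev/null 2>&1; then
            echo "SQL Server is ready"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "SQL Server did not become ready in time"
    return 1
}
```

setup.sh could then call wait_for_sql before its sqlcmd loop in place of the sleep.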
After the script wakes up, it loops through the /dba directory and executes every .sql file it finds. Why not one statement and one SQL file? Because this way we can easily pull pieces out or add more as required, and troubleshoot individual files rather than one large one. For this example, I've included two very simple scripts (call them whatever you want). The first creates a new sysadmin:
create login jshurak with password = 'YourStr0ngP@ssw0rd'
exec sp_addsrvrolemember 'jshurak', 'sysadmin'
…and the second creates a database with a few tables:
Create database MyDB;
GO
Use MyDB
GO
Create table t1 ( c1 int )
Create table t2 ( c1 int )
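One nuance with the setup.sh loop above: the shell expands /dba/*.sql in alphabetical order, so if one script depends on another (the login before the database, say), numeric filename prefixes are a simple way to control ordering. A quick stand-in demonstration, using a temp directory in place of /dba:

```shell
#!/bin/sh
# The glob expands alphabetically, so numeric prefixes fix the order
# in which setup.sh would run the scripts. A temp dir stands in for /dba.
dir=$(mktemp -d)
touch "$dir/02_database.sql" "$dir/01_login.sql"

order=""
for filename in "$dir"/*.sql; do
    # append each basename, space-separated
    order="$order${order:+ }$(basename "$filename")"
done
echo "$order"   # prints: 01_login.sql 02_database.sql
rm -rf "$dir"
```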
Building and Running
Now that we have our Dockerfile, our start.sh and setup.sh scripts, and the SQL Server scripts they call, everything sits together in our build directory.
The next step is to build our image. Open a terminal window and navigate to our directory. I’m using PowerShell for this. Once there, simply execute:
docker build -t <image-name> .
The build output shows each step Docker takes to create the image, layer by layer.
Once our build is complete, we can simply create our container using the new image:
docker run --name sql -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStr0ngP@ssw0rd" -p 1433:1433 -d custom-sql-image
This successfully creates and starts the container. After 40 seconds, our setup.sh script wakes up and creates its objects in the instance: we now have a new sysadmin login and a database.
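To verify from the host rather than SSMS, something like the following docker exec should work. The container name, sqlcmd path, and placeholder password all come from earlier in this post, and the block falls back to a message when Docker or the container isn't available.

```shell
#!/bin/sh
# Check for the new database from the host. Falls back gracefully
# if Docker or the `sql` container isn't available.
if command -v docker >/dev/null 2>&1; then
    result=$(docker exec sql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa \
        -P 'YourStr0ngP@ssw0rd' \
        -Q "select name from sys.databases where name = 'MyDB'" 2>/dev/null \
        || echo "container 'sql' is not running")
else
    result="docker is not installed on this host"
fi
echo "$result"
```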
A few final notes
The CMD statement executes every time the container starts, not just the first time, so any object-creation scripts should first check whether the object already exists; otherwise they will generate SQL errors on restart. Alternatively, you can start your container once, specifying the SA password as normal, and then commit that container as a new image. Containers built from that committed image will ignore the SA_PASSWORD variable and use the password originally specified, so you should always change this password once the container is live in production.
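A hedged sketch of what such existence checks might look like for the two scripts above (with the same placeholder password caveat):

```sql
-- Guard creation so re-running these on every container start is harmless.
if not exists (select 1 from sys.server_principals where name = 'jshurak')
begin
    create login jshurak with password = 'YourStr0ngP@ssw0rd';
    exec sp_addsrvrolemember 'jshurak', 'sysadmin';
end

if db_id('MyDB') is null
    create database MyDB;
```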
The setup.sh script utilizes the sa user and password. This password is specified when the container is created. If there is a mismatch, the scripts will fail to log in.
Lastly, if your organization is looking to customize SQL Server containers using Docker, please reach out to us. We’d love to help you out.