DAC Basics

September 9, 2011

The services DACFx provides revolve around some key concepts and technologies. We’ll walk through these step by step so you can see how these basic concepts can be leveraged together to provide advanced services.

Basic Concept 1: The In-Memory Database Model

Central to every DACFx service is the need to build and traverse an in-memory model of a database. DACFx is a client-side library, so it communicates with SQL Server like any other SQL client does – over a standard TDS connection. Before DACFx can provide services over a database, it must deeply understand what a database actually is. DACFx has an internal in-memory model that covers all the different database objects, their properties, and their state. Database objects are any entity you can create in SQL Server with a “CREATE” T-SQL statement. Examples of objects include tables, views, functions, procedures, indexes, and constraints.

Another way to describe the model is to say that SQL Server has a structured physical model for a database. When you create an object in SQL Server, a physical record is written out on disk or disk space is allocated for it. DACFx has an in-memory model that maps to the SQL engine’s model.

The in-memory database model is vital to all DAC operations because it allows DACFx to provide client-side services without having to change or manipulate a real database.
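As a mental model, here is a toy sketch (in Python, not the real DACFx API) of what such an in-memory database model looks like conceptually: named objects with a type, a property bag, and references to the other objects they depend on.

```python
# Toy sketch of an in-memory database model (illustration only,
# not the actual DACFx object model).

class ModelObject:
    def __init__(self, obj_type, name, **properties):
        self.obj_type = obj_type      # e.g. "Table", "View", "Index"
        self.name = name
        self.properties = properties  # e.g. column definitions, options
        self.references = []          # dependencies on other model objects

class DatabaseModel:
    def __init__(self):
        self.objects = {}             # name -> ModelObject

    def add(self, obj):
        self.objects[obj.name] = obj
        return obj

model = DatabaseModel()
t1 = model.add(ModelObject("Table", "T1", columns=["Id INT", "Name NVARCHAR(50)"]))
v1 = model.add(ModelObject("View", "V1", definition="SELECT Id FROM T1"))
v1.references.append(t1)              # V1 depends on T1

print(sorted(model.objects))          # ['T1', 'V1']
```

The key point is that this model lives entirely on the client: DACFx can build it, inspect it, and compare it without touching a live server.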

Basic Concept 2: The File Format

DACFx has two associated file formats, the DACPAC and the BACPAC. Both file formats are based on the standard System.IO.Packaging (Open Packaging Conventions) format in .NET. Effectively, this means you can change the file extension of any DACPAC or BACPAC to “.zip” and unzip the contents.

If you crack open a DACPAC you will find some XML files which contain the full schema definition for a database. All database objects are represented through entities in XML. DACFx also stores associated properties about an entity as well as any relationships it has to other entities.

We do not store T-SQL inside the DACPAC! Only the serialized in-memory database model is stored, for simple retrieval later on. For example, DACFx can quickly read a DACPAC to create an in-memory model of the database defined in it. Similarly, DACFx can write out a new DACPAC to disk from an in-memory database model for later use or reference.
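Because the package is a standard zip container, any zip library can inspect it. The sketch below builds a stand-in `example.dacpac` containing a dummy XML entry and then lists its contents; a real DACPAC would contain the serialized schema model XML (the entry name used here is illustrative).

```python
import zipfile

# A DACPAC is a standard package (zip) file, so we can create and
# inspect one with zipfile. The entry name and XML content below are
# stand-ins, not the exact layout of a real DACPAC.
with zipfile.ZipFile("example.dacpac", "w") as pkg:
    pkg.writestr("DacMetadata.xml", "<DacType><Name>MyApp</Name></DacType>")

with zipfile.ZipFile("example.dacpac") as pkg:
    print(pkg.namelist())    # ['DacMetadata.xml']
```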


You’ll see in subsequent sections how this simple but vital capability is used in providing a useful service.



September 9, 2011

Now that you know about the DACFx in-memory database model and its associated serialized representation, the DACPAC, it’s time to cover how to generate an in-memory model. There are three ways to create an in-memory model:

  1. De-serialize (load) a DACPAC into a model
  2. Build T-SQL into a model
  3. Extract a model from a live database

Number one was covered in the basics section, and number three will be covered in the next post, which leaves number two.

Database developers and administrators code in T-SQL, so naturally DACFx must provide a way to convert T-SQL into an in-memory model. DACFx provides what is referred to as the compilation unit: a container into which you can load arbitrary T-SQL scripts, along with some other information, and then compile (or build) an in-memory model from that jumble of T-SQL statements. Once the in-memory model has been created, it can be serialized to disk as a DACPAC. The database projects in Visual Studio 2010 and the upcoming SQL Server Development Tools Code Name “Juneau” follow this workflow exactly: when building a T-SQL project, they emit a DACPAC.

Here’s a quick example:

Video Coming Soon!
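To make the idea concrete, here is a toy sketch (not the real DACFx CompilationUnit API) of the compilation step: gather arbitrary T-SQL scripts into one container and build an in-memory model from the CREATE statements they contain.

```python
import re

# Toy "compilation": scan T-SQL scripts for CREATE statements and
# record each object's name and type in a model. Real DACFx parses
# the full T-SQL grammar; this regex only illustrates the idea.
CREATE_RE = re.compile(r"CREATE\s+(TABLE|VIEW|PROCEDURE)\s+(\w+)", re.IGNORECASE)

def compile_model(scripts):
    model = {}
    for script in scripts:
        for obj_type, name in CREATE_RE.findall(script):
            model[name] = obj_type.upper()
    return model

scripts = [
    "CREATE TABLE T1 (Id INT PRIMARY KEY)",
    "CREATE VIEW V1 AS SELECT Id FROM T1",
]
print(compile_model(scripts))    # {'T1': 'TABLE', 'V1': 'VIEW'}
```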

Declarative programming is pretty straightforward: you declare the existence of an object, and nothing else. For example, you can declare that table T1 exists with some set of columns, but you cannot alter table T2 or alter table T3. By and large, ALTER syntax is not supported by the DACFx compilation service, and there’s a good reason why: DACFx is not a SQL engine! It does not maintain database state or allow you to perform CRUD operations on an instance of a database; it merely creates an in-memory model of a database from objects defined by CREATE statements.

Effectively, this cuts down on the programmatic surface area the developer needs to worry about when authoring a database or making a change to an existing one.

For example:

Video Coming Soon!
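The declarative rule can be sketched as a simple check (again a toy illustration, not DACFx's actual validation): CREATE statements are accepted, while ALTER statements are rejected, since the model only records what objects exist, not how to mutate existing state.

```python
# Toy sketch of declarative-only validation: the model describes
# what exists, so statements that mutate existing state are rejected.
def check_declarative(statements):
    for stmt in statements:
        if stmt.lstrip().upper().startswith("ALTER"):
            raise ValueError("ALTER is not supported: " + stmt)

check_declarative(["CREATE TABLE T1 (Id INT)"])       # accepted
try:
    check_declarative(["ALTER TABLE T2 ADD C2 INT"])  # rejected
except ValueError as e:
    print(e)
```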

The defined database must also be consistent. For example, I can go to a database, create a table T1, then create a view V1 that refers to T1, and that’s perfectly legitimate. Of course, the view serves no real purpose, but it’s there nonetheless. If I were to create the same view V1 referring to a nonexistent table, the CREATE statement would fail because the dependent object doesn’t exist. Similarly, when provided with a set of T-SQL scripts, DACFx compilation validates all of your database object dependencies, and the compilation will fail if it discovers a discrepancy such as the one above.
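A minimal sketch of that dependency validation (illustrative only): every object a view depends on must itself be defined in the model, otherwise compilation fails.

```python
# Toy consistency check: report any view whose referenced table is
# not defined in the model, mimicking a DACFx compilation failure.
def validate_dependencies(tables, views):
    errors = []
    for view, referenced in views.items():
        for table in referenced:
            if table not in tables:
                errors.append(f"{view} refers to undefined object {table}")
    return errors

tables = {"T1"}
views = {"V1": ["T1"], "V2": ["Missing"]}
print(validate_dependencies(tables, views))
# ['V2 refers to undefined object Missing']
```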

DACFx also verifies that your database is contained. Containment in the database context means that the objects defined in the database and any other objects they might depend on are all contained inside the scope of the same database. Containment is an important concept that is best exemplified in SQL Azure.

SQL Azure is a fully contained database model by necessity. All databases in SQL Azure have to be contained because SQL Azure is a multi-tenant environment. In order to guarantee a base level of performance, availability, and security, SQL Azure databases have to be scoped to only allow users the ability to consume database-level assets. In a nutshell, that’s why many features available today in the SQL Server box product are not currently available in SQL Azure. Over time, these features will be added into SQL Azure with a contained implementation.

The enforcement of database containment by DACFx does present a bit of an issue for on-premises SQL Server customers who want to leverage any of the uncontained features in SQL Server. As the contained database model expands, DAC will expand in lock-step, so DAC will become more usable for customers with uncontained feature requirements. For SQL Azure customers (or SQL Server customers without the need for uncontained features), the contained model fits the available features and objects, so there aren’t any limitations.

DACFx also allows the developer to set the version number, application name, or description, either in the project or programmatically through the CompilationUnit’s associated properties. You can see that here in Visual Studio 2010:

Video Coming Soon!

We’ll see how these DAC facets surface later on.


September 9, 2011

We’ve talked about creating an in-memory model of a database from a DACPAC file and straight from T-SQL, but what about from a database itself? Enter the DACFx Extract service.

The Extract service connects to a database, reads all of its objects and their properties, and then creates an in-memory model of the database. Similar to how the Build service validates the defined objects, the Extract service also checks for consistency and containment. The validation done here will also fail if you have a view that refers to a nonexistent table, same as with T-SQL! Additionally, unsupported or uncontained objects are blocked because these objects are not yet allowed in SQL Azure or the DAC in-memory model. Finally, once the in-memory model is complete and validated, a DACPAC is written to disk. Let’s take a look at how to do this with SQL Server Management Studio:

For a full list of supported objects, please see the documentation listing the set of supported objects for DACFx as of SQL Server Code Name “Denali.” Note that every release of DACFx adds object support, so check back if there’s an object that’s an issue for you.

Final note, the Extract service will extract your schema from:

  • SQL Server 2000
  • SQL Server 2005
  • SQL Server 2008
  • SQL Server 2008 R2
  • SQL Server Code Name “Denali”
  • SQL Azure


September 9, 2011

So far we’ve covered some different methods of creating a DACPAC. Once you have a compiled or extracted DACPAC, the first interesting service you can use is the deploy service. The deploy service will create a database with the specified settings and then create all of the objects defined in the DACPAC inside the new database. Once finished, the database is ‘registered’ (see the next post) as a Data-tier Application. Think of the DACPAC as a database definition MSI and the deploy service as a DACPAC installer.


Here’s a quick walkthrough of deploying a DACPAC to a new database:

For more information on how deployment is actually accomplished, see the Upgrade post.


Final note, the deploy service will create your schema in:

  • SQL Server 2005 SP4 and above
  • SQL Server 2008 SP2 and above
  • SQL Server 2008 R2
  • SQL Server Code Name “Denali”
  • SQL Azure


September 9, 2011

In order to provide some of the nice features the DACFx services have, DACFx needs to track some metadata about each application instance (database) that is being managed through DACFx services. DACFx tracks this metadata by ‘registering’ a database. Registration involves creating an entry in instance-level system tables for the database at hand that includes:

  1. Database name
  2. DAC application name
  3. Application description
  4. DAC version number
  5. The current schema definition stored as the same XML found in the DACPAC

Let’s say you have an existing database you would like to start managing using DAC. You can start by registering this database. Here’s a walkthrough:

Under the covers, registration uses the Extract service already discussed and then stores the extracted information (including the schema) inside the DAC registration. For those interested, this information is surfaced through the dbo.sysdac_instances view in msdb.



Once a database is registered, you cannot re-register it. You can however delete the registration and then register the database again.

Pre-Upgrade Validation

September 9, 2011

Once a database is deployed or registered, you can start applying changes to the database using DAC upgrades. Why would you use DAC upgrades over standard change scripts, you ask?

  1. You don’t need to maintain script libraries, just DACPACs
  2. DACPACs are versioned so you can always tell what their relative ordering is
  3. When upgrading, the DAC registration is updated as well, allowing you to audit the database at any time to see if anything has changed since the last official DACPAC deployment or upgrade.

Before you can actually perform a DAC upgrade, DACFx performs a few checks on the target database to help inform you prior to the upgrade. DACFx first checks to see if anything in the database has changed since the last DACPAC upgrade (or initial deployment) and presents these as database changes.

For an administrator or a developer who is trying to upgrade the database, knowing what’s changed since the last deployment is a nice piece of information because the developer may not have taken those changes into account and the changes could be lost as a part of the upgrade process. Let’s see what that looks like:

The second item that’s checked is whether the upgrade could potentially cause data loss. Remember that a DACPAC is a complete definition of a database, so if something is not defined in the DACPAC but is defined in the database, we have to remove that item from the database in order to ensure the database mirrors the DACPAC.

As an example, let’s take a database that has two tables: T1 and T2. Then suppose we have a DACPAC that has only T1 defined inside it. If you were to upgrade the database, the upgrade script would drop T2 because T2 is not defined in the DACPAC.

We also warn on potential data loss when data types explicitly change, because data could become truncated or otherwise lost. For example, if you were to change a column from BIGINT to INT you could potentially lose data. Let’s see what a data loss warning looks like:
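Both data-loss checks can be sketched together (a toy illustration, not DACFx's actual comparison logic): warn when a table in the database is absent from the DACPAC, since it would be dropped, and warn when a column's type narrows, such as BIGINT to INT.

```python
# Toy pre-upgrade data-loss check over simple schema dicts of
# table -> {column: type}. The NARROWING set lists a few sample
# type changes that can truncate data; real DACFx knows them all.
NARROWING = {("BIGINT", "INT"), ("INT", "SMALLINT")}

def data_loss_warnings(db_schema, dacpac_schema):
    warnings = []
    for table, columns in db_schema.items():
        if table not in dacpac_schema:
            warnings.append(f"Table {table} will be dropped")
            continue
        for col, old_type in columns.items():
            new_type = dacpac_schema[table].get(col)
            if (old_type, new_type) in NARROWING:
                warnings.append(f"{table}.{col}: {old_type} -> {new_type} may lose data")
    return warnings

db     = {"T1": {"Id": "BIGINT"}, "T2": {"Id": "INT"}}
dacpac = {"T1": {"Id": "INT"}}
print(data_loss_warnings(db, dacpac))
# ['T1.Id: BIGINT -> INT may lose data', 'Table T2 will be dropped']
```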

Presuming there are no issues with database changes and data loss then you can proceed to perform the actual upgrade which is the subject of the next post.


September 9, 2011

Now it’s time for some real meat and potatoes: the Upgrade service! DAC upgrades are done in-place on an existing database. To get started you must have:

  1. A database you want to upgrade (must be registered already)
  2. A DACPAC you want to upgrade with


At a high level, the upgrade interface assumes the following intent: you want to make the target database look exactly like the schema defined in the DACPAC. Remember that a DACPAC contains only schema definition, so how does it go about actually upgrading a database?


Step one: DACFx loads the schema definition from the DACPAC into an in-memory schema model we’ll creatively refer to as SchemaModelV2, because it’s the newer version of the schema. Step two: DACFx uses the Extract service to create a second in-memory schema model from the database you want to upgrade; we’ll refer to this one as SchemaModelV1, because it’s the current schema.


DACFx now has two models, so the next step is to compare them in order to determine what the differences are. An example difference: V1 has a table T1 with two columns, whereas V2 has the same table T1 with those two columns plus a third one, in which case the difference between the two models is a new column for table T1. Once all the differences are identified, DACFx creates a T-SQL script which, when applied to the target database, will ensure that its schema is no longer different from the V2 schema in the DACPAC.


The scripts created by DACFx are sensitive to dependencies and ordering, and DACFx does its best to produce the most efficient script possible. Let’s take a look at a DACFx upgrade in action:


Once the upgrade has completed successfully, the registration for the database is updated to reflect the new version number and description from the DACPAC you just upgraded with. To summarize what happens during an upgrade:

  1. Validation (covered in the previous post)
  2. Compare DACPAC to database
  3. Script differences
  4. Apply script to database
  5. Update registration
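The compare-and-script steps above can be sketched end to end on toy schema models (dicts of table to {column: type}); real DACFx handles every object type, plus dependency ordering, which this illustration ignores.

```python
# Toy model comparison and script generation: diff two schema models
# and emit T-SQL that makes the V1 schema match V2.
def script_upgrade(v1, v2):
    script = []
    for table in v1:
        if table not in v2:
            script.append(f"DROP TABLE {table};")
    for table, cols in v2.items():
        if table not in v1:
            col_defs = ", ".join(f"{c} {t}" for c, t in cols.items())
            script.append(f"CREATE TABLE {table} ({col_defs});")
        else:
            for col, col_type in cols.items():
                if col not in v1[table]:
                    script.append(f"ALTER TABLE {table} ADD {col} {col_type};")
    return script

v1 = {"T1": {"C1": "INT", "C2": "INT"}}
v2 = {"T1": {"C1": "INT", "C2": "INT", "C3": "INT"}}
print(script_upgrade(v1, v2))    # ['ALTER TABLE T1 ADD C3 INT;']
```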


So that’s the simple upgrade story. The next post covers some more complex upgrade topics.