You would never develop code without version control, so why develop your database without it? Liquibase is a database-independent library for tracking, managing and applying database changes.
Liquibase works with XML changelog files that describe which changes should be applied to the database (create, drop, insert, delete, update, constraint management...). Each change also carries a revision id, so when a migration from an old database to a new one is performed, Liquibase knows whether the change has already been applied. If it has not, the database is updated.
Internally, Liquibase creates a DATABASECHANGELOG table where it records which changes have already been executed. If a change in a changelog file is not present in the DATABASECHANGELOG table, Liquibase executes it and records it so that it is skipped on subsequent runs.
Now that you know what Liquibase does, what I am going to explain is how I use it in a real use case.
If your company develops server-side software, Liquibase is important but you can survive without it, because you only update one computer and, in theory, updates are applied successively; you would not jump on a production server from version 1.0.0 to 1.0.2. Of course this only applies if you don't distribute your server code (your clients connect directly to your server); if you do distribute it (like Sonar, for example), you have the same database-update problem that standalone applications have.
I develop standalone software: every client has its own installation (they do not connect to a central server), and we have hundreds of clients, each one running a different version of our product. Although we are constantly developing upgrades, not all clients update at the same time, so a typical scenario is that some clients have version 1.0.1, others 1.0.3, and the latest version is 1.1.0. When a client installs a new version, our product itself installs correctly; the problem is that usually our database changes too, and our clients want backward compatibility with previous versions and, of course, they don't want to lose database records.
Moreover, our changes can imply adding new tables or adding/removing "static" information in the database. Before we used Liquibase, we created SQL scripts for jumping between versions, and then, depending on the client's version, we sent the required SQL scripts so they could update the software and the database. As you can imagine this was chaotic: we had to create SQL scripts for every version, and those scripts had to be sent to the client with the human risk of sending the wrong ones or not sending all the required ones (think about jumping from 1.0.0 to 2.0.0, where one script per intermediate version must be executed), so if a failure occurred, the application stopped working and the database was left inconsistent.
As you can imagine, Liquibase solves this problem. First of all it uses XML, which is more readable than plain SQL; second, it allows us to define a single file for migrating from one version to the next, knowing exactly which changes were applied in each version; and last, it is automatic: once the changelog files are created, Liquibase knows which database version is currently installed and which patches it should apply.
Let me show it with a simple example:
We have one Liquibase master file which includes all the changelog files that should be applied to update the database to the latest version (we add a new include line to this master file for each version we release):
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">
<include file="META-INF/database-change/db.changelog-1.1.xml"/>
<include file="META-INF/database-change/db.changelog-1.2.xml"/>
</databaseChangeLog>
You can see that two files are included: the first one (META-INF/database-change/db.changelog-1.1.xml) contains the changes for version 1.1, and the second one contains the changes for migrating from 1.1 to 1.2. When Liquibase reads the master file, it starts executing the included files sequentially. If a file has already been applied, Liquibase skips it and tries the next one.
First changelog file:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">
<changeSet id="1" author="bob">
<createTable tableName="department">
<column name="id" type="int">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)">
<constraints nullable="false"/>
</column>
<column name="active" type="boolean" defaultValueBoolean="true"/>
</createTable>
</changeSet>
</databaseChangeLog>
Second changelog file:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">
<changeSet id="2" author="bob">
<createTable tableName="room">
<column name="id" type="int">
<constraints primaryKey="true" nullable="false"/>
</column>
<column name="name" type="varchar(50)">
<constraints nullable="false"/>
</column>
<column name="active" type="boolean" defaultValueBoolean="true"/>
</createTable>
<insert tableName="department">
<column name="id" valueNumeric="2"/>
<column name="name" value="RD"/>
<column name="active" valueBoolean="true"/>
</insert>
</changeSet>
</databaseChangeLog>
In version 1.1 we create a table named department, and in version 1.2 we also add a table named room and insert a new department.
I have used this example against an HSQL database, and this is the resulting script:
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
CREATE MEMORY TABLE DATABASECHANGELOGLOCK(ID INTEGER NOT NULL,LOCKED BOOLEAN NOT NULL,LOCKGRANTED TIMESTAMP,LOCKEDBY VARCHAR(255),CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY(ID))
CREATE MEMORY TABLE DATABASECHANGELOG(ID VARCHAR(63) NOT NULL,AUTHOR VARCHAR(63) NOT NULL,FILENAME VARCHAR(200) NOT NULL,DATEEXECUTED TIMESTAMP NOT NULL,ORDEREXECUTED INTEGER NOT NULL,EXECTYPE VARCHAR(10) NOT NULL,MD5SUM VARCHAR(35),DESCRIPTION VARCHAR(255),COMMENTS VARCHAR(255),TAG VARCHAR(255),LIQUIBASE VARCHAR(20),CONSTRAINT PK_DATABASECHANGELOG PRIMARY KEY(ID,AUTHOR,FILENAME))
CREATE MEMORY TABLE DEPARTMENT(ID INTEGER NOT NULL,NAME VARCHAR(50) NOT NULL,ACTIVE BOOLEAN DEFAULT TRUE,CONSTRAINT PK_DEPARTMENT PRIMARY KEY(ID))
CREATE MEMORY TABLE ROOM(ID INTEGER NOT NULL,NAME VARCHAR(50) NOT NULL,ACTIVE BOOLEAN DEFAULT TRUE,CONSTRAINT PK_ROOM PRIMARY KEY(ID))
CREATE MEMORY TABLE SAIL(ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY,SERIAL_NUMBER VARCHAR(255) NOT NULL,VERSION INTEGER)
CREATE MEMORY TABLE SAILOR(DTYPE VARCHAR(31) NOT NULL,ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY,NAME VARCHAR(255) NOT NULL,VERSION INTEGER)
CREATE MEMORY TABLE SHIP(ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY,NAME VARCHAR(255) NOT NULL,VERSION INTEGER,CAPTAIN BIGINT,CONSTRAINT SYS_CT_57 UNIQUE(NAME),CONSTRAINT FK35DA3CEE2A6862 FOREIGN KEY(CAPTAIN) REFERENCES SAILOR(ID))
CREATE MEMORY TABLE SHIP_SAILS(SHIP BIGINT NOT NULL,SAILS BIGINT NOT NULL,PRIMARY KEY(SHIP,SAILS),CONSTRAINT SYS_CT_61 UNIQUE(SAILS),CONSTRAINT FK617EDEDF2E41089E FOREIGN KEY(SHIP) REFERENCES SHIP(ID),CONSTRAINT FK617EDEDF348D52B9 FOREIGN KEY(SAILS) REFERENCES SAIL(ID))
ALTER TABLE SAIL ALTER COLUMN ID RESTART WITH 1
ALTER TABLE SAILOR ALTER COLUMN ID RESTART WITH 1
ALTER TABLE SHIP ALTER COLUMN ID RESTART WITH 1
CREATE USER SA PASSWORD ""
GRANT DBA TO SA
SET WRITE_DELAY 10
SET SCHEMA PUBLIC
INSERT INTO DATABASECHANGELOGLOCK VALUES(1,FALSE,NULL,NULL)
INSERT INTO DATABASECHANGELOG VALUES('1','bob','META-INF/database-change/db.changelog-1.1.xml','2011-02-24 11:11:30.565000000',1,'EXECUTED','3:5be91851d2c28889a24a98682a3e7f24','Create Table','',NULL,'2.0.1')
INSERT INTO DATABASECHANGELOG VALUES('1','bob','META-INF/database-change/db.changelog-1.2.xml','2011-02-24 11:11:30.565000000',2,'EXECUTED','3:f84a5c1495e724b97a17c9264f6a6c04','Create Table, Insert Row','',NULL,'2.0.1')
INSERT INTO DEPARTMENT VALUES(2,'RD',TRUE)
Note that Liquibase creates two tables for managing database versions, and in our case two inserts into the Liquibase tables are done, one for each changelog file.
- The DATABASECHANGELOGLOCK and DATABASECHANGELOG statements are executed by Liquibase the first time it runs.
- The DEPARTMENT table creation and the first DATABASECHANGELOG insert correspond to the update from 1.0 to 1.1: the department table is created, and the fact that db.changelog-1.1.xml has been executed is recorded.
- The SAIL, SAILOR, SHIP and SHIP_SAILS statements belong to the database structure that already existed in version 1.0.
- The ROOM table creation, the second DATABASECHANGELOG insert and the DEPARTMENT insert are the statements executed for updating from 1.1 to 1.2. The last line is an insert into a table created in version 1.1; obviously this insert will always work, because Liquibase ensures that the 1.1 script is executed before the 1.2 one.
Liquibase supports:
- Structural Refactorings like create table, add column and create view.
- Data Quality Refactorings like unique constraints, not-null constraints and create sequence.
- Referential Integrity Refactorings like primary key and foreign key management.
- Non-Refactoring Transformations like insert data, load data and tag database.
- Architectural Refactorings for creating indexes.
- Custom Refactorings where you specify the exact SQL sentences you want to execute (a small sketch mixing several of these refactorings is shown after this list).
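For example, a single changeSet can mix several of these refactorings. The following sketch builds on the department table from the previous changelogs; the code column and the constraint and index names are hypothetical, made up for illustration:
<changeSet id="3" author="bob">
    <addColumn tableName="department">
        <column name="code" type="varchar(10)"/>
    </addColumn>
    <addUniqueConstraint tableName="department" columnNames="name" constraintName="uq_department_name"/>
    <createIndex tableName="department" indexName="idx_department_name">
        <column name="name"/>
    </createIndex>
    <sql>UPDATE department SET active = TRUE WHERE active IS NULL</sql>
</changeSet>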
Liquibase can be executed as a standalone application, but also from Ant, Maven, a Servlet listener, Grails or Spring. In my case I have chosen the Spring integration, because I am already using Spring and because I wanted the application to check the changelog files every time it starts, to see whether there is any new upgrade to process.
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase"> <property name="dataSource" ref="dataSource"></property> <property name="changeLog" value="classpath:META-INF/database-change/db.changelog-master.xml"></property></bean>
Liquibase not only updates the database: it can also roll back changes, create a diff file between versions, generate documentation about changes in a javadoc-like fashion, or simply not run the update directly but generate an SQL file so a DBA can review the changes first.
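For many built-in refactorings Liquibase can work out the rollback statements automatically, and for the rest (or when you want to override them) you can declare them explicitly inside the changeSet. A minimal sketch, assuming a hypothetical location table:
<changeSet id="4" author="bob">
    <createTable tableName="location">
        <column name="id" type="int">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="name" type="varchar(50)"/>
    </createTable>
    <rollback>
        <dropTable tableName="location"/>
    </rollback>
</changeSet>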
But wait, there is even more. Do you know Spring Profiles, released in Spring 3.1? If not, take a look and see how well the Liquibase context parameter matches Spring Profiles. The Liquibase context parameter indicates that a changeSet should only be applied when Liquibase is running in the given context. For example, you can have a changelog that inserts data for your unit-testing queries and another one that creates some tables. The inserts should be applied only when the test context is active, while the table creations should always be applied.
In the changeSet tag you specify in which context it should be applied. If no context is given, the change is always executed. Keep in mind that it is good practice for test changeSets and production changeSets to remain in the same changelog file; it is only a matter of configuring contexts correctly.
<changeSet id="2" author="bob" context="test"/>Previous changeSet will be executed only when test context is specified. In Spring Liquibase exists a property called contexts where you indicate which contexts are currently active.
<property name="contexts" value="test" /><changeSet id="2" author="bob" context="test"/>
Both the changeSet context attribute and the Spring contexts property support comma-separated values for configuring more than one context.
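For example, a changeSet and a Spring property targeting two contexts could look like this (the context names are just examples):
<changeSet id="2" author="bob" context="test,integration"/>
<property name="contexts" value="test,integration"/>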
Then, thanks to Spring 3.1, you define two SpringLiquibase instances, each one in a different Spring profile: one with the test contexts (for example test, test-integration, ...) and the other one without any contexts, as sketched below.
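A minimal sketch of what this could look like in XML configuration, reusing the dataSource and master changelog from before (the profile names are just examples):
<beans profile="test">
    <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource"/>
        <property name="changeLog" value="classpath:META-INF/database-change/db.changelog-master.xml"/>
        <property name="contexts" value="test"/>
    </bean>
</beans>
<beans profile="production">
    <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource"/>
        <property name="changeLog" value="classpath:META-INF/database-change/db.changelog-master.xml"/>
    </bean>
</beans>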
Now, when we release a new version, we simply package everything in a war and send it to the client. When the war is deployed on their server, the new version is installed, with the software updates, the database updates and, more importantly, without the client losing any data. No post-installation steps or SQL scripts are required.