What You Should Know When Using Liquibase

I've been working with Liquibase for many years now. It's a powerful tool to integrate into any application that has a database connection.

Still, there are some best practices to take into account if you don't want Liquibase to work against you.

Run the migrations automatically when starting the application

I always configure Liquibase to run when the application starts, whether it's running locally, in the development environment, or in production.

For that, I wire Liquibase into the application startup; a minimal configuration sketch is shown below.

This way, there's no doubt about whether the database schema is up to date or not.
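Here is a minimal sketch of that setup, assuming a Spring Boot application with the liquibase-core dependency on the classpath (the changelog path is only an example):

spring:
  liquibase:
    enabled: true                                               # run the migrations at application startup
    change-log: classpath:db/changelog/db.changelog-master.yaml

With liquibase-core on the classpath and a datasource configured, Spring Boot runs the pending changesets during startup, before the application begins serving requests.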

Still, this requires some DevOps skills, as my application needs to figure out on its own whether it's running locally, in the development environment, or in production.

That's because the database URL is different in each case.

I can do it with:

  • Environment variables: expose the database URL, username and password as environment variables that my application can read (see the sketch after this list).
  • Maven profiles: running the build with different profiles loads different configuration files with the appropriate values.
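For the environment-variable option, the datasource configuration could look like this (the variable names DB_URL, DB_USERNAME and DB_PASSWORD are made up for the sketch; Liquibase reuses this datasource by default):

spring:
  datasource:
    url: ${DB_URL}              # e.g. a local URL on my machine, another one in production
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}

With Maven profiles, the idea is the same, except that each profile selects a different configuration file with the right values baked in.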

Add the rollback scripts

If everything always went as planned, we wouldn't need any unit tests.

But this is never the case.

For the same reason, always add rollback scripts. Better to have them and never use them than to need them and not have them.
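For example, in a formatted SQL changelog (the table and changeset names are made up), the rollback is declared right next to the change it undoes:

--liquibase formatted sql

--changeset sergio:create-customer-table
CREATE TABLE customer (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
--rollback DROP TABLE customer;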

Test both the migration and rollback scripts at each build

Still, in normal operation the rollback scripts aren't executed until a problem occurs.

How to make sure the rollback is correct? Trust the developers? No.

I configure a job in my CI/CD pipeline that runs all the Liquibase scripts against an in-memory database, checks the exit code (and breaks the pipeline if it's non-zero), and finally runs all the rollback scripts.

# apply all changesets against the in-memory database
mvn liquibase:update
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
# roll everything back (a count high enough to cover all changesets)
mvn liquibase:rollback -Dliquibase.rollbackCount=1000
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
# apply everything again to prove the rollback left a clean state
mvn liquibase:update
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi

I first run the migration and check that everything goes well. Then I run the rollback and check again. Finally, I run the migration one more time to verify that the rollback really undid everything.

This way, I test both the migration scripts and rollbacks at each commit.

How to handle changelog files created in parallel branches

Let's look at a situation that has happened to me many times.

We develop two features at the same time, in different Git branches. Let's call them featA and featB.

While developing featA, I created the changelog file db.changelog-1.32.sql in my branch.

Then somebody started developing featB on their own branch, with another changelog file, db.changelog-1.33.sql.

But featB was merged before featA.

This means that in production, I have the changelog files db.changelog-1.31.sql and db.changelog-1.33.sql, but version 32 is missing.

I can:

  • Instead of using a number for the file version, use a UUID. This way, the files won't have any implied order; the only order that matters is the one in the master changelog file (see the sketch after this list).
  • Keep the number in the file name, but accept unsorted version numbers in the master changelog file. This won't cause any problems, because Liquibase applies the included files in the order they are listed, not by file name.
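To illustrate, here is a sketch of a YAML master changelog using the file names from the example above; Liquibase applies the included files in the order they are listed, regardless of their version numbers:

databaseChangeLog:
  - include:
      file: db/changelog/db.changelog-1.31.sql
  - include:
      file: db/changelog/db.changelog-1.33.sql    # featB, merged first
  - include:
      file: db/changelog/db.changelog-1.32.sql    # featA, merged later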

The key here is to talk to the team when adding a new changelog file, to make sure there won't be any conflict.

Never edit a changelog file

A changeset pushed is a changeset applied to the database.

I can't edit an action that has already been applied to the database: Liquibase stores a checksum of every applied changeset in the DATABASECHANGELOG table, and modifying one triggers a checksum validation error on the next run.

What if I made a mistake when creating a table or a column? I must add a new changeset with the correction.

I always compare Liquibase to Git: if I've pushed a commit with a feature, I don't go digging through the history to edit that commit; I push a new commit with the fix.
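As a sketch (hypothetical table and column, PostgreSQL-flavoured syntax), the fix becomes a new changeset instead of an edit to the old one:

--liquibase formatted sql

--changeset sergio:fix-customer-name-length
-- the column was created too short in an earlier changeset
ALTER TABLE customer ALTER COLUMN name TYPE VARCHAR(500);
--rollback ALTER TABLE customer ALTER COLUMN name TYPE VARCHAR(255);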

But if you do, you need to edit the Liquibase changelog tables manually

But what if I really have to modify an existing changelog file?

Sometimes, for uncommon reasons, a changeset passes the unit tests but fails during deployment.

I've seen this happen because of security restrictions when adding plugins, or because the unit tests and the target environment run different database versions.

So I have to edit my changeset, because the environment isn't stable. To let Liquibase run the corrected changeset, I first remove its trace from the tracking tables:

-- remove the tracking entry so the corrected changeset can run again
delete from DATABASECHANGELOG where FILENAME like '%filename%';
-- release any lock left behind by a failed run
delete from DATABASECHANGELOGLOCK;

This is the only case in which I edit the Liquibase changelog tables.

When my corrected changeset is ready to be deployed, I first remove the old entry from the Liquibase table, and apply the rollback command if necessary.

But as I said before, I try never to do this, as it's a sign of an unstable platform.

If you want to learn more about good quality code, make sure to follow me on YouTube.


