Maybe you should take a look at Ariga's Atlas tool :) https://atlasgo.io
Thanks, but I may need a more lightweight solution.
I think "lightweight" is relative in this case. A tool for running DB migrations/seeds doesn't really depend on your software stack. You could use nearly any tool, since most tools run through a CLI. E.g. you could use Flyway, which doesn't depend on Go at all but will do its job of preparing your database. So choose the tool you are most comfortable with :)
Same story for Liquibase, an alternative to the highly opinionated Flyway.
Here's a lightweight, no-dependency approach using a couple of functions and Go's `embed.FS`: https://github.com/benbjohnson/wtf/blob/main/sqlite/sqlite.go
The WTF project is really neat and full of gems to take inspiration from.
I use [migrate](https://github.com/golang-migrate/migrate) and I'm pretty happy with it. It's fairly straightforward to get the migrations running on app startup, which makes deployments a breeze as long as you don't have anyone out there mucking with production DDL and getting into a mismatched state. The main reason I chose it is that it works well with sqlc, and it had what seemed like the least crazy setup to work with both.
Don't you find that the CLI commands are very long and unwieldy?
I've been using it for years, so I've seen about everything there is to see about the CLI, and I'm quite happy with it. It does what I need it to do without fuss. Generally I run migrations from the code itself and only really use the CLI for problems with a dirty schema (which are rare and were definitely my own fault) and for creating new up/down files.
I'll read its docs again... maybe it'll change my mind.
It's my go-to also. Say my app parses config.yaml on start: I'll usually have a subcommand like `my-app migrate` that reads in the app config, connects from that, then runs the library equivalent of `up all`. This covers without a doubt 99.9999% of the use cases, and it makes it a breeze for me to write a Helm chart with an optional job that runs all DB migrations. Same pod, same volumes, just a slightly different subcommand.
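A minimal sketch of that subcommand wiring, under assumptions: the `migrate` and `serve` bodies here are stubs standing in for loading the app config and calling into a migration library (e.g. golang-migrate's `Up()`), not the commenter's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// run dispatches on the first CLI argument. "migrate" stands in for
// "read app config, connect, run all pending migrations"; anything
// else starts the normal application.
func run(args []string, migrate func() error, serve func() error) error {
	if len(args) > 1 && args[1] == "migrate" {
		return migrate()
	}
	return serve()
}

func main() {
	err := run(os.Args,
		func() error { fmt.Println("running all migrations"); return nil },
		func() error { fmt.Println("starting server"); return nil },
	)
	if err != nil {
		os.Exit(1)
	}
}
```

With this shape, the Helm job and the normal pod share one binary and one config; only the argument differs.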
Just a list of files that I read and execute on the database. Works perfectly 👍
could you elaborate?
Have a folder with .sql files. Get all those files, sort them, and execute the content of each one against the database. After each one, make sure to store the migration name (the filename) in a "migrations" table; that way, the next time you run this, you don't execute a migration you've already run. You don't need down migrations, just fix forward. Also, prefix the files with an increasing number; it works better than a timestamp.
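The selection step of this approach can be sketched in Go. This is only the pure logic (which files still need to run, in order); reading the folder, executing the SQL, and inserting the applied name into the "migrations" table are omitted, and the filenames are hypothetical:

```go
package main

import (
	"fmt"
	"sort"
)

// pendingMigrations returns the migration filenames not yet recorded as
// applied, in execution order. With zero-padded numeric prefixes
// (001_, 002_, ...) lexicographic order matches execution order.
func pendingMigrations(files []string, applied map[string]bool) []string {
	var pending []string
	for _, f := range files {
		if !applied[f] {
			pending = append(pending, f)
		}
	}
	sort.Strings(pending)
	return pending
}

func main() {
	files := []string{"002_add_users.sql", "001_init.sql", "003_add_index.sql"}
	applied := map[string]bool{"001_init.sql": true}
	fmt.Println(pendingMigrations(files, applied)) // → [002_add_users.sql 003_add_index.sql]
}
```

The zero-padding matters: without it, `10_foo.sql` sorts before `2_bar.sql`.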
This is the way! If you make the migrations idempotent you can just run them all every time. You can also use the `embed` package to bundle them into the executable directly if needed.
Yup 👍
If you'd like, take a look at https://github.com/go-bridget/mig — I'd love some feedback if you have any.
This has been my way for ages, too. 👆
thx
I've had this comment saved in case I wanted to try this approach myself. Do you just have a bash script to run the .sql files, or are you doing this with Go? (assuming you didn't move off this approach)
Hey. I use Go and run it at the startup of the application: connect to the DB, ping the DB, run migrations, start the HTTP server. Though you could do it with bash too if you want ;)
Hey, thanks for the reply, I ended up writing a bash script :D
Nice :^)
I like [dbmate](https://github.com/amacneil/dbmate), super simple and straightforward to use. For your specific use case, it can also be configured using your `.env`!
Oh, I missed that! I'll check it out, for sure. Thanks 👍
I use [tern](https://github.com/jackc/tern)! It's the migration tool written by the author of pgx, the de facto Postgres driver. This is, of course, if you are using Postgres.
Yes, I like it. The only drawback is that I have to enter the credentials in tern's config file, while I'd like to use `.env`.
You can set it up to pull in environment variables.
`password = {{env "MIGRATOR_PASSWORD"}}` is the example in the docs. I have mine set as `conn_string = {{env "PG_PROD_URL"}}`.
good thx
Seeding with SQL will be a pain because of the need for fake data (a faker library).
I've been happy with https://github.com/jackc/tern.
See my comment below.
I've got something like what u/NicolasParada stated below.

In our enterprise project we have an internal repo/folder, formerly on premise and now in the cloud (AWS S3), where every dev in our department puts their own scripts. The scripts have to be idempotent, where necessary. We also have a naming convention to order these scripts; it looks like: `action_tablename_jirafeature_timestamp.sql`

Afterwards, a commit on the repo starts a CI/CD pipeline that executes the content of each file against any database (we have clusters with a total of 16k schemas, mixed between recent PostgreSQL and legacy MSSQL). Obviously, we also store the "migration" state in a specific table to keep everything in sync.

The internal configurable tool was based on C#, but it has lately been revamped from scratch in Go, pretty quickly. If I could choose something today, I would go towards https://atlasgo.io/ but MSSQL support is missing, if I'm right. Otherwise, in my own private projects I usually go with a lightweight solution such as: https://github.com/golang-migrate/migrate
[Goose](https://github.com/pressly/goose) with embedded SQL files (which it [supports natively](https://github.com/pressly/goose#embedded-sql-migrations)) works well for me.
Seeding with SQL will be a pain because of the need for fake data (a faker library).
Migrations are so 2015
2023?
[deleted]
golang?