This is a continuation of the previous article on how to run MongoDB in Docker as a replica set.
We start off with a MongoDB cluster of two nodes, running in a Docker setup like this:
docker-compose.yml
version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
      - datadb01:/data/db
      - ./etc/mongod.conf:/etc/mongod.conf
    ports:
      - "30001:30001"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port", "30001"]
    container_name: db01
  db02:
    image: mongo:3.0
    volumes:
      - datadb02:/data/db
      - ./etc/mongod.conf:/etc/mongod.conf
    ports:
      - "30002:30002"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port", "30002"]
    container_name: db02
volumes:
  datadb01:
  datadb02:
Step Three – Add another host to the replica set
Now, adding a third one to the config seems straightforward:
version: '3'
services:
  db01:
    image: mongo:3.0
    volumes:
      - datadb01:/data/db
      - ./etc/mongod.conf:/etc/mongod.conf
    ports:
      - "30001:30001"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port", "30001"]
    container_name: db01
  db02:
    image: mongo:3.0
    volumes:
      - datadb02:/data/db
      - ./etc/mongod.conf:/etc/mongod.conf
    ports:
      - "30002:30002"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port", "30002"]
    container_name: db02
  db03:
    image: mongo:3.0
    volumes:
      - datadb03:/data/db
      - ./etc/mongod.conf:/etc/mongod.conf
    ports:
      - "30003:30003"
    command: ["mongod", "--config", "/etc/mongod.conf", "--port", "30003"]
    container_name: db03
volumes:
  datadb01:
  datadb02:
  datadb03:
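Rolling this out should be as simple as another docker-compose up -d, which creates the new db03 container and (since their definitions are unchanged) leaves db01 and db02 running:

docker-compose up -d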
To add this third machine to the replSet, reconfiguration in the mongo shell is required:
Seems to be as easy as rs.add("db03:30003")
(with MongoDB version >=3.0)
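As a quick sketch, the whole interaction from the Docker host (using docker exec to get a mongo shell on the PRIMARY container):

# open a mongo shell on the current PRIMARY's container
docker exec -it db01 mongo --port 30001

# inside the shell: add the new member to the replica set
rs.add("db03:30003")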
A rs.status() check reveals that the third server is part of the cluster.
It stayed in the startup states for a little while (no transactions going on in this test environment)…
{ "_id" : 3, "name" : "db03:30003", "health" : 1, "state" : 5, "stateStr" : "STARTUP2", "uptime" : 12, }
… but finally managed to start up completely:
{ "set" : "rs0", "date" : ISODate("2017-06-23T10:30:02.225Z"), "myState" : 1, "members" : [ { "_id" : 1, "name" : "db01:30001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", ... "self" : true }, { "_id" : 2, "name" : "db02:30002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", ... "pingMs" : 0, "configVersion" : 139230 }, { "_id" : 3, "name" : "db03:30003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", ... "pingMs" : 0, "configVersion" : 139230 } ], "ok" : 1 }
We have a running cluster on MongoDB version 3.0.
So, next stop: Update the cluster from 3.0 to 3.4.
Steps Four and Five – Update from 3.0 to 3.2 to 3.4
((WAIT, you might want to update your application's configuration now as well, see below))
According to the official MongoDB documentation, this needs to be done in two steps:
3.0 to 3.2: https://docs.mongodb.com/manual/release-notes/3.2-upgrade/#upgrade-a-replica-set-to-3-2
3.2 to 3.4: https://docs.mongodb.com/manual/release-notes/3.4-upgrade-replica-set/#upgrade-replica-set
In both cases, the steps seem to be the same and quite straightforward (condensed into commands after the list):
- Upgrade secondary members of the replica set
- Step down the replica set primary to secondary, so an upgraded one becomes primary
- Upgrade the previous primary so all are on the same version
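Condensed for this Docker setup, one upgrade round looks roughly like this (a sketch; how to recreate a single container without touching the others is worked out below):

# 1. bump a SECONDARY's image tag in docker-compose.yml
#    (e.g. db02: image: mongo:3.2), recreate that container and
#    wait in rs.status() until it reports SECONDARY again
# 2. repeat for the other SECONDARY (db03)
# 3. in the mongo shell on the PRIMARY: hand over the primary role
rs.stepDown()
# 4. bump and recreate the former PRIMARY the same way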
In this case, using Docker, upgrading the instances should be as easy as changing the version tag in the docker-compose.yml.
So, one at a time:
As my current primary is db01, I’ll start with db02. The change is just a version number in the file, so I’m not pasting the whole file here:
  db02:
    image: mongo:3.2
A docker-compose up -d brought db02 down, replacing it with an updated mongod 3.2. Repeatedly watching rs.status(), I could see the machine disappear and then re-sync.
NICE
Repeat it for db03
NICE again
Next step – step down
Running rs.stepDown() on the PRIMARY db01 makes db03 turn PRIMARY and leaves db01 a SECONDARY, ready to be updated to 3.2 as well…
BUT WAIT!
This made me aware of the fact that I had forgotten to update my application configuration. While I extended the cluster to a 3-host system, I did not add db03 to the application's MongoDB server config and the application server's /etc/hosts – which I quickly changed at this point.
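For illustration, the application-side change amounts to something like this (the IP is a placeholder and "mydb" stands in for the real database name; the replica set name rs0 comes from the rs.status() output above):

# /etc/hosts on the application server: make db03 resolvable
10.0.0.13   db03

# connection string, now listing all three members
mongodb://db01:30001,db02:30002,db03:30003/mydb?replicaSet=rs0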
Changing db01's image to 3.2 now and running docker-compose up -d did update the image/container and restart it – but rs.status() also made me aware that, judging by their uptimes, the other instances seem to have been restarted as well.
So, there must be a way to update/restart a single docker-compose service, right? Let's check during the upgrade from 3.2 to 3.4.
Now that all 3 containers are running the 3.2 image, the SECONDARYs can be updated next. The line changed in the docker-compose.yml:
version: '3'
services:
  db01:
    image: mongo:3.4
  ...
Now, instead of running a full docker-compose up -d, it seems the way to go is:
docker-compose stop db02
docker-compose create db02
docker-compose start db02
A previous docker-compose up -d db01 had an effect on the other servers' uptimes as well, so I verified with db02 that this sequence works.
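As an aside: depending on the Compose version, recreating just one service in a single command should also work with the --no-deps flag (untested here, so take it as a pointer rather than a recipe):

docker-compose up -d --no-deps db02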
After connecting with the mongo shell to the PRIMARY (db03) and sending it a rs.stepDown(), this one is ready to be upgraded as well.
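rs.stepDown() also accepts the number of seconds the node refuses to become PRIMARY again (60 by default), which buys some headroom for the container recreation:

// stay SECONDARY for at least 120 seconds, long enough
// for the stop/create/start cycle of this container
rs.stepDown(120)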
With the stop, create, start sequence, the last container is upgraded to 3.4 as well, and the exercise is finished.
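One follow-up worth noting from the linked 3.4 upgrade notes: the new 3.4 features stay disabled until the feature compatibility version is raised, which is done once, in the mongo shell on the PRIMARY:

// enable the backwards-incompatible 3.4 features (see the linked upgrade notes)
db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )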