
Data Migration


This documentation is based on Olivier Dalang's SPC GeoNode migration.

How to migrate from an existing standard GeoNode install

This section lists the steps used to migrate from an apt-get install of GeoNode 2.4.1 (with GeoServer 2.7.4) to a fresh MASDAP (GeoNode project) install (2.10.x with GeoFence). It is meant as a guide only, as some steps may need tweaking depending on your installation. Do not follow these steps if you don't understand what you're doing.
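Before starting, it helps to confirm what is actually installed on the old server. The check below is optional and only a sketch: the package name "geonode" is an assumption based on a standard apt-get install, and the GeoServer version can also be read from its web admin page.

# Optional: confirm the GeoNode package version on the old server (package name is an assumption)
dpkg -l | grep -i geonode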

Prerequisites

  • access to the original server
  • a new server for the install (with GeoNode installed with Docker)
  • an external hard-drive to copy data over
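An optional quick check on the new server before starting, to make sure the Docker tooling is available and the external drive has enough free space for the dump and the copied directories:

# Check the Docker tooling on the new server
docker --version
docker-compose --version

# Check free space on the external drive (use the path where it is mounted)
df -h /path/to/your/external/drive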

On the old server

# Move to the external hard drive
cd /path/to/your/external/drive

# Find the current database password (look for DATABASE_PASSWORD, in my case it was XbFAyE4w)
more /etc/geonode/local_settings.py

# Dump the database content (you will be prompted several times for the password above)
pg_dumpall --host=127.0.0.1 --username=geonode --file=pg_dumpall.custom

# Copy all uploaded files
cp -r /var/www/geonode/uploaded uploaded

# Copy geoserver data directory
cp -r /usr/share/geoserver/data geodatadir
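
Before unplugging the drive, it is worth a quick check that the dump and the copies are complete (the sizes obviously depend on your data):

# Optional: verify the dump and the copied directories exist and have plausible sizes
ls -lh pg_dumpall.custom
du -sh uploaded geodatadir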

On the new server

Set up GeoNode by following the prerequisites and production steps at https://github.com/olivierdalang/SPCgeonode/tree/release, up to (but not including) running the stack.
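For orientation, that setup typically boils down to cloning the repository on its release branch and adjusting the environment configuration. The commands below are only a sketch (the target directory name and the .env file name are assumptions); the linked README is authoritative.

# Sketch only - follow the SPCgeonode README for the authoritative steps
git clone -b release https://github.com/olivierdalang/SPCgeonode.git masdap
cd masdap
nano .env   # adjust the environment configuration (file name is an assumption)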

Then run these commands:

# Prepare the stack (without running)
docker-compose -f docker-compose.yml pull --no-parallel
docker-compose -f docker-compose.yml up --no-start

# Start the database
docker-compose -f docker-compose.yml up -d db
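
# (Optional check, not part of the original guide) Wait until postgres accepts connections before continuing.
# pg_isready should be available in the postgres-based db image; if not, inspect "docker-compose -f docker-compose.yml logs db" instead.
docker exec -i db4masdap pg_isready -U postgres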

# Initialize geoserver (to create the geodatadir - this will fail because Django/Postgres aren't started yet, but this is expected)
docker-compose -f docker-compose.yml run --rm geoserver exit 0

# Go to the external drive
cd /path/to/drive/

# Drop and recreate the databases
docker exec -i db4masdap dropdb -U postgres geonode
docker exec -i db4masdap dropdb -U postgres geonode_data
docker exec -i db4masdap createdb -U postgres geonode
docker exec -i db4masdap createdb -U postgres geonode_data

# Restore the dump (this can take a while if you have data in postgres)
cat pg_dumpall.custom | docker exec -i db4masdap psql -U postgres

# In case you want to restore only one database at a time (from a per-database dump such as geonode_data.dump), here is an example
docker exec -i db4masdap psql -U postgres geonode_data < geonode_data.dump
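
# (Optional check) Confirm the restore worked, e.g. by counting the layers known to GeoNode
docker exec -i db4masdap psql -U postgres -d geonode -c "SELECT count(*) FROM layers_layer;"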

# Restore the django uploaded files
docker cp uploaded/. django4masdap:/mnt/volumes/statics/uploaded

# Restore the styles, workspaces and stored data from the old geoserver data directory (copied as geodatadir)
docker cp geodatadir/styles/. django4masdap:/geoserver_data/data/styles
docker cp geodatadir/workspaces/. django4masdap:/geoserver_data/data/workspaces
docker cp geodatadir/data/default/. django4masdap:/geoserver_data/data/default
docker cp geodatadir/data/geonode/. django4masdap:/geoserver_data/data/geonode
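
# (Optional check) Confirm the copied data is visible inside the container
docker exec -i django4masdap ls /geoserver_data/data/workspaces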

# Back to SPCgeonode
cd /path/to/masdap

# Fix an inconsistency that prevents migrations (public.layers_layer shouldn't have a service_id column)
docker exec -i db4masdap psql -d geonode -U postgres -c "ALTER TABLE layers_layer DROP COLUMN service_id;"

# Migrate with fake initial
docker-compose -f docker-compose.yml run --rm --entrypoint "" django python manage.py migrate --fake-initial
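
# (Optional check) List the migration state; the migrations should now show as applied
docker-compose -f docker-compose.yml run --rm --entrypoint "" django python manage.py showmigrations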

# Create the SQL diff to fix the schema
# TODO: upstream some changes to django-extensions for this to work directly
docker-compose -f docker-compose.yml run --rm --entrypoint "" django /bin/sh -c 'DJANGO_COLORS=nocolor python manage.py sqldiff -ae' >> fix.sql

# Manually fix the SQL commands until they run (you can also drop the tables that have no model)
nano fix.sql

# Apply the SQL diff (review the SQL file first, as it may drop important tables)
cat fix.sql | docker exec -i db4masdap psql -U postgres

# This time start the stack
docker-compose -f docker-compose.yml up -d
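
Once the stack is up, keep an eye on the containers and the Django logs while everything comes online:

# Check that all containers are running and watch the logs for errors
docker-compose -f docker-compose.yml ps
docker-compose -f docker-compose.yml logs -f django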