$ bin/nuodb-migration dump --source.driver=com.nuodb.jdbc.Driver \
    --source.url=jdbc:com.nuodb://localhost/test \
    --source.username= \
    --source.schema=hockey --table=hockey --table.hockey.filter=id<>25 \
    --output.path=/tmp/dump.cat --output.type=bson
Running this goes through ALL tables in the hockey schema, issuing a metadata query for each one, before it ever reaches table=hockey.
In my case (~50k atoms) the migrator never reaches the desired table, even after hours of running, because it is extracting the structure of every table in the schema, which renders the migrator useless.
Suggested fix: skip over tables where TABLENAME <> --table.
The problem is that the Migrator calls JDBC's getColumnInfo() for every table in the database, even for tables that you are not interested in exporting. Each query takes about 15 ms, which for 50K tables adds up to roughly 806 seconds.
The following patch shows the location in the Migrator code where the loop can be short-circuited so that getColumnInfo() is called only for the table you are interested in (note that the table name is hard-coded in the patch).
diff --git a/core/src/main/java/com/nuodb/migrator/backup/writer/BackupWriter.java b/core/src/main/java/com/nuodb/migrator/backup/writer/BackupWriter.java
index 59dc02d4..d84ec1e3 100644
--- a/core/src/main/java/com/nuodb/migrator/backup/writer/BackupWriter.java
+++ b/core/src/main/java/com/nuodb/migrator/backup/writer/BackupWriter.java
@@ -246,7 +246,7 @@ public class BackupWriter {
     }
 
     protected InspectionScope getInspectionScope() {
-        return new TableInspectionScope(sourceSpec.getCatalog(), sourceSpec.getSchema(), getTableTypes());
+        return new TableInspectionScope(sourceSpec.getCatalog(), sourceSpec.getSchema(), "T1");
     }
 
     protected BackupWriterManager createBackupWriterManager(BackupOps backupOps, Map context) throws Exception {
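A more general version of the same short-circuit would filter on the table names actually requested with --table instead of hard-coding "T1". The sketch below is illustrative only and does not use the real Migrator API: the class TableNameFilter and its methods are hypothetical stand-ins for wherever the per-table metadata loop decides whether to inspect a table.

import java.util.HashSet;
import java.util.Set;

// Hypothetical helper, not part of the Migrator code base: it records the
// names passed via --table=<name> and answers whether a given table's
// metadata (columns, indexes, ...) needs to be inspected at all.
public class TableNameFilter {

    private final Set<String> requested = new HashSet<>();

    public TableNameFilter(Iterable<String> requestedTableNames) {
        for (String name : requestedTableNames) {
            // Compare case-insensitively so the filter is not defeated by identifier case.
            requested.add(name.toUpperCase());
        }
    }

    // True when the per-table metadata query should be issued for this table.
    public boolean shouldInspect(String tableName) {
        // No --table option given: keep today's behaviour and inspect everything.
        return requested.isEmpty() || requested.contains(tableName.toUpperCase());
    }
}

Calling shouldInspect() before issuing each per-table metadata query would skip the ~50K unrelated tables up front, so the dump only pays the ~15 ms cost for the tables named with --table rather than for the whole schema.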