diff --git a/.classpath b/.classpath
deleted file mode 100644
index e14a732b..00000000
--- a/.classpath
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/.project b/.project
deleted file mode 100644
index befa632b..00000000
--- a/.project
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<projectDescription>
-	<name>datastax.cdm.cassandra-data-migrator-4.1.5-SNAPSHOT</name>
-	<comment></comment>
-	<projects>
-	</projects>
-	<buildSpec>
-		<buildCommand>
-			<name>org.eclipse.jdt.core.javabuilder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-		<buildCommand>
-			<name>org.eclipse.m2e.core.maven2Builder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-	</buildSpec>
-	<natures>
-		<nature>org.eclipse.jdt.core.javanature</nature>
-		<nature>org.eclipse.m2e.core.maven2Nature</nature>
-	</natures>
-</projectDescription>
diff --git a/src/resources/cdm-detailed.properties b/src/resources/cdm-detailed.properties
index 677bef16..b4e40a48 100644
--- a/src/resources/cdm-detailed.properties
+++ b/src/resources/cdm-detailed.properties
@@ -108,16 +108,17 @@ spark.cdm.connect.target.password          cassandra
 #-----------------------------------------------------------------------------------------------------------
 spark.cdm.schema.origin.keyspaceTable              keyspace_name.table_name
-# Max TTL value of all non-PK columns will be used for insert on target
+
+# Max TTL value of all TTL-eligible non-PK columns will be used for insert on target
 spark.cdm.schema.origin.column.ttl.automatic       true
 
-# Max TTL value of specified non-PK columns will be used for insert on target (overrides automatic setting)
+# Max TTL value of specified (non-PK) columns will be used for insert on target (overrides automatic setting)
 #spark.cdm.schema.origin.column.ttl.names          data_col1,data_col2,...
 
-# Max WRITETIME value of all non-PK columns will be used for insert on target
+# Max WRITETIME value of all WRITETIME-eligible non-PK columns will be used for insert on target
 spark.cdm.schema.origin.column.writetime.automatic true
 
-# Max WRITETIME value of specified non-PK columns will be used for insert on target (overrides automatic setting)
+# Max WRITETIME value of specified (non-PK) columns will be used for insert on target (overrides automatic setting)
 #spark.cdm.schema.origin.column.writetime.names    data_col1,data_col2,...
 
 #spark.cdm.schema.origin.column.names.to.target    partition_col1:partition_col_1,partition_col2:partition_col_2,...
 spark.cdm.schema.ttlwritetime.calc.useCollections  false
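
For illustration, a minimal sketch of how the touched TTL/WRITETIME settings combine, using only the properties and placeholder column names (data_col1, data_col2) that appear in the diff above; the values shown are an assumed example, not part of the change itself:

    # Automatic mode: CDM takes the max TTL/WRITETIME across eligible non-PK columns.
    spark.cdm.schema.origin.column.ttl.automatic        true
    spark.cdm.schema.origin.column.writetime.automatic  true

    # Explicit mode: uncommenting and naming columns overrides the automatic setting.
    #spark.cdm.schema.origin.column.ttl.names           data_col1,data_col2
    #spark.cdm.schema.origin.column.writetime.names     data_col1,data_col2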