[HUDI-8427] Fix delete in file group reader and re-enable test in TestHoodieSparkMergeOnReadTableInsertUpdateDelete #12283

Open · wants to merge 3 commits into master
@@ -68,7 +68,6 @@
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.spark.api.java.JavaRDD;
-import org.junit.jupiter.api.Disabled;
 import org.junit.jupiter.api.Tag;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.params.ParameterizedTest;
@@ -264,11 +263,11 @@ public void testRepeatedRollbackOfCompaction() throws Exception {
     }
   }

-  @Disabled("HUDI-8203")
   @ParameterizedTest
   @ValueSource(booleans = {true, false})
   public void testSimpleInsertUpdateAndDelete(boolean populateMetaFields) throws Exception {
     Properties properties = populateMetaFields ? new Properties() : getPropertiesForKeyGen();
+    properties.setProperty(HoodieTableConfig.PRECOMBINE_FIELD.key(), "timestamp");
Contributor:
Does this mean that it does not work without a preCombine field? I've marked this test to be revisited by HUDI-8551.

Contributor (Author):
The test data generator creates the record with an ordering field of type long. If no precombine field is set, we default to int in the file group reader, so when we read the records we try to cast the long to an int, which fails. The timestamp field is a long, so no casting is needed.

  public RawTripTestPayload(String jsonData, String rowKey, String partitionPath, String schemaStr) throws IOException {
    this(Option.of(jsonData), rowKey, partitionPath, schemaStr, false, 0L);
  }

What is also interesting is that there are three more constructors for RawTripTestPayload. Two of them take any Comparable as the ordering value; the final one sets it as an int:

this.orderingVal = Integer.valueOf(jsonRecordMap.getOrDefault("number", 0L).toString());
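
For illustration, a minimal standalone sketch of the failure mode described above (hypothetical demo code, not from the Hudi codebase; the class name OrderingCastDemo is made up): a long ordering value meets a reader that assumes int.

  public class OrderingCastDemo {
    public static void main(String[] args) {
      // The test data generator produces a Long ordering value ("timestamp" is a long)...
      Comparable orderingVal = 0L;
      // ...but with no precombine field configured, the reader assumed int, so the
      // equivalent of this cast was attempted and fails at runtime:
      Integer asInt = (Integer) orderingVal; // ClassCastException: Long cannot be cast to Integer
      System.out.println(asInt);
    }
  }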

     properties.setProperty(HoodieTableConfig.BASE_FILE_FORMAT.key(), HoodieTableConfig.BASE_FILE_FORMAT.defaultValue().toString());
     HoodieTableMetaClient metaClient = getHoodieMetaClient(HoodieTableType.MERGE_ON_READ, properties);

@@ -53,6 +53,7 @@
 import org.apache.hadoop.io.ArrayWritable;
 import org.apache.hadoop.io.NullWritable;
 import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
 import org.apache.hadoop.mapred.FileSplit;
 import org.apache.hadoop.mapred.InputSplit;
 import org.apache.hadoop.mapred.JobConf;
@@ -265,10 +266,10 @@ public UnaryOperator<ArrayWritable> projectRecord(Schema from, Schema to, Map<St
   @Override
   public Comparable castValue(Comparable value, Schema.Type newType) {
     //TODO: [HUDI-8261] actually do casting here
-    if (newType == Schema.Type.STRING) {
-      return value.toString();
+    if (value instanceof WritableComparable) {
+      return value;
     }
-    return value;
+    return (WritableComparable) HoodieRealtimeRecordReaderUtils.avroToArrayWritable(value, Schema.create(newType));
jonvex marked this conversation as resolved.
Contributor:

Is this all we need for fixing HUDI-8261 too? If so, could we make HUDI-8261 fix version 1.0.0 and mark it as closed after this PR is merged? And remove //TODO: [HUDI-8261] actually do casting here.

Contributor (Author):

No, it doesn't fix everything, but I think it might be worth fixing that here as well.

Contributor (Author):

Spent some time trying to fix HUDI-8261. I think it's too complex to get into 1.0.0.

   }

   public UnaryOperator<ArrayWritable> reverseProjectRecord(Schema from, Schema to) {
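
To make the fixed behavior concrete, here is a hedged sketch of the conversion the new castValue performs on a plain Java ordering value. It reuses the two-argument avroToArrayWritable call shown in the diff above, but is not an excerpt from this PR, and the class name CastValueSketch is made up:

  import org.apache.avro.Schema;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.WritableComparable;
  import org.apache.hudi.hadoop.utils.HoodieRealtimeRecordReaderUtils;

  public class CastValueSketch {
    public static void main(String[] args) {
      // A long ordering value read from the Avro log records...
      Comparable value = 1000000000L;
      // ...is wrapped in its Hadoop Writable counterpart so it compares correctly
      // against ordering values coming from the Hive/Writable side:
      WritableComparable casted = (WritableComparable)
          HoodieRealtimeRecordReaderUtils.avroToArrayWritable(value, Schema.create(Schema.Type.LONG));
      System.out.println(casted.equals(new LongWritable(1000000000L))); // expected: true
    }
  }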
@@ -22,15 +22,23 @@
 import org.apache.hudi.avro.HoodieAvroUtils;
 import org.apache.hudi.common.testutils.HoodieTestDataGenerator;
 import org.apache.hudi.common.testutils.HoodieTestUtils;
+import org.apache.hudi.hadoop.HiveHoodieReaderContext;

 import org.apache.avro.Schema;
 import org.apache.avro.generic.GenericRecord;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.io.ArrayWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
 import org.apache.hadoop.mapred.JobConf;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
+import org.mockito.Mockito;

 import java.util.Arrays;
 import java.util.List;
@@ -85,4 +93,18 @@ public void testProjection() {
   private static ArrayWritable convertArrayWritable(GenericRecord record) {
     return (ArrayWritable) HoodieRealtimeRecordReaderUtils.avroToArrayWritable(record, record.getSchema(), false);
   }
+
+  @Test
+  public void testCastOrderingField() {
+    HiveHoodieReaderContext readerContext = Mockito.mock(HiveHoodieReaderContext.class, Mockito.CALLS_REAL_METHODS);
+    assertEquals(new Text("ASDF"), readerContext.castValue("ASDF", Schema.Type.STRING));
+    assertEquals(new IntWritable(0), readerContext.castValue(0, Schema.Type.INT));
+    assertEquals(new LongWritable(1000000000), readerContext.castValue(1000000000, Schema.Type.LONG));
+    assertEquals(new FloatWritable(20.24f), readerContext.castValue(20.24, Schema.Type.FLOAT));
+    assertEquals(new DoubleWritable(21.12d), readerContext.castValue(21.12, Schema.Type.DOUBLE));
+
+    // make sure that if the input is already a Writable, it still works
+    WritableComparable reflexive = new IntWritable(8675309);
+    assertEquals(reflexive, readerContext.castValue(reflexive, Schema.Type.INT));
+  }
 }