
feat: Add GroupColumn Decimal128Array #13564

Merged · 10 commits · Dec 4, 2024

Changes from 8 commits
@@ -87,7 +87,12 @@ impl DatasetGeneratorConfig {
             .iter()
             .filter_map(|d| {
                 if d.column_type.is_numeric()
-                    && !matches!(d.column_type, DataType::Float32 | DataType::Float64)
+                    && !matches!(
Contributor
Something is wrong here. This change effectively turns off fuzz testing for sum with decimal.

When I reverted this change, the fuzz tests occasionally fail like this:

test fuzz_cases::aggregate_fuzz::test_sum ... FAILED
...
Arrow error: Invalid argument error: column types must match schema types, expected Decimal128(21, -112) but found Decimal128(38, 10) at column index 1

Contributor

From my perspective, other than this change, this PR is ready to go.

Thank you @jonathanc-n

Contributor Author

@alamb thanks for the tip, I reverted that change! For the test_sum fuzz test, I removed it the same way I did for the Float types, because it fails when casting to a date type. This was the error I got when executing with a backtrace (many more of this same error): ERROR: Cast error: Failed to convert 39087111289254881.41 to datetime for Timestamp(Millisecond, None)

I'm getting this; should I still remove it?

@jayzhan211 (Contributor) · Dec 2, 2024
The real issue is that we have decimal(38, 10), which is the fixed precision for sum, and it mismatches the fuzz test, which uses a random precision (a worked example of the arithmetic follows the snippet below):

    fn return_type(&self, arg_types: &[DataType]) -> Result<DataType> {
        match &arg_types[0] {
            DataType::Int64 => Ok(DataType::Int64),
            DataType::UInt64 => Ok(DataType::UInt64),
            DataType::Float64 => Ok(DataType::Float64),
            DataType::Decimal128(precision, scale) => {
                // in the spark, the result type is DECIMAL(min(38,precision+10), s)
                // ref: https://github.com/apache/spark/blob/fcf636d9eb8d645c24be3db2d599aba2d7e2955a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Sum.scala#L66
                let new_precision = DECIMAL128_MAX_PRECISION.min(*precision + 10);
                Ok(DataType::Decimal128(new_precision, *scale))
            }
            DataType::Decimal256(precision, scale) => {
                // in the spark, the result type is DECIMAL(min(38,precision+10), s)
                // ref: https://github.com/apache/spark/blob/fcf636d9eb8d645c24be3db2d599aba2d7e2955a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Sum.scala#L66
                let new_precision = DECIMAL256_MAX_PRECISION.min(*precision + 10);
                Ok(DataType::Decimal256(new_precision, *scale))
            }
            other => {
                exec_err!("[return_type] SUM not supported for {}", other)
            }
        }
    }
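
To make the mismatch concrete, here is a minimal sketch of the arithmetic; the input type Decimal128(11, -112) is hypothetical, inferred from the error message quoted earlier, and the helper name is mine, not DataFusion's:

    // Hypothetical reproduction of the type mismatch from the fuzz failure.
    const DECIMAL128_MAX_PRECISION: u8 = 38;

    // Mirrors the Spark-style rule in return_type above: min(38, precision + 10).
    fn sum_return_precision(input_precision: u8) -> u8 {
        DECIMAL128_MAX_PRECISION.min(input_precision + 10)
    }

    fn main() {
        // If the fuzz generator picked Decimal128(11, -112) for the input
        // column, the planner types the sum as Decimal128(21, -112)...
        assert_eq!(sum_return_precision(11), 21);
        // ...while the accumulator emitted a Decimal128(38, 10) array, hence
        // "expected Decimal128(21, -112) but found Decimal128(38, 10)".
    }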

Contributor

Decimal128(38, 3) for normal precision, Decimal128(30, 3) for grouping. Not sure why there is a mismatch in the fuzz test. We should either align the precision for both cases or fix the fuzz schema check, if the two do not need the same precision, like in this slt (a sketch of such a relaxed check follows the plan output below):

query TT
select arrow_typeof(sum(column1)), arrow_typeof(sum(distinct column1)) from t group by column2;
----
Decimal128(38, 3) Decimal128(30, 3)
Decimal128(38, 3) Decimal128(30, 3)

query TT
explain select sum(column1), sum(distinct column1) from t group by column2;
----
logical_plan
01)Projection: sum(alias2) AS sum(t.column1), sum(alias1) AS sum(DISTINCT t.column1)
02)--Aggregate: groupBy=[[t.column2]], aggr=[[sum(alias2), sum(alias1)]]
03)----Aggregate: groupBy=[[t.column2, t.column1 AS alias1]], aggr=[[sum(t.column1) AS alias2]]
04)------TableScan: t projection=[column1, column2]
physical_plan
01)ProjectionExec: expr=[sum(alias2)@1 as sum(t.column1), sum(alias1)@2 as sum(DISTINCT t.column1)]
02)--AggregateExec: mode=FinalPartitioned, gby=[column2@0 as column2], aggr=[sum(alias2), sum(alias1)]
03)----CoalesceBatchesExec: target_batch_size=8192
04)------RepartitionExec: partitioning=Hash([column2@0], 4), input_partitions=4
05)--------AggregateExec: mode=Partial, gby=[column2@0 as column2], aggr=[sum(alias2), sum(alias1)]
06)----------AggregateExec: mode=FinalPartitioned, gby=[column2@0 as column2, alias1@1 as alias1], aggr=[alias2]
07)------------CoalesceBatchesExec: target_batch_size=8192
08)--------------RepartitionExec: partitioning=Hash([column2@0, alias1@1], 4), input_partitions=4
09)----------------RepartitionExec: partitioning=RoundRobinBatch(4), input_partitions=1
10)------------------AggregateExec: mode=Partial, gby=[column2@1 as column2, column1@0 as alias1], aggr=[alias2]
11)--------------------MemoryExec: partitions=1, partition_sizes=[1]
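
One way to relax that schema check, sketched with a hypothetical helper (this is not the project's actual fuzz validator, just an illustration of comparing decimal types by scale only):

use arrow_schema::DataType;

// Hypothetical helper: treat two Decimal128 types as compatible when their
// scales match, ignoring precision, per the suggestion above.
fn schema_types_compatible(expected: &DataType, actual: &DataType) -> bool {
    match (expected, actual) {
        (DataType::Decimal128(_, s1), DataType::Decimal128(_, s2)) => s1 == s2,
        _ => expected == actual,
    }
}

fn main() {
    // Decimal128(38, 3) vs Decimal128(30, 3): same scale, so compatible.
    assert!(schema_types_compatible(
        &DataType::Decimal128(38, 3),
        &DataType::Decimal128(30, 3),
    ));
}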

+                        d.column_type,
+                        DataType::Float32
+                            | DataType::Float64
+                            | DataType::Decimal128(_, _)
+                    )
                 {
                     Some(d.name.as_str())
                 } else {
datafusion/physical-plan/src/aggregates/group_values/mod.rs (4 additions & 1 deletion)
@@ -19,7 +19,7 @@
 use arrow::record_batch::RecordBatch;
 use arrow_array::types::{
-    Date32Type, Date64Type, Time32MillisecondType, Time32SecondType,
+    Date32Type, Date64Type, Decimal128Type, Time32MillisecondType, Time32SecondType,
     Time64MicrosecondType, Time64NanosecondType, TimestampMicrosecondType,
     TimestampMillisecondType, TimestampNanosecondType, TimestampSecondType,
 };

@@ -170,6 +170,9 @@ pub(crate) fn new_group_values(
             TimeUnit::Microsecond => downcast_helper!(TimestampMicrosecondType, d),
             TimeUnit::Nanosecond => downcast_helper!(TimestampNanosecondType, d),
         },
+        DataType::Decimal128(_, _) => {
+            downcast_helper!(Decimal128Type, d);
+        }
         DataType::Utf8 => {
             return Ok(Box::new(GroupValuesByes::<i32>::new(OutputType::Utf8)));
         }
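
For context on why Decimal128 can take the primitive path at all: its values are plain i128 under the hood, so the existing primitive group-values machinery applies once the concrete DataType (precision and scale) is carried along. A rough standalone sketch of that dispatch pattern, with hypothetical trait and struct names rather than the real GroupValues types:

use std::marker::PhantomData;

use arrow_array::types::{Decimal128Type, Int64Type};
use arrow_array::ArrowPrimitiveType;
use arrow_schema::DataType;

// Hypothetical stand-ins for the real group-values machinery.
trait GroupColumnSketch {}

struct PrimitiveGroups<T: ArrowPrimitiveType> {
    // The concrete type is kept so emit() can restore precision/scale later.
    data_type: DataType,
    _marker: PhantomData<T>,
}

impl<T: ArrowPrimitiveType> GroupColumnSketch for PrimitiveGroups<T> {}

fn new_groups(data_type: &DataType) -> Option<Box<dyn GroupColumnSketch>> {
    match data_type {
        DataType::Int64 => Some(Box::new(PrimitiveGroups::<Int64Type> {
            data_type: data_type.clone(),
            _marker: PhantomData,
        })),
        // The new arm from this PR: Decimal128 is i128 natively, so the
        // primitive path applies; precision/scale travel in data_type.
        DataType::Decimal128(_, _) => Some(Box::new(PrimitiveGroups::<Decimal128Type> {
            data_type: data_type.clone(),
            _marker: PhantomData,
        })),
        _ => None,
    }
}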
@@ -31,8 +31,8 @@ use crate::aggregates::group_values::GroupValues;
 use ahash::RandomState;
 use arrow::compute::cast;
 use arrow::datatypes::{
-    BinaryViewType, Date32Type, Date64Type, Float32Type, Float64Type, Int16Type,
-    Int32Type, Int64Type, Int8Type, StringViewType, Time32MillisecondType,
+    BinaryViewType, Date32Type, Date64Type, Decimal128Type, Float32Type, Float64Type,
+    Int16Type, Int32Type, Int64Type, Int8Type, StringViewType, Time32MillisecondType,
     Time32SecondType, Time64MicrosecondType, Time64NanosecondType,
     TimestampMicrosecondType, TimestampMillisecondType, TimestampNanosecondType,
     TimestampSecondType, UInt16Type, UInt32Type, UInt64Type, UInt8Type,

@@ -1008,6 +1008,14 @@ impl<const STREAMING: bool> GroupValues for GroupValuesColumn<STREAMING> {
                 )
             }
         },
+        &DataType::Decimal128(_, _) => {
+            instantiate_primitive! {
+                v,
+                nullable,
+                Decimal128Type,
+                data_type
+            }
+        }
         &DataType::Utf8 => {
             let b = ByteGroupValueBuilder::<i32>::new(OutputType::Utf8);
             v.push(Box::new(b) as _)

@@ -1214,6 +1222,7 @@ fn supported_type(data_type: &DataType) -> bool {
         | DataType::UInt64
         | DataType::Float32
         | DataType::Float64
+        | DataType::Decimal128(_, _)
         | DataType::Utf8
         | DataType::LargeUtf8
         | DataType::Binary
@@ -208,6 +208,7 @@ where
                 build_primitive(split, null_group)
             }
         };
+
         Ok(vec![Arc::new(array.with_data_type(self.data_type.clone()))])
     }

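For context on the with_data_type call above (a minimal sketch, assuming arrow-rs's default decimal type of Decimal128(38, 10)): an array assembled from raw i128 values carries that default type, so the stored precision and scale have to be re-applied when the group keys are emitted.

use arrow_array::{Array, Decimal128Array};
use arrow_schema::DataType;

fn main() {
    // An array built from raw i128 values gets arrow-rs's default decimal
    // type, Decimal128(38, 10)...
    let a = Decimal128Array::from(vec![12345_i128]);
    assert_eq!(a.data_type(), &DataType::Decimal128(38, 10));

    // ...so emit() must re-stamp the original precision and scale, or
    // downstream schema checks fail (compare the "found Decimal128(38, 10)"
    // in the fuzz error earlier in this thread).
    let a = a.with_data_type(DataType::Decimal128(10, 2)); // now reads 123.45
    assert_eq!(a.data_type(), &DataType::Decimal128(10, 2));
}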
datafusion/sqllogictest/test_files/group_by.slt (39 additions & 0 deletions)
@@ -5499,3 +5499,42 @@ SELECT
GROUP BY ts, text
----
foo 2024-01-01T08:00:00+08:00

# Test multi group by int + Decimal128
statement ok
create table source as values
(1, '123.45'),
(1, '123.45'),
(2, '678.90'),
(2, '1011.12'),
(3, '1314.15'),
(3, '1314.15'),
(2, '1011.12'),
(null, null),
(null, '123.45'),
(null, null),
(null, '123.45'),
(2, '678.90'),
(2, '678.90'),
(1, null)
;

statement ok
create view t as select column1 as a, arrow_cast(column2, 'Decimal128(10, 2)') as b from source;

query IRI
select a, b, count(*) from t group by a, b order by a, b;
----
1 123.45 2
1 NULL 1
2 678.9 3
2 1011.12 2
3 1314.15 2
NULL 123.45 2
NULL NULL 2

statement ok
drop view t

statement ok
drop table source;