
feat: Add GroupColumn Decimal128Array #13564

Merged 10 commits on Dec 4, 2024
Changes from 6 commits

@@ -87,7 +87,12 @@ impl DatasetGeneratorConfig {
.iter()
.filter_map(|d| {
if d.column_type.is_numeric()
&& !matches!(d.column_type, DataType::Float32 | DataType::Float64)
&& !matches!(

Contributor:

Something is wrong here. This change effectively turns off fuzz testing for sum with decimal:

When I reverted this change, the fuzz tests occasionally failed like this:

test fuzz_cases::aggregate_fuzz::test_sum ... FAILED
...
Arrow error: Invalid argument error: column types must match schema types, expected Decimal128(21, -112) but found Decimal128(38, 10) at column index 1
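
A minimal sketch of how this kind of Arrow error arises (hypothetical standalone code, not the fuzz harness itself): RecordBatch::try_new rejects an array whose data type does not match the corresponding schema field, which is what happens when the sum output type and the fuzz-generated schema disagree.

    use std::sync::Arc;

    use arrow_array::types::Decimal128Type;
    use arrow_array::{ArrayRef, PrimitiveArray, RecordBatch};
    use arrow_schema::{DataType, Field, Schema};

    fn main() {
        // The schema declares one precision/scale...
        let schema = Arc::new(Schema::new(vec![Field::new(
            "c1",
            DataType::Decimal128(21, 2),
            false,
        )]));
        // ...while the array keeps the Decimal128Type default of Decimal128(38, 10).
        let array: ArrayRef =
            Arc::new(PrimitiveArray::<Decimal128Type>::from(vec![1_i128]));

        // Prints "Invalid argument error: column types must match schema types, ..."
        let err = RecordBatch::try_new(schema, vec![array]).unwrap_err();
        println!("{err}");
    }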

Contributor:

From my perspective, other than this change, this PR is ready to go.

Thank you @jonathanc-n

Contributor Author:

@alamb thanks for the tip, I reverted that change! For the test_sum fuzz test, I excluded it the same way I did for the Float types, because of a failure when casting to a datetime type. This was the error I got when executing with a backtrace (along with many more of the same error):

ERROR: Cast error: Failed to convert 39087111289254881.41 to datetime for Timestamp(Millisecond, None)

I'm getting this error; should I still remove it?

Contributor @jayzhan211, Dec 2, 2024:

The real issue is that we have decimal(38, 10), which is the fixed precision for sum, and it mismatches with the fuzz test, which uses a random precision:

    fn return_type(&self, arg_types: &[DataType]) -> Result<DataType> {
        match &arg_types[0] {
            DataType::Int64 => Ok(DataType::Int64),
            DataType::UInt64 => Ok(DataType::UInt64),
            DataType::Float64 => Ok(DataType::Float64),
            DataType::Decimal128(precision, scale) => {
                // in the spark, the result type is DECIMAL(min(38,precision+10), s)
                // ref: https://github.com/apache/spark/blob/fcf636d9eb8d645c24be3db2d599aba2d7e2955a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Sum.scala#L66
                let new_precision = DECIMAL128_MAX_PRECISION.min(*precision + 10);
                Ok(DataType::Decimal128(new_precision, *scale))
            }
            DataType::Decimal256(precision, scale) => {
                // in the spark, the result type is DECIMAL(min(38,precision+10), s)
                // ref: https://github.com/apache/spark/blob/fcf636d9eb8d645c24be3db2d599aba2d7e2955a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Sum.scala#L66
                let new_precision = DECIMAL256_MAX_PRECISION.min(*precision + 10);
                Ok(DataType::Decimal256(new_precision, *scale))
            }
            other => {
                exec_err!("[return_type] SUM not supported for {}", other)
            }
        }
    }
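
As a worked example of the min(38, precision + 10) rule above (the helper below is illustrative only, not a DataFusion API):

    // Illustrative helper mirroring the widening rule in return_type above.
    fn sum_decimal128_return(precision: u8, scale: i8) -> (u8, i8) {
        const DECIMAL128_MAX_PRECISION: u8 = 38;
        (DECIMAL128_MAX_PRECISION.min(precision + 10), scale)
    }

    fn main() {
        // A low-precision input widens by 10 digits...
        assert_eq!(sum_decimal128_return(10, 2), (20, 2));
        // ...while a higher-precision input is capped at 38, so a randomly chosen
        // input precision generally does not map back to the same output type.
        assert_eq!(sum_decimal128_return(30, 3), (38, 3));
    }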

Contributor:

Decimal128(38, 3) for normal precision, Decimal128(30, 3) for grouping. I'm not sure why there is a mismatch in the fuzz test. We should either align the precision for both cases or, if they do not need to have the same precision, fix the fuzz schema check to accept what the slt below shows:

query TT
select arrow_typeof(sum(column1)), arrow_typeof(sum(distinct column1)) from t group by column2;
----
Decimal128(38, 3) Decimal128(30, 3)
Decimal128(38, 3) Decimal128(30, 3)

query TT
explain select sum(column1), sum(distinct column1) from t group by column2;
----
logical_plan
01)Projection: sum(alias2) AS sum(t.column1), sum(alias1) AS sum(DISTINCT t.column1)
02)--Aggregate: groupBy=[[t.column2]], aggr=[[sum(alias2), sum(alias1)]]
03)----Aggregate: groupBy=[[t.column2, t.column1 AS alias1]], aggr=[[sum(t.column1) AS alias2]]
04)------TableScan: t projection=[column1, column2]
physical_plan
01)ProjectionExec: expr=[sum(alias2)@1 as sum(t.column1), sum(alias1)@2 as sum(DISTINCT t.column1)]
02)--AggregateExec: mode=FinalPartitioned, gby=[column2@0 as column2], aggr=[sum(alias2), sum(alias1)]
03)----CoalesceBatchesExec: target_batch_size=8192
04)------RepartitionExec: partitioning=Hash([column2@0], 4), input_partitions=4
05)--------AggregateExec: mode=Partial, gby=[column2@0 as column2], aggr=[sum(alias2), sum(alias1)]
06)----------AggregateExec: mode=FinalPartitioned, gby=[column2@0 as column2, alias1@1 as alias1], aggr=[alias2]
07)------------CoalesceBatchesExec: target_batch_size=8192
08)--------------RepartitionExec: partitioning=Hash([column2@0, alias1@1], 4), input_partitions=4
09)----------------RepartitionExec: partitioning=RoundRobinBatch(4), input_partitions=1
10)------------------AggregateExec: mode=Partial, gby=[column2@1 as column2, column1@0 as alias1], aggr=[alias2]
11)--------------------MemoryExec: partitions=1, partition_sizes=[1]

d.column_type,
DataType::Float32
| DataType::Float64
| DataType::Decimal128(_, _)
)
{
Some(d.name.as_str())
} else {

5 changes: 4 additions & 1 deletion datafusion/physical-plan/src/aggregates/group_values/mod.rs
@@ -19,7 +19,7 @@
use arrow::record_batch::RecordBatch;
use arrow_array::types::{
Date32Type, Date64Type, Time32MillisecondType, Time32SecondType,
Date32Type, Date64Type, Decimal128Type, Time32MillisecondType, Time32SecondType,
Time64MicrosecondType, Time64NanosecondType, TimestampMicrosecondType,
TimestampMillisecondType, TimestampNanosecondType, TimestampSecondType,
};

@@ -170,6 +170,9 @@ pub(crate) fn new_group_values(
TimeUnit::Microsecond => downcast_helper!(TimestampMicrosecondType, d),
TimeUnit::Nanosecond => downcast_helper!(TimestampNanosecondType, d),
},
DataType::Decimal128(_, _) => {
downcast_helper!(Decimal128Type, d);
}
DataType::Utf8 => {
return Ok(Box::new(GroupValuesByes::<i32>::new(OutputType::Utf8)));
}

@@ -31,8 +31,8 @@ use crate::aggregates::group_values::GroupValues
use ahash::RandomState;
use arrow::compute::cast;
use arrow::datatypes::{
BinaryViewType, Date32Type, Date64Type, Float32Type, Float64Type, Int16Type,
Int32Type, Int64Type, Int8Type, StringViewType, Time32MillisecondType,
BinaryViewType, Date32Type, Date64Type, Decimal128Type, Float32Type, Float64Type,
Int16Type, Int32Type, Int64Type, Int8Type, StringViewType, Time32MillisecondType,
Time32SecondType, Time64MicrosecondType, Time64NanosecondType,
TimestampMicrosecondType, TimestampMillisecondType, TimestampNanosecondType,
TimestampSecondType, UInt16Type, UInt32Type, UInt64Type, UInt8Type,

@@ -1008,6 +1008,14 @@ impl<const STREAMING: bool> GroupValues for GroupValuesColumn<STREAMING> {
)
}
},
&DataType::Decimal128(_, _) => {
instantiate_primitive! {
v,
nullable,
Decimal128Type,
data_type
}
}
&DataType::Utf8 => {
let b = ByteGroupValueBuilder::<i32>::new(OutputType::Utf8);
v.push(Box::new(b) as _)

@@ -1214,6 +1222,7 @@ fn supported_type(data_type: &DataType) -> bool {
| DataType::UInt64
| DataType::Float32
| DataType::Float64
| DataType::Decimal128(_, _)
| DataType::Utf8
| DataType::LargeUtf8
| DataType::Binary

@@ -22,6 +22,7 @@ use arrow_array::cast::AsArray
use arrow_array::{Array, ArrayRef, ArrowPrimitiveType, PrimitiveArray};
use arrow_schema::DataType;
use datafusion_execution::memory_pool::proxy::VecAllocExt;
use datafusion_physical_expr::aggregate::utils::adjust_output_array;
use itertools::izip;
use std::iter;
use std::sync::Arc;

@@ -190,9 +191,13 @@ impl<T: ArrowPrimitiveType, const NULLABLE: bool> GroupColumn
assert!(nulls.is_none(), "unexpected nulls in non nullable input");
}

let arr = PrimitiveArray::<T>::new(ScalarBuffer::from(group_values), nulls);
let arr = PrimitiveArray::<T>::new(ScalarBuffer::from(group_values), nulls)
.with_data_type(data_type.clone());
let array_ref = Arc::new(arr) as ArrayRef;

// Set timezone information for timestamp
Arc::new(arr.with_data_type(data_type))
adjust_output_array(&data_type, array_ref)

Contributor:

If we need adjust_output_array, then I think we don't need with_data_type to set the timezone information, since that is handled by adjust_output_array as well.

Contributor Author:

The precision and scale aren't kept in the generic type when constructing the buffer, so I think I need to keep with_data_type.
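
A minimal sketch of this point (assuming arrow-rs defaults): a PrimitiveArray built only from the Decimal128Type generic carries the type's default Decimal128(38, 10), not the column's actual precision and scale, so the concrete DataType has to be re-applied afterwards.

    use arrow_array::types::Decimal128Type;
    use arrow_array::{Array, PrimitiveArray};
    use arrow_schema::DataType;

    fn main() {
        let arr = PrimitiveArray::<Decimal128Type>::from(vec![12345_i128]);
        // The generic alone only yields the default type for Decimal128Type.
        assert_eq!(arr.data_type(), &DataType::Decimal128(38, 10));

        // Re-tag the same buffer with the real column type, e.g. Decimal128(10, 2).
        let arr = arr.with_data_type(DataType::Decimal128(10, 2));
        assert_eq!(arr.data_type(), &DataType::Decimal128(10, 2));
    }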

Contributor:

It is kept in data_type. When you run adjust_output_array, the precision and scale of the Decimal are set with with_precision_and_scale:

pub fn adjust_output_array(data_type: &DataType, array: ArrayRef) -> Result<ArrayRef> {
    let array = match data_type {
        DataType::Decimal128(p, s) => Arc::new(
            array
                .as_primitive::<Decimal128Type>()
                .clone()
                .with_precision_and_scale(*p, *s)?,
        ) as ArrayRef,

Contributor Author:

Yeah, that is what I thought as well, but in ea6f77a it fails with precision/scale errors because the original array does not have .with_data_type applied to it.

Contributor:

The error you mentioned can be fixed by adding support for Decimal256

Contributor:

I tried backing out the use of adjust_output_array and I didn't see any failures locally 🤔

.expect("Failed to adjust array data type")
}

fn take_n(&mut self, n: usize) -> ArrayRef {

@@ -28,6 +28,7 @@ use arrow_schema::DataType;
use datafusion_common::Result;
use datafusion_execution::memory_pool::proxy::VecAllocExt;
use datafusion_expr::EmitTo;
use datafusion_physical_expr::aggregate::utils::adjust_output_array;
use half::f16;
use hashbrown::raw::RawTable;
use std::mem::size_of;

@@ -208,7 +209,13 @@ where
build_primitive(split, null_group)
}
};
Ok(vec![Arc::new(array.with_data_type(self.data_type.clone()))])
let array_ref =
Arc::new(array.with_data_type(self.data_type.clone())) as ArrayRef;

let adjusted_array = adjust_output_array(&self.data_type, array_ref)
.expect("Failed to adjust array data type");

Ok(vec![adjusted_array])
}

fn clear_shrink(&mut self, batch: &RecordBatch) {

39 changes: 39 additions & 0 deletions datafusion/sqllogictest/test_files/group_by.slt
@@ -5499,3 +5499,42 @@ SELECT
GROUP BY ts, text
----
foo 2024-01-01T08:00:00+08:00

# Test multi group by int + Decimal128
statement ok
create table source as values
(1, '123.45'),
(1, '123.45'),
(2, '678.90'),
(2, '1011.12'),
(3, '1314.15'),
(3, '1314.15'),
(2, '1011.12'),
(null, null),
(null, '123.45'),
(null, null),
(null, '123.45'),
(2, '678.90'),
(2, '678.90'),
(1, null)
;

statement ok
create view t as select column1 as a, arrow_cast(column2, 'Decimal128(10, 2)') as b from source;

query IRI
select a, b, count(*) from t group by a, b order by a, b;
----
1 123.45 2
1 NULL 1
2 678.9 3
2 1011.12 2
3 1314.15 2
NULL 123.45 2
NULL NULL 2

statement ok
drop view t

statement ok
drop table source;