Next iteration of test fixes
 On branch feature/fix-tests
 Changes to be committed:
	modified:   .github/workflows/ci.yml
	modified:   Makefile
	modified:   quinn/transformations.py
SemyonSinchenko committed Nov 18, 2023
1 parent 09c3fd4 commit 48372f7
Showing 3 changed files with 12 additions and 3 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -74,7 +74,7 @@ jobs:
             poetry-${{ hashFiles('**/poetry.lock') }}
       - name: Install dependencies
-        run: make install_deps
+        run: make install_pure
         if: steps.cache.outputs.cache-hit != 'true'
 
       - name: Change PySpark to version ${{ matrix.pyspark-version }}
4 changes: 4 additions & 0 deletions Makefile
@@ -1,5 +1,9 @@
 # COMMON CLI COMMANDS FOR DEVELOPMENT
 
+.PHONY: install_pure
+install_pure:
+	@poetry install
+
 .PHONY: install_deps
 install_deps:
 	@poetry install --with=development,linting,testing,docs
9 changes: 7 additions & 2 deletions quinn/transformations.py
@@ -3,7 +3,7 @@
 import re
 from collections.abc import Callable
 
-from pyspark.sql import DataFrame
+from pyspark.sql import DataFrame, SparkSession
 from pyspark.sql import functions as F  # noqa: N812
 from pyspark.sql.types import ArrayType, MapType, StructField, StructType
 
@@ -236,7 +236,12 @@ def fix_nullability(field: StructField, result_dict: dict) -> None:
     for field in output.schema:
         fix_nullability(field, result_dict)
 
-    return output.sparkSession.createDataFrame(output.rdd, output.schema)
+    spark = SparkSession.getActiveSession()
+
+    if spark is None:
+        spark = SparkSession.builder.getOrCreate()
+
+    return spark.createDataFrame(output.rdd, output.schema)
 
 
 def flatten_struct(df: DataFrame, col_name: str, separator: str = ":") -> DataFrame:
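The `transformations.py` change replaces a direct call on `output.sparkSession` with a lookup that tolerates the absence of an active session: try `getActiveSession()` first, and only fall back to `builder.getOrCreate()` when it returns `None`. A minimal sketch of that fallback pattern, using a hypothetical `FakeSession` stand-in (not part of quinn or PySpark) so it runs without a Spark installation:

```python
# FakeSession is a made-up stand-in that mimics the two SparkSession
# entry points the diff relies on: getActiveSession() and
# builder.getOrCreate(). It exists only so this sketch is runnable
# without PySpark.
class FakeSession:
    _active = None

    @classmethod
    def getActiveSession(cls):
        # Like SparkSession.getActiveSession(): returns None when no
        # session is active, rather than raising.
        return cls._active

    class builder:
        @staticmethod
        def getOrCreate():
            # Like SparkSession.builder.getOrCreate(): reuse the
            # existing session or create (and register) a new one.
            if FakeSession._active is None:
                FakeSession._active = FakeSession()
            return FakeSession._active


def resolve_session(session_cls=FakeSession):
    """Return the active session, creating one only if none exists."""
    spark = session_cls.getActiveSession()
    if spark is None:
        spark = session_cls.builder.getOrCreate()
    return spark
```

Because the sketch touches only `getActiveSession()` and `builder.getOrCreate()`, passing the real `SparkSession` class as `session_cls` would exercise the same code path against the actual API.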
