Refactor Python code #580

Merged (15 commits) on Sep 14, 2023
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
@@ -29,7 +29,7 @@ Updating other julia files may be required.
- update `docs/core/equations.qmd`
- update `docs/core/usage.qmd`
- update `docs/python/examples.ipynb` # or start a new example model
- update `docs/schema*.json` by running `julia --project=docs docs/gen_schema.jl` and `datamodel-codegen --use-title-as-name --input docs/schema/root.schema.json --output python/ribasim/ribasim/models.py`
- update `docs/schema*.json` by running `julia --project=docs docs/gen_schema.jl` and `datamodel-codegen --use-title-as-name --use-double-quotes --disable-timestamp --use-default --strict-nullable --input docs/schema/root.schema.json --output python/ribasim/ribasim/models.py` and `datamodel-codegen --use-title-as-name --use-double-quotes --disable-timestamp --use-default --strict-nullable --input docs/schema/Config.schema.json --output python/ribasim/ribasim/config.py`
- update the instructions in `docs/contribute/*.qmd` if something changes there, e.g. something changes in how a new node type must be defined.

## QGIS
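For reference, the schema and Python regeneration steps listed in this template can be chained in a small Julia helper. This is a sketch, not part of the PR; it assumes a repository checkout with `julia` and `datamodel-codegen` available on the PATH.

```
# Hypothetical helper: regenerate the JSON Schemas and the generated Python
# modules in one go. The flags mirror the checklist item above.
codegen_flags = [
    "--use-title-as-name", "--use-double-quotes", "--disable-timestamp",
    "--use-default", "--strict-nullable",
]

run(`julia --project=docs docs/gen_schema.jl`)
run(`datamodel-codegen $codegen_flags --input docs/schema/root.schema.json --output python/ribasim/ribasim/models.py`)
run(`datamodel-codegen $codegen_flags --input docs/schema/Config.schema.json --output python/ribasim/ribasim/config.py`)
```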
11 changes: 8 additions & 3 deletions core/src/config.jl
@@ -67,7 +67,7 @@ const nodetypes = collect(keys(nodekinds))

@option struct Solver <: TableOption
algorithm::String = "QNDF"
saveat::Union{Float64, Vector{Float64}, Vector{Union{}}} = Float64[]
saveat::Union{Float64, Vector{Float64}} = Float64[]
adaptive::Bool = true
dt::Float64 = 0.0
abstol::Float64 = 1e-6
@@ -109,15 +109,15 @@ end
timing::Bool = false
end

@option @addnodetypes struct Config
@option @addnodetypes struct Config <: TableOption
starttime::DateTime
endtime::DateTime

# [s] Δt for periodic update frequency, including user horizons
update_timestep::Float64 = 60 * 60 * 24.0

# optional, when Config is created from a TOML file, this is its directory
relative_dir::String = pwd()
relative_dir::String = "." # ignored(!)
input_dir::String = "."
output_dir::String = "."

@@ -144,6 +144,11 @@ function Configurations.from_dict(::Type{Logging}, ::Type{LogLevel}, level::Abst
)
end

# [] in TOML is parsed as a Vector{Union{}}
function Configurations.from_dict(::Type{Solver}, t::Type, saveat::Vector{Union{}})
return Float64[]
end

# TODO Use with proper alignment
function Base.show(io::IO, c::Config)
println(io, "Ribasim Config")
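The `saveat` type change and the new `Configurations.from_dict` method above belong together: an empty array in TOML has element type `Union{}`, so it no longer matches `Union{Float64, Vector{Float64}}` and is coerced to `Float64[]` instead. A minimal standalone sketch of that behavior; the `coerce_saveat` helper here is only an illustration of what the override does, not code from the PR.

```
using TOML

d = TOML.parse("""
[solver]
saveat = []
""")
typeof(d["solver"]["saveat"])  # Vector{Union{}} on recent Julia versions, as the new comment in config.jl notes

# What the new from_dict method effectively does for this field:
coerce_saveat(::Vector{Union{}}) = Float64[]
coerce_saveat(saveat) = saveat

coerce_saveat(d["solver"]["saveat"])  # Float64[]
coerce_saveat(86400.0)                # non-empty values pass through unchanged
```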
5 changes: 3 additions & 2 deletions docs/contribute/addnode.qmd
@@ -247,10 +247,11 @@ If you haven't done so before, you first need to instantiate your docs environme
Run `julia --project=docs`, followed by running `instantiate` in the Pkg mode (press `]`).
:::

To generate the Python module `models.py` from the JSON Schemas, run:
To generate the Python modules `models.py` and `config.py` from the JSON Schemas, run:

```
datamodel-codegen --use-title-as-name --input docs/schema/root.schema.json --output python/ribasim/ribasim/models.py
datamodel-codegen --use-title-as-name --use-double-quotes --disable-timestamp --use-default --strict-nullable --input docs/schema/root.schema.json --output python/ribasim/ribasim/models.py
datamodel-codegen --use-title-as-name --use-double-quotes --disable-timestamp --use-default --strict-nullable --input docs/schema/Config.schema.json --output python/ribasim/ribasim/config.py
```
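These two commands, together with the environment setup described above, can also be scripted. A rough Julia equivalent for the setup and schema generation (a sketch, assuming it is run from the repository root):

```
using Pkg

Pkg.activate("docs")
Pkg.instantiate()

# Regenerate docs/schema/*.schema.json before invoking datamodel-codegen:
include("docs/gen_schema.jl")
```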

Since adding a node type touches both the Python and Julia code,
89 changes: 68 additions & 21 deletions docs/gen_schema.jl
@@ -11,10 +11,14 @@ using JSON3
using Legolas
using InteractiveUtils
using Dates
using Configurations
using Logging

# set empty to have local file references for development
const prefix = "https://deltares.github.io/Ribasim/schema/"

jsondefault(x) = identity(x)
jsondefault(x::LogLevel) = "info"
jsontype(x) = jsontype(typeof(x))
jsonformat(x) = jsonformat(typeof(x))
jsontype(::Type{<:AbstractString}) = "string"
@@ -23,28 +27,43 @@ jsontype(::Type{<:AbstractFloat}) = "number"
jsonformat(::Type{<:Float64}) = "double"
jsonformat(::Type{<:Float32}) = "float"
jsontype(::Type{<:Number}) = "number"
jsontype(::Type{<:AbstractVector}) = "list"
jsontype(::Type{<:AbstractVector}) = "array"
jsontype(::Type{<:Bool}) = "boolean"
jsontype(::Type{LogLevel}) = "string"
jsontype(::Type{<:Enum}) = "string"
jsontype(::Type{<:Missing}) = "null"
jsontype(::Type{<:DateTime}) = "string"
jsonformat(::Type{<:DateTime}) = "date-time"
jsontype(::Type{<:Nothing}) = "null"
jsontype(::Type{<:Any}) = "object"
jsonformat(::Type{<:Any}) = "default"
jsontype(T::Union) = unique(filter(!isequal("null"), jsontype.(Base.uniontypes(T))))
function jsontype(T::Union)
t = Base.uniontypes(T)
td = Dict(zip(t, jsontype.(t)))
length(td) == 1 && return first(values(td))
types = Dict[]
for (t, jt) in td
nt = Dict{String, Any}("type" => jt)
if t <: AbstractVector
nt["items"] = Dict("type" => jsontype(eltype(t)))
end
push!(types, nt)
end
return Dict("anyOf" => types)
end
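As an illustration (not part of the diff), the new `Union` method expands a field type such as `Solver.saveat` into a JSON Schema `anyOf`:

```
jsontype(Union{Float64, Vector{Float64}})
# returns, shown schematically (element order may vary):
# Dict("anyOf" => [
#     Dict{String, Any}("type" => "number"),
#     Dict{String, Any}("type" => "array", "items" => Dict("type" => "number")),
# ])
# which serializes to the schema fragment
# {"anyOf": [{"type": "number"}, {"type": "array", "items": {"type": "number"}}]}
```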

function strip_prefix(T::DataType)
(p, v) = rsplit(string(T), 'V'; limit = 2)
n = string(T)
(p, _) = occursin('V', n) ? rsplit(n, 'V'; limit = 2) : (n, "")
return string(last(rsplit(p, '.'; limit = 2)))
end
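A quick illustration of the reworked `strip_prefix`; the type names are taken from this PR, but the exact module paths are assumptions.

```
strip_prefix(Ribasim.BasinStaticV1)  # "BasinStatic": module prefix and version suffix stripped
strip_prefix(Ribasim.config.Solver)  # "Solver": no 'V' in the name; the previous method errored on such names
```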

function gen_root_schema(TT::Vector, prefix = prefix)
name = "root"
function gen_root_schema(TT::Vector, prefix = prefix, name = "root")
schema = Dict(
"\$schema" => "https://json-schema.org/draft/2020-12/schema",
"properties" => Dict{String, Dict}(),
"\$id" => "$(prefix)$name.schema.json",
"title" => "root",
"title" => name,
"description" => "All Ribasim Node types",
"type" => "object",
)
@@ -60,7 +79,7 @@ end

os_line_separator() = Sys.iswindows() ? "\r\n" : "\n"

function gen_schema(T::DataType, prefix = prefix)
function gen_schema(T::DataType, prefix = prefix; pandera = true)
[Review comment, Contributor] Why is pandera optional?

[Reply, Member Author] Because I hacked the remarks field into each type specifically for pandera. But the generation of the Config doesn't need it. Frankly, this whole file is a hack, but it works. Doing it right requires a new package that actually implements the whole of JSONSchema.

name = strip_prefix(T)
schema = Dict(
"\$schema" => "https://json-schema.org/draft/2020-12/schema",
@@ -71,24 +90,49 @@ function gen_schema(T::DataType, prefix = prefix)
"properties" => Dict{String, Dict}(),
"required" => String[],
)
for (fieldname, fieldtype) in zip(fieldnames(T), fieldtypes(T))
fieldname = string(fieldname)
schema["properties"][fieldname] = Dict(
"description" => "$fieldname",
"type" => jsontype(fieldtype),
"format" => jsonformat(fieldtype),
for (fieldnames, fieldtype) in zip(fieldnames(T), fieldtypes(T))
fieldname = string(fieldnames)
ref = false
if fieldtype <: Ribasim.config.TableOption
schema["properties"][fieldname] = Dict(
"\$ref" => "$(prefix)$(strip_prefix(fieldtype)).schema.json",
"default" => fieldtype(),
)
ref = true
else
type = jsontype(fieldtype)
schema["properties"][fieldname] =
Dict{String, Any}("format" => jsonformat(fieldtype))
if type isa AbstractString
schema["properties"][fieldname]["type"] = type
else
merge!(schema["properties"][fieldname], type)
end
end
if T <: Ribasim.config.TableOption
d = field_default(T, fieldnames)
if !(d isa Configurations.ExproniconLite.NoDefault)
if !ref
schema["properties"][fieldname]["default"] = jsondefault(d)
end
end
end
if !(
(fieldtype isa Union) &&
((fieldtype.a === Missing) || (fieldtype.a === Nothing))
)
if !((fieldtype isa Union) && (fieldtype.a === Missing))
push!(schema["required"], fieldname)
end
end
# Temporary hack so pandera will keep the Pydantic record types
schema["properties"]["remarks"] = Dict(
"description" => "a hack for pandera",
"type" => "string",
"format" => "default",
"default" => "",
)
if pandera
# Temporary hack so pandera will keep the Pydantic record types
schema["properties"]["remarks"] = Dict(
"description" => "a hack for pandera",
"type" => "string",
"format" => "default",
"default" => "",
)
end
open(normpath(@__DIR__, "schema", "$(name).schema.json"), "w") do io
JSON3.pretty(io, schema)
println(io)
@@ -106,4 +150,7 @@ end
for T in subtypes(Legolas.AbstractRecord)
gen_schema(T)
end
for T in subtypes(Ribasim.config.TableOption)
gen_schema(T; pandera = false)
end
gen_root_schema(subtypes(Legolas.AbstractRecord))
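Taken together, the `TableOption` branch above means a nested option field (assuming `Config` has e.g. a `solver::Solver = Solver()` field, which this diff does not show) is emitted as a `$ref` to that type's own schema with an inline default. A sketch of what the property loop produces, reusing the `prefix` constant and `Solver` type from this file:

```
# Hypothetical fragment built by gen_schema for a TableOption-typed field:
schema["properties"]["solver"] = Dict(
    "\$ref" => "$(prefix)Solver.schema.json",  # https://deltares.github.io/Ribasim/schema/Solver.schema.json
    "default" => Solver(),                     # serialized with the Solver defaults (algorithm = "QNDF", ...)
)
```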
7 changes: 0 additions & 7 deletions docs/schema/BasinForcing.schema.json
@@ -9,37 +9,30 @@
},
"time": {
"format": "date-time",
"description": "time",
"type": "string"
},
"precipitation": {
"format": "double",
"description": "precipitation",
"type": "number"
},
"infiltration": {
"format": "double",
"description": "infiltration",
"type": "number"
},
"urban_runoff": {
"format": "double",
"description": "urban_runoff",
"type": "number"
},
"node_id": {
"format": "default",
"description": "node_id",
"type": "integer"
},
"potential_evaporation": {
"format": "double",
"description": "potential_evaporation",
"type": "number"
},
"drainage": {
"format": "double",
"description": "drainage",
"type": "number"
}
},
3 changes: 0 additions & 3 deletions docs/schema/BasinProfile.schema.json
@@ -9,17 +9,14 @@
},
"area": {
"format": "double",
"description": "area",
"type": "number"
},
"node_id": {
"format": "default",
"description": "node_id",
"type": "integer"
},
"level": {
"format": "double",
"description": "level",
"type": "number"
}
},
2 changes: 0 additions & 2 deletions docs/schema/BasinState.schema.json
@@ -9,12 +9,10 @@
},
"node_id": {
"format": "default",
"description": "node_id",
"type": "integer"
},
"level": {
"format": "double",
"description": "level",
"type": "number"
}
},
6 changes: 0 additions & 6 deletions docs/schema/BasinStatic.schema.json
@@ -9,32 +9,26 @@
},
"precipitation": {
"format": "double",
"description": "precipitation",
"type": "number"
},
"infiltration": {
"format": "double",
"description": "infiltration",
"type": "number"
},
"urban_runoff": {
"format": "double",
"description": "urban_runoff",
"type": "number"
},
"node_id": {
"format": "default",
"description": "node_id",
"type": "integer"
},
"potential_evaporation": {
"format": "double",
"description": "potential_evaporation",
"type": "number"
},
"drainage": {
"format": "double",
"description": "drainage",
"type": "number"
}
},