PEM Pipeline #417

Open · wants to merge 36 commits into base: main
Changes from 24 commits (36 commits total)
6eb5a6e  draft task conversion pipeop (studener, Sep 24, 2024)
4d49fa3  draft pipeop + pipeline (studener, Sep 26, 2024)
d786d08  update pred conversion pipeop (studener, Sep 26, 2024)
b7bc0c6  added modelmatrix pipeop to PEM pipeline, changed variable naming to … (markusgoeswein, Oct 29, 2024)
5d6b61b  added col_role original_ids to regression tasks (markusgoeswein, Nov 14, 2024)
c19e4bb  changed id column role to original_ids (markusgoeswein, Nov 14, 2024)
717478d  added additional arguments to TaskSurvRegrPEM to enable more complex … (markusgoeswein, Nov 21, 2024)
976d9c0  form is now to be passed without quotation marks (markusgoeswein, Dec 6, 2024)
35745f0  resolve merge conflict with main, before merging (markusgoeswein, Jan 31, 2025)
e2a6c21  resolve merge conflict in R\piplines.R (markusgoeswein, Jan 31, 2025)
5a75617  setting up unit tests for the PEM pipeline (markusgoeswein, Feb 21, 2025)
a3bdbf5  update function doc (markusgoeswein, Feb 25, 2025)
6bf7332  update remotes for offset support (markusgoeswein, Feb 25, 2025)
690b5c6  remove ped_formula argument in favour of automatically parsing it (markusgoeswein, Mar 3, 2025)
f341679  add assert and set use_pred_offset to FALSE if not done so (markusgoeswein, Mar 3, 2025)
d684cd0  Regenerate Rd files using devtools::document() (markusgoeswein, Mar 3, 2025)
b14e678  adjust assertions in pipeline (markusgoeswein, Mar 4, 2025)
f00f1f9  Merge pull request #436 from mlr-org/main (markusgoeswein, Mar 4, 2025)
7a848bf  setting up test_PEM.R and adjustments to tests in pipelines (markusgoeswein, Mar 4, 2025)
2fc11a4  change lrn() from regr.xgboost to regr.glmnet (markusgoeswein, Mar 13, 2025)
c035e9b  update DESCRIPTION (markusgoeswein, Mar 13, 2025)
cb52f0b  add glmnet to suggests for PEM pipeline tests (markusgoeswein, Mar 13, 2025)
4e509e4  included require_namespace('glmnet, ...) for PEM pipeline tests (markusgoeswein, Mar 13, 2025)
82b8af7  minor fix (markusgoeswein, Mar 13, 2025)
0532d71  set PEM to lowercase in PipeOpTask and PipeOpPred, update DESCRIPTION… (markusgoeswein, Mar 25, 2025)
ec0ffdd  temporary name change of PEM pipeops (markusgoeswein, Mar 25, 2025)
a5402c3  man files are renamed with lowercase pem (markusgoeswein, Mar 25, 2025)
e1404b5  add offset explanation (markusgoeswein, Mar 26, 2025)
bb009bd  code formatting, updated docs and examples (markusgoeswein, Mar 26, 2025)
e232df6  minor updates to function doc and code comment (markusgoeswein, Mar 26, 2025)
5788225  removed rhs argument from pem and disctime pipelines, updated doc and… (markusgoeswein, Mar 26, 2025)
22ca02b  remove rhs from and update pem tests (markusgoeswein, Mar 27, 2025)
f47f72a  adapt test cases to removal of rhs (markusgoeswein, Mar 27, 2025)
7c35ba3  run devtools::document for compiled man files (markusgoeswein, Mar 27, 2025)
484a9fd  rename test_PEM.R (markusgoeswein, Mar 27, 2025)
1621f43  finish rename (markusgoeswein, Mar 27, 2025)
10 changes: 8 additions & 2 deletions DESCRIPTION
@@ -58,13 +58,17 @@ Suggests:
set6 (>= 0.2.6),
simsurv,
survAUC,
testthat (>= 3.0.0)
testthat (>= 3.0.0),
glmnet
LinkingTo:
Rcpp
Remotes:
xoopR/distr6,
xoopR/param6,
xoopR/set6
xoopR/set6,
mlr-org/mlr3,
mlr-org/mlr3learners,
mlr-org/mlr3extralearners
ByteCompile: true
Config/testthat/edition: 3
Encoding: UTF-8
@@ -115,11 +119,13 @@ Collate:
'PipeOpDistrCompositor.R'
'PipeOpPredClassifSurvDiscTime.R'
'PipeOpPredClassifSurvIPCW.R'
'PipeOpPredRegrSurvPEM.R'
'PipeOpProbregrCompositor.R'
'PipeOpResponseCompositor.R'
'PipeOpSurvAvg.R'
'PipeOpTaskSurvClassifDiscTime.R'
'PipeOpTaskSurvClassifIPCW.R'
'PipeOpTaskSurvRegrPEM.R'
'PredictionDataDens.R'
'PredictionDataSurv.R'
'PredictionDens.R'
3 changes: 3 additions & 0 deletions NAMESPACE
@@ -72,11 +72,13 @@ export(PipeOpCrankCompositor)
export(PipeOpDistrCompositor)
export(PipeOpPredClassifSurvDiscTime)
export(PipeOpPredClassifSurvIPCW)
export(PipeOpPredRegrSurvPEM)
export(PipeOpProbregr)
export(PipeOpResponseCompositor)
export(PipeOpSurvAvg)
export(PipeOpTaskSurvClassifDiscTime)
export(PipeOpTaskSurvClassifIPCW)
export(PipeOpTaskSurvRegrPEM)
export(PredictionDens)
export(PredictionSurv)
export(TaskDens)
@@ -95,6 +97,7 @@ export(get_mortality)
export(pecs)
export(pipeline_survtoclassif_IPCW)
export(pipeline_survtoclassif_disctime)
export(pipeline_survtoregr_PEM)
export(plot_probregr)
import(checkmate)
import(data.table)
112 changes: 112 additions & 0 deletions R/PipeOpPredRegrSurvPEM.R
@@ -0,0 +1,112 @@
#' @title PipeOpPredRegrSurvPEM
#' @name mlr_pipeops_trafopred_regrsurv_PEM
#'
#' @description
#' Transform [PredictionRegr] to [PredictionSurv].
#' Predicted hazards are transformed into survival probabilities and wrapped in a
#' [PredictionSurv] object.
#'
#' Continuous time is partitioned into time intervals \eqn{[0, t_1), [t_1, t_2), ..., [t_J, \infty)}.
#' [PredictionRegr] contains the estimates of the piece-wise constant hazards defined as
#' \deqn{\lambda(t \mid \mathbf{x}_i) := \exp(g(\mathbf{x}_i, t_j)), \quad \forall t \in [t_{j-1}, t_j), \quad i = 1, \dots, n.}
#'
#' Via the following identity
#' \deqn{S(t | \mathbf{x}) = \exp \left( - \int_{0}^{t} \lambda(s | \mathbf{x}) \, ds \right) = \exp \left( - \sum_{j = 1}^{J} \lambda(j | \mathbf{x}) d_j\, \right),}
#' where \eqn{d_j} specifies the duration of interval \eqn{j},
#'
#' we compute the survival probability from the predicted hazards.
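As an aside, the back-transformation documented above can be sketched in a few lines of R. The values below are illustrative only; `hazards` and `durations` are hypothetical objects, not names from the diff:

```r
# piece-wise constant hazards lambda(j | x) for one observation, one per interval
hazards = c(0.10, 0.15, 0.25)
# d_j: durations of the intervals [t_{j-1}, t_j)
durations = c(2, 3, 5)

# S(t_j | x) = exp(-cumsum(lambda_j * d_j)), evaluated at each interval end point
surv = exp(-cumsum(hazards * durations))
# surv is non-increasing by construction
```

This mirrors the `exp(-cumsum(...))` computation in the `.predict()` method further down, minus the offset term.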
Comment (Collaborator):
Excellent! I would suggest: 1) remove the time dependency, as we don't support it, i.e. x instead of x(t); 2) describe the g function a bit; 3) add a reference via the bibtex file => Andreas' 2018 paper (A generalized additive model approach to time-to-event analysis).

Comment (Collaborator):
I added some additional descriptions for clarification. At the same time, I am wondering whether PipeOpPred is the appropriate place for these mathematical explanations. I feel that, as a whole, they might be more appropriate in the doc of the pipeline, with the exception of the backtransform portion, i.e. how we get survival probabilities from hazards.

Comment (Collaborator):
I see your point - it makes sense to put the related doc directly in the class that implements what the doc describes, and provide links to that in the pipeline. But it's up to you if you want to add extra math doc somewhere (even a bit duplicated); always welcome!

Comment (Collaborator):
I'll ask Andreas what he thinks/prefers.

#'
#'
#'
#' @section Dictionary:
#' This [PipeOp][mlr3pipelines::PipeOp] can be instantiated via the
#' [dictionary][mlr3misc::Dictionary] [mlr3pipelines::mlr_pipeops]
#' or with the associated sugar function [mlr3pipelines::po()]:
#' ```
#' PipeOpPredRegrSurvPEM$new()
#' mlr_pipeops$get("trafopred_regrsurv_PEM")
#' po("trafopred_regrsurv_PEM")
#' ```
#'
#' @section Input and Output Channels:
#' The input consists of a [PredictionRegr] and a [data.table][data.table::data.table]
#' containing the transformed data. The [PredictionRegr] is provided by the [mlr3::LearnerRegr],
#' while the [data.table] is generated by [PipeOpTaskSurvRegrPEM].
#' The output is the input [PredictionRegr] transformed to a [PredictionSurv].
#' Only works during prediction phase.
#'
#' @family PipeOps
#' @family Transformation PipeOps
#' @export
PipeOpPredRegrSurvPEM = R6Class(
"PipeOpPredRegrSurvPEM",
inherit = mlr3pipelines::PipeOp,

public = list(
#' @description
#' Creates a new instance of this [R6][R6::R6Class] class.
#' @param id (character(1))\cr
#' Identifier of the resulting object.
initialize = function(id = "trafopred_regrsurv_PEM") {
super$initialize(
id = id,
input = data.table(
name = c("input", "transformed_data"),
train = c("NULL", "data.table"),
predict = c("PredictionRegr", "data.table")
),
output = data.table(
name = "output",
train = "NULL",
predict = "PredictionSurv"
)
)
}
),

private = list(
.predict = function(input) {
pred = input[[1]] # predicted hazards provided by the regression learner
data = input[[2]] # transformed data
assert_true(!is.null(pred$response))


data = cbind(data, dt_hazard = pred$response)

# From theory, convert hazards to surv as exp(-cumsum(h(t) * exp(offset)))
rows_per_id = nrow(data) / length(unique(data$id))

surv = t(vapply(unique(data$id), function(unique_id) {
exp(-cumsum(data[data$id == unique_id, ][["dt_hazard"]] * exp(data[data$id == unique_id, ][["offset"]])))
}, numeric(rows_per_id)))


unique_end_times = sort(unique(data$tend))
# coerce to distribution and crank
pred_list = .surv_return(times = unique_end_times, surv = surv)
Comment (Collaborator):
Task: I think this is the part that sometimes results in survival probabilities that are not decreasing, right?
Example:

```r
task = tsk("lung")
l = po("encode") %>>% lrn("regr.xgboost") |> as_learner()
pem = ppl("survtoregr_PEM", learner = l)
pem$train(task)$predict(task)
```

Can we please identify why that is happening? Is it some sort of arithmetic instability, or are some of the calculations above with the offset wrong?

Comment (markusgoeswein, Mar 25, 2025):

```r
task = tsk("lung")
l = po("encode") %>>% lrn("regr.xgboost", objective = "count:poisson") |> as_learner()
pem = ppl("survtoregr_PEM", learner = l)
pem$train(task)
pred = pem$predict(task)
```

You always need to specify the family/objective (depending on the learner) as "poisson" to establish the exponential link between hazard and the learned model. So this works as intended as long as the learner is correctly specified; however, the pipeline does not check whether that has been done. I suppose one could check for this, but I am not sure whether some learners use arguments other than features and family to specify the distributional assumption.

Comment (Collaborator):
Ah nice! We should document that in the example of the pipeline (and the vignette). If you want to do the extra leg, you can create a small function that checks the learner in the pipeline for family/objective parameters and, if it doesn't find the keyword "poisson", throws a warning: "PEM works correctly with learners that support poisson regression". Andreas' list of candidate learners is enough (i.e. it is not that many either way).
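A minimal sketch of the check suggested above. The helper name and the parameter lookup are assumptions, not code from this PR; real learners may expose the distributional setting under other parameter names:

```r
# heuristic: warn unless a "poisson" family/objective appears in the
# learner's parameter values (as discussed in the thread above)
warn_if_not_poisson = function(learner) {
  pv = learner$param_set$values
  vals = unlist(pv[intersect(names(pv), c("family", "objective"))])
  if (length(vals) == 0L || !any(grepl("poisson", vals, ignore.case = TRUE))) {
    warning("PEM works correctly with learners that support poisson regression.")
  }
  invisible(learner)
}
```

Such a helper could be called once inside the pipeline constructor, covering the small list of candidate learners mentioned above.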


# select the real tend values by only selecting the last row of each id
# basically a slightly more complex unique()
real_tend = data$obs_times[seq_len(nrow(data)) %% rows_per_id == 0]

ids = unique(data$id)
# select last row for every id => observed times
id = PEM_status = NULL # to fix note
data = data[, .SD[.N, list(PEM_status)], by = id]

# create prediction object
p = PredictionSurv$new(
row_ids = ids,
crank = pred_list$crank, distr = pred_list$distr,
truth = Surv(real_tend, as.integer(as.character(data$PEM_status))))

list(p)
},

.train = function(input) {
self$state = list()
list(input)
}
)
)
register_pipeop("trafopred_regrsurv_PEM", PipeOpPredRegrSurvPEM)
220 changes: 220 additions & 0 deletions R/PipeOpTaskSurvRegrPEM.R
Original file line number Diff line number Diff line change
@@ -0,0 +1,220 @@
#' @title PipeOpTaskSurvRegrPEM
#' @name mlr_pipeops_trafotask_survregr_PEM
#' @template param_pipelines
#'
#' @description
#' Transform [TaskSurv] to [TaskRegr][mlr3::TaskRegr] by dividing continuous
#' time into multiple time intervals for each observation. The survival data set
#' stored in [TaskSurv] is transformed into Piece-wise Exponential Data (PED) format
#' which in turn forms the backend for [TaskRegr][mlr3::TaskRegr].
#' This transformation creates a new target variable `PEM_status` that indicates
#' whether an event occurred within each time interval.
#'
#' @section Dictionary:
#' This [PipeOp][mlr3pipelines::PipeOp] can be instantiated via the
#' [dictionary][mlr3misc::Dictionary] [mlr3pipelines::mlr_pipeops]
#' or with the associated sugar function [mlr3pipelines::po()]:
#' ```
#' PipeOpTaskSurvRegrPEM$new()
#' mlr_pipeops$get("trafotask_survregr_PEM")
#' po("trafotask_survregr_PEM")
#' ```
#'
#' @section Input and Output Channels:
#' [PipeOpTaskSurvRegrPEM] has one input channel named "input", and two
#' output channels, one named "output" and the other "transformed_data".
#'
#' During training, the "output" is the "input" [TaskSurv] transformed to a
#' [TaskRegr][mlr3::TaskRegr].
#' The target column is named `"PEM_status"` and indicates whether an event occurred
#' in each time interval.
#' An additional feature named `"tend"` contains the end time point of each interval.
Comment (Collaborator):
...numeric feature... (please verify) => add this also in the DiscTime pipeop

Comment (Collaborator):
Numeric indeed.

#' Lastly, the "output" task has an offset column `"offset"`.
Comment (Collaborator):
More precisely: it has a column with col_role offset, which is the ... log of something?

#' The "transformed_data" is an empty [data.table][data.table::data.table].
#'
#' During prediction, the "input" [TaskSurv] is transformed to the "output"
#' [TaskRegr][mlr3::TaskRegr] with `"PEM_status"` as target, while `"tend"`
#' and `"offset"` are included as features.
Comment (Collaborator):
More accurately: offset is not a feature, i.e. it doesn't have the col_role "feature", but the "offset" one.

#' The "transformed_data" is a [data.table] with columns the `"PEM_status"`
#' target of the "output" task, the `"id"` (original observation ids),
#' `"obs_times"` (observed times per `"id"`) and `"tend"` (end time of each interval).
#' This "transformed_data" is only meant to be used with the [PipeOpPredRegrSurvPEM].
#'
#' @section State:
#' The `$state` contains information about the `cut` parameter used.
#'
#' @section Parameters:
#' The parameters are
#'
#' * `cut :: numeric()`\cr
#' Split points, used to partition the data into intervals based on the `time` column.
#' If unspecified, all unique event times will be used.
#' If `cut` is a single integer, it will be interpreted as the number of equidistant
#' intervals from 0 until the maximum event time.
#' * `max_time :: numeric(1)`\cr
#' If `cut` is unspecified, this will be the last possible event time.
#' All event times after `max_time` will be administratively censored at `max_time.`
#' Needs to be greater than the minimum event time in the given task.
#'
#' @examplesIf (mlr3misc::require_namespaces(c("mlr3pipelines", "mlr3extralearners"), quietly = TRUE))
#' \dontrun{
#' library(mlr3)
#' library(mlr3learners)
#' library(mlr3pipelines)
#'
#' task = tsk("lung")
#'
#' # transform the survival task to a poisson regression task
#' # all unique event times are used as cutpoints
#' po_PEM = po("trafotask_survregr_PEM")
#' task_regr = po_PEM$train(list(task))[[1L]]
#'
#' # the end time points of the discrete time intervals
#' unique(task_regr$data(cols = "tend"))[[1L]]
#'
#' # train a regression learner
Comment (Collaborator):
... that supports poisson regression ...

#' learner = lrn("regr.gam") # won't run unless learner can accept offset column role
Comment (Collaborator):
TODO: when I finish the mlr3extralearners PR, we can safely remove this comment here.

Also correct the example and make it a bit more interesting, e.g. => l = lrn("regr.gam", formula = pem_status ~ s(age) + s(tend), family = "poisson") => you definitely need the family = "poisson" argument here.

#' learner$train(task_regr)
#' }
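As a side note on the `cut` parameter documented above, here is a quick sketch of how a single integer is expanded into equidistant split points, mirroring the logic in the `.train()` method below (the numbers are illustrative):

```r
# cut = 4 with a maximum observed event time of 100
cut = 4
max_event_time = 100
seq(0, max_event_time, length.out = cut + 1)
# split points 0, 25, 50, 75, 100 -> four equidistant intervals
```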
#'
#'
#' @family PipeOps
#' @family Transformation PipeOps
#' @export
PipeOpTaskSurvRegrPEM = R6Class("PipeOpTaskSurvRegrPEM",
inherit = mlr3pipelines::PipeOp,

public = list(
#' @description
#' Creates a new instance of this [R6][R6::R6Class] class.
initialize = function(id = "trafotask_survregr_PEM") {
param_set = ps(
cut = p_uty(default = NULL),
max_time = p_dbl(0, default = NULL, special_vals = list(NULL)),
censor_code = p_int(0L),
min_events = p_int(1L)
)
super$initialize(
id = id,
param_set = param_set,
input = data.table(
name = "input",
train = "TaskSurv",
predict = "TaskSurv"
),
output = data.table(
name = c("output", "transformed_data"),
train = c("TaskRegr", "data.table"),
predict = c("TaskRegr", "data.table")
)
)
}
),

private = list(
.train = function(input) {
task = input[[1L]]
Comment (Collaborator):
If you want to experiment and implement the validation stuff for xgboost, here is a bit of what is happening: task will have a predefined validation task here, which is not transformed. What we need to do is something like:

```r
transformed_internal_valid_task = private$.train(list(task$internal_valid_task))
task$internal_valid_task = transformed_internal_valid_task
```

and then go on transforming the task.

assert_true(task$censtype == "right")
data = task$data()

if ("PEM_status" %in% colnames(task$data())) {
stop("\"PEM_status\" can not be a column in the input data.")
}

cut = assert_numeric(self$param_set$values$cut, null.ok = TRUE, lower = 0)
max_time = self$param_set$values$max_time

time_var = task$target_names[1]
event_var = task$target_names[2]
if (testInt(cut, lower = 1)) {
cut = seq(0, data[get(event_var) == 1, max(get(time_var))], length.out = cut + 1)
}

if (!is.null(max_time)) {
assert(max_time > data[get(event_var) == 1, min(get(time_var))],
"max_time must be greater than the minimum event time.")
}

Comment (Collaborator):
Removing redundant empty lines in all code would be nice - some space is good, more space is unnecessary.



ped_formula = formulate(sprintf("Surv(%s, %s)", time_var, event_var), ".")
long_data = pammtools::as_ped(data = data, formula = ped_formula, cut = cut, max_time = max_time)
long_data = as.data.table(long_data)

self$state$cut = attributes(long_data)$trafo_args$cut

setnames(long_data, old = "ped_status", new = "PEM_status")

# remove some columns from `long_data`
long_data[, c("tstart", "interval") := NULL]
# keep id mapping
reps = table(long_data$id)
ids = rep(task$row_ids, times = reps)
id = NULL
long_data[, id := ids]

task_PEM = TaskRegr$new(paste0(task$id, "_PEM"), long_data,
target = "PEM_status")
Comment (Collaborator):
A bit more proper indentation style => target should be below new( <= here. Please check all code for this.

task_PEM$set_col_roles("id", roles = "original_ids")
task_PEM$set_col_roles('offset', roles = "offset")
Comment (Collaborator):
Style: no ' anywhere in the code please, use only "!


list(task_PEM, data.table())
},

.predict = function(input) {
task = input[[1]]
data = task$data()

# extract `cut` from `state`
cut = self$state$cut

time_var = task$target_names[1]
event_var = task$target_names[2]

max_time = max(cut)
time = data[[time_var]]
data[[time_var]] = max_time

status = data[[event_var]]
data[[event_var]] = 1


Comment (Collaborator):
Instead of extra space: a good informative comment!

Comment (markusgoeswein, Mar 26, 2025):
Frankly, I'm not quite sure what the purpose of data[[event_var]] = 1 is.
As for data[[time_var]] = max_time, this ensures that for each subject the ped data spans all intervals instead of only until the event time, which of course is sensible for prediction. I added this as a comment.

ped_formula = formulate(sprintf("Surv(%s, %s)", time_var, event_var), ".")
long_data = pammtools::as_ped(data = data, formula = ped_formula, cut = cut, max_time = max_time)
long_data = as.data.table(long_data)

setnames(long_data, old = "ped_status", new = "PEM_status")

PEM_status = id = tend = obs_times = NULL # fixing global binding notes of data.table
long_data[, PEM_status := 0]
# set correct id
rows_per_id = nrow(long_data) / length(unique(long_data$id))
long_data$obs_times = rep(time, each = rows_per_id)
ids = rep(task$row_ids, each = rows_per_id)
long_data[, id := ids]

# set correct PEM_status
reps = long_data[, data.table(count = sum(tend >= obs_times)), by = id]$count
status = rep(status, times = reps)
long_data[long_data[, .I[tend >= obs_times], by = id]$V1, PEM_status := status]

# remove some columns from `long_data`
long_data[, c("tstart", "interval", "obs_times") := NULL]
task_PEM = TaskRegr$new(paste0(task$id, "_PEM"), long_data,
target = "PEM_status")
task_PEM$set_col_roles("id", roles = "original_ids")
task_PEM$set_col_roles('offset', roles = "offset")

# map observed times back
reps = table(long_data$id)
long_data$obs_times = rep(time, each = rows_per_id)
# subset transformed data
columns_to_keep = c("id", "obs_times", "tend", "PEM_status", "offset")
long_data = long_data[, columns_to_keep, with = FALSE]

list(task_PEM, long_data)
}
)
)

register_pipeop("trafotask_survregr_PEM", PipeOpTaskSurvRegrPEM)
3 changes: 2 additions & 1 deletion R/aaa.R
@@ -51,7 +51,8 @@ register_reflections = function() {

x$task_col_roles$surv = x$task_col_roles$regr
x$task_col_roles$dens = c("feature", "target", "label", "order", "group", "weight", "stratum")
x$task_col_roles$classif = unique(c(x$task_col_roles$classif, "original_ids")) # for discrete time
x$task_col_roles$classif = unique(c(x$task_col_roles$classif, "original_ids"))# for discrete time
x$task_col_roles$regr = unique(c(x$task_col_roles$regr, "original_ids"))
Comment (bblodfon, Mar 18, 2025):
# for pem

x$task_properties$surv = x$task_properties$regr
x$task_properties$dens = x$task_properties$regr
