This repository has been archived by the owner on Oct 28, 2019. It is now read-only.
The .dataset format is used as the output of most modules in ML Studio (intermediate datasets). For example, the Split module results are in that format.
Studio currently disables the Generate Data Access Code and Open in Notebook features on those output nodes due to lack of deserialization support for that format in Python.
To access those intermediate datasets from Python code, the user needs to insert a Convert to CSV module. Note that this conversion loses some metadata, such as column type information. Pandas can infer the types most of the time, but manual post-processing is sometimes required.
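A minimal sketch of the post-processing step described above, assuming a hypothetical CSV exported by a Convert to CSV module. The column names and the idea that `category` was originally a categorical column are illustrative assumptions, not part of the original report:

```python
import io

import pandas as pd

# Hypothetical CSV produced by a Convert to CSV module. The .dataset
# column-type metadata (e.g. that "category" was categorical) is lost
# in the export, so pandas only sees plain text.
csv_data = io.StringIO(
    "id,category,score\n"
    "1,A,0.5\n"
    "2,B,0.75\n"
)

df = pd.read_csv(csv_data)

# pandas infers int64 / object / float64 on its own; the categorical
# type must be restored manually after the round-trip through CSV.
df["category"] = df["category"].astype("category")
```

This is the kind of manual fix-up the conversion forces on the user; with native `.dataset` deserialization support, the type information would survive the round trip.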
OK, but this is not in the Python version yet, so I think we should focus on Python parity first; we are behind on many things, and having a reference implementation in place is a big help. So I am saying absolutely, but at slightly lower priority.
I didn't see a spec of the format in the material you attached to your email. If you run into something more detailed, can you post it here? Thanks!