It took me quite a while to figure out how to use Dataset.subset.
The documentation states the interface is subset(rows=None, cols=None), so my first assumption was to just pass two ints for the number of requested rows and columns. When that didn't work, I passed a list of column indices, but that didn't work either.
Only after debugging did I find out that I first need to define column headers on the Dataset instance and then pass a subset of those headers as cols.
In my opinion, this could be made clearer in the documentation.
Also, is there a reason why headers are required and we cannot alternatively pass column indices?
I would go ahead and try to implement that myself if you don't mind.
I need just a slice of a Dataset, but returned as another Dataset instance. That's what "Pythonic" means: if you make Dataset behave like, say, a list, then do it all the way. Then I could, for example, take a subset of my data and convert it into another format. Returning a plain list from a slice is just ugly.