The data structures introduced and used by the dwave-ocean-sdk mostly consist of several substructures that may or may not be native to Python. For example, the class dwave.system.EmbeddingComposite [1] consists of properties, such as EmbeddingComposite.child and EmbeddingComposite.parameters, and methods, such as EmbeddingComposite.sample_qubo(...); likewise, the class dimod.SampleSet [2] consists of properties, such as SampleSet.record, and methods, such as SampleSet.lowest(...). However, it is currently not possible to simply write instances of them to files and read them back from files in a different script. Please do not consider this a limitation to my examples; I know there are a lot more.
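To make the mix of properties and methods concrete, here is a small sketch using dimod's reference ExactSolver (it assumes only that dimod is installed; no QPU access is needed):

```python
import dimod

# Build a small SampleSet with the reference ExactSolver, then access a
# property and a method on it.
sampleset = dimod.ExactSolver().sample_qubo({("a", "a"): -1.0, ("a", "b"): 2.0})
print(sampleset.record)    # property: NumPy record array of samples and energies
print(sampleset.lowest())  # method: SampleSet restricted to the lowest energies
```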
In my opinion, a missing feature of the dwave-ocean-sdk is the ability to write all kinds of data structures related to the dwave packages to file(s) and read them back in. For example, NumPy offers, among many other functions, savetxt(filename, array) and loadtxt(filename) [3] to easily write arrays to files and also read them back in from files.
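As a minimal sketch of the kind of round trip I would like to have for the dwave data structures, this is how it looks with NumPy arrays:

```python
import numpy as np

# Write an array to a plain-text file and read it back, possibly in a
# different script.
energies = np.array([[-1.0, -0.5], [-0.75, -0.25]])
np.savetxt("energies.txt", energies)   # write to file
restored = np.loadtxt("energies.txt")  # read back: same shape and values
assert np.allclose(energies, restored)
```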
So far, I have also tried the Python package pickle [4] (pickle.dump(obj, file) for writing and pickle.load(file) for reading). On a simple test case that worked very well for an instance of a dimod.SampleSet, but it failed for an instance of a dwave.system.DWaveSampler, due to some deeply nested substructure within it that cannot be pickled.
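A sketch of roughly what I tried; the SampleSet here comes from dimod's ExactSolver so the snippet does not need QPU access, and the failing part is shown commented out because DWaveSampler() requires a configured solver:

```python
import pickle

import dimod

# This round trip worked on a simple test case: a SampleSet survives
# pickling and unpickling intact.
sampleset = dimod.ExactSolver().sample_ising({"a": -0.5}, {("a", "b"): 1.0})
with open("sampleset.pkl", "wb") as f:
    pickle.dump(sampleset, f)
with open("sampleset.pkl", "rb") as f:
    restored = pickle.load(f)
print(type(restored))          # still a dimod.SampleSet
print(restored.first.energy)   # same lowest energy as before pickling

# This, however, failed for me: pickling the sampler runs into a deeply
# nested object that cannot be pickled.
# from dwave.system import DWaveSampler
# with open("sampler.pkl", "wb") as f:
#     pickle.dump(DWaveSampler(), f)
```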
One could of course also go through all data structures and, recursively, all their substructures and decide which data is of interest, but that is ... cumbersome ...
I think the pickle package offers even more than necessary. It would be sufficient to read/write the data (the properties in the examples above), while the methods should be known to the reading script from the corresponding packages. But it should be possible to treat the read data structures as the ones that were written, e.g. a written instance of a dwave.system.EmbeddingComposite should be readable by a different script that imports the packages of the dwave-ocean-sdk and then be usable again as an instance of a dwave.system.EmbeddingComposite.
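A toy illustration of that behaviour in plain Python (this is not SDK code; the Composite class below is just a hypothetical stand-in for something like EmbeddingComposite): only the data goes to disk, and the reading script gets the methods by importing the class.

```python
import json

class Composite:
    """Hypothetical stand-in for something like EmbeddingComposite."""

    def __init__(self, parameters):
        self.parameters = parameters       # data ("property")

    def sample_qubo(self, Q):              # method, supplied by the import
        return "would sample {} using {}".format(Q, self.parameters)

# Writing script: store only the data.
original = Composite({"num_reads": 100})
with open("composite.json", "w") as f:
    json.dump({"parameters": original.parameters}, f)

# Reading script: it imports Composite, so all methods are known; only the
# stored data is needed to get a usable instance back.
with open("composite.json") as f:
    data = json.load(f)
restored = Composite(data["parameters"])
print(restored.sample_qubo({("a", "b"): 1.0}))
```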
Why I think this feature is missing:
Currently, a lot of research is going on to improve the applicability of quantum annealing to real-world problems. But since there are quite a few subtleties, there is also a lot of trial and error. In this regard, I think it would be beneficial for the community to exchange all kinds of findings and, especially, data. I might even dare to say that the inputs are more interesting than the results, and of course the combination of both would be the most valuable.
[1] https://docs.ocean.dwavesys.com/en/stable/docs_system/reference/composites.html#embeddingcomposite
[2] https://docs.ocean.dwavesys.com/en/stable/docs_dimod/reference/sampleset.html#id1
[3] https://numpy.org/doc/stable/reference/routines.io.html
[4] https://docs.python.org/3/library/pickle.html