Add new code example #1
Conversation
For each potential new charging station location, we compute the average
distance to all POIs on the map. Using this value as a linear bias on each
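A minimal sketch of the metric described in the quoted lines above, assuming a grid of candidate locations and POIs; this is only an illustration of "average distance to all POIs used as a linear bias", not the demo's actual code, and the names `candidate_locations`, `pois`, and `bqm` are placeholders.

```python
import math

import dimod

# Toy data standing in for the demo's randomly generated scenario.
candidate_locations = [(0, 0), (5, 5), (9, 2)]   # possible new charging stations
pois = [(1, 1), (8, 8)]                          # points of interest on the grid

bqm = dimod.BinaryQuadraticModel('BINARY')
for loc in candidate_locations:
    # Average distance from this candidate location to every POI ...
    avg_dist = sum(math.dist(loc, poi) for poi in pois) / len(pois)
    # ... used as the linear bias, so locations close (on average) to POIs
    # contribute less energy and are favoured by the solver.
    bqm.add_variable(loc, avg_dist)
```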
I find this approach strange. I would have expected that we would want to place a new charging station next to a POI, but by using averages over all POIs we won't achieve that. For example, if we have two POIs, one at the top left and the other at the bottom right of the grid, we would want a new station either at the top left or at the bottom right, not in the center, away from either POI.
As this is a basic, intro-level demo, I'm simply demonstrating that you can use some metric. The metrics would be completely different depending on how a customer might implement their strategy in practice. I don't think it necessarily matters what the metric is, only that there is a way to take things like this into account.
I can certainly see the usefulness of this example for that purpose, Victoria; your reply makes sense in your context. But what's now unclear to me is our overall dwave-examples repo strategy (@arcondello, please chime in).
Where on the spectrum are we aiming, between users seeing realistic examples of solving real-world problems with our systems at one end, and a few simple models covered with many thin veneers at the other? I remember Melody spending weeks fine-tuning her clustering example to do useful work, and my impression was that we wanted users to see the Nursing example in a similar light; the recent https://github.com/dwavesystems/circuit-equivalence also aims at real-world examples.
I can imagine an alternative where we provide examples with simple cores and then, in the README, give a multitude of potential applications that would require working out how to implement realistic metrics.
What I don't think we want is for users to incorrectly assume that our examples are either all simple models of potential real applications or all realistic applications, and then find our implementation inadequate.
Conversation moved to Slack.
@JoelPasvolsky I believe I addressed everything; can you take another look, please?
@hhtong I addressed both of your comments if you want to take another look, please!
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

class TestDemo(unittest.TestCase):
Would it be possible to add a few unit tests that verify the code produces the expected solution?
It's a randomly generated scenario, so there isn't a canned solution we can compare against. We've also run into issues with unit tests that check for optimal solutions failing, since the QPU is probabilistic.
Yeah, but there are ways around both those issues.
You can fix the seed and have a fixed random problem.
QPU/HSS is probabilistic, but it should almost certainly find ground states of trivial problems. For non-trivial problems, there's always the option of multiple retries.
Unit tests are generally useful for catching errors early, but this was merely a suggestion. Proceed as you like.
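A minimal sketch of the seeded-test idea suggested above, assuming dimod's reference simulated-annealing sampler as a stand-in for the QPU/hybrid solver; the test name and the toy problem are placeholders, not the demo's actual code.

```python
import unittest

import dimod
import numpy as np


class TestSeededScenario(unittest.TestCase):
    def test_trivial_problem_ground_state(self):
        rng = np.random.default_rng(42)               # fixed seed -> fixed random problem
        biases = {i: rng.uniform(-1, 1) for i in range(5)}
        bqm = dimod.BinaryQuadraticModel(biases, {}, 'BINARY')

        # Ground state of a linear-only BQM: set x_i = 1 exactly when its bias is negative.
        expected = {i: int(b < 0) for i, b in biases.items()}

        sampler = dimod.SimulatedAnnealingSampler()   # probabilistic, like the QPU/HSS
        for _ in range(3):                            # allow a few retries
            best = sampler.sample(bqm, num_reads=10).first.sample
            if best == expected:
                break
        self.assertEqual(dict(best), expected)
```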
In order to do that, we'd have to reformat the program. I'll list it as an issue for now, since it would be a rework of the entire Python program.