Test failure: test_noise #449

Closed
cqc-alec opened this issue Jan 21, 2025 · 4 comments
@cqc-alec
Collaborator

cqc-alec commented Jan 21, 2025

Recently test_noise (in backend_test.py) has started to fail, with the following error:

FAILED backend_test.py::test_noise - assert np.float64(0.0) > np.float64(0.46)

(I can't see any code changes that could have caused this, so it may be something to do with the noise model being reported by the backend?)

@cqc-alec
Collaborator Author

If I use a different backend (torino) instead of brisbane, the test passes, so I suspect we are receiving a corrupted error model ...

@cqc-alec
Collaborator Author

cqc-alec commented Jan 21, 2025

Or the error model may not be corrupted, we may just be selecting qubits with very high qubit readout error rates. Ideally we would not do that ...
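A minimal sketch of the idea (the function name and the error-rate dictionary are illustrative, not the project's real API): rank qubits by their reported readout error and place on the best ones, so a single "dud" qubit with a 46% readout error is never selected.

```python
# Hypothetical sketch: choose placement qubits with the lowest reported
# readout error instead of taking the first available ones.
# `readout_errors` stands in for values that would come from the
# backend's reported noise model.

def pick_best_qubits(readout_errors, n):
    """Return the n qubit indices with the smallest readout error."""
    ranked = sorted(readout_errors, key=readout_errors.get)
    return ranked[:n]

# Example: qubit 3 is a "dud" with a 46% readout error and is skipped.
readout_errors = {0: 0.012, 1: 0.034, 2: 0.009, 3: 0.46, 4: 0.021}
print(pick_best_qubits(readout_errors, 3))  # → [2, 0, 4]
```

With a selection like this, the test's placement would be robust to one or two badly calibrated qubits on the device.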

@cqc-alec
Collaborator Author

Yes, it seems we are selecting dud qubits for the placement.

@cqc-alec
Collaborator Author

This test is passing again: the qubit readout error rates on brisbane have returned to normal. I'm going to close this, as I think the issue was with the backend, but we should perhaps think about how such issues could be detected.
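One way to detect such issues, sketched below under assumptions (the threshold value and data structure are illustrative, not from this project): sanity-check the reported noise model before running the test, and skip rather than fail when the backend's calibration data looks implausible.

```python
# Hypothetical sketch: gate the noise test on a plausibility check of the
# backend's reported readout errors. The ceiling is an assumed value.

READOUT_ERROR_CEILING = 0.2  # above this, treat the calibration data as suspect

def noise_model_looks_sane(readout_errors, ceiling=READOUT_ERROR_CEILING):
    """Return False if any qubit reports an implausibly high readout error."""
    return all(err <= ceiling for err in readout_errors.values())

# In a pytest-style test this could turn a spurious failure into a skip:
# if not noise_model_looks_sane(errors):
#     pytest.skip("backend reports implausible readout errors")
```

This would distinguish "our code regressed" from "the device is badly calibrated today", which is exactly the ambiguity that made this issue hard to diagnose.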
