HT gradient - additional modes #7046
base: master
Conversation
…and finished reversed method for single measurement
So for the failing test, we keep a hardcoded list of potential gradient keyword arguments here:
This was done as a patch to add validation to the
```python
# Get a generator and coefficients
sub_coeffs, generators = _get_pauli_generators(trainable_op)

# Create measurement with gate generators
# With type pennylane.measurements.expval.ExpectationMP
mp = qml.expval(
    qml.Hamiltonian(
        coeffs=sub_coeffs,
        observables=generators,
        # grouping_type="commuting",
    )
)
```
When we are using the generator as a measurement, do we still need to represent the observable as a linear combination of unitaries (Paulis)?
Or could we technically just do:
mp = qml.expval(trainable_op.generator() @ qml.Y(aux_wire))
This might be a bit more general, and leaves interpreting the new observable up to the device.
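Since expectation values are linear, the two forms discussed above agree: measuring the single observable G ⊗ Y directly gives the same value as summing the expectations of the Pauli expansion of G term by term. A small NumPy check of that equivalence (a toy sketch with an assumed two-term generator, not code from this PR):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Toy generator with known Pauli coefficients
G = 0.3 * X + 0.7 * Z

# Option 1: one composite observable, G acting on the system qubit
# and Y on the auxiliary qubit (analogous to generator() @ qml.Y(aux_wire))
direct = np.kron(G, Y)

# Option 2: the Pauli expansion, measured term by term and summed
expanded = 0.3 * np.kron(X, Y) + 0.7 * np.kron(Z, Y)

# Compare both on a random normalized two-qubit state
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

ev_direct = np.vdot(psi, direct @ psi).real
ev_expanded = np.vdot(psi, expanded @ psi).real
```

The two expectation values agree to machine precision; the practical difference is only whether the decomposition happens in the gradient transform or is deferred to the device.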
Context: Hadamard gradients method variants
Description of the Change: added support for different modes in the Hadamard gradient and implemented direct HT, reversed HT, and reversed_direct HT.
Benefits: more variants of HT; measurement grouping yields more efficient gradient estimation.
Possible Drawbacks: for the reversed methods (reversed HT and reversed_direct HT), the current version does not support circuits with multiple measurements. Supporting this would require modifying how the processing_fn aggregates the results. For non-reversed methods, we aggregate per measurement, and the code works because the results have shape N_G*M, where N_G is the number of terms in the generator and M is the number of measurements. In the reversed methods, however, the shape is [N_1, ..., N_M], where the i-th measurement contributes N_i gradient tapes. Each measurement may have a different number of tapes, and every tape has exactly one measurement, so no aggregation across dimensions is needed; grouping neighboring results is sufficient.
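The two aggregation layouts described above can be sketched as plain Python (hypothetical helper names; the PR's actual `processing_fn` is not reproduced here):

```python
def aggregate_non_reversed(results, n_g, n_meas):
    """Non-reversed layout: n_g tapes, each returning one value per
    measurement, laid out tape-major as a flat list of length n_g * n_meas.
    Sum over the generator terms separately for each measurement."""
    return [
        sum(results[t * n_meas + m] for t in range(n_g))
        for m in range(n_meas)
    ]

def aggregate_reversed(results, tapes_per_meas):
    """Reversed layout: the i-th measurement owns the next tapes_per_meas[i]
    single-measurement tapes. Group neighboring entries and sum each group;
    no aggregation across a second dimension is needed."""
    grouped, start = [], 0
    for n_i in tapes_per_meas:
        grouped.append(sum(results[start:start + n_i]))
        start += n_i
    return grouped

# Non-reversed: 3 generator terms, 2 measurements, flat length 6
grads_a = aggregate_non_reversed([1, 2, 3, 4, 5, 6], n_g=3, n_meas=2)

# Reversed: measurement 0 has 2 tapes, measurement 1 has 3 tapes
grads_b = aggregate_reversed([1, 2, 3, 4, 5], tapes_per_meas=[2, 3])
```

In the first case `grads_a` is `[9, 12]` (summing results 1+3+5 and 2+4+6 per measurement); in the second, `grads_b` is `[3, 12]` from grouping the neighboring per-measurement tapes.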
Related GitHub Issues: