Closed and open loop quantum control


A simple example of open loop quantum control

Consider a single qubit and a Hamiltonian

    H(t) = f(t) X + Z,

where X and Z are Pauli operators.

Assume we control the system dynamics by choosing the function f(t). Two typical questions in open loop control are

  1. What unitary evolutions are (approximately) achievable through our choice of f(t)? (Reachable set)
  2. How can we find such a function? (Optimisation algorithms)

For the first question, the answer from the theory is that the set of unitaries reachable (up to closure) by choosing f(t) is

    e^L = { e^{A_1} e^{A_2} ... e^{A_k} : A_1, ..., A_k in L },

where

    L = Lie_R{ iX, iZ }

is the dynamical Lie algebra: the smallest real vector space that contains iX and iZ and is closed under the commutator. Here [iX, iZ] = 2iY, so the algebra is all of su(2); hence for every unitary U in SU(2) there is an f(t) whose time evolution produces it, and the system is fully controllable. For the second question, the answer comes from numerical algorithms such as CRAB (not gradient-based), GRAPE, or similar. So on a more abstract level, open loop quantum control theory allows the researcher to be dull: you don't need to work out what your system can do, or how to do it; all of that can be done by a computer program.
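
As a quick sanity check of the controllability claim, here is a small numerical sketch (my own illustration, not part of the original discussion): it builds the dynamical Lie algebra generated by iX and iZ by repeatedly adding commutators and confirms that its dimension is 3, i.e. all of su(2).

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def lie_closure(generators, tol=1e-10):
        # Smallest real vector space containing the generators and closed under
        # the commutator, built by Gram-Schmidt against the current basis.
        # (On skew-Hermitian matrices the Hilbert-Schmidt inner product is real.)
        basis = []

        def try_add(op):
            v = op.copy()
            for b in basis:
                coeff = np.trace(b.conj().T @ v).real / np.trace(b.conj().T @ b).real
                v = v - coeff * b
            if np.linalg.norm(v) > tol:    # keep only linearly independent residuals
                basis.append(v)
                return True
            return False

        for g in generators:
            try_add(g)
        grew = True
        while grew:                        # add commutators until nothing new appears
            grew = False
            for a in list(basis):
                for b in list(basis):
                    if try_add(a @ b - b @ a):
                        grew = True
        return basis

    # Generators iX and iZ, corresponding to H(t) = f(t) X + Z.
    algebra = lie_closure([1j * X, 1j * Z])
    print("dimension of the dynamical Lie algebra:", len(algebra))   # prints 3 = dim su(2)

Since dim su(2) = 3, the Lie-algebra rank condition is met, in agreement with the statement above.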

A question for experts in closed loop control
Obviously open loop control is a special case of closed loop control. So it should be possible to re-phrase the above example in a closed-loop setting by setting the feedback to zero. What would this look like, for instance in the (S,L,H) framework? The result should still be that the system is fully controllable.

A Hamiltonian of the form

    H(t) = f(t) X

can be expressed in the (S,L,H) formalism by a certain system in a double pass configuration driven by a coherent optical field with amplitude f(t), as shown in [J. Gough, Phys. Rev. A 78, 052311 (2008)]. To get a Hamiltonian of the form

    H(t) = f(t) X + Z

one simply concatenates another (S,L,H) system with parameter (I,0,Z). Of course all this assumes the limit of zero time-delay in all the interconnections. Since the above (S,L,H) model is identical to the open loop system, it too must be fully controllable.
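
To make the concatenation step concrete, here is a minimal sketch (my own, with simplifying assumptions) of the Gough-James concatenation product, under which the field channels are stacked and the Hamiltonians add. The component supplying f(t) X below is only a placeholder with the right Hamiltonian, not the actual double-pass construction from Gough's paper, and trivial scattering is assumed throughout.

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    class SLH:
        def __init__(self, L_ops, H):
            self.L = list(L_ops)   # one coupling operator per field channel
            self.H = H             # system Hamiltonian

    def concatenate(g1, g2):
        # Concatenation product with trivial scattering (S = identity):
        # coupling operators are stacked and the Hamiltonians add.
        return SLH(g1.L + g2.L, g1.H + g2.H)

    f_t = 0.7   # some instantaneous value of the control amplitude f(t)

    # Placeholder component contributing the controlled Hamiltonian f(t) X
    # (stand-in for the double-pass construction), and the extra component
    # (I, 0, Z) from the text: no coupling, Hamiltonian Z.
    controlled_part = SLH([np.zeros((2, 2), dtype=complex)], f_t * X)
    drift_part = SLH([np.zeros((2, 2), dtype=complex)], Z)

    total = concatenate(controlled_part, drift_part)
    print(np.allclose(total.H, f_t * X + Z))   # True: total Hamiltonian is f(t) X + Z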

      • Ok, thanks! I should look at John's paper about this more carefully, although a quick glance suggests that this is not going to be a simple construction (but of course conceptually very interesting!). I know that it must be fully controllable from the open-loop analysis, but how would you compute this in your framework? Or is computing reachable sets not something that one can do in (S,L,H)?
      • I think questions like computing reachable sets relate to the bilinear structure of the system equations rather than to the fact that it is representing some open-loop system per se. Since the (S,L,H) model above reproduces the same system equations, the same analysis applies. This brings me to the next point.
      • One way of designing a control pulse for an open-loop system is to attach some (possibly fictitious) output to the system model (of course, to be useful the output should be chosen to contain some information about the system state) and to apply some suitable feedback design method to obtain a virtual feedback controller for the system. The feedback controller is virtual in the sense that in the control implementation it will not receive actual measurements from the open-loop system (since no measurement will be made, or there may not even be an output that can be measured on the system); instead, its input will be the virtual output signal produced by a computer simulation of the open system (i.e., a virtual plant). The control signal produced by the controller driven by the computer model is then sent to the actual physical open-loop system as a control pulse (a toy classical sketch of this workflow appears after this list). Of course, one only expects this to work well if the computer model is an accurate representation of the open-loop system; any mismatch between the model and the physical system will degrade the control performance. Can all open-loop control strategies be viewed in this way? I don't know. If so, it would make the statement "open loop control is a special case of feedback control" more than a triviality.
      • Thanks, sounds interesting. I guess what I would have hoped for is a method for calculating the reachable set, so that I could go further than the H above and say, for example, how the reachable set changes under continuous weak observation of, say, Z, as a function of the measurement strength. This is obviously something I cannot compute in the open-loop framework, but it seems there is no general method? Is it easy to see the specific solution for the above example?
      • I think there should be some work in this direction, I'm just not that familiar with it. This seems to be one paper going that way: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1583491&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F10559%2F33412%2F01583491.pdf%3Farnumber%3D1583491 (Yamamoto et al., "Local reachability of stochastic quantum dynamics with application to feedback control of single-spin systems", in Proceedings of the 44th IEEE Conference on Decision and Control (CDC), pp. 8209-8214, 2005). The papers cited therein may be useful and could point to other relevant works.
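
To illustrate the "virtual feedback controller" idea mentioned a few comments above, here is a toy sketch (entirely made up, classical and linear for simplicity): a feedback law is run against a simulated (virtual) plant to record a control pulse, and the recorded pulse is then replayed open loop on a slightly mismatched "real" plant.

    dt, steps = 0.01, 500
    a_model, a_real = 1.0, 1.05   # virtual model vs. slightly mismatched real plant
    k = 5.0                       # proportional gain of the virtual feedback controller

    # 1) Run the feedback loop against the virtual plant and record the pulse.
    x_virtual, pulse = 1.0, []
    for _ in range(steps):
        u = -k * x_virtual            # controller driven by the *simulated* output
        pulse.append(u)
        x_virtual += dt * (a_model * x_virtual + u)

    # 2) Replay the recorded pulse open loop on the (mismatched) real plant.
    x_real = 1.0
    for u in pulse:
        x_real += dt * (a_real * x_real + u)

    print("virtual plant final state:", x_virtual)
    print("real plant final state:   ", x_real)   # performance degrades with model mismatch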

The mother of all theories
Do we have a theory that gives the reachable set of an open system under measurement-based feedback? Such a theory should allow us to use a computer to verify or even improve things like
  • measurement based quantum computing
  • quantum error correction

An incomplete list of previous feedback experiments performed in the quantum domain
See the experiment page.

Important landmarks for future feedback experiments
Someone asked what experiments would be important landmarks for feedback control. Here are some suggestions and why:



Open loop, feedforward, feedback, and learning/iterative control compared
The content below is sometimes a direct quote from, and sometimes a rephrasing of, the following sources:
J. R. Leigh, Control Theory (IEE, 2004).
C. C. Bissell, Control Engineering (Chapman & Hall, 1994).

Although the categories below look like a neat four-way division (a "tetrachotomy"), in practice the different kinds of control are used simultaneously.

Open loop
Explanation: An open loop controller can be thought of as an 'inverse' model of the process/system/plant, combined with externally supplied information about the desired output, used to determine the control action a priori. The signal that drives the controller is called the input and represents the desired behavior. The output of the entire control system is the actual behavior.
Advantages: Simpler to design and construct. Often cheaper.
Disadvantages: Both the controller and the system can be affected by disturbances, and an open loop controller cannot compensate for them; the control system may even be destabilized by them. (One can design a controller that is robust, in some sense, to a characterization of the disturbances, but the best-case performance is then obviously degraded.)
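
A toy numerical illustration of this (my own, with made-up numbers, not taken from the cited books): the controller inverts its model of a static plant to compute the control action, and an unmeasured disturbance then passes straight through to the output.

    plant_gain = 2.0        # the real plant:  y = plant_gain * u + d
    model_gain = 2.0        # the controller's (inverse) model of the plant
    r = 1.0                 # desired output
    d = 0.3                 # unmeasured disturbance

    u = r / model_gain      # inverse-model control action, fixed a priori
    y = plant_gain * u + d  # actual output

    print("desired:", r, "actual:", y)   # the error equals the disturbance d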

Feedforward
Explanation: An obvious strategy for compensating for these disturbances is to measure them directly. The controller is still an inverse model of the system, but the input to the controller (the desired behavior) is modified to take into account the disturbances measured in the environment around the controller and/or the system.
Advantages: Simpler to design and construct than feedback.
Disadvantages: The effect of the disturbances on the controller and the system must be well characterized (i.e. one needs a model). Any disturbance that cannot be measured cannot be compensated. If there are many disturbances, feedforward becomes cumbersome, and it is sometimes unstable.
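
Continuing the same made-up static example: the disturbance is now measured, imperfectly, and pre-compensated through the inverse model, so only the measurement error remains at the output.

    plant_gain, model_gain = 2.0, 2.0
    r, d = 1.0, 0.3
    d_measured = 0.28                   # imperfect measurement of the disturbance

    u = (r - d_measured) / model_gain   # pre-compensate the measured disturbance
    y = plant_gain * u + d

    print("desired:", r, "actual:", y)  # residual error = d - d_measured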

Feedback
Explanation: Here one monitors, at the output, the aspects of the system one desires to control. Control actions are then taken to correct deviations from the desired behavior, which means modifying the input.
Advantages: Compensates for any discrepancy between the desired behavior (input) and the actual behavior (output).
Disadvantages: Comparatively hard to design and often expensive to construct. Naive feedback is sometimes unstable; Matt is an expert on the approach (called "robust control") which attempts to mitigate these problems.
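
The same made-up plant under simple feedback: the output itself is measured and the control action is repeatedly corrected from the error, so the disturbance is rejected without ever being measured directly.

    plant_gain = 2.0
    r, d = 1.0, 0.3
    u, gain = 0.0, 0.2                  # integral-style update gain

    for _ in range(100):
        y = plant_gain * u + d          # measure the actual output
        u += gain * (r - y)             # correct the control action from the error

    print("desired:", r, "actual:", plant_gain * u + d)   # converges to r despite d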

Learning/iterative
Explanation: Use a computer algorithm to learn, in an iterative fashion, how to control a system by driving the inputs and observing their consequences on the outputs.
Advantages: One does not require a model of the system.
Disadvantages: Cannot be used for single-shot, real-time control before it is trained. See open loop control.
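
A toy sketch of iterative/learning control along the same lines (made up, model-free): the same task is repeated many times, and the whole control pulse is updated between runs from the recorded tracking error; the update rule never uses a model of the plant.

    import numpy as np

    steps, trials = 50, 200
    reference = np.sin(np.linspace(0, np.pi, steps))   # desired output trajectory
    pulse = np.zeros(steps)                            # control pulse, refined each trial

    def run_plant(u):
        # The plant as seen in the experiment; the update rule below never uses this model.
        return 1.7 * u + 0.1

    for _ in range(trials):
        y = run_plant(pulse)               # one complete run of the task
        pulse += 0.3 * (reference - y)     # learn: correct the pulse from the recorded error

    print("max tracking error:", np.max(np.abs(run_plant(pulse) - reference)))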

This paper might help:
Here are some slides from a series of talks from last year