
Quantum Neural Networks

  1. Basic idea of QNN
    • Architecture of QNN
    • Training QNN
  2. Barren Plateaus in QNN
  3. QCNN
  4. DQNN

1.1. Basic idea of QNN

1.1.1. Architecture of QNN

We introduce the architecture of a QNN by comparing it with a classical artificial neural network.

1. Data input

$$\begin{aligned} x_i &= \left[x_{i,0}, \ldots, x_{i,L-1}\right]^T \\ &\Downarrow \\ \left|x_i\right\rangle &= \left|x_{i,0}\right\rangle \otimes \ldots \otimes \left|x_{i,L-1}\right\rangle \end{aligned}$$
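As a minimal sketch of this encoding step (plain NumPy, assuming binary features so that each $x_{i,l}$ maps to a computational-basis state; the helper name `encode` is illustrative):

```python
import numpy as np

# Computational-basis states |0> and |1>; binary features assumed.
ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def encode(x):
    """Basis-encode x = [x_0, ..., x_{L-1}] as |x_0> ⊗ ... ⊗ |x_{L-1}>."""
    state = np.array([1.0])
    for bit in x:
        state = np.kron(state, ket[bit])
    return state

v = encode([1, 0, 1])
print(int(np.argmax(v)))  # the single nonzero amplitude sits at index 0b101 = 5
```

Note that `np.kron` places the first factor on the most significant qubit, so the bit string of the input reads off directly as the index of the nonzero amplitude.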

2. Neural network

neural network $\longrightarrow$ unitary quantum gates $U_j \ (j = 1, \ldots, J)$

The action of the $j$th unitary operator on $M = I + K$ qubits can be represented by

$$U_j = \exp\left[\mathrm{i} \sum_{m_1, \ldots, m_M = 0}^{3} \alpha_{m_1, \ldots, m_M}^{(j)}\, \sigma_{m_1} \otimes \ldots \otimes \sigma_{m_M}\right]$$

where

  • $\sigma_{m_l} \in \{I, X, Y, Z\}$ are the Pauli matrices.
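To make this construction concrete, here is a hedged NumPy sketch (the helper `qnn_unitary` and its sparse coefficient format are illustrative assumptions, not from the cited papers): it builds the Hermitian generator $\sum \alpha\, \sigma_{m_1} \otimes \cdots \otimes \sigma_{m_M}$ and exponentiates it via an eigendecomposition.

```python
import numpy as np

# The four Pauli matrices, indexed 0..3 as I, X, Y, Z.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def qnn_unitary(alpha, M):
    """alpha: dict mapping index tuples (m_1, ..., m_M) -> real coefficient.
    Builds G = sum alpha * sigma_{m1} ⊗ ... ⊗ sigma_{mM}, returns U = exp(iG)."""
    G = np.zeros((2 ** M, 2 ** M), dtype=complex)
    for ms, a in alpha.items():
        term = np.array([[1.0]], dtype=complex)
        for m in ms:
            term = np.kron(term, paulis[m])
        G += a * term
    # G is Hermitian (real coefficients, Hermitian Paulis), so exponentiate
    # through its eigendecomposition: U = V diag(exp(i*lambda)) V^dagger.
    lam, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(1j * lam)) @ V.conj().T

# Two-qubit example with two nonzero coefficients: 0.3 * X⊗I  -0.1 * Z⊗Z.
U = qnn_unitary({(1, 0): 0.3, (3, 3): -0.1}, M=2)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True: U is unitary
```

Since the generator is Hermitian, $e^{\mathrm{i}G}$ is guaranteed unitary, which the final check confirms numerically.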

3. Cost function

$$C = -\sum_{n=1}^{N} \left\langle y_U^{(n)} \,\middle|\, y^{(n)} \right\rangle$$

where

  • $y_U^{(n)}$ depends on the unitary operator parameters $\alpha_{m_1, \ldots, m_M}^{(j)}$.
  • $y^{(n)}$ represents the labels at the output neurons for the $n$th training instance $(n = 1, \ldots, N)$.
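A tiny numerical illustration of this cost (assuming real overlaps; the single-qubit Hadamard "network" is a toy stand-in, not a trained QNN):

```python
import numpy as np

def cost(U, xs, ys):
    # C = -sum_n <y^(n) | U x^(n)>  (real part of the overlap, per the notes)
    return -sum(np.real(np.vdot(y, U @ x)) for x, y in zip(xs, ys))

# Toy example: the "network" U is a Hadamard gate, and the labels are
# exactly the Hadamard-transformed inputs, so every overlap equals 1.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
xs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ys = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print(cost(H, xs, ys))  # ≈ -2: both training pairs matched perfectly
```

The cost reaches its minimum of $-N$ exactly when every network output coincides with its label, which motivates minimizing $C$ by gradient descent below.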

4. Gradient descent

We can use gradient descent to update the $\alpha$ parameters in the $j$th unitary matrix as follows:

$$\Delta \alpha_{m_1, \ldots, m_M}^{(j)} = -\eta \frac{\partial C}{\partial \alpha_{m_1, \ldots, m_M}^{(j)}}$$

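A minimal end-to-end sketch of this update rule, assuming a toy single-parameter, single-qubit unitary $U(\alpha) = e^{\mathrm{i}\alpha X}$ and a fidelity-style cost $C = -|\langle y \mid U(\alpha)\, x \rangle|^2$ (finite differences stand in for the analytic gradient):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def unitary(alpha):
    # U(alpha) = exp(i * alpha * X) = cos(alpha) I + i sin(alpha) X
    return np.cos(alpha) * I2 + 1j * np.sin(alpha) * X

def cost(alpha, x, y):
    # C = -|<y | U(alpha) x>|^2  (toy fidelity cost, one training pair)
    return -abs(np.vdot(y, unitary(alpha) @ x)) ** 2

# Single training pair: learn to map |0> to |1>.
x = np.array([1.0, 0.0], dtype=complex)
y = np.array([0.0, 1.0], dtype=complex)

alpha, eta, eps = 0.1, 0.5, 1e-6
for _ in range(200):
    # finite-difference estimate of dC/dalpha
    grad = (cost(alpha + eps, x, y) - cost(alpha - eps, x, y)) / (2 * eps)
    alpha -= eta * grad        # Delta alpha = -eta * dC/dalpha

fidelity = -cost(alpha, x, y)
print(round(fidelity, 6))  # approaches 1.0 as alpha -> pi/2
```

Here $C(\alpha) = -\sin^2\alpha$, so the update drives $\alpha$ toward $\pi/2$, where $U(\alpha)|0\rangle$ coincides with $|1\rangle$ up to phase.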

1.1.2. Training QNN

  1. Ricks, Bob, and Dan Ventura. "Training a quantum neural network." Advances in neural information processing systems 16 (2003).
  2. Beer, Kerstin, et al. "Training deep quantum neural networks." Nature communications 11.1 (2020): 1-6.
  3. Zhang, Kaining, et al. "Toward trainability of deep quantum neural networks." arXiv:2112.15002 (2021).

To be continued

1.2. Barren plateaus in QNN

  1. McClean, Jarrod R., et al. "Barren plateaus in quantum neural network training landscapes." arXiv:1803.11173 (2018).

Exponential decay of variance.

[Figure: the sample variance of the gradient $\partial_{\theta_{1,1}} E$ of the energy for the first circuit component of a two-local Pauli term, plotted against the number of qubits $n$ on a semi-log plot. As predicted, an exponential decay in $n$ is observed for both the expected value and its spread; the slope of the fitted line gives the rate of decay determined by the operator.]

The gradient of the objective function is

$$\partial_k E \equiv \frac{\partial E(\boldsymbol{\theta})}{\partial \theta_k} = i \left\langle 0 \left| U_-^\dagger \left[V_k, U_+^\dagger H U_+\right] U_- \right| 0 \right\rangle$$

Its average over randomly initialized circuits,

$$\left\langle \partial_k E \right\rangle = \int dU\, p(U)\, \partial_k \left\langle 0 \left| U(\boldsymbol{\theta})^\dagger H U(\boldsymbol{\theta}) \right| 0 \right\rangle,$$

vanishes, and the variance of the measured gradient is approximately

$$\operatorname{Var}\left[\partial_k E\right] \approx \begin{cases} -\dfrac{\operatorname{Tr}\left(\rho^2\right)}{2^{2n}-1} \operatorname{Tr}\left\langle \left[V, u^\dagger H u\right]^2 \right\rangle_{U_+} \\[2ex] -\dfrac{\operatorname{Tr}\left(H^2\right)}{2^{2n}-1} \operatorname{Tr}\left\langle \left[V, u \rho u^\dagger\right]^2 \right\rangle_{U_-} \\[2ex] \dfrac{1}{2^{3n-1}} \operatorname{Tr}\left(H^2\right) \operatorname{Tr}\left(\rho^2\right) \operatorname{Tr}\left(V^2\right) \end{cases}$$

where $u$ is drawn from the indicated circuit block, and the three cases correspond to different portions of the circuit matching the Haar distribution up to the second moment (2-designs). Each case decays exponentially in the number of qubits $n$.
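This exponential concentration can be checked numerically. The following sketch (an assumed hardware-efficient ansatz: layers of $R_y$ rotations followed by a chain of CZ gates, cost $E = \langle Z_0 \rangle$, gradient via the parameter-shift rule) estimates $\operatorname{Var}[\partial_{\theta_1} E]$ over random initializations for increasing qubit counts; the variance shrinks rapidly with $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def ry(theta):
    # R_y(theta) = exp(-i * theta * Y / 2)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Y

def cz_chain(n):
    # Diagonal of CZ gates on neighboring pairs (q, q+1); qubit 0 = top bit.
    diag = np.ones(2 ** n, dtype=complex)
    for q in range(n - 1):
        for b in range(2 ** n):
            if (b >> (n - 1 - q)) & 1 and (b >> (n - 2 - q)) & 1:
                diag[b] *= -1
    return diag

def energy(thetas, n, layers, Hdiag):
    # E = <psi| Z_0 |psi> for the layered RY + CZ circuit applied to |0...0>.
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    cz = cz_chain(n)
    k = 0
    for _ in range(layers):
        rot = kron_all([ry(thetas[k + q]) for q in range(n)])
        k += n
        psi = cz * (rot @ psi)        # CZ layer is diagonal
    return float(np.real(np.vdot(psi, Hdiag * psi)))

variances = {}
for n in (2, 4, 6):
    layers = n
    Hdiag = kron_all([Z] + [I2] * (n - 1)).diagonal().copy()  # Z on qubit 0
    grads = []
    for _ in range(500):
        thetas = rng.uniform(0, 2 * np.pi, size=n * layers)
        # Parameter-shift rule (exact for the RY generator):
        tp, tm = thetas.copy(), thetas.copy()
        tp[0] += np.pi / 2
        tm[0] -= np.pi / 2
        grads.append((energy(tp, n, layers, Hdiag)
                      - energy(tm, n, layers, Hdiag)) / 2)
    variances[n] = float(np.var(grads))
    print(n, variances[n])
```

The circuit depth, observable, and sample count here are arbitrary choices for illustration; the qualitative trend (variance decreasing with $n$) is the point, not the exact rate.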

1.3. QCNN

Absence of Barren Plateaus in Quantum Convolutional Neural Networks

1.4. References

What is QNN

  1. A review of Quantum Neural Networks: Methods, Models, Dilemma arXiv:2109.01840
  2. Kak, S. (1995). "On quantum neural computing". Advances in Imaging and Electron Physics. 94: 259–313. doi:10.1016/S1076-5670(08)70147-2
  3. Efficient Learning for Deep Quantum Neural Networks (2020) arXiv:1902.10445
  4. On quantum neural networks 2021 arXiv:2104.07106
    • early definition of QNN & modern definition of QNN

What problems does a QNN have an advantage in?

  1. The Power of Quantum Neural Networks 2020 arXiv:2011.00027
  2. Power of data in quantum machine learning 2021 arXiv:2011.01938

What is a barren plateau in QNN?

  1. Barren plateaus in quantum neural network training landscapes
  2. Cost function dependent barren plateaus in shallow parametrized quantum circuits Nature Communications 12, 1791 (2021)
  3. Trainability of Dissipative Perceptron-Based Quantum Neural Networks arXiv:2005.12458

What causes a barren plateau?

  1. explain
  2. arXiv: 2010.15968 Entanglement Induced Barren Plateaus

What is QCNN

Cong, Iris, Soonwon Choi, and Mikhail D. Lukin. "Quantum convolutional neural networks." Nature Physics (2019).

Absence of barren plateaus in QCNN

  1. Absence of Barren Plateaus in Quantum Convolutional Neural Networks

1.5. Appendix
