1 Introduction
In recent years, machine learning methods have found increasing application in various branches of physics, such as the detection of phase transitions
(Torlai & Melko, 2016; Rem et al., 2019) or variational methods for quantum theory (Nagy & Savona, 2019; Vicentini et al., 2019; Hartmann & Carleo, 2019; Yoshioka & Hamazaki, 2019). Similarly, techniques and concepts of physics have found application in machine learning. A fruitful example are tensor networks (TNs) (Montangero, 2018). Originally applied in quantum many-body physics as variational approaches for ground-state approximation, TNs provide a way to efficiently parametrise important low-dimensional manifolds, such as those with short-range correlations, in otherwise unmanageably high-dimensional spaces (Eisert et al., 2010; Eisert, 2013; Brandão & Horodecki, 2015; Huang, 2019). Together with a powerful set of optimisation algorithms, known broadly as the density-matrix renormalisation group (DMRG) (White, 1992, 1993; Schollwöck, 2011), TN methods have become state-of-the-art in several areas of physics, and continue to develop rapidly in others (Orús, 2019). Given the nature of the problems tackled by TNs in physics, it is natural to consider them for machine learning problems, where similar issues of finding and optimising efficient parametrisations of relevant manifolds in high-dimensional spaces are key. Indeed, TNs have been applied to image classification (Stoudenmire & Schwab, 2016; Sun et al., 2019; Efthymiou et al., 2019), unsupervised learning (Han et al., 2018; Stoudenmire, 2018),
the analysis and compression of neural networks (Levine et al., 2019; Gao et al., 2019), and probabilistic graphical models (Glasser et al., 2018, 2019). Despite this rapid progress, little has been done in applying TNs to reinforcement learning (RL) (Sutton & Barto, 2018), one of the largest fields of machine learning. Utilised for the solution of games (Mnih et al., 2015; Silver et al., 2016) or the training of robots (Schulman et al., 2015; Haarnoja et al., 2018), RL is often modelled using the formalism of Markov decision processes (MDPs), consisting of repeated updates according to an agent’s decision-making policy and the dynamics of an environment it inhabits. Due to the clear product structure of the trajectory probabilities generated by an MDP, it is natural to frame such problems as TNs. Given that TNs are typically applied to problems of numerical optimisation, e.g. via DMRG, this alternative perspective could lead to novel approaches to policy optimisation, or suggest useful structures of function approximations.
Here, we introduce a TN formulation of MDPs, specifically a representation of the expected return in finite MDPs (FMDPs), and consider how a simple DMRG-inspired algorithm can be applied to policy optimisation. Since MDPs are closely related to conditioned stochastic dynamics (Majumdar & Orland, 2015), already treated with the TN formalism (Garrahan, 2016), we use this setting to illustrate the construction and the corresponding optimisation algorithm. As a specific example we consider an elementary problem of conditioned stochastic dynamics, the generation of stochastic excursions, which can be phrased as an FMDP and solved exactly using the DMRG method introduced.
The layout of the paper is as follows: In Sect. 2, after briefly discussing FMDPs, we outline the TN formalism applied to some relevant examples from stochastic dynamics, such as hidden Markov models (HMMs) and the representation of time-integrated observables. The TN formalism for the FMDP is described in Sect. 3, while in Sect. 4 we discuss how a DMRG-type algorithm can be used to solve policy optimisation numerically. This is illustrated in Sect. 5 with an application to conditioned dynamics. We conclude in Sect. 6 by discussing a number of possible extensions where simplifying features of the cases considered are lost.

2 Finite Markov Decision Processes and Tensor Networks for Dynamics
2.1 Finite Markov Decision Processes
In a discrete-time, episodic, MDP, individual trajectories take the form,

(1) $\tau = (S_0, A_0, R_1, S_1, A_1, R_2, \ldots, R_T, S_T),$

where $S_t$, $A_t$ and $R_t$ are random variables for the state, action and reward, taking values $s_t$, $a_t$ and $r_t$ respectively. The termination time, $T$, can also be a random variable, though we consider it fixed for simplicity. At $t = T$, when the episode ends, the state of the system is a terminal state. We will assume this to be unique and denote it $s_\varnothing$, so that $S_T = s_\varnothing$ for all trajectories. We will further assume that all random variables take on values from (independent) finite sets, $\mathcal{S}$, $\mathcal{A}$ and $\mathcal{R}$. We will call this scenario an FMDP.

In an FMDP, the probabilities dictating the dynamics are:

$P(S_{t+1} = s', R_{t+1} = r \,|\, S_t = s, A_t = a) = p(s', r \,|\, s, a),$

where $s', s \in \mathcal{S}$, $a \in \mathcal{A}$, $r \in \mathcal{R}$, and $p$ is the function defining the dynamics from $\mathcal{S} \times \mathcal{A}$ to $\mathcal{S} \times \mathcal{R}$. This obeys the normalisation condition,

(2) $\sum_{s' \in \mathcal{S}} \sum_{r \in \mathcal{R}} p(s', r \,|\, s, a) = 1.$
At the beginning of the episode, the initial state distribution is given by,

(3) $P(S_0 = s) = p_0(s).$
At termination time, when the episode ends, the terminal state is reached with certainty and then persists with zero reward:

(4) $P(S_T = s_\varnothing) = 1,$

(5) $p(s_\varnothing, 0 \,|\, s_\varnothing, a) = 1 \quad \forall a \in \mathcal{A}.$
The decision component of the FMDP is contained in the probability of selecting a particular action conditioned on a given state,

(6) $P(A_t = a \,|\, S_t = s) = \pi_\theta(a \,|\, s),$

where the function $\pi_\theta$ is known as the policy (labelled by its parameters $\theta$), and is normalised,

(7) $\sum_{a \in \mathcal{A}} \pi_\theta(a \,|\, s) = 1.$
The return of an episode, $G$, is defined as,

(8) $G = \sum_{t=1}^{T} R_t.$

An optimal policy, $\pi^*$, is then a policy that maximises the expected return,

(9) $\pi^* = \operatorname*{arg\,max}_{\pi} \mathbb{E}_\pi[G].$
Policy optimisation for a given FMDP is thus a high-dimensional constrained optimisation problem: The total number of parameters is of order $|\mathcal{S}|\,|\mathcal{A}|$ per time-step, and each $\pi_\theta(\cdot \,|\, s)$ is constrained to lie on the manifold of stochastic matrices, i.e. they obey Eq. (7) and have elements in $[0, 1]$.
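As an illustration of these constraints, the following sketch (not from the paper; the function name and the state/action counts are illustrative) builds a random tabular policy whose rows lie on the probability simplex, satisfying Eq. (7):

```python
import numpy as np

# A tabular policy for an FMDP is a row-stochastic |S| x |A| array:
# each row is a distribution over actions, matching Eq. (7).
def random_policy(n_states, n_actions, seed=None):
    rng = np.random.default_rng(seed)
    p = rng.random((n_states, n_actions))    # non-negative entries
    return p / p.sum(axis=1, keepdims=True)  # normalise each row

pi = random_policy(5, 2, seed=0)
assert np.allclose(pi.sum(axis=1), 1.0)      # Eq. (7): sum_a pi(a|s) = 1
assert np.all(pi >= 0)                       # elements in [0, 1]
```

A time-dependent policy would simply carry one such array per time-step.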
2.2 Tensor Networks for Hidden Markov Models
A TN is a collection of tensors contracted together in a given pattern, typically specified by a graph. An elementary example of this is a chain of matrices $M_t$ applied to a vector $|P_0\rangle$, written in the bra-ket notation common to physics,

(10) $|P_t\rangle = M_t M_{t-1} \cdots M_1 |P_0\rangle,$

where $|P_0\rangle = \sum_s P_0(s)\,|s\rangle$ for some coefficients $P_0(s)$, and the vectors $|s\rangle$ associated to each state form a basis of a vector space $\mathcal{V}$. Since the matrices are rank-$2$ tensors and the vector a rank-$1$ tensor, this can be considered a TN consisting of $t + 1$ tensors, where the contraction pattern is given by the usual matrix products. Performing such a contraction produces a new vector, $|P_t\rangle$, with components,

(11) $P_t(s) = \langle s | M_t M_{t-1} \cdots M_1 | P_0 \rangle,$

where $\langle s|$ are dual basis vectors such that the dot product $\langle s | s' \rangle = \delta_{s s'}$. In this sense, we can say that this product is a TN representation (TNR) of the vector $|P_t\rangle$.
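The chain contraction of Eqs. (10) and (11) can be sketched numerically; the transition matrix and initial distribution below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Propagate an initial distribution |P_0> through three identical
# column-stochastic matrices, i.e. contract the chain M M M |P_0>.
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])   # M[s', s] = P(s' | s), columns sum to 1
P0 = np.array([1.0, 0.0])    # initial distribution

Pt = P0
for _ in range(3):           # one matrix product per time-step
    Pt = M @ Pt

assert np.isclose(Pt.sum(), 1.0)  # stochasticity preserves normalisation
```

Each application of `M` is one tensor contraction in the chain; the result is again a probability distribution over states.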
While the TN structure of Eq. (11) is simple and requires no clarification, more generally it is convenient to specify the contraction pattern of a TN via a graph, using a standard diagrammatic notation. In such a notation, rank-$n$ tensors are represented as shapes with $n$ legs, and contractions are indicated by joining the appropriate legs together. In this notation, Eq. (11) reads,
(12) 
As suggested by the chosen notation, Eq. (11) is exactly the TNR for a probability distribution over states produced by a Markovian dynamics. In that case, the components of $M_t$ are equal to the probabilities of state transitions,

(13) $(M_t)_{s' s} = P(S_t = s' \,|\, S_{t-1} = s),$

while the components of $|P_0\rangle$ give the initial probability distribution over states. Any vector $|P\rangle$ that is a convex combination of the particular basis $\{|s\rangle\}$ can be interpreted as a probability distribution over states via the components $P(s) = \langle s | P \rangle$. This implies their normalisation as $\langle \mathbb{1} | P \rangle = 1$, where $\langle \mathbb{1}|$ is the flat-vector for the basis, $\langle \mathbb{1} | = \sum_s \langle s |$. The matrices, $M_t$, which can be considered as elements of $\mathcal{V} \otimes \mathcal{V}^*$, obey a related condition,

(14) $\langle \mathbb{1} | M_t = \langle \mathbb{1} |,$

which allows for the interpretation of their components as conditional probabilities.
A more complicated TNR relevant for dynamics is offered by the matrix product state (MPS) representation, also known as the tensor train decomposition (Oseledets, 2011), of HMMs. In a system described by an HMM, the dynamics is Markovian and thus governed by transition probabilities of the form of Eq. (13). However, information about the state, $S_t$, cannot be accessed directly, and only partial information is revealed through the observables at each time step. We will denote these observables as $R_t$, since they will correspond to the rewards in the FMDPs considered later.
In HMMs, the relevant probabilities are then $P(R_T = r_T, \ldots, R_1 = r_1)$, which are related to the state transition probabilities through marginalisation;

(15) $P(S_t = s' \,|\, S_{t-1} = s) = \sum_{r \in \mathcal{R}} P(S_t = s', R_t = r \,|\, S_{t-1} = s).$

The marginalisation Eq. (15) can also be viewed as a decomposition of the matrices $M_t$, corresponding to the transitions of Eq. (13), into a sum of matrices $M_t^{r}$, with components $(M_t^{r})_{s' s} = P(S_t = s', R_t = r \,|\, S_{t-1} = s)$,

(16) $M_t = \sum_{r \in \mathcal{R}} M_t^{r}.$
This decomposition implies the introduction of an encoding for the possible observations, $r \mapsto |r\rangle$, with $\langle r | r' \rangle = \delta_{r r'}$, which can be achieved in the same way as for states. Taking tensor products of these vectors produces an encoding for the possible observations over time, $|r_T, \ldots, r_1\rangle = |r_T\rangle \otimes \cdots \otimes |r_1\rangle$, which form a basis of the corresponding vector space. The vectors representing probability distributions over observations are elements of this space, and obey the normalisation $\langle \mathbb{1} | P \rangle = 1$ with respect to its flat-vector.
To build a TNR for the HMM, one can consider the set of matrices, $\{M_t^{r}\}_{r \in \mathcal{R}}$, as a rank-$3$ tensor. The decomposition, Eq. (16), can then be expressed as an equation relating tensors. Graphically, representing the flat-vector as a vertical line, Eq. (16) reads,
(17) 
Since the flat-vector causes the marginalisation over the observable, one can remove the marginalisation by removing this vector. Thus, the probability of making a particular set of observations in an HMM can be expressed as,
(18) 
This is exactly the structure of an MPS (Schollwöck, 2011). The TNR of the observation probabilities in an HMM is thus,
(19) 
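To make the MPS form of Eq. (18) concrete, the sketch below (with an illustrative two-state, two-observation HMM not taken from the paper) computes the probability of an observation sequence as a product of the observation-resolved matrices of Eq. (16):

```python
import numpy as np

# M[r][s', s] = P(s', r | s): observation-resolved transition matrices.
M = np.zeros((2, 2, 2))
M[0] = [[0.6, 0.2], [0.1, 0.3]]
M[1] = [[0.2, 0.3], [0.1, 0.2]]
# Eq. (16): summing over r recovers a stochastic transition matrix.
assert np.allclose(M.sum(axis=0).sum(axis=0), 1.0)

P0 = np.array([1.0, 0.0])   # initial state distribution
flat = np.ones(2)           # flat-vector <1|

def prob_obs(rs):
    """Contract the MPS of Eq. (18) for observation sequence rs."""
    v = P0
    for r in rs:            # one M[r] per time-step
        v = M[r] @ v
    return flat @ v         # close with the flat-vector

# probabilities over all length-2 observation sequences sum to one
total = sum(prob_obs((r1, r2)) for r1 in (0, 1) for r2 in (0, 1))
assert np.isclose(total, 1.0)
```

Here the "bond" carried between time-steps is the hidden-state distribution itself, so the bond dimension equals the number of hidden states.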
2.3 Tensor Networks for Time Integrated Observables
When considering dynamics described by an HMM, one is often interested in time-integrated observables. Such objects can be represented easily in terms of TNs, which in turn allows for the TNR of averages or higher-order moments.
To specialise to the case of MDPs, we will consider a HMM where we observe a reward $R_t$ at each discrete time-step, $t$, and wish to represent the expected return, $\mathbb{E}[G]$, as a TN. To achieve this, one begins by introducing the reward operator, $\hat{r}$, which is diagonal in the encoding basis,

(20) $\hat{r} = \sum_{r \in \mathcal{R}} r \, |r\rangle\langle r|,$

such that it can be used to produce the explicit values observed at a given time.
To encode the return of an episode, $G$, one defines the operator, $\hat{G}$, acting on the vector space spanned by the observation sequences,

(21) $\hat{G} = \sum_{t=1}^{T} \hat{r}_t,$

where $\hat{r}_t$ is the operator acting as $\hat{r}$ on the $t$-th vector space in the tensor product, and as the identity otherwise.
An appropriate TNR for $\hat{G}$ is offered by the matrix product operator (MPO) (Perez-Garcia et al., 2007; Crosswhite & Bacon, 2008; Pirvu et al., 2010). The MPO is defined analogously to the MPS,
(22) 
This has the same contraction pattern as the MPS, as is clear from the corresponding graphical equation, e.g.,
(23) 
The sets of matrices can be chosen using standard construction methods (Schollwöck, 2011): Consider the operator-valued matrix,

(24) $W = \begin{pmatrix} \hat{1} & 0 \\ \hat{r} & \hat{1} \end{pmatrix}.$

When multiplied, components of such matrices are to be combined via tensor products:

(25) $W^2 = \begin{pmatrix} \hat{1} \otimes \hat{1} & 0 \\ \hat{r} \otimes \hat{1} + \hat{1} \otimes \hat{r} & \hat{1} \otimes \hat{1} \end{pmatrix},$

(26) $W^T = \begin{pmatrix} \hat{1}^{\otimes T} & 0 \\ \sum_{t=1}^{T} \hat{r}_t & \hat{1}^{\otimes T} \end{pmatrix}.$

Further defining the operator-valued boundary-vectors:

(27) $\langle l | = \begin{pmatrix} 0 & \hat{1} \end{pmatrix},$

(28) $| r \rangle = \begin{pmatrix} \hat{1} \\ 0 \end{pmatrix},$

the return operator for the whole episode, $\hat{G}$, can be represented as a product of such matrices,

(29) $\hat{G} = \langle l | \, W^T \, | r \rangle.$

In the basis $\{|r\rangle\}$, this corresponds to the previously defined MPO form (22) with matrices:

(30) $W_t^{r' r} = \delta_{r' r} \begin{pmatrix} 1 & 0 \\ r & 1 \end{pmatrix}, \quad 1 < t < T,$

(31) $W_T^{r' r} = \delta_{r' r} \begin{pmatrix} r & 1 \end{pmatrix},$

(32) $W_1^{r' r} = \delta_{r' r} \begin{pmatrix} 1 \\ r \end{pmatrix}.$
With this definition, the TNR of the expected return can be obtained directly from its expression in terms of operators and vectors,

(33) $\mathbb{E}[G] = \langle \mathbb{1} | \, \hat{G} \, | P \rangle,$

where $|P\rangle$ is the MPS of observation probabilities and $\langle \mathbb{1} |$ the corresponding flat-vector.
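The contraction of the return MPO with the observation MPS can be sketched as follows; the HMM is the same illustrative one used above (not from the paper). Carrying the two-component MPO bond through time computes $\mathbb{E}[G]$ in a single pass:

```python
import numpy as np

# Illustrative HMM: M[r][s', s] = P(s', r | s), rewards r in {0, 1}.
M = np.zeros((2, 2, 2))
M[0] = [[0.6, 0.2], [0.1, 0.3]]
M[1] = [[0.2, 0.3], [0.1, 0.2]]
rewards = np.array([0.0, 1.0])
Mfull = M.sum(axis=0)            # marginal transition matrix, Eq. (16)
P0 = np.array([1.0, 0.0])
flat = np.ones(2)

# Contract the bond-dimension-2 MPO W = [[1, 0], [r, 1]] through time:
# v0 carries the plain distribution, v1 the reward-weighted accumulation.
T = 3
v0, v1 = P0.copy(), np.zeros(2)
for _ in range(T):
    v1 = sum(rewards[r] * (M[r] @ v0) for r in range(2)) + Mfull @ v1
    v0 = Mfull @ v0

expected_return = flat @ v1      # E[sum_{t=1}^T R_t]

# brute-force cross-check: sum over all reward sequences
def prob(rs):
    v = P0
    for r in rs:
        v = M[r] @ v
    return flat @ v

brute = sum(prob(rs) * sum(rs) for rs in np.ndindex(2, 2, 2))
assert np.isclose(expected_return, brute)
```

The two bond components are exactly the two rows of $W$: one identity channel and one channel that has already "fired" a reward operator.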
Contracting together the constituent TNRs of $\langle \mathbb{1} |$, $\hat{G}$ and $|P\rangle$ gives the overall TN expression. For example, for a short episode this is given by,
(34) 
Similarly, representations of higher-order observables such as $\mathbb{E}[G^2]$ can be constructed directly from,

(35) $\mathbb{E}[G^2] = \langle \mathbb{1} | \, \hat{G}^2 \, | P \rangle,$

as can TNRs of any other observables for which there are MPO representations of the relevant operators.
3 Tensor Network Representations for FMDPs
As with HMMs, the expected return of an FMDP can be expressed as a TN by using an MPS representation of the reward probabilities and an MPO representation of the return operator. However, in order to perform policy optimisation using the tools developed for TNs, the dependence of the expected return on the policy must be extracted explicitly.
Similar to when moving from Markovian dynamics to HMMs, this can be achieved by relating the relevant probabilities in the FMDP to those of the HMM via marginalisation;

(36) $P(S_{t+1} = s', R_{t+1} = r \,|\, S_t = s) = \sum_{a \in \mathcal{A}} p(s', r \,|\, s, a) \, \pi_\theta(a \,|\, s).$
By expressing this as a relationship between tensors, one can extend the TNR of the expected return used for HMMs, Eq. (34), to FMDPs.
To begin, we rewrite Eq. (36) as,

(37) $(M_t^{r})_{s' s} = \sum_{a \in \mathcal{A}} (M_t^{r a})_{s' s} \, \pi_\theta(a \,|\, s),$

where $M_t^{r a}$ are sets of matrices labelled by both a reward $r$ and an action $a$, with components $(M_t^{r a})_{s' s} = p(s', r \,|\, s, a)$. As for states and rewards, an encoding of actions into vectors $|a\rangle$ is implied. Note that, because the index with value $s$ appears twice but is not summed over (i.e. it is not part of a contraction), Eq. (37) does not yet provide a TN decomposition of Eq. (36), as can be seen clearly from the graphical notation;
(38) 
Note that while for convenience we have used the same tensor label for both the rank-$3$ and rank-$4$ tensors, these are distinct objects, as indicated by the different colours.
To express the decomposition Eq. (37) in terms of tensors and their contractions alone, one must account for the fact that the information about the state at the previous time-step, $s$, is used for conditioning twice; once in the policy and once in the dynamics. In TNs, such additional conditioning requires the inclusion of copy tensors (Biamonte et al., 2011; Glasser et al., 2018). Defined with respect to a chosen basis, the components of a copy tensor have value one if all indices are equal, and zero otherwise. The copy tensor we will consider is the rank-$3$ copy tensor, whose components can be defined as a set of matrices,

(39) $(\delta^{(3)}_i)_{jk} = \delta_{ij}\,\delta_{jk}.$
For example,
(40) 
Graphically, we denote the copy tensor as a black circle,
(41) 
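The action of the rank-3 copy tensor can be sketched numerically; the dimension and distribution below are illustrative:

```python
import numpy as np

# Rank-3 copy tensor: delta[i, j, k] = 1 iff i == j == k, as in Eq. (39).
d = 3
delta = np.zeros((d, d, d))
for i in range(d):
    delta[i, i, i] = 1.0

p = np.array([0.5, 0.3, 0.2])   # a distribution over the copied variable

# Contracting one leg with p "duplicates" the variable: the result is the
# joint in which both copies take the same value, i.e. diag(p).
joint = np.einsum('ijk,k->ij', delta, p)
assert np.allclose(joint, np.diag(p))
```

This duplication is what lets the same state index feed both the policy tensor and the dynamics tensor in the decomposition that follows.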
In general, including multiple copy tensors will allow the construction of a TNR for any joint probability distribution via decomposition using the chain rule. In such a TNR, each variable will be associated to a number of copy tensors equal to the number of times it is reused for conditioning. Thus, the TNR for a joint probability distribution decomposed via the chain-rule is in general a two-dimensional, hierarchical TN. In an FMDP, this structure is simplified considerably by the Markovian assumption, so that only a single state-copy tensor is required per time-step, and a one-dimensional TNR (an MPS) results.
With the copy tensor defined, the decomposition Eq. (37) can be expressed in terms of tensors as,

$(M_t^{r})_{s' s} = \sum_{a, \bar{s}, \tilde{s}} (M_t^{r a})_{s' \bar{s}} \, \pi_\theta(a \,|\, \tilde{s}) \, \delta_{s \bar{s} \tilde{s}},$

where $\delta_{s \bar{s} \tilde{s}}$ is the rank-$3$ copy tensor.
In the graphical notation this is,
(42) 
The notation for this decomposition can be further simplified by grouping the copy-tensor and the rank-$4$ tensor together, while also considering the indices corresponding to the state and action variables, $s$ and $a$, as a single compound index;
or,
(43) 
Written in this way, one can see that the rank-$3$ tensors appearing in the MPS representation of the reward probabilities can be considered as the contraction of a vector, $|\pi\rangle$, with components given by the policy, and a rank-$4$ tensor encoding the dynamics. This is exactly the MPS representation that results from the application of an operator, $\hat{E}$, expressed as an MPO, to a vector $|\pi\rangle$, expressed as an MPS, i.e. $|P\rangle = \hat{E}|\pi\rangle$. Graphically, this can be seen by decomposing every tensor in the MPS expression of the reward probabilities, Eq. (19). For example, this gives,
(44) 
In this expression, the vector, $|\pi\rangle$, is represented as a product-state MPS, i.e. an MPS with bond dimension $1$:
(45) 
or, for a short episode,
(46) 
The vector $|\pi\rangle$ contains all information about the policy during the FMDP. Information about the dynamics is instead contained in the operator $\hat{E}$, which has the MPO representation,

(47) 

where for convenience the tensor at the initial time is defined to encode the initial probability distribution, $p_0$.
With the decomposition $|P\rangle = \hat{E}|\pi\rangle$, the desired TNR of the expected return is complete. For example, the expected return can be written graphically as,
(48) 
4 DMRG Approach to Policy Optimisation
With an objective function expressed as a TN, an approach to optimisation that has proved effective is to optimise just one or two tensors at a time, while keeping the other tensors, the “environment”, fixed. (Note that this is distinct from what is commonly called the environment in RL, which we refer to as the system dynamics.) By passing (sweeping) through the tensors that contain the variational parameters, one can perform the optimisation iteratively. This allows for the efficient use of computational resources and has been very successful in the context of one-dimensional quantum many-body systems, where it is known as DMRG. Building on this basic idea, a large variety of techniques have been developed to perform sophisticated, state-of-the-art optimisations.
In the case of policy optimisation using a TNR of the expected return, an approach inspired by DMRG (Schollwöck, 2011) can be used, see Fig. 1. In such an approach, each tensor of the policy is visited in turn. At the tensor for a given time-step, the expected return is calculated as a linear map onto this tensor by evaluating the environment, i.e. contracting together all other tensors in the network that are considered fixed at this iteration of the optimisation. In the case we are considering, the environment can be evaluated exactly, though in general approximations are required. The policy is then updated by finding the tensor that maximises the return for this fixed environment, subject to the desired constraints that the tensor be normalised appropriately and have components in $[0, 1]$.
In typical DMRG applications, a back-and-forth sweeping pattern is used to optimise the tensors. While in general many sweeps might be required to reach convergence, in the simple setup we consider a single DMRG sweep backwards in time is sufficient to find the optimal policy exactly.
To see this, consider splitting the expected return at time $t'$:

$\mathbb{E}[G] = \mathbb{E}\Big[\sum_{t=1}^{t'} R_t\Big] + \mathbb{E}\Big[\sum_{t=t'+1}^{T} R_t\Big].$

The first of these terms can be calculated using only the probability distribution over the first $t'$ rewards. Decomposing this probability with the chain-rule forwards in time, i.e. conditioning occurs only on past states and actions, allows for the corresponding expectation value to be computed starting from the initial state distribution, $p_0$, which is assumed known and policy-independent, using only the policy up to time $t'$. The second of these terms instead requires the probabilities of the remaining rewards. Again decomposing this forwards in time, the expected value of the return can be calculated. However, in this case the initial (marginal) distribution over states is $P(S_{t'} = s)$. Calculating this initial distribution requires knowledge of the policy until time $t'$, and thus the policy of the whole episode is required.
Defining $\theta_{<t'}$ ($\theta_{\geq t'}$) as a vector containing all the parameters of the policy for times before (from) $t'$, the dependence on the policy of the two terms in the above decomposition implies that the optimal policy satisfies the simultaneous equations:

(49) $\theta^*_{<t'} = \operatorname*{arg\,max}_{\theta_{<t'}} \, \mathbb{E}_{(\theta_{<t'}, \, \theta^*_{\geq t'})}[G],$

(50) $\theta^*_{\geq t'} = \operatorname*{arg\,max}_{\theta_{\geq t'}} \, \mathbb{E}_{\theta_{\geq t'}}\Big[\sum_{t=t'+1}^{T} R_t \,\Big|\, S_{t'} = s\Big] \quad \forall s \in \mathcal{S}.$
The optimal policy can therefore be found by first solving the second equation, thus finding $\theta^*_{\geq t'}$, and then substituting it into the first one and solving for $\theta^*_{<t'}$. Since the choice of $t'$ was arbitrary in the above argument, one can proceed recursively starting from $t' = T$. This is the usual one-site DMRG-type algorithm, starting from the right-most site.
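For a small tabular FMDP, the recursive solution described above reduces to ordinary backward induction, which the following sketch illustrates (the MDP, sizes and names are illustrative, not from the paper):

```python
import numpy as np

# Illustrative finite MDP: optimise the policy tensor at t = T-1 first,
# then t = T-2, and so on — a single backward sweep.
nS, nA, T = 3, 2, 4
rng = np.random.default_rng(0)
P = rng.random((nA, nS, nS))        # P[a, s', s] = p(s' | s, a)
P /= P.sum(axis=1, keepdims=True)   # normalise over the next state s'
R = rng.random((nA, nS))            # expected reward for action a in state s

V = np.zeros(nS)                    # value beyond the horizon is zero
policy = np.zeros((T, nS), dtype=int)
for t in reversed(range(T)):        # sweep backwards in time
    Q = R + np.einsum('aps,p->as', P, V)  # Q[a, s] = r(s,a) + E[V(s')]
    policy[t] = Q.argmax(axis=0)          # greedy tensor at this site
    V = Q.max(axis=0)

# V[s] is now the optimal expected return starting from state s at t = 0
assert V.shape == (nS,)
```

Each iteration plays the role of optimising one policy tensor against its exactly evaluated environment; the degeneracy discussed in Sect. 5 appears whenever several actions attain the same maximum of Q.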
5 Example: Conditioned Dynamics for Rare Trajectory Generation
5.1 Conditioned Dynamics and FMDPs
Often when studying stochastic dynamics, the particular trajectories of interest occur only rarely (Bolhuis et al., 2002; Garrahan, 2018). In many cases, analytical study of the trajectory statistics is intractable, and we must resort to numerical sampling. An important problem is finding an alternative dynamics which generates rare trajectories efficiently (Borkar et al., 2003; Majumdar & Orland, 2015; Chetrite & Touchette, 2015; Jack, 2019). This can be phrased as an FMDP with an appropriate reward structure, such that the trajectories generated by its optimal policies satisfy the desired conditions on the original dynamics.
An elementary example of rare events are “stochastic excursions” (Majumdar & Orland, 2015), where a simple random walker is conditioned to stay above a certain line and at a given time must return to this line. For a symmetric random walker, the probability of an excursion decays rapidly with the episode length. In terms of an FMDP, the conditioned dynamics can be encoded as the solution of an optimisation problem where movement below the zero line, or failure to reach it at $t = T$, is given a negative reward. Such a solution will be highly degenerate, and there are many different possible choices of reward structure that will lead to the same space of solutions.
For an episode with fixed termination time, $T$, the positions of the random walker, $x$, are encoded in the states. The action space is $\mathcal{A} = \{\downarrow, \uparrow\}$, where the actions correspond to a down/up move of the walker, respectively. We assume the initial state distribution fixed at zero, $p_0(x) = \mathbb{1}[x = 0]$, where $\mathbb{1}[\cdot]$ is the indicator function taking on the value one when the argument is true and zero otherwise. For illustration, we consider a dynamics where stochasticity is included only through the policy, though we emphasise that the TN method we apply for FMDPs has no such restriction. Indeed, one can consider not only general Markovian stochastic processes within the same framework, but also a variety of different rare events, such as meanders or bridges, by choosing the reward structure appropriately.
For the generation of excursions, the reward structure we choose is as follows: when $t < T$, a reward of $0$ is given for $x_t \geq 0$ and a fixed negative reward $r_- < 0$ otherwise; at $t = T$, a reward of $0$ is given for $x_T = 0$ and $r_-$ otherwise. Thus the return satisfies $G \leq 0$, and the dynamics of the problem are governed by the following update rules. When $t + 1 < T$:

(51) $x_{t+1} = x_t \pm 1 \quad \text{for } a_t = \uparrow / \downarrow,$

(52) $r_{t+1} = \begin{cases} 0 & x_{t+1} \geq 0, \\ r_- & x_{t+1} < 0, \end{cases}$

and when $t + 1 = T$:

(53) $x_T = x_{T-1} \pm 1 \quad \text{for } a_{T-1} = \uparrow / \downarrow,$

(54) $r_T = \begin{cases} 0 & x_T = 0, \\ r_- & x_T \neq 0. \end{cases}$
By assumption, the system dynamics takes on the value one when the above relations are satisfied, and zero otherwise. Under a uniformly random policy, where up/down moves are equally likely regardless of the state, the dynamics will be that of an unconditioned random walker. Under an optimal policy, for which $\mathbb{E}[G] = 0$, the walker will satisfy the excursion condition.
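The update and reward rules above can be sketched as a single step function; the penalty value (-1) and the function name are illustrative choices, not fixed by the paper:

```python
# One step of the excursion walker: deterministic position update with
# the reward structure penalising x < 0, and a final penalty unless x_T = 0.
def step(x, a, t, T, penalty=-1.0):
    x_new = x + (1 if a == 1 else -1)        # a = 1: up, a = 0: down
    if t + 1 < T:
        r = 0.0 if x_new >= 0 else penalty   # stay at or above the line
    else:
        r = 0.0 if x_new == 0 else penalty   # must return to zero at t = T
    return x_new, r

# A perfect length-4 excursion: up, up, down, down collects zero penalty.
x, G = 0, 0.0
for t, a in enumerate([1, 1, 0, 0]):
    x, r = step(x, a, t, T=4)
    G += r
assert x == 0 and G == 0.0
```

Any trajectory achieving the maximal return G = 0 is, by construction, an excursion.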
5.2 DMRG for Excursions
To illustrate the DMRG procedure for policy optimisation, we consider a simple DMRG algorithm to solve the problem of generating excursions. Initially, a policy is generated by randomly selecting real numbers. These are scaled and normalised appropriately to form a valid policy, before proceeding with the policy optimisation.
We perform a single sweep either forwards-in-time or backwards-in-time, thus optimising the policy tensors in order of increasing or decreasing $t$, respectively. The total optimisation consists of $T$ iterations. At a given iteration, the environment is evaluated by contracting together all tensors in the network excluding the policy tensor currently being optimised. A constrained optimisation is then performed to minimise the negative expected return, shifted appropriately so that the minimum is zero. We apply a simple approach of gradient-free optimisation (Powell’s method), and satisfy the constraints directly by applying the necessary scaling and normalisation to an input set of real values. While this method is slow, it is sufficient for this illustration, and can easily be replaced with more sophisticated methods for solving the necessary constrained optimisation.
Using the policy found by these optimisations, trajectories can be generated from the FMDP which obey the excursion condition, see Fig. 2. Trajectories sampled with four different policies are shown. In the first (second) row, the policy is determined via a forwards-in-time (backwards-in-time) sweep. The columns show how the optimisation progresses: in the first column, half the policy tensors have been optimised, and in the second a full sweep has been completed. As can be seen, in the full-sweep backwards case (lower right) all trajectories generated by the policy are excursions, as expected (solid blue lines); in contrast, a single full sweep forwards can fail to find a policy that generates excursions (top right), though randomly restarting the policy optimisation allows one to post-select a deterministic policy that generates an excursion.
Additionally, the backwards sweep discovers a policy that is stochastic (non-deterministic), while the policy found during the forwards sweep is deterministic in every case: a side effect of the degeneracy of the optimal policies. Since in the backwards sweep the policy in the future of each step of the iteration is already optimal, there are multiple actions that produce the same expected return, between which the optimisation algorithm need not choose uniquely. In contrast, due to the random initialisation, each step of the forwards sweep sees a distinct expected return for each action it can take, and thus the optimisation algorithm focuses precisely on whichever action is currently seen as best according to the (incorrect) future policy.
6 Conclusions
We have introduced a tensor network formulation for Markov decision processes, along with a policy optimisation algorithm based on those usually applied to matrix product states. TNs and the associated optimisation algorithms are extremely flexible and can certainly be adapted to accommodate more sophisticated cases beyond the class of MDPs considered here. Possible generalisations include: (i) termination times that can vary between episodes; (ii) continuing MDPs using uniform MPS/transfer matrix methods (Vanderstraeten et al., 2019); (iii) non-Markovian system dynamics or reward structures non-local in time, optimising for a non-Markovian policy; (iv) integration of TNs into standard RL algorithms, such as model-based approaches for unknown system dynamics, see e.g. (Wang et al., 2019), or using a TNR as a natural model of the value function. As such, the formalism we present here lays the groundwork for pursuing a number of avenues of research combining tensor networks with reinforcement learning more broadly.
Acknowledgement
We thank N. Pancotti for useful discussion and reading of the manuscript. The research leading to these results has received funding from the Leverhulme Trust [grant number RPG2018181] and University of Nottingham grant no. FiF1/3. We are grateful for access to the University of Nottingham’s Augusta HPC service. We acknowledge the use of Athena at HPC Midlands+.
References
 Biamonte et al. (2011) Biamonte, J. D., Clark, S. R., and Jaksch, D. Categorical tensor network states. AIP Advances, 1(4):042172, 2011. doi: 10.1063/1.3672009.
 Bolhuis et al. (2002) Bolhuis, P. G., Chandler, D., Dellago, C., and Geissler, P. L. Transition path sampling: throwing ropes over rough mountain passes, in the dark. Annu. Rev. Phys. Chem., 53:291, 2002.
 Borkar et al. (2003) Borkar, V. S., Juneja, S., and Kherani, A. A. Performance analysis conditioned on rare events: An adaptive simulation scheme. Commun. Inf. Syst., 3(4):259–278, 2003.
 Brandão & Horodecki (2015) Brandão, F. G. S. L. and Horodecki, M. Exponential decay of correlations implies area law. Communications in Mathematical Physics, 333(2):761–798, Jan 2015. ISSN 14320916. doi: 10.1007/s0022001422138.
 Chetrite & Touchette (2015) Chetrite, R. and Touchette, H. Nonequilibrium Markov processes conditioned on large deviations. Ann. Henri Poincaré, 2015.
 Crosswhite & Bacon (2008) Crosswhite, G. M. and Bacon, D. Finite automata for caching in matrix product algorithms. Phys. Rev. A, 78:012356, Jul 2008. doi: 10.1103/PhysRevA.78.012356.
 Efthymiou et al. (2019) Efthymiou, S., Hidary, J., and Leichenauer, S. TensorNetwork for Machine Learning. arXiv eprints, art. arXiv:1906.06329, Jun 2019.
 Eisert (2013) Eisert, J. Entanglement and tensor network states. Modeling and Simulation, 3:520, August 2013.
 Eisert et al. (2010) Eisert, J., Cramer, M., and Plenio, M. B. Colloquium: Area laws for the entanglement entropy. Rev. Mod. Phys., 82:277–306, Feb 2010. doi: 10.1103/RevModPhys.82.277.
 Gao et al. (2019) Gao, Z.F., Cheng, S., He, R.Q., Xie, Z. Y., Zhao, H.H., Lu, Z.Y., and Xiang, T. Compressing deep neural networks by matrix product operators. arXiv eprints, art. arXiv:1904.06194, Apr 2019.
 Garrahan (2016) Garrahan, J. P. Classical stochastic dynamics and continuous matrix product states: gauge transformations, conditioned and driven processes, and equivalence of trajectory ensembles. Journal of Statistical Mechanics: Theory and Experiment, 2016(7):073208, jul 2016. doi: 10.1088/17425468/2016/07/073208.
 Garrahan (2018) Garrahan, J. P. Aspects of nonequilibrium in classical and quantum systems: Slow relaxation and glasses, dynamical large deviations, quantum nonergodicity, and open quantum dynamics. Physica A, 504:130–154, 2018. doi: 10.1016/j.physa.2017.12.149.
 Glasser et al. (2018) Glasser, I., Pancotti, N., and Cirac, J. I. From probabilistic graphical models to generalized tensor networks for supervised learning. arXiv eprints, art. arXiv:1806.05964, Jun 2018.
 Glasser et al. (2019) Glasser, I., Sweke, R., Pancotti, N., Eisert, J., and Cirac, I. Expressive power of tensornetwork factorizations for probabilistic modeling. In Wallach, H., Larochelle, H., Beygelzimer, A., d'AlchéBuc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 1496–1508. Curran Associates, Inc., 2019.
 Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., and Levine, S. Soft actorcritic algorithms and applications, 2018.
 Han et al. (2018) Han, Z.Y., Wang, J., Fan, H., Wang, L., and Zhang, P. Unsupervised generative modeling using matrix product states. Phys. Rev. X, 8:031012, Jul 2018. doi: 10.1103/PhysRevX.8.031012.
 Hartmann & Carleo (2019) Hartmann, M. J. and Carleo, G. Neural-network approach to dissipative quantum many-body dynamics. Phys. Rev. Lett., 122:250502, Jun 2019. doi: 10.1103/PhysRevLett.122.250502.
 Huang (2019) Huang, Y. Approximating local properties by tensor network states with constant bond dimension. arXiv preprint arXiv:1903.10048, 2019.
 Jack (2019) Jack, R. L. Ergodicity and large deviations in physical systems with stochastic dynamics. 2019.
 Levine et al. (2019) Levine, Y., Sharir, O., Cohen, N., and Shashua, A. Quantum entanglement in deep learning architectures. Phys. Rev. Lett., 122:065301, Feb 2019. doi: 10.1103/PhysRevLett.122.065301.
 Majumdar & Orland (2015) Majumdar, S. N. and Orland, H. Effective langevin equations for constrained stochastic processes. Journal of Statistical Mechanics: Theory and Experiment, 2015(6):P06039, jun 2015. doi: 10.1088/17425468/2015/06/p06039.
 Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, February 2015. ISSN 00280836.
 Montangero (2018) Montangero, S. Introduction to Tensor Network Methods. Springer International Publishing, 2018.
 Nagy & Savona (2019) Nagy, A. and Savona, V. Variational quantum monte carlo method with a neuralnetwork ansatz for open quantum systems. Phys. Rev. Lett., 122:250501, Jun 2019. doi: 10.1103/PhysRevLett.122.250501.
 Orús (2019) Orús, R. Tensor networks for complex quantum systems. Nature Reviews Physics, 1(9):538–550, 2019. ISSN 25225820. doi: 10.1038/s4225401900867.
 Oseledets (2011) Oseledets, I. V. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011. doi: 10.1137/090752286.
 Perez-Garcia et al. (2007) Perez-Garcia, D., Verstraete, F., Wolf, M. M., and Cirac, J. I. Matrix Product State Representations. Quantum Inf. Comput., 7, quant-ph/0608197, Aug 2007.
 Pirvu et al. (2010) Pirvu, B., Murg, V., Cirac, J. I., and Verstraete, F. Matrix product operator representations. New Journal of Physics, 12(2):025012, feb 2010. doi: 10.1088/13672630/12/2/025012.
 Rem et al. (2019) Rem, B. S., Käming, N., Tarnowski, M., Asteria, L., Fläschner, N., Becker, C., Sengstock, K., and Weitenberg, C. Identifying quantum phase transitions using artificial neural networks on experimental data. Nature Physics, 15(9):917–920, 2019. ISSN 17452481. doi: 10.1038/s4156701905540.
 Schollwöck (2011) Schollwöck, U. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326:96–192, January 2011. doi: 10.1016/j.aop.2010.09.012.
 Schulman et al. (2015) Schulman, J., Levine, S., Abbeel, P., Jordan, M. I., and Moritz, P. Trust region policy optimization. In International Conference on Machine Learning, 2015.
 Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, January 2016. doi: 10.1038/nature16961.
 Stoudenmire & Schwab (2016) Stoudenmire, E. and Schwab, D. J. Supervised learning with tensor networks. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 4799–4807. Curran Associates, Inc., 2016.
 Stoudenmire (2018) Stoudenmire, E. M. Learning relevant features of data with multiscale tensor networks. Quantum Science and Technology, 3(3):034003, apr 2018. doi: 10.1088/20589565/aaba1a.
 Sun et al. (2019) Sun, Z.Z., Peng, C., Liu, D., Ran, S.J., and Su, G. Generative Tensor Network Classification Model for Supervised Machine Learning. arXiv eprints, art. arXiv:1903.10742, Mar 2019.
 Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. MIT Press, second edition, 2018.

 Torlai & Melko (2016) Torlai, G. and Melko, R. G. Learning thermodynamics with Boltzmann machines. Phys. Rev. B, 94:165134, Oct 2016. doi: 10.1103/PhysRevB.94.165134.
 Vanderstraeten et al. (2019) Vanderstraeten, L., Haegeman, J., and Verstraete, F. Tangent-space methods for uniform matrix product states. SciPost Phys. Lect. Notes, pp. 7, 2019. doi: 10.21468/SciPostPhysLectNotes.7.
 Vicentini et al. (2019) Vicentini, F., Biella, A., Regnault, N., and Ciuti, C. Variational neuralnetwork ansatz for steady states in open quantum systems. Phys. Rev. Lett., 122:250503, Jun 2019. doi: 10.1103/PhysRevLett.122.250503.
 Wang et al. (2019) Wang, T., Bao, X., Clavera, I., Hoang, J., Wen, Y., Langlois, E., Zhang, S., Zhang, G., Abbeel, P., and Ba, J. Benchmarking modelbased reinforcement learning. ArXiv, abs/1907.02057, 2019.
 White (1992) White, S. R. Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett., 69:2863–2866, Nov 1992. doi: 10.1103/PhysRevLett.69.2863.
 White (1993) White, S. R. Densitymatrix algorithms for quantum renormalization groups. Phys. Rev. B, 48:10345–10356, Oct 1993. doi: 10.1103/PhysRevB.48.10345.
 Yoshioka & Hamazaki (2019) Yoshioka, N. and Hamazaki, R. Constructing neural stationary states for open quantum manybody systems. Phys. Rev. B, 99:214306, Jun 2019. doi: 10.1103/PhysRevB.99.214306.