Hopf algebras, coproducts and symbols: an application to Higgs boson amplitudes

We show how the Hopf algebra structure of multiple polylogarithms can be used to simplify complicated expressions for multi-loop amplitudes in perturbative quantum field theory and we argue that, unlike the recently popularized symbol-based approach, the coproduct incorporates information about the zeta values. We illustrate our approach by rewriting the two-loop helicity amplitudes for a Higgs boson plus three gluons in a simplified and compact form involving only classical polylogarithms.


Introduction
Higher-order quantum corrections to physical observables in perturbative quantum field theory require the evaluation of so-called Feynman integrals, which arise from the integration over the momenta of (unobserved) quanta exchanged in a physical process. For this reason, analytical results for Feynman integrals are not only interesting in their own right but are also of phenomenological relevance in order to make precise predictions for current and future collider experiments.
In many cases Feynman integrals can be expressed in terms of classical polylogarithms and Nielsen polylogarithms [1]. In the late nineties it was realized however that for certain multi-loop integrals new classes of transcendental functions arise that can no longer be expressed in terms of the classical polylogarithm functions, e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]. It was soon realized that many of the new functions appearing in computations at higher loop orders are in fact special cases of a more general class of functions going under the name of multiple polylogarithms in the mathematical literature [37,38]. While multiple polylogarithms do not account for all the classes of functions that can appear in Feynman integral computations (in some cases elliptic functions were encountered, see e.g., Ref. [39]), they are assumed to cover large classes of phenomenologically interesting Feynman integrals.
Just like their classical analogues, not all multiple polylogarithms are independent: they satisfy (complicated) relations among themselves. These functional equations make it possible to express a given combination of polylogarithms in a multitude of ways of varying complexity. Thus, while simple and compact analytical results for Feynman integrals are highly desirable, the simplicity of a result might be hidden behind a web of functional equations. A systematic approach and an organizing principle to deal with the functional equations governing the combinatorial structure of (multiple) polylogarithms is therefore a valuable tool for the study of Feynman integrals at higher loop orders.
In physics a first step in this direction was made in Ref. [40] with the simplification of the six-point remainder function in N = 4 Super Yang-Mills computed in Refs. [41,42]. The main tool used to achieve this simplification was the so-called symbol [43], which provides a way to map the combinatorics of functional equations among iterated integrals to the combinatorics of a certain tensor algebra. Since their introduction, symbols have seen many applications in physics [44,45,46,47,48,49,50,51,52,53]. In particular, by now the symbols of all two-loop MHV remainder functions [54] and of the two-loop six-point NMHV amplitude [55] are completely known, while at three loops the symbols of the remainder functions for the hexagon in general kinematics [56] and for the octagon in special kinematics [46] are known up to some free parameters that could not be fixed from general considerations. However, only in the latter octagon case is an integrated form of the symbol also known. Indeed, while the computation of the symbol of a transcendental function is a straightforward and algorithmic procedure, the inverse problem (sometimes called the integration of a symbol) of finding in an algorithmic way a function that matches a given symbol is currently still open, despite recent advances in determining a class of functions that can reproduce a given symbol [57]. One of the problems one encounters during the integration procedure is that the symbol map is not injective. In particular, all terms proportional to (multiple) ζ values or powers of π are mapped to zero by the symbol.
The aim of this paper is to present an alternative to, or rather an extension of, the naive symbol approach used in physics so far. The cornerstone of our approach is the coproduct on multiple polylogarithms introduced by Goncharov in Ref. [58], augmented by some ideas introduced in a recent paper by Brown [59]. The coproduct has the advantage that it does not lose as much information as the symbol, while still reproducing the latter in a specific limit. In particular, (multiple) ζ values are not mapped to zero by the coproduct, thus providing valuable additional information about the function, information that could not be provided by the symbol alone. While we fall short of a full proof of some of our claims, we present accumulating evidence that our approach works in practice by applying it to several non-trivial cases. In particular, we consider the two-loop helicity amplitudes for a Higgs boson plus three gluons computed in Refs. [60,61], where the results have been expressed as a complicated combination of two-dimensional harmonic polylogarithms. We show that the results of Refs. [60,61] can be rewritten in a compact form that only involves classical polylogarithms with simple rational functions as arguments. This result confirms and extends a similar observation made in Ref. [53], where it was shown that the symbol of the weight-four leading-color contribution of the two-loop helicity amplitudes for a Higgs boson plus three gluons matches the symbol of the form factor of three gluons computed in planar N = 4 Super Yang-Mills.
The outline of the paper is as follows: In Sections 2 and 3 we provide a short review of our main actors, multiple polylogarithms and symbols. In Section 4 we give a pedestrian introduction to some concepts of modern algebra that are put into action in Section 5, where we show how the Hopf algebra of Ref. [58] can be used to simplify complicated expressions involving multiple polylogarithms. This new technique is illustrated on some simple examples in Section 6, while in Section 7 we apply our tool to rewrite the helicity amplitudes for a Higgs boson plus three gluons computed in Refs. [60,61] in a simplified form. In Section 8 we draw our conclusions. The appendices contain some technical results omitted throughout the main text.


Multiple polylogarithms
Classical polylogarithms are defined recursively via the iterated integral

  Li_n(z) = ∫_0^z (dt/t) Li_{n−1}(t) ,  (2.1)

with Li_1(z) = −ln(1 − z). While these functions are sufficient to describe large classes of Feynman integrals, it is known that especially multi-loop multi-scale integrals can give rise to new classes of functions. Among these new classes of functions are the so-called multiple polylogarithms, a multi-variable extension of Eq. (2.1) defined recursively via the iterated integral [37,38]

  G(a_1, …, a_n; z) = ∫_0^z dt/(t − a_1) G(a_2, …, a_n; t) ,  (2.2)

with G(z) = 1 and where a_i, z ∈ C (they can be either constants or variables in the following). In the special case where all the a_i's are zero, we define, using the obvious vector notation \vec{a}_n = (a, …, a),

  G(\vec{0}_n; z) = (1/n!) ln^n z .  (2.3)
Multiple polylogarithms can equally well be defined as nested sums,

  Li_{m_1,…,m_k}(x_1, …, x_k) = Σ_{0<n_1<n_2<…<n_k} x_1^{n_1} ⋯ x_k^{n_k} / (n_1^{m_1} ⋯ n_k^{m_k}) .  (2.6)

Note that we are using Goncharov's original summation convention [37]; other authors define Li_{m_1,…,m_k}(x_1, …, x_k) using the reverse summation convention instead, i.e., n_1 > ⋯ > n_k. The G and Li functions define in fact the same class of functions and are related by an explicit change of arguments. It is possible to find closed expressions for special classes of multiple polylogarithms in terms of classical polylogarithm functions, e.g., for a ≠ 0,

  G(\vec{0}_n; z) = (1/n!) ln^n z ,   G(\vec{a}_n; z) = (1/n!) ln^n(1 − z/a) ,
  G(\vec{0}_{n−1}, a; z) = −Li_n(z/a) ,   G(\vec{0}_n, \vec{a}_p; z) = (−1)^p S_{n,p}(z/a) ,

where S_{n,p} denotes the Nielsen polylogarithm [1]. All the notations so far follow the conventions used in physics. Some of the formulas used later on however take a nicer form in a different notation commonly used in the mathematical literature,

  I(a_0; a_1, …, a_n; a_{n+1}) = ∫_{a_0}^{a_{n+1}} dt/(t − a_n) I(a_0; a_1, …, a_{n−1}; t) .  (2.9)

The notations (2.2) and (2.9) are related by (note the reversal of the arguments)

  G(a_n, …, a_1; a_{n+1}) = I(0; a_1, …, a_n; a_{n+1}) .  (2.10)

The iterated integrals defined in Eq. (2.9) are slightly more general than the ones usually defined by physicists, as they allow one to choose the base point of the integration freely. It is nevertheless easy to convert every integral with a generic base point a_0 into a combination of iterated integrals with base point 0. An example will clarify this. First, it is easy to see that at weight one we have

  I(a_0; a_1; a_2) = ln((a_2 − a_1)/(a_0 − a_1)) = I(0; a_1; a_2) − I(0; a_1; a_0) .  (2.11)

Starting from weight two the relation is more complicated because of the nestedness of the integration, e.g.,

  I(a_0; a_1, a_2; a_3) = I(0; a_1, a_2; a_3) − I(0; a_1, a_2; a_0) − I(0; a_1; a_0) [ I(0; a_2; a_3) − I(0; a_2; a_0) ] .  (2.12)

In the rest of the paper we mostly use the 'I' notation of the mathematical literature, as it makes most of the formulas much simpler, keeping in mind that we can easily recover the 'G' notation via the above procedure. Just like their classical analogues, multiple polylogarithms satisfy a large number of functional equations among themselves.
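As a concrete illustration of the nested-sum representation in Goncharov's convention, the product of two classical polylogarithms decomposes by splitting the double sum into the regions n_1 < n_2, n_1 > n_2 and n_1 = n_2 (a 'stuffle'-type relation): Li_a(x) Li_b(y) = Li_{a,b}(x, y) + Li_{b,a}(y, x) + Li_{a+b}(x y). The following sketch checks this numerically with truncated series; the function names are ours, not from any library.

```python
from math import isclose, log

def li(m, x, N=400):
    """Classical polylogarithm Li_m(x) from its truncated series."""
    return sum(x**n / n**m for n in range(1, N + 1))

def li2fold(m1, m2, x1, x2, N=400):
    """Double polylogarithm, Goncharov's convention: sum over 0 < n1 < n2."""
    return sum(x1**n1 / n1**m1 * x2**n2 / n2**m2
               for n2 in range(2, N + 1) for n1 in range(1, n2))

x, y = 0.5, 0.3
# split the product of the two sums into n1 < n2, n1 > n2 and n1 = n2
lhs = li(2, x) * li(3, y)
rhs = li2fold(2, 3, x, y) + li2fold(3, 2, y, x) + li(5, x * y)
assert isclose(lhs, rhs, rel_tol=1e-9)
assert isclose(li(1, 0.5), log(2), rel_tol=1e-9)   # Li_1(z) = -ln(1-z)
```

For |x|, |y| < 1 the truncation error is exponentially small, so a few hundred terms suffice for double precision.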
When expressing a Feynman integral in terms of multiple polylogarithms, we can therefore arrive at a complicated combination of multiple polylogarithms, which, if the corresponding functional equations were known, could potentially be reduced to a much simpler expression. While these functional equations are unknown in general, they can often be circumvented in practice by using the so-called symbol, which we will review in the next section.

Symbols
In this section we give a short review of symbols [43]. Symbols were first introduced in physics in Ref. [40], where they were used to simplify the six-point remainder function in N = 4 Super Yang-Mills computed in Refs. [41,42]. The main idea is to map a (complicated) combination of multiple polylogarithms to a certain tensor algebra over the group of rational functions (the tensors being called symbols) such that, at least conjecturally, all the functional equations among the polylogarithms are mapped to simple algebraic identities in the tensor algebra. Currently, two different definitions of symbols are in use in physics:
1. In Ref. [40] the symbol of a transcendental function F_w(x_1, …, x_n) of weight w in the variables x_1, …, x_n was defined recursively by considering the total differential of the function F_w. More precisely, if the total differential of F_w can be written in the form

  dF_w = Σ_i F_{i,w−1} d ln R_i ,  (3.1)

where the F_{i,w−1} are transcendental functions of weight w − 1 and the R_i are rational functions in the variables x_1, …, x_n, then the symbol of F_w can be computed recursively in the weight by

  S(F_w) = Σ_i S(F_{i,w−1}) ⊗ R_i .  (3.2)

Multiple polylogarithms indeed satisfy a differential equation of the type (3.1) [63],

  dI(a_0; a_1, …, a_n; a_{n+1}) = Σ_{i=1}^n I(a_0; a_1, …, â_i, …, a_n; a_{n+1}) d ln((a_{i−1} − a_i)/(a_{i+1} − a_i)) ,  (3.3)

where the hat indicates that the corresponding element is omitted. We emphasize though that Eq. (3.3) is strictly speaking only valid if all the a_i are generic, i.e., non-zero and mutually different. In the non-generic case the differential equation (3.3) can take a different form [63].
2. In Ref. [57] an alternative definition of a symbol was given. It was shown that the symbol of a multiple polylogarithm can be obtained by summing over certain dissections of a rooted and decorated polygon associated to a multiple polylogarithm [64], and the combinatorics of these dissections reproduces precisely the symbol obtained from the recursive procedure of Ref. [40].
Both definitions have their advantages and disadvantages. The recursive definition has the obvious advantage that it is not restricted to multiple polylogarithms but extends to any class of (transcendental) functions defined through iterated integrals and satisfying a differential equation of the type (3.1), whereas the second definition maps the combinatorics of the symbol to the combinatorics of rooted decorated polygons, a correspondence currently only established in the case of polylogarithms. On the other hand, the approach based on polygons is algebraic in nature, and does not make any difference between constants and variables. As an example, the differential equation approach would assign a zero symbol to ln 2 (as d ln 2 = 0), while S(ln 2) = 2 from the polygon approach. As otherwise both definitions are equivalent and give the same answer for multiple polylogarithms, we will in the following not distinguish them any further.
The symbol map S fulfills various properties. First, S is linear and maps a product of multiple polylogarithms to the shuffle product of their tensors (more precisely, S is an algebra homomorphism, see Section 4). Next, each factor in the symbol is additive with respect to multiplication,

  … ⊗ (x y) ⊗ … = … ⊗ x ⊗ … + … ⊗ y ⊗ … .  (3.4)

This implies in particular that

  … ⊗ x^n ⊗ … = n (… ⊗ x ⊗ …) ,  (3.5)

and more generally, if ρ_n denotes an n-th root of unity,

  … ⊗ ρ_n ⊗ … = (1/n) (… ⊗ ρ_n^n ⊗ …) = (1/n) (… ⊗ 1 ⊗ …) = 0 .  (3.6)

From Eq. (3.4) and Eq. (3.5) it is clear that each factor in the tensor product 'behaves as a logarithm'. The first and the last entry of the symbol of a function carry some special information. Let us consider a transcendental function F_w(x_1, …, x_n) whose symbol takes the form

  S(F_w) = Σ c_{i_1,…,i_w} ω_{i_1} ⊗ … ⊗ ω_{i_w} ,  (3.7)

where the c_{i_1,…,i_w} are (rational) numbers and the ω_{i_k} are rational functions in the x_i. The symbol of the derivative of F_w is then given by

  S(∂F_w) = Σ c_{i_1,…,i_w} (∂ ln ω_{i_w}) ω_{i_1} ⊗ … ⊗ ω_{i_{w−1}} .  (3.8)

In other words, a derivative only acts on the last entry of the symbol. The first entry of a symbol encodes in a similar way the information about the monodromies (discontinuities) of the function F_w.
More precisely, if M_{x_k=a} is the operator that computes the monodromy of F_w around x_k = a, then the monodromy acts on the first entry of the symbol only, in complete analogy with Eq. (3.8),

  S(M_{x_k=a} F_w) = Σ c_{i_1,…,i_w} (M_{x_k=a} ln ω_{i_1}) ω_{i_2} ⊗ … ⊗ ω_{i_w} .  (3.9)

Note that the action of the monodromy operator in the right-hand side is elementary, because it only acts on ordinary logarithms,

  M_{x_k=a} ln(x_k − a) = ln(x_k − a) + 2πi .  (3.10)

We prefer nevertheless to write Eq. (3.9) in this apparently more complicated form in order to exhibit the duality to Eq. (3.8). So far we have only dealt with the problem of how to compute the symbol of a function. Indeed, using either of the two definitions we can compute the symbol of any linear combination of products of multiple polylogarithms. Once the symbol has been obtained, the identities (3.4) and (3.5) allow us to simplify the symbol, which is equivalent to applying functional equations to the original expression. We then have to face the problem, however, of finding a (simpler) function with the same symbol. While there are rules for how to compute the symbol of any given combination of polylogarithms, the inverse step of integrating the symbol to a function (i.e., of finding a function with the same symbol) is in general much more difficult. In Ref. [57] a prescription was given that allows one to make an educated guess for the class of functions that can give rise to a given symbol. Once such a class of functions has been determined, one can write down a linear combination (with some free coefficients) of these functions and equate symbols, obtaining in this way a linear system for the coefficients. However, even after this step there is a remaining ambiguity because the symbol map is not injective. As an example, we have

  S(iπ) = 0 and S(ζ_n) = 0 .  (3.11)

As S maps products of functions to shuffle products of tensors, Eq. (3.11) implies that all terms proportional to ζ values and/or iπ will be mapped to zero by S. As a consequence, even if we succeed in finding a simpler function with the same symbol as our original function, we are unable to fix the terms proportional to, e.g., ζ values based on the symbol alone.
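To make the recursive construction of symbols and the shuffle property concrete, here is a small sketch in an ad-hoc encoding of our own (not a standard package): a symbol is a dictionary mapping tuples of letters to rational coefficients, S(Li_n(z)) is built recursively from S(Li_1(z)) = −(1 − z) by appending a factor z, and a product of functions is mapped to the shuffle product of the tensors.

```python
from collections import defaultdict
from fractions import Fraction

def tensor(s1, s2):
    """Concatenate tensor factors term by term: S(F) ⊗ S(G)."""
    out = defaultdict(Fraction)
    for t1, c1 in s1.items():
        for t2, c2 in s2.items():
            out[t1 + t2] += c1 * c2
    return dict(out)

def shuffle(w1, w2):
    """All shuffles of two words (the order within each word is preserved)."""
    if not w1: return [w2]
    if not w2: return [w1]
    return [(w1[0],) + w for w in shuffle(w1[1:], w2)] + \
           [(w2[0],) + w for w in shuffle(w1, w2[1:])]

def sym_product(s1, s2):
    """S maps a product of functions to the shuffle product of their symbols."""
    out = defaultdict(Fraction)
    for t1, c1 in s1.items():
        for t2, c2 in s2.items():
            for w in shuffle(t1, t2):
                out[w] += c1 * c2
    return dict(out)

def sym_Li(n):
    """S(Li_n(z)) = S(Li_{n-1}(z)) ⊗ z, seeded by S(Li_1(z)) = -(1-z)."""
    s = {('1-z',): Fraction(-1)}
    for _ in range(n - 1):
        s = tensor(s, {('z',): Fraction(1)})
    return s

log_z = {('z',): Fraction(1)}
assert sym_Li(2) == {('1-z', 'z'): Fraction(-1)}                # S(Li_2(z))
assert sym_product(log_z, log_z) == {('z', 'z'): Fraction(2)}   # S(ln^2 z)
```

The last line illustrates the homomorphism property: S(ln z · ln z) is the shuffle of z with z, i.e., 2 (z ⊗ z).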
The aim of this paper is to introduce a framework similar in spirit to the symbol, but where terms proportional to ζ values and iπ are not mapped to zero. Such a framework should of course retain all the good features of the symbol and reduce to the definition of a symbol in a suitable limit. In the rest of this paper we argue that such a framework is provided by the Hopf algebra of multiple polylogarithms introduced by Goncharov in Ref. [58], augmented by some ideas inspired by a recent paper by Brown [59].

Algebras, coalgebras and Hopf algebras
The aim of this section is to provide a (pedagogical) review of the algebraic notions used throughout the rest of the paper. The content of this section is standard textbook material in mathematics. We nevertheless include it here because, at least to our knowledge, most of these concepts have only rarely been used in the context of Feynman integral computations. We emphasize that we do not aim at providing a rigorous mathematical exposition of the different topics, but rather content ourselves with a pedestrian introduction, and we refer to the dedicated mathematical literature for further details. In particular, we will proceed by analogy with similar mathematical concepts that are of everyday use in physics. Furthermore, we will not be concerned with technical details, such as the differences between rings and modules on the one hand, and vector spaces and fields on the other hand. As a consequence, we will use the different notions interchangeably in the following.

Algebras: first definition
We start by reviewing some basic notions about algebras. An algebra over a field K (= R or C in general) is a K-vector space A together with a multiplication

  · : A × A → A , (a, b) ↦ a · b ,  (4.1)

which is associative,

  a · (b · c) = (a · b) · c ,  (4.2)

and has a unit ε,

  ε · a = a · ε = a .  (4.3)

Furthermore, the multiplication is compatible with the vector space structure, i.e., it is distributive,

  a · (b + c) = a · b + a · c and (a + b) · c = a · c + b · c ,  (4.4)

and compatible with the multiplication by scalars,

  a · (k b) = k (a · b) and k (ℓ a) = (k ℓ) a ,  (4.5)

where k, ℓ ∈ K are scalars. Let us highlight at this stage some features that will be useful later on. First, the distributivity (4.4) implies that the multiplication m is in fact a bilinear map from A × A to A. Second, as a consequence of the compatibility with the scalar multiplication, Eq. (4.5), we will assume from now on that the field of scalars K is part of the algebra itself, i.e., K can be embedded into A. Under this assumption it is easy to see that we can identify the unit element ε in Eq. (4.3) with the unit element 1 ∈ K. While this definition of an algebra is presumably familiar to most physicists, it is not well suited to understand the link to coalgebras and Hopf algebras. For this reason, we will now reformulate the above definition in terms of tensor products of vector spaces.

Tensor products of vector spaces
As the concept of tensor product used in the following is different from the definition commonly used in physics, let us review the mathematical definition of a tensor product.
Consider three vector spaces U, V and W. A standard textbook result then states that there is a unique vector space, called the tensor product U ⊗ V of U and V, and a unique bilinear map τ : U × V → U ⊗ V such that for every bilinear map β : U × V → W there is a unique linear map µ such that

  β = µ ∘ τ ,  (4.6)

where µ ∘ τ denotes the composition of µ and τ. The bilinear map τ simply assigns to two vectors a and b their tensor product, i.e., τ(a, b) = a ⊗ b. In other words, we can reformulate Eq. (4.6) in an equivalent, but more accessible, form: for every bilinear map β there is a unique linear map µ such that

  β(a, b) = µ(a ⊗ b) .  (4.7)

According to Eq. (4.7) it is always possible to break the bilinearity of β up into the bilinearity of the tensor product and the linearity of µ. As an example, we have

  β(a_1 + a_2, b) = µ((a_1 + a_2) ⊗ b) = µ(a_1 ⊗ b) + µ(a_2 ⊗ b) = β(a_1, b) + β(a_2, b) .  (4.8)

Before applying this result to the definition of algebras, let us take the opportunity to introduce a diagrammatic tool useful to describe algebraic structures. If we represent each map between vector spaces by an arrow, e.g., V --f--> W for a linear map f from V to W, then the relation (4.6) between τ, β and µ is conveniently described by the following commutative diagram,

  U × V ---τ---> U ⊗ V
       \             |
        β            µ
         \           v
          ------->   W

The word 'commutative' refers in this context to the fact that we can take any path through the diagram from U × V to W and we always arrive at the same result. This is precisely the content of Eq. (4.6).
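The universal property (4.7) can be made concrete in a two-dimensional toy example of our own: any bilinear map β factors through the outer product, with the remaining map µ purely linear.

```python
# A bilinear map β : U × V → K factors as β = µ ∘ τ, with τ(a, b) = a ⊗ b.
# Toy model: U = V = K^2, tensors as 2x2 coefficient grids (nested tuples).

def tau(a, b):
    """τ: the tensor (outer) product of two vectors."""
    return tuple(tuple(x * y for y in b) for x in a)

def beta(a, b):
    """An example bilinear map: the Euclidean inner product on K^2."""
    return a[0] * b[0] + a[1] * b[1]

def mu(t):
    """The unique linear map on U ⊗ V with β = µ ∘ τ: the trace of the grid."""
    return t[0][0] + t[1][1]

a, b = (1, 2), (3, 4)
assert beta(a, b) == mu(tau(a, b))   # β = µ ∘ τ, Eq. (4.6)
```

The bilinearity of β lives entirely in `tau`; `mu` only ever adds and rescales tensor coefficients.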

Algebras: second definition
We have seen that the distributivity of the multiplication m implies that m is in fact a bilinear map from A × A to A. Thus, following our considerations on tensor products, there is a unique linear map

  µ : A ⊗ A → A with a · b = µ(a ⊗ b) .  (4.11)

The associativity of the multiplication can then be summarized by the following property of the linear map µ,

  µ ∘ (µ ⊗ id) = µ ∘ (id ⊗ µ) .  (4.12)

In order to make the transition to coalgebras easier, it is useful to rephrase Eq. (4.12) in terms of commutative diagrams. It is easy to check that Eq. (4.12) is equivalent to the commutativity of the diagram

  A ⊗ A ⊗ A --µ⊗id--> A ⊗ A
      |                   |
    id⊗µ                  µ
      v                   v
    A ⊗ A -----µ----->    A        (4.15)

The existence of a unit element can also be recast into this new language. Indeed, we previously assumed that the field of scalars is embedded into the algebra. This implies the existence of a map ǫ : K → A that assigns to each scalar k an element ǫ(k) ∈ A. Compatibility with the scalar multiplication forces us to require that ǫ be linear and that, for any vector a, multiplication with the scalar k or with the vector ǫ(k) gives the same result,

  k a = ǫ(k) · a .  (4.16)

In terms of commutative diagrams, Eq. (4.16) is equivalently expressed as

  K ⊗ A --ǫ⊗id--> A ⊗ A
       \              |
        s             µ
         \            v
          ------->    A            (4.17)

where s denotes the scalar multiplication, s(k, a) = k a, and where for two linear maps f and g we define (f ⊗ g)(a ⊗ b) = f(a) ⊗ g(b). Note that the embedding ǫ, together with Eq. (4.16), implies the existence of the unit ε ≡ ǫ(1) in the usual sense, because

  ǫ(1) · a = 1 a = a .  (4.18)

For this reason, the map ǫ is usually referred to as the unit of the algebra. Another recurrent theme in the study of algebraic structures is the study of the structure-preserving maps (the so-called homomorphisms). If A and B are algebras, then a homomorphism from A to B is a linear map φ that preserves the multiplication,

  φ(a · b) = φ(a) · φ(b) .  (4.19)

At this stage we can identify the multiple polylogarithms as an algebra: the multiplication is given by the shuffle product, while the scalars are given by (rational) numbers.
In addition, the shuffle product preserves the weight (the product of two polylogarithms of weight n_1 and n_2 gives a linear combination of polylogarithms of weight n_1 + n_2). This feature is formalized by the notion of a graded algebra, i.e., an algebra that is a direct sum as a vector space,

  A = ⊕_{n=0}^∞ A_n ,  (4.20)

such that the multiplication preserves the grading,

  A_m · A_n ⊆ A_{m+n} .  (4.21)

An element of A_n is said to be of weight or grade n. The multiple polylogarithm algebra is thus graded by the weight. In the following we always assume that the weight 0 part of a graded algebra coincides with the field of scalars, A_0 = K. Note that this is in agreement with our naive expectation in the case of multiple polylogarithms: the weight 0 part of the algebra consists of all objects of transcendental weight 0, e.g., rational numbers or rational functions.
To summarize, we arrive at the following conclusion: an algebra A is a vector space equipped with a linear map µ : A ⊗ A → A and an algebra homomorphism ǫ : K → A such that µ satisfies the associativity condition expressed through Eq. (4.12) or, equivalently, through the commutative diagram (4.15). The multiple polylogarithms, equipped with the shuffle product, then form an algebra graded by the weight.
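A minimal illustration of the grading: modelling a polylogarithm of weight n by a word of length n, the shuffle product of words of lengths n_1 and n_2 only ever produces words of length n_1 + n_2. (The encoding below is our own toy sketch.)

```python
def shuffle(w1, w2):
    """All shuffles of two words; models the shuffle product of polylogarithms."""
    if not w1: return [w2]
    if not w2: return [w1]
    return [w1[:1] + w for w in shuffle(w1[1:], w2)] + \
           [w2[:1] + w for w in shuffle(w1, w2[1:])]

# the grading: a weight-2 times a weight-1 object gives only weight-3 terms
words = shuffle("ab", "c")
assert sorted(words) == ["abc", "acb", "cab"]
assert all(len(w) == 3 for w in words)
```

The three shuffles of "ab" with "c" mirror the three terms in the shuffle product of a weight-two with a weight-one iterated integral.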
So far we have only formalized the algebraic structure of multiple polylogarithms in an unusual way. In Section 5 we will see that multiple polylogarithms carry another structure, a so-called Hopf algebra structure, which is the topic of the rest of this section.

Coalgebras as the duals of algebras
In this section we introduce coalgebras and coproducts, one of the main topics in the rest of this paper. We refrain from giving a mathematically precise axiomatic definition of a coalgebra, and we rather proceed by analogy with similar concepts of everyday use in physics: vector spaces and hermitian conjugates of linear maps.
Let us start by reviewing some textbook material on the duals of vector spaces. Consider two vector spaces V and W and denote their duals, i.e., the vector spaces of all linear forms on V and W, by V* and W*. Let A be a linear map from V to W. It is well known that A induces a linear map between the duals, the hermitian conjugate A†, in the opposite direction,

  A† : W* → V* , (A† f)(v) = f(A v) .  (4.22)

Eq. (4.22) provides us with a diagrammatic rule to derive the commutative diagram describing some algebraic structure when replacing all the vector spaces by their duals. Indeed, when dualizing, we
1. replace each vector space by its dual,
2. replace each linear map by its dual (= hermitian conjugate),
3. reverse all the arrows in the diagram.
If we upgrade V to an algebra, then it comes equipped with two linear maps, the multiplication µ and the unit ǫ, so we can consider their 'hermitian conjugates', i.e., the structures induced on the dual space V* by the multiplication and the unit. A coalgebra is defined as the dual of an algebra: if A is an algebra with multiplication µ : A ⊗ A → A and unit ǫ : K → A, its dual C = A* is equipped with linear maps ∆ = µ† : C → C ⊗ C (the comultiplication or coproduct) and η = ǫ† : C → K (the counit). The properties of the algebra operations (associativity and unit) are then reflected in the coalgebra C. Using the diagrammatic rule based on Eq. (4.22), we can easily obtain the commutative diagrams that describe the duals to the associativity (4.15) and the unit (4.17) (the coassociativity and the counit). The coassociativity can also be written in equations as

  (∆ ⊗ id) ∆ = (id ⊗ ∆) ∆ .  (4.24)

As the coassociativity will be extensively used in the remainder of this paper, let us elaborate on it for a while. While the multiplication µ corresponds to the operation of 'multiplying together', and the associativity expresses the fact that it is immaterial in which order we multiply three or more elements together, the comultiplication ∆ morally corresponds to the operation of 'decomposing', and the coassociativity asserts that the order in which this operation is iterated is irrelevant. To be more concrete, take an element a ∈ C, and consider its coproduct, schematically written as

  ∆(a) = Σ_i a_i^{(1)} ⊗ a_i^{(2)} ,  (4.25)

for some a_i^{(1)}, a_i^{(2)} ∈ C. By acting with ∆ on a, we have decomposed it into (a combination of) two pieces. We can iterate this process, and decompose Eq. (4.25) further into (a combination of) three pieces. At this stage, we have the choice to decompose either a_i^{(1)} or a_i^{(2)}. In the first case we arrive at

  (∆ ⊗ id) ∆(a) = Σ_i ∆(a_i^{(1)}) ⊗ a_i^{(2)} ,  (4.27)

while in the second case we arrive at

  (id ⊗ ∆) ∆(a) = Σ_i a_i^{(1)} ⊗ ∆(a_i^{(2)}) .  (4.28)

While Eqs. (4.27) and (4.28) are in general different, coassociativity asserts that the two expressions must be equal,

  (∆ ⊗ id) ∆(a) = (id ⊗ ∆) ∆(a) .  (4.29)

As a consequence, there is an essentially unique way to iterate the coproduct.
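Coassociativity can be checked explicitly in a toy coalgebra of our own choosing (it is not the polylogarithm coproduct): K[x] with the binomial coproduct Δ(x^n) = Σ_k C(n,k) x^k ⊗ x^{n−k}. Decomposing the left or the right tensor factor further gives the same rank-three tensor.

```python
from math import comb
from collections import Counter

def cop(n):
    """Binomial coproduct on K[x]: Δ(x^n) = Σ_k C(n,k) x^k ⊗ x^{n-k}."""
    return Counter({(k, n - k): comb(n, k) for k in range(n + 1)})

def cop_left(n):
    """(Δ ⊗ id) Δ(x^n): decompose the first tensor factor further."""
    out = Counter()
    for (a, b), c in cop(n).items():
        for (p, q), d in cop(a).items():
            out[(p, q, b)] += c * d
    return out

def cop_right(n):
    """(id ⊗ Δ) Δ(x^n): decompose the second tensor factor further."""
    out = Counter()
    for (a, b), c in cop(n).items():
        for (p, q), d in cop(b).items():
            out[(a, p, q)] += c * d
    return out

for n in range(6):
    assert cop_left(n) == cop_right(n)   # coassociativity, Eq. (4.24)
```

Both iterations produce the multinomial coefficients n!/(p! q! r!) on x^p ⊗ x^q ⊗ x^r, which is the 'essentially unique' iterated coproduct.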

Bialgebras and Hopf algebras
We are now only one step away from defining a Hopf algebra. First, a bialgebra is an algebra that is at the same time a coalgebra, i.e., a vector space equipped with both a multiplication µ and a comultiplication ∆. We emphasize that in this setting µ and ∆ are independent and in general not hermitian conjugate to each other. Furthermore, we require the multiplication and the comultiplication to be compatible with each other, i.e., the coproduct of a product equals the product of the coproducts (in other words, ∆ is an algebra homomorphism),

  ∆(a · b) = ∆(a) · ∆(b) ,  (4.31)

where the multiplication in the right-hand side is taken in each factor of the tensor product separately,

  (a_1 ⊗ a_2) · (b_1 ⊗ b_2) = (a_1 · b_1) ⊗ (a_2 · b_2) .  (4.32)

A Hopf algebra H is a bialgebra equipped with an additional structure, the so-called antipode S : H → H satisfying the properties

  µ ∘ (S ⊗ id) ∘ ∆ = µ ∘ (id ⊗ S) ∘ ∆ = ǫ ∘ η .  (4.33)

As in the rest of this paper we do not make explicit use of the antipode, we do not elaborate on it any further. Let us conclude this section by introducing some notations that will be useful in subsequent sections. Consider a Hopf algebra H with coproduct ∆, and assume that H is graded (as will be the case for the multiple polylogarithms),

  H = ⊕_{n=0}^∞ H_n .  (4.34)

If the coproduct respects the weight,

  ∆(H_n) ⊆ ⊕_{p+q=n} H_p ⊗ H_q ,  (4.35)

we can decompose the action of the coproduct according to the weight. We can then write the action of ∆ on H_n as

  ∆ = Σ_{p+q=n} ∆_{p,q} ,  (4.36)

where ∆_{p,q} is the part of the coproduct that takes values in H_p ⊗ H_q. In a similar way we define ∆_{p,q,…,r} as the component of the iterated coproduct that takes values in H_p ⊗ H_q ⊗ … ⊗ H_r. Finally, it is sometimes useful to define the reduced coproduct ∆′ via

  ∆′(a) = ∆(a) − 1 ⊗ a − a ⊗ 1 .  (4.37)

An element a ∈ H such that ∆′(a) = 0 is called a primitive element of H.
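The notations Δ_{p,q} and Δ′ can be illustrated in a toy coalgebra: K[x] with the binomial coproduct Δ(x^n) = Σ_k C(n,k) x^k ⊗ x^{n−k}, graded by the power of x. In this sketch (our own encoding) x itself is primitive.

```python
from math import comb

def cop(n):
    """Δ(x^n) = Σ_k C(n,k) x^k ⊗ x^{n-k}; the grade of x^k is k."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def cop_pq(n, p, q):
    """Δ_{p,q}: the component of Δ(x^n) landing in H_p ⊗ H_q."""
    return {t: c for t, c in cop(n).items() if t == (p, q)}

def reduced_cop(n):
    """Δ'(x^n) = Δ(x^n) - 1 ⊗ x^n - x^n ⊗ 1."""
    d = dict(cop(n))
    for t in [(0, n), (n, 0)]:
        d[t] = d.get(t, 0) - 1
        if d[t] == 0:
            del d[t]
    return d

assert cop_pq(3, 1, 2) == {(1, 2): 3}   # Δ_{1,2}(x^3) = 3 x ⊗ x^2
assert reduced_cop(1) == {}             # x is primitive: Δ'(x) = 0
assert reduced_cop(2) == {(1, 1): 2}    # Δ'(x^2) = 2 x ⊗ x
```

As in the text, Δ′ strips the trivial pieces 1 ⊗ a and a ⊗ 1, so the primitive elements are exactly those whose decomposition carries no further information.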

The multiple polylogarithm Hopf algebra
In this section we apply the algebraic concepts of the previous section to multiple polylogarithms. As a result, we obtain a framework that contains the symbol in a certain limit, but is more general and incorporates, in particular, the ζ values. As a starting point, let us denote by H the algebra formed by the multiple polylogarithms equipped with the shuffle product. We already know that H is graded by the weight of the polylogarithms. In Ref. [58] Goncharov showed that H can be equipped with a coproduct which turns it into a Hopf algebra. The coproduct on multiple polylogarithms is given by [58]

  ∆(I(a_0; a_1, …, a_n; a_{n+1})) = Σ_{0=i_0<i_1<…<i_k<i_{k+1}=n+1} I(a_0; a_{i_1}, …, a_{i_k}; a_{n+1}) ⊗ Π_{p=0}^k I(a_{i_p}; a_{i_p+1}, …, a_{i_{p+1}−1}; a_{i_{p+1}}) .  (5.1)
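For generic arguments, the sum in Eq. (5.1) runs over all subsets {i_1 < … < i_k} of {1, …, n}. A short sketch (our own tuple representation, purely combinatorial) enumerates the terms and checks that every term has total weight n.

```python
from itertools import combinations

def coproduct(word):
    """Goncharov's coproduct, Eq. (5.1), for generic arguments.
    word = (a0, a1, ..., an, a_{n+1}) represents I(a0; a1, ..., an; a_{n+1})."""
    n = len(word) - 2
    terms = []
    for k in range(n + 1):
        for subset in combinations(range(1, n + 1), k):
            idx = (0,) + subset + (n + 1,)
            left = (word[0],) + tuple(word[i] for i in subset) + (word[n + 1],)
            right = tuple(
                (word[idx[p]],)
                + tuple(word[j] for j in range(idx[p] + 1, idx[p + 1]))
                + (word[idx[p + 1]],)
                for p in range(k + 1)
            )
            terms.append((left, right))
    return terms

w = ('a0', 'a1', 'a2', 'a3', 'a4')   # I(a0; a1, a2, a3; a4), weight n = 3
assert len(coproduct(w)) == 2 ** 3   # one term per subset of {1, 2, 3}
for left, right in coproduct(w):
    weight = (len(left) - 2) + sum(len(r) - 2 for r in right)
    assert weight == 3               # Eq. (5.1) preserves the weight
```

The empty subset reproduces the term 1 ⊗ I(a_0; a_1, …, a_n; a_{n+1}), and the full subset gives I(a_0; a_1, …, a_n; a_{n+1}) ⊗ 1.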
The fact that Eq. (5.1) defines a genuine coproduct, i.e., that ∆ is coassociative, Eq. (4.24), and an algebra homomorphism, Eq. (4.31), is a non-trivial statement. In addition, Eq. (5.1) preserves the weight, i.e., the sum of the weights of the two factors in each term is equal to n. We stress that Eq. (5.1) is strictly speaking only valid when all the a_i's are generic, i.e., non-zero and mutually different. The definition of the coproduct in the non-generic case involves several technical steps that do not add anything new to the discussion in the main text of the paper, and we refer to Appendix A or to Refs. [58,63] for the definition of the coproduct in the non-generic case. Let us quote here only the explicit formulas for the coproducts of the ordinary logarithm and the classical polylogarithms,

  ∆(ln z) = 1 ⊗ ln z + ln z ⊗ 1 and ∆(Li_n(z)) = 1 ⊗ Li_n(z) + Σ_{k=0}^{n−1} Li_{n−k}(z) ⊗ (ln^k z)/k! .  (5.2)

Eq. (5.2) is enough to compute the coproduct of any expression made out of ordinary logarithms and classical polylogarithms only. Indeed, we can use Eq. (4.31) to obtain, for example,

  ∆(ln^2 z) = ∆(ln z)^2 = 1 ⊗ ln^2 z + 2 (ln z ⊗ ln z) + ln^2 z ⊗ 1 .  (5.3)

Furthermore, it is easy to prove in this way the more general result

  ∆((ln^n z)/n!) = Σ_{k=0}^n (ln^{n−k} z)/(n−k)! ⊗ (ln^k z)/k! .  (5.4)

The coproduct can be used to simplify expressions involving polylogarithms in the same way as the symbol. Indeed, suppose that we have two expressions F_w and G_w of weight w that are equal (modulo functional equations). Then it is clear that also their coproducts must be equal,

  ∆(F_w) = ∆(G_w) ,  (5.5)

and in particular

  ∆_{p,q}(F_w) = ∆_{p,q}(G_w) , p + q = w , p, q > 0 .  (5.6)

It is important to note that Eq. (5.6) only involves polylogarithms of weight w′ < w. As a consequence, it is enough to know the functional equations of lower weight in order to check the equality. These functional equations of lower weight might themselves still be complicated or unknown, so we have apparently not gained anything. In such a scenario we can iterate the procedure by applying the coproduct again to one of the factors in the tensor product, and the coassociativity of the coproduct ensures that this iteration is unique.
In this way we obtain a whole tower of expressions, which at each stage involve only transcendental functions of lower weight. As an example, in the case of a function of weight four, we obtain the following identities,

  ∆_{1,3}(F_4) = ∆_{1,3}(G_4) , ∆_{2,2}(F_4) = ∆_{2,2}(G_4) , ∆_{3,1}(F_4) = ∆_{3,1}(G_4) ,
  ∆_{1,1,2}(F_4) = ∆_{1,1,2}(G_4) , ∆_{1,2,1}(F_4) = ∆_{1,2,1}(G_4) , ∆_{2,1,1}(F_4) = ∆_{2,1,1}(G_4) ,
  ∆_{1,1,1,1}(F_4) = ∆_{1,1,1,1}(G_4) .  (5.7)

In the extreme case where we go down to ∆_{1,…,1}, we have decomposed a weight w polylogarithm into a tensor of rank w made out of polylogarithms of weight one, i.e., ordinary logarithms, for which all the functional equations are known. It can be shown that in this way we obtain precisely the symbol of the function (up to a technical detail that will be discussed later). In other words, the symbol is nothing but the maximal iteration of the coproduct. Besides providing a precise definition of the symbol, this approach also shows why the symbol alone is insufficient to determine the function completely. Indeed, requiring two expressions to have the same symbol is equivalent to requiring that they give the same result when acted upon with ∆_{1,…,1}. While this approach has the obvious advantage that it reduces the problem to the sole application of functional equations for ordinary logarithms, it does in general not imply the equality of the other components of the coproduct. The information on the terms that are missed by the symbol is nevertheless contained in these other components (at least to some extent). To see how this works, and to see how the ζ values that were missed by the symbol arise in the other components, we first need to overcome some technical obstacles that we will discuss now.
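As a minimal example of the statement that the symbol is the maximal iteration of the coproduct: Eq. (5.2) gives Δ(Li_2(z)) = 1 ⊗ Li_2(z) + Li_2(z) ⊗ 1 − ln(1−z) ⊗ ln z, and its (1,1) component is exactly the symbol S(Li_2(z)) = −(1−z) ⊗ z. The dictionary encoding below is our own sketch.

```python
from fractions import Fraction

# weights of the basis elements appearing in Δ(Li_2(z)); '1' has weight 0
WEIGHT = {'1': 0, 'ln(z)': 1, 'ln(1-z)': 1, 'Li2(z)': 2}

# Δ(Li_2(z)) = 1 ⊗ Li_2(z) + Li_2(z) ⊗ 1 + Li_1(z) ⊗ ln(z), Li_1(z) = -ln(1-z)
COPRODUCT_LI2 = {
    ('1', 'Li2(z)'): Fraction(1),
    ('Li2(z)', '1'): Fraction(1),
    ('ln(1-z)', 'ln(z)'): Fraction(-1),
}

def component(cop, p, q):
    """Δ_{p,q}: keep only the terms of weight (p, q)."""
    return {t: c for t, c in cop.items()
            if (WEIGHT[t[0]], WEIGHT[t[1]]) == (p, q)}

# the maximal, here (1,1), component is the symbol S(Li_2(z)) = -(1-z) ⊗ z
assert component(COPRODUCT_LI2, 1, 1) == {('ln(1-z)', 'ln(z)'): Fraction(-1)}
```

For weight two the tower ends here; for higher weight one would keep applying Δ to each factor until all entries have weight one.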

The multiple ζ value Hopf algebra
In the previous section we argued that the coproduct provides a more general 'calculus' that contains the symbol in some limit and that cures most of the unwanted features of the symbol. However, a consistent 'extended symbol calculus' should be compatible with specializations of the arguments.
Multiple ζ values are, by definition, the values of the multiple polylogarithms (in the series representation) with all arguments equal to unity. It is easy to check that they form a sub-Hopf algebra Z of the multiple polylogarithm Hopf algebra H. From Eq. (5.2) one immediately sees that the coproduct of ζ values of depth one is given by

∆(ζ_n) = ∆(Li_n(1)) = 1 ⊗ ζ_n + ζ_n ⊗ 1 ,    (5.8)

i.e., ζ values of depth one are primitive elements in Z, and thus in H.
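The statement ζ_n = Li_n(1) underlying Eq. (5.8) is easily confirmed numerically, e.g. with mpmath:

```python
from mpmath import mp, polylog, zeta, mpf

mp.dps = 30
# zeta_n = Li_n(1): the depth-one zeta values of Eq. (5.8).
for n in range(2, 8):
    assert abs(polylog(n, 1) - zeta(n)) < mpf("1e-25")
```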
At this point we have to face a subtle problem for the even ζ values. We know that the even ζ values are not independent, but are all proportional to powers of ζ_2, e.g., ζ_4 = (2/5) ζ_2². Thus

∆(ζ_4) = (2/5) ∆(ζ_2)² = 1 ⊗ ζ_4 + ζ_4 ⊗ 1 + (4/5) ζ_2 ⊗ ζ_2 ,

and so there is a contradiction with Eq. (5.8), unless 'ζ_2 = 0', i.e., unless we work modulo π². As a consequence, we would lose all information on the terms proportional to π² in the coproduct, and hence we would not have gained anything over the naive symbol approach. In Ref. [59] Brown argues that instead of defining the coproduct of ζ_2 to be zero, it is consistent to define

∆(ζ_2) = ζ_2 ⊗ 1 ,    (5.11)

and more generally

∆(ζ_2n) = ζ_2n ⊗ 1 .    (5.12)

This definition obviously solves the problem we had before, because ∆(ζ_4) = (2/5) ∆(ζ_2)² = (2/5) ζ_2² ⊗ 1 = ζ_4 ⊗ 1. Even though Eq. (5.12) was introduced in Ref. [59] in the context of multiple ζ values, we argue that it holds equally well in more general situations. Moreover, we conjecture that Eq. (5.12) can be extended to

∆(iπ) = iπ ⊗ 1 .    (5.13)

This definition is obviously consistent with Eq. (5.12). In addition, it allows us to extend the coproduct to include the iπ terms in a consistent way. A word of caution is however in order: due to the monodromy of the logarithm, we should define (2πi) ⊗ x = 0, ∀x, and thus also 4π² ⊗ x = 0. In practice, though, we have observed that we never have to worry about the monodromy of the logarithm in physical applications. Indeed, in a physical computation the Riemann sheets of the logarithms are fixed, e.g., by assigning a small imaginary part iδε to x, with δ = ±1 and ε > 0 infinitesimal, such that

ln(x + iδε) = ln |x| + iδπ θ(−x) .

We therefore define the coproduct to take values in

H ⊗ H_π ,    (5.15)

where H_π is the quotient of H by the (two-sided) ideal generated by π. The iterated coproduct then takes the form ∆_{n_1,…,n_k} : H → H ⊗ H_π ⊗ … ⊗ H_π. Loosely speaking, this means that we drop all powers of π in all factors of the coproduct, except for the first one.
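The relations among even ζ values that trigger this subtlety, such as ζ_4 = (2/5) ζ_2², can be checked numerically (again assuming mpmath):

```python
from mpmath import mp, zeta, pi, mpf

mp.dps = 30
# zeta_2 = pi^2/6, and all even zeta values are rational multiples of
# powers of zeta_2, which is the source of the tension with Eq. (5.8).
assert abs(zeta(2) - pi**2 / 6) < mpf("1e-25")
assert abs(zeta(4) - mpf(2) / 5 * zeta(2)**2) < mpf("1e-25")
assert abs(zeta(6) - mpf(8) / 35 * zeta(2)**3) < mpf("1e-25")
```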
We are now able to state our main conjecture: using the coproduct together with the definition (5.15), we obtain an extension of the symbol calculus that takes into account the ζ values as well. More precisely, if we have a function F_w of weight w and if we can find a (simpler) function G_w such that the reduced coproduct of the difference vanishes, ∆′(F_w − G_w) = 0, then

F_w = G_w + Σ_i c_i P_i ,    (5.21)

where the sum runs over all primitive elements P_i of weight w of H, for some (rational) coefficients c_i. We see from Eq. (5.21) that the reduced coproduct still does not entirely fix the function. A similar observation was made in Ref. [59] in the case of multiple ζ values. In practice, the primitive elements turn out to be constants of a given weight, e.g.,

1. powers of π,
2. ζ values of depth one, ζ_n,
3. Clausen values at the roots of unity, where R_n denotes the real part for n even and the imaginary part for n odd.
Even though the function is not entirely fixed, we believe that this approach constitutes an important improvement over the pure symbol approach. Indeed, while a pure symbol approach misses for example all functions multiplied by multiple ζ values, only very few constants are left undetermined by the coproduct. The undetermined constants can easily be fixed by, e.g., comparing to numerical values at a few points, requiring the function to have the right limits, etc. While we fall short of a full proof of our conjecture, we have checked the consistency of our 'extended symbol calculus' by applying it to hundreds of functional equations among multiple polylogarithms. A selection of results will be shown as an illustration in Sections 6 and 7.
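In practice, fixing the leftover rational coefficients by comparing to numerical values can be automated with an integer-relation algorithm. A sketch using mpmath's PSLQ, applied to a weight-three constant for which a relation in terms of ζ_3 and ln 2 is known to exist:

```python
from mpmath import mp, polylog, zeta, log, pi, mpf, pslq, fdot

mp.dps = 50  # integer-relation searches need high precision
# Search for an integer relation among weight-three constants,
# here applied to Li_3(1/2), a case where a relation is known to exist.
basis = [polylog(3, mpf(1) / 2), zeta(3), pi**2 * log(2), log(2)**3]
rel = pslq(basis)
assert rel is not None
# Any relation returned must annihilate the basis numerically.
assert abs(fdot(rel, basis)) < mpf("1e-30")
```

For genuinely unknown constants the same search simply returns no relation within the allowed coefficient size, which is itself useful information.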
Let us conclude this section by discussing how differential and monodromy operators act on the coproduct, i.e., how to generalize the relations (3.8) and (3.9) to our framework. We conjecture that

∆(∂ F_w) = (id ⊗ ∂) ∆(F_w)  and  ∆(M F_w) = (M ⊗ id) ∆(F_w) ,    (5.23)

i.e., differential operators only act on the last component of the coproduct, while monodromy operators only act on the first component. Note that the same statement is true for the iterated coproduct. While we fall short of a full proof of Eq. (5.23), we were able to check our claim in the special case where F_w is a multiple polylogarithm with generic arguments. The proofs of these special cases are presented in Appendix B.

Relationship between the coproduct and the symbol
In this section we briefly discuss the relationship between the coproduct and the symbol. While it is possible to prove in general that the combinatorics of the maximal iteration of the coproduct on multiple polylogarithms matches precisely the combinatorics of the maximal dissections of the rooted and decorated polygon associated to a polylogarithm [65], we do not present a firm proof in this paper, but merely state the observation that this correspondence holds in all the cases we have considered. We only motivate the relationship by analyzing how the coproduct behaves under differentiation. If F_w is a transcendental function of weight w, then without loss of generality we can write its iterated coproduct in the form

∆_{1,…,1}(F_w) = Σ c_{i_1…i_w} ln x_{i_1} ⊗ … ⊗ ln x_{i_w} .

If we now act with (id ⊗ … ⊗ id ⊗ d) on this expression, we obtain, using Eq. (5.23),

∆_{1,…,1}(dF_w) = Σ c_{i_1…i_w} ln x_{i_1} ⊗ … ⊗ d ln x_{i_w} ,

i.e., we obtain an expression which is dual to the differential equation (3.1) defining the symbol. We emphasize that we in no way claim that this provides a proof of the fact that the maximal iteration contains the symbol, but we hope that it gives the reader a feeling for why this relationship is true. Note however that the symbol is not exactly equal to ∆_{1,…,1}. Indeed, the symbol does not contain any information about terms proportional to iπ, whereas these terms are incorporated into ∆_{1,…,1} through Eq. (5.15). In other words, the correct relationship between the symbol and the maximal iteration of the coproduct reads

S ≡ ∆_{1,…,1} mod π .

Similarly, the monodromy identity in Eq. (5.23) reduces to the corresponding identity for the symbol, Eq. (3.9).

Examples
In this section we present some simple examples of how the coproduct can be used to simplify expressions involving multiple polylogarithms. The examples in this section do not provide any new results, but they are simple enough that all the steps can be carried out by hand. They are therefore rather meant to illustrate how to use the coproduct in practice to perform computations.

Inversion relations
We start by considering inversion relations for classical polylogarithms. Throughout this section we assume that x is a real positive variable to which we assign a small positive imaginary part. We proceed in a bootstrap and build up the inversion relations by a recursion in the weight. For the classical polylogarithm of weight 1 the inversion relation is easy to obtain,

Li_1(1/x) = Li_1(x) + ln x − iπ .

In order to obtain the inversion relation for weight 2, we act with ∆_{1,1} on Li_2(1/x) and insert the inversion relation for Li_1(1/x). Following our conjecture, we conclude that the two sides are equal modulo primitive elements of weight two. We thus make the ansatz

Li_2(1/x) = −Li_2(x) − (1/2) ln² x + iπ ln x + c π² ,

for some rational number c. Specializing to x = 1, we immediately obtain c = 1/3, which is indeed the correct inversion relation for Li_2. We emphasize at this stage the importance of the definition (5.15). Moving on to weight 3, we act with ∆_{1,1,1} on Li_3(1/x); the resulting Eq. (6.4) is not yet the correct inversion relation for Li_3. After subtracting the terms we have found in Eq. (6.4), we look at the image of the difference under ∆_{2,1} or ∆_{1,2}. We see that acting with ∆_{1,2} does not provide any new information. This is not surprising, as the missing terms are of the form π² ln x, and ∆_{1,2}(π² ln x) = 0. Acting with ∆_{2,1} and using the inversion relation for Li_2, however, we do obtain new non-trivial information, Eq. (6.6). Thus

Li_3(1/x) = Li_3(x) − (1/6) ln³ x + (iπ/2) ln² x + (π²/3) ln x + α π³ + β ζ_3 ,

and specializing to x = 1 gives α = β = 0, which is indeed the correct inversion relation for Li_3. Proceeding in exactly the same way, we can now derive the inversion relations for all the classical polylogarithms.
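The weight-two inversion relation obtained in this way (with c = 1/3) can be verified numerically with the same +iε prescription. A sketch assuming mpmath; the explicit form of the relation as written in the code is our reconstruction:

```python
from mpmath import mp, polylog, log, pi, mpf, mpc

mp.dps = 30
# Inversion relation for Li_2 with the +i*eps prescription for x > 1;
# the pi^2/3 term is the constant "c = 1/3" fixed at x = 1 in the text.
eps = mpf("1e-25")
for x in [mpf("1.7"), mpf(3), mpf(10)]:
    xr = mpc(x, eps)                 # x with a small positive imaginary part
    lhs = polylog(2, 1 / xr)
    rhs = -polylog(2, xr) - log(xr)**2 / 2 + pi * 1j * log(xr) + pi**2 / 3
    assert abs(lhs - rhs) < mpf("1e-20")
```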

Special values at x = 1/2
As a second example we consider the special values of some harmonic polylogarithms when the argument is equal to 1/2. In many cases these values are expressible through ζ values, ln 2 and Li_n(1/2), for n ≥ 4. It is however impossible to obtain these relations using symbols alone, because

S[H(a_1, …, a_n; 1/2)] = (−1)^p ln 2 ⊗ … ⊗ ln 2 ,

where a_i ∈ {0, 1} and p is equal to the number of a_i's equal to zero. As a consequence, a pure symbol approach only provides trivial and misleading information, because we always obtain a symbol corresponding to powers of ln 2. In the following we show that using the coproduct approach we can do better and entirely fix the values at x = 1/2, up to primitive elements of a given weight n (in the present case we only need to consider ζ_n). We again proceed in a bootstrap and start from weight 2, where we obtain the ansatz

Li_2(1/2) = −(1/2) ln² 2 + c π² ,

for some rational number c that cannot be fixed from the coproduct. Hence, at this stage we need to resort to numerics, which give c = 1/12. It appears at this stage that we have not gained anything over the pure symbol approach. In fact, in this case the coproduct only becomes more powerful than the pure symbol approach starting from weight three: acting with ∆_{2,1} fixes the coefficients of π² ln 2 and ln³ 2 in Li_3(1/2) = (7/8) ζ_3 − (π²/12) ln 2 + (1/6) ln³ 2, and only the coefficient of the primitive element ζ_3 has to be supplied numerically. We could now be tempted to apply the same procedure to Li_4(1/2) and to look for a formula expressing it through ζ values and powers of ln 2. However, no such formula is currently known, and it is commonly believed that, starting from n = 4, Li_n(1/2) defines a genuinely new transcendental number. If our 'extended symbol calculus' is consistent, it should lead us to the same conclusion, i.e., that an ansatz of the form (6.16) is excluded. To see why this is indeed the case, we start from Eq. (5.2) and write

∆′(Li_n(1/2)) = Σ_{k=1}^{n−1} Li_{n−k}(1/2) ⊗ (−ln 2)^k / k! .

We see that the second factor in the reduced coproduct only involves powers of ln 2. An ansatz made out of combinations of powers of ln 2 and ζ values however inevitably leads to terms in the coproduct that have a ζ value in the second factor, e.g., ∆(ζ_3 ln 2) = … + ln 2 ⊗ ζ_3 + … . The only way to make the terms having a ζ value in the second factor vanish is to assume that m is even for every term π^m ln^k 2 in the ansatz, because Eq. (5.15) implies ∆(π^m ln^k 2) = π^m ⊗ ln^k 2 + … . We therefore arrive at the conclusion that starting from weight four, Li_n(1/2) can no longer be expressed through ζ values and powers of ln 2 alone, in agreement with the common belief. We stress the role played in this argument by the special treatment of ζ_2, Eq. (5.12).
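The low-weight values entering this bootstrap can be confirmed numerically. A sketch, assuming mpmath, of the standard closed forms for Li_2(1/2) and Li_3(1/2) in terms of ζ values and ln 2:

```python
from mpmath import mp, polylog, zeta, log, pi, mpf

mp.dps = 30
half = mpf(1) / 2
# Weight two and three: Li_n(1/2) in terms of zeta values and ln 2.
li2 = pi**2 / 12 - log(2)**2 / 2
li3 = mpf(7) / 8 * zeta(3) - pi**2 * log(2) / 12 + log(2)**3 / 6
assert abs(polylog(2, half) - li2) < mpf("1e-25")
assert abs(polylog(3, half) - li3) < mpf("1e-25")
```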
So far we have only considered examples involving classical polylogarithms. Let us therefore conclude this section by discussing a less trivial example of weight five, where the full superiority of the coproduct approach over the pure symbol approach is revealed. Consider to this effect the harmonic polylogarithm H(0, 1, 0, 0, 1; 1/2). We can make an ansatz for this number of the form

T = c_1 Li_5(1/2) + c_2 Li_4(1/2) ln 2 + c_3 ln⁵ 2 + c_4 π² ln³ 2 + c_5 ζ_3 ln² 2 + c_6 π⁴ ln 2 + c_7 π² ζ_3 + c_8 ζ_5 .    (6.21)

Our goal is to find rational numbers c_i such that H(0, 1, 0, 0, 1; 1/2) = T. A pure symbol approach is obviously totally inadequate for this: not only would it be unable to constrain the coefficients multiplying the ζ values, but it could also not distinguish between the first three terms in the ansatz, thus providing only a single relation between the coefficients c_1, c_2 and c_3. As we will see in the following, the coproduct approach allows us to fix all the coefficients, except for the coefficient of ζ_5 (which is a primitive element).

Amplitudes for H + 3 gluons
In this section we apply the coproduct to a physical problem, namely the two-loop helicity amplitudes for a Higgs boson plus three gluons in the large top mass limit. In this limit the coupling of a Higgs boson to gluons is described by an effective operator of dimension five. The two-loop corrections to the helicity amplitudes for a Higgs boson plus three gluons were computed in Refs. [60,61], where they were expressed as a complicated combination of two-dimensional harmonic polylogarithms. In Ref. [53] it was shown that, after subtracting the square of the one-loop amplitude, the symbol of the leading color maximally transcendental part of the two-loop helicity amplitudes is equal to the symbol of the two-loop form factor of three gluons in planar N = 4 Super Yang-Mills. The latter can be expressed in a very compact form involving only classical polylogarithms up to weight four [53]. This suggests that the two-loop corrections to the helicity amplitudes for a Higgs boson plus three gluons can be written in a much simpler form without any multiple polylogarithms. However, as the symbol does not fix terms proportional to ζ values, the symbol alone is insufficient to determine such a simplified form in an easy way. In the following we apply our coproduct approach to rewrite the results of Refs. [60,61] in a compact form, obtaining in this way compact analytical expressions for all helicity amplitudes for a Higgs boson plus three gluons, for both the decay (H → ggg) and the scattering (gg → Hg) regions.

The decay region
We start by investigating the decay region, i.e., the two-loop corrections to the helicity amplitudes for H → ggg. The kinematics is described by three dimensionless ratios x_i built out of the invariants s_ij and m_H², where m_H denotes the mass of the Higgs boson and s_ij = 2 p_i · p_j, with p_i the momenta of the external gluons. These kinematic variables are not independent, but are constrained by

0 < x_i < 1  and  x_1 + x_2 + x_3 = 1 .    (7.3)
As a consequence, the amplitude is effectively a function of only two of the three dimensionless ratios. Correspondingly, the result of Ref. [61] is expressed in terms of two-dimensional harmonic polylogarithms in x_2 and x_3. There are two independent helicity configurations for the decay,

H → g⁺ g⁺ g⁺  and  H → g⁺ g⁺ g⁻ .    (7.4)

In the following we analyze each configuration separately. Let us start with the helicity amplitude where all the final-state gluons have a positive helicity. Bose symmetry then implies that the amplitude must be symmetric under a permutation of the external gluons, or, equivalently, it must be totally symmetric in the kinematic variables x_i, i = 1, 2, 3. The (finite part of the) one-loop correction to the decay can be written in a simple closed form, Eq. (7.5). Following Ref. [61], we decompose the (finite part of the) two-loop correction into contributions with different color structures, and we furthermore subtract the square of the finite part of the one-loop amplitude, Eq. (7.5); this defines the coefficients entering Eq. (7.7). The coefficients of the different color structures were computed in Ref. [61], where they were expressed as a combination of two-dimensional harmonic polylogarithms in x_2 and x_3. In order to simplify these expressions, we start by computing the symbol of Eq. (7.7). It turns out that all the entries in the symbol are drawn from the set

{x_1, x_2, x_3, 1 − x_1, 1 − x_2, 1 − x_3} .    (7.8)

The weight-four part of the symbol satisfies the criterion of Ref. [66], formulated in terms of the wedge product, i.e., the antisymmetric tensor product a ∧ b = a ⊗ b − b ⊗ a. It follows then from a conjecture in Ref. [66] that α^(2) can be expressed in terms of classical polylogarithms only. Similar conclusions were already drawn in Ref. [53]. Next, we have to determine the arguments of the polylogarithms that can lead to a symbol with entries drawn from the set (7.8) under the constraint (7.3). Using the prescription given in Ref. [57], we find 54 rational functions grouping into 9 orbits of the symmetric group S_3, whose action on rational functions permutes the variables x_1, x_2, x_3.⁵ The rational functions are summarized in Table 1; each line of the table shows half an orbit of the S_3 action, the second half being obtained by inversion, and all these functions are less than unity in the region defined by Eq. (7.3). It is important to note that not all 54 solutions are independent; in particular, we can express half of them in terms of the others by using the inversion relation for the classical polylogarithms,

Li_n(1/f) = (−1)^{n−1} Li_n(f) + … .    (7.11)

It is then easy to see that it is always possible to choose 27 solutions such that all polylogarithms are real in the region defined by Eq. (7.3). As the next step, we write down a linear combination of (classical) polylogarithms with the arguments shown in Table 1. Equating the symbol of α^(2) with the symbol of our ansatz provides a linear system for the coefficients. In the following we only discuss the weight-four part of α^(2); all other contributions are similar. In agreement with Ref. [53], we find that it is given by R_3, where R_3 is the N = 4 form factor remainder function of Ref. [53],

R_3 = … (ln x_1 ln x_2 + ln x_1 ln x_3 + ln x_2 ln x_3)(ln² x_1 + ln² x_2 + ln² x_3) − 23π⁴/720 .    (7.13)

⁵ We stress that this S_3 symmetry is not identical to the S_3 describing the Bose symmetry.
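The grouping of candidate arguments into S_3 orbits is easy to automate. A small sympy sketch; the function used below is a hypothetical stand-in, since the actual entries of Table 1 are not reproduced here:

```python
from itertools import permutations
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", positive=True)

def s3_orbit(f):
    """Orbit of a rational function under permutations of (x1, x2, x3)."""
    orbit = set()
    for p in permutations((x1, x2, x3)):
        g = f.subs(dict(zip((x1, x2, x3), p)), simultaneous=True)
        orbit.add(sp.simplify(g))
    return orbit

# Hypothetical example argument (not one of the actual Table 1 entries):
orbit = s3_orbit(x1 / (1 - x2))
assert len(orbit) == 6   # a generic function has a full orbit of size 6
```

Combining each orbit with its image under f → 1/f then reproduces the pairing by inversion described in the text.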

Analytic continuation to the scattering region
The expressions presented in the previous section are only valid in the decay region (7.3). In the rest of this section we show how to perform the analytic continuation to the scattering region. We have to distinguish the different scattering channels; in all cases the kinematic invariants are subject to the constraint s_12 + s_13 + s_23 = m_H², which simply expresses s + t + u = m_H². In the following we only discuss the analytic continuation in the case where all gluons have a positive helicity, all other cases being similar.
In the decay region all invariants are positive and have a small positive imaginary part. The analytic continuation to the scattering region is then performed according to the prescription

s_23 → |s_23| e^{iπ}  and  s_13 → |s_13| e^{iπ} ,    (7.35)

while all other invariants remain unchanged. This implies a corresponding prescription for the dimensionless ratios, where we define x̄_i = |x_i| = −x_i. Using these prescriptions, the Kummer functions are analytically continued accordingly. In addition, we need to analytically continue classical polylogarithms of the form

Li_n(1 − z e^{iδπ}) ,  z > 0 and δ = ±1 .    (7.38)

While the corresponding formulas could be obtained with the help of, e.g., the Mathematica package HPL [32], we show how the analytic continuation formulas can be derived from the coproduct. Similarly to the case of the inversion relations discussed in Section 6, we proceed recursively in the weight. At weight one, we immediately obtain

Li_1(1 − z e^{iδπ}) = −ln z − iδπ .

At weight 2, we act with the coproduct and drop all the iπ terms in all the factors of the coproduct except the first one. Thus, we obtain

Li_2(1 − z e^{iδπ}) = −Li_2(−z) − ln(1 + z)(ln z + iδπ) + c π² ,

for some rational number c. Specializing to z = 0 (where we are insensitive to the phase), we immediately obtain c = 1/6. At weight 3, we first act with ∆_{1,1,1}; in order to determine the terms proportional to π², we then compute the ∆_{2,1} component, which involves ln(1 + z), Eq. (7.43). Thus we obtain the weight-three formula up to a single rational constant c, and specializing to z = 0 we obtain c = 0. Using this technique we can recursively derive all the analytic continuation formulas for functions of the type (7.38), and in particular the formulas at weight 4. These formulas are enough to perform the analytic continuation from the decay region to the scattering region. We checked numerically that our results agree (after analytic continuation) with the results in the scattering region presented in Ref. [61] for all the helicity configurations.
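The weight-two continuation formula (with c = 1/6) can again be tested numerically by resolving the phase e^{iδπ} slightly off the cut. A sketch assuming mpmath; the explicit right-hand side is our reconstruction via the reflection formula:

```python
from mpmath import mp, polylog, log, pi, mpf, mpc

mp.dps = 30
# Weight-two continuation formula; delta = +-1 selects the side of the cut,
# and the pi^2/6 term is the constant "c = 1/6" fixed at z = 0 in the text.
eps = mpf("1e-25")
for z in [mpf("0.3"), mpf("1.5"), mpf(4)]:
    for delta in (1, -1):
        # 1 - z*exp(i*delta*pi) = 1 + z, approached from below the cut for delta = +1
        arg = mpc(1 + z, -delta * eps)
        lhs = polylog(2, arg)
        rhs = pi**2 / 6 - polylog(2, -z) - log(1 + z) * (log(z) + 1j * delta * pi)
        assert abs(lhs - rhs) < mpf("1e-20")
```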

Conclusion
While recent advances seem to indicate that, at least in the context of N = 4 Super Yang-Mills theory, scattering amplitudes are simpler than expected, it is a known fact that the analytical evaluation of multi-loop Feynman integrals can lead to very lengthy and complicated expressions involving new classes of transcendental functions only poorly studied in the literature. A systematic approach to studying these new functions and their functional equations is therefore highly desirable, not only from the formal standpoint, but also with a view to phenomenological applications. A first step in this direction was made in Ref. [40] with the introduction of the so-called symbol map, which allows one to map the combinatorics of transcendental functions defined by iterated integrals to the combinatorics in a certain tensor algebra.
In this paper we have proposed a novel approach to deal with complicated expressions that can arise from a special class of Feynman integral computations, namely those that can be evaluated in terms of multiple polylogarithms. The cornerstone of this approach is the coproduct on multiple polylogarithms introduced by Goncharov in Ref. [58], augmented by some ideas from a recent paper by Brown [59]. The main feature is that, unlike the symbol, the coproduct allows one to incorporate ζ values into the calculus, thus retaining more information about the function. We have demonstrated the virtue of this novel approach by rewriting the two-loop helicity amplitudes for a Higgs boson plus three gluons, originally computed in Refs. [60,61], in a compact analytical form, revealing, to our knowledge for the first time, an unexpected simplicity for a two-loop multi-scale amplitude in QCD.
The different terms in the coproduct in Eq. (5.1) then correspond to connecting points via a polygon (including the empty polygon) in all possible ways. The points lying on the polygon provide the arguments for the polylogarithm in the first factor in a given term in the coproduct, while the remaining points determine the entries of the second factor. Here we illustrate this construction only on the example of I(a_0; a_1, a_2, a_3, a_4; a_5), and we refer to Refs. [58,63] for further details. In this case the polygons, together with the terms in the coproduct they correspond to, include for example I(a_0; a_1, a_2, a_3, a_4; a_5) ⊗ 1 and I(a_0; a_2, a_3, a_4; a_5) ⊗ I(a_0; a_1; a_2).

A.2 The coproduct in the non-generic case
As already mentioned, the formula for the coproduct, Eq. (5.1), is only valid in the generic case where all the arguments are mutually different. Indeed, in the non-generic case divergences can arise in individual terms in the coproduct. A multiple polylogarithm I(a_0; a_1, …, a_n; a_{n+1}) is in general divergent if either a_1 = a_0 or a_n = a_{n+1}; in this case the poles of the integrand coincide with the endpoints of the integration path. As an example of how these divergences can arise inside the coproduct, consider the multiple polylogarithm I(a_0; a_1, a_2, a_2; a_3), which is convergent whenever a_0 ≠ a_1 and a_2 ≠ a_3. Eq. (5.1), however, contains the term

I(a_0; a_1, a_2; a_3) ⊗ I(a_1; a_2; a_2) ,

and the second factor, I(a_1; a_2; a_2), is divergent.
In Refs. [58,63] the coproduct in the non-generic case is defined by replacing in the right-hand side of Eq. (5.1) every multiple polylogarithm by its regularized value. In the following we only give a practical rule of how to obtain a regularized value (there is more than one way to perform the regularization), and we refer to Refs. [58,63] for more details.
As the divergences of a multiple polylogarithm are end-point divergences, we can easily regularize them by slightly moving the end points of the integration path. In practice we usually deal with integration paths that are straight lines, and in this case the regularization can simply be achieved by the replacement

I(a_0; a_1, …, a_n; a_{n+1}) → I(a_0(1 + ε); a_1, …, a_n; a_{n+1}(1 − ε)) , if a_0 ≠ 0 ,
I(a_0; a_1, …, a_n; a_{n+1}) → I(ε; a_1, …, a_n; a_{n+1}(1 − ε)) , if a_0 = 0 .    (A.6)

B. Proofs of the derivative and monodromy identities

B.1 Proof of the derivative identity
In this section we sketch the proof of the derivative identity (5.23) in a particular case, namely

(id ⊗ ∂/∂a_{n+1}) ∆(I(a_0; a_1, …, a_n; a_{n+1})) = ∆(∂/∂a_{n+1} I(a_0; a_1, …, a_n; a_{n+1})) ,    (B.1)

where we assume all arguments of the multiple polylogarithm to be generic. Let us compute the action of the differential operator on the left-hand side of Eq. (B.1). It is obvious that we only get a non-zero contribution from those terms in the coproduct where the second factor depends on a_{n+1}. If all the arguments are generic, it is easy to see that this is the case precisely for those terms in Eq. (5.1) where the polygon inscribed into the semi-circle (see Appendix A) does not contain a_n; as an example, one can consider the corresponding terms for n = 3. It is clear that in this way we produce precisely all the polygons inscribed into the semi-circle with the point a_n (a_3 in the example above) removed, multiplied by 1/(a_{n+1} − a_n). These terms are in one-to-one correspondence with the terms in the coproduct of I(a_0; a_1, …, a_{n−1}; a_{n+1}).
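The derivative identity can also be checked numerically at low weight by evaluating the iterated integrals directly. A sketch with mpmath, using our own explicit parametrization of the weight-one and weight-two integrals I:

```python
from mpmath import mp, quad, log, mpf

mp.dps = 20

def I1(a0, a1, t):
    # weight-one iterated integral: I(a0; a1; t) = ln((t - a1)/(a0 - a1))
    return log((t - a1) / (a0 - a1))

def I2(a0, a1, a2, a3):
    # weight-two iterated integral, evaluated by direct numerical integration
    return quad(lambda t: I1(a0, a1, t) / (t - a2), [a0, a3])

# generic real arguments with all singularities off the integration path
a0, a1, a2, a3 = mpf(0), mpf(-1), mpf(5), mpf(2)
h = mpf("1e-6")
numeric = (I2(a0, a1, a2, a3 + h) - I2(a0, a1, a2, a3 - h)) / (2 * h)
exact = I1(a0, a1, a3) / (a3 - a2)   # d/da3 I(a0; a1, a2; a3)
assert abs(numeric - exact) < mpf("1e-8")
```

The last line is the n = 2 instance of the statement that differentiating in a_{n+1} drops the point a_n and produces the factor 1/(a_{n+1} − a_n).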

B.2 Proof of the monodromy identity
In this section we sketch the proof of the monodromy identity (5.23) in a particular case, namely

(M_{a_{n+1}=a_i} ⊗ id) ∆(I(a_0; a_1, …, a_n; a_{n+1})) = ∆(M_{a_{n+1}=a_i} I(a_0; a_1, …, a_n; a_{n+1})) ,    (B.4)

where all arguments of the multiple polylogarithm are assumed generic. We again start by evaluating the action of the monodromy operator on the left-hand side of Eq. (B.4). It is clear that only those terms in the coproduct where the first factor depends on a_i contribute a non-zero answer. For generic arguments this implies that the polygon associated to such a term must contain a_i; as an example, one can consider the corresponding terms for n = 3 and i = 2. In Ref. [63] a formula for the monodromy of a multiple polylogarithm was given,

M_{a_{n+1}=a_i} I(a_0; a_1, …, a_n; a_{n+1}) = 2πi I(a_0; a_1, …, a_{i−1}; a_i) I(a_i; a_{i+1}, …, a_n; a_{n+1}) .    (B.5)

Applying this formula to the left-hand side of Eq. (B.4) implies that the monodromy operator 'splits' all the polygons that give a non-vanishing contribution at the point a_i; in the example n = 3, i = 2 considered above we obtain the corresponding splitting of the polygons. Summing over the split polygons is equivalent to summing over all pairs of polygons contributing to the coproduct of the two multiple polylogarithms on the right-hand side of Eq. (B.5). Thus we obtain

(M_{a_{n+1}=a_i} ⊗ id) ∆(I(a_0; a_1, …, a_n; a_{n+1})) = (2πi ⊗ 1) ∆(I(a_0; a_1, …, a_{i−1}; a_i)) ∆(I(a_i; a_{i+1}, …, a_n; a_{n+1})) = ∆(M_{a_{n+1}=a_i} I(a_0; a_1, …, a_n; a_{n+1})) ,    (B.6)

which finishes the proof.