
4. Subsystems of root systems

Here we define subsystems of root systems and prove the facts we need about them. Unlike for groups, rings, and so on, it is surprisingly difficult to prove that a subset closed under the operation is itself a root system. An equivalent statement is given as an exercise in the text Combinatorics of Coxeter Groups (alongside some unsolved problems), and that statement was the main result of Matthew Dyer's dissertation.

Definition 1: Let $(R,B)$ be a root system. A subset $U \subseteq R$ is called a subsystem if it is closed under the operation in $R$. Define $$B_R(U) = \{x \in R^+(B) \cap U \mid x \cdot y \in R^+(B) \text{ for all } y \in R^+(B) \cap U \text{ with } x \neq y\}.$$ We will show below that $B_R(U)$ forms a basis for $U$, though it is not even clear yet that $B_R(U)$ is nonempty. Define also, for $x \in R$ and $U \subseteq R$, $$x \cdot U = \{x \cdot y \mid y \in U\}.$$ Intuitively this is a "translation" of a subsystem by a root. We have been using this notation already for inversion sets. Note that $y \mapsto x \cdot y$ is a bijection between $U$ and $x \cdot U$, and since $(x \cdot y) \cdot (x \cdot z) = x \cdot (y \cdot z)$ we have that $x \cdot U$ is a subsystem of $R$. You may wish to verify that if $x \in U$ then $x \cdot U = U$.
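As a sanity check on Definition 1, here is a small computational sketch in type $A_3$, using an encoding of my own (not from this post): roots are ordered pairs $(i,j)$ with $i \neq j$, standing for $e_i - e_j$; the pair $(i,j)$ is positive when $i < j$; and the operation is $x \cdot y = s_x(y)$, which swaps the two indices of $x$ inside $y$.

```python
# Hypothetical encoding for type A_3: the root e_i - e_j is the pair (i, j),
# positive when i < j, and x . y = s_x(y) swaps the two indices of x inside y.

def dot(x, y):
    """The root-system operation x . y = s_x(y)."""
    i, j = x
    m = {i: j, j: i}
    return (m.get(y[0], y[0]), m.get(y[1], y[1]))

n = 4
Rpos = {(i, j) for i in range(n) for j in range(i + 1, n)}    # R+(B)

# U: the subsystem generated by e_0 - e_2 and e_2 - e_3 (an A_2 inside A_3).
U = {(a, b) for a in (0, 2, 3) for b in (0, 2, 3) if a != b}
assert all(dot(x, y) in U for x in U for y in U)              # U is closed under the operation

# B_R(U): positive roots x of U such that x . y stays positive
# for every other positive root y of U.
posU = Rpos & U
B_R_U = {x for x in posU if all(dot(x, y) in Rpos for y in posU if y != x)}
print(sorted(B_R_U))   # [(0, 2), (2, 3)]
```

Note that $(0,3)$ is excluded: reflecting $e_0 - e_2$ in $e_0 - e_3$ gives $e_3 - e_2$, which is negative. So $B_R(U)$ recovers the "simple roots" $e_0 - e_2$ and $e_2 - e_3$ of this $A_2$.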



Lemma 4.1: If $(R,B)$ is a root system, $U \subseteq R$ is a subsystem, and $x \in B \setminus U$, then $B_R(x \cdot U) = x \cdot B_R(U)$ and $R^+(B_R(x \cdot U)) = x \cdot R^+(B_R(U))$.
Proof: First we prove that $B_R(x \cdot U) = x \cdot B_R(U)$. Note that $x \notin x \cdot U$. Suppose $y \in B_R(x \cdot U)$. Then for all $z \in R^+(B) \cap (x \cdot U)$ such that $y \neq z$ we have that $y \cdot z \in R^+(B) \cap (x \cdot U)$. Since $x \in B$ and $x \notin x \cdot U$, we have $x \neq y, z$, and hence $x \cdot (y \cdot z) = (x \cdot y) \cdot (x \cdot z) \in R^+(B) \cap U$. Since $z \mapsto x \cdot z$ is a bijection between $R^+(B) \cap (x \cdot U)$ and $R^+(B) \cap U$, this shows that $x \cdot y \in B_R(U)$, hence $y \in x \cdot B_R(U)$. Thus $B_R(x \cdot U) \subseteq x \cdot B_R(U)$.

Suppose conversely that $y \in x \cdot B_R(U)$. Then for all $z \in R^+(B) \cap U$ with $z \neq x \cdot y$ we have that $(x \cdot y) \cdot z \in R^+(B) \cap U$, hence $x \cdot ((x \cdot y) \cdot z) = y \cdot (x \cdot z) \in R^+(B) \cap (x \cdot U)$. Thus $y \in B_R(x \cdot U)$, and the result follows.

Now note that $R^+(B_R(x \cdot U))$ is the smallest set $S$ satisfying $B_R(x \cdot U) = x \cdot B_R(U) \subseteq S$ and such that if $y \in S$ and $x' \in x \cdot B_R(U)$ are such that $x' \neq y$, then $x' \cdot y \in S$. Since $x \cdot (y \cdot z) = (x \cdot y) \cdot (x \cdot z)$, the bijection $y \mapsto x \cdot y$ preserves these properties, and thus $x \cdot R^+(B_R(x \cdot U)) = R^+(B_R(U))$. The result follows.

The following theorem is essentially the main result of Dyer's dissertation (Reflection subgroups of Coxeter systems, Journal of Algebra 135 (1), 1990, 57-73).

Theorem 4.2: Let $(R,B)$ be a root system and let $U \subseteq R$ be a subsystem. Then $(U, B_R(U))$ is a root system and $R^+(B_R(U)) = R^+(B) \cap U$.
Proof: The theorem follows if we prove that $R^+(B_R(U)) = R^+(B) \cap U$. It is an easy consequence of the definition of $B_R(U)$ that $R^+(B_R(U)) \subseteq R^+(B) \cap U$. We prove that the reverse inclusion holds. Suppose $y \in R^+(B) \cap U$. Let $y = w(x)$ for $w \in W(R)$ and $x \in B$ with $\ell(w)$ minimal. We prove by induction on $\ell(w)$ that $y \in R^+(B_R(U))$. If $\ell(w) = 0$, then $y \in B$, hence $y \in B_R(U)$, so $y \in R^+(B_R(U))$. Otherwise let $x' \in B$ be such that $\ell(s_{x'}w) < \ell(w)$. If $x' \in U$, then since $x' \cdot y = s_{x'}w(x)$ and $\ell(s_{x'}w) < \ell(w)$, we have by the induction hypothesis that $x' \cdot y \in R^+(B_R(U))$, hence the same is true of $y = x' \cdot (x' \cdot y)$. If $x' \notin U$, then $x' \cdot y \in x' \cdot U$, and by the induction hypothesis $x' \cdot y \in R^+(B_R(x' \cdot U)) = R^+(x' \cdot B_R(U)) = x' \cdot R^+(B_R(U))$ (the equalities by Lemma 4.1), so $y = x' \cdot (x' \cdot y) \in R^+(B_R(U))$. The result follows by induction.
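As a concrete check of Theorem 4.2, here is a sketch in type $A_3$ with a hypothetical encoding of my own: roots are pairs $(i,j)$ standing for $e_i - e_j$ (positive when $i < j$), the operation is $x \cdot y = s_x(y)$, and $U$ is the $A_2$ subsystem with roots $\pm(e_0 - e_2)$, $\pm(e_2 - e_3)$, $\pm(e_0 - e_3)$, whose basis $B_R(U)$ works out from Definition 1 to be $\{(0,2), (2,3)\}$. Here $R^+(B')$ is computed as the closure of $B'$ under $y \mapsto x \cdot y$ for $x \in B'$, matching the characterization used in the proof of Lemma 4.1.

```python
# Verify R+(B_R(U)) = R+(B) ∩ U for an A_2 subsystem U of A_3 (pair encoding).

def dot(x, y):
    """x . y = s_x(y) in the pair encoding of type A roots."""
    i, j = x
    m = {i: j, j: i}
    return (m.get(y[0], y[0]), m.get(y[1], y[1]))

n = 4
Rpos = {(i, j) for i in range(n) for j in range(i + 1, n)}        # R+(B)
U = {(a, b) for a in (0, 2, 3) for b in (0, 2, 3) if a != b}      # the subsystem
B_U = {(0, 2), (2, 3)}                                            # B_R(U), via Definition 1

def positive_span(basis):
    """Closure of `basis` under y -> x . y for x in basis, x != y."""
    S = set(basis)
    while True:
        new = {dot(x, y) for x in basis for y in S if x != y} - S
        if not new:
            return S
        S |= new

print(positive_span(B_U) == Rpos & U)   # True
```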

We now need to know that $W(U)$ can naturally be treated as a subgroup of $W(R)$. For $x \in U$ define $s^U_x : U \to U$ to be the corresponding reflection in $U$, and $s^R_x : R \to R$ to be the corresponding reflection in $R$. Define $W_R(U)$ to be the subgroup of $W(R)$ generated by all $s^R_x$ for $x \in U$.

Theorem 4.3: Let $(R,B)$ be a root system and let $U \subseteq R$ be a subsystem. Then the group homomorphism $W_R(U) \to W(U)$ determined by $s^R_x \mapsto s^U_x$ is an isomorphism.

Proof: The reflections $s^R_x$ for $x \in B_R(U)$ generate $W_R(U)$ by Theorem 2.2. Let $w \in W_R(U)$ be a nonidentity element; we claim that the image of $w$ is not the identity. Let $\ell_U(w)$ be the minimal length of a word for $w$ in terms of elements of $S(B_R(U))$. Let $y \in R^+(B) \cap U$. We claim that $\ell_U(w s_y) < \ell_U(w)$ if and only if $y \in I(w)$. The proof of this is similar to an earlier proof (Theorem 2.3), so we omit it. By exactly the same line of reasoning as in the proof of Proposition 2.4, it follows that $\ell_U(w) = |I(w) \cap U|$. Thus if $w$ fixes $U$ pointwise we must have $\ell_U(w) = 0$, so $w$ is the identity. This proves injectivity of the homomorphism. Surjectivity is clear.

We therefore do not distinguish reflections in a subsystem from the corresponding reflections in the larger root system, and we identify $W(U)$ with the subgroup $W_R(U)$ of $W(R)$.

Definition 2: If $(R,B)$ is a root system, $U \subseteq R$ is a subsystem, and $w \in W(U)$, define $I_U(w) = I(w) \cap U$, so that $I(w) = I_R(w)$. We use this notation to distinguish the inversion set of an element of $W(U)$ computed in the subsystem $U$ from its inversion set computed in the larger root system $R$.

Lemma 4.4: Let $(R,B)$ be a root system, $U \subseteq R$ a subsystem, $x \in B \setminus U$, and $w \in W(U)$. Then $I_U(w) = x \cdot I_{x \cdot U}(w s_x)$.
Proof: If $y \in I_U(w)$, then $w(y) \notin R^+(B)$ and $y \in U$. Then $x \cdot y \in (x \cdot U) \cap R^+(B)$, and $w s_x(x \cdot y) = w(y) \notin R^+(B)$. Thus $I_U(w) \subseteq x \cdot I_{x \cdot U}(w s_x)$. Conversely, if $y \in I_{x \cdot U}(w s_x)$, then $y \in x \cdot U$, so that $x \cdot y \in U$, and $w s_x(y) = w(x \cdot y) \notin R^+(B)$. Since $x \notin x \cdot U$ we have $y \neq x$, so $x \cdot y \in R^+(B)$, and thus $x \cdot y \in I_U(w)$. The result follows.



Now we define a function $\phi_U : W(R) \to W(U)$ that allows us to "project" elements of $W(R)$ onto $W(U)$.

Theorem-Definition 4.5: If $(R,B)$ is a root system and $U \subseteq R$ is a subsystem, then there is a unique function $\phi_U : W(R) \to W(U)$ such that for each $w \in W(R)$ we have $I(w) \cap U = I_U(\phi_U(w))$.
Proof: We prove existence by induction on $\ell(w)$, the result being clear if $\ell(w) = 0$. Let $x \in I(w) \cap B$. If $x \in U$, then since $I(w) = \{x\} \cup (x \cdot I(w s_x))$ we have that $$I(w) \cap U = (\{x\} \cup (x \cdot I(w s_x))) \cap U = \{x\} \cup (x \cdot I_U(\phi_U(w s_x))) = I_U(\phi_U(w s_x) s_x)$$ (using Lemma 3.2), so $\phi_U(w) = \phi_U(w s_x) s_x$. If $x \notin U$, we note that $$I(w) \cap U = (\{x\} \cup (x \cdot I(w s_x))) \cap U = x \cdot (I(w s_x) \cap (x \cdot U)) = x \cdot I_{x \cdot U}(\phi_{x \cdot U}(w s_x)) = x \cdot (x \cdot I_U(\phi_{x \cdot U}(w s_x) s_x)) = I_U(\phi_{x \cdot U}(w s_x) s_x)$$ (using Lemma 3.2 and Lemma 4.4), hence $\phi_U(w) = \phi_{x \cdot U}(w s_x) s_x$, and the result follows by induction. Uniqueness follows from uniqueness of inversion sets (Proposition 3.5).
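Here is a brute-force illustration of Theorem-Definition 4.5 in the permutation model of type $A_3$ (an encoding of my own, not from the post): $W(R)$ is the symmetric group $S_4$, the inversion set of a permutation $w$ is $\{(i,j) : i < j,\ w(i) > w(j)\}$, $U$ is the $A_2$ subsystem on indices $\{0,2,3\}$, and $W(U)$ is the subgroup of permutations fixing position $1$. For every $w$ there is exactly one $v \in W(U)$ with $I(v) \cap U = I(w) \cap U$.

```python
# For each w in S_4, find the unique v in W(U) with the same inversion set inside U.
from itertools import permutations

def inv(w):
    """Inversion set of a permutation, as positive roots (i, j)."""
    return {(i, j) for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j]}

n = 4
U_pos = {(0, 2), (0, 3), (2, 3)}                         # R+(B) ∩ U
W_U = [v for v in permutations(range(n)) if v[1] == 1]   # generated by s_(0,2), s_(2,3)

for w in permutations(range(n)):
    matches = [v for v in W_U if inv(v) & U_pos == inv(w) & U_pos]
    assert len(matches) == 1                             # phi_U(w) exists and is unique
print("phi_U is well defined on all 24 elements of S_4")
```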

The following theorem is quite deep and is related to the theory of permutation patterns.

Theorem-Definition 4.6: Let $(R,B)$ be a root system, let $w \in W(R)$, and let $U \subseteq R$ be a subsystem. Then there exist unique elements $w^U \in W(R)$ and $w_U \in W(U)$ such that $I(w^U) \cap U = \emptyset$ and $w = w^U w_U$. In particular, $w^U$ is the unique element of minimal length in the coset $w W(U)$.

Proof: I claim that $w_U = \phi_U(w)$ satisfies the condition, with $w^U = w w_U^{-1}$. We prove by induction on $\ell_U(\phi_U(w))$ that $I(w \phi_U(w)^{-1}) \cap U = \emptyset$. Set $v = \phi_U(w)$. If $\ell_U(v) = 0$, then $v = 1$, so $w^U = w$ and $w_U = 1 = \phi_U(w)$ work because $I(w) \cap U = I_U(1) = \emptyset$ by definition. Otherwise let $x \in B_R(U) \cap I_U(v)$. Then $\ell(w s_x) < \ell(w)$ because $x \in I(w)$, and since $\phi_U(w s_x) = v s_x$ we have by the induction hypothesis that $I((w s_x)^U) \cap U = I((w s_x)(v s_x)^{-1}) \cap U = \emptyset$. Since $(w s_x)(v s_x)^{-1} = w v^{-1}$, the result follows by induction.

To prove uniqueness of $w^U$, and hence of $w_U$, suppose $u \in w W(U)$ is an element of minimal length. Then $I(u) \cap U = \emptyset$, because $\ell(u s_y) > \ell(u)$ for all $y \in R^+(B) \cap U$. Any element of $w W(U)$ is equal to $uv$ for some $v \in W(U)$, and if $y \in I_U(v)$ then $-v(y) \in R^+(B) \cap U$, hence $u(-v(y)) \in R^+(B)$, so $uv(y) \notin R^+(B)$ and $y \in I(uv)$. This proves uniqueness of the minimal-length element, as well as of the element $u$ such that $I(u) \cap U = \emptyset$, because any other element of the coset contains a root of $R^+(B) \cap U$ in its inversion set.
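The decomposition $w = w^U w_U$ can also be checked by brute force in the same permutation model of type $A_3$ (again my own encoding, not the post's): with $U$ the $A_2$ subsystem on indices $\{0,2,3\}$, every coset $w W(U)$ in $S_4$ has a unique element of minimal length, and it is exactly the element whose inversion set misses $U$.

```python
# Theorem 4.6 by exhaustive check in S_4 with an A_2 subsystem U.
from itertools import permutations

def inv(w):
    """Inversion set of a permutation, as positive roots (i, j)."""
    return {(i, j) for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j]}

def compose(u, v):
    """(u v)(k) = u(v(k))."""
    return tuple(u[v[k]] for k in range(len(u)))

n = 4
U_pos = {(0, 2), (0, 3), (2, 3)}                         # R+(B) ∩ U
W_U = [v for v in permutations(range(n)) if v[1] == 1]   # W(U) inside S_4

for w in permutations(range(n)):
    coset = [compose(w, v) for v in W_U]
    min_len = min(len(inv(c)) for c in coset)
    shortest = [c for c in coset if len(inv(c)) == min_len]
    avoiding = [c for c in coset if not (inv(c) & U_pos)]
    assert shortest == avoiding and len(shortest) == 1   # unique, and the same element
print("every coset w W(U) has a unique minimal element, the one avoiding U")
```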

3. Roots, words and inversion sets

In this post we gather some technical results relating roots, words and inversion sets. This is all standard Coxeter group theory.

Definition 1: Let $(R,B)$ be a root system and let $\mathbf{w} = (s_{x_1}, \dots, s_{x_n})$ be a word. For each index $1 \le i \le n$ we assign a root $$r_i(\mathbf{w}) = s_{x_n} s_{x_{n-1}} \cdots s_{x_{i+1}}(x_i).$$
The proof of Lemma 3.1 below is actually quite easy and we leave it to the reader.

Lemma 3.1: Let $(R,B)$ be a root system and let $\mathbf{u} = (s_{x_1}, \dots, s_{x_n})$ be a word. Let $x \in B$ and let $\mathbf{v} = (s_{x_1}, \dots, s_{x_n}, s_x)$. Then for each $1 \le i \le n$ we have $r_i(\mathbf{v}) = x \cdot r_i(\mathbf{u})$.

The next lemma allows us to build inversion sets root by root.

Lemma 3.2: Let $(R,B)$ be a root system, $w \in W(R)$, and $x \in B$. If $x \notin I(w)$, then $I(w s_x) = \{x\} \cup (x \cdot I(w))$. If $x \in I(w)$, then $I(w s_x) = x \cdot (I(w) \setminus \{x\})$.
Proof: Suppose $x \notin I(w)$. Then since $w s_x(x) = w(-x) = -w(x) \in R^-(B)$ we have that $x \in I(w s_x)$. If $y \in I(w s_x)$ with $y \neq x$, then $w s_x(y) = w(x \cdot y) \notin R^+(B)$. Since $y \neq x$, $x \cdot y \in R^+(B)$, and thus $x \cdot y \in I(w)$. Conversely, if $y \in I(w)$, then $w(y) = w s_x(x \cdot y) \notin R^+(B)$, hence $x \cdot y \in I(w s_x)$. Thus $y \mapsto x \cdot y$ is a bijection of $I(w)$ with $I(w s_x) \setminus \{x\}$, and we are done. The second part is left to the reader.
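Lemma 3.2 is easy to watch in action in the permutation model of type $A_2$ (my own encoding: roots are pairs $(i,j)$ for $e_i - e_j$, positive when $i < j$, and $W(R) = S_3$):

```python
# Build I(w s_x) from I(w) as in the first case of Lemma 3.2 (x not in I(w)).

def dot(x, y):
    """x . y = s_x(y) in the pair encoding."""
    i, j = x
    m = {i: j, j: i}
    return (m.get(y[0], y[0]), m.get(y[1], y[1]))

def inv(w):
    """Inversion set of a permutation, as positive roots (i, j)."""
    return {(i, j) for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j]}

def refl(x, n):
    """The reflection s_x as a permutation."""
    p = list(range(n))
    p[x[0]], p[x[1]] = p[x[1]], p[x[0]]
    return tuple(p)

def compose(u, v):
    return tuple(u[v[k]] for k in range(len(u)))

n = 3
w = (1, 0, 2)                      # w = s_(0,1), with I(w) = {(0, 1)}
x = (1, 2)                         # simple root with x not in I(w)
lhs = inv(compose(w, refl(x, n)))  # I(w s_x)
rhs = {x} | {dot(x, y) for y in inv(w)}
print(lhs == rhs)                  # True
```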

Theorem 3.3 tells us why the roots we assign to these indices are important.

Theorem 3.3: Let $(R,B)$ be a root system and let $\mathbf{w} = (s_{x_1}, \dots, s_{x_n})$ be a reduced word for the element $w = s_{x_1} \cdots s_{x_n}$. Then $I(w) = \{r_i(\mathbf{w}) \mid 1 \le i \le n\}$.
Proof: We prove this by induction on $\ell(w)$. The base case is $\ell(w) = 0$, in which case a reduced word for $w$ is empty and the result holds trivially. For the induction step, we assume the result is true for $w$ with an arbitrary reduced word $\mathbf{w} = (s_{x_1}, \dots, s_{x_n})$, let $x \in B \setminus I(w)$, and prove the result for the element $w s_x$ and the word $\mathbf{w}' = (s_{x_1}, \dots, s_{x_n}, s_x)$. Define $$I'_{w s_x} = \{r_i(\mathbf{w}') \mid 1 \le i \le n+1\}.$$ By Lemma 3.1 above (together with $r_{n+1}(\mathbf{w}') = x$) and the induction hypothesis, we have that $I'_{w s_x} = \{x\} \cup (x \cdot I(w))$. Hence by Lemma 3.2, $I'_{w s_x} = I(w s_x)$, and the result follows.
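In the same permutation model (my own encoding, not the post's), we can confirm Theorem 3.3 on a reduced word in $S_4$: the roots $r_i(\mathbf{w})$ enumerate the inversion set of the element.

```python
# The roots of the reduced word (s_(0,1), s_(1,2), s_(2,3)) give I(w) for w a 4-cycle.

def dot(x, y):
    """x . y = s_x(y) in the pair encoding."""
    i, j = x
    m = {i: j, j: i}
    return (m.get(y[0], y[0]), m.get(y[1], y[1]))

def inv(w):
    return {(i, j) for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j]}

def refl(x, n):
    p = list(range(n))
    p[x[0]], p[x[1]] = p[x[1]], p[x[0]]
    return tuple(p)

def compose(u, v):
    return tuple(u[v[k]] for k in range(len(u)))

def roots_of_word(word):
    """r_i(word) = s_{x_n} ... s_{x_{i+1}}(x_i), computed with the operation."""
    out = []
    for i, x in enumerate(word):
        r = x
        for y in word[i + 1:]:
            r = dot(y, r)
        out.append(r)
    return out

n = 4
word = [(0, 1), (1, 2), (2, 3)]
w = tuple(range(n))
for x in reversed(word):
    w = compose(refl(x, n), w)     # w = s_(0,1) s_(1,2) s_(2,3) = (1, 2, 3, 0)

print(set(roots_of_word(word)) == inv(w))   # True
```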

Definition 2: Given an element $w \in W(R)$, denote by $\mathcal{R}(w)$ the set of reduced words for $w$. Given a reduced word $\mathbf{w} = (s_{x_1}, \dots, s_{x_n})$ for the element $w$, we define a bijection $b_{\mathbf{w}} : I(w) \to [\ell(w)]$ by declaring that $b_{\mathbf{w}}(r_j(\mathbf{w})) = j$. Then define $$\mathcal{B}(w) = \{b_{\mathbf{w}} \mid \mathbf{w} \in \mathcal{R}(w)\}.$$ These can be seen as linear orderings of the inversion set. They correspond bijectively to reduced words, as we now prove.

Theorem 3.4: Let $(R,B)$ be a root system and let $w \in W(R)$. Then the map $\mathcal{R}(w) \to \mathcal{B}(w)$ given by $\mathbf{w} \mapsto b_{\mathbf{w}}$ is a bijection.

Proof: Clearly the map is surjective. We prove it is injective by induction on $\ell(w)$. This is clear if $\ell(w) = 0$. Otherwise, suppose $\mathbf{u} \neq \mathbf{v}$ are reduced words for $w$. If $u_{\ell(w)} \neq v_{\ell(w)}$, then there exist distinct $x, x' \in B \cap I(w)$ such that $b_{\mathbf{u}}(x) = b_{\mathbf{v}}(x') = \ell(w)$, hence $b_{\mathbf{u}} \neq b_{\mathbf{v}}$. If instead $u_{\ell(w)} = v_{\ell(w)} = s_x$ for some $x \in B \cap I(w)$, so that $b_{\mathbf{u}}(x) = b_{\mathbf{v}}(x) = \ell(w)$, define $\mathbf{u}' = (u_i)_{i < \ell(w)}$ and $\mathbf{v}' = (v_i)_{i < \ell(w)}$. Then $\mathbf{u}' \neq \mathbf{v}'$ are reduced words for $w s_x$, and if $y \neq x$ then $b_{\mathbf{u}}(y) = b_{\mathbf{u}'}(x \cdot y)$ and $b_{\mathbf{v}}(y) = b_{\mathbf{v}'}(x \cdot y)$. By the induction hypothesis, $b_{\mathbf{u}'}(x \cdot y) \neq b_{\mathbf{v}'}(x \cdot y)$ for some $y \in I(w)$, hence $b_{\mathbf{u}} \neq b_{\mathbf{v}}$, and the result follows by induction.
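For the longest element of $S_3$ (permutation model, my own encoding as in the earlier sketches), we can enumerate reduced words directly and see that distinct reduced words give distinct orderings $b_{\mathbf{w}}$:

```python
# The two reduced words of the longest element of S_3 induce two distinct
# linear orderings of its inversion set.
from itertools import product

def dot(x, y):
    """x . y = s_x(y) in the pair encoding."""
    i, j = x
    m = {i: j, j: i}
    return (m.get(y[0], y[0]), m.get(y[1], y[1]))

def refl(x, n):
    p = list(range(n))
    p[x[0]], p[x[1]] = p[x[1]], p[x[0]]
    return tuple(p)

def compose(u, v):
    return tuple(u[v[k]] for k in range(len(u)))

def roots_of_word(word):
    """The sequence (r_1(word), ..., r_n(word)), i.e. the ordering b_word."""
    out = []
    for i, x in enumerate(word):
        r = x
        for y in word[i + 1:]:
            r = dot(y, r)
        out.append(r)
    return out

n = 3
B = [(0, 1), (1, 2)]
w0 = (2, 1, 0)                          # longest element, length 3

def element(word):
    w = tuple(range(n))
    for x in reversed(word):
        w = compose(refl(x, n), w)
    return w

reduced = [list(word) for word in product(B, repeat=3) if element(list(word)) == w0]
orderings = {tuple(roots_of_word(word)) for word in reduced}
print(len(reduced), len(orderings))     # 2 2
```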

Uniqueness of inversion sets is a very useful fact that we now prove.

Proposition 3.5: Let $(R,B)$ be a root system and let $u, v \in W(R)$. If $I(u) = I(v)$, then $u = v$.

Proof: We prove this by induction on $\ell(u) = \ell(v)$ (the lengths agree since $\ell(u) = |I(u)| = |I(v)| = \ell(v)$). If $\ell(u) = \ell(v) = 0$ this is clear. Otherwise let $x \in I(u) \cap B$. Then $I(u s_x) = I(v s_x) = x \cdot (I(u) \setminus \{x\})$ by Lemma 3.2. By the induction hypothesis, $u s_x = v s_x$, hence $u = (u s_x) s_x = (v s_x) s_x = v$.
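Finally, Proposition 3.5 can be confirmed exhaustively in the permutation model for $S_4$ (my encoding as above): distinct permutations have distinct inversion sets.

```python
# No two elements of S_4 share an inversion set.
from itertools import permutations

def inv(w):
    """Inversion set of a permutation, as positive roots (i, j)."""
    return {(i, j) for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j]}

seen = {}
for w in permutations(range(4)):
    key = frozenset(inv(w))
    assert key not in seen, "two elements with the same inversion set"
    seen[key] = w
print(len(seen))   # 24
```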