Reinforcement Learning Exercise 3.15


    Exercise 3.15 In the gridworld example, rewards are positive for goals, negative for running into the edge of the world, and zero the rest of the time. Are the signs of these rewards important, or only the intervals between them? Prove, using (3.8), that adding a constant $c$ to all the rewards adds a constant, $v_c$, to the values of all states, and thus does not affect the relative values of any states under any policies. What is $v_c$ in terms of $c$ and $\gamma$?

    First, by the definition of the state-value function $v_\pi$:
    $$
    \begin{aligned}
    v_\pi(s) &= \mathbb{E}_\pi\bigl[G_t \mid S_t = s\bigr] \\
    &= \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\Big|\, S_t = s\Bigr]
    \end{aligned}
    $$
    Denote the shifted reward by $\hat{R} = R + c$. For $\hat{R}$, we have:
    $$
    \begin{aligned}
    \hat{v}_\pi(s) &= \mathbb{E}_\pi\bigl[\hat{G}_t \mid S_t = s\bigr] \\
    &= \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k \hat{R}_{t+k+1} \,\Big|\, S_t = s\Bigr] \\
    &= \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k \bigl(R_{t+k+1} + c\bigr) \,\Big|\, S_t = s\Bigr] \\
    &= \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\Big|\, S_t = s\Bigr] + \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k c \,\Big|\, S_t = s\Bigr] \\
    &= \mathbb{E}_\pi\Bigl[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\Big|\, S_t = s\Bigr] + \sum_{k=0}^{\infty} \gamma^k c \\
    &= v_\pi(s) + \frac{c}{1 - \gamma}
    \end{aligned}
    $$
    $$
    \therefore \quad v_c = \frac{c}{1-\gamma}
    $$
    Since every state's value shifts by the same constant $v_c$, the relative values of states under any policy are unchanged. In this continuing task, then, only the intervals between rewards matter, not their signs.
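    As a quick numerical sanity check (not part of the original exercise), the sketch below evaluates a fixed policy on a small hypothetical 3-state MDP by solving the Bellman equation directly, then repeats the evaluation with every reward shifted by a constant $c$. The transition matrix `P`, reward vector `r`, and constants are illustrative assumptions; the point is only that each state's value shifts by exactly $c/(1-\gamma)$.

    ```python
    import numpy as np

    gamma = 0.9
    c = 5.0

    # Hypothetical 3-state MDP under a fixed policy:
    # P[s, s'] is the policy-induced state-transition probability,
    # r[s] is the expected immediate reward from state s.
    P = np.array([
        [0.5, 0.5, 0.0],
        [0.0, 0.5, 0.5],
        [0.5, 0.0, 0.5],
    ])
    r = np.array([1.0, -1.0, 0.0])

    def evaluate(P, r, gamma):
        # Solve the Bellman equation v = r + gamma * P v, i.e. (I - gamma P) v = r.
        return np.linalg.solve(np.eye(len(r)) - gamma * P, r)

    v = evaluate(P, r, gamma)
    v_shifted = evaluate(P, r + c, gamma)

    print("v              :", v)
    print("v + c/(1-gamma):", v + c / (1 - gamma))
    print("v_shifted      :", v_shifted)
    # The last two lines agree: adding c to all rewards adds c / (1 - gamma)
    # to every state's value, leaving relative values unchanged.
    ```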
