After having written two articles about machine translation, I thought it might be time to address the flourishing field of reinforcement learning. To that end, I designed a Cython-optimized billiard environment that simulates physically accurate (up to torque) game episodes.

To account for the continuous action space, I decided to implement the Deep Deterministic Policy Gradient algorithm [Lillicrap et al., 2015]. Having only solved easier tasks before, I quickly realized that the initial exploration phase in particular isn't as straightforward as I had initially assumed. To rule out any other pitfalls, I further tested some components in isolation, which was made possible by artificially generating data.

It subsequently occurred to me that an article about this analysis might be compelling as well. This article revolves around a mechanism that learns the polar coordinates of one point relative to some other (fixed) point.

The code is, as always, available on GitHub and requires Python 3.7 as well as PyTorch 1.0; it can, however, easily be run on older systems as well.

Title photo by Annie Spratt on Unsplash

Figure 1: An overlay of two examples

Problem definition

The network is given an image of some pre-defined resolution $S \times S$ that contains two points. The former point (i.e. the origin) is always located in the center, while the latter point's polar coordinates are governed by the random vector $[R, \Phi]$ where $R \sim \mathcal{U}(0, 1)$ and $\Phi \sim \mathcal{U}(-\pi, +\pi)$. The ultimate goal is to predict these polar coordinates with respect to the origin.

The angles $\Phi$ are naturally defined on the interval $(-\pi, +\pi)$, while the radii range from 0 to 1, where 1 corresponds to the maximum radius $\frac{S}{2}$.

As the network is fed with artificially generated data, we abstain from any attempt at measuring generalization (e.g. by defining a proper data split).

In the remainder of the article, $[r, \phi]$ will denote a realization of the bivariate random variable $[R, \Phi]$.

Side note: It should be pointed out that by defining $R \sim \mathcal{U}(0, 1)$ and $\Phi \sim \mathcal{U}(-\pi, +\pi)$ and taking $[X, Y] = R\,[\cos(\Phi), \sin(\Phi)]$, we do not obtain a circular uniform distribution on the unit disk. Applying the transformation $[X, Y] = \sqrt{R}\,[\cos(\Phi), \sin(\Phi)]$ would, however, yield exactly said distribution. Yet we abstain from using it for the sake of simplicity, as the distribution of $R$ makes our analyses easier than those with regard to $\sqrt{R}$.
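To make the setup concrete, here is a minimal sketch of how such a training example could be generated (the function name and the rasterization details are mine and not necessarily those of the repository):

import numpy as np

def make_example(S=32):
    # Sample the polar coordinates as described above
    r = np.random.uniform(0.0, 1.0)
    phi = np.random.uniform(-np.pi, np.pi)
    # Convert to pixel coordinates; r = 1 corresponds to the maximum radius S/2
    cx = cy = S // 2
    px = int(round(cx + r * (S / 2 - 1) * np.cos(phi)))
    py = int(round(cy + r * (S / 2 - 1) * np.sin(phi)))
    image = np.zeros((S, S), dtype=np.float32)
    image[cy, cx] = 1.0  # the origin, always located in the center
    image[py, px] = 1.0  # the second point
    return image, np.array([r, phi], dtype=np.float32)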

Network architecture

Figure: The network architecture

The network's architecture consists of two kinds of blocks:

  1. CNN-IN: convolutional layers, each followed by instance normalization
  2. FC-BN: fully connected layers, each followed by batch normalization

Side note: I kept the configuration of the network as flexible as possible. In this fashion, it is possible to specify the number of blocks (per block class) and other parameters (e.g. strides or kernel sizes). Judging by the complexity of the problem, it should however suffice to keep the configuration as it is.
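To illustrate, a stripped-down version of such a configurable network could look as follows (a sketch under my own naming; the block counts, channel sizes and the use of average pooling are assumptions, not the repository's defaults):

import torch.nn as nn

class PolarNet(nn.Module):
    def __init__(self, n_cnn=3, n_fc=2, channels=16, hidden=64):
        super().__init__()
        conv, in_ch = [], 1
        for _ in range(n_cnn):
            # CNN-IN block: convolution followed by instance normalization
            conv += [nn.Conv2d(in_ch, channels, kernel_size=3, stride=2, padding=1),
                     nn.InstanceNorm2d(channels),
                     nn.ReLU()]
            in_ch = channels
        conv.append(nn.AdaptiveAvgPool2d(1))  # collapse the spatial dimensions
        self.features = nn.Sequential(*conv)
        fc, in_f = [], channels
        for _ in range(n_fc):
            # FC-BN block: fully connected layer followed by batch normalization
            fc += [nn.Linear(in_f, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
            in_f = hidden
        # two outputs (x_0, x_1), normalized by a final batch-norm layer
        fc += [nn.Linear(in_f, 2), nn.BatchNorm1d(2)]
        self.head = nn.Sequential(*fc)

    def forward(self, x):
        h = self.features(x)       # (batch, channels, 1, 1)
        h = h.view(h.size(0), -1)  # flatten
        return self.head(h)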

Representation of outputs

To get a meaningful output, the network needs to be able to "reach" any value that occurs in the ground truth. As the angles are defined over $(-\pi, +\pi)$ and the radii over $(0, 1)$, we can leverage some bounded activation function and a linear transformation thereof.

A possible solution using the sigmoid function could read as follows (for two inputs $x_0,\ x_1$):

$$\begin{aligned} r &= \sigma(x_0) \\ \phi &= \pi (2 \cdot \sigma(x_1) - 1) = \pi \cdot \tanh\left(\frac{x_1}{2}\right) \end{aligned}$$

We immediately see that this meets our requirements, as $0 \leq r \leq 1$ and $-\pi \leq \phi \leq +\pi$ always hold. Moreover, this scheme guarantees uniqueness with respect to the angles, as one angle cannot be expressed by multiple values. The boundedness further ensures that out-of-range values can never occur.

Nonetheless, it is not clear whether this scheme is optimal. One potential problem is that both the sigmoid function and the hyperbolic tangent saturate very quickly (e.g. $\sigma(4) = \frac{1}{1 + \exp(-4)} \approx 0.98$ and conversely $\sigma(-4) = \frac{1}{1 + \exp(4)} \approx 0.018$), causing many outputs to be close to either 0 or 1 after initialization.

A better solution could be to leverage a function that saturates more slowly than the sigmoid function. The softsign function $s(x) = \frac{x}{1 + |x|}$ can be interpreted as a continuous approximation of the sign function. The scaled version $\hat{s}(x) = \frac{s(x) + 1}{2}$ shares the same limits as the sigmoid function (i.e. $\lim_{x \to -\infty} \hat{s}(x) = \lim_{x \to -\infty} \sigma(x) = 0$ and $\lim_{x \to +\infty} \hat{s}(x) = \lim_{x \to +\infty} \sigma(x) = 1$) but does not exhibit exponential saturation.
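A quick numerical comparison illustrates the difference in saturation speed (plain NumPy, purely for illustration):

import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
softsign_scaled = lambda x: 0.5 * (x / (1.0 + np.abs(x)) + 1.0)

for x in [2.0, 4.0, 8.0]:
    print(x, 1.0 - sigmoid(x), 1.0 - softsign_scaled(x))
# The gap to the upper limit 1 shrinks exponentially for the sigmoid,
# but only like 1/(2(1 + x)) for the scaled softsign.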

Why is this important? By design, our radii follow a unit uniform distribution $\mathcal{U}(0, 1)$ (the same argument holds true in an adapted form for the angles). An optimal network $G$, given some input $I \sim P_I$ (in this case our images), necessarily needs to parameterize the output distribution of $G(I)$ in a way that maximizes its similarity to the distribution of the radii (i.e. $\mathcal{U}(0, 1)$).

We can always conceive of our neural network $G$ as the composition of a function $\tilde{G}$ and some activation function $a$ (s.t. $G = a \circ \tilde{G}$); we can then define $X$ as the random variable $\tilde{G}(I)$.

Since the activation functions in question are all odd (after subtracting $\frac{1}{2}$) and given our assumption that $G(I)$ should be uniform, we can further assume that $\tilde{G}(I)$'s distribution should be akin to the symmetric uniform distribution $\mathcal{U}(-\alpha, +\alpha)$ where $\alpha > 0$ is yet to be determined.

The notion of similarity can be made mathematically precise by using a similarity measure $Q$ on probability distributions. One option is the Bhattacharyya coefficient. Despite not inducing a proper metric in the strict sense (the associated distance does not satisfy the triangle inequality), it serves our purpose of expressing similarity as a single number. As we are solely interested in evaluating whether some pair of distributions is more similar to each other than another one (i.e. $Q(P_0, \mathcal{U}) > Q(P_1, \mathcal{U})$ with $\mathcal{U}$ being the unit uniform distribution), we could also use the Kullback-Leibler divergence. Both are expressed as an integral over the domain of the random variables involved; I chose the former as it makes the derivations somewhat easier in this case.

Let us also define a parametric family over both functions:

$$\begin{aligned} \sigma_\lambda(x) &= \sigma(\lambda x) \\ \hat{s}_\lambda(x) &= \hat{s}(\lambda x) \end{aligned}$$

for some $\lambda > 0$. This enables us to further account for varying levels of "steepness".


Figure 2: A comparison of the sigmoid and scaled-softsign families for $\lambda \in \lbrace 0.1,\ 0.25,\ 0.5,\ 1.0,\ 2.0 \rbrace$. We observe that the respective sigmoid functions saturate considerably faster than their scaled-softsign counterparts.

In the following part we will thus deduce the probability distributions of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$, assuming that $X$ follows some symmetric uniform distribution $\mathcal{U}(-\alpha, +\alpha)$ for some yet to be determined $\alpha$.

The distribution of $Y_\lambda$ can be inferred using the definition of the cumulative distribution function:

$$\begin{aligned} F_{Y_\lambda}(y) &= \Pr[Y_\lambda \leq y] \\ &= \Pr[\sigma_\lambda(X) \leq y] \\ &= \Pr[X \leq \sigma_\lambda^{-1}(y)] \\ &= F_X(\sigma_\lambda^{-1}(y)) \end{aligned}$$

Side note: The probability integral transform tells us that any continuous random variable $X$ can be transformed into a random variable $Y$ following $\mathcal{U}(0, 1)$ by applying its own cumulative distribution function $F_X$ to it (i.e. $Y = F_X(X)$). In our case this would amount to simply taking a linear activation, which would however no longer meet our requirement of boundedness.

As $\sigma_\lambda(\cdot)$ is strictly monotonic, it is also invertible. The inverse is given by $\sigma_\lambda^{-1}(y) = \frac{1}{\lambda} \log{\frac{y}{1 - y}}$, which is a scaled version of the so-called logit or log-odds function.

Differentiating the cumulative distribution function yields the density:

$$\begin{aligned} \dfrac{d}{dy} F_X(\sigma_\lambda^{-1}(y)) &= \dfrac{d}{dy} (F_X \circ \sigma_\lambda^{-1})(y) \\ &= F_X'(\sigma_\lambda^{-1}(y)) \cdot \dfrac{d}{dy} \sigma_\lambda^{-1}(y) \\ &= f_X(\sigma_\lambda^{-1}(y)) \cdot \frac{1}{\lambda} \frac{1}{y - y^2} \end{aligned}$$

The density function of $X$ is moreover given by $f_X(x) = \frac{1}{2\alpha}$ if $x \in [-\alpha, +\alpha]$ and $0$ otherwise.

Putting it all together we get:

$$f_{Y_\lambda}(y) = \begin{cases} \frac{1}{2 \lambda \alpha} \frac{1}{y - y^2}, & \text{if } \sigma(-\lambda \alpha) \leq y \leq \sigma(\lambda \alpha) \\ 0, & \text{otherwise} \end{cases}$$

Let us now do the same for $Z_\lambda = \hat{s}_\lambda(X)$. The inverse function and its derivative are given by:

$$\begin{aligned} \hat{s}^{-1}_\lambda(z) &= \begin{cases} \frac{2z - 1}{2 \lambda z}, & \text{if } z \leq \frac{1}{2} \\ \frac{2z - 1}{2 \lambda (1 - z)}, & \text{otherwise} \end{cases} \\ \dfrac{d}{dz} \hat{s}^{-1}_\lambda(z) &= \begin{cases} \frac{1}{2 \lambda z^2}, & \text{if } z \leq \frac{1}{2} \\ \frac{1}{2 \lambda (1 - z)^2}, & \text{otherwise} \end{cases} \end{aligned}$$

Analogously, we get:

$$f_{Z_\lambda}(z) = \begin{cases} \frac{1}{4 \lambda \alpha z^2}, & \text{if } \frac{1}{2 (\lambda \alpha + 1)} \leq z \leq \frac{1}{2} \\ \frac{1}{4 \lambda \alpha (1 - z)^2}, & \text{if } \frac{1}{2} < z \leq \frac{2 \lambda \alpha + 1}{2 (\lambda \alpha + 1)} \\ 0, & \text{otherwise} \end{cases}$$
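Both densities can be sanity-checked with a quick Monte Carlo simulation (a sketch; $\alpha$ and $\lambda$ are chosen arbitrarily, and only the sigmoid case is shown):

import numpy as np

alpha, lam = 1.0, 2.0
x = np.random.uniform(-alpha, alpha, size=1000000)
y = 1.0 / (1.0 + np.exp(-lam * x))  # Y_lambda = sigmoid(lambda * X)

hist, edges = np.histogram(y, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = 1.0 / (2 * lam * alpha * (centers - centers ** 2))
print(np.abs(hist - analytic).max())  # small, up to sampling noise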


Figure 3: Density plots of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$ for $\lambda \in \lbrace 0.1,\ 0.25,\ 0.5,\ 1.0,\ 2.0 \rbrace$ and $X \sim \mathcal{U}(-1, +1)$ (i.e. $\alpha = 1$, chosen arbitrarily).

How can we pick a $\lambda$ such that the distributions of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$ become as uniform as possible for some fixed $\alpha$?

To this end, let us derive the Bhattacharyya coefficient for both cases:

$$\begin{aligned} B(Y_\lambda, U) &= \int_{-\infty}^{\infty} \sqrt{f_{Y_\lambda}(y) f_U(y)}\ dy \\ &= \int_{\sigma(-\lambda \alpha)}^{\sigma(\lambda \alpha)} \sqrt{\frac{1}{2 \lambda \alpha} \frac{1}{y - y^2} \cdot 1}\ dy \\ &= \frac{1}{\sqrt{2 \lambda \alpha}} \int_{\sigma(-\lambda \alpha)}^{\sigma(\lambda \alpha)} \frac{1}{\sqrt{\frac{1}{4} - \left(y - \frac{1}{2}\right)^2}}\ dy \\ &= \frac{1}{\sqrt{2 \lambda \alpha}} \frac{1}{2} \int_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \frac{1}{\sqrt{\frac{1}{4} - \left(\frac{u}{2}\right)^2}}\ du \\ &= \frac{1}{\sqrt{2 \lambda \alpha}} \int_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \frac{1}{\sqrt{1 - u^2}}\ du \\ &= \frac{1}{\sqrt{2 \lambda \alpha}} \Big[\arcsin(u)\Big]_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \\ &= \frac{1}{\sqrt{2 \lambda \alpha}} \cdot 2 \arcsin\left(\tanh\frac{\lambda \alpha}{2}\right) \\ &= \sqrt{\frac{2}{\lambda \alpha}} \arcsin\left(\tanh\frac{\lambda \alpha}{2}\right) \end{aligned}$$

where we substituted $u = 2y - 1$ (note that $2\sigma(\lambda \alpha) - 1 = \tanh(\frac{\lambda \alpha}{2})$).

$$\begin{aligned} B(Z_\lambda, U) &= \int_{-\infty}^{\infty} \sqrt{f_{Z_\lambda}(z) f_U(z)}\ dz \\ &= \int_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} \sqrt{\frac{1}{4 \lambda \alpha z^2} \cdot 1}\ dz + \int_{\frac{1}{2}}^{\frac{2 \lambda \alpha + 1}{2 (\lambda \alpha + 1)}} \sqrt{\frac{1}{4 \lambda \alpha (1 - z)^2} \cdot 1}\ dz \\ &= \frac{1}{2 \sqrt{\lambda \alpha}} \Big[\log z\Big]_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} - \frac{1}{2 \sqrt{\lambda \alpha}} \Big[\log(1 - z)\Big]_{\frac{1}{2}}^{\frac{2 \lambda \alpha + 1}{2 (\lambda \alpha + 1)}} \\ &= \frac{1}{\sqrt{\lambda \alpha}} \Big[\log z\Big]_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} \\ &= \frac{\log(\lambda \alpha + 1)}{\sqrt{\lambda \alpha}} \end{aligned}$$

where both integrals contribute the same amount by symmetry.

Interestingly, both functions $B(Y_\lambda, U)$ and $B(Z_\lambda, U)$ depend solely on the product $\lambda \alpha$ rather than on the variables $\lambda$ and $\alpha$ individually. This enables us to optimize both as functions of a single variable $h = \lambda \alpha$.

Numerical optimization yields:

$$\begin{aligned} \max_h B(Y_\lambda, U) &\approx 0.93 \text{ at } h \approx 3.38 \\ \max_h B(Z_\lambda, U) &\approx 0.80 \text{ at } h \approx 3.92 \end{aligned}$$
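These values can be reproduced with a few lines of SciPy (a sketch; the closed-form expressions are the ones derived above, the function names are mine):

import numpy as np
from scipy.optimize import minimize_scalar

def bhattacharyya_sigmoid(h):
    return np.sqrt(2.0 / h) * np.arcsin(np.tanh(h / 2.0))

def bhattacharyya_softsign(h):
    return np.log(h + 1.0) / np.sqrt(h)

for name, B in [('sigmoid', bhattacharyya_sigmoid),
                ('softsign', bhattacharyya_softsign)]:
    res = minimize_scalar(lambda h: -B(h), bounds=(1e-3, 50.0), method='bounded')
    print('%s: max B = %.2f at h = %.2f' % (name, -res.fun, res.x))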

Side note: The Bhattacharyya coefficient $B(P, Q) = \int_\Omega \sqrt{f_P(x) f_Q(x)}\ dx$ satisfies $0 \leq B(P, Q) \leq 1$ for all density functions $f_P(\cdot)$, $f_Q(\cdot)$ on some domain $\Omega$.

We observe that the sigmoid family is better suited, as it can achieve a higher similarity to the uniform distribution $U$ on the unit interval. Notice that the network can always attain the optimal $h$ by adjusting $\alpha$ to $\frac{h}{\lambda}$ for any given $\lambda$. Intuitively this makes sense: a large $\lambda$ makes the corresponding function steeper, causing the network to settle for a smaller $\alpha$ (the variance of the input random variable $X \sim \mathcal{U}(-\alpha, +\alpha)$ thus becomes smaller) to make $Y_\lambda$ or $Z_\lambda$ more uniform.

Probability theory tells us that some random variable $X \sim \mathcal{U}(-\alpha, +\alpha)$ can always be standardized by defining a new variable $\tilde{X} = \frac{X - \mathbb{E}[X]}{\sqrt{\mathrm{Var}[X]}} = \frac{X}{\alpha / \sqrt{3}} \sim \mathcal{U}(-\sqrt{3}, +\sqrt{3})$. Not knowing $\alpha$ is not much of a problem, as $\mathbb{E}[X]$ and $\mathrm{Var}[X]$ can easily be estimated using batch statistics. The process of computing these estimates and applying them is called Batch Normalization [Ioffe & Szegedy, 2015].

This enables us to derive a value of $\lambda$ that is independent of $\alpha$: after standardization the input to the activation follows $\mathcal{U}(-\sqrt{3}, +\sqrt{3})$, i.e. $\alpha = \sqrt{3}$, which yields $\lambda = \frac{h_\mathrm{max}}{\sqrt{3}} \approx 1.95$ according to the aforementioned numerical optimization for the sigmoid function.

Representation of radii/angles

The radii and angles are accordingly represented as:

$$\begin{aligned} r &= \sigma_{1.95}(x_0) = \sigma(1.95 \cdot x_0) \\ \phi &= \pi (2 \cdot \sigma_{1.95}(x_1) - 1) = \pi (2 \cdot \sigma(1.95 \cdot x_1) - 1) \\ &= \pi \cdot \tanh\left(\frac{1.95 \cdot x_1}{2}\right) = \pi \cdot \tanh(0.975 \cdot x_1) \\ &\approx \pi \cdot \tanh(x_1) \end{aligned}$$

where $x_0,\ x_1$ are the outputs of the last batch normalization layer.
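Expressed in PyTorch, this amounts to the following output head (a sketch; the function name is mine, not the repository's):

import numpy as np
import torch

def polar_head(x, slope=1.95):
    # x has shape (batch, 2) and comes from the last batch normalization layer
    r = torch.sigmoid(slope * x[:, 0])               # r in (0, 1)
    phi = np.pi * torch.tanh(0.5 * slope * x[:, 1])  # phi in (-pi, +pi)
    return r, phi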

Loss functions

Angular loss

Assume that we would like to quantify the difference between two given angles $\alpha,\ \beta \in [-\pi, +\pi]$. A naïve way to do this is to leverage the absolute difference between $\alpha$ and $\beta$. This works as long as the absolute difference is not greater than $\pi$ (i.e. if and only if $|\beta - \alpha| \leq \pi$) and fails otherwise. The latter case can easily be demonstrated by choosing $\alpha = -\frac{3}{4} \pi$ and $\beta = \frac{3}{4} \pi$, yielding $|\beta - \alpha| = \frac{3}{2} \pi$ instead of the actual inscribed angle of $\frac{\pi}{2}$.

Of course, this can easily be resolved by formulating a case distinction along these lines:

$$\mathcal{I}(\alpha, \beta) = \begin{cases} |\beta - \alpha|, & \text{if } |\beta - \alpha| \leq \pi \\ 2\pi - |\beta - \alpha|, & \text{if } |\beta - \alpha| > \pi \end{cases}$$

Continuity: Let us express the above equation in terms of $\gamma = |\beta - \alpha|$ s.t. $0 \leq \gamma \leq 2\pi$:

$$\mathcal{G}(\gamma) = \begin{cases} \mathcal{G}_0(\gamma) = \gamma, & \text{if } \gamma \leq \pi \\ \mathcal{G}_1(\gamma) = 2\pi - \gamma, & \text{if } \gamma > \pi \end{cases}$$

We observe that $\lim_{\gamma \to \pi^-} \mathcal{G}_0(\gamma) = \pi$ and $\lim_{\gamma \to \pi^+} \mathcal{G}_1(\gamma) = \pi$ exist and are identical, from which continuity follows. The function is, however, not differentiable at $\gamma = \pi$, as $\mathcal{G}_0'(\gamma) = +1$ is not equal to $\mathcal{G}_1'(\gamma) = -1$.

From a theoretical point of view the missing differentiability poses a problem, as we need to be able to compute a gradient for each conceivable combination of $\alpha$ and $\beta$ to make back-propagation work. In practice, however, this isn't much of an issue: $\mathcal{G}_0$ and $\mathcal{G}_1$ are individually differentiable, and the cases where $\mathcal{I}$'s gradient does not exist (whenever $\alpha,\ \beta$ are so-called antipodal) can be resolved by setting the gradient to one of the one-sided limits (i.e. $-1$ or $+1$). As a side note, this is the same strategy that PyTorch and TensorFlow follow to compute the derivative of the ReLU function at 0.


Figure 4: The left contour plot depicts $\mathcal{I}(\cdot, \cdot)$ as a function of $\alpha,\ \beta \in [-\pi, +\pi]$. We observe that $\mathcal{I}(x, x) = 0$ holds for any angle $x$. The global maxima are attained if and only if $\alpha$ and $\beta$ are antipodal (i.e. $|\beta - \alpha| = \pi$). The right plot illustrates $\mathcal{I}(\alpha, \beta) = \mathcal{G}(|\beta - \alpha|) = \mathcal{G}(\gamma)$ as a function of a single variable $\gamma \in [0, 2\pi]$. The special case $\gamma = 0$ shows that despite $\mathcal{G}$'s continuity it is not differentiable everywhere.

How to implement it?

Excursion: A trigonometric derivation

One, admittedly, very exotic way to construct $\mathcal{I}(\cdot, \cdot)$ is to derive it using trigonometric functions. Despite its intriguing mathematical beauty and its simple implementation in PyTorch/TensorFlow, I would not recommend using it: trigonometric functions are orders of magnitude slower than primitive instructions such as additions or subtractions.

Assume that $u_0$ and $u_1$ are some arbitrary vectors in $\mathbb{R}^3$ and that $\gamma'$ is the inscribed angle between $u_0$ and $u_1$.

Exploiting the definitions of the cross product and the dot product we get:

$$\begin{aligned} \sin(\gamma') &= \frac{\| u_0 \times u_1 \|}{\| u_0 \| \| u_1 \|} \\ \cos(\gamma') &= \frac{u_0^T u_1}{\| u_0 \| \| u_1 \|} \end{aligned}$$

Side note: One could think that either of these two definitions already serves our purpose of computing $\gamma'$ (by applying $\arcsin$ resp. $\arccos$). Unfortunately, both equations are numerically ill-conditioned near the boundaries of their domains, which would eventually break our training process.

Furthermore, $\tan(\gamma') = \frac{\sin(\gamma')}{\cos(\gamma')} = \frac{\| u_0 \times u_1 \|}{u_0^T u_1}$ by the definition of the tangent.

Any angle $x$ can be represented by a two-dimensional vector $v_x = \begin{bmatrix} \cos(x) & \sin(x) \end{bmatrix}$ on the unit circle. Let us exploit this by defining $u_0 = \begin{bmatrix} \cos(\alpha) & \sin(\alpha) & 0 \end{bmatrix}$ and $u_1 = \begin{bmatrix} \cos(\beta) & \sin(\beta) & 0 \end{bmatrix}$ (a trailing 0 is added because the cross product is only defined in three dimensions).

We deduce:

$$\begin{aligned} \tan(\gamma') &= \frac{\| u_0 \times u_1 \|}{u_0^T u_1} \\ &= \frac{\cos(\alpha) \sin(\beta) - \sin(\alpha) \cos(\beta)}{\cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)} \\ &= \frac{\sin(\beta - \alpha)}{\cos(\beta - \alpha)} \end{aligned}$$

where the last result was obtained by applying the sine and cosine addition theorems in reverse (note that we keep the signed $z$-component of the cross product instead of its norm, which preserves the orientation).

Finally, we obtain $\gamma' = \operatorname{atan2}(\sin(\beta - \alpha), \cos(\beta - \alpha))$. Special attention has to be paid, as $\gamma'$ is defined over the interval $[-\pi, +\pi]$ and thus represents an oriented means of measuring the distance between $\alpha$ and $\beta$. Consequently, our desired angle can easily be computed as $\gamma = |\gamma'| \in [0, \pi]$. As a nice side effect, by setting $\alpha = 0$ the function can also be used to normalize some arbitrary angle $\beta \in (-\infty, -\pi) \cup (+\pi, +\infty)$.
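For the sake of completeness, the trigonometric variant translates to PyTorch as follows (the function name is mine; as argued above, I would not use it in practice):

import torch

def angular_distance_trig(alpha, beta):
    diff = beta - alpha
    # atan2 yields the oriented difference gamma' in [-pi, +pi]
    gamma = torch.atan2(torch.sin(diff), torch.cos(diff))
    return torch.abs(gamma)  # the unsigned angle gamma in [0, pi]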

The most efficient way to implement the aforementioned case distinction over $\mathcal{I}(\alpha, \beta)$ is as follows:

import numpy as np
import torch

def great_circle_distance(alpha, beta):
    """
    Computes the unsigned distance between two angles, divided by π
    Range of values: [0, 1]
    :param alpha: A PyTorch Tensor that specifies the first angle
    :param beta: A PyTorch Tensor that specifies the second angle
    :return: A new PyTorch Tensor of the same shape and dtype
    """
    abs_diff = torch.abs(beta - alpha)
    # Wrap differences greater than π around the circle, then re-scale to [0, 1]
    return torch.where(abs_diff <= np.pi,
                       abs_diff,
                       2. * np.pi - abs_diff) / np.pi

Notice that I further re-scaled the result by dividing it by $\pi$, which ensures that the value will always be between 0 and 1.

Radial loss

The radial loss is defined as $|r - G(x)| = |r - (\sigma_\lambda \circ \tilde{G})(x)| = |r - \sigma_\lambda(\tilde{G}(x))|$ for some small $\lambda$. Notice that the loss is lower-bounded by 0 and upper-bounded by 1 and is thus defined over the same range as the angular loss.

Side note: Remember that $r$ always satisfies $0 \leq r \leq 1$.

Joint loss

Due to the presence of two losses we need to find a way to balance them. Remember that we scaled both the angular and the radial loss to lie within $[0, 1]$: the angular loss attains its maximum value of 1 if the angles to be compared are antipodal, while the radial loss attains the same value whenever one radius is 0 and the other one is 1. Taking the average of both therefore ensures that the resulting loss is on the same scale and that both terms are weighted equally.

Therefore, we get: $\mathcal{L}([\alpha, r_0], [\beta, r_1]) = \frac{|r_0 - r_1| + \mathcal{I}(\alpha, \beta) / \pi}{2}$ where $[\alpha, r_0]$ is the output of the network and $[\beta, r_1]$ denotes the ground truth (the division by $\pi$ corresponds to the re-scaling of the angular loss mentioned above).
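On top of great_circle_distance (as defined above, which already includes the division by $\pi$) the joint loss reduces to a few lines (a sketch; the argument order is my convention):

import torch

def joint_loss(r_pred, phi_pred, r_true, phi_true):
    radial = torch.abs(r_pred - r_true)                  # in [0, 1]
    angular = great_circle_distance(phi_pred, phi_true)  # in [0, 1]
    return torch.mean((radial + angular) / 2.)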

Results

Let us now denote the radial loss by $\mathcal{L}_r$, the angular loss by $\mathcal{L}_\phi$ and the composite loss by $\mathcal{L}$.

I performed five runs, each associated with a different $\lambda \in \lbrace 0.1, 1.0, 1.95, 3.0, 5.0 \rbrace$, with 1.95 being the (theoretically) optimal value. Training was terminated after 2000 steps; the batch size was set to 90 to fully utilize my GPU, at the expense of having less stochasticity in the gradients. Moreover, I used the Nesterov-accelerated gradient optimizer [Nesterov, 1983] with an initial learning rate of 0.1 and a momentum value of 0.95. The learning rate was exponentially decayed by a factor of 0.95 every 30 steps.
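The corresponding optimizer setup could look like this (a sketch; `model` stands for the network described above and is an assumption on my part):

import torch

# `model` is the network sketched above
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.95, nesterov=True)
# exponential decay: multiply the learning rate by 0.95 every 30 steps
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.95)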

One batch was set aside, on which I evaluated the loss after each step. Notice however that this is not a proper data split, as each test image can eventually also be part of a training batch.

Loss components


Figure 5: Training loss components over time. We observe that the optimal value performs better than both lower and higher values, reaching its minimal loss after roughly 300 steps. The curves were minimally smoothed using cubic B-splines.

Histogram of predictions over time


Figure 6: The left plot shows the histograms for the radii while the right plot corresponds to the angles. We observe that the support of the output distribution for $\lambda \in \lbrace 0.1, 1.0 \rbrace$ cannot fully cover the range of the actual ground truth, whereas higher-than-optimal values put too much weight on the boundaries of the distribution.

Training dynamics


Figure 7: Predictions for four examples (randomly selected from the "test" batch). There is a noticeable tendency of the predicted points to fall back to $\phi = 0$ (irrespective of the radius), which remains to be analyzed in more detail.