After having written two articles about machine translation, I thought it might be time to address the flourishing field of reinforcement learning. To this end, I designed a Cython-optimized billiard environment that simulates physically accurate (up to torque) game episodes.

To account for the continuous action space, I decided to implement the Deep Deterministic Policy Gradient algorithm [Lillicrap et al., 2015]. Having only solved easier tasks before, I quickly realized that the initial exploration phase in particular isn't as straightforward as I had initially assumed. To rule out any other pitfalls, I further tested some components in isolation, which was made possible by artificially generating data.

Subsequently, it occurred to me that writing an article about this analysis might be compelling as well. This article revolves around a mechanism that learns the polar coordinates of one point relative to another (fixed) point.

The code is, as always, available on GitHub and requires Python 3.7 as well as PyTorch 1.0, although it can easily be run on older setups.

Title photo by Annie Spratt on Unsplash

Figure: An overlay of two examples

# Problem definition

The network is given an image of some pre-defined resolution $S \times S$ that contains two points. The former point (i.e. the origin) is always located in the center, while the latter point's polar coordinate is governed by the random vector $[R, \Phi]$ where $R\ \sim\ \mathcal{U}(0, 1)$ and $\Phi\ \sim\ \mathcal{U}(-\pi, +\pi)$. The ultimate goal is to predict these polar coordinates with respect to the origin.

The angles $\Phi$ are naturally defined on the interval $(-\pi, +\pi)$ while the radii range from 0 to 1 where 1 corresponds to the maximum radius $\frac{S}{2}$.

As the network is fed with artificial data, we abstain from any attempt at measuring generalization (e.g. by defining a proper data split).

In the remainder of the article $[r, \phi]$ will denote a realization of the bivariate random variable $[R, \Phi]$.
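To make the setup concrete, a single ground-truth sample can be generated as follows (a minimal sketch; the resolution `s = 64` and the helper name `sample_target` are my own choices and not taken from the repository):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_target(s=64):
    """Draw one realization [r, phi] and its pixel position in an s x s image."""
    r = rng.uniform(0.0, 1.0)           # R ~ U(0, 1); 1 corresponds to the max radius s/2
    phi = rng.uniform(-np.pi, np.pi)    # Phi ~ U(-pi, +pi)
    # convert to Cartesian pixel coordinates relative to the center (s/2, s/2)
    x = s / 2 + r * (s / 2) * np.cos(phi)
    y = s / 2 + r * (s / 2) * np.sin(phi)
    return r, phi, x, y
```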

Side note: It should be pointed out that by defining $R\ \sim\ \mathcal{U}(0, 1)$ and $\Phi\ \sim\ \mathcal{U}(-\pi, +\pi)$ and taking $[X, Y] = R\ [\cos(\Phi), \sin(\Phi)]$, we do not get a uniform distribution on the unit disk. Applying the transformation $[X, Y] = \sqrt{R} [\cos(\Phi), \sin(\Phi)]$ would, however, yield exactly said distribution. Yet we abstain from using it for the sake of simplicity, as the distribution of $R$ makes our analyses easier than those with regard to $\sqrt{R}$.

# Network architecture

The network's architecture consists of two kinds of blocks:

1. CNN-IN: convolutional layers, each followed by instance normalization
2. FC-BN: fully-connected layers, each followed by batch normalization

Side note: I kept the configuration of the network as flexible as possible. It is thus possible to specify the number of blocks (per block class) and other parameters (e.g. strides or kernel sizes). Given the complexity of the problem, however, it should be enough to keep the configuration as it is.

## Representation of outputs

To get a meaningful output the network needs to be able to “reach” any value that comes from the ground truth. As the angles are defined over $(-\pi, +\pi)$ and the radii over $(0, 1)$, we can leverage some bounded activation function and a linear transformation thereof.

A possible solution using the sigmoid function could read as follows (for two inputs $x_0,\ x_1$): \begin{aligned} r &= \sigma(x_0) \\ \phi &= \pi (2 \cdot \sigma(x_1) - 1) \\ &= \pi \cdot \tanh\left(\frac{x_1}{2}\right) \end{aligned}

We immediately see that this meets our requirement, as $0\ \leq\ r\ \leq 1$ and $-\pi\ \leq\ \phi\ \leq\ +\pi$ are always valid. Moreover, this scheme guarantees uniqueness with respect to the angles as one angle cannot be expressed by multiple values. The boundedness further ensures that out-of-range values can never occur.

Nonetheless, it is not clear whether this scheme is optimal. One potential problem is that both the sigmoid function and the hyperbolic tangent saturate very quickly (e.g. $\sigma(4) = \frac{1}{1 + \exp(-4)} \approx 0.982$ and, conversely, $\sigma(-4) = \frac{1}{1 + \exp(4)} \approx 0.018$), causing many outputs to be close to either 0 or 1 after initialization.

A better solution could be to leverage a function that saturates more slowly than the sigmoid function. The softsign function $s(x) = \frac{x}{1 + | x |}$ can be interpreted as a smooth approximation of the sign function. The scaled version $\hat{s}(x) = \frac{s(x)+1}{2}$ shares the same limits as the sigmoid function (i.e. $\lim_{x \to -\infty} \hat{s}(x) = \lim_{x \to -\infty} \sigma(x) = 0$ and $\lim_{x \to +\infty} \hat{s}(x) = \lim_{x \to +\infty} \sigma(x) = 1$) but does not exhibit exponential saturation.
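The difference in saturation is easy to see numerically; the following quick sanity check (not part of the original code) compares the two functions at a moderate pre-activation value:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def scaled_softsign(x):
    s = x / (1.0 + abs(x))   # softsign, in (-1, 1)
    return (s + 1.0) / 2.0   # rescaled to (0, 1), same limits as the sigmoid

# at x = 4 the sigmoid is already close to its limit of 1,
# while the scaled softsign still leaves noticeable headroom
print(sigmoid(4.0))          # ~0.982
print(scaled_softsign(4.0))  # 0.9
```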

Why is this important? By design, our radii follow a unit uniform distribution $\mathcal{U}(0, 1)$ (the same argument holds, in adapted form, for the angles). An optimal network $G$, given some input $I \sim P_I$ (in this case our images), necessarily needs to parameterize the output distribution of $G(I)$ in a way that maximizes its similarity to the distribution of the radii (i.e. $\mathcal{U}(0, 1)$).

We can always conceive of our neural network $G$ as the composition of some function $\tilde{G}$ and an activation function $a$ (s.t. $G = a\ \circ\ \tilde{G}$); we then define $X$ to be the random variable $\tilde{G}(I)$.

Since the activation functions in question are all odd (after subtracting $\frac{1}{2}$) and given our assumption that $G(I)$ should be uniform, we can further assume that $\tilde{G}(I)$'s distribution should be akin to the symmetric uniform distribution $\mathcal{U}(-\alpha, +\alpha)$ where $\alpha\ >\ 0$ is yet to be determined.

The notion of similarity can be made mathematically precise by using a distance measure $Q$ on probability distributions. One option is the Bhattacharyya index. Despite not being a proper metric in a strict sense (it does not satisfy the triangle inequality), it serves our purpose of expressing similarity as a single number. As we are solely interested in evaluating whether some pair of distributions is more similar than another one (i.e. $Q(P_0,\ \mathcal{U})\ >\ Q(P_1,\ \mathcal{U})$ with $\mathcal{U}$ being the unit uniform distribution), we could also use the Kullback-Leibler divergence. Both are expressed as integrals over the domain of the random variables involved; I chose the former as it makes the derivations somewhat easier in this case.

Let us also define a parametric family over both functions: \begin{aligned} \sigma_\lambda(x) &= \sigma(\lambda x) \\ \hat{s}_\lambda(x) &= \hat{s}(\lambda x) \end{aligned} for some $\lambda > 0$. This enables us to further account for varying levels of “steepness”.

Figure 2: A comparison of the sigmoid and the scaled-softsign family for $\lambda\ \in\ \lbrace 0.1,\ 0.25,\ 0.5,\ 1.0,\ 2.0 \rbrace$. We observe that the respective sigmoid functions saturate considerably faster than their scaled-softsign counterparts.

In the following part we will thus deduce the probability distributions of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$, assuming that $X$ follows some symmetric uniform distribution $\mathcal{U}(-\alpha, +\alpha)$ for some yet to be determined $\alpha$.

The distribution of $Y_\lambda$ can be inferred using the definition of the cumulative function: \begin{aligned} F_{Y_\lambda}(y) &= \Pr[Y_\lambda \leq y] \\ &= \Pr[\sigma_\lambda(X) \leq y] \\ &= \Pr[X \leq \sigma_\lambda^{-1}(y)] \\ &= F_X(\sigma_\lambda^{-1}(y)) \end{aligned}

Side note: The probability integral transform law tells us that any continuous random variable $X$ can be transformed into a random variable $Y$ that follows $\mathcal{U}(0, 1)$ by applying its cumulative function $F_X$ on itself (i.e. $Y = F_X(X)$). In our case this would be the same as just taking a linear activation which would however no longer meet our requirement of being bounded.

As $\sigma_\lambda(\cdot)$ is strictly monotonic it is also invertible. The inversion is given by $\sigma_\lambda^{-1}(y) = \frac{1}{\lambda} \log{\frac{y}{1 - y}}$ which is a scaled version of the so-called logit or log odds function.
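The inversion is easy to verify numerically; a quick round-trip check (not part of the original code) could look like this:

```python
import math

def sigma_lam(x, lam):
    """sigma_lambda(x) = sigma(lambda * x)"""
    return 1.0 / (1.0 + math.exp(-lam * x))

def sigma_lam_inv(y, lam):
    """Scaled logit (log-odds) function, the inverse of sigma_lambda."""
    return math.log(y / (1.0 - y)) / lam

# round trip: sigma_lam_inv(sigma_lam(x)) recovers x
```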

\begin{aligned} \dfrac{d}{dy} F_X(\sigma_\lambda^{-1}(y)) &= \dfrac{d}{dy} (F_X\ \circ\ \sigma_\lambda^{-1})(y) \\ &= F_X'(\sigma_\lambda^{-1}(y)) \dfrac{d}{dy} \sigma_\lambda^{-1}(y) \\ &= f_X(\sigma_\lambda^{-1}(y)) \frac{1}{\lambda} \frac{1}{y - y^2} \end{aligned}

The density function of $X$ is moreover given by $f_X(x) = \frac{1}{2\alpha}$ if $x\ \in\ [-\alpha, +\alpha]$ and 0 otherwise.

Putting all together we get: $f_{Y_\lambda}(y) = \begin{cases} \frac{1}{2 \lambda \alpha}\frac{1}{y-y^2}, & \text{if } \sigma(-\lambda \alpha)\ \leq\ y\ \leq \sigma(\lambda \alpha) \\ 0, & \text{otherwise } \end{cases}$

Let us now do the same for $Z_\lambda = \hat{s}_\lambda(X)$. The inverse function and its derivative are given by: \begin{aligned} \hat{s}^{-1}_\lambda(z) &= \begin{cases} \frac{2z - 1}{2 \lambda z}, & \text{if } z\ \leq \frac{1}{2} \\ \frac{2 z - 1}{2 \lambda (1 - z)}, & \text{otherwise } \end{cases} \\ \dfrac{d}{dz} \hat{s}^{-1}_\lambda(z) &= \begin{cases} \frac{1}{2 \lambda z^2}, & \text{if } z\ \leq \frac{1}{2} \\ \frac{1}{2 \lambda (1-z)^2}, & \text{otherwise } \end{cases} \end{aligned}

Analogously, we get: $f_{Z_\lambda}(z) = \begin{cases} \frac{1}{4 \lambda \alpha z^2}, & \text{if } \frac{1}{2 (\lambda \alpha + 1)} \leq z \leq \frac{1}{2} \\ \frac{1}{4 \lambda \alpha (1-z)^2}, & \text{if } \frac{1}{2} < z \leq \frac{2 \lambda \alpha + 1}{2 (\lambda \alpha + 1)} \\ 0, & \text{otherwise } \end{cases}$

Figure 3: Density plots of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$ for $\lambda\ \in\ \lbrace 0.1,\ 0.25,\ 0.5,\ 1.0,\ 2.0 \rbrace$ and $X\ \sim\ \mathcal{U}(-1, +1)$ (i.e. $\alpha = 1$, chosen arbitrarily).
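As a sanity check on both derived densities, we can verify numerically that each integrates to 1 over its support (a quick check with `numpy`; $\lambda = \alpha = 1$ is chosen arbitrarily):

```python
import numpy as np

lam, alpha = 1.0, 1.0
h = lam * alpha

sigma = lambda x: 1.0 / (1.0 + np.exp(-x))

# density of Y_lambda on [sigma(-h), sigma(h)]
y = np.linspace(sigma(-h), sigma(h), 100001)
f_y = 1.0 / (2.0 * h * (y - y**2))

# density of Z_lambda on [1/(2(h+1)), (2h+1)/(2(h+1))]
z = np.linspace(1.0 / (2.0 * (h + 1.0)), (2.0 * h + 1.0) / (2.0 * (h + 1.0)), 100001)
f_z = np.where(z <= 0.5, 1.0 / (4.0 * h * z**2), 1.0 / (4.0 * h * (1.0 - z)**2))

# trapezoidal rule; both integrals should be ~1
trap = lambda f, x: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
print(trap(f_y, y))  # ~1.0
print(trap(f_z, z))  # ~1.0
```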

How can we pick a $\lambda$ such that the distributions of $Y_\lambda = \sigma_\lambda(X)$ and $Z_\lambda = \hat{s}_\lambda(X)$ become as uniform as possible for some fixed $\alpha$?

To this end, let us derive the Bhattacharyya index for both cases:

\begin{aligned} B(Y_\lambda, U) &= \int_{-\infty}^{\infty} \sqrt{f_{Y_\lambda}(y) f_U(y)}\ dy \\ &= \int_{-\infty}^{\infty} \sqrt{\frac{1}{2\lambda \alpha} \frac{1}{y-y^2} \cdot 1} dy \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \int_{\sigma(-\lambda \alpha)}^{\sigma(\lambda \alpha)} \frac{1}{\sqrt{y-y^2}} dy \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \int_{\sigma(-\lambda \alpha)}^{\sigma(\lambda \alpha)} \frac{1}{\sqrt{\frac{1}{4} - (y-\frac{1}{2})^2}} dy \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \frac{1}{2} \int_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \frac{1}{\sqrt{\frac{1}{4} - \left(\frac{u}{2}\right)^2 }} du \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \frac{1}{2} \int_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \frac{1}{\frac{1}{2} \sqrt{1 - u^2 }} du \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \int_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \frac{1}{\sqrt{1 - u^2 }} du \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \left[\arcsin(u) \right]_{\tanh(-\frac{\lambda \alpha}{2})}^{\tanh(\frac{\lambda \alpha}{2})} \\ &= \frac{1}{\sqrt{2\lambda \alpha}} \left[ \arcsin(\tanh\left(\frac{\lambda \alpha}{2}\right)) - \arcsin(\tanh\left(-\frac{\lambda \alpha}{2}\right)) \right] \\ &= \frac{1}{\sqrt{2\lambda \alpha}} 2\cdot \arcsin\left(\tanh\ \frac{\lambda \alpha}{2}\right) \\ &= \sqrt{\frac{2}{\lambda \alpha}} \arcsin\left(\tanh \frac{\lambda \alpha}{2}\right) \end{aligned}

\begin{aligned} B(Z_\lambda, U) &= \int_{-\infty}^{\infty} \sqrt{f_{Z_\lambda}(z) f_U(z)}\ dz \\ &= \int_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} \sqrt{\frac{1}{4 \lambda \alpha z^2} \cdot 1} dz + \int_{\frac{1}{2}}^{\frac{2 \lambda \alpha + 1}{2 (\lambda \alpha + 1)}} \sqrt{\frac{1}{4 \lambda \alpha (1-z)^2} \cdot 1} dz \\ &= \frac{1}{2 \sqrt{\lambda \alpha}} \left[ \log{z} \right]_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} - \frac{1}{2 \sqrt{\lambda \alpha}} \left[ \log{z} \right]_{\frac{1}{2}}^{\frac{1}{2 (\lambda \alpha + 1)}} \\ &= \frac{1}{\sqrt{\lambda \alpha}} \left[ \log{z} \right]_{\frac{1}{2 (\lambda \alpha + 1)}}^{\frac{1}{2}} \\ &= \frac{\log(\lambda \alpha + 1)}{\sqrt{\lambda \alpha}} \end{aligned}

Interestingly, both functions $B(Y_\lambda, U)$ and $B(Z_\lambda, U)$ solely depend on the product $\lambda \alpha$ without relying on the variables $\lambda$ and $\alpha$ individually. This enables us to optimize both as functions of a single variable $h = \lambda \alpha$.

Numerical optimization yields: \begin{aligned} \max(B(Y_\lambda, U))\ &\approx\ 0.93 \text{ for } h\ \approx\ 3.38\\ \max(B(Z_\lambda, U))\ &\approx\ 0.80 \text{ for } h\ \approx\ 3.92 \end{aligned}
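These optima can be reproduced with a simple grid search over $h = \lambda \alpha$ (a sketch using `numpy`; the grid bounds are my own choice):

```python
import numpy as np

h = np.linspace(0.01, 20.0, 200001)

# B(Y_lambda, U) = sqrt(2 / h) * arcsin(tanh(h / 2))
b_sigmoid = np.sqrt(2.0 / h) * np.arcsin(np.tanh(h / 2.0))
# B(Z_lambda, U) = log(h + 1) / sqrt(h)
b_softsign = np.log(h + 1.0) / np.sqrt(h)

i, j = np.argmax(b_sigmoid), np.argmax(b_softsign)
print(h[i], b_sigmoid[i])   # ~3.38, ~0.93
print(h[j], b_softsign[j])  # ~3.92, ~0.80
```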

Side note: The Bhattacharyya coefficient $B(P, Q) = \int_\Omega \sqrt{f_P(x) f_Q(x)}\ dx$ satisfies $0 \leq B(P, Q) \leq 1$ for all density functions $f_P(\cdot)$, $f_Q(\cdot)$ on some domain $\Omega$.

We observe that the sigmoid family is better suited as it can achieve a higher similarity to the uniform distribution $U$ defined on the unit interval. Notice that the network can always attain the optimal $h$ value by adjusting $\alpha$ to be $\frac{h}{\lambda}$ for some given $\lambda$. Intuitively, this makes sense as a large $\lambda$ makes the corresponding function steeper, causing the network to settle for a smaller $\alpha$ (i.e. the variance of the input random variable $X\ \sim\ \mathcal{U}(-\alpha, +\alpha)$ thus becomes smaller) to make $Y_\lambda$ resp. $Z_\lambda$ more uniform.

Probability theory tells us that some random variable $X\ \sim\ \mathcal{U}(-\alpha, +\alpha)$ can always be standardized by defining a new variable $\tilde{X} = \frac{X - \mathbb{E}[X]}{\sqrt{\textrm{Var}[X]}} = \frac{X}{\frac{1}{\sqrt{3}}\alpha}\ \sim\ \mathcal{U}(-\sqrt{3}, +\sqrt{3})$. Not knowing $\alpha$ is not much of a problem as $\mathbb{E}[X]$ and $\textrm{Var}[X]$ can easily be estimated using batch statistics. The process of computing these estimates and applying them is called Batch Normalization [Ioffe & Szegedy, 2015].
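The effect is easy to demonstrate: even without knowing $\alpha$, standardizing with batch statistics maps samples of $\mathcal{U}(-\alpha, +\alpha)$ onto (approximately) $\mathcal{U}(-\sqrt{3}, +\sqrt{3})$. A small demonstration with `numpy` ($\alpha = 5$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 5.0                                   # unknown to the "network"
x = rng.uniform(-alpha, alpha, size=100_000)

# batch-normalization-style standardization from batch statistics
x_std = (x - x.mean()) / x.std()

print(x_std.mean(), x_std.std())  # ~0.0, 1.0
print(x_std.max())                # ~sqrt(3) ≈ 1.732
```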

This enables us to derive a value of $\lambda$ that is independent of $\alpha$: according to the aforementioned numerical optimization for the sigmoid function, $\lambda = \frac{h_\textrm{max}}{\sqrt{3}} \approx \frac{3.38}{\sqrt{3}} \approx 1.95$.

### Representation of radii/angles

The radii and angles are accordingly represented as:

\begin{aligned} r &= \sigma_{1.95}(x_0) = \sigma(1.95\cdot x_0) \\ \phi &= \pi (2 \cdot \sigma_{1.95}(x_1) - 1) \\ &= \pi (2 \cdot \sigma(1.95 \cdot x_1) - 1) \\ &= \pi \cdot \tanh\left(\frac{1.95 \cdot x_1}{2}\right) \\ &= \pi \cdot \tanh(0.975 \cdot x_1) \\ &\approx \pi \cdot \tanh(x_1) \end{aligned} where $x_0,\ x_1$ are the outputs of the last batch normalization layer.
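Put together, the two outputs $x_0,\ x_1$ of the last batch normalization layer are mapped to $[r, \phi]$ as follows (a minimal scalar sketch of the scheme above; the actual code presumably operates on tensors):

```python
import math

LAMBDA = 1.95  # h_max / sqrt(3), derived above

def polar_head(x0, x1):
    """Map the two batch-norm outputs to the predicted polar coordinates."""
    r = 1.0 / (1.0 + math.exp(-LAMBDA * x0))  # sigma_1.95(x0), in (0, 1)
    phi = math.pi * math.tanh(0.975 * x1)     # pi * tanh(1.95/2 * x1), in (-pi, +pi)
    return r, phi
```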

# Loss functions

## Angular loss

Assume that we would like to quantify the difference between two given angles $\alpha,\ \beta \in [-\pi, +\pi]$. A naïve way to do this is to leverage the absolute difference between $\alpha$ and $\beta$. This works as long as the absolute difference is not greater than $\pi$ (i.e. if and only if $| \beta - \alpha | \leq \pi$) and fails otherwise. The latter case can easily be demonstrated by choosing $\alpha = -\frac{3}{4} \pi$ and $\beta = \frac{3}{4} \pi$ yielding $| \beta - \alpha | = \frac{3}{2} \pi$ instead of the actual inscribed angle of $\frac{\pi}{2}$.

Of course, this can easily be resolved by formulating a case distinction along these lines: $\mathcal{I}(\alpha, \beta) = \begin{cases} | \beta - \alpha |, & \text{if } | \beta - \alpha | \leq \pi \\ 2\pi - | \beta - \alpha |, & \text{if } | \beta - \alpha | > \pi \end{cases}$

Continuity: Let us express the above equation in terms of $\gamma = | \beta - \alpha |$ s.t. $0 \leq \gamma \leq 2 \pi$: $\mathcal{G}(\gamma) = \begin{cases} \mathcal{G}_0(\gamma) = \gamma, & \text{if } \gamma \leq \pi \\ \mathcal{G}_1(\gamma) = 2\pi - \gamma, & \text{if } \gamma > \pi \end{cases}$ We observe that $\lim_{\gamma \to \pi^-} \mathcal{G}_0(\gamma) = \pi$ and $\lim_{\gamma \to \pi^+} \mathcal{G}_1(\gamma) = \pi$ exist and are identical, from which continuity follows. The function is however not differentiable as $\mathcal{G}_0'(\gamma) = +1$ is not equal to $\mathcal{G}_1'(\gamma) = -1$.

From a theoretical point of view the missing differentiability poses a problem, as we need to be able to compute a gradient for each conceivable combination of $\alpha$ and $\beta$ to make back-propagation work. In practice, however, this isn't much of an issue: $\mathcal{G}_0$ and $\mathcal{G}_1$ are individually differentiable, and the cases where $\mathcal{I}$'s gradient does not exist (whenever $\alpha$ and $\beta$ are so-called antipodal) can be resolved by setting the gradient to one of the one-sided limits (i.e. $-1$ or $+1$). As a side note, this is the same strategy that PyTorch and TensorFlow follow to compute the derivative of the ReLU function.

Figure 4: The left contour plot depicts $\mathcal{I}(\cdot, \cdot)$, a function of $\alpha,\ \beta \in [-\pi, +\pi]$. We observe that $\mathcal{I}(x, x) = 0$ holds for any angle $x$. The global maxima are attained if and only if $\alpha$ and $\beta$ are antipodal (i.e. $| \beta - \alpha | = \pi$). The right plot illustrates $\mathcal{I}(\alpha, \beta) = \mathcal{G}(|\beta - \alpha |) = \mathcal{G}(\gamma)$ as a function of a single variable $\gamma \in [0, 2\pi]$. The special case $\gamma = 0$ shows that despite $\mathcal{G}$'s continuity it is not differentiable everywhere.

## How to implement it?

### Excursion: Trigonometrically

One admittedly very exotic way to construct $\mathcal{I}(\cdot, \cdot)$ is to derive it using trigonometric functions. Despite its intriguing mathematical beauty and its simple implementation in PyTorch/TensorFlow, I would not recommend using it: trigonometric functions are orders of magnitude slower than primitive instructions such as additions or subtractions.

Assume that $u_0$ and $u_1$ are some arbitrary vectors in $\mathbb{R}^3$ and that $\gamma'$ is the inscribed angle between $u_0$ and $u_1$.

Exploiting the definition of the cross product and the dot product we get: \begin{aligned} \sin(\gamma') &= \frac{\| u_0 \times u_1 \|}{\|u_0\| \|u_1\|} \\ \cos(\gamma') &= \frac{u_0^T u_1}{\|u_0\| \|u_1\|} \end{aligned}

Side note: One could think that either of these two definitions already serves our purpose of computing $\gamma'$. Unfortunately, both equations are numerically ill-conditioned which would eventually break our training process.

Furthermore, $\tan(\gamma') = \frac{\sin(\gamma')}{\cos(\gamma')} = \frac{\| u_0 \times u_1 \|}{u_0^T u_1}$ by the definition of the tangent.

Any angle $x$ can be represented using a two-dimensional vector $v_x = \begin{bmatrix} \cos(x) & \sin(x) \end{bmatrix}$ on the unit circle. Let us exploit this by defining $u_0 = \begin{bmatrix} \cos(\alpha) & \sin(\alpha) & 0 \end{bmatrix}$ and $u_1 = \begin{bmatrix} \cos(\beta) & \sin(\beta) & 0 \end{bmatrix}$ (a trailing 0 was added because the cross product is only defined for three dimensions).

We deduce: \begin{aligned} \tan(\gamma') &= \frac{\| u_0 \times u_1 \|}{u_0^T u_1} \\ &= \frac{\cos(\alpha) \sin(\beta) - \sin(\alpha) \cos(\beta)}{\cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)} \\ &= \frac{\sin(\beta - \alpha)}{\cos(\beta - \alpha)} \end{aligned} where the last result was obtained by applying the sine and cosine addition theorems in reverse (strictly speaking, the norm yields $|\sin(\beta - \alpha)|$; taking the signed $z$-component of the cross product instead gives $\sin(\beta - \alpha)$).

Finally, we obtain $\gamma' = \arctan2(\sin(\beta - \alpha), \cos(\beta - \alpha))$. Special attention has to be paid here, because $\gamma'$ is defined over the interval $[-\pi, +\pi]$ and thus represents an oriented (signed) way of measuring the distance between $\alpha$ and $\beta$. Consequently, our desired angle can easily be computed as $\gamma = |\gamma'| \in [0, \pi]$. As a nice side effect, setting $\alpha = 0$ allows the function to be used to normalize an arbitrary angle $\beta \in (-\infty, -\pi) \cup (+\pi, +\infty)$ back into $[-\pi, +\pi]$.
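In NumPy terms, the trigonometric variant could be sketched as follows (as noted above, I would not recommend it for performance reasons; the function name is my own):

```python
import numpy as np

def angular_distance_trig(alpha, beta):
    """gamma = |atan2(sin(beta - alpha), cos(beta - alpha))|, in [0, pi]."""
    d = beta - alpha
    return np.abs(np.arctan2(np.sin(d), np.cos(d)))
```

For $\alpha = -\frac{3}{4} \pi$ and $\beta = \frac{3}{4} \pi$ this yields the inscribed angle $\frac{\pi}{2}$, matching the case distinction above.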

The most efficient way to implement the aforementioned case distinction over $\mathcal{I}(\alpha, \beta)$ is as follows:

```python
import numpy as np
import torch

def great_circle_distance(alpha, beta):
    """
    Computes the unsigned distance between two angles, divided by π
    Range of values: [0, 1]
    :param alpha: A PyTorch tensor that specifies the first angle
    :param beta: A PyTorch tensor that specifies the second angle
    :return: A new PyTorch tensor of the same shape and dtype
    """
    abs_diff = torch.abs(beta - alpha)
    return torch.where(abs_diff <= np.pi,
                       abs_diff,
                       2. * np.pi - abs_diff) / np.pi
```


Notice that I further re-scaled the result by dividing it by $\pi$, which ensures that the value will always be between 0 and 1.

## Radial loss

The radial loss is defined as $| r - G(x) | = | r - (\sigma_\lambda \circ \tilde{G})(x) | = | r - \sigma_\lambda(\tilde{G}(x)) |$ for some $\lambda > 0$. Notice that the loss is bounded below by 0 and above by 1 and is thus defined over the same range as the angular loss.

Side note: Remember that $r$ always satisfies $0\ \leq\ r\ \leq\ 1$.

## Joint loss

Due to the presence of two losses we need to find a way to balance both. Remember that we scaled both the angular and the radial loss to be within $[0, 1]$. This way, the angular loss attains a maximum value of 1 if the angles to be compared are antipodal while the radial loss attains the same value whenever one radius is 0 and the other one is equal to 1. Taking the average of both therefore ensures that the resulting loss is on the same scale and that both terms are weighted equally.

Therefore, we get: $\mathcal{L}([\alpha, r_0], [\beta, r_1]) = \frac{1}{2} \left( |r_0 - r_1| + \frac{\mathcal{I}(\alpha, \beta)}{\pi} \right)$ where $[\alpha, r_0]$ is the output of the network and $[\beta, r_1]$ denotes the ground truth.
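A direct NumPy sketch of the joint loss (using the $\pi$-scaled angular term so that both components lie in $[0, 1]$; the function name is my own):

```python
import numpy as np

def joint_loss(alpha, r0, beta, r1):
    """Average of the radial loss and the pi-scaled angular loss, both in [0, 1]."""
    gamma = np.abs(beta - alpha)
    angular = np.where(gamma <= np.pi, gamma, 2.0 * np.pi - gamma) / np.pi
    radial = np.abs(r0 - r1)
    return (radial + angular) / 2.0
```

The loss attains its maximum of 1 exactly when the angles are antipodal and the radii differ by 1.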

# Results

Let us now denote the radial loss by $\mathcal{L}_r$, the angular loss by $\mathcal{L}_\phi$ and the composite loss by $\mathcal{L}$.

I performed five runs, each associated with a different $\lambda\ \in\ \lbrace 0.1, 1.0, 1.95, 3.0, 5.0 \rbrace$, with 1.95 being the (theoretically) optimal value. The training was terminated after 2000 steps; the batch size was set to 90 to fully utilize my GPU, at the expense of having less stochasticity in the gradients. Moreover, I used the Nesterov-accelerated gradient optimizer [Nesterov, 1983] with an initial learning rate of 0.1 and a momentum value of 0.95. The learning rate was exponentially decayed by a factor of 0.95 every 30 steps.

One batch was set aside on which I evaluated the loss after each step. Notice, however, that this is not a proper data split, as each test image may eventually also appear in a training batch.

## Loss components

Figure 5: Training loss components over time. We observe that our optimal value performs better than both lower and higher values, reaching its minimal loss after roughly 300 steps. The curves were lightly smoothed using cubic B-splines.

## Histogram of predictions over time

Figure 6: The left plot shows the histograms for the radii while the right plot corresponds to the angles. We observe that the support of the output distribution for $\lambda\ \in\ \lbrace 0.1, 1.0 \rbrace$ cannot fully cover the range of the actual ground truth, whereas higher-than-optimal values put too much weight on the boundaries of the distribution.

### Training dynamics

Figure 7: Predictions for four examples (randomly selected from the “test” batch). There is a noticeable tendency of the predicted points to fall back to $\phi = 0$ (irrespective of the radius), which has to be analyzed in more detail.