Exact predicates in computational geometry
Our goal in this course will be to implement a convex hull algorithm, break it with nasty input, and gradually reinforce it to cope with any situation.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from functools import cmp_to_key
from itertools import permutations
# Utility functions for plotting and randomness, unimportant details
# Random colors
def rand_color():
    import random
    from matplotlib.colors import hsv_to_rgb
    h = random.random()
    s = 0.2 * random.random() + 0.4
    v = 0.3 * random.random() + 0.7
    return hsv_to_rgb([h, s, v])
def pastel(color, amount=0.5):
    import matplotlib.colors as mc
    try:
        c = mc.cnames[color]
    except KeyError:
        c = color
    c = mc.rgb_to_hsv(mc.to_rgb(c))
    return mc.hsv_to_rgb([c[0], amount*c[1], amount + (1-amount)*c[2]])
# Plotting point clouds
def plot_points(array, **args):
    if array.shape[1] == 2:
        plt.scatter(array[:,0], array[:,1], **args)
    else:
        # plt.gca(projection='3d') was removed in recent matplotlib versions
        ax = plt.gcf().add_subplot(projection='3d')
        ax.scatter(array[:,0], array[:,1], array[:,2], **args)
# Random generation of directions
def rand_directions(size, dim = 2):
    # normal distribution along every axis
    pts = np.random.normal(0, 1, (size, dim))
    # normalizing yields uniform points on the sphere
    pts = pts / np.linalg.norm(pts, axis=1)[:, np.newaxis]
    return pts
# Random generation in a disk
def rand_in_disk(size, center = np.array([0,0]), radius = 1):
    dim = len(center)
    pts = rand_directions(size, dim)
    # scaling by r^(1/dim) yields uniform points in the ball
    pts = pts * (np.random.uniform(0, 1, size)**(1/dim))[:, np.newaxis]
    # recenter and rescale
    pts = center + radius * pts
    return pts
Let us start by generating 100 random points in the unit disk. We will test our convex hull algorithm on these points.
pts = rand_in_disk(100)
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(pts)
plt.show()
Our algorithm starts by identifying a point on the convex hull. The point with minimum x coordinate is necessarily on the convex hull.
def ref_point(pts):
    return np.argmin(pts[:,0])
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(pts)
plot_points(pts[ref_point(pts)].reshape((1,2)),s = 200, facecolors = 'none', edgecolor='black', linewidth = 2)
Now to build the convex hull, we will define a predicate specifying the side of a point $\mathbf{p}_2$ with respect to a line passing through two other points $\mathbf{p}_0$ and $\mathbf{p}_1$. This predicate therefore takes three points as input. In 2D, we want to check the sign of the determinant of the matrix with $(\mathbf{p}_1 - \mathbf{p}_0)$ as its first column and $(\mathbf{p}_2 - \mathbf{p}_0)$ as its second, therefore
$$ \begin{align} \begin{vmatrix} x_1 - x_0 & x_2 - x_0 \\ y_1 - y_0 & y_2 - y_0 \end{vmatrix} &= (x_1 - x_0)(y_2 - y_0) - (x_2 - x_0)(y_1 - y_0) \\ &= x_1y_2 - x_1y_0 - x_0y_2 + x_0y_0 - x_2y_1 + x_2y_0 + x_0y_1 - x_0y_0 \\ &= x_1y_2 - x_2y_1 - (x_0y_2 - x_2y_0) + x_0y_1 - x_1y_0 \\ &= \begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix} \end{align} $$
def orient(p0, p1, p2):
    mat = np.ones((3,3))
    mat[0,:2] = p0
    mat[1,:2] = p1
    mat[2,:2] = p2
    return np.linalg.det(mat)
test_pts = np.array([
[0.2,0.7],
[0.1,0.5],
[0.8,0.3]
])
orient(*test_pts)
np.float64(0.15999999999999998)
mat = np.vstack([test_pts[1] - test_pts[0], test_pts[2] - test_pts[0]])
np.linalg.det(mat)
np.float64(0.15999999999999998)
Considering that the reference point on the convex hull is $\mathbf{p}_0$, the next step consists in sorting the points $\mathbf{p}_i$ for $i\geq 1$ by the slope of the line $(\mathbf{p}_0,\mathbf{p}_i)$. In terms of predicate, this consists in sorting $\mathbf{p}_j$ before $\mathbf{p}_i$ if the orientation predicate on $\mathbf{p}_0$, $\mathbf{p}_i$ and $\mathbf{p}_j$ is positive.
def sort_pts(pts):
    size = pts.shape[0]
    ref = ref_point(pts)
    pts[[0,ref]] = pts[[ref,0]]
    score = cmp_to_key(lambda p1, p2: orient(pts[0], p1, p2))
    scores = np.array([score(p) for p in pts[1:]])
    order = np.zeros(size, dtype = int)
    order[1:] = np.argsort(scores) + 1
    order[0] = ref
    for i in np.arange(1, size):
        if order[i] == ref:
            order[i] = 0
    pts[[0,ref]] = pts[[ref,0]]
    return order
order = sort_pts(pts)
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
ax.add_collection(matplotlib.collections.LineCollection([[pts[order[0]], pts[i]] for i in order[1:]], array = np.arange(pts.shape[0])))
plot_points(pts[order], c = np.arange(pts.shape[0]), zorder = 2)
plt.show()
Now to build the convex hull, the algorithm uses a sweep mechanism. A stack contains the convex hull of the points above the sweep line. When introducing a new point, this point should be on the convex hull. Points on the stack are therefore popped until the remaining polygon is convex. If $\mathbf{p}_{i-2}$ and $\mathbf{p}_{i-1}$ are the last two points on the stack (it contains at least three points since any triangle is convex), when introducing a new point $\mathbf{p}_i$, the point $\mathbf{p}_{i-1}$ should be popped if $\mathbf{p}_i$ is above the line $(\mathbf{p}_{i-2},\mathbf{p}_{i-1})$. Once all points have been introduced, the stack contains the convex hull of the whole set of points.
def hull(pts):
    order = sort_pts(pts)
    hull_pts = [pts[order[0]], pts[order[1]]]
    for i in order[2:]:
        p = pts[i]
        p0 = hull_pts[-2]
        p1 = hull_pts[-1]
        while orient(p0, p1, p) > 0:
            hull_pts.pop()
            p0 = hull_pts[-2]
            p1 = hull_pts[-1]
        hull_pts.append(p)
    return np.array(hull_pts)
hull_pts = hull(pts)
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(pts)
plot_points(hull_pts, s = 200, facecolors = 'none', edgecolor='black', linewidth = 2)
plt.fill(hull_pts[:,0], hull_pts[:,1], fill = False, linewidth = 2)
plt.show()
Now let us break this algorithm by using nasty points.
weird_pts = np.array([
[0,1],
[0,0],
[0,0.8],
[0,0.5],
[1,0.5]
])
fig, ax = plt.subplots(figsize=(5,5))
ax.axis('off')
plot_points(weird_pts)
We will now compute the convex hull for every permutation of the first 4 points, those on the left. To visualize the result, we add a small random perturbation to the segments. Points selected for the hull are circled in black.
hulls = []
for p in permutations(range(4)):
    hulls.append(hull(weird_pts[[*p,4]]))
fig,ax = plt.subplots(6,4,figsize = (8,12))
for i,h in enumerate(hulls):
    a = ax.flatten()[i]
    a.axis('off')
    a.scatter(weird_pts[:,0], weird_pts[:,1], s = 20, zorder = 2)
    a.fill(h[:,0] + np.random.rand(h.shape[0]) * 0.2 - 0.1, h[:,1], fill=False)
    a.scatter(h[:,0], h[:,1], s = 40, facecolors = 'none', edgecolor='black', linewidth = 2)
To try and solve the problem, we will use an infinitesimal perturbation. Let us say that each point coordinate is perturbed by a small $\varepsilon$. This yields the following determinant
$$ \begin{aligned} \begin{vmatrix} x_0 + \varepsilon_{0,0} & x_1 + \varepsilon_{0,1} & x_2 + \varepsilon_{0,2} \\ y_0 + \varepsilon_{1,0} & y_1 + \varepsilon_{1,1} & y_2 + \varepsilon_{1,2} \\ 1 & 1 & 1 \end{vmatrix} &= \begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix} \\ &+ \varepsilon_{0,0}(y_1 - y_2) + \varepsilon_{0,1}(y_2 - y_0) + \varepsilon_{0,2}(y_0 - y_1) \\ &+ \varepsilon_{1,0}(x_2 - x_1) + \varepsilon_{1,1}(x_0 - x_2) + \varepsilon_{1,2}(x_1- x_0) \\ &+ \varepsilon_{0,0}\varepsilon_{1,1} - \varepsilon_{0,0}\varepsilon_{1,2}\\ &+ \varepsilon_{0,1}\varepsilon_{1,2} - \varepsilon_{0,1}\varepsilon_{1,0}\\ &+ \varepsilon_{0,2}\varepsilon_{1,0} - \varepsilon_{0,2}\varepsilon_{1,1} \end{aligned} $$
Now we want to control these perturbations by a single $\varepsilon$ to ensure that when $\varepsilon$ decreases to $0$, even though the determinant decreases to $0$ as well, it does not change sign in the vicinity of $0$. That sign will be used as the result of our predicate. For instance, using $\varepsilon_{0,0} = \varepsilon_{1,1} = \varepsilon$ and $\varepsilon_{i,j} = 0$ for all the others, we get
$$ \begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix} + \varepsilon(y_1 - y_2 + x_0 - x_2) + \varepsilon^2 $$
If the original determinant is $0$ we then check the sign of the $\varepsilon$ term, since the $\varepsilon^2$ term will be negligible. This means that we check the sign of $(y_1 - y_2 + x_0 - x_2)$. If this term is 0 as well, the sign is therefore positive, since $\varepsilon^2$ is positive.
def orient(p0, p1, p2):
    mat = np.ones((3,3))
    mat[0,:2] = p0
    mat[1,:2] = p1
    mat[2,:2] = p2
    det = np.linalg.det(mat)
    if det != 0:
        return det
    coeff = p1[1] - p2[1] + p0[0] - p2[0]
    if coeff != 0:
        return coeff
    return 1
hulls = []
for p in permutations(range(4)):
    hulls.append(hull(weird_pts[[*p,4]]))
fig,ax = plt.subplots(6,4,figsize = (8,12))
for i,h in enumerate(hulls):
    a = ax.flatten()[i]
    a.axis('off')
    a.scatter(weird_pts[:,0], weird_pts[:,1], s = 20, zorder = 2)
    a.fill(h[:,0] + np.random.rand(h.shape[0]) * 0.2 - 0.1, h[:,1], fill=False)
    a.scatter(h[:,0], h[:,1], s = 40, facecolors = 'none', edgecolor='black', linewidth = 2)
Now we still have a consistency problem, since we do not ensure that orient(p0,p1,p2) is the opposite of orient(p0,p2,p1).
pbm = np.array([
[0, 1],
[1, 1],
[2, 1]
])
print(orient(pbm[0], pbm[1], pbm[2]))
print(orient(pbm[0], pbm[2], pbm[1]))
-2
-1
To fix this, we lexicographically sort the input points to ensure that we always consider them in the same order, and then use the signature of the permutation to flip the resulting sign if necessary.
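Before using it, the closed-form signature can be checked in isolation (a standalone verification, not part of the original notebook): it must match the parity of the permutation, i.e. $(-1)$ raised to the number of inversions.

```python
from itertools import permutations

# Standalone check: -(p0 - p1)(p0 - p2)(p1 - p2) / 2 equals the signature of
# a permutation p of {0, 1, 2}: +1 for even permutations, -1 for odd ones.
def sig(p):
    return -(p[0] - p[1]) * (p[0] - p[2]) * (p[1] - p[2]) / 2

for p in permutations(range(3)):
    inversions = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    assert sig(p) == (-1) ** inversions
print("signature formula verified for all 6 permutations")
```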
def orient(p0_in, p1_in, p2_in):
    inputs = np.vstack([p0_in, p1_in, p2_in])
    p = np.lexsort((inputs[:,0], inputs[:,1]))
    signature = -(p[0] - p[1])*(p[0] - p[2])*(p[1] - p[2]) / 2
    p0 = inputs[p[0]]
    p1 = inputs[p[1]]
    p2 = inputs[p[2]]
    mat = np.ones((3,3))
    mat[0,:2] = p0
    mat[1,:2] = p1
    mat[2,:2] = p2
    det = np.linalg.det(mat)
    if det != 0:
        return signature*det
    coeff = p1[1] - p2[1] + p0[0] - p2[0]
    if coeff != 0:
        return signature*coeff
    return signature
print(orient(pbm[0], pbm[1], pbm[2]))
print(orient(pbm[0], pbm[2], pbm[1]))
-2.0
2.0
hulls = []
for p in permutations(range(4)):
    hulls.append(hull(weird_pts[[*p,4]]))
fig,ax = plt.subplots(6,4,figsize = (8,12))
for i,h in enumerate(hulls):
    a = ax.flatten()[i]
    a.axis('off')
    a.scatter(weird_pts[:,0], weird_pts[:,1], s = 20, zorder = 2)
    a.fill(h[:,0] + np.random.rand(h.shape[0]) * 0.2 - 0.1, h[:,1], fill=False)
    a.scatter(h[:,0], h[:,1], s = 40, facecolors = 'none', edgecolor='black', linewidth = 2)
    a.scatter(h[:1,0], h[:1,1], s = 40, color='red', zorder = 3)
We still have non-deterministic behavior in the selection of the reference point (in red above). To fix that, we will change our reference point calculation to use our predicate. Once the minimal x is determined, we construct a pivot point outside of the convex hull by taking a point with a smaller x. We then look for the point $\mathbf{p}_i$ maximizing the slope of the line it forms with the pivot.
def ref_point(pts):
    xmin = np.min(pts[:,0])
    pivot = np.array([xmin - 0.01, 0])
    current = 0
    for i in range(1, len(pts)):
        if orient(pivot, pts[current], pts[i]) > 0:
            current = i
    return current
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(pts)
plot_points(pts[ref_point(pts)].reshape((1,2)),s = 200, facecolors = 'none', edgecolor='black', linewidth = 2)
hulls = []
for p in permutations(range(4)):
    hulls.append(hull(weird_pts[[*p,4]]))
fig,ax = plt.subplots(6,4,figsize = (8,12))
for i,h in enumerate(hulls):
    a = ax.flatten()[i]
    a.axis('off')
    a.scatter(weird_pts[:,0], weird_pts[:,1], s = 20, zorder = 2)
    a.fill(h[:,0] + np.random.rand(h.shape[0]) * 0.2 - 0.1, h[:,1], fill=False)
    a.scatter(h[:,0], h[:,1], s = 40, facecolors = 'none', edgecolor='black', linewidth = 2)
    a.scatter(h[:1,0], h[:1,1], s = 40, color='red', zorder = 3)
We now always get the same result. Let us try a nastier example.
bad_pts = np.zeros((100, 2))
bad_pts[:20,0] = np.random.rand(20)
bad_pts[:20,1] = 0
bad_pts[20:30,0] = 0
bad_pts[20:30,1] = np.random.rand(10)*0.5
bad_pts[30:40,0] = 1
bad_pts[30:40,1] = np.random.rand(10)*0.5
bad_pts[40:60,0] = np.random.rand(20)
bad_pts[40:60,1] = 0.5
bad_pts[60:70,0] = 0
bad_pts[60:70,1] = 0.5 + np.random.rand(10)*0.5
bad_pts[70:80,0] = 1
bad_pts[70:80,1] = 0.5 + np.random.rand(10)*0.5
bad_pts[80:100,0] = np.random.rand(20)
bad_pts[80:100,1] = 1
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(bad_pts)
bad_hull = hull(bad_pts)
fig,ax = plt.subplots(figsize = (8,8))
ax.axis('off')
ax.scatter(bad_pts[:,0], bad_pts[:,1], zorder = 2)
ax.fill(bad_hull[:,0], bad_hull[:,1], fill=False)
ax.scatter(bad_hull[:,0], bad_hull[:,1], s = 60, facecolors = 'none', edgecolor='black', linewidth = 2, zorder = 2)
ax.scatter(bad_hull[:1,0], bad_hull[:1,1], s = 40, color='red', zorder = 3)
plt.show()
The hull is correct. Let us now try to compute the hulls of the bottom and top rectangles.
fig, ax = plt.subplots(figsize=(8,8))
ax.axis('off')
plot_points(bad_pts[40:], s = 100)
plot_points(bad_pts[:60])
bad_high_hull = hull(bad_pts[40:])
bad_low_hull = hull(bad_pts[:60])
fig,ax = plt.subplots(figsize = (8,8))
ax.axis('off')
ax.scatter(bad_pts[:,0], bad_pts[:,1], zorder=2)
ax.fill(bad_low_hull[:,0], bad_low_hull[:,1], fill=False)
ax.fill(bad_high_hull[:,0], bad_high_hull[:,1], fill=False)
ax.scatter(bad_low_hull[:,0], bad_low_hull[:,1], s = 200, facecolors = 'none', edgecolor='red', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:,0], bad_high_hull[:,1], s = 60, facecolors = 'none', edgecolor='black', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:1,0], bad_high_hull[:1,1], s = 40, color='red', zorder = 3)
ax.scatter(bad_low_hull[:1,0], bad_low_hull[:1,1], s = 40, color='yellow', zorder = 3)
plt.show()
At first sight, we get two plausible results. However, looking closely, we have inconsistencies on the middle line: points circled in neither black nor red are considered strictly inside both rectangles, which is contradictory. One interpretation is that we do not ensure transitivity: if points $a$, $b$, $c$ and $d$ are aligned, and the predicate states that $c$ is above the line $(a,b)$ and $d$ is above the line $(b,c)$, then $d$ should be above the line $(a,b)$. The perturbation we chose is applied to the points depending on the order in which they appear in the predicate, but a point appearing in first position for one test may appear in second position for another test with other points, and thus get perturbed in a different direction.

To ensure that points are always perturbed in the same way, we can use the perturbation proposed by Edelsbrunner and Mücke. Assume the points are sorted in lexicographic order, let $i \in [0, n-1]$ be the index of a point in that order, and let $j \in \{0,1\}$ be the index of a coordinate of that point. The perturbation of that coordinate will be
$$ \varepsilon_{j,i} = \varepsilon^{2^{2i + j}} $$
This perturbation has interesting properties. In particular, since
$$ \sum_{i=0}^k 2^i = 2^{k+1} - 1 $$
we can be sure that the product
$$ \prod_{i=0}^k \varepsilon_{0,i}\varepsilon_{1,i} $$
will dominate $\varepsilon_{j,k+1}$ when $\varepsilon$ decreases towards $0$. In our determinant, this implies that the products of perturbations can be ordered solely depending on the indices of the perturbations. Therefore in the perturbed determinant
$$ \begin{aligned} \begin{vmatrix} x_0 + \varepsilon_{0,0} & x_1 + \varepsilon_{0,1} & x_2 + \varepsilon_{0,2} \\ y_0 + \varepsilon_{1,0} & y_1 + \varepsilon_{1,1} & y_2 + \varepsilon_{1,2} \\ 1 & 1 & 1 \end{vmatrix} &= \begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix} \\ &+ \varepsilon_{0,0}(y_1 - y_2) + \varepsilon_{0,1}(y_2 - y_0) + \varepsilon_{0,2}(y_0 - y_1) \\ &+ \varepsilon_{1,0}(x_2 - x_1) + \varepsilon_{1,1}(x_0 - x_2) + \varepsilon_{1,2}(x_1- x_0) \\ &+ \varepsilon_{0,0}\varepsilon_{1,1} - \varepsilon_{0,0}\varepsilon_{1,2}\\ &+ \varepsilon_{0,1}\varepsilon_{1,2} - \varepsilon_{0,1}\varepsilon_{1,0}\\ &+ \varepsilon_{0,2}\varepsilon_{1,0} - \varepsilon_{0,2}\varepsilon_{1,1} \end{aligned} $$
we know that the dominating terms are $\varepsilon_{0,0}$, then $\varepsilon_{1,0}$, then $\varepsilon_{0,1}$, then $\varepsilon_{0,1}\varepsilon_{1,0}$, whose coefficient is the constant $-1$, so we never need to go further. Using this perturbation, the points are always perturbed in the same way. Let us implement and test this.
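To make this ordering concrete, here is a small standalone computation (not from the original notebook) of the exponents $2^{2i+j}$ of the six perturbations; as $\varepsilon$ decreases to $0$, the term with the smallest exponent dominates.

```python
# Exponents of the perturbations eps_{j,i} = eps**(2**(2*i + j)) for the three
# points (i in 0..2) and two coordinates (j in 0..1).
exp = {(j, i): 2 ** (2 * i + j) for i in range(3) for j in range(2)}
# smallest exponent first: eps_{0,0}, eps_{1,0}, eps_{0,1}, eps_{1,1}, ...
print(sorted(exp.items(), key=lambda kv: kv[1]))
# the product eps_{0,1} * eps_{1,0} has exponent 4 + 2 = 6, which still beats
# eps_{1,1} (exponent 8): exactly the fallback order used by the predicate
print(exp[(0, 1)] + exp[(1, 0)])  # → 6
```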
def orient(p0_in, p1_in, p2_in):
    inputs = np.vstack([p0_in, p1_in, p2_in])
    p = np.lexsort((inputs[:,0], inputs[:,1]))
    signature = -(p[0] - p[1])*(p[0] - p[2])*(p[1] - p[2]) / 2
    p0 = inputs[p[0]]
    p1 = inputs[p[1]]
    p2 = inputs[p[2]]
    mat = np.ones((3,3))
    mat[0,:2] = p0
    mat[1,:2] = p1
    mat[2,:2] = p2
    det = np.linalg.det(mat)
    if det != 0:
        return signature*det
    coeff = p1[1] - p2[1]
    if coeff != 0:
        return signature*coeff
    coeff = p2[0] - p1[0]
    if coeff != 0:
        return signature*coeff
    coeff = p2[1] - p0[1]
    if coeff != 0:
        return signature*coeff
    return -signature
bad_high_hull = hull(bad_pts[40:])
bad_low_hull = hull(bad_pts[:60])
fig,ax = plt.subplots(figsize = (8,8))
ax.axis('off')
ax.scatter(bad_pts[:,0], bad_pts[:,1], zorder=2)
ax.fill(bad_low_hull[:,0], bad_low_hull[:,1], fill=False)
ax.fill(bad_high_hull[:,0], bad_high_hull[:,1], fill=False)
ax.scatter(bad_low_hull[:,0], bad_low_hull[:,1], s = 200, facecolors = 'none', edgecolor='red', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:,0], bad_high_hull[:,1], s = 60, facecolors = 'none', edgecolor='black', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:1,0], bad_high_hull[:1,1], s = 40, color='red', zorder = 3)
ax.scatter(bad_low_hull[:1,0], bad_low_hull[:1,1], s = 40, color='yellow', zorder = 3)
plt.show()
Damned, we still have inconsistencies. Maybe we have numerical errors? Let us remove them by first converting our coordinates into rational numbers. Since the coordinates are floating-point numbers and Python integers are unbounded, this conversion is always exact. Since the only operations we perform are multiplications, divisions, sums and differences, our results stay in the domain of rationals, and the computations are therefore exact.
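A quick standalone illustration of why this conversion is exact: every finite float is a dyadic rational, so `Fraction(x)` loses nothing, even though the result is not the decimal value one might expect.

```python
from fractions import Fraction

x = 0.1
f = Fraction(x)
print(f)                 # a dyadic rational with denominator 2**55, not 1/10
assert float(f) == x     # the round-trip is exact
assert f != Fraction(1, 10)
# sums, differences and products of Fractions stay exact
assert Fraction(0.5) + Fraction(0.25) == Fraction(3, 4)
```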
from fractions import Fraction
def orient(p0_in, p1_in, p2_in):
    inputs = np.vstack([p0_in, p1_in, p2_in])
    p = np.lexsort((inputs[:,0], inputs[:,1]))
    signature = -(p[0] - p[1])*(p[0] - p[2])*(p[1] - p[2]) / 2
    p0 = [Fraction(c) for c in inputs[p[0]]]
    p1 = [Fraction(c) for c in inputs[p[1]]]
    p2 = [Fraction(c) for c in inputs[p[2]]]
    # numpy will not compute with Fractions, so we expand the determinant manually
    det = p0[0]*p1[1] - p0[1]*p1[0] + p1[0]*p2[1] - p1[1]*p2[0] + p2[0]*p0[1] - p2[1]*p0[0]
    if det != 0:
        return signature*det
    coeff = p1[1] - p2[1]
    if coeff != 0:
        return signature*coeff
    coeff = p2[0] - p1[0]
    if coeff != 0:
        return signature*coeff
    coeff = p2[1] - p0[1]
    if coeff != 0:
        return signature*coeff
    return -signature
bad_high_hull = hull(bad_pts[40:])
bad_low_hull = hull(bad_pts[:60])
fig,ax = plt.subplots(figsize = (8,8))
ax.axis('off')
ax.scatter(bad_pts[:,0], bad_pts[:,1], zorder=2)
ax.fill(bad_low_hull[:,0], bad_low_hull[:,1], fill=False)
ax.fill(bad_high_hull[:,0], bad_high_hull[:,1], fill=False)
ax.scatter(bad_low_hull[:,0], bad_low_hull[:,1], s = 200, facecolors = 'none', edgecolor='red', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:,0], bad_high_hull[:,1], s = 60, facecolors = 'none', edgecolor='black', linewidth = 2, zorder = 2)
ax.scatter(bad_high_hull[:1,0], bad_high_hull[:1,1], s = 40, color='red', zorder = 3)
ax.scatter(bad_low_hull[:1,0], bad_low_hull[:1,1], s = 40, color='yellow', zorder = 3)
plt.show()
Bingo! In practice, computing exact determinants with rationals really slows down computations. To avoid that, interval arithmetic is used to compute error bounds, derived ahead of time from the sequence of operations performed, and to determine whether a test is trustworthy. For instance, if we test $a \neq 0$ and we know that the computation of $a$ incurs an error of at most $e$, then if $0$ is not in the interval $[a-e, a+e]$, we can be sure that our result is trustworthy. In the rare cases when it is not, we fall back to costly rational computations to ensure exact results.
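This filtering idea can be sketched as follows. This is an illustrative sketch with a crude, hand-waved error bound (real implementations such as CGAL's filtered predicates or Shewchuk's adaptive predicates derive tight bounds from the exact sequence of operations):

```python
import sys
from fractions import Fraction

def orient_filtered(p0, p1, p2):
    # fast path: floating-point determinant
    det = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    # crude bound on the rounding error of the computation above (illustrative
    # constant, not a rigorously derived one)
    mag = (abs(p1[0] - p0[0]) + abs(p2[0] - p0[0])) * (abs(p1[1] - p0[1]) + abs(p2[1] - p0[1]))
    if abs(det) > 8 * sys.float_info.epsilon * mag:
        return det  # zero is outside the error interval: the sign is trustworthy
    # slow path: exact rational arithmetic
    q0, q1, q2 = ([Fraction(c) for c in p] for p in (p0, p1, p2))
    return (q1[0] - q0[0]) * (q2[1] - q0[1]) - (q2[0] - q0[0]) * (q1[1] - q0[1])

print(orient_filtered([0.0, 0.0], [1.0, 0.0], [2.0, 0.0]))  # collinear: exact path, 0
print(orient_filtered([0.2, 0.7], [0.1, 0.5], [0.8, 0.3]) > 0)
```

Most calls take the fast path; the rational fallback only triggers on near-degenerate inputs, so the average cost stays close to pure floating point.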
The take-home message is that when building a computational geometry algorithm, try to use a minimal set of predicates, and ensure the robustness of these predicates. In the algorithm above, every decision ultimately depends on a single predicate, so making that predicate robust is enough to make the whole algorithm robust. Robust predicates are a lot of work, so use existing code such as CGAL or the Predicate Construction Kit.