The objective of this post is to introduce fundamental concepts of numerical computing for design applications. In this post we will focus on root-finding, i.e. intersection-type problems. The tutorial was part of Advanced Topics in Digital Design and Fabrication 2016 and Digital Design and Fabrication 2020. This is an updated version for junior students, i.e. less math-heavy and more descriptive, with Python as the programming language.

# Introduction

The core concept behind numerical methods is search. If you think about it for a moment, the biggest innovations in information technology of the past century have all been centered on this notion, from online content indexing, i.e. looking up web pages, to scientific computing, i.e. solving systems with enormous numbers of equations and unknowns. This tutorial aims to establish certain principles for approaching a class of search problems.

Consider the geometric problem of finding the intersection of two lines. We can classify this as simple because we can solve for one unknown given two line equations. In fact, the challenge is so simple that we can write down a closed-form solution for all instances of the problem. How about the intersection of a line and a circle? The general case has two solutions, where the line punctures the circle at two points. This turns into the problem of solving a quadratic equation; also an easy challenge with an analytical solution.
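To make the contrast concrete, the line-circle case really can be written down directly. Below is a minimal standalone sketch of the closed-form solution in plain Python; the function name, the line representation (point plus direction) and the sample values are all illustrative, not from the post.

```python
import math

def line_circle_intersections(p, d, center, radius):
    """Intersect the line p + t*d with a circle by solving the quadratic
    |p + t*d - center|^2 = radius^2 for the parameter t."""
    fx, fy = p[0] - center[0], p[1] - center[1]
    a = d[0] * d[0] + d[1] * d[1]
    b = 2.0 * (fx * d[0] + fy * d[1])
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                                   # line misses the circle
    root = math.sqrt(disc)
    ts = sorted({(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)})
    return [(p[0] + t * d[0], p[1] + t * d[1]) for t in ts]

# horizontal line through the origin vs. the unit circle -> (-1, 0) and (1, 0)
print(line_circle_intersections((0.0, 0.0), (1.0, 0.0), (0.0, 0.0), 1.0))
```

The point is that the entire solution space is captured by one formula; no iteration is involved.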

On the other hand, consider the case of finding intersections between spline curves. We can formulate the problem just fine, but how we go about solving it is much more complicated. The reason is that we cannot symbolically perform all the manipulations required to bring the equations involved into a plug-and-play, solve-for-X form.

We need a different strategy. Invariably this means that we need to iterate over estimates until we find, or fail to find, an approximation close enough to the exact solution. Questions such as where to start looking, how to pick subsequent guesses, when to stop, how good the approximation is, and how fast we can find a good enough solution are all central to numerical computing.

# Characterization and Brute Force

The simplest scenario for experimenting with numerical methods is root-finding. First we will look at an example with a simple setup and progressively increase the complexity. The curve below is a cubic spline, i.e. defined by four control points. It crosses the X-axis at three locations, one of which we would like to find. If we consider the curve as a function *curve( t ) → point*, then we want to find the (*t*) value for which (*point.Y == 0*). The parameter (*t*) is a real number within some domain interval [*t_{min}, t_{max}*].
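If you do not have Rhino at hand, the *curve( t ) → point* idea can be emulated with a plain four-control-point cubic (here a Bézier evaluator). This stand-in is purely illustrative: the function name and control points are assumptions, and it plays the role of *curve.PointAt( t )* on the domain [0, 1].

```python
def cubic_point(t, controls):
    """Evaluate a four-control-point cubic at parameter t in [0, 1],
    a plain-Python stand-in for curve.PointAt(t)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = controls
    u = 1.0 - t
    # Bernstein basis weights of a cubic
    b0, b1, b2, b3 = u * u * u, 3 * u * u * t, 3 * u * t * t, t * t * t
    return (b0 * x0 + b1 * x1 + b2 * x2 + b3 * x3,
            b0 * y0 + b1 * y1 + b2 * y2 + b3 * y3)

# a curve that starts below the X-axis and ends above it
controls = [(0.0, -1.0), (1.0, -3.0), (2.0, 3.0), (3.0, 1.0)]
print(cubic_point(0.5, controls))   # this particular curve crosses zero at t = 0.5
```

Any of the root-finders discussed below can then be pointed at the Y-coordinate of such a function.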

We will approach this by iterating over the entire domain of the curve, evaluating points at equally spaced intervals. To visualize progress, we will also plot (*iteration, point.Y*) pairs of points. Below is the graph definition with a Python block requiring one (*curve : Curve*) and one (*iterations : int*) parameter.

In the code below, we initialize the geometry output list [4], determine the minimum and maximum domain values of the curve [6-7], define a stepping increment as a fraction of the curve’s domain [9], and construct the progress curve as a polyline [11]. For each iteration we march forward by computing the new parameter value, evaluate the curve point and store its Y-coordinate [14-16]. In addition, for visualization purposes, we store a progress update point, plot the currently evaluated point and print some text feedback [19-20]. Finally, we output the progress points as a polyline curve [22].

```python
from Rhino.Geometry import *
from math import *

geometry = []

tmin = curve.Domain.Min
tmax = curve.Domain.Max

delta = ( tmax - tmin ) / ( iterations - 1 )

progress = Polyline( )

for iteration in range( iterations ):
    tnew = tmin + delta * iteration
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )
    print( "{0:>6.3f} {1:>12.8f}".format( tnew, ynew ) )

geometry.append( PolylineCurve( progress ) )
```

Above we see the computed points on the cubic curve (black) and the progress curve (red). Note that the progress curve and the actual spline look alike, i.e. like stretched versions of one another, but this is only a coincidence! The way to read the progress curve is with the X-axis as the iteration number and the Y-axis as the true vertical displacement of the original curve’s points. You can visually verify that sampled points and progress points are at the same Y-axis level. Below is another example of a cubic curve with a significantly different shape but a progress curve very similar to the one above.

We need to study the progress curve as somewhat decoupled from the actual design geometry of the spline. This is important because it allows us to observe the shape of the underlying problem, which is often very different from what we view and work with. The shape of the problem has important features such as its up and down slopes, its hills and valleys, its start and end points, its top-most and bottom-most points, and where it crosses the X-axis.

## Brute Force Approach

What computers are supposedly best at is obliterating problems through exhaustive iteration. We will start with the simplest algorithm possible, also known as brute force. Its logic is as follows: of all the points we compute along the curve, we will remember the one closest to zero as an approximation of the exact solution. By increasing the number of iterations, i.e. taking smaller and smaller steps as we walk along the curve, we will be able to find better and better approximations. Theoretically, had we infinite resources and made infinitely small steps, we would eventually bump into the actual exact solution, maybe.
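Stripped of the Rhino plumbing, the brute force idea fits in a few lines. Here is a plain-function sketch; the test function and interval are illustrative, not from the post's curve.

```python
def brute_force_root(f, tmin, tmax, iterations):
    """Keep the sample with the smallest |f(t)| over evenly spaced steps."""
    delta = (tmax - tmin) / (iterations - 1)
    tlow, ylow = tmin, f(tmin)
    for i in range(1, iterations):
        t = tmin + delta * i
        y = f(t)
        if abs(y) < abs(ylow):      # closer to zero than the best so far
            tlow, ylow = t, y
    return tlow, ylow

# f(t) = t^2 - 2 has a root at sqrt(2) ~ 1.41421356...
t, y = brute_force_root(lambda t: t * t - 2.0, 0.0, 2.0, 1001)
print(t, y)
```

With 1001 samples the step is 0.002, so the best estimate can be off by up to half a step; this is exactly the resolution limit discussed next.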

```python
from Rhino.Geometry import *
from math import *

geometry = []

tmin = curve.Domain.Min
tmax = curve.Domain.Max

delta = ( tmax - tmin ) / ( iterations - 1 )

tlow = tmin
plow = curve.PointAt( tlow )
ylow = plow.Y

progress = Polyline( ); progress.Add( 0, ylow, 0 )
improved = Polyline( ); improved.Add( 0, ylow, tlow )

geometry.append( plow )

for iteration in range( 1, iterations ):
    tnew = tmin + delta * iteration
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y
    if( abs( ynew ) < abs( ylow ) ):
        tlow = tnew
        plow = pnew
        ylow = ynew

    progress.Add( iteration, ynew, 0 )
    improved.Add( iteration, ylow, tlow )

    geometry.append( pnew )
    print( "{0:>6.3f} {1:>12.8f}".format( tnew, ynew ) )

geometry.append( PolylineCurve( progress ) )
geometry.append( PolylineCurve( improved ) )
geometry.append( Line( plow + Vector3d.YAxis * 5, plow - Vector3d.YAxis * 5 ) )
```

In the code above, we need to remember the best curve parameter, evaluated point and vertical displacement values, initially set to the start of the curve [11-13]. To check whether the new solution is better than the current best, we need to use absolute values, because we don’t mind whether we approach zero from the positive or the negative side [24]. Once a better estimate is found, we update the current best values for future reference [25-27]. Apart from the progress curve (green), we also track the evolution of the best value via a new curve (blue), named improved [16]. Finally, we draw a vertical line at the best solution found. This curve is often called the convergence graph because, hopefully, over time it will hone in on the solution, which for root-finding is zero.

As we follow the improvement curve from left to right, we see that initially the algorithm progresses by constantly finding better and better solutions, i.e. the points march closer and closer to zero. Once the X-axis is crossed, points start moving away from zero and the best estimate is retained, i.e. the improvement curve flattens. Around the second crossing of the spline, we see a small jump, which signals that a better estimate was detected and that it is incidentally above the X-axis. After this we see points on the actual spline diving below the X-axis, and the improvement curve stays flat until the third and last crossing. Once again we have an improvement which brings us closer to zero, but it seems it is also on the positive side of the X-axis.

```python
from Rhino.Geometry import *
from math import *

geometry = []

tmin = curve.Domain.Min
tmax = curve.Domain.Max

graph = []

for iterations in range( 10, 1000, 1 ):
    delta = ( tmax - tmin ) / ( iterations - 1 )

    tlow = tmin
    plow = curve.PointAt( tlow )
    ylow = plow.Y

    for iteration in range( 1, iterations ):
        tnew = tmin + delta * iteration
        pnew = curve.PointAt( tnew )
        ynew = pnew.Y

        if( abs( ynew ) < abs( ylow ) ):
            tlow = tnew
            plow = pnew
            ylow = ynew

    graph.append( Point3d( iterations * 0.05, ylow * 20, 0 ) )
    print( ylow )

geometry.append( PolylineCurve( graph ) )
```

The script above computes and plots the best estimate (Y-axis) against the number of iterations required (X-axis). As the number of iterations increases from 10 to 1000, the approximation improves and the error, i.e. the absolute distance from zero, is reduced. Despite the curve looking as if the results improve rapidly, within 1000 iterations the best Y-value reached is only 0.001368, i.e. the error is still quite large.

### Reflections

A key insight into why the exhaustive approach is inefficient is that it stubbornly marches forward, disregarding some very interesting events that take place along the way. For instance, we observed that improvements take place when the X-axis is crossed (see figure below). Why not use this to our advantage? Instead of pushing ahead, when we detect that we crossed from one side of the X-axis to the other, we can turn back and hone in on zero. However, if we keep stepping by the same amount, we will be trapped in an infinite loop bouncing between two points on either side of the X-axis. Therefore, we will also reduce the step size, say by 50%.

The script below implements the new strategy with the following highlights: we need to keep track of two positions, here named (*new*) and (*old*). The (*old*) position is initialized to the start of the curve [11-13]. In each iteration we compute, as before, the (*new*) estimate [19-21]. Then, to detect whether we have crossed the X-axis, we check the signs of the (*new*) and (*old*) values [23-24]; if they are opposite, we halve the step size and flip its sign to change direction between forward (+) and backward (−) marching [25]. After all plotting and reporting [27-30], we update the (*old*) state from the current (*new*) state [32-34], so the next iteration can continue as usual.

```python
from Rhino.Geometry import *
from math import *

geometry = []

tmin = curve.Domain.Min
tmax = curve.Domain.Max

delta = ( tmax - tmin ) / ( iterations - 1 )

told = tmin
pold = curve.PointAt( told )
yold = pold.Y

progress = Polyline( ); progress.Add( 0, yold, 0 )

geometry.append( pold )

for iteration in range( 1, iterations ):
    tnew = told + delta
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    if( ( ynew > 0 and yold < 0 ) or ( ynew < 0 and yold > 0 ) ):
        delta *= -0.5

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{0:>6.3f} {1:>12.8f}".format( tnew, ynew ) )

    told = tnew
    pold = pnew
    yold = ynew

geometry.append( PolylineCurve( progress ) )
geometry.append( Line( pnew + Vector3d.YAxis * 5, pnew - Vector3d.YAxis * 5 ) )
```

In the figure below we see that the algorithm hones in on the solution as expected; the green line is the convergence graph. As we zoom in, the estimated points get closer and closer to zero. The raw data table below (iteration, t-value, y-value) also shows how well this works! Within approximately 40 iterations we have found a solution accurate to 8 decimal places. In fact, if we only cared about five decimal places, then we are doing extra work. So perhaps we can incorporate an early exit strategy.

```
 1  1.169  -1.76520545
 2  2.338  -0.97458236
 3  3.507  -0.29093485
 4  4.675   0.29109415
 5  4.091   0.01244714
 6  3.507  -0.29093485
 7  3.799  -0.13611013
 8  4.091   0.01244714
 9  3.945  -0.06105329
10  4.018  -0.02410918
11  4.091   0.01244714
12  4.054  -0.00578262
13  4.073   0.00334435
14  4.064  -0.00121611
15  4.068   0.00106487
16  4.066  -0.00007543
17  4.067   0.00049477
18  4.066   0.00020968
19  4.066  -0.00007543
20  4.066   0.00006713
21  4.066  -0.00000415
22  4.066   0.00003149
23  4.066   0.00001367
24  4.066  -0.00000415
25  4.066   0.00000476
26  4.066   0.00000030
27  4.066  -0.00000415
28  4.066  -0.00000192
29  4.066   0.00000030
30  4.066  -0.00000081
31  4.066  -0.00000025
32  4.066   0.00000030
33  4.066   0.00000002
34  4.066  -0.00000025
35  4.066  -0.00000011
36  4.066   0.00000002
37  4.066  -0.00000005
38  4.066  -0.00000001
39  4.066   0.00000002
40  4.066   0.00000001
41  4.066  -0.00000001
42  4.066  -0.00000000
43  4.066   0.00000001
44  4.066   0.00000000
45  4.066  -0.00000000
46  4.066   0.00000000
47  4.066  -0.00000000
48  4.066   0.00000000
49  4.066  -0.00000000
```

### Tolerances

The script below incorporates a few improvements. First we define two auxiliary parameters: (*tolerance*) [9] and (*resolution*) [10]. Tolerance captures the notion of how close to zero counts as a good enough approximation of the solution; a value that is highly problem-dependent. For geometric problems of construction we rarely care about sub-millimeter dimensions. They are considered noise, or well within the much larger construction tolerances. So 1e-3mm is a good enough algorithmic tolerance, or 1e-5mm if we want to be safe, or 1e-7mm if we are paranoid.

The resolution parameter is related to the fact that we are reducing the step size rapidly. In the same way that (*tolerance*) provides an early exit test for the *y*-value, (*resolution*) offers the same function for the *t*-value. If the step size falls below the minimum resolution, we can also abort early. Instead of a fixed value, as used below, we can define it as a fraction of the *y*-tolerance. Generally, double precision floating point numbers hold roughly 15 to 17 significant decimal digits. Beyond this there are no finer representable values, unless we use specialized high-precision arithmetic libraries.
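A quick way to convince yourself of this precision limit, using standard Python doubles:

```python
import sys

# near 1.0, increments smaller than the machine epsilon simply vanish
print(1.0 + 1e-16 == 1.0)        # the 1e-16 is lost entirely
print(1.0 + 1e-15 == 1.0)        # 1e-15 is still representable next to 1.0
print(sys.float_info.epsilon)    # ~2.22e-16, the gap between 1.0 and the next double
```

This is why shrinking the step below some resolution buys nothing: the parameter value literally stops changing.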

The key point is that it does not make sense to keep iterating if the step size is too small or the marginal improvement is practically zero. There are of course exceptions, often called pathological cases, such as asymptotic curves that meet the X-axis at infinity, or discontinuous, i.e. spiky, functions that change direction rapidly about a point. For such cases there is no easy way around this; that is why characterizing the objective function was our first task, i.e. to understand the circumstances and take proactive measures.

In both cases, we check for an early exit from the iteration loop using absolute values against tolerances [26, 32], because we simply don’t care on which side of zero we are, only how close to it. To reduce the verbosity of the sign-checking expression, which allows us to detect the X-axis crossing events, we compact it by multiplying the (*old*) and (*new*) *y*-values. If the signs of those values are the same, the product is always positive (+ × + = + and − × − = +), while if they are opposite, the product becomes negative (+ × − = − and − × + = −). This rule is a direct Python transcription of the intermediate value theorem [W], which states that if a continuous function *f(x)* defined on the real domain [*x_{min}, x_{max}*] yields *f( x_{min} ) · f( x_{max} ) < 0*, then it has a root, aka it crosses the X-axis. The key takeaway is that we can only find a solution if we cross the X-axis, and this can only take place if the signs of the y-values flip.

```python
from Rhino.Geometry import *
from math import *

geometry = []

tolerance = 0.00001
resolution = 0.00000001

tmin = curve.Domain.Min
tmax = curve.Domain.Max

delta = ( tmax - tmin ) / ( iterations - 1 )

told = tmin
pold = curve.PointAt( told )
yold = pold.Y

progress = Polyline( ); progress.Add( 0, yold, 0 )

geometry.append( pold )

for iteration in range( 1, iterations ):
    tnew = told + delta
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    if( abs( ynew ) <= tolerance ): break

    if( ynew * yold < 0 ):
        delta *= -0.5

    if( abs( delta ) <= resolution ): break

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tnew, ynew, iteration ) )

    told = tnew
    pold = pnew
    yold = ynew

geometry.append( PolylineCurve( progress ) )
```
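The same tolerance/resolution logic can be exercised outside Rhino on a plain function. Below is a self-contained sketch of the step-halving marcher; the test function, bracket and constants are illustrative, and the sketch assumes a crossing exists inside the interval.

```python
def march_root(f, tmin, tmax, steps=100, tolerance=1e-5, resolution=1e-8):
    """March forward; on a sign flip, turn around and halve the step.
    Sketch only: assumes f crosses zero somewhere inside [tmin, tmax]."""
    delta = (tmax - tmin) / steps
    t, y = tmin, f(tmin)
    while True:
        tn = t + delta
        yn = f(tn)
        if abs(yn) <= tolerance:      # y-value early exit
            return tn
        if yn * y < 0.0:              # sign flip: we stepped over a root
            delta *= -0.5             # reverse direction with half the step
        if abs(delta) <= resolution:  # t-value early exit
            return tn
        t, y = tn, yn

print(march_root(lambda t: t * t - 2.0, 0.0, 2.0))
```

Run against f(t) = t² − 2 it homes in on √2 ≈ 1.41421 within a few dozen evaluations, mirroring the behavior of the Rhino script above.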

## Divide and Conquer

The above algorithm is already pretty good, but it suffers from a couple of issues: (a) we need to specify the maximum number of iterations and a step size based on a guess about how many parts to split the curve’s domain into; (b) we are still blindly marching forward until we find the crossing point, at which point we converge. How about using the winning strategy of honing in on the solution from the get-go? We can split the domain into two halves, check and keep whichever half contains the solution while discarding the one that doesn’t, and rinse and repeat. This is a so-called divide and conquer approach, which gives rise to the bisection algorithm [W] below.

```
 0   0.000  -3.23249401
 1  57.273   7.60797477
 2  28.637  -1.88157273
 3  42.955  -3.07799589
 4  50.114   0.16420663
 5  46.535  -1.90515488
 6  48.324  -0.99215612
 7  49.219  -0.44559736
 8  49.667  -0.14875128
 9  49.891   0.00569491
10  49.779  -0.07203403
11  49.835  -0.03329631
12  49.863  -0.01383242
13  49.877  -0.00407669
14  49.884   0.00080713
15  49.880  -0.00163528
16  49.882  -0.00041420
17  49.883   0.00019643
18  49.882  -0.00010889
19  49.882   0.00004377
20  49.882  -0.00003256
21  49.882   0.00000560
```

Here we define a sub-domain using its minimum and maximum values [9-15]. We initialize this with the entire domain of the curve. In each iteration we compute the middle of the sub-domain [28-31]. If the mid-point is within tolerance we have found a solution and exit [36], otherwise we test the lower and upper halves against the intermediate value condition [39-46]. If the solution is in the lower half, we shrink the sub-domain by setting the top end to the middle, i.e. [*t_{min}, t_{max}*] → [*t_{min}, t_{mid}*], otherwise [*t_{min}, t_{max}*] → [*t_{mid}, t_{max}*]. In this fashion we cut the sub-domain in half at each iteration and recursively hone in on the solution by closing the bracket.

```python
from Rhino.Geometry import *
from math import *

geometry = []

tolerance = 0.00001
resolution = 0.00000001

tmin = curve.Domain.Min
pmin = curve.PointAt( tmin )
ymin = pmin.Y

tmax = curve.Domain.Max
pmax = curve.PointAt( tmax )
ymax = pmax.Y

progress = Polyline( )
progress.Add( 0, ymin, 0 )
progress.Add( 1, ymax, 0 )

geometry.append( pmin )
geometry.append( pmax )

print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tmin, ymin, 0 ) )
print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tmax, ymax, 1 ) )

for iteration in range( 2, 1000 ):
    tmid = ( tmin + tmax ) / 2
    pmid = curve.PointAt( tmid )
    ymid = pmid.Y

    progress.Add( iteration, ymid, 0 )
    geometry.append( pmid )

    print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tmid, ymid, iteration ) )

    if( abs( ymid ) <= tolerance ): break

    if( ymin * ymid < 0 ):
        tmax = tmid
        pmax = pmid
        ymax = ymid
    else:
        tmin = tmid
        pmin = pmid
        ymin = ymid

    if( abs( tmax - tmin ) <= resolution ): break

geometry.append( PolylineCurve( progress ) )
```

The algorithm is as fast as or faster than the earlier ping-pong marching scheme, but we don’t need to bother with step sizes or iteration counts. Here we use a for-loop for the sake of plotting results, but the same method can be implemented with a while-loop or a recursive function. Divide and conquer is a general approach used to solve many other problems.
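As mentioned, the loop can equally be written recursively. A standalone sketch on a plain function (names, test function and constants are illustrative):

```python
def bisect(f, tmin, tmax, tolerance=1e-5, resolution=1e-8):
    """Recursive bisection: keep the half whose endpoints straddle zero."""
    tmid = 0.5 * (tmin + tmax)
    ymid = f(tmid)
    if abs(ymid) <= tolerance or (tmax - tmin) <= resolution:
        return tmid
    if f(tmin) * ymid < 0.0:                       # root is in the lower half
        return bisect(f, tmin, tmid, tolerance, resolution)
    return bisect(f, tmid, tmax, tolerance, resolution)

print(bisect(lambda t: t * t - 2.0, 0.0, 2.0))
```

Each call halves the bracket, so the recursion depth is logarithmic in the ratio of the initial bracket to the resolution.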

## False Position & Proportional Splitting

However, selecting the mid-point of the sub-domain seems rather arbitrary, given that we have some indication of the solution’s whereabouts from the *t* and *y*-value magnitudes. In the figure below we can observe that the sub-domain min/max values (red) are not equal, so why split the domain evenly in half (blue)? Splitting instead at the ratio defined by the secant (green), we arrive at the false position method [W], implemented with a one-line code modification below. The improvement in the number of iterations required is significant, and it comes with no additional code complexity.

```python
tmid = ( ymin * tmax - ymax * tmin ) / ( ymin - ymax )
```
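This one-liner is exactly where the secant through (*t_{min}, y_{min}*) and (*t_{max}, y_{max}*) crosses *y = 0*. As a sanity check, here is a minimal standalone false position loop on a plain function; the test function and tolerance are illustrative.

```python
def false_position(f, tmin, tmax, tolerance=1e-5, iterations=100):
    """Split the bracket where the secant through its endpoints hits y = 0."""
    ymin, ymax = f(tmin), f(tmax)
    for _ in range(iterations):
        tmid = (ymin * tmax - ymax * tmin) / (ymin - ymax)
        ymid = f(tmid)
        if abs(ymid) <= tolerance:
            break
        if ymin * ymid < 0.0:        # root is between tmin and tmid
            tmax, ymax = tmid, ymid
        else:                        # root is between tmid and tmax
            tmin, ymin = tmid, ymid
    return tmid

print(false_position(lambda t: t * t - 2.0, 0.0, 2.0))
```

On well-behaved functions this converges in far fewer iterations than bisection, though one endpoint of the bracket can stagnate on strongly curved functions.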

```
 0   0.000  -3.23249401
 1  57.273   7.60797477
 2  17.078   1.02995755
 3  12.952   1.40467843
 4   9.028   1.09032303
 5   6.751   0.52652935
 6   5.805   0.19791732
 7   5.471   0.06737036
 8   5.359   0.02215527
 9   5.322   0.00720316
10   5.311   0.00233320
11   5.307   0.00075484
12   5.305   0.00024411
13   5.305   0.00007894
14   5.305   0.00002552
15   5.305   0.00000825
```

## Ridder’s Method & Exponential Fitting

A general trend in numerical computing is to try to minimize the number of times we evaluate the objective function, because the computation required may be substantial. Here we have a trivial example, so evaluation is virtually free. We have also seen that using interim values for a form of curve-fitting approximation is very useful for guiding subsequent iterations. This is exactly what makes the false position method better than bisection. With only two values we can fit a line segment, but with more we can do much better. The challenge is to make the most out of the fewest evaluations. The following approach, known as Ridder’s method, uses an exponential function fitting scheme [W]. Here, apart from the mid-point, we also take an additional sample [41-43], and if the fitted sample works [49], we can shrink the sub-domain more aggressively [50-54]; otherwise we use the standard bisection logic [56-61].

```python
from Rhino.Geometry import *
from math import *

geometry = []

sign = lambda x : 1 if x > 0 else -1

tolerance = 0.00001
resolution = 0.00000001

tmin = curve.Domain.Min
pmin = curve.PointAt( tmin )
ymin = pmin.Y

tmax = curve.Domain.Max
pmax = curve.PointAt( tmax )
ymax = pmax.Y

progress = Polyline( )
progress.Add( 0, ymin, 0 )
progress.Add( 1, ymax, 0 )

geometry.append( pmin )
geometry.append( pmax )

print( "{2:>2}l {0:>6.3f} {1:>12.8f}".format( tmin, ymin, 0 ) )
print( "{2:>2}h {0:>6.3f} {1:>12.8f}".format( tmax, ymax, 1 ) )

for iteration in range( 2, 1000 ):
    tmid = ( ymin * tmax - ymax * tmin ) / ( ymin - ymax )
    pmid = curve.PointAt( tmid )
    ymid = pmid.Y

    progress.Add( iteration, ymid, 0 )
    geometry.append( pmid )

    print( "{2:>2}m {0:>6.3f} {1:>12.8f}".format( tmid, ymid, iteration ) )

    if( abs( ymid ) <= tolerance ): break

    tnew = tmid + ( tmid - tmax ) * sign( ymax - ymin ) * ymid / sqrt( ymid * ymid - ymin * ymax )
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{2:>2}n {0:>6.3f} {1:>12.8f}".format( tnew, ynew, iteration ) )

    if( ynew * ymid < 0.0 ):
        tmin = tnew; ymin = ynew
        tmax = tmid; ymax = ymid
    else:
        if( ynew * ymin < 0.0 ):
            tmax = tnew; ymax = ynew
        else:
            tmin = tnew; ymin = ynew

    if( abs( tmax - tmin ) <= resolution ): break

geometry.append( PolylineCurve( progress ) )
```

The data table contains the iteration number along with the (*t, y*) values, as in earlier tables, but the iteration numbers are annotated with the evaluation phase: (l) initial low bound, (h) initial high bound, (m) iteration mid-point and (n) iteration new/exponential point. So while in loop terms the method takes six steps, in terms of evaluations the figure is double that.

```
 0l   0.000   -3.23249401
 1h  57.273    7.60797477
 2m  17.078    1.02995755
 2n   8.905    1.06747812
 3m   6.694    0.50829832
 3n   6.111    0.31028681
 4m   5.575    0.10902179
 4n   5.517    0.08609193
 5m   5.374    0.02844838
 5n   5.367    0.02530483
 6m   5.325    0.00823363
 6n   5.324    0.00774063
 7m   5.311    0.00250763
 7n   5.311    0.00242459
 8m   5.307    0.00078443
 8n   5.307    0.00076989
 9m   5.305    0.00024898
 9n   5.305    0.00024638
10m   5.305    0.00007967
10n   5.305    0.00007920
11m   5.305    0.00002561
11n   5.305    0.00002552
12m   5.305    0.00000825
```
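For readers without Rhino, the same update rule can be tried on a plain function. This is a sketch of the Ridder-style iteration used above, with the variable names mirroring the script; the test function is illustrative, and for production work a vetted library implementation is preferable.

```python
import math

def sign(x):
    return 1.0 if x > 0.0 else -1.0

def ridder_like(f, tmin, tmax, tolerance=1e-5, resolution=1e-8):
    """False-position midpoint plus an exponentially corrected extra sample,
    keeping whichever pair of samples still brackets the root."""
    ymin, ymax = f(tmin), f(tmax)
    tnew = tmin
    for _ in range(100):
        tmid = (ymin * tmax - ymax * tmin) / (ymin - ymax)
        ymid = f(tmid)
        if abs(ymid) <= tolerance:
            return tmid
        # the exponentially fitted extra sample
        tnew = tmid + (tmid - tmax) * sign(ymax - ymin) * ymid \
                    / math.sqrt(ymid * ymid - ymin * ymax)
        ynew = f(tnew)
        if abs(ynew) <= tolerance:
            return tnew
        if ynew * ymid < 0.0:            # bracket between the two new samples
            tmin, ymin = tnew, ynew
            tmax, ymax = tmid, ymid
        elif ynew * ymin < 0.0:          # bracket against the low end
            tmax, ymax = tnew, ynew
        else:                            # bracket against the high end
            tmin, ymin = tnew, ynew
        if abs(tmax - tmin) <= resolution:
            return tnew
    return tnew
```

The square root argument stays positive as long as the endpoints keep bracketing the root, which the three update branches maintain.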

## Brent’s Method & Inverse Quadratic Interpolation

Brent’s method [W] follows the same scheme of using interim samples to approximate the objective function locally with a fitted curve and extract better estimates. The algorithm is neither easy to implement from first principles nor to explain without a lot of math. The Wikipedia pseudo-code is rather messy, so the reference script below is based on the C# implementation from the Math.NET Numerics library [GitHub]. The method is considered hybrid in that it tries to use the so-called inverse quadratic interpolation scheme, i.e. a sideways parabola, but when that fails it falls back to the trusty bisection method.

The key takeaway here is that higher-degree locally fitted approximation curves, typically parabolas, can capture the shape, i.e. curvature, of the objective function better and yield faster convergence rates, but they sometimes explode, i.e. overshoot. In those cases we need a fallback strategy which may be slower but is also more reliable. Brent’s method is the de facto choice for univariate root-finding: as you can see from the data table below, it is extremely efficient, and it is also very stable, unlike earlier approaches.
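To see what inverse quadratic interpolation does in isolation: fit *t* as a quadratic function of *y* through three samples (Lagrange form) and evaluate it at *y = 0*. A minimal sketch with illustrative sample values, not from the post's curve:

```python
def inverse_quadratic_step(t0, y0, t1, y1, t2, y2):
    """One inverse quadratic interpolation step: Lagrange-interpolate t as a
    function of y through three samples and evaluate at y = 0."""
    return (t0 * y1 * y2 / ((y0 - y1) * (y0 - y2))
          + t1 * y0 * y2 / ((y1 - y0) * (y1 - y2))
          + t2 * y0 * y1 / ((y2 - y0) * (y2 - y1)))

# three samples of f(t) = t^2 - 2; the true root is sqrt(2) ~ 1.41421
print(inverse_quadratic_step(1.0, -1.0, 1.5, 0.25, 2.0, 2.0))
```

One such step from three crude samples already lands within about 0.005 of the root, which is why Brent's method prefers it whenever the step stays safely inside the bracket.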

```python
from Rhino.Geometry import *
from math import *

geometry = []

sign = lambda x : 1 if x > 0 else -1

tolerance = 0.00001
resolution = 0.00000001

tmin = curve.Domain.Min
pmin = curve.PointAt( tmin )
ymin = pmin.Y

tmax = curve.Domain.Max
pmax = curve.PointAt( tmax )
ymax = pmax.Y

progress = Polyline( )
progress.Add( 0, ymin, 0 )
progress.Add( 1, ymax, 0 )

geometry.append( pmin )
geometry.append( pmax )

print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tmin, ymin, 0 ) )
print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tmax, ymax, 1 ) )

tmid = tmin
ymid = ymin

tnew = tmax
ynew = ymax

delta = 0.0
alpha = 0.0

for iteration in range( 2, 1000 ):
    if( sign( ynew ) == sign( ymax ) ):
        tmax = tmin
        ymax = ymin
        delta = tnew - tmin
        alpha = delta

    if( abs( ymax ) < abs( ynew ) ):
        # sequential rotation as in the C# source: tmax ends up as the old tnew
        tmin = tnew; tnew = tmax; tmax = tmin
        ymin = ynew; ynew = ymax; ymax = ymin

    tmid = 0.5 * ( tmax - tnew )

    if( abs( alpha ) >= tolerance and abs( ymin ) > abs( ynew ) ):
        s = ynew / ymin

        if( abs( tmax - tmin ) <= resolution ):
            p = 2.0 * tmid * s
            q = 1.0 - s
        else:
            q = ymin / ymax
            r = ynew / ymax
            p = s * ( 2.0 * tmid * q * ( q - r ) - ( tnew - tmin ) * ( r - 1.0 ) )
            q = ( q - 1.0 ) * ( r - 1.0 ) * ( s - 1.0 )

        if( p > 0.0 ): q = -q
        p = abs( p )

        if( 2.0 * p < min( 3.0 * tmid * q, abs( alpha * q ) ) ):
            alpha = delta
            delta = p / q
        else:
            delta = tmid
            alpha = delta
    else:
        delta = tmid
        alpha = delta

    tmin = tnew
    ymin = ynew

    tnew += delta
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{2:>2} {0:>6.3f} {1:>12.8f}".format( tnew, ynew, iteration ) )

    if( abs( ynew ) <= tolerance ): break

geometry.append( PolylineCurve( progress ) )
```

```
 0   0.000  -3.23249401
 1  57.273   7.60797477
 2  17.078   1.02995755
 3  12.952   1.40467843
 4   6.476   0.43682487
 5   4.205  -0.49566266
 6   5.412   0.04375570
 7   5.303  -0.00072021
 8   5.305   0.00000644
```

## Newton-Raphson & Derivatives

So far we have examined methods that attempt to find roots based on sample extraction, typically two or three samples per iteration. The Newton-Raphson method [W] works quite differently in that it uses point-to-point traversal, but as a guide for the next point selection it deploys the local derivative. You may think of this approach as following the tangent line: we find the intersection of the tangent line with the X-axis and use that x-value as the next sample. The formal expression for the new estimate based on the current one is: *t_{new} = t_{old} − f( t_{old} ) / f′( t_{old} )*.

Computing the derivative *f′( t )* of a function *f( t )* symbolically, i.e. writing an analytical expression for it, is most often impossible. We therefore use the finite differences scheme [W], which takes samples spaced nearly zero apart about a point to compute an approximation of the derivative: *f′( t ) = df / dt = ( f( t + dt ) − f( t ) ) / dt*. For our curve, we perform two evaluations, yielding two points at (*t + dt*) and (*t*); their difference is the tangent vector, scaled by the inverse of the step size. The key takeaway here is that while differentiating a function, i.e. writing a formula that captures the derivative over its entire domain, is generally difficult, finding the value of the derivative about a local point is trivial.
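A standalone sketch of Newton-Raphson with a forward finite-difference derivative on a plain function (the function, starting guess and step sizes are illustrative):

```python
def newton_fd(f, t, tolerance=1e-5, dt=1e-7, iterations=100):
    """Newton-Raphson where f'(t) is approximated by a forward difference."""
    for _ in range(iterations):
        y = f(t)
        if abs(y) <= tolerance:
            return t
        slope = (f(t + dt) - y) / dt   # f'(t) ~ (f(t + dt) - f(t)) / dt
        t = t - y / slope              # follow the tangent to the X-axis
    return t

print(newton_fd(lambda t: t * t - 2.0, 1.0))
```

Starting from t = 1 it reaches √2 to well past five decimals in about four iterations, at the cost of two evaluations per iteration.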

In the code below, we initialize the search using a new input parameter (*initial : float*) in the range [*0, 1*], i.e. within [*t_{min}, t_{max}*] of the curve’s domain [12-14]. Then we compute a point marginally ahead of our current location, *f( t + dt )*, using the resolution as the *dt*-value [24-26]. We then compute the derivative using the forward finite difference *( f( t + dt ) − f( t ) ) / dt* via the (*old*) and (*inc*) values [28]. Finally we compute the (*new*) position using the Newton-Raphson rule [30].

```python
from Rhino.Geometry import *
from math import *

geometry = []

tolerance = 0.00001
resolution = 0.0000001

tmin = curve.Domain.Min
tmax = curve.Domain.Max

told = tmin * ( 1.0 - initial ) + tmax * initial
pold = curve.PointAt( told )
yold = pold.Y

progress = Polyline( )
progress.Add( 0, yold, 0 )

geometry.append( Circle( pold, 0.5 ) )
#geometry.append( pold )

print( "{2:>2}i {0:>6.3f} {1:>12.8f}".format( told, yold, 0 ) )
for iteration in range( 1, 1000 ):
    tinc = told + resolution
    pinc = curve.PointAt( tinc )
    yinc = pinc.Y

    ydel = ( yinc - yold ) / resolution

    tnew = told - yold / ydel
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{2:>2}d {0:>6.3f} {1:>12.8f}".format( tinc, ydel, iteration ) )
    print( "{2:>2}n {0:>6.3f} {1:>12.8f}".format( tnew, ynew, iteration ) )

    if( abs( ynew ) <= tolerance ): break

    told = tnew
    pold = pnew
    yold = ynew

geometry.append( PolylineCurve( progress ) )
```

The data table below contains the (*iteration, t-value, y-value*) results. In simple terms, the method is super fast, as it takes only four iterations. The rate of convergence of this method is known as quadratic, i.e. the number of correct digits roughly doubles with each iteration. However, due to the derivative calculation, we need to evaluate the function twice per iteration. The letters (*i, d, n*) next to the iteration number represent the (*initial, derivative, new*) evaluation call types.

```
 0i   0.000  -3.23249401
 1d   0.000   0.82209954
 1n   3.932  -0.63192832
 2d   3.932   0.50931527
 2n   5.173  -0.05508540
 3d   5.173   0.42138164
 3n   5.303  -0.00058654
 4d   5.303   0.41241758
 4n   5.305  -0.00000007
```

A major caveat of this method is its stability. It can be trapped in loops, or it can overshoot out of bounds. The general rule is: if we are close to the solution, i.e. we have a good initial guess, we will converge super fast. In the figure below we change the initial value along the domain of the curve, denoted by the little circle, and plot the samples evaluated and the convergence graph. It is very evident that when the derivative flattens near the hill-tops and valley-bottoms of the graph, the method overshoots and takes more iterations to converge. You can also observe that it sometimes evaluates points outside the curve’s domain, which for spline curves is generally fine.

## Halley’s Method & Even More Derivatives

The final method presented here builds on top of the previous scheme, i.e. using derivatives to guide subsequent sample selection. Halley’s method [W] uses the first and second derivatives. We will again use the finite differences scheme to compute them. Much as Newton-Raphson approximates the objective function locally with a line, i.e. the point itself and its tangent vector, we may think of Halley’s method as using the circle of curvature to achieve the same goal with a better local fit. As you can see in the figure below, it takes three iterations to find the first root from the curve’s start point.

```python
from Rhino.Geometry import *
from math import *

geometry = []

tolerance = 0.00001
resolution = 0.000001

f = lambda t : curve.PointAt( t ).Y
d = lambda t : ( f( t + resolution ) - f( t ) ) / resolution
s = lambda t : ( d( t + resolution ) - d( t ) ) / resolution

tmin = curve.Domain.Min
tmax = curve.Domain.Max

told = tmin * ( 1.0 - initial ) + tmax * initial
pold = curve.PointAt( told )
yold = pold.Y

progress = Polyline( )
progress.Add( 0, yold, 0 )

geometry.append( Circle( pold, 0.5 ) )

print( "{2:>2}i {0:>6.3f} {1:>12.8f}".format( told, yold, 0 ) )
for iteration in range( 1, 1000 ):
    yfst = d( told )
    ysec = s( told )

    tdel = 2.0 * ( yold * yfst ) / ( 2.0 * yfst * yfst - yold * ysec )

    tnew = told - tdel
    pnew = curve.PointAt( tnew )
    ynew = pnew.Y

    progress.Add( iteration, ynew, 0 )
    geometry.append( pnew )

    print( "{2:>2}n {0:>6.3f} {1:>12.8f}".format( tnew, ynew, iteration ) )

    if( abs( ynew ) <= tolerance ): break

    told = tnew
    yold = ynew

geometry.append( PolylineCurve( progress ) )
```

To compact the code we use lambdas, ie. inline anonymous functions (more on this later). We define the *f( t )* function to capture the curve evaluation and y-coordinate extraction logic [9]. We could write this in a more conventional manner, ie. *def f( curve, t ): return curve.PointAt( t ).Y*; lambdas are just more compact. Then we define the first derivative *d( t )* and the second derivative *s( t )* in a similar manner [10-11]. It is possible to write those even more concisely, but this is the clearest way to see that finite differences are just subtractions for both first and second order derivatives. For each iteration we compute the first and second derivatives at the (*old*) location [27-28]. We compute the step delta using Halley’s rule [30] and the new sample update [32-34].

```
 0i 57.273   7.60797477
 1n 50.381   0.35845103
 2n 49.883   0.00013751
 3n 49.882  -0.00000000
```

In the graph and table above, the method appears super fast, ie. cubic convergence, and also surprisingly stable everywhere along the curve. Of course, since we use both the first and second derivatives, especially in this compacted lambda form, the number of curve evaluations is seven per iteration, though with some optimization it can be reduced to perhaps three or four.
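Stripped of the Rhino geometry, the same Halley update rule with finite-difference derivatives can be tried on any scalar function. Here is a minimal sketch, assuming a hypothetical test function cos(t) − t (not from the tutorial), whose single root is the well-known value near 0.739:

```python
import math

resolution = 1e-6
f = lambda t: math.cos(t) - t                          # hypothetical objective
d = lambda t: (f(t + resolution) - f(t)) / resolution  # finite-difference 1st derivative
s = lambda t: (d(t + resolution) - d(t)) / resolution  # finite-difference 2nd derivative

def halley(t, tolerance=1e-8, iterations=100):
    for i in range(1, iterations + 1):
        y, yfst, ysec = f(t), d(t), s(t)
        # Halley's rule, the same tdel expression as in the script above
        t = t - 2.0 * (y * yfst) / (2.0 * yfst * yfst - y * ysec)
        if abs(f(t)) <= tolerance:
            return t, i
    return None, iterations

root, steps = halley(0.0)
```

Even with the noise that forward differences introduce into the second derivative, the iteration count stays in the single digits, consistent with the fast convergence reported above.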

# Practical Examples

Enough with the theory; let’s now look at some progressively more challenging applications. We will start with the computation of the closest point to a curve. The objective of this example is to demonstrate how to formulate geometric problems as numerical computing processes.

The following script implements the *curve.ClosestPoint( point )* method using an iterative solver based on the false position method (feel free to use a more advanced method). First we wrap the method into a function *Solve( function, domain, tolerance )* [11] to make it reusable. The first parameter is the objective function we invoke to evaluate *y*-values, the second is the domain, a list with the low and high *x*-values, and the third is the tolerance, ie. our termination clause. Initially, we evaluate the low and high *y*-values [12-16] and immediately check if there is any chance of finding a solution there using the intermediate value clause, ie. the sign flipping rule [18-20]. We proceed with a minimal implementation of the method [29-37], emitting the progress if the debug global is set [26-27]. The *Fail( message, result )* function [5-9] is a shorthand for aborting functions with a result value and some printout, if the global debug is enabled.

```python
from Rhino.Geometry import *
from math import *

debug = True

def Fail( message, result = False ):
    global debug
    if( debug ):
        print( message )
    return result

def Solve( function, domain, tolerance = 0.0001 ):
    xmin = domain[0]
    ymin = function( xmin )
    xmax = domain[1]
    ymax = function( xmax )

    if( ymin * ymax >= 0 ):
        return ( Fail( "Seed\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

    for _ in range( 1000 ):
        xmid = ( ymin * xmax - ymax * xmin ) / ( ymin - ymax )
        ymid = function( xmid )

        if( debug ):
            print( xmid, ymid )

        if( abs( ymid ) <= tolerance ):
            return ( True, xmid )

        if( ymin * ymid < 0 ):
            xmax = xmid
            ymax = ymid
        else:
            xmin = xmid
            ymin = ymid

    return ( Fail( "Loop\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

def ClosestPoint( curve, point ):
    function = lambda t : curve.FrameAt( t )[1].XAxis * ( curve.PointAt( t ) - point )
    return Solve( function, [curve.Domain.Min, curve.Domain.Max] )

success, parameter = ClosestPoint( curve, node )

if( success ):
    geometry = Line( node, curve.PointAt( parameter ) )
else:
    Fail( "Computation Failed" )
```
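To see the solver in isolation, here is the same false position scheme with the Rhino types stripped away, exercised on a plain function. The lowercase names and the test function t² − 2 are our own, for illustration only:

```python
def solve(function, domain, tolerance=1e-6):
    # false position: intersect the secant through the bracket with the x-axis
    xmin, xmax = domain
    ymin, ymax = function(xmin), function(xmax)
    if ymin * ymax >= 0:                 # intermediate value clause must hold
        return (False, -1)
    for _ in range(1000):
        xmid = (ymin * xmax - ymax * xmin) / (ymin - ymax)
        ymid = function(xmid)
        if abs(ymid) <= tolerance:
            return (True, xmid)
        if ymin * ymid < 0:              # sign change in the left half
            xmax, ymax = xmid, ymid
        else:                            # sign change in the right half
            xmin, ymin = xmid, ymid
    return (False, -1)

ok, x = solve(lambda t: t * t - 2.0, [0.0, 2.0])   # root of t^2 - 2, ie. sqrt(2)
```

The same success/result tuple convention is used, so the call site can distinguish a converged root from a seed or iteration failure.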

Before we look at the *ClosestPoint( curve, point )* function, let’s talk about geometry. We can evaluate a Frenet-Serret frame (aka coordinate system, or plane in Rhino parlance) [W] at each point along a curve: its origin is the point at the parameter along the curve, its X-axis is the tangent vector (*t*), its Y-axis is the normal vector (*n*) pointing towards the center of curvature, and its Z-axis is the bi-normal vector (*b*), the result of a cross product between the other two. All three axes are orthogonal to one another, but their magnitudes are another story.

An arbitrary point (*q*) having a point (*p*) at parameter (*s*) along a curve as its closest point means that (*q*) is on the line from (*p*) along the normal direction (*n*), or equivalently if we project (*q*) on the tangent (*t*) line from (*p*), following the normal direction (*n*), it coincides with point (*p*).

We want to construct an expression that captures the essence of this geometric idea numerically and also gives it a sense of directionality, so that the solver can home in on the solution. Directionality is important here because if we just measured Euclidean distances, which are always positive by definition, we could not use the numerical solver: the intermediate value clause would always fail.

Here is a straightforward formulation that works great: we define the vector (*u*) from the currently evaluated point (*p*) on the curve to the test point (*q*). We take the dot product between (*u*) and the tangent vector (*t*), which expresses the signed projected distance (*d*). If (*d*) is positive it means that the point (*q*) is forward of point (*p*) along the tangent, otherwise behind it, as in the figure below. If it is zero, then (*p*) is the closest point to (*q*), and there you go.
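In coordinates, this directionality test is just a dot product. A tiny self-contained 2D sketch (the function name and the numbers are our own, not from the tutorial’s figures):

```python
def signed_projected_distance(p, t, q):
    # dot product of the unit tangent t with the vector u = q - p
    ux, uy = q[0] - p[0], q[1] - p[1]
    return t[0] * ux + t[1] * uy

p, t = (0.0, 0.0), (1.0, 0.0)    # point on curve, unit tangent along +X

print(signed_projected_distance(p, t, (3.0, 5.0)))    # 3.0  : q is ahead of p
print(signed_projected_distance(p, t, (-2.0, 5.0)))   # -2.0 : q is behind p
print(signed_projected_distance(p, t, (0.0, 5.0)))    # 0.0  : p is the closest point to q
```

The y-coordinate of q never affects the result, which is exactly why the residual crosses zero at the closest point regardless of how far q sits off the curve.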

Going back to the code, we see a compacted implementation of this logic using a lambda expression. The purpose of the lambda is to format the evaluation method as a function of only one parameter, capturing the curve and test point within its body, without resorting to weird global variable hacks or contaminating the solver method with user variables. It uses the *curve.FrameAt( parameter )* function which returns two values (*success, Plane*). Here we assume success by default, take the (*Plane*) out of the tuple using the [*1*] subscript, and then its X-axis which represents the tangent vector. We take the dot product with the vector from the test point to the point on the curve, with another, rather wasteful, evaluation of the curve at the same parameter. Finally, we call the solver function, passing the lambda and the entire curve domain. You may ask: why don’t we just use the ready-made closest point function?

```python
def ClosestPoint( curve, point ):
    function = lambda t : curve.FrameAt( t )[1].XAxis * ( curve.PointAt( t ) - point )
    return Solve( function, [curve.Domain.Min, curve.Domain.Max] )
```

The objective of this contrived example was to demonstrate: (a) how to formulate geometric problems as numerical computing problems; and (b) to introduce anonymous functions, or lambdas [W], as a way to package evaluation methods.

## Cladding Setting-Out

It is often the case when we set out building envelopes, say with a unitized curtain wall facade, that we use a grid which governs the location of the building components. In the case of flat walls this is very simple: we just divide the line into equally spaced units. On curved facades it is a bit of a challenge due to curvature. A geometric way to approach this is to use a compass and scribe circles, splitting the curve into equal chord lengths (see figure below).

The disappointing aspect of this method is that the last unit does not fit in the curve, so we need a special short unit. In straight wall scenarios we just split the difference in two and have special corner bays at both ends instead of one oddball. Here we will first develop an equivalent of the *curve.DivideByChordLength( distance )* function, and then we will augment it to evenly space the units along the curve, leaving two same-sized edge cases.

```python
from Rhino.Geometry import *
from math import *

debug = False
geometry = []

def Fail( message, result = False ):
    global debug
    if( debug ):
        print( message )
    return result

def Solve( function, domain, tolerance = 0.0001 ):
    xmin = domain[0]
    ymin = function( xmin )
    xmax = domain[1]
    ymax = function( xmax )

    if( ymin * ymax >= 0 ):
        return ( Fail( "Seed\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

    for _ in range( 1000 ):
        xmid = ( ymin * xmax - ymax * xmin ) / ( ymin - ymax )
        ymid = function( xmid )

        if( abs( ymid ) <= tolerance ):
            return ( True, xmid )

        if( ymin * ymid < 0 ):
            xmax = xmid
            ymax = ymid
        else:
            xmin = xmid
            ymin = ymid

    return ( Fail( "Loop\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

def PointAtDistanceAhead( curve, distance, source ):
    point = curve.PointAt( source )
    function = lambda t : point.DistanceTo( curve.PointAt( t ) ) - distance
    return Solve( function, [source, curve.Domain.Max] )

def DivideCurveByDistance( curve, distance, source = None ):
    if( source == None ):
        source = curve.Domain.Min

    parameters = [source]

    while( True ):
        success, target = PointAtDistanceAhead( curve, distance, parameters[-1] )
        if( not success ):
            break
        parameters.append( target )

    return parameters

def EvaluateCurve( curve, parameters ):
    return [curve.PointAt( parameter ) for parameter in parameters]

points = EvaluateCurve( curve, DivideCurveByDistance( curve, distance ) )

geometry = ( points +
             [PolylineCurve( points )] +
             [Circle( point, distance ) for point in points] )
```

We first focus on the solver evaluation method *PointAtDistanceAhead( curve, distance, source )* [41-45]. The function requires a curve, the chord length and the parameter along the curve from which to chop a chord moving forward. We first evaluate the reference start point at the supplied parameter [42]. Then we define a lambda function which returns the distance between the start point and any point the solver selects, less the desired chord length [43-44]. This function clearly has positive and negative cases: if the point is too close we get a negative residual and if too far a positive one. Thus, it works with the intermediate value clause of the solver. We then pass it to the solver along with a domain from the current start parameter to the end of the curve’s domain [45]. The process simulates the incremental chord cutting process using circles but without computing circle-curve intersections. Instead you can think of it as sliding a point forward along the curve until its distance from a reference point equals the desired chord length.

```python
def PointAtDistanceAhead( curve, distance, source ):
    point = curve.PointAt( source )
    function = lambda t : point.DistanceTo( curve.PointAt( t ) ) - distance
    return Solve( function, [source, curve.Domain.Max] )
```

The function *DivideCurveByDistance( curve, distance, source )* [47-56] applies the curve division scheme repeatedly and returns a list containing the parameters where the curve was chopped. If the source parameter was not supplied, we initialize it with the start of the curve’s domain [48-49]. We then initialize the list of parameters with the requested starting parameter [50]. The while-loop calls the point-at-distance function starting from the last element pushed into the parameters list [52-53]. If the method fails, we assume it is because there are no more available points, and exit the loop [54]. This is a sloppy assumption but let’s roll with it. If the solver succeeds, the newly found parameter is pushed into the list so that the next iteration can continue from there [55]. Finally, we return the list [56].

```python
def DivideCurveByDistance( curve, distance, source = None ):
    if( source == None ):
        source = curve.Domain.Min

    parameters = [source]

    while( True ):
        success, target = PointAtDistanceAhead( curve, distance, parameters[-1] )
        if( not success ):
            break
        parameters.append( target )

    return parameters
```
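The same sliding-point chord cutting can be tested without Rhino by standing in a simple parametric curve for the spline. Here is a sketch assuming a hypothetical curve t ↦ (t, sin t) on [0, 10] and a chord length of 1.5; all lowercase names are our own:

```python
import math

def solve(function, domain, tolerance=1e-6):
    # same false position scheme as the Solve function above
    xmin, xmax = domain
    ymin, ymax = function(xmin), function(xmax)
    if ymin * ymax >= 0:                       # intermediate value clause
        return (False, -1)
    for _ in range(1000):
        xmid = (ymin * xmax - ymax * xmin) / (ymin - ymax)
        ymid = function(xmid)
        if abs(ymid) <= tolerance:
            return (True, xmid)
        if ymin * ymid < 0:
            xmax, ymax = xmid, ymid
        else:
            xmin, ymin = xmid, ymid
    return (False, -1)

point_at = lambda t: (t, math.sin(t))          # stand-in parametric curve
dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])

def point_at_distance_ahead(distance, source, tmax=10.0):
    point = point_at(source)
    # residual: negative while too close, positive once too far
    return solve(lambda t: dist(point, point_at(t)) - distance, [source, tmax])

def divide_by_distance(distance, tmin=0.0):
    parameters = [tmin]
    while True:
        success, target = point_at_distance_ahead(distance, parameters[-1])
        if not success:                        # no more room: stop chopping
            break
        parameters.append(target)
    return parameters

parameters = divide_by_distance(1.5)
```

Every consecutive pair of division points ends up exactly one chord length apart; the final leftover segment is shorter, which is the oddball end unit discussed earlier.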

The function *EvaluateCurve( curve, parameters )* transforms the parameters into points using list comprehension [H] notation [58-59]. Eventually, we divide the curve and convert parameters to points [61-62], and emit the points, the setting-out segments and visual indicator circles [64-66]. The objective of this introductory section was to demonstrate that we can deeply embed numerical solvers into regular functions, creating complex call graphs, eg. DivideCurveByDistance → PointAtDistanceAhead → Solve.

```python
def EvaluateCurve( curve, parameters ):
    return [curve.PointAt( parameter ) for parameter in parameters]

points = EvaluateCurve( curve, DivideCurveByDistance( curve, distance ) )

geometry = ( points +
             [PolylineCurve( points )] +
             [Circle( point, distance ) for point in points] )
```

### Same-Sized Edge Bays

In the following section we attack the problem of evenly sized bays at the start and end of the setting-out. The key challenge is that we cannot just run the regular chord cutting algorithm once, measure the leftover length/chord, divide it in half and shift the whole chopping process forward by that amount. The reason is that we have no assurance that curvature changes will not cause the process to overshoot. For curves with mild curvature, often used for building envelopes, this may work as a ballpark. But for more complex curves it can fail miserably because it disregards curvature.

Instead we will think about the process in exactly the same way we approached chord cutting: we imagine a sliding point along the curve from which we start segmenting the curve into equal chords until there is no more space. We measure the leftover at the start, from the curve’s start point to the sliding point, and at the end, from the last successful chord cut to the curve’s end point. Somewhere along the curve there must be a setting-out start point where the two leftovers are equal. Thus we will use the solver once again and let it crunch the numbers to find this sliding start point’s parameter.

```python
from Rhino.Geometry import *
from math import *

debug = False
geometry = []

def Fail( message, result = False ):
    global debug
    if( debug ):
        print( message )
    return result

def Solve( function, domain, tolerance = 0.0001 ):
    xmin = domain[0]
    ymin = function( xmin )
    xmax = domain[1]
    ymax = function( xmax )

    if( ymin * ymax >= 0 ):
        return ( Fail( "Seed\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

    for _ in range( 1000 ):
        xmid = ( ymin * xmax - ymax * xmin ) / ( ymin - ymax )
        ymid = function( xmid )

        if( abs( ymid ) <= tolerance ):
            return ( True, xmid )

        if( ymin * ymid < 0 ):
            xmax = xmid
            ymax = ymid
        else:
            xmin = xmid
            ymin = ymid

    return ( Fail( "Loop\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

def PointAtDistanceAhead( curve, distance, source ):
    point = curve.PointAt( source )
    function = lambda t : point.DistanceTo( curve.PointAt( t ) ) - distance
    return Solve( function, [source, curve.Domain.Max] )

def DivideCurveByDistance( curve, distance, source = None ):
    if( source == None ):
        source = curve.Domain.Min

    parameters = [source]

    while( True ):
        success, target = PointAtDistanceAhead( curve, distance, parameters[-1] )
        if( not success ):
            break
        parameters.append( target )

    return parameters

def EvaluateCurve( curve, parameters ):
    return [curve.PointAt( parameter ) for parameter in parameters]

def CurveLeftover( curve, distance, source ):
    _half = curve.PointAtStart.DistanceTo( curve.PointAt( source ) )

    while( True ):
        success, target = PointAtDistanceAhead( curve, distance, source )
        if( not success ):
            break
        source = target

    half_ = curve.PointAtEnd.DistanceTo( curve.PointAt( source ) )
    print( _half , half_ )
    return _half - half_

def DivideCurveEvenly( curve, distance ):
    success, maximum = PointAtDistanceAhead( curve, distance, curve.Domain.Min )
    if( not success ):
        return Fail( "Seed Fail" )

    function = lambda t : CurveLeftover( curve, distance, t )
    success, source = Solve( function, [curve.Domain.Min, maximum] )
    if( not success ):
        return Fail( "Split Fail" )

    params = DivideCurveByDistance( curve, distance, source )
    points = EvaluateCurve( curve, params )

    global geometry
    geometry += points

DivideCurveEvenly( curve, distance )
```

We first define a new function *CurveLeftover( curve, distance, source )* [61-70]. Note that it is actually a stripped down version of the *DivideCurveByDistance* function [47-56]. Its goal is to perform a chord division of the curve from a supplied starting parameter and return the difference between the front and back leftover segment lengths, (*_half*) and (*half_*). This formulation captures the idea that the two terminal bay lengths need to be equal, ie. their difference is zero. Note that the numerical expression has directionality, ie. it can take both positive and negative values, as the solver requires.

```python
def CurveLeftover( curve, distance, source ):
    _half = curve.PointAtStart.DistanceTo( curve.PointAt( source ) )

    while( True ):
        success, target = PointAtDistanceAhead( curve, distance, source )
        if( not success ):
            break
        source = target

    half_ = curve.PointAtEnd.DistanceTo( curve.PointAt( source ) )
    print( _half , half_ )
    return _half - half_
```

The function *DivideCurveEvenly( curve, distance )* [72-85] implements the desired objective by first computing a parameter along the curve at a distance equal to a bay’s size [73-75]. The reason for this is to reduce the search domain and optimize computation: we know that if we were to shift the entire cladding setting-out along the curve, we wouldn’t need to shift by more than a bay’s length anyway, so why search across the entire curve’s domain? We then create a lambda which captures the curve and bay size and returns the leftover residual we want to zero [77-79]. Once the solver has found the parameter along the curve from which dividing by equal chords ends up with equally sized edge bays, we pass it to the regular chord division function and plot the points [81-85].

```python
def DivideCurveEvenly( curve, distance ):
    success, maximum = PointAtDistanceAhead( curve, distance, curve.Domain.Min )
    if( not success ):
        return Fail( "Seed Fail" )

    function = lambda t : CurveLeftover( curve, distance, t )
    success, source = Solve( function, [curve.Domain.Min, maximum] )
    if( not success ):
        return Fail( "Split Fail" )

    params = DivideCurveByDistance( curve, distance, source )
    points = EvaluateCurve( curve, params )

    global geometry
    geometry += points
```
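The nested structure, an outer solver zeroing a leftover difference that is itself computed by inner chord-cut solvers, can be replayed without Rhino on a stand-in parametric curve. A sketch assuming a hypothetical, gently curved facade t ↦ (t, 0.2·sin t) on [0, 10] and a bay size of 1.5; all lowercase names, and the choice to size the search window from the initially measured leftover rather than one full bay, are our own simplifications:

```python
import math

TMIN, TMAX = 0.0, 10.0
point_at = lambda t: (t, 0.2 * math.sin(t))    # stand-in, gently curved facade
dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])

def solve(function, domain, tolerance=1e-9):
    # false position, same shape as the Solve function above
    xmin, xmax = domain
    ymin, ymax = function(xmin), function(xmax)
    if ymin * ymax >= 0:
        return (False, -1)
    for _ in range(1000):
        xmid = (ymin * xmax - ymax * xmin) / (ymin - ymax)
        ymid = function(xmid)
        if abs(ymid) <= tolerance:
            return (True, xmid)
        if ymin * ymid < 0:
            xmax, ymax = xmid, ymid
        else:
            xmin, ymin = xmid, ymid
    return (False, -1)

def ahead(distance, source):
    # inner solver: slide forward until the chord equals `distance`
    point = point_at(source)
    return solve(lambda t: dist(point, point_at(t)) - distance, [source, TMAX])

def divide(distance, source):
    parameters = [source]
    while True:
        success, target = ahead(distance, parameters[-1])
        if not success:
            break
        parameters.append(target)
    return parameters

def leftover(distance, source):
    # front leftover minus back leftover: the residual the outer solver zeroes
    front = dist(point_at(TMIN), point_at(source))
    back = dist(point_at(TMAX), point_at(divide(distance, source)[-1]))
    return front - back

distance = 1.5
back0 = -leftover(distance, TMIN)           # leftover when starting flush at TMIN
success, limit = ahead(0.6 * back0, TMIN)   # small window that still straddles the balance point
success, start = solve(lambda t: leftover(distance, t), [TMIN, limit], 1e-4)
parameters = divide(distance, start)
front = dist(point_at(TMIN), point_at(start))
back = dist(point_at(TMAX), point_at(parameters[-1]))
```

The deliberately narrow window keeps the bracket clear of the parameter where the bay count would drop by one, which would make the leftover residual jump discontinuously.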

The objective of this example was to demonstrate that we can actually nest solvers: the outer setting-out shifting solver uses an internal iterative solver to compute chordal division leftovers. We have seen that creating nested loops increases computational time complexity polynomially, ie. from linear to quadratic to cubic etc. This is why it is worth optimizing by reducing the search domains to what actually makes sense. The example also shows quite nicely how the lambda converted a function with multiple parameters into a function with one parameter by capturing local variables, ie. constructing a closure [W].
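To make the closure mechanics concrete, here is a minimal stand-alone sketch (names and numbers are our own): a factory function bakes the curve, the chord length and the start point into a single-parameter residual, which is exactly the shape a solver expects.

```python
def chord_residual(point_at, distance, source):
    # build and return a one-parameter function; point_at, distance and
    # the evaluated start point p stay captured in the closure
    p = point_at(source)
    return lambda t: ((point_at(t)[0] - p[0]) ** 2 +
                      (point_at(t)[1] - p[1]) ** 2) ** 0.5 - distance

line = lambda t: (t, 0.0)              # hypothetical flat "curve"
f = chord_residual(line, 3.0, 2.0)     # everything but t is baked in

print(f(5.0))   # 0.0  : exactly one chord ahead
print(f(4.0))   # -1.0 : too close
print(f(7.0))   # 2.0  : too far
```

Nothing global is touched: the returned lambda carries its own copies of the captured variables, so several residuals for different start points can coexist safely.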

## Airport Pier Layout

When I was a young boy, I had the amazing opportunity to work on a competition for the design of a new airport terminal. You see, there is something quite fascinating about the shape of airport terminals which has to do with their operational characteristics. Here we will look at a challenge faced back then as an advanced example of numerical computing.

The perimeter of an airport terminal, at least on its air-side, needs to be as long and smooth as possible in order to allow as many aircraft as possible to park. Aircraft cannot run their engines at full power too close to the terminal, for noise and safety reasons among others, which makes them very difficult to maneuver. So ideally they come in a straight line and back off the same way, ie. the opposite of three-point car parking. This is of course a gross simplification, but it explains why most airports in the world have rather simple linear shapes and preferably no sharp corners.

You can also see that convex shapes, in the sense of the air-side, are pretty good too. For instance, in the figure below, we bent the ends upwards without changing the horizontal span compared to the figure above, and by increasing the perimeter we can fit one additional aircraft. Moreover, we did not cause any issue with the approach path of the aircraft either: they can come straight to the terminal along the normal direction of the curve and back off the same way. You can also see that the setting-out for the airport terminal bays can be achieved using the chord cutting algorithm we just developed.

On the other hand, concave shapes, again in the sense of the air-side, are more tricky. If we use the chord cutting algorithm we end up with pretty bad results (see figure below). We need to account for sufficient spacing between bays such that aircraft not only avoid colliding but, more generally, do not block one another’s approach.

The figure below shows what we want to achieve: there is sufficient space between bays for both parking and approach. Before we look at the solution it is worth considering some possible quick geometric constructions we may employ, even hacks.

One idea is to take the pier curve, offset it towards the air-side by the maximum aircraft depth, and then use our trusty chord cutting algorithm to cut bays. This can work amazingly well as long as the pier curve is simple. However, offsetting is a geometric operation [W] that can change shapes topologically, ie. if you try to offset a bow-tie polygon it may break into two triangles.

Another way to look at it is incrementally, say from left to right: first we place the first aircraft bay, then we project its outermost left point onto the curve, then we reflect the bay about the line formed by the projection, and so on and so forth. This can also work very well, approximately, for a wide range of mild-curvature curves, and perfectly fine for lines and circular arcs.

A way to think about the problem numerically is to consider that we have placed the first aircraft (black) along the curve, and that we have a sliding point (*p*) along the curve. From point (*p*) we can chord-cut the curve with the wing-span of the aircraft to create a new bay (magenta). In the figure below this is represented by the X-axis (red) and its orthogonal Y-axis (green). We can now see that we have quite a wide gap, which we wish to close such that the corner point of the previous aircraft (*c*) becomes coincident with the line from point (*p*) along the Y-axis.

All we therefore need is to formulate this error as an intersection problem and let the root finder pick the best parameter along the curve. The geometric formulation is exactly the same as the one used for the closest point solver earlier: we take the dot product between the X-axis and the vector (*u*) from the new bay’s pivot (*p*) to the corner point (*c*) of the previous bay, and aim to zero the signed distance.

The picture below presents a scenario where we have closed the gap such that the two aircraft bounding boxes touch one another. While the second bay is unobstructed by the first one, the opposite is not strictly true. For now we will disregard this detail.

```python
from Rhino.Geometry import *
from math import *

geometry = []

def Fail( message, result = False ):
    print( message )
    return result

def DrawShape( curve, s, t, bounds, shape ):
    source = curve.PointAt( s )
    target = curve.PointAt( t )
    xaxis = target - source
    yaxis = Vector3d.CrossProduct( xaxis, Vector3d.ZAxis )
    yaxis.Unitize( )

    transform = Transform.PlaneToPlane( Plane.WorldXY, Plane( source, xaxis, yaxis ) )
    clone = shape.DuplicateCurve( )
    clone.Transform( transform )
    geometry.append( clone )

    pa = source
    pb = source + yaxis * bounds.Y
    pc = target + yaxis * bounds.Y
    pd = target
    geometry.append( PolylineCurve( [pa, pb, pc, pd, pa] ) )

    return pc #-- forward corner

def Solve( function, domain, tolerance = 0.0001 ):
    xmin = domain[0]
    ymin = function( xmin )
    xmax = domain[1]
    ymax = function( xmax )

    if( ymin * ymax >= 0 ):
        return ( Fail( "Seed\nMin: {0}, {1}\nMax: {2}, {3}".format( xmin, ymin, xmax, ymax ) ), -1 )

    for _ in range( 1000 ):
        xmid = ( ymin * xmax - ymax * xmin ) / ( ymin - ymax )
        ymid = function( xmid )

        if( abs( ymid ) <= tolerance ):
            return ( True, xmid )

        if( ymin * ymid < 0 ):
            xmax = xmid
            ymax = ymid
        else:
            xmin = xmid
            ymin = ymid

    return ( Fail( "Exhausted" ), -2 )

def PointAtDistance( curve, distance, point, domain ):
    function = lambda t : point.DistanceTo( curve.PointAt( t ) ) - distance
    return Solve( function, domain )

def PointAtDistanceAhead( curve, distance, t ):
    point = curve.PointAt( t )
    return PointAtDistance( curve, distance, point, [t, curve.Domain.Max] )

def PointAtDistanceBehind( curve, distance, t ):
    point = curve.PointAt( t )
    return PointAtDistance( curve, distance, point, [curve.Domain.Min, t] )

def ParkingSpot( curve, param, corner, bounds, limit ):
    def Evaluate( curve, source, cp, bounds ):
        success, target = PointAtDistanceAhead( curve, bounds.X, source )
        if( not success ):
            return Fail( "No mo", 0 )
        sp = curve.PointAt( source )
        tp = curve.PointAt( target )
        return ( tp - sp ) * ( cp - sp )

    success, minimum = PointAtDistanceBehind( curve, bounds.X * 0.5, param )
    if( not success ):
        minimum = param

    success, maximum = PointAtDistanceAhead( curve, bounds.X * 2.0, minimum )
    if( not success ):
        maximum = limit
    else:
        maximum = min( maximum, limit )

    success, pivot = Solve( lambda param : Evaluate( curve, param, corner, bounds ), [minimum, maximum] )
    if( not success ):
        geometry.append( Line( curve.PointAt( minimum ), curve.PointAt( maximum ) ) )
        return ( Fail( "oh no" ), pivot )

    return ( True, pivot if pivot > param else param )

def Layout( shape, curve ):
    bounds = shape.GetBoundingBox( True ).Diagonal

    source = curve.Domain.Min
    success, target = PointAtDistanceAhead( curve, bounds.X, source )
    if( not success ):
        return Fail( "Start Failed" )

    corner = DrawShape( curve, source, target, bounds, shape )
    source = target

    success, limit = PointAtDistanceBehind( curve, bounds.X + 0.001, curve.Domain.Max )
    if( not success ):
        return Fail( "Limit Failed" )

    for _ in range( 1000 ):
        success, source = ParkingSpot( curve, source, corner, bounds, limit )
        if( not success ):
            return Fail( "Parking Spot" )

        success, target = PointAtDistanceAhead( curve, bounds.X, source )
        corner = DrawShape( curve, source, target, bounds, shape )
        source = target

        if( source > limit ):
            print( "Completed" )
            break

Layout( shape, curve )
```

The code is quite lengthy but complete. The drawing function [10-31] requires the pier curve, the start and end parameters along it, the bounding dimensions of the aircraft as a Point3d, and its polyline shape. It performs some basic geometric modeling operations such as defining a local coordinate system [11-16]; it copies and transforms the template aircraft shape, drawn at the world XY plane, onto the pier curve [18-23], and draws a bounding rectangle around the aircraft [25-29]. The important point here is that it returns the forward corner of the bounding box, which is used by the solver later [31]. Generally speaking, don’t do that last step, as it is quite lazy of me. Avoid the temptation of allowing visualization code to intermix with geometric logic; just recompute the corner point. Anyway…

```python
def DrawShape( curve, s, t, bounds, shape ):
    source = curve.PointAt( s )
    target = curve.PointAt( t )
    xaxis = target - source
    yaxis = Vector3d.CrossProduct( xaxis, Vector3d.ZAxis )
    yaxis.Unitize( )

    transform = Transform.PlaneToPlane( Plane.WorldXY, Plane( source, xaxis, yaxis ) )
    clone = shape.DuplicateCurve( )
    clone.Transform( transform )
    geometry.append( clone )

    pa = source
    pb = source + yaxis * bounds.Y
    pc = target + yaxis * bounds.Y
    pd = target
    geometry.append( PolylineCurve( [pa, pb, pc, pd, pa] ) )

    return pc #-- forward corner
```

The next new chunk of code is an update of the chord-cutting function [59-74]. Here we expanded the function parameter list by requiring the caller to specify the domain within which to search for the chord cut [59-62]. Earlier we used the always-look-ahead method [64-68], but for this problem we also need to look backwards, so we implemented another variation [70-74]. Both directional functions depend on the general PointAtDistance function, the only difference being whether the domain includes the start or the end of the curve.

```python
def PointAtDistance( curve, distance, point, domain ):
    function = lambda t : point.DistanceTo( curve.PointAt( t ) ) - distance
    return Solve( function, domain )

def PointAtDistanceAhead( curve, distance, t ):
    point = curve.PointAt( t )
    return PointAtDistance( curve, distance, point, [t, curve.Domain.Max] )

def PointAtDistanceBehind( curve, distance, t ):
    point = curve.PointAt( t )
    return PointAtDistance( curve, distance, point, [curve.Domain.Min, t] )
```

The next function to dissect is the *ParkingSpot( )* method [76-111], but because it is quite lengthy we will first look at its nested function *Evaluate( )* [78-88]. The reason for defining an inner function is that it is only used here, so it keeps the global scope clean and the code easier to parse. This is the objective function for the root-finder. It requires the curve, the starting parameter, ie. the sliding point along the curve, the corner point of the previous aircraft bounding box and the size of the aircraft [78]. It first computes the parameter along the curve where we would fit the bay, using the chord cutting solver in its forward marching version [79-80]. Obviously, this function can fail so we need to check [82-83], but worry not, we have all our bases covered as we will see later. You may actually remove the check altogether. Nevertheless, it is always important to consider what happens when the solver evaluation method fails. Returning zero will force the solver to assume it found a solution and exit. Shall we throw an exception instead? Or modify the solver method to expect tuples with success/failure as the result of the evaluation function? I recommend taking preventive measures where possible, and the last option also seems reasonable. Anyway, we then compute the X-axis along the bay [85-86] and return the signed distance, ie. the dot product [88], as explained earlier in the geometry section.

```python
def Evaluate( curve, source, cp, bounds ):
    success, target = PointAtDistanceAhead( curve, bounds.X, source )
    if( not success ):
        return Fail( "No mo", 0 )
    sp = curve.PointAt( source )
    tp = curve.PointAt( target )
    return ( tp - sp ) * ( cp - sp )
```

Now we look at the inner workings of the *ParkingSpot( )* function. It requires the pier curve, the parameter along the curve at which to try finding the next bay, the corner point of the previous bay, the size of the aircraft and the upper limit of the curve’s domain [76]. The upper limit is computed upfront to prevent the evaluation function seen earlier from failing [82-83]. The chord-cutting algorithm fails if there is no more curve to chop, so we can proactively give it sufficient space before it reaches the end of the curve such that it never fails. We later pre-compute the upper limit as one chord back from the end of the curve, which is why we defined the backward searching variation.

As we have also seen before, it is very important when using solvers to give them good domains such that (a) they work in terms of the intermediate value theorem and (b) they don’t look too far from where the actual solution may be. In this spirit we determine the domain of search for the bay finding solver, starting half a bay behind the previous one and going as much as two chords forward of it. The reason for looking back [90-93] is to ensure we have a negative side on the evaluation function so the intermediate value clause works. The reason we look up to twice the size of the aircraft ahead [95-100] is twofold: (a) to reduce the search space so we don’t scan the entire curve, and (b) it is actually quite possible to get a geometric situation where we violate the intermediate value clause and fail. Try using the upper limit directly and you can verify that some pier shapes cause the solver to fail.
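The second point is easy to demonstrate in plain Python: a bracket that is too wide can contain an even number of roots, and then the solver’s seed check rejects it. The function below is a hypothetical stand-in, not the bay residual:

```python
def seed_ok(function, domain):
    # the solver's very first check: the bracket must straddle a sign change
    ymin, ymax = function(domain[0]), function(domain[1])
    return ymin * ymax < 0

f = lambda x: (x - 1.0) * (x - 3.0)   # two roots, at 1 and 3

print(seed_ok(f, [0.0, 4.0]))   # False: both ends positive, the roots hide inside
print(seed_ok(f, [0.0, 2.0]))   # True : a tighter bracket straddles the root at 1
```

Tightening the search domain is thus not only an optimization; it is what makes the sign-flipping rule applicable at all.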

Finally, we can invoke the solver with the inner evaluation function and the computed domain values, and hope it returns a parameter along the curve where the new bay touches the old bay [102-109]. We return the maximum of the computed parameter and the initial pivot [111] because this accounts for both convex and concave pier-curve scenarios. Do test returning the pivot value unconditionally to verify the claim.

```python
success, minimum = PointAtDistanceBehind( curve, bounds.X * 0.5, param )
if( not success ):
    minimum = param

success, maximum = PointAtDistanceAhead( curve, bounds.X * 2.0, minimum )
if( not success ):
    maximum = limit
else:
    maximum = min( maximum, limit )

success, pivot = Solve( lambda param : Evaluate( curve, param, corner, bounds ),
                        [minimum, maximum] )
if( not success ):
    geometry.append( Line( curve.PointAt( minimum ), curve.PointAt( maximum ) ) )
    return ( Fail( "oh no" ), pivot )

return ( True, pivot if pivot > param else param )
```

Now for the final stretch, the *Layout( )* function [113-147], which is another big one. First we compute the aircraft size [114], assuming it is placed at the world XY plane origin with the wing-span aligned along the X-axis. Then we place the first bay using the standard chord-cutting solver [116-121] and draw the aircraft [123]. Then we take a chord back from the end of the curve as the safety limit [126-130] explained earlier. Then, while the curve has enough perimeter left to place a bay, we compute the parking spot [133-137], then we wastefully recompute the bay end point [139-140] and draw it [142]. We abort the loop when the bay has gone beyond our safety limit [145-147].

```python
def Layout( shape, curve ):
    bounds = shape.GetBoundingBox( True ).Diagonal
    source = curve.Domain.Min

    success, target = PointAtDistanceAhead( curve, bounds.X, source )
    if( not success ):
        return Fail( "Start Failed" )

    corner = DrawShape( curve, source, target, bounds, shape )
    source = target

    success, limit = PointAtDistanceBehind( curve, bounds.X + 0.001, curve.Domain.Max )
    if( not success ):
        return Fail( "Limit Failed" )

    for _ in range( 1000 ):
        success, source = ParkingSpot( curve, source, corner, bounds, limit )
        if( not success ):
            return Fail( "Parking Spot" )

        success, target = PointAtDistanceAhead( curve, bounds.X, source )
        corner = DrawShape( curve, source, target, bounds, shape )
        source = target

        if( source > limit ):
            print( "Completed" )
            break
```

That took longer than expected to document! One hour of coding, ten of writing… Anyway, the algorithm developed works pretty well and it is also fairly stable. There are some cases of extreme concavity where, as mentioned earlier, the next bay obstructs the previous one, but this can be solved with a better geometric formulation, left as an exercise perhaps. The objective of the example was to show the black magic eventually required when we attack involved problems. Key points to take from this are: (a) nesting and mixing solvers is perfectly fine, (b) coming up with an adequate formulation is the trickiest part, requiring intuition, (c) working out the domain boundaries is hyper critical!

## Conclusions

This was another exhausting presentation of a computational topic. We have just scratched the surface of numerical computing. Solving univariate root-finding problems, in a geometric sense intersection problems, is only a small class of things we can do with numerical computing. Here we looked at the case of trying to find special points where things intersect. Another broad class of looking for special points includes problems of optimization ie. finding minima and maxima, or so-called stationary points, which will be covered in a later post. We have only looked at one-parameter, one-result type of problems, whereas the vast majority involve multiple inputs and outputs; again a topic for the next post. Finally, we looked at a family of problems for which we can draw continuous and/or differentiable ie. smooth functions to express the objective. But there are problems where we do not have such luxury because we need to work with categorical parameters ie. they take discrete and/or finite values; yet another topic to cover in the future.

The key takeaway of the practical examples was to build complexity progressively, towards seeing that sometimes problems are way easier to approach conceptually as numerical instead of geometric constructions. We just need to imagine a sliding value, shifted along a domain by the solver, and express some criteria to help the algorithm home in on the solution. Eventually, once the fear of the technicalities involved dissipates, it becomes obvious that the trickiest part was the geometric logic, which by the way was rather trivial in most cases. Hope this was informative!