Saturday, December 28, 2013

Chaos


Most things that we encounter in our lives follow some sort of predictable pattern. Microwaving something for 26 seconds instead of 25 makes it just a little bit hotter, though we might not be able to really tell the difference. Depress the accelerator of your car just a little more and you go just a little faster. Move your mouse to the left and the little cursor on the screen goes ... left.

Imagine a world in which things routinely were not at all predictable. Where microwaving a frozen burrito for 23 seconds sort of thaws it out, for 24 seconds causes the cheese to boil, and for 25 seconds seems to make it colder than just frozen. Where pressing the accelerator sometimes makes you go faster, sometimes slows you down, and sometimes changes the radio station. Where moving your mouse left makes the cursor go left, except for the times when it goes right, up, down, or clicks the "Buy Now" button.

We'd rightly call living in such a world "chaos."

In mathematics, we revel in situations where things behave predictably. Calculus is built upon continuous functions. A function \(f\) is continuous if, when \(x\) and \(y\) are "close" in value to each other, \(f(x)\) and \(f(y)\) are "close" in value to each other. Since 25 and 26 are "close" in value, the temperatures of frozen burritos after being microwaved for 25 or 26 seconds are about the same.

How does all of this relate to the animated gif above?

There is a famous algorithm called Newton's Method that accurately approximates solutions to equations. Given a function \(y=f(x)\), Newton's Method allows us to approximate a solution to an equation like \(f(x)=0\). One starts with an initial guess \(x_0\), and Newton's Method returns an approximation \(\hat x\) that is usually very good; that is, \(f(\hat x)\approx 0\).
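
For the curious, here is a minimal sketch of the iteration in Mathematica (the language we use for our images); the tolerance, the iteration cap, and the example function are placeholder choices for illustration, not the settings behind the gif.

newtonStep[f_, x_] := x - f[x]/f'[x]   (* one step of Newton's Method *)
newton[f_, x0_] := FixedPoint[newtonStep[f, #] &, N[x0], 100,
  SameTest -> (Abs[#1 - #2] < 10^-10 &)]

newton[#^2 - 2 &, 1]   (* returns 1.41421..., an approximate solution of x^2 - 2 = 0 *)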

We can think of Newton's Method as a function \(N\) itself, where \(N(x_0) = \hat x\). Returning to the "predictability" theme of this post, we would expect two things to be true of \(N\):
  1. If \(x_0\approx y_0\), then \(N(x_0) \approx N(y_0)\). That is, initial guesses that are close to each other return good approximations that are also close to each other. In other words, initial guesses of 3 and 3.001 should return the same approximate solution.
  2. If \(\overline x\) is a solution to \(f(x)=0\), that is, \(f(\overline x)=0\), and if \(x_0\) is close to \(\overline x\), then we'd expect \(N(x_0)\) to be really close to \( \overline x\). Without all the fancy notation, suppose 5 is a solution to \(f(x)=0\); that is, \(f(5)=0\). We would expect that the initial guess of 5.1 would return something really, really close to 5.
Those are reasonable expectations. And they can fail spectacularly. 

Consider the complex plane, where we plot the complex number \(a+bi\) as the point \((a,b)\) on the familiar Cartesian plane, and consider the function \(f(z) = z^5-1\). Since \(f\) is a polynomial of degree 5, we know \(f(z)=0\) has 5 solutions. One of them is real, \(z=1\), and the other 4 are non-real complex numbers. 

Apply Newton's Method to thousands of points in the complex plane and color each point according to the solution of \(f(z)=0\) Newton's Method returns. If Newton's Method returns a solution near \(z=1\), color the point red. If it returns one of the other 4 complex solutions, color the point purple, green, yellow or blue. 
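
How might one make such a picture? Here is a minimal sketch, not our actual code: the grid spacing, the 50-step cap, and the tolerance are choices we made just for illustration.

f[z_] := z^5 - 1;
step[z_] := z - f[z]/(5 z^4);          (* one Newton step for this f *)
roots = z /. NSolve[f[z] == 0, z];     (* the five solutions of f(z) = 0 *)

rootAndSteps[z0_] := Module[{z = N[z0], n = 0},
  While[Abs[f[z]] > 10^-8 && n < 50, z = step[z]; n++];
  {First[Ordering[Abs[z - #] & /@ roots, 1]], n}]   (* nearest root, and how many steps it took *)

(* color each grid point by the root Newton's Method finds there;
   which index corresponds to z = 1 depends on NSolve's ordering *)
ArrayPlot[
 Table[First[rootAndSteps[x + I y]], {y, -1.99, 1.99, .02}, {x, -1.99, 1.99, .02}],
 ColorRules -> {1 -> Red, 2 -> Purple, 3 -> Green, 4 -> Yellow, 5 -> Blue}]

The second entry returned by rootAndSteps, the step count, is what controls the brightness in the gifs discussed further below.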

The animated gif above starts with a view of the complex plane with corners at \(-2-2i\) and \(2+2i\). Note how large regions of the plane behave nicely. The big patch of red to the right means lots of points near each other all give the solution \(z=1\). (This means Expectation #2 is holding up: initial guesses near \(z=1\) return an approximation of \(z=1\).)

But the crazy part is that there are lots of places where the points very close to each other lead to very different solutions. Between the big regions of red and purple there are regions of blue, yellow and green.

And then we zoom in. Over and over we see that the borders between "large" regions of solid color are actually smaller regions of solid color. This goes on forever - no matter how far you zoom in, you will always find that the border between "large" regions of color is made up of a similar pattern of smaller regions of solid color.

The upshot is this: applying Newton's Method to two points that are really close to each other can lead to completely different solutions. That's chaos.

The following gif shows the first frame of the gif above in a different way. Newton's Method is an algorithm - a repeated set of steps. We stop repeating once we get close enough to a solution. In the gifs above and below, we indicate how many steps it takes to converge to a solution by the brightness of the color. The darker the color, the more steps it takes. Note how some regions stay black - these do not converge within 50 steps. They might with more, but we stopped at that number in the picture below.



Note how some points converge very quickly - even points that are not close to the solution they converge to.

There are lots of ways to illustrate chaotic behavior. We picked a common one above: showing convergence regions of Newton's Method. Another popular one is shown below.

Start with a quadratic, complex function. We chose \(f(z) = z^2-0.8+0.157i\). For every point in the complex plane, apply this function over and over again. For instance, if we start with \(z_0=1\), applying \(f\) gives \(z_1=f(1) = 0.2+0.157i\). Apply \(f\) again: \(z_2 = f(z_1) = -0.785+0.2198i\). Keep doing this until the results start to get "big." For instance, if you start with \(z_0 = 10\), then \(z_1=f(z_0) = 99.2+0.157i\) and \(z_2 = 9839.82+31.3058i\). Clearly these numbers are getting big. Fast.

Starting with \(z_0=1\) is a different story. After applying \(f\) 250 times, we find \(z_{250} = -1.33+0.575i\), hardly big. Apply \(f\) just a few more times, though, and the result is big: \(z_{255} \approx 1587-2822i\). And once the result is "big," it never gets small again.

Below, we color points in the complex plane according to how many applications of \(f\) it takes to make the result "big." Dark spots get big fast, bright points get big slowly. Pure white spots haven't gotten "big" after 150 iterations of \(f\).
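
Here is a minimal sketch of how such an escape-time picture can be made. The constant is the one named above; treating "big" as \(|z|>2\), the grid spacing, and the 150-iteration cap are our illustrative choices.

c = -0.8 + 0.157 I;
escapeTime[z0_] := Module[{z = N[z0], n = 0},
  While[Abs[z] < 2 && n < 150, z = z^2 + c; n++]; n]   (* how many steps until |z| > 2 *)

(* small counts (fast escape) plot dark; points still small after 150 steps plot white *)
ArrayPlot[
 Table[escapeTime[x + I y], {y, -1.2, 1.2, .01}, {x, -1.6, 1.6, .01}],
 ColorFunction -> GrayLevel, Frame -> False]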




The initial picture is interesting enough, but zooming in tells more of the story. We see that points that get "big" fast are always located near points that do not. Chaos. And as with our "zooming-in" gif at the top of this post, as we zoom in we see similar shapes repeated over and over, smaller and smaller.

Below we show the plane as it gets colored in. We see that points that get big fast are scattered throughout the region, as are points that still haven't gotten big after 150 iterations.



In both of our examples of chaos, there exist points that never converge. In our Newton's Method example, there are points that never converge to a solution of \(f(z)=0\), and in the current example there are points that never get "big." Never. These points form what is called a Julia Set (strictly speaking, in the quadratic example the points that never get big make up the filled Julia set, and its boundary is the Julia set). Tweaking some things gives the infamous Mandelbrot Set.

So while mathematics brings structure to so much of our lives, it also brings chaos. It shows us behavior that, at present, is beyond our ability to predict and fully understand. And that's awesome.


Consider following us on Twitter; we'll tweet only when a new post is up.





Friday, December 13, 2013

Sonic Booms



Why does a sonic boom ... boom? That is, why is it so loud? Above, a plane flies at twice the speed of sound (it's Wonder Woman's plane, which is why you can't see it, just its sound waves), and we can see its sound waves dissipate over time/distance. (The sound waves get less dark to show that the sound energy is dissipating.)

Compare this to a helicopter hovering in one spot:


A person standing at the red dot will hear the helicopter with a constant "loudness," or magnitude. Compare this to the person standing at the red dot when the plane flies by, in the picture at top, at Mach 2. The guy at the red dot hears nothing until ... BOOM! ... and the plane has already passed him by. Then the magnitude decreases.

Below, we have a plane flying at half the speed of sound. Note how the guy at the red dot hears the plane with increasing, then decreasing, magnitude.


The plane is again loudest once it has passed the guy.

Notice that the sound waves are all distinct, as with the helicopter picture. Compare this to the plane flying at Mach 2 above, or the still image of a plane flying at Mach 1.3 below.

See how all the sound waves seem to converge at the red dot? Here is the simplest explanation of a sonic boom that I know, which avoids things like compressed pressure waves, etc.: in a sonic boom, you are hearing the sound of the plane from different places in the sky at the same time. In the above picture, there are 6 circles that go through the red dot, corresponding to the idea that the guy is hearing the sound of the plane from 6 spots in the sky at the same time. Of course, in real life, you don't hear sound from distinct, discrete places, but the concept is the same.
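
If you want to play with this yourself, here is a rough sketch of the kind of picture above (the speed of sound, the Mach number, the pulse spacing, and the fading rule are all just illustrative choices):

c = 1; mach = 2;   (* speed of sound and Mach number; the plane's speed is mach*c *)
(* the pulse emitted at time s is a circle of radius c(t - s), centered where the plane was at time s *)
machPicture[t_] := Graphics[{
   Table[{GrayLevel[(t - s)/(t + 1)], Circle[{mach c s, 0}, c (t - s)]}, {s, 0, t - .5, .5}],
   Red, PointSize[.02], Point[{mach c t, 0}]},
  PlotRange -> {{-5, 25}, {-12, 12}}]

machPicture[10]

The circles pile up along two lines through the plane's current position - the Mach cone mentioned below.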

By the way, the "cone" that is formed by these circles is called a Mach cone. It is an example of a mathematical envelope, which we discuss in a previous post.

Consider following us on Twitter; we'll tweet only when a new post is up.





Friday, December 6, 2013

Mathematical Envelopes


Start with a circle and let two points go around it, one twice as fast as the other. At each step of the way, draw a line to connect the points. You'll get a picture similar to the one above. This set of lines somehow clearly draws a cardioid. This is an example of a mathematical envelope. (For a gallery of envelopes that is way better than this one, see this.)

The official definition of a mathematical envelope is a bit tricky, so we offer just a pseudo-definition. A "family" of lines is an infinite set of lines, sharing some common trait. In the case above, the common trait is that each line goes through two specially chosen points on a circle. (We of course don't show the whole family, just about 100 of the lines.) The envelope of a family \(F\) of lines is a curve \(C\) such that each line is tangent to \(C\) at a point, and that for each point on \(C\), there is a line in \(F\) that is tangent to \(C\) at that point.  
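
If you'd like to make the picture at the top yourself, here is a minimal sketch: connect the point of the circle at angle \(\theta\) to the point at angle \(2\theta\) for many values of \(\theta\) (using 150 chords is an arbitrary choice).

n = 150;   (* number of chords to draw *)
Graphics[{Circle[],
  Table[Line[{{Cos[t], Sin[t]}, {Cos[2 t], Sin[2 t]}}], {t, 0, 2 Pi, 2 Pi/n}]}]

Replacing 2 t with 1.5 t or 4 t produces the variations shown next.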

There are tons of interesting ways of creating envelopes. We start by adjusting the process above. Instead of one point traveling twice as fast as the other, what if one moves only 1.5 times as fast? We get the image below, another cardioid.


One point going around four times as fast as the other gives this image.


You can make your own! When you get some free time, or are wishing you were somewhere else (like when you are in the middle of English class, or a faculty meeting, or getting your molars pulled or spleen removed), pick your own pattern for dot connecting and try it on the image below, with dots helpfully numbered for you. (I highly recommend starting with the "twice as fast" method first, where 1 is connected to 2, 2 to 4, etc. As you go around, note that 30 goes to 60, which is 0 on the diagram, and 31 goes to 62 \(\equiv\) 2, etc.)


If you put pins on a board in the above locations and connect dots with strings, you get string art.

It is a common calculus exercise to note that if \((x,y)\) lies on a circle centered at the origin, then the tangent line there has slope \(-x/y\). So it should be no surprise that drawing a bunch of lines through \((\sin\theta,\cos\theta)\) with slope \(-\sin\theta/\cos\theta\) gives the following image:


One of the fun things about all of this is that one can play. Using Mathematica as we do, it isn't hard to change the slopes of the lines and see what comes out. Changing the above slopes from  \(-\sin\theta/\cos\theta\) to  \(\sin\theta/\cos\theta\), we get this image:


This shape has a name. We don't know what it is. (It may be an astroid.) There are ways of determining what the envelope is, but we won't go into that here.

A famous way of creating envelopes is to start with a known curve, then draw through each point a line perpendicular to the curve at that point. This is called the evolute. (The evolute is also the set of all centers of circles of curvature, if you know what that means.) Below, we start with a parabola, then draw the perpendicular lines:
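
For those who want to reproduce the picture below, here is a minimal sketch; the parabola \(y=x^2\), the spacing of the points, and the length of the drawn normal segments are all choices we made just for illustration.

(* normal segment to y = x^2 at (a, a^2); the direction {-2a, 1} is perpendicular to the tangent direction {1, 2a} *)
normalSegment[a_] := With[{d = 3 Normalize[{-2 a, 1.}]},
  Line[{{a, a^2} - d, {a, a^2} + d}]]

Show[Graphics[Table[normalSegment[a], {a, -2, 2, .05}]],
 Plot[x^2, {x, -2, 2}, PlotStyle -> Red],
 PlotRange -> {{-2, 2}, {-1, 4}}]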


We show one more, starting with an ellipse:


One doesn't have to use a family of lines to create envelopes; any curve will do. Below, we start with a circle \(C\) and a point \(P\) on that circle. Our family of curves is the set of circles with centers on \(C\), each passing through \(P\). 


Another cardioid! Pretty cool.

Again, you can use any kind of curve to create an envelope. In the following image we use a family of parabolas. One can define a parabola with a line, called the directrix, and a point, called a focus. We create a family of parabolas by using the red line as the directrix for all the parabolas; the foci for the parabolas lie on the blue parabola, and are indicated with the yellow dot. The image it creates is our MathGifs logo.


Envelopes are important beyond their ability to make pretty pictures. The picture of the cardioid above is related to the sensitivity of certain microphones to sound. In a later post we'll show how we can understand sonic booms in terms of envelopes. 

Don't forget to print out the pic of the points on the circle above and make your own envelopes. If you come up with something cool, show us in the comment section below. If it is really cool, fold it up, put a stamp on it, and mail it to us. Then your envelope really becomes a ... nevermind. Too meta.

Consider following us on Twitter; we'll tweet only when a new post is up.



Tuesday, November 26, 2013

Translations Through Rotations




In a previous post, we showed a few images in which a shape seems to rotate, though the shape is made up of individual points that actually only translate, that is, move along a straight line. Above, the image gives the impression that each point is moving right to left (a translation), while in fact each point is moving in a circle (rotating).

The most basic way of showing "translation through rotation" is just to line up a bunch of circles (where "a bunch of" could mean "an infinite number of") but only show one point on each circle. There are lots of ways to pick this one point; a basic one is where circle B, one unit to the right of circle A, shows a point 10\(^\circ\) farther around its circle than circle A does. (This idea was given to us by NichG in a comment on the previously mentioned post, who also generated this image.)

Here is our take on this concept:


It may be hard to believe that each point is just moving in a circle, so we provide this image as "proof":


Each of the circles shown above has a center that lies on a horizontal line. We can put these circle centers on other curves to get different results. Below, we put the centers on a sine wave.


This is the basic structure of the image at the top of this post; we just varied the color of each point a little according to its height to give the impression of a white-capped ocean wave.

Other Shapes


What if each point does not follow a circle, but some other path? Below, each point follows a square with sides of unit length.

Again, since it can be hard to believe, we offer this image as proof:


We can also vary the way that we traverse this square. The red dot above travels at a constant speed around the square, covering each side in 1 time unit. In the following image, we traverse the square again at a rate of 1 side length per time unit, but slow down at the corners and speed up in the middle.


Notice how the sharp corners are smoothed out. To show the difference in how the square is traversed, consider the following, where the two dots follow the same path at two different rates.


They arrive at each corner at different times, but the red dot slows down at the corners and speeds up in the middle, while the white dot travels at a constant speed. 

We also considered traveling along a triangle.



We have worked our way down from circles (which have "infinitely many sides"), to squares (with 4 sides) to triangles (3 sides, but you didn't need us to tell you that). To get even lazier with our motion, let's just move dots up and down.



The red dot is moving up/down at a constant speed; varying this so it goes up/down along a sine wave gives ...


... a sine wave. 

We tried one more, moving a dot along a 5 pointed star.



We're not sure there is much point to this one, but we thought it was fun to try.

We would be interested in seeing any images like these that you think look cool. Post it somewhere and send us a link.

Consider following us on Twitter; we'll tweet only when a new post is up.



Our Code

We've gotten requests to show our code in the past. We haven't done it so far mostly out of shyness/embarrassment about the quality of our coding, but we'll show some of it here. We write in Mathematica.

All of the images are created using the same basic workflow. Use a Manipulate environment to test the image; then change the "Manipulate" to "Table" to create a set of frames, then Export these frames to a gif.

Here is the code to do the basic circles at the top. Change f[x_]:=0 to f[x_]:=Sin[x] to get the circles with centers along the sine wave.

f[x_] := 0

Manipulate[
 Show[ListPlot[
   Table[{Cos[i - a], Sin[i - a]} + {i, f[i]}, {i, -5, 5, .5}], 
   PlotRange -> {{-5, 5}, {-1.5, 1.5}}, 
   PlotStyle -> {White, PointSize[.02]}, Axes -> None, 
   AspectRatio -> Automatic, Background -> Black], 
  ParametricPlot[{{Cos[t], Sin[t]}, {Cos[t - 3] - 3, 
     Sin[t - 3]}, {Cos[t + 3] + 3, Sin[t + 3]}}, {t, 0, 2 Pi}, 
   PlotStyle -> {{White}}], 
  ParametricPlot[{{Cos[t], Sin[t]}, {Cos[t - 3] - 3, 
     Sin[t - 3]}, {Cos[t + 3] + 3, Sin[t + 3]}}, {t, 0, 2 Pi}, 
   PlotStyle -> {{Thickness[.005], White, Opacity[.6]}}], 
  ListPlot[{{Cos[-a], Sin[-a]}, {Cos[-3 - a] - 3, 
     Sin[-a - 3]}, {Cos[-a + 3] + 3, Sin[-a + 3]}}, 
   PlotStyle -> {{PointSize[.025], Red}}], Axes -> None], {a, 0, 2 Pi,2 Pi/30}]

basicRotationframes = 
  Table[Show[
    ListPlot[
     Table[{Cos[i - a], Sin[i - a]} + {i, f[i]}, {i, -5, 5, .5}], 
     PlotRange -> {{-5, 5}, {-1.5, 1.5}}, 
     PlotStyle -> {White, PointSize[.02]}, Axes -> None, 
     AspectRatio -> Automatic, Background -> Black], 
    ParametricPlot[{{Cos[t], Sin[t]}, {Cos[t - 3] - 3, 
       Sin[t - 3]}, {Cos[t + 3] + 3, Sin[t + 3]}}, {t, 0, 2 Pi}, 
     PlotStyle -> {{White}}], 
    ParametricPlot[{{Cos[t], Sin[t]}, {Cos[t - 3] - 3, 
       Sin[t - 3]}, {Cos[t + 3] + 3, Sin[t + 3]}}, {t, 0, 2 Pi}, 
     PlotStyle -> {{Thickness[.005], White, Opacity[.6]}}], 
    ListPlot[{{Cos[-a], Sin[-a]}, {Cos[-3 - a] - 3, 
       Sin[-a - 3]}, {Cos[-a + 3] + 3, Sin[-a + 3]}}, 
     PlotStyle -> {{PointSize[.025], Red}}], Axes -> None], {a, 0, 2 Pi - 2 Pi/30, 2 Pi/30}];

Export["08_basic_rotation_with_circle.gif", basicRotationframes]

We used the following to create waves from squares. In the Table command, change square to square2 to follow the square at the other speed.

square[x_] := 
 With[{xx = Mod[x, 4]}, 
  Piecewise[{{{-.5 + xx, .5}, 0 <= xx < 1}, {{.5, .5 - (xx - 1)}, 
     1 <= xx < 2}, {{.5 - (xx - 2), -.5}, 
     2 <= xx < 3}, {{-.5, -.5 + (xx - 3)}, 3 <= xx <= 4}}]]

pacing[x_] := (-Cos[x*Pi] + 1)/2
square2[x_] := 
 With[{xx = Mod[x, 4]}, 
  Piecewise[{{{-.5 + pacing[xx], .5}, 
     0 <= xx < 1}, {{.5, .5 - pacing[(xx - 1)]}, 
     1 <= xx < 2}, {{.5 - pacing[(xx - 2)], -.5}, 
     2 <= xx < 3}, {{-.5, -.5 + pacing[(xx - 3)]}, 3 <= xx <= 4}}]]

(* f is the centerline function defined above; with f[x_] := 0 the squares sit in a horizontal row *)
squarewave1frames = 
  Table[Show[
    ListPlot[Table[square[-i + a] + {i, f[i]}, {i, -8, 8, .2}], 
     PlotRange -> {{-2, 2}, {-1.1, 1.1}}, 
     PlotStyle -> {White, PointSize[.02]}, Axes -> None, 
     Background -> Black, AspectRatio -> Automatic], 
    ParametricPlot[square[x], {x, 0, 5}, PlotStyle -> White], 
    ParametricPlot[square[x], {x, 0, 5}, 
     PlotStyle -> {White, Thickness[.005], Opacity[.6]}], 
    ListPlot[{square[a]}, 
     PlotStyle -> {PointSize[.025], Red}]], {a, 0, 4 - .1, .1}];

Finally, this draws the star:

outerstar = Table[{Sin[t], Cos[t]}, {t, 0, 2 Pi, 2 Pi/5}];
innerstar = Table[.5 {Sin[t], Cos[t]}, {t, 2 Pi/10, 2 Pi, 2 Pi/5}];
starpoints = Riffle[outerstar, innerstar];
starlinedata = Partition[starpoints, 2, 1];

starfunction[xx_] := 
 With[{x = Mod[xx, 10], t = Floor[Mod[xx, 10]]}, 
  starlinedata[[t + 1, 
    1]] + (x - t)*(starlinedata[[t + 1, 2]] - 
      starlinedata[[t + 1, 1]])]

Replace square in the above table with starfunction to see its results. (You'll have to change the plot range, etc.)

We hope this helps. Have fun. Again, show us what you come up with.



Friday, November 15, 2013

Buffon's Needle (and his Noodles)


It seems strange, but it is true: you can approximate \(\pi\) by throwing needles on the floor.

Back in 1733 a French guy named Georges-Louis Leclerc, Comte de Buffon (i.e., "Count Buffon") posed a simple question: If you throw a needle down on a wood plank floor, what is the probability that the needle will cross a line between planks?

It turns out that the probability, if the needle is the same length as the width of the planks, is \(P=2/\pi\approx 0.63662\). That means that if you throw said needle 100 times on the floor, you'd expect about 64 of them to be across a line; throwing 50 needles, like in the animation above, should give about 32 crossings. 

This gives a way of approximating \(\pi\): since \(\pi=2/P\), throw a needle \(n\) times on the floor and count the number of crossings \(c\). This means that \(P\approx c/n\), and so \[\pi = 2/P \approx 2/(c/n)= 2n/c.\]
In the above gif, we find \(\pi \approx 2\cdot 50/32 =3.125.\) That is the best approximation we can arrive at with only 50 tosses. Makes you wonder if we rigged the animation. (We did, sort of. The needle tosses were done at random with Mathematica; we lucked out that we got 32 crossings when we were ready to make a .gif and chose to not change it.) Speaking of "rigging it," there is an interesting story in this, starting with the following generalization:

If the needle has length \(\ell\) and the width of the planks is \(d\), then \(P = 2\ell/(\pi d)\) (where \(\ell\leq d\)). Turning this into an approximation for \(\pi\) with \(n\) tosses and \(c\) crosses, we have \[\pi \approx 2\ell n/(c d).\]
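
A quick way to convince yourself of this formula is to simulate it. Here is a minimal sketch (not the code behind the gifs); it assumes \(\ell\leq d\), drops each needle by choosing a random center height between two lines and a random angle, and counts how many reach a line.

buffonPi[n_, len_, d_] := Module[{centers, angles, crossings},
  centers = RandomReal[{0, d}, n];   (* height of each needle's center above the line below it *)
  angles = RandomReal[{0, Pi}, n];   (* angle each needle makes with the lines *)
  crossings = Count[Transpose[{centers, angles}],
    {y_, t_} /; y + (len/2) Sin[t] >= d || y - (len/2) Sin[t] <= 0];
  N[2 len n/(crossings d)]]

buffonPi[100000, 1, 1]   (* typically returns something near 3.14 *)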

Back in 1901, a mathematician named Mario Lazzarini published the results of his tossing a needle of length 2.5cm onto lines spaced 3cm apart. His claimed results are amazing: in his tosses of 2000, 3000 and 4000 needles, his reported number of crossings are within 1 of the optimal number. Also, on toss 3408, he reportedly had 1808 crossings, which gives the approximation \(\pi \approx 3.1415929\), which is accurate up to the last "9." Most people find it hard to believe that 1) he actually tossed a needle 4000 times, 2) he did it in any way that was truly random (after all, people can hardly do anything randomly), and 3) his results were so close to "perfect." But his paper did get published, and maybe that earned him tenure at some institution of higher education.

Noodles

In 1969, J. F. Ramaley published in the American Mathematical Monthly an article entitled "Buffon's Noodle Problem," in which he generalizes the needle problem further. Instead of tossing needles of length \(\ell\), he solved the problem of tossing "noodles" of length \(\ell\) - that is, arbitrary curves with arc length \(\ell\). Because arbitrary curves are ... curvy, one noodle can cross a line more than once. He found that the expected number of total crossings \(E\) is \[E = 2\ell/(\pi d),\] effectively the same formula as before (and here \(\ell\) can be greater than \(d\)). The difference between "probability" \(P\) and "expected number" \(E\) is subtle yet significant, but here we treat them as the same. If \(P=.3\) or \(E=.3\), then in each case you'd expect about 30 crossings after 100 throws. In the traditional case of needles, that means 30 needles cross a line; with noodles, that means there are 30 crossings, likely from fewer than 30 noodles.

We demonstrate this with the following 2 gifs. In the first, we toss circles (donuts?) onto a floor where the circumference (arc length) of each circle is 1, the distance between lines. 


Note how each circle, when it crosses a line, gives two crossings. (And we didn't rig this one to give ourselves the optimal 32 crossings.)

Below, we throw arbitrary curves onto our board. Each curve is color-coded by the number of crossings it gives: there is one curve that gives 3 and a couple that give 2.


Thanks to Stan Wagon of Macalester College; he coded a Wolfram Demonstration on the topic and we cribbed some of his code to make our curves and count the crossings. 



Follow us on Twitter and we'll update you when a new post is up.


Sunday, October 27, 2013

Fourier Pumpkins



In honor of Halloween, we thought a picture of a pumpkin would be fun. But how to make it "mathy?"

We started with the following picture of a pumpkin, found through a Google image search for "pumpkin clip art." We then turned it into a black & white image, where the edges are accentuated:


To make the right hand image, we followed instructions found at a great blog post called "Making Formulas… for Everything—From Pi to the Pink Panther to Sir Isaac Newton," one of the many great posts found at blog.wolfram.com.

Continuing to follow the steps provided by the blog author Michael Trott, we used the latter image to create a set of points in the \(x\)-\(y\) plane. The points are subdivided into a set of curves (in this case, 8) and then approximated with a Fourier series. While it only takes 2 sentences to describe what we did, it took many, many lines of code provided by Mr. Trott. (We highly recommend downloading the .cdf provided by the link and running it in Mathematica. If you want to upload your own image and make your own curves, it helps to know that not everything that you run in the .cdf is directly related to making curves. It can be boiled down to a lot fewer lines of code - but it's still a lot.)

All of the above should make some ask "What is a Fourier Series?" In short, a Fourier series is a sum of sines and cosines used to approximate another function. Since trigonometric functions are used, Fourier series are especially good at approximating periodic (that is, repeating) functions. The more sines and cosines used, the better the approximation (in general).
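
For readers who want to experiment, here is a minimal sketch of the textbook recipe (not Mr. Trott's code): compute the sine and cosine coefficients of a function g on an interval of length per by integration, then plot a partial sum against g. The function and the interval here are arbitrary choices.

per = 2 Pi;  g[x_] := x (per - x) (x - 1);   (* any reasonably nice function on [0, per] will do *)

a[k_] := a[k] = 2/per NIntegrate[g[x] Cos[2 Pi k x/per], {x, 0, per}];   (* cosine coefficients, memoized *)
b[k_] := b[k] = 2/per NIntegrate[g[x] Sin[2 Pi k x/per], {x, 0, per}];   (* sine coefficients, memoized *)

fourierSum[m_, x_] := a[0]/2 + Sum[a[k] Cos[2 Pi k x/per] + b[k] Sin[2 Pi k x/per], {k, 1, m}]

Plot[{g[x], fourierSum[5, x]}, {x, 0, per}]   (* five terms already follow the curve fairly closely *)

(For closed curves like the pumpkin's outline, one typically parametrizes the curve and expands the \(x\)- and \(y\)-coordinates each in such a series.)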

For example, consider the curve on the left, below.


On the right, we approximate the curve with a set of Fourier series. We start with just \( 0.358+0.34\sin(1.28x)-1.03\cos(1.28x)\): that's the curve that is mostly flat. We then add more and more sine and cosine terms to better approximate the original curve. The 5th curve shown in the animation is \[0.358+\big(0.34\sin(1.28x)-1.03\cos(1.28x)\big) + \big(-5.34\sin(2.56x)+0.19\cos(2.56x)\big)+\big(-1.88\sin(3.85x)+0.12\cos(3.85x)\big)+\big(-0.84\sin(5.13x) + 0.07\cos(5.13x)\big)+\big(-0.44\sin(6.41x) + 0.05\cos(6.41x)\big).\]

To make the pumpkin, each of the 8 curves needed a different number of sines and cosines to make it look good. The first part drawn, and by far the longest, needed the most: 180 sines and cosines!! We then plotted it for increasing values of \(t\), making it look like it was being drawn.

The following animation shows how the approximation gets better and better by adding more and more terms to the series:



We thought another Halloweeny picture would be good. We did another Google image search, this time for "bat clip art", and came up with the picture below:


We followed the same procedure outlined by the Wolfram blog and made the following animations:



Finally, we thought the following image was pretty cool, suggested by another part of the Wolfram post. It shows, with increasing levels of opacity (i.e., the better the approximation the more opaque), various approximations of the bat by Fourier series.



So what good are Fourier series? Many of us view functions like \(1.03\cos(1.28x)\) as not-very-simple-or-intuitive. So why use them? There are 3 main reasons.

First, some problems involving functions are really hard to solve in general, but (amazingly) not so bad when the functions are trig functions. So approximating the real function with lots of trig functions (i.e., a Fourier series) makes finding an approximate solution easy.

Second, some data demonstrates oscillating, or periodic, behavior. (An easy example is sound waves.) One way of analyzing the data is to analyze the Fourier series that approximates it. We can also compress data this way - instead of storing all the information supplied by a sound wave, we can store only the Fourier coefficients, taking much less space.
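
As a toy illustration of the compression idea, here is a minimal sketch using Mathematica's discrete Fourier transform (the discrete cousin of the series above); the sample signal, the 256 samples, and the choice to keep only 10 coefficients are all arbitrary.

samples = Table[Sin[3 t] + 0.5 Sin[10 t] + 0.1 RandomReal[{-1, 1}], {t, 0., 2 Pi, 2 Pi/255}];
coeffs = Fourier[samples];

(* keep the 10 largest-magnitude coefficients and zero out the rest *)
keep = Ordering[Abs[coeffs], -10];
compressed = ReplacePart[ConstantArray[0. + 0. I, Length[coeffs]], Thread[keep -> coeffs[[keep]]]];

ListLinePlot[{samples, Re[InverseFourier[compressed]]}]   (* original vs. rebuilt from just 10 coefficients *)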

Finally, Fourier himself was interested in approximating curves with sums of sine waves with the intent to draw cool pictures. He never went very far with it, though; on his death bed, he is reported to have said: "Si seulement j'avais animé une citrouille. Ensuite, ma vie aurait été complète," which translates roughly to  "If only I had animated a pumpkin. Then my life would have been complete."


Follow us on Twitter and we'll update you when a new post is up.


Sunday, October 20, 2013

Conformal Maps




Most people are familiar with basic functions like \(y=x^2\), and recognize its graph as the standard parabola. One thing we often don't consider is that this function is also a geometric transformation. That is, it takes the straight line segment \([-1,1]\), for instance, and turns it into a particular curve in the plane (one that "starts" at (-1,1), passes through (0,0), and "ends" at (1,1)). All functions do this; in fact, some make more interesting-looking curves than others, especially if one considers parametric or polar functions.

Ultimately, though, we take a one-dimensional object (an interval of the \(x\)-axis) and transform it into another one dimensional object, a curve in the plane. One dimensional objects are so ... one dimensional. To make things more interesting, we should consider two dimensional regions of the \(x\)-\(y\) plane, and think of ways to transform them. 

There are infinitely many ways of doing this. One way is to pick two functions \(f\) and \(g\) and send the point \((x,y)\) to the point \((f(x),g(y))\). I thought of animating this for functions like \(f(x) = \sin x\) and \(g(x) = x^2\), but in general, this process makes a mess. (We'd love to see some people do this, though. If you do, share!)

A much, much better way of transforming the plane is through a linear transformation. It is pretty simple: pick 4 numbers \(a\), \(b\), \(c\) and \(d\) and map the point \((x,y)\) to the point \((ax+by,cx+dy)\). This kind of transformation always sends straight lines to straight lines and circles to ellipses. Relative distances are also preserved: if the point \(Q\) is 2/3 of the way from \(P\) to \(R\), then \(Q\) gets mapped to a point 2/3 of the way from the image of \(P\) to the image of \(R\). 

These linear transformations are the underpinnings of computer graphics. For instance, letting \(a=\cos\theta\), \(b=-\sin\theta\), \(c=\sin\theta\) and \(d=\cos \theta\), we can rotate the point \((x,y)\) around the origin counterclockwise by the angle \(\theta\). Below, we rotate the rectangle with corners (-1,0) and (1,1) around the origin by 90\(^\circ\). On the left is the original rectangle, on the right is its transformation. Note how the shape and distances are all preserved.
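
Here is a tiny sketch of that rotation in Mathematica; RotationMatrix builds exactly the matrix of \(a\), \(b\), \(c\), \(d\) above.

rect = {{-1, 0}, {1, 0}, {1, 1}, {-1, 1}};             (* corners of the rectangle *)
rotated = (RotationMatrix[90 Degree].#) & /@ rect;     (* (x,y) -> (ax+by, cx+dy) *)
Graphics[{EdgeForm[Black], LightBlue, Polygon[rect], LightRed, Polygon[rotated]}]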


Linear transformations on points \((x,y,z)\) in space allow one to rotate a 3D world, move through it, and show it all on a flat screen (Wolfenstein 3D, anyone?). The GPUs on computers are really, really, really good at performing these transformations fast.

Usually linear transformations do not preserve angles. That is, if two lines meet at a 15\(^\circ\) angle, the transformed lines probably do not meet at the same angle. This is ok for some applications, like projecting a 3D world onto your phone's screen. A map that does preserve angles is called a conformal map. 

To perform such a transformation, we need to view the \(x\)-\(y\) plane in a different way. We start with imaginary numbers. We first learned that \(\sqrt{4} = 2\) and \(\sqrt{-4}\) didn't exist. Later, some of us learned to give the expression \(\sqrt{-1}\) the name \(i\) and call it an imaginary number, as though it only existed in our minds. That allows us to get numbers like \(\sqrt{-4} = 2i\). With the basic principle that \(i^2=-1\), we can create complex numbers like \(2+3i\) and multiply them:
\[(2+3i)(1+2i) = -4+7i.\]

A complex function acts on these complex numbers. For instance, \(f(z) = z^2\) (we often use \(z\) for complex numbers) does the following:

\[ f(3) = 9,\quad f(3i) = -9\quad \text{and} \quad f(1+2i) = -3+4i, \quad \text{etc.}\]

We can visualize complex  numbers with the complex plane. Identify the complex number \(2+3i\) with the point (2,3) in the \(x\)-\(y\) plane. Each point in the plane represents a complex number. Tons of cool stuff comes out of this. 

A great result is that we can now transform this complex plane by applying a complex function to every point (i.e., every complex number) in the plane. 

We now reward the reader who has read this far with some more eye candy. In each of the following, we transform the same rectangle as before (now with vertices at \((-1,0)\sim -1\) and \((1,1) \sim 1+i\)) by a complex function. As you look at these, note how angles are preserved. In the rectangle, we have lots of right angles where the vertical and horizontal lines meet. In the transformed shapes, all the new curves intersect each other at right angles. (You should find that amazing. I do.)
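
All of the pictures below were made in the same basic way; here is a minimal sketch (we show the exponential map that comes first; the grid spacing is an arbitrary choice). Take the horizontal and vertical grid lines of the rectangle, view each point \(x+iy\) as a complex number, apply \(f\), and plot the resulting curves.

f[z_] := Exp[z];
(* images of the horizontal and vertical grid lines of the rectangle [-1,1] x [0,1] *)
horizontal = Table[ParametricPlot[{Re[f[x + I y]], Im[f[x + I y]]}, {x, -1, 1}], {y, 0, 1, 1/10}];
vertical = Table[ParametricPlot[{Re[f[x + I y]], Im[f[x + I y]]}, {y, 0, 1}], {x, -1, 1, 1/10}];
Show[horizontal, vertical, PlotRange -> All]

Swapping in any of the other functions below works the same way; in every case the transformed grid lines still cross at right angles.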

Our introductory image was \(f(z) = e^z\):


We also have \(f(z) = 2 \sqrt{z + 1} +\frac{\ln(\sqrt{z + 1} - 1)}{\sqrt{z + 1} + 1}\)


\(f(z) = \sqrt{z}\): 


\(f(z) = \sin z\):


A really cool thing is that rotations are just multiplications by complex numbers. To rotate and scale, multiply by any complex number; to just rotate, multiply by a complex number of the form \(\cos \theta + i\sin \theta\).  A 90\(^\circ\) rotation is just a multiplication by \(i\), as seen before:

\(f(z) = iz\):


A basic shift up is done by adding \(i\) to every number/point:

\(f(z) = i+z\):


One final one for fun:

\(f(z) = (1+i+z)^2\):


What good is any of this? Two quick answers:

1. What good is this?!?! Are you kidding me? This is awesome. We just took a rectangle, deformed it in weird ways, but preserved angles. Who would have thought this was possible?!?

2. There are many problems in physics/engineering that depend on complex (no pun intended) geometry - for instance, fluid flow around corners or obstacles. The right conformal map can transform the complicated geometry into a simpler geometry, where the problem is "easy" to solve. This solution can be transformed back into a solution in the original geometry using the inverse conformal map. 

While we are pleased that #2 is possible, we here are more impressed by answer #1. 

A final note: this post was inspired by a reader comment. We probably misunderstood the comment, but we rotated a shape by moving points in a straight line and wondered if the reverse was possible - can we move a shape in a straight line by rotating its points? A comment suggested using an inversion (or see this), which got us thinking about conformal maps. So thanks for the comments - we like them and they give us ideas.

Oh - and someday, we'll post the code for our images. These .gif's were produced mostly on a Friday afternoon when we should have been grading papers or writing Chapter 13 of our Calc III book. Our code is generally ugly -  we write stuff in whatever way seems easiest at the time, and only figure out how to make it look nice and readable much later. So ... later we'll post some code because we do want to share.