The Concept of a Limit

Limits at a Real Number

Where do these things come up?

  • Mathematics: What does it mean for a function to be ‘undefined’ at a point? Can we hope to understand and classify the ways this can come about?

  • Physics/Economics: the predictions of a theory return “undefined” at some value. Do we need new physics to describe this? Or is this just a sign we should look closer at our mathematical model?

  • Computer science: a formula you derived throws an error: is this because of some small technicality you forgot to account for, or a fundamental problem / instability in your method?

For the remainder of this lesson we will focus on understanding qualitatively what kinds of behavior are possible for functions, how we can fruitfully categorize them, and on finding precise definitions to guide our future work.

Removable Singularities

Consider the following function $$f(x)=-\frac{3}{10}\frac{x^3-2x^2-5x+6}{x-1}$$ Its natural domain is all inputs that are mathematically possible to ‘plug in’, which here is all real numbers except $x=1$, where the denominator is zero. But while plugging $1$ in directly is impossible, plotting this function shows that away from this single point the graph appears nice, smooth, and continuous - almost as if it’s just “missing a value”.

Such behavior is called a removable singularity, as we can remove the problem by simply defining a value for $f(1)$ that makes everything nice and continuous. From inspection of the graph, it’s possible to see that the right value needed to ‘plug the hole’ is $f(1)=1.8$. We can confirm this mathematically by doing a little algebra: factoring the numerator shows the original expression for $f$ is equivalent to $\frac{3}{10}(2+x)(3-x)$ when $x\neq 1$. Thus, if we define $f(1)$ to equal $\frac{3}{10}(1+2)(3-1)=1.8$, then $f(x)$ is given by this formula everywhere, and we say we have “removed the singularity” (a singularity is a ‘bad spot’ for a mathematical formula).
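If you’d like to check this numerically rather than algebraically, here is a minimal sketch (the sample spacings are arbitrary choices) that probes $f$ on both sides of $x=1$:

```python
# Numerically approach x = 1 from both sides to estimate the value
# needed to "plug the hole" in f. A sanity check, not a proof.

def f(x):
    return -0.3 * (x**3 - 2*x**2 - 5*x + 6) / (x - 1)

for h in [0.1, 0.01, 0.001, 0.0001]:
    print(f"f(1 - {h}) = {f(1 - h):.6f}   f(1 + {h}) = {f(1 + h):.6f}")

# Both columns approach 1.8, matching the simplified formula
# f(x) = (3/10)(2 + x)(3 - x) away from x = 1.
```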

A word of warning: such a situation might seem ridiculously artificial, but this kind of behavior pops up in very important ways all over mathematics! Indeed, the main subject of our course - the derivative - has this exact behavior embedded right into its definition.
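To preview this connection, consider the difference quotient that will later define the derivative of $f(x)=x^2$ at a point $a$. It is undefined exactly at $x=a$, but the singularity there is removable: $$\frac{x^2-a^2}{x-a}=\frac{(x-a)(x+a)}{x-a}=x+a\qquad(x\neq a),$$ and patching the hole with the value $2a$ recovers precisely the derivative of $x^2$ at $a$.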

It’s also good to be aware that not every removable singularity is patched by a single nice, clean formula. The singularity below is also removable - even though $f$ is not defined at $1$, if we were to extend the definition of $f$ so that $f(1)=4.5$, the result would be a continuous function, even though it has a sharp bend at $x=1$.

Jump Discontinuities

A different kind of behavior which may occur when studying a function right near a point outside its domain is a jump discontinuity. The reason for the name is self-evident from an example graph: the function abruptly jumps from one value to another.

The important feature of such a jump discontinuity is that no matter what value we assign $f$ at this point, the resulting function still fails to be continuous. However, while clearly more serious than the removable singularity we saw first, these are still not that poorly behaved: on each side of the jump, the function approaches a perfectly definite value, an idea we will make precise with one-sided limits below.
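Here is a small numerical sketch of this phenomenon, using a hypothetical step function of our own choosing (not one from the text):

```python
# A hypothetical piecewise function with a jump at x = 0:
# g(x) = -1 for x < 0 and g(x) = +1 for x > 0 (undefined at 0).

def g(x):
    return -1.0 if x < 0 else 1.0

for h in [0.1, 0.01, 0.001]:
    print(f"g(0 - {h}) = {g(-h):+.1f}   g(0 + {h}) = {g(h):+.1f}")

# The left-hand samples settle at -1 and the right-hand samples at +1.
# No single choice of g(0) can match both sides, so this
# discontinuity cannot be removed.
```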

Essential Singularities

When a Function “Blows Up”

Finally, we could have a function that doesn’t converge, jump, or oscillate when it reaches an endpoint of its domain, but rather shoots off to infinity. A classic example of this is the curve $1/x^2$, which is undefined at zero, but gets extremely large as $x$ approaches zero from either the positive or negative numbers. If $\infty$ were a number (it’s not!), this would almost look like a removable singularity - and inspired by this, we will shortly make a precise definition that says as $x$ tends towards $0$, the limit of $1/x^2$ is infinity.

However, not all functions that grow in magnitude without bound seem to have a well-defined limit: consider $y=1/x$, which is also undefined at $0$, but near zero takes extremely large positive values for positive $x$ and extremely large negative values for negative $x$.

And finally, things can be even worse than this: it’s possible to write down functions which are undefined at $0$, take on arbitrarily large values near zero, but have no well-defined limiting behavior at all! In the example below we can see that $\sin(1/x)/x$ takes on increasingly large positive and negative values, as well as zero, infinitely often as $x$ approaches $0$ from either side.
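To see all three behaviors side by side numerically, here is a small sketch (the sample points are arbitrary choices):

```python
import math

# Sample the three example functions near x = 0 to compare behaviors.
funcs = {
    "1/x^2":      lambda x: 1 / x**2,
    "1/x":        lambda x: 1 / x,
    "sin(1/x)/x": lambda x: math.sin(1 / x) / x,
}

for name, f in funcs.items():
    print(name)
    for h in [1e-1, 1e-3, 1e-5]:
        print(f"  x = -{h:g}: {f(-h):+.4g}   x = +{h:g}: {f(h):+.4g}")

# 1/x^2 grows large and positive on both sides, 1/x grows with
# opposite signs on each side, and sin(1/x)/x oscillates with growing
# amplitude, never settling toward any value or even any sign.
```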

Definitions: Left and Right handed Limits

Behavior from above: Right-Hand Limits

We write $\lim_{x\to a^{+}}f(x)=L$ to mean that the outputs $f(x)$ get arbitrarily close to $L$ as $x$ approaches $a$ through values greater than $a$ - that is, from the right side on the number line.

Behavior from below: Left-Hand Limits

Analogously, we write $\lim_{x\to a^{-}}f(x)=L$ to mean that $f(x)$ gets arbitrarily close to $L$ as $x$ approaches $a$ through values less than $a$ - from the left side on the number line.

The limit of a function $f(x)$ at a point $a$, written $\lim_{x\to a}f(x)$, is defined only when 1) both the right-hand limit and the left-hand limit exist, and 2) they are equal. In this case, we define $\lim_{x\to a}f(x)$ to be this common limiting value. When the right-hand and left-hand limits do not agree (or at least one of them does not exist), we say **the limit does not exist**.

Given these two notions, we can begin to lay down some rigorous definitions, precisely categorizing the types of limiting behavior we observed above. (A short numerical sketch after the list shows how these categories can be detected in practice.)

  • A function is continuous at $a$ if firstly $f$ is defined at $a$ (so that $f(a)$ makes sense), and secondly, as we approach $a$, the outputs of $f$ get really close to its output at $a$. Said in symbols, $\lim_{x\to a}f(x)=f(a)$.

  • A function has a removable discontinuity at $a$ if $f$ is not continuous at $a$, but we can re-assign $f(a)$ to some new value, which then turns $f$ into a continuous function.

  • A function has a jump discontinuity at $a$ if there is no possible value of $f(a)$ which would make $f$ continuous at $a$, but both the right and left hand limits $\lim_{x\to a^+}f(x)$ and $\lim_{x\to a^{-}}f(x)$ exist (and thus necessarily take different values).

  • A function has an essential singularity at $a$ if at least one of the right and left side limits at $a$ does not even exist.

  • A function limits to infinity at $a$ if, as $x$ approaches $a$ from either side, the values of $f(x)$ grow larger and larger without bound. A function limits to negative infinity at $a$ if, on both approaches, $f(x)$ gets more and more negative without bound.
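Here is the promised numerical sketch: a minimal helper (the function `one_sided_limits`, its step size, and the example probed are all illustrative choices, not part of the definitions) that estimates both one-sided limits so they can be compared:

```python
# Estimate one-sided limits of a function at a point by sampling
# just to the left and right. Purely numerical: it can suggest,
# not prove, which category a singularity falls into.

def one_sided_limits(f, a, h=1e-6):
    return f(a - h), f(a + h)  # (left estimate, right estimate)

# Illustrative example: the removable singularity from earlier.
f = lambda x: -0.3 * (x**3 - 2*x**2 - 5*x + 6) / (x - 1)

left, right = one_sided_limits(f, 1)
print(f"left ~ {left:.4f}, right ~ {right:.4f}")
# Both estimates are ~1.8, consistent with a removable singularity
# that is patched by defining f(1) = 1.8.
```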

Limits at Infinity

So far we have been concerned with the situation where a function $f$ is defined on some subset of the number line which is missing some point $a$, and we wished to understand ‘what $f$ is like near $a$’ even though we were unable to plug $a$ into $f$. But holes punched out of the number line are not the only types of ‘edges’ a domain can have: even functions that are defined on the whole number line have interesting edge behavior to consider. What happens as we let the input drift off towards positive or negative infinity?

Where do these things come up?

  • Mathematics: Oftentimes a global understanding of a difficult problem in mathematics can be achieved by separately studying the things that happen ‘nearby’ and the things that happen ‘far away’. Computationally, this leads to a method of tackling problems by gluing together information from a small number of explicit computations and two limits out to infinity.

  • The Sciences: In mathematical modeling, it’s often quite important to understand the long-term behavior of a system. Left to their own devices, which species will eventually come to dominate this ecosystem? In the long run, what are the economic implications of a policy that’s being debated? Idealizations of these questions involve limits as $x\to\infty$ (or $t\to-\infty$, if instead we are trying to understand how a system got to where it is now).

  • Computer Science: To implement an algorithm on a computer, we need to first understand its behavior over the set of all possible inputs, so that we can design our implementation accordingly (making sure we have the right data types, memory, etc. to deal with the whole range of potential results). Even in the simplest cases (algorithms whose inputs are numbers), understanding foundational metrics of computational complexity involves thinking about limits to infinity (in the size of the input data).

Definitions and Examples

Definition of $\lim_{x\to\infty}$ and $\lim_{x\to-\infty}$

We write $\lim_{x\to\infty}f(x)=L$ to mean that the values of $f(x)$ get arbitrarily close to $L$ as $x$ grows without bound, and $\lim_{x\to-\infty}f(x)=L$ to mean the same as $x$ decreases without bound.

The function $f(x)=\frac{3x^3-1}{x(x-4)(x+5)}$ provides an example where the limit as $x\to\infty$ and $x\to-\infty$ both exist (and in fact $\lim_{x\to\infty}f(x)=\lim_{x\to-\infty}f(x)=3$, as you can see by scrolling far out along the graphs).
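To confirm this value without relying on the graph, one standard approach is to divide the numerator and denominator by $x^3$ (the highest power present): $$\lim_{x\to\pm\infty}\frac{3x^3-1}{x(x-4)(x+5)}=\lim_{x\to\pm\infty}\frac{3-\frac{1}{x^3}}{\left(1-\frac{4}{x}\right)\left(1+\frac{5}{x}\right)}=\frac{3-0}{1\cdot 1}=3.$$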

Instead of having a finite limit at infinity, we may also encounter situations where $\lim_{x\to\infty}f(x)=\pm\infty$, meaning that the values of $f$ grow without bound (either large positive or large negative) as $x$ grows. We are familiar with many such functions already, for example $f(x)=x^2$, where $\lim_{x\to\infty}x^2=\infty$, or $f(x)=x^3$, where $\lim_{x\to-\infty}x^3=-\infty$. Below is yet another example, $f(x)=\frac{x^2-1}{x}$.
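A quick bit of algebra explains the behavior of this last example: $$f(x)=\frac{x^2-1}{x}=x-\frac{1}{x},$$ and since the $\frac{1}{x}$ term fades to zero for large $|x|$, the graph hugs the line $y=x$, giving $\lim_{x\to\infty}f(x)=\infty$ and $\lim_{x\to-\infty}f(x)=-\infty$.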

Finally, a function may tend to no limit whatsoever as $x\to\pm\infty$: a simple way to arrange for this is just to have $f$ be periodic (and not constant) so that it is doing ‘something interesting’ all the way along the number line. As an elementary example take $y=\sin(x)$, which oscillates from $-1$ to $1$ every $2\pi$, and thus never settles down on any limit.

Functions can do this in a variety of ways, and need not stay bounded as they do so. Here the function $f(x)=x\sin(x)$ takes on every real value infinitely many times as $x$ approaches infinity.