Series Intro

We use many numbers like $\sqrt{2}$, $3^{1/4}$, $e$, and $\pi$ commonly throughout calculus. But how would one go about finding their numerical values?

Similarly, we use many functions in calculus, like $2^x$, $\sin(x)$, and $\ln(x)$, that seem hard to evaluate.
For mathematics to be quantitatively useful in the physical sciences, we need a way to actually \emph{compute} these things: how does one find a decimal approximation to $\sqrt{2}$, or to $\sin(1)$, or to $2^{0.37}$?

We may try to formalize this into a math problem by asking for a procedure to compute such quantities in terms of something we already know. What do we already know how to compute? The standard starting point is the whole numbers and addition, subtraction, multiplication and division: we learned the algorithms for computing these in grade school.

But the unfortunate truth (known already for 2500 years) is that our dream is impossible! There is no way whatsoever to compute $\sqrt{2}$ using only whole numbers and finitely many additions, subtractions, multiplications, and divisions. So, we need to modify our dream a bit.

The correct modification (an idea the ancient Greeks, Theon among them, already had) is to give up on being able to compute a number exactly, and take seriously that in applications we only ever need an approximation. But, if we want to develop a useful theory, we need to also recognize that we don't know ahead of time what accuracy we will need for a given application. Thus, we would like to produce a means of computing approximate values to any desired accuracy.

That is, we want a first approximation, then a (better) second approximation, then a third approximation, and so on, each better than the last.
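One concrete instance of such a scheme is Theon's side-and-diagonal numbers, which produce ever-better rational approximations to $\sqrt{2}$. A minimal sketch (the function name and step count are just for illustration):

```python
from fractions import Fraction

def theon_ladder(steps):
    """Theon's side-and-diagonal numbers: starting from s = d = 1,
    repeatedly update (s, d) -> (s + d, 2*s + d).  The ratios d/s
    are rational approximations approaching sqrt(2)."""
    s, d = 1, 1
    approximations = []
    for _ in range(steps):
        s, d = s + d, 2 * s + d
        approximations.append(Fraction(d, s))
    return approximations

for r in theon_ladder(5):
    print(r, float(r))  # 3/2, 7/5, 17/12, 41/29, 99/70, ...
```

Each step roughly triples the number of correct digits' growth rate is geometric: the error shrinks by a factor of about $(1+\sqrt{2})^2 \approx 5.8$ per step, so a handful of iterations already gives several decimal places.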

The way we will track such values is with a sequence: this is just a name for a list of numbers written in a fixed, specified order. We will denote the elements of a sequence $$a_1, a_2, a_3,\ldots$$ and the whole sequence as $\{a_n\}$.

Sometimes, we can give a formula for all elements of a sequence in terms of $n$: $$a_n= 2n$$ Other times, we can give a formula for the elements of a sequence in terms of its previous elements: $$a_1=2\hspace{1cm}a_n=2+\sqrt{a_{n-1}}$$

In the first case we call this a closed form, as it lets us compute any term of the sequence we like without computing the previous values. In the second case we call this a recursive sequence, as it requires you to compute all the previous values to get the value you're interested in.
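The difference is easy to see in code; a sketch of both sequences from above (function names are just for illustration):

```python
import math

def closed_form(n):
    """Closed form a_n = 2n: jump straight to the n-th term."""
    return 2 * n

def recursive(n):
    """Recursive sequence a_1 = 2, a_n = 2 + sqrt(a_{n-1}):
    each term requires computing the one before it."""
    a = 2.0
    for _ in range(n - 1):
        a = 2 + math.sqrt(a)
    return a

print(closed_form(10))  # 20
print(recursive(10))
```

The closed form costs the same no matter which term you ask for, while the recursive definition costs more work the further out you go.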

If we wish to have a sequence of numbers better and better approximating $\sqrt{2}$, we need some good notion of what we mean by 'the approximations get infinitely good in the limit.' You may be familiar with this definition from Calc 1:

Informal/formal definition: informally, $\lim a_n = L$ means the terms $a_n$ get as close to $L$ as we like once $n$ is large enough. Formally, $\lim_{n\to\infty} a_n = L$ if for every $\varepsilon > 0$ there is some $N$ such that $|a_n - L| < \varepsilon$ whenever $n > N$.

Examples of convergent and non-convergent sequences: $a_n = 1/n$ converges to $0$, while $a_n = (-1)^n$ does not converge.

Definition of a sequence going to infinity: $\lim_{n\to\infty} a_n = \infty$ if for every $M$ there is some $N$ such that for all values of the sequence beyond it (all $n > N$) we have $a_n > M$.
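This definition is concrete enough to check directly: given $M$, find an $N$ past which every term exceeds $M$. A hypothetical helper for an increasing sequence (where it suffices to find the last term at or below $M$):

```python
def find_N(a, M, search_limit=10**6):
    """For an increasing sequence a, return the smallest N such
    that a(n) > M for every n > N.  Illustration only: since a is
    increasing, N is just the last index with a(n) <= M."""
    N = 0
    for n in range(1, search_limit):
        if a(n) <= M:
            N = n
        else:
            break
    return N

# For a_n = n^2 and M = 1000: 31^2 = 961 <= 1000 but 32^2 = 1024 > 1000,
# so every term past N = 31 exceeds M.
print(find_N(lambda n: n**2, 1000))  # 31
```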

In calculus I, the limits we were concerned with were mostly the limits of functions. The only difference here is that the "input" is not a real number $x$ but just an integer $n$:

Theorem: If $\lim_{x\to\infty}f(x)=L$ and $a_n=f(n)$, then $\lim_{n\to\infty} a_n=L$.

Just like limits of functions, limits of sequences satisfy the same limit laws: for sums, differences, products, and quotients.

Continuous functions of a limit: if $\lim a_n=L$ and $f$ is continuous at $L$, then $$\lim f(a_n)=f(L)$$
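A quick numerical illustration of this rule: since $a_n = 1/n \to 0$ and the exponential function is continuous at $0$, we get $e^{1/n} \to e^0 = 1$.

```python
import math

# a_n = 1/n -> 0, and exp is continuous at 0, so exp(1/n) -> exp(0) = 1.
for n in (1, 10, 100, 10000):
    print(n, math.exp(1 / n))
```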

Examples:

$$\lim_{n\to\infty} \frac{n}{n+1}$$ $$\lim_{n\to\infty} \frac{n^2}{\sqrt{n^3+4n}}$$ $$\lim_{n\to\infty} \sin(\pi/n)$$
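Before proving anything, evaluating each sequence at large $n$ suggests the answers: the first tends to $1$, the second grows without bound (it behaves like $\sqrt{n}$), and the third tends to $0$. A quick numerical check:

```python
import math

# Evaluate each example sequence at increasingly large n.
for n in (10, 1000, 100000):
    print(n,
          n / (n + 1),                     # -> 1
          n**2 / math.sqrt(n**3 + 4 * n),  # -> infinity, like sqrt(n)
          math.sin(math.pi / n))           # -> 0
```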

Monotonic sequences, and bounded sequences. Monotone Convergence Theorem: every monotonic, bounded sequence converges.
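The recursive sequence $a_1 = 2$, $a_n = 2 + \sqrt{a_{n-1}}$ defined earlier is a good test case: its terms increase and never exceed $4$, so the Monotone Convergence Theorem guarantees a limit $L$, which must satisfy $L = 2 + \sqrt{L}$; the solution with $L \ge 2$ is $L = 4$. A numerical sanity check of the hypotheses:

```python
import math

# The recursive sequence a_1 = 2, a_n = 2 + sqrt(a_{n-1}).
terms = [2.0]
for _ in range(20):
    terms.append(2 + math.sqrt(terms[-1]))

# Each term is strictly larger than the last (monotonic) ...
assert all(x < y for x, y in zip(terms, terms[1:]))
# ... and all terms stay below 4 (bounded), so the Monotone
# Convergence Theorem applies and the limit solves L = 2 + sqrt(L).
assert all(x < 4 for x in terms)

print(terms[-1])  # very close to 4
```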

Examples: