This is a short walk through a basic piece of mathematical analysis: what the Bolzano–Weierstrass theorem says, and what its terms actually mean.

The idea is that any bounded sequence has a convergent subsequence. In mathematics, a bounded sequence is a sequence whose terms all stay within some fixed range: there is a number M such that every term of the sequence lies between -M and M.

Note that boundedness is about the size of the terms, not about how many there are. The sequence 1, 2, 1, 2, ... has infinitely many terms but is bounded, because no term ever leaves the interval [1, 2]. The sequence 1, 2, 3, 4, ... also has infinitely many terms, but it is unbounded: whatever number you pick, its terms eventually exceed it.
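To make this concrete, here is a small Python sketch that inspects the first N terms of a sequence and reports whether they stay inside a chosen bound. (A finite check like this can only suggest boundedness; an actual proof needs an argument covering all n. The function name and defaults are just for illustration.)

```python
# Heuristic boundedness check: look at the first n_terms of a sequence.
# This can suggest boundedness but never prove it -- a proof must cover all n.

def looks_bounded(seq_fn, n_terms=1000, bound=10.0):
    """Return True if the first n_terms of seq_fn stay within [-bound, bound]."""
    return all(abs(seq_fn(n)) <= bound for n in range(1, n_terms + 1))

alternating = lambda n: (-1) ** n   # bounded: every term is -1 or 1
naturals    = lambda n: n           # unbounded: grows without limit

print(looks_bounded(alternating))   # True
print(looks_bounded(naturals))      # False
```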

The convergent subsequence theorem is properly called the Bolzano–Weierstrass theorem. It states that every bounded sequence of real numbers has at least one convergent subsequence. It does not say the original sequence converges, and it does not rule out subsequences that fail to converge; it only guarantees that at least one convergent subsequence exists.
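The standard proof works by repeatedly halving the interval and keeping a half that contains infinitely many terms. The sketch below imitates that bisection idea on a finite sample of a bounded sequence, keeping at each step the half holding more sample terms; the shrinking interval homes in on a limit point. (This is an illustration of the proof's mechanism on finite data, not the proof itself; the function name is mine.)

```python
# Bisection idea behind Bolzano-Weierstrass, run on a finite sample:
# repeatedly halve [lo, hi] and keep the half containing more sample terms
# (a finite stand-in for "infinitely many terms"). The interval shrinks
# toward a limit point of the sequence.

def limit_point_estimate(terms, lo=-1.0, hi=1.0, steps=40):
    for _ in range(steps):
        mid = (lo + hi) / 2
        left = [t for t in terms if lo <= t <= mid]
        right = [t for t in terms if mid < t <= hi]
        if len(left) >= len(right):
            terms, hi = left, mid     # keep the left half
        else:
            terms, lo = right, mid    # keep the right half
    return (lo + hi) / 2

# The bounded sequence -1, 1, -1, 1, ... has limit points -1 and 1;
# the tie-breaking rule above steers toward -1.
sample = [(-1) ** n for n in range(1, 1001)]
print(limit_point_estimate(sample))
```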

It should not be confused with the Baire category theorem, a different and deeper result which says, roughly, that a complete metric space cannot be written as a countable union of nowhere-dense sets. The two theorems belong to different corners of analysis, even though both are workhorses in the study of infinite sets of real numbers.

The convergence of a sequence is the property that its terms get arbitrarily close to a single fixed number, called the limit, and stay close from some point on. A subsequence of a sequence is what you get by keeping infinitely many of its terms, in their original order, and discarding the rest.
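Here is the definition made numerical for the sequence 1/n, which converges to 0: past a large enough index, every term is within any tolerance you name. (The helper function and its parameter names are just for this demonstration.)

```python
# Convergence made concrete: a_n = 1/n gets, and stays, within any tolerance
# eps of its limit 0 once the index n is large enough.

def terms_within(seq_fn, limit, eps, start, count):
    """Check that count consecutive terms from index start are within eps of limit."""
    return all(abs(seq_fn(n) - limit) < eps for n in range(start, start + count))

a = lambda n: 1 / n

print(terms_within(a, 0.0, 0.01, start=101, count=1000))  # True: 1/n < 0.01 for n > 100
print(terms_within(a, 0.0, 0.01, start=1, count=10))      # False: early terms are too big
```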

Convergence can feel like a very technical topic, but once "bounded" is clear the picture is simple: every bounded sequence has a convergent subsequence. A classic example is the alternating sequence -1, 1, -1, 1, ...: it never settles down as a whole, but the subsequence of its even-indexed terms is just 1, 1, 1, ..., which certainly converges. (The sequence of natural numbers beginning with 0 is, by contrast, not an example at all: it is unbounded.)

A constant sequence such as 1, 1, 1, ... converges trivially: its terms are already at the limit. The sequence 1, 2, 1, 2, ... does not converge, because its terms keep jumping between 1 and 2 rather than closing in on one value. And that is exactly where the theorem earns its keep: since 1, 2, 1, 2, ... is bounded, Bolzano–Weierstrass promises it a convergent subsequence, and indeed the constant subsequences 1, 1, 1, ... and 2, 2, 2, ... both converge.
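Extracting such a subsequence is nothing more than picking out positions. A minimal sketch (helper name is mine):

```python
# Extracting a convergent subsequence from a bounded, non-convergent sequence.
# The sequence 1, 2, 1, 2, ... never converges, but keeping only the terms at
# even positions yields the constant sequence 1, 1, 1, ..., which converges.

def subsequence(terms, indices):
    """Keep the terms at the given increasing indices, in their original order."""
    return [terms[i] for i in indices]

original = [1 if n % 2 == 0 else 2 for n in range(20)]  # 1, 2, 1, 2, ...
print(subsequence(original, range(0, 20, 2)))           # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```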

The sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, etc. cannot be convergent: its terms grow past every bound, so they never close in on any single number. Because it is unbounded, Bolzano–Weierstrass makes no promise about it either, and in fact every one of its subsequences is also unbounded and divergent.

This is why the hypothesis of the theorem matters. Stated precisely: if (a_n) is a bounded sequence of real numbers, then there exist indices n_1 < n_2 < n_3 < ... such that the subsequence (a_{n_1}, a_{n_2}, a_{n_3}, ...) converges to some real number L.
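For readers who prefer symbols, the statement above can be written compactly in standard notation:

```latex
% Bolzano-Weierstrass theorem, stated formally.
% Boundedness:            \exists M > 0 \;\forall n \in \mathbb{N} : |a_n| \le M
% Conclusion:             some subsequence (a_{n_k}) converges.
\text{If } \exists M > 0 \;\forall n \in \mathbb{N} : |a_n| \le M,
\text{ then } \exists\, n_1 < n_2 < n_3 < \cdots \text{ and } L \in \mathbb{R}
\text{ such that } \lim_{k \to \infty} a_{n_k} = L.
```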