C = {a + bi ∣ a, b ∈ R}
I.e., a linear combination of 1 and i.
We write z=a+bi to illustrate the components of a z∈C.
Remember that R⊂C.
Forms of Representation
Complex numbers are commonly treated as points on a 2-d plane; 1 is the horizontal unit vector and i is the vertical unit vector.
In this case, rectangular form is equivalent to cartesian coordinates; polar form is equivalent to, well, polar coordinates.
Rectangular
the 2-term linear combination described above
For a z=a+bi, a is the real component and b is the imaginary component.
Aka. Cartesian form (for obvious reasons).
Polar
whirry form: a magnitude (modulus) plus an angle (argument).
For a z = a + bi = r(cos θ + i sin θ), r = ∣z∣ = √(a² + b²) is the modulus and θ = arg(z) = atan2(b, a) is the argument.
Also, z = re^(iθ) (ask Euler). r, θ ∈ R, r ≥ 0.
r cis θ is short for r(cos θ + i sin θ).
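A quick numeric sketch of converting between the two forms, using Python's cmath (variable names are illustrative):

```python
import cmath

# rectangular → polar: r = |z|, θ = arg(z) = atan2(b, a)
z = 3 + 4j
r, theta = cmath.polar(z)

# polar → rectangular: r cis θ = r(cos θ + i sin θ)
back = cmath.rect(r, theta)
```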
Arithmetic
In rectangular form, everything actually just works under standard “algebraic/symbolic” arithmetic, treating i as an atomic symbol, and simplifying i² to −1 wherever possible.
The n-th roots of z = r cis θ are zₖ = r^(1/n) cis((θ + 2πk)/n); the n unique solutions are given by the first n values of k (from 0 to n − 1).
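Sketch: the n-th roots of z = r cis θ are r^(1/n) cis((θ + 2πk)/n) for k = 0 … n−1 (nth_roots is a hypothetical helper):

```python
import cmath
import math

def nth_roots(z, n):
    # work in polar form: take the n-th root of r, divide the angle by n,
    # and step the angle by 2π/n for each of the n solutions
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8, 3)  # cube roots of 8
```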
Geometric Interpretation
Multiplication multiplies the moduli and adds the arguments, i.e. a scaling plus a rotation.
2. Continuous Functions
Function
A ↦ B, for sets A and B
Maps every a∈A to some b∈B
Exp & Log
Exponential Function
exp:A↦A for a ring A where exp(a+b)=exp(a)∗exp(b)
Logarithm Function
Inverse of the above: logₐ(aˣ) = x
ln(x × y) = ln(x) + ln(y)
ln(x / y) = ln(x) − ln(y)
ln(xʸ) = y ⋅ ln(x)
log_b(x) = 1 / log_x(b)
log_b(x) = ln(x) / ln(b)
(Here ln denotes a logarithm of arbitrary base, consistent within each line.)
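The identities above can be spot-checked numerically (values are arbitrary):

```python
import math

x, y, b = 5.0, 7.0, 2.0
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
assert math.isclose(math.log(x / y), math.log(x) - math.log(y))
assert math.isclose(math.log(x ** y), y * math.log(x))
assert math.isclose(math.log(x, b), 1 / math.log(b, x))   # log_b(x) = 1/log_x(b)
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
```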
Limits & Continuity
(ε, δ)-definition of a limit: lim_{x→a} f(x) = L ⟺ (∀ε > 0 ∃δ > 0 : 0 < ∣x − a∣ < δ ⟹ ∣f(x) − L∣ < ε)
Observe that due to 0<∣x−a∣, f does not have to be continuous at a.
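E.g. f(x) = sin(x)/x is undefined at 0, yet lim_{x→0} f(x) = 1; a numeric illustration:

```python
import math

f = lambda x: math.sin(x) / x   # undefined at x = 0, but the limit there is 1

# values near (but not at) 0 get arbitrarily close to L = 1
for delta in (1e-1, 1e-3, 1e-5):
    assert abs(f(delta) - 1) < delta
```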
Limit-definition of continuity: f : D ↦ R is continuous at a ⟺ lim_{x→a} f(x) = f(a)
A function is continuous on an open interval (l, r) if it is continuous at all values between l and r.
A function is continuous (everywhere) if it is continuous on (−∞,+∞).
Combination Theorem (for continuous functions)
For continuous (real) functions f and g, f+g, f−g, f×g, f∘g, and f/g are all continuous.
(In the division case, f/g is discontinuous wherever g(x) = 0.)
I.e., the arithmetic operations “preserve” continuity (except div by 0).
Intermediate Value Theorem
For (real) function f continuous on [l,r],
let L = f(l), R = f(r), L < R; then ∀k ∈ [L, R] ∃a ∈ [l, r] : f(a) = k
I.e., the image of f over [l, r] contains [f(l), f(r)] (it may be larger).
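The IVT is what makes bisection root-finding work: if a continuous f changes sign on [l, r], some root lies inside. A minimal sketch (bisect is a hypothetical helper, not scipy's):

```python
def bisect(f, l, r, tol=1e-9):
    """Find a with f(a) ≈ 0, assuming f continuous and f(l), f(r) of opposite signs."""
    assert f(l) * f(r) <= 0
    while r - l > tol:
        m = (l + r) / 2
        # keep the half-interval where the sign still changes
        if f(l) * f(m) <= 0:
            r = m
        else:
            l = m
    return (l + r) / 2

root = bisect(lambda x: x * x - 2, 0, 2)  # ≈ √2
```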
3. Vector Spaces
See the notes on Algebraic Structures, which formally define this towards the bottom.
Collinearity
u = av ⟺ u and v are collinear (for some scalar a ∈ F).
Linear Combination
some au+bv
Span
the set of all linear combinations of u∈S for some family of vectors S.
Linear Independence
not collinear (for two vectors)
For a set of more than two vectors, it means that no vector is in the span of the others.
I.e., a₁u₁ + ⋯ + aₙuₙ = 0 only when all aᵢ = 0 (no nontrivial combination is zero).
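A sketch of checking independence by row reduction: the vectors are independent iff none of them cancels to a zero row (rank and independent are hypothetical helpers using naive floating-point elimination):

```python
def rank(rows):
    """Gauss-Jordan eliminate a list of vectors; count the nonzero rows left."""
    rows = [list(map(float, v)) for v in rows]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows))
                      if abs(rows[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > 1e-12:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)
```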
Dimensionality
yes, Rⁿ denotes an n-dimensional vector space over R
Basis
A set S of vectors where the span of S is V (its vector space) and S is linearly independent.
∣S∣ is the dimension of V.
A canonical basis is the obvious/standard one, e.g. {(1, 0), (0, 1)} for R².
Linear Maps
A mapping/transformation (between vector spaces) f : V ↦ W is linear if ∀u, v ∈ V, ∀a ∈ F : f(u + v) = f(u) + f(v) ∧ f(a⋅u) = a⋅f(u).
Remember that linear maps can be composed.
A linear map f is an endomorphism iff V = W (in f : V ↦ W).
Kernel
Ker(f)={v∈V∣f(v)=0} (for a linear map f:V↦W on vector space V).
This is aka. the Null Space (of the equivalent matrix representation).
Image
usual definition as per any other function
Polynomials (over F, in some indeterminate) form a vector space (over F).
The canonical basis of R₂[x] (polynomials over R of degree ≤ 2) is {1, x, x²}.
4. Matrices
A linear map f : V ↦ W, where V has dimension m and W has dimension n, can be represented as an n×m matrix.
Conversely, all matrices (of a field F) are representative of some linear map across the vector spaces of F.
Matrix multiplication is the equivalent of function composition, and as such shares its properties (associative, generally not commutative). M(m, n) × M(n, k) ∈ M(m, k).
Remember that matrix products apply right-to-left: in ABv, B acts on v first (just like composition).
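A sketch of this correspondence with lists-of-lists (matmul is a hypothetical helper; the example matrices are arbitrary):

```python
def matmul(A, B):
    # (m×n)·(n×k) → (m×k)
    return [[sum(A[i][p] * B[p][j] for p in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

scale = [[2, 0], [0, 2]]   # scale by 2
swap  = [[0, 1], [1, 0]]   # swap coordinates
v     = [[3], [5]]         # column vector

# associativity = "compose first, or apply one at a time — same result"
assert matmul(matmul(scale, swap), v) == matmul(scale, matmul(swap, v))
```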
M has an inverse ⟺ M's column vectors are linearly independent ⟺ M's row vectors are linearly independent ⟺ Ker(M) = {0} ⟺ its equivalent map f is bijective
I.e., M does not reduce (change) dimensionality; an invertible M must be a square matrix.
Determinant
ad − bc for a 2×2 matrix; otherwise, Laplace (cofactor) expansion along a row or column. M is invertible ⟺ det(M) ≠ 0.
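The 2×2 case as code (det2 is a hypothetical helper):

```python
def det2(M):
    # determinant of [[a, b], [c, d]] is ad − bc
    (a, b), (c, d) = M
    return a * d - b * c

assert det2([[1, 2], [3, 4]]) == -2   # det ≠ 0: invertible
assert det2([[1, 2], [2, 4]]) == 0    # rows collinear: not invertible
```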
5. Sequences & Series
Monotonic
non-decreasing or non-increasing sequence
Bounded
sequence whose values stay within fixed bounds: ∃M : ∣aₙ∣ ≤ M for all n
Convergence
the existence of a limit at infinity for a sequence
Theorem of Monotone Convergence
a monotone sequence converges iff it is bounded
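E.g. a₁ = 1, aₙ₊₁ = √(2 + aₙ) is non-decreasing and bounded above by 2, so the theorem says it converges (here, to 2):

```python
import math

a = 1.0
for _ in range(60):
    prev, a = a, math.sqrt(2 + a)
    assert a >= prev   # monotone (non-decreasing)
    assert a <= 2      # bounded above

# bounded + monotone ⟹ convergent; the limit solves L = √(2 + L), i.e. L = 2
assert abs(a - 2) < 1e-9
```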
Squeeze Theorem
if aₙ ≤ bₙ ≤ cₙ (eventually) and lim aₙ = lim cₙ = L, then lim bₙ = L
Arithmetic Series
Sₙ = n ⋅ (a₁ + aₙ)/2 = n ⋅ (2a₁ + (n − 1)d)/2
Geometric
Sₙ = a ⋅ (1 − rⁿ)/(1 − r); S∞ = a ⋅ 1/(1 − r) (converges when ∣r∣ < 1)
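Both closed forms can be checked against a brute-force sum (the parameters are arbitrary):

```python
# arithmetic: Sn = n(2a1 + (n−1)d)/2
a1, d, n = 3, 4, 10
terms = [a1 + k * d for k in range(n)]
assert sum(terms) == n * (2 * a1 + (n - 1) * d) // 2

# geometric: Sn = a(1 − r^n)/(1 − r), and S∞ = a/(1 − r) for |r| < 1
a, r = 5, 0.5
geo = [a * r ** k for k in range(n)]
assert abs(sum(geo) - a * (1 - r ** n) / (1 - r)) < 1e-12
assert abs(a / (1 - r) - 10) < 1e-12
```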
Absolute Convergence
when the absolute series ∑∣an∣ converges
Note that absolute convergence is stronger than (implies) convergence.
D’Alembert Criterion
For the term ratio rₙ = ∣aₙ₊₁ / aₙ∣ of a series, lim inf_{n→∞} rₙ > 1 ⟹ the series diverges; lim sup_{n→∞} rₙ < 1 ⟹ the series converges absolutely;
otherwise the test is inconclusive :shrug:
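E.g. for Σ xⁿ/n! (the series for eˣ) the ratio is ∣x∣/(n + 1) → 0 < 1, so it converges absolutely; a numeric sketch:

```python
import math

x = 3.0
# |a_{n+1}/a_n| = |x|/(n+1), which tends to 0
ratios = [abs((x ** (n + 1) / math.factorial(n + 1)) /
              (x ** n / math.factorial(n)))
          for n in range(1, 30)]
assert max(ratios[10:]) < 1   # eventually below 1 ⟹ absolute convergence

s = sum(x ** n / math.factorial(n) for n in range(50))
```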
Polynomials
A polynomial splits if it can be factored into just degree-1 (linear) factors.
Euclidean Division
For polynomials A and B (B ≠ 0), there exist Q and R s.t. A = Q⋅B + R, deg(R) < deg(B)
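A sketch of long division with coefficient lists, highest degree first (polydiv is a hypothetical helper, not numpy's):

```python
def polydiv(A, B):
    """Return (Q, R) with A = Q*B + R and deg(R) < deg(B)."""
    A = list(A)
    Q = []
    while len(A) >= len(B):
        # cancel A's leading term against B's leading term
        factor = A[0] / B[0]
        Q.append(factor)
        padded = B + [0] * (len(A) - len(B))
        A = [a - factor * b for a, b in zip(A, padded)][1:]
    return Q, A

# (x² + 3x + 2) ÷ (x + 1) = (x + 2), remainder 0
Q, R = polydiv([1, 3, 2], [1, 1])
```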
In R, the irreducible polynomials are those of degree 1, and those of degree 2 with negative discriminant.
Every polynomial over R can be factorized into a product of linear and (irreducible) degree-2 factors.
In C, all polynomials split into linear factors.
The multiplicity of a root a is the largest k such that (x − a)ᵏ divides the polynomial.
9. Probabilities
Universe / sample space Ω
An event is a set of “accepting” outcomes; a subset of the universe.
As such, set operations naturally correspond to logical operations: ∪ ↔ or, ∩ ↔ and, complement ↔ not.
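A tiny sketch with Python sets (the die-roll sample space is an arbitrary example):

```python
# sample space: one die roll
omega = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}
big  = {4, 5, 6}

assert even | big == {2, 4, 5, 6}    # union ↔ "even or big"
assert even & big == {4, 6}          # intersection ↔ "even and big"
assert omega - even == {1, 3, 5}     # complement ↔ "not even"
```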