COMP0011 but wtf

1. Complex Numbers

The Imaginary Unit
i = \sqrt{-1}
I.e., i^2 = -1.
Complex Numbers
\mathbb{C} = \big\{ a + bi \;\big\vert\; a, b \in \R \big\}
I.e., a linear combination of 1 and i.

We write z = a + bi to illustrate the components of a z \in \mathbb{C}.

Remember that \R \subset \mathbb{C}.

Forms of Representation

Complex numbers are commonly treated as points on a 2-d plane; 1 is the horizontal unit vector and i is the vertical unit vector.
In this case, rectangular form is equivalent to Cartesian coordinates; polar form is equivalent to, well, polar coordinates.

Rectangular
the 2-term linear combination described above
For a z = a + bi, a is the real component and b is the imaginary component.

Aka. Cartesian form (for obvious reasons).

Polar
whirry form.
For a z = a + bi = r(\cos \theta + i \sin \theta),
r = |z| = \sqrt{a^2 + b^2} is the modulus and \theta = \operatorname{arg}(z) = \operatorname{atan2}(b, a) is the argument.
Also, z = re^{i\theta} (ask euler).
r, \theta \in \R, r \ge 0.

r \operatorname{cis} \theta is short for r(\cos \theta + i \sin \theta)
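Both conversions are one-liners in Python's cmath (the example value 3 + 4i is mine):

```python
import cmath

# Hypothetical example value -- any complex number works.
z = 3 + 4j

# Rectangular -> polar: modulus r and argument theta.
r, theta = cmath.polar(z)          # r = |z|, theta = atan2(b, a)
assert abs(r - 5.0) < 1e-12        # |3 + 4i| = 5

# Polar -> rectangular: r * (cos(theta) + i*sin(theta)) = r * e^(i*theta).
z_back = cmath.rect(r, theta)
assert abs(z_back - z) < 1e-12
```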

Arithmetic

In rectangular form, everything actually just works under standard “algebraic/symbolic” arithmetic, treating i as an atomic symbol, and simplifying i^2 to -1 wherever possible.

Addition
(a_1 + b_1 i) + (a_2 + b_2 i) = (a_1 + a_2) + (b_1 + b_2) i

|z_1 + z_2| \le |z_1| + |z_2| (the triangle inequality)

Multiplication
(a_1 + b_1 i) \times (a_2 + b_2 i) = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + b_1 a_2) i
(r_1 \operatorname{cis} \theta_1) \times (r_2 \operatorname{cis} \theta_2) = r_1 r_2 \operatorname{cis}(\theta_1 + \theta_2)
Conjugation
The conjugate of x is denoted \bar x.
\overline{a + bi} = a - bi
\overline{r \operatorname{cis} \theta} = r \operatorname{cis}(-\theta)

Observe that x \times \bar x = |x|^2 \in \R

Division
\displaystyle \frac{z_1}{z_2} = \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 a_2 + b_1 b_2}{z_2 \bar z_2} + \frac{b_1 a_2 - a_1 b_2}{z_2 \bar z_2} i
\displaystyle \frac{z_1}{z_2} = \frac{r_1 \operatorname{cis} \theta_1}{r_2 \operatorname{cis} \theta_2} = \frac{r_1}{r_2} \operatorname{cis}(\theta_1 - \theta_2)
Exponentiation
(r \operatorname{cis} \theta)^n = r^n \operatorname{cis}(n \theta)

This is De Moivre’s Theorem.
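A numeric spot-check of De Moivre's Theorem (the values of r, θ, n are arbitrary picks of mine):

```python
import cmath

r, theta, n = 2.0, 0.7, 5

# Left-hand side: (r cis θ)^n computed directly.
lhs = cmath.rect(r, theta) ** n

# Right-hand side: r^n cis(nθ), per De Moivre.
rhs = cmath.rect(r ** n, n * theta)

assert abs(lhs - rhs) < 1e-9
```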

Standard Angles

know them stuffs

Roots of Unity

I.e., the complex n-th roots of 1: the solutions of z^n = 1, for some n \in \N

\displaystyle z^n = (re^{i\theta})^n = r^n e^{in\theta} = 1
Clearly, r = 1;
\operatorname{arg}(z^n) = \operatorname{arg}(1 + 0i) = 2k\pi (for any k \in \Z)
\displaystyle \implies n\theta = 2k\pi \iff \theta = \frac{2k\pi}{n}
I.e., \displaystyle z^n = 1 \implies z = \operatorname{cis}\Big( k \frac{2\pi}{n} \Big)

The n unique solutions are given by the first n values of k (from 0 to n-1).
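Computing them directly (a sketch in Python; n = 6 is an arbitrary choice of mine):

```python
import cmath

n = 6
# The n distinct n-th roots of unity: cis(2πk/n) = e^(i·2πk/n) for k = 0..n-1.
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# Each one satisfies z^n = 1 (up to floating-point error).
assert all(abs(z ** n - 1) < 1e-9 for z in roots)
# And they are pairwise distinct.
assert len({(round(z.real, 9), round(z.imag, 9)) for z in roots}) == n
```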

Geometric Interpretation

The n-th roots of unity sit evenly spaced on the unit circle, forming a regular n-gon with one vertex at 1.

2. Continuous Functions

Function
A \mapsto B, for sets A and B
Maps every a \in A to some b \in B

Exp & Log

Exponential Function
\operatorname{exp}: A \mapsto A for a ring A where
\operatorname{exp}(a + b) = \operatorname{exp}(a) \ast \operatorname{exp}(b)
Logarithm Function
Inverse of the above
\log_a (a^x) = x

\ln(x \times y) = \ln(x) + \ln(y)
\ln(x / y) = \ln(x) - \ln(y)
\ln(x^y) = y \ln(x)
\log_b(x) = 1 / \log_x(b)
\log_b(x) = \ln(x) / \ln(b)
(\ln denotes a logarithm of arbitrary (but same within the same line) base.)
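All of these check out numerically (the positive test values are mine):

```python
import math

x, y, b = 12.0, 5.0, 7.0   # arbitrary positive test values

assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
assert math.isclose(math.log(x / y), math.log(x) - math.log(y))
assert math.isclose(math.log(x ** y), y * math.log(x))
# Change of base, and the reciprocal identity log_b(x) = 1 / log_x(b):
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
assert math.isclose(math.log(x, b), 1 / math.log(b, x))
```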

Limits & Continuity

(\epsilon, \delta)-definition of a limit:
\lim_{x \to a} f(x) = L \iff \big( \forall \varepsilon \gt 0 \ \exists \delta \gt 0 : 0 \lt |x - a| \lt \delta \implies |f(x) - L| \lt \varepsilon \big)

Observe that due to \bm{0 \lt} |x - a|, f does not have to be continuous (or even defined) at a.

\lim_{x \to +\infty} f(x) = L \iff \lim_{x' \to 0^+} f(1 / x') = L
\lim_{x \to a} |f(x)| = \infty \iff \lim_{x \to a} 1 / f(x) = 0

Little oo notation
f = o(g) near b \iff \displaystyle \lim_{x \to b} \frac{f(x)}{g(x)} = 0
Big OO notation
f = O(g) near b \iff \displaystyle \limsup_{x \to b} \left| \frac{f(x)}{g(x)} \right| < \infty
Indeterminate Forms
0 / 0
\infty / \infty
0 \times \infty
\infty - \infty
0^0
1^\infty ¹
\infty^0

Limit-definition of continuity:
f : D \mapsto \R is continuous at a \iff \lim_{x \to a} f(x) = f(a)

A function is continuous on an open interval (l, r) if it is continuous at all values between l and r.
A function is continuous (everywhere) if it is continuous on (-\infty, +\infty).

Combination Theorem (for continuous functions)
For continuous (real) functions f and g,
f + g, f - g, f \times g, f \circ g, and f / g are all continuous.
(In the division case, discontinuous where g(x) = 0)

I.e., the arithmetic operations “preserve” continuity (except div by 0).

Intermediate Value Theorem
For a (real) function f continuous on [l, r],
let L = f(l), R = f(r), L \lt R; then
\forall k \in [L, R] \ \exists a \in [l, r] : f(a) = k

I.e., the image of f over [l, r] contains at least [f(l), f(r)].
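The IVT is exactly what makes the bisection method work; a minimal sketch (the root-of-x²−2 example is mine):

```python
def bisect(f, l, r, tol=1e-12):
    """Find a root of a continuous f on [l, r], assuming f(l) and f(r)
    have opposite signs -- existence is guaranteed by the IVT (k = 0)."""
    assert f(l) * f(r) <= 0
    while r - l > tol:
        m = (l + r) / 2
        if f(l) * f(m) <= 0:
            r = m          # sign change is in the left half
        else:
            l = m          # sign change is in the right half
    return (l + r) / 2

# Toy example: sqrt(2) as the root of x^2 - 2 on [1, 2].
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9
```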

3. Vector Spaces

See the notes on Algebraic Structures, which formally define this towards the bottom.

Collinearity
\vec{u} = a\vec{v} \iff \vec{u} and \vec{v} are collinear (for a \in F).
Linear Combination
some a\vec{u} + b\vec{v}
Span
the set of all linear combinations of \vec{u} \in S for some family of vectors S.
Linear Independence
not collinear
For a set of vectors (more than 2), it means that no vector is in the span of the other(s).

I.e., there are no coefficients, not all zero, s.t. a_1\vec{u}_1 + \cdots + a_n\vec{u}_n = \vec{0}

Dimensionality
yes
\R^n denotes an n-dimensional vector space over \R
Basis
A set S of vectors where the span of S is V (its vector space)
(and S is linearly independent)
|S| is the dimension of V.

The canonical basis is, yeah, the obvious one (e.g. the standard unit vectors in \R^n).

Linear Maps
A mapping/transformation (between vector spaces) f : V \mapsto W is linear if
\forall u, v \in V \ \forall a \in F : \quad f(u + v) = f(u) + f(v) \ \land\ f(a \cdot u) = a \cdot f(u).

Remember that linear maps can be composed.
A linear map f is an endomorphism iff V = W (in f : V \mapsto W).

Kernel
\operatorname{Ker}(f) = \{ v \in V \mid f(v) = \vec{0} \} (for a linear map f: V \mapsto W on vector space V).

This is aka. the Null Space (of the equivalent matrix representation).

Image
usual definition as per any other function

Polynomials (over F, in some indeterminate) form a vector space (over F).
The canonical basis of \R_2[x] is \{ 1, x, x^2 \}.

4. Matrices

A linear map f : V \mapsto W where V has dimension m and W dimension n can be represented as an n \times m matrix.
Conversely, every matrix (over a field F) represents some linear map between vector spaces over F.

Matrix multiplication is the equivalent of function composition, and as such shares all the properties of composition.
\mathcal{M}(m, n) \times \mathcal{M}(n, k) \subseteq \mathcal{M}(m, k).
Remember that matrix multiplication, like composition, notationally applies right-to-left.
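A bare-bones matmul to make the composition point concrete (the example matrices and vector are made up):

```python
def matmul(A, B):
    """(m x n) times (n x k) -> (m x k), per the dimension rule above."""
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(k)]
            for i in range(m)]

# Composition: (A B) v equals A applied to (B applied to v) --
# the rightmost factor acts first.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]       # swaps coordinates
v = [[5], [7]]             # column vector as a 2x1 matrix

assert matmul(matmul(A, B), v) == matmul(A, matmul(B, v))
```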

M has an inverse
\iff M as column vectors are linearly independent
\iff M as row vectors are linearly independent
\iff \operatorname{Ker}(M) = \{\vec{0}\}
\iff its equivalent map f is bijective

I.e., M does not reduce (change) dimensionality – M must be a square matrix.

Determinant

ad - bc for a 2 \times 2 \big(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\big); otherwise eeeeeeeeee (cofactor expansion).
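The 2×2 case in code, including the inverse-exists-iff-det-nonzero link from above (the example matrix is mine):

```python
def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

def inv2(M):
    """Inverse of a 2x2 matrix; only exists when det != 0."""
    (a, b), (c, d) = M
    D = det2(M)
    assert D != 0, "singular: columns are linearly dependent"
    return [[d / D, -b / D], [-c / D, a / D]]

M = [[2, 1], [5, 3]]       # det = 2*3 - 1*5 = 1
assert det2(M) == 1
assert inv2(M) == [[3.0, -1.0], [-5.0, 2.0]]
```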

5. Sequences & Series

Monotonic
non-decreasing or non-increasing sequence
Bounded
sequence whose values all stay within some finite bound
Convergence
the existence of a limit at infinity for a sequence
Theorem of Monotone Convergence
a monotone sequence converges iff it is bounded
Squeeze Theorem
yeah
Arithmetic Series
\displaystyle S_n = n \cdot \frac{a_1 + a_n}{2} = n \cdot \frac{2 a_1 + (n - 1) d}{2}
Geometric
\displaystyle S_n = a \cdot \frac{1 - r^n}{1 - r} \qquad (r \ne 1)
\displaystyle S_\infty = a \cdot \frac{1}{1 - r} (converges when |r| < 1)
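A quick check that the partial sums actually approach S_∞ (a = 3, r = 0.5 are my picks):

```python
a, r = 3.0, 0.5            # |r| < 1, so S_inf exists

def S(n):
    # Partial sum via the closed form a(1 - r^n)/(1 - r).
    return a * (1 - r ** n) / (1 - r)

S_inf = a / (1 - r)        # = 6.0 here

# Partial sums approach the infinite sum as n grows.
assert abs(S(50) - S_inf) < 1e-9
# Closed form matches the naive term-by-term sum:
assert abs(S(10) - sum(a * r ** k for k in range(10))) < 1e-12
```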
Absolute Convergence
when the absolute series \sum |a_n| converges

Note that absolute convergence is stronger than (implies) convergence.

D’Alembert Criterion
For ratio r = \left| \frac{a_{n + 1}}{a_n} \right| of a series,
\liminf_{n \to \infty} r > 1 \implies series diverges;
\limsup_{n \to \infty} r < 1 \implies series absolutely converges;
otherwise :shrug:

6. Differential Calculus

\displaystyle \frac{\mathrm{d}}{\mathrm{d}x} f(x) = f'(x)

f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}
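The limit definition suggests a numerical approximation with small finite h; the symmetric variant below is a standard choice of mine, not from these notes:

```python
import math

def diff(f, a, h=1e-6):
    # Symmetric difference quotient: a better-behaved finite-h stand-in
    # for the limit definition above.
    return (f(a + h) - f(a - h)) / (2 * h)

# d/dx sin(x) at 0 should be cos(0) = 1.
assert abs(diff(math.sin, 0.0) - 1.0) < 1e-8
# d/dx x^3 at 2 should be 3 * 2^2 = 12.
assert abs(diff(lambda x: x ** 3, 2.0) - 12.0) < 1e-5
```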

Elementary Differentiation Rules

Linearity
\big( a f(x) \big)' = a f'(x)
\big( f(x) + g(x) \big)' = f'(x) + g'(x)
Product Rule
\displaystyle \big( f(x)g(x) \big)' = f'(x)g(x) + f(x)g'(x)
Chain Rule
\displaystyle \big( (f \circ g)(x) \big)' = (f' \circ g)(x) \, g'(x)
Inverse Function Rule
\displaystyle \big( f^{-1}(x) \big)' = 1 / (f' \circ f^{-1})(x)
Power Rule
\displaystyle \Big( f(x)^{g(x)} \Big)' = \exp\!\big( g(x) \ln f(x) \big)' = \Big( g(x) \frac{f'(x)}{f(x)} + g'(x) \ln f(x) \Big) f(x)^{g(x)}

Derived rules:

Poorman’s Power Rule
\displaystyle \frac{\mathrm{d}}{\mathrm{d}x} x^n = nx^{n-1}
Reciprocal Rule
\displaystyle \big( 1/f(x) \big)' = -f'(x) / f(x)^2
Quotient Rule
\displaystyle \bigg( \frac{f(x)}{g(x)} \bigg)' = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}

Exp & Log rules:

Log Rule
\displaystyle \big( \log_c x \big)' = \frac{1}{x \ln c} \qquad c > 0,\ c \ne 1
\displaystyle \big( \ln |x| \big)' = \frac{1}{x} \qquad x \ne 0
Exp Rule
\displaystyle ( c^{ax} )' = ac^{ax} \ln c \qquad c > 0
\displaystyle ( e^{ax} )' = ae^{ax}

Standard Derivatives (Trig)

\begin{align*} \sin x &\to \cos x \\ \cos x &\to -\sin x \\ \tan x &\to \sec^2 x = 1 + \tan^2 x \\ \csc x &\to -\csc x \cot x \\ \sec x &\to \sec x \tan x \\ \cot x &\to -\csc^2 x = -1 - \cot^2 x \\ \arcsin x &\to \frac{1}{\sqrt{1 - x^2}} \\ \arccos x &\to -\frac{1}{\sqrt{1 - x^2}} \\ \arctan x &\to \frac{1}{1 + x^2} \\ \end{align*}

7. Integral Calculus

8. Polynomials

\operatorname{deg}(P + Q) \le \max\!\big( \operatorname{deg}(P), \operatorname{deg}(Q) \big)
\operatorname{deg}(P \times Q) = \operatorname{deg}(P) + \operatorname{deg}(Q)

Irreducible Polynomial
a polynomial that cannot be factored into two polynomials both of degree ≥ 1

A polynomial splits if it can be factored into just degree-1 (linear) factors.

Euclidean Division
For polynomials A and B (with B \ne 0), there exist unique Q and R s.t.
A = Q \cdot B + R, \quad \operatorname{deg}(R) \lt \operatorname{deg}(B)
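Euclidean division can be sketched as repeated leading-term elimination (toy implementation; representing a polynomial as a lowest-degree-first coefficient list is my convention):

```python
def polydiv(A, B):
    """Euclidean division of polynomials given as coefficient lists,
    lowest degree first: returns (Q, R) with A = Q*B + R, deg R < deg B."""
    A, Q = A[:], [0.0] * (max(len(A) - len(B), 0) + 1)
    while len(A) >= len(B) and any(A):
        shift = len(A) - len(B)
        coef = A[-1] / B[-1]        # kill A's leading term
        Q[shift] = coef
        # Subtract coef * x^shift * B from A.
        for i, b in enumerate(B):
            A[shift + i] -= coef * b
        A.pop()                     # leading term is now zero
    return Q, A

# Toy example: x^2 + 1 divided by x - 1 gives Q = x + 1, R = 2.
Q, R = polydiv([1.0, 0.0, 1.0], [-1.0, 1.0])
assert Q == [1.0, 1.0] and R == [2.0]
```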

In \R, all irreducible polynomials are of degree 1, or of degree 2 with a negative discriminant.
Every polynomial in \R can be factorized into a combination of linears and (irreducible) degree-2’s.

In \mathbb{C}, all polynomials split into linear factors.

The multiplicity of a root is its, well, multiplicity.

9. Probabilities

Universe / sample space \Omega
An event is a set of “accepting” outcomes; a subset of the universe.
As such, set operations naturally correspond to logical operations.

A \cup A' = \Omega, obviously.

The essentials:
\begin{align*} P(A) &= P(A \cap B) + P(A \cap B') \\[1em] P(A \cap B) &= P(A) \cdot P(B | A) = P(B) \cdot P(A | B) \\[1em] P(A | B) &= \frac{P(A \cap B)}{P(B)} \\[1em] P(A | B) &= \frac{P(A) \cdot P(B | A)}{P(B)} \\[1em] P(A | B) &= \frac{P(A) \cdot P(B | A)}{P(A) \cdot P(B | A) + P(A') \cdot P(B | A')} \end{align*}

A and B are independent if P(A \cap B) = P(A) \cdot P(B) (or, equivalently,) P(A | B) = P(A); P(B | A) = P(B).
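Plugging numbers into the last (Bayes-with-total-probability) form above; the test/condition figures are made up:

```python
# Hypothetical numbers: a test that is 99% sensitive, 95% specific,
# for a condition with 1% prevalence.
p_a = 0.01                  # P(A): has the condition
p_b_given_a = 0.99          # P(B | A): test positive given condition
p_b_given_not_a = 0.05      # P(B | A'): false-positive rate

# Denominator via the law of total probability.
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
p_a_given_b = p_a * p_b_given_a / p_b

assert abs(p_b - 0.0594) < 1e-12
assert abs(p_a_given_b - 1 / 6) < 1e-12   # positive test => only ~16.7% chance
```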

Expected Value:

For a numeric sample space,
\operatorname{E}[X] = \sum_{x \in \Omega} x \cdot P(X = x)

\operatorname{E} is linear; \operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y] and \operatorname{E}[a \cdot X] = a \cdot \operatorname{E}[X].

For Binomial X, \operatorname{E}[X] = np.
For Normal X, \operatorname{E}[X] = \mu.

Variance:

\begin{align*} \operatorname{Var}(X) &= \operatorname{E}\!\big[ (X - \operatorname{E}[X])^2 \big] \\ &= \operatorname{E}\!\big[X^2 - 2X\operatorname{E}[X] + \operatorname{E}[X]^2 \big] \\ &= \operatorname{E}[X^2] - 2\operatorname{E}[X]\operatorname{E}[X] + \operatorname{E}[X]^2 \\ &= \operatorname{E}[X^2] - \operatorname{E}[X]^2 \end{align*}

For Bernoulli X, \operatorname{Var}(X) = p - p^2 = p(1 - p).
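Verifying the Bernoulli case straight from the E[X²] − E[X]² formula (p = 0.3 is arbitrary):

```python
# Exact computation over the two-outcome sample space {0, 1}.
p = 0.3
pmf = {0: 1 - p, 1: p}

e_x = sum(x * q for x, q in pmf.items())        # E[X]   = p
e_x2 = sum(x ** 2 * q for x, q in pmf.items())  # E[X^2] = p (since 0^2=0, 1^2=1)
var = e_x2 - e_x ** 2

assert abs(e_x - p) < 1e-12
assert abs(var - p * (1 - p)) < 1e-12           # p - p^2 = p(1 - p)
```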

10. Statistics

Standard deviation, \displaystyle \sigma = \sqrt{\operatorname{Var}} = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}

Confidence Interval
\displaystyle \mu \pm z \left( \frac{\sigma}{\sqrt{N}} \right)

% z-score
80% 1.28
85% 1.44
90% 1.64
95% 1.96
98% 2.33
99% 2.58
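Putting the formula and the table together (the sample statistics are made up):

```python
import math

# Hypothetical sample: mean 50, std dev 10, N = 100 observations.
mu, sigma, N = 50.0, 10.0, 100
z = 1.96                    # 95% confidence, from the table above

half_width = z * sigma / math.sqrt(N)
lo, hi = mu - half_width, mu + half_width

# 95% CI is mu +/- 1.96 * (10 / 10) = 50 +/- 1.96.
assert abs(half_width - 1.96) < 1e-9
assert abs(lo - 48.04) < 1e-9 and abs(hi - 51.96) < 1e-9
```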

Hypothesis Testing

H_0, the null hypothesis, and H_1, the alternative hypothesis

Assume H_0 and calculate the probability of the observed outcome; H_0 is rejectable if that probability falls below a chosen significance threshold.


  1. not literally indeterminate (a constant base of exactly 1 gives 1), but indeterminate when the base merely tends to 1 (go google) ↩︎