Applied Mathematics Essentials

Applied math provides the computational and analytical tools underpinning science and engineering. Core areas: linear algebra, calculus, differential equations, probability/stats, and numerical methods.

## Linear Algebra

Vectors: dot product a·b = |a||b|cosθ = Σaᵢbᵢ (scalar, measures projection/similarity). Cross product a×b = |a||b|sinθ n̂ (vector, perpendicular to both, magnitude = parallelogram area). Orthogonal vectors: a·b = 0.

Matrices: m×n array, multiplication (AB)ᵢⱼ = Σₖ aᵢₖbₖⱼ. Not commutative: AB ≠ BA generally. Identity matrix I: AI = IA = A. Inverse A⁻¹ exists iff det(A) ≠ 0. For 2×2: A⁻¹ = (1/det)[d,-b;-c,a]. Transpose: (AB)ᵀ = BᵀAᵀ. Orthogonal matrix: QᵀQ = I (columns are orthonormal).
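The 2×2 inverse formula can be checked directly; a minimal sketch with an arbitrary invertible matrix (entries chosen for illustration), multiplying back to confirm AA⁻¹ = I:

```python
# 2x2 inverse from the cofactor formula A^{-1} = (1/det)[d,-b;-c,a],
# verified by multiplying back toward the identity.
a, b, c, d = 4.0, 7.0, 2.0, 6.0
det = a*d - b*c                          # 24 - 14 = 10, nonzero -> invertible
inv = [[ d/det, -b/det],
       [-c/det,  a/det]]
# A @ A^{-1}, written out entrywise
prod = [[a*inv[0][0] + b*inv[1][0], a*inv[0][1] + b*inv[1][1]],
        [c*inv[0][0] + d*inv[1][0], c*inv[0][1] + d*inv[1][1]]]
print(prod)  # ~ [[1.0, 0.0], [0.0, 1.0]] up to float rounding
```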

Determinant: det(A) = ad-bc for 2×2. Properties: det(AB) = det(A)det(B), det(Aᵀ) = det(A), row swap changes sign, det = 0 iff singular. Geometric interpretation: absolute value = volume scaling factor of linear transformation.

Eigenvalues/vectors: Av = λv, find λ from det(A-λI) = 0 (characteristic polynomial). Eigenvectors span eigenspaces. Symmetric matrices: real eigenvalues, orthogonal eigenvectors. Spectral decomposition A = QΛQᵀ. Applications: PCA (variance maximization), vibration modes, stability analysis (eigenvalues of Jacobian).
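For a 2×2 matrix the characteristic polynomial reduces to λ² − tr(A)λ + det(A) = 0, solvable by the quadratic formula; a sketch with a symmetric example matrix (real eigenvalues guaranteed):

```python
import math

# Eigenvalues of a symmetric 2x2 matrix via det(A - lambda*I) = 0,
# i.e. lambda^2 - tr(A)*lambda + det(A) = 0.
A = [[2.0, 1.0],
     [1.0, 2.0]]
tr = A[0][0] + A[1][1]                    # trace = sum of eigenvalues
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # det = product of eigenvalues
disc = tr*tr - 4*det                      # discriminant (>= 0 for symmetric A)
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
print(lam1, lam2)  # 3.0 1.0; eigenvectors (1,1) and (1,-1) are orthogonal
```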

SVD: A = UΣVᵀ where U,V orthogonal, Σ diagonal w/ singular values σᵢ ≥ 0. Always exists. Low-rank approx by truncating smallest σᵢ (image compression, noise reduction, recommender systems). Pseudoinverse A⁺ = VΣ⁺Uᵀ for least-squares solutions.

Linear systems Ax = b: Gaussian elimination O(n³). Cramer's rule (theoretical, impractical for large n). Existence/uniqueness: rank(A) = rank([A|b]) for consistency, rank = n for unique solution. Overdetermined (m>n): least squares x̂ = (AᵀA)⁻¹Aᵀb minimizes ||Ax-b||².
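For a one-variable line fit, the normal equations (AᵀA)x̂ = Aᵀb reduce to a 2×2 system solvable by hand; a minimal sketch with made-up data lying exactly on y = 1 + 2x:

```python
# Least-squares line fit via the normal equations, written out for the
# two-parameter model y = b0 + b1*x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]        # exactly on y = 1 + 2x
n = len(xs)
sx = sum(xs); sy = sum(ys)
sxx = sum(x*x for x in xs)
sxy = sum(x*y for x, y in zip(xs, ys))
# Solve [[n, sx], [sx, sxx]] @ [b0, b1] = [sy, sxy] by Cramer's rule
det = n*sxx - sx*sx
b0 = (sy*sxx - sx*sxy) / det
b1 = (n*sxy - sx*sy) / det
print(b0, b1)  # 1.0 2.0
```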

## Calculus

Limits: formalized by the ε-δ definition; L'Hôpital's rule for 0/0 or ∞/∞ indeterminate forms. Continuity: f continuous at a if lim(x→a)f(x) = f(a). IVT: continuous f on [a,b] attains every value between f(a) and f(b).

Differentiation: power rule, product rule (fg)' = f'g + fg', quotient rule, chain rule dy/dx = (dy/du)(du/dx). Implicit differentiation when y not explicitly isolated. Applications: optimization (f'=0 and second derivative test), linear approximation f(a+Δx) ≈ f(a) + f'(a)Δx, related rates.
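Linear approximation in action, estimating √4.1 from the tangent line at a = 4 (a small illustrative sketch; f = √x, f'(x) = 1/(2√x)):

```python
import math

# f(a + dx) ~ f(a) + f'(a)*dx for f = sqrt at a = 4:
# sqrt(4.1) ~ 2 + (1/4)*0.1 = 2.025
a, dx = 4.0, 0.1
approx = math.sqrt(a) + (1.0 / (2.0 * math.sqrt(a))) * dx
exact = math.sqrt(a + dx)
print(approx, exact)  # 2.025 vs 2.02485... (error ~ O(dx^2))
```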

Integration: FTC — ∫ₐᵇ f(x)dx = F(b)-F(a) where F'=f. Techniques: substitution (reverse chain rule), integration by parts ∫udv = uv - ∫vdu (LIATE priority), partial fractions (rational functions), trig substitution (√(a²-x²) → x=asinθ). Improper integrals: convergence tests (comparison, limit comparison, p-test: ∫₁^∞ x⁻ᵖdx converges iff p>1).

Multivariable: partial derivatives ∂f/∂x (hold other vars constant). Gradient ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) points in direction of steepest ascent. Directional derivative Dᵤf = ∇f·û. Critical points: ∇f = 0, classify w/ Hessian matrix (second partial derivative test). Lagrange multipliers for constrained optimization: ∇f = λ∇g at extrema of f subject to g=0.
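The gradient can be approximated numerically by central differences; for the example function f(x, y) = x² + 3y² (chosen for illustration) the exact gradient at (1, 1) is (2, 6):

```python
# Central-difference approximation of the gradient of f(x,y) = x^2 + 3y^2
# at (1, 1); exact gradient is (2x, 6y) = (2, 6).
def f(x, y):
    return x*x + 3*y*y

h = 1e-6
gx = (f(1 + h, 1) - f(1 - h, 1)) / (2*h)   # d f / d x, holding y fixed
gy = (f(1, 1 + h) - f(1, 1 - h)) / (2*h)   # d f / d y, holding x fixed
print(gx, gy)  # ~ 2.0, 6.0
```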

Vector calculus: divergence ∇·F (scalar, source/sink measure), curl ∇×F (vector, rotation measure). Line integral ∫C F·dr. Green's theorem: ∮C F·dr = ∬D (∂Q/∂x - ∂P/∂y)dA. Stokes' theorem: ∮C F·dr = ∬S (∇×F)·dS. Divergence theorem: ∯S F·dS = ∭V (∇·F)dV.

## Differential Equations

First-order: separable dy/dx = g(x)h(y) → ∫dy/h(y) = ∫g(x)dx. Linear dy/dx + P(x)y = Q(x) → integrating factor μ = e^(∫P dx). Exact: M(x,y)dx + N(x,y)dy = 0 where ∂M/∂y = ∂N/∂x.

Second-order linear constant coeff: ay'' + by' + cy = 0. Characteristic equation ar² + br + c = 0. Two distinct real roots → y = c₁e^(r₁x) + c₂e^(r₂x). Repeated root → y = (c₁ + c₂x)e^(rx). Complex roots α±βi → y = e^(αx)(c₁cos(βx) + c₂sin(βx)). Nonhomogeneous: y = yh + yp, find yp via undetermined coefficients or variation of parameters.
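The root cases above can be checked by solving the characteristic quadratic directly; a sketch for the complex-root case y'' + 2y' + 5y = 0 (example coefficients), giving r = −1 ± 2i and hence damped oscillation e^(−x)(c₁cos 2x + c₂sin 2x):

```python
import cmath

# Roots of the characteristic equation a r^2 + b r + c = 0
# for y'' + 2y' + 5y = 0.
a, b, c = 1.0, 2.0, 5.0
disc = b*b - 4*a*c                    # -16 -> complex conjugate roots
r1 = (-b + cmath.sqrt(disc)) / (2*a)
r2 = (-b - cmath.sqrt(disc)) / (2*a)
print(r1, r2)  # (-1+2j) (-1-2j)
```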

Systems: dx/dt = Ax. Solution involves eigenvalues of A. Stability: all eigenvalues have Re(λ) < 0 → asymptotically stable. Phase portraits: node (real eigenvalues same sign), saddle (opposite signs), spiral (complex eigenvalues), center (purely imaginary).

PDEs: wave equation uₜₜ = c²uₓₓ, heat equation uₜ = αuₓₓ, Laplace equation ∇²u = 0. Separation of variables: assume u(x,t) = X(x)T(t), get two ODEs. Fourier series for periodic BC: f(x) = a₀/2 + Σ(aₙcos(nπx/L) + bₙsin(nπx/L)).

## Probability & Statistics

Probability axioms: P(Ω)=1, P(A)≥0, P(A∪B) = P(A)+P(B) if disjoint. Conditional: P(A|B) = P(A∩B)/P(B). Bayes' theorem: P(A|B) = P(B|A)P(A)/P(B). Independence: P(A∩B) = P(A)P(B).
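Bayes' theorem on a classic screening setup (the numbers here are hypothetical, chosen only to illustrate the base-rate effect): prevalence 1%, sensitivity 99%, false-positive rate 5%.

```python
# Bayes' theorem: P(disease | +) = P(+ | disease) P(disease) / P(+),
# with P(+) from the law of total probability. All rates hypothetical.
p_d = 0.01          # P(disease) -- prior / base rate
p_pos_d = 0.99      # P(+ | disease) -- sensitivity
p_pos_nd = 0.05     # P(+ | no disease) -- false-positive rate
p_pos = p_pos_d*p_d + p_pos_nd*(1 - p_d)   # total probability of a +
p_d_pos = p_pos_d*p_d / p_pos              # posterior
print(round(p_d_pos, 4))  # ~0.1667: most positives are still false
```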

Distributions: Bernoulli (single trial p), Binomial (n trials, P(X=k) = C(n,k)p^k(1-p)^(n-k)), Poisson (rare events, P(X=k) = λ^k e^(−λ)/k!, mean=var=λ). Normal N(μ,σ²): 68-95-99.7 rule. CLT: sample mean distribution → normal as n→∞ regardless of population distribution. Standard error SE = σ/√n.
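The Binomial and Poisson pmfs can be written straight from their formulas; a sketch also showing the standard Poisson(np) approximation to Binomial(n, p) for large n, small p:

```python
import math

# Binomial and Poisson pmfs, directly from the formulas above.
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Poisson(np) approximates Binomial(n, p) when n is large and p is small.
pb = binom_pmf(2, 100, 0.02)   # ~0.273
pp = poisson_pmf(2, 2.0)       # ~0.271
print(pb, pp)
```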

Hypothesis testing: H₀ (null) vs H₁ (alternative). Test statistic → p-value = P(data at least as extreme as observed | H₀ true). Reject H₀ if p < α (typically 0.05). Type I error (false positive) = α, Type II error (false negative) = β, Power = 1-β. z-test (known σ), t-test (unknown σ, df=n-1), chi-square (categorical data), ANOVA (comparing >2 means).
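A two-sided z-test sketched end to end, using the standard normal CDF Φ(z) = (1 + erf(z/√2))/2; the sample numbers are made up for illustration:

```python
import math

# Two-sided z-test for a mean with known sigma.
def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu0, sigma, n = 100.0, 15.0, 36    # null mean, known sd, sample size
xbar = 106.0                       # observed sample mean (hypothetical)
z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic: 2.4
p_value = 2.0 * (1.0 - phi(abs(z)))         # two-sided p-value
print(z, p_value)  # p ~ 0.016 < 0.05 -> reject H0
```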

Regression: y = β₀ + β₁x + ε. OLS minimizes Σεᵢ². R² = 1 - SSres/SStotal (proportion of variance explained). Check assumptions: linearity, independence, normality of residuals, equal variance (homoscedasticity). Multiple regression: y = Xβ + ε, β̂ = (XᵀX)⁻¹Xᵀy. Regularization: Ridge (L2, shrinks coefficients), LASSO (L1, sparse solutions).
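R² falls out of the residual and total sums of squares once the line is fit; a sketch on small made-up data, using the centered-sums form of the slope:

```python
# Simple linear regression with R^2 = 1 - SSres/SStot.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 5.0, 8.0]          # made-up data, roughly linear
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
b1 = sxy / sxx                     # slope
b0 = my - b1 * mx                  # intercept
ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1.0 - ss_res / ss_tot         # proportion of variance explained
print(b0, b1, round(r2, 4))
```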

## Numerical Methods

Root finding: bisection (guaranteed convergence given a bracketing sign change; linear — the interval halves each iteration, so ~log₂((b−a)/ε) steps), Newton-Raphson xₙ₊₁ = xₙ - f(xₙ)/f'(xₙ) (quadratic convergence near root, needs good initial guess, may diverge). Secant method: avoids derivative computation, superlinear convergence.
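Newton-Raphson for √2 as the positive root of f(x) = x² − 2 — the quadratic convergence roughly doubles the correct digits per step:

```python
# Newton-Raphson: x <- x - f(x)/f'(x) = x - (x^2 - 2)/(2x).
x = 1.0                        # initial guess (must be reasonable)
for _ in range(6):             # 6 steps is ample from this guess
    x = x - (x*x - 2.0) / (2.0*x)
print(x)  # sqrt(2) to machine precision
```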

Numerical integration: trapezoidal rule (error O(h²)), Simpson's rule (error O(h⁴), requires even number of intervals). Gaussian quadrature: optimal node placement, exact for polynomials up to degree 2n-1 w/ n nodes.

ODE solvers: Euler's method yₙ₊₁ = yₙ + hf(tₙ,yₙ) — first-order, simple but inaccurate. RK4 (Runge-Kutta 4th order): four slope evaluations per step, error O(h⁴), workhorse method. Adaptive step size (RK45) adjusts h to maintain error tolerance. Stiff systems require implicit methods (backward Euler, BDF).
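Euler vs RK4 on the test problem y' = y, y(0) = 1, whose exact solution at t = 1 is e — with the same step size, RK4's four slope evaluations buy several extra digits:

```python
import math

def euler(f, t, y, h, steps):
    """First-order forward Euler."""
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, t, y, h, steps):
    """Classical 4th-order Runge-Kutta: four slopes, weighted average."""
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2)
        k4 = f(t + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

f = lambda t, y: y
y_euler = euler(f, 0.0, 1.0, 0.1, 10)   # ~2.5937, off by ~0.12
y_rk4 = rk4(f, 0.0, 1.0, 0.1, 10)       # ~2.71828, off by ~2e-6
print(y_euler, y_rk4, math.e)
```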

Interpolation: Lagrange polynomial (unique polynomial of degree ≤ n through n+1 points). Runge phenomenon: high-degree polynomial interpolation oscillates at edges → use Chebyshev nodes or splines. Cubic splines: piecewise cubic, C² continuity, natural BC (S''=0 at endpoints).
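A direct Lagrange-form evaluation through three points on y = x² — since the data come from a quadratic, the degree-2 interpolant reproduces it exactly:

```python
# Lagrange interpolation: sum of y_i times the basis polynomial
# L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]               # samples of y = x^2
val = lagrange(xs, ys, 1.5)
print(val)  # 2.25 = 1.5^2
```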
