Question List
1. To visualize and approximate the location of the root
2. The point where the graph crosses the x-axis
3. It may not provide sufficient accuracy for precise computation
4. The function must be continuous on the interval [a, b], and the function values at the endpoints, f(a) and f(b), must have opposite signs.
5. It is halved
6. Intermediate Value Theorem
7. The error is less than a specified tolerance
8. It always converges, provided the initial interval brackets a root
9. Linear
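The bisection properties in answers 4–9 (continuity and a sign change on [a, b], interval halving each iteration, tolerance-based stopping, guaranteed but linear convergence) can be sketched in Python. The function name, tolerance, and iteration cap below are illustrative assumptions, not from the source:

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: f must be continuous on [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0          # the bracketing interval is halved each step
        fm = f(m)
        if fm == 0 or (b - a) / 2.0 < tol:   # stop when the error bound < tolerance
            return m
        if fa * fm < 0:
            b = m                  # root lies in [a, m]
        else:
            a, fa = m, fm          # root lies in [m, b]
    return (a + b) / 2.0

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)  # approximates sqrt(2)
```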
10. The secant line between two points
11. False-Position uses function values to weight the interval ends
12. One endpoint may remain fixed, slowing convergence
13. Linear
14. It does not require derivatives
15. Both f(x) and f'(x)
16. Quadratic
17. The derivative is zero or very small at any iteration
18. It has a fast rate of convergence
19. It may diverge or converge to the wrong root
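Answers 15–19 can be illustrated with a minimal Newton-Raphson sketch: it needs both f and f', converges quadratically near a simple root, and fails when the derivative is (near) zero; a poor starting guess may make it diverge. Names, tolerances, and the derivative guard threshold are assumptions:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: requires both f and f'; quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(dfx) < 1e-14:       # method fails if the derivative is (near) zero
            raise ZeroDivisionError("derivative too small at this iterate")
        x_new = x - fx / dfx       # Newton step: x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0)  # approximates sqrt(2)
```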
20. The secant method does not require the derivative
21. It may fail if f(x_n) = f(x_{n-1})
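A minimal secant sketch, with illustrative names and tolerances: it replaces the derivative with a finite-difference slope through the last two iterates, and the guard shows the failure mode from answer 21, where f(x_n) = f(x_{n-1}) makes the step undefined:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: approximates f' with the slope through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:               # secant step undefined: division by zero
            raise ZeroDivisionError("f(x_n) equals f(x_{n-1})")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**2 - 2, 1.0, 2.0)  # approximates sqrt(2)
```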
22. Muller's Method
23. Complex or multiple roots
24. Cubic
25. It requires calculating higher-order derivatives.
26. Accelerating the convergence of a sequence.
27. An upper triangular matrix
28. Back substitution
29. Reduced row echelon form (identity matrix)
30. No back substitution is required
31. Cholesky decomposition
32. U
33. They are more efficient when solving for multiple right-hand side vectors b.
34. Cholesky decomposition
35. Decompose matrix A into L and U matrices.
36. To avoid division by zero and minimize rounding errors.
37. When A is symmetric and positive-definite.
38. The matrix must be symmetric and positive-definite.
39. You must perform both forward and backward substitutions.
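Answers 35 and 39 can be sketched as a Doolittle LU factorization followed by forward and backward substitution. This plain-Python illustration assumes no pivoting is needed (no zero pivots arise); factoring once pays off when solving for multiple right-hand sides, per answer 33:

```python
def lu_decompose(A):
    """Doolittle LU without pivoting: A = L U, with unit diagonal on L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward substitution, then backward."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):             # forward: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):   # backward: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
x = lu_solve(L, U, [10.0, 12.0])   # solves 4x+3y=10, 6x+3y=12
```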
40. It uses old values from the previous iteration only
41. Diagonally dominant
42. It uses newly computed values immediately
43. Diagonally dominant
44. Gauss Elimination
45. Jacobi or Gauss-Seidel
46. Iterative methods are generally preferred for large sparse systems due to lower memory usage
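A Gauss-Seidel sketch showing the key difference from Jacobi in answers 40 and 42: newly computed components are used immediately within the same sweep. Strict diagonal dominance of A is a sufficient condition for convergence; names and defaults below are illustrative:

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Gauss-Seidel: each updated x[i] is used immediately in the same sweep.
    Converges when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # x[j] for j < i already holds this sweep's new value
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:
            break
    return x

# Diagonally dominant system: 4x + y = 1, 2x + 3y = 2  ->  x = 0.1, y = 0.6
sol = gauss_seidel([[4.0, 1.0], [2.0, 3.0]], [1.0, 2.0])
```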
47. Central difference
48. Quartered
49. Truncation error
50. Approach the true value of the derivative.
51. At the end of a data set, where no points are available after the last data point.
52. It is more accurate for the same step size h.
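The central-difference answers (O(h^2) truncation error, so the error is roughly quartered when h is halved, and better accuracy than forward/backward differences for the same h) can be sketched as:

```python
def central_diff(f, x, h=1e-5):
    """Central difference: O(h^2) truncation error; halving h quarters the error."""
    return (f(x + h) - f(x - h)) / (2 * h)

def forward_diff(f, x, h=1e-5):
    """Forward difference: only O(h), so less accurate for the same h."""
    return (f(x + h) - f(x)) / h

# d/dx x^3 at x = 2 is exactly 12
d_central = central_diff(lambda x: x**3, 2.0)
d_forward = forward_diff(lambda x: x**3, 2.0)
```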
53. Straight lines
54. It provides a more accurate approximation for a given function.
55. Second-degree polynomial (quadratic)
56. Even
57. Degree 1 (linear)
58. It is more accurate for the same number of function evaluations.
59. h^2
60. Gauss-Legendre Quadrature
61. Newton-Cotes formulas require equally spaced nodes, while Gaussian Quadrature uses unequally spaced nodes.
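A composite Simpson's 1/3 rule sketch, matching answers 55 and 56: it fits a quadratic through successive point triples and requires an even number of subintervals. The function name and error handling are illustrative:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule: quadratic through each point triple.
    Requires an even number of subintervals n; exact for cubics."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)  # interior weights alternate 4, 2
    return s * h / 3

# Integral of x^2 on [0, 1] is exactly 1/3; Simpson reproduces it
area = simpson(lambda x: x**2, 0.0, 1.0, 10)
```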
62. Euler's method
63. Its low accuracy, as the error is proportional to the step size.
64. Taking a weighted average of four different slope estimates.
65. Adams-Bashforth-Moulton method
66. More accurate and more stable.
67. Runge-Kutta 4 (RK4) method
68. It would be halved.
69. Taylor series expansion
70. Its solution contains components that decay at vastly different rates.
71. An implicit method like the Backward Euler method
72. Estimating the next value using an explicit formula and then refining that estimate using an implicit formula.
73. Higher accuracy for more computational cost per step.
74. Single-step methods
75. Approximating the solution curve with a series of tangent lines.
76. Significantly higher accuracy for a given step size.
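The Euler and RK4 answers can be illustrated side by side: Euler takes a single tangent-line step (error proportional to h, answers 63 and 75), while RK4 takes a weighted average of four slope estimates (answer 64) for much higher accuracy at the same step size. The test problem y' = y is an illustrative choice:

```python
def euler_step(f, t, y, h):
    """Euler: follow one tangent line; global error O(h)."""
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """Classical RK4: weighted average of four slope estimates; global error O(h^4)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y, y(0) = 1 up to t = 1 with h = 0.1; exact answer is e.
f = lambda t, y: y
y_euler = y_rk4 = 1.0
t = 0.0
for _ in range(10):
    y_euler = euler_step(f, t, y_euler, 0.1)
    y_rk4 = rk4_step(f, t, y_rk4, 0.1)
    t += 0.1
```

For the same step size, RK4 lands far closer to e than Euler, at the cost of four slope evaluations per step instead of one (answer 73).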