Recent posts

In a digital communication system employing Frequency Shift Keying (FSK) ...

Q: In a digital communication system employing Frequency Shift Keying (FSK), the 0 and 1 bits are represented by sine waves of 10 kHz and 25 kHz respectively. These waveforms will be orthogonal for a bit interval of:
(a) 45 µsec    (b) 200 µsec    (c) 50 µsec    (d) 250 µsec

Solution

Given: binary FSK system with frequencies \( f_1 = 10 \text{ kHz} \) and \( f_2 = 25 \text{ kHz} \).

Two FSK tones are orthogonal over a bit interval \( T \) when their frequency separation is an integer multiple of \( \frac{1}{2T} \):
\[ |f_2 - f_1| = \frac{n}{2T}, \quad n = 1, 2, 3, \ldots \]
Frequency separation:
\[ \Delta f = 25 - 10 = 15 \text{ kHz} \]
So the admissible bit intervals are
\[ T = \frac{n}{2 \times 15 \times 10^3} = n \times 33.3\,\mu s \]
Among the given options, only \( 200\,\mu s = 6 \times 33.3\,\mu s \) is an integer multiple of \( 33.3\,\mu s \). (With \( T = 200\,\mu s \), both \( \Delta f \cdot T = 3 \) and \( (f_1 + f_2)T = 7 \) are integers, so the correlation integral vanishes exactly; none of the other options is a multiple of \( 33.3\,\mu s \).)
\[ \boxed{T = 200\,\mu s} \]
Correct Answer: (b) 200 µsec
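As a quick numerical sanity check (our own sketch, not part of the original solution), the snippet below integrates the product of the two tones over each candidate bit interval; the integral is essentially zero only for \( T = 200\,\mu s \):

```python
import math

def correlation(f1, f2, T, n=100_000):
    """Midpoint-rule integral of sin(2*pi*f1*t) * sin(2*pi*f2*t) over [0, T]."""
    dt = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += math.sin(2 * math.pi * f1 * t) * math.sin(2 * math.pi * f2 * t)
    return total * dt

f1, f2 = 10e3, 25e3  # the two FSK tones from the problem
for T_us in (45, 200, 50, 250):
    c = correlation(f1, f2, T_us * 1e-6)
    print(f"T = {T_us:3d} us  ->  correlation = {c:+.2e}")
```

The 200 µs row comes out at numerical zero, while every other interval leaves a residual correlation, confirming option (b).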

Jordan Decomposition

Jordan Decomposition

The goal of a Jordan decomposition is to diagonalize a given square matrix. An n×n matrix A is diagonalizable if there is an invertible n×n matrix C and a diagonal matrix D such that $A = CDC^{-1}$.

Procedure: Choose a square matrix (m × m) (e.g., 3 × 3, 4 × 4, 5 × 5, etc.). Otherwise, pop-up error: the number of rows and columns should be the same (or matrix dimension mismatched).

For a given matrix, A = $\begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix}$

The aim of the Jordan decomposition is to diagonalize the given square matrix A as $A = PDP^{-1}$, if possible, where P is an invertible matrix and D is a diagonal matrix. We'll go into the specifics of how matrix P and matrix D are formed later; both are derived from matrix A. Firstly, we find the eigenvalues of the matrix A from the characteristic equation | A − λI | = 0 (I = identity matrix). Or, $\begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix}$ ...
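As an illustrative check (our own sketch; the eigenpairs below were worked out by hand from | A − λI | = 0 and are not given in the excerpt), the eigenvalues of this A are 2 − √2, 2, and 2 + √2, and each claimed eigenvector satisfies Av = λv:

```python
import math

# The 3x3 matrix from the post; the eigenpairs below are our own hand
# computation from | A - lambda*I | = 0 (they are not in the excerpt).
A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
s = math.sqrt(2)
pairs = [
    (2 - s, [1, -s, 1]),
    (2.0,   [1, 0.0, -1]),
    (2 + s, [1, s, 1]),
]

def matvec(M, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Each pair satisfies A v = lambda v, so P = [v1 v2 v3] and
# D = diag(2 - sqrt(2), 2, 2 + sqrt(2)) give A = P D P^(-1).
for lam, v in pairs:
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(3))
```

Columns of P are the eigenvectors; D holds the matching eigenvalues on its diagonal.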

Gaussian Elimination with Back Substitution Method

Gaussian Elimination with Back Substitution Method

Gaussian elimination is a method in which an augmented matrix is subjected to row operations until the part corresponding to the coefficient matrix is reduced to triangular form.

Procedure: Choose an n × n matrix. Otherwise, show pop-up: please select the number of rows equal to the number of columns.

Here, we can perform two different types of operations to convert a given matrix into row echelon form (REF): (a) modify a row by adding or subtracting multiples of another row; (b) multiply/divide a row by a scalar. Construct an upper triangular matrix from the given matrix; in the next step, convert the diagonal elements to 1s.

Example: Solve the equations x+2y−3z=1, 2x−y+z=1, x+4y−2z=9 using the Gaussian elimination with back substitution method.

Solution: There are 3 equations in total:
x+2y−3z=1 … (i)
2x−y+z=1 … (ii)
x+4y−2z=9 … (iii)
From the aforementioned equations, A = $\begin{bmatrix} 1 & 2 & -3 \\ 2 & -1 & 1 \\ 1 & 4 & -2 \end{bmatrix}$ ...
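The procedure above can be sketched in code (a minimal implementation of our own, not the site's calculator), applied to the example system:

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for k in range(n):
        # pivot: swap in the row with the largest entry in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):  # eliminate below the pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution from the last row up
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 2, -3], [2, -1, 1], [1, 4, -2]]
b = [1, 1, 9]
print(gauss_solve(A, b))  # → approximately [1.0, 3.0, 2.0]
```

The solution x = 1, y = 3, z = 2 satisfies all three original equations.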

Gauss-Seidel Method

Gauss-Seidel Method

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a strictly diagonally dominant system of linear equations.

Procedure: Choose an n × n matrix. Otherwise, show pop-up: please select the number of rows equal to the number of columns.

Verify that in each row the magnitude of the diagonal entry is greater than or equal to the sum of the magnitudes of all other (non-diagonal) entries in that row. Otherwise, show pop-up: entered matrix is not diagonally dominant.

Ensure that all of the diagonal elements are non-zero as well: $a_{ii} \neq 0$. Otherwise, show pop-up: all of the diagonal elements must be non-zero.

Decompose the given matrix A into a lower triangular matrix L* and a strictly upper triangular matrix U. Let's assume a linear system of the form $Ax = b$, with

L* = $\begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$ ...
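A minimal sketch of the iteration (our own code; the test system below is a hypothetical strictly diagonally dominant example, not from the post):

```python
def gauss_seidel(A, b, iters=50):
    """Solve Ax = b iteratively; each updated component is used immediately."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # fresh values feed into later rows
    return x

# Hypothetical strictly diagonally dominant system (our own example);
# its exact solution is x = y = z = 1.
A = [[4, 1, 1], [1, 5, 2], [1, 2, 6]]
b = [6, 8, 9]
print(gauss_seidel(A, b))
```

Because each row's update is consumed immediately, Gauss–Seidel typically converges faster than Jacobi on the same system; here 50 sweeps reach the solution (1, 1, 1) to many digits.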

Gauss Jordan Elimination Method

Gauss Jordan Elimination Method

Gauss Jordan Elimination (GJE) is often used to bring a matrix to reduced echelon form, which makes a linear system easy to solve. Linear systems can have many variables, and they can be solved as long as we have one independent equation per variable: two variables need two equations, three variables need three, four need four, and in general n unknown variables need n independent linear equations for a unique solution. Gauss-Jordan elimination works on matrices of any size — they don't have to be square — but the number of independent linear equations must not be less than the number of unknown variables. On the other hand, the given matrix does need to be square if you are using GJE to calculate the inverse of the matrix. Procedure Choose an n X n matrix Othe...
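For the square, uniquely solvable case, the method can be sketched as follows (our own code with a hypothetical 2×2 example; after full reduction the solution sits in the last column of the augmented matrix):

```python
def gauss_jordan(A, b):
    """Reduce [A | b] to reduced row echelon form; read the solution off the last column."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        # partial pivoting for numerical stability
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]  # scale so the pivot becomes 1
        for i in range(n):
            if i != k:  # unlike plain Gaussian elimination, clear ABOVE and below
                f = M[i][k]
                M[i] = [M[i][j] - f * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

# Hypothetical example: x + y = 3, 2x - y = 0  ->  x = 1, y = 2
print(gauss_jordan([[1, 1], [2, -1]], [3, 0]))  # → [1.0, 2.0]
```

The difference from Gaussian elimination with back substitution is that GJE eliminates entries above the pivot as well, so no separate back-substitution pass is needed.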

Gauss Jacobi Method

Gauss Jacobi Method

In numerical linear algebra, the Gauss-Jacobi method (the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations.

Procedure: Choose an n × n matrix. Otherwise, show pop-up: please select the number of rows equal to the number of columns.

Verify that in each row the magnitude of the diagonal entry is greater than or equal to the sum of the magnitudes of all other (non-diagonal) entries in that row. Otherwise, show pop-up: entered matrix is not diagonally dominant.

Ensure that all of the diagonal elements are non-zero as well: $a_{ii} \neq 0$. Otherwise, show pop-up: all of the diagonal elements must be non-zero.

Decompose the given matrix into a diagonal matrix D, a lower triangular matrix L, and an upper triangular matrix U. Let's assume a linear system of the form $Ax = b$, with

A = $\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$ ...
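The iteration can be sketched as follows (our own code; same hypothetical diagonally dominant test system as in the Gauss-Seidel sketch, not from the post). Note the contrast with Gauss-Seidel: every component of the new iterate is computed from the previous iterate only.

```python
def jacobi(A, b, iters=100):
    """Solve Ax = b by Jacobi iteration: x_new built entirely from x_old."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # every update reads only the PREVIOUS iterate, never a fresh value
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Hypothetical strictly diagonally dominant system (our own example);
# its exact solution is x = y = z = 1.
A = [[4, 1, 1], [1, 5, 2], [1, 2, 6]]
b = [6, 8, 9]
print(jacobi(A, b))
```

Because no update depends on another update within the same sweep, the Jacobi iteration parallelizes trivially, at the cost of slower convergence than Gauss-Seidel.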

EigenValue and EigenVector

Let's assume a square matrix A. The characteristic equation is | A − λI | = 0 (where I is an identity matrix). After calculating the values of λ, we find the eigenvector for each corresponding eigenvalue like this: for eigenvalue $\lambda = \lambda_1$,

$Ax = \lambda_1 I x$ (where x is an unknown vector), or $(A - \lambda_1 I)x = 0$.

The value of x is the eigenvector corresponding to $\lambda_1$.

Power Method for Dominant Eigenvalue

Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of an n × n matrix A. $\lambda_1$ is called the dominant eigenvalue of A if $|\lambda_1| > |\lambda_i|$, i = 2, 3, ..., n. The eigenvectors corresponding to $\lambda_1$ are called dominant eigenvectors of A.

Procedure: Choose an n × n matrix; the number of rows and columns should be the same (or matrix dimension mismatched). Like the Jacobi and Gauss-Seidel methods, the power method for approximating eigenvalues is iterative. First, we assume that matrix A has a dominant eigenvalue with corresponding dominant eigenvectors. ...
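A minimal sketch of the power method (our own code; the test matrix is the 3×3 matrix from the Jordan-decomposition post, whose dominant eigenvalue is 2 + √2 by our own hand computation):

```python
import math

def power_method(A, iters=200):
    """Approximate the dominant eigenvalue/eigenvector by repeated multiplication."""
    n = len(A)
    x = [1.0] * n  # starting guess; must not be orthogonal to the dominant eigenvector
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)      # eigenvalue estimate: largest-magnitude component
        x = [v / lam for v in y]   # rescale so the largest component is 1
    return lam, x

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
lam, vec = power_method(A)
print(round(lam, 4))  # → 3.4142, i.e. 2 + sqrt(2)
```

Each multiplication by A amplifies the dominant eigenvector's component of x relative to the others, so the iterate converges to a dominant eigenvector and the scale factor converges to $\lambda_1$.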

Determinant using Sarrus rule

Determinant using Sarrus rule

Procedure: Choose an n × n matrix. Otherwise, pop-up error: the number of rows and columns should be the same (or matrix dimension mismatched).

The Sarrus rule is not applicable to a square matrix larger than 3 × 3, but we can apply it to the 3 × 3 matrices obtained after a cofactor expansion of a larger matrix.

For a given matrix, A = $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$

If the number of rows = number of columns = 3, the determinant of this matrix using the Sarrus rule is found as follows. Write out the first two columns of the matrix to the right of the third column, giving five columns in a row. Then add the products of the diagonals going from top to bottom and subtract the products of the diagonals going from bottom to top:

det(A) = $a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}$ ...
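A direct transcription of the rule into code (our own sketch; the test matrix is the 3×3 matrix that appears elsewhere on this page, with determinant 4 by cofactor expansion):

```python
def sarrus_det(A):
    """Determinant of a 3x3 matrix by the rule of Sarrus."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    # down-right diagonal products minus up-right diagonal products
    return (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a31 * a22 * a13 - a32 * a23 * a11 - a33 * a21 * a12)

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
print(sarrus_det(A))  # → 4
```

The identity matrix is a quick self-check: its only nonzero diagonal product is 1·1·1, so the rule returns 1.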

