
UGC-NET & GATE : Communications (EC) Study Material



 



What is an error in a communication system?

In wireless communication, the receiver sometimes detects the wrong bit: the transmitter sends binary '1', but the receiver decodes binary '0' (or vice versa). This is called a bit error. Why does it occur? Wireless signals suffer attenuation and additive noise, and the receiver decides between '0' and '1' by comparing the received level against a threshold. If attenuation or noise distorts the signal enough to push it across the threshold, the receiver decodes '0' instead of '1', or '1' instead of '0'.

We commonly use the term 'bit error rate' (BER) to measure bit errors. The bit error rate is the fraction of received bits that are in error out of the total number of bits transmitted.
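As a quick illustration, BER can be estimated by comparing the transmitted and received bit streams. The bit values below are made up for the example:

```python
# Hypothetical example: estimating bit error rate (BER) by comparing
# transmitted and received bit sequences.
tx_bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rx_bits = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]  # two bits flipped by the channel

# Count positions where the received bit differs from the transmitted bit.
errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
ber = errors / len(tx_bits)
print(f"{errors} errors out of {len(tx_bits)} bits, BER = {ber}")
# → 2 errors out of 10 bits, BER = 0.2
```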


What are the possible remedies to reduce the bit error rate?

Channel coding: adding redundant bits to the transmitted data so that the receiver can detect and correct bit errors.



Question

A digital communication system transmits symbols (blocks) of N bits. The probability that any single bit is decoded incorrectly is 0.0001, and bit errors occur independently of one another. A received symbol/block is erroneous if at least one of its N bits is decoded wrongly. What is the probability that a received symbol/block is erroneous?


Answer

Error probability of a bit = 0.0001

So, the probability that a bit is decoded correctly = 1 - 0.0001

Since the N bits are independent, the probability that all bits are correct = (1 - 0.0001)(1 - 0.0001)...(N factors)

= (1 - 0.0001)^N

Erroneous probability = 1 - correct probability

= 1 - (1 - 0.0001)^N
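A short numerical sketch of this formula, using the p = 0.0001 from the question (the block sizes N are arbitrary examples):

```python
# Block error probability for N independent bits, each with
# bit error probability p = 0.0001.
p = 0.0001

def block_error_probability(n_bits, p_bit):
    """P(at least one of n_bits is wrong) = 1 - (1 - p_bit)^n_bits."""
    return 1 - (1 - p_bit) ** n_bits

for n in (1, 8, 100, 1000):
    print(n, block_error_probability(n, p))
```

Note how the block error probability grows with N: a longer block is more likely to contain at least one bad bit, which is one motivation for channel coding.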

 


Maximum Likelihood Decoding or ML Decoding

The decision boundary between two adjacent signal points will be their arithmetic mean.


Question

A symbol S is selected at random from {S1, S2, S3, S4} and transmitted through a digital communication system, where S1 = -3, S2 = -1, S3 = +1, and S4 = +2. The received symbol is Y = S + W, where W is a zero-mean, unit-variance Gaussian random variable independent of S. Let Pi denote the conditional probability of symbol error under maximum likelihood (ML) decoding, given that S = Si was transmitted. The index i with the highest conditional symbol error probability Pi is -----


Answer

As an ML detector is used, the decision boundary between two adjacent signal points will be their arithmetic mean.

Writing Q(x) for the tail probability of the standard Gaussian, the conditional error probabilities follow from the decision boundaries (the figures showing the shaded correct-decision regions are omitted here):

For S1 = -3: the only boundary is the mean of -3 and -1, i.e., -2. Y is decoded as S1 only when Y lies between -∞ and -2; otherwise an error occurs. So
P1 = P(Y > -2 | S = -3) = Q(1)

For S2 = -1: the boundaries are -2 and 0 (the mean of -1 and +1). So
P2 = P(Y < -2 | S = -1) + P(Y > 0 | S = -1) = Q(1) + Q(1) = 2Q(1)

For S3 = +1: the boundaries are 0 and 1.5 (the mean of +1 and +2). So
P3 = P(Y < 0 | S = +1) + P(Y > 1.5 | S = +1) = Q(1) + Q(0.5)

For S4 = +2: the only boundary is 1.5. So
P4 = P(Y < 1.5 | S = +2) = Q(0.5)

Since Q(0.5) > Q(1), P3 = Q(1) + Q(0.5) is the largest of the four: the probability of correct decoding is smallest for S3. Hence the answer is i = 3.
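The comparison can be checked numerically. The sketch below uses the standard identity Q(x) = erfc(x/√2)/2 together with the arithmetic-mean decision boundaries (-2, 0, and 1.5) for the given constellation:

```python
# Numerical check of the conditional ML error probabilities.
from math import erfc, sqrt

def Q(x):
    """Tail probability of the standard Gaussian: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

P = {
    1: Q(1),            # S1 = -3, boundary at -2 (distance 1)
    2: 2 * Q(1),        # S2 = -1, boundaries at -2 and 0 (distances 1 and 1)
    3: Q(1) + Q(0.5),   # S3 = +1, boundaries at 0 and 1.5 (distances 1 and 0.5)
    4: Q(0.5),          # S4 = +2, boundary at 1.5 (distance 0.5)
}
worst = max(P, key=P.get)
print(worst, round(P[worst], 4))  # → 3 0.4672
```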


Probability & Information

When the probability of an event is lower, the information carried by that event is higher.

Self-information: I(x) = log2(1 / p(x)), so I(x) grows as p(x) shrinks.

When the probability is 1, the information is zero; as the probability approaches 0, the information grows without bound.

We commonly use the term 'entropy' in information theory. Entropy H(X) denotes the average information per symbol, i.e., the average number of bits required per symbol to represent the source.

For example:

Suppose the probability of receiving bit '1' is 0.5 and the probability of receiving bit '0' is 0.5. Then the entropy is H(X) = -0.5·log2(0.5) - 0.5·log2(0.5) = 1 bit/symbol.
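The same calculation as a short sketch, for the equiprobable case above and for a made-up biased source:

```python
# Entropy of a discrete source: H(X) = -sum_i p_i * log2(p_i).
from math import log2

def entropy(probs):
    # Zero-probability symbols contribute nothing (lim p*log p = 0).
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # equiprobable bits → 1.0 bit/symbol
print(entropy([0.9, 0.1]))   # a biased source carries less information per symbol
```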


Electronic Devices

pn-junction diodes are used as electronic switches because they allow current to flow in only one direction. In forward bias, once the voltage across the diode reaches a certain threshold (typically about 0.7 V for silicon), the diode turns on. In reverse bias, an ordinary diode stays off. A Zener diode behaves differently: as the reverse voltage is increased, only a small current flows at first, but beyond a specific reverse voltage the current rises sharply. This phenomenon is called breakdown (Zener breakdown at lower breakdown voltages, avalanche breakdown at higher ones). Beyond this point, trying to increase the reverse voltage further does not raise the voltage across the diode; only the current increases.

What is bias voltage?
A bias voltage is the DC voltage applied to an electronic device to set its operating point. Without the proper bias voltage, the device cannot turn on and function as intended.

Networks, Signal, & Systems

Superposition Theorem

In the superposition theorem, we calculate the response (voltage or current) produced in an element or branch by each independent source acting alone, with all other independent sources turned off (voltage sources short-circuited, current sources open-circuited). The total response is the algebraic sum of these individual responses.
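A minimal numerical sketch of the theorem, on a hypothetical circuit with made-up values: a voltage source V with series resistor R1 feeding node A, R2 from node A to ground, and a current source I injecting current into node A.

```python
# Superposition on a two-source resistive circuit (all values hypothetical).
V, I = 10.0, 2.0        # volts, amperes
R1, R2 = 4.0, 6.0       # ohms

# Source V alone (current source open-circuited): a simple voltage divider.
Va_from_V = V * R2 / (R1 + R2)

# Source I alone (voltage source short-circuited): I flows into R1 || R2.
Va_from_I = I * (R1 * R2) / (R1 + R2)

# Superposition: sum the individual responses at node A.
Va = Va_from_V + Va_from_I

# Cross-check with the direct node equation: (Va - V)/R1 + Va/R2 = I.
Va_direct = (V / R1 + I) / (1 / R1 + 1 / R2)
print(Va, Va_direct)    # both methods give the same node voltage
```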


Thevenin's Theorem 

In Thevenin's theorem, we basically find the Vth and Rth. Procedure for thevenin's theorem

1. Firstly, we open the circuit, the load 

2. Then we find Vth across the load from the circuit

3. Then, we open the circuit's current source and short-circuit the voltage sources. Remember this step is only applicable to independent sources.

4. Then, we find Rth from the circuit.
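The four steps can be sketched on a hypothetical circuit with made-up values: a 12 V source with series resistor R1, shunt resistor R2, and a load RL connected across R2.

```python
# Thevenin equivalent of a simple divider circuit (all values hypothetical).
V, R1, R2, RL = 12.0, 3.0, 6.0, 4.0   # volts and ohms

# Steps 1-2: remove RL; the open-circuit voltage is set by the divider.
Vth = V * R2 / (R1 + R2)

# Steps 3-4: short the voltage source; Rth is R1 in parallel with R2.
Rth = R1 * R2 / (R1 + R2)

# Load current predicted by the Thevenin equivalent...
IL = Vth / (Rth + RL)

# ...matches the direct full-circuit solution (R2 || RL in series with R1).
R2L = R2 * RL / (R2 + RL)
IL_direct = (V * R2L / (R1 + R2L)) / RL
print(Vth, Rth, IL, IL_direct)
```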


First-order RL and RC circuits with a DC source:

For an RL circuit, the inductor current: i(t) = [i(0+) - i(∞)]*exp(-Rt/L) + i(∞)

For an RC circuit, the capacitor voltage: v(t) = [v(0+) - v(∞)]*exp(-t/RC) + v(∞)

The main functions of the inductor and capacitor in a circuit are to oppose sudden changes of current and voltage, respectively.
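Both formulas have the same shape, x(t) = (x(0+) - x(∞))·exp(-t/τ) + x(∞), with τ = L/R or RC. The sketch below evaluates it for an assumed RC charging example (v(0+) = 0 V, v(∞) = 5 V, R = 1 kΩ, C = 1 µF; all values made up):

```python
# Evaluating the first-order transient formula for an RC charging example.
from math import exp

def first_order(x0, xinf, tau, t):
    """x(t) = (x(0+) - x(inf)) * exp(-t/tau) + x(inf); tau = L/R or R*C."""
    return (x0 - xinf) * exp(-t / tau) + xinf

R, C = 1e3, 1e-6          # tau = R*C = 1 ms
tau = R * C
for t in (0.0, tau, 5 * tau):
    print(t, first_order(0.0, 5.0, tau, t))
# at t = tau the capacitor has reached about 63% of its final voltage
```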

Example:

When the switch in a circuit moves from the off to the on state, the voltage across a capacitor stays the same immediately after switching, v(0+) = v(0-), because a capacitor opposes a sudden change of voltage; the capacitor current, however, can change abruptly and even reverse direction at the switching instant.

The dual rule applies to inductors: the inductor current stays the same immediately after switching, i(0+) = i(0-), while the voltage across the inductor can change abruptly and even reverse polarity.

Question:

Find the rate of rise of voltage across the capacitor at t = 0+. (Hint: in general, dv/dt at t = 0+ equals i_C(0+)/C, so first find the capacitor current just after switching.)
