

Recent posts

KVL with a Capacitor or Inductor

Kirchhoff’s Voltage Law (KVL) with a Capacitor or Inductor
Kirchhoff’s Voltage Law (KVL) remains fully applicable even when a mesh contains a capacitor or inductor.
Why it still works: KVL is based on the conservation of energy; the sum of voltages around any closed loop must be zero. This principle does not depend on the type of component in the loop (resistor, capacitor, inductor, etc.).
What changes is only how you express the voltage across each element:
V_R = i·R
V_C = (1/C) ∫ i(t) dt
V_L = L·(di/dt)
Example: in a loop with a source, resistor, and capacitor, KVL gives
V_source − V_R − V_C = 0
Special cases: DC stea...
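The loop equation above can be checked numerically. A minimal sketch (the component values Vs, R, C are made-up illustrations) integrates V_C = (1/C) ∫ i dt with Euler steps and confirms that the loop voltages always sum to zero:

```python
# Numerical KVL check for a series RC loop (hypothetical values:
# Vs = 10 V, R = 1 kΩ, C = 1 µF), using simple Euler integration.
Vs, R, C = 10.0, 1e3, 1e-6
dt, Vc = 1e-6, 0.0          # time step (s), initial capacitor voltage

for _ in range(5000):       # simulate 5 ms, i.e. 5 time constants
    i = (Vs - Vc) / R       # loop current from KVL: Vs - i*R - Vc = 0
    Vc += (i / C) * dt      # Vc = (1/C) ∫ i dt, advanced one step

Vr = ((Vs - Vc) / R) * R    # resistor voltage at the final instant
print(round(Vs - Vr - Vc, 9))  # ≈ 0: the loop sums to zero
```

After five time constants the capacitor has charged to nearly the source voltage, yet the KVL sum stays at zero throughout.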

AR(2) Model Explained: Step-by-Step Time Series Estimation

AR(2) Model Explained: Step-by-Step Time Series Estimation
This guide explains how to estimate the parameters of a second-order autoregressive (AR(2)) model using simple math and intuition.
1. The Model. The AR(2) model predicts a value from its past two values:
x_t = φ1·x_{t-1} + φ2·x_{t-2} + ε_t
Goal: estimate φ1 and φ2.
2. Matrix Form. We rewrite the model as a linear regression problem, x = Aφ, where A is the matrix of past (lagged) values, φ = (φ1, φ2) is the parameter vector, and x is the vector of current values.
3. Least Squares Solution. The best estimate is φ̂ = (AᵀA)⁻¹Aᵀx.
4. Understanding AᵀA. This matrix contains sums of products of lagged values (autocorrelations):
AᵀA = [ c0  c1 ]
      [ c1  c ...
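The estimation steps above can be sketched in a few lines of NumPy; the series and its true coefficients (φ1 = 0.5, φ2 = −0.3) are synthetic assumptions for illustration:

```python
import numpy as np

# Least-squares AR(2) fit as in the post: x = Aφ, φ̂ = (AᵀA)⁻¹Aᵀx.
# Synthetic series with known φ1 = 0.5, φ2 = -0.3 (illustrative).
rng = np.random.default_rng(0)
phi1, phi2 = 0.5, -0.3
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal(scale=0.1)

# Design matrix of lagged values: row for x_t is [x_{t-1}, x_{t-2}]
A = np.column_stack([x[1:-1], x[:-2]])
b = x[2:]
phi_hat = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations
print(phi_hat)  # close to [0.5, -0.3]
```

Solving the normal equations directly mirrors the φ̂ = (AᵀA)⁻¹Aᵀx formula; in practice `np.linalg.lstsq` does the same job with better numerical behavior.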

Machine Learning Pipeline - EDA + IQR

End-to-End Machine Learning Pipeline: EDA to Model Deployment
A complete workflow covering data preprocessing, exploratory data analysis (EDA), feature engineering, and model preparation using real-world structured datasets.
Pipeline overview: Data Cleaning → EDA → Feature Engineering → Model Evaluation.
1. Data Understanding. Initial inspection includes checking data types, missing values, duplicates, and overall dataset structure.
2. Target Distribution and 3. Numerical Feature Analysis. [Plots in the original post.]
4. Outlier Detection. Outliers are detected using the Interquartile Range (IQR) method:
Lower = Q1 − 1.5 × IQR
Upper = Q3 + 1.5 × IQR
5. Feature Engineering. Label Encoding (categorical → numeric), Feature Scaling (StandardScaler), D...
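The IQR rule from the outlier-detection step can be sketched on a toy pandas Series (the values are made up for illustration):

```python
import pandas as pd

# IQR outlier rule from the post, on a hypothetical numeric column.
s = pd.Series([10, 12, 11, 13, 12, 11, 95])   # 95 is an obvious outlier

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # the post's bounds

outliers = s[(s < lower) | (s > upper)]
print(outliers.tolist())  # [95]
```

Values outside [Lower, Upper] are flagged; whether you drop, cap, or keep them is a modeling decision.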

How Neural Networks Learn

How Neural Networks Learn: Training Overview
Neural networks start with random weights and learn by adjusting those weights to transform input features into target outputs. Training involves feeding in input data, computing errors, and updating weights to minimize the error. Example: predicting cereal calories from sugar, fiber, and protein.
1. Loss Function. The loss function measures the difference between predicted and true values. Common loss functions for regression include MAE (Mean Absolute Error), the average absolute difference between predicted and true values; MSE (Mean Squared Error) and Huber loss are alternatives. The network uses the loss function to guide weight updates.
2. Optimizer (SGD / Adam). Optimizers adjust the weights to minimize the loss. Steps for Stochastic Gradient Descent (SGD): sample a minibatch of training data, run the network to make predictions, calculate the loss and adjust weights ...
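The SGD steps above can be sketched for a plain linear model on the cereal example; the data is synthetic and the "true" weights are illustrative assumptions:

```python
import numpy as np

# Minibatch SGD sketch for the cereal example: predict calories from
# sugar, fiber, protein. Data and true weights are made up.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # sugar, fiber, protein
true_w = np.array([3.0, -1.0, 2.0])
y = X @ true_w + 100 + rng.normal(scale=0.5, size=500)

w, b, lr = np.zeros(3), 0.0, 0.05              # random/zero start
for step in range(2000):
    idx = rng.integers(0, 500, size=32)        # 1. sample a minibatch
    Xb, yb = X[idx], y[idx]
    pred = Xb @ w + b                          # 2. run the network
    err = pred - yb                            # 3. compute the error
    w -= lr * (Xb.T @ err) / len(idx)          # 4. MSE gradient step
    b -= lr * err.mean()

print(np.round(w, 1), round(b, 1))  # near [3. -1. 2.] and 100.0
```

Each pass repeats the loop from the post: minibatch, predict, loss, weight update.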

XGBoost Explained

Gradient Boosting and XGBoost
1. Ensemble Methods Recap. Random Forests combine many decision trees by averaging their predictions. Gradient Boosting is another ensemble method, but instead of averaging, it adds models sequentially, each one correcting the errors of the previous ones.
2. How Gradient Boosting Works. Start with a simple model (it can be inaccurate). Predict values and compute a loss function (such as mean squared error). Train a new model to correct the errors of the current ensemble. Add the new model to the ensemble. Repeat iteratively; this is why it is called "boosting". The "gradient" part comes from using gradient descent to minimize the loss when adding each new model.
3. XGBoost. XGBoost is a high-performance implementation of gradient boosting. Optimized for speed and accuracy, it works especially well with standard tabular datasets (like those in Pandas).
4. Model Fitting Example fro...
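The loop in step 2 can be hand-rolled with shallow scikit-learn trees; this is an illustrative sketch of plain gradient boosting for squared error (where the negative gradient is simply the residual), not XGBoost itself:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hand-rolled gradient boosting for squared error: each shallow tree
# is fit to the current residuals. Data is synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

pred = np.full_like(y, y.mean())    # start with a simple model
trees, lr = [], 0.3
for _ in range(50):
    resid = y - pred                # negative gradient of MSE loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, resid)
    pred += lr * tree.predict(X)    # add the new model to the ensemble
    trees.append(tree)

mse = np.mean((y - pred) ** 2)
print(round(mse, 3))  # small; near the 0.1² noise floor
```

XGBoost adds regularization, second-order gradients, and heavy systems optimization on top of this basic loop.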

Data Leakage in Machine Learning

Data Leakage in Machine Learning Data leakage occurs when a model is trained with information that would not be available in real-world predictions. This can make models appear highly accurate during training or validation, but they fail when deployed. There are two main types of leakage: target leakage , where predictors include future information about the target (e.g., using post-event features), and train-test contamination , where validation or test data influences training (e.g., preprocessing before splitting). Leakage can be prevented by carefully separating training and validation data, excluding post-target features, and using pipelines for preprocessing. While removing leaky features may lower apparent accuracy, it ensures the model performs reliably on new data.
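One way to prevent the train-test contamination described above is the pipeline approach the post recommends; a minimal scikit-learn sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data; the point is where the scaler is fit, not the model.
X, y = make_classification(n_samples=400, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, random_state=0)

# The scaler lives inside the pipeline, so fit() computes its
# statistics from X_train only; X_valid never influences training.
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
score = pipe.score(X_valid, y_valid)
print(round(score, 2))
```

Had the scaler been fit on the full dataset before splitting, validation statistics would have leaked into training, inflating the apparent score.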

Cross-Validation Explained

Cross-Validation
Cross-validation measures model performance more reliably by using multiple subsets of the data instead of a single validation set.
Why not use a single validation set? Using only one validation set can give noisy or luck-dependent results. Example: in a dataset with 5000 rows, keeping 1000 as validation may give a misleading score.
How cross-validation works: split the data into k folds (e.g., 5 folds, each 20% of the data). For each fold, use that fold as the validation set and the remaining folds for training. Repeat for all folds so every row is used for validation exactly once, then average the performance metrics across all folds for a reliable score.
When to use cross-validation: for small datasets it is recommended, because you can reuse all data for validation; for large datasets a single validation set is often sufficient.
Implementation Example (Python): from sklearn.ens...
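The post's own implementation example is truncated, so the following is an assumed sketch of 5-fold cross-validation with `cross_val_score`; the dataset and model choice are illustrative guesses:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for the post's dataset.
X, y = make_regression(n_samples=500, n_features=5, noise=10,
                       random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0)

# cv=5: each fold is used exactly once as the validation set.
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_absolute_error")
print(round(-scores.mean(), 1))  # average MAE across the 5 folds
```

scikit-learn's scorers follow a "higher is better" convention, hence the negated MAE that is flipped back before reporting.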

Handling Missing Values in Pandas / Machine Learning

Handling Missing Values in Machine Learning
Why missing values matter: datasets often contain missing values (NaN), e.g., a house missing a third bedroom size or a survey respondent skipping a question. Machine learning models usually cannot handle missing values, so we must process them before training.
Three approaches:
1. Drop columns with missing values. Simply remove any column that contains a missing entry. This is simple but can discard important data.
# Identify columns with missing values
cols_with_missing = [col for col in X_train.columns
                     if X_train[col].isnull().any()]
# Drop these columns
X_train_reduced = X_train.drop(cols_with_missing, axis=1)
X_valid_reduced = X_valid.drop(cols_with_missing, axis=1)
Result: MAE = 183,550 → worse performance due to lost information.
2. Imputation (recommended). Replace missing values with a substitute (mean, median, or mode). This usually improves model performance.
from sklearn.impute import SimpleImp...
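The imputation snippet in the excerpt is cut off; the following is a sketch of the usual `SimpleImputer` pattern, with a toy frame standing in for the post's X_train / X_valid:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy data standing in for X_train / X_valid (values are made up).
X_train = pd.DataFrame({"beds": [3, np.nan, 2],
                        "area": [80, 120, np.nan]})
X_valid = pd.DataFrame({"beds": [np.nan, 4],
                        "area": [100, np.nan]})

imputer = SimpleImputer(strategy="mean")
# Fit on training data only; reuse the same column means everywhere
# (this also avoids leaking validation statistics into training).
X_train_imp = pd.DataFrame(imputer.fit_transform(X_train),
                           columns=X_train.columns)
X_valid_imp = pd.DataFrame(imputer.transform(X_valid),
                           columns=X_valid.columns)
print(X_valid_imp)  # NaNs replaced by the training-set means
```

Here `beds` has training mean 2.5 and `area` has training mean 100, so those values fill the validation gaps.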

