AI-Driven Inventory Optimization & Demand Forecasting
Summary A predictive analytics project designed to optimize raw material inventory levels for high-volume FMCG manufacturing. By replacing traditional static inventory models with an XGBoost-based demand forecasting engine, this study aims to minimize Inventory Holding Costs while maintaining high Service Levels. The system dynamically adjusts Safety Stock levels based on predicted demand volatility, simulating a smarter, leaner supply chain operation.
Description This project addresses the critical trade-off in manufacturing supply chains: balancing the cost of holding excess stock against the risk of stockouts (production downtime). In volatile markets, static safety stock policies often lead to either overstocking (capital waste) or shortages. To solve this, I developed an end-to-end optimization framework:
- Synthetic Data Engineering: Generated a realistic FMCG dataset simulating daily raw material consumption, incorporating seasonality, trend factors, and random noise to mimic real-world factory volatility.
- Time-Series Feature Engineering: Transformed raw consumption data into a supervised learning problem by engineering features such as Lag Values (t-1, t-7), Rolling Averages, and volatility indices. This allowed the model to capture complex temporal patterns that traditional linear methods miss.
- Predictive Modeling (XGBoost): Implemented an Extreme Gradient Boosting (XGBoost) regressor to forecast future demand. The model significantly outperformed traditional Moving Average baselines, minimizing the Root Mean Square Error (RMSE).
- Dynamic Inventory Optimization: Developed a custom algorithm to calculate Dynamic Safety Stocks. Instead of a fixed buffer, the system calculates safety stock based on the forecast error of the model. This ensures higher buffers during volatile periods and leaner stocks during stable periods.
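The two core ideas above — lag/rolling feature engineering and an error-driven safety stock — can be sketched as follows (a minimal illustration, not the project code; column names, the z-score, and the lead time are assumptions):

```python
import numpy as np
import pandas as pd

def make_features(df: pd.DataFrame) -> pd.DataFrame:
    """Turn a daily consumption series into a supervised-learning table."""
    out = df.copy()
    out["lag_1"] = out["consumption"].shift(1)
    out["lag_7"] = out["consumption"].shift(7)
    out["roll_mean_7"] = out["consumption"].shift(1).rolling(7).mean()
    out["roll_std_7"] = out["consumption"].shift(1).rolling(7).std()  # volatility index
    return out.dropna()

def dynamic_safety_stock(forecast_errors: np.ndarray,
                         service_level_z: float = 1.65,   # ~95% service level
                         lead_time_days: int = 3) -> float:
    """Safety stock scaled by the model's recent forecast-error spread,
    so buffers grow in volatile periods and shrink in stable ones."""
    return service_level_z * np.std(forecast_errors) * np.sqrt(lead_time_days)
```

The shift-before-rolling pattern prevents target leakage: every feature at day *t* uses only data available before *t*.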
Results: The simulation demonstrated a 32.6% reduction in Total Inventory Costs compared to static inventory policies. The model maintained a 95% Service Level, ensuring production continuity even during simulated demand spikes, demonstrating the viability of AI in operational decision-making.
Tech Stack
- Languages: Python 3.x
- Machine Learning: XGBoost, Scikit-learn
- Techniques: Time Series Forecasting, Regression, Feature Engineering, Dynamic Safety Stock Calculation, Hyperparameter Tuning
- Data Science: Pandas, NumPy
- Visualization: Matplotlib, Seaborn (for Forecast vs. Actual & Cost Analysis plots)
Classification of Iron Ionization States Using Ensemble Learning on NIST Spectral Data
Summary A machine learning-based classification study designed to distinguish between Singly Ionized (Fe II) and Doubly Ionized (Fe III) states of Iron atoms. By implementing an automated web scraping pipeline on the NIST Atomic Spectra Database and engineering physics-based features, this project compares probabilistic and ensemble learning models, achieving near-perfect classification accuracy with Random Forest.
Description This project addresses the challenge of automating the identification of atomic spectral lines in large-scale physics databases. Manually classifying spectral signatures is prone to error and time-consuming. To solve this, I developed a robust data mining and classification framework:
- Automated Data Pipeline: Built a dynamic web scraper using Python to extract raw spectral data from the NIST server. Implemented complex Regex (Regular Expression) cleaning to handle measurement uncertainties and noise in raw wavelength/intensity data.
- Physics-Informed Feature Engineering: Beyond standard features, I derived a new physical attribute, "Energy Difference (ΔE)," representing the gap between upper and lower energy levels, which significantly improved model separability.
- Comparative Model Analysis: Evaluated three distinct algorithms: Naive Bayes (Baseline), Support Vector Machines (Kernel-based), and Random Forest (Ensemble). Validated results using 10-Fold Cross-Validation to ensure generalization.
- Results: The Random Forest model outperformed others, achieving a 99.16% Accuracy and 99.15% F1-Score, successfully distinguishing complex spectral patterns where the baseline model struggled.
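The modeling step can be sketched in a few lines of scikit-learn (synthetic placeholder data stands in for the scraped NIST tables; the feature values and the toy labeling rule are illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Placeholder energy levels; real values come from the scraped NIST tables.
E_lower = rng.uniform(0, 10, n)
E_upper = E_lower + rng.uniform(0.1, 5, n)
delta_E = E_upper - E_lower            # physics-informed "Energy Difference" feature
y = (delta_E > 2.5).astype(int)        # toy Fe II vs Fe III label for illustration

X = np.column_stack([E_lower, E_upper, delta_E])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
```

Swapping `clf` for `GaussianNB` or `SVC(kernel="rbf")` reproduces the three-way comparison described above.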
Tech Stack
- Languages: Python 3.x
- Machine Learning: Scikit-learn (Sklearn)
- Techniques: Random Forest, SVM (RBF Kernel), Naive Bayes, K-Fold Cross Validation, Web Scraping, Feature Engineering
- Data Science: Pandas, NumPy
- Visualization: Matplotlib, Seaborn, Heatmaps
Remaining Useful Life Prediction of Turbofan Engines Using LSTM Networks with Piecewise Linear Target Labeling
Summary A production-ready Predictive Maintenance system designed to estimate the Remaining Useful Life (RUL) of aircraft engines using Deep Learning. Moving beyond static modeling, this project implements a complete MLOps pipeline, featuring an LSTM-based forecasting engine served via a RESTful API (FastAPI) for real-time inference.
Description This project bridges the gap between theoretical AI research and applied software engineering in the aviation sector (Prognostics and Health Management). While traditional regression models struggle with the noise inherent in engine sensor data, this solution offers a full-stack approach:
- Deep LSTM Architecture: Engineered a stacked LSTM network (128 & 64 units) with Dropout regularization to model long-term temporal dependencies in multivariate sensor data.
- Piecewise Linear Strategy: Applied a target clipping technique (RUL limit at 125 cycles) to stabilize training during the engine's "healthy" phase, significantly improving model convergence.
- Production-Grade Engineering: Transformed raw research notebooks into a modular, object-oriented codebase (src, models, api).
- Microservice Deployment: Developed a high-performance REST API using FastAPI, enabling external systems to send sensor data and receive RUL predictions in milliseconds.
- Results: Achieved a Root Mean Squared Error (RMSE) of 13.87 on the FD001 test set, outperforming standard baselines with a robust and deployable architecture.
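A minimal sketch of the piecewise linear target labeling and the stacked LSTM described above (the window length and dropout rates are assumptions; only the 128/64-unit stacking and the 125-cycle cap come from the description):

```python
import numpy as np
from tensorflow import keras

def piecewise_rul(cycles_to_failure: np.ndarray, cap: int = 125) -> np.ndarray:
    """Clip RUL targets so the engine's 'healthy' phase is a flat plateau at `cap`."""
    return np.minimum(cycles_to_failure, cap)

def build_model(window: int = 30, n_sensors: int = 14) -> keras.Model:
    """Stacked 128/64-unit LSTM with Dropout regularization."""
    model = keras.Sequential([
        keras.layers.Input(shape=(window, n_sensors)),
        keras.layers.LSTM(128, return_sequences=True),
        keras.layers.Dropout(0.2),
        keras.layers.LSTM(64),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(1),   # predicted RUL
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Capping the target means the network never wastes capacity distinguishing, say, 300 from 250 remaining cycles — both are simply "healthy" — which is what stabilizes early-phase training.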
Tech Stack
- Core: Python 3.10+
- Deep Learning: TensorFlow, Keras, LSTM
- Deployment & API: FastAPI, Uvicorn, REST Architecture
- Data Engineering: Pandas, NumPy, Scikit-learn (MinMax Scaling, Windowing)
- Tools: Git, Modular Design, Swagger UI
Link Github
DataDiet Daily AI Newsletter
Summary A minimalist, automated newsletter aggregator designed to "cut the noise" and deliver essential tech developments directly to your inbox.
Description In an era of information overload, staying updated on technology can feel overwhelming. DataDiet was built to solve this problem by focusing on signal over noise. It is an automated system that curates the most critical developments across five key sectors: AI, Space, Gaming, Mobile, and Gear.
The project prioritizes a "Clean UI" philosophy. I intentionally removed timestamps to eliminate FOMO (Fear Of Missing Out) and utilized distinct color-coded categories for rapid visual scanning. The goal is to provide a digestible, focused briefing on what truly matters in tech, accessible via a clean web interface or delivered as an email newsletter.
Tech Stack React / Next.js, Tailwind CSS, Python (BeautifulSoup, requests, feedparser), Vercel Deployment.
Link Live Project
KUKA Robot Dynamics Analysis
Summary: A comprehensive mathematical analysis of the motion dynamics, forces, and torques for KUKA industrial robots.
Description: This project focuses on the dynamic modeling of industrial robotic arms. By analyzing the physical properties and motion constraints of KUKA robots, I calculated the necessary forces and torques required for specific trajectory executions. The study bridges theoretical robotics (Lagrangian and Newton-Euler formulations) with practical engineering requirements, providing a foundation for efficient motion control and motor sizing in industrial automation.
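As a flavor of the Lagrangian computations involved, here is the torque equation for a single rigid link under gravity — a deliberately reduced sketch, not the multi-link KUKA model itself (the mass, inertia, and center-of-mass values are illustrative):

```python
import numpy as np

def single_link_torque(theta: float, theta_ddot: float,
                       m: float = 10.0,    # link mass [kg] (illustrative)
                       l_c: float = 0.5,   # distance to center of mass [m]
                       I: float = 2.0,     # inertia about the joint [kg*m^2]
                       g: float = 9.81) -> float:
    """Joint torque from the Lagrangian of a single link:
    tau = I * theta_ddot + m * g * l_c * cos(theta)."""
    return I * theta_ddot + m * g * l_c * np.cos(theta)
```

The full robot replaces these scalars with configuration-dependent inertia, Coriolis, and gravity matrices, but the structure — inertial term plus gravity term — is the same.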
Tech Stack: Python, Robotics Dynamics, Physics Modeling, MATLAB/NumPy
Link: View on GitHub
6-DOF Robot Arm Forward Kinematics Simulator
Summary: A Python-based simulation tool designed to visualize the Forward Kinematics of a 6-Degrees-of-Freedom (6-DOF) robot arm.
Description: I developed a custom simulator to visualize the movement and positioning of a 6-axis robotic arm. The software utilizes Forward Kinematics (FK) algorithms to calculate the precise position and orientation of the end-effector based on given joint angles. This tool serves as a verification platform for motion planning algorithms, allowing for the simulation of complex robotic tasks in a virtual environment before physical deployment. It demonstrates strong proficiency in spatial transformation matrices and algorithm implementation.
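The core of any FK solver is chaining homogeneous transforms; a minimal sketch using the standard Denavit-Hartenberg convention (the DH parameter values in the usage example are a toy two-link planar arm, not the 6-DOF arm's actual table):

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params) -> np.ndarray:
    """Chain per-joint transforms; returns the end-effector pose as a 4x4 matrix."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```

For a two-link planar arm with unit link lengths, `forward_kinematics([0, 0], [(0, 1.0, 0), (0, 1.0, 0)])` places the end-effector at (2, 0, 0), as expected.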
Tech Stack: Python, Forward Kinematics, Simulation, 3D Visualization, Linear Algebra
Link: View Simulator
Real-Time Sensor Monitoring & ML-Based Fault Detection
Summary: An IoT-integrated system designed to detect operational anomalies in industrial machinery using Machine Learning.
Description: Designed to enhance industrial safety and efficiency, this system monitors sensor data in real-time to identify potential failures before they occur. I implemented Machine Learning algorithms (Classification & Anomaly Detection) to analyze data streams from sensors. The system successfully differentiates between normal operating conditions and fault states, enabling predictive maintenance strategies. This project demonstrates the practical application of AI in Industry 4.0 contexts.
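An anomaly-detection sketch in the spirit of the system above, using an Isolation Forest on synthetic stand-in sensor data (the sensor channels, value ranges, and choice of algorithm here are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Placeholder sensor stream: [temperature, vibration] under normal operation.
normal = rng.normal(loc=[60.0, 0.5], scale=[2.0, 0.05], size=(500, 2))
# A handful of readings from a simulated fault state (overheating, high vibration).
faults = rng.normal(loc=[95.0, 2.0], scale=[2.0, 0.05], size=(10, 2))

# Fit only on normal data, then flag anything that falls outside it.
detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
labels = detector.predict(faults)   # -1 marks an anomaly, +1 normal
```

In a live deployment the `predict` call would run on each incoming IoT batch, triggering a maintenance alert whenever `-1` appears.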
Tech Stack: Machine Learning, Python, IoT, Data Analysis, Sensor Fusion
Link: View on GitHub
Physics-Informed ML: Airfoil Noise Prediction
Summary: An end-to-end Machine Learning pipeline utilizing XGBoost to predict aerodynamic noise levels (R² = 0.96), featuring a physics-aware data loader and an interactive simulation dashboard.
Description: In this project, I tackled the non-linear challenge of predicting aerodynamic noise generated by airfoils using the NASA airfoil self-noise dataset (NACA 0012). Moving beyond baseline Linear Regression (R²: 0.55), I engineered a high-precision XGBoost Regressor that achieved an R² score of 0.96, capturing complex airflow dynamics.
Key contributions include:
- Physics-Aware Pipeline: Developed a custom data loader to validate aerodynamic constraints (e.g., boundary layer thickness) before training.
- Interactive Simulation: Deployed a Streamlit dashboard, allowing users to visualize noise profiles under different flight conditions in real-time.
- Clean Code: Structured the project as a modular Python package suitable for scalability.
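The physics-aware loading step can be sketched as a simple constraint filter applied before training (the column names and numeric bounds below are illustrative assumptions, not the project's actual configuration):

```python
import pandas as pd

PHYSICAL_BOUNDS = {
    # Illustrative ranges for airfoil self-noise features.
    "frequency_hz": (0, 25_000),
    "angle_of_attack_deg": (-5, 30),
    "displacement_thickness_m": (0, 0.1),   # boundary layer thickness must be positive
}

def validate_physics(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that violate basic aerodynamic constraints before training."""
    mask = pd.Series(True, index=df.index)
    for col, (lo, hi) in PHYSICAL_BOUNDS.items():
        mask &= df[col].between(lo, hi)
    return df[mask]
```

Filtering on physical plausibility before fitting keeps measurement artifacts from distorting the regressor, which matters more for tree ensembles than for linear baselines.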
Tech Stack: Python, XGBoost, Streamlit, Pandas, Scikit-learn, Aeroacoustics
Link: View on GitHub
End-to-End Autonomous Driving: Lane Detection & Steering Control
Summary: A deep learning-based system for autonomous vehicle navigation, utilizing computer vision for lane tracking and real-time steering angle prediction.
Description: This project addresses the core challenges of autonomous driving, specifically lane keeping and trajectory following. Using a Convolutional Neural Network (CNN) architecture inspired by NVIDIA's end-to-end learning model, the system processes raw camera input to map visual data directly to steering commands. The implementation integrates advanced image preprocessing, data augmentation, and regression modeling to enable the vehicle to autonomously navigate and stabilize itself within a simulation environment.
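A Keras sketch of a network in the spirit of NVIDIA's end-to-end architecture — camera frames in, one steering angle out (layer sizes follow the published PilotNet design; the input resolution and activations are assumptions, not necessarily this project's exact configuration):

```python
from tensorflow import keras

def build_pilotnet(input_shape=(66, 200, 3)) -> keras.Model:
    """CNN mapping a raw camera frame directly to a steering command."""
    model = keras.Sequential([
        keras.layers.Input(shape=input_shape),
        keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),   # normalize pixels to [-1, 1]
        keras.layers.Conv2D(24, 5, strides=2, activation="elu"),
        keras.layers.Conv2D(36, 5, strides=2, activation="elu"),
        keras.layers.Conv2D(48, 5, strides=2, activation="elu"),
        keras.layers.Conv2D(64, 3, activation="elu"),
        keras.layers.Conv2D(64, 3, activation="elu"),
        keras.layers.Flatten(),
        keras.layers.Dense(100, activation="elu"),
        keras.layers.Dense(50, activation="elu"),
        keras.layers.Dense(10, activation="elu"),
        keras.layers.Dense(1),   # steering angle (regression, not classification)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Because the output is a single continuous value, the data augmentation step (flips, brightness shifts) must mirror the steering label accordingly — flipping an image horizontally also negates its angle.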
Tech Stack: Python, OpenCV, TensorFlow/Keras, Deep Learning (CNN), Computer Vision, NumPy.
High-Altitude Model Rocket Design & Simulation
Summary: A G-class motor model rocket project designed in OpenRocket, optimized for aerodynamic stability (1.3 cal) and capable of reaching an altitude of 987 meters.
Description: This project involves the engineering and simulation of a custom model rocket based on aerodynamic principles and flight mechanics. Using OpenRocket software for computational analysis, the rocket's Center of Pressure (CP: 65.3 cm) and Center of Gravity (CG: 58.1 cm) were optimized to ensure a safe static stability margin of 1.3 cal.
The design features a Haack series fiberglass nose cone to minimize drag and trapezoidal plywood fins for flight stability. Simulation data indicates that, powered by a G73-P motor with 142 Ns total impulse, the rocket achieves a rail exit velocity of 15.3 m/s, a maximum velocity of 204 m/s, and an apogee of 987 meters. The system is designed for a safe parachute recovery with a landing velocity of 7.01 m/s after a total flight time of 151 seconds.
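The static margin figure follows directly from the CP and CG positions; as a worked check (the body diameter is not stated above — the ~5.5 cm used here is inferred from the reported 1.3 cal margin and is an assumption):

```python
def stability_margin_cal(cp_cm: float, cg_cm: float, body_diameter_cm: float) -> float:
    """Static stability margin in calibers: (CP - CG) / body diameter.
    CP aft of CG (positive margin) means aerodynamic forces self-correct the attitude."""
    return (cp_cm - cg_cm) / body_diameter_cm

# Reported CP = 65.3 cm, CG = 58.1 cm; a 1.3 cal margin implies a body
# diameter of roughly 5.5 cm (inferred, not stated in the design summary).
margin = stability_margin_cal(65.3, 58.1, 5.54)
```

Margins between roughly 1 and 2 calibers are the conventional safe band: enough restoring moment without excessive weathercocking.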
Tech Stack: OpenRocket, Aerodynamics, Flight Mechanics, Physics Simulation, CAD/Technical Drawing, Avionics Integration.