
My Research Publications

Explore my contributions to cutting-edge research in AI, Machine Learning, Healthcare, and Industrial Engineering through peer-reviewed publications.

An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids

A novel DRL-based framework integrating CNN and RNN with NSABC optimization for enhanced anomaly detection in smart power distribution systems.

Energy-Efficient Secure Cell-Free Massive MIMO for Internet of Things: A Hybrid CNN–LSTM-Based Deep-Learning Approach

A groundbreaking AI framework that optimizes the trade-off between energy efficiency and security in Cell-Free Massive MIMO IoT networks using hybrid deep learning.

Energy-Efficient and Secure Double RIS-Aided Wireless Sensor Networks: A QoS-Aware Fuzzy Deep Reinforcement Learning Approach

A novel framework integrating double reconfigurable intelligent surfaces with fuzzy deep reinforcement learning to optimize energy efficiency and security in wireless sensor networks.

5DGWO-GAN: A Novel Five-Dimensional Gray Wolf Optimizer for Generative Adversarial Network-Enabled Intrusion Detection in IoT Systems

A groundbreaking framework that leverages bio-inspired optimization to enhance GAN-based intrusion detection in IoT networks, achieving superior accuracy and efficiency.

A Novel Six-Dimensional Chimp Optimization Algorithm-Deep Reinforcement Learning-Based Optimization Scheme for Reconfigurable Intelligent Surface-Assisted Energy Harvesting in Batteryless IoT Networks

A bio-inspired AI framework combining deep reinforcement learning with chimp optimization to solve energy harvesting challenges in batteryless IoT networks using reconfigurable intelligent surfaces.

Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications

A novel AI framework using reinforcement learning optimized by chimp-inspired algorithms to secure smart grid communications against eavesdropping attacks.

Enhancing Hyper-Spectral Image Classification with Reinforcement Learning and Advanced Multi-Objective Binary Grey Wolf Optimization

A novel two-part AI framework combining Multi-Objective Binary Grey Wolf Optimizer (MOBGWO) for optimal band selection with Deep Q-Learning (DQL) for classification, achieving state-of-the-art accuracy in hyperspectral image analysis.

Utilizing Generative AI for the Production, Classification, and Annotation of Chronic Wound Images: A Systematic Review

A comprehensive systematic review examining four transformative applications of Generative AI in chronic wound management, including image generation, text-to-image synthesis, image-to-text analysis, and clinical documentation automation...

Moving Toward Resiliency in Health Supply Chain

A comprehensive framework for building resilient health supply chains through four key pillars: visibility, diversification, inventory management, and collaboration, with practical MCDM-based evaluation methods...

An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids

Abstract

The increasing complexity of modern smart power distribution systems (SPDSs) has made anomaly detection a significant challenge, as these systems generate vast amounts of heterogeneous and time-dependent data. Conventional detection methods often struggle with adaptability, generalization, and real-time decision-making, leading to high false alarm rates and inefficient fault detection. To address these challenges, this study proposes a novel deep reinforcement learning (DRL)-based framework, integrating a convolutional neural network (CNN) for hierarchical feature extraction and a recurrent neural network (RNN) for sequential pattern recognition and time-series modeling. To enhance model performance, we introduce a novel non-dominated sorting artificial bee colony (NSABC) algorithm, which fine-tunes the hyperparameters of the CNN-RNN structure, including weights, biases, the number of layers, and other critical parameters.

Key Contributions

The DRL-NSABC Framework: A hybrid AI approach combining Deep Reinforcement Learning with evolutionary optimization for superior anomaly detection in smart grids.

Core Components (a minimal code sketch follows this list):

  • Deep Reinforcement Learning (DRL): An adaptive agent that learns through trial and error, making decisions on anomaly detection and receiving rewards for correct classifications.
  • Convolutional Neural Network (CNN): Processes incoming grid data to detect spatial patterns and identify localized anomaly signatures from multi-dimensional sensor inputs.
  • Recurrent Neural Network (RNN): Models temporal dependencies in power grid data, recognizing patterns that unfold over time for detecting subtle, long-term anomalies.
  • Non-Dominated Sorting Artificial Bee Colony (NSABC): A bio-inspired optimization algorithm that automatically fine-tunes model hyperparameters for faster convergence and higher accuracy.
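
To make the pipeline concrete, here is a minimal sketch of how a CNN front end can feed an RNN for window-level anomaly classification. This is a hedged illustration, not the paper's implementation: the window length, feature count, and layer sizes are assumptions, and the NSABC tuning stage is omitted.

```python
# Minimal CNN -> RNN anomaly classifier (illustrative sizes, not the paper's).
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, FEATURES = 96, 8  # hypothetical: 96 time steps of 8 grid sensor readings

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    # CNN stage: hierarchical local patterns within the sensor window
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    # RNN stage: sequential dependencies across the pooled feature map
    layers.SimpleRNN(64),
    # Binary decision: normal vs. anomalous window
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

In the full framework, the NSABC would search over choices like the kernel sizes, layer counts, and unit counts hard-coded above.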

Performance Results

The DRL-NSABC framework was evaluated on four benchmark datasets and consistently outperformed six baseline models:

Dataset        Accuracy   Recall   AUC
Smart Grid     95.83%     96.21%   98.27%
AMI            96.19%     96.86%   98.10%
Smart Meter    96.61%     97.09%   96.63%
Pecan Street   96.45%     96.35%   95.63%

Research Impact

This research offers a robust, scalable, and computationally efficient solution for critical infrastructure challenges. The framework demonstrates:

  • Superior Performance: Consistently outperformed baseline models with statistical significance (99% confidence)
  • Faster Convergence: Reached near-optimal performance within 100-150 epochs compared to 250-300 epochs for competing models
  • Real-World Readiness: Practical solution for enhancing grid resilience, reducing energy losses, and preventing service disruptions

Future Directions

The researchers have outlined promising directions including federated learning integration for enhanced privacy, edge device deployment for lower latency, and incorporation of additional data sources like weather patterns for improved accuracy.

View Full Publication

Energy-Efficient Secure Cell-Free Massive MIMO for Internet of Things: A Hybrid CNN–LSTM-Based Deep-Learning Approach

Abstract

The Internet of Things (IoT) has revolutionized modern communication systems by enabling seamless connectivity among low-power devices. However, the increasing demand for high-performance wireless networks necessitates advanced frameworks that optimize both energy efficiency (EE) and security. Cell-free massive multiple-input multiple-output (CF m-MIMO) has emerged as a promising solution for IoT networks, offering enhanced spectral efficiency, low-latency communication, and robust connectivity. Nevertheless, balancing EE and security in such systems remains a significant challenge due to the stringent power and computational constraints of IoT devices.

Key Contributions

Secrecy Energy Efficiency (SEE) Metric: This study employs secrecy energy efficiency as a key performance metric to evaluate the trade-off between power consumption and secure communication efficiency. By jointly considering energy consumption and secrecy rate, our analysis provides a comprehensive assessment of security-aware energy efficiency in CF m-MIMO-based IoT networks.

Hybrid Deep Learning Framework: We introduce a hybrid deep-learning framework that integrates convolutional neural networks (CNN) and long short-term memory (LSTM) networks for joint EE and security optimization. The CNN extracts spatial features, while the LSTM captures temporal dependencies, enabling a more robust and adaptive modeling of dynamic IoT communication patterns.
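
As a rough sketch of that hybrid (not the authors' exact model), the block below stacks convolutional layers ahead of an LSTM and regresses a single efficiency score; every shape and size here is an assumption.

```python
# Hybrid CNN-LSTM regressor sketch: Conv1D layers for spatial structure,
# an LSTM for temporal dependencies. Shapes and sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

STEPS, CHANNELS = 64, 16  # hypothetical sequence of channel-state snapshots

model = models.Sequential([
    layers.Input(shape=(STEPS, CHANNELS)),
    layers.Conv1D(32, 3, activation="relu", padding="same"),  # spatial features
    layers.Conv1D(64, 3, activation="relu", padding="same"),
    layers.LSTM(64),   # temporal dependencies across snapshots
    layers.Dense(1),   # e.g., predicted secrecy energy efficiency
])
model.compile(optimizer="adam", loss="mse")
```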

MOIBBO Optimization: A multi-objective improved biogeography-based optimization (MOIBBO) algorithm is utilized to optimize hyperparameters, ensuring an improved balance between convergence speed and model performance.
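
MOIBBO itself is not reproduced here, but the sketch below shows the migration step at the core of standard biogeography-based optimization, from which MOIBBO derives: fit "habitats" (solutions) emigrate their features to poor ones. Population size, rates, and the placeholder fitness are assumptions.

```python
# Core migration step of basic biogeography-based optimization (BBO).
import numpy as np

rng = np.random.default_rng(0)
n_habitats, n_vars = 10, 4
pop = rng.uniform(0, 1, size=(n_habitats, n_vars))  # candidate hyperparameter sets
fitness = rng.uniform(size=n_habitats)              # placeholder objective values

order = np.argsort(-fitness)                 # best habitat first
ranks = np.empty(n_habitats)
ranks[order] = np.arange(n_habitats)
mu = 1.0 - ranks / (n_habitats - 1)          # emigration: high for good habitats
lam = 1.0 - mu                               # immigration: high for poor habitats

for i in range(n_habitats):
    for v in range(n_vars):
        if rng.random() < lam[i]:            # habitat i accepts a feature...
            donor = rng.choice(n_habitats, p=mu / mu.sum())
            pop[i, v] = pop[donor, v]        # ...migrated from a fit donor
```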

Performance Results

Model                         RMSE   R² Score   MAPE     Average Time (s)
MOIBBO-CNN-LSTM               0.08   0.97       1.03%    962
NSGA-II-CNN-LSTM              3.27   0.92       6.96%    1241
Vision Transformer (ViT)      6.12   0.86       8.32%    1012
Deep Reinforcement Learning   9.39   0.84       11.43%   1317

Research Impact

The proposed MOIBBO-CNN-LSTM framework achieved superior performance across all metrics, demonstrating the lowest error rates (RMSE and MAPE) and the highest R² score. The framework reached near-optimal convergence within the first 100 training epochs, significantly outperforming competing approaches in both accuracy and computational efficiency.

Future Directions

Future research will explore adaptive, real-time AP selection to address the AP scaling dilemma and further enhance the energy-security trade-off in CF m-MIMO IoT networks. This work lays a strong foundation for building IoT systems that are not just smart, but also secure, sustainable, and efficient.

View Publication

Energy-Efficient and Secure Double RIS-Aided Wireless Sensor Networks: A QoS-Aware Fuzzy Deep Reinforcement Learning Approach

Abstract

Wireless sensor networks (WSNs) are a cornerstone of modern Internet of Things (IoT) infrastructure, enabling seamless data collection and communication for many IoT applications. However, the deployment of WSNs in remote or inaccessible locations poses significant challenges in terms of energy efficiency and secure communication. Sensor nodes, with their limited battery capacities, require innovative strategies to minimize energy consumption while maintaining robust network performance. Additionally, ensuring secure data transmission is critical for safeguarding the integrity and confidentiality of IoT systems.

The Challenge: WSN Trilemma

Energy Scarcity: Sensor nodes have limited battery life, and deploying them in remote locations makes replacement impractical. Energy harvesting (EH) from ambient sources is a solution, but it's often unpredictable and difficult to manage efficiently.

Limited Coverage: Low-power backscatter communication, while energy-efficient, suffers from a short range and poor signal quality, hindering network reliability.

Security Vulnerabilities: The same weak signals that limit coverage also make it difficult to implement strong security, leaving the network open to eavesdroppers.

Innovation: Double RIS Architecture

RIS₁: Placed near the sensor node (SN) to enhance and focus ambient radio frequency (RF) signals, maximizing the energy harvested by the node.

RIS₂: Placed near the gateway (GW) to intelligently reflect the SN's data signal, boosting its strength at the legitimate gateway while simultaneously confusing any potential eavesdroppers.

Fuzzy Deep Reinforcement Learning (FDRL) Framework

Deep Reinforcement Learning with LSTM: The core of the system is a DRL agent that learns the optimal phase shifts for the RISs through trial and error, using LSTM networks to process sequential data and understand temporal patterns.

Fuzzy Logic: A parallel layer that excels at handling uncertainty, imprecision, and ambiguity—common issues with real-world sensor data, using "if-then" rules and membership functions.
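
As a generic illustration of that if-then machinery (the actual membership functions in the paper are tuned by the MOABC, described next), here is a triangular membership function and one rule evaluation; the variables, shapes, and thresholds are assumed.

```python
# Generic fuzzy "if-then" machinery: triangular memberships plus one rule.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership rising on [a, b] and falling on [b, c]."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

snr = 7.0                                 # hypothetical observed SNR (dB)
battery = 0.3                             # hypothetical stored-energy level (0-1)

low_snr = tri(snr, 0.0, 5.0, 10.0)        # degree to which SNR is "low"
low_energy = tri(battery, 0.0, 0.2, 0.5)  # degree to which energy is "low"

# Rule: IF snr is low AND energy is low THEN boost RIS reflection (AND = min)
boost_strength = min(low_snr, low_energy)
print(f"rule firing strength: {boost_strength:.2f}")  # -> 0.60
```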

Multi-Objective Artificial Bee Colony (MOABC) Optimizer: A bio-inspired algorithm that simultaneously fine-tunes the hyperparameters of the DRL model and optimizes the membership functions in the fuzzy logic layer.

Performance Results

Model   RMSE   MAPE     R² Score
FDRL    0.09   1.07%    0.95
RL      6.32   10.76%   0.82
LSTM    5.96   8.24%    0.85
RNN     9.47   13.29%   0.80

Key Achievements

The FDRL framework improves energy efficiency by 35.4% and the secrecy rate by 29.7% compared to the next-best approach. The framework reached near-zero error rate within the first 50 epochs, demonstrating superior convergence speed and accuracy.

Future Directions

Future research will explore advanced technologies like 3D "STAR-RIS," integrated sensing and communication (ISAC), and lightweight AI heuristics to further push the boundaries of autonomous, self-sustaining, and fundamentally secure WSNs.

View Publication

5DGWO-GAN: A Novel Five-Dimensional Gray Wolf Optimizer for Generative Adversarial Network-Enabled Intrusion Detection in IoT Systems

Abstract

The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems, optimized by a five-dimensional Gray Wolf Optimizer (5DGWO) to address the challenging hyperparameter tuning of GANs.

The Core Problem: Taming the AI Beast

Generative Adversarial Networks (GANs) are powerful tools for anomaly detection, working through two competing neural networks—a Generator and a Discriminator. However, training GANs requires finding optimal hyperparameters, and standard optimization methods often fail due to the model's complexity, getting trapped in local minima.

The Innovation: 4-Stage Framework with 5DGWO

Stage 1: Preprocessing - Raw network traffic data from standard datasets (NSL-KDD, UNSW-NB15, IoT-23) is cleaned, normalized, and balanced.

Stage 2: 5DGWO-GAN Synthetic Data Generation - The core innovation uses a Five-Dimensional Gray Wolf Optimizer that enhances standard GWO with two new wolf types (an update sketch follows Stage 4):

  • Gamma (γ) Wolves: Elite wolves focusing on exploitation for faster convergence
  • Theta (θ) Wolves: Random wolves focusing on exploration to escape local minima

Stage 3: Feature Extraction with Autoencoder - The GAN's discriminator is repurposed as an autoencoder to extract critical features from both real and synthetic data.

Stage 4: Predictive Model Training - Various models (CNN, DBN, RNN, Random Forest, XGBoost) are trained on the feature-rich data, with the top-performing model being 5DGWO-GAN-CNNAE.
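
As for the 5DGWO itself, the paper's exact update equations are not reproduced here; the sketch below is one plausible reading of the idea, layering a gamma pull toward the best-known solution and a theta random jump on top of the classic alpha/beta/delta guidance of standard GWO.

```python
# One 5DGWO-flavored position update (a plausible reading, not the paper's
# exact equations): standard GWO guidance + gamma exploitation + theta noise.
import numpy as np

rng = np.random.default_rng(1)
dim, n_wolves = 6, 12
wolves = rng.uniform(-1, 1, size=(n_wolves, dim))
fitness = (wolves ** 2).sum(axis=1)           # placeholder objective (minimize)
alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
a = 1.0                                       # GWO coefficient; decays over iterations

def gwo_step(x, leader):
    r1, r2 = rng.random(dim), rng.random(dim)
    A, C = 2 * a * r1 - a, 2 * r2
    return leader - A * np.abs(C * leader - x)

for i in range(n_wolves):
    guided = (gwo_step(wolves[i], alpha) +
              gwo_step(wolves[i], beta) +
              gwo_step(wolves[i], delta)) / 3    # standard GWO averaging
    gamma_pull = 0.1 * (alpha - wolves[i])       # gamma wolves: extra exploitation
    theta_jump = 0.1 * rng.standard_normal(dim)  # theta wolves: random exploration
    wolves[i] = guided + gamma_pull + theta_jump
```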

Outstanding Performance Results

NSL-KDD Dataset:

  • Binary Classification: 95.34% accuracy with RMSE of 0.24
  • Multiclass Classification: 94.12% accuracy with substantial gains in rare-attack detection (R2L: 96.05% recall, U2R: 36.86% recall)

UNSW-NB15 Dataset: Achieved highest accuracy across most attack categories, including 95.68% for "Generic" attacks and 96.22% for "Fuzzers".

IoT-23 Dataset: Nearly flawless performance with 99.20% accuracy and 100% recall and precision on DDoS attacks.

Key Achievements

The 5DGWO-GAN framework demonstrates:

  • Superior Optimization: Successfully tames GAN complexity through bio-inspired 5D wolf pack intelligence
  • Fastest Convergence: Reaches lowest error in shortest time across all datasets
  • Real-World Ready: Designed for scalability and efficiency in large IoT networks
  • Breakthrough Performance: Consistently outperforms traditional ML and standard deep learning models

Future Impact

This research represents a significant leap forward for IoT security, paving the way for a new generation of intelligent, adaptive, and highly accurate security systems. The future of IoT security may very well be led by a pack of five-dimensional wolves.

View Publication

A Novel Six-Dimensional Chimp Optimization Algorithm-Deep Reinforcement Learning-Based Optimization Scheme for Reconfigurable Intelligent Surface-Assisted Energy Harvesting in Batteryless IoT Networks

Abstract

The rapid advancement of Internet of Things (IoT) networks has revolutionized modern connectivity by integrating many low-power devices into various applications. As IoT networks expand, the demand for energy-efficient, batteryless devices becomes increasingly critical for sustainable future networks. These devices play a pivotal role in next-generation IoT applications by reducing the dependence on conventional batteries and enabling continuous operation through energy harvesting capabilities. However, several challenges hinder the widespread adoption of batteryless IoT devices, including the limited transmission range, constrained energy resources, and low spectral efficiency in IoT receivers. To address these limitations, reconfigurable intelligent surfaces (RISs) offer a promising solution by dynamically manipulating the wireless propagation environment to enhance signal strength and improve energy harvesting capabilities.

Super-Smart Surfaces: How AI-Powered Chimps Are Solving the IoT Energy Crisis

The Internet of Things (IoT) promises a future of seamless connectivity, but it runs on a finite resource: energy. For the IoT to be truly sustainable and scalable, we need devices that don't rely on traditional batteries, which are costly to maintain and harmful to the environment. Batteryless devices, which harvest energy from ambient sources like radio waves, are the key to this future.

However, these devices face a major hurdle: the very low power they operate on limits their communication range and data transmission efficiency. A cutting-edge technology called Reconfigurable Intelligent Surfaces (RISs) offers a solution. An RIS is a smart surface that can be programmed to reflect and focus radio waves, boosting signal strength without using additional power.

The Core Challenge: A Delicate Balancing Act

Using an RIS to help a batteryless IoT device is a brilliant idea, but it creates a complex optimization problem. The RIS must reflect ambient RF signals strongly enough for the device to harvest energy and power on. However, that same reflected signal can become interference at the IoT receiver, drowning out the device's actual data transmission.

The goal is to precisely configure the thousands of tiny elements on the RIS to maximize the data rate at the receiver while ensuring the device harvests enough energy to operate. This is a non-convex problem that traditional optimization methods struggle to solve effectively.

The Solution: A Bio-Inspired, AI-Powered Framework

To tackle this challenge, the researchers developed a novel framework that combines Deep Reinforcement Learning (DRL) with a unique, bio-inspired optimization algorithm.

The Brains: Deep Reinforcement Learning (DRL)
At the heart of the system is a DRL agent. This agent learns the optimal phase shifts for the RIS through trial and error. It operates in a simulated environment by:

  • Observing the State: Monitoring the received signal strength, the signal quality (SINR) at the receiver, and the current RIS configuration.
  • Taking an Action: Adjusting the phase shifts of the RIS elements.
  • Receiving a Reward: Getting positive feedback for increasing harvested power and the data rate, and penalties for failing to meet energy constraints or causing too much interference. (A toy version of this reward is sketched below.)
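
The reward might be shaped roughly as follows; the weights, thresholds, and units are assumptions for illustration, not the paper's formulation.

```python
# Toy RIS energy-harvesting reward: pay for data rate and harvested power,
# penalize constraint violations. All weights and thresholds are assumptions.
import math

def reward(harvested_mw: float, sinr_db: float,
           e_min_mw: float = 0.05, sinr_min_db: float = 0.0) -> float:
    rate = math.log2(1 + 10 ** (sinr_db / 10))  # achievable rate (bps/Hz)
    r = rate + 2.0 * harvested_mw               # reward both objectives
    if harvested_mw < e_min_mw:
        r -= 5.0                                # device cannot power on
    if sinr_db < sinr_min_db:
        r -= 2.0                                # reflections drowned out the data
    return r

print(round(reward(harvested_mw=0.08, sinr_db=6.0), 2))
```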

The Secret Weapon: The 6D Chimp Optimizer (6DChOA)
A DRL agent is only as smart as its underlying neural network, which has many hyperparameters (like learning rate, weights, and biases) that need to be perfectly tuned. Doing this manually is nearly impossible.

This is where the researchers' key innovation comes in: the Six-Dimensional Chimp Optimization Algorithm (6DChOA). The standard Chimp Optimization Algorithm (ChOA) is a metaheuristic inspired by the four distinct roles chimpanzees play when hunting: attackers, barriers, chasers, and drivers.

The authors enhanced this by introducing two new, powerful chimp roles to create the 6DChOA:

  • The Leader: An elite chimp that enhances exploitation, guiding the search party to refine the most promising solutions and converge on an answer more quickly.
  • The Ranger: A chimp that introduces chaotic, random behavior to enhance exploration, preventing the group from getting stuck in a suboptimal rut and ensuring the entire solution space is considered.

This 6DChOA is used to automatically and efficiently fine-tune all the DRL agent's hyperparameters, ensuring it learns faster and finds a better solution than it could on its own.
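
Whatever form the optimizer takes, it needs a scalar fitness to chase. The sketch below shows what evaluating one candidate hyperparameter vector could look like; the vector layout, parameter ranges, and the training stub are all assumptions.

```python
# A fitness function over DRL hyperparameters, of the kind a metaheuristic
# like 6DChOA would maximize. Layout, ranges, and the stub are assumptions.
import numpy as np

def train_agent_briefly(lr, hidden, gamma):
    """Stub for a short DRL training run; returns a fake validation RMSE."""
    rng = np.random.default_rng(hidden)
    return abs(np.log10(lr) + 2.5) + rng.uniform(0, 0.1)  # toy error landscape

def fitness(candidate):
    lr = 10 ** (-4 + 3 * candidate[0])       # learning rate in [1e-4, 1e-1]
    hidden = int(16 + candidate[1] * 240)    # hidden units in [16, 256]
    gamma = 0.90 + 0.099 * candidate[2]      # discount factor in [0.90, 0.999]
    return -train_agent_briefly(lr, hidden, gamma)  # higher fitness = better

print(fitness(np.array([0.5, 0.25, 0.8])))
```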

Outstanding Performance Results

The proposed 6DChOA-DRL framework was tested in a simulated batteryless IoT environment and benchmarked against a suite of other algorithms, including standard RL, DNN, CNN, PSO, and the original ChOA.

Unrivaled Accuracy and Stability:

Method       RMSE    MAPE     SD      Mean Time (s)
6DChOA-DRL   0.13    1.24%    0.27    486
RL           3.34    6.64%    5.21    549
CNN          6.91    8.17%    9.04    603
6DChOA       5.27    7.96%    9.67    659
DNN          8.54    10.32%   11.43   724
ChOA         10.24   14.60%   17.29   851
PSO          12.76   17.29%   20.07   916

The proposed model's RMSE of 0.13 is more than an order of magnitude better than its closest competitor, showing remarkably precise optimization. Its low standard deviation also indicates a highly stable and reliable algorithm.

Superior Energy Harvesting and Data Rates:

  • Energy Harvesting: The 6DChOA-DRL consistently enabled the IoT device to harvest the most power, especially over longer distances where other methods failed.
  • Achievable Data Rate: The framework also delivered the highest data rates, reaching over 14 bps/Hz when using an RIS with 100 elements, far surpassing the non-RIS scenario.

Lightning-Fast Convergence:
In real-time systems, finding the best solution quickly is critical. The 6DChOA-DRL was not only the most accurate but also the fastest. To reach a highly accurate RMSE threshold of < 5.00, the 6DChOA-DRL took only 209 seconds. The next best algorithm (RL) took nearly twice as long (406 seconds), and most other methods couldn't reach that level of accuracy at all.

Key Takeaways

  • Non-Convex Problem Solving: The 6DChOA-DRL successfully tackles the complex, non-convex optimization challenge of RIS-assisted energy harvesting.
  • Bio-Inspired AI Works: The chimp-inspired optimization algorithm proves that nature-inspired AI can outperform traditional methods in complex engineering problems.
  • Future of Autonomous IoT: This research paves the way for truly autonomous, batteryless IoT networks that can self-optimize for maximum efficiency.
View Publication

Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications

Abstract

In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in the SG raises significant concerns about the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel reinforcement learning (RL)-based approach to strengthen secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL network.

Securing the Smart Grid: How AI That Thinks Like a Chimp is Outsmarting Eavesdroppers

The smart grid (SG) is revolutionizing how we manage and distribute energy, making it more efficient, reliable, and sustainable. A key technology powering this transformation is Non-Orthogonal Multiple Access (NOMA), which allows multiple devices, like smart meters, to communicate over the same frequency, drastically boosting network capacity.

But this efficiency comes with a price: security. The open, broadcast nature of wireless NOMA channels makes them a prime target for eavesdroppers trying to intercept critical control messages. How can we protect this vital infrastructure from malicious actors?

The Problem: The Security Flaw in Efficient Communication

In a smart grid's Neighborhood Area Network (NAN), a central Neighborhood Gateway (NG) communicates with numerous Smart Meters (SMs). Using NOMA, the NG can broadcast signals to all SMs at once. While efficient, this creates a significant vulnerability. An eavesdropper can easily listen in on these communications, potentially stealing consumer data, manipulating the grid, or launching impersonation attacks.

The Solution: Reinforcement Learning Optimized by an Improved Chimp Algorithm (IChOA-RL)

To solve this complex optimization problem, the researchers developed a novel framework that combines the adaptive learning of Reinforcement Learning with the powerful search capabilities of a bio-inspired algorithm.

The Brains: Reinforcement Learning (RL)
The foundation of the framework is an RL agent. This AI agent learns the optimal strategy for allocating power to different smart meters to maximize the overall secrecy rate of the network. It operates within a simulated smart grid environment, modeled as a Markov Decision Process (MDP), where it learns through trial and error. The agent observes the state of the network (i.e., the channel conditions of the smart meters and the eavesdropper), takes an action (adjusts power allocation), and receives a reward based on whether its action increased or decreased the secrecy rate.
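
The secrecy rate being maximized has a standard closed form: the legitimate link's capacity minus the eavesdropper's, floored at zero. The toy snippet below uses the change in that quantity as the reward; the channel gains and power levels are hypothetical.

```python
# Secrecy rate as an RL reward signal. Gains and powers are hypothetical.
import math

def secrecy_rate(p_tx, g_sm, g_eve, noise=1e-3):
    c_sm = math.log2(1 + p_tx * g_sm / noise)    # capacity at the smart meter
    c_eve = math.log2(1 + p_tx * g_eve / noise)  # capacity at the eavesdropper
    return max(0.0, c_sm - c_eve)

before = secrecy_rate(p_tx=0.5, g_sm=0.020, g_eve=0.008)
after = secrecy_rate(p_tx=0.7, g_sm=0.020, g_eve=0.008)  # agent raised power
reward = after - before       # positive: the action improved secrecy
print(round(reward, 3))
```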

The Optimizer: The Improved Chimp Optimization Algorithm (IChOA)
An RL agent's performance depends heavily on the tuning of its internal neural network (a Deep Q-Network or DQN in this case). This is where the paper's key innovation comes in: the Improved Chimp Optimization Algorithm (IChOA).

The standard Chimp Optimization Algorithm (ChOA) is a metaheuristic inspired by the four coordinated roles chimpanzees adopt while hunting: drivers, barriers, chasers, and attackers. The authors improved upon this by adapting it for the binary optimization tasks needed to tune the RL model's weights and biases. The main novelty of their IChOA is the introduction of a new V-shaped transfer function, which allows the algorithm to more effectively navigate the discrete search space of the RL model's parameters.
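
The paper's exact transfer function is not reproduced here, but a commonly used V-shaped choice is |tanh(x)|, which converts a continuous update step into a bit-flip probability, as in this sketch:

```python
# V-shaped transfer function for binary optimization: a large continuous step
# means a high probability of flipping that bit. |tanh| is one standard
# V-shaped choice, not necessarily the paper's exact function.
import numpy as np

rng = np.random.default_rng(42)

bits = rng.integers(0, 2, size=8)          # current binary parameter mask
step = rng.normal(0, 1, size=8)            # continuous ChOA update per bit

flip_prob = np.abs(np.tanh(step))          # V-shaped mapping into [0, 1)
flip = rng.random(8) < flip_prob
bits = np.where(flip, 1 - bits, bits)
print(bits)
```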

This IChOA acts as a master trainer for the RL agent, efficiently fine-tuning its parameters to help it learn faster, avoid getting stuck in suboptimal solutions, and ultimately discover a more robust and effective security policy.

Outstanding Performance Results

The proposed IChOA-RL framework was rigorously tested and compared against eight other machine learning and optimization algorithms, including standard RL, RNN, LSTM, KNN, SVM, and other bio-inspired optimizers like Grey Wolf Optimizer (GWO) and Improved Crow Search Algorithm (I-CSA).

Unmatched Accuracy and Predictive Power:

Method     R² (Validation)   Accuracy (Validation)   RMSE (Validation)   Runtime (s)
IChOA-RL   95.77%            97.41%                  0.95                724
I-CSA-RL   92.18%            94.53%                  3.46                985
GWO-RL     91.52%            93.28%                  4.24                1024
ChOA-RL    90.44%            92.76%                  5.69                896
RL         89.36%            90.43%                  8.37                659
LSTM       85.81%            88.82%                  11.73               903
SVM        83.37%            85.63%                  20.18               941

The IChOA-RL achieved the highest accuracy (97.41%) and R² score (95.77%), proving it has the most reliable predictive power. It also had the lowest Root Mean Square Error (RMSE) of just 0.95, indicating its predictions were incredibly precise.

Superior Secrecy Rate and Efficiency:
The framework's primary goal was to maximize the secrecy rate, and it excelled. The IChOA-RL model achieves a higher secrecy rate at every transmission power level compared to all other methods. Its curve is also the steepest, meaning it is the most efficient at converting additional power into enhanced security.

Blazing-Fast and Stable Convergence:
The IChOA-RL model learned its optimal policy much faster than its competitors. The model reached a stable, low-error state by the 100th epoch, while other algorithms were still struggling to reduce their errors. This rapid convergence is crucial for real-world systems that need to adapt quickly to new threats.

Key Takeaways

  • Synergistic AI: The combination of RL's adaptive decision-making and IChOA's powerful optimization creates a system that is more effective than either component alone.
  • Bio-Inspired Optimization Works: The novel IChOA, inspired by the complex social behaviors of chimpanzees, proves to be a highly effective method for training complex AI models.
  • Scalable and Efficient Security: The framework not only provides top-tier security but is also computationally efficient and scalable, demonstrating that secrecy performance improves as more users are added to the NOMA network.
View Publication

Enhancing Hyper-Spectral Image Classification with Reinforcement Learning and Advanced Multi-Objective Binary Grey Wolf Optimization

Abstract

Hyperspectral (HS) image classification plays a crucial role in numerous areas including remote sensing (RS), agriculture, and the monitoring of the environment. Optimal band selection in HS images is crucial for improving the efficiency and accuracy of image classification. This process involves selecting the most informative spectral bands, which leads to a reduction in data volume. Focusing on these key bands also enhances the accuracy of classification algorithms, as redundant or irrelevant bands, which can introduce noise and lower model performance, are excluded. In this paper, we propose an approach for HS image classification using deep Q learning (DQL) and a novel multi-objective binary grey wolf optimizer (MOBGWO). We investigate the MOBGWO for optimal band selection to further enhance the accuracy of HS image classification.

Sifting through the Spectrum: How a Wolf Pack AI is Revolutionizing Satellite Image Analysis

Hyperspectral (HS) imaging is a technology that allows us to see the world in hundreds of "colors" far beyond the range of human vision. This incredible detail is a game-changer for fields like precision agriculture (monitoring crop health), environmental science (tracking pollution), and urban planning (classifying materials).

But this power comes with a significant challenge: the "curse of dimensionality." A single HS image contains a massive amount of data across hundreds of spectral bands, many of which are redundant or just noise. This data overload can overwhelm even advanced AI models, leading to inaccurate classifications and slow processing times. The key to unlocking the full potential of HS imaging lies in intelligently selecting only the most informative bands.

The Problem: Finding the Signal in the Noise

The core challenge in HS image classification is to avoid the Hughes phenomenon, a paradox where adding more features (spectral bands) can actually make a classifier less accurate unless the amount of training data increases exponentially. Processing hundreds of bands is also computationally expensive and time-consuming.

The solution is optimal band selection: a process of identifying and using only the most information-rich spectral bands to classify the image. This reduces the data volume and eliminates noise, allowing the classification algorithm to perform far more effectively.

The Solution: A Two-Pronged AI Attack (MOBGWO-DQL)

The researchers designed a sophisticated framework that tackles this problem in two distinct stages: first, an intelligent optimizer selects the best bands, and second, a dynamic classifier analyzes them.

Part 1 - The Optimizer: The Multi-Objective Binary Grey Wolf Optimizer (MOBGWO)
The first task is to sift through hundreds of bands and choose the optimal subset. To navigate this vast search space, the researchers developed a novel metaheuristic algorithm called the Multi-Objective Binary Grey Wolf Optimizer (MOBGWO).

Inspired by Nature: The algorithm is based on the Grey Wolf Optimizer (GWO), which mimics the social hierarchy and cooperative hunting strategies of a wolf pack. The algorithm's "wolves" represent potential solutions (i.e., different combinations of bands).

Binary Adaptation: The authors adapted the GWO for a binary problem—a band is either selected (1) or discarded (0). They achieved this by introducing a new sigmoid transfer function that cleverly modifies how the wolves update their positions in the binary search space.

Multi-Objective: The MOBGWO is designed to balance two competing goals simultaneously (a toy fitness sketch follows this list):
• Maximize classification accuracy
• Minimize the number of selected bands
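
Here is a toy version of that two-objective evaluation, together with the Pareto-dominance test that "multi-objective" implies; the classifier stand-in and all numbers are assumptions.

```python
# Toy two-objective band-selection evaluation plus a Pareto-dominance check.
import numpy as np

rng = np.random.default_rng(7)
N_BANDS = 200

def objectives(mask):
    n_sel = int(mask.sum())
    if n_sel == 0:
        return (1.0, 0)                    # nothing selected: worst error
    # stand-in for "train on the selected bands, return 1 - accuracy"
    error = 0.05 + 0.2 * abs(n_sel - 30) / N_BANDS + rng.uniform(0, 0.01)
    return (error, n_sel)                  # minimize both objectives

def dominates(a, b):
    """Pareto dominance: no worse on every objective, better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

lean = (rng.random(N_BANDS) < 0.15).astype(int)     # one wolf: 0/1 flag per band
bloated = (rng.random(N_BANDS) < 0.60).astype(int)
print(dominates(objectives(lean), objectives(bloated)))   # likely True
```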

Part 2 - The Classifier: Deep Q-Learning (DQL)
Once the MOBGWO has selected the best bands, the data is passed to a Deep Q-Learning (DQL) model for the final classification.

Learning through Interaction: DQL is a type of Reinforcement Learning (RL) where an "agent" learns by making decisions and receiving rewards or penalties. In this context, each pixel in the HS image is treated as an agent that needs to be classified.

Dynamic and Adaptive: The DQL agent learns the optimal policy for classifying each pixel based on its unique spectral signature (from the selected bands). It is rewarded for correct classifications and penalized for incorrect ones, allowing it to continuously refine its accuracy through a trial-and-error process.
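
The paper uses a deep Q-network, but the update rule underneath is ordinary Q-learning. The tabular sketch below shows that rule on a hypothetical pixel state, treating each classification as a one-step episode.

```python
# The Q-learning update behind DQL, in tabular form; the paper replaces the
# table with a deep network over band spectra. States and labels are hypothetical.
from collections import defaultdict
import random

Q = defaultdict(float)                    # (state, action) -> estimated value
ALPHA = 0.1                               # learning rate
CLASSES = ["corn", "soybean", "road"]     # hypothetical label set

def choose(state, eps=0.1):
    if random.random() < eps:                            # explore
        return random.choice(CLASSES)
    return max(CLASSES, key=lambda c: Q[(state, c)])     # exploit

state, true_label = "pixel_17_spectrum_bin", "corn"      # stand-in state key
for _ in range(200):
    action = choose(state)
    reward = 1.0 if action == true_label else -1.0       # +1 correct, -1 wrong
    # one-step (terminal) update: classifying a pixel ends the episode
    Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

print(max(CLASSES, key=lambda c: Q[(state, c)]))         # -> "corn"
```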

Outstanding Performance Results

This powerful MOBGWO-DQL framework was tested on three well-known public HS datasets: Indian Pines, Pavia University, and Washington Mall. The proposed model was benchmarked against nine other machine learning and deep learning algorithms.

Unrivaled Accuracy and Precision:

Dataset            Model        Overall Accuracy (OA)   Kappa Coefficient (KC)   RMSE
Indian Pines       MOBGWO-DQL   94.32%                  97.68%                   0.94
Pavia University   MOBGWO-DQL   96.01%                  98.72%                   0.63
Washington Mall    MOBGWO-DQL   96.74%                  99.08%                   0.51

The MOBGWO-DQL consistently achieved the highest accuracy and the lowest error, proving its ability to make highly precise classifications.

Superior Learning Efficiency:
Not only was the model more accurate, but it also learned faster and more effectively. The convergence curves show that MOBGWO-DQL reaches a stable, low-error state far more quickly than other algorithms.

Key Takeaways

  • It Solves the Band Selection Problem: The framework effectively tackles the "curse of dimensionality" by using a powerful and novel optimizer to select the most valuable data, dramatically improving efficiency and accuracy.
  • Synergy Is Key: The combination of a metaheuristic optimizer for feature selection (MOBGWO) and a dynamic learning model for classification (DQL) is more powerful than either approach used alone.
  • It Sets a New Performance Standard: The framework has established a new state of the art, delivering superior accuracy and faster convergence on multiple benchmark datasets.
View Publication

Utilizing Generative AI for the Production, Classification, and Annotation of Chronic Wound Images: A Systematic Review

Abstract

The rapid advancement of Generative AI has impacted many areas of healthcare, including chronic wound management. We conducted a systematic review to answer the following research question: how is generative AI used in the context of wound care and management? Our search across multiple databases returned more than 500 articles matching our search criteria. After applying our inclusion/exclusion criteria, we identified 61 articles relevant to our research question. Preliminary analysis of these studies revealed four ways generative AI is utilized in the chronic wound management context.

The Four Key Applications of Generative AI in Wound Care

1. Generating Images from Other Images

How it Works: This method primarily uses Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to learn the complex patterns of chronic wounds and generate novel examples.
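
To make the two competing networks concrete, here is a skeletal generator/discriminator pair in Keras; the sizes are toy values, and real wound-image GANs are far larger and trained with considerable care.

```python
# Skeletal GAN: a generator mapping noise to a small RGB patch, and a
# discriminator judging real vs. synthetic. Toy sizes, untrained weights.
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT = 64

generator = models.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(8 * 8 * 32, activation="relu"),
    layers.Reshape((8, 8, 32)),
    layers.Conv2DTranspose(16, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="sigmoid"),
])  # -> 32x32 RGB synthetic patch

discriminator = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, strides=2, activation="relu"),
    layers.Conv2D(32, 3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),   # P(image is real)
])

noise = tf.random.normal((1, LATENT))
print(discriminator(generator(noise)).numpy())   # untrained: ~0.5
```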

Why it Matters: High-quality, annotated medical images are often scarce due to privacy concerns and the difficulty of collection. By generating synthetic images, researchers can create vast datasets to train other diagnostic AI models, leading to improved early detection and more effective wound management strategies.

2. Generating Images from Text

How it Works: Technologies like DALL-E2 can take a detailed text prompt (e.g., "a venous leg ulcer with moderate exudate and 50% granulation tissue") and produce a high-resolution image that accurately visualizes the description.

Why it Matters: This capability can significantly improve communication among healthcare providers, enhance medical training materials, and support the development of precise treatment plans by allowing for clear, visual representations of complex wound conditions.

3. Generating Text from Images

How it Works: This is achieved using methods like CLIP (Contrastive Language Image Pre-training), where the AI learns the relationship between visual data and descriptive language.
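
As a minimal illustration of the mechanism (not any reviewed study's pipeline), here is zero-shot image-text matching with an off-the-shelf CLIP checkpoint via Hugging Face transformers; the image path and captions are placeholders.

```python
# Zero-shot image-text matching with a public CLIP checkpoint.
# The image file and wound descriptions below are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("wound_photo.jpg")            # placeholder file
captions = [
    "a venous leg ulcer with granulation tissue",
    "a diabetic foot ulcer with slough",
    "healthy intact skin",
]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)  # one score per caption
print(dict(zip(captions, probs[0].tolist())))
```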

Why it Matters: This can automate the tedious process of clinical documentation, providing detailed and consistent notes about wound characteristics. By generating objective, data-driven descriptions, it supports clinical decisions and provides deeper insights for treatment planning.

4. Generating Text from Other Text

How it Works: Large Language Models (LLMs) like GPT-3 are used to process vast amounts of unstructured text from clinical notes and patient records.

Why it Matters: By analyzing this data, the AI can help create improved, personalized treatment plans and enhance diagnostics. It can identify patterns and correlations across thousands of patient records that a human might miss, leading to more effective and data-driven wound care.

Key Takeaways

  • Comprehensive Review: Analysis of over 500 articles narrowed down to 61 relevant studies
  • Four Distinct Applications: Image-to-image, text-to-image, image-to-text, and text-to-text generation
  • Clinical Impact: Enhanced training datasets, improved communication, automated documentation, and personalized treatment plans
  • Technology Integration: GANs, VAEs, DALL-E2, CLIP, and LLMs working together for comprehensive wound care
  • Future Potential: Balancing innovation with responsible implementation in healthcare settings
View Publication

Moving Toward Resiliency in Health Supply Chain

Abstract

The COVID-19 pandemic served as a stark reminder of a critical vulnerability in our global infrastructure: the health supply chain. The complex network responsible for delivering everything from pharmaceuticals to personal protective equipment (PPE) was strained to its breaking point, leading to widespread shortages that impacted healthcare providers and patients alike. This research examines those challenges and proposes a clear, actionable framework for strengthening this vital system, built on four fundamental pillars of supply chain resilience.

The Framework for Resilience: Four Key Pillars

1. Visibility

Healthcare organizations need to have complete, end-to-end visibility into their supply chains, from the original suppliers to the final distributors and hospitals. This transparency allows them to identify potential disruptions early and take proactive steps to mitigate their impact.

2. Diversification

Relying on a single supplier for any critical medical item is a high-risk strategy. Organizations should actively diversify their supply base to reduce the risk of a single point of failure.

3. Inventory

Maintaining adequate inventory levels of critical supplies is essential. This inventory acts as a buffer, ensuring that essential items remain available even when the supply chain is interrupted.

4. Collaboration

No single organization can achieve resilience alone. Healthcare organizations must collaborate closely with their suppliers and distributors to jointly develop and implement resilience strategies. A case study of a U.S. healthcare organization demonstrated that this type of collaboration was key to weathering the COVID-19 pandemic.

Multi-Criteria Decision Making (MCDM) Methods

The research highlights the use of several MCDM methods to improve healthcare supply chain resilience (a minimal TOPSIS sketch follows this list):

  • Analytical Hierarchy Process (AHP): Used to weigh criteria and compare alternatives
  • TOPSIS: Ranks alternatives based on their similarity to an ideal solution
  • Fuzzy Logic: Used to model and account for uncertainty in the supply chain
  • Machine Learning: Used to predict the impact of disruptions
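
As a minimal sketch of how one of these methods works, the snippet below runs a tiny TOPSIS over a made-up decision matrix; the alternatives, criteria, and weights are illustrative only.

```python
# Minimal TOPSIS: rank alternatives by closeness to an ideal solution.
# The decision matrix, weights, and criteria below are made-up numbers.
import numpy as np

# rows = hypothetical health centers; cols = criteria scores for visibility,
# diversification, inventory, collaboration (all benefit-type for simplicity)
X = np.array([[7.0, 5.0, 6.0, 8.0],
              [6.0, 8.0, 7.0, 5.0],
              [8.0, 6.0, 5.0, 7.0]])
w = np.array([0.3, 0.25, 0.2, 0.25])          # criteria weights (sum to 1)

R = X / np.linalg.norm(X, axis=0)             # vector-normalize each criterion
V = R * w                                     # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal solutions
d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal
d_minus = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal
closeness = d_minus / (d_plus + d_minus)      # higher = better alternative
print(closeness.round(3))
```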

Numerical Case Study Results

A numerical analysis using six different MCDM methods (TOPSIS, VIKOR, COPRAS, MOORA, MABAC, and ARAS) evaluated five hypothetical health centers:

Health Center     Total Score
Health Center 1   0.46
Health Center 2   0.49
Health Center 3   0.46
Health Center 4   0.49
Health Center 5   0.50

Key Takeaways

  • Tangible Benefits: Cost reductions of up to 10%, patient satisfaction improvements of up to 5%, and disruption-risk reductions of up to 20%
  • Four-Pillar Framework: Visibility, diversification, inventory, and collaboration as fundamental elements
  • Data-Driven Approach: MCDM methods provide structured evaluation of resilience strategies
  • Continuous Improvement: Building resilient supply chains requires ongoing adaptation and commitment
  • Collaborative Success: No single organization can achieve resilience alone
View Publication