
Research Blog


Unmanned Aerial Systems (UAS), commonly known as drones, have rapidly emerged as indispensable tools in Search and Rescue (SAR) operations. Their versatility, swift deployment capabilities, and high mobility make them uniquely suited for rapid assessment and intervention missions. As SAR operations become increasingly dependent on advanced technologies, the integration of Artificial Intelligence (AI), improved sensor payloads, and multi-drone coordination has significantly enhanced the efficiency and effectiveness of these aerial platforms.


The Role of Drones in SAR Missions


Drones have revolutionized SAR missions by enabling rapid aerial surveys over vast and often inaccessible areas. Traditional search operations, reliant on ground teams and manned aircraft, are frequently constrained by terrain, weather conditions, and response time. In contrast, drones provide real-time situational awareness, allowing rescue teams to locate survivors, assess hazards, and plan rescue operations more effectively.

Multi-UAV coordination has further expanded the operational capabilities of SAR teams. Swarm technologies allow multiple drones to work collaboratively, increasing coverage and efficiency. These coordinated systems can divide search areas, optimize flight paths, and share data instantaneously, leading to improved accuracy and reduced mission duration.


Advancements in Sensor Technologies


One of the most significant technological advancements in SAR drones is the integration of enhanced sensor payloads. Cutting-edge sensor technologies have dramatically improved the ability of drones to detect and identify survivors under challenging conditions. Some key sensor advancements include:

  • Infrared and Thermal Imaging: Essential for detecting human heat signatures in low-visibility conditions, such as at night or in densely forested areas.

  • Radar and Lidar Systems: Effective for mapping terrain, detecting obstacles, and identifying survivors hidden under debris or foliage.

  • Biometric Monitoring: Emerging technologies allow drones to assess vital signs remotely, offering critical data for medical response teams.

  • High-Resolution Optical and Multispectral Cameras: Providing clear imaging and real-time video feeds to enhance operational decision-making.


These sensor technologies collectively enhance the capability of SAR drones to operate in diverse and demanding environments, making them invaluable in disaster response scenarios.


Artificial Intelligence and Autonomy in SAR Operations


Artificial Intelligence has played a pivotal role in elevating drone efficiency in SAR missions. AI-powered image recognition and machine learning algorithms enable drones to autonomously detect survivors, identify hazards, and differentiate between natural and artificial objects. Key AI-driven enhancements include:


  • Automated Object Recognition: AI systems can analyze drone-captured imagery to identify survivors, vehicles, and relevant objects of interest with high accuracy.

  • Predictive Analytics: Machine learning models help anticipate the movement of lost individuals based on terrain data and environmental conditions.

  • Path Optimization: AI-powered drones can autonomously determine the most efficient flight paths, reducing time spent searching and maximizing coverage.

  • Real-Time Decision Support: AI algorithms process sensor data in real-time, assisting SAR teams in making informed decisions rapidly.


The increasing sophistication of AI enables drones to conduct complex missions with minimal human intervention, significantly reducing response times and enhancing mission success rates.
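To make the path-optimization idea above concrete, here is a minimal sketch (in Python, with hypothetical area dimensions, sensor footprint, and drone count) that splits a rectangular search area into strips and assigns each drone a back-and-forth "lawnmower" sweep. Real SAR planners add terrain, no-fly zones, and probability-of-detection maps on top of this.

import numpy as np

def lawnmower_paths(width_m, height_m, spacing_m, n_drones):
    """Split a width x height search area into n_drones vertical strips and
    return a boustrophedon (back-and-forth) waypoint list for each drone."""
    strip_w = width_m / n_drones
    paths = []
    for d in range(n_drones):
        x0 = d * strip_w
        xs = np.arange(x0, x0 + strip_w + 1e-9, spacing_m)
        waypoints = []
        for i, x in enumerate(xs):
            # Alternate sweep direction on every pass to avoid dead transits.
            ys = (0.0, height_m) if i % 2 == 0 else (height_m, 0.0)
            waypoints.append((x, ys[0]))
            waypoints.append((x, ys[1]))
        paths.append(waypoints)
    return paths

# Example: 600 m x 400 m area, 50 m sensor footprint, 3 drones.
for k, path in enumerate(lawnmower_paths(600, 400, 50, 3)):
    print(f"drone {k}: {len(path)} waypoints, first {path[0]}, last {path[-1]}")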


The Promise of Digital Twin Simulations


Digital twin simulations represent a groundbreaking innovation in SAR operations. These simulations create virtual replicas of real-world environments, allowing SAR teams to:

  • Test and optimize drone deployment strategies before actual missions.

  • Simulate various search scenarios and refine operational procedures.

  • Train AI algorithms in realistic conditions, improving their decision-making accuracy.

By leveraging digital twins, SAR teams can enhance their preparedness and response efficiency, ultimately leading to more successful rescue missions.


Challenges and Future Directions


Despite the significant progress in SAR drone technology, several challenges persist:


  • Regulatory Restrictions: Legal and regulatory frameworks governing drone operations vary across jurisdictions, often limiting their deployment in emergencies.

  • Battery Life Constraints: Limited flight endurance remains a significant challenge, necessitating advancements in battery technology or alternative power sources such as solar or hydrogen fuel cells.

  • Payload Limitations: While sensor technologies continue to improve, payload capacity remains a limiting factor, restricting the types and number of sensors that can be carried simultaneously.

  • Weather and Environmental Limitations: Adverse weather conditions, such as strong winds, heavy rain, and extreme temperatures, can impact drone performance and reliability.


Addressing these challenges will require continued research, policy advancements, and technological breakthroughs. Future innovations in AI, autonomous navigation, and sensor miniaturization will further enhance the effectiveness of drones in SAR missions.


Conclusion


The transformative potential of evolving drone technologies in SAR operations is undeniable. By combining AI-driven analytics, advanced sensor integration, and multi-UAV coordination, drones are revolutionizing search and rescue missions, enabling faster, more efficient, and more effective responses. As research and development efforts continue, the next generation of SAR drones will push the boundaries of what is possible, ultimately saving lives through improved real-time decision-making and operational capabilities. The future of SAR operations is increasingly automated, intelligent, and capable, paving the way for unprecedented advancements in disaster response and humanitarian aid.


BibTeX

@article{QUERO2025105199,
title = {Unmanned aerial systems in search and rescue: A global perspective on current challenges and future applications},
journal = {International Journal of Disaster Risk Reduction},
pages = {105199},
year = {2025},
issn = {2212-4209},
doi = {10.1016/j.ijdrr.2025.105199},
url = {https://www.sciencedirect.com/science/article/pii/S2212420925000238},
author = {Carlos Osorio Quero and Jose Martinez-Carranza},
keywords = {Unmanned aerial vehicles (UAV), Unmanned aerial systems (UAS), Search and rescue (SAR), Multi-sensor technology, Automatic control, Disaster response, Intelligent autonomous system}
}

Physics-Informed Neural Networks (PINNs) have recently gained attention as an effective approach for tackling complex inverse problems in image processing, especially in image denoising. This paper introduces an innovative framework that utilizes a range of neural network architectures—including ResUNet, UNet, U2Net, and Res2UNet—to implement denoising strategies grounded in nonlinear partial differential equations (PDEs). The proposed methods employ PDEs such as the heat equation, diffusion processes, multiphase mixture and phase change (MPMC) models, and Zhichang Guo's technique, embedding physical laws within the learning process to enhance denoising robustness and accuracy. We demonstrate that these models can be trained to effectively reduce noise while preserving critical image features, using a blend of data-driven methods and physical constraints. Our experiments show that integrating PDEs leads to superior denoising performance compared to traditional techniques. The models were evaluated on multiple datasets, with metrics like Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) indicating significant improvements in image quality. These results highlight the potential of using PINNs with nonlinear PDEs for advanced image-denoising tasks, paving the way for future research at the intersection of deep learning and physics-based modeling in image processing.
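As a rough illustration of how a PDE can enter a denoising loss, the sketch below (PyTorch) adds a heat-equation residual to a standard reconstruction term. The network choice, diffusion coefficient, and weighting are illustrative assumptions, not the exact configuration used in the paper.

import torch
import torch.nn.functional as F

def laplacian(img):
    """Discrete 5-point Laplacian applied channel-wise to a (B, C, H, W) tensor."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=img.device).view(1, 1, 3, 3)
    kernel = kernel.repeat(img.shape[1], 1, 1, 1)
    return F.conv2d(img, kernel, padding=1, groups=img.shape[1])

def pinn_denoise_loss(denoised, noisy, diffusion_coeff=0.1, pde_weight=0.05):
    """Data fidelity + heat-equation residual.

    Treating the denoised image u as one diffusion step from the noisy input v,
    the residual of u_t = c * laplacian(u) (with u_t approximated by u - v) is
    penalized alongside a standard reconstruction term.
    """
    fidelity = F.mse_loss(denoised, noisy)
    residual = (denoised - noisy) - diffusion_coeff * laplacian(denoised)
    return fidelity + pde_weight * residual.pow(2).mean()

# Usage with any denoising network (e.g., a UNet variant):
# pred = model(noisy)
# loss = pinn_denoise_loss(pred, noisy)
# loss.backward()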





Carlos Osorio · 23 Oct 2024 · 4 min read

Updated: 6 Dec 2024

We explore the integration of Dynamic Mode Decomposition (DMD) with Physics-Informed Neural Networks (PINNs) to enhance control systems for UAV quadcopters. This innovative approach applies DMD techniques and PINNs to solve the Riccati equation, which is critical for accurate UAV position estimation. By embedding the UAV control problem within physics-based constraints, the models remain faithful to the physical principles governing UAV dynamics. DMD is used to extract key dynamic modes from a rich dataset of UAV flight parameters—such as position, velocity, and control inputs—yielding a reduced-order representation that encapsulates the essential UAV dynamics. This streamlined representation is then embedded within the PINN framework to solve the Riccati equation accurately. The resulting control strategy significantly enhances position estimation accuracy and optimizes overall control performance. Real-time validation was performed in a Unity-based physics simulation, factoring in real-world conditions like gravity and perturbation noise. The outcomes show notable improvements in estimation accuracy and control stability over conventional methods.


Fig. 1. Integrating DMD with a PINN for UAV position and orientation estimation involves a step-by-step workflow. This sequence highlights the training procedure, detailing the neural network architecture and the loss function used to optimize the model.


Dynamic Mode Decomposition (DMD) with Physics-Informed Neural Networks (PINNs)


The integration of DMD and PINNs provides a robust control strategy for precise estimation of UAV position and orientation, even in the presence of noise and environmental perturbations. By optimizing a composite loss function, the PINN framework ensures that the estimated states adhere to the physical laws governing UAV dynamics, leading to improved accuracy and stability. During training, the neural network parameters are fine-tuned to minimize this loss, which in turn enhances the overall performance of the control system.


Integrating DMD with the PINN framework involves the following steps:


Data Acquisition and Preprocessing: A comprehensive dataset, denoted X_in, is collected, comprising UAV flight information such as Euler angles (ϕ, θ, ψ), position coordinates (x, y, z), and noisy observations. This dataset serves as the input to the DMD process.


Dynamic Mode Decomposition (DMD): The input data is processed using DMD to extract system matrices [A, B, Q, R], which represent the UAV dynamics and control parameters. These matrices form a reduced-order model that captures the dominant modes of the UAV's behavior.
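For readers unfamiliar with DMD, the following is a compact sketch of the exact-DMD estimate of the state-transition matrix A from snapshot pairs; control-aware variants (e.g., DMDc) additionally regress the input matrix B, and the truncation rank and state dimensions shown here are arbitrary choices for illustration.

import numpy as np

def dmd_operator(X, Xprime, rank=10):
    """Estimate the linear operator A with x_{k+1} ~ A x_k from snapshot
    matrices X = [x_0 ... x_{m-1}] and Xprime = [x_1 ... x_m] via a
    rank-truncated SVD (exact DMD)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, len(s))
    Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T
    # Reduced operator, then lift back to the full state dimension.
    A_tilde = Ur.conj().T @ Xprime @ Vr @ np.linalg.inv(Sr)
    return Ur @ A_tilde @ Ur.conj().T

# Example with synthetic flight-state snapshots (positions, velocities, angles).
states = np.random.randn(9, 200)          # 9 state variables, 200 time steps
A_hat = dmd_operator(states[:, :-1], states[:, 1:], rank=6)
print(A_hat.shape)                        # (9, 9)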

Solving the Discrete Algebraic Riccati Equation (DARE): The matrices derived from DMD are then used to solve the discrete algebraic Riccati equation (DARE), yielding an initial estimate X̂_1.
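Given the identified matrices, the DARE can be solved directly with an off-the-shelf solver; the sketch below uses SciPy and also forms the associated feedback gain. The matrix dimensions are illustrative, not taken from the paper.

import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative dimensions: 9 states, 4 control inputs.
n, m = 9, 4
A = np.eye(n) + 0.01 * np.random.randn(n, n)   # from DMD
B = 0.01 * np.random.randn(n, m)               # from DMD / DMDc
Q = np.eye(n)                                   # state cost
R = 0.1 * np.eye(m)                             # control cost

# Solve A^T X A - X - A^T X B (R + B^T X B)^{-1} B^T X A + Q = 0 for X.
X1 = solve_discrete_are(A, B, Q, R)

# Feedback gain associated with the DARE solution.
K = np.linalg.solve(R + B.T @ X1 @ B, B.T @ X1 @ A)
print(X1.shape, K.shape)   # (9, 9) (4, 9)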


Neural Network (NN) Refinement: The initial estimate X̂_1 is then refined using a Physics-Informed Neural Network (PINN). The architecture consists of fully connected layers with ReLU activations between them:

  • Input Layer: The network takes the system matrices as input.

  • First Layer: The flattened input passes through a fully connected layer with 128 neurons, followed by a ReLU activation.

  • Second Layer: The output of the first hidden layer passes through another fully connected layer with 128 neurons, again followed by a ReLU activation.

  • Output Layer: The output of the second hidden layer passes through a final fully connected layer that produces a vector reshaped to the size of A (state dimension squared), yielding the estimated matrix X̂_k for the given system. This matrix represents a control output in the form of a transformation of the input matrices.

This framework incorporates physical laws and constraints directly into the learning process, improving the accuracy of the state estimates. The refinement minimizes a composite loss function whose terms are weighted by α = 0.1 and β = 0.05 (a code sketch of the network and loss follows the list below):

  • PDE Loss: Ensures that the neural network solution adheres to the partial differential equations governing the UAV dynamics.

  • Initial Condition Loss: Penalizes deviations from the initial conditions.

  • Boundary Condition Loss: Ensures continuity and smoothness in the state estimates over time.
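Below is a minimal PyTorch sketch of the refinement network and composite loss described above. The layer widths follow the text, while the concrete form of the PDE residual (a discrete algebraic Riccati residual) and the boundary term (a symmetry penalty) are assumptions about how the physics constraints are encoded.

import torch
import torch.nn as nn

class RiccatiPINN(nn.Module):
    """Two hidden layers of 128 units with ReLU; output reshaped to n x n."""
    def __init__(self, n_state, n_ctrl):
        super().__init__()
        # A, B, Q, R flattened: n^2 + n*m + n^2 + m^2 inputs.
        in_dim = 2 * n_state * n_state + n_state * n_ctrl + n_ctrl * n_ctrl
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_state * n_state),
        )
        self.n = n_state

    def forward(self, A, B, Q, R):
        z = torch.cat([A.flatten(), B.flatten(), Q.flatten(), R.flatten()])
        return self.net(z).view(self.n, self.n)

def composite_loss(X_hat, A, B, Q, R, X_init, alpha=0.1, beta=0.05):
    """PDE (DARE residual) + alpha * initial-condition + beta * boundary terms."""
    inner = torch.linalg.solve(R + B.T @ X_hat @ B, B.T @ X_hat @ A)
    dare_residual = A.T @ X_hat @ A - X_hat - A.T @ X_hat @ B @ inner + Q
    pde_loss = dare_residual.pow(2).mean()
    ic_loss = (X_hat - X_init).pow(2).mean()      # stay close to the DARE seed
    bc_loss = (X_hat - X_hat.T).pow(2).mean()     # enforce symmetry of the solution
    return pde_loss + alpha * ic_loss + beta * bc_loss

# Usage (with matrices identified by DMD and the DARE seed X_init = X̂_1):
# model = RiccatiPINN(n_state=9, n_ctrl=4)
# X_hat = model(A, B, Q, R)
# loss = composite_loss(X_hat, A, B, Q, R, X_init)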


Control Law and Position Estimate: The refined estimate X̂_k is used to update the control law.
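Assuming a standard discrete-time LQR realization of this step (the paper's exact formulation may differ), the feedback gain and control input follow directly from the refined Riccati solution:

K_k = (R + Bᵀ X̂_k B)⁻¹ Bᵀ X̂_k A
u_k = −K_k x_k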

Output: The final position (x, y, z) and Euler angle (ϕ, θ, ψ) estimates.


The training process for the PINN focuses on minimizing the composite loss function by adjusting the neural network parameters. By incorporating physical constraints and utilizing the dynamic modes extracted through DMD, the PINN framework ensures robust and accurate UAV state estimation, even in the presence of noise and environmental disturbances. For training, we employed the IMCIS and Package Delivery UAV datasets. The network was trained for up to 5000 iterations, with early stopping triggered by an error tolerance of 10⁻⁴. The Adam optimizer was used, and the training duration ranged from 30 to 50 minutes. The network architecture consisted of three fully connected (Linear) layers and two ReLU activations. All computations were performed on an NVIDIA GeForce RTX 4060 GPU. For model testing, we developed a Unity environment that included a physics-based simulation, accounting for factors such as gravity and perturbation noise. Integration between the control model and the simulation environment was achieved via the UDP communication protocol.
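The Unity integration boils down to exchanging state and command packets over UDP. A minimal controller-side sketch is shown below; the port numbers, JSON packet layout, and placeholder control_step function are hypothetical, included only to illustrate the pattern.

import json
import socket

UNITY_ADDR = ("127.0.0.1", 5005)   # hypothetical port used by the Unity simulation
LISTEN_ADDR = ("127.0.0.1", 5006)  # hypothetical port for state feedback

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
sock.settimeout(0.1)

def control_step(state):
    """Placeholder for the DMD-PINN controller: maps a state dict to rotor commands."""
    return [0.0, 0.0, 0.0, 0.0]

while True:
    try:
        packet, _ = sock.recvfrom(1024)            # state broadcast by Unity
    except socket.timeout:
        continue
    state = json.loads(packet.decode("utf-8"))     # e.g. {"pos": [...], "euler": [...]}
    command = control_step(state)
    sock.sendto(json.dumps({"u": command}).encode("utf-8"), UNITY_ADDR)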


Discussion of Simulation Results


Fig. 2. Reference drone trajectory used for evaluating the performance of the DMD and PINN control methods.


The integration of DMD with PINNs has led to significant improvements in UAV trajectory control, as evidenced by both quantitative and qualitative metrics. The DMD-PINN model achieved considerably lower RMSE and MAE compared to other models, including CNN, MLR, and the DMD-only model (refer to Table I). These results highlight the superior accuracy and reliability of the DMD-PINN approach for controlling UAV trajectories. Additionally, the DMD-PINN model closely matched the ground truth trajectories and demonstrated strong performance across various test scenarios, showcasing its robustness in handling noise and maintaining precision in dynamic environments. The combination of DMD and PINNs offers a powerful method for enhancing the fidelity of UAV control systems under diverse operational conditions.


C. A. Osorio Quero and J. Martinez-Carranza, "Physics-Informed Machine Learning for UAV Control," 2024 21st International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 2024, pp. 1-6, doi: 10.1109/CCE62852.2024.10770871.


BibTeX

@INPROCEEDINGS{10770871,
author = {Osorio Quero, Carlos Alexander and Martinez-Carranza, Jose},
booktitle = {2024 21st International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE)},
title = {Physics-Informed Machine Learning for UAV Control},
year = {2024},
pages = {1-6},
doi = {10.1109/CCE62852.2024.10770871}
}


