
Physics-Informed Neural Network for Denoising Images Using Nonlinear PDE

  • Carlos Osorio
  • Sep 23
  • 2 min read

Why PINNs for image denoising?

Classical denoisers—such as median filters, BM3D, and standard CNNs—do a decent job of smoothing noise but often blur edges or hallucinate textures. Physics-Informed Neural Networks (PINNs) add missing structure: they embed physical priors (from PDEs you’d actually use to describe diffusion or speckle statistics) directly into the training objective. The result is a model that not only learns patterns from data but also obeys a governing equation that favors realistic, edge-aware solutions.

Qualitative results of PDE-guided denoising. Left to right: Original, Noisy, and Denoised (PDE) images for three scenes (urban block, complex facility, aircraft on runway). The PDE-informed model suppresses speckle and additive noise while preserving edges and fine structures, recovering contrast and texture.

The core idea, in one picture

Think of denoising as finding a clean image u(x, y) that:

  1. matches the observed image y after accounting for noise, and

  2. satisfies a PDE prior that encodes how intensities should diffuse, sharpen, or stabilize.

Instead of only minimizing a pixel loss ‖u − y‖², we add a PDE residual loss:

  L(u) = λ_pix ‖u − y‖² + λ_PDE ‖N[u]‖²

where N[u] denotes the governing PDE operator applied to the network's output.

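As a minimal sketch of this objective, assuming a heat-equation prior N[u] = Δu and a finite-difference Laplacian (the helper names are illustrative, not from the original pipeline):

```python
import numpy as np

def laplacian(u):
    """5-point finite-difference Laplacian with replicate ("edge") padding."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1]
            + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)

def pde_informed_loss(u, y, lam_pde=0.1):
    """Data fidelity plus PDE residual, with the heat operator Δu as the prior."""
    data_term = np.mean((u - y) ** 2)      # ‖u − y‖²
    pde_term = np.mean(laplacian(u) ** 2)  # ‖N[u]‖² with N[u] = Δu
    return data_term + lam_pde * pde_term
```

In a real PINN the same residual is evaluated on the network output u = f_θ(y) and backpropagated through the finite-difference stencil.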
Which PDEs do we use?


Heat equation (linear diffusion)

A baseline smoother:

  ∂u/∂t = Δu

Great for reducing Gaussian noise, but risks oversmoothing edges.
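Both effects are easy to see by iterating an explicit Euler step of u_t = Δu. This sketch assumes a unit grid spacing and replicate boundaries; dt ≤ 0.25 keeps the explicit scheme stable:

```python
import numpy as np

def heat_step(u, dt=0.2):
    """One explicit Euler step of u_t = Δu (stable for dt <= 0.25 on a unit grid)."""
    up = np.pad(u, 1, mode="edge")
    lap = (up[:-2, 1:-1] + up[2:, 1:-1]
           + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)
    return u + dt * lap

# Iterating the step averages out Gaussian noise -- and, over many steps,
# blurs edges as well, which is exactly the limitation noted above.
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.1, size=(32, 32))
out = img
for _ in range(10):
    out = heat_step(out)
```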


MPMC (multi-phase / multi-component diffusion)


Useful when images contain distinct regions/phases (e.g., tissue types, materials). It regularizes piecewise-smooth areas and interfaces with tailored coupling terms.


Zhichang Guo (ZG) method for speckle

Speckle is multiplicative (common in SAR/ultrasound). The ZG family uses log-domain transforms and adaptive diffusion/regularization to suppress speckle while respecting radiometric statistics. In practice, we enforce a residual that stabilizes log-intensity variance and edge ratios consistent with speckle models.
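The log-domain idea can be illustrated in a few lines. This shows only the transform, not the ZG diffusion scheme itself, and the gamma-distributed speckle below is a common but assumed noise model:

```python
import numpy as np

rng = np.random.default_rng(42)
u = np.full((64, 64), 2.0)  # clean intensity image (constant for illustration)

# Multiplicative, mean-1 speckle (gamma with shape L=4 models L-look SAR noise).
speckle = rng.gamma(shape=4.0, scale=0.25, size=u.shape)
y = u * speckle             # speckled observation: y = u * n

# After a log transform the noise becomes additive:
# log(y) = log(u) + log(n), so additive-noise PDE priors and losses apply.
residual = np.log(y) - np.log(u)   # equals log(speckle) exactly
```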


Network backbones

We integrate the PDE losses into four families:

  • UNet: strong skip connections, excellent for low-level restoration.

  • ResUNet: residual blocks ease optimization for deeper models.

  • U²-Net: “U-in-U” modules capture multi-scale structure with fewer parameters.

  • Res2UNet: Res2 blocks split channels into granular groups, improving multi-scale feature interactions without big parameter growth.


Loss design (data + physics)

A typical training objective:

  L = λ_pix · L_pix + λ_SSIM · L_SSIM + λ_PDE · L_PDE + λ_TV · L_TV

  • Pixel/SSIM terms encourage fidelity.

  • PDE residual promotes physics-consistent solutions.

  • TV (optional) adds mild piecewise-smoothness.
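A sketch of how these terms combine, with SSIM omitted for brevity; the weights are illustrative defaults, not tuned values from this work:

```python
import numpy as np

def tv(u):
    """Anisotropic total variation: sum of absolute forward differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def total_loss(u, y, pde_residual, lam_pix=1.0, lam_pde=0.1, lam_tv=0.01):
    """Weighted sum of pixel-fidelity, PDE-residual, and optional TV terms."""
    pix = np.mean((u - y) ** 2)
    pde = np.mean(pde_residual ** 2)
    return lam_pix * pix + lam_pde * pde + lam_tv * tv(u)
```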


Training details that matter

  • Gradient stability: compute the spatial derivatives (∇u, Δu) with fixed convolutional kernels to keep them GPU-friendly and stable.


  • Boundary handling: reflective padding better matches physical imaging boundaries.

  • Annealing: start with a higher λ_pix, then softly increase λ_PDE as the network learns to reconstruct structure.

  • Mixed precision: fine, but keep PDE ops in FP32 to avoid gradient underflow.
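On the gradient-stability point: the Laplacian can be expressed as a fixed 3×3 convolution, which is what makes it cheap and differentiable on GPU. This framework-agnostic NumPy sketch shows the stencil; in a framework like PyTorch it would typically sit in a non-trainable conv layer:

```python
import numpy as np

# 3x3 Laplacian stencil; symmetric, so convolution equals cross-correlation.
KERNEL = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])

def conv_laplacian(u):
    """Apply the Laplacian kernel with replicate ("edge") padding."""
    up = np.pad(u, 1, mode="edge")
    out = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            out += KERNEL[i, j] * up[i:i + u.shape[0], j:j + u.shape[1]]
    return out
```

On u(x, y) = x² the interior values come out to exactly 2, matching the analytic Δu.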



Limitations & future directions

  • PDE choice matters: mismatched physics can bias results; validate per modality.

  • Parameter sensitivity: κ, the diffusion weights, and λ_PDE need tuning.

  • Extensions: learned conductivity fields, plug-and-play priors, stochastic PDEs, and multi-task training (denoise + segmentation) are promising.

Takeaway

By marrying nonlinear PDE priors with deep networks, PINNs deliver denoisers that are both data-driven and physically grounded. In practice, that means higher PSNR/SSIM, better ENL/CNR for speckle, and—most importantly—clean images with sharp, trustworthy structure. If you’re already running a UNet-style pipeline, adding a PDE residual is a low-friction upgrade. For speckle-dominated imagery, fold in a ZG-style constraint and train in the log domain. You’ll get cleaner results without sacrificing the boundaries you care about.
