
Training and Evaluation metrics¤

CustomMSELoss ¤

Bases: Module

Custom MSE loss for PDEs.

MSE but summed over time and fields, then averaged over space and batch.

Parameters:

Name        Type   Description                              Default
reduction   str    Reduction method. Defaults to "mean".    'mean'
Source code in pdearena/modules/loss.py
class CustomMSELoss(torch.nn.Module):
    """Custom MSE loss for PDEs.

    MSE but summed over time and fields, then averaged over space and batch.

    Args:
        reduction (str, optional): Reduction method. Defaults to "mean".
    """

    def __init__(self, reduction: str = "mean") -> None:
        super().__init__()
        self.reduction = reduction

    def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return custommse_loss(input, target, reduction=self.reduction)
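
A minimal usage sketch (not taken from this page): the 5-D input layout (batch, time, fields, height, width) is an assumption inferred from the reduction described above.

import torch

from pdearena.modules.loss import CustomMSELoss

criterion = CustomMSELoss(reduction="mean")
# Hypothetical shapes: 4 trajectories, 2 time steps, 3 fields on a 32x32 grid.
pred = torch.randn(4, 2, 3, 32, 32)
target = torch.randn(4, 2, 3, 32, 32)
loss = criterion(pred, target)  # scalar: summed over time and fields, averaged over space and batch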

PearsonCorrelationScore ¤

Bases: Module

Pearson Correlation Score for PDEs.

Source code in pdearena/modules/loss.py
class PearsonCorrelationScore(torch.nn.Module):
    """Pearson Correlation Score for PDEs."""

    def __init__(self, channel: int = None, reduce_batch: bool = False) -> None:
        super().__init__()
        self.channel = channel
        self.reduce_batch = reduce_batch

    def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        if self.channel is not None:
            input = input[:, :, self.channel]
            target = target[:, :, self.channel]
        return pearson_correlation(input, target, reduce_batch=self.reduce_batch)
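
A minimal usage sketch; the (batch, time, fields, ...) layout is assumed from the `input[:, :, self.channel]` indexing above, and the shapes are hypothetical.

import torch

from pdearena.modules.loss import PearsonCorrelationScore

metric = PearsonCorrelationScore(channel=0, reduce_batch=True)
pred = torch.randn(4, 2, 3, 32, 32)
target = torch.randn(4, 2, 3, 32, 32)
score = metric(pred, target)  # correlation for field channel 0, reduced over the batch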

ScaledLpLoss ¤

Bases: Module

Scaled Lp loss for PDEs.

Parameters:

Name        Type   Description                              Default
p           int    p in Lp norm. Defaults to 2.             2
reduction   str    Reduction method. Defaults to "mean".    'mean'
Source code in pdearena/modules/loss.py
class ScaledLpLoss(torch.nn.Module):
    """Scaled Lp loss for PDEs.

    Args:
        p (int, optional): p in Lp norm. Defaults to 2.
        reduction (str, optional): Reduction method. Defaults to "mean".
    """

    def __init__(self, p: int = 2, reduction: str = "mean") -> None:
        super().__init__()
        self.p = p
        self.reduction = reduction

    def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return scaledlp_loss(input, target, p=self.p, reduction=self.reduction)
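
A minimal usage sketch. "Scaled" is read here as the relative Lp error ||input - target||_p / ||target||_p, a common convention for PDE surrogate benchmarks; the exact normalization inside scaledlp_loss is an assumption, since it is not shown on this page.

import torch

from pdearena.modules.loss import ScaledLpLoss

criterion = ScaledLpLoss(p=2, reduction="mean")
pred = torch.randn(4, 2, 3, 32, 32)
target = torch.randn(4, 2, 3, 32, 32)
loss = criterion(pred, target)

# Assumed per-sample relative L2 error for comparison (may differ in detail
# from scaledlp_loss, e.g. in which dimensions are flattened before the norm):
diff = torch.linalg.vector_norm((pred - target).flatten(1), dim=1)
norm = torch.linalg.vector_norm(target.flatten(1), dim=1)
manual = (diff / norm).mean()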