loss package¶
Submodules¶
loss.arcface module¶
- class loss.arcface.ArcFace(in_features, out_features, s=30.0, m=0.5, bias=False)¶
Bases: torch.nn.modules.module.Module
- forward(input, label)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reset_parameters()¶
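A minimal usage sketch, assuming the usual ArcFace convention that forward(input, label) returns scaled, margin-adjusted logits to be fed into cross-entropy; the dimensions and class count below are illustrative, not prescribed by this page:

```python
import torch
import torch.nn.functional as F
from loss.arcface import ArcFace

# Hypothetical setup: 512-dim embeddings, 1000 identity classes.
arcface = ArcFace(in_features=512, out_features=1000, s=30.0, m=0.5)

features = torch.randn(16, 512)           # backbone embeddings
labels = torch.randint(0, 1000, (16,))    # ground-truth class ids

# Assumption: forward(input, label) returns margin-adjusted logits.
logits = arcface(features, labels)
loss = F.cross_entropy(logits, labels)
```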
loss.center_loss module¶
- class loss.center_loss.CenterLoss(num_classes=751, feat_dim=2048, use_gpu=True)¶
Bases: torch.nn.modules.module.Module
Center loss.
Reference: Wen et al. A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV 2016.
- Args:
  num_classes (int): number of classes.
  feat_dim (int): feature dimension.
- forward(x, labels)¶
- Args:
  x: feature matrix with shape (batch_size, feat_dim).
  labels: ground truth labels with shape (batch_size,).
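A usage sketch with the documented default shapes; use_gpu=False and the optimizer line are illustrative assumptions (the class keeps learnable class centers, which is why it typically participates in optimization alongside the model):

```python
import torch
from loss.center_loss import CenterLoss

center_loss = CenterLoss(num_classes=751, feat_dim=2048, use_gpu=False)

feats = torch.randn(32, 2048)             # (batch_size, feat_dim)
labels = torch.randint(0, 751, (32,))     # (batch_size,)

loss = center_loss(feats, labels)

# The centers are learnable parameters, so they are usually
# registered with their own (or a shared) optimizer.
optimizer = torch.optim.SGD(center_loss.parameters(), lr=0.5)
```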
loss.softmax_loss module¶
- class loss.softmax_loss.CrossEntropyLabelSmooth(num_classes, epsilon=0.1, use_gpu=True)¶
Bases: torch.nn.modules.module.Module
Cross entropy loss with label smoothing regularizer.
Reference: Szegedy et al. Rethinking the Inception Architecture for Computer Vision. CVPR 2016.
Equation: y = (1 - epsilon) * y + epsilon / K, where K is the number of classes.
- Args:
  num_classes (int): number of classes.
  epsilon (float): smoothing weight.
- forward(inputs, targets)¶
- Args:
  inputs: prediction matrix (before softmax) with shape (batch_size, num_classes).
  targets: ground truth labels with shape (batch_size,).
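A usage sketch under the documented shapes, followed by a hand computation of the documented equation (the class's exact reduction may differ; everything outside the criterion call is illustrative):

```python
import torch
from loss.softmax_loss import CrossEntropyLabelSmooth

criterion = CrossEntropyLabelSmooth(num_classes=751, epsilon=0.1, use_gpu=False)

logits = torch.randn(32, 751)             # (batch_size, num_classes), pre-softmax
targets = torch.randint(0, 751, (32,))    # (batch_size,)
loss = criterion(logits, targets)

# The same smoothing by hand: y = (1 - epsilon) * y + epsilon / K with K = 751.
log_probs = torch.log_softmax(logits, dim=1)
one_hot = torch.zeros_like(log_probs).scatter_(1, targets.unsqueeze(1), 1.0)
smoothed = (1 - 0.1) * one_hot + 0.1 / 751
manual_loss = (-smoothed * log_probs).sum(dim=1).mean()
```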
- class loss.softmax_loss.CrossEntropyLabelSmooth2d(num_classes, epsilon=0.1, use_gpu=True)¶
Bases: torch.nn.modules.module.Module
Cross entropy loss with label smoothing regularizer.
Reference: Szegedy et al. Rethinking the Inception Architecture for Computer Vision. CVPR 2016.
Equation: y = (1 - epsilon) * y + epsilon / K, where K is the number of classes.
- Args:
  num_classes (int): number of classes.
  epsilon (float): smoothing weight.
- forward(inputs, targets)¶
- Args:
  inputs: prediction matrix (before softmax) with shape (batch_size, num_classes, w, h).
  targets: ground truth labels with shape (batch_size,).
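For the 2d variant only the input rank differs; a short sketch, assuming the spatial dimensions are reduced internally:

```python
import torch
from loss.softmax_loss import CrossEntropyLabelSmooth2d

criterion = CrossEntropyLabelSmooth2d(num_classes=751, use_gpu=False)

logits = torch.randn(32, 751, 7, 7)       # (batch_size, num_classes, w, h)
targets = torch.randint(0, 751, (32,))    # one label per image

loss = criterion(logits, targets)
```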
- class loss.softmax_loss.LabelSmoothSoftmaxCE(lb_pos=0.9, lb_neg=0.005, reduction='mean', lb_ignore=255)¶
Bases: torch.nn.modules.module.Module
- forward(logits, label)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
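A usage sketch; the lb_ignore=255 default suggests targets may contain an ignore label (common in segmentation), and the lb_pos/lb_neg pair reads as the smoothed probabilities assigned to the true class and the remaining classes. Both readings are assumptions; only the forward(logits, label) signature is documented here.

```python
import torch
from loss.softmax_loss import LabelSmoothSoftmaxCE

criterion = LabelSmoothSoftmaxCE(lb_pos=0.9, lb_neg=0.005, reduction='mean')

logits = torch.randn(8, 19)             # pre-softmax class scores
labels = torch.randint(0, 19, (8,))     # entries equal to lb_ignore (255) would be skipped

loss = criterion(logits, labels)
```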
loss.triplet_loss module¶
- class loss.triplet_loss.TripletLoss(margin=None, hard_factor=0.0)¶
Bases: object
Triplet loss using harder example mining; modified from the original triplet loss with hard example mining.
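The call convention is not documented on this page; a hedged sketch, assuming the instance is applied directly to a batch of embeddings and identity labels (some variants of this class also return the mined dist_ap/dist_an):

```python
import torch
from loss.triplet_loss import TripletLoss

triplet = TripletLoss(margin=0.3)       # margin=None commonly switches to a soft-margin formulation

feats = torch.randn(64, 2048)           # one embedding per image
labels = torch.randint(0, 16, (64,))    # identity labels; each id should occur several times

loss = triplet(feats, labels)           # assumed __call__(features, labels)
```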
- loss.triplet_loss.euclidean_dist(x, y)¶
- Args:
  x: pytorch Variable, with shape [m, d]
  y: pytorch Variable, with shape [n, d]
- Returns:
dist: pytorch Variable, with shape [m, n]
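A minimal sketch consistent with these shapes, using the standard squared-norm expansion ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 x_i . y_j; the package's actual implementation may differ in details such as clamping:

```python
import torch

def euclidean_dist(x, y):
    # Illustrative implementation of the documented signature.
    # x: (m, d), y: (n, d) -> dist: (m, n)
    xx = x.pow(2).sum(dim=1, keepdim=True)          # (m, 1)
    yy = y.pow(2).sum(dim=1, keepdim=True).t()      # (1, n)
    dist = xx + yy - 2 * x @ y.t()                  # broadcast to (m, n)
    return dist.clamp(min=1e-12).sqrt()             # clamp keeps sqrt numerically safe
```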
- loss.triplet_loss.hard_example_mining(dist_mat, labels, return_inds=False)¶
For each anchor, find the hardest positive and negative sample.
- Args:
  dist_mat: pytorch Variable, pairwise distances between samples, shape [N, N]
  labels: pytorch LongTensor, with shape [N]
  return_inds: whether to also return the indices of the mined samples; keeping it False saves time
- Returns:
  dist_ap: pytorch Variable, distance(anchor, positive); shape [N]
  dist_an: pytorch Variable, distance(anchor, negative); shape [N]
  p_inds: pytorch LongTensor, with shape [N]; indices of selected hard positive samples; 0 <= p_inds[i] <= N - 1
  n_inds: pytorch LongTensor, with shape [N]; indices of selected hard negative samples; 0 <= n_inds[i] <= N - 1
- NOTE: Only the case in which all labels have the same number of samples is considered, so all anchors can be processed in parallel.
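A sketch of the mining step this docstring describes (indices omitted; relies on the equal-samples-per-label condition from the NOTE so the boolean masks reshape cleanly, and is illustrative rather than the packaged implementation):

```python
import torch

def hard_example_mining(dist_mat, labels):
    # Illustrative re-implementation; the packaged version also supports return_inds.
    # dist_mat: (N, N) pairwise distances; labels: (N,) LongTensor.
    N = dist_mat.size(0)
    is_pos = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-identity mask
    is_neg = ~is_pos
    # Hardest positive: the farthest same-label sample for each anchor.
    dist_ap = dist_mat[is_pos].view(N, -1).max(dim=1)[0]
    # Hardest negative: the closest different-label sample for each anchor.
    dist_an = dist_mat[is_neg].view(N, -1).min(dim=1)[0]
    return dist_ap, dist_an
```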
- loss.triplet_loss.normalize(x, axis=-1)¶
Normalize to unit length along the specified dimension.
- Args:
  x: pytorch Variable
- Returns:
x: pytorch Variable, same shape as input
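A one-line sketch of what this signature implies, with a small epsilon assumed to avoid division by zero:

```python
import torch

def normalize(x, axis=-1):
    # Illustrative version: scale each vector along `axis` to unit L2 norm.
    return x / (x.norm(p=2, dim=axis, keepdim=True) + 1e-12)
```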