Copy-and-Paste Network

Module containing the implementation of the Copy-and-Paste Network (CPN). This implementation has been slightly modified to fit the requirements of this thesis. The original version can be found at:

https://github.com/shleecs/Copy-and-Paste-Networks-for-Deep-Video-Inpainting

class master_thesis.model_cpn.CPN

Bases: torch.nn.modules.module.Module

Implementation of the Copy-and-Paste Network (CPN).

forward(x_target, m_target, x_refs, m_refs)

Forward pass through the Copy-and-Paste Network (CPN).

Parameters
  • x_target – target frame to be inpainted.

  • m_target – mask of the missing region in the target frame.

  • x_refs – reference frames from which content is copied into the target frame.

  • m_refs – masks of the missing regions in the reference frames.

Returns:

align(x_target, m_target, x_refs, m_refs)

Aligns each reference frame with the target frame.

copy_and_paste(x_target, v_target, x_aligned, v_aligned)

Copies visible content from the aligned reference frames x_aligned into the missing region of the target frame x_target, using the visibility maps v_target and v_aligned.

inpaint(x, m)

Inpaints the regions of x that are still missing according to m after the copy-and-paste step.

static get_indexes(t, n_frames, p=2, r_list_max_length=120)

Returns the indexes of the reference frames used to restore the target frame t.

static init_He(module)

Applies He (Kaiming) initialization to the layers of module.

static init_model_with_state(checkpoint_path, device='cpu')

Creates a CPN instance and loads the weights stored at checkpoint_path onto device.
training: bool
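A minimal usage sketch is shown below. The tensor shapes and the way the model is instantiated are assumptions made for illustration and are not guaranteed to match the modified implementation used in this thesis.

    import torch
    from master_thesis.model_cpn import CPN

    # Instantiate with random weights; alternatively, a trained model could be
    # loaded with CPN.init_model_with_state('<path/to/checkpoint>', device='cpu').
    model = CPN().eval()

    # Assumed shapes: one target frame of size 240x424 with 4 reference frames.
    x_target = torch.rand(1, 3, 240, 424)                   # target frame
    m_target = (torch.rand(1, 1, 240, 424) > 0.9).float()   # target mask
    x_refs = torch.rand(1, 3, 4, 240, 424)                  # reference frames
    m_refs = (torch.rand(1, 1, 4, 240, 424) > 0.9).float()  # reference masks

    with torch.no_grad():
        output = model(x_target, m_target, x_refs, m_refs)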
class master_thesis.model_cpn.A_Encoder

Bases: torch.nn.modules.module.Module

forward(in_f, in_v)

Encodes the input frames in_f together with their visibility maps in_v into the feature maps used to estimate the alignment between frames.

training: bool
class master_thesis.model_cpn.A_Regressor

Bases: torch.nn.modules.module.Module

forward(feat1, feat2)

Estimates the parameters of the affine transformation that aligns the two frames represented by the feature maps feat1 and feat2.

training: bool
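For orientation, the snippet below shows how an affine matrix of the kind A_Regressor is expected to produce would typically be applied with standard PyTorch operations; the (B, 2, 3) shape of theta is an assumption based on the original CPN implementation.

    import torch
    import torch.nn.functional as F

    # Hypothetical theta as produced by A_Regressor (here the identity transform).
    theta = torch.zeros(1, 2, 3)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0

    x_ref = torch.rand(1, 3, 240, 424)  # reference frame to align with the target
    grid = F.affine_grid(theta, x_ref.size(), align_corners=False)
    x_aligned = F.grid_sample(x_ref, grid, align_corners=False)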
class master_thesis.model_cpn.Encoder

Bases: torch.nn.modules.module.Module

forward(in_f, in_v)

Encodes a frame in_f and its visibility map in_v into the feature maps used by the copy-and-paste step.

training: bool
class master_thesis.model_cpn.CM_Module

Bases: torch.nn.modules.module.Module

forward(c_feats, v_t, v_aligned)

Context matching step: aggregates the feature maps in c_feats by weighting the aligned reference features according to their visibility maps v_aligned with respect to the target visibility map v_t.

static masked_softmax(vec, mask, dim)

Softmax along dim that takes into account only the positions of vec where mask is non-zero.
training: bool
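The sketch below shows a typical masked softmax of the kind this helper presumably implements, written as a stand-alone function for illustration rather than as the thesis code.

    import torch

    def masked_softmax(vec, mask, dim):
        # Exponentiate only the visible entries (mask == 1) and renormalise so
        # that the weights of the visible entries along `dim` sum to one.
        masked_vec = vec * mask
        max_vec = torch.max(masked_vec, dim=dim, keepdim=True)[0]
        exps = torch.exp(masked_vec - max_vec)
        masked_exps = exps * mask
        masked_sums = masked_exps.sum(dim, keepdim=True) + 1e-7
        return masked_exps / masked_sums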
class master_thesis.model_cpn.Decoder

Bases: torch.nn.modules.module.Module

forward(x)

Decodes the combined feature maps x into the inpainted output frame.

training: bool
class master_thesis.model_cpn.Conv2d(in_ch, out_ch, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), D=(1, 1), activation=None)

Bases: torch.nn.modules.module.Module

forward(x)

Applies the convolution to x, followed by the optional activation.

training: bool
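The Conv2d class above looks like a thin convenience wrapper. A hypothetical re-implementation is sketched below, assuming that D denotes the dilation and that activation is an optional callable such as torch.nn.ReLU(); the actual thesis code may differ in its details.

    import torch.nn as nn

    class Conv2dBlock(nn.Module):
        """Hypothetical sketch of a convolution wrapper with optional activation."""

        def __init__(self, in_ch, out_ch, kernel_size=(3, 3), stride=(1, 1),
                     padding=(1, 1), D=(1, 1), activation=None):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=kernel_size,
                                  stride=stride, padding=padding, dilation=D)
            self.activation = activation

        def forward(self, x):
            x = self.conv(x)
            if self.activation is not None:
                x = self.activation(x)
            return x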