ada.models package

Submodules

ada.models.architectures module

Domain adaptation architectures should be indifferent to the task at hand: digits, toy datasets, recsys, etc. They take modules as input and organise them into an architecture.
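
For example, the digits modules documented below can be organised into a DANN trainer. A minimal sketch, assuming a dataset object whose MultiDomainLoader yields the batches described under training_step below::

    from ada.models.architectures import DANNtrainer, Method
    from ada.models.modules import (
        DataClassifierDigits,
        DomainClassifierDigits,
        FeatureExtractorDigits,
    )

    feature_extractor = FeatureExtractorDigits(num_channels=3)
    task_classifier = DataClassifierDigits(input_size=128, n_class=10)
    critic = DomainClassifierDigits(input_size=128)

    # `dataset` is assumed to be a multi-domain dataset object whose loader
    # yields (x_source, y_source), (x_target, y_target) batches.
    model = DANNtrainer(
        dataset=dataset,
        feature_extractor=feature_extractor,
        task_classifier=task_classifier,
        critic=critic,
        method=Method.DANN,
    )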

class ada.models.architectures.BaseAdaptTrainer(dataset, feature_extractor, task_classifier, method=None, lambda_init=1.0, adapt_lambda=True, adapt_lr=True, nb_init_epochs=10, nb_adapt_epochs=50, batch_size=32, init_lr=0.001, optimizer=None)

    Bases: pytorch_lightning.LightningModule

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.
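
    A minimal sketch of a conforming override; the attribute names for the wrapped modules (feat, classifier) are hypothetical, and the zero adversarial loss is a placeholder::

        import torch
        import torch.nn.functional as F

        class MyAdaptTrainer(BaseAdaptTrainer):
            def compute_loss(self, batch, split_name="V"):
                # Unsupervised setting: one (x, y) tuple per domain.
                (x_s, y_s), (x_t, _) = batch
                # Hypothetical attribute names for the wrapped modules.
                y_hat = self.classifier(self.feat(x_s))
                task_loss = F.cross_entropy(y_hat, y_s)
                adv_loss = torch.tensor(0.0)  # placeholder adaptation term
                log_metrics = {f"{split_name}_task_loss": task_loss}
                return task_loss, adv_loss, log_metrics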

    get_parameters_watch_list()
        Update this list for parameters to watch while training (i.e. log with MLflow).

    method

    training_step(batch, batch_nb)
        The most generic of training steps.

        Parameters:
            - batch (tuple) – the batch as returned by the MultiDomainLoader dataloader iterator: 2 tuples ((x_source, y_source), (x_target, y_target)) in the unsupervised setting; 3 tuples ((x_source, y_source), (x_target_labeled, y_target_labeled), (x_target_unlabeled, y_target_unlabeled)) in the semi-supervised setting.
            - batch_nb (int) – id of the current batch.

        Returns: a dict that must contain a "loss" key with the loss to be used for back-propagation; see pytorch-lightning for more details.
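
    A sketch of the expected return value, assuming a compute_loss like the one above; the "log" key follows the pytorch-lightning versions this package was built against, and the lambda weighting is illustrative::

        def training_step(self, batch, batch_nb):
            task_loss, adv_loss, log_metrics = self.compute_loss(batch, split_name="T")
            # self.lamb is a hypothetical attribute holding the current
            # adaptation trade-off (cf. lambda_init / adapt_lambda).
            loss = task_loss + self.lamb * adv_loss
            return {"loss": loss, "log": log_metrics}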

class ada.models.architectures.BaseDANNLike(dataset, feature_extractor, task_classifier, critic, alpha=1.0, entropy_reg=0.0, adapt_reg=True, batch_reweighting=False, **base_params)

    Bases: ada.models.architectures.BaseAdaptTrainer

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.

class ada.models.architectures.BaseMMDLike(dataset, feature_extractor, task_classifier, kernel_mul=2.0, kernel_num=5, **base_params)

    Bases: ada.models.architectures.BaseAdaptTrainer

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.

class ada.models.architectures.CDANtrainer(dataset, feature_extractor, task_classifier, critic, use_entropy=False, use_random=False, random_dim=1024, **base_params)

    Bases: ada.models.architectures.BaseDANNLike

    Implements CDAN: Long, Mingsheng, et al. "Conditional adversarial domain adaptation." Advances in Neural Information Processing Systems. 2018. https://papers.nips.cc/paper/7436-conditional-adversarial-domain-adaptation.pdf

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.

class ada.models.architectures.DANNtrainer(dataset, feature_extractor, task_classifier, critic, method=None, **base_params)

    Bases: ada.models.architectures.BaseDANNLike

    This class implements the DANN architecture from Ganin, Yaroslav, et al. "Domain-adversarial training of neural networks." The Journal of Machine Learning Research (2016). https://arxiv.org/abs/1505.07818

class ada.models.architectures.DANtrainer(dataset, feature_extractor, task_classifier, **base_params)

    Bases: ada.models.architectures.BaseMMDLike

    This is an implementation of DAN: Long, Mingsheng, et al. "Learning Transferable Features with Deep Adaptation Networks." International Conference on Machine Learning. 2015. http://proceedings.mlr.press/v37/long15.pdf Code based on https://github.com/thuml/Xlearn.

class ada.models.architectures.FewShotDANNtrainer(dataset, feature_extractor, task_classifier, critic, method, **base_params)

    Bases: ada.models.architectures.BaseDANNLike

    Implements adaptations of DANN to the semi-supervised setting:

    - naive: the task classifier is trained on labeled target data in addition to source data.
    - MME: implements Saito, Kuniaki, et al. "Semi-supervised domain adaptation via minimax entropy." Proceedings of the IEEE International Conference on Computer Vision. 2019. https://arxiv.org/pdf/1904.06487.pdf
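
    For reference, MME adversarially tunes the entropy of the classifier's predictions on unlabeled target samples, $H(x) = -\sum_{k} p(y = k \mid x) \log p(y = k \mid x)$: the classifier is trained to maximize $H$ while the feature extractor minimizes it (implemented in the paper with a gradient reversal layer). The notation follows Saito et al., not this package's internals.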

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.

class ada.models.architectures.JANtrainer(dataset, feature_extractor, task_classifier, kernel_mul=(2.0, 2.0), kernel_num=(5, 1), **base_params)

    Bases: ada.models.architectures.BaseMMDLike

    This is an implementation of JAN: Long, Mingsheng, et al. "Deep transfer learning with joint adaptation networks." International Conference on Machine Learning, 2017. https://arxiv.org/pdf/1605.06636.pdf Code based on https://github.com/thuml/Xlearn.

class ada.models.architectures.Method

    Bases: enum.Enum

    Lists the available methods and provides a few helpers that group them by type.

    CDAN = 'CDAN'
    CDAN_E = 'CDAN-E'
    DAN = 'DAN'
    DANN = 'DANN'
    FSDANN = 'FSDANN'
    JAN = 'JAN'
    MME = 'MME'
    Source = 'Source'
    WDGRL = 'WDGRL'
    WDGRLMod = 'WDGRLMod'
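
    Since Method is a plain enum.Enum, members can also be looked up by value, which is convenient when reading the method name from a configuration file::

        from ada.models.architectures import Method

        method = Method("CDAN-E")      # look up by value
        assert method is Method.CDAN_E
        assert method.name == "CDAN_E"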

class ada.models.architectures.WDGRLtrainer(dataset, feature_extractor, task_classifier, critic, k_critic=5, gamma=10, beta_ratio=0, **base_params)

    Bases: ada.models.architectures.BaseDANNLike

    Implements WDGRL as described in Shen, Jian, et al. "Wasserstein distance guided representation learning for domain adaptation." Thirty-Second AAAI Conference on Artificial Intelligence. 2018. https://arxiv.org/pdf/1707.01217.pdf

    This class also implements the asymmetric ($\beta$) variant described in: Wu, Yifan, et al. "Domain adaptation with asymmetrically-relaxed distribution alignment." ICML (2019). https://arxiv.org/pdf/1903.01689.pdf
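
    A plausible reading of the parameters, following the two papers above (a sketch of the critic objective, not a statement of the exact implementation): on every iteration the critic $f$ is trained for k_critic steps to maximize a relaxed Wasserstein estimate between source and target features $h_s$, $h_t$,

    $\mathbb{E}[f(h_s)] - (1 + \beta)\,\mathbb{E}[f(h_t)] - \gamma\,\mathbb{E}[(\lVert \nabla f(\hat{h}) \rVert_2 - 1)^2]$,

    where $\beta$ is beta_ratio ($\beta = 0$ recovers the symmetric WDGRL objective), $\gamma$ is the gradient-penalty weight gamma, and $\hat{h}$ is sampled on segments between source and target features.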

    compute_loss(batch, split_name='V')
        Defines the loss of the model.

        Parameters:
            - batch (tuple) – batches returned by the MultiDomainLoader.
            - split_name (str, optional) – learning stage, one of ["T", "V", "Te"]. Defaults to "V" for validation; "T" is for training and "Te" for test. This is currently only used for naming the metrics used for logging.

        Returns: a 3-element tuple (task_loss, adv_loss, log_metrics), where log_metrics is a dictionary.

        Raises: NotImplementedError – subclasses of this class must implement this method.

    training_step(batch, batch_id)
        The most generic of training steps.

        Parameters:
            - batch (tuple) – the batch as returned by the MultiDomainLoader dataloader iterator: 2 tuples ((x_source, y_source), (x_target, y_target)) in the unsupervised setting; 3 tuples ((x_source, y_source), (x_target_labeled, y_target_labeled), (x_target_unlabeled, y_target_unlabeled)) in the semi-supervised setting.
            - batch_id (int) – id of the current batch.

        Returns: a dict that must contain a "loss" key with the loss to be used for back-propagation; see pytorch-lightning for more details.

class ada.models.architectures.WDGRLtrainerMod(dataset, feature_extractor, task_classifier, critic, k_critic=5, gamma=10, beta_ratio=0, **base_params)

    Bases: ada.models.architectures.WDGRLtrainer

    Implements a modified version of WDGRL as described in Shen, Jian, et al. "Wasserstein distance guided representation learning for domain adaptation." Thirty-Second AAAI Conference on Artificial Intelligence. 2018. https://arxiv.org/pdf/1707.01217.pdf

    This class also implements the asymmetric ($\beta$) variant described in: Wu, Yifan, et al. "Domain adaptation with asymmetrically-relaxed distribution alignment." ICML (2019). https://arxiv.org/pdf/1903.01689.pdf

    training_step(batch, batch_id, optimizer_idx)
        The most generic of training steps.

        Parameters:
            - batch (tuple) – the batch as returned by the MultiDomainLoader dataloader iterator: 2 tuples ((x_source, y_source), (x_target, y_target)) in the unsupervised setting; 3 tuples ((x_source, y_source), (x_target_labeled, y_target_labeled), (x_target_unlabeled, y_target_unlabeled)) in the semi-supervised setting.
            - batch_id (int) – id of the current batch.
            - optimizer_idx (int) – index of the optimizer being used for this step (see pytorch-lightning's support for multiple optimizers).

        Returns: a dict that must contain a "loss" key with the loss to be used for back-propagation; see pytorch-lightning for more details.

ada.models.architectures.create_dann_like(method: ada.models.architectures.Method, dataset, feature_extractor, task_classifier, critic, **train_params)

ada.models.architectures.create_fewshot_trainer(method: ada.models.architectures.Method, dataset, feature_extractor, task_classifier, critic, **train_params)
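
Both factories build the trainer corresponding to the given Method from the supplied modules. A minimal usage sketch, reusing the dataset and modules assumed in the example at the top of this page::

    from ada.models.architectures import Method, create_dann_like

    model = create_dann_like(
        method=Method.CDAN,
        dataset=dataset,
        feature_extractor=feature_extractor,
        task_classifier=task_classifier,
        critic=critic,
    )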

ada.models.layers module

class ada.models.layers.ReverseLayerF

    Bases: torch.autograd.function.Function

    static backward(ctx, grad_output)
        Defines a formula for differentiating the operation.

        This function is to be overridden by all subclasses.

        It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.

        The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

    static forward(ctx, x, alpha)
        Performs the operation.

        This function is to be overridden by all subclasses.

        It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

        The context can be used to store tensors that can then be retrieved during the backward pass.
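
    In the standard gradient reversal layer of Ganin et al., which this class's name and signature suggest, forward is the identity on x and backward multiplies incoming gradients by -alpha. Assuming that convention, a usage sketch::

        import torch
        from ada.models.layers import ReverseLayerF

        features = torch.randn(8, 128, requires_grad=True)

        # Custom autograd Functions are invoked through .apply().
        reversed_features = ReverseLayerF.apply(features, 1.0)
        reversed_features.sum().backward()

        # Under the standard GRL convention, every gradient entry is -1.0.
        print(features.grad.unique())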

ada.models.losses module

ada.models.modules module

class ada.models.modules.AlexNetFeature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.DataClassifierDigits(input_size=128, n_class=10)

    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.DomainClassifierDigits(input_size=128, bigger_discrim=False)

    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.FFSoftmaxClassifier(input_dim=15, n_classes=2, name='c', hidden=(), activation_fn=<class 'torch.nn.modules.activation.ReLU'>, **activation_args)

    Bases: torch.nn.modules.module.Module

    extra_repr()
        Sets the extra representation of the module.

        To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

    forward(input_data)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.FeatureExtractFF(input_dim, hidden_sizes=(15, ), activation_fn=<class 'torch.nn.modules.activation.ReLU'>, **activation_args)

    Bases: torch.nn.modules.module.Module

    extra_repr()
        Sets the extra representation of the module.

        To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

    forward(input_data)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
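
FeatureExtractFF and FFSoftmaxClassifier compose naturally on toy tabular data; the last entry of hidden_sizes must match the classifier's input_dim. A small sketch with the default sizes (the classifier output is assumed to be class probabilities, per its name)::

    import torch
    from ada.models.modules import FeatureExtractFF, FFSoftmaxClassifier

    feature_extractor = FeatureExtractFF(input_dim=10, hidden_sizes=(15,))
    classifier = FFSoftmaxClassifier(input_dim=15, n_classes=2)

    x = torch.randn(32, 10)        # batch of 32 ten-dimensional toy samples
    feats = feature_extractor(x)   # expected shape: (32, 15)
    out = classifier(feats)        # expected shape: (32, 2)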

class ada.models.modules.FeatureExtractorDigits(num_channels=3, kernel_size=5)

    Bases: torch.nn.modules.module.Module

    Feature extractor for MNIST-like data.

    forward(input)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
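
The digits modules are sized to fit together: DataClassifierDigits and DomainClassifierDigits both default to input_size=128, which is assumed here to match the output dimension of FeatureExtractorDigits. A wiring sketch (the 32x32 input resolution is illustrative)::

    import torch
    from ada.models.modules import (
        DataClassifierDigits,
        DomainClassifierDigits,
        FeatureExtractorDigits,
    )

    feature_extractor = FeatureExtractorDigits(num_channels=3)
    task_classifier = DataClassifierDigits(input_size=128, n_class=10)
    critic = DomainClassifierDigits(input_size=128)

    x = torch.randn(16, 3, 32, 32)   # illustrative MNIST-like batch
    feats = feature_extractor(x)     # assumed shape: (16, 128)
    class_scores = task_classifier(feats)
    domain_scores = critic(feats)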

class ada.models.modules.ModuleType

    Bases: enum.Enum

    An enumeration of module roles: feature extractor, task classifier, and domain critic.

    Classifier = 'classifier'
    Critic = 'critic'
    Feature = 'feature'

class ada.models.modules.ResNet101Feature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.ResNet152Feature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.ResNet18Feature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.ResNet34Feature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

class ada.models.modules.ResNet50Feature

    Bases: torch.nn.modules.module.Module

    PyTorch convnet model without the last layer, adapted from https://github.com/thuml/Xlearn/blob/master/pytorch/src/network.py

    forward(x)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.