Sparsity Scheduler
Contains classes that schedule when the sparsity mask should be applied during training.
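As a quick orientation, a scheduler is constructed once and handed to the training loop. A minimal sketch, assuming the `train` function from `deepymod.training` takes the scheduler as its `sparsity_scheduler` argument (check the signature in your DeepMoD version):

```python
from deepymod.training.sparsity_scheduler import Periodic, TrainTest, TrainTestPeriodic

# Apply the sparsity mask every 50 iterations after a 1000-iteration warm-up:
scheduler = Periodic(periodicity=50, initial_iteration=1000)

# Alternatively, trigger on a stalled validation loss:
# scheduler = TrainTest(patience=200, delta=1e-5)
# ...or combine both behaviors:
# scheduler = TrainTestPeriodic(periodicity=50, patience=200, delta=1e-5)

# train(model, X_train, y_train, optimizer, scheduler, ...)  # hypothetical call
```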
Periodic
Periodically applies the sparsity mask every `periodicity` iterations after `initial_iteration` iterations have passed.
`__init__(self, periodicity=50, initial_iteration=1000)`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `periodicity` | int | after `initial_iteration` iterations, apply the sparsity mask every `periodicity` iterations | 50 |
| `initial_iteration` | int | number of iterations to wait before applying the sparsity mask for the first time | 1000 |
Source code in deepymod/training/sparsity_scheduler.py
```python
def __init__(self, periodicity=50, initial_iteration=1000):
    """Periodically applies the sparsity mask every periodicity iterations
    after initial_iteration iterations have passed.
    Args:
        periodicity (int): after initial_iteration iterations, apply the sparsity mask every periodicity iterations
        initial_iteration (int): number of iterations to wait before applying the sparsity mask for the first time
    """
    self.periodicity = periodicity
    self.initial_iteration = initial_iteration
```
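For intuition, the schedule reduces to a simple modular check. A hypothetical helper (not part of the library) that mirrors the rule:

```python
def should_apply_mask(iteration, initial_iteration=1000, periodicity=50):
    """Hypothetical helper mirroring Periodic's schedule: once the warm-up of
    initial_iteration iterations has passed, the mask is applied every
    periodicity iterations."""
    return (iteration >= initial_iteration
            and (iteration - initial_iteration) % periodicity == 0)


# With the defaults, the mask fires at iterations 1000, 1050, 1100, ...
assert [i for i in range(1200) if should_apply_mask(i)] == [1000, 1050, 1100, 1150]
```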
TrainTest
Stops the training early if the validation loss doesn't improve within a given patience. Note that the periodicity should be a multiple of write_iterations.
`__init__(self, patience=200, delta=1e-05, path='checkpoint.pt')`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `patience` | int | number of iterations to wait for an improvement in the validation loss before triggering | 200 |
| `delta` | float | minimum decrease in the validation loss that counts as an improvement | 1e-05 |
| `path` | str | path where the checkpoints are stored; must have the ".pt" extension | 'checkpoint.pt' |
Source code in deepymod/training/sparsity_scheduler.py
```python
def __init__(self, patience=200, delta=1e-5, path='checkpoint.pt'):
    """Stops the training early if the validation loss doesn't improve
    within a given patience. Note that the periodicity should be a
    multiple of write_iterations.
    Args:
        patience (int): number of iterations to wait for an improvement before triggering
        delta (float): minimum decrease in the validation loss that counts as an improvement
        path (str): path where the checkpoints are stored, must have ".pt" extension
    """
    self.path = path
    self.patience = patience
    self.delta = delta
    self.best_iteration = None
    self.best_loss = None
```
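The `patience`/`delta` bookkeeping works like standard early stopping: a loss only counts as an improvement when it beats `best_loss` by more than `delta`, and the trigger fires once no improvement has been recorded for `patience` iterations. A hypothetical sketch of that logic (not the library's exact code):

```python
def check_trigger(iteration, val_loss, state, patience=200, delta=1e-5):
    """Hypothetical bookkeeping; `state` holds 'best_loss' and
    'best_iteration', mirroring the attributes set in __init__."""
    if state['best_loss'] is None or val_loss < state['best_loss'] - delta:
        # Improvement: record it and reset the patience clock.
        state['best_loss'], state['best_iteration'] = val_loss, iteration
        return False
    # Fire once `patience` iterations have passed without improvement.
    return iteration - state['best_iteration'] >= patience


state = {'best_loss': None, 'best_iteration': None}
assert check_trigger(0, 1.0, state) is False         # first loss is always the best so far
assert check_trigger(100, 0.999999, state) is False  # within delta: no improvement, patience not exceeded
assert check_trigger(200, 1.1, state) is True        # 200 iterations without improvement: trigger
```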
`load_checkpoint(self, model, optimizer)`
Loads the model and optimizer state from disk.
Source code in deepymod/training/sparsity_scheduler.py
```python
def load_checkpoint(self, model, optimizer):
    """Loads the model and optimizer state from disk."""
    # Note: 'checkpoint.pt' is appended to self.path, so path acts as a prefix.
    checkpoint_path = self.path + 'checkpoint.pt'
    checkpoint = torch.load(checkpoint_path)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```
`save_checkpoint(self, model, optimizer)`
Saves the model when the validation loss decreases.
Source code in deepymod/training/sparsity_scheduler.py
```python
def save_checkpoint(self, model, optimizer):
    """Saves the model when the validation loss decreases."""
    checkpoint_path = self.path + 'checkpoint.pt'
    torch.save({'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict()},
               checkpoint_path)
```
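Since both methods append `'checkpoint.pt'` to `self.path`, the `path` argument effectively acts as a prefix (e.g. a directory). A hedged round-trip sketch, using a stand-in model and optimizer:

```python
import torch

# Stand-ins for any torch.nn.Module / torch.optim.Optimizer pair.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.Adam(model.parameters())

scheduler = TrainTest(patience=200, delta=1e-5, path='./')  # files land at './checkpoint.pt'
scheduler.save_checkpoint(model, optimizer)  # writes ./checkpoint.pt
scheduler.load_checkpoint(model, optimizer)  # restores both state dicts
```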
TrainTestPeriodic
Stops the training early if the validation loss doesn't improve within a given patience. Note that the periodicity should be a multiple of write_iterations.
`__init__(self, periodicity=50, patience=200, delta=1e-05, path='checkpoint.pt')`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `periodicity` | int | apply the sparsity mask every `periodicity` iterations | 50 |
| `patience` | int | number of iterations to wait for an improvement in the validation loss before triggering | 200 |
| `delta` | float | minimum decrease in the validation loss that counts as an improvement | 1e-05 |
| `path` | str | path where the checkpoints are stored; must have the ".pt" extension | 'checkpoint.pt' |
Source code in deepymod/training/sparsity_scheduler.py
```python
def __init__(self, periodicity=50, patience=200, delta=1e-5, path='checkpoint.pt'):
    """Stops the training early if the validation loss doesn't improve
    within a given patience. Note that the periodicity should be a
    multiple of write_iterations.
    Args:
        periodicity (int): apply the sparsity mask every periodicity iterations
        patience (int): number of iterations to wait for an improvement before triggering
        delta (float): minimum decrease in the validation loss that counts as an improvement
        path (str): path where the checkpoints are stored, must have ".pt" extension
    """
    self.path = path
    self.patience = patience
    self.delta = delta
    self.periodicity = periodicity
    self.best_iteration = None
    self.best_loss = None
    self.periodic = False
```
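The extra `self.periodic` flag suggests two phases: the scheduler starts in train/test (early-stopping) mode and, once it has fired for the first time, switches to a purely periodic schedule. A hypothetical sketch of that two-phase rule (an inference from the flag, not the library's exact code):

```python
def two_phase_trigger(state, iteration, val_loss,
                      periodicity=50, patience=200, delta=1e-5):
    """Hypothetical two-phase rule; `state` mimics TrainTestPeriodic's
    attributes ('periodic', 'best_loss', 'best_iteration') plus an assumed
    'last_masked' marker."""
    if state['periodic']:
        # Phase 2: purely periodic, every `periodicity` iterations.
        return (iteration - state['last_masked']) % periodicity == 0
    if state['best_loss'] is None or val_loss < state['best_loss'] - delta:
        state['best_loss'], state['best_iteration'] = val_loss, iteration
        return False
    if iteration - state['best_iteration'] >= patience:
        # Phase 1 trigger: fire once, then switch to periodic mode.
        state['periodic'], state['last_masked'] = True, iteration
        return True
    return False
```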
`load_checkpoint(self, model, optimizer)`
Loads the model and optimizer state from disk.
Source code in deepymod/training/sparsity_scheduler.py
```python
def load_checkpoint(self, model, optimizer):
    """Loads the model and optimizer state from disk."""
    # Note: 'checkpoint.pt' is appended to self.path, so path acts as a prefix.
    checkpoint_path = self.path + 'checkpoint.pt'
    checkpoint = torch.load(checkpoint_path)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```
`save_checkpoint(self, model, optimizer)`
Saves the model when the validation loss decreases.
Source code in deepymod/training/sparsity_scheduler.py
```python
def save_checkpoint(self, model, optimizer):
    """Saves the model when the validation loss decreases."""
    checkpoint_path = self.path + 'checkpoint.pt'
    torch.save({'model_state_dict': model.state_dict(),
                'optimizer_state_dict': optimizer.state_dict()},
               checkpoint_path)
```