34 changes: 34 additions & 0 deletions neural_nets/regularizers/README.md
@@ -0,0 +1,34 @@
## Usage of regularizers

Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are incorporated into the loss function that the network optimizes.

The penalties are applied on a per-layer basis.

## Example

```python
from neural_nets import regularizers

```
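For instance, here is a minimal sketch of computing a penalty directly (the weight values are illustrative; a regularizer instance is simply called on a weight array):

```python
import numpy as np
from neural_nets import regularizers

W = np.array([[0.5, -1.0],
              [2.0, 0.0]])  # illustrative weight matrix

reg = regularizers.l2(0.01)
penalty = reg(W)  # 0.01 * sum(W**2) = 0.0525
```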

## Available penalties

```python
regularizers.l1(0.01)
regularizers.l2(0.01)
regularizers.l1_l2(l1=0.01, l2=0.01)
```
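All three helpers return an `L1L2` instance, so the combined `l1_l2` penalty equals the sum of the separate `l1` and `l2` penalties with the same factors. A quick sketch (weight values are illustrative):

```python
import numpy as np
from neural_nets import regularizers

W = np.array([[1.0, -2.0],
              [3.0, 0.5]])

combined = regularizers.l1_l2(l1=0.01, l2=0.01)(W)
separate = regularizers.l1(0.01)(W) + regularizers.l2(0.01)(W)
assert np.isclose(combined, separate)  # both are 0.2075
```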

## Developing new regularizers

Any function that takes in a weight matrix and returns a scalar loss contribution can be used as a regularizer, e.g.:

```python
import numpy as np

def l1_reg(weight_matrix):
    # Penalize the sum of absolute weight values, scaled by a fixed 0.01 factor.
    return 0.01 * np.sum(np.abs(weight_matrix))
```

Alternatively, you can write your regularizers in an object-oriented way;
see the [neural_nets/regularizers.py](regularizers.py) module for examples.
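
For instance, a standalone class-based L1 regularizer might look like the following sketch, which mirrors the built-in `L1L2` class (the `L1` name here is hypothetical; implementing `get_config` is optional, but it lets `Regularizer.from_config` rebuild the instance from a plain dict):

```python
import numpy as np
from neural_nets.regularizers import Regularizer

class L1(Regularizer):
    """Standalone L1 regularizer with a configurable factor (illustrative)."""

    def __init__(self, l1=0.01):
        self.l1 = l1

    def __call__(self, x):
        # Penalty is the scaled sum of absolute weight values.
        return self.l1 * np.sum(np.abs(x))

    def get_config(self):
        return {'l1': float(self.l1)}
```

The config round trip then works the same way as for the built-in class:

```python
reg = L1(l1=0.05)
config = reg.get_config()          # {'l1': 0.05}
restored = L1.from_config(config)  # an equivalent L1 instance
```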
1 change: 1 addition & 0 deletions neural_nets/regularizers/__init__.py
@@ -0,0 +1 @@
from .regularizers import *
57 changes: 57 additions & 0 deletions neural_nets/regularizers/regularizers.py
@@ -0,0 +1,57 @@
"""
Built-in regularizers.
"""

import numpy as np


class Regularizer(object):
"""
Regularizer base class.
"""

def __call__(self, x):
return 0.

@classmethod
def from_config(cls, config):
return cls(**config)


class L1L2(Regularizer):
"""
Regularizer for L1 and L2 regularization.

Arguments
--------
l1: Float; L1 regularization factor.
l2: Float; L2 regularization factor.
"""

def __init__(self, l1=0., l2=0.):
self.l1 = l1
self.l2 = l2

def __call__(self, x):
regularization = 0.
if self.l1:
regularization += np.sum(self.l1 * np.abs(x))
if self.l2:
regularization += np.sum(self.l2 * np.square(x))
return regularization

def get_config(self):
return {'l1': float(self.l1),
'l2': float(self.l2)}


def l1(l=0.01):
return L1L2(l1=l)


def l2(l=0.01):
return L1L2(l2=l)


def l1_l2(l1=0.01, l2=0.01):
return L1L2(l1=l1, l2=l2)