Commit
Added pseudocode to describe the SGD algorithm
Fady Bishara committed Jul 31, 2024
1 parent dfb748f commit 0288bad
Showing 1 changed file with 13 additions and 2 deletions: docs/linear_regression.md
@@ -120,6 +120,17 @@ The parameter $\eta$ should be small (i.e., $\eta < 1$) and is called the ==learning rate==

!!! example "Implement stochastic gradient descent"

-```python
-for ...
+The following pseudocode is adapted from the SGD algorithm (8.1) in the [Deep Learning](https://www.deeplearningbook.org/) book by Goodfellow, Bengio, and Courville (see chapter 8, page 291):
+```ruby
+Require: learning rate, eta
+Require: initial parameters, w and b
+k = 1
+while (do another epoch) do
+    for each minibatch do
+        compute the loss over the minibatch
+        compute the gradient of the loss: dloss_dw, dloss_db
+        update the parameters: w = w - eta * dloss_dw
+                               b = b - eta * dloss_db
+    end for
+    k = k + 1
+end while
+```
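
For readers who want something runnable rather than pseudocode, here is a minimal NumPy sketch of the algorithm above for a one-variable linear model $y \approx w x + b$ with a mean-squared-error loss. The synthetic data, learning rate, batch size, and fixed-epoch stopping rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data drawn around y = 2x + 1 (an assumption).
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=200)

eta = 0.1        # Require: learning rate, eta
w, b = 0.0, 0.0  # Require: initial parameters, w and b
batch_size = 20
n_epochs = 50    # stand-in for "do another epoch": a fixed epoch budget

k = 1
for epoch in range(n_epochs):                   # while (do another epoch) do
    perm = rng.permutation(len(x))              # shuffle once per epoch
    for start in range(0, len(x), batch_size):  # for each minibatch do
        idx = perm[start:start + batch_size]
        xb, yb = x[idx], y[idx]

        # Loss over the minibatch: mean((w*xb + b - yb)**2)
        residual = w * xb + b - yb

        # Gradient of the loss with respect to w and b
        dloss_dw = 2.0 * np.mean(residual * xb)
        dloss_db = 2.0 * np.mean(residual)

        # Update the parameters
        w = w - eta * dloss_dw
        b = b - eta * dloss_db
    k = k + 1

print(f"learned w = {w:.3f}, b = {b:.3f}")  # should land near w = 2, b = 1
```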
