maybe this example should run on the GPU, since it easily can, even though this is slower
mcabbott committed Nov 26, 2022
1 parent 739197d commit c0994c7
Showing 1 changed file with 5 additions and 5 deletions.
docs/src/models/quickstart.md
@@ -17,14 +17,14 @@ model = Chain(
     Dense(2 => 3, tanh),   # activation function inside layer
     BatchNorm(3),
     Dense(3 => 2),
-    softmax)
+    softmax) |> gpu        # move model to GPU, if available
 
 # The model encapsulates parameters, randomly initialised. Its initial output is:
-out1 = model(noisy)                                 # 2×1000 Matrix{Float32}
+out1 = model(noisy |> gpu) |> cpu                   # 2×1000 Matrix{Float32}
 
 # To train the model, we use batches of 64 samples, and one-hot encoding:
 target = Flux.onehotbatch(truth, [true, false])     # 2×1000 OneHotMatrix
-loader = Flux.DataLoader((noisy, target), batchsize=64, shuffle=true);
+loader = Flux.DataLoader((noisy, target) |> gpu, batchsize=64, shuffle=true);
 # 16-element DataLoader with first element: (2×64 Matrix{Float32}, 2×64 OneHotMatrix)
 
 pars = Flux.params(model)   # contains references to arrays in model
@@ -34,7 +34,7 @@ opt = Flux.Adam(0.01)   # will store optimiser momentum, etc.
 losses = []
 for epoch in 1:1_000
     for (x, y) in loader
-        loss, grad = withgradient(pars) do
+        loss, grad = Flux.withgradient(pars) do
             # Evaluate model and loss inside gradient context:
             y_hat = model(x)
             Flux.crossentropy(y_hat, y)
@@ -46,7 +46,7 @@ end
 
 pars   # parameters, momenta and output have all changed
 opt
-out2 = model(noisy)                 # first row is prob. of true, second row p(false)
+out2 = model(noisy |> gpu) |> cpu   # first row is prob. of true, second row p(false)
 
 mean((out2[1,:] .> 0.5) .== truth)  # accuracy 94% so far!
 ```
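For readers skimming the commit: the idiom adopted here is Flux's standard device movement, piping the model and inputs through `gpu` and the outputs back through `cpu`. Below is a minimal sketch of the pattern, separate from the quickstart; the input array is made up for illustration, and `gpu` acts as the identity when no usable GPU is found.

```julia
using Flux

# Same layers as the quickstart's model, moved to the GPU if one is available.
model = Chain(Dense(2 => 3, tanh), BatchNorm(3), Dense(3 => 2), softmax) |> gpu

x = rand(Float32, 2, 1000)    # illustrative input, shaped like the quickstart's `noisy`
y = model(x |> gpu) |> cpu    # run on the device, then fetch a plain CPU Matrix{Float32}
```

Note the design choice in the diff of piping the whole `(noisy, target)` tuple through `gpu` before constructing the `DataLoader`: the full dataset lives on the GPU once, so each 64-sample batch needs no per-iteration transfer. For datasets too large for GPU memory, the usual alternative is to move each `(x, y)` to the device inside the training loop instead.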
