diff --git a/dev/articles/examples/basic-nn-module.html b/dev/articles/examples/basic-nn-module.html
index f7e2f4a1f9..ae9ad4ebb9 100644
--- a/dev/articles/examples/basic-nn-module.html
+++ b/dev/articles/examples/basic-nn-module.html
@@ -133,9 +133,9 @@
## $w
## torch_tensor
-## -0.3662
-## 1.1301
-## -0.1990
+## 0.5874
+## 0.2547
+## -0.2057
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
##
## $b
@@ -146,9 +146,9 @@ basic-nn-module
# or individually
model$w
## torch_tensor
-## -0.3662
-## 1.1301
-## -0.1990
+## 0.5874
+## 0.2547
+## -0.2057
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
model$b
y_pred
## torch_tensor
-## -0.4375
-## 0.1599
-## 1.8788
-## -1.8759
-## 0.4374
-## -1.1280
-## -3.3116
-## -0.2800
-## 1.0060
-## 0.8805
+## -0.9153
+## 1.0249
+## -1.0888
+## 0.4401
+## 0.8388
+## 0.2520
+## -0.1044
+## 1.1781
+## 0.0587
+## 0.0623
## [ CPUFloatType{10,1} ][ grad_fn = <AddBackward0> ]
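For context, a minimal sketch of a module with this parameter layout (a {3, 1} weight w and a bias b), assuming the hand-rolled nn_module() pattern; the exact definition is not part of this diff, so the names below are illustrative:

library(torch)

# Illustrative module with parameters $w (shape {3, 1}) and $b,
# matching the shapes and requires_grad = TRUE printed above.
net <- nn_module(
  initialize = function() {
    self$w <- nn_parameter(torch_randn(3, 1))
    self$b <- nn_parameter(torch_zeros(1))
  },
  forward = function(x) {
    torch_mm(x, self$w) + self$b
  }
)

model <- net()
model$parameters                       # named list with $w and $b
y_pred <- model(torch_randn(10, 3))    # shape {10, 1}, grad_fn = <AddBackward0>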
diff --git a/dev/articles/indexing.html b/dev/articles/indexing.html
index 51d0d8b9aa..ca513715db 100644
--- a/dev/articles/indexing.html
+++ b/dev/articles/indexing.html
@@ -244,23 +244,23 @@ The following syntax will give you the first row:
x[1,]
#> torch_tensor
-#> 0.1092
-#> 0.7651
-#> 0.4560
+#> 0.2383
+#> -0.0072
+#> -0.4631
#> [ CPUFloatType{3} ]
And this would give you the first 2 columns:
x[,1:2]
#> torch_tensor
-#> 0.1092 0.7651
-#> -1.0189 0.1967
+#> 0.2383 -0.0072
+#> 0.7319 0.0022
#> [ CPUFloatType{2,2} ]
You can also use boolean vectors, for example:
x[c(TRUE, FALSE, TRUE, FALSE), c(TRUE, FALSE, TRUE, FALSE)]
#> torch_tensor
-#> -1.0368 1.1555
-#> -0.4369 -0.0500
+#> -1.4861 -0.0491
+#> -2.4130 0.4294
#> [ CPUFloatType{2,2} ]
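A sketch of the setup these indexing outputs assume (x holds random values, so the numbers differ between builds); the tensor sizes are inferred from the printed shapes and are illustrative:

x <- torch_randn(2, 3)
x[1, ]        # first row: a length-3 tensor
x[, 1:2]      # first two columns: shape {2, 2}

x <- torch_randn(4, 4)
x[c(TRUE, FALSE, TRUE, FALSE), c(TRUE, FALSE, TRUE, FALSE)]   # shape {2, 2}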
The above examples also work if the index were long or boolean tensors, instead of R vectors. It’s also possible to index with
diff --git a/dev/articles/loading-data.html b/dev/articles/loading-data.html
index d69db9b1f0..81f29f1709 100644
--- a/dev/articles/loading-data.html
+++ b/dev/articles/loading-data.html
@@ -385,7 +385,7 @@
Another example is torch_ones
, which creates a tensor
filled with ones.
traced_fn(torch_randn(3))
#> torch_tensor
+#> 2.1653
#> 0.0000
-#> 0.0000
-#> 0.5605
+#> 0.3021
#> [ CPUFloatType{3} ]
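The zero entry in that output is what a traced ReLU-like function produces; a sketch of how such a traced_fn might be created with jit_trace(), assuming a plain R function (the actual definition is not shown in this hunk):

fn <- function(x) {
  torch_relu(x)
}
traced_fn <- jit_trace(fn, torch_randn(3))
traced_fn(torch_randn(3))   # zeros wherever the input was negative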
It’s also possible to trace nn_modules()
defined in R,
for example:
traced_module(torch_randn(3, 10))
#> torch_tensor
-#> 0.3669
-#> 0.3222
-#> 0.4798
+#> -0.3578
+#> -0.1359
+#> -0.1780
#> [ CPUFloatType{3,1} ][ grad_fn = <AddmmBackward0> ]
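A {3, 1} result with AddmmBackward0 is consistent with a traced module that ends in nn_linear(10, 1); a sketch under that assumption (the module shown here is illustrative, not the one from the article):

net <- nn_module(
  initialize = function() {
    self$linear <- nn_linear(10, 1)
  },
  forward = function(x) {
    self$linear(x)
  }
)
traced_module <- jit_trace(net(), torch_randn(3, 10))
traced_module(torch_randn(3, 10))   # shape {3, 1}, grad_fn = <AddmmBackward0>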
We still manually compute the forward pass, and we still manually update the weights. In the last two chapters of this section, we’ll see how these parts of the logic can be made more modular and reusable, as
diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index 9435834ca6..1351c8ac90 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -20,7 +20,7 @@ articles:
  tensor-creation: tensor-creation.html
  torchscript: torchscript.html
  using-autograd: using-autograd.html
-last_built: 2025-01-16T22:20Z
+last_built: 2025-01-17T13:48Z
urls:
  reference: https://torch.mlverse.org/docs/reference
  article: https://torch.mlverse.org/docs/articles
diff --git a/dev/reference/distr_bernoulli.html b/dev/reference/distr_bernoulli.html
index d0db7aa251..e79072920d 100644
--- a/dev/reference/distr_bernoulli.html
+++ b/dev/reference/distr_bernoulli.html
@@ -118,7 +118,7 @@