diff --git a/dev/articles/examples/basic-nn-module.html b/dev/articles/examples/basic-nn-module.html
index f6e9ad1cbc..cc3cb20100 100644
--- a/dev/articles/examples/basic-nn-module.html
+++ b/dev/articles/examples/basic-nn-module.html
@@ -133,9 +133,9 @@
## $w
## torch_tensor
-## -0.8666
-## -1.6154
-## -0.2515
+## 0.1571
+## -0.4497
+## 0.6261
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
##
## $b
@@ -146,9 +146,9 @@ basic-nn-module
# or individually
model$w
## torch_tensor
-## -0.8666
-## -1.6154
-## -0.2515
+## 0.1571
+## -0.4497
+## 0.6261
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
model$b
y_pred
## torch_tensor
-## 1.3638
-## 3.7643
-## 2.0929
-## 1.3960
-## -4.7317
-## 0.0139
-## -1.6080
-## 1.9066
-## -1.3871
-## 4.1967
+## -0.9991
+## -0.4384
+## 1.0320
+## -0.8622
+## 1.9835
+## 0.1517
+## 0.5564
+## 0.8829
+## -1.4770
+## 0.0957
## [ CPUFloatType{10,1} ][ grad_fn = <AddBackward0> ]
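The hunks above only change randomly initialized values between builds. For orientation, here is a hedged sketch (not taken from this diff) of an nn_module whose parameters and output match the shapes shown: `w` as a 3x1 parameter, `b` assumed to be a length-1 bias, and predictions of shape 10x1.

```r
library(torch)

# Hedged sketch, not the article's exact code: parameter names and shapes
# follow the output above; the bias length and the 10x3 input are assumptions.
net <- nn_module(
  initialize = function() {
    self$w <- nn_parameter(torch_randn(3, 1))
    self$b <- nn_parameter(torch_zeros(1))
  },
  forward = function(x) {
    torch_mm(x, self$w) + self$b
  }
)

model <- net()             # model$w and model$b print like the output above
x <- torch_randn(10, 3)    # assumed input: 10 observations, 3 features
y_pred <- model(x)         # shape {10,1}, grad_fn = AddBackward0 as shown
```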
diff --git a/dev/articles/indexing.html b/dev/articles/indexing.html
index 96b1d5d94f..6cbd62c860 100644
--- a/dev/articles/indexing.html
+++ b/dev/articles/indexing.html
@@ -244,23 +244,23 @@ The following syntax will give you the first row:
x[1,]
#> torch_tensor
-#> -0.9774
-#> 0.6581
-#> -1.2705
+#> 1.2514
+#> 3.0536
+#> -1.2668
#> [ CPUFloatType{3} ]
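The tensor `x` itself is not part of this hunk; a hedged sketch of a setup consistent with the result shapes shown (a 2-by-3 random matrix):

```r
library(torch)

# Assumption: x is a 2x3 matrix, matching the {3} and {2,2} results in this hunk.
x <- torch_randn(2, 3)
x[1, ]   # first row, a length-3 tensor
```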
And this would give you the first 2 columns:
x[,1:2]
#> torch_tensor
-#> -0.9774 0.6581
-#> 0.9108 0.8746
+#> 1.2514 3.0536
+#> -0.9527 -0.1414
#> [ CPUFloatType{2,2} ]
You can also use boolean vectors, for example:
x[c(TRUE, FALSE, TRUE, FALSE), c(TRUE, FALSE, TRUE, FALSE)]
#> torch_tensor
-#> -0.7239 0.3543
-#> 0.2118 -0.2658
+#> -0.3682 -0.4853
+#> -2.0325 0.6320
#> [ CPUFloatType{2,2} ]
The above examples also work if the index were long or boolean tensors, instead of R vectors. It’s also possible to index with
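A hedged sketch of the boolean-tensor variant (the 4x4 shape and values are illustrative, not from this diff):

```r
library(torch)

# Assumption: a 4x4 x, as in the logical-vector example above.
x <- torch_randn(4, 4)
mask <- torch_tensor(c(TRUE, FALSE, TRUE, FALSE))
x[mask, ]   # keeps the rows where mask is TRUE
```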
diff --git a/dev/articles/loading-data.html b/dev/articles/loading-data.html
index 6c01a6800a..d584e80266 100644
--- a/dev/articles/loading-data.html
+++ b/dev/articles/loading-data.html
@@ -385,7 +385,7 @@
Another example is torch_ones, which creates a tensor filled with ones.
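A one-line hedged illustration of that call (the shape is chosen arbitrarily):

```r
library(torch)

torch_ones(2, 3)   # a 2x3 tensor filled with ones
```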
traced_fn(torch_randn(3))
#> torch_tensor
+#> 0.1195
+#> 2.0253
#> 0.0000
-#> 0.0000
-#> 0.4806
#> [ CPUFloatType{3} ]
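traced_fn is not defined in this diff; a hedged sketch of how such a traced function is typically produced with jit_trace() (the body of fn here is a placeholder):

```r
library(torch)

# Hypothetical fn: any function of a single tensor would do for tracing.
fn <- function(x) torch_relu(x)
traced_fn <- jit_trace(fn, torch_randn(3))   # the example input fixes shapes and types
traced_fn(torch_randn(3))
```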
It’s also possible to trace nn_modules() defined in R, for example:
traced_module(torch_randn(3, 10))
#> torch_tensor
-#> 0.2074
-#> 0.2896
-#> 0.2279
+#> -0.3026
+#> 0.0974
+#> 0.1498
#> [ CPUFloatType{3,1} ][ grad_fn = <AddmmBackward0> ]
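Likewise, traced_module is not defined in this diff; a hedged sketch consistent with the 3x10 input and {3,1} output above, assuming a single linear layer:

```r
library(torch)

# Assumption: a 10 -> 1 linear module; jit_trace() also accepts nn_modules.
module <- nn_linear(10, 1)
traced_module <- jit_trace(module, torch_randn(3, 10))
traced_module(torch_randn(3, 10))   # shape {3,1}, grad_fn = AddmmBackward0
```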
We still manually compute the forward pass, and we still manually update the weights. In the last two chapters of this section, we’ll see how these parts of the logic can be made more modular and reusable, as
diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index 28d6f0897e..08484bece5 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -20,7 +20,7 @@ articles:
  tensor-creation: tensor-creation.html
  torchscript: torchscript.html
  using-autograd: using-autograd.html
-last_built: 2025-01-21T11:16Z
+last_built: 2025-01-21T20:34Z
urls:
  reference: https://torch.mlverse.org/docs/reference
  article: https://torch.mlverse.org/docs/articles
diff --git a/dev/reference/distr_gamma.html b/dev/reference/distr_gamma.html
index a03060abe6..98934bae61 100644
--- a/dev/reference/distr_gamma.html
+++ b/dev/reference/distr_gamma.html
@@ -111,7 +111,7 @@