@@ -48,7 +48,7 @@ general and extensible framework to allow further development of ASR technology.
done via the cub library, parts of which we wrap with our own convenient
interface.
- The Finite State Automaton object is then implemented Ragged tensor templated
+ The Finite State Automaton object is then implemented as a Ragged tensor templated
on a specific data type (a struct representing an arc in the automaton).


@@ -73,7 +73,7 @@ general and extensible framework to allow further development of ASR technology.
## Current state of the code
- A lot of the code is still unfinished (note, this was written on Sep 11, 2020).
+ A lot of the code is still unfinished (Sep 11, 2020).
We finished the CPU versions of many algorithms and this code is in ` k2/csrc/host/ ` ;
however, after that we figured out how to implement things on the GPU and decided
to change the interfaces so the CPU and GPU code had a more unified interface.
@@ -84,13 +84,14 @@ general and extensible framework to allow further development of ASR technology.
with our GPU algorithms. Instead we will use the interfaces drafted in ` k2/csrc/ `
e.g. the Context object (which encapsulates things like memory managers from external
toolkits) and the Tensor object which can be used to wrap tensors from external toolkits;
- and wrap those in Python (using pybind11).
+ and wrap those in Python (using pybind11). The code in host/ will eventually
+ be either deprecated, rewritten or wrapped with newer-style interfaces.
## Plans for initial release
We hope to get the first version working in early October. The current
short-term aim is to finish the GPU implementation of pruned composition of a
- normal with dense FSA, which is the same as decoder search in speech
+ normal FSA with a dense FSA, which is the same as decoder search in speech
recognition and can be used to implement CTC training and lattice-free MMI (LF-MMI) training. The
proof-of-concept that we will release initially is something that's like CTC
but allowing more general supervisions (general FSAs rather than linear