
Commit ae84d49

Updates to README (k2-fsa#145)
1 parent: 2f1e4bb

1 file changed: +5 −4 lines

README.md

@@ -48,7 +48,7 @@ general and extensible framework to allow further development of ASR technology.
 done via the cub library, parts of which we wrap with our own convenient
 interface.
 
-The Finite State Automaton object is then implemented Ragged tensor templated
+The Finite State Automaton object is then implemented as a Ragged tensor templated
 on a specific data type (a struct representing an arc in the automaton).
 
 
@@ -73,7 +73,7 @@ general and extensible framework to allow further development of ASR technology.
 
 ## Current state of the code
 
-A lot of the code is still unfinished (note, this was written on Sep 11, 2020).
+A lot of the code is still unfinished (Sep 11, 2020).
 We finished the CPU versions of many algorithms and this code is in `k2/csrc/host/`;
 however, after that we figured out how to implement things on the GPU and decided
 to change the interfaces so the CPU and GPU code had a more unified interface.
@@ -84,13 +84,14 @@ general and extensible framework to allow further development of ASR technology.
 with our GPU algorithms. Instead we will use the interfaces drafted in `k2/csrc/`
 e.g. the Context object (which encapsulates things like memory managers from external
 toolkits) and the Tensor object which can be used to wrap tensors from external toolkits;
-and wrap those in Python (using pybind11).
+and wrap those in Python (using pybind11). The code in host/ will eventually
+be either deprecated, rewritten or wrapped with newer-style interfaces.
 
 ## Plans for initial release
 
 We hope to get the first version working in early October. The current
 short-term aim is to finish the GPU implementation of pruned composition of a
-normal with dense FSA, which is the same as decoder search in speech
+normal FSA with a dense FSA, which is the same as decoder search in speech
 recognition and can be used to implement CTC training and lattice-free MMI (LF-MMI) training. The
 proof-of-concept that we will release initially is something that's like CTC
 but allowing more general supervisions (general FSAs rather than linear
