Releases: Submitty/Lichen

v20.04.00

08 Apr 14:42
01f33bb
Remove previously tokenized files if concat occurs (#21)

* fix concat on regex

* temp git ignore change

* remove dir

* remove check internally

* fix tokenize

* fix filesystem tweaks

* rm filesystem stuff

* remove
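
A minimal sketch of the cleanup this release's title describes, under the
assumption of a hypothetical per-submission layout (file names like
`tokens.json` and `concatenated.txt` are illustrative, not Lichen's actual
structure):

```python
import os

# Illustrative only: if a submission was re-concatenated, delete any
# previously tokenized output so it gets regenerated from the new concat.
# The file names below are hypothetical, not Lichen's real layout.
def remove_stale_tokens(submission_dir):
    tokens_path = os.path.join(submission_dir, "tokens.json")
    concat_path = os.path.join(submission_dir, "concatenated.txt")
    if os.path.exists(tokens_path) and os.path.exists(concat_path):
        # Regenerate tokens whenever the concatenated file is newer.
        if os.path.getmtime(concat_path) > os.path.getmtime(tokens_path):
            os.remove(tokens_path)
```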

v19.12.00

12 Dec 20:11
[Feature:Lichen] Add MIPS tokenizer (#17)

* Add MIPS tokenizer

The tokenizer identifies string literals, labels, dot-types, registers,
immediate values, instructions, addresses, and comments.  If two
instructions are placed on the same line, the first is identified as an
instruction and the second as a label; this is acceptable because MIPS
syntax only allows one instruction per line.  (A rough sketch of this kind
of tokenizer follows these notes.)

* Add installation
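
As a rough, hedged sketch of a regex-based MIPS tokenizer covering the token
categories listed above — the category names and patterns are assumptions for
illustration, not the tokenizer added in #17:

```python
import json
import re

# Hypothetical token categories and patterns, roughly matching the list in
# the release note above.  Illustration only, not the code added in #17.
TOKEN_PATTERNS = [
    ("COMMENT",     r"#.*"),
    ("STRING",      r'"(?:\\.|[^"\\])*"'),
    ("LABEL",       r"[A-Za-z_][A-Za-z0-9_]*:"),
    ("DOT_TYPE",    r"\.[A-Za-z]+"),
    ("ADDRESS",     r"-?\d*\(\$[A-Za-z0-9]+\)"),
    ("REGISTER",    r"\$(?:zero|at|gp|sp|fp|ra|[kvat]\d|[st]\d|\d+)"),
    ("IMMEDIATE",   r"-?(?:0x[0-9a-fA-F]+|\d+)"),
    ("INSTRUCTION", r"[A-Za-z][A-Za-z0-9.]*"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})"
                                for name, pattern in TOKEN_PATTERNS))

def tokenize(source):
    """Return a list of token dicts (line, char, type, value) for MIPS source."""
    tokens = []
    for line_number, line in enumerate(source.splitlines(), start=1):
        for match in MASTER_RE.finditer(line):
            tokens.append({
                "line": line_number,
                "char": match.start() + 1,
                "type": match.lastgroup,
                "value": match.group(),
            })
    return tokens

if __name__ == "__main__":
    print(json.dumps(tokenize('loop: addi $t0, $t0, 1  # increment'), indent=2))
```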

v.18.12.00: Allow regex for files in configuration (#15)

13 Dec 06:05
* regex for files

* remove comment

* regrex -> regex
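
To illustrate the feature named in this release, a small hedged sketch of
selecting submission files by a regex read from a JSON config; the `regex`
key and config layout here are assumptions, not Lichen's actual schema:

```python
import json
import re
from pathlib import Path

# Illustrative only: select submission files whose names match a regex
# taken from a JSON config.  The "regex" key is a hypothetical field name.
def files_matching_config(config_path, submission_dir):
    with open(config_path) as f:
        config = json.load(f)
    pattern = re.compile(config["regex"])  # e.g. r".*\.(cpp|h)"
    return [path for path in Path(submission_dir).rglob("*")
            if path.is_file() and pattern.fullmatch(path.name)]
```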

v.18.09.00: Update llvm version for C tokenizer (#14)

01 Oct 03:33
86f40c8
* update print statement

* update llvm version

v.18.07.04: Sort processing order & parse json config file (#13)

31 Jul 06:13
1c1dd67
Sort the processing order of users/versions (helps debugging).
Moved from passing command-line arguments to each script to parsing the JSON config file.
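
A hedged sketch of both changes, assuming a hypothetical config layout
(the `users`/`versions` keys are illustrative, not the real schema): read a
single JSON config instead of per-script command-line arguments, and walk
users and versions in sorted order so runs are reproducible.

```python
import json
import sys

# Illustrative only: one JSON config instead of per-script command-line
# arguments, and sorted iteration over users/versions for easier debugging.
def main(config_path):
    with open(config_path) as f:
        config = json.load(f)
    for user in sorted(config["users"]):
        for version in sorted(config["users"][user]["versions"]):
            process(user, version, config)

def process(user, version, config):
    # Placeholder for the per-submission work (tokenize, hash, compare).
    print(f"processing {user} version {version}")

if __name__ == "__main__":
    main(sys.argv[1])
```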

v.18.07.03

30 Jul 02:01
4c25f0c
Double hash size (#12)

v.18.07.02

30 Jul 01:30
e93dfb6
Workaround to skip UTF-8 characters in the plaintext tokenizer (#11)
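
A hedged illustration of the kind of workaround described here, not the code
that shipped in #11: drop non-ASCII characters before the plaintext tokenizer
sees the text.

```python
# Illustrative only: skip multi-byte UTF-8 characters so the plaintext
# tokenizer only ever receives ASCII.
def strip_non_ascii(text):
    return "".join(ch for ch in text if ord(ch) < 128)

# Equivalently, non-ASCII bytes can be dropped at read time:
# open(path, encoding="ascii", errors="ignore").read()
```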

v.18.07.01: Revision to Process all script (#10)

27 Jul 02:40
* fix python_tokenizer tok_name error

* remove an import that is not required in python_tokenizer.py

* remove empty value tokens

* modified process_all.sh to take more arguments

Initial Java tokenizer and bugfix for Python tokenizer

13 Jul 15:43
fix python_tokenizer tok_name error (#9)

* fix python_tokenizer tok_name error

* remove an import that is not required in python_tokenizer.py

* remove empty value tokens

v.18.06.01: Make sure the matches file does not output matches to the same user (#7)

20 Jun 02:28
7ca2951
Make sure the matches file does not output matches to the same user
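
A minimal sketch of the filter this release describes, assuming a
hypothetical match record that names the two users involved (the key names
are illustrative, not the real matches-file format):

```python
# Illustrative only: drop matches whose two submissions come from the same
# user before writing the matches file.
def filter_self_matches(matches):
    return [m for m in matches if m["username_1"] != m["username_2"]]
```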