
Invalid index error when generating model #8

Open
mirkov opened this issue Jan 2, 2025 · 0 comments
mirkov commented Jan 2, 2025

I get an invalid index error when generating the model.

This is on a MacBook M2 with SBCL 2.5.0.

Here are the steps that lead to the error (please see my comments at the bottom of this post):

CL-USER> (ql:quickload :lla )
(:LLA)
CL-USER> (ql:quickload :llama )
(:LLAMA)
CL-USER> (in-package :llama )
#<PACKAGE "LLAMA">
LLAMA> (init #P"stories15M.bin" #P"tokenizer.bin" 3200 )
; No values
LLAMA> (generate *model* *tokenizer*)
; Debugger entered on #<SB-INT:INVALID-ARRAY-INDEX-ERROR expected-type: (INTEGER 0 (3200)) datum: 9038>
[1] LLAMA>
; Evaluation aborted on #<SB-INT:INVALID-ARRAY-INDEX-ERROR expected-type: (INTEGER 0 (3200)) datum: 9038>

The error is:

Invalid index 9038 for (SIMPLE-VECTOR 3200), should be a non-negative integer below 3200.
   [Condition of type SB-INT:INVALID-ARRAY-INDEX-ERROR]

Restarts:
 0: [RETRY] Retry SLY mREPL evaluation request.
 1: [*ABORT] Return to SLY's top level.
 2: [ABORT] abort thread (#<THREAD tid=5891 "sly-channel-1-mrepl-remote-1" RUNNING {7004FB05E3}>)

Backtrace:
 0: ((SB-VM::OPTIMIZED-DATA-VECTOR-REF T) #("<unk>" " ..)
      Locals:
        SB-INT:INDEX = 9038
        VECTOR = #("<unk>" "\n<s>\n" "\n</s>\n" "<0x00>" "<0x01>" "<0x02>" "<0x03>" "<0x04>" "<0x05>" "<0x06>" "<0x07>" "<0x08>" "<0x09>" "<0x0A>" "<0x0B>" "<0x0C>" "<0x0D>" "<0x0E>" "<0x0F>" "<0x10>" "<0x11>" ..)
 1: (GENERATE #<TRANSFORMER  {700610F523}> #<TOKENIZER  {700610F563}> :TOPP NIL :TEMPERATURE 0.9 :STEPS 256 :PROMPT NIL)
      Locals:
        #:.DEFAULTING-TEMP. = 256
        #:.DEFAULTING-TEMP.#1 = NIL
        MODEL = #<TRANSFORMER  {700610F523}>
        NEXT-TOKEN = 9038
        POSITION = 0
        PROMPT-TOKENS = #()
        TEMPERATURE = 0.9
        TOKENIZER = #<TOKENIZER  {700610F563}>
        TOPP = NIL
 2: (SB-INT:SIMPLE-EVAL-IN-LEXENV (GENERATE *MODEL* *TOKENIZER*) #<NULL-LEXENV>)
 3: (EVAL (GENERATE *MODEL* *TOKENIZER*))
 4: ((LAMBDA NIL :IN SLYNK-MREPL::MREPL-EVAL-1))
 --more--
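For reference, the condition itself is just SBCL's bounds check firing when NEXT-TOKEN = 9038 is used to index the 3200-entry vocabulary vector. A minimal standalone sketch (not using llama at all) signals the same condition class:

(let ((vocab (make-array 3200 :initial-element "<unk>"))
      (next-token 9038))
  ;; Out-of-bounds AREF on a simple-vector signals
  ;; SB-INT:INVALID-ARRAY-INDEX-ERROR on SBCL.
  (handler-case (aref vocab next-token)
    (sb-int:invalid-array-index-error (condition)
      (format t "~A~%" condition))))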
  • The same error appears with different token sizes; I also tried 10000.
  • My installations of lla and binary-format are flaky, so perhaps I should not even expect llama to function correctly: the lla tests drop me into a low-level debugger, and the binary-format tests fail.
  • This may be a MacBook M2 platform-related issue; I will try llama on an x86 platform next week.

I submitted issues for lla and binary-format.
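In case it helps locate the failure, here is a rough sketch (hypothetical names, not llama's actual API) of the kind of guarded lookup that would turn this into a clearer error than the raw array-index condition:

;; Hypothetical sketch only: SAFE-TOKEN-LOOKUP is not part of llama;
;; VOCAB is assumed to be the tokenizer's simple-vector of strings.
(defun safe-token-lookup (vocab next-token)
  "Return the vocabulary entry for NEXT-TOKEN, or signal a descriptive
error when the sampled index falls outside the vocabulary."
  (if (and (integerp next-token)
           (< -1 next-token (length vocab)))
      (aref vocab next-token)
      (error "Sampled token ~S is outside the vocabulary (size ~D)."
             next-token (length vocab))))

;; e.g. (safe-token-lookup (make-array 3200 :initial-element "<unk>") 9038)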
