
Hugging Face Demo #88

Open
Mandalorian007 opened this issue Dec 22, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

@Mandalorian007

Motivation & Examples

This project looks absolutely incredible, but I have been really struggling to get it running locally since I don't have a dedicated GPU, and running it remotely on a Colab notebook is a bit messy when uploading content through Gradio tunnels. I was thinking it would be pretty epic if this project had a Hugging Face Space style demo so people could actually try it out more easily. No pressure, just some feedback from a novice AI vision user.

All the same, incredible work. This has to be one of the better-looking, most promising projects, with real potential to be incorporated into other software.

@Mandalorian007 Mandalorian007 added the enhancement New feature or request label Dec 22, 2024
@luca-medeiros
Owner

Appreciate the kind words.
Yeah, that's something I've thought about before. I'll update you once I find some time to set up a HF Space.

In the meantime, you might find Lightning AI's Studio a good option to try it out of the box: https://lightning.ai/studios?section=featured

@Mandalorian007
Author


Thank you, I will definitely try this out!

@ld-xy

ld-xy commented Dec 24, 2024

Hello, I followed the instructions to install it on a Docker server. After the installation completed, I ran python app.py, but I cannot access it in my local browser. Why?

@Mandalorian007
Author


Hey Luca, I hope your holidays are going great.

I took your advice and have been trying to get this project up and running on lightning.ai as suggested, but I have been hitting some issues I am struggling to debug.

After swapping the env to Python 3.11.11 and following the installation instructions in the repo, I used the Gradio plugin to run the app.py file (I changed the port in the startup to 8000). It loads up, and I get LitServe running on the base "/" URL.

However, when I copy the public URL and go to "/gradio", I keep getting HTTP 307s. Any idea what I should check?

⚡ ~ GRADIO_SERVER_PORT=8000 gradio lang-segment-anything/app.py

Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: '/home/zeus/miniconda3/envs/cloudspace/lib/python3.11/site-packages/gradio', '/teamspace/studios/this_studio/lang-segment-anything'

WARNING:root:No GPU found, using CPU instead
WARNING:root:No GPU found, using CPU instead
Starting LitServe and Gradio server on port 8000...
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Swagger UI is available at http://0.0.0.0:8000/docs
INFO:     Started server process [20226]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
IMPORTANT: You are using gradio version 3.50.2, however version 4.44.1 is available, please upgrade.
--------
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
WARNING:root:No GPU found, using CPU instead
WARNING:root:No GPU found, using CPU instead
IMPORTANT: You are using gradio version 3.50.2, however version 4.44.1 is available, please upgrade.
--------
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
LangSAM model initialized.
Setup complete for worker 0.
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:46360 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET /gradio HTTP/1.1" 307 Temporary Redirect
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:23616 - "GET / HTTP/1.1" 200 OK
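A likely explanation for those 307s (an assumption, not confirmed from this app's code): ASGI frameworks such as FastAPI/Starlette commonly redirect the bare path of a mounted sub-application to its trailing-slash form, so a request to /gradio answers with a 307 whose Location header is /gradio/. A client that follows the redirect lands on the UI; a proxy that drops or rewrites it can bounce you back to "/". A minimal, stdlib-only sketch of that trailing-slash behavior (the server and paths here are hypothetical stand-ins, not the real app):

```python
# Sketch of a server that 307-redirects /gradio to /gradio/,
# mimicking how ASGI frameworks treat mounted sub-apps.
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/gradio":
            # Bare mount path: redirect to the canonical trailing-slash URL.
            self.send_response(307)
            self.send_header("Location", "/gradio/")
            self.end_headers()
        elif self.path == "/gradio/":
            # The mounted UI actually lives here.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"gradio ui")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"root")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 307 automatically and lands on /gradio/.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/gradio").read()
print(body.decode())
server.shutdown()
```

If that is what's happening here, opening the public URL with the trailing slash (…/gradio/ instead of …/gradio) may skip the redirect entirely and reach the UI directly.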

@luca-medeiros
Owner

You seem to be running GRADIO_SERVER_PORT=8000 gradio lang-segment-anything/app.py; mind trying python lang-segment-anything/app.py?

You may find better reproducibility by using Docker and the Dockerfile at the repo root.

@luca-medeiros
Owner

does /docs work?

@Mandalorian007
Author

> does /docs work?

That was a good catch. I guess the Gradio plugin enforces the gradio startup. I swapped the application to be fully set up from the terminal, and I'm now using the Port Viewer plugin instead, but I am seeing this behavior:

⚡ ~ python lang-segment-anything/app.py
/home/zeus/miniconda3/envs/cloudspace/lib/python3.11/site-packages/sam2/modeling/sam/transformer.py:23: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
  OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
Starting LitServe and Gradio server on port 8000...
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Swagger UI is available at http://0.0.0.0:8000/docs
INFO:     Started server process [5365]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
/home/zeus/miniconda3/envs/cloudspace/lib/python3.11/site-packages/sam2/modeling/sam/transformer.py:23: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
  OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
LangSAM model initialized.
Setup complete for worker 0.
INFO:     130.211.236.75:48621 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:48621 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:48621 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:48621 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:48621 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:48621 - "GET /gradio HTTP/1.1" 307 Temporary Redirect
INFO:     130.211.236.75:47700 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:47700 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:47700 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:47700 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:47700 - "GET / HTTP/1.1" 200 OK

Using the /docs endpoint works correctly, giving me a FastAPI Swagger page:

INFO:     130.211.236.75:46487 - "GET /openapi.json HTTP/1.1" 200 OK
INFO:     130.211.236.75:46974 - "GET / HTTP/1.1" 200 OK
INFO:     130.211.236.75:46974 - "GET / HTTP/1.1" 200 OK
[Screenshot, 2024-12-27 8:07 AM: FastAPI Swagger UI page]
