
Organizations

@HYU-AILAB

kwonmha/README.md

👋 Hi, I’m @kwonmha

  • I'm working as a researcher at the AI Center of Samsung Life Insurance.

👀 I’m interested in

  • NLP, including Language Modeling and Representation Learning
  • Also ML, including recommender systems

🌱 I’m currently working on

  • Fine-tuning and serving large language models
  • Retrieval-augmented generation (RAG)

📫 How to reach me: kwonmha@gmail.com

Pinned Loading

  1. bert-vocab-builder Public

    Builds a WordPiece (subword) vocabulary compatible with Google Research's BERT

    Python 229 48

  2. Improving-RNN-recommendation-model Public

    Applied the weight-tying technique to an RNN-based recommendation model. Implemented with TensorFlow and Keras.

    Python 53 13

  3. Bidirectional-LSTM-with-attention-for-relation-classification Public

    Python 31 15

  4. Convolutional-Recurrent-Neural-Networks-for-Relation-Extraction Public

    TensorFlow implementation of Convolutional Recurrent Neural Networks for Relation Extraction

    Python 3 2

  5. bert-multigpu-fp16-tf1 Public

    Speed test of BERT on TF 1.x with multiple GPUs and mixed precision

    Python 3

  6. XOR-gate-from-scratch Public

    Python implementation of simple neural networks for XOR gates.

    Python 1

16 contributions in the last year


Contribution activity

April 2025

Created an issue in hiyouga/LLaMA-Factory that received 2 comments

Loss scales up with respect to gradient accumulation steps.

Reminder I have read the above rules and searched the existing issues. System Info llama-factory latest(same as current main branch) transformer…
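The issue title describes loss growing with the gradient accumulation step count. A common cause (a hypothetical sketch, not LLaMA-Factory code; the loss values and variable names are made up for illustration) is summing per-micro-batch losses for logging without dividing by the number of accumulation steps:

```python
# Hypothetical sketch: reported loss inflates with gradient accumulation
# if per-micro-batch losses are summed instead of averaged.

micro_batch_losses = [0.9, 1.1, 1.0, 1.0]  # assumed per-micro-batch mean losses
accum_steps = len(micro_batch_losses)

# Naive logging: summing over accumulation steps scales the value by accum_steps
summed = sum(micro_batch_losses)   # 4.0 here

# Correct logging: divide by accum_steps so the reported loss matches
# the equivalent single large batch
averaged = summed / accum_steps    # 1.0 here

print(summed, averaged)
```

With this normalization, the logged loss stays comparable regardless of how many micro-batches a step is split into.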
