LLM-Powered LinguaMed Research Team

2025-09-19: LLM Research Meeting

Participants: All research team members

This session covered fine-tuning techniques for downstream tasks, exploring the application side of large language models. We discussed the pre-training and fine-tuning mechanisms of BERT- and GPT-style architectures, as well as strategies for improving language-understanding performance.
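To make the pre-training/fine-tuning distinction concrete, below is a minimal sketch (not material from the meeting itself) of fine-tuning a pre-trained BERT encoder on a toy binary classification task using the Hugging Face transformers library; the model name, example sentences, labels, and hyperparameters are illustrative assumptions only.

```python
# A minimal sketch of fine-tuning a pre-trained BERT encoder for a
# hypothetical binary classification task (Hugging Face transformers).
# The texts, labels, and hyperparameters below are placeholders.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh classification head is stacked on the pre-trained encoder;
# its weights are randomly initialized and learned during fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["the trial results were encouraging", "the model failed to converge"]
labels = torch.tensor([1, 0])  # hypothetical downstream-task labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Fine-tuning updates all parameters (encoder + head) with a small
# learning rate, so the pre-trained representations shift only gently
# toward the downstream task.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # loss computed internally
    outputs.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {outputs.loss.item():.4f}")
```

Note the learning rate: following the BERT paper's recipe, fine-tuning typically uses a rate in the 2e-5 to 5e-5 range, orders of magnitude smaller than pre-training, precisely so the task-specific head adapts without overwriting what pre-training learned.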

Readings
  1. Korean Embeddings
  2. Hello, Transformer
  3. Attention Is All You Need
  4. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  5. Improving Language Understanding by Generative Pre-Training