Senior ML Engineer - LLM Post-training
DFINITY Stiftung
Published: 11 October 2024
Workload: 100%
Contract: Permanent position
Place of work: Zürich, Zürich, Switzerland
Employment Type: 6 Month Contract
We are seeking a highly skilled LLM Training Engineer to join our team, with a focus on developing and refining content for the post-training phases of large language models (LLMs). The ideal candidate will play a crucial role in curating, generating, and managing the high-quality datasets, prompts, and scenarios used to fine-tune and optimize LLM performance after the initial pre-training stage. You will collaborate closely with machine learning engineers, data scientists, and domain experts to ensure that LLMs continue to improve on their target tasks and domains.
Key Responsibilities:
- Content Creation and Curation: Develop, refine, and curate high-quality content (code, text, prompts, dialogues, scenarios) for use in the post-training of large language models optimized for code generation.
- Fine-tuning Data: Design datasets for tasks such as instruction following, question answering, reasoning, and multi-turn dialogue to improve model accuracy, robustness, and generalization (a brief sketch of a typical record format follows this list).
- Post-training Optimization: Collaborate with ML engineers to integrate the curated content into fine-tuning pipelines, ensuring that models are tailored to specific use cases.
- Evaluation and Benchmarking: Create scenarios to evaluate LLM performance and establish benchmarks for targeted improvements in different areas such as comprehension, creativity, factuality, and bias reduction.
- Feedback Loop Development: Set up human-in-the-loop feedback systems for continuous model improvement, and work closely with annotators or subject-matter experts (SMEs) to enhance dataset quality.
- Data Quality Control: Monitor data quality, diversity, and relevance; identify and resolve gaps or biases in the training data.
- Collaboration: Work with cross-functional teams including product managers, data scientists, and engineers to align post-training content with business goals and technical requirements.
- Documentation: Write and maintain clear and detailed documentation on content curation methodologies, dataset specifications, and post-training processes.
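As a rough illustration of the fine-tuning data mentioned above, a single supervised fine-tuning record might be stored in an instruction/input/output JSON Lines layout similar to the sketch below; the field names, file name, and schema are illustrative assumptions, not a fixed specification.

```python
import json

# One supervised fine-tuning record for a code-focused assistant,
# in a common instruction/input/output layout (illustrative schema).
record = {
    "instruction": "Explain what the following Python function does.",
    "input": "def square(x):\n    return x * x",
    "output": "The function returns the square of its numeric argument x.",
    "metadata": {"task": "code-explanation", "source": "curated"},
}

# Append the record to a JSON Lines dataset file (hypothetical file name).
with open("sft_examples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```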
Required Qualifications:
- Educational Background: Bachelor’s or Master’s degree in Computer Science, Data Science, Computational Linguistics, or a related field.
- Experience: 3+ years of experience working with large language models or machine learning frameworks, with a focus on post-training or fine-tuning.
- Programming Skills: Proficiency in Python and experience with ML frameworks such as PyTorch or TensorFlow.
- Data Handling: Experience in curating, cleaning, and managing large datasets for machine learning applications.
- Collaborative Mindset: Strong communication skills and the ability to work effectively in a cross-functional team.
- Problem Solving: Strong analytical and problem-solving abilities with a focus on improving model performance through targeted post-training interventions.
Preferred Qualifications:
- Experience working with LLM architectures such as GPT, BERT, T5, or similar.
- Knowledge of reinforcement learning from human feedback (RLHF).
- Knowledge of parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA); a brief sketch follows this list.
- Familiarity with tools for annotation, labeling, and dataset management.
- Understanding of bias and fairness issues in AI, and experience in mitigating these through training.
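As a rough illustration of the Low-Rank Adaptation (LoRA) technique referenced above, the minimal sketch below attaches LoRA adapters to a causal language model using the Hugging Face peft library; the base model, target modules, and hyperparameters are placeholder assumptions chosen purely for illustration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder base model used only for illustration.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA injects small trainable low-rank matrices into selected layers
# while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2 attention projection; model-specific
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable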