Role Overview
We are seeking an experienced Senior Machine Learning Engineer to lead the development and deployment of scalable, production-grade generative AI systems. This role is central to our AI transformation initiative, bridging the gap between cutting-edge research and the robust, high-performance infrastructure that delivers tangible business impact. You will be instrumental in building the solutions that operationalize AI at enterprise scale.
Key Responsibilities
- Design, build, and deploy production-ready generative AI services and the underlying infrastructure, ensuring scalability, reliability, and performance.
- Develop and maintain a comprehensive MLOps platform, including automated CI/CD pipelines for model training, deployment, monitoring, and governance.
- Build and manage large-scale distributed systems, from data ingestion and processing layers to model serving and orchestration frameworks.
- Collaborate closely with cross-functional teams of ML scientists, data engineers, and software engineers to integrate complex ML models into our production environments.
- Contribute to the strategic development of an enterprise-wide AI platform by creating reusable tooling, scalable workflows, and comprehensive documentation.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, a related quantitative field, or equivalent practical experience.
- 5+ years of professional experience in machine learning engineering, software engineering, or data engineering, with a proven track record of deploying models into live production environments.
- Hands-on experience with NLP and/or Large Language Model (LLM) projects, including fine-tuning, evaluation, and deployment.
- Strong proficiency in MLOps principles and tools, workflow orchestration (e.g., Airflow, Kubeflow), and model lifecycle management.
- Expertise with cloud infrastructure (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and CI/CD pipelines.
Nice-to-Have Qualifications
- Master’s or PhD in a relevant field.
- Experience with big data frameworks such as Spark or Dask.
- Familiarity with both relational (e.g., PostgreSQL) and non-relational (e.g., MongoDB, vector databases) data stores.
- Contributions to open-source ML or data infrastructure projects.