Senior ML Ops Engineer
At Nintex, we are transforming the way people work, everywhere.
Nintex is the global standard for process intelligence and automation. Today more than 10,000 public and private sector organizations across 90 countries turn to the Nintex Process Platform to accelerate progress on their digital transformation journeys by quickly and easily managing, automating and optimizing business processes. We improve their lives through the technology we build.
Nintex engineers are building more than just software and know the impact one line of code can make. We are the experts that build the industry’s most complete process and automation platform to transform the way people work. If you’re interested, curious and want to learn and do more, the sky is the limit here. We take a solutions-oriented and collaborative approach and we don’t wait to create and carry out opportunities for innovation in our business – and our products. Our work makes the hard stuff appear easy to anyone with clicks-and-not-code process automation.
We are committed to fostering a workplace that supports amazing people in doing their very best work every day. Collaboration is constant, our workplace is fun, the environment is fast-paced and we value our people’s curiosity, ideas and enthusiasm. We deliver on our commitments, we don't wait to implement ideas or fix issues, and we treat each other with respect and consideration.
About the role
An MLOps Engineer will join our team developing Nintex's AI features. In this role, you will play a crucial part in bridging the gap between machine learning (ML) development and operations, ensuring the seamless integration of ML models into our operational systems while also handling standard DevOps functions. You will be responsible for deploying, monitoring, and maintaining machine learning models so they remain scalable, reliable, and secure, and for managing infrastructure, automation, and security in alignment with DevOps best practices.
About the Team
You will join a team tasked with integrating advanced AI capabilities, including generative AI and various other AI features, into our products. Collaborating closely with our data science team, you will be instrumental in developing innovative AI-driven features. Your team's primary objective will be to ensure the seamless integration of these AI technologies into our existing product ecosystem. Additionally, your responsibilities will extend to overseeing these services in production, ensuring their continuous functionality. By working on cutting-edge AI features and fostering their smooth incorporation into our products, you will play a vital role in shaping the success of these offerings.
Your contribution will be
- Model Deployment and Optimization: Deploy machine learning models into production environments, focusing on scalability, reliability, high throughput, and low latency. Continuously optimize model inference and prediction speed.
- Infrastructure Management: Design and maintain the infrastructure for model deployment, utilizing cloud-based platforms (e.g., AWS, GCP, Azure) and container orchestration (e.g., Kubernetes) for efficient, scalable solutions.
- Automation: Develop automated pipelines for model training, testing, deployment, and monitoring. Create Continuous Integration and Continuous Deployment (CI/CD) pipelines for efficient model deployment and updates.
- Collaboration with Data Science Teams: Collaborate closely with data science teams to understand their processes and optimize machine learning workflows. Provide technical expertise to enhance the efficiency of data science processes.
- Data Pipeline Collaboration: Work with data engineers to establish robust data pipelines for model training and inference. Ensure seamless integration between data engineering and machine learning components.
- Monitoring and Maintenance: Establish monitoring and logging systems to track model performance and system health. Address issues promptly to maintain optimal performance and reliability.
- Security and Compliance: Ensure data security, privacy, and regulatory compliance throughout the machine learning lifecycle. Implement best practices for model and data security.
To be successful we think you need
- Tertiary qualifications in a relevant discipline
- At least 5 years of experience as an MLOps Engineer, with a proven track record of deploying and managing machine learning models in production.
- Experience with Amazon SageMaker, Google Vertex AI, or similar platforms.
- Proficiency in machine learning frameworks such as TensorFlow and PyTorch.
- Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
- Coding proficiency in the Python programming language.
- Production experience with managing data pipelines.
What’s in it for you?
Nintex employees have the freedom to work how they work best. We are virtual-first across our global workforce. Our people work in the way that best suits them and their teams - whether at home, in an office, or another place that sparks creativity, focus and collaboration. Our work environment is such that our people can successfully deliver their work while adequately supporting their lifestyle and preferences.
While our offerings differ from country to country, we offer our entire global workforce an array of exciting perks and benefits, including:
- Global Gratitude and Recharge Days
- Mindfulness and counseling resources
- Invention/patenting assistance
- Meaningful recognition
- Community impact opportunities
- Multiple tools through which to learn and grow, and an incredible global community
Equity Statement: Preference will be given to People Living with Disability who are members of the designated groups in line with the Employment Equity Plan and Targets of the Company.