Data Engineer, Speech Synthesis

Santa Clara, CA, USA
NVIDIA
Full-time · MLOps · TTS

Widely considered to be one of the technology world’s most desirable employers, NVIDIA is an industry leader with groundbreaking developments in High-Performance Computing, Artificial Intelligence and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, autonomous cars and conversational AI that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company, and build our teams with the smartest people in the world. Join us at the forefront of technological advancement.

NVIDIA is looking for Speech Data Engineers to develop Riva, our high-impact, high-visibility Speech AI product, and improve the experience of millions of customers. If you're creative and passionate about solving real-world conversational AI problems, come join our Riva product engineering team. For more details on Riva, see https://developer.nvidia.com/riva

What you’ll be doing:

  • Build speech training data sets for text-to-speech (TTS) systems

  • Develop weighted finite-state transducer (WFST) and neural network-based text normalization and inverse text normalization (see the illustrative sketch after this list)

  • Use and extend internal MLOps tooling to scale model development and automate cross-team work

  • Apply data science techniques to characterize model performance and quality metrics across platforms for various speech AI components and identify areas for improvement

  • Collaborate with other engineers and scientists in data collection and development efforts

  • Write clear and concise documents (e.g. analysis reports)

  • Collaborate with various teams on new product features and improvements of existing products

  • Participate in developing and reviewing code, design documents, use case reviews, and test plan reviews

  • Help innovate, identify problems, recommend solutions and perform triage in a collaborative team environment
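
To give a concrete flavor of the text normalization work above, here is a deliberately tiny rule-based inverse text normalization (ITN) sketch in Python. It is an illustration only, not Riva's implementation: production ITN is typically expressed as WFST grammars or neural models, and the word list and function name below are assumptions.

    # Toy illustration only: a tiny rule-based inverse text normalization (ITN) pass.
    # Production systems typically compile rules like these into WFST grammars;
    # the digit vocabulary and function name here are assumptions for illustration.
    _UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
              "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

    def inverse_normalize(text: str) -> str:
        """Rewrite spelled-out single digits as numerals, e.g. 'four two seven' -> '4 2 7'."""
        tokens = text.split()
        out = [str(_UNITS[t]) if t in _UNITS else t for t in tokens]
        return " ".join(out)

    if __name__ == "__main__":
        print(inverse_normalize("my extension is four two seven"))  # -> "my extension is 4 2 7"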

What we need to see:

  • Master’s degree (or equivalent experience) or PhD in Computer Science, Electrical Engineering, Artificial Intelligence, or Applied Math

  • 3+ years of experience

  • Knowledge of scripting languages (e.g. Python, bash)

  • Knowledge of phonetics/phonology and ability to analyze/validate phonetic transcriptions

  • Experience with WFSTs

  • Background with PyTorch

  • Experience with building ASR, NLP, and speech synthesis models

  • Excellent written and spoken communication skills

  • Experience with MLOps workflows, including traceability and versioning of datasets (see the sketch after this list)

  • Understanding of the MLOps life cycle

  • General background with version control and code review tools such as Git and Gerrit

  • Strong collaboration and interpersonal skills, including a proven ability to guide and influence others effectively within a dynamic, matrixed environment
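
As a concrete illustration of the dataset traceability and versioning mentioned above, the following is a minimal Python sketch that fingerprints a TTS training manifest so a model run can be traced back to the exact data it used. This is not NVIDIA's internal MLOps tooling; the manifest layout and field names are assumptions.

    # Minimal sketch, assuming a JSON-lines manifest with one {"audio": ..., "text": ...}
    # record per line; not NVIDIA tooling, field names are assumptions.
    import hashlib
    import json
    from pathlib import Path

    def dataset_fingerprint(manifest_path: str) -> str:
        """Return a stable SHA-256 over the manifest entries and each audio file's content."""
        digest = hashlib.sha256()
        with open(manifest_path, "r", encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                audio_bytes = Path(record["audio"]).read_bytes()
                digest.update(hashlib.sha256(audio_bytes).digest())
                digest.update(record["text"].encode("utf-8"))
        return digest.hexdigest()

The resulting hash can be logged alongside model checkpoints and metrics so every trained model points back to a specific data snapshot.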

Ways to stand out from the crowd:

  • Master’s in Computational Linguistics (or equivalent field with computational emphasis); alternatively, 2 years of experience in the field

  • Native or near-native fluency in a non-English language, such as Spanish, Mandarin, German, Japanese, Russian, French, UK English, Arabic, Hindi, Korean, Italian, or Portuguese

  • Hands-on experience with speech technologies such as automatic speech recognition, speech command detection, and text-to-speech

  • Experience in writing grammars and building FSTs

  • Strong personal interest in learning, researching, and creating new technologies related to foreign languages, linguistics, phonetics, phonology and language technology

  • Comfort and motivation working in a fast-paced, highly collaborative, dynamic work environment

  • Strong C++ programming skills

  • Background with Docker and Kubernetes

  • Background with deploying machine learning models on data center, cloud, and embedded systems