Senior Speech Scientist

Austin, Texas, USA
Rev
Full-time · NLP · ASR

We are looking for an experienced speech scientist to join our team at Rev. You should be comfortable and well-versed in building modern ASR and NLP solutions and up to date with the latest developments in industry and academia. You enjoy working with current machine/deep learning technologies and putting recent research findings into practice.

Responsibilities:

As a Senior Speech Scientist, you will 

  • Work with a team of engineers and researchers to improve and innovate on the existing ASR and NLP infrastructure
  • Help develop new (speech) translation and generative audio pipelines
  • Evaluate and benchmark existing ASR and NLP models
  • Experiment and discover creative solutions to difficult problems
  • Expand and prototype novel ASR and NLP solutions: improve word accuracy, distinguish and leverage speaker characteristics, and dynamically fine-tune models for speech in different acoustic environments
  • Develop new approaches and product features
  • Automate and integrate workflows across diverse systems
  • Interact daily with other teams at Rev working towards a shared goal

Qualifications:

  • University degree in Computer Science, Software Engineering, or a related field
  • 3+ years of experience supporting and working on production ML systems (training models and tuning existing systems)
  • Fluency in Python, C++, and shell scripting, and comfort working in Linux
  • Broad mastery of ASR or NLP techniques such as neural net architectures (Transformer / LSTM / CTC / Transducer), acoustic and language models, and decoding
  • Experience with deep learning frameworks (such as TensorFlow or PyTorch) and with training large models
  • Excellent oral and written communication skills
  • Comfortable working with remote teams as a proactive team member

Nice-to-have knowledge of:

  • Large Language Models (LLMs), especially training, fine-tuning, and inference
  • Efficient training techniques
  • Monitoring production model performance
  • Different optimizers for model training
  • Fine-tuning and knowledge distillation
  • Data forensics and conditioning
  • ASR techniques for low-resource languages