We are looking for a passionate junior Speech Enhancement and Denoising Software Engineer to develop advanced deep learning models for next-generation audio and speech systems on hearables and wearables. You will participate in the full lifecycle of hybrid DSP + ML solutions — from research and prototyping through optimization and deployment on embedded hardware, to customer support — while collaborating with cross-functional teams in AI, DSP, acoustics, hardware, and systems.
Requirements
- ML for Audio: Strong academic record and project experience developing ML models for audio applications (e.g., speech enhancement, separation, or classification).
- Data Handling: Familiarity with audio dataset curation, augmentation strategies, and defining robust evaluation metrics.
- ML Systems: Basic understanding of ML pipelines (training, validation, deployment), with an emphasis on code reproducibility.
- Deep Learning: Good knowledge of deep learning architectures (CNNs, RNNs, Transformers, diffusion models, generative approaches) applied to acoustic data.
- Tools: Proficiency in Python or MATLAB and ML frameworks (PyTorch, TensorFlow).
- Edge Optimization: Project experience optimizing models (quantization, pruning) for resource-constrained embedded platforms.
- Product Concepts: Conceptual understanding of integration challenges in real-time systems (latency, power, robustness).
- Research Output: Evidence of research contributions, such as academic publications or innovative thesis work in audio ML or acoustics.
- Continuous Learning: Demonstrated enthusiasm for staying current with the latest research in audio ML.
- Soft Skills: Excellent communication skills and a can-do mindset.