Advert

Real-time 3D British Sign Language avatar generation from multimodal inputs

  • Deadline: 05/04/2026
  • South West, All England

Description

Deaf and hard-of-hearing users face persistent barriers in public transport, healthcare and customer-facing services, where interactions are time-sensitive and access to qualified interpreters is limited. Silence Speaks is developing AI sign-language avatars that translate text and voice into sign-language video. This PhD will develop the next-generation pipeline: real-time 3D British Sign Language (BSL) avatar generation from multimodal inputs (speech/audio, English text and BSL gloss), enabling scalable deployment in high-impact settings.

The research challenge is to produce faithful, temporally coherent signing with realistic 3D hand, face and body motion under strict latency constraints. Building on an existing 2D pipeline, the PhD will investigate a modular end-to-end approach:

  • multimodal conditioning to align speech/text to BSL gloss with controllable timing;
  • 3D motion synthesis for full-body and fine-grained hand articulation using diffusion-based generative models and neural 3D representations (e.g., NeRF, 3D Gaussian splatting and/or mesh+rig pipelines), with constraints for sign correctness and temporal coherence;
  • real-time 3D avatar rendering with occlusion-aware post-processing, texture/colour consistency and artifact removal; and
  • systems optimisation for interactive deployment, including distillation/quantisation and memory-aware PyTorch implementations.

You will work jointly with the University of Bath (Computer Science) and Silence Speaks, with access to engineering support, domain experts, and Deaf-community annotation and evaluation to establish ground truth for BSL signing. The project is expected to produce publishable research suitable for leading machine learning, computer vision and computer graphics venues, alongside translational outputs supporting real-world accessibility deployments.

Entry Requirements

Applicants should hold, or expect to receive, a First Class or good Upper Second Class UK Honours degree (or the equivalent) in Computer Science, Machine Learning or a related discipline. Strong Python and PyTorch skills are essential, including GPU/memory profiling and optimisation. Experience with 3D rendering pipelines and motion/pose modelling is required. Experience with diffusion models is strongly desirable. Familiarity with neural 3D representations (e.g., NeRF and/or 3D Gaussian splatting) and deployment-oriented optimisation (distillation/quantisation) is advantageous.

Fees

Candidates may be considered for a University of Bath studentship tenable for 3 years. Funding covers tuition fees, a stipend (£20,780 per annum in 2025/26) and access to a training support budget.

How To Apply

Informal enquiries are encouraged and should be directed to Dr Deblina Bhattacharjee.

Formal applications should be submitted via the University of Bath’s online application form for a PhD in Computer Science: https://samis.bath.ac.uk/urd/sits.urd/run/siw_ipp_lgn.login?process=siw_ipp_app&code1=RDUCM-FP01&code2=0020

IMPORTANT:

When completing the application form:

1. In the Funding your studies section, select ‘University of Bath URSA’ as the studentship for which you are applying.

2. In the Your PhD project section, quote the project title of this project and the name of the lead supervisor in the appropriate boxes.

Failure to complete these two steps will cause delays in processing your application and may cause you to miss the deadline.

More information about applying for a PhD at Bath may be found on our website. 

PLEASE BE AWARE: Applications for this project may close earlier than the advertised deadline if a suitable candidate is found. We therefore recommend that you contact the lead supervisor prior to applying and submit your formal application as early as possible.
