
MC1230459 - Microsoft Teams: Voice tethering

Message Center

Metadata at Feb 10, 2026

Published

Feb 10, 2026

Service

Microsoft Teams

Tag

New feature
User impact
Admin impact

Platforms

Desktop
Mac
Web

Metadata changes

Tags
Admin impact, New feature, Updated message, User impact (previously Admin impact, New feature, User impact)
End date
Jun 29, 2026 (previously May 15, 2026)

Body changes


Updated April 29, 2026: We have updated the timeline. Thank you for your patience. 

Introduction

Voice tethering builds on the recent introduction of Sign Language Mode in Microsoft Teams. When a sign language interpreter voices on behalf of a Deaf or hard‑of‑hearing (D/HH) participant, Teams will now attribute captions, transcripts, and meeting intelligence—such as Copilot notes, summaries, action items, and insights—to the D/HH participant rather than the interpreter. This update ensures accurate representation in meetings, clarifies who is contributing to the conversation, and improves downstream meeting accuracy and accountability.

This message is associated with Microsoft 365 Roadmap ID 553223.

When this will happen

  • Targeted Release (Worldwide): We will begin rolling out in mid-March 2026 and expect to complete by late March 2026.
  • General Availability (Worldwide, GCC): We will begin rolling out in mid-May 2026 (previously early April) and expect to complete by late May 2026 (previously mid-April).

How this affects your organization

Who is affected

  • Organizations with meeting participants who are Deaf or hard‑of‑hearing and use sign language interpreters in Teams meetings.
  • Any users who participate in meetings with Sign Language Mode enabled.

What will happen

  • Voice contributions made by interpreters will be attributed to the D/HH participant across:
    • Live captions
    • Meeting transcripts
    • Copilot notes, summaries, action items, and insights
    • Other Teams meeting intelligence
  • Meeting data becomes more accurate because the correct participant is represented.
  • Interpreter identity is no longer conflated with the signer they support.
  • Sign Language Mode is already available to all users; voice tethering enhances it automatically.
  • The feature is on by default when Sign Language Mode and interpreter assignment are used.
  • No admin controls are required to enable or manage the feature.

What you can do to prepare

No action is required.

Optional preparation steps:

  • Inform D/HH users and interpreters that speech attribution in meetings will change.
  • Update internal training or accessibility resources if you document interpreter workflows.
  • Notify helpdesk or support teams that captioning and transcript attribution will appear differently for interpreted meetings.

Compliance considerations

No compliance considerations identified. Review as appropriate for your organization.