
Configure audio encoding

Audio quality requirements vary by application scenario. For example, in professional scenarios such as radio stations and singing competitions, users are particularly sensitive to audio quality. Such cases require support for dual-channel (stereo) and high-quality sound, which means setting a high sampling rate and a high bitrate to achieve realistic audio. Voice SDK enables you to configure audio encoding properties to meet these requirements.

This article shows you how to use Voice SDK to configure appropriate audio encoding properties and application scenarios in your game.

Understand the tech

Voice SDK uses default encoding parameters and a default audio scenario that are suitable for most common applications. If the default settings do not meet your needs, refer to the examples in the implementation section to set appropriate audio encoding properties and an application scenario.

Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implementation

This section shows you how to set audio encoding properties and application scenarios for common applications. You use the following APIs to configure audio encoding:

| API | Description |
| --- | --- |
| Initialize(context.audioScenario) | Sets the audio application scenario while initializing an IRtcEngine instance. The default value is AUDIO_SCENARIO_DEFAULT. |
| SetAudioProfile(profile, scenario) | Sets audio encoding properties. Can be called before or after joining a channel. |
| SetAudioScenario | Sets the audio application scenario. Can be called before or after joining a channel. |
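
For example, the scenario in the first row is chosen through the engine context at initialization time. The snippet below is a minimal C++ sketch of that call, using the names from this page (IRtcEngine, Initialize, AUDIO_SCENARIO_DEFAULT); the RtcEngineContext struct, its field names, and the AgoraEngine member are assumptions that may differ in your version of the Agora Unreal SDK.

```cpp
// Minimal sketch: choose the audio application scenario while initializing IRtcEngine.
// RtcEngineContext, its field names, and the AgoraEngine member are assumptions;
// adapt them to the headers shipped with your Agora Unreal SDK version.
void UAudioConfigWidget::InitAgoraEngine()
{
    RtcEngineContext Context;
    // ...fill in the App ID and other fields exactly as in the SDK quickstart...

    // AUDIO_SCENARIO_DEFAULT is what the SDK uses when you set nothing; replace it
    // here if your use case needs a different scenario from the start.
    Context.audioScenario = AUDIO_SCENARIO_DEFAULT;

    // AgoraEngine is a hypothetical IRtcEngine* member created during the quickstart.
    AgoraEngine->Initialize(Context);
}
```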

Set audio encoding properties

This subsection describes how to call SetAudioProfile to set audio encoding properties.

  1. Create UMG

Create a ComboBoxString (dropdown list) widget named CBS_Audio_Profile, which lets you select audio encoding properties as needed. The default option is AUDIO_PROFILE_DEFAULT; the other options are listed in EAUDIO_PROFILE_TYPE, as shown in the image below:

[Image: Configure audio encoding Blueprint]

  2. Bind UI event

Create a Bind Event to On Selection Changed node, connecting the CBS_Audio_Profile widget and the OnSetAudioProfile callback. Selecting different audio encoding properties in the CBS_Audio_Profile dropdown list triggers the OnSetAudioProfile callback, as shown in the image below:

[Image: Configure audio encoding Blueprint]

  3. Implement callback function

Create the OnSetAudioProfile callback function, configuring the following parameters:

| Input parameter | Data type | Description |
| --- | --- | --- |
| SelectedItem | String | The selected audio encoding property option. |
| SelectionType | Enum | The type of selection information. See ESelectInfo. |

When this callback is triggered, the selected audio encoding property is passed to SetAudioProfile as the profile parameter, and the method is executed to complete the setting, as shown in the image below:

[Image: Configure audio encoding Blueprint]
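
Taken together, the three steps above correspond to the following C++ sketch of the Blueprint graph. The UComboBoxString calls (AddOption, SetSelectedOption, OnSelectionChanged.AddDynamic) and the BindWidget binding are standard Unreal Engine API; the UAudioConfigWidget class, the AgoraEngine member, the string-to-enum mapping, and the SetAudioProfile call follow the names used on this page and are assumptions to adapt to your SDK version.

```cpp
// AudioConfigWidget.h (sketch): a C++ equivalent of the Blueprint widget above.
#pragma once

#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Components/ComboBoxString.h"
#include "Types/SlateEnums.h"
// Also include the Agora SDK header from your project that declares IRtcEngine
// and the EAUDIO_PROFILE_TYPE enum.
#include "AudioConfigWidget.generated.h"

UCLASS()
class UAudioConfigWidget : public UUserWidget
{
    GENERATED_BODY()

protected:
    virtual void NativeConstruct() override;

    // Step 1: bound to the ComboBoxString named CBS_Audio_Profile in the UMG Blueprint.
    UPROPERTY(meta = (BindWidget))
    UComboBoxString* CBS_Audio_Profile = nullptr;

    // Step 3: callback with the same signature as On Selection Changed.
    UFUNCTION()
    void OnSetAudioProfile(FString SelectedItem, ESelectInfo::Type SelectionType);

    // Hypothetical IRtcEngine* handle created during the SDK quickstart.
    IRtcEngine* AgoraEngine = nullptr;
};

// AudioConfigWidget.cpp (sketch)
void UAudioConfigWidget::NativeConstruct()
{
    Super::NativeConstruct();

    // Step 1: populate the dropdown with EAUDIO_PROFILE_TYPE option names and preselect the default.
    CBS_Audio_Profile->AddOption(TEXT("AUDIO_PROFILE_DEFAULT"));
    CBS_Audio_Profile->AddOption(TEXT("AUDIO_PROFILE_MUSIC_HIGH_QUALITY"));
    CBS_Audio_Profile->AddOption(TEXT("AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO"));
    CBS_Audio_Profile->SetSelectedOption(TEXT("AUDIO_PROFILE_DEFAULT"));

    // Step 2: bind On Selection Changed to the OnSetAudioProfile callback.
    CBS_Audio_Profile->OnSelectionChanged.AddDynamic(this, &UAudioConfigWidget::OnSetAudioProfile);
}

void UAudioConfigWidget::OnSetAudioProfile(FString SelectedItem, ESelectInfo::Type SelectionType)
{
    // Step 3: map the selected string to the EAUDIO_PROFILE_TYPE value it names
    // (extend the mapping to cover every option you add to the dropdown).
    EAUDIO_PROFILE_TYPE Profile = EAUDIO_PROFILE_TYPE::AUDIO_PROFILE_DEFAULT;
    if (SelectedItem == TEXT("AUDIO_PROFILE_MUSIC_HIGH_QUALITY"))
    {
        Profile = EAUDIO_PROFILE_TYPE::AUDIO_PROFILE_MUSIC_HIGH_QUALITY;
    }
    else if (SelectedItem == TEXT("AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO"))
    {
        Profile = EAUDIO_PROFILE_TYPE::AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO;
    }

    // Pass the selection to SetAudioProfile as the profile parameter. Depending on your
    // SDK version, the scenario argument shown in the table above may be optional or
    // set separately through SetAudioScenario.
    AgoraEngine->SetAudioProfile(Profile);
}
```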

Set up audio application scenarios

This subsection explains how to call SetAudioScenario to set audio application scenarios.

  1. Create UMG

Create a ComboBoxString (dropdown list) widget named CBS_Audio_Scenario, which lets you select an audio application scenario as needed. The default option is AUDIO_SCENARIO_DEFAULT; the other options are listed in EAUDIO_SCENARIO_TYPE, as shown in the image below:

[Image: Configure audio encoding Blueprint]

  2. Bind UI event

Create a Bind Event to On Selection Changed node, connecting the CBS_Audio_Scenario widget and the OnSetAudioScenario callback. Selecting different audio application scenarios in the CBS_Audio_Scenario dropdown list triggers the OnSetAudioScenario callback, as shown in the image below:

[Image: Configure audio encoding Blueprint]

  3. Implement callback function

Create the OnSetAudioScenario callback function, configuring the following parameters:

| Input parameter | Data type | Description |
| --- | --- | --- |
| SelectedItem | String | The selected audio application scenario option. |
| SelectionType | Enum | The type of selection information. See ESelectInfo. |

When this callback is triggered, the selected audio application scenario is passed to SetAudioScenario as the scenario parameter, and the method is executed to complete the setting, as shown in the image below:

[Image: Configure audio encoding Blueprint]
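
A condensed C++ sketch of the scenario dropdown is shown below. It extends the hypothetical UAudioConfigWidget class from the previous sketch, with the same caveats: the AgoraEngine member, the string-to-enum mapping, and the SetAudioScenario spelling follow the names on this page and should be adapted to your SDK version.

```cpp
// Condensed sketch for the CBS_Audio_Scenario dropdown (same hypothetical widget class
// as the profile sketch; CBS_Audio_Scenario is bound with UPROPERTY(meta = (BindWidget))
// and OnSetAudioScenario is declared as a UFUNCTION with the signature below).

void UAudioConfigWidget::SetupScenarioDropdown() // hypothetical helper, call it from NativeConstruct
{
    // Populate the dropdown with EAUDIO_SCENARIO_TYPE option names and preselect the default.
    CBS_Audio_Scenario->AddOption(TEXT("AUDIO_SCENARIO_DEFAULT"));
    CBS_Audio_Scenario->AddOption(TEXT("AUDIO_SCENARIO_GAME_STREAMING"));
    CBS_Audio_Scenario->SetSelectedOption(TEXT("AUDIO_SCENARIO_DEFAULT"));

    // Bind On Selection Changed to the OnSetAudioScenario callback.
    CBS_Audio_Scenario->OnSelectionChanged.AddDynamic(this, &UAudioConfigWidget::OnSetAudioScenario);
}

void UAudioConfigWidget::OnSetAudioScenario(FString SelectedItem, ESelectInfo::Type SelectionType)
{
    // Map the selected string to the EAUDIO_SCENARIO_TYPE value it names.
    EAUDIO_SCENARIO_TYPE Scenario = EAUDIO_SCENARIO_TYPE::AUDIO_SCENARIO_DEFAULT;
    if (SelectedItem == TEXT("AUDIO_SCENARIO_GAME_STREAMING"))
    {
        Scenario = EAUDIO_SCENARIO_TYPE::AUDIO_SCENARIO_GAME_STREAMING;
    }

    // Pass the selection to SetAudioScenario as the scenario parameter.
    AgoraEngine->SetAudioScenario(Scenario);
}
```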

The recommended settings for audio encoding properties and audio application scenarios for common applications are as follows:

| Common applications | Audio encoding properties | Audio application scenario |
| --- | --- | --- |
| 1-on-1 interactive teaching: requires ensuring call quality and smooth transmission. | AUDIO_PROFILE_DEFAULT | AUDIO_SCENARIO_DEFAULT |
| KTV: requires high sound quality and good music and vocal performance. | AUDIO_PROFILE_MUSIC_HIGH_QUALITY | AUDIO_SCENARIO_GAME_STREAMING |
| Voice radio: typically uses professional audio equipment; requires high sound quality and stereo. | AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO | AUDIO_SCENARIO_GAME_STREAMING |
| Music teaching: requires high sound quality and support for transmitting speaker-played sound effects to the remote end. | AUDIO_PROFILE_MUSIC_STANDARD_STEREO | AUDIO_SCENARIO_GAME_STREAMING |
Information

Due to iOS system limitations, some audio routes cannot be recognized in call volume mode. Therefore, if you need to use an external sound card, it is recommended to set the audio application scenario to the high-quality scenario AUDIO_SCENARIO_GAME_STREAMING(3). In this scenario, the SDK switches to media volume to address the issue.
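
As a concrete illustration of the table above, a voice radio application combines the high-quality stereo profile with the game streaming scenario. The sketch below applies that pair under the same assumptions as the earlier snippets: the AgoraEngine member, the ApplyVoiceRadioPreset helper, and the enum and method spellings follow the names used on this page.

```cpp
// Example: apply the recommended settings for a voice radio application
// (high-quality stereo profile + game streaming scenario). Both calls can be made
// before or after joining a channel; adapt the names to your SDK version.
void UAudioConfigWidget::ApplyVoiceRadioPreset() // hypothetical helper
{
    AgoraEngine->SetAudioProfile(EAUDIO_PROFILE_TYPE::AUDIO_PROFILE_MUSIC_HIGH_QUALITY_STEREO);
    AgoraEngine->SetAudioScenario(EAUDIO_SCENARIO_TYPE::AUDIO_SCENARIO_GAME_STREAMING);
}
```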

Reference

This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.

For more audio settings, see Achieve high audio quality.

Frequently asked questions

Sample projects

Agora offers open-source sample projects that demonstrate how to set audio encoding properties and application scenarios for your reference.

API reference

Voice Calling