You are looking at Interactive Live Streaming v3.x Docs. The newest version is  Interactive Live Streaming 4.x

Custom Audio Source and Renderer

Introduction

Generally, Agora SDKs use default audio modules for capturing and rendering in real-time communications.

However, these default modules might not meet your development requirements, such as in the following scenarios:

  • Your app has its own audio module.
  • You need to process the captured audio with a pre-processing library.
  • You need flexible device resource allocation to avoid conflicts with other services.

Agora provides a solution to enable a custom audio source and/or renderer in the above scenarios. This article describes how to do so using the Agora Native SDK.

Sample project

Agora provides an open-source demo project on GitHub. You can view the source code there or download the project to try it out.

Implementation

Before proceeding, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Voice Call or Start Interactive Live Audio Streaming.

Custom audio source

Refer to the following steps to implement a custom audio source in your project:

  1. Before calling joinChannel, call setExternalAudioSource to specify the custom audio source.
  2. Implement audio capture and processing yourself using methods from outside the SDK.
  3. Call pushExternalAudioFrame to send the audio frames to the SDK for later use.

API call sequence

Refer to the following diagram to implement the custom audio source:

(Diagram: API call sequence for the custom audio source)

Audio data transfer

The following diagram shows how the audio data is transferred when you customize the audio source:

(Diagram: audio data transfer with a custom audio source)

  • You need to implement the capture module yourself using methods from outside the SDK.
  • Call pushExternalAudioFrame to send the captured audio frames to the SDK.

Code samples

Refer to the following code samples to implement the custom audio source in your project.

  1. Before the local user joins the channel, specify the custom audio source.

```java
// Specifies the custom audio source
engine.setExternalAudioSource(true, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_COUNT);
// The local user joins the channel
int res = engine.joinChannel(accessToken, channelId, "Extra Optional Data", 0);
```

  2. Implement your own audio capture module. After the local user joins the channel, start the capture module to capture audio frames from the custom audio source.

```java
public class RecordThread extends Thread
{
    public static final int DEFAULT_SAMPLE_RATE = 16000;
    public static final int DEFAULT_CHANNEL_COUNT = 1, DEFAULT_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;

    private AudioRecord audioRecord;
    private byte[] buffer;
    // Set to true to stop capturing
    private volatile boolean stopped = false;

    RecordThread()
    {
        int bufferSize = AudioRecord.getMinBufferSize(DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG,
                AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        buffer = new byte[bufferSize];
    }

    // Starts audio capture. Reads and sends the captured frames until audio capture stops.
    @Override
    public void run()
    {
        try
        {
            // Starts audio recording
            audioRecord.startRecording();
            while (!stopped)
            {
                // Reads the captured audio frames
                int result = audioRecord.read(buffer, 0, buffer.length);
                if (result >= 0)
                {
                    // Sends the captured audio frames to the SDK
                    CustomAudioSource.engine.pushExternalAudioFrame(
                            buffer, System.currentTimeMillis());
                }
                else
                {
                    logRecordError(result);
                }
                Log.e(TAG, "Data size:" + result);
            }
            release();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    ...
}
```
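The `stopped` flag above is the only control surface the capture thread exposes, and how it is toggled is elided in the sample. The following is a minimal, hedged sketch of that start/stop pattern in plain Java (no Android or Agora dependencies; `CaptureController` and `step` are illustrative names, not part of the SDK):

```java
// Minimal start/stop pattern for a capture loop, mirroring the
// volatile "stopped" flag used by RecordThread above.
public class CaptureController {
    private volatile boolean stopped = false;
    private Thread worker;

    // Runs one capture step repeatedly until stop() is called.
    public void start(Runnable step) {
        stopped = false;
        worker = new Thread(() -> {
            while (!stopped) {
                step.run();
            }
        });
        worker.start();
    }

    // Signals the loop to exit and waits for the worker thread to finish.
    public void stop() {
        stopped = true;
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

In an app, you would typically start the capture loop once the local user has joined the channel (for example, in the `onJoinChannelSuccess` callback) and stop it before calling `leaveChannel`.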

API reference

  • setExternalAudioSource
  • pushExternalAudioFrame

Custom audio renderer

Refer to the following steps to implement a custom audio renderer in your project:

  1. Before calling joinChannel, call setExternalAudioSink to enable and configure the external audio renderer.
  2. After joining the channel, call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.
  3. Use your own audio renderer to process the audio data, then play the rendered data.

API call sequence

Refer to the following diagram to implement the custom audio renderer in your project:

(Diagram: API call sequence for the custom audio renderer)

Audio data transfer

The following diagram shows how the audio data is transferred when you customize the audio renderer:

(Diagram: audio data transfer with a custom audio renderer)

  • You need to implement the rendering module yourself using methods from outside the SDK.
  • Call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.

Code samples

Refer to the following code samples to implement the custom audio renderer in your project:


```java
// Enables the custom audio renderer
rtcEngine.setExternalAudioSink(
    true,  // Enables external audio rendering
    44100, // Sampling rate (Hz). You can set this value as 8000, 16000, 32000, 44100, or 48000
    1      // The number of channels of the external audio sink. This value must not exceed 2
);

// Retrieves remote audio frames for playback
rtcEngine.pullPlaybackAudioFrame(
    data,         // The data type is byte[]
    lengthInByte  // The size of the audio data (byte)
);
```
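The size you pass as `lengthInByte` should match the amount of 16-bit PCM data your renderer consumes per pull: sample rate × number of channels × 2 bytes per sample × frame duration. As a hedged sketch (the helper name `frameSizeInBytes` is illustrative, not an SDK API):

```java
// Computes the byte size of a 16-bit PCM audio frame.
// samplesPerFrame = sampleRate * durationMs / 1000; each sample
// occupies 2 bytes (PCM16) per channel.
public class AudioFrameMath {
    public static int frameSizeInBytes(int sampleRate, int channels, int durationMs) {
        int samplesPerFrame = sampleRate * durationMs / 1000;
        return samplesPerFrame * channels * 2;
    }
}
```

For example, at 44100 Hz mono, a 10 ms frame is 882 bytes, so `data` would be a `byte[882]`.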

API reference

  • setExternalAudioSink
  • pullPlaybackAudioFrame

Considerations

Performing the following operations requires you to use methods from outside the Agora SDK:

  • Manage the capture and processing of audio frames when using a custom audio source.
  • Manage the processing and playback of audio frames when using a custom audio renderer.
