
Custom audio source

By default, Voice SDK uses the standard audio module on the device your app runs on. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:

  • Your app has its own audio module.

  • You want to use a non-microphone source, such as recorded audio data.

  • You need to process the captured audio with a pre-processing library for audio enhancement.

  • You need flexible device resource allocation to avoid conflicts with other services.

Understand the tech

To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Voice SDK that are specific to your custom source. Voice SDK enables you to push processed audio data to subscribers in a channel.

The following figure shows the workflow you need to implement to stream a custom audio source in your app.

Figure: Process custom audio

Prerequisites

To test the code used on this page, you need to have:

  • An Agora account and project.

  • A computer with Internet access.

    Ensure that no firewall is blocking your network communication.

Integrate custom audio or video

To stream from a custom source, you convert the data stream into a suitable format and push this data using Voice SDK.
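For example, if your audio does not come from a microphone, you first need a MediaStreamTrack that carries it. The following is a minimal sketch that routes a Web Audio graph into a track the SDK accepts; the oscillator stands in for your own audio pipeline, and createAudioTrackFromWebAudio is a hypothetical helper name, not part of the reference app.

import AgoraRTC, { ILocalAudioTrack } from "agora-rtc-sdk-ng";

// Hypothetical helper: route a Web Audio graph into a MediaStreamTrack
// and wrap it in a local audio track that you can publish to a channel.
const createAudioTrackFromWebAudio = (): ILocalAudioTrack => {
  const audioContext = new AudioContext();
  const destination = audioContext.createMediaStreamDestination();

  // Example source: a 440 Hz tone. Replace this with your own audio module,
  // decoded file data, or a pre-processing pipeline.
  const oscillator = audioContext.createOscillator();
  oscillator.frequency.value = 440;
  oscillator.connect(destination);
  oscillator.start();

  return AgoraRTC.createCustomAudioTrack({
    mediaStreamTrack: destination.stream.getAudioTracks()[0],
  });
};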

Implement a custom audio source

To push audio from a custom source to a channel, take the following steps:

Add the required imports

import React, { useEffect, useState } from "react";
import { AgoraRTCProvider, useRTCClient, useConnectionState } from "agora-rtc-react";
import AgoraRTC, { ILocalAudioTrack } from "agora-rtc-sdk-ng";
import AuthenticationWorkflowManager from "../authentication-workflow/authenticationWorkflowManager";
import { useAgoraContext } from "../agora-manager/agoraManager";
import config from "../agora-manager/config";

Add the required variables

const [customAudioTrack, setCustomAudioTrack] = useState<ILocalAudioTrack | null>(null);
const connectionState = useConnectionState();
const [customMediaState, enableCustomMedia] = useState(false);

Create a custom source audio track

// Create a custom audio track using the user's media devices
const createCustomAudioTrack = () => {
  navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then((stream) => {
      const audioMediaStreamTracks = stream.getAudioTracks();
      // For demonstration purposes, this uses the default audio device.
      // In a real scenario, use a dropdown to let the user select an audio device.
      setCustomAudioTrack(
        AgoraRTC.createCustomAudioTrack({ mediaStreamTrack: audioMediaStreamTracks[0] })
      );
    })
    .catch((error) => console.error(error));
};

useEffect(() => {
  if (connectionState === "CONNECTED") {
    createCustomAudioTrack();
  }
}, [connectionState]);
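This snippet always captures from the default device. To let users pick a device, one approach is to enumerate the audio inputs and pass the chosen deviceId as a constraint. The following sketch uses the standard navigator.mediaDevices API; the helper names listAudioInputs and createCustomAudioTrackFromDevice are illustrative, not part of the reference app.

// List the available audio input devices, for example to populate a dropdown
const listAudioInputs = async (): Promise<MediaDeviceInfo[]> => {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((device) => device.kind === "audioinput");
};

// Create a custom audio track from a specific input device
const createCustomAudioTrackFromDevice = async (deviceId: string) => {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: deviceId } },
  });
  return AgoraRTC.createCustomAudioTrack({
    mediaStreamTrack: stream.getAudioTracks()[0],
  });
};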

Play and publish the custom audio track

const CustomAudioComponent: React.FC<{ customAudioTrack: ILocalAudioTrack | null }> = ({ customAudioTrack }) => {
  const agoraContext = useAgoraContext();

  useEffect(() => {
    if (customAudioTrack && agoraContext.localMicrophoneTrack) {
      agoraContext.localMicrophoneTrack.stop(); // Stop the default microphone track
      customAudioTrack.play(); // Play the custom audio track for the local user
    }
    return () => {
      customAudioTrack?.stop(); // Stop the custom audio track when the component unmounts
    };
  }, [customAudioTrack, agoraContext.localMicrophoneTrack]);

  return <></>;
};
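The component above only swaps local playback from the default microphone track to the custom track. For remote users to hear the custom audio, the track also needs to be published to the channel. The following is a minimal sketch, assuming useRTCClient returns the client that joined the channel; usePublishCustomAudio is a hypothetical hook name, and the reference app may handle publishing elsewhere.

import { useEffect } from "react";
import { useRTCClient } from "agora-rtc-react";
import { ILocalAudioTrack, IMicrophoneAudioTrack } from "agora-rtc-sdk-ng";

// Hypothetical hook: unpublish the default microphone track and publish the custom track
const usePublishCustomAudio = (
  customAudioTrack: ILocalAudioTrack | null,
  microphoneTrack: IMicrophoneAudioTrack | null
) => {
  const client = useRTCClient();

  useEffect(() => {
    if (!customAudioTrack) return;

    const swapTracks = async () => {
      if (microphoneTrack) {
        await client.unpublish(microphoneTrack); // Stop sending the default microphone audio
      }
      await client.publish(customAudioTrack); // Send the custom audio to remote users
    };
    swapTracks().catch(console.error);

    return () => {
      // Stop sending the custom audio when the component unmounts
      client.unpublish(customAudioTrack).catch(console.error);
    };
  }, [customAudioTrack, microphoneTrack, client]);
};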

Test custom streams

To ensure that you have implemented streaming from a custom source into your app:

  1. Load the web demo

    1. Generate a temporary token in Agora Console

    2. In your browser, navigate to the Agora web demo and update App ID, Channel, and Token with the values for your temporary token, then click Join.

  2. Clone the documentation reference app

  3. Configure the project

    1. Open the file <samples-root>/src/agora-manager/config.json. A sample of the completed file is shown after this list.

    2. Set appId to the App ID of your project.

    3. Choose one of the following authentication methods:

      • Temporary token
        1. Generate an RTC token using your uid and channelName and set rtcToken to this value in config.json.
        2. Set channelName to the name of the channel you used to create the rtcToken.
      • Authentication server
        1. Set up an authentication server
        2. In config.json, set:
          • channelName to the name of a channel you want to join.
          • token and rtcToken to empty strings.
          • serverUrl to the base URL for your token server. For example: https://agora-token-service-production-yay.up.railway.app.
  4. Run the reference app

    In Terminal, run the following command:

    yarn dev

    If this is the first time you run the project, grant microphone access to the app.

  5. Choose this sample in the reference app

    In Choose a sample code, select Custom video and audio source.

  6. Join a channel

    Press Join to connect to the same channel as your web demo.

  7. Test the custom audio and video source

    Click Enable Media Customization. The audio and video from the customized sources are played and published in the channel.
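For reference, a completed config.json for the temporary token option might look like the following sketch. The values are placeholders, and the actual file in the reference app may contain additional keys.

{
  "appId": "your-app-id",
  "channelName": "demo-channel",
  "rtcToken": "your-temporary-rtc-token",
  "token": "",
  "serverUrl": ""
}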

Reference

This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.

Voice Calling