Class: Media

new Media()

WebRTC Media class

Properties:

application (Application)
    The parent Application object.

parentConversation (Conversation)
    The Conversation object this media instance belongs to.

streamIndex (number)
    The latest index of the streams, updated on each new peer offer.

rtcObjects (Array.<object>)
    Data related to the RTC connection. Each entry has the following properties:

    rtc_id (string)
        The rtc_id.

    pc (PeerConnection)
        The current PeerConnection object.

    stream (Stream)
        The stream for the specific rtc_id.

    type (string, optional)
        The type of the stream (e.g. "audio").

    streamIndex (number)
        The index number of the stream (e.g. used to mute).

rtcstats_conf (RTCStatsConfig)
    The config needed to control rtcstats analytics behavior.

rtcstats (RTCStatsAnalytics)
    An instance used to collect analytics from a peer connection.


Methods


disable()

Disable media participation in the conversation for this application.
If RtcStats MOS is enabled, a final report will be made available via the
NexmoClient#rtcstats:report event.

Returns: Promise
Example

Disable media in the Conversation

conversation.media.disable()
.then((response) => {
  console.log(response);
}).catch((error) => {
  console.error(error);
});

<async> enable(params)

Enable media participation in the conversation for this application (requires WebRTC)

Parameters:

params (object)
    RTC params, with the following properties:

    label (string)
        An application-defined tag, e.g. 'fullscreen'.

    audio (object, optional, default: true)
        Audio enablement mode. Possible values: "both", "send_only", "receive_only", "none", true or false.

    autoPlayAudio (object, optional, default: false)
        Attach the audio stream automatically and start playing it once media is enabled.

Returns: Promise.<MediaStream>
Example

Enable media in the Conversation

conversation.media.enable()
.then((stream) => {
   const media = document.createElement("audio");
   const source = document.createElement("source");
   const media_div = document.createElement("div");
   media.appendChild(source);
   media_div.appendChild(media);
   document.body.appendChild(media_div);
   // Older browsers may not have srcObject
   if ("srcObject" in media) {
     media.srcObject = stream;
   } else {
     // Avoid using this in new browsers, as it is going away.
     media.src = window.URL.createObjectURL(stream);
   }
   media.onloadedmetadata = (e) => {
     media.play();
   };
}).catch((error) => {
   console.error(error);
});

mute( [mute] [, streamIndex])

Mute your Member

Parameters:

mute (boolean, optional, default: false)
    true to mute, false to unmute.

streamIndex (number, optional, default: null)
    The index of the stream to mute; if not set, all streams will be muted.

Example

Mute your audio stream in the Conversation

// Mute your Member
conversation.media.mute(true);

// Unmute your Member
conversation.media.mute(false);
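The streamIndex behavior described above (omitting the index mutes every stream) can be sketched with a stand-alone stub. MediaStub below is a hypothetical illustration of that semantics, not the SDK class:

```javascript
// Hypothetical stub (not the SDK Media class): illustrates the documented
// mute semantics, where omitting streamIndex applies the change to all streams.
class MediaStub {
  constructor(streamCount) {
    this.muted = new Array(streamCount).fill(false);
  }

  mute(mute = false, streamIndex = null) {
    if (streamIndex === null) {
      // No index given: apply to every stream
      this.muted = this.muted.map(() => mute);
    } else {
      // Index given: apply only to that stream
      this.muted[streamIndex] = mute;
    }
  }
}

const media = new MediaStub(3);
media.mute(true, 1);       // mute only the second stream
console.log(media.muted);  // [ false, true, false ]
media.mute(true);          // mute all streams
console.log(media.muted);  // [ true, true, true ]
```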

<async> playStream(params)

Play an audio stream in the Conversation

Parameters:

params (object)
    Properties:

    level (number)
        The audio level of the stream: min -1, max 1, in increments of 0.1.

    stream_url (array)
        Array containing the URL of the audio file.

    loop (number)
        The number of times to repeat the audio. Set to 0 to loop infinitely.

Returns: Promise.<NXMEvent>
Example

Play an audio stream in the Conversation

conversation.media.playStream({ level: 0.5, stream_url: ["https://nexmo-community.github.io/ncco-examples/assets/voice_api_audio_streaming.mp3"], loop: 1 })
.then((response) => {
  console.log("response: ", response);
})
.catch((error) => {
  console.error("error: ", error);
});
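Since level must lie between -1 and 1 in 0.1 increments, it can be convenient to normalize arbitrary values before calling playStream. clampLevel below is a hypothetical helper, not part of the SDK:

```javascript
// Hypothetical helper (not part of the SDK): snap an arbitrary value to the
// documented range for `level` (-1 to 1, in increments of 0.1).
function clampLevel(value) {
  const clamped = Math.min(1, Math.max(-1, value));
  return Math.round(clamped * 10) / 10;
}

console.log(clampLevel(0.55)); // 0.6
console.log(clampLevel(2.4));  // 1
console.log(clampLevel(-1.3)); // -1
```

The result can then be passed directly as the level property of the playStream params.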

<async> sayText(params)

Play synthesized speech (text-to-speech) in the Conversation

Parameters:

params (object)
    Properties:

    text (string)
        The text to say in the Conversation.

    voice_name (string, optional, default: "Amy")
        Name of the voice to use for text-to-speech.

    level (number, optional, default: 1)
        The audio level of the stream: min -1, max 1, in increments of 0.1.

    queue (boolean, optional, default: true)
        ?

    loop (number, optional, default: 1)
        The number of times to repeat the audio. Set to 0 to loop infinitely.

    ssml (boolean, optional, default: false)
        Customize the spoken text with the Speech Synthesis Markup Language (SSML) specification.

Returns: Promise.<NXMEvent>
Example

Play text-to-speech in the Conversation

conversation.media.sayText({text:"hi"})
.then((response) => {
   console.log(response);
})
.catch((error) => {
    console.error(error);
});

<async> sendDTMF(digit)

Send DTMF in the Conversation

Parameters:

digit (string)
    The DTMF digit(s) to send.

Returns: Promise.<NXMEvent>
Example

Send DTMF in the Conversation

conversation.media.sendDTMF("1234")
.then((response) => {
  console.log(response);
})
.catch((error) => {
  console.error(error);
});
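Before calling sendDTMF, it can help to validate the input client-side. isValidDTMF below is a hypothetical helper, assuming the commonly used DTMF character set of 0-9, '*', '#' and 'p' (pause); check the service documentation for the exact set it accepts:

```javascript
// Hypothetical helper (not part of the SDK): checks a candidate DTMF string
// against a commonly used character set (0-9, '*', '#', 'p' for pause).
function isValidDTMF(digits) {
  return typeof digits === "string" && /^[0-9*#p]+$/.test(digits);
}

console.log(isValidDTMF("1234#")); // true
console.log(isValidDTMF(""));      // false (empty string)
console.log(isValidDTMF("digit")); // false (not DTMF characters)
```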

<async> startRinging()

Send start ringing event

Returns: Promise.<NXMEvent>
Example

Send start ringing event in the Conversation

conversation.media.startRinging()
.then((response) => {
   console.log(response);
}).catch((error) => {
   console.error(error);
});

// Listen for start ringing event
conversation.on('audio:ringing:start', (data) => {
   console.log("ringing started: ", data);
});

<async> stopRinging()

Send stop ringing event

Returns: Promise.<NXMEvent>
Example

Send stop ringing event in the Conversation

conversation.media.stopRinging()
.then((response) => {
   console.log(response);
}).catch((error) => {
   console.error(error);
});

// Listen for stop ringing event
conversation.on('audio:ringing:stop', (data) => {
   console.log("ringing stopped: ", data);
});

<async> updateAudioConstraints(constraints, type)

Replaces the audio tracks currently being used as the sender's sources with new tracks that satisfy the given constraints

Parameters:

constraints (object)
    Audio constraints, e.g. { deviceId: { exact: selectedAudioDeviceId } }.

type (string)
    The RTC object type, e.g. "audio".

Returns: Promise.<MediaStream> (the new stream)
Example

Update the stream currently being used with a new audio source

conversation.media.updateAudioConstraints({ deviceId: { exact: selectedAudioDeviceId } }, "audio")
.then((response) => {
  console.log(response);
}).catch((error) => {
  console.error(error);
});