Video Connector
The Vonage Video Connector Python library enables you to programmatically participate in Vonage Video API sessions as a server-side participant. This library allows you to connect to video sessions, publish and subscribe to streams, and process real-time audio and video data.
The library handles WebRTC connectivity, media processing, and session management automatically, allowing you to focus on building your application logic. Audio is delivered as Linear PCM 16-bit data, and video is delivered as 8-bit frames in YUV420P, RGB24, or ARGB32 formats, all at configurable sample rates, resolutions, and channel configurations.
Important: The Vonage Video Connector Python library is designed for server-side applications and requires valid Vonage Video API credentials and tokens with appropriate permissions.
This topic includes the following sections:
- Private beta
- Requirements
- Data structures
- Connecting to a session
- Session settings
- Publishing streams
- Subscribing to streams
- Audio data handling
- Video data handling
- Media buffer management
- Event callbacks
Private beta
Vonage Video Connector is in private beta. Contact us to request early access.
Requirements
This library requires Python 3.13 running on Linux (AMD64 and ARM64). We recommend Debian Bookworm, the distribution on which the library has been most thoroughly tested.
Data structures
The Vonage Video Connector Python library uses several key data structures to represent sessions, connections, streams, and audio data. Understanding these structures is essential for working with the library effectively.
Session
Represents a Vonage Video API session that clients can connect to:
from vonage_video_connector.models import Session
# Session object properties
session.id # str: Unique identifier for the session
The Session object is passed to various callback functions to identify which session triggered the event.
Connection
Represents a participant's connection to a session:
from vonage_video_connector.models import Connection
# Connection object properties
connection.id # str: Unique identifier for the connection
connection.creation_time # datetime: When the connection was established
connection.data # str: Connection data (encoded in the token)
Connection data can be used to store custom metadata about participants, such as user IDs or roles.
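Because connection data is a free-form string set when the token is generated, a common convention is to encode it as JSON. A small sketch of parsing it that way (the data format is entirely up to your token-generation code; this helper is not part of the library):

```python
import json

def parse_connection_data(connection_data: str) -> dict:
    """Parse custom participant metadata, assuming it was encoded as JSON
    in the token. Returns an empty dict for empty or non-JSON data."""
    if not connection_data:
        return {}
    try:
        parsed = json.loads(connection_data)
        return parsed if isinstance(parsed, dict) else {}
    except json.JSONDecodeError:
        return {}

# Example: token generated with data='{"user_id": "u-42", "role": "agent"}'
meta = parse_connection_data('{"user_id": "u-42", "role": "agent"}')
```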
Stream
Represents a media stream (audio/video) published by a participant:
from vonage_video_connector.models import Stream
# Stream object properties
stream.id # str: Unique identifier for the stream
stream.connection # Connection: The underlying connection that published this stream
Streams are created when participants publish media and are used for subscribing to receive their audio/video data.
Publisher
Represents your published stream in the session:
from vonage_video_connector.models import Publisher
# Publisher object properties
publisher.stream # Stream: The underlying stream for this publisher
The Publisher object is used in publisher-related callbacks and represents your own published media stream.
Subscriber
Represents a subscription to another participant's stream:
from vonage_video_connector.models import Subscriber
# Subscriber object properties
subscriber.stream # Stream: The underlying stream for this subscriber
The Subscriber object is used in subscriber-related callbacks and represents your subscription to receive another participant's media.
AudioData
Represents audio data being transmitted or received:
from vonage_video_connector.models import AudioData
# AudioData object properties
audio_data.sample_buffer # memoryview: 16-bit signed integer audio samples
audio_data.sample_rate # int: Sample rate (8000-48000 Hz)
audio_data.number_of_channels # int: 1 (mono) or 2 (stereo)
audio_data.number_of_frames # int: Number of audio frames
Audio format requirements:
- Sample buffer must contain 16-bit signed integers
- Valid sample rates: 8000, 12000, 16000, 24000, 32000, 44100, 48000 Hz
- Channels: 1 (mono) or 2 (stereo)
- Buffer size must accommodate number_of_frames * number_of_channels samples
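The required buffer size follows directly from those fields: each 16-bit sample occupies 2 bytes, so a buffer must hold at least number_of_frames * number_of_channels samples. A quick helper (ours, not part of the library) to validate your own buffers:

```python
BYTES_PER_SAMPLE = 2  # Linear PCM 16-bit

def required_audio_buffer_bytes(number_of_frames: int, number_of_channels: int) -> int:
    """Minimum sample_buffer size in bytes for an AudioData payload."""
    return number_of_frames * number_of_channels * BYTES_PER_SAMPLE

# 20 ms of stereo audio at 48 kHz: 960 frames x 2 channels x 2 bytes
print(required_audio_buffer_bytes(960, 2))  # 3840
```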
VideoFrame
Represents video frame data being transmitted or received:
from vonage_video_connector.models import VideoFrame, VideoResolution
# VideoFrame object properties
video_frame.frame_buffer # memoryview: 8-bit unsigned char video frame data
video_frame.resolution # VideoResolution: Width and height in pixels
video_frame.format # str: Video format (YUV420P, RGB24, or ARGB32)
Video format requirements:
- Frame buffer must contain 8-bit unsigned chars
- Valid formats: YUV420P, RGB24 (BGR), ARGB32 (BGRA)
- Maximum resolution: 1920x1080 pixels (2,073,600 total pixels)
- Buffer size varies by format and resolution
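The per-format buffer sizes can be computed directly: YUV420P uses 1.5 bytes per pixel (a full-resolution Y plane plus quarter-resolution U and V planes), RGB24 uses 3, and ARGB32 uses 4. A helper (ours, not the library's) to sanity-check frame buffers before submitting them:

```python
def expected_frame_bytes(width: int, height: int, fmt: str) -> int:
    """Expected frame_buffer size in bytes for a resolution and format."""
    pixels = width * height
    if fmt == "YUV420P":
        # Full-resolution Y plane plus two quarter-resolution chroma planes
        return pixels + 2 * ((width // 2) * (height // 2))
    if fmt == "RGB24":
        return pixels * 3
    if fmt == "ARGB32":
        return pixels * 4
    raise ValueError(f"Unknown format: {fmt}")

print(expected_frame_bytes(640, 480, "YUV420P"))  # 460800
```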
VideoResolution
Represents the dimensions of a video frame:
from vonage_video_connector.models import VideoResolution
# VideoResolution object properties
resolution = VideoResolution(
width=640, # int: Width in pixels
height=480 # int: Height in pixels
)
MediaBufferStats
Provides statistics about media buffers:
from vonage_video_connector.models import MediaBufferStats, AudioBufferStats, VideoBufferStats
# MediaBufferStats object properties
stats.audio # Optional[AudioBufferStats]: Audio buffer statistics
stats.video # Optional[VideoBufferStats]: Video buffer statistics
# AudioBufferStats properties
stats.audio.duration # timedelta: Duration of queued audio
# VideoBufferStats properties
stats.video.duration # timedelta: Duration of queued video
Configuration structures
SessionSettings
Configures session-level behavior:
from vonage_video_connector.models import SessionSettings
session_settings = SessionSettings(
enable_migration=False, # bool: Enable automatic session migration
av=av_settings, # Optional[SessionAVSettings]: Audio/video configuration
logging=logging_settings # Optional[LoggingSettings]: Logging configuration
)
SessionAVSettings
Configures audio and video settings for the session:
from vonage_video_connector.models import SessionAVSettings, SessionAudioSettings, SessionVideoPublisherSettings
av_settings = SessionAVSettings(
audio_publisher=SessionAudioSettings(sample_rate=48000, number_of_channels=2),
audio_subscribers_mix=SessionAudioSettings(sample_rate=48000, number_of_channels=1),
video_publisher=SessionVideoPublisherSettings(
resolution=VideoResolution(width=1280, height=720),
fps=30,
format="YUV420P"
)
)
Understanding audio configuration:
The SessionAVSettings allows you to configure different audio formats for publishing and receiving:
- audio_publisher: Defines the format of the audio data you provide via add_audio(). The audio data you send must match this configuration's sample rate and number of channels.
- audio_subscribers_mix: Defines the format of the mixed audio you receive from all subscribed streams via the on_audio_data_cb callback. The library automatically mixes the subscribed streams' audio and resamples/converts channels to match your specified format.
This separation allows you to optimize for your use case. For example:
- Publish in stereo (2 channels) for high-quality output while receiving a mono mix (1 channel) to simplify processing
- Publish at 16kHz for speech while receiving at 48kHz for high-fidelity playback
- Use different sample rates for publishing and subscription based on your audio processing pipeline requirements
SessionAudioSettings
Configures audio format for publishing or receiving audio data:
from vonage_video_connector.models import SessionAudioSettings
audio_settings = SessionAudioSettings(
sample_rate=48000, # int: Sample rate (8000-48000 Hz)
number_of_channels=1 # int: Channels - 1 (mono) or 2 (stereo)
)
SessionVideoPublisherSettings
Configures video settings for publishing:
from vonage_video_connector.models import SessionVideoPublisherSettings, VideoResolution
video_settings = SessionVideoPublisherSettings(
resolution=VideoResolution(width=1280, height=720), # Resolution in pixels
fps=30, # int: Frames per second (1-30)
format="YUV420P" # str: Video format (YUV420P, RGB24, or ARGB32)
)
LoggingSettings
Controls logging verbosity:
from vonage_video_connector.models import LoggingSettings
logging_settings = LoggingSettings(
level="INFO" # str: ERROR, WARN, INFO, DEBUG, or TRACE
)
PublisherSettings
Configures your published stream:
from vonage_video_connector.models import PublisherSettings, PublisherAudioSettings
publisher_settings = PublisherSettings(
name="My Application", # str: Name for your published stream (required, min 1 char)
has_audio=True, # bool: Whether to publish audio
has_video=True, # bool: Whether to publish video
audio_settings=audio_settings # Optional[PublisherAudioSettings]: Audio configuration
)
Note: At least one of has_audio or has_video must be True.
PublisherAudioSettings
Configures audio settings for your published stream:
from vonage_video_connector.models import PublisherAudioSettings
audio_settings = PublisherAudioSettings(
enable_stereo_mode=True, # bool: Publish in stereo (True) or mono (False)
enable_opus_dtx=False # bool: Enable discontinuous transmission
)
Discontinuous transmission (DTX) stops sending audio packets during silence, saving bandwidth.
SubscriberSettings
Configures subscriber behavior:
from vonage_video_connector.models import SubscriberSettings, SubscriberVideoSettings, VideoResolution
subscriber_settings = SubscriberSettings(
subscribe_to_audio=True, # bool: Whether to subscribe to audio
subscribe_to_video=True, # bool: Whether to subscribe to video
video_settings=SubscriberVideoSettings(
preferred_resolution=VideoResolution(width=640, height=480),
preferred_framerate=15
)
)
Note: At least one of subscribe_to_audio or subscribe_to_video must be True.
SubscriberVideoSettings
Configures video preferences for subscribers:
from vonage_video_connector.models import SubscriberVideoSettings, VideoResolution
video_settings = SubscriberVideoSettings(
preferred_resolution=VideoResolution(width=640, height=480), # Optional
preferred_framerate=15 # Optional: Preferred FPS (1-30)
)
Understanding preferred settings:
When subscribing to routed streams that use simulcast, the Vonage Video API SFU (Selective Forwarding Unit) can send different quality layers of the video. The preferred_resolution and preferred_framerate settings allow you to request a specific quality layer:
- preferred_resolution: Requests a specific spatial layer (resolution). The SFU will send the layer that most closely matches your preference.
- preferred_framerate: Requests a specific temporal layer (frame rate). The SFU will send the layer that most closely matches your preference.
These preferences help optimize bandwidth usage and processing requirements on the subscriber side by requesting only the quality level you need, rather than always receiving the highest quality available.
Data structure relationships
The data structures are related in the following hierarchy:
Session
├── Connection (multiple participants)
│   └── Stream (participant's published media)
│       ├── Publisher (your published stream)
│       └── Subscriber (your subscription to their stream)
├── AudioData (flowing through streams)
└── VideoFrame (flowing through streams)
Connecting to a session
Basic connection
To connect to a Vonage Video API session, you need your application ID (or API key, for legacy TokBox accounts), a session ID, and a valid token:
from vonage_video_connector import VonageVideoClient
from vonage_video_connector.models import SessionSettings, SessionAVSettings, SessionAudioSettings, LoggingSettings
# Create client instance
client = VonageVideoClient()
# Configure session settings
session_settings = SessionSettings(
enable_migration=False,
av=SessionAVSettings(
audio_subscribers_mix=SessionAudioSettings(
sample_rate=48000,
number_of_channels=1
)
),
logging=LoggingSettings(level="INFO")
)
# Connect to session
success = client.connect(
application_id="your_application_id",
session_id="your_session_id",
token="your_token",
session_settings=session_settings,
on_connected_cb=on_session_connected,
on_error_cb=on_session_error
)
Connection with all callbacks
For full session management, implement all available callbacks:
success = client.connect(
application_id="your_application_id",
session_id="your_session_id",
token="your_token",
session_settings=session_settings,
on_error_cb=on_session_error,
on_connected_cb=on_session_connected,
on_disconnected_cb=on_session_disconnected,
on_connection_created_cb=on_connection_created,
on_connection_dropped_cb=on_connection_dropped,
on_stream_received_cb=on_stream_received,
on_stream_dropped_cb=on_stream_dropped,
on_audio_data_cb=on_audio_data,
on_ready_for_audio_cb=on_ready_for_audio,
on_media_buffer_drained_cb=on_media_buffer_drained
)
Disconnecting from a session
Disconnect from the session when done:
success = client.disconnect()
Session settings
Audio and video configuration
Configure audio and video settings for the session to control the format of media data:
from vonage_video_connector.models import (
SessionAVSettings,
SessionAudioSettings,
SessionVideoPublisherSettings,
VideoResolution
)
# Configure audio for publisher and subscriber mix
audio_publisher = SessionAudioSettings(
sample_rate=48000, # Valid: 8000, 12000, 16000, 24000, 32000, 44100, 48000
number_of_channels=2 # 1 for mono, 2 for stereo
)
audio_subscribers_mix = SessionAudioSettings(
sample_rate=48000,
number_of_channels=1
)
# Configure video publisher settings
video_publisher = SessionVideoPublisherSettings(
resolution=VideoResolution(width=1280, height=720),
fps=30,
format="YUV420P" # Valid: YUV420P, RGB24, ARGB32
)
# Combine into session AV settings
av_settings = SessionAVSettings(
audio_publisher=audio_publisher,
audio_subscribers_mix=audio_subscribers_mix,
video_publisher=video_publisher
)
Logging configuration
Control the verbosity of console logging:
from vonage_video_connector.models import LoggingSettings
# Configure logging level
logging_settings = LoggingSettings(
level="DEBUG" # Valid: ERROR, WARN, INFO, DEBUG, TRACE
)
Session migration
Enable automatic session migration in case of SFU rotation:
from vonage_video_connector.models import SessionSettings
session_settings = SessionSettings(
enable_migration=True, # Enable automatic migration
av=av_settings,
logging=logging_settings
)
Publishing streams
Publisher configuration
Configure publisher settings before starting to publish:
from vonage_video_connector.models import PublisherSettings, PublisherAudioSettings
# Configure publisher audio settings
audio_settings = PublisherAudioSettings(
enable_stereo_mode=True, # Publish in stereo
enable_opus_dtx=False # Keep discontinuous transmission disabled (set True to save bandwidth during silence)
)
# Create publisher settings for audio and video
publisher_settings = PublisherSettings(
name="AI Assistant Bot",
has_audio=True,
has_video=True,
audio_settings=audio_settings
)
# Or audio-only publisher
audio_only_settings = PublisherSettings(
name="Audio Bot",
has_audio=True,
has_video=False,
audio_settings=audio_settings
)
Start publishing
Begin publishing a stream to the session:
success = client.publish(
settings=publisher_settings,
on_error_cb=on_publisher_error,
on_stream_created_cb=on_stream_created,
on_stream_destroyed_cb=on_stream_destroyed
)
Important:
If you're publishing audio (has_audio=True), you must wait for the on_ready_for_audio_cb callback to be invoked before calling add_audio(). This callback indicates that the audio system is initialized and ready to accept audio data. This requirement does not apply to video-only publishing scenarios.
# Example: Wait for audio system to be ready
import time
audio_ready = False
def on_ready_for_audio(session):
global audio_ready
audio_ready = True
print("Audio system ready - can now add audio")
# Connect with the callback
client.connect(
application_id="your_application_id",
session_id="your_session_id",
token="your_token",
session_settings=session_settings,
on_ready_for_audio_cb=on_ready_for_audio
)
# Publish
client.publish(settings=publisher_settings)
# Wait for audio to be ready before adding audio
while not audio_ready:
time.sleep(0.01)
# Now safe to add audio
client.add_audio(audio_data)
Adding audio data
Send audio data to your published stream:
from vonage_video_connector.models import AudioData
# Create audio data (example with 16-bit PCM samples)
audio_buffer = memoryview(your_audio_samples) # Must be 16-bit signed integers
audio_data = AudioData(
sample_buffer=audio_buffer,
sample_rate=48000,
number_of_channels=1,
number_of_frames=960 # 20ms at 48kHz
)
# Add audio to the published stream
success = client.add_audio(audio_data)
Stop publishing
Stop publishing when done:
success = client.unpublish()
Subscribing to streams
Subscribe to streams
When a new stream is received, subscribe to it to receive audio and/or video data:
from vonage_video_connector.models import SubscriberSettings, SubscriberVideoSettings, VideoResolution
def on_stream_received(session, stream):
print(f"New stream received: {stream.id}")
print(f"From connection: {stream.connection.id}")
# Configure subscriber settings (optional)
subscriber_settings = SubscriberSettings(
subscribe_to_audio=True,
subscribe_to_video=True,
video_settings=SubscriberVideoSettings(
preferred_resolution=VideoResolution(width=640, height=480),
preferred_framerate=15
)
)
# Subscribe to the stream
success = client.subscribe(
stream=stream,
settings=subscriber_settings,
on_error_cb=on_subscriber_error,
on_connected_cb=on_subscriber_connected,
on_disconnected_cb=on_subscriber_disconnected,
on_render_frame_cb=on_render_frame
)
Receiving subscribed media
When you subscribe to streams, the library delivers audio and video data through different callbacks:
Video data: Video frames are delivered individually per subscribed stream through the on_render_frame_cb callback. Each callback invocation includes the subscriber object that identifies which stream the video frame belongs to. This allows you to process video from different participants separately.
def on_render_frame(subscriber, video_frame):
"""Called for each subscribed stream's video frames"""
stream_id = subscriber.stream.id
width = video_frame.resolution.width
height = video_frame.resolution.height
print(f"Video frame from stream {stream_id}: {width}x{height}")
# Process video for this specific stream
Audio data: Audio is delivered as a single mixed stream through the on_audio_data_cb callback registered during connect(). The library automatically mixes audio from all subscribed streams together into a single audio stream. You cannot distinguish between individual participants' audio in this callback.
def on_audio_data(session, audio_data):
"""Called with mixed audio from all subscribed streams"""
sample_rate = audio_data.sample_rate
channels = audio_data.number_of_channels
print(f"Mixed audio from all subscribers: {sample_rate}Hz, {channels} channel(s)")
# Process the combined audio from all participants
This design allows you to:
- Process video from each participant independently for tasks like layout management, individual recording, or per-stream video effects
- Receive pre-mixed audio optimized for playback or further processing without manual mixing
- Configure the mixed audio format via audio_subscribers_mix in SessionAVSettings to match your processing requirements
Unsubscribe from streams
Stop receiving media from a specific stream:
def on_stream_dropped(session, stream):
print(f"Stream dropped: {stream.id}")
# Unsubscribe from the stream
success = client.unsubscribe(stream)
Audio data handling
Audio format
Audio data is delivered as Linear PCM 16-bit signed integers with the following characteristics:
- Sample rates: 8000, 12000, 16000, 24000, 32000, 44100, or 48000 Hz
- Channels: 1 (mono) or 2 (stereo)
- Format: 16-bit signed integers in a memoryview buffer
- Frame size: typically 20 ms chunks (frame count varies by sample rate)
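Because frame counts scale with the sample rate, a 20 ms chunk contains sample_rate / 50 frames. A quick sketch of that relationship (a helper of ours, not a library call):

```python
FRAME_DURATION_MS = 20

def frames_per_chunk(sample_rate: int, duration_ms: int = FRAME_DURATION_MS) -> int:
    """Number of audio frames in one chunk at the given sample rate."""
    return sample_rate * duration_ms // 1000

for rate in (8000, 16000, 24000, 44100, 48000):
    print(rate, frames_per_chunk(rate))
# 48000 Hz gives 960 frames per 20 ms chunk, matching the publishing example above
```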
Processing audio data
Handle incoming audio in the audio data callback:
def on_audio_data(session, audio_data):
"""Process incoming audio data from subscribed streams"""
# Access audio properties
sample_rate = audio_data.sample_rate
channels = audio_data.number_of_channels
frames = audio_data.number_of_frames
# Access the audio buffer (memoryview of 16-bit signed integers)
audio_buffer = audio_data.sample_buffer
print(f"Received {frames} frames at {sample_rate}Hz, {channels} channel(s)")
# Convert to bytes if needed
audio_bytes = audio_buffer.tobytes()
# Process the audio (e.g., transcription, analysis, etc.)
process_audio(audio_buffer, sample_rate, channels)
# Generate response audio and add it back
response_audio = generate_response(audio_buffer)
if response_audio:
client.add_audio(response_audio)
Creating audio data
When adding audio, create properly formatted AudioData objects:
import array
import math
from vonage_video_connector.models import AudioData
# Create 16-bit PCM audio samples (example: sine wave)
sample_rate = 48000
duration_ms = 20 # 20ms frame
num_samples = int(sample_rate * duration_ms / 1000)
# Generate audio samples as 16-bit signed integers
samples = array.array('h') # 'h' = signed short (16-bit)
for i in range(num_samples):
# Example: generate a 440Hz sine wave
sample = int(32767 * 0.5 * math.sin(2 * math.pi * 440 * i / sample_rate))
samples.append(sample)
# Create AudioData object
audio_data = AudioData(
sample_buffer=memoryview(samples),
sample_rate=sample_rate,
number_of_channels=1,
number_of_frames=num_samples
)
# Add the audio
client.add_audio(audio_data)
Audio data continuity
When you publish audio, the library manages audio continuity automatically in several scenarios:
Initial publishing:
When you start publishing audio (via publish() with has_audio=True), the library automatically sends silence (zero-filled audio frames) until you provide your first audio data via add_audio(). This ensures the audio stream is immediately available to subscribers without waiting for your application to generate audio data.
Silence tolerance:
If you temporarily stop providing audio data via add_audio(), the library tolerates brief gaps by not sending any audio packets. This hysteresis period prevents unnecessary silence packets during momentary processing delays.
Explicit silence: After the tolerance period, if no new audio data is available, the library switches to sending explicit silence frames (zero-filled audio). This maintains the audio stream while indicating that no active audio is being provided.
Buffer flush: If you provide less than a full period's worth of audio data, the library will flush the remaining data and pad it with silence to maintain the correct timing and prevent audio drift.
Best practices:
- Maintain a consistent audio rate by calling add_audio() at regular intervals matching your configured sample rate
- Monitor buffer statistics using get_media_buffer_stats() to ensure adequate audio data
- Handle the on_media_buffer_drained_cb callback to detect when your audio buffer is depleted
- Consider implementing an audio generation strategy that adapts to varying processing loads
This automatic audio management ensures that your published audio stream remains continuous and properly timed even during temporary gaps in data availability.
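One way to hold a steady cadence is a deadline-based loop: schedule each add_audio() call against a monotonic clock instead of sleeping a fixed interval, so processing jitter does not accumulate. A sketch under our own assumptions (make_frame and stop_event are application hooks you supply, not library API):

```python
import time
import threading

FRAME_DURATION_S = 0.020  # one 20 ms AudioData chunk per iteration

def audio_send_loop(client, make_frame, stop_event: threading.Event) -> None:
    """Push audio chunks at a steady 20 ms cadence until stop_event is set."""
    next_deadline = time.monotonic()
    while not stop_event.is_set():
        client.add_audio(make_frame())  # make_frame() returns an AudioData
        next_deadline += FRAME_DURATION_S
        # Sleep only for the remaining time so drift doesn't build up
        time.sleep(max(0.0, next_deadline - time.monotonic()))
```

In production you would pair this with get_media_buffer_stats() checks so the loop can back off if the buffer grows too long.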
Video data handling
Video format
Video data is delivered as 8-bit unsigned chars in one of three formats:
- YUV420P: Planar YUV format with 4:2:0 chroma subsampling
- RGB24: 24-bit BGR format (8 bits per channel)
- ARGB32: 32-bit BGRA format with alpha channel
Video specifications:
- Resolutions: Up to 1920x1080 (Full HD)
- Frame rates: 1-30 FPS
- Format: 8-bit unsigned chars in a memoryview buffer
Processing video frames
Handle incoming video frames in the render frame callback:
def on_render_frame(subscriber, video_frame):
"""Process incoming video frames from subscribed streams"""
# Access video properties
width = video_frame.resolution.width
height = video_frame.resolution.height
format = video_frame.format
# Access the video buffer (memoryview of 8-bit unsigned chars)
frame_buffer = video_frame.frame_buffer
print(f"Received {width}x{height} frame in {format} format")
# Convert to bytes if needed
frame_bytes = frame_buffer.tobytes()
# Process the video frame (e.g., computer vision, recording, etc.)
process_video(frame_buffer, width, height, format)
Creating video frames
When publishing video, create properly formatted VideoFrame objects:
import array
from vonage_video_connector.models import VideoFrame, VideoResolution
# Create a video frame (example: solid color in YUV420P format)
width = 640
height = 480
# YUV420P format calculation:
# Y plane: width * height
# U plane: (width/2) * (height/2)
# V plane: (width/2) * (height/2)
y_size = width * height
uv_size = (width // 2) * (height // 2)
total_size = y_size + 2 * uv_size
# Create frame buffer as 8-bit unsigned chars
frame_data = array.array('B', [128] * total_size) # 'B' = unsigned char (8-bit)
# Create VideoFrame object
video_frame = VideoFrame(
frame_buffer=memoryview(frame_data),
resolution=VideoResolution(width=width, height=height),
format="YUV420P"
)
# Add the video frame
client.add_video(video_frame)
Video frame continuity
When you publish video, the library manages frame continuity automatically in several scenarios:
Initial publishing:
When you start publishing video (via publish() with has_video=True), the library automatically sends black frames until you provide your first frame via add_video(). This ensures the video stream is immediately available to subscribers without waiting for your application to generate video data.
Last frame repetition:
If you stop providing video frames via add_video(), the library will automatically repeat the last frame you provided. This ensures smooth playback for subscribers without interruption. The last frame will be repeated for up to 2 seconds.
Black frame fallback: After the maximum repetition period (2 seconds), the library switches to publishing black frames. This indicates to subscribers that video data is no longer actively being provided while maintaining the video stream.
Best practices:
- Maintain a consistent frame rate by calling add_video() at regular intervals matching your configured FPS
- Monitor buffer statistics using get_media_buffer_stats() to ensure adequate video data
- Handle the on_media_buffer_drained_cb callback to detect when your video buffer is depleted
- Consider implementing a frame generation strategy that adapts to varying processing loads
This automatic frame management ensures that your published video stream remains continuous even during temporary gaps in data availability.
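If you ever want to emit your own placeholder frame (for example while a camera source restarts), an all-black YUV420P frame is zero luma with neutral (128) chroma. A sketch of building the raw buffer; this mirrors the library's fallback behavior but is our own helper, and you would still wrap it in a VideoFrame before calling add_video():

```python
import array

def black_yuv420p_buffer(width: int, height: int) -> memoryview:
    """All-black YUV420P frame data: zero Y plane, neutral (128) chroma planes."""
    y_size = width * height
    uv_size = (width // 2) * (height // 2)
    return memoryview(array.array("B", bytes(y_size) + bytes([128]) * (2 * uv_size)))

buf = black_yuv420p_buffer(640, 480)
print(len(buf))  # 460800 bytes = 640 * 480 * 1.5
# Wrap with VideoFrame(frame_buffer=buf, resolution=VideoResolution(width=640, height=480),
# format="YUV420P") and pass it to client.add_video() to publish it yourself.
```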
Media buffer management
Checking buffer stats
Monitor the state of your media buffers:
# Get current buffer statistics
stats = client.get_media_buffer_stats()
if stats.audio:
print(f"Audio buffer duration: {stats.audio.duration.total_seconds()}s")
if stats.video:
print(f"Video buffer duration: {stats.video.duration.total_seconds()}s")
Clearing media buffers
Clear both audio and video buffers when needed:
# Clear all media buffers
success = client.clear_media_buffers()
if success:
print("Media buffers cleared successfully")
Buffer drained callback
Handle buffer drain events:
def on_media_buffer_drained(stats):
"""Called when media buffers are drained"""
print("Media buffers drained")
if stats.audio:
print(f"Audio buffer: {stats.audio.duration.total_seconds()}s remaining")
if stats.video:
print(f"Video buffer: {stats.video.duration.total_seconds()}s remaining")
Understanding buffer drain events:
The on_media_buffer_drained_cb callback is invoked when the internal audio or video buffers are depleted. This occurs when media data is being transmitted to the session at a rate that exceeds the rate at which new media data is being provided via add_audio() or add_video() calls.
This callback serves as a notification that you should increase your media production rate or adjust your publishing strategy to maintain continuous media flow. Monitoring these events helps prevent gaps or interruptions in your published stream.
Callback hysteresis behavior:
The callback implements hysteresis to prevent excessive triggering. After the initial drain event, the callback will not be invoked again until the buffer is replenished with new media data and subsequently becomes depleted again. This prevents a flood of repeated notifications while the buffer remains empty.
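A common response to a drain event is to top the buffer back up with a short burst before resuming the normal cadence. A sketch with hypothetical hooks (make_chunk is your own chunk generator, and the five-chunk burst size is an arbitrary illustration, not a library recommendation):

```python
def make_drain_handler(client, make_chunk, burst_chunks: int = 5):
    """Build an on_media_buffer_drained_cb that refills a dry audio buffer."""
    def on_media_buffer_drained(stats):
        if stats.audio is not None and stats.audio.duration.total_seconds() == 0:
            # Push roughly 100 ms (five 20 ms chunks) to rebuild headroom
            for _ in range(burst_chunks):
                client.add_audio(make_chunk())
    return on_media_buffer_drained
```

You would pass the returned handler as on_media_buffer_drained_cb when calling connect().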
Getting connection info
Retrieve your local connection information:
# Get the local connection
connection = client.get_connection()
if connection:
print(f"Connection ID: {connection.id}")
print(f"Connection data: {connection.data}")
print(f"Created at: {connection.creation_time}")
Event callbacks
Session callbacks
Handle session-level events:
def on_session_error(session, error_description, error_code):
"""Handle session errors"""
print(f"Session error: {error_description} (Code: {error_code})")
def on_session_connected(session):
"""Handle successful session connection"""
print(f"Connected to session: {session.id}")
def on_session_disconnected(session):
"""Handle session disconnection"""
print(f"Disconnected from session: {session.id}")
def on_ready_for_audio(session):
"""Called when the audio system is ready"""
print("Audio system ready - can now add audio")
Connection callbacks
Monitor participant connections:
def on_connection_created(session, connection):
"""Handle new participant joining"""
print(f"Participant joined: {connection.id}")
print(f"Connection data: {connection.data}")
print(f"Created at: {connection.creation_time}")
def on_connection_dropped(session, connection):
"""Handle participant leaving"""
print(f"Participant left: {connection.id}")
Stream callbacks
Handle stream events:
def on_stream_received(session, stream):
"""Handle new streams from other participants"""
print(f"Stream received: {stream.id} from connection {stream.connection.id}")
# Decide whether to subscribe based on your application logic
def on_stream_dropped(session, stream):
"""Handle streams being removed"""
print(f"Stream dropped: {stream.id}")
Publisher callbacks
Handle publishing events:
def on_publisher_error(publisher, error_description, error_code):
"""Handle publisher errors"""
print(f"Publisher error: {error_description} (Code: {error_code})")
def on_stream_created(publisher):
"""Handle successful stream creation"""
print(f"Published stream created: {publisher.stream.id}")
def on_stream_destroyed(publisher):
"""Handle stream destruction"""
print(f"Published stream destroyed: {publisher.stream.id}")
Subscriber callbacks
Handle subscription events:
def on_subscriber_error(subscriber, error_description, error_code):
"""Handle subscriber errors"""
print(f"Subscriber error: {error_description} (Code: {error_code})")
def on_subscriber_connected(subscriber):
"""Handle successful subscription"""
print(f"Subscribed to stream: {subscriber.stream.id}")
def on_subscriber_disconnected(subscriber):
"""Handle subscription disconnection"""
print(f"Unsubscribed from stream: {subscriber.stream.id}")
def on_render_frame(subscriber, video_frame):
"""Handle incoming video frames"""
width = video_frame.resolution.width
height = video_frame.resolution.height
print(f"Video frame: {width}x{height} in {video_frame.format} format")
Media buffer callbacks
Handle media buffer events:
def on_media_buffer_drained(stats):
"""Handle media buffer drain events"""
print("Media buffers have been drained")
if stats.audio:
duration_s = stats.audio.duration.total_seconds()
print(f"Audio buffer: {duration_s}s")
if stats.video:
duration_s = stats.video.duration.total_seconds()
print(f"Video buffer: {duration_s}s")
Resource cleanup
Always clean up resources properly:
try:
# Your application logic
success = client.connect(...)
# ... do work ...
except Exception as e:
print(f"Application error: {e}")
finally:
# Clean up resources
if client.is_publishing():
client.unpublish()
if client.is_connected():
client.disconnect()