Swift

Summary

How it works

A timer runs at ~10 fps. On each tick, the capturer renders the target view (a UIView) into a UIImage via drawHierarchy(in:afterScreenUpdates:), resizes and pads the image for encoder compatibility, converts it to a CVPixelBuffer, wraps it in an OTVideoFrame, and passes it to the Vonage SDK via videoCaptureConsumer?.consumeFrame(_:). The SDK encodes and publishes the frames as a screen-share stream. The target view can be the full root view or any UIView you provide—including one produced by a UIViewRepresentable (see alternative below).
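The per-tick pipeline above can be sketched roughly as follows. This is a minimal illustration, not the sample's actual source: the `ScreenCapturer` class name, the timer setup, and the pixel-buffer attributes are assumptions, while `drawHierarchy(in:afterScreenUpdates:)`, `OTVideoFrame`, `OTVideoFormat`, and `videoCaptureConsumer?.consumeFrame(_:)` come from UIKit and the Vonage (OpenTok) iOS SDK. Resizing/padding, error handling, and the exact plane-pointer plumbing are elided for clarity.

```swift
import UIKit
import OpenTok  // Vonage Video iOS SDK

// Sketch of a custom screen-share capturer. Class name and structure are
// illustrative; a real capturer also implements the OTVideoCapture protocol.
class ScreenCapturer: NSObject {
    weak var targetView: UIView?                     // root view or any UIView
    var videoCaptureConsumer: OTVideoCaptureConsumer?
    private var timer: Timer?

    func start() {
        // ~10 fps: render and publish one frame every 0.1 s
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            self?.captureFrame()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }

    private func captureFrame() {
        guard let view = targetView else { return }

        // 1. Render the view hierarchy into a UIImage
        let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
        let image = renderer.image { _ in
            view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
        }

        // 2. Convert the image to a CVPixelBuffer
        guard let cgImage = image.cgImage,
              let pixelBuffer = makePixelBuffer(from: cgImage) else { return }

        // 3. Wrap the buffer in an OTVideoFrame and hand it to the SDK,
        //    which encodes and publishes it as the screen-share stream.
        //    (Plane-pointer setup simplified; see the SDK's OTVideoFrame docs.)
        let format = OTVideoFormat(argbWithWidth: UInt32(cgImage.width),
                                   height: UInt32(cgImage.height))
        let frame = OTVideoFrame(format: format)
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        if let base = CVPixelBufferGetBaseAddress(pixelBuffer)?
            .assumingMemoryBound(to: UInt8.self) {
            var planes: [UnsafeMutablePointer<UInt8>?] = [base]
            planes.withUnsafeMutableBufferPointer { ptr in
                frame.setPlanesWithPointers(ptr.baseAddress!, numPlanes: 1)
            }
            videoCaptureConsumer?.consumeFrame(frame)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
    }

    private func makePixelBuffer(from cgImage: CGImage) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        let attrs: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        CVPixelBufferCreate(kCFAllocatorDefault, cgImage.width, cgImage.height,
                            kCVPixelFormatType_32ARGB, attrs as CFDictionary, &buffer)
        guard let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
        // Draw the CGImage into the pixel buffer's memory
        let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                width: cgImage.width, height: cgImage.height,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.draw(cgImage, in: CGRect(x: 0, y: 0,
                                          width: cgImage.width, height: cgImage.height))
        return pixelBuffer
    }
}
```

In a real capturer, `captureFrame()` would also resize and pad the image to encoder-friendly dimensions before the pixel-buffer conversion, as described above.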

Testing

This example should be tested with multiple participants (at least 2) so that another person in the call can see the screen share.

Test on iOS Simulator

  1. Run the app in the iOS Simulator
  2. The simulator will use a demo video (no camera access)
  3. You should see the timer text

Test on Physical Device

  1. Connect an iOS device
  2. Select it as the run destination
  3. Grant camera permissions when prompted
  4. You should see the timer text

Test with Multiple Participants

  1. Run the app on a device or simulator
  2. Use the Vonage Video Playground to join the same session
  3. You should see both your custom-rendered stream and the standard subscriber stream