We're distributing a virtual camera with our app that does not benefit in the slightest from automatically applied system video effects, whether on the video going in (the physical camera device) or going out (the virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and of determining active video effects via the AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX.
To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats, I was hoping to find something similar in the CMIO extension API, but haven't been successful so far.
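For reference, the AVCaptureDevice-side detection we fall back on today looks roughly like this (a sketch; device selection and KVO of the ...Active properties are omitted):

import AVFoundation

// Sketch: report which system video effects are enabled/active for the
// physical device feeding our virtual camera.
func logSystemVideoEffects(for device: AVCaptureDevice) {
    // Class-level flags reflect the user's system-wide (menu bar / Control Center) choices.
    print("Portrait enabled system-wide:", AVCaptureDevice.isPortraitEffectEnabled)
    print("Studio Light enabled system-wide:", AVCaptureDevice.isStudioLightEnabled)
    print("Reactions enabled system-wide:", AVCaptureDevice.reactionEffectsEnabled)
    // Per-device flags tell us whether an effect is actually applied to this feed.
    print("Portrait active on \(device.localizedName):", device.isPortraitEffectActive)
    print("Studio Light active on \(device.localizedName):", device.isStudioLightActive)
}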
Can this be done?
Posts under Camera tag
My app gets video from a UVC device, and I wish to display it in an Immersive Space. But when I open the Immersive Space, the UVC capture just stops. AI told me it's due to a conflict in the camera pipeline, but I don't really understand: I don't use any on-device camera, so why would it conflict with my UVC capture?
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
The iPhone 17's front camera, with its new 18MP square sensor and iOS 26's Center Stage feature, can auto-rotate between portrait and landscape like the native iOS Camera app.
Is there a Swift or AVFoundation API that allows developers to manually control front camera orientation in the same way the native Camera app does? Or is this auto-rotation strictly handled by the system without public API access?
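For context, the closest existing control I've found is the class-level Center Stage API (a sketch; this governs the framing effect, not the automatic sensor rotation I'm asking about):

import AVFoundation

// Sketch: the Center Stage switches available today. These toggle the framing
// effect under app control; I haven't found an equivalent for the automatic
// portrait/landscape rotation itself.
func enableCenterStageUnderAppControl() {
    AVCaptureDevice.centerStageControlMode = .app  // let the app own the setting
    AVCaptureDevice.isCenterStageEnabled = true
}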
In iOS 26.1 beta 4, under MDM restrictions that disable the camera via a configuration profile, the Camera and FaceTime apps are hidden as expected. However, other third-party apps can still access and use the camera normally. This seems unreasonable.
Hi everyone,
We're encountering an unexpected issue with our iPhone-only camera app:
👉 TimeMark - Photo Proof
https://apps.apple.com/us/app/timemark-photo-proof/id6446071834
Problem Description:
Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
According to the Apple documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc), this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture, all of which are iPad-only features.
However, this issue is being reported on iPhones, and our app does not support iPad at all.
Also noted in the documentation:
"Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."
Additional Context:
The issue occurs immediately on app launch, before the user can interact with the camera.
We don’t enable multitaskingCameraAccessEnabled.
We are 100% sure this is happening on iPhone, not iPad.
It’s hard to reproduce; users report it happening sporadically.
Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.
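For completeness, this is roughly how we observe the interruption (a sketch; session is our running AVCaptureSession and error handling is trimmed):

import AVFoundation

// Sketch: log when the session is interrupted and flag the specific reason we see.
func observeInterruptions(of session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session,
        queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
              let reason = AVCaptureSession.InterruptionReason(rawValue: raw) else { return }
        if reason == .videoDeviceNotAvailableWithMultipleForegroundApps {
            print("Interrupted: videoDeviceNotAvailableWithMultipleForegroundApps (on iPhone)")
        } else {
            print("Interrupted, reason raw value:", raw)
        }
    }
}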
Questions:
Why is this interruption reason occurring on iPhone, which doesn’t officially support Slide Over or Split View?
Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip.
Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?
Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
Hi everyone,
I’m working on a custom camera implementation in iOS using native code. My goal is to capture unprocessed, realistic images directly from the camera — without any filters or post-image processing applied by the system.
I’ve implemented RAW image capture using the native camera APIs (AVFoundation) and successfully received .dng files. However, even the RAW outputs don’t look like the real environment — the colors, tone, and exposure still seem processed or corrected in some way.
I’ve tried various configurations such as photoSettings.rawPhotoPixelFormatType, experimenting with AVCaptureDevice and AVCapturePhotoOutput settings, and reviewing ProRAW and standard RAW behavior, but I’m still not getting truly unprocessed results that reflect the actual sensor data.
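For reference, the capture path I'm using looks roughly like this (a sketch; assumes the session is already configured and running):

import AVFoundation

// Sketch: request a Bayer RAW (DNG) capture rather than Apple ProRAW, to stay
// as close to the sensor data as possible. Assumes `photoOutput` is attached
// to a running AVCaptureSession and `delegate` handles the result.
func captureBayerRAW(with photoOutput: AVCapturePhotoOutput,
                     delegate: AVCapturePhotoCaptureDelegate) {
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: {
        AVCapturePhotoOutput.isBayerRAWPixelFormat($0)
    }) else { return }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}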
Has anyone experienced similar results when capturing RAW images on iOS, or found a way to bypass Apple’s image signal processing (ISP) pipeline for more realistic captures?
Any insights or references from Apple’s camera framework behavior would be greatly appreciated.
Thank you!
Prerequisite: after the MDM app issues the restriction command, the camera on the phone is no longer visible (unusable).
After upgrading to iOS 26.1, isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera keeps returning true even though the camera is unavailable.
On iOS 26.0.1 the same method behaves correctly, returning false when the camera is unavailable and true when it is available.
Please fix this method, or, if isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera cannot reliably reflect camera availability, please provide an alternative way to check.
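For reference, these are the two checks we compare (a sketch; the assumption is that an MDM camera restriction should make both report "unavailable"):

import UIKit
import AVFoundation

// Sketch: compare UIKit's picker-based check with an AVFoundation device query.
func cameraAvailability() -> (pickerSaysAvailable: Bool, captureDeviceExists: Bool) {
    let pickerAvailable = UIImagePickerController.isSourceTypeAvailable(.camera)
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: .unspecified)
    return (pickerAvailable, !discovery.devices.isEmpty)
}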
Issue: Since the iOS 18.5 release, our app has been experiencing a significant increase in AVCaptureSessionInterruptionReason.videoDeviceNotAvailableWithMultipleForegroundApps errors.
Details:
Our camera-related code has not been updated recently. However, the error rate has increased significantly starting in May 2025, rising from approximately 0.02% (2 in 10,000 users) to 0.1% (1 in 1,000 users), a 5x increase.
The frequency has increased noticeably since iOS 18.5
This is affecting our app's camera functionality and user experience
Questions:
Are there any known changes in iOS 18.5 regarding camera access management?
What are the recommended best practices to handle this interruption reason?
Are there any API changes we should be aware of?
Best,
Shay
My implementation of LockedCameraCapture does not launch my app when its widget is tapped on the Lock Screen. But when the same widget is in Control Center, it launches the app successfully.
Standard Xcode target template:
Lock_Screen_Capture.swift
@main
struct Lock_Screen_Capture: LockedCameraCaptureExtension {
    var body: some LockedCameraCaptureExtensionScene {
        LockedCameraCaptureUIScene { session in
            Lock_Screen_CaptureViewFinder(session: session)
        }
    }
}
Lock_Screen_CaptureViewFinder.swift:
import SwiftUI
import UIKit
import UniformTypeIdentifiers
import LockedCameraCapture
struct Lock_Screen_CaptureViewFinder: UIViewControllerRepresentable {
    let session: LockedCameraCaptureSession
    var sourceType: UIImagePickerController.SourceType = .camera

    init(session: LockedCameraCaptureSession) {
        self.session = session
    }

    func makeUIViewController(context: Self.Context) -> UIImagePickerController {
        let imagePicker = UIImagePickerController()
        imagePicker.sourceType = sourceType
        imagePicker.mediaTypes = [UTType.image.identifier, UTType.movie.identifier]
        imagePicker.cameraDevice = .rear
        return imagePicker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Self.Context) {
    }
}
Then I have my widget:
struct CameraWidgetControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(
            kind: "com.myCompany.myAppName.lock-screen") {
            ControlWidgetButton(action: MyAppCaptureIntent()) {
                Label("Capture", systemImage: "camera.shutter.button.fill")
            }
        }
    }
}
My AppIntent:
struct MyAppContext: Codable {}

struct MyAppCaptureIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext

    static let title: LocalizedStringResource = "MyAppCaptureIntent"
    static let description = IntentDescription("Capture photos and videos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}
The Issue
The LockedCameraCapture widget does not launch my app when tapped on the Lock Screen: you get the Face ID prompt and are then taken straight to the Home Screen. But when the same widget is in Control Center, it launches the app successfully.
Error Message
When tapped on Lock Screen, I get the following error code:
LaunchServices: store <private> or url <private> was nil: Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Attempt to map database failed: permission was denied. This attempt will not be retried.
Failed to initialize client context with error Error Domain=NSOSStatusErrorDomain Code=-54 "process may not map database"
UserInfo={NSDebugDescription=process may not map database, _LSLine=72, _LSFunction=_LSServer_GetServerStoreForConnectionWithCompletionHandler}
Things I tried
The widget image displays correctly.
The App ID and provisioning profile seem to be fine, since the same code works when injected into the AVCam sample app with the same App ID.
The AppIntent file has target membership in both the Lock Screen capture extension and the widget.
The app compiles without errors or warnings.
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }

    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }

    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()

        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }

        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }

        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()

        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }

        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2
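For comparison, a basic torch toggle on the active device looks roughly like this (a sketch; this is not the forceEnableTorchThroughWide() helper from the snippet above):

import AVFoundation

// Sketch: toggle the torch on the currently active capture device.
func setTorch(_ on: Bool, for device: AVCaptureDevice) {
    guard device.hasTorch, device.isTorchAvailable else { return }
    do {
        try device.lockForConfiguration()
        if on {
            try device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}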
Before iOS 26.1, with allowCamera set to false, no app could use the camera.
On iOS 26.1, setting allowCamera to false removes the Camera icon from the Home Screen, but third-party apps can still use the camera, for example Safari and other apps that invoke it.
Is this a bug or a new feature?
I'm at my wit's end with a problem I'm facing while developing an app. The app is designed to send video captured on an iPhone to a browser application for real-time display.
While it works on many older iPhone models, whenever I test it on an iPhone 17 Pro or 17 Pro Max, the video displayed in the browser becomes a solid green screen, or a colorful, garbled image that's mostly green.
I've been digging into this, and my main suspicion is an encoding failure. It seems the resolution of the ultra-wide and telephoto cameras was significantly increased on the 17 Pro and Pro Max (from 12MP to 48MP), and I think this might be overwhelming the encoder.
I'm really hoping someone here has encountered a similar issue or has any suggestions. I'm open to any information or ideas you might have. Please help!
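One thing I plan to try is pinning the capture device to a lower-resolution format before frames reach the encoder (a sketch; device stands for the AVCaptureDevice the WebRTC capturer uses):

import AVFoundation
import CoreMedia

// Sketch: choose a capture format no wider than 1920 pixels so the encoder
// never sees the 48MP-native formats of the newer cameras.
func pinFormat(of device: AVCaptureDevice, maxWidth: Int32 = 1920) throws {
    guard let format = device.formats.first(where: {
        CMVideoFormatDescriptionGetDimensions($0.formatDescription).width <= maxWidth
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}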
Environment Information:
WebRTC Library: GoogleWebRTC Version 1.1 (via CocoaPods)
Signaling Server: AWS Kinesis Video Streams
Problem Occurs on:
Model: iPhone18,1, OS: 26.0
Model: iPhone18,1, OS: 26.1
Works Fine on:
Many models before iPhone17,5
Model: iPhone18,1, OS: 26.0
Model: iPhone18,3, OS: 26.0
After applying the MDM camera restriction on iOS 26.1 beta 2, the camera availability status is reported incorrectly.
After applying the MDM camera restriction, [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] returns YES.
Foreword: I filed a Feedback report a week ago and so far no one has replied to or even acknowledged it. That's why I'm writing here now, hoping to get some more attention.
I recorded a video to show exactly what is happening. I'm not sure where exactly the issue belongs, but it is related to associated domains, the camera (QR codes), the NFC module, App Intents, and Control Center controls.
https://youtu.be/sT2bZLs_6rA
FB20418059
(I added some tags that are somehow related to the associated domain issue)
Hello Apple team,
I'm working on an iOS AR app using SwiftUI and RealityKit, and I was wondering if the Cinematic API can be used with a RealityKit scene. I'd like to achieve a shallow depth of field while keeping the 3D asset in focus, and vice versa.
Thanks!
I am able to capture 48MP photos using .builtInWideAngleCamera, but it seems like .builtInTripleCamera is capped at 12MP?
Is there a way to capture 48MP photos using .builtInTripleCamera? It provides smooth transitions between cameras during zooming, and I'd like to keep that behavior.
The new iPhone 17 Pro has all of its cameras at 48MP. Is there a chance that its .builtInTripleCamera can capture 48MP, or is this an API limitation?
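For reference, this is roughly how I opt in to 48MP on the wide camera today (a sketch; device, photoOutput, and delegate are assumed to belong to an already configured session):

import AVFoundation

// Sketch: request the largest photo dimensions the active format supports
// (48MP on capable cameras, 12MP otherwise).
func captureAtMaxDimensions(device: AVCaptureDevice,
                            photoOutput: AVCapturePhotoOutput,
                            delegate: AVCapturePhotoCaptureDelegate) {
    if let largest = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width < $1.width }) {
        photoOutput.maxPhotoDimensions = largest
    }
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = photoOutput.maxPhotoDimensions
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}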
Hi,
I'm trying to configure the camera feed in ARKit to use the Apple Log color space.
I can change the capture device's format to one that supports Apple Log, and I see one frame in proper log-gray colors, but then all AR tracking stops and the tracking state hangs at "initializing". With other combinations I see the error "sensor failed to initialize" and the session restarts with the default format.
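Roughly what I'm doing (a sketch; error handling omitted and the format search simplified):

import ARKit
import AVFoundation

// Sketch: grab the capture device ARKit uses for the primary camera and
// switch it to a format that supports Apple Log.
func switchPrimaryCameraToAppleLog() throws {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera,
          let logFormat = device.formats.first(where: { $0.supportedColorSpaces.contains(.appleLog) })
    else { return }
    try device.lockForConfiguration()
    device.activeFormat = logFormat
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}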
I suspect that this is because normal AR capture formats are 420f, whereas ones that have Apple Log are 422.
Could someone confirm if it’s even possible to run ARKit session with camera feed in a different pixel format?
I'm trying this on an iPhone 15 Pro.