Core Audio

Interact with the audio hardware of a device using Core Audio.

Posts under Core Audio tag

56 Posts

Post · Replies · Boosts · Views · Activity

System Audio Capture API Fails with OSStatus Error 1852797029 (kAudioHardwareIllegalOperationError, 'nope')
Issue Description
I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn’t be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.

Questions
- Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
- Are there specific conditions or system states that might trigger this error sporadically?
- Are there any known workarounds for handling this intermittent failure case?

Any insights or guidance would be greatly appreciated. (A decoding sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 108 · Activity: May ’25
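As a quick aid for the post above, here is a minimal, hedged sketch (plain Foundation, no CoreAudio assumptions) that renders an OSStatus as its four-character code; 1852797029 decodes to 'nope', which corresponds to kAudioHardwareIllegalOperationError in AudioHardwareBase.h.

```swift
import Foundation

// Minimal sketch: render an OSStatus as its four-character code so that
// 1852797029 reads as 'nope' (kAudioHardwareIllegalOperationError).
func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes: [UInt8] = [UInt8((value >> 24) & 0xFF),
                          UInt8((value >> 16) & 0xFF),
                          UInt8((value >> 8) & 0xFF),
                          UInt8(value & 0xFF)]
    // Fall back to the decimal value if the bytes aren't printable ASCII.
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }),
          let code = String(bytes: bytes, encoding: .ascii) else {
        return String(status)
    }
    return "'\(code)'"
}

print(fourCharCode(from: 1852797029)) // prints 'nope'
```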
How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
Replies: 0 · Boosts: 0 · Views: 80 · Activity: May ’25
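Regarding the enumeration question above, the audio half can be done with daemon-safe CoreAudio alone; a hedged sketch (error handling abbreviated) that lists devices whose transport type is built-in follows. Cameras are a separate problem (CoreMediaIO or IOKit), which this sketch deliberately does not touch.

```swift
import CoreAudio

// Sketch: enumerate audio devices and keep the ones whose transport type is
// kAudioDeviceTransportTypeBuiltIn (built-in microphones and speakers).
func builtInAudioDeviceIDs() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr,
          dataSize > 0 else { return [] }

    var deviceIDs = [AudioObjectID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioObjectID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }

    return deviceIDs.filter { deviceID in
        var transportAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyTransportType,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var transport: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        return AudioObjectGetPropertyData(deviceID, &transportAddress, 0, nil,
                                          &size, &transport) == noErr
            && transport == kAudioDeviceTransportTypeBuiltIn
    }
}
```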
The impact of MicrophoneMode on my Mac application.
I have a 4-input, 4-output hardware device and an 8-input, 8-output virtual device, which I combine into an aggregate device. I am using the SimplyCoreAudio library to get the channel count:

```swift
aggregationDevice!.channels(scope: .input)  // => 12
aggregationDevice!.channels(scope: .output) // => 12
```

When the program's MicrophoneMode is set to standard, the channel count is correct. However, when I set the MicrophoneMode to voiceIsolation, the channel count is incorrect:

```swift
aggregationDevice!.channels(scope: .input)  // => 4
aggregationDevice!.channels(scope: .output) // => 12
```

Below is the code for creating the aggregate device:

```swift
func createAggregateDevice(mainDevice: AudioDevice,
                           secondDevice: AudioDevice?,
                           named name: String,
                           uid: String) -> AudioDevice? {
    guard let mainDeviceUID = mainDevice.uid else { return nil }

    var deviceList: [[String: Any]] = [
        [
            kAudioSubDeviceUIDKey: mainDeviceUID,
            kAudioSubDeviceDriftCompensationKey: 1
        ]
    ]

    // make sure same device isn't added twice
    if let secondDeviceUID = secondDevice?.uid, secondDeviceUID != mainDeviceUID {
        deviceList.append([
            kAudioSubDeviceUIDKey: secondDeviceUID,
            kAudioSubDeviceDriftCompensationKey: 1,
            kAudioSubDeviceInputChannelsKey: 8
        ])
    }

    let desc: [String: Any] = [
        kAudioAggregateDeviceNameKey: name,
        kAudioAggregateDeviceUIDKey: uid,
        kAudioAggregateDeviceSubDeviceListKey: deviceList,
        kAudioAggregateDeviceMainSubDeviceKey: mainDeviceUID,
        kAudioAggregateDeviceIsPrivateKey: false
    ]

    var deviceID: AudioDeviceID = 0
    let error = AudioHardwareCreateAggregateDevice(desc as CFDictionary, &deviceID)
    guard error == noErr else { return nil }

    return AudioDevice.lookup(by: deviceID)
}
```

I hope someone can tell me the reason. Thank you! (A channel-count cross-check sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 478 · Activity: May ’25
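A hedged cross-check for the post above: read the aggregate's input channel count straight from the HAL via kAudioDevicePropertyStreamConfiguration, bypassing SimplyCoreAudio, to see whether the 4-channel figure under voice isolation comes from the HAL itself or from the library.

```swift
import CoreAudio
import AudioToolbox

// Sketch: sum the input channels reported by a device's stream configuration.
func inputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)

    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr,
          dataSize > 0 else { return 0 }

    let rawBufferList = UnsafeMutableRawPointer.allocate(
        byteCount: Int(dataSize),
        alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { rawBufferList.deallocate() }

    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil,
                                     &dataSize, rawBufferList) == noErr else { return 0 }

    let bufferList = UnsafeMutableAudioBufferListPointer(
        rawBufferList.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}
```

Comparing this value with SimplyCoreAudio's `channels(scope: .input)` while toggling the microphone mode should show at which layer the virtual sub-device's channels disappear.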
How to request permission for System Audio Recording Only?
Hi community, I'm wondering how I can request the "System Audio Recording Only" permission (under Privacy & Security -> Screen & System Audio Recording) via Swift. I did a bunch of searching but didn't find good documentation on it. I tried another approach, https://github.com/insidegui/AudioCap/blob/main/AudioCap/ProcessTap/AudioRecordingPermission.swift, which doesn't work very reliably.
Replies: 2 · Boosts: 0 · Views: 676 · Activity: May ’25
CoreAudio server plugin: updating kAudioStreamPropertyAvailablePhysicalFormats
Hi, our CoreAudio server plugin supports different clock sources. A switch might result in a change of the selectable sample rates (and other settings). On a clock source switch, the plugin reconfigures the set of available kAudioStreamPropertyAvailablePhysicalFormats and announces the change via AudioServerPlugInHostInterface::PropertiesChanged(). However, at least Audio MIDI Setup seems not to update its UI. The changes are only reflected after selecting another device and re-selecting the device of interest. (Latest macOS, M4 Mac mini.) Is this a bug? Or is our CoreAudio server plugin required to indicate the change in the list of available audio formats differently? Thanks!
Replies: 0 · Boosts: 0 · Views: 69 · Activity: May ’25
Cleaning up CoreAudio preferences
Hi, our virtual CoreAudio server plugin dynamically creates and removes CoreAudio devices. Each time it does so, it leaves traces in /Library/Preferences/Audio: com.apple.audio.DeviceSettings.plist and com.apple.audio.SystemSettings.plist. The files on the test machine have now grown to more than 1 MB, and the system keeps recreating them. How can I manually remove/clean up these files? (This is for development only. It has already become pretty tedious to evaluate current device settings for debugging purposes.) How can the CoreAudio server plugin make sure that once a device has been removed, its entries are also removed from the .plist? (It already removes its storage, but the system still keeps other settings.) Is there some documentation about what gets stored and how the settings are organized in these preferences? (This is also for development and debugging only; we are not intending to access these settings directly.) Thanks!
Replies: 5 · Boosts: 0 · Views: 144 · Activity: Apr ’25
CoreAudio server plugin gaining write access with SystemConfiguration.framework functions
Hi, our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared, system-wide settings. While our application can authenticate to utilize the SystemConfiguration.framework to gain write access to the shared configuration settings, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate. Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this? Thanks!
Replies: 2 · Boosts: 0 · Views: 88 · Activity: Apr ’25
Is PushToTalk Framework Half-Duplex Only, or Does It Include Built-in Full-Duplex Audio Capabilities?
Hello everyone, Our team is currently developing an iOS application requiring real-time audio communication and evaluating the most suitable frameworks. Options include CallKit, custom solutions using AVAudioEngine/Audio Units, and the PushToTalk framework. Regarding the PushToTalk framework, we have some questions about its core design and capabilities that we'd appreciate clarification on from the community or Apple engineers. Based on the PushToTalk framework documentation, its API design (e.g., methods like requestBeginTransmission, endTransmission which imply explicitly requesting transmission rights), and its system UI integration, it strongly appears oriented towards half-duplex communication scenarios, similar to traditional walkie-talkies where only one participant transmits audio at a time. Is this understanding accurate? Is the PushToTalk framework's design strictly limited to managing half-duplex audio interactions? Or, does the framework itself also provide built-in mechanisms or APIs to manage simultaneous, bi-directional (full-duplex) audio streaming between participants? To be clear, we are asking about the inherent capabilities of the PushToTalk framework itself. We understand it's possible to use PushToTalk for signaling and UI management, and separately implement the actual full-duplex audio stream using AVAudioEngine or other audio APIs. However, we want to confirm if the framework itself is designed to support or simplify full-duplex audio communication. Have other developers investigated the specific limitations or capabilities of the PushToTalk framework regarding audio transmission modes (half-duplex vs. full-duplex)? Are there any official documentation references or WWDC sessions that explicitly clarify the framework's support (or lack thereof) for full-duplex operation? If PushToTalk is indeed limited to half-duplex, what are the generally accepted best practices for apps requiring full-duplex calls – transitioning directly to CallKit (where applicable) or building custom audio processing pipelines? Clarifying this point is crucial for us to select the correct technology stack for our application. Any relevant insights, documentation pointers, or shared development experiences would be greatly appreciated. Thank you for your help!
Replies: 1 · Boosts: 0 · Views: 126 · Activity: Apr ’25
Does USB Audio in macOS support 12 channels with 16-bit PCM samples?
I have a custom USB Audio Class 2 (UAC2) compatible device. When I connect this custom device to a MacBook with a configuration of up to 10 channels (16-bit), everything seems to work fine. However, when I increase the channel count to 12, the MacBook does not recognize the 12 channels; it only shows the channel count as 0. TN2274 is the only source where I found some information about Apple's Audio Class Drivers, but it doesn't mention any limitations regarding channel counts. Could you let me know the current limitations of the Audio Class Drivers on the latest macOS versions? What configuration should I use to get 12 channels working? P.S. I also found that a 12-channel, 8-bit configuration is detected by the MacBook, but I want it to work with 16 bits. For more detail, please check FB17098863.
Replies: 6 · Boosts: 0 · Views: 179 · Activity: Apr ’25
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue Description
- I've installed a tap on an input device to convert audio to an optimal sample rate. There's a converter node added on top of this setup.
- The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms
- The converter node is being deallocated during video calls.
- The program crashes entirely when this happens.
- Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance! (A listener sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 50 · Activity: Apr ’25
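For the sample-rate monitoring described above, a minimal listener sketch is below; it assumes deviceID is the AudioObjectID of the tapped input device. Note the caveat from the post itself: if the effective rate change during Zoom/FaceTime happens on a different object (for example a voice-processing aggregate created by the call), a listener on the original device may legitimately never fire.

```swift
import Foundation
import CoreAudio

// Sketch: observe kAudioDevicePropertyNominalSampleRate on a device and log
// the new rate whenever the HAL reports a change.
func observeSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var rate: Float64 = 0
        var size = UInt32(MemoryLayout<Float64>.size)
        var rateAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        if AudioObjectGetPropertyData(deviceID, &rateAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate) Hz")
        }
    }
    if status != noErr {
        print("Failed to install sample-rate listener: \(status)")
    }
}
```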
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.

Works: paired devices (e.g., MacBook Pro mic → MacBook Pro speakers).
Fails: mismatched devices (e.g., AirPods mic → MacBook Pro speakers).

When using paired input and output devices, the setup works as expected (example: MacBook Pro microphone → MacBook Pro speakers). When using mismatched devices, AVAudioEngine setup fails during aggregate device construction (example: AirPods microphone → MacBook Pro speakers), and the error logs indicate a channel count mismatch. Here are the partial logs; due to the content limit, I cannot post the entire logs:

```
AUVPAggregate.cpp:1000  client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036  err=-10875
AVAEInternal.h:109  [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329  Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331  Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182  error fetching default pair
AggregateDevice.mm:329  Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331  Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782  ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484  AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782  ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182  error fetching default pair
AggregateDevice.mm:329  Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331  Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
```

Questions:
- Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices?
- How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices?
- Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?

(A small diagnostic sketch follows this entry.)
Replies: 3 · Boosts: 2 · Views: 652 · Activity: Mar ’25
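A small, hedged diagnostic for the post above: enable voice processing on the input node (the call the post references) and print the formats the engine reports for the current default input and output devices, so any channel-count disagreement between, say, an AirPods microphone and the built-in speakers shows up in the log. On a failing device pair this may reproduce the same AUVPAggregate errors, which is itself useful as a minimal repro.

```swift
import AVFoundation

// Sketch: turn on voice processing and dump the node formats the engine sees.
let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Voice processing could not be enabled: \(error)")
}
print("Input format:  \(engine.inputNode.inputFormat(forBus: 0))")
print("Output format: \(engine.outputNode.outputFormat(forBus: 0))")
```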
AudioUnit (AUv2) Session Compatibility After Adding MIDI Support
Hi there! We have a suite of AudioUnit v2 plugins that have been shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades, so we need a way to update these plugins to request MIDI from Logic (and other AU hosts) but avoid changing our AU type and subtype so we don't break existing sessions. Any ideas on how we can do this?
Replies: 1 · Boosts: 0 · Views: 78 · Activity: Mar ’25
CoreAudio server plugin: propagate kAudioObjectPropertyName change
When my virtual CoreAudio server plugin propagates a change to its device name, the CoreAudio system does not seem to reflect the change. My user-mode application subscribes to the property change and receives the change, though. I also alternatively submitted a kAudioObjectPropertyName change, with the same effect. Is this possible at all, and what needs to be done then? Are there restrictions about which properties can be successfully changed and are reflected by the system? Any hint is highly appreciated! Thanks
Replies: 1 · Boosts: 0 · Views: 57 · Activity: Mar ’25
CoreAudio HAL plugin vs dext
The presentation "Create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says "If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used". Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual. For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof of concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio MIDI Setup. HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property-based; we had to write a lot of boilerplate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier: a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation are much easier. The dext works just fine, as far as we can ascertain, just as well as a HAL plugin. So, my question is: is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
Replies: 2 · Boosts: 1 · Views: 465 · Activity: Mar ’25
Failure of AudioUnitSetProperty when using Mac Catalyst (works on macOS)
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using

```swift
let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))
```

kAudioOutputUnitProperty_CurrentDevice is invalid, and status = -10879, indicating an error.

STEPS TO REPRODUCE
1. Set Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
2. Set Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
Replies: 4 · Boosts: 1 · Views: 644 · Activity: Mar ’25
Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output:

```
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _parseInfo : nil
  - _baseParseInfo : nil

(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}

(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>

(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>

(lldb) po buffer.frameLength
882

(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
  - pointerValue : 12894608992
```

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer:

```swift
extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard let buffer = frame.buffer(),
              let format = buffer.format() as? FMLiveSwitchAudioFormat else {
            return nil
        }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength

        return pcmBuffer
    }
}
```

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen:

```swift
private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}
```

This is how the audio file is being set up:

```swift
static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
```

(A hedged note on the processing format follows this entry.)
Replies: 0 · Boosts: 0 · Views: 359 · Activity: Mar ’25
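A hedged observation on the post above: AVAudioFile(forWriting:settings:) opens the file with a default processing format (deinterleaved Float32), and write(from:) expects buffers in the processing format rather than the file format, so handing it an interleaved Int16 buffer can plausibly surface as error -50. The sketch below pins the processing format instead; outputURL and the .caf extension are assumptions, not taken from the post.

```swift
import AVFoundation

// Sketch: open the file so its processing format matches the Int16 interleaved
// buffers produced by the LiveSwitch conversion.
let outputURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("HeartSounds.caf") // hypothetical destination

let recordingFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                    sampleRate: 44_100,
                                    channels: 2,
                                    interleaved: true)!

do {
    let audioFile = try AVAudioFile(forWriting: outputURL,
                                    settings: recordingFormat.settings,
                                    commonFormat: .pcmFormatInt16,
                                    interleaved: true)
    // write(from:) should now accept AVAudioPCMBuffers in the same Int16
    // interleaved layout, since file format and processing format agree.
    _ = audioFile
} catch {
    print("Could not open audio file: \(error)")
}
```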
USB Audio Input Blocked When Screen Mirroring via AirPlay
Hello Apple Developers, I am experiencing an issue where USB audio input (e.g., external USB microphone) is blocked when using AirPlay screen mirroring from my iPhone to a Mac or Apple TV. However, the built-in microphone continues to work without any problem. Issue Details: I am using an iPhone 15 (or latest device) running iOS [your iOS version]. I connect a USB audio interface/microphone via a USB-C adapter or Lightning adapter. The USB microphone works perfectly for audio input before starting AirPlay. The moment I enable AirPlay screen mirroring, the USB microphone stops working, and it disappears from available audio input sources. The built-in microphone continues to function, but I cannot use the USB microphone while mirroring. When I stop screen mirroring, the USB microphone immediately becomes available again. Expected Behavior: I would expect iOS to allow me to continue using an external USB microphone while mirroring my screen, just like it allows the built-in microphone to work. Questions: Is this an intentional restriction in iOS? Is there any workaround to enable USB audio input while using AirPlay screen mirroring? Is there a way to request a feature or configuration option to allow external USB microphones during AirPlay? I appreciate any insights or guidance from the Apple team or fellow developers. Thanks in advance! Best regards,
Replies: 0 · Boosts: 0 · Views: 257 · Activity: Mar ’25
Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example). The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter: "Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets". It is easy to reproduce. This problem is reproducible even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant. I’m running on macOS 15.0 (24A335). I’ve filed a feedback including a demo project → FB15333298 🎟️ I would greatly appreciate any help! Have a great day, Martin
Replies: 6 · Boosts: 0 · Views: 823 · Activity: Feb ’25
Swift Testing environment differences from regular executable
I am working on a Swift package which uses CoreAudio, and includes some tests in a testTarget which use the Testing framework, and a couple of executableTarget targets which exercise the same code. I'm using Xcode 16.2 on macOS 15.3.1. One of the things I do in the test code is create a HAL plugin, then find that plugin using the kAudioHardwarePropertyTranslateUIDToDevice property. Finding the plugin that I just created always fails from within a Swift Testing test, unless I run the test which creates the plugin individually first and then, separately, run the test which finds the plugin, by clicking on the little arrows next to the function names. If I put the tests in a serialized suite (so creation always happens first, then finding), running the suite always fails: it creates the plugin, but can't find it. If I run the 'find my plugin' test again manually, it is always found. If I call the same functions from a regular executable (the thing created by an "executableTarget" in my Package.swift file), the just-created plugin is always found. Is there a way to mimic the runtime environment of a regular executable in a Swift Testing target, or am I misunderstanding something? This may be related to this issue: https://github.com/swiftlang/swift/issues/76882, but I don't understand it well enough to be sure. (A UID-translation sketch follows this entry.)
Replies: 4 · Boosts: 0 · Views: 452 · Activity: Feb ’25
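For the post above, a hedged sketch of the UID-to-device translation itself (per AudioHardware.h, the qualifier for kAudioHardwarePropertyTranslateUIDToDevice is the device UID as a CFString); calling this from both the test and the executable makes it easier to compare the two runtime environments.

```swift
import CoreAudio

// Sketch: translate a device UID string into an AudioObjectID, returning nil
// when the HAL reports kAudioObjectUnknown (i.e., no matching device).
func deviceID(forUID uid: String) -> AudioObjectID? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyTranslateUIDToDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    var uidString = uid as CFString
    var deviceID = AudioObjectID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioObjectID>.size)

    let status = withUnsafeMutablePointer(to: &uidString) { qualifier in
        AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &address,
                                   UInt32(MemoryLayout<CFString>.size), qualifier,
                                   &size, &deviceID)
    }
    guard status == noErr, deviceID != AudioObjectID(kAudioObjectUnknown) else { return nil }
    return deviceID
}
```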
macOS: AudioUnit packaged as .appex won't load when host app is sandboxed
Hi, I'm working on an audio mixing app that comes with bundled audio units that provide some of the app's core functionality. For the next release of that app, we are planning to make two changes:

- make the app sandboxed
- package the bundled audio units as .appex bundles instead of as .component bundles, so we don't need to take care of the installation at the correct spot in the file system

When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit with the exception:

```
AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863
```

Our audio unit has the sandboxSafe flag enabled, and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID/code signing requirements for the .appex correct. It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there is something wrong with the Info.plist generated by Juice?

```
"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [
  0 => "MacOSX"
]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
  "NSExtensionAttributes" => {
    "AudioComponents" => [
      0 => {
        "description" => "Elgato Sample Recorder"
        "factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
        "manufacturer" => "Manu"
        "name" => "Elgato: Elgato Sample Recorder"
        "sandboxSafe" => 1
        "subtype" => "Znyk"
        "tags" => [
          0 => "Effects"
        ]
        "type" => "aufx"
        "version" => 65536
      }
    ]
  }
  "NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
  "NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1
}
```

Any ideas what I am missing? (A registration-check sketch follows this entry.)
Replies: 4 · Boosts: 0 · Views: 429 · Activity: Feb ’25
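A hedged diagnostic for the post above: before instantiating, ask the system whether the extension's component is registered in the sandboxed host at all, using the type/subtype/manufacturer from the Info.plist shown in the post. An empty result points at the extension being rejected on metadata or code-signing grounds; a non-empty result points at the instantiation path instead.

```swift
import AVFoundation

// Sketch: look up the bundled component ('aufx' / 'Znyk' / 'Manu') via
// AVAudioUnitComponentManager to see whether the .appex registered at all.
func fourCC(_ code: String) -> OSType {
    code.utf8.reduce(0) { ($0 << 8) | OSType($1) }
}

let description = AudioComponentDescription(componentType: fourCC("aufx"),
                                            componentSubType: fourCC("Znyk"),
                                            componentManufacturer: fourCC("Manu"),
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

let matches = AVAudioUnitComponentManager.shared().components(matching: description)
print("Registered components: \(matches.map(\.name))")
```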