Hi
We're on TensorFlow 2.20, which now supports Python 3.13 (finally!). tensorflow-metal still only supports 2.18, which is over a year old.
When can we expect tensorflow-metal support for TF 2.20 (or later!)?
I bought a Mac thinking I'd get great performance out of the M-series processors, but here I am running my ML projects on the CPU.
If it's taking so long to release, why not open-source it so the community can keep it up to date?
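(For now I'm working around it by pinning to the last combination that loads the Metal plugin for me, Python 3.12 with TensorFlow 2.18.1; a stopgap, not a fix, and your versions may differ:)
python3.12 -m venv venv && source venv/bin/activate
pip install "tensorflow==2.18.1" tensorflow-metal
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"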
cheers
Matt
                    
                  
Metal
Render advanced 3D graphics and perform data-parallel computations using graphics processors.
Posts under Metal tag: 149 Posts
            
              
                
              
            
          
          
  
    
                      Also submitted as feedback (ID: FB20612561).
Tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1
In a new python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in 
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in 
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
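For comparison, a quick check I use to confirm the plugin loads after pinning back to 2.18.1 (the expected-output comment is just what I see locally):
pip install "tensorflow==2.18.1" tensorflow-metal
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# prints [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] here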
                    
                  
                
              
                
              
              
                
Topic: Machine Learning & AI
SubTopic: General
Tags: Developer Tools, Metal, Machine Learning, tensorflow-metal
  
              
                
                
              
            
          
                    
                      I have a Core Image filter in my app that uses Metal. I cannot compile it because it complains that the executable tool metal is not available, but I have installed it in Xcode.
If I go to the "Components" section of Xcode Settings, it shows it as downloaded. And if I run the suggested command, it also shows it as installed. Any advice?
Xcode Version
Version 26.0 beta (17A5241e)
Build Output
Showing All Errors Only
Build target Lessons of project StudyJapanese with configuration Light
RuleScriptExecution /Users/chris/Library/Developer/Xcode/DerivedData/StudyJapanese-glbneyedpsgxhscqueifpekwaofk/Build/Intermediates.noindex/StudyJapanese.build/Light-iphonesimulator/Lessons.build/DerivedSources/OtsuThresholdKernel.ci.air /Users/chris/Code/SerpentiSei/Shared/iOS/CoreImage/OtsuThresholdKernel.ci.metal normal undefined_arch (in target 'Lessons' from project 'StudyJapanese')
    cd /Users/chris/Code/SerpentiSei/StudyJapanese
    /bin/sh -c xcrun\ metal\ -w\ -c\ -fcikernel\ \"\$\{INPUT_FILE_PATH\}\"\ -o\ \"\$\{SCRIPT_OUTPUT_FILE_0\}\"'
'
error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
/Users/chris/Code/SerpentiSei/StudyJapanese/error:1:1: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Build failed    6/9/25, 8:31 PM    27.1 seconds
Result of xcodebuild -downloadComponent MetalToolchain (after switching to Xcode-beta.app with xcode-select)
xcodebuild -downloadComponent MetalToolchain
Beginning asset download...
Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/4d77809b60771042e514cfcf39662c6d1c195f7d.asset/AssetData/Restore/022-19457-035.dmg
Done downloading: Metal Toolchain (17A5241c).
Screenshots from Xcode
Result of "Copy Information"
Metal Toolchain 26.0 [com.apple.MobileAsset.MetalToolchain: 17.0 (17A5241c)] (Installed)
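In case it helps narrow things down, here is roughly how I'm checking which metal tool the build resolves (I'm not certain this is the right diagnostic):
xcode-select -p     # should point at Xcode-beta.app/Contents/Developer
xcrun --find metal  # should print a path rather than the missing-MetalToolchain error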
                    
                  
                
                    
                      Hello
Xcode 26.0.1 (17A400)   Missing some Metal components
When building a program using Metal, I get an unexpected error:
“error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Command CompileMetalFile failed with a nonzero exit code”
which terminates the build.
The suggested fix, “xcodebuild -downloadComponent MetalToolchain” (also run with sudo), does not work.
Has anyone found a workaround or managed to resolve the issue?
Many thanks
Jean
MacBook Air M4;
macOS 26.0.1;
Xcode 26.0.1
                    
                  
                
                    
                      Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K gives around 9 tokens/second decoding speed).
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is it strictly enforced for Metal allocations (heaps/buffers), or is it advisory for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
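For reference, this is the kind of snippet I'm using to read the numbers above; it only touches stock MTLDevice properties:
import Metal

if let device = MTLCreateSystemDefaultDevice() {
    let mib = { (bytes: Int) in Double(bytes) / 1_048_576.0 }
    print("recommendedMaxWorkingSetSize:", mib(Int(device.recommendedMaxWorkingSetSize)), "MiB")
    print("hasUnifiedMemory:", device.hasUnifiedMemory)
    print("maxBufferLength:", mib(device.maxBufferLength), "MiB")
    print("currentAllocatedSize:", mib(device.currentAllocatedSize), "MiB")
}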
                    
                  
                
                    
                      Fundamentally, my questions are: is there a known transform I can apply onto a given (pixel) position (passed into a Metal Fragment Function) to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks?
My goal is to highlight people in a Vision Pro app using Compositor Services.
To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:
minFrameDuration: 0.03
maxFrameDuration: 0.033333335
frameSize: (1920.0, 1080.0)
pixelFormat: 875704422
cameraType: main
cameraPositions: [left, right]
cameraRectification: mono
From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer).
I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:
// NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake.
var request = VNGeneratePersonSegmentationRequest()
request.qualityLevel = .balanced
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
...
let lHandler = VNSequenceRequestHandler()
let rHandler = VNSequenceRequestHandler()
...
func processBuffers() async {
    try lHandler.perform([request], on: lBuffer)
    guard let lMask = request.results?.first?.pixelBuffer else {...}
    try rHandler.perform([request], on: rBuffer)
    guard let rMask = request.results?.first?.pixelBuffer else {...}
    appModel.latestPersonMasks = (lMask, rMask)
}
I store the two resulting CVPixelBuffers in my appModel. For both of these buffers aka grayscale masks:
width (in pixels) = 512
height (in pixels) = 384
bytes per row = 512
plane count = 0
pixel format type = 1278226488
I am using Compositor Services to render my content in Immersive Space. My implementation of Compositor Services is based off of the same code from Interacting with virtual content blended with passthrough.
Within the Shaders.metal, the tint's Fragment Shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline).
fragment float4 tintFragmentShader(
                                   TintInOut in [[stage_in]],
                                   ushort amp_id [[amplification_id]],
                                   texture2d<uint> leftMask [[texture(0)]],
                                   texture2d<uint> rightMask [[texture(1)]]
                                   )
{
    if (in.color.a <= 0.0) {
        discard_fragment();
    }
    float2 uv;
    
    if (amp_id == 0) { // LEFT
        uv = ??????????????????????;
    } else { // RIGHT
        uv = ??????????????????????;
    }
    
    constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear);
    
    // Sample the PersonSegmentation grayscale mask
    float maskValue = 0.0;
    
    if (amp_id == 0) { // LEFT
        if (leftMask.get_width() > 0) {
            maskValue = leftMask.sample(linearSampler, uv).r;
        }
    } else { // RIGHT
        if (rightMask.get_width() > 0) {
            maskValue = rightMask.sample(linearSampler, uv).r;
        }
    }
    
    if (maskValue > 250) {
        return float4(1.0, 1.0, 1.0, 0.5);
    }
    return in.color;
}
I need to correctly sample the masks for a given fragment.
The LayerRenderer.Layout is set to .layered. From the Developer Documentation:
A layout that specifies each view’s content as a slice of a single texture.
Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56.
The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9).
The grayscale CVPixelBuffer returned by the VNPersonSegmentationRequest is 512x384 (4:3).
All of these aspect ratios are different.
My questions come down to: is there a known transform I can apply onto a given (pixel) position to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks?
Within the tint's Vertex Shader, after applying the modelViewProjectionMatrix, I have tried every formula I could find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct position for sampling the grayscale masks (maybe = pixel-space position.xy / (viewport size * 0.5)???), but nothing has worked. The "highlight" of the person segmentation is off: scaled a little too big, and offset a little too far up and off to the side.
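To make the question concrete, here is the kind of transform I have been experimenting with. The helper, its name, and the centered aspect-fill assumption are all mine and may well be wrong; I suspect the real answer also needs the camera intrinsics/extrinsics for each eye, which is exactly what I'm asking about.
// Hypothetical helper: maps a fragment's pixel position in the per-eye render
// target (1888x1792) to a normalized UV in the camera frame (1920x1080),
// assuming the camera image is aspect-filled and centered in the view. The
// 512x384 Vision mask is assumed to span the full camera frame, so the same
// UV is used to sample it.
static float2 viewPixelToCameraUV(float2 pixelPos, float2 viewSize, float2 cameraSize)
{
    float2 uv = pixelPos / viewSize;                    // 0..1 across the render target
    float viewAspect   = viewSize.x / viewSize.y;       // ~1.05
    float cameraAspect = cameraSize.x / cameraSize.y;   // ~1.78
    // Aspect-fill: the camera frame is wider than the view, so the view shows
    // only the central (viewAspect / cameraAspect) fraction of its width.
    float visibleFraction = viewAspect / cameraAspect;
    uv.x = 0.5 + (uv.x - 0.5) * visibleFraction;
    return uv;
}
In the fragment shader that would be uv = viewPixelToCameraUV(in.position.xy, float2(1888.0, 1792.0), float2(1920.0, 1080.0)), assuming TintInOut carries a [[position]] value, but as described above the result still comes out scaled and offset incorrectly.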
                    
                  
                
                    
                      We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta.
In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue — for example, using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.
Reproduction
Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
Observe correct, high-quality results.
Upgrade to macOS 26.1 Beta and run the same operation again.
The output becomes visually scrambled or corrupted.
We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.
Notes
The affected models are exported using standard CoreML conversion pipelines.
No custom operators or third-party CoreML runtime layers are used.
The issue reproduces consistently across multiple machines.
It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
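For anyone trying to reproduce this outside our app, a minimal isolation sketch (Model stands in for any generated model class) is to load the same model with different compute units and compare outputs:
import CoreML

let cpuConfig = MLModelConfiguration()
cpuConfig.computeUnits = .cpuOnly   // force the CPU path

let fullConfig = MLModelConfiguration()
fullConfig.computeUnits = .all      // let Core ML pick GPU / Neural Engine

do {
    let cpuModel = try Model(configuration: cpuConfig)
    let fullModel = try Model(configuration: fullConfig)
    // Run both on the same input and diff the output tensors; if only the
    // fullModel output is scrambled on 26.1 Beta, the regression is in the
    // accelerated backend rather than in the model itself.
} catch {
    print(error)
}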
                    
                  
                
                    
                      Hello,
I'm getting started with Xcode Cloud for my project, since I upgraded to the macOS Sequoia beta and Xcode 16 now refuses to archive builds for TestFlight.
Somewhere very late in the build process I get the following error:
realitytool requires Metal for this operation and it is not available in this build environment
The log says this happens at:
Compile Skybox urban.skybox
My project uses RealityKit. How can I fix this issue?
Thanks!
                    
                  
                
                    
                      Hello,
I recently watched the WWDC2025 session titled “Combine Metal 4 machine learning and graphics” (https://developer.apple.com/videos/play/wwdc2025/262/ ), and I’m very excited about the new Metal 4 features that integrate machine learning with graphics—such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder.
While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven’t been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4.
Would Apple be able to provide a full, working example—such as an Xcode project—that shows how to:
Export a model to an .mlpackage,
Convert it to an .mtlpackage,
Use MTL4MachineLearningCommandEncoder alongside render passes,
Or embed small neural networks directly in shaders using Shader ML?
Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently.
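(For the first bullet, the part I can already do looks roughly like this with coremltools; MyTinyNetwork and the input shape are placeholders, and the .mtlpackage conversion plus MTL4MachineLearningCommandEncoder setup are exactly the pieces I'm missing:)
import torch
import coremltools as ct

# Step 1 only: export a small network to an .mlpackage (ML program).
traced = torch.jit.trace(MyTinyNetwork().eval(), torch.rand(1, 3, 256, 256))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 256, 256))],
    convert_to="mlprogram",
)
mlmodel.save("MyTinyNetwork.mlpackage")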
Thank you very much for your time and support!
Best regards,
                    
                  
                
                    
                      I'm trying to find the installer for Metal Toolchain 26. It seems to fix an issue I have but I don't want to have to install Xcode 26 just to get the toolchain installed.
Is this possible?
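(The closest thing I've found so far is exporting the component with xcodebuild from a machine that does have Xcode 26 installed; I haven't confirmed whether the exported bundle can then be installed without Xcode 26:)
xcodebuild -downloadComponent metalToolchain -exportPath /tmp/MetalToolchainExport/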
                    
                  
                
                    
                      How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
                    
                  
                
                    
                      Hi there,
We’re encountering this error in all of our builds when using the latest Xcode and macOS:
The Metal Toolchain was not installed and could not compile the Metal source files. Download the Metal Toolchain from Xcode > Settings > Components and try again.
In short, all builds are failing. I’ve tried fixing this by installing the Metal Toolchain and applying other suggested solutions, but none of them worked reliably.
Is there a way to ensure that the Metal Toolchain is installed on the CI machine?
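(What we're experimenting with is running the component download as a pre-build step on the CI machine, though I can't yet say whether this is the supported approach:)
xcodebuild -downloadComponent MetalToolchain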
                    
                  
                
                    
                      Hi,
In the iOS13 and macOS Catalina release notes it says:
Metal CIKernel instances now support arguments with arbitrarily structured data.
I've been trying to use this functionality in a CIKernel with mixed results. I'm particularly interested in passing data in the form of a dynamically sized array. It seems to work up to a certain size; beyond that threshold the excess data is discarded and the kernel becomes unstable. I assume there is some kind of memory-alignment issue going on, but I've tried various types in my array and always get a similar result.
I have not found any documentation or sample code regarding this. It would be great to know how this is intended to work and what the limitations are.
In the forums there are two similar unanswered questions about data arguments, so I'm sure there are a few out there with similar issues.
Thanks!
Michael
                    
                  
                
                    
                      Hi,
I’m using the latest iPad Pro (13-inch) and I can see that Metal offers an rgb10a2unorm texture for rendering, but when I render a grey ramp and measure the actual luminance, I get a pattern that I would expect from an 8-bit texture (see below). Before I start ripping apart all my code, is there anything else I need to do to convince iOS to render my texture in 10-bit?
I already tried setting the pixelFormat of my CAMetalLayer to rgb10a2unorm, but that didn’t change anything.
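For reference, the relevant configuration is essentially just these two lines (metalLayer and pipelineDescriptor are my existing objects); I include the second because the render pipeline's color attachment has to use the same format as the layer:
metalLayer.pixelFormat = .rgb10a2Unorm
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgb10a2Unorm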
                    
                  
                
                    
                      Hello,
Shaders in our application are written in HLSL, and we rely on Metal Shader Converter to convert DXIL to Metal IR. We ran into an issue that causes Metal pipeline state creation to fail when a vertex stage-in function is used on AMD GPUs.
Here's the error reported by Metal in Xcode output:
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 4 try. This error suggests an unexpected interruption in the connection. Possible reasons: a crash in the compiler service, termination by the OS due to resource constraints (e.g., jetsam), a timeout in the service, or an issue with IPC. Verify system stability and check the logs for more details.
Compiler failed with XPC_ERROR_CONNECTION_INVALID
XPC_ERROR_CONNECTION_INVALID
MTLCompiler: Compiler encountered XPC_ERROR_CONNECTION_INVALID: failed to check-in, peer may have been unloaded: mach_error=10000003 (is the OS shutting down or process jetsammed?)
Compilation failed due to an interrupted connection: XPC_ERROR_CONNECTION_INTERRUPTED. This error occurred after multiple retries.
which seems to indicate an internal compiler error.
I have a minimal repro here: https://github.com/kcloudy0717/metal_pso_fail/tree/main; simply follow the instructions in the README.
                    
                  
                
                    
I've been getting intermittent failures in Xcode compiling my app on multiple platforms because it fails to compile a Metal shader.
The Metal Toolchain was not installed and could not compile the Metal source files. Download the Metal Toolchain from Xcode > Settings > Components and try again.
Sometimes if I re-run it, it works fine. Then I'll run it again, and it will fail.
If you tell me to file a feedback, please tell me what information would be useful and actionable, because this is all I have.
                    
                  
                
                    
                      Hello,
I'm currently trying to make a collaborative app, but it only works in a RealityView; when I tried to use a CompositorLayer as below, the Personas disappeared.
ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}
Is there any potential solution to see Personas in a Metal view?
Thanks in advance!
                    
                  
                
                    
                      Hello community,
After being unable to use the Metal Toolchain with Xcode beta 1 and 2, even after following the workarounds, I am still unable to use the downloaded Metal compiler with Xcode beta 3.
Building any project with a Metal file results in
warning: Could not read serialized diagnostics file: error("Failed to open diagnostics file") (in target '<target>' from project '<project>')
error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
Command CompileMetalFile failed with a nonzero exit code
Here is my build environment,
$ xcodebuild -version
Xcode 26.0
Build version 17A5276g
I have also checked the downloaded metal toolchain.
$ xcodebuild -downloadComponent metalToolchain -exportPath /tmp/MyMetalExport/
2025-07-13 13:16:17.293 xcodebuild[2153:34019]  IDEDownloadableMetalToolchainCoordinator: Failed to remount the Metal Toolchain: The file “com.apple.MobileAsset.MetalToolchain-v17.1.5276.7.3KEJwX” couldn’t be opened because you don’t have permission to view it.
Beginning asset download...
2025-07-13 13:16:17.427 xcodebuild[2153:34022]  IDEDownloadableMetalToolchainCoordinator: Failed to remount the Metal Toolchain: The file “com.apple.MobileAsset.MetalToolchain-v17.1.5276.7.3KEJwX” couldn’t be opened because you don’t have permission to view it.
Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/47af11e2964f385d510c6a9d1a49c8165f334a51.asset/AssetData/Restore/022-19457-052.dmg
Beginning asset export...
Done exporting: /tmp/MyMetalExport/MetalToolchain-17A5276g.exportedBundle
Done downloading: Metal Toolchain 17A5276g.
The ExportMetadata.plist has a buildUpdateVersion of 17A5276g.
Any suggestions are greatly appreciated.
                    
                  
                
                    
We set the CVDisplayLink on macOS to 0 or 120 and get the following. This then clamps the maximum refresh rate to 60 Hz on the 120 Hz ProMotion display of an MBP M2 Max laptop. How is this not fixed after 4 macOS releases?
CoreVideo: currentVBLDelta returned 200000 for display 1 -- ignoring unreasonable value
CoreVideo: [0x7fe2fb816020] Bad CurrentVBLDelta for display 1 is zero. defaulting to 60Hz.
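For what it's worth, one alternative we are looking at (a sketch only; myView and step(_:) are placeholders) is the CADisplayLink API that macOS 14 added on NSView, though that wouldn't explain the CVDisplayLink regression:
let link = myView.displayLink(target: self, selector: #selector(step(_:)))
link.preferredFrameRateRange = CAFrameRateRange(minimum: 80, maximum: 120, preferred: 120)
link.add(to: .main, forMode: .common)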
                    
                  
                
                    
                      Under Xcode 26 beta 3 I'm trying to build a project which uses Metal. I've installed the Metal Toolchain 26.0 under Settings -> Components, but when I start a build it fails during the "Prepare build" step with the following error (repeated many times):
stat(/var/run/com.apple.security.cryptexd/mnt/com.apple.MobileAsset.MetalToolchain-v17.1.5276.7.Pb9SLL/Metal.xctoolchain/usr/bin/clang): No such file or directory (2) 
I've confirmed that there is in fact no 'clang' binary in that directory. I've tried using xcode-select to set the Xcode 26 Beta app as the active developer directory, and xcodebuild -version shows:
Xcode 26.0
Build version 17A5276g
Any ideas on other things to try?