Description
Using Apple's VisionKit and Vision frameworks, there are two main ways to get text from images: VisionKit's `ImageAnalyzer` (Live Text) and Vision's `VNRecognizeTextRequest`.
Based on some OCR tests, I'm seeing that the outputs from these two methods are different. Initially, I thought `ImageAnalyzer` was running `VNRequestTextRecognitionLevel.fast` because it powers Live Text, but the outputs from `ImageAnalyzer` are sometimes better than `VNRequestTextRecognitionLevel.accurate`.
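For concreteness, here's a minimal sketch of the `ImageAnalyzer` path I'm describing; the `uiImage` parameter is a placeholder for whatever test image is being analyzed, and this assumes iOS 16+ (macOS 13+ uses `NSImage` instead):

```swift
import UIKit
import VisionKit

// Extracts Live Text from a UIImage via VisionKit's ImageAnalyzer.
func liveTextTranscript(from uiImage: UIImage) async throws -> String {
    let analyzer = ImageAnalyzer()
    // Restrict the analysis to text so no other recognizers run.
    let configuration = ImageAnalyzer.Configuration([.text])
    let analysis = try await analyzer.analyze(uiImage, configuration: configuration)
    // `transcript` concatenates all recognized text in the image.
    return analysis.transcript
}
```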
`VNRecognizeTextRequest` does have more options, including language correction and custom words.
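For comparison, a minimal sketch of the Vision path showing those options; the recognition level, the custom-word list, and the `cgImage` input here are placeholder choices, not anything prescribed by the API:

```swift
import Vision

// Runs Vision's text recognizer over a CGImage and returns the top
// candidate string for each detected line of text.
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate  // or .fast
    request.usesLanguageCorrection = true // options ImageAnalyzer doesn't expose
    request.customWords = ["VisionKit"]   // placeholder domain vocabulary

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```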
Do you know what `ImageAnalyzer` is calling in the background? Is it essentially running `VNRecognizeTextRequest`, or is it a separate model/pipeline? That naturally raises the question of which is better for OCR. My initial tests show similar performance in aggregate between `ImageAnalyzer` and `VNRequestTextRecognitionLevel.accurate`, but the results on individual test cases can vary widely between the two.
For documentation, and in case this is outside the scope of your expertise, I've asked the same question on the Apple Developer Forums here.