Core ML: Loading a Model via URL
To use an .mlmodel in Xcode, the simplest approach (covered previously) is to drag the model file into the project; Xcode then auto-generates the forward-prediction interface for the model. This is very convenient, but it makes updating the model awkward.
This post covers an alternative: loading an .mlmodel via URL. For details, see Apple's developer documentation: https://developer.apple.com/documentation/coreml/mlmodel
The flow is as follows:
1. Provide the path to the .mlmodel file, model_path:
NSString *model_path = @"path_to/.mlmodel";
2. Convert the NSString to an NSURL and compile the model at that path. Compilation produces an .mlmodelc file, which is written to a temporary location; if needed, you can save it to a permanent location (see https://developer.apple.com/documentation/coreml/core_ml_api/downloading_and_compiling_a_model_on_the_user_s_device):
NSError *error = nil;
NSURL *url = [NSURL fileURLWithPath:model_path isDirectory:NO];
NSURL *compile_url = [MLModel compileModelAtURL:url error:&error];
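Since compileModelAtURL:error: writes the .mlmodelc to a temporary directory, the Apple doc linked above recommends moving it somewhere permanent so the model is not recompiled on every launch. A minimal sketch of that step (the Application Support destination and the atomic-replace call follow Apple's sample; error handling is simplified):

```objectivec
// Persist the compiled .mlmodelc so it survives app relaunches.
NSFileManager *fm = [NSFileManager defaultManager];
NSURL *support_dir = [fm URLForDirectory:NSApplicationSupportDirectory
                                inDomain:NSUserDomainMask
                       appropriateForURL:nil
                                  create:YES
                                   error:&error];
NSURL *permanent_url =
    [support_dir URLByAppendingPathComponent:compile_url.lastPathComponent];

// Atomically replace any previously saved copy with the fresh one.
NSURL *result_url = nil;
if (![fm replaceItemAtURL:permanent_url
            withItemAtURL:compile_url
           backupItemName:nil
                  options:NSFileManagerItemReplacementUsingNewMetadataOnly
         resultingItemURL:&result_url
                    error:&error]) {
    // Fall back to the temporary copy if the move fails.
    result_url = compile_url;
}
```

After this, load the model from result_url instead of the temporary compile_url.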
3. Load the model from the compiled model's URL; the result is an MLModel:
MLModelConfiguration *model_config = [[MLModelConfiguration alloc] init];
MLModel *compiled_model = [MLModel modelWithContentsOfURL:compile_url configuration:model_config error:&error];
4. Note that with this dynamic-compilation approach, Core ML only exposes the MLFeatureProvider protocol, which works much like a pure virtual interface in C++. You therefore have to write your own classes (conforming to MLFeatureProvider) to feed the model its input and to read its output. Below are two such wrapper classes, MLModelInput and MLModelOutput. MLModelInput passes data to the model under the model's input name (inputName), while MLModelOutput retrieves a prediction result for a given output name (featureName).
The header file:
#import <Foundation/Foundation.h>
#import <CoreML/CoreML.h>

NS_ASSUME_NONNULL_BEGIN

/// Model prediction input type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0))
@interface MLModelInput : NSObject <MLFeatureProvider>

// the input name, default is "image"
@property (nonatomic, strong) NSString *inputName;

// data as a color (kCVPixelFormatType_32BGRA) image buffer
@property (readwrite, nonatomic) CVPixelBufferRef data;

- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithData:(CVPixelBufferRef)data inputName:(NSString *)inputName;

@end

/// Model prediction output type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0))
@interface MLModelOutput : NSObject <MLFeatureProvider>

// the output name, default is "feature"
@property (nonatomic, strong) NSString *outputName;

// feature as a multidimensional array of doubles
@property (readwrite, nonatomic) MLMultiArray *feature;

- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithFeature:(MLMultiArray *)feature;

@end

NS_ASSUME_NONNULL_END
The implementation file:
#import "MLModelInput.h" // the header shown above

// default output feature name used by MLModelOutput
static NSString * const DefaultOutputValueName = @"feature";

@implementation MLModelInput

- (instancetype)initWithData:(CVPixelBufferRef)data inputName:(nonnull NSString *)inputName {
    self = [super init];
    if (self) {
        _data = data;
        _inputName = inputName;
    }
    return self;
}

- (NSSet<NSString *> *)featureNames {
    return [NSSet setWithArray:@[self.inputName]];
}

- (nullable MLFeatureValue *)featureValueForName:(nonnull NSString *)featureName {
    if ([featureName isEqualToString:self.inputName]) {
        return [MLFeatureValue featureValueWithPixelBuffer:_data];
    }
    return nil;
}

@end

@implementation MLModelOutput

- (instancetype)initWithFeature:(MLMultiArray *)feature {
    self = [super init];
    if (self) {
        _feature = feature;
        _outputName = DefaultOutputValueName;
    }
    return self;
}

- (NSSet<NSString *> *)featureNames {
    return [NSSet setWithArray:@[self.outputName]];
}

- (nullable MLFeatureValue *)featureValueForName:(nonnull NSString *)featureName {
    if ([featureName isEqualToString:self.outputName]) {
        return [MLFeatureValue featureValueWithMultiArray:_feature];
    }
    return nil;
}

@end
5. Run prediction and fetch the results. With the two classes above in place, prepare the input data as a CVPixelBuffer, obtain the input name from the model description (MLModelDescription), create an MLModelInput with that name, run prediction, and then read each output by its feature name; each output is an MLMultiArray:
MLModelDescription *model_description = compiled_model.modelDescription;
NSDictionary *dict = model_description.inputDescriptionsByName;
NSArray<NSString *> *feature_names = [dict allKeys];
NSString *input_feature_name = feature_names[0];
NSError *error;
// buffer is the input CVPixelBufferRef prepared beforehand
MLModelInput *model_input = [[MLModelInput alloc] initWithData:buffer inputName:input_feature_name];
MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
id<MLFeatureProvider> model_output = [compiled_model predictionFromFeatures:model_input options:option error:&error];
NSSet<NSString *> *out_feature_names = [model_output featureNames];
NSArray<NSString *> *name_list = [out_feature_names allObjects];
NSUInteger size = [name_list count];
std::vector<MLMultiArray *> feature_list;
for (NSUInteger i = 0; i < size; i++) {
NSString *name = [name_list objectAtIndex:i];
MLMultiArray *feature = [model_output featureValueForName:name].multiArrayValue;
feature_list.push_back(feature);
}
6. Read the prediction results out of each MLMultiArray for post-processing.
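As a sketch of step 6, assuming the model's outputs are double-typed (worth checking feature.dataType before casting), the values can be read off the raw data pointer, or element by element via the subscript API, which applies the array's strides for you:

```objectivec
// Read the raw doubles out of one predicted MLMultiArray.
MLMultiArray *feature = feature_list[0];
if (feature.dataType == MLMultiArrayDataTypeDouble) {
    double *raw = (double *)feature.dataPointer;
    NSUInteger count = feature.count; // total number of elements
    for (NSUInteger i = 0; i < count; i++) {
        double v = raw[i];
        // ... post-process v (e.g. argmax, thresholding) ...
    }
}

// Alternatively, index one element of a 3-D output by its coordinates;
// the subscript handles the stride arithmetic internally:
// NSNumber *v = feature[@[@0, @1, @2]];
```

Reading via dataPointer assumes the elements are laid out contiguously in stride order; the subscript form is slower but always safe.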