GNA
Overview
The GNA API provides access to Intel’s Gaussian Mixture Model and Neural Network Accelerator (GNA).
API Reference
- group gna_interface
This file contains the driver APIs for Intel’s Gaussian Mixture Model and Neural Network Accelerator (GNA).
Functions
-
static inline int gna_configure(const struct device *dev, struct gna_config *cfg)
Configure the GNA device.
Configure the GNA device. The GNA device must be configured before registering a model or performing inference.
- Parameters
dev – Pointer to the device structure for the driver instance.
cfg – Device configuration information.
- Return values
0 – If the configuration is successful.
-errno – Negative error code in case of a failure.
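For illustration, a minimal configuration sketch in C is shown below. The device name "GNA_0" and the use of device_get_binding() are assumptions made for the example; the actual device name depends on the SoC and board. Since struct gna_config is currently empty, a zero-initialized structure is passed.

#include <device.h>
#include <errno.h>
#include <gna.h>

static const struct device *gna_dev;

static int app_gna_setup(void)
{
	/* "GNA_0" is a hypothetical device name; use the name defined
	 * by your board/SoC configuration.
	 */
	gna_dev = device_get_binding("GNA_0");
	if (gna_dev == NULL) {
		return -ENODEV;
	}

	/* struct gna_config is currently empty, so a zero-initialized
	 * configuration is sufficient.
	 */
	struct gna_config cfg = { 0 };

	return gna_configure(gna_dev, &cfg);
}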
-
static inline int gna_register_model(const struct device *dev, struct gna_model_info *model, void **model_handle)
Register a neural network model.
Register a neural network model with the GNA device. A model needs to be registered before it can be used to perform inference.
- Parameters
dev – Pointer to the device structure for the driver instance.
model – Information about the neural network model.
model_handle – Handle to the registered model if registration succeeds.
- Return values
0 – If registration of the model is successful.
-errno – Negative error code in case of a failure.
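A hedged registration sketch follows. The struct gna_model_info field names (header, rw_region, ro_region) and the my_model_* symbols are assumptions for illustration; the model header and memory regions are normally produced by the application's model preparation tooling.

#include <device.h>
#include <gna.h>

/* Application-provided model data; these symbols are hypothetical. */
extern struct gna_model_header my_model_header;
extern char my_model_rw_region[];
extern char my_model_ro_region[];

static void *model_handle;

static int app_register_model(const struct device *gna_dev)
{
	/* Field names are assumptions based on gna.h and may differ. */
	struct gna_model_info info = {
		.header = &my_model_header,
		.rw_region = my_model_rw_region,
		.ro_region = my_model_ro_region,
	};

	/* On success, model_handle identifies this model for
	 * gna_infer() and gna_deregister_model().
	 */
	return gna_register_model(gna_dev, &info, &model_handle);
}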
-
static inline int gna_deregister_model(const struct device *dev, void *model)
De-register a previously registered neural network model.
De-register a previously registered neural network model from the GNA device. De-registration may be done to free up memory for registering another model. Once de-registered, the model can no longer be used to perform inference.
- Parameters
dev – Pointer to the device structure for the driver instance.
model – Model handle output by the gna_register_model API.
- Return values
0 – If de-registration of the model is successful.
-errno – Negative error code in case of a failure.
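A brief sketch of releasing a model, assuming gna_dev and model_handle were obtained as in the earlier examples:

#include <device.h>
#include <gna.h>

static int app_release_model(const struct device *gna_dev, void *model_handle)
{
	/* model_handle is the handle produced by gna_register_model(). */
	int ret = gna_deregister_model(gna_dev, model_handle);

	/* Once de-registered, the handle must not be passed to
	 * gna_infer() again; the freed memory can be reused to
	 * register another model.
	 */
	return ret;
}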
-
static inline int gna_infer(const struct device *dev, struct gna_inference_req *req, gna_callback callback)
Perform inference on a model with input vectors.
Make an inference request on a previously registered model with an input data vector. A callback is provided for notification of inference completion.
- Parameters
dev – Pointer to the device structure for the driver instance.
req – Information required to perform inference on a neural network.
callback – A callback function to notify inference completion.
- Return values
0 – If the request is accepted.
-errno – Negative error code in case of a failure.
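The sketch below issues an asynchronous inference request. The struct gna_inference_req fields (model_handle, input, output, intermediate), the callback signature taking a struct gna_inference_resp pointer, and the GNA_RESULT_INFERENCE_COMPLETE result value are assumptions drawn from gna.h and may differ; the buffer sizes are placeholders.

#include <device.h>
#include <gna.h>
#include <stdint.h>

/* Application-provided buffers; sizes are placeholders. */
static int16_t input_vector[64];
static int16_t output_vector[16];
static uint8_t intermediate_buf[1024];

static void infer_done(struct gna_inference_resp *resp)
{
	/* Invoked by the driver when inference completes. The result
	 * field and GNA_RESULT_INFERENCE_COMPLETE value are assumptions
	 * taken from gna.h.
	 */
	if (resp->result != GNA_RESULT_INFERENCE_COMPLETE) {
		/* Handle the error, e.g. log resp->result or retry. */
	}
}

static int app_run_inference(const struct device *gna_dev, void *model_handle)
{
	/* Field names are assumptions based on gna.h and may differ. */
	struct gna_inference_req req = {
		.model_handle = model_handle,
		.input = input_vector,
		.output = output_vector,
		.intermediate = intermediate_buf,
	};

	/* Returns 0 if the request is accepted; infer_done() is called
	 * later with the inference response.
	 */
	return gna_infer(gna_dev, &req, infer_done);
}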
-
struct gna_config
- #include <gna.h>
GNA driver configuration structure. Currently empty.
-
struct gna_model_header
- #include <gna.h>
GNA Neural Network model header. Describes the key parameters of the neural network model.
-
struct gna_model_info
- #include <gna.h>
GNA Neural Network model information to be provided by the application during model registration.
-
struct gna_inference_req
- #include <gna.h>
Request to perform inference on the given neural network model.
-
struct gna_inference_stats
- #include <gna.h>
Statistics of the inference operation returned after completion.
-
struct gna_inference_resp
- #include <gna.h>
Structure containing a response to the inference request.
-