@@ -163,7 +163,7 @@ from deepsparse.pipelines.custom_pipeline import CustomTaskPipeline
def preprocess(inputs):
    pass # define your function
- def postprocess(outputs)
+ def postprocess(outputs):
    pass # define your function
custom_pipeline = CustomTaskPipeline(
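
The hunk above cuts off inside the `CustomTaskPipeline(...)` call. For context, a minimal sketch of how the completed snippet plausibly continues is shown here; the keyword names `model_path`, `process_inputs_fn`, and `process_outputs_fn` and the placeholder model path are illustrative assumptions, not lines taken from this diff.

```python
# Sketch only: a plausible completion of the truncated constructor call above.
# Keyword argument names and the model path are assumptions for illustration.
from deepsparse.pipelines.custom_pipeline import CustomTaskPipeline

def preprocess(inputs):
    pass  # define your function

def postprocess(outputs):
    pass  # define your function

custom_pipeline = CustomTaskPipeline(
    model_path="path/to/onnx/model.onnx",  # local ONNX file or SparseZoo stub
    process_inputs_fn=preprocess,          # hook that runs before the engine
    process_outputs_fn=postprocess,        # hook that runs after the engine
)

# pipeline_outputs = custom_pipeline(pipeline_inputs)  # as in the docs below
```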
@@ -182,7 +182,7 @@ pipeline_outputs = custom_pipeline(pipeline_inputs)
**Additional Resources**
- Get Started and [Use A Model](/get-started/use-a-model)
- - Get Started and [Use A Model in a Custom Use Case)](/get-started/use-a-model/custom-use-case)
+ - Get Started and [Use A Model in a Custom Use Case](/get-started/use-a-model/custom-use-case)
- Refer to [Use Cases](/use-cases) for details on usage of supported use cases
- List of Supported Use Cases [Docs Coming Soon]
@@ -207,20 +207,19 @@ predictions.
DeepSparse Server is launched from the CLI, with configuration via either command line arguments or a configuration file.
- With the command line argument path, users specify a use case via the `task` argument (e.g. `image_classification` or `question_answering`) as
+ With the command line argument path, users specify a use case via the `task` argument (e.g., `image_classification` or `question_answering`) as
well as a model (either a local ONNX file or a SparseZoo stub) via the `model_path` argument:
```bash
- deepsparse.server task [use_case_name] --model_path [model_path]
+ deepsparse.server --task [use_case_name] --model_path [model_path]
```
With the config file path, users create a YAML file that specifies the server configuration. A YAML file looks like the following:
```yaml
- num_workers: 4 # specify multi-stream (more than one worker)
endpoints:
-   - task: [task_name] # specify use case (e.g. image_classification, question_answering)
+   - task: task_name # specify use case (e.g., image_classification, question_answering)
    route: /predict # specify the route of the endpoint
-     model: [model_path] # specify sparsezoo stub or path to local onnx file
+     model: model_path # specify sparsezoo stub or path to local onnx file
    name: any_name_you_want
# - ... add as many endpoints as needed
@@ -229,7 +228,7 @@ endpoints:
The Server is then launched with the following:
```bash
- deepsparse.server config_file config.yaml
+ deepsparse.server --config_file config.yaml
```
Clients interact with the Server via HTTP. Because the Server uses Pipelines internally,
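
The paragraph above is truncated by the hunk boundary, but the client interaction it describes can be sketched roughly as follows; the port, route, and payload fields are illustrative assumptions (a question-answering style request against the `/predict` route from the YAML example), not content from this diff.

```python
# Sketch only: an HTTP client calling the endpoint configured in the YAML above.
# The port (assumed default 5543), route, and payload fields are illustrative.
import requests

url = "http://localhost:5543/predict"
payload = {
    "question": "What runtime serves this model?",
    "context": "DeepSparse Server wraps Pipelines behind HTTP endpoints.",
}

response = requests.post(url, json=payload)
print(response.json())
```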
@@ -284,7 +283,7 @@ onnx_filepath = "path/to/onnx/model.onnx"
batch_size = 64
# Generate random sample input
- inputs = generate_random_inputs(model = onnx_filepath, batch_size = batch_size)
+ inputs = generate_random_inputs(onnx_filepath, batch_size)
# Compile and run
engine = Engine(onnx_filepath, batch_size)
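
The hunk ends at the Engine construction. As a rough sketch based on the general DeepSparse Engine API (not on lines shown in this diff), inference on the random sample batch would typically follow:

```python
# Sketch only: run the compiled engine on the randomly generated sample inputs.
# Engine.run takes a list of numpy arrays and returns a list of output arrays.
outputs = engine.run(inputs)
print([out.shape for out in outputs])
```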