---
title: "Logging"
metaTitle: "DeepSparse Logging"
metaDescription: "System and Data Logging with DeepSparse"
index: 6000
---

# DeepSparse Logging

This page explains how to use DeepSparse Logging to monitor your deployment.

There are many types of monitoring tasks that you may want to perform to confirm your production system is working correctly.
The difficulty of these tasks varies from relatively easy (simple system performance analysis) to challenging
(assessing the accuracy of the system in the wild by manually labeling the input data distribution post-factum). Examples include:
- **System performance:** what is the latency/throughput of a query?
- **Data quality:** is there an issue getting data to my model?
- **Data distribution shift:** does the input data distribution deviate over time to the point where the model stops delivering reliable predictions?
- **Model accuracy:** what percentage of correct predictions does the model achieve?

DeepSparse Logging is designed to provide maximum flexibility for you to extract whatever data is needed from a
production inference pipeline into the logging system of your choice.

## Installation

This page requires the [DeepSparse Server Install](/get-started/install/deepsparse).

## Metrics

DeepSparse Logging provides access to two types of metrics.

### System Logging Metrics

System Logging gives you access to granular performance metrics for quick and efficient diagnosis of system health.

There is one group of System Logging Metrics currently available: Inference Latency. For each inference request, DeepSparse Server logs the following:
1. Pre-processing Time - seconds spent in the pre-processing step
2. Engine Time - seconds spent in the engine forward pass step
3. Post-processing Time - seconds spent in the post-processing step
4. Total Time - seconds for the end-to-end response time (the sum of the prior three)
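
For illustration only, a single request's latency breakdown might look like the following (a hedged sketch; the field names here are hypothetical, not the Server's exact log identifiers):

```python
# Hypothetical latency breakdown for one request, in seconds.
# Field names are illustrative, not the Server's exact identifiers.
timings = {
    "pre_processing": 0.0021,
    "engine_forward": 0.0154,
    "post_processing": 0.0008,
}
timings["total"] = sum(timings.values())  # end-to-end time = sum of the prior three
print(timings["total"])  # ~0.0183
```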

### Data Logging Metrics

Data Logging gives you access to data at each stage of an inference pipeline.
This facilitates inspecting the data, understanding its properties, detecting edge cases, and identifying possible data drift.

There are four stages in the inference pipeline where Data Logging can occur (sketched in code after this list):
- `pipeline_inputs`: raw input passed to the inference pipeline by the user
- `engine_inputs`: pre-processed tensors passed to the engine for the forward pass
- `engine_outputs`: result of the engine forward pass (e.g., the raw logits)
- `pipeline_outputs`: final output returned to the pipeline caller
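
Conceptually, the four stages chain together as in this schematic (an illustrative pseudo-pipeline, not DeepSparse's internal API):

```python
def run_pipeline(pipeline_inputs, preprocess, engine, postprocess):
    """Schematic of the four loggable stages of an inference pipeline."""
    # pipeline_inputs is loggable as-is on entry.
    engine_inputs = preprocess(pipeline_inputs)     # loggable: engine_inputs
    engine_outputs = engine(engine_inputs)          # loggable: engine_outputs
    pipeline_outputs = postprocess(engine_outputs)  # loggable: pipeline_outputs
    return pipeline_outputs
```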

At each stage, you can specify functions to be applied to the data before logging. Example functions include the identity function
(for logging the raw input/output) and the mean function (e.g., for monitoring the mean pixel value of an image).

There are three types of functions that can be applied to target data at each stage:
- Built-in functions: pre-written functions provided by DeepSparse ([see the list on GitHub](https://github.com/neuralmagic/deepsparse/blob/main/src/deepsparse/loggers/metric_functions/built_ins.py))
- Framework functions: functions from `torch` or `numpy`
- Custom functions: user-provided functions (see the sketch below)
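
A custom function takes the data at the target stage as its single argument and returns a single loggable value. Here is a minimal sketch; the file and function names are hypothetical, and you would reference it in a config as `my_functions.py:mean_pixel`:

```python
# my_functions.py -- a hypothetical custom metric function.
import numpy as np


def mean_pixel(image: np.ndarray) -> float:
    """Return the mean pixel value of an image as a single loggable float."""
    return float(np.mean(image))
```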

## Configuration

The YAML-based Server Config file is used to configure both System and Data Logging.
- System Logging is *enabled* by default. If no logger is specified, the Python Logger is used.
- Data Logging is *disabled* by default. The config allows you to specify what data to log.

See [the Server documentation](/user-guide/deploying-deepsparse/deepsparse-server) for more details on the Server config file.

### Logging YAML Syntax

There are two key elements that should be added to the Server Config to set up logging.

First is `loggers`. This element configures the loggers used by the Server. It is a dictionary of the form `{logger_name: {arg_1: arg_value}}`.

Second is `data_logging`. This element identifies which data should be logged for an endpoint and how. It is a dictionary of the form `{identifier: [log_config]}`.

- `identifier` specifies the stage at which logging should occur. It can be either a pipeline `stage` (see the stages above) or `stage.property` if the data type
at a particular stage has a property. If the data type at a `stage` is a dictionary or list, you can access elements via slicing, indexing, or dict access,
for example `stage[0][:,:,0]['key3']`.

- `log_config` specifies which function to apply, which logger(s) to use, and how often to log. It is a dictionary of the form
`{func: name, frequency: freq, target_loggers: [logger_names]}`.
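
To make the identifier syntax concrete, here is an illustrative sketch (plain Python, not the Server's actual resolution code) of what `stage.property` and `stage.property[idx]` style identifiers point at:

```python
import numpy as np


# Illustrative stand-in for the data available at the pipeline_inputs stage.
class PipelineInputs:
    def __init__(self, images):
        self.images = images


data = PipelineInputs(images=[np.zeros((224, 224, 3))])

# "pipeline_inputs.images"    -> the `images` property of the stage's data
# "pipeline_inputs.images[0]" -> the first image in the batch
first_image = data.images[0]
print(first_image.shape)  # (224, 224, 3)
```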

### Tangible Example

Here's an example config for an image classification server:

```yaml
# example-config.yaml
loggers:
  python: # logs to stdout
  prometheus: # logs to prometheus on port 6100
    port: 6100

endpoints:
  - task: image_classification
    route: /image_classification/predict
    model: zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none
    data_logging:
      pipeline_inputs.images: # applies to the images (of the form stage.property)
        - func: np.shape # framework function
          frequency: 1
          target_loggers:
            - python

      pipeline_inputs.images[0]: # applies to the first image (of the form stage.property[idx])
        - func: mean_pixels_per_channel # built-in function
          frequency: 2
          target_loggers:
            - python
        - func: fraction_zeros # built-in function
          frequency: 2
          target_loggers:
            - prometheus

      engine_inputs: # applies to the engine_inputs data (of the form stage)
        - func: np.shape # framework function
          frequency: 1
          target_loggers:
            - python
```

This configuration performs the following data logging at each respective stage of the pipeline:
- System Logging is enabled by default and logs to Prometheus and stdout
- Logs the shape of the input batch provided by the user to stdout
- Logs the mean pixels per channel of the first image in the batch to stdout and its fraction of zero-valued pixels to Prometheus
- Logs the shape of the input passed to the engine to stdout
- No logging occurs at any other pipeline stage

## Loggers

DeepSparse Logging includes options to log to Standard Output and to Prometheus out of the box, as well as
the ability to create a Custom Logger.

### Python Logger

The Python Logger logs data to Standard Output. It is useful for debugging and inspecting an inference pipeline. It
accepts no arguments and is configured with the following:

```yaml
loggers:
  python:
```

### Prometheus Logger

DeepSparse is integrated with Prometheus, enabling you to easily instrument your model service.
The Prometheus Logger accepts some optional arguments and is configured as follows:

```yaml
loggers:
  prometheus:
    port: 6100
    text_log_save_frequency: 10 # optional
    text_log_save_dir: text/log/save/dir # optional
    text_log_file_name: text_log_file_name # optional
```

There are four types of metrics in Prometheus (Counter, Gauge, Summary, and Histogram). DeepSparse uses
[Summary](https://prometheus.io/docs/concepts/metric_types/#summary) under the hood, so make sure the data you
are logging to Prometheus is an int or a float.
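
To quickly confirm that metrics are being exposed, you can fetch the metrics endpoint directly (a minimal sketch, assuming the Prometheus Logger is configured on port 6100 as above):

```python
import requests

# The Prometheus Logger serves metrics over HTTP on the configured port.
metrics_text = requests.get("http://localhost:6100").text
print(metrics_text)  # plain-text Prometheus exposition format
```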

### Custom Logger

If you need a custom logger, you can create a class that inherits from `BaseLogger`
and implements the `log` method. The `log` method is called at each configured pipeline stage and is responsible for passing the data to your logging destination.

```python
from deepsparse.loggers import BaseLogger
from typing import Any, Optional


class CustomLogger(BaseLogger):
    def log(self, identifier: str, value: Any, category: Optional[str] = None):
        """
        :param identifier: The name of the item being logged.
            In the simplest case, this is a string of the form
            "<pipeline_name>/<logging_target>",
            e.g. "image_classification/pipeline_inputs"
        :param value: The item logged along with the identifier
        :param category: The metric category that the log belongs to.
            We recommend sticking to the internal convention
            established in the MetricCategories enum.
        """
        print("Logging from a custom logger")
        print(identifier)
        print(value)
```
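
As a quick sanity check, you can invoke the logger directly outside the Server (hypothetical values; this assumes `CustomLogger` can be constructed with no arguments):

```python
logger = CustomLogger()
logger.log(
    identifier="image_classification/pipeline_inputs",  # hypothetical identifier
    value=[0.1, 0.2, 0.3],                              # hypothetical data
)
```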

Once a custom logger is implemented, it can be referenced from a config file:

```yaml
# server-config-with-custom-logger.yaml
loggers:
  custom_logger:
    path: example_custom_logger.py:CustomLogger
    # arg_1: your_arg_1

endpoints:
  - task: sentiment_analysis
    route: /sentiment_analysis/predict
    model: zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned80_quant-none-vnni
    name: sentiment_analysis_pipeline
    data_logging:
      pipeline_inputs:
        - func: identity
          frequency: 1
          target_loggers:
            - custom_logger
```

Download the following for an example of a custom logger:

```bash
wget https://raw.githubusercontent.com/neuralmagic/docs/rs/logging-feature/src/files-for-examples/logging/example_custom_logger.py
wget https://raw.githubusercontent.com/neuralmagic/docs/rs/logging-feature/src/files-for-examples/logging/server-config-with-custom-logger.yaml
```

Launch the server:

```bash
deepsparse.server --config-file server-config-with-custom-logger.yaml
```

Submit a request:

```python
import requests

url = "http://0.0.0.0:5543/sentiment_analysis/predict"
obj = {"sequences": "Snorlax loves my Tesla!"}
resp = requests.post(url=url, json=obj)
print(resp.text)
```

You should see data printed to the Server's standard output.

See [our Prometheus logger implementation](https://github.com/neuralmagic/deepsparse/blob/main/src/deepsparse/loggers/prometheus_logger.py)
for inspiration on implementing a logger.

## Usage

DeepSparse Logging is currently supported for use with DeepSparse Server.

### Server Usage

The Server startup CLI command accepts a YAML configuration file (which contains both logging-specific and general
configuration details) via the `--config-file` argument.

Data Logging is configured at the endpoint level. The configuration file below creates a Server with two endpoints
(one for image classification and one for sentiment analysis):

```yaml
# server-config.yaml
loggers:
  python:
  prometheus:
    port: 6100

endpoints:
  - task: image_classification
    route: /image_classification/predict
    model: zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none
    name: image_classification_pipeline
    data_logging:
      pipeline_inputs.images:
        - func: np.shape
          frequency: 1
          target_loggers:
            - python

      pipeline_inputs.images[0]:
        - func: max_pixels_per_channel
          frequency: 1
          target_loggers:
            - python
        - func: mean_pixels_per_channel
          frequency: 1
          target_loggers:
            - python
        - func: fraction_zeros
          frequency: 1
          target_loggers:
            - prometheus

      pipeline_outputs.scores[0]:
        - func: identity
          frequency: 1
          target_loggers:
            - prometheus

  - task: sentiment_analysis
    route: /sentiment_analysis/predict
    model: zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned80_quant-none-vnni
    name: sentiment_analysis_pipeline
    data_logging:
      engine_inputs:
        - func: example_custom_fn.py:sequence_length
          frequency: 1
          target_loggers:
            - python
            - prometheus

      pipeline_outputs.scores[0]:
        - func: identity
          frequency: 1
          target_loggers:
            - python
            - prometheus
```

#### Custom Data Logging Function

The example above includes a custom function for computing sequence lengths. Custom
functions should be defined in a local Python file. They should accept one argument
and return a single output.

The `example_custom_fn.py` file could look like the following:

```python
import numpy as np
from typing import List


# Engine inputs to transformers is a list of 3 np.arrays representing
# the encoded input, the attention mask, and the token types.
# Each np.array is of shape (batch, max_seq_len), so
# engine_inputs[0][0] gives the encodings of the first item in the batch.
# The number of non-zeros in this slice is the sequence length.
def sequence_length(engine_inputs: List[np.ndarray]):
    return np.count_nonzero(engine_inputs[0][0])
```
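
To sanity-check the function locally, you can call it with a fabricated engine input (hypothetical token IDs; a batch of one sequence padded to length 8, with 5 real tokens):

```python
ids = np.array([[101, 2054, 2003, 2023, 102, 0, 0, 0]])  # 5 non-zero tokens
mask = np.ones_like(ids)
token_types = np.zeros_like(ids)
print(sequence_length([ids, mask, token_types]))  # -> 5
```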

#### Launching the Server and Logging Metrics

Download `server-config.yaml`, `example_custom_fn.py`, and `goldfish.jpg` for the demo:

```bash
wget https://raw.githubusercontent.com/neuralmagic/docs/rs/logging-feature/src/files-for-examples/logging/server-config.yaml
wget https://raw.githubusercontent.com/neuralmagic/docs/rs/logging-feature/src/files-for-examples/logging/example_custom_fn.py
wget https://raw.githubusercontent.com/neuralmagic/docs/rs/logging-feature/src/files-for-examples/logging/goldfish.jpg
```

Launch the Server with the following:

```bash
deepsparse.server --config-file server-config.yaml
```

Submit a request to the image classification endpoint:

```python
import requests

url = "http://0.0.0.0:5543/image_classification/predict/from_files"
paths = ["goldfish.jpg"]
files = [("request", open(img, "rb")) for img in paths]
resp = requests.post(url=url, files=files)
print(resp.text)
```

Submit a request to the sentiment analysis endpoint with the following:

```python
import requests

url = "http://0.0.0.0:5543/sentiment_analysis/predict"
obj = {"sequences": "Snorlax loves my Tesla!"}
resp = requests.post(url=url, json=obj)
print(resp.text)
```

You should see the metrics logged to the Server's standard output and to Prometheus (visit `http://localhost:6100` to quickly inspect the exposed metrics).