This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 68d269a

Rs/intro (#121)

* Updated DeepSparse Enterprise / DeepSparse Community naming; added new Home, Optimize for Inference, Deploy on CPUs (WIP), and Quick Tour pages; updated Try-A-Model to Use-A-Model
* Renamed "DeepSparse Engine" to "DeepSparse" on the sidebar
* Removed DeepSparse Engine, DeepSparse Platform, and DeepSparse Enterprise Edition naming
* Added new files
* Removed embedding extraction
* Fixed typo
* Finished intro content
* Added AWS Lambda and GCP Cloud Run examples
* Cleaned up a couple of rendering issues
* Applied review updates (co-authored by Jeannie Finks) to faqs.mdx, quick-tour.mdx, nlp-text-classification.mdx, cv-object-detection.mdx, custom-use-case.mdx, src/content/index.mdx, src/content/details/faqs.mdx, src/content/get-started/install.mdx, src/content/get-started/install/deepsparse.mdx, src/content/get-started/install/deepsparse-ent.mdx, src/content/get-started/use-a-model/cv-object-detection.mdx, src/content/get-started/use-a-model/nlp-text-classification.mdx, src/content/index/optimize-workflow.mdx, src/content/index/deploy-workflow.mdx, src/content/index/quick-tour.mdx, src/content/products/deepsparse/community.mdx, src/content/products/deepsparse/enterprise.mdx, src/content/use-cases/natural-language-processing/deploying.mdx, src/content/user-guide/deploying-deepsparse.mdx, src/content/user-guide/deploying-deepsparse/aws-lambda.mdx, and src/content/user-guide/deploying-deepsparse/google-cloud-run.mdx

Co-authored-by: Jeannie Finks <[email protected]>
1 parent adceb91 commit 68d269a

Some content is hidden

Large commits have some content hidden by default.

55 files changed (+1105, -313 lines)

src/content/details.mdx

Lines changed: 1 addition & 1 deletion
@@ -6,4 +6,4 @@ index: 5000
 skipToChild: True
 ---

-# Details
+# Details

src/content/details/faqs.mdx

Lines changed: 29 additions & 29 deletions
@@ -1,8 +1,8 @@
 ---
 title: "FAQs"
 metaTitle: "FAQs"
-metaDescription: "FAQs for the DeepSparse product from Neural Magic"
-index: 2000
+metaDescription: "FAQs for the Neural Magic Platform"
+index: 4000
 ---

 # FAQs
@@ -11,22 +11,22 @@ index: 2000

 **What is Neural Magic?**

-Founded by a team of award-winning MIT computer scientists and funded by Amdocs, Andreessen Horowitz, Comcast Ventures, NEA, Pillar VC, and
-Ridgeline Partners, Neural Magic is the creator and maintainer of the Deep Sparse Platform. It has several components, including the
-[DeepSparse Engine,](/products/deepsparse) a CPU runtime that runs sparse models at GPU speeds. To enable companies the ability to use
-ubiquitous and unconstrained CPU resources, Neural Magic includes [SparseML](/products/sparseml) and the [SparseZoo,](/products/sparsezoo)
-open-sourced model optimization technologies that allow users to achieve performance breakthroughs, at scale, with all the flexibility of software.
+Neural Magic was founded by a team of award-winning MIT computer scientists and is funded by Amdocs, Andreessen Horowitz, Comcast Ventures, NEA, Pillar
+VC, and Ridgeline Partners. The Neural Magic Platform includes several components: [DeepSparse](/products/deepsparse), [SparseML](/products/sparseml),
+and [SparseZoo](/products/sparsezoo). DeepSparse is an inference runtime offering GPU-class performance on CPUs and tooling to
+integrate ML into your application. [SparseML](/products/sparseml) and [SparseZoo](/products/sparsezoo) are an open-source tooling and model repository
+combination that enables you to create an inference-optimized sparse model for deployment with DeepSparse.

-**What is the DeepSparse Engine?**
+Together, these components remove the tradeoff between performance and the simplicity and scalability of software-delivered deployments.

-The DeepSparse Engine, created by Neural Magic, is a general purpose engine for machine learning, enabling machine learning to be practically
-run in new places, on new kinds of workloads. It delivers state of art, GPU-class performance for the deep learning applications running on x86
-CPUs. The DeepSparse Engine achieves its performance using breakthrough algorithms that reduce the computation needed for neural network execution
-and accelerate the resulting memory-bound computation.
+**What is DeepSparse?**
+
+DeepSparse, created by Neural Magic, is an inference runtime for deep learning models. It delivers state-of-the-art, GPU-class performance on commodity CPUs
+as well as tooling for integrating a model into an application and monitoring models in production.

 **Why Neural Magic?**

-Learn more about Neural Magic and the DeepSparse Engine (formerly known as the Neural Magic Inference Engine).
+Learn more about Neural Magic and DeepSparse (formerly known as the Neural Magic Inference Engine).
 [Watch the Why Neural Magic video](https://youtu.be/zJy_8uPZd0o)

 **How does Neural Magic make it work?**
@@ -44,8 +44,8 @@ for our end users to train and infer on for their deep learning needs, and have
 Our inference engine supports all versions of TensorFlow <= 2.0; support for the Keras API is through TensorFlow 2.0.

 **Do you run on AMD hardware?**
-
-The DeepSparse Engine is validated to work on x86 Intel (Haswell generation and later) and AMD CPUs running Linux, with
+
+DeepSparse is validated to work on x86 Intel (Haswell generation and later) and AMD CPUs running Linux, with
 support for AVX2, AVX-512, and VNNI instruction sets. Specific support details for some algorithms over different microarchitectures
 [is available.](/user-guide/deepsparse-engine/hardware-support)

@@ -57,7 +57,7 @@ market adoption and deep learning use cases.
 We are actively working on ARM support and it’s slated for release late-2022. We would like to hear your use cases and keep you in the
 loop! [Contact us to continue the conversation.](https://neuralmagic.com/contact/)

-**To what use cases is the Deep Sparse Platform best suited?**
+**To what use cases is the Neural Magic Platform best suited?**

 We focus on the models and use cases related to computer vision and NLP due to cost sensitivity and both real time and throughput constraints.
 The belief now is GPUs are required for deployment.
@@ -98,18 +98,18 @@ ___

 **Which instruction sets are supported and do we have to enable certain settings?**

-AVX2, AVX-512, and VNNI. The DeepSparse Engine will automatically utilize the most effective available
+AVX2, AVX-512, and VNNI. DeepSparse will automatically utilize the most effective available
 instructions for the task. Depending on your goals and hardware priorities, optimal performance can be found.
 Neural Magic is happy to discuss your use cases and offer recommendations.

 **Are you suitable for edge deployments (i.e., in-store devices, cameras)?**

 Yes, absolutely. We can run anywhere you have a CPU with x86 instructions, including on bare metal, in the cloud,
 on-prem, or at the edge. Additionally, our model optimization tools are able to reduce the footprint of models
-across all architectures. We only guarantee performance in the DeepSparse Engine.
+across all architectures. We only guarantee performance in DeepSparse.

 We’d love to hear from users highly interested in ML performance. If you want to chat about your use cases
-or how others are leveraging the Deep Sparse Platform, [please contact us.](https://neuralmagic.com/contact/)
+or how others are leveraging the Neural Magic Platform, [please contact us.](https://neuralmagic.com/contact/)
 Or simply head over to the [Neural Magic GitHub repo](https://github.com/neuralmagic) and check out our tools.

 **Do you have available solutions or applications on the Microsoft/Azure platform?**
@@ -119,10 +119,10 @@ We deploy extremely easily. We are completely infrastructure-agnostic. As long a

 **Can the inference engine run on Kubernetes? How do you containerize and take advantage of underlying infrastructure?**

-The DeepSparse Engine becomes a component of your model serving solution. As a result, it can
+DeepSparse becomes a component of your model serving solution. As a result, it can
 simply plug into an existing CI/CD deployment pipeline. How you deploy, where you deploy, and what you deploy on
-becomes abstracted to the DeepSparse Engine so you can tailor your experiences. For example, you can run the
-DeepSparse Engine on a CPU VM environment, deployed via a Docker file and managed through a Kubernetes environment.
+becomes abstracted to DeepSparse so you can tailor your experiences. For example, you can run
+DeepSparse on a CPU VM environment, deployed via a Docker file and managed through a Kubernetes environment.

 ___

@@ -141,7 +141,7 @@ Neural Magic, _[WoodFisher: Efficient Second-Order Approximation for Neural Netw

 **When does sparsification actually happen?**

-In a scenario in which you want to sparsify and then run your own model in the DeepSparse Engine, you would first
+In a scenario in which you want to sparsify and then run your own model with DeepSparse, you would first
 sparsify your model to achieve the desired level of performance and accuracy using Neural Magic’s [SparseML](/products/sparseml) tooling.

 **What does the sparsification process look like?**
@@ -166,9 +166,9 @@ hyperparameters are fully under your control and allow you the flexibility to ea

 **Do you support INT8 and INT16 (quantized) operations?**

-The DeepSparse Engine runs at FP32 and has support for INT8. With Intel Cascade Lake generation chips and later,
+DeepSparse runs at FP32 and has support for INT8. With Intel Cascade Lake generation chips and later,
 Intel CPUs include VNNI instructions and support both INT8 and INT16 operations. On these machines, performance improvements
-from quantization will be greater. The DeepSparse Engine has INT8 support for the ONNX operators QLinearConv, QuantizeLinear,
+from quantization will be greater. DeepSparse has INT8 support for the ONNX operators QLinearConv, QuantizeLinear,
 DequantizeLinear, QLinearMatMul, and MatMulInteger. Our engine also supports 8-bit QLinearAdd, an ONNX Runtime custom operator.

 **Do you support FP16 (half precision) and BF16 operations?**
@@ -179,12 +179,12 @@ ___

 ## Runtime FAQs

-**Do users have to do any model conversion before using the DeepSparse Engine?**
+**Do users have to do any model conversion before using DeepSparse?**

-DeepSparse Engine executes on an ONNX (Open Neural Network Exchange) representation of a deep learning model.
+DeepSparse executes on an ONNX (Open Neural Network Exchange) representation of a deep learning model.
 Our software allows you to produce an ONNX representation. If working with PyTorch, we use the built-in ONNX
 export and for TensorFlow, we convert from a standard exported protobuf file to ONNX. Outside of those frameworks,
-you would need to convert your model to ONNX first before passing it to the DeepSparse Engine.
+you would need to convert your model to ONNX first before passing it to DeepSparse.

 **Why is ONNX the file format used by Neural Magic?**

@@ -212,6 +212,6 @@ Specifically for sparsification, our software keeps the architecture intact and

 **For a CPU are you using all the cores?**

-The DeepSparse Engine optimizes _how_ the model is run on the infrastructure resources applied to it. But, the Neural
+DeepSparse optimizes _how_ the model is run on the infrastructure resources applied to it. But, Neural
 Magic does not optimize for the number of cores. You are in control to specify how much of the system Neural Magic will use and run on.
 Depending on your goals (latency, throughput, and cost constraints), you can optimize your pipeline for maximum efficiency.
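
For readers following the ONNX answer above, a minimal sketch of the PyTorch route the FAQ describes: the built-in exporter produces the `model.onnx` file that DeepSparse consumes. The model choice, input shape, and filename are illustrative and not part of this commit.

```bash
# Sketch: export a PyTorch model to ONNX with torch's built-in exporter.
# The resulting model.onnx is the representation DeepSparse executes.
python - <<'EOF'
import torch
import torchvision

# Any trained PyTorch model works; ResNet-50 is only a stand-in here.
model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input used for tracing
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
EOF
```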

src/content/details/glossary.mdx

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ The machine learning community includes a vast array of terminology that can hav
 </tr>
 <tr>
 <td>Unstructured pruning</td>
-<td>A method for compressing a neural network. Unstructured pruning removes individual weight connections from a trained network. Software like Neural Magic's DeepSparse Engine runs these pruned networks faster.</td>
+<td>A method for compressing a neural network. Unstructured pruning removes individual weight connections from a trained network. Software like Neural Magic's DeepSparse runs these pruned networks faster.</td>
 </tr>
 <tr>
 <td>VNNI</td>

src/content/get-started.mdx

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: "Get Started"
 metaTitle: "Get Started"
-metaDescription: "Getting started with the Neural Magic DeepSparse Platform"
+metaDescription: "Getting started with the Neural Magic Platform"
 index: 1000
 skipToChild: True
 ---

src/content/get-started/deploy-a-model.mdx

Lines changed: 2 additions & 2 deletions
@@ -1,13 +1,13 @@
 ---
 title: "Deploy a Model"
 metaTitle: "Deploy a Model"
-metaDescription: "Deploy a model with the DeepSparse server for easy and performant ML deployments"
+metaDescription: "Deploy a model with DeepSparse Server for easy and performant ML deployments"
 index: 5000
 ---

 # Deploy a Model

-The DeepSparse package comes pre-installed with a server to enable easy and performant model deployments.
+DeepSparse comes pre-installed with a server to enable easy and performant model deployments.
 The server provides an HTTP interface to communicate and run inferences on the deployed model rather than the Python APIs or CLIs.
 It is a production-ready model serving solution built on Neural Magic's sparsification solutions resulting in faster and cheaper deployments.
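
As a minimal sketch of the workflow this page introduces, the command below serves a hypothetical local ONNX model over the HTTP interface. The task name and model path are placeholders, and port 5543 is the default noted in the pages that follow.

```bash
# Serve a model over HTTP instead of calling the Python APIs or CLIs.
# --model_path accepts a SparseZoo stub or a local ONNX model path;
# ./model.onnx is a placeholder for your exported model.
deepsparse.server --task text_classification --model_path ./model.onnx

# Once running, the server listens on port 5543 by default and exposes
# a /docs route for general info.
```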

src/content/get-started/deploy-a-model/cv-object-detection.mdx

Lines changed: 4 additions & 4 deletions
@@ -9,9 +9,9 @@ index: 2000

 This page walks through an example of deploying an object detection model with DeepSparse Server.

-The DeepSparse Server is a server wrapper around `Pipelines`, including the object detection pipeline. As such,
+DeepSparse Server is a server wrapper around `Pipelines`, including the object detection pipeline. As such,
 the server provides and HTTP interface that accepts images and image files as inputs and outputs the labeled predictions.
-With all of this built on top of the DeepSparse Engine, the simplicity of servable pipelines is combined with GPU-class performance on CPUs for sparse models.
+In this way, DeepSparse combines the simplicity of servable pipelines with GPU-class performance on CPUs for sparse models.

 ## Install Requirements

@@ -20,12 +20,12 @@ This example requires [DeepSparse Server+YOLO Install](/get-started/install/deep
 ## Start the Server

 Before starting the server, the model must be set up in the format expected for DeepSparse `Pipelines`.
-See an example of how to setup `Pipelines` in the [Try a Model](../../try-a-model) section.
+See an example of how to set up `Pipelines` in the [Use a Model](../../use-a-model) section.

 Once the `Pipelines` are set up, the `deepsparse.server` command launches a server with the model at `--model_path` inside. The `model_path` can either
 be a SparseZoo stub or a path to a local `model.onnx` file.

-The command below shows how to start up the DeepSparse Server for a sparsified YOLOv5l model trained on the COCO dataset from the SparseZoo.
+The command below shows how to start up DeepSparse Server for a sparsified YOLOv5l model trained on the COCO dataset from the SparseZoo.
 The output confirms the server was started on port `:5543` with a `/docs` route for general info and a `/predict/from_files` route for inference.

 ```bash
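
The `bash` block above is truncated in this view. A sketch of the shape it likely takes, plus a request against the `/predict/from_files` route mentioned above; the SparseZoo stub is an explicit placeholder, and the multipart field name `request` is an assumption rather than something confirmed by this commit.

```bash
# Start DeepSparse Server for object detection. The stub below is a
# placeholder for the sparsified YOLOv5l COCO model referenced above.
deepsparse.server --task yolo --model_path "zoo:<yolov5l-coco-stub>"

# Post an image file to the /predict/from_files route once the server
# reports it is listening on port 5543. The 'request' field name is an
# assumption, not confirmed by this commit.
curl -X POST "http://localhost:5543/predict/from_files" \
  -H "Content-Type: multipart/form-data" \
  -F "request=@sample.jpg"
```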

src/content/get-started/deploy-a-model/nlp-text-classification.mdx

Lines changed: 4 additions & 4 deletions
@@ -9,9 +9,9 @@ index: 1000

 This page walks through an example of deploying a text-classification model with DeepSparse Server.

-The DeepSparse Server is a server wrapper around `Pipelines`, including the sentiment analysis pipeline. As such,
+DeepSparse Server is a server wrapper around `Pipelines`, including the sentiment analysis pipeline. As such,
 the server provides an HTTP interface that accepts raw text sequences as inputs and responds with the labeled predictions.
-With all of this built on top of the DeepSparse Engine, the simplicity of servable pipelines is combined with GPU-class performance on CPUs for sparse models.
+In this way, DeepSparse combines the simplicity of servable pipelines with GPU-class performance on CPUs for sparse models.

 ## Install Requirements

@@ -20,12 +20,12 @@ This example requires [DeepSparse Server Install](/get-started/install/deepspars
 ## Start the Server

 Before starting the server, the model must be set up in the format expected for DeepSparse `Pipelines`.
-See an example of how to set up `Pipelines` in the [Try a Model](../../try-a-model) section.
+See an example of how to set up `Pipelines` in the [Use a Model](../../use-a-model) section.

 Once the `Pipelines` are set up, the `deepsparse.server` command launches a server with the model at `--model_path` inside. The `model_path` can either
 be a SparseZoo stub or a local model path.

-The command below starts up the DeepSparse Server for a sparsified DistilBERT model (from the SparseZoo) trained on the SST2 dataset for sentiment analysis.
+The command below starts up DeepSparse Server for a sparsified DistilBERT model (from the SparseZoo) trained on the SST2 dataset for sentiment analysis.
 The output confirms the server was started on port `:5543` with a `/docs` route for general info and a `/predict` route for inference.

 ```bash
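
As above, the `bash` block is truncated in this view. A sketch of the likely shape, plus a request to the `/predict` route; the stub is a placeholder, and the `sequences` request field is an assumption based on the raw-text-sequence interface described above.

```bash
# Start DeepSparse Server for sentiment analysis. The stub below is a
# placeholder for the sparsified DistilBERT SST2 model referenced above.
deepsparse.server --task sentiment_analysis --model_path "zoo:<distilbert-sst2-stub>"

# Send raw text sequences to the /predict route. The JSON schema here
# is an assumption, not confirmed by this commit.
curl -X POST "http://localhost:5543/predict" \
  -H "Content-Type: application/json" \
  -d '{"sequences": ["The new documentation reads well."]}'
```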

src/content/get-started/install.mdx

Lines changed: 9 additions & 8 deletions
@@ -1,15 +1,16 @@
 ---
 title: "Installation"
-metaTitle: "Install Deep Sparse Platform"
-metaDescription: "Installation instructions for the Deep Sparse Platform including DeepSparse Engine, SparseML, SparseZoo"
+metaTitle: "Install Neural Magic Platform"
+metaDescription: "Installation instructions for the Neural Magic Platform including DeepSparse, SparseML, SparseZoo"
 index: 0
 ---

 # Installation

-The Neural Magic Platform is made up of core libraries that are available as Python APIs and CLIs.
-All Python APIs and CLIs are installed through pip utilizing [PyPI](https://pypi.org/user/neuralmagic/).
-We recommend you install in a [virtual environment](https://docs.python.org/3/library/venv.html) to encapsulate your local environment.
+The Neural Magic Platform contains several products: DeepSparse (available in two editions, Community and Enterprise), SparseML, and SparseZoo.
+
+Each package is installed from [PyPI](https://pypi.org/user/neuralmagic/). It is recommended to install in
+a [virtual environment](https://docs.python.org/3/library/venv.html) to encapsulate your local environment.

 ## Installing the Neural Magic Platform

@@ -24,12 +25,12 @@ Now, you are ready to install one of the Neural Magic products.
 ## Installing Products

 <LinkCards>
-<LinkCard href="./deepsparse" heading="DeepSparse">
-Install the DeepSparse Community Edition for performant inference on CPUs.
+<LinkCard href="./deepsparse" heading="DeepSparse Community">
+Install DeepSparse Community for performant inference on CPUs in dev or testing environments.
 </LinkCard>

 <LinkCard href="./deepsparse-ent" heading="DeepSparse Enterprise">
-Install the DeepSparse Enterprise Edition for performant inference on CPUs in production deployments.
+Install DeepSparse Enterprise for performant inference on CPUs in production deployments.
 </LinkCard>

 <LinkCard href="./sparseml" heading="SparseML">
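
A minimal sketch of the flow the page above recommends: create a virtual environment, then install one of the products. The package name `deepsparse` matches the DeepSparse Community install pages in this commit; the environment name is arbitrary.

```bash
# Encapsulate the install in a virtual environment, as recommended.
python3 -m venv venv
source venv/bin/activate

# Install one of the Neural Magic products from PyPI.
pip install deepsparse
```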

src/content/get-started/install/deepsparse-ent.mdx

Lines changed: 12 additions & 8 deletions
@@ -1,17 +1,21 @@
 ---
 title: "DeepSparse Enterprise"
 metaTitle: "DeepSparse Enterprise Installation"
-metaDescription: "Installation instructions for the DeepSparse Engine enabling performant neural network deployments"
+metaDescription: "Installation instructions for DeepSparse enabling performant neural network deployments"
 index: 2000
 ---

-# DeepSparse Enterprise Edition Installation
+# DeepSparse Enterprise Installation

-The [DeepSparse Engine](/products/deepsparse-ent) enables GPU-class performance on CPUs, leveraging sparsity within models to reduce FLOPs and the unique cache hierarchy on CPUs to reduce memory movement.
-The engine accepts models in the open-source [ONNX format](https://onnx.ai/), which are easily created from PyTorch and TensorFlow models.
+[DeepSparse Enterprise](/products/deepsparse-ent) enables GPU-class performance on CPUs.

-Currently, DeepSparse is tested on Python 3.7-3.10, ONNX 1.5.0-1.10.1, ONNX opset version 11+ and is [manylinux compliant](https://peps.python.org/pep-0513/).
-It is limited to Linux systems running on x86 CPU architectures.
+Currently, DeepSparse is tested on Python 3.7-3.10, ONNX 1.5.0-1.10.1, ONNX opset version 11+, and [manylinux compliant systems](https://peps.python.org/pep-0513/).
+
+We currently support x86 CPU architectures.
+
+DeepSparse is available in two versions:
+1. [**DeepSparse Community**](/products/deepsparse) is free for evaluation, research, and non-production use with our [DeepSparse Community License](https://neuralmagic.com/legal/engine-license-agreement/).
+2. [**DeepSparse Enterprise**](/products/deepsparse-ent) requires a Trial License or [can be fully licensed](https://neuralmagic.com/legal/master-software-license-and-service-agreement/) for production, commercial applications.

 ## Installing DeepSparse Enterprise

@@ -23,7 +27,7 @@ pip install deepsparse-ent

 ## Installing the Server

-The [DeepSparse Server](/use-cases/deploying-deepsparse/deepsparse-server) allows you to serve models and pipelines through an HTTP interface using the deepsparse.server CLI.
+[DeepSparse Server](/user-guide/deploying-deepsparse/deepsparse-server) allows you to serve models and pipelines through an HTTP interface using the deepsparse.server CLI.
 To install, use the following extra option:

 ```bash
@@ -37,6 +41,6 @@ To use YOLO models, install with the following extra option:

 ```bash
 pip install deepsparse-ent[yolo] # just yolo requirements
-pip install deepsparse-ent[yolo,server] # both yolo + server requirements
+pip install deepsparse-ent[yolo,server] # both yolo + server requirements
 ```
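
Pulling the page's install commands together into one sketch; the only addition is quoting the extras, since square brackets are globbed by some shells (e.g., zsh).

```bash
# Base DeepSparse Enterprise install from PyPI.
pip install deepsparse-ent

# Extras add the HTTP server and YOLO support; quoting protects the
# brackets in shells such as zsh.
pip install "deepsparse-ent[server]"
pip install "deepsparse-ent[yolo,server]"
```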
