From 125796916a964199cab176f9e4cb2a3d652a019a Mon Sep 17 00:00:00 2001 From: egenc Date: Fri, 1 Oct 2021 16:04:45 +0000 Subject: [PATCH 1/3] demo links are connected to models --- docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_en.md | 2 +- docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_large_en.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_en.md b/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_en.md index 0bf1621fc418cd..9021a3c2506e9e 100644 --- a/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_en.md +++ b/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_en.md @@ -24,7 +24,7 @@ The deep neural network architecture for assertion status detection in Spark NLP `Present`, `Absent`, `Possible`, `Planned`, `Someoneelse`, `Past`, `Family`, `None`, `Hypotetical`. {:.btn-box} - +[Live Demo](https://demo.johnsnowlabs.com/healthcare/ASSERTION/){:.button.button-orange} [Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/2.Clinical_Assertion_Model.ipynb){:.button.button-orange.button-orange-trans.co.button-icon} [Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/clinical/models/assertion_jsl_en_3.1.2_2.4_1627139823450.zip){:.button.button-orange.button-orange-trans.arr.button-icon} diff --git a/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_large_en.md b/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_large_en.md index dbbc54286d7cb8..f824ba0358fd16 100644 --- a/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_large_en.md +++ b/docs/_posts/muhammetsnts/2021-07-24-assertion_jsl_large_en.md @@ -24,7 +24,7 @@ The deep neural network architecture for assertion status detection in Spark NLP `present`, `absent`, `possible`, `planned`, `someoneelse`, `past`. 
{:.btn-box} - +[Live Demo](https://demo.johnsnowlabs.com/healthcare/ASSERTION/){:.button.button-orange} [Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/2.Clinical_Assertion_Model.ipynb){:.button.button-orange.button-orange-trans.co.button-icon} [Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/clinical/models/assertion_jsl_large_en_3.1.2_2.4_1627156678782.zip){:.button.button-orange.button-orange-trans.arr.button-icon} From 16330b778cdf051bb31aabdec8bfa900bc65ae19 Mon Sep 17 00:00:00 2001 From: Ubuntu Date: Fri, 12 Nov 2021 11:48:24 +0000 Subject: [PATCH 2/3] amazon linux 2 installation --- docs/en/install.md | 43 +++++++++++++++++++++++ docs/en/licensed_install.md | 70 +++++++++++++++++++++++++++++++++++++ 2 files changed, 113 insertions(+) diff --git a/docs/en/install.md b/docs/en/install.md index 52209feba548c5..e14c718b142607 100644 --- a/docs/en/install.md +++ b/docs/en/install.md @@ -650,3 +650,46 @@ PipelineModel.load("/tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/") - Since you are downloading and loading models/pipelines manually, this means Spark NLP is not downloading the most recent and compatible models/pipelines for you. Choosing the right model/pipeline is on you - If you are local, you can load the model/pipeline from your local FileSystem, however, if you are in a cluster setup you need to put the model/pipeline on a distributed FileSystem such as HDFS, DBFS, S3, etc. 
(i.e., `hdfs:///tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/`) + +## Amazon Linux 2 Support + +```bash +# Update Package List & Install Required Packages +sudo yum update +sudo yum install -y amazon-linux-extras +sudo yum -y install python3-pip + +# Create Python virtual environment and activate it: +python3 -m venv .sparknlp-env +source .sparknlp-env/bin/activate +``` + +Check your Java version: +- For Spark NLP versions 3.x and above, please use Java 11 +- For Spark NLP versions below 3.x, and for Spark OCR, please use Java 8 + +To check the Java versions installed on your machine: +```bash +sudo alternatives --config java +``` + +You can pick the index number (here java-8 is set as the default, index 2): + +
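If you prefer to locate the right entry programmatically rather than reading the menu by eye, the output of `alternatives --config java` can be parsed. Below is a minimal sketch; the sample menu text and the `find_selection` helper are illustrative assumptions, not captured from a real machine and not part of Spark NLP:

```python
import re

def find_selection(menu: str, needle: str) -> int:
    """Return the selection number whose command path contains `needle`."""
    for line in menu.splitlines():
        # Menu rows look like "*+ 1    /usr/lib/jvm/.../bin/java"
        m = re.match(r"[*+\s]*(\d+)\s+(\S+)", line)
        if m and needle in m.group(2):
            return int(m.group(1))
    raise ValueError(f"no alternative matching {needle!r}")

# Illustrative menu text, mimicking the shape of `alternatives --config java` output
sample = """
There are 2 programs which provide 'java'.
  Selection    Command
-----------------------------------------------
*+ 1           /usr/lib/jvm/java-11-openjdk/bin/java
   2           /usr/lib/jvm/java-1.8.0-openjdk/bin/java
"""
print(find_selection(sample, "1.8.0"))  # → 2
```

The same helper can pick out java-11 (`find_selection(sample, "java-11")` returns 1), so the index you feed back to `alternatives` does not have to be hard-coded.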
+ +If you don't have java-11 or java-8 on your system, you can easily install it via: + +```bash +sudo yum install java-1.8.0-openjdk +``` + +Now, we can start installing the required libraries: + +```bash +pip install pyspark==3.1.2 +pip install spark-nlp +``` diff --git a/docs/en/licensed_install.md b/docs/en/licensed_install.md index 6e677fb8695f6c..def1630ae08906 100644 --- a/docs/en/licensed_install.md +++ b/docs/en/licensed_install.md @@ -413,6 +413,76 @@ As you see, we did not set `.master('local[*]')` explicitly to let YARN manage t Or you can set `.master('yarn')`. +## Amazon Linux 2 Support + +```bash +# Update Package List & Install Required Packages +sudo yum update +sudo yum install -y amazon-linux-extras +sudo yum -y install python3-pip + +# Create Python virtual environment and activate it: +python3 -m venv .sparknlp-env +source .sparknlp-env/bin/activate +``` + +Check your Java version: +- For Spark NLP versions 3.x and above, please use Java 11 +- For Spark NLP versions below 3.x, and for Spark OCR, please use Java 8 + +To check the Java versions installed on your machine: +```bash +sudo alternatives --config java +``` + +You can pick the index number (here java-8 is set as the default, index 2): + +
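As a quick cross-check of the Java guidance above, the rule can be written down as a tiny lookup. This is only a sketch of the two bullets; `required_java` is a hypothetical helper name, not a Spark NLP API:

```python
def required_java(spark_nlp_major: int, uses_ocr: bool = False) -> int:
    """Java major version implied by the guidance above (sketch)."""
    if uses_ocr or spark_nlp_major < 3:
        return 8   # Spark NLP below 3.x, and Spark OCR, want Java 8
    return 11      # Spark NLP 3.x and above wants Java 11

print(required_java(3))                 # → 11
print(required_java(2))                 # → 8
print(required_java(3, uses_ocr=True))  # → 8
```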
+ +If you don't have java-11 or java-8 on your system, you can easily install it via: + +```bash +sudo yum install java-1.8.0-openjdk +``` + +Now, we can start installing the required libraries: + +```bash +pip install jupyter +``` + +We can start a Jupyter notebook via: +```bash +jupyter notebook +``` + +```python +### Now we are in the Jupyter notebook cell: +import json +import os + +with open('sparknlp_for_healthcare.json') as f: + license_keys = json.load(f) + +# Defining license key-value pairs as local variables +locals().update(license_keys) + +# Adding license key-value pairs to environment variables +os.environ.update(license_keys) + +# Installing pyspark and spark-nlp +! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION + +# Installing Spark NLP for Healthcare +! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET +``` + + + ## Get a Spark NLP for Healthcare license You can ask for a free trial of Spark NLP for Healthcare [here](https://www.johnsnowlabs.com/install/). This will automatically create a new account for you on [my.JohnSnowLabs.com](https://my.johnsnowlabs.com/). Log in to your new account, and from the `My Subscriptions` section you can download your license key as a JSON file. From 7bae0c0b0e8f352c7ceba9687ca707751d1434b2 Mon Sep 17 00:00:00 2001 From: Maziyar Panahi Date: Fri, 12 Nov 2021 12:54:46 +0100 Subject: [PATCH 3/3] Reposition Amazon Linux 2 Support --- docs/en/install.md | 86 +++++++++++++++++++++++----------------------- 1 file changed, 43 insertions(+), 43 deletions(-) diff --git a/docs/en/install.md b/docs/en/install.md index e14c718b142607..908c016cc450b7 100644 --- a/docs/en/install.md +++ b/docs/en/install.md @@ -467,6 +467,49 @@ gcloud dataproc clusters create ${CLUSTER_NAME} \
+ +## Amazon Linux 2 Support + +```bash +# Update Package List & Install Required Packages +sudo yum update +sudo yum install -y amazon-linux-extras +sudo yum -y install python3-pip + +# Create Python virtual environment and activate it: +python3 -m venv .sparknlp-env +source .sparknlp-env/bin/activate
``` + +Check your Java version: +- For Spark NLP versions 3.x and above, please use Java 11 +- For Spark NLP versions below 3.x, and for Spark OCR, please use Java 8 + +To check the Java versions installed on your machine: +```bash +sudo alternatives --config java +``` + +You can pick the index number (here java-8 is set as the default, index 2): + +
+ +If you don't have java-11 or java-8 on your system, you can easily install it via: + +```bash +sudo yum install java-1.8.0-openjdk +``` + +Now, we can start installing the required libraries: + +```bash +pip install pyspark==3.1.2 +pip install spark-nlp +``` + ## Docker Support For having Spark NLP, PySpark, Jupyter, and other ML/DL dependencies as a Docker image you can use the following template: @@ -650,46 +693,3 @@ PipelineModel.load("/tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/") - Since you are downloading and loading models/pipelines manually, this means Spark NLP is not downloading the most recent and compatible models/pipelines for you. Choosing the right model/pipeline is on you - If you are local, you can load the model/pipeline from your local FileSystem, however, if you are in a cluster setup you need to put the model/pipeline on a distributed FileSystem such as HDFS, DBFS, S3, etc. (i.e., `hdfs:///tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/`) - -## Amazon Linux 2 Support - -```bash -# Update Package List & Install Required Packages -sudo yum update -sudo yum install -y amazon-linux-extras -sudo yum -y install python3-pip - -# Create Python virtual environment and activate it: -python3 -m venv .sparknlp-env -source .sparknlp-env/bin/activate -``` - -Check your Java version: -- For Spark NLP versions 3.x and above, please use Java 11 -- For Spark NLP versions below 3.x, and for Spark OCR, please use Java 8 - -To check the Java versions installed on your machine: -```bash -sudo alternatives --config java -``` - -You can pick the index number (here java-8 is set as the default, index 2): - -
- -If you don't have java-11 or java-8 on your system, you can easily install it via: - -```bash -sudo yum install java-1.8.0-openjdk -``` - -Now, we can start installing the required libraries: - -```bash -pip install pyspark==3.1.2 -pip install spark-nlp -```
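The license-loading notebook cell from the licensed_install patch above can also be condensed into a self-contained script. The sketch below writes an illustrative license file first, so the key names (`SECRET`, `JSL_VERSION`, `PUBLIC_VERSION`) follow the docs snippet but the values are placeholders, not a real license:

```python
import json
import os
import tempfile

# Placeholder license keys; a real sparknlp_for_healthcare.json comes from my.johnsnowlabs.com
keys = {"SECRET": "placeholder", "JSL_VERSION": "3.3.0", "PUBLIC_VERSION": "3.3.2"}

# Write the illustrative license file, then load it back as the docs snippet does
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(keys, f)
    path = f.name

with open(path) as f:
    license_keys = json.load(f)

# Export every key/value pair as an environment variable,
# so the subsequent `pip install ... $SECRET` shell lines can interpolate them
os.environ.update(license_keys)
print(os.environ["JSL_VERSION"])  # → 3.3.0
```

Unlike the notebook cell, this version does not rely on `locals().update(...)`, which only works reliably at module scope; exporting through `os.environ` is what the `! pip install` lines actually consume.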