@@ -79,19 +102,13 @@ model = keras_cv.models.YOLOV8Detector.from_preset(
 predictions = model.predict(image_resized)
 ```
 
-Until Keras Core is officially released as Keras 3.0, KerasCV will use
-`tf.keras` as the default backend. To restore this default behavior, simply
-`unset KERAS_BACKEND` and ensure that `"multi_backend": False` or is unset in
-`.keras/keras_cv.json`. You will need to restart the Python runtime for changes
-to take effect.
-
 ## Quickstart
 
 ```python
 import tensorflow as tf
 import keras_cv
 import tensorflow_datasets as tfds
-import keras_core as keras
+from keras_cv.backend import keras
 
 # Create a preprocessing pipeline with augmentations
 BATCH_SIZE=16
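The hunk above cuts off after `BATCH_SIZE=16`. As an illustrative sketch only (not part of this diff: the dataset name, image size, and augmentation settings below are placeholder assumptions), a preprocessing pipeline built on the backend-agnostic `from keras_cv.backend import keras` import might continue along these lines:

```python
# Sketch only: dataset choice and augmentation parameters are assumptions.
AUTOTUNE = tf.data.AUTOTUNE

# KerasCV augmentation layers operate on batches of images.
augmenter = keras_cv.layers.RandAugment(value_range=(0, 255))

def preprocess(images, labels):
    # Resize to a fixed spatial size and one-hot encode the labels.
    images = tf.image.resize(images, (224, 224))
    labels = tf.one_hot(labels, 3)
    return images, labels

train_ds = (
    tfds.load("rock_paper_scissors", as_supervised=True, split="train")
    .map(preprocess, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .map(lambda x, y: (augmenter(x), y), num_parallel_calls=AUTOTUNE)
    .prefetch(AUTOTUNE)
)
```

Because the augmentation is applied after batching, each call to `augmenter` transforms a whole batch at once, which is how KerasCV preprocessing layers are designed to be used.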
@@ -157,9 +174,9 @@ We would like to leverage/outsource the Keras community not only for bug reporti
 but also for active development for feature delivery. To achieve this, here is the predefined
 process for how to contribute to this repository:
 
-1) Contributors are always welcome to help us fix an issue, add tests, better documentation. 
+1) Contributors are always welcome to help us fix an issue, add tests, better documentation.
 2) If contributors would like to create a backbone, we usually require a pre-trained weight set
-with the model for one dataset as the first PR, and a training script as a follow-up. The training script will preferably help us reproduce the results claimed from paper. The backbone should be generic but the training script can contain paper specific parameters such as learning rate schedules and weight decays. The training script will be used to produce leaderboard results. 
+with the model for one dataset as the first PR, and a training script as a follow-up. The training script will preferably help us reproduce the results claimed from paper. The backbone should be generic but the training script can contain paper specific parameters such as learning rate schedules and weight decays. The training script will be used to produce leaderboard results.
 Exceptions apply to large transformer-based models which are difficult to train. If this is the case,
 contributors should let us know so the team can help in training the model or providing GCP resources.
 3) If contributors would like to create a meta arch, please try to be aligned with our roadmap and create a PR for design review to make sure the meta arch is modular.
@@ -185,7 +202,7 @@ An example of this can be found in the ImageNet classification training
 All results are reproducible using the training scripts in this repository.
 
 Historically, many models have been trained on image datasets rescaled via manually
-crafted normalization schemes. 
+crafted normalization schemes.
 The most common variant of manually crafted normalization scheme is subtraction of the
 imagenet mean pixel followed by standard deviation normalization based on the imagenet
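For context on the normalization scheme described in this hunk, here is a minimal sketch of ImageNet mean/std normalization. The per-channel constants are the commonly cited ImageNet RGB statistics; the helper name is illustrative and not part of the repository.

```python
import numpy as np

# Commonly cited ImageNet per-channel statistics (RGB, for images scaled to [0, 1]).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def imagenet_normalize(image):
    """Subtract the ImageNet mean pixel, then divide by the ImageNet standard deviation."""
    image = image.astype(np.float32) / 255.0
    return (image - IMAGENET_MEAN) / IMAGENET_STD
```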