strickvl committed on
Commit f1fa80d · verified · 1 Parent(s): 318b4fc

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,1265 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:3284
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-m-v1.5
+ widget:
+ - source_sentence: Does ZenML officially support Macs running on Apple Silicon, and
+     are there any specific configurations needed?
+   sentences:
+   - 'ding ZenML to learn more!
+ 
+ 
+     Do you support Windows?ZenML officially supports Windows if you''re using WSL.
+     Much of ZenML will also work on Windows outside a WSL environment, but we don''t
+     officially support it and some features don''t work (notably anything that requires
+     spinning up a server process).
+ 
+ 
+     Do you support Macs running on Apple Silicon?
+ 
+ 
+     Yes, ZenML does support Macs running on Apple Silicon. You just need to make sure
+     that you set the following environment variable:
+ 
+ 
+     export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
+ 
+ 
+     This is a known issue with how forking works on Macs running on Apple Silicon
+     and it will enable you to use ZenML and the server. This environment variable
+     is needed if you are working with a local server on your Mac, but if you''re just
+     using ZenML as a client / CLI and connecting to a deployed server then you don''t
+     need to set it.
+ 
+ 
+     How can I make ZenML work with my custom tool? How can I extend or build on ZenML?
+ 
+ 
+     This depends on the tool and its respective MLOps category. We have a full guide
+     on this over here!
+ 
+ 
+     How can I contribute?
+ 
+ 
+     We develop ZenML together with our community! To get involved, the best way to
+     get started is to select any issue from the good-first-issue label. If you would
+     like to contribute, please review our Contributing Guide for all relevant details.
+ 
+ 
+     How can I speak with the community?
+ 
+ 
+     The first point of the call should be our Slack group. Ask your questions about
+     bugs or specific use cases and someone from the core team will respond.
+ 
+ 
+     Which license does ZenML use?
+ 
+ 
+     ZenML is distributed under the terms of the Apache License Version 2.0. A complete
+     version of the license is available in the LICENSE.md in this repository. Any
+     contribution made to this project will be licensed under the Apache License Version
+     2.0.
+ 
+ 
+     PreviousCommunity & content
+ 
+ 
+     Last updated 3 months ago'
+   - 'Registering a Model
+ 
+ 
+     PreviousUse the Model Control PlaneNextDeleting a Model
+ 
+ 
+     Last updated 4 months ago'
+   - 'Synthetic data generation
+ 
+ 
+     Generate synthetic data with distilabel to finetune embeddings.
+ 
+ 
+     PreviousImprove retrieval by finetuning embeddingsNextFinetuning embeddings with
+     Sentence Transformers
+ 
+ 
+     Last updated 21 days ago'
+ - source_sentence: How can I change the logging verbosity level in ZenML for both
+     local and remote pipeline runs?
+   sentences:
+   - 'ncepts covered in this guide to your own projects.By the end of this guide, you''ll
+     have a solid understanding of how to leverage LLMs in your MLOps workflows using
+     ZenML, enabling you to build powerful, scalable, and maintainable LLM-powered
+     applications. First up, let''s take a look at a super simple implementation of
+     the RAG paradigm to get started.
+ 
+ 
+     PreviousAn end-to-end projectNextRAG with ZenML
+ 
+ 
+     Last updated 21 days ago'
+   - 'Configuring a pipeline at runtime
+ 
+ 
+     Configuring a pipeline at runtime.
+ 
+ 
+     PreviousUse pipeline/step parametersNextReference environment variables in configurations
+ 
+ 
+     Last updated 28 days ago'
+   - "Set logging verbosity\n\nHow to set the logging verbosity in ZenML.\n\nBy default,\
+     \ ZenML sets the logging verbosity to INFO. If you wish to change this, you can\
+     \ do so by setting the following environment variable:\n\nexport ZENML_LOGGING_VERBOSITY=INFO\n\
+     \nChoose from INFO, WARN, ERROR, CRITICAL, DEBUG. This will set the logs to whichever\
+     \ level you suggest.\n\nNote that setting this on the client environment (e.g.\
+     \ your local machine which runs the pipeline) will not automatically set the same\
+     \ logging verbosity for remote pipeline runs. That means setting this variable\
+     \ locally with only effect pipelines that run locally.\n\nIf you wish to control\
+     \ for remote pipeline runs, you can set the ZENML_LOGGING_VERBOSITY environment\
+     \ variable in your pipeline runs environment as follows:\n\ndocker_settings =\
+     \ DockerSettings(environment={\"ZENML_LOGGING_VERBOSITY\": \"DEBUG\"})\n\n# Either\
+     \ add it to the decorator\n@pipeline(settings={\"docker\": docker_settings})\n\
+     def my_pipeline() -> None:\n my_step()\n\n# Or configure the pipelines options\n\
+     my_pipeline = my_pipeline.with_options(\n settings={\"docker\": docker_settings}\n\
+     )\n\nPreviousEnable or disable logs storageNextDisable rich traceback output\n\
+     \nLast updated 21 days ago"
+ - source_sentence: How can I autogenerate a template yaml file for my specific pipeline
+     using ZenML?
+   sentences:
+   - "Autogenerate a template yaml file\n\nTo help you figure out what you can put\
+     \ in your configuration file, simply autogenerate a template.\n\nIf you want to\
+     \ generate a template yaml file of your specific pipeline, you can do so by using\
+     \ the .write_run_configuration_template() method. This will generate a yaml file\
+     \ with all options commented out. This way you can pick and choose the settings\
+     \ that are relevant to you.\n\nfrom zenml import pipeline\n...\n\n@pipeline(enable_cache=True)\
+     \ # set cache behavior at step level\ndef simple_ml_pipeline(parameter: int):\n\
+     \ dataset = load_data(parameter=parameter)\n train_model(dataset)\n\nsimple_ml_pipeline.write_run_configuration_template(path=\"\
+     <Insert_path_here>\")\n\nWhen you want to configure your pipeline with a certain\
+     \ stack in mind, you can do so as well: `...write_run_configuration_template(stack=<Insert_stack_here>)\n\
+     \nPreviousFind out which configuration was used for a runNextCustomize Docker\
+     \ builds\n\nLast updated 21 days ago"
+   - 'Deleting a Model
+ 
+ 
+     Learn how to delete models.
+ 
+ 
+     PreviousRegistering a ModelNextAssociate a pipeline with a Model
+ 
+ 
+     Last updated 4 months ago'
+   - 'Load artifacts into memory
+ 
+ 
+     Often ZenML pipeline steps consume artifacts produced by one another directly
+     in the pipeline code, but there are scenarios where you need to pull external
+     data into your steps. Such external data could be artifacts produced by non-ZenML
+     codes. For those cases, it is advised to use ExternalArtifact, but what if we
+     plan to exchange data created with other ZenML pipelines?
+ 
+ 
+     ZenML pipelines are first compiled and only executed at some later point. During
+     the compilation phase, all function calls are executed, and this data is fixed
+     as step input parameters. Given all this, the late materialization of dynamic
+     objects, like data artifacts, is crucial. Without late materialization, it would
+     not be possible to pass not-yet-existing artifacts as step inputs, or their metadata,
+     which is often the case in a multi-pipeline setting.
+ 
+ 
+     We identify two major use cases for exchanging artifacts between pipelines:
+ 
+ 
+     You semantically group your data products using ZenML Models
+ 
+ 
+     You prefer to use ZenML Client to bring all the pieces together
+ 
+ 
+     We recommend using models to group and access artifacts across pipelines. Find
+     out how to load an artifact from a ZenML Model here.
+ 
+ 
+     Use client methods to exchange artifacts
+ 
+ 
+     If you don''t yet use the Model Control Plane, you can still exchange data between
+     pipelines with late materialization. Let''s rework the do_predictions pipeline
+     code as follows:
+ 
+ 
+     from typing import Annotated
+ 
+     from zenml import step, pipeline
+ 
+     from zenml.client import Client
+ 
+     import pandas as pd
+ 
+     from sklearn.base import ClassifierMixin'
+ - source_sentence: How can I create a Kubernetes cluster on EKS and configure it to
+     run Spark with a custom Docker image?
+   sentences:
+   - 'View logs on the dashboard
+ 
+ 
+     PreviousControl loggingNextEnable or disable logs storage
+ 
+ 
+     Last updated 21 days ago'
+   - "Datasets in ZenML\n\nModel datasets using simple abstractions.\n\nAs machine\
+     \ learning projects grow in complexity, you often need to work with various data\
+     \ sources and manage intricate data flows. This chapter explores how to use custom\
+     \ Dataset classes and Materializers in ZenML to handle these challenges efficiently.\
+     \ For strategies on scaling your data processing for larger datasets, refer to\
+     \ scaling strategies for big data.\n\nIntroduction to Custom Dataset Classes\n\
+     \nCustom Dataset classes in ZenML provide a way to encapsulate data loading, processing,\
+     \ and saving logic for different data sources. They're particularly useful when:\n\
+     \nWorking with multiple data sources (e.g., CSV files, databases, cloud storage)\n\
+     \nDealing with complex data structures that require special handling\n\nImplementing\
+     \ custom data processing or transformation logic\n\nImplementing Dataset Classes\
+     \ for Different Data Sources\n\nLet's create a base Dataset class and implement\
+     \ it for CSV and BigQuery data sources:\n\nfrom abc import ABC, abstractmethod\n\
+     import pandas as pd\nfrom google.cloud import bigquery\nfrom typing import Optional\n\
+     \nclass Dataset(ABC):\n @abstractmethod\n def read_data(self) -> pd.DataFrame:\n\
+     \ pass\n\nclass CSVDataset(Dataset):\n def __init__(self, data_path:\
+     \ str, df: Optional[pd.DataFrame] = None):\n self.data_path = data_path\n\
+     \ self.df = df\n\ndef read_data(self) -> pd.DataFrame:\n if self.df\
+     \ is None:\n self.df = pd.read_csv(self.data_path)\n return\
+     \ self.df\n\nclass BigQueryDataset(Dataset):\n def __init__(\n self,\n\
+     \ table_id: str,\n df: Optional[pd.DataFrame] = None,\n project:\
+     \ Optional[str] = None,\n ):\n self.table_id = table_id\n self.project\
+     \ = project\n self.df = df\n self.client = bigquery.Client(project=self.project)\n\
+     \ndef read_data(self) -> pd.DataFrame:\n query = f\"SELECT * FROM `{self.table_id}`\"\
+     \n self.df = self.client.query(query).to_dataframe()\n return self.df"
+   - 'e the correct region is selected on the top right.Click on Add cluster and select
+     Create.
+ 
+ 
+     Enter a name and select the cluster role for Cluster service role.
+ 
+ 
+     Keep the default values for the networking and logging steps and create the cluster.
+ 
+ 
+     Note down the cluster name and the API server endpoint:
+ 
+ 
+     EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME>
+ 
+     EKS_API_SERVER_ENDPOINT=<API_SERVER_ENDPOINT>
+ 
+ 
+     After the cluster is created, select it and click on Add node group in the Compute
+     tab.
+ 
+ 
+     Enter a name and select the node role.
+ 
+ 
+     For the instance type, we recommend t3a.xlarge, as it provides up to 4 vCPUs and
+     16 GB of memory.
+ 
+ 
+     Docker image for the Spark drivers and executors
+ 
+ 
+     When you want to run your steps on a Kubernetes cluster, Spark will require you
+     to choose a base image for the driver and executor pods. Normally, for this purpose,
+     you can either use one of the base images in Spark’s dockerhub or create an image
+     using the docker-image-tool which will use your own Spark installation and build
+     an image.
+ 
+ 
+     When using Spark in EKS, you need to use the latter and utilize the docker-image-tool.
+     However, before the build process, you also need to download the following packages
+ 
+ 
+     hadoop-aws = 3.3.1
+ 
+ 
+     aws-java-sdk-bundle = 1.12.150
+ 
+ 
+     and put them in the jars folder within your Spark installation. Once that is set
+     up, you can build the image as follows:
+ 
+ 
+     cd $SPARK_HOME # If this empty for you then you need to set the SPARK_HOME variable
+     which points to your Spark installation
+ 
+ 
+     SPARK_IMAGE_TAG=<SPARK_IMAGE_TAG>
+ 
+ 
+     ./bin/docker-image-tool.sh -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile
+     -u 0 build
+ 
+ 
+     BASE_IMAGE_NAME=spark-py:$SPARK_IMAGE_TAG
+ 
+ 
+     If you are working on an M1 Mac, you will need to build the image for the amd64
+     architecture, by using the prefix -X on the previous command. For example:
+ 
+ 
+     ./bin/docker-image-tool.sh -X -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile
+     -u 0 build
+ 
+ 
+     Configuring RBAC'
+ - source_sentence: How can I configure a pipeline with a YAML file in ZenML?
+   sentences:
+   - 'atically retry steps
+ 
+ 
+     Run pipelines asynchronouslyControl execution order of steps
+ 
+ 
+     Using a custom step invocation ID
+ 
+ 
+     Name your pipeline runs
+ 
+ 
+     Use failure/success hooks
+ 
+ 
+     Hyperparameter tuning
+ 
+ 
+     Access secrets in a step
+ 
+ 
+     Run an individual step
+ 
+ 
+     Fetching pipelines
+ 
+ 
+     Get past pipeline/step runs
+ 
+ 
+     🚨Trigger a pipeline
+ 
+ 
+     Use templates: Python SDK
+ 
+ 
+     Use templates: Dashboard
+ 
+ 
+     Use templates: Rest API
+ 
+ 
+     📃Use configuration files
+ 
+ 
+     How to configure a pipeline with a YAML
+ 
+ 
+     What can be configured
+ 
+ 
+     Runtime settings for Docker, resources, and stack components
+ 
+ 
+     Configuration hierarchy
+ 
+ 
+     Find out which configuration was used for a run
+ 
+ 
+     Autogenerate a template yaml file
+ 
+ 
+     🐳Customize Docker builds
+ 
+ 
+     Docker settings on a pipeline
+ 
+ 
+     Docker settings on a step
+ 
+ 
+     Use a prebuilt image for pipeline execution
+ 
+ 
+     Specify pip dependencies and apt packages
+ 
+ 
+     Use your own Dockerfiles
+ 
+ 
+     Which files are built into the image
+ 
+ 
+     How to reuse builds
+ 
+ 
+     Define where an image is built
+ 
+ 
+     📔Run remote pipelines from notebooks
+ 
+ 
+     Limitations of defining steps in notebook cells
+ 
+ 
+     Run a single step from a notebook
+ 
+ 
+     🤹Manage your ZenML server
+ 
+ 
+     Best practices for upgrading ZenML
+ 
+ 
+     Upgrade your ZenML server
+ 
+ 
+     Using ZenML server in production
+ 
+ 
+     Troubleshoot your ZenML server
+ 
+ 
+     Migration guide
+ 
+ 
+     Migration guide 0.13.2 → 0.20.0
+ 
+ 
+     Migration guide 0.23.0 → 0.30.0
+ 
+ 
+     Migration guide 0.39.1 → 0.41.0
+ 
+ 
+     Migration guide 0.58.2 → 0.60.0
+ 
+ 
+     📍Develop locally
+ 
+ 
+     Use config files to develop locally
+ 
+ 
+     Keep your pipelines and dashboard clean
+ 
+ 
+     ⚒️Manage stacks & components
+ 
+ 
+     Deploy a cloud stack with ZenML
+ 
+ 
+     Deploy a cloud stack with Terraform
+ 
+ 
+     Register a cloud stack
+ 
+ 
+     Reference secrets in stack configuration
+ 
+ 
+     Implement a custom stack component
+ 
+ 
+     🚜Train with GPUs
+ 
+ 
+     Distributed Training with 🤗 Accelerate
+ 
+ 
+     🌲Control logging
+ 
+ 
+     View logs on the dashboard
+ 
+ 
+     Enable or disable logs storage
+ 
+ 
+     Set logging verbosity
+ 
+ 
+     Disable rich traceback output
+ 
+ 
+     Disable colorful logging
+ 
+ 
+     🗄️Handle Data/Artifacts
+ 
+ 
+     How ZenML stores data
+ 
+ 
+     Return multiple outputs from a step
+ 
+ 
+     Delete an artifact
+ 
+ 
+     Organize data with tags
+ 
+ 
+     Get arbitrary artifacts in a step'
+   - 'Security best practices
+ 
+ 
+     Best practices concerning the various authentication methods implemented by Service
+     Connectors.
+ 
+ 
+     Service Connector Types, especially those targeted at cloud providers, offer a
+     plethora of authentication methods matching those supported by remote cloud platforms.
+     While there is no single authentication standard that unifies this process, there
+     are some patterns that are easily identifiable and can be used as guidelines when
+     deciding which authentication method to use to configure a Service Connector.
+ 
+ 
+     This section explores some of those patterns and gives some advice regarding which
+     authentication methods are best suited for your needs.
+ 
+ 
+     This section may require some general knowledge about authentication and authorization
+     to be properly understood. We tried to keep it simple and limit ourselves to talking
+     about high-level concepts, but some areas may get a bit too technical.
+ 
+ 
+     Username and password
+ 
+ 
+     The key takeaway is this: you should avoid using your primary account password
+     as authentication credentials as much as possible. If there are alternative authentication
+     methods that you can use or other types of credentials (e.g. session tokens, API
+     keys, API tokens), you should always try to use those instead.
+ 
+ 
+     Ultimately, if you have no choice, be cognizant of the third parties you share
+     your passwords with. If possible, they should never leave the premises of your
+     local host or development environment.
+ 
+ 
+     This is the typical authentication method that uses a username or account name
+     plus the associated password. While this is the de facto method used to log in
+     with web consoles and local CLIs, this is the least secure of all authentication
+     methods and never something you want to share with other members of your team
+     or organization or use to authenticate automated workloads.'
+   - "━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛$ zenml orchestrator connect\
+     \ <ORCHESTRATOR_NAME> --connector aws-iam-multi-us\nRunning with active stack:\
+     \ 'default' (repository)\nSuccessfully connected orchestrator `<ORCHESTRATOR_NAME>`\
+     \ to the following resources:\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓\n\
+     ┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE\
+     \ TYPE │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼──────────────────┨\n\
+     ┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us │ \U0001F536 aws \
+     \ │ \U0001F300 kubernetes-cluster │ zenhacks-cluster ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛\n\
+     \n# Register and activate a stack with the new orchestrator\n$ zenml stack register\
+     \ <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nif you don't have a Service\
+     \ Connector on hand and you don't want to register one , the local Kubernetes\
+     \ kubectl client needs to be configured with a configuration context pointing\
+     \ to the remote cluster. The kubernetes_context stack component must also be configured\
+     \ with the value of that context:\n\nzenml orchestrator register <ORCHESTRATOR_NAME>\
+     \ \\\n --flavor=kubernetes \\\n --kubernetes_context=<KUBERNETES_CONTEXT>\n\
+     \n# Register and activate a stack with the new orchestrator\nzenml stack register\
+     \ <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nZenML will build a Docker image\
+     \ called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code\
+     \ and use it to run your pipeline steps in Kubernetes. Check out this page if\
+     \ you want to learn more about how ZenML builds these images and how you can customize\
+     \ them.\n\nYou can now run any ZenML pipeline using the Kubernetes orchestrator:\n\
+     \npython file_that_runs_a_zenml_pipeline.py"
+ datasets: []
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: zenml/finetuned-snowflake-arctic-embed-m-v1.5
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 384
+       type: dim_384
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.1863013698630137
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.4794520547945205
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6602739726027397
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7972602739726027
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.1863013698630137
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.1598173515981735
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.13205479452054794
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.07972602739726026
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.1863013698630137
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.4794520547945205
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6602739726027397
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7972602739726027
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.47459290361092754
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.3725994781474232
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.37953809566266083
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 256
+       type: dim_256
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.18356164383561643
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.4876712328767123
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6602739726027397
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7917808219178082
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.18356164383561643
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.16255707762557076
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.1320547945205479
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.07917808219178081
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.18356164383561643
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.4876712328767123
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6602739726027397
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7917808219178082
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.47334554819769054
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.3724179169384647
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.37931260226095775
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 128
+       type: dim_128
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.18356164383561643
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.4684931506849315
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6356164383561644
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7780821917808219
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.18356164383561643
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.1561643835616438
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.12712328767123285
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.07780821917808219
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.18356164383561643
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.4684931506849315
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6356164383561644
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7780821917808219
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.46219638130094637
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.3628680147858229
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.37047490630037583
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 64
+       type: dim_64
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.2054794520547945
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.4767123287671233
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6273972602739726
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7534246575342466
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.2054794520547945
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.15890410958904108
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.12547945205479452
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.07534246575342465
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.2054794520547945
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.4767123287671233
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6273972602739726
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7534246575342466
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.46250756548591326
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.37069906501413347
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.37874559284369463
+       name: Cosine Map@100
+ ---
+ 
+ # zenml/finetuned-snowflake-arctic-embed-m-v1.5
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) <!-- at revision 3b5a16eaf17e47bd997da998988dce5877a57092 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ - **Language:** en
+ - **License:** apache-2.0
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
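+ 
+ The Pooling module above is configured for pure CLS-token pooling (see the `1_Pooling/config.json` hunk at the top of this commit): with `pooling_mode_cls_token` enabled and every other mode disabled, the sentence embedding is simply the hidden state at the first token position, which the final `Normalize` module then L2-normalizes. A minimal sketch of that assumed equivalence (illustrative only, not part of the saved model):
+ 
+ ```python
+ import torch
+ 
+ def cls_pool(token_embeddings: torch.Tensor) -> torch.Tensor:
+     # token_embeddings: (batch, seq_len, 768) output of the Transformer module
+     cls = token_embeddings[:, 0]  # take the [CLS] position
+     return torch.nn.functional.normalize(cls, dim=-1)  # what 2_Normalize applies
+ ```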
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
+ # Run inference
+ sentences = [
+     'How can I configure a pipeline with a YAML file in ZenML?',
+     'atically retry steps\n\nRun pipelines asynchronouslyControl execution order of steps\n\nUsing a custom step invocation ID\n\nName your pipeline runs\n\nUse failure/success hooks\n\nHyperparameter tuning\n\nAccess secrets in a step\n\nRun an individual step\n\nFetching pipelines\n\nGet past pipeline/step runs\n\n🚨Trigger a pipeline\n\nUse templates: Python SDK\n\nUse templates: Dashboard\n\nUse templates: Rest API\n\n📃Use configuration files\n\nHow to configure a pipeline with a YAML\n\nWhat can be configured\n\nRuntime settings for Docker, resources, and stack components\n\nConfiguration hierarchy\n\nFind out which configuration was used for a run\n\nAutogenerate a template yaml file\n\n🐳Customize Docker builds\n\nDocker settings on a pipeline\n\nDocker settings on a step\n\nUse a prebuilt image for pipeline execution\n\nSpecify pip dependencies and apt packages\n\nUse your own Dockerfiles\n\nWhich files are built into the image\n\nHow to reuse builds\n\nDefine where an image is built\n\n📔Run remote pipelines from notebooks\n\nLimitations of defining steps in notebook cells\n\nRun a single step from a notebook\n\n🤹Manage your ZenML server\n\nBest practices for upgrading ZenML\n\nUpgrade your ZenML server\n\nUsing ZenML server in production\n\nTroubleshoot your ZenML server\n\nMigration guide\n\nMigration guide 0.13.2 → 0.20.0\n\nMigration guide 0.23.0 → 0.30.0\n\nMigration guide 0.39.1 → 0.41.0\n\nMigration guide 0.58.2 → 0.60.0\n\n📍Develop locally\n\nUse config files to develop locally\n\nKeep your pipelines and dashboard clean\n\n⚒️Manage stacks & components\n\nDeploy a cloud stack with ZenML\n\nDeploy a cloud stack with Terraform\n\nRegister a cloud stack\n\nReference secrets in stack configuration\n\nImplement a custom stack component\n\n🚜Train with GPUs\n\nDistributed Training with 🤗 Accelerate\n\n🌲Control logging\n\nView logs on the dashboard\n\nEnable or disable logs storage\n\nSet logging verbosity\n\nDisable rich traceback output\n\nDisable colorful logging\n\n🗄️Handle Data/Artifacts\n\nHow ZenML stores data\n\nReturn multiple outputs from a step\n\nDelete an artifact\n\nOrganize data with tags\n\nGet arbitrary artifacts in a step',
+     "━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛$ zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-iam-multi-us\nRunning with active stack: 'default' (repository)\nSuccessfully connected orchestrator `<ORCHESTRATOR_NAME>` to the following resources:\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓\n┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼──────────────────┨\n┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us │ 🔶 aws │ 🌀 kubernetes-cluster │ zenhacks-cluster ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛\n\n# Register and activate a stack with the new orchestrator\n$ zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nif you don't have a Service Connector on hand and you don't want to register one , the local Kubernetes kubectl client needs to be configured with a configuration context pointing to the remote cluster. The kubernetes_context stack component must also be configured with the value of that context:\n\nzenml orchestrator register <ORCHESTRATOR_NAME> \\\n    --flavor=kubernetes \\\n    --kubernetes_context=<KUBERNETES_CONTEXT>\n\n# Register and activate a stack with the new orchestrator\nzenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Kubernetes. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.\n\nYou can now run any ZenML pipeline using the Kubernetes orchestrator:\n\npython file_that_runs_a_zenml_pipeline.py",
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
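+ 
+ Note that `config_sentence_transformers.json` in this commit also registers a `query` prompt ("Represent this sentence for searching relevant passages: "), the convention used by the arctic-embed base models. For retrieval it is presumably worth applying that prompt to queries only; a hedged sketch using the standard `prompt_name` argument of `encode`:
+ 
+ ```python
+ # Queries get the "query" prompt from config_sentence_transformers.json;
+ # candidate passages are encoded as-is.
+ query_embeddings = model.encode(
+     ["How can I configure a pipeline with a YAML file in ZenML?"],
+     prompt_name="query",
+ )
+ doc_embeddings = model.encode(sentences)
+ scores = model.similarity(query_embeddings, doc_embeddings)
+ print(scores.shape)
+ # [1, 3]
+ ```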
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ ## Evaluation
+ 
+ ### Metrics
+ 
+ #### Information Retrieval
+ * Dataset: `dim_384`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+ 
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.1863     |
+ | cosine_accuracy@3   | 0.4795     |
+ | cosine_accuracy@5   | 0.6603     |
+ | cosine_accuracy@10  | 0.7973     |
+ | cosine_precision@1  | 0.1863     |
+ | cosine_precision@3  | 0.1598     |
+ | cosine_precision@5  | 0.1321     |
+ | cosine_precision@10 | 0.0797     |
+ | cosine_recall@1     | 0.1863     |
+ | cosine_recall@3     | 0.4795     |
+ | cosine_recall@5     | 0.6603     |
+ | cosine_recall@10    | 0.7973     |
+ | cosine_ndcg@10      | 0.4746     |
+ | cosine_mrr@10       | 0.3726     |
+ | **cosine_map@100**  | **0.3795** |
+ 
+ #### Information Retrieval
+ * Dataset: `dim_256`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+ 
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.1836     |
+ | cosine_accuracy@3   | 0.4877     |
+ | cosine_accuracy@5   | 0.6603     |
+ | cosine_accuracy@10  | 0.7918     |
+ | cosine_precision@1  | 0.1836     |
+ | cosine_precision@3  | 0.1626     |
+ | cosine_precision@5  | 0.1321     |
+ | cosine_precision@10 | 0.0792     |
+ | cosine_recall@1     | 0.1836     |
+ | cosine_recall@3     | 0.4877     |
+ | cosine_recall@5     | 0.6603     |
+ | cosine_recall@10    | 0.7918     |
+ | cosine_ndcg@10      | 0.4733     |
+ | cosine_mrr@10       | 0.3724     |
+ | **cosine_map@100**  | **0.3793** |
+ 
+ #### Information Retrieval
+ * Dataset: `dim_128`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+ 
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.1836     |
+ | cosine_accuracy@3   | 0.4685     |
+ | cosine_accuracy@5   | 0.6356     |
+ | cosine_accuracy@10  | 0.7781     |
+ | cosine_precision@1  | 0.1836     |
+ | cosine_precision@3  | 0.1562     |
+ | cosine_precision@5  | 0.1271     |
+ | cosine_precision@10 | 0.0778     |
+ | cosine_recall@1     | 0.1836     |
+ | cosine_recall@3     | 0.4685     |
+ | cosine_recall@5     | 0.6356     |
+ | cosine_recall@10    | 0.7781     |
+ | cosine_ndcg@10      | 0.4622     |
+ | cosine_mrr@10       | 0.3629     |
+ | **cosine_map@100**  | **0.3705** |
+ 
+ #### Information Retrieval
+ * Dataset: `dim_64`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+ 
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.2055     |
+ | cosine_accuracy@3   | 0.4767     |
+ | cosine_accuracy@5   | 0.6274     |
+ | cosine_accuracy@10  | 0.7534     |
+ | cosine_precision@1  | 0.2055     |
+ | cosine_precision@3  | 0.1589     |
+ | cosine_precision@5  | 0.1255     |
+ | cosine_precision@10 | 0.0753     |
+ | cosine_recall@1     | 0.2055     |
+ | cosine_recall@3     | 0.4767     |
+ | cosine_recall@5     | 0.6274     |
+ | cosine_recall@10    | 0.7534     |
+ | cosine_ndcg@10      | 0.4625     |
+ | cosine_mrr@10       | 0.3707     |
+ | **cosine_map@100**  | **0.3787** |
+ 
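+ Because the model was trained with a Matryoshka objective at 384, 256, 128, and 64 dimensions, the tables above also describe how well truncated embeddings perform. A hedged sketch of opting into a smaller dimensionality via the standard `truncate_dim` argument of the sentence-transformers API (not something specific to this model):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Keep only the first 256 dimensions of every embedding
+ model = SentenceTransformer(
+     "zenml/finetuned-snowflake-arctic-embed-m-v1.5",
+     truncate_dim=256,
+ )
+ embeddings = model.encode(["How can I configure a pipeline with a YAML file in ZenML?"])
+ print(embeddings.shape)
+ # (1, 256)
+ ```
+ 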
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### Unnamed Dataset
+ 
+ 
+ * Size: 3,284 training samples
+ * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | positive | anchor |
+   |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+   | type    | string | string |
+   | details | <ul><li>min: 10 tokens</li><li>mean: 22.7 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 316.5 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+   | positive | anchor |
+   |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+   | <code>How does ZenML help in integrating machine learning with operational processes?</code> | <code>ZenML - Bridging the gap between ML & Ops<br><br>Legacy Docs<br><br>Bleeding EdgeLegacy Docs0.67.0<br><br>🧙‍♂️Find older version our docs<br><br>Powered by GitBook</code> |
+   | <code>How can I configure a data integrity check step in ZenML to perform outlier sample detection and string length verification on a dataset with specific conditions?</code> | <code>ks. For example, the following step configuration:deepchecks_data_integrity_check_step(<br>    check_list=[<br>        DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION,<br>        DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS,<br>    ],<br>    dataset_kwargs=dict(label='class', cat_features=['country', 'state']),<br>    check_kwargs={<br>        DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION: dict(<br>            nearest_neighbors_percent=0.01,<br>            extent_parameter=3,<br>            condition_outlier_ratio_less_or_equal=dict(<br>                max_outliers_ratio=0.007,<br>                outlier_score_threshold=0.5,<br>            ),<br>            condition_no_outliers=dict(<br>                outlier_score_threshold=0.6,<br>            )<br>        ),<br>        DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS: dict(<br>            num_percentiles=1000,<br>            min_unique_values=3,<br>            condition_number_of_outliers_less_or_equal=dict(<br>                max_outliers=3,<br>            )<br>        ),<br>    },<br>    ...<br>)<br><br>is equivalent to running the following Deepchecks tests:<br><br>import deepchecks.tabular.checks as tabular_checks<br>from deepchecks.tabular import Suite<br>from deepchecks.tabular import Dataset<br><br>train_dataset = Dataset(<br>    reference_dataset,<br>    label='class',<br>    cat_features=['country', 'state']<br>)<br><br>suite = Suite(name="custom")<br>check = tabular_checks.OutlierSampleDetection(<br>    nearest_neighbors_percent=0.01,<br>    extent_parameter=3,<br>)<br>check.add_condition_outlier_ratio_less_or_equal(<br>    max_outliers_ratio=0.007,<br>    outlier_score_threshold=0.5,<br>)<br>check.add_condition_no_outliers(<br>    outlier_score_threshold=0.6,<br>)<br>suite.add(check)<br>check = tabular_checks.StringLengthOutOfBounds(<br>    num_percentiles=1000,<br>    min_unique_values=3,<br>)<br>check.add_condition_number_of_outliers_less_or_equal(<br>    max_outliers=3,<br>)<br>suite.run(train_dataset=train_dataset)<br><br>The Deepchecks Data Validator</code> |
+   | <code>How can I develop a custom data validator in ZenML?</code> | <code>custom data validator<br><br>📈Experiment Trackers<br><br>CometMLflow<br><br>Neptune<br><br>Weights & Biases<br><br>Develop a custom experiment tracker<br><br>🏃‍♀️Model Deployers<br><br>MLflow<br><br>Seldon<br><br>BentoML<br><br>Hugging Face<br><br>Databricks<br><br>Develop a Custom Model Deployer<br><br>👣Step Operators<br><br>Amazon SageMaker<br><br>Google Cloud VertexAI<br><br>AzureML<br><br>Kubernetes<br><br>Spark<br><br>Develop a Custom Step Operator<br><br>❗Alerters<br><br>Discord Alerter<br><br>Slack Alerter<br><br>Develop a Custom Alerter<br><br>🖼️Image Builders<br><br>Local Image Builder<br><br>Kaniko Image Builder<br><br>Google Cloud Image Builder<br><br>Develop a Custom Image Builder<br><br>🏷️Annotators<br><br>Argilla<br><br>Label Studio<br><br>Pigeon<br><br>Prodigy<br><br>Develop a Custom Annotator<br><br>📓Model Registries<br><br>MLflow Model Registry<br><br>Develop a Custom Model Registry<br><br>📊Feature Stores<br><br>Feast<br><br>Develop a Custom Feature Store<br><br>Examples<br><br>🚀Quickstart<br><br>🔏End-to-End Batch Inference<br><br>📚Basic NLP with BERT<br><br>👁️Computer Vision with YoloV8<br><br>📖LLM Finetuning<br><br>🧩More Projects...<br><br>Reference<br><br>🐍Python Client<br><br>📼Global settings<br><br>🌎Environment Variables<br><br>👀API reference<br><br>🤷SDK & CLI reference<br><br>📚How do I...?<br><br>♻️Migration guide<br><br>Migration guide 0.13.2 → 0.20.0<br><br>Migration guide 0.23.0 → 0.30.0<br><br>Migration guide 0.39.1 → 0.41.0<br><br>Migration guide 0.58.2 → 0.60.0<br><br>💜Community & content<br><br>❓FAQ<br><br>Powered by GitBook</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           384,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
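+ 
+ For reference, a minimal sketch of how this loss combination is typically constructed with the sentence-transformers 3.x API (the actual training script is not included in this commit):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+ 
+ model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")
+ # In-batch negatives over the (anchor, positive) pairs of the training set
+ inner_loss = MultipleNegativesRankingLoss(model)
+ # Apply the ranking loss at each truncated dimensionality, equally weighted
+ loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[384, 256, 128, 64])
+ ```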
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 16
+ - `gradient_accumulation_steps`: 16
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 4
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `tf32`: False
+ - `load_best_model_at_end`: True
+ - `optim`: adamw_torch_fused
+ - `batch_sampler`: no_duplicates
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 4
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: False
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: True
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ 
+ </details>
+ 
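+ For orientation, a hedged sketch of how the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` in sentence-transformers 3.x (a reconstruction from the table, not the original training script; the `output_dir` name is hypothetical):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ from sentence_transformers.training_args import BatchSamplers
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="finetuned-snowflake-arctic-embed-m-v1.5",  # hypothetical
+     num_train_epochs=4,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=16,
+     gradient_accumulation_steps=16,
+     learning_rate=2e-5,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     eval_strategy="epoch",
+     load_best_model_at_end=True,
+     optim="adamw_torch_fused",
+     batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
+ )
+ ```
+ 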
+ ### Training Logs
+ | Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
+ |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
+ | 0.3893     | 10     | 1.7142        | -                      | -                      | -                      | -                     |
+ | 0.7786     | 20     | 0.4461        | -                      | -                      | -                      | -                     |
+ | 0.9732     | 25     | -             | 0.3544                 | 0.3592                 | 0.3674                 | 0.3523                |
+ | 1.1655     | 30     | 0.1889        | -                      | -                      | -                      | -                     |
+ | 1.5547     | 40     | 0.1196        | -                      | -                      | -                      | -                     |
+ | 1.9440     | 50     | 0.0717        | -                      | -                      | -                      | -                     |
+ | 1.9830     | 51     | -             | 0.3672                 | 0.3727                 | 0.3728                 | 0.3797                |
+ | 2.3309     | 60     | 0.0474        | -                      | -                      | -                      | -                     |
+ | 2.7202     | 70     | 0.0418        | -                      | -                      | -                      | -                     |
+ | **2.9927** | **77** | **-**         | **0.3722**             | **0.3772**             | **0.3798**             | **0.3783**            |
+ | 3.1071     | 80     | 0.0355        | -                      | -                      | -                      | -                     |
+ | 3.4964     | 90     | 0.0351        | -                      | -                      | -                      | -                     |
+ | 3.8856     | 100    | 0.0276        | 0.3705                 | 0.3793                 | 0.3795                 | 0.3787                |
+ 
+ * The bold row denotes the saved checkpoint.
+ 
+ ### Framework Versions
+ - Python: 3.12.3
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.44.0
+ - PyTorch: 2.5.0+cu124
+ - Accelerate: 0.33.0
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-m-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.44.0",
+     "pytorch": "2.5.0+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b8c58ad12b1547c240e2803ab66e9f6a8fa7baaa022ce3c0336bc103cdbb96f
+ size 435588776
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff