ailexej committed
Commit 9e7a946 · verified · 1 Parent(s): f1fa80d

Add new SentenceTransformer model

Files changed (4)
  1. README.md +480 -609
  2. config.json +1 -1
  3. config_sentence_transformers.json +3 -3
  4. model.safetensors +1 -1
README.md CHANGED
@@ -7,580 +7,460 @@ tags:
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
- - dataset_size:3284
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
  base_model: Snowflake/snowflake-arctic-embed-m-v1.5
  widget:
- - source_sentence: Does ZenML officially support Macs running on Apple Silicon, and
- are there any specific configurations needed?
  sentences:
- - 'ding ZenML to learn more!

- Do you support Windows?ZenML officially supports Windows if you''re using WSL.
- Much of ZenML will also work on Windows outside a WSL environment, but we don''t
- officially support it and some features don''t work (notably anything that requires
- spinning up a server process).

- Do you support Macs running on Apple Silicon?

- Yes, ZenML does support Macs running on Apple Silicon. You just need to make sure
- that you set the following environment variable:

- export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES

- This is a known issue with how forking works on Macs running on Apple Silicon
- and it will enable you to use ZenML and the server. This environment variable
- is needed if you are working with a local server on your Mac, but if you''re just
- using ZenML as a client / CLI and connecting to a deployed server then you don''t
- need to set it.

- How can I make ZenML work with my custom tool? How can I extend or build on ZenML?

- This depends on the tool and its respective MLOps category. We have a full guide
- on this over here!

- How can I contribute?

- We develop ZenML together with our community! To get involved, the best way to
- get started is to select any issue from the good-first-issue label. If you would
- like to contribute, please review our Contributing Guide for all relevant details.

- How can I speak with the community?

- The first point of the call should be our Slack group. Ask your questions about
- bugs or specific use cases and someone from the core team will respond.

- Which license does ZenML use?

- ZenML is distributed under the terms of the Apache License Version 2.0. A complete
- version of the license is available in the LICENSE.md in this repository. Any
- contribution made to this project will be licensed under the Apache License Version
- 2.0.

- PreviousCommunity & content

- Last updated 3 months ago'
- - 'Registering a Model

- PreviousUse the Model Control PlaneNextDeleting a Model

- Last updated 4 months ago'
- - 'Synthetic data generation

- Generate synthetic data with distilabel to finetune embeddings.

- PreviousImprove retrieval by finetuning embeddingsNextFinetuning embeddings with
- Sentence Transformers

- Last updated 21 days ago'
- - source_sentence: How can I change the logging verbosity level in ZenML for both
- local and remote pipeline runs?
- sentences:
- - 'ncepts covered in this guide to your own projects.By the end of this guide, you''ll
- have a solid understanding of how to leverage LLMs in your MLOps workflows using
- ZenML, enabling you to build powerful, scalable, and maintainable LLM-powered
- applications. First up, let''s take a look at a super simple implementation of
- the RAG paradigm to get started.

- PreviousAn end-to-end projectNextRAG with ZenML

- Last updated 21 days ago'
- - 'Configuring a pipeline at runtime

- Configuring a pipeline at runtime.

- PreviousUse pipeline/step parametersNextReference environment variables in configurations

- Last updated 28 days ago'
- - "Set logging verbosity\n\nHow to set the logging verbosity in ZenML.\n\nBy default,\
- \ ZenML sets the logging verbosity to INFO. If you wish to change this, you can\
- \ do so by setting the following environment variable:\n\nexport ZENML_LOGGING_VERBOSITY=INFO\n\
- \nChoose from INFO, WARN, ERROR, CRITICAL, DEBUG. This will set the logs to whichever\
- \ level you suggest.\n\nNote that setting this on the client environment (e.g.\
- \ your local machine which runs the pipeline) will not automatically set the same\
- \ logging verbosity for remote pipeline runs. That means setting this variable\
- \ locally with only effect pipelines that run locally.\n\nIf you wish to control\
- \ for remote pipeline runs, you can set the ZENML_LOGGING_VERBOSITY environment\
- \ variable in your pipeline runs environment as follows:\n\ndocker_settings =\
- \ DockerSettings(environment={\"ZENML_LOGGING_VERBOSITY\": \"DEBUG\"})\n\n# Either\
- \ add it to the decorator\n@pipeline(settings={\"docker\": docker_settings})\n\
- def my_pipeline() -> None:\n my_step()\n\n# Or configure the pipelines options\n\
- my_pipeline = my_pipeline.with_options(\n settings={\"docker\": docker_settings}\n\
- )\n\nPreviousEnable or disable logs storageNextDisable rich traceback output\n\
- \nLast updated 21 days ago"
- - source_sentence: How can I autogenerate a template yaml file for my specific pipeline
- using ZenML?
  sentences:
- - "Autogenerate a template yaml file\n\nTo help you figure out what you can put\
- \ in your configuration file, simply autogenerate a template.\n\nIf you want to\
- \ generate a template yaml file of your specific pipeline, you can do so by using\
- \ the .write_run_configuration_template() method. This will generate a yaml file\
- \ with all options commented out. This way you can pick and choose the settings\
- \ that are relevant to you.\n\nfrom zenml import pipeline\n...\n\n@pipeline(enable_cache=True)\
- \ # set cache behavior at step level\ndef simple_ml_pipeline(parameter: int):\n\
- \ dataset = load_data(parameter=parameter)\n train_model(dataset)\n\nsimple_ml_pipeline.write_run_configuration_template(path=\"\
- <Insert_path_here>\")\n\nWhen you want to configure your pipeline with a certain\
- \ stack in mind, you can do so as well: `...write_run_configuration_template(stack=<Insert_stack_here>)\n\
- \nPreviousFind out which configuration was used for a runNextCustomize Docker\
- \ builds\n\nLast updated 21 days ago"
- - 'Deleting a Model

- Learn how to delete models.

- PreviousRegistering a ModelNextAssociate a pipeline with a Model

- Last updated 4 months ago'
- - 'Load artifacts into memory

- Often ZenML pipeline steps consume artifacts produced by one another directly
- in the pipeline code, but there are scenarios where you need to pull external
- data into your steps. Such external data could be artifacts produced by non-ZenML
- codes. For those cases, it is advised to use ExternalArtifact, but what if we
- plan to exchange data created with other ZenML pipelines?

- ZenML pipelines are first compiled and only executed at some later point. During
- the compilation phase, all function calls are executed, and this data is fixed
- as step input parameters. Given all this, the late materialization of dynamic
- objects, like data artifacts, is crucial. Without late materialization, it would
- not be possible to pass not-yet-existing artifacts as step inputs, or their metadata,
- which is often the case in a multi-pipeline setting.

- We identify two major use cases for exchanging artifacts between pipelines:

- You semantically group your data products using ZenML Models

- You prefer to use ZenML Client to bring all the pieces together

- We recommend using models to group and access artifacts across pipelines. Find
- out how to load an artifact from a ZenML Model here.

- Use client methods to exchange artifacts

- If you don''t yet use the Model Control Plane, you can still exchange data between
- pipelines with late materialization. Let''s rework the do_predictions pipeline
- code as follows:

- from typing import Annotated

- from zenml import step, pipeline

- from zenml.client import Client

- import pandas as pd

- from sklearn.base import ClassifierMixin'
- - source_sentence: How can I create a Kubernetes cluster on EKS and configure it to
- run Spark with a custom Docker image?
- sentences:
- - 'View logs on the dashboard

- PreviousControl loggingNextEnable or disable logs storage

- Last updated 21 days ago'
- - "Datasets in ZenML\n\nModel datasets using simple abstractions.\n\nAs machine\
- \ learning projects grow in complexity, you often need to work with various data\
- \ sources and manage intricate data flows. This chapter explores how to use custom\
- \ Dataset classes and Materializers in ZenML to handle these challenges efficiently.\
- \ For strategies on scaling your data processing for larger datasets, refer to\
- \ scaling strategies for big data.\n\nIntroduction to Custom Dataset Classes\n\
- \nCustom Dataset classes in ZenML provide a way to encapsulate data loading, processing,\
- \ and saving logic for different data sources. They're particularly useful when:\n\
- \nWorking with multiple data sources (e.g., CSV files, databases, cloud storage)\n\
- \nDealing with complex data structures that require special handling\n\nImplementing\
- \ custom data processing or transformation logic\n\nImplementing Dataset Classes\
- \ for Different Data Sources\n\nLet's create a base Dataset class and implement\
- \ it for CSV and BigQuery data sources:\n\nfrom abc import ABC, abstractmethod\n\
- import pandas as pd\nfrom google.cloud import bigquery\nfrom typing import Optional\n\
- \nclass Dataset(ABC):\n @abstractmethod\n def read_data(self) -> pd.DataFrame:\n\
- \ pass\n\nclass CSVDataset(Dataset):\n def __init__(self, data_path:\
- \ str, df: Optional[pd.DataFrame] = None):\n self.data_path = data_path\n\
- \ self.df = df\n\ndef read_data(self) -> pd.DataFrame:\n if self.df\
- \ is None:\n self.df = pd.read_csv(self.data_path)\n return\
- \ self.df\n\nclass BigQueryDataset(Dataset):\n def __init__(\n self,\n\
- \ table_id: str,\n df: Optional[pd.DataFrame] = None,\n project:\
- \ Optional[str] = None,\n ):\n self.table_id = table_id\n self.project\
- \ = project\n self.df = df\n self.client = bigquery.Client(project=self.project)\n\
- \ndef read_data(self) -> pd.DataFrame:\n query = f\"SELECT * FROM `{self.table_id}`\"\
- \n self.df = self.client.query(query).to_dataframe()\n return self.df"
- - 'e the correct region is selected on the top right.Click on Add cluster and select
- Create.

- Enter a name and select the cluster role for Cluster service role.

- Keep the default values for the networking and logging steps and create the cluster.

- Note down the cluster name and the API server endpoint:

- EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME>

- EKS_API_SERVER_ENDPOINT=<API_SERVER_ENDPOINT>

- After the cluster is created, select it and click on Add node group in the Compute
- tab.

- Enter a name and select the node role.

- For the instance type, we recommend t3a.xlarge, as it provides up to 4 vCPUs and
- 16 GB of memory.

- Docker image for the Spark drivers and executors

- When you want to run your steps on a Kubernetes cluster, Spark will require you
- to choose a base image for the driver and executor pods. Normally, for this purpose,
- you can either use one of the base images in Spark’s dockerhub or create an image
- using the docker-image-tool which will use your own Spark installation and build
- an image.

- When using Spark in EKS, you need to use the latter and utilize the docker-image-tool.
- However, before the build process, you also need to download the following packages

- hadoop-aws = 3.3.1

- aws-java-sdk-bundle = 1.12.150

- and put them in the jars folder within your Spark installation. Once that is set
- up, you can build the image as follows:

- cd $SPARK_HOME # If this empty for you then you need to set the SPARK_HOME variable
- which points to your Spark installation

- SPARK_IMAGE_TAG=<SPARK_IMAGE_TAG>

- ./bin/docker-image-tool.sh -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile
- -u 0 build

- BASE_IMAGE_NAME=spark-py:$SPARK_IMAGE_TAG

- If you are working on an M1 Mac, you will need to build the image for the amd64
- architecture, by using the prefix -X on the previous command. For example:

- ./bin/docker-image-tool.sh -X -t $SPARK_IMAGE_TAG -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile
- -u 0 build

- Configuring RBAC'
- - source_sentence: How can I configure a pipeline with a YAML file in ZenML?
  sentences:
- - 'atically retry steps

- Run pipelines asynchronouslyControl execution order of steps

- Using a custom step invocation ID

- Name your pipeline runs

- Use failure/success hooks

- Hyperparameter tuning

- Access secrets in a step

- Run an individual step

- Fetching pipelines

- Get past pipeline/step runs

- 🚨Trigger a pipeline

- Use templates: Python SDK

- Use templates: Dashboard

- Use templates: Rest API

- 📃Use configuration files

- How to configure a pipeline with a YAML

- What can be configured

- Runtime settings for Docker, resources, and stack components

- Configuration hierarchy

- Find out which configuration was used for a run

- Autogenerate a template yaml file

- 🐳Customize Docker builds

- Docker settings on a pipeline

- Docker settings on a step

- Use a prebuilt image for pipeline execution

- Specify pip dependencies and apt packages

- Use your own Dockerfiles

- Which files are built into the image

- How to reuse builds

- Define where an image is built

- 📔Run remote pipelines from notebooks

- Limitations of defining steps in notebook cells

- Run a single step from a notebook

- 🤹Manage your ZenML server

- Best practices for upgrading ZenML

- Upgrade your ZenML server

- Using ZenML server in production

- Troubleshoot your ZenML server

- Migration guide

- Migration guide 0.13.2 → 0.20.0

- Migration guide 0.23.0 → 0.30.0

- Migration guide 0.39.1 → 0.41.0

- Migration guide 0.58.2 → 0.60.0

- 📍Develop locally

- Use config files to develop locally

- Keep your pipelines and dashboard clean

- ⚒️Manage stacks & components

- Deploy a cloud stack with ZenML

- Deploy a cloud stack with Terraform

- Register a cloud stack

- Reference secrets in stack configuration

- Implement a custom stack component

- 🚜Train with GPUs

- Distributed Training with 🤗 Accelerate

- 🌲Control logging

- View logs on the dashboard

- Enable or disable logs storage

- Set logging verbosity

- Disable rich traceback output

- Disable colorful logging

- 🗄️Handle Data/Artifacts

- How ZenML stores data

- Return multiple outputs from a step

- Delete an artifact

- Organize data with tags

- Get arbitrary artifacts in a step'
- - 'Security best practices

- Best practices concerning the various authentication methods implemented by Service
- Connectors.

- Service Connector Types, especially those targeted at cloud providers, offer a
- plethora of authentication methods matching those supported by remote cloud platforms.
- While there is no single authentication standard that unifies this process, there
- are some patterns that are easily identifiable and can be used as guidelines when
- deciding which authentication method to use to configure a Service Connector.

- This section explores some of those patterns and gives some advice regarding which
- authentication methods are best suited for your needs.

- This section may require some general knowledge about authentication and authorization
- to be properly understood. We tried to keep it simple and limit ourselves to talking
- about high-level concepts, but some areas may get a bit too technical.

- Username and password

- The key takeaway is this: you should avoid using your primary account password
- as authentication credentials as much as possible. If there are alternative authentication
- methods that you can use or other types of credentials (e.g. session tokens, API
- keys, API tokens), you should always try to use those instead.

- Ultimately, if you have no choice, be cognizant of the third parties you share
- your passwords with. If possible, they should never leave the premises of your
- local host or development environment.

- This is the typical authentication method that uses a username or account name
- plus the associated password. While this is the de facto method used to log in
- with web consoles and local CLIs, this is the least secure of all authentication
- methods and never something you want to share with other members of your team
- or organization or use to authenticate automated workloads.'
- - "━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛$ zenml orchestrator connect\
- \ <ORCHESTRATOR_NAME> --connector aws-iam-multi-us\nRunning with active stack:\
- \ 'default' (repository)\nSuccessfully connected orchestrator `<ORCHESTRATOR_NAME>`\
- \ to the following resources:\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓\n\
- ┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE\
- \ TYPE │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼──────────────────┨\n\
- ┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us │ \U0001F536 aws \
- \ │ \U0001F300 kubernetes-cluster │ zenhacks-cluster ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛\n\
- \n# Register and activate a stack with the new orchestrator\n$ zenml stack register\
- \ <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nif you don't have a Service\
- \ Connector on hand and you don't want to register one , the local Kubernetes\
- \ kubectl client needs to be configured with a configuration context pointing\
- \ to the remote cluster. The kubernetes_context stack component must also be configured\
- \ with the value of that context:\n\nzenml orchestrator register <ORCHESTRATOR_NAME>\
- \ \\\n --flavor=kubernetes \\\n --kubernetes_context=<KUBERNETES_CONTEXT>\n\
- \n# Register and activate a stack with the new orchestrator\nzenml stack register\
- \ <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nZenML will build a Docker image\
- \ called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code\
- \ and use it to run your pipeline steps in Kubernetes. Check out this page if\
- \ you want to learn more about how ZenML builds these images and how you can customize\
- \ them.\n\nYou can now run any ZenML pipeline using the Kubernetes orchestrator:\n\
- \npython file_that_runs_a_zenml_pipeline.py"
- datasets: []
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  metrics:
@@ -610,49 +490,49 @@ model-index:
  type: dim_384
  metrics:
  - type: cosine_accuracy@1
- value: 0.1863013698630137
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.4794520547945205
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6602739726027397
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7972602739726027
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.1863013698630137
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.1598173515981735
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.13205479452054794
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07972602739726026
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.1863013698630137
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.4794520547945205
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6602739726027397
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7972602739726027
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.47459290361092754
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.3725994781474232
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.37953809566266083
  name: Cosine Map@100
  - task:
  type: information-retrieval
@@ -662,49 +542,49 @@ model-index:
  type: dim_256
  metrics:
  - type: cosine_accuracy@1
- value: 0.18356164383561643
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.4876712328767123
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6602739726027397
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7917808219178082
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.18356164383561643
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.16255707762557076
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.1320547945205479
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07917808219178081
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.18356164383561643
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.4876712328767123
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6602739726027397
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7917808219178082
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.47334554819769054
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.3724179169384647
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.37931260226095775
  name: Cosine Map@100
  - task:
  type: information-retrieval
@@ -714,49 +594,49 @@ model-index:
  type: dim_128
  metrics:
  - type: cosine_accuracy@1
- value: 0.18356164383561643
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.4684931506849315
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6356164383561644
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7780821917808219
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.18356164383561643
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.1561643835616438
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.12712328767123285
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07780821917808219
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.18356164383561643
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.4684931506849315
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6356164383561644
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7780821917808219
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.46219638130094637
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.3628680147858229
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.37047490630037583
  name: Cosine Map@100
  - task:
  type: information-retrieval
@@ -766,55 +646,55 @@ model-index:
  type: dim_64
  metrics:
  - type: cosine_accuracy@1
- value: 0.2054794520547945
  name: Cosine Accuracy@1
  - type: cosine_accuracy@3
- value: 0.4767123287671233
  name: Cosine Accuracy@3
  - type: cosine_accuracy@5
- value: 0.6273972602739726
  name: Cosine Accuracy@5
  - type: cosine_accuracy@10
- value: 0.7534246575342466
  name: Cosine Accuracy@10
  - type: cosine_precision@1
- value: 0.2054794520547945
  name: Cosine Precision@1
  - type: cosine_precision@3
- value: 0.15890410958904108
  name: Cosine Precision@3
  - type: cosine_precision@5
- value: 0.12547945205479452
  name: Cosine Precision@5
  - type: cosine_precision@10
- value: 0.07534246575342465
  name: Cosine Precision@10
  - type: cosine_recall@1
- value: 0.2054794520547945
  name: Cosine Recall@1
  - type: cosine_recall@3
- value: 0.4767123287671233
  name: Cosine Recall@3
  - type: cosine_recall@5
- value: 0.6273972602739726
  name: Cosine Recall@5
  - type: cosine_recall@10
- value: 0.7534246575342466
  name: Cosine Recall@10
  - type: cosine_ndcg@10
- value: 0.46250756548591326
  name: Cosine Ndcg@10
  - type: cosine_mrr@10
- value: 0.37069906501413347
  name: Cosine Mrr@10
  - type: cosine_map@100
- value: 0.37874559284369463
  name: Cosine Map@100
  ---

  # zenml/finetuned-snowflake-arctic-embed-m-v1.5

- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

  ## Model Details

@@ -824,7 +704,8 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [S
  - **Maximum Sequence Length:** 512 tokens
  - **Output Dimensionality:** 768 tokens
  - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
  - **Language:** en
  - **License:** apache-2.0

@@ -862,9 +743,9 @@ from sentence_transformers import SentenceTransformer
  model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
  # Run inference
  sentences = [
- 'How can I configure a pipeline with a YAML file in ZenML?',
- 'atically retry steps\n\nRun pipelines asynchronouslyControl execution order of steps\n\nUsing a custom step invocation ID\n\nName your pipeline runs\n\nUse failure/success hooks\n\nHyperparameter tuning\n\nAccess secrets in a step\n\nRun an individual step\n\nFetching pipelines\n\nGet past pipeline/step runs\n\n🚨Trigger a pipeline\n\nUse templates: Python SDK\n\nUse templates: Dashboard\n\nUse templates: Rest API\n\n📃Use configuration files\n\nHow to configure a pipeline with a YAML\n\nWhat can be configured\n\nRuntime settings for Docker, resources, and stack components\n\nConfiguration hierarchy\n\nFind out which configuration was used for a run\n\nAutogenerate a template yaml file\n\n🐳Customize Docker builds\n\nDocker settings on a pipeline\n\nDocker settings on a step\n\nUse a prebuilt image for pipeline execution\n\nSpecify pip dependencies and apt packages\n\nUse your own Dockerfiles\n\nWhich files are built into the image\n\nHow to reuse builds\n\nDefine where an image is built\n\n📔Run remote pipelines from notebooks\n\nLimitations of defining steps in notebook cells\n\nRun a single step from a notebook\n\n🤹Manage your ZenML server\n\nBest practices for upgrading ZenML\n\nUpgrade your ZenML server\n\nUsing ZenML server in production\n\nTroubleshoot your ZenML server\n\nMigration guide\n\nMigration guide 0.13.2 → 0.20.0\n\nMigration guide 0.23.0 → 0.30.0\n\nMigration guide 0.39.1 → 0.41.0\n\nMigration guide 0.58.2 → 0.60.0\n\n📍Develop locally\n\nUse config files to develop locally\n\nKeep your pipelines and dashboard clean\n\n⚒️Manage stacks & components\n\nDeploy a cloud stack with ZenML\n\nDeploy a cloud stack with Terraform\n\nRegister a cloud stack\n\nReference secrets in stack configuration\n\nImplement a custom stack component\n\n🚜Train with GPUs\n\nDistributed Training with 🤗 Accelerate\n\n🌲Control logging\n\nView logs on the dashboard\n\nEnable or disable logs storage\n\nSet logging verbosity\n\nDisable rich traceback output\n\nDisable colorful logging\n\n🗄️Handle Data/Artifacts\n\nHow ZenML stores data\n\nReturn multiple outputs from a step\n\nDelete an artifact\n\nOrganize data with tags\n\nGet arbitrary artifacts in a step',
- "━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛$ zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-iam-multi-us\nRunning with active stack: 'default' (repository)\nSuccessfully connected orchestrator `<ORCHESTRATOR_NAME>` to the following resources:\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓\n┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE RESOURCE TYPE │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼──────────────────┨\n┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 aws-iam-multi-us 🔶 aws │ 🌀 kubernetes-cluster zenhacks-cluster ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛\n\n# Register and activate a stack with the new orchestrator\n$ zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nif you don't have a Service Connector on hand and you don't want to register one , the local Kubernetes kubectl client needs to be configured with a configuration context pointing to the remote cluster. The kubernetes_context stack component must also be configured with the value of that context:\n\nzenml orchestrator register <ORCHESTRATOR_NAME> \\\n    --flavor=kubernetes \\\n    --kubernetes_context=<KUBERNETES_CONTEXT>\n\n# Register and activate a stack with the new orchestrator\nzenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set\n\nZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Kubernetes. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.\n\nYou can now run any ZenML pipeline using the Kubernetes orchestrator:\n\npython file_that_runs_a_zenml_pipeline.py",
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
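Once you have the embeddings, you can score the query against the two documents directly. A minimal sketch, assuming sentence-transformers >= 3.0 (consistent with the framework versions listed further down); `model` and `embeddings` are the objects from the snippet above:

```python
# Score the query (first row) against the two documents (remaining rows).
# model.similarity() applies this model's similarity function (cosine).
similarities = model.similarity(embeddings[0:1], embeddings[1:])
print(similarities)  # shape [1, 2]; the higher score marks the closer document
```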
@@ -908,89 +789,89 @@ You can finetune this model on your own dataset.
  * Dataset: `dim_384`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

- | Metric | Value |
- |:--------------------|:-----------|
- | cosine_accuracy@1 | 0.1863 |
- | cosine_accuracy@3 | 0.4795 |
- | cosine_accuracy@5 | 0.6603 |
- | cosine_accuracy@10 | 0.7973 |
- | cosine_precision@1 | 0.1863 |
- | cosine_precision@3 | 0.1598 |
- | cosine_precision@5 | 0.1321 |
- | cosine_precision@10 | 0.0797 |
- | cosine_recall@1 | 0.1863 |
- | cosine_recall@3 | 0.4795 |
- | cosine_recall@5 | 0.6603 |
- | cosine_recall@10 | 0.7973 |
- | cosine_ndcg@10 | 0.4746 |
- | cosine_mrr@10 | 0.3726 |
- | **cosine_map@100** | **0.3795** |
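These numbers come from the evaluator linked above. As a hedged sketch of how such metrics can be reproduced on your own data (the toy `queries`/`corpus`/`relevant_docs` dicts below are hypothetical placeholders, not the held-out set used for this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")

# Hypothetical toy data: query id -> text, corpus id -> text,
# and query id -> set of relevant corpus ids.
queries = {"q1": "How can I configure a pipeline with a YAML file in ZenML?"}
corpus = {
    "d1": "How to configure a pipeline with a YAML ...",
    "d2": "Security best practices for Service Connectors ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_384")
results = evaluator(model)  # dict of metrics, e.g. cosine_map@100
print(results)
```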

  #### Information Retrieval
  * Dataset: `dim_256`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

- | Metric | Value |
- |:--------------------|:-----------|
- | cosine_accuracy@1 | 0.1836 |
- | cosine_accuracy@3 | 0.4877 |
- | cosine_accuracy@5 | 0.6603 |
- | cosine_accuracy@10 | 0.7918 |
- | cosine_precision@1 | 0.1836 |
- | cosine_precision@3 | 0.1626 |
- | cosine_precision@5 | 0.1321 |
- | cosine_precision@10 | 0.0792 |
- | cosine_recall@1 | 0.1836 |
- | cosine_recall@3 | 0.4877 |
- | cosine_recall@5 | 0.6603 |
- | cosine_recall@10 | 0.7918 |
- | cosine_ndcg@10 | 0.4733 |
- | cosine_mrr@10 | 0.3724 |
- | **cosine_map@100** | **0.3793** |

  #### Information Retrieval
  * Dataset: `dim_128`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

- | Metric | Value |
- |:--------------------|:-----------|
- | cosine_accuracy@1 | 0.1836 |
- | cosine_accuracy@3 | 0.4685 |
- | cosine_accuracy@5 | 0.6356 |
- | cosine_accuracy@10 | 0.7781 |
- | cosine_precision@1 | 0.1836 |
- | cosine_precision@3 | 0.1562 |
- | cosine_precision@5 | 0.1271 |
- | cosine_precision@10 | 0.0778 |
- | cosine_recall@1 | 0.1836 |
- | cosine_recall@3 | 0.4685 |
- | cosine_recall@5 | 0.6356 |
- | cosine_recall@10 | 0.7781 |
- | cosine_ndcg@10 | 0.4622 |
- | cosine_mrr@10 | 0.3629 |
- | **cosine_map@100** | **0.3705** |

  #### Information Retrieval
  * Dataset: `dim_64`
  * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

- | Metric | Value |
- |:--------------------|:-----------|
- | cosine_accuracy@1 | 0.2055 |
- | cosine_accuracy@3 | 0.4767 |
- | cosine_accuracy@5 | 0.6274 |
- | cosine_accuracy@10 | 0.7534 |
- | cosine_precision@1 | 0.2055 |
- | cosine_precision@3 | 0.1589 |
- | cosine_precision@5 | 0.1255 |
- | cosine_precision@10 | 0.0753 |
- | cosine_recall@1 | 0.2055 |
- | cosine_recall@3 | 0.4767 |
- | cosine_recall@5 | 0.6274 |
- | cosine_recall@10 | 0.7534 |
- | cosine_ndcg@10 | 0.4625 |
- | cosine_mrr@10 | 0.3707 |
- | **cosine_map@100** | **0.3787** |
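The four tables above correspond to the truncation dimensions this Matryoshka model is evaluated at (384, 256, 128, 64). A minimal sketch of querying at a reduced dimension, assuming sentence-transformers >= 2.7, which added the `truncate_dim` option:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that all embeddings are truncated to 256 dimensions,
# trading a small amount of retrieval quality for smaller, faster vectors.
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5", truncate_dim=256)
emb = model.encode(["How do I finetune embeddings in ZenML?"])
print(emb.shape)  # (1, 256)
```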

  <!--
  ## Bias, Risks and Limitations
@@ -1008,22 +889,22 @@ You can finetune this model on your own dataset.

  ### Training Dataset

- #### Unnamed Dataset

- * Size: 3,284 training samples
  * Columns: <code>positive</code> and <code>anchor</code>
- * Approximate statistics based on the first 1000 samples:
- | | positive | anchor |
- |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
- | type | string | string |
- | details | <ul><li>min: 10 tokens</li><li>mean: 22.7 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 316.5 tokens</li><li>max: 512 tokens</li></ul> |
  * Samples:
- | positive | anchor |
- |:----------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------|
- | <code>How does ZenML help in integrating machine learning with operational processes?</code> | <code>ZenML - Bridging the gap between ML & Ops<br><br>Legacy Docs<br><br>Bleeding EdgeLegacy Docs0.67.0<br><br>🧙‍♂️Find older version our docs<br><br>Powered by GitBook</code> |
- | <code>How can I configure a data integrity check step in ZenML to perform outlier sample detection and string length verification on a dataset with specific conditions?</code> | <code>ks. For example, the following step configuration:deepchecks_data_integrity_check_step(<br> check_list=[<br> DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION,<br> DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS,<br> ],<br> dataset_kwargs=dict(label='class', cat_features=['country', 'state']),<br> check_kwargs={<br> DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION: dict(<br> nearest_neighbors_percent=0.01,<br> extent_parameter=3,<br> condition_outlier_ratio_less_or_equal=dict(<br> max_outliers_ratio=0.007,<br> outlier_score_threshold=0.5,<br> ),<br> condition_no_outliers=dict(<br> outlier_score_threshold=0.6,<br> )<br> ),<br> DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS: dict(<br> num_percentiles=1000,<br> min_unique_values=3,<br> condition_number_of_outliers_less_or_equal=dict(<br> max_outliers=3,<br> )<br> ),<br> },<br> ...<br>)<br><br>is equivalent to running the following Deepchecks tests:<br><br>import deepchecks.tabular.checks as tabular_checks<br>from deepchecks.tabular import Suite<br>from deepchecks.tabular import Dataset<br><br>train_dataset = Dataset(<br> reference_dataset,<br> label='class',<br> cat_features=['country', 'state']<br>)<br><br>suite = Suite(name="custom")<br>check = tabular_checks.OutlierSampleDetection(<br> nearest_neighbors_percent=0.01,<br> extent_parameter=3,<br>)<br>check.add_condition_outlier_ratio_less_or_equal(<br> max_outliers_ratio=0.007,<br> outlier_score_threshold=0.5,<br>)<br>check.add_condition_no_outliers(<br> outlier_score_threshold=0.6,<br>)<br>suite.add(check)<br>check = tabular_checks.StringLengthOutOfBounds(<br> num_percentiles=1000,<br> min_unique_values=3,<br>)<br>check.add_condition_number_of_outliers_less_or_equal(<br> max_outliers=3,<br>)<br>suite.run(train_dataset=train_dataset)<br><br>The Deepchecks Data Validator</code> |
- | <code>How can I develop a custom data validator in ZenML?</code> | <code>custom data validator<br><br>📈Experiment Trackers<br><br>CometMLflow<br><br>Neptune<br><br>Weights & Biases<br><br>Develop a custom experiment tracker<br><br>🏃‍♀️Model Deployers<br><br>MLflow<br><br>Seldon<br><br>BentoML<br><br>Hugging Face<br><br>Databricks<br><br>Develop a Custom Model Deployer<br><br>👣Step Operators<br><br>Amazon SageMaker<br><br>Google Cloud VertexAI<br><br>AzureML<br><br>Kubernetes<br><br>Spark<br><br>Develop a Custom Step Operator<br><br>❗Alerters<br><br>Discord Alerter<br><br>Slack Alerter<br><br>Develop a Custom Alerter<br><br>🖼️Image Builders<br><br>Local Image Builder<br><br>Kaniko Image Builder<br><br>Google Cloud Image Builder<br><br>Develop a Custom Image Builder<br><br>🏷️Annotators<br><br>Argilla<br><br>Label Studio<br><br>Pigeon<br><br>Prodigy<br><br>Develop a Custom Annotator<br><br>📓Model Registries<br><br>MLflow Model Registry<br><br>Develop a Custom Model Registry<br><br>📊Feature Stores<br><br>Feast<br><br>Develop a Custom Feature Store<br><br>Examples<br><br>🚀Quickstart<br><br>🔏End-to-End Batch Inference<br><br>📚Basic NLP with BERT<br><br>👁️Computer Vision with YoloV8<br><br>📖LLM Finetuning<br><br>🧩More Projects...<br><br>Reference<br><br>🐍Python Client<br><br>📼Global settings<br><br>🌎Environment Variables<br><br>👀API reference<br><br>🤷SDK & CLI reference<br><br>📚How do I...?<br><br>♻️Migration guide<br><br>Migration guide 0.13.2 → 0.20.0<br><br>Migration guide 0.23.0 → 0.30.0<br><br>Migration guide 0.39.1 → 0.41.0<br><br>Migration guide 0.58.2 → 0.60.0<br><br>💜Community & content<br><br>❓FAQ<br><br>Powered by GitBook</code> |
  * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
@@ -1117,7 +998,7 @@ You can finetune this model on your own dataset.
  - `dataloader_num_workers`: 0
  - `dataloader_prefetch_factor`: None
  - `past_index`: -1
- - `disable_tqdm`: True
  - `remove_unused_columns`: True
  - `label_names`: None
  - `load_best_model_at_end`: True
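The MatryoshkaLoss configuration JSON shown earlier is cut off by the diff, so its exact parameters are not visible here. For orientation only, a hedged sketch of how MatryoshkaLoss typically wraps MultipleNegativesRankingLoss; the `matryoshka_dims` list is inferred from the dim_384/dim_256/dim_128/dim_64 evaluations above, not read from the truncated config:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")

# Inner loss: in-batch contrastive ranking over (anchor, positive) pairs.
base_loss = MultipleNegativesRankingLoss(model)
# Outer loss: apply the inner loss at each truncation size so embeddings
# remain useful when cut down to 384, 256, 128, or 64 dimensions.
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[384, 256, 128, 64])
```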
@@ -1178,31 +1059,21 @@ You can finetune this model on your own dataset.
  </details>

  ### Training Logs
- | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
- |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
- | 0.3893 | 10 | 1.7142 | - | - | - | - |
- | 0.7786 | 20 | 0.4461 | - | - | - | - |
- | 0.9732 | 25 | - | 0.3544 | 0.3592 | 0.3674 | 0.3523 |
- | 1.1655 | 30 | 0.1889 | - | - | - | - |
- | 1.5547 | 40 | 0.1196 | - | - | - | - |
- | 1.9440 | 50 | 0.0717 | - | - | - | - |
- | 1.9830 | 51 | - | 0.3672 | 0.3727 | 0.3728 | 0.3797 |
- | 2.3309 | 60 | 0.0474 | - | - | - | - |
- | 2.7202 | 70 | 0.0418 | - | - | - | - |
- | **2.9927** | **77** | **-** | **0.3722** | **0.3772** | **0.3798** | **0.3783** |
- | 3.1071 | 80 | 0.0355 | - | - | - | - |
- | 3.4964 | 90 | 0.0351 | - | - | - | - |
- | 3.8856 | 100 | 0.0276 | 0.3705 | 0.3793 | 0.3795 | 0.3787 |

  * The bold row denotes the saved checkpoint.

  ### Framework Versions
- - Python: 3.12.3
- - Sentence Transformers: 3.0.1
- - Transformers: 4.44.0
- - PyTorch: 2.5.0+cu124
- - Accelerate: 0.33.0
- - Datasets: 2.20.0
  - Tokenizers: 0.19.1

  ## Citation
@@ -1225,7 +1096,7 @@ You can finetune this model on your own dataset.
  #### MatryoshkaLoss
  ```bibtex
  @misc{kusupati2024matryoshka,
- title={Matryoshka Representation Learning},
  author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
  year={2024},
  eprint={2205.13147},
@@ -1237,7 +1108,7 @@ You can finetune this model on your own dataset.
1237
  #### MultipleNegativesRankingLoss
1238
  ```bibtex
1239
  @misc{henderson2017efficient,
1240
- title={Efficient Natural Language Response Suggestion for Smart Reply},
1241
  author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
1242
  year={2017},
1243
  eprint={1705.00652},
 
7
  - sentence-similarity
8
  - feature-extraction
9
  - generated_from_trainer
10
+ - dataset_size:36
11
  - loss:MatryoshkaLoss
12
  - loss:MultipleNegativesRankingLoss
13
  base_model: Snowflake/snowflake-arctic-embed-m-v1.5
14
  widget:
15
+ - source_sentence: What are the abstract methods provided for managing model servers
16
+ in ZenML's BaseModelDeployerFlavor class?
17
  sentences:
18
+ - "quired for your GCP Image Builder by running e.g.:zenml service-connector list-resources\
19
+ \ --resource-type gcp-generic\n\nExample Command Output\n\nThe following 'gcp-generic'\
20
+ \ resources can be accessed by service connectors that you have configured:\n\
21
+ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n\
22
+ ┃ CONNECTOR ID │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE\
23
+ \ TYPE │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────┼────────────────┨\n\
24
+ ┃ bfdb657d-d808-47e7-9974-9ba6e4919d83 │ gcp-generic │ \U0001F535 gcp \
25
+ \ │ \U0001F535 gcp-generic │ zenml-core ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\
26
+ \nAfter having set up or decided on a GCP Service Connector to use to authenticate\
27
+ \ to GCP, you can register the GCP Image Builder as follows:\n\nzenml image-builder\
28
+ \ register <IMAGE_BUILDER_NAME> \\\n --flavor=gcp \\\n --cloud_builder_image=<BUILDER_IMAGE_NAME>\
29
+ \ \\\n --network=<DOCKER_NETWORK> \\\n --build_timeout=<BUILD_TIMEOUT_IN_SECONDS>\n\
30
+ \n# Connect the GCP Image Builder to GCP via a GCP Service Connector\nzenml image-builder\
31
+ \ connect <IMAGE_BUILDER_NAME> -i\n\nA non-interactive version that connects the\
32
+ \ GCP Image Builder to a target GCP Service Connector:\n\nzenml image-builder\
33
+ \ connect <IMAGE_BUILDER_NAME> --connector <CONNECTOR_ID>\n\nExample Command Output"
34
+ - '🧙Installation
35
 
36
 
37
+ Installing ZenML and getting started.
 
 
 
38
 
39
 
40
+ ZenML is a Python package that can be installed directly via pip:
41
 
42
 
43
+ pip install zenml
 
44
 
45
 
46
+ Note that ZenML currently supports Python 3.8, 3.9, 3.10, and 3.11. Please make
47
+ sure that you are using a supported Python version.
 
 
 
 
 
 
 
 
 
48
 
49
 
50
+ Install with the dashboard
 
51
 
52
 
53
+ ZenML comes bundled with a web dashboard that lives inside a sister repository.
54
+ In order to get access to the dashboard locally, you need to launch the ZenML
55
+ Server and Dashboard locally. For this, you need to install the optional dependencies
56
+ for the ZenML Server:
57
 
58
 
59
+ pip install "zenml[server]"
 
 
60
 
61
 
62
+ We highly encourage you to install ZenML in a virtual environment. At ZenML, We
63
+ like to use virtualenvwrapper or pyenv-virtualenv to manage our Python virtual
64
+ environments.
65
 
66
 
67
+ Installing onto MacOS with Apple Silicon (M1, M2)
 
68
 
69
 
70
+ A change in how forking works on Macs running on Apple Silicon means that you
71
+ should set the following environment variable which will ensure that your connections
72
+ to the server remain unbroken:
73
 
74
 
75
+ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
 
 
 
76
 
77
 
78
+ You can read more about this here. This environment variable is needed if you
79
+ are working with a local server on your Mac, but if you''re just using ZenML as
80
+ a client / CLI and connecting to a deployed server then you don''t need to set
81
+ it.
82
 
83
 
84
+ Nightly builds
 
85
 
86
 
87
+ ZenML also publishes nightly builds under the zenml-nightly package name. These
88
+ are built from the latest develop branch (to which work ready for release is published)
89
+ and are not guaranteed to be stable. To install the nightly build, run:
90
 
91
 
92
+ pip install zenml-nightly
 
93
 
94
 
95
+ Verifying installations
96
 
97
 
98
+ Once the installation is completed, you can check whether the installation was
99
+ successful either through Bash:
100
 
101
 
102
+ zenml version
 
 
 
 
 
 
 
 
103
 
104
 
105
+ or through Python:
106
 
107
 
108
+ import zenml
 
109
 
110
 
111
+ print(zenml.__version__)
112
 
113
 
114
+ If you would like to learn more about the current release, please visit our PyPi
115
+ package page.
116
 
117
 
118
+ Running with Docker'
119
+ - ":\n \"\"\"Abstract method to deploy a model.\"\"\"@staticmethod\n @abstractmethod\n\
120
+ \ def get_model_server_info(\n service: BaseService,\n ) -> Dict[str,\
121
+ \ Optional[str]]:\n \"\"\"Give implementation-specific way to extract relevant\
122
+ \ model server\n properties for the user.\"\"\"\n\n@abstractmethod\n \
123
+ \ def perform_stop_model(\n self,\n service: BaseService,\n \
124
+ \ timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,\n force: bool\
125
+ \ = False,\n ) -> BaseService:\n \"\"\"Abstract method to stop a model\
126
+ \ server.\"\"\"\n\n@abstractmethod\n def perform_start_model(\n self,\n\
127
+ \ service: BaseService,\n timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,\n\
128
+ \ ) -> BaseService:\n \"\"\"Abstract method to start a model server.\"\
129
+ \"\"\n\n@abstractmethod\n def perform_delete_model(\n self,\n \
130
+ \ service: BaseService,\n timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,\n\
131
+ \ force: bool = False,\n ) -> None:\n \"\"\"Abstract method to\
132
+ \ delete a model server.\"\"\"\n\nclass BaseModelDeployerFlavor(Flavor):\n \
133
+ \ \"\"\"Base class for model deployer flavors.\"\"\"\n\n@property\n @abstractmethod\n\
134
+ \ def name(self):\n \"\"\"Returns the name of the flavor.\"\"\"\n\n\
135
+ @property\n def type(self) -> StackComponentType:\n \"\"\"Returns the\
136
+ \ flavor type.\n\nReturns:\n The flavor type.\n \"\"\"\n \
137
+ \ return StackComponentType.MODEL_DEPLOYER\n\n@property\n def config_class(self)\
138
+ \ -> Type[BaseModelDeployerConfig]:\n \"\"\"Returns `BaseModelDeployerConfig`\
139
+ \ config class.\n\nReturns:\n The config class.\n \"\"\"\
140
+ \n return BaseModelDeployerConfig\n\n@property\n @abstractmethod\n \
141
+ \ def implementation_class(self) -> Type[BaseModelDeployer]:\n \"\"\"\
142
+ The class that implements the model deployer.\"\"\"\n\nThis is a slimmed-down\
143
+ \ version of the base implementation which aims to highlight the abstraction layer.\
144
+ \ In order to see the full implementation and get the complete docstrings, please\
145
+ \ check the SDK docs .\n\nBuilding your own model deployers"
146
+ - source_sentence: How can you successfully connect the image builder `gcp-image-builder`
147
+ to the resources using a connector ID?
148
  sentences:
149
+ - 'ZenML - Bridging the gap between ML & Ops
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
150
 
 
151
 
152
+ Legacy Docs
153
 
 
 
154
 
155
+ Bleeding EdgeLegacy Docs0.67.0
156
 
 
 
 
 
 
157
 
158
+ 🧙‍♂️Find older version our docs
159
 
 
 
 
 
 
 
160
 
161
+ Powered by GitBook'
162
+ - 'Google Cloud Image Builder
163
 
 
164
 
165
+ Building container images with Google Cloud Build
166
 
 
167
 
168
+ The Google Cloud image builder is an image builder flavor provided by the ZenML
169
+ gcp integration that uses Google Cloud Build to build container images.
170
 
 
171
 
172
+ When to use it
173
 
 
 
174
 
175
+ You should use the Google Cloud image builder if:
176
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
177
 
178
+ you''re unable to install or use Docker on your client machine.
179
 
 
180
 
181
+ you''re already using GCP.
182
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
183
 
184
+ your stack is mainly composed of other Google Cloud components such as the GCS
185
+ Artifact Store or the Vertex Orchestrator.
186
 
 
187
 
188
+ How to deploy it
189
 
 
190
 
191
+ Would you like to skip ahead and deploy a full ZenML cloud stack already, including
192
+ the Google Cloud image builder? Check out the in-browser stack deployment wizard,
193
+ the stack registration wizard, or the ZenML GCP Terraform module for a shortcut
194
+ on how to deploy & register this stack component.
195
 
 
196
 
197
+ In order to use the ZenML Google Cloud image builder you need to enable Google
198
+ Cloud Build relevant APIs on the Google Cloud project.
199
 
 
200
 
201
+ How to use it
202
 
203
 
204
+ To use the Google Cloud image builder, we need:
 
205
 
206
 
207
+ The ZenML gcp integration installed. If you haven''t done so, run:
208
 
209
 
210
+ zenml integration install gcp
 
211
 
212
 
213
+ A GCP Artifact Store where the build context will be uploaded, so Google Cloud
214
+ Build can access it.
215
 
216
 
217
+ A GCP container registry where the built image will be pushed.
 
 
 
 
218
 
219
 
220
+ Optionally, the GCP project ID in which you want to run the build and a service
221
+ account with the needed permissions to run the build. If not provided, then the
222
+ project ID and credentials will be inferred from the environment.
223
 
224
 
225
+ Optionally, you can change:
226
 
227
 
228
+ the Docker image used by Google Cloud Build to execute the steps to build and
229
+ push the Docker image. By default, the builder image will be ''gcr.io/cloud-builders/docker''.
230
 
231
 
232
+ The network to which the container used to build the ZenML pipeline Docker image
233
+ will be attached. More information: Cloud build network.
234
 
235
 
236
+ The build timeout for the build, and for the blocking operation waiting for the
237
+ build to finish. More information: Build Timeout.'
+ - "--connector <CONNECTOR_ID>\n\nExample Command Output$ zenml image-builder connect\
+ \ gcp-image-builder --connector gcp-generic\nSuccessfully connected image builder\
+ \ `gcp-image-builder` to the following resources:\n┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n\
+ ┃             CONNECTOR ID             │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE\
+ \ TYPE  │ RESOURCE NAMES ┃\n┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────┼────────────────┨\n\
+ ┃ bfdb657d-d808-47e7-9974-9ba6e4919d83 gcp-generic │ \U0001F535 gcp    \
+ \ │ \U0001F535 gcp-generic zenml-core ┃\n┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\
+ \nAs a final step, you can use the GCP Image Builder in a ZenML Stack:\n\n# Register\
+ \ and set a stack with the new image builder\nzenml stack register <STACK_NAME>\
+ \ -i <IMAGE_BUILDER_NAME> ... --set\n\nWhen you register the GCP Image Builder,\
+ \ you can generate a GCP Service Account Key, save it to a local file and then\
+ \ reference it in the Image Builder configuration.\n\nThis method has the advantage\
+ \ that you don't need to install and configure the GCP CLI on your host, but it's\
+ \ still not as secure as using a GCP Service Connector and the stack component\
+ \ configuration is not portable to other hosts.\n\nFor this method, you need to\
+ \ create a user-managed GCP service account, and grant it privileges to access\
+ \ the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor\
+ \ IAM role.\n\nWith the service account key downloaded to a local file, you can\
+ \ register the GCP Image Builder as follows:\n\nzenml image-builder register <IMAGE_BUILDER_NAME>\
+ \ \\\n  --flavor=gcp \\\n  --project=<GCP_PROJECT_ID> \\\n  --service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY>\
+ \ \\\n  --cloud_builder_image=<BUILDER_IMAGE_NAME> \\\n  --network=<DOCKER_NETWORK>\
+ \ \\\n  --build_timeout=<BUILD_TIMEOUT_IN_SECONDS>"
+ - source_sentence: How do I finetune embeddings using Sentence Transformers in ZenML?
  sentences:
+ - "nsible for cluster-manager-specific configuration._io_configuration is a critical\
+ \ method. Even though we have materializers, Spark might require additional packages\
+ \ and configuration to work with a specific filesystem. This method is used as\
+ \ an interface to provide this configuration.\n\n_additional_configuration takes\
+ \ the submit_args, converts, and appends them to the overall configuration.\n\n\
+ Once the configuration is completed, _launch_spark_job comes into play. This takes\
+ \ the completed configuration and runs a Spark job on the given master URL with\
+ \ the specified deploy_mode. By default, this is achieved by creating and executing\
+ \ a spark-submit command.\n\nWarning\n\nIn its first iteration, the pre-configuration\
+ \ with _io_configuration method is only effective when it is paired with an S3ArtifactStore\
+ \ (which has an authentication secret). When used with other artifact store flavors,\
+ \ you might be required to provide additional configuration through the submit_args.\n\
+ \nStack Component: KubernetesSparkStepOperator\n\nThe KubernetesSparkStepOperator\
+ \ is implemented by subclassing the base SparkStepOperator and uses the PipelineDockerImageBuilder\
+ \ class to build and push the required Docker images.\n\nfrom typing import Optional\n\
+ \nfrom zenml.integrations.spark.step_operators.spark_step_operator import (\n\
+ \ SparkStepOperatorConfig\n)\n\nclass KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig):\n\
+ \ \"\"\"Config for the Kubernetes Spark step operator.\"\"\"\n\nnamespace:\
+ \ Optional[str] = None\n service_account: Optional[str] = None\n\nfrom pyspark.conf\
+ \ import SparkConf\n\nfrom zenml.utils.pipeline_docker_image_builder import PipelineDockerImageBuilder\n\
+ from zenml.integrations.spark.step_operators.spark_step_operator import (\n \
+ \ SparkStepOperator\n)\n\nclass KubernetesSparkStepOperator(SparkStepOperator):\n\
+ \ \"\"\"Step operator which runs Steps with Spark on Kubernetes.\"\"\""
+ - 'Finetuning embeddings with Sentence Transformers


+ Finetune embeddings with Sentence Transformers.


+ PreviousSynthetic data generationNextEvaluating finetuned embeddings


+ Last updated 1 month ago'
+ - 'Whylogs


+ How to collect and visualize statistics to track changes in your pipelines'' data
+ with whylogs/WhyLabs profiling.


+ The whylogs/WhyLabs Data Validator flavor provided with the ZenML integration
+ uses whylogs and WhyLabs to generate and track data profiles, highly accurate
+ descriptive representations of your data. The profiles can be used to implement
+ automated corrective actions in your pipelines, or to render interactive representations
+ for further visual interpretation, evaluation and documentation.


+ When would you want to use it?


+ Whylogs is an open-source library that analyzes your data and creates statistical
+ summaries called whylogs profiles. Whylogs profiles can be processed in your pipelines
+ and visualized locally or uploaded to the WhyLabs platform, where more in depth
+ analysis can be carried out. Even though whylogs also supports other data types,
+ the ZenML whylogs integration currently only works with tabular data in pandas.DataFrame
+ format.


+ You should use the whylogs/WhyLabs Data Validator when you need the following
+ data validation features that are possible with whylogs and WhyLabs:


+ Data Quality: validate data quality in model inputs or in a data pipeline


+ Data Drift: detect data drift in model input features


+ Model Drift: Detect training-serving skew, concept drift, and model performance
+ degradation


+ You should consider one of the other Data Validator flavors if you need a different
+ set of data validation features.


+ How do you deploy it?


+ The whylogs Data Validator flavor is included in the whylogs ZenML integration,
+ you need to install it on your local machine to be able to register a whylogs
+ Data Validator and add it to your stack:


+ zenml integration install whylogs -y


+ If you don''t need to connect to the WhyLabs platform to upload and store the
+ generated whylogs data profiles, the Data Validator stack component does not require
+ any configuration parameters. Adding it to a stack is as simple as running e.g.:'
+ - source_sentence: What is the purpose of ZenML in the context of ML and Ops?
+ sentences:
+ - "> \\\n --build_timeout=<BUILD_TIMEOUT_IN_SECONDS># Register and set a stack\
+ \ with the new image builder\nzenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME>\
+ \ ... --set\n\nCaveats\n\nAs described in this Google Cloud Build documentation\
+ \ page, Google Cloud Build uses containers to execute the build steps which are\
+ \ automatically attached to a network called cloudbuild that provides some Application\
+ \ Default Credentials (ADC), that allow the container to be authenticated and\
+ \ therefore use other GCP services.\n\nBy default, the GCP Image Builder is executing\
+ \ the build command of the ZenML Pipeline Docker image with the option --network=cloudbuild,\
+ \ so the ADC provided by the cloudbuild network can also be used in the build.\
+ \ This is useful if you want to install a private dependency from a GCP Artifact\
+ \ Registry, but you will also need to use a custom base parent image with the\
+ \ keyrings.google-artifactregistry-auth installed, so pip can connect and authenticate\
+ \ in the private artifact registry to download the dependency.\n\nFROM zenmldocker/zenml:latest\n\
+ \nRUN pip install keyrings.google-artifactregistry-auth\n\nThe above Dockerfile\
+ \ uses zenmldocker/zenml:latest as a base image, but is recommended to change\
+ \ the tag to specify the ZenML version and Python version like 0.33.0-py3.10.\n\
+ \nPreviousKaniko Image BuilderNextDevelop a Custom Image Builder\n\nLast updated\
+ \ 21 days ago"
+ - "Control caching behavior\n\nBy default steps in ZenML pipelines are cached whenever\
+ \ code and parameters stay unchanged.\n\n@step(enable_cache=True) # set cache\
+ \ behavior at step level\ndef load_data(parameter: int) -> dict:\n ...\n\n\
+ @step(enable_cache=False) # settings at step level override pipeline level\ndef\
+ \ train_model(data: dict) -> None:\n ...\n\n@pipeline(enable_cache=True) #\
+ \ set cache behavior at step level\ndef simple_ml_pipeline(parameter: int):\n\
+ \ ...\n\nCaching only happens when code and parameters stay the same.\n\nLike\
+ \ many other step and pipeline settings, you can also change this afterward:\n\
+ \n# Same as passing it in the step decorator\nmy_step.configure(enable_cache=...)\n\
+ \n# Same as passing it in the pipeline decorator\nmy_pipeline.configure(enable_cache=...)\n\
+ \nFind out here how to configure this in a YAML file\n\nPreviousStep output typing\
+ \ and annotationNextSchedule a pipeline\n\nLast updated 4 months ago"
+ - 'ZenML - Bridging the gap between ML & Ops
+
+
+ Legacy Docs
+
+
+ Bleeding EdgeLegacy Docs0.67.0
+
+
+ 🧙‍♂️Find older version our docs
+
+
+ Powered by GitBook'
+ - source_sentence: How does ZenML facilitate the flow of data between steps in a pipeline?
+ sentences:
+ - "tainer_registry \\\n -i local_builder \\\n --setOnce you added the step\
+ \ operator to your active stack, you can use it to execute individual steps of\
+ \ your pipeline by specifying it in the @step decorator as follows:\n\nfrom zenml\
+ \ import step\n\n@step(step_operator=<STEP_OPERATOR_NAME>)\ndef step_on_spark(...)\
+ \ -> ...:\n \"\"\"Some step that should run with Spark on Kubernetes.\"\"\"\
+ \n ...\n\nAfter successfully running any step with a KubernetesSparkStepOperator,\
+ \ you should be able to see that a Spark driver pod was created in your cluster\
+ \ for each pipeline step when running kubectl get pods -n $KUBERNETES_NAMESPACE.\n\
+ \nInstead of hardcoding a step operator name, you can also use the Client to dynamically\
+ \ use the step operator of your active stack:\n\nfrom zenml.client import Client\n\
+ \nstep_operator = Client().active_stack.step_operator\n\n@step(step_operator=step_operator.name)\n\
+ def step_on_spark(...) -> ...:\n ...\n\nAdditional configuration\n\nFor additional\
+ \ configuration of the Spark step operator, you can pass SparkStepOperatorSettings\
+ \ when defining or running your pipeline. Check out the SDK docs for a full list\
+ \ of available attributes and this docs page for more information on how to specify\
+ \ settings.\n\nPreviousKubernetesNextDevelop a Custom Step Operator\n\nLast updated\
+ \ 4 months ago"
+ - "\U0001F5C4️Handle Data/Artifacts\n\nStep outputs in ZenML are stored in the artifact\
+ \ store. This enables caching, lineage and auditability. Using type annotations\
+ \ helps with transparency, passing data between steps, and serializing/des\n\n\
+ For best results, use type annotations for your outputs. This is good coding practice\
+ \ for transparency, helps ZenML handle passing data between steps, and also enables\
+ \ ZenML to serialize and deserialize (referred to as 'materialize' in ZenML) the\
+ \ data.\n\n@step\ndef load_data(parameter: int) -> Dict[str, Any]:\n\n# do something\
+ \ with the parameter here\n\ntraining_data = [[1, 2], [3, 4], [5, 6]]\n labels\
+ \ = [0, 1, 0]\n return {'features': training_data, 'labels': labels}\n\n@step\n\
+ def train_model(data: Dict[str, Any]) -> None:\n total_features = sum(map(sum,\
+ \ data['features']))\n total_labels = sum(data['labels'])\n \n # Train\
+ \ some model here\n \n print(f\"Trained model using {len(data['features'])}\
+ \ data points. \"\n f\"Feature sum is {total_features}, label sum is\
+ \ {total_labels}\")\n\n@pipeline \ndef simple_ml_pipeline(parameter: int):\n\
+ \ dataset = load_data(parameter=parameter) # Get the output \n train_model(dataset)\
+ \ # Pipe the previous step output into the downstream step\n\nIn this code, we\
+ \ define two steps: load_data and train_model. The load_data step takes an integer\
+ \ parameter and returns a dictionary containing training data and labels. The\
+ \ train_model step receives the dictionary from load_data, extracts the features\
+ \ and labels, and trains a model (not shown here).\n\nFinally, we define a pipeline\
+ \ simple_ml_pipeline that chains the load_data and train_model steps together.\
+ \ The output from load_data is passed as input to train_model, demonstrating how\
+ \ data flows between steps in a ZenML pipeline.\n\nPreviousDisable colorful loggingNextHow\
+ \ ZenML stores data\n\nLast updated 4 months ago"
+ - "in the ZenML dashboard.\n\nThe whylogs standard stepZenML wraps the whylogs/WhyLabs\
+ \ functionality in the form of a standard WhylogsProfilerStep step. The only field\
+ \ in the step config is a dataset_timestamp attribute which is only relevant when\
+ \ you upload the profiles to WhyLabs that uses this field to group and merge together\
+ \ profiles belonging to the same dataset. The helper function get_whylogs_profiler_step\
+ \ used to create an instance of this standard step takes in an optional dataset_id\
+ \ parameter that is also used only in the context of WhyLabs upload to identify\
+ \ the model in the context of which the profile is uploaded, e.g.:\n\nfrom zenml.integrations.whylogs.steps\
+ \ import get_whylogs_profiler_step\n\ntrain_data_profiler = get_whylogs_profiler_step(dataset_id=\"\
+ model-2\")\ntest_data_profiler = get_whylogs_profiler_step(dataset_id=\"model-3\"\
+ )\n\nThe step can then be inserted into your pipeline where it can take in a pandas.DataFrame\
+ \ dataset, e.g.:\n\nfrom zenml import pipeline\n\n@pipeline\ndef data_profiling_pipeline():\n\
+ \ data, _ = data_loader()\n train, test = data_splitter(data)\n train_data_profiler(train)\n\
+ \ test_data_profiler(test)\n\ndata_profiling_pipeline()\n\nAs can be seen from\
+ \ the step definition , the step takes in a dataset and returns a whylogs DatasetProfileView\
+ \ object:\n\n@step\ndef whylogs_profiler_step(\n dataset: pd.DataFrame,\n \
+ \ dataset_timestamp: Optional[datetime.datetime] = None,\n) -> DatasetProfileView:\n\
+ \ ...\n\nYou should consult the official whylogs documentation for more information\
+ \ on what you can do with the collected profiles.\n\nYou can view the complete\
+ \ list of configuration parameters in the SDK docs.\n\nThe whylogs Data Validator\n\
+ \nThe whylogs Data Validator implements the same interface as do all Data Validators,\
+ \ so this method forces you to maintain some level of compatibility with the overall\
+ \ Data Validator abstraction, which guarantees an easier migration in case you\
+ \ decide to switch to another Data Validator."
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 metrics:
 
 type: dim_384
 metrics:
 - type: cosine_accuracy@1
+ value: 1.0
 name: Cosine Accuracy@1
 - type: cosine_accuracy@3
+ value: 1.0
 name: Cosine Accuracy@3
 - type: cosine_accuracy@5
+ value: 1.0
 name: Cosine Accuracy@5
 - type: cosine_accuracy@10
+ value: 1.0
 name: Cosine Accuracy@10
 - type: cosine_precision@1
+ value: 1.0
 name: Cosine Precision@1
 - type: cosine_precision@3
+ value: 0.3333333333333333
 name: Cosine Precision@3
 - type: cosine_precision@5
+ value: 0.2
 name: Cosine Precision@5
 - type: cosine_precision@10
+ value: 0.1
 name: Cosine Precision@10
 - type: cosine_recall@1
+ value: 1.0
 name: Cosine Recall@1
 - type: cosine_recall@3
+ value: 1.0
 name: Cosine Recall@3
 - type: cosine_recall@5
+ value: 1.0
 name: Cosine Recall@5
 - type: cosine_recall@10
+ value: 1.0
 name: Cosine Recall@10
 - type: cosine_ndcg@10
+ value: 1.0
 name: Cosine Ndcg@10
 - type: cosine_mrr@10
+ value: 1.0
 name: Cosine Mrr@10
 - type: cosine_map@100
+ value: 1.0
 name: Cosine Map@100
 - task:
 type: information-retrieval

 type: dim_256
 metrics:
 - type: cosine_accuracy@1
+ value: 1.0
 name: Cosine Accuracy@1
 - type: cosine_accuracy@3
+ value: 1.0
 name: Cosine Accuracy@3
 - type: cosine_accuracy@5
+ value: 1.0
 name: Cosine Accuracy@5
 - type: cosine_accuracy@10
+ value: 1.0
 name: Cosine Accuracy@10
 - type: cosine_precision@1
+ value: 1.0
 name: Cosine Precision@1
 - type: cosine_precision@3
+ value: 0.3333333333333333
 name: Cosine Precision@3
 - type: cosine_precision@5
+ value: 0.2
 name: Cosine Precision@5
 - type: cosine_precision@10
+ value: 0.1
 name: Cosine Precision@10
 - type: cosine_recall@1
+ value: 1.0
 name: Cosine Recall@1
 - type: cosine_recall@3
+ value: 1.0
 name: Cosine Recall@3
 - type: cosine_recall@5
+ value: 1.0
 name: Cosine Recall@5
 - type: cosine_recall@10
+ value: 1.0
 name: Cosine Recall@10
 - type: cosine_ndcg@10
+ value: 1.0
 name: Cosine Ndcg@10
 - type: cosine_mrr@10
+ value: 1.0
 name: Cosine Mrr@10
 - type: cosine_map@100
+ value: 1.0
 name: Cosine Map@100
 - task:
 type: information-retrieval

 type: dim_128
 metrics:
 - type: cosine_accuracy@1
+ value: 1.0
 name: Cosine Accuracy@1
 - type: cosine_accuracy@3
+ value: 1.0
 name: Cosine Accuracy@3
 - type: cosine_accuracy@5
+ value: 1.0
 name: Cosine Accuracy@5
 - type: cosine_accuracy@10
+ value: 1.0
 name: Cosine Accuracy@10
 - type: cosine_precision@1
+ value: 1.0
 name: Cosine Precision@1
 - type: cosine_precision@3
+ value: 0.3333333333333333
 name: Cosine Precision@3
 - type: cosine_precision@5
+ value: 0.2
 name: Cosine Precision@5
 - type: cosine_precision@10
+ value: 0.1
 name: Cosine Precision@10
 - type: cosine_recall@1
+ value: 1.0
 name: Cosine Recall@1
 - type: cosine_recall@3
+ value: 1.0
 name: Cosine Recall@3
 - type: cosine_recall@5
+ value: 1.0
 name: Cosine Recall@5
 - type: cosine_recall@10
+ value: 1.0
 name: Cosine Recall@10
 - type: cosine_ndcg@10
+ value: 1.0
 name: Cosine Ndcg@10
 - type: cosine_mrr@10
+ value: 1.0
 name: Cosine Mrr@10
 - type: cosine_map@100
+ value: 1.0
 name: Cosine Map@100
 - task:
 type: information-retrieval

 type: dim_64
 metrics:
 - type: cosine_accuracy@1
+ value: 0.75
 name: Cosine Accuracy@1
 - type: cosine_accuracy@3
+ value: 1.0
 name: Cosine Accuracy@3
 - type: cosine_accuracy@5
+ value: 1.0
 name: Cosine Accuracy@5
 - type: cosine_accuracy@10
+ value: 1.0
 name: Cosine Accuracy@10
 - type: cosine_precision@1
+ value: 0.75
 name: Cosine Precision@1
 - type: cosine_precision@3
+ value: 0.3333333333333333
 name: Cosine Precision@3
 - type: cosine_precision@5
+ value: 0.2
 name: Cosine Precision@5
 - type: cosine_precision@10
+ value: 0.1
 name: Cosine Precision@10
 - type: cosine_recall@1
+ value: 0.75
 name: Cosine Recall@1
 - type: cosine_recall@3
+ value: 1.0
 name: Cosine Recall@3
 - type: cosine_recall@5
+ value: 1.0
 name: Cosine Recall@5
 - type: cosine_recall@10
+ value: 1.0
 name: Cosine Recall@10
 - type: cosine_ndcg@10
+ value: 0.9077324383928644
 name: Cosine Ndcg@10
 - type: cosine_mrr@10
+ value: 0.875
 name: Cosine Mrr@10
 - type: cosine_map@100
+ value: 0.875
 name: Cosine Map@100
 ---

 # zenml/finetuned-snowflake-arctic-embed-m-v1.5

+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

 ## Model Details

 - **Maximum Sequence Length:** 512 tokens
 - **Output Dimensionality:** 768 dimensions
 - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+ - json
 - **Language:** en
 - **License:** apache-2.0

  model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
  # Run inference
  sentences = [
+ 'How does ZenML facilitate the flow of data between steps in a pipeline?',
+ '🗄️Handle Data/Artifacts\n\nStep outputs in ZenML are stored in the artifact store. This enables caching, lineage and auditability. Using type annotations helps with transparency, passing data between steps, and serializing/des\n\nFor best results, use type annotations for your outputs. This is good coding practice for transparency, helps ZenML handle passing data between steps, and also enables ZenML to serialize and deserialize (referred to as \'materialize\' in ZenML) the data.\n\n@step\ndef load_data(parameter: int) -> Dict[str, Any]:\n\n# do something with the parameter here\n\ntraining_data = [[1, 2], [3, 4], [5, 6]]\n labels = [0, 1, 0]\n return {\'features\': training_data, \'labels\': labels}\n\n@step\ndef train_model(data: Dict[str, Any]) -> None:\n total_features = sum(map(sum, data[\'features\']))\n total_labels = sum(data[\'labels\'])\n \n # Train some model here\n \n print(f"Trained model using {len(data[\'features\'])} data points. "\n f"Feature sum is {total_features}, label sum is {total_labels}")\n\n@pipeline \ndef simple_ml_pipeline(parameter: int):\n dataset = load_data(parameter=parameter) # Get the output \n train_model(dataset) # Pipe the previous step output into the downstream step\n\nIn this code, we define two steps: load_data and train_model. The load_data step takes an integer parameter and returns a dictionary containing training data and labels. The train_model step receives the dictionary from load_data, extracts the features and labels, and trains a model (not shown here).\n\nFinally, we define a pipeline simple_ml_pipeline that chains the load_data and train_model steps together. The output from load_data is passed as input to train_model, demonstrating how data flows between steps in a ZenML pipeline.\n\nPreviousDisable colorful loggingNextHow ZenML stores data\n\nLast updated 4 months ago',
+ 'in the ZenML dashboard.\n\nThe whylogs standard stepZenML wraps the whylogs/WhyLabs functionality in the form of a standard WhylogsProfilerStep step. The only field in the step config is a dataset_timestamp attribute which is only relevant when you upload the profiles to WhyLabs that uses this field to group and merge together profiles belonging to the same dataset. The helper function get_whylogs_profiler_step used to create an instance of this standard step takes in an optional dataset_id parameter that is also used only in the context of WhyLabs upload to identify the model in the context of which the profile is uploaded, e.g.:\n\nfrom zenml.integrations.whylogs.steps import get_whylogs_profiler_step\n\ntrain_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")\ntest_data_profiler = get_whylogs_profiler_step(dataset_id="model-3")\n\nThe step can then be inserted into your pipeline where it can take in a pandas.DataFrame dataset, e.g.:\n\nfrom zenml import pipeline\n\n@pipeline\ndef data_profiling_pipeline():\n data, _ = data_loader()\n train, test = data_splitter(data)\n train_data_profiler(train)\n test_data_profiler(test)\n\ndata_profiling_pipeline()\n\nAs can be seen from the step definition , the step takes in a dataset and returns a whylogs DatasetProfileView object:\n\n@step\ndef whylogs_profiler_step(\n dataset: pd.DataFrame,\n dataset_timestamp: Optional[datetime.datetime] = None,\n) -> DatasetProfileView:\n ...\n\nYou should consult the official whylogs documentation for more information on what you can do with the collected profiles.\n\nYou can view the complete list of configuration parameters in the SDK docs.\n\nThe whylogs Data Validator\n\nThe whylogs Data Validator implements the same interface as do all Data Validators, so this method forces you to maintain some level of compatibility with the overall Data Validator abstraction, which guarantees an easier migration in case you decide to switch to another Data Validator.',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
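
Since the model was trained with MatryoshkaLoss at the 384/256/128/64 dimensions evaluated below, its embeddings can also be truncated to a smaller size with only a modest quality trade-off. A minimal sketch, reusing the `sentences` list above and assuming a sentence-transformers version that supports `truncate_dim` and `model.similarity` (>= 3.0, as pinned in this card):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional vectors; any of the
# Matryoshka dimensions this card reports (384, 256, 128, 64) can be used.
model = SentenceTransformer(
    "zenml/finetuned-snowflake-arctic-embed-m-v1.5",
    truncate_dim=256,
)
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 256)

# Pairwise cosine similarities between all inputs
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
```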
 
 * Dataset: `dim_384`
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

+ | Metric | Value |
+ |:--------------------|:--------|
+ | cosine_accuracy@1   | 1.0     |
+ | cosine_accuracy@3   | 1.0     |
+ | cosine_accuracy@5   | 1.0     |
+ | cosine_accuracy@10  | 1.0     |
+ | cosine_precision@1  | 1.0     |
+ | cosine_precision@3  | 0.3333  |
+ | cosine_precision@5  | 0.2     |
+ | cosine_precision@10 | 0.1     |
+ | cosine_recall@1     | 1.0     |
+ | cosine_recall@3     | 1.0     |
+ | cosine_recall@5     | 1.0     |
+ | cosine_recall@10    | 1.0     |
+ | cosine_ndcg@10      | 1.0     |
+ | cosine_mrr@10       | 1.0     |
+ | **cosine_map@100**  | **1.0** |

 #### Information Retrieval
 * Dataset: `dim_256`
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

+ | Metric              | Value   |
+ |:--------------------|:--------|
+ | cosine_accuracy@1   | 1.0     |
+ | cosine_accuracy@3   | 1.0     |
+ | cosine_accuracy@5   | 1.0     |
+ | cosine_accuracy@10  | 1.0     |
+ | cosine_precision@1  | 1.0     |
+ | cosine_precision@3  | 0.3333  |
+ | cosine_precision@5  | 0.2     |
+ | cosine_precision@10 | 0.1     |
+ | cosine_recall@1     | 1.0     |
+ | cosine_recall@3     | 1.0     |
+ | cosine_recall@5     | 1.0     |
+ | cosine_recall@10    | 1.0     |
+ | cosine_ndcg@10      | 1.0     |
+ | cosine_mrr@10       | 1.0     |
+ | **cosine_map@100**  | **1.0** |

 #### Information Retrieval
 * Dataset: `dim_128`
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

+ | Metric              | Value   |
+ |:--------------------|:--------|
+ | cosine_accuracy@1   | 1.0     |
+ | cosine_accuracy@3   | 1.0     |
+ | cosine_accuracy@5   | 1.0     |
+ | cosine_accuracy@10  | 1.0     |
+ | cosine_precision@1  | 1.0     |
+ | cosine_precision@3  | 0.3333  |
+ | cosine_precision@5  | 0.2     |
+ | cosine_precision@10 | 0.1     |
+ | cosine_recall@1     | 1.0     |
+ | cosine_recall@3     | 1.0     |
+ | cosine_recall@5     | 1.0     |
+ | cosine_recall@10    | 1.0     |
+ | cosine_ndcg@10      | 1.0     |
+ | cosine_mrr@10       | 1.0     |
+ | **cosine_map@100**  | **1.0** |

 #### Information Retrieval
 * Dataset: `dim_64`
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

+ | Metric              | Value     |
+ |:--------------------|:----------|
+ | cosine_accuracy@1   | 0.75      |
+ | cosine_accuracy@3   | 1.0       |
+ | cosine_accuracy@5   | 1.0       |
+ | cosine_accuracy@10  | 1.0       |
+ | cosine_precision@1  | 0.75      |
+ | cosine_precision@3  | 0.3333    |
+ | cosine_precision@5  | 0.2       |
+ | cosine_precision@10 | 0.1       |
+ | cosine_recall@1     | 0.75      |
+ | cosine_recall@3     | 1.0       |
+ | cosine_recall@5     | 1.0       |
+ | cosine_recall@10    | 1.0       |
+ | cosine_ndcg@10      | 0.9077    |
+ | cosine_mrr@10       | 0.875     |
+ | **cosine_map@100**  | **0.875** |
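
The tables above are InformationRetrievalEvaluator runs at each truncation dimension. A hedged sketch of reproducing such an evaluation on your own data (the `queries`, `corpus`, and `relevant_docs` mappings below are illustrative placeholders, not the actual evaluation set used for this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Illustrative placeholder data; the evaluation set used for this card is not published.
queries = {"q1": "How do I finetune embeddings using Sentence Transformers in ZenML?"}
corpus = {
    "d1": "Finetuning embeddings with Sentence Transformers ...",
    "d2": "Whylogs: collect and visualize statistics ...",
}
relevant_docs = {"q1": {"d1"}}

# Truncate to 64 dimensions to mirror the dim_64 section above.
model = SentenceTransformer(
    "zenml/finetuned-snowflake-arctic-embed-m-v1.5", truncate_dim=64
)
evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="dim_64"
)
results = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(results)
```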
  <!--
  ## Bias, Risks and Limitations
 
  ### Training Dataset
+ #### json
 
+ * Dataset: json
+ * Size: 36 training samples
  * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 36 samples:
+ |         | positive | anchor |
+ |:--------|:---------|:-------|
+ | type    | string   | string |
+ | details | <ul><li>min: 13 tokens</li><li>mean: 23.92 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 321.11 tokens</li><li>max: 512 tokens</li></ul> |
  * Samples:
+ | positive | anchor |
+ |:---------|:-------|
+ | <code>How does ZenML integrate Spark step operators for executing individual steps?</code> | <code>Spark<br><br>Executing individual steps on Spark<br><br>The spark integration brings two different step operators:<br><br>Step Operator: The SparkStepOperator serves as the base class for all the Spark-related step operators.<br><br>Step Operator: The KubernetesSparkStepOperator is responsible for launching ZenML steps as Spark applications with Kubernetes as a cluster manager.<br><br>Step Operators: SparkStepOperator<br><br>A summarized version of the implementation can be summarized in two parts. First, the configuration:<br><br>from typing import Optional, Dict, Any<br>from zenml.step_operators import BaseStepOperatorConfig<br><br>class SparkStepOperatorConfig(BaseStepOperatorConfig):<br> """Spark step operator config.<br><br>Attributes:<br> master: is the master URL for the cluster. You might see different<br> schemes for different cluster managers which are supported by Spark<br> like Mesos, YARN, or Kubernetes. Within the context of this PR,<br> the implementation supports Kubernetes as a cluster manager.<br> deploy_mode: can either be 'cluster' (default) or 'client' and it<br> decides where the driver node of the application will run.<br> submit_kwargs: is the JSON string of a dict, which will be used<br> to define additional params if required (Spark has quite a<br> lot of different parameters, so including them, all in the step<br> operator was not implemented).<br> """<br><br>master: str<br> deploy_mode: str = "cluster"<br> submit_kwargs: Optional[Dict[str, Any]] = None<br><br>and then the implementation:<br><br>from typing import List<br>from pyspark.conf import SparkConf<br><br>from zenml.step_operators import BaseStepOperator<br><br>class SparkStepOperator(BaseStepOperator):<br> """Base class for all Spark-related step operators."""<br><br>def _resource_configuration(<br> self,<br> spark_config: SparkConf,<br> resource_configuration: "ResourceSettings",<br> ) -> None:<br> """Configures Spark to handle the resource configuration."""</code> |
+ | <code>How can ZenML be used to finetune LLMs for specific tasks or to improve performance and cost?</code> | <code>Finetuning LLMs with ZenML<br><br>Finetune LLMs for specific tasks or to improve performance and cost.<br><br>PreviousEvaluating finetuned embeddingsNextSet up a project repository<br><br>Last updated 6 months ago</code> |
+ | <code>How can I develop a custom model deployer in ZenML for efficient deployment and management of machine-learning models?</code> | <code>Develop a Custom Model Deployer<br><br>Learning how to develop a custom model deployer.<br><br>Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.<br><br>To deploy and manage your trained machine-learning models, ZenML provides a stack component called Model Deployer. This component is responsible for interacting with the deployment tool, framework, or platform.<br><br>When present in a stack, the model deployer can also act as a registry for models that are served with ZenML. You can use the model deployer to list all models that are currently deployed for online inference or filtered according to a particular pipeline run or step, or to suspend, resume or delete an external model server managed through ZenML.<br><br>Base Abstraction<br><br>In ZenML, the base abstraction of the model deployer is built on top of three major criteria:<br><br>It needs to ensure efficient deployment and management of models in accordance with the specific requirements of the serving infrastructure, by holding all the stack-related configuration attributes required to interact with the remote model serving tool, service, or platform.<br><br>It needs to implement the continuous deployment logic necessary to deploy models in a way that updates an existing model server that is already serving a previous version of the same model instead of creating a new model server for every new model version (see the deploy_model abstract method). This functionality can be consumed directly from ZenML pipeline steps, but it can also be used outside the pipeline to deploy ad-hoc models. It is also usually coupled with a standard model deployer step, implemented by each integration, that hides the details of the deployment process from the user.</code> |
  * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
 
 - `dataloader_num_workers`: 0
 - `dataloader_prefetch_factor`: None
 - `past_index`: -1
+ - `disable_tqdm`: False
 - `remove_unused_columns`: True
 - `label_names`: None
 - `load_best_model_at_end`: True
 
  </details>
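
For orientation, here is a minimal sketch of how a small (anchor, positive) dataset like the one above can be finetuned with MatryoshkaLoss wrapped around MultipleNegativesRankingLoss. The `train.json` file name is a hypothetical placeholder, and the dimension list is inferred from the evaluation sections of this card rather than copied from the exact training script:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Hypothetical local file with "anchor" and "positive" columns; column order
# matters for this loss (anchor first, then positive).
train_dataset = load_dataset("json", data_files="train.json", split="train")

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")

# MultipleNegativesRankingLoss treats the other in-batch positives as negatives;
# MatryoshkaLoss applies it again at each truncated embedding dimension.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[384, 256, 128, 64])

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```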
  ### Training Logs
+ | Epoch   | Step  | dim_384_cosine_map@100 | dim_256_cosine_map@100 | dim_128_cosine_map@100 | dim_64_cosine_map@100 |
+ |:-------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
+ | 1.0     | 1     | 1.0                    | 1.0                    | 0.875                  | 0.875                 |
+ | **2.0** | **3** | **1.0**                | **1.0**                | **1.0**                | **0.875**             |
+ | 3.0     | 4     | 1.0                    | 1.0                    | 1.0                    | 0.875                 |

  * The bold row denotes the saved checkpoint.
  ### Framework Versions
+ - Python: 3.11.10
+ - Sentence Transformers: 3.2.1
+ - Transformers: 4.43.1
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.0.1
+ - Datasets: 3.0.2
  - Tokenizers: 0.19.1
 
  ## Citation
 
  #### MatryoshkaLoss
  ```bibtex
  @misc{kusupati2024matryoshka,
+ title={Matryoshka Representation Learning},
 author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
 year={2024},
  eprint={2205.13147},
 
  #### MultipleNegativesRankingLoss
  ```bibtex
  @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
 author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
 year={2017},
  eprint={1705.00652},
config.json CHANGED
@@ -19,7 +19,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
-  "transformers_version": "4.44.0",
+  "transformers_version": "4.43.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
config_sentence_transformers.json CHANGED
@@ -1,8 +1,8 @@
 {
   "__version__": {
-    "sentence_transformers": "3.0.1",
-    "transformers": "4.44.0",
-    "pytorch": "2.5.0+cu124"
+    "sentence_transformers": "3.2.1",
+    "transformers": "4.43.1",
+    "pytorch": "2.5.1+cu124"
   },
   "prompts": {
     "query": "Represent this sentence for searching relevant passages: "
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6b8c58ad12b1547c240e2803ab66e9f6a8fa7baaa022ce3c0336bc103cdbb96f
+oid sha256:24b8936c919b841952b05d08c369ac02d906ad7f6e4471b4141ddb8e9799e622
 size 435588776