---
language:
  - en
license: apache-2.0
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:36
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
widget:
  - source_sentence: Where can I find older versions of the ZenML documentation?
    sentences:
      - >-
        gister flavors.my_flavor.MyExperimentTrackerFlavorZenML resolves the
        flavor class by taking the path where you initialized zenml (via zenml
        init) as the starting point of resolution. Therefore, please ensure you
        follow the best practice of initializing zenml at the root of your
        repository.


        If ZenML does not find an initialized ZenML repository in any parent
        directory, it will default to the current working directory, but
        usually, it's better to not have to rely on this mechanism and
        initialize zenml at the root.


        Afterward, you should see the new flavor in the list of available
        flavors:


        zenml experiment-tracker flavor list


        It is important to draw attention to when and how these base
        abstractions are coming into play in a ZenML workflow.


        The CustomExperimentTrackerFlavor class is imported and utilized upon
        the creation of the custom flavor through the CLI.


        The CustomExperimentTrackerConfig class is imported when someone tries
        to register/update a stack component with this custom flavor.
        Especially, during the registration process of the stack component, the
        config will be used to validate the values given by the user. As Config
        objects are inherently pydantic objects, you can also add your own
        custom validators here.


        The CustomExperimentTracker only comes into play when the component is
        ultimately in use.


        The design behind this interaction lets us separate the configuration of
        the flavor from its implementation. This way we can register flavors and
        components even when the major dependencies behind their implementation
        are not installed in our local setting (assuming the
        CustomExperimentTrackerFlavor and the CustomExperimentTrackerConfig are
        implemented in a different module/path than the actual
        CustomExperimentTracker).


        PreviousWeights & BiasesNextModel Deployers


        Last updated 21 days ago
      - |-
        ZenML - Bridging the gap between ML & Ops

        Legacy Docs

        Bleeding EdgeLegacy Docs0.67.0

        🧙‍♂️Find older version our docs

        Powered by GitBook
      - |-
        ZenML - Bridging the gap between ML & Ops

        Legacy Docs

        Bleeding EdgeLegacy Docs0.67.0

        🧙‍♂️Find older version our docs

        Powered by GitBook
  - source_sentence: Where can I find older versions of the ZenML documentation?
    sentences:
      - >-
        Whylogs


        How to collect and visualize statistics to track changes in your
        pipelines' data with whylogs/WhyLabs profiling.


        The whylogs/WhyLabs Data Validator flavor provided with the ZenML
        integration uses whylogs and WhyLabs to generate and track data
        profiles, highly accurate descriptive representations of your data. The
        profiles can be used to implement automated corrective actions in your
        pipelines, or to render interactive representations for further visual
        interpretation, evaluation and documentation.


        When would you want to use it?


        Whylogs is an open-source library that analyzes your data and creates
        statistical summaries called whylogs profiles. Whylogs profiles can be
        processed in your pipelines and visualized locally or uploaded to the
        WhyLabs platform, where more in depth analysis can be carried out. Even
        though whylogs also supports other data types, the ZenML whylogs
        integration currently only works with tabular data in pandas.DataFrame
        format.


        You should use the whylogs/WhyLabs Data Validator when you need the
        following data validation features that are possible with whylogs and
        WhyLabs:


        Data Quality: validate data quality in model inputs or in a data
        pipeline


        Data Drift: detect data drift in model input features


        Model Drift: Detect training-serving skew, concept drift, and model
        performance degradation


        You should consider one of the other Data Validator flavors if you need
        a different set of data validation features.


        How do you deploy it?


        The whylogs Data Validator flavor is included in the whylogs ZenML
        integration, you need to install it on your local machine to be able to
        register a whylogs Data Validator and add it to your stack:


        zenml integration install whylogs -y


        If you don't need to connect to the WhyLabs platform to upload and store
        the generated whylogs data profiles, the Data Validator stack component
        does not require any configuration parameters. Adding it to a stack is
        as simple as running e.g.:
      - |-
        ZenML - Bridging the gap between ML & Ops

        Legacy Docs

        Bleeding EdgeLegacy Docs0.67.0

        🧙‍♂️Find older version our docs

        Powered by GitBook
      - |-
        ZenML - Bridging the gap between ML & Ops

        Legacy Docs

        Bleeding EdgeLegacy Docs0.67.0

        🧙‍♂️Find older version our docs

        Powered by GitBook
  - source_sentence: >-
      How can I install ZenML with support for a local dashboard, and what
      precautions should I take when installing on a Mac with Apple Silicon?
    sentences:
      - |-
        Finetuning LLMs with ZenML

        Finetune LLMs for specific tasks or to improve performance and cost.

        PreviousEvaluating finetuned embeddingsNextSet up a project repository

        Last updated 6 months ago
      - >-
        🧙Installation


        Installing ZenML and getting started.


        ZenML is a Python package that can be installed directly via pip:


        pip install zenml


        Note that ZenML currently supports Python 3.8, 3.9, 3.10, and 3.11.
        Please make sure that you are using a supported Python version.


        Install with the dashboard


        ZenML comes bundled with a web dashboard that lives inside a sister
        repository. In order to get access to the dashboard locally, you need to
        launch the ZenML Server and Dashboard locally. For this, you need to
        install the optional dependencies for the ZenML Server:


        pip install "zenml[server]"


        We highly encourage you to install ZenML in a virtual environment. At
        ZenML, We like to use virtualenvwrapper or pyenv-virtualenv to manage
        our Python virtual environments.


        Installing onto MacOS with Apple Silicon (M1, M2)


        A change in how forking works on Macs running on Apple Silicon means
        that you should set the following environment variable which will ensure
        that your connections to the server remain unbroken:


        export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES


        You can read more about this here. This environment variable is needed
        if you are working with a local server on your Mac, but if you're just
        using ZenML as a client / CLI and connecting to a deployed server then
        you don't need to set it.


        Nightly builds


        ZenML also publishes nightly builds under the zenml-nightly package
        name. These are built from the latest develop branch (to which work
        ready for release is published) and are not guaranteed to be stable. To
        install the nightly build, run:


        pip install zenml-nightly


        Verifying installations


        Once the installation is completed, you can check whether the
        installation was successful either through Bash:


        zenml version


        or through Python:


        import zenml


        print(zenml.__version__)


        If you would like to learn more about the current release, please visit
        our PyPi package page.


        Running with Docker
      - >-
        se you decide to switch to another Data Validator.All you have to do is
        call the whylogs Data Validator methods when you need to interact with
        whylogs to generate data profiles. You may optionally enable whylabs
        logging to automatically upload the returned whylogs profile to WhyLabs,
        e.g.:


        import pandas as pd

        from whylogs.core import DatasetProfileView

        from zenml.integrations.whylogs.data_validators.whylogs_data_validator
        import (
            WhylogsDataValidator,
        )

        from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor
        import (
            WhylogsDataValidatorSettings,
        )

        from zenml import step


        whylogs_settings = WhylogsDataValidatorSettings(
            enable_whylabs=True, dataset_id="<WHYLABS_DATASET_ID>"
        )


        @step(
            settings={
                "data_validator": whylogs_settings
            }
        )

        def data_profiler(
                dataset: pd.DataFrame,
        ) -> DatasetProfileView:
            """Custom data profiler step with whylogs

        Args:
                dataset: a Pandas DataFrame

        Returns:
                Whylogs profile generated for the data
            """

        # validation pre-processing (e.g. dataset preparation) can take place
        here


        data_validator = WhylogsDataValidator.get_active_data_validator()
            profile = data_validator.data_profiling(
                dataset,
            )
            # optionally upload the profile to WhyLabs, if WhyLabs credentials are configured
            data_validator.upload_profile_view(profile)

        # validation post-processing (e.g. interpret results, take actions) can
        happen here


        return profile


        Have a look at the complete list of methods and parameters available in
        the WhylogsDataValidator API in the SDK docs.


        Call whylogs directly


        You can use the whylogs library directly in your custom pipeline steps,
        and only leverage ZenML's capability of serializing, versioning and
        storing the DatasetProfileView objects in its Artifact Store. You may
        optionally enable whylabs logging to automatically upload the returned
        whylogs profile to WhyLabs, e.g.:
  - source_sentence: >-
      How can I finetune embeddings using Sentence Transformers as described in
      the ZenML documentation?
    sentences:
      - |-
        Evaluation and metrics

        Track how your RAG pipeline improves using evaluation and metrics.

        PreviousBasic RAG inference pipelineNextEvaluation in 65 lines of code

        Last updated 4 months ago
      - >-
        :
                """Abstract method to deploy a model."""@staticmethod
            @abstractmethod
            def get_model_server_info(
                    service: BaseService,
            ) -> Dict[str, Optional[str]]:
                """Give implementation-specific way to extract relevant model server
                properties for the user."""

        @abstractmethod
            def perform_stop_model(
                self,
                service: BaseService,
                timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
                force: bool = False,
            ) -> BaseService:
                """Abstract method to stop a model server."""

        @abstractmethod
            def perform_start_model(
                self,
                service: BaseService,
                timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
            ) -> BaseService:
                """Abstract method to start a model server."""

        @abstractmethod
            def perform_delete_model(
                self,
                service: BaseService,
                timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
                force: bool = False,
            ) -> None:
                """Abstract method to delete a model server."""

        class BaseModelDeployerFlavor(Flavor):
            """Base class for model deployer flavors."""

        @property
            @abstractmethod
            def name(self):
                """Returns the name of the flavor."""

        @property
            def type(self) -> StackComponentType:
                """Returns the flavor type.

        Returns:
                    The flavor type.
                """
                return StackComponentType.MODEL_DEPLOYER

        @property
            def config_class(self) -> Type[BaseModelDeployerConfig]:
                """Returns `BaseModelDeployerConfig` config class.

        Returns:
                        The config class.
                """
                return BaseModelDeployerConfig

        @property
            @abstractmethod
            def implementation_class(self) -> Type[BaseModelDeployer]:
                """The class that implements the model deployer."""

        This is a slimmed-down version of the base implementation which aims to
        highlight the abstraction layer. In order to see the full implementation
        and get the complete docstrings, please check the SDK docs .


        Building your own model deployers
      - |-
        Finetuning embeddings with Sentence Transformers

        Finetune embeddings with Sentence Transformers.

        PreviousSynthetic data generationNextEvaluating finetuned embeddings

        Last updated 1 month ago
  - source_sentence: >-
      How does ZenML utilize type annotations in step outputs to enhance data
      handling between pipeline steps?
    sentences:
      - >-
        ator which runs Steps with Spark on Kubernetes."""def
        _backend_configuration(
                    self,
                    spark_config: SparkConf,
                    step_config: "StepConfiguration",
            ) -> None:
                """Configures Spark to run on Kubernetes."""
                # Build and push the image
                docker_image_builder = PipelineDockerImageBuilder()
                image_name = docker_image_builder.build_and_push_docker_image(...)

        # Adjust the spark configuration
                spark_config.set("spark.kubernetes.container.image", image_name)
                ...

        For Kubernetes, there are also some additional important configuration
        parameters:


        namespace is the namespace under which the driver and executor pods will
        run.


        service_account is the service account that will be used by various
        Spark components (to create and watch the pods).


        Additionally, the _backend_configuration method is adjusted to handle
        the Kubernetes-specific configuration.


        When to use it


        You should use the Spark step operator:


        when you are dealing with large amounts of data.


        when you are designing a step that can benefit from distributed
        computing paradigms in terms of time and resources.


        How to deploy it


        To use the KubernetesSparkStepOperator you will need to setup a few
        things first:


        Remote ZenML server: See the deployment guide for more information.


        Kubernetes cluster: There are many ways to deploy a Kubernetes cluster
        using different cloud providers or on your custom infrastructure. For
        AWS, you can follow the Spark EKS Setup Guide below.


        Spark EKS Setup Guide


        The following guide will walk you through how to spin up and configure a
        Amazon Elastic Kubernetes Service with Spark on it:


        EKS Kubernetes Cluster


        Follow this guide to create an Amazon EKS cluster role.


        Follow this guide to create an Amazon EC2 node role.


        Go to the IAM website, and select Roles to edit both roles.


        Attach the AmazonRDSFullAccess and AmazonS3FullAccess policies to both
        roles.


        Go to the EKS website.


        Make sure the correct region is selected on the top right.
      - >-
        🗄️Handle Data/Artifacts


        Step outputs in ZenML are stored in the artifact store. This enables
        caching, lineage and auditability. Using type annotations helps with
        transparency, passing data between steps, and serializing/des


        For best results, use type annotations for your outputs. This is good
        coding practice for transparency, helps ZenML handle passing data
        between steps, and also enables ZenML to serialize and deserialize
        (referred to as 'materialize' in ZenML) the data.


        @step

        def load_data(parameter: int) -> Dict[str, Any]:


        # do something with the parameter here


        training_data = [[1, 2], [3, 4], [5, 6]]
            labels = [0, 1, 0]
            return {'features': training_data, 'labels': labels}

        @step

        def train_model(data: Dict[str, Any]) -> None:
            total_features = sum(map(sum, data['features']))
            total_labels = sum(data['labels'])
            
            # Train some model here
            
            print(f"Trained model using {len(data['features'])} data points. "
                  f"Feature sum is {total_features}, label sum is {total_labels}")

        @pipeline  

        def simple_ml_pipeline(parameter: int):
            dataset = load_data(parameter=parameter)  # Get the output 
            train_model(dataset)  # Pipe the previous step output into the downstream step

        In this code, we define two steps: load_data and train_model. The
        load_data step takes an integer parameter and returns a dictionary
        containing training data and labels. The train_model step receives the
        dictionary from load_data, extracts the features and labels, and trains
        a model (not shown here).


        Finally, we define a pipeline simple_ml_pipeline that chains the
        load_data and train_model steps together. The output from load_data is
        passed as input to train_model, demonstrating how data flows between
        steps in a ZenML pipeline.


        PreviousDisable colorful loggingNextHow ZenML stores data


        Last updated 4 months ago
      - >2-
         your GCP Image Builder to the GCP cloud platform.To set up the GCP Image Builder to authenticate to GCP and access the GCP Cloud Build services, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.

        If you don't already have a GCP Service Connector configured in your
        ZenML deployment, you can register one using the interactive CLI
        command. You also have the option to configure a GCP Service Connector
        that can be used to access more than just the GCP Cloud Build service:


        zenml service-connector register --type gcp -i


        A non-interactive CLI example that leverages the Google Cloud CLI
        configuration on your local machine to auto-configure a GCP Service
        Connector for the GCP Cloud Build service:


        zenml service-connector register <CONNECTOR_NAME> --type gcp
        --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME>
        --auto-configure


        Example Command Output


        $ zenml service-connector register gcp-generic --type gcp
        --resource-type gcp-generic --auto-configure

        Successfully registered service connector `gcp-generic` with access to
        the following resources:

        ┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓

        ┃ RESOURCE TYPE  │ RESOURCE NAMES ┃

        ┠────────────────┼────────────────┨

        ┃ 🔵 gcp-generic │ zenml-core     ┃

        ┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛


        Note: Please remember to grant the entity associated with your GCP
        credentials permissions to access the Cloud Build API and to run Cloud
        Builder jobs (e.g. the Cloud Build Editor IAM role). The GCP Service
        Connector supports many different authentication methods with different
        levels of security and convenience. You should pick the one that best
        fits your use case.


        If you already have one or more GCP Service Connectors configured in
        your ZenML deployment, you can check which of them can be used to access
        generic GCP resources like the GCP Image Builder required for your GCP
        Image Builder by running e.g.:
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: zenml/finetuned-snowflake-arctic-embed-m-v1.5
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 384
          type: dim_384
        metrics:
          - type: cosine_accuracy@1
            value: 0.75
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.75
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.2
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.1
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.75
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.875
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8333333333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8333333333333334
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.75
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.75
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.2
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.1
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.75
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.875
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8333333333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8333333333333334
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.75
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.75
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.75
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.25
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.2
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.1
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.75
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.75
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8576691395183482
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8125
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8125
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.75
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.75
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.2
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.1
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.75
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.875
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8333333333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8333333333333334
            name: Cosine Map@100
---

zenml/finetuned-snowflake-arctic-embed-m-v1.5

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
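
The module stack above (CLS-token pooling followed by L2 normalization) can be reproduced with the raw transformers API. A minimal sketch, assuming only the model id from this card; in practice, loading via SentenceTransformer as shown under Usage below is equivalent and simpler:

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "zenml/finetuned-snowflake-arctic-embed-m-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)  # (0): Transformer (BertModel)

batch = tokenizer(["An example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state           # (batch, seq_len, 768)
cls = hidden[:, 0]                                     # (1): Pooling with pooling_mode_cls_token=True
embedding = torch.nn.functional.normalize(cls, dim=1)  # (2): Normalize()
print(embedding.shape)  # torch.Size([1, 768])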

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
# Run inference
sentences = [
    'How does ZenML utilize type annotations in step outputs to enhance data handling between pipeline steps?',
    '🗄️Handle Data/Artifacts\n\nStep outputs in ZenML are stored in the artifact store. This enables caching, lineage and auditability. Using type annotations helps with transparency, passing data between steps, and serializing/des\n\nFor best results, use type annotations for your outputs. This is good coding practice for transparency, helps ZenML handle passing data between steps, and also enables ZenML to serialize and deserialize (referred to as \'materialize\' in ZenML) the data.\n\n@step\ndef load_data(parameter: int) -> Dict[str, Any]:\n\n# do something with the parameter here\n\ntraining_data = [[1, 2], [3, 4], [5, 6]]\n    labels = [0, 1, 0]\n    return {\'features\': training_data, \'labels\': labels}\n\n@step\ndef train_model(data: Dict[str, Any]) -> None:\n    total_features = sum(map(sum, data[\'features\']))\n    total_labels = sum(data[\'labels\'])\n    \n    # Train some model here\n    \n    print(f"Trained model using {len(data[\'features\'])} data points. "\n          f"Feature sum is {total_features}, label sum is {total_labels}")\n\n@pipeline  \ndef simple_ml_pipeline(parameter: int):\n    dataset = load_data(parameter=parameter)  # Get the output \n    train_model(dataset)  # Pipe the previous step output into the downstream step\n\nIn this code, we define two steps: load_data and train_model. The load_data step takes an integer parameter and returns a dictionary containing training data and labels. The train_model step receives the dictionary from load_data, extracts the features and labels, and trains a model (not shown here).\n\nFinally, we define a pipeline simple_ml_pipeline that chains the load_data and train_model steps together. The output from load_data is passed as input to train_model, demonstrating how data flows between steps in a ZenML pipeline.\n\nPreviousDisable colorful loggingNextHow ZenML stores data\n\nLast updated 4 months ago',
    " your GCP Image Builder to the GCP cloud platform.To set up the GCP Image Builder to authenticate to GCP and access the GCP Cloud Build services, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.\n\nIf you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You also have the option to configure a GCP Service Connector that can be used to access more than just the GCP Cloud Build service:\n\nzenml service-connector register --type gcp -i\n\nA non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector for the GCP Cloud Build service:\n\nzenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME> --auto-configure\n\nExample Command Output\n\n$ zenml service-connector register gcp-generic --type gcp --resource-type gcp-generic --auto-configure\nSuccessfully registered service connector `gcp-generic` with access to the following resources:\n┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n┃ RESOURCE TYPE  │ RESOURCE NAMES ┃\n┠────────────────┼────────────────┨\n┃ 🔵 gcp-generic │ zenml-core     ┃\n┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\nNote: Please remember to grant the entity associated with your GCP credentials permissions to access the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor IAM role). The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.\n\nIf you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access generic GCP resources like the GCP Image Builder required for your GCP Image Builder by running e.g.:",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
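
Because the model was trained with MatryoshkaLoss at dimensionalities 384, 256, 128, and 64 (see Training Details below), the leading dimensions of each embedding remain useful on their own. A short sketch of shrinking the embeddings via the truncate_dim argument (available in sentence-transformers >= 2.7):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding
model = SentenceTransformer(
    "zenml/finetuned-snowflake-arctic-embed-m-v1.5", truncate_dim=256
)
embeddings = model.encode(
    ["Where can I find older versions of the ZenML documentation?"]
)
print(embeddings.shape)
# (1, 256)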

Evaluation

Metrics

Information Retrieval (dataset: dim_384)

Metric Value
cosine_accuracy@1 0.75
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.75
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.75
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.875
cosine_mrr@10 0.8333
cosine_map@100 0.8333

Information Retrieval (dataset: dim_256)

Metric Value
cosine_accuracy@1 0.75
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.75
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.75
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.875
cosine_mrr@10 0.8333
cosine_map@100 0.8333

Information Retrieval (dataset: dim_128)

Metric Value
cosine_accuracy@1 0.75
cosine_accuracy@3 0.75
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.75
cosine_precision@3 0.25
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.75
cosine_recall@3 0.75
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.8577
cosine_mrr@10 0.8125
cosine_map@100 0.8125

Information Retrieval (dataset: dim_64)

Metric Value
cosine_accuracy@1 0.75
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.75
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.75
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.875
cosine_mrr@10 0.8333
cosine_map@100 0.8333
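
Figures like these can be reproduced with the InformationRetrievalEvaluator that ships with sentence-transformers. A minimal sketch with a hypothetical two-document corpus (the card's actual numbers come from a held-out split of the ZenML documentation chunks):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")

# Toy example: query ids -> text, document ids -> text, and relevance labels
queries = {"q1": "Where can I find older versions of the ZenML documentation?"}
corpus = {
    "d1": "ZenML - Bridging the gap between ML & Ops ...",
    "d2": "Whylogs ... generate and track data profiles ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, NDCG, MRR, MAP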

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 36 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 36 samples:
    • positive: string · min: 13 tokens · mean: 23.14 tokens · max: 38 tokens
    • anchor: string · min: 31 tokens · mean: 311.83 tokens · max: 512 tokens
  • Samples:
    • positive: What are the necessary steps to deploy the KubernetesSparkStepOperator, and what configurations are required for running Spark on Kubernetes?
      anchor: ator which runs Steps with Spark on Kubernetes."""def _backend_configuration(
    self,
    spark_config: SparkConf,
    step_config: "StepConfiguration",
    ) -> None:
    """Configures Spark to run on Kubernetes."""
    # Build and push the image
    docker_image_builder = PipelineDockerImageBuilder()
    image_name = docker_image_builder.build_and_push_docker_image(...)

    # Adjust the spark configuration
    spark_config.set("spark.kubernetes.container.image", image_name)
    ...

    For Kubernetes, there are also some additional important configuration parameters:

    namespace is the namespace under which the driver and executor pods will run.

    service_account is the service account that will be used by various Spark components (to create and watch the pods).

    Additionally, the _backend_configuration method is adjusted to handle the Kubernetes-specific configuration.

    When to use it

    You should use the Spark step operator:

    when you are dealing with large amounts of data.

    when you are designing a step that can benefit from distributed computing paradigms in terms of time and resources.

    How to deploy it

    To use the KubernetesSparkStepOperator you will need to setup a few things first:

    Remote ZenML server: See the deployment guide for more information.

    Kubernetes cluster: There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure. For AWS, you can follow the Spark EKS Setup Guide below.

    Spark EKS Setup Guide

    The following guide will walk you through how to spin up and configure a Amazon Elastic Kubernetes Service with Spark on it:

    EKS Kubernetes Cluster

    Follow this guide to create an Amazon EKS cluster role.

    Follow this guide to create an Amazon EC2 node role.

    Go to the IAM website, and select Roles to edit both roles.

    Attach the AmazonRDSFullAccess and AmazonS3FullAccess policies to both roles.

    Go to the EKS website.

    Make sure the correct region is selected on the top right.
    • positive: How do I set up a GCP Service Connector within ZenML to authenticate and access GCP Cloud Build services?
      anchor: your GCP Image Builder to the GCP cloud platform.To set up the GCP Image Builder to authenticate to GCP and access the GCP Cloud Build services, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.

    If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You also have the option to configure a GCP Service Connector that can be used to access more than just the GCP Cloud Build service:

    zenml service-connector register --type gcp -i

    A non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector for the GCP Cloud Build service:

    zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME> --auto-configure

    Example Command Output

    $ zenml service-connector register gcp-generic --type gcp --resource-type gcp-generic --auto-configure
    Successfully registered service connector gcp-generic with access to the following resources:
    ┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
    ┃ RESOURCE TYPE │ RESOURCE NAMES ┃
    ┠────────────────┼────────────────┨
    ┃ 🔵 gcp-generic │ zenml-core ┃
    ┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛

    Note: Please remember to grant the entity associated with your GCP credentials permissions to access the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor IAM role). The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.

    If you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access generic GCP resources like the GCP Image Builder required for your GCP Image Builder by running e.g.:
    • positive: How do I register and activate a ZenML stack with a new GCP Image Builder while ensuring proper authentication?
      anchor: build to finish. More information: Build Timeout.We can register the image builder and use it in our active stack:

    zenml image-builder register \
        --flavor=gcp \
        --cloud_builder_image= \
        --network= \
        --build_timeout=

    # Register and activate a stack with the new image builder
    zenml stack register -i ... --set

    You also need to set up authentication required to access the Cloud Build GCP services.

    Authentication Methods

    Integrating and using a GCP Image Builder in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the GCP cloud platform is through a GCP Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the GCP Image Builder with other remote stack components also running in GCP.

    This method uses the implicit GCP authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure a GCP Image Builder. You don't need to supply credentials explicitly when you register the GCP Image Builder, as it leverages the local credentials and configuration that the Google Cloud CLI stores on your local machine. However, you will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in the Google Cloud documentation , before you register the GCP Image Builder.

    Stacks using the GCP Image Builder set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use a GCP Service Connector to authenticate your GCP Image Builder to the GCP cloud platform.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
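
A minimal sketch of constructing this loss with the sentence-transformers API, mirroring the parameters above (the base model id is taken from this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")

# MultipleNegativesRankingLoss treats the other in-batch positives as negatives;
# MatryoshkaLoss applies it again at each truncated embedding size.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[384, 256, 128, 64])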
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
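
Expressed with the v3 sentence-transformers training API, the non-default values above correspond roughly to the following (a sketch; output_dir and save_strategy are assumptions not listed in this card):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-snowflake-arctic-embed-m-v1.5",  # assumed
    eval_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    load_best_model_at_end=True,
    save_strategy="epoch",  # assumed: must match eval_strategy when load_best_model_at_end=True
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)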

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: True
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step dim_384_cosine_map@100 dim_256_cosine_map@100 dim_128_cosine_map@100 dim_64_cosine_map@100
1.0 1 0.8333 0.8333 0.8125 0.8333
2.0 3 0.8333 0.8333 0.8125 0.8333
3.0 4 0.8333 0.8333 0.8125 0.8333
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.2.0
  • Transformers: 4.45.2
  • PyTorch: 2.5.0+cu124
  • Accelerate: 1.0.1
  • Datasets: 3.0.1
  • Tokenizers: 0.20.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}