Scheduled Commit
data/retrieval_individual-3403941f-f995-496f-9668-ac196a27ebc6.jsonl
@@ -388,3 +388,5 @@
{"tstamp": 1741865535.288, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1741865532.9896, "finish": 1741865535.288, "ip": "", "conv_id": "1647d00edccb4970894e655930bf5713", "model_name": "voyage-multilingual-2", "prompt": "I am looking for a paper that introduces an adaptive mechanism for learning attention span in Transformers.", "output": [["I am looking for a paper that introduces an adaptive mechanism for learning attention span in Transformers.", "Title: Adaptive Attention Span in Transformers\n\nAbstract: We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on text8 and enwiki8 by using a maximum context of 8k characters."]], "corpus": "arxiv"}
{"tstamp": 1741866540.6351, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1741866540.4244, "finish": 1741866540.6351, "ip": "", "conv_id": "54ee482a203541ee9da92dad802e865a", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Federated learning language model focusing on minimizing the weighted distance between server and client models.", "output": [["Federated learning language model focusing on minimizing the weighted distance between server and client models.", "Title: FeDeRA:Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition\n\nAbstract: Despite their exceptional performance on various tasks after fine-tuning, pre-trained language models (PLMs) face significant challenges due to growing privacy concerns with data in centralized training methods. We consider federated learning (FL) to fine-tune PLMs in this paper. However, the substantial number of parameters in PLMs poses significant difficulties for client devices with limited communication and computational resources. One promising solution is to exploit parameter-efficient fine-tuning (PEFT) into FL, which trains a much smaller set of parameters than full parameter fine-tuning (FFT). Although remarkably improving training efficiency, PEFT methods may lead to degraded performance especially when data across different clients are non i.i.d, as revealed by experimental results. To overcome this, we propose FeDeRA, which extends and improves a widely used PEFT method, i.e., low-rank adaption (LoRA). FeDeRA follows LoRA by decomposing the weight matrices of the PLMs into low-rank matrices, which allows for more efficient computation and parameter updates during fine-tuning. Different from LoRA which simply initializes these low-rank matrices by random sampling or zeros, the proposed FeDeRA initializes these matrices by the results of performing singular value decomposition (SVD) on the pre-trained weight matrices. Extensive experiments across various tasks and datasets show that FeDeRA outperforms the considered PEFT baselines and is comparable to or even surpasses FFT method within the FL setting in terms of task performance. Moreover, FeDeRA requires only 1% trainable paramentes compared to FFT, significantly reducing training time costs by more than 90% to achieve the same task performance level. The experimental results also highlight the robustness of FeDeRA against data heterogeneity, as it maintains stable task performance even as data heterogeneity increases."]], "corpus": "arxiv"}
{"tstamp": 1741866540.6351, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1741866540.4244, "finish": 1741866540.6351, "ip": "", "conv_id": "c8286ec14637478388ace78dc106ade8", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Federated learning language model focusing on minimizing the weighted distance between server and client models.", "output": [["Federated learning language model focusing on minimizing the weighted distance between server and client models.", "Title: Conquering the Communication Constraints to Enable Large Pre-Trained Models in Federated Learning\n\nAbstract: Federated learning (FL) has emerged as a promising paradigm for enabling the collaborative training of models without centralized access to the raw data on local devices. In the typical FL paradigm (e.g., FedAvg), model weights are sent to and from the server each round to participating clients. Recently, the use of small pre-trained models has been shown effective in federated learning optimization and improving convergence. However, recent state-of-the-art pre-trained models are getting more capable but also have more parameters. In conventional FL, sharing the enormous model weights can quickly put a massive communication burden on the system, especially if more capable models are employed. Can we find a solution to enable those strong and readily-available pre-trained models in FL to achieve excellent performance while simultaneously reducing the communication burden? To this end, we investigate the use of parameter-efficient fine-tuning in federated learning and thus introduce a new framework: FedPEFT. Specifically, we systemically evaluate the performance of FedPEFT across a variety of client stability, data distribution, and differential privacy settings. By only locally tuning and globally sharing a small portion of the model weights, significant reductions in the total communication overhead can be achieved while maintaining competitive or even better performance in a wide range of federated learning scenarios, providing insight into a new paradigm for practical and effective federated systems."]], "corpus": "arxiv"}
+{"tstamp": 1741867085.5193, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1741867085.3503, "finish": 1741867085.5193, "ip": "", "conv_id": "9a59d4fc2d604cc2899080870ba1e4f0", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "a barrier that runs across a river or stream to control the flow of water", "output": [["a barrier that runs across a river or stream to control the flow of water", "Floodgate\n\nFloodgates, also called stop gates, are adjustable gates used to control water flow in flood barriers, reservoir, river, stream, or levee systems. They may be designed to set spillway crest heights in dams, to adjust flow rates in sluices and canals, or they may be designed to stop water flow entirely as part of a levee or storm surge system. Since most of these devices operate by controlling the water surface elevation being stored or routed, they are also known as crest gates. In the case of flood bypass systems, floodgates sometimes are also used to lower the water levels in a main river or canal channels by allowing more water to flow into a flood bypass or detention basin when the main river or canal is approaching a flood stage.\nTypes"]], "corpus": "wikipedia"}
+{"tstamp": 1741867085.5193, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1741867085.3503, "finish": 1741867085.5193, "ip": "", "conv_id": "7f9b24aea98343818a5eb1f90a5da8a5", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "a barrier that runs across a river or stream to control the flow of water", "output": [["a barrier that runs across a river or stream to control the flow of water", "Weir\n\nA weir or low-head dam is a barrier across the width of a river that alters the flow characteristics of water and usually results in a change in the height of the river level. Weirs are also used to control the flow of water for outlets of lakes, ponds, and reservoirs. There are many weir designs, but commonly water flows freely over the top of the weir crest before cascading down to a lower level. There is no single definition as to what constitutes a weir.\nWeir can also refer to the skimmer found in most in-ground swimming pools, which controls the flow of water pulled into the filtering system.\nEtymology\nThe word likely originated from Middle English were, Old English wer, a derivative of the root of the verb werian, meaning \"to defend, dam\". The German cognate is Wehr, which means the same as English weir.\nFunction\nCommonly, weirs are used to prevent flooding, measure water discharge, and help render rivers more navigable by boat. In some locations, the terms dam and weir are synonymous, but normally there is a clear distinction made between the structures. Usually, a dam is designed specifically to impound water behind a wall, whilst a weir is designed to alter the river flow characteristics."]], "corpus": "wikipedia"}