====== Perplexity statistics ======
Mean PPL(Q)                   :  26.027373 ±   0.255226
Mean PPL(base)                :  24.931431 ±   0.241228
Cor(ln(PPL(Q)), ln(PPL(base))):  98.91%
Mean ln(PPL(Q)/PPL(base))     :   0.043020 ±   0.001445
Mean PPL(Q)/PPL(base)         :   1.043958 ±   0.001509
Mean PPL(Q)-PPL(base)         :   1.095943 ±   0.039245

====== KL divergence statistics ======
Mean    KLD:   0.080142 ±   0.000319
Maximum KLD:   6.251925
99.9%   KLD:   1.256165
99.0%   KLD:   0.545236
Median  KLD:   0.043817
10.0%   KLD:   0.000973
 5.0%   KLD:   0.000158
 1.0%   KLD:   0.000006
Minimum KLD:  -0.000183

====== Token probability statistics ======
Mean    Δp: -0.308 ± 0.018 %
Maximum Δp:  90.671%
99.9%   Δp:  40.078%
99.0%   Δp:  20.806%
95.0%   Δp:   9.213%
90.0%   Δp:   4.792%
75.0%   Δp:   0.558%
Median  Δp:  -0.000%
25.0%   Δp:  -0.862%
10.0%   Δp:  -5.664%
 5.0%   Δp: -10.659%
 1.0%   Δp: -25.039%
 0.1%   Δp: -47.819%
Minimum Δp: -89.317%
RMS Δp    :   6.946 ± 0.038 %
Same top p:  86.343 ± 0.089 %

llama_perf_context_print:        load time =    1821.07 ms
llama_perf_context_print: prompt eval time = 1019684.89 ms / 299008 tokens (    3.41 ms per token,   293.24 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time = 1045149.32 ms / 299009 tokens
ggml_metal_free: deallocating
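
As context for the numbers above, here is a minimal NumPy sketch of how statistics of this kind can be derived from the per-token probability distributions of the base and quantized models. It is not llama.cpp's implementation: the function and argument names are illustrative, and it assumes Δp is the change (quantized minus base) in the probability assigned to the token that actually occurs next in the evaluation text.

import numpy as np

def kl_and_delta_p_stats(p_base, p_q, observed):
    """Illustrative per-token comparison of a base and a quantized model.

    p_base, p_q : (n_tokens, n_vocab) arrays of softmax probabilities.
    observed    : (n_tokens,) ids of the tokens that actually occur next.
    """
    eps = 1e-12  # guard against log(0)

    # Token-level KL divergence KL(base || quantized), one value per position.
    kld = np.sum(p_base * (np.log(p_base + eps) - np.log(p_q + eps)), axis=1)

    # Δp: change in the probability of the observed token, in percent.
    rows = np.arange(len(observed))
    delta_p = 100.0 * (p_q[rows, observed] - p_base[rows, observed])

    # "Same top p": share of positions where both models agree on the
    # most likely next token.
    same_top = 100.0 * np.mean(p_q.argmax(axis=1) == p_base.argmax(axis=1))

    return {
        "Mean KLD":       kld.mean(),
        "Median KLD":     np.median(kld),
        "99.0% KLD":      np.percentile(kld, 99.0),
        "Mean Δp (%)":    delta_p.mean(),
        "RMS Δp (%)":     np.sqrt(np.mean(delta_p ** 2)),
        "Same top p (%)": same_top,
    }

Under this reading, a mean KLD of 0.080, an RMS Δp of about 7%, and top-token agreement of about 86% together describe how far the quantized model's token distributions drift from the base model's, independently of the roughly 4.4% perplexity increase reported in the first block.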