Inference error: The current context does not support K-shift
I was wondering if anyone else has run into an error like this:
"
Llama.generate: 56 prefix-match hit, remaining 126 prompt tokens to eval
Prompt evaluation: 0%| | 0/126 [00:00<?, ?it/s]/home/runner/work/llama-cpp-python-cuBLAS-wheels/llama-cpp-python-cuBLAS-wheels/vendor/llama.cpp/src/llama.cpp:9227: The current context does not support K-shift
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
"
Even though I set the context size very high, I still run into errors like this, typically early in the conversation. Is there a known fix or workaround for this?
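Not a confirmed fix, but since the abort fires when llama.cpp tries to shift the KV cache after the context fills up, one workaround is to trim the chat history yourself so the prompt stays comfortably under `n_ctx` and a K-shift is never needed. A minimal sketch of that idea (the `trim_messages` helper, the `count_tokens` callable, and the budget numbers are my own assumptions, not part of llama-cpp-python's API):

```python
def trim_messages(messages, count_tokens, n_ctx, reserve=256):
    """Drop the oldest non-system messages until the prompt fits.

    messages:     list of {"role": ..., "content": ...} dicts
    count_tokens: callable returning the token count of a string
    n_ctx:        the context size the model was loaded with
    reserve:      tokens to keep free for the generated reply
    """
    budget = n_ctx - reserve
    # Keep system messages pinned; only discard conversation turns.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    # Drop the oldest turns first until everything fits in the budget.
    while rest and total(system + rest) > budget:
        rest.pop(0)
    return system + rest
```

With llama-cpp-python you could plug in the model's own tokenizer, e.g. `count_tokens=lambda s: len(llm.tokenize(s.encode("utf-8")))`, and call this before each `create_chat_completion`. This only avoids the K-shift path; it doesn't explain why your context reports not supporting the shift in the first place.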