emin temiz PRO

-= DeepSeek V3 =-

After installing the new CUDA toolkit and recompiling llama.cpp, I tested DeepSeek V3 yesterday.

In terms of human alignment, DeepSeek V3 did worse than DeepSeek 2.5 on:
- health
- fasting
- nostr
- misinfo
- nutrition

and did better on:
- faith
- bitcoin
- alternative medicine
- ancient wisdom

In my opinion it is worse overall than 2.5, and 2.5 wasn't that great.

There is a general tendency for models to get smarter while at the same time becoming less wise, less human aligned, less beneficial to humans.

I don't know what is causing this. But maybe the use of synthetic datasets for further training LLMs makes them more and more detached from humanity. This is not going in the right direction.

My solution is to set up a curator council to determine the datasets that are closest to human preference. "Humans who care about other humans the most" could be a definition of this dataset. What do you think?
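One way the curator-council idea could be sketched in code is as weighted sampling over data sources, where curated "human-caring" sources get higher sampling weights than synthetic ones. The source names and weight values below are hypothetical, purely for illustration:

```python
import random

# Hypothetical sampling weights assigned by a curator council.
# Higher weight = more likely to be drawn into the training mix.
source_weights = {
    "curated_human_care": 5.0,  # texts by people who care about other humans
    "general_web": 1.0,
    "synthetic": 0.2,           # down-weight model-generated text
}

def sample_sources(n, weights, seed=None):
    """Draw n training-data source labels according to the curators' weights."""
    rng = random.Random(seed)
    names = list(weights)
    return rng.choices(names, weights=[weights[s] for s in names], k=n)

mix = sample_sources(10_000, source_weights, seed=42)
# With these weights, curated sources dominate the training mix.
print(mix.count("curated_human_care") > mix.count("synthetic"))  # True
```

The actual curation decision (which sources count as "human-caring") would of course be made by the council, not by code; this only shows how their weights could shape the mix.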
Going by the theory that the wisest people who care about other people should go into an LLM with higher weights, to make it more caring / human aligned:

Who cares about humanity the most? Let's add that wisdom into an LLM. Then the robots will think that way, be friendly to humans, and even save humans.

I'll go first: Eric Berg is a doctor on YouTube who is saving millions of lives. A very good candidate to be included and emphasized.

Who are your people? Let's come up with a list of "beneficial humans".
