Translation [3] - Getting direct download links for Ollama models

Getting download links for Ollama models

🌟 [Open-source gem] A tool for grabbing direct download links for Ollama models, rescuing you from network misery! 🌟

Hey folks~ 👋
Today I want to share a super handy open-source tool that cures all kinds of flaky networking! 💪

Have you ever run into these situations:
🌐 A network connection so bad it drives you up the wall?
💻 DevOps engineers complaining every day that server bandwidth isn't enough?

If that sounds like you, don't worry! AmirrezaDev has built a super convenient open-source app that gets you direct download links for Ollama models, so you can download first and install later. Done! 🎉

🔗 Usage instructions are here: [https://github.com/amirrezaDev1378/ollama-model-direct-download]

Give it a try! 😊 If you have any thoughts or suggestions, feel free to let me know~ and if you need help, just DM me. If you find the tool useful, don't forget to star the repo! ⭐️

Thanks for your support, everyone! 💖

Good idea! My connection is limited, so now I can download models at my local library and then bring them home. 👍

Glad you like it 😊

Does it offer quant options or a choice? The Q4 default is a bit underwhelming.

Sure! You can set the size by adding a tag after the model name, e.g. llama3.1:70b.
If you don't provide that tag, it fetches the default model.
By quant options you mean the model size, right? 😉

Thanks for sharing, it's sure to come in handy
Glad you like it 💕


Bonus: my own additions

Getting the DeepSeek-V3 model links for Ollama

Ollama out-of-the-box version

The tool: [https://github.com/amirrezaDev1378/ollama-model-direct-download.git]
Mirror in China: [https://gitcode.com/QS2002/ollama-model-direct-download/releases/v2.0.0]

# Get the download links
./omdd-linux-amd64 get huihui_ai/deepseek-v3:671b-q2_K
# Output
get direct download link for model : huihui_ai/deepseek-v3:671b-q2_K
Manifest download link: https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/manifests/671b-q2_K
Download links for layers:
1- https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:80a7e241bb3480b254f11ebdc1a426145dc165ff0168d7be84b2f8d13902fd75
2- https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:c5ce92dfece191ff732e0a40245acac9a09ad23ab418875c1ba3bb2e8ce6e97d
3- https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:3a7c2cf04638420056babe5607700fbe8c80b4b2ef568a5c06ad1836809c77a4
4- https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588
5- https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:0b7462323d858e49028a6cedcdba4f198cc9cc013841139880f49ea6926ac009
Generated download links for model :  huihui_ai/deepseek-v3:671b-q2_K finished successfully.
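
On a machine with a working connection, you can fetch these links and then drop the files into Ollama's local store on the offline box. A minimal sketch, assuming the default store layout under ~/.ollama/models (blob files named sha256-<digest>, manifests under manifests/registry.ollama.ai/...); verify this against your Ollama version:

# Fetch the manifest printed above into Ollama's manifest tree.
MODELS=~/.ollama/models
mkdir -p "$MODELS/manifests/registry.ollama.ai/huihui_ai/deepseek-v3" "$MODELS/blobs"
wget -O "$MODELS/manifests/registry.ollama.ai/huihui_ai/deepseek-v3/671b-q2_K" \
  "https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/manifests/671b-q2_K"
# Blob files are named sha256-<digest> (colon replaced by a dash); repeat per layer.
wget -O "$MODELS/blobs/sha256-80a7e241bb3480b254f11ebdc1a426145dc165ff0168d7be84b2f8d13902fd75" \
  "https://registry.ollama.ai/v2/huihui_ai/deepseek-v3/blobs/sha256:80a7e241bb3480b254f11ebdc1a426145dc165ff0168d7be84b2f8d13902fd75"
# ...after all five blobs are in place, `ollama list` should show the model.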

GGUF version (requires editing the Modelfile)

# Another Q2 quantized build
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00001-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00002-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00003-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00004-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00005-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00006-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/DeepSeek-V3-Q2-00007-of-00007.gguf
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/q2/hash.txt
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/.gitattributes
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/configuration.json
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/mergedhash.txt
wget https://modelscope.cn/models/okwinds/DeepSeek-V3-GGUF-V3-LOT/resolve/master/README.md
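
To actually use this GGUF build, here is a hedged sketch of the remaining steps. It assumes the shards are plain byte splits (the repo ships hash.txt and mergedhash.txt for verification); if they are llama.cpp gguf-split shards, skip the cat and point FROM at the first shard instead:

# Merge the shards and compare the checksum against mergedhash.txt.
cat DeepSeek-V3-Q2-0000?-of-00007.gguf > DeepSeek-V3-Q2.gguf
sha256sum DeepSeek-V3-Q2.gguf
# Minimal Modelfile, then import (the model name here is an arbitrary choice).
printf 'FROM ./DeepSeek-V3-Q2.gguf\n' > Modelfile
ollama create deepseek-v3:q2 -f Modelfile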

Installing Ollama offline

[https://www.modelscope.cn/models/modelscope/ollama-linux]

  • Apache License 2.0

This repo provides the Linux installation files for Ollama, along with the matching install script; it can be used in Linux environments including ModelScope Notebook.

# Before using the command line, make sure ModelScope has been installed via pip install modelscope.
pip install modelscope
modelscope download --model=modelscope/ollama-linux --local_dir ./ollama-linux --revision v0.5.7
# 运行ollama安装脚本
cd ollama-linux
sudo chmod 777 ./ollama-modelscope-install.sh
./ollama-modelscope-install.sh
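
A quick sanity check after the script finishes (assuming it put ollama on your PATH):

# Verify the binary and, if no service is running yet, start the server manually.
ollama --version
ollama serve &
sleep 2 && ollama list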

Get Direct Download Links for Ollama Models with My New Open Source App!

Hi there,

Are you struggling with an awful internet connection?
Are DevOps engineers complaining about server bandwidth?
Are you in a country where Ollama registry servers are banned (yes, it's true, some countries have banned Ollama)?

If any of these sound like your situation, I have a solution for you! I've created an open-source app that provides direct download links for Ollama models, allowing you to download them and install them on your machine later.

You can find clear instructions on how to use this tool here: [https://github.com/amirrezaDev1378/ollama-model-direct-download]

Feel free to give it a try! I'd love to hear your thoughts and suggestions. If you need any help, don't hesitate to send me a DM. Also, if you like the tool, I'd really appreciate it if you could give the repository a star.

Thanks!

Good idea. I have a limited connection, so this will make it easier for me to download the models at my local library and then bring them home. 👍

Glad you liked it 🙂

Does it give you quant options, or a choice perhaps? Q4 default is a bit doge

Sure, you can set your size by adding a tag at the end of your model like: llama3.1:70b.
If you don't provide that, it will fetch the default one.
By quant options you meant the model size, right?
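
For example, reusing the binary name from the section above (assuming the same Linux build), an explicit tag request looks like:

# Ask for a specific size/quant tag instead of the registry default.
./omdd-linux-amd64 get llama3.1:70b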

Thanks for sharing. It will be handy at some point I'm sure
Happy you liked it 😃


Generating modelfiles from huggingface repositories

Hello all.

I usually use oobabooga's text-gen-webui or llama-cpp-python/transformers directly in Python for running LLMs. However, I'm currently trying to get an overview of other widely used LLM frontends/engines. Most of the tools allow using already-downloaded models and follow somewhat similar configuration patterns; for Ollama, that seems to be done via modelfiles.

I already have a large local collection of models which I want to import into Ollama. Since these are mostly huggingface repositories, it would be possible to write code for generating modelfiles from huggingface's configs and modelcards. However, it would take some effort to extract the templates from modelcards and to find the correct base-model repos for quantization repos, since quantization repos usually do not mirror the original config files...

So following this route, I have three questions:

1. Is there already a tool available which allows generating Ollama modelfiles from the huggingface API/repos?
2. Is there any option to download only modelfiles from the Ollama hub/library, preferably by huggingface repo ID, model file hash, model file name, or something like that?
3. Will Ollama create larger amounts of data when importing local models with ollama create -f (e.g. mirror the model files or something like that), or will it just link to the existing files? Since the current model collection exceeds 12 TB of storage space, I would absolutely not want the import process to create large amounts of additional data per model.

Any additional information that might help with the above problem is welcome too 😃
Thanks for your time and help!
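
As a rough sketch of the config-scraping idea behind question 1 (this is not an existing tool, and the GGUF path below is a placeholder), one could start by pulling fields from the repo's config.json:

# Read the context length from a huggingface repo's config.json (needs curl + jq)
# and emit a Modelfile skeleton pointing at a local GGUF.
REPO="theo77186/Llama-3-8B-Instruct-norefusal"
CTX=$(curl -sL "https://huggingface.co/$REPO/resolve/main/config.json" | jq -r '.max_position_embeddings')
printf 'FROM /path/to/local/model.gguf\nPARAMETER num_ctx %s\n' "$CTX" > Modelfile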

I have to create my own model files from HF repos.
I wrote up the high-level steps here: https://github.com/shamitv/hf_2_gguf

Thanks. I haven't done any quantization myself, so this is interesting for me.
I wanted to take the info from the huggingface modelcards and configs like this:
https://huggingface.co/theo77186/Llama-3-8B-Instruct-norefusal/blob/main/config.json
and generate an ollama modelfile, which should look something like this:

FROM /<my-local-models>/mradermacher_Llama-3-8B-Instruct-norefusal-i1-GGUF/Llama-3-8B-Instruct-norefusal.i1-Q6_K.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""

PARAMETER num_ctx 8192
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
...

I know that GGUF files contain most of these settings in their internal metadata section. So will importing a GGUF file using only the first line (the "FROM" part) of the above modelfile automatically set all those other parameters? And would you then only need to add them manually when you want to override defaults?
Most of my models already are quantized and in GGUF format, so this behavior would be very helpful.

Since GGUF carries all the metadata, the binary file alone is sufficient.
https://github.com/ollama/ollama/blob/main/docs/import.md
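
In other words, per the linked import docs, a one-line Modelfile is enough for a GGUF import; a minimal sketch (the path is a placeholder):

printf 'FROM /path/to/model.gguf\n' > Modelfile
ollama create mymodel -f Modelfile
ollama run mymodel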

Great, thanks for clarifying!
