🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Updated Sep 7, 2024 - Python
Large Language Model Text Generation Inference
BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue LLM)
A Chinese NLP solution suite (large models, data, models, training, inference)
Go package implementing Bloom filters, used by Milvus and Beego.
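Several entries above concern Bloom filters. To make the underlying idea concrete, here is a minimal toy sketch in Go (not code from the listed packages; the type and function names are invented for illustration): a bit array plus k hash positions per key, giving no false negatives and a tunable false-positive rate.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a toy Bloom filter: k hash positions over a fixed bit array.
type bloom struct {
	bits []bool
	k    uint32
}

func newBloom(m int, k uint32) *bloom {
	return &bloom{bits: make([]bool, m), k: k}
}

// indexes derives k positions from one 64-bit FNV hash via double hashing.
func (b *bloom) indexes(s string) []uint32 {
	h := fnv.New64a()
	h.Write([]byte(s))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)|1 // force h2 odd to vary positions
	idx := make([]uint32, b.k)
	for i := uint32(0); i < b.k; i++ {
		idx[i] = (h1 + i*h2) % uint32(len(b.bits))
	}
	return idx
}

// Add sets the k bits for s.
func (b *bloom) Add(s string) {
	for _, i := range b.indexes(s) {
		b.bits[i] = true
	}
}

// Test reports possible membership: false means definitely absent.
func (b *bloom) Test(s string) bool {
	for _, i := range b.indexes(s) {
		if !b.bits[i] {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024, 3)
	f.Add("milvus")
	fmt.Println(f.Test("milvus")) // true — Bloom filters never miss an added key
	fmt.Println(f.Test("beego"))  // almost certainly false (tiny false-positive chance)
}
```

Production packages (such as the one above used by Milvus and Beego) use packed bit sets and faster hashes, but the add/test structure is the same.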
🩹Editing large language models within 10 seconds⚡
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
Spicetify theme inspired by Microsoft's Fluent Design. Always up to date, and a powerful theme to calm your eyes while listening to your favorite beats
Fast Inference Solutions for BLOOM
Crosslingual Generalization through Multitask Finetuning
OpenGL C++ Graphics Engine
💬 Chatbot web app + HTTP and Websocket endpoints for LLM inference with the Petals client
Go implementation of the Cuckoo Filter: better than a Bloom filter, with configurable filter parameters and optimized space usage
DirectX 11 Renderer written in C++11
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).