#Ollama
reasonoptional · 1 month
So I wrote a script to turn a tag soup into proper descriptions, using Ollama. Worked out great, no issues.
I then used those descriptions, 145 of them, to generate images, using Flux Schnell via ComfyUI API calls.
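The post doesn't include the script itself, but the tag-to-description step can be sketched in a few lines of stdlib Python. This is a minimal illustration, not the author's actual code: the endpoint is Ollama's default local address, and the `llama3` model tag is an assumption.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def tags_to_prompt(tags):
    """Collapse a raw tag soup into one instruction for the model."""
    tag_list = ", ".join(t.strip() for t in tags if t.strip())
    return (
        "Rewrite the following image tags as one fluent, detailed "
        f"image description:\n{tag_list}"
    )

def describe(tags, model="llama3"):
    """Ask a local Ollama server to expand tags into a description."""
    payload = json.dumps({
        "model": model,
        "prompt": tags_to_prompt(tags),
        "stream": False,  # one JSON body instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Looping `describe()` over 145 tag sets and writing the results to disk is all an overnight batch run needs.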
The good:
I could leave the computer overnight
out of 118 images featuring humans, only five had bad anatomy
The bad:
Flux Schnell is absolutely not schnell. Average generation time for an 832x1216 image was a bit over 4 minutes.
The one image with very bad anatomy was super cursed
The compositions are good but kinda samey; this might be an artifact of the image aspect ratio (5:8 portrait)
I don't have money for a better video card (currently NVIDIA GeForce RTX 4080 with 12 GB VRAM, and I saved for 3 years to get it)
Overall:
Ridiculously good prompt adherence
Only one weird-ass text image
Only one bad anatomy image
Slow as baaaaalls
And my favorite 9 images:
[Nine images]
siegram-com · 2 days
Ollama in the AI Revolution: Models Shaping the Future and Turkey's AI Breakthrough
Artificial intelligence (AI) models have become indispensable across a wide range of applications, and Ollama provides easy access to many different AI models. Each model has a unique function suited to different needs and use cases. This guide goes deeper into the various AI models available through Ollama, detailing their specific functions, applications, and differences. Ollama AI Models Summary…
tsubakicraft · 26 days
Text generation speed with and without a GPU
On the left, a Core i5 13500 with an RTX A4000 running the Gemma 2 9B model; on the right, a Core i5 13400 running a 4-bit quantized Gemma 2 9B model. With the GPU, generation took 16 seconds; without it, 1 minute 30 seconds. Both use an Ollama server as the model provider, with a chatbot built and run in Dify. Since the CPUs differ (i5 13500 vs. 13400), this isn't a perfectly controlled comparison, but the speed difference is roughly in this range.
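Rather than stopwatching a chat UI, the Ollama server's own non-streaming response includes timing fields, so one request can yield both a wall-clock figure and a server-reported token rate. A small sketch, assuming the default localhost endpoint and a `gemma2:9b` model tag:

```python
import json
import time
import urllib.request

GEN_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def timed_generate(prompt, model="gemma2:9b"):
    """Run one non-streaming generation and wall-clock it."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        GEN_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body, time.perf_counter() - start

def tokens_per_second(eval_count, eval_duration_ns):
    """Ollama reports eval_count (tokens) and eval_duration (nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)
```

Comparing `tokens_per_second(body["eval_count"], body["eval_duration"])` across the two machines would factor out prompt-processing overhead that a raw stopwatch includes.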
minhphong306 · 3 months
[Llama learning log] #8: Agents with local models
https://docs.llamaindex.ai/en/stable/understanding/agent/local_models/ Running models locally: this post walks through running models on your own machine using Ollama. Download it here: https://ollama.com/download Then start a model: ollama run mixtral:8x7b Continue reading [Llama learning log] #8: Agents with local models
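The linked LlamaIndex page wires Ollama in as a local LLM; under the hood, that amounts to POSTing chat messages to the local server. A stdlib-only sketch of talking to the same model `ollama run` serves (endpoint and model tag are the defaults, assumed here):

```python
import json
import urllib.request

CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's chat endpoint

def user(text):
    """Build a user-role chat message."""
    return {"role": "user", "content": text}

def chat(messages, model="mixtral:8x7b"):
    """Send a non-streaming chat request to a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "messages": messages,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

For example, `chat([user("Why is the sky blue?")])` returns the assistant's reply as a string once the server is up.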
honysoytquimalpence · 3 months
*rustles limbs* i accidentally made an indefinite "two trees hate on humans" chat loop
[Three images]
elbrunoc · 3 months
Full RAG scenario using #Phi3, #SemanticKernel and TextMemory in local mode
Hi! Today's scenario once again uses Phi-3. You know, a groundbreaking Small Language Model (SLM) that is redefining the capabilities of AI for developers and businesses alike. In this blog post, we will explore the importance of leveraging Phi-3 for full-scale scenarios and how you can test these scenarios for free using the Ollama C# Playground. The demo scenario below is designed to…
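The post's demo is in C# with Semantic Kernel, but the retrieval half of a RAG flow is compact enough to sketch in any language: embed the documents, embed the query, rank by cosine similarity, and hand the best match to the model as context. A hedged Python illustration against Ollama's embeddings endpoint (the model name here is an assumption, not the post's setup):

```python
import json
import math
import urllib.request

EMBED_URL = "http://localhost:11434/api/embeddings"  # Ollama embeddings endpoint

def embed(text, model="phi3"):
    """Fetch an embedding vector for `text` from a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    req = urllib.request.Request(
        EMBED_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_match(query_vec, doc_vecs):
    """Index of the stored vector most similar to the query."""
    return max(range(len(doc_vecs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
```

In a real setup the document vectors would live in a text-memory store (as Semantic Kernel's TextMemory does) rather than a plain list, but the ranking step is the same.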
virtualizationhowto · 3 months
Local LLM Model in Private AI server in WSL
We are in the age of AI and machine learning, and it seems like everyone is using it. But is the only real way to use AI through public services like OpenAI? No. We can run an LLM locally, which has many great benefits, such as keeping your data in your own environment, whether on the home network or in a home lab. Let's see how we can run a local LLM to host our own private local…
outer-space-youtube · 4 months
Llama 3  
I just sat through a video by a Llama 3 fanboy. Okay, Llama 3 is a good AI model, and it's open source. I first used Llama 3 with Ollama, which runs in the background on Windows, via the PowerShell 7 command prompt. Maybe I missed it, but all the video said was that Llama 3 is great and improving every day; I didn't hear how Llama 3 has actually improved. Unless Tech Genius A.I. is trying to tell us…
realityfragments · 4 months
Writing With a LLM Co-Pilot
I wouldn't characterize myself as an advocate for AI. I'm largely skeptical and remain so. Still, with generative AI everywhere and clogging up self-publishing with its slop, it's impossible to ignore. I've embarked on a quest to see whether the generative AI that is available can help me in various ways. One of these ways is with writing, not in generating text but in helping me write. Since I don't…