DEV Community

# llm

Posts

- A Semantic Kernel Alternative for .NET — When and Why You'd Reach for One (5 min read)
- I wrote a custom CUDA inference engine to run Qwen3.5-27B on $130 mining cards (5 min read)
- The Council has Decided (5 min read)
- I Trained My Own LLM from Scratch in 2025: What the Viral HN Tutorial Doesn't Tell You About the Real Cost (10 min read)
- I Built My Own Entropy Coder Because Deflate Doesn't Know What GN Knows (3 min read)
- Stop Using AI Only to Build—Start Using It to Break Your Systems (4 min read)
- Stop Starting From Scratch: How Claude Projects Will Change the Way You Work (5 min read)
- Qwen3.6-27B Local Inference on RTX 3090 with Native vLLM & Ollama Fallback (3 min read)
- LLM Foundry finally stops being a toy and starts acting like a system (3 min read)
- Why Identity-Framing Jailbreaks Bypass Your LLM Safety Filters (5 min read)
- The fix for: I want to go to a car wash to wash my car and it's 50 meters away. Should I drive or should I walk? (1 min read)
- The Math Behind Local LLMs: How to Calculate Exact VRAM Requirements Before You Crash Your GPU (3 min read)
- LLM Observability Tools Compared: The 2026 Landscape (5 min read)
- "A Survey of LLM-based Deep Search Agents" (2026) (2 min read)
- Built an open-source memory layer for local LLMs — single-shot calls, auto-extracted constraints, no context degradation (1 min read)