An n8n workflow that watches my blog RSS feed, summarizes new posts with a local LLM (Gemma on Ollama), and cross-posts to Facebook, LinkedIn, X, and Instagram. I write the post once; the workflow handles the rest.

Overview

| Aspect | Details |
| --- | --- |
| Platform | Self-hosted n8n |
| LLM | Gemma via Ollama (local GPU) |
| Social APIs | Postiz |
| Scheduling | Cron trigger (10 min) |
| Hardware | RTX 3090 Ti for inference |

How It Works

  1. Fetch - RSS feed polling for new blog posts
  2. Summarize - Ollama (Gemma) generates platform-specific summaries
  3. Extract - Parse HTML for images, download and resize
  4. Generate - AI creates relevant hashtags
  5. Publish - Postiz API posts to all platforms
  6. Dedupe - Hash-based tracking prevents duplicates
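The Extract step above can be sketched as a small n8n Code-node function. This is a minimal illustration, not the workflow's actual code: the real step may use a proper HTML parser, and the function name and regex are mine.

```javascript
// Sketch of step 3 (Extract): pull image URLs out of a post's HTML.
// A regex is enough for well-formed <img> tags; a real parser is
// more robust against malformed markup.
function extractImageUrls(html) {
  const urls = [];
  const re = /<img[^>]+src=["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    urls.push(match[1]); // captured src attribute value
  }
  return urls;
}
```

Each URL would then be fed to an HTTP Request node for download and resizing.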

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                          Hugo RSS Feed                          │
└───────────────────────────┬─────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│                          n8n Workflow                           │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                  Cron Trigger (10 min)                   │   │
│  └──────────────────────────┬───────────────────────────────┘   │
│                             │                                   │
│  ┌──────────────────────────▼───────────────────────────────┐   │
│  │               RSS Feed Node (Fetch Latest)               │   │
│  └──────────────────────────┬───────────────────────────────┘   │
│                             │                                   │
│  ┌──────────────────────────▼───────────────────────────────┐   │
│  │            Hash Check (Duplicate Prevention)             │   │
│  └──────────────────────────┬───────────────────────────────┘   │
│                             │                                   │
│  ┌──────────────────────────▼───────────────────────────────┐   │
│  │                 Ollama Node (Gemma LLM)                  │   │
│  │             - Summarize (platform char limits)           │   │
│  │             - Generate hashtags                          │   │
│  └──────────────────────────┬───────────────────────────────┘   │
│                             │                                   │
│  ┌──────────────────────────▼───────────────────────────────┐   │
│  │             Image Extraction & Processing                │   │
│  │             - HTML parsing                               │   │
│  │             - Download & resize                          │   │
│  └──────────────────────────┬───────────────────────────────┘   │
│                             │                                   │
│  ┌──────────────────────────▼───────────────────────────────┐   │
│  │                       Postiz API                         │   │
│  │   ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐       │   │
│  │   │Facebook │ │LinkedIn │ │    X    │ │Instagram│       │   │
│  │   └─────────┘ └─────────┘ └─────────┘ └─────────┘       │   │
│  └──────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘

What Was Harder Than Expected

Getting the LLM to respect character limits was the recurring headache. X caps posts at 280 characters, while LinkedIn allows far more room. Prompt engineering for consistent output length took more iteration than the actual workflow logic.
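Because the model still overruns sometimes, a hard clamp after generation is the safety net. Here is a minimal sketch; X's 280 is real, but the other limit values and the word-boundary strategy are my assumptions, not taken from the workflow.

```javascript
// Clamp LLM output to a per-platform character limit, cutting at a
// word boundary and appending an ellipsis. Limits other than X's 280
// are illustrative placeholders.
const CHAR_LIMITS = { x: 280, facebook: 2000, linkedin: 3000, instagram: 2200 };

function clampToLimit(text, platform) {
  const limit = CHAR_LIMITS[platform] ?? 280; // default to the strictest
  if (text.length <= limit) return text;
  // Leave one character of room for the ellipsis.
  const slice = text.slice(0, limit - 1);
  const lastSpace = slice.lastIndexOf(' ');
  return (lastSpace > 0 ? slice.slice(0, lastSpace) : slice) + '…';
}
```

A clamp like this turns an occasional API rejection into a slightly shortened post, which is the better failure mode for unattended automation.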

Postiz’s API has quirks — Instagram JSON handling in particular required workarounds that aren’t documented. The LLM output parsing needed to be fault-tolerant because Gemma occasionally returns slightly different JSON structures.
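The fault-tolerant parsing can be sketched like this. The fallback shape (`summary` plus `hashtags`) is my assumption about what the downstream nodes expect; the recovery strategy of grabbing the first brace-to-last-brace span handles the common case of JSON wrapped in markdown fences or prose.

```javascript
// Tolerant parse for LLM output: try strict JSON first, then the
// first {...} span, then fall back to treating the raw text as a
// plain summary so the workflow never halts on a malformed reply.
function parseLlmJson(raw) {
  try {
    return JSON.parse(raw);
  } catch {
    const match = raw.match(/\{[\s\S]*\}/); // first "{" to last "}"
    if (match) {
      try { return JSON.parse(match[0]); } catch { /* fall through */ }
    }
    return { summary: raw.trim(), hashtags: [] };
  }
}
```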

Duplicate prevention uses a simple link hash stored in a file. It works. The whole thing runs on the local RTX 3090 Ti — no API calls to OpenAI or anyone else.

Tech Stack

| Component | Technology |
| --- | --- |
| Automation | Self-hosted n8n with community nodes |
| AI | Gemma via Ollama |
| Social API | Postiz |
| Content | RSS feed parsing |
| Images | HTML parsing, HTTP download, resize |
| Deduplication | JavaScript hash with file tracking |
| Scheduling | Cron trigger |

The workflow is available as a template on n8n.io for anyone who wants to set up something similar.


Thanks to Grok (xAI) for help with debugging and optimization.