A content agent for your codebase

Glossia uses LLMs to transform, translate, and generate content directly in your repo. It validates output locally and produces drafts your team can review.

How it works

01
Give agents context

Add CONTENT.md context files alongside your content. Glossia tracks these dependencies, so when context or content changes, only the affected outputs are regenerated.

02
Choose the best model for the job

Assign different models to different roles and mix providers freely. Glossia acts as a broker between your code and the models, so you don't need to register with each provider separately.
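
A minimal sketch of what per-role assignment could look like in CONTENT.md frontmatter; the [models] table, its keys, and the model identifiers are illustrative, not a documented schema:

+++
[models]
translate = "anthropic/claude-sonnet-4"
revisit = "openai/gpt-4o"
+++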

03
Close the feedback loop

Glossia ships first-party tools for syntax validation and content checks, and you can bring your own. Agents run these tools after each generation and retry on errors, so the loop closes before you even review.
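
As a sketch, registering a check of your own might look like this; the [[tools]] table, its keys, and the {output} placeholder are illustrative, not a documented schema:

+++
[[tools]]
command = "markdownlint {output}"
retries = 3
+++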

Two workflows, one tool

Whether you need to reach new audiences or sharpen what you already have, the same config file and CLI drive both.

Translate

Make your content speak new languages. Point Glossia at your source files, list target languages, and it produces localized versions that preserve structure, code blocks, and formatting. Run glossia translate and ship to every market.
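
A minimal entry, following the frontmatter schema shown further down this page:

+++
[[content]]
source = "docs/guide/*.md"
targets = ["es", "ja"]
output = "docs/guide/{lang}/{basename}.{ext}"
+++

With this entry, glossia translate would write docs/guide/intro.md out as docs/guide/es/intro.md and docs/guide/ja/intro.md.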

Revisit

Improve your content in place. Glossia reviews source files for clarity, accuracy, and tone using the context you provide. The revised output can overwrite the original or be written to a separate path. Run glossia revisit to sharpen your docs, guides, or blog posts.
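
A minimal sketch, assuming revisit reads the same output key as translate and that omitting output means revising in place:

+++
[[content]]
source = "docs/guide/*.md"
output = "docs/guide/revised/{basename}.{ext}"
+++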

Closing the loop

Agents don't just generate content. They verify it, catch errors, and retry until the output meets your standards.

Generate

Agents produce content based on your source files and the context you provide in CONTENT.md.

Verify

Built-in tools check syntax, structure, and preserved tokens. You can also plug in your own linters, compilers, or validators.

Retry

When a check fails, agents see the error and try again. The loop repeats until the output is valid or the retry limit is reached.
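
The shape of that loop, as a minimal Python sketch rather than Glossia's actual implementation; generate, run_checks, and max_retries are illustrative names passed in as parameters:

from typing import Callable, Optional

def generate_until_valid(
    generate: Callable[[str, str, Optional[list]], str],  # hypothetical agent call
    run_checks: Callable[[str], list],                    # hypothetical validator runner
    source: str,
    context: str,
    max_retries: int = 3,
) -> str:
    """Draft, verify, and retry until the output passes every check or retries run out."""
    feedback: Optional[list] = None
    errors: list = []
    for _ in range(max_retries):
        output = generate(source, context, feedback)  # draft, seeded with prior errors
        errors = run_checks(output)                   # e.g. syntax and preserved-token checks
        if not errors:
            return output                             # all checks pass: the loop closes
        feedback = errors                             # failures guide the next attempt
    raise RuntimeError(f"output still invalid after {max_retries} attempts: {errors}")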

Context flows down, config stays close

The narrower the scope, the closer to the code. Global defaults live at the root, project-specific rules nest deeper, and shared resources like glossaries live in Glossia so they can be reused across repositories.

CONTENT.md

Define content sources, targets, and output patterns in TOML frontmatter. The {lang}, {basename}, and {ext} placeholders expand per target, so for Spanish the home.json entry below writes to site/src/_data/i18n/es/home.json.

+++
[[content]]
source = "site/src/_data/home.json"
targets = ["es", "de", "ko", "ja"]
output = "site/src/_data/i18n/{lang}/{basename}.{ext}"

[[content]]
source = "docs/guide/*.md"
+++

# Context for the content agent...

Project layout

Context files nest alongside your content. Language overrides sit next door.

  • CONTENT.md
  • CONTENT/
    • es.md
    • ja.md
  • docs/
    • CONTENT.md
    • CONTENT/
      • de.md
    • guide/
      • intro.md
  • site/
    • src/
      • home.json

Ship content like you ship code

Glossia borrows proven patterns from software engineering and applies them to content workflows, so you can ship content with the same confidence you ship code.

Content hashing

Like build systems that skip unchanged targets, Glossia hashes sources and context to regenerate only what changed.
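
A minimal Python sketch of the idea, not Glossia's implementation: hash source and context together, and skip regeneration when the digest matches the one recorded after the last run.

import hashlib
from pathlib import Path

def content_digest(source: Path, context: Path) -> str:
    """Combined hash of a source file and the CONTENT.md context it depends on."""
    h = hashlib.sha256()
    h.update(source.read_bytes())
    h.update(context.read_bytes())
    return h.hexdigest()

def needs_regeneration(source: Path, context: Path, recorded: dict[str, str]) -> bool:
    """True when the digest differs from the one recorded after the last run."""
    return recorded.get(str(source)) != content_digest(source, context)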

Linting and validation

The same idea behind CI checks: run syntax validators, linters, and custom commands against every output before it lands.

Retry on failure

When a check fails, agents see the error and try again: automatic retries with feedback, the same pattern that makes distributed systems resilient.

Pull request review

Content changes go through the same code review process your team already uses. Diffs, comments, approvals, all in Git.

Progressive refinement

Outputs don't have to be perfect on day one. Like code, they improve through iteration.

First drafts from LLMs are structurally correct but may miss nuance, tone, or domain-specific phrasing. That is by design. Every review signal, whether a pull request comment, an updated context file, or a glossary tweak, feeds back into the next run. Quality converges over successive passes, not in a single shot.

Draft

LLM generates a structurally valid first pass based on your context files.

Review

Your team flags issues through pull requests, just like code review.

Refine

Updated context and glossary corrections feed into the next run, closing the gap.

Converge

Each cycle narrows the distance to production quality. The system learns your product's voice.

This follows the same principle behind Kaizen in manufacturing and successive approximation in engineering: start with a good-enough baseline and systematically improve it with human judgment in the loop.

FAQ

Why build this now?

For the first time in history, we have tools that can capture context and interact with the outside world to produce and transform content across languages. LLMs changed what is possible, but possibility alone is not enough. You need a system that orchestrates those capabilities reliably: hashing to avoid redundant work, validation to catch errors, retries to close the loop, and Git to keep humans in control. That system is what we are building from first principles and from years of experience building developer tools.

Do I need to bring my own models?

No. Glossia acts as a broker between your content and the models. The landscape is moving fast, and you want the flexibility to switch models quickly without rewiring your pipeline. We continuously monitor and test which models perform best for different tasks, so you get strong defaults out of the box while keeping the freedom to override with any provider you prefer.

How do humans review outputs?

Today, reviewers check generated content through pull requests and diffs, and can update context files to force regeneration when needed. Looking ahead, we expect reviewers to become part of the loop by running Glossia locally, the same way developers already work with coding agents like Codex.