Fair Housing Compliance and AI-Generated Listing Descriptions

General-purpose AI tools have no awareness of Fair Housing law. Here's why that matters, which phrases put agents at risk, and how a purpose-built system approaches language screening differently.

What the Fair Housing Act requires of listing language

The Fair Housing Act of 1968, along with its subsequent amendments, prohibits discrimination in housing based on seven federally protected classes. Many states and municipalities add further protections. For real estate agents, these laws extend directly to the language used in property listings. A description that indicates a preference for or against any protected group can form the basis of a complaint, even if no discriminatory intent existed.

Federally Protected Classes

Race
Color
National Origin
Religion
Sex
Familial Status
Disability

Many jurisdictions add protections for age, sexual orientation, gender identity, source of income, and other classes.

The enforcement standard is important to understand: the question is not whether the agent meant to discriminate, but whether a reasonable person could interpret the language as expressing a preference. Phrases like "ideal for young professionals" or "great for singles" are clear violations. But subtler language can also create exposure. References to "quiet tenants," descriptions of neighborhood demographics, or terms that imply preferences about disability or familial status all carry risk.

Common phrases that create risk

Many of the phrases that create Fair Housing exposure are ones agents have used for years without thinking twice. Some have become so embedded in real estate language that they feel routine. "Master bedroom" is perhaps the most widely discussed example. While its origin is debated, the term has been dropped by many MLSs and industry organizations in favor of "primary bedroom." AI tools trained on web content, however, still produce it regularly because it appears so frequently in their training data.

Phrases that carry Fair Housing risk

| Risky Phrase | Why It's Flagged | Safer Alternative |
|---|---|---|
| "master bedroom" | Dropped by NAR and many MLSs | "primary bedroom" |
| "perfect for young families" | Expresses preference based on familial status and age | "ideal for today's lifestyle" |
| "great for empty nesters" | Implies preference for older adults without children | "low-maintenance living" |
| "no children allowed" | Discriminates based on familial status | Omit entirely |
| "quiet neighborhood" | Can be interpreted as code for excluding families with children | "peaceful setting" |
| "no Section 8" | Discriminates based on source of income (illegal in many jurisdictions) | Omit entirely |
| "exclusive community" | Can imply discriminatory selection of residents | "private community" |
| "bachelor pad" | Expresses preference based on sex and familial status | "modern living space" |

This table represents a small fraction of the language that carries risk. Nila June's engine screens against over 150 terms and phrases, covering categories that range from familial status and sex to disability and national origin.

Why general-purpose AI fails at Fair Housing compliance

Large language models generate text by predicting the most statistically likely next word based on patterns in their training data. Their training data includes millions of real estate listings written over decades, many of which predate current Fair Housing awareness. The phrases agents are trying to avoid are, statistically speaking, among the most common patterns in property description language.

When you ask a general-purpose AI tool to write a listing description, it doesn't consult a list of prohibited terms. It generates text based on what property descriptions typically contain. And property descriptions have historically contained exactly the language that Fair Housing guidelines now flag.

Some AI tools attempt to address this with post-processing filters that scan generated text for prohibited terms. This approach is better than nothing, but it has structural weaknesses. Filters can catch exact phrase matches, but they struggle with variations, context-dependent terms, and novel phrasings that carry the same discriminatory implication. A filter that catches "perfect for young families" may not catch "a young family's dream" or "ideal starter home for newlyweds." The underlying model is still trying to produce this kind of language because that's what its training data tells it property descriptions look like.
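The structural weakness of exact-match filtering can be shown in a few lines. The sketch below uses a hypothetical two-phrase blocklist (a real filter would be far larger); the point is that substring matching catches only the literal phrase, not rewordings that carry the same implication.

```python
# A naive post-processing filter: exact substring matching against a
# blocklist. The blocklist and example sentences here are illustrative,
# not any vendor's actual filter.
BLOCKLIST = ["perfect for young families", "no children allowed"]

def passes_filter(text: str) -> bool:
    """Return True if no blocklisted phrase appears verbatim in the text."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# The exact phrase is caught...
print(passes_filter("This home is perfect for young families."))  # False

# ...but semantically equivalent rewordings slip through unflagged.
print(passes_filter("A young family's dream starter home."))      # True
print(passes_filter("Ideal starter home for newlyweds."))         # True
```

Fuzzier matching (stemming, regex variants, embedding similarity) narrows the gap but never closes it, because the generator keeps producing novel phrasings the filter has never seen.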

There's also the question of trust. When an agent uses a tool that generates text and then runs it through a compliance check, the agent is relying on two systems: the generator and the filter. If either one fails, the violation reaches the MLS. An agent reviewing the output for quality may reasonably assume the tool has already handled compliance, reducing the scrutiny they give to individual phrases.

How Nila June handles it differently

Language screening by architecture, not by filter

Nila June doesn't generate text and then screen it. The templates that produce the output were written without flagged terms, so the system is built to prevent them from appearing in the first place. The system doesn't need to catch "master bedroom" because no template contains it. It doesn't need to filter "ideal for young families" because no branch of the logic produces it.

Over 150 commonly flagged terms and phrases are excluded from the engine's templates
Built to prevent flagged language from appearing, not to catch it after the fact
Deterministic output means the same survey answers always produce the same screened results
Every sentence traces to a specific survey answer provided by the agent
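The architecture described above can be sketched in miniature. This is a hypothetical toy, not Nila June's actual engine: output is assembled only from pre-written templates keyed to survey answers, so a flagged term can never appear unless someone wrote it into a template, and identical answers always render identical text.

```python
# Screening by architecture: fixed templates keyed to survey answers.
# Template text and survey keys are invented for illustration.
TEMPLATES = {
    ("bedroom", "largest"): "The primary bedroom offers {detail}.",
    ("setting", "quiet"): "Set in a peaceful location with {detail}.",
}

def render(answers: dict) -> str:
    """Deterministically assemble a description from survey answers.

    Every sentence traces to one (topic, attribute) answer; there is no
    generative step that could introduce language outside the templates.
    """
    return " ".join(
        TEMPLATES[key].format(detail=value) for key, value in answers.items()
    )

answers = {
    ("bedroom", "largest"): "a walk-in closet and en suite bath",
    ("setting", "quiet"): "mature trees",
}
print(render(answers))
```

Because "master bedroom" appears in no template, no combination of inputs can produce it; compliance review shifts from scanning every output to auditing a finite template set once.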

Language interpretation can vary by jurisdiction, and no automated system replaces an agent's own review. Nila June is designed to reduce risk, not eliminate it.

Built for a regulated industry from day one

This distinction is fundamental. In an LLM-based system, language screening is a constraint imposed on an engine that is naturally inclined to produce flagged terms. In Nila June's system, avoidance of risky language is a property of the engine itself. The system's default behavior is cautious behavior, because it was built that way.

Important: Nila June is designed to help agents avoid commonly flagged Fair Housing language, but it does not provide legal advice and does not guarantee compliance with all federal, state, and local fair housing laws. Agents are responsible for reviewing all descriptions before use. If you are uncertain whether a description complies with applicable fair housing laws, consult a qualified attorney. See our Terms of Service for full details.

See screened descriptions built from your property details. Three free.

Start Free

No subscription. $19.99 per listing after your 3 free descriptions.

Descriptions written with care

Accurate to your property data. Built-in fair-housing language awareness. Original every time.

Try Nila June Free