Fair Housing Compliance and AI-Generated Listing Descriptions
General-purpose AI tools have no awareness of Fair Housing law. Here's why that matters, which phrases put agents at risk, and how a purpose-built system approaches language screening differently.
What the Fair Housing Act requires of listing language
The Fair Housing Act of 1968, along with its subsequent amendments, prohibits discrimination in housing based on seven federally protected classes. Many states and municipalities add further protections. For real estate agents, these laws extend directly to the language used in property listings. A description that indicates a preference for or against any protected group can form the basis of a complaint, even if no discriminatory intent existed.
Federally Protected Classes
The seven classes covered by federal law are race, color, religion, national origin, sex, disability, and familial status. Many jurisdictions add protections for age, sexual orientation, gender identity, source of income, and other classes.
The enforcement standard is important to understand: the question is not whether the agent meant to discriminate, but whether a reasonable person could interpret the language as expressing a preference. Phrases like "ideal for young professionals" or "great for singles" are clear violations. But subtler language can also create exposure. References to "quiet tenants," descriptions of neighborhood demographics, or terms that imply preferences about disability or familial status all carry risk.
Common phrases that create risk
Many of the phrases that create Fair Housing exposure are ones agents have used for years without thinking twice. Some have become so embedded in real estate language that they feel routine. "Master bedroom" is perhaps the most widely discussed example. While its origin is debated, the term has been dropped by many MLSs and industry organizations in favor of "primary bedroom." AI tools trained on web content, however, still produce it regularly because it appears so frequently in their training data.
Some of the most common examples, each paired with the protected class it implicates:

"Ideal for young professionals" (familial status, age)
"Great for singles" (familial status)
"Quiet tenants" (familial status, disability)
"Perfect for young families" (familial status)
"Master bedroom" (dropped by many MLSs; use "primary bedroom")

This list represents a small fraction of the language that carries risk. Nila June's engine screens against over 150 terms and phrases, covering categories that range from familial status and sex to disability and national origin.
Why general-purpose AI fails at Fair Housing compliance
Large language models generate text by predicting the most statistically likely next word based on patterns in their training data. Their training data includes millions of real estate listings written over decades, many of which predate current Fair Housing awareness. The phrases agents are trying to avoid are, statistically speaking, among the most common patterns in property description language.
When you ask a general-purpose AI tool to write a listing description, it doesn't consult a list of prohibited terms. It generates text based on what property descriptions typically contain. And property descriptions have historically contained exactly the language that Fair Housing guidelines now flag.
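A toy frequency model makes this concrete. The mini-corpus below is invented for illustration, and a real LLM is far more sophisticated than word counts, but the statistical pressure it demonstrates is the same:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for decades of listing copy.
corpus = (
    "spacious master bedroom with ensuite bath "
    "large master bedroom upstairs "
    "bright master bedroom and updated kitchen "
    "sunny primary bedroom with bay window"
).split()

# Count which word precedes each word: a crude stand-in for the
# statistical patterns a language model learns at much larger scale.
prev_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    prev_word[cur][prev] += 1

print(prev_word["bedroom"].most_common())
# [('master', 3), ('primary', 1)]: the statistically "best" modifier
# for "bedroom" is the flagged term, because it dominates the data.
```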
Some AI tools attempt to address this with post-processing filters that scan generated text for prohibited terms. This approach is better than nothing, but it has structural weaknesses. Filters can catch exact phrase matches, but they struggle with variations, context-dependent terms, and novel phrasings that carry the same discriminatory implication. A filter that catches "perfect for young families" may not catch "a young family's dream" or "ideal starter home for newlyweds." The underlying model is still trying to produce this kind of language because that's what its training data tells it property descriptions look like.
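To see the weakness, consider a toy version of such a filter. The blocklist and helper function below are illustrative, not any vendor's actual implementation:

```python
# A toy post-processing filter: exact phrase matching against a blocklist.
BLOCKLIST = {
    "perfect for young families",
    "ideal for young professionals",
    "master bedroom",
}

def passes_filter(text: str) -> bool:
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

print(passes_filter("Perfect for young families!"))        # False: exact match, caught
print(passes_filter("A young family's dream home."))       # True: same implication, missed
print(passes_filter("Ideal starter home for newlyweds."))  # True: novel phrasing, missed
```

Every missed variant requires a new blocklist entry, but the space of phrasings carrying the same implication is open-ended, which is why exact matching never fully closes the gap.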
There's also the question of trust. When an agent uses a tool that generates text and then runs it through a compliance check, the agent is relying on two systems: the generator and the filter. If either one fails, the violation reaches the MLS. An agent reviewing the output for quality may reasonably assume the tool has already handled compliance, reducing the scrutiny they give to individual phrases.
How Nila June handles it differently
Language screening by architecture, not by filter
Nila June doesn't generate text and then screen it. The templates that produce its output were written without flagged terms, so those terms cannot appear in the first place. The system doesn't need to catch "master bedroom" because no template contains it. It doesn't need to filter "ideal for young families" because no branch of the logic produces it.
Language interpretation can vary by jurisdiction, and no automated system replaces an agent's own review. Nila June is designed to reduce risk, not eliminate it.
This distinction is fundamental. In an LLM-based system, language screening is a constraint imposed on an engine that is naturally inclined to produce flagged terms. In Nila June's system, avoidance of risky language is a property of the engine itself. The system's default behavior is cautious behavior, because it was built that way.
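In code, the difference between filtering and architecture looks something like the sketch below. The templates and field names are hypothetical, not Nila June's actual internals, but they illustrate the structural property: output is assembled from pre-screened text plus structured property facts, so there is no statistical engine pulling toward legacy phrasing.

```python
# A minimal sketch of generation by template rather than by prediction.
# Template text and field names are invented for illustration.
TEMPLATES = {
    "bedroom": "The {size} primary bedroom includes {feature}.",
    "kitchen": "The kitchen offers {feature} and {counters} countertops.",
}

def render(section: str, **fields) -> str:
    # Output can only contain template text plus the supplied property facts,
    # so a flagged phrase cannot appear unless someone writes it into a template.
    return TEMPLATES[section].format(**fields)

print(render("bedroom", size="spacious", feature="a walk-in closet"))
# "The spacious primary bedroom includes a walk-in closet."
```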
See screened descriptions built from your property details. Three free.
Start Free →
No subscription. $19.99 per listing after your 3 free descriptions.