The button in the listing form

A handful of MLSs have started adding AI-generated description tools right into the listing entry workflow. The idea is straightforward: instead of writing public remarks from scratch, agents click a button and the system produces a paragraph for them.

It's fast. It's convenient. And for agents who view the public remarks field as a formality, it may be enough.

But how these tools work matters, and the two approaches MLSs are trying fail in different ways.

Approach 1: Generate from data fields

Some MLS tools pull from the structured data the agent has already entered: beds, baths, square footage, lot size, year built, pool, garage. The system reads the fields and writes a paragraph.

The output is accurate, because it's drawing from verified listing data. But it reads like a prose version of the listing details:

"This 3-bedroom, 2-bathroom ranch home offers 1,850 square feet of living space on a 0.25-acre lot. Features include a 2-car attached garage and an in-ground pool."

That's correct. It's also everything the buyer already knows from glancing at the listing summary. The description adds nothing that the data fields don't already show.
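The field-to-prose step in this first approach can be sketched as a simple template. A minimal sketch, assuming hypothetical field names; real MLS schemas differ, but the mechanism is the same: the output can only restate what the fields contain.

```python
# A minimal sketch of the "generate from data fields" approach.
# Field names and the template are hypothetical, for illustration only.

def remarks_from_fields(listing: dict) -> str:
    """Render structured listing data as a prose paragraph."""
    sentences = [
        f"This {listing['beds']}-bedroom, {listing['baths']}-bathroom "
        f"{listing['style']} home offers {listing['sqft']:,} square feet "
        f"of living space on a {listing['lot_acres']}-acre lot."
    ]
    features = []
    if listing.get("garage_spaces"):
        features.append(f"a {listing['garage_spaces']}-car attached garage")
    if listing.get("pool"):
        features.append("an in-ground pool")
    if features:
        sentences.append("Features include " + " and ".join(features) + ".")
    return " ".join(sentences)

listing = {
    "beds": 3, "baths": 2, "style": "ranch", "sqft": 1850,
    "lot_acres": 0.25, "garage_spaces": 2, "pool": True,
}
print(remarks_from_fields(listing))
```

Run against the sample listing, this produces exactly the paragraph quoted above: accurate, and entirely redundant with the listing summary.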

Approach 2: Generate from a text box

Other MLSs take a different approach: they give the agent a text box to type notes about the property, then generate public remarks from whatever the agent wrote.

This is essentially the same prompt-based model that third-party AI description tools use. The agent types a few sentences, the system sends them to a language model, and the model fills in whatever the agent didn't mention. A "sun-drenched breakfast nook" appears because it sounds plausible. The "mature landscaping" is a guess. The output reads well, but it includes details that no one provided.
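The mechanics of this second approach reduce to wrapping the agent's notes in a generation prompt. A minimal sketch, with hypothetical prompt wording; real integrations vary, but the structural problem is visible in the input itself:

```python
# A minimal sketch of the "generate from a text box" approach.
# The prompt wording is hypothetical; real MLS integrations vary.

def build_prompt(agent_notes: str) -> str:
    """Wrap the agent's free-form notes in a generation prompt."""
    return (
        "Write engaging MLS public remarks for this property.\n"
        f"Agent notes: {agent_notes}\n"
    )

# Sparse input: any detail beyond these facts has no source,
# so the model must invent it to fill out a paragraph.
prompt = build_prompt("3 bed 2 bath ranch, updated kitchen, big yard")
print(prompt)
```

Everything the finished description says about the kitchen finishes, the layout, or the landscaping has to come from somewhere, and with input this sparse, "somewhere" is the model's guesswork.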

The difference is that this version comes from inside the MLS platform, which may make agents less likely to question the output. When a description is generated by the same system you use to enter listings, it carries an implicit authority that a standalone ChatGPT window does not.

What both approaches leave out

Whether the system pulls from data fields or from typed notes, the output can only be as good as the input. And neither approach asks the right questions.

The kitchen was recently updated with quartz counters and a mosaic tile backsplash, and it opens into the living room and out to the deck. The MLS field says "updated kitchen." The agent's notes might not mention the layout at all.

The primary bedroom has an ensuite with double sinks and a walk-in shower, plus two walk-in closets and a sitting area. The MLS field says "3 bedrooms." The notes say "3 bed 2 bath ranch, primary ensuite."

The back porch is screened, and it looks out toward the Smoky Mountains. The MLS has a "mountain views" checkbox. The language model may turn that into a sentence about mountain views from rooms the agent never specified.

The oak hardwood floors run through the living areas, kitchen, and entryway. The MLS field says "hardwood." The notes don't mention the species or which rooms.

Each of these is a sentence that a good description would include. Neither approach gives the system enough material to write them accurately.

The survey approach

Nila June takes a different approach to the input problem. Instead of pulling from MLS data fields or asking for free-form notes, it walks agents through a guided property briefing with specific questions about the details that matter for a written description: kitchen specifics, architectural character, outdoor spaces, views from particular rooms, neighborhood context, and recent updates.

The questions are structured so nothing important gets skipped, but the answers are specific to each property. The system generates two descriptions — a multi-paragraph narrative and a concise version for MLS public remarks — from that input alone. Every detail in the output came from the agent's answers. Nothing is invented, and the language is screened for overused phrases and commonly flagged Fair Housing terms.

For agents whose MLS offers auto-generated remarks, the question isn't whether the auto-fill is convenient. It is convenient. The question is whether "convenient" is the standard you want for the one piece of marketing that represents your listing.

If you're part of an MLS technology team exploring AI-generated descriptions for your platform, we'd welcome a conversation about integration.

More than a data field

Descriptions built from what your MLS fields can't capture. Accurate, original, and ready to paste.

Try Nila June Free