Generated Assets

A technical guide to successful prompting

Updated: 1.15.25

Generated Assets lets you turn any thesis into a fully formed, investable index. Instead of choosing between thousands of off-the-shelf ETFs—each built by someone else, for someone else—you can now build an index that reflects your view of the market.

But like any powerful system, the quality of the output depends on the quality of your input.

This guide walks through how Generated Assets interprets your prompt, how the underlying evaluation agents work, and how to write prompts that produce the most accurate, expressive indices. We’ll also cover how to refine and iterate on your asset using follow-up instructions.

Whether you’re building your first “AI-filtered Nasdaq-100” or exploring something more experimental—like “founder-led companies with improving margins”—this guide will show you how to get the most out of Generated Assets.

Note that this guide is for educational and illustrative purposes only, and the content here should not be viewed as investment advice or recommendations.

How the engine works: The swarm architecture

To write better prompts, it helps to understand what happens when you hit “Generate.” Most consumer AI tools work linearly: you ask a question, and the LLM generates an answer from memory or a few web searches. That works well for building small lists of relevant companies, but it struggles with larger universes or with combining many criteria at once. That’s why we built our architecture to separate intent parsing from evaluation: the primary LLM encodes your idea into a concrete set of rules, then evaluates thousands of companies in parallel using a swarm of research agents.

  1. Intent parsing: When you type a prompt, a primary agent analyzes your text to extract any hard financial requirements as well as softer criteria (“Custom Criteria”). You can see how the agent interpreted your prompt in the filter sidebar.
  2. The evaluation swarm: The system then dispatches a concurrent swarm of evaluation agents. Imagine 5,000 analysts working in parallel. Each agent is assigned specific assets to evaluate against your criteria using real-time and historical market data. First, structured data requirements are used to filter down the list of candidates, then the research agents use unstructured data to evaluate any custom criteria.
  3. Construction and weighting: All candidates that pass your requirements are given a relevance score, then weighted according to your chosen metric. By default, if you provide any custom criteria, the system weights your asset by relevance. Note that if no custom criteria are provided, all assets receive a relevance score of 100.

Because of this architecture, the AI performs best when you give it specific, multi-factor instructions. It doesn’t just “guess” good stocks; it screens for them based on the logic you provide.
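
To make the flow concrete, here is a minimal sketch of those steps in Python. Every name in it (build_index, score_custom_criteria, the 50-point relevance cutoff, the toy tickers) is an illustrative assumption for this guide, not the actual Generated Assets implementation or API.

```python
# A minimal sketch of the parse -> filter -> score -> weight flow.
# Step 1 (intent parsing) is assumed to have already produced the
# structured_filters and score_custom_criteria arguments below.
from typing import Callable, Optional

def build_index(
    companies: list[dict],
    structured_filters: list[Callable[[dict], bool]],          # hard rules resolved against indexed data
    score_custom_criteria: Optional[Callable[[dict], float]],  # swarm evaluation, returns 0-100
    relevance_cutoff: float = 50.0,                            # cutoff assumed here for illustration
) -> dict[str, float]:
    # Step 2a: structured filters narrow the candidate list.
    candidates = [c for c in companies if all(f(c) for f in structured_filters)]

    # Step 2b: agents score the remaining candidates; with no custom criteria, every score is 100.
    scores = {
        c["ticker"]: score_custom_criteria(c) if score_custom_criteria else 100.0
        for c in candidates
    }

    # Step 3: keep candidates above the relevance cutoff and weight by relevance.
    kept = {t: s for t, s in scores.items() if s > relevance_cutoff}
    total = sum(kept.values())
    return {t: round(s / total, 3) for t, s in kept.items()}

# Toy usage with made-up data:
universe = [
    {"ticker": "AAA", "fcf": 12.0, "ai_exposure": 80},
    {"ticker": "BBB", "fcf": -3.0, "ai_exposure": 95},  # fails the cash-flow filter
    {"ticker": "CCC", "fcf": 5.0,  "ai_exposure": 40},  # passes the filter, falls below the cutoff
]
print(build_index(
    universe,
    structured_filters=[lambda c: c["fcf"] > 0],
    score_custom_criteria=lambda c: c["ai_exposure"],
))  # {'AAA': 1.0}
```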

A complete walkthrough: From prompt to evaluation to index

Here is an example of a full prompt, the system’s interpretation, and the resulting index. This mirrors how the swarm architecture works behind the scenes.

User prompt:
“Take the S&P 500 and remove low cash flow companies. Include companies with exposure to AI infrastructure, semiconductor supply chains, and data-center operations.”

Intent extracted:
– Base universe: S&P 500
– Exclude: Companies with weak or negative free cash flow -> Structured filter
– Include: AI infrastructure, semiconductor supply chains, data centers -> Custom criteria

Evaluation phase:
– 500 companies from the base universe are queued for evaluation.
– 143 companies have positive free cash flow and move on to swarm evaluation.
– 58 of those companies receive relevance scores over 50 for “Exposure to AI infrastructure, semiconductor supply chains, or data centers” from their agentic evaluation.
– These remaining candidates are then weighted by relevance to the custom criteria.

Final index:
– 58 holdings
– Top weights: MSFT, KLAC, SMCI, ANET, CDW
– Reasoning provided for each holding (excerpt below)

Example holding rationale (MSFT):
“Microsoft is a leading provider of AI infrastructure through its Azure cloud platform, operates extensive global data centers, and is deeply involved in the AI ecosystem, making it an obvious match for significant exposure to AI infrastructure and data-center operations.”

Please note that LLMs are stochastic, meaning the system may produce slightly different results each time, especially when a company sits on the edge of your criteria. Behind the scenes, we mitigate this by running multiple evaluation passes and averaging the outcomes.
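
For intuition, here is a tiny worked example of that mitigation: several evaluation passes averaged into one relevance score per holding, then normalized into weights. The tickers, scores, and three-pass setup are made-up numbers for illustration, not actual output from the system.

```python
# Illustrative only: averaging repeated agent scores, then weighting by relevance.
passes = {
    "MSFT": [92, 88, 90],   # relevance scores from three independent evaluation passes
    "KLAC": [80, 84, 82],
    "CDW":  [55, 61, 58],
}

# Average across passes to dampen run-to-run variation.
avg = {ticker: sum(scores) / len(scores) for ticker, scores in passes.items()}

# Weight each surviving holding by its share of total relevance.
total = sum(avg.values())
weights = {ticker: round(score / total, 3) for ticker, score in avg.items()}

print(avg)      # {'MSFT': 90.0, 'KLAC': 82.0, 'CDW': 58.0}
print(weights)  # {'MSFT': 0.391, 'KLAC': 0.357, 'CDW': 0.252}
```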

Anatomy of a great prompt

In our early observations, we’ve found that many high-quality prompts share three components:

1. Universe
The starting point: S&P 500, Nasdaq-100, small-cap tech, etc.

2. Constraints
Hard rules based on financials:
– Revenue growth
– Leverage
– Profitability
– Margin stability
– Cash flow traits

3. Signals
Nuanced traits you want the agents to screen for:
– AI infrastructure exposure
– Reactivity to news events
– High stock-based compensation
– Founder-led

Example:

“Take the S&P 500. Exclude high-debt companies. Include firms with solid margin and free cash flow.”
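
One way to picture the example above is as the structured object the intent parser might produce from it. The field names below are an illustrative mental model, not the system’s internal schema.

```python
# Hypothetical parse of the example prompt above (illustrative field names only).
parsed_prompt = {
    "universe": "S&P 500",
    "constraints": [                 # hard rules matched to indexed data
        "exclude high debt-to-equity",
        "solid operating margin",
        "positive free cash flow",
    ],
    "signals": [],                   # no custom criteria, so every holding scores 100
}
```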

Indexed Data vs Custom Criteria

We are constantly working to introduce more advanced structured data to the system to improve consistency and breadth of use cases. When building a great prompt, it often helps to know which data is directly available in the system, and which data points the system may need to aggregate via agentic research.

Directly Indexed Data:
Market cap, P/E ratio, Beta, Debt/Equity ratio, Price (Intraday), Previous close, Price change, Revenue growth, Earnings growth, Gross profit, Gross margin, EBITDA, EBITDA margin, Revenue, Dividend yield, Dividend rate, Average daily volume, CapEx, Free cash flow, Operating margin, R&D as percentage of sales, Return on common equity, Return on average total equity, Average return on invested capital, EPS

What does this mean?
When your prompt filters on any of these metrics, our system can reference first-party, proprietary financial data sources to provide 100% accuracy. If your prompt includes data or nuanced criteria not in this list, the system will rely on agentic research to provide an answer, and the quality of the results can vary depending on the availability of the data in question. We strive to make our agentic evaluation as consistent, timely, and correct as possible, but, as with all LLMs, results are never 100% consistent or correct.
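
As a rough illustration of that split, the sketch below routes a single criterion to either the indexed-data path or the agentic-research path. The INDEXED_METRICS set and the route_criterion function are assumptions made for this example, not the production logic.

```python
# Illustrative routing only: indexed metrics use first-party data; anything else needs agentic research.
INDEXED_METRICS = {
    "market cap", "p/e ratio", "beta", "debt/equity", "revenue growth", "earnings growth",
    "free cash flow", "operating margin", "ebitda", "dividend yield", "capex", "eps",
}

def route_criterion(criterion: str) -> str:
    """Return which evaluation path a single criterion would likely take."""
    if any(metric in criterion.lower() for metric in INDEXED_METRICS):
        return "structured filter (indexed, first-party data)"
    return "custom criteria (agentic research, quality varies with data availability)"

print(route_criterion("Free cash flow above zero"))      # structured filter (indexed, first-party data)
print(route_criterion("Exposure to AI infrastructure"))  # custom criteria (agentic research, ...)
```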

Prompt library: Ready-to-use structures

Here are three examples of complete prompts you can copy, paste, and adapt.

Quality + Growth blend

“Take the S&P 100. Filter for companies with high revenue growth, positive free cash flow, and ROIC above 10%. Exclude firms with poor margins. Screen for companies with consistent positive EPS revisions.”

Why it works: It balances growth with discipline, using ROIC and FCF to avoid hype names.

Small-cap value

“US small-cap companies with low P/E, stable cash flow, and improving operating margins. Remove companies with net debt > 3x EBITDA.”

Why it works: It identifies undervalued operators with improving fundamentals.

Rate-cut scenario

“Companies historically sensitive to falling interest rates: long-duration cash flows, utilities, REITs, select tech. Exclude firms with volatile earnings.”

Why it works: It captures a macro theme without blindly overloading on rate-linked sectors.

Common mistakes and how to avoid them

Asking for predictions

“Give me the next Tesla.”

Why it fails: The system can only evaluate evidence, not predict outcomes.

Using vibes instead of variables

“Find the most innovative companies.”

Why it fails: “Innovation” is too vague. You need to provide measurable proxies:
“Find companies with R&D intensity above 15% and accelerating gross margins.”

Over-constraining the problem

“AI companies under $10B market cap, low volatility, high ROE, high growth, low debt, high dividend yields, and founder-led.”

Why it fails: An impossible constraint set will lead to an empty universe.

Advanced prompting techniques

For users who want to build sophisticated, multi-factor indices, Generated Assets supports more advanced prompt structures.

Multi-objective optimization

“Screen for companies with high ROIC, and optionally include those with strong FCF yield.”

Composite factor definitions

“Define quality as a combination of ROE, gross margin stability, and operating leverage.”

Scenario-based prompts

“Companies historically resilient during disinflationary periods.”

Hard caps and diversification constraints

“Cap any one sector at 25%.”

Refinement prompts that reshape the thesis

“Shift this from pure growth to GARP by enforcing a valuation cap of EV/EBIT < 20.”  
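
For reference, EV/EBIT is enterprise value divided by operating income (EBIT). The figures below are hypothetical and only show how such a cap would apply to a single name.

```python
# Hypothetical numbers: a name trading at 25x EBIT would fail the EV/EBIT < 20 cap.
enterprise_value = 50_000_000_000    # $50B enterprise value
ebit = 2_000_000_000                 # $2B operating income (EBIT)
ev_to_ebit = enterprise_value / ebit
print(ev_to_ebit, ev_to_ebit < 20)   # 25.0 False -> excluded from the GARP version
```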

How the system explains its decisions

Unlike black-box AI tools, Generated Assets provides transparent reasoning for each holding.

For every company, the system surfaces:

  • Why it was included
  • Which constraints it satisfied
  • Which signals it aligned with
  • How conflicting factors were resolved
  • Evidence from fundamentals, historical behavior, and sector linkages

Example

AMD was included due to strong multi-year gross margin expansion, positive sensitivity to AI compute demand, stable FCF growth, and low leverage relative to peers. Three semiconductor peers were excluded due to declining EPS revisions and negative operating leverage.

Refining your index: The iterative loop

Once you’ve built an index, the real magic happens in the refinement loop. You can tighten constraints, strengthen signals, or reshape the universe entirely with follow-up prompts:

  • “Drop companies with high EPS.”
  • “Narrow results to companies with high margins.”
  • “Remove megacaps to reduce concentration.”

With each iteration, you’re guiding the swarm—like directing a research team toward a sharper interpretation of your thesis.

Final Thoughts

Generated Assets translates your investment thesis into a structured, evidence-driven index. The more clearly you can articulate your intent, the more precisely the system can execute it.

As you experiment with Generated Assets, remember that the system is built to respond to clarity. The more your prompt reflects a coherent investment thesis, the more precisely the swarm can translate it into a structured index.

If you can describe your idea, you can invest in it.

The content above is for illustrative and informational purposes only. It should not be construed as investment advice or a recommendation of any particular security or strategy.

Generated Assets (“GenA”) is an AI-powered interactive analysis tool that allows you to screen for securities based on objective criteria entered through a natural language interface. Output from GenA is generated at your direction and is for informational purposes only. Output should not be considered individualized investment advice or recommendations. Public Advisors does not guarantee the accuracy, completeness, relevance, or timeliness of such output and will not be responsible for any losses that may result from your reliance on such information. See additional disclosures.

You are solely responsible for deciding whether to invest in the GenA portfolio you have constructed. Before investing, please carefully consider whether it is suitable for you based on your investment objectives, risk tolerance, and other individual factors. If you elect to invest, then investment advisory services for your account will be provided by Public Advisors LLC (“Public Advisors”), an SEC-registered investment adviser, and brokerage services will be provided by Open to the Public Investing, Inc. (“Public Investing”), member FINRA / SIPC. Public Advisors and Public Investing are affiliates, and both charge fees for their respective services. For more details, see Public Advisors’ Firm Brochure, Form CRS, and Fee Schedule.