Finding Alpha with Perplexity: How Finance Teams Use Perplexity Spaces, Labs, and Comet

I’m Ray Yang — I lead enterprise product strategy for financial services at Perplexity — and in this piece I’m going to walk you through how I and finance teams across institutions use Perplexity’s toolbox to find alpha, save analyst-hours, and improve decision-making. I recently laid out these workflows on a Perplexity demo and Q&A, and here I’ll do the same in article form: practical, step-by-step, and candid about what works, what’s coming, and how to make this technology part of your everyday finance workflow.

This article covers the full stack we talked about — Comet (our browser with embedded AI), Perplexity’s query modes (Simple Search, Research / Deep Research, and Labs), Spaces (collaborative, repeatable workflows), and enterprise data connectors like FactSet and Crunchbase. I’ll describe the demos I ran, explain the rationale behind features, and share tips I’ve learned from customers in both public and private markets, including hedge funds, investment banks, and PE firms.

Read it like a news piece: I’ll open with the big picture, drill into features and use cases, share live-demo style examples (including pulling SEC tables into Excel, running regressions, and identifying M&A targets), and finish strong with best practices, security notes, and a thorough FAQ.

What I covered: the big picture and why it matters 🔎

Here’s the short version I gave in the demo: financial teams are drowning in data and stretched on time. Perplexity’s job is to be the fastest path from question to defensible answer. That means three things for finance teams:

  • Deliver reliable, cited answers fast — so analysts spend their time interpreting results, not hunting documents.
  • Automate time-consuming data extraction tasks (SEC tables, historical line items, comparables, etc.) so work that used to take hours takes minutes.
  • Provide collaborative, repeatable workflows so entire teams can iterate on the same templates and share outputs easily.

Those translate into practical wins: quicker market thesis validation, faster diligence and modeling, cleaner M&A target generation, and more informed portfolio-level decisions.

How Perplexity differs from other AI tools 🧭

People always ask how we differ from model providers and other tool vendors. The simplest answer is that Perplexity is an orchestrator and search-first product, not just an LLM vendor. We integrate with multiple frontier LLMs (Claude, OpenAI models, and others), and we give users options: pick a model, or toggle the “Best” classifier and let Perplexity choose which model fits the query.

But the bigger differentiator is our focus on grounded search and real-time sources. We built Perplexity starting from the search problem: indexing the web, SEC filings, and premium datasets, and returning answers that contain citations back to original sources. For finance teams that care about auditability and source-tracing, that grounding matters.

“Perplexity first and foremost started as a search company. We’ve done a lot of work to make sure that search is fast and accurate by grounding the results in citations...”

That’s why our UI shows sources inline, exposes a “check sources” flow, and provides an assets tab where backups (CSV, Python snippets, charts) live alongside the narrative. Those pieces reduce the time spent validating where numbers came from — which, in finance, is huge.

My demo flow: three search modes explained 🧰

In the demo I showcased the three main query modes people use in finance workflows. Each mode is tuned to a different problem and cost/latency tradeoff.

  • Simple Search — use this for short, direct questions when you want a quick citation-backed answer. Think: “What was Apple’s revenue last quarter?” or “Who’s the CEO of Chime?” Fastest latency, lowest cost.
  • Research / Deep Research — this is the long-form, thoughtful report mode. It allocates more compute, runs a planning chain-of-thought, gathers many sources, and returns a structured report with charts and visuals. Expect a couple minutes for a full report — but that report often replaces a half-day of manual work.
  • Labs — use Labs when you want non-text outputs: an Excel you can drop into a model, a CSV, or visual assets like charts and tables designed for slides.

Each of those modes plugs into the same index and model-selection infrastructure. You can also choose specific model vendors per query or let Perplexity orchestrate “Best.” That gives flexible control for teams that care about reproducibility or prefer a particular model’s style for certain tasks.

Sources and connectors: where Perplexity looks for answers 🗂️

One of the recurring demo questions was about data connectors and how often data is updated. Finance customers need authoritative sources (SEC Edgar filings, premium data vendors) and timely updates.

Quick rundown:

  • Web Index — major news outlets, blogs, regulatory filings surfaced and cited directly.
  • SEC / Edgar integration — toggle on the finance source and Perplexity will point queries at Edgar, extract tables, and cite filings (10-Ks, 10-Qs, 8-Ks).
  • Premium connectors (enterprise-only) — we’ve built direct connectors with FactSet and Crunchbase for enterprise customers. For FactSet, we currently expose M&A precedent transaction data and live transcripts, and are extending to fundamentals and estimates. Crunchbase gives private company info when you sign in via Perplexity.

Important practical note: the premium connectors are only available to enterprise customers and require sign-ins. We’re constantly expanding these integrations, and we prioritize vendor partnerships that matter for finance workflows.

Deep Research demo: producing a structured report on government shutdowns 🏛️

For the live demo I ran a deep research query on a topical theme: previous U.S. government shutdowns — how long they lasted, what the economic impact was, and possible effects for the current shutdown. I pre-ran the query to save time, but I also initiated a live run to show the experience.

What you should know about Deep Research outputs:

  • They’re long and structured: executive summary, timeline, economic impact, references, and follow-ups.
  • They include charts and visual breakdowns.
  • Every factual assertion has inline citations that you can hover over and dig into; you can highlight text and ask for “check sources” to pull up the underlying docs.

Why this matters: a report that would take an analyst a day or two (collecting articles, pulling timelines, assessing economic impact) now gets synthesized in minutes, with a traceable bibliography. That lets analysts focus on interpretation: sensitivity analysis, scenario-building, and client-ready slides.

“Our customers in financial services — whether that’s Bridgewater on the public market side or Carlyle on the private market side — are using Perplexity to generate these reports that would take an analyst maybe a day or two to put together.”

Hands-on: assets, steps, and verification ⚙️

Every research output in Perplexity gives you a few consistent interfaces I call the minimum reproducibility kit:

  • Main Output — the narrative report you can export to PDF with inline citations.
  • Assets Tab — the backups: CSVs, Python snippets, charts. If you asked for a table, you can download an Excel-ready CSV from here.
  • Steps View — a trace of the planning chain-of-thought the system used: the sources visited, intermediate sub-queries, and the reasoning routes. It’s helpful if you want to refine or critique the report’s pathway.

Practically, analysts use Steps to do two things: 1) drill into the thinking to identify where things could have gone wrong, and 2) take inspiration for further prompts (e.g., “go deeper on point 3” or “exclude press releases older than 2 years”). Steps make the process auditable and repeatable.

Labs: extract SEC tables to Excel and build charts automatically 📊

Labs is where analysts tend to see the quickest payoff. The workflow I demoed was a classic analyst chore: pulling CapEx (or any line item) across the last several 10-Qs / 10-Ks and putting it into an Excel model. That used to take an analyst a couple of hours. With Labs it’s minutes.

Here’s the Labs workflow I showed in the demo:

  1. Toggle the finance source so Perplexity looks at Edgar / SEC filings.
  2. Ask a Labs query: “Pull CapEx for Facebook (Meta) across the last 8 quarters into an Excel.”
  3. Perplexity identifies the right tables across filings, extracts the numeric data, and surfaces a visual. In the Assets tab you can download a CSV or Excel-ready file.
  4. You get citations for each cell: which filing and which page the number came from.

That last bit — cell-level citations — is the game changer. You can send the spreadsheet to a teammate and they can verify specific numbers quickly without redoing the extraction. That improves trust in delegated work and reduces back-and-forths during deal prep or model reviews.
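
Once the export is downloaded, getting it into a model is usually a few lines of pandas. Here is a minimal sketch, assuming a hypothetical file name and a long-format layout with line_item, quarter, value, and source columns; the real export will vary with what you asked Labs for:

```python
import pandas as pd

# Hypothetical file name and column layout (line_item, quarter, value, source);
# the actual Labs export depends on what you asked for and what you
# downloaded from the Assets tab.
df = pd.read_csv("meta_capex_last_8_quarters.csv")

# Reshape long-format rows into a model-ready table: one row per line
# item, one column per quarter.
model_table = df.pivot_table(index="line_item", columns="quarter", values="value")

# Keep the citation column alongside, so a reviewer can trace each
# number back to the filing it came from.
citations = df[["line_item", "quarter", "source"]]

model_table.to_excel("capex_model_input.xlsx")
print(model_table)
print(citations.head())
```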

Note: we demoed a soon-to-be-released export feature that will let you generate full PowerPoints and Excel files directly; power users will be able to export slides that include the exact charts and tables the Labs query produced.

Spaces: collaborative, repeatable workflows for teams 👥

Spaces are my favorite productivity hack for teams. I call them multiplayer notebooks. You create a Space for a project (M&A diligence, industry coverage, account planning), define persistent instructions for the Space (persona, output structure, tone), then let team members create shorter prompts inside that Space that inherit the instructions.

Why Spaces matter:

  • Consistency: put an output template in the Space so every report follows the same sections (executive summary, strategic rationale, comps, risk factors, etc.).
  • Collaborative memory: each thread in a Space is shared and editable; teammates can iterate and ask follow-ups on the same research.
  • Reusability: complex multi-step prompts or templates get saved so you don’t repeat the same prompt engineering every time.

Example I ran: an investment banking Space with the custom instruction “You are an experienced investment banking associate. Help produce an M&A deck for a senior banker; include an executive summary, strategic rationale, market dynamics, target list, valuation considerations, and key risks.” Then, inside the Space I typed: “Give me M&A targets for Chime.” The result was a structured report with targets, rationale, and charts — and because it was in a Space, I could save or share the thread with the team for further iteration.

Pro tip from our product team: use the Steps view in a Space to see how the assistant is thinking, and refine the Space-level instructions accordingly. If the assistant is taking some unnecessary step, you can edit the Space template and reduce that noise across all future threads.

Comet: the browser that brings the assistant to your tabs 🧭

Comet is our browser with Perplexity’s search integrated right into the Omni bar and an always-available side panel assistant. For analysts who live in browsers — reading 10-Qs, reviewing earnings call transcripts, or scanning research — the sidebar assistant is the single most friction-reducing feature.

What Comet lets you do:

  • Invoke the assistant with a keystroke (I use Option-A) while reading any tab.
  • Highlight text on a webpage or within a PDF and ask the assistant to summarize, extract tables, or explain a passage.
  • Reference other open tabs in the same command (e.g., “Consolidate numbers from these four 10-Q tabs into a table”).

A demo I ran: I opened three 10-Qs in three tabs, asked the assistant to consolidate a revenue line into a table, and then asked it to export. The assistant used the open-tab context to extract the right numbers and produce a consolidated table. If it didn’t format the table perfectly, you can follow up inside the same thread. That “daisy chaining” — the assistant taking a sequence of tasks and executing them — is at the heart of how Comet changes an analyst’s day.

Agentic actions and “take over my screen” 🕹️

Beyond reading and summarizing, Comet can execute agentic actions when you’re signed into the browser: draft and send emails, check your calendar, and schedule meetings via the Assistant. We’ve built integration with Gmail and Outlook so the assistant can operate like a human assistant would — draft emails, propose calendar times, and even coordinate reschedules.

Key constraints to keep in mind:

  • Agentic actions (sending emails, rescheduling) require you to be signed into Comet; the browser provides the connected context.
  • If you’re not using the browser, you can still connect email/calendar to Perplexity to read context, but Perplexity won’t be able to execute agentic actions on your behalf without the browser.

For complex screen-driven tasks, a helpful instruction is “take over my screen,” which tells the assistant to behave like a human would: read content, extract structured tables, and put them in a new destination. Users have found this especially handy when websites present data in formats that are hard to scrape programmatically.

Real-world use cases I’ve seen in finance teams 💼

Across customers, I see five recurring workflows where Perplexity adds immediate value:

  • Market thesis validation — quickly synthesize trends, gather sources, and produce scenario analyses that inform capital allocation and portfolio strategy.
  • Public company evaluation — watchlists, consensus estimates, and screener-like workflows to build shortlists and flag outliers.
  • M&A target identification — generate target lists with strategic rationale and synergies analysis; export targets into decks or models.
  • Diligence and financial statement extraction — pull CAPEX, SG&A, revenue, or other line items across multiple filings into a single spreadsheet with cell-level sourcing.
  • Analytic prototyping — run regressions, correlation analyses, or sensitivity testing in Labs and get downloadable charts/data for presentations.

To make those workflows real, teams combine Spaces (for repeatability), Labs (for extraction and charts), and Comet (for agentic, tab-aware actions). That’s the stack I recommend: Comet for context and execution, Spaces for orchestration, and Labs for data export.

Regression example: correlation analysis demo 📈

During the session, someone asked about regression analysis — specifically, the correlation between Facebook’s CapEx and its revenue growth over a multi-year window. I suggested using a Labs query (so we could get a chart) and toggled the finance source for SEC filings. The assistant pulled CapEx and revenue numbers from filings, computed the regression, and returned an R-squared with a plotted regression line. The results showed a weak/negative correlation over the period — a nice reality check against the narrative that more CapEx always means faster top-line growth.

Important notes about regression workflows:

  • Make your query precise: specify the assets (tickers), date range, frequency (weekly, monthly, quarterly), and the exact line items (GAAP revenue, CapEx, operating cash flow). That reduces ambiguity and helps Labs find the correct table mappings.
  • Use the Assets tab to download the underlying CSV if you want to run the regression locally or in a notebook (see the sketch after this list). Labs gives you both a chart and the raw data.
  • Ask for diagnostics: residual plots, p-values, autocorrelation checks — the assistant can produce standard regression diagnostics if you request them.
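
If you want to reproduce the regression yourself from the downloaded data, a minimal sketch with pandas and statsmodels looks like this. The file name and column names (capex, revenue_growth) are assumptions; map them to whatever the Labs export actually contains:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical export from the Labs run: one row per quarter with
# CapEx and the revenue growth rate computed for that quarter.
df = pd.read_csv("meta_capex_vs_revenue_growth.csv")

# Simple OLS of revenue growth on CapEx, with an intercept.
X = sm.add_constant(df["capex"])
model = sm.OLS(df["revenue_growth"], X).fit()

print(model.summary())            # coefficients, p-values, R-squared
print("R-squared:", model.rsquared)

# Quick residual check: a clear pattern here suggests the linear
# spec (or the underlying data mapping) needs another look.
print(model.resid.describe())
```

Running it locally also makes it easy to try lags or log transforms if the simple linear specification doesn’t fit.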

How customers actually organize work: a few real templates 🗂️

Here are three templates I’ve seen customers use in Spaces. They’re short, practical, and you can adapt them instantly.

1) M&A Pre-Screen Space (template)

  • Space-level instruction: “You are an experienced investment banking associate. Produce a 4-section M&A pre-screen: Executive Summary, Strategic Rationale, Target Shortlist (5-10 companies), and Quick Valuation Considerations.”
  • Typical prompt: “Screen for acquisition targets for [Acquirer], with emphasis on moat, customer overlap, and regulatory risk.”
  • Outputs: target list, charts showing revenue/EBITDA trends, downloadable CSV of target fundamentals and sources.

2) Earnings Summary Space (template)

  • Space-level instruction: “Summarize each quarterly earnings release into: headline beats/misses, management commentary highlights, FY guidance changes, and items for follow-up.”
  • Typical prompt: “Summarize the MD&A and management prepared remarks for [Ticker] from the latest 10-Q/10-K and earnings transcript.”
  • Outputs: 1-page summary, slide-ready chart, sources for each bullet.

3) Diligence Extract Space (template)

  • Space-level instruction: “Extract financial statement line items across the last N filings, provide cell-level citations, and output a CSV for model ingestion.”
  • Typical prompt: “Extract CapEx, Gross Margin, R&D, and Inventory across last 8 quarters for [Ticker].”
  • Outputs: Excel/CSV, visual trend lines, and the “Steps” trace for auditability.

These templates show how Spaces let teams standardize outputs without burdening every analyst with the same prompt engineering. Once the Space is defined, the prompts inside can be short and fast.

Security, data privacy, and enterprise guarantees 🔐

Security and data governance are top of mind for finance customers. Two points I emphasized in the demo are essential:

  • No training of customer data on third-party LLMs — We’ve negotiated contracts so enterprise customer data is not used to train external LLM providers. If your team is worried about corporate confidentiality, this is a key guarantee.
  • SOC 2 Type II and enterprise controls — the assistant, and many Perplexity features built for enterprise, run on infrastructure that supports SOC 2 Type II compliance. We continue to expand enterprise security features such as SSO, audit logs, and connector-level access controls.

Those safeguards let teams use agentic features (email drafting, scheduling) without worrying that private data will leak into a model training corpus. We also give customers the ability to control which sources are searchable inside an org, which helps with internal knowledge base use cases.

Pricing tiers and Max benefits 💳

People asked about what Max users get that regular users don’t. High level:

  • Max users get earlier access to new product releases.
  • Max includes unlimited Labs queries and access to enhanced LLMs depending on availability.
  • We’ve shipped a dedicated email assistant and other early features to Max users first.

Enterprises get connector access (FactSet, Crunchbase), admin controls, and the ability to deploy Spaces and sharing features across an org. If you’re evaluating for a team, talk to our sales team about connector availability and audit requirements — enterprise plans have orchestration and governance that individual plans don’t.

Best practices: prompts, daisy-chaining, and when to ask follow-ups 🧭

Here’s my playbook for getting reliable outputs out of Perplexity in a finance context:

  1. Start with the output you want — instead of vague prompts, say “Give me a CSV with CapEx and revenue for the last 8 quarters and cite each row.”
  2. Use Space-level instructions for team projects so short prompts inside the Space inherit the formatting and persona expectations.
  3. Daisy-chain — break complex tasks into sub-steps and let the assistant execute them sequentially (extract → clean → analyze → export). Comet helps with this by maintaining context across tabs.
  4. Use Steps to audit — if a result looks off, check Steps to see which sources or sub-queries were used and refine from there.
  5. Ask for diagnostics in regressions or models: residuals, heteroskedasticity checks, and sensitivity tests help you trust the output.
  6. Be explicit with sources — toggle the finance index for SEC filings, or specify FactSet/Crunchbase if your standards require the data to come from those vendors.

Those practices reduce ambiguity and give you reproducible outputs you can defend to partners, clients, or auditors.

On hallucinations and validation: how to be safe 🛡️

Everyone worries about hallucinations. The single biggest mitigation is to treat Perplexity like a super-talented research analyst with instant recall, not as an oracle. Here’s how to protect yourself:

  • Always check inline citations for material numbers or legal claims.
  • Use the “Check Sources” flow when a section has multiple assertions; it surfaces the exact sources for each sentence.
  • If you need absolute certainty for a number used in a model or client deliverable, download the asset, open the original filing, and validate the cell-level source.

Perplexity’s design — explicit citations, assets, and Steps — is built around this verification process rather than black-box outputs. Use those features as part of the workflow rather than bypassing them.
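
To make that last validation step routine rather than ad hoc, some teams wrap a quick sanity check around the downloaded asset before any number reaches a client deliverable. This is a minimal sketch, assuming a hypothetical export with value and source columns; it only flags rows to verify by hand, it doesn’t replace opening the filing:

```python
import pandas as pd

# Hypothetical Labs export with one row per extracted figure and a
# "source" column pointing at the filing it was pulled from.
df = pd.read_csv("diligence_extract.csv")

# Flag rows with no citation or a non-numeric value: these are the
# rows to open and verify against the original filing by hand.
missing_source = df[df["source"].fillna("").astype(str).str.strip() == ""]
bad_values = df[pd.to_numeric(df["value"], errors="coerce").isna()]

print(f"{len(missing_source)} rows missing a citation")
print(f"{len(bad_values)} rows with non-numeric values")

# List the distinct filings cited, so a reviewer knows exactly which
# documents to pull up for spot checks.
print(df["source"].dropna().unique())
```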

Practical example: building an M&A pitch with Spaces and Labs 🧾

I want to walk through a practical example end-to-end because seeing the steps in sequence helps teams internalize the flow. Imagine you’re on an M&A team pitching a bank to a potential buyer looking for growth in fintech.

Step 1: Create an M&A Space

  • Space-level instruction: “You are an experienced M&A associate. Produce a 6-slide pitch: 1) Executive Summary, 2) Strategic Rationale, 3) Market Landscape, 4) Target Shortlist, 5) Valuation & Synergies, 6) Key Risks. Include a 1-paragraph bullet for each slide and a source list.”

Step 2: Short prompt inside Space

  • Prompt: “Draft the pitch for acquirer [Chime]. Provide 5 potential targets with brief rationale.”

Step 3: Generate and iterate

  • The Space produces a report with targets and charts. You open Steps to see how it selected targets (industry tags, revenue, customer overlaps), and you refine: “Exclude companies with less than $10m ARR and add a column for primary revenue model.”

Step 4: Use Labs to extract financials

  • Request a Labs query: “For the five chosen targets, extract latest revenue, EBITDA, and cap table info into CSV.”

Step 5: Build slides and deliver

  • Download the CSV and charts and plug them into your deck template (a minimal notebook sketch for sanity-checking the numbers follows below). If you’re using the soon-to-ship export to PPT, generate a slide deck directly that includes the charts generated by Labs.
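
If you want to pressure-test the Valuation & Synergies slide before it goes into the deck, the Step 4 CSV drops straight into a notebook. A minimal sketch, assuming hypothetical column names (target, revenue, ebitda) and purely illustrative revenue multiples:

```python
import pandas as pd

# Hypothetical export from the Step 4 Labs query: one row per target
# with the latest revenue and EBITDA figures (in $m).
targets = pd.read_csv("chime_targets_financials.csv")

# Rough screening metrics for the Valuation & Synergies slide.
targets["ebitda_margin"] = targets["ebitda"] / targets["revenue"]

# Apply an illustrative revenue multiple range to frame the discussion;
# real comps would come from precedent transactions (e.g., the FactSet
# connector) rather than these placeholder multiples.
for multiple in (3, 5, 8):
    targets[f"ev_at_{multiple}x_rev"] = targets["revenue"] * multiple

print(targets.sort_values("revenue", ascending=False))
```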

Outcome: what used to be a half-day to compile and format becomes a 30–60 minute process with source-backed numbers and an easily shareable Space for the team to iterate on the pitch.

What’s coming and product roadmap highlights 🚀

I shared a few teasers in the session and I’ll be candid here about what’s near-term:

  • PowerPoint and Excel export for Labs assets — generate client-ready slides and spreadsheet models directly from Perplexity outputs.
  • Expanded FactSet connector datasets (fundamentals and estimates) — we already have M&A precedent and transcripts; next up is deeper fundamentals integration.
  • More enterprise admin features — better sharing controls, audit logs, and improved governance over Spaces and connectors.

We also continuously tune our model-selection classifier, so you can either choose a model yourself or rely on Perplexity’s “Best” selection. Expect incremental improvements as LLMs evolve; Perplexity’s value is in making the best model decisions for each query and exposing the reasoning where appropriate.

Common objections I hear — and how I respond 🗣️

Objection 1: “AI outputs feel untrustworthy.”

My reply: treat Perplexity as a high-skill research assistant that produces traceable outputs. Use the Steps and citations to validate. Our product isn’t designed to be a final gatekeeper — it’s a productivity multiplier that gives you a defensible first draft, often with all the citations you need to sign off quickly.

Objection 2: “We’re worried about data privacy and training.”

My reply: enterprise contracts ensure customer data isn’t used to train external LLMs, and Perplexity runs on SOC 2 Type II-compliant infrastructure. We also support enterprise-only connectors and admin controls so teams can restrict what appears in shared Spaces.

Objection 3: “This will replace junior analysts.”

My reply: the best outcome I’ve seen is augmentation, not replacement. Analysts free up time from mundane extraction tasks and spend more time on higher-value analysis: modeling scenarios, structuring deals, and client interaction. The tools increase output quality and speed, which benefits teams that want to scale coverage without sacrificing rigor.

Tips for your first week rolling this out to a team 🛠️

  1. Start with a pilot team of 3–5 analysts and one senior to define Space templates (M&A, earnings, due diligence).
  2. Define a simple governance policy: which connectors are allowed, who manages Spaces, and how to tag shared outputs.
  3. Run paired sessions: have an analyst and a senior run a workflow together in Comet, then refine the Space-level instructions.
  4. Collect three repeatable templates and standardize those across the team before wide rollout.
  5. Train the team on the Steps view and assets tab — make auditing outputs part of the checklist for client deliverables.

Quotes from the session that matter 🔖

“You can either add that to your follow up to say, hey, I want to zero in on this part of the query, or you can check for more sources.”

“Sometimes it’ll confirm with you to say, are you sure you want me to do this? Does this copy look good? You can give it instructions of, hey, don’t ask for my approval, go ahead.”

Both quotes capture the combined ethos: give the assistant guardrails and always validate outputs. Perplexity gives you the tools to do both — automation when you want it, checkpoints when you need them.

FAQ — questions I heard during the session (and my answers) ❓

Q: Can I choose which LLM to use for a query?

A: Yes. Perplexity lets you select a model for any query, or toggle on “Best” and have Perplexity automatically choose the most appropriate model. Many customers let Perplexity orchestrate the model selection to remove friction.

Q: Are FactSet and Crunchbase data available to all users?

A: No. Those connectors are enterprise features. If you have an enterprise account and appropriate vendor licenses, you can sign in via Perplexity and enable the connectors. We currently expose FactSet M&A precedent and transcripts and Crunchbase private company data for signed-in enterprise users. We’re expanding dataset coverage over time.

Q: How does Perplexity handle SEC filings and extracting tables?

A: Toggle the finance source (Edgar) and Perplexity will search filings and extract table data. Labs can identify specific line items and produce CSV/Excel outputs. All extracted numbers include citations linking back to the exact filing and, where possible, the page or table reference.

Q: How long does Deep Research take?

A: Deep Research queries typically take two to four minutes because they allocate more compute, gather multiple sources, and then plan a structured output. Simple searches return much faster, often in seconds.

Q: What about sending emails or calendar actions?

A: If you use Comet and are signed in, the assistant can execute agentic actions like drafting and sending emails, proposing calendar times, or scheduling meetings — it can act like a human assistant. Without the browser, Perplexity can read calendar and email context when connected, but it won’t execute actions on your behalf.

Q: Can Perplexity be trusted to not train my enterprise data on public LLMs?

A: For enterprise customers, our agreements ensure customer data is not used to train external LLM providers. This is a common enterprise requirement and we’ve negotiated contracts that respect customer confidentiality.

Q: Can the assistant generate PowerPoint decks from research?

A: We showed an early preview in the demo. PowerPoint and Excel export of Labs assets is coming soon to customers, and Max users will see new features earlier. The export will include charts and tables produced in Labs.

Q: How do I do regression analysis and get the underlying data?

A: Use a Labs query to pull the numeric series (specify ticker, date range, frequency, and line items). Labs will compute regressions, provide R-squared and diagnostics, and give you the raw CSV in the assets tab for further analysis.

Q: How do Spaces help with consistency?

A: Spaces let you set persona, output structure, and formatting rules at the space level. All prompts within that Space inherit those instructions, so the outputs are consistent across analysts and over time.

Q: How do I validate numbers for compliance or audit?

A: Download the asset (CSV) and open the cited filings. Perplexity’s inline citations and Steps view show exactly where each number came from, making validation straightforward.

Closing: what I want you to try this week 🏁

If you take nothing else from this piece, try these three experiments this week:

  1. Create a Space for a routine task (earnings summary, M&A pre-screen, or diligence extract) and put in a one-paragraph instruction that defines the output structure.
  2. Run a Labs query extracting a simple table (CapEx, revenue) across the last 4 quarters and download the CSV. Compare the time it takes you versus your old process.
  3. Install Comet and try the side-panel assistant on a 10-Q: highlight an MD&A section and ask for an executive summary; then ask the assistant to draft an email summarizing the key follow-ups and include citations.

If you do those three things, you’ll see exactly why teams are building Spaces and using Comet. You’ll save hours, get better traceability, and free analysts to do more creative and value-add work.

If you’re on a team and want to go deeper, we can iterate on a Space template together. Perplexity gives you the plumbing; how you use it will determine the returns. For finance teams, the returns are real — faster diligence, cleaner models, and more defensible recommendations.

Thanks for reading. If you’ve got specific workflows you want me to translate into a Space template or a Labs query, tell me which ones and I’ll sketch a starting template you can copy.

Additional Resources & Next Steps 📚

  • Try the demo flows I outlined in Comet and the Perplexity app.
  • Start a pilot Space for a recurring team task.
  • Contact your enterprise rep to discuss FactSet/Crunchbase connectors and export features.

