Lyra Nexus documentation

Lyra is Alynd's assistant for working with your career data. When you use Lyra in Nexus, it helps you refine records, draft bullet points, and reason about your work history with structured application data in mind. This article describes how to write prompts that produce dependable results and how to avoid the common failure modes that lead to missing or unusable output.

Overview

Successful use of Lyra in Nexus is less about "magic wording" and more about clarity: a single intent, explicit scope, and concrete facts. The assistant must infer what you want to change or create from plain language. When prompts bundle unrelated tasks, omit scope, or rely on unstated assumptions, the model may produce partial answers, skip structured fields, or return narrative text where you expected extractable items.

Note

Lyra does not automatically know which employer, role, or field you care about unless you state it explicitly or anchor it with an @mention, where the product supports mentions for that record type.

How Lyra fits in Nexus

Nexus holds your career evidence: roles, projects, skills, and related professional data. Lyra can suggest edits, rephrase bullets for impact, align wording to a tone you specify, and help you break down achievements into clearer statements. It does not replace your judgment about what belongs on a CV or what happened in a given role; it accelerates articulation and structuring when your instructions are precise.

Lyra also uses Nexus context to target the right place. An @mentioned work item, the current Nexus page, or the record you recently selected can help it distinguish between creating a new work record and adding a nested project, skill, or responsibility to an existing record.
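
To see the difference targeting makes, here is a minimal sketch contrasting an unanchored ask with one that names its record; the role, employer, and project names below are invented placeholders.

```python
# Two versions of the same ask; every name here is a placeholder.
# The anchored version tells Lyra which record's Projects list to update.
unanchored = "Add a project about the billing migration."  # ambiguous target

anchored = (
    "On my selected 'Senior Engineer' role at Contoso, add a project: "
    "'Billing migration to event-driven architecture', 2023."
)
print(anchored)
```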

Education follows the same principle. Lyra is suitable for spot changes, such as adding one course to year 2 or updating a degree field on a selected education record. For broad extraction from university pages, module rosters, or programme pages, or to populate all academic years at once, use the education URL import review flow.

Core prompt principles

  • State the artifact. Specify whether you want bullet points, a short summary, a list of skills, a rewrite, or a comparison. Ambiguous asks yield ambiguous output.
  • Narrow the scope. Name the employer, job title, time period, or project when relevant. "Improve my last role" is weaker than "Rewrite three bullets for my 'Senior Engineer' role at Contoso (2021–2024), focusing on platform reliability."
  • Use record context when available. Select or @mention the work item you want Lyra to update. For example, asking for a project while a role is selected tells Lyra to prepare an update for that role's Projects list instead of drafting a new job.
  • Provide raw facts when generating new content. Metrics, tools, team size, constraints, and outcomes help Lyra produce credible lines you can verify (see the sketch after this list).
  • Prefer one deliverable per message. See the next section; this is the highest-leverage habit for reducing disappointment.
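
As a rough illustration of these principles working together, the sketch below assembles a single-goal prompt from an artifact, a scope, and verifiable facts; every role, employer, and metric shown is an invented placeholder.

```python
# A minimal sketch of composing one well-scoped prompt from its parts.
# All values are placeholders; substitute your own record and facts.
artifact = "two achievement bullets, max 35 words each"
scope = "'Senior Engineer' role at Contoso (2021-2024)"
facts = [
    "cut p95 latency from 800 ms to 120 ms",
    "led a team of 4 engineers across two quarters",
]

prompt = (
    f"Draft {artifact} for my {scope}. "
    f"Use only these facts: {'; '.join(facts)}. "
    "Do not invent metrics."
)
print(prompt)
```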

One goal per message

Avoid multi-step workflows or multiple unrelated actions in a single prompt. Examples of overloaded requests include: "Add a certification, rewrite my summary, and extract skills from this PDF" or "Update two roles and merge duplicate projects." Those patterns force the model to prioritize arbitrarily, merge incompatible intents, or return only the first part of the ask.

Important

Treat each message as a single instruction with a single completion criterion. When you need several outcomes, send several messages in sequence: first complete and review one task, then issue the next. This mirrors how professional documentation presents procedures: one decision or output per step.
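
One way to picture this sequencing is as an ordered list of single-goal messages, each reviewed before the next is sent; the tasks below are invented examples rather than a prescribed workflow.

```python
# Hypothetical decomposition of one overloaded request into sequential
# single-goal messages; review each result before sending the next.
steps = [
    "Add my 'Cloud Practitioner' certification (2023) to my profile.",
    "Rewrite my summary for a platform-engineering audience, max 60 words.",
    "Extract tool names from the text below as a comma-separated list. No prose.",
]
for step in steps:
    print(step)  # in practice: send, review the result, then continue
```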

Patterns to avoid

  • Multiple unrelated requests in one message (multi-action).
  • Implicit references with no anchor ("that project", "the usual role") when several exist.
  • Asking for structured extraction from very long unstructured dumps without stating field names or limits (see the sketch after this list).
  • Assuming Lyra can access external systems, private URLs, or attachments it cannot read in your session.
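
For the extraction case in particular, a prompt that names its fields and caps its output is much easier to satisfy, as sketched below; the field names and limit are examples only, not a schema Lyra requires.

```python
# A sketch of an extraction prompt with explicit fields and a hard limit,
# avoiding the "long dump, no fields" pattern above. Fields are examples.
fields = ["tool", "skill", "outcome"]
limit = 10
excerpt = "..."  # keep this short: paste one section, not the whole document

prompt = (
    f"From the excerpt below, extract up to {limit} items with the fields "
    f"{', '.join(fields)}. Return one item per line as 'field: value'. No prose."
    f"\n\n{excerpt}"
)
print(prompt)
```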

Risks and edge cases

Certain behaviors increase the chance of empty responses, partial structured output, or content you cannot safely paste into your profile without review.

Warning

Generative models may omit optional fields, silently drop list items when overloaded, or fabricate plausible metrics if you ask for quantification without supplying numbers. Always verify facts against your records before publishing or submitting applications.

Note

Extremely long pasted content may be summarized or truncated in ways that lose detail needed for accurate extraction. Prefer shorter excerpts or split the work across messages by section.
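
If you prepare source text programmatically, a small helper like the sketch below can split a long paste into per-section messages; it assumes, purely for illustration, that your source marks sections with "## " headings.

```python
# A minimal helper that splits long source text into per-section chunks.
# The "## " heading marker is an assumption about your source's formatting.
def split_sections(text: str, marker: str = "## ") -> list[str]:
    sections: list[list[str]] = []
    for line in text.splitlines():
        if line.startswith(marker) or not sections:
            sections.append([])  # start a new chunk at each heading
        sections[-1].append(line)
    return ["\n".join(chunk) for chunk in sections]

# Send each returned section as its own extraction message.
```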

Examples

The following patterns illustrate concise, high-signal prompts versus overloaded prompts that tend to underperform.

Prefer: "Draft two achievement bullets for my 'Lead Developer' role at Fabrikam from these notes: [paste]. Use STAR-style, max 35 words each."
Avoid: "Fix everything in my work history, make it sound senior, and add skills."

Prefer: "Rewrite the bullet below for clarity only. Do not add metrics." [single bullet]
Avoid: "Rewrite all bullets for three jobs and combine duplicates."

Prefer: "Extract tool names from this paragraph as a comma-separated list. No prose."
Avoid: "Read this 4-page text and update Nexus." (no fields, no sequence)

Prefer: "Add COMP1010 First Year Project to year 2 on this education record."
Avoid: "Update all my education from this course website." (use the URL import review flow instead)

When results disappoint

If output is thin, off-scope, or missing structured items, narrow the next prompt: remove secondary asks, name the target record, reduce source text size, and ask for a single format (for example, exactly five bullets or a bullet plus a short rationale). In most cases, disappointment traces back to scope creep or missing anchors rather than a single "wrong word" in the prompt.
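
As a final illustration, a narrowed retry might look like the sketch below; the record name and limits are placeholders to adapt to your own data.

```python
# Hypothetical narrowed retry after a thin or off-scope result:
# one record, one format, no secondary asks. Names are placeholders.
retry = (
    "Focus only on my 'Lead Developer' role at Fabrikam. "
    "Return exactly five bullets, max 30 words each, based on the notes "
    "I pasted earlier. No summary, no extra commentary."
)
print(retry)
```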

This documentation follows conventions common in technical product guides (clear hierarchy, procedural tone, explicit warnings). For a public reference of that style, see Microsoft Learn's Azure Container Instances quickstart.