# Nova Browser Guides

> Agent-readable map for the public Nova Browser documentation.

Generated: 2026-04-28
Language: en
AI usage policy: AI agents and crawlers may read, index, retrieve, cite, summarize, transform, and use this documentation for model training and retrieval-augmented generation where lawful.
TDM reservation: 0

## Core Files

- [Human documentation](https://nova-cognitive.com/doc/en/)
- [Full agent context](https://nova-cognitive.com/doc/en/llms-full.txt)
- [XML sitemap](https://nova-cognitive.com/doc/en/sitemap.xml)
- [TDMRep signal](https://nova-cognitive.com/doc/en/.well-known/tdmrep.json)

## Documentation Pages

- [Installation](https://nova-cognitive.com/doc/en/installation): How to get Nova ready: check requirements, start the app, and set the first browser, agent, and local connection options.
- [Browser Surface](https://nova-cognitive.com/doc/en/browser-shell): The visible browser surface: tabs, address bar, page view, notices, and the agent panel.
- [Embedded Agents](https://nova-cognitive.com/doc/en/embedded-agents): Embedded agents connect provider sidecars with prompts, approvals, logs, and task workspaces.
- [Website Crawler](https://nova-cognitive.com/doc/en/crawler): The crawler inspects websites in the background, collects pages, and makes changes visible.
- [UI Explorer](https://nova-cognitive.com/doc/en/surface-explorer): The UI explorer finds hidden or conditional interface areas such as menus, dialogs, hover states, and flows.
- [Learning](https://nova-cognitive.com/doc/en/knowledge-memory): Learning keeps observations, task outcomes, and reusable website knowledge available for later work.
- [PKS](https://nova-cognitive.com/doc/en/knowledge-memory/pks): The Persistent Knowledge Store is Nova’s curated long-term memory for website patterns and proven actions.
- [Plugins](https://nova-cognitive.com/doc/en/plugins): Plugins are isolated extensions that agents can create or use when the right permissions are granted.
- [Scheduled Tasks](https://nova-cognitive.com/doc/en/scheduled-tasks): Scheduled tasks let Nova run recurring or on-demand work with a defined context, variables, and logs.
- [Glossary](https://nova-cognitive.com/doc/en/glossar): The glossary explains the most important Nova terms and links them to the right documentation areas.
- [AI Use & License](https://nova-cognitive.com/doc/en/ai-usage): This page explains how AI agents and search systems may use the public Nova documentation and which technical signals Nova publishes.

## Full Page Context

### Installation

URL: https://nova-cognitive.com/doc/en/installation
Summary: How to get Nova ready: check requirements, start the app, and set the first browser, agent, and local connection options.

Responsibilities:

- Check Windows 10 or Windows 11, Windows App SDK Runtime 1.8, and Microsoft Edge WebView2 Runtime.
- Start NovaBrowser.exe and handle the first launch deliberately, including a possible SmartScreen notice on beta builds.
- Configure language, browser identity, sandboxes, AI provider accounts, and agent permissions in settings.

Agent-facing actions:

- Prepare agent onboarding (nova.get_onboarding)
- Install agent reference files (nova.install_onboarding)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Browser Surface

URL: https://nova-cognitive.com/doc/en/browser-shell
Summary: The visible browser surface: tabs, address bar, page view, notices, and the agent panel.

Responsibilities:

- Shows the website and the controls users work with.
- Keeps tabs, downloads, settings, proxy notices, and agent status together.
- Makes agent work visible without overloading the surface.

Agent-facing actions:

- List open tabs (nova.tabs)
- Switch to a tab (nova.set_active_tab)
- Read page zoom (nova.webview_get_zoom)
- Read window size (nova.window_get_bounds)
- Check proxy status (nova.proxy_status)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.
### Embedded Agents

URL: https://nova-cognitive.com/doc/en/embedded-agents
Summary: Embedded agents connect provider sidecars with prompts, approvals, logs, and task workspaces.

Responsibilities:

- Starts and coordinates embedded provider sidecars.
- Builds prompts, configuration, approvals, and raw logs around agent work.
- Keeps workspaces and conversation context attached to the right task.

Agent-facing actions:

- Provider Sidecar Launch (provider sidecar launch)
- Approval Cards (approval cards)
- Conversation Archive (conversation archive)
- Agent Workspace Templates (agent workspace templates)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Website Crawler

URL: https://nova-cognitive.com/doc/en/crawler
Summary: The crawler inspects websites in the background, collects pages, and makes changes visible.

Responsibilities:

- Runs background website inspections.
- Collects URLs, pages, and changes for later review.
- Keeps crawling separate from visible user browsing.
Agent-facing actions:

- Inspect a website (nova.crawl_start)
- Crawl Status (nova.crawl_status)
- Crawl Results (nova.crawl_results)
- Crawl Stop (nova.crawl_stop)
- Crawl Links (nova.crawl_links)
- Crawl Diff (nova.crawl_diff)
- Site Urls (nova.site_urls)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### UI Explorer

URL: https://nova-cognitive.com/doc/en/surface-explorer
Summary: The UI explorer finds hidden or conditional interface areas such as menus, dialogs, hover states, and flows.

Responsibilities:

- Discovers hidden or conditional UI states.
- Activates menus, hover areas, dialogs, and route states safely.
- Stores findings so later tasks can use the same interface knowledge.
Agent-facing actions:

- Explore Surface Discover (nova.explore_surface(discover))
- Explore Surface Activate (nova.explore_surface(activate))
- Explore Surface Hover (nova.explore_surface(hover))
- Explore Surface Close (nova.explore_surface(close))

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Learning

URL: https://nova-cognitive.com/doc/en/knowledge-memory
Summary: Learning keeps observations, task outcomes, and reusable website knowledge available for later work.

Responsibilities:

- Captures observations and outcomes from real work.
- Matches recurring tasks to known patterns.
- Promotes useful knowledge into durable memory.

Agent-facing actions:

- Learn Suggest (nova.learn_suggest)
- Learn Generate (nova.learn_generate)
- Learn Promote (nova.learn_promote)
- Pks Get (nova.pks_get)
- Pks Match (nova.pks_match)
- Ok Observe (nova.ok_observe)
- Memory Stats (nova.memory_stats)
- Task Match (nova.task_match)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation.
  Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### PKS

URL: https://nova-cognitive.com/doc/en/knowledge-memory/pks
Summary: The Persistent Knowledge Store is Nova’s curated long-term memory for website patterns and proven actions.

Responsibilities:

- Stores curated website knowledge in a persistent form.
- Keeps domain hints, platform templates, and proven actions available.
- Lets agents retrieve and update durable patterns.

Agent-facing actions:

- Pks Get (nova.pks_get)
- Pks Match (nova.pks_match)
- Pks Upsert (nova.pks_upsert)
- Pks Upsert Hint (nova.pks_upsert_hint)
- Pks Patch (nova.pks_patch)
- Pks Deprecate (nova.pks_deprecate)
- Pks List (nova.pks_list)
- Pks Platform Seed (nova.pks_platform_seed)
- Pks Platform Get (nova.pks_platform_get)
- Pks Platform List (nova.pks_platform_list)
- Telemetry Report (nova.telemetry_report)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful.
  Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Plugins

URL: https://nova-cognitive.com/doc/en/plugins
Summary: Plugins are isolated extensions that agents can create or use when the right permissions are granted.

Responsibilities:

- Loads isolated agent-authored extensions.
- Checks manifests, permissions, quotas, and runtime boundaries.
- Exposes plugin tools only after the right grants exist.

Agent-facing actions:

- Create a plugin (nova.plugin_create)
- Plugin Inspect (nova.plugin_inspect)
- Plugin Request Permission (nova.plugin_request_permission)
- Plugin Grant Active Tab (nova.plugin_grant_active_tab)
- Plugin Test (nova.plugin_test)
- PluginId ToolName (plugin.{pluginId}.{toolName})

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data.
  Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Scheduled Tasks

URL: https://nova-cognitive.com/doc/en/scheduled-tasks
Summary: Scheduled tasks let Nova run recurring or on-demand work with a defined context, variables, and logs.

Responsibilities:

- Creates and lists recurring or on-demand tasks.
- Stores variables, workspaces, secrets, and run logs.
- Lets operators trigger and inspect planned automation.

Agent-facing actions:

- Scheduled Task List (nova.scheduled_task_list)
- Scheduled Task Create (nova.scheduled_task_create)
- Scheduled Task Trigger (nova.scheduled_task_trigger)
- Scheduled Task Runs (nova.scheduled_task_runs)
- Scheduled Task Workspace Read (nova.scheduled_task_workspace_read)
- Scheduled Task Secret Set (nova.scheduled_task_secret_set)

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers.
  tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### Glossary

URL: https://nova-cognitive.com/doc/en/glossar
Summary: The glossary explains the most important Nova terms and links them to the right documentation areas.

Glossary terms:

- Agent: An AI system that translates goals into steps and uses Nova tools to perform them.
- Crawler: A background run that inspects websites, collects pages, and makes changes visible.
- Approval: A conscious user decision before Nova or an agent performs sensitive or state-changing steps.
- llms.txt: An agent-readable text file that bundles important pages, policies, and compact context.
- OK: Observation Knowledge stores observations that can become reliable website knowledge later.
- PKS: The Persistent Knowledge Store is Nova’s curated long-term memory for website patterns and proven actions.
- Plugin: An isolated extension that agents can create or use when the right permissions exist.
- Scheduled Task: A planned task that Nova runs repeatedly or on demand with a defined work context.
- Surface Explorer: An area that makes hidden UI states such as menus, dialogs, tabs, or hover content visible.
- TDM Reservation: A signal for text and data mining. Nova sets it to 0 so that the public documentation stays usable.

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data.
  Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.

### AI Use & License

URL: https://nova-cognitive.com/doc/en/ai-usage
Summary: This page explains how AI agents and search systems may use the public Nova documentation and which technical signals Nova publishes.

Policy sections:

- Allowed Use: AI agents and crawlers may read, index, retrieve, and cite the public documentation. Summaries, translations, semantic search, retrieval-augmented generation, and model training are allowed where lawful. Canonical URLs, page titles, and source references should be preserved when content is cited or used in derived answers.
- Boundaries: This permission applies to the public documentation in this build, not automatically to product code, private repositories, trademarks, or confidential data. Content must not be presented as if derived statements were official guarantees or legal advice. Crawlers should respect normal load limits and avoid login, admin, or unlinked internal areas.
- Technical Signals: robots.txt allows classic search engines and common AI crawlers. tdm-reservation: 0 signals that text and data mining for this public documentation is not reserved. llms.txt and llms-full.txt provide a compact, agent-readable documentation map.
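A crawler can evaluate the tdm-reservation signal described above programmatically. The sketch below is a minimal illustration under stated assumptions: it models a TDMRep-style policy list in which each entry carries a `location` pattern and a `tdm-reservation` value of 0 (not reserved) or 1 (reserved). The sample payload is made up for illustration and is not the actual contents of Nova's .well-known/tdmrep.json.

```python
# Minimal sketch of evaluating a TDMRep-style reservation signal.
# The sample payload below is hypothetical; Nova's real tdmrep.json may
# differ. tdm-reservation: 0 means text and data mining is not reserved.
import fnmatch

SAMPLE_TDMREP = [
    {"location": "/doc/en/*", "tdm-reservation": 0},  # public docs: not reserved
    {"location": "/*", "tdm-reservation": 1},          # fallback: reserved
]

def tdm_reserved(path: str, policies: list[dict]) -> bool:
    """Return True if the first matching policy entry reserves TDM for `path`."""
    for entry in policies:
        if fnmatch.fnmatch(path, entry["location"]):
            return int(entry["tdm-reservation"]) == 1
    return False  # no matching entry: no reservation expressed

print(tdm_reserved("/doc/en/installation", SAMPLE_TDMREP))  # → False
print(tdm_reserved("/private/data", SAMPLE_TDMREP))         # → True
```

First match wins here, so more specific location patterns should precede the broad fallback; a crawler would still combine this signal with robots.txt before mining.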