diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..0bb07da --- /dev/null +++ b/.gitignore @@ -0,0 +1,10 @@ +.venv/ +server/.venv/ +__pycache__/ +*.py[cod] +.pytest_cache/ +.ruff_cache/ +*.egg-info/ +node_modules/ +dist/ +build/ diff --git a/00-overview/mvp-implementation-plan.md b/00-overview/mvp-implementation-plan.md new file mode 100644 index 0000000..56263d8 --- /dev/null +++ b/00-overview/mvp-implementation-plan.md @@ -0,0 +1,948 @@ +--- +owner: gmikcon +status: draft +last_reviewed: 2026-05-09 +review_interval: 30d +confidence: medium +source_of_truth: planning +--- + +# Gnexus Book MVP Implementation Plan + +## Planning Method + +This plan combines two sources: + +- explicit answers from the owner; +- implementation defaults chosen by the agent when a decision is not risky or can be changed later. + +Open questions are grouped as an interview checklist. The default choices in this document should be treated as provisional until they are confirmed or changed. + +## Current Goal + +Build a maintainable documentation system for personal digital and server infrastructure. + +The first version should make it easy for an AI agent to: + +- find existing infrastructure knowledge; +- read canonical documentation; +- update Markdown and YAML safely; +- validate documentation structure; +- create Git commits through a controlled interface; +- identify stale or incomplete documentation. + +## Default Decisions + +### Repository Role + +Decision: keep this repository as the canonical documentation repository. + +Reasoning: + +- Markdown and YAML remain simple and portable. +- Git gives history, review, rollback, and diffability. +- The documentation server can be replaced later without migrating the knowledge base. + +### Documentation Server Location + +Default: keep the server implementation in the same repository during MVP. + +Proposed layout: + +```text +gnexus-book/ + docs content... 
+ server/ + app/ + tests/ + pyproject.toml +``` + +Reasoning: + +- simpler local development; +- easier to keep schemas, API, and docs conventions together; +- no early overhead of coordinating two repositories. + +Possible later change: extract `server/` into a separate repository if it becomes a reusable product. + +### Server Stack + +Default: Python + FastAPI. + +Reasoning: + +- good fit for agent tooling; +- simple file and Git operations; +- easy YAML, Markdown, and schema validation; +- easy future MCP integration; +- lightweight enough for a personal infrastructure documentation server. + +Alternative stacks: + +- Laravel if tighter integration with existing PHP/Laravel infrastructure matters more. +- Node.js if the web UI becomes the dominant part of the project. + +### Web UI + +Decision: build a custom web UI instead of using MkDocs as the primary documentation UI. + +Use: + +- Vue.js for the frontend; +- `gnexus-ui-kit` as the UI foundation; +- FastAPI for the backend API, validation, freshness checks, and controlled writes. + +Reasoning: + +- the documentation system needs more than static rendering; +- the UI should expose inventory, relationships, stale docs, and agent-oriented operations; +- this is a useful project for testing Vue support in `gnexus-ui-kit`; +- a custom UI can become the main operational surface for both humans and agents. + +Static rendering can still be added later as an export feature, but it is not the primary MVP direction. + +### Agent Interface + +Default: REST API first, MCP later. + +Reasoning: + +- REST is simpler to debug and test. +- MCP can wrap stable domain operations once the workflow is clearer. + +The API should still be designed with MCP-style operations in mind. 
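One lightweight way to design the REST API "with MCP-style operations in mind" is to keep a single registry that maps domain operation names to their REST endpoints, so the later MCP wrapper only needs to translate names. A minimal sketch — the operation names and paths mirror examples elsewhere in this plan, but the registry itself is an implementation assumption, not a decision:

```python
# Map MCP-style operation names to the REST endpoints that would implement them.
# Names and paths are illustrative; the real API surface may differ.
OPERATIONS = {
    "search_docs":     ("GET",  "/search"),
    "read_doc":        ("GET",  "/docs/{path}"),
    "list_inventory":  ("GET",  "/inventory/{type}"),
    "check_staleness": ("GET",  "/health/freshness"),
    "commit_changes":  ("POST", "/commit"),
}

def rest_endpoint(operation: str) -> tuple[str, str]:
    """Resolve an MCP-style operation name to its (HTTP method, path) pair."""
    try:
        return OPERATIONS[operation]
    except KeyError:
        raise ValueError(f"unknown operation: {operation}") from None
```

A future MCP server could then expose exactly the keys of this registry as tools, keeping the two interfaces from drifting apart.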
+ +### Change Policy + +Decision: + +- low-risk documentation changes may be committed automatically; +- infrastructure facts, access paths, backup policies, and security-sensitive changes require review mode first; +- secrets must never be stored directly. + +Agents may create new documentation pages without review if validation passes and the change does not introduce sensitive information. + +### Deployment Model + +Decision: the MVP should not be public. + +The documentation server should be hosted on a VPS inside the local network. It may be reachable from trusted local or VPN-accessible environments, but it should not be exposed as a public internet service. + +Initial Git behavior should be local commits only. Remote push should remain a manual owner action for the MVP. The owner can periodically review commits and run `git push` manually. + +## Known Infrastructure Context + +This section captures early facts that should influence the data model. It is not yet the canonical inventory. + +### Physical Infrastructure + +Known physical server: + +```yaml +id: hp-proliant-dl380-g6 +name: HP ProLiant DL380 G6 +type: physical-server +status: active +location: home +os: Ubuntu 22.04.4 LTS +kernel: 5.15.0-176-generic +virtualization_stack: + type: kvm-libvirt + libvirt_version: 8.0.0 + management_tools: + - virsh + - cockpit-machines +management: + cockpit: + url: https://192.168.1.130:9090/system + version: 310.1-1~bpo22.04.1 +``` + +The home infrastructure currently has three physical servers and is expected to grow to four. One of the servers runs approximately two dozen VPS/VM instances. + +Confirmed on `hp-proliant-dl380-g6`: + +- system libvirt connection: `qemu:///system`; +- active services: `libvirtd`, `virtlogd`; +- installed stack includes `libvirt-daemon-driver-qemu`, `libvirt-clients`, `libvirt-daemon-system`, `python3-libvirt`, and `cockpit-machines`; +- user `bserv` belongs to the `libvirt` group. 
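Since `python3-libvirt` is confirmed installed on the host, the VM state below could later be re-collected programmatically rather than copied by hand. A hedged sketch — it assumes the libvirt Python bindings and a reachable `qemu:///system` connection, and degrades to `None` when either is unavailable:

```python
def collect_vm_state(uri="qemu:///system"):
    """Return {'running': [...], 'shut_off': [...]} from libvirt,
    or None when the bindings or the connection are unavailable."""
    try:
        import libvirt  # provided by the python3-libvirt package
    except ImportError:
        return None
    try:
        conn = libvirt.open(uri)
    except libvirt.libvirtError:
        return None
    try:
        state = {"running": [], "shut_off": []}
        for dom in conn.listAllDomains():
            bucket = "running" if dom.isActive() else "shut_off"
            state[bucket].append(dom.name())
        return state
    finally:
        conn.close()
```

This kind of probe belongs to the excluded "automatic infrastructure discovery" scope, so for MVP it would only be a manual helper for refreshing these facts.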
+ +Known VM state from system libvirt: + +```yaml +running: + - ovpn + - gitbucket + - alex + - anicusi + - navi + - mctl + - jellyfin + - gnexus-home + - nextcloud + - ovpn_reserv + - home + - transmission + - files + - gnauth +shut_off: + - cats + - fdroid + - kitan + - m1connect + - mail-server + - mine + - radio + - shome + - topics + - unmanic + - vec_search + - yt +``` + +Known libvirt networks: + +```yaml +- default +- isolated-net +- OpenNet +- united +``` + +Known libvirt storage pools: + +```yaml +- default +- jellyfin_os +- vm_main +``` + +### Network Edge + +Known local edge device: + +```yaml +id: pfsense-router +name: pfSense Router +type: router-firewall +status: active +location: home +management_url: https://192.168.1.1/ +``` + +### Public Entry And VPN Path + +Known public-side components: + +- domain: `gnexus.space`; +- external VPS; +- external VPS acts as a VPN server; +- public traffic is sent through the VPN tunnel to an internal VPS; +- the internal VPS runs nginx; +- nginx proxies traffic to target VPS instances or other internal machines. + +Canonical traffic pattern: + +```text +Internet + -> gnexus.space + -> external VPS + -> VPN tunnel + -> internal VPS + -> nginx + -> target VPS or internal machine + -> target service +``` + +The data model should make this route explicit enough for AI agents to understand what is public, what is VPN-only, and what is local-only. + +## MVP Scope + +The MVP should include only the minimum that proves the full loop: + +```text +read -> search -> propose/update -> validate -> commit -> render -> freshness check +``` + +### Included + +- documentation directory conventions; +- Markdown frontmatter convention; +- structured YAML inventory; +- schemas for core inventory files; +- FastAPI documentation server; +- Git read/write/commit adapter; +- local text search; +- basic validation; +- basic freshness report; +- custom Vue-based documentation and maintenance UI; +- initial agent-oriented API. 
+ +### Excluded For MVP + +- embedding search; +- automatic infrastructure discovery; +- complex approval workflow; +- multi-user permissions; +- remote deployment automation; +- automatic Git push; +- secret storage; +- integration with monitoring systems. + +### UI Scope + +The MVP UI is not a document editor. + +Included UI capabilities: + +- browse documents; +- view inventory records; +- view freshness and validation reports; +- view traffic route records; +- view recent commits or change history if available. + +Excluded UI capabilities: + +- direct Markdown editing; +- direct YAML editing; +- visual topology map; +- approval workflow UI. + +All writes should go through the API and agent workflow during the MVP. + +## Proposed Repository Layout + +```text +gnexus-book/ + README.md + + 00-overview/ + infrastructure-map.md + mvp-implementation-plan.md + project-implementation-notes.md + principles.md + glossary.md + + 10-systems/ + hardware/ + servers/ + virtualization/ + domains/ + networks/ + traffic-routes/ + storage/ + databases/ + applications/ + automations/ + + 20-services/ + + 30-runbooks/ + add-new-service.md + incident-response.md + restore-backup.md + + 40-inventory/ + hardware.yml + hosts.yml + virtual-machines.yml + services.yml + domains.yml + traffic-routes.yml + databases.yml + backups.yml + + 50-decisions/ + + 60-generated/ + inventory-index.md + + 90-maintenance/ + documentation-rules.md + freshness-checks.md + review-log.md + + schemas/ + hardware.schema.json + host.schema.json + virtual-machine.schema.json + service.schema.json + domain.schema.json + traffic-route.schema.json + database.schema.json + backup.schema.json + + server/ + app/ + main.py + config.py + docs_repository.py + git_adapter.py + inventory.py + markdown.py + search.py + validation.py + freshness.py + tests/ + pyproject.toml + + ui/ + package.json + src/ +``` + +## Core Data Model + +### Hardware Node + +Represents a physical server or important physical infrastructure 
device. + +The infrastructure includes several physical home servers, with more expected later. Physical topology is important and should be modeled explicitly rather than hidden inside generic host records. + +Required fields: + +- `id` +- `name` +- `type` +- `status` +- `location` +- `hardware_role` +- `management_address` +- `network_interfaces` +- `runs_hosts` +- `docs` +- `last_reviewed` + +Example: + +```yaml +- id: home-server-01 + name: Home Server 01 + type: physical-server + status: active + location: home + hardware_role: + - virtualization-host + - storage + management_address: unknown + network_interfaces: [] + runs_hosts: + - main-vps + docs: ../10-systems/hardware/home-server-01.md + last_reviewed: 2026-05-09 +``` + +### Host + +Represents a physical server, VPS, local machine, VM, container host, or managed hosting environment. + +For physical machines, a `hardware` record should exist as well. The `host` record describes the operating system or runtime environment; the `hardware` record describes the physical machine. + +Required fields: + +- `id` +- `name` +- `type` +- `status` +- `environment` +- `provider` +- `location` +- `roles` +- `hardware_node` +- `docs` +- `last_reviewed` + +Example: + +```yaml +- id: main-vps + name: Main VPS + type: vps + status: active + environment: production + provider: unknown + location: unknown + roles: + - web + - database + hardware_node: home-server-01 + docs: ../10-systems/servers/main-vps.md + last_reviewed: 2026-05-09 +``` + +### Virtual Machine + +Represents a VPS/VM running on a physical host or virtualization platform. + +This is separate from `service` because the same VM can run multiple services and participate in traffic routing. 
+ +Required fields: + +- `id` +- `name` +- `status` +- `hypervisor_host` +- `os` +- `addresses` +- `exposed_ports` +- `local_only` +- `runs_services` +- `docs` +- `last_reviewed` + +### Service + +Represents an application, system service, website, worker, scheduled task, or infrastructure service. + +Websites are modeled as services that use one or more domains. + +Required fields: + +- `id` +- `name` +- `type` +- `status` +- `host` +- `domains` +- `ports` +- `criticality` +- `docs` +- `runbook` +- `last_reviewed` + +### Traffic Route + +Represents how traffic reaches a service or host. + +This model is important because the infrastructure contains a mix of externally exposed services, local-only services, port forwards, reverse proxies, VPS-to-host relationships, and internal routing. + +Required fields: + +- `id` +- `name` +- `status` +- `source` +- `entrypoint` +- `path` +- `destination` +- `protocols` +- `ports` +- `exposure` +- `used_by` +- `docs` +- `last_reviewed` + +Allowed `exposure` values: + +- `public` +- `vpn` +- `local` +- `private` + +Example: + +```yaml +- id: public-web-to-main-vps + name: Public web traffic to Main VPS + status: active + source: internet + entrypoint: router-or-edge-proxy + path: + - public-ip + - edge-router + - reverse-proxy + destination: main-vps + protocols: + - https + ports: + - 443 + exposure: public + used_by: + - example-service + docs: ../10-systems/traffic-routes/public-web-to-main-vps.md + last_reviewed: 2026-05-09 +``` + +### Domain + +Represents a domain or subdomain. + +Required fields: + +- `id` +- `fqdn` +- `status` +- `registrar` +- `dns_provider` +- `points_to` +- `used_by` +- `docs` +- `last_reviewed` + +### Database + +Represents a database instance or logical database. + +Required fields: + +- `id` +- `engine` +- `version` +- `host` +- `used_by` +- `backup_policy` +- `docs` +- `last_reviewed` + +### Backup Policy + +Represents backup rules and recovery expectations. 
+ +Required fields: + +- `id` +- `target` +- `method` +- `frequency` +- `retention` +- `storage` +- `restore_runbook` +- `last_tested` +- `last_reviewed` + +## Markdown Frontmatter + +Each important Markdown document should use frontmatter: + +```yaml +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: manual +--- +``` + +Required frontmatter fields for MVP: + +- `owner` +- `status` +- `last_reviewed` +- `review_interval` +- `confidence` +- `source_of_truth` + +Allowed `status` values: + +- `draft` +- `active` +- `deprecated` +- `archived` + +Allowed `confidence` values: + +- `low` +- `medium` +- `high` + +## Agent API MVP + +### Read Operations + +```text +GET /docs +GET /docs/{path} +GET /search?q= +GET /inventory/{type} +GET /inventory/{type}/{id} +GET /traffic-routes +GET /health/freshness +``` + +### Write Operations + +```text +POST /docs/{path} +POST /docs/{path}/propose +POST /inventory/{type} +PATCH /inventory/{type}/{id} +POST /traffic-routes +PATCH /traffic-routes/{id} +POST /commit +``` + +The commit endpoint should receive a short agent-provided summary and use it as the commit message or as the main part of the commit message. + +Example request: + +```json +{ + "summary": "Document external VPS traffic route", + "details": "Added traffic route from gnexus.space through external VPS and VPN tunnel to internal nginx proxy." +} +``` + +Current MVP implementation starts with a safer pending-changes layer: + +```text +GET /changes +GET /changes/{id} +POST /changes +``` + +Pending changes are stored under `90-maintenance/pending-changes/` and are not applied automatically. + +### Validation Operations + +```text +POST /validate +GET /validate +``` + +Current MVP implementation has `GET /validate` for read-only validation reporting. Write-side validation can later reuse the same validation module before accepting changes. 
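The read-only frontmatter validation described above does not strictly need a full YAML parser for the MVP field set. A minimal sketch — the simple `key: value` line parsing is an implementation shortcut for illustration, not a decision against PyYAML:

```python
REQUIRED = {"owner", "status", "last_reviewed", "review_interval",
            "confidence", "source_of_truth"}
ALLOWED = {
    "status": {"draft", "active", "deprecated", "archived"},
    "confidence": {"low", "medium", "high"},
}

def validate_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a Markdown document's frontmatter."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing frontmatter block"]
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    problems = [f"missing field: {name}" for name in sorted(REQUIRED - fields.keys())]
    for key, allowed in ALLOWED.items():
        if key in fields and fields[key] not in allowed:
            problems.append(f"invalid {key}: {fields[key]}")
    return problems
```

The same checks can later run on the write path before a pending change is accepted, which is exactly the reuse noted above.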
+ +### Generated Output + +```text +POST /generate/inventory-index +``` + +## Agent Change Rules + +Default rules for AI agents: + +- Do not store passwords, tokens, private keys, recovery codes, or secret values. +- Store references to secret locations instead, for example password manager item names or vault paths. +- Prefer structured inventory operations over raw Markdown edits when changing infrastructure entities. +- Update `last_reviewed` only when the agent has actually verified the information. +- Do not mark `confidence: high` unless the information came from a reliable source or was explicitly confirmed. +- Create review-mode changes for security-sensitive or access-related updates. +- Every service should have a linked docs page. +- Every critical service should have a linked runbook. + +## Freshness Checks + +The MVP freshness report should detect: + +- Markdown documents missing frontmatter; +- documents past their review interval; +- inventory records missing required fields; +- inventory records pointing to missing docs; +- services without runbooks; +- critical services without backup notes; +- domains not linked to services; +- public traffic routes without linked services; +- services marked public without a traffic route; +- virtual machines without a physical or hypervisor host; +- broken relative links. + +## Implementation Phases + +### Phase 1: Documentation Foundation + +Deliverables: + +- create repository structure; +- add documentation rules; +- add Markdown frontmatter convention; +- add starter inventory files; +- add starter schemas; +- add custom UI foundation. + +Done when: + +- documentation can be read through the local UI or API; +- inventory files have clear expected fields; +- a human or agent knows where to add new infrastructure facts. 
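The "documents past their review interval" check from the freshness list reduces to date arithmetic over two frontmatter fields. A minimal sketch, assuming intervals are always written as `<days>d` as in the examples in this plan:

```python
from datetime import date, timedelta

def is_stale(last_reviewed: str, review_interval: str, today: date = None) -> bool:
    """True when last_reviewed plus the review interval (e.g. '90d') has passed."""
    if today is None:
        today = date.today()
    days = int(review_interval.rstrip("d"))  # '90d' -> 90
    due = date.fromisoformat(last_reviewed) + timedelta(days=days)
    return due < today
```

The freshness report endpoint would run this over every document's frontmatter and list the stale ones.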
+ +### Phase 2: Validation Foundation + +Deliverables: + +- implement schema validation for YAML inventory; +- implement Markdown frontmatter validation; +- implement link checks; +- add tests for validators. + +Done when: + +- invalid inventory is rejected; +- missing frontmatter is reported; +- broken doc references are reported. + +Current status: + +- `GET /validate` exists; +- YAML inventory parsing works through `PyYAML`; +- required inventory fields are checked from schema metadata; +- JSON schema files are checked for parseability; +- Markdown frontmatter is checked; +- inventory `docs:` links are checked; +- full JSON Schema semantics are not implemented yet. + +### Phase 3: Read API + +Deliverables: + +- FastAPI app skeleton; +- list docs endpoint; +- read doc endpoint; +- inventory read endpoint; +- search endpoint. + +Done when: + +- an agent can discover and read documentation without direct file access. + +### Phase 4: Write API And Git Adapter + +Deliverables: + +- controlled document writes; +- inventory create/update operations; +- diff/proposal support; +- Git commit operation; +- audit log entry for changes. + +Done when: + +- an agent can update docs through the server and create a validated commit. + +Current status: + +- pending-change records can be created and read through API; +- `kind=doc` pending changes can be applied after validation; +- `kind=inventory-item` pending changes can update existing inventory records after validation; +- `kind=inventory-item` pending changes can create new inventory records after validation; +- failed doc apply attempts are rolled back; +- failed inventory item apply attempts are rolled back; +- replacing whole inventory files is not implemented yet; +- local commit workflow exists with explicit file lists; +- `git push` remains manual. 
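The rollback behavior noted in the Phase 4 status can stay simple at the file level. A hedged sketch — the real server also records the pending-change entry and runs the full validation module, both of which are stubbed here behind a `validate` callable:

```python
from pathlib import Path

def apply_doc_change(path: Path, new_content: str, validate) -> bool:
    """Write new_content and run validate(path); on failure restore the
    previous content, or delete the file if it did not exist before."""
    existed = path.exists()
    original = path.read_text() if existed else None
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(new_content)
    if validate(path):
        return True
    if existed:
        path.write_text(original)  # roll back to the previous content
    else:
        path.unlink()              # roll back a newly created file
    return False
```

Only after this returns `True` would the Git adapter stage the file and create the local commit with an explicit file list.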
+ +### Phase 5: Freshness And Generated Indexes + +Deliverables: + +- freshness report endpoint; +- generated inventory index; +- stale document report; +- missing runbook report. + +Done when: + +- the system can tell which documentation needs attention. + +### Phase 6: Agent Workflow Hardening + +Deliverables: + +- document allowed agent workflows; +- define review-required change types; +- add tests for risky update paths; +- optionally expose MCP-compatible operation names. + +Done when: + +- agent maintenance is predictable enough to use regularly. + +## Interview Checklist + +These questions should be answered before or during implementation. Defaults are provided where reasonable. + +### Infrastructure Shape + +1. What are the first infrastructure entities we should document? + Decision: hosts, services, domains, databases, backups, hardware infrastructure, virtualization, and traffic routes. + +2. Do you want to include local development environments in the inventory? + Default: yes, but mark them as `environment: local`. + +3. Should websites be modeled as services, domains, or a separate entity? + Decision: websites are services that use one or more domains. + +### Server And Deployment + +4. Should the documentation server run only locally at first? + Decision: no public access; hosted on a VPS inside the local network. + +5. Should it push to a remote Git repository during MVP? + Decision: no. The server creates local commits only; the owner reviews and pushes manually. + +6. Should the documentation web UI be public, private, or local-only? + Decision: private/local-network only. + +### Agent Permissions + +7. Which agents are allowed to commit directly? + Default: none until the workflow is tested. + +8. Which changes should always require review? + Default: access, security, backup, network exposure, and critical service changes. + +9. Should agents be allowed to create new docs without review? 
+ Decision: yes, if validation passes and no sensitive fields are introduced. + +### Secrets + +10. Where should secret references point? + Default: named password-manager entries or future dedicated secret storage paths, never raw values. + +11. Should the documentation mention usernames, hostnames, and internal IPs? + Decision: yes for operational value, but never raw passwords, tokens, private keys, or recovery codes. + +### Technology + +12. Is Python/FastAPI acceptable for the server? + Decision: yes. + +13. Is MkDocs Material acceptable for the first documentation UI? + Decision: no. Use a custom Vue UI based on `gnexus-ui-kit`. + +14. Should MCP be in MVP or phase 2? + Default: phase 2, after REST operations stabilize. + +15. Which virtualization stack is used on the home servers? + Decision: at least HP ProLiant DL380 G6 uses KVM/QEMU through system libvirt (`qemu:///system`), managed with Cockpit Machines and `virsh`. + +16. Is a visual topology map required in MVP? + Decision: no. The priority is agent-readable structure, not visual representation. + +17. Should the UI support editing in MVP? + Decision: no. The UI should browse and report; editing goes through API and agent workflows. + +18. How should commit messages be created? + Decision: the agent passes a short change summary, and the server uses it for the Git commit message. + +## Immediate Next Steps + +1. Confirm or adjust the default decisions. +2. Create the documentation foundation files. +3. Define the first YAML schemas. +4. Scaffold the FastAPI server. +5. Scaffold the Vue UI with `gnexus-ui-kit`. +6. Implement read-only API first. +7. Add validation. +8. Add controlled writes and local Git commit support. +9. Add traffic route views. +10. Add recent local commit/change history view if practical. 
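The commit-message decision in item 18 implies a small message-assembly step on the server. A sketch — the `summary`/`details` field names mirror the earlier example request body; everything else here is an assumption:

```python
def build_commit_message(summary: str, details: str = "") -> str:
    """Compose a Git commit message from an agent-provided summary,
    with optional details as the body after a blank line."""
    summary = summary.strip()
    if not summary:
        raise ValueError("commit summary must not be empty")
    if not details.strip():
        return summary
    return f"{summary}\n\n{details.strip()}"
```

Keeping the summary as the first line matches conventional Git log formatting, so `git log --oneline` stays readable during the owner's manual push review.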
+ +## Current Working Recommendation + +Proceed with: + +- monorepo layout; +- Python/FastAPI server under `server/`; +- custom Vue UI under `ui/` using `gnexus-ui-kit`; +- Markdown and YAML as canonical storage; +- REST API first; +- MCP wrapper later; +- private local-network deployment on a VPS and local Git commits for MVP; +- manual owner-controlled Git push after commit review; +- first-class modeling for physical hardware, VPS/VMs, services, domains, databases, backups, and traffic routes; +- no visual topology map in MVP; +- read-only UI for documents, inventory, freshness, validation, and traffic routes; +- review-required policy for sensitive infrastructure changes. diff --git a/00-overview/project-implementation-notes.md b/00-overview/project-implementation-notes.md new file mode 100644 index 0000000..013c2bb --- /dev/null +++ b/00-overview/project-implementation-notes.md @@ -0,0 +1,280 @@ +--- +owner: gmikcon +status: draft +last_reviewed: 2026-05-09 +review_interval: 30d +confidence: medium +source_of_truth: planning +--- + +# Gnexus Book Project Implementation Notes + +## Purpose + +Gnexus Book is intended to be a knowledge base for the owner's digital and server infrastructure. + +The documentation should be useful for both humans and AI agents. Its main job is to explain the high-level structure of the infrastructure: what exists, where it runs, how components depend on each other, and how operational changes should be made. + +The project should not become only a folder of Markdown files. Markdown is the preferred storage and reading format, but the system also needs an operational layer that helps AI agents keep the documentation current. + +## Agreed Direction + +The source of truth should remain a Git repository containing Markdown and structured inventory files. + +Git provides: + +- version history; +- rollback; +- review; +- portability; +- compatibility with existing developer tools; +- easy offline access. 
+ +However, Git should not be the main editing interface for AI agents. The agent workflow of cloning the repository, editing files manually, committing, and pushing is too cumbersome and too error-prone for regular documentation maintenance. + +Instead, the project should include a documentation server that works as an operational layer over the repository. + +## Target Architecture + +```text +Human / AI Agent + | + v +Documentation Server + | + +-- Markdown and YAML Git repository + +-- Search index + +-- Validation layer + +-- Commit manager + +-- Audit log + +-- Web documentation UI + +-- Agent API / MCP interface +``` + +## Storage Model + +The repository should use a hybrid documentation model: + +- Markdown files for architecture, explanations, runbooks, decisions, and operational notes. +- YAML files for structured inventory such as hosts, services, domains, ports, databases, and relationships. +- YAML files for hardware, virtual machines, traffic routes, and exposure maps. +- Schemas for validating structured files. +- Generated indexes for navigation and agent-friendly lookup. + +Markdown remains the main human-readable format, but structured YAML should be used where agents need predictable data. + +## Documentation Server + +The documentation server should provide a safer and more convenient interface for maintaining the documentation. + +Expected responsibilities: + +- read Markdown and YAML documents; +- expose documentation through a web UI; +- expose an API for AI agents; +- provide search; +- accept proposed changes; +- validate document structure and inventory schemas; +- manage commits to the Git repository; +- optionally create review records or pull requests; +- track document freshness; +- maintain an audit trail of changes. + +The documentation server should hide most raw Git operations from agents. + +For the MVP, the server should be hosted privately on a VPS inside the local network. It should not be publicly exposed. 
The server should create local Git commits, while pushing to the remote repository remains a manual owner action after reviewing the commit history. + +The first UI should be focused on reading and inspection rather than editing: documents, inventory records, validation status, freshness reports, traffic routes, and recent changes. + +## Agent Interface + +The agent interface should expose domain-level operations instead of only raw file editing. + +Examples: + +```text +search_docs(query) +read_doc(path) +list_inventory(type) +create_doc(path, content) +propose_change(path, patch, reason) +add_service(data) +update_service(id, data) +check_staleness() +commit_changes(message) +``` + +For example, an agent should ideally be able to call `add_service(data)` instead of manually editing `services.yml`. + +The implementation may begin as a REST API and later be wrapped as an MCP server, or it may provide MCP support from the start. + +## Change Modes + +The server should support several change modes: + +1. Draft mode + - The agent can prepare changes without committing them immediately. + +2. Validated commit mode + - The server validates Markdown metadata, links, YAML structure, and required fields before creating a commit. + +3. Review mode + - Important changes can be held for human approval. + +4. Direct commit mode + - Low-risk changes can be committed automatically by trusted agents. + +The exact authorization and review rules still need to be designed. + +## Freshness Model + +Documents should include metadata that helps determine whether they are still reliable. 
+ +Example: + +```yaml +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: manual +--- +``` + +The server should be able to report: + +- stale documents; +- documents without owners; +- services without runbooks; +- hosts without backup policy; +- domains without DNS notes; +- inventory records without linked documentation; +- public traffic routes without linked services; +- services marked public without a traffic route; +- broken internal links. + +## Initial Repository Structure + +The exact structure is still flexible, but the current working proposal is: + +```text +gnexus-book/ + README.md + + 00-overview/ + infrastructure-map.md + project-implementation-notes.md + principles.md + glossary.md + + 10-systems/ + hardware/ + servers/ + virtualization/ + domains/ + networks/ + traffic-routes/ + storage/ + databases/ + applications/ + automations/ + + 20-services/ + gnexus-auth.md + websites.md + mail.md + backups.md + monitoring.md + + 30-runbooks/ + deploy.md + restore-backup.md + rotate-secrets.md + incident-response.md + add-new-service.md + + 40-inventory/ + hardware.yml + hosts.yml + virtual-machines.yml + domains.yml + services.yml + traffic-routes.yml + credentials-map.md + + 50-decisions/ + ADR-0001-documentation-structure.md + + 90-maintenance/ + documentation-rules.md + freshness-checks.md + review-log.md +``` + +## Practical MVP + +The first implementation should stay small. + +Recommended MVP: + +- Markdown and YAML repository structure. +- Basic documentation conventions. +- YAML inventory schemas. +- Simple documentation server. +- Git adapter for read/write/commit operations. +- Search using local text search first. +- API for reading, searching, proposing changes, and committing validated updates. +- Basic freshness checks. +- Custom Vue.js web UI based on `gnexus-ui-kit`. 
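"Search using local text search first" can begin as a plain case-insensitive substring scan over the Markdown tree. A minimal sketch of what `GET /search?q=` might call — the result shape is an assumption:

```python
from pathlib import Path

def search_docs(root: str, query: str, limit: int = 20) -> list:
    """Case-insensitive substring search over all Markdown files under root."""
    needle = query.lower()
    hits = []
    for path in sorted(Path(root).rglob("*.md")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle in line.lower():
                hits.append({"path": str(path.relative_to(root)),
                             "line": lineno,
                             "text": line.strip()})
                if len(hits) >= limit:
                    return hits
    return hits
```

This is deliberately naive; embedding-based semantic search is listed under later capabilities and would replace or augment this scan without changing the API shape.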
+ +Possible minimal API: + +```text +GET /docs +GET /docs/{path} +GET /search?q= +GET /inventory/{type} +GET /traffic-routes +POST /docs/{path}/propose +POST /inventory/services +POST /commit +GET /health/freshness +``` + +## Later Capabilities + +Possible later additions: + +- MCP server for direct AI-agent access. +- Embedding-based semantic search. +- Auto-generated documentation indexes. +- Static documentation export. +- Rich web editor for humans. +- Approval workflow. +- Scheduled freshness review. +- Infrastructure probes that compare documentation with real state. +- CI validation for links, schemas, and metadata. +- Integration with deployment systems or monitoring tools. + +## Open Questions + +- Should the documentation server live inside this repository or in a separate repository? +- Which stack should be used for the server: Python/FastAPI, Laravel, Node.js, or something else? +- Should MCP support be part of the first MVP? +- What changes may agents commit directly? +- What changes require human review? +- How much of the custom Vue UI should be included in the first MVP? +- How should secrets and sensitive access information be represented without exposing actual credentials? +- How detailed should the first traffic route records be? +- Which exact virtualization layer should be recorded for the home servers? +- What review rules should apply to traffic exposure changes? + +## Current Decision + +Use Markdown and YAML in Git as the durable storage format. + +Build a documentation server that acts as the main operational interface for agents and humans. The server should provide controlled editing through API, validation, search, freshness checks, local commit management, and a custom Vue.js UI based on `gnexus-ui-kit`. + +The next planning step is to define the MVP implementation plan and choose the first version of the repository structure, schemas, and server responsibilities. 
diff --git a/10-systems/domains/gnexus.space.md b/10-systems/domains/gnexus.space.md new file mode 100644 index 0000000..2d47048 --- /dev/null +++ b/10-systems/domains/gnexus.space.md @@ -0,0 +1,25 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: owner-confirmed +--- + +# gnexus.space + +Primary public domain used as the external entry point for services. + +## Current Understanding + +Traffic for `gnexus.space` reaches an external VPS. The external VPS acts as a VPN server and sends traffic through a VPN tunnel into the internal infrastructure, where an internal VPS with nginx proxies requests to target VPS instances or other internal machines. + +See [public traffic route](../traffic-routes/public-gnexus-space-to-internal-nginx.md). + +## Unknowns + +- Registrar. +- DNS provider. +- Full list of subdomains. +- Exact DNS records. diff --git a/10-systems/hardware/hp-proliant-dl380-g6.md b/10-systems/hardware/hp-proliant-dl380-g6.md new file mode 100644 index 0000000..9b0fb51 --- /dev/null +++ b/10-systems/hardware/hp-proliant-dl380-g6.md @@ -0,0 +1,38 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: high +source_of_truth: ssh-libvirt +--- + +# HP ProLiant DL380 G6 + +Physical server in the home infrastructure. + +## Role + +- Virtualization host. +- Runs KVM/QEMU virtual machines through system libvirt. +- Managed through Cockpit and `virsh`. + +## Access + +- Cockpit: `https://192.168.1.130:9090/system` +- SSH user known for administration: `bserv` +- Secret values are not stored in this repository. + +## Operating System + +- OS: Ubuntu 22.04.4 LTS +- Kernel observed: `5.15.0-176-generic` + +## Virtualization + +- Stack: KVM/QEMU through libvirt. 
+- Libvirt connection: `qemu:///system` +- Tools observed: `virsh`, `cockpit-machines` +- Active services observed: `libvirtd`, `virtlogd` + +See [virtual machine inventory](../../40-inventory/virtual-machines.yml). diff --git a/10-systems/networks/libvirt-networks.md b/10-systems/networks/libvirt-networks.md new file mode 100644 index 0000000..d91b383 --- /dev/null +++ b/10-systems/networks/libvirt-networks.md @@ -0,0 +1,21 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: high +source_of_truth: ssh-libvirt +--- + +# Libvirt Networks On HP ProLiant DL380 G6 + +The following libvirt networks were observed on `hp-proliant-dl380-g6` through `virsh -c qemu:///system net-list --all`: + +- `default` +- `isolated-net` +- `OpenNet` +- `united` + +All observed networks are active, persistent, and configured for autostart. + +Details still need to be documented: subnet ranges, bridge names, DHCP behavior, NAT/routing mode, and which VM uses each network. diff --git a/10-systems/networks/pfsense-router.md b/10-systems/networks/pfsense-router.md new file mode 100644 index 0000000..9ee1429 --- /dev/null +++ b/10-systems/networks/pfsense-router.md @@ -0,0 +1,25 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: owner-confirmed +--- + +# pfSense Router + +Central router and firewall for the local network. + +## Access + +- Web UI: `https://192.168.1.1/` +- Secret values are not stored in this repository. + +## Role + +- Local network edge. +- Firewall and routing point for internal infrastructure. +- Part of the path between local infrastructure and services reachable through trusted network paths. + +Further details still need to be documented: interfaces, VLANs, firewall rules, port forwards, VPN routes, and DNS behavior. 
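The libvirt network details flagged as undocumented above (subnet ranges, bridge names, DHCP behavior, NAT/routing mode) can be recovered from `virsh net-dumpxml` output. A sketch of parsing that XML; the sample below is an illustrative assumption, not the real `default` network definition on `hp-proliant-dl380-g6`:

```python
import xml.etree.ElementTree as ET

# Illustrative output of `virsh -c qemu:///system net-dumpxml default`;
# the actual values still need to be captured and documented.
sample = """
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.122.2' end='192.168.122.254'/></dhcp>
  </ip>
</network>
"""

def summarize_network(xml_text: str) -> dict:
    """Extract the fields this repo wants to record for each libvirt network."""
    root = ET.fromstring(xml_text)
    forward = root.find("forward")
    ip = root.find("ip")
    return {
        "name": root.findtext("name"),
        "mode": forward.get("mode") if forward is not None else "isolated",
        "bridge": root.find("bridge").get("name"),
        "address": ip.get("address") if ip is not None else None,
        "dhcp": ip is not None and ip.find("dhcp") is not None,
    }

print(summarize_network(sample))
```

Running this over `net-dumpxml` for `default`, `isolated-net`, `OpenNet`, and `united` would fill in the missing fields in `40-inventory/networks.yml`.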
diff --git a/10-systems/traffic-routes/public-gnexus-space-to-internal-nginx.md b/10-systems/traffic-routes/public-gnexus-space-to-internal-nginx.md new file mode 100644 index 0000000..28203d0 --- /dev/null +++ b/10-systems/traffic-routes/public-gnexus-space-to-internal-nginx.md @@ -0,0 +1,44 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: owner-confirmed +--- + +# Public gnexus.space Traffic To Internal nginx + +This route describes the main public traffic path into the local infrastructure. + +## Route + +```text +Internet + -> gnexus.space + -> external VPS + -> VPN tunnel + -> internal VPS + -> nginx + -> target VPS or internal machine + -> target service +``` + +## Exposure + +- Exposure: public. +- Public-facing entry: external VPS. +- Internal routing: VPN tunnel to internal VPS. +- Proxy layer: nginx. + +## Purpose + +The route lets public traffic terminate on an external VPS and then pass into selected internal services through a VPN tunnel. The internal nginx proxy decides which target VPS or machine receives the request. + +## Unknowns + +- External VPS hostname and provider. +- VPN technology and tunnel addresses. +- Internal VPS identity. +- nginx configuration location. +- Mapping of hostnames to target services. diff --git a/10-systems/virtualization/libvirt-vms.md b/10-systems/virtualization/libvirt-vms.md new file mode 100644 index 0000000..0a3b019 --- /dev/null +++ b/10-systems/virtualization/libvirt-vms.md @@ -0,0 +1,64 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: high +source_of_truth: ssh-libvirt +--- + +# Libvirt Virtual Machines + +This document summarizes virtual machines observed on `hp-proliant-dl380-g6` through system libvirt. 
+ +## Hypervisor + +- Physical host: `hp-proliant-dl380-g6` +- Libvirt connection: `qemu:///system` +- Virtualization stack: KVM/QEMU through libvirt +- Management tools: `virsh`, Cockpit Machines + +## Inventory + +The canonical machine list is stored in [virtual-machines.yml](../../40-inventory/virtual-machines.yml). + +Observed running VMs: + +- `ovpn` +- `gitbucket` +- `alex` +- `anicusi` +- `navi` +- `mctl` +- `jellyfin` +- `gnexus-home` +- `nextcloud` +- `ovpn_reserv` +- `home` +- `transmission` +- `files` +- `gnauth` + +Observed shut off VMs: + +- `cats` +- `fdroid` +- `kitan` +- `m1connect` +- `mail-server` +- `mine` +- `radio` +- `shome` +- `topics` +- `unmanic` +- `vec_search` +- `yt` + +## Known Addresses + +`virsh domifaddr` returned addresses only for: + +- `alex`: `192.168.105.195/24` +- `ovpn_reserv`: `192.168.105.181/24` + +Other VM addresses need to be documented from guest agents, DHCP leases, static host configuration, nginx config, or manual confirmation. diff --git a/40-inventory/backups.yml b/40-inventory/backups.yml new file mode 100644 index 0000000..bdf8013 --- /dev/null +++ b/40-inventory/backups.yml @@ -0,0 +1,3 @@ +# Backup policies and restore targets. +--- +[] diff --git a/40-inventory/databases.yml b/40-inventory/databases.yml new file mode 100644 index 0000000..38763a5 --- /dev/null +++ b/40-inventory/databases.yml @@ -0,0 +1,3 @@ +# Database instances and logical databases. +--- +[] diff --git a/40-inventory/domains.yml b/40-inventory/domains.yml new file mode 100644 index 0000000..87d6495 --- /dev/null +++ b/40-inventory/domains.yml @@ -0,0 +1,13 @@ +# Domains and subdomains. 
+--- +- id: gnexus-space + fqdn: gnexus.space + status: active + registrar: unknown + dns_provider: unknown + points_to: + - external-vps + used_by: [] + docs: ../10-systems/domains/gnexus.space.md + last_reviewed: 2026-05-09 + source_of_truth: owner-confirmed diff --git a/40-inventory/hardware.yml b/40-inventory/hardware.yml new file mode 100644 index 0000000..35553b0 --- /dev/null +++ b/40-inventory/hardware.yml @@ -0,0 +1,71 @@ +# Physical hardware and important infrastructure devices. +--- +- id: hp-proliant-dl380-g6 + name: HP ProLiant DL380 G6 + type: physical-server + status: active + location: home + hardware_role: + - virtualization-host + os: Ubuntu 22.04.4 LTS + kernel: 5.15.0-176-generic + management: + cockpit: + url: https://192.168.1.130:9090/system + version: 310.1-1~bpo22.04.1 + virtualization_stack: + type: kvm-libvirt + connection: qemu:///system + libvirt_version: 8.0.0 + management_tools: + - virsh + - cockpit-machines + network_interfaces: [] + runs_hosts: + - ovpn + - gitbucket + - alex + - anicusi + - navi + - mctl + - jellyfin + - gnexus-home + - nextcloud + - ovpn_reserv + - home + - transmission + - files + - gnauth + - cats + - fdroid + - kitan + - m1connect + - mail-server + - mine + - radio + - shome + - topics + - unmanic + - vec_search + - yt + docs: ../10-systems/hardware/hp-proliant-dl380-g6.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: pfsense-router + name: pfSense Router + type: router-firewall + status: active + location: home + hardware_role: + - router + - firewall + - local-network-edge + management: + web: + url: https://192.168.1.1/ + network_interfaces: [] + runs_hosts: [] + docs: ../10-systems/networks/pfsense-router.md + last_reviewed: 2026-05-09 + source_of_truth: owner-confirmed diff --git a/40-inventory/hosts.yml b/40-inventory/hosts.yml new file mode 100644 index 0000000..8669038 --- /dev/null +++ b/40-inventory/hosts.yml @@ -0,0 +1,3 @@ +# Operating-system hosts and runtime environments. 
+--- +[] diff --git a/40-inventory/networks.yml b/40-inventory/networks.yml new file mode 100644 index 0000000..c752b14 --- /dev/null +++ b/40-inventory/networks.yml @@ -0,0 +1,41 @@ +# Network segments, virtual networks, and routing domains. +--- +- id: libvirt-default + name: default + type: libvirt-network + status: active + owner_host: hp-proliant-dl380-g6 + autostart: true + docs: ../10-systems/networks/libvirt-networks.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: libvirt-isolated-net + name: isolated-net + type: libvirt-network + status: active + owner_host: hp-proliant-dl380-g6 + autostart: true + docs: ../10-systems/networks/libvirt-networks.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: libvirt-opennet + name: OpenNet + type: libvirt-network + status: active + owner_host: hp-proliant-dl380-g6 + autostart: true + docs: ../10-systems/networks/libvirt-networks.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: libvirt-united + name: united + type: libvirt-network + status: active + owner_host: hp-proliant-dl380-g6 + autostart: true + docs: ../10-systems/networks/libvirt-networks.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt diff --git a/40-inventory/services.yml b/40-inventory/services.yml new file mode 100644 index 0000000..58c250d --- /dev/null +++ b/40-inventory/services.yml @@ -0,0 +1,3 @@ +# Applications, websites, infrastructure services, workers, and daemons. +--- +[] diff --git a/40-inventory/traffic-routes.yml b/40-inventory/traffic-routes.yml new file mode 100644 index 0000000..e1c61e2 --- /dev/null +++ b/40-inventory/traffic-routes.yml @@ -0,0 +1,25 @@ +# Traffic routes explain how requests reach infrastructure targets. 
+--- +- id: public-gnexus-space-to-internal-nginx + name: Public gnexus.space traffic to internal nginx + status: active + source: internet + entrypoint: external-vps + path: + - gnexus.space + - external-vps + - vpn-tunnel + - internal-vps + - nginx + destination: target-vps-or-internal-machine + protocols: + - http + - https + ports: + - 80 + - 443 + exposure: public + used_by: [] + docs: ../10-systems/traffic-routes/public-gnexus-space-to-internal-nginx.md + last_reviewed: 2026-05-09 + source_of_truth: owner-confirmed diff --git a/40-inventory/virtual-machines.yml b/40-inventory/virtual-machines.yml new file mode 100644 index 0000000..c3f810a --- /dev/null +++ b/40-inventory/virtual-machines.yml @@ -0,0 +1,445 @@ +# Virtual machines discovered from system libvirt. +--- +- id: ovpn + name: ovpn + status: running + uuid: 7bdc7bb7-e75b-40c7-8e3c-6403ae427abb + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 2048 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: gitbucket + name: gitbucket + status: running + uuid: aea6497f-594e-467f-bd08-edd9dbbc1538 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 8192 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: alex + name: alex + status: running + uuid: bb424e38-d78c-4828-bf1d-57e190150cf8 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 4096 + autostart: true + addresses: + - 192.168.105.195/24 + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + 
last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: anicusi + name: anicusi + status: running + uuid: d2f9dea5-ba5b-4f1e-a77a-d01a6e11d72f + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 1 + memory_mib: 1024 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: navi + name: navi + status: running + uuid: 1561ac50-31f4-4a25-aa4c-05498e0a7a26 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 8 + memory_mib: 8192 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: mctl + name: mctl + status: running + uuid: 60562377-9bae-45a8-b5a2-cb20a3d8ac1d + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 20 + memory_mib: 4096 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: jellyfin + name: jellyfin + status: running + uuid: 65771f76-c780-4c14-8842-2aa761cfb350 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 20 + memory_mib: 16384 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: gnexus-home + name: gnexus-home + status: running + uuid: 19635ebb-3936-49f1-9c90-0cf39f45e57c + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 1 + memory_mib: 1024 + 
autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: nextcloud + name: nextcloud + status: running + uuid: d172c322-84b4-44aa-8ab4-2acbc12e1c0f + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 24 + memory_mib: 32768 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: ovpn_reserv + name: ovpn_reserv + status: running + uuid: 81670cb5-99bf-41bc-bdab-a4787654f325 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 4096 + autostart: true + addresses: + - 192.168.105.181/24 + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: home + name: home + status: running + uuid: b31a9e40-27d3-462d-854b-2cc01b8fc26d + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: transmission + name: transmission + status: running + uuid: 9ae79e15-d8b9-469d-9185-0639205937ba + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: files + name: files + status: running + uuid: 402fd140-72ce-4598-9239-2264bb51ea6f + hypervisor_host: 
hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: gnauth + name: gnauth + status: running + uuid: 24fb9930-7365-4027-a4d8-6d076f56d3d2 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 4096 + autostart: true + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: cats + name: cats + status: shut_off + uuid: cb509171-6233-42ef-b326-5b875f9f5b8c + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: fdroid + name: fdroid + status: shut_off + uuid: 4cf9ff45-ccab-4cb1-9436-543ace86f8fe + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 1 + memory_mib: 1024 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: kitan + name: kitan + status: shut_off + uuid: c1f3f074-8733-4c93-a75a-a8ea6552d3b2 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: m1connect + name: 
m1connect + status: shut_off + uuid: 1a52c4b8-158a-4409-9ed1-b3c081b6aecc + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: mail-server + name: mail-server + status: shut_off + uuid: fd940e9c-d831-4690-b7bf-b6e3c154ab9e + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: mine + name: mine + status: shut_off + uuid: 3fc32754-7e78-4aa5-9630-6713a5ba9cfd + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 12 + memory_mib: 16384 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: radio + name: radio + status: shut_off + uuid: 3b6d1ec5-92ea-40fc-8237-cfd9f1756f85 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 8 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: shome + name: shome + status: shut_off + uuid: 534b3901-5cdc-498e-a904-806ae28b2680 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: 
../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: topics + name: topics + status: shut_off + uuid: 713441e2-a745-4d94-9819-da98c1e83170 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 2048 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: unmanic + name: unmanic + status: shut_off + uuid: 2138c85a-214b-46c6-bd35-829ecd527597 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 4 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: vec_search + name: vec_search + status: shut_off + uuid: 85c12993-7eeb-4655-be87-f7a0ad816a34 + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 8 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt + +- id: yt + name: yt + status: shut_off + uuid: 3227abc2-bbf2-40bd-a2d4-68253e11591d + hypervisor_host: hp-proliant-dl380-g6 + virtualization_stack: kvm-libvirt + libvirt_connection: qemu:///system + os_type: hvm + vcpus: 2 + memory_mib: 4096 + autostart: false + addresses: [] + runs_services: [] + docs: ../10-systems/virtualization/libvirt-vms.md + last_reviewed: 2026-05-09 + source_of_truth: ssh-libvirt diff --git a/60-generated/inventory-index.md b/60-generated/inventory-index.md new file mode 100644 index 0000000..7e77267 --- /dev/null +++ b/60-generated/inventory-index.md @@ -0,0 +1,14 @@ +--- +owner: system 
+status: draft +last_reviewed: 2026-05-09 +review_interval: 30d +confidence: low +source_of_truth: generated-placeholder +--- + +# Inventory Index + +This file is a placeholder for a generated inventory index. + +It should be regenerated by the documentation server after inventory changes. diff --git a/90-maintenance/documentation-rules.md b/90-maintenance/documentation-rules.md new file mode 100644 index 0000000..dcbaf4d --- /dev/null +++ b/90-maintenance/documentation-rules.md @@ -0,0 +1,47 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: high +source_of_truth: project-policy +--- + +# Documentation Rules + +## Storage + +- Markdown is used for human-readable documentation. +- YAML is used for structured inventory. +- JSON Schema is used to validate structured inventory. +- Git is the durable source of truth. + +## Secrets + +Never store raw secret values in this repository. + +Do not store: + +- passwords; +- API tokens; +- private keys; +- recovery codes; +- session cookies. + +Store references to secret locations instead, such as password manager item names or future vault paths. + +## Review + +Update `last_reviewed` only when the information has actually been checked. + +Use `confidence: high` only for information confirmed from a reliable source, direct inspection, or owner confirmation. + +Security-sensitive changes should use review mode once the documentation server supports it. + +## Agent Changes + +Agents should prefer structured inventory operations over raw file edits when possible. + +Every important inventory record should link to a documentation page. + +Every public traffic route should list its exposure and target services when known. 
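The rule that JSON Schema validates structured inventory can be sketched without pulling in the full `jsonschema` package: the server ultimately needs the `required` lists and the `id` pattern from `schemas/common.defs.schema.json`. A minimal hand-rolled check over one parsed network record (the real server would use a proper validator):

```python
import re

# Mirrors common.defs.schema.json (id pattern) and network.schema.json (required).
ID_PATTERN = re.compile(r"^[a-z0-9][a-z0-9._-]*$")
REQUIRED_NETWORK_FIELDS = ("id", "name", "type", "status", "docs", "last_reviewed")

def validate_network_item(item: dict) -> list:
    """Return human-readable problems; an empty list means the item passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_NETWORK_FIELDS if f not in item]
    if "id" in item and not ID_PATTERN.match(item["id"]):
        problems.append(f"bad id: {item['id']!r}")
    return problems

record = {
    "id": "libvirt-default",
    "name": "default",
    "type": "libvirt-network",
    "status": "active",
    "docs": "../10-systems/networks/libvirt-networks.md",
    "last_reviewed": "2026-05-09",
}
print(validate_network_item(record))            # []
print(validate_network_item({"id": "Bad ID"}))  # missing fields plus a bad id
```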
diff --git a/90-maintenance/freshness-checks.md b/90-maintenance/freshness-checks.md new file mode 100644 index 0000000..ff0a4f3 --- /dev/null +++ b/90-maintenance/freshness-checks.md @@ -0,0 +1,23 @@ +--- +owner: gmikcon +status: draft +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: medium +source_of_truth: project-policy +--- + +# Freshness Checks + +The documentation server should eventually report these conditions: + +- Markdown documents missing required frontmatter. +- Documents past their `review_interval`. +- Inventory records missing required fields. +- Inventory records pointing to missing docs. +- Public traffic routes without linked services. +- Services marked public without a traffic route. +- Virtual machines without a hypervisor host. +- Broken relative links. +- Critical services without runbooks. +- Backup policies without a last restore test. diff --git a/90-maintenance/pending-changes/README.md b/90-maintenance/pending-changes/README.md new file mode 100644 index 0000000..f03216b --- /dev/null +++ b/90-maintenance/pending-changes/README.md @@ -0,0 +1,88 @@ +--- +owner: gmikcon +status: active +last_reviewed: 2026-05-09 +review_interval: 90d +confidence: high +source_of_truth: project-policy +--- + +# Pending Changes + +This directory stores proposed documentation changes created through the backend API. + +Pending changes are not applied to the canonical documentation automatically. + +## Current Behavior + +- `POST /changes` creates a JSON pending-change record. +- `GET /changes` lists pending changes. +- `GET /changes/{id}` reads a pending change. +- `POST /changes/{id}/apply` applies `kind=doc` and `kind=inventory-item` changes after full validation. + +Inventory item apply supports: + +- updating an existing item by `id` with `payload.mode: "update"`; +- creating a new item with `payload.mode: "create"`. + +Replacing full inventory files is not implemented yet. 
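The `mode: "update"` and `mode: "create"` apply behaviors described above can be sketched as a pure function over a parsed inventory list. This illustrates the documented semantics only, under the assumption that items are dicts keyed by `id`; it is not the server's actual implementation:

```python
def apply_inventory_change(items: list, target_id: str, mode: str, patch: dict) -> list:
    """Apply a pending inventory-item change in memory.

    mode='update' merges the patch into the existing item with that id;
    mode='create' appends a new item with the given id plus the patch.
    """
    if mode == "update":
        for item in items:
            if item["id"] == target_id:
                item.update(patch)
                return items
        raise KeyError(f"no inventory item with id {target_id!r}")
    if mode == "create":
        if any(item["id"] == target_id for item in items):
            raise ValueError(f"id {target_id!r} already exists")
        items.append({"id": target_id, **patch})
        return items
    raise ValueError(f"unsupported mode: {mode!r}")

vms = [{"id": "gnauth", "name": "gnauth", "runs_services": []}]
apply_inventory_change(vms, "gnauth", "update", {"runs_services": ["gnexus-auth"]})
print(vms[0]["runs_services"])  # ['gnexus-auth']
```

After the in-memory apply, the server would re-serialize the YAML file, run full validation, and roll back if validation fails.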
+ +## Purpose + +Pending changes provide a reviewable buffer between AI-agent proposals and canonical documentation updates. + +The later apply workflow should be: + +```text +pending change + -> apply to canonical Markdown or inventory file + -> run full validation + -> rollback if validation fails + -> mark change as applied + -> create local Git commit through `POST /commit` + -> owner reviews commit + -> owner manually pushes when ready +``` + +## Secrets + +Pending changes must follow the same secret policy as canonical documentation. They must not contain raw passwords, tokens, private keys, recovery codes, or session cookies. + +## Inventory Item Payloads + +Update an existing item: + +```json +{ + "kind": "inventory-item", + "target": "virtual-machines/gnauth", + "summary": "Document gnauth service mapping", + "payload": { + "mode": "update", + "patch": { + "runs_services": ["gnexus-auth"] + } + } +} +``` + +Create a new item: + +```json +{ + "kind": "inventory-item", + "target": "virtual-machines/new-vm", + "summary": "Add new VM", + "payload": { + "mode": "create", + "patch": { + "name": "new-vm", + "status": "running", + "hypervisor_host": "hp-proliant-dl380-g6", + "virtualization_stack": "kvm-libvirt", + "docs": "../10-systems/virtualization/libvirt-vms.md", + "last_reviewed": "2026-05-09" + } + } +} +``` diff --git a/README.md b/README.md index 8e36225..e500d98 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,25 @@ gnexus-book =============== + +Knowledge base for documenting personal digital and server infrastructure. 
+ +## Planning + +- [Project implementation notes](00-overview/project-implementation-notes.md) +- [MVP implementation plan](00-overview/mvp-implementation-plan.md) + +## Foundation + +- [Documentation rules](90-maintenance/documentation-rules.md) +- [Freshness checks](90-maintenance/freshness-checks.md) +- [Hardware inventory](40-inventory/hardware.yml) +- [Virtual machine inventory](40-inventory/virtual-machines.yml) +- [Traffic routes](40-inventory/traffic-routes.yml) + +## Server + +- [Server README](server/README.md) + +## UI + +- [UI README](ui/README.md) diff --git a/schemas/backup.schema.json b/schemas/backup.schema.json new file mode 100644 index 0000000..625ecb1 --- /dev/null +++ b/schemas/backup.schema.json @@ -0,0 +1,22 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "backup.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "target", "method", "frequency", "retention", "storage", "restore_runbook", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "target": { "type": "string" }, + "method": { "type": "string" }, + "frequency": { "type": "string" }, + "retention": { "type": "string" }, + "storage": { "type": "string" }, + "restore_runbook": { "type": "string" }, + "last_tested": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/common.defs.schema.json b/schemas/common.defs.schema.json new file mode 100644 index 0000000..94ae03f --- /dev/null +++ b/schemas/common.defs.schema.json @@ -0,0 +1,26 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "common.defs.schema.json", + "definitions": { + "id": { + "type": "string", + "pattern": "^[a-z0-9][a-z0-9._-]*$" + }, + "status": { + "type": "string", + "enum": ["active", "inactive", "planned", "deprecated", "archived", "running", "shut_off", "unknown"] + }, + "date": { + "type": "string", 
+ "pattern": "^[0-9]{4}-[0-9]{2}-[0-9]{2}$" + }, + "docPath": { + "type": "string", + "pattern": "^\\.\\./.+\\.md$" + }, + "stringList": { + "type": "array", + "items": { "type": "string" } + } + } +} diff --git a/schemas/database.schema.json b/schemas/database.schema.json new file mode 100644 index 0000000..221d456 --- /dev/null +++ b/schemas/database.schema.json @@ -0,0 +1,21 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "database.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "engine", "host", "used_by", "backup_policy", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "engine": { "type": "string" }, + "version": { "type": "string" }, + "host": { "type": "string" }, + "used_by": { "type": "array", "items": { "type": "string" } }, + "backup_policy": { "type": "string" }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/domain.schema.json b/schemas/domain.schema.json new file mode 100644 index 0000000..72f177a --- /dev/null +++ b/schemas/domain.schema.json @@ -0,0 +1,22 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "domain.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "fqdn", "status", "points_to", "used_by", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "fqdn": { "type": "string" }, + "status": { "type": "string" }, + "registrar": { "type": "string" }, + "dns_provider": { "type": "string" }, + "points_to": { "type": "array", "items": { "type": "string" } }, + "used_by": { "type": "array", "items": { "type": "string" } }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/hardware.schema.json 
b/schemas/hardware.schema.json new file mode 100644 index 0000000..c8f93b5 --- /dev/null +++ b/schemas/hardware.schema.json @@ -0,0 +1,27 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "hardware.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "type", "status", "location", "hardware_role", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "type": { "type": "string" }, + "status": { "type": "string" }, + "location": { "type": "string" }, + "hardware_role": { "type": "array", "items": { "type": "string" } }, + "os": { "type": "string" }, + "kernel": { "type": "string" }, + "management": { "type": "object" }, + "virtualization_stack": { "type": "object" }, + "network_interfaces": { "type": "array" }, + "runs_hosts": { "type": "array", "items": { "type": "string" } }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/host.schema.json b/schemas/host.schema.json new file mode 100644 index 0000000..04e41d7 --- /dev/null +++ b/schemas/host.schema.json @@ -0,0 +1,24 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "host.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "type", "status", "environment", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "type": { "type": "string" }, + "status": { "type": "string" }, + "environment": { "type": "string" }, + "provider": { "type": "string" }, + "location": { "type": "string" }, + "roles": { "type": "array", "items": { "type": "string" } }, + "hardware_node": { "type": "string" }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git 
a/schemas/network.schema.json b/schemas/network.schema.json new file mode 100644 index 0000000..777df8f --- /dev/null +++ b/schemas/network.schema.json @@ -0,0 +1,21 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "network.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "type", "status", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "type": { "type": "string" }, + "status": { "type": "string" }, + "owner_host": { "type": "string" }, + "autostart": { "type": "boolean" }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/service.schema.json b/schemas/service.schema.json new file mode 100644 index 0000000..a6d4326 --- /dev/null +++ b/schemas/service.schema.json @@ -0,0 +1,24 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "service.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "type", "status", "host", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "type": { "type": "string" }, + "status": { "type": "string" }, + "host": { "type": "string" }, + "domains": { "type": "array", "items": { "type": "string" } }, + "ports": { "type": "array", "items": { "type": "integer" } }, + "criticality": { "type": "string", "enum": ["low", "medium", "high", "critical"] }, + "docs": { "type": "string" }, + "runbook": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/traffic-route.schema.json b/schemas/traffic-route.schema.json new file mode 100644 index 0000000..86116f8 --- /dev/null +++ b/schemas/traffic-route.schema.json @@ -0,0 +1,26 @@ +{ + "$schema": 
"https://json-schema.org/draft/2020-12/schema", + "$id": "traffic-route.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "status", "source", "entrypoint", "path", "destination", "protocols", "ports", "exposure", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "status": { "type": "string" }, + "source": { "type": "string" }, + "entrypoint": { "type": "string" }, + "path": { "type": "array", "items": { "type": "string" } }, + "destination": { "type": "string" }, + "protocols": { "type": "array", "items": { "type": "string" } }, + "ports": { "type": "array", "items": { "type": "integer" } }, + "exposure": { "type": "string", "enum": ["public", "vpn", "local", "private"] }, + "used_by": { "type": "array", "items": { "type": "string" } }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/schemas/virtual-machine.schema.json b/schemas/virtual-machine.schema.json new file mode 100644 index 0000000..7ab195a --- /dev/null +++ b/schemas/virtual-machine.schema.json @@ -0,0 +1,28 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "virtual-machine.schema.json", + "type": "array", + "items": { + "type": "object", + "required": ["id", "name", "status", "hypervisor_host", "virtualization_stack", "docs", "last_reviewed"], + "properties": { + "id": { "type": "string" }, + "name": { "type": "string" }, + "status": { "type": "string", "enum": ["running", "shut_off", "paused", "unknown"] }, + "uuid": { "type": "string" }, + "hypervisor_host": { "type": "string" }, + "virtualization_stack": { "type": "string" }, + "libvirt_connection": { "type": "string" }, + "os_type": { "type": "string" }, + "vcpus": { "type": "integer", "minimum": 0 }, + "memory_mib": { "type": "integer", "minimum": 0 }, + "autostart": { "type": "boolean" }, + 
"addresses": { "type": "array", "items": { "type": "string" } }, + "runs_services": { "type": "array", "items": { "type": "string" } }, + "docs": { "type": "string" }, + "last_reviewed": { "type": "string" }, + "source_of_truth": { "type": "string" } + }, + "additionalProperties": true + } +} diff --git a/server/README.md b/server/README.md new file mode 100644 index 0000000..95895e2 --- /dev/null +++ b/server/README.md @@ -0,0 +1,49 @@ +# Gnexus Book Server + +FastAPI backend for reading Gnexus Book documentation and inventory. + +## Development + +From the repository root: + +```bash +cd server +python3 -m venv .venv +. .venv/bin/activate +pip install -e ".[dev]" +uvicorn app.main:app --reload --host 127.0.0.1 --port 8080 +``` + +## Current API + +- Swagger UI: `GET /api-docs` +- `GET /health` +- `GET /docs` +- `GET /docs/read?path=...` +- `GET /search?q=...` +- `GET /inventory` +- `GET /inventory/{type}` +- `GET /inventory/{type}/{id}` +- `GET /traffic-routes` +- `GET /health/freshness` +- `GET /validate` +- `GET /changes` +- `GET /changes/{id}` +- `POST /changes` +- `POST /changes/{id}/apply` +- `GET /git/status` +- `GET /git/diff` +- `POST /commit` + +Inventory parsing requires `PyYAML`. + +## Current Limitations + +- Main documentation and inventory files are read-only. +- `POST /changes` can create pending change records under `90-maintenance/pending-changes/`. +- `POST /changes/{id}/apply` can apply `kind=doc` and `kind=inventory-item` changes after validation. +- No authentication yet. +- No commit or review workflow yet. +- `POST /commit` creates a local Git commit only. It does not push. +- Commit requests must provide an explicit file list. +- Validation uses JSON Schema 2020-12 for inventory files. 
diff --git a/server/app/__init__.py b/server/app/__init__.py new file mode 100644 index 0000000..75cb923 --- /dev/null +++ b/server/app/__init__.py @@ -0,0 +1 @@ +"""Gnexus Book server package.""" diff --git a/server/app/config.py b/server/app/config.py new file mode 100644 index 0000000..4d60ba5 --- /dev/null +++ b/server/app/config.py @@ -0,0 +1,17 @@ +from __future__ import annotations + +from functools import lru_cache +from pathlib import Path + + +class Settings: + def __init__(self, repo_root: Path | None = None) -> None: + self.repo_root = repo_root or Path(__file__).resolve().parents[2] + self.docs_extensions = {".md"} + self.inventory_dir = self.repo_root / "40-inventory" + self.generated_dir = self.repo_root / "60-generated" + + +@lru_cache +def get_settings() -> Settings: + return Settings() diff --git a/server/app/docs_repository.py b/server/app/docs_repository.py new file mode 100644 index 0000000..44c0729 --- /dev/null +++ b/server/app/docs_repository.py @@ -0,0 +1,87 @@ +from __future__ import annotations + +from dataclasses import asdict +from pathlib import Path + +from .config import Settings +from .markdown import parse_frontmatter + + +EXCLUDED_PARTS = { + ".git", + ".pytest_cache", + ".ruff_cache", + ".venv", + "__pycache__", + "gnexus_book_server.egg-info", + "node_modules", +} + + +class RepositoryError(ValueError): + pass + + +class DocsRepository: + def __init__(self, settings: Settings) -> None: + self.settings = settings + self.repo_root = settings.repo_root.resolve() + + def _resolve_repo_path(self, path: str) -> Path: + candidate = (self.repo_root / path).resolve() + if self.repo_root not in candidate.parents and candidate != self.repo_root: + raise RepositoryError("Path escapes repository root") + return candidate + + def list_docs(self) -> list[dict[str, object]]: + docs: list[dict[str, object]] = [] + for path in sorted(self.repo_root.rglob("*.md")): + if any(part in EXCLUDED_PARTS for part in path.parts): + continue + rel = 
path.relative_to(self.repo_root).as_posix() + raw = path.read_text(encoding="utf-8") + parsed = parse_frontmatter(rel, raw) + title = self._extract_title(parsed.body) or path.stem + docs.append( + { + "path": rel, + "title": title, + "frontmatter": parsed.frontmatter, + } + ) + return docs + + def read_doc(self, path: str) -> dict[str, object]: + file_path = self._resolve_repo_path(path) + if file_path.suffix != ".md" or not file_path.is_file(): + raise RepositoryError("Document not found") + rel = file_path.relative_to(self.repo_root).as_posix() + parsed = parse_frontmatter(rel, file_path.read_text(encoding="utf-8")) + return asdict(parsed) + + def search(self, query: str) -> list[dict[str, object]]: + normalized = query.strip().lower() + if not normalized: + return [] + + results: list[dict[str, object]] = [] + for doc in self.list_docs(): + path = str(doc["path"]) + raw = self._resolve_repo_path(path).read_text(encoding="utf-8") + lines = raw.splitlines() + matches = [] + for index, line in enumerate(lines, start=1): + if normalized in line.lower(): + matches.append({"line": index, "text": line.strip()}) + if len(matches) >= 5: + break + if matches: + results.append({"path": path, "title": doc["title"], "matches": matches}) + return results + + @staticmethod + def _extract_title(markdown_body: str) -> str | None: + for line in markdown_body.splitlines(): + if line.startswith("# "): + return line[2:].strip() + return None diff --git a/server/app/freshness.py b/server/app/freshness.py new file mode 100644 index 0000000..8441e66 --- /dev/null +++ b/server/app/freshness.py @@ -0,0 +1,103 @@ +from __future__ import annotations + +from datetime import date, datetime, timedelta +from pathlib import Path + +from .config import Settings +from .docs_repository import DocsRepository + + +REQUIRED_FRONTMATTER = { + "owner", + "status", + "last_reviewed", + "review_interval", + "confidence", + "source_of_truth", +} + + +def build_freshness_report(settings: Settings) -> 
dict[str, object]: + repo = DocsRepository(settings) + docs = repo.list_docs() + issues: list[dict[str, str]] = [] + + for doc in docs: + path = str(doc["path"]) + if path == "README.md" or path.startswith("server/"): + continue + frontmatter = doc.get("frontmatter") or {} + if not isinstance(frontmatter, dict): + continue + + missing = sorted(REQUIRED_FRONTMATTER - set(frontmatter)) + if missing: + issues.append( + { + "path": path, + "severity": "warning", + "code": "missing-frontmatter-fields", + "message": "Missing frontmatter fields: " + ", ".join(missing), + } + ) + continue + + stale = _is_stale(str(frontmatter["last_reviewed"]), str(frontmatter["review_interval"])) + if stale: + issues.append( + { + "path": path, + "severity": "warning", + "code": "stale-document", + "message": "Document is past its review interval", + } + ) + + missing_docs = _find_missing_inventory_docs(settings.repo_root) + for inventory_file, target in missing_docs: + issues.append( + { + "path": inventory_file, + "severity": "error", + "code": "missing-doc-link-target", + "message": f"docs target does not exist: {target}", + } + ) + + return { + "status": "ok" if not issues else "issues", + "document_count": len(docs), + "issue_count": len(issues), + "issues": issues, + } + + +def _is_stale(last_reviewed: str, interval: str) -> bool: + try: + reviewed = datetime.strptime(last_reviewed, "%Y-%m-%d").date() + except ValueError: + return True + + if not interval.endswith("d"): + return True + try: + days = int(interval[:-1]) + except ValueError: + return True + + return reviewed + timedelta(days=days) < date.today() + + +def _find_missing_inventory_docs(repo_root: Path) -> list[tuple[str, str]]: + missing: list[tuple[str, str]] = [] + inventory_dir = repo_root / "40-inventory" + for inventory_file in inventory_dir.glob("*.yml"): + for line in inventory_file.read_text(encoding="utf-8").splitlines(): + stripped = line.strip() + if not stripped.startswith("docs: "): + continue + rel = 
stripped.split(": ", 1)[1].strip().strip("\"'") + target = (inventory_file.parent / rel).resolve() + if not target.exists(): + missing.append((inventory_file.relative_to(repo_root).as_posix(), rel)) + return missing diff --git a/server/app/git_adapter.py b/server/app/git_adapter.py new file mode 100644 index 0000000..3c433b7 --- /dev/null +++ b/server/app/git_adapter.py @@ -0,0 +1,110 @@ +from __future__ import annotations + +import subprocess +from pathlib import Path +from typing import Any + +from pydantic import BaseModel, Field + +from .config import Settings +from .validation import validate_repository + + +DENIED_PARTS = { + ".codex", + ".git", + ".pytest_cache", + ".ruff_cache", + ".venv", + "__pycache__", + "gnexus_book_server.egg-info", + "node_modules", +} + + +class CommitRequest(BaseModel): + summary: str = Field(min_length=1, max_length=200) + details: str = Field(default="", max_length=4000) + files: list[str] = Field(min_length=1) + + +class GitAdapterError(ValueError): + pass + + +class GitAdapter: + def __init__(self, settings: Settings) -> None: + self.settings = settings + self.repo_root = settings.repo_root.resolve() + + def status(self) -> dict[str, Any]: + result = self._git("status", "--short") + entries = [] + for line in result.stdout.splitlines(): + if not line: + continue + entries.append({"status": line[:2], "path": line[3:]}) + return {"entries": entries, "raw": result.stdout} + + def diff(self, files: list[str] | None = None) -> dict[str, str]: + args = ["diff", "--"] + if files: + args.extend(self._validate_paths(files)) + result = self._git(*args) + return {"diff": result.stdout} + + def commit(self, request: CommitRequest) -> dict[str, Any]: + validation = validate_repository(self.settings) + if validation["status"] != "ok": + raise GitAdapterError("Repository validation failed; commit blocked") + + files = self._validate_paths(request.files) + self._git("add", "--", *files) + + staged = self._git("diff", "--cached", 
"--name-only").stdout.splitlines() + if not staged: + raise GitAdapterError("No staged changes to commit") + + message = request.summary.strip() + if request.details.strip(): + message += "\n\n" + request.details.strip() + + result = self._git("commit", "-m", message) + return { + "status": "committed", + "files": staged, + "stdout": result.stdout, + "stderr": result.stderr, + } + + def _validate_paths(self, paths: list[str]) -> list[str]: + valid: list[str] = [] + for raw_path in paths: + path = raw_path.strip() + if not path: + raise GitAdapterError("Empty file path is not allowed") + if path.startswith("/") or "\\" in path: + raise GitAdapterError(f"Invalid repository-relative path: {raw_path}") + + candidate = (self.repo_root / path).resolve() + if self.repo_root not in candidate.parents and candidate != self.repo_root: + raise GitAdapterError(f"Path escapes repository root: {raw_path}") + + rel = candidate.relative_to(self.repo_root).as_posix() + if any(part in DENIED_PARTS for part in Path(rel).parts): + raise GitAdapterError(f"Path is not allowed for commit: {raw_path}") + valid.append(rel) + return valid + + def _git(self, *args: str) -> subprocess.CompletedProcess[str]: + result = subprocess.run( + ["git", *args], + cwd=self.repo_root, + check=False, + text=True, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + ) + if result.returncode != 0: + raise GitAdapterError(result.stderr.strip() or result.stdout.strip() or "Git command failed") + return result diff --git a/server/app/inventory.py b/server/app/inventory.py new file mode 100644 index 0000000..d7438c2 --- /dev/null +++ b/server/app/inventory.py @@ -0,0 +1,49 @@ +from __future__ import annotations + +from pathlib import Path +from typing import Any + +from .config import Settings + + +class InventoryError(ValueError): + pass + + +class InventoryRepository: + def __init__(self, settings: Settings) -> None: + self.settings = settings + self.inventory_dir = settings.inventory_dir.resolve() + + def 
list_types(self) -> list[str]: + return sorted(path.stem for path in self.inventory_dir.glob("*.yml")) + + def read_raw(self, inventory_type: str) -> str: + path = self._path_for_type(inventory_type) + return path.read_text(encoding="utf-8") + + def read_parsed(self, inventory_type: str) -> Any: + try: + import yaml + except ModuleNotFoundError as exc: + raise InventoryError("PyYAML is required for parsed inventory") from exc + + raw = self.read_raw(inventory_type) + return yaml.safe_load(raw) + + def read_item(self, inventory_type: str, item_id: str) -> dict[str, Any]: + data = self.read_parsed(inventory_type) + if not isinstance(data, list): + raise InventoryError("Inventory file does not contain a list") + for item in data: + if isinstance(item, dict) and item.get("id") == item_id: + return item + raise InventoryError("Inventory item not found") + + def _path_for_type(self, inventory_type: str) -> Path: + if "/" in inventory_type or "\\" in inventory_type or inventory_type.startswith("."): + raise InventoryError("Invalid inventory type") + path = self.inventory_dir / f"{inventory_type}.yml" + if not path.is_file(): + raise InventoryError("Inventory type not found") + return path diff --git a/server/app/main.py b/server/app/main.py new file mode 100644 index 0000000..d528e5b --- /dev/null +++ b/server/app/main.py @@ -0,0 +1,172 @@ +from __future__ import annotations + +from fastapi import Depends, FastAPI, HTTPException, Query + +from .config import Settings, get_settings +from .docs_repository import DocsRepository, RepositoryError +from .freshness import build_freshness_report +from .git_adapter import CommitRequest, GitAdapter, GitAdapterError +from .inventory import InventoryError, InventoryRepository +from .pending_changes import ( + PendingChangeError, + PendingChangeRepository, + ProposedChangeRequest, +) +from .validation import validate_repository + + +app = FastAPI( + title="Gnexus Book Server", + version="0.1.0", + description="Read-only documentation 
and inventory API for Gnexus Book.", + docs_url="/api-docs", + swagger_ui_oauth2_redirect_url="/api-docs/oauth2-redirect", + redoc_url="/api-redoc", +) + + +def get_docs_repo(settings: Settings = Depends(get_settings)) -> DocsRepository: + return DocsRepository(settings) + + +def get_inventory_repo(settings: Settings = Depends(get_settings)) -> InventoryRepository: + return InventoryRepository(settings) + + +def get_pending_change_repo(settings: Settings = Depends(get_settings)) -> PendingChangeRepository: + return PendingChangeRepository(settings) + + +def get_git_adapter(settings: Settings = Depends(get_settings)) -> GitAdapter: + return GitAdapter(settings) + + +@app.get("/health") +def health(settings: Settings = Depends(get_settings)) -> dict[str, str]: + return {"status": "ok", "repo_root": settings.repo_root.as_posix()} + + +@app.get("/docs") +def list_docs(repo: DocsRepository = Depends(get_docs_repo)) -> list[dict[str, object]]: + return repo.list_docs() + + +@app.get("/docs/read") +def read_doc(path: str = Query(...), repo: DocsRepository = Depends(get_docs_repo)) -> dict[str, object]: + try: + return repo.read_doc(path) + except RepositoryError as exc: + raise HTTPException(status_code=404, detail=str(exc)) from exc + + +@app.get("/search") +def search(q: str = Query(...), repo: DocsRepository = Depends(get_docs_repo)) -> list[dict[str, object]]: + return repo.search(q) + + +@app.get("/inventory") +def list_inventory_types(repo: InventoryRepository = Depends(get_inventory_repo)) -> list[str]: + return repo.list_types() + + +@app.get("/inventory/{inventory_type}") +def read_inventory( + inventory_type: str, + raw: bool = False, + repo: InventoryRepository = Depends(get_inventory_repo), +) -> object: + try: + if raw: + return {"type": inventory_type, "raw": repo.read_raw(inventory_type)} + return repo.read_parsed(inventory_type) + except InventoryError as exc: + raise HTTPException(status_code=404, detail=str(exc)) from exc + + 
+@app.get("/inventory/{inventory_type}/{item_id}") +def read_inventory_item( + inventory_type: str, + item_id: str, + repo: InventoryRepository = Depends(get_inventory_repo), +) -> dict[str, object]: + try: + return repo.read_item(inventory_type, item_id) + except InventoryError as exc: + raise HTTPException(status_code=404, detail=str(exc)) from exc + + +@app.get("/traffic-routes") +def read_traffic_routes(repo: InventoryRepository = Depends(get_inventory_repo)) -> object: + try: + return repo.read_parsed("traffic-routes") + except InventoryError as exc: + raise HTTPException(status_code=404, detail=str(exc)) from exc + + +@app.get("/health/freshness") +def freshness(settings: Settings = Depends(get_settings)) -> dict[str, object]: + return build_freshness_report(settings) + + +@app.get("/validate") +def validate(settings: Settings = Depends(get_settings)) -> dict[str, object]: + return validate_repository(settings) + + +@app.get("/changes") +def list_changes(repo: PendingChangeRepository = Depends(get_pending_change_repo)) -> list[dict[str, object]]: + return repo.list() + + +@app.get("/changes/{change_id}") +def read_change( + change_id: str, + repo: PendingChangeRepository = Depends(get_pending_change_repo), +) -> dict[str, object]: + try: + return repo.get(change_id) + except PendingChangeError as exc: + raise HTTPException(status_code=404, detail=str(exc)) from exc + + +@app.post("/changes") +def propose_change( + request: ProposedChangeRequest, + repo: PendingChangeRepository = Depends(get_pending_change_repo), +) -> dict[str, object]: + return repo.create(request) + + +@app.post("/changes/{change_id}/apply") +def apply_change( + change_id: str, + repo: PendingChangeRepository = Depends(get_pending_change_repo), +) -> dict[str, object]: + try: + return repo.apply(change_id) + except PendingChangeError as exc: + raise HTTPException(status_code=400, detail=str(exc)) from exc + + +@app.get("/git/status") +def git_status(git: GitAdapter = 
Depends(get_git_adapter)) -> dict[str, object]: + try: + return git.status() + except GitAdapterError as exc: + raise HTTPException(status_code=400, detail=str(exc)) from exc + + +@app.get("/git/diff") +def git_diff(files: list[str] | None = Query(default=None), git: GitAdapter = Depends(get_git_adapter)) -> dict[str, str]: + try: + return git.diff(files) + except GitAdapterError as exc: + raise HTTPException(status_code=400, detail=str(exc)) from exc + + +@app.post("/commit") +def commit(request: CommitRequest, git: GitAdapter = Depends(get_git_adapter)) -> dict[str, object]: + try: + return git.commit(request) + except GitAdapterError as exc: + raise HTTPException(status_code=400, detail=str(exc)) from exc diff --git a/server/app/markdown.py b/server/app/markdown.py new file mode 100644 index 0000000..91539b3 --- /dev/null +++ b/server/app/markdown.py @@ -0,0 +1,34 @@ +from __future__ import annotations + +from dataclasses import dataclass + + +@dataclass(frozen=True) +class MarkdownDocument: + path: str + frontmatter: dict[str, str] + body: str + raw: str + + +def parse_frontmatter(path: str, raw: str) -> MarkdownDocument: + if not raw.startswith("---\n"): + return MarkdownDocument(path=path, frontmatter={}, body=raw, raw=raw) + + end = raw.find("\n---\n", 4) + if end == -1: + return MarkdownDocument(path=path, frontmatter={}, body=raw, raw=raw) + + metadata_text = raw[4:end] + body = raw[end + 5 :] + frontmatter: dict[str, str] = {} + + for line in metadata_text.splitlines(): + if not line.strip() or line.lstrip().startswith("#"): + continue + if ":" not in line: + continue + key, value = line.split(":", 1) + frontmatter[key.strip()] = value.strip().strip('"').strip("'") + + return MarkdownDocument(path=path, frontmatter=frontmatter, body=body, raw=raw) diff --git a/server/app/pending_changes.py b/server/app/pending_changes.py new file mode 100644 index 0000000..96ebdcd --- /dev/null +++ b/server/app/pending_changes.py @@ -0,0 +1,233 @@ +from __future__ import 
annotations + +import json +from datetime import UTC, datetime +from pathlib import Path +from typing import Any, Literal +from uuid import uuid4 + +import yaml +from pydantic import BaseModel, Field + +from .config import Settings +from .validation import validate_repository + + +ChangeKind = Literal["doc", "inventory", "inventory-item"] +ChangeStatus = Literal["pending", "applied", "rejected"] + + +class ProposedChangeRequest(BaseModel): + kind: ChangeKind + target: str = Field(min_length=1) + summary: str = Field(min_length=1, max_length=200) + reason: str = Field(default="", max_length=2000) + payload: dict[str, Any] + + +class PendingChange(BaseModel): + id: str + kind: ChangeKind + target: str + summary: str + reason: str + payload: dict[str, Any] + status: ChangeStatus + created_at: str + updated_at: str + + +class PendingChangeError(ValueError): + pass + + +class PendingChangeRepository: + def __init__(self, settings: Settings) -> None: + self.settings = settings + self.directory = settings.repo_root / "90-maintenance" / "pending-changes" + self.directory.mkdir(parents=True, exist_ok=True) + + def list(self) -> list[dict[str, Any]]: + changes = [self._read_file(path) for path in sorted(self.directory.glob("*.json"))] + return sorted(changes, key=lambda item: item["created_at"], reverse=True) + + def get(self, change_id: str) -> dict[str, Any]: + path = self._path_for_id(change_id) + if not path.exists(): + raise PendingChangeError("Pending change not found") + return self._read_file(path) + + def create(self, request: ProposedChangeRequest) -> dict[str, Any]: + now = datetime.now(UTC).replace(microsecond=0).isoformat().replace("+00:00", "Z") + change = PendingChange( + id=self._new_id(), + kind=request.kind, + target=request.target, + summary=request.summary, + reason=request.reason, + payload=request.payload, + status="pending", + created_at=now, + updated_at=now, + ) + path = self._path_for_id(change.id) + path.write_text( + 
json.dumps(change.model_dump(), indent=2, ensure_ascii=False) + "\n", + encoding="utf-8", + ) + return change.model_dump() + + def apply(self, change_id: str) -> dict[str, Any]: + change = self.get(change_id) + if change["status"] != "pending": + raise PendingChangeError("Only pending changes can be applied") + + if change["kind"] == "doc": + return self._apply_doc_change(change) + if change["kind"] == "inventory-item": + return self._apply_inventory_item_change(change) + raise PendingChangeError("Only doc and inventory-item changes can be applied in the current MVP") + + def _apply_doc_change(self, change: dict[str, Any]) -> dict[str, Any]: + target = self._resolve_doc_target(change["target"]) + content = self._extract_doc_content(change) + previous_content = target.read_text(encoding="utf-8") if target.exists() else None + + target.parent.mkdir(parents=True, exist_ok=True) + target.write_text(content, encoding="utf-8") + + validation = validate_repository(self.settings) + if validation["status"] != "ok": + if previous_content is None: + target.unlink(missing_ok=True) + else: + target.write_text(previous_content, encoding="utf-8") + raise PendingChangeError("Applied change failed validation and was rolled back") + + now = datetime.now(UTC).replace(microsecond=0).isoformat().replace("+00:00", "Z") + change["status"] = "applied" + change["updated_at"] = now + change["applied_at"] = now + change["validation"] = validation + self._write_change(change) + return change + + def _apply_inventory_item_change(self, change: dict[str, Any]) -> dict[str, Any]: + inventory_type, item_id = self._parse_inventory_item_target(change["target"]) + target = self._resolve_inventory_target(inventory_type) + previous_content = target.read_text(encoding="utf-8") + + data = yaml.safe_load(previous_content) + if data is None: + data = [] + if not isinstance(data, list): + raise PendingChangeError("Inventory file must contain a list") + + mode = self._extract_inventory_mode(change) + patch = 
self._extract_inventory_patch(change) + found = False + for item in data: + if isinstance(item, dict) and item.get("id") == item_id: + if mode == "create": + raise PendingChangeError("Inventory item already exists") + item.update(patch) + found = True + break + if not found and mode == "update": + raise PendingChangeError("Inventory item not found") + if not found and mode == "create": + data.append({"id": item_id, **patch}) + + target.write_text("---\n" + yaml.safe_dump(data, sort_keys=False, allow_unicode=True), encoding="utf-8") + + validation = validate_repository(self.settings) + if validation["status"] != "ok": + target.write_text(previous_content, encoding="utf-8") + raise PendingChangeError("Applied change failed validation and was rolled back") + + now = datetime.now(UTC).replace(microsecond=0).isoformat().replace("+00:00", "Z") + change["status"] = "applied" + change["updated_at"] = now + change["applied_at"] = now + change["validation"] = validation + self._write_change(change) + return change + + def _new_id(self) -> str: + return datetime.now(UTC).strftime("%Y%m%d%H%M%S") + "-" + uuid4().hex[:8] + + def _path_for_id(self, change_id: str) -> Path: + if "/" in change_id or "\\" in change_id or change_id.startswith("."): + raise PendingChangeError("Invalid pending change id") + return self.directory / f"{change_id}.json" + + def _resolve_doc_target(self, target: str) -> Path: + if not target.endswith(".md"): + raise PendingChangeError("Doc change target must be a Markdown file") + path = (self.settings.repo_root / target).resolve() + repo_root = self.settings.repo_root.resolve() + if repo_root not in path.parents and path != repo_root: + raise PendingChangeError("Doc change target escapes repository root") + if any(part in {".git", ".venv", "server/.venv", "node_modules"} for part in path.parts): + raise PendingChangeError("Doc change target is not allowed") + return path + + def _resolve_inventory_target(self, inventory_type: str) -> Path: + if "/" in 
inventory_type or "\\" in inventory_type or inventory_type.startswith("."): + raise PendingChangeError("Invalid inventory type") + path = (self.settings.inventory_dir / f"{inventory_type}.yml").resolve() + inventory_dir = self.settings.inventory_dir.resolve() + if inventory_dir not in path.parents and path.parent != inventory_dir: + raise PendingChangeError("Inventory target escapes inventory directory") + if not path.exists(): + raise PendingChangeError("Inventory type not found") + return path + + @staticmethod + def _parse_inventory_item_target(target: str) -> tuple[str, str]: + parts = target.split("/") + if len(parts) != 2 or not parts[0] or not parts[1]: + raise PendingChangeError("Inventory item target must use '/'") + return parts[0], parts[1] + + @staticmethod + def _extract_doc_content(change: dict[str, Any]) -> str: + payload = change.get("payload") + if not isinstance(payload, dict): + raise PendingChangeError("Doc change payload must be an object") + content = payload.get("content") + if not isinstance(content, str) or not content.strip(): + raise PendingChangeError("Doc change payload.content must be a non-empty string") + if not content.endswith("\n"): + content += "\n" + return content + + @staticmethod + def _extract_inventory_patch(change: dict[str, Any]) -> dict[str, Any]: + payload = change.get("payload") + if not isinstance(payload, dict): + raise PendingChangeError("Inventory change payload must be an object") + patch = payload.get("patch", payload) + if not isinstance(patch, dict) or not patch: + raise PendingChangeError("Inventory change patch must be a non-empty object") + if "id" in patch: + raise PendingChangeError("Inventory item id cannot be changed") + return patch + + @staticmethod + def _extract_inventory_mode(change: dict[str, Any]) -> str: + payload = change.get("payload") + if not isinstance(payload, dict): + raise PendingChangeError("Inventory change payload must be an object") + mode = payload.get("mode", "update") + if mode not 
in {"update", "create"}: + raise PendingChangeError("Inventory change mode must be 'update' or 'create'") + return mode + + @staticmethod + def _read_file(path: Path) -> dict[str, Any]: + return json.loads(path.read_text(encoding="utf-8")) + + def _write_change(self, change: dict[str, Any]) -> None: + path = self._path_for_id(str(change["id"])) + path.write_text(json.dumps(change, indent=2, ensure_ascii=False) + "\n", encoding="utf-8") diff --git a/server/app/validation.py b/server/app/validation.py new file mode 100644 index 0000000..e6e7f01 --- /dev/null +++ b/server/app/validation.py @@ -0,0 +1,229 @@ +from __future__ import annotations + +import json +from datetime import date +from dataclasses import dataclass +from pathlib import Path +from typing import Any + +import yaml +from jsonschema import Draft202012Validator +from jsonschema.exceptions import SchemaError + +from .config import Settings +from .docs_repository import DocsRepository +from .freshness import REQUIRED_FRONTMATTER + + +SCHEMA_BY_INVENTORY = { + "backups": "backup.schema.json", + "databases": "database.schema.json", + "domains": "domain.schema.json", + "hardware": "hardware.schema.json", + "hosts": "host.schema.json", + "networks": "network.schema.json", + "services": "service.schema.json", + "traffic-routes": "traffic-route.schema.json", + "virtual-machines": "virtual-machine.schema.json", +} + + +@dataclass(frozen=True) +class ValidationIssue: + path: str + severity: str + code: str + message: str + + +def validate_repository(settings: Settings) -> dict[str, object]: + issues: list[ValidationIssue] = [] + issues.extend(_validate_schema_json(settings.repo_root / "schemas")) + issues.extend(_validate_markdown_frontmatter(settings)) + issues.extend(_validate_inventory(settings)) + issues.extend(_validate_inventory_doc_links(settings.repo_root)) + + serialized = [issue.__dict__ for issue in issues] + return { + "status": "ok" if not serialized else "issues", + "issue_count": len(serialized), + 
"issues": serialized, + } + + +def _validate_schema_json(schema_dir: Path) -> list[ValidationIssue]: + issues: list[ValidationIssue] = [] + for path in sorted(schema_dir.glob("*.json")): + try: + schema = json.loads(path.read_text(encoding="utf-8")) + Draft202012Validator.check_schema(schema) + except json.JSONDecodeError as exc: + issues.append( + ValidationIssue( + path=path.as_posix(), + severity="error", + code="invalid-json-schema", + message=f"{exc.msg} at line {exc.lineno}, column {exc.colno}", + ) + ) + except SchemaError as exc: + issues.append( + ValidationIssue( + path=path.as_posix(), + severity="error", + code="invalid-json-schema", + message=exc.message, + ) + ) + return issues + + +def _validate_markdown_frontmatter(settings: Settings) -> list[ValidationIssue]: + issues: list[ValidationIssue] = [] + repo = DocsRepository(settings) + for doc in repo.list_docs(): + path = str(doc["path"]) + if path == "README.md" or path.startswith("server/"): + continue + frontmatter = doc.get("frontmatter") + if not isinstance(frontmatter, dict) or not frontmatter: + issues.append( + ValidationIssue( + path=path, + severity="error", + code="missing-frontmatter", + message="Markdown document is missing frontmatter", + ) + ) + continue + missing = sorted(REQUIRED_FRONTMATTER - set(frontmatter)) + if missing: + issues.append( + ValidationIssue( + path=path, + severity="error", + code="missing-frontmatter-fields", + message="Missing frontmatter fields: " + ", ".join(missing), + ) + ) + return issues + + +def _validate_inventory(settings: Settings) -> list[ValidationIssue]: + issues: list[ValidationIssue] = [] + schema_dir = settings.repo_root / "schemas" + + for inventory_type, schema_name in SCHEMA_BY_INVENTORY.items(): + inventory_path = settings.inventory_dir / f"{inventory_type}.yml" + schema_path = schema_dir / schema_name + if not inventory_path.exists(): + issues.append( + ValidationIssue( + path=inventory_path.relative_to(settings.repo_root).as_posix(), + 
severity="error", + code="missing-inventory-file", + message=f"Inventory file is missing for type {inventory_type}", + ) + ) + continue + + try: + data = yaml.safe_load(inventory_path.read_text(encoding="utf-8")) + except yaml.YAMLError as exc: + issues.append( + ValidationIssue( + path=inventory_path.relative_to(settings.repo_root).as_posix(), + severity="error", + code="invalid-yaml", + message=str(exc), + ) + ) + continue + + try: + schema = json.loads(schema_path.read_text(encoding="utf-8")) + except (OSError, json.JSONDecodeError) as exc: + issues.append( + ValidationIssue( + path=schema_path.relative_to(settings.repo_root).as_posix(), + severity="error", + code="unreadable-schema", + message=str(exc), + ) + ) + continue + + issues.extend(_validate_against_schema(settings.repo_root, inventory_path, data, schema)) + + return issues + + +def _validate_against_schema( + repo_root: Path, + inventory_path: Path, + data: Any, + schema: dict[str, Any], +) -> list[ValidationIssue]: + rel_path = inventory_path.relative_to(repo_root).as_posix() + if data is None: + data = [] + data = _normalize_yaml_scalars(data) + + issues: list[ValidationIssue] = [] + validator = Draft202012Validator(schema) + for error in sorted(validator.iter_errors(data), key=lambda item: list(item.path)): + location = _format_json_path(error.path) + issues.append( + ValidationIssue( + path=rel_path, + severity="error", + code="json-schema-validation-error", + message=f"{location}: {error.message}", + ) + ) + return issues + + +def _format_json_path(path: Any) -> str: + parts = list(path) + if not parts: + return "$" + formatted = "$" + for part in parts: + if isinstance(part, int): + formatted += f"[{part}]" + else: + formatted += f".{part}" + return formatted + + +def _normalize_yaml_scalars(value: Any) -> Any: + if isinstance(value, date): + return value.isoformat() + if isinstance(value, list): + return [_normalize_yaml_scalars(item) for item in value] + if isinstance(value, dict): + return 
{key: _normalize_yaml_scalars(item) for key, item in value.items()} + return value + + +def _validate_inventory_doc_links(repo_root: Path) -> list[ValidationIssue]: + issues: list[ValidationIssue] = [] + inventory_dir = repo_root / "40-inventory" + for inventory_file in sorted(inventory_dir.glob("*.yml")): + for line in inventory_file.read_text(encoding="utf-8").splitlines(): + stripped = line.strip() + if not stripped.startswith("docs: "): + continue + rel = stripped.split(": ", 1)[1].strip().strip("\"'") + target = (inventory_file.parent / rel).resolve() + if not target.exists(): + issues.append( + ValidationIssue( + path=inventory_file.relative_to(repo_root).as_posix(), + severity="error", + code="missing-doc-link-target", + message=f"docs target does not exist: {rel}", + ) + ) + return issues diff --git a/server/pyproject.toml b/server/pyproject.toml new file mode 100644 index 0000000..01ec7f9 --- /dev/null +++ b/server/pyproject.toml @@ -0,0 +1,21 @@ +[project] +name = "gnexus-book-server" +version = "0.1.0" +description = "Read-only API and validation foundation for Gnexus Book documentation." 
+requires-python = ">=3.11" +dependencies = [ + "fastapi>=0.115.0", + "jsonschema>=4.23.0", + "uvicorn[standard]>=0.30.0", + "pyyaml>=6.0.0", +] + +[project.optional-dependencies] +dev = [ + "pytest>=8.0.0", + "httpx>=0.27.0", +] + +[tool.pytest.ini_options] +testpaths = ["tests"] +pythonpath = ["."] diff --git a/server/tests/test_api.py b/server/tests/test_api.py new file mode 100644 index 0000000..44a2baf --- /dev/null +++ b/server/tests/test_api.py @@ -0,0 +1,123 @@ +from app.config import Settings, get_settings +from app.docs_repository import DocsRepository +from app.git_adapter import GitAdapter +from app.inventory import InventoryRepository +from app.main import ( + apply_change, + git_diff, + git_status, + health, + list_docs, + list_inventory_types, + read_doc, + read_change, + read_inventory, + read_inventory_item, + read_traffic_routes, + validate, +) +from app.pending_changes import PendingChangeRepository, ProposedChangeRequest + + +def test_health_endpoint() -> None: + response = health(get_settings()) + + assert response["status"] == "ok" + + +def test_docs_endpoint() -> None: + response = list_docs(DocsRepository(get_settings())) + + paths = {doc["path"] for doc in response} + assert "10-systems/hardware/hp-proliant-dl380-g6.md" in paths + + +def test_read_doc_endpoint() -> None: + response = read_doc( + path="10-systems/virtualization/libvirt-vms.md", + repo=DocsRepository(get_settings()), + ) + + assert response["frontmatter"]["source_of_truth"] == "ssh-libvirt" + + +def test_inventory_types_endpoint() -> None: + response = list_inventory_types(InventoryRepository(get_settings())) + + assert "virtual-machines" in response + assert "traffic-routes" in response + + +def test_inventory_endpoint() -> None: + response = read_inventory( + inventory_type="virtual-machines", + repo=InventoryRepository(get_settings()), + ) + + assert isinstance(response, list) + assert len(response) >= 20 + assert any(item["id"] == "gnauth" for item in response) + + +def 
test_inventory_item_endpoint() -> None: + response = read_inventory_item( + inventory_type="virtual-machines", + item_id="gnauth", + repo=InventoryRepository(get_settings()), + ) + + assert response["hypervisor_host"] == "hp-proliant-dl380-g6" + + +def test_traffic_routes_endpoint() -> None: + response = read_traffic_routes(InventoryRepository(get_settings())) + + assert response[0]["id"] == "public-gnexus-space-to-internal-nginx" + + +def test_validate_endpoint_is_clean() -> None: + response = validate(get_settings()) + + assert response["status"] == "ok" + assert response["issues"] == [] + + +def test_changes_endpoints(tmp_path) -> None: + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="doc", + target="10-systems/example.md", + summary="Add example doc", + payload={"content": "# Example"}, + ) + ) + + response = read_change(created["id"], repo) + + assert response["id"] == created["id"] + assert response["kind"] == "doc" + + +def test_apply_change_rejects_missing_change(tmp_path) -> None: + repo = PendingChangeRepository(Settings(tmp_path)) + + try: + apply_change("missing", repo) + except Exception as exc: + assert "Pending change not found" in str(exc) + else: + raise AssertionError("Expected an exception") + + +def test_git_status_endpoint() -> None: + response = git_status(GitAdapter(get_settings())) + + assert "entries" in response + assert "raw" in response + + +def test_git_diff_endpoint() -> None: + response = git_diff(None, GitAdapter(get_settings())) + + assert "diff" in response diff --git a/server/tests/test_docs_repository.py b/server/tests/test_docs_repository.py new file mode 100644 index 0000000..029c8f2 --- /dev/null +++ b/server/tests/test_docs_repository.py @@ -0,0 +1,49 @@ +from pathlib import Path + +from app.config import Settings +from app.docs_repository import DocsRepository +from app.freshness import build_freshness_report +from app.validation import validate_repository + + 
+def test_lists_docs_from_repo_root() -> None: + repo_root = Path(__file__).resolve().parents[2] + docs = DocsRepository(Settings(repo_root)).list_docs() + + paths = {doc["path"] for doc in docs} + assert "10-systems/hardware/hp-proliant-dl380-g6.md" in paths + assert "90-maintenance/documentation-rules.md" in paths + + +def test_reads_frontmatter() -> None: + repo_root = Path(__file__).resolve().parents[2] + doc = DocsRepository(Settings(repo_root)).read_doc("10-systems/hardware/hp-proliant-dl380-g6.md") + + assert doc["frontmatter"]["owner"] == "gmikcon" + assert "KVM/QEMU" in doc["body"] + + +def test_freshness_report_has_no_missing_doc_links() -> None: + repo_root = Path(__file__).resolve().parents[2] + report = build_freshness_report(Settings(repo_root)) + + missing_link_issues = [ + issue for issue in report["issues"] if issue["code"] == "missing-doc-link-target" + ] + assert missing_link_issues == [] + + +def test_freshness_report_is_clean_for_knowledge_docs() -> None: + repo_root = Path(__file__).resolve().parents[2] + report = build_freshness_report(Settings(repo_root)) + + assert report["status"] == "ok" + assert report["issues"] == [] + + +def test_validation_report_is_clean() -> None: + repo_root = Path(__file__).resolve().parents[2] + report = validate_repository(Settings(repo_root)) + + assert report["status"] == "ok" + assert report["issues"] == [] diff --git a/server/tests/test_git_adapter.py b/server/tests/test_git_adapter.py new file mode 100644 index 0000000..7942c54 --- /dev/null +++ b/server/tests/test_git_adapter.py @@ -0,0 +1,96 @@ +import subprocess + +from app.config import Settings +from app.git_adapter import CommitRequest, GitAdapter, GitAdapterError + + +def _copy_schema_files(tmp_path) -> None: + (tmp_path / "schemas").mkdir() + source_schema = Settings().repo_root / "schemas" + for schema in source_schema.glob("*.json"): + (tmp_path / "schemas" / schema.name).write_text(schema.read_text(), encoding="utf-8") + + +def 
_create_empty_inventory(tmp_path) -> None: + (tmp_path / "40-inventory").mkdir() + for name in [ + "backups", + "databases", + "domains", + "hardware", + "hosts", + "networks", + "services", + "traffic-routes", + "virtual-machines", + ]: + (tmp_path / "40-inventory" / f"{name}.yml").write_text("---\n[]\n", encoding="utf-8") + + +def _init_git_repo(tmp_path) -> None: + subprocess.run(["git", "init"], cwd=tmp_path, check=True, stdout=subprocess.PIPE) + subprocess.run(["git", "config", "user.email", "test@example.invalid"], cwd=tmp_path, check=True) + subprocess.run(["git", "config", "user.name", "Test User"], cwd=tmp_path, check=True) + + +def _write_valid_doc(tmp_path, path: str) -> None: + target = tmp_path / path + target.parent.mkdir(parents=True, exist_ok=True) + target.write_text( + "---\n" + "owner: gmikcon\n" + "status: active\n" + "last_reviewed: 2026-05-09\n" + "review_interval: 90d\n" + "confidence: medium\n" + "source_of_truth: test\n" + "---\n\n" + "# Test\n", + encoding="utf-8", + ) + + +def test_rejects_denied_paths(tmp_path) -> None: + adapter = GitAdapter(Settings(tmp_path)) + + try: + adapter._validate_paths([".codex"]) + except GitAdapterError as exc: + assert "not allowed" in str(exc) + else: + raise AssertionError("Expected GitAdapterError") + + +def test_commit_blocks_when_validation_fails(tmp_path) -> None: + _init_git_repo(tmp_path) + adapter = GitAdapter(Settings(tmp_path)) + (tmp_path / "README.md").write_text("README\n", encoding="utf-8") + + try: + adapter.commit(CommitRequest(summary="Test commit", files=["README.md"])) + except GitAdapterError as exc: + assert "validation failed" in str(exc) + else: + raise AssertionError("Expected GitAdapterError") + + +def test_commit_selected_files(tmp_path) -> None: + _init_git_repo(tmp_path) + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + _write_valid_doc(tmp_path, "10-systems/test.md") + adapter = GitAdapter(Settings(tmp_path)) + + result = adapter.commit( + CommitRequest( + 
summary="Add test documentation", + files=[ + "10-systems/test.md", + "40-inventory", + "schemas", + ], + ) + ) + + assert result["status"] == "committed" + assert "10-systems/test.md" in result["files"] diff --git a/server/tests/test_pending_changes.py b/server/tests/test_pending_changes.py new file mode 100644 index 0000000..40937f1 --- /dev/null +++ b/server/tests/test_pending_changes.py @@ -0,0 +1,333 @@ +from app.config import Settings +from app.pending_changes import PendingChangeError, PendingChangeRepository, ProposedChangeRequest + + +def test_creates_and_reads_pending_change(tmp_path) -> None: + repo = PendingChangeRepository(Settings(tmp_path)) + request = ProposedChangeRequest( + kind="inventory-item", + target="virtual-machines/gnauth", + summary="Update gnauth VM metadata", + reason="Document newly confirmed service mapping.", + payload={"runs_services": ["gnexus-auth"]}, + ) + + created = repo.create(request) + fetched = repo.get(created["id"]) + changes = repo.list() + + assert created["status"] == "pending" + assert fetched["summary"] == "Update gnauth VM metadata" + assert changes[0]["id"] == created["id"] + assert (tmp_path / "90-maintenance" / "pending-changes" / f"{created['id']}.json").exists() + + +def _copy_schema_files(tmp_path) -> None: + (tmp_path / "schemas").mkdir() + source_schema = Settings().repo_root / "schemas" + for schema in source_schema.glob("*.json"): + (tmp_path / "schemas" / schema.name).write_text(schema.read_text(), encoding="utf-8") + + +def _create_empty_inventory(tmp_path) -> None: + (tmp_path / "40-inventory").mkdir() + for name in [ + "backups", + "databases", + "domains", + "hardware", + "hosts", + "networks", + "services", + "traffic-routes", + "virtual-machines", + ]: + (tmp_path / "40-inventory" / f"{name}.yml").write_text("---\n[]\n", encoding="utf-8") + + +def test_applies_doc_change(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + repo = 
PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="doc", + target="10-systems/example.md", + summary="Add example doc", + payload={ + "content": ( + "---\n" + "owner: gmikcon\n" + "status: active\n" + "last_reviewed: 2026-05-09\n" + "review_interval: 90d\n" + "confidence: medium\n" + "source_of_truth: test\n" + "---\n\n" + "# Example\n" + ) + }, + ) + ) + + applied = repo.apply(created["id"]) + + assert applied["status"] == "applied" + assert applied["validation"]["status"] == "ok" + assert (tmp_path / "10-systems" / "example.md").exists() + + +def test_rolls_back_doc_change_when_validation_fails(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="doc", + target="10-systems/bad.md", + summary="Add invalid doc", + payload={"content": "# Missing Frontmatter\n"}, + ) + ) + + try: + repo.apply(created["id"]) + except PendingChangeError as exc: + assert "failed validation" in str(exc) + else: + raise AssertionError("Expected PendingChangeError") + + assert not (tmp_path / "10-systems" / "bad.md").exists() + assert repo.get(created["id"])["status"] == "pending" + + +def test_rejects_non_doc_apply(tmp_path) -> None: + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory", + target="virtual-machines", + summary="Update inventory", + payload={"items": []}, + ) + ) + + try: + repo.apply(created["id"]) + except PendingChangeError as exc: + assert "Only doc and inventory-item changes" in str(exc) + else: + raise AssertionError("Expected PendingChangeError") + + +def test_applies_inventory_item_change(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + inventory = tmp_path / "40-inventory" / "virtual-machines.yml" + inventory.write_text( + "---\n" + "- id: gnauth\n" + " name: gnauth\n" 
+ " status: running\n" + " hypervisor_host: hp-proliant-dl380-g6\n" + " virtualization_stack: kvm-libvirt\n" + " docs: ../10-systems/virtualization/libvirt-vms.md\n" + " last_reviewed: 2026-05-09\n", + encoding="utf-8", + ) + docs = tmp_path / "10-systems" / "virtualization" + docs.mkdir(parents=True) + (docs / "libvirt-vms.md").write_text( + "---\n" + "owner: gmikcon\n" + "status: active\n" + "last_reviewed: 2026-05-09\n" + "review_interval: 90d\n" + "confidence: medium\n" + "source_of_truth: test\n" + "---\n\n" + "# Libvirt VMs\n", + encoding="utf-8", + ) + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory-item", + target="virtual-machines/gnauth", + summary="Add gnauth service mapping", + payload={"patch": {"runs_services": ["gnexus-auth"]}}, + ) + ) + + applied = repo.apply(created["id"]) + + assert applied["status"] == "applied" + assert "gnexus-auth" in inventory.read_text(encoding="utf-8") + + +def test_inventory_item_change_rolls_back_on_validation_error(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + inventory = tmp_path / "40-inventory" / "virtual-machines.yml" + original = ( + "---\n" + "- id: gnauth\n" + " name: gnauth\n" + " status: running\n" + " hypervisor_host: hp-proliant-dl380-g6\n" + " virtualization_stack: kvm-libvirt\n" + " docs: ../10-systems/virtualization/libvirt-vms.md\n" + " last_reviewed: 2026-05-09\n" + ) + inventory.write_text(original, encoding="utf-8") + docs = tmp_path / "10-systems" / "virtualization" + docs.mkdir(parents=True) + (docs / "libvirt-vms.md").write_text( + "---\n" + "owner: gmikcon\n" + "status: active\n" + "last_reviewed: 2026-05-09\n" + "review_interval: 90d\n" + "confidence: medium\n" + "source_of_truth: test\n" + "---\n\n" + "# Libvirt VMs\n", + encoding="utf-8", + ) + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory-item", + 
target="virtual-machines/gnauth", + summary="Break VM status", + payload={"patch": {"status": "invalid-status"}}, + ) + ) + + try: + repo.apply(created["id"]) + except PendingChangeError as exc: + assert "failed validation" in str(exc) + else: + raise AssertionError("Expected PendingChangeError") + + assert inventory.read_text(encoding="utf-8") == original + + +def test_creates_inventory_item(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + docs = tmp_path / "10-systems" / "virtualization" + docs.mkdir(parents=True) + (docs / "libvirt-vms.md").write_text( + "---\n" + "owner: gmikcon\n" + "status: active\n" + "last_reviewed: 2026-05-09\n" + "review_interval: 90d\n" + "confidence: medium\n" + "source_of_truth: test\n" + "---\n\n" + "# Libvirt VMs\n", + encoding="utf-8", + ) + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory-item", + target="virtual-machines/new-vm", + summary="Add new VM", + payload={ + "mode": "create", + "patch": { + "name": "new-vm", + "status": "running", + "hypervisor_host": "hp-proliant-dl380-g6", + "virtualization_stack": "kvm-libvirt", + "docs": "../10-systems/virtualization/libvirt-vms.md", + "last_reviewed": "2026-05-09", + }, + }, + ) + ) + + applied = repo.apply(created["id"]) + + assert applied["status"] == "applied" + assert "new-vm" in (tmp_path / "40-inventory" / "virtual-machines.yml").read_text( + encoding="utf-8" + ) + + +def test_create_inventory_item_rejects_duplicate(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + inventory = tmp_path / "40-inventory" / "virtual-machines.yml" + inventory.write_text( + "---\n" + "- id: gnauth\n" + " name: gnauth\n" + " status: running\n" + " hypervisor_host: hp-proliant-dl380-g6\n" + " virtualization_stack: kvm-libvirt\n" + " docs: ../10-systems/virtualization/libvirt-vms.md\n" + " last_reviewed: 2026-05-09\n", + encoding="utf-8", + ) + repo = 
PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory-item", + target="virtual-machines/gnauth", + summary="Duplicate VM", + payload={ + "mode": "create", + "patch": { + "name": "gnauth", + "status": "running", + "hypervisor_host": "hp-proliant-dl380-g6", + "virtualization_stack": "kvm-libvirt", + "docs": "../10-systems/virtualization/libvirt-vms.md", + "last_reviewed": "2026-05-09", + }, + }, + ) + ) + + try: + repo.apply(created["id"]) + except PendingChangeError as exc: + assert "already exists" in str(exc) + else: + raise AssertionError("Expected PendingChangeError") + + +def test_create_inventory_item_rolls_back_on_validation_error(tmp_path) -> None: + _copy_schema_files(tmp_path) + _create_empty_inventory(tmp_path) + inventory = tmp_path / "40-inventory" / "virtual-machines.yml" + original = inventory.read_text(encoding="utf-8") + repo = PendingChangeRepository(Settings(tmp_path)) + created = repo.create( + ProposedChangeRequest( + kind="inventory-item", + target="virtual-machines/bad-vm", + summary="Add invalid VM", + payload={ + "mode": "create", + "patch": { + "name": "bad-vm", + "status": "invalid-status", + }, + }, + ) + ) + + try: + repo.apply(created["id"]) + except PendingChangeError as exc: + assert "failed validation" in str(exc) + else: + raise AssertionError("Expected PendingChangeError") + + assert inventory.read_text(encoding="utf-8") == original diff --git a/ui/README.md b/ui/README.md new file mode 100644 index 0000000..18311bf --- /dev/null +++ b/ui/README.md @@ -0,0 +1,46 @@ +--- +owner: gmikcon +status: draft +last_reviewed: 2026-05-09 +review_interval: 30d +confidence: high +source_of_truth: owner-confirmed +--- + +# Gnexus Book UI + +The Gnexus Book UI should be a custom Vue.js application based on `gnexus-ui-kit`. + +`gnexus-ui-kit` is the official UI kit and design-system layer for Gnexus projects. 
It should be consumed as a dependency so the shared visual style can be managed centrally across projects.
+
+Source UI kit repository:
+
+```text
+https://git.gnexus.space/root/gnexus-ui-kit
+```
+
+## MVP Role
+
+The first UI is not a document editor.
+
+It should provide:
+
+- documentation browsing;
+- inventory browsing;
+- validation and freshness reports;
+- traffic route inspection;
+- inspection of recent local changes and commits, where practical.
+
+All write operations should initially go through the backend API and agent workflows.
+
+## Frontend Decision
+
+Use:
+
+- Vue.js;
+- `gnexus-ui-kit` as a dependency;
+- the FastAPI backend under `server/`.
+
+Do not replace this with MkDocs as the primary UI.
+
+Avoid copying UI kit styles into this project unless there is a temporary integration reason. Prefer upstream changes in `gnexus-ui-kit` for shared style decisions.
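
The validation and freshness views described above will need typed access to the server's reports. A minimal client-side sketch, under two assumptions not fixed by this plan: that the backend exposes the validation report over HTTP at a route such as `/api/validate` (hypothetical; the concrete route is not specified here), and that issue objects carry the same fields as the server's `ValidationIssue` dataclass in `server/app/validation.py` (`path`, `severity`, `code`, `message`):

```typescript
// Typed model of the validation report produced by validate_repository().
// The field names mirror the server's ValidationIssue dataclass; the
// "/api/validate" route below is an assumption, not a confirmed endpoint.

interface ValidationIssue {
  path: string;
  severity: string;
  code: string;
  message: string;
}

interface ValidationReport {
  status: "ok" | "issues";
  issue_count: number;
  issues: ValidationIssue[];
}

// Group issues by code so the UI can render one badge per issue type.
function countByCode(report: ValidationReport): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const issue of report.issues) {
    counts[issue.code] = (counts[issue.code] ?? 0) + 1;
  }
  return counts;
}

// Fetch the report from the backend; baseUrl is the FastAPI server origin.
async function fetchValidationReport(baseUrl: string): Promise<ValidationReport> {
  const response = await fetch(`${baseUrl}/api/validate`);
  if (!response.ok) {
    throw new Error(`Validation request failed: ${response.status}`);
  }
  return (await response.json()) as ValidationReport;
}
```

The grouping helper is UI-side convenience only; the canonical report shape stays defined by the server, so any schema change there should flow into these interfaces rather than the other way around.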