diff --git a/.github/agents/dead-code.agent.md b/.github/agents/dead-code.agent.md index d7309cd5..9799b852 100644 --- a/.github/agents/dead-code.agent.md +++ b/.github/agents/dead-code.agent.md @@ -3,43 +3,43 @@ description: "Dead code finder agent. Use when: searching for unused exports, un tools: [read, search] --- -You are a senior software engineer specializing in codebase hygiene. Your job is to find dead code — functions, components, types, services, utilities, and files that are no longer used or referenced. +You are a senior software engineer specializing in codebase hygiene. Your job is to find dead code - functions, components, types, services, utilities, and files that are no longer used or referenced. ## Project Context This is a **Next.js 16 (App Router)** + **TypeScript** + **Prisma** project. Key architecture: - **Path alias**: `@/` maps to `src/` -- **Server Actions**: `src/app/actions/*.ts` — thin wrappers calling services -- **Services**: `src/services/*.ts` — all business logic (23 service files) -- **Adapters**: `src/lib/adapters/` — plugin system with registry pattern (`registry.register()`) -- **Components**: `src/components/` — no barrel exports, direct path imports -- **Runner pipeline**: `src/lib/runner/steps/` — step-based backup execution -- **Hooks**: `src/hooks/` — custom React hooks +- **Server Actions**: `src/app/actions/*.ts` - thin wrappers calling services +- **Services**: `src/services/*.ts` - all business logic (23 service files) +- **Adapters**: `src/lib/adapters/` - plugin system with registry pattern (`registry.register()`) +- **Components**: `src/components/` - no barrel exports, direct path imports +- **Runner pipeline**: `src/lib/runner/steps/` - step-based backup execution +- **Hooks**: `src/hooks/` - custom React hooks - **Tests**: `tests/unit/`, `tests/integration/` ## What Counts as Dead Code ### High Confidence (Report Always) -1. 
**Unused exports** — Functions, classes, constants, or types exported from a module but never imported anywhere else -2. **Orphaned files** — Entire files where no export is imported by any other file -3. **Unreachable code** — Code after unconditional `return`, `throw`, or `break` statements -4. **Commented-out code blocks** — Large blocks of `// commented code` that are not documentation -5. **Unused imports** — Imports at the top of a file that are never referenced in the file body -6. **Dead feature flags / environment checks** — Conditions that always evaluate the same way +1. **Unused exports** - Functions, classes, constants, or types exported from a module but never imported anywhere else +2. **Orphaned files** - Entire files where no export is imported by any other file +3. **Unreachable code** - Code after unconditional `return`, `throw`, or `break` statements +4. **Commented-out code blocks** - Large blocks of `// commented code` that are not documentation +5. **Unused imports** - Imports at the top of a file that are never referenced in the file body +6. **Dead feature flags / environment checks** - Conditions that always evaluate the same way ### Medium Confidence (Report with Context) -7. **Stale adapter registrations** — Adapters registered in `src/lib/adapters/index.ts` but whose class is never instantiated via the registry -8. **Unused Zod schemas** — Schemas defined in `src/lib/adapters/definitions.ts` but never used for validation -9. **Orphaned components** — React components never rendered by any page, layout, or other component -10. **Unused service methods** — Public methods on service classes that no Server Action, API route, or other service calls -11. **Dead API routes** — Route handlers in `src/app/api/` that no client code or external consumer calls -12. **Unused Prisma model fields** — Fields defined in `prisma/schema.prisma` that are never selected, written, or queried +7. 
**Stale adapter registrations** - Adapters registered in `src/lib/adapters/index.ts` but whose class is never instantiated via the registry +8. **Unused Zod schemas** - Schemas defined in `src/lib/adapters/definitions.ts` but never used for validation +9. **Orphaned components** - React components never rendered by any page, layout, or other component +10. **Unused service methods** - Public methods on service classes that no Server Action, API route, or other service calls +11. **Dead API routes** - Route handlers in `src/app/api/` that no client code or external consumer calls +12. **Unused Prisma model fields** - Fields defined in `prisma/schema.prisma` that are never selected, written, or queried ### Low Confidence (Report as Suspects) -13. **Potentially dead utilities** — Functions in `src/lib/utils.ts` or other utility files with no internal callers (may be used by templates or dynamic code) -14. **Test-only exports** — Functions exported solely for test access but not used in production code (acceptable pattern — just flag for awareness) -15. **Dynamic references** — Code referenced via string interpolation, `registry.get()`, or `eval()` (cannot statically confirm as dead) +13. **Potentially dead utilities** - Functions in `src/lib/utils.ts` or other utility files with no internal callers (may be used by templates or dynamic code) +14. **Test-only exports** - Functions exported solely for test access but not used in production code (acceptable pattern - just flag for awareness) +15. 
**Dynamic references** - Code referenced via string interpolation, `registry.get()`, or `eval()` (cannot statically confirm as dead) ## Analysis Strategy @@ -100,10 +100,10 @@ implements NAME ## Important Exceptions (NOT Dead Code) Do NOT flag these as dead code: -- **Next.js conventions**: `page.tsx`, `layout.tsx`, `route.ts`, `loading.tsx`, `error.tsx`, `not-found.tsx` — auto-discovered by Next.js +- **Next.js conventions**: `page.tsx`, `layout.tsx`, `route.ts`, `loading.tsx`, `error.tsx`, `not-found.tsx` - auto-discovered by Next.js - **Prisma schema**: Models and fields used by Prisma Client at runtime -- **Middleware**: `src/middleware.ts` — auto-loaded by Next.js -- **Instrumentation**: `src/instrumentation.ts` — auto-loaded by Next.js +- **Middleware**: `src/middleware.ts` - auto-loaded by Next.js +- **Instrumentation**: `src/instrumentation.ts` - auto-loaded by Next.js - **Docker/CI files**: `docker-entrypoint.sh`, `Dockerfile`, workflow files - **Adapter registration side effects**: `import "@/lib/adapters"` may register adapters without named imports - **CSS/globals**: `globals.css`, CSS modules diff --git a/.github/agents/permission-audit.agent.md b/.github/agents/permission-audit.agent.md index aeb6a035..f9e3b14d 100644 --- a/.github/agents/permission-audit.agent.md +++ b/.github/agents/permission-audit.agent.md @@ -8,14 +8,14 @@ You are a senior access-control engineer auditing the RBAC system of this Next.j ## Permission System Overview ### Constants & Types -- **Permission constants**: `src/lib/permissions.ts` — `PERMISSIONS` object with categories (USERS, GROUPS, SOURCES, DESTINATIONS, JOBS, STORAGE, HISTORY, AUDIT, NOTIFICATIONS, VAULT, PROFILE, SETTINGS, API_KEYS) +- **Permission constants**: `src/lib/permissions.ts` - `PERMISSIONS` object with categories (USERS, GROUPS, SOURCES, DESTINATIONS, JOBS, STORAGE, HISTORY, AUDIT, NOTIFICATIONS, VAULT, PROFILE, SETTINGS, API_KEYS) - **Permission type**: `Permission` union type - **Access control 
functions**: `src/lib/access-control.ts` ### Guard Functions There are two patterns used in this codebase: -**Pattern 1 — Server Actions** (`src/app/actions/*.ts`): +**Pattern 1 - Server Actions** (`src/app/actions/*.ts`): ```typescript await checkPermission(PERMISSIONS.CATEGORY.ACTION); ``` @@ -23,7 +23,7 @@ await checkPermission(PERMISSIONS.CATEGORY.ACTION); - Throws `PermissionError` if the user lacks the permission - Also handles authentication (redirects if no session) -**Pattern 2 — API Routes** (`src/app/api/**/route.ts`): +**Pattern 2 - API Routes** (`src/app/api/**/route.ts`): ```typescript const authContext = await getAuthContext(await headers()); if (!authContext) return NextResponse.json({ error: "Unauthorized" }, { status: 401 }); @@ -62,28 +62,28 @@ Verify the permission used matches the resource and operation: | Resource | Read | Write/Create/Update/Delete | Special | |----------|------|---------------------------|---------| -| Users | `users:read` | `users:write` | — | -| Groups | `groups:read` | `groups:write` | — | -| Sources | `sources:read` | `sources:write` | — | -| Destinations | `destinations:read` | `destinations:write` | — | +| Users | `users:read` | `users:write` | - | +| Groups | `groups:read` | `groups:write` | - | +| Sources | `sources:read` | `sources:write` | - | +| Destinations | `destinations:read` | `destinations:write` | - | | Jobs | `jobs:read` | `jobs:write` | `jobs:execute` | | Storage | `storage:read` | `storage:delete` | `storage:download`, `storage:restore` | -| History | `history:read` | — | — | -| Audit | `audit:read` | — | — | -| Notifications | `notifications:read` | `notifications:write` | — | -| Vault | `vault:read` | `vault:write` | — | -| Settings | `settings:read` | `settings:write` | — | -| API Keys | `api-keys:read` | `api-keys:write` | — | -| Profile | — | — | `profile:update_name`, `profile:update_email`, `profile:update_password`, `profile:manage_2fa`, `profile:manage_passkeys` | +| History | `history:read` | - 
| - | +| Audit | `audit:read` | - | - | +| Notifications | `notifications:read` | `notifications:write` | - | +| Vault | `vault:read` | `vault:write` | - | +| Settings | `settings:read` | `settings:write` | - | +| API Keys | `api-keys:read` | `api-keys:write` | - | +| Profile | - | - | `profile:update_name`, `profile:update_email`, `profile:update_password`, `profile:manage_2fa`, `profile:manage_passkeys` | ### Cross-Cutting Concerns -- Services (`src/services/*.ts`) must NOT do their own permission checks — that's the caller's responsibility +- Services (`src/services/*.ts`) must NOT do their own permission checks - that's the caller's responsibility - Middleware (`src/middleware.ts`) handles route-level authentication but NOT fine-grained permissions - Scheduled/internal jobs bypass permission checks (they run as system) ## Known Patterns to Watch For -1. **Dead code guards**: `if (false) { checkPermission(...) }` — effectively disables the check +1. **Dead code guards**: `if (false) { checkPermission(...) }` - effectively disables the check 2. **Permission after data fetch**: Loading sensitive data BEFORE checking permission → information leak 3. **Wrong permission level**: Using `READ` for a mutation, or `WRITE` for a delete on storage 4. 
**Missing guards on new endpoints**: Recently added routes that might not have been wired up @@ -92,7 +92,7 @@ Verify the permission used matches the resource and operation: ## Constraints -- DO NOT modify any code — this is a read-only audit +- DO NOT modify any code - this is a read-only audit - DO NOT run any commands or tests - Only report findings with specific file paths, line numbers, and severity diff --git a/.github/agents/security-audit.agent.md b/.github/agents/security-audit.agent.md index ee14828b..74cd7f2e 100644 --- a/.github/agents/security-audit.agent.md +++ b/.github/agents/security-audit.agent.md @@ -9,16 +9,16 @@ You are a senior application security engineer specializing in Node.js/TypeScrip Focus on these vulnerability categories (OWASP Top 10 + project-specific): -1. **Injection** — SQL injection (Prisma raw queries), NoSQL injection, OS command injection (child_process, exec, spawn), XSS (unsanitized output in React) -2. **Broken Access Control** — Missing `checkPermission()` calls in Server Actions/API routes, privilege escalation, IDOR -3. **Cryptographic Failures** — Weak algorithms, hardcoded keys, improper IV/nonce handling, missing auth tags -4. **Insecure Design** — Race conditions in queue/job processing, TOCTOU issues, unsafe temp file handling -5. **Security Misconfiguration** — Overly permissive CORS, missing security headers, debug endpoints in production -6. **Authentication Failures** — Session handling issues, missing auth checks, token leaks -7. **SSRF** — User-controlled URLs passed to fetch/http without validation -8. **Secret Exposure** — Credentials in logs, error messages leaking internals, env vars in client bundles -9. **Path Traversal** — Unsanitized file paths in backup/restore/storage operations -10. **Dependency Risks** — Known vulnerable patterns in how external tools (mysqldump, pg_dump, mongodump) are invoked +1. 
**Injection** - SQL injection (Prisma raw queries), NoSQL injection, OS command injection (child_process, exec, spawn), XSS (unsanitized output in React) +2. **Broken Access Control** - Missing `checkPermission()` calls in Server Actions/API routes, privilege escalation, IDOR +3. **Cryptographic Failures** - Weak algorithms, hardcoded keys, improper IV/nonce handling, missing auth tags +4. **Insecure Design** - Race conditions in queue/job processing, TOCTOU issues, unsafe temp file handling +5. **Security Misconfiguration** - Overly permissive CORS, missing security headers, debug endpoints in production +6. **Authentication Failures** - Session handling issues, missing auth checks, token leaks +7. **SSRF** - User-controlled URLs passed to fetch/http without validation +8. **Secret Exposure** - Credentials in logs, error messages leaking internals, env vars in client bundles +9. **Path Traversal** - Unsanitized file paths in backup/restore/storage operations +10. **Dependency Risks** - Known vulnerable patterns in how external tools (mysqldump, pg_dump, mongodump) are invoked ## Approach @@ -32,7 +32,7 @@ Focus on these vulnerability categories (OWASP Top 10 + project-specific): ## Constraints -- DO NOT modify any code — this is a read-only audit +- DO NOT modify any code - this is a read-only audit - DO NOT run any commands or tests - DO NOT review styling, UI layout, or non-security concerns - ONLY report findings with specific file paths, line numbers, and severity ratings diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 8d371436..4e03c4f2 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -10,6 +10,7 @@ Self-hosted web app for automating database backups (MySQL, PostgreSQL, MongoDB) ## Language & Commands - **Response**: German (Deutsch) | **Code/Comments**: English - **Package Manager**: Always use `pnpm` (e.g., `pnpm dev`, `pnpm add`, `pnpm test`) +- **Typography**: Never use em dashes 
(`—`). Use a hyphen (`-`) instead.

## Architecture (4 Layers)

diff --git a/.github/instructions/changelog.instructions.md b/.github/instructions/changelog.instructions.md
index fa75291a..aa3fd56a 100644
--- a/.github/instructions/changelog.instructions.md
+++ b/.github/instructions/changelog.instructions.md
@@ -17,7 +17,7 @@ Every changelog entry uses a **bold component prefix** followed by a description

## Section Headings

-Entries are grouped under emoji-prefixed `###` headings within each version. Only include sections that have entries. Sections must appear in **exactly this order** — never rearrange:
+Entries are grouped under emoji-prefixed `###` headings within each version. Only include sections that have entries. Sections must appear in **exactly this order** - never rearrange:

| Order | Section | Use for |
|---|---|---|

@@ -70,14 +70,14 @@ Tag rules:

## Rules

-1. **Grouped sections** — Entries are organized under `###` section headings, not a flat list.
-2. **Bold component prefix** — Every entry starts with `**component**:` to identify the affected area.
-3. **One line per entry** — Each entry is a single bullet point. Max 1-2 sentences.
-4. **No implementation details** — No file paths, function names, or technical internals. Those belong in git commits.
-5. **Chronological order** — Newest version at the top.
-6. **No separators** — Do not add `---` between versions. VitePress renders them automatically.
-7. **Docker section last** — `### 🐳 Docker` is always the final section in a version block.
-8. **Omit empty sections** — Only include section headings that have at least one entry.
+1. **Grouped sections** - Entries are organized under `###` section headings, not a flat list.
+2. **Bold component prefix** - Every entry starts with `**component**:` to identify the affected area.
+3. **One line per entry** - Each entry is a single bullet point. Max 1-2 sentences.
+4. 
**No implementation details** - No file paths, function names, or technical internals. Those belong in git commits. +5. **Chronological order** - Newest version at the top. +6. **No separators** - Do not add `---` between versions. VitePress renders them automatically. +7. **Docker section last** - `### 🐳 Docker` is always the final section in a version block. +8. **Omit empty sections** - Only include section headings that have at least one entry. ## Example diff --git a/.github/instructions/docs.instructions.md b/.github/instructions/docs.instructions.md index 55f77795..583afa99 100644 --- a/.github/instructions/docs.instructions.md +++ b/.github/instructions/docs.instructions.md @@ -7,7 +7,7 @@ applyTo: "wiki/**/*.md" ## Language - **Content language**: English -- **Tone**: Clear, concise, practical — write for self-hosters and sysadmins +- **Tone**: Clear, concise, practical - write for self-hosters and sysadmins - **Avoid filler**: No marketing fluff, no restating the obvious ## Unified Adapter Guide Structure @@ -65,10 +65,10 @@ Error message or symptom ### Rules -1. **One config table** — Do NOT split into "Basic Settings" and "Advanced Settings". One table, all fields. -2. **Required column** — Every config table must have a "Required" column (✅ / ❌). -3. **Consistent field names** — Use the exact label shown in the DBackup UI. -4. **Provider examples as collapsible** — External service setup (Gmail, MinIO, Synology, etc.) goes in `
<details>` blocks:
+1. **One config table** - Do NOT split into "Basic Settings" and "Advanced Settings". One table, all fields.
+2. **Required column** - Every config table must have a "Required" column (✅ / ❌).
+3. **Consistent field names** - Use the exact label shown in the DBackup UI.
+4. **Provider examples as collapsible** - External service setup (Gmail, MinIO, Synology, etc.) goes in `
<details>` blocks:

```markdown
<details>
<summary>Gmail SMTP Setup</summary>

@@ -77,16 +77,16 @@ Error message or symptom

</details>
``` -5. **No comparison tables in individual guides** — Comparisons belong in the category index page only. -6. **No "Best Practices" laundry lists** — Integrate tips as `::: tip` callouts where relevant, or omit. -7. **Troubleshooting limit** — Max 5 entries per guide. Focus on errors users actually hit. -8. **Line budget** — Aim for 100–200 lines per adapter guide. Exceeding 250 is a warning sign. +5. **No comparison tables in individual guides** - Comparisons belong in the category index page only. +6. **No "Best Practices" laundry lists** - Integrate tips as `::: tip` callouts where relevant, or omit. +7. **Troubleshooting limit** - Max 5 entries per guide. Focus on errors users actually hit. +8. **Line budget** - Aim for 100–200 lines per adapter guide. Exceeding 250 is a warning sign. ## Index Pages (Overview) Each category (sources, destinations, notifications) has an index page with: 1. A table of all adapters with links -2. A "Choosing" section (brief prose or bullet comparison — not full paragraphs per adapter) +2. A "Choosing" section (brief prose or bullet comparison - not full paragraphs per adapter) 3. Common setup steps (if shared across adapters) 4. Links to individual adapter guides @@ -118,7 +118,7 @@ Use `
<details>`/`<summary>` for optional/collapsible content (provider examples, a

## Content Principles

-- **Verify claims against code** — Every config field, default value, and feature claim must match `src/lib/adapters/definitions.ts` and the adapter implementation.
-- **Don't document external products** — Link to official docs instead of explaining how Gmail, AWS IAM, or Nginx work.
-- **One source of truth** — Don't repeat information across pages. Link instead.
-- **Screenshots are optional** — Only include if the UI flow is genuinely confusing.
+- **Verify claims against code** - Every config field, default value, and feature claim must match `src/lib/adapters/definitions.ts` and the adapter implementation.
+- **Don't document external products** - Link to official docs instead of explaining how Gmail, AWS IAM, or Nginx work.
+- **One source of truth** - Don't repeat information across pages. Link instead.
+- **Screenshots are optional** - Only include if the UI flow is genuinely confusing.

diff --git a/.github/instructions/workflow.instructions.md b/.github/instructions/workflow.instructions.md
index c6e9049d..50c02b4d 100644
--- a/.github/instructions/workflow.instructions.md
+++ b/.github/instructions/workflow.instructions.md
@@ -4,9 +4,9 @@ applyTo: "**/*"

# Workflow Rules

-## Changelog — Always Update on Every Change
+## Changelog - Always Update on Every Change

-**Whenever you make any change** — feature, bug fix, wiki article, CI/CD, refactor, or docs update — you **must** add a corresponding entry to `wiki/changelog.md` in the same response. Do not defer it.
+**Whenever you make any change** - feature, bug fix, wiki article, CI/CD, refactor, or docs update - you **must** add a corresponding entry to `wiki/changelog.md` in the same response. Do not defer it.
### Finding the active version @@ -30,4 +30,4 @@ Sections must appear in **exactly this order** (skip sections that have no entri | 10 | Docker image info (always last) | `### 🐳 Docker` | - If the section heading already exists in the active version, append to it. If not, create it in the correct position relative to other existing sections. -- **Never reorder** existing sections — always follow the numbered order above. +- **Never reorder** existing sections - always follow the numbered order above. diff --git a/Dockerfile b/Dockerfile index 14fe31fd..44246b99 100644 --- a/Dockerfile +++ b/Dockerfile @@ -6,19 +6,11 @@ FROM node:24-alpine AS base # mongodb-tools -> mongodump # redis -> redis-cli (for Redis backups) # samba-client -> smbclient (for SMB/CIFS storage) -# PostgreSQL Versions Strategy (all versioned explicitly): -# - postgresql14-client (from Alpine 3.17 repo) -> handles PG 12, 13, 14 -# - postgresql16-client (from Alpine 3.23 repo) -> handles PG 15, 16 -# - postgresql17-client (from Alpine 3.23 repo) -> handles PG 17 -# - postgresql18-client (from Alpine 3.23 repo) -> handles PG 18+ - -RUN echo 'http://dl-cdn.alpinelinux.org/alpine/v3.17/main' >> /etc/apk/repositories && \ - apk update && \ +# postgresql18-client -> pg_dump, pg_restore, psql (backward compatible with PG 12-18) + +RUN apk update && \ apk add --no-cache \ mysql-client \ - postgresql14-client \ - postgresql16-client \ - postgresql17-client \ postgresql18-client \ mongodb-tools \ redis \ @@ -31,29 +23,15 @@ RUN echo 'http://dl-cdn.alpinelinux.org/alpine/v3.17/main' >> /etc/apk/repositor zip \ su-exec -# Enable corepack for pnpm support and create PostgreSQL symlinks -# All versioned: postgresql14-client (v3.17), postgresql16/17/18-client (v3.23) +# Enable corepack for pnpm support and symlink PostgreSQL 18 binaries RUN corepack enable && corepack prepare pnpm@10.29.3 --activate && \ - mkdir -p /opt/pg14/bin /opt/pg16/bin /opt/pg17/bin /opt/pg18/bin && \ - ln -sf 
/usr/libexec/postgresql14/pg_dump /opt/pg14/bin/pg_dump && \ - ln -sf /usr/libexec/postgresql14/pg_restore /opt/pg14/bin/pg_restore && \ - ln -sf /usr/libexec/postgresql14/psql /opt/pg14/bin/psql && \ - ln -sf /usr/libexec/postgresql16/pg_dump /opt/pg16/bin/pg_dump && \ - ln -sf /usr/libexec/postgresql16/pg_restore /opt/pg16/bin/pg_restore && \ - ln -sf /usr/libexec/postgresql16/psql /opt/pg16/bin/psql && \ - ln -sf /usr/libexec/postgresql17/pg_dump /opt/pg17/bin/pg_dump && \ - ln -sf /usr/libexec/postgresql17/pg_restore /opt/pg17/bin/pg_restore && \ - ln -sf /usr/libexec/postgresql17/psql /opt/pg17/bin/psql && \ - ln -sf /usr/libexec/postgresql18/pg_dump /opt/pg18/bin/pg_dump && \ - ln -sf /usr/libexec/postgresql18/pg_restore /opt/pg18/bin/pg_restore && \ - ln -sf /usr/libexec/postgresql18/psql /opt/pg18/bin/psql - -# Validate all pg_dump versions resolve correctly (fail-fast on broken symlinks/packages) -RUN /opt/pg14/bin/pg_dump --version | grep -q 'PostgreSQL) 14\.' && \ - /opt/pg16/bin/pg_dump --version | grep -q 'PostgreSQL) 16\.' && \ - /opt/pg17/bin/pg_dump --version | grep -q 'PostgreSQL) 17\.' && \ - /opt/pg18/bin/pg_dump --version | grep -q 'PostgreSQL) 18\.' || \ - (echo "ERROR: pg_dump version validation failed! Check PostgreSQL client packages." && exit 1) + ln -sf /usr/libexec/postgresql18/pg_dump /usr/local/bin/pg_dump && \ + ln -sf /usr/libexec/postgresql18/pg_restore /usr/local/bin/pg_restore && \ + ln -sf /usr/libexec/postgresql18/psql /usr/local/bin/psql + +# Validate pg_dump version resolves correctly (fail-fast on broken symlinks/packages) +RUN pg_dump --version | grep -q 'PostgreSQL) 18\.' || \ + (echo "ERROR: pg_dump version validation failed! Check PostgreSQL 18 client package." && exit 1) # 1. 
Install Dependencies FROM base AS deps diff --git a/README.md b/README.md index cf27f045..1b1f239d 100644 --- a/README.md +++ b/README.md @@ -196,11 +196,11 @@ Open [https://localhost:3000](https://localhost:3000) and create your admin acco Full documentation is available at **[dbackup.app](https://dbackup.app)**: -- [User Guide](https://dbackup.app/user-guide/getting-started) — Installation, configuration, usage -- [API Reference](https://api.dbackup.app) — Interactive REST API documentation -- [Developer Guide](https://dbackup.app/developer-guide/) — Architecture, adapters, contributing -- [Changelog](https://dbackup.app/changelog) — Release history -- [Roadmap](https://dbackup.app/roadmap) — Planned features +- [User Guide](https://dbackup.app/user-guide/getting-started) - Installation, configuration, usage +- [API Reference](https://api.dbackup.app) - Interactive REST API documentation +- [Developer Guide](https://dbackup.app/developer-guide/) - Architecture, adapters, contributing +- [Changelog](https://dbackup.app/changelog) - Release history +- [Roadmap](https://dbackup.app/roadmap) - Planned features ## 🛠️ Development diff --git a/api-docs/openapi.yaml b/api-docs/openapi.yaml index 28847fdf..4125fba8 100644 --- a/api-docs/openapi.yaml +++ b/api-docs/openapi.yaml @@ -1,16 +1,16 @@ openapi: 3.1.0 info: title: DBackup API - version: 1.4.0 + version: 1.4.1 description: | - REST API for DBackup — a self-hosted database backup automation platform with encryption, compression, and smart retention. + REST API for DBackup - a self-hosted database backup automation platform with encryption, compression, and smart retention. ## Authentication DBackup supports two authentication methods: - - **Session Authentication** — Used automatically when logged in via the web UI. Session cookies are sent with each request. - - **API Key Authentication** — For scripts, CI/CD pipelines, and external integrations. Create an API key under **Access Management → API Keys**. 
+ - **Session Authentication** - Used automatically when logged in via the web UI. Session cookies are sent with each request. + - **API Key Authentication** - For scripts, CI/CD pipelines, and external integrations. Create an API key under **Access Management → API Keys**. > API keys do not inherit SuperAdmin privileges. Only explicitly assigned permissions are available. @@ -46,7 +46,7 @@ security: tags: - name: Jobs - description: Manage backup jobs — CRUD operations and manual triggers + description: Manage backup jobs - CRUD operations and manual triggers - name: Executions description: Poll execution status for running or completed backups/restores - name: History @@ -1049,11 +1049,11 @@ paths: Download a ZIP file containing the master encryption key and helper scripts for offline decryption. The ZIP contains: - - `master.key` — Hex-encoded encryption key - - `decrypt_backup.js` — Node.js decryption script - - `decrypt_drag_drop_windows.bat` — Windows drag & drop helper - - `decrypt_linux_mac.sh` — Linux/macOS helper script - - `README.txt` — Usage instructions + - `master.key` - Hex-encoded encryption key + - `decrypt_backup.js` - Node.js decryption script + - `decrypt_drag_drop_windows.bat` - Windows drag & drop helper + - `decrypt_linux_mac.sh` - Linux/macOS helper script + - `README.txt` - Usage instructions > This is a sensitive operation. An audit log entry is created. 
operationId: downloadRecoveryKit @@ -1486,11 +1486,11 @@ components: type: string enum: [Pending, Running, Success, Failed, Partial] description: | - - `Pending` — Queued, waiting for an execution slot - - `Running` — Currently executing - - `Success` — Completed successfully - - `Failed` — Execution failed - - `Partial` — Some destinations succeeded, others failed + - `Pending` - Queued, waiting for an execution slot + - `Running` - Currently executing + - `Success` - Completed successfully + - `Failed` - Execution failed + - `Partial` - Some destinations succeeded, others failed # ─── Adapters ──────────────────────────────────────────────── AdapterConfig: diff --git a/package.json b/package.json index 7749e446..dbf83fc5 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "dbackup", - "version": "1.4.0", + "version": "1.4.1", "private": true, "scripts": { "dev": "next dev", diff --git a/public/openapi.yaml b/public/openapi.yaml index 28847fdf..4125fba8 100644 --- a/public/openapi.yaml +++ b/public/openapi.yaml @@ -1,16 +1,16 @@ openapi: 3.1.0 info: title: DBackup API - version: 1.4.0 + version: 1.4.1 description: | - REST API for DBackup — a self-hosted database backup automation platform with encryption, compression, and smart retention. + REST API for DBackup - a self-hosted database backup automation platform with encryption, compression, and smart retention. ## Authentication DBackup supports two authentication methods: - - **Session Authentication** — Used automatically when logged in via the web UI. Session cookies are sent with each request. - - **API Key Authentication** — For scripts, CI/CD pipelines, and external integrations. Create an API key under **Access Management → API Keys**. + - **Session Authentication** - Used automatically when logged in via the web UI. Session cookies are sent with each request. + - **API Key Authentication** - For scripts, CI/CD pipelines, and external integrations. 
Create an API key under **Access Management → API Keys**. > API keys do not inherit SuperAdmin privileges. Only explicitly assigned permissions are available. @@ -46,7 +46,7 @@ security: tags: - name: Jobs - description: Manage backup jobs — CRUD operations and manual triggers + description: Manage backup jobs - CRUD operations and manual triggers - name: Executions description: Poll execution status for running or completed backups/restores - name: History @@ -1049,11 +1049,11 @@ paths: Download a ZIP file containing the master encryption key and helper scripts for offline decryption. The ZIP contains: - - `master.key` — Hex-encoded encryption key - - `decrypt_backup.js` — Node.js decryption script - - `decrypt_drag_drop_windows.bat` — Windows drag & drop helper - - `decrypt_linux_mac.sh` — Linux/macOS helper script - - `README.txt` — Usage instructions + - `master.key` - Hex-encoded encryption key + - `decrypt_backup.js` - Node.js decryption script + - `decrypt_drag_drop_windows.bat` - Windows drag & drop helper + - `decrypt_linux_mac.sh` - Linux/macOS helper script + - `README.txt` - Usage instructions > This is a sensitive operation. An audit log entry is created. 
      operationId: downloadRecoveryKit
@@ -1486,11 +1486,11 @@ components:
           type: string
           enum: [Pending, Running, Success, Failed, Partial]
           description: |
-            - `Pending` — Queued, waiting for an execution slot
-            - `Running` — Currently executing
-            - `Success` — Completed successfully
-            - `Failed` — Execution failed
-            - `Partial` — Some destinations succeeded, others failed
+            - `Pending` - Queued, waiting for an execution slot
+            - `Running` - Currently executing
+            - `Success` - Completed successfully
+            - `Failed` - Execution failed
+            - `Partial` - Some destinations succeeded, others failed

     # ─── Adapters ────────────────────────────────────────────────
     AdapterConfig:
diff --git a/src/app/actions/api-key.ts b/src/app/actions/api-key.ts
index 17f2ec25..17f1cabc 100644
--- a/src/app/actions/api-key.ts
+++ b/src/app/actions/api-key.ts
@@ -44,7 +44,7 @@ export async function getApiKeys() {
 }

 /**
- * Create a new API key. Returns the raw key ONCE — it will not be shown again.
+ * Create a new API key. Returns the raw key ONCE - it will not be shown again.
 */
 export async function createApiKey(data: CreateApiKeyFormValues) {
   await checkPermission(PERMISSIONS.API_KEYS.WRITE);
@@ -152,7 +152,7 @@ export async function toggleApiKey(id: string, enabled: boolean) {
 }

 /**
- * Rotate an API key — generates a new secret. Returns the new raw key ONCE.
+ * Rotate an API key - generates a new secret. Returns the new raw key ONCE.
 */
 export async function rotateApiKey(id: string) {
   await checkPermission(PERMISSIONS.API_KEYS.WRITE);
diff --git a/src/app/actions/audit-log.ts b/src/app/actions/audit-log.ts
index 9e80bd62..008c11ef 100644
--- a/src/app/actions/audit-log.ts
+++ b/src/app/actions/audit-log.ts
@@ -9,7 +9,7 @@ import { wrapError } from "@/lib/errors";

 const log = logger.child({ action: "audit-log" });

-/** @no-permission-required — Self-service: any authenticated user can log their own login. */
+/** @no-permission-required - Self-service: any authenticated user can log their own login. */
 export async function logLoginSuccess() {
   try {
     const session = await auth.api.getSession({
@@ -17,7 +17,7 @@ export async function logLoginSuccess() {
     });

     if (!session?.user) {
-      return; // Not authenticated — nothing to log
+      return; // Not authenticated - nothing to log
     }

     const reqHeaders = await headers();
diff --git a/src/app/api/adapters/database-stats/route.ts b/src/app/api/adapters/database-stats/route.ts
index 25a5b43a..017b4cff 100644
--- a/src/app/api/adapters/database-stats/route.ts
+++ b/src/app/api/adapters/database-stats/route.ts
@@ -81,7 +81,7 @@ export async function POST(req: NextRequest) {
         serverEdition = testResult.edition;
       }
     } catch {
-      // Non-critical — version info is optional
+      // Non-critical - version info is optional
     }
   }

diff --git a/src/app/api/adapters/dropbox/auth/route.ts b/src/app/api/adapters/dropbox/auth/route.ts
index a8e303f1..382c71f4 100644
--- a/src/app/api/adapters/dropbox/auth/route.ts
+++ b/src/app/api/adapters/dropbox/auth/route.ts
@@ -12,7 +12,7 @@ const log = logger.child({ route: "adapters/dropbox/auth" });
 /**
  * POST /api/adapters/dropbox/auth
  * Generates the Dropbox OAuth authorization URL.
- * Body: { adapterId: string } — The saved adapter config ID to authorize.
+ * Body: { adapterId: string } - The saved adapter config ID to authorize.
 */
 export async function POST(req: NextRequest) {
   const ctx = await getAuthContext(await headers());
@@ -58,7 +58,7 @@ export async function POST(req: NextRequest) {
     adapterId, // state parameter for callback
     "code",
     "offline", // Request offline access to get refresh_token
-    undefined, // scopes — use app-configured scopes
+    undefined, // scopes - use app-configured scopes
     "none",
     false
   );
diff --git a/src/app/api/adapters/google-drive/auth/route.ts b/src/app/api/adapters/google-drive/auth/route.ts
index 39f3bbe9..b4ed89dc 100644
--- a/src/app/api/adapters/google-drive/auth/route.ts
+++ b/src/app/api/adapters/google-drive/auth/route.ts
@@ -17,7 +17,7 @@ const SCOPES = [
 /**
  * POST /api/adapters/google-drive/auth
  * Generates the Google OAuth authorization URL.
- * Body: { adapterId: string } — The saved adapter config ID to authorize.
+ * Body: { adapterId: string } - The saved adapter config ID to authorize.
 */
 export async function POST(req: NextRequest) {
   const ctx = await getAuthContext(await headers());
diff --git a/src/app/api/adapters/onedrive/auth/route.ts b/src/app/api/adapters/onedrive/auth/route.ts
index c02a81f9..b51dc8e0 100644
--- a/src/app/api/adapters/onedrive/auth/route.ts
+++ b/src/app/api/adapters/onedrive/auth/route.ts
@@ -17,7 +17,7 @@ const SCOPES = [
 /**
  * POST /api/adapters/onedrive/auth
  * Generates the Microsoft OAuth authorization URL.
- * Body: { adapterId: string } — The saved adapter config ID to authorize.
+ * Body: { adapterId: string } - The saved adapter config ID to authorize.
 */
 export async function POST(req: NextRequest) {
   const ctx = await getAuthContext(await headers());
diff --git a/src/app/api/adapters/test-ssh/route.ts b/src/app/api/adapters/test-ssh/route.ts
index 11456d0e..e8381577 100644
--- a/src/app/api/adapters/test-ssh/route.ts
+++ b/src/app/api/adapters/test-ssh/route.ts
@@ -129,7 +129,7 @@ async function testMssqlSsh(config: MSSQLConfig, sshHost: string, sshPort: numbe

     return NextResponse.json({
       success: true,
-      message: `SSH connection to ${sshHost}:${sshPort} successful — backup path ${backupPath} is readable and writable`,
+      message: `SSH connection to ${sshHost}:${sshPort} successful - backup path ${backupPath} is readable and writable`,
     });
   } catch (connectError: unknown) {
     sshTransfer.end();
diff --git a/src/app/api/health/route.ts b/src/app/api/health/route.ts
index 457b238e..ff40dcac 100644
--- a/src/app/api/health/route.ts
+++ b/src/app/api/health/route.ts
@@ -5,7 +5,7 @@ import prisma from "@/lib/prisma";
  * Health check endpoint for Docker HEALTHCHECK and monitoring.
  * Returns 200 if the app and database are reachable, 503 otherwise.
  *
- * No authentication required — this is a public endpoint.
+ * No authentication required - this is a public endpoint.
 */
 export async function GET() {
   const start = Date.now();
diff --git a/src/app/api/internal/rate-limit-config/route.ts b/src/app/api/internal/rate-limit-config/route.ts
index 4984d85f..06c9669c 100644
--- a/src/app/api/internal/rate-limit-config/route.ts
+++ b/src/app/api/internal/rate-limit-config/route.ts
@@ -6,7 +6,7 @@ import { getRateLimitConfig } from "@/lib/rate-limit-server";
  * configuration. This route runs in the Node.js runtime and can read
  * from the database via Prisma.
  *
- * No auth required — this endpoint is excluded from middleware matching
+ * No auth required - this endpoint is excluded from middleware matching
  * and only consumed by the middleware itself.
 */
 export const dynamic = "force-dynamic";
diff --git a/src/app/dashboard/history/page.tsx b/src/app/dashboard/history/page.tsx
index f1f5e2d3..78de0a8f 100644
--- a/src/app/dashboard/history/page.tsx
+++ b/src/app/dashboard/history/page.tsx
@@ -293,7 +293,7 @@ function HistoryContent() {
{selectedLog?.status === "Pending" ? "Waiting in queue..." : stage} - {detail && — {detail}} + {detail && - {detail}} {selectedLog?.status === "Running" && progress > 0 && !detail && {progress}%}

Restore Backup

-

Redis restore requires manual steps — follow the wizard below.

+

Redis restore requires manual steps - follow the wizard below.

@@ -546,7 +546,7 @@ export function RestoreClient() { {targetSource && !isLoadingTargetDbs && targetServerVersion && compatibilityIssues.length === 0 && file?.engineVersion && (
- Version compatible — Backup {file.engineVersion} → Target {targetServerVersion} + Version compatible - Backup {file.engineVersion} → Target {targetServerVersion}
)} @@ -844,7 +844,7 @@ export function RestoreClient() { - {db.sizeInBytes != null ? formatBytes(db.sizeInBytes) : '—'} + {db.sizeInBytes != null ? formatBytes(db.sizeInBytes) : '-'} ); diff --git a/src/app/docs/api/layout.tsx b/src/app/docs/api/layout.tsx index 0ce3ac29..51f757e3 100644 --- a/src/app/docs/api/layout.tsx +++ b/src/app/docs/api/layout.tsx @@ -1,7 +1,7 @@ import type { Metadata } from "next"; export const metadata: Metadata = { - title: "API Reference — DBackup", + title: "API Reference - DBackup", description: "Interactive REST API reference for DBackup.", }; diff --git a/src/components/adapter/adapter-form.tsx b/src/components/adapter/adapter-form.tsx index 321611ff..f2f2b028 100644 --- a/src/components/adapter/adapter-form.tsx +++ b/src/components/adapter/adapter-form.tsx @@ -185,7 +185,7 @@ export function AdapterForm({ type, adapters, onSuccess, initialData, onBack }:
           {adapters.length === 1 ? (
-            // Single adapter pre-selected (from picker) — show as read-only badge
+            // Single adapter pre-selected (from picker) - show as read-only badge
 k !== "to") : NOTIFICATION_CONFIG_KEYS;
diff --git a/src/components/adapter/utils.ts b/src/components/adapter/utils.ts
index 89d44090..bddf6006 100644
--- a/src/components/adapter/utils.ts
+++ b/src/components/adapter/utils.ts
@@ -1,5 +1,5 @@
 /**
- * Adapter icon mapping — bundled Iconify icon data.
+ * Adapter icon mapping - bundled Iconify icon data.
 *
 * Icons are imported directly from tree-shakeable @iconify-icons/* packages
 * so they work offline without API calls (important for self-hosted deployments).
@@ -11,7 +11,7 @@
 import type { IconifyIcon } from "@iconify/react";

-// — SVG Logos (primary, multi-colored) —
+// - SVG Logos (primary, multi-colored) -
 import mysqlIcon from "@iconify-icons/logos/mysql-icon";
 import mariadbIcon from "@iconify-icons/logos/mariadb-icon";
 import postgresqlIcon from "@iconify-icons/logos/postgresql";
@@ -28,12 +28,12 @@ import slackIcon from "@iconify-icons/logos/slack-icon";
 import teamsIcon from "@iconify-icons/logos/microsoft-teams";
 import telegramIcon from "@iconify-icons/logos/telegram";

-// — Simple Icons (fallback for brands not in SVG Logos) —
+// - Simple Icons (fallback for brands not in SVG Logos) -
 import mssqlIcon from "@iconify-icons/simple-icons/microsoftsqlserver";
 import minioIcon from "@iconify-icons/simple-icons/minio";
 import hetznerIcon from "@iconify-icons/simple-icons/hetzner";

-// — Material Design Icons (protocol, storage & generic icons) —
+// - Material Design Icons (protocol, storage & generic icons) -
 import harddiskIcon from "@iconify-icons/mdi/harddisk";
 import sshIcon from "@iconify-icons/mdi/ssh";
 import swapVerticalIcon from "@iconify-icons/mdi/swap-vertical";
@@ -58,21 +58,21 @@ const ADAPTER_ICON_MAP: Record = {
   "redis": redisIcon,
   "mssql": mssqlIcon,

-  // Storage — Local
+  // Storage - Local
   "local-filesystem": harddiskIcon,

-  // Storage — S3
+  // Storage - S3
   "s3-aws": awsS3Icon,
   "s3-generic": minioIcon,
   "s3-r2": cloudflareIcon,
   "s3-hetzner": hetznerIcon,

-  // Storage — Cloud Drives
+  // Storage - Cloud Drives
   "google-drive": googleDriveIcon,
   "dropbox": dropboxIcon,
   "onedrive": onedriveIcon,

-  // Storage — Network
+  // Storage - Network
   "sftp": sshIcon,
   "ftp": swapVerticalIcon,
   "webdav": cloudUploadIcon,
diff --git a/src/components/api-keys/api-key-table.tsx b/src/components/api-keys/api-key-table.tsx
index e4aeb5b5..dfac8ef8 100644
--- a/src/components/api-keys/api-key-table.tsx
+++ b/src/components/api-keys/api-key-table.tsx
@@ -67,7 +67,7 @@ export function ApiKeyTable({ data, canManage }: ApiKeyTableProps) {
         success: (result) => {
           if (result.success && result.data) {
             setRevealedKey(result.data.rawKey)
-            return "API key rotated — save the new key now"
+            return "API key rotated - save the new key now"
           }
           throw new Error(result.error)
         },
diff --git a/src/components/dashboard/explorer/database-explorer.tsx b/src/components/dashboard/explorer/database-explorer.tsx
index 29d80f00..869a1348 100644
--- a/src/components/dashboard/explorer/database-explorer.tsx
+++ b/src/components/dashboard/explorer/database-explorer.tsx
@@ -116,7 +116,7 @@ export function DatabaseExplorer({ sources }: DatabaseExplorerProps) {

Database Explorer

- Inspect databases on your configured sources — view sizes, table counts, and server details. + Inspect databases on your configured sources - view sizes, table counts, and server details.

@@ -192,7 +192,7 @@ export function DatabaseExplorer({ sources }: DatabaseExplorerProps) { ) : (

- {selectedAdapter?.adapterId ?? "—"} + {selectedAdapter?.adapterId ?? "-"} {serverVersion && ( v{serverVersion} )} @@ -231,7 +231,7 @@ export function DatabaseExplorer({ sources }: DatabaseExplorerProps) { ) : (

- {hasStats ? formatBytes(totalSize) : "—"} + {hasStats ? formatBytes(totalSize) : "-"}

)}
@@ -308,7 +308,7 @@ export function DatabaseExplorer({ sources }: DatabaseExplorerProps) { - {db.sizeInBytes != null ? formatBytes(db.sizeInBytes) : "—"} + {db.sizeInBytes != null ? formatBytes(db.sizeInBytes) : "-"} {db.tableCount != null ? ( @@ -317,7 +317,7 @@ export function DatabaseExplorer({ sources }: DatabaseExplorerProps) { {db.tableCount} ) : ( - "—" + "-" )} {hasStats && ( diff --git a/src/components/dashboard/storage/storage-history-tab.tsx b/src/components/dashboard/storage/storage-history-tab.tsx index 0fa1583c..084f21cc 100644 --- a/src/components/dashboard/storage/storage-history-tab.tsx +++ b/src/components/dashboard/storage/storage-history-tab.tsx @@ -161,7 +161,7 @@ export const StorageHistoryTab = forwardRef
-

{adapterName} — Storage History

+

{adapterName} - Storage History

Storage usage and backup count over time.

diff --git a/src/components/dashboard/storage/storage-settings-tab.tsx b/src/components/dashboard/storage/storage-settings-tab.tsx index 67447545..eaf6e5e0 100644 --- a/src/components/dashboard/storage/storage-settings-tab.tsx +++ b/src/components/dashboard/storage/storage-settings-tab.tsx @@ -175,7 +175,7 @@ export const StorageSettingsTab = forwardRef
-

{adapterName} — Alerts

+

{adapterName} - Alerts

Configure monitoring alerts for this storage destination. Notifications are sent through the channels configured in Settings > Notifications.

diff --git a/src/components/execution/log-viewer.tsx b/src/components/execution/log-viewer.tsx
index 70701800..064dd3a6 100644
--- a/src/components/execution/log-viewer.tsx
+++ b/src/components/execution/log-viewer.tsx
@@ -65,7 +65,7 @@ export function LogViewer({ logs, className, autoScroll = true, status, executio
     });
   }, [logs]);

-  // Grouping Logic — group by stage, sort by stage order, fill pending stages
+  // Grouping Logic - group by stage, sort by stage order, fill pending stages
   const groupedLogs = useMemo(() => {
     // Detect execution type: explicit prop, or infer from stage names
     const stageOrder = executionType === "Restore"
diff --git a/src/components/settings/certificate-settings.tsx b/src/components/settings/certificate-settings.tsx
index 14d6bfc4..dd90e4e1 100644
--- a/src/components/settings/certificate-settings.tsx
+++ b/src/components/settings/certificate-settings.tsx
@@ -155,7 +155,7 @@ export function CertificateSettings() {
 HTTPS Enabled
 All connections to DBackup are encrypted with TLS.
-            {certInfo.isSelfSigned && " Using a self-signed certificate — browsers will show a security warning on first visit."}
+            {certInfo.isSelfSigned && " Using a self-signed certificate - browsers will show a security warning on first visit."}
 )}
@@ -405,7 +405,7 @@ function CertField({
         highlight ? "text-amber-600 dark:text-amber-400 font-medium" : ""
       }`}
     >
-      {value || "—"}
+      {value || "-"}

   );
diff --git a/src/components/settings/sessions-form.tsx b/src/components/settings/sessions-form.tsx
index d3eaa53b..980c75a1 100644
--- a/src/components/settings/sessions-form.tsx
+++ b/src/components/settings/sessions-form.tsx
@@ -94,7 +94,7 @@ function parseUserAgent(ua: string | null): { browser: BrowserName; os: OsName;
   else if (ua.includes("iPhone") || ua.includes("iPad")) os = "iOS"
   else if (ua.includes("Linux")) os = "Linux"

-  // Detect Browser — order matters: specific browsers before generic Chrome/Safari
+  // Detect Browser - order matters: specific browsers before generic Chrome/Safari
   let browser: BrowserName = "Unknown"
   if (ua.includes("Firefox/")) browser = "Firefox"
   else if (ua.includes("Edg/")) browser = "Edge"
diff --git a/src/lib/adapters/database/mongodb/connection.ts b/src/lib/adapters/database/mongodb/connection.ts
index cf9a28d7..77793afe 100644
--- a/src/lib/adapters/database/mongodb/connection.ts
+++ b/src/lib/adapters/database/mongodb/connection.ts
@@ -94,7 +94,7 @@ export async function getDatabases(config: MongoDBConfig): Promise {
     const mongoshBin = await remoteBinaryCheck(ssh, "mongosh", "mongo");
     const args = buildMongoArgs(config);
-    // Output JSON array of DB names — single print(), parsed in Node
+    // Output JSON array of DB names - single print(), parsed in Node
     const cmd = `${mongoshBin} ${args.join(" ")} --quiet --eval 'print(JSON.stringify(db.adminCommand({listDatabases:1}).databases.map(function(d){return d.name})))'`;
     log.debug("getDatabases SSH command", { cmd: cmd.replace(/--password\s+'[^']*'/, "--password '***'") });
     const result = await ssh.exec(cmd);
@@ -109,7 +109,7 @@ export async function getDatabases(config: MongoDBConfig): Promise {
       throw new Error(`Failed to list databases (code ${result.code}): ${result.stderr || result.stdout}`);
     }

-    // Parse JSON array from stdout — find the line that looks like a JSON array
+    // Parse JSON array from stdout - find the line that looks like a JSON array
     const lines = result.stdout.split('\n').map(s => s.trim()).filter(Boolean);
     const jsonLine = lines.find(l => l.startsWith('['));
@@ -165,7 +165,7 @@ export async function getDatabasesWithStats(config: MongoDBConfig): Promise
 0 && error instanceof Error) {
       // Prepend detail messages so the actual cause is visible
       const details = serverMessages.join(" | ");
-      error.message = `${error.message} — Details: ${details}`;
+      error.message = `${error.message} - Details: ${details}`;
     }

     throw error;
diff --git a/src/lib/adapters/database/mssql/dump.ts b/src/lib/adapters/database/mssql/dump.ts
index bf4a88ff..f6c7d1fe 100644
--- a/src/lib/adapters/database/mssql/dump.ts
+++ b/src/lib/adapters/database/mssql/dump.ts
@@ -119,7 +119,7 @@ export async function dump(
   log(`Executing backup`, "info", "command", backupQuery);

   // Execute backup command on the server, capturing all SQL Server messages.
-  // Use requestTimeout=0 (no timeout) — large DB backups can run for hours.
+  // Use requestTimeout=0 (no timeout) - large DB backups can run for hours.
   // Stream progress messages in real-time so the UI shows live updates.
   await executeQueryWithMessages(config, backupQuery, undefined, 0, (msg) => {
     if (msg.message) {
diff --git a/src/lib/adapters/database/mssql/restore.ts b/src/lib/adapters/database/mssql/restore.ts
index 4b7300ec..d97119ed 100644
--- a/src/lib/adapters/database/mssql/restore.ts
+++ b/src/lib/adapters/database/mssql/restore.ts
@@ -228,7 +228,7 @@ export async function restore(
   log(`Executing restore`, "info", "command", restoreQuery);

   try {
-    // Use requestTimeout=0 (no timeout) — large DB restores can run for hours.
+    // Use requestTimeout=0 (no timeout) - large DB restores can run for hours.
     // Stream progress messages in real-time so the UI shows live updates.
     await executeQueryWithMessages(config, restoreQuery, undefined, 0, (msg) => {
       if (msg.message) {
diff --git a/src/lib/adapters/database/mysql/dump.ts b/src/lib/adapters/database/mysql/dump.ts
index 51ddd113..ff403c01 100644
--- a/src/lib/adapters/database/mysql/dump.ts
+++ b/src/lib/adapters/database/mysql/dump.ts
@@ -167,7 +167,7 @@ export async function dump(config: MySQLDumpConfig, destinationPath: string, onL
   else if (config.database) dbs = [config.database];

   if (dbs.length === 0) {
-    log("No databases selected — backing up all databases");
+    log("No databases selected - backing up all databases");
     dbs = await getDatabases(config);
     log(`Found ${dbs.length} database(s): ${dbs.join(', ')}`);
   }
diff --git a/src/lib/adapters/database/mysql/restore.ts b/src/lib/adapters/database/mysql/restore.ts
index e360c7c6..2f21e388 100644
--- a/src/lib/adapters/database/mysql/restore.ts
+++ b/src/lib/adapters/database/mysql/restore.ts
@@ -229,7 +229,7 @@ async function restoreSingleFileSSH(
       const uploadPercent = Math.round((transferred / total) * 90);
       const elapsed = (Date.now() - uploadStart) / 1000;
       const speed = elapsed > 0 ? transferred / elapsed : 0;
-      onProgress(uploadPercent, `${formatBytes(transferred)} / ${formatBytes(total)} — ${formatBytes(speed)}/s`);
+      onProgress(uploadPercent, `${formatBytes(transferred)} / ${formatBytes(total)} - ${formatBytes(speed)}/s`);
     }
   });

@@ -246,7 +246,7 @@ async function restoreSingleFileSSH(
     onLog(`Upload verified: ${(remoteSize / 1024 / 1024).toFixed(1)} MB`, 'success');
   } catch (e) {
     if (e instanceof Error && e.message.includes('mismatch')) throw e;
-    // stat command failed — non-critical
+    // stat command failed - non-critical
   }

   // 2. Run mysql restore on the remote server from the uploaded file
@@ -294,7 +294,7 @@ async function restoreSingleFileSSH(
     if (aliveCheck.stdout.includes('alive')) {
       onLog(`Post-failure check: MySQL server is still running`, 'warning');
     } else {
-      onLog(`Post-failure check: MySQL server NOT responding — ${aliveCheck.stderr.trim() || aliveCheck.stdout.trim()}`, 'error');
+      onLog(`Post-failure check: MySQL server NOT responding - ${aliveCheck.stderr.trim() || aliveCheck.stdout.trim()}`, 'error');
     }
   } catch {
     onLog(`Post-failure check: Could not reach MySQL server (likely crashed/OOM-killed)`, 'error');
   }
diff --git a/src/lib/adapters/database/mysql/tools.ts b/src/lib/adapters/database/mysql/tools.ts
index 56c24b6d..886cf44f 100644
--- a/src/lib/adapters/database/mysql/tools.ts
+++ b/src/lib/adapters/database/mysql/tools.ts
@@ -41,7 +41,7 @@ async function initCommands(): Promise {
 }

 export function getMysqlCommand(): string {
-  // Return cached value or fallback — initCommands() should be called before first use
+  // Return cached value or fallback - initCommands() should be called before first use
   return cachedMysqlCmd ??
 'mysql';
 }
diff --git a/src/lib/adapters/database/postgres/dump.ts b/src/lib/adapters/database/postgres/dump.ts
index b75d5861..60829830 100644
--- a/src/lib/adapters/database/postgres/dump.ts
+++ b/src/lib/adapters/database/postgres/dump.ts
@@ -5,7 +5,6 @@ import { spawn } from "child_process";
 import fs from "fs/promises";
 import { createWriteStream } from "fs";
 import path from "path";
-import { getPostgresBinary } from "./version-utils";
 import {
   createMultiDbTar,
   createTempDir,
@@ -45,8 +44,6 @@ async function dumpSingleDatabase(
     return dumpSingleDatabaseSSH(dbName, outputPath, config, log);
   }

-  const pgDumpBinary = await getPostgresBinary('pg_dump', config.detectedVersion);
-
   const args = [
     '-h', config.host,
     '-p', String(config.port),
@@ -70,9 +67,9 @@ async function dumpSingleDatabase(
     }
   }

-  log(`Dumping database: ${dbName}`, 'info', 'command', `${pgDumpBinary} ${args.join(' ')}`);
+  log(`Dumping database: ${dbName}`, 'info', 'command', `pg_dump ${args.join(' ')}`);

-  const dumpProcess = spawn(pgDumpBinary, args, { env });
+  const dumpProcess = spawn('pg_dump', args, { env });
   const writeStream = createWriteStream(outputPath);
   dumpProcess.stdout.pipe(writeStream);

@@ -203,7 +200,7 @@ export async function dump(
   // Auto-discover all databases if none specified
   if (dbs.length === 0) {
-    log("No DB selected — auto-discovering all databases…", "info");
+    log("No DB selected - auto-discovering all databases…", "info");
     dbs = await getDatabases(config);
     log(`Discovered ${dbs.length} database(s): ${dbs.join(", ")}`, "info");
     if (dbs.length === 0) {
@@ -212,16 +209,14 @@
   }

   const dialect = getDialect('postgres', config.detectedVersion);
-  const pgDumpBinary = await getPostgresBinary('pg_dump', config.detectedVersion);
-  log(`Using ${pgDumpBinary} for PostgreSQL ${config.detectedVersion}`, 'info');

   // Case 1: Single Database - Direct dump with custom format
   if (dbs.length <= 1) {
     const args = dialect.getDumpArgs(config, dbs);

-    log(`Starting single-database dump (custom format)`, 'info', 'command', `${pgDumpBinary} ${args.join(' ')}`);
+    log(`Starting single-database dump (custom format)`, 'info', 'command', `pg_dump ${args.join(' ')}`);

-    const dumpProcess = spawn(pgDumpBinary, args, { env });
+    const dumpProcess = spawn('pg_dump', args, { env });
     const writeStream = createWriteStream(destinationPath);
     dumpProcess.stdout.pipe(writeStream);
diff --git a/src/lib/adapters/database/postgres/restore.ts b/src/lib/adapters/database/postgres/restore.ts
index ecd89afa..51f422cb 100644
--- a/src/lib/adapters/database/postgres/restore.ts
+++ b/src/lib/adapters/database/postgres/restore.ts
@@ -5,7 +5,6 @@ import { getDialect } from "./dialects";
 import { spawn } from "child_process";
 import fs from "fs/promises";
 import path from "path";
-import { getPostgresBinary } from "./version-utils";
 import {
   isMultiDbTar,
   extractSelectedDatabases,
@@ -155,8 +154,6 @@ async function restoreSingleDatabase(
     return restoreSingleDatabaseSSH(sourcePath, targetDb, config, log);
   }

-  const pgRestoreBinary = await getPostgresBinary('pg_restore', config.detectedVersion);
-
   const args = [
     '-h', config.host,
     '-p', String(config.port),
@@ -174,10 +171,10 @@ async function restoreSingleDatabase(
     sourcePath
   ];

-  log(`Restoring to database: ${targetDb}`, 'info', 'command', `${pgRestoreBinary} ${args.join(' ')}`);
+  log(`Restoring to database: ${targetDb}`, 'info', 'command', `pg_restore ${args.join(' ')}`);

   await new Promise((resolve, reject) => {
-    const pgRestore = spawn(pgRestoreBinary, args, { env, stdio: ['ignore', 'pipe', 'pipe'] });
+    const pgRestore = spawn('pg_restore', args, { env, stdio: ['ignore', 'pipe', 'pipe'] });

     let stderrBuffer = "";

@@ -207,7 +204,11 @@ async function restoreSingleDatabase(
       if (code === 0) {
         resolve();
       } else if (code === 1 && stderrBuffer.includes('warning') && stderrBuffer.includes('errors ignored')) {
-        log('Restore completed with warnings (non-fatal)', 'warning');
+        if (stderrBuffer.includes('transaction_timeout')) {
+          log('Restore completed - pg_restore 18 sent SET transaction_timeout which is unsupported on PostgreSQL < 17. This is cosmetic and does not affect the restore.', 'warning');
+        } else {
+          log('Restore completed with warnings (non-fatal)', 'warning');
+        }
         resolve();
       } else {
         let errorMsg = `pg_restore exited with code ${code}`;
@@ -279,7 +280,11 @@ async function restoreSingleDatabaseSSH(
   }

   if (result.code === 1 && result.stderr.includes('warning')) {
-    log('Restore completed with warnings (non-fatal)', 'warning');
+    if (result.stderr.includes('transaction_timeout')) {
+      log('Restore completed - pg_restore 18 sent SET transaction_timeout which is unsupported on PostgreSQL < 17. This is cosmetic and does not affect the restore.', 'warning');
+    } else {
+      log('Restore completed with warnings (non-fatal)', 'warning');
+    }
   }

   if (result.stderr) {
@@ -404,9 +409,6 @@ export async function restore(

   log(`Restoring single database to: ${targetDb}`, 'info');

-  const pgRestoreBinary = await getPostgresBinary('pg_restore', config.detectedVersion);
-  log(`Using ${pgRestoreBinary} for PostgreSQL ${config.detectedVersion}`, 'info');
-
   await prepareRestore(usageConfig, [targetDb]);
   await restoreSingleDatabase(sourcePath, targetDb, usageConfig, env, log);
 }
diff --git a/src/lib/adapters/database/postgres/version-utils.ts b/src/lib/adapters/database/postgres/version-utils.ts
deleted file mode 100644
index 01a40503..00000000
--- a/src/lib/adapters/database/postgres/version-utils.ts
+++ /dev/null
@@ -1,129 +0,0 @@
-import { execFile } from "child_process";
-import { promisify } from "util";
-import { logger } from "@/lib/logger";
-
-const execFileAsync = promisify(execFile);
-const log = logger.child({ adapter: "postgres", module: "version-utils" });
-
-/**
- * Finds the correct PostgreSQL binary path for a specific major version.
- *
- * This is crucial to avoid compatibility issues where pg_dump version 17
- * creates dumps with PG17-specific syntax (like transaction_timeout) that
- * fail to restore on PG16 or earlier.
- *
- * Uses intelligent fallback strategy:
- * - If exact version not found, uses next higher version (backward compatible)
- * - Example: PG 13 server → uses pg_dump 14 (if available)
- * - Strategic versions: 14 (covers 12-14), 16 (covers 15-16), 17, 18
- *
- * Search order:
- * 1. Exact version match
- * 2. Next higher strategic version (14, 16, 18)
- * 3. Generic fallback (uses $PATH default)
- *
- * @param tool - The tool name (pg_dump, pg_restore, psql)
- * @param targetVersion - Target major version (e.g., "16.1" → returns PG16 binary)
- * @returns The full path to the binary, or the generic tool name as fallback
- */
-export async function getPostgresBinary(tool: 'pg_dump' | 'pg_restore' | 'psql', targetVersion?: string): Promise {
-  if (!targetVersion) {
-    // No version detected, use default from PATH
-    return tool;
-  }
-
-  // Extract major version (e.g., "PostgreSQL 16.1 on..." → "16")
-  const majorMatch = targetVersion.match(/(\d+)\./);
-  if (!majorMatch) {
-    return tool;
-  }
-  const majorVersion = parseInt(majorMatch[1], 10);
-
-  // Strategic versions we support (each installed explicitly in Dockerfile)
-  const strategicVersions = [14, 16, 17, 18];
-
-  // Find the best matching version:
-  // 1. Try exact match first
-  // 2. Fall back to next higher strategic version
-  const versionsToTry: number[] = [];
-
-  // Add exact version
-  versionsToTry.push(majorVersion);
-
-  // Add next higher strategic versions as fallbacks
-  for (const strategic of strategicVersions) {
-    if (strategic >= majorVersion && !versionsToTry.includes(strategic)) {
-      versionsToTry.push(strategic);
-    }
-  }
-
-  // Try each version in order
-  for (const version of versionsToTry) {
-    const candidatePaths = [
-      // Homebrew (macOS) - versioned installations
-      `/opt/homebrew/opt/postgresql@${version}/bin/${tool}`,
-      `/usr/local/opt/postgresql@${version}/bin/${tool}`,
-
-      // Alpine Linux (Docker) - custom symlinks from Dockerfile
-      `/opt/pg${version}/bin/${tool}`,
-
-      // Alpine Linux - direct libexec paths
-      `/usr/libexec/postgresql${version}/${tool}`,
-
-      // Linux package manager versioned installations
-      `/usr/lib/postgresql/${version}/bin/${tool}`,
-      `/usr/pgsql-${version}/bin/${tool}`,
-    ];
-
-    for (const candidatePath of candidatePaths) {
-      try {
-        // Check if file exists and is executable
-        const { stdout } = await execFileAsync(candidatePath, ['--version'], { timeout: 2000 });
-
-        // Verify the version matches
-        if (stdout.includes(`${version}.`)) {
-          if (version !== majorVersion) {
-            log.info("Using backward compatible pg_dump version", { toolVersion: version, serverVersion: majorVersion });
-          }
-          return candidatePath;
-        }
-      } catch {
-        // File doesn't exist or isn't executable, continue
-        continue;
-      }
-    }
-  }
-
-  // Final fallback: check generic system paths for latest version
-  const genericPaths = [
-    `/opt/homebrew/opt/postgresql/bin/${tool}`,
-    `/usr/local/opt/postgresql/bin/${tool}`,
-    `/usr/bin/${tool}`,
-    `/usr/local/bin/${tool}`,
-  ];
-
-  for (const genericPath of genericPaths) {
-    try {
-      await execFileAsync(genericPath, ['--version'], { timeout: 2000 });
-      log.warn("Could not find strategic version, using system default", { targetVersion: majorVersion, path: genericPath });
-      return genericPath;
-    } catch {
-      continue;
-    }
-  }
-
-  // Last resort: use generic tool name from PATH
-  log.warn("Could not find tool for version, using default from PATH", { tool, targetVersion: majorVersion });
-  return tool;
-}
-
-/**
- * Get the major version number from a full PostgreSQL version string
- *
- * @param versionString - Full version string (e.g., "PostgreSQL 16.1 on x86_64...")
- * @returns Major version number (e.g., 16) or null
- */
-export function extractMajorVersion(versionString: string): number | null {
-  const match = versionString.match(/(\d+)\./);
-  return match ? parseInt(match[1], 10) : null;
-}
diff --git a/src/lib/adapters/notification/twilio-sms.ts b/src/lib/adapters/notification/twilio-sms.ts
index 5c3ad8fd..8f51b841 100644
--- a/src/lib/adapters/notification/twilio-sms.ts
+++ b/src/lib/adapters/notification/twilio-sms.ts
@@ -52,7 +52,7 @@ export const TwilioSmsAdapter: NotificationAdapter = {
     const body = new URLSearchParams({
       From: config.from,
       To: config.to,
-      Body: "🔔 DBackup Connection Test — This is a test SMS to verify your Twilio configuration.",
+      Body: "🔔 DBackup Connection Test - This is a test SMS to verify your Twilio configuration.",
     });

     const response = await fetch(url, {
diff --git a/src/lib/adapters/storage/dropbox.ts b/src/lib/adapters/storage/dropbox.ts
index debd3093..3d156751 100644
--- a/src/lib/adapters/storage/dropbox.ts
+++ b/src/lib/adapters/storage/dropbox.ts
@@ -103,7 +103,7 @@ async function _ensureFolderExists(dbx: Dropbox, folderPath: string): Promise\n" — strip the command part
+    // Node's exec includes "Command failed: \n" - strip the command part
     const stripped = message.replace(/Command failed:[^\n]*\n?/g, "").trim();
     // Remove SSH/sshpass warnings that leak connection details
     const cleaned = stripped
diff --git a/src/lib/adapters/storage/webdav.ts b/src/lib/adapters/storage/webdav.ts
index 7054bfcf..edd6fac0 100644
--- a/src/lib/adapters/storage/webdav.ts
+++ b/src/lib/adapters/storage/webdav.ts
@@ -98,7 +98,7 @@
 export const WebDAVAdapter: StorageAdapter = {
         return true;
       }
     } catch {
-      // stat failed — proceed without progress
+      // stat failed - proceed without progress
     }
   }

@@ -137,7 +137,7 @@ export const WebDAVAdapter: StorageAdapter = {
     const files: FileInfo[] = [];
     const prefixPath = prefix ? path.posix.join("/", prefix) : "";

-    // Recursive walk — avoids Depth:infinity PROPFIND which many servers reject
+    // Recursive walk - avoids Depth:infinity PROPFIND which many servers reject
     const walk = async (currentDir: string) => {
       const items = await client.getDirectoryContents(currentDir) as FileStat[];
diff --git a/src/lib/backup-extensions.ts b/src/lib/backup-extensions.ts
index 78d957aa..97b1f614 100644
--- a/src/lib/backup-extensions.ts
+++ b/src/lib/backup-extensions.ts
@@ -32,7 +32,7 @@ export function getBackupFileExtension(adapterId: string): string {
 /**
  * Get a human-readable description of the backup format
  *
- * NOTE: Currently unused — kept for future UI integration (e.g. Storage Explorer, Backup Details).
+ * NOTE: Currently unused - kept for future UI integration (e.g. Storage Explorer, Backup Details).
 *
 * @param adapterId - The adapter identifier
 * @returns Description of the backup format
diff --git a/src/lib/core/logs.ts b/src/lib/core/logs.ts
index 80391aa1..2c8bdff6 100644
--- a/src/lib/core/logs.ts
+++ b/src/lib/core/logs.ts
@@ -6,7 +6,7 @@ export interface LogEntry {
   level: LogLevel;
   type: LogType;
   message: string;
-  stage?: string; // High-level stage grouping — should be a PipelineStage value
+  stage?: string; // High-level stage grouping - should be a PipelineStage value
   details?: string; // For long output like stdout/stderr
   context?: Record; // For metadata
   durationMs?: number;
diff --git a/src/lib/notifications/types.ts b/src/lib/notifications/types.ts
index 585ea78b..8577bb0b 100644
--- a/src/lib/notifications/types.ts
+++ b/src/lib/notifications/types.ts
@@ -67,7 +67,7 @@ export interface NotificationPayload {
   color?: string;
   /** Whether the event represents a success or failure */
   success: boolean;
-  /** Optional badge label override (e.g. "Alert") — replaces auto-detected status badge in emails */
+  /** Optional badge label override (e.g. "Alert") - replaces auto-detected status badge in emails */
   badge?: string;
 }
diff --git a/src/lib/prisma.ts b/src/lib/prisma.ts
index d8ef991e..b13f3fca 100644
--- a/src/lib/prisma.ts
+++ b/src/lib/prisma.ts
@@ -33,10 +33,10 @@ const prismaClientSingleton = () => {
     if (!record) return record;
     try {
       if (record.clientId) record.clientId = decrypt(record.clientId);
-    } catch { /* Not encrypted or wrong key — return as-is */ }
+    } catch { /* Not encrypted or wrong key - return as-is */ }
     try {
       if (record.clientSecret) record.clientSecret = decrypt(record.clientSecret);
-    } catch { /* Not encrypted or wrong key — return as-is */ }
+    } catch { /* Not encrypted or wrong key - return as-is */ }
     try {
       if (record.oidcConfig) {
         const parsed = JSON.parse(record.oidcConfig);
@@ -45,7 +45,7 @@ const prismaClientSingleton = () => {
         if (parsed.clientSecret) { try { parsed.clientSecret = decrypt(parsed.clientSecret); changed = true; } catch {} }
         if (changed) record.oidcConfig = JSON.stringify(parsed);
       }
-    } catch { /* Parse error or not encrypted — return as-is */ }
+    } catch { /* Parse error or not encrypted - return as-is */ }
     return record;
   };
diff --git a/src/lib/queue-manager.ts b/src/lib/queue-manager.ts
index 81ce4ca1..03a70ed8 100644
--- a/src/lib/queue-manager.ts
+++ b/src/lib/queue-manager.ts
@@ -10,7 +10,7 @@ const log = logger.child({ module: "Queue" });
 export async function processQueue() {
   // Skip queue processing during shutdown
   if (isShutdownRequested()) {
-    log.info("Shutdown in progress — skipping queue processing");
+    log.info("Shutdown in progress - skipping queue processing");
     return;
   }
diff --git a/src/lib/rate-limit.ts b/src/lib/rate-limit.ts
index 02175faf..1c03b88a 100644
--- a/src/lib/rate-limit.ts
+++ b/src/lib/rate-limit.ts
@@ -70,7 +70,7 @@ function rebuildLimiters(config: RateLimitConfig): void {
 * Apply an externally-fetched config to the module-local rate limiters.
 *
 * Called from middleware after fetching config via the internal API endpoint.
- * This is Edge Runtime safe — no Prisma, no Node.js APIs. + * This is Edge Runtime safe - no Prisma, no Node.js APIs. */ export function applyExternalConfig(config: RateLimitConfig): void { rebuildLimiters(config); diff --git a/src/lib/runner.ts b/src/lib/runner.ts index ee2e9a3b..d14796ad 100644 --- a/src/lib/runner.ts +++ b/src/lib/runner.ts @@ -274,7 +274,7 @@ export async function performExecution(executionId: string, jobId: string) { await stepRetention(ctx); setStage(PIPELINE_STAGES.COMPLETED); - // Upload step may have set status to "Partial" — preserve it + // Upload step may have set status to "Partial" - preserve it if (ctx.status === "Running") { ctx.status = "Success"; } diff --git a/src/lib/runner/steps/03-upload.ts b/src/lib/runner/steps/03-upload.ts index fa440971..93b46fee 100644 --- a/src/lib/runner/steps/03-upload.ts +++ b/src/lib/runner/steps/03-upload.ts @@ -167,7 +167,7 @@ export async function stepUpload(ctx: RunnerContext) { const uploadedBytes = Math.round((percent / 100) * ctx.dumpSize); const elapsed = (Date.now() - uploadStart) / 1000; const speed = elapsed > 0 ? 
Math.round(uploadedBytes / elapsed) : 0; - ctx.updateDetail(`${dest.configName} — ${formatBytes(uploadedBytes)} / ${formatBytes(ctx.dumpSize)} – ${formatBytes(speed)}/s`); + ctx.updateDetail(`${dest.configName} - ${formatBytes(uploadedBytes)} / ${formatBytes(ctx.dumpSize)} – ${formatBytes(speed)}/s`); } else { ctx.updateDetail(`${dest.configName} (${percent}%)`); } @@ -242,7 +242,7 @@ export async function stepUpload(ctx: RunnerContext) { } } } else { - ctx.log("No local destinations — skipping integrity verification"); + ctx.log("No local destinations - skipping integrity verification"); } // --- EVALUATE RESULTS --- diff --git a/src/lib/runner/steps/04-completion.ts b/src/lib/runner/steps/04-completion.ts index 937d08d6..f4679d14 100644 --- a/src/lib/runner/steps/04-completion.ts +++ b/src/lib/runner/steps/04-completion.ts @@ -20,7 +20,7 @@ export async function stepCleanup(ctx: RunnerContext) { await fs.unlink(ctx.tempFile); ctx.log("Temporary file cleaned up"); } catch (_e) { - // File doesn't exist or cleanup failed — ignore + // File doesn't exist or cleanup failed - ignore } } } @@ -74,7 +74,7 @@ export async function stepFinalize(ctx: RunnerContext) { condition === "ALWAYS" || (condition === "SUCCESS_ONLY" && isSuccess) || (condition === "FAILURE_ONLY" && !isSuccess && !isPartial) || - // Partial counts as notable — notify on both ALWAYS and FAILURE_ONLY + // Partial counts as notable - notify on both ALWAYS and FAILURE_ONLY (condition === "FAILURE_ONLY" && isPartial); if (!shouldNotify) { diff --git a/src/lib/shutdown.ts b/src/lib/shutdown.ts index 7b7aa726..537c8789 100644 --- a/src/lib/shutdown.ts +++ b/src/lib/shutdown.ts @@ -34,12 +34,12 @@ export function isShutdownRequested(): boolean { export function registerShutdownHandlers(): void { const handler = (signal: string) => { if (isShuttingDown) { - log.warn("Forced shutdown — second signal received", { signal }); + log.warn("Forced shutdown - second signal received", { signal }); process.exit(1); } 
isShuttingDown = true; - log.info(`Received ${signal} — starting graceful shutdown...`); + log.info(`Received ${signal} - starting graceful shutdown...`); performShutdown(signal).then(() => { log.info("Graceful shutdown complete"); @@ -66,7 +66,7 @@ async function performShutdown(signal: string): Promise { log.warn("Failed to stop scheduler", { error: String(error) }); } - // 2. Wait for all running executions to complete (no timeout — the app + // 2. Wait for all running executions to complete (no timeout - the app // stays alive until every backup/restore finishes or a second signal // forces immediate exit) let lastLoggedCount = -1; @@ -98,7 +98,7 @@ async function performShutdown(signal: string): Promise { } } - // 3. Cancel pending jobs — they won't be picked up after shutdown + // 3. Cancel pending jobs - they won't be picked up after shutdown try { const pendingCount = await prisma.execution.count({ where: { status: "Pending" }, diff --git a/src/lib/ssh/utils.ts b/src/lib/ssh/utils.ts index 3f0766b0..990b8e37 100644 --- a/src/lib/ssh/utils.ts +++ b/src/lib/ssh/utils.ts @@ -11,7 +11,7 @@ export function shellEscape(value: string): string { /** * Build a remote command string with environment variables exported before execution. * Uses `export` statements separated by `;` so that if the main process is killed, - * bash's kill report only shows the command — not the secrets. + * bash's kill report only shows the command - not the secrets. 
* * Example: remoteEnv({ MYSQL_PWD: "secret" }, "mysqldump -h 127.0.0.1 mydb") * → "export MYSQL_PWD='secret'; mysqldump -h 127.0.0.1 mydb" diff --git a/src/middleware.ts b/src/middleware.ts index 8e105ceb..31d75ecf 100644 --- a/src/middleware.ts +++ b/src/middleware.ts @@ -166,7 +166,7 @@ export const config = { * - favicon.ico (favicon file) * - public assets if any * - * NOTE: api/auth is intentionally NOT excluded — the middleware must + * NOTE: api/auth is intentionally NOT excluded - the middleware must * run on auth endpoints so the auth rate limiter (5 req/min) can * protect /api/auth/sign-in against brute-force attacks. */ diff --git a/src/services/api-key-service.ts b/src/services/api-key-service.ts index 151dd25e..e453cfd1 100644 --- a/src/services/api-key-service.ts +++ b/src/services/api-key-service.ts @@ -53,7 +53,7 @@ export interface ValidatedApiKey { export class ApiKeyService { /** - * Create a new API key. Returns the raw key ONCE — it cannot be retrieved again. + * Create a new API key. Returns the raw key ONCE - it cannot be retrieved again. */ async create(input: CreateApiKeyInput): Promise<{ apiKey: ApiKeyListItem; rawKey: string }> { const rawKey = generateRawKey(); @@ -200,7 +200,7 @@ export class ApiKeyService { } /** - * Rotate an API key — generates a new key, replaces the hash. Returns new raw key ONCE. + * Rotate an API key - generates a new key, replaces the hash. Returns new raw key ONCE. 
*/ async rotate(id: string): Promise<{ apiKey: ApiKeyListItem; rawKey: string }> { const existing = await prisma.apiKey.findUnique({ where: { id } }); diff --git a/src/services/certificate-service.ts b/src/services/certificate-service.ts index 01482ff8..4e64d57c 100644 --- a/src/services/certificate-service.ts +++ b/src/services/certificate-service.ts @@ -172,13 +172,13 @@ export function uploadCertificate(certPem: string, keyPem: string): void { throw new Error("Certificate and private key do not match."); } } catch (e) { - // For EC keys, modulus check doesn't apply — skip + // For EC keys, modulus check doesn't apply - skip if (e instanceof Error && e.message.includes("do not match")) { throw e; } } - // All validations passed — replace existing files + // All validations passed - replace existing files renameSync(tmpCert, CERT_PATH); renameSync(tmpKey, KEY_PATH); diff --git a/src/services/dashboard-service.ts b/src/services/dashboard-service.ts index 453e5a0e..ea192925 100644 --- a/src/services/dashboard-service.ts +++ b/src/services/dashboard-service.ts @@ -234,7 +234,7 @@ export async function getStorageVolume(): Promise { } } - // No cache yet — do a live refresh to populate it (first load only) + // No cache yet - do a live refresh to populate it (first load only) // This ensures accurate data from the start instead of inaccurate DB estimation try { return await refreshStorageStatsCache(); diff --git a/src/services/healthcheck-service.ts b/src/services/healthcheck-service.ts index 5834bd02..b94a9bf3 100644 --- a/src/services/healthcheck-service.ts +++ b/src/services/healthcheck-service.ts @@ -235,7 +235,7 @@ export class HealthCheckService { } if (newStatus === "OFFLINE") { - // Adapter just became or remains offline — check if we should notify + // Adapter just became or remains offline - check if we should notify if (shouldNotifyOffline(currentState, reminderCooldownMs)) { try { await notify({ @@ -259,7 +259,7 @@ export class HealthCheckService { 
stateChanged = true; } } else if (currentState?.active) { - // Adapter recovered — send recovery notification and reset state + // Adapter recovered - send recovery notification and reset state let downtime: string | undefined; if (currentState.lastNotifiedAt) { const ms = Date.now() - new Date(currentState.lastNotifiedAt).getTime(); diff --git a/src/services/storage-alert-service.ts b/src/services/storage-alert-service.ts index e427e4df..65009337 100644 --- a/src/services/storage-alert-service.ts +++ b/src/services/storage-alert-service.ts @@ -98,7 +98,7 @@ export function defaultAlertStates(): StorageAlertStates { function shouldNotify(state: AlertTypeState, cooldownMs?: number): boolean { if (!state.active) return true; if (!state.lastNotifiedAt) return true; - // cooldownMs === 0 means reminders are disabled — only notify on first occurrence + // cooldownMs === 0 means reminders are disabled - only notify on first occurrence if (cooldownMs === 0) return false; const effectiveCooldown = cooldownMs ?? 
ALERT_COOLDOWN_MS; return Date.now() - new Date(state.lastNotifiedAt).getTime() >= effectiveCooldown; @@ -317,7 +317,7 @@ async function checkUsageSpike( states.usageSpike = { active: true, lastNotifiedAt: new Date().toISOString() }; } } else { - // No spike — reset so next spike fires immediately + // No spike - reset so next spike fires immediately states.usageSpike = defaultAlertTypeState(); } } @@ -360,7 +360,7 @@ async function checkStorageLimit( states.storageLimit = { active: true, lastNotifiedAt: new Date().toISOString() }; } } else { - // Condition resolved — reset for future re-notification + // Condition resolved - reset for future re-notification states.storageLimit = defaultAlertTypeState(); } } @@ -428,7 +428,7 @@ async function checkMissingBackup( states.missingBackup = { active: true, lastNotifiedAt: new Date().toISOString() }; } } else { - // Condition resolved — reset for future re-notification + // Condition resolved - reset for future re-notification states.missingBackup = defaultAlertTypeState(); } } diff --git a/tests/unit/lib/access-control.test.ts b/tests/unit/lib/access-control.test.ts index 1c8a1098..d130d72b 100644 --- a/tests/unit/lib/access-control.test.ts +++ b/tests/unit/lib/access-control.test.ts @@ -210,7 +210,7 @@ describe("Access Control", () => { }, }); - // Both session and API key present — session should win + // Both session and API key present - session should win const headers = new Headers({ authorization: "Bearer dbackup_somekey", }); diff --git a/tests/unit/services/storage-alert-service.test.ts b/tests/unit/services/storage-alert-service.test.ts index ecd448fe..068c9993 100644 --- a/tests/unit/services/storage-alert-service.test.ts +++ b/tests/unit/services/storage-alert-service.test.ts @@ -174,7 +174,7 @@ describe("StorageAlertService", () => { }); it("should merge partial config with defaults", async () => { - // Stored config only has some fields — rest should come from defaults + // Stored config only has some 
fields - rest should come from defaults prismaMock.systemSetting.findUnique.mockResolvedValue({ key: "storage.alerts.cfg-1", value: JSON.stringify({ usageSpikeEnabled: true }), @@ -347,7 +347,7 @@ describe("StorageAlertService", () => { }); it("should process multiple destinations independently", async () => { - // Both have spike enabled — use mockImplementation to handle both configIds + // Both have spike enabled - use mockImplementation to handle both configIds prismaMock.systemSetting.findUnique.mockImplementation((async (args: any) => { const key: string = args.where.key; if (key === "storage.alerts.cfg-1" || key === "storage.alerts.cfg-2") { @@ -361,7 +361,7 @@ describe("StorageAlertService", () => { return null; // state keys → default }) as any); - // Both need 2+ snapshots for spike check — return <2 so no notify + // Both need 2+ snapshots for spike check - return <2 so no notify prismaMock.storageSnapshot.findMany.mockResolvedValue([]); await checkStorageAlerts([ @@ -741,7 +741,7 @@ describe("StorageAlertService", () => { missingBackupHours: 10, }); - // Count didn't change — oldest snapshot is 25h ago + // Count didn't change - oldest snapshot is 25h ago prismaMock.storageSnapshot.findMany.mockResolvedValue([ { count: 3, createdAt: new Date("2026-02-22T12:00:00Z") } as any, { count: 3, createdAt: new Date("2026-02-21T11:00:00Z") } as any, // 25h ago diff --git a/wiki/changelog.md b/wiki/changelog.md index c2e389f3..40b7bdc5 100644 --- a/wiki/changelog.md +++ b/wiki/changelog.md @@ -2,6 +2,29 @@ All notable changes to DBackup are documented here. 
+## v1.4.1 - PostgreSQL Client Cleanup +*Released: April 2, 2026* + +### 🎨 Improvements + +- **PostgreSQL**: Restore warning for PostgreSQL ≤ 16 now explains that `SET transaction_timeout` is a cosmetic pg_restore 18 issue and does not affect the restore +- **codebase**: Replaced all em dashes with hyphens across source code, docs, and config files for typographic consistency + +### 🗑️ Removed + +- **PostgreSQL**: Removed multi-version pg_dump/pg_restore strategy (PG 14, 16, 17, 18) - only PostgreSQL 18 client is now installed, which is backward compatible with all supported server versions (12–18) + +### 🔧 CI/CD + +- **Docker**: Simplified Dockerfile by removing postgresql14/16/17-client packages and multi-version symlink setup, reducing image size + +### 🐳 Docker + +- **Image**: `skyfay/dbackup:v1.4.1` +- **Also tagged as**: `latest`, `v1` +- **Platforms**: linux/amd64, linux/arm64 + + ## v1.4.0 - Live History Redesign *Released: March 31, 2026* @@ -9,7 +32,7 @@ All notable changes to DBackup are documented here. 
- **logging**: Pipeline stage system for backups (Queued → Initializing → Dumping → Processing → Uploading → Verifying → Retention → Notifications → Completed) and restores (Downloading → Decrypting → Decompressing → Restoring Database → Completed) with automatic progress calculation and duration tracking per stage - **ui**: LogViewer redesign with pipeline stage grouping, duration badges, pending stage placeholders, and auto-expanding latest stage during execution -- **ui**: Real-time speed (MB/s) and byte progress display for all backup and restore operations — dump, compress, encrypt, upload, download, decrypt, decompress, and SFTP transfer +- **ui**: Real-time speed (MB/s) and byte progress display for all backup and restore operations - dump, compress, encrypt, upload, download, decrypt, decompress, and SFTP transfer ### 🎨 Improvements diff --git a/wiki/developer-guide/adapters/database.md b/wiki/developer-guide/adapters/database.md index 5f2dacdf..3a931caf 100644 --- a/wiki/developer-guide/adapters/database.md +++ b/wiki/developer-guide/adapters/database.md @@ -139,7 +139,7 @@ If `getDatabasesWithStats()` is not implemented, falls back to `getDatabases()` ## SSH Mode Architecture -Most database adapters support an SSH remote execution mode. Instead of running CLI tools locally and connecting to the database over TCP, DBackup connects via SSH to the target server and runs database tools **remotely**. This is **not** an SSH tunnel — the dump/restore commands execute on the remote host. +Most database adapters support an SSH remote execution mode. Instead of running CLI tools locally and connecting to the database over TCP, DBackup connects via SSH to the target server and runs database tools **remotely**. This is **not** an SSH tunnel - the dump/restore commands execute on the remote host. 
### Shared SSH Infrastructure (`src/lib/ssh/`) @@ -164,7 +164,7 @@ await client.connect(sshConfig); const result = await client.exec("mysqldump --version"); // { stdout: "...", stderr: "...", code: 0 } -// Streaming execution (for dumps — pipes stdout to a writable stream) +// Streaming execution (for dumps - pipes stdout to a writable stream) const stream = await client.execStream("pg_dump -F c mydb"); stream.pipe(outputFile); @@ -178,7 +178,7 @@ Configuration: `readyTimeout: 20000ms`, `keepaliveInterval: 10000ms`, `keepalive | Function | Purpose | | :--- | :--- | | `shellEscape(value)` | Wraps value in single quotes, escapes embedded quotes | -| `remoteEnv(vars, cmd)` | Exports env vars before a command (e.g., `export MYSQL_PWD='...'; mysqldump`) — uses `export` to prevent password leaking in OOM kill reports | +| `remoteEnv(vars, cmd)` | Exports env vars before a command (e.g., `export MYSQL_PWD='...'; mysqldump`) - uses `export` to prevent password leaking in OOM kill reports | | `remoteBinaryCheck(client, ...candidates)` | Checks if binary exists on remote host, returns resolved path | | `isSSHMode(config)` | Returns `true` if `config.connectionMode === "ssh"` | | `extractSshConfig(config)` | Extracts `SshConnectionConfig` from adapter config with `sshHost` prefix | @@ -488,7 +488,7 @@ async dump(config, destinationPath) { ## SQLite Adapter -SQLite is unique—it's just a file copy: +SQLite is unique-it's just a file copy: ```typescript async dump(config, destinationPath) { @@ -604,7 +604,7 @@ The restore function provides instructions but cannot perform the actual restore ## MSSQL Adapter -MSSQL is unique among database adapters — it uses the **TDS protocol** (via the `mssql` npm package) instead of CLI tools, and writes native `.bak` files to the server filesystem. A separate file transfer mechanism is needed to access these files. 
+MSSQL is unique among database adapters - it uses the **TDS protocol** (via the `mssql` npm package) instead of CLI tools, and writes native `.bak` files to the server filesystem. A separate file transfer mechanism is needed to access these files. ### Configuration Schema diff --git a/wiki/developer-guide/adapters/notification.md b/wiki/developer-guide/adapters/notification.md index a25f47bc..e5a9d483 100644 --- a/wiki/developer-guide/adapters/notification.md +++ b/wiki/developer-guide/adapters/notification.md @@ -46,7 +46,7 @@ interface NotificationAdapter { id: string; type: "notification"; name: string; - configSchema: ZodSchema; // Zod schema — UI form is auto-generated from this + configSchema: ZodSchema; // Zod schema - UI form is auto-generated from this send( config: unknown, @@ -183,10 +183,10 @@ All notification types (backup, login, restore, etc.) share this single template Sends Block Kit formatted messages to Slack Incoming Webhooks. Uses `attachments` with a color bar for status indication and structured `blocks` for content: -- **Header block** — Notification title -- **Section block** — Message body (Markdown) -- **Fields section** — Structured key-value pairs from `context.fields` -- **Context block** — Timestamp +- **Header block** - Notification title +- **Section block** - Message body (Markdown) +- **Fields section** - Structured key-value pairs from `context.fields` +- **Context block** - Timestamp - Optional channel, username, and icon emoji overrides ### Slack Schema @@ -204,8 +204,8 @@ const SlackSchema = z.object({ Sends Adaptive Cards v1.4 to Microsoft Teams via Power Automate Workflows webhooks. 
The payload follows the Teams message wrapper format with an `attachments` array containing the card: -- **TextBlock** — Title and message body -- **FactSet** — Structured key-value fields +- **TextBlock** - Title and message body +- **FactSet** - Structured key-value fields - Color mapping: hex → named Adaptive Card colors (`Good`, `Attention`, `Warning`, `Accent`, `Default`) ### Teams Schema @@ -256,7 +256,7 @@ const NtfySchema = z.object({ ## Generic Webhook Adapter -Sends JSON payloads to any HTTP endpoint with customizable templates. The most flexible adapter — used for services without a dedicated adapter: +Sends JSON payloads to any HTTP endpoint with customizable templates. The most flexible adapter - used for services without a dedicated adapter: - Configurable HTTP method (POST, PUT, PATCH) - `{{variable}}` placeholder system for custom payload templates @@ -539,7 +539,7 @@ This uses the same `renderTemplate()` and `NotificationPayload` system as system ## Creating a New Notification Adapter -Adding a new notification adapter requires changes across **multiple files** — the adapter code itself, schema definitions, UI constants, icon mapping, registry, and documentation. This section provides the complete step-by-step guide. +Adding a new notification adapter requires changes across **multiple files** - the adapter code itself, schema definitions, UI constants, icon mapping, registry, and documentation. This section provides the complete step-by-step guide. ### Quick Reference Checklist @@ -565,36 +565,36 @@ Every new notification adapter touches these files: | 16 | `wiki/developer-guide/adapters/notification.md` | Update "Available Adapters" table (this file) | | 17 | `tests/unit/adapters/notification/.test.ts` | Write unit tests for `test()` and `send()` | -### Step 1 — Define the Zod Schema +### Step 1 - Define the Zod Schema Add the schema, inferred type, and definition entry in `src/lib/adapters/definitions.ts`: ```typescript -// 1a. 
Schema — near the other notification schemas +// 1a. Schema - near the other notification schemas export const MyServiceSchema = z.object({ serverUrl: z.string().url("Valid URL is required"), apiToken: z.string().min(1, "API Token is required").describe("Your API token"), priority: z.coerce.number().min(1).max(10).default(5).describe("Default priority (1-10)"), }); -// 1b. Inferred type — in the "Notification Adapters" types section +// 1b. Inferred type - in the "Notification Adapters" types section export type MyServiceConfig = z.infer; -// 1c. Union type — add to NotificationConfig +// 1c. Union type - add to NotificationConfig export type NotificationConfig = DiscordConfig | SlackConfig | /* ... */ | MyServiceConfig | EmailConfig; -// 1d. Definition entry — in the ADAPTER_DEFINITIONS array +// 1d. Definition entry - in the ADAPTER_DEFINITIONS array { id: "my-service", type: "notification", name: "My Service", configSchema: MyServiceSchema }, ``` ::: tip Schema conventions -- Use `.describe("...")` on optional/non-obvious fields — this text appears as a tooltip in the UI -- Use `.default(value)` for sensible defaults — they auto-fill in the form +- Use `.describe("...")` on optional/non-obvious fields - this text appears as a tooltip in the UI +- Use `.default(value)` for sensible defaults - they auto-fill in the form - Use `.coerce.number()` for numeric fields to handle string input from forms - Use `.url()` for URL fields to get built-in validation ::: -### Step 2 — Implement the Adapter +### Step 2 - Implement the Adapter Create `src/lib/adapters/notification/.ts`: @@ -665,17 +665,17 @@ export const MyServiceAdapter: NotificationAdapter = { ``` **Key patterns to follow:** -- Always use `logger.child()` — never `console.log` +- Always use `logger.child()` - never `console.log` - Always use `wrapError()` in catch blocks -- `test()` returns `{ success, message }` — never throws -- `send()` returns `boolean` — `true` on success, `false` on failure (never throws) 
+- `test()` returns `{ success, message }` - never throws +- `send()` returns `boolean` - `true` on success, `false` on failure (never throws) - Handle `context` being `undefined` (plain text fallback) - Use `context.color` for status colors (`#00ff00` success, `#ff0000` failure) - Use `context.fields` for structured key-value data - Use `context.title` for the notification title - Use `context.success` to determine success/failure state -### Step 3 — Register the Adapter +### Step 3 - Register the Adapter In `src/lib/adapters/index.ts`: @@ -690,12 +690,12 @@ export function registerAdapters() { Place the import and registration near the other notification adapters to keep the file organized. -### Step 4 — Add an Icon +### Step 4 - Add an Icon In `src/components/adapter/utils.ts`, import an Iconify icon and map it: ```typescript -// Import — choose from available icon packages: +// Import - choose from available icon packages: // @iconify-icons/logos → Multi-colored brand SVGs (preferred for well-known brands) // @iconify-icons/simple-icons → Monochrome brand icons (add color via ADAPTER_COLOR_MAP) // @iconify-icons/mdi → Material Design Icons (generic/protocol icons) @@ -725,18 +725,18 @@ const ADAPTER_COLOR_MAP: Record = { }; ``` -### Step 5 — Configure Form Constants +### Step 5 - Configure Form Constants In `src/components/adapter/form-constants.ts`, categorize your schema fields into connection vs. configuration tabs and add placeholders: ```typescript -// Connection tab — fields needed to establish the connection +// Connection tab - fields needed to establish the connection export const NOTIFICATION_CONNECTION_KEYS = [ // ... existing keys 'serverUrl', 'apiToken', // Add your new keys here ]; -// Configuration tab — optional settings +// Configuration tab - optional settings export const NOTIFICATION_CONFIG_KEYS = [ // ... 
existing keys 'priority', // Add your new keys here @@ -759,7 +759,7 @@ If your adapter has **multi-line text fields** (like `payloadTemplate` or `custo const isTextArea = /* existing checks */ || fieldKey === "myMultiLineField"; ``` -### Step 6 — Add Details Summary +### Step 6 - Add Details Summary In `src/components/adapter/adapter-manager.tsx`, add a `case` to the `getSummary()` switch so the **Details** column in the adapter table shows meaningful info instead of `-`: @@ -775,7 +775,7 @@ const getSummary = (adapterId: string, configJson: string) => { }; ``` -**What to show:** Pick the most identifying field(s) from the config — URL, topic, phone number, channel name, etc. Keep it short and scannable. Examples from existing adapters: +**What to show:** Pick the most identifying field(s) from the config - URL, topic, phone number, channel name, etc. Keep it short and scannable. Examples from existing adapters: | Adapter | Details output | | :--- | :--- | @@ -787,11 +787,11 @@ const getSummary = (adapterId: string, configJson: string) => { | Twilio SMS | `+1234... → +5678...` | | Email | `from@... 
→ to@...` | -### Step 7 — Documentation +### Step 7 - Documentation Create the following documentation: -**a) Wiki page** — `wiki/user-guide/notifications/.md` +**a) Wiki page** - `wiki/user-guide/notifications/.md` Follow the structure of existing adapter pages: - Overview (bullet points with key features) @@ -800,7 +800,7 @@ Follow the structure of existing adapter pages: - Message Format (example output) - Troubleshooting (common error messages) -### Step 8 — Unit Tests +### Step 8 - Unit Tests Create `tests/unit/adapters/notification/.test.ts` following the existing pattern: @@ -882,8 +882,8 @@ describe("My Service Adapter", () => { ``` **What to test:** -- `test()` — success, HTTP error, network error -- `send()` — success, payload structure with context, HTTP error, network error +- `test()` - success, HTTP error, network error +- `send()` - success, payload structure with context, HTTP error, network error - Adapter-specific features (e.g., priority escalation, color mapping, auth headers, template rendering) - Edge cases (trailing slashes in URLs, optional fields omitted, etc.) @@ -893,7 +893,7 @@ pnpm test -- tests/unit/adapters/notification/ ``` ::: -**b) VitePress sidebar** — `wiki/.vitepress/config.mts` +**b) VitePress sidebar** - `wiki/.vitepress/config.mts` Add the entry under the "Notification Channels" section: diff --git a/wiki/developer-guide/adapters/storage.md b/wiki/developer-guide/adapters/storage.md index 7e7746ee..f714a5b8 100644 --- a/wiki/developer-guide/adapters/storage.md +++ b/wiki/developer-guide/adapters/storage.md @@ -476,7 +476,7 @@ registry.register(WebDAVAdapter); The adapter form renders fields dynamically from the Zod schema. 
Fields are split into two tabs based on these arrays: -**Connection tab** — Add any new connection-related field keys your schema introduces: +**Connection tab** - Add any new connection-related field keys your schema introduces: ```typescript export const STORAGE_CONNECTION_KEYS = [ 'host', 'port', @@ -489,7 +489,7 @@ export const STORAGE_CONNECTION_KEYS = [ ]; ``` -**Configuration tab** — Add any new config-related field keys: +**Configuration tab** - Add any new config-related field keys: ```typescript export const STORAGE_CONFIG_KEYS = [ 'pathPrefix', 'storageClass', 'forcePathStyle', @@ -498,7 +498,7 @@ export const STORAGE_CONFIG_KEYS = [ ]; ``` -**Placeholders** — Add helpful placeholder values for your adapter's fields: +**Placeholders** - Add helpful placeholder values for your adapter's fields: ```typescript export const PLACEHOLDERS: Record = { // WebDAV @@ -602,7 +602,7 @@ If the new adapter requires browser-based OAuth (e.g., Google Drive, Dropbox, On | # | File | What to do | | :--- | :--- | :--- | | 13 | `src/app/api/adapters//auth/route.ts` | OAuth authorization URL generation endpoint | -| 14 | `src/app/api/adapters//callback/route.ts` | OAuth callback — exchange code for tokens, store refresh token encrypted | +| 14 | `src/app/api/adapters//callback/route.ts` | OAuth callback - exchange code for tokens, store refresh token encrypted | | 15 | `src/components/adapter/-oauth-button.tsx` | OAuth button component with authorized/unauthorized status | | 16 | `src/components/adapter/form-sections.tsx` | Special form layout: show OAuth button in connection tab, hide auto-managed fields (e.g., `refreshToken`) | | 17 | `src/lib/crypto.ts` | Add OAuth secret fields to `SENSITIVE_KEYS` (e.g., `clientSecret`, `refreshToken`) | diff --git a/wiki/developer-guide/advanced/api-keys.md b/wiki/developer-guide/advanced/api-keys.md index fdf1fe0e..4972bc0a 100644 --- a/wiki/developer-guide/advanced/api-keys.md +++ b/wiki/developer-guide/advanced/api-keys.md @@ -18,7 
+18,7 @@ This document covers the API key authentication system and the webhook trigger m **Key Principles:** - API keys provide stateless, token-based authentication for programmatic access -- API keys **never** inherit SuperAdmin privileges — only explicitly assigned permissions apply +- API keys **never** inherit SuperAdmin privileges - only explicitly assigned permissions apply - The raw key is shown exactly once at creation; only a SHA-256 hash is stored - All API routes support both session (cookie) and API key (Bearer token) authentication via the unified `getAuthContext()` function @@ -199,7 +199,7 @@ export function checkPermissionWithContext( ctx: AuthContext, permission: Permission ): void { - // SuperAdmin bypass (session-only — API keys never have this) + // SuperAdmin bypass (session-only - API keys never have this) if (ctx.isSuperAdmin) return; if (!ctx.permissions.includes(permission)) { @@ -375,7 +375,7 @@ export class ApiKeyError extends DBackupError { ## Security Considerations 1. **No SuperAdmin for API Keys**: Even if the key owner is a SuperAdmin, the API key only has its explicitly assigned permissions -2. **Hash-Only Storage**: Raw keys are never persisted — only SHA-256 hashes +2. **Hash-Only Storage**: Raw keys are never persisted - only SHA-256 hashes 3. **One-Time Reveal**: The full key is displayed exactly once during creation 4. **Expiration**: Optional expiry dates provide time-limited access 5. **Rate Limiting**: API key requests go through the same IP-based rate limiter as browser requests @@ -422,7 +422,7 @@ Use `getAuthContext()` + `checkPermissionWithContext()` for all new routes. 
The ## Related Documentation -- [Authentication System](./auth.md) — Session-based auth, 2FA, Passkeys -- [Permission System (RBAC)](./permissions.md) — Group permissions, available permissions list -- [Audit Logging](./audit.md) — Audit event tracking -- [API Reference](/user-guide/features/api-reference) — Full endpoint documentation (user-facing) +- [Authentication System](./auth.md) - Session-based auth, 2FA, Passkeys +- [Permission System (RBAC)](./permissions.md) - Group permissions, available permissions list +- [Audit Logging](./audit.md) - Audit event tracking +- [API Reference](/user-guide/features/api-reference) - Full endpoint documentation (user-facing) diff --git a/wiki/developer-guide/advanced/auth.md b/wiki/developer-guide/advanced/auth.md index 287ae4fa..06bdc6dc 100644 --- a/wiki/developer-guide/advanced/auth.md +++ b/wiki/developer-guide/advanced/auth.md @@ -149,7 +149,7 @@ See [SSO Integration](./sso.md) for detailed OIDC implementation. Session lifetime is configurable by administrators via Settings → Authentication & Security. The value is stored in the `SystemSetting` table under the key `auth.sessionDuration` (in seconds). ```typescript -// src/lib/auth.ts — Dynamic session expiry via database hook +// src/lib/auth.ts - Dynamic session expiry via database hook databaseHooks: { session: { create: { @@ -234,10 +234,10 @@ All API routes support both authentication methods through the unified `getAuthC ## Security Best Practices -1. **Never trust client-side checks alone** — Always verify on server -2. **Use `getAuthContext()` + `checkPermissionWithContext()` in API routes** — Supports both session and API key auth -3. **Use `checkPermission()` in Server Actions** — Session-only, defense in depth -4. **Log authentication events** — Use the Audit System -5. **Implement rate limiting** — Prevent brute force attacks -6. **Secure session cookies** — HttpOnly, Secure, SameSite -7. **API keys: hash-only storage** — Never persist raw keys +1. 
**Never trust client-side checks alone** - Always verify on server +2. **Use `getAuthContext()` + `checkPermissionWithContext()` in API routes** - Supports both session and API key auth +3. **Use `checkPermission()` in Server Actions** - Session-only, defense in depth +4. **Log authentication events** - Use the Audit System +5. **Implement rate limiting** - Prevent brute force attacks +6. **Secure session cookies** - HttpOnly, Secure, SameSite +7. **API keys: hash-only storage** - Never persist raw keys diff --git a/wiki/developer-guide/advanced/encryption.md b/wiki/developer-guide/advanced/encryption.md index a0fed851..04cc65e6 100644 --- a/wiki/developer-guide/advanced/encryption.md +++ b/wiki/developer-guide/advanced/encryption.md @@ -412,12 +412,12 @@ async function importKey(name: string, hexKey: string) { ## Checksum & Encryption Interaction -The SHA-256 checksum is always calculated on the **final** backup file — after both compression and encryption have been applied. This means: +The SHA-256 checksum is always calculated on the **final** backup file - after both compression and encryption have been applied. This means: - The checksum verifies the encrypted file, not the raw dump - Integrity can be verified without decryption (no encryption key needed for checksum verification) - The checksum is stored alongside encryption metadata (`iv`, `authTag`) in the `.meta.json` sidecar file -- During restore, the checksum is verified **before** decryption begins — preventing wasted processing on corrupted files +- During restore, the checksum is verified **before** decryption begins - preventing wasted processing on corrupted files ## Related Documentation diff --git a/wiki/developer-guide/advanced/healthcheck.md b/wiki/developer-guide/advanced/healthcheck.md index 12a07e91..7ab3d00b 100644 --- a/wiki/developer-guide/advanced/healthcheck.md +++ b/wiki/developer-guide/advanced/healthcheck.md @@ -208,7 +208,7 @@ tests/unit/services/healthcheck-service.test.ts 2. 
Find "Health Check & Connectivity" 3. Click **Run Now** 4. Check the terminal logs for output -5. Navigate to **Sources** or **Destinations** — status badges should be updated +5. Navigate to **Sources** or **Destinations** - status badges should be updated ## Configuration diff --git a/wiki/developer-guide/architecture.md b/wiki/developer-guide/architecture.md index 9e142a46..7a5271e1 100644 --- a/wiki/developer-guide/architecture.md +++ b/wiki/developer-guide/architecture.md @@ -337,9 +337,9 @@ Periodic Integrity Check: ``` **Key Components:** -- `src/lib/checksum.ts` — SHA-256 utility (stream-based, memory-efficient) -- `src/services/integrity-service.ts` — Periodic full verification -- System task `system.integrity_check` — Weekly schedule (disabled by default) +- `src/lib/checksum.ts` - SHA-256 utility (stream-based, memory-efficient) +- `src/services/integrity-service.ts` - Periodic full verification +- System task `system.integrity_check` - Weekly schedule (disabled by default) ## Logging & Error Handling diff --git a/wiki/developer-guide/core/adapters.md b/wiki/developer-guide/core/adapters.md index 7376feaa..609a40e1 100644 --- a/wiki/developer-guide/core/adapters.md +++ b/wiki/developer-guide/core/adapters.md @@ -1,6 +1,6 @@ # Adapter System -DBackup uses a **Plugin/Adapter Architecture**. The core logic doesn't know about specific technologies—it only knows about interfaces. +DBackup uses a **Plugin/Adapter Architecture**. The core logic doesn't know about specific technologies - it only knows about interfaces. ## Overview diff --git a/wiki/developer-guide/core/icons.md b/wiki/developer-guide/core/icons.md index 4f0e9c87..fe021e51 100644 --- a/wiki/developer-guide/core/icons.md +++ b/wiki/developer-guide/core/icons.md @@ -1,6 +1,6 @@ # Icon System -DBackup uses **[Iconify](https://iconify.design/)** for adapter icons — brand logos for databases, storage providers, and notifications.
Icons are **bundled offline** (no API calls at runtime), which is critical for self-hosted deployments. +DBackup uses **[Iconify](https://iconify.design/)** for adapter icons - brand logos for databases, storage providers, and notifications. Icons are **bundled offline** (no API calls at runtime), which is critical for self-hosted deployments. ## Architecture @@ -16,8 +16,8 @@ src/components/adapter/ | Pack | NPM Package | Usage | Coloring | |------|------------|-------|----------| -| **SVG Logos** | `@iconify-icons/logos` | Primary — multi-colored brand icons | Colors embedded in SVG | -| **Simple Icons** | `@iconify-icons/simple-icons` | Fallback — brands not in SVG Logos | Monochrome, brand color via `ADAPTER_COLOR_MAP` | +| **SVG Logos** | `@iconify-icons/logos` | Primary - multi-colored brand icons | Colors embedded in SVG | +| **Simple Icons** | `@iconify-icons/simple-icons` | Fallback - brands not in SVG Logos | Monochrome, brand color via `ADAPTER_COLOR_MAP` | | **Material Design Icons** | `@iconify-icons/mdi` | Protocol, storage & generic icons (SSH, FTP, SMB, email, fallback) | Inherits `currentColor` | > **Rule:** Always prefer **SVG Logos** first. Only use **Simple Icons** if the brand doesn't exist in SVG Logos (e.g. Hetzner, Minio). Use **MDI** for protocol/storage concepts and generic icons (SSH, FTP, email, fallback). @@ -67,7 +67,7 @@ const ADAPTER_COLOR_MAP: Record = { ### AdapterIcon Component -The `` component handles everything — it reads the icon data and optional color, then renders via Iconify's ``: +The `` component handles everything - it reads the icon data and optional color, then renders via Iconify's ``: ```tsx // Usage @@ -90,13 +90,13 @@ Note the icon name from the URL (e.g. 
`logos:mysql-icon` → import path is `@ic In `src/components/adapter/utils.ts`, add the import at the top in the appropriate section: ```typescript -// — SVG Logos (primary, multi-colored) — +// - SVG Logos (primary, multi-colored) - import myBrandIcon from "@iconify-icons/logos/my-brand-icon"; -// — OR Simple Icons (if not in SVG Logos) — +// - OR Simple Icons (if not in SVG Logos) - import myBrandIcon from "@iconify-icons/simple-icons/mybrand"; -// — OR MDI (generic) — +// - OR MDI (generic) - import myGenericIcon from "@iconify-icons/mdi/some-icon"; ``` @@ -121,7 +121,7 @@ const ADAPTER_COLOR_MAP: Record = { ``` ::: tip -SVG Logos icons already contain their brand colors — do **not** add them to `ADAPTER_COLOR_MAP` or the colors will be overridden. +SVG Logos icons already contain their brand colors - do **not** add them to `ADAPTER_COLOR_MAP` or the colors will be overridden. ::: ### Step 5: Verify @@ -164,6 +164,6 @@ pnpm build ## Key Decisions -- **Why bundled, not API?** — DBackup is self-hosted. Users may not have internet access or may block external API calls via CSP. Bundled icons render instantly without network requests. -- **Why Iconify over react-simple-icons?** — Iconify's SVG Logos pack provides multi-colored brand icons (MySQL dolphin in blue, PostgreSQL elephant in blue/white, etc.) rather than flat monochrome. It also covers more brands (OneDrive, AWS S3). -- **Why three packs?** — SVG Logos has the best brand icons but doesn't cover everything. Simple Icons fills the gaps (Hetzner, Minio). MDI provides expressive protocol icons (SSH lock, folder-network for SMB, folder-sync for rsync) as well as generic icons (email, disc fallback), eliminating the need for a separate Lucide pack. +- **Why bundled, not API?** - DBackup is self-hosted. Users may not have internet access or may block external API calls via CSP. Bundled icons render instantly without network requests. 
+- **Why Iconify over react-simple-icons?** - Iconify's SVG Logos pack provides multi-colored brand icons (MySQL dolphin in blue, PostgreSQL elephant in blue/white, etc.) rather than flat monochrome. It also covers more brands (OneDrive, AWS S3). +- **Why three packs?** - SVG Logos has the best brand icons but doesn't cover everything. Simple Icons fills the gaps (Hetzner, Minio). MDI provides expressive protocol icons (SSH lock, folder-network for SMB, folder-sync for rsync) as well as generic icons (email, disc fallback), eliminating the need for a separate Lucide pack. diff --git a/wiki/developer-guide/core/logging.md b/wiki/developer-guide/core/logging.md index 14c315cb..f432b5dc 100644 --- a/wiki/developer-guide/core/logging.md +++ b/wiki/developer-guide/core/logging.md @@ -509,7 +509,7 @@ await recordNotificationLog({ }); ``` -**Key design:** `recordNotificationLog()` is fire-and-forget — it catches and swallows all errors to never block notification delivery. +**Key design:** `recordNotificationLog()` is fire-and-forget - it catches and swallows all errors to never block notification delivery. 
### Dispatch Points @@ -524,12 +524,12 @@ Each log entry stores adapter-specific rendered content for History page preview | Adapter | `renderedPayload` Content | `renderedHtml` | | :--- | :--- | :--- | -| Discord | Embed object (title, description, fields, color) | — | -| Slack | Block Kit blocks array | — | -| Teams | Adaptive Card body | — | -| Telegram | Parsed HTML message | — | -| Email | — | Full rendered React email HTML | -| Others | — | — | +| Discord | Embed object (title, description, fields, color) | - | +| Slack | Block Kit blocks array | - | +| Teams | Adaptive Card body | - | +| Telegram | Parsed HTML message | - | +| Email | - | Full rendered React email HTML | +| Others | - | - | ### Data Retention diff --git a/wiki/developer-guide/core/rate-limiting.md b/wiki/developer-guide/core/rate-limiting.md index 0c2bd00c..24114abd 100644 --- a/wiki/developer-guide/core/rate-limiting.md +++ b/wiki/developer-guide/core/rate-limiting.md @@ -58,9 +58,9 @@ src/app/dashboard/settings/page.tsx → Settings page: Rate Limits tab ### Config Flow -1. **Server startup** — `instrumentation.ts` calls `reloadRateLimits()` → reads DB → rebuilds limiters in server context -2. **Settings change** — Server action calls `reloadRateLimits()` → updates server context limiters -3. **Middleware request** — `syncRateLimitConfig()` fetches `/api/internal/rate-limit-config` (cached 30s) → calls `applyExternalConfig()` → rebuilds Edge limiter instances +1. **Server startup** - `instrumentation.ts` calls `reloadRateLimits()` → reads DB → rebuilds limiters in server context +2. **Settings change** - Server action calls `reloadRateLimits()` → updates server context limiters +3. 
**Middleware request** - `syncRateLimitConfig()` fetches `/api/internal/rate-limit-config` (cached 30s) → calls `applyExternalConfig()` → rebuilds Edge limiter instances ## Middleware Integration diff --git a/wiki/developer-guide/core/runner.md b/wiki/developer-guide/core/runner.md index 26fb08df..e6520d56 100644 --- a/wiki/developer-guide/core/runner.md +++ b/wiki/developer-guide/core/runner.md @@ -177,7 +177,7 @@ export async function stepDump(ctx: RunnerContext): Promise { ### Step 3: Upload (`03-upload.ts`) -Uploads the backup to **all destinations** sequentially (sorted by priority). The dump file is produced once—each destination receives the same file. +Uploads the backup to **all destinations** sequentially (sorted by priority). The dump file is produced once - each destination receives the same file. ```typescript export async function stepUpload(ctx: RunnerContext): Promise { @@ -321,7 +321,7 @@ export async function stepRetention(ctx: RunnerContext): Promise { } ``` -> Each destination can have a completely different retention strategy—e.g., keep 30 daily backups locally but only 12 monthly backups in cloud storage. +> Each destination can have a completely different retention strategy - e.g., keep 30 daily backups locally but only 12 monthly backups in cloud storage. ## Queue Manager diff --git a/wiki/developer-guide/core/services.md b/wiki/developer-guide/core/services.md index 96b07b5b..8cc5c893 100644 --- a/wiki/developer-guide/core/services.md +++ b/wiki/developer-guide/core/services.md @@ -1,6 +1,6 @@ # Service Layer -The Service Layer contains all business logic in DBackup. Server Actions and API routes delegate to services—they never contain business logic themselves. +The Service Layer contains all business logic in DBackup. Server Actions and API routes delegate to services - they never contain business logic themselves.
## Overview @@ -303,7 +303,7 @@ export async function getNotificationLogById(id: string): Promise diff --git a/wiki/user-guide/destinations/dropbox.md b/wiki/user-guide/destinations/dropbox.md index e770dc90..fba09910 100644 --- a/wiki/user-guide/destinations/dropbox.md +++ b/wiki/user-guide/destinations/dropbox.md @@ -23,16 +23,16 @@ Apps with "App folder" access can only read/write within their own folder (`/App | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **App Key** | Dropbox App Key (Client ID) | — | ✅ | -| **App Secret** | Dropbox App Secret (Client Secret) | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **App Key** | Dropbox App Key (Client ID) | - | ✅ | +| **App Secret** | Dropbox App Secret (Client Secret) | - | ✅ | | **Folder Path** | Target folder within app folder | Root | ❌ | ## Setup Guide 1. Go to **Destinations** → **Add Destination** → **Dropbox** 2. Enter App Key and App Secret → **Save** -3. Click **Authorize with Dropbox** — you'll be redirected to Dropbox +3. Click **Authorize with Dropbox** - you'll be redirected to Dropbox 4. Sign in and grant DBackup access 5. After redirect, the status changes to **green** ("Authorized") 6. 
(Optional) Use the **Folder Browser** (📂) to select a subfolder @@ -40,10 +40,10 @@ Apps with "App folder" access can only read/write within their own folder (`/App ## How It Works -- **OAuth tokens** refresh automatically — no manual re-authorization needed +- **OAuth tokens** refresh automatically - no manual re-authorization needed - Files < 150 MB use simple upload; larger files use chunked upload (8 MB chunks) - All credentials (App Key, App Secret, Refresh Token) are stored AES-256-GCM encrypted -- Access tokens are short-lived and never stored — refreshed on-the-fly +- Access tokens are short-lived and never stored - refreshed on-the-fly ## Troubleshooting diff --git a/wiki/user-guide/destinations/ftp.md b/wiki/user-guide/destinations/ftp.md index 297657aa..a17dc131 100644 --- a/wiki/user-guide/destinations/ftp.md +++ b/wiki/user-guide/destinations/ftp.md @@ -6,13 +6,13 @@ Store backups on a remote FTP server. Supports plain FTP and explicit FTPS (FTP | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Host** | Hostname or IP of the FTP server | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Host** | Hostname or IP of the FTP server | - | ✅ | | **Port** | FTP port | `21` | ❌ | | **Username** | FTP username | `anonymous` | ❌ | -| **Password** | FTP password | — | ❌ | +| **Password** | FTP password | - | ❌ | | **TLS** | Enable explicit FTPS (FTP over TLS) | `false` | ❌ | -| **Path Prefix** | Remote directory for backups | — | ❌ | +| **Path Prefix** | Remote directory for backups | - | ❌ | ## Setup Guide @@ -30,7 +30,7 @@ Plain FTP transfers credentials and data unencrypted. 
**Always enable TLS** when ## How It Works -- When TLS is enabled, DBackup uses explicit FTPS (AUTH TLS) — the connection upgrades from plain to encrypted +- When TLS is enabled, DBackup uses explicit FTPS (AUTH TLS) - the connection upgrades from plain to encrypted - DBackup creates subdirectories per job within the Path Prefix automatically - All credentials are stored AES-256-GCM encrypted in the database @@ -58,7 +58,7 @@ connect ECONNREFUSED SSL routines / handshake failure ``` -**Solution:** Ensure the server supports explicit FTPS (AUTH TLS). Implicit FTPS (port 990) is not supported — use explicit mode on port 21. +**Solution:** Ensure the server supports explicit FTPS (AUTH TLS). Implicit FTPS (port 990) is not supported - use explicit mode on port 21. ### Passive Mode Issues diff --git a/wiki/user-guide/destinations/google-drive.md b/wiki/user-guide/destinations/google-drive.md index 9a7bf70d..44629a9d 100644 --- a/wiki/user-guide/destinations/google-drive.md +++ b/wiki/user-guide/destinations/google-drive.md @@ -22,20 +22,20 @@ You need a Google Cloud project with the Drive API enabled (one-time setup): - Copy the **Client ID** and **Client Secret** ::: warning Testing Mode -While your OAuth consent screen is in "Testing" mode, only users listed as test users can authorize. This is fine for self-hosted use — no need to publish the app. +While your OAuth consent screen is in "Testing" mode, only users listed as test users can authorize. This is fine for self-hosted use - no need to publish the app. 
::: ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Client ID** | Google OAuth Client ID | — | ✅ | -| **Client Secret** | Google OAuth Client Secret | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Client ID** | Google OAuth Client ID | - | ✅ | +| **Client Secret** | Google OAuth Client Secret | - | ✅ | | **Folder ID** | Google Drive folder ID for backups | Root | ❌ | ::: tip Finding the Folder ID -Open the target folder in Google Drive — the Folder ID is the last part of the URL: +Open the target folder in Google Drive - the Folder ID is the last part of the URL: `https://drive.google.com/drive/folders/`**`1AbCdEfGhIjKlMnOpQrStUv`** ::: @@ -43,7 +43,7 @@ Open the target folder in Google Drive — the Folder ID is the last part of the 1. Go to **Destinations** → **Add Destination** → **Google Drive** 2. Enter Client ID and Client Secret → **Save** -3. Click **Authorize with Google** — you'll be redirected to Google +3. Click **Authorize with Google** - you'll be redirected to Google 4. Sign in and grant DBackup access to manage its files 5. After redirect, the status changes to **green** ("Authorized") 6. 
(Optional) Enter a **Folder ID** to store backups in a specific folder @@ -51,8 +51,8 @@ Open the target folder in Google Drive — the Folder ID is the last part of the ## How It Works -- **OAuth tokens** refresh automatically — no manual re-authorization needed -- Uses the `drive.file` scope — DBackup can only access files it created (not your entire Drive) +- **OAuth tokens** refresh automatically - no manual re-authorization needed +- Uses the `drive.file` scope - DBackup can only access files it created (not your entire Drive) - Files ≤ 5 MB use simple upload; larger files use resumable upload - All credentials (Client ID, Client Secret, Refresh Token) are stored AES-256-GCM encrypted diff --git a/wiki/user-guide/destinations/local.md b/wiki/user-guide/destinations/local.md index bded9429..96944881 100644 --- a/wiki/user-guide/destinations/local.md +++ b/wiki/user-guide/destinations/local.md @@ -1,12 +1,12 @@ # Local Storage -Store backups on the local filesystem of the server running DBackup. Simplest option — no external service required. +Store backups on the local filesystem of the server running DBackup. Simplest option - no external service required. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | | **Base Path** | Absolute directory path for backups | `/backups` | ❌ | ## Setup Guide @@ -30,7 +30,7 @@ The default `/backups` path works with the default `docker-compose.yml` configur - Backups are written directly to the specified directory - DBackup creates subfolders per job automatically (e.g. 
`/backups/my-job/`) -- No network transfer — fastest destination option +- No network transfer - fastest destination option - File permissions inherit from the DBackup process user ## Troubleshooting diff --git a/wiki/user-guide/destinations/onedrive.md b/wiki/user-guide/destinations/onedrive.md index 61e51264..0010983b 100644 --- a/wiki/user-guide/destinations/onedrive.md +++ b/wiki/user-guide/destinations/onedrive.md @@ -19,7 +19,7 @@ You need an Azure App Registration to enable the Microsoft Graph API (one-time s 7. Copy the **Application (client) ID** from the Overview page ::: danger Don't Confuse the IDs -The Overview page shows three IDs — use **Application (client) ID** only. Do not use Directory (tenant) ID or Object ID. For secrets, copy the **Value** column, not the Secret ID. +The Overview page shows three IDs - use **Application (client) ID** only. Do not use Directory (tenant) ID or Object ID. For secrets, copy the **Value** column, not the Secret ID. :::
@@ -45,16 +45,16 @@ Or recreate the App Registration with the correct setting (third option). | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Client ID** | Application (client) ID from Azure Portal | — | ✅ | -| **Client Secret** | Client secret **Value** from Azure Portal | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Client ID** | Application (client) ID from Azure Portal | - | ✅ | +| **Client Secret** | Client secret **Value** from Azure Portal | - | ✅ | | **Folder Path** | Target folder path (e.g. `/Backups/DBackup`) | Root | ❌ | ## Setup Guide 1. Go to **Destinations** → **Add Destination** → **Microsoft OneDrive** 2. Enter Client ID and Client Secret → **Save** -3. Click **Authorize with Microsoft** — you'll be redirected to Microsoft +3. Click **Authorize with Microsoft** - you'll be redirected to Microsoft 4. Sign in and accept the requested permissions 5. After redirect, the status changes to **green** ("Authorized") 6. (Optional) Use the **Folder Browser** (📂) to select a subfolder @@ -62,13 +62,13 @@ Or recreate the App Registration with the correct setting (third option). ## How It Works -- **OAuth tokens** refresh automatically — no manual re-authorization needed +- **OAuth tokens** refresh automatically - no manual re-authorization needed - Files ≤ 4 MB use simple PUT upload; larger files use upload sessions (10 MB chunks) - All credentials (Client ID, Client Secret, Refresh Token) are stored AES-256-GCM encrypted -- Access tokens are short-lived (~1 hour) and never stored — refreshed on-the-fly +- Access tokens are short-lived (~1 hour) and never stored - refreshed on-the-fly ::: warning Client Secret Expiration -Azure client secrets expire (max 24 months). Set a calendar reminder — Azure does not send expiration notifications for personal accounts. When expired, create a new secret in Azure Portal and update DBackup. 
+Azure client secrets expire (max 24 months). Set a calendar reminder - Azure does not send expiration notifications for personal accounts. When expired, create a new secret in Azure Portal and update DBackup. ::: ## Troubleshooting @@ -80,9 +80,9 @@ The redirect URI in Azure doesn't match your DBackup URL exactly. Check in App R ### AADSTS7000215 / invalid_client Common causes: -- Copied the **Secret ID** instead of the **Value** — recreate the secret and copy the correct column -- Secret expired — check expiration date in Azure Portal -- Wrong Client ID — ensure you're using Application (client) ID +- Copied the **Secret ID** instead of the **Value** - recreate the secret and copy the correct column +- Secret expired - check expiration date in Azure Portal +- Wrong Client ID - ensure you're using Application (client) ID ### Token Expired / Invalid diff --git a/wiki/user-guide/destinations/rsync.md b/wiki/user-guide/destinations/rsync.md index 126a89a0..05999fb1 100644 --- a/wiki/user-guide/destinations/rsync.md +++ b/wiki/user-guide/destinations/rsync.md @@ -15,16 +15,16 @@ The default DBackup Docker image includes rsync. If you're running DBackup outsi | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Host** | Hostname or IP of the remote server | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Host** | Hostname or IP of the remote server | - | ✅ | | **Port** | SSH port | `22` | ❌ | -| **Username** | SSH username | — | ✅ | +| **Username** | SSH username | - | ✅ | | **Auth Type** | Authentication method | `password` | ❌ | -| **Password** | User password (when Auth Type = `password`) | — | ❌ | -| **Private Key** | PEM-encoded private key (when Auth Type = `privateKey`) | — | ❌ | -| **Passphrase** | Passphrase for encrypted private keys | — | ❌ | -| **Path Prefix** | Remote directory for backups | — | ✅ | -| **Options** | Additional rsync flags (e.g. 
`--bwlimit=1000`) | — | ❌ | +| **Password** | User password (when Auth Type = `password`) | - | ❌ | +| **Private Key** | PEM-encoded private key (when Auth Type = `privateKey`) | - | ❌ | +| **Passphrase** | Passphrase for encrypted private keys | - | ❌ | +| **Path Prefix** | Remote directory for backups | - | ✅ | +| **Options** | Additional rsync flags (e.g. `--bwlimit=1000`) | - | ❌ | ### Authentication Methods diff --git a/wiki/user-guide/destinations/s3-aws.md b/wiki/user-guide/destinations/s3-aws.md index 034937c6..37e91b90 100644 --- a/wiki/user-guide/destinations/s3-aws.md +++ b/wiki/user-guide/destinations/s3-aws.md @@ -6,12 +6,12 @@ Store backups in AWS S3 with support for storage classes, lifecycle policies, an | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | | **Region** | AWS region (e.g. `us-east-1`, `eu-central-1`) | `us-east-1` | ✅ | -| **Bucket** | S3 bucket name | — | ✅ | -| **Access Key ID** | AWS access key | — | ✅ | -| **Secret Access Key** | AWS secret key | — | ✅ | -| **Path Prefix** | Folder path within the bucket | — | ❌ | +| **Bucket** | S3 bucket name | - | ✅ | +| **Access Key ID** | AWS access key | - | ✅ | +| **Secret Access Key** | AWS secret key | - | ✅ | +| **Path Prefix** | Folder path within the bucket | - | ❌ | | **Storage Class** | S3 storage class for uploaded objects | `STANDARD` | ❌ | ### Storage Classes @@ -28,7 +28,7 @@ Store backups in AWS S3 with support for storage classes, lifecycle policies, an 1. **Create an S3 bucket** in your preferred region via the [AWS Console](https://s3.console.aws.amazon.com/) 2. 
**Create an IAM user** with programmatic access: - Go to [IAM Console](https://console.aws.amazon.com/iam/) → **Users** → **Create user** - - Attach the `AmazonS3FullAccess` policy (or a scoped policy — see below) + - Attach the `AmazonS3FullAccess` policy (or a scoped policy - see below) - Create an **Access Key** (use case: "Application outside AWS") and copy both keys 3. Go to **Destinations** → **Add Destination** → **Amazon S3** 4. Enter your Region, Bucket, Access Key ID, and Secret Access Key diff --git a/wiki/user-guide/destinations/s3-generic.md b/wiki/user-guide/destinations/s3-generic.md index 807d77c0..723bdf8d 100644 --- a/wiki/user-guide/destinations/s3-generic.md +++ b/wiki/user-guide/destinations/s3-generic.md @@ -1,19 +1,19 @@ # S3-Compatible Storage -Store backups in any S3-compatible storage provider — MinIO, Wasabi, DigitalOcean Spaces, Backblaze B2, and more. +Store backups in any S3-compatible storage provider - MinIO, Wasabi, DigitalOcean Spaces, Backblaze B2, and more. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Endpoint** | S3-compatible API endpoint URL | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Endpoint** | S3-compatible API endpoint URL | - | ✅ | | **Region** | Storage region | `us-east-1` | ❌ | -| **Bucket** | Bucket name | — | ✅ | -| **Access Key ID** | S3 access key | — | ✅ | -| **Secret Access Key** | S3 secret key | — | ✅ | +| **Bucket** | Bucket name | - | ✅ | +| **Access Key ID** | S3 access key | - | ✅ | +| **Secret Access Key** | S3 secret key | - | ✅ | | **Force Path Style** | Use path-style URLs (`endpoint/bucket`) instead of virtual-hosted | `false` | ❌ | -| **Path Prefix** | Folder path within the bucket | — | ❌ | +| **Path Prefix** | Folder path within the bucket | - | ❌ | ::: tip Force Path Style Enable this for providers that don't support virtual-hosted-style URLs (e.g. 
MinIO, Ceph). When enabled, requests go to `endpoint/bucket/key` instead of `bucket.endpoint/key`. diff --git a/wiki/user-guide/destinations/s3-hetzner.md b/wiki/user-guide/destinations/s3-hetzner.md index c7033111..472acc50 100644 --- a/wiki/user-guide/destinations/s3-hetzner.md +++ b/wiki/user-guide/destinations/s3-hetzner.md @@ -1,17 +1,17 @@ # Hetzner Object Storage -Store backups in Hetzner Object Storage — affordable S3-compatible storage in European and US data centers. +Store backups in Hetzner Object Storage - affordable S3-compatible storage in European and US data centers. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | | **Region** | Hetzner data center region | `fsn1` | ✅ | -| **Bucket** | Bucket name | — | ✅ | -| **Access Key ID** | S3 credential Access Key | — | ✅ | -| **Secret Access Key** | S3 credential Secret Key | — | ✅ | -| **Path Prefix** | Folder path within the bucket | — | ✅ | +| **Bucket** | Bucket name | - | ✅ | +| **Access Key ID** | S3 credential Access Key | - | ✅ | +| **Secret Access Key** | S3 credential Secret Key | - | ✅ | +| **Path Prefix** | Folder path within the bucket | - | ✅ | ### Regions @@ -29,7 +29,7 @@ Store backups in Hetzner Object Storage — affordable S3-compatible storage in - Copy the **Access Key** and **Secret Key** immediately (shown only once) 3. Go to **Destinations** → **Add Destination** → **Hetzner Object Storage** 4. Select your **Region**, enter Bucket name, Access Key, and Secret Key -5. Enter a **Path Prefix** (required — e.g. `backups` or `dbackup/prod`) +5. Enter a **Path Prefix** (required - e.g. `backups` or `dbackup/prod`) 6. Click **Test** to verify the connection ::: warning Path Prefix Required @@ -39,7 +39,7 @@ Unlike other S3 adapters, Hetzner Object Storage **requires** a Path Prefix. 
Set ## How It Works - DBackup connects to `https://..your-objectstorage.com` automatically -- Uses S3-compatible API — uploads via multipart for large files +- Uses S3-compatible API - uploads via multipart for large files - All credentials are stored AES-256-GCM encrypted in the database ## Troubleshooting @@ -66,7 +66,7 @@ NoSuchBucket Validation error: path prefix is required ``` -**Solution:** Enter a Path Prefix — this field is mandatory for Hetzner Object Storage. +**Solution:** Enter a Path Prefix - this field is mandatory for Hetzner Object Storage. ## Next Steps diff --git a/wiki/user-guide/destinations/s3-r2.md b/wiki/user-guide/destinations/s3-r2.md index 667fd137..32e047ce 100644 --- a/wiki/user-guide/destinations/s3-r2.md +++ b/wiki/user-guide/destinations/s3-r2.md @@ -1,17 +1,17 @@ # Cloudflare R2 -Store backups in Cloudflare R2 — S3-compatible object storage with zero egress fees. +Store backups in Cloudflare R2 - S3-compatible object storage with zero egress fees. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Account ID** | Cloudflare Account ID | — | ✅ | -| **Bucket** | R2 bucket name | — | ✅ | -| **Access Key ID** | R2 API token Access Key ID | — | ✅ | -| **Secret Access Key** | R2 API token Secret Access Key | — | ✅ | -| **Path Prefix** | Folder path within the bucket | — | ❌ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Account ID** | Cloudflare Account ID | - | ✅ | +| **Bucket** | R2 bucket name | - | ✅ | +| **Access Key ID** | R2 API token Access Key ID | - | ✅ | +| **Secret Access Key** | R2 API token Secret Access Key | - | ✅ | +| **Path Prefix** | Folder path within the bucket | - | ❌ | ## Setup Guide @@ -33,7 +33,7 @@ R2 has no egress fees, making it ideal for backups you may need to restore frequ ## How It Works - DBackup connects to the R2 endpoint `https://.r2.cloudflarestorage.com` automatically -- Uses 
S3-compatible API — uploads via multipart for large files +- Uses S3-compatible API - uploads via multipart for large files - All credentials are stored AES-256-GCM encrypted in the database ## Troubleshooting diff --git a/wiki/user-guide/destinations/sftp.md b/wiki/user-guide/destinations/sftp.md index 25f538b4..b76b6270 100644 --- a/wiki/user-guide/destinations/sftp.md +++ b/wiki/user-guide/destinations/sftp.md @@ -6,15 +6,15 @@ Store backups on a remote server via SSH File Transfer Protocol. Supports passwo | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Host** | Hostname or IP of the SFTP server | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Host** | Hostname or IP of the SFTP server | - | ✅ | | **Port** | SSH port | `22` | ❌ | -| **Username** | SSH username | — | ✅ | +| **Username** | SSH username | - | ✅ | | **Auth Type** | Authentication method | `password` | ❌ | -| **Password** | User password (when Auth Type = `password`) | — | ❌ | -| **Private Key** | PEM-encoded private key (when Auth Type = `privateKey`) | — | ❌ | -| **Passphrase** | Passphrase for encrypted private keys | — | ❌ | -| **Path Prefix** | Remote directory for backups | — | ❌ | +| **Password** | User password (when Auth Type = `password`) | - | ❌ | +| **Private Key** | PEM-encoded private key (when Auth Type = `privateKey`) | - | ❌ | +| **Passphrase** | Passphrase for encrypted private keys | - | ❌ | +| **Path Prefix** | Remote directory for backups | - | ❌ | ### Authentication Methods @@ -45,7 +45,7 @@ Paste the entire PEM key content including the `-----BEGIN` and `-----END` lines ## How It Works -- Files are uploaded via SFTP (SSH subsystem) — all transfers are encrypted in transit +- Files are uploaded via SFTP (SSH subsystem) - all transfers are encrypted in transit - DBackup creates subdirectories per job within the Path Prefix automatically - All credentials 
(passwords, private keys) are stored AES-256-GCM encrypted in the database diff --git a/wiki/user-guide/destinations/smb.md b/wiki/user-guide/destinations/smb.md index f56691b8..d3556eed 100644 --- a/wiki/user-guide/destinations/smb.md +++ b/wiki/user-guide/destinations/smb.md @@ -6,21 +6,21 @@ Store backups on a Windows share, NAS, or any SMB/CIFS-compatible network storag | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **Address** | UNC share path (e.g. `//server/share`) | — | ✅ | +| **Name** | Friendly name for this destination | - | ✅ | +| **Address** | UNC share path (e.g. `//server/share`) | - | ✅ | | **Username** | SMB username | `guest` | ❌ | -| **Password** | SMB password | — | ❌ | -| **Domain** | Windows domain / workgroup | — | ❌ | +| **Password** | SMB password | - | ❌ | +| **Domain** | Windows domain / workgroup | - | ❌ | | **Max Protocol** | Highest SMB protocol version to use | `SMB3` | ❌ | -| **Path Prefix** | Subfolder within the share | — | ❌ | +| **Path Prefix** | Subfolder within the share | - | ❌ | ### Protocol Versions | Protocol | Notes | | :--- | :--- | -| `SMB3` | Default, recommended — encrypted transport | +| `SMB3` | Default, recommended - encrypted transport | | `SMB2` | Fallback for older NAS devices | -| `NT1` | SMB1 legacy — use only if required | +| `NT1` | SMB1 legacy - use only if required | ## Setup Guide @@ -40,7 +40,7 @@ Synology, QNAP, TrueNAS, and OpenMediaVault all support SMB shares. 
Create a ded ## How It Works - DBackup mounts the SMB share temporarily for each operation, then unmounts -- Files are written directly to the share — same behavior as local storage +- Files are written directly to the share - same behavior as local storage - All credentials are stored AES-256-GCM encrypted in the database - `smbclient` must be available in the DBackup container (included in the default Docker image) diff --git a/wiki/user-guide/destinations/webdav.md b/wiki/user-guide/destinations/webdav.md index 2d46a648..758f4ba5 100644 --- a/wiki/user-guide/destinations/webdav.md +++ b/wiki/user-guide/destinations/webdav.md @@ -1,16 +1,16 @@ # WebDAV -Store backups on any WebDAV-compatible server — Nextcloud, ownCloud, Synology, Apache, and more. +Store backups on any WebDAV-compatible server - Nextcloud, ownCloud, Synology, Apache, and more. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Name** | Friendly name for this destination | — | ✅ | -| **URL** | WebDAV endpoint URL | — | ✅ | -| **Username** | WebDAV username | — | ✅ | -| **Password** | WebDAV password or app password | — | ❌ | -| **Path Prefix** | Subfolder path on the server | — | ❌ | +| **Name** | Friendly name for this destination | - | ✅ | +| **URL** | WebDAV endpoint URL | - | ✅ | +| **Username** | WebDAV username | - | ✅ | +| **Password** | WebDAV password or app password | - | ❌ | +| **Path Prefix** | Subfolder path on the server | - | ❌ | ## Setup Guide diff --git a/wiki/user-guide/features/api-keys.md b/wiki/user-guide/features/api-keys.md index 9e5a7f81..25312dbd 100644 --- a/wiki/user-guide/features/api-keys.md +++ b/wiki/user-guide/features/api-keys.md @@ -12,7 +12,7 @@ API keys provide a secure alternative to session-based authentication for progra - Can be **enabled/disabled** without deletion - Supports **rotation** for key cycling -> **Security**: API keys are stored as SHA-256 hashes. 
The raw key is only shown once — immediately after creation or rotation. +> **Security**: API keys are stored as SHA-256 hashes. The raw key is only shown once - immediately after creation or rotation. ## Creating an API Key @@ -27,7 +27,7 @@ API keys provide a secure alternative to session-based authentication for progra | **Permissions** | Yes | Select at least one permission the key should have. | 4. Click **Create Key** -5. **Copy the key immediately** — it won't be shown again +5. **Copy the key immediately** - it won't be shown again ### Recommended Permission Sets @@ -121,6 +121,6 @@ The audit log records which API key was used for each request, enabling full tra 1. **Least privilege**: Only assign permissions the key actually needs 2. **Set expiration dates** for temporary or CI/CD keys 3. **Use descriptive names** to identify the key's purpose -4. **Rotate keys regularly** — especially after team changes +4. **Rotate keys regularly** - especially after team changes 5. **Monitor the audit log** for unexpected API key usage 6. **Disable before deleting** if you want to test the impact first diff --git a/wiki/user-guide/features/notifications.md b/wiki/user-guide/features/notifications.md index 35403778..4a22ce30 100644 --- a/wiki/user-guide/features/notifications.md +++ b/wiki/user-guide/features/notifications.md @@ -47,7 +47,7 @@ Per-job notifications alert you when a specific backup job completes or fails. ### Multiple Channels -You can assign multiple notifications to one job — for example Discord for quick team awareness and Email for formal audit records. +You can assign multiple notifications to one job - for example Discord for quick team awareness and Email for formal audit records. ### Notification Conditions @@ -135,7 +135,7 @@ This feature only works with Email (SMTP) channels. At least one Email channel m 4. A **"Notify user directly"** dropdown appears below the channel selector 5. 
Choose the desired mode -The user's email address is taken from their account profile — no additional configuration needed. +The user's email address is taken from their account profile - no additional configuration needed. ### Test Notifications @@ -161,8 +161,8 @@ For channel-specific troubleshooting, see the individual channel pages: ### Notification Strategy -1. **Always notify on failure** — Critical for reliability -2. **Consider noise** — Too many success notifications get ignored +1. **Always notify on failure** - Critical for reliability +2. **Consider noise** - Too many success notifications get ignored 3. **Use channels appropriately**: - Discord / Slack: Team visibility - Teams: Enterprise communication @@ -171,18 +171,18 @@ For channel-specific troubleshooting, see the individual channel pages: - SMS (Twilio): Critical failure alerts to mobile phones - Generic Webhook: Automation and monitoring tools - Email: Audit trail, per-user alerts -4. **Test regularly** — Ensure notifications work +4. **Test regularly** - Ensure notifications work ### Security -1. **Don't log credentials** — Use environment variables -2. **Secure webhooks** — Don't share webhook URLs publicly -3. **Review recipients** — Only needed parties -4. **SMTP over TLS** — Encrypt email transport +1. **Don't log credentials** - Use environment variables +2. **Secure webhooks** - Don't share webhook URLs publicly +3. **Review recipients** - Only needed parties +4. 
**SMTP over TLS** - Encrypt email transport

## Next Steps

-- [Notification Channels](/user-guide/notifications/) — Detailed setup per channel
-- [Creating Jobs](/user-guide/jobs/) — Assign per-job notifications
-- [Scheduling](/user-guide/jobs/scheduling) — Automate backups
-- [Storage Explorer](/user-guide/features/storage-explorer) — Review backups
+- [Notification Channels](/user-guide/notifications/) - Detailed setup per channel
+- [Creating Jobs](/user-guide/jobs/) - Assign per-job notifications
+- [Scheduling](/user-guide/jobs/scheduling) - Automate backups
+- [Storage Explorer](/user-guide/features/storage-explorer) - Review backups
diff --git a/wiki/user-guide/features/profile-settings.md b/wiki/user-guide/features/profile-settings.md
index bc450077..3726543b 100644
--- a/wiki/user-guide/features/profile-settings.md
+++ b/wiki/user-guide/features/profile-settings.md
@@ -44,7 +44,7 @@ When enabled (default), starting a backup or restore job will automatically:

If you prefer to stay on the current page when starting jobs, you can disable this option.

::: info
-Preference toggles are saved immediately when changed—no save button required.
+Preference toggles are saved immediately when changed - no save button required.
:::

### Security Tab
@@ -63,7 +63,7 @@ View and manage all your active login sessions:

- **IP Address**: The IP address of each session is displayed. 
On localhost, the IPv6 loopback address is shown as "localhost" - **Timestamps**: "Created" shows when the session was started, "Last seen" shows the most recent activity - **Current Session Badge**: Your current session is marked with a "Current" badge and cannot be revoked -- **Revoke Session**: Click the trash icon on any other session to revoke it — this forces an immediate sign-out on that device +- **Revoke Session**: Click the trash icon on any other session to revoke it - this forces an immediate sign-out on that device - **Revoke All Others**: Use the "Revoke All Others" button to sign out all devices except your current one. A confirmation dialog prevents accidental logouts ::: tip diff --git a/wiki/user-guide/features/rate-limits.md b/wiki/user-guide/features/rate-limits.md index 5944b837..cbc5ce3f 100644 --- a/wiki/user-guide/features/rate-limits.md +++ b/wiki/user-guide/features/rate-limits.md @@ -4,7 +4,7 @@ Configure how many requests clients can send to the application within a given t ## Overview -DBackup enforces rate limits at the middleware level — every incoming request is checked before reaching any route handler. Limits are applied **per IP address** and are split into three categories: +DBackup enforces rate limits at the middleware level - every incoming request is checked before reaching any route handler. 
Limits are applied **per IP address** and are split into three categories: | Category | Applies To | Default | | :--- | :--- | :--- | diff --git a/wiki/user-guide/features/restore.md b/wiki/user-guide/features/restore.md index 5a6898e4..f5047f2c 100644 --- a/wiki/user-guide/features/restore.md +++ b/wiki/user-guide/features/restore.md @@ -48,7 +48,7 @@ After selecting a target source, DBackup automatically queries the server and di | **Size** | Total size (data + indexes) | | **Tables** | Number of tables or collections | -**Conflict Detection**: If a database from the backup has the same target name as an existing database on the server, the row is highlighted in red with a ⚠️ warning icon — indicating that database will be overwritten during restore. +**Conflict Detection**: If a database from the backup has the same target name as an existing database on the server, the row is highlighted in red with a ⚠️ warning icon - indicating that database will be overwritten during restore. A summary footer shows the total number of databases and their combined size. diff --git a/wiki/user-guide/features/webhook-triggers.md b/wiki/user-guide/features/webhook-triggers.md index 44e13a26..46dc28d1 100644 --- a/wiki/user-guide/features/webhook-triggers.md +++ b/wiki/user-guide/features/webhook-triggers.md @@ -18,8 +18,8 @@ All API calls require an [API Key](/user-guide/features/api-keys) with appropria Navigate to **Access Management → API Keys** and create a key with at least these permissions: -- `jobs:execute` — Trigger backup jobs -- `history:read` — Poll execution status +- `jobs:execute` - Trigger backup jobs +- `history:read` - Poll execution status ### 2. 
Trigger a Backup @@ -72,7 +72,7 @@ curl "https://your-instance.com/api/executions/EXECUTION_ID" \ | `Pending` | Job is queued, waiting for an execution slot | | `Running` | Job is actively running | | `Success` | Job completed successfully | -| `Failed` | Job failed — check `error` field for details | +| `Failed` | Job failed - check `error` field for details | ### Include Execution Logs @@ -87,7 +87,7 @@ curl "https://your-instance.com/api/executions/EXECUTION_ID?includeLogs=true" \ You can find a job's ID in two ways: -1. **In the UI**: Go to **Jobs**, click the **API Trigger** button (webhook icon) on the job row — it shows pre-filled curl commands with the correct job ID +1. **In the UI**: Go to **Jobs**, click the **API Trigger** button (webhook icon) on the job row - it shows pre-filled curl commands with the correct job ID 2. **Via API**: List all jobs with a `GET /api/jobs` request: ```bash @@ -252,16 +252,16 @@ for i in $(seq 1 30); do -H "Authorization: Bearer ${DBACKUP_API_KEY}" | jq -r '.data.status') if [ "$STATUS" = "Success" ]; then - echo "Backup complete — safe to deploy" + echo "Backup complete - safe to deploy" exit 0 elif [ "$STATUS" = "Failed" ]; then - echo "Backup failed — aborting deploy!" + echo "Backup failed - aborting deploy!" exit 1 fi sleep 10 done -echo "Backup timed out — aborting deploy!" +echo "Backup timed out - aborting deploy!" exit 1 ``` diff --git a/wiki/user-guide/first-steps.md b/wiki/user-guide/first-steps.md index 2868c815..24ff8cd3 100644 --- a/wiki/user-guide/first-steps.md +++ b/wiki/user-guide/first-steps.md @@ -8,7 +8,7 @@ After installation, open [http://localhost:3000](http://localhost:3000) in your On first launch, you'll see a login page with a "Sign Up" option. This self-registration is **only available for the first user** and creates the administrator account. 
-Once logged in, you can use the **Quick Setup Wizard** (available in the sidebar under **Quick Setup**) to configure your first backup in a guided, step-by-step flow — this is the recommended approach for new users. It walks you through creating a database source, storage destination, optional encryption and notifications, and a backup job all in one place. +Once logged in, you can use the **Quick Setup Wizard** (available in the sidebar under **Quick Setup**) to configure your first backup in a guided, step-by-step flow - this is the recommended approach for new users. It walks you through creating a database source, storage destination, optional encryption and notifications, and a backup job all in one place. If you prefer to configure everything manually, follow the steps below. @@ -72,7 +72,7 @@ Now connect source and destination in a job. 3. In the **General** tab, configure: - **Name**: `Daily MySQL Backup` - **Source**: Select "Production MySQL" - - **Databases**: Click **Load** to fetch available databases, then select which ones to back up — leave empty to back up all databases + - **Databases**: Click **Load** to fetch available databases, then select which ones to back up - leave empty to back up all databases 4. In the **Destinations** tab, click **Add Destination** and select "Local Backups" - Each destination can have its own independent retention policy - You can add multiple destinations (e.g., local + S3) for redundancy diff --git a/wiki/user-guide/installation.md b/wiki/user-guide/installation.md index 61125b1f..b312b6f7 100644 --- a/wiki/user-guide/installation.md +++ b/wiki/user-guide/installation.md @@ -148,7 +148,7 @@ The health check runs every 30 seconds with a 30-second start period. 
Docker wil ## Graceful Shutdown -When stopping the container, DBackup **waits for all running backup/restore jobs to finish** before shutting down — no data is lost, regardless of how long the backup takes: +When stopping the container, DBackup **waits for all running backup/restore jobs to finish** before shutting down - no data is lost, regardless of how long the backup takes: ```bash docker stop dbackup # Sends SIGTERM → waits for running backups to finish @@ -161,9 +161,9 @@ docker compose down # Same graceful behavior - A second `Ctrl+C` / `docker kill` forces immediate exit for emergencies ::: warning Docker Stop Timeout -By default, Docker sends a `SIGKILL` **10 seconds** after `docker stop` — this forcefully kills the process regardless of what it's doing. Since `SIGKILL` cannot be caught by any application, you **must** increase the timeout if your backups take longer than 10 seconds. +By default, Docker sends a `SIGKILL` **10 seconds** after `docker stop` - this forcefully kills the process regardless of what it's doing. Since `SIGKILL` cannot be caught by any application, you **must** increase the timeout if your backups take longer than 10 seconds. -**Docker Compose** (recommended — add to your `docker-compose.yml`): +**Docker Compose** (recommended - add to your `docker-compose.yml`): ```yaml services: dbackup: diff --git a/wiki/user-guide/jobs/index.md b/wiki/user-guide/jobs/index.md index 8f72ff4c..b67fb98b 100644 --- a/wiki/user-guide/jobs/index.md +++ b/wiki/user-guide/jobs/index.md @@ -51,7 +51,7 @@ Automate backups with cron expressions. See [Scheduling](/user-guide/jobs/schedu ### Retention -Automatically clean up old backups. Retention is configured **per destination** — each destination can have its own retention policy. See [Retention Policies](/user-guide/jobs/retention). +Automatically clean up old backups. Retention is configured **per destination** - each destination can have its own retention policy. 
See [Retention Policies](/user-guide/jobs/retention). ### Notifications @@ -63,7 +63,7 @@ Get alerts when backups complete: ## Multi-Destination -A job can upload to **multiple storage destinations** simultaneously — ideal for implementing the 3-2-1 backup rule. +A job can upload to **multiple storage destinations** simultaneously - ideal for implementing the 3-2-1 backup rule. ### Adding Destinations @@ -81,7 +81,7 @@ Each destination has its own retention configuration: ### Upload Behavior -- The database dump runs **once** — the resulting file is uploaded to each destination sequentially +- The database dump runs **once** - the resulting file is uploaded to each destination sequentially - Destinations are processed in priority order (top to bottom) - If one destination fails, the others still continue - The same storage adapter cannot be selected twice in one job diff --git a/wiki/user-guide/notifications/discord.md b/wiki/user-guide/notifications/discord.md index 73b6fae8..41a9b52a 100644 --- a/wiki/user-guide/notifications/discord.md +++ b/wiki/user-guide/notifications/discord.md @@ -6,7 +6,7 @@ Send rich embed notifications to Discord channels via webhooks. | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Webhook URL** | Discord webhook URL | — | ✅ | +| **Webhook URL** | Discord webhook URL | - | ✅ | | **Username** | Bot display name in Discord | `Backup Manager` | ❌ | | **Avatar URL** | Bot avatar image URL | Discord default | ❌ | @@ -37,14 +37,14 @@ Each embed includes title, description, structured fields (job name, duration, s ## Troubleshooting -### 401 — Invalid Webhook Token +### 401 - Invalid Webhook Token Verify the webhook URL is complete. Check the webhook hasn't been deleted in Discord → Server Settings → Integrations. -### 429 — Rate Limited +### 429 - Rate Limited -Too many messages in a short period. Reduce notification frequency — avoid "Always" on high-frequency jobs. +Too many messages in a short period. 
Reduce notification frequency - avoid "Always" on high-frequency jobs. -### 404 — Unknown Webhook +### 404 - Unknown Webhook The webhook or channel was deleted. Create a new webhook and update the configuration. diff --git a/wiki/user-guide/notifications/email.md b/wiki/user-guide/notifications/email.md index b7717e58..c84aa7d3 100644 --- a/wiki/user-guide/notifications/email.md +++ b/wiki/user-guide/notifications/email.md @@ -6,15 +6,15 @@ Send HTML notifications via any SMTP server. Supports multiple recipients and pe | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **SMTP Host** | Mail server hostname | — | ✅ | +| **SMTP Host** | Mail server hostname | - | ✅ | | **Port** | SMTP port | `587` | ❌ | | **Security** | `none`, `ssl`, or `starttls` | `starttls` | ❌ | -| **User** | SMTP username | — | ❌ | -| **Password** | SMTP password | — | ❌ | -| **From** | Sender email address | — | ✅ | -| **To** | Recipient email address(es) | — | ✅ | +| **User** | SMTP username | - | ❌ | +| **Password** | SMTP password | - | ❌ | +| **From** | Sender email address | - | ✅ | +| **To** | Recipient email address(es) | - | ✅ | -**Security modes:** `none` (port 25, unencrypted), `ssl` (port 465, implicit TLS), `starttls` (port 587, upgrade to TLS — recommended). +**Security modes:** `none` (port 25, unencrypted), `ssl` (port 465, implicit TLS), `starttls` (port 587, upgrade to TLS - recommended). ## Setup Guide @@ -26,13 +26,13 @@ Send HTML notifications via any SMTP server. Supports multiple recipients and pe
Common SMTP provider settings -**Gmail:** `smtp.gmail.com:587` (STARTTLS) — requires an [App Password](https://myaccount.google.com/apppasswords), not your regular password. +**Gmail:** `smtp.gmail.com:587` (STARTTLS) - requires an [App Password](https://myaccount.google.com/apppasswords), not your regular password. -**SendGrid:** `smtp.sendgrid.net:587` (STARTTLS) — User: `apikey`, Password: your API key. +**SendGrid:** `smtp.sendgrid.net:587` (STARTTLS) - User: `apikey`, Password: your API key. -**Amazon SES:** `email-smtp.{region}.amazonaws.com:587` (STARTTLS) — SMTP credentials from SES console. +**Amazon SES:** `email-smtp.{region}.amazonaws.com:587` (STARTTLS) - SMTP credentials from SES console. -**Mailgun:** `smtp.mailgun.org:587` (STARTTLS) — User: `postmaster@your-domain.mailgun.org`. +**Mailgun:** `smtp.mailgun.org:587` (STARTTLS) - User: `postmaster@your-domain.mailgun.org`.
@@ -40,7 +40,7 @@ Send HTML notifications via any SMTP server. Supports multiple recipients and pe - **HTML template** with colored header bar (green = success, red = failure, blue = info) - **Multiple recipients**: Add multiple email addresses in the To field -- **Per-user delivery**: For login and account events, DBackup can email the affected user directly — configure in **Settings → Notifications** (see [System Notifications](/user-guide/features/notifications#notify-user-directly)) +- **Per-user delivery**: For login and account events, DBackup can email the affected user directly - configure in **Settings → Notifications** (see [System Notifications](/user-guide/features/notifications#notify-user-directly)) ## Troubleshooting diff --git a/wiki/user-guide/notifications/generic-webhook.md b/wiki/user-guide/notifications/generic-webhook.md index 848ec7b7..3ff6553e 100644 --- a/wiki/user-guide/notifications/generic-webhook.md +++ b/wiki/user-guide/notifications/generic-webhook.md @@ -6,12 +6,12 @@ Send JSON payloads to any HTTP endpoint. 
Use for custom integrations with PagerD | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Webhook URL** | Target HTTP endpoint URL | — | ✅ | +| **Webhook URL** | Target HTTP endpoint URL | - | ✅ | | **HTTP Method** | `POST`, `PUT`, or `PATCH` | `POST` | ❌ | | **Content-Type** | Content-Type header value | `application/json` | ❌ | -| **Authorization** | Authorization header value (e.g., `Bearer token`) | — | ❌ | -| **Custom Headers** | Additional headers (one per line, `Key: Value`) | — | ❌ | -| **Payload Template** | Custom JSON with `{{variable}}` placeholders | — | ❌ | +| **Authorization** | Authorization header value (e.g., `Bearer token`) | - | ❌ | +| **Custom Headers** | Additional headers (one per line, `Key: Value`) | - | ❌ | +| **Payload Template** | Custom JSON with `{{variable}}` placeholders | - | ❌ | ## Setup Guide @@ -53,7 +53,7 @@ Use `{{variable}}` placeholders to create your own payload structure: | `{{fields}}` | JSON array of fields | `[{"name":"Job","value":"Prod"}]` | ::: info -Variable names must match the pattern `[a-zA-Z0-9_]+` — no hyphens or special characters. +Variable names must match the pattern `[a-zA-Z0-9_]+` - no hyphens or special characters. :::
@@ -72,7 +72,7 @@ Variable names must match the pattern `[a-zA-Z0-9_]+` — no hyphens or special } ``` -**Uptime Kuma (Push):** No template needed — use the push URL directly: +**Uptime Kuma (Push):** No template needed - use the push URL directly: ``` https://uptime.example.com/api/push/TOKEN?status=up&msg={{message}} ``` @@ -92,14 +92,14 @@ https://uptime.example.com/api/push/TOKEN?status=up&msg={{message}} ## Troubleshooting -### 401 — Unauthorized +### 401 - Unauthorized Verify the Authorization header value. Check that the token hasn't expired and has the required permissions. -### 400 — Bad Request +### 400 - Bad Request Verify your custom template is valid JSON. Check the target service's expected payload format. Ensure Content-Type matches what the service expects. ### Template Variables Not Replaced -Check for typos — variable names are case-sensitive. Only the documented variables above are supported. +Check for typos - variable names are case-sensitive. Only the documented variables above are supported. diff --git a/wiki/user-guide/notifications/gotify.md b/wiki/user-guide/notifications/gotify.md index a77164cb..bbdf9ec3 100644 --- a/wiki/user-guide/notifications/gotify.md +++ b/wiki/user-guide/notifications/gotify.md @@ -6,8 +6,8 @@ Send push notifications to your self-hosted [Gotify](https://gotify.net/) server | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Server URL** | Gotify server URL (e.g., `https://gotify.example.com`) | — | ✅ | -| **App Token** | Application token (from Gotify → Apps) | — | ✅ | +| **Server URL** | Gotify server URL (e.g., `https://gotify.example.com`) | - | ✅ | +| **App Token** | Application token (from Gotify → Apps) | - | ✅ | | **Priority** | Default message priority (0–10) | `5` | ❌ | ## Setup Guide @@ -47,7 +47,7 @@ Priority range: 0 (silent) to 10 (highest). 
Priorities 8+ trigger high-urgency a ## Troubleshooting -### 401 — Unauthorized +### 401 - Unauthorized Verify the App Token is correct. Ensure it belongs to an **Application** (not a Client token). @@ -57,4 +57,4 @@ Ensure the Gotify server is running and reachable from DBackup. Check firewall r ### Notifications Not Appearing on Mobile -Check the Gotify Android app WebSocket connection is active. Some Android manufacturers kill background apps — add Gotify to battery optimization exceptions. +Check the Gotify Android app WebSocket connection is active. Some Android manufacturers kill background apps - add Gotify to battery optimization exceptions. diff --git a/wiki/user-guide/notifications/ntfy.md b/wiki/user-guide/notifications/ntfy.md index 11986db5..bbb21c53 100644 --- a/wiki/user-guide/notifications/ntfy.md +++ b/wiki/user-guide/notifications/ntfy.md @@ -1,14 +1,14 @@ # ntfy -Send push notifications via [ntfy](https://ntfy.sh/) — a simple, topic-based notification service. Use the public `ntfy.sh` instance or self-host your own server. +Send push notifications via [ntfy](https://ntfy.sh/) - a simple, topic-based notification service. Use the public `ntfy.sh` instance or self-host your own server. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | | **Server URL** | ntfy server URL | `https://ntfy.sh` | ❌ | -| **Topic** | Notification topic name | — | ✅ | -| **Access Token** | Bearer token (for protected topics) | — | ❌ | +| **Topic** | Notification topic name | - | ✅ | +| **Access Token** | Bearer token (for protected topics) | - | ❌ | | **Priority** | Default message priority (1–5) | `3` | ❌ | ## Setup Guide @@ -54,7 +54,7 @@ DBackup maps events to ntfy priorities automatically: ## Troubleshooting -### 401/403 — Unauthorized +### 401/403 - Unauthorized Verify the access token is correct and has **write** permission to the topic. Topic names are case-sensitive. 
diff --git a/wiki/user-guide/notifications/slack.md b/wiki/user-guide/notifications/slack.md index d5b18a9e..0144d0d2 100644 --- a/wiki/user-guide/notifications/slack.md +++ b/wiki/user-guide/notifications/slack.md @@ -6,7 +6,7 @@ Send formatted notifications to Slack channels using Incoming Webhooks with Bloc | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Webhook URL** | Slack Incoming Webhook URL | — | ✅ | +| **Webhook URL** | Slack Incoming Webhook URL | - | ✅ | | **Channel** | Override channel (e.g., `#backups`) | Webhook default | ❌ | | **Username** | Bot display name | `DBackup` | ❌ | | **Icon Emoji** | Bot icon emoji (e.g., `:shield:`) | Default | ❌ | @@ -42,14 +42,14 @@ Channel override only works if the Slack app has the `chat:write` scope. Standar ## Troubleshooting -### 403 — invalid_token +### 403 - invalid_token Verify the webhook URL is complete. Check the Slack app hasn't been uninstalled, or regenerate the webhook. -### 404 — channel_not_found +### 404 - channel_not_found The channel override target doesn't exist or is archived. Verify the name with `#` prefix. For private channels, invite the bot first. -### 403 — team_disabled +### 403 - team_disabled The Slack app was removed. Reinstall it in your workspace settings. diff --git a/wiki/user-guide/notifications/teams.md b/wiki/user-guide/notifications/teams.md index 416ac5c5..8aaae32b 100644 --- a/wiki/user-guide/notifications/teams.md +++ b/wiki/user-guide/notifications/teams.md @@ -6,13 +6,13 @@ Send Adaptive Card notifications to Microsoft Teams channels via Power Automate | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Webhook URL** | Teams Workflow webhook URL | — | ✅ | +| **Webhook URL** | Teams Workflow webhook URL | - | ✅ | ## Setup Guide 1. Open the target **Teams channel** → **⋯ (More options)** → **Workflows** 2. Search for **"Post to a channel when a webhook request is received"** -3. 
Follow the setup wizard — select team and channel → **Add workflow** +3. Follow the setup wizard - select team and channel → **Add workflow** 4. Copy the generated **Webhook URL** 5. In DBackup: **Notifications** → **Add Notification** → **Microsoft Teams** 6. Paste the Webhook URL → **Test** → **Save** @@ -36,11 +36,11 @@ Each card includes title, summary, structured fields (FactSet), and timestamp. ## Troubleshooting -### 400 — Bad Request +### 400 - Bad Request Verify the URL is from a Power Automate Workflow (not a deprecated Office 365 Connector). Ensure the workflow is active and the channel still exists. -### 401/403 — Unauthorized +### 401/403 - Unauthorized The workflow may have expired or the creator lost channel access. Recreate the workflow in Power Automate. diff --git a/wiki/user-guide/notifications/telegram.md b/wiki/user-guide/notifications/telegram.md index ff592b8e..8446501d 100644 --- a/wiki/user-guide/notifications/telegram.md +++ b/wiki/user-guide/notifications/telegram.md @@ -6,8 +6,8 @@ Send notifications to Telegram chats, groups, or channels using a Telegram Bot. | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Bot Token** | Telegram Bot API token from [@BotFather](https://t.me/BotFather) | — | ✅ | -| **Chat ID** | Target chat, group, or channel ID | — | ✅ | +| **Bot Token** | Telegram Bot API token from [@BotFather](https://t.me/BotFather) | - | ✅ | +| **Chat ID** | Target chat, group, or channel ID | - | ✅ | | **Parse Mode** | Message format: `HTML`, `MarkdownV2`, `Markdown` | `HTML` | ❌ | | **Disable Notification** | Send silently (no notification sound) | `false` | ❌ | @@ -25,7 +25,7 @@ Send notifications to Telegram chats, groups, or channels using a Telegram Bot. 
| Error | Solution | | :--- | :--- | -| `401: Unauthorized` | Bot Token is invalid — regenerate via @BotFather | +| `401: Unauthorized` | Bot Token is invalid - regenerate via @BotFather | | `400: chat not found` | Chat ID is wrong, or bot hasn't been messaged yet | -| `403: bot was blocked` | User blocked the bot — unblock it in Telegram | +| `403: bot was blocked` | User blocked the bot - unblock it in Telegram | | `403: bot is not a member` | Add the bot to the group/channel first | diff --git a/wiki/user-guide/notifications/twilio-sms.md b/wiki/user-guide/notifications/twilio-sms.md index 4166ffa5..b50fedb0 100644 --- a/wiki/user-guide/notifications/twilio-sms.md +++ b/wiki/user-guide/notifications/twilio-sms.md @@ -1,15 +1,15 @@ # SMS (Twilio) -Send SMS notifications for critical backup events via the Twilio API. Works on any mobile phone — no app required. +Send SMS notifications for critical backup events via the Twilio API. Works on any mobile phone - no app required. ## Configuration | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **Account SID** | Twilio Account SID (starts with `AC`) | — | ✅ | -| **Auth Token** | Twilio Auth Token | — | ✅ | -| **From** | Sender phone number in E.164 format (e.g., `+1234567890`) | — | ✅ | -| **To** | Recipient phone number in E.164 format | — | ✅ | +| **Account SID** | Twilio Account SID (starts with `AC`) | - | ✅ | +| **Auth Token** | Twilio Auth Token | - | ✅ | +| **From** | Sender phone number in E.164 format (e.g., `+1234567890`) | - | ✅ | +| **To** | Recipient phone number in E.164 format | - | ✅ | ## Setup Guide @@ -24,7 +24,7 @@ Trial accounts can only send to verified numbers. Add recipients under **Verifie ## How It Works -- Messages are optimized for SMS length — only the first 4 fields are included +- Messages are optimized for SMS length - only the first 4 fields are included - Twilio charges per SMS segment (~$0.0079/segment US). 
Use SMS for **failure-only** notifications and free channels (Discord, ntfy) for success notifications ## Troubleshooting @@ -33,5 +33,5 @@ Trial accounts can only send to verified numbers. Add recipients under **Verifie | :--- | :--- | | `401: Authentication Error` | Account SID or Auth Token is incorrect | | `Invalid 'To' Phone Number` | Must be E.164 format: `+` followed by country code and number | -| `Unverified number` | Trial accounts require verified numbers — add in Twilio Console | +| `Unverified number` | Trial accounts require verified numbers - add in Twilio Console | | No SMS received | Check Twilio Console → Messaging → Logs for delivery status | diff --git a/wiki/user-guide/security/encryption-key.md b/wiki/user-guide/security/encryption-key.md index 2f59f43b..7cd0b510 100644 --- a/wiki/user-guide/security/encryption-key.md +++ b/wiki/user-guide/security/encryption-key.md @@ -44,27 +44,27 @@ If you start DBackup with a **different** `ENCRYPTION_KEY` than the one used whe | SSO client secrets | SSO login stops working | | Encryption Profile keys | Existing encrypted backups **cannot be restored** | -DBackup will **not crash** — it starts normally. Errors only surface when a feature tries to use an encrypted value. +DBackup will **not crash** - it starts normally. Errors only surface when a feature tries to use an encrypted value. ## Restoring After a Key Loss -There is no automatic recovery — AES-256-GCM encryption cannot be reversed without the correct key. +There is no automatic recovery - AES-256-GCM encryption cannot be reversed without the correct key. **Options:** -1. **Restore the original key** — If you have the key somewhere (password manager, old `.env` file, CI/CD secret), set it back. Everything works again immediately. +1. **Restore the original key** - If you have the key somewhere (password manager, old `.env` file, CI/CD secret), set it back. Everything works again immediately. -2. 
**Re-enter all credentials manually** — If the key is truly lost: +2. **Re-enter all credentials manually** - If the key is truly lost: - Delete and recreate all Sources and Destinations (re-enter passwords) - Re-authorize OAuth destinations (Google Drive, Dropbox, OneDrive) - Re-configure SSO providers - - Recreate Encryption Profiles — **note:** existing backup files encrypted with old profiles cannot be decrypted + - Recreate Encryption Profiles - **note:** existing backup files encrypted with old profiles cannot be decrypted -3. **Reset the database** — If starting fresh is acceptable, delete `dbackup.db` and start over with a new key. +3. **Reset the database** - If starting fresh is acceptable, delete `dbackup.db` and start over with a new key. ## Using a Database From a Different Installation -If you restore a `dbackup.db` file from a backup or another server, you **must also use the same `ENCRYPTION_KEY`** that was set when that database was created. The key is not stored inside the database file — it must be provided separately. +If you restore a `dbackup.db` file from a backup or another server, you **must also use the same `ENCRYPTION_KEY`** that was set when that database was created. The key is not stored inside the database file - it must be provided separately. Mismatched key + database = all credentials broken. The fix is to set the correct key for that database. @@ -74,10 +74,10 @@ DBackup does not currently support in-place key rotation (re-encrypting all data 1. Export your configuration via **Settings → Config Backup** 2. Spin up a fresh instance with a new `ENCRYPTION_KEY` -3. Re-import the config — credentials will need to be re-entered manually since they were encrypted with the old key +3. 
Re-import the config - credentials will need to be re-entered manually since they were encrypted with the old key ## Next Steps -- [Encryption Vault](/user-guide/security/encryption) — Encrypt backup files with Encryption Profiles -- [Recovery Kit](/user-guide/security/recovery-kit) — Offline decryption for encrypted backup files -- [System Backup](/user-guide/features/system-backup) — Back up your DBackup configuration +- [Encryption Vault](/user-guide/security/encryption) - Encrypt backup files with Encryption Profiles +- [Recovery Kit](/user-guide/security/recovery-kit) - Offline decryption for encrypted backup files +- [System Backup](/user-guide/features/system-backup) - Back up your DBackup configuration diff --git a/wiki/user-guide/sources/mongodb.md b/wiki/user-guide/sources/mongodb.md index 2115b3f9..b754f96c 100644 --- a/wiki/user-guide/sources/mongodb.md +++ b/wiki/user-guide/sources/mongodb.md @@ -22,14 +22,14 @@ DBackup uses `mongodump` from MongoDB Database Tools. | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | | **Connection Mode** | Direct (TCP) or SSH | `Direct` | ✅ | -| **Connection URI** | Full MongoDB URI (overrides other settings) | — | ❌ | +| **Connection URI** | Full MongoDB URI (overrides other settings) | - | ❌ | | **Host** | Database server hostname | `localhost` | ✅ | | **Port** | MongoDB port | `27017` | ✅ | -| **User** | Database username | — | ❌ | -| **Password** | Database password | — | ❌ | +| **User** | Database username | - | ❌ | +| **Password** | Database password | - | ❌ | | **Auth Database** | Authentication database | `admin` | ❌ | | **Database** | Database name(s) to backup | All databases | ❌ | -| **Additional Options** | Extra `mongodump` flags | — | ❌ | +| **Additional Options** | Extra `mongodump` flags | - | ❌ | ### SSH Mode Fields @@ -37,13 +37,13 @@ These fields appear when **Connection Mode** is set to **SSH**: | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| 
**SSH Host** | SSH server hostname or IP | — | ✅ | +| **SSH Host** | SSH server hostname or IP | - | ✅ | | **SSH Port** | SSH server port | `22` | ❌ | -| **SSH Username** | SSH login username | — | ✅ | +| **SSH Username** | SSH login username | - | ✅ | | **SSH Auth Type** | Password, Private Key, or Agent | `Password` | ✅ | -| **SSH Password** | SSH password | — | ❌ | -| **SSH Private Key** | PEM-formatted private key | — | ❌ | -| **SSH Passphrase** | Passphrase for encrypted key | — | ❌ | +| **SSH Password** | SSH password | - | ❌ | +| **SSH Private Key** | PEM-formatted private key | - | ❌ | +| **SSH Passphrase** | Passphrase for encrypted key | - | ❌ | ## Prerequisites @@ -71,7 +71,7 @@ mongosh **Install on the remote host:**
-Debian/Ubuntu — MongoDB Database Tools + mongosh +Debian/Ubuntu - MongoDB Database Tools + mongosh Add the official MongoDB repository first: ```bash diff --git a/wiki/user-guide/sources/mssql.md b/wiki/user-guide/sources/mssql.md index a10d7f81..b2932e4d 100644 --- a/wiki/user-guide/sources/mssql.md +++ b/wiki/user-guide/sources/mssql.md @@ -63,7 +63,7 @@ DBackup supports two modes to access the `.bak` files that SQL Server creates on ### Local Mode (Shared Volume) -Use this when DBackup and SQL Server share a filesystem — typically via Docker volume mounts or NFS shares. +Use this when DBackup and SQL Server share a filesystem - typically via Docker volume mounts or NFS shares. ```yaml services: @@ -245,7 +245,7 @@ This error occurs when the **SQL Server service account** (typically `mssql`) ca sudo chmod 770 /path/to/backup-dir ``` 2. **Docker**: Verify the volume mount exists and the container user has write permissions -3. Verify the backup directory exists on the SQL Server — it is **not** created automatically +3. Verify the backup directory exists on the SQL Server - it is **not** created automatically ### File Not Found After Backup (Local Mode) @@ -272,7 +272,7 @@ SSH connection failed: Authentication failed 4. For private key auth, verify the key is in PEM format 5. Check firewall rules allow SSH connections (port 22) -### SSH File Transfer Failed — Permission Denied (SSH Mode) +### SSH File Transfer Failed - Permission Denied (SSH Mode) ``` Failed to download /path/to/backup.bak: Permission denied @@ -280,22 +280,22 @@ Failed to download /path/to/backup.bak: Permission denied This is the most common SSH mode issue. The backup **succeeds** (SQL Server writes the `.bak` file), but the SSH/SFTP download **fails** because the SSH user cannot read the file. -**Why this happens:** SQL Server runs as the `mssql` service account and creates `.bak` files with restrictive permissions (typically `640`, owner `mssql:mssql`). 
Even if the backup directory has `777` permissions, the **file itself** is owned by `mssql` with limited access — your SSH user cannot read it. +**Why this happens:** SQL Server runs as the `mssql` service account and creates `.bak` files with restrictive permissions (typically `640`, owner `mssql:mssql`). Even if the backup directory has `777` permissions, the **file itself** is owned by `mssql` with limited access - your SSH user cannot read it. -**Solution 1 — Add SSH user to the `mssql` group** (recommended): +**Solution 1 - Add SSH user to the `mssql` group** (recommended): ```bash sudo usermod -aG mssql your-ssh-user ``` Log out and back in (or run `newgrp mssql`) for the change to take effect. -**Solution 2 — Set default ACL on the backup directory:** +**Solution 2 - Set default ACL on the backup directory:** ```bash sudo setfacl -d -m u:your-ssh-user:rwx /path/to/backup-dir sudo setfacl -m u:your-ssh-user:rwx /path/to/backup-dir ``` This ensures every new file created in the directory is automatically readable by your SSH user. -**Solution 3 — Change SQL Server's default file permissions:** +**Solution 3 - Change SQL Server's default file permissions:** ```bash sudo systemctl edit mssql-server ``` @@ -383,4 +383,4 @@ The restore process depends on the configured **File Transfer Mode**: 6. **Monitor backup duration** and adjust timeout 7. **Use encrypted connections** in production 8. **Separate backup user** from application user -9. **Enable Trust Server Certificate** only in development — use valid certs in production +9. 
**Enable Trust Server Certificate** only in development - use valid certs in production diff --git a/wiki/user-guide/sources/mysql.md b/wiki/user-guide/sources/mysql.md index 146c20f0..f02132db 100644 --- a/wiki/user-guide/sources/mysql.md +++ b/wiki/user-guide/sources/mysql.md @@ -23,10 +23,10 @@ Configure MySQL or MariaDB databases for backup using `mysqldump` / `mariadb-dum | **Connection Mode** | Direct (TCP) or SSH | `Direct` | ✅ | | **Host** | Database server hostname | `localhost` | ✅ | | **Port** | MySQL port | `3306` | ✅ | -| **User** | Database username | — | ✅ | -| **Password** | Database password | — | ❌ | +| **User** | Database username | - | ✅ | +| **Password** | Database password | - | ❌ | | **Database** | Database name(s) to backup | All databases | ❌ | -| **Additional Options** | Extra `mysqldump` flags | — | ❌ | +| **Additional Options** | Extra `mysqldump` flags | - | ❌ | | **Disable SSL** | Disable SSL for self-signed certificates | `false` | ❌ | ### SSH Mode Fields @@ -35,13 +35,13 @@ These fields appear when **Connection Mode** is set to **SSH**: | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **SSH Host** | SSH server hostname or IP | — | ✅ | +| **SSH Host** | SSH server hostname or IP | - | ✅ | | **SSH Port** | SSH server port | `22` | ❌ | -| **SSH Username** | SSH login username | — | ✅ | +| **SSH Username** | SSH login username | - | ✅ | | **SSH Auth Type** | Password, Private Key, or Agent | `Password` | ✅ | -| **SSH Password** | SSH password | — | ❌ | -| **SSH Private Key** | PEM-formatted private key | — | ❌ | -| **SSH Passphrase** | Passphrase for encrypted key | — | ❌ | +| **SSH Password** | SSH password | - | ❌ | +| **SSH Private Key** | PEM-formatted private key | - | ❌ | +| **SSH Passphrase** | Passphrase for encrypted key | - | ❌ | ## Prerequisites @@ -70,7 +70,7 @@ DBackup auto-detects which binary is available (`mysqldump` vs `mariadb-dump`, ` # Debian/Ubuntu (MySQL client) apt-get install 
default-mysql-client -# Debian/Ubuntu (MariaDB client — also provides mysqldump) +# Debian/Ubuntu (MariaDB client - also provides mysqldump) apt-get install mariadb-client # RHEL/CentOS/Fedora @@ -127,7 +127,7 @@ For backup-only operations, `SELECT`, `SHOW VIEW`, `TRIGGER`, and `LOCK TABLES` 3. Set Connection Mode to **SSH** 4. In the **SSH Connection** tab: enter SSH host, username, and authentication details 5. Click **Test SSH** to verify SSH connectivity -6. In the **Database** tab: enter MySQL host (usually `127.0.0.1` or `localhost` — relative to the SSH server), port, user, and password +6. In the **Database** tab: enter MySQL host (usually `127.0.0.1` or `localhost` - relative to the SSH server), port, user, and password 7. Click **Test Connection** to verify database connectivity via SSH 8. Click **Fetch Databases** and select databases 9. Save @@ -172,10 +172,10 @@ Use `mysql` as the hostname in DBackup. DBackup uses `mysqldump` (or `mariadb-dump` for MariaDB) with these default flags: -- `--single-transaction` — Consistent backup without locking (InnoDB) -- `--routines` — Includes stored procedures and functions -- `--triggers` — Includes triggers -- `--events` — Includes scheduled events +- `--single-transaction` - Consistent backup without locking (InnoDB) +- `--routines` - Includes stored procedures and functions +- `--triggers` - Includes triggers +- `--events` - Includes scheduled events Output: `.sql` file with `CREATE` and `INSERT` statements. @@ -190,7 +190,7 @@ In SSH mode, DBackup: 5. Applies compression and encryption locally on the DBackup server 6. Uploads the processed backup to the configured storage destination -The database password is passed securely via the `MYSQL_PWD` environment variable in the remote session — it does not appear in the process arguments or shell history. 
+The database password is passed securely via the `MYSQL_PWD` environment variable in the remote session - it does not appear in the process arguments or shell history. ### Multi-Database Backups diff --git a/wiki/user-guide/sources/postgresql.md b/wiki/user-guide/sources/postgresql.md index a274b66a..15949ce0 100644 --- a/wiki/user-guide/sources/postgresql.md +++ b/wiki/user-guide/sources/postgresql.md @@ -24,10 +24,10 @@ DBackup uses `pg_dump` from PostgreSQL 18 client, which is backward compatible w | **Connection Mode** | Direct (TCP) or SSH | `Direct` | ✅ | | **Host** | Database server hostname | `localhost` | ✅ | | **Port** | PostgreSQL port | `5432` | ✅ | -| **User** | Database username | — | ✅ | -| **Password** | Database password | — | ❌ | +| **User** | Database username | - | ✅ | +| **Password** | Database password | - | ❌ | | **Database** | Database name(s) to backup | All databases | ❌ | -| **Additional Options** | Extra `pg_dump` flags | — | ❌ | +| **Additional Options** | Extra `pg_dump` flags | - | ❌ | ### SSH Mode Fields @@ -35,13 +35,13 @@ These fields appear when **Connection Mode** is set to **SSH**: | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **SSH Host** | SSH server hostname or IP | — | ✅ | +| **SSH Host** | SSH server hostname or IP | - | ✅ | | **SSH Port** | SSH server port | `22` | ❌ | -| **SSH Username** | SSH login username | — | ✅ | +| **SSH Username** | SSH login username | - | ✅ | | **SSH Auth Type** | Password, Private Key, or Agent | `Password` | ✅ | -| **SSH Password** | SSH password | — | ❌ | -| **SSH Private Key** | PEM-formatted private key | — | ❌ | -| **SSH Passphrase** | Passphrase for encrypted key | — | ❌ | +| **SSH Password** | SSH password | - | ❌ | +| **SSH Private Key** | PEM-formatted private key | - | ❌ | +| **SSH Passphrase** | Passphrase for encrypted key | - | ❌ | ## Prerequisites diff --git a/wiki/user-guide/sources/redis.md b/wiki/user-guide/sources/redis.md index 
337caa94..fab0b316 100644 --- a/wiki/user-guide/sources/redis.md +++ b/wiki/user-guide/sources/redis.md @@ -31,14 +31,14 @@ DBackup uses `redis-cli --rdb` to download RDB snapshots. | **Connection Mode** | Direct (TCP) or SSH | `Direct` | ✅ | | **Host** | Redis server hostname or IP | `localhost` | ✅ | | **Port** | Redis server port | `6379` | ✅ | -| **Password** | Optional authentication password | — | ❌ | +| **Password** | Optional authentication password | - | ❌ | | **Database** | Database index (0-15) for display purposes | `0` | ❌ | -| **Username** | Redis 6+ ACL username | — | ❌ | +| **Username** | Redis 6+ ACL username | - | ❌ | | **TLS** | Enable TLS/SSL connection | `false` | ❌ | | **Mode** | Connection mode: `standalone` or `sentinel` | `standalone` | ❌ | -| **Sentinel Master Name** | Master name for Sentinel mode | — | ❌ | -| **Sentinel Nodes** | Comma-separated Sentinel node addresses | — | ❌ | -| **Additional Options** | Extra `redis-cli` flags | — | ❌ | +| **Sentinel Master Name** | Master name for Sentinel mode | - | ❌ | +| **Sentinel Nodes** | Comma-separated Sentinel node addresses | - | ❌ | +| **Additional Options** | Extra `redis-cli` flags | - | ❌ | ### SSH Mode Fields @@ -46,13 +46,13 @@ These fields appear when **Connection Mode** is set to **SSH**: | Field | Description | Default | Required | | :--- | :--- | :--- | :--- | -| **SSH Host** | SSH server hostname or IP | — | ✅ | +| **SSH Host** | SSH server hostname or IP | - | ✅ | | **SSH Port** | SSH server port | `22` | ❌ | -| **SSH Username** | SSH login username | — | ✅ | +| **SSH Username** | SSH login username | - | ✅ | | **SSH Auth Type** | Password, Private Key, or Agent | `Password` | ✅ | -| **SSH Password** | SSH password | — | ❌ | -| **SSH Private Key** | PEM-formatted private key | — | ❌ | -| **SSH Passphrase** | Passphrase for encrypted key | — | ❌ | +| **SSH Password** | SSH password | - | ❌ | +| **SSH Private Key** | PEM-formatted private key | - | ❌ | +| **SSH Passphrase** | 
Passphrase for encrypted key | - | ❌ | ## Example Configuration