Compare commits: 543ca6b471...main (27 commits)

| SHA |
|-----|
| ccbb5073f6 |
| c24def73fd |
| c0619dfb9b |
| da1074c8ad |
| ecbc636035 |
| 82bff2d341 |
| 6042dabc8e |
| c28ce9e3b8 |
| 3fd9e6b6a8 |
| 32433d6ac8 |
| 1116f2e17a |
| d2dffacb33 |
| fb4a51b24d |
| 5886622004 |
| e81e3f7fbb |
| 44e71fd3a5 |
| 40daf20809 |
| ed12f04549 |
| 1f527476e6 |
| 93b6cd136c |
| 880fa42f5b |
| 3f98fa4843 |
| f328e03812 |
| 26279f91e8 |
| c4c5a3c7bf |
| 5f7ef09ad5 |
| c157e14fa9 |

.gitignore (vendored, 2 changes)
@@ -1 +1,3 @@
CLAUDE.md
__pycache__/
*.pyc

README.md (new file, 210 lines)
@@ -0,0 +1,210 @@
# TrueMigration

A Python CLI tool for migrating TrueNAS configuration to a live destination system. Designed for systems integration teams working in pre-production deployment environments.

## What It Does

TrueMigration reads configuration from a source and re-creates it on a destination TrueNAS system via its WebSocket API. It also provides a destination audit mode for inspecting and cleaning up existing configuration before migration.

**Supported source types:**
- **TrueNAS debug archive** — the `.tgz` produced by **System → Save Debug** in the TrueNAS UI (SCALE and CORE)
- **CSV files** — customer-supplied spreadsheets for migrating from non-TrueNAS sources

**Supported migration types:**
- SMB shares
- NFS exports
- iSCSI (extents, initiator groups, portals, targets, target-extent associations)

## Requirements

- Python 3.9+
- No external packages — stdlib only

## Usage

### Interactive Mode (recommended)

Run with no arguments. The wizard will guide you through the full workflow.

```bash
python -m truenas_migrate
```

or

```bash
python deploy.py
```

At startup the wizard presents two top-level options:

```
1. Migrate configuration to a destination system
2. Audit destination system (view and manage existing config)
```

#### Option 1 — Migrate

Walks through:
1. Source selection (archive or CSV)
2. Destination host, port, and API key
3. Migration scope (SMB / NFS / iSCSI, or all)
4. iSCSI portal IP remapping (destination IPs differ from source; MPIO supported)
5. Check for existing iSCSI config — offers to remove it before migration
6. Per-share selection (choose a subset or migrate all)
7. Dry run preview — shows what will be created, flags missing datasets or zvols
8. Optional auto-creation of missing datasets and zvols
9. Final confirmation and live apply

#### Option 2 — Audit

Connects to the destination and displays a full inventory:
- SMB shares (name, path, enabled status)
- NFS exports
- iSCSI configuration (extents, initiators, portals, targets, associations)
- ZFS datasets (with space used)
- ZFS zvols (with allocated size)

After displaying the inventory, offers selective deletion by category. Deletion safeguards:
- SMB shares / NFS exports / iSCSI: standard `[y/N]` confirmation
- Zvols: requires typing `DELETE` — data is permanently destroyed
- Datasets: requires typing `DELETE` — all files and snapshots are permanently destroyed
- A final confirmation gate is shown before any deletions execute

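The typed-`DELETE` safeguard boils down to an exact-match check rather than a yes/no prompt. A minimal sketch of the idea (the helper name is illustrative, not the tool's actual API):

```python
def confirm_destructive(response: str) -> bool:
    """Return True only when the user typed the literal word DELETE.

    A y/N prompt is too easy to breeze past for irreversible actions,
    so zvol and dataset deletion demand the exact word instead.
    """
    return response.strip() == "DELETE"

confirm_destructive("DELETE")  # True  -> deletion proceeds
confirm_destructive("delete")  # False -> case must match exactly
confirm_destructive("y")       # False -> a casual "yes" is not enough
```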
### Command Line Mode — Archive Source

```bash
# Inspect the archive before doing anything
python -m truenas_migrate --debug-tar debug.tgz --list-archive

# Dry run — connects to destination but makes no changes
python -m truenas_migrate \
    --debug-tar debug.tgz \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx" \
    --dry-run

# Live migration (all types)
python -m truenas_migrate \
    --debug-tar debug.tgz \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx"

# Migrate only SMB shares
python -m truenas_migrate \
    --debug-tar debug.tgz \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx" \
    --migrate smb

# Migrate SMB and iSCSI, skip NFS
python -m truenas_migrate \
    --debug-tar debug.tgz \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx" \
    --migrate smb iscsi
```

### Command Line Mode — CSV Source

Fill in the provided template files and pass them on the command line. You can supply one or both.

```bash
# Dry run from CSV files
python -m truenas_migrate \
    --smb-csv smb_shares.csv \
    --nfs-csv nfs_shares.csv \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx" \
    --dry-run

# Live migration — SMB only from CSV
python -m truenas_migrate \
    --smb-csv smb_shares.csv \
    --dest 192.168.1.50 \
    --api-key "1-xxxxxxxxxxxx"
```

### CSV Templates

Copy and fill in the templates included in this repository:

| File | Purpose |
|------|---------|
| `smb_shares_template.csv` | One row per SMB share |
| `nfs_shares_template.csv` | One row per NFS export |

**SMB columns:** `Share Name` *(required)*, `Path` *(required)*, `Description`, `Purpose`, `Read Only`, `Browsable`, `Guest Access`, `Access-Based Enumeration`, `Hosts Allow`, `Hosts Deny`, `Time Machine`, `Enabled`

**NFS columns:** `Path` *(required)*, `Description`, `Read Only`, `Map Root User`, `Map Root Group`, `Map All User`, `Map All Group`, `Security`, `Allowed Hosts`, `Allowed Networks`, `Enabled`

Boolean columns accept `true` or `false`. List columns (`Hosts Allow`, `Hosts Deny`, `Security`, `Allowed Hosts`, `Allowed Networks`) accept space-separated values.

Valid `Purpose` values: `NO_PRESET`, `DEFAULT_SHARE`, `ENHANCED_TIMEMACHINE`, `MULTI_PROTOCOL_NFS`, `PRIVATE_DATASETS`, `WORM_DROPBOX`

Valid `Security` values: `SYS`, `KRB5`, `KRB5I`, `KRB5P`

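As a sketch of the cell conventions above (the helper names are illustrative, not the tool's internals), a boolean cell and a space-separated list cell can be interpreted like this:

```python
import csv
import io

def parse_bool(cell: str) -> bool:
    """A true/false cell; anything else (including empty) reads as False."""
    return cell.strip().lower() == "true"

def parse_list(cell: str) -> list[str]:
    """A space-separated list cell; empty cells yield an empty list."""
    return cell.split()

# One row in the smb_shares_template.csv layout (columns abbreviated):
raw = "Share Name,Read Only,Hosts Allow\nAccounting,false,192.168.1.10 192.168.1.11\n"
row = next(csv.DictReader(io.StringIO(raw)))

read_only = parse_bool(row["Read Only"])      # False
hosts_allow = parse_list(row["Hosts Allow"])  # two allowed hosts
```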
### Generating an API Key

In the TrueNAS UI: top-right account menu → **API Keys** → **Add**.

## iSCSI Migration Notes

iSCSI configuration involves relational objects with IDs that differ between systems. TrueMigration handles this automatically:

- **Creation order**: extents and initiator groups first (no dependencies), then portals, then targets (which reference portals and initiators), then target-extent associations
- **ID remapping**: old source IDs are mapped to new destination IDs as each object is created; downstream objects are updated accordingly
- **Portal IPs**: the wizard prompts for destination IP addresses for each portal. Enter multiple space-separated IPs for MPIO configurations
- **Zvols**: DISK-type extents reference ZFS zvols. The dry run checks whether the required zvols exist on the destination. If any are missing, the wizard prompts for their size and creates them before the live run
- **Existing config**: if the destination already has iSCSI objects, the wizard detects this and offers to remove them before migration begins

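The ID-remapping step above can be sketched as follows. This is a minimal illustration of the idea only; the function and field names are assumptions, not the tool's actual code:

```python
def create_with_id_map(source_objects, create, id_map):
    """Create each source object on the destination, recording old -> new IDs."""
    for obj in source_objects:
        payload = {k: v for k, v in obj.items() if k != "id"}
        id_map[obj["id"]] = create(payload)  # destination assigns a fresh ID

# Simulated destination that hands out its own IDs starting at 100:
counter = iter(range(100, 200))
id_map: dict = {}
create_with_id_map([{"id": 7, "name": "lun0"}], lambda payload: next(counter), id_map)

# A target-extent association that referenced extent 7 on the source
# is translated through the map before it is created:
assoc = {"target": 3, "extent": 7}
assoc["extent"] = id_map[assoc["extent"]]  # now the destination's ID
```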
## Conflict Policy

TrueMigration never overwrites or deletes existing configuration on the destination. Conflicts are skipped:

| Type | Conflict detected by |
|------|----------------------|
| SMB share | Share name (case-insensitive) |
| NFS export | Export path (exact match) |
| iSCSI extent | Extent name (case-insensitive) |
| iSCSI initiator group | Comment field (case-insensitive) |
| iSCSI portal | Set of listen IP addresses |
| iSCSI target | Target name (case-insensitive) |
| iSCSI target-extent | Target ID + LUN ID combination |

Always run with `--dry-run` first to preview what will and won't be created.

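For example, the SMB rule in the table reduces to a case-insensitive name lookup. A sketch (the helper is illustrative, not the tool's API):

```python
def smb_name_conflict(new_name: str, existing_shares: list) -> bool:
    """True when a share with the same name (ignoring case) already exists."""
    taken = {share["name"].lower() for share in existing_shares}
    return new_name.lower() in taken

existing = [{"name": "Accounting"}, {"name": "Public"}]
smb_name_conflict("ACCOUNTING", existing)  # True  -> share is skipped
smb_name_conflict("Media", existing)       # False -> share is created
```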
## Archive Compatibility

| Source version | Archive format | Notes |
|----------------|----------------|-------|
| SCALE 24.04+ | ixdiagnose (lowercase dirs) | Combined JSON plugin files |
| SCALE (older) | ixdiagnose (uppercase dirs) | Per-query JSON files |
| CORE | freenas-debug / fndebug | Plain-text dumps with embedded JSON |
| HA bundles (25.04+) | Outer .tgz + inner .txz per node | Active node archive selected automatically |

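The format differences above make the archive flavor detectable from member paths alone. A hedged sketch (the marker path is taken from the table; the function itself is illustrative):

```python
CORE_MARKER = "ixdiagnose/fndebug"

def detect_flavor(member_paths: list) -> str:
    """Classify a debug archive by the paths of its members."""
    paths = [p.lstrip("./") for p in member_paths]
    if any(p == CORE_MARKER or p.startswith(CORE_MARKER + "/") for p in paths):
        return "CORE"
    if any(p.lower().endswith(".txz") for p in paths):
        return "HA bundle"  # outer .tgz wrapping one .txz per node
    return "SCALE"

detect_flavor(["ixdiagnose/plugins/smb/smb_info.json"])  # "SCALE"
detect_flavor(["ixdiagnose/fndebug/SMB/dump.txt"])       # "CORE"
```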
## Project Structure

```
deploy.py                    # Entry point shim
smb_shares_template.csv      # SMB CSV template
nfs_shares_template.csv      # NFS CSV template
truenas_migrate/
    __main__.py              # python -m truenas_migrate entry point
    colors.py                # ANSI color helpers and shared logger
    summary.py               # Migration summary dataclass and report
    archive.py               # Debug archive parser (SCALE + CORE)
    csv_source.py            # CSV parser for non-TrueNAS sources
    client.py                # TrueNAS WebSocket API client and utilities
    migrate.py               # SMB, NFS, and iSCSI migration routines
    cli.py                   # Interactive wizard and argument parser
```

## Safety Notes

- **Never destructive by default** — the migration path only creates, never modifies or deletes existing destination config
- **Dry run first** — always preview with `--dry-run` before applying changes
- **Audit deletions require explicit confirmation** — zvol and dataset deletion requires typing `DELETE` and a final confirmation gate
- SSL certificate verification is disabled by default (TrueNAS systems commonly use self-signed certs). Use `--verify-ssl` to enable it
- Targets the TrueNAS 25.04+ WebSocket API endpoint (`wss://<host>/api/current`)
- Exit code `2` is returned if any errors occurred during migration

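In automation, that exit code can be checked programmatically. A minimal sketch (the `-c` stub stands in for the real invocation; replace it with `["python", "-m", "truenas_migrate", ...]` in practice):

```python
import subprocess
import sys

# Stand-in command that exits with code 2, as the tool does on errors.
proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(2)"])

if proc.returncode == 2:
    print("migration finished with errors; review the log")
elif proc.returncode != 0:
    print("migration failed to run")
```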

deploy.py (new executable file, 6 lines)
@@ -0,0 +1,6 @@
#!/usr/bin/env python3
"""Compatibility shim – delegates to the truenas_migrate package."""
from truenas_migrate.cli import main

if __name__ == "__main__":
    main()

nfs_shares_template.csv (new file, 3 lines)
@@ -0,0 +1,3 @@
Path,Description,Read Only,Map Root User,Map Root Group,Map All User,Map All Group,Security,Allowed Hosts,Allowed Networks,Enabled
/mnt/tank/data,Primary data export,false,root,wheel,,,SYS,,192.168.1.0/24,true
/mnt/tank/media,Media files read-only,true,,,,,,,,true

smb_shares_template.csv (new file, 3 lines)
@@ -0,0 +1,3 @@
Share Name,Path,Description,Purpose,Read Only,Browsable,Guest Access,Access-Based Enumeration,Hosts Allow,Hosts Deny,Time Machine,Enabled
Accounting,/mnt/tank/accounting,Accounting department files,NO_PRESET,false,true,false,false,,,false,true
Public,/mnt/tank/public,Public read-only share,NO_PRESET,true,true,true,false,,,false,true

truenas_migrate.py (1396 lines): diff suppressed because it is too large

truenas_migrate/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
# truenas_migrate package

truenas_migrate/__main__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from .cli import main

main()

truenas_migrate/archive.py (new file, 378 lines)
@@ -0,0 +1,378 @@
"""TrueNAS debug archive parser (SCALE ixdiagnose and CORE fndebug layouts)."""
from __future__ import annotations

import contextlib
import json
import sys
import tarfile
from typing import Any, Optional

from .colors import log

# ─────────────────────────────────────────────────────────────────────────────
# Archive layout constants
# ─────────────────────────────────────────────────────────────────────────────
#
# TrueNAS SCALE generates debug archives with the "ixdiagnose" tool.
# The internal layout has changed across versions:
#
#   SCALE 24.04+ (plugins layout, lowercase dirs, combined JSON files)
#       ixdiagnose/plugins/smb/smb_info.json   – SMB shares + config combined
#       ixdiagnose/plugins/nfs/nfs_config.json – NFS shares + config combined
#
#   Older SCALE (plugins layout, uppercase dirs, per-query JSON files)
#       ixdiagnose/plugins/SMB/sharing.smb.query.json
#       ixdiagnose/plugins/NFS/sharing.nfs.query.json
#       ixdiagnose/plugins/Sharing/sharing.smb.query.json
#       ixdiagnose/plugins/Sharing/sharing.nfs.query.json
#
# TrueNAS CORE uses the "freenas-debug" tool (stored as "fndebug" inside the
# archive). It produces plain-text dump files with embedded JSON blocks.

_CANDIDATES: dict[str, list[str]] = {
    "smb_shares": [
        "ixdiagnose/plugins/smb/smb_info.json",
        "ixdiagnose/plugins/SMB/sharing.smb.query.json",
        "ixdiagnose/plugins/Sharing/sharing.smb.query.json",
        "ixdiagnose/SMB/sharing.smb.query.json",
    ],
    "nfs_shares": [
        "ixdiagnose/plugins/nfs/nfs_config.json",
        "ixdiagnose/plugins/NFS/sharing.nfs.query.json",
        "ixdiagnose/plugins/Sharing/sharing.nfs.query.json",
        "ixdiagnose/NFS/sharing.nfs.query.json",
    ],
    "iscsi": [
        "ixdiagnose/plugins/iscsi/iscsi_config.json",
        "ixdiagnose/plugins/ISCSI/iscsi_config.json",
    ],
}

# When a candidate file bundles multiple datasets, pull out the right sub-key.
_KEY_WITHIN_FILE: dict[str, str] = {
    "smb_shares": "sharing_smb_query",
    "nfs_shares": "sharing_nfs_query",
    # "iscsi" intentionally omitted — iscsi_config.json is used as-is
}

# Keyword fragments for heuristic fallback scan (SCALE archives only)
_KEYWORDS: dict[str, list[str]] = {
    "smb_shares": ["sharing.smb", "smb_share", "sharing/smb", "smb_info"],
    "nfs_shares": ["sharing.nfs", "nfs_share", "sharing/nfs", "nfs_config"],
    "iscsi": ["iscsi_config", "iscsi/iscsi"],
}

# Presence of this path prefix identifies a TrueNAS CORE archive.
_CORE_MARKER = "ixdiagnose/fndebug"

# ─────────────────────────────────────────────────────────────────────────────
# Internal helpers
# ─────────────────────────────────────────────────────────────────────────────

def _members_map(tf: tarfile.TarFile) -> dict[str, tarfile.TarInfo]:
    """Return {normalised_path: TarInfo} for every member."""
    return {m.name.lstrip("./"): m for m in tf.getmembers()}


def _read_json(tf: tarfile.TarFile, info: tarfile.TarInfo) -> Optional[Any]:
    """Extract and JSON-parse one archive member. Returns None on any error."""
    try:
        fh = tf.extractfile(info)
        if fh is None:
            return None
        raw = fh.read().decode("utf-8", errors="replace").strip()
        return json.loads(raw) if raw else None
    except Exception as exc:
        log.debug("Could not parse %s: %s", info.name, exc)
        return None


def _extract_subkey(raw: Any, data_type: str) -> Optional[Any]:
    """Pull out the relevant sub-key when a JSON file bundles multiple datasets."""
    if not isinstance(raw, dict):
        return raw
    key = _KEY_WITHIN_FILE.get(data_type)
    if key and key in raw:
        return raw[key]
    return raw


def _find_data(
    tf: tarfile.TarFile,
    members: dict[str, tarfile.TarInfo],
    data_type: str,
) -> Optional[Any]:
    """Try candidate paths, then keyword heuristics. Return parsed JSON or None."""

    # Pass 1 – exact / suffix match against known candidate paths
    for candidate in _CANDIDATES[data_type]:
        norm = candidate.lstrip("./")
        info = members.get(norm)
        if info is None:
            # Archive may have a date-stamped top-level directory
            for path, member in members.items():
                if path == norm or path.endswith("/" + norm):
                    info = member
                    break
        if info is not None:
            raw = _read_json(tf, info)
            result = _extract_subkey(raw, data_type)
            if result is not None:
                log.info("  %-12s → %s", data_type, info.name)
                return result

    # Pass 2 – keyword heuristic scan over all .json members
    log.debug("  %s: candidates missed, scanning archive …", data_type)
    keywords = _KEYWORDS[data_type]
    for path in sorted(members):
        if not path.lower().endswith(".json"):
            continue
        if any(kw in path.lower() for kw in keywords):
            raw = _read_json(tf, members[path])
            result = _extract_subkey(raw, data_type)
            if result is not None:
                log.info("  %-12s → %s (heuristic)", data_type, path)
                return result

    return None

def _extract_core_dump_json(dump_text: str, title_fragment: str) -> list[Any]:
    """
    Extract all top-level JSON values from a named section of a CORE dump.txt.

    CORE dump sections look like:
        +--------...--------+
        +   SECTION TITLE   +
        +--------...--------+
        <content>
        debug finished in N seconds for SECTION TITLE

    Returns a list of parsed JSON values found in the content block, in order.
    """
    import re as _re

    parts = _re.split(r'\+[-]{20,}\+', dump_text)
    for i, part in enumerate(parts):
        if title_fragment.lower() in part.lower() and i + 1 < len(parts):
            content = parts[i + 1]
            content = _re.sub(
                r'debug finished.*', '', content,
                flags=_re.IGNORECASE | _re.DOTALL,
            ).strip()

            results: list[Any] = []
            decoder = json.JSONDecoder()
            pos = 0
            while pos < len(content):
                remaining = content[pos:].lstrip()
                if not remaining or remaining[0] not in "{[":
                    break
                pos += len(content[pos:]) - len(remaining)
                try:
                    val, end = decoder.raw_decode(remaining)
                    results.append(val)
                    pos += end
                except json.JSONDecodeError:
                    break
            return results

    return []

def _parse_core_into(
    tf: tarfile.TarFile,
    members: dict[str, tarfile.TarInfo],
    result: dict[str, Any],
) -> None:
    """Populate *result* from TrueNAS CORE fndebug dump files."""
    log.info("TrueNAS CORE archive detected; parsing fndebug dump files.")

    smb_key = "ixdiagnose/fndebug/SMB/dump.txt"
    if smb_key in members:
        fh = tf.extractfile(members[smb_key])
        dump = fh.read().decode("utf-8", errors="replace")  # type: ignore[union-attr]
        vals = _extract_core_dump_json(dump, "Database Dump")
        if len(vals) >= 2 and isinstance(vals[1], list):
            result["smb_shares"] = vals[1]
            log.info("  smb_shares → %s (CORE, %d share(s))", smb_key, len(vals[1]))
        elif vals:
            log.warning("  smb_shares → NOT FOUND in Database Dump")
    else:
        log.warning("  SMB dump not found: %s", smb_key)

    nfs_key = "ixdiagnose/fndebug/NFS/dump.txt"
    if nfs_key in members:
        fh = tf.extractfile(members[nfs_key])
        dump = fh.read().decode("utf-8", errors="replace")  # type: ignore[union-attr]
        vals = _extract_core_dump_json(dump, "Configuration")
        if len(vals) >= 2 and isinstance(vals[1], list):
            result["nfs_shares"] = vals[1]
            log.info("  nfs_shares → %s (CORE, %d share(s))", nfs_key, len(vals[1]))
        else:
            log.warning("  nfs_shares → NOT FOUND in Configuration")
    else:
        log.warning("  NFS dump not found: %s", nfs_key)

    if not result["smb_shares"] and not result["nfs_shares"]:
        log.warning(
            "No share data found in CORE archive. "
            "This is expected when SMB/NFS services were disabled on the source system."
        )


@contextlib.contextmanager
def _open_source_tar(tar_path: str):
    """
    Open the archive that actually contains the ixdiagnose data.

    TrueNAS HA debug bundles (25.04+) wrap each node's ixdiagnose snapshot
    in a separate .txz inside the outer .tgz. We prefer the member whose
    name includes '_active'; if none is labelled that way we fall back to the
    first .txz found. Single-node (non-HA) bundles are used directly.
    """
    with tarfile.open(tar_path, "r:*") as outer:
        txz_members = [
            m for m in outer.getmembers()
            if m.name.lower().endswith(".txz") and m.isfile()
        ]
        if not txz_members:
            yield outer
            return

        active = next(
            (m for m in txz_members if "_active" in m.name.lower()),
            txz_members[0],
        )
        log.info("  HA bundle detected; reading inner archive: %s", active.name)
        fh = outer.extractfile(active)
        with tarfile.open(fileobj=fh, mode="r:*") as inner:
            yield inner

# ─────────────────────────────────────────────────────────────────────────────
# Public API
# ─────────────────────────────────────────────────────────────────────────────

def parse_archive(tar_path: str) -> dict[str, Any]:
    """
    Extract SMB shares, NFS shares, and iSCSI configuration from the debug archive.
    Returns: {"smb_shares": list, "nfs_shares": list, "iscsi": dict}
    """
    log.info("Opening archive: %s", tar_path)
    result: dict[str, Any] = {
        "smb_shares": [],
        "nfs_shares": [],
        "iscsi": {},
    }

    try:
        with _open_source_tar(tar_path) as tf:
            members = _members_map(tf)
            log.info("  Archive contains %d total entries.", len(members))

            is_core = any(
                p == _CORE_MARKER or p.startswith(_CORE_MARKER + "/")
                for p in members
            )

            if is_core:
                _parse_core_into(tf, members, result)
            else:
                for key in ("smb_shares", "nfs_shares"):
                    data = _find_data(tf, members, key)
                    if data is None:
                        log.warning("  %-12s → NOT FOUND", key)
                        continue

                    if isinstance(data, list):
                        result[key] = data
                    elif isinstance(data, dict):
                        # Some versions wrap the list: {"result": [...]}
                        for v in data.values():
                            if isinstance(v, list):
                                result[key] = v
                                break

                # iSCSI — combined dict file, not a bare list
                iscsi_raw = _find_data(tf, members, "iscsi")
                if iscsi_raw and isinstance(iscsi_raw, dict):
                    result["iscsi"] = {
                        "global_config": iscsi_raw.get("global_config", {}),
                        "portals": iscsi_raw.get("portals", []),
                        "initiators": iscsi_raw.get("initiators", []),
                        "targets": iscsi_raw.get("targets", []),
                        "extents": iscsi_raw.get("extents", []),
                        "targetextents": iscsi_raw.get("targetextents", []),
                    }
                elif iscsi_raw is not None:
                    log.warning("  iscsi → unexpected format (expected dict)")

    except (tarfile.TarError, OSError) as exc:
        log.error("Failed to open archive: %s", exc)
        sys.exit(1)

    iscsi = result["iscsi"]
    log.info(
        "Parsed: %d SMB share(s), %d NFS share(s), "
        "iSCSI: %d target(s) / %d extent(s) / %d portal(s)",
        len(result["smb_shares"]),
        len(result["nfs_shares"]),
        len(iscsi.get("targets", [])),
        len(iscsi.get("extents", [])),
        len(iscsi.get("portals", [])),
    )
    return result


def list_archive_and_exit(tar_path: str) -> None:
    """
    Print a structured listing of the archive contents, then exit.
    For SCALE archives: lists all .json plugin files.
    For CORE archives: lists the fndebug dump files and the JSON sections
    that contain share data.
    """
    try:
        with _open_source_tar(tar_path) as tf:
            members_map = _members_map(tf)
            is_core = any(
                p == _CORE_MARKER or p.startswith(_CORE_MARKER + "/")
                for p in members_map
            )

            if is_core:
                print(f"\nTrueNAS CORE archive: {tar_path}\n")
                print("  fndebug plain-text dump files (JSON is embedded inside):\n")
                dump_files = sorted(
                    p for p in members_map
                    if p.startswith(_CORE_MARKER + "/") and p.endswith(".txt")
                )
                for p in dump_files:
                    size = members_map[p].size / 1024
                    print(f"    {p} ({size:.1f} KB)")
                print()
                print("  Data this tool will extract:")
                print("    SMB shares → fndebug/SMB/dump.txt (\"Database Dump\" section)")
                print("    NFS shares → fndebug/NFS/dump.txt (\"Configuration\" section)")
            else:
                print(f"\nJSON plugin files in archive: {tar_path}\n")
                json_members = sorted(
                    (m for m in tf.getmembers() if m.name.endswith(".json")),
                    key=lambda m: m.name,
                )
                if not json_members:
                    print("  (no .json files found)")
                else:
                    current_dir = ""
                    for m in json_members:
                        parts = m.name.lstrip("./").split("/")
                        top = "/".join(parts[:-1]) if len(parts) > 1 else ""
                        if top != current_dir:
                            print(f"\n  {top or '(root)'}/")
                            current_dir = top
                        print(f"    {parts[-1]} ({m.size / 1024:.1f} KB)")
    except (tarfile.TarError, OSError) as exc:
        sys.exit(f"ERROR: {exc}")
    print()
    sys.exit(0)

truenas_migrate/cli.py (new file, 954 lines)
@@ -0,0 +1,954 @@
"""
truenas_migrate – TrueNAS Share Migration Tool
==============================================
Reads SMB shares and NFS shares from either a TrueNAS debug archive (.tar / .tgz)
or customer-supplied CSV files, then re-creates them on a destination TrueNAS
system via the JSON-RPC 2.0 WebSocket API (TrueNAS 25.04+).

SAFE BY DEFAULT
    • Existing shares are never overwritten or deleted.
    • Always run with --dry-run first to preview what will happen.

REQUIREMENTS
    Python 3.9+ (stdlib only – no external packages needed)

QUICK START — Archive source
    # 1. Inspect your debug archive to confirm it contains the data you need:
    python -m truenas_migrate --debug-tar debug.tgz --list-archive

    # 2. Dry-run – connect to destination but make zero changes:
    python -m truenas_migrate \\
        --debug-tar debug.tgz \\
        --dest 192.168.1.50 \\
        --api-key "1-xxxxxxxxxxxx" \\
        --dry-run

    # 3. Live migration:
    python -m truenas_migrate \\
        --debug-tar debug.tgz \\
        --dest 192.168.1.50 \\
        --api-key "1-xxxxxxxxxxxx"

QUICK START — CSV source
    # Fill in smb_shares_template.csv / nfs_shares_template.csv, then:
    python -m truenas_migrate \\
        --smb-csv smb_shares.csv \\
        --nfs-csv nfs_shares.csv \\
        --dest 192.168.1.50 \\
        --api-key "1-xxxxxxxxxxxx" \\
        --dry-run

CONFLICT POLICY
    Shares that already exist on the destination are silently skipped:
        SMB – matched by share name (case-insensitive)
        NFS – matched by export path (exact match)
"""
from __future__ import annotations

import argparse
import asyncio
import getpass
import logging
import sys
from pathlib import Path
from typing import Optional

from .archive import parse_archive, list_archive_and_exit
from .client import (
    TrueNASClient,
    check_dataset_paths, create_missing_datasets,
    check_iscsi_zvols, create_missing_zvols,
    query_destination_inventory,
    delete_smb_shares, delete_nfs_exports, delete_zvols, delete_datasets,
)
from .colors import log, _bold, _bold_cyan, _bold_green, _bold_red, _bold_yellow, _cyan, _dim, _green, _yellow
from .csv_source import parse_csv_sources
from .migrate import migrate_smb_shares, migrate_nfs_shares, migrate_iscsi, query_existing_iscsi, clear_iscsi_config
from .summary import Summary

# ─────────────────────────────────────────────────────────────────────────────
# CLI orchestration
# ─────────────────────────────────────────────────────────────────────────────

async def run(
    args: argparse.Namespace,
    archive: Optional[dict] = None,
) -> Summary:
    if archive is None:
        smb_csv = getattr(args, "smb_csv", None)
        nfs_csv = getattr(args, "nfs_csv", None)
        if smb_csv or nfs_csv:
            archive = parse_csv_sources(smb_csv, nfs_csv)
        else:
            archive = parse_archive(args.debug_tar)

    migrate_set = set(args.migrate)

    if args.dry_run:
        msg = " DRY RUN – no changes will be made on the destination "
        bar = _bold_yellow("─" * len(msg))
        print(f"\n{_bold_yellow('┌')}{bar}{_bold_yellow('┐')}", file=sys.stderr)
        print(f"{_bold_yellow('│')}{_bold_yellow(msg)}{_bold_yellow('│')}", file=sys.stderr)
        print(f"{_bold_yellow('└')}{bar}{_bold_yellow('┘')}\n", file=sys.stderr)

    summary = Summary()

    async with TrueNASClient(
        host=args.dest,
        port=args.port,
        api_key=args.api_key,
        verify_ssl=args.verify_ssl,
    ) as client:

        if "smb" in migrate_set:
            await migrate_smb_shares(
                client, archive["smb_shares"], args.dry_run, summary)

        if "nfs" in migrate_set:
            await migrate_nfs_shares(
                client, archive["nfs_shares"], args.dry_run, summary)

        if "iscsi" in migrate_set:
            await migrate_iscsi(
                client, archive.get("iscsi", {}), args.dry_run, summary)

        if args.dry_run and summary.paths_to_create:
            summary.missing_datasets = await check_dataset_paths(
                client, summary.paths_to_create,
            )

        if args.dry_run and summary.zvols_to_check:
            summary.missing_zvols = await check_iscsi_zvols(
                client, summary.zvols_to_check,
            )

    return summary

# ─────────────────────────────────────────────────────────────────────────────
|
||||
# Interactive wizard helpers
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
def _parse_size(s: str) -> int:
|
||||
"""Parse a human-friendly size string to bytes. E.g. '100G', '500GiB', '1T'."""
|
||||
s = s.strip().upper()
|
||||
for suffix, mult in [
|
||||
("PIB", 1 << 50), ("PB", 1 << 50), ("P", 1 << 50),
|
||||
("TIB", 1 << 40), ("TB", 1 << 40), ("T", 1 << 40),
|
||||
("GIB", 1 << 30), ("GB", 1 << 30), ("G", 1 << 30),
|
||||
("MIB", 1 << 20), ("MB", 1 << 20), ("M", 1 << 20),
|
||||
("KIB", 1 << 10), ("KB", 1 << 10), ("K", 1 << 10),
|
||||
]:
|
||||
if s.endswith(suffix):
|
||||
try:
|
||||
return int(float(s[:-len(suffix)]) * mult)
|
||||
except ValueError:
|
||||
pass
|
||||
return int(s) # plain bytes
|
||||
|
||||
|
||||
def _fmt_bytes(n: int) -> str:
|
||||
"""Format a byte count as a human-readable string."""
|
||||
for suffix, div in [("TiB", 1 << 40), ("GiB", 1 << 30), ("MiB", 1 << 20), ("KiB", 1 << 10)]:
|
||||
if n >= div:
|
||||
return f"{n / div:.1f} {suffix}"
|
||||
return f"{n} B"
|
||||
|
||||
|
||||
def _find_debug_archives(directory: str = ".") -> list[Path]:
    """Return sorted list of TrueNAS debug archives found in *directory*."""
    patterns = ("*.tgz", "*.tar.gz", "*.tar", "*.txz", "*.tar.xz")
    found: set[Path] = set()
    for pat in patterns:
        found.update(Path(directory).glob(pat))
    return sorted(found)

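The set deduplicates paths that match more than one glob pattern, and sorting keeps the menu order stable. A quick demonstration in a throwaway directory (file names here are hypothetical):

```python
import tempfile
from pathlib import Path

def find_debug_archives(directory: str = ".") -> list[Path]:
    # Standalone copy of _find_debug_archives above, for illustration only.
    patterns = ("*.tgz", "*.tar.gz", "*.tar", "*.txz", "*.tar.xz")
    found: set[Path] = set()
    for pat in patterns:
        found.update(Path(directory).glob(pat))
    return sorted(found)

with tempfile.TemporaryDirectory() as tmp:
    for name in ("nas-a-debug.tgz", "nas-b-debug.tar.gz", "notes.txt"):
        (Path(tmp) / name).touch()
    names = [p.name for p in find_debug_archives(tmp)]

# Only archive extensions are picked up; notes.txt is ignored.
assert names == ["nas-a-debug.tgz", "nas-b-debug.tar.gz"]
```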
def _prompt(label: str, default: str = "") -> str:
    suffix = f" [{default}]" if default else ""
    try:
        val = input(f"{label}{suffix}: ").strip()
        return val if val else default
    except (EOFError, KeyboardInterrupt):
        print()
        sys.exit(0)


def _confirm(label: str) -> bool:
    try:
        return input(f"{label} [y/N]: ").strip().lower() in ("y", "yes")
    except (EOFError, KeyboardInterrupt):
        print()
        return False


def _prompt_csv_path(share_type: str) -> Optional[str]:
    """Prompt for a CSV file path. Returns resolved path string or None if skipped."""
    template = f"{share_type.lower()}_shares_template.csv"
    print(f" {_dim('(template: ' + template + ')')}")
    while True:
        raw = _prompt(f" {share_type} shares CSV path (Enter to skip)")
        if not raw:
            return None
        p = Path(raw)
        if p.is_file():
            return str(p)
        print(f" {_bold_red('File not found:')} {raw}")

def _prompt_iscsi_portals(iscsi: dict) -> None:
    """Walk each portal and prompt for destination IPs in-place."""
    portals = iscsi.get("portals", [])
    if not portals:
        return

    print(f"\n {_bold('iSCSI Portal Configuration')}")
    print(f" {_dim('Portal IP addresses are unique per system and must be updated.')}")
    print(f" {_dim('For MPIO, enter multiple IPs separated by spaces.')}")

    for portal in portals:
        comment = portal.get("comment", "")
        listen = portal.get("listen", [])
        src_ips = " ".join(l["ip"] for l in listen)

        label = f"Portal {portal['id']}" + (f" ({comment!r})" if comment else "")
        print(f"\n {_bold(label)}")
        print(f" {_dim('Source IP(s):')} {src_ips}")

        raw = _prompt(" Destination IP(s)").strip()
        if not raw:
            print(f" {_yellow('⚠')} No IPs entered — keeping source IPs.")
            continue

        dest_ips = raw.split()
        portal["listen"] = [{"ip": ip} for ip in dest_ips]
        print(f" {_green('✓')} Portal: {', '.join(dest_ips)}")
    print()

def _prompt_clear_existing_iscsi(host: str, port: int, api_key: str) -> None:
    """
    Check whether the destination already has iSCSI configuration.
    If so, summarise what exists and offer to remove it before migration.
    """
    async def _check():
        async with TrueNASClient(host=host, port=port, api_key=api_key, verify_ssl=False) as client:
            return await query_existing_iscsi(client)

    existing = asyncio.run(_check())
    counts = {k: len(v) for k, v in existing.items()}
    total = sum(counts.values())
    if total == 0:
        return

    print(f"\n {_bold_yellow('WARNING:')} Destination already has iSCSI configuration:")
    labels = [
        ("extents", "extent(s)"),
        ("initiators", "initiator group(s)"),
        ("portals", "portal(s)"),
        ("targets", "target(s)"),
        ("targetextents", "target-extent association(s)"),
    ]
    for key, label in labels:
        n = counts[key]
        if n:
            print(f" • {n} {label}")
    print()
    print(f" {_dim('Keep existing: new objects will be skipped if conflicts are detected.')}")
    print(f" {_dim('Remove existing: ALL iSCSI config will be deleted before migration.')}")
    print()

    raw = _prompt(" [K]eep existing / [R]emove all existing iSCSI config", default="K")
    if raw.strip().lower().startswith("r"):
        if _confirm(f" Remove ALL {total} iSCSI object(s) from {host}?"):
            async def _clear():
                async with TrueNASClient(host=host, port=port, api_key=api_key, verify_ssl=False) as client:
                    await clear_iscsi_config(client)
            print()
            asyncio.run(_clear())
            print(f" {_bold_cyan('✓')} iSCSI configuration cleared.\n")
        else:
            print(f" {_yellow('–')} Removal cancelled — keeping existing config.\n")
    else:
        print(f" {_dim('Keeping existing iSCSI configuration.')}\n")

def _select_shares(shares: list[dict], share_type: str) -> list[dict]:
    """
    Display a numbered list of *shares* and return only those the user selects.
    Enter (or 'all') returns all shares unchanged. 'n' / 'none' returns [].
    """
    if not shares:
        return shares

    print(f"\n {_bold(f'{share_type} shares ({len(shares)}):')} \n")
    for i, share in enumerate(shares, 1):
        if share_type == "SMB":
            name = share.get("name", "<unnamed>")
            path = share.get("path", "")
            print(f" {_cyan(str(i) + '.')} {name:<22} {_dim(path)}")
        else:  # NFS
            pl = share.get("paths") or []
            path = share.get("path") or (pl[0] if pl else "")
            extra = f" {_dim('+ ' + str(len(pl) - 1) + ' more')}" if len(pl) > 1 else ""
            print(f" {_cyan(str(i) + '.')} {path}{extra}")

    print()
    raw = _prompt(
        f" Select {share_type} shares to migrate "
        "(e.g. '1 3', Enter = all, 'n' = none)",
        default="all",
    )

    low = raw.strip().lower()
    if low in ("", "all"):
        print(f" {_green('✓')} All {len(shares)} {share_type} share(s) selected.")
        return shares
    if low in ("n", "none", "0"):
        print(f" {_yellow('–')} No {share_type} shares selected.")
        return []

    seen: set[int] = set()
    selected: list[dict] = []
    for tok in raw.split():
        if tok.isdigit():
            idx = int(tok) - 1
            if 0 <= idx < len(shares) and idx not in seen:
                seen.add(idx)
                selected.append(shares[idx])

    if selected:
        print(f" {_green('✓')} {len(selected)} of {len(shares)} {share_type} share(s) selected.")
    else:
        print(f" {_yellow('–')} No valid selections; skipping {share_type} shares.")
    return selected

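The token-parsing step inside `_select_shares` silently drops non-numeric tokens, out-of-range indices, and duplicates. That behaviour can be exercised without any prompting via a hypothetical standalone helper (`parse_selection` is not part of the tool; the real function returns share dicts rather than indices):

```python
def parse_selection(raw: str, n: int) -> list[int]:
    # Sketch of the index-parsing loop in _select_shares above.
    # Tokens are 1-based on input; the result is 0-based indices.
    seen: set[int] = set()
    out: list[int] = []
    for tok in raw.split():
        if tok.isdigit():
            idx = int(tok) - 1
            if 0 <= idx < n and idx not in seen:
                seen.add(idx)
                out.append(idx)
    return out

assert parse_selection("1 3 3 9", 5) == [0, 2]   # duplicate and out-of-range dropped
assert parse_selection("x 2", 5) == [1]          # non-numeric token ignored
assert parse_selection("", 5) == []
```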
# ─────────────────────────────────────────────────────────────────────────────
# Destination audit wizard
# ─────────────────────────────────────────────────────────────────────────────

def _print_inventory_report(host: str, inv: dict) -> None:
    """Print a structured inventory of all configuration on the destination."""
    smb = inv.get("smb_shares", [])
    nfs = inv.get("nfs_exports", [])
    ds = inv.get("datasets", [])
    zvols = inv.get("zvols", [])
    ext = inv.get("iscsi_extents", [])
    init = inv.get("iscsi_initiators", [])
    portals = inv.get("iscsi_portals", [])
    tgt = inv.get("iscsi_targets", [])
    te = inv.get("iscsi_targetextents", [])

    header = f"DESTINATION INVENTORY: {host}"
    rule = _bold_cyan("─" * (len(header) + 4))
    print(f"\n {rule}")
    print(f" {_bold_cyan('│')} {_bold(header)} {_bold_cyan('│')}")
    print(f" {rule}")

    # SMB
    if smb:
        print(f"\n {_bold(f'SMB Shares ({len(smb)})')}")
        for s in smb:
            name = s.get("name", "<unnamed>")
            path = s.get("path", "")
            enabled = "" if s.get("enabled", True) else _dim(" [disabled]")
            print(f" {_cyan('•')} {name:<24} {_dim(path)}{enabled}")
    else:
        print(f"\n {_dim('SMB Shares: none')}")

    # NFS
    if nfs:
        print(f"\n {_bold(f'NFS Exports ({len(nfs)})')}")
        for n in nfs:
            path = n.get("path", "<no path>")
            enabled = "" if n.get("enabled", True) else _dim(" [disabled]")
            print(f" {_cyan('•')} {path}{enabled}")
    else:
        print(f"\n {_dim('NFS Exports: none')}")

    # iSCSI
    has_iscsi = any([ext, init, portals, tgt, te])
    if has_iscsi:
        iscsi_total = len(ext) + len(init) + len(portals) + len(tgt) + len(te)
        print(f"\n {_bold(f'iSCSI Configuration ({iscsi_total} objects)')}")
        if ext:
            print(f" {_bold('Extents')} ({len(ext)}):")
            for e in ext:
                kind = e.get("type", "")
                backing = e.get("disk") or e.get("path") or ""
                print(f" {_cyan('•')} {e.get('name', '<unnamed>'):<22} {_dim(kind + ' ' + backing)}")
        if init:
            print(f" {_bold('Initiator Groups')} ({len(init)}):")
            for i in init:
                print(f" {_cyan('•')} {i.get('comment') or '<no comment>'}")
        if portals:
            print(f" {_bold('Portals')} ({len(portals)}):")
            for p in portals:
                ips = ", ".join(l["ip"] for l in p.get("listen", []))
                comment = p.get("comment", "")
                label = f"{comment} " if comment else ""
                print(f" {_cyan('•')} {label}{_dim(ips)}")
        if tgt:
            print(f" {_bold('Targets')} ({len(tgt)}):")
            for t in tgt:
                print(f" {_cyan('•')} {t.get('name', '<unnamed>')}")
        if te:
            print(f" {_bold('Target-Extent Associations')} ({len(te)})")
    else:
        print(f"\n {_dim('iSCSI: none')}")

    # Datasets
    if ds:
        print(f"\n {_bold(f'Datasets ({len(ds)})')}")
        for d in ds[:20]:
            name = d.get("id", "")
            is_root = "/" not in name
            used_raw = d.get("used", {})
            used_bytes = used_raw.get("parsed", 0) if isinstance(used_raw, dict) else 0
            used_str = f" {_fmt_bytes(used_bytes)} used" if used_bytes else ""
            root_tag = _dim(" (pool root)") if is_root else ""
            print(f" {_cyan('•')} {name}{root_tag}{_dim(used_str)}")
        if len(ds) > 20:
            print(f" {_dim(f'… and {len(ds) - 20} more')}")
    else:
        print(f"\n {_dim('Datasets: none')}")

    # Zvols
    if zvols:
        print(f"\n {_bold(f'Zvols ({len(zvols)})')}")
        for z in zvols:
            name = z.get("id", "")
            vs_raw = z.get("volsize", {})
            vs = vs_raw.get("parsed", 0) if isinstance(vs_raw, dict) else 0
            vs_str = f" {_fmt_bytes(vs)}" if vs else ""
            print(f" {_cyan('•')} {name}{_dim(vs_str)}")
    else:
        print(f"\n {_dim('Zvols: none')}")

    print()

def _run_audit_wizard(host: str, port: int, api_key: str) -> None:
    """Query destination inventory and offer to selectively delete configuration."""
    print(f"\n Querying {_bold(host)} …\n")

    async def _query() -> dict:
        async with TrueNASClient(host=host, port=port, api_key=api_key, verify_ssl=False) as client:
            return await query_destination_inventory(client)

    try:
        inv = asyncio.run(_query())
    except OSError as exc:  # PermissionError is a subclass of OSError
        print(f" {_bold_red('Connection failed:')} {exc}\n")
        return

    _print_inventory_report(host, inv)

    total = sum(len(v) for v in inv.values())
    if total == 0:
        print(f" {_dim('The destination appears to have no configuration.')}\n")
        return

    # ── Deletion options ──────────────────────────────────────────────────────
    print(f" {_bold_yellow('─' * 60)}")
    print(f" {_bold_yellow('DELETION OPTIONS')}")
    print(f" {_dim('You may choose to delete some or all of the configuration above.')}")
    print(f" {_bold_red('WARNING: Deleted datasets and zvols cannot be recovered — all data will be permanently lost.')}")
    print()

    iscsi_keys = ("iscsi_extents", "iscsi_initiators", "iscsi_portals",
                  "iscsi_targets", "iscsi_targetextents")
    has_iscsi = any(inv[k] for k in iscsi_keys)
    iscsi_count = sum(len(inv[k]) for k in iscsi_keys)
    deletable_ds = [d for d in inv["datasets"] if "/" in d["id"]]

    del_iscsi = False
    del_smb = False
    del_nfs = False
    del_zvols = False
    del_datasets = False

    # iSCSI (must go first — uses zvols as backing)
    if has_iscsi:
        del_iscsi = _confirm(
            f" Delete ALL iSCSI configuration ({iscsi_count} objects)?"
        )

    # SMB
    if inv["smb_shares"]:
        del_smb = _confirm(
            f" Delete all {len(inv['smb_shares'])} SMB share(s)?"
        )

    # NFS
    if inv["nfs_exports"]:
        del_nfs = _confirm(
            f" Delete all {len(inv['nfs_exports'])} NFS export(s)?"
        )

    # Zvols — require explicit confirmation phrase
    if inv["zvols"]:
        print()
        print(f" {_bold_red('⚠ DATA DESTRUCTION WARNING ⚠')}")
        print(" Deleting zvols PERMANENTLY DESTROYS all data stored in them.")
        print(" This action cannot be undone. Affected zvols:")
        for z in inv["zvols"]:
            print(f" {_yellow('•')} {z['id']}")
        print()
        raw = _prompt(
            f" Type DELETE to confirm deletion of {len(inv['zvols'])} zvol(s),"
            " or Enter to skip"
        ).strip()
        del_zvols = (raw == "DELETE")
        if raw and raw != "DELETE":
            print(f" {_dim('Confirmation not matched — zvols will not be deleted.')}")
        print()

    # Datasets — strongest warning
    if deletable_ds:
        print(f" {_bold_red('⚠⚠ CRITICAL DATA DESTRUCTION WARNING ⚠⚠')}")
        print(" Deleting datasets PERMANENTLY DESTROYS ALL DATA including all files,")
        print(" snapshots, and child datasets. Pool root datasets (e.g. 'tank') will")
        print(" be skipped, but all child datasets WILL be deleted.")
        print(f" This action cannot be undone. {len(deletable_ds)} dataset(s) would be deleted.")
        print()
        raw = _prompt(
            f" Type DELETE to confirm deletion of {len(deletable_ds)} dataset(s),"
            " or Enter to skip"
        ).strip()
        del_datasets = (raw == "DELETE")
        if raw and raw != "DELETE":
            print(f" {_dim('Confirmation not matched — datasets will not be deleted.')}")
        print()

    # ── Nothing selected ──────────────────────────────────────────────────────
    if not any([del_iscsi, del_smb, del_nfs, del_zvols, del_datasets]):
        print(f" {_dim('Nothing selected for deletion. No changes made.')}\n")
        return

    # ── Final confirmation ────────────────────────────────────────────────────
    print(f" {_bold_yellow('─' * 60)}")
    print(f" {_bold_yellow('PENDING DELETIONS on ' + host + ':')}")
    if del_iscsi:
        print(f" {_yellow('•')} ALL iSCSI configuration ({iscsi_count} objects)")
    if del_smb:
        print(f" {_yellow('•')} {len(inv['smb_shares'])} SMB share(s)")
    if del_nfs:
        print(f" {_yellow('•')} {len(inv['nfs_exports'])} NFS export(s)")
    if del_zvols:
        print(f" {_bold_red('•')} {len(inv['zvols'])} zvol(s) "
              f"{_bold_red('⚠ ALL DATA WILL BE PERMANENTLY DESTROYED')}")
    if del_datasets:
        print(f" {_bold_red('•')} {len(deletable_ds)} dataset(s) "
              f"{_bold_red('⚠ ALL DATA WILL BE PERMANENTLY DESTROYED')}")
    print()
    print(f" {_bold_red('THIS ACTION CANNOT BE UNDONE.')}")
    print()

    if not _confirm(f" Proceed with all selected deletions on {host}?"):
        print(f" {_dim('Aborted – no changes made.')}\n")
        return

    # ── Execute ───────────────────────────────────────────────────────────────
    print()

    async def _execute() -> None:
        async with TrueNASClient(host=host, port=port, api_key=api_key, verify_ssl=False) as client:
            if del_iscsi:
                print(" Removing iSCSI configuration …")
                await clear_iscsi_config(client)
                print(f" {_bold_green('✓')} iSCSI configuration removed.")

            if del_smb:
                print(" Removing SMB shares …")
                ok, fail = await delete_smb_shares(client, inv["smb_shares"])
                suffix = f" {_bold_red(str(fail) + ' failed')}" if fail else ""
                print(f" {_bold_green('✓')} {ok} deleted{suffix}")

            if del_nfs:
                print(" Removing NFS exports …")
                ok, fail = await delete_nfs_exports(client, inv["nfs_exports"])
                suffix = f" {_bold_red(str(fail) + ' failed')}" if fail else ""
                print(f" {_bold_green('✓')} {ok} deleted{suffix}")

            if del_zvols:
                print(" Removing zvols …")
                ok, fail = await delete_zvols(client, inv["zvols"])
                suffix = f" {_bold_red(str(fail) + ' failed')}" if fail else ""
                print(f" {_bold_green('✓')} {ok} deleted{suffix}")

            if del_datasets:
                print(" Removing datasets …")
                ok, fail = await delete_datasets(client, deletable_ds)
                suffix = f" {_bold_red(str(fail) + ' failed')}" if fail else ""
                print(f" {_bold_green('✓')} {ok} deleted{suffix}")

    asyncio.run(_execute())
    print(f"\n {_bold_cyan('Done.')}\n")

# ─────────────────────────────────────────────────────────────────────────────
# Interactive wizard
# ─────────────────────────────────────────────────────────────────────────────

def interactive_mode() -> None:
    """Interactive wizard: pick source → configure → dry run → confirm → apply."""
    print(
        f"\n{_bold_cyan(' TrueNAS Share Migration Tool')}\n"
        f" {_dim('Migrate SMB/NFS shares and iSCSI configuration to a live TrueNAS system.')}\n"
    )

    # 0 ── Top-level action ────────────────────────────────────────────────────
    print(f" {_bold('What would you like to do?')}")
    print(f" {_cyan('1.')} Migrate configuration to a destination system")
    print(f" {_cyan('2.')} Audit destination system (view and manage existing config)")
    action_raw = _prompt(" Select [1/2]", default="1")
    print()

    if action_raw.strip() == "2":
        audit_host = ""
        while not audit_host:
            audit_host = _prompt("Destination TrueNAS host or IP")
            if not audit_host:
                print(" Host is required.")
        audit_port_raw = _prompt("WebSocket port", default="443")
        audit_port = int(audit_port_raw) if audit_port_raw.isdigit() else 443
        audit_key = ""
        while not audit_key:
            try:
                audit_key = getpass.getpass("API key (input hidden): ").strip()
            except (EOFError, KeyboardInterrupt):
                print()
                sys.exit(0)
            if not audit_key:
                print(" API key is required.")
        _run_audit_wizard(audit_host, audit_port, audit_key)
        return

    # 1 ── Source type ─────────────────────────────────────────────────────────
    print(f" {_bold('Source type:')}")
    print(f" {_cyan('1.')} TrueNAS debug archive (.tgz / .tar)")
    print(f" {_cyan('2.')} CSV import (non-TrueNAS source)")
    src_raw = _prompt(" Select source [1/2]", default="1")
    use_csv = src_raw.strip() == "2"
    print()

    # 2 ── Destination ─────────────────────────────────────────────────────────
    host = ""
    while not host:
        host = _prompt("Destination TrueNAS host or IP")
        if not host:
            print(" Host is required.")

    port_raw = _prompt("WebSocket port", default="443")
    port = int(port_raw) if port_raw.isdigit() else 443

    # 3 ── API key ─────────────────────────────────────────────────────────────
    api_key = ""
    while not api_key:
        try:
            api_key = getpass.getpass("API key (input hidden): ").strip()
        except (EOFError, KeyboardInterrupt):
            print()
            sys.exit(0)
        if not api_key:
            print(" API key is required.")

    if use_csv:
        # ── CSV source ─────────────────────────────────────────────────────────
        print(f"\n {_bold('CSV file paths:')}")
        print(f" {_dim('Press Enter to skip a share type.')}\n")
        smb_csv_path = _prompt_csv_path("SMB")
        print()
        nfs_csv_path = _prompt_csv_path("NFS")

        migrate: list[str] = []
        if smb_csv_path:
            migrate.append("smb")
        if nfs_csv_path:
            migrate.append("nfs")
        if not migrate:
            sys.exit("No CSV files provided – nothing to migrate.")

        print()
        archive_data = parse_csv_sources(smb_csv_path, nfs_csv_path)
        extra_ns: dict = {"smb_csv": smb_csv_path, "nfs_csv": nfs_csv_path}

    else:
        # ── Archive source ─────────────────────────────────────────────────────
        archives = _find_debug_archives()
        if not archives:
            sys.exit(
                "No debug archives (.tgz / .tar.gz / .tar / .txz) found in the "
                "current directory.\n"
                "Copy your TrueNAS debug file here, or use --debug-tar to specify a path."
            )

        if len(archives) == 1:
            chosen = archives[0]
            print(f" {_dim('Archive:')} {_bold(chosen.name)} "
                  f"{_dim('(' + f'{chosen.stat().st_size / 1_048_576:.1f} MB' + ')')}\n")
        else:
            print(f" {_bold('Debug archives found:')}\n")
            for i, p in enumerate(archives, 1):
                print(f" {_cyan(str(i) + '.')} {p.name} "
                      f"{_dim('(' + f'{p.stat().st_size / 1_048_576:.1f} MB' + ')')}")
            print()
            while True:
                raw = _prompt(f"Select archive [1-{len(archives)}]")
                if raw.isdigit() and 1 <= int(raw) <= len(archives):
                    chosen = archives[int(raw) - 1]
                    break
                print(f" Enter a number from 1 to {len(archives)}.")

        # ── Migration scope ────────────────────────────────────────────────────
        print(f"\n {_bold('What to migrate?')}")
        print(f" {_cyan('1.')} SMB shares")
        print(f" {_cyan('2.')} NFS shares")
        print(f" {_cyan('3.')} iSCSI (targets, extents, portals, initiator groups)")
        sel_raw = _prompt(
            "Selection (space-separated numbers, Enter for all)", default="1 2 3"
        )
        _sel_map = {"1": "smb", "2": "nfs", "3": "iscsi"}
        migrate = []
        for tok in sel_raw.split():
            if tok in _sel_map and _sel_map[tok] not in migrate:
                migrate.append(_sel_map[tok])
        if not migrate:
            migrate = ["smb", "nfs", "iscsi"]

        # ── Parse archive ──────────────────────────────────────────────────────
        print()
        archive_data = parse_archive(str(chosen))
        extra_ns = {"debug_tar": str(chosen)}

        # ── iSCSI portal IP remapping ──────────────────────────────────────────
        if "iscsi" in migrate and archive_data.get("iscsi", {}).get("portals"):
            _prompt_iscsi_portals(archive_data["iscsi"])

        # ── iSCSI pre-migration check ──────────────────────────────────────────
        if "iscsi" in migrate:
            _prompt_clear_existing_iscsi(host, port, api_key)

    # ── Select individual shares (common) ──────────────────────────────────────
    if "smb" in migrate and archive_data["smb_shares"]:
        archive_data["smb_shares"] = _select_shares(archive_data["smb_shares"], "SMB")
    if "nfs" in migrate and archive_data["nfs_shares"]:
        archive_data["nfs_shares"] = _select_shares(archive_data["nfs_shares"], "NFS")
    print()

    base_ns = dict(
        dest=host,
        port=port,
        api_key=api_key,
        verify_ssl=False,
        migrate=migrate,
        **extra_ns,
    )

    # 6 ── Dry run ─────────────────────────────────────────────────────────────
    dry_summary = asyncio.run(
        run(argparse.Namespace(**base_ns, dry_run=True), archive_data)
    )
    print(dry_summary.report())

    # Offer to create missing datasets before the live run
    if dry_summary.missing_datasets:
        non_mnt = [p for p in dry_summary.missing_datasets if not p.startswith("/mnt/")]
        creatable = [p for p in dry_summary.missing_datasets if p.startswith("/mnt/")]

        if non_mnt:
            print(f" NOTE: {len(non_mnt)} path(s) cannot be auto-created "
                  "(not under /mnt/):")
            for p in non_mnt:
                print(f" • {p}")
            print()

        if creatable:
            print(f" {len(creatable)} dataset(s) can be created automatically:")
            for p in creatable:
                print(f" • {p}")
            print()
            if _confirm(f"Create these {len(creatable)} dataset(s) on {host} now?"):
                asyncio.run(create_missing_datasets(
                    host=host,
                    port=port,
                    api_key=api_key,
                    paths=creatable,
                ))
                print()

    if dry_summary.missing_zvols:
        print(f"\n {len(dry_summary.missing_zvols)} zvol(s) need to be created for iSCSI extents:")
        for z in dry_summary.missing_zvols:
            print(f" • {z}")
        print()
        if _confirm(f"Create these {len(dry_summary.missing_zvols)} zvol(s) on {host} now?"):
            zvol_sizes: dict[str, int] = {}
            for zvol in dry_summary.missing_zvols:
                while True:
                    raw = _prompt(f" Size for {zvol} (e.g. 100G, 500GiB, 1T)").strip()
                    if not raw:
                        print(" Size is required.")
                        continue
                    try:
                        zvol_sizes[zvol] = _parse_size(raw)
                        break
                    except ValueError:
                        print(f" Cannot parse {raw!r} — try a format like 100G or 500GiB.")
            asyncio.run(create_missing_zvols(
                host=host, port=port, api_key=api_key, zvols=zvol_sizes,
            ))
            print()
            print(" Re-running dry run to verify zvol creation …")
            print()
            dry_summary = asyncio.run(
                run(argparse.Namespace(**base_ns, dry_run=True), archive_data)
            )
            print(dry_summary.report())

    if not _confirm(f"Apply these changes to {host}?"):
        print("Aborted – no changes made.")
        sys.exit(0)

    # 7 ── Live run ────────────────────────────────────────────────────────────
    print()
    live_summary = asyncio.run(
        run(argparse.Namespace(**base_ns, dry_run=False), archive_data)
    )
    print(live_summary.report())
    if live_summary.errors:
        sys.exit(2)

# ─────────────────────────────────────────────────────────────────────────────
# Argument parser + entry point
# ─────────────────────────────────────────────────────────────────────────────

def main() -> None:
    if len(sys.argv) == 1:
        interactive_mode()
        return

    p = argparse.ArgumentParser(
        prog="truenas_migrate",
        description=(
            "Migrate SMB shares, NFS exports, and iSCSI configuration to a live "
            "TrueNAS destination system. Source can be a TrueNAS debug archive "
            "or customer-supplied CSV files."
        ),
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )

    # ── Source ────────────────────────────────────────────────────────────────
    src = p.add_argument_group("source (choose one)")
    src.add_argument(
        "--debug-tar", metavar="FILE",
        help="Path to the TrueNAS debug .tar / .tgz from the SOURCE system.",
    )
    src.add_argument(
        "--smb-csv", metavar="FILE",
        help="Path to a CSV file containing SMB share definitions (non-TrueNAS source).",
    )
    src.add_argument(
        "--nfs-csv", metavar="FILE",
        help="Path to a CSV file containing NFS share definitions (non-TrueNAS source).",
    )
    p.add_argument(
        "--list-archive", action="store_true",
        help=(
            "List all JSON files found in the archive and exit. "
            "Requires --debug-tar."
        ),
    )

    # ── Destination ───────────────────────────────────────────────────────────
    p.add_argument(
        "--dest", metavar="HOST",
        help="Hostname or IP of the DESTINATION TrueNAS system.",
    )
    p.add_argument(
        "--port", type=int, default=443, metavar="PORT",
        help="WebSocket port on the destination (default: 443).",
    )
    p.add_argument(
        "--verify-ssl", action="store_true",
        help=(
            "Verify the destination TLS certificate. "
            "Off by default because most TrueNAS systems use self-signed certs."
        ),
    )

    # ── Authentication ────────────────────────────────────────────────────────
    p.add_argument(
        "--api-key", metavar="KEY",
        help=(
            "TrueNAS API key. Generate one in the TrueNAS UI: "
            "top-right account menu → API Keys."
        ),
    )

    # ── Scope ─────────────────────────────────────────────────────────────────
    p.add_argument(
        "--migrate",
        nargs="+",
        choices=["smb", "nfs", "iscsi"],
        default=["smb", "nfs", "iscsi"],
        metavar="TYPE",
        help=(
            "What to migrate. Choices: smb nfs iscsi "
            "(default: all three). Example: --migrate smb"
        ),
    )
    p.add_argument(
        "--dry-run", action="store_true",
        help="Parse source and connect to destination, but make no changes.",
    )
    p.add_argument(
        "--verbose", "-v", action="store_true",
        help="Enable DEBUG-level logging.",
    )

    args = p.parse_args()

    if args.verbose:
        log.setLevel(logging.DEBUG)

    has_archive = bool(args.debug_tar)
    has_csv = bool(args.smb_csv or args.nfs_csv)

    if has_archive and has_csv:
        p.error("Cannot combine --debug-tar with --smb-csv / --nfs-csv.")

    if not has_archive and not has_csv:
        p.error(
            "Specify a source: --debug-tar FILE or --smb-csv / --nfs-csv FILE(s)."
        )

    if has_archive:
        if not Path(args.debug_tar).is_file():
            p.error(f"Archive not found: {args.debug_tar}")
        if args.list_archive:
            list_archive_and_exit(args.debug_tar)  # does not return
    else:
        if args.list_archive:
            p.error("--list-archive requires --debug-tar.")
        if args.smb_csv and not Path(args.smb_csv).is_file():
            p.error(f"SMB CSV not found: {args.smb_csv}")
        if args.nfs_csv and not Path(args.nfs_csv).is_file():
            p.error(f"NFS CSV not found: {args.nfs_csv}")

    if not args.dest:
        p.error("--dest is required.")
    if not args.api_key:
        p.error("--api-key is required.")

    summary = asyncio.run(run(args))
    print(summary.report())
    if summary.errors:
        sys.exit(2)
488
truenas_migrate/client.py
Normal file
@@ -0,0 +1,488 @@
"""TrueNAS WebSocket client and dataset utilities."""
from __future__ import annotations

import asyncio
import base64
import contextlib
import hashlib
import json
import os
import ssl
import struct
from typing import Any, Optional

from .colors import log


# ─────────────────────────────────────────────────────────────────────────────
# Raw WebSocket implementation (stdlib only, RFC 6455)
# ─────────────────────────────────────────────────────────────────────────────

def _ws_mask(data: bytes, mask: bytes) -> bytes:
    """XOR *data* with a 4-byte repeating mask key."""
    out = bytearray(data)
    for i in range(len(out)):
        out[i] ^= mask[i & 3]
    return bytes(out)

def _ws_encode_frame(payload: bytes, opcode: int = 0x1) -> bytes:
    """Encode a masked client→server WebSocket frame."""
    mask = os.urandom(4)
    length = len(payload)
    header = bytearray([0x80 | opcode])
    if length < 126:
        header.append(0x80 | length)
    elif length < 65536:
        header.append(0x80 | 126)
        header += struct.pack("!H", length)
    else:
        header.append(0x80 | 127)
        header += struct.pack("!Q", length)
    return bytes(header) + mask + _ws_mask(payload, mask)

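The RFC 6455 framing above is easy to verify in isolation. This standalone sketch copies the two helpers and checks two properties: masking is a self-inverse XOR, and a short text frame consists of FIN+opcode, the MASK bit with the 7-bit length, a 4-byte mask key, and the masked payload:

```python
import os
import struct

def ws_mask(data: bytes, mask: bytes) -> bytes:
    # Standalone copy of _ws_mask above.
    out = bytearray(data)
    for i in range(len(out)):
        out[i] ^= mask[i & 3]
    return bytes(out)

def ws_encode_frame(payload: bytes, opcode: int = 0x1) -> bytes:
    # Standalone copy of _ws_encode_frame above.
    mask = os.urandom(4)
    length = len(payload)
    header = bytearray([0x80 | opcode])       # FIN bit + opcode
    if length < 126:
        header.append(0x80 | length)          # MASK bit + 7-bit length
    elif length < 65536:
        header.append(0x80 | 126)
        header += struct.pack("!H", length)   # 16-bit extended length
    else:
        header.append(0x80 | 127)
        header += struct.pack("!Q", length)   # 64-bit extended length
    return bytes(header) + mask + ws_mask(payload, mask)

# Masking twice with the same key restores the original bytes.
key = b"\x01\x02\x03\x04"
assert ws_mask(ws_mask(b"hello", key), key) == b"hello"

frame = ws_encode_frame(b"hi")
assert frame[0] == 0x81                       # FIN + text opcode
assert frame[1] == 0x80 | 2                   # MASK bit + payload length 2
assert ws_mask(frame[6:], frame[2:6]) == b"hi"  # bytes 2..5 are the mask key
```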
async def _ws_recv_message(reader: asyncio.StreamReader) -> str:
|
||||
"""
|
||||
Read one complete WebSocket message, reassembling continuation frames.
|
||||
Skips ping/pong control frames. Raises OSError on close frame.
|
||||
"""
|
||||
fragments: list[bytes] = []
|
||||
while True:
|
||||
hdr = await reader.readexactly(2)
|
||||
fin = bool(hdr[0] & 0x80)
|
||||
opcode = hdr[0] & 0x0F
|
||||
masked = bool(hdr[1] & 0x80)
|
||||
length = hdr[1] & 0x7F
|
||||
|
||||
if length == 126:
|
||||
length = struct.unpack("!H", await reader.readexactly(2))[0]
|
||||
elif length == 127:
|
||||
length = struct.unpack("!Q", await reader.readexactly(8))[0]
|
||||
|
||||
mask_key = await reader.readexactly(4) if masked else None
|
||||
payload = await reader.readexactly(length) if length else b""
|
||||
if mask_key:
|
||||
payload = _ws_mask(payload, mask_key)
|
||||
|
||||
if opcode == 0x8:
|
||||
raise OSError("WebSocket: server sent close frame")
|
||||
if opcode in (0x9, 0xA):
|
||||
continue
|
||||
|
||||
fragments.append(payload)
|
||||
if fin:
|
||||
return b"".join(fragments).decode("utf-8")
|
||||
|
||||
|
||||
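The frame helpers above rely on RFC 6455's client-side masking, which is a plain XOR with a repeating 4-byte key and is therefore its own inverse. A minimal standalone sketch (re-implementing the XOR rather than importing the module):

```python
import os

def ws_mask(data: bytes, mask: bytes) -> bytes:
    # XOR each byte with the repeating 4-byte key — the same op masks and unmasks.
    out = bytearray(data)
    for i in range(len(out)):
        out[i] ^= mask[i & 3]
    return bytes(out)

payload = b"hello, truenas"
key = os.urandom(4)
# Masking twice with the same key restores the original payload.
assert ws_mask(ws_mask(payload, key), key) == payload
```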
class _WebSocket:
    """asyncio StreamReader/Writer wrapped to a simple send/recv/close API."""

    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        self._reader = reader
        self._writer = writer

    async def send(self, data: str) -> None:
        self._writer.write(_ws_encode_frame(data.encode("utf-8"), opcode=0x1))
        await self._writer.drain()

    async def recv(self) -> str:
        return await _ws_recv_message(self._reader)

    async def close(self) -> None:
        with contextlib.suppress(Exception):
            self._writer.write(_ws_encode_frame(b"", opcode=0x8))
            await self._writer.drain()
        self._writer.close()
        with contextlib.suppress(Exception):
            await self._writer.wait_closed()


async def _ws_connect(host: str, port: int, path: str, ssl_ctx: ssl.SSLContext) -> _WebSocket:
    """Open a TLS connection, perform the HTTP→WebSocket upgrade, return a _WebSocket."""
    reader, writer = await asyncio.open_connection(host, port, ssl=ssl_ctx)

    key = base64.b64encode(os.urandom(16)).decode()
    writer.write((
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        f"Upgrade: websocket\r\n"
        f"Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        f"Sec-WebSocket-Version: 13\r\n"
        f"\r\n"
    ).encode())
    await writer.drain()

    response_lines: list[bytes] = []
    while True:
        line = await asyncio.wait_for(reader.readline(), timeout=20)
        if not line:
            raise OSError("Connection closed during WebSocket handshake")
        response_lines.append(line)
        if line in (b"\r\n", b"\n"):
            break

    status = response_lines[0].decode("latin-1").strip()
    if " 101 " not in status:
        raise OSError(f"WebSocket upgrade failed: {status}")

    expected = base64.b64encode(
        hashlib.sha1(
            (key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11").encode()
        ).digest()
    ).decode().lower()
    headers_text = b"".join(response_lines).decode("latin-1").lower()
    if expected not in headers_text:
        raise OSError("WebSocket upgrade: Sec-WebSocket-Accept mismatch")

    return _WebSocket(reader, writer)

# ─────────────────────────────────────────────────────────────────────────────
# TrueNAS JSON-RPC 2.0 client
# ─────────────────────────────────────────────────────────────────────────────

class TrueNASClient:
    """
    Minimal async JSON-RPC 2.0 client for the TrueNAS WebSocket API.

    TrueNAS 25.04+ endpoint: wss://<host>:<port>/api/current
    Authentication: auth.login_with_api_key
    """

    def __init__(
        self,
        host: str,
        api_key: str,
        port: int = 443,
        verify_ssl: bool = False,
    ) -> None:
        self._host = host
        self._port = port
        self._api_key = api_key
        self._verify_ssl = verify_ssl
        self._ws = None
        self._call_id = 0

    @property
    def _url(self) -> str:
        return f"wss://{self._host}:{self._port}/api/current"

    async def __aenter__(self) -> "TrueNASClient":
        await self._connect()
        return self

    async def __aexit__(self, *_: Any) -> None:
        if self._ws:
            await self._ws.close()
        self._ws = None

    async def _connect(self) -> None:
        ctx = ssl.create_default_context()
        if not self._verify_ssl:
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE

        log.info("Connecting to %s …", self._url)
        try:
            self._ws = await _ws_connect(
                host=self._host,
                port=self._port,
                path="/api/current",
                ssl_ctx=ctx,
            )
        except (OSError, asyncio.TimeoutError) as exc:
            log.error("Connection failed: %s", exc)
            raise

        log.info("Authenticating with API key …")
        result = await self.call("auth.login_with_api_key", [self._api_key])
        if result is not True and result != "SUCCESS":
            raise PermissionError(f"Authentication rejected: {result!r}")
        log.info("Connected and authenticated.")

    async def call(self, method: str, params: Optional[list] = None) -> Any:
        """Send one JSON-RPC request and return its result.
        Raises RuntimeError if the API returns an error.
        """
        self._call_id += 1
        req_id = self._call_id

        await self._ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": req_id,
            "method": method,
            "params": params or [],
        }))

        while True:
            raw = await asyncio.wait_for(self._ws.recv(), timeout=60)
            msg = json.loads(raw)
            if "id" not in msg:
                continue
            if msg["id"] != req_id:
                continue
            if "error" in msg:
                err = msg["error"]
                reason = (
                    err.get("data", {}).get("reason")
                    or err.get("message")
                    or repr(err)
                )
                raise RuntimeError(f"API error [{method}]: {reason}")
            return msg.get("result")

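For reference, an offline sketch of the JSON-RPC 2.0 envelope that call() exchanges — the request shape, and the error-unwrapping order (data.reason, then message, then repr) — using a made-up error body, so no live TrueNAS is needed:

```python
import json

# Request envelope as call() builds it (method and params here are placeholders).
request = json.dumps({"jsonrpc": "2.0", "id": 1,
                      "method": "auth.login_with_api_key", "params": ["<key>"]})

# Hypothetical error reply; the reason falls back data.reason → message → repr.
reply = {"jsonrpc": "2.0", "id": 1,
         "error": {"message": "Invalid API key",
                   "data": {"reason": "Invalid API key"}}}
err = reply["error"]
reason = err.get("data", {}).get("reason") or err.get("message") or repr(err)
assert reason == "Invalid API key"
```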
# ─────────────────────────────────────────────────────────────────────────────
# Dataset utilities
# ─────────────────────────────────────────────────────────────────────────────

async def check_dataset_paths(
    client: TrueNASClient,
    paths: list[str],
) -> list[str]:
    """
    Return the subset of *paths* that have no matching ZFS dataset on the
    destination. Returns an empty list when the dataset query itself fails.
    """
    if not paths:
        return []

    unique = sorted({p.rstrip("/") for p in paths if p})
    log.info("Checking %d share path(s) against destination datasets …", len(unique))
    try:
        datasets = await client.call("pool.dataset.query") or []
    except RuntimeError as exc:
        log.warning("Could not query datasets (skipping check): %s", exc)
        return []

    mountpoints = {
        d.get("mountpoint", "").rstrip("/")
        for d in datasets
        if d.get("mountpoint")
    }

    missing = [p for p in unique if p not in mountpoints]
    if missing:
        for p in missing:
            log.warning(" MISSING dataset for path: %s", p)
    else:
        log.info(" All share paths exist as datasets.")
    return missing


async def create_dataset(client: TrueNASClient, path: str) -> bool:
    """
    Create a ZFS dataset whose mountpoint will be *path*.
    *path* must be an absolute /mnt/… path.
    Returns True on success, False on failure.
    """
    if not path.startswith("/mnt/"):
        log.error("Cannot auto-create dataset for non-/mnt/ path: %s", path)
        return False

    name = path[5:].rstrip("/")
    log.info("Creating dataset %r …", name)
    try:
        await client.call("pool.dataset.create", [{"name": name}])
        log.info(" Created: %s", name)
        return True
    except RuntimeError as exc:
        log.error(" Failed to create dataset %r: %s", name, exc)
        return False


async def create_missing_datasets(
    host: str,
    port: int,
    api_key: str,
    paths: list[str],
    verify_ssl: bool = False,
) -> None:
    """Open a fresh connection and create ZFS datasets for *paths*."""
    async with TrueNASClient(
        host=host, port=port, api_key=api_key, verify_ssl=verify_ssl,
    ) as client:
        for path in paths:
            await create_dataset(client, path)

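create_dataset() derives the ZFS dataset name by stripping the /mnt/ mount prefix; a quick sketch of that path-to-name rule (the path value here is illustrative):

```python
path = "/mnt/tank/projects/"
assert path.startswith("/mnt/")
name = path[5:].rstrip("/")  # "/mnt/" is 5 characters
assert name == "tank/projects"
```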
# ─────────────────────────────────────────────────────────────────────────────
# iSCSI zvol utilities
# ─────────────────────────────────────────────────────────────────────────────

async def check_iscsi_zvols(
    client: TrueNASClient,
    zvol_names: list[str],
) -> list[str]:
    """
    Return the subset of *zvol_names* that do not exist on the destination.
    Names are the dataset path without the leading 'zvol/' prefix
    (e.g. 'tank/VMWARE001'). Returns [] when the query itself fails.
    """
    if not zvol_names:
        return []

    unique = sorted(set(zvol_names))
    log.info("Checking %d zvol(s) against destination datasets …", len(unique))
    try:
        datasets = await client.call(
            "pool.dataset.query", [[["type", "=", "VOLUME"]]]
        ) or []
    except RuntimeError as exc:
        log.warning("Could not query zvols (skipping check): %s", exc)
        return []

    existing = {d["name"] for d in datasets}
    missing = [n for n in unique if n not in existing]
    if missing:
        for n in missing:
            log.warning(" MISSING zvol: %s", n)
    else:
        log.info(" All iSCSI zvols exist on destination.")
    return missing


async def create_zvol(
    client: TrueNASClient,
    name: str,
    volsize: int,
) -> bool:
    """
    Create a ZFS volume (zvol) on the destination.
    *name* is the dataset path (e.g. 'tank/VMWARE001').
    *volsize* is the size in bytes.
    Returns True on success, False on failure.
    """
    log.info("Creating zvol %r (%d bytes) …", name, volsize)
    try:
        await client.call("pool.dataset.create", [{
            "name": name,
            "type": "VOLUME",
            "volsize": volsize,
        }])
        log.info(" Created: %s", name)
        return True
    except RuntimeError as exc:
        log.error(" Failed to create zvol %r: %s", name, exc)
        return False


async def create_missing_zvols(
    host: str,
    port: int,
    api_key: str,
    zvols: dict[str, int],
    verify_ssl: bool = False,
) -> None:
    """Open a fresh connection and create zvols from {name: volsize_bytes}."""
    async with TrueNASClient(
        host=host, port=port, api_key=api_key, verify_ssl=verify_ssl,
    ) as client:
        for name, volsize in zvols.items():
            await create_zvol(client, name, volsize)

# ─────────────────────────────────────────────────────────────────────────────
# Destination inventory
# ─────────────────────────────────────────────────────────────────────────────

async def query_destination_inventory(client: TrueNASClient) -> dict[str, list]:
    """
    Query all current configuration from the destination system.
    Returns a dict with keys: smb_shares, nfs_exports, datasets, zvols,
    iscsi_extents, iscsi_initiators, iscsi_portals, iscsi_targets, iscsi_targetextents.
    Each value is a list (may be empty if the query fails or returns nothing).
    """
    result: dict[str, list] = {}
    for key, method, params in [
        ("smb_shares", "sharing.smb.query", None),
        ("nfs_exports", "sharing.nfs.query", None),
        ("datasets", "pool.dataset.query", [[["type", "=", "FILESYSTEM"]]]),
        ("zvols", "pool.dataset.query", [[["type", "=", "VOLUME"]]]),
        ("iscsi_extents", "iscsi.extent.query", None),
        ("iscsi_initiators", "iscsi.initiator.query", None),
        ("iscsi_portals", "iscsi.portal.query", None),
        ("iscsi_targets", "iscsi.target.query", None),
        ("iscsi_targetextents", "iscsi.targetextent.query", None),
    ]:
        try:
            result[key] = await client.call(method, params) or []
        except RuntimeError as exc:
            log.warning("Could not query %s: %s", key, exc)
            result[key] = []
    return result


async def delete_smb_shares(
    client: TrueNASClient, shares: list[dict]
) -> tuple[int, int]:
    """Delete SMB shares by ID. Returns (deleted, failed)."""
    deleted = failed = 0
    for share in shares:
        try:
            await client.call("sharing.smb.delete", [share["id"]])
            log.info(" Deleted SMB share %r", share.get("name"))
            deleted += 1
        except RuntimeError as exc:
            log.error(" Failed to delete SMB share %r: %s", share.get("name"), exc)
            failed += 1
    return deleted, failed


async def delete_nfs_exports(
    client: TrueNASClient, exports: list[dict]
) -> tuple[int, int]:
    """Delete NFS exports by ID. Returns (deleted, failed)."""
    deleted = failed = 0
    for export in exports:
        try:
            await client.call("sharing.nfs.delete", [export["id"]])
            log.info(" Deleted NFS export %r", export.get("path"))
            deleted += 1
        except RuntimeError as exc:
            log.error(" Failed to delete NFS export %r: %s", export.get("path"), exc)
            failed += 1
    return deleted, failed


async def delete_zvols(
    client: TrueNASClient, zvols: list[dict]
) -> tuple[int, int]:
    """Delete zvols. Returns (deleted, failed)."""
    deleted = failed = 0
    for zvol in zvols:
        try:
            await client.call("pool.dataset.delete", [zvol["id"], {"recursive": True}])
            log.info(" Deleted zvol %r", zvol["id"])
            deleted += 1
        except RuntimeError as exc:
            log.error(" Failed to delete zvol %r: %s", zvol["id"], exc)
            failed += 1
    return deleted, failed


async def delete_datasets(
    client: TrueNASClient, datasets: list[dict]
) -> tuple[int, int]:
    """
    Delete datasets deepest-first to avoid parent-before-child errors.
    Skips pool root datasets (no '/' in the dataset name).
    Returns (deleted, failed).
    """
    sorted_ds = sorted(
        (d for d in datasets if "/" in d["id"]),
        key=lambda d: d["id"].count("/"),
        reverse=True,
    )
    deleted = failed = 0
    for ds in sorted_ds:
        try:
            await client.call("pool.dataset.delete", [ds["id"], {"recursive": True}])
            log.info(" Deleted dataset %r", ds["id"])
            deleted += 1
        except RuntimeError as exc:
            log.error(" Failed to delete dataset %r: %s", ds["id"], exc)
            failed += 1
    return deleted, failed
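delete_datasets() orders children before parents by sorting on '/' depth; a standalone sketch of that ordering rule with sample ids:

```python
datasets = [{"id": "tank"}, {"id": "tank/a"}, {"id": "tank/a/b"}, {"id": "tank/c"}]
# Pool roots (no '/') are skipped; deepest datasets come first so no parent
# is deleted while a child still exists.
ordered = sorted((d for d in datasets if "/" in d["id"]),
                 key=lambda d: d["id"].count("/"), reverse=True)
assert [d["id"] for d in ordered] == ["tank/a/b", "tank/a", "tank/c"]
```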
55 truenas_migrate/colors.py Normal file
@@ -0,0 +1,55 @@
"""ANSI color helpers and shared logger."""
from __future__ import annotations

import logging
import re as _re
import sys

_USE_COLOR = sys.stderr.isatty()


def _c(code: str, text: str) -> str:
    return f"\033[{code}m{text}\033[0m" if _USE_COLOR else text

def _dim(t: str) -> str: return _c("2", t)
def _bold(t: str) -> str: return _c("1", t)
def _red(t: str) -> str: return _c("31", t)
def _green(t: str) -> str: return _c("32", t)
def _yellow(t: str) -> str: return _c("33", t)
def _cyan(t: str) -> str: return _c("36", t)
def _bold_red(t: str) -> str: return _c("1;31", t)
def _bold_green(t: str) -> str: return _c("1;32", t)
def _bold_yellow(t: str) -> str: return _c("1;33", t)
def _bold_cyan(t: str) -> str: return _c("1;36", t)


def _vis_len(s: str) -> int:
    """Visible character width of a string, ignoring ANSI escape sequences."""
    return len(_re.sub(r'\033\[[0-9;]*m', '', s))


class _ColorFormatter(logging.Formatter):
    _STYLES = {
        logging.DEBUG: "2",
        logging.INFO: "36",
        logging.WARNING: "1;33",
        logging.ERROR: "1;31",
        logging.CRITICAL: "1;31",
    }

    def format(self, record: logging.LogRecord) -> str:
        ts = self.formatTime(record, self.datefmt)
        msg = record.getMessage()
        if _USE_COLOR:
            code = self._STYLES.get(record.levelno, "0")
            level = f"\033[{code}m{record.levelname:<8}\033[0m"
            ts = f"\033[2m{ts}\033[0m"
        else:
            level = f"{record.levelname:<8}"
        return f"{ts} {level} {msg}"


_handler = logging.StreamHandler()
_handler.setFormatter(_ColorFormatter(datefmt="%H:%M:%S"))
logging.basicConfig(level=logging.INFO, handlers=[_handler])
log = logging.getLogger("truenas_migrate")
209 truenas_migrate/csv_source.py Normal file
@@ -0,0 +1,209 @@
"""CSV source parser – reads SMB/NFS share definitions from customer-supplied CSV files."""
from __future__ import annotations

import csv
import sys
from pathlib import Path
from typing import Any

from .colors import log


# ─────────────────────────────────────────────────────────────────────────────
# Column name mappings (human-readable header → API field name)
# Both the friendly names and the raw API names are accepted.
# ─────────────────────────────────────────────────────────────────────────────

_SMB_COL_MAP: dict[str, str] = {
    "share name": "name",
    "path": "path",
    "description": "comment",
    "purpose": "purpose",
    "read only": "ro",
    "browsable": "browsable",
    "guest access": "guestok",
    "access-based enumeration": "abe",
    "hosts allow": "hostsallow",
    "hosts deny": "hostsdeny",
    "time machine": "timemachine",
    "enabled": "enabled",
}

_NFS_COL_MAP: dict[str, str] = {
    "path": "path",
    "description": "comment",
    "read only": "ro",
    "map root user": "maproot_user",
    "map root group": "maproot_group",
    "map all user": "mapall_user",
    "map all group": "mapall_group",
    "security": "security",
    "allowed hosts": "hosts",
    "allowed networks": "networks",
    "enabled": "enabled",
}


# ─────────────────────────────────────────────────────────────────────────────
# Column type metadata (keyed by API field name)
# ─────────────────────────────────────────────────────────────────────────────

# Columns coerced to bool
_SMB_BOOL_COLS = frozenset({"ro", "browsable", "guestok", "abe", "timemachine", "enabled"})
# Columns coerced to list[str] (space-or-comma-separated in CSV)
_SMB_LIST_COLS = frozenset({"hostsallow", "hostsdeny"})
_SMB_REQUIRED = frozenset({"name", "path"})

_NFS_BOOL_COLS = frozenset({"ro", "enabled"})
_NFS_LIST_COLS = frozenset({"security", "hosts", "networks"})
_NFS_REQUIRED = frozenset({"path"})

# ─────────────────────────────────────────────────────────────────────────────
# Internal helpers
# ─────────────────────────────────────────────────────────────────────────────

def _parse_bool(value: str, col: str, row_num: int) -> bool:
    v = value.strip().lower()
    if v in ("true", "yes", "1"):
        return True
    if v in ("false", "no", "0", ""):
        return False
    log.warning(" row %d: unrecognised boolean %r for column %r – treating as False",
                row_num, value, col)
    return False


def _parse_list(value: str) -> list[str]:
    """Split space-or-comma-separated value into a list, dropping blanks."""
    return [p for p in value.replace(",", " ").split() if p]


def _coerce_row(
    row: dict[str, str],
    bool_cols: frozenset[str],
    list_cols: frozenset[str],
    required: frozenset[str],
    row_num: int,
) -> dict[str, Any] | None:
    """Validate and type-coerce one CSV row. Returns None to skip the row."""
    if not any((v or "").strip() for v in row.values()):
        return None  # blank row

    first_val = next(iter(row.values()), "") or ""
    if first_val.strip().startswith("#"):
        return None  # comment row

    result: dict[str, Any] = {}
    for col, raw in row.items():
        if col is None:
            continue
        col = col.strip()
        val = (raw or "").strip()

        if not val:
            continue  # omit empty optional fields; API uses its defaults

        if col in bool_cols:
            result[col] = _parse_bool(val, col, row_num)
        elif col in list_cols:
            result[col] = _parse_list(val)
        else:
            result[col] = val

    for req in required:
        if req not in result:
            log.warning(" row %d: missing required field %r – skipping row", row_num, req)
            return None

    return result


def _normalize_col(col: str, col_map: dict[str, str]) -> str:
    """Map a header name to its API field name; falls back to the lowercased original."""
    key = col.strip().lower()
    return col_map.get(key, key)


def _parse_csv(
    csv_path: str,
    bool_cols: frozenset[str],
    list_cols: frozenset[str],
    required: frozenset[str],
    col_map: dict[str, str],
    label: str,
) -> list[dict]:
    path = Path(csv_path)
    if not path.is_file():
        log.error("%s CSV file not found: %s", label, csv_path)
        sys.exit(1)

    shares: list[dict] = []
    try:
        with path.open(newline="", encoding="utf-8-sig") as fh:
            reader = csv.DictReader(fh)
            if reader.fieldnames is None:
                log.error("%s CSV has no header row: %s", label, csv_path)
                sys.exit(1)

            # Normalise header names using the column map
            normalised_header = {
                _normalize_col(c, col_map)
                for c in reader.fieldnames if c is not None
            }
            missing_req = required - normalised_header
            if missing_req:
                log.error(
                    "%s CSV is missing required column(s): %s",
                    label, ", ".join(sorted(missing_req)),
                )
                sys.exit(1)

            for row_num, row in enumerate(reader, start=2):
                normalised = {
                    _normalize_col(k, col_map): v
                    for k, v in row.items() if k is not None
                }
                share = _coerce_row(normalised, bool_cols, list_cols, required, row_num)
                if share is not None:
                    shares.append(share)

    except OSError as exc:
        log.error("Cannot read %s CSV: %s", label, exc)
        sys.exit(1)

    log.info(" %-14s → %s (%d share(s))", label.lower() + "_shares", csv_path, len(shares))
    return shares


# ─────────────────────────────────────────────────────────────────────────────
# Public API
# ─────────────────────────────────────────────────────────────────────────────

def parse_smb_csv(csv_path: str) -> list[dict]:
    """Parse an SMB shares CSV. Returns share dicts compatible with migrate.py."""
    return _parse_csv(csv_path, _SMB_BOOL_COLS, _SMB_LIST_COLS, _SMB_REQUIRED, _SMB_COL_MAP, "SMB")


def parse_nfs_csv(csv_path: str) -> list[dict]:
    """Parse an NFS shares CSV. Returns share dicts compatible with migrate.py."""
    return _parse_csv(csv_path, _NFS_BOOL_COLS, _NFS_LIST_COLS, _NFS_REQUIRED, _NFS_COL_MAP, "NFS")


def parse_csv_sources(smb_csv: str | None, nfs_csv: str | None) -> dict[str, Any]:
    """
    Parse one or both CSV files.
    Returns {"smb_shares": list, "nfs_shares": list} — same shape as parse_archive().
    """
    log.info("Loading shares from CSV source(s).")
    result: dict[str, Any] = {"smb_shares": [], "nfs_shares": []}
    if smb_csv:
        result["smb_shares"] = parse_smb_csv(smb_csv)
    if nfs_csv:
        result["nfs_shares"] = parse_nfs_csv(nfs_csv)
    log.info(
        "Loaded: %d SMB share(s), %d NFS share(s)",
        len(result["smb_shares"]),
        len(result["nfs_shares"]),
    )
    return result
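The coercion rules in _parse_bool/_parse_list boil down to a truthy-token check and a split on commas-or-spaces; a minimal re-sketch (standalone, not importing the module) with sample CSV values:

```python
def parse_bool(v: str) -> bool:
    # "true"/"yes"/"1" → True; anything else (including blank) → False
    return v.strip().lower() in ("true", "yes", "1")

def parse_list(v: str) -> list[str]:
    # commas become spaces, then split() drops the blanks
    return [p for p in v.replace(",", " ").split() if p]

assert parse_bool("Yes") is True and parse_bool("") is False
assert parse_list("10.0.0.1, 10.0.0.2") == ["10.0.0.1", "10.0.0.2"]
```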
587 truenas_migrate/migrate.py Normal file
@@ -0,0 +1,587 @@
"""Migration routines for SMB and NFS shares."""
from __future__ import annotations

import json
from typing import Any

from .colors import log, _bold, _bold_cyan, _bold_green, _bold_red, _cyan, _yellow
from .client import TrueNASClient
from .summary import Summary


# ─────────────────────────────────────────────────────────────────────────────
# Payload builders
# ─────────────────────────────────────────────────────────────────────────────

# Read-only / server-generated fields that must NOT be sent on create/update
_SMB_SHARE_READONLY = frozenset({"id", "locked", "path_local"})

# CORE SMB share fields that do not exist in the SCALE API
_SMB_SHARE_CORE_EXTRAS = frozenset({
    "vuid",  # server-generated Time Machine UUID; SCALE sets this automatically
})

# CORE NFS share fields that do not exist in the SCALE API
_NFS_SHARE_CORE_EXTRAS = frozenset({
    "paths",    # CORE uses a list; SCALE uses a single "path" string (converted below)
    "alldirs",  # removed in SCALE
    "quiet",    # removed in SCALE
})


def _smb_share_payload(share: dict) -> dict:
    exclude = _SMB_SHARE_READONLY | _SMB_SHARE_CORE_EXTRAS
    return {k: v for k, v in share.items() if k not in exclude}


def _nfs_share_payload(share: dict) -> dict:
    payload = {k: v for k, v in share.items()
               if k not in {"id", "locked"} | _NFS_SHARE_CORE_EXTRAS}
    # CORE stores export paths as a list under "paths"; SCALE expects a single "path" string.
    if "path" not in payload and share.get("paths"):
        payload["path"] = share["paths"][0]
    return payload


# ─────────────────────────────────────────────────────────────────────────────
# Migration routines
# ─────────────────────────────────────────────────────────────────────────────

async def migrate_smb_shares(
    client: TrueNASClient,
    shares: list[dict],
    dry_run: bool,
    summary: Summary,
) -> None:
    summary.smb_found = len(shares)
    if not shares:
        log.info("No SMB shares found in archive.")
        return

    log.info("Querying existing SMB shares on destination …")
    try:
        existing = await client.call("sharing.smb.query") or []
    except RuntimeError as exc:
        msg = f"Could not query SMB shares: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_names = {s.get("name", "").lower() for s in existing}
    log.info(" Destination has %d existing SMB share(s).", len(existing_names))

    for share in shares:
        name = share.get("name", "<unnamed>")
        log.info("%s SMB share %s", _bold("──"), _bold_cyan(repr(name)))

        if name.lower() in existing_names:
            log.info(" %s – already exists on destination.", _yellow("SKIP"))
            summary.smb_skipped += 1
            continue

        payload = _smb_share_payload(share)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create %s → %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(name)), payload.get("path"))
            summary.smb_created += 1
            if payload.get("path"):
                summary.paths_to_create.append(payload["path"])
            continue

        try:
            r = await client.call("sharing.smb.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            summary.smb_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.smb_failed += 1
            summary.errors.append(f"SMB share {name!r}: {exc}")

async def migrate_nfs_shares(
    client: TrueNASClient,
    shares: list[dict],
    dry_run: bool,
    summary: Summary,
) -> None:
    summary.nfs_found = len(shares)
    if not shares:
        log.info("No NFS shares found in archive.")
        return

    log.info("Querying existing NFS shares on destination …")
    try:
        existing = await client.call("sharing.nfs.query") or []
    except RuntimeError as exc:
        msg = f"Could not query NFS shares: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_paths = {s.get("path", "").rstrip("/") for s in existing}
    log.info(" Destination has %d existing NFS share(s).", len(existing_paths))

    for share in shares:
        core_paths = share.get("paths") or []
        path = (share.get("path") or (core_paths[0] if core_paths else "")).rstrip("/")
        all_paths = [p.rstrip("/") for p in (core_paths if core_paths else ([path] if path else []))]
        log.info("%s NFS export %s", _bold("──"), _bold_cyan(repr(path)))

        if path in existing_paths:
            log.info(" %s – path already exported on destination.", _yellow("SKIP"))
            summary.nfs_skipped += 1
            continue

        payload = _nfs_share_payload(share)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create NFS export for %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(path)))
            summary.nfs_created += 1
            summary.paths_to_create.extend(all_paths)
            continue

        try:
            r = await client.call("sharing.nfs.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            summary.nfs_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.nfs_failed += 1
            summary.errors.append(f"NFS share {path!r}: {exc}")

# ─────────────────────────────────────────────────────────────────────────────
# iSCSI payload builders
# ─────────────────────────────────────────────────────────────────────────────


_ISCSI_EXTENT_READONLY = frozenset({"id", "serial", "naa", "vendor", "locked"})
_ISCSI_INITIATOR_READONLY = frozenset({"id"})
_ISCSI_PORTAL_READONLY = frozenset({"id", "tag"})
_ISCSI_TARGET_READONLY = frozenset({"id", "rel_tgt_id", "iscsi_parameters"})


def _iscsi_extent_payload(extent: dict) -> dict:
    payload = {k: v for k, v in extent.items() if k not in _ISCSI_EXTENT_READONLY}
    if extent.get("type") == "DISK":
        payload.pop("path", None)      # derived from disk on DISK extents
        payload.pop("filesize", None)  # only meaningful for FILE extents
    else:
        payload.pop("disk", None)
    return payload


def _iscsi_initiator_payload(initiator: dict) -> dict:
    return {k: v for k, v in initiator.items() if k not in _ISCSI_INITIATOR_READONLY}


def _iscsi_portal_payload(portal: dict) -> dict:
    payload = {k: v for k, v in portal.items() if k not in _ISCSI_PORTAL_READONLY}
    # The API only accepts {"ip": "..."} in listen entries — port is a global setting
    payload["listen"] = [{"ip": l["ip"]} for l in payload.get("listen", [])]
    return payload


def _iscsi_target_payload(
    target: dict,
    portal_id_map: dict[int, int],
    initiator_id_map: dict[int, int],
) -> dict:
    payload = {k: v for k, v in target.items() if k not in _ISCSI_TARGET_READONLY}
    payload["groups"] = [
        {**g,
         "portal": portal_id_map.get(g["portal"], g["portal"]),
         "initiator": initiator_id_map.get(g.get("initiator"), g.get("initiator"))}
        for g in target.get("groups", [])
    ]
    return payload

# ─────────────────────────────────────────────────────────────────────────────
# iSCSI migration sub-routines
# ─────────────────────────────────────────────────────────────────────────────


async def _migrate_iscsi_extents(
    client: TrueNASClient,
    extents: list[dict],
    dry_run: bool,
    summary: Summary,
    id_map: dict[int, int],
) -> None:
    log.info("Querying existing iSCSI extents on destination …")
    try:
        existing = await client.call("iscsi.extent.query") or []
    except RuntimeError as exc:
        msg = f"Could not query iSCSI extents: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_by_name = {e["name"].lower(): e for e in existing}
    log.info(" Destination has %d existing extent(s).", len(existing_by_name))

    for ext in extents:
        name = ext.get("name", "<unnamed>")
        log.info("%s iSCSI extent %s", _bold("──"), _bold_cyan(repr(name)))

        if name.lower() in existing_by_name:
            log.info(" %s – already exists on destination.", _yellow("SKIP"))
            id_map[ext["id"]] = existing_by_name[name.lower()]["id"]
            summary.iscsi_extents_skipped += 1
            continue

        payload = _iscsi_extent_payload(ext)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create extent %s → %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(name)),
                     ext.get("disk") or ext.get("path"))
            summary.iscsi_extents_created += 1
            id_map[ext["id"]] = ext["id"]  # placeholder — enables downstream dry-run remapping
            if ext.get("type") == "DISK" and ext.get("disk"):
                summary.zvols_to_check.append(ext["disk"].removeprefix("zvol/"))
            continue

        try:
            r = await client.call("iscsi.extent.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            id_map[ext["id"]] = r["id"]
            summary.iscsi_extents_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.iscsi_extents_failed += 1
            summary.errors.append(f"iSCSI extent {name!r}: {exc}")

async def _migrate_iscsi_initiators(
    client: TrueNASClient,
    initiators: list[dict],
    dry_run: bool,
    summary: Summary,
    id_map: dict[int, int],
) -> None:
    log.info("Querying existing iSCSI initiator groups on destination …")
    try:
        existing = await client.call("iscsi.initiator.query") or []
    except RuntimeError as exc:
        msg = f"Could not query iSCSI initiators: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_by_comment = {e["comment"].lower(): e for e in existing if e.get("comment")}
    log.info(" Destination has %d existing initiator group(s).", len(existing))

    for init in initiators:
        comment = init.get("comment", "")
        log.info("%s iSCSI initiator group %s", _bold("──"), _bold_cyan(repr(comment)))

        if comment and comment.lower() in existing_by_comment:
            log.info(" %s – comment already exists on destination.", _yellow("SKIP"))
            id_map[init["id"]] = existing_by_comment[comment.lower()]["id"]
            summary.iscsi_initiators_skipped += 1
            continue

        payload = _iscsi_initiator_payload(init)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create initiator group %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(comment)))
            summary.iscsi_initiators_created += 1
            id_map[init["id"]] = init["id"]  # placeholder — enables downstream dry-run remapping
            continue

        try:
            r = await client.call("iscsi.initiator.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            id_map[init["id"]] = r["id"]
            summary.iscsi_initiators_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.iscsi_initiators_failed += 1
            summary.errors.append(f"iSCSI initiator {comment!r}: {exc}")

async def _migrate_iscsi_portals(
    client: TrueNASClient,
    portals: list[dict],
    dry_run: bool,
    summary: Summary,
    id_map: dict[int, int],
) -> None:
    log.info("Querying existing iSCSI portals on destination …")
    try:
        existing = await client.call("iscsi.portal.query") or []
    except RuntimeError as exc:
        msg = f"Could not query iSCSI portals: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    def _ip_set(p: dict) -> frozenset:
        return frozenset(l["ip"] for l in p.get("listen", []))

    existing_ip_sets = [(_ip_set(p), p["id"]) for p in existing]
    log.info(" Destination has %d existing portal(s).", len(existing))

    for portal in portals:
        comment = portal.get("comment", "")
        ips = ", ".join(l["ip"] for l in portal.get("listen", []))
        log.info("%s iSCSI portal %s [%s]", _bold("──"), _bold_cyan(repr(comment)), ips)

        my_ips = _ip_set(portal)
        match = next((eid for eips, eid in existing_ip_sets if eips == my_ips), None)
        if match is not None:
            log.info(" %s – IP set already exists on destination.", _yellow("SKIP"))
            id_map[portal["id"]] = match
            summary.iscsi_portals_skipped += 1
            continue

        payload = _iscsi_portal_payload(portal)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create portal %s → %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(comment)), ips)
            summary.iscsi_portals_created += 1
            id_map[portal["id"]] = portal["id"]  # placeholder — enables downstream dry-run remapping
            continue

        try:
            r = await client.call("iscsi.portal.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            id_map[portal["id"]] = r["id"]
            summary.iscsi_portals_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.iscsi_portals_failed += 1
            summary.errors.append(f"iSCSI portal {comment!r}: {exc}")

async def _migrate_iscsi_targets(
    client: TrueNASClient,
    targets: list[dict],
    dry_run: bool,
    summary: Summary,
    id_map: dict[int, int],
    portal_id_map: dict[int, int],
    initiator_id_map: dict[int, int],
) -> None:
    log.info("Querying existing iSCSI targets on destination …")
    try:
        existing = await client.call("iscsi.target.query") or []
    except RuntimeError as exc:
        msg = f"Could not query iSCSI targets: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_by_name = {t["name"].lower(): t for t in existing}
    log.info(" Destination has %d existing target(s).", len(existing_by_name))

    for target in targets:
        name = target.get("name", "<unnamed>")
        log.info("%s iSCSI target %s", _bold("──"), _bold_cyan(repr(name)))

        if name.lower() in existing_by_name:
            log.info(" %s – already exists on destination.", _yellow("SKIP"))
            id_map[target["id"]] = existing_by_name[name.lower()]["id"]
            summary.iscsi_targets_skipped += 1
            continue

        # Filter out groups whose portal or initiator could not be mapped (e.g. portal
        # creation failed). Warn per dropped group but still create the target — a
        # target without every portal group is valid and preferable to no target at all.
        valid_groups = []
        for g in target.get("groups", []):
            unmapped = []
            if g.get("portal") not in portal_id_map:
                unmapped.append(f"portal id={g['portal']}")
            if g.get("initiator") not in initiator_id_map:
                unmapped.append(f"initiator id={g['initiator']}")
            if unmapped:
                log.warning(" %s dropping group with unmapped %s",
                            _yellow("WARN"), ", ".join(unmapped))
            else:
                valid_groups.append(g)

        payload = _iscsi_target_payload({**target, "groups": valid_groups},
                                        portal_id_map, initiator_id_map)
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would create target %s",
                     _cyan("[DRY RUN]"), _bold_cyan(repr(name)))
            summary.iscsi_targets_created += 1
            id_map[target["id"]] = target["id"]  # placeholder — enables downstream dry-run remapping
            continue

        try:
            r = await client.call("iscsi.target.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            id_map[target["id"]] = r["id"]
            summary.iscsi_targets_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.iscsi_targets_failed += 1
            summary.errors.append(f"iSCSI target {name!r}: {exc}")

async def _migrate_iscsi_targetextents(
    client: TrueNASClient,
    targetextents: list[dict],
    dry_run: bool,
    summary: Summary,
    target_id_map: dict[int, int],
    extent_id_map: dict[int, int],
) -> None:
    log.info("Querying existing iSCSI target-extent associations on destination …")
    try:
        existing = await client.call("iscsi.targetextent.query") or []
    except RuntimeError as exc:
        msg = f"Could not query iSCSI target-extents: {exc}"
        log.error(msg)
        summary.errors.append(msg)
        return

    existing_keys = {(te["target"], te["lunid"]) for te in existing}
    log.info(" Destination has %d existing association(s).", len(existing))

    for te in targetextents:
        src_tid = te["target"]
        src_eid = te["extent"]
        lunid = te["lunid"]
        dest_tid = target_id_map.get(src_tid)
        dest_eid = extent_id_map.get(src_eid)

        if dest_tid is None or dest_eid is None:
            missing = []
            if dest_tid is None:
                missing.append(f"target id={src_tid}")
            if dest_eid is None:
                missing.append(f"extent id={src_eid}")
            msg = f"iSCSI target-extent (lun {lunid}): cannot remap {', '.join(missing)}"
            log.error(" %s", msg)
            summary.iscsi_targetextents_failed += 1
            summary.errors.append(msg)
            continue

        log.info("%s iSCSI target↔extent target=%s lun=%s extent=%s",
                 _bold("──"), dest_tid, lunid, dest_eid)

        if (dest_tid, lunid) in existing_keys:
            log.info(" %s – target+LUN already assigned on destination.", _yellow("SKIP"))
            summary.iscsi_targetextents_skipped += 1
            continue

        payload = {"target": dest_tid, "lunid": lunid, "extent": dest_eid}
        log.debug(" payload: %s", json.dumps(payload))

        if dry_run:
            log.info(" %s would associate target=%s lun=%s extent=%s",
                     _cyan("[DRY RUN]"), dest_tid, lunid, dest_eid)
            summary.iscsi_targetextents_created += 1
            continue

        try:
            r = await client.call("iscsi.targetextent.create", [payload])
            log.info(" %s id=%s", _bold_green("CREATED"), r.get("id"))
            summary.iscsi_targetextents_created += 1
        except RuntimeError as exc:
            log.error(" %s: %s", _bold_red("FAILED"), exc)
            summary.iscsi_targetextents_failed += 1
            summary.errors.append(
                f"iSCSI target-extent (target={dest_tid}, lun={lunid}): {exc}")

# ─────────────────────────────────────────────────────────────────────────────
# iSCSI pre-migration utilities
# ─────────────────────────────────────────────────────────────────────────────


async def query_existing_iscsi(client: TrueNASClient) -> dict:
    """
    Query all iSCSI objects from the destination.
    Returns a dict with keys: extents, initiators, portals, targets, targetextents.
    Each value is a list of objects (may be empty).
    """
    result = {}
    for key, method in [
        ("extents", "iscsi.extent.query"),
        ("initiators", "iscsi.initiator.query"),
        ("portals", "iscsi.portal.query"),
        ("targets", "iscsi.target.query"),
        ("targetextents", "iscsi.targetextent.query"),
    ]:
        try:
            result[key] = await client.call(method) or []
        except RuntimeError:
            result[key] = []
    return result


async def clear_iscsi_config(client: TrueNASClient) -> None:
    """
    Delete all iSCSI configuration from the destination in safe dependency order:
    target-extents → targets → portals → initiators → extents.
    """
    for method_query, method_delete, label in [
        ("iscsi.targetextent.query", "iscsi.targetextent.delete", "target-extent"),
        ("iscsi.target.query", "iscsi.target.delete", "target"),
        ("iscsi.portal.query", "iscsi.portal.delete", "portal"),
        ("iscsi.initiator.query", "iscsi.initiator.delete", "initiator"),
        ("iscsi.extent.query", "iscsi.extent.delete", "extent"),
    ]:
        try:
            objects = await client.call(method_query) or []
        except RuntimeError as exc:
            log.warning(" Could not query iSCSI %ss: %s", label, exc)
            continue
        for obj in objects:
            try:
                await client.call(method_delete, [obj["id"]])
                log.info(" Deleted iSCSI %s id=%s", label, obj["id"])
            except RuntimeError as exc:
                log.warning(" Failed to delete iSCSI %s id=%s: %s", label, obj["id"], exc)

# ─────────────────────────────────────────────────────────────────────────────
# Public iSCSI entry point
# ─────────────────────────────────────────────────────────────────────────────


async def migrate_iscsi(
    client: TrueNASClient,
    iscsi: dict,
    dry_run: bool,
    summary: Summary,
) -> None:
    if not iscsi:
        log.info("No iSCSI configuration found in archive.")
        return

    portals = iscsi.get("portals", [])
    initiators = iscsi.get("initiators", [])
    targets = iscsi.get("targets", [])
    extents = iscsi.get("extents", [])
    targetextents = iscsi.get("targetextents", [])

    summary.iscsi_extents_found = len(extents)
    summary.iscsi_initiators_found = len(initiators)
    summary.iscsi_portals_found = len(portals)
    summary.iscsi_targets_found = len(targets)
    summary.iscsi_targetextents_found = len(targetextents)

    gc = iscsi.get("global_config", {})
    if gc.get("basename"):
        log.info(" Source iSCSI basename: %s (destination keeps its own)", gc["basename"])

    if not any([portals, initiators, targets, extents, targetextents]):
        log.info("iSCSI configuration is empty – nothing to migrate.")
        return

    extent_id_map: dict[int, int] = {}
    initiator_id_map: dict[int, int] = {}
    portal_id_map: dict[int, int] = {}
    target_id_map: dict[int, int] = {}

    # Dependency order: extents and initiators first (no deps), then portals,
    # then targets (need portal + initiator maps), then target-extent links.
    await _migrate_iscsi_extents(client, extents, dry_run, summary, extent_id_map)
    await _migrate_iscsi_initiators(client, initiators, dry_run, summary, initiator_id_map)
    await _migrate_iscsi_portals(client, portals, dry_run, summary, portal_id_map)
    await _migrate_iscsi_targets(
        client, targets, dry_run, summary, target_id_map, portal_id_map, initiator_id_map)
    await _migrate_iscsi_targetextents(
        client, targetextents, dry_run, summary, target_id_map, extent_id_map)
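The payload builders and sub-routines above share one pattern: strip server-assigned read-only keys from the source row, then rewrite cross-references through the id maps filled in as objects are created on the destination. A standalone sketch of that pattern (names here are illustrative, not part of the tool):

```python
# Illustrative sketch of the strip-readonly + id-remap pattern used above.
READONLY = frozenset({"id", "rel_tgt_id", "iscsi_parameters"})


def remap_target(target: dict, portal_map: dict, initiator_map: dict) -> dict:
    # Drop keys the destination API assigns itself.
    payload = {k: v for k, v in target.items() if k not in READONLY}
    # Rewrite source-side foreign keys to destination ids, falling back to
    # the source id when no mapping exists (as dict.get with a default).
    payload["groups"] = [
        {**g,
         "portal": portal_map.get(g["portal"], g["portal"]),
         "initiator": initiator_map.get(g.get("initiator"), g.get("initiator"))}
        for g in target.get("groups", [])
    ]
    return payload


src = {"id": 7, "name": "tgt0", "rel_tgt_id": 3,
       "groups": [{"portal": 1, "initiator": 2, "auth": None}]}
print(remap_target(src, portal_map={1: 11}, initiator_map={2: 22}))
# → {'name': 'tgt0', 'groups': [{'portal': 11, 'initiator': 22, 'auth': None}]}
```

The fallback-to-source-id branch is what makes dry runs work: the placeholder entries (`id_map[x["id"]] = x["id"]`) let downstream stages resolve references without any object having been created.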
166
truenas_migrate/summary.py
Normal file
@@ -0,0 +1,166 @@
"""Migration summary dataclass and report renderer."""
from __future__ import annotations

from dataclasses import dataclass, field

from .colors import (
    _dim, _bold, _red, _yellow, _cyan,
    _bold_red, _bold_green, _bold_yellow, _vis_len,
)


@dataclass
class Summary:
    smb_found: int = 0
    smb_created: int = 0
    smb_skipped: int = 0
    smb_failed: int = 0

    nfs_found: int = 0
    nfs_created: int = 0
    nfs_skipped: int = 0
    nfs_failed: int = 0

    iscsi_extents_found: int = 0
    iscsi_extents_created: int = 0
    iscsi_extents_skipped: int = 0
    iscsi_extents_failed: int = 0

    iscsi_initiators_found: int = 0
    iscsi_initiators_created: int = 0
    iscsi_initiators_skipped: int = 0
    iscsi_initiators_failed: int = 0

    iscsi_portals_found: int = 0
    iscsi_portals_created: int = 0
    iscsi_portals_skipped: int = 0
    iscsi_portals_failed: int = 0

    iscsi_targets_found: int = 0
    iscsi_targets_created: int = 0
    iscsi_targets_skipped: int = 0
    iscsi_targets_failed: int = 0

    iscsi_targetextents_found: int = 0
    iscsi_targetextents_created: int = 0
    iscsi_targetextents_skipped: int = 0
    iscsi_targetextents_failed: int = 0

    errors: list[str] = field(default_factory=list)

    # Populated during dry-run dataset safety checks
    paths_to_create: list[str] = field(default_factory=list)
    missing_datasets: list[str] = field(default_factory=list)

    # Populated during iSCSI dry-run zvol safety checks
    zvols_to_check: list[str] = field(default_factory=list)
    missing_zvols: list[str] = field(default_factory=list)

    @property
    def _has_iscsi(self) -> bool:
        return (self.iscsi_extents_found + self.iscsi_initiators_found +
                self.iscsi_portals_found + self.iscsi_targets_found +
                self.iscsi_targetextents_found) > 0

    def report(self) -> str:
        w = 60

        def _stat(label: str, n: int, color_fn) -> str:
            s = f"{label}={n}"
            return color_fn(s) if n > 0 else _dim(s)

        def _iscsi_val(found, created, skipped, failed) -> str:
            return (
                f"{_dim('found=' + str(found))} "
                f"{_stat('created', created, _bold_green)} "
                f"{_stat('skipped', skipped, _yellow)} "
                f"{_stat('failed', failed, _bold_red)}"
            )

        smb_val = (
            f"{_dim('found=' + str(self.smb_found))} "
            f"{_stat('created', self.smb_created, _bold_green)} "
            f"{_stat('skipped', self.smb_skipped, _yellow)} "
            f"{_stat('failed', self.smb_failed, _bold_red)}"
        )
        nfs_val = (
            f"{_dim('found=' + str(self.nfs_found))} "
            f"{_stat('created', self.nfs_created, _bold_green)} "
            f"{_stat('skipped', self.nfs_skipped, _yellow)} "
            f"{_stat('failed', self.nfs_failed, _bold_red)}"
        )

        hr = _cyan("─" * w)
        tl = _cyan("┌"); tr = _cyan("┐")
        ml = _cyan("├"); mr = _cyan("┤")
        bl = _cyan("└"); br = _cyan("┘")
        side = _cyan("│")

        title_text = "MIGRATION SUMMARY"
        lpad = (w - len(title_text)) // 2
        rpad = w - len(title_text) - lpad
        title_row = f"{side}{' ' * lpad}{_bold(title_text)}{' ' * rpad}{side}"

        def row(label: str, val: str) -> str:
            right = max(0, w - 2 - len(label) - _vis_len(val))
            return f"{side} {_dim(label)}{val}{' ' * right} {side}"

        lines = [
            "",
            f"{tl}{hr}{tr}",
            title_row,
            f"{ml}{hr}{mr}",
            row("SMB shares      : ", smb_val),
            row("NFS shares      : ", nfs_val),
        ]

        if self._has_iscsi:
            lines.append(f"{ml}{hr}{mr}")
            lines.append(row("iSCSI extents   : ", _iscsi_val(
                self.iscsi_extents_found, self.iscsi_extents_created,
                self.iscsi_extents_skipped, self.iscsi_extents_failed)))
            lines.append(row("iSCSI initiators: ", _iscsi_val(
                self.iscsi_initiators_found, self.iscsi_initiators_created,
                self.iscsi_initiators_skipped, self.iscsi_initiators_failed)))
            lines.append(row("iSCSI portals   : ", _iscsi_val(
                self.iscsi_portals_found, self.iscsi_portals_created,
                self.iscsi_portals_skipped, self.iscsi_portals_failed)))
            lines.append(row("iSCSI targets   : ", _iscsi_val(
                self.iscsi_targets_found, self.iscsi_targets_created,
                self.iscsi_targets_skipped, self.iscsi_targets_failed)))
            lines.append(row("iSCSI tgt↔ext   : ", _iscsi_val(
                self.iscsi_targetextents_found, self.iscsi_targetextents_created,
                self.iscsi_targetextents_skipped, self.iscsi_targetextents_failed)))

        lines.append(f"{bl}{hr}{br}")

        if self.errors:
            lines.append(f"\n {_bold_red(str(len(self.errors)) + ' error(s):')}")
            for e in self.errors:
                lines.append(f" {_red('•')} {e}")

        if self.missing_datasets:
            lines.append(
                f"\n {_bold_yellow('WARNING:')} "
                f"{len(self.missing_datasets)} share path(s) have no "
                "matching dataset on the destination:"
            )
            for p in self.missing_datasets:
                lines.append(f" {_yellow('•')} {p}")
            lines.append(
                " These paths must exist before shares can be created.\n"
                " Use interactive mode or answer 'y' at the dataset prompt to create them."
            )
        if self.missing_zvols:
            lines.append(
                f"\n {_bold_yellow('WARNING:')} "
                f"{len(self.missing_zvols)} zvol(s) do not exist on the destination:"
            )
            for z in self.missing_zvols:
                lines.append(f" {_yellow('•')} {z}")
            lines.append(
                " These zvols must exist before iSCSI extents can be created.\n"
                " Use interactive mode to be prompted for size and auto-create them."
            )
        lines.append("")
        return "\n".join(lines)
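The `Summary` dataclass is a plain counter bag that each migration routine increments, with rendering deferred to `report()`. The same counter-then-report flow, reduced to a dependency-free sketch (the real class additionally pulls color helpers from `.colors`; names here are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class MiniSummary:
    created: int = 0
    skipped: int = 0
    failed: int = 0
    errors: list[str] = field(default_factory=list)

    def report(self) -> str:
        # One stat line, then a bullet per accumulated error.
        head = f"created={self.created} skipped={self.skipped} failed={self.failed}"
        return "\n".join([head, *(f" • {e}" for e in self.errors)])


s = MiniSummary()
s.created += 2
s.failed += 1
s.errors.append("iSCSI extent 'vm0': duplicate serial")
print(s.report())
```

Keeping the counters dumb and the formatting in one method is what lets the tool run every stage to completion and still produce a single coherent report at the end.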