RAID Data Recovery

No Fix - No Fee!

With 25 years of experience in data recovery, our highly trained engineers recover valuable data from RAID servers, including arrays that might otherwise be considered lost, and guide you through every stage of the recovery process.
RAID Recovery

Software Fault From £495

2-4 Days

Mechanical Fault From £895

2-4 Days

Critical Service From £995

1-2 Days

Need help recovering your data?

Call us on 0151 3050365 or use the form below to make an enquiry.
Chat with us
Monday-Friday: 9am-6pm

Liverpool Data Recovery — RAID 0 / 1 / 5 / 10 Recovery Specialists

25+ years of successful recoveries for home users, SMBs, enterprises, and public sector.

We recover software & hardware RAID across Windows, macOS, Linux, and virtualised estates (VMware/Hyper-V/Proxmox). We work image-first (never on your originals) using PC-3000, DeepSpar, Atola, controller-aware metadata parsers, and our own parity/stripe reconstruction tools.

Do not re-initialise, rebuild, or run CHKDSK/fsck on a degraded array. Power down and contact us—those actions can permanently overwrite parity or metadata.


Platforms & Vendors We Handle

Controllers / Host RAID: Broadcom/LSI MegaRAID, Dell PERC, HPE Smart Array, Adaptec by Microchip, Areca, HighPoint, Intel RST/Matrix, Promise, mdadm, Windows Storage Spaces, ZFS/btrfs.

NAS / SAN: Synology, QNAP, Netgear ReadyNAS, Buffalo (TeraStation/LinkStation), WD My Cloud, TerraMaster, Asustor, Thecus, LaCie, TrueNAS/ixSystems, Lenovo-Iomega.

Filesystems / LVMs: NTFS, exFAT, ReFS, EXT3/4, XFS, btrfs, ZFS, APFS/HFS+, VMFS/VMDK, VHDX/CSV, LVM2.


NAS brands & models we frequently see in UK recoveries (15)

  • Synology: DS920+/DS923+/DS1522+/RS1221+

  • QNAP: TS-453D/TS-464/TVS-872XT/TS-873A

  • Western Digital (WD My Cloud): EX2 Ultra/EX4100/PR4100

  • Netgear ReadyNAS: RN424/RN528X

  • Buffalo: TeraStation TS3410, LinkStation LS220D

  • TerraMaster: F4-423/F5-422

  • Asustor: AS5304T/AS6604T

  • Thecus: N4810/N5810

  • LaCie: 2big/5big (mdadm/XFS/btrfs generations)

  • TrueNAS / FreeNAS: Mini X/X+ (ZFS)

  • Lenovo-Iomega: ix4-300d/px4-300r

  • Zyxel: NAS326/520

  • Promise: VTrak (RAID) / Pegasus (DAS)

  • Seagate: Business/BlackArmor NAS

  • Drobo (legacy): 5N/5N2 (BeyondRAID)

These reflect common UK lab intake, not a sales ranking.


Rack/Server platforms & typical models (15)

  • Dell EMC PowerEdge: R730/R740/R750xd

  • HPE ProLiant: DL380 Gen10/Gen11, ML350

  • Lenovo ThinkSystem: SR630/SR650

  • IBM xSeries/ServeRAID: x3650 (legacy)

  • Supermicro: SC846/SC847 2U/4U platforms

  • Cisco UCS: C220/C240

  • Fujitsu PRIMERGY: RX2540

  • NetApp: FAS iSCSI/NFS LUN exports

  • Synology RackStation: RS1221RP+/RS3618xs

  • QNAP Rack: TS-x53U/TS-x83XU

  • Promise VTrak arrays

  • Areca: ARC-1883/1886

  • Adaptec/Microchip: ASR series

  • Intel: RS3xxx HBAs/RAID

  • D-Link: ShareCenter Pro (legacy)


How We Recover RAID Safely (Step-by-Step)

  1. Triage & Preservation
    Label drives; capture controller NVRAM/foreign config; disable any auto-rebuild; read-only clone every member (head-mapped, per-zone imaging for failing HDDs).

  2. Metadata Acquisition
    Extract mdadm superblocks, DDF headers, MegaRAID/PERC/Smart Array configs, DSM/QTS layouts (btrfs/ext), ZFS labels, Storage Spaces/Pool metadata.

  3. Virtual Reassembly
    Reconstruct array geometry (level, member order, offsets, chunk/stripe size, parity rotation/delay). We never write to originals—assembly occurs on the clones/images.

  4. Parity & Stripe Repair
    Resolve write-hole/half-stripe states, reconcile stale members, correct sector-size mismatches (512e/4Kn), and rebuild consistent volume images.

  5. Filesystem / LVM Repair
    Repair NTFS ($MFT/$LogFile), ReFS, EXT/XFS/btrfs, ZFS (MOS/txg), LVM/Storage Spaces. Mount read-only; extract target datasets.

  6. Verification & Delivery
    Hash manifests, parity/FS consistency checks, test-open critical files/DBs/VMs; secure hand-off with engineering report.
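The parity-repair step above rests on a simple identity: in single-parity RAID, any one missing block in a stripe is the XOR of all the surviving blocks (data plus parity). A toy sketch in Python with synthetic data (an illustration of the principle, not our production imaging tools):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild_missing(surviving_blocks):
    """In a single-parity stripe, the one missing block (data or
    parity) is simply the XOR of every surviving block."""
    return xor_blocks(surviving_blocks)

# Example: three data blocks plus parity; drop one data block, recover it.
d = [bytes([i] * 4) for i in (1, 2, 3)]
p = xor_blocks(d)                                # parity = d0 ^ d1 ^ d2
recovered = rebuild_missing([d[0], d[2], p])     # d1 is "missing"
assert recovered == d[1]
```

This is why a second failure (or an unreadable sector hit during rebuild) is fatal to an online rebuild but often survivable offline: on clones, each unreadable region can be filled from parity independently.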


RAID Levels — Typical Failures & Our Approach

  • RAID 0 (striped, no parity)
    One failed disk = incomplete stripes. We stabilise and image all members, brute-force member order and stripe size using signature-based (parity-less) heuristics, then carve intact extents. Partial recovery is common.

  • RAID 1 (mirror)
    Divergent mirrors after a dirty shutdown are frequent. We image each member, pick the newest consistent bitmap/FS journal, and export from the best clone (or merge at file level).

  • RAID 5 (single parity)
    UREs during rebuild and stale-member introductions are typical. We image all members, exclude stale disks, reconstruct stripes/parity, then repair the filesystem.

  • RAID 10 (striped mirrors)
    Simultaneous failures from the same mirror set are critical. We identify healthy mirror partners, rebuild the correct stripe map, and export from the reconstructed image.
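For RAID 5, reconstructing the "geometry" concretely means knowing which member holds parity in each stripe row. As an illustration, here is the chunk-to-disk mapping for the left-symmetric layout (the Linux mdadm default); the rotation variants mentioned elsewhere on this page differ only in this mapping function:

```python
def raid5_left_symmetric(logical_chunk, n_disks):
    """Map a logical data chunk number to (disk_index, row) under the
    RAID 5 left-symmetric layout: parity rotates from the last disk
    towards the first, and data continues on the disk after parity,
    wrapping around."""
    data_per_row = n_disks - 1
    row = logical_chunk // data_per_row
    k = logical_chunk % data_per_row
    parity_disk = (n_disks - 1) - (row % n_disks)
    disk = (parity_disk + 1 + k) % n_disks
    return disk, row

# 4-disk example: row 0 holds parity on disk 3, data on disks 0,1,2;
# row 1 holds parity on disk 2, data wrapping through disks 3,0,1.
assert [raid5_left_symmetric(c, 4)[0] for c in range(3)] == [0, 1, 2]
assert [raid5_left_symmetric(c, 4)[0] for c in range(3, 6)] == [3, 0, 1]
```

Getting this mapping wrong (or the member order, offset, or chunk size feeding it) produces a volume that mounts as garbage, which is why we verify geometry against filesystem structures before any extraction.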


50 RAID Errors We Recover — with Technical Resolution

Disk & Media (Physical)

  1. Multiple members with bad sectors → Per-disk adaptive imaging; gap fill guided by parity and FS anchors.

  2. Head failures on one/more disks → HSA swaps; per-head imaging; assemble virtually.

  3. URE during rebuild → Clone failing disk first; offline parity correction on images.

  4. SMR latency storms → Long sequential imaging; zone-aware re-stripe.

  5. 4Kn/512e mixing → Normalise sector size in the virtual map; recalc offsets.

  6. HPA/DCO truncation → Remove on clones; rebuild array with full LBA.

  7. Shock damage (stiction/media) → Mechanical remediation; short-window reads; parity interpolation.

  8. Thermal throttle resets → Duty-cycle imaging; reduced queue depth; persist error maps.

  9. Backplane link flaps → Direct-attach to HBA; re-image.

  10. Power-surge casualties → Replace TVS/regulators; preamp test; clone and repair FS.

Controller / Cache / NVRAM

  11. Dead controller (PERC/Smart Array) → Dump NVRAM; import foreign config on a bench controller to decode geometry; assemble virtually.

  12. Dirty cache shutdown → Reconstruct write journal; resolve half-stripes by majority logic + FS logs.

  13. Firmware mismatch/downgrade → Compare config epochs; select consistent epoch; rebuild.

  14. Foreign config conflicts → Snapshot all; choose quorum; disable init; virtual assemble.

  15. Battery/BBU failure mid-IO → Identify torn stripes; parity repair with syndrome checks.

  16. Cache policy change (WB→WT) → Map write ordering; reconcile with journal.

  17. Controller writes metadata over superblocks → Carve residual headers; reconstruct map from surviving members.

  18. Rebuild to wrong drive (same size) → Serial/GUID audit; revert to valid member.

  19. Rebuild aborted part-way through → Split by epoch; choose consistent generation stripe-wise.

  20. Controller migration without cache → Import as foreign on bench; no init; rebuild virtually.

Geometry / Order / Offsets

  21. Unknown disk order → Brute search with XOR parity scoring and entropy tests; confirm via FS anchors.

  22. Unknown stripe size → Probe 16–1024 KB; score by parity satisfaction + MFT alignment.

  23. Parity rotation unknown → Test left-sym/left-asym/right-*; confirm on directory structures.

  24. Delayed parity / data-start offset → Detect skip-blocks; rebase LBA0; re-stripe.

  25. Wrong member count after hot-spare promotion → Epoch-split map at cut-over; merge results.

  26. Mixed block sizes on mdadm sets → Normalise chunk size in virtual map.

  27. Off-by-one offset from hot-plug → Shift detection by pattern correlation; correct globally.

  28. Sector re-map invisibility (drive firmware) → Image raw; ignore drive-level remap in parity calc.
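To make the order/stripe-size search concrete, here is a toy brute-force over a synthetic 3-disk RAID 0 whose payload is a wrapping byte counter, scored by continuity across chunk boundaries. Real volumes carry no such convenient pattern, so production scoring substitutes filesystem anchors ($MFT record sequence, journal sequence numbers) and entropy statistics, but the search loop has the same shape:

```python
from itertools import permutations

def continuity_score(assembled, chunk):
    """Count chunk boundaries where the byte sequence continues.
    Here 'continues' means a wrapping mod-251 counter; real tooling
    scores filesystem anchors across boundaries instead."""
    return sum(
        assembled[off] == (assembled[off - 1] + 1) % 251
        for off in range(chunk, len(assembled), chunk)
    )

def assemble_raid0(images, order, chunk):
    """Round-robin concatenation of chunks in a candidate member order."""
    out = bytearray()
    rows = min(map(len, images)) // chunk
    for r in range(rows):
        for i in order:
            out += images[i][r * chunk:(r + 1) * chunk]
    return bytes(out)

def brute_geometry(images, chunk_sizes=(16, 32, 64)):
    """Try every member order and candidate chunk size; keep the best."""
    best = (-1, None, None)
    for chunk in chunk_sizes:
        for order in permutations(range(len(images))):
            s = continuity_score(assemble_raid0(images, order, chunk), chunk)
            if s > best[0]:
                best = (s, order, chunk)
    return best[1], best[2]

# Synthetic 3-disk RAID 0, chunk 32, counter-pattern payload.
src = bytes(i % 251 for i in range(3 * 32 * 8))
disks = [bytearray() for _ in range(3)]
for n, off in enumerate(range(0, len(src), 32)):
    disks[n % 3] += src[off:off + 32]
order, chunk = brute_geometry([bytes(d) for d in disks])
assert order == (0, 1, 2) and chunk == 32
```

The search space is small (member permutations × a handful of chunk sizes), so exhaustive scoring on images is cheap compared with the imaging itself.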

Filesystem / Volume Layer

  29. NTFS $MFT/$MFTMirr corruption → Rebuild from mirror + $LogFile on the array image.

  30. ReFS metadata rot → Salvage intact objects/stream maps; export valid trees.

  31. EXT4 journal needs replay → fsck on the image only; restore directories/inodes.

  32. XFS log damaged → Zero corrupt log on copy; xfs_repair; rebuild AG headers.

  33. btrfs (Synology/QNAP) → Reconstruct chunk tree; restore subvolumes/snapshots.

  34. ZFS pool degraded → Import RO; heal by txg; if labels lost, recreate vdevs by GUID order.

  35. LVM PV/VG metadata lost → Scan backups on clones; rebuild VG; map LVs; repair FS.

  36. Storage Spaces parity → Read metadata; reconstruct column/stripe; export volume.
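Several of the fixes above (and the "MFT alignment" scoring mentioned earlier) use the NTFS boot sector as an anchor: its BIOS Parameter Block locates $MFT directly. A minimal parser sketch, with field offsets per the published NTFS on-disk layout and a synthetic boot sector for illustration:

```python
import struct

def ntfs_mft_offset(boot: bytes) -> int:
    """Parse an NTFS boot sector (VBR) and return the byte offset of
    $MFT within the volume. Fields: OEM ID "NTFS    " at offset 3,
    bytes/sector at 0x0B, sectors/cluster at 0x0D, $MFT cluster at 0x30."""
    if boot[3:11] != b"NTFS    ":
        raise ValueError("not an NTFS boot sector")
    bytes_per_sector, = struct.unpack_from("<H", boot, 0x0B)
    sectors_per_cluster = boot[0x0D]
    mft_cluster, = struct.unpack_from("<Q", boot, 0x30)
    return mft_cluster * sectors_per_cluster * bytes_per_sector

# Synthetic VBR: 512 bytes/sector, 8 sectors/cluster, $MFT at cluster 786432.
vbr = bytearray(512)
vbr[3:11] = b"NTFS    "
struct.pack_into("<H", vbr, 0x0B, 512)
vbr[0x0D] = 8
struct.pack_into("<Q", vbr, 0x30, 786432)
assert ntfs_mft_offset(bytes(vbr)) == 786432 * 8 * 512
```

Finding a valid VBR (or its backup in the last sector of the volume) on a candidate assembly is strong evidence the geometry is right before any repair is attempted.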

NAS-Specific Issues

  37. Synology SHR (mixed sizes) → Identify md layers; assemble SHR; mount btrfs/ext4 volumes.

  38. QNAP QuTS hero (ZFS) → Import RO; map datasets with zdb; export files.

  39. ReadyNAS X-RAID → Decode layering; assemble md/btrfs; recover.

  40. Buffalo TeraStation (XFS/btrfs) → Extract uBoot/env + md sets; mount volumes.

Parity Math / Write-Hole

  41. Half-stripe torn writes → Majority logic + FS journal reconciliation per stripe.

  42. RAID6 P+Q resolution → Reed-Solomon syndrome analysis to isolate the two bad members.

  43. Silent corruption on a “good” disk → CRC/diff maps; exclude poisoned member from votes.

  44. Stale member present → Timestamp/sequence check; drop stale; parity repair.

  45. XOR parity mismatch w/out clear culprit → Sliding vote windows; choose consistent parity across file ranges.
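The stale-member and silent-corruption cases above exploit the fact that on single-parity RAID, every byte XORed across all members must be zero. Scanning images block by block for non-zero XOR localises exactly where a stale or corrupt member diverges. A minimal sketch on synthetic members:

```python
def xor_fold(images, off, block):
    """Byte-wise XOR of all members over [off, off + block)."""
    acc = bytearray(block)
    for img in images:
        for i, b in enumerate(img[off:off + block]):
            acc[i] ^= b
    return bytes(acc)

def stale_regions(images, block=64):
    """Offsets of block-aligned regions whose members do not XOR to
    zero. Runs of flagged blocks localise a stale or silently corrupt
    member without knowing in advance which disk it is."""
    n = min(map(len, images))
    return [off for off in range(0, n - block + 1, block)
            if any(xor_fold(images, off, block))]

# Two data members plus parity; make one 64-byte block of d1 "stale".
d0 = bytes(range(256))
d1 = bytes((i * 7) % 256 for i in range(256))
p  = bytes(a ^ b for a, b in zip(d0, d1))
stale_d1 = bytearray(d1)
stale_d1[64:128] = b"\x00" * 64          # simulate a stale region
assert stale_regions([d0, bytes(stale_d1), p]) == [64]
```

Once the divergent region is mapped, the sliding-vote step chooses which member to drop from the parity calculation for that range.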

Human / Admin Actions

  46. Accidental re-initialise / quick-init → Carve prior superblocks; rebuild original geometry.

  47. Disk order shuffled after transport → Brute order + parity/FS anchor scoring.

  48. Swap of 512e/4Kn replacement drives → Logical sector emulation in virtual map.

  49. Forced filesystem repair on degraded array → Reverse damage using journal/backup metadata (best-effort).

  50. Rebuild triggered with bad spare → Stop; image all members; rebuild virtually from best set.


20 Common Virtual System Failures on RAID — and Our Approach

  1. Broken VMDK snapshot chain → Repair descriptors; consolidate child→parent; export flat.

  2. Corrupt VMFS metadata → vmkfstools on the array image; rebuild allocation; recover VMs.

  3. VHDX/AVHDX chain mismatch → Rebind by parent GUIDs; merge differencing disks.

  4. CBT mismatch → Ignore CBT; full-scan export; reconstruct from base disks.

  5. Thin-provisioned over-commit → Rehydrate sparse extents; zero gaps safely; repair guest FS.

  6. RDM mis-mapping → Map to correct LUN image; export guest volume.

  7. Proxmox/KVM LVM-thin damage → Activate metadata backups; salvage LV snapshots.

  8. qcow2 header corruption → Convert via qemu-img salvage flags on clone; mount guest FS.

  9. vSAN object loss (backed by RAID) → Reassemble surviving components; export VMDKs.

  10. NFS datastore journal loss → Repair underlying FS; re-index VMs; copy out.

  11. Guest NTFS dirty after host crash → Journal replay on image; restore data.

  12. SQL/Exchange in VM → Use transaction/redo logs to roll to consistency before export.

  13. AD Domain Controller USN rollback → Export NTDS.dit/SYSVOL; coordinate authoritative restore (data preserved).

  14. APFS inside VM → Select intact checkpoint; rebuild container; export volume.

  15. Linux LVM inside VM → vgscan on image; map LVs; fsck on copies only.

  16. BitLocker/VeraCrypt inside VM → Decrypt with keys on the image; then repair inner FS.

  17. Corrupted VM template/gold image → Carve flat disks; re-link descriptors; redeploy.

  18. Datastore signature collision after move → Force mount on clone; resignature only if required; export VMs.

  19. Ceph/RBD exported via iSCSI → Reassemble RBD map; export guest images.

  20. Backup proxy wrote to production LUN → Timeline diff; carve pre-overwrite blocks; restore from snapshots.
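Carving flat disks and re-linking descriptors starts with identifying what a recovered blob actually is. Virtual-disk formats are recognisable from their leading magic bytes (values from the public format specifications); a minimal identifier sketch:

```python
# Leading magic bytes per the published on-disk format specifications.
MAGICS = {
    b"KDMV":     "VMDK (sparse extent)",
    b"QFI\xfb":  "qcow2",
    b"vhdxfile": "VHDX",
    b"conectix": "VHD (legacy)",
}

def identify_image(header: bytes) -> str:
    """Identify a virtual-disk image from its first bytes."""
    for magic, name in MAGICS.items():
        if header.startswith(magic):
            return name
    return "unknown"

assert identify_image(b"QFI\xfb" + b"\x00" * 12) == "qcow2"
assert identify_image(b"vhdxfile" + b"\x00" * 8) == "VHDX"
assert identify_image(b"\x00" * 16) == "unknown"
```

In practice this triage runs across every carved candidate before descriptors are rebuilt, so snapshot chains can be re-linked parent-to-child by format-specific header fields.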


What to Send & How to Package

  • Power down the array. Label each disk by slot/order.

  • Pack each drive in anti-static bags and a small padded box or envelope with your contact details and a short incident summary (controller model, timestamps, symptoms).

  • Post or drop off in person—both are fine. Do not attempt further rebuilds.


Why Liverpool Data Recovery

  • 25+ years of RAID/NAS/server recoveries across consumer, SMB, and enterprise estates

  • Controller-aware image-first workflow; parity/stripe reconstruction on clones only

  • Deep expertise with NTFS/ReFS, EXT/XFS/btrfs, ZFS, VMFS/VHDX and snapshot technologies

  • Clear engineer-to-engineer communication and free diagnostics

Contact our RAID engineers today for free diagnostics.
Tell us the RAID level, disk count, controller/NAS model, symptoms, and your most critical data—we’ll prioritise those during recovery.

Contact Us

Tell us about your issue and we'll get back to you.