Fixing DawiControl DC-614e Detection Issues on Proxmox VE
I just needed some extra SATA ports in my home server — and thought that would be easy to get by adding a DawiControl DC-614e RAID controller card, which is built around the Marvell 88SE9215 chipset. Marvellous 🐐, you might think... Turns out, I was wrong!
I’m not using the RAID functionality of the card — I only needed the four extra SATA ports 🤷♂️. Unfortunately, the card wasn’t properly recognized out of the box by Linux... 🙄 — Bad penguin! 🐧 Verry baaad, bad penguin! 👎🏻
My best Google searches quickly yielded results. This issue is described in detail in the following blog post: Arbeitsnotizen zu: Debian und Marvell 88SE9215. This excellent site turns out to be by Johannes Keßler (thanks for sharing! — and caring, Johannes 😉), and I found it through a post on good old Reddit.
The author provides a solid solution by applying a kernel patch that adds the necessary PCI ID to the ahci driver. That article is based on kernel version 6.0.8.
You’d expect this to be fixed by now — especially since I’m running Proxmox VE with kernel version 6.8.12 — but... "NEIN! (Du Schwalbe!)" the problem still persists.
Since I didn’t want to bother with compiling a custom kernel, I found a different solution:
I resolved the issue by injecting the PCI ID into the initramfs, which allows the ahci driver to pick up the card automatically during boot 🤓.
In this article, I’ll walk you through how I made that work 👨🏻🏫.
1. Lookup details 🔭
Using lspci, I confirmed that the card is visible on the PCI bus:
lspci -nn -kk
[..]
03:00.0 RAID bus controller [0104]: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller [1b4b:9215] (rev 01)
Subsystem: Dawicontrol GmbH Device [dc93:614e]
Kernel driver in use: ahci
Kernel modules: ahci
1b4b → the vendor ID of the manufacturer.
In this case, 1b4b stands for Marvell Technology Group Ltd.
9215 → the device ID that identifies the specific model/chipset.
Here, that’s the 88SE9215, the SATA controller chip on the card.
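As a quick sanity check, the vendor:device pair can be pulled out of an lspci -nn line with standard tools. A small sketch, using the controller's line from the output above as sample input:

```shell
# The controller's entry, as printed by `lspci -nn` above
LINE='03:00.0 RAID bus controller [0104]: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller [1b4b:9215] (rev 01)'

# Match the [vendor:device] bracket pair (4 hex digits, colon, 4 hex digits);
# the class code [0104] has no colon, so it is not matched
ID=$(printf '%s\n' "$LINE" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$ID"   # 1b4b:9215
```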
2. Ensure AHCI is available during boot 👢
We must ensure that the AHCI driver is available in the initramfs at boot. This can be done by adding ahci to the /etc/initramfs-tools/modules file.
The following command checks whether ahci is already listed in /etc/initramfs-tools/modules. If not, it appends it:
grep -qx 'ahci' /etc/initramfs-tools/modules || echo ahci >> /etc/initramfs-tools/modules
It should look something like this (just listing ahci):
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax: module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
ahci
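The append-only-if-missing one-liner above is idempotent, so running it twice does no harm. You can convince yourself on a throwaway file before touching the real one (the temp file here is purely for illustration):

```shell
# Demonstrate the grep||echo pattern on a scratch file
F=$(mktemp)

# Run the same append-if-missing line twice
grep -qx 'ahci' "$F" || echo ahci >> "$F"
grep -qx 'ahci' "$F" || echo ahci >> "$F"

# The module ends up listed exactly once
COUNT=$(grep -cx 'ahci' "$F")
echo "$COUNT"   # 1

rm -f "$F"
```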
Update the initramfs to include the ahci driver:
update-initramfs -u
3. Init Script 🚀
Ensure the init-top folder exists...
mkdir -p /etc/initramfs-tools/scripts/init-top
...and create a custom init-top script:
nano /etc/initramfs-tools/scripts/init-top/dawicontrol
With the following contents:
#!/bin/sh
PREREQ="modules"
prereqs() {
    echo "$PREREQ"
}
case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

echo "### DawiControl init-top script starting ###"

PCI_ID="1b4b 9215"
TARGET="/sys/bus/pci/drivers/ahci/new_id"

# Wait up to 10 seconds until the target exists and is writable
WAIT=0
while [ ! -w "$TARGET" ] && [ $WAIT -lt 10 ]; do
    echo "Waiting for $TARGET to be writable... ($WAIT)"
    sleep 1
    WAIT=$((WAIT + 1))
done

if [ ! -w "$TARGET" ]; then
    echo "ERROR: $TARGET is not writable after waiting"
    exit 1
fi

# Check if the PCI ID is already registered (note: new_id is write-only
# on most kernels, so this is a best-effort check that may never match)
if grep -q "$PCI_ID" "$TARGET" 2>/dev/null; then
    echo "PCI ID $PCI_ID already registered"
    exit 0
fi

# Inject the PCI ID into the AHCI driver
echo "$PCI_ID" > "$TARGET" || {
    echo "Failed to inject PCI ID $PCI_ID"
    exit 1
}

echo "PCI ID $PCI_ID successfully injected"
exit 0
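The wait loop in the script is a generic "poll until writable" pattern. A standalone sketch of the same idea, using a temp file in place of the sysfs node (the file and the one-second delay are just for illustration):

```shell
# Poll until a path becomes writable, with a timeout in seconds.
# Returns 0 once writable, 1 on timeout.
wait_writable() {
    path=$1
    timeout=$2
    waited=0
    while [ ! -w "$path" ] && [ "$waited" -lt "$timeout" ]; do
        sleep 1
        waited=$((waited + 1))
    done
    [ -w "$path" ]
}

# Illustration: the target only appears after a short delay
T=$(mktemp -u)               # a path that does not exist yet
( sleep 1; : > "$T" ) &      # created one second later, in the background
if wait_writable "$T" 5; then RESULT=ready; else RESULT=timeout; fi
echo "$RESULT"
rm -f "$T"
```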
Make the script executable...
chmod +x /etc/initramfs-tools/scripts/init-top/dawicontrol
...and rebuild the initramfs:
update-initramfs -u
After a reboot, the card was correctly initialized and all connected drives became available.
Update (6th of August 2025): I performed an in-place upgrade of my Proxmox VE 8 installation to version 9, and my fix survived the upgrade. The fix works on both version 8 and version 9!