X1011

[Images: X1011 V1.1 board views; how to power; the matching case X1011-C1 for the X1011]

Tips

The X1011 does not currently support all RAID types, including RAID 1 and ZFS. This may be due to driver compatibility issues, which have not yet been confirmed. The X1011 presents four M.2 SSDs that the OS recognizes as separate drives. It uses the ASM1184e PCI Express packet switch, which fans one PCIe x1 Gen2 upstream port out to four PCIe x1 Gen2 downstream ports, enabling users to extend the PCIe port on a Raspberry Pi 5.

To verify whether it is a hardware problem:

1. Clear any RAID settings and mount each drive as a separate volume.
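
If RAID metadata from an earlier experiment is still on the drives, it can be wiped before mounting. A minimal sketch, assuming the drives appear as /dev/nvme0n1 through /dev/nvme3n1 and that an mdadm array named /dev/md0 and the ext4 filesystem are acceptable stand-ins (check your device names with lsblk first):

  pi@raspberrypi ~ $ sudo mdadm --stop /dev/md0                 # stop the array, if one is running
  pi@raspberrypi ~ $ sudo mdadm --zero-superblock /dev/nvme0n1  # wipe RAID metadata (repeat per drive)
  pi@raspberrypi ~ $ sudo mkfs.ext4 /dev/nvme0n1                # format the drive as a single volume
  pi@raspberrypi ~ $ sudo mkdir -p /mnt/ssd0                    # create a mount point
  pi@raspberrypi ~ $ sudo mount /dev/nvme0n1 /mnt/ssd0          # mount it (repeat for nvme1n1..nvme3n1)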

[Image: X1011-raid-test1.png]

2. Open a second terminal and monitor for any NVMe errors (I/O timeout, controller reset, I/O error, etc.):

  pi@raspberrypi ~ $ sudo dmesg -w | grep -i nvme	
[Image: X1011-raid-test3.png]

3. Create a 30GB testing file on one of the SSDs:

  pi@raspberrypi ~ $ sudo dd if=/dev/zero of=./TestingFile bs=100M count=300 oflag=direct	
[Image: X1011-raid-test2.png]

4. Copy the 30GB file to the other SSDs (the command below copies it to each mount point in turn):

  pi@raspberrypi ~ $ echo /media/pi/cn600/ /media/pi/spcc/ /media/pi/netac/ | xargs -n 1 cp ./TestingFile
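
Once the copies finish, comparing checksums confirms that the data survived intact; a minimal sketch, reusing the example mount points from the command above:

  pi@raspberrypi ~ $ md5sum ./TestingFile /media/pi/cn600/TestingFile /media/pi/spcc/TestingFile /media/pi/netac/TestingFile

All four sums should be identical; a mismatch, or errors appearing in the dmesg terminal from step 2, points to a hardware or power problem rather than a RAID or driver issue.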

Overview

Enhance your Raspberry Pi 5 with effortless installation and lightning-fast PCIe storage speeds!


The X1011 is a four-slot M.2 NVMe SSD shield designed to provide mass-capacity, high-speed storage for your Raspberry Pi 5. Its sleek and compact design enables easy attachment of four full-size M.2 2280 SSDs to your Raspberry Pi 5. With its PCIe 2.0 interface, you can experience data transfer rates of up to 5 Gbps, allowing you to transfer large amounts of data within seconds.

The X1011 connects to the underside of the Raspberry Pi 5, eliminating the need for a GPIO passthrough. This means you can use your favorite HATs while also utilizing this expansion board. Moreover, the X1011 offers versatile power options: it can draw power from the Raspberry Pi 5 through pogo pins using a USB-C power supply, or alternatively power the Raspberry Pi 5 from the X1011 using a DC power adapter via the onboard DC power jack, streamlining the power supply to a single source.

The X1011 is an ideal storage solution for creating a home media center or building a network-attached storage (NAS) system. It allows you to store and stream your own videos, music, and digital photos within your home or even remotely across the world.


Geekworm PCIe to NVMe Sets:

After the release of the Raspberry Pi AI Kit, we tested four PIPs: X1001, X1004, X1011, and M901. The X1001, X1004, and M901 all support the Hailo-8 AI accelerator, but the X1011 does not.

It should be noted that the X1004 uses the ASMedia ASM1182e PCIe switch and the X1011 uses the ASM1184e; neither supports PCIe Gen 3 speeds. Even though we forced the PCIe Gen 3.0 setting on the Raspberry Pi 5, the link is limited by the PCIe switch and still runs at PCIe Gen 2.0 (5 Gbps). When you use a Hailo-8 AI accelerator, the Raspberry Pi Foundation highly recommends using PCIe 3.0 to achieve the best performance with your AI Kit.
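
For reference, forcing the Gen 3.0 setting on a Raspberry Pi 5 is done in /boot/firmware/config.txt (a sketch of the documented dtparam; on the X1011 the ASM1184e still caps the link at Gen 2.0 regardless):

  # /boot/firmware/config.txt
  dtparam=pciex1_gen=3

A reboot is required for the setting to take effect.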

Our tentative conclusions are as follows:

  • If you need to use the Hailo-8 AI accelerator with high performance, we recommend the X1015/X1002/X1003/M901/the official M.2 HAT+, etc. When choosing among these PIP boards, check whether the camera cable conflicts with the board installation, and enable PCIe 3.0 to use the Hailo-8 AI accelerator. You will also need to prepare an SD card as the system disk.
  • If you don't care about the higher performance PCIe 3.0 brings, you can consider the X1004: use one socket of the X1004 for an NVMe SSD as the system disk and the other socket for the Hailo-8 AI accelerator, and so have both.
Model | Compatible with | Position | NVMe M.2 SSD Length Support | Matching Case | Matching Cooler | Supports NVMe Boot | Supports PCIe 3.0 | Supports Hailo-8 AI Accelerator
X1000 | Raspberry Pi 5 | Top | 2230/2242 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Not tested
X1001 | Raspberry Pi 5 | Top | 2230/2242/2260/2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Yes
X1002 | Raspberry Pi 5 | Bottom | 2230/2242/2260/2280 | P580 / P580-V2 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | NO
X1003 | Raspberry Pi 5 | Top | 2230/2242 | P579 / P425 | Official Cooler / H501 only | Yes | - | Not tested
X1004 | Raspberry Pi 5 | Top | Dual SSD: 2280 | P579-V2 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes (requires EEPROM 2024/05/17 or later) | NO | Yes
X1015 | Raspberry Pi 5 | Top | 2230/2242/2260/2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Yes
X1005 | Raspberry Pi 5 | Bottom | Dual SSD: 2230/2242/2260/2280 | P580-V2 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes (requires EEPROM 2024/05/17 or later) | NO | Yes
X1011 | Raspberry Pi 5 | Bottom | 4 SSDs: 2230/2242/2260/2280 | X1011-C1 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes (requires EEPROM 2024/05/17 or later) | NO | NO
X1012 | Raspberry Pi 5 | Top | 2230/2242/2260/2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Not tested
M901 | Raspberry Pi 5 | Top | 2230/2242/2260/2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Yes
Q100 | Raspberry Pi 5 | Top | 2242 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Not tested
Q200 | Raspberry Pi 5 | Top | Dual SSD: 2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | NO | - | Not tested
M300 | Raspberry Pi 5 | Top | 2230/2242 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Not tested
M400 | Raspberry Pi 5 | Top | 2230/2242/2280 | P579 | Official Cooler / Argon THRML Cooler / H505/H501 | Yes | - | Not tested

Features

For use with

Raspberry Pi 5 Model B

Key Features
  • The perfect storage solution for your Raspberry Pi 5: the M.2 NVMe 4 SSD shield
  • Accommodates various M.2 NVMe SSD form factors, including 2280, 2260, 2242, and 2230
  • Provides speedy data transfer over PCIe 2.0 (5 Gbps)
  • Blue LED indicators display power and drive status
  • Features an integrated high-performance PCIe packet switch
  • Equipped with a high-efficiency DC/DC step-down converter delivering up to 10 A to power your SSDs
  • Can be powered via pogo pins & FFC or via the DC power jack, ensuring a sufficient power supply
  • Attaches to the bottom of the Pi, so you can use your favorite HATs alongside it
  • Compatible with the HAT+ STANDBY power state, automatically turning off when the Pi 5 shuts down
  • Compatible with the official active cooler without affecting cooling performance
  • PCB size: 109 mm x 87.2 mm

PS: The X1011 hardware places no limit on NVMe SSD capacity; any limit depends on the Raspberry Pi OS.

Ports & Connectors
  • DC power jack: 5.5x2.1mm, polarity: center positive (+)
  • PCIe connector x1 - 16-pin, 0.5 mm pitch
  • SSD connectors x4 - M.2 KEY-M 67P
How to Power
  • 5 Vdc ±5%, ≥5 A via FFC & pogo pins
  • 5 Vdc ±5%, ≥5 A via the Type-C port of the Pi 5 or the DC power jack of the X1011

Don't power the X1011 via its DC power jack and the Raspberry Pi 5 via USB-C at the same time.

Important Notes
  • Not compatible with M.2 SATA SSDs, M.2 PCIe AHCI SSDs, or other M.2 non-NVMe devices
  • Older NVMe drives with less efficient flash media may not perform as well as newer drives
  • New NVMe SSDs come unpartitioned and must be partitioned and formatted when first connected to the Raspberry Pi before they can be accessed in the file manager; see the sketch after this list
  • The X1011 supports booting from NVMe SSDs with bootloader version 2024-05-17 or later
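
A minimal sketch of partitioning and formatting a brand-new drive, assuming it shows up as /dev/nvme0n1 (verify the device name with lsblk before running these destructive commands):

  pi@raspberrypi ~ $ lsblk                                           # identify the new, empty drive
  pi@raspberrypi ~ $ sudo parted -s /dev/nvme0n1 mklabel gpt         # create a GPT partition table
  pi@raspberrypi ~ $ sudo parted -s /dev/nvme0n1 mkpart primary ext4 0% 100%
  pi@raspberrypi ~ $ sudo mkfs.ext4 /dev/nvme0n1p1                   # format the new partition

The currently installed bootloader version can be checked with sudo rpi-eeprom-update.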

Matching Case

The X1011-C1 is a metal case designed only for the Raspberry Pi 5 together with the X1011 PCIe-to-NVMe shield.

Note: because it is a metal case, it will shield the WiFi signal, so please use wired Ethernet.


Packing List of X1011-C1:

  • 1 x Metal Case
  • 2 x M2.5*6+3 Female/Male Spacer
  • 2 x M2.5*6 Female/Female Spacer
  • 4 x KM2.5*4 Screw
  • 4 x 8 mm diameter pads

[Images: X1011-C1 packing list, interfaces, dimensions, and layer views]

Packing List

  • 1 x X1011 V1.1 M.2 NVMe 4 SSD shield
  • 2 x 37mm PCIe FFC cable (1pc is for backup)
  • 8 x M2.5*5 Screws
  • 2 x M2.5*5 Female/Female Spacer
  • 2 x M2.5*5+5 Male/Female Spacer
  • 4 x M2*4 Screws
  • 4 x M2 Copper Nut

[Image: X1011 V1.1 packing list]

User Manual

Related links

Test & Reviews

Test Conditions

  • System board details: Raspberry Pi 5 Model B Rev 1.0, 4GB RAM
  • Interface board details: X1011 V1.1 M.2 NVMe 4 SSD shield
  • Operating system: Raspberry Pi OS with desktop (Debian 12 (Bookworm), 64-bit, release date: December 5th, 2023)
  • Storage details: Colorful CN600 120GB, Samsung PM961 120GB, Netac N930E 120GB, Silicon Power P34A60 120GB


Testing disk drive read speed at PCIe 2.0 with hdparm

[Image: hdparm read-speed test results]
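
The test in the screenshot can be reproduced with hdparm; a minimal sketch, assuming the first drive is /dev/nvme0n1:

  pi@raspberrypi ~ $ sudo hdparm -t --direct /dev/nvme0n1   # sequential read test, bypassing the page cache

On a PCIe 2.0 x1 link, sequential reads should top out at roughly 400-450 MB/s per drive.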

FAQ

Q: Why is PCIE SWITCH 3.0 not used?

A: Cost. The choice of PCIe 2.0 for the X1011 was based on cost considerations. At the time the X1011 was designed and produced, a PCIe 3.0 switch IC cost close to 30 dollars on the Chinese market (the price should be lower now; in addition, we are a small-batch manufacturer with no IC purchasing bargaining power), six times or more the price of a PCIe 2.0 IC. We judged that the resulting retail price would be too high for consumers to accept. Another point to note is that the Pi 5 is certified to support PCIe 2.0 only, not PCIe 3.0.

Q: Why does the X1011 use a DC jack instead of Type-C?

A: We thought about this carefully. Type-C is limited to 5 A; if four NVMe SSDs are read/written at the same time, plus the motherboard and a fan, is a Type-C power supply enough at peak? A DC jack can provide more than 5 A, so it gives an extra margin.


Comments


Anonymous user #16

18 days ago

I have tried to configure this X1011 board with a RPi5B within the C1 case for nearly 2 weeks now - roughly 60 hours of testing. The NVMe SSDs function just fine as independent drives, but I have not been able to write to two or more SSDs simultaneously, no matter my configuration settings. I've tried to do so using three different sets of four SSDs: Fikwot 1TB SSDs, MMoment 1TB SSDs, and Goldenfir 256GB SSDs. In fact, I also could not create any filesystem for a software RAID0 across all four SSDs. The efforts most often error out with errors like this:

[ 0.517966] nvme nvme0: pci function 0000:03:00.0
[ 0.517973] nvme 0000:03:00.0: enabling device (0000 -> 0002)
[ 0.521768] nvme nvme0: 3/0/0 default/read/poll queues
[ 0.522660] nvme nvme0: Ignoring bogus Namespace Identifiers
[ 0.523600] nvme0n1: p1 p2
[ 0.523916] nvme nvme1: pci function 0000:04:00.0
[ 0.523924] nvme 0000:04:00.0: enabling device (0000 -> 0002)
[ 0.527246] nvme nvme1: 1/0/0 default/read/poll queues
[ 0.527722] nvme nvme1: Ignoring bogus Namespace Identifiers
[ 0.533278] nvme1n1: p1 p2
[ 0.533647] nvme nvme2: pci function 0000:05:00.0
[ 0.533657] nvme 0000:05:00.0: enabling device (0000 -> 0002)
[ 0.537068] nvme nvme2: 1/0/0 default/read/poll queues
[ 0.537614] nvme nvme2: Ignoring bogus Namespace Identifiers
[ 0.541164] nvme2n1: p1 p2
[ 0.541496] nvme nvme3: pci function 0000:06:00.0
[ 0.541504] nvme 0000:06:00.0: enabling device (0000 -> 0002)
[ 0.544821] nvme nvme3: 1/0/0 default/read/poll queues
[ 0.545306] nvme nvme3: Ignoring bogus Namespace Identifiers
[ 0.549162] nvme3n1: p1 p2

[ 439.140011] [<000000009db9d36e>] nvme_irq
[ 439.140017] [<000000009db9d36e>] nvme_irq
[ 439.140020] [<000000009db9d36e>] nvme_irq
[ 439.140023] [<000000009db9d36e>] nvme_irq

[ 470.187812] nvme nvme1: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 470.187820] nvme nvme1: Does your device have a faulty power saving mode enabled?
[ 470.187823] nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 470.187836] nvme nvme3: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 470.187839] nvme nvme3: Does your device have a faulty power saving mode enabled?
[ 470.187841] nvme nvme3: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 470.187854] nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 470.187857] nvme nvme2: Does your device have a faulty power saving mode enabled?
[ 470.187858] nvme nvme2: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 470.263828] nvme 0000:05:00.0: enabling device (0000 -> 0002)
[ 470.266477] nvme nvme2: 1/0/0 default/read/poll queues
[ 470.271909] nvme 0000:06:00.0: enabling device (0000 -> 0002)
[ 470.272349] nvme 0000:04:00.0: enabling device (0000 -> 0002)
[ 470.273677] nvme nvme2: Ignoring bogus Namespace Identifiers
[ 470.275687] nvme nvme1: 1/0/0 default/read/poll queues
[ 470.276323] nvme nvme3: 1/0/0 default/read/poll queues
[ 470.284265] nvme nvme1: Ignoring bogus Namespace Identifiers
[ 470.291824] nvme nvme3: Ignoring bogus Namespace Identifiers
[ 501.248802] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
[ 501.248809] nvme nvme0: Does your device have a faulty power saving mode enabled?
[ 501.248811] nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 501.248811] nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 501.248815] nvme nvme2: Does your device have a faulty power saving mode enabled?
[ 501.248817] nvme nvme2: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 501.538195] nvme nvme3: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 501.538197] nvme nvme3: Does your device have a faulty power saving mode enabled?
[ 501.538198] nvme nvme3: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 501.538213] nvme nvme1: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 501.538215] nvme nvme1: Does your device have a faulty power saving mode enabled?
[ 501.538217] nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[ 501.887998] nvme 0000:06:00.0: enabling device (0000 -> 0002)
[ 501.888234] nvme 0000:04:00.0: enabling device (0000 -> 0002)
[ 501.888428] nvme 0000:05:00.0: enabling device (0000 -> 0002)
[ 501.890650] nvme nvme2: 1/0/0 default/read/poll queues
[ 501.890963] nvme nvme1: 1/0/0 default/read/poll queues
[ 501.892727] nvme nvme1: Ignoring bogus Namespace Identifiers
[ 501.893103] nvme nvme3: 1/0/0 default/read/poll queues
[ 502.204741] nvme nvme2: Ignoring bogus Namespace Identifiers
[ 503.362417] nvme nvme3: Ignoring bogus Namespace Identifiers
[ 504.582800] nvme 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
[ 506.897825] nvme nvme0: Disabling device after reset failure: -19
[ 506.924122] I/O error, dev nvme0n1, sector 383009792 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 2

These errors and others akin to these happen no matter the args submitted to the config.txt and cmdline.txt files. I've attempted to use suggestions on the Geekworm wiki/forum, and many on the usual Linux troubleshooting sites. Even those suggested within the errors themselves do not improve things nor suggest they're making promising progress. These are scripts to write to the four SSDs simultaneously --- cut and paste one line each into separate terminal windows and execute. Each will attempt to write 931GB to each 1TB SSD (each 1TB SSD actually holds 953.86946868896484375GB).

clear; echo; time taskset -c 0 nice dd if=/dev/urandom of=/dev/nvme0n1 bs=2097152 count=487936 conv=notrunc status=progress &
clear; echo; time taskset -c 1 nice dd if=/dev/urandom of=/dev/nvme1n1 bs=2097152 count=487936 conv=notrunc status=progress &
clear; echo; time taskset -c 2 nice dd if=/dev/urandom of=/dev/nvme2n1 bs=2097152 count=487936 conv=notrunc status=progress &
clear; echo; time taskset -c 3 nice dd if=/dev/urandom of=/dev/nvme3n1 bs=2097152 count=487936 conv=notrunc status=progress &

Create a software RAID0 across created partitions on all four SSDs:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1

Then attempt to create a filesystem:

mkfs.ext2 /dev/md0p1

Or after a reboot:

mkfs.ext2 /dev/md127p1

AND... The errors persist.

I have had no success attempting to do these things.

Lisa

18 days ago
Hello, please contact us by email: support@geekworm.com (and please let us know your order NO.).

Anonymous user #15

one month ago
test

Anonymous user #14

one month ago

In case someone is battling making a NAS with large NVMes... I'm almost 100% sure that 1A@5V (5.00W) is not enough to power them. The example below is from 4x Shenzhen Longsys Electronics Co., Ltd. Lexar NM790 NVMe SSD (DRAM-less) (rev 01).

  # smartctl -c /dev/nvme0

smartctl 7.4 2023-08-01 r5530 [aarch64-linux-6.11.0-1004-raspi] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

START OF INFORMATION SECTION

Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size: 128 Pages
Warning Comp. Temp. Threshold: 90 Celsius
Critical Comp. Temp. Threshold: 95 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.50W       -        -    0  0  0  0        0       0
 1 +     5.80W       -        -    1  1  1  1        0       0
 2 +     3.60W       -        -    2  2  2  2        0       0
 3 -   0.0500W       -        -    3  3  3  3     5000   10000
 4 -   0.0025W       -        -    4  4  4  4     8000   41000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

After 2 days of battling, this is a partial workaround I found that works almost stably using RaidZ1 (with a RaidZ2 setup the NAS still hangs after seconds when loaded).

With this command you tell the NVMe drive to use power states <= 2:

  nvme set-feature /dev/nvme0 --feature-id=2 --value=2

You can check the current power state status like this:

  nvme get-feature /dev/nvme0 -f 2
  get-feature:0x02 (Power Management), Current value:0x00000004

More info when available.

Harry

one month ago

Hello.

Which product are you using?

The X1011 is powered by spring (pogo) pins, not by FFC cables, so the 5V/1A problem you mentioned does not apply.

In addition, you can send your markdown text to support@geekworm.com, and we can format it and put it in the comment.

Harry

one month ago

Formatting of the above comments:

In case someone is battling making a NAS with large NVMes... I'm almost 100% sure that 1A@5V (5.00W) is not enough to power them. The example below is from 4x Shenzhen Longsys Electronics Co., Ltd. Lexar NM790 NVMe SSD (DRAM-less) (rev 01).

  smartctl -c /dev/nvme0

smartctl 7.4 2023-08-01 r5530 [aarch64-linux-6.11.0-1004-raspi] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

START OF INFORMATION SECTION

Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size: 128 Pages
Warning Comp. Temp. Threshold: 90 Celsius
Critical Comp. Temp. Threshold: 95 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.50W       -        -    0  0  0  0        0       0
 1 +     5.80W       -        -    1  1  1  1        0       0
 2 +     3.60W       -        -    2  2  2  2        0       0
 3 -   0.0500W       -        -    3  3  3  3     5000   10000
 4 -   0.0025W       -        -    4  4  4  4     8000   41000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

After 2 days of battling, this is a partial workaround I found that works almost stably using RaidZ1 (with a RaidZ2 setup the NAS still hangs after seconds when loaded).

With this command you tell the NVMe drive to use power states <= 2:

  nvme set-feature /dev/nvme0 --feature-id=2 --value=2

You can check the current power state status like this:

  nvme get-feature /dev/nvme0 -f 2
  get-feature:0x02 (Power Management), Current value:0x00000004

More info when available.


The PSU I use is a 20W unit that I used before, so it's checked. I never saw an under-voltage alarm on the RPi5 itself. The same four drives connected to a 4x4x4x4 bifurcated PCIe3 slot on a PC perform like beasts. Now I have a stable configuration using both RaidZ1 and RaidZ2 by software-limiting IO bursts to the drives.

Anonymous user #12

one month ago
I have one of these with 4 SSDs and it works fine when all drives are separate, but as soon as I run them in RAID 5 one drive fails. It always seems to be the drive in slot 2 that fails, even when I swap them around.

Anonymous user #12

one month ago
Update: I have changed from the RPi 25W USB PSU to a 50W PSU connected directly to the X1011. I am using Open Media Vault, and when a drive disappears it comes back after a reboot, and after a second reboot the RAID 5 array is "clean" again...

Harry

one month ago
The X1011 does not support hardware RAID; please set it up with software RAID.

Anonymous user #13

one month ago
Same for me - doesn't work

Anonymous user #11

one month ago
Does anyone have an .STL file to print a case for this?

Harry

one month ago
Please contact us via email.

Anonymous user #10

5 months ago
Is it compatible with the X1203 (rpi5 - X1203 - X1011)?

Lisa

5 months ago
The X1011 and X1203 are both installed on the bottom of the Pi 5 and cannot be used together. You can consider using the X728 UPS or X-UPS1 instead.

Anonymous user #9

5 months ago
Can I use the Hailo-8 AI accelerator and an NVMe SSD at the same time?

Lisa

5 months ago
Sorry, we haven't tested it.

Harry

5 months ago

We tested the X1011 and X1004. The X1011 DOES NOT support the Hailo-8 AI accelerator, BUT the X1004 does.



Because the X1004 uses the ASMedia ASM1182e PCIe switch, it can't support PCIe Gen 3 speeds; even though we forced the PCIe Gen 3.0 setting on the Raspberry Pi 5, the link is limited by the ASM1182e switch and still runs at PCIe Gen 2.0 speed.



In addition, it should be noted that when you use a Hailo-8 AI accelerator, the Raspberry Pi Foundation highly recommends using PCIe 3.0 to achieve the best performance with your AI Kit.



Finally, our conclusion is:

  • If you need to use the Hailo-8 AI accelerator with high performance, we recommend the official M.2 HAT+/X1015/X1002/X1003/M901, etc. When choosing among these PIP boards, check whether the camera cable conflicts with the board installation, and enable PCIe 3.0 to use the Hailo-8 AI accelerator. You will also need to prepare an SD card as the system disk.
  • If you don't care about the higher performance PCIe 3.0 brings, you can consider the X1004: use one socket of the X1004 for an NVMe SSD as the system disk and the other socket for the Hailo-8 AI accelerator, and so have both.

Anonymous user #8

5 months ago

Hello

Does the X1011 also support NVMe SSDs of 4TB and 8TB?

Lisa

5 months ago
Hi, the X1011 hardware has no limit on NVMe SSD capacity; any limit depends on the Raspberry Pi OS.

Anonymous user #7

6 months ago
Will it fit the NASPi Alu case? What case is it supposed to be used with?

Lisa

6 months ago
Hi, the NASPi Alu case is not compatible with the X1011. We are planning to make a matching case for the X1011; please watch for our updates. Thanks.

Anonymous user #5

6 months ago
Which DC power supply is compatible?

Lisa

6 months ago
Hi, 5 Vdc ±5%, ≥5 A; DC power jack: 5.5x2.1mm, center positive (+); please refer to X1011#Ports & Connectors.

Anonymous user #4

7 months ago
I would like to create a RAID 5 NAS out of this; are you planning to release instructions on how to do so?

Anonymous user #2

7 months ago
Is it possible to disable the 4 blue status LEDs?

Lisa

7 months ago
Hi, disabling them is not supported; you can cover them with something.

Anonymous user #2

7 months ago
Thank you Lisa :)

Anonymous user #1

8 months ago
Any plans for a case for this HAT?

Lisa

8 months ago
Hello, the X1011 has just gone on sale and there are no plans to produce a case yet.

Anonymous user #3

7 months ago
I would also like a case produced. Thanks

Anonymous user #6

6 months ago
Not so interesting without a good case.