25.10.5

Simplyblock is happy to announce the general availability of Simplyblock 25.10.5.

New Features

  • Both the clients (via nvme connect) and the storage nodes can now be force-bound to particular NICs on the host
  • Management nodes no longer require a route to the storage network (data NICs)
  • Optional CSI feature that auto-deletes and restarts all pods that lost both I/O paths while a cluster was suspended, once the cluster becomes operational again
  • If a device disappears on node restart directly from the NEW state, it is simply removed from the database
  • Variable port ranges
  • Accelerated cluster activation: creation of distribs is parallelized on cluster activate
  • Accelerated node restart: connections to and from other nodes are established in parallel on node restart
  • Support for the Kubernetes Topology Manager
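As background for the Topology Manager support (this is generic kubelet configuration, not simplyblock-specific), the Topology Manager is enabled on each node via the kubelet configuration file. A minimal sketch with a strict NUMA alignment policy:

```yaml
# kubelet configuration fragment (illustrative; field values are one common choice)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerPolicy: single-numa-node  # align CPU/device allocations to one NUMA node
topologyManagerScope: pod                # align at pod scope; 'container' is the default
```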
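As an illustration of the NIC force-binding on the client side, nvme-cli lets an NVMe/TCP initiator pin the local source address and interface with the --host-traddr and --host-iface options. The addresses, port, NQN, and interface name below are hypothetical placeholders, and the command is only printed here rather than executed, since connecting requires a live target:

```shell
# Sketch (hypothetical values): force-bind an NVMe/TCP connection to one NIC.
# --host-traddr pins the local source IP, --host-iface pins the local interface.
CMD="nvme connect -t tcp -a 10.10.10.5 -s 4420 \
  -n nqn.2023-02.io.simplyblock:lvol1 \
  --host-traddr 10.10.10.2 --host-iface eth1"
echo "$CMD"  # printed instead of run; running it requires a reachable target
```

The storage-node side of the binding is handled by the cluster configuration itself and needs no manual nvme-cli invocation.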

Fixes (Summary)

  • Node affinity issues, I/O interruption, and device format failure on 2+2 after multiple consecutive 2-node outages (multiple issues with journal and placement fixed)
  • Consecutive outages without data migration in between could lead to I/O interruption (changes to the placement algorithm)
  • Snapshot deletion failed while the primary node was in outage
  • DPDK failed to initialize
  • Lvol deletion failed when the secondary was in the down state
  • The force option of the remove-device command did not work
  • The records parameter of get-io-stats returned the same 20 values

Upgrade Considerations

It is possible to upgrade from 25.10.4 and 25.10.4.2.

Known Issues

  • Unnecessary retries of data migration still occur while a node is down in +2 schemas (2+2, 4+2). Data migration should pause once all migratable chunks have been moved and resume only for the remaining part once all nodes are online. Currently, it retries without success until all nodes are online. A hotfix will be delivered as soon as possible.
  • At the moment, sustaining full fault tolerance requires more nodes than the theoretical minimum. This is due to a missing feature in the placement logic, which will be delivered as a hotfix as soon as possible.
  • Using different erasure coding schemas within the same cluster is still not supported (will be delivered with the next major release)

Features to expect with the next major release

  • Ability to use different erasure coding schemas in the same cluster
  • Remote snapshot replication (send snapshots to a remote cluster)
  • Kubernetes: asynchronous replication (replicate volumes via snapshots at regular intervals and support fail-over in Kubernetes)
  • Kubernetes Operator: use CRDs to specify, create, and track clusters, storage nodes, volumes, snapshots, and replications
  • Significant performance optimizations during node outage (journal writes)
  • Cluster-internal multi-pathing for both RDMA and TCP
  • Snapshot backups to S3
  • n+2: 3 paths from the client (2 secondaries per primary)