ABSOLUT KYBERIA
I had a RAID5 array of 4 disks (10GB each) which I resized to 5 disks after adding another disk to the machine.

The process was lossless: all data from the original array look OK. The original array size was about 28GB (4*10GB - 1*10GB for parity); the new size is 37GB (5*10GB - 1*10GB for parity). Those are the sizes df reports from the filesystem, so I can still do arithmetic :)

On a P2 400 with 768MB RAM the whole thing took under two hours; hd[f-h] are on a SIL CMD640 controller [UDMA5] and hda is on the onboard controller [UDMA2].

raidtab.new is the file with the additional disk already added.
After all of this finished, raidstart /dev/md1 triggered an ordinary resync.

sh-2.05b# raidstop /dev/md1
sh-2.05b# raidreconf -o /etc/raidtab -n /etc/raidtab.new -m /dev/md1
Parsing /etc/raidtab
Parsing /etc/raidtab.new
Size of old array: 78204168 blocks, Size of new array: 97755210 blocks
Old raid-disk 0 has 76370 chunks, 9775424 blocks
Old raid-disk 1 has 76370 chunks, 9775424 blocks
Old raid-disk 2 has 76370 chunks, 9775424 blocks
Old raid-disk 3 has 76370 chunks, 9775424 blocks
New raid-disk 0 has 76370 chunks, 9775424 blocks
New raid-disk 1 has 76370 chunks, 9775424 blocks
New raid-disk 2 has 76370 chunks, 9775424 blocks
New raid-disk 3 has 76370 chunks, 9775424 blocks
New raid-disk 4 has 76370 chunks, 9775424 blocks
Using 128 Kbyte blocks to move from 128 Kbyte chunks to 128 Kbyte chunks.
Detected 774336 KB of physical memory in system
A maximum of 1573 outstanding requests is allowed
---------------------------------------------------
I will grow your old device /dev/md1 of 229110 blocks
to a new device /dev/md1 of 305480 blocks
using a block-size of 128 KB
Is this what you want? (yes/no): yes
Converting 229110 block device to 305480 block device
Allocated free block map for 4 disks
5 unique disks detected.
Working () [00229110/00229110] []
Source drained, flushing sink.
Reconfiguration succeeded, will update superblocks...
Updating superblocks...
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/hda1, 9775521kB, raid superblock at 9775424kB
disk 1: /dev/hdf1, 9775521kB, raid superblock at 9775424kB
disk 2: /dev/hdg1, 9775521kB, raid superblock at 9775424kB
disk 3: /dev/hdh1, 9775521kB, raid superblock at 9775424kB
disk 4: /dev/hda6, 9775521kB, raid superblock at 9775424kB
Array is updated with kernel.
Disks re-inserted in array... Hold on while starting the array...
Maximum friend-freeing depth: 8
Total wishes hooked: 229110
Maximum wishes hooked: 1573
Total gifts hooked: 229110
Maximum gifts hooked: 990
Congratulations, your array has been reconfigured,
and no errors seem to have occured.
sh-2.05b# e2fsck -f /dev/md1
e2fsck 1.34 (25-Jul-2003)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>? yes

Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/md1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md1: 8874/3670016 files (1.8% non-contiguous), 7329064/7331520 blocks
sh-2.05b#
sh-2.05b# resize2fs /dev/md1
resize2fs 1.34 (25-Jul-2003)
Resizing the filesystem on /dev/md1 to 9775360 (4k) blocks.
The filesystem on /dev/md1 is now 9775360 blocks long.

sh-2.05b#
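The numbers in the log are internally consistent: RAID5 usable capacity is (member disks - 1) times the per-disk size, since one disk's worth of space goes to parity, and resize2fs grows the filesystem to match the new device size. A quick shell sanity check using only figures from the raidreconf and resize2fs output above:

```shell
#!/bin/sh
# RAID5 usable space = (number of member disks - 1) * per-disk size.
# All figures below are taken from the raidreconf log above.
chunks_per_disk=76370              # 128 KB chunks on each member disk

old_usable=$(( (4 - 1) * chunks_per_disk ))   # 4-disk array
new_usable=$(( (5 - 1) * chunks_per_disk ))   # 5-disk array
echo "old array: $old_usable chunks of 128 KB"   # 229110, as reported
echo "new array: $new_usable chunks of 128 KB"   # 305480, as reported

# resize2fs then grows ext2 to fill the device: one 128 KB chunk
# holds 32 blocks of 4 KB, so the final filesystem size is:
fs_blocks=$(( new_usable * 32 ))
echo "filesystem: $fs_blocks 4k blocks"          # 9775360, as reported
```

The 229110-block gross size quoted by df (about 28GB at 128 KB per block) matches the "old device of 229110 blocks" line in the raidreconf prompt.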

============================================================

I've since tried it on a production array as well: 4x150GB -> 5x150GB

30 min fsck
13 h raidreconf
30 min fsck
30 min resize2fs
------------------
about 15 h in total, including configuration etc.

everything seems to be OK:
root@junk:~# df -h /dev/md0
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              565G  423G  142G  75% /storage

root@junk:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md0 : active raid5 hda5[4] hdh5[3] hdg5[2] hdf5[1] hde5[0]
601216000 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]
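Condensed, the offline grow procedure from the sessions above is roughly the following sketch. These are the raidtools-era commands as actually used in the thread; the /storage mount point and the cp step are assumptions on my part, and raidreconf rewrites the entire array, so a verified backup is essential before any of this:

```shell
#!/bin/sh
# Offline RAID5 grow with raidtools, as performed in the session above.
# /etc/raidtab.new must match /etc/raidtab plus the new member disk.
set -e

umount /storage                 # filesystem must not be mounted (mount point assumed)
raidstop /dev/md1               # stop the array
raidreconf -o /etc/raidtab -n /etc/raidtab.new -m /dev/md1  # redistribute data
cp /etc/raidtab.new /etc/raidtab   # assumed: make the new layout the real one
raidstart /dev/md1              # restart; the kernel begins an ordinary resync
e2fsck -f /dev/md1              # forced check before resizing
resize2fs /dev/md1              # grow ext2 to fill the enlarged array
mount /dev/md1 /storage
```

On current kernels the same grow can be done with mdadm (--grow --raid-devices=N) while the array stays online, but that was not available with this toolchain.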




 maniac      12.09.2004 - 22:50:05, level: 1

raiddev /dev/md1
raid-level 5
nr-raid-disks 5
nr-spare-disks 0
persistent-superblock 1
parity-algorithm left-symmetric
chunk-size 128

device /dev/hda1
raid-disk 0

device /dev/hdf1
raid-disk 1

device /dev/hdg1
raid-disk 2

device /dev/hdh1
raid-disk 3

device /dev/hda6
raid-disk 4

# failed-disk 0

 oryon      12.09.2004 - 22:59:10, level: 2
thanks ;)

 maniac      12.09.2004 - 22:49:17, level: 1

raiddev /dev/md1
raid-level 5
nr-raid-disks 4
nr-spare-disks 0
persistent-superblock 1
parity-algorithm left-symmetric
chunk-size 128

device /dev/hda1
raid-disk 0

device /dev/hdf1
raid-disk 1

device /dev/hdg1
raid-disk 2

device /dev/hdh1
raid-disk 3

# failed-disk 0

       06.09.2004 - 21:08:38, level: 1
did resize2fs take those two hours, or raidreconf?
I assume the resize

 maniac      06.09.2004 - 23:07:27, level: 2
raidreconf took roughly two hours (for the array of 10GB disks)
resize2fs takes about as long as fsck
on the big array of 150GB disks, raidreconf took 13 hours and resize2fs 30 minutes

 maniac      06.09.2004 - 01:52:57, level: 1
to check: what happens when the added disk has a different capacity (larger, or even smaller..)
to check: what happens if the power dies during the rebuild